Monday, November 24, 2014

How Neutral Is the Net?


Earlier this month, President Obama asked the U. S. Federal Communications Commission (FCC) to classify the Internet as a public utility in order to preserve net neutrality.  While in principle the FCC is an independent regulatory authority, it usually takes the President seriously, and this proposed action led to both cheers and boos. 

The cheering came from mostly liberal observers who see threats to the Internet coming from Internet service providers (ISPs), which have expressed a desire to discriminate (either favorably or unfavorably) among the traffic they carry.  One form of discrimination that has come up for discussion is that a big outfit such as Google or Facebook would pay ISPs for preferential treatment—a "fast lane" on the Internet so their websites would load faster than everyone else's.  Another idea, one that Comcast actually tried to implement a few years ago, is that certain types of Internet services that hog bandwidth (such as file sharing of music and videos) could be artificially slowed or discriminated against.  In that case, the FCC told Comcast to quit discriminating, and it did.  But more recently, similar attempts on the part of the FCC to enforce net neutrality have been struck down by federal courts, which said that the FCC doesn't have the legal authority to regulate the Internet in that way.  Hence the President's call to reclassify the Internet as a Title II public utility, Title II being the section of the FCC's enabling legislation that was originally intended to cover things like the telephone network.

And that leads to the boos, coming mainly from conservatives who see danger in letting the FCC treat the Internet basically the same way it treats the phone network.  Hidden on your phone bill is a little item called the Universal Service Fee.  On my cellphone bill it's $2.22 a month.  It was originally intended to provide subsidies for rural telephone service, but like most government fees and taxes, once it was planted as a tiny seed it put down roots and is now a mighty oak of revenue for the FCC, which supports itself entirely on fees.  If the phone network were not classified under Title II, the FCC could not assess this fee.  But such fees can be charged to a Title II service, which the Internet would become if the FCC does what the President asked it to.  That doesn't mean we would instantly start paying fees as soon as the FCC reclassified the Internet, but it does mean that the FCC would have the legal right to impose them.

From the viewpoint of consumers, it's hard to make an argument that a non-neutral net would be anything but bad.  The net (so to speak) effect of a non-neutral net would be to restrict access to something or other—either the firms that couldn't afford the extra fees that the ISPs want to charge the Googles for fast-lane services, or the types of services that cause ISPs headaches such as certain file-sharing activities.  But how neutral is the net today?

The picture is sometimes painted of a happy, absolutely free Internet world where equality reigns, versus the dismal, corporate-dominated, few-rich-among-many-poor non-neutral Internet that the liberals warn us may happen if we don't guard net neutrality.  The facts are otherwise.  Right now the Internet is a great deal less neutral than it used to be.  If you don't belong to Facebook, for instance (as I don't), that world within a world of social media is largely closed to you.  This has come about not because of anything an ISP has done, but because Facebook, in order to operate, requires certain information from you before you join, and hopes that your signing up, and the Facebook profile that results, will attract other users.  Many of the various Google accounts and services work the same way.  My point is that there are huge regions of the Internet that are closed to you unless you pony up something to get into them (not necessarily cash), which is basically what the net-neutrality advocates say will happen unless we preserve net neutrality.  But it already happens.

And what about people who live in areas that have slow or no access to the Internet?  It's not neutral to them.  Nobody has gone so far as to say every citizen of the U. S. has a right to X megabits per second access to the Internet.  But there was a time when the idea that everyone should have access to a telephone was a radical notion that telephone companies fought against, until the Bell System decided to join instead of fight and willingly put itself under the supervision of government authorities in exchange for promoting universal access. 

As I blogged in this space a few years ago, when you have a large network that thrives on maximizing the number of people connected to it, any artificial attempt to limit that access damages the system.  And over time, most such systems have ways of figuring this out, and tend to rid themselves of such restrictions.  But government fees and regulations are another matter.  It took years of court battles to free up the phone system from the old-style regulated monopoly pattern that was appropriate to the technology of 1945, but by 1980 was outmoded and needed to change. 

By and large, the Internet has stayed fairly neutral, not so much because the players all have a principled commitment to net neutrality, but because restrictions that move it in the non-neutral direction tend to harm the system as a whole.  My own inclination is to leave things more or less alone, rather than reclassifying the Internet into a category that would make it vulnerable to a whole array of regulations that might be well-intended at the time, but could become albatrosses around the neck of a technology that has so far proved to be quite agile and dynamic.  But whatever happens, we should all realize that net neutrality is an ideal that has never been completely realized in practice.

Sources:  President Obama's statement on favoring FCC action to preserve net neutrality was announced on Nov. 10, 2014, and is available at http://www.whitehouse.gov/net-neutrality.  I referred to the conservative National Journal's piece on his move at http://www.nationaljournal.com/tech/obama-s-net-neutrality-plan-could-mean-new-internet-fees-20141120.  I also referred to the Wikipedia articles on network neutrality and the Federal Communications Commission.  My blog "Will the Net Stay Neutral if Google Doesn't Want It To?" appeared on Aug. 9, 2010.

Monday, November 17, 2014

Red Vs. Blue: Politics of the Nobel Prize in Physics


This year's Nobel Prize in physics went to three Japanese scientist-engineers who developed the first practical high-efficiency blue light-emitting diodes (LEDs).  Isamu Akasaki, Hiroshi Amano, and Shuji Nakamura received the award "for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources."  Shortly after the award was announced, former University of Illinois researcher Nick Holonyak made the news by complaining publicly that the Japanese work would not have been possible without the invention of the red LED, which he and coworkers at General Electric developed in 1962.  The Nobel committee has not chosen to honor Holonyak's work with the Prize, however, and he calls this neglect "insulting."  Beyond the immediate question of whether the Nobel Foundation should recognize red LEDs as well as blue ones is the wider issue of how important such prizes are to the field of engineering in general, and how fairly they are awarded.

Alfred Nobel himself was an engineer, inventor, and entrepreneur, not a scientist.  After a French newspaper prematurely ran an obituary on him when a reporter mistook his brother's death for his own, he learned that at least one prominent news outlet considered him a "merchant of death" because of his invention of dynamite, which was already being used as a military explosive in the 1880s.  Nobel never married, and in his will he directed that the bulk of his estate be used to establish an endowment to pay for a series of annual prizes for work that benefited humanity.  Thus the Nobel Prizes were born.

The Nobel Prize in Physics is awarded by a committee selected by the Royal Swedish Academy of Sciences, which votes as a whole on finalists selected by the committee.  Over the years, the process has worked well for the most part.  The first winner, Wilhelm Conrad Röntgen, was recognized in 1901 for his revolutionary discovery of X-rays, and both the magnitude of the discovery and his clear priority in the field went unquestioned.  But over the years, the prize has gone to a few people who in retrospect might not have been the best choice available.  For example, is there anyone today who remembers Nils Gustaf Dalén, who won the physics prize in 1912 for his "invention of automatic valves designed to be used in combination with gas accumulators in lighthouses and buoys"?  Admittedly, lighthouses and buoys were technologically important in 1912, but at a time when Einstein's discoveries were being widely recognized, you wonder what the committee was thinking.  Then again, Dalén was Swedish, and maybe his home-team advantage had something to do with it.

Anyone who has studied the history of technology knows that the twenty-words-or-less summary you read in the newspapers about any given invention is almost certainly not literally true.  For example, consider the question "Who invented the LED?"  Was it Henry Round, who when experimenting in 1907 with silicon-carbide cat-whisker radio-signal detectors at the behest of inventor (and 1909 Nobel Physics prizewinner) Guglielmo Marconi, discovered that under unpredictable conditions, the material emitted flashes of light?  Was it Russian scientist Oleg Losev, who published papers in English, French, and Russian in the 1920s describing not only experiments involving what we would now call LEDs, but a theory of why silicon carbide could emit light?  Was it James R. Biard and Gary Pittman, who, while working for Texas Instruments in 1962, patented a design for a gallium-arsenide diode that emitted infrared light?  In terms of technological significance to humanity, this discovery may outshine all the others, because the fiber-optic cables that make possible our wired world rely on infrared light emitted by direct descendants of Biard's infrared-emitting diode.  Or was it Nick Holonyak, who published the first report of a visible-light (red) LED that he developed at General Electric, also in 1962? 

And if we just stick to blue LEDs, which when combined with red and green allow the production of white light, there are others besides the 2014 Nobel prizewinners who should at least be considered.  In 1972, Stanford Ph. D. students Herb Maruska and Wally Rhines demonstrated a blue-violet LED made from magnesium-doped gallium nitride.  This was the first LED that made blue light, but it was very inefficient.  What Akasaki, Amano, and Nakamura did was to develop ways of growing epitaxial crystal layers of high-quality gallium nitride combined with other materials in a way that greatly improves the device's efficiency.  By the early 1990s, they had carried their improvements far enough so that high-brightness LEDs could hit the commercial market.  Further developments with phosphors and other techniques have finally pushed LEDs to the point that they can compete economically with older forms of electric lighting. 

I think the lesson to learn here is that the awarding of every prize, including the Nobels, is a combination of good judgment (one hopes), timing, the composition of the committee deciding on the prize, and the flukes and random effects of history and chance events.  In other words, the Nobel Prize is what you would get if you mixed God's absolutely correct insight on exactly what went on, with a lottery.  And sometimes the lottery part plays more of a role than the perfect-judgment part. 

Nick Holonyak certainly has a case.  But so does Biard (who is still alive as of this writing), and so would Maruska, Rhines, and a host of others who made various contributions of lesser importance to the long saga of the LED, which began as a gleam on a silicon-carbide radio detector in 1907. 

Sources:  I referred to reports on the 2014 Nobel Prize in physics carried by the Independent (UK) at http://www.independent.co.uk/news/uk/home-news/nobel-prize-2014-japanese-scientists-isamu-akasaki-and-hiroshi-amano-and-american-shuji-nakamura-win-physics-award-for-led-invention-9779517.html.  The same paper reported on Nick Holonyak's comments at http://www.independent.co.uk/news/science/nobel-prize-2014-inventor-of-the-red-led-hits-out-at-committee-for-overlooking-his-seminal-1960s-work-9782948.html.  I also referred to the Wikipedia articles on light-emitting diodes, its list of the Nobel laureates in physics, and its articles on Alfred Nobel and Oleg Losev. 

Monday, November 10, 2014

Yik Yak—Yuck


In discussions about the ethics of technology, every now and then you hear something like the following argument:  "Technology is neutral—it's just people who are good or bad."  Or take the bumper sticker favored by some members of the National Rifle Association:  "Guns don't kill people—people do."  While there is a measure of truth in this idea, it applies better to some technologies than to others.  It doesn't make much sense to apply it to the gas chambers used by the Nazis to kill Jews at Auschwitz, for instance.  So those who use this argument as a blanket excuse for opposing the regulation or curtailment of a certain technology should know that their case is not airtight, and needs to be considered with regard to the circumstances in which the technology is typically used.  This is especially true of the new smart-phone app called Yik Yak.

It sounds harmless enough at first.  You can download it from the Apple App Store and other places, and it runs on iOS or Android phones.  It's sort of like Twitter with a 200-character limit.  But there are two main differences.  One, it is limited to communicating within a 1.5-mile radius (by a tie-in with your phone's GPS).  Two, all posts are anonymous—no passwords, no usernames, and no way to tell who posted what.  Yik Yak is the digital equivalent of a wall waiting to be covered with graffiti.  And as you might expect, the average level of messages on Yik Yak appears to be pretty much what you'd find scribbled on a bathroom wall.
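Purely as an illustration of how such a geofence can work (Yik Yak's actual implementation is proprietary, so this is only a sketch under my own assumptions, with made-up coordinates), a 1.5-mile radius test between two GPS fixes might look something like this:

```python
import math

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius

def within_radius(lat1, lon1, lat2, lon2, radius_miles=1.5):
    """Haversine (great-circle) distance test between two GPS fixes, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    distance = 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))
    return distance <= radius_miles

# Two nearby points (a few hundred feet apart) versus points a city apart:
print(within_radius(29.8884, -97.9384, 29.8890, -97.9400))  # True
print(within_radius(29.8884, -97.9384, 30.2672, -97.7431))  # False
```

The haversine formula is the standard way to turn latitude-longitude pairs into a distance on a sphere; for a radius as small as 1.5 miles, a flat-earth approximation would work nearly as well.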

The way I found out about Yik Yak wasn't by downloading it and trying it out.  (My clamshell phone is so old it barely manages texts.)  I happened to pick up a copy of the University Star, the student paper at Texas State University, and read an editorial by a journalism major urging students not to do drugs.  And by the way, he said, it's so easy now—all you have to do is get on Yik Yak and start asking around, and presto—here comes the pusher, or dealer, or whatever they call the scumbag these days who sells illegal drugs.

Normally I don't read editorials in the student paper, because I typically disagree with 95% of whatever they say.  But here was a man-bites-dog story—a student saying that Yik Yak was leading fellow students astray.

That's not all.  Although Yik Yak is supposed to be limited to those 17 and older, the app simply asks you to certify your age.  Anybody old enough to spell and use a smart phone can register, and nowadays that means grade-schoolers.  The anonymity of the app is an open invitation to bullying, sexually themed posts, and bomb threats.  One Long Island teen found out the hard way that the purported anonymity of Yik Yak has a limit.  He posted a bomb threat, the cops presumably got a warrant and went to Yik Yak, and the company fingered its unhappy customer, who is now facing a possible jail sentence.  So much for truth in advertising.  The firm does have some legal boilerplate on its website to the effect that the only way it will break anonymity is if a duly authorized government entity asks it to.  But that can certainly happen.

Nevertheless, a lot of bad stuff can and does go on before the police have to get involved.  A Google search turns up numerous cases of cyber-bullying aided by Yik Yak.  If five or more people within your range vote a post down, it disappears—but how often is that likely to happen?  Mob psychology militates against it.  Asking a mob to transform itself into a deliberative democracy and vote bad actors off the air is like putting a pound of hamburger in front of a pack of hungry dogs and asking them to vote about fasting for Lent.

I don't often unequivocally condemn a particular technology, but Yik Yak is getting my Bonehead-App-Of-The-Year award, which I just came up with.  Putting a way of posting anonymous comments in the hands of teenagers is simply asking for trouble.  There are places for anonymity—the ballot box, for instance.  But voting is something we want to encourage.  Buying drugs, making sexual and other kinds of insults, and threatening mass destruction are things that we want to discourage—I hope there is still enough left of the tatters of Judeo-Christian civilization in U. S. culture to form a consensus on that.  And ever since the app came out last year, the firm has evidently been engaged in various types of damage control—posting warnings about misuse on their website and discouraging users from the very types of behavior that drive the app's popularity. 

I've run across this kind of insidious fraud before—websites that sell ready-made essays and homework solutions to students and warn that "these documents are for reference only."  Corporations are increasingly immune to moral arguments and tend to respond only to threats of legal action, either by civil lawsuits or by criminal-law regulation.  With the heightened sensitivity we have these days to the problem of bullying, it would not surprise me if a clever lawyer filed a class-action lawsuit on behalf of parents whose children have been abused by means of Yik Yak.  Failing that, I would hope that some regulatory agency—the FCC comes to mind—would step in to tell Yik Yak either to change their rules radically or get lost.  In today's deregulated political atmosphere, the latter is unlikely, and the lawsuit route requires the prospect of a large financial settlement to get enough high-dollar lawyers motivated.  Unfortunately, Yik Yak is a small startup with only a few million dollars of funding, and so the lawsuit might have to wait till a big company like Google swallows it up. 

But Google's code of ethics—"Don't be evil"—would presumably make them hesitate before getting mixed up in a technology that panders so easily to the worse angels—in other words, devils—of our nature.  So let's hope that Yik Yak either gets buried under a pile of lawsuits and is never heard from again, or even better, the people in charge of it realize that they've created a monster, and drive a digital stake through its heart.

Sources:  The editorial about drug use and Yik Yak I read was written by Rivers Wright and posted on the University Star website at http://star.txstate.edu/node/2817.  I referred to articles on Yik Yak from several news sources.  The story of the Long Island teenager was carried by WPIX-TV, New York City, on their website at 
http://pix11.com/2014/09/16/li-teen-now-behind-bars-learns-that-yik-yak-is-not-anonymous-after-all/.  Internet security expert Tim Woda warns parents about Yik Yak at the website http://resources.uknowkids.com/blog/why-yik-yak-is-the-most-dangerous-app-you-have-never-heard-of.  I also referred to the Wikipedia articles on Yik Yak and Auschwitz. 

Monday, November 03, 2014

Space Flight: A Risky Business


The commercial space flight business suffered a one-two punch last week.  On Tuesday, an unmanned rocket carrying supplies for the International Space Station and launched by Orbital Sciences Corp. failed a few seconds after launch, falling back to the launch pad and exploding to make a spectacular nighttime video that must have been shown on every TV outlet in the U. S.  It was the company's third commercial launch under a contract to supply the Space Station, whose residents will now have to wait a while longer for the next garbage pickup.  (A side benefit of the long-distance unmanned deliveries is that the Space Station folks can cram the vehicle with their trash and let it burn up in the atmosphere.)

And then Friday, Virgin Galactic's SpaceShipTwo, manned by two experienced test pilots, broke up high above the Mojave Desert in California, killing pilot Michael Alsbury, 39, and injuring the other, Peter Siebold.  The crash scattered debris over a five-mile-long area and initiated an investigation by both Virgin Galactic and the U. S. National Transportation Safety Board which could take as long as a year.

Any time anyone is injured or killed in a space-related accident, engineers are obliged to get to the bottom of the technical whys and hows of the mishap.  But beyond the specific technical causes of these particular accidents, tragic as they were, is the question of how reliable commercial manned space flight is going to be.  And a little history can throw some light on that question.

A man named Ed Kyle maintains an extensive statistical study of space-flight launches at a website called www.spacelaunchreport.com.  He compiles both unmanned and manned flights, although in the nature of the business, the vast majority of launches are unmanned.  Bearing that in mind, we can look at a convenient summary table he provides of launch success rates by decade, from the infancy of space flight in the 1950s through the 2010s.

America's first attempt to launch a satellite into orbit, the Vanguard launch on Dec. 6, 1957, was a highly publicized failure, exploding after reaching the breathtaking altitude of four feet (1.2 meters).  And overall, only about half the launch attempts by all parties in the 1950s were successful.  But aerospace engineers began the long climb up the learning curve, and by the 1970s the average success rate was around 95%, where it has hovered ever since.  In the last two complete years, for example (2012 and 2013), Kyle logged 159 launch attempts and 9 failures among them, for a failure rate (for the pessimists among us) of about 5.7%.  So even today, forty years after the space-rocket business reached maturity, there is about one chance in twenty that your satellite will not end up in space, but in a watery or earthy grave.
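As a quick back-of-the-envelope check on that one-in-twenty figure, using only the 2012–2013 totals quoted above:

```python
# Launch totals for 2012-2013 as quoted above, from Ed Kyle's Space Launch Report
attempts = 159
failures = 9

failure_rate = failures / attempts   # roughly one launch in twenty lost
success_rate = 1 - failure_rate      # consistent with the ~95% plateau since the 1970s

print(f"{failure_rate:.1%} of launches failed; {success_rate:.1%} succeeded")
print(f"That is about one failure per {attempts / failures:.0f} launches")
```

Nine failures in 159 tries works out to a bit under 6 percent, which is why "about one chance in twenty" is a fair way to summarize the modern record.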

Despite all the fuss about NASA turning space flight over to commercial interests, satellite launches have been commercial transactions for decades.  And it appears that a failure rate of 5% is an acceptable level to support a generally prospering space industry.  The companies and their insurers can handle that level of failure and still accomplish what they want to do, most of the time. 

But launching cans of beans for a space station, and launching people who have paid a quarter of a million dollars for the ride (as prospective passengers in the Virgin Galactic rocket have coughed up in advance), are two different propositions.  Commercial airlines would not have many customers if it were well known that one out of every twenty flights was going to crash.  It took the business of aviation twenty years or so to be safe enough to offer commercial passenger service, but by 1930 or so the risks of commercial scheduled flights to the individual passenger were largely imaginary, and today you take more of a risk of dying on your drive to the airport than you take in the air. 

It may be harder for the space-flight engineers to drive their failure rates down to the level at which people could buy space-flight life insurance for a few dollars, like you used to be able to do for commercial aviation flights at airports.  Rocket hardware operates at the outer limits of materials science.  The engines run so hot that the nozzles of liquid-fueled engines have to be cooled continuously to keep them from melting, and the fluid dynamics of rocket-fuel combustion is still so complex that an exhaustive, essentially complete mathematical model of a rocket in flight, including vibration modes and so on, is quite possibly still beyond our abilities.  So rocket designs are a combination of science-based modeling and engineering intuition, added to a large measure of experience of what has worked in the past.

I think it is significant that the Virgin Galactic flight was using a different type of fuel than they had used in previous flights.  Such a major change, even if tried out on the ground with similar hardware, can lead to unpredictable results, and may turn out to have contributed to the disastrous crash of SpaceShipTwo.  Rocket engineers, at least the successful ones, tend to be highly conservative in their designs.  Anyone who has seen both an old V-2 rocket engine in a museum and the massive Apollo engines used to launch men to the moon can see that Wernher von Braun found something that worked at Peenemünde, Germany in the 1930s, and stuck with it all the way through the 1960s.

Such conservatism is increasingly rare among engineers in general today, influenced by innovations in hardware and software which happen so fast that you can squeeze an entire product life cycle, from introduction to obsolescence, into six months.  But the adage "if it ain't broke, don't fix it" applies in spades to space travel.  And as we find out in the coming months what caused SpaceShipTwo's failure, we may find that experimenting with a different fuel was a bad idea. 

Unless we colonize the Moon or Mars to a great extent, space travel will always be an exotic, low-volume business, like tours to the Antarctic are today.  And it is by no means clear to me that even the super-rich will be willing to take the kind of risks that simple statistics tell us space travel entails—at least, not for quite a while yet.

Sources:  Ed Kyle maintains his Space Launch Report at http://www.spacelaunchreport.com.  I referred to an article carried by the BBC on the Virgin Galactic disaster at http://www.bbc.com/news/world-us-canada-29869070
and by www.space.com on the Orbital Sciences launch failure at http://www.space.com/27615-antares-rocket-explosion-timeline.html.  I also referred to the Wikipedia article on the Vanguard (rocket).