Monday, November 24, 2014

How Neutral Is the Net?

Earlier this month, President Obama asked the U. S. Federal Communications Commission (FCC) to classify the Internet as a public utility in order to preserve net neutrality.  While in principle the FCC is an independent regulatory authority, it usually takes the President seriously, and this proposed action led to both cheers and boos. 

The cheering came from mostly liberal observers who see threats to the Internet coming from internet service providers (ISPs), who have expressed a desire to discriminate (either favorably or unfavorably) among their customers.  One form of discrimination that has come up for discussion is that a big outfit such as Google or Facebook would pay ISPs for preferential treatment—a "fast lane" on the Internet so their websites would work faster compared to everyone else's.  Another idea, one that Comcast actually tried to implement a few years ago, is that certain types of Internet services that hog bandwidth (such as file sharing of music and videos) could be artificially slowed or discriminated against.  In that case, the FCC told Comcast to quit discriminating, and it did.  But more recently, similar attempts on the part of the FCC to enforce net neutrality have been struck down by federal courts, which said that the FCC doesn't have the legal authority to regulate the Internet in that way.  Hence the President's call to reclassify the Internet as a Title II public utility, which refers to a section in the FCC's enabling legislation that was originally intended to cover things like the telephone network.

And that leads to the boos, coming mainly from conservatives who see danger in letting the FCC treat the Internet basically the same way it treats the phone network.  Hidden on your phone bill is a little item called the Universal Service Fee.  On my cellphone bill it's $2.22 a month.  It was originally intended to provide subsidies for rural telephone service, but like most government fees and taxes, once it was planted as a tiny seed it put down roots and is now a mighty oak of revenue for the FCC, which supports itself entirely on fees.  If the phone network were not classified under Title II, the FCC could not assess this fee.  But such fees can be charged to a Title II service, which the Internet would become if the FCC does what the President asked it to.  That doesn't mean we would instantly start paying fees as soon as the FCC reclassified the Internet, but it does mean that the agency would have the legal right to impose them.

From the viewpoint of consumers, it's hard to make an argument that a non-neutral net would be anything but bad.  The net (so to speak) effect of a non-neutral net would be to restrict access to something or other—either the firms that couldn't afford the extra fees that the ISPs want to charge the Googles for fast-lane services, or the types of services that cause ISPs headaches such as certain file-sharing activities.  But how neutral is the net today?

The picture is sometimes painted of a happy, absolutely free Internet where equality reigns, versus a dismal, corporate-dominated, few-rich-among-many-poor non-neutral Internet that liberals warn us may come about if we don't guard net neutrality.  The facts are otherwise.  Right now the Internet is a great deal less neutral than it used to be.  If you don't belong to Facebook, for instance (as I don't), that world within a world of social media is largely closed to you.  This has come about not because of anything an ISP has done, but because Facebook, in order to operate, requires certain information from you before you join, and hopes that your signing up and the resulting Facebook profile will attract other viewers.  Many of the various Google accounts and services work the same way.  My point is that there are huge regions of the Internet that are closed to you unless you pony up something to get into them (not necessarily cash), which is basically what net-neutrality advocates say will happen unless we preserve net neutrality.  But it already happens.

And what about people who live in areas that have slow or no access to the Internet?  It's not neutral to them.  Nobody has gone so far as to say every citizen of the U. S. has a right to X megabits per second access to the Internet.  But there was a time when the idea that everyone should have access to a telephone was a radical notion that telephone companies fought against, until the Bell System decided to join instead of fight and willingly put itself under the supervision of government authorities in exchange for promoting universal access. 

As I blogged in this space a few years ago, when you have a large network that thrives on maximizing the number of people connected to it, any artificial attempt to limit that access damages the system.  And over time, most such systems have ways of figuring this out, and tend to rid themselves of such restrictions.  But government fees and regulations are another matter.  It took years of court battles to free up the phone system from the old-style regulated monopoly pattern that was appropriate to the technology of 1945, but by 1980 was outmoded and needed to change. 

By and large, the Internet has stayed fairly neutral, not so much because the players all have a principled commitment to net neutrality, but because restrictions that move it in the non-neutral direction tend to harm the system as a whole.  My own inclination is to let things more or less alone, rather than reclassifying the Internet into a category that would make it vulnerable to a whole array of regulations that might be well-intended at the time, but could become albatrosses around the neck of a technology that has so far proved to be quite agile and dynamic.  But whatever happens, we should all realize that net neutrality is an ideal that has never been completely realized in practice.

Sources:  President Obama's statement on favoring FCC action to preserve net neutrality was announced on Nov. 10, 2014, and is available at  I referred to the conservative National Journal's piece on his move at  I also referred to the Wikipedia articles on network neutrality and the Federal Communications Commission.  My blog "Will the Net Stay Neutral if Google Doesn't Want It To?" appeared on Aug. 9, 2010.

Monday, November 17, 2014

Red Vs. Blue: Politics of the Nobel Prize in Physics

This year's Nobel Prize in physics went to three Japanese scientist-engineers who developed the first practical high-efficiency blue light-emitting diodes (LEDs).  Isamu Akasaki, Hiroshi Amano, and Shuji Nakamura received the award "for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources."  Shortly after the award was announced, University of Illinois researcher Nick Holonyak made the news by complaining publicly that the Japanese work would not have been possible without the invention of the red LED, which he and coworkers at General Electric developed in 1962.  The Nobel committee has not chosen to honor Holonyak's work with the Prize, however, and he calls this neglect "insulting."  Beyond the immediate question of whether the Nobel Foundation should recognize red LEDs as well as blue ones is the wider issue of how important such prizes are to the field of engineering in general, and how fairly they are awarded.

Alfred Nobel himself was an engineer, inventor, and entrepreneur, not a scientist.  After a French newspaper prematurely ran an obituary on him when a reporter mistook his brother's death for his own, he learned that at least one prominent news outlet considered him a "merchant of death" because of his invention of dynamite, which was already being used as a military explosive in the 1880s.  Nobel never married, and in his will he directed that the bulk of his estate be used to establish an endowment to pay for a series of annual prizes for work that benefited humanity.  Thus the Nobel Prizes were born.

The Nobel Prize in Physics is awarded by a committee selected by the Royal Swedish Academy of Sciences, which votes as a whole on finalists selected by the committee.  Over the years, the process has worked well for the most part.  The first winner, Wilhelm Conrad Röntgen, was recognized in 1901 for his revolutionary discovery of X-rays, and both the magnitude of the discovery and his clear priority in the field went unquestioned.  But over the years, the prize has gone to a few people who in retrospect might not have been the best choice available.  For example, is there anyone today who remembers Nils Gustaf Dalén, who won the physics prize in 1912 for his "invention of automatic valves designed to be used in combination with gas accumulators in lighthouses and buoys"?  Admittedly, lighthouses and buoys were technologically important in 1912, but at a time when Einstein's discoveries were being widely recognized, you wonder what the committee was thinking.  Then again, Dalén was Swedish, and maybe his home-team advantage had something to do with it.

Anyone who has studied the history of technology knows that the twenty-words-or-less summary you read in the newspapers about any given invention is almost certainly not literally true.  For example, consider the question "Who invented the LED?"  Was it Henry Round, who when experimenting in 1907 with silicon-carbide cat-whisker radio-signal detectors at the behest of inventor (and 1909 Nobel Physics prizewinner) Guglielmo Marconi, discovered that under unpredictable conditions, the material emitted flashes of light?  Was it Russian scientist Oleg Losev, who published papers in English, French, and Russian in the 1920s describing not only experiments involving what we would now call LEDs, but a theory of why silicon carbide could emit light?  Was it James R. Biard and Gary Pittman, who, while working for Texas Instruments in 1962, patented a design for a gallium-arsenide diode that emitted infrared light?  In terms of technological significance to humanity, this discovery may outshine all the others, because the fiber-optic cables that make possible our wired world rely on infrared light emitted by direct descendants of Biard's infrared-emitting diode.  Or was it Nick Holonyak, who published the first report of a visible-light (red) LED that he developed at General Electric, also in 1962? 

And if we just stick to blue LEDs, which when combined with red and green allow the production of white light, there are others besides the 2014 Nobel prizewinners who should at least be considered.  In 1972, Stanford Ph. D. students Herb Maruska and Wally Rhines demonstrated a blue-violet LED made from magnesium-doped gallium nitride.  This was the first LED that made blue light, but it was very inefficient.  What Akasaki, Amano, and Nakamura did was to develop ways of growing epitaxial crystal layers of high-quality gallium nitride combined with other materials in a way that greatly improves the device's efficiency.  By the early 1990s, they had carried their improvements far enough so that high-brightness LEDs could hit the commercial market.  Further developments with phosphors and other techniques have finally pushed LEDs to the point that they can compete economically with older forms of electric lighting. 

I think the lesson to learn here is that the awarding of every prize, including the Nobels, is a combination of good judgment (one hopes), timing, the composition of the committee deciding on the prize, and the flukes and random effects of history and chance events.  In other words, the Nobel Prize is what you would get if you mixed God's absolutely correct insight on exactly what went on, with a lottery.  And sometimes the lottery part plays more of a role than the perfect-judgment part. 

Nick Holonyak certainly has a case.  But so does Biard (who is still alive as of this writing), and so would Maruska, Rhines, and a host of others who made various contributions of lesser importance to the long saga of the LED, which began as a gleam on a silicon-carbide radio detector in 1907. 

Sources:  I referred to reports on the 2014 Nobel Prize in physics carried by the Independent (UK) at  The same paper reported on Nick Holonyak's comments at  I also referred to the Wikipedia articles on light-emitting diodes, its list of the Nobel laureates in physics, and its articles on Alfred Nobel and Oleg Losev. 

Monday, November 10, 2014

Yik Yak—Yuck

In discussions about the ethics of technology, every now and then you hear something like the following argument:  "Technology is neutral—it's just people who are good or bad."  Or take the bumper sticker favored by some members of the National Rifle Association:  "Guns don't kill people—people do."  While there is a measure of truth in this idea, it applies better to some technologies than to others.  It doesn't make much sense to apply it to the gas chambers used by the Nazis to kill Jews at Auschwitz, for instance.  So those who use this argument as a blanket excuse for opposing the regulation or curtailment of a certain technology should know that their case is not airtight, and needs to be considered with regard to the circumstances in which the technology is typically used.  This is especially true of the new smart-phone app called Yik Yak.

It sounds harmless enough at first.  You can buy it at the Apple iTunes store and other places, and it runs on iOS or Android phones.  It's sort of like Twitter with a 200-character limit.  But there are two main differences.  One, it is limited to communicating within a 1.5-mile radius (by a tie-in with your phone's GPS system).  Two, all posts are anonymous—no passwords, no usernames, and no way to tell who posted what.  Yik Yak is the digital equivalent of a wall waiting to be covered with graffiti.  And as you might expect, the average level of messages on Yik Yak appears to be pretty much what you'd find scribbled on a bathroom wall. 
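Yik Yak has never published how its geofence works, but a 1.5-mile GPS radius check boils down to a standard great-circle distance test between two coordinates.  Here is a minimal sketch; the haversine formula itself is well established, but the function name, Earth-radius constant, and example coordinates are my own illustration, not anything from Yik Yak's actual code:

```python
from math import radians, sin, cos, asin, sqrt

def within_radius(lat1, lon1, lat2, lon2, radius_miles=1.5):
    """Haversine (great-circle) distance test between two GPS fixes."""
    earth_radius_miles = 3958.8  # mean Earth radius
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    # Haversine formula: a is the square of half the chord length
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    distance = 2 * earth_radius_miles * asin(sqrt(a))
    return distance <= radius_miles

# Two points about a mile apart (same longitude) pass the check;
# points a few miles apart do not.
print(within_radius(30.0, -97.0, 30.0144, -97.0))  # True  (~1 mile)
print(within_radius(30.0, -97.0, 30.05, -97.0))    # False (~3.5 miles)
```

In other words, the app simply draws an invisible circle around your phone's reported position and shows you only the posts whose reported positions fall inside it.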

The way I found out about Yik Yak wasn't by buying it and trying it out.  (My clamshell phone is so old it barely manages texts.)  I happened to pick up a copy of the University Star, the student paper at Texas State University, and read an editorial by a journalism major urging students not to do drugs.  And by the way, he said, it's so easy now—all you have to do is get on Yik Yak and start asking around, and presto—here comes the pusher, or dealer, or whatever they call the scumbag these days who sells illegal drugs. 

Normally I don't read editorials in the student paper, because I typically disagree with 95% of whatever they say.  But here was a man-bites-dog story—a student saying that Yik Yak was leading fellow students astray.

That's not all.  Although Yik Yak is supposed to be limited to those 17 and older, the app simply asks you to certify your age.  Anybody old enough to spell and use a smart phone can register, and nowadays that means grade-schoolers.  The anonymity of the app is an open invitation to bullying, sexually themed texts, and bomb threats.  One Long Island teen found out the hard way that the purported anonymity of Yik Yak has a limit.  He posted a bomb threat, the cops presumably got a warrant and went to Yik Yak, and the company fingered their unhappy customer, who is now facing a possible jail sentence.  So much for truth in advertising.  The firm does have some legal boilerplate on their website to the effect that the only way they will break anonymity is if a duly authorized government entity asks them to.  But that can certainly happen.

Nevertheless, a lot of bad stuff can and does go on before the police have to get involved.  A Google search turns up numerous cases of cyber-bullying aided by Yik Yak.  If five or more people within your range vote your posts down, you disappear—but how often is that likely to happen?  Mob psychology dictates against it.  Asking a mob to transform itself into a deliberative democracy and vote bad actors off the air is like putting a pound of hamburger in front of a pack of hungry dogs and asking them to vote about fasting for Lent. 

I don't often unequivocally condemn a particular technology, but Yik Yak is getting my Bonehead-App-Of-The-Year award, which I just came up with.  Putting a way of posting anonymous comments in the hands of teenagers is simply asking for trouble.  There are places for anonymity—the ballot box, for instance.  But voting is something we want to encourage.  Buying drugs, making sexual and other kinds of insults, and threatening mass destruction are things that we want to discourage—I hope there is still enough left of the tatters of Judeo-Christian civilization in U. S. culture to form a consensus on that.  And ever since the app came out last year, the firm has evidently been engaged in various types of damage control—posting warnings about misuse on their website and discouraging users from the very types of behavior that drive the app's popularity. 

I've run across this kind of insidious fraud before—websites that sell ready-made essays and homework solutions to students and warn that "these documents are for reference only."  Corporations are increasingly immune to moral arguments and tend to respond only to threats of legal action, either by civil lawsuits or by criminal-law regulation.  With the heightened sensitivity we have these days to the problem of bullying, it would not surprise me if a clever lawyer filed a class-action lawsuit on behalf of parents whose children have been abused by means of Yik Yak.  Failing that, I would hope that some regulatory agency—the FCC comes to mind—would step in to tell Yik Yak either to change their rules radically or get lost.  In today's deregulated political atmosphere, the latter is unlikely, and the lawsuit route requires the prospect of a large financial settlement to get enough high-dollar lawyers motivated.  Unfortunately, Yik Yak is a small startup with only a few million dollars of funding, and so the lawsuit might have to wait till a big company like Google swallows it up. 

But Google's code of ethics—"Don't be evil"—would presumably make them hesitate before getting mixed up in a technology that panders so easily to the worse angels—in other words, devils—of our nature.  So let's hope that Yik Yak either gets buried under a pile of lawsuits and is never heard from again, or even better, the people in charge of it realize that they've created a monster, and drive a digital stake through its heart.

Sources:  The editorial about drug use and Yik Yak I read was written by Rivers Wright and posted on the University Star website at  I referred to articles on Yik Yak from several news sources.  The story of the Long Island teenager was carried by WPIX-TV, New York City, on their website at  Internet security expert Tim Woda warns parents about Yik Yak at the website  I also referred to the Wikipedia articles on Yik Yak and Auschwitz. 

Monday, November 03, 2014

Space Flight: A Risky Business

The commercial space flight business suffered a one-two punch last week.  On Tuesday, an unmanned rocket carrying supplies for the International Space Station and launched by Orbital Sciences Corp. failed a few seconds after launch, falling back to the launch pad and exploding to make a spectacular nighttime video that must have been shown on every TV outlet in the U. S.  It was the company's third commercial launch under a contract to supply the Space Station, whose residents will now have to wait a while longer for the next garbage pickup.  (A side benefit of the long-distance unmanned deliveries is that the Space Station folks can cram the vehicle with their trash and let it burn up in the atmosphere.) 

And then Friday, Virgin Galactic's SpaceShipTwo, manned by two experienced test pilots, broke up high above the Mojave Desert in California, killing pilot Michael Alsbury, 39, and injuring the other, Peter Siebold.  The crash scattered debris over a five-mile-long area and initiated an investigation by both Virgin Galactic and the U. S. National Transportation Safety Board which could take as long as a year.

Any time anyone is injured or killed in a space-related accident, engineers are obliged to get to the bottom of the technical whys and hows of the mishap.  But beyond the specific technical causes of these particular accidents, tragic as they were, is the question of how reliable commercial manned space flight is going to be.  And a little history can throw some light on that question.

A man named Ed Kyle maintains an extensive statistical study of space-flight launches at a website called Space Launch Report.  He compiles both unmanned and manned flights, although in the nature of the business, the vast majority of launches are unmanned.  Bearing that in mind, we can look at a convenient summary table he provides of success rates of launches by decade, going all the way back to the infancy of space flight in the 1950s and up through the 2010s. 

America's first attempt to launch a satellite into orbit, the Vanguard launch on Dec. 6, 1957, was a highly publicized failure, exploding after reaching the breathtaking altitude of four feet (1.2 meters).  And overall, only about half the launch attempts by all parties in the 1950s were successful.  But aerospace engineers began the long climb up the learning curve, and by the 1970s the average success rate was around 95%, where it has hovered ever since.  In the last two complete years, for example (2012 and 2013), Kyle logged 159 launch attempts and 9 failures among them, for a failure rate (for the pessimists among us) of 5.7%.  So even today, forty years after the space-rocket business reached maturity, there is about one chance in twenty that your satellite will not end up in space, but in a watery or earthy grave.
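Kyle's 2012-2013 tallies can be checked with a few lines of arithmetic; this sketch just recomputes the rates from the 159-attempt, 9-failure totals cited above:

```python
# Launch statistics for 2012-2013, per Ed Kyle's Space Launch Report tallies.
attempts = 159
failures = 9

failure_rate = failures / attempts   # fraction of launches lost
success_rate = 1 - failure_rate

print(f"Failure rate: {failure_rate:.1%}")   # about 5.7%
print(f"Success rate: {success_rate:.1%}")   # about 94.3%
print(f"Roughly 1 in {round(attempts / failures)} launches fails")  # 1 in 18
```

One failure in eighteen attempts is close enough to "one chance in twenty" for the rough purposes of this discussion.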

Despite all the fuss about NASA turning space flight over to commercial interests, satellite launches have been commercial transactions for decades.  And it appears that a failure rate of 5% is an acceptable level to support a generally prospering space industry.  The companies and their insurers can handle that level of failure and still accomplish what they want to do, most of the time. 

But launching cans of beans for a space station, and launching people who have paid a quarter of a million dollars for the ride (as prospective passengers in the Virgin Galactic rocket have coughed up in advance), are two different propositions.  Commercial airlines would not have many customers if it were well known that one out of every twenty flights was going to crash.  It took the business of aviation twenty years or so to be safe enough to offer commercial passenger service, but by 1930 or so the risks of commercial scheduled flights to the individual passenger were largely imaginary, and today you take more of a risk of dying on your drive to the airport than you take in the air. 

It may be harder for the space-flight engineers to drive their failure rates down to the level at which people could buy space-flight life insurance for a few dollars, like you used to be able to do for commercial aviation flights at airports.  Rocket hardware operates at the outer limits of materials science.  The engines run so hot that liquid-fueled nozzles have to be cooled continuously to keep them from melting, and the fluid dynamics of the combustion of rocket fuel is still so complex that an exhaustive, essentially complete mathematical model of a rocket in flight, including vibration modes and so on, is quite possibly still beyond our abilities.  So rocket designs are a combination of science-based modeling and engineering intuition, added to a large measure of experience of what has worked in the past.

I think it is significant that the Virgin Galactic flight was using a different type of fuel than they had used in previous flights.  Such a major change, even if tried out on the ground with similar hardware, can lead to unpredictable results, and may turn out to have contributed to the disastrous crash of SpaceShipTwo.  Rocket engineers, at least the successful ones, tend to be highly conservative in their designs.  Anyone who has seen both an old V-2 rocket engine in a museum and the massive Apollo engines used to launch men to the moon can see that Wernher von Braun found something that worked at Peenemunde, Germany, in the late 1930s, and stuck with it all the way through the 1960s. 

Such conservatism is increasingly rare among engineers in general today, influenced by innovations in hardware and software which happen so fast that you can squeeze an entire product life cycle, from introduction to obsolescence, into six months.  But the adage "if it ain't broke, don't fix it" applies in spades to space travel.  And as we find out in the coming months what caused SpaceShipTwo's failure, we may find that experimenting with a different fuel was a bad idea. 

Unless we colonize the Moon or Mars to a great extent, space travel will always be an exotic, low-volume business, like tours to the Antarctic are today.  And it is by no means clear to me that even the super-rich will be willing to take the kind of risks that simple statistics tell us space travel entails—at least, not for quite a while yet.

Sources:  Ed Kyle maintains his Space Launch Report at  I referred to an article carried by the BBC on the Virgin Galactic disaster at and by on the Orbital Sciences launch failure at  I also referred to the Wikipedia article on the Vanguard (rocket). 

Monday, October 27, 2014

Do Not Sit Here: The Exploding Airbag Recall

Airbags are a required safety feature on cars sold in the U. S. since at least 1998.  They have undoubtedly saved lives, especially in situations where the driver or passengers neglected to use seatbelts.  So whatever else we say about them, we should bear in mind that overall, cars are probably safer with airbags than without them.  But only if the airbags themselves are safe.  And lately, some drivers have found that the airbag cure was much worse than the accident disease.

Over a hundred injuries and at least two fatalities have been attributed to defective airbags made by Japanese supplier Takata.  According to the New York Times, in 2009 a 33-year-old mother of three ran into a mail truck in Richmond, Virginia, and her airbag deployed.  The injuries from the wreck itself were minor.  But a piece of shrapnel from the metal canister containing the airbag explosive shot through her neck and she allegedly bled to death as a result. 

For an airbag to be an effective cushion during a collision, it has to deploy in well under a tenth of a second.  This involves creating a large volume of high-pressure gas in a short time.  The early airbags used an explosive called sodium azide, but the residue was toxic. So in the 1990s, manufacturers began to research other chemicals that would be less noxious and also allow for a smaller propellant package. 
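To see why propellant chemistry matters here, a back-of-the-envelope ideal-gas estimate helps.  Sodium azide decomposes as 2 NaN3 → 2 Na + 3 N2; the 60-liter bag volume and room-temperature fill below are illustrative assumptions of mine, not figures from any manufacturer:

```python
# Rough ideal-gas estimate of the sodium azide charge an early
# driver-side airbag needed.  Bag volume and fill conditions are
# assumed round numbers for illustration only.
R = 0.08206          # gas constant, L*atm / (mol*K)
bag_volume_L = 60.0  # assumed inflated bag volume
pressure_atm = 1.0   # assumed fill pressure
temp_K = 298.0       # room temperature

mol_N2 = (pressure_atm * bag_volume_L) / (R * temp_K)  # moles of gas needed
mol_NaN3 = mol_N2 * 2 / 3          # stoichiometry: 2 NaN3 -> 3 N2
grams_NaN3 = mol_NaN3 * 65.01      # NaN3 molar mass, g/mol

print(f"~{mol_N2:.1f} mol of N2, i.e. ~{grams_NaN3:.0f} g of sodium azide")
```

On these assumptions the charge works out to roughly a hundred grams of a toxic explosive sitting in the steering wheel, which makes clear why the industry went looking for smaller, cleaner propellant packages.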

Takata, one of the largest airbag suppliers in the world, developed a compound based largely on good old ammonium nitrate (the same chemical involved in the West, Texas explosion on April 17, 2013), along with other components designed to moderate the tendency of this substance to detonate and to absorb moisture.  The manufacture of any product involving explosives requires rigorous adherence to procedures that maintain the integrity of the ingredients all the way from the raw materials to the finished item.  But as various documents have indicated, Takata has not always been sufficiently diligent in their manufacturing processes.

As Takata has responded to inquiries by automaker customers and regulatory agencies, it has admitted to several manufacturing errors over the years.  Again according to the Times, one set of defective airbags was attributed to workers in a Mexican assembly plant who allowed moisture-sensitive explosive ingredients to sit on the plant floor too long in a humid environment.  Other documents show that rusty propellant containers and foreign objects in the propellant cans may have been responsible.  Problems with the airbags began to show up as long ago as 2004, and in a series of widening recalls in the last few months, eleven automakers have recalled over 14 million vehicles for replacement of suspect airbags made by Takata.  Many of the vehicles being recalled are in the most humid states in the U. S., which indicates that deterioration due to high humidity is the main culprit here.  Toyota has told its dealers that if the replacement airbags on a recalled vehicle are not immediately available, they should put a sticker on the dashboard next to the defective airbag.  The sticker reads "Do Not Sit Here."  Good luck with that.

This particular story comes close to home, personally.  In our Honda household we operate both a Civic and an Element.  They are very good cars, but neither has been in a major collision that set off the airbags.  For this I am grateful.  I checked their VINs (vehicle identification numbers) at a U. S. government website designed to let owners know of any recalls out on their vehicles, and hit the jackpot both times.  I don't think I'll wait for the dealer to write me.  My 89-year-old father-in-law rides in the passenger seat of the Element.  It would be a shame for a World War II U. S. Navy veteran of the Pacific theater to be cut down by a defective Japanese airbag.  But it could happen, at least until I get those airbags replaced. 

As hazards go, this one is not worth lying awake nights about, unless maybe you work for Takata or one of the affected automakers.  As long as you're not in a wreck, apparently the airbags won't spontaneously combust, and most of them appear to work properly, especially if you don't live in an area that's particularly humid (watch out, Houstonians!).  But even a few defective airbags are too many. 

We won't know for some time why it took so long to uncover the problems and do something about them.  But some contributing factors are apparent already.  First, the problem arose not in a particular automaker's design (as was the case with the GM ignition recall), but with a supplier's manufacturing process.  It is impossible to test an airbag non-destructively, so except for sample testing, which automakers may or may not do, I'm not sure how they could have caught the problem by incoming inspections of Takata's product. 

People can be injured even by airbags that work properly and have no design or manufacturing defects, so sorting out incidents that involve defective airbags from those that don't is not a trivial problem, except in the glaringly obvious cases when metal shards from the airbag tear it to ribbons and slice into passengers.  And while the automakers did the minimum required when they received word about the airbag injuries, which was to notify the National Highway Traffic Safety Administration (NHTSA) within five days, they don't have to give a lot of details.  And if the feds choose not to follow up the notification, the matter ends there, as it did for most of the last ten years.  Only when lawsuits and headlines began to pop up about the matter did the automakers start issuing recalls and pressured Takata to shape up.

I don't know what Takata's market share in the airbag industry is, but my guess is it's pretty high.  Companies that sell products to large OEM (original equipment manufacturer) firms often develop too-chummy relationships with their few customers, who in turn are reluctant to threaten to take their business elsewhere if problems arise.  It's the old monopoly problem, but in this case the consumer is harmed not by exploitative prices—I'm sure the automakers pressured Takata to keep their prices down—but by defective merchandise.  Unfortunately, there is no easy solution for this type of structural problem, except for buyers and regulators to be increasingly vigilant for signs that there is a manufacturing problem.

If you happen to drive one of the fourteen million vehicles affected by the recall, here's hoping you get your car to the dealer soon—and you get it back with something better than a "Do Not Sit Here" sticker.

Sources:  Car and Driver magazine's online edition carried a report on the recall that I referred to, at  I also referred to the New York Times article published online on Sept. 11, 2014 at  The U. S. NHTSA's VIN recall website is at

Monday, October 20, 2014

Handling Ebola Patients: An Engineering Problem

Two nurses who treated the late Ebola-virus victim Thomas Eric Duncan have been diagnosed with Ebola virus as well.  They treated him at Texas Health Presbyterian Hospital in Dallas, where he died on October 8 after traveling there from Liberia, where he acquired the virus.  Despite apparently following the protocols recommended by the U. S. Centers for Disease Control and Prevention (CDC) for dealing with Ebola patients, nurses Nina Pham and Amber Joy Vinson are now being treated for the disease as well.  Their chances are grim:  the death rate from the virus can be as high as 50%. 

Besides all that, one could be excused for believing that nothing else is going on in the U. S. right now except the Ebola virus, at least judging from the media coverage in Texas.  If there is a futures market in Clorox, now's your chance.

We are used to thinking of technology only in terms of hardware, or maybe hardware and software.  But engineering designs can center around people and their behavior too.  The elaborate protocols and procedures that integrated-circuit manufacturers follow are just as essential to making their chips as the silicon is.  A roomful of advanced medical equipment is just so much scrap metal without the people and plans and procedures that can use them effectively.  And just as machines can be well or poorly designed, so can protocols.

Let's look at two protocols.  One is posted on the CDC website under the title "Infection Prevention and Control Recommendations for Hospitalized Patients with Known or Suspected Ebola Virus Disease in U.S. Hospitals."  That's pretty clear.  What does it say about personal protective equipment for the nurses and other personnel who care for Ebola patients?  It's pretty simple:  a face mask, eye protection (goggles or a face shield), gloves, and a gown ("fluid resistant or impermeable").  I don't know about you, but if I were within a few feet of a potential source of fluid that had a good chance of giving me a deadly illness, I would want to be covered by something more substantial than a "fluid-resistant" gown.

Now, let's consider another set of protocols.  In an editorial in the Oct. 16 Austin American-Statesman, critical care physician Bryan Fisk recalls the protocol he used when he was in charge of a Biosafety Level 4 Patient Isolation Suite at Ft. Detrick, Maryland.  This was a military facility designed to handle patients with diseases as dangerous as Ebola.  What kind of personal protective equipment did they use at this facility?  "[F]ully encapsulated positive-pressure protective suits with a tethered air supply."  In other words, a diving suit without the water.  Not only were the staff trained to do all sorts of procedures—intubation, catheterization—while wearing these undoubtedly cumbersome outfits; once they left the isolation unit, they also underwent a complete chemical scrubdown while still wearing the suits, with the aid of other technicians.  And as long as they were treating the patient and for the length of the incubation period afterwards, they were confined to on-site quarters and not allowed to leave until there was no chance that they had acquired the virus.

There are reportedly about four of these types of isolation units in the U. S.  Understandably, they are more expensive than the standard emergency-room or intensive-care isolation units maintained by even the best public hospitals.  But in view of the fact that the CDC protocols, even if followed, fall far short of what the U. S. military does when dealing with Ebola-type situations, it's hard to resist the temptation to repeat an old consulting-engineer saying. 

The story goes that one day a consulting engineer gets a call from a factory manager where things are going haywire.  He flies out to the site, walks around a half hour or so, and then motions for the manager to come into a private office with him.  He sits down and says to the manager, "Your system is perfectly designed to give you the results you're getting."  In other words, you should not expect a badly designed protocol to deliver good results.

Fortunately, nurse Nina Pham has been transferred to a National Institute of Allergy and Infectious Diseases isolation unit in Bethesda, Maryland.  I was unable to find any information on the protocols for protecting healthcare workers in that unit, but one hopes they are better than the CDC's bare minimum. 

The perception of competence can be as important as actual competence.  Doctors and medical-care workers are some of the most trusted professionals in society, and when a scary thing like an Ebola case happens, the presumption is that those in charge will follow the best practices available to ensure that the disease doesn't spread.  With the failure of Texas Health Presbyterian Hospital to use adequate protocols, whether because it assumed the CDC knew what it was talking about or for some other reason, that trust has been severely damaged, and the word "panic" has started to show up in news items on the virus.  Professionals can be excessively reluctant to second-guess other professionals, but in this case it looks like it would have been better for someone in authority to order the Texas hospital to send Duncan to a military or equivalent-quality isolation unit the instant it became clear he was infected.  He might have died anyway, but we would have avoided any possibility that Ebola carriers were running around in public and flying in planes, which is the situation we face now.

Realistically, the risk of catching Ebola for the average person in the U. S. is virtually no higher than it was a month ago, which was approximately zero.  But already, serious damage has been done to the medical profession's reputation, and it will be some time before the fears of Ebola subside.  We can get there sooner if every organization involved with Ebola fully acknowledges the seriousness of the problem and spends the money and resources necessary to deal with it safely—or else admits they can't do it and defers to an organization that can. 

Sources:  Dr. Bryan Fisk's article "We need to send Ebola patients to U. S. disease-isolation facilities," appeared in the Oct. 16 edition of the Austin American-Statesman, p. A10.  The CDC's recommended protocol for Ebola appears at, and as of this writing was last updated Oct. 6.  The Dallas Morning News has a helpful timeline on Ebola in the U. S. at

Monday, October 13, 2014

Imagining Geoengineering

Okay, suppose some of the most extreme voices warning of global warming are right.  Suppose we really do face the inundation of much of the world's coastlines in a generation or two.  Even if, starting tomorrow, nobody ever burned a drop or a gram of fossil fuel ever again, the carbon dioxide now in the atmosphere might take hundreds of years to fall to pre-industrial levels.  So simply implementing restrictions on fossil fuels to reduce carbon-dioxide levels may not do the job fast enough.  What do we do in the meantime?  To use an automotive analogy, if you're going too fast and you see that the road ahead of you ends in a cliff, it might not be sufficient simply to take your foot off the gas.  You might actually have to apply the brakes.  David Keith says we ought to at least talk about applying the global-warming brakes.  But the question I have is, how could it ever get beyond talk?
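A toy calculation makes the "hundreds of years" claim concrete.  The numbers below are illustrative assumptions only (a starting level of 400 ppm and a single effective removal timescale of 200 years); the real carbon cycle has several removal processes and a much longer tail.

```python
import math

# Toy model: the excess CO2 above the pre-industrial level decays with a
# single effective time constant after emissions stop.  The timescale and
# starting concentration are assumed values for illustration; real
# carbon-cycle models use several timescales and a persistent long tail.
PRE_INDUSTRIAL_PPM = 280.0
CURRENT_PPM = 400.0        # roughly the 2014 level
TAU_YEARS = 200.0          # assumed effective removal timescale

def ppm_after(years: float) -> float:
    """CO2 concentration after `years` with zero further emissions."""
    excess = CURRENT_PPM - PRE_INDUSTRIAL_PPM
    return PRE_INDUSTRIAL_PPM + excess * math.exp(-years / TAU_YEARS)

for y in (0, 50, 100, 200, 400):
    print(f"after {y:3d} years: {ppm_after(y):.0f} ppm")
```

Even in this optimistic single-timescale sketch, concentrations stay well above pre-industrial levels for centuries after emissions cease, which is the point of the braking analogy: taking your foot off the gas isn't enough.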

Keith is a professor with appointments at both the Harvard Kennedy School, where he teaches public policy, and Harvard's School of Engineering and Applied Sciences.  An environmental engineer by training, Keith thinks that "geoengineering" ought to be considered along with reductions in fossil-fuel consumption as a way to reduce the effects of carbon dioxide in the atmosphere.  Geoengineering refers to intentional efforts to manipulate the climate.  So far, the only moderately successful geoengineering projects have been cloud-seeding efforts that arguably increased rainfall in some areas.  But Keith is talking about a worldwide effort to do something that will counteract global warming by artificially cooling the planet somehow.

Interviewed last March by the CBC (Keith is Canadian), he admitted that ideas such as spreading small sulfur particles in the stratosphere to reflect solar radiation as a way of countering global warming are a "brutally ugly technical fix."  But he thinks such geoengineering solutions should be on the table, rather than brushed aside scornfully, as they are by many environmental activists.

Let's try to imagine how such a geoengineering fix would work, not just technically, but politically.  Many of the geoengineering solutions that have been proposed are not terribly expensive, globally speaking.  We are talking about industrial quantities of sulfur or other chemicals dispersed in the upper atmosphere, but the cost in terms of the global economy is minuscule.  There is no question that such a project could be mounted by even one well-prepared industrial nation.  The question I'd like to examine is:  could the nations of the world ever reach a consensus on what geoengineering solution to adopt?
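To put "minuscule" in perspective, here is a back-of-envelope comparison.  Both dollar figures are rough assumptions for illustration, not figures from any particular study: delivery-cost estimates for stratospheric aerosols are often quoted in the low billions of dollars per year, against a world economy in the tens of trillions.

```python
# Back-of-envelope comparison of an assumed stratospheric-aerosol program
# cost to assumed world economic output.  Both numbers are rough,
# illustrative values only.
AEROSOL_PROGRAM_COST = 5e9   # assumed ~$5 billion per year
WORLD_GDP = 75e12            # assumed ~$75 trillion per year (mid-2010s)

fraction = AEROSOL_PROGRAM_COST / WORLD_GDP
print(f"Program cost as share of world GDP: {fraction:.5%}")
```

Under these assumptions the program would consume well under a hundredth of a percent of world output per year, which is why a single well-prepared nation could plausibly afford it on its own.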

If we examine the track record of united global action on the main cause of the carbon-dioxide increase, namely the use of fossil fuels, history is not encouraging.  The most significant effort in this direction is the Kyoto Protocol, adopted in 1997 as an extension of the 1992 UN Framework Convention on Climate Change; parties that ratify it agree to reduce their emissions of greenhouse gases in accordance with certain targets spelled out in the document.  While 192 parties eventually joined, some of the most significant producers of greenhouse gases either never ratified it (e. g. the U. S. A.), carried no binding targets as developing countries (e. g. China, India), or did not meet their targets (e. g. New Zealand). 

The only global environmental agreement I can recall that actually worked was the way we kept chlorofluorocarbons (CFCs) from destroying the ozone layer.  CFCs were once widely used as refrigerant fluids (e. g. under the trademark "Freon"), but in the 1970s, scientists figured out that (a) these compounds last for a long time in the atmosphere and (b) they catalyze the destruction of the important ozone layer in the stratosphere, which protects us from harmful UV radiation from the sun.  The Montreal Protocol, which went into effect in 1989, set its signatories on a path to eliminating the production of new CFCs and phasing out their use by finding alternatives.  By and large, the Montreal Protocol is a success story in international technical agreements, because most of the industrialized world signed on and actually did what it agreed to do.

Why can't we get such cooperation on the global-warming issue?  The simple answer is, it would cost more.  Telling the world economy to give up CFCs was like telling a dieter to give up the tutti-frutti milkshake he has every Shrove Tuesday.  CFCs were a minor part of the global economy compared to fossil fuels.  If we tried to implement restrictions as fast as the most alarmed voices recommend, the world simply would not comply without something approaching a global police state.  Developing nations such as China and India will not willingly forgo the advantages of wider use of fossil fuels to grow their economies.  It would take a world war and dictatorial economic domination by a single global-warming-prevention entity to make the world go on a fossil-fuel diet.  And that doesn't sound like a good tradeoff.

The thing that geoengineering proponents like David Keith have going for them is that many geoengineering proposals would cost a lot less than replacing fossil fuels with a sustainable alternative.  Whether geoengineering would work is another question, unfortunately even more complicated than the still-controversial question of exactly how bad climate change is going to get, and what adverse effects it will have in the future. 

Besides the technical issue of whether geoengineering would work, I think there is an esthetic or philosophical factor involved.  Many of those who advocate harsh restrictions on fossil-fuel use to avert further climate change seem to have bought into the "deep-green" assumption that humanity is really a net liability for Planet Earth.  Burning fossil fuels represents meddlesome tinkering with what Mother Nature was up to naturally, and geoengineering would be another step down that evil road of manipulating the environment.  Better we just fold our tents, globally and economically speaking, and go back to living off nuts and berries.  The trouble with that notion is that there would not be enough nuts and berries to go around unless we keep burning fossil fuels, or find an energy-equivalent alternative that won't bankrupt us.  Such an alternative is not yet at hand. 

I admire engineers like David Keith for thinking through important problems such as climate change to arrive at possible solutions that might actually work, at least technically.  Given the dismal track record of the Kyoto Protocol, the chances of arriving at a truly global accord to implement significant fossil-fuel reductions are vanishingly small.  If some of the more dire climate-change predictions come to pass, it might be easier to get international agreement on a geoengineering strategy than on fossil-fuel reductions, especially if the price is right.

Sources:  An article on David Keith's ideas about geoengineering appeared on March 29, 2014 on the Canadian Broadcasting Corporation's website  I also referred to Wikipedia articles on solar radiation management, the Kyoto Protocol, and chlorofluorocarbons.