Monday, October 29, 2012

Prison Terms for Careless Words: The L’Aquila Seismology Verdict

 
Suppose you are a citizen of the medieval Italian town of L’Aquila.  Your city has survived several earthquakes in its history, including ones that led to considerable loss of life in 1461 and 1703.  In late March of 2009, you notice that there have been small tremors lately that perhaps rattle a dish or two, but this is nothing particularly unusual in highly seismic Italy.  However, a lab technician in a town thirty miles away garners some publicity when he announces that a method he’s come up with to predict earthquakes is telling him there’s going to be a big one soon.  You wonder whether it will be worth the trouble to sleep outside, which is something you have done on occasion when the temblors get severe.

Then you tune in to a news report of an announcement by a seven-member expert commission made up of seismologists, engineers, and a government official.  The panel's conclusion is that the tremors over the past four months pose "no danger."  One of the panelists even says that "in fact it is a favorable situation, that is to say a continuous discharge of energy."  This makes you feel better, and you decide to continue sleeping indoors.

Six days later, at 3:30 in the morning on April 6, a 5.8-magnitude earthquake hits L’Aquila.  You survive, but many people are killed, about 300 in all, and hundreds of buildings of both medieval and modern construction either collapse or are severely damaged.  Now the question:  what should be done about that panel of experts?

The Italian legal system has answered that question.  On Oct. 22 of this year, all seven members of the panel were convicted of manslaughter and sentenced to six years in prison.  This verdict has made headlines around the world for a number of reasons.

Italian prosecutors insist that the scientists and engineers are not being charged with failing to predict the earthquake.  Everyone (or nearly everyone) acknowledges that earthquake prediction is still an inexact science compared to, say, weather forecasting.  But except for relatively rare fatal storms such as hurricanes and tornadoes, one's life does not depend on the accuracy of a weather forecast.  The prosecution contended that the panel's reassurances that the temblors did not mean an earthquake was imminent kept L'Aquila residents from sleeping outside, and thus led to more deaths than if the panel had said nothing.

For the convicted scientists and engineers, things may not be quite as bad as they seem.  On average, in Italy about four out of five convictions involving prison terms are never put into effect, due to reversals on appeal or other reasons.  Nevertheless, these convictions are a sobering warning to experts who make public pronouncements about the possibility of earthquakes.

An article on the conviction in The Economist carried the subhead "Damned if you do, damned if you don't."  Earthquake experts asked to predict quakes face a classic dilemma.  If they fail to predict an earthquake that occurs, they can be charged with negligently giving bad advice that led to fatalities, at least in Italy.  But if they go to the other extreme and sound the alarm any time it looks like a quake might happen, they are liable to cause panic, or else to be wrong so often that people will ignore them even when they turn out to be right.  The lab technician's independent radon-based prediction of a coming earthquake had already caused panic in a nearby town, and it is possible that the L'Aquila commission felt obliged to calm troubled waters.  It was their bad fortune to take that position less than a week before the fatal quake.

While the U. S. has perhaps a more robust tradition of free speech, including the freedom to give opinions about public dangers without worrying about manslaughter charges, the unhappy experience of the Italian earthquake scientists and engineers is a cautionary tale for any expert who makes public pronouncements on matters relating to safety.  Despite appearances, people do listen to experts and sometimes even take their advice.  I am not aware of any doctors from the 1950s who were quoted in TV ads as saying smoking was harmless, and were then sued or charged with manslaughter after it was shown that cigarettes cause cancer.  But that may be because a powerful industry fought the idea for decades, while no well-funded interest group had any stake in showing that the L'Aquila scientists had science on their side when they dismissed the possibility of an imminent quake.  Other members of Italy's National Institute of Geophysics and Volcanology (INGV) did some Monday-morning quarterbacking after the quake and calculated that the chance of a big earthquake happening in the 10-km region around the city rose from a normal background of 1 in 200,000 to a much greater, but still small, chance of 1 in 1,000 a few hours before it actually occurred.  But one chance in a thousand is still a pretty long shot, and even if the panel had announced this relatively great increase in the likelihood of an earthquake, they might still have found themselves in the dock.
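It may help to see, in round numbers, how a risk can jump 200-fold and still remain a long shot.  Here is a minimal sketch in Python; the two probabilities are the INGV figures quoted above, and the rest is plain arithmetic.

# Risk arithmetic for the L'Aquila estimates quoted above.
background = 1 / 200_000   # normal chance of a major quake in the region
elevated   = 1 / 1_000     # estimated chance shortly before the quake

print(f"relative increase in risk: {elevated / background:.0f}x")   # 200x
print(f"absolute chance of a major quake: {elevated:.2%}")          # 0.10%
print(f"chance a warning is a false alarm: {1 - elevated:.2%}")     # 99.90%

A 200-fold increase sounds alarming; a 0.1 percent chance does not.  Both describe the same situation, which is exactly the communication problem the panel faced.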

Part of the problem is that most average citizens do not want to deal in probabilities.  They want to know, “Are we going to have an earthquake or aren’t we, and if so, when?”  Unfortunately, the science we have so far can only deal in probabilities, and if experts are asked to give more than raw statistics about a possible future event, they tend to send a message that is either reassuring or alarming, depending on the tone of the situation and any number of other extraneous factors.

Let us hope that the Italian convictions don’t persuade seismologists to eschew forecasting as too personally risky to engage in, because a robust and reliable science of earthquake forecasting would be a valuable thing that could save lives.  But if earthquake scientists feel that they are putting their personal lives and liberty at stake every time they issue an opinion, the science and engineering pertaining to earthquakes could itself suffer long-term damage as an unexpected casualty of the L’Aquila disaster.

Sources:  Besides the two articles in The Economist on the seismologists' trial (one on Sept. 17, 2011 after the indictment at http://www.economist.com/node/21529006 and the second on Oct. 27, 2012 at http://www.economist.com/news/science-and-technology/21565135-italy-sloppy-seismology-can-lead-prison-reason-tremble), I used an article on the Italian legal system at http://www.justlanded.com/english/Italy/Articles/Visas-Permits/Legal-System, and I referred to the Wikipedia articles "Italian Code of Criminal Procedure" and "2009 L'Aquila earthquake."

Monday, October 22, 2012

Airline Safety: No News Is Good News—Or Is It?


It has been almost four years since the last fatal commercial airline accident in the U. S.:  the crash of Flight 3407 in February 2009, in which pilot and copilot errors combined to send the plane into a house in Clarence Center, N. Y., killing fifty people.  Of course, that could change overnight, but for the moment we can be grateful that the airline safety record looks so good.  However, there's a fly in this otherwise sweet-smelling ointment:  it turns out that nothing stimulates the Federal Aviation Administration (FAA) to improve safety measures such as pilot training and work rules so much as crashes do.  And the lack of such stimulus has allowed the FAA to drag out some important pilot-training improvements for over a decade.

In the early years of flying, many accidents were due to mechanical failures, and this remained true at least until the 1980s.  Cargo doors, flammable cargo, and of course bombs (before airport security was beefed up) were responsible for many fatalities.  But with the advent of airport security measures, technical improvements in airframe construction, and restrictions on the types of cargo carried, most of the non-human causes of commercial crashes have been adequately dealt with.  What remains, as the story of Flight 3407 tells us, is the human factor.

Because fatigue seemed to play a big role in the Flight 3407 accident, the FAA began to revise rules on pilot work schedules to prevent the kind of overscheduling that pilot Marvin Renslow and 24-year-old copilot Rebecca Shaw experienced before their fatal accident in 2009.  Shaw had joined the flight after an all-night commute from Seattle, and at the time it was common for pilots to snooze in airport waiting rooms at odd moments rather than put up with the delays of checking into a hotel.

But only in 2011 were the new work-schedule rules implemented, and then only for planes carrying paying passengers.  Cargo flights are still exempt from the new rules, which seems to imply that while we want to protect paying customers, pilots and hardware are expendable.

Regulation is a sparring match between an industry that sees restrictions on how it can use its paid staff in dollars-and-cents terms, and a government agency that is beholden to Congress and the people at large to ensure that airline travel is "safe."  Of course, "safe" can only be approached, not achieved, and therein lies the difficulty.  The practical outcome is that things slide along, with the FAA taking years to solicit industry input and modify the proposed rules, until an accident prods Congress to come down hard on the agency with a mandate for improved rules that will keep the next horse from getting out of the barn after the present one has escaped.

That is perhaps a cynical view of the process, but it appears to cover the facts.  I have not looked at the proposed new rules, but as I recommended in May 2009 when the National Transportation Safety Board issued its conclusions on the causes of the Flight 3407 crash, we can learn a lot from near-accidents without having to go through the agony of a real one.

A recent news report on the issue of new FAA regulations says that voluntary data-gathering has been emphasized over one-size-fits-all obligatory rules.  And perhaps that is one reason we've had such a long spell without commercial-airline fatalities.  The Air Line Pilots Association (ALPA) holds an annual Air Safety Forum, and the September 2012 online issue of Air Line Pilot magazine describes topics at the four-day forum such as stall recovery, the new pilot-training rules proposed by the FAA, and airport safety issues.  Computerized records of voluntary safety-issue reporting make it easier than ever for pilots to learn from the mistakes of their colleagues.  Under the old cop-and-bad-guy model, airlines were reluctant to publicize pilot errors for fear of getting in trouble with the FAA, and consequently, knowledge about errors that could have turned into major disasters stayed in the cockpit.  Under the new atmosphere of collaboration, however, the FAA encourages such sharing of experience, with the result that pilots are better prepared than ever to avoid or deal with dangerous situations that other pilots have encountered.
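To make the data-pooling idea concrete, here is a hypothetical sketch in Python (not modeled on any actual FAA or ALPA system) of how de-identified voluntary reports might be tallied so that recurring hazards surface before an accident does.  The record fields and sample reports are invented for illustration.

from collections import Counter
from dataclasses import dataclass

@dataclass
class SafetyReport:
    phase: str      # flight phase, e.g. "approach" or "taxi"
    factor: str     # reported hazard, e.g. "fatigue" or "icing"
    narrative: str  # free text, stripped of identifying details

def top_hazards(reports, n=3):
    """Tally which (phase, factor) pairs pilots report most often."""
    return Counter((r.phase, r.factor) for r in reports).most_common(n)

reports = [
    SafetyReport("approach", "fatigue", "crew awake over 20 hours"),
    SafetyReport("approach", "icing", "airspeed decayed on final"),
    SafetyReport("approach", "fatigue", "overnight commute before duty"),
]
print(top_hazards(reports))   # fatigue on approach tops the list

The point is not the code but the principle: once reports are pooled and counted, a pattern like fatigue on approach becomes visible long before it contributes to a crash.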

This is one more example of a general principle: that we often learn more from technical mistakes and errors than we do from uneventful success.  This was something Chesley Sullenberger put into practice on January 15, 2009, when a flock of geese disabled both engines of his Airbus A320 shortly after takeoff from New York's LaGuardia Airport.  Sullenberger and his copilot successfully ditched the plane in the Hudson River, and everyone was safely rescued.  It turned out that Sullenberger had served the NTSB as an accident investigator and was active in safety committees for ALPA.  A good man constantly improving his safety skills got a big chance to put them into action, and he did.

What Sullenberger did that day, we would hope that the entire industry does all the time:  learn from previous errors, communicate them widely, and use that knowledge to prevent future incidents.  So far, it seems to be working.  It doesn’t make headlines, and it doesn’t stir Congress to action, but the pragmatic engineering criterion “does it work?” seems to be met here.  Let’s hope that the new record of time without a fatal commercial crash keeps getting longer each day.

Sources:  The Associated Press carried Joan Lowy’s article on airline pilot rule making, which was titled “No crashes, so new safety rules stall” in the Austin American-Statesman for Oct. 21, 2012.  I consulted the September 2012 online edition of the Air Line Pilot Magazine at http://www.alpa.org/publications/Air_Line_Pilot_September_2012/Air_Line_Pilot_September_2012.html#06 and the Wikipedia article on Chesley Sullenberger.  My blogs on Flight 3407 appeared on Feb. 16 and May 18, 2009.

Monday, October 15, 2012

Genes and Sneakers

 
Here is a not altogether implausible scenario from a possible not-too-distant future.

You're a 30-year-old U. S. woman who has recently been diagnosed with breast cancer.  You are too old to be covered by your parents' health insurance, and you don't yet work for a firm that has health-care coverage, so you have applied for health insurance under a new Federal insurance-exchange program.  As a condition of receiving coverage, you must supply a mouth swab that provides a DNA sample.  A few weeks later, the results come back:  because you have a hitherto undiscovered genetic defect that puts you at high risk of developing Alzheimer's disease at an early age, you are eligible for a mastectomy, but not chemotherapy.  According to a utilitarian calculation by a government bureaucracy, you will probably die of Alzheimer's before your breast cancer could recur, so the added protection that chemo would provide is judged not to be worth the cost.

Now, if the insurer were a private company, the scenario I just described would be illegal, at least according to a recent Associated Press article on the potential pitfalls of inexpensive human genome sequencing.  “Discrimination” by either employers or health insurance companies based on a person’s DNA information is a violation of Federal law.  But just as it’s illegal for you and me to print money, but perfectly legal for the government to print money, there may come a time when the government deems it necessary to analyze your DNA for reasons of “efficiency” or “cost-effectiveness.”

It is truly amazing how rapidly a feat which was once hailed as one of the most difficult achievements in the history of humanity has become something that may cost as little as $1,000 in a few years.  Of course, we are comparing apples and oranges here, because it's one thing to read out all the 1's and 0's (to use computer language) of a person's DNA, and another thing altogether to know what they mean.  And technically, the sequencing of the human genome isn't really finished even now, more than a decade after a "working draft" was published in 2000.  Figuring out what the human genome is saying is one of the hottest topics in molecular biology, and more is being learned every day.  But enough is known already that dozens of genetically related diseases can now be tested for.  And with that ability comes a host of ethical issues.

Insurance companies rely on accurate calculation of the risks faced by their customers in the average or statistical sense.  That's how they stay in business:  by making educated guesses as to who is likely to die when, who is more likely to need what medical treatment, and so on.  Nobody gets upset when a life-insurance firm wants to charge an 80-year-old more than a 20-year-old for a $100,000 life-insurance policy.  Decades of actuarial data (and common sense) show that the octogenarian is much more likely to "assume room temperature" (in Kinky Friedman's phrase) sooner than the college-age kid.  And believe it or not, there was a time when the kind of actuarial calculation that prudently apportions insurance rates to risk was regarded as advanced scientific knowledge.  For all I know, some people opposed the use of such obscure calculations for pricing insurance when the methods first arrived on the scene.  But eventually, people realized that the advantage of having insurance was worth the trouble of paying different prices for it, and we got to where we are today.
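As a toy illustration of that actuarial logic, with invented round-number mortality rates rather than real life tables, here is how the same $100,000 policy prices out at the two ages:

def fair_annual_premium(payout, annual_death_prob, loading=1.2):
    """Expected payout plus a loading factor for expenses and profit."""
    return payout * annual_death_prob * loading

payout = 100_000
for age, p_death in [(20, 0.001), (80, 0.06)]:   # assumed mortality rates
    print(f"age {age}: premium about ${fair_annual_premium(payout, p_death):,.0f}")

The only input that differs between the two customers is the probability of a claim, and that difference alone moves the premium from about $120 a year to about $7,200.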

Well, now we have some new advanced scientific knowledge about our DNA that can be obtained for a cost that falls every year, and it promises to tell us all sorts of things about how long we might live and what we might die of.  From the consumer's point of view, especially if you are a consumer with a genetic malady that could cost some health insurer millions, it makes sense to pass a law forbidding discrimination on the basis of genetic testing.  But to be entirely consistent, it seems to me, the lawmakers shouldn't have stopped there.  They should have rescinded all the variations in the price of all kinds of insurance based on things like whether you smoke, how safely you drive, or how old you are.

The reason they didn't is that imposing a completely uniform rate on everybody for a class of insurance, without taking advantage of any of the data that allows companies to predict risk, is like blindfolding a man and then telling him to go find his car keys.  Maybe he'll find them eventually by feeling every square inch of the house, but it will take him a lot longer than if you let him look.  And if private insurers can't use additional information to predict risk, they will have to raise rates on almost everybody, because they have to allow for worst-case situations that they could avoid with more information.  But what's crazy for a private company is done all the time by government, and so what we've prohibited from coming in the front door—discrimination based on DNA testing—is very likely to sneak around and come in by the back door when even the government finds that ignoring DNA data is a very costly thing to do.  Hence the sneakers of the title (I had to work it in somewhere).
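A stylized example shows the effect.  Suppose, to use purely hypothetical numbers, that 900 low-risk and 100 high-risk customers buy the same coverage, and the insurer is forbidden to tell them apart:

payout = 100_000
groups = {"low risk": (0.002, 900), "high risk": (0.02, 100)}  # (claim prob, count)

# Risk-rated premiums: each group pays its own expected cost.
for name, (p, n) in groups.items():
    print(f"{name}: risk-rated premium ${payout * p:,.0f}")

# Uniform premium: total expected claims spread evenly over everyone.
total_claims = sum(payout * p * n for p, n in groups.values())
total_people = sum(n for p, n in groups.values())
print(f"uniform premium ${total_claims / total_people:,.0f}")

On these numbers the uniform premium comes out to $380, nearly double the $200 the low-risk majority would pay under risk-rated pricing.  That is the raise-rates-on-almost-everybody effect in miniature.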

What's the answer?  I don't have one.  Not every ethical dilemma posed by a new technological development has an easy answer, or even a logical hard answer.  We as a society have spent billions of dollars developing the ability to decode our own genes.  We have let that particular genie (pardon the expression) out of the bottle, and like Pandora, many of us will not be able to resist the temptation to pay whatever the market will bear to find out what our genes bode for our future.  But very little of what genetic testing tells you is a certainty.  And even lives cut short or debilitated by genetic disease can be worth living—ask Stephen Hawking, who has a type of amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease) and has been wheelchair-bound and almost completely paralyzed for decades.  Yet he has won a dozen or more international prizes for his groundbreaking work in theoretical physics, has married twice, and has been portrayed on Broadway.  Hawking should thank God that genetic testing for ALS wasn't available when his mother was pregnant.  England's National Health Service might have saved a few bucks if Hawking had been aborted, but the world would have been much poorer as a result.

Sources:  The article “Panel:  Genetics needs ethics rules” by Lauran Neergaard appeared in the Oct. 14, 2012 edition of the Austin American-Statesman, p. A7.  I referred to articles in Wikipedia on Stephen Hawking and the Human Genome Project. 

Monday, October 08, 2012

Are You Mediated?


Every now and then a book comes along that is hard to classify, but pulls a lot of disparate things together and explains them in a remarkable way.  Mediated is such a book.  The answer to the question of the title is yes, almost certainly you’ve been mediated.  Especially if you’re reading this blog.

Thomas De Zengotita is an anthropologist by training who teaches in New York City.  His book describes how the ubiquitous electronic media of the twenty-first century have made fundamental and far-reaching changes in human behavior and thought.  There's enough material in the book for dozens of blogs, so I'm going to restrict myself to one little piece of it:  how we deal with our manifold options of spending mediated time versus time in old-fashioned reality.

Let’s start back in the thirteenth century, with St. Thomas Aquinas (although the roots go back to Plato and Aristotle).  Aquinas was the most famous of what are called the “scholastic” philosophers, and scholastics were known for making distinctions.  One of the more significant distinctions he drew was the one that separates the process of thought (what Aquinas called the “intellectual power”) from the processes of sensing, which Aquinas termed the “sensual power” (not in the modern connotation of “sensual,” but meaning just “pertaining to the senses”).  Now, sensing—meaning touch, taste, smell, hearing, and sight—goes on in what we now call “real time.”  Once you aim your eyes in a given direction, what you see is not under your control, generally speaking.  But what you think about what you see is.  Using your intellectual power, you can think about things present, things past, or things to come, and swap these around at your will.  So as long as humanity has had the ability to think, we have experienced something in addition to “real time”:  what De Zengotita calls “unreal time.”

Unreal time was restricted to the human mind until modern media came along:  first analog means of recording and playing back experience (the phonograph, motion pictures) and ways of experiencing real time in another location (radio, television).  And of course it wasn’t long before recorded experience began to be broadcast so that the media brought you a kind of hybrid of real time and unreal time (e. g. “recorded live”).   Early forms of media such as novels and radio dramas still required some active thought (or imagination, which is a different but related thing) on the part of the listener.  But a few days ago I saw a banner outside the Texas State History Museum advertising a 3-D IMAX presentation of an animated film about flying dinosaurs.  It’s hard to imagine anything that would leave less work to the imagination than being surrounded by giant pterodactyls which appear to be flying directly at your head.

De Zengotita’s writing style reminds me of the late Catholic writer Walker Percy, whose Lost in the Cosmos was a similar unclassifiable work that was part social criticism, part personal essay, but highly readable and thought-provoking because of its original point of view.  In Mediated, De Zengotita strenuously refrains from moralizing or prescribing “solutions to the problem,” a trope which he says has become almost required these days.  But he wants us to think about the consequences of spending a larger and larger part of our time in mediated unreal time:  playing online games, cruising the web that so easily takes us from one non-sequitur to the next almost-unrelated site, and watching representations of all kinds that train us to expect to be flattered by increasingly customized and personalized treatment (count how many times you see the word “my” this or that in ads, and even software labels such as “my computer”).  One day, as he was in conversation with a techie type discussing the potential problems that could arise from spending too much time in mediated unreality, the man turned to him and said, “What’s so great about reality?”

If you are a secular humanist like De Zengotita says he is, maybe raw reality really is just one of an increasing number of options we have now, and it’s hard to answer the techie’s question.  But for people who believe in a God who created both reality in general and us in particular, the real has a prior call on our attention over anything we cobble together ourselves.  I realized an aspect of this over a year ago when, as a part of a retreat I was planning, I voluntarily gave up listening to my car radio.  My listening habits were fairly typical:  NPR, a classical music station, and a low-power AM station broadcasting Relevant Radio, an evangelistic effort of the Roman Catholic Church.  But the freedom to think my own thoughts during commutes came to be more valuable to me than whatever background interruptions the radio brought to me, and I haven’t gone back.  On long commutes to Austin, I sometimes play recordings of interviews with authors by former NPR correspondent Ken Myers, who has established a subscription radio network of sorts called the Mars Hill Audio Journal.  (Mars Hill was how I learned about Mediated, as a matter of fact.)  But the key to my approach to my car audio in particular, and increasingly media in general, is that I try to stay in charge whenever possible.

That means I try to avoid simply killing time by bombing around on the web.  If I get on the web I try to make sure that I know what I’m looking for—a journal article, maybe, or the definition of a word, or a review of a particular movie.  Because if I just take at random one of the thousands of little dangling baits that nearly every website hangs in front of you, I know I will eventually end up seeing something I regret, simply for the time wasted if not for the dubious nature of the content.

Am I still mediated?  Unquestionably.  But at least, after reading De Zengotita's book, I'm aware of my mediated state and can try to do something about it.  And those of us who work in the media—everywhere from blogs like this to the most advanced projects at Google that we may not see for years—should all find out what being mediated is, and make up our minds whether we like it or not.

Sources:  Thomas De Zengotita’s Mediated:  How the Media Shapes Your World and the Way You Live in It was published by Bloomsbury in 2005.  For more information about the Mars Hill Audio Journal, see http://www.mhaj.com.

Monday, October 01, 2012

Fukushima Revisited: Lessons Learned


On Mar. 11, 2011, a huge earthquake and tsunami struck Japan, killing thousands of people outright and flooding large areas of the northeastern coastline of the country.  But perhaps the most significant legacy of the disaster will arise from what happened at the Fukushima nuclear plant, which was situated in the direct path of the tsunami.

As we mentioned in a blog two days after the disaster, no nuclear plant in history had been subjected to an 8.9-magnitude earthquake before.  But all six reactors at the plant appear to have withstood the shock without serious initial damage.  As the earthquake struck, automatic shutdown procedures kicked in, and after the shaking stopped the operating reactors were still under control.  The problems came with the tsunami, which flooded the lowest level of the plant.

At this point, we turn to the conclusions of two special commissions charged with investigating the accident, both of which issued their reports in July of this year.  One of them, appointed by Japan's National Diet, was the first independent commission of its kind in the entire sixty-six-year history of the country's constitutional government.  After hearing from hundreds of witnesses in over a thousand hours of interviews, the commissions had harsh words for Tokyo Electric Power Company (TEPCO), for government officials, and for the sadly lacking state of emergency preparedness shown by those charged with the safety of nuclear power in Japan.

One problem that could have been avoided concerned the location of the emergency generators that kept cooling pumps operating during cooldown.  Turning off a large nuclear reactor is not like just flipping a switch.  A reactor operates by heating large volumes of water, metal, and fuel to many hundreds of degrees, and even if the chain reaction is stopped almost instantly by some means such as the insertion of neutron-absorbing control rods, the laws of physics say that all that heat has to go somewhere.  Worse, the radioactive decay of fission products keeps generating fresh heat for days after shutdown.  The usual place the heat goes is into the cooling fluid circulated through the reactor, which in normal operation carries it to boilers to generate electricity.

In the case of a shutdown, the heat can simply be dissipated in cooling towers or by other rapid means, but first it has to be extracted by the cooling fluid flowing through the reactor.  In an emergency, this fluid has to be pumped even faster than normal, and only mechanical pumps will do the job in the type of reactor used at Fukushima.  With the loss of electric power from outside due to the earthquake, and of the plant's own generating capacity due to the shutdown, the pumps had to be powered by diesel-driven emergency generators.  The big problem was that all these emergency generators were in the basement—where the floodwaters rose and stopped them cold.
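To get a feel for the scale of the problem the flooded generators created, here is a back-of-the-envelope sketch in Python using the Way-Wigner approximation, a standard textbook estimate of decay heat after shutdown.  The 2,000-megawatt thermal rating and the one-year operating period are assumed round numbers for illustration, not figures specific to the Fukushima reactors.

def decay_heat_fraction(t_after_shutdown_s, t_operating_s):
    """Way-Wigner estimate: fraction of full thermal power still emitted
    t seconds after shutdown, for a reactor previously run at power
    for t_operating_s seconds."""
    return 0.066 * (t_after_shutdown_s ** -0.2
                    - (t_after_shutdown_s + t_operating_s) ** -0.2)

full_power_mw = 2000.0            # assumed thermal rating
one_year_s = 365 * 24 * 3600.0    # assumed prior operating time

for label, t in [("1 hour", 3600.0), ("1 day", 86400.0), ("1 week", 7 * 86400.0)]:
    mw = full_power_mw * decay_heat_fraction(t, one_year_s)
    print(f"{label} after shutdown: about {mw:.0f} MW still to remove")

On these assumptions the core is still shedding over 20 megawatts an hour after shutdown and nearly ten megawatts a full day later, which is why the cooling pumps, and the generators driving them, could not be allowed to stop.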

From that point on, the situation just got worse.  With no cooling fluid flowing, the three reactors operating at the time of the earthquake overheated, and steam reacting with the hot metal inside (chiefly the zirconium alloy cladding of the fuel rods) produced hydrogen, which eventually exploded.  This was a chemical, not a nuclear, explosion, but it broke open the plant's housing enough to release a good deal of radioactive debris from the wrecked reactors inside—about a tenth of what was released during the much more serious accident at Chernobyl, Ukraine, in 1986.  But enough radioactive material was released at Fukushima to affect, for years to come, the lives of those who lived near the plant.

The fact that the emergency generators were in a vulnerable position where floodwaters could stop them is only one of a number of design flaws that contributed to the magnitude of the disaster.  Higher dikes around the plant site could conceivably have prevented flooding in the first place.  Following a call for increased safety measures at nuclear plants in 2006, TEPCO apparently did little or nothing; according to the National Diet report, the firm relied on its close connections with Japanese regulators to avoid taking any substantial action to improve safety.  The reports also faulted government officials for not planning for evacuations of the scale that turned out to be needed.  The Fukushima disaster has also given ammunition to groups agitating for the end of nuclear power altogether, and several countries, notably Germany, have either slowed or stopped their plans for future nuclear plants.

Admittedly, the earthquake and tsunami that led to the Fukushima disaster were at the outer limits of what any reasonable design would take into account.  But clearly, some fairly simple measures that might have made routine operations a little less convenient would have reduced or eliminated altogether the tragic events that led to the death or injury of numerous plant workers, the release of radiation that contaminated land for miles around the plant, the bad publicity that nuclear power received, and the total loss of billions of dollars’ worth of machinery and equipment.

One hopes that every nuclear engineer, in school and out, will make a special study of Fukushima in order to apply the lessons of what went wrong there.  With the release of the disaster reports (and, ideally, their translation into other languages, including English), the nuclear industry has been presented with a treasure trove of examples of how not to do things.  As engineer and writer Henry Petroski likes to point out, engineers often learn more from failure than from success, and Fukushima has provided an abundance of learning opportunities.  In view of concerns over climate change, the limited supply of fossil fuels, and the promise of conservation technologies such as smart-grid approaches to power distribution, it would be a shame if we backed away from a form of energy that could provide non-fossil power for many decades to come.

Sources:  I relied upon the Wikipedia summaries of the commission reports under the headings of “Fukushima Daiichi nuclear disaster” and “National Diet of Japan Fukushima Nuclear Accident Independent Investigation Commission.”