Twenty years ago this week, a late-night experiment at an obscure nuclear power plant in the former Soviet Union turned into the worst nuclear accident in history. During the early morning hours of April 26, 1986, operators at the graphite-core plant in Chernobyl, some eighty miles north of the Ukrainian capital of Kiev, violated numerous regulations and disabled safety mechanisms during an ill-considered reactor test. The reactor blew apart and the graphite (carbon) core caught fire like a giant nuclear barbecue pit, sending radioactive smoke into the atmosphere. The accident was compounded by the criminally slow response of the Soviet government, which first attempted to cover up the incident. When Scandinavian nations detected abnormal levels of airborne radioactivity and started asking questions, the USSR reluctantly admitted there was a problem, but not before thousands of people living near the plant had been exposed to dangerous levels of radioactivity.
An Associated Press story by Mara D. Bellaby published this week recounts estimates of the total number of fatalities and illnesses caused by the accident. Not as many people died from Chernobyl as was originally feared. Eventually the government got around to evacuating some 116,000 people who lived within twenty miles of the plant. Official reports released by United Nations agencies recently say that only 50 people have died so far as a direct result of radiation poisoning traceable to the accident. Surprisingly, this includes those who fought the fire in the first hours of the accident and who were exposed to the most intense levels of radiation. The most significant problem in the general public has turned out to be a sharp increase in thyroid cancer among young people. Since radioactive iodine is taken up preferentially by the thyroid in children and adolescents, this increase was expected. Careful screening for early signs of thyroid cancer and prompt treatment have cured nearly all of those who contracted the disease, according to the reports. So if the world's worst nuclear accident caused only 50 deaths, why is it that no new nuclear power plants have been ordered in the United States since 1978?
The last nuclear plant to be completed in this country was finished in 1996. The nearly twenty-year span between these two dates alone gives you some idea as to why utilities are reluctant to order nuclear plants. For a variety of reasons, many of them good, the nuclear power industry in the U. S. is hedged with an incredible number of regulations, permit processes, and controls from overlapping Federal, state, and local jurisdictions. Our own worst nuclear-plant disaster, Three Mile Island, happened in Pennsylvania in 1979, and compares to Chernobyl as a fender-bender compares to a bus full of children tumbling down a mountain. Nevertheless, it was serious enough to create political turmoil that effectively shut down the nuclear power construction industry in this country. There are still U. S. companies that make nuclear plants—they just don't sell them here.
As a consequence, the increased demand for electricity in the U. S. has been met since the 1980s largely by more coal-fired plants, with a small but significant amount contributed from renewable sources such as wind power. There are many good reasons to oppose nuclear power: the problem of what to do with the highly hazardous wastes created by plant operation, the danger of nuclear proliferation to unstable countries, and the "yuck factor" that some people will always feel about a technology that is associated with nuclear weapons. But assuming that the nation's use of electric energy is not going to decrease in absolute terms any time soon, the power has to come from somewhere, namely coal in the last few years. And opponents of greenhouse-gas emissions, many of whom also oppose nuclear power, know (or should know) that you can't burn coal without making carbon dioxide, which is the greenhouse gas of most concern. Nuclear power, whatever its other drawbacks, produces virtually no greenhouse gases, which is one reason that even "greens" have been giving it a second glance lately.
Some countries such as France never abandoned nuclear power. France's example shows that given a moderate, stable regulatory environment and good engineering, nuclear power can be a safe and reliable source of electricity, leaving aside the question of wastes. Still, it is not at all clear that the nuclear industry will ever be able to build substantial numbers of new plants in the U. S. The new free-enterprise model of partially deregulated utilities makes it even more risky to plan a long-term capital investment such as a nuclear plant, which sucks in millions of dollars for years before even starting to produce revenue. So if we can't build new nuclear plants, and we don't want to contribute to global warming by building new coal-, oil-, or natural-gas-fired plants, where will the energy come from?
Radical conservation combined with renewable and distributed energy generation is one possible answer. Here and there, enterprising architects have built houses and even commercial buildings whose net use of externally supplied energy in the form of electricity or natural gas is only a small fraction of what typical construction uses. The drawback, of course, is that it takes expensive custom engineering and materials to achieve these radical savings, and in the current economic environment there is no incentive to do these things. Perhaps some radical economic experimentation is in order here. If large tax breaks or even subsidies were provided for building structures whose energy usage was, say, 50% or less of the average level, the expense could be regarded as a kind of loan, since the country as a whole benefits whenever energy usage falls in a costly-energy economy. A whole raft of vested interests would first have to be placated, but that is what politics is for.
As the aftermath of Chernobyl has proved, our worst fears in some areas sometimes turn out to be not as bad as we thought. But before we in the U. S. go nuclear in a big way, we have time to consider other options.
Sources: An article by Mara Bellaby similar to the one carried in the Austin American-Statesman is at http://www.newsobserver.com/104/story/431637.html.
Monday, April 24, 2006
Wednesday, April 19, 2006
Patent or Blackmail?
Here is a list of some of the great human achievements of the past five hundred years: the Scientific Revolution, the Industrial Revolution, the patent system . . . . What's that last one doing there? Historians of technology rightly regard the development of patent law as one of the most significant intellectual innovations of the early modern period. Beginning in Renaissance Europe and spreading to America, the idea that an inventor's rights to make and sell his invention should be protected by law for a limited period encouraged innovation while ensuring that the rights of the general public would also be protected from monopolies of indefinite lifetime. Engineers, whose ideas form the basis of many patents, should be interested to know that the present U. S. patent system is being gamed in a major way, to the detriment of nearly all concerned.
The most recent example of this concerns the firm Research in Motion, which makes the popular Blackberry wireless communication system. It used to be the case that patents were fairly difficult to obtain. The inventor's patent attorneys were pretty evenly matched by the U. S. government's patent examiners, whose job it was to make sure that trivial, obvious, or otherwise meritless patents were not issued. Patenting an idea was a serious and sometimes difficult undertaking, but when you got one, you knew you had something, and so did everyone else.
Not so anymore. A combination of factors—inadequate Patent Office funding, a hyper-pro-business attitude in government, and speedups in the pace of innovation—has made it much easier to get a patent in the last ten to twenty years. This includes dubious ones sometimes called "submarine patents"—not patents on the submarine, but patents deliberately designed to cover all parts of an emerging field, whether or not the supposed inventor has any genuinely innovative ideas. In the past, these types of patents would never have been issued, but in the current almost-anything-goes atmosphere, all it takes is enough money paid to a good patent firm.
What happened to Research in Motion this year shows what kind of harm can result from this over-liberalized issuing of patents. In the early 1990's, one Thomas Campagna patented some ideas for wireless email. In the meantime, Research in Motion put in a lot of work to develop the Blackberry, and obtained its own patents. In 2001, a company named NTP, formed to exploit Campagna's patents, sued RIM for patent infringement. The resulting legal hassle threatened to produce an injunction that would shut down all Blackberry services in the U. S., clearly an outcome that would benefit no one. This was despite the fact that the U. S. Patent and Trademark Office re-examined and rejected at least seven of NTP's patents along the way. In March of this year, RIM announced a settlement in which NTP would receive over $600 million. No doubt RIM views this as part of the cost of staying in business. But if the shady NTP patents had never been issued in the first place, none of this would have happened.
What has this got to do with engineering ethics? A lot. First, engineers can refrain from participating in the generation of "junk" patents. Unfortunately, this may not have much of an effect, since unscrupulous patent lawyers don't need much in the way of technical help to cobble together useless patents. This is not to say that patenting is unethical in general. Properly used in a well-conducted system, patents help to achieve the balance between monopolistic profit, innovation, and reasonably-priced new products and services that characterizes modern industrial societies. But the pendulum has swung way too far in favor of patent owners and patent attorneys to the detriment of the general public and those who actually do the hard work of developing and marketing new products, only to have their resources diverted into pointless patent battles. Under the present circumstances, the danger is that innovation will be stifled by artificially extended patents that allow established firms to exclude competition indefinitely. This is already happening in the pharmaceutical industry as some firms come up with patented repackaging of old patented drugs to prevent a cheaper generic form from coming onto the market. Who pays for this? The beleaguered patient who has to pay beaucoup bucks for the name-brand drug longer than necessary.
The second thing engineers can do is to make a political issue out of the patent system. True, it doesn't have the popular appeal of antiwar movements or tax reform. But it is critically important to fix a badly broken system before R&D departments of multinational firms decide to relocate to countries where the system is more rational. Ever since the U. S. patent system was founded in 1790, it has differed in significant ways from most European systems. One of the most important differences is that most European patent holders must show that they are licensing their patents to others or using them themselves, while there is no such requirement in the U. S. This allows U. S. patent holders to "sit on" submarine patents that lie dormant until a well-heeled company comes within the sights of the patent-holder's legal gun. Besides changes in the legal structure of patents, the U. S. Patent and Trademark Office simply needs a lot more good help in the form of funding and staff to stay competitive with the best private patent lawyers. Only then will it be able to reinstate the rigorous examination of patents that prevailed before the recent gold-rush atmosphere developed.
With their specialized training, engineers stand in a unique position to make an important political difference in this situation. Consider writing your U. S. senator or congressman about this matter, and see what happens. The worst that can happen is nothing, and the best could be a lot better than that.
Sources: The New York Times article "In Silicon Valley, A Man Without a Patent" by John Markoff was published online on Apr. 16, 2006, and is available from the NYT archives at http://select.nytimes.com/gst/abstract.html?res=F20811FA3D5B0C758DDDAD0894DE404482 for a fee. The Forbes.com article "More Patents Rejected in BlackBerry Case" by Arik Hesseldahl is at http://www.forbes.com/business/2005/06/22/rim-patent-infringement-cx_ah_0622rim.html.
Thursday, April 13, 2006
Earthquake Prediction: Ready for Prime Time?
Earthquakes and the tsunamis that sometimes accompany them are among the most frightening and deadly natural disasters. The December 26, 2004 earthquake and tsunami that struck in and around the Indian Ocean killed more than 200,000 people, and millions more have died in similar disasters. One of the main ways people die in an earthquake is in collapsing buildings, and over the years civil engineers have developed building codes and other techniques that reduce (but do not eliminate) the danger of structural collapse during an earthquake. Unfortunately for billions of people who live in developing countries, these measures are expensive. If the choice is between living in shaky but affordable housing on the one hand, and going without shelter on the other, most people take their chances with a house that may fall down in an earthquake. The poor of this world have more pressing things to worry about than earthquake safety, but that doesn't make their lives any less valuable.
Viewed as an engineering problem, the question of how to save lives in earthquakes and tsunamis has several possible solutions. The only one we have pursued to any great extent up to now is to make sure that structures will withstand the likely force of an earthquake. (As far as tsunamis go, there is little one can do except run for higher ground.) If—and this is a big "if"—earthquakes could be predicted with good accuracy, the problem becomes simpler. A few hours before an earthquake strikes, simply clear everyone out of dangerous buildings until the danger is past. This second solution is not without its own problems, but if it could be implemented, the cost of an early-warning system would be much less than earthquake-proof buildings for everybody, and the potential to save lives would therefore be much greater. The only problem is, how do you predict earthquakes?
Historically, earthquake prediction has been regarded as a pseudo-science. The abundance of post-earthquake "premonition" stories such as animals acting strangely, unusual sounds, and lights in the sky is a set of data that few scientists take seriously, and with some justification. Human beings are not emotionless recording machines, and memory is a highly subjective thing. Perfectly ordinary and random incidents that happen just before a frightening event take on an ominous cast when recalled later. But the shady neighborhood that earthquake prediction has resided in up to now should not prevent scientists and engineers from exploring ideas about how to do it.
The December 2005 issue of IEEE Spectrum, a highly regarded magazine for professional electrical and electronic engineers, carried an article on recent efforts to develop technical means of predicting earthquakes. (The article can be found at http://www.spectrum.ieee.org/dec05/2367). The lead author, Tom Bleier, described how ELF waves (extremely-low-frequency electromagnetic waves) and other measures such as satellite-sensed electromagnetic waves and surface temperatures have appeared at times to be correlated with certain large earthquake events. He made what to this author sounded like a good case that there is something to the idea that such correlations are real. However, a good physical explanation for why such correlations should occur is presently lacking.
The article inspired three geophysicists to write a letter to the editors of IEEE Spectrum protesting the publication of claims that they said should be rejected (the letter can be viewed at http://www.spectrum.ieee.org/apr06/3275). Robert J. Geller, Alex I. Braginski, and Wallace H. Campbell argued that there is no scientific basis for the kind of earthquake prediction that Bleier and his colleagues are doing. They claim there is so much noise from other natural and man-made sources at the frequencies in question that any exercise in earthquake prediction amounts to sophisticated tea-leaf reading. Their opinion is that the scientific community has examined the methods of Bleier and company and found them wanting.
This controversy reminds me of the early days of tornado prediction. From the late 19th century until 1938, forecasters at the U. S. Weather Bureau were forbidden even to use the word "tornado" in a forecast. The prevailing opinion was that there was no reliable way to predict tornadoes and such a forecast was likely only to cause needless panic. It wasn't until 1948, when some U. S. Air Force weathermen at Tinker Air Force Base in Oklahoma had their airfield trashed by a tornado, that anyone began to apply serious scientific effort toward the problem of tornado forecasting. They came up with a combination of conditions that looked like it would work. Five days later, they noted that the same conditions prevailed, and, not being under the restrictions of the civilian Weather Bureau, took it upon themselves to issue a tornado forecast to Air Force personnel. Later that same evening, probably the only tornado in history that was greeted with jubilation struck Tinker Air Force Base again! The weathermen published their findings in 1950 and 1951, but for several years afterwards tornado forecasts were restricted to military facilities unless they were leaked to the media. Other researchers attempting to publish research papers relating to tornado forecasting were blocked by skeptical reviewers. It took the better part of a decade to overcome the attitude that forecasting tornadoes was so chancy as to not be worth upsetting the public. But in combination with radar-based early warning systems for tornadoes that were put in place in the 1950s, annual tornado fatalities in the Midwest plummeted. (The story of tornado prediction is told in Marlene Bradford's Scanning the Skies: A History of Tornado Forecasting.)
Time will tell whether the new techniques of earthquake forecasting will bear fruit in the form of reliable, specific predictions. In the meantime, its proponents should prepare themselves for a long battle with skeptics. We can hope that if there is anything to it, engineers, scientists, and the public will be open-minded enough to welcome the practice and take it seriously enough to save lives with it in the future.
Sources: See URLs above referring to items in IEEE Spectrum. Marlene Bradford's Scanning the Skies: A History of Tornado Forecasting was published in 2001 by the University of Oklahoma Press, Norman.
Friday, April 07, 2006
The Engineer and Grandma Millie: The California Energy Crisis Revisited
Engineers like to think that what they do professionally helps people. The payoff in this regard is not as direct as the one a doctor gets when he takes in a dying patient and sends that patient home feeling fine. But most engineers, I suspect, would like to believe that the work they do makes a positive difference in the lives of people who use their products and services.
This connection is especially visible in the area of electric utilities. In the aftermath of numerous hurricanes and storms in 2005, we saw teams of hundreds of linemen coming from all across America to repair the damaged distribution infrastructure. Linemen aren't engineers, true, but I have encountered the same "keep the power coming" attitude in power engineers whose job it is to direct operations of the regional power pools that maintain a moment-by-moment balance between the fluctuations of electricity demand and the available supply. Since electricity cannot be stored in large quantities, it must be produced as needed, and keeping abreast of changing demands can be a headache even when no one is playing financial games on top of it.
This week, two of the all-time top financial game-players are on trial for lying about their company's profitability. Jeffrey Skilling, former CEO of Enron, and Kenneth Lay, the firm's founder, are being tried in a fraud and conspiracy case that the federal prosecutors have framed in simple terms. In the years 2000 and 2001, some parts of Enron were losing lots of money, and the claim is that Skilling and Lay knew this and lied about it to investors and the general public. Ironically, one part of Enron that was extremely—some would say sinfully—profitable was the energy-trading division, which the Attorney General of the State of California claims was responsible for many of the rolling blackouts that hit that state in the same years. Skilling and Lay didn't have to lie about that—they just had to live with their consciences.
What happened to the energy market in California in 2000 has been described as the perfect storm of electric-utility deregulation. To make a long and complex story short, increasing demand and partial deregulation led to a situation in which there was simply not enough electricity available in California during several days of unusually hot weather or short supply. The new tariffs allowed companies like Enron to charge whatever the traffic would bear for energy imports and futures, and as a result rates soared to the stratosphere in only a few months. The Attorney General claims that Enron and other utility interests purposefully took generating facilities offline in order to increase their profits. The fallout in terms of accusations, lawsuits, bankruptcy proceedings, and other effects continues to this day. In the process, state investigators unearthed a set of recorded phone conversations among energy brokers at Enron and other firms.
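How withdrawing capacity can send spot prices to the stratosphere can be illustrated with a minimal uniform-price auction model, in which the most expensive unit needed to meet demand sets the price everyone receives. All the capacities and offer prices here are hypothetical, chosen only to show the mechanism.

```python
# Minimal uniform-price auction sketch: the clearing price is the offer of
# the last (most expensive) unit needed to cover demand, so withholding
# cheap capacity can hand the price-setting role to a very expensive unit.

def clearing_price(demand_mw, offers):
    """offers: iterable of (capacity_mw, price_per_mwh) tuples."""
    met = 0.0
    for capacity, price in sorted(offers, key=lambda o: o[1]):
        met += capacity
        if met >= demand_mw:
            return price
    raise RuntimeError("offered capacity cannot meet demand")

offers = [(400, 30.0), (300, 45.0), (400, 60.0), (300, 250.0)]
print(clearing_price(900, offers))      # all capacity offered -> 60.0
print(clearing_price(900, offers[1:]))  # cheapest unit withheld -> 250.0
```

Pulling one cheap 400 MW unit off the market more than quadruples the clearing price in this toy example, without any change in demand.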
These tapes make for depressing listening. One took place as the state legislature was debating whether to cap the spot price of energy on the open market. "So the rumor's true, they're taking all the f---ing money back from you guys? All the money you guys stole from those poor grandmothers in California?"
"Yeah, Grandma Millie, man. She's the one who couldn't figure out how to vote on the butterfly ballot. Now she wants her f---ing money back from the power utilities. . . ." And these are some of the less obscene samples. Many more such recordings can be found at http://ag.ca.gov/antitrust/energy/index.htm.
The engineers who participated, willingly or unwillingly, in the events of the California energy crisis have not received as much attention as the financial traders who clearly profited from the situation. I have seen a few references in the open literature to their activities and the difficult situation they faced. In the tightly coordinated world of electric power-pool operation, independent action is nearly impossible, since the decision to shut down a facility or to buy power here or there is one that only a few individuals can make. Anything other than following orders in a case like this would amount to industrial sabotage, since an uncoordinated attempt to shut down or bring online a generator would cause serious damage. Whether or not the engineers involved liked what they were doing, and whether or not they knew its implications, they had few options in the event. Like soldiers on a battlefield, their horizon was limited to their immediate surroundings and the technical circumstances they had to deal with at each moment. It is possible that they realized the wider implications of their actions only in retrospect.
If any power engineers involved in the California energy crisis care to share their experiences, it would be appreciated. Engineers are generally far from the centers of political and corporate power where rate setting and related issues are decided. But to the extent that such matters interfere with the average engineer's desire to serve the public, not penalize it, there is something wrong structurally with the way electric utilities are set up and administered.
Since 2001, Enron has gone bankrupt, the California economy has cooled off, continued efforts in energy conservation have alleviated summer blackout threats, and additional generating capacity has been added to the nation's power grid. Sometimes a crisis has to happen in order to galvanize politicians and corporations into action, so we might actually be thankful that the California energy crisis happened when it did, and was no more severe than it was. All the same, it would be easy to become complacent in the face of new schemes for shortchanging Grandma Millie in order to profit the powerful, and we should be wary of them in the future.
Sources: The best source of Enron tapes I have found online is on the California Attorney General's website http://ag.ca.gov/antitrust/energy/index.htm.
Thursday, March 30, 2006
Engineering Censorship in China
On the last day of April in 2005, Chinese journalist Shi Tao was sentenced to ten years in prison for sending an email to a New York colleague, the editor-in-chief of a publication called Democracy News. According to the Chinese court's verdict, Shi's email contained state secrets, and his crime consisted of leaking them to an "overseas hostile element," namely, Democracy News. The thing that makes Shi's case interesting to the rest of the world, and America in particular, is that Yahoo! Holdings (Hong Kong) helped the Chinese government identify Shi by divulging information about his private email account. Without Yahoo's help, Shi quite possibly would be a free man today, working for what he sees to be the noble goal of promoting democracy in China.
By some estimates, China is the world's biggest untapped market for information technology. The population of mainland China makes up the second-largest group of Internet users, second only to the U. S. That wouldn't have happened without technology—hardware and software—furnished largely by U. S.-owned or -operated companies such as Yahoo, Google, and Microsoft. In order to gain access to the lucrative Chinese market, all three firms have agreed to abide by the restrictive censorship and information-control policies of the People's Republic of China. They have also been roundly criticized for such cooperation. In January, the Secretary General of Amnesty International expressed dismay at the "growing global trend in the IT industry" to impose "restrictions that infringe on human rights." Revealing private email account information, shutting down "undesirable" websites, and restricting search-engine results to items that are politically acceptable are a few examples of the steps that IT firms have taken in order to stay on good terms with the Chinese government.
Some people like to say that all technology is ethically neutral, and the only time ethics comes into the picture is when you look at how the technology is used. I have yet to be convinced of the ethical neutrality of a nuclear weapon. As we have found, the nuclear tests of the 1960s in which no one was directly killed nevertheless caused environmental damage and radiation levels that led to serious later harm. Some technologies carry with them an intrinsic bias toward good or evil, and it is foolish to pretend otherwise. It may be necessary from time to time to build things with a built-in ethical bias, but we do that in full consciousness that they cannot be viewed as ethically neutral.
The Internet's designers imbued its very structure with the spirit of egalitarianism and, one might even say, democracy. The distributed, non-hierarchical way that information travels, the "uniform" resource locators that anyone from an eleven-year-old boy in his bedroom to the U. S. government can obtain under basically the same rules, and the almost-instant access to anything are all biased toward the "global village" model of human interaction. While one may disagree with the merits of that model, it has created a situation in which democracy, openness, and the free exchange of information come naturally to the Internet. Restricting any of these things means that IT designers and companies have to go to extra trouble and expense. In a sense, they are going against the grain of the whole design philosophy of the system.
In defending Microsoft's actions, Microsoft founder Bill Gates claims that the basically open nature of the Internet will lead to a net increase in freedom for the Chinese people, despite the restrictions and occasional blog-takedowns that his firm does at the government's bidding. Speaking at the World Economic Forum in Davos last January as reported in the Times of London Online, Gates said, "I do think information flow is happening in China ... even by existing there contributions to a national dialogue have taken place. There’s no doubt in my mind that’s been a huge plus."
It is a fact that laws and freedoms differ greatly from one country to another. Doing business in countries with evil or corrupt regimes has always been a morally complex thing. Quite often, moral clarity is arrived at only after the utter defeat and repudiation of a government such as that of Nazi Germany after World War II. And as Gates points out, engaging a country through trade can lead to opportunities for improving the lot of its citizens that an absolute hands-off posture would prevent.
All the same, I get a strange feeling in the pit of my stomach when I think that where I live influences what I'll be able to find on Google, or what I'll be able to email to my friends. I visited China back in 1989, less than two months after the Tiananmen Square massacre. Our guide pointed out the blackened blocks of concrete which had not yet been replaced after the fires and violence of those days. It saddens me that the same government which committed those crimes is still in power, and has strong-armed the cooperation of U. S. corporations that have enjoyed freedom in this country and now are a party to restricting it in China. But this may be one of those situations where we will find out what the right course is only by waiting to see how things turn out.
Sources: The article "Gates Defends China's Internet Restrictions" is at http://business.timesonline.co.uk/article/0,,19149-2012784,00.html. An article on Yahoo's co-founder, "Yang defends support for 'firewall of China,'" is at http://www.iol.co.za/?set_id=1&click_id=31&art_id=qw1143581582510B215. Amnesty International's January 2006 press release "China: Internet companies assist censorship" is at http://web.amnesty.org/library/Index/ENGASA170022006. Shi Tao's verdict is at the Reporters Without Borders website, http://www.rsf.org/article.php3?id_article=14884.
Tuesday, March 21, 2006
Retire the Space Shuttles Now
Last week, NASA announced that the same kind of fuel-sensor problem that delayed last summer's flight has cropped up again. Program managers decided this time to replace all four sensors with new ones, a process that will take three weeks and delay the next flight until sometime in July. It was originally planned for May. This is both good news and bad news.
The good news is that NASA managers are finally showing some conservatism in their approach to potentially catastrophic problems. The fuel sensors monitor fuel levels in the external tank, telling the engines to cut off before the liquid-hydrogen fuel runs out. If the fuel tank ran dry while the engines were still operating, the resulting oxygen-rich mixture could cause severe corrosion and damage to the engines. Under normal operation the sensors are not needed, but if two or more sensors gave a false "empty" reading, the resulting engine shutdown could force an emergency landing or even cause a crash. So NASA is showing wisdom in replacing all the sensors before attempting another launch.
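The fail-safe logic these sensors feed into is a classic m-of-n voting arrangement: one failed sensor should not command a cutoff, but agreement among two or more should. The sketch below is a generic illustration of that voting idea, not the actual shuttle avionics logic.

```python
# Hedged sketch of m-of-n sensor voting: with four low-level fuel sensors,
# tolerate a single spurious "dry" reading but act on two or more.

def cutoff_commanded(readings, threshold=2):
    """readings: list of booleans, True meaning a sensor reads 'dry'."""
    return sum(readings) >= threshold

# One bad sensor is tolerated; two dry readings command engine cutoff.
print(cutoff_commanded([True, False, False, False]))  # -> False
print(cutoff_commanded([True, True, False, False]))   # -> True
```

The trouble the column describes is the flip side of this design: a pair of faulty sensors can trigger a shutdown just as surely as a genuinely empty tank, which is why NASA chose to replace all four.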
The bad news is that once again, NASA is going into space with a flying antique. Major elements of the space shuttle design are now over thirty years old. NASA engineers routinely comb the web for surplus sales of outmoded electronic components to use for repairs on the shuttle. I own a pickup truck that was built in 1981, the year after the first shuttle flew. I still drive it around town, but I must confess I'm somewhat reluctant to take it on a 35-mile trip to Austin and back for fear of a breakdown or worse. Granted that the shuttle fleet has received a great deal more attention and refurbishing than my truck, the fact remains that for every year the existing shuttles are kept in operation, maintenance and operating costs rise and the chances of failure from a hitherto unexpected cause grow greater.
Every reliability engineer is familiar with the "bathtub curve" that shows rates of failure in a collection of components over time. Suppose you buy a thousand new light bulbs for a large institution such as a school or hospital, install them, and keep track of when they fail. A small number will blow out within a few hours of first being turned on. This is called "infant mortality" and is due to defects that did not show up at the factory's inspection. This is the downward-sloping end of the bathtub curve. Then for a long time, you will see a very low rate of failure, one or two every month, perhaps. This is the bottom of the bathtub. Finally, as the usual failure mechanisms start to act, the failure rate will rise toward the end of the rated lifetimes of the bulbs. This is the rising slope of the bathtub, and continues until virtually all the bulbs fail.
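The bathtub curve can be sketched numerically as the sum of three Weibull hazard rates: a falling infant-mortality term (shape less than 1), a flat random-failure term (shape equal to 1), and a rising wear-out term (shape greater than 1). The shape and scale parameters below are illustrative only, not data for any real component.

```python
# Bathtub hazard sketch: three superimposed Weibull hazard rates.
# h(t) = (k/s) * (t/s)**(k-1) is the standard Weibull hazard function.

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    return (weibull_hazard(t, 0.5, 200.0)     # infant mortality, falls with t
            + weibull_hazard(t, 1.0, 1000.0)  # constant random failures
            + weibull_hazard(t, 5.0, 900.0))  # wear-out, rises with t

for hours in (10, 100, 500, 1000):
    print(f"{hours:5d} h: hazard = {bathtub_hazard(hours):.5f} per hour")
```

The printed hazard rate starts high, sinks to a minimum in mid-life, and climbs again as wear-out dominates, tracing the two slopes and flat bottom of the bathtub.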
The shuttles have literally thousands of components, each with a particular lifetime. No doubt NASA reliability engineers have studied the problem extensively, and the fact that the remaining shuttles still work is mute testimony that the engineers have done something right. But as time goes on and numerous components are used far beyond their expected lifetimes, unusual and undocumented failure modes can start to show up. It's not normal for a car's wheel to fall off, but when I pushed the mileage of an old car past the 200,000 mile point, that's almost what happened. Every successful launch moves the shuttles closer to the next failure, and as time goes on, it will be harder and harder to predict what the failure might be. From an engineering perspective, the only sensible thing to do with such antiquated hardware is to retire it. But politics plays as much a role in what NASA does as engineering, if not more.
No one likes to kick an organization when it's down, so ironically, the 2003 Columbia disaster probably kept President Bush from doing the sensible thing and terminating the shuttle program in a timely way. But who knows how many more astronauts will die between now and 2010, when the program is scheduled to end?
Space is billed as the last great frontier, and no one pretends that space exploration is without its hazards. The Apollo program cost the lives of three astronauts in a 1967 launchpad fire. The accident investigation wrapped up in three months, the program continued, and we landed on the moon two years later. No great achievement is without risks, and the consensus at the time was that the risks were worth it.
No such consensus exists today. The primary mission for the shuttles these days is to support the international space station, which is itself an enterprise of dubious utility, plagued by cost overruns, equipment problems, and a signal lack of clarity in its goals and mission. Some continued human presence in space is probably worthwhile. But the numerous recent successes in privately funded space efforts indicate that private enterprise can do everything NASA is doing with the shuttle at less cost and more safely, if private firms are given some good ground rules and sufficient funding to make a fresh start. If the U. S. government had taken the same attitude toward air travel that it has taken toward manned space flight, we would still be watching a few highly trained NASA aeronauts fly across the Atlantic in single-engined Spirits of St. Louis, if that much. Shut down the shuttle, open up the field to private competition, and let the idealism of a new generation of space explorers come up with something that old institutions cannot even conceive.
Sources: For more details on the Shuttle's external tank, see http://en.wikipedia.org/wiki/Space_Shuttle_external_tank
Tuesday, March 14, 2006
BP Texas City Refinery Disaster: One Year Later
On March 23, 2005, some temporary workers at an oil refinery in Texas City, Texas, owned by BP (formerly British Petroleum) were just finishing lunch near the trailers that housed their offices when they saw a geyser of clear liquid spurting out the top of a steel tower only a few yards away. According to the Houston Chronicle, one of them cried into a radio, "God, I hope that's water." A few seconds later, a highly flammable pool of an intermediate product called raffinate spread throughout the area. Although the exact cause of ignition was never officially determined, some witnesses recalled that an idling diesel pickup truck suddenly sped up as if somebody had stepped on the gas. Then came the explosion.
It killed fifteen workers, injured 170, and wrecked acres of refinery equipment. In the year following, both the U. S. Chemical Safety and Hazard Investigation Board and BP carried out independent investigations, which reached similar conclusions. While the investigators found that outmoded and nonfunctional hardware contributed to the accident, the single most important cause was a culture of carelessness and bad management.
In a highly automated business such as oil refining, it is easy to look at the vast expanse of fractionating towers, pipes, flares, and tanks, and get the impression that such a system basically runs itself. But when you realize how many dangerous chemicals—corrosive, flammable, volatile—go through intense heat and pressure inside thousands of pipes and vessels, the amazing thing is that there are not major refinery accidents every day. More important than the visible structure of hardware, controls, and even the computer software that helps operators run the plant is the human structure of management, authority, will, energy, memory, obedience, and trust. As many industries mature, more and more is known about the physical and chemical processes involved. Computer models can predict even unexpected and dangerous behavior before two pipes are ever welded together to build an actual refining unit. This improved physical understanding can lull managers and operators into thinking that no thinking is required, or at least very little.
As with many accidents, a combination of relatively unlikely events and decisions conspired to bring about the tragedy of a year ago. First, a number of temporary trailers were placed inside the boundaries of the active plant, within a few yards of equipment that processed hazardous materials. If the plant had been treated like what it is—potentially, a bomb about to go off—these trailers would have been blocks away. Inconvenient, perhaps, for the workers who would have had to travel farther and get less done each day, but better than dying. Next, operators tried to restart a unit that had been down for maintenance without clearing the area. Starting up and shutting down chemical-plant processes are much more dangerous than periods of smooth operation, and more things are likely to go wrong. A fractionating tower that should have been filled to a depth of only about six feet instead filled to a height of over a hundred feet with flammable raffinate. Malfunctioning and nonfunctioning instruments misled the operators into thinking the levels were normal. When they realized there was too much hot raffinate in the tower and tried to drain it away, the corrective action one worker took actually made things worse: the hot material drained from the bottom passed through a heat exchanger that transferred its heat back into the tower, causing both the tower and an auxiliary "blowdown" stack to overflow. This produced the geyser that a worker prayed was water.
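The instrument failure at the heart of this sequence can be illustrated with a toy Python sketch. The numbers and behavior here are invented for illustration, not taken from the actual plant: the point is simply that a level transmitter that only spans the bottom few feet of a tall vessel will keep reporting a plausible, in-range value even as the vessel overfills far beyond what the instrument can see.

```python
def simulate_startup(feed_ft_per_hr, drain_ft_per_hr, hours, sensor_max_ft=9.0):
    """Toy model of liquid level in a tower whose level transmitter
    only spans the bottom sensor_max_ft feet of the vessel."""
    actual = 0.0
    log = []
    for _ in range(hours):
        # Net accumulation each hour: feed in minus liquid drained out.
        actual = max(actual + feed_ft_per_hr - drain_ft_per_hr, 0.0)
        # The transmitter saturates at the top of its span, so any level
        # above its range still produces a normal-looking reading.
        indicated = min(actual, sensor_max_ft)
        log.append((actual, indicated))
    return log

# Feed exceeds drainage for ten hours: the tower overfills,
# but the indicated level never climbs past the sensor's span.
log = simulate_startup(feed_ft_per_hr=12.0, drain_ft_per_hr=2.0, hours=10)
actual, indicated = log[-1]
print(f"actual level: {actual:.0f} ft, indicated: {indicated:.0f} ft")
```

Running the sketch shows an actual level of 100 feet against an indicated level of 9 feet: the operators' displays look routine while the vessel is already dangerously full, which is the kind of gap between indication and reality described above.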
BP has paid for this accident in several ways. The entire plant was shut down for months, the U. S. Occupational Safety and Health Administration levied a $21 million fine against the company (which it paid without admitting the correctness of the charges), and numerous lawsuits arising from the accident continue. But wouldn't it be better if, before a tragedy like this happens, enough pressure could be brought to bear on an organization to make it mend its ways?
The Internet may be one way this can happen. I would be very interested to hear from anyone who has had experience with the BP accident (directly or indirectly), or who can share factual insights about it and suggest ways to keep the next major refinery accident from happening. You can respond to this posting by clicking on the comments link below. I hope to hear from you!
Sources: A more detailed summary of the incidents leading up to this disaster is available at the U. S. Chemical Safety and Hazard Investigation Board website, complete with a narrated video simulation of the incidents and the vapor and pressure waves resulting from the explosion. BP has also posted its completed investigation report at www.bpresponse.org.
Saturday, March 11, 2006
Welcome
This is a forum for discussion of current issues in engineering ethics and current events that have an engineering ethics angle. Historian of technology Henry Petroski has said that engineers often learn more from failures than from successes. My hope for this forum is that it will serve as a rapid way for knowledgeable people to exchange factual information and insights about matters such as:
--- Consumer safety issues
--- Disasters and accidents involving engineered products or systems
--- Hazards that need attention drawn to them
--- Official statements concerning controversial issues that involve engineering ethics
--- Ways engineers can learn from past mistakes and problems
Each week I plan to post a brief commentary on a news item related to engineering ethics. I invite you, the reader, to respond, especially if you have technical or other knowledge that will add to public understanding of the issue at hand. If readership allows, I may add other features such as ongoing discussion threads and an FAQ section. This forum will be successful if it attracts the attention of thoughtful, knowledgeable individuals who can contribute to the better understanding of how engineers can do the right thing, as well as how they can do things right.