Monday, May 29, 2006

Model Railroading: Coming to Your Town in a Big Way

A friend of mine is an avid model railroader. He has spent countless hours assembling intricate scale-model railroad cars and locomotives, constructing miles of model track, and attending meets where dozens of his fellow enthusiasts put together entire scale-model counties of rail routes through scenic landscapes and busy towns. The remote controls for these toys have grown increasingly sophisticated with time as well, right down to digitally produced engine noises. The only people who may resent the time and energy spent on such a harmless hobby are the wives thus deprived of their husbands' time (and husbands, if any women pursue this avocation, though I know of none). But a parallel development—the remote control of real railroad locomotives with no one on board—is stirring up considerable controversy.

Since the decline of passenger rail transportation in the U. S. in the last half of the twentieth century, the U. S. rail system has faded into the background of public consciousness. But the freight operations that rail lines support have actually become more critical than ever to the country's economy. Nearly all the coal that fuels our coal-fired power plants (which generate about half of the nation's electricity) is carried by rail, along with numerous other bulk materials such as gravel, cement, chemicals, and food products, not to mention imported merchandise, automobiles, and so on. Since very few additional rail lines are being built, the railroad industry is searching for ways to put more and more freight through a physically limited system. And one of these ways involves remote control of unmanned locomotives.

An article in the May 28 issue of the Austin American-Statesman describes how this works. An operator who has completed an 80-hour training course stands by a track on which a remote-control locomotive sits. Strapped to his chest is a box sprouting joysticks, crank knobs, and a stubby antenna, rather like an overgrown model-airplane radio-control unit. With this remote control system, the operator can perform most of the operations that the engineer in the cab can do, only without any engineer in the cab. If radio control is lost for any reason, the system automatically stops the train.
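The automatic-stop behavior described above is essentially a watchdog (or "dead-man") timer: as long as valid radio commands keep arriving, the train runs; the moment they stop, the system defaults to braking. A minimal sketch of the idea in Python follows; the class name and timeout value are illustrative assumptions, not details of any actual locomotive control system:

```python
import time

class RemoteLocomotiveWatchdog:
    """Toy model of a watchdog fail-safe: if no valid radio command
    arrives within timeout_s seconds, apply the brakes."""

    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.last_command_time = time.monotonic()
        self.braking = False

    def receive_command(self):
        """Called whenever a valid radio packet is received;
        resets the timer and releases any fail-safe braking."""
        self.last_command_time = time.monotonic()
        self.braking = False

    def check(self):
        """Called periodically by the control loop; returns True once
        the fail-safe has tripped and the train is braking."""
        if time.monotonic() - self.last_command_time > self.timeout_s:
            self.braking = True
        return self.braking
```

The important design property is that safety is the default: doing nothing (losing the radio link, a dead battery in the control box) leads to a stopped train, not a runaway.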

Most of these systems are being used in switchyards, where the relatively short range of the radio transmitter is not a problem. But recently, some lines have been experimenting with using the system to send trains to nearby industrial sites for short hauls.

Safety is an obvious concern. If there is nobody in the cab, how can the operator stop the train if an obstruction unexpectedly shows up? Unfortunately, stopping a train is not an instantaneous act. Depending on speed and size, it can take a mile or more to stop a train even under emergency conditions. The engineers who designed the remote-control systems have presumably taken these factors into consideration, but as with many technologies, how it is used has a lot to do with how safe it is.
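A back-of-the-envelope kinematics check makes the mile figure plausible. Assuming constant deceleration, the stopping distance is d = v²/(2a); the deceleration used below is an assumed round number for a heavy freight train in emergency braking, not a published figure:

```python
def stopping_distance_m(speed_mph, decel_mps2):
    """Constant-deceleration stopping distance, d = v^2 / (2a)."""
    v = speed_mph * 0.44704  # convert mph to m/s
    return v * v / (2.0 * decel_mps2)

# A loaded freight train at 55 mph, with an assumed emergency
# deceleration of about 0.15 m/s^2:
d = stopping_distance_m(55, 0.15)
print(round(d))  # roughly 2000 m, i.e. well over a mile
```

Even if the true deceleration is somewhat higher, the answer stays in the range of a mile or more, which is why "just stop the train" is never an instantaneous remedy.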

Railroads are one of the most highly unionized industries in America, and opinions among the unions about the new technology are divided. The Brotherhood of Locomotive Engineers' feelings about the matter are clear from their main website, which shows a tipped-over railway engine with the legend "Remote Control" plastered across it. Since a locomotive running without an engineer represents direct job loss, their concern is understandable. They are, in the colloquial phrase, "agin it," and have commissioned a report that criticizes wider adoption of the technology before better operating rules are put in place. Numerous attempts by the BLE to slow the technology through strikes or other means have been blocked by federal judges.

On the other hand, the United Transportation Union, which represents conductors and switchmen, has come out, after some waffling, in favor of limited use of the technology. The Federal Railroad Administration, for its part, has studied the issue and allowed limited experimentation as long as the operators (generally switchmen) have received an 80-hour training course. This annoys the railway engineers, who have to take a six-month-long course and pass tests to qualify for their jobs.

What about accidents? There have not been many serious accidents reported as yet, possibly because the technology is so new: a few derailments and three fatalities, but no major accidents with multiple loss of life. It is not clear how far the rail lines wish to go with remote-control locomotives. It is easy to imagine a single model-railroad-style system the size of the U. S. with thousands of trains running completely under computer control. Even now, locomotive engineers are like airline pilots in that they do what centralized traffic-control operators tell them to do via microwave radio links from a few control centers that continuously monitor train positions and movements. So replacing the engineers with "robotic" control would not be as great a change as you might think. What the people on the train supply now, of course, is eyes, ears, and hands to do the great variety of things that computers and robots cannot yet do. Some of these things are related to safety and some are not.

So it will be some time before the average train you see trundling across a grade crossing while you wait in your car will be nothing but a pile of steel and cargo, bereft of any human presence. If the Brotherhood of Locomotive Engineers has its way, it will never happen. On the other hand, remote control may spread gradually until some big disaster occurs with a remotely-controlled locomotive, which might energize legislators to prohibit the practice altogether. In the meantime, you might visit the next model-railroaders meet in your town to see what the future of real railroading may be like.

Sources: The Federal Railroad Administration has a statement "Remote Control Locomotive Operations" at http://www.fra.dot.gov/us/content/94. The website http://www.labornotes.org/archives/2003/08/b.html has an article "Rail Workers Battle Unsafe Remote Control Technology" written by Ron Hume. The Brotherhood of Locomotive Engineers website has an article "BLET releases remote control hazard study" at http://www.ble.org/pr/news/newsflash.asp?id=4156.

Thursday, May 25, 2006

Engineering Laptop Data Security, or, 26.5 Million Veterans Can't Be Wrong

On Monday, May 22, we learned that some time in the preceding three weeks, a burglar broke into the house of a mid-level analyst in the Department of Veterans Affairs in Washington, D. C. Among the items missing the next day was the employee's laptop computer. That by itself is not news—laptops are stolen every day. But the thing that motivated Veterans Affairs Secretary Jim Nicholson to announce the theft to the news media was the fact that on that laptop's hard drive were the names, Social Security numbers, and other personal information belonging to over 26 million veterans.

It is not hard to imagine what someone with the scruples of a burglar could do with that information. We can only hope that the miscreant does not read the newspapers, watch TV news, or download news podcasts, and that he fenced the machine to someone who will divest it of all identifying indications, including the hard drive data. But the very small chance that a very big problem will occur is still a very big problem. And since Social Security numbers last for the lifetime of their owners, the concern that one of those veterans will be a victim of identity theft may not go away unless the machine is recovered with proof that the data wasn't copied. This happy eventuality is, to say the least, unlikely.

As it does in many other areas, the advance of technology has blurred the distinction between two groups of people who formerly had very different responsibilities. Back in the 1970s when it took a roomful of refrigerator-size tape drives to store twenty-seven million personal records, there were only a handful of people in any given organization who had the technical ability to manipulate or copy the information. The computer-science specialists who designed, operated, and maintained the systems were generally aware of their special responsibilities that came with the power to work with personal data. Besides which, a putative thief would have had to bring a small loading van along to steal such a large amount of data. Although data theft and identity theft have been a problem at some level since the earliest days of computers, the sheer bulk and awkwardness of large amounts of data, and the relatively scarce and highly secure computer rooms in which they were housed, meant that such a theft had to be carefully planned and executed like a bank or payroll heist. For the average non-technical user of such information, the most data handled at once was contained in a bulky folder of green-and-white-striped computer paper, which nobody wanted to carry out of the office anyway. So computer security was an issue mainly for those few specialists who dealt directly with mainframe computers, and the rest of us scarcely knew it existed.

No longer. Because of the democratization of technology we now enjoy, most laptops sold today with 100 GB hard drives can hold the digital equivalent of all the printed contents of a small town's public library. The size of digital storage has changed, but the responsibilities are still the same. Every person who is in charge of a laptop with sensitive information on it has the same moral obligations as those (now retired) computer operators in the glass-walled computer rooms of yore. But in these days of high-pressure work and high-speed internet connections at home, what is more natural than to throw the laptop in the car and finish that special project in the evening just this once, even though you seem to recall some office rule against taking work home? That is just what the anonymous Veterans Affairs employee did, and now look what's happened.

There are technological fixes for this technological problem, of course. A ten-second Google search turns up companies such as Eracom Technologies, which offers a variety of data encryption methods for servers, desktops, and laptops. The idea is that the authorized user types in a special password, and for extra security plugs in a special module to enable the laptop to boot up. Once the computer is satisfied that it is being used by the right person, it acts just like a normal computer. But all the data on the hard drive is actually encrypted with advanced techniques and decrypted as needed. Were a thief to steal the unit, he or she would be unable to start the machine. Even if the hard drive were removed and copied, the result would be nonsense.
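Eracom's actual products are proprietary, but the general pattern described above—stretch a password into a strong key, then encrypt everything on the disk with it—can be sketched in a few lines of Python. This is a deliberately simplified toy built only on the standard library; real products use vetted ciphers such as AES, so treat every function here as illustrative, not as usable disk encryption:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, length: int = 32) -> bytes:
    """Stretch a password into a key with PBKDF2 (stdlib hashlib).
    Many iterations make brute-force password guessing expensive."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                               100_000, dklen=length)

def xor_keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. For illustration only;
    real products use vetted ciphers such as AES."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce +
                              counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(plaintext: bytes, password: str) -> bytes:
    """Return salt + nonce + ciphertext; useless without the password."""
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(password, salt)
    ks = xor_keystream(key, nonce, len(plaintext))
    return salt + nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(blob: bytes, password: str) -> bytes:
    """Re-derive the key from the stored salt and undo the XOR."""
    salt, nonce, ct = blob[:16], blob[16:32], blob[32:]
    key = derive_key(password, salt)
    ks = xor_keystream(key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))
```

The point of the sketch is the threat model: a thief who copies the drive gets only the salt, nonce, and ciphertext, none of which helps without the password—which is exactly why taping the password to the keyboard defeats the whole scheme.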

Of course, Eracom doesn't give this technology away for free. I don't know what it costs, but it must be considerably less than the cost of a laptop, and they probably give quantity discounts for large organizations such as the U. S. Department of Veterans Affairs. But even advanced security technology like this can be thwarted if the user does something dumb, like writing the password on a note taped to the keyboard, or keeping the special unlocking module in the same bag with the computer. As an engineer told me recently, he tries to design systems that are foolproof, but doesn't bother to make them "damn-fool proof."

If a pattern of identity theft matching the stolen records does not emerge soon, our returning soldiers may not have to worry about the consequences of this particular laptop burglary. After all, they have seen and dealt with a lot bigger problems than this one. The rest of us, especially those who have any kind of sensitive data that we carry around in laptops, Blackberries, or data storage devices, should think twice before we take it out of a secure area. And ask what your organization does in case such data is stolen. If the answer isn't satisfactory, maybe someone should invest in a little added security. But all the data-security technology in the world cannot substitute for simply being careful.

Sources: The theft and Jim Nicholson's news conference announcing it were widely reported in the national press. Information on encrypting hard-drive data is available at such sites as http://www.eracom-tech.com/hard_disk_encryption.0.html.

Thursday, May 18, 2006

Engineering Privacy in the Computer Age

The Association for Computing Machinery (ACM) is the world's leading society for computer professionals. Founded in 1947, it is for professionals involved in information technology what the American Medical Association is for U. S. doctors. Prominently displayed on the ACM's website is a lengthy Code of Ethics, which includes the following words about privacy rights:

"Computing and communication technology enables the collection and exchange of personal information on a scale unprecedented in the history of civilization. Thus there is increased potential for violating the privacy of individuals and groups. . . . It is the responsibility of professionals to maintain the privacy and integrity of data describing individuals."

So far, so good. Few will argue that the ubiquity of computers has made it possible to collect, analyze, or steal unimaginable amounts of highly personal information. But the code doesn't simply stop with a call to maintain privacy. It goes into further detail:

". . . This imperative implies that only the necessary amount of personal information be collected in a system, that retention and disposal periods for that information be clearly defined and enforced, and that personal information gathered for a specific purpose not be used for other purposes without consent of the individual(s). These principles apply to electronic communications, including electronic mail, and prohibit procedures that capture or monitor electronic user data, including messages, without the permission of users . . . ."

President Bush has been in hot water this week after a report in USA Today that the National Security Agency has been collecting the phone call records of millions of Americans. One phone company after another has denied providing such information. While it is perhaps too early to decide the truth about the matter, the record of numbers dialed and calls received is something that most citizens would regard as personal information.

On the other hand, we have all seen TV shows in which the dialing records of a criminal suspect have provided important clues to the solution of a crime. Phone taps, call records, and traces have been a part of domestic law enforcement for decades. And of course, computers are involved in nearly all electronic communications of any description these days. How do the computer professionals deal with these cases? Here's how:

"User data observed during the normal duties of system operation and maintenance must be treated with strictest confidentiality, except in cases where it is evidence for the violation of law, organizational regulations, or this Code. In these cases, the nature or contents of that information must be disclosed only to proper authorities."

So, at least according to the ACM Code of Ethics, information such as call records should be disclosed to the "proper authorities" (e. g. the NSA) only when the user data is evidence for the violation of (1) law, (2) "organizational regulations," or (3) the Code itself. We can presume that neither the ACM Code of Ethics nor the phone companies' internal regulations were the inspiration for the NSA's activities. So it seems that an ACM member in good standing could participate in such an activity only if the records obtained were evidence for the violation of law.

That's a pretty narrow scope. Somehow I doubt that the phone records of all Americans, or even a substantial fraction of all Americans, constitute evidence for the violation of law. Maybe some of them do, but that is why most phone tap, trace, and call record requests are made by law enforcement officials only for specific individuals who are already under suspicion. If anything like the reported wholesale phone-record transfer took place, those members of the ACM who participated in it are under a cloud ethically, to say the least.

Some days it seems like the great internet-website-phone-fax-TV-MP3-instant message-chatroom behemoth runs on its own without human intervention of any kind. But there are people behind all the systems, and people make the decisions that protect or violate your privacy. Just the other day, I learned that the operator of the website at my church (!) has a way to tell if particular viewers bookmark the site. When I heard this, I had a chilling vision of some invisible guy looking over my shoulder as I sat in front of my computer in my supposedly private room at home. So far, no harm that I know of has come to me because people I don't know and will never meet can tell which websites I bookmark. But it may have something to do with the fact that even after we signed up for the national do-not-call list, I keep getting phone calls right at suppertime from organizations I could swear I have never had any dealings with. But maybe if bookmarking a website counts as a "dealing," this gives them the right to call me. Who knows?

The truth will eventually emerge about the NSA and national calling records. Laws always lag behind rapidly advancing technologies, and a certain amount of confusion and injustice results. But at some point, if things get too out of hand, the legal system may overreact with burdensome regulations that in some cases are worse than the disease they were designed to cure. The best protection against such an outcome is for everyone, especially members of the Association for Computing Machinery, to abide by sound ethical principles and every so often ask, "If I were on the receiving end of this, would it bother me?"

Sources: The Association for Computing Machinery's Code of Ethics is at http://www.acm.org/serving/se/code.htm.

Tuesday, May 09, 2006

Mobile Phones on Airplanes: Too Soon to Talk?

To some airline passengers, a mobile phone is God's gift to air travel. You can see how eagerly they relieve the boredom of watching other passengers struggle into their seats by chatting with friends and relatives until the last possible second—and sometimes longer. I've watched the test of wills as a flight attendant stood by an oblivious businessman who simply would not put up his phone until she repeated her request three times and threatened to delay the flight for everybody. And it sometimes looks like a contest to see who can whip out their phone and make the first call after the announcement that it's okay to use phones again after landing. Clearly, people would like to use their mobile phones all the time, not just on the ground. Possibly in view of this fact, the Federal Communications Commission has announced that it is considering whether to lift the restriction on in-flight mobile phone calls. So is there anything to the notion that electronic devices such as mobile phones can seriously affect the avionics of a modern jet aircraft? Or is it just a silly bureaucratic exhibition of meaningless power without foundation in fact?

Surprisingly little research has been done into whether people actually use mobile phones on plane flights, and whether such use can interfere with navigation or communication systems. In the March 2006 issue of the magazine IEEE Spectrum, a publication for professional electronic engineers, researchers at Carnegie Mellon University reported the findings of a three-month investigation in which they placed a radio-wave "sniffer" on board numerous commercial flights. This instrument package was designed to receive and record radio emissions in the frequencies used by mobile phones. After the equipment flew in the overhead luggage rack on 37 different commercial flights, the data was downloaded and analyzed.

It turned out that on average, at least one person on every flight, and sometimes several people, made one or more mobile phone calls at times that clearly violated FAA and airline rules. While none of the planes in the study crashed or reported any harmful interference with avionics, the researchers found from independent data collected by NASA that there have been over seventy incidents in which portable electronic devices on board a plane have interfered with aircraft systems. The increasing use of global positioning system (GPS) navigation tools makes newer avionics even more vulnerable to interference than in the past, since GPS relies on receiving weak satellite signals that can disappear under interference from onboard phones, laptops, or other unauthorized electronics. While the Carnegie Mellon study does not cite a particular plane crash as being caused by interference from portable electronic devices, it implies that interference may have contributed to crashes in the past, given what we now know about mobile phone use on airliners.

Based on the results of their study, the researchers made several recommendations. A total ban on mobile phones in airplanes was not one of them. One of their most innovative proposals is to equip flight crews with a hand-held version of their "sniffer." This could be made as small as a pager and could be slipped into a pocket. At the same time that the flight attendant offers coffee, tea, or snacks, he or she could be patrolling the aisles for illicit mobile-phone use. Simply warning passengers that any mobile phone use can be detected in this way would probably go far toward discouraging the practice.

Other recommendations include better coordination between the Federal Aviation Administration, in charge of airline safety, and the Federal Communications Commission, in charge of the airwaves. Also, the NASA program that accumulated data about airline safety problems has had its budget cut in recent years, and the researchers called for its funding to be restored. All of these ideas are good ones, but unless politicians, industry representatives, and regulators take action, things may go on as they are until a tragedy occurs.

Tragedies are, unfortunately, great motivators for regulators and politicians to do something. The trouble with the interference problem in this regard is that, unlike a broken turbine blade or other physical cause, radio interference leaves little or no trace of itself after a crash. Even if a crash was caused by interference that produced a false reading from a GPS display, discovering this cause after the fact would be difficult or impossible without much better in-flight data recording than we now have.

So this is one problem that may be difficult to fix technologically. Of course, if everybody followed the rules, it would disappear. And here is one instance where you, the individual airline passenger, can do something. Not only can you refrain from using your mobile phone during prohibited parts of the flight, but if you see someone else doing it, you might try speaking to them about it. The life you save may be your own!

Sources: An online version of the March 2006 IEEE Spectrum article, "Unsafe at Any Airspeed," is at http://www.spectrum.ieee.org/mar06/3069.

Tuesday, May 02, 2006

Engineering the Distracted Driver

On the afternoon of June 19, 1999, Bryan Smith was driving along Maine's Route 5 in the White Mountains near the New Hampshire border. His Rottweiler was with him in the back of his Dodge Caravan. The dog did something that caused Smith to turn around to see what was the matter. While his attention was diverted from the roadway in front of him, his vehicle hit an object on the edge of the road. When Smith stopped the car to see what he'd hit, he found that it was famed author Stephen King, who subsequently underwent five operations for the injuries he sustained. Smith was not intoxicated or speeding. The only thing that kept him from seeing King in time to avoid the collision was the distraction caused by his dog.

While this is probably the most famous recent automotive accident involving a distracted driver, recent research by the Virginia Tech Transportation Institute indicates that it was the tip of an iceberg that is much larger than we thought. Using high-tech instrumentation such as Doppler radars, accelerometers, and five channels of compressed video to provide a second-by-second record of over two million miles of driving, the Virginia Tech researchers analyzed events leading up to over 60 crashes documented during the study of 100 instrumented cars and their drivers. The researchers were surprised to find that driver inattention was a factor in nearly four out of five crashes. This category includes fatigue and glancing away from the forward roadway for any reason. The most common cause of driver inattention was found to be "wireless devices," which includes cellphones, although other passengers, radios, and CD players were also implicated. Further information on the study can be found at the website of the sponsoring agency, the National Highway Traffic Safety Administration, at http://www-nrd.nhtsa.dot.gov/departments/nrd-13/newDriverDistraction.html.

Over 43,000 people die in U. S. auto crashes every year. In the hierarchy of things to be concerned about in engineering ethics, death is at the top. Any innovation that leads to increasing fatalities needs to be scrutinized thoroughly. From a system point of view, however, the things people do in their cars are almost uncharted territory, as the Virginia Tech research shows.

Consider a typical Saturday-morning outing for a mother and her children. Their vehicle may contain a built-in GPS navigation system, a satellite radio, a conventional radio, a CD player, and air-conditioning controls, all of which need attention at various times. She may be carrying her own cellphone and Blackberry, and her children may be watching a DVD on a player in the back seat, in addition to carrying their own phones. All of these pieces of equipment were designed without the knowledge that driver inattention is apparently a factor in almost four out of five crashes. The timing and usage of all these devices is left entirely up to the owners and operators, whose last drivers' ed course might have been two decades ago, if ever. The wonder is that anybody can drive more than a couple of miles amid such electronic chaos without hitting something.

This kind of problem has been faced before by the military, whose interest in giving fighter pilots the information they need without unduly distracting them is truly a life-or-death matter. A fighter-plane cockpit is a highly coordinated and uniform environment in which pilots know exactly what to expect, and where instruments and visual cues are placed with careful attention to their effects on the ability of the pilot to perform his job quickly and without needless fumbling.

I don't propose that we hand over control of everyone's car interior to the Department of Defense. But at some point we need to recognize that the original purpose of the automobile driver's seat—to provide a place where the operator can devote his or her full attention to the demanding task of controlling a potentially fatal piece of equipment moving at high speed—is becoming lost in the proliferation of options, gadgets, and distractions that most state driving laws permit. The one exception I am aware of is a law in most states that prohibits the operation of a television screen within the driver's line of vision. But watching TV while driving would be safer than trying to operate some of the latest digital gizmos with their multiple menus and tiny display screens.

Laws almost always lag behind technology, and with good reason. Unless a new technology poses a "clear and present danger," it is best to let enough history accumulate to allow a reasoned judgment based on sufficient evidence. The evidence of driver inattention has been long in coming, but it has now arrived. Engineers need to consider safety ideas that are out of the conventional boxes with regard to technologies used in automobiles. For example, it is technically feasible, given enough standards and agreements, to devise an interlock system that makes all controls for non-essential electronics (GPS, cellphones, etc.) inoperable while the car is in motion. If everyone had to stop or pull off to the side of the road to make a phone call or read a map, would the world come to an end? No. Time was when nobody could make phone calls from cars at all, and somehow people survived.
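The interlock idea is simple enough to state in code. Here is a sketch, with an invented list of control names and the assumed rule that anything non-essential is locked out whenever the vehicle is moving:

```python
# Controls assumed non-essential to driving; the names are invented
# for illustration, not taken from any actual vehicle standard.
NON_ESSENTIAL = {"gps_menu", "cellphone", "dvd_player", "radio_tuning"}

def control_enabled(control: str, speed_mph: float) -> bool:
    """Return True if the named control may be operated right now.
    Essential controls (wipers, lights, horn, and so on) always work;
    non-essential ones are locked out while the car is moving."""
    return not (control in NON_ESSENTIAL and speed_mph > 0)

print(control_enabled("cellphone", 35.0))  # False: locked out while moving
print(control_enabled("cellphone", 0.0))   # True: usable when stopped
```

The hard part, of course, is not this logic but the standards and agreements needed so that every aftermarket gadget, not just the factory-installed ones, respects the interlock.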

This isn't necessarily a call for regulation. The people with the greatest financial interest at stake in automotive safety are the insurance companies. What if they offered deep discounts for people who drove interlock-equipped cars? The automakers know that safety sells to a certain segment of consumers, primarily those with young families. Enough clever people working on this problem could come up with solutions that would not require drastic laws and would end up making the highways safer, and probably the electronics easier to operate too. The evidence is in. Now it's time to do something about it.

In the meantime, I suggest adopting the "two-second rule." The 100-car study found that short glances away from the roadway, especially for environmental checks like looking at one's rear-view mirror, were not risky as long as they took less than two seconds. But taking your eyes away from front and center for any longer than that led to increased chances of a wreck. So look away if you must, but not for longer than two seconds if you can avoid it.

Sources: The National Highway Traffic Safety Administration has more information on the Virginia Tech study "The Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-Car Naturalistic Driving Study Data" at http://www-nrd.nhtsa.dot.gov/departments/nrd-13/newDriverDistraction.html. The biographical information on Stephen King is from the Wikipedia article on King, http://en.wikipedia.org/wiki/Stephen_King.

Monday, April 24, 2006

Nuclear Power Reconsidered

Twenty years ago this week, a late-night experiment at an obscure nuclear power plant in the former Soviet Union turned into the worst nuclear accident in history. During the early morning hours of April 26, 1986, operators at the graphite-core plant in Chernobyl, some eighty miles north of the Ukrainian capital of Kiev, violated numerous regulations and disabled safety mechanisms during an ill-considered reactor test. The reactor blew apart and the graphite (carbon) core caught fire like a giant nuclear barbecue pit, sending radioactive smoke into the atmosphere. The accident was compounded by the criminally slow response of the Soviet government, which first attempted to cover up the incident. When Scandinavian nations detected abnormal levels of airborne radioactivity and started asking questions, the USSR reluctantly admitted there was a problem, but not before thousands of people living near the plant had been exposed to dangerous levels of radioactivity.

An Associated Press story by Mara D. Bellaby published this week recounts estimates of the total number of fatalities and illnesses caused by the accident. Not as many people died from Chernobyl as was originally feared. Eventually the government got around to evacuating some 116,000 people who lived within twenty miles of the plant. Official reports released by United Nations agencies recently say that only 50 people have died so far as a direct result of radiation poisoning traceable to the accident. Surprisingly, this includes those who fought the fire in the first hours of the accident and who were exposed to the most intense levels of radiation. The most significant problem in the general public has turned out to be a sharp increase in thyroid cancer among young people. Since radioactive iodine is taken up preferentially by the thyroid in children and adolescents, this increase was expected. Careful screening for early signs of thyroid cancer and prompt treatment have cured nearly all of those who contracted the disease, according to the reports. So if the world's worst nuclear accident caused only 50 deaths, why is it that no new nuclear power plants have been ordered in the United States since 1978?

The last nuclear plant to be completed in this country was finished in 1996. The nearly twenty-year span between those two dates alone gives you some idea as to why utilities are reluctant to order nuclear plants. For a variety of reasons, many of them good, the nuclear power industry in the U. S. is hedged with an incredible number of regulations, permit processes, and controls from overlapping Federal, state, and local jurisdictions. Our own worst nuclear-plant disaster, Three Mile Island, happened in Pennsylvania in 1979, and compares to Chernobyl as a fender-bender compares to a bus full of children tumbling down a mountain. Nevertheless, it was serious enough to create political turmoil that effectively shut down the nuclear power construction industry in this country. There are still U. S. companies that make nuclear plants—they just don't sell them here.

As a consequence, the increased demand for electricity in the U. S. has been met since the 1980s largely by more coal-fired plants, with a small but significant amount contributed by renewable sources such as wind power. There are many good reasons to oppose nuclear power: the problem of what to do with the highly hazardous wastes created by plant operation, the danger of nuclear proliferation to unstable countries, and the "yuck factor" that some people will always feel about a technology that is associated with nuclear weapons. But assuming that the nation's use of electric energy is not going to decrease in absolute terms any time soon, the power has to come from somewhere, and in recent years that has mostly meant coal. And opponents of greenhouse-gas emissions, many of whom also oppose nuclear power, know (or should know) that you can't burn coal without making carbon dioxide, which is the greenhouse gas of most concern. Nuclear power, whatever its other drawbacks, produces virtually no greenhouse gases, which is one reason that even "greens" have been giving it a second glance lately.
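The chemistry behind that claim can be put in rough numbers. Burning carbon is C + O2 → CO2, so every 12 grams of carbon becomes about 44 grams of carbon dioxide. A minimal sketch follows; the 70% carbon content is a hypothetical illustrative figure, since real coals vary widely:

```python
# Stoichiometry of coal combustion: C + O2 -> CO2.
# CO2's molar mass (about 44 g/mol) is carbon's (about 12 g/mol)
# plus two oxygens (2 x 16 g/mol), so each kilogram of carbon
# burned yields roughly 44/12 = 3.7 kg of CO2.

ATOMIC_MASS_C = 12.011   # g/mol
MOLAR_MASS_CO2 = 44.009  # g/mol

def co2_from_coal(coal_kg, carbon_fraction=0.70):
    """Kilograms of CO2 released by burning coal_kg of coal.

    carbon_fraction is the coal's carbon content by mass; 0.70 is an
    illustrative assumption (real coals range from roughly 0.6 to 0.9).
    """
    carbon_kg = coal_kg * carbon_fraction
    return carbon_kg * MOLAR_MASS_CO2 / ATOMIC_MASS_C

# One metric ton of this hypothetical coal yields about 2.6 tons of CO2:
print(round(co2_from_coal(1000.0)))  # → 2565
```

The exact figure depends on the coal, but the 44-to-12 ratio is fixed by chemistry, which is the point: the carbon dioxide is not an incidental impurity that can be filtered out—it is what you get when the fuel burns.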

Some countries such as France never abandoned nuclear power. France's example shows that given a moderate, stable regulatory environment and good engineering, nuclear power can be a safe and reliable source of electricity, leaving aside the question of wastes. Still, it is not at all clear that the nuclear industry will ever be able to build substantial numbers of new plants in the U. S. The new free-enterprise model of partially deregulated utilities makes it even more risky to plan a long-term capital investment such as a nuclear plant, which sucks in millions of dollars for years before even starting to produce revenue. So if we can't build new nuclear plants, and we don't want to contribute to global warming by building new fossil-fueled plants burning coal, oil, or natural gas, where will the energy come from?

Radical conservation combined with renewable and distributed energy generation is one possible answer. Here and there, enterprising architects have built houses and even commercial buildings whose net use of externally supplied energy in the form of electricity or natural gas is only a small fraction of what typical construction uses. The drawback, of course, is that it takes expensive custom engineering and materials to achieve these radical savings, and in the current economic environment there is no incentive to do these things. Perhaps some radical economic experimentation is in order here. If large tax breaks or even subsidies were provided for buildings whose energy usage was, say, 50% or less of the average level, the outlay could be regarded as a loan: the country as a whole would be repaid over time, since lower energy usage is a net gain in an economy where energy is costly. A whole raft of vested interests would first have to be placated, but that is what politics is for.
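The subsidy-as-loan argument can be made concrete with a back-of-the-envelope payback calculation. All the dollar figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
def simple_payback_years(subsidy, annual_energy_cost, savings_fraction):
    """Years for energy savings to 'repay' an up-front subsidy.

    subsidy            -- one-time public outlay per building (dollars)
    annual_energy_cost -- what a conventional building spends per year
    savings_fraction   -- fraction of that cost the efficient design avoids
    """
    annual_savings = annual_energy_cost * savings_fraction
    return subsidy / annual_savings

# A hypothetical $20,000 subsidy on a building that normally spends
# $4,000 a year on energy, cut by the 50% figure suggested above:
print(simple_payback_years(20_000, 4_000, 0.50))  # → 10.0
```

On those made-up numbers the "loan" comes back in a decade. Whether real construction premiums and energy prices work out so neatly is exactly the question the proposed economic experimentation would answer.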

As the aftermath of Chernobyl has proved, our worst fears in some areas sometimes turn out to be not as bad as we thought. But before we in the U. S. go nuclear in a big way, we have time to consider other options.

Sources: An article by Mara Bellaby similar to the one carried in the Austin American-Statesman is at http://www.newsobserver.com/104/story/431637.html.

Wednesday, April 19, 2006

Patent or Blackmail?

Here is a list of some of the great human achievements of the past five hundred years: the Scientific Revolution, the Industrial Revolution, the patent system . . . . What's that last one doing there? Historians of technology rightly regard the development of patent law as one of the most significant intellectual innovations of the early modern period. Beginning in Renaissance Europe and spreading to America, the idea that an inventor's rights to make and sell his invention should be protected by law for a limited period encouraged innovation while ensuring that the rights of the general public would also be protected from monopolies of indefinite lifetime. Engineers, whose ideas form the basis of many patents, should be interested to know that the present U. S. patent system is being gamed in a major way, to the detriment of nearly all concerned.

The most recent example of this concerns the firm Research in Motion, which makes the popular Blackberry wireless communication system. It used to be the case that patents were fairly difficult to obtain. The inventor's patent attorneys were pretty evenly matched by the U. S. government's patent examiners, whose job it was to make sure that trivial, obvious, or otherwise meritless patents were not issued. Patenting an idea was a serious and sometimes difficult undertaking, but when you got one, you knew you had something, and so did everyone else.

Not so anymore. A combination of factors—inadequate Patent Office funding, a hyper-pro-business attitude in government, and speedups in the pace of innovation—has made it much easier to get a patent in the last ten to twenty years. This includes dubious ones sometimes called "submarine patents"—not patents on the submarine, but patents deliberately designed to cover all parts of an emerging field, whether or not the supposed inventor has any genuinely innovative ideas. In the past, these types of patents would never have been issued, but in the current almost-anything-goes atmosphere, all it takes is enough money paid to a good patent firm.

What happened to Research in Motion this year shows what kind of harm can result from this over-liberalized issuing of patents. In the early 1990s, one Thomas Campagna patented some ideas for wireless email. In the meantime, Research in Motion put in a lot of work to develop the Blackberry, and obtained its own patents. In 2001, a company named NTP, formed to exploit Campagna's patents, sued RIM for patent infringement. The resulting legal hassle threatened to produce an injunction that would shut down all Blackberry services in the U. S., clearly an outcome that would benefit no one. This was despite the fact that the U. S. Patent and Trademark Office re-examined and rejected at least seven of NTP's patents along the way. In March of this year, RIM announced a settlement in which NTP would receive over $600 million. No doubt RIM views this as part of the cost of staying in business. But if the shady NTP patents had never been issued in the first place, none of this would have happened.

What has this got to do with engineering ethics? A lot. First, engineers can refrain from participating in the generation of "junk" patents. Unfortunately, this may not have much of an effect, since unscrupulous patent lawyers don't need much in the way of technical help to cobble together useless patents. This is not to say that patenting is unethical in general. Properly used in a well-conducted system, patents help to achieve the balance between monopolistic profit, innovation, and reasonably-priced new products and services that characterizes modern industrial societies. But the pendulum has swung way too far in favor of patent owners and patent attorneys to the detriment of the general public and those who actually do the hard work of developing and marketing new products, only to have their resources diverted into pointless patent battles. Under the present circumstances, the danger is that innovation will be stifled by artificially extended patents that allow established firms to exclude competition indefinitely. This is already happening in the pharmaceutical industry as some firms come up with patented repackaging of old patented drugs to prevent a cheaper generic form from coming onto the market. Who pays for this? The beleaguered patient who has to pay beaucoup bucks for the name-brand drug longer than necessary.

The second thing engineers can do is to make a political issue out of the patent system. True, it doesn't have the popular appeal of antiwar movements or tax reform. But it is critically important to fix a badly broken system before the R&D departments of multinational firms decide to relocate to countries where the system is more rational. Ever since the U. S. patent system was founded in 1790, it has differed in significant ways from most European systems. One of the most important differences is that most European patent holders must show that they are licensing their patents to others or using them themselves, while there is no such requirement in the U. S. This allows U. S. patent holders to "sit on" submarine patents that lie dormant until a well-heeled company comes within the sights of the patent-holder's legal gun. Besides changes in the legal structure of patents, the U. S. Patent and Trademark Office simply needs a lot more good help in the form of funding and staff to stay competitive with the best private patent lawyers. Only then will it be able to reinstate the rigorous examination of patents that prevailed before the recent gold-rush atmosphere developed.

With their specialized training, engineers stand in a unique position to make an important political difference in this situation. Consider writing your U. S. senator or congressman about this matter, and see what happens. The worst that can happen is nothing, and the best could be a lot better than that.

Sources: The New York Times article "In Silicon Valley, A Man Without a Patent" by John Markoff was published online on Apr. 16, 2006, and is available from the NYT archives at http://select.nytimes.com/gst/abstract.html?res=F20811FA3D5B0C758DDDAD0894DE404482 for a fee. The Forbes.com article "More Patents Rejected in BlackBerry Case" by Arik Hesseldahl is at http://www.forbes.com/business/2005/06/22/rim-patent-infringement-cx_ah_0622rim.html.

Thursday, April 13, 2006

Earthquake Prediction: Ready for Prime Time?

Earthquakes and the tsunamis that sometimes accompany them are among the most frightening and deadly natural disasters. The December 26, 2004 earthquake and tsunami that struck in and around the Indian Ocean killed more than 200,000 people, and millions more have died in similar disasters. One of the main ways people die in an earthquake is in collapsing buildings, and over the years civil engineers have developed building codes and other techniques that reduce (but do not eliminate) the danger of structural collapse during an earthquake. Unfortunately for the billions of people who live in developing countries, these measures are expensive. If the choice is between living in shaky but affordable housing on the one hand, and going without shelter on the other, most people take their chances with a house that may fall down in an earthquake. The poor of this world have more pressing things to worry about than earthquake safety, but that doesn't make their lives any less valuable.

Viewed as an engineering problem, the question of how to save lives in earthquakes and tsunamis has several possible solutions. The only one we have pursued to any great extent up to now is to make sure that structures will withstand the likely force of an earthquake. (As far as tsunamis go, there is little one can do except run for higher ground.) If—and this is a big "if"—earthquakes could be predicted with good accuracy, the problem becomes simpler. A few hours before an earthquake strikes, simply clear everyone out of dangerous buildings until the danger is past. This second solution is not without its own problems, but if it could be implemented, the cost of an early-warning system would be much less than earthquake-proof buildings for everybody, and the potential to save lives would therefore be much greater. The only problem is, how do you predict earthquakes?

Historically, earthquake prediction has been regarded as a pseudo-science. The abundance of post-earthquake "premonition" stories such as animals acting strangely, unusual sounds, and lights in the sky is a set of data that few scientists take seriously, and with some justification. Human beings are not emotionless recording machines, and memory is a highly subjective thing. Perfectly ordinary and random incidents that happen just before a frightening event take on an ominous cast when recalled later. But the shady neighborhood that earthquake prediction has resided in up to now should not prevent scientists and engineers from exploring ideas about how to do it.

The December 2005 issue of IEEE Spectrum, a highly regarded magazine for professional electrical and electronic engineers, carried an article on recent efforts to develop technical means of predicting earthquakes. (The article can be found at http://www.spectrum.ieee.org/dec05/2367). The lead author, Tom Bleier, described how ELF waves (extremely-low-frequency electromagnetic waves) and other measures such as satellite-sensed electromagnetic waves and surface temperatures have appeared at times to be correlated with certain large earthquake events. He made what to this author sounded like a good case that there is something to the idea that such correlations are real. However, a good physical explanation for why such correlations should occur is presently lacking.

The article inspired three geophysicists to write a letter to the editors of IEEE Spectrum protesting the publication of claims that they said should be rejected (the letter can be viewed at http://www.spectrum.ieee.org/apr06/3275). Robert J. Geller, Alex I. Braginski, and Wallace H. Campbell argued that there is no scientific basis for the kind of earthquake prediction that Bleier and his colleagues are doing. They claim there is so much noise from other natural and man-made sources at the frequencies in question that any exercise in earthquake prediction amounts to sophisticated tea-leaf reading. Their opinion is that the scientific community has examined the methods of Bleier and company and found them wanting.

This controversy reminds me of the early days of tornado prediction. From the late 19th century until 1938, forecasters at the U. S. Weather Bureau were forbidden even to use the word "tornado" in a forecast. The prevailing opinion was that there was no reliable way to predict tornadoes and such a forecast was likely only to cause needless panic. It wasn't until 1948, when some U. S. Air Force weathermen at Tinker Air Force Base in Oklahoma had their airfield trashed by a tornado, that anyone began to apply serious scientific effort to the problem of tornado forecasting. They came up with a combination of conditions that looked like it would work. Five days later, they noted that the same conditions prevailed, and, not being under the restrictions of the civilian Weather Bureau, took it upon themselves to issue a tornado forecast to Air Force personnel. Later that same evening, probably the only tornado in history to be greeted with jubilation struck Tinker Air Force Base again! The weathermen published their findings in 1950 and 1951, but for several years afterwards tornado forecasts were restricted to military facilities unless they were leaked to the media. Other researchers attempting to publish research papers on tornado forecasting were blocked by skeptical reviewers. It took the better part of a decade to overcome the attitude that forecasting tornadoes was so chancy as to not be worth upsetting the public. But in combination with radar-based early warning systems for tornadoes that were put in place in the 1950s, annual tornado fatalities in the Midwest plummeted. (The story of tornado prediction is told in Marlene Bradford's Scanning the Skies: A History of Tornado Forecasting.)

Time will tell whether the new techniques of earthquake forecasting will bear fruit in the form of reliable, specific predictions. In the meantime, its proponents should prepare themselves for a long battle with skeptics. We can hope that if there is anything to it, engineers, scientists, and the public will be open-minded enough to welcome the practice and take it seriously enough to save lives with it in the future.

Sources: See URLs above referring to items in IEEE Spectrum. Marlene Bradford's Scanning the Skies: A History of Tornado Forecasting was published in 2001 by the University of Oklahoma Press, Norman.

Friday, April 07, 2006

The Engineer and Grandma Millie: The California Energy Crisis Revisited

Engineers like to think that what they do professionally helps people. The payoff in this regard is not as direct as the one a doctor gets when a dying patient walks out of the office feeling fine. But most engineers, I suspect, would like to believe that the work they do makes a positive difference in the lives of people who use their products and services.

This connection is especially visible in the area of electric utilities. In the aftermath of the numerous hurricanes and storms of 2005, we saw teams of hundreds of linemen coming from all across America to repair the damaged distribution infrastructure. Linemen aren't engineers, true, but I have encountered the same "keep the power coming" attitude in power engineers whose job it is to direct operations of the regional power pools that maintain a moment-by-moment balance between the fluctuations of electricity demand and the available supply. Since electricity cannot be stored in large quantities, it must be produced as needed, and keeping abreast of changing demands can be a headache, even when no one is playing financial games on top of it.
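The balancing act can be caricatured in code as a merit-order dispatch: generators come online cheapest-first until demand is met, and the last unit dispatched sets the marginal price. This is a toy model with made-up capacities and costs, not a description of how any real power pool operates:

```python
def dispatch(demand_mw, generators):
    """Toy merit-order dispatch over a list of hypothetical generators.

    generators: list of (capacity_mw, cost_per_mwh) tuples.
    Returns (schedule, marginal_price). Raises if supply falls short,
    which in a real pool would mean emergency measures such as
    rolling blackouts.
    """
    schedule = []
    marginal_price = 0.0
    remaining = demand_mw
    for capacity, cost in sorted(generators, key=lambda g: g[1]):
        if remaining <= 0:
            break
        used = min(capacity, remaining)      # run this unit as needed
        schedule.append((used, cost))
        marginal_price = cost                # last unit sets the price
        remaining -= used
    if remaining > 0:
        raise RuntimeError("demand exceeds available supply")
    return schedule, marginal_price

fleet = [(500, 20.0),   # e.g. a cheap baseload plant
         (300, 35.0),   # a mid-cost plant
         (200, 90.0)]   # an expensive peaker

_, price = dispatch(700, fleet)   # mild day: peaker stays off
print(price)                      # → 35.0
_, price = dispatch(950, fleet)   # hot day: marginal price jumps
print(price)                      # → 90.0
```

Even this sketch hints at how California's prices could soar: once demand pushes into the expensive end of the supply stack—or cheap capacity is withheld—the marginal price jumps sharply, even though most of the power is still generated cheaply.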

This week, two of the all-time top financial game-players are on trial for lying about their company's profitability. Jeffrey Skilling, former CEO of Enron, and Kenneth Lay, the firm's founder, are being tried in a fraud and conspiracy case that the federal prosecutors have framed in simple terms. In the years 2000 and 2001, some parts of Enron were losing lots of money, and the claim is that Skilling and Lay knew this and lied about it to investors and the general public. Ironically, one part of Enron that was extremely—some would say sinfully—profitable was the energy-trading division, which the Attorney General of the State of California claims was responsible for many of the rolling blackouts that hit that state in the same years. Skilling and Lay didn't have to lie about that—they just had to live with their consciences.

What happened to the energy market in California in 2000 has been described as the perfect storm of electric-utility deregulation. To make a long and complex story short, increasing demand and partial deregulation led to a situation in which there was simply not enough electricity available in California for several days of unusually hot weather or short supply. The new tariffs allowed companies like Enron to charge whatever the traffic would bear for energy imports and futures, and as a result rates soared to the stratosphere in only a few months. The Attorney General claims that Enron and other utility interests purposefully took generating facilities offline in order to increase their profits. The fallout in terms of accusations, lawsuits, bankruptcy proceedings, and other effects continues to this day. In the process, state investigators unearthed a set of recorded phone conversations among energy brokers at Enron and other firms.

These tapes make for depressing listening. One took place as the state legislature was debating whether to cap the spot price of energy on the open market. "So the rumor's true, they're taking all the f---ing money back from you guys? All the money you guys stole from those poor grandmothers in California?"

"Yeah, Grandma Millie, man. She's the one who couldn't figure out how to vote on the butterfly ballot. Now she wants her f---ing money back from the power utilities. . . ." And these are some of the less obscene samples. Many more such recordings can be found at http://ag.ca.gov/antitrust/energy/index.htm.

The engineers who participated, willingly or unwillingly, in the events of the California energy crisis have not received as much attention as the financial traders who clearly profited from the situation. I have seen a few references in the open literature to their activities and the difficult situation they faced. In the tightly coordinated world of electric power-pool operation, individual action is nearly impossible, since the decision to shut down a facility or make purchases of power here or there is one that only a few individuals can make. Anything other than following orders in a case like this would amount to industrial sabotage, since an uncoordinated attempt to shut down or put online a generator would cause serious damage. Whether or not the engineers involved liked what they were doing, and whether or not they knew its implications, they had few options in the event. Like soldiers in a battlefield, their horizon was limited to their immediate surroundings and the technical circumstances they had to deal with at each moment. It is possible that they realized the wider implications of their actions during the crisis only in retrospect.

If any power engineers involved in the California energy crisis care to share their experiences, it would be appreciated. Engineers are generally far from the centers of political and corporate power where rate setting and related issues are decided. But to the extent that such matters interfere with the average engineer's desire to serve the public, not penalize it, there is something wrong structurally with the way electric utilities are set up and administered.

Since 2001, Enron has gone bankrupt, the California economy has cooled off, continued efforts in energy conservation have alleviated summer blackout threats, and additional generating capacity has been added to the nation's power grid. Sometimes a crisis has to happen in order to galvanize politicians and corporations into action, so we might actually be thankful that the California energy crisis happened when it did, and was no more severe than it was. All the same, it would be easy to become complacent in the face of new schemes for shortchanging Grandma Millie in order to profit the powerful, and we should be wary of them in the future.

Sources: The best source of Enron tapes I have found online is on the California Attorney General's website http://ag.ca.gov/antitrust/energy/index.htm.

Thursday, March 30, 2006

Engineering Censorship in China

On the last day of April in 2005, Chinese journalist Shi Tao was sentenced to ten years in prison for sending an email to a New York colleague, the editor-in-chief of a publication called Democracy News. According to the Chinese court's verdict, Tao's email contained state secrets, and his crime consisted in leaking them to an "overseas hostile element," namely, Democracy News. The thing that makes Tao's case interesting to the rest of the world, and America in particular, is that Yahoo! Holdings (Hong Kong) helped the Chinese government identify Tao by divulging information about his private email account. Without Yahoo's help, Tao quite possibly would be a free man today, working for what he sees as the noble goal of promoting democracy in China.

By some estimates, China is the world's biggest untapped market for information technology. The population of mainland China makes up the second-largest group of Internet users, second only to the U. S. That wouldn't have happened without technology—hardware and software—furnished largely by U. S.-owned or operated companies such as Yahoo, Google, and Microsoft. In order to gain access to the lucrative Chinese market, all three firms have agreed to abide by the restrictive censorship and information-control policies of the People's Republic of China. They have also been roundly criticized for such cooperation. In January, the Secretary General of Amnesty International expressed dismay at the "growing global trend in the IT industry" to impose "restrictions that infringe on human rights." Revealing private email account information, shutting down "undesirable" websites, and restricting search-engine results to items that are politically acceptable are a few examples of the steps that IT firms have taken in order to stay on good terms with the Chinese government.

Some people like to say that all technology is ethically neutral, and the only time ethics comes into the picture is when you look at how the technology is used. I have yet to be convinced of the ethical neutrality of a nuclear weapon. As we have found, the nuclear tests of the 1960s in which no one was directly killed nevertheless caused environmental damage and radiation levels that led to serious later harm. Some technologies carry with them an intrinsic bias toward good or evil, and it is foolish to pretend otherwise. It may be necessary from time to time to build things with a built-in ethical bias, but we do that in full consciousness that they cannot be viewed as ethically neutral.

The Internet's designers imbued its very structure with the spirit of egalitarianism and, one might even say, democracy. The distributed, non-hierarchical way that information travels, the uniform resource locators that anyone from an eleven-year-old boy in his bedroom to the U. S. government can obtain under basically the same rules, and the almost-instant access to anything are all biased toward the "global village" model of human interaction. While one may question the merits of that model, it has created a situation in which democracy, openness, and the free exchange of information come naturally to the Internet. To restrict any of these things means that IT designers and companies have to go to extra trouble and expense. In a sense, they are going against the grain of the whole design philosophy of the system.

In defending Microsoft's actions, Microsoft founder Bill Gates claims that the basically open nature of the Internet will lead to a net increase in freedom for the Chinese people, despite the restrictions and occasional blog-takedowns that his firm does at the government's bidding. Speaking at the World Economic Forum in Davos last January as reported in the Times of London Online, Gates said, "I do think information flow is happening in China ... even by existing there contributions to a national dialogue have taken place. There’s no doubt in my mind that’s been a huge plus."

It is a fact that laws and freedoms differ greatly from one country to another. Doing business in countries with evil or corrupt regimes has always been a morally complex thing. Quite often, moral clarity is arrived at only after the utter defeat and repudiation of a government such as that of Nazi Germany after World War II. And as Gates points out, engaging a country through trade can lead to opportunities for improving the lot of its citizens that an absolute hands-off posture would prevent.

All the same, I get a strange feeling in the pit of my stomach when I think that where I live influences what I'll be able to find on Google, or what I'll be able to email to my friends. I visited China back in 1989, less than two months after the Tiananmen Square massacre. Our guide pointed out the blackened blocks of concrete which had not yet been replaced after the fires and violence of those days. It saddens me that the same government which committed those crimes is still in power, and has strong-armed the cooperation of U. S. corporations that have enjoyed freedom in this country and now are a party to restricting it in China. But this may be one of those situations where we will find out what the right course is only by waiting to see how things turn out.

Sources: The article "Gates Defends China's Internet Restrictions" is at http://business.timesonline.co.uk/article/0,,19149-2012784,00.html. An article on a Yahoo co-founder, "Yang defends support for 'firewall of China'," is at http://www.iol.co.za/?set_id=1&click_id=31&art_id=qw1143581582510B215. Amnesty International's press release of January 2006, "China: Internet companies assist censorship," is at http://web.amnesty.org/library/Index/ENGASA170022006. Shi Tao's verdict is at the Reporters Without Borders website http://www.rsf.org/article.php3?id_article=14884.

Tuesday, March 21, 2006

Retire the Space Shuttles Now

Last week, NASA announced that the same kind of fuel-sensor problem that delayed last summer's flight has cropped up again. Program managers decided this time to replace all four sensors with new ones, a process that will take three weeks and delay the next flight until sometime in July. It was originally planned for May. This is both good news and bad news.

The good news is that NASA managers are finally showing some conservatism in their approach to potentially catastrophic problems. The fuel sensors monitor fuel levels in the external tank, telling the engines to cut off before the liquid-hydrogen fuel runs out. If the fuel tank ran dry while the engines were still operating, the resulting oxygen-rich mixture could cause severe corrosion and damage to the engines. Under normal operation the sensors are not needed, but if two or more sensors gave a false "empty" reading, the resulting engine shutdown could force an emergency landing or even cause a crash. So NASA is showing wisdom in replacing all the sensors before attempting another launch.
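The cutoff decision described above amounts to a voting scheme among the four sensors. NASA's actual arming and voting rules are not given here, so the two-out-of-four threshold below is an illustrative assumption based on the failure scenario just described:

```python
def engines_cut_off(sensor_reads_empty):
    """Decide engine cutoff from four fuel-depletion sensors.

    sensor_reads_empty: list of four booleans, True meaning that
    sensor reports the tank dry. Cutoff is commanded when at least
    two sensors agree -- an illustrative two-of-four rule, not
    NASA's documented logic.
    """
    assert len(sensor_reads_empty) == 4
    return sum(sensor_reads_empty) >= 2

# One faulty sensor alone is tolerated:
print(engines_cut_off([True, False, False, False]))   # → False
# Two false "empty" readings would shut the engines down early,
# forcing the emergency scenarios described above:
print(engines_cut_off([True, True, False, False]))    # → True
```

The trade-off is visible even in the sketch: requiring two votes protects against a single faulty sensor, but two sensors failing the same way can still trigger a needless shutdown—which is why replacing all four suspect units is the conservative call.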

The bad news is that once again, NASA is going into space with a flying antique. Major elements of the space shuttle design are now over thirty years old. NASA engineers routinely comb the web for surplus sales of outmoded electronic components to use for repairs on the shuttle. I own a pickup truck that was built in 1981, the year after the first shuttle flew. I still drive it around town, but I must confess I'm somewhat reluctant to take it on a 35-mile trip to Austin and back for fear of a breakdown or worse. Granted that the shuttle fleet has received a great deal more attention and refurbishing than my truck, the fact remains that for every year the existing shuttles are kept in operation, maintenance and operating costs rise and the chances of failure from a hitherto unexpected cause grow greater.

Every reliability engineer is familiar with the "bathtub curve" that shows rates of failure in a collection of components over time. Suppose you buy a thousand new light bulbs for a large institution such as a school or hospital, install them, and keep track of when they fail. A small number will blow out within a few hours of first being turned on. This is called "infant mortality" and is due to defects that did not show up at the factory's inspection. This is the downward-sloping end of the bathtub curve. Then for a long time, you will see a very low rate of failure, one or two every month, perhaps. This is the bottom of the bathtub. Finally, as the usual failure mechanisms start to act, the failure rate will rise toward the end of the rated lifetimes of the bulbs. This is the rising slope of the bathtub, and continues until virtually all the bulbs fail.
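The bathtub shape can be sketched numerically by superimposing three Weibull failure processes: a decreasing-rate process for infant mortality, a constant-rate process for the flat bottom, and an increasing-rate process for wear-out. All the shape and scale parameters below are arbitrary illustrative choices, not data for any real component:

```python
def weibull_hazard(t, shape, scale):
    """Instantaneous failure rate of a Weibull process at time t > 0."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_rate(t):
    """Hypothetical composite failure rate (failures per unit time).

    shape < 1: infant mortality -- rate falls as factory defects are
               weeded out (the downward slope of the bathtub)
    shape = 1: constant random failures (the flat bottom)
    shape > 1: wear-out -- rate climbs near end of life (the rising slope)
    """
    return (weibull_hazard(t, 0.5, 200.0)      # infant mortality
            + weibull_hazard(t, 1.0, 1000.0)   # random background
            + weibull_hazard(t, 5.0, 800.0))   # wear-out

# The rate falls early on, flattens in mid-life, then climbs again:
for hours in (1, 100, 400, 900):
    print(hours, round(bathtub_rate(hours), 4))
```

The shuttle's predicament is the right-hand side of this curve: components pushed past their rated lives are climbing the wear-out slope, where the composite rate is dominated by mechanisms that may never have shown up before.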

The shuttles have literally thousands of components, each with a particular lifetime. No doubt NASA reliability engineers have studied the problem extensively, and the fact that the remaining shuttles still work is mute testimony that the engineers have done something right. But as time goes on and numerous components are used far beyond their expected lifetimes, unusual and undocumented failure modes can start to show up. It's not normal for a car's wheel to fall off, but when I pushed the mileage of an old car past the 200,000 mile point, that's almost what happened. Every successful launch moves the shuttles closer to the next failure, and as time goes on, it will be harder and harder to predict what the failure might be. From an engineering perspective, the only sensible thing to do with such antiquated hardware is to retire it. But politics plays as much a role in what NASA does as engineering, if not more.

No one likes to kick an organization when it's down, so ironically, the 2003 Columbia disaster probably kept President Bush from doing the sensible thing and terminating the shuttle program in a timely way. But who knows how many more astronauts will die between now and 2010, when the program is scheduled to end?

Space is billed as the last great frontier, and no one pretends that space exploration is without its hazards. The Apollo program cost the lives of three astronauts in a 1967 launchpad fire. The accident investigation wrapped up in three months, the program continued, and we landed on the moon two years later. No great achievement is without risks, and the consensus at the time was that the risks were worth it.

No such consensus exists today. The primary mission for the shuttles these days is to support the international space station, which is itself an enterprise of dubious utility, plagued by cost overruns, equipment problems, and a signal lack of clarity in its goals and mission. Some continued presence of man in space is probably worthwhile. But the numerous recent successes in privately funded space efforts indicate that private enterprise can do everything NASA is doing with the shuttle at lower cost and more safely, provided private firms are given some good ground rules and sufficient funding to make a fresh start. If the U. S. government had taken the same attitude toward air travel that it has taken toward manned space flight, we would still be watching a few highly trained NASA aeronauts fly across the Atlantic in single-engined Spirits of St. Louis, if that much. Shut down the shuttle, open up the field to private competition, and let the idealism of a new generation of space explorers come up with something that old institutions cannot even conceive.

Sources: For more details on the Shuttle's external tank, see http://en.wikipedia.org/wiki/Space_Shuttle_external_tank

Tuesday, March 14, 2006

BP Texas City Refinery Disaster: One Year Later

On March 23, 2005, some temporary workers in a Texas City, Texas oil refinery owned by BP (formerly British Petroleum) were just finishing lunch near the trailers that housed their offices, when they saw a geyser of clear liquid spurting out the top of a steel tower only a few yards away from them. According to the Houston Chronicle, one of them cried into a radio, "God, I hope that's water." A few seconds later, a pool of a highly flammable intermediate product called raffinate spread throughout the area. Although the exact cause of ignition was never officially determined, some witnesses recalled that an idling diesel pickup truck suddenly sped up as if somebody had stepped on the gas. Then came the explosion.

It killed fifteen workers, injured 170, and wrecked acres of refinery equipment. In the year following, both the U. S. Chemical Safety and Hazard Investigation Board and BP carried out independent investigations, which reached similar conclusions. While the investigators found that outmoded and nonfunctional hardware contributed to the accident, the single most important cause was a culture of carelessness and bad management.

In a highly automated business such as oil refining, it is easy to look at the vast expanse of fractionating towers, pipes, flares, and tanks, and get the impression that such a system basically runs itself. But when you realize how many dangerous chemicals—corrosive, flammable, volatile—go through intense heat and pressure inside thousands of pipes and vessels, the amazing thing is that there are not major refinery accidents every day. More important than the visible structure of hardware, controls, and even the computer software that helps operators run the plant is the human structure of management, authority, will, energy, memory, obedience, and trust. As many industries mature, more and more is known about the physical and chemical processes involved. Computer models can predict even unexpected and dangerous behavior before two pipes are ever welded together to build an actual refining unit. This improved physical understanding can lull managers and operators into thinking that no thinking is required, or at least very little.

As with many accidents, a combination of relatively unlikely events and decisions conspired to bring about the tragedy of a year ago. First, a number of temporary trailers were brought inside the borders of the active plant, within a few yards of equipment that processed hazardous materials. If the plant had been treated like what it is—potentially, a bomb about to go off—these trailers would have been blocks away. Inconvenient, perhaps, for the workers who would have had to travel farther and get less done each day, but better than dying. Next, operators tried to restart a unit that had been down for maintenance without clearing the area. Starting up and shutting down chemical-plant processes is much more dangerous than smooth steady-state operation, and more things are likely to go wrong. A fractionating tower that should have been filled to a depth of only about six feet instead filled to a height of over a hundred feet with flammable raffinate. The operators were misled into thinking the levels were normal by malfunctioning and nonfunctioning instruments. When they realized there was too much hot raffinate in the tower and attempted to drain it away, the action one worker took to improve things actually made them worse: the heat from the hot material drained from the bottom was exchanged back into the tower, causing both it and an auxiliary "blowdown" stack to overflow. This produced the geyser that a worker prayed was water.

BP has paid for this accident in several ways. The entire plant was shut down for months, the U. S. Occupational Safety and Health Administration levied a $21 million fine against the company (which it paid without admitting the correctness of the charges), and numerous lawsuits arising from the accident continue. But wouldn't it be better if, before a tragedy like this happens, enough pressure could be brought to bear on an organization to make it mend its ways?

The Internet may be one way this can happen. I would be very interested to hear from anyone who has had experience with the BP accident (directly or indirectly), or who can share factual insights about it and suggest ways to keep the next major refinery accident from happening. You can respond to this posting by clicking on the comments link below. I hope to hear from you!

Sources: A more detailed summary of the incidents leading up to this disaster is available at the U. S. Chemical Safety and Hazard Investigation Board website, complete with a narrated video simulation of the incidents and the vapor and pressure waves resulting from the explosion. BP has also posted its completed investigation report at www.bpresponse.org.

Saturday, March 11, 2006

Welcome

This is a forum for discussion of current issues in engineering ethics and current events that have an engineering ethics angle. Historian of technology Henry Petroski has said that engineers often learn more from failures than from successes. My hope for this forum is that it will serve as a rapid way for knowledgeable people to exchange factual information and insights about matters such as:

--- Consumer safety issues
--- Disasters and accidents involving engineered products or systems
--- Hazards that need attention drawn to them
--- Official statements concerning controversial issues that involve engineering ethics
--- Ways engineers can learn from past mistakes and problems

Each week I plan to post a brief commentary on a news item related to engineering ethics. I invite you, the reader, to respond, especially if you have technical or other knowledge that will add to public understanding of the issue at hand. If readership allows, I may add other features such as ongoing discussion threads and an FAQ section. This forum will be successful if it attracts the attention of thoughtful, knowledgeable individuals who can contribute to the better understanding of how engineers can do the right thing, as well as how they can do things right.