Walter Lord, in his classic nonfiction book A Night to Remember, used dozens of interviews and historical documents to recount the 1912 sinking of the Titanic in vivid and harrowing detail. Now David Barstow, David Rohde, and Stephanie Saul of the New York Times have done something similar for the Deepwater Horizon disaster of last April 20. While official investigators will probably take years to complete a final technical reconstruction from all the available information, the story these reporters have pieced together already highlights some of the critical shortcomings that led to the worst deepwater-drilling disaster (and the consequent environmental damage) in recent memory.
Their 12-page report makes disturbing reading. They describe how Transocean, the company that owned the rig and operated it for the international oil giant BP, was under time pressure to cap off the completed well and move on to the next project. They show something of the rig's complex command-and-control structure, which involved all kinds of safety systems (both manual and automatic) as well as dozens of specialists among the hundred or so engineers, managers, deckhands, drillers, cooks, and cleaning personnel who were on board at the time. And they reveal that while the blowout that killed the rig was about the worst thing that can happen on an offshore platform, there were plenty of ways the disaster could have been minimized or even avoided, at least in theory. But as any engineering student knows, there can be a long and rocky road between theory and practice. I will highlight some of the critical missteps that struck me as common to other disasters that have made headlines over the years.
I think one lesson that will be learned from the Deepwater Horizon tragedy is that the control and safety systems on offshore oil rigs need to be more integrated and simplified. The description of the dozens of buttons, lights, and instruments in physically separate locations that went off when high levels of flammable gas were detected during the blowout reminds me of what happened at the Three Mile Island nuclear plant in 1979. One of the most critical crew members on the rig was Andrea Fleytas, a 23-year-old bridge officer who was among the first to see the flood of gas alarms lighting up her control panel. With less than two years' experience on the rig, she had received safety training but had never before faced an actual rig emergency. She, like everyone else on board, had crucial decisions to make in the nine minutes between the first signs of the blowout and the moment the explosions began. Similarly, at Three Mile Island, investigators found that the operators were confused by the multiplicity of alarms going off during the early stages of the meltdown and actually took actions that were counterproductive. In the case of the oil-rig disaster, inaction was the problem, but the cause was similar.
Andrea Fleytas or others could have sounded the master alarm, instantly alerting everyone that the rig was in serious trouble. She could have also disabled the engines driving the rig’s generators, which were potent sources of ignition for flammable gas. And the crew could have taken the drastic step of cutting the rig loose from the well, which would have stopped the flow of gas and given them a chance to survive.
But each one of these actions would have exacted a price, ranging from the minor (waking tired drill workers, asleep at 11 o'clock at night, with a master alarm) to the major (cutting the rig loose from the well would have meant millions of dollars in expense to recover it later). And in the event, the confusion caused by unprecedented combinations of alarms, together with a lack of coordination among critical personnel in the command structure, meant that none of the actions that might have mitigated or averted the disaster were in fact taken.
It is almost too easy to sit in a comfortable chair nine months after the disaster and criticize the actions of people who went on to do courageous and self-sacrificing things as the rig burned and sank. None of what I say is meant as criticism of individuals. The Deepwater Horizon was above all a system, and when systems go wrong, it is pointless to focus on this or that component (human or otherwise) to the exclusion of the overall picture. In fact, a lack of big-picture planning appears to be one of the more significant flaws in the way the system was set up. Independent alarms were installed for specific locations, but there was no coordinated automatic system that would, for example, sound the master alarm if more than a certain number of gas detectors sensed a leak. The master alarm was placed under manual control to avoid waking people up with false alarms. But this meant that in a truly serious situation, human judgment had to enter the loop, and in this case it failed.
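To make the idea of a coordinated automatic rule concrete, here is a minimal sketch in Python. The zone names, gas-level scale, and three-detector threshold are purely illustrative assumptions of mine, not anything drawn from the Deepwater Horizon's actual control logic.

```python
# Hypothetical sketch of an alarm-aggregation rule: escalate to the master
# alarm automatically when enough independent zones report dangerous gas.
# Zone names, units, and thresholds are made up for illustration only.

GAS_ALARM_THRESHOLD = 3   # assumed: zones in alarm before automatic escalation
HIGH_GAS_LEVEL = 0.5      # assumed: normalized concentration treated as dangerous

def should_sound_master_alarm(detector_readings):
    """Return True if the number of zones reading dangerous gas levels
    meets or exceeds the escalation threshold.

    detector_readings: dict mapping zone name -> normalized gas concentration.
    """
    zones_in_alarm = [zone for zone, level in detector_readings.items()
                      if level >= HIGH_GAS_LEVEL]
    return len(zones_in_alarm) >= GAS_ALARM_THRESHOLD

# Example: three separate zones read high, so the rule escalates on its own
# instead of waiting for one person to piece the individual alarms together.
readings = {"shale shaker": 0.9, "drill floor": 0.7,
            "engine room": 0.8, "galley": 0.0}
if should_sound_master_alarm(readings):
    print("MASTER ALARM")  # a real system would trigger the rig-wide alarm here
```

Even a rule this simple would take the burden of mentally integrating dozens of separate panels off a single person acting under extreme pressure, which is exactly where the manual-only design broke down.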
Similarly, the natural hesitancy of someone with limited experience to take an action they know will cost their firm millions of dollars was simply too much to overcome. This sort of thing can't be dealt with in a cursory paragraph of a training manual. Safety officers in an organization have to grow into a peculiar kind of authority, one strictly limited in scope but absolute within its proper range. It needs to be the kind of authority that would let a brand-new safety officer at an oil refinery dress down the refinery's CEO for not wearing a safety helmet. That sort of attitude is not easy to cultivate, but it is vitally necessary if safety personnel are to do their jobs.
Disasters, it is said, teach engineers more than successes do, and I hope that the sad lessons of the Deepwater Horizon will lead to positive changes in safety training, drills, and designs for future offshore operations.
Sources: The New York Times article “The Deepwater Horizon’s Final Hours” appeared in the Dec. 25, 2010 online edition at http://www.nytimes.com/2010/12/26/us/26spill.html.