This morning, Tuesday, June 27, 2006, four days and some hours before the scheduled launch of NASA's Space Shuttle Discovery, the director of engineering at the Johnson Space Center, Charlie Camarda, was removed from the mission's management team. The Houston Chronicle reports that this reassignment, which Camarda says was against his will, took place after Camarda sent an email to colleagues commending them for expressing their "dissenting opinions and your exceptions/constraints for flight." Ten days ago, in the June 17 flight readiness review meeting, NASA's head safety official Bryan O'Connor and Christopher Scolese, NASA's chief engineer, voted not to launch. Despite their opposition, NASA managers decided to proceed with the scheduled flight anyway. According to comments the two made after the meeting, their concern was more that Discovery might suffer irreparable damage during the launch than that the crew of seven astronauts would face more than the usual danger involved in a ride into space. Nevertheless, it's very clear from these and other reports that NASA is far from one big happy family these days.
Camarda's dismissal may have more to do with internal NASA politics than with shuttle safety. But the two cannot be separated. NASA maintains the shuttles, trains the astronauts, and decides when and how often to fly the remaining three orbiters: Atlantis, Discovery, and Endeavour. NASA head Michael Griffin has gone on record as saying that if Discovery is seriously damaged by pieces of insulating foam—the same problem that doomed Columbia in 2003—he would consider shutting down the entire shuttle program. That policy no doubt influenced the votes of O'Connor and Scolese, who feel that engineering modifications to the foam on a number of support brackets should be made to prevent irreparable damage to Discovery's vital heat shield. Everyone agrees that if the kind of damage sustained by Columbia occurs, and is discovered in orbit, and can't be repaired, then the astronauts can take refuge in the International Space Station until a rescue flight can be arranged with one of the two remaining shuttles. This despite the fact that the Station has lately had trouble accommodating even two or three residents at a time. But being uncomfortable and cramped in weightlessness for a few weeks is better than a fiery death. You haven't seen a lot of news items about billionaires paying for rides into space lately, have you? Maybe there's a reason.
In my Mar. 21, 2006 blog, "Retire the Space Shuttle Now," I gave a number of good reasons why we should go straight to the next model of space orbiter without risking any more people's lives in antiquated, patched-up shuttles that deserve an honored place in the Smithsonian, not reuse in space long after their design lifetimes. The recent news out of NASA has only increased my concern that yet another known problem, one we haven't heard about in public but which the engineers are all too familiar with, will reach out and cause another hair-raising space adventure like Apollo 13's near-disaster, if not worse.
Unfortunately, the shuttle program has achieved canonical status in the engineering ethics literature for a couple of reasons. One is that NASA, being a public agency, is unusually open about its internal processes and debates, which means that records of data and decisions are easy to obtain. The second is that both the Challenger and Columbia disasters were caused by known problems that were technically fairly well understood. The failures were not mysterious scientific puzzles; they were failures in management decision-making.
In most well-run organizations, the chief safety officer is king in his or her limited domain. In an oil refinery, for instance, if the president and owner of the plant walks into a hazardous area and attempts to light a cigar, the lowliest safety official present is entirely within his rights to do anything necessary to prevent it, including knocking the president down. On June 17, we witnessed the spectacle of not only NASA's chief safety officer, but its chief engineer as well, saying that for reasons of property protection the launch should not proceed—and being overruled. And Charles Camarda, an engineer who himself flew on the 2005 Discovery flight, the first one after the Columbia disaster, has just been removed from his mission responsibilities for commending the way some of his underlings spoke out at the flight review. It is not a pretty picture.
In Greek mythology, a young woman named Cassandra had the misfortune to attract the eye of the god Apollo. In an attempt to put himself in her good graces, he gave her the gift of prophecy. But when she refused his advances, he ran up against the rule that says what the gods giveth, the gods can't taketh away. He couldn't keep her from being a prophet, but he could spoil it another way: he made sure that whatever Cassandra prophesied in the way of dire forecasts would not be believed by anybody else. So when she ran around in Troy saying, "You'll be sorry if you bring that big wooden horse in here," she warned the Trojans in vain, the Greeks popped out anyway, and Troy fell. This made Cassandra wish she had never seen Apollo in the first place. Since then her name has passed into the language to mean one whose accurate foretellings of disaster are ignored.
I don't want to be a NASA Cassandra. I have no illusions that one blogger, or even an entire Greek chorus of bloggers, will influence NASA's decision-making process. My hopes and my prayers are that STS-121 will go smoothly, with no headlines other than the routine ones. But we face three possible outcomes on this trip: a routine flight with no significant problems, a flight in which Discovery is damaged enough to scuttle the remaining Shuttle fleet, or a more serious problem that endangers life. May God grant that the third possibility doesn't happen. But I'm going to leave it up to Him as to which of the other two takes place.
Sources: For Camarda's reassignment, see the Houston Chronicle at http://www.chron.com/disp/story.mpl/front/4004817.html. For Camarda's comments on NASA's changed culture, see the 2004 interview at the NASA website, http://www.nasa.gov/vision/space/preparingtravel/rtf_interview_camarda_04.html. For a report on the June 17 meeting, see http://news.yahoo.com/s/space/20060620/sc_space/nasaschiefengineersafetyofficerweighinonsts121launchdecision.
Tuesday, June 20, 2006
Hunting the Cyber Predator
The scene: a ballroom in a fancy hotel in Denver, Colorado. The room is crammed with teenagers of both sexes, along with a disproportionate number of young men in their twenties, from all across the U. S. and from many foreign countries as well. Each person wears a mask and a costume that completely conceal his or her identity. What brought them here? In malls and shopping centers all across the nation, attractive advertisements enticed these young people to a free party. To respond to an ad, you entered a small office where you encountered a man wearing a blindfold. The man asked you a few not particularly personal questions about yourself, and handed you a free round-trip airline ticket to Colorado. Some of the younger teens told their parents what they were up to, but many of them neglected that little detail.
The episode above is fiction. It sounds like the beginning of a bad suspense novel, bad because of its unbelievability. Any outfit making such an offer would risk kidnapping charges or worse. But if you substitute the Internet for the free airline tickets, and the elementary requirements for entering such social-networking sites as MySpace.com for the interview with the blindfolded man, you have a fairly good approximation of what goes on online every day, twenty-four hours a day. And while the vast majority of social encounters on these sites do no harm, there are enough folks out there trying to abuse the system for purposes of sex or child pornography to keep the Texas attorney general's Cyber Crimes Unit busy. That office recently marked the third anniversary of its founding in 2003 with the arrest of its 80th alleged cyber predator.
Although many social-networking websites have minimum age limits and warnings against putting too much personal or identifying information online, these restrictions are easy to evade, for either innocent or sinister reasons. For example, MySpace.com has a section of "Safety Tips" that warns users to "avoid meeting people in person whom you do not fully know." What "fully" knowing somebody means is left up to the user to decide. You are also warned that "if you lie about your age, MySpace will delete your profile," which fails to explain how MySpace is going to find out how old you really are in the first place. Texas Attorney General Greg Abbott has called for social-networking websites to require a credit card number from users, which would at least ensure the involvement of someone over seventeen years of age. But so far the sites have resisted this proposal.
None of the good things the Internet has brought us—and none of the bad things, either—could have come about without the vision and labor of many thousands of software engineers and others who came up with the idea and who manage to keep the whole unruly thing going. It is a truism of the history of technology that people will use—and abuse—new technologies in ways that the designers never thought of. As it has become easier for people without technical backgrounds to put more personal information about themselves online, including photos and up-to-date identifying data, the dangers of letting the whole world see your virtual persona have increased as well. No responsible parent would let their ten-year-old daughter wander around in an unfamiliar city. But there are some children that age who can run cybercircles around older adults and do things that we literally can't imagine, because we older folks are unfamiliar with that world.
Where does the responsibility for protecting children from Internet predators lie? For the most part, not with the children themselves. Both in law and in fact, even children who can write C++ code at the age of ten are still emotionally immature, and they can't be expected to follow all the "safety tips" a well-meaning site manager posts. Parents are the next logical choice. But parents find it hard to be in the room watching every last second that Jack or Jill spends online, even if they want to. That leaves the operators of the social-networking sites themselves on the front lines.
No doubt there are some security measures already in operation that are invisible to the user. But if the attorney general of only one state has been able to catch eighty suspected cyber predators in three years from a dead start, you know there are lots more out there to be caught. Clearly, whatever measures are already in place at the sites are not foolproof, nor can we reasonably expect them to be. But it seems that the looseness and open-ended nature of these sites, while encouraging people to meet new friends, leaves children wide open to becoming victims of a sufficiently ingenious and dedicated predator.
Some feel that since software got us into this problem, software can help us solve it too. Increasingly sophisticated automatic systems for detecting pornographic content (both text and visual) are being used here and there. But that is only part of the problem. To make sure no one under age uses these sites, something like the credit-card-number idea needs to be implemented. People with past criminal records of child molestation should be positively identified and blocked from such sites. And while it is a challenge to come up with a system that would sense when a potential predator is "pumping" a victim for identifying information, equally sophisticated systems now routinely develop elaborate and finely graduated profiles of our tastes in books, food, entertainment, and other online purchases. If software engineers devoted a fraction of the energy to the problem of cyber predators that they have expended on figuring out exactly what we want to buy, maybe the Cyber Crimes Unit in Austin would eventually have to look for other kinds of criminals to catch. For example, there's that Nigerian princess who hasn't gotten back to me lately . . . .
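To make the "pumping" detector a little more concrete, here is a minimal sketch in Python of the sort of screening a site could layer onto its chat system. Everything in it is hypothetical: the patterns, the threshold, and the function names are made up purely for illustration, and a real system would need far more context, statistical judgment, and human review. Still, it suggests that the basic plumbing is no harder than what the recommendation engines already manage.

```python
import re

# Hypothetical patterns only: phrases that press a young user for
# identifying or locating information, or for secrecy.
RISKY_PATTERNS = [
    r"\bhow old are you\b",
    r"\bwhat school\b",
    r"\bwhere do you live\b",
    r"\b(send|post)\s+(me\s+)?(a\s+)?(pic|photo)\b",
    r"\bphone number\b",
    r"\bhome address\b",
    r"\bdon'?t tell your (mom|dad|parents)\b",
]

def risk_score(message: str) -> int:
    """Count how many risky patterns appear in one chat message."""
    text = message.lower()
    return sum(1 for pattern in RISKY_PATTERNS if re.search(pattern, text))

def flag_conversation(messages, threshold: int = 3) -> bool:
    """Flag a conversation for human review once enough risky requests pile up."""
    return sum(risk_score(m) for m in messages) >= threshold

if __name__ == "__main__":
    chat = [
        "hey, cool profile",
        "how old are you? what school do you go to?",
        "send me a pic, and don't tell your parents we talked",
    ]
    print(flag_conversation(chat))  # prints True: route this one to a moderator
```

A crude keyword filter like this would of course produce false alarms and miss clever predators, which is exactly why the real work lies in the more sophisticated profiling the paragraph above describes.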
Sources: The Texas Attorney General's announcement "Texas Attorney General Greg Abbott’s Cyber Crimes Unit Marks 3-year Anniversary With 80th Arrest" is at http://www.oag.state.tx.us/oagNews/release.php?id=1573. MySpace.com's list of safety tips is at http://collect.myspace.com/misc/safetytips.html?z=1.
Tuesday, June 13, 2006
Engineering the Perfect Baby
Most engineering societies publish codes of ethics, and most of these codes say something about the health and welfare of the public. My own professional society, the IEEE, has over 300,000 members involved in electrotechnology of all kinds, including the ultrasound machines that produce images of unborn babies. The IEEE code of ethics says among other things that its members agree "to accept responsibility in making decisions consistent with the safety, health and welfare of the public" and "to treat fairly all persons regardless of such factors as race, religion, gender, disability, age, or national origin." Many people—and several U. S. states—include unborn babies in the category of "persons," even if they are found to have disabilities.
Before the advent of ultrasonic medical imaging, amniocentesis testing, and other prenatal diagnostic techniques, the mother's womb was a mysterious and inviolable sanctum. But now, due largely to the efforts of biomedical engineers and scientists, we can monitor heart rates, blood chemistry, and even perform surgery on babies who have several months to go before their regularly scheduled arrival. We can also discern defects such as clubfeet, extra digits, webbed fingers, and cleft palates. None of these defects are life-threatening, but they mar the ideal image that all parents want of a "perfect baby."
In her June 7 Orlando Sentinel column, Kathleen Parker deplored the cases of several British parents who had aborted their babies precisely because they had one of the defects I just mentioned. The numbers were not large—twenty or so with clubfeet, four with hand problems, one with a cleft palate—but numbers are not always the most important thing. While we have no comparable data for the U. S., our larger population and access to largely unrestricted abortion probably mean that even more abortions in this country are performed for comparable reasons. In India, it is well known that many abortions take place simply because the unborn baby is female. And that fact is usually disclosed by an ultrasound imaging machine.
I am not about to issue a blanket condemnation of prenatal diagnostic technology. It is a classic case of the two-edged sword. Some anti-abortion groups have found that one of the most effective ways they can persuade a potential mother to carry her baby to term is to show her an ultrasound image of the live, kicking infant inside. And until recently, the universal unstated purpose of medical technology was to save lives and preserve health, abortion and euthanasia notwithstanding.
But if you consider unborn babies persons and members of the public, in these cases technology is a hazard to their health, safety, and welfare. And even more obviously, technology is being used to discriminate against (that is, kill) those with disabilities or those who happen to be the wrong gender, at an early age when they are most defenseless. Such use of technology clearly violates the IEEE Code of Ethics.
Well, you say, technology is neutral, and the person who designs equipment can't always predict how people will use or misuse it. As I have mentioned elsewhere, the "technology is neutral" argument is a shaky one, especially in the case of technologies designed explicitly to harm people. As for predicting how technology will be used, engineers are responsible for making sure that when a new technology is introduced, they have taken reasonable safety precautions in terms of warning labels, training in safe procedures, and so on. But when an unsafe condition arises in use, it seems to me that turning a blind eye to the situation is irresponsible.
I don't like to talk philosophy in this blog ordinarily, but in this case it's unavoidable. There are scientists and engineers today who take the view that the human being is essentially no different from a computer, and an early-stage, primitive computer at that. I'm thinking of such "posthumanists" as Ray Kurzweil and Hans Moravec, who see humanity as just a crude sketch of what we are now obliged to improve upon using genetic engineering, robotics, and artificial intelligence. One way to approach this improvement process is to throw away the defective units, which is the approach the British parents of defective infants took. It reminds me of the early days of transistor manufacturing, when the chemistry and physics of semiconductors were poorly understood. A factory would do its best to make a batch of a hundred transistors, then sort through them one by one to find the ten or twenty that worked acceptably, and throw the rest away. But people aren't transistors, or computers, or machines. They're people.
Kathleen Parker began her column with a poetic quotation from the famous—and clubfooted—Lord Byron, who wouldn't have made it out of the womb if he had been conceived by a British couple equipped with an ultrasound machine and a false ideal of bodily perfection. People with minor or major bodily defects, and yes, even mental defects, who went on to achieve incredible feats of human endeavor are among the most encouraging examples of what it means to be human. Paradoxically, you will find in many biographies of the great, from Homer the blind poet down to Lance Armstrong the cancer survivor, some great physical challenge that forced them to develop the kind of character capable of overcoming it.
It is time to divide the medical wheat from the chaff. Given a human life, the job medical science and technology should tackle is how to help that human life overcome problems and difficulties with a reasonable use of limited resources. That is the wheat. But any technology or procedure that is used to end a defenseless human life because others decide that for whatever reason—status, economics, politics—it is not worth living, is chaff. And the sooner the chaff is gone with the wind, the better.
Sources: The IEEE Code of Ethics is at http://www.ieee.org/portal/pages/about/whatis/code.html. Kathleen Parker's article "Abortion's dead poets society" is at http://www.orlandosentinel.com/news/opinion/columnists/orl-parker07_106jun07,0,2091692.column. The Alan Guttmacher Institute study she mentions, "Reasons U. S. Women Have Abortions: Quantitative and Qualitative Perspectives," is at http://www.guttmacher.org/sections/abortion.php.
Sunday, June 04, 2006
Hurricane Katrina: Good News for Flood Control Engineering
Last August's Hurricane Katrina left well over a thousand people dead, most of New Orleans flooded, and many thousands homeless. You have to look long and hard to find any good news in the aftermath of the worst natural disaster to hit the United States in many decades. But ironically, one of the best things that may happen as a result is a badly needed top-to-bottom reorganization of coastal flood control work.
Engineer and author Henry Petroski likes to say that engineers learn a lot more from failure than they learn from success. You have to know a certain amount in order to succeed at all, of course. But if you are a young engineer and you just apply book learning to a project where everything goes smoothly, all that tells you is that the books were right. Failure is Nature's way of telling an engineer that the books didn't tell the whole story, and that the state of the art needs improving. Katrina overwhelmed a complex system of levees, dams, and canals that clearly wasn't up to the challenge. But now everybody concerned is motivated to find out what went wrong and how to fix it in a way that will prevent another Katrina disaster.
On June 1—the start of the 2006 hurricane season—the U. S. Army Corps of Engineers released a huge, detailed report on the failures that contributed to the New Orleans floods. More important than the details of the report is the fact that the Corps accepted full responsibility for the failure. The Corps and the Mississippi go back more than a century, to the days when many people doubted that the Big Muddy could ever be contained or controlled by the works of man. In "Life on the Mississippi," Mark Twain's memoir of his years as a riverboat pilot, he reports on how bold engineers had just begun to erect levees and dams to channel the river's unceasing powerful currents in the 1880s. Despite Twain's generally optimistic attitude toward the modern age's advances in technology, he expressed considerable skepticism that the Corps of Engineers, or anyone else short of the Almighty, could make much of a difference in the way the Mississippi found its way to the sea.
In the intervening decades, the Corps found ways of doing just that. The South still saw severe floods from time to time. In 1927, the Mississippi inundated hundreds of square miles of Delta land, and in 1965 Hurricane Betsy caused serious flooding in New Orleans. And here we are in 2006, a year after another major flood-control disaster. It may not be entirely coincidental that these events are about a generation apart. A pattern Petroski has found over and over in the history of technology goes like this: In the early stages of a new technology, engineers tend to overdesign a system to make sure it doesn't get a bad reputation that would kill it off right away. But as more designs succeed, newer engineers on the job tend to become not exactly careless, but overconfident. It's easy to assume that because there haven't been any major problems so far, there aren't likely to be any in the future. This is when new circumstances or long-term failure mechanisms are most likely to cause trouble. What we may be seeing here is a cycle: disaster, followed by a few years of overcautious design, followed by reduced attention, less funding, and complacency, until a new generation of engineers who aren't old enough to remember the last big failure arrives just in time for the next one.
But there are other factors as well. A system of dams and levees protecting a certain land mass has one thing in common with power lines, high-voltage insulation, and chains. All it takes is one failure in one little place—one tree touching a sagging transmission line, one piece of insulation failing, one link breaking—and the whole system collapses. Enough water can—and did—flow through a twenty-foot breach in a dike to flood most of a city like New Orleans. Historically, the best way engineers have found to deal with such chain-like systems is to design and build them consistently, to uniform plans, and perform a rigorous and thorough quality-control inspection to make sure every single part of the system is up to snuff.
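A quick back-of-the-envelope calculation shows why such weakest-link systems are so unforgiving. If each segment of a levee system rides out a storm independently with some probability, the chance that the whole line holds is the product of all those individual probabilities, so even very reliable segments multiply out to a shaky system. The short Python sketch below uses made-up numbers purely to illustrate the arithmetic; it is not based on any actual levee statistics.

```python
# Illustrative numbers only, not actual levee statistics.
# A chain-like system fails if any single segment fails.

def system_reliability(p_segment: float, n_segments: int) -> float:
    """Probability the whole chain holds, assuming independent, identical segments."""
    return p_segment ** n_segments

if __name__ == "__main__":
    for p in (0.999, 0.995, 0.99):
        print(f"segment reliability {p:.3f} -> "
              f"whole-system reliability {system_reliability(p, 100):.2f} "
              f"with 100 segments")
```

With 100 segments, a per-segment reliability of 99.9 percent still leaves roughly a one-in-ten chance of a breach somewhere along the line, which is why uniform construction and inspection of every last mile matters so much.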
Unfortunately, it appears that the political structure of New Orleans at least partly militated against such a procedure. Although the U. S. Army Corps of Engineers had overall responsibility for the integrity of the flood-control system for New Orleans, there were also state and local authorities whose job it was to inspect and maintain parts of the system. I almost wrote "critical parts," but in a system of dams, every single part is just as critical as every other part. In the nature of things, some parts of the system received better attention than others. But Katrina went for the weak spots regardless of politics, and the result filled New Orleans with filthy water and emptied it of people.
The good news I referred to above is that no one now needs convincing that the old way of doing flood-control business along the Mississippi, and especially in New Orleans, doesn't work. There were many technical problems with the levees, such as inadequate construction and a failure to take into account the poor quality and subsidence of the soil. People are now discussing the construction of "fail-safe" levees that have secondary landfill areas behind them, but of course that takes up valuable real estate. What should result from the sad images we saw of flooded New Orleans is a revitalized and chastened Corps that will coordinate with reorganized state and local authorities to do a good job next time. It will take money and political will, but the alternative is too fresh in our minds to allow anything less—at least for the next thirty years.
Sources: The U. S. Army Corps of Engineers draft report released June 1, 2006 is currently online at https://ipet.wes.army.mil/. A personal recollection of the 1927 Mississippi floods is contained in the memoir Lanterns on the Levee: Recollections of a Planter's Son by William Alexander Percy, who was author Walker Percy's uncle.