Monday, February 25, 2008

Discounting Global Warming, Revisited

Running this blog is a pretty one-sided deal most of the time. Every week I send out some thoughts into the blogosphere, and rarely do I get a response. But last week's post about applying the economics of discounting to global warming got not just one, but two responses, both making similar criticisms. For this blog, that amounts to a storm of controversy, and I can't resist responding. But first, let me summarize the criticisms.

The first post (to be found under Nov. 19, 2007's "Yahoo Pays. . . ", to which it refers) accuses me of being either "sloppy or inconsistent." Here is some of what it says: "In the post about Yahoo, you get wrought up about the company not doing more to protect their [the Chinese citizens'] identity for engaging in free speech, but in "Should we discount global warming?" you advocate using a discount rate even though some of that $50 billion is lost lives due to less reliable weather, increased flooding, and more famine. (NOT jail time, death.) . . . . So should Yahoo continue its economic discounting, knowing that the occasional customer is jailed; or should the Yahoo-wannabes stop counting human suffering in dollars?"

The second post responding to last week's blog, signed "Cousin Mike" (yes, he is my cousin) says this, among other things: "A courtroom-drama movie once depicted an auto manufacturer as having made a conscious decision not to fix a problem with their brakes because they calculated economically that it was less expensive to pay off claims to people killed by the brake failures than to fix the flaw. The movie-makers obviously wanted the audience to view such conduct as morally odious, and I agree . . . . I know that if we really thought every life was infinitely valuable, we'd build autos like bumper cars, incapable of a fatal crash . . . . But it still gives me chills to think that the economically correct engineering solution to global warming is to leave the brakes flawed 'cause it'll cost too much money to fix."

The point these respondents are making, it seems to me, is that while I hold up certain principles as absolutes (e. g. freedom rather than jail time for Chinese users of Yahoo), when I propose discounting global warming I appear to throw all these fine moral distinctions away in favor of a cold economic calculation.

Allow me to differ.

Imagine a set of scales, like the ones Lady Justice (the gal with the blindfold) is often portrayed as holding up. If I were to do an editorial cartoon summarizing the criticisms above, it would show a pile of currency and gold coins on one pan of the scales, pulling it down, as a crowd of impoverished coastal fishermen drown in a miniature version of Hurricane Katrina on the rising pan. (You see why I don't do editorial cartoons for a living.) It looks like I'm cynically trading off money for lives. But that was not my intention.

When economic analyses are used on a large-scale problem such as global warming over a time scale of decades, the dollars involved are not exactly the same kind of thing that you pull out of your wallet. They are a symbol. Well, all money is symbolic in one sense, but what I mean is, the dollars in the global-warming discount calculation are a placeholder for the energy and wealth of nations. It isn't just dollars versus lives. It's lives versus lives, and dollars versus dollars, and Statues of Liberty versus whatever unimaginable architectural achievements the next century might bring. And if we wreck the world's economy with a misinformed economic dictatorship, its highly counterproductive effects could cost lives as well.

You want to talk lives? I'll talk lives. Malaria kills between one and three million people every year, most of them poor African youths and children, and debilitates hundreds of millions more. It is entirely possible to treat a population with prophylactic anti-malarial drugs so as to reduce the incidence of malaria to near zero. Doing so would not only eliminate an important direct cause of death, but would result in the equivalent of billions of dollars of economic stimulus to the areas affected because of the increased productivity of those who would no longer contract this disease.

I don't know what it would cost to wipe out malaria worldwide, but something similar has been done at least once: we eradicated smallpox. Say it would cost a few billion dollars. Now that few billion dollars is money that cannot be spent on reducing global warming. If you like, you can consider it as part of the money we could spend now on things other than global warming, if we buy into the economic-discounting idea that there is a reasonable and finite amount of money we should spend on global warming, and no more. And that money not spent on global warming, but spent on eradicating malaria, will absolutely save lives.

My point is, there are lives on both sides of the equation, not just dollars versus lives. What we're really talking about is the grand question of how to expend our current capital resources—natural, monetary, and most of all, human—and how much of them to expend on efforts to reduce global warming.

I have no objections to a calm, rational approach to reducing our use of fossil fuels. I think it's terrible that we fight over that black liquid that comes out of the ground in places that are inconvenient to get at, and I would love to see a coordinated global effort devoted to developing renewable energy sources that would eventually replace most of what we now use petroleum for. But the critical question is how this is to be done. I was listening to a discussion on the BBC the other morning about how air travel contributes to global warming. Both sides agreed that we had to quit burning fossil fuels to fly. To me, that poses a whole series of awkward questions. Okay, if we quit flying, how are we going to sustain the global economy? And if we keep flying without fossil fuels, how are we going to do it? The only battery-powered airplanes I know of could carry maybe a mouse, at a strain.

We saw what a hit the U. S. economy took with just a slight reduction in air travel after 9/11. Imagine what would happen to the world economy if somehow the U. N. passed a binding resolution to reduce air travel by 80% or something, and everybody stuck to it. The Great Depression in the U. S. is only a distant memory, but economic disasters are a lot more real to residents of many other countries which have suffered them more recently. If some ill-considered global-warming measure ended up putting the world economy in the tank for a few years, do you think that's not going to cost lives? And do you think the poorest and most vulnerable people won't pay the price in lost jobs and starvation? Think again.

In large measure, we are discussing imponderables, and that's one reason why talk about global warming inspires such overwrought emotions on both sides. The fact is, nobody knows exactly what would happen if we don't do anything about it, and nobody can guarantee that any given measure will avert the spectrum of catastrophes that Al Gore and company have laid out for our viewing pleasure. Like many things in life, it is a crapshoot. But we can definitely say what wrecking economies with arbitrary regulations can do, and whatever we do about global warming, we should avoid that outcome as far as possible, consistent with a measured approach to the problem.

Sources: Statistics on malaria can be found at the Wikipedia entry under "Malaria."

Monday, February 18, 2008

Should We Discount Global Warming?

No, by "discount," I don't mean "ignore altogether." What I mean is what bankers and economists mean by the word. The discount rate is an assumed interest rate that is used to make economic decisions, as anyone who has taken engineering economics will recall. And the funny thing is, although discussions of global warming invariably deal with matters fifty or a hundred years in the future, hardly anyone applies the simple economics of discount rates to the problem. When you do, the result is a surprise.

Gary S. Becker is a Nobel-Prize-winning economist who thinks any discussion of global warming should factor in a reasonable discount rate. Here is his argument in a nutshell. Suppose, for the sake of argument, that if we do nothing about global warming, fifty years from now it will cause $2 trillion of damage (technically termed "utility costs" in terms of lost income from flooded coastlands, etc.). It turns out that if you roll the tape of time back to 2008, you could pay for that $2 trillion by investing only $500 billion at a rate of return of 3 percent, which is pretty easy to do (assuming you have the $500 billion in the first place). Becker makes the point that if we went ahead now with most of the more radical proposals for doing something about global warming—reducing carbon emissions by 70%, putting big restrictions on fossil-fuel-burning technologies, and so on—they would cost a lot more than $500 billion in the next few years. If these restrictions cost, say, $1 trillion, we are being foolish by spending all that money now to avert something we could offset with half that amount.
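Becker's present-value arithmetic is easy to check directly. Here is a minimal sketch using the illustrative figures from the paragraph above ($2 trillion of damage, a 3 percent rate of return, 50 years); the function name `present_value` is just my own label for the standard discounting formula:

```python
# Present value: the sum that, invested today at a given annual rate of
# return, grows to cover a given future cost.

def present_value(future_cost, rate, years):
    """Discount a cost `years` in the future back to today at annual `rate`."""
    return future_cost / (1 + rate) ** years

# $2 trillion of damage 50 years out, discounted at 3 percent:
pv = present_value(2e12, 0.03, 50)
print(f"${pv / 1e9:.0f} billion")  # roughly $456 billion, close to Becker's $500 billion

# With a discount rate of zero, nothing is discounted at all; the future
# cost and the present cost are treated as identical:
print(present_value(2e12, 0.0, 50))  # 2e12 -- the full $2 trillion
```

The same function also shows why the choice of rate matters so much: small changes in `rate` compound over 50 years into very different present values.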

This is not an argument to do nothing. On the contrary, it is one of the few arguments I've seen on the subject that requires us to come up with some quantitative information in order to make a rational economic decision, which is what engineers do all the time. The usual approach used by advocates of extreme measures is to paint a picture of the end of civilization as we know it if we don't go green 24/7 and never allow the problem to leave our consciousness for the rest of our lives. Put more quantitatively, these folks use a discount rate of zero, which I suppose is a reasonable one if you assume that the alternative is either peace and security on the one hand by doing everything they advocate, or death to humanity on the other. If a mugger walks up to you in a dark alley, puts a knife to your ribs, and mutters, "Your money or your life," you're not likely to deliberate a long time before handing over all your cash, not just some of it.

But implicit in Becker's economic argument is the assumption that, as damaging as global warming and its consequences might be, it will not be the equivalent of a giant meteor smashing the earth to bits. Its effects will be gradual, not sudden; spotty, not universally bad everywhere; and will be quantifiable in economic terms. Anything with a finite future cost can be discounted using standard economic assumptions. The rate of 3 percent that Becker uses is quite conservative—many investments in physical capital pay rates of return much higher than that. What Becker is saying is that we shouldn't stop all economic growth and divert all our resources to fighting global warming, because we're wasting resources that would pay off better if invested in other things. Wise investment in future economic growth, which over the last century has raised billions of people from poverty into something approaching a middle class, can continue to bring prosperity to future generations even in the face of problems like global warming.

Economics isn't everything, of course. If we took a poll to find out what Americans would pay to keep the Statue of Liberty from submerging (which would also flood most of the East and West Coasts), the answer would probably come out close to "whatever it takes." But engineering is about economics as much as it is about technology. And any analysis of global warming that makes unrealistic economic assumptions is simply bad engineering, whatever else you might call it.

Sources: Becker makes his argument in an essay in the Hoover Digest (2007), no. 2, published by the Hoover Institution, at http://www.hoover.org/publications/digest/7465817.html.

Monday, February 11, 2008

The Price of Life: Industrial Accidents Then and Now

The refining giant British Petroleum has been in the news again lately, and not in a good way. At the firm's Texas City, Texas refinery on Jan. 14, a worker named William Gracia died when a lid blew off a water filtration vessel during a startup procedure and hit him in the head. The day before that, BP's board of directors fired its CEO, Lord Browne of Madingley, not quite three years after an explosion at the same refinery killed 15 people and injured 170 in the worst U. S. industrial accident in a decade. Although reasons are not usually given when a CEO is dismissed, one can speculate that the disaster had something to do with Lord Browne's departure—that and the $1.6 billion the firm paid out to settle some 4,000 lawsuits, and the $1 billion repair bill to get the refinery operating again. The $22 billion in profits that BP made in 2006 puts these numbers into perspective. Or does it?

What is a human life worth? There was a time (and still is, unfortunately, in a few places) when a human life was a market commodity like any other. Fortunately, the human race has seen fit to abolish slavery nearly everywhere, but that doesn't mean that you can't figure out what a human life is worth in certain contexts.

Look at the BP situation from an economic point of view. I'm not saying that BP managers thought this way, but one way of looking at it is this. Okay, in 2005 something happened that ended up costing us an additional $2.6 billion. We might have been able to avoid that accident by spending more time and money on safety regulations, training, equipment, and so on. But who knows how much of that is enough? If we'd spent more than an extra $2.6 billion on such programs, we would have ended up cutting into our 2006 profits of $22 billion by even more than the accident did. So how much safety is enough? And at what price?

Another way of looking at it is to ask how much BP spent on settlements per worker injured or killed: an average of about $8.6 million each, it turns out. Now much if not most of that went to lawyers: BP's lawyers, the contingency-fee lawyers that workers without other financial resources have to go to in situations like this, and miscellaneous lawyers, experts, and other highly paid professionals that tend to accumulate around disasters like flies around honey. And some of it probably went to the injured and the families of those who died. Is that what a worker's life is worth? At least in this case, it turned out to be that way for BP.
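The per-person figure above is simple division. A quick check, assuming (as the paragraph does) that the $1.6 billion in settlements is spread over the 15 workers killed and 170 injured in the 2005 explosion:

```python
# BP's reported settlement total divided among the casualties of the
# 2005 Texas City explosion (15 killed, 170 injured).
settlement_total = 1.6e9
claimants = 15 + 170           # 185 people
average_payout = settlement_total / claimants
print(f"${average_payout / 1e6:.1f} million per claimant")  # $8.6 million
```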

It's interesting to contrast the way these things are handled today with the way similar casualties were handled in the 1800s. The nineteenth century was an era of ambitious construction projects: bridges, dams, tunnels. Everybody knows about the Brooklyn Bridge. You may even know that its original designer, John Roebling, had his foot crushed while doing surveys for the bridge, and died of the resulting tetanus infection. His son Washington took over, but after going into an underwater high-pressure caisson during construction of the foundations, he succumbed to decompression sickness and became an invalid. His wife Emily taught herself enough engineering to serve as his chief assistant during the rest of the bridge's 13-year construction. Although many hundreds of workers were employed on the site, the project had a relatively good safety record for the time: only 27 people died, an average of about two a year.

On the other hand, the Hoosac Tunnel project, otherwise known as the "Bloody Pit", cost 193 lives to build. This 4.75-mile railroad tunnel in Western Massachusetts served as a test bed for modern construction techniques using pneumatic drills and nitroglycerine. It was completed in 1873, three years after the Brooklyn Bridge project began.

In those days, construction-worker fatalities were regarded as regrettable, but no one appears to have thought much the worse for the companies or engineers responsible if a few workers died on the job. The general attitude was that a worker taking on a job knew it was dangerous, and it was his look-out to keep alive.

Thomas Edison was (and is) one of my heroes, but in many ways Edison held some very typical 19th-century attitudes about the safety of his employees. In a new biography of Edison by Randall Stross, I read how Edison sent people far and wide in the summer of 1880 to search for bamboo that might have fibers suitable for incandescent-lamp filaments. One of the less popular members of his lab staff was named John Segredor, a hot-tempered man who had once responded to a sarcastic remark from another staff worker by going to his rooming house and getting a gun. Edison sent Segredor on an odyssey first to Georgia, then Florida, and finally to Cuba in search of different varieties of bamboo. Three days after his arrival in Cuba, Segredor died of yellow fever. In a private letter about the matter, Edison blamed Segredor for his own death, saying he was careless about drinking cold drinks in hot places "and this I doubt not caused his death." No lawsuits there, it seems.

Ideally, nobody would die in industrial accidents, or any other kind, for that matter. Considering the much larger number of people engaged in modern industry today compared to a hundred years ago, it is likely that the accident and fatality rates in modern industry are much lower than comparable rates in the 1880s. And at least in the U. S., our attitudes are much harsher nowadays toward the companies and executives who are involved in industrial accidents. True, the enforcement mechanism is largely a private-enterprise affair using the civil justice system and freelance contingency-fee lawyers, but I suppose free-market justice is better than no justice at all. Wouldn't it be nice if the lawyers ended up with nothing to do because nobody was dying of industrial accidents anymore? We should still hold out the ideal of no accidents or injuries due to technical causes as a goal to strive for. But for a long time to come, I think, there will be more to be done.

Sources: The latest BP accident is described in the San Francisco Examiner online edition at http://www.examiner.com/a-1160942~BP__victim_s_family_probing_fatal_Texas_City_refinery_accident.html. Lord Browne's departure and the BP financial statistics were carried in an article on the Ergoweb website, an ergonomics services company, at http://www.ergoweb.com/news/detail.cfm?id=1693. I also consulted Wikipedia articles on the Brooklyn Bridge and the Hoosac Tunnel. The Segredor incident is recounted on p. 110 of Stross's The Wizard of Menlo Park (New York: Crown, 2007).

Monday, February 04, 2008

If You Can't Trust the Experts. . .

Being an expert at something is both a privilege and a responsibility. Experts who abuse their special abilities make things harder for experts who follow the rules. There's nothing new about these ideas. But the experts who follow the rules often get ignored in the flaps over experts who violate the rules.

Let me get specific. David Kravets of Wired reports in his Threat Level column that four Swedish men have been charged with facilitating copyright infringement. Seems that they operate a "BitTorrent tracking site" called The Pirate Bay. According to Wikipedia, BitTorrent is a type of peer-to-peer network protocol that makes it easier to download large amounts of data through the Internet. Instead of requiring the user to receive an entire file from one central server, BitTorrent allows the user to get pieces of the file from multiple locations and assemble them later, making the whole process easier and often faster. Although the protocol can be used for almost any type of file, it is often used to obtain pirated copies of movies and software.

The Pirate Bay's operators claim they have spread their operation out so far with third-party intermediaries that they don't even know where the servers are. According to the report in Wired, they seem to think they're doing nothing wrong, and certainly aren't making money at it. If you had to boil down their motivation to one sentence, it might be something like "every bit deserves to be free."

This situation is a good example of what I'd call "technology gone bad" in the sense that some people have taken a clever and useful technological idea—BitTorrent protocols, in this case—and used it for, at best, quasi-legal purposes. Who are the injured parties in a case like this?

Copyright owners such as giant media and software companies will be quick to point out that they are losing revenue every time somebody gets a "free" copy of content via The Pirate Bay rather than through legitimate channels. And since the companies' revenue has to be made up somehow in order for them to stay in business, this leads to higher prices for everybody who gets the stuff legally. And there's your second group of wronged individuals: the consumers of legitimate content who have to pay more for it.

But one group that is often ignored in analyses of this kind of thing is the experts, such as yours truly, whose legitimate operations may be hampered or stifled altogether by draconian or ill-considered regulations. Although I don't think this will happen, it might come about that the corporate interests who dislike the illegal applications of BitTorrent could push through some sort of binding regulation that would make the whole protocol illegal. That sounds almost unenforceable—the notion that simply having a protocol on your computer, without using it, would make you liable to jail time—but there are precedents in the area of child pornography. It is illegal simply to have child pornography on your computer, and if it's found, you can go to jail.

I have no argument against making child pornography illegal, but when you start getting into technologies where most users are legitimate technical people going about their harmless business, there's a real problem. I'm facing a situation like that right now. For a research project I'm engaged in, it turns out I would like to convert so-called "NTSC analog video" (the standard that's going to disappear from U. S. airwaves in about a year) to digital video. I'm not copying anything—I'm generating the video myself—and my need to convert analog to digital video is a legitimate research requirement. But I have had a heck of a time finding any equipment to do it. I mentioned this to my wife, and she said, "Well, sure. People are wanting to take their old analog VHS tapes and turn them into DVDs illegally." Yes, that can be done with this equipment I want, but I don't want to do that.

After much web searching, I found two companies that make such a device, or used to. Oddly (and somewhat suspiciously), both firms have either removed all mention of the units from their websites altogether, or have put up a big notice saying "This product has been discontinued." Fortunately, I think I have found a supplier who still has some in stock, and I'm waiting to find out if I can get one. But it's beginning to look like some corporation or trade group's lawyer has been sending out letters threatening legal action if such devices aren't withdrawn from the market.

Of course, maybe I'm just being paranoid. But whenever a few experts turn to unethical practices, remember that the damage goes beyond the people directly involved. All the other experts who use the same technology for legitimate reasons may be inconvenienced, or worse, when corporations and their lawyers overreact and cripple or ban an entire useful technology because of the malfeasance of a few bad actors. I hope I can get my video converter unit, but if I can't, I may have folks with attitudes like those of The Pirate Bay guys to thank.

Sources: The article describing The Pirate Bay's latest legal troubles is dated Feb. 1 and can be found at http://blog.wired.com/27bstroke6/2008/02/the-pirate-bay.html.

Monday, January 28, 2008

One Laptop Per Reviewer

A few months ago (in "One Laptop Per Child: Will It Fly?" on Oct. 22, 2007), I commented on the XO laptop designed by some MIT folks who want to bring the benefits of computer technology to millions of children in third-world countries. It's now been long enough for several reviewers to write independent judgments of the unit, and the results are interesting, to say the least.

Andrew "bunnie" Huang, a recent MIT Ph. D. graduate who writes a blog on computer hardware, thinks the mechanical design of the unit is "brilliant." He was impressed by clever little tricks such as the way the designers used the WiFi antennas to fold down and seal the ventilation holes when the unit's not being used. Along with several other reviewers, Huang liked the way the screen remains visible even in bright sunlight—an intentional design choice that makes the unit usable in outdoor settings.

Huang was less impressed with the software, which consists mainly of a custom-tailored word processing program, a web browser, and a few games. The games ran okay, but the web browser was challenged by all but the simplest websites, and the keyboard, a sealed-membrane type, was tiring to use for more than a few minutes.

Since the XO is designed for children, several reviewers turned the unit over to their kids to see what would happen. This is hardly a fair test of how the device will fare in Ulan Bator or Rwanda, because the children of people who write computer reviews for a living are going to be a little more tech-savvy than your average child in a developing country. Not surprisingly, the reports from the younger set were mixed. One kid liked the "squishy" feel of the membrane keyboard, but gave up on the gizmo when he found he couldn't use some functions on one of his favorite websites. In order to get her XO to work properly, another reviewer had to face the challenges of downloading a new operating system from the OLPC website. She pointed out that following instructions like "At your root prompt, type: olpc-update (build-no) where (build-no) is the name of the build you would like" is not something that many non-techie adults will be able to handle. Kids adapt faster, but they have to have some initial guidance too.

Many of the units reviewed were pre-production prototypes, and so we should make allowances for that. Also, since each reviewer got only one unit, no one ever tried the mesh-networking capability. Mesh networking means that in a village with a dozen XO's, every laptop could in principle communicate with every other laptop as well as with the one internet hub in the village, all without fancy network setups or wires. We have to take the developers' word that this feature works as advertised.

Overall, the reviewers were enthusiastic about the genuinely good features—mainly hardware ones—and tried to be kind about the limitations, mainly in software and capabilities.

I've been sitting here racking my brain for an example of something like this in the history of technology which actually worked. What I'm trying to think of is a situation where a bunch of experts saw a need for a specially stripped-down version of something that was successful elsewhere in the context of a wealthy set of economies, and designed it and implemented it through government channels. And the only example I can come up with is the Trabant, which can hardly be termed an unqualified success.

For those who don't recognize the name (probably nearly everyone), the Trabant was the only car made in East Germany from 1957 to the end of the Cold War and the fall of the Berlin Wall. It had a two-cycle lawnmower-type engine, a plastic body, and could go from 0 to 60 in only 21 seconds—on good days. I remember reading in the early 1990s about a resident of East Germany who bought a "real" car and was so disgusted with his Trabant that he drove it into a dumpster and left it there. By now, the few remaining "Trabis" have become collector's items, but back when the Trabant was the only car you could buy in East Germany, demand for them outside the country was approximately zero.

Will the XO become the computer world's version of the Trabant? One reason to think not is that the XO seems to be designed better in some ways than most of today's laptops. My guess is that engineers will cherry-pick the XO's design, taking the good features and putting them into higher-end commercial models, but probably leaving the software alone. And unless some huge institution like the Department of Defense or a national government enforces its use, software that is less attractive than commercially viable products is generally doomed.

All the professional computer reviewers in the world can say nice things about the XO, but that won't make it popular among its target audience: children in the poorest parts of the world. Trying to do something about poverty—economic and intellectual—is a good thing. And it's only natural for computer experts to try to use their expertise to benefit poor people with computers. But in trying to get the technology to the people who need it, the OLPC people will have to deal with matters even more complex than open operating systems and mesh networks: the root causes of poverty, unemployment, and oppression. And the realm of those matters is not to be found in hardware or software, but in the human soul.

Sources: I consulted XO reviews by Martha Mendoza of AP (reprinted in the Jan. 28, 2008 edition of the Austin American-Statesman), Jamie and Nicholas Bsales at http://www.laptopmag.com/Review/My-8-Year-Old-Reviews-the-OLPC-XO.htm, Kenneth Barrow at http://www.notebookreview.com/default.asp?newsID=4093, and "bunnie" at http://bunniestudios.com/blog/?p=218. A story on the collector's renaissance of the Trabant can be found at http://www.nytimes.com/2007/06/17/world/europe/17trabant.html. The One Laptop Per Child website is http://laptop.org/.

Monday, January 21, 2008

Did Morality Evolve? Part 2

Last week I commented on an article by Harvard psychologist Steven Pinker about what he called the "moral instinct." Pinker reviewed scientific efforts to study moral thinking in the brain and across cultures which showed that (a) moral issues are treated differently in the brain than other kinds of thought processes and (b) there seems to be a core of moral principles or categories that show up in every culture studied. I pointed out that the second fact was noticed long before Pinker and his colleagues came along, in the form of the theory of natural law. But I left for today the question of where these core principles come from.

As a subscriber to the evolutionary origin of everything human, Pinker believes that morality is ultimately attributable to evolution. However, he is sensitive to the jaundiced eye with which the general public tends to view evolutionary psychology. As Pinker puts this dim view, "Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes."

This is all wrong, Pinker says, and he gives two reasons for why we shouldn't be afraid or concerned when people like him show us the true foundations of morality.

For one thing, the idea of the "selfish gene" is only a metaphor. Genes aren't really selfish, he says, but in order to simplify complex concepts for mass consumption, geneticists have sometimes talked about genes as though they had personality traits such as selfishness and a determination to survive. And people take this the wrong way to mean that if my genes are selfish, then I must be too, even when I think I'm being generous or self-sacrificing, because it's all really a ploy to perpetuate my genes.

Okay, but Pinker can't have things both ways in this regard. Either the idea of the selfish gene is a reality, or it is a metaphor. If we are moral and believe in the absolute rightness of certain moral principles merely because we evolved that way, then the selfish gene is more than a metaphor: it is the bottom level of reality, the ultimate explanation. And if talking about selfish genes is just a metaphor, and the reality is that genes are just molecules, then what does that make people? Just bigger collections of molecules. And if genes can't be selfish in any meaningful sense, why can the larger collections of molecules called people be selfish, or moral, or anything else other than passive followers of physical law?

To his credit, Pinker senses these questions at some level, because next he asks with regard to the idea that morality evolved, ". . . where does it leave the concept of morality itself?" Does it have a real, objective existence independent of genes or evolution, or is it just foam on the ocean of evolved life, a superficial feature that would cease to exist if the evolved creatures called human beings died out?

Pinker notes that many people attribute the origin of moral principles to God. Then he misapplies what is known to philosophers as the "Euthyphro dilemma." Euthyphro is the title of one of Plato's dialogues, which describes a conversation between Socrates and a young man named Euthyphro who wants to prosecute his own father for murder. Disrespect for elders was an impiety in Greek society, but so was murder—hence the dilemma. Socrates asks why the moral or pious act is regarded as moral or pious: "Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?"

Pinker takes this dilemma and uses it as a supposedly bulletproof response to anyone who claims a divine origin for morality. And he does it not by asserting anything, but by throwing up a cloud of questions which he leaves to the reader to answer in the desired way: "Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist?"

After disposing of the God alternative, Pinker admits that maybe moral principles have a kind of Platonic existence "out there," like the truths of mathematics. Even atheists can believe in the Pythagorean Theorem, and Pinker seems to be comfortable with the idea of "moral realism"—the notion that maybe there really are moral principles that we discover, but which would be there even if people weren't around to understand them. And he winds up by saying that maybe we'll behave better if we understand where our morality comes from and how our bodies work when we deal with moral issues.

If Pinker had looked a little more seriously at the Euthyphro dilemma, he would have realized that Socrates didn't so much dispose of the idea of a divine origin for morality as he tried to lead Euthyphro to a deeper understanding of what piety is. Philosophers still discuss various ways of concluding the Euthyphro argument, which is by no means universally regarded as a knockout response to the contention that God invented morality.

If one believes in a God outside the natural universe and time, a God who created everything, then morality must be one of the things God created. Philosophers like to pose "what-if" questions that are titillating to our intellects, but often these questions disregard the character of the personalities involved. My own answer to the question of whether God would suddenly turn around and make the good today bad tomorrow is that "God wouldn't do a thing like that." Maybe in some abstract God-of-the-philosophers world, such a thing is a logical possibility. But those who know God, which is just an extension of how one person knows another, know that God doesn't act that way. Never has and never will.

So a viable alternative to Pinker's Platonic moral realism is a theologically informed belief that somehow—perhaps by using evolutionary processes—God wrote the moral law on our hearts. Either way, I can say along with Pinker that we didn't just make it up by ourselves.

Sources: Pinker's article can be found at http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html. Both Wikipedia ("Euthyphro dilemma") and the Stanford Encyclopedia of Philosophy ("Religion and morality") have good discussions of the Euthyphro dialogue and its implications. The quotation from Socrates above was taken from the Wikipedia article.

Monday, January 14, 2008

Did Morality Evolve? Part 1

Every now and then I like to ruminate on the paradox of engineering ethics. Modern engineering is founded on the principles of objectivity, the scientific method, and the rule of accepting only ideas that can be defended by logical arguments based on observations and measurements. But the foundations of ethics and morality look very different, to say the least. So how can you do engineering ethics without betraying the principles of either engineering or ethics?

The latest stimulus to re-examine this topic came in the form of an article in the Jan. 13 online edition of the New York Times Magazine by Steven Pinker. Pinker holds a chair in the Department of Psychology at Harvard University. He is a comparative rarity among academic psychologists in that he writes clearly and actually listens to the arguments of his opponents. In "The Moral Instinct," Pinker surveys the rapidly advancing science of studying moral behavior by using the tools of experimental psychology.

One of the most interesting recent findings is that the brain has a kind of morality switch built into it. Psychologists can study the activity of particular areas of the brain by using a technique called functional MRI, which shows a picture of brain regions that are taking up more oxygen and presumably working harder. A region called the "dorsolateral surface of the frontal lobes" handles rational thinking such as trying to balance your checkbook without a calculator. On the other hand, the medial frontal-lobe regions deal with emotions about other people—a morality switch that gets turned on at some times but not others.

In one study, the researchers posed a series of moral dilemmas to the subjects and asked them to decide what to do. One question—call it the utilitarian question—involved throwing a streetcar-track switch to save five workers' lives by sending a runaway streetcar to run over a sixth worker. Another question—call it the emotional question—was basically the same dilemma, but instead of throwing a switch, the subject had to decide whether to throw a fat man off a bridge. Of the tests that were not spoiled when the subject laughed so hard at the questions that he fell out of the chair and away from the fMRI machine, the researchers found that only the rational part of the brain got involved when the critical act was just throwing a switch. But when the subject had to imagine walking up to a living, breathing man and throwing him to his death, even if it would save five other lives, the emotional part of the brain lit up and got into a fight with the rational part, which also woke up a third part of the brain that acts as a kind of referee between conflicting signals.

The point of this is that psychologists can now use fMRI and other techniques to distinguish between questions and issues that we use mainly rational thinking to answer, and ones which we respond to by appealing to a more basic, non-rational process that Pinker calls the "moral instinct." And Pinker says some very interesting things about this instinct.

For one thing, studies of people from all walks of life and from a variety of cultures all indicate that there may be a core of instinctive moral beliefs that we all have in common. The very fact that Pinker is willing to admit this shows that he is not captive to the "morality-is-subjective" school of thought which has flourished in academia in recent years. Pinker says what he says, not because of any ideological conviction, but because survey and laboratory data from all over the world confirm it. He cites the work of another psychologist, Jonathan Haidt, who says there are basically five categories of moral principles that cover most of the ground for everybody. What are they?

Without going into too much detail, here's the list: (1) Harm—don't hurt other people and help them if you can. (2) Fairness—people in comparable situations should be treated comparably. (3) Group loyalty—other things being equal, take care of your own (family, friends, city, nation) first. (4) Authority—there are rules, rulers, and rulemakers who should be respected and deferred to. (5) Purity—saintliness, cleanliness, and being without spot or blemish are good things, and grubbiness, filth, and disorder are bad ones.

Pinker says a lot more, but perhaps I will save some of it for next week. I'd like to stop right there and note that what Pinker and his psychological colleagues are doing is searching for experimental validation of something called natural law. And it looks to me like they've found it.

Natural law is the idea that certain principles of morality are not simply agreed upon by mutual consent, but somehow inhere in the nature of things. And not only that, but in some sense these principles of natural law are built into human nature. The idea of natural law goes back at least to St. Thomas Aquinas, who saw it as something God put into all human beings, whether or not they believed in God. It was viewed as a strong basis for human laws until the Enlightenment, when other philosophies of law became more popular. But natural law still has its defenders in the legal profession, political science, and religion.

One of the most articulate defenses of natural law was written in 1943 by C. S. Lewis, the Oxford literary scholar and author. In a small book called The Abolition of Man, Lewis appended a list of what he discerned to be the central principles of what he called the "Tao" or universal laws of morality. Lewis's "Law of General Beneficence" and his "Law of Mercy" look a lot like the moral principle pertaining to Harm above. His "Duties to Parents, Elders, Ancestors" pertain to the principle of Authority, and you can link Lewis's "Duties to Children and Posterity" and his "Law of Special Beneficence" (that is, to family, country, etc.) to the Group Loyalty principle above.

How did Lewis come up with a list that overlaps in so many ways with the product of the latest modern psychological research? By studying the writings of ancient cultures: Babylonia, Egypt, China, and the Norsemen, among others. Pretty good for a guy with no research funding or graduate assistants, way back in the dark ages of 1943.

The point of this little lesson is that ethics and morality, far from being founded on criteria that are purely subjective, and therefore culturally bound and changeable, seem to come from a source that is pretty constant in its basic outlines across time, space, and cultures. And the latest deliverances of modern experimental psychology back up that idea. We will say more about Pinker's article next week, but this point is worth pondering till then.

Sources: Pinker's article appeared at http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html. Besides Lewis and his The Abolition of Man, another highly readable proponent of natural law is J. Budziszewski, professor of political science at the University of Texas at Austin and author of Written On the Heart: The Case for Natural Law (1997).

Monday, January 07, 2008

NASA's Air Transportation Safety Survey: Light, Heat, and Fog

Regular readers of this blog know that NASA is not my favorite government agency. Once upon a time in the 1960s, it had a clear mission, attracted some of the world's best professionals, and landed men on the moon. But since then the organization has alternated between focused, clear projects (space telescopes such as Hubble come to mind) and disasters ranging from the tragic (Challenger and Columbia) to the merely expensive (I could cite numerous space-probe projects that went awry here). The disasters have won a place of prominence for NASA in most engineering ethics textbooks, which usually use the Challenger disaster as an example of how bad management can kill people.

Well, it is my sad duty to comment on yet another episode of what looks like a good idea gone awry due to internal conflicts, bickering, and bad management inside NASA, plus possibly a little help from the media. After NASA started to implement what promised to be a great idea about how to improve airline safety (there's the light), the agency got in a tussle with freedom-of-information-act requestors, NASA head Michael Griffin intervened and took hostile questions from Congress and other agencies (there's the heat), and finally released the data in a close-to-unusable form (there's the fog).

First, the light. Everybody familiar with engineering ethics problems knows that for every major disaster (a bridge that actually falls down, a spaceship that crashes), there are dozens to hundreds of lesser problems and issues that, if noticed and properly acted upon, can serve as warnings about some truly major problems that can then be prevented. Knowing this, some clever people at NASA and outside it (notably a questionnaire expert at Stanford named John Krosnick) organized a big telephone survey of thousands of airline pilots and did interviews from 2001 to 2004, asking them about potentially hazardous incidents that they had personal experience with. This was called the National Aviation Operations Monitoring System.

The normal way that the Federal Aviation Administration (an agency separate from NASA) finds out about near-misses and so on is when pilots file reports on them. Apparently there are rules about when a pilot is supposed to report a near accident, but if pilots are human (most of them are, anyway), they probably don't always follow such rules. If other pilots are involved, the whole process smacks somewhat of ratting on one's colleague, and I suppose there is no reward for reporting these things other than the knowledge that you're following the rules. Anyway, to my knowledge, that is the only current mechanism for detecting incidents that might tell us about dangerous trends having to do with new equipment or procedures, for instance, that might lead to serious accidents in the future.

The NASA-sponsored survey project was an advance on this method. It didn't just wait for pilots to report incidents—it went out and asked about them in random phone surveys. In the nature of things, this kind of survey will turn up more data than one that relies upon the pilot's initiative to write up and submit a report. But there are ways of calibrating out that difference and arriving at something close to the truth, if the survey is checked by other means and completed under the supervision of qualified experts such as Prof. Krosnick.
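To make that calibration idea concrete, here is a toy sketch in Python. Every number in it is invented for illustration; the real survey analysis would be far more sophisticated, but the basic logic is just comparing two rates.

```python
# Toy calibration sketch (all numbers invented for illustration).
# A random survey and voluntary filings measure the same underlying
# incident rate, so their ratio estimates how badly voluntary
# reporting undercounts.
surveyed_pilots = 8_000
incidents_recalled = 1_200            # incidents described in interviews
survey_rate = incidents_recalled / surveyed_pilots

pilot_population = 60_000             # assumed total pilot population
voluntary_reports = 3_000             # reports filed through normal channels
reported_rate = voluntary_reports / pilot_population

underreporting_factor = survey_rate / reported_rate
print(f"survey rate:   {survey_rate:.3f} incidents per pilot")
print(f"reported rate: {reported_rate:.3f} incidents per pilot")
print(f"voluntary reporting undercounts by roughly {underreporting_factor:.1f}x")
```

With these made-up figures, the survey rate is three times the voluntarily reported rate, which is the kind of discrepancy that reportedly alarmed NASA when the real numbers came in.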

Well, that didn't happen. Or if it did, we don't know about it yet. Evidently, when the numbers of incidents reported by pilots through the phone survey turned out to be a lot higher than the numbers the FAA was getting, some news media people got wind of the information and submitted requests for it under the Freedom of Information Act. Now if I were in NASA's shoes, this might give me some pause, admittedly. It takes a certain amount of time to process and analyze data, but it seems like with computer-aided methods, a year or two should be enough for the survey investigators to write up and issue a report. No report was issued. Why is not clear, except that NASA is quoted as saying it didn't want to harm the airline industry. Well, fine, but crashes harm the airline industry too, and if this data can be used to improve the already good airline safety record further, it's a shame that NASA has sat on it so long.

In congressional hearings about the matter held last October, NASA head Griffin promised to release some data from the project by the end of the year. He kept the letter of his promise, anyway, by posting a 16,000-page .pdf document somewhere on NASA's website on Dec. 31, 2007. A number of indications show that NASA was not especially eager for people to do anything with this data.

For one thing, the news release announcing the document said it was to be found at NASA's website, "http://www.nasa.gov." For anyone familiar with NASA's huge and almost Byzantine website, that's like saying "It's in Arkansas." Your scribe spent fifteen minutes looking for it there and with Google, without success. This is not to say it's not there—the Associated Press people found it, but they're paid to do things like that. A search with NASA's own website search engine for "National Aviation Operations Monitoring System," done while I was looking at that very phrase in one of their own news releases on their own website, turned up zero results. Go figure.

What I figure is what many news outlets have concluded: for some reason, possibly the one NASA stated about fear of scaring customers away from airlines, they are reluctant to make these results public or useful in any meaningful way that could actually serve the original purpose of the survey, which was to come up with a better way of catching potential airline accidents before they become real ones. So we have a situation where $11 million of the taxpayers' money has been spent on a media flap and a release of data in a form that one of the survey's own designers—Prof. Krosnick—says is intentionally designed to mislead anyone who tries to use it.

After one of the old movie comedy team Laurel and Hardy's epic screwups involving ropes, stairs, ladders, cream pies, a piano, and a goat, Oliver Hardy would turn to Stan Laurel and say, "Well, Stanley, this is a fi-i-i-ne mess!" That about covers this latest NASA episode. The best thing I can say about it is that nobody got killed, although if it had been done better, we might have been able to prevent some fatalities in the future.

Sources: Two news reports on the NASA data release are at the Houston Chronicle website http://www.chron.com/disp/story.mpl/front/5414060.html and the Chicago Tribune website http://www.chicagotribune.com/news/local/sns-ap-air-safety-secrets,0,3362253.story. NASA's own news release announcing the data is at http://www.nasa.gov/home/hqnews/2007/dec/HQ_M07191_NAOMS_advisory.html. If a sharp-eyed or patient reader locates the actual URL where the NASA survey data is available, I would appreciate it if you could send it to me so I could mention it in a revised blog.

Monday, December 31, 2007

Threats, Rumors, the Internet, and Banks

Well, it's finally happened. I am in possession of some information which may be completely unreliable, but on the other hand is not public knowledge. And it has something to do with engineering ethics, broadly defined. (That's the only way it's defined in this blog—broadly.)

Here it is: About six weeks ago, a U. S. Congressperson went around telling a few of her friends to get as much money out of the bank as they could, since the credit and banking computer systems were under a significant terrorist threat. One of the people the Congressperson told, told my sister, and yesterday my sister told me. (That's pretty stale news for an Internet blog, I realize, but hey, I use what I can get.) It's quite possible that the threat, if it ever existed, has disappeared by now. But it did stimulate me to ask the question, "What are the chances that a concerted terrorist attack on the credit and banking computer systems would succeed in shutting down the U. S. economy?"

So far, in the very limited research I've done, I can't find anybody who has addressed that question recently in so many words. But I turned up a few things I didn't know about, and so I'll share them with you.

The vast majority of cybercrimes committed in this country result not in nationwide crises, but in thousands or millions of consumers losing sums varying from a few cents to thousands of dollars or more. False and deceptive websites using the technique known as "phishing" capture much of this ill-gotten gain. These can range from quasi-legal sites that simply sell something online that's available elsewhere for free if you just look a little harder (I fell for this one once), down to sophisticated sites that imitate legitimate organizations such as banks and credit card companies with the intention of snagging an unsuspecting consumer's credit information and cleaning out their electronic wallets. While these activities are annoying (or worse if you happen to be a victim of identity theft and get your credit rating loused up through no fault of your own), they in themselves do not pose a threat to the security of the U. S. economy as a whole.

What we're talking about is the cybercrime equivalent of a 9/11: a situation in which nobody (or almost nobody) could complete financial transactions using the Internet. Since a huge fraction of the daily economic activity of the nation now involves computer networks in some way or other, that would indeed be a serious problem if it went on for longer than a day or two.

The consequences of such an attack can be judged by what happened after the real 9/11 in 2001, when the entire aviation infrastructure was closed down for a few days. The real economic damage came not so much from that "airline holiday" (although it hurt) as from the reluctance to fly that millions of people felt for months afterward. This landed the airline industry in a slump from which it is only now recovering.

A little thought will show that a complete terrorist-caused shutdown isn't necessary to produce the desired effect (or undesired, depending on your point of view), even if it were possible, which it may not be, given the distributed and robust nature of the Internet. Say some small but significant fraction—even as little as 1% to 3%—of online financial transactions began going completely astray. I try to buy an MP3 file online for 99 cents, and I end up getting a bill for $403.94 for some industrial chemical I never heard of. Or stuff simply disappears and nobody has a record of it, and no way of telling if it got there. That is the essence of terrorism: do a very small and low-budget thing that does some spectacular damage and scares everybody into changing their behavior in a pernicious way. If such minor problems led only ten percent of the public to quit buying things, you'd have an instant recession.

Enough of this devil's advocacy. Now for the good news. There is an outfit called the Financial Services Information Sharing and Analysis Center (FSISAC). It was founded back in 1999 to provide the nation's banking, credit, and other financial services organizations with a place to share computer security information. Although it has run across some roadblocks—in 2002, one Ty Sagalow testified before the Senate about how FSISAC needed some exemptions from the Freedom of Information Act and antitrust laws in order to do its job better—the mere fact that six years after 9/11, we have not suffered a cyberterrorist equivalent of the World Trade Center attacks says that somebody must be doing something right.

You may have seen the three-letter abbreviation "SSL" on some websites or financial transactions you have done online. That stands for "Secure Sockets Layer," and if you've been even more sharp-eyed and seen a "VeriSign" logo, that means the transaction was safeguarded by FSISAC's service provider, VeriSign, of Mountain View, California. I'm sure they employ many software engineers and other specialists to keep ahead of those who would crack the security codes that protect internet financial transactions, and it's not an easy job. But as bad as identity theft or phishing is these days, it would be much worse without the work of VeriSign and other similar organizations.

If the truth be told, much cybercrime is made easier by the stupid things some consumers do, such as giving out their credit card numbers and passwords and social security numbers to "phishy"-looking websites, or in response to emails purporting to be from their bank or credit card company. Any financial organization worth its salt guards passwords and the like as if they were gold, and never has to stoop to the expedient of emailing its customers to say, "Oh, please remind us of your password again, we lost it." But as P. T. Barnum is alleged to have said, no one has ever gone broke underestimating the intelligence of the American public. Or maybe it was taste, not intelligence. Anyway, don't fall for such stunts.

The FSISAC has a handy pair of threat level monitors on their main website, with colors that run from green to blue, yellow, orange, and red. As of today, the risk of cyber attacks is blue ("guarded") and the risk of physical terrorist attacks is yellow ("elevated"). I'm not sure what you're supposed to do with that information, but you might sleep better tonight after the New Year's Eve celebration knowing that your online money and credit are—reasonably—safe. Happy New Year!

Sources: The FSISAC website showing the threat-level displays is at http://www.fsisac.com/. VeriSign's main website is http://www.verisign.com/. Mr. Sagalow's testimony before the U. S. Senate in May of 2002 is reproduced at http://www.senate.gov/~govt-aff/050802sagalow.htm.

Wednesday, December 26, 2007

Let There Be (Efficient) Light

Like many of us, the U. S. Congress often puts off things till the last minute. Last week, just before breaking for the Christmas recess, our elected representatives passed an energy bill. Unlike earlier toothless bills, this one will grow some teeth if we wait long enough and don't let another Congress pull them first. Besides an increase in the CAFE auto-mileage standards, the bill will make it illegal by 2012 to sell light bulbs that don't meet a certain efficiency standard. And most of today's incandescents can't meet the mark.

Now what has this got to do with engineering ethics? You could argue that there are no ethical dilemmas or problems here. You could say it's legal, and therefore ethical, to design, make, and sell cheap, inefficient light bulbs right up to the last day before the 2012 deadline, and that thereafter it will be illegal, and therefore unethical, to do so. No ambiguities, no moral dilemmas, cut and dried, end of story. But simply stating the problem that way shows that more thought has to be put into it than that.

For example, systems of production and distribution don't typically turn on a dime. One reason the legislators put off the deadline five years into the future is to give manufacturers and their engineers plenty of time to plan for it. And planning, as anyone who has done even simple engineering knows, is not always a straightforward process. To the extent that research into new technologies will be required, planning can be highly unpredictable, and engineers will have to exercise considerable judgment in order to get from here to there in time with a product that works and won't cost too much to sell. That kind of thing is the bread and butter of engineering, but in this case it's accelerated and directed by a legal mandate. And I haven't even touched the issue of whether such mandates are a good thing, even if they encourage companies to make energy-efficient products.

In the New York Times article that highlighted this law, a spokesman for General Electric (whose origins can be traced directly back to incandescent light-bulb inventor Thomas Edison) was quoted as claiming that his company is working on an incandescent bulb that will meet the new standards. Maybe so. There are fundamental physical limitations of that technology which will make it hard for any kind of incandescent to compete with the compact fluorescent units, let alone advanced light-emitting diode (LED) light sources that may be developed shortly. But fortunately, Congress didn't tell companies how to meet the standard—it just set the standard and is letting the free market and its engineers figure out how to get there.

I have not seen the details of the new law, but I assume there are exemptions for situations where incandescents will still be needed. For example, in the theater and movie industries, there is a huge investment in lighting equipment that uses incandescents which would be difficult or impossible to adapt to fluorescent units for technical reasons. It turns out that the sun emits light that is very close to what certain kinds of incandescent bulbs emit, and for accurate color rendition the broad spectrum of an incandescent light is needed. And I have a feeling—just a feeling—that, like candles, incandescent light bulbs will be preserved in special cultural settings: displays of antique lighting and period stage sets, perhaps. Surely there will be a way to deal with that without resorting to the light-bulb equivalent of a black market.

But most of these problems are technical challenges that can be solved by technical solutions. One of the biggest concerns I have is an esthetic one: the relative coldness of fluorescent or LED light compared to incandescent light. This is a matter of the spectral balance of intensity in different wavelengths. For reasons having to do with phosphor efficiencies and the difficulty of making red phosphors, it's still hard to find a fluorescent light that has the warm reddish-yellow glow of a plain old-fashioned light bulb, which in turn recalls the even dimmer and yellower gleam of the kerosene lantern or candle. Manufacturers may solve this problem if there seems to be enough of a demand for a warm-toned light source, but most people probably don't care. For all the importance light has to our lives, we Americans are surprisingly uncritical and accepting of a wide range of light quality, from the harsh glare of mercury and sodium lamps to the inefficient but friendly glow of the cheap 60-watt bulb. I'm not particularly looking forward to getting rid of the incandescent bulbs in my office that I installed specially as a kind of protest against the harsh fluorescent glare of the standard-issue tubes in the ceiling. But when it gets to the point when I have to do it, I hope I can buy some fluorescent replacements that mimic that warm-toned glow, even if I know the technology isn't the same.
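For readers curious about the physics behind that warm glow, a standard rule of thumb is Wien's displacement law: a glowing body's peak emission wavelength is inversely proportional to its temperature. The short Python sketch below uses textbook figures of my own choosing (the article gives no numbers):

```python
# Wien's displacement law: peak wavelength = b / T, where b is
# Wien's constant, about 2.898e-3 meter-kelvins. The temperatures
# are typical textbook values, not data from the article.
WIEN_B = 2.898e-3  # meter-kelvins

sources = [
    ("incandescent filament", 2700),   # kelvins, approximate
    ("surface of the sun", 5800),
]

for name, temp_k in sources:
    peak_nm = WIEN_B / temp_k * 1e9    # convert meters to nanometers
    print(f"{name} ({temp_k} K): peak emission near {peak_nm:.0f} nm")
```

A filament at roughly 2700 K peaks around 1070 nm, deep in the infrared, which is why an incandescent bulb looks reddish-yellow and sheds most of its energy as heat, while sunlight at about 5800 K peaks near 500 nm, in the middle of the visible band. Matching that reddish balance with phosphors is exactly the problem fluorescent makers face.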

Sources: The New York Times article describing the light-bulb portion of the energy bill and its consequences can be found at http://www.nytimes.com/2007/12/22/business/22light.html. A February 2007 news item describing General Electric's announcement of high-efficiency incandescent lamp technology (though not giving any technical details) is at http://www.greenbiz.com/news/news_third.cfm?NewsID=34635.

Monday, December 17, 2007

Lead in the Christmas Tree Lights—When Caution Becomes Paranoia

Who would have thought? Lurking there amid the gaily colored balls, the fresh-smelling piney-woods aroma of the Douglas fir, and the brilliant sparks of light twinkling throughout, is the silent enemy: lead. Or at least, something like that must have been going through the mind of the reader who wrote in to the Austin American-Statesman after she read a caution tag on her string of Christmas-tree lights. According to her, it said "Handling these lights will expose you to lead, which has been shown to cause birth defects." Panicked, she rushed back to the store where she bought them to see if she could find some lead-free ones, but "ALL of them had labels stating that they were coated in lead! This is terrifying news for a young woman who is planning to start a family!"

The guy who writes the advice column in which this tragic screed appeared said not to worry, but be sure and wash your hands after handling the lights. He based his advice on information from Douglas Borys, who directs something called the Central Texas Poison Center.

In responding to the woman's plight, Mr. Borys faced a problem that engineers have to deal with too: how to talk about risk in a way that is both technically accurate and understandable and usable by the general public. We have to negotiate a careful passage between the rock of purely accurate technological gibberish, and the hard place of telling people there's nothing to worry about at all.

In the case of lead, there is no doubt that enough lead in the system of a child, or the child's mother before it is born, can cause real harm. The question is, how much is "enough"?

Well, going to the technical extreme, the U. S. Centers for Disease Control and Prevention issued a report in 2005 supporting the existing "level of concern" that a child's blood not contain more than 10 micrograms of lead per deciliter (abbreviated as 10 µg/dL—micrograms, not milligrams, which would be a thousand times as much). No studies have shown consistent, definitive harm to children with that low an amount of lead in their systems. Just to give you an idea of how low this is, the typical adult in the U. S. has between 1 and 5 µg/dL of lead in his or her blood, according to a 1994 report. The concern about pregnant (or potentially pregnant) women getting lead in their systems is that the fetus is abnormally sensitive to lead compared to older children and adults, although exactly how much more sensitive isn't clear, since we obviously can't do controlled experiments on pregnant women to find out.
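To make the unit bookkeeping concrete, here is a small sketch (my own illustration with hypothetical readings, not data from the CDC report) comparing blood-lead values against the level of concern, and showing why confusing milligrams with micrograms matters:

```python
# Illustrative only: compare hypothetical blood-lead readings against the
# CDC's 2005 "level of concern" of 10 micrograms per deciliter (ug/dL).
# Note the factor of 1000: a *milligram* per deciliter is a thousand
# times more lead than a *microgram* per deciliter.

LEVEL_OF_CONCERN_UG_PER_DL = 10.0

def mg_per_dl_to_ug_per_dl(mg_per_dl):
    """Convert mg/dL to ug/dL (1 mg = 1000 ug)."""
    return mg_per_dl * 1000.0

def exceeds_concern(reading_ug_per_dl):
    """True if a reading meets or exceeds the CDC level of concern."""
    return reading_ug_per_dl >= LEVEL_OF_CONCERN_UG_PER_DL

# A typical U. S. adult (1-5 ug/dL per the 1994 report) is well below it:
typical_adult = 3.0                      # ug/dL, hypothetical reading
print(exceeds_concern(typical_adult))    # False

# Misread the same number as mg/dL and it looks catastrophic:
print(exceeds_concern(mg_per_dl_to_ug_per_dl(3.0)))  # True: off by 1000x
```

The point of the toy comparison is simply that the units carry the whole story; "3" is harmless or alarming depending entirely on whether it is micrograms or milligrams.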

Now if you tried to print the preceding paragraph in a daily paper, or a blog for general consumption, or (perish the thought!) read it on the TV news, you'd probably get fired. Why? Because using phrases like "micrograms per deciliter" has the same effect on most U. S. audiences as a momentary lapse into Farsi. People don't understand it and tune you out. But unfortunately, if you want to talk about scientifically indisputable facts, you have to start with nuts-and-bolts questions such as how much lead is in a person's body and where it came from. These are things that scientists can measure and quantify, but the general public cannot understand them, at least not without a lot of help. So it all has to be interpreted.

So to go to the other extreme of over-interpretation, the expert from the poison center could have said something like, "Aaahh, fuggedaboudit! Do you smoke? Does your house have old lead paint? Do you ever drive without seatbelts, or talk on your cell phone and drive at the same time? Are you significantly overweight? If any of these things is true, you're far more likely to die from one of them than from any possible harm that might come to you or your hypothetical children from handling Christmas-tree lights with a tiny bit of lead at each solder joint, covered up underneath insulation and probably not accessible to the consumer at all under any normal circumstance."

In saying these things, the expert would have been entirely correct, but probably would have come across as less than sympathetic, shall we say. A Time Magazine article back in November 2006 pointed out that because of the way our brains process information, we tend to overreact to certain kinds of hazards and ignore others that we'd be better off paying attention to. Unusual hazards, and dangers that take a long time to show their insidious effects, worry us more than things we're used to or things that get us all at once (like heart attacks or car wrecks). The woman's worry fits both of these categories: the last thing she was thinking about as she decorated her Christmas tree was exposing herself to a poisoning hazard, and lead poisoning takes a while to show its effects.

As far as the expert's advice goes, I'd say he walked a reasonable line between the two extremes. Giving people something to do about a hazard (such as handwashing) always helps psychologically, even when the actual hazard is vanishingly small. And blowing off the danger altogether is generally regarded as irresponsible, because one of the iron-clad rules of technical discourse is that nothing is entirely "safe."

Well, here's hoping that your thoughts of Christmas and the holiday season will be uncontaminated by worries about lead or any other poison—chemical, mental, or otherwise.

Sources: The column "Question Everything" by Peter Mongillo appeared in the Dec. 17, 2007 edition of the Austin American-Statesman. The online edition of Time Magazine for November 2006 carried the article "How Americans Are Living Dangerously" by Jeffrey Kluger at http://www.time.com/time/magazine/article/0,9171,1562978-1,00.html. And the U. S. Centers for Disease Control and Prevention carries numerous technical articles on lead hazards and prevention, including a survey of blood lead levels at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5420a5.htm#tab1.

Monday, December 10, 2007

The Human Side of Automated Driving

The graphic attracted my eye. It showed a 1950s-mom type looking alarmed as she sat beside a futuristic robot driving an equally improbable-looking car. The headline? "In the Future, Smart People Will Let Cars Take Control." Which implies, of course, only dumb people won't. But I'm not sure that's what the author had in mind.

John Tierney wrote in last week's online edition of the New York Times that we are getting closer each year to the point where completely automated control of automobiles in realistic driving situations will become a reality, at least from the technological point of view. The Defense Advanced Research Projects Agency has been running driverless-car races since 2004. In that first race, despite a relatively unobstructed desert course in the American West, none of the vehicles got farther than about seven miles before breaking down, crashing, or otherwise dropping out. But this year, six cars finished a much more challenging sixty-mile urban course that included live traffic. Experts say that in five to fifteen years, using technologies ranging from millimeter-wave radar to GPS and artificial-intelligence decision systems, it will be both practical and safe to hand control of a properly equipped vehicle over to the equivalent of a robot driver for a good part of many auto trips. But will we?

There is that in humans which is glad for help, but rebels at a complete takeover. While we have been smoothly adapting to incremental automation of cars for decades, a complete takeover is a different matter. Almost nobody objected to the electric "self-starter," which was introduced in 1912 and by the late 1920s had replaced turning a crank in front of your car with turning an ignition key. (The only people who grumbled about it back then were men who liked the fact that most women were simply not strong enough to start a car the old-fashioned way, and therefore couldn't drive!) Automatic transmissions came next, and have taken over the U. S. market, though in many other countries drivers (again, men, mostly) still take pride in shifting for themselves. Power steering, power brakes, anti-lock braking, and cruise control are all automatic systems that we have adopted almost without a quibble. But I think most people will at least stop to think before they press a button that relinquishes total control of the vehicle to a computer, or robot, or servomechanism, or whatever we'll choose to call it.

And well they might hesitate. Tierney notes that automatically piloted vehicles can follow much more closely in safety than cars being driven by humans. He cites a recent experiment in which engineers directed a group of driverless passenger cars to drive at 65 m. p. h. spaced just fifteen feet apart, with no untoward results. This has obvious positive implications for increasing the capacity of existing freeways. But he doesn't say if the interstate was cleared of all other traffic for this experiment. As for safety, automatic vehicle control doesn't have to be perfect—only better than what we have now, a system in which over 42,000 people died on U. S. roadways alone in 2006, the vast majority because of accidents due to human error rather than mechanical failures.
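A little back-of-the-envelope arithmetic (my own, not from Tierney's article) shows just how small that fifteen-foot margin is at freeway speed, and what it would mean for lane capacity. The 1.5-second human reaction time and the 15-foot car length below are rule-of-thumb assumptions:

```python
# Rough arithmetic: the time gap between cars spaced 15 feet apart at
# 65 mph, and the lane capacity such close spacing would allow.

speed_mph = 65.0
gap_ft = 15.0
car_length_ft = 15.0      # assumed typical car length
human_reaction_s = 1.5    # assumed perception-reaction time

speed_fps = speed_mph * 5280.0 / 3600.0      # about 95 ft/s
headway_s = gap_ft / speed_fps               # about 0.16 s between bumpers

# Each car occupies (gap + car length) of lane, so cars passing a
# fixed point per hour:
vehicles_per_hour = speed_fps / (gap_ft + car_length_ft) * 3600.0

print(f"time gap: {headway_s:.2f} s")
print(f"lane capacity: {vehicles_per_hour:.0f} vehicles/hour")
```

That works out to roughly a sixth of a second between bumpers, an order of magnitude less than a human driver's reaction time, and a theoretical capacity several times the roughly 2,000 vehicles per lane per hour commonly cited for human-driven freeway traffic. Both numbers underline the same point: the experiment works only if every car in the platoon is a machine.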

If we are going to go to totally automatic control for automobiles, it seems like there will have to be a systematic effort to organize the times, places, and conditions under which this kind of control can be used. You can bet that the fifteen-foot-spacing experiment would have failed spectacularly if even one of those cars were driven by a human. The great virtue of machine control is that it's much more predictable than humans, who can be distracted by anything from a stray wasp to a cell phone call and do anything or nothing as a consequence. One expert imagines that we will have special total-control lanes on freeways much like high-occupancy-vehicle lanes today, and no manually controlled vehicles will be allowed inside such lanes.

That's one way to do it, certainly. But I for one look forward to the day when we have door-to-door robot chauffeurs. I would like nothing better than to get in my car, program in my destination, and then sit back and read or work or listen to music or enjoy the scenery, or in fact do any of the other things I can do right now on a train ride, which is at present practical transportation in the U. S. only in the northeast corridor. For decades we have fussed about the urban sprawl caused by the automobile and how much better things are handled (according to some) when public transportation is used instead of cars. It may be that automatic vehicle control will provide some kind of third way that will alleviate at least some of the problems caused by the automobile. If we can let go of the control thing, maybe we can do something similar with the ownership thing too, although as long as people want to work in cities and live in the country, we will have to find some way to get millions of bodies into the city in the morning and back to the country in the evening. But if we could space vehicles safely only fifteen feet apart at sixty or eighty m. p. h. on the freeways, and come up with software to deal with traffic jams and other unpredictable but inevitable problems, commuting might become safer, more fuel-efficient, and more pleasant.

Before many more of these futuristic visions happen, though, we are going to have to change some of our attitudes. There are sure to be a few drive-it-myself-or-nothing folks who will say that we'll have to pry their cold, dead fingers off the steering wheel before we can get them to agree to use totally automated driving. And if the thing isn't handled well politically, such a minority could spoil a potentially good thing for the rest of us. The right to drive your own car with your own hands on the steering wheel is one of those assumed rights that we accept almost without thinking about it, but if the day comes when it is more of a hazard than a public good, we may have to think about it twice—and then give it up.

Sources: The New York Times online article referred to appeared on Dec. 4, 2007 at http://www.nytimes.com/2007/12/04/science/04tier.html. Tierney refers to a University of California Transportation Center article by Steven Shladover published in the Spring 2000 edition of the center's Access Magazine (http://www.uctc.net/access/access16lite.pdf).

Monday, December 03, 2007

Can Robots Be Ethical? Continued

Last week I considered the proposal of sci-fi writer Robert Sawyer, who wants us to recognize robots as moral agents with rights and responsibilities. He looks forward to the day when "biological and artificial beings can share this world as equals." I said that this week I would take up the distinction between good and necessary laws regulating the development and use of robots as robots, and the unnecessary and pernicious idea of treating robots as autonomous moral agents. To do that, I'd like to look at what Sawyer means by "equal."

I think the sense in which he uses that word is the same sense in which it is used in the Declaration of Independence, which says that "all men are created equal." That was regarded by the writers of the Declaration as a self-evident truth, that is, one so plainly true that it needed no supporting evidence. It is equally plain and obvious that "equal" does not mean "identical." Then as now, people are born with different physical and mental endowments, and so what the Founders meant by "equal" must be something other than "identical in every respect."

What I believe they meant is that, as human beings created by God, all people deserve equal treatment in certain broad respects, such as the rights to life, liberty, and the pursuit of happiness. That is probably what Sawyer means by equal too. Although the origin and nature of robots will always be very different from those of human beings, he urges us to treat robots as equals under law.

I suspect Sawyer wants us to view this question in the light of what might seem to be its great historical parallel, that is, slavery. Under that institution, some human beings treated other human beings as though they were machines: buying and selling them and taking the fruits of their labor without just compensation. The deep wrong in all this is that slaves are human beings too, and it took hundreds of years for Western societies to accept that fact and act on it. But acting on it required a solid conviction that there was something special and distinct about human beings, something that the abolition of slavery acknowledged.

Robots are not human beings. Nothing that can ever happen will change that fact—no advances in technology, no degradation in the perception of what is human or what is machine, nothing. It is an objective fact, a self-evident truth. But just as human society took a great step forward in admitting that slaves were people and not machines, we have the freedom to take a great step backward by deluding ourselves that people are just machines. Following Sawyer's ideas would take us down that path. Why?

Already, it is a commonly understood assumption among many educated and professional classes (but rarely stated in so many words) that there is no essential difference between humans and machines. There are differences of degree—the human mind, for example, is superior to computers in some ways but inferior in other ways. But according to this view, humans are just physical systems following the laws of physics exactly like machines do, and if we could ever build a machine with the software and hardware that could simulate human life, then we would have created a human being, not just a simulation.

What Sawyer is asking us to do is to acknowledge that point of view explicitly. Just as the recognition of the humanity of slaves led to the abolition of slavery, the recognition of the machine nature of humanity will lead to the equality of robots and human beings. But look who moved this time. In the first case, we raised the slaves up to the level of fully privileged human beings. But in the second, we propose to lower mankind to the level of just another machine. There is no other alternative, because admitting machines to the rights and responsibilities of humans implicitly acknowledges that humans have no special characteristic that distinguishes them from machines.

Would you like to be treated like a machine? Even a machine with "human" rights? Of course not. Well, then, how would you like to work for a machine? Or owe money to a machine? Or be arrested, tried, and convicted by a machine? Or be ruled by a machine? If we give machines the same rights as humans, all these things not only may, but must come true. Otherwise we have not fully given robots the same rights and responsibilities as humans.

There is a reason that most science fiction dealing with robots portrays the dark side of what might happen if robots managed to escape the full control of humans (or even if they don't). All good fiction is moral, and the engine that drives robot-dominated dystopias is the horror we feel at witnessing the commission of wrongs on a massive scale. Add to that horror the irony that these stories always begin when humans try to achieve something good with robots (even if it is a selfish good), and you have the makings of great, or at least entertaining, stories. But we want them to stay that way—just stories, not reality.

Artists often serve as prophets in a culture, not in any necessarily mystical sense, but in the sense that they can imagine the future outcomes of trends that the rest of us less sensitive folk perceive only dimly, if at all. We should heed the warnings of a succession of science fiction writers from Isaac Asimov to Arthur C. Clarke and onward, that there is great danger in granting too much autonomy, privileges, and yes, equality, to robots. In common with desires of all kinds, robots make good slaves but bad masters. As progress in robotic technology continues, a good body of law regulating the design and use of robots will be needed. But of supreme importance is the philosophy upon which this body of law is erected. If at the start we acknowledge that robots are in principle just advanced cybernetic control systems, essentially no different than a thermostat in your house or the cruise control on your car, then the safety and convenience of human beings will come first in this body of law, and we can employ increasingly sophisticated robots in the future without fear. But if the laws are built upon the wrong foundation—namely, a theoretical idea that robots and humans are the same kind of entity—then we can look forward to the day that some of the worst of science fiction's robotic dystopias will happen for real.

Sources: Besides last week's blog on this topic, I have written an essay ("Sociable Machines?") on the philosophical basis of the distinction between humans and machines, which I will provide upon request to my email address (available at the Texas State "people search" function on the Texas State University website www.txstate.edu).

Monday, November 26, 2007

Can Robots Be Ethical?

Earlier this month, Canadian science-fiction writer Bob Sawyer attracted a lot of attention with an editorial he wrote for a special robotics issue of the prestigious research journal Science. In his piece, Sawyer showed that writers of science fiction have been exploring the relationship between humans and robots at least since the early stories of Isaac Asimov in the 1940s. But far from coming up with a tidy solution to the moral implications of autonomous, seemingly intelligent machines, the sci-fi crowd appears to have concentrated on the dismal downsides of what could go wrong with robots despite the best intentions of humans to make them safe and obedient. Think Frankenstein, only with Energizer-Bunny endurance and superhuman powers.

Nevertheless, Sawyer is an optimist. He applauds the efforts of South Korea, Japan, and the European Robotics Research Network to develop guidelines for the ethical aspects of robot use, and chides the U. S. for lagging in this area. He uses phrases like "robot responsibilities and rights" and speculates that the main reason this country hasn't developed robot ethics is that many robots or robot-like machines are used by the military. He wants us specifically to explore the question of whether "biological and artificial beings can share this world as equals." He winds up with the hope that we might all aspire to the outcome of a 1938 story in which a man married a robot. That is, he looks forward to the time that all of us, like the lovers in countless fairy tales, can be "living, as one day all humans and robots might, happily ever after."

Well. Hope is a specifically human virtue, and is not to be thoughtlessly disparaged. But Sawyer has erred in blurring some vitally important distinctions that often get overlooked in discussions about the present and future role of robots in society.

I do not know anything about Sawyer's core beliefs and philosophy other than what he said in his editorial. But I hope he writes his fiction more carefully than he writes editorials.

The key question about any machine we call a "robot" is whether it is under the control of a human being, and to what extent that control is exercised. Sawyer begins his editorial with the story of how a remotely piloted vehicle dropped a bomb on two people who looked like they were planting an explosive device in Iraq. He terms this vehicle, which was undoubtedly under the continuous control of a human operator, a "quasi-robot." No doubt it contains numerous servomechanisms to relieve the operator of tedious hand-controlled steering and stabilization duties, but to call a remotely controlled bomber a "quasi-robot" is to grant it a degree of autonomy it does not possess.

Autonomy is a relative term. There is no entirely autonomous (the word's roots mean "self-governing") being in the universe except God. The issue of autonomy is a red herring that distracts attention from the real question, which is this: is it even possible for a human-made machine to possess moral agency?

Now I've got to explain what moral agency is. We are used to the idea that children below a certain age are not allowed to enter into contracts, marry, smoke, or drink. Why is this? Because society has rendered a judgment that they are in general not mature enough to exercise independent (autonomous) moral judgment about these matters. They are not old enough to be regarded as moral agents in every respect under law. Of course, even young children seem to have some built-in ability to make moral judgments. Isn't "That's not fair!" one of the favorite phrases in the six-year-old set? We accord certain rights and responsibilities to humans as they mature because we recognize that they, and only they, can act as moral agents.

Sawyer's mistake (or one of them, anyway) is to assume that as artificial intelligence and robotics progress, robots will mature essentially as humans do and will be able to behave like moral agents. I would point out that this achievement is far from being demonstrated. But even if moral agency is some day simulated by a robot in a way indistinguishable from humanity, this fact will always be true: machines have been, are, and always will be the products of the human mind. As such, the human mind or minds which create them also possess the ultimate moral responsibility for the robot's actions and behavior, no matter how seemingly autonomous the robot becomes. So the robot can have no "rights and responsibilities"—those are things which only moral agents, namely humans, can possess.

This fact is illustrated by one of Sawyer's own examples. He cites the case of a $10 million jury award to a man who was injured back in 1979 by a robot, probably an industrial machine. You can bet that in 1979, the robot in question was no autonomous R2D2—it was probably something like one of those advanced welders that you see in automotive ads, the ones that zip around making ten welds in the time it takes a human to make one. I merely note that the injured party did not sue the robot for $10 million—he sued the robot's operators and owners, because everybody agrees that if a machine causes injury, and one or more humans are responsible for the actions of the machine, that the humans are at fault and bear the moral responsibility for the machine's actions.

Another distinction Sawyer fails to make is the difference between good and necessary laws regulating the development and use of robots as robots, and the entirely pernicious and unnecessary idea of treating robots as autonomous moral agents. But as I'm out of space for today, I will take this question up next week.

Sources: Sawyer's editorial appeared in the Nov. 16, 2007 issue of Science, vol. 318, p. 1037. I addressed some issues related to the question of robot ethics in my blog "Are Robots Human? or, Are Humans Robots?" for July 30, 2007. Bob Sawyer's webpage is at http://sfwriter.com.

Monday, November 19, 2007

Yahoo Pays—A Little—for Internet Censorship in China

Shi Tao is still languishing in a Chinese prison. But now Yahoo, the company that helped put him there, has to pay something for what they did.

Until November of 2004, Shi was a journalist working for a Chinese business journal. Earlier that year, his newspaper received a message from the Chinese government warning the journal not to run stories on the 15th anniversary of the Tiananmen Square massacre of 1989. Shi emailed a copy of this message to an editor at Democracy News, a New York-based human-rights organization. Chinese government officials found out about the email and pressured Yahoo, Shi's internet service provider, to reveal the identity of the email's author. Yahoo did so, and on Nov. 24, 2004, agents of the government arrested Shi in the northern city of Taiyuan. He was convicted the following April of revealing "state secrets" and has been in jail ever since. In a similar case, Yahoo revealed the identity of engineer Wang Xiaoning, who had posted pro-democracy comments online about the same time. He suffered the same fate as Shi Tao, but Wang's wife Yu Ling decided not to take this lying down.

After years of delay trying to obtain court documents, Yu Ling filed suit in a California court against Yahoo last April. And last week, Yahoo announced that the suit had been settled out of court, though few details were released other than the fact that Yahoo executives promised they would do "everything they can to get the men out of prison." In a fight between a totalitarian sovereign government and the CEO of one U. S. company, I think it is fair to say the odds are stacked against the company—and any prisoners the company is trying to help.

A lot of engineering ethics involves shades of gray, ambivalent situations, and other complexities. That is not the case here. At stake is the question of whether freedom to criticize one's government is a good thing or not. The founders of the United States believed it was. It is a principle enshrined in the U. S. Constitution and defended to what some might view as an absurd degree today. If it is a good thing in one culture or state, it is a good thing everywhere. That freedom is just as valuable and worth protecting in Shanghai as it is in Peoria.

So what happens to your respect for this freedom if you run a large multinational company eager to profit from the giant potential market that is China? It appears that you agree to whatever compromises with freedom the communist government demands of you, up to and including the divulging of email account holders' identities. Now internet service providers in this country also divulge account names to law enforcement officials from time to time, but only under court orders relating to what is likely to be truly criminal activity. Posting a blog saying you don't like George Bush will not get you sent to jail here. But as we have seen, doing something similar in China will get you sent to jail there, and Yahoo helped.

Only when one of the prisoner's relatives went to great personal trouble and expense to file a lawsuit against Yahoo did that company even start to act. In the past, and as recently as last week, it has justified its betrayal of Shi and Wang by saying if it doesn't follow the Chinese government's rules, Yahoo's own employees might be in danger. Well, duh! Better our customers go to jail than us. Is this the kind of attitude you want from a company that you do business with?

Rather than admit wrongdoing or even disclose what compensation is involved in the lawsuit's settlement, Yahoo convinced the dissidents to settle out of court for an undisclosed amount plus a promise to do whatever they can to gain the prisoners' release. If I were Shi or Wang, I would not hold my breath.

This kind of thing is what happens when a corporation allows profits to overwhelm its moral sense. The pressure on publicly owned corporations to make the most money while staying just within the bounds of the law is immense. And as someone whose retirement investments include corporate stocks and bonds, I am as much a part of that problem as anyone else who invests money in today's economy. But when that legal-economic principle is allowed to trump all others, you end up with situations in which settling lawsuits for doing heinous things is simply a matter of buying off those you have injured at the lowest negotiated price. That appears to be exactly what Yahoo has done.

If you are expecting me to pull any punches here, I'm not going to. Last week we invited a Chinese graduate student and her husband over for supper at our house. Both of them were born in the People's Republic of China, but in different cities, and they met only last year when he was in his fifth year as a professor of mathematics and she was a new graduate student at Texas State University. They fell in love, got married, and now she is looking for a job. In this country there are no work committees to pass judgment on whether you can marry, what job you can take, where you can live, and so on. I did not discuss with them the reasons they emigrated to the U. S., but I think the answers are obvious.

Leaving one's native land is a terrible wrench, and these young people must have had very good reasons to abandon the land of their birth, learn a difficult foreign language, and excel academically in a strange environment. But it happens all the time. Wouldn't it be nice if stories like this could happen in China too? And some day they may, but only if the government decides to change its ways. And that will happen only when people like Shi Tao and Wang Xiaoning can make their voices heard without fear that a company based in the freedom-loving United States of America will rat on them and help send them to jail.

Sources: A report on the Yahoo settlement can be found in CNN's Asia online edition at http://edition.cnn.com/2007/WORLD/asiapcf/11/13/yahoo.china/index.html. I first commented on this issue in a blog posted here on March 30, 2006.

Monday, November 12, 2007

Safety's Sleuths: The NTSB Investigation of the Minneapolis Bridge Collapse

Bridges are not supposed to fall down. But last August 1, the 1,900-foot-long bridge that carried I-35W over the Mississippi in Minneapolis came apart and landed in the river, carrying thirteen people to their highly unexpected deaths. We fear dying from a lot of things, but it is safe to say that nobody on that bridge that day spent a lot of time worrying about whether they would die as an interstate-highway bridge fell out from under them.

That very fact attests to the rarity, in this country anyway, of major structural failures in transportation-related public works such as bridges. One reason they are so rare is that for the last four decades, the National Transportation Safety Board has investigated accidents involving the nation's transportation infrastructure. In so doing, it performs a critical task that historian of technology Henry Petroski says is essential to the continued safety of engineered structures. In "Design Paradigms," a book of engineering case studies that includes three famous bridge failures, Petroski writes, "The surest way to obviate new failures is to understand fully the facts and circumstances surrounding failures that have already occurred." That is what the NTSB is doing now with regard to the Minneapolis bridge collapse.

The same day of the collapse, the NTSB dispatched a nineteen-member "Go Team" from its headquarters in Washington, D. C. to Minneapolis. As rescue and recovery work allowed, members of this team collected many kinds of information. They used an FBI-provided three-dimensional laser scanning device and gyro-stabilized high-resolution cameras to establish exactly where the parts of the bridge came to rest after the collapse. They collected names and recollections from dozens of eyewitnesses and secured the original of the famous security-camera recording that showed part of the bridge in the act of collapsing. By the third week of August, NTSB officials had interviewed over 300 people, including over a hundred who called in to a specially arranged witness hotline. In the following month or so, critical pieces of the bridge were removed to a nearby secure site for detailed inspection and investigation.

One of the most important tools currently available to the NTSB is powerful computer software that performs "finite-element analysis." This is a way to solve the fundamental materials-science equations that describe how steel (or any other solid) behaves under complicated conditions of stress. While it can't predict exactly where cracks will occur in an overstressed beam, it can reveal locations in a complex structure such as a bridge where the local stresses exceed the tensile strength of the steel. It is in such locations that cracks and failures are most likely to occur.
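To give a feel for what such software does (this is a toy illustration with made-up numbers, nothing like the NTSB's actual bridge model), here is the finite-element idea boiled down to a single steel bar split into two axial elements, fixed at one end and pulled at the other: assemble element stiffnesses into a system of equations, solve for displacements, then flag any element whose computed stress exceeds the material's strength. The yield strength and load values below are assumptions for the sake of the example:

```python
# Toy 1-D finite-element analysis: a steel bar modeled as two axial
# elements. A real bridge model has thousands of 3-D elements, but the
# assemble/solve/check-stress pattern is the same.

E = 200e9      # Young's modulus of steel, Pa
A = 1e-4       # cross-sectional area, m^2
L = 1.0        # length of each element, m
F = 50e3       # axial pull applied at the free end, N

k = E * A / L  # axial stiffness of each element, N/m

# Nodes 0-1-2, with node 0 fixed. Reduced global stiffness system:
#   [ 2k  -k ] [u1]   [ 0 ]
#   [ -k   k ] [u2] = [ F ]
# Solve the 2x2 system by Cramer's rule.
det = (2*k)*k - (-k)*(-k)          # = k^2
u1 = (0*k - (-k)*F) / det          # displacement of node 1, m
u2 = ((2*k)*F - (-k)*0) / det      # displacement of node 2, m

displacements = [0.0, u1, u2]

YIELD_STRENGTH = 250e6             # Pa, typical structural steel (assumed)
for elem, (i, j) in enumerate([(0, 1), (1, 2)]):
    strain = (displacements[j] - displacements[i]) / L
    stress = E * strain            # axial stress in the element, Pa
    status = "OVERSTRESSED" if stress > YIELD_STRENGTH else "ok"
    print(f"element {elem}: stress = {stress/1e6:.0f} MPa ({status})")
```

In a real investigation this same solve-and-check loop runs over an enormous mesh, which is why accurate inputs matter so much: get the deck thickness or the placement of 287 tons of construction load wrong, and the software will flag the wrong locations.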

But as with any computer program, finite-element analysis software is only as good as the data you put into it. This is why the NTSB has spent the last three months gathering as much information as it can on not only the details of the bridge structure, including core samples showing how thick the deck was, but also other factors such as loading. You may recall that at the time of the collapse, a construction crew with heavy equipment was working on a portion of the bridge. The NTSB has concluded that a total of 287 tons of construction equipment and materials were on the bridge at the time of the accident. The exact location and weight of this extra loading is critical input to the computer analysis. The NTSB has made good progress in procuring such information by talking with eyewitnesses and viewing an aerial photograph taken by an airline passenger from a plane that passed over the bridge shortly before its collapse. Although the NTSB turned the disaster site over to the Minnesota Department of Transportation on Oct. 12, some thirty NTSB staffers are still working full-time on the investigation, which is not expected to be wrapped up for over a year.

Well-run operations are often taken for granted, but things could be very different. In places where there is nothing like the NTSB, disasters like this can be much more frequent, and citizens trying to affix blame have little if any recourse if something terrible happens to them or their loved ones. The NTSB could be corrupt, for example, or subject to bribes or falsification of its reports in response to political pressures. To my knowledge, however, its reputation for probity and "just-the-facts" scientific integrity is essentially spotless. This is no minor achievement, and the engineers who work for the Board have accomplished great things in the service of informing both the technical and the general public about the reasons for tragedies such as the Minneapolis bridge collapse.

Every major engineering failure marks the start of a detective story. Accident investigation is one of the few lines of work where engineers can spend their professional lives in the role of detectives. Now and then the culprit is a true criminal, but most of the time, accidents are due to inattention, bad communications, or inadvertent mistakes rather than any active will to do harm. Nevertheless, harm is done.

We will have to wait a while longer before we have the full story of how a part of I-35W suddenly lost altitude that hot August day. But it will be a story worth waiting for, because we can learn from it how to keep accidents like it from happening again.

Sources: The NTSB posts updates periodically on its accident investigations at its website. The latest such release about the Minneapolis bridge collapse was posted on Oct. 23, 2007 at http://www.ntsb.gov/pressrel/2007/071023c.html.