Saturday, March 15, 2008

Robot Rats and SARs for PEPs

Sometimes things happen fast in politics. On Sunday morning, March 9, Eliot Spitzer woke up to the beginning of his 63rd week in office as Governor of New York State, an office which served as a stepping stone to the White House for his predecessors Theodore and Franklin D. Roosevelt. He had an apparently unstained reputation for fighting corruption in high places, which he had earned during his seven years as New York State's Attorney General, going after everything from Enron-type financial scandals to prostitution rings.

Two days from now—on Monday, March 17—he will hand over the keys of office and become Private Citizen Spitzer. Earlier this week, the New York Times revealed that Spitzer had been a customer of a prostitution ring that was under federal investigation. The evidence came from a computer scan of Spitzer's banking transactions—a robot rat, if you will. The political firestorm that the news report touched off must have convinced him that trying to stay in office was an exercise in futility. On March 12, he announced that he was resigning. Ironies abound in a situation like this, but an ironic twist of special interest to the technical community is that Spitzer was caught by software that he himself had encouraged banks to use during his years as Attorney General. How did it work?

Banks have ethical obligations both to their customers and to the governments in whose jurisdictions they operate. Customers expect banks to keep their collective mouths shut about private financial matters, and by and large, banks are pretty good at doing this. But law enforcement officials realized long ago that banks are where the money is, including ill-gotten gains from enterprises such as drug dealing and prostitution. That is why, in 1970, Congress passed the Bank Secrecy Act. This act is why you have to fill out a form with some identifying information any time you conduct a single cash transaction of more than $10,000 at your bank.

Criminals are as adaptable as anybody, and soon they learned not to trip that $10,000 wire by breaking up transactions into smaller amounts. To plug this leak in the dike, Congress enacted the Money Laundering Control Act of 1986. Besides asking banks to report any transactions over $5,000 that looked like they were evasions of the $10,000 limit, it removed liability for over-reporting. This meant that if you got annoyed at being called by the FBI for a series of legitimate but large financial transactions, you could no longer sue your bank for falsely tattling on you.

As time went on, $5,000 became less and less money in real terms, meaning that without doing a thing, Congress gradually lowered the threshold on what banks had to report. After a few banks got in trouble for under-reporting and computerized banking became nearly universal, the banks had the bright idea of just reporting everything automatically that looked suspicious. But first they had to tell the computers what "looking suspicious" meant.

One factor they loaded into their software, believe it or not, was the degree to which their customers are "politically exposed persons" (PEPs for short). If you are a governor, senator, UN delegate, or other personage whose position makes you more likely either to be the victim of a corrupt action (e.g., blackmail) or perhaps the perpetrator, you get a high PEP rating, and the threshold for making the computer spit out summaries of fishy-looking activity is accordingly set very low. Spitzer, needless to say, was a PEP, and when several large transactions to one firm showed up on a report, the bank decided to file a Suspicious Activity Report (SAR for short) with the Treasury Department's Financial Crimes Enforcement Network.

At this point, humans got involved, but they could not have done their jobs without the aid of large software programs that inspect millions if not billions of transactions every year. Initially the investigators thought the governor might be the victim of blackmail, but when they found out the firm was a front for a prostitution ring, things took a different turn altogether.

Computers don't join political parties, but the people who program and operate them do. This story shows how technology can help law enforcement with investigations that in times past would have been impossible because of the sheer volume of data to inspect. Back in the days when the most advanced technology in a bank was the Friden calculating machine sitting on the comptroller's desk, a person's eyes were the only way to inspect records. That limited the nature and scope of investigations, although it also probably made it easier to do, informally, things that were strictly against the law, as favors both to criminals and to policemen and detectives. Today, the same criteria can be applied impartially and exactly to millions of accounts, but at some point human judgment always comes into play. Once the computers provided the information to investigators, the investigators had to decide what to do with it.

And it was human judgment, however flawed, that made Governor Spitzer think that maybe he would escape detection of his expensive dalliances. Perhaps he was unconsciously hewing to an outmoded habit he developed before his own actions helped to tighten the screws on money launderers and others who do not care for banks to report their transactions to the government. Whatever the reason, this episode shows that the power to analyze large amounts of private computerized data can make or break very influential people. And without software engineers, no one would have that power.

Sources: A good summary of the laws and processes that led investigators to Spitzer's transactions is at http://firedoglake.com/2008/03/12/money-laundering-suspicious-activities-reports-and-structuring/. A Newsday account of how Spitzer's bank discovered the specific transactions is at http://www.newsday.com/news/local/state/ny-stspitzerbank0312,0,4637246.story.

Tuesday, March 11, 2008

Engineering the End of Malaria

In my Feb. 25 entry, I used the idea of wiping out malaria as an example of what might be done with "a few billion dollars" that would otherwise go toward dealing with global warming. I will admit that I simply pulled that number out of the air. Since then I have learned that while eliminating malaria is something that people as wealthy as Bill and Melinda Gates have tried to do, it is by no means a simple or straightforward task. But engineers may be able to help in some ways you wouldn't expect.

As you probably know, people contract malaria from the bite of a certain kind of mosquito that is infected with the protozoan parasite that causes the disease. The parasite hides inside liver cells or red blood cells in its human host, which is one reason that no one has devised an effective vaccine for the disease. Drugs are available to prevent it, but you have to take them all the time, sick or well, and such prophylactic treatment is too expensive for many residents of areas such as Africa where malaria is endemic. So many anti-malaria campaigns in the past have concentrated on eliminating the animal host: the anopheles mosquito that carries the malaria parasite.

The New York Times recently carried a report about whether malaria can be eliminated as smallpox has been. It seems that the consensus of public-health experts is that you can markedly reduce the incidence of malaria through spraying mosquito-infested areas with insecticide, but absolute elimination is an elusive goal at best. In Sri Lanka, for example, systematic spraying programs helped reduce the number of malaria cases from over a million in 1955 to only 18 in 1963. But the government cut back its programs, and malaria came back, reaching a level of over half a million cases in 1968. That lesson finally learned, Sri Lanka started spraying again and hasn't stopped, and the annual rate of malaria cases is now down to a few thousand.

At a 2007 malaria conference, Bill and Melinda Gates challenged public health leaders around the world to eradicate malaria altogether. Their foundation has already spent over a billion dollars fighting malaria, but clearly more than just money will be needed.

One commentator in Scientific American has pointed out that the free mosquito-net programs sponsored by many governments may not be as effective as they could be. Here is one area where engineers can get involved. The classic kind of mosquito net hangs from a string tied to the ceiling and drapes down to the edges of the mattress, protecting the sleeper from the night-time bites of the anopheles mosquito, which is active after dark. This is fine as long as you have a mattress for the net to tuck under. But in thousands of villages where a mattress for every family member would be an unheard-of luxury, young children sleep on the ground. There are rectangular frame-type mosquito nets available that will work in this situation, but they aren't as convenient as the single-string type.

This little net problem is an example of how complex the malaria issue is. Even if engineers devised a new type of net that was ideal for the poorest residents, a lot of problems would remain. How do you get this net into the hands of those who can use it? How do you persuade them that using it will keep their children healthier? Who pays for all of this, especially if the new net costs more than the old ineffective ones?

In times past, some engineers would have said these issues were not engineering problems. But organizations like Engineers Without Borders (EWB) realize that the hardware or software part of a solution is only a part, and often not the most important part. An effective technical solution to any problem also has to factor in economics, motivation, distribution, education, and so on. EWB is dedicated to providing sustainable engineering solutions for disadvantaged communities. Through its many chapters at universities and colleges with engineering schools, it recruits student volunteers who get a holistic picture of not just a technical problem, but the cultural and social context of that problem as well. Though I never had such an experience in my student days, I think I might have been a very different kind of engineer if I had.

Only time will tell whether the wealth of the Gates Foundation, the ingenuity of engineers, medical researchers, and public health officials, and the willingness of affected communities will converge to defeat the old tropical enemy, malaria. For the reasons I've discussed, it is a much harder task than the smallpox battle. But I wish the best for everyone involved.

Sources: The New York Times article on whether malaria can be defeated was carried in the Mar. 4, 2008 online edition at http://www.nytimes.com/2008/03/04/health/04mala.html. Scientific American's article on mosquito-net engineering appeared in the January 2008 issue, available online at http://www.sciam.com/article.cfm?id=a-better-mosquito-net. And Engineers Without Borders-International has a website at http://www.ewb-international.org.

Monday, March 03, 2008

Locked-In Profits or Service to the Downtrodden?

Suppose you're the wife of a man who got arrested in Oakland, California. You weren't with him at the time, and all you know is the bare fact that he was arrested. Until recently, your only option was to call the Alameda County public information number, work your way through a phone tree, and hope there would be a live person at the other end who could tell you something. Sometimes there was and sometimes there wasn't. But now, thanks to the initiative of some staff in the Alameda County Information Technology department, there is an Inmate Locator on the county's website. If you have the person's full name, or even if all you know is that they were booked in the last twenty-four hours, you can get online and see identifying information, the "custody status," and which jail they're in. Of course, you have to have a computer and a high-speed internet connection to do this efficiently, but doesn't everybody?

Despite the drawback of needing a computer to use it, this little advance in IT touches on a subject that I have seldom seen addressed in the engineering ethics literature. What special obligations or ethical issues are related to engineering as it applies to prisoners and jails? And in particular, what should we say about the recent trend toward privatization in U. S. prisons?

You may have read that the United States has both the highest documented rate of incarceration in the world (over 700 per 100,000 population) and the largest absolute number of people behind bars (over 2 million, plus another 5 million or so on probation or parole). The reasons for this are worth going into, but for now let's just say they're a given. All these people have to be housed, fed, treated for medical conditions when necessary, shipped around, and maybe allowed some education and communication privileges. In addition, there are the families and friends of prisoners, who have certain rights and privileges with regard to those behind bars. As the Alameda County IT folks have shown, engineering can benefit both the prisoners and their friends and relatives in an entirely legal way (I'm not talking about high-tech jailbreaks here, which I suppose would be another way engineering could enter the picture).

I think it's significant that the people who came up with this idea were government employees (the article describing the system did not state otherwise). Along with the boom in prison populations has come a related boom in private prisons and companies that operate them. One of the largest, the Corrections Corporation of America, has gotten some coverage in this week's New Yorker magazine for its less-than-ideal operation of an illegal-immigrant holding facility outside Taylor, Texas, just up the road from my university here in San Marcos.

Privatization has been sold as a kind of universal solution to every government cost problem, but there are limits to what it can do. Somehow I suspect if Alameda County had outsourced its jail operations to a private firm, that firm would not have hired five web developers to come up with the Inmate Locator. Abuses can happen both in private and in public organizations, but the incentives are different.

As an employee of a state university, I view the advantages of well-run government-operated services as chiefly these: (1) Stability: the turnover in government employment is much lower than in comparable private operations. (2) Esprit de corps: in well-run government operations, public-spiritedness can foster a selfless dedication to the needs of those served. (3) Relative lack of cost-squeezing pressures: assuming the management makes a good case to the appropriate legislature, expenditures can be planned and justified without the fear that a lower bidder will come along and put the whole enterprise at risk.

I'm well aware that a critic could come along and turn each of those arguments on its head. Stability can mean that once a goof-off gets a government job, he's set for life. Private companies can develop esprit de corps too, and cost-squeezing pressures can happen in government as well as private industry.

But I would point out a philosophical difference between the two approaches. The bottom line of government service is just that: service. Ideally, the public servant is as dedicated to his or her clients as the nuns of centuries ago who founded and staffed the first hospitals. At least, there is no philosophical conflict between having a totally dedicated public servant and the overall goals of the organization.

With private companies, especially those which are joint-stock (publicly owned) firms, the fundamental philosophy is different. If a company doesn't make money for more than a certain length of time, it should disappear, and often does (despite evidence such as General Motors to the contrary). Companies can provide good services, but there is a built-in conflict between the ultimate raison d'être of a company, which is making money for the owners, and service to its customers or clients, at least to the extent that improvements in the service or product make less profit available to the owners.

This is not to say that all corporate enterprise is morally suspect—absolutely not. But prisoners are a special kind of client, and are treated specially along with children, the elderly, and medical patients in a number of ethical contexts such as the rules for ethical conduct of research studies. Unlike a customer at a hardware store, if a prisoner doesn't like the service he's getting, he can't just walk away and go to another prison. I think that is the main reason why for nearly the entire history of prisons in the U. S., they have been exclusively a government-run operation. Maybe the government didn't do that good a job, but at least there was a way, in principle, for abuses in government-run prisons to be corrected through the democratic process. Private companies that run prisons can and do claim that vital information about their operations is a trade secret, and therefore not available for public access, at least not without a lengthy and often unsuccessful series of inquiries under the Freedom of Information Act. This kind of secrecy can hide abuses and wrongdoing that would be harder to hide in a public setting.

So what is the bottom line here? First, kudos to the IT folks in the Alameda County Sheriff's Office, who make it possible for the over 100 inmates booked every 24 hours to be found by their relatives or friends much more easily than before. Second, any time an engineer does something related to prisons or prisoners, he or she should remember that prisoners are not just any old client. They have special rights and privileges. Yes, many of them have done something wrong. But the fact that we are a country of laws means that we need to hold those laws in high regard, especially when we deal with people who may have broken them.

Sources: The article on Inmate Finder appeared in the online issue of the San Francisco Examiner for Mar. 3, 2008 at http://extra.examiner.com/linker/?url=http%3A%2F%2Fwww%2Einsidebayarea%2Ecom%2Fci%5F8435580%3Fsource%3Drss. The New Yorker article by Margaret Talbot on CCA's operation in Taylor is entitled "Lost Children," on p. 58 of the Mar. 3, 2008 edition. Statistics on U. S. prisons were found at the Wikipedia article "Prisons in the United States."

Monday, February 25, 2008

Discounting Global Warming, Revisited

Running this blog is a pretty one-sided deal most of the time. Every week I send out some thoughts into the blogosphere, and rarely do I get a response. But last week's post about applying the economics of discounting to global warming got not just one, but two responses, both making similar criticisms. For this blog, that amounts to a storm of controversy, and I can't resist responding. But first, let me summarize the criticisms.

The first post (to be found under Nov. 19, 2007's "Yahoo Pays. . . ", to which it refers) accuses me of being either "sloppy or inconsistent." Here is some of what it says: "In the post about Yahoo, you get wrought up about the company not doing more to protect their [the Chinese citizens'] identity for engaging in free speech, but in "Should we discount global warming?" you advocate using a discount rate even though some of that $50 billion is lost lives due to less reliable weather, increased flooding, and more famine. (NOT jail time, death.) . . . . So should Yahoo continue its economic discounting, knowing that the occasional customer is jailed; or should the Yahoo-wannabes stop counting human suffering in dollars?"

The second post responding to last week's blog, signed "Cousin Mike" (yes, he is my cousin) says this, among other things: "A courtroom-drama movie once depicted an auto manufacturer as having made a conscious decision not to fix a problem with their brakes because they calculated economically that it was less expensive to pay off claims to people killed by the brake failures than to fix the flaw. The movie-makers obviously wanted the audience to view such conduct as morally odious, and I agree . . . . I know that if we really thought every life was infinitely valuable, we'd build autos like bumper cars, incapable of a fatal crash . . . . But it still gives me chills to think that the economically correct engineering solution to global warming is to leave the brakes flawed 'cause it'll cost too much money to fix."

The point these respondents are making, it seems to me, is that while I seem to hold up certain principles as absolutes (e.g., freedom rather than jail time for Chinese users of Yahoo), when I propose discounting global warming, I appear to be throwing away all these fine moral distinctions in favor of a cold economic calculation.

Allow me to differ.

Imagine a set of scales, like the ones Lady Justice (the gal with the blindfold) is often portrayed as holding up. If I were to do an editorial cartoon summarizing the criticisms above, it would show a pile of currency and gold coins on one pan of the scales, pulling it down, as a crowd of impoverished coastal fishermen drown in a miniature version of Hurricane Katrina on the rising pan. (You see why I don't do editorial cartoons for a living.) It looks like I'm cynically trading off money for lives. But that was not my intention.

When economic analyses are used on a large-scale problem such as global warming over a time scale of decades, the dollars involved are not exactly the same kind of thing that you pull out of your wallet. They are a symbol. Well, all money is symbolic in one sense, but what I mean is, the dollars in the global-warming discount calculation are a placeholder for the energy and wealth of nations. It isn't just dollars versus lives. It's lives versus lives, and dollars versus dollars, and Statues of Liberty versus who knows what unimaginable architectural achievements might be made in the next century if we don't wreck the world's economy with a misinformed economic dictatorship that has highly counterproductive effects, which could cost lives as well.

You want to talk lives? I'll talk lives. Malaria kills between one and three million people every year, most of them poor African youths and children, and debilitates hundreds of millions more. It is entirely possible to treat a population with prophylactic anti-malarial drugs so as to reduce the incidence of malaria to near zero. Doing so would not only eliminate an important direct cause of death, but would result in the equivalent of billions of dollars of economic stimulus to the areas affected because of the increased productivity of those who would no longer contract this disease.

I don't know what it would cost to wipe out malaria worldwide, but something similar has been done at least once: we eradicated smallpox. Say it would cost a few billion dollars. Now that few billion dollars is money that cannot be spent on reducing global warming. If you like, you can consider it as part of the money we could spend now on things other than global warming, if we buy into the economic-discounting idea that there is a reasonable and finite amount of money we should spend on global warming, and no more. And that money not spent on global warming, but spent on eradicating malaria, will absolutely save lives.

My point is, there are lives on both sides of the equation, not just dollars versus lives. What we're really talking about is the grand question of how to expend our current capital resources—natural, monetary, and most of all, human—and how much of them to expend on efforts to reduce global warming.

I have no objections to a calm, rational approach to reducing our use of fossil fuels. I think it's terrible that we fight over that black liquid that comes out of the ground in inconvenient places to get at, and I would love to see a coordinated global effort devoted to developing renewable energy sources that would eventually replace most of what we now use petroleum for. But the critical question is how this is to be done. I was listening to a discussion on the BBC the other morning about how air travel contributes to global warming. Both sides agreed that we had to quit burning fossil fuels to fly. To me, that poses a whole series of awkward questions. Okay, if we quit flying, how are we going to sustain the global economy? And if we keep flying without fossil fuels, how are we going to do it? The only battery-powered airplanes I know of could carry maybe a mouse, at a strain.

We saw what a hit the U. S. economy took with just a slight reduction in air travel after 9/11. Imagine what would happen to the world economy if somehow the U. N. passed a binding resolution to reduce air travel by 80% or something, and everybody stuck to it. The Great Depression in the U. S. is only a distant memory, but economic disasters are a lot more real to residents of many other countries which have suffered them more recently. If some ill-considered global-warming measure ended up putting the world economy in the tank for a few years, do you think that's not going to cost lives? And do you think the poorest and most vulnerable people won't pay the price in lost jobs and starvation? Think again.

In large measure, we are discussing imponderables, and that's one reason why talk about global warming inspires such overwrought emotions on both sides. The fact is, nobody knows exactly what would happen if we don't do anything about it, and nobody can guarantee that any given measure will avert the spectrum of catastrophes that Al Gore and company have laid out for our viewing pleasure. Like many things in life, it is a crapshoot. But we can definitely say what wrecking economies with arbitrary regulations can do, and whatever is done, we should avoid doing that to the extent possible and consistent with a measured approach toward the problem of global warming.

Sources: Statistics on malaria can be found at the Wikipedia entry under "Malaria."

Monday, February 18, 2008

Should We Discount Global Warming?

No, by "discount," I don't mean "ignore altogether." What I mean is what bankers and economists mean by the word. The discount rate is an assumed interest rate that is used to make economic decisions, as anyone who has taken engineering economics will recall. And the funny thing is, although discussions of global warming invariably deal with matters fifty or a hundred years in the future, hardly anyone applies the simple economics of discount rates to the problem. When you do, the result is a surprise.

Gary S. Becker is a Nobel-Prize-winning economist who thinks any discussion of global warming should factor in a reasonable discount rate. Here is his argument in a nutshell. Suppose, for the sake of argument, that if we do nothing about global warming, fifty years from now it will cause $2 trillion of damage (technically termed "utility costs" in terms of lost income from flooded coastlands, etc.). It turns out that if you roll the tape of time back to 2008, you could pay for that $2 trillion by investing only $500 billion at a rate of return of 3 percent, which is pretty easy to do (assuming you have the $500 billion in the first place). Becker makes the point that if we went ahead now with most of the more radical proposals for doing something about global warming—reducing carbon emissions by 70%, putting big restrictions on fossil-fuel-burning technologies, and so on—they would cost a lot more than $500 billion in the next few years. If these restrictions cost, say, $1 trillion, we are being foolish by spending all that money now to avert something we could offset with half that amount.
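Becker's arithmetic is easy to verify. Here is a minimal sketch in Python, using the illustrative figures from the paragraph above (these are not actual climate-damage estimates):

```python
# Present value: the amount you must invest today at a given rate of
# return to cover a known future cost. The dollar figures are the
# illustrative ones from the text, not real climate-damage estimates.

def present_value(future_cost, rate, years):
    return future_cost / (1 + rate) ** years

damage = 2e12  # $2 trillion of damage, 50 years from now
pv = present_value(damage, rate=0.03, years=50)
print(f"${pv / 1e9:.0f} billion")  # about $456 billion -- roughly $500B
```

At a 3 percent rate of return, money invested today grows by a factor of about 4.4 over fifty years, which is why roughly $500 billion now suffices to cover $2 trillion then.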

This is not an argument to do nothing. On the contrary, it is one of the few arguments I've seen on the subject that requires us to come up with some quantitative information in order to make a rational economic decision, which is what engineers do all the time. The usual approach used by advocates of extreme measures is to paint a picture of the end of civilization as we know it if we don't go green 24/7 and never allow the problem to leave our consciousness for the rest of our lives. Put more quantitatively, these folks use a discount rate of zero, which I suppose is a reasonable one if you assume that the alternative is either peace and security on the one hand by doing everything they advocate, or death to humanity on the other. If a mugger walks up to you in a dark alley, puts a knife to your ribs, and mutters, "Your money or your life," you're not likely to deliberate a long time before handing over all your cash, not just some of it.

But implicit in Becker's economic argument is the assumption that, as damaging as global warming and its consequences might be, it will not be the equivalent of a giant meteor smashing the earth to bits. Its effects will be gradual, not sudden; spotty, not universally bad everywhere; and will be quantifiable in economic terms. Anything with a finite future cost can be discounted using standard economic assumptions. The rate of 3 percent that Becker uses is quite conservative—many investments in physical capital pay rates of return much higher than that. What Becker is saying is that we shouldn't stop all economic growth and divert all our resources to fighting global warming, because we're wasting resources that would pay off better if invested in other things. Wise investment in future economic growth, which over the last century has raised billions of people from poverty into something approaching a middle class, can continue to bring prosperity to future generations even in the face of problems like global warming.

Economics isn't everything, of course. If we took a poll to find out what Americans would pay to keep the Statue of Liberty from submerging (which would also flood most of the East and West Coasts), the answer would probably come out close to "whatever it takes." But engineering is about economics as much as it is about technology. And any analysis of global warming that makes unrealistic economic assumptions is simply bad engineering, whatever else you might call it.

Sources: Becker makes his argument in an essay in the Hoover Digest (2007), no. 2, published by the Hoover Institution, at http://www.hoover.org/publications/digest/7465817.html.

Monday, February 11, 2008

The Price of Life: Industrial Accidents Then and Now

The refining giant British Petroleum has been in the news again lately, and not in a good way. At the firm's Texas City, Texas refinery on Jan. 14, a worker named William Gracia died when a lid blew off a water filtration vessel during a startup procedure and hit him in the head. The day before that, BP's board of directors fired its CEO, Lord Browne of Madingley, not quite three years after an explosion at the same refinery killed 15 people and injured 170 in the worst U. S. industrial accident in a decade. Although reasons are not usually given when a CEO is dismissed, one can speculate that the disaster had something to do with Lord Browne's departure—that and the $1.6 billion the firm paid out to settle some 4,000 lawsuits, and the $1 billion repair bill to get the refinery operating again. The $22 billion in profits that BP made in 2006 puts these numbers into perspective. Or does it?

What is a human life worth? The time was (and still is, unfortunately, in a few places) when a human life was a market commodity like any other. Fortunately, the human race has seen fit to abolish slavery nearly everywhere, but that doesn't mean that you can't figure out what a human life is worth in certain contexts.

Look at the BP situation from an economic point of view. I'm not saying that BP managers thought this way, but one way of looking at it is this. Okay, in 2005 something happened that ended up costing us an additional $2.6 billion. We might have been able to avoid that accident by spending more time and money on safety regulations, training, equipment, and so on. But who knows how much of that is enough? If we'd spent more than $2.6 billion extra on such programs, we would have ended up cutting into our 2006 profits of $22 billion. So how much safety is enough? And at what price?

Another way of looking at it is to ask how much BP spent on settlements per worker injured or killed: an average of about $8.6 million each, it turns out. Now much if not most of that went to lawyers: BP's lawyers, the contingency-fee lawyers that workers without other financial resources have to go to in situations like this, and miscellaneous lawyers, experts, and other highly paid professionals that tend to accumulate around disasters like flies around honey. And some of it probably went to the injured and the families of those who died. Is that what a worker's life is worth? At least in this case, it turned out to be that way for BP.
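The arithmetic behind that per-victim figure can be checked directly. A rough sketch, assuming the roughly $1.6 billion in settlements is spread evenly over the 185 workers killed or injured:

```python
# Rough per-victim arithmetic behind BP's Texas City settlement payouts.
settlements = 1.6e9        # total settlement payout, dollars
killed, injured = 15, 170  # casualties of the 2005 explosion
victims = killed + injured # 185 people in all

per_victim = settlements / victims
print(f"${per_victim / 1e6:.1f} million per victim")  # about $8.6 million
```

Of course the real payouts were anything but even, but the average comes out close to the $8.6 million cited above.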

It's interesting to contrast the way these things are handled today with the way similar casualties were handled in the 1800s. The nineteenth century was an era of ambitious construction projects: bridges, dams, tunnels. Everybody knows about the Brooklyn Bridge. You may even know that its original designer, John Roebling, had his foot crushed while doing surveys for the bridge, and died of the resulting tetanus infection. His son Washington took over, but after going into an underwater high-pressure caisson during construction of the foundations, he succumbed to decompression sickness and became an invalid. His wife Emily taught herself enough engineering to serve as his chief assistant during the rest of the bridge's 13-year construction. Although many hundreds of workers were employed on the site, the project had a relatively good safety record for the time: only 27 people died, an average of about two a year.

On the other hand, the Hoosac Tunnel project, otherwise known as the "Bloody Pit", cost 193 lives to build. This 4.75-mile railroad tunnel in Western Massachusetts served as a test bed for modern construction techniques using pneumatic drills and nitroglycerine. It was completed in 1873, three years after the Brooklyn Bridge project began.

In those days, construction-worker fatalities were regarded as regrettable, but no one appears to have thought much the worse of the companies or engineers responsible if a few workers died on the job. The general attitude was that a worker taking on a job knew it was dangerous, and it was his own lookout to stay alive.

Thomas Edison was (and is) one of my heroes, but in many ways he held some very typical 19th-century attitudes about the safety of his employees. In a new biography of Edison by Randall Stross, I read how Edison sent people far and wide in the summer of 1880 to search for bamboo that might have fibers suitable for incandescent-lamp filaments. One of the less popular members of his lab staff was named John Segredor, a hot-tempered man who had once responded to a sarcastic remark from another staff worker by going to his rooming house and getting a gun. Edison sent Segredor on an odyssey first to Georgia, then Florida, and finally to Cuba in search of different varieties of bamboo. Three days after his arrival in Cuba, Segredor died of yellow fever. In a private letter about the matter, Edison blamed Segredor for his own death, saying he was careless about drinking cold drinks in hot places "and this I doubt not caused his death." No lawsuits there, it seems.

Ideally, nobody would die in industrial accidents, or any other kind, for that matter. Considering how many more people are engaged in industry today than a hundred years ago, it is likely that accident and fatality rates are now much lower than they were in the 1880s. And at least in the U. S., our attitudes are much harsher nowadays toward the companies and executives who are involved in industrial accidents. True, the enforcement mechanism is largely a private-enterprise affair using the civil justice system and freelance contingency-fee lawyers, but I suppose free-market justice is better than no justice at all. Wouldn't it be nice if the lawyers ended up with nothing to do because nobody was dying in industrial accidents anymore? We should still hold out the ideal of no accidents or injuries due to technical causes as one to be striven for. But for a long time to come, I think, there will be more to be done.

Sources: The latest BP accident is described in the San Francisco Examiner online edition at http://www.examiner.com/a-1160942~BP__victim_s_family_probing_fatal_Texas_City_refinery_accident.html. Lord Browne's departure and the BP financial statistics were carried in an article on the Ergoweb website, an ergonomics services company, at http://www.ergoweb.com/news/detail.cfm?id=1693. I also consulted Wikipedia articles on the Brooklyn Bridge and the Hoosac Tunnel. The Segredor incident is recounted on p. 110 of Stross's The Wizard of Menlo Park (New York: Crown, 2007).

Monday, February 04, 2008

If You Can't Trust the Experts. . .

Being an expert at something is both a privilege and a responsibility. Experts who abuse their special abilities make things harder for experts who follow the rules. There's nothing new about these ideas. But the experts who follow the rules often get ignored in the flaps over experts who violate the rules.

Let me get specific. David Kravets of Wired reports in his Threat Level column that four Swedish men have been charged with facilitating copyright infringement. Seems that they operate a "BitTorrent tracking site" called The Pirate Bay. According to Wikipedia, BitTorrent is a type of peer-to-peer network protocol that makes it easier to download large amounts of data through the Internet. Instead of requiring the user to receive an entire file from one central server, BitTorrent allows the user to get pieces of the file from multiple locations and assemble them later, making the whole process easier and often faster. Although the protocol can be used for almost any type of file, it is often used to obtain pirated copies of movies and software.
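The core idea, splitting a file into independently fetchable pieces and reassembling them by index, can be sketched in a few lines of Python. This is a toy illustration of the concept only, not the real BitTorrent wire protocol; the peer names and piece distribution here are invented:

```python
import hashlib

def split_into_pieces(data: bytes, piece_size: int) -> dict[int, bytes]:
    """Index the file as numbered pieces, as a torrent's metadata would."""
    return {i: data[off:off + piece_size]
            for i, off in enumerate(range(0, len(data), piece_size))}

# Pretend three peers each hold an arbitrary subset of the pieces.
original = b"A large file travelling piecewise over the network." * 100
pieces = split_into_pieces(original, piece_size=256)
peers = {
    "peer_a": {i: p for i, p in pieces.items() if i % 3 == 0},
    "peer_b": {i: p for i, p in pieces.items() if i % 3 == 1},
    "peer_c": {i: p for i, p in pieces.items() if i % 3 == 2},
}

# Download whichever pieces each peer can supply, in no particular order,
# then reassemble the file by sorting on piece index.
collected = {}
for holdings in peers.values():
    collected.update(holdings)
reassembled = b"".join(collected[i] for i in sorted(collected))

assert reassembled == original
# Real clients also verify each piece against hashes in the .torrent file:
assert hashlib.sha1(reassembled).digest() == hashlib.sha1(original).digest()
print("file reassembled from", len(peers), "peers,", len(pieces), "pieces")
```

The point of the design is that no single server has to carry the whole load, which is exactly what makes the protocol both useful and hard to police.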

The Pirate Bay's operators claim they have spread their operation out so far with third-party intermediaries that they don't even know where the servers are. According to the report in Wired, they seem to think they're doing nothing wrong, and certainly aren't making money at it. If you had to boil down their motivation to one sentence, it might be something like "every bit deserves to be free."

This situation is a good example of what I'd call "technology gone bad" in the sense that some people have taken a clever and useful technological idea—BitTorrent protocols, in this case—and used it for, at best, quasi-legal purposes. Who are the injured parties in a case like this?

Copyright owners such as giant media and software companies will be quick to point out that they are losing revenue every time somebody gets a "free" copy of content via The Pirate Bay rather than through legitimate channels. And since the companies' revenue has to be made up somehow in order for them to stay in business, this leads to higher prices for everybody who gets the stuff legally. And there's your second group of wronged individuals: the consumers of legitimate content who have to pay more for it.

But one group that is often ignored in analyses of this kind of thing is the experts, such as yours truly, whose legitimate operations may be hampered or stifled altogether by draconian or ill-considered regulations. Although I don't think this will happen, it might come about that the corporate interests who dislike the illegal applications of BitTorrent protocols could enact some sort of binding regulation that would make the whole protocol illegal. That sounds almost unenforceable—the notion that simply having a protocol on your computer, without using it, would make you liable to jail time—but there are precedents in the area of child pornography. It is illegal simply to have child pornography in your computer, and if it's found, you can go to jail.

I have no argument against making child pornography illegal, but when you start getting into technologies where most users are legitimate technical people going about their harmless business, there's a real problem. I'm facing a situation like that right now. For a research project I'm engaged in, it turns out I would like to convert so-called "NTSC analog video" (the standard that's going to disappear from U. S. airwaves in about a year) to digital video. I'm not copying anything—I'm generating the video myself—and my need to convert analog to digital video is a legitimate research requirement. But I have had a heck of a time finding any equipment to do it. I mentioned this to my wife, and she said, "Well, sure. People are wanting to take their old analog VHS tapes and turn them into DVDs illegally." Yes, that can be done with this equipment I want, but I don't want to do that.

After much web searching, I found two companies that make such a device, or used to. Oddly (and somewhat suspiciously), both firms have either removed all mention of the units from their websites altogether, or have put up a big notice saying "This product has been discontinued." Fortunately, I think I have found a supplier who still has some in stock, and I'm waiting to find out if I can get one. But it's beginning to look like some corporation or trade group's lawyer has been sending out letters threatening legal action if such devices aren't withdrawn from the market.

Of course, maybe I'm just being paranoid. But whenever a few experts turn to unethical practices, remember that the damage extends beyond those directly involved: all the other experts who use the same technology for legitimate reasons may be inconvenienced, or worse, when corporations and their lawyers overreact by crippling or banning an entire useful technology because of the malfeasance of a few bad actors. I hope I can get my video converter unit, but if I can't, I may have folks with attitudes like The Pirate Bay guys to thank.

Sources: The article describing The Pirate Bay's latest legal troubles is dated Feb. 1 and can be found at http://blog.wired.com/27bstroke6/2008/02/the-pirate-bay.html.

Monday, January 28, 2008

One Laptop Per Reviewer

A few months ago (in "One Laptop Per Child: Will It Fly?" on Oct. 22, 2007), I commented on the XO laptop designed by some MIT folks who want to bring the benefits of computer technology to millions of children in third-world countries. It's now been long enough for several reviewers to write independent judgments of the unit, and the results are interesting, to say the least.

Andrew "bunnie" Huang, a recent MIT Ph. D. graduate who writes a blog on computer hardware, thinks the mechanical design of the unit is "brilliant." He was impressed by clever little tricks such as the way the designers used the WiFi antennas to fold down and seal the ventilation holes when the unit's not being used. Along with several other reviewers, Huang liked the way the screen remains visible even in bright sunlight—an intentional design choice that makes the unit usable in outdoor settings.

Huang was less impressed with the software, which consists mainly of a custom-tailored word processing program, a web browser, and a few games. The games ran okay, but the web browser was challenged by all but the simplest websites and the keyboard, a sealed-membrane type, was tiring to use for more than a few minutes.

Since the XO is designed for children, several reviewers turned the unit over to their kids to see what would happen. This is hardly a fair test of how the device will fare in Ulan Bator or Rwanda, because the children of people who write computer reviews for a living are going to be a little more tech-savvy than your average child in a developing country. Not surprisingly, the reports from the younger set were mixed. One kid liked the "squishy" feel of the membrane keyboard, but gave up on the gizmo when he found he couldn't use some functions on one of his favorite websites. In order to get her XO to work properly, another reviewer had to face the challenges of downloading a new operating system from the OLPC website. She pointed out that following instructions like "At your root prompt, type: olpc-update (build-no) where (build-no) is the name of the build you would like" is not something that many non-techie adults will be able to handle. Kids adapt faster, but they have to have some initial guidance too.

Many of the units reviewed were pre-production prototypes, and so we should make allowances for that. Also, since each reviewer got only one unit, no one ever tried the mesh-networking capability. Mesh networking means that in a village with a dozen XO's, every laptop could in principle communicate with every other laptop as well as with the one internet hub in the village, all without fancy network setups or wires. We have to take the developers' word that this feature works as advertised.
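The mesh idea can be illustrated with a toy simulation: each laptop knows only the neighbors within radio range, and a packet hops from peer to peer until it reaches the one node with a real internet connection. This is a sketch of the concept under invented names and an invented topology, not the XO's actual networking stack:

```python
from collections import deque

# A village of XOs: each laptop can hear only the neighbors in radio range.
# Only "hub" has an actual internet connection.
radio_links = {
    "hub":      ["laptop_1"],
    "laptop_1": ["hub", "laptop_2", "laptop_3"],
    "laptop_2": ["laptop_1", "laptop_4"],
    "laptop_3": ["laptop_1"],
    "laptop_4": ["laptop_2"],
}

def route_to(start: str, goal: str) -> list[str]:
    """Breadth-first search: the multi-hop path a packet would take."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in radio_links[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(path + [neighbor])
    return []

# laptop_4 is out of the hub's radio range, but its packets can still
# reach the internet by relaying through laptop_2 and laptop_1.
print(route_to("laptop_4", "hub"))
```

The appeal for a village with one internet connection is obvious; the open question the reviewers couldn't settle is whether the real implementation works this smoothly.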

Overall, the reviewers were enthusiastic about the genuinely good features—mainly hardware ones—and tried to be kind about the limitations, mainly in software and capabilities.

I've been sitting here racking my brain for an example of something like this in the history of technology that actually worked. What I'm trying to think of is a situation where a bunch of experts saw a need for a specially stripped-down version of something that was already successful in wealthy economies, and designed and implemented it through government channels. And the only example I can come up with is the Trabant, which can hardly be termed an unqualified success.

For those who don't recognize the name (probably nearly everyone), the Trabant was the only car made in East Germany from 1957 to the end of the Cold War and the fall of the Berlin Wall. It had a two-cycle lawnmower-type engine, a plastic body, and could go from 0 to 60 in only 21 seconds—on good days. I remember reading in the early 1990s about a resident of East Germany who bought a "real" car and was so disgusted with his Trabant that he drove it into a dumpster and left it there. By now, the few remaining "Trabis" have become collector's items, but back when the Trabant was the only car you could buy in East Germany, demand for them outside the country was approximately zero.

Will the XO become the computer world's version of the Trabant? One reason to think not is that the XO seems to be designed better in some ways than most of today's laptops. My guess is that engineers will cherry-pick the XO's design, taking the good features and putting them into higher-end commercial models, but probably leaving the software alone. Unless some huge institution like the Department of Defense or a national government mandates its use, software that is less attractive than commercially available alternatives is generally doomed.

All the professional computer reviewers in the world can say nice things about the XO, but that won't make it popular among its target audience: children in the poorest parts of the world. Trying to do something about poverty—economic and intellectual—is a good thing. And it's only natural for computer experts to try to use their expertise to benefit poor people with computers. But in trying to get the technology to the people who need it, the OLPC people will have to deal with matters even more complex than open operating systems and mesh networks: the root causes of poverty, unemployment, and oppression. And the realm of those matters is not to be found in hardware or software, but in the human soul.

Sources: I consulted XO reviews by Martha Mendoza of AP (reprinted in the Jan. 28, 2008 edition of the Austin American-Statesman, Jamie and Nicholas Bsales at http://www.laptopmag.com/Review/My-8-Year-Old-Reviews-the-OLPC-XO.htm, Kenneth Barrow at
http://www.notebookreview.com/default.asp?newsID=4093, and "bunnie" at http://bunniestudios.com/blog/?p=218. A story on the collector's renaissance of the Trabant can be found at http://www.nytimes.com/2007/06/17/world/europe/17trabant.html. The One Laptop Per Child website is http://laptop.org/.

Monday, January 21, 2008

Did Morality Evolve? Part 2

Last week I commented on an article by Harvard psychologist Steven Pinker about what he called the "moral instinct." Pinker reviewed scientific efforts to study moral thinking in the brain and across cultures which showed that (a) moral issues are treated differently in the brain than other kinds of thought processes and (b) there seems to be a core of moral principles or categories that show up in every culture studied. I pointed out that the second fact was noticed long before Pinker and his colleagues came along, in the form of the theory of natural law. But I left for today the question of where these core principles come from.

As a subscriber to the evolutionary origin of everything human, Pinker believes that morality is ultimately attributable to evolution. However, he is sensitive to the jaundiced eye with which the general public tends to view evolutionary psychology. As Pinker puts this dim view, "Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes."

This is all wrong, Pinker says, and he gives two reasons for why we shouldn't be afraid or concerned when people like him show us the true foundations of morality.

For one thing, the idea of the "selfish gene" is only a metaphor. Genes aren't really selfish, he says, but in order to simplify complex concepts for mass consumption, geneticists have sometimes talked about genes as though they had personality traits such as selfishness and a determination to survive. And people take this the wrong way to mean that if my genes are selfish, then I must be too, even when I think I'm being generous or self-sacrificing, because it's all really a ploy to perpetuate my genes.

Okay, but Pinker can't have things both ways in this regard. Either the idea of the selfish gene is a reality, or it is a metaphor. If we are moral and believe in the absolute rightness of certain moral principles merely because we evolved that way, then the selfish gene is more than a metaphor: it is the bottom level of reality, the ultimate explanation. And if talking about selfish genes is just a metaphor, and the reality is that genes are just molecules, then what does that make people? Just bigger collections of molecules. And if genes can't be selfish in any meaningful sense, why can the larger collections of molecules called people be selfish, or moral, or anything else other than passive followers of physical law?

To his credit, Pinker senses these questions at some level, because next he asks with regard to the idea that morality evolved, ". . . where does it leave the concept of morality itself?" Does it have a real, objective existence independent of genes or evolution, or is it just foam on the ocean of evolved life, a superficial feature that would cease to exist if the evolved creatures called human beings died out?

Pinker notes that many people attribute the origin of moral principles to God. Then he misapplies what is known to philosophers as the "Euthyphro dilemma." Euthyphro is the title of one of Plato's dialogs in which Plato describes a conversation between Socrates and a young man named Euthyphro who wants to prosecute his own father for murder. Disrespect for elders was an impiety in Greek society, but so was murder—hence the dilemma. Socrates asks why the moral or pious act is regarded as moral or pious: "Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?"

Pinker takes this dilemma and uses it as a supposedly bulletproof response to anyone who claims a divine origin for morality. And he does it not by asserting anything, but by throwing up a cloud of questions which he leaves to the reader to answer in the desired way: "Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist?"

After disposing of the God alternative, Pinker admits that maybe moral principles have a kind of Platonic existence "out there," like the truths of mathematics. Even atheists can believe in the Pythagorean Theorem, and Pinker seems to be comfortable with the idea of "moral realism"—the notion that maybe there really are moral principles that we discover, but which would be there even if people weren't around to understand them. And he winds up by saying that maybe we'll behave better if we understand where our morality comes from and how our bodies work when we deal with moral issues.

If Pinker had looked a little more seriously at the Euthyphro dilemma, he would have realized that Socrates didn't so much dispose of the idea of a divine origin for morality as he tried to lead Euthyphro to a deeper understanding of what piety is. Philosophers still discuss various ways of concluding the Euthyphro argument, which is by no means universally regarded as a knockout response to the contention that God invented morality.

If one believes in a God outside the natural universe and time, a God who created everything, then morality must be one of the things God created. Philosophers like to pose "what-if" questions that are titillating to our intellects, but often these questions disregard the character of the personalities involved. My own answer to the question of whether God would suddenly turn around and make the good today bad tomorrow is that "God wouldn't do a thing like that." Maybe in some abstract God-of-the-philosophers world, such a thing is a logical possibility. But those who know God, which is just an extension of how one person knows another, know that God doesn't act that way. Never has and never will.

So a viable alternative to Pinker's Platonic moral realism is a theologically informed belief that somehow—perhaps by using evolutionary processes—God wrote the moral law on our hearts. Either way, I can say along with Pinker that we didn't just make it up by ourselves.

Sources: Pinker's article can be found at http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html. Both Wikipedia ("Euthyphro dilemma") and the Stanford Encyclopedia of Philosophy ("Religion and morality") have good discussions of the Euthyphro dialogue and its implications. The quotation from Socrates above was taken from the Wikipedia article.

Monday, January 14, 2008

Did Morality Evolve? Part 1

Every now and then I like to ruminate on the paradox of engineering ethics. Modern engineering is founded on the principles of objectivity, the scientific method, and the rule of accepting only ideas that can be defended by logical arguments based on observations and measurements. But the foundations of ethics and morality look very different, to say the least. So how can you do engineering ethics without betraying the principles of either engineering or ethics?

The latest stimulus to re-examine this topic came in the form of an article in the Jan. 13 online edition of the New York Times Magazine by Steven Pinker. Pinker holds a chair in the Department of Psychology at Harvard University. He is a comparative rarity among academic psychologists in that he writes clearly and actually listens to the arguments of his opponents. In "The Moral Instinct," Pinker surveys the rapidly advancing science of studying moral behavior by using the tools of experimental psychology.

One of the most interesting recent findings is that the brain has a kind of morality switch built into it. Psychologists can study the activity of particular areas of the brain by using a technique called functional MRI, which shows a picture of brain regions that are taking up more oxygen and presumably working harder. A region called the "dorsolateral surface of the frontal lobes" handles rational thinking such as trying to balance your checkbook without a calculator. On the other hand, the medial frontal-lobe regions deal with emotions about other people—a morality switch that gets turned on at some times but not others.

In one study, the researchers posed a series of moral dilemmas to the subjects and asked them to decide what to do. One question—call it the utilitarian question—involved throwing a streetcar-track switch to save five workers' lives by sending a runaway train to run over a sixth worker. Another question—call it the emotional question—was basically the same dilemma, but instead of throwing a switch, the subject had to decide whether to throw a fat man off a bridge. Of the tests that were not spoiled when the subject laughed so hard at the questions that he fell out of the chair and away from the fMRI machine, the researchers found that only the rational part of the brain got involved when the critical act was just throwing a switch. But when the subject had to imagine walking up to a living, breathing man and throwing him to his death, even if it would save five other lives, the emotional part of the brain lit up and got into a fight with the rational part, which also woke up a third part of the brain that acts as a kind of referee between conflicting signals.

The point of this is that psychologists can now use fMRI and other techniques to distinguish between questions and issues that we use mainly rational thinking to answer, and ones which we respond to by appealing to a more basic, non-rational process that Pinker calls the "moral instinct." And Pinker says some very interesting things about this instinct.

For one thing, studies of people from all walks of life and from a variety of cultures all indicate that there may be a core of instinctive moral beliefs that we all have in common. The very fact that Pinker is willing to admit this shows that he is not captive to the "morality-is-subjective" school of thought which has flourished in academia in recent years. Pinker says what he says, not because of any ideological conviction, but because survey and laboratory data from all over the world confirm it. He cites the work of another psychologist, Jonathan Haidt, who says there are basically five categories of moral principles that cover most of the ground for everybody. What are they?

Without going into too much detail, here's the list: (1) Harm—don't hurt other people and help them if you can. (2) Fairness—people in comparable situations should be treated comparably. (3) Group loyalty—other things being equal, take care of your own (family, friends, city, nation) first. (4) Authority—there are rules, rulers, and rulemakers who should be respected and deferred to. (5) Purity—Saintliness, cleanliness, and being without spot or blemish are good things, and grubbiness, filth, and disorder are bad ones.

Pinker says a lot more, but perhaps I will save some of it for next week. I'd like to stop right there and note that what Pinker and his psychological colleagues are doing is searching for experimental validation of something called natural law. And it looks to me like they've found it.

Natural law is the idea that certain principles of morality are not simply agreed upon by mutual consent, but somehow inhere in the nature of things. And not only that, but in some sense these principles of natural law are built into human nature. The idea of natural law goes back at least to St. Thomas Aquinas, who saw it as something God put into all human beings, whether or not they believed in God. It was viewed as a strong basis for human laws until the Enlightenment, when other philosophies of law became more popular. But natural law still has its defenders in the legal profession, political science, and religion.

One of the most articulate defenses of natural law was written in 1947 by C. S. Lewis, the Oxford literary scholar and author. In a small book called The Abolition of Man, Lewis appended a list of what he discerned to be the central principles of what he called the "Tao" or universal laws of morality. Lewis's "Law of General Beneficence" and his "Law of Mercy" look a lot like the moral principle pertaining to Harm above. His "Duties to Parents, Elders, Ancestors" pertain to the principle of Authority, and you can link Lewis's "Duties to Children and Posterity" and his "Law of Special Beneficence" (that is, to family, country, etc.) to the Group Loyalty principle above.

How did Lewis come up with a list that overlaps in so many ways with the product of the latest modern psychological research? By studying the writings of ancient cultures: Babylonia, Egypt, China, and the Norsemen, among others. Pretty good for a guy with no research funding or graduate assistants, way back in the dark ages of 1947.

The point of this little lesson is that ethics and morality, far from being founded on criteria that are purely subjective, and therefore culturally bound and changeable, seem to come from a source that is pretty constant in its basic outlines across time, space, and cultures. And the latest deliverances of modern experimental psychology back up that idea. We will say more about Pinker's article next week, but this point is worth pondering till then.

Sources: Pinker's article appeared at http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html. Besides Lewis and his The Abolition of Man, another highly readable proponent of natural law is J. Budziszewski, professor of political science at the University of Texas at Austin and author of Written On the Heart: The Case for Natural Law (1997).

Monday, January 07, 2008

NASA's Air Transportation Safety Survey: Light, Heat, and Fog

Regular readers of this blog know that NASA is not my favorite government agency. Once upon a time in the 1960s, it had a clear mission, attracted some of the world's best professionals, and landed men on the moon. But since then the organization has swung between focused, clear projects (space telescopes such as Hubble come to mind) and disasters ranging from the tragic (Challenger and Columbia) to the merely expensive (I could cite numerous space-probe projects that went awry here). The disasters have won a place of prominence for NASA in most engineering ethics textbooks, which usually use the Challenger disaster as an example of how bad management can kill people.

Well, it is my sad duty to comment on yet another episode of what looks like a good idea gone awry due to internal conflicts, bickering, and bad management inside NASA, plus possibly a little help from the media. After NASA started to implement what promised to be a great idea about how to improve airline safety (there's the light), the agency got in a tussle with freedom-of-information-act requestors, NASA head Michael Griffin intervened and took hostile questions from Congress and other agencies (there's the heat), and finally released the data in a close-to-unusable form (there's the fog).

First, the light. Everybody familiar with engineering ethics problems knows that for every major disaster (a bridge that actually falls down, a spaceship that crashes), there are dozens to hundreds of lesser problems and issues that, if noticed and properly acted upon, can serve as warnings about some truly major problems that can then be prevented. Knowing this, some clever people at NASA and outside it (notably a questionnaire expert at Stanford named John Krosnick) organized a big telephone survey of thousands of airline pilots and did interviews from 2001 to 2004, asking them about potentially hazardous incidents that they had personal experience with. This was called the National Aviation Operations Monitoring System.

The normal way that the Federal Aviation Administration (an agency separate from NASA) finds out about near-misses and so on is when pilots file reports on them. Apparently there are rules about when a pilot is supposed to report a near accident, but if pilots are human (most of them are, anyway), they probably don't always follow such rules. If other pilots are involved, the whole process smacks somewhat of ratting on one's colleague, and I suppose there is no reward for reporting these things other than the knowledge that you're following the rules. Anyway, to my knowledge, that is the only current mechanism for detecting incidents that might tell us about dangerous trends having to do with new equipment or procedures, for instance, that might lead to serious accidents in the future.

The NASA-sponsored survey project was an advance on this method. It didn't just wait for pilots to report incidents—it went out and asked about them in random phone surveys. In the nature of things, this kind of survey will turn up more data than one that relies upon the pilot's initiative to write up and submit a report. But there are ways of calibrating out that difference and arriving at something close to the truth, if the survey is checked by other means and completed under the supervision of qualified experts such as Prof. Krosnick.
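The kind of calibration involved can be sketched with a toy calculation. Every number below is hypothetical, made up purely to show the arithmetic; nothing here comes from the actual NAOMS data or FAA records:

```python
def estimated_total_incidents(voluntary_reports, survey_rate_per_pilot, pilots_total):
    """Project the survey's per-pilot incident rate onto the whole pilot
    population, then see how far the voluntary reports fall short."""
    projected = survey_rate_per_pilot * pilots_total
    underreporting_factor = projected / voluntary_reports
    return projected, underreporting_factor

# Hypothetical round numbers, NOT actual NAOMS or FAA figures:
projected, factor = estimated_total_incidents(
    voluntary_reports=5_000,     # incidents formally reported in a year
    survey_rate_per_pilot=0.25,  # incidents per pilot per year, per the survey
    pilots_total=100_000,        # active airline pilots
)
# projected = 25,000 incidents; factor = 5, i.e. only one in five gets reported
```

Cross-checks of this sort, comparing the scaled-up survey numbers against independent data, are exactly the kind of analysis qualified survey experts would perform before drawing conclusions.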

Well, that didn't happen. Or if it did, we don't know about it yet. Evidently, when the numbers of incidents reported by pilots through the phone survey turned out to be a lot higher than the numbers the FAA was getting, some news media people got wind of the information and submitted requests for it under the Freedom of Information Act. Now if I were in NASA's shoes, this might give me some pause, admittedly. It takes a certain amount of time to process and analyze data, but it seems like with computer-aided methods, a year or two should be enough for the survey investigators to write up and issue a report. No report was issued. Why is not clear, except that NASA is quoted as saying it didn't want to harm the airline industry. Well, fine, but crashes harm the airline industry too, and if this data can be used to improve the already good airline safety record further, it's a shame that NASA has sat on it so long.

In congressional hearings about the matter held last October, NASA head Griffin promised to release some data from the project by the end of the year. He kept the letter of his promise, anyway, by posting a 16,000-page .pdf document somewhere on NASA's website on Dec. 31, 2007. A number of indications show that NASA was not especially eager for people to do anything with this data.

For one thing, the news release announcing the document said it was to be found at NASA's website, "http://www.nasa.gov." For anyone familiar with NASA's huge and almost Byzantine website, that's like saying "It's in Arkansas." Your scribe spent fifteen minutes looking for it there and with Google, without success. This is not to say it's not there; the Associated Press people found it, but they're paid to do things like that. A search with NASA's own website search engine for "National Aviation Operations Monitoring System," done while I was looking at that very phrase in one of their own news releases on their own website, turned up zero results. Go figure.

What I figure is what many news outlets have concluded: for some reason, possibly the one NASA stated about fear of scaring customers away from airlines, the agency is reluctant to make these results public or useful in any meaningful way that could actually serve the original purpose of the survey, which was to come up with a better way of catching potential airline accidents before they become real ones. So we have a situation where $11 million of the taxpayers' money has been spent on a media flap and a release of data in a form that one of the survey's own designers, Prof. Krosnick, says is intentionally designed to mislead anyone who tries to use it.

After one of the old movie comedy team Laurel and Hardy's epic screwups involving ropes, stairs, ladders, cream pies, a piano, and a goat, Oliver Hardy would turn to Stan Laurel and say, "Well, Stanley, this is a fi-i-i-ne mess!" That about covers this latest NASA episode. The best thing I can say about it is that nobody got killed, although if it had been done better, we might have been able to prevent some fatalities in the future.

Sources: Two news reports on the NASA data release are at the Houston Chronicle website http://www.chron.com/disp/story.mpl/front/5414060.html and the Chicago Tribune website http://www.chicagotribune.com/news/local/sns-ap-air-safety-secrets,0,3362253.story. NASA's own news release announcing the data is at http://www.nasa.gov/home/hqnews/2007/dec/HQ_M07191_NAOMS_advisory.html. If a sharp-eyed or patient reader locates the actual URL where the NASA survey data is available, I would appreciate it if you could send it to me so I could mention it in a revised blog.

Monday, December 31, 2007

Threats, Rumors, the Internet, and Banks

Well, it's finally happened. I am in possession of some information which may be completely unreliable, but on the other hand is not public knowledge. And it has something to do with engineering ethics, broadly defined. (That's the only way it's defined in this blog—broadly.)

Here it is: About six weeks ago, a U. S. Congressperson went around telling a few of her friends to get as much money out of the bank as they could, since the credit and banking computer systems were under a significant terrorist threat. One of the people the Congressperson told, told my sister, and yesterday my sister told me. (That's pretty stale news for an Internet blog, I realize, but hey, I use what I can get.) It's quite possible that the threat, if it ever existed, has disappeared by now. But it did stimulate me to ask the question, "What are the chances that a concerted terrorist attack on the credit and banking computer systems would succeed in shutting down the U. S. economy?"

So far, in the very limited research I've done, I can't find anybody who has addressed that question recently in so many words. But I turned up a few things I didn't know about, and so I'll share them with you.

The vast majority of cybercrimes committed in this country result not in nationwide crises, but in thousands or millions of consumers losing sums varying from a few cents to thousands of dollars or more. False and deceptive websites using the technique known as "phishing" capture much of this ill-gotten gain. These range from quasi-legal sites that simply sell something online that's available elsewhere for free if you just look a little harder (I fell for this one once), down to sophisticated sites that imitate legitimate organizations such as banks and credit card companies with the intention of snagging an unsuspecting consumer's credit information and cleaning out his or her electronic wallet. While these activities are annoying (or worse if you happen to be a victim of identity theft and get your credit rating loused up through no fault of your own), they in themselves do not pose a threat to the security of the U. S. economy as a whole.

What we're talking about is the cybercrime equivalent of a 9/11: a situation in which nobody (or almost nobody) could complete financial transactions using the Internet. Since a huge fraction of the daily economic activity of the nation now involves computer networks in some way or other, that would indeed be a serious problem if it went on for longer than a day or two.

The consequences of such an attack can be judged by what happened after the real 9/11 in 2001, when the entire aviation infrastructure was closed down for a few days. The real economic damage came not so much from that "airline holiday" (although it hurt) as from the reluctance to fly that millions of people felt for months afterward. This landed the airline industry in a slump from which it is only now recovering.

A little thought will show that a complete terrorist-caused shutdown isn't necessary to produce the desired effect (or undesired, depending on your point of view), even if it were possible, which it may not be, given the distributed and robust nature of the Internet. Say some small but significant fraction—even as little as 1% to 3%—of online financial transactions began going completely astray. I try to buy an MPEG file online for 99 cents, and I end up getting a bill for $403.94 for some industrial chemical I never heard of. Or stuff simply disappears and nobody has a record of it, and no way of telling if it got there. That is the essence of terrorism: do a very small and low-budget thing that does some spectacular damage and scares everybody into changing their behavior in a pernicious way. If such minor problems led only ten percent of the public to quit buying things, you'd have an instant recession.
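The arithmetic behind that worry is worth a quick sketch. Assuming, purely for illustration, independent failures and a customer who makes twenty online transactions a month, even a tiny per-transaction failure rate touches a large share of customers:

```python
# Toy model: probability that a customer sees at least one botched
# transaction in a month. The failure rates and the twenty-transactions-
# per-month figure are assumptions for illustration only.

def share_of_customers_hit(per_txn_failure_rate, txns_per_month=20):
    """Chance of at least one failure, assuming independent transactions."""
    return 1.0 - (1.0 - per_txn_failure_rate) ** txns_per_month

for rate in (0.01, 0.02, 0.03):
    print(f"{rate:.0%} per-transaction failures -> "
          f"{share_of_customers_hit(rate):.0%} of customers hit monthly")
```

Even a 2% per-transaction failure rate reaches roughly a third of such customers within a single month, which is more than enough to plausibly scare ten percent of the public into changing their behavior.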

Enough of this devil's advocacy. Now for the good news. There is an outfit called the Financial Services Information Sharing and Analysis Center (FSISAC). It was founded back in 1999 to provide the nation's banking, credit, and other financial services organizations with a place to share computer security information. Although it has run across some roadblocks (in 2002, one Ty Sagalow testified before the Senate about how FSISAC needed some exemptions from the Freedom of Information Act and antitrust laws in order to do its job better), the mere fact that six years after 9/11 we have not suffered a cyberterrorist equivalent of the World Trade Center attacks says that somebody must be doing something right.

You may have seen the three-letter abbreviation "SSL" on some websites or in financial transactions you have done online. It stands for "Secure Sockets Layer," and if you've been even more sharp-eyed and seen a "VeriSign" logo, that means the transaction was safeguarded by FSISAC's service provider, VeriSign, of Mountain View, California. I'm sure they employ many software engineers and other specialists to keep ahead of those who would crack the security codes that protect Internet financial transactions, and it's not an easy job. But as bad as identity theft and phishing are these days, they would be much worse without the work of VeriSign and other similar organizations.
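What SSL (and its modern descendant, TLS) actually guarantees can be illustrated with a few lines of Python. This is a generic sketch of certificate checking, not anything specific to VeriSign's service:

```python
# A sketch of what "SSL" buys you, using Python's standard ssl module.
import ssl

# The default client context refuses unauthenticated connections:
ctx = ssl.create_default_context()

# 1. The server must present a certificate that chains up to a trusted
#    certificate authority (VeriSign was one of the big ones in 2007)...
assert ctx.verify_mode == ssl.CERT_REQUIRED

# 2. ...and the certificate must actually name the host we dialed,
#    which is what defeats a look-alike "phishing" server:
assert ctx.check_hostname is True

# In use, you wrap a TCP socket before any sensitive data flows
# (shown as a comment to avoid real network traffic):
#
#   with socket.create_connection(("bank.example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="bank.example.com") as tls:
#           ...  # traffic here is encrypted and authenticated
```

The two checks together are the whole trick: encryption alone would happily encrypt your password straight to an impostor, so it is the certificate validation that does the heavy lifting.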

If the truth be told, much cybercrime is made easier by the stupid things some consumers do, such as giving out their credit card numbers, passwords, and Social Security numbers to "phishy"-looking websites, or in response to emails purporting to be from their bank or credit card company. Any financial organization worth its salt guards passwords and such things like gold, and never has to stoop to the expedient of emailing its customers to say, "Oh, please remind us of your password again, we lost it." But as H. L. Mencken is alleged to have said, no one has ever gone broke underestimating the intelligence of the American public. Or maybe it was taste, not intelligence. Anyway, don't fall for such stunts.

The FSISAC has a handy pair of threat level monitors on their main website, with colors that run from green to blue, yellow, orange, and red. As of today, the general risk of cyber attacks is blue ("guarded") and the significant risk of physical terrorist attacks is yellow ("elevated"). I'm not sure what you're supposed to do with that information, but you might sleep better tonight after the New Year's Eve celebration knowing that your online money and credit are—reasonably—safe. Happy New Year!

Sources: The FSISAC website showing the threat-level displays is at http://www.fsisac.com/. VeriSign's main website is http://www.verisign.com/. Mr. Sagalow's testimony before the U. S. Senate in May of 2002 is reproduced at http://www.senate.gov/~govt-aff/050802sagalow.htm.

Wednesday, December 26, 2007

Let There Be (Efficient) Light

Like many of us, the U. S. Congress often puts off things till the last minute. Last week, just before breaking for the Christmas recess, our elected representatives passed an energy bill. Unlike earlier toothless bills, this one will grow some teeth if we wait long enough and don't let another Congress pull them first. Besides an increase in the CAFE auto-mileage standards, the bill will make it illegal by 2012 to sell light bulbs that don't meet a certain efficiency standard. And most of today's incandescents can't meet that standard.

Now what has this got to do with engineering ethics? You could argue that there are no ethical dilemmas or problems here. You could say it's legal, and therefore ethical, to design, make, and sell cheap, inefficient light bulbs right up to the last day before the 2012 deadline, and thereafter it will be illegal, and therefore unethical, to do so. No ambiguities, no moral dilemmas, cut and dried, end of story. But simply stating the problem that way shows that more thought has to be put into it than that.

For example, systems of production and distribution don't typically turn on a dime. One reason the legislators put off the deadline five years into the future is to give manufacturers and their engineers plenty of time to plan for it. And planning, as anyone who has done even simple engineering knows, is not always a straightforward process. To the extent that research into new technologies will be required, planning can be highly unpredictable, and engineers will have to exercise considerable judgment in order to get from here to there in time with a product that works and won't cost too much to sell. That kind of thing is the bread and butter of engineering, but in this case it's accelerated and directed by a legal mandate. And I haven't even touched the issue of whether such mandates are a good thing, even if they encourage companies to make energy-efficient products.

In the New York Times article that highlighted this law, a spokesman for General Electric (whose origins can be traced directly back to incandescent light-bulb inventor Thomas Edison) was quoted as claiming that his company is working on an incandescent bulb that will meet the new standards. Maybe so. There are fundamental physical limitations of that technology which will make it hard for any kind of incandescent to compete with the compact fluorescent units, let alone advanced light-emitting diode (LED) light sources that may be developed shortly. But fortunately, Congress didn't tell companies how to meet the standard—it just set the standard and is letting the free market and its engineers figure out how to get there.
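To see why incandescents have such a hard time, compare typical luminous efficacies. The figures below are rough textbook-style values I'm assuming for illustration, not specifications of any particular product:

```python
# Rough luminous-efficacy figures in lumens per watt (assumed typical
# values, not any manufacturer's specs).
EFFICACY_LM_PER_W = {
    "standard incandescent": 15,
    "halogen incandescent": 20,
    "compact fluorescent": 60,
    "white LED (emerging)": 80,
}

TARGET_LUMENS = 800  # roughly the output of a traditional 60-watt bulb

# Watts needed to produce the same light with each technology:
watts_needed = {tech: TARGET_LUMENS / lm_w
                for tech, lm_w in EFFICACY_LM_PER_W.items()}

for tech, watts in watts_needed.items():
    print(f"{tech}: about {watts:.0f} W for {TARGET_LUMENS} lumens")
```

On these numbers a compact fluorescent delivers a 60-watt bulb's light for around 13 watts, a factor-of-four gap that no incremental tweak to a hot filament is likely to close.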

I have not seen the details of the new law, but I assume there are exemptions for situations where incandescents will still be needed. For example, the theater and movie industries have a huge investment in lighting equipment that uses incandescents, and it would be difficult or impossible to adapt that equipment to fluorescent units for technical reasons. Both the sun and an incandescent filament are near-blackbody radiators with smooth, continuous spectra, and for accurate color rendition that kind of broad spectrum is needed. And I have a feeling—just a feeling—that, like candles, incandescent light bulbs will be preserved in special cultural settings: displays of antique lighting and period stage sets, perhaps. Surely there will be a way to deal with that without resorting to the light-bulb equivalent of a black market.

But most of these problems are technical challenges that can be solved by technical solutions. One of the biggest concerns I have is an esthetic one: the relative coldness of fluorescent or LED light compared to incandescent light. This is a matter of the spectral balance of intensity in different wavelengths. For reasons having to do with phosphor efficiencies and the difficulty of making red phosphors, it's still hard to find a fluorescent light that has the warm reddish-yellow glow of a plain old-fashioned light bulb, which in turn recalls the even dimmer and yellower gleam of the kerosene lantern or candle. Manufacturers may solve this problem if there seems to be enough of a demand for a warm-toned light source, but most people probably don't care. For all the importance light has to our lives, we Americans are surprisingly uncritical and accepting of a wide range of light quality, from the harsh glare of mercury and sodium lamps to the inefficient but friendly glow of the cheap 60-watt bulb. I'm not particularly looking forward to getting rid of the incandescent bulbs in my office that I installed specially as a kind of protest against the harsh fluorescent glare of the standard-issue tubes in the ceiling. But when it gets to the point where I have to do it, I hope I can buy some fluorescent replacements that mimic that warm-toned glow, even if I know the technology isn't the same.

Sources: The New York Times article describing the light-bulb portion of the energy bill and its consequences can be found at http://www.nytimes.com/2007/12/22/business/22light.html. A February 2007 news item describing General Electric's announcement of high-efficiency incandescent lamp technology (though not giving any technical details) is at http://www.greenbiz.com/news/news_third.cfm?NewsID=34635.

Monday, December 17, 2007

Lead in the Christmas Tree Lights—When Caution Becomes Paranoia

Who would have thought? Lurking there amid the gaily colored balls, the fresh-smelling piney-woods aroma of the Douglas fir, and the brilliant sparks of light twinkling throughout, is the silent enemy: lead. Or at least, something like that must have gone through the mind of the reader who wrote in to the Austin American-Statesman after she read a caution tag on her string of Christmas-tree lights. According to her, it said "Handling these lights will expose you to lead, which has been shown to cause birth defects." Panicked, she rushed back to the store where she bought them to see if she could find some lead-free ones, but "ALL of them had labels stating that they were coated in lead! This is terrifying news for a young woman who is planning to start a family!"

The guy who writes the advice column in which this tragic screed appeared said not to worry, but be sure and wash your hands after handling the lights. He based his advice on information from Douglas Borys, who directs something called the Central Texas Poison Center.

In responding to the woman's plight, Mr. Borys faced a problem that engineers have to deal with too: how to talk about risk in a way that is both technically accurate and understandable and usable by the general public. We have to negotiate a careful passage between the rock of purely accurate technological gibberish, and the hard place of telling people there's nothing to worry about at all.

In the case of lead, there is no doubt that enough lead in the system of a child, or the child's mother before it is born, can cause real harm. The question is, how much is "enough"?

Well, going to the technical extreme, the U. S. Centers for Disease Control and Prevention issued a report in 2005 supporting the existing "level of concern" that a child's blood not contain more than 10 micrograms of lead per deciliter (abbreviated as 10 µg/dL). No studies have shown consistent, definitive harm to children with that low an amount of lead in their systems. Just to give you an idea of how low this is, the typical adult in the U. S. has between 1 and 5 µg/dL of lead in his or her blood, according to a 1994 report. The concern about pregnant (or potentially pregnant) women getting lead in their systems is that the fetus is abnormally sensitive to lead compared to older children and adults, although exactly how much more sensitive isn't clear, since we obviously can't do controlled experiments on pregnant women to find out.

Now if you tried to print the preceding paragraph in a daily paper, or a blog for general consumption, or (perish the thought!) read it on the TV news, you'd probably get fired. Why? Because using phrases like "micrograms per deciliter" has the same effect on most U. S. audiences as a momentary lapse into Farsi. People don't understand it and tune you out. But unfortunately, if you want to talk about scientifically indisputable facts, you have to start with nuts-and-bolts things like how many atoms of lead do you find in a person and where did it come from? These are things that scientists can measure and quantify, but the general public cannot understand them, at least not without a lot of help. So it all has to be interpreted.
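For the record, the unit arithmetic itself is simple enough to sketch. The five liters (50 deciliters) of blood in a typical adult is an assumed round figure for illustration:

```python
# Back-of-the-envelope blood-lead arithmetic. The 50 dL (5 liter) adult
# blood volume is an assumed round number, not a clinical value.
BLOOD_VOLUME_DL = 50

def total_blood_lead_micrograms(concentration_ug_per_dl):
    """Total lead circulating in the blood at a given concentration."""
    return concentration_ug_per_dl * BLOOD_VOLUME_DL

# At the CDC's 10 micrograms-per-deciliter level of concern:
total_ug = total_blood_lead_micrograms(10)  # 500 micrograms
total_mg = total_ug / 1000                  # half a milligram, total
```

Half a milligram of lead in the entire bloodstream, in other words: a speck far smaller than a grain of salt, which is why concentration units rather than everyday weights are used in the first place.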

So to go to the other extreme of over-interpretation, the expert from the poison center could have said something like, "Aaahh, fuggedaboudit! Do you smoke? Does your house have old lead paint? Do you ever drive without seatbelts, or talk on your cell phone and drive at the same time? Are you significantly overweight? If any of these things is true, you're far more likely to die from one of them than from any possible harm that might come to you or your hypothetical children from handling Christmas-tree lights with a tiny bit of lead at each solder joint, covered up underneath insulation and probably not accessible to the consumer at all under any normal circumstance."

In saying these things, the expert would have been entirely correct, but probably would have come across as less than sympathetic, shall we say. A Time Magazine article back in November 2006 pointed out that, because of the way our brains process information, we tend to overreact to certain kinds of hazards and ignore others that we'd be better off paying attention to. Unusual hazards, and dangers that take a long time to show their insidious effects, worry us more than things we're used to or things that get us all at once (like heart attacks or car wrecks). The woman's worry fits both of these categories: the last thing she was thinking about as she decorated her Christmas tree was exposing herself to a poisoning hazard, and lead poisoning takes a while to show its effects.

As far as the expert's advice goes, I'd say he walked a reasonable line between the two extremes. Giving people something to do about a hazard (such as handwashing) always helps psychologically, even though as a matter of fact there was essentially no hazard in the first place. And blowing off the danger altogether is generally regarded as irresponsible, because one of the iron-clad rules of technical discourse is that nothing is entirely "safe."

Well, here's hoping that your thoughts of Christmas and the holiday season will be uncontaminated by worries about lead or any other poison—chemical, mental, or otherwise.

Sources: The column "Question Everything" by Peter Mongillo appeared in the Dec. 17, 2007 edition of the Austin American-Statesman. The online edition of Time Magazine for November 2006 carried the article "How Americans Are Living Dangerously" by Jeffrey Kluger at http://www.time.com/time/magazine/article/0,9171,1562978-1,00.html. And the U. S. Centers for Disease Control and Prevention carries numerous technical articles on lead hazards and prevention, including a survey of blood lead levels at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5420a5.htm#tab1.

Monday, December 10, 2007

The Human Side of Automated Driving

The graphic attracted my eye. It showed a 1950s-mom type looking alarmed as she sat beside a futuristic robot driving an equally improbable-looking car. The headline? "In the Future, Smart People Will Let Cars Take Control." Which implies, of course, only dumb people won't. But I'm not sure that's what the author had in mind.

John Tierney wrote in last week's online edition of the New York Times that we are getting closer each year to the point where completely automated control of automobiles in realistic driving situations will become a reality, at least from the technological point of view. The Defense Advanced Research Projects Agency has been running a series of driverless-car races, its "Grand Challenges," over the last four years. In 2004, despite a relatively unobstructed desert route, none of the vehicles got farther than about seven miles before breaking down, crashing, or otherwise dropping out of the race. But this year, six vehicles finished a much more challenging sixty-mile urban course that included live traffic. Experts say that in five to fifteen years, using technologies ranging from millimeter-wave radar to GPS and artificial-intelligence decision systems, it will be both practical and safe to hand control of a properly equipped vehicle over to the equivalent of a robot driver for a good part of many auto trips. But will we?

There is that in humans which is glad for help, but rebels at a complete takeover. While we have been smoothly adapting to incremental automation of cars for decades, a complete takeover is a different matter. Almost nobody objected in the 1910s and '20s to the introduction of what was then called the "self-starter," which replaced turning a crank in front of your car with turning an ignition key. (The only people who grumbled about it back then were men who liked the fact that most women were simply not strong enough to start a car the old-fashioned way, and therefore couldn't drive!) Automatic transmissions came next, and have taken over the U. S. market, except among drivers (again, mostly men) who take pride in shifting for themselves; they remain a minority taste in much of the rest of the world. Power steering, power brakes, anti-lock braking, and cruise control are all automatic systems that we have adopted almost without a quibble. But I think most people will at least stop to think before they press a button that relinquishes total control of the vehicle to a computer, or robot, or servomechanism, or whatever we'll choose to call it.

And well they might hesitate. Tierney notes that automatically piloted vehicles can follow much more closely in safety than cars being driven by humans. He cites a recent experiment in which engineers directed a group of driverless passenger cars to drive at 65 m. p. h. spaced just fifteen feet apart, with no untoward results. This has obvious positive implications for increasing the capacity of existing freeways. But he doesn't say if the interstate was cleared of all other traffic for this experiment. As for safety, automatic vehicle control doesn't have to be perfect—only better than what we have now, a system in which over 42,000 people died on U. S. roadways alone in 2006, the vast majority because of accidents due to human error rather than mechanical failures.
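A back-of-the-envelope calculation shows just how slim that fifteen-foot margin is. The 1.5-second human reaction time below is a commonly assumed round figure, not a number from Tierney's article:

```python
# How much time separates cars fifteen feet apart at 65 mph, and how far
# would a human driver travel just reacting? (1.5 s reaction time is an
# assumed round figure for illustration.)
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_ft_per_s = 65 * FEET_PER_MILE / SECONDS_PER_HOUR  # about 95.3 ft/s

gap_seconds = 15 / speed_ft_per_s            # time gap between bumpers
reaction_distance_ft = 1.5 * speed_ft_per_s  # distance covered while reacting

print(f"Gap at 65 mph and 15 ft spacing: {gap_seconds:.2f} s")
print(f"Distance covered in a 1.5 s reaction: {reaction_distance_ft:.0f} ft")
```

The gap works out to about 0.16 seconds, an order of magnitude shorter than a typical human reaction time; a human driver would need well over a hundred feet just to begin braking. That is the whole point: only machine control makes such spacing survivable.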

If we are going to go to totally automatic control for automobiles, it seems like there will have to be a systematic effort to organize the times, places, and conditions under which this kind of control can be used. You can bet that the fifteen-foot-spacing experiment would have failed spectacularly if even one of those cars were driven by a human. The great virtue of machine control is that it's much more predictable than humans, who can be distracted by anything from a stray wasp to a cell phone call and do anything or nothing as a consequence. One expert imagines that we will have special total-control lanes on freeways much like high-occupancy-vehicle lanes today, and no manually controlled vehicles will be allowed inside such lanes.

That's one way to do it, certainly. But I for one look forward to the day when we have door-to-door robot chauffeurs. I would like nothing better than to get in my car, program in my destination, and then sit back and read or work or listen to music or enjoy the scenery, or in fact do any of the other things I can do right now on a train ride, which is at present practical transportation in the U. S. only in the northeast corridor. For decades we have fussed about the urban sprawl caused by the automobile and how much better things are handled (according to some) when public transportation is used instead of cars. It may be that automatic vehicle control will provide some kind of third way that will alleviate at least some of the problems caused by the automobile. If we can let go of the control thing, maybe we can do something similar with the ownership thing too, although as long as people want to work in cities and live in the country, you're going to have to find some way to get millions of bodies into the city in the morning and back to the country in the evening. But if we could space vehicles safely only fifteen feet apart and let them go sixty or eighty m. p. h. on the freeways, and come up with some software that would deal with traffic jams and other unpredictable but inevitable problems, commuting might become safer, more fuel-efficient, and more pleasant all at once.

Before many more of these futuristic visions happen, though, we are going to have to change some of our attitudes. There are sure to be a few drive-it-myself-or-nothing folks who will say that we'll have to pry their cold, dead fingers off the steering wheel before we can get them to agree to use totally automated driving. And if the thing isn't handled well politically, such a minority could spoil a potentially good thing for the rest of us. The right to drive your own car with your own hands on the steering wheel is one of those assumed rights that we accept almost without thinking about it, but if the day comes when it is more of a hazard than a public good, we may have to think about it twice—and then give it up.

Sources: The New York Times online article referred to appeared on Dec. 4, 2007 at http://www.nytimes.com/2007/12/04/science/04tier.html. Tierney refers to a University of California Transportation Center article by Steven Shladover published in the Spring 2000 edition of the center's Access Magazine (http://www.uctc.net/access/access16lite.pdf).

Monday, December 03, 2007

Can Robots Be Ethical? Continued

Last week I considered the proposal of sci-fi writer Robert Sawyer, who wants to recognize robot rights and responsibilities as moral agents. He looks forward to the day when "biological and artificial beings can share this world as equals." I said that this week I would take up the distinction between good and necessary laws regulating the development and use of robots as robots, and the unnecessary and pernicious idea of treating robots as autonomous moral agents. To do that, I'd like to look at what Sawyer means by "equal."

I think the sense in which he uses that word is the same sense that is used in the Declaration of Independence, which says that "all men are created equal." That was regarded by the writers of the Declaration as a self-evident truth, that is, one so plainly true that it needed no supporting evidence. It is equally plain and obvious that "equal" does not mean "identical." Then as now, people were born with different physical and mental endowments, and so what the Founders meant by "equal" must have been something other than "identical in every respect."

What I believe they meant is that, as human beings created by God, all people deserve to receive equal treatment in certain broad respects, such as the rights to life, liberty, and the pursuit of happiness. That is probably what Sawyer means by equal too. Although the origin and nature of robots will always be very different than those of human beings, he urges us to treat robots as equals under law.

I suspect Sawyer wants us to view this question in the light of what might seem to be its great historical parallel, that is, slavery. Under that institution, some human beings treated other human beings as though they were machines: buying and selling them and taking the fruits of their labor without just compensation. The deep wrong in all this is that slaves are human beings too, and it took hundreds of years for Western societies to accept that fact and act on it. But acting on it required a solid conviction that there was something special and distinct about human beings, something that the abolition of slavery acknowledged.

Robots are not human beings. Nothing that can ever happen will change that fact—no advances in technology, no degradation in the perception of what is human or what is machine, nothing. It is an objective fact, a self-evident truth. But just as human society took a great step forward in admitting that slaves were people and not machines, we have the freedom to take a great step backward by deluding ourselves that people are just machines. Following Sawyer's ideas would take us down that path. Why?

Already, it is a commonly understood assumption among many educated and professional classes (but rarely stated in so many words) that there is no essential difference between humans and machines. There are differences of degree—the human mind, for example, is superior to computers in some ways but inferior in other ways. But according to this view, humans are just physical systems following the laws of physics exactly like machines do, and if we could ever build a machine with the software and hardware that could simulate human life, then we would have created a human being, not just a simulation.

What Sawyer is asking us to do is to acknowledge that point of view explicitly. Just as the recognition of the humanity of slaves led to the abolition of slavery, the recognition of the machine nature of humanity will lead to the equality of robots and human beings. But look who moved this time. In the first case, we raised the slaves up to the level of fully privileged human beings. In the second, we propose to lower mankind to the level of just another machine. There is no alternative, because admitting machines to the rights and responsibilities of humans implicitly concedes that humans have no special characteristic that distinguishes them from machines.

Would you like to be treated like a machine? Even a machine with "human" rights? Of course not. Well, then, how would you like to work for a machine? Or owe money to a machine? Or be arrested, tried, and convicted by a machine? Or be ruled by a machine? If we give machines the same rights as humans, all these things not only may, but must come true. Otherwise we have not fully given robots the same rights and responsibilities as humans.

There is a reason that most science fiction dealing with robots portrays the dark side of what might happen if robots managed to escape the full control of humans (or even if they don't). All good fiction is moral, and the engine that drives robot-dominated dystopias is the horror we feel at witnessing the commission of wrongs on a massive scale. Add to that horror the irony that these stories always begin when humans try to achieve something good with robots (even if it is a selfish good), and you have the makings of great, or at least entertaining, stories. But we want them to stay that way—just stories, not reality.

Artists often serve as prophets in a culture, not in any necessarily mystical sense, but in the sense that they can imagine the future outcomes of trends that the rest of us less sensitive folk perceive only dimly, if at all. We should heed the warnings of a succession of science fiction writers, from Isaac Asimov to Arthur C. Clarke and onward, that there is great danger in granting too much autonomy, too many privileges, and yes, equality, to robots. In common with desires of all kinds, robots make good slaves but bad masters.

As progress in robotic technology continues, a good body of law regulating the design and use of robots will be needed. But of supreme importance is the philosophy upon which this body of law is erected. If at the start we acknowledge that robots are in principle just advanced cybernetic control systems, essentially no different from the thermostat in your house or the cruise control in your car, then the safety and convenience of human beings will come first in this body of law, and we can employ increasingly sophisticated robots in the future without fear. But if the laws are built upon the wrong foundation—namely, the theoretical idea that robots and humans are the same kind of entity—then we can look forward to the day when some of the worst of science fiction's robotic dystopias happen for real.

Sources: Besides last week's blog on this topic, I have written an essay ("Sociable Machines?") on the philosophical basis of the distinction between humans and machines, which I will provide on request to my email address (available through the "people search" function on the Texas State University website, www.txstate.edu).