Monday, December 31, 2007

Well, it's finally happened. I am in possession of some information which may be completely unreliable, but on the other hand is not public knowledge. And it has something to do with engineering ethics, broadly defined. (That's the only way it's defined in this blog—broadly.)
Here it is: About six weeks ago, a U. S. Congressperson went around telling a few of her friends to get as much money out of the bank as they could, since the credit and banking computer systems were under a significant terrorist threat. One of the people the Congressperson told, told my sister, and yesterday my sister told me. (That's pretty stale news for an Internet blog, I realize, but hey, I use what I can get.) It's quite possible that the threat, if it ever existed, has disappeared by now. But it did stimulate me to ask the question, "What are the chances that a concerted terrorist attack on the credit and banking computer systems would succeed in shutting down the U. S. economy?"
So far, in the very limited research I've done, I can't find anybody who has addressed that question recently in so many words. But I turned up a few things I didn't know about, and so I'll share them with you.
The vast majority of cybercrimes committed in this country result not in nationwide crises, but in thousands or millions of consumers losing sums varying from a few cents to thousands of dollars or more. False and deceptive websites using the technique known as "phishing" capture much of this ill-gotten gain. These can range from quasi-legal sites that simply sell something online that's available elsewhere for free if you look a little harder (I fell for this one once), to sophisticated sites that imitate legitimate organizations such as banks and credit card companies with the intention of snagging an unsuspecting consumer's credit information and cleaning out their electronic wallets. While these activities are annoying (or worse if you happen to be a victim of identity theft and get your credit rating loused up through no fault of your own), they do not in themselves pose a threat to the security of the U. S. economy as a whole.
What we're talking about is the cybercrime equivalent of a 9/11: a situation in which nobody (or almost nobody) could complete financial transactions using the Internet. Since a huge fraction of the daily economic activity of the nation now involves computer networks in some way or other, that would indeed be a serious problem if it went on for longer than a day or two.
The consequences of such an attack can be judged by what happened after the real 9/11 in 2001, when the entire aviation infrastructure was closed down for a few days. The real economic damage came not so much from that "airline holiday" (although it hurt) as from the reluctance to fly that millions of people felt for months afterward. This landed the airline industry in a slump from which it is only now recovering.
A little thought will show that a complete terrorist-caused shutdown isn't necessary to produce the desired effect (or undesired, depending on your point of view), even if it were possible, which it may not be, given the distributed and robust nature of the Internet. Say some small but significant fraction—even as little as 1% to 3%—of online financial transactions began going completely astray. I try to buy an MPEG file online for 99 cents, and I end up getting a bill for $403.94 for some industrial chemical I never heard of. Or stuff simply disappears and nobody has a record of it, and no way of telling if it got there. That is the essence of terrorism: do a very small and low-budget thing that does some spectacular damage and scares everybody into changing their behavior in a pernicious way. If such minor problems led only ten percent of the public to quit buying things, you'd have an instant recession.
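To put rough numbers on that intuition, here is a minimal back-of-envelope sketch in Python. The daily transaction volume, the failure rates, and the assumed behavioral response are all invented for illustration; none of them comes from real economic data.

# Toy model: how a small rate of corrupted online transactions might
# translate into lost consumer activity. All figures are hypothetical.

DAILY_ONLINE_TRANSACTIONS = 100_000_000  # assumed daily volume

def transactions_gone_astray(failure_rate: float) -> int:
    """Expected number of transactions that go wrong per day."""
    return int(DAILY_ONLINE_TRANSACTIONS * failure_rate)

for rate in (0.01, 0.02, 0.03):
    print(f"{rate:.0%} failure rate -> "
          f"{transactions_gone_astray(rate):,} bad transactions per day")

# The real damage comes from fear, not the failures themselves: if the
# scare leads 10% of consumers to quit buying online, the lost volume
# dwarfs the directly corrupted transactions.
scared_fraction = 0.10  # assumed behavioral response
lost_to_fear = int(DAILY_ONLINE_TRANSACTIONS * scared_fraction)
print(f"Transactions lost to fear alone: {lost_to_fear:,} per day")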
Enough of this devil's advocacy. Now for the good news. There is an outfit called the Financial Services Information Sharing and Analysis Center (FSISAC). It was founded back in 1999 to provide the nation's banking, credit, and other financial services organizations with a place to share computer security information. Although it has run across some roadblocks—in 2002, one Ty Sagalow testified before the Senate about how FSISAC needed some exemptions from the Freedom of Information Act and antitrust laws in order to do its job better—the mere fact that six years after 9/11, we have not suffered a cyberterrorist equivalent of the World Trade Center attacks says that somebody must be doing something right.
You may have seen the three-letter abbreviation "SSL" on some websites or financial transactions you have done online. It stands for "Secure Sockets Layer," and if you've been even more sharp-eyed and seen a "VeriSign" logo, that means the transaction was safeguarded by FSISAC's service provider, VeriSign, of Mountain View, California. I'm sure they employ many software engineers and other specialists to keep ahead of those who would crack the security codes that protect Internet financial transactions, and it's not an easy job. But as bad as identity theft and phishing are these days, things would be much worse without the work of VeriSign and other similar organizations.
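For the curious, here is a minimal sketch in Python of what that SSL machinery actually does during a transaction: it verifies the server's certificate against trusted certificate authorities before any sensitive data moves. It uses only the standard library, and the host name is just an example.

import socket
import ssl

# Open an SSL/TLS connection and verify the server's certificate
# against the system's trusted certificate authorities - the same
# check a browser performs before showing its padlock icon.
hostname = "www.verisign.com"  # example host from the article

context = ssl.create_default_context()  # enables certificate verification

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        # The certificate names the organization vouching for the site.
        print("Negotiated protocol:", tls.version())
        print("Issued to:", dict(x[0] for x in cert["subject"]))
        print("Issued by:", dict(x[0] for x in cert["issuer"]))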
If the truth be told, much cybercrime is made easier by the stupid things some consumers do, such as giving out their credit card numbers, passwords, and Social Security numbers to "phishy"-looking websites, or in response to emails purporting to be from their bank or credit card company. Any financial organization worth its salt guards passwords and such things like gold, and never has to stoop to the expedient of emailing its customers to say, "Oh, please remind us of your password again, we lost it." But as P. T. Barnum is alleged to have said, no one has ever gone broke underestimating the intelligence of the American public. Or maybe it was taste, not intelligence. Anyway, don't fall for such stunts.
The FSISAC has a handy pair of threat level monitors on their main website, with colors that run from green to blue, yellow, orange, and red. As of today, the general risk of cyber attacks is blue ("guarded") and the significant risk of physical terrorist attacks is yellow ("elevated"). I'm not sure what you're supposed to do with that information, but you might sleep better tonight after the New Year's Eve celebration knowing that your online money and credit are—reasonably—safe. Happy New Year!
Sources: The FSISAC website showing the threat-level displays is at http://www.fsisac.com/. VeriSign's main website is http://www.verisign.com/. Mr. Sagalow's testimony before the U. S. Senate in May of 2002 is reproduced at http://www.senate.gov/~govt-aff/050802sagalow.htm.
Wednesday, December 26, 2007
Let There Be (Efficient) Light
Like many of us, the U. S. Congress often puts off things till the last minute. Last week, just before breaking for the Christmas recess, our elected representatives passed an energy bill. Unlike earlier toothless bills, this one will grow some teeth if we wait long enough and don't let another Congress pull them first. Besides an increase in the CAFE auto-mileage standards, the bill will make it illegal by 2012 to sell light bulbs that don't meet a certain efficiency standard. And most of today's incandescents can't meet the mark.
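To get a feel for the efficiency gap the new law targets, here is a small Python sketch comparing typical bulb types against an assumed lumens-per-watt cutoff. The efficacy figures are ballpark values, and the cutoff is my own stand-in for the statutory standard, not the law's actual number.

# Rough luminous-efficacy comparison. The efficacy values are typical
# ballpark figures, and the 20 lm/W threshold is a hypothetical
# stand-in for the statutory standard.

EFFICACY_LM_PER_W = {
    "standard incandescent": 15,
    "halogen incandescent": 20,
    "compact fluorescent": 60,
}

THRESHOLD_LM_PER_W = 20  # assumed cutoff, for illustration only

for bulb, efficacy in EFFICACY_LM_PER_W.items():
    verdict = "passes" if efficacy >= THRESHOLD_LM_PER_W else "fails"
    # Watts needed to match a classic 60 W bulb's roughly 800 lumens:
    watts = 800 / efficacy
    print(f"{bulb}: {efficacy} lm/W ({verdict}); "
          f"{watts:.0f} W to match a classic 60 W bulb")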
Now what has this got to do with engineering ethics? You could argue that there are no ethical dilemmas or problems here. You could say it's legal, and therefore ethical, to design, make, and sell cheap, inefficient light bulbs right up to the last day before the 2012 deadline, and that thereafter it will be illegal, and therefore unethical, to do so. No ambiguities, no moral dilemmas, cut and dried, end of story. But simply stating the problem that way shows that more thought has to go into it than that.
For example, systems of production and distribution don't typically turn on a dime. One reason the legislators put off the deadline five years into the future is to give manufacturers and their engineers plenty of time to plan for it. And planning, as anyone who has done even simple engineering knows, is not always a straightforward process. To the extent that research into new technologies will be required, planning can be highly unpredictable, and engineers will have to exercise considerable judgment in order to get from here to there in time with a product that works and won't cost too much to sell. That kind of thing is the bread and butter of engineering, but in this case it's accelerated and directed by a legal mandate. And I haven't even touched the issue of whether such mandates are a good thing, even if they encourage companies to make energy-efficient products.
In the New York Times article that highlighted this law, a spokesman for General Electric (whose origins can be traced directly back to incandescent light-bulb inventor Thomas Edison) was quoted as claiming that his company is working on an incandescent bulb that will meet the new standards. Maybe so. There are fundamental physical limitations of that technology which will make it hard for any kind of incandescent to compete with the compact fluorescent units, let alone advanced light-emitting diode (LED) light sources that may be developed shortly. But fortunately, Congress didn't tell companies how to meet the standard—it just set the standard and is letting the free market and its engineers figure out how to get there.
I have not seen the details of the new law, but I assume there are exemptions for situations where incandescents will still be needed. For example, in the theater and movie industries, there is a huge investment in lighting equipment that uses incandescents which would be difficult or impossible to adapt to fluorescent units for technical reasons. It turns out that the sun emits light that is very close to what certain kinds of incandescent bulbs emit, and for accurate color rendition the broad spectrum of an incandescent light is needed. And I have a feeling—just a feeling—that, like candles, incandescent light bulbs will be preserved in special cultural settings: displays of antique lighting and period stage sets, perhaps. Surely there will be a way to deal with that without resorting to the light-bulb equivalent of a black market.
But most of these problems are technical challenges that can be solved by technical solutions. One of the biggest concerns I have is an esthetic one: the relative coldness of fluorescent or LED light compared to incandescent light. This is a matter of the spectral balance of intensity in different wavelengths. For reasons having to do with phosphor efficiencies and the difficulty of making red phosphors, it's still hard to find a fluorescent light that has the warm reddish-yellow glow of a plain old-fashioned light bulb, which in turn recalls the even dimmer and yellower gleam of the kerosene lantern or candle. Manufacturers may solve this problem if there seems to be enough of a demand for a warm-toned light source, but most people probably don't care. For all the importance light has to our lives, we Americans are surprisingly uncritical and accepting of a wide range of light quality, from the harsh glare of mercury and sodium lamps to the inefficient but friendly glow of the cheap 60-watt bulb. I'm not particularly looking forward to getting rid of the incandescent bulbs in my office that I installed specially as a kind of protest against the harsh fluorescent glare of the standard-issue tubes in the ceiling. But when it gets to the point where I have to do it, I hope I can buy some fluorescent replacements that mimic that warm-toned glow, even if I know the technology isn't the same.
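That spectral balance can be made concrete with Planck's blackbody law: a filament at roughly 2700 K emits proportionally far more red light than a cool-white source at around 4100 K. Here is a short Python sketch; both temperatures are typical assumed values, not manufacturer specifications.

import math

# Planck's law: spectral radiance of a blackbody at temperature T
# (kelvin) and wavelength lam (meters). Used here only to compare the
# relative shape of warm versus cool light sources.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam: float, T: float) -> float:
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * T)) - 1)

RED, BLUE = 650e-9, 450e-9  # representative wavelengths, meters

for T in (2700, 4100):  # warm filament vs. cool white, assumed values
    ratio = planck(RED, T) / planck(BLUE, T)
    print(f"{T} K source: red/blue intensity ratio = {ratio:.1f}")

Running this gives a red-to-blue ratio of roughly 6 at 2700 K but under 2 at 4100 K, which is the "warm glow" in numerical form.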
Sources: The New York Times article describing the light-bulb portion of the energy bill and its consequences can be found at http://www.nytimes.com/2007/12/22/business/22light.html. A February 2007 news item describing General Electric's announcement of high-efficiency incandescent lamp technology (though not giving any technical details) is at http://www.greenbiz.com/news/news_third.cfm?NewsID=34635.
Monday, December 17, 2007
Lead in the Christmas Tree Lights—When Caution Becomes Paranoia
Who would have thought? Lurking there amid the gaily colored balls, the fresh-smelling piney-woods aroma of the Douglas fir, and the brilliant sparks of light twinkling throughout, is the silent enemy: lead. Or at least, something like that must have been going through the mind of the reader who wrote in to the Austin American-Statesman after she read a caution tag on her string of Christmas-tree lights. According to her, it said "Handling these lights will expose you to lead, which has been shown to cause birth defects." Panicked, she rushed back to the store where she bought them to see if she could find some lead-free ones, but "ALL of them had labels stating that they were coated in lead! This is terrifying news for a young woman who is planning to start a family!"
The guy who writes the advice column in which this tragic screed appeared said not to worry, but be sure and wash your hands after handling the lights. He based his advice on information from Douglas Borys, who directs something called the Central Texas Poison Center.
In responding to the woman's plight, Mr. Borys faced a problem that engineers have to deal with too: how to talk about risk in a way that is both technically accurate and understandable and usable by the general public. We have to negotiate a careful passage between the rock of purely accurate technological gibberish, and the hard place of telling people there's nothing to worry about at all.
In the case of lead, there is no doubt that enough lead in the system of a child, or the child's mother before it is born, can cause real harm. The question is, how much is "enough"?
Well, going to the technical extreme, the U. S. Centers for Disease Control and Prevention issued a report in 2005 supporting the existing "level of concern" that a child's blood not contain more than 10 micrograms of lead per deciliter (abbreviated as 10 µg/dL). No studies have shown consistent, definitive harm to children with that low an amount of lead in their systems. Just to give you an idea of how low this is, the typical adult in the U. S. has between 1 and 5 µg/dL of lead in their blood, according to a 1994 report. The concern about pregnant (or potentially pregnant) women getting lead in their systems is that the fetus is abnormally sensitive to lead compared to older children and adults, although exactly how much more isn't clear, since we obviously can't do controlled experiments on pregnant women to find out.
Now if you tried to print the preceding paragraph in a daily paper, or a blog for general consumption, or (perish the thought!) read it on the TV news, you'd probably get fired. Why? Because using phrases like "micrograms per deciliter" has the same effect on most U. S. audiences as a momentary lapse into Farsi. People don't understand it and tune you out. But unfortunately, if you want to talk about scientifically indisputable facts, you have to start with nuts-and-bolts things like how many atoms of lead are in a person, and where they came from. These are things that scientists can measure and quantify, but the general public cannot understand them, at least not without a lot of help. So it all has to be interpreted.
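As an illustration of just how unhelpful the raw nuts-and-bolts numbers are without interpretation, here is a short Python calculation of roughly how many lead atoms the 10 µg/dL level of concern implies, assuming a typical adult blood volume of about five liters.

# How many atoms of lead correspond to the CDC "level of concern"?
# The blood volume is an assumed typical adult value.

AVOGADRO = 6.022e23       # atoms per mole
LEAD_MOLAR_MASS = 207.2   # grams per mole
LEVEL_OF_CONCERN = 10e-6  # 10 micrograms per deciliter, in grams
BLOOD_VOLUME_DL = 50      # about 5 liters of blood = 50 deciliters

total_grams = LEVEL_OF_CONCERN * BLOOD_VOLUME_DL
atoms = total_grams / LEAD_MOLAR_MASS * AVOGADRO
print(f"Total lead in blood: {total_grams * 1e6:.0f} micrograms")
print(f"That is roughly {atoms:.2e} atoms of lead")
# About 1.5e18 atoms: an astronomically large count, yet a medically
# tiny mass - exactly why raw numbers need interpreting for the public.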
So to go to the other extreme of over-interpretation, the expert from the poison center could have said something like, "Aaahh, fuggedaboudit! Do you smoke? Does your house have old lead paint? Do you ever drive without seatbelts, or talk on your cell phone and drive at the same time? Are you significantly overweight? If any of these things is true, you're far more likely to die from one of them than from any possible harm that might come to you or your hypothetical children from handling Christmas-tree lights with a tiny bit of lead at each solder joint, covered up underneath insulation and probably not accessible to the consumer at all under any normal circumstance."
In saying these things, the expert would have been entirely correct, but probably would have come across as less than sympathetic, shall we say. A Time Magazine article back in November 2006 pointed out that because of the way our brains process information, we tend to overreact to certain kinds of hazards and ignore others that we'd be better off paying attention to. Unusual hazards, and dangers that take a long time to show their insidious effects, worry us more than things we're used to or things that get us all at once (like heart attacks or car wrecks). The woman's worry fits both of these categories: the last thing she was thinking about as she decorated her Christmas tree was exposing herself to a poisoning hazard, and lead poisoning takes a while to show its effects.
As far as the expert's advice goes, I'd say he walked a reasonable line between the two extremes. Giving people something to do about a hazard (such as handwashing) always helps psychologically, even though in this case there was essentially no hazard in the first place. And blowing off the danger altogether is generally regarded as irresponsible, because one of the iron-clad rules of technical discourse is that nothing is entirely "safe."
Well, here's hoping that your thoughts of Christmas and the holiday season will be uncontaminated by worries about lead or any other poison—chemical, mental, or otherwise.
Sources: The column "Question Everything" by Peter Mongillo appeared in the Dec. 17, 2007 edition of the Austin American-Statesman. The online edition of Time Magazine for November 2006 carried the article "How Americans Are Living Dangerously" by Jeffrey Kluger at http://www.time.com/time/magazine/article/0,9171,1562978-1,00.html. And the U. S. Centers for Disease Control and Prevention carries numerous technical articles on lead hazards and prevention, including a survey of blood lead levels at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5420a5.htm#tab1.
Monday, December 10, 2007
The Human Side of Automated Driving
The graphic caught my eye. It showed a 1950s-mom type looking alarmed as she sat beside a futuristic robot driving an equally improbable-looking car. The headline? "In the Future, Smart People Will Let Cars Take Control." Which implies, of course, that only dumb people won't. But I'm not sure that's what the author had in mind.
John Tierney wrote in last week's online edition of the New York Times that we are getting closer each year to the point where completely automated control of automobiles in realistic driving situations will become a reality, at least from the technological point of view. The Defense Advanced Research Projects Agency has been running driverless-car races, its "Grand Challenge" series, since 2004. In that first year, despite a relatively unobstructed course through the Mojave Desert, none of the vehicles got farther than about seven miles before breaking down, crashing, or otherwise dropping out of the race. But this year, six cars finished a much more challenging sixty-mile urban course that included live traffic. Experts say that in five to fifteen years, using technologies ranging from millimeter-wave radar to GPS and artificial-intelligence decision systems, it will be both practical and safe to hand control of a properly equipped vehicle over to the equivalent of a robot driver for a good part of many auto trips. But will we?
There is that in humans which is glad for help, but rebels at a complete takeover. While we have been smoothly adapting to incremental automation of cars for decades, a complete takeover is a different matter. Almost nobody objected in the 1910s to the introduction of what was then called the "self-starter," which replaced turning a crank in front of your car with turning an ignition key. (The only people who grumbled about it back then were men who liked the fact that most women were simply not strong enough to start a car the old-fashioned way, and therefore couldn't drive!) Automatic transmissions came next, and have taken over the U. S. market, although drivers in much of the rest of the world (again, men, mostly) still take pride in shifting for themselves. Power steering, power brakes, anti-lock braking, and cruise control are all automatic systems that we have adopted almost without a quibble. But I think most people will at least stop to think before they press a button that relinquishes total control of the vehicle to a computer, or robot, or servomechanism, or whatever we'll choose to call it.
And well they might hesitate. Tierney notes that automatically piloted vehicles can safely follow one another much more closely than cars driven by humans. He cites a recent experiment in which engineers directed a group of driverless passenger cars to drive at 65 m. p. h. spaced just fifteen feet apart, with no untoward results. This has obvious positive implications for increasing the capacity of existing freeways. But he doesn't say whether the interstate was cleared of all other traffic for this experiment. As for safety, automatic vehicle control doesn't have to be perfect—only better than what we have now, a system in which over 42,000 people died on U. S. roadways in 2006, the vast majority in accidents due to human error rather than mechanical failure.
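A quick calculation shows why that fifteen-foot spacing excites traffic engineers. This Python sketch compares lane capacity for the automated platoon against a human driver keeping the conventional two-second following distance; the fifteen-foot car length is an assumed typical value.

# Lane capacity at 65 mph with 15-foot gaps, versus a human driver
# keeping a conventional two-second following distance.

MPH_TO_FPS = 5280 / 3600      # feet per second per mile per hour
SPEED_FPS = 65 * MPH_TO_FPS   # about 95 ft/s
CAR_LENGTH_FT = 15            # assumed typical car length

def vehicles_per_hour(gap_ft: float) -> float:
    headway_ft = gap_ft + CAR_LENGTH_FT  # road space each car occupies
    return SPEED_FPS / headway_ft * 3600

robot_gap = 15                # the experiment's spacing
human_gap = 2 * SPEED_FPS     # two-second rule: roughly 190 feet

print(f"Automated platoon: {vehicles_per_hour(robot_gap):,.0f} cars/hour/lane")
print(f"Human drivers:     {vehicles_per_hour(human_gap):,.0f} cars/hour/lane")

The automated platoon works out to roughly seven times the throughput of careful human drivers, which is the "capacity" argument in miniature.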
If we are going to go to totally automatic control for automobiles, it seems like there will have to be a systematic effort to organize the times, places, and conditions under which this kind of control can be used. You can bet that the fifteen-foot-spacing experiment would have failed spectacularly if even one of those cars were driven by a human. The great virtue of machine control is that it's much more predictable than humans, who can be distracted by anything from a stray wasp to a cell phone call and do anything or nothing as a consequence. One expert imagines that we will have special total-control lanes on freeways much like high-occupancy-vehicle lanes today, and no manually controlled vehicles will be allowed inside such lanes.
That's one way to do it, certainly. But I for one look forward to the day when we have door-to-door robot chauffeurs. I would like nothing better than to get in my car, program in my destination, and then sit back and read or work or listen to music or enjoy the scenery, or in fact any of the other things I can do right now on a train ride, which is at present practical transportation in the U. S. only in the northeast corridor. For decades we have fussed about the urban sprawl caused by the automobile and how much better things are handled (according to some) when public transportation is used instead of cars. It may be that automatic vehicle control will provide some kind of third way that will alleviate at least some of the problems caused by the automobile. If we can let go of the control thing, maybe we can do something similar with the ownership thing too, although as long as people want to work in cities and live in the country, you're going to have to find some way to get millions of bodies into the city in the morning and back to the country in the evening. But if we could space vehicles safely only fifteen feet apart and let them go sixty or eighty m. p. h. on the freeways, and come up with some software that would deal with traffic jams and other unpredictable but inevitable problems, commuting might become safer, more fuel-efficient, and more pleasant.
Before many more of these futuristic visions happen, though, we are going to have to change some of our attitudes. There are sure to be a few drive-it-myself-or-nothing folks who will say that we'll have to pry their cold, dead fingers off the steering wheel before we can get them to agree to use totally automated driving. And if the thing isn't handled well politically, such a minority could spoil a potentially good thing for the rest of us. The right to drive your own car with your own hands on the steering wheel is one of those assumed rights that we accept almost without thinking about it, but if the day comes when it is more of a hazard than a public good, we may have to think about it twice—and then give it up.
Sources: The New York Times online article referred to appeared on Dec. 4, 2007 at http://www.nytimes.com/2007/12/04/science/04tier.html. Tierney refers to a University of California Transportation Center article by Steven Shladover published in the Spring 2000 edition of the center's Access Magazine (http://www.uctc.net/access/access16lite.pdf).
Monday, December 03, 2007
Can Robots Be Ethical? Continued
Last week I considered the proposal of sci-fi writer Robert Sawyer, who wants robots recognized as moral agents, with rights and responsibilities. He looks forward to the day when "biological and artificial beings can share this world as equals." I said that this week I would take up the distinction between good and necessary laws regulating the development and use of robots as robots, and the unnecessary and pernicious idea of treating robots as autonomous moral agents. To do that, I'd like to look at what Sawyer means by "equal."
I think the sense in which he uses that word is the same sense that is used in the Declaration of Independence, which says that "all men are created equal." That was regarded by the writers of the Declaration as a self-evident truth, that is, one so plainly true that it needed no supporting evidence. It is equally plain and obvious that "equal" does not mean "identical." Then as now, people are born with different physical and mental endowments, and so what the Founders meant by "equal" must be something other than "identical in every respect."
What I believe they meant is that, as human beings created by God, all people deserve equal treatment in certain broad respects, such as the rights to life, liberty, and the pursuit of happiness. That is probably what Sawyer means by "equal" too. Although the origin and nature of robots will always be very different from those of human beings, he urges us to treat robots as equals under law.
I suspect Sawyer wants us to view this question in the light of what might seem to be its great historical parallel, that is, slavery. Under that institution, some human beings treated other human beings as though they were machines: buying and selling them and taking the fruits of their labor without just compensation. The deep wrong in all this is that slaves are human beings too, and it took hundreds of years for Western societies to accept that fact and act on it. But acting on it required a solid conviction that there was something special and distinct about human beings, something that the abolition of slavery acknowledged.
Robots are not human beings. Nothing that can ever happen will change that fact—no advances in technology, no degradation in the perception of what is human or what is machine, nothing. It is an objective fact, a self-evident truth. But just as human society took a great step forward in admitting that slaves were people and not machines, we have the freedom to take a great step backward by deluding ourselves that people are just machines. Following Sawyer's ideas would take us down that path. Why?
Already, it is a commonly understood assumption among many educated and professional classes (but rarely stated in so many words) that there is no essential difference between humans and machines. There are differences of degree—the human mind, for example, is superior to computers in some ways but inferior in other ways. But according to this view, humans are just physical systems following the laws of physics exactly like machines do, and if we could ever build a machine with the software and hardware that could simulate human life, then we would have created a human being, not just a simulation.
What Sawyer is asking us to do is to acknowledge that point of view explicitly. Just as the recognition of the humanity of slaves led to the abolition of slavery, the recognition of the machine nature of humanity will lead to the equality of robots and human beings. But look who moved this time. In the first case, we raised the slaves up to the level of fully privileged human beings. But in the second, we propose to lower mankind to the level of just another machine. There is no other alternative, because admitting machines to the rights and responsibilities of humans implicitly acknowledges that humans have no special characteristic that distinguishes them from machines.
Would you like to be treated like a machine? Even a machine with "human" rights? Of course not. Well, then, how would you like to work for a machine? Or owe money to a machine? Or be arrested, tried, and convicted by a machine? Or be ruled by a machine? If we give machines the same rights as humans, all these things not only may, but must come true. Otherwise we have not fully given robots the same rights and responsibilities as humans.
There is a reason that most science fiction dealing with robots portrays the dark side of what might happen if robots managed to escape the full control of humans (or even if they don't). All good fiction is moral, and the engine that drives robot-dominated dystopias is the horror we feel at witnessing the commission of wrongs on a massive scale. Add to that horror the irony that these stories always begin when humans try to achieve something good with robots (even if it is a selfish good), and you have the makings of great, or at least entertaining, stories. But we want them to stay that way—just stories, not reality.
Artists often serve as prophets in a culture, not in any necessarily mystical sense, but in the sense that they can imagine the future outcomes of trends that the rest of us less sensitive folk perceive only dimly, if at all. We should heed the warnings of a succession of science fiction writers from Isaac Asimov to Arthur C. Clarke and onward, that there is great danger in granting too much autonomy, privileges, and yes, equality, to robots. In common with desires of all kinds, robots make good slaves but bad masters. As progress in robotic technology continues, a good body of law regulating the design and use of robots will be needed. But of supreme importance is the philosophy upon which this body of law is erected. If at the start we acknowledge that robots are in principle just advanced cybernetic control systems, essentially no different than a thermostat in your house or the cruise control on your car, then the safety and convenience of human beings will come first in this body of law, and we can employ increasingly sophisticated robots in the future without fear. But if the laws are built upon the wrong foundation—namely, a theoretical idea that robots and humans are the same kind of entity—then we can look forward to the day that some of the worst of science fiction's robotic dystopias will happen for real.
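To make the "advanced cybernetic control system" point concrete, here is a minimal Python sketch of the humble thermostat's feedback loop: sense, compare to a setpoint, act. On this view a robot differs from it in degree, not in kind. The temperatures and the crude heating model are invented for illustration.

# A thermostat as the simplest cybernetic control loop: sense the
# temperature, compare it to a setpoint, switch the heater. All the
# numbers here are made up for illustration.

SETPOINT = 68.0    # desired room temperature, deg F
HYSTERESIS = 1.0   # deadband to avoid rapid on/off cycling

def thermostat_step(temp: float, heater_on: bool) -> bool:
    """Decide the heater state from the measured temperature."""
    if temp < SETPOINT - HYSTERESIS:
        return True
    if temp > SETPOINT + HYSTERESIS:
        return False
    return heater_on  # inside the deadband: keep the current state

# Simulate a cold room under this control law.
temp, heater = 60.0, False
for minute in range(30):
    heater = thermostat_step(temp, heater)
    temp += 0.8 if heater else -0.3  # crude assumed heating/cooling rates
    print(f"minute {minute:2d}: {temp:5.1f} F, heater {'ON' if heater else 'off'}")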
Sources: Besides last week's blog on this topic, I have written an essay ("Sociable Machines?") on the philosophical basis of the distinction between humans and machines, which I will provide upon request to my email address (available at the Texas State "people search" function on the Texas State University website www.txstate.edu).