Thursday, December 28, 2006

Like any other profession, engineering has its particular proverbs and sayings. One of my favorites is, "If it ain't broke, don't fix it." As with most proverbs, this one captures only part of a complex situation. But when I look at the potential and actual problems we have these days with the U. S. electric power system, I wish more people in authority had paid attention to that particular proverb.
Electricity is an unusual commodity in that it must be produced exactly as fast as it is sold. If a million people suddenly turn on their lights all at once, somebody somewhere has to supply that much more electricity in milliseconds, or else there is big trouble for everybody on the power distribution network. For lights to come on reliably and stay on all across the country, the systems of generating plants, transmission lines, distribution lines, and monitoring and control equipment have to work in a smooth, coordinated way. And, somebody has to pay for it all.
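To get a feel for how tight that balancing act is, here is a minimal sketch in Python of the aggregate "swing equation" that power engineers use to model grid frequency. All the constants (inertia, governor droop, response lag) are textbook-typical values I have assumed for illustration, not figures for any real grid: when load suddenly exceeds generation, frequency sags within seconds until governors ramp generation back up to match.

    # Toy model: a sudden 1% load increase on a 60 Hz system. Generation
    # must catch up or frequency keeps falling; governors provide that
    # correction automatically. All constants are illustrative assumptions.

    F0 = 60.0      # nominal frequency, Hz
    H = 5.0        # aggregate inertia constant, seconds (assumed)
    DROOP = 0.05   # 5% governor droop (assumed)
    TAU = 5.0      # governor response time constant, seconds (assumed)
    DT = 0.01      # integration step, seconds

    def simulate(load_step_pu=0.01, t_end=20.0):
        """Euler-integrate the aggregate swing equation after a load step."""
        f, gen_pu, t = F0, 0.0, 0.0  # frequency; extra governor output (per unit)
        while t < t_end:
            imbalance = gen_pu - load_step_pu        # generation minus load, per unit
            f += (imbalance * F0 / (2.0 * H)) * DT   # swing equation
            # Primary control: governors raise output as frequency falls.
            target = -(f - F0) / (F0 * DROOP)
            gen_pu += (target - gen_pu) * DT / TAU
            t += DT
        return f

    print(f"Frequency ~20 s after a 1% load step: {simulate():.3f} Hz")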
From an economic point of view, approaches to electric utility management and financing lie somewhere between two extremes. At one extreme is completely centralized control, billing, and coordination, performed in many countries by the national government. France is an example of this approach. Large, complex electric systems are a natural fit for large, complex government bureaucracies, and in the hands of competent, dedicated civil servants, government-owned and -operated utilities can be a model of efficiency and advanced technology. Government control and ownership can also provide the stability needed for long-term research and development. This is one reason that France leads the world in the development of safe, reliable nuclear power, which provides most of the electricity in that country.
The other extreme can be found in third-world countries where there is little or no effective government regulation of utilities, whether through incompetence, war, or other causes. In this situation, private enterprise rushes in to fill the gap, and you have private "utilities"—often nothing more than a few guys with a generator and some wire—selling electricity for whatever the market will bear. The result is a spotty, uncoordinated market in which the availability and reliability of electricity depend on where you live, and large portions of the market (in rural or dangerous areas) typically are not served at all.
In the U. S., we have historically swung from near one extreme to the other. As electric utilities grew in the late 1800s and early 1900s, they started out as independent companies. But the technical economies of scale quickly became apparent, and the Great Depression brought on tremendous consolidation into a few large firms, which were then taken under the regulatory wing of federal and state governments. What we had then was a kind of benevolent dictatorship of the industry by government, in which private investors ceded much control to the various regulatory commissions, but received in turn a reliable, if relatively small, return on their investment.
This state of affairs prevailed through the 1970s, whereupon various political forces began a move toward deregulation. The record of deregulation is spotty at best, probably because it represents an attempt to have our regulatory cake and eat it too. No one wants the electricity market here to devolve to the haphazard free-for-all that it is in places like Iraq, or even India, where electricity theft is as common as beggary. So rightly, some regulations must be left in place in order to protect the interests of those who cannot protect themselves, which in the case of electric utilities means most of us.
The most noteworthy recent disasters having to do with deregulation were the disruptions and price explosions in California a few years ago, caused in large part by Enron and other trading companies that manipulated the market during hot summers of high demand. Even if the loopholes allowing such abuses are closed and inadequate generating capacity is addressed with more power plants, however, many problems remain. A recent New York Times article points out that because the existing rules give power companies disincentives to spend money on transmission and distribution equipment (power lines), certain parts of the country have to pay exorbitant rates in the form of "congestion charges."
The basic problem is that there are not enough lines to carry cheap power from where it is generated to where it is needed. Somebody would have to pay to build them, and somebody else would have to approve the construction. In these days of "not in my back yard" attitudes, it is increasingly hard to construct new power lines anywhere, even in rural areas. The net result of these complications is that as demand for power increases, more and more areas may find themselves starved for power and paying rates as high as twice those of surrounding regions.
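To make the arithmetic behind congestion charges concrete, here is a back-of-the-envelope sketch in Python. Every price and line limit below is invented for illustration; the point is simply that each megawatt the bottlenecked line cannot carry must be served by expensive local generation, and the difference lands on local ratepayers.

    # Hypothetical two-region example: a city that could import cheap power
    # has to run costly local plants for whatever its lines cannot carry.

    CHEAP = 30.0    # $/MWh, remote generation (assumed)
    COSTLY = 60.0   # $/MWh, local generation near the city (assumed)

    def city_energy_cost(demand_mw, line_limit_mw):
        """Hourly cost to serve the city: imports up to the line limit,
        expensive local plants for the remainder."""
        imported = min(demand_mw, line_limit_mw)
        local = demand_mw - imported
        return imported * CHEAP + local * COSTLY

    demand = 1000.0  # MW (assumed)
    free_flow = city_energy_cost(demand, line_limit_mw=1000.0)
    congested = city_energy_cost(demand, line_limit_mw=600.0)
    print(f"Hourly cost with adequate lines:      ${free_flow:,.0f}")
    print(f"Hourly cost with a 600 MW bottleneck: ${congested:,.0f}")
    print(f"Implied congestion charge:            ${congested - free_flow:,.0f} per hour")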
My personal bias is that we have gone way too far in attempts to privatize the electric utility industry. It is a business that fits better, technologically, with a centralized authority coordinating the whole. But in today's political climate, the chances of going back to a more centralized way of doing things are small. It looks like the best we can do is to continue to tinker with what regulations remain, fixing problems where pernicious disincentives appear, and keeping an eye out for Grandma and the electric heater she needs to get through the winter. But in my opinion, the whole thing wasn't broke to begin with, and the fix of deregulation didn't need to be applied the way it was.
Sources: The New York Times article on congestion charges appeared in the Dec. 13, 2006 online edition at http://www.nytimes.com/2006/12/13/business/13power.html?hp&ex=1166072400&en=dcfbff42cc8f19d4&ei=5094&partner=homepage.
Tuesday, December 19, 2006
America's Chernobyl Waiting to Happen
"Dallas, Texas, Mar. 30, 2005 (AP) --- An apparent nuclear explosion in Amarillo, Texas has cut off all communications with the West Texas city and regions in a fifteen-mile radius around the blast. Eyewitness accounts by airline pilots in the vicinity report an 'incredible flash' followed by a mushroom cloud reaching at least 35,000 feet. Speculation on the source of the explosion has centered on Amarillo's Pantex plant, the nation's only facility for construction and disassembly of nuclear weapons."
In case you think you missed something a year ago last March, the news item above is fiction. But according to some sources, it is plausible. It could have happened. And there is reason to believe that unless some serious housecleaning takes place in Amarillo, the chances that something like this might happen in the future are higher than any of us would like.
The end of the Cold War brought hopes that instead of piling up megaton after megaton of mutually assured destructive power in the shape of thermonuclear weapons, the U. S. and the Soviet Union (or what was left of it) would begin to disassemble their nuclear stockpiles to make the world a safer place. Over the past fifteen years, international agreements have been reached to do exactly that. From a peak of over 30,000 nuclear warheads in 1965, the U. S. stockpile has declined to just a little over 10,000 as of 2002. And here is where the engineering issues come in, because for every downtick of that number, somebody somewhere has to disassemble a nuclear warhead.
A nuclear bomb or missile is not something that you just throw on the surplus market to dispose of. First it has to be rendered incapable of exploding. Then the plutonium and other dangerous materials have to be removed in a way that is safe both for the technicians doing the work and for the surrounding countryside and population. As you might imagine, these operations are difficult, dangerous, and require secret specialized knowledge. For more than thirty years, the only facility in the U. S. where nuclear weapons are assembled or disassembled has been the Pantex plant outside Amarillo, Texas. It is currently operated by a consortium of private contractors including BWXT, Honeywell, and Bechtel, and works exclusively for the federal government, specifically the Department of Energy. If you want a nuclear weapon taken apart, you go to Pantex, period. And therein lies a potential problem.
Where I teach engineering, nuclear weapon disassembler is not a job that comes up a lot when students tell me what they'd like to be when they graduate. I imagine that it is hard to recruit and retain people who are both willing and qualified to do such work. And at the same time, it is not the kind of growth industry that attracts a lot of investment. So it is plausible to me that as the demand for disassembly increases, the corporate bosses in charge of the operation might tend to skimp on things like maintenance, safety training, and the hiring of additional staff. That is the picture that emerges from an anonymous letter made public recently by the Project on Government Oversight, a government watchdog group.
Anonymous letters can contain exaggerations, but what is not in dispute is that on three occasions beginning Mar. 30, 2005, someone at Pantex tried to disassemble a nuclear weapon in a way that set off all kinds of alarms in the minds of experts who know the details. I'm speculating at this point, but as I read between the lines and use my knowledge of 1965-era technology, something like this may have happened.
A nuclear weapon built in 1965 probably contained no computers, relatively few transistors, and a good many vacuum tubes. Any safety interlocks to prevent accidental detonation were probably mechanical as well as electronic, consisting of switches, relays, and possibly some rudimentary transistor circuits. But somewhere physically inside the long cylindrical structure lies a terminal which, if contacted by a grounded piece of metal, could conceivably set the whole thing off and vaporize Amarillo and the surrounding area.
A piece of equipment that has been sitting around since 1965 in a cold, drafty missile silo is probably a little corroded here and there. Screws and plugs that used to come apart easily are now stubborn or even frozen in place. The technician in charge of beginning disassembly of this baby probably tried all the standard approaches to unscrewing a vital part in order to disable it, without success. At that point, desperation overcame judgment. The official news release from the National Nuclear Security Administration puts it in bureaucratese thus: "This includes the failures to adhere to limits in the force applied to the weapon assembly and a Technical Safety Requirement violation associated with the use of a tool that was explicitly forbidden from use as stated in a Justification for Continued Operation." Maybe he whammed at it with a big hammer. Maybe he tried drilling out a stuck bolt with an electric drill. We may never know. But what we do know is that the reason for all these Technical Safety Requirements is that if you violate them, you edge closer to setting off an explosion of some kind.
Not every explosion that could happen at Pantex would be The Big One with the mushroom cloud and a megaton of energy. The way nuclear weapons work is by using cleverly designed pieces of conventional high explosive to create configurations that favor the initiation of the nuclear chain reactions that produce the big boom. A lot of things have to go right (or wrong, depending on your point of view) in order for a full-scale nuclear explosion to happen. Kim Jong Il of North Korea found this out not too long ago when his nuclear test fizzled rather than boomed. But even if nothing nuclear happens when the conventional explosives go off, you've got a fine mess on your hands: probably a few people killed, expensive secret equipment destroyed, and worst from an environmental viewpoint, plutonium or other hazardous nuclear material spread all over the place, including the atmosphere.
This is the general sort of thing that happened at Chernobyl, Ukraine in 1986, when some technicians experimenting late at night with a badly designed nuclear power plant managed to blow it up. The bald-faced coverup that the USSR tried to mount in the disaster's aftermath may have contributed to the regime's ultimate downfall. So even if the worst-case scenario of a nuclear explosion never happens at Pantex, a "small" explosion of conventional explosives could cause a release of nuclear material that could harm thousands or millions of people downwind. Where I happen to live, incidentally.
I hope the concerns pointed out by the Pantex employees who apparently wrote the anonymous letter are exaggerated. I hope that the statement from Pantex's official website that "[t]here is no credible scenario at Pantex in which an accident can result in a nuclear detonation" is true. But incredible things do happen from time to time. Let's just hope they don't happen at Pantex any time soon.
Sources: The Project on Government Oversight webpage citing the Pantex employees' anonymous letter is at http://www.pogo.org/p/homeland/hl-061201-bodman.html. The official Pantex website statement about a nuclear explosion not being a credible scenario is at http://www.pantex.com/currentnews/factSheets.html. Statistics on the U. S. nuclear weapons stockpile are from Wikipedia's article on "United States and weapons of mass destruction."
Tuesday, December 12, 2006
Hacker Psych 101
Well, it's happened again. The Los Angeles Times reports that for more than a year prior to Nov. 21, 2006, somebody was siphoning personal information such as Social Security numbers from a UCLA database holding records on more than 800,000 students and faculty. Eventually, the system administrators noticed some unusual activity and blocked the intrusion, but by the time they closed the door, a great many horses had escaped the barn.
This is one of the biggest recent breaches of data security at a university, but it is by no means the only one. The same article reports that 29 security breaches at other universities during the first six months of this year affected about 845,000 people.
Why is hacking so common? This is a profound question that goes to the heart of the nature of evil. It's good to start with the principle that, no matter how twisted, perverse, or just plain stupid a wrong action looks to observers, the person doing it sees something good about it.
For example, it's not a big mystery why people rob banks. In the famous words attributed to the 1930s bank robber Willie Sutton, "Because that's where the money is." To a bank robber, simply going in and taking money by force is a way to obtain what he views as good, namely, money.
There are hackers whose motivation is essentially no different from Willie Sutton's. Identity theft turns out to be one of the easiest types of crime for them to commit, and so they turn to hacking, not because they especially enjoy it, but because it leads to a result they want: data they can use to masquerade as somebody else in order to obtain money and goods by fraud. This motivation, although deplorable, is understandable, and fits into our historical understanding of the criminal mind, such as it is. As technology has advanced, so have the technical abilities of criminals. At this point it isn't clear whether money was the motive behind the UCLA breach or not. Because the breach had gone on so long without notable evidence of identity theft, it's possible that this was a hack for the heck of it.
Many, if not most, hacks fall into this second category. For insight into why people do these things when they're not making money or profiting in some other way, we can turn to Sarah Gordon, a senior research fellow at Symantec Security Response.
Gordon's specialty is the ethics and psychology of hacking. In her job at Symantec, she has encountered just about every kind of hack and hacker there is. In an interview published in 2003, she says that the reason many people feel little or no guilt (at least not enough for them to stop) when they write viruses and do hacks is that they don't consider computers to be part of the real world. Speaking about school-age children learning to use computers for the first time, she said, "They don't have the same morality in the virtual world as they have in the real world because they don't think computers are part of the real world."
Gordon says that parents and teachers should share part of the blame. When a child steals someone's password and uses it, for example, a teacher could ask, "Would you steal Johnny's house key and use it to poke around in his bedroom?" Presumably not. But the analogy may be a difficult one for children to make—and many adults, for that matter.
Gordon thinks it may take a generation or two for our culture's prevailing morality to catch up with the hyper-speed advances in computer technology. She sees some progress in the U. S., noting that there is a new reluctance to post viruses online, whereas a few years ago no one thought there was anything wrong with the practice. Still, she thinks that hacking and virus-writing are acts of rebellion that remain popular in countries where young people are experiencing computers and networks for the first time, and rebellion is just part of human nature. A boy who grew up in a thatched hut with no running water and then moves to a city may find that disrupting the operations of thousands of computers halfway across the world with a few keystrokes gives him a power buzz he can get nowhere else in his life.
It seems to me that the anonymity provided by the technical nature of computer networks also contributes to the problem. Some say that a test of true morality is to ask yourself whether you would do a bad thing if you were sure you'd never get caught. The nature of computer networks ensures that very few hackers and virus writers do get caught, at least not without a lot of trouble. And it looks like lots of people fail that kind of test.
Well, I'm a teacher, so if there are any students reading this, I'm here to tell you that just because you can hide behind a computer screen, you shouldn't abandon the Golden Rule. But it may take a few years for the message to soak in. At the same time, I recognize a broader generalization of Sarah Gordon's notion that rebellion is part of human nature: evil and sin are part of human nature. I think this was a feature of humanity that many computer scientists neglected to take into consideration way back when they were establishing the foundations of some very pervasive systems and protocols that would cost billions of dollars to change today. Eventually things will get better, but it may take a generation or more before password theft and bicycle theft are viewed as the same kind of thing by most people.
Sources: The Dec. 12 L. A. Times story on the UCLA security breach is at http://www.latimes.com/news/local/la-me-ucla12dec12,0,7111141.story?coll=la-home-headlines. The interview with Sarah Gordon is at http://news.com.com/2008-1082-829812.html.
Tuesday, December 05, 2006
Superman Works for Airport Security Now
I've had occasion to mention Superman before ("Sniffing Through Your Wallet with RFID", Oct. 25, 2006), but my reference then to his X-ray vision was in jest. Well, a news item from the U. S. Transportation Security Administration says that in effect, they've hired Superman (at least, the mechanical equivalent of his X-ray vision ability) to watch passengers at Phoenix's Sky Harbor International Airport. The effect is to allow strip searches without stripping.
According to the Dec. 1 Associated Press news item, in the initial tests of the system, which uses a type of X-ray technology called "backscatter," security officials will examine only people who fail the primary screening. These passengers will be offered the choice of either a pat-down search or examination by the backscatter machine. The images, which reportedly are blurred by software in "certain areas," are nevertheless detailed enough to show items as small as jewelry next to the body. The technology is already in use in prisons, and the intensity of the X-rays is much lower than that of a typical medical X-ray.
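Some rough numbers put that last claim in perspective. The effective doses below are commonly cited order-of-magnitude figures, not data from the AP article, so treat the comparison as an illustrative sketch only.

    # Approximate effective doses (order-of-magnitude, assumed for
    # illustration) in microsieverts; ratios are relative to one scan.

    DOSES_USV = {
        "one backscatter screening": 0.1,
        "one chest X-ray": 100.0,
        "one year of natural background": 3000.0,
    }

    scan = DOSES_USV["one backscatter screening"]
    for source, dose in DOSES_USV.items():
        print(f"{source:32s} {dose:8.1f} uSv  ({dose / scan:8.0f}x one scan)")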
When I read this story, it brought back memories of my days as a junior terrorist. Before you get up from your computer to call the FBI, let me explain. In the 1990s, I did some consulting work for a company that was developing a contraband detection system using short radio waves called millimeter waves. It turns out that the human body emits these waves just because it's warm. With a sensitive enough receiver, you can detect the waves coming through clothing, and if you are wearing something like plastic explosive under your shirt, its shadow will show up in the image.
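For the physics-minded, here is a minimal sketch of the effect our imager exploited, using the Rayleigh-Jeans approximation (a good one at these frequencies and temperatures, since the photon energy is far below kT). The 94 GHz operating frequency and the object's reflectivity are my illustrative assumptions, not figures from that project.

    # Passive millimeter-wave imaging in one calculation: warm skin "glows"
    # at radio frequencies, while an object over it partly reflects the
    # cooler room and therefore looks colder to the imager.

    K_B = 1.380649e-23   # Boltzmann constant, J/K
    C = 2.998e8          # speed of light, m/s
    NU = 94e9            # imaging frequency, Hz (assumed)

    def rayleigh_jeans(temp_k):
        """Spectral radiance, W / (m^2 * Hz * sr), valid for h*nu << k*T."""
        return 2.0 * K_B * temp_k * NU**2 / C**2

    skin, room = 310.0, 293.0   # kelvin
    reflectivity = 0.3          # of the concealed object (assumed)
    # Effective brightness temperature where the object covers the skin:
    covered = (1 - reflectivity) * skin + reflectivity * room
    print(f"Skin radiance at 94 GHz: {rayleigh_jeans(skin):.2e} W/m^2/Hz/sr")
    print(f"The object shows up about {skin - covered:.1f} K colder than bare skin")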
We built a system, and to test it, several of us took turns playing terrorist by wearing lumps of modeling clay and plastic pistols taped to our shirts underneath a windbreaker. It was a tedious task, because the machine took 15 minutes or more to make a decent picture and you had to hold still the whole time. The results looked like blurry photographic negatives, but you could see the outlines of the contraband clearly. You could also see the main features of the body underneath the clothing, and that led to some privacy concerns, as you might imagine. The wife of the company president volunteered to be our female subject. I never saw the resulting picture—apparently it was detailed enough to be censored. For a number of reasons, both technical and social, that particular machine never made it to market, but all this was before 9/11 and the sea change in our attitudes toward airport security that resulted.
This change in attitudes has done funny things to some people, notably Susan Hallowell, the Transportation Security Administration's security laboratory director. A picture accompanying the article shows Ms. Hallowell in the X-ray altogether, with about the same detail as a department-store mannequin from the 1950s, or a Barbie doll. I suppose Ms. Hallowell's willingness to pose was motivated by a sincere desire to increase the quality of airport security with less discomfort to passengers, but it wouldn't surprise me if her strategy backfires. If I put myself in the mindset of a middle-aged woman who faces the choice of either letting another woman do a pat-down search, or knowing that somewhere out of sight, somebody—possibly another woman but possibly not—is going to see every single bulge, sag, and fold underneath my clothes, I would choose the pat-down search every time. In fact, I'd go screaming to my Congressman to stop implementation of the backscatter system before my naked profile showed up on MySpace. Yes, the TSA says the images won't be stored or transmitted. And maybe they will be able to keep that promise. But if there's a leak somewhere—say Madonna goes through one of these things and a paparazzo manages to bribe an inspector—the whole plan could go up in political flames.
Besides which, there is a principle that is largely neglected today, but still deserves some attention: the Constitutional prohibition against unreasonable searches and seizures. The Fourth Amendment says in full, "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." I'm no Constitutional or legal scholar, and obviously some legal means has been found to get around constitutional challenges to airport security inspections. Probably the argument is, if you don't want to be searched, take the bus. But letting somebody I don't know see me without clothes, simply on the slight chance that I'm carrying a gun or a bomb, seems to cross a line that we as a nation have hesitated to cross before.
When George Orwell portrayed the ever-present unblinking eye of Big Brother in his dystopia 1984, the idea of being spied on constantly had the power to shock, because it was so novel. But today there are places in England where you can walk for many blocks and never be out of sight of security cameras. This has not destroyed England, and it has actually helped track down terrorists such as those who committed the London subway bombings. The thing we lose when one more privacy barrier comes down is so hard to describe because it's silent, has no public relations agent promoting it, and doesn't show up in the compilation of gross national products. But it's the kind of thing that you notice mainly after it's gone. And once it goes, it can be very hard to recover.
Sources: The AP article describing the Phoenix tests was carried by many media outlets, among them MyWay (http://apnews.myway.com/article/20061201/D8LO1JLO2.html). The paper describing my foray into contraband detection was entitled “Contraband detection through clothing by means of millimeter-wave imaging,” by G. R. Huguenin et al., SPIE Proc. 1942 Underground and Obscured Object Imaging and Detection, Orlando, FL, pp. 117-128, 15-16 April 1993.
Tuesday, November 28, 2006
Freeloading or Free Speech?
Say you have an old-fashioned wireline phone sitting on your back porch. One morning you wake up to see a stranger sitting there chatting away on it. You open the window and say, "Hey, buddy, that's my phone."
Covering the mouthpiece, the man replies, "Don't worry, it's a local call. Won't cost you a thing."
You probably wouldn't just shrug your shoulders and go back to bed. But a recent New York Times article described how a reporter did the wireless equivalent to a San Francisco man named Gary Schaffer. Using a new Wi-Fi-equipped mobile phone, the reporter made a free call over Mr. Schaffer's home wireless Internet connection, hung up, then identified himself and asked Mr. Schaffer whether it was okay with him. “If you’re a friend, I’d say, let’s give it a try,” he said, but he'd be uncomfortable if strangers tried to sponge that way.
Among other things, engineering ethics deals with ownership, property rights, and the just distribution of resources and costs involved in technology. But as communications systems blur distinctions that were once clear and unambiguous, we may have to rethink some assumptions that have been around so long, we've forgotten them.
Take the example of "plain old telephone service" (POTS, for short). For the first century or so of phone service, an individual subscriber in the U. S. leased (not owned) a considerable pile of fairly costly hardware from The Phone Company, which was usually the Bell System. The dial set, twisted-pair wires, network interface, and lines going all the way back to the central exchange building several miles away were solid, immobile, physical objects. Ownership and operating rights were clear-cut, and it was a simple matter, relatively speaking, to regulate the industry so that investors received a reasonable return on the hardware and software installed over the decades, and consumers were able to afford POTS at what passed for a reasonable cost.
Since then, technical advances have made the incremental cost of a simple phone call positively microscopic compared to what it used to be. Much of the turmoil in the telecom business in the last fifteen years or so has resulted from various attempts to deal with this fact. What is a fair charge for something that costs almost nothing? Of course, somebody had to pay for the extensive wireless, wired, and fiber-optic networks that tie the world together, but we are very far from the simple, monolithic picture the Bell System presented as late as the 1960s. If you trace the path of a phone call or an email, your signal may pass through systems owned by dozens or hundreds of different entities, ranging from the neighbor next door to the federal government. Sorting out who should pay for what is an increasingly complex business, and one that the consumer is poorly equipped to do. But everybody can understand the lure of free phone calls. Hence the potential popularity of mobile phones that use Wi-Fi links.
We can go in several directions from here. Any activity that benefits the individual while costing him nothing extra lies open to what economists call "the tragedy of the commons." The commons was an area of communally owned land in some parts of England on which farmers could graze their cattle without charge. As time went on and population increased, overgrazing turned the commons into mudflats, putting an end to the practice.
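A tiny sketch makes the incentive problem explicit. All the numbers below are invented; the point is that each farmer pockets the whole gain from one more cow while bearing only a fraction of the damage, so the individual calculus never says stop.

    # The commons logic in miniature (all values assumed): adding a cow
    # is always individually rational, yet the herd overshoots what the
    # pasture can sustain.

    FARMERS = 10
    GAIN = 1.0       # private benefit of one more cow (assumed units)
    DAMAGE = 3.0     # total grazing cost of that cow, shared by everyone
    CAPACITY = 25    # cows the pasture can sustain (assumed)

    cows = 0
    while GAIN > DAMAGE / FARMERS:   # true for every farmer, every time
        cows += 1
        if cows > CAPACITY:
            print(f"Herd reaches {cows} cows, past capacity ({CAPACITY}):")
            print("each addition was rational; the sum ruins the commons.")
            break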
The Wi-Fi spectrum itself is a limited resource, and while it is unlicensed, it is far from completely unregulated, since the Federal Communications Commission sets the basic boundaries for its use. But if you happen to live on a busy Manhattan street and Internet-ready mobile phones become popular enough, the day might come when your home's wireless Internet connection is jammed up with chattering freeloaders, and you won't be able to use it. Or the airwaves might get so crowded that most of the phones become useless.
Fortunately, the ether recovers instantly as soon as people quit using it, and if the airwaves turned into an electronic mosh pit, unusability would soon decrease the crowding to a manageable level. This sort of thing doesn't happen to conventional cell-phone systems much because they are operated by organizations which make sure there is enough capacity in a given region to handle the anticipated number of calls.
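That self-correcting congestion is easy to model. Wi-Fi's actual CSMA/CA channel arbitration is more sophisticated than this, but the classic slotted-ALOHA formula, a standard textbook stand-in for shared-channel contention, shows both the collapse and the recovery.

    # Slotted ALOHA: with offered load G (transmission attempts per slot),
    # useful throughput is S = G * e^(-G). It peaks near G = 1 and
    # collapses beyond it; when frustrated users back off, G falls and
    # the channel recovers.

    import math

    def throughput(offered_load):
        """Fraction of slots carrying a successful frame."""
        return offered_load * math.exp(-offered_load)

    for g in (0.2, 0.5, 1.0, 2.0, 4.0):
        print(f"offered load G = {g:.1f}  ->  useful throughput {throughput(g):.3f}")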
The opposite pole to the unrestricted use of a limited resource, of course, is excessive regulation. Many would argue that the Bell System monopoly, broken up by court order in 1984, was an example of a regulated extreme that stifled technical innovation. It's much too late to debate that argument again, except to say that clarity in ownership and consumer rights is worth something. When there was only one phone company, it was easy to pay your communications bill. Nowadays, the typical young consumer may ante up each month to a phone company or two, a cable operator, a mobile phone outfit, an Internet provider, and possibly some MMOG (massively multiplayer online game) bills. It seems to be the nature of modern technology to widen the variety and types of choices available to the consumer, but all for a price.
And sometimes the price is not just dollars and cents, but undesirable changes in places we may never see.
Sources: The New York Times article "The Air Is Free, and Sometimes So Are the Phone Calls That Borrow It" was carried in the Nov. 27, 2006 online edition.
Wednesday, November 22, 2006
Vistas of Choice?
In some ways, the new worlds opening up as computer science and technology progress seem to promise an almost infinite array of choices. Multi-user online games allow you to create your own avatar by selecting from an array of virtual body features, abilities, and appearances. Type almost any search term into Google, and you have thousands of pages of information to choose from.
But in other ways, once you decide to deal with computers at all (and modern life is all but unthinkable without them), your choices are extremely limited. Suppose for some reason that you simply do not like the operating systems produced by Microsoft. Well, there are Macs, there are the various Linux machines, and an array of expensive specialized systems for various technical uses. But if you're just an ordinary consumer, not a computer specialist, and you just don't like Microsoft, you'll pay a price for your pickiness.
This paradox came to mind as I read news that Microsoft is currently shipping Vista, its new operating system for PCs. Given Microsoft's large market share, most PC users will have to switch to Vista sooner or later. Vista comes with promises that it is much more secure than previous versions, but if history is any guide, you can also rest assured that it is now the main target for writers of viruses and other mischief-making software, simply because more computers will be running Vista than anything else.
The question of whether history is any guide to these matters engaged the attention of Rosalind Williams, a historian of technology at MIT, a few years ago. In her book Retooling, she explored the ways people deal with the lockstep software upgrades, such as Vista, that are so often imposed upon them. Even at a supposedly future-oriented, cutting-edge institution like MIT, she found that new administrative software was greeted with dismay as often as enthusiasm. But she also realized that in the complex, interlocking world of information and technology we have created for ourselves, not keeping up with the program (so to speak) is simply impossible without turning one's back on the way modern professional life is lived.
Of course, there are societies that do this. The religious sects collectively known as the Amish decide which modern technologies to adopt and which to forego. Sometimes their prohibitions are not absolute. For example, a whole neighborhood of Amish will share one pay telephone, but use it only for emergencies. And I recall reading about another Amish community that experimented with a video player and a set of children's films for a while. Eventually, though, the parents disposed of the equipment, because one father noticed that "the children aren't singing anymore."
Rejection of most modern technology is certainly a choice, but not one that can generally be made on an individual basis. The Amish survive, not simply because they don't watch videos or drive cars, but because they have preserved and maintained a functioning community where everyone has rights and responsibilities that are taken seriously. I understand that once an Amish child comes of age, he or she can freely choose to leave the community. But most decide to stay.
That kind of community is foreign to most of us, possibly because we demand so much in the way of freedom of choice that we refuse to be bound by obligations that would reduce that freedom. But choice comes with a price. In a peculiar way, the market is kind of a mirror of our own collective choices. Microsoft got to the place it is by giving most PC users most of what they wanted. That entails literally millions of choices. (Has any user on earth ever tried out all of Microsoft Word's features even once?) But in order to have the choices Microsoft provides, you have to forego the privilege of choosing your operating system.
Engineered systems of all kinds offer similar choices. You can choose not to own a car in a city in the western U. S., but your choices for travel will be radically restricted thereby. While millions of people do quite well in northeastern U. S. cities without cars, it is because their citizens have made a collective choice to maintain public transportation at a level that makes it possible to live without a car. The reasons for this are partly historical, partly technical, and partly political. The one thing they are not is simple.
I hope Microsoft is right about Vista's increased level of security. As for myself, I will continue to fly under the radar of many viruses by using mainly Macs. That choice is one I have found hard to maintain at times. But I enjoy being able to make it.
The ethics of choice in engineering is not a subject that comes up frequently. But since choice is a fundamental aspect of human freedom, as more of our lives are engaged with engineered products and systems all the time, those of us who create them should consider the ethics of choice more often.
Sources: Retooling: A Historian Confronts Technological Change by Rosalind Williams was published by MIT Press in 2002. My review of the book can be found in IEEE Technology and Society Magazine, vol. 23, pp. 6-8, Spring 2004.
Tuesday, November 14, 2006
Tesla and the Secret of "The Prestige"
Movies are not normally germane to engineering ethics, but I can justify the following discussion of the recent film "The Prestige" thusly. The driving force behind much good fiction, literary or cinematic, is a moral problem. When the moral problem involves technology, you have the same kind of issues that engineering ethics deals with, but in a different context.
This piece should not be read by people who have not seen the film and want to be surprised by the ending, because I'm going to give it away. If this blog had a wider readership I would hesitate to do such a thing—journalistic ethics generally forbidding it—but since all indications are that the audience is, shall we say, exclusive, I will go ahead and summarize the plot.
The time is around 1900, and two magicians, Angier and Borden, fall out when Borden ties a knot around the wrists of Angier's wife in such a way that it may have caused her death by drowning in a stage stunt. Angier, convinced that Borden killed his wife, embarks on a kind of revenge career in which he tries to out-magic Borden, who retaliates by disguising himself in Angier's audiences in order to wreak havoc with Angier's tricks, as well as devising increasingly ingenious stunts for his own London performances. Borden outdoes himself with a stunt called the Transported Man, in which it appears that he walks into a doorway on one side of the stage and emerges almost instantaneously out of a second doorway forty feet away. Angier, convinced that Borden does this trick by means of a machine he bought from the famed American inventor Nikola Tesla, visits Tesla in Colorado Springs, where Tesla has electrified (literally) the entire town in exchange for being able to use the town generator for his own experiments in transmitting energy without wires.
Here is the secret: Tesla, according to the movie, actually hits upon a way, not of transporting objects, but of duplicating them. He begins with top hats (the movie opens with a scene of a pile of top hats outside Tesla's remote laboratory), progresses to cats, and eventually duplicates Angier himself. Angier buys Tesla's machine, takes it to London, and with it stages one hundred performances of his own stunt, which requires him to drown his own double each time in order to keep the number of Angiers running around within manageable quantities, namely, one. The last scene of the movie shows where Angier (now dead—shot by Borden, who turns out to be two people who have been exchanging roles all through the movie) has hidden his one hundred dead bodies, each preserved in its own drowning tank.
As I watched that last scene, a thought flashed through my mind of the thousands of frozen embryos—babies—fetuses—whatever your preferred word is—that are preserved in in-vitro fertilization clinics around the world. They are not artfully lit and do not have all the features of a familiar screen actor, as the bodies in the tanks did. But they share with those bodies the one feature that makes them different from all other material objects: they are in some sense human, though in a state that might be termed suspended animation.
I do not believe "The Prestige" will go down in cinematic history as a great movie, although events could prove me wrong. For one thing, the main characters Angier and Borden, played by Hugh Jackman and Christian Bale, respectively, arouse little sympathy in the audience. They are each so single-mindedly focused on their rivalry that they trash the lives of women and shamelessly exploit anything within their reach to achieve their goals of mastery over the other, which necessarily involves mastery over the material world. As for the Tesla character, played by David Bowie as a kind of proto-Nazi-scientist type, he is simultaneously the enabler of the deepest wrongs committed by Angier, and the prophet who warns against the use of his own tools. When Angier offers to buy Tesla's machine, Tesla says that the best thing that could be done with it would be to sink it under the ocean.
The scientist who issues dire warnings against the use of his own creations is rather a cliché in science fiction. But with frozen embryos an everyday reality and human cloning on our doorstep, we are no longer talking about science fiction when we consider the morality of duplicating human beings. The job of the artist in a culture is not so much to solve moral problems (although artists can sometimes help, as Harriet Beecher Stowe tried to do with Uncle Tom's Cabin, which she wrote explicitly to expose the horrors and wrongs of slavery) as to bring our attention to things we do not see, whether out of familiarity, out of unfamiliarity, or for some other reason. The recent debates over so-called "therapeutic" cloning and embryonic stem cell research, frankly stated, involve the question of whether we should duplicate existing human beings and kill them for some purpose of our own. That is exactly what Angier did with his duplicated magicians. In the movie's system of justice, he died for his wrongdoing at the hand of his enemy.
I have little doubt that most of the people intellectually involved in the production of the film enthusiastically support embryonic stem cell research. Perhaps they see the connection between their film and that issue, and perhaps they don't. The typical response to someone who voices opposition to such research on the ground that it involves killing a human being is that the object in question is not a human being. In a recent Supreme Court case involving a law that prohibits partial-birth abortions, the language preferred by a Planned Parenthood lawyer was to say that the "fetus" involved in an abortion will "undergo demise." Would you feel any better if your doctor told you that you were going to "undergo demise" in a few weeks, rather than just saying flat out that you're going to die? The feelings at stake are not the baby's. Rather, people resort to this kind of language to help them deny the fact that they are dealing with other human beings, beings just like they themselves were at that age.
The writer Flannery O'Connor was once asked why she so often tended to write about the grotesque and the bizarre in her stories. She responded to the effect that as a Catholic, she knew that most of her audience did not share her beliefs: " . . . for the almost-blind, you draw large and startling figures." The makers of "The Prestige" have drawn some large and startling figures for us to ponder, perhaps without meaning to. I hope more people will draw the connection between their tale of long ago and far away, and what is going on in the halls of science and medicine today.
Sources: The Supreme Court case is described in an audio report by NPR's Nina Totenberg at http://www.npr.org/templates/story/story.php?storyId=6460614. The quotation from O'Connor is at http://thinkexist.com/quotes/flannery_o'connor/.
Tuesday, November 07, 2006
Global Warming and World Views, Part II
Last week I started from the fact that flying takes about ten times as much fossil fuel as riding trains, and imagined how an atheist would reason out a position on whether flying is morally justified, given the news about global warming. I showed that our hypothetical atheist could come out either in favor of flying or opposed to it, but the reasons for each conclusion came down to a matter of choosing rationales to suit one's conclusions. If you think getting enjoyment out of life is what it's all about, you'll fly as much as you can and leave the climate catastrophes for someone else to worry about. If you think man's presence on the planet is a bad idea on the whole, you'll favor the least intrusive modes of transportation possible. This leads to images of pre-agricultural primitive peoples tip-toeing through the jungle, leaving no trace of their passage. I'm sure the imaginative reader can come up with other rationalizations for either view, but that's what they are: rationalizations. You pick the outcome you want to get, and then you go looking for reasons to back it up.
I also said I'd take the example of a different worldview and see what conclusions you can draw from it as well. Here goes.
Before you say the opposite of atheism must be theism, hold on. We can keep this entirely at the level of philosophy. Instead of atheism, I should have said that the person I had in mind last time believed that there is no such thing as moral law apart from what somebody thinks. Because what I'm going to contrast that with today is the viewpoint that there IS such a thing as moral law, independent of what you or I think or say or feel, and even independent of the existence of humanity altogether.
What I mean is this. No one would quibble with the notion that whether or not people are on this planet, the law of gravity would still cause the earth to revolve around the sun. The law of gravity doesn't depend on our agreeing on it, or even knowing anything about it. Now what I'm proposing as an alternate view is the idea that common notions of right and wrong such as "don't kick babies" and "don't steal cars" are just as much an independent, inviolate part of the universe as the law of gravity. Anywhere there are sentient beings with intelligence and will, this theory goes, you find these moral principles, and they are the same everywhere.
This theory goes by various names at various times, but "natural law" is probably the most common one. It is "natural" in the sense that it is part of nature, part of the universe's structure. I won't attempt to justify it at this point, although people have done that, and not just religious people either. What I will do is to use it as a basis to adjudicate this question of whether it's moral to fly planes, knowing that you produce less greenhouse gas when you ride the train.
It turns out that the question is just as hard, but for different reasons. There seems to be a universal bias against what I'd call waste, for example. Taking a thing that is good for people and simply trashing it without benefiting from it yourself is something that hardly anybody would argue to be a good thing. If—and this is a big "if"—it turns out that our 200-year love affair with fossil fuels utterly wrecks the planet—and by this I mean, makes it completely uninhabitable, like burning down a house—then, well, I'd say anyone who burned anything combustible in the last 200 years is partly responsible. But the trouble with this notion is that we cannot know the future. That is why the "if" is so big.
You will meet people who will tell you that we have that amount of certainty about the problem, and it's time to start doing something about it. The only sure way to tell they're right is not to do anything and then wait and see. This approach has its own problems. There is a kind of prudential judgment that is part of natural law, in the sense that people are not generally expected to change their behavior based on remote possibilities that they are not intimately involved in. And that is what we should apply here.
The biggies in natural law concern how you treat your family, your friends, your neighbors, and so on. Giant geopolitical things like global warming may be the proper concern for certain specialists, but it betrays a kind of inverted set of priorities to put global warming ahead of friendships, fulfillment of duties, and charity, which is an old-fashioned word for love. I think natural lawyers would say, "If your life involves air travel and is otherwise following generally accepted moral principles, then you should consider using a less polluting form of transportation. But if your ability to do good would be seriously impaired, go ahead and fly." Of course, different people will come to different conclusions using these principles, even if they start from the same data. But that's true of almost any moral problem that isn't on the extremes. The same was true of our conclusions when we started from the atheistic or individualistic assumption.
So what good is all this? "You haven't answered the question!" you say. "Should I or should I not fly rather than ride trains?" Never mind planes or trains for the moment. Never mind global warming, even. The important question is not what mode of transportation to take, and not even whether New York City will be under water in 2106, but how you decide what is right and wrong, and what you believe the world is about, and why you are here in the first place. Get those things right, and the little stuff will take care of itself.
Tuesday, October 31, 2006
Global Warming and World Views, Part I
Did you know that if you travel on an airliner from, say, London to Frankfurt, you use about ten times the greenhouse-gas-producing fossil fuel that it takes to carry you the same distance by train? Did you care?
That idea is the gist of an ad campaign sponsored by European environmental groups. The ads take the form of statements by an imaginary airline head who makes arrogant, disparaging comments about environmentalists, whom he calls "lentil mobs." In Europe's largely pro-green culture, such comments are as inflammatory as running ads in U. S. media that show a fat white Southern sheriff saying disparaging things about blacks. Technique aside, the point the ads make is true: airline travel uses much more fossil fuel per passenger-mile than surface travel, and especially more than rail, which is more efficient than private cars. The way you react to that fact should depend on your view of the world and what it is all about.
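For concreteness, here is the back-of-envelope arithmetic behind that claim. This is a minimal sketch in which the flight distance and the rail emission figure are my own rough assumptions for illustration; only the ten-to-one ratio comes from the ads themselves.

```python
# Rough illustration of the ads' ten-to-one claim. The distance and the
# rail emission factor below are assumed for illustration only; the
# factor of ten is the figure the ads cite.

distance_km = 640                        # approximate London-Frankfurt air distance
train_g_per_pkm = 30                     # assumed grams of CO2 per passenger-km by rail
plane_g_per_pkm = 10 * train_g_per_pkm   # the ads' ten-to-one ratio

train_kg = distance_km * train_g_per_pkm / 1000
plane_kg = distance_km * plane_g_per_pkm / 1000
print(f"Train: roughly {train_kg:.0f} kg of CO2 per passenger")
print(f"Plane: roughly {plane_kg:.0f} kg of CO2 per passenger")
```

Whatever the exact numbers turn out to be, the point survives the uncertainty: the gap between the two modes is a full order of magnitude, not a few percent.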
Suppose you think this physical world is all there is, death is annihilation, and we are here to propagate our gene pool and along the way pick up whatever transient enjoyment we can. You may therefore view air travel as one of the greatest boons to humanity, since it lets us get from enjoyable place to enjoyable place much faster than surface transportation. Strangely, though, that attitude is uncommon in cultures where a frankly atheistic outlook prevails. In places such as France, Germany, and the Scandinavian countries, where publicly expressed religion is almost invisible, Greenpeace and similar green parties and beliefs are most common. The reasons for this are complex, but I can speculate.
If you believe man is the supreme intelligence in the universe, then he is therefore responsible for the efficient running of the planet. After all, we can't trust the elephants or the insects to do a good job. Or can we? They were here first. Down that line of thought lies the branch of environmentalism which views mankind as an unmitigated plague upon the planet, one which the Earth would be much better off without. In this view, the ideal world might be one in which the human population was reduced to the point where we could all live off the land like the pre-agriculture American Indians. The trouble with that is, estimates of the pre-Columbian population of North America run in the low dozens of millions, and proportionally small numbers would hold for the rest of the world. To achieve that ideal, then, most of the world's people would have to go away. As it happens, native Europeans (including Russians) are undergoing a population implosion that would be right on target to reduce Europe to its pre-civilization population levels, if it weren't for all the immigrants. But that is another story.
Even if you don't think mankind should commit mass suicide for the betterment of the planet, you may still feel some personal responsibility toward the globe which you cannot possibly fulfill. You may feel like a ten-year-old child put in charge of running General Motors: impossibly underqualified for the job. Accordingly, you turn to the experts, who are not quite as unqualified as you to run the planet, and they tell you that yes, the Earth is getting warmer, and yes, our burning fossil fuels has something to do with it, probably. So are you going to form an ironclad rule never to set foot on an airplane again?
Probably not. Instead, you'll fly when you can't avoid it, or maybe whenever you feel you can afford it, and feel guilty about it. And rightly so. Because if everybody quit flying and took the train, we'd burn less fossil fuel than we do now. Then what?
Well, you as an individual might live long enough to see a slight slowdown in the global-warming trend. But maybe not. And suppose it's too late? Suppose we've passed the invisible tipping point of no return, and the atmosphere is headed inexorably toward a catastrophe that will make the worst disaster movies look like child's play: storms, floods, inundated coastal cities and plains, radical rises in temperature. Again, there is nothing you can do but watch. In this case, the thought that years ago, you quit flying in airplanes as a protest against what you saw as environmental irresponsibility might furnish you some small solace, but it did nothing significant in the long run.
I don't know about you, but I find all these alternatives profoundly depressing. Doing nothing is bad, but doing something like abstaining from flying has such a small chance of making any real difference that it's not worth the effort. Of course, there is always the great mysterious process by which public opinion changes. And something like that might happen here, as it did in the sixties in the U. S., when environmentalism grew from being viewed primarily as the peculiar obsession of a few left-wing crackpots to something that President Richard M. Nixon himself embraced when he founded the Environmental Protection Agency in 1970. But such things are hardly predictable, and to trust in their occurrence takes a kind of faith akin to that of people who regularly buy lottery tickets.
Lest I appear to be bringing a counsel of despair, I will take a look at a different world view next week. I'll tell you right now, I won't necessarily come to any different conclusions about what to do. But the reasons will be very, very different.
Sources: The report on the spoofing airline ads is an Oct. 29, 2006 New York Times article by Eric Pfanner at http://www.nytimes.com/2006/10/30/business/media/30fuel.html. According to the Wikipedia article on the population history of American indigenous peoples, estimates of the North American native population before 1492 range from 12 million to over 100 million, and are probably no more than educated guesses. Whatever the figure is, it is much less than the current population.
Wednesday, October 25, 2006
Sniffing Through Your Wallet with RFID
We should all be glad that Superman was a nice guy. I mean, with his X-ray vision, his personal jet-powered cape, not to mention his lady-killing looks when he didn't have his glasses on, he would have made a formidable criminal. Well, some nice guys in the Department of Computer Science at the University of Massachusetts Amherst have shown us that it doesn't take X-ray vision to read your name and credit-card number off some new types of credit cards that incorporate something called "RFID."
First, full disclosure (I've always wanted to say that): I taught at the University of Massachusetts Amherst for fifteen years before moving south, though not in Computer Science. And even before that, my supervising professor in graduate school and I patented a system that could have been used for RFID, although nobody but the patent lawyers ever made a nickel off the patent, which has now expired.
What is RFID? It stands for "radio frequency identification," and it includes a variety of techniques to track inventories, monitor conditions remotely, and even read credit cards. The common thread in all these things is an RFID chip that goes onto the object in question: a box of Wheaties, a credit card, or even a person's body. You can think of this technology as the next step beyond bar codes—those little symbols that the checkout person scans at the grocery store. Using the proper RFID equipment, you can receive information about where the object is, its inventory number, and so on, all without touching the object. So in a warehouse, for instance, every time a pallet full of computers goes out the door, an RFID transponder can count them and record each computer's serial number, and the guy driving the forklift doesn't even have to slow down. You just have to be within radio range, which can vary from inches to several feet. Which is how the clever guys at UMass Amherst did their trick.
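Before getting to the trick itself, here is a minimal sketch of what the software behind that warehouse dock-door reader might do. Everything in it is hypothetical, from the tag IDs to the function names, and it makes no attempt to model any real reader's radio protocol; it only illustrates the deduplicate-and-log idea.

```python
# Illustrative sketch only: a toy model of the warehouse scenario above,
# in which a dock-door RFID reader logs every tagged item rolling past.
# All names and data here are hypothetical.

from datetime import datetime

def process_reads(raw_reads):
    """Deduplicate raw tag reads (a tag typically answers many times
    while it is in radio range) and log one event per item."""
    seen = set()
    inventory_log = []
    for tag_id, serial_number in raw_reads:
        if tag_id in seen:
            continue  # same tag answering repeatedly; count it once
        seen.add(tag_id)
        inventory_log.append({
            "tag": tag_id,
            "serial": serial_number,
            "time": datetime.now().isoformat(),
        })
    return inventory_log

# A pallet of three computers going out the door; each tag responds
# several times on the way through the portal.
raw = [("TAG-01", "SN-1001"), ("TAG-01", "SN-1001"),
       ("TAG-02", "SN-1002"), ("TAG-03", "SN-1003"),
       ("TAG-02", "SN-1002")]
for event in process_reads(raw):
    print(event)
```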
According to the New York Times, Professor Kevin Fu asked a graduate student to take a sealed envelope containing a new credit card and just tap it against a transponder box they had designed. In a few minutes, Professor Fu's name, the credit card number, and even the expiration date appeared on a screen. All without even opening the envelope.
The Times reporter dutifully made the rounds of credit-card firms such as American Express and J. P. Morgan Chase to describe Prof. Fu's magic trick. Visa's Brian Triplett said it was an "interesting technical exercise," but wasn't concerned that it would lead to widespread credit-card fraud. It should be noted that it wasn't Mr. Triplett's credit card number that showed up on the screen.
As with many other technologies that develop out of the public eye for years or decades before emerging into visibility, RFID has been around a lot longer than you might think. Back in World War II, a primitive form of RFID was used with aircraft to "identify friend or foe" (IFF). The equipment was far too bulky or expensive back then to be considered for consumer products, but advances in electronics have given us RFID chips cheap enough to throw away with the empty box of Wheaties. Some experts believe RFID will largely replace bar codes as the inventory technology of the future. And that's not all.
Attaching an RFID tag to one's person could lead to all sorts of situations, not all of them pleasant. Strangely enough, one of the more popular paranoid delusions of recent years, dating from well before RFID could actually do any such thing, was that the FBI or some equally secretive outfit had implanted a chip in the patient's body, and that the chip was spying on his whereabouts and even his thoughts. I actually had dealings with such an individual when I was back at UMass, and it wasn't a pretty picture. It's not every day that billions of dollars are spent with the unintended byproduct of bringing some nut case's delusion into the realm of reality, but it happens. RFID is still a long way from reading people's thoughts, but even that notion doesn't sound as goofy as it used to, what with PET scans and other noninvasive brain-monitoring techniques.
For now, RFID will begin to show up only in places like grocery stores, automated tollbooth tags such as New York State's "E-ZPass," and some credit cards. I don't think we need to worry about Prof. Fu's trick falling into the hands of some evil computer scientist, because it's fairly easy to foil. And fortunately, the laws about credit-card fraud in this country are written so that the consumer is liable only for the first $50 of loss, and the credit-card issuer is left holding the rest of the bag. So if Visa and company start losing substantial amounts of money to people who cobble together a duplicate of Prof. Fu's remote card reader, the firms will take the straightforward steps needed to fix that particular problem.
All the same, we need to think about how RFID could be abused, before some clever thief or saboteur does, and take reasonable precautions. And it's going to be a long while before yours truly consents to having any chips embedded in his person. But then, I was born old-fashioned.
Sources: The New York Times story appeared online on Oct. 23, 2006 at http://www.nytimes.com/2006/10/23/business/23card.html. I have recently received a copy of RFID Strategic Implementation and ROI: A Practical Roadmap to Success by Charles Poirier and Duncan McCollum, which has a good nontechnical discussion of RFID's history and how it works.
Tuesday, October 17, 2006
Is Any Technology Ethically Neutral? The Sony Reader
A recent New York Times article announced the debut of the Sony Reader, an electronic book that uses tiny plastic spheres to simulate the appearance of an actual page of print. Unlike a laptop display with its energy-hogging backlighting, the Reader uses only existing room light and consumes essentially no power until you turn the page. A reader of the Reader can take satisfaction in the notion that no trees were cut down and hardly any oil or coal burned to produce the minuscule amount of energy needed to operate it.
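To put a rough number on that claim, here is a back-of-envelope comparison. Both figures in it are my own assumptions for illustration, not Sony's specifications: a guessed steady draw for a backlit screen and a guessed energy cost per e-ink page refresh.

```python
# Back-of-envelope display-energy comparison for an hour of reading.
# Both input figures are assumed for illustration; neither comes from
# any published specification.

backlit_display_watts = 2.0   # assumed steady draw of a backlit screen
eink_joules_per_turn = 0.5    # assumed energy for one e-ink page refresh
pages_per_hour = 60           # one page turn per minute

backlit_joules = backlit_display_watts * 3600
eink_joules = eink_joules_per_turn * pages_per_hour

print(f"Backlit display: {backlit_joules:.0f} J per hour of reading")
print(f"E-ink display:   {eink_joules:.0f} J per hour of reading")
print(f"Ratio: roughly {backlit_joules / eink_joules:.0f} to 1")
```

Even if my guesses are off by a factor of several, the qualitative conclusion holds: a display that draws power only when the page changes is in a different energy class altogether.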
A more environmentally friendly technology can hardly be imagined, it seems. So should we all pitch our old-fashioned stacks of paper bound together and buy Readers? It depends.
When I try to engage certain people in a discussion of the ethics of a given technology, an argument I often hear goes like this: "Well, technology by itself is neutral. It's only the ways people use technology that are good or bad." That is one of those nice-sounding phrases that look good at first, but tend to disintegrate under scrutiny. The Sony Reader would seem to be a good candidate to exemplify the idea of the neutrality of technology. No one is making us go out and buy Readers. It's simply another item on the market which may or may not prove popular. It seems to be environmentally benign, and as long as it does what its maker claims for it, what downsides could it possibly have?
That question actually sends us out upon deep philosophical waters. There is a school of thought popular in Europe that goes under the name of the "precautionary principle." Followers of this principle take the stand that any new technology must be examined thoroughly for possible harmful effects before it can be generally distributed. If no actual harm has occurred yet, the examination of a technology for possible harm necessarily involves reasoned speculation about what might occur. There is nothing intrinsically wrong with basing technical decisions upon hypotheticals. After all, the Sony Reader's designers were speculating that people would want to buy their product if they developed it, and so the use of speculation in evaluating its effects, both good and bad, is no less warranted.
For example, one could imagine Readers sweeping the world to become as popular as books, if not more so. (To a great extent, this has already taken place as computers have replaced reference volumes in libraries.) Would the world be a better place if every book were an e-book?
That depends. The people who make conventional books wouldn't think so. Technological unemployment has been around ever since there was technology. Somehow the world's economies have absorbed the paste-up artists, the platemakers, the hot-type linotypers, and all the other superseded occupations that pre-electronic forms of printing required. What has happened to a good fraction of the printing industry's past workers might eventually happen to all of them. But unless you believe in state control and ownership of the means of production, technological unemployment is just one of those things that happen.
How could this possibility be forestalled? In the world's continuing embrace of a free-market global economy, consumers can exert a certain amount of control over what they buy. But consumers can't buy what isn't there, and much of the power to decide what gets sold lies with those who control the large firms whose investments determine the directions of the markets. If next year, most investors decide that paper books are going the way of the slide rule when electronic calculators came along, the rest of us will not be able to do much about it.
Next, consider what the Reader is made of: probably some conventional electronics, a battery, and a display containing thousands if not millions of tiny plastic spheres suspended in some kind of liquid. Some day—probably sooner rather than later, if the useful lifetime of the typical laptop is any guide—the brand-new Readers now waiting on store shelves will accumulate in attics and closets, only to be thrown out when the next model comes along. As we have learned, you can't simply throw things away these days, because there isn't any "away" anymore. More and more environmentally conscious manufacturers are doing what is called life-cycle design, which takes into account the problem of how to dispose of a used piece of equipment with minimal impact on the environment. I have no specific information on the Sony Reader in this regard, but at the least, its disposal will take up some room in a landfill somewhere. And if it contains any hazardous chemicals in its battery or display, these chemicals could cause problems later.
Finally, there is the subtle but real change in the habits of millions who change from one form of information exchange to another. No matter how closely the makers of a new technology try to imitate the experience produced by a previous one, some things are different. And sometimes the new technology imposes a whole set of new habits on the user, not all of them good ones. How many of us have rattled out an angry email and hit the send key only to regret it later? Somehow, the act of writing or typing a paper letter, signing it, folding it, addressing it, and putting it in the mailbox provided a number of additional points of decision where we could give heed to our second thoughts and at least put the letter aside instead of mailing it. What at first looked like nothing more than obstacles to the rapid communication of thought now looks more like a kind of psychological buffer that may have made society a better place.
I have no idea whether the Reader will catch on, or whether it is only a precursor of something better, or whether, like the poor, the paper books we will always have with us. And my little exercise in applying the precautionary principle to such a benign-looking piece of technology as the Reader should not be misunderstood to mean that I feel it is a threat to civilization. But I hope I have made clear that any technology whatsoever that ends up in the hands of people has intrinsic potential for both good and bad consequences, and the way it is designed can influence how those consequences develop over time.
Sources: The New York Times article by David Pogue on Oct. 12, 2006 describing the Sony Reader was located at http://www.nytimes.com/2006/10/12/technology/12pogue.html.
Wednesday, October 11, 2006
Doctors, Data, and Doomsday
Nearly every business, government office, and organization of any size down to the local barber shop has made the transition from paper records to computers—except doctors and hospitals. Go into any doctor's office and you will still see big file cabinets filled with cardboard folders bearing colored tabs. The system of keeping a file for each patient was an innovation when the Mayo Clinic came up with the idea in the early 1900s. As Robert Charrette reports in a recent article in IEEE Spectrum, the Clinic is one of the few medical facilities so far to make a successful transition to all-electronic records. But he warns that while we aren't necessarily facing a medical Doomsday, troubles lie ahead along the way to converting the entire U. S. medical system to computerized recordkeeping.
As Charrette points out, the history of large-scale software projects is littered with the bones of huge, expensive failures. One of the most egregious was the FBI's attempt to computerize their elaborate system of case files, which had been kept on paper since the days of J. Edgar Hoover in the 1930s. After spending over $100 million, the FBI gave up on the project altogether. Why is it that society tolerates such disasters in software engineering? If banks lost your money as readily as some software firms do, people would still be keeping their cash in mattresses.
Software engineering differs from almost every other kind of engineering in two fundamental ways. First, in electrical, mechanical, civil, and chemical engineering, the subject matter of the discipline is something physical: steel, dirt, chemicals, or electromagnetic waves. But in software engineering, the "material cause" (as Aristotle would put it), the matter out of which the discipline emerges, is thought. And thoughts are notoriously hard things to pin down. Second, large-scale software projects invariably deal with the largely undocumented and tremendously variable behavior of thousands of people as they do comparatively complex intellectual tasks. This is nowhere more true than in the medical profession, where some of the most highly educated and individualistic professionals deal daily with life-or-death situations. These two factors make software engineering the most unpredictable of engineering disciplines, in that despite the best plans of competent engineers, projects often run off the rails of budgets and schedules to crash in the woods of failure (metaphorically speaking).
To what extent are software engineers morally culpable for the failure of a major software project they are involved in? Failures are a normal part of engineering. And it can be said on behalf of most software project failures that no one dies or is seriously injured, at least directly. A building that collapses usually takes someone with it, but a failed software project's worst consequences for individuals are usually the loss of jobs, not life itself. But the expenditure of millions of dollars toward an end that is ultimately never realized is hardly a social good, either.
Despite such notable failures, no one seems inclined to give up on the idea that computerizing paper medical records, if we can do it, will be better than the situation we have now, where limited access to data results in thousands of misdiagnoses and hundreds of deaths every year. Of course, along with the promise of better access for those who need to see medical records comes the threat of abuse by unscrupulous businesses and criminals. Patient advocacy groups have already weighed in to oppose the present versions of health information technology legislation, which in their opinion do not protect the privacy rights of patients enough. This is a problem that can be dealt with, as the largely successful effort to put private banking records on the Internet has shown. But the challenges are greater with medical records, and it would be easy to promulgate a system with as many holes as a Swiss cheese if things aren't done right.
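One reason the problem is tractable is that the basic mechanism, controlling who may see which part of a record, is well understood. Here is a minimal sketch of role-based access control; the roles and permissions are invented for illustration, and real health-privacy requirements (and audit logging) are far more detailed.

```python
# Minimal sketch of role-based access to medical records. The roles and
# permission sets are invented for illustration; real health-privacy
# requirements, and the audit trails that go with them, are far richer.

PERMISSIONS = {
    "treating_physician": {"read", "write"},
    "billing_clerk":      {"read_billing"},
    "researcher":         {"read_deidentified"},
}

def may_access(role, action):
    """Allow an action only if the role's permission set includes it."""
    return action in PERMISSIONS.get(role, set())

print(may_access("treating_physician", "write"))   # True
print(may_access("billing_clerk", "read"))         # False: billing data only
```

The policy questions the patient advocates raise are precisely about what belongs in that permission table, and who gets to change it.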
Some people advocate an increased role for the federal government in this area, pointing out that many medical practices are small and simply don't have the resources to adapt on their own. The track record of government involvement in medicine in this country is excellent with regard to research, problematic with regard to large-scale social programs such as Medicare, and largely unknown with regard to standardized software. As with anything else, if enough good people of good will are put to the task, it could be made to work. But in the present political atmosphere in which government is often regarded as the enemy of the free market and the good in general, it is hard to imagine how enough public and professional support for a government-sponsored project could be raised.
The field of software engineering itself is only about a generation old, and its practitioners are increasingly aware of the need to borrow from fields such as sociology, ethics, and psychology to do their jobs better. The old days of a geeky nerd sitting alone in a cubicle churning out code that no one else can understand are passing, if not completely over. Good software engineers study a project's intended users as thoroughly as anthropologists observe primitive tribes, not only to figure out what the customers say they want, but to discover existing methods and connections that the users may not even know about themselves and their organizations. The ideal paper-to-software transition in the medical profession will still be a lot of work. But if it is staged properly, using good examples such as the Mayo Clinic as paradigms and checking results in each new case before proceeding, it could work as smoothly as the introduction of computers into banking. But in this case, it won't be your money, it will be your life.
Sources: The article "Dying for Data" in IEEE Spectrum's October 2006 issue is available online at http://www.spectrum.ieee.org/oct06/4589. Charrette also wrote about the FBI project failure in "Why Software Fails" at http://www.spectrum.ieee.org/sep05/inthisissue. An example of one organization advocating in favor of better patient privacy rights can be found at http://www.patientprivacyrights.org.
Tuesday, October 03, 2006
Legislating Morality: The Unlawful Internet Gambling Enforcement Act
Over the weekend, the U. S. Congress approved and passed to the President a bill to prohibit financial institutions from sending payments to offshore internet gambling websites. President Bush is expected to sign it. The internet gambling industry was taken somewhat by surprise, and stocks in online casinos are tumbling all over the globe. Some view the action as a purely political ploy to help Republicans retain control of Congress after the November elections. Others see it as one more belated attempt for the law to catch up to technology.
The name of a popular bestseller some years ago was "Please Don't Eat the Daisies." The author, a mother of several young children, was preparing a dinner party and told her kids not to track mud into the living room, not to touch the china on the table, and so on. But she forgot to tell them not to eat the daisies in the centerpiece, and so they did. People will come up with ways of doing things that regulators, legislatures, and competitors simply cannot think of in advance. But the effects of these novel ideas are not always welcome.
Once enough people got onto the Internet, gambling websites were probably inevitable. The same privacy, anonymity, and ability to operate anywhere in the world with T1 lines that make the Internet so attractive for pornographers also attract internet gaming firms. As I noted in my Aug. 1 column, various governments over the centuries have taken attitudes toward gambling ranging from pure laissez-faire to near-total prohibition. But until recently, a government that wanted to regulate gambling could identify the bookies, their hangouts, and their customers without too much trouble. The advent of the Internet changed all that.
Because of the dispersed nature of communication over computer networks, it is impractical to identify individuals who place bets online without serious curtailment of individual liberties. In principle, Federal agents could stage raids on college dorm rooms and other places where they suspect Internet gambling is occurring, but this kind of action would be tantamount to creating a police state.
If you examine the machinery of internet gambling by U. S. customers who use offshore companies, most of it is dispersed widely. Customers gamble online, paying mostly by credit card to foreign internet casinos. The thousands of individual customers are spread all over the place. There are fewer foreign sites with servers and operators, but they are inaccessible to U. S. enforcement officials. The one link in the chain that is both accessible and fairly concentrated is the group of U. S. financial institutions which forward their customers' money to the internet casinos. This is precisely the group targeted by the law that Congress just passed.
If you are a credit-card company, what your customers do with their money is normally none of your business. Outright fraud is a concern, since by law a customer's liability in most cases of credit-card fraud is limited to $50, with banks picking up the rest of the tab. Thus motivated, banks have developed sophisticated ways of ferreting out fraudulent companies who abuse their credit-card systems. But most internet gamblers tacitly agree to the rules of the game, which over the long term mean that most gamblers lose big to the casinos, just as in real life. Nevertheless, in the eyes of the law they have not been defrauded. Rather, they chose to take an action which is technically illegal, so they can't have any recourse except to deduct gambling losses on their tax returns to the extent allowed by law (a loophole I have never understood).
The present law simply prohibits banks from transferring funds to online casinos, which puts the banks in a bind. If they don't obey the law and keep on sending funds to the casinos, they will be liable to legal penalties. But if they do obey and refuse to pay online casinos, how will this affect the other parties involved?
Well, pretty soon you will see lists of unacceptable credit cards on the online gambling sites: cards issued by companies who have begun to obey the law. Depending on how dedicated a gambler is, he may shift to another card, or he may just drop that site for another one that is less picky about credit cards. What he probably won't do is quit gambling, especially if he has a habit established.
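Mechanically, what "obeying the law" looks like on the issuer's side is a screen on authorization requests. Card networks tag each merchant with a merchant category code (MCC), and code 7995 covers betting and casino gambling, so blocking by category is one obvious way to comply. The following is a minimal sketch of such a screen, with invented transaction fields; it is not any bank's actual system.

```python
# Simplified illustration of an issuer-side transaction screen -- not any
# bank's actual implementation. Card networks label merchants with a
# merchant category code (MCC); code 7995 covers betting and casino
# gambling. The transaction fields here are invented for the example.

BLOCKED_MCCS = {"7995"}   # a real compliance list might be broader

def authorize(txn):
    """Return (approved, reason) for a single authorization request."""
    if txn["mcc"] in BLOCKED_MCCS:
        return False, "declined: restricted merchant category"
    return True, "approved"

print(authorize({"merchant": "ExampleCasino.com", "mcc": "7995",
                 "amount_usd": 50.00}))
# -> (False, 'declined: restricted merchant category')
```

Of course, a screen like this is only as good as the merchant coding, which is exactly why casinos and their payment processors have every incentive to miscode their transactions.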
If the U. S. banking industry as a whole stands firm, foreign-owned credit firms will rush in to fill the vacuum. If this occurs, we will simply have succeeded in moving a major part of the system offshore. After all, the only pieces that have to stay here are the customers.
While I have no special insight into the mentality of those who passed this law, I suspect that they view gambling as an intrinsic evil which should be curtailed or eliminated where possible. I happen to be in sympathy with that view, but I also happen to be in sympathy with the outlook that says when you decide to do a thing, find a good way of doing it.
What is gambling, after all? In my very limited experience, accumulated chiefly in convenience store lines behind people who just wanted one more scratch ticket and yeah, lemme have five of them Texas Holdems, gambling is a way people have of facing the apparent randomness of life head-on, and trying to win. It has everything to do with emotion, desire, and the consumer mentality, and very little to do with logic, higher education (except for the ill-gotten gambling dollars that pay for some of it), or the nobler aspects of life. If we went about creating a society of self-controlled, self-directed citizens who knew who they were, were largely content with their lot in life, and could count on their circumstances maintaining some stability over the next few years, I suspect we'd have a lot fewer gamblers to start with. The ones who were left could send all their money to Bermuda for all I care.
So while I agree with the goal of the anti-gambling legislation just passed, as an engineer I can see several big problems standing in the way of its success. Maybe I'm wrong and this will put a big damper on the whole business. I hope so. But some problems lie deeper than legislation can reach.
Sources: A summary of the recent legislation is at http://www.canada.com/nationalpost/columnists/story.html?id=101747ec-8d41-42f5-9209-1236e3ced739&p=1
Wednesday, September 27, 2006
Maglev Train Wreck: The Human Factor
For the past several years, a train that literally floats on air and travels at speeds up to 280 miles per hour has been operating regularly on a 19-mile test track in northwestern Germany. The Transrapid 08 "maglev" train's test runs are open to the public, and the waiting list for a ride often exceeds six months. On Friday, September 22 of this year, some thirty visitors and employees of the train's manufacturers, ThyssenKrupp and Siemens, filed aboard for a high-speed trip along an elevated guideway that wends through forests and pastures. Earlier that day, maintenance personnel had traveled the same route in a smaller service vehicle which normally was moved out of the way to clear the track for the Transrapid. But somehow, that morning the service car was still on the main line when the Transrapid plowed into it at a speed of 125 miles per hour. Twenty-three passengers and crew died and ten more were injured in the most serious accident to befall maglev technology since its inception.
The idea of using a magnetic field to support a vehicle without contacting the ground is not that new. Patents on the basic idea were filed as early as the 1930s, but the notion had to await advances in electrical power and control systems before a practical maglev train could be designed. The first full-scale experimental units were fielded in the 1960s, but so far the only commercial high-speed maglev train, a German Transrapid, shuttles between Shanghai and the city's airport.
The technical appeal of magnetic levitation is easy to understand. At train speeds over a hundred miles an hour, stresses on conventional train wheels and tracks become extreme, leading to increased operation and maintenance costs. In operation, the Transrapid makes no physical contact with the track. Instead, powerful magnets hover less than an inch below steel strips on either side of the track, and automatic control systems measure the distance thousands of times every second to keep it within close limits. Heavy copper coils of wire along the track produce moving magnetic fields that propel the train up to 280 mph, eliminating any need to transfer large amounts of electrical energy to the train.
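The control problem just described can be stated compactly: measure the gap, compare it to a setpoint, and adjust the magnet current before the error grows. Below is a minimal sketch of such a loop, assuming a 10 mm target gap, a 10 kHz sample rate, and arbitrary gains; the actual Transrapid control law is far more sophisticated and is not public here.

```python
# Minimal sketch of a maglev gap-control loop -- illustrative only, not
# the actual Transrapid controller. Setpoint, sample rate, bias current,
# and gains are all assumptions chosen for the example.

DT = 1.0 / 10_000        # assumed sample interval: 10,000 readings per second
TARGET_GAP_MM = 10.0     # assumed setpoint ("less than an inch")
BASE_CURRENT_A = 25.0    # assumed bias current that holds the nominal gap
KP, KD = 12.0, 0.05      # assumed proportional and derivative gains

def control_step(measured_gap_mm, prev_error):
    """One loop iteration: a wide gap means the attractive force is too
    weak, so commanded current rises; a narrow gap means the opposite."""
    error = measured_gap_mm - TARGET_GAP_MM
    derivative = (error - prev_error) / DT
    command = BASE_CURRENT_A + KP * error + KD * derivative
    return max(command, 0.0), error   # current cannot go negative
```

The arithmetic shows why the sampling has to be so fast: even at 10,000 readings per second, a train at 280 mph covers about half an inch of track between successive measurements, so a slower loop would let disturbances grow faster than it could correct them.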
How does it stop? In normal operation, the same moving magnetic fields that accelerate the train also slow it down. The excess mechanical energy that braking makes available can even be captured and sent back into the power grid, making maglev trains one of the most energy-efficient transportation modes around. In emergencies, a mechanical system takes over. The train is fail-safe in the sense that if all power fails on the train and the track, the cars simply settle down on a skid pad on the track and the whole thing just slides to a stop without leaving the rails. All the cars remained on the track even after the recent accident.
So what went wrong? A complete answer must await future investigations, but initial reports indicate that the train operators simply did not know that the service vehicle was still on the track. At speed, any train—maglev, electric, diesel, or steam—takes a long distance to stop, a distance that increases greatly for high-speed trains. Stopping after the driver sees an obstruction is usually not an option. So the whole orientation of train safety since the nineteenth century has been to keep obstructions off the track. And this is largely a matter of good communications between the train operators and those in a position to know what is on the track ahead, out of sight.
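The kinematics behind that statement take only a line of arithmetic: stopping distance grows with the square of speed, d = v^2/(2a). Here is a short sketch, assuming an emergency deceleration on the order of 1 m/s^2, a typical magnitude for trains rather than a published Transrapid figure.

```python
# Rough stopping-distance estimate from d = v^2 / (2a). The deceleration
# is an assumed order of magnitude for trains, not a Transrapid spec.

MPH_TO_MS = 0.44704   # miles per hour to meters per second

def stopping_distance_m(speed_mph, decel_ms2=1.0):
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * decel_ms2)

for mph in (60, 125, 280):
    print(f"{mph:3d} mph -> roughly {stopping_distance_m(mph):,.0f} m to stop")
# 60 mph -> roughly 360 m; 125 mph -> roughly 1,561 m; 280 mph -> roughly 7,834 m
```

Under these assumptions the train that crashed needed about a mile to stop from 125 mph, which makes the point about sight lines concrete.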
A friend of mine belongs to the Austin Steam Train Association, a largely volunteer-staffed organization which operates excursion trains in and around Austin, Texas. Even though what they do is for fun and not for pay, they follow all applicable rules, regulations, and licensing requirements for safe train operation. After years of study, my friend finally got his engineer's "ticket" recently. Even though he is a professor of engineering, he had to undergo a course of study and a rigorous examination about the fine points of train operating procedures, including rules about authorization for train movements that seem almost Byzantine in their complexity. But decades of experience have proven these rules to be necessary, and he takes pride in following them to the letter.
Anyone can make mistakes, and this is not to say that those who operated the Transrapid on that fatal day did not have enough rules and regulations. All the regulations in the world will not prevent an accident if the rules aren't followed, and the fact that the Transrapid operated with a good safety record up till now says that by and large, the operators knew how to run it safely. Perhaps the experimental nature of the maglev train allowed a certain complacency to creep in. Track sensors that detect obstructions and interlock with train controls would have prevented this accident. And perhaps the commercial installation in Shanghai features such safety interlocks. It would be a shame if this mishap, which had nothing to do with the maglev features of the train and everything to do with human error, ends up tainting the future of maglev technology. All the same, it is a reminder that no matter how advanced technology becomes, the people working with it have essential roles to play in making it safe to use.
Sources: A New York Times article describing the Transrapid accident is at http://www.nytimes.com/2006/09/23/world/europe/23cnd-germany.html?_r=1&oref=slogin. Some interesting historical background on maglev technology in Germany can be found at http://maglev.de/index.php?en_vision. The Austin Steam Train Association's website is www.austinsteamtrain.org.
Tuesday, September 19, 2006
Email: Boon or Bane?
If you are reading this blog, you must be on the Internet, unless you are standing outside my office door where I post a hard copy of the first page every week. Either way, you very likely have one or more email accounts. If you're like me, your feelings about email are not uniformly positive. Sure, it's convenient, cheap, and a great way to stay in touch with people on the other side of the world, if you happen to know anybody over there. But email's downsides are well known too, from the time it takes you to wade through spam up to career-destroying incidents that could have been prevented by thinking just a little longer before clicking the "send" button.
Like most other communications media, at first glance email doesn't seem to have much to do with engineering ethics. A saying around my house is, "More communication is better than less communication," and why wouldn't that apply to email? The first large-scale users of email were physicists who found it a convenient way to keep in touch via their advanced networked computers. The standard features and protocols of email were developed in that environment, where the users were intelligent, well-behaved, technically adept, and often had a libertarian streak that opposed excessive government regulation. Consequently, email's marginal cost is basically zero, anyone with an email account can send mail to anyone else, and it is almost impossible to regulate without extensive government-funded intervention, as in China.
These features stayed in place as the volume of email grew far beyond what most of its early developers anticipated. Now it is a part of modern culture, as much as the telephone was half a century ago. The near-zero marginal cost of email has allowed spammers of all kinds to spring up, and a kind of electronic warfare now exists between spammers who spray the Internet with billions of bits of advertising in the hopes that a few people respond, and the system operators who keep improving spam filters in a constant battle to limit the junk percentage in the average user's in-box. One wonders how many resources are being wasted on both sides. If there were a tiny fixed cost to sending an email message built into the system, even a Federal tax, and that cost were impossible to avoid, most spammers would go out of business. The rest would have to behave more like direct-mail companies, carefully targeting their messages to only those persons who are more likely to respond, given the spammer's limited financial resources. The horse has been out of the barn much too long to consider implementing something like that now, unless in a few years it becomes necessary to do a worldwide system upgrade reaching down to the very basics of the email protocols. And that is not likely for the foreseeable future.
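The economics behind that claim can be sketched in a few lines. All the figures below are assumptions chosen for illustration, not measured data, but they show how a per-message fee that is negligible to an ordinary correspondent inverts the spammer's profit calculation.

```python
# Back-of-the-envelope spam economics. Every number here is an
# assumption chosen for illustration, not measured data.

def expected_profit(messages, response_rate, revenue_per_response, fee_per_msg):
    return messages * (response_rate * revenue_per_response - fee_per_msg)

BLAST = 10_000_000   # messages in one spam run (assumed)
RATE = 1e-5          # one response per 100,000 messages (assumed)
REVENUE = 10.00      # dollars earned per response (assumed)

for fee in (0.0, 0.001):   # free, versus a tenth of a cent per message
    profit = expected_profit(BLAST, RATE, REVENUE, fee)
    print(f"fee ${fee:.3f}/msg -> profit ${profit:,.0f}")
# fee $0.000/msg -> profit $1,000
# fee $0.001/msg -> profit $-9,000
```

A tenth of a cent per message would cost someone who sends thirty emails a day a few cents, while turning the bulk mailer's thin positive margin into a steep loss.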
Spam aside, even the volume of email from people and organizations I recognize is often overwhelming. I find that the two largest generators of email that I recognize as legitimate, but would rather not receive, are the two universities I am associated with, one as an employee and one as an adjunct professor. To control this problem, there would have to be some kind of financial or other penalty associated with excessive use of the all-employee email list. Most organizations have some sort of policy along those lines, but its enforcement is sporadic, and sometimes you wonder if anyone cares at all how many emails are sent out to everyone.
Finally, there is the time each individual spends dealing with email. I must personally spend an hour or more each day dealing with it: reading it, sorting it, purging it, filing it, writing responses, and so on. In the years before email, what did I do with that hour or so a day? I don't remember reading postal mail for that length of time daily. And I wasn't on the phone. I must have been able to do other useful things, such as work, reading important books, or talking with friends and relatives. Whatever it was, it doesn't happen now, or if it does, it's in the rest of the day that has been squeezed by email.
Email isn't the first communications medium that has been viewed with ambivalence. Plato, writing around 380 B. C., called into question the wisdom of the invention of writing itself. In a story he attributed to a legendary king, he noted that before the invention of writing, people had to commit important things to memory: songs, poems, even legal agreements. But now that the technology of writing was available, the skills of remembering would atrophy. He was not at all sure that writing was an unmixed blessing.
Neither am I sure that email is an unmixed blessing. But I hope that in the future, its rough edges get smoothed out and it approaches the ideal of a seamless meeting of minds that all communication should strive for. It hasn't happened yet.
Sources: The Socratic dialogue in which Plato recounts the encounter between the Egyptian divinity and inventor of writing Theuth and King Thamos is available at http://english.ttu.edu/kairos/2.1/features/brent/platowri.htm (Phaedrus 67-71).
Tuesday, September 12, 2006
Death in Africa for Cell Phones in the U. S.
According to some estimates, four to ten million people have died in the war that has raged in the Democratic Republic of Congo since 1996. Africa's third-largest country was known as Zaire until 1997, and began its sad history of relations with the West as the Congo Free State in 1885, when King Leopold II of Belgium made it his personal property. The despicable exploitation and cruelty that Leopold wrought upon millions of Africans in his efforts to extract natural resources such as rubber and diamonds reduced the population by half in thirty years, and has ever since stood as a paradigm of human rights abuse. Today, the Congo holds another material that the rest of the world covets: columbite-tantalite ore, commonly known as "coltan." And although there is no single individual like King Leopold who can be held responsible, the Congo is once again suffering horribly as the rest of the world steals its treasures.
Coltan is the world's main source of tantalum, an essential element in the manufacture of miniature electrolytic capacitors, also called "pinhead" capacitors because of their size. Without these capacitors, portable electronic equipment such as cell phones, PDAs, and iPods would either be much larger or simply wouldn't work at all. When only expensive military gear used tantalum capacitors, the demand for coltan was small. But now that consumer electronics manufacturers use millions of them, coltan is a hot commodity in the world mineral market.
The U. S. has no significant natural deposits of coltan. Other than Australia, the largest reserves are in the Congo. Makers of consumer electronics buy tantalum capacitors whose ingredients very likely come from a country where illegal mining, smuggling, and full-scale warfare over coltan-rich regions are endemic. The detailed history of the Congo and coltan is complex and tangled, involving multinational companies in the U. S. and Europe, migrations of refugees from Rwanda, interference by the government of Uganda, and general bad behavior on all sides. (For more information, see the article by Keith Harmon Snow and David Bernouski at http://zmagsite.zmag.org/JulAug2006/snowpr0706.html.) But the simple fact is that much of the coltan that makes its way into the world's supply chains of electronic components was mined either illegally or under political or moral conditions that most people would be horrified by if they knew.
So what is an electronics engineer to do? Avoid any designs that use tantalum capacitors? That's hardly practical, and for one thing, you can't tell just by looking whether the tantalum in a particular device came from Australia, the Congo, or somewhere else. But if engineers simply shrug their shoulders and say, "The supply chain isn't my problem—if a part's price is right and meets the specs, I've done my duty," then the professionals who are in the best position to know about the situation and make decisions based on it are turning their backs on the problem.
In Europe, certain activist groups have publicized the connection between portable electronics and the murderous events in the Congo, chanting "no blood on my cell phone" and calling for an embargo on tantalum from illegal mining. But embargoes and boycotts are not as effective as professionals who organize to recognize a problem and take action against it. At the very least, those who specify and use components whose ingredients may have been extracted at the cost of human suffering should be aware of the sordid background behind some innocent-looking electronics parts. And if their consciences move them to do something about it, so much the better.
Engineers and technology specialists should learn from the food industry, where product differentiation has been raised to a high art. Most consumers can't tell organic broccoli from the other kind simply by tasting it, so the U. S. Department of Agriculture developed its "certified organic" label, which assures the consumer that the produce was grown without synthetic pesticides and under other specified conditions. What the consumer pays extra for is not necessarily better taste, but the knowledge that the vegetables were grown in a certain way. Similarly, some makers of clothing advertise the fact that their products were not made under the sweatshop conditions that prevail in certain parts of the world. Again, the intrinsic quality of the goods is not in question. What is being sold is the sense that the purchase is somehow making the world a better place.
Why can't this principle be applied to consumer electronics? First, a supply-chain auditing system would have to be implemented so that raw materials could be traced all the way back to the mine. Given the corrupt nature of some governments and institutions, this would be hard. But if certain firms managed it and publicized the fact, the fickle hand of the consumer might begin to favor them over competitors that made no effort to keep exploitative materials out of their products. It sounds silly and idealistic, maybe. In 1820, the idea of banning slavery in the U. S. sounded silly, idealistic, and dangerous. But those who believed in it persisted, and now slavery is virtually unheard of, at least in the West.
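To make the auditing idea concrete, here is a minimal sketch in Python of what a chain-of-custody trace might look like in software. Everything in it, from the CustodyRecord structure and trace_to_origin function to the company names and certificate numbers, is a hypothetical illustration of the principle, not a description of any real registry or industry scheme.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CustodyRecord:
        holder: str                        # mine, smelter, distributor, or manufacturer
        material: str                      # e.g., "coltan" or "tantalum powder"
        certificate_id: Optional[str]      # third-party audit certificate, if any
        source: Optional["CustodyRecord"]  # previous link in the chain; None at the mine

    def trace_to_origin(record: CustodyRecord):
        """Walk the chain back to its first link (the mine) and report the
        origin plus whether every link carries an audit certificate."""
        fully_certified = record.certificate_id is not None
        while record.source is not None:
            record = record.source
            if record.certificate_id is None:
                fully_certified = False
        return record.holder, fully_certified

    # A lot of capacitors traced back through a smelter to the mine of origin.
    mine = CustodyRecord("Example Mine Pty., Australia", "coltan", "AUD-0001", None)
    smelter = CustodyRecord("Example Refining GmbH", "tantalum powder", "AUD-0002", mine)
    lot = CustodyRecord("Example Capacitor Co.", "tantalum capacitors", "AUD-0003", smelter)

    origin, certified = trace_to_origin(lot)
    print("Origin:", origin, "| every link audited:", certified)

The data structure is the trivial part, of course; the real difficulty is getting trustworthy certificates issued and verified at each link of the chain, which is exactly where corrupt institutions would push back.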
Here is a challenge that goes beyond engineering but needs engineers and other technologists to implement it. The only thing lacking is the will, on the part of those involved, to do something.
Sources: Besides the Snow and Bernouski article noted above, Snow has many other articles on exploitation of African nations by multinational corporations at his website http://www.allthingspass.com. The boycott efforts are described briefly in a BBC article at http://news.bbc.co.uk/2/hi/africa/1468772.stm.