Tuesday, March 27, 2007

Back around 1987, I walked by the bulletin board in the Department of Electrical and Computer Engineering at the University of Massachusetts Amherst and saw a letter with a note scrawled at the bottom: "Anybody want to help Ms. X?" A woman had written the letter to our department chair because we had a reputation for doing research in microwave remote sensing and the detection of radio waves. In the letter, she said she was convinced the FBI had secretly embedded a radio-wave spying chip in her body. She did not go into details about the circumstances under which this had been done, nor did she say exactly where she thought the chip was. But she knew it was there, and she wanted to know if she could come to our labs to be examined with our sensitive equipment.
Needless to say, nobody took her up on her offer to be "examined," although her letter was the topic of some lunch-table conversation for the next few days. I understand that this sort of belief is not uncommon among individuals whom psychiatry used to term "paranoid," although I don't know what terminology would be used today. Well, yesterday's paranoid fear is today's welcome reality—welcomed by some, at least. The cover of the March 2007 issue of the engineering magazine IEEE Spectrum shows an X-ray montage of a young guy holding both hands up near the camera. In the X-ray images, two little sliver-shaped chips are clearly visible in the fleshy part of each hand between the thumb and forefinger.
Inside, the reader finds that Amal Graafstra, an entrepreneur and RFID (radio-frequency identification) enthusiast, thinks having RFID chips in each hand is just great. After convincing a plastic surgeon to insert the chips, which are a kind not officially approved for human use yet (they're sold to veterinarians for pet-tracking purposes), he rewired his house locks, motorcycle ignition, and various other gizmos that used to need keys or passwords. Now he can just make like Mandrake the Magician, waving his hand in front of his door or his motorcycle instead of hauling out keys. When he posted the initial results of his experiments on a website, he got all kinds of reactions ranging from essentially "Way to go, dude!" to negative comments based on religious convictions. As he explains, "Some Christian groups hold that the Antichrist . . . will require followers to be branded with a numeric identifier prior to the end of the world—the 'mark of the beast.' So I got some anxious notes from concerned Christians—including my own mother!"
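For readers wondering what the chips actually do: tags of the pet-tracking kind carry no programs and open no doors by themselves. Each one simply answers a reader's radio query with a fixed identifier, and all the intelligence lives in whatever is listening. Below is a minimal sketch, in Python, of the kind of allow-list logic a hobbyist setup like Graafstra's might hang on a cheap serial RFID reader; the port name, the tag IDs, and the unlock_door stand-in are all hypothetical illustrations, not details from his actual installation.

```python
import serial  # pySerial; assumes a reader that emits one tag ID per line

AUTHORIZED_TAGS = {"0006A3F29B", "0006A3F2A1"}  # hypothetical IDs, one per hand

def unlock_door():
    # Stand-in for whatever actuator drives the lock (relay, servo, etc.)
    print("click -- door unlocked")

def main():
    # Hypothetical serial reader on /dev/ttyUSB0 at 9600 baud
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as reader:
        while True:
            tag = reader.readline().decode("ascii", errors="ignore").strip()
            if not tag:
                continue  # read timed out with no tag in range
            if tag in AUTHORIZED_TAGS:
                unlock_door()
            else:
                print("unknown tag:", tag)

if __name__ == "__main__":
    main()
```

The simplicity is the point, and also the privacy worry: anyone else with a compatible reader can ask the chip for that same identifier.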
Right after reading Mr. Graafstra's article, you can turn to a piece by Ken Foster and Jan Jaeger on the ethics of RFID implants in humans. (Full disclosure: I am acquainted with Prof. Foster through my work with IEEE's Society on Social Implications of Technology.) They dutifully point out the potential downsides of the technology, including the chance that what starts out as a purely voluntary thing, even a fashionable style among certain elites, might turn into a job requirement or something imposed by a government on citizens or aliens or both. They mention the grim precedent set by the Nazi regime when its concentration-camp guards forced every prisoner to receive a tattooed number on the arm. RFID chips are not nearly as visible as tattoos, but they can contain vastly more information. Think of a miniature hard drive with your entire work history, your places of residence, and your sensitive financial information and passwords, all carried around in your body and possibly accessible to anyone with the right (or perhaps wrong) equipment. Such large amounts of data cannot be stored in RFID chips yet, but if the current rate of technical progress keeps up, they soon can be. And following the rough-and-ready principle that anything that can be hacked will be hacked, implanted RFID chips pose a great potential risk to privacy.
While Messrs. Graafstra, Foster, and Jaeger debate the pragmatic consequences of this technology, I would like to bring up something that the "Christian groups" alluded to, although they approached it through the lens of some fairly recent innovations in Christian theology dating only from the mid-19th century.
A deeper theme, one that dates from the earliest Hebrew traditions of the Old Testament, is the idea of the human body as a sacred thing, not to be treated like other material objects. The Old Testament prohibited tattoos, ritual cutting, and other practices common among ancient tribes other than the Israelites. The Christian tradition carried these ideas forward in various ways, but always with a sense that the human body is not simply a collection of atoms, but a "substance" (in the philosophical sense of the word) that stands in a unique relation to the soul.
The problem with trying to relate these ideas to modern practices is that hardly anybody, Christian, Jewish, or otherwise, pays any attention to them any more. What with heart transplants, cochlear implants, artificial lenses for cataract surgery, and so on, we are well down the road of messing with the human body to repair or improve its functions. And the fact that something is sacred does not necessarily mean that it cannot be touched or altered in any way. The best I can extract from this tradition with regard to the question of RFID implants is to encourage people to give the matter special consideration. It's not the same thing as carrying around a fanny pack, or a key ring, or even a nose ring. Once it's in there, you've got it, and getting it out can be anything from a minor annoyance to major surgery. My bottom line is that with RFID implants, you're messing with the sacred again. And there has to be some meaning in the facts that this general sort of notion was first applied on a large scale by one of the most evil governments of the twentieth century, and that it used to be an imaginary fear latched onto by mentally unbalanced individuals. Only, I don't know what the meaning is.
Sources: The articles "Hands On" by Amal Graafstra and "RFID Inside" by Kenneth Foster and Jan Jaeger appear in the March 2007 issue of IEEE Spectrum, accessible free (as of this writing) at http://spectrum.ieee.org/mar07/4940 and http://spectrum.ieee.org/mar07/4939, respectively.
Tuesday, March 20, 2007
Identities For Sale
Well, here's a way we can solve the trade imbalance between China and the U. S. According to Symantec, the computer-security company, the U. S. harbors more than half of the world's "underground economy servers"—computers that are used for criminal activities, including the control of other computers called "bots" without the knowledge or consent of their owners. And it turns out that about a fourth of all bots are in China. So we're using China's computers to steal money, data, and identities from around the world. And it's even tax-free, if the criminals who organize this sort of thing play their cards right. This market is running so well that you can buy a new electronic identity, complete with Social Security number, credit cards, and a bank account, for less than twenty bucks. Don't like who you are? Become someone else!
Lest anyone take me seriously, the above was written in the spirit of Jonathan Swift's "modest proposal" of 1729 to alleviate poverty in Ireland by encouraging families to sell their babies to be eaten. I do not think it is a good thing that we lead the world in the number of servers devoted to criminal ends. But it's a fact worth pondering, and one question in particular intrigues me: why is computer crime so organized and, well, successful in this country?
Part of the answer has to do with the extraordinary freedom we enjoy compared to many other countries, in both the economic and political realms. While businesspeople complain about Sarbanes-Oxley and other burdensome regulations here, they should compare these relatively mild restrictions with those in China or many countries in Europe, where red tape and bureaucracy, not to mention the occasional corrupt official, can bog down business deals and keep foreign firms away.
Another part of the answer has to do with the relative ease of committing computer crime, and the relative difficulty law enforcement officials have in catching bit-wise criminals. According to the Symantec report, which was summarized in an article on the San Jose Mercury News website, much of the code needed for criminal work was written in regular nine-to-five shifts. This indicates that the era of the late-night amateur hacker is giving way to that of the white-collar criminal, who either works under the radar of a legitimate business or simply sets up shop as a company whose activities are purposely vague to outsiders. And nothing could be more in keeping with modern U. S. business practices. It's easy to tell what goes on at a steel mill: you see smoke, flames, and railroad cars full of steel coming in and out. But you can walk into numberless establishments in office parks around this country, look around, even watch over somebody's shoulder, and you'll still have trouble figuring out what many of these outfits actually do.
And that's maybe a third reason the U. S. is so hospitable to computer crime: the ease with which you can hide behind anonymity here. In more traditional cultures, the loner is a rarity, and most people are tied to friends and relatives by networks of interdependent connections, obligations, and moral strictures. But here no one thinks badly of a person who lives alone in an apartment, works at a company called something like United Associated Global Enterprises, and keeps to himself. The fact that he is trading in millions of dollars' worth of stolen identities every week is known only to him and perhaps a few associates who could be scattered around the country or the world. Maybe the lack of distinctive identity that such bland, interchangeable surroundings impose on the people who live and work in them makes it perversely attractive to deal in other people's identities, even for nefarious purposes.
Computer networks were designed in the early years by people who were, if not saints, at least folks who were very good at legitimate uses of computer technology, and at first they were dealing only with other people like themselves. There is a strong streak of idealism in many computer types, and that is one reason many of them worked so hard to realize their ideal of a world community joining together on the Internet. But few of them had extensive experience with criminality, and so the possibility that someone might actually abuse this wonderful new system was not, in some ways, considered very seriously. I speak as an amateur here, not as an expert. But the radically egalitarian structure of the Internet embodies a philosophy as much as it embodies a technical system.
There is no use crying over spilt idealism, and we have to deal with the way the Internet and computers are today, not the way they might have been if the founders had taken a less sanguine view of human nature when they set up the early protocols. I understand that sooner or later the Internet and its basic protocols will have to be overhauled in a far-reaching way. Maybe then we can put in more sophisticated ways of tracking bad guys down, and of preventing the kinds of attacks that come without warning and shut down whole net-based businesses. But technology can take us only so far. As long as there are people using the Internet and not just machines, some of them are going to try to con, cheat, lie, and steal. The more that future systems are designed with that in mind, the better.
Sources: The Symantec report was summarized by Ryan Blitstein of the San Jose Mercury News on Mar. 19, 2007 at http://www.siliconvalley.com/mld/siliconvalley/16933863.htm. Jonathan Swift's "Modest Proposal," the heavy irony of which was completely missed by some of its first readers, is available complete at http://www.uoregon.edu/~rbear/modest.html.
Wednesday, March 14, 2007
Who Needs a Digital Life?
One day I rescued from the throw-out pile outside another professor's office a book entitled simply Computer Engineering, by C. Gordon Bell and two co-authors, all employees of the Digital Equipment Corporation. Published in 1978, it is a time capsule of the state of the computer art according to DEC, which around then was giving IBM a run for its money by coming out with the VAX series of minicomputers. This was just before the personal computer era changed everything.
This month I ran across the name Gordon Bell again, this time in the pages of Scientific American. By now, Bell is a vigorous-looking, nearly bald guy with a strange idea that Microsoft, his current employer, has given him the resources to try out. After struggling to digitize his thirty-year career's worth of documents, notes, papers, and books (including, no doubt, Computer Engineering), he decided in 2001 not only to go paperless, but to experiment with recording his life—digitally. The goal is to record and make available for future access everything Bell reads, hears, and sees (taste, touch, and smell weren't addressed, but I'm sure they're working on those too). The article shows Bell with a little digital camera slung around his neck. The camera senses heat from another body's presence or changes in light intensity, and snaps a picture along with time, GPS location, and wind speed and direction too, for all I know. So far the project has accumulated about 150 gigabytes in 300,000 records.
Two things are surprising about this. Well, more than two things, but two immediately come to mind. One is that 150 gigabytes isn't that much anymore. The computer I'm typing this on has a 75-gigabyte hard drive, and somehow or other I've managed to use 50 or so gigabytes already. Most of it is a single video project, and Bell admits most of his space is used by video. With new compression technologies video won't take up so much room in the future.
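Those two figures are enough for a quick sanity check. A back-of-the-envelope sketch (the figures come straight from the article; the breakdown at the end is my own interpretation):

```python
# Rough check on the storage figures quoted above
total_bytes = 150e9    # about 150 gigabytes accumulated so far
num_records = 300_000  # about 300,000 records

avg_bytes = total_bytes / num_records
print(f"average record size: {avg_bytes / 1e3:.0f} KB")  # -> 500 KB

# 500 KB is a few seconds of compressed video or one hefty scanned document,
# which squares with Bell's remark that video accounts for most of the space.
```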
The second surprising thing is, why would anybody want to do this? Yes, it has become technologically feasible in the last few years, but only head-in-the-sand nerds automatically assert that because we can do a technical thing, we must do it. I hope Mr. Bell is beyond that stage, but one wonders. One of Bell's main motivations was simply to be able to remember things he would otherwise forget. Things like, "Oh, what was the name of that old boy I worked on the VAX with in 1975?" If it was anywhere in his scanned-in paper archives, I guess he can find out now. But at what cost?
Cost is not that much of an obstacle for Bell, seeing as how Microsoft is behind the project, and in any case, if technology keeps heading in the same general direction, costs for this sort of thing will plummet and everybody down to kindergarteners will be able to carry around their digital lives in their cellphones, or cellphone earrings, or whatever form the technology takes. But since this blog is about the ethical implications of technology, let's look at just two for the moment: dependence and deception.
Nobody knows what will happen to a person who grows up never having to memorize anything. I mean, where do you stop? I don't need to remember my phone number; my digital assistant does that. I don't need to know the capital of South Dakota; my digital assistant knows that. I don't need to know the letters of the alphabet; my digital . . . and so on. At the very least, if we go far enough with digital-life technology, it will create a peculiar kind of mental dependence that up to now has been experienced only by people in iron lungs. When a technology becomes a necessity, and something happens to the necessity, you can be in deep trouble. So far the project doesn't seem to have done Gordon Bell any harm, except to have absorbed much of his time and energy for the last several years. But if this sort of thing becomes as commonplace as electric lighting (which did in fact revolutionize our lives in ways both good and bad), it would work changes in culture and human relationships that, at the very least, deserve a lot more thought and consideration than they have received up to now.
The second implication concerns deception. For practical purposes, there is no such thing as a networked computer system that is absolutely immune to jimmying of some kind: viruses, worms, falsification of data, and identity theft. Bell and Gemmell admit as much toward the end of the article when they talk about questions of privacy. If you think someone stealing your Social Security number is bad, wait till somebody steals the photo your digital assistant took of your "escort" on Saturday night in Las Vegas during that convention. Their proposed solution, as is typical with true believers of this kind, is more technology: intelligent systems to "advise" us when sharing information would be stupid. But what technology will keep us from being stupid anyway? And their solution to the storage of what they call "sensitive information that might put someone in legal jeopardy" is to have an "offshore data storage account . . . to place it beyond the reach of U. S. courts." How thoughtful of Scientific American to place in the hands of its readers such convenient advice on evading the law. This advice betrays an attitude that is increasingly common among certain groups who feel strongly that the digital community trumps all other human institutions, including legal and governmental ones.
Well, I'm glad Mr. Bell is still exploring the wonderful world of computers, even if his interests in the wider ranges of human experience appear not to have changed since his early days on the VAX project. Despite the tone of technological determinism in his article, I assert that the way digital lives will develop and be used is far from predictable, and it is even far from certain that it will happen at all. If the technology does become popular, I hope others will think more deeply than Bell and Gemmell have about the possible dangers and downsides.
Sources: "A Digital Life" by C. Gordon Bell and Jim Gemmell appeared in the March 2007 edition of Scientific American (pp. 58-65, vol. 296, no. 3).
Tuesday, March 06, 2007
The Ethics of Electronic Reproduction
Since so much of what we see, hear, read, and talk about has passed through digitization and cyberspace, it's easy to let that fact fade into the background and ignore the myriad tricks that engineers have put into the hands of video editors, sound-recording experts, and crooks. A story about sound-recording fraud with a neat ironic twist was reported recently by John von Rhein, the Chicago Tribune's music critic. It seems that one William Barrington-Coupe, the man behind a small record label called Concert Artist, wanted to make his concert-pianist wife Joyce Hatto look good, or at least sound good, on recordings issued under her name. So he "borrowed" recordings of famous pianists such as Vladimir Ashkenazy and altered the timing just enough to head off the suspicion that would arise if anyone noticed that Joyce Hatto's version of Rachmaninoff's Prelude in C sharp minor, for example, lasted two minutes and forty seconds, exactly as long as Vladimir Ashkenazy's. He did this digitally, of course, which is how he got caught.
Seems there is software out there that can compare the bits directly between two digital recordings. Although I don't know the details, I can imagine that a direct bit-by-bit comparison, even with digital time-fiddling thrown in, could reveal copying of this kind much more positively than any subjective human judgment. Anyway, somebody tried it out on one of Joyce Hatto's Concert Artist CDs and found that the bits actually originated from the playing of Hungarian pianist Laszlo Simon. Confronted with the evidence, Barrington-Coupe confessed, making publicity of a kind he probably wasn't hoping for.
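Though the exact tools used to unmask the Hatto recordings weren't described, the core operation is not exotic. Cross-correlating two decoded waveforms shows whether one is essentially a shifted copy of the other: independent performances of the same piece never line up sample-for-sample, so a normalized correlation peak near 1.0 is damning. Here is a minimal sketch, assuming both recordings have been decoded to WAV files at the same sample rate (the file names are placeholders, and a real comparison would also have to undo deliberate time-stretching of the kind Barrington-Coupe applied):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def peak_correlation(a, b):
    """Peak of the normalized cross-correlation between two mono signals."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    # FFT-based correlation tries every possible time offset between the two
    return correlate(a, b, mode="full", method="fft").max()

rate_x, x = wavfile.read("hatto_prelude.wav")   # placeholder file names
rate_y, y = wavfile.read("simon_prelude.wav")
assert rate_x == rate_y, "compare recordings at the same sample rate"

# Keep one channel and work in floating point
x = x.astype(float)[:, 0] if x.ndim > 1 else x.astype(float)
y = y.astype(float)[:, 0] if y.ndim > 1 else y.astype(float)

print(f"peak correlation: {peak_correlation(x, y):.3f}")
# near 1.0 -> almost certainly the same underlying recording;
# two honest performances of the same piece score far lower
```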
The Tribune critic von Rhein makes the point that this is only the most egregious case of the kind of thing that has been going on for generations: electronic manipulation of performances to make them sound better. "Better" can mean anything from editing out mistakes and poorly performed passages to complete voice makeovers that can make a raspy-voiced eight-year-old boy sound like Arnold Schwarzenegger. Von Rhein traces this trend back to the introduction of tape recording and its comparatively convenient razor-blade-and-cement editing techniques, but there's an even earlier example: reproducing piano rolls. As early as the 1920s, inventors developed a system that recorded not only the timing of keystrokes but their force, in sixteen increments from loud to soft, and reproduced these strokes on a fancy player piano that embodied elements of digital technology implemented with air valves and bellows. Famed artists such as George Gershwin recorded numerous reproducing piano rolls, whose dynamics sounded much better than the ordinary tinkly player pianos of the day. It is well known that these reproducing piano rolls were edited by the performers to remove imperfections and otherwise improve upon the live studio performance.
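Sixteen increments of force is, in modern terms, a four-bit dynamic code attached to each keystroke, the same quantization idea MIDI later standardized with seven-bit velocities (0-127). A toy illustration of what those pneumatic mechanisms were doing; the mapping here is my own example, not the actual piano-roll encoding:

```python
def quantize_force(force, levels=16):
    """Map a normalized keystroke force in [0.0, 1.0] to one of `levels` steps."""
    force = min(max(force, 0.0), 1.0)            # clamp out-of-range readings
    return min(int(force * levels), levels - 1)  # 0 = softest, 15 = loudest

for f in (0.05, 0.30, 0.72, 1.00):
    print(f"force {f:.2f} -> level {quantize_force(f)}")
```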
Most people who listen to music these days are at least vaguely aware that even so-called "live" recordings have been doctored somewhat, and few seem to care. When someone strays into outright fraud, as Mr. Barrington-Coupe did, most people would agree that this is wrong. But should we be free to take what naturally comes out of a piano or a horn and transmogrify it digitally any way we wish, while still passing it off as "live" or "original"?
The novelty of this sort of thing is largely illusory. If we pass from the auditory to the visual realm, there is abundant evidence that those who could afford to make themselves look better than reality have done so, all the way back to ancient Rome. Portrait painters, other artists, and craftspeople have always faced the dilemma of whether to be strictly honest or flattering to their subjects. If the subject also pays the bill, and if honesty will not make as many bucks as flattery, flattery often wins out. The fact that flattery can now be done digitally is not a fundamental change in the human condition, but simply reflects the fact that as our media change, we take the same old human motivations into new fields of endeavor and capability.
What is truly novel about the story of Barrington-Coupe and his wife Joyce Hatto is not the intent or act of fraud, but the way he was caught. It was said long ago that he who lives by the sword dies by the sword. It often turns out these days that he who attempts to deceive by digital means gets caught by means of digital technology as well. On balance, I don't think we have a lot to worry about concerning musicians who want to sound a little better than reality on their recordings. Those who would forbid them the use of digital improvements are to my mind in the same category as those who want to prohibit the use of makeup by women. Maybe there are good religious reasons for such a prohibition, but it would make the world a little less attractive. The greater danger of digital technology as applied to media appears to me to lie in the area of control by large, powerful interests such as corporations and governments. But that is a discussion for another time.
Sources: John von Rhein's article appeared in the Mar. 4, 2007 online edition of the Chicago Tribune at http://www.chicagotribune.com/technology/chi-0703040408mar04,1,3237470.story?coll=chi-techtopheds-hed (free registration required).
Tuesday, February 27, 2007
Cyberspace Anonymity: Good or Bad?
If you have been reading this blog for more than a few weeks, you may have noticed that I recently pulled off my mask of anonymity and posted my full name and location on it. That was a choice I made, and most choices have moral implications, if you look far enough. The Internet offers abundant opportunities to those who wish to remain anonymous for whatever reason. Since the way it is engineered has contributed to this state of affairs, we are still in the realm of engineering ethics when we consider the implications of cyberspace anonymity.
In the last few days, I have been corresponding with a person halfway around the world, in Australia, about a laptop computer problem. He (I assume it's a he, although I might be wrong) and I have never met and will in all likelihood never meet in this life. But he's had the kindness to take note of my plea for help on a users' forum, and for the last three or four days we've each been posting a remark a day: I ask questions, he gives advice. I notice he usually posts around four in the afternoon his time, which is just a bit before I get on around six in the morning in Texas. So although the sun set decades ago on the British Empire, the sun never sets on this spontaneous two-person computer consulting organization, at least as long as it lasts. So far, I've found this to be a good and helpful interchange.
One of the issues he's helping me with is computer viruses. They are another product of the anonymity the Internet provides. As I've remarked elsewhere, many computer hackers don't view the theft of software (or the theft by virus and worm vandalism of other people's time and resources) in the same light that they'd view the act of walking into a convenience store and heisting a loaf of bread. One reason for that is you're much more likely to get caught with bread under your coat than you are to be caught with illegal software, mainly because transactions over the Internet are usually anonymous unless you go to the trouble to advertise who you are. If by some magic, the writers of viruses, worms, and all the other plague carriers of computerdom were brought into the same room with their victims, you'd need a plenty big room, for one thing. Some of the perpetrators might be shamed into confessing, but others might just brazen it out like juvenile delinquents everywhere, and deny it all. At the very least, though, the victims would have the perverse satisfaction of seeing the person who messed up their computer. If this kind of encounter happened on a regular basis, the number of virus-writers would probably decline, but not die out entirely. Unfortunately, I don't have that particular magic trick in my bag.
What you think about the anonymity of cyberspace depends on what you think about humanity. The (relatively few) hard-core materialists among us cannot make a principled distinction between the silicon-and-aluminum machines on which the meat machines communicate, and the meat machines themselves. It's all bits anyway, and so whether one meat machine "knows" who another meat machine is, doesn't really matter except for routine pragmatic reasons, which are the only kind of reasons there are. Those of us who see something unique and distinct about humanity also see something unique and distinct about one person getting to know another, and even about names themselves. In the Hebrew Bible, the knowledge of a person's name conveyed an almost magical power. At the burning bush, Moses asked God, ". . . when I come unto the children of Israel. . . and they shall say to me, What is his name? what shall I say unto them? And God said unto Moses, I AM THAT I AM: and he said, Thus shalt thou say unto the children of Israel, I AM hath sent me unto you." The fact that God told Moses His Name was the sign of a special relationship. And so it should be between people as well.
It doesn't particularly bother me that I don't know Mr. (or Ms.) Australia's name. Long before computers came along, people in cities got used to being served by employees whose names they didn't know. That may not be a good thing in itself, but if it's a moral wrong not to call a salesperson by name, it's one millions commit every day. Normal life for centuries has brought with it various degrees of interaction, from the most casual one-time encounter to the most exalted lifelong friendships and marriages. A life in which each of us knew the most intimate details of the lives of all our acquaintances would be like living on a small desert island with other castaways. We have unfortunately been exposed to the real-life consequences of that kind of life on reality-show TV, and it's obviously got its problems. On the other hand, a life lived with no marriage partner, no close friends, and no one who calls you by your first name would fall short of what most people consider a reasonably fulfilled existence.
Should we throw up our hands and say that cyberspace anonymity is neutral? Absolutely not. It depends on how it's used. If anonymity encourages otherwise shy people to risk more in the way of human encounters, then it may be a benefit. If a criminal uses it the same way he'd use a mask, then it's wrong. Anonymous criticism, hate mail, and poison-pen letters and emails are likewise wrong, or at least cowardly (and I include most spam in that category), although there may be extenuating circumstances, such as when whistleblowers fearing for their jobs expose corruption and wrongdoing anonymously on hotlines.
We can hide behind the masks we don online because we're having fun, or helping each other, or considering a more serious relationship, or trying to make a buck, or plotting to kill. If the Internet had been set up to be totally transparent—everyone knowing the identity of everyone else—it would be a very different place, and probably closer to that global village that Marshall McLuhan talked about. But probably our interactions on it would be very different too. And I might not have gotten any help for my computer problem—at least, not from Australia.
Sources: The Canadian social theorist and media critic Marshall McLuhan did indeed coin the phrase "global village," according to his son Eric, who writes about its origins at http://www.chass.utoronto.ca/mcluhan-studies/v1_iss2/1_2art2.htm.
Tuesday, February 20, 2007
Global Warming or Global Shaking? A Tale of Two Theories
On Dec. 26, 2004, the most deadly tsunami in recorded history struck the Indian Ocean, killing about 280,000 people. If there had been a warning system in place along the affected coastlines to move people to higher ground, many of those who died in the disaster might be alive today. Fortunately, the technology to detect tsunamis in deep water and relay the information to the proper authorities exists today. After the terrible lesson of 2004, many governments moved to improve their tsunami-warning capabilities, and this effort is already proving fruitful. But most people think earthquakes on land, which can be just as deadly as tsunamis, are inherently unpredictable. What if that isn't true? What if it turns out that we can predict earthquakes as reliably as tomorrow's weather—not perfectly, but well enough to give warnings about truly major earthquakes? Wouldn't that be worth a little time and attention?
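In outline, that deep-water detection technology is conceptually simple: a bottom-pressure sensor on the seafloor watches for sea-level changes that the expected tide can't explain, and a surface buoy relays any alarm by satellite. The sketch below is a deliberately crude version of that logic; the moving-average tide estimate, the sampling rate, and the 3-centimeter threshold are all illustrative assumptions, not the algorithm real warning systems use:

```python
from collections import deque

def detect_tsunami(heights, window=240, threshold_m=0.03):
    """Yield (sample index, anomaly) where sea level strays from the recent average.

    heights: sea-level-equivalent readings in meters, one per 15-second
    sample (a hypothetical rate). Real systems predict the tide far more
    carefully than a moving average does.
    """
    history = deque(maxlen=window)
    for t, h in enumerate(heights):
        if len(history) == window:
            expected = sum(history) / window
            if abs(h - expected) > threshold_m:
                yield t, h - expected
        history.append(h)

# Toy usage: a flat 2.00 m sea level with a 5 cm rise starting at sample 300
readings = [2.00] * 300 + [2.05] * 50
for t, anomaly in detect_tsunami(readings):
    print(f"alarm at sample {t}: anomaly {anomaly * 100:+.1f} cm")
    break  # the first alarm would trigger the satellite uplink
```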
One of the people who think so is Friedemann Freund, long associated with the NASA Ames Research Center at Moffett Field, California. Freund is a mineralogist who has never been afraid to go against the prevailing climate of opinion, even as a child growing up in post-World War II Germany. His interest in how rocks behave under the conditions of extreme temperature and pressure that exist deep below the earth's surface led him to the discovery that their electrical conductivity changes in unexpected ways. Freund believes his research is a key to understanding why attempts to predict earthquakes using electromagnetic measurements have failed to live up to early expectations. (For more details on this type of earthquake prediction, see the entry in this blog "Earthquake Prediction: Ready for Prime Time?" for Apr. 13, 2006.) When Freund's findings about the way electric currents can pass through rocks are added to the current state-of-the-art theories, he believes they will open the way for a major advance in the technology and science of predicting earthquakes.
Freund's hopes may be realized, but leaving aside the technical questions of whether he is right, let's look at the degree of attention he and other earthquake-prediction scientists have received from the public, the politicians, and the media. Let's compare it to another scientific issue with global implications: global warming.
One rough way to compare general awareness of topics is to see how many results a given phrase returns on Google. The phrase "earthquake prediction" turns up about a million; "global warming" turns up 45 million. While all sorts of things influence these numbers, a difference that large means that a lot more people are thinking and writing about global warming than about earthquake prediction.
Now why is that? One reason has to do with the connection many scientists are making between the behavior of human beings—especially wealthy American human beings who drive gas-guzzling vehicles—and climate change. If we just hadn't burned all that fossil fuel, they are saying, we might not have to put up with hotter summers, stormier winters, and coastline property values going down (or up, depending on how close you are to the coast). And any great disaster for which we believe we are culpable, even a tiny bit, will get our attention more than something we can have no influence over. But that doesn't mean we should ignore other things that we might be able to do something about too.
Next, consider the quality of answers to two questions: (1) Has anybody died from global warming yet? (2) Has anybody died from earthquakes and tsunamis we failed to predict? Answers to (1) will be all over the map, depending on whether you attribute this famine or that flood to global warming or to other causes. Next to that muddle, the answer to (2) is as clear as a diamond in brilliant sunlight. Yes, many thousands have died in earthquakes and tsunamis—deaths that might have been averted if we had possessed the means to predict these events. And with a fraction of the effort (and publicity) spent so far on global warming, the science of earthquake prediction could be much farther along than it is.
Part of engineering ethics, at least the way I view it, is deciding what technical matters deserve attention—what to do, as opposed to simply how to do it well, whatever it is. Professional inertia, the tendency of professions to circle the wagons whenever a cherished idea is threatened by an outsider, has slowed recognition of Freund's work and the work of others in earthquake prediction. I'm not saying the outsiders are right. But they deserve a much wider hearing, and more encouragement in the form of funding and programs, than they've been getting so far. Even if spending money to look into earthquake prediction turns out to have been a bad bet, it is a wager society ought to make. And personally, I bet they are more right than wrong.
Sources: I thank Alberto Enriquez, the author of a recent IEEE Spectrum article on Freund's research, for drawing my attention to it. His article can be found at http://www.spectrum.ieee.org/feb07/4886 (free registration required for viewing). A nine-page thesis explaining some of Freund's recent ideas can be found at a website whose URL is so long I have to split it in half. You will have to copy and paste it into one line for it to work. Here are the pieces (no space between the two halves):
http://joshua-j-mellon.tripod.com/sitebuilder
content/sitebuilderfiles/Thesis_16Aug06.doc.
One of the people who think so is Friedemann Freund, long associated with the NASA Ames Research Center at Moffett Field, California. Freund is a mineralogist who has never been afraid to go against the prevailing climate of opinion, even as a child growing up in post-World-War II Germany. His interest in how rocks behave under conditions of extreme temperature and pressure that exist deep below the earth's surface led him to the discovery that their electrical conductivity changes in unexpected ways. Freund believes his research is a key to understanding why attempts to predict earthquakes using electromagnetic measurements have failed to live up to early expectations. (For more details on this type of earthquake prediction, see the entry in this blog "Earthquake Prediction: Ready for Prime Time?" for Apr. 13, 2006.) When Freund's information about the way electric currents can pass through rocks is added to the current state-of-the-art theories, he believes it will make way for a major advance in the technology and science of predicting earthquakes.
Freund's hopes may be realized, but leaving aside the technical questions of whether he is right, let's look at the degree of attention he and other earthquake-prediction scientists have received from the public, the politicians, and the media. Let's compare it to another scientific issue with global implications: global warming.
One rough way to compare general awareness of topics is to see how many results a given phrase returns on Google. The phrase "earthquake prediction" turns up about a million; "global warming" turns up 45 million. While all sorts of things influence these numbers, a difference that large means that a lot more people are thinking and writing about global warming than about earthquake prediction.
Now why is that? One reason has to do with the connection many scientists are making between the behavior of human beings—especially wealthy American human beings who drive gas-guzzling vehicles—and climate changes. If we just hadn't burned all that fossil fuel, they are saying, we might not have to put up with hotter summers, stormier winters, and coastline property values going down (or up, depending on how close you are to the coast). And any great disaster for which we believe we are culpable even a tiny bit will get our attention more than something we can have no influence over. But that doesn't mean we should ignore other things that we might be able to do something about too.
Next, consider the quality of answers to two questions: (1) Has anybody died from global warming yet? (2) Has anybody died from earthquakes and tsunamis we failed to predict yet? Answers to (1) will be all over the map, depending on whether you attribute this famine or that flood to global warming or to other causes. Next to that muddle, the answer to (2) stands out like a diamond in brilliant sunlight against a foggy sky. Yes, many thousands have died in earthquakes and tsunamis—deaths that might have been averted if we had possessed the means to predict these events. And with a fraction of the effort (and publicity) spent so far on global warming, the science of earthquake prediction could be much farther along than it is.
Part of engineering ethics, at least the way I view it, is to decide what technical matters deserve attention—what to do, as opposed to simply how to do it well, whatever it is. Professional inertia, which is a tendency of professions to circle the wagons whenever a cherished idea is threatened by an outsider, has slowed recognition of Freund's work and the work of others in earthquake prediction. I'm not saying the outsiders are right. But they deserve a much wider hearing, and more encouragement in the form of funding and programs, than they've been getting so far. Even if spending money to look into earthquake prediction turns out to have been a bad bet, it is a wager society ought to make. And personally, I bet they are more right than wrong.
Sources: I thank Alberto Enriquez, the author of a recent IEEE Spectrum article on Freund's research, for drawing my attention to it. His article can be found at http://www.spectrum.ieee.org/feb07/4886 (free registration required for viewing). A nine-page thesis explaining some of Freund's recent ideas can be found at http://joshua-j-mellon.tripod.com/sitebuildercontent/sitebuilderfiles/Thesis_16Aug06.doc.
Tuesday, February 13, 2007
Non-Lethal Weapons, Part II: Taser, Anyone?
Up to now, the taser has been used mainly by law enforcement authorities. It delivers a painful but allegedly non-lethal electrical charge that effectively disables an aggressor without permanent injury in the vast majority of cases. Since its introduction in 1993, hundreds of thousands of tasers have been sold to and used by police worldwide, and now Taser International is trying to enter the consumer market in a big way. In April, you will be able to buy a $300 unit called the C2, styled in a pink-and-black housing that makes it look more like a lady's shaver than a weapon.
An Austin American-Statesman report of Feb. 4, 2007 on the introduction of this latest taser model raises the question of safety. Is carrying around a high-voltage generator in your handbag really any better than packing a rod, as the saying goes? Even if the user doesn't harm himself or herself, are these devices really safe in both a technical and societal sense, or are they a step down the road to a police state where torture is routinely carried out by ordinary citizens?
Amnesty International seems to think tasers are a bad idea all around, and wants a moratorium on their sale. Not surprisingly, Taser's co-founder and CEO, Tom Smith, thinks a moratorium is a bad idea, since his company seems to be the main if not sole supplier of non-lethal electrical-shock devices for use on humans. What facts should guide one's decisions about these things?
Medically speaking, the taser people seem to be standing on pretty firm ground. Without going into a lot of details about amps, volts, watts, joules (not jewels, although it's pronounced the same way) and so on, I will simply say that the taser is carefully designed to deliver enough electrical energy to cause loss of voluntary control of the main skeletal muscles, but not enough to stop your heart or cause significant burns or other injuries typically associated with electrical shock. If you can't control your leg muscles, you fall down, which is just the posture police officers want a recalcitrant subject in.
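For readers who want a feel for the numbers, here is a back-of-the-envelope sketch in Python. The figures are round-number assumptions of mine for illustration only, not Taser International's specifications: something like a tenth of a joule delivered per pulse, at nineteen pulses a second.

    # Rough scale of taser discharge energy versus familiar benchmarks.
    # All figures are round-number assumptions, not manufacturer specs.
    PULSE_ENERGY_J = 0.1    # assumed energy delivered per pulse, joules
    PULSE_RATE_HZ = 19      # assumed pulses per second
    CYCLE_S = 5             # assumed length of one discharge cycle, seconds

    cycle_energy = PULSE_ENERGY_J * PULSE_RATE_HZ * CYCLE_S
    print(f"Energy in one {CYCLE_S}-second cycle: {cycle_energy:.1f} J")

    # For scale: an external defibrillator delivers roughly 150 to 360 J
    # in one shock, and a 100-watt bulb dissipates 100 J every second.
    print(f"Fraction of a 200 J defibrillator shock: {cycle_energy/200:.1%}")

The point of the arithmetic is that the taser relies not on brute quantity of energy, but on how the energy is delivered: short, sharp pulses that hijack the nerves controlling skeletal muscle.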
Taser International has posted a disquieting video showing the CEO and other high-ups receiving taser shocks. The grimaces of agony and cries of anguish are not faked, but all of them lived to tell about it. When I did a brief web search for taser injuries, one of the first articles that came up was about a series of lawsuits filed against Taser International, not by criminals (who don't usually have the wherewithal to sue anyway), but by police departments whose members claimed they sustained heart damage and other injuries while demonstrating the taser during training sessions. That was a couple of years ago, and the company now has four-page warning statements on their website describing all the things to watch out for, from sprained ankles to heart attacks in people with pre-existing heart conditions.
For the sake of argument, say the taser is as safe as its maker claims, and the people who get tasered suffer no permanent harm in nearly all cases. Do we still want Joe and Jane Public walking around with a C2 model, even if it is equipped with identification confetti that sprays out so that any use of a taser by the wrong person can be traced?
I once knew a guy who was a truck driver by profession, plenty big enough to take care of himself in a barroom brawl. For a long time he carried a gun, but after a while he married a young Christian woman and decided to quit carrying it. I asked him why. He said he didn't like the way just having the gun on him changed his attitude toward people and situations. He didn't go into detail, but what he may have meant is that he had those first thoughts that must always come before someone actually uses a weapon: what if this happens? should I pull it out then? does this guy deserve to be shot? And I guess he just got tired of having those kinds of thoughts.
If tasers get wildly popular, you can count on more people misusing them, because despite all the training brochures and videos in the world, if a consumer buys a thing and throws the training material away, there's nothing to stop him. Fortunately, the consequences of misusing tasers are less severe than those of misusing handguns. Wouldn't it be nice if we could replace all handguns with tasers? Unfortunately, we'd get right back into the arms race the minute somebody went out and got a handgun. So I think any hopes of getting criminals to use tasers instead of guns are fruitless, especially since tasers carry the identification-confetti feature.
From a historical point of view, tasers are an interesting step backward in the grand arms race that has been going on since the first caveman hit another caveman with a rock—or since Cain murdered Abel, if you please. It is an effort to find a kinder, gentler way to subdue your fellow man (or woman). I find it rather charming that the acronym "taser" is supposed to stand for "Thomas A. Swift's Electric Rifle." Tom Swift was the inventor hero of the eponymous series of adventure stories for boys that were popular in the early 1900s. In Tom Swift and His Electric Rifle (1911), Tom never actually deploys his weapon, which shoots ball-lightning-like glowing bombs, at another person. He hies himself off to Africa in an airship and shoots elephants instead. Taser co-founder Tom Smith must have had some familiarity with the series, which has attracted a kind of cult following among engineers and inventors over the years.
Tom Swift's world was a very black-and-white place, both in the racial sense and in the moral sense. In Tom Swift's world, the only people with tasers would be the good guys, who could always subdue the bad guys, save the girl, and return home in triumph to a hero's welcome. Let's hope that everybody who uses one lives up to that ideal—but let's also plan on what to do if they don't.
(Correction added 2/18/2007: A more careful re-reading of Tom Swift and His Electric Rifle reveals that Tom did indeed use his weapon against people, namely a tribe of entirely fictional three-foot-high natives covered with red hair. At first, he "regulated the charge" (p. 166) so as to stun, not kill, just like the modern taser, but toward the end of the book desperation overcame moderation and he blasted away at full power, bowling over hordes of the "red imps.")
Sources: The article "New Tasers Alarm Safety Advocates" by Joshunda Sanders appeared in the Austin American-Statesman print edition of Feb. 4, 2007, on the front page. Taser International's website is at www.taser.com. The article describing the lawsuits against Taser International appeared in August 2005 in the Arizona Republic and is found at http://www.azcentral.com/arizonarepublic/local/articles/0820taser20.html. Medical information about typical taser injuries can be found in an article by Sir (first name, maybe?) Scott Savage at http://www.ncchc.org/pubs/CC/tasers.html. And Wikipedia has a nice, though apparently controversial, article on the Tom Swift series.
Tuesday, February 06, 2007
Non-Lethal Weapons, Part I: Ray Gun or Ray Howitzer?
First, some housekeeping items. When I began this blog nearly a year ago, I hid behind a screen of anonymity because I was afraid of negative repercussions that might arise from incautious words I might write. Recently, eminent engineering ethics expert Steve Unger at Columbia University wrote me that he is thinking of starting a blog, and wanted to know why I didn't put my real name on mine. (He knows who I am because my emails all have a tag line with the blog's URL in it.) I thought about it and couldn't give him a good reason, so as of today my profile and the header show my real name. As always, comments are welcome. If you have sent me a comment and I haven't replied to you, it's because the blog machinery doesn't inform me of your email address. If you would like me to be able to contact you, send an email to kdstephan@txstate.edu at the same time you add a comment to this blog, and I'll be able to respond.
Now for the first-ever two-part series in this blog: non-lethal weapons. I thank George Michael Sherry of Fort Worth, Texas for bringing my attention to an Associated Press article that was carried on MSNBC on Jan. 25, 2007. According to this report, the ray gun of science-fiction legend has arrived. It takes the form of a truck that carries a kind of radar-antenna thing about fifteen feet high. Even if you're as far away as five hundred yards, the thing's beam can make you feel like you're on fire. No actual fire results, because the total amount of power involved is limited. A video clip shows a civilian—possibly a reporter—standing in a field at Moody Air Force Base outside Valdosta, Georgia. All of a sudden he jumps like a snake bit him, and starts to laugh, aware of how foolish he looks.
As a microwave engineer, I viewed these proceedings with decidedly mixed emotions. On the one hand, my pure-engineer side rejoiced to see some familiar old technology being used in a novel and possibly helpful way. The energy used—94-GHz millimeter waves—is something I have known about and done research with for years, although at a lower power level than what the military is using in the alleged ray gun. They have taken a high-power source—probably a vacuum tube of some kind—and focused the energy in a narrow beam that probably covers a few dozen yards' worth of people at a distance of 500 yards. Full disclosure requires me to say that about twenty years ago, I received some research funds from Raytheon Corporation, which built the unit used in these tests. The technology to do this has been around for years, if not decades, but perhaps the will to try this or the funding was lacking until now.
Before we get to the ethical issues, my pure-engineer side has some questions, though. I thought a ray gun was supposed to fit in your pocket. A more apt term for this thing is "ray howitzer," a howitzer being a piece of field artillery larger than a single man can conveniently carry. Not only does this gizmo require a large truck to haul it around (and probably a multi-kilowatt generator buzzing away somewhere), but because of fundamental physical laws, there is very little chance that they'll ever be able to make it much smaller than it is now. If they tried, the beam would spread out to where you'd be as likely to shoot yourself as anybody else nearby. And then there's the cost. The article didn't mention how many tax dollars the project used up, but unless vacuum-tube millimeter-wave technology has had some dramatic breakthroughs lately (and I haven't heard of any), you can bet that even in production-quantity runs this ray gun would set you back many hundreds of thousands apiece, if not more. And while a spokesperson for the military refused to comment on whether the rays would penetrate glass, I can say without fear of contradiction: it depends. What I can say for sure is that even a thin sheet of metal such as aluminum foil will block the rays completely. While you might look silly walking around in an aluminum suit, you'd have no worries about being zapped by the millimeter-wave ray howitzer.
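If you want to see the physics working against miniaturization, a few lines of Python will do it. The dish diameters and range below are my own illustrative guesses, since the actual dimensions weren't published; the formula is the standard diffraction estimate tying spot size to wavelength, range, and aperture.

    # Diffraction-limited spot size at the target: roughly 2.44 * wavelength
    # * range / dish diameter for a circular aperture. Dish sizes and range
    # are illustrative assumptions, not the actual system's dimensions.
    C = 3.0e8                  # speed of light, m/s
    wavelength = C / 94e9      # 94 GHz works out to about 3.2 mm
    RANGE_M = 457              # about 500 yards

    for dish_m in (2.0, 1.0, 0.1):   # 0.1 m is "pocket ray gun" territory
        spot_m = 2.44 * wavelength * RANGE_M / dish_m
        print(f"{dish_m:4.1f} m dish -> spot about {spot_m:5.1f} m across")

Shrink the dish to pocket size and the same power smears over a spot tens of meters across, far too dilute to make anybody jump. That, and not any lack of cleverness, is why there will be no pocket version.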
Now for the ethical questions. The issue of whether non-lethal weapons should be used at all is an interesting one, but there is not space here to do it justice. My main question in this area is, does the use of this device truly have no long-term health effects? Over the years there have been several studies that link exposure to high-power microwaves with the growth of cataracts in the eye. The prevalence of convenient and effective cataract surgery these days doesn't mean that we should quit worrying about giving people cataracts. It's a legitimate question whether exposure to just one "zap" from the ray howitzer could cause enough eye damage to lead to cataracts. That is a technical question for the appropriate experts, but I raise it here simply because it may not have been asked yet.
All things considered, I don't believe we have a lot to fear from people with ray guns roaming the streets and towns of America. I will be surprised if Raytheon or anybody else can make this technology cheaply enough for it to pose a threat to water cannons, tear gas, or other popular means of dispersing angry crowds. If my experience as a lecturer on microwave engineering is any guide, you could inspire a set of rioters with the same intense longing to be somewhere else that the ray howitzer inspires, by trying to teach them the Fourier transform that relates the size of the machine's dish to the size of the beam. And the lecturer would come a lot cheaper.
Sources: The ray gun article appeared on the MSNBC website at http://www.msnbc.msn.com/id/16794717/wid/11915829?GT1=8921. Information on the relation between cataracts and microwaves can be found at places such as the Communications Workers of America website (http://www.cwa-union.org/issues/osh/articles/page.jsp?itemID=27339127) and an index of research by professor of history Nicholas Steneck on the hazards of microwave radiation (http://myweb.cableone.net/mtilton/steneck.html). It appears that "normal" exposure to microwaves and radio-frequency radiation has few if any reproducible clinical effects, although many experts disagree on the conclusions that should be drawn from the abundance of research.
Tuesday, January 30, 2007
The Engineer and The Public: How's That Again?
The Institute of Electrical and Electronics Engineers (IEEE) is probably the largest society of engineering professionals in the world, with over 300,000 members worldwide. Its Code of Ethics has a little-known clause in which IEEE members agree to "improve the understanding of technology, its appropriate application, and potential consequences." My father sometimes used to greet me as I came home from school with the question, "And what did you do to make the world a better place today?" I could equally well ask the question of engineers, "What did you do to improve the public's understanding of technology today?"
People called applications engineers do that all the time, but strictly in the context of helping their firm's customers use its products. But I don't think that's all the drafters of the Code had in mind. By virtue of our specialized knowledge, engineers are under an obligation to the public to spread the truth about technology and to counter fraud and fakery wherever found. This may be one reason you don't find more engineers in politics.
In fairness to politicians, many of them try their hardest to understand technical concepts with important political implications, and to express what they see as their essentials to the public. One such attempt that I think succeeded pretty well was published in the Jan. 30 Austin American-Statesman as an editorial by U. S. Rep. Silvestre Reyes (D-El Paso). The occasion is a plan promoted by the Republican governor of Texas to build 18 more coal-fired power plants in the state. Hold on a minute, says Rep. Reyes, we have better things in the works right here at Ft. Bliss, where the Army has some laboratories engaged in something called "Power the Army!" The exclamation point must mean they're serious.
If you've ever been to West Texas, you will know that the ironically-named Ft. Bliss is a good place to test systems that need to work well in dry, hot, desert-like conditions. Today's electronically-intensive military can't just find the nearest wall outlet to plug their equipment into. Traditionally, they have had to lug along heavy, expensive, noisy, inefficient diesel generators and the thousands of gallons of fuel needed to run them. So the Army has perhaps a greater motivation than the rest of us to find ways to make electric power from solar energy, of which there is plenty in dry deserts.
Most solar power research has focused on bringing down the cost of the solar cells themselves, which despite much progress over the years are still about twice as expensive as conventional sources. Judging by their website, the "Power the Army!" project engineers have turned to a neglected aspect of solar electric power, what is technically termed "power conditioning."
Like most other commodities, electric power has to meet certain standards to be used. Voltage is an important characteristic for power: if your car battery voltage falls below a certain point, your car won't start. If the voltage delivered to your house changes more than a percent or so suddenly, your lights flicker. It turns out that the raw electric power from solar cells is not in very good shape: it varies from moment to moment with cloud cover, from day to day with solar angle, and depends on temperature and other factors. Until recently, developers of solar panels more or less took what they could get, but evidently the Army initiative is working to develop very sophisticated power-conditioning modules that are small enough to fit on each yard-square panel, and are centrally computer-controlled for optimum efficiency. Together with DC-to-AC inverters of improved design, the Army hopes to deliver solar power at half the cost that prevails today.
That's the way an electrical engineer writing for the public would put it. Now read how Rep. Reyes says essentially the same thing:
"The program uses three components: the extractor, which extracts electrons from solar panels rather than the sun having to push them out of the panels; an inverter, which converts direct current (DC), which solar panels provide, into alternating current (AC), which we actually use, at very high efficiency; and a control system to regulate the process."
How do you like that? I think it's great. The bit about "extracting" electrons instead of making the sun push them out, technically speaking, is close to nonsense. But it gets the overall point across, which is that the system works better by doing something actively which up to now has been accomplished passively. And it was written (or commissioned—Rep. Reyes probably had some help) by a former immigration official with a degree in criminal justice who has taken the trouble to learn enough about an important technical matter to bring it to the public's attention.
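For the technically curious: what Rep. Reyes calls the "extractor" sounds very much like what engineers term a maximum power point tracker, a control circuit that keeps nudging a panel's operating voltage so it always works at its most productive point. The sketch below is the classic textbook "perturb and observe" method, not a description of the Army's actual hardware, and the panel model is a toy I made up for illustration.

    # Perturb-and-observe maximum power point tracking: nudge the operating
    # voltage, keep going if output power rose, reverse course if it fell.
    # Generic textbook algorithm; the panel model below is a made-up toy.

    def panel_power(v):
        # Toy solar panel whose output peaks at an assumed 17 volts.
        return max(0.0, 120.0 - 1.5 * (v - 17.0) ** 2)

    def mppt_step(v, p_now, p_prev, step=0.1, direction=1):
        if p_now < p_prev:       # the last nudge hurt us, so turn around
            direction = -direction
        return v + direction * step, direction

    v, direction, p_prev = 12.0, 1, 0.0
    for _ in range(80):
        p_now = panel_power(v)
        v, direction = mppt_step(v, p_now, p_prev, direction=direction)
        p_prev = p_now
    print(f"Settled near {v:.1f} V, delivering {panel_power(v):.1f} W")

That is the "active" part: instead of letting the sun, clouds, and temperature dictate the operating point, the electronics chase the best one from moment to moment.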
Few engineers go into fields where they communicate routinely with the general public. But some of those who do have done quite well. The civil engineer Henry Petroski has written many books that make the practice of engineering at least comprehensible, and sometimes interesting and even dramatic. The independent journalist Keith Snow was once a student of mine in electrical engineering, and although his work no longer relates only to technology, the honesty and attention to detail he learned in school have served him well in his present position. An engineering education can be used for a variety of things besides straight design engineering. Perhaps the world would understand more about what engineers do, if more engineers decided to obey that obscure clause in the code of ethics about helping the public understand technology.
Sources: The editorial by Rep. Reyes appeared on p. A9 of the print edition of the Austin American-Statesman. The "Power the Army!" project has a website at http://gina.nps.navy.mil/Projects/PowerTheArmy/tabid/61/Default.aspx. The IEEE Code of Ethics is available at http://www.ieee.org/portal/pages/about/whatis/code.html.
Wednesday, January 24, 2007
Googling Fame: Who's In Charge?
First, I will heed the proverbial warning not to bite the hand that feeds you, or in this case, the company that provides my blog free of charge. Google, that huge, somewhat mysterious entity run by a couple of thirty-somethings who are (I read recently) two of the most admired people in America, said they would let me blog here for free, and would provide easy-to-use facilities for setting up my blog and running it. Almost without exception, they have kept their word, whoever they are. I don't have to have ads on my blog unless I choose to, the system is as easy to use as they said, and in sum, my limited experience with the organization has been almost uniformly positive. And to make things even better, after nearly a year of blogging here, I find that if you type "engineering ethics blog" into Google's search engine, the first thing that comes up is this blog. Not only that, but among the next few results are references to this blog at the University of Illinois at Urbana-Champaign and Illinois Institute of Technology. (If you type just "engineering ethics," it shows up too, but not till the fourth page.)
Now before I start preening in public, I should let you know that I have friends at UIUC and IIT, and I'm almost certain that these friends are the reasons for the references to my blog at those institutions, not the fact that Google points here. But why would you or I or anybody else care about the fact that something you write shows up on Google's search engine?
The answer is obvious to anyone who is at all familiar with the way search engines work these days. In contrast to the early days five years or so ago, when a query for "dog houses" would turn up everything from frankfurters to Manhattan real estate, search engines today use techniques that not only turn up the most relevant results first, but also rank them according to popularity. Popularity is easily measured by the frequency with which people go to certain sites referred to by the search engine, and possibly by other means of which the non-computer scientist writing this blog is ignorant. (It's amazing—and sometimes a little frightening—what people can know about your web habits with the right software.)
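Google has never published its whole recipe, but the founders did publish the PageRank algorithm in 1998, and it gives the flavor of automated ranking: a page is important if important pages link to it. Here is a minimal sketch on a made-up four-page web; the damping factor of 0.85 is the value from the original paper.

    # Minimal PageRank on a toy four-page web. A page's score is the chance
    # a random surfer lands there; each page passes its score along its
    # outbound links. Real search ranking adds many unpublished signals.
    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    DAMPING = 0.85               # probability the surfer follows a link

    for _ in range(50):          # iterate until the scores settle down
        new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = DAMPING * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank

    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")

Notice that no human being appears anywhere in that loop, which leads to my next point.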
In the nature of things, with the billions of interactions Google handles every day, the vast majority of what it does must be automated, in the sense that no human being is directly aware of or dealing with the activity. Somewhere on top of all the software is a cadre of superintendents who set policy for the system, but surely can't deal with it down on the level of individual rankings of individual search items, unless there is some kind of crisis or legal problem that requires manual intervention.
In the pre-web days, the closest analogy I can think of to this kind of thing is newspaper and magazine columns. Back then, real money had to be involved, either as payment from a publisher or as a self-publishing venture, before a person could set himself up to give advice in print to the public. This was a large barrier, but it also spared the public (at least in non-Communist countries) from stuff that nobody wanted to read. (I make an exception for Communist countries, because if for example Kim Jong-Il wishes to enlighten his citizens with a five-page editorial or a three-hour TV speech, nobody can stop him.) The keepers of these barriers were editors, people who had some judgment about what might attract readers and what ought to be put before the public.
Things are different now, sort of. Take for example a blog that you locate through Google's search engine. Instead of a newspaper editor who judiciously (or sometimes injudiciously) places before your breakfast ham and eggs a carefully selected column, in searching for a blog on a given subject you turn the task of discrimination over to whoever—or whatever—at Google decides how things are ranked in a search. Because Google is not (and probably couldn't be) totally forthcoming about how they do this, or who is responsible, you just have to take what you get. Of course you don't have to be satisfied with it if you don't like it, and it's not like you've paid anything (although you will be exposed to ads somewhere along the way—Google has to pay the bills somehow). But at least in principle, if you disagreed with an editor's choice of column, or choice of words in an editorial, you could write a letter to the editor in the time-honored way, and maybe he would print it. If you don't like what a search engine does, especially if it's Google, I'm not sure what recourse you could find, other than hiring a lawyer. And that is so trite nowadays.
How is this related to engineering ethics? I'm simply pointing out that engineers (software engineers, yes, but they like to be called engineers too) have created a new mass medium with fundamentally different rules. Communications technologies frequently get a free ride in engineering ethics courses because of the idea that communication between people is the responsibility of the people, not the medium. That is true up to a point. But when a technical medium is used by millions of people every day and exerts a powerful influence on what they read and how they view the world, the engineers in charge are making ethical choices in the way they design search engines, whether they realize it or not.
In an earlier column (Mar. 30, 2006), I raked Google, Yahoo, and Microsoft over the coals (gently) for bending their rules about freedom of speech to fit the constraints imposed by the People's Republic of China in order to operate there. Clearly, suppressing blogs on freedom and democracy in China is an extreme example of the power of software engineers to manipulate public opinion. And it's very unlikely (although possible) that anything to do with a search engine will result in deaths or injuries, which is generally what it takes for an engineering ethics matter to make headlines. But the power is there, and software engineers at Google and everywhere should give some thought as to how to use it responsibly.
Sources: I thought I could find a reference confirming what I read somewhere about Google founders Larry Page and Sergey Brin being some of the most admired heroes by people under thirty, but Google has failed me—for once. Or maybe they're just being modest.
Wednesday, January 17, 2007
The Electric Car Arrives—Again?
In 1990, General Motors Chairman Roger Smith announced that his firm was developing an all-electric car for the consumer market, partly in response to a California law mandating the sale of zero-emission vehicles in the future. Six years later, the EV1 made its debut in California and Arizona. Only about a thousand were made, and technically you could never own one—GM allowed only leases. In 2002, concluding that the program had failed, GM demanded the return of the vehicles, much to the dismay of some loyal EV1 drivers who saw the move as a back-door way to show that electric vehicles were still impractical. Just last week, GM announced at the Detroit International Auto Show that it plans to get back into the electric-car business with the Chevrolet Volt, a home-chargeable battery-operated model that carries a small gasoline engine. Should we believe them this time?
In fairness to GM, whose well-known financial woes have more to do with pensions and a glut in the world auto market than with missing out on advances in technology, selling electric cars to everybody will be hard. Technologically, it is oversimplifying to think of cars as either "electric" or "gasoline." A better way is to ask what percentage of the total stored energy on board is in the battery or the gas tank. Any car that doesn't have to be cranked by hand is slightly "electric" in this sense: what's that battery for, if not to supply stored energy to start the engine? The hybrids that Toyota and Honda have marketed with great success up the battery-energy percentage to the 20%-30% range. If you run out of gas in a Prius, you won't get very far, but you'll get farther than you will in an Edsel. The new Volt that GM announced moves most of the way toward all-electric. Its large battery will store perhaps as much as 50% of the total energy on board. GM expects that normal commuter usage will draw only on the energy stored in the battery, with the gasoline engine kicking on only for long trips. This will allow people to charge the car overnight at home from the electric grid, which has great systemic advantages over conventional hybrids. Eventually, we may see cars with onboard fuel cells that circumvent the thermodynamic limitation on efficiency that internal combustion engines suffer. These could use hydrogen or possibly biofuels, and would go most of the way toward eliminating harmful tailpipe emissions.
If electric cars are so great, why aren't we all driving them? Historically, as long as the electric car idea has been around, the glass ceiling stopping progress has been the battery. Pound for pound, gasoline contains nearly five hundred times as much energy as a fully charged lead-acid battery. And even the most advanced (and expensive) nickel-metal-hydride batteries are only four times better than lead-acid, leaving gasoline way ahead.
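To put those ratios in concrete terms, here is a quick mass calculation. The energy densities are round-number assumptions chosen to match the ratios above, not precise measurements.

    # Battery mass needed to store the energy in one tank of gasoline.
    # Energy densities are round-number assumptions, in watt-hours per kg.
    GASOLINE_WH_PER_KG = 12000    # chemical energy, before engine losses
    TANK_KG = 40                  # roughly a 14-gallon tank of gasoline

    tank_energy_wh = TANK_KG * GASOLINE_WH_PER_KG
    for name, wh_per_kg in [("lead-acid", 25), ("nickel-metal-hydride", 100)]:
        tons = tank_energy_wh / wh_per_kg / 1000.0
        print(f"{name}: about {tons:.0f} metric tons to match one tank")

An electric drivetrain is several times more efficient than an internal combustion engine, which softens the comparison considerably, but even so the mass gap makes plain why the battery has been the bottleneck.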
That's the technology in a nutshell. Now, what should engineers be doing with it? Recent advances in materials science and engineering have improved batteries to the point that they are practical—but still expensive—in hybrid vehicles like the Prius. We will have to wait and see if GM, or anyone else, can make and use batteries that are good, reliable, and cheap enough to provide the main source of energy for a commuter-type vehicle that is charged overnight. But a factor growing in importance, to the point of overshadowing these technical ones, is the human appeal factor.
The human appeal factor has to do, not with the technology itself, but how people perceive it. For example, you can show through chemical analysis that some organically-grown food products are scientifically indistinguishable from their non-organic counterparts. Knowing this, some people will still buy organic products. You can view their purchases as a kind of vote in the marketplace for a certain way of living. The human-appeal factor is in play when people bypass clothing made under sweatshop conditions for essentially the same quality of clothes (at higher prices) made under better labor conditions.
With all the problems in the Mideast and other oil-producing regions, more people are making the connection between the kind of car they drive and the international political situation. Engineers who ignore this objective, testable fact (if poll results can be said to be objective and testable!) and concentrate only on some engineering-friendly factor such as efficiency or cost, will find themselves missing a few boats on down the line, if not right away.
Should all engineers be political wonks instead? By no means! Generally speaking, the kind of personality who finds delight in making and dealing with things is not all that well suited to a life in politics, although there are exceptions. But a technologist who ignores the desires and perceptions of the marketplace, and the political and social effects of a technology, is missing an important part of the picture, a part no less important than the technical aspects.
Good people can differ over the questions of whether electric cars should be in our future, whether the marketplace or the legislatures should decide this question, and whether GM is serious this time or just has another trick up its collective sleeve. But to ignore all but the technical aspects of the questions is to lose a little of your humanity, and to become a little more like the machines you are designing.
Sources: An article on the introduction of the Volt and related electric-car news was written by John O'Dell of the Los Angeles Times, and appeared in the Boston Globe online edition on Jan. 14, 2007 at http://www.boston.com/cars/news/articles/2007/01/14/vehicles_of_the_future_likely_to_be_more_plugged_in/. An advocacy group for electric vehicles maintains a website at www.pluginamerica.com. The data on the comparable energy content of batteries and gasoline was obtained from a table at http://everything2.com/index.pl?node=energy%20density. You can see a picture of the Smithsonian's EV1 at http://americanhistory.si.edu/ONTHEMOVE/collection/object_1303.html.
Thursday, January 11, 2007
I Spend, Therefore I'm Spied Upon?
The 17th-century philosopher René Descartes' most famous dictum was, "I think, therefore I am." While Descartes was a military man for a time, he lived long before an age when simply carrying money around in your pocket made you vulnerable to espionage. A recent Associated Press report carried in the San Francisco Examiner online edition describes "spy coins" that have been found on contractors doing classified U. S. government business in Canada. According to the report, these Canadian coins carried tiny radio transmitters that could conceivably have been used to track the contractors' movements. No details were given about who the contractors were, what work they were doing, or even what denomination of coin was used. One of the security experts consulted by the reporter said that the technique didn't seem to make a lot of sense, because there is nothing to keep a person from spending a spy coin almost as soon as he or she receives it. My guess is it's a scheme cooked up by North Korea, whose counterfeiting activities are already well-known. It would be consistent with that country's old-style cold-war mentality to dream up something so outlandish that nobody would think of it, even if it didn't have a great chance of producing useful results.
Unless you do classified work for the U. S. and travel to Canada a lot, this news probably won't make you look more closely at the change you get at your next visit to the coffee shop. But it brings up a much broader issue: in the near future, devices very much like the Canadian spy coins will appear in millions of consumer products. Radio-frequency identification ("RFID") tagging is a technology that has been in the works for decades and is poised to go public in a big way in the next few years. You have probably heard of systems like the New York State Thruway's "E-Z Pass," which uses an RFID device in one's car to allow the driver to pass through a toll booth without stopping. The RFID system notes the time and place and sends a bill at the end of the month.
RFID applications like that have no apparent ethical downsides, unless maybe somebody steals your E-Z Pass. Notifying the authorities of the theft will allow them to disable that particular unit, and even nab the thief if he happens to be stupid enough to try to use it himself. But other applications of RFID, including its use as a replacement for bar-code labels on consumer products, can get into some ethical gray areas pretty quickly.
The basic RFID technology works by means of a two-way exchange of information, carried by radio waves, between the tag and a reader. In a grocery store, for example, RFID may eventually allow you to simply roll your supermarket cart through a kind of portal similar to the ones used at airport screening checkpoints, and a few seconds later the receipt would come out of the cash register ready for payment. Like many developments in retail technology, this will be good news for consumers and not-so-good news for the checkout people, who would then simply pack things into bags and take payment. But that trend has already started with the do-it-yourself checkout stations at many supermarkets and hardware stores.
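To make the portal idea concrete, here is a minimal sketch of the checkout logic in Python. Everything in it is hypothetical: the tag IDs, the price catalog, and the assumption that the portal hears each tag exactly once (a real reader would need anti-collision protocols, which this sketch ignores).

```python
# Toy sketch of an RFID checkout portal: inventory every tag in range,
# look each one up, and total a receipt in a single pass.
from collections import Counter

CATALOG = {  # hypothetical tag-ID prefix -> (item, unit price)
    "3034F87": ("coffee, 1 lb", 7.99),
    "3034A12": ("milk, 1 gal", 3.49),
    "30341BC": ("bread", 2.29),
}

def ring_up(tags_in_cart: list[str]) -> float:
    """Pretend the portal hears every tag once, then print a receipt."""
    items = [CATALOG[t[:7]] for t in tags_in_cart if t[:7] in CATALOG]
    for (name, price), n in Counter(items).items():
        print(f"{n} x {name:<13} ${n * price:6.2f}")
    total = sum(price for _, price in items)
    print(f"{'TOTAL':>17} ${total:6.2f}")
    return total

ring_up(["3034F87-0001", "3034A12-0007", "3034A12-0008", "30341BC-0042"])
```

The point of the sketch is how little the portal needs from you: no swiping, no scanning, no consent at the moment of the read, which is exactly what makes the same mechanism worrisome in the scenarios below.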
What is of more concern is the possibility of a personal RFID tag. This might easily be built into your driver's license, for example, or anything else you typically carry with you at all times. Depending on who is authorized to access it and on the availability and cost of the necessary equipment, a personal RFID tag would enable whoever runs the system to know where you are any time you come within range of a transceiver. And eventually, that could be a lot of places. Already in this country, and especially in Great Britain, we've gotten used to the ubiquitous security cameras that monitor our every move in many public and private places. But a person's identity, Social Security number, and other vital information are not immediately available from one's image on a security camera, so the privacy threat from that technology is not as extensive as it is from the potential abuse of a personal RFID tag.
Of course, any time you use a credit or debit card, your financial institution has a near-real-time bit of information about your location and activities, and occasionally this data becomes of interest to law enforcement authorities, or becomes a means of identity theft. We can expect that if personal RFID tags become either necessary or desirable, someone somehow will find a way to hack the system. One can imagine a hacker-stalker who uses his ill-gotten data to hound his victim.
Developers of RFID systems are aware of at least some of these problems, but the technology deserves close scrutiny as it makes its way into increasing numbers of stores, warehouses, and other public and private locations. In the meantime, at least now you know what RFID means the next time you see it in print. And don't take any Canadian spy coins.
Sources: The article on Canadian spy coins was carried by the San Francisco Examiner on Jan. 11, 2007 at http://www.examiner.com/a-502598~U_S__Warns_About_Canadian_Spy_Coins.html.
Tuesday, January 02, 2007
Science, Engineering, and Ethical Choice: Who's In Charge?
Every now and then it's a good idea to look at the foundations of a field, the usually hidden and unspoken assumptions that everybody knows, but few ever talk about. A recent New York Times essay by Dennis Overbye on free will addressed the question of whether our choices are really choices, or whether we are really just "meat computers" executing a program of which we are unaware. What has that got to do with engineering ethics? Only everything.
You can put this issue in the form of a paradox. Modern engineering got where it is today by being based on science. From the many reputable scientists interviewed by Overbye, we learn that from what science can tell so far, everything in the universe is either determined by physical law (in which case we can predict it) or random (which is another way of saying we can't predict it, and may not in principle ever be able to). This includes the behavior of all physical systems, including the human brain. And if choices and decisions can be said to come from any physical object, they come from the human brain.
Now engineering ethics is all about making the right choices. But what if the idea of choice is false? If we only think we choose something when the reality is that we're just following a hugely complex but possibly predictable program, what does it mean to make the right choice, or indeed any choice at all? According to some of the scientists Overbye talked with, not much.
The view that all our supposed choices are really determined by prior causes rather than by free choice is called determinism. Daniel Dennett, a philosopher of science, thinks free will and determinism are compatible, even mutually dependent. According to Dennett, strict causality ". . . makes us moral agents. You don't need a miracle to have responsibility." On the other hand, medical researcher Mark Hallett limits the idea of free will to the perception of it, not the absolute fact. "People experience free will," he says. "They have the sense that they are free. The more you scrutinize it, the more you realize you don't have it."
Dr. Hallett spends his days pondering the inner workings of the brain, and understandably tends to view it as a complex system that may one day yield all of its secrets to science, which is to say, to other brains. Overbye is diligent enough to note that while a system may be deterministic, it nonetheless may not be predictable. Citing the mathematicians Kurt Gödel and Alan Turing, he points out that no moderately complex mathematical system can prove its own consistency, and that any such system will always contain statements you can neither prove nor disprove within the system. Philosopher and historian of science Stanley Jaki has used this fact to argue that the scientist's dream of a mathematically complete "final theory" that would predict everything—all physical constants, all deterministic activity down to the end of time—is only a dream. So it seems that science itself has told us that there are things we will never know about the world, in the objective, testable, scientific sense.
So does this mean that a truly consistent scientific engineer will disregard ethics as an illusion and act however he or she pleases? Here is where the engineer's famed pragmatism comes into play. Most engineers I know are eminently practical people, wanting to get the job done and impatient with what they regard as hairsplitting philosophical discussions about the ultimate meaning of this or that. Most engineers would immediately realize that disregarding right and wrong simply because some philosophers and scientists say choice is an illusion would be fatal both to their careers and quite possibly to the people served by their engineering. And death is a bad thing.
These common-sense notions do not come from science. In their more sober moments, most scientists—and many philosophers—will admit that science cannot pass judgment on questions of value. The stated goal of science is knowledge, not guidance or moral instruction. But to allow a scientific conclusion about the source of free will to abolish one's ethics would be to allow science to dictate morality, or rather, the lack thereof.
Conspicuously absent from Overbye's list of interviewees was anyone who spoke for the religious viewpoint, which takes free will and the reality of moral agency seriously. While there are philosophical issues that arise from the question of how God can allow free will in a universe of which he has perfect foreknowledge, at least that picture makes sense morally. The issue that Overbye sidles up to, but never quite broaches, is the one that Dostoevsky made plain when he wrote in Notes from the Underground, "For what is man without desires, without free will, and without the power of choice but a stop in an organ pipe?" In other words, a passive piece of machinery whose sound and fury signifies nothing. All the shilly-shallying of the philosophers who say, in effect, "Well, we don't really have it, but we think or feel that we do, and so it doesn't make much difference," simply evades the logical conclusions of their positions, conclusions which many of them are afraid to espouse openly.
Engineering is not philosophy, and most engineers are not trained philosophers. But every engineer who thinks about the reasons for professional actions must sooner or later ask, "What do I think the right thing is?" and "Can I really choose freely?" Many engineers, including yours truly, have a religious answer to these questions. And we are not bound by the dicta of scientists or philosophers to decide otherwise—especially if we couldn't decide!
Sources: The New York Times article "Free Will: Now You Have It, Now You Don't" appeared in the Jan. 2, 2007 online edition at http://www.nytimes.com/2007/01/02/science/02free.html?pagewanted=1&8dpc. The Dostoevsky quotation is from About.com's section on classic literature by Esther Lombardi at http://classiclit.about.com/od/dostoyevskyf/a/aa_fdostquote.htm.
Thursday, December 28, 2006
Electric Power: Was It Broke? Did We Fix It?
Like any other profession, engineering has its particular proverbs and sayings. One of my favorites is, "If it ain't broke, don't fix it." As with most proverbs, this one captures only part of the whole picture of a complex situation. But when I look at the potential and actual problems we have these days with the U. S. electric power system, I wish more people in authority had paid attention to that particular proverb.
Electricity is an unusual commodity in that it must be produced exactly as fast as it is sold. If a million people suddenly turn on their lights all at once, somebody somewhere has to supply that much more electricity within milliseconds, or else there is big trouble for everybody on the power distribution network. For lights to come on reliably and stay on all across the country, the systems of generating plants, transmission lines, distribution lines, and monitoring and control equipment have to work in a smooth, coordinated way. And somebody has to pay for it all.
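For a feel of how unforgiving that balancing act is, here is a toy calculation in Python. It uses the standard first-order swing-equation approximation from power engineering; the inertia constant and the size of the load step are assumed numbers, chosen only to show the shape of the problem, not to model any real grid.

```python
# Toy model of the balancing act: when load steps up and generation doesn't
# follow, the deficit is drawn from the generators' rotating inertia and
# grid frequency sags. H and the step size are illustrative assumptions.

F0 = 60.0  # nominal U. S. grid frequency, Hz
H = 5.0    # aggregate inertia constant, seconds (assumed)

def frequency_after(step_pu: float, seconds: float) -> float:
    """Frequency after an uncorrected load step (per-unit of system capacity).

    First-order swing-equation approximation: df/dt = -F0 * step / (2 * H).
    """
    return F0 - F0 * step_pu / (2 * H) * seconds

# Suppose a million lights switching on amounts to a 1% load step:
for t in (0.1, 0.5, 1.0, 2.0):
    print(f"after {t:4.1f} s: {frequency_after(0.01, t):.3f} Hz")
# Protective relays begin shedding load after only a fraction of a hertz of
# sag, which is why matching supply to demand is a millisecond business.
```

Under these assumed numbers, a modest 1% mismatch drags the whole interconnection noticeably off 60 Hz within a second or two if nothing responds, which is why the response has to be automatic.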
From an economic point of view, approaches to electric utility management and financing lie somewhere between two extremes. At one extreme is completely centralized control, billing, and coordination, performed in some countries by the national government. France is an example of this approach. Large, complex electric systems are a natural fit for large, complex government bureaucracies, and in the hands of competent, dedicated civil servants, government-owned and -operated utilities can be a model of efficiency and advanced technology. Government control and ownership can also provide the stability needed for long-term research and development. This is one reason that France leads the world in the development of safe, reliable nuclear power, which provides most of the electricity in that country.
The other extreme can be found in third-world countries where there is little or no effective government regulation of utilities, whether through incompetence, war, or other causes. In this type of situation, private enterprise rushes in to fill the gap and you get private "utilities"—often nothing more than a few guys with a generator and some wire—selling electricity for whatever the market will bear, in an uncoordinated and inefficient way. The result is a spotty market in which the availability and reliability of electricity depend on where you live, and typically large portions of the market (in rural or dangerous areas) are not served at all.
In the U. S., we have historically swung from near one extreme to the other. Electric utilities began in the late 1800s and early 1900s as independent companies. But the technical economies of scale quickly became apparent, and the Great Depression brought on tremendous consolidation of companies into a few large firms, which were then taken under the regulatory wing of federal and state governments. What we had then was a kind of benevolent dictatorship of the industry by government, in which private investors ceded much control to the various regulatory commissions, but received in turn a reliable but relatively small return on their investment.
This state of affairs prevailed through the 1970s, whereupon various political forces began a move toward deregulation. The record of deregulation is spotty at best, probably because it represents an attempt to have our regulatory cake and eat it too. No one wants the electricity market here to devolve to the haphazard free-for-all that it is in places like Iraq, or even India, where electricity theft is as common as beggary. So rightly, some regulations must be left in place in order to protect the interests of those who cannot protect themselves, which in the case of electric utilities means most of us.
The most noteworthy recent disasters having to do with deregulation were the disruptions and price explosions in California a few years ago, caused in large part by Enron and other trading companies that manipulated the market during hot summers of high demand. Even if the loopholes allowing such abuses are closed and inadequate generating capacity is addressed with more power plants, however, many problems remain. A recent New York Times article points out that because the existing rules give power companies disincentives to spend money on transmission and distribution equipment (power lines), certain parts of the country have to pay exorbitant rates in the form of "congestion charges."
The basic problem is that there are not enough lines to carry cheap power from where it is generated to where it is needed. Somebody would have to pay to build them, and somebody else would have to approve the construction. In these days of "not in my back yard" attitudes, it is increasingly hard to construct new power lines anywhere, even in rural areas. The net result is that as time goes on and demand for power increases, more and more areas may find themselves starved for power and paying rates as high as twice the prevailing rate of surrounding regions.
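A congestion charge is easier to see in a toy example than in a rate filing. The sketch below invents everything: a two-node system, a cheap remote plant, an expensive plant inside the city, and a fixed limit on the tie line between them. It then shows how the city's marginal price jumps the moment the line is full.

```python
# Two-node toy of transmission congestion: plenty of cheap power exists,
# but the tie line into the city is too small, so beyond the line's limit
# the expensive local plant sets the price. All numbers are invented.

CHEAP = 30.0        # $/MWh at the remote, low-cost plant (hypothetical)
EXPENSIVE = 90.0    # $/MWh at the local, high-cost plant (hypothetical)
LINE_LIMIT = 400.0  # MW the transmission line can carry (hypothetical)

def city_marginal_price(demand_mw: float) -> float:
    """Price of the last megawatt delivered to the constrained city."""
    return CHEAP if demand_mw <= LINE_LIMIT else EXPENSIVE

for demand in (300, 400, 500):
    print(f"demand {demand} MW -> ${city_marginal_price(demand):.0f}/MWh")
# Once demand exceeds the line's capacity, every additional megawatt-hour
# costs the congested region a multiple of what its neighbors pay.
```

In this made-up system the price triples once demand passes the line limit; real markets smooth the jump with locational pricing, but the underlying logic is the same: the wire, not the generator, sets the bill.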
My personal bias is that we have gone way too far in attempts to privatize the electric utility industry. It is a business which technologically fits better with a centralized authority and center of coordination. But in today's political climate, the chances of going back to a more centralized way of doing things are small. It looks like the best we can do is to continue to tinker with what regulations remain, fixing problems where pernicious disincentives appear, and keeping an eye out for Grandma and her electric heater that she needs to get through the winter. But in my opinion, the whole thing wasn't broke to begin with, and the fix of deregulation didn't need to be applied the way it was.
Sources: The New York Times article on congestion charges appeared in the Dec. 13, 2006 online edition at http://www.nytimes.com/2006/12/13/business/13power.html?hp&ex=1166072400&en=dcfbff42cc8f19d4&ei=5094&partner=homepage.
Tuesday, December 19, 2006
America's Chernobyl Waiting to Happen
"Dallas, Texas, Mar. 30, 2005 (AP) --- An apparent nuclear explosion in Amarillo, Texas has cut off all communications with the West Texas city and regions in a fifteen-mile radius around the blast. Eyewitness accounts by airline pilots in the vicinity report an 'incredible flash' followed by a mushroom cloud reaching at least 35,000 feet. Speculation on the source of the explosion has centered on Amarillo's Pantex plant, the nation's only facility for construction and disassembly of nuclear weapons."
In case you think you missed something a year ago last March, the news item above is fiction. But according to some sources, it is plausible. It could have happened. And there is reason to believe that unless some serious housecleaning takes place in Amarillo, the chances that something like this might happen in the future are higher than any of us would like.
The end of the Cold War brought hopes that instead of piling up megaton after megaton of mutually assured destructive power in the shape of thermonuclear weapons, the U. S. and the Soviet Union (or what was left of it) would begin to disassemble their nuclear stockpiles to make the world a safer place. Over the past fifteen years, international agreements have been reached to do exactly that. From a peak of over 30,000 nuclear warheads in 1965, the U. S. stockpile has declined to just a little over 10,000 as of 2002. And here is where the engineering issues come in, because for every downtick of that number, somebody somewhere has to disassemble a nuclear warhead.
A nuclear bomb or missile is not something that you just throw on the surplus market to dispose of. First it has to be rendered incapable of exploding. Then the plutonium and other dangerous materials have to be removed in a way that is safe both for the technicians doing the work and for the surrounding countryside and population. As you might imagine, these operations are difficult and dangerous, and they require secret, specialized knowledge. For more than thirty years, the only facility in the U. S. where nuclear weapons have been made or disassembled has been the Pantex plant outside Amarillo, Texas. It is currently operated by a consortium of private contractors including BWXT, Honeywell, and Bechtel, and works exclusively for the federal government, specifically the Department of Energy. If you want a nuclear weapon taken apart, you go to Pantex, period. And therein lies a potential problem.
Where I teach engineering, the job of nuclear weapon disassembler is not one that comes up a lot when students tell me what they'd like to be when they graduate. I imagine that it is hard to recruit and retain people who are both willing and qualified to do such work. And at the same time, it is not the kind of growth industry that attracts a lot of investment. So it is plausible to me that as the demand for disassembly increases, the corporate bosses in charge of the operation might tend to skimp on things like maintenance, safety training and practice, and the hiring of additional staff. That is the picture which emerges from an anonymous letter made public recently by the Project on Government Oversight, a government watchdog group.
Anonymous letters can contain exaggerations, but what is not in dispute is the fact that on three occasions beginning Mar. 30, 2005, someone at Pantex tried to disassemble a nuclear weapon in a way that set off all kinds of alarms in the minds of experts who know the details. I'm speculating at this point, but as I read between the lines and use my knowledge of 1965-era technology, something like this may have happened.
A nuclear weapon built in 1965 probably contained no computers, relatively few transistors, and a good many vacuum tubes. Any safety interlocks to prevent accidental detonation were probably mechanical as well as electronic, consisting of switches, relays, and possibly some rudimentary transistor circuits. But somewhere inside the long cylindrical structure lies a terminal which, if contacted by a grounded piece of metal, could conceivably set the whole thing off, taking Amarillo and the surrounding area with it.
A piece of equipment that has been sitting around since 1965 in a cold, drafty missile silo is probably a little corroded here and there. Screws and plugs that used to come apart easily are now stubborn or even frozen in place. The technician in charge of beginning disassembly of this baby probably tried all the standard approaches to unscrewing a vital part in order to disable it, without success. At that point, desperation overcame judgment. The official news release from the National Nuclear Security Administration puts it in bureaucratese thus: "This includes the failures to adhere to limits in the force applied to the weapon assembly and a Technical Safety Requirement violation associated with the use of a tool that was explicitly forbidden from use as stated in a Justification for Continued Operation." Maybe he whammed at it with a big hammer. Maybe he tried drilling out a stuck bolt with an electric drill. We may never know. But what we do know is this: the reason for all these Technical Safety Requirements is that if you violate them, you edge closer to setting off an explosion of some kind.
Not every explosion that could happen at Pantex would be The Big One with the mushroom cloud and a megaton of energy. The way nuclear weapons work is by using cleverly designed pieces of conventional high explosive to create configurations that favor the initiation of the nuclear chain reactions that produce the big boom. A lot of things have to go right (or wrong, depending on your point of view) in order for a full-scale nuclear explosion to happen. Kim Jong Il of North Korea found this out not too long ago when his nuclear test fizzled rather than boomed. But even if nothing nuclear happens when the conventional explosives go off, you've got a fine mess on your hands: probably a few people killed, expensive secret equipment destroyed, and worst from an environmental viewpoint, plutonium or other hazardous nuclear material spread all over the place, including the atmosphere.
This general sort of thing is what happened at Chernobyl, Ukraine, in 1986, when some technicians experimenting late at night with a badly designed nuclear power plant managed to blow it up. The bald-faced coverup that the USSR tried to mount in the disaster's aftermath may even have contributed to that country's ultimate downfall. So even if the worst-case scenario of a nuclear explosion never happens at Pantex, a "small" explosion of conventional explosives could cause a release of nuclear material that could harm thousands or millions of people downwind. Where I happen to live, incidentally.
I hope the concerns pointed out by the Pantex employees who apparently wrote the anonymous letter are exaggerated. I hope that the statement from Pantex's official website that "[t]here is no credible scenario at Pantex in which an accident can result in a nuclear detonation" is true. But incredible things do happen from time to time. Let's just hope they don't happen at Pantex any time soon.
Sources: The Project on Government Oversight webpage citing the Pantex employees' anonymous letter is at http://www.pogo.org/p/homeland/hl-061201-bodman.html. The official Pantex website statement about a nuclear explosion not being a credible scenario is at http://www.pantex.com/currentnews/factSheets.html. Statistics on the U. S. nuclear weapons stockpile are from Wikipedia's article on "United States and weapons of mass destruction."
Tuesday, December 12, 2006
Hacker Psych 101
Well, it's happened again. The Los Angeles Times reports that for more than a year prior to Nov. 21, 2006, somebody was siphoning personal information such as Social Security numbers from a UCLA database covering more than 800,000 students and faculty. Eventually, the system administrators noticed some unusual activity and shut the intrusion down, but by the time they closed the door, a great many horses had escaped the barn.
This is one of the biggest recent breaches of data security at a university, but it is by no means the only one. The same article reports that 29 security breaches at other universities during the first six months of this year affected about 845,000 people.
Why is hacking so common? This is a profound question that goes to the heart of the nature of evil. It's good to start with the principle that, no matter how twisted, perverse, or just plain stupid a wrong action looks to observers, the person doing it sees something good about it.
For example, it's not a big mystery why people rob banks. In the words famously attributed to 1930s bank robber Willie Sutton: "Because that's where the money is." To a bank robber, simply going in and taking money by force is a way to obtain what he views as good, namely, money.
There are hackers whose motivation is essentially no different from Willie Sutton's. Identity theft turns out to be one of the easiest types of crime for them to commit, and so they turn to hacking, not because they especially enjoy it, but because it leads to a result they want: data they can use to masquerade as somebody else in order to obtain money and goods by fraud. This motivation, although deplorable, is understandable, and fits into our historical understanding of the criminal mind, such as it is. As technology has advanced, so have the technical abilities of criminals. At this point it isn't clear whether money was the motive behind the UCLA breach. Because the breach went on so long without notable evidence of identity theft, it's possible that this was a hack for the heck of it.
Many, if not most, hacks fall into this second category. For insight into why people do these things when they're not making money or profiting in some other way, the observations of Sarah Gordon, a senior research fellow at Symantec Security Response, shed some light on the matter.
Gordon's specialty is the ethics and psychology of hacking. In her job at Symantec, she has encountered just about every kind of hack and hacker there is. In an interview published in 2003, she says that the reason many people feel little or no guilt (at least not enough for them to stop) when they write viruses and do hacks is that they don't consider computers to be part of the real world. Speaking about school-age children learning to use computers for the first time, she said, "They don't have the same morality in the virtual world as they have in the real world because they don't think computers are part of the real world."
Gordon says that parents and teachers should share part of the blame. When a child steals someone's password and uses it, for example, a teacher could ask, "Would you steal Johnny's house key and use it to poke around in his bedroom?" Presumably not. But the analogy may be a difficult one for children to make—and for many adults, for that matter.
Gordon thinks it may take a generation or two for our culture's prevailing morality to catch up with the hyper-speed advances in computer technology. She sees some progress in the U. S., noting a new reluctance to post viruses online, whereas a few years ago no one thought there was anything wrong with the practice. Still, she thinks that hacking and virus-writing are acts of rebellion that remain popular in countries where young people are experiencing computers and networks for the first time, and rebellion is just part of human nature. A boy who grew up in a thatched hut with no running water, then moves to a city and finds that he can disrupt the operations of thousands of computers halfway across the world with a few keystrokes, receives a power buzz that he can get nowhere else in his life.
It seems to me that the anonymity provided by the technical nature of computer networks also contributes to the problem. Some say that a test of true morality is to ask yourself whether you would do a bad thing if you were sure you'd never get caught. The nature of computer networks ensures that very few hackers and virus writers do get caught, at least not without a lot of trouble. And it looks like lots of people fail that kind of test.
Well, I'm a teacher, so if there are any students reading this, I'm here to tell you that just because you can hide behind a computer screen, you shouldn't abandon the Golden Rule. But it may take a few years for that message to soak in. At the same time, I would generalize Sarah Gordon's notion that rebellion is part of human nature: evil and sin are part of human nature too. I think this is a feature of humanity that many computer scientists neglected to take into account back when they were laying the foundations of systems and protocols now so pervasive that they would cost billions of dollars to change. Eventually things will get better, but it may take a generation or more before most people view password theft and bicycle theft as the same kind of thing.
Sources: The Dec. 12 L.A. Times story on the UCLA security breach is at http://www.latimes.com/news/local/la-me-ucla12dec12,0,7111141.story?coll=la-home-headlines. The interview with Sarah Gordon is at http://news.com.com/2008-1082-829812.html.