One of the first things my father would often say to me at the end of the day was this: "And what did you do to make the world a better place today?" He'd ask it in a half-joking way, and I generally didn't have a good answer. But it was a good question nonetheless.
Suppose you're a young engineering student about to graduate. You're filled with idealism and a desire to make the world a better place through engineering. Unlike medicine, counseling, and the ministry, engineering is not generally thought of as a helping profession. But it can be, in at least two ways: one pretty obvious, and one not so obvious.
The obvious way is to devote yourself to doing engineering for the billions of people on this planet who lack what the rest of us consider basic necessities: enough food to eat, enough clean water, decent sanitary facilities and medical care, and a way to earn a living that keeps you from starving to death or having to beg. The Cooper-Hewitt National Design Museum in New York City has mounted an exhibit on display through September 23 called "Design for the Other 90%," which focuses on low-cost engineered solutions to the problems that 90% of the world's population of 6.5 billion people face. Those of us in advanced industrialized countries live in protected bubbles compared to a person who has to spend hours every day lugging buckets of water from a dirty well a half mile away, gathering firewood to cook government-provided rice, and hoping not to come down with the latest plague making the rounds of the village. But far more people live like that than like most of those who are reading this blog. A New York Times article describing the exhibit carried a photo of one of the cleverest inventions: a water carrier shaped like a wide tire that even a child can tow with a rope, enabling him or her to carry five times the amount of water that a bucket would hold.
As a sometime inventor myself, I know that the world does not lack for ideas. The reason that more of those 90% don't benefit from many of these inventions is not that nobody has thought of them yet. The real problem is more in the realm of economics and politics. What investor with a few million dollars to spend is going to start a company to make products for people with almost no money? The exhibit's website carries a statement about half the world subsisting on less than two dollars a day. Speaking in terms of market segments, that is not the segment that most investors will think of first.
Hence, the altruism in today's title. If those who need these things are going to get them, many things have to change. Yes, the products that would help them in their existing ways of life need to be invented and reach the intended users. But the users have to change too: harmful and even self-destructive attitudes and habits are not unknown among the poor as well as the rich. The hardest task of all, much harder than simply designing a clever product that looks like it might help somebody poor, is understanding enough about the people and their culture to know what would enable them to benefit from the product, and working with them to make those changes. The old saw about "If you give a man a fish, you feed him for a day; if you teach him to fish, you feed him for a lifetime" embodies a profound truth: changing a person's physical circumstance without changing the person for the better can help only for a moment. But if this type of humanitarian engineering is done with full recognition of the cultural roadblocks that so often turn a technical success into a social failure, it can truly change the world.
There are several organizations that help engineers in these endeavors, notably an outfit called Engineers Without Borders. If you are either a student or professional engineer, you can locate a chapter near you and find out how to get involved.
That's one way to be an altruistic engineer. The other way is one I don't recommend unless you've already met the first prerequisite: getting filthy rich in engineering or invention. Turns out that the Cooper-Hewitt exhibit is funded by the Lemelson Foundation, the brainchild (one of many) of the late Jerome Lemelson. Lemelson figured out a way to make tons of money while being an independent inventor. There are two schools of thought concerning the merits of his approach.
One school goes like this: Lemelson just happened to be an extremely clever guy whose patents for toys, industrial robots, and other useful devices brought him millions of dollars, whereupon he founded the Lemelson Foundation to promote the benefits of invention and ingenious design, and died in 1997, end of story. The other school, for which I have some limited evidence, is that at some point in his career Lemelson decided to specialize in what are known as "submarine patents." According to this version, Lemelson filed scads of patents in hot new fields on all kinds of ideas he had never tried in practice, but hoped would some day pan out and become commercialized. When a well-heeled company came out with a product that could be construed to infringe one of his broadly written patents, he would show up on its doorstep, patent in hand, and threaten to sue. Fearful of extended litigation, many companies simply settled out of court, but even court battles can turn out in an independent inventor's favor.
Probably the truth about Lemelson lies somewhere in between. However he made his money, toward the end of his life he decided to use it to benefit humanity by encouraging invention and design. And to his credit, as far as I can tell the Lemelson Foundation has done exactly that, sponsoring annual invention competitions and exhibits about invention at the Smithsonian Institution and the Cooper-Hewitt National Design Museum, and funding other worthwhile endeavors.
And this is the second way you as an engineer or inventor can be altruistic. If you go into an engineering-related business, you can make all the money you can. And once you make your millions, you can devote them to a good cause. The danger in this path is that once you have all that money, it can be really hard to turn loose of it. Of the world's millionaires, only a few emulate the 19th-century steel magnate Andrew Carnegie, who once stated publicly his intention to leave the world as poor as he came into it. And even he didn't quite succeed. In his effort to die poor, he built well over a thousand libraries throughout the U. S., and if you happen to get to Manhattan to tour the 64-room mansion that houses the Cooper-Hewitt National Design Museum, you can thank Mr. Carnegie for it, because it was once his house.
Sources: The New York Times article on the Cooper-Hewitt National Design Museum is at http://www.nytimes.com/2007/05/29/science/29cheap.html?_r=1&oref=slogin. The exhibit website is at http://www.peoplesdesignaward.org/design_for_the_other_90/. The website for those in the U. S. interested in Engineers Without Borders is at http://www.ewb-usa.org/.
Tuesday, May 22, 2007
Designer Baby or Sensible Precaution?
My wife edits a section of a commercial website devoted to medical information about breast cancer. She is more than casually interested in the subject, since she just celebrated her five-year anniversary of being free from the disease after undergoing a mastectomy and chemotherapy in 2002. My mother died of the same malady in 1980, so it is safe to say I'm as familiar with it as anybody can be who hasn't had it personally.
Two families in Great Britain have also had more than their share of experiences with breast cancer, having lost ancestors to the disease over three generations. So they decided to do something about it. Both couples found a physician named Serhal who has developed a way to test a fertilized embryo at the eight-cell stage for a defective BRCA1 gene, which, if present, increases the risk of eventually developing breast cancer to about a fifty-fifty chance. If Dr. Serhal receives governmental approval for his plan, and it looks like he will, the couples want to proceed with in-vitro fertilization using only embryos which do not have the defective BRCA1 gene. The embryos with the defective genes will be disposed of. In this way, the couples can "annihilate the gene from the family tree," as Dr. Serhal puts it.
Where is engineering in this situation? Everywhere: in the instruments and equipment Dr. Serhal uses to do the tests, in the procedures for in-vitro fertilization (IVF), and, most importantly, in the selection of embryos. In applying the sciences of genetics and embryology to a commercial end (it is unlikely that Dr. Serhal is working for free), he is doing engineering, broadly defined. And the subject being engineered is a human being, or rather, several human beings, many of whom do not survive the process. Remember, harboring a defective BRCA1 gene does not guarantee you'll have breast cancer; it just increases the risk. Many people with that gene live long lives and die of something else altogether. So we can be pretty sure that some of the embryos that get thrown away would have developed, if implanted, into healthy human beings living normal lives, whatever that means these days.
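A back-of-the-envelope sketch makes that last point vivid. The roughly fifty-percent figure is the carrier risk cited above; the number of discarded embryos is purely hypothetical.

```python
# Rough arithmetic behind the claim above. The ~50% lifetime risk for
# carriers of a defective BRCA1 gene is the figure cited in this post;
# the count of discarded embryos is a made-up illustration.
carrier_risk = 0.5   # lifetime breast-cancer risk with the defective gene
discarded = 10       # hypothetical number of carrier embryos disposed of

never_ill = discarded * (1 - carrier_risk)
print(f"Of {discarded} discarded carrier embryos, roughly {never_ill:.0f}")
print("would never have developed breast cancer at all.")
```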
Now I'm going to go off in a direction that you may not follow, but I have come to believe it is the most direct way to express what I see to be the basic problem here. A few hundred years ago, back before much was known about embryology, the development of a baby in the womb was mostly a matter of speculation. People talked metaphorically about clay gradually being molded, and for all they knew, there was some amorphous protoplasm to begin with which only gradually became the individual who made his or her first public appearance nine months after conception. But now, with everything we know about DNA, genetics, and the fabulously intricate machinery that comes together to produce a genetically distinct individual after the process of conception is finished (which can take just a few minutes), the empirical scientific evidence supports the idea of humans as substantial beings more strongly than ever.
Substantial say what? "Substantial beings." I'm using the word "substance" in a technical philosophical sense that goes back ultimately to Aristotle. To explain it in detail would take far more room than I have, but briefly, a substantial being is one which has a wholeness or completeness or integrity. A substantial being is more than the sum of its parts. For example, you can look at a dog in a number of ways: an assembly of atoms, a combination of bones, muscles, internal organs, hair, teeth, etc., even a set of behaviors that can be predicted (more or less, depending on how well you trained your dog). But when you say, "Heel, Fido!" you don't mean, "Heel, you assembly of atoms that just happens to be moving in front of me on the sidewalk." You mean a single being—your dog—continuous in time and localized in space, a real entity that has life (another philosophical term) and will some day die.
This concept of people as substantial beings is not popular these days. Few of us think of ourselves as substantial beings in fact, never mind the terminology. We think of ourselves as just collections of needs, or inclinations, or desires, or bits of knowledge and skills. Nevertheless, substantial beings are what we are—we've just forgotten the name for it.
What has this got to do with the case of the selected defective-BRCA1-free embryos in Britain? An embryo is what the substantial being called human looks like when it's a few days old. You, I, every human on the planet was once an embryo. And one day mortality will catch up with us and we'll die of something. No exceptions so far. The couples who are trying to eliminate the defective gene from their family tree are probably acting from a mixture of generous motives and fears. The generous motive is to give birth to a baby that won't have an increased risk of dying of breast cancer. The fear is of seeing their child die of the same disease that killed so many other relatives. So they decided to "eliminate" the children who might die of it and bear only those who probably—but not certainly—won't.
There is an old and unpopular name for this sort of thing: eugenics. In the first half of the twentieth century, followers of Francis Galton (Charles Darwin's cousin, both biologically and intellectually) promoted the idea that we should take steps to improve the human gene pool, both individually (by marrying into "good stock," for example) and collectively (by allowing governments to sterilize those "unfit" to bear children). There are boodles of problems with these ideas, but that did not stop them from spreading in both the U. S. and Europe, and in particular Nazi Germany, where Hitler took aggressive measures to eliminate "undesirables" such as mental defectives, homosexuals, the Romani (gypsies), and most famously, the Jews.
Hitler, more than anyone else, gave eugenics a bad name, although it took until 1969 for the journal Eugenics Quarterly to rename itself Social Biology. But the desire is still there, and since 1950 the tremendous advances in genetics and molecular biology have put powerful technology at the disposal of those who would use it for the same kinds of purposes that the old eugenicists had.
The British couples are not doing anything like advocating the genocide of a race. But, enabled by Dr. Serhal, they are doing the same kind of thing as Hitler did, only on a much smaller scale. On a personal level, there is nothing intrinsically wrong with desiring to produce offspring who are healthy, happy, intelligent, and possessed of other good qualities. But the end does not always justify the means. Now that you're prepped on vocabulary, I can make my point: destruction of human substantial beings is a wrong means of achieving this goal.
Sources: The article describing Dr. Serhal and his plans originally ran in The Times of London, and can be found at http://www.theaustralian.news.com.au/story/0,20867,21624095-30417,00.html. Wikipedia's article on eugenics has an abundance of historical and current information in its fairly balanced treatment.
Tuesday, May 15, 2007
Pop-Up Porn: The Trial of Julie Amero
Three days from now, on May 18, former substitute teacher Julie Amero is scheduled to be sentenced in a Connecticut court for allowing seventh-graders to see pornographic websites. That is, unless the sentencing is delayed again, which has happened since her conviction in January. Delays in sentencing sometimes mean that the prosecution is no longer as sure of its case as it once was. There are good reasons that the prosecutors in the Amero case could be reconsidering, but first let's try to get some of the basic facts straight.
Everybody agrees that on October 19, 2004, Amero was substitute-teaching a seventh-grade class at Kelly Middle School in Norwich, Connecticut. Everybody also agrees that at some point, pornographic images began to appear on a computer screen that students were using. At the trial, police detective Mark Lounsbury testified that his software (aptly named ComputerCOP) determined that such sites were accessed during the time in question. After this was explained to the jury, they convicted Amero on four counts of injuring the morals of a child.
What the jury was not allowed to hear, but what computer expert W. H. Horner determined, was that these pornographic images came from "pop-ups." As anybody who has spent more than five minutes on the Internet can attest, pop-ups are annoying, pesky little things, but they don't usually pose a threat to one's job. In this case, Amero realized they could, and everyone also agrees that she tried to keep students from viewing the images. (The disagreement is over how vigorously and effectively she tried.) According to Amero, she had been told not to turn off the computer: she had no password, and since the person who had logged her on in the morning wasn't present, she could not have restarted the machine once she shut it off.
Horner also found that the computer in question had outdated anti-virus protection software, no Internet filter, and no anti-spyware software. Since these kinds of protection seem to be the school district's responsibility, Horner's evidence in this regard shifts at least some of the blame off Amero's shoulders.
But how much really belongs there in the first place?
For twelve-year-olds, the Internet has been as much a fact of life as television was to those born in the U. S. after 1950, say. This was brought home to me recently not by the Amero case, but by reading Forbidden Fruit: Sex & Religion in the Lives of American Teenagers, by Mark Regnerus, a sociology professor at the University of Texas at Austin. Among his many fascinating findings, Regnerus makes the point that debates about the content of traditional public-school sex education classes are fast becoming pointless: ". . . oral sex or anal sex or gay or lesbian sex are quickly becoming utterly irrelevant, since a few clicks on a mouse will bring any of us to a demonstration of exactly how each is performed and 'experienced.'" Internet porn is ubiquitous and easily accessible, despite all that parents and teachers can do, and chances are that most of the students in Amero's class had seen worse things elsewhere than they saw on that fateful October day.
There are two issues that must be distinguished in this case. One is the technically informed question of whether the physical evidence supports the contention that Amero voluntarily visited the websites in question, heedless of the fact that students were also seeing them. My judgment is that, if Horner's testimony is to be relied on, Amero was caught between the rock of letting pop-ups proliferate like flies on a dead horse and the hard place of turning off the computer and losing whatever utility it had (nobody seems to talk about what the machine was legitimately being used for at the time, except to say that students were looking at a hairstyling website, which doesn't sound like academic activity). And maybe she didn't realize how serious the matter was, although she evidently made some attempt to deflect students' attention away from the machine. But now she's facing the possibility of a forty-year jail sentence.
Which brings us to the second issue: the hypocrisy factor. Now don't get me wrong: porn is bad. While it may be true that, as G. K. Chesterton allegedly said, the young man knocking on the door of a whorehouse is really looking for God, that doesn't mean it's a good thing to go there. We have made the choice as a culture both to receive the manifold good things that the Internet brings, and to allow at the same time the huge Internet porn industry to profit from the millions of small evils committed by everyone who looks at their wares. To single out one person in one particular circumstance and lock her away for most of her natural life because she did not stop a student from what he or she could do outside the classroom any day of the week strikes me as cowardly, hypocritical, and pretty dumb, too.
The trial of Julie Amero reminds me of another trial held a long time ago, by a similar bunch of concerned citizens who had posted spies, not in a school computer, but near a place where a woman met her adulterous lover. The spies caught the two in the very act. The woman's lover they allowed to go free; but they hauled the guilty woman before another person they hoped to get in trouble, a troublesome preacher who had been challenging the concerned citizens' unquestioned authority to say what was right and what was wrong.
The preacher's name was Jesus. The concerned citizens were the scribes and Pharisees of Jerusalem around 30 A. D., all ready to take the woman out and stone her to death, as their law required.
All Jesus did was to write in the dust of the street (the words were not preserved for us), and then say, "He that is without sin among you, let him first cast a stone at her." And St. John records that one after another, beginning with the oldest, her persecutors quietly slipped away, until there was nobody left except the woman and Jesus. He told her to go and sin no more.
I think Julie Amero has learned her lesson about computers, about pornography, about students who see pornography, and a whole lot about the creaky, hypocritical system of law in Connecticut. Her lawyers (who do not work for free) have promised to appeal, but that will take time and money. If you think justice has not been served in the Amero case, you can inform yourself further and then contribute to her defense at the website listed below. And if you do think justice has been served in this matter in all respects, then I just hope you never get the chance to judge me!
Sources: For the time being, Wikipedia has an entry for Julie Amero at http://en.wikipedia.org/wiki/Julie_Amero. It lists many news reports and other information, along with her personal website http://julieamer.blogspot.com/index.html, where contributions can be made. I thank Peter Ingerman for drawing my attention to this case, which is well summarized in a USA Today article at http://www.usatoday.com/tech/columnist/andrewkantor/2007-02-22-julie-amaro_x.htm.
Forbidden Fruit has just been published by Oxford University Press.
Tuesday, May 08, 2007
Death in Space: NASA Ponders Eternal Questions
Sometimes the Freedom of Information Act helps you turn up stuff that you'd almost rather not know. Mike Schneider of the Associated Press recently wrote a story about a NASA memo he obtained that way. As one of the most open agencies of our government, NASA is presumably used to operating in a fishbowl, but I would imagine that even the most open-minded of NASA's bureaucrats cringed a little when this document was made public.
The subject was how to deal with certain undesirable eventualities that might take place on a long mission such as the three-year flight to Mars that NASA plans to undertake some day. In a crew of five to ten people, somebody's likely to become ill over a three-year period, maybe even fatally ill. And on an interplanetary flight (at least one not powered by Star Trek warp drives), you can't just turn around any old time and go back. The memo goes no further than to say that NASA needs a policy about what to do if a crew member becomes so ill that death is likely or certain, and for that matter, what to do with the body.
Another ethical conundrum the memo raises is whether a sick astronaut whose need for medical care is endangering the lives of the other astronauts should be guaranteed all the help he or she needs, or whether early "termination of benefits," so to speak, would be in the best interests of the mission.
I will give NASA credit: the memo doesn't try to answer all these questions; it just brings them up. Schneider found that NASA is working on these questions with the help of outside bioethicists, but I'm not sure that's the right approach. Here's why.
NASA is the quintessential engineering bureaucracy. Engineers and the engineering attitude pervade the institution. Engineers are used to working with inanimate objects that obey physical laws without exception. When the objects do fail in the purpose for which they were designed, the failure is always in accordance with those same physical laws, which is why scientific knowledge is so prized in the profession. If you just know enough about the physics, chemistry, dynamics, and so on, you should in principle be able to predict every possible outcome, or else design a system so that only a certain number of outcomes are possible in the first place and deal with each in turn. Once you find that answer, it will work every time the same conditions arise. You've solved the problem.
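For what it's worth, that habit of mind can be caricatured in a few lines of code. The failure modes below are invented for illustration; the point is the pattern of enumerating every permitted outcome and attaching a predetermined response to each.

```python
from enum import Enum, auto

class Outcome(Enum):
    # Hypothetical, exhaustively enumerated outcomes of some subsystem.
    NOMINAL = auto()
    OVERHEAT = auto()
    POWER_LOSS = auto()

def respond(outcome: Outcome) -> str:
    # Every outcome the design permits gets a predetermined response;
    # anything missing from this table is, by definition, a design error.
    responses = {
        Outcome.NOMINAL: "continue the mission",
        Outcome.OVERHEAT: "throttle back and shed heat",
        Outcome.POWER_LOSS: "switch to the backup bus",
    }
    return responses[outcome]

print(respond(Outcome.OVERHEAT))
```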
But engineering know-how can take you only so far. The issues that the Mars-mission document addresses are not technical ones. They plumb the depths of what it means to be human and why anyone would volunteer for a dangerous three-year hike in a cold merciless vacuum in the first place.
In my view, NASA may be spending too much time and money on outside experts and perhaps not paying enough attention to the astronauts themselves. Much has been made of "The Right Stuff" and what it took in the 1960s, and what it takes now, to be an astronaut. Most of the early U. S. astronauts were former military test pilots. That is no longer a necessary qualification, although it doesn't hurt. What it takes to be an astronaut now, it seems, is a Ph. D. in something technical, a sterling résumé, and the patience of Job to wade through an arduous application procedure, and to train endlessly while waiting in line for your turn in space, which you hope will come before you hit retirement age. Is this the type of person best suited for what many people regard as mankind's greatest remaining adventure? Maybe we should look a little further than we have up to now, and in a different way.
To the kind of person I'm thinking of, the advice of some bioethicist with a Ph. D. would be superfluous. True courage always knows what to do, whether it is to take a calculated risk for a great cause (which every astronaut who gets aboard a Space Shuttle already does) or to sacrifice one's life for a mission, which might well come about during a trip to Mars. Back before exploration became the business of bureaucracies, people had to be this way in order to attract support. Take the example of Admiral Richard E. Byrd, whose pioneering explorations of the Antarctic by land and air in the 1920s and '30s were financed virtually entirely by private contributions. Byrd is largely forgotten now, and recent historical discoveries concerning his claims to have flown over the North Pole in 1926 have cast doubt on their validity. But the style of the man (admittedly, reinforced by autobiographical books he published to finance his projects) was that of the courageous, risk-taking adventurer who gave technical preparation its place, true, but who then simply accepted whatever remaining risks there were as part of the job. Byrd was the closest thing the 1930s had to an astronaut: a man who went where no one had gone before, taking with him other brave souls who were willing to take chances with him.
No, Byrd took no women along, at least during his early expeditions. And yes, he nearly died of carbon monoxide poisoning during one stay in the Antarctic and had to be rescued. But those kinds of risks didn't stop him from going through with several more expeditions, the last one only a couple of years before he died in 1957.
In past blogs, I have said some negative things about NASA and the Space Shuttle program, mainly that the antique shuttles ought to be retired rather than trying to squeeze a few more increasingly hazardous flights out of them. But this is not to say that we ought to simply give up on space exploration because it's dangerous. If anything, that is an excellent reason to keep trying. Only, we need to pay more attention to the character of those whom we send into space, giving them much greater authority and responsibility than they currently hold in the bureaucratized system that is NASA. Columbus, Magellan, Byrd—they not only went on the voyages, they ran the whole show. Maybe the answer will come from the private sector once again, as entrepreneurs find safe and effective ways to make end runs around NASA's bureaucracy and do more with less. Of course, the government could always stop them. But the U. S. isn't the only country in the space game any more. I'd like the first man (or woman) on Mars to be a U. S. citizen, but it doesn't have to be that way. We can get there, but only if we try. And while machines can do wonderful things, running robot cars around Mars is no substitute for being there.
Sources: The article by Mike Schneider on NASA's plans for the Mars mission appeared in numerous venues, among them the Austin American-Statesman on May 6, 2007, at http://www.statesman.com/search/content/news/stories/nation/05/06/6deathinspace.html.
Tuesday, May 01, 2007
If I Could Redesign the Internet
If I could redesign the Internet, I'd fix it so I could find out who sent anything I receive: personal emails, spam, bomb threats, you name it.
If I could redesign the Internet, anybody who wanted to send thousands of emails at once, legally or otherwise, would have to pay up front first.
If I could redesign the Internet, my browser couldn't be taken over by some little ad for low-interest mortgages that suddenly balloons out and hides the thing I'm trying to read.
All right, so the last one is more along the lines of a pet peeve. But the first two are reasonable. If we could easily and reliably find out exactly who is sending spam and malware, it stands to reason that not nearly as many people would do so. And if bulk email had a cost structure similar to direct snail mail, spam wouldn't go away, but we'd get a lot less of it. So why don't we just fix these problems right away? The reason can be illustrated by a little story from my days as a radio engineer with a large mobile-radio firm, back in the 1970s.
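Before that story, a quick back-of-the-envelope look at the postage argument. Every figure below is an illustrative assumption, not data from any source.

```python
# Why postage-style pricing would squeeze spammers: all figures here
# are illustrative assumptions, not measurements.
emails = 1_000_000        # size of a hypothetical bulk mailing
postage = 0.02            # assumed per-message charge, in dollars
profit_per_sale = 20.00   # assumed profit from each rare response

cost = emails * postage
sales_to_break_even = cost / profit_per_sale
print(f"A {emails:,}-message blast would cost ${cost:,.0f} up front")
print(f"and need {sales_to_break_even:,.0f} sales just to break even.")
# Today the marginal cost is essentially zero, so even a one-in-a-million
# response rate turns a profit.
```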
At the time, I was on a team of fresh young engineers charged with designing a new mobile radio for police cars and fire trucks. One of the first things we did was to take a look at the connector between the radio and the antenna. The connector is like a bridge that carries the "traffic" of the radio waves. If the bridge is bumpy or full of holes, not much traffic is going to get across. Similarly, if the connector is of poor quality, you're going to have problems sending the radio waves back and forth to the antenna. The connector on the old radio design we were replacing was called a "PL-259," a type that dated back all the way to World War II, and we decided we were going to replace it with a newer design that presented a smoother path to the waves. Then we had our first progress meeting.
At the meeting, an old-time manager listened patiently as we presented our ideas for the new design, including our plans for the new connector. "Are you finished?" he asked. When we said yes, he replied, "You kids obviously haven't heard about the First Commandment of mobile radio design."
No, we guessed we hadn't. What was it?
"Thou shalt only use a PL-259. Neither shalt thou even think of using any other connector." He pointed out that thousands of police cars and fire trucks all over the world had antennas that connected with a PL-259, and there was no way he was going to let us change it. It was what engineers call a "legacy problem": there's too much hardware (or software) out there that a change would obsolete. Thus perished the notion of updating the connector, at least for that new design. Eventually, long after I left the company, I learned that they did manage to replace the PL-259, but probably only after a long internal battle and a lot of hand-holding for customers who had to replace antennas or use adapters.
This minor episode illustrates the major problem with changing certain features of the Internet. Take the problem of anonymity. Way down at the level of the basic protocols or rules followed by all the machinery that runs the Internet, there is simply no way to ensure that you can figure out who sent what. The reason for this is partly historical. In the Internet's early days, it was a research toy shared by a few large, sophisticated, and trustworthy computing centers. For several years, it probably never entered the mind of anyone involved that a user would deliberately misuse the system to conceal his or her identity. By the time the Internet was large enough to attract such people, it was too late to start over with a new set of protocols that contained built-in security. There are also a lot of problems and delays caused by the fact that people using the Internet move around a lot now, with laptops, PDAs, Internet-capable cell phones, and whatnot. The system was originally designed to deal with fixed mainframe computers that were as likely to move around as the Washington Monument, and the patches and fixes that have been added to deal with mobile users are inefficient and complicated.
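To see just how deep the anonymity problem runs, consider that in classic Internet email the "From" line is whatever text the sender cares to type; nothing in the underlying protocol verifies it. A minimal sketch (the addresses are invented, and no mail is actually sent):

```python
# The "From" header is an unverified claim, not an authenticated fact.
# Addresses below are invented; this builds a message without sending it.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "anybody@all.example"   # arbitrary text, never checked
msg["To"] = "you@example.com"
msg["Subject"] = "Who really sent this?"
msg.set_content("Classic SMTP simply takes the sender's word for it.")

print(msg)   # a perfectly well-formed message with an unverifiable origin
```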
More patches and fixes aren't the answer. For these basic legacy problems to be solved, it looks like we will have to wait for a new Internet altogether. The National Science Foundation is paying for research into how we'd like such a new system to look with its Future Internet Network Design program (FIND). But estimates for how much it would cost to scrap the existing system and install a new one range into the many billions of dollars.
Who's going to pay for it? Well, one way or another we already support the present system, through bills to our Internet service providers, tax dollars, and other ways. It will be interesting to see how far we can stretch the old protocols, but some day they'll start looking the way that PL-259 connector looked to us young engineers. Right now it's not just a crusty old manager stopping us; it's the expense of changing over. But as the Internet becomes a vital part of life-critical services such as medical telecommunications, we may have to start something like a two-tier system, rather like the HOV lanes on freeways: an expensive but super-reliable and super-secure network, and then the regular old system for everybody else, with maybe nodes here and there connecting the two.
I'm no computer scientist, so I'll let the experts figure out how to make the transition. But spamless email and freedom from malware seem like pretty attractive goals, even if it does cost a bundle. And if somebody does eventually figure out a way around the new safeguards, we might have a few years to enjoy the Internet as it was intended to be.
Sources: A series of articles by Anick Jesdanun on redesigning the Internet was carried by the Associated Press and reprinted in several newspapers, and carried online in part by the Hartford Courant at http://www.courant.com/business/hc-rebuildinginternet.artapr15,0,5625095.story?coll=hc-headlines-business on Apr. 15, 2007, and also in the Austin American-Statesman print edition of Apr. 23, 2007, pp. D1 and D4.
Tuesday, April 24, 2007
Does Engineering Ethics Matter? Joe Carson Wonders. . . .
Last week, I said here that in light of a tragedy such as the shootings at Virginia Tech, engineering ethics paled into insignificance. The question for today is, why should engineering ethics deserve any attention at all, when there are so many more pressing matters demanding our attention?
There are those who take the view that codes of engineering ethics as they now exist are little more than window-dressing, apparently designed to create a good impression on the public, but not to do anything more substantial than that. One such is Joe Carson, an engineer whose experiences as an employee of the Department of Energy taught him that the engineering profession does not rush to defend every engineer who is fired or otherwise penalized for "whistle-blowing." According to Carson's website, many engineering-related disasters and hazards result from the engineering profession's reluctance both to take its codes of ethics seriously and to defend its members from unjust retribution by employers who are made to look bad when engineers bring such problems to light. Carson has organized an Association of Christian Engineers whose purpose is to bring Christian-based ethical principles into engineering in a way that makes a real difference.
Carson makes some good points. As things now stand, nearly all engineering codes of ethics are not binding and have no force either of law or rule. In other words, the worst that can happen if an engineer, or an entire organization, violates ethical codes but otherwise stays within the limits of statutory laws, is a guilty conscience. And many of us are used to living with those.
One reason is that most engineers in the U. S. are not required to have a Professional Engineer license in order to work in industry. This is in marked contrast to the status quo in the legal and medical professions, and even such mundane enterprises as surveying and plumbing, where some form of state or federal licensure is needed in order to make money doing those jobs. People who violate legal or medical codes of ethics (which often have the force of law) can lose their privilege to practice by the action of a professional licensing board. This economic threat must have some effect, although cases of lawyers and doctors who lose their licensure through malpractice are not as common as you might think.
Another reason is the lack of solidarity among engineers as contrasted with, for example, trade unions. The grievance procedure is a time-honored feature of all unionized workplaces. Any employer who runs afoul of union-monitored workplace rules runs the risk of getting embroiled in a lengthy and costly battle with the union, which generally rushes to the aid of its allegedly wronged member. As in any conflict involving organizational power, abuse can take place on both sides, but at least there is a restraint in place to limit the power of the employer to act arbitrarily. Not so in the case of engineering societies, which for the most part strenuously avoid acting like unions. If Mr. Carson had been a member of a federally-recognized union instead of just belonging to the National Society of Professional Engineers, the American Society of Mechanical Engineers, and the American Nuclear Society, the outcome of his conflicts with the Department of Energy might have been very different, at least for him personally, and perhaps for the people who are endangered by the hazards he has spoken about publicly.
So what should be done? Mr. Carson has several suggestions. One is to make licensure a requirement for employment in any engineering job, not just for those few engineers whose need to sign off on plans for public projects makes licensure a necessity for them. Standing in the way of this goal is the fact that all states have what is called an industrial exemption which waives the license requirement for jobs in the private sector, by and large. This is a matter for state legislatures, which are notoriously tied to local industry and will loosen those ties only if another powerful force makes itself felt. The engineering societies could move in this direction, but so far they have given little sign of any interest along these lines. Another suggestion, which requires no legislation, is for the professional engineering societies to take up arms in defense of members who unjustly lose their jobs or other privileges when they act in accordance with ethical principles. At various times in the past, organizations such as the Institute of Electrical and Electronics Engineers (IEEE) have produced "friend-of-the-court" briefs in legal cases involving ethical engineers and unethical employers. But for the last decade or so, I have seen little evidence that IEEE is interested in such matters, although its Society on Social Implications of Technology (SSIT) does give out a Barus Award from time to time which honors notably courageous engineers who put their careers at risk to expose risky products or practices. (Full disclosure: I am currently treasurer of SSIT, which office is not as impressive as it may sound.)
Finally, Mr. Carson wishes that religious motivations for ethical behavior were not automatically ruled out of order in most modern technical societies. He writes that "engineering professional societies should acknowledge that faith-based motivations are valid . . . [and relate] to their efforts to uplift and defend the engineering profession, its code of ethics, and its service to society." As we have noted elsewhere (see the Jan. 2 blog herein "Science, Engineering, and Ethical Choice: Who's In Charge?"), without some larger encompassing narrative or worldview, all engineering activity becomes "sound and fury, signifying nothing." The significance of engineering must be placed in a larger context, or else the thing that should be only a means to human blessing becomes a monstrous and insatiable end in itself.
Dallas Willard, a professor of philosophy at the University of Southern California, says this about the dangers of technology unlimited by some kind of theological understanding: "Human beings have long aspired to control the ultimate foundations of ordinary reality. We have made a little progress, and there remains an unwavering sense that this is the direction of our destiny. That is the theological meaning of the scientific and technological enterprise. It has always presented itself as the instrument for solving human problems, though without its theological context it becomes idolatrous and goes mad."
Stern words. Does that mean I favor a religious belief test before any engineer can become licensed to practice in private or public enterprises? Absolutely not. But I do think we have gone so far in the other direction, away from any acknowledgment of the role of supernatural belief (including but not limited to Christianity) in the engineering enterprise, that we should not be surprised when the rather feeble things we do regarding engineering ethics fail to improve the ethical behavior of people and organizations engaged in technology. I do not agree with everything Joe Carson says. But I do think he's on to something, and I hope that his efforts meet with greater success than they have so far.
Sources: Joe Carson is president of the Association of Christian Engineers, whose website is www.christianengineer.org. His account of his trials and tribulations with the Department of Energy can be found at www.carsonversusdoe.com. The quotation about engineering and faith-based motivations is from his article in the December 2005 issue of the American Association for the Advancement of Science's publication "Professional Ethics Report." Dallas Willard's words are from p. 336 of Willard's The Divine Conspiracy (Harper San Francisco, 1998). The list of engineers and others who have received the IEEE Society on Social Implications of Technology's Barus Award can be found at http://www.ieeessit.org/about.asp?Level2ItemID=5.
Tuesday, April 17, 2007
In Memoriam: Victims of the Virginia Tech Shootings
On this, the evening of the day that saw the violent deaths of more than thirty victims of a shooting at Virginia Tech, ordinary concerns regarding engineering ethics pale into insignificance. Engineering has few martyrs. But these slayings took place at an institution dedicated to the education of engineers. If any of those who died had not chosen to enter that difficult and challenging field, he or she might well be alive tonight.
We are not told why one person, well-liked, promising, full of life and enthusiasm, is cut down at an early age, while another is spared to live a long, selfish, and unfruitful life. Those who believe that the things perceived by the five senses do not comprise all there is, but who also believe in "that which is unseen," can hope to know the Source of all knowledge some day. And it may be that what is shocking and senseless to us now may then seem part of a larger pattern or shape that we cannot now imagine. Whether what we saw today will make sense then is another question we cannot now answer.
Those that fell today are martyrs—the word originally meant "witnesses"—as much as those engineers who accept assignments in the military to bring the blessings of clean water and electricity to Iraq, or those who fight tropical diseases and harsh conditions to build cell-phone networks in developing countries. Engineering is not an easy course of education, nor is it an easy profession. But it can be a good one—good in the sense of benevolence, in the sense of bringing things of real value to people who need them. And good things that bless people are worth doing, even at the cost of personal risk.
My profound sympathy goes to the families of the victims, the students, staff, and faculty members of the Virginia Tech community.
O God, whose mercies cannot be numbered;
Accept our prayers on behalf of the souls of thy servants departed,
And grant them entrance into the land of light and joy,
in the fellowship of thy saints;
through Jesus Christ our Lord. Amen.
Tuesday, April 10, 2007
May I Beam Your Passport, Please?
Fraudulent U. S. passports can lead to a lot of trouble, which is why a couple of years ago, the U. S. State Department announced that as of October 2006, all new passports issued would contain an RFID chip with identifying information such as the owner's photograph, name, and birth date. These chips provide their information to a suitably equipped reader placed a few inches away, without the need for physical contact.
From the viewpoint of a potential passport forger, this is bad news. From now on, he will have to imitate not only the paper quality and other distinguishing characteristics of a genuine passport, but will also have to make or steal an RFID chip with encrypted data that matches the printed information and can be read by a U. S. customs official's machine. Or at least that seems to be the thinking of the State Department.
What they may not have counted on is the chorus of negative publicity that has greeted the introduction of the new technology. Numerous news reports over the last two years portray the RFID-equipped passport as a security risk, not a benefit. The fear is that a hacker with pirated software and enough hardware could read your name and personal information from many feet away, not just inches, and without your knowledge. To alleviate these fears, State added a metallic shield in the cover so the chip can't be read unless the booklet is open. But critics weren't satisfied: hotels, restaurants, banks, and many other establishments often want to see your passport, and who knows if you're being spied upon by radio waves at any of those places? The government has gone ahead with the rollout, but the prevailing winds of public opinion still blow cold on the idea.
I've discussed RFID at other times, so today I'd like to concentrate on a factor that many engineers either ignore or neglect in dealing with ethical issues: public perception of a technology. For better or worse, engineers tend to be a breed apart: conversant with mathematics that is unfamiliar to most people, inclined to think in terms of logical connections and detailed chains of reasoning rather than overall impressions, and often (but not always) insensitive to the emotional resonance of a situation. To a logical, problem-solving mind (many of which may work for the U. S. State Department, we hope), the problem of U. S. passport fraud suggests a technical solution: RFID chips that are hard to fake and hard to read without authorized gear. Since the cost of a passport hasn't gone up, and they will be easier to use if anything, why on earth would anyone object to such a thing?
I'll tell you why: because the notion of someone being able to view your photograph, date of birth, and other personal data by invisible means of which you are unaware, creeps out many ordinary people. (If I concentrate, I can get creeped out by it myself, although it's an effort.) I think it's this instinctive repugnance at the idea that some kind of evil twin of Superman can look through your clothes, into your wallet, and read stuff that you don't want just anybody to see, that is at the root of a lot of the opposition to RFID-equipped passports.
Technically speaking, the critics have a point. I am no RFID expert, but I do know something about antennas, and with any RFID system there are at least two antennas involved: one on the chip and one in the reader. Basic antenna theory says that the maximum distance you can read an RFID chip from depends on the characteristics of both antennas. A potential data thief can't do anything about the RFID chip's antenna, but he can certainly build a fancier and more sensitive antenna than the usual reader employs, especially if he can hide it somewhere at a distance (because it will tend to be larger than the conventional unit). So there is some truth to the idea that RFID chips which are normally read from a few inches away can sometimes be read at much larger distances if you go to enough trouble on the reader end.
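To put rough numbers on the critics' point: in the simplest far-field picture, the power reaching the chip falls off as the square of the distance (the Friis equation), so the read range grows as the square root of whatever extra gain the eavesdropper builds into his antenna. The sketch below uses illustrative numbers, and since passport chips actually couple inductively at close range, treat it as an upper-bound intuition rather than a prediction.

    # Friis-style scaling: received power ~ 1/d^2, so range ~ sqrt(gain boost).
    # Numbers are illustrative; the far-field model only roughly applies to
    # inductively coupled chips like those in passports.
    import math

    def boosted_range(base_range_m, gain_boost_db):
        """Read range after improving the reader antenna by gain_boost_db,
        assuming the reader-to-tag link is what limits the range."""
        return base_range_m * math.sqrt(10 ** (gain_boost_db / 10))

    # A chip nominally read at 10 cm, attacked with an antenna 20 dB better:
    print("%.2f m" % boosted_range(0.10, 20.0))  # -> 1.00 m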
As far as hacking the encryption goes, unless the State Department has come up with something new that they're not talking about, it is a matter of resources: given enough money, hardware, and time, an attacker can break virtually any practical encryption scheme, so the real question is whether the attack costs more than the prize is worth. One big problem in this department is that passports are supposed to be valid for ten years, which hands an attacker a decade of ever-faster, ever-cheaper computing. If some bad guy out there does manage to break the RFID encryption code, is the U. S. State Department going to recall all its passports for an upgrade? The answer isn't clear.
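The arithmetic of brute force is worth seeing once, because it cuts both ways: every added bit of key doubles the attacker's work, but a ten-year service life hands him a decade of improving hardware. The trial rate below is a pure guess, and I have no inside knowledge of what key length the State Department actually chose.

    # Expected time to find a key by exhaustive search (on average you
    # try half the key space). The trials-per-second rate is hypothetical.
    def years_to_crack(key_bits, trials_per_second):
        seconds = (2 ** key_bits / 2) / trials_per_second
        return seconds / (3600 * 24 * 365)

    for bits in (40, 56, 128):
        print("%3d bits: %.3g years at a billion trials per second"
              % (bits, years_to_crack(bits, 1e9)))

At a billion trials a second, a 40-bit key falls in minutes, a 56-bit key in about a year, and a 128-bit key not before the sun burns out; everything depends on the margin the designers left.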
But beyond these technical problems lies the larger public relations problem. If I were a State Department engineer, I might say something like, "Look, these people who are complaining don't understand the technology, they don't understand the problems with forgery we're having, and anyway, they don't have a choice, so they might as well pipe down." Needless to say, such an attitude is unhelpful. Whenever an organization tries to introduce a new technology, people will try to make sense of it by using whatever intellectual resources they have. For good or ill, RFID has a kind of spooky spying-at-a-distance reputation these days which seems to be predominantly negative except among a minority of enthusiasts such as the gentleman who implanted RFID chips in his hands (see this blog's "A Chip In Your Shoulder?", Mar. 27). The public doesn't seem to mind RFID chips in bags of cookies or packaged rutabagas if it helps check you out at the grocery store faster. But chips in your passport or your body, that's getting personal, and the emotional temperature falls right away.
I'm not sure how the State Department could have handled this better. But it does seem like they should have informed themselves more about what people would think of the new technology. They did respond to initial concerns with the shielding fix, but as often happens, the negative press got rolling and gained a momentum of its own. Now you can read different ideas on how to disable the chips, ranging from washing the passport with your socks and underwear (doesn't work) to running it through a microwave (throws off sparks and catches fire) to pounding the back cover with a hammer (probably effective). Nobody is saying what happens if you show up with one of the new passports in which the chip doesn't work. Maybe if it means a full-body search, people will change their minds about wrecking the chips. For me personally, I'm going to hang on to my old passport till it expires in 2011, and maybe by that time they will have come up with something even more advanced—or more controversial.
Sources: An article by Kelly Heyboer in the New Orleans Times-Picayune online edition of Apr. 8, 2007 (http://www.nola.com/national/t-p/index.ssf?/base/news-0/1176014434312450.xml&coll=1) clued me in to this issue. Bruce Schneier wrote a critical piece about it in the Sept. 16, 2006 edition of the Washington Post, found at http://www.washingtonpost.com/wp-dyn/content/article/2006/09/15/AR2006091500923.html. I tried to look at the U. S. State Department's website that deals with U. S. passports, but the page was apparently down or overloaded.
Tuesday, April 03, 2007
A Nanny for Nanotech? Government and Nanotechnology Hazards
Very small things can cause us lots of trouble, from flu viruses to tiny asbestos fibers that lodge in the lungs and lead to mesothelioma, a rare form of cancer. But up to now, all the very small things we had to worry about occurred naturally. In the last few years, we've learned how to make things that small artificially as well. And some people are worried that no one is paying much attention to the question of whether tiny artificial stuff could be as dangerous as the tiny natural stuff we've learned to live with—or die with.
Scientists have developed a special unit of measure for these things: the nanometer. One billion nanometers is a meter (which is a little longer than a yard, for you non-metric types). A human hair looks like the trunk of a redwood tree compared to a virus or an asbestos fiber, which can be as small as 10 nanometers in diameter. When things get that small, they start acting peculiar, because the graininess or lumpiness of matter begins to show up—the fact that it's made of atoms. This can be either very good or very bad, depending on what you're looking at. Take carbon nanotubes, for instance. These are tiny tubes that, if you could see them, would look like elegantly woven fabric, every atom in place. Atom for atom, if you pull on one of these tubes, it's much stronger than steel, and it can conduct electricity much better than copper, but only along the direction of the tube. This stuff has already made it into some commercial products, and hopes are that it will form the basis of entire new industries. Other nano-size chemicals and particles are finding their way into everything from electrical products to cosmetics. That's the good news.
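To make the redwood-and-hair comparison concrete, here is the arithmetic with round, illustrative numbers (real hairs vary quite a bit):

    # Scale comparison: a human hair of roughly 80 micrometers across
    # next to a 10-nanometer particle. The hair figure is a typical value.
    hair_nm = 80e-6 / 1e-9     # hair diameter expressed in nanometers
    particle_nm = 10
    print("the hair is %.0f times wider" % (hair_nm / particle_nm))  # ~8000x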
The possible bad news is, no one much is looking into the question of whether these tiny engineered particles are dangerous to living organisms, and in particular, people. So far, there hasn't been a tragedy involving artificial nanotech products along the lines of the "radium girls" disaster of the 1920s. But we don't know that it won't happen, either.
In some ways, radium was the nanotech of the early 1900s. Marie and Pierre Curie, radium's discoverers, were international heroes. Women who were hired to paint glow-in-the-dark numbers on watch and clock dials with radium-bearing paint thought they were lucky to be working with such exciting stuff. Some even used it as makeup and lipstick, which must have freaked out their boyfriends when they turned off the lights.
But within a few years, these women found out their jobs were no joking matter as many of them began to fall ill with liver problems, anemia, bone fractures, and rotting jawbones. The cause, of course, was the intense doses of radiation from the radium they absorbed in their bodies. Their employers initially denied any responsibility, the U. S. government declined to get involved, and it took years of persistent work by industrial pathologists, politicians, and others sympathetic to the workers' plight to get radium recognized for the terrible occupational hazard it was.
Are we facing a similar situation in the proliferation of nanotech products for consumers? There is a technical aspect and a political aspect to the question.
The technical aspect is, nobody knows for certain. But scientific knowledge isn't free: someone has to pay for tests, investigations, reports, and the other overhead stuff that goes along with finding out things these days. We know some things about nano-scale materials and how they interact with the nano-scale machinery of living cells, but certainly not everything. One reason nanotechnology and biotechnology are so attractive to researchers and investors is the fact that we don't know all about what goes on between these two areas, and so we're trying to find out. Absolute certainty that a product is free from any hazard to humans is not something we can usually obtain at a reasonable cost. The usual product testing will often show up prompt hazards (ones that don't take years to develop), and as for the others, well, since many companies operate on a six-month product cycle, waiting fifteen years for the outcome of a longitudinal study of biohazards just doesn't make a lot of sense to them.
That brings up the political question. Partly because I'm no political scientist and like to reduce everything to vectors (at least that's what my wife says), I like to drive things to extremes in order to understand where we stand in the middle. On one extreme would be total non-regulation: anybody can make anything anywhere, and sell it to anyone, claiming anything for it, and let the buyer beware. I understand this state of affairs isn't too far from reality in parts of China nowadays. It's a pretty good environment for entrepreneurs, assuming they don't have to live downwind from a paper mill or something equally offensive. But the dangers to consumers are obvious.
The other extreme is complete and total "nanny-stateism" (hence the nanny in today's headline): no product is allowed to fall into the hands of the consumer until the manufacturer, presumed guilty until proven innocent, demonstrates that it is harmless. Things are not quite this bad in some Scandinavian countries, but show signs of moving in that direction. At this extreme, companies give up on making money and spend their dwindling capital on safety studies that take years and let their competitors in less regulated regions beat them to the market. Clearly, this extreme isn't going to work very well either.
Being an engineer and not a political scientist, I tend to trust democracy to stumble around between these two extremes and find a middle road that is neither too negligent of the consumer's interests nor too stifling of the manufacturer's initiative. Nobody will be entirely happy with such a compromise, but that is how democracy works, or is supposed to work. In the past, it has taken a major tragedy, with people dying in large numbers from unusual causes, to motivate large-scale regulation of certain industries. That's too bad, from one point of view, but if the alternative is to regulate ourselves into the past and defer the use of any new nanotech products until we're absolutely, positively sure they're safe, then that's not so good either. Some studies by the Project on Emerging Nanotechnologies of the Woodrow Wilson International Center for Scholars indicate that no one—meaning no government agency charged with the responsibility—is overseeing the vast new field of consumer products that use nano-size particles. At the risk of annoying any libertarian readers of my blog, I would venture the opinion that at least somebody who is not beholden to manufacturers should look into this on a regular basis. But I would also venture that they shouldn't interfere with things until they find there is some reason to believe there is trouble brewing.
Sources: The Wilson Center website at http://www.wilsoncenter.org/ describes some of the work of their emerging nanotechnology project at http://www.nanotechproject.org/. This column was inspired by a piece in the Austin American-Statesman for Apr. 1, 2007 (p. A19) by Jeff Nesmith about the Wilson Center. Reviews of Radium Girls: Women and Industrial Health Reform, 1910-1935 by Claudia Clark (Chapel Hill, NC: Univ. of North Carolina Press, 1997), which I haven't read but would like to some day, can be found at the Amazon.com entry for the book.
Tuesday, March 27, 2007
A Chip In Your Shoulder?
Back around 1987 or so, I walked by the bulletin board in the Department of Electrical and Computer Engineering at the University of Massachusetts Amherst and saw a letter with a note scrawled at the bottom, "Anybody want to help Ms. X?" A woman had written the letter to our department chair because we had a reputation for doing research in microwave remote sensing and the detection of radio waves. In the letter, she said that she was convinced the FBI had secretly embedded a radio-wave spying chip in her body. She did not go into details about the circumstances under which this had been done, nor did she say exactly where she thought the chip was. But she knew it was there, and she wanted to know if she could come to our labs to be examined by us with our sensitive equipment.
Needless to say, nobody took her up on her offer to be "examined," although her letter was the topic of some lunch-table conversation for the next few days. I understand that this sort of belief is not uncommon among individuals whom psychiatry used to term "paranoid," although I don't know what terminology would be used today. Well, yesterday's paranoid fear is today's welcome reality—welcomed by some, at least. The cover of the March 2007 issue of the engineering magazine IEEE Spectrum shows an X-ray montage of a young guy holding both hands up near the camera. In the X-ray images, two little sliver-shaped chips are clearly visible in the fleshy part of each hand between the thumb and forefinger.
Inside, the reader finds that Amal Graafstra, an entrepreneur and RFID (radio-frequency identification) enthusiast, thinks having RFID chips in each hand is just great. After convincing a plastic surgeon to insert the chips, which are a kind not officially approved for human use yet (they're sold to veterinarians for pet-tracking purposes), he rewired his house locks, motorcycle ignition, and various other gizmos that used to need keys or passwords. Now he can just make like Mandrake the Magician, waving his hand in front of his door or his motorcycle instead of hauling out keys. When he posted the initial results of his experiments on a website, he got all kinds of reactions ranging from essentially "Way to go, dude!" to negative comments based on religious convictions. As he explains, "Some Christian groups hold that the Antichrist . . . will require followers to be branded with a numeric identifier prior to the end of the world—the 'mark of the beast.' So I got some anxious notes from concerned Christians—including my own mother!"
Right after reading Mr. Graafstra's article, you can turn to a piece by Ken Foster and Jan Jaeger on the ethics of RFID implants in humans. (Full disclosure: I am acquainted with Prof. Foster through my work with IEEE's Society on Social Implications of Technology.) They dutifully point out the potential downsides of the technology, including the chance that what starts out as a purely voluntary thing, even a fashionable style among certain elites, might turn into a job requirement or something imposed by a government on citizens or aliens or both. They mention the grim precedent set by the Nazi regime when its concentration-camp guards forced every prisoner to receive a tattooed number on the arm. RFID chips are not nearly as visible as tattoos, but can contain vastly more information. Think of a miniature hard drive with your entire work history, your places of residence, your sensitive financial information and passwords, all carried around in your body and possibly accessible to anyone with the right (or perhaps wrong) equipment. Such large amounts of data cannot be stored in RFID chips yet, but if the rate of technical progress keeps up, it will be possible to do that soon. And following the rough-and-ready principle that anything which can be hacked, will be hacked, implanted RFID chips pose a great potential risk to privacy.
While Messrs. Graafstra, Foster, and Jaeger debate the pragmatic consequences of this technology, I would like to bring up something that the "Christian groups" alluded to, although they approached it in a way that is biased by some fairly recent innovations in Christian theology dating only to the mid-19th century.
A deeper theme that dates from the earliest Hebrew traditions of the Old Testament is the idea of the human body as a sacred thing, not to be treated like other material objects. The Old Testament prohibited tattoos, ritual cutting, and other practices common among ancient tribes other than the Israelites. The Christian tradition carried these ideas forward in various ways, but always with a sense that the human body is not simply a collection of atoms, but is a "substance" (a philosophical term) which stands in a unique relation to the soul.
The problem with trying to relate these ideas to modern practices is that hardly anybody, Christian, Jewish, or otherwise, pays any attention to them any more. What with heart transplants, cochlear implants, artificial lenses for cataract surgery, and so on, we are well down the road of messing with the human body to repair or improve its functions. And because something is sacred does not mean necessarily that it cannot be touched or altered in any way. The best that I can extract from this tradition in regard to the question of RFID implants, is to encourage people to give this matter special consideration. It's not the same thing as carrying around a fanny pack, or a key ring, or even a nose ring. Once it's in there, you've got it, and it can be anything from a minor annoyance to major surgery to get it out. My bottom line is that with RFID implants, you're messing with the sacred again. And there has to be some meaning to the facts that this general sort of notion was applied first on a large scale by one of the most evil governments of the twentieth century, and that it used to be an imaginary fear latched onto by mentally unbalanced individuals. Only, I don't know what the meaning is.
Sources: The articles "Hands On" by Amal Graafstra and "RFID Inside" by Kenneth Foster and Jan Jaeger appear in the March 2007 issue of IEEE Spectrum, accessible free (as of this writing) at http://spectrum.ieee.org/mar07/4940 and http://spectrum.ieee.org/mar07/4939, respectively.
Needless to say, nobody took her up on her offer to be "examined," although her letter was the topic of some lunchtable conversation for the next few days. I understand that this sort of belief is not uncommon among individuals whom psychiatry used to term "paranoid," although I don't know what terminology would be used today. Well, yesterday's paranoid fear is today's welcome reality—welcomed by some, at least. The cover of the March 2007 issue of the engineering magazine IEEE Spectrum shows an X-ray montage of a young guy holding both hands up near the camera. In the X-ray images, two little sliver-shaped chips are clearly visible in the fleshy part of each hand between the thumb and forefinger.
Inside, the reader finds that Amal Graafstra, an entrepreneur and RFID (radio-frequency identification) enthusiast, thinks having RFID chips in each hand is just great. After convincing a plastic surgeon to insert the chips, which are a kind not officially approved for human use yet (they're sold to veterinarians for pet-tracking purposes), he rewired his house locks, motorcycle ignition, and various other gizmos that used to need keys or passwords. Now he can just make like Mandrake the Magician, waving his hand in front of his door or his motorcycle instead of hauling out keys. When he posted the initial results of his experiments on a website, he got all kinds of reactions ranging from essentially "Way to go, dude!" to negative comments based on religious convictions. As he explains, "Some Christian groups hold that the Antichrist . . . will require followers to be branded with a numeric identifier prior to the end of the world—the 'mark of the beast.' So I got some anxious notes from concerned Christians—including my own mother!"
Right after reading Mr. Graafstra's article, you can turn to a piece by Ken Foster and Jan Jaeger on the ethics of RFID implants in humans. (Full disclosure: I am acquainted with Prof. Foster through my work with IEEE's Society on Social Implications of Technology.) They dutifully point out the potential downsides of the technology, including the chance that what starts out as a purely voluntary thing, even a fashionable style among certain elites, might turn into a job requirement or something imposed by a government on citizens or aliens or both. They mention the grim precedent set by the Nazi regime when its concentration-camp guards forced every prisoner to receive a tattooed number on the arm. RFID chips are not nearly as visible as tattoos, but they can contain vastly more information. Think of a miniature hard drive with your entire work history, your places of residence, your sensitive financial information and passwords, all carried around in your body and possibly accessible to anyone with the right (or perhaps wrong) equipment. Such large amounts of data cannot be stored in RFID chips yet, but if the rate of technical progress keeps up, it soon will be. And following the rough-and-ready principle that anything which can be hacked, will be hacked, implanted RFID chips pose a great potential risk to privacy.
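As a rough sanity check on that "not yet," here is a back-of-the-envelope sketch in Python. Every number in it is my own illustrative assumption, not the specification of any actual chip:

    # Rough capacity comparison: a hobbyist RFID tag vs. the dossier imagined above.
    # Every number here is an illustrative assumption, not a vendor specification.
    TAG_CAPACITY_BITS = 1024              # small read/write tags of this era hold on the order of a kilobit
    tag_capacity_bytes = TAG_CAPACITY_BITS // 8

    dossier_bytes = sum({
        "work history":       20_000,     # bytes of plain text, assumed
        "residence history":   2_000,
        "financial records":  50_000,
        "passwords and keys":  1_000,
    }.values())

    print(f"Tag holds about {tag_capacity_bytes} bytes; dossier needs about {dossier_bytes:,}.")
    print(f"Shortfall factor: roughly {dossier_bytes // tag_capacity_bytes}x.")

Even with generous rounding, the imagined dossier outruns a kilobit tag by a factor of several hundred, which is why the full nightmare scenario belongs to the near future rather than the present.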
While Messrs. Graafstra, Foster, and Jaeger debate the pragmatic consequences of this technology, I would like to bring up something that the "Christian groups" alluded to, although they approached it in a way that is biased by some fairly recent innovations in Christian theology dating only to the mid-19th century.
A deeper theme that dates from the earliest Hebrew traditions of the Old Testament is the idea of the human body as a sacred thing, not to be treated like other material objects. The Old Testament prohibited tattoos, ritual cutting, and other practices common among ancient tribes other than the Israelites. The Christian tradition carried these ideas forward in various ways, but always with a sense that the human body is not simply a collection of atoms, but a "substance" (a philosophical term) which stands in a unique relation to the soul.
The problem with trying to relate these ideas to modern practices is that hardly anybody, Christian, Jewish, or otherwise, pays any attention to them any more. What with heart transplants, cochlear implants, artificial lenses for cataract surgery, and so on, we are well down the road of messing with the human body to repair or improve its functions. And the fact that something is sacred does not necessarily mean that it cannot be touched or altered in any way. The best that I can extract from this tradition in regard to the question of RFID implants is to encourage people to give the matter special consideration. It's not the same thing as carrying around a fanny pack, or a key ring, or even a nose ring. Once it's in there, you've got it, and getting it out can be anything from a minor annoyance to major surgery. My bottom line is that with RFID implants, you're messing with the sacred again. And there has to be some meaning in the facts that this general sort of notion was first applied on a large scale by one of the most evil governments of the twentieth century, and that it used to be an imaginary fear latched onto by mentally unbalanced individuals. Only, I don't know what the meaning is.
Sources: The articles "Hands On" by Amal Graafstra and "RFID Inside" by Kenneth Foster and Jan Jaeger appear in the March 2007 issue of IEEE Spectrum, accessible free (as of this writing) at http://spectrum.ieee.org/mar07/4940 and http://spectrum.ieee.org/mar07/4939, respectively.
Tuesday, March 20, 2007
Identities For Sale
Well, here's a way we can solve the trade imbalance between China and the U. S. According to Symantec, the computer-security company, the U. S. harbors more than half of the world's "underground economy servers"—computers that are used for criminal activities, including the control of other computers called "bots" without the knowledge or consent of their owners. And it turns out that about a fourth of all bots are in China. So we're using China's computers to steal money, data, and identities from around the world. And it's even tax-free, if the criminals who organize this sort of thing play their cards right. This market is running so well that you can buy a new electronic identity, complete with Social Security number, credit cards, and a bank account, for less than twenty bucks. Don't like who you are? Become someone else!
Lest anyone take me seriously, the above was written in the spirit of Jonathan Swift's "modest proposal" of 1729 to alleviate poverty in Ireland by encouraging families to sell their babies to be eaten. I do not think it is a good thing that we lead the world in the number of servers devoted to criminal ends. But it's a fact worth pondering, and one question in particular intrigues me: why is computer crime so organized and, well, successful in this country?
Part of the answer has to do with the extraordinary freedom we enjoy compared to many other countries, both in the economic and political realms. While businesspeople complain about Sarbanes-Oxley and other burdensome regulations here, they should compare these relatively mild restrictions with those in China or many countries in Europe, where red tape and bureaucracy, not to mention the occasional corrupt official, can bog down business deals and keep foreign firms away.
Another part of the answer has to do with the relative ease of committing computer crime, and the relative difficulty law enforcement officials have in catching bit-wise criminals. According to the Symantec report, which was summarized in an article on the San Jose Mercury News website, much of the coding needed for criminal work is now done in regular nine-to-five shifts. This indicates that the era of the late-night amateur hacker is giving way to the white-collar criminal who either does his work under the radar of a legitimate business, or simply sets up shop as a company whose activities are purposely vague to outsiders. And nothing could be more in keeping with modern U. S. business practices. It's easy to tell what goes on at a steel mill: there's smoke, flames, and railroad cars full of steel coming in and out. But you can walk into numberless establishments in office parks around this country, look around, even watch over somebody's shoulder, and you'll still have trouble figuring out what many of these outfits actually do.
And that's maybe a third reason the U. S. is so hospitable to computer crime: the ease with which you can hide behind anonymity here. In more traditional cultures, the loner is a rarity, and most people are tied to friends and relatives by networks of interdependent connections, obligations, and moral strictures. But here no one thinks badly of a person who lives alone in an apartment, works at a company called something like United Associated Global Enterprises, and keeps to himself. The fact that he is trading in millions of dollars' worth of stolen identities every week is known only to him and perhaps a few associates who could be scattered around the country or the world. Maybe the lack of distinctive identity that such bland, interchangeable surroundings impose on the people who live and work in them makes it perversely attractive to deal in other people's identities, even for nefarious purposes.
Computer networks were designed in the early years by people who were, if not saints, at least folks who were very good at legitimate uses of computer technology, and they were dealing at first only with other people like themselves. There is a strong streak of idealism in many computer types, and that is one reason so many of them worked so hard to realize their ideal of a world community joining together on the Internet. But few of them had extensive experience with criminality, and so the possibility that someone might actually abuse this wonderful new system was, in some ways, never taken very seriously. I speak as an amateur here, not as an expert. But the radically egalitarian structure of the Internet embodies a philosophy as much as it embodies a technical system.
There is no use crying over spilt idealism, and we have to deal with the way the Internet and computers are today, not the way they might have been if the founders had taken a more sanguine view of human nature when they set up the early protocols. I understand that sooner or later the Internet and its basic protocols will have to be overhauled in a far-reaching way. Maybe then we can put in some more sophisticated ways of tracking bad guys down, and of preventing the kinds of attacks that come without warning and shut down whole net-based businesses. But technology can take us only so far. As long as there are people using the Internet and not just machines, some of them are going to try to con, cheat, lie, and steal. The more that future systems are designed with that in mind, the better.
Sources: The Symantec report was summarized by Ryan Blitstein of the San Jose Mercury News on Mar. 19, 2007 at http://www.siliconvalley.com/mld/siliconvalley/16933863.htm. Jonathan Swift's "Modest Proposal," the heavy irony of which was completely missed by some of its first readers, is available complete at http://www.uoregon.edu/~rbear/modest.html.
Wednesday, March 14, 2007
Who Needs a Digital Life?
One day I rescued from the throw-out pile outside another professor's office a book entitled simply Computer Engineering, by C. Gordon Bell and two co-authors, all employees of the Digital Equipment Corporation. Published in 1978, it is a time capsule of the state of the computer art according to DEC, which around then was giving IBM a run for its money by coming out with the VAX series of minicomputers. This was just before the personal computer era changed everything.
This month I ran across the name Gordon Bell again, this time in the pages of Scientific American. By now, Bell is a vigorous-looking, nearly bald guy with a strange idea that Microsoft, his current employer, has given him the resources to try out. After struggling to digitize his thirty-year career's worth of documents, notes, papers, and books (including, no doubt, Computer Engineering), he decided in 2001 not only to go paperless, but to experiment with recording his life—digitally. The goal is to record and make available for future access everything Bell reads, hears, and sees (taste, touch, and smell weren't addressed, but I'm sure they're working on those too). The article shows Bell with a little digital camera slung around his neck. The camera senses heat from another body's presence or changes in light intensity, and snaps a picture along with time, GPS location, and wind speed and direction too, for all I know. So far this project has accumulated about 150 gigabytes in 300,000 records.
Two things are surprising about this. Well, more than two things, but two immediately come to mind. One is that 150 gigabytes isn't that much anymore. The computer I'm typing this on has a 75-gigabyte hard drive, and somehow or other I've managed to use 50 or so gigabytes already. Most of that is a single video project, and Bell admits most of his space is used by video. With new compression technologies, video won't take up so much room in the future.
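For the curious, the article's own figures make the arithmetic easy to check. A minimal sketch in Python, using just the two numbers quoted above plus an assumed growth rate of my own:

    # Back-of-envelope check on the digital-life numbers quoted above.
    total_bytes = 150e9          # about 150 gigabytes accumulated so far
    record_count = 300_000       # in about 300,000 records

    print(f"Average record size: about {total_bytes / record_count / 1e6:.1f} MB")  # ~0.5 MB

    # At an assumed 25 GB per year, a 750 GB consumer drive of the near future
    # would hold decades more of the same.
    growth_per_year = 25e9
    print(f"Headroom on a 750 GB drive: about {(750e9 - total_bytes) / growth_per_year:.0f} years")

Half a megabyte per record is modest; as noted above, it is the video that eats the space.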
The second surprising thing is, why would anybody want to do this? Yes, it has become technologically feasible only in the last few years, but only head-in-the-sand nerds automatically assert that because we can do a technical thing, we must do it. I hope Mr. Bell is beyond that stage, but one wonders. One of Bell's main motivations was simply to be able to remember things he would otherwise forget. Things like, "Oh, what was the name of that old boy I worked on the VAX with in 1975?" If it was anywhere in his scanned-in paper archives, I guess he can find out now. But at what cost?
Cost for Bell is not much of an obstacle, seeing as how Microsoft is behind the project, and in any case, if technology keeps heading in the same general direction, costs for this sort of thing will plummet, and everybody down to kindergarteners will be able to carry around their digital lives in their cellphones, or cellphone earrings, or whatever form the technology takes. But since this blog is about ethical implications of technology, let's look at just two for the moment: dependence and deception.
Nobody knows what will happen to a person who grows up never having to memorize anything. I mean, where do you stop? I don't need to remember my phone number; my digital assistant does that. I don't need to know the capital of South Dakota; my digital assistant knows that. I don't need to know the letters of the alphabet; my digital . . . and so on. At the very least, if we go far enough with digital-life technology, it will create a peculiar kind of mental dependence that up to now has been experienced only by people in iron lungs. When a technology becomes a necessity, and something happens to the necessity, you can be in deep trouble. So far the project doesn't seem to have done Gordon Bell any harm, except to have absorbed much of his time and energy for the last several years. But if this sort of thing becomes as commonplace as electric lighting (which did in fact revolutionize our lives in ways both good and bad), it would work changes in culture and human relationships that, at the very least, deserve a lot more thought and consideration than they have received up to now.
The second implication concerns deception. For practical purposes, there is no such thing as a networked computer system that is absolutely immune to jimmying of some kind: viruses, worms, falsification of data, and identity theft. Bell and Gemmell admit as much toward the end of the article when they talk about questions of privacy. You think someone stealing your Social Security number is bad? Wait till somebody steals that photo your digital assistant took of your "escort" on Saturday night in Las Vegas during that convention. Their proposed solution, as is typical with true believers of this kind, is more technology: intelligent systems to "advise" us when sharing information would be stupid. But what technology will keep us from being stupid anyway? And their solution to the storage of what they call "sensitive information that might put someone in legal jeopardy" is to have an "offshore data storage account . . . to place it beyond the reach of U. S. courts." How thoughtful of Scientific American to place in the hands of its readers such convenient advice about evading the law. This advice betrays an attitude that is increasingly common among certain groups who feel strongly that the digital community trumps all other human institutions, including legal and governmental ones.
Well, I'm glad Mr. Bell is still exploring the wonderful world of computers, even if his interests in the wider ranges of human experience appear not to have changed since his early days on the VAX project. Despite the tone of technological determinism in his article, I assert that the way digital lives will develop and be used is far from predictable, and it is even far from certain that it will happen at all. If the technology does become popular, I hope others will think more deeply than Bell and Gemmell have about the possible dangers and downsides.
Sources: "A Digital Life" by C. Gordon Bell and Jim Gemmell appeared in the March 2007 edition of Scientific American (pp. 58-65, vol. 296, no. 3).
Tuesday, March 06, 2007
The Ethics of Electronic Reproduction
Since so much of what we see, hear, read, and talk about has passed through digitization and cyberspace, it's easy to let that fact fade into the background and ignore the myriad of tricks that engineers have put into the hands of video editors, sound recording experts, and crooks. A story about sound-recording fraud with a neat ironic twist was reported recently by John von Rhein, the Chicago Tribune's music critic. It seems that one William Barrington-Coupe, the man behind a small record label called Concert Artists, wanted to make his concert-pianist wife Joyce Hatto look good, or at least sound good, on recordings issued under her name. So he "borrowed" recordings of famous pianists such as Vladimir Ashkenazy and altered the timing just enough to throw off suspicion that would arise if anyone noticed that Joyce Hatto's version of Rachmaninoff's Prelude in C sharp minor, for example, lasted two minutes and forty seconds, exactly as long as Vladimir Ashkenazy's. He did this digitally, of course, which is how he got caught.
Seems there is software out there that can compare the bits directly between two digital recordings. Although I don't know the details, I can imagine that a direct bit-by-bit comparison, even with digital time fiddling thrown in, could reveal copying of this kind much more positively than any subjective human judgment. Anyway, somebody tried it out on one of Joyce Hatto's Concert Artists CDs and found that the bits actually originated from the playing of Hungarian pianist Laszlo Simon. Confronted with the evidence, Barrington-Coupe confessed, generating publicity of a kind he probably wasn't hoping for.
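I have no idea what the actual analysis software was, but the basic idea is easy to sketch. Here is a toy version in Python that flags a suspiciously similar pair of recordings by normalized cross-correlation of their sample streams; a real forensic tool would also have to undo time-stretching and re-equalization, which this sketch assumes away:

    import numpy as np

    def similarity(a, b):
        # Peak normalized cross-correlation of two mono sample arrays.
        # A value near 1.0 suggests one recording is a copy of the other.
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.abs(np.correlate(a, b, mode="full")).max()

    # Toy demo: a "performance," a shifted and slightly noisy copy, and an unrelated take.
    rng = np.random.default_rng(0)
    performance = rng.standard_normal(10_000)
    copy = np.concatenate([np.zeros(500), performance]) + 0.01 * rng.standard_normal(10_500)
    unrelated = rng.standard_normal(10_500)

    print(f"copy vs. original:      {similarity(performance, copy):.3f}")       # near 1
    print(f"unrelated vs. original: {similarity(performance, unrelated):.3f}")  # near 0

The point is that a copy, even shifted in time and lightly disguised, lights up at some lag, while genuinely independent performances never do.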
The Tribune critic von Rhein makes the point that this is only the most egregious case of the kind of thing that has been going on for generations: electronic manipulation of performances to make them sound better. "Better" can mean anything from editing out mistakes and poorly performed passages to complete voice makeovers that can make a raspy-voiced eight-year-old boy sound like Arnold Schwarzenegger. Von Rhein traces this trend back to the introduction of tape recording and its comparatively convenient razor-blade-and-cement editing techniques, but there's an even earlier example: reproducing piano rolls. As early as the 1920s, inventors developed a system that recorded not only the timing of keystrokes but their force, in sixteen increments from loud to soft, and reproduced these strokes on a fancy player piano that embodied elements of digital technology implemented with air valves and bellows. Famed artists such as George Gershwin recorded numerous reproducing piano rolls, whose dynamics sounded much better than the ordinary tinkly player pianos of the day. It is well known that these reproducing piano rolls were edited by the performers to remove imperfections and otherwise improve upon the live studio performance.
Most people who listen to music these days are at least vaguely aware that even so-called "live" recordings have been doctored somewhat, and few seem to care. When someone strays into outright fraud, as Mr. Barrington-Coupe did, most people would agree that this is wrong. But should we be free to take what naturally comes out of a piano or a horn and transmogrify it digitally any way we wish, while still passing it off as "live" or "original"?
The novelty of this sort of thing is largely illusory. If we pass from the auditory to the visual realm, there is abundant evidence that those who could afford to make themselves look better than reality have done so, all the way back to ancient Rome. Portrait painters, other artists, and craftspeople have always faced the dilemma of whether to be strictly honest or flattering to their subjects. If the subject also pays the bill, and if honesty will not make as many bucks as flattery, flattery often wins out. The fact that flattery can now be done digitally is not a fundamental change in the human condition, but simply reflects the fact that as our media change, we take the same old human motivations into new fields of endeavor and capability.
What is truly novel about the story of Barrington-Coupe and his wife Joyce Hatto is not the intent or act of fraud, but the way he was caught. It was said long ago that he who lives by the sword dies by the sword. It often turns out these days that he who attempts to deceive by digital means gets caught by means of digital technology as well. On balance, I don't think we have a lot to worry about concerning musicians who want to sound a little better than reality on their recordings. Those who would forbid them the use of digital improvements are to my mind in the same category as those who want to prohibit the use of makeup by women. Maybe there are good religious reasons for such a prohibition, but it would make the world a little less attractive. The greater danger of digital technology as applied to media appears to me to lie in the area of control by large, powerful interests such as corporations and governments. But that is a discussion for another time.
Sources: John von Rhein's article appeared in the Mar. 4, 2007 online edition of the Chicago Tribune at http://www.chicagotribune.com/technology/chi-0703040408mar04,1,3237470.story?coll=chi-techtopheds-hed (free registration required).
Tuesday, February 27, 2007
Cyberspace Anonymity: Good or Bad?
If you have been reading this blog for more than a few weeks, you may have noticed that I recently pulled off my mask of anonymity and posted my full name and location on it. That was a choice I made, and most choices have moral implications, if you look far enough. The internet offers abundant opportunities to those who wish to remain anonymous for whatever reason. Since the way it is engineered has contributed to this state of affairs, we are still in the realm of engineering ethics when we consider the implications of cyberspace anonymity.
In the last few days, I have been corresponding with a person halfway around the world, in Australia, about a laptop computer problem. He (I assume it's a he, although I might be wrong) and I have never met and will in all likelihood never meet in this life. But he's had the kindness to take note of my plea for help on a user's forum, and for the last three or four days we've each been posting a remark a day, me asking questions, him giving advice. I notice he usually posts around four in the afternoon his time, which is just a bit before I'll get on around six in the morning in Texas. So although the sun set decades ago on the British Empire, the sun never sets on this spontaneous two-person computer consulting organization, at least as long as it lasts. So far, I've found this to be a good and helpful interchange.
One of the issues he's helping me with is computer viruses. They are another product of the anonymity the Internet provides. As I've remarked elsewhere, many computer hackers don't view the theft of software (or the theft by virus and worm vandalism of other people's time and resources) in the same light that they'd view the act of walking into a convenience store and heisting a loaf of bread. One reason for that is you're much more likely to get caught with bread under your coat than you are to be caught with illegal software, mainly because transactions over the Internet are usually anonymous unless you go to the trouble to advertise who you are. If by some magic, the writers of viruses, worms, and all the other plague carriers of computerdom were brought into the same room with their victims, you'd need a plenty big room, for one thing. Some of the perpetrators might be shamed into confessing, but others might just brazen it out like juvenile delinquents everywhere, and deny it all. At the very least, though, the victims would have the perverse satisfaction of seeing the person who messed up their computer. If this kind of encounter happened on a regular basis, the number of virus-writers would probably decline, but not die out entirely. Unfortunately, I don't have that particular magic trick in my bag.
What you think about the anonymity of cyberspace depends on what you think about humanity. The (relatively few) hard-core materialists among us cannot make a principled distinction between the silicon-and-aluminum machines on which the meat machines communicate, and the meat machines themselves. It's all bits anyway, and so whether one meat machine "knows" who another meat machine is, doesn't really matter except for routine pragmatic reasons, which are the only kind of reasons there are. Those of us who see something unique and distinct about humanity also see something unique and distinct about one person getting to know another, and even about names themselves. In the Hebrew Bible, the knowledge of a person's name conveyed an almost magical power. At the burning bush, Moses asked God, ". . . when I come unto the children of Israel. . . and they shall say to me, What is his name? what shall I say unto them? And God said unto Moses, I AM THAT I AM: and he said, Thus shalt thou say unto the children of Israel, I AM hath sent me unto you." The fact that God told Moses His Name was the sign of a special relationship. And so it should be between people as well.
It doesn't particularly bother me that I don't know Mr. (or Ms.) Australia's name. Long before computers came along, people in cities got used to being served by employees whose names they didn't know. That may not be a good thing in itself, but if it's a moral wrong not to call a salesperson by name, it's one millions commit every day. Normal life for centuries has brought with it various degrees of interaction, from the most casual one-time encounter to the most exalted lifelong friendships and marriages. A life in which each of us knew the most intimate details of the lives of all our acquaintances would be like living on a small desert island with other castaways. We have unfortunately been exposed to the real-life consequences of that kind of life on reality-show TV, and it's obviously got its problems. On the other hand, a life lived with no marriage partner, no close friends, and no one who calls you by your first name would fall short of what most people consider a reasonably fulfilled existence.
Should we throw up our hands and say that cyberspace anonymity is neutral? Absolutely not. It depends on how it's used. If anonymity encourages otherwise shy people to risk more in the way of human encounters, then it may be a benefit. If a criminal uses it the same way he'd use a mask, then it's wrong. Anonymous criticism and hate mail, whether on paper or by email, are likewise wrong, or at least cowardly, although there may be extenuating circumstances, such as when whistleblowers fearing for their jobs expose corruption and wrongdoing anonymously on hotlines. And I include most spam in the cowardly category.
We can hide behind the masks we don online because we're having fun, or helping each other, or considering a more serious relationship, or trying to make a buck, or plotting to kill. If the Internet had been set up to be totally transparent—everyone knowing the identity of everyone else—it would be a very different place, and probably closer to that global village that Marshall McLuhan talked about. But probably our interactions on it would be very different too. And I might not have gotten any help for my computer problem—at least, not from Australia.
Sources: The Canadian social theorist and media critic Marshall McLuhan did indeed coin the phrase "global village," according to his son Eric, who writes about its origins at http://www.chass.utoronto.ca/mcluhan-studies/v1_iss2/1_2art2.htm.
Tuesday, February 20, 2007
Global Warming or Global Shaking? A Tale of Two Theories
On Dec. 26, 2004, the most deadly tsunami in recorded history struck the Indian Ocean, killing about 280,000 people. If there had been a warning system in place along the affected coastlines to move people to higher ground, many of those who died in the disaster might be alive today. Fortunately, the technology to detect tsunamis in deep water and relay the information to the proper authorities exists today. After the terrible lesson of 2004, many governments moved to improve their tsunami-warning capabilities, and this effort is already proving fruitful. But most people think earthquakes on land, which can be just as deadly as tsunamis, are inherently unpredictable. What if that isn't true? What if it turns out that we can predict earthquakes as reliably as tomorrow's weather—not perfectly, but well enough to give warnings about truly major earthquakes? Wouldn't that be worth a little time and attention?
One of the people who think so is Friedemann Freund, long associated with the NASA Ames Research Center at Moffett Field, California. Freund is a mineralogist who has never been afraid to go against the prevailing climate of opinion, even as a child growing up in post-World-War II Germany. His interest in how rocks behave under conditions of extreme temperature and pressure that exist deep below the earth's surface led him to the discovery that their electrical conductivity changes in unexpected ways. Freund believes his research is a key to understanding why attempts to predict earthquakes using electromagnetic measurements have failed to live up to early expectations. (For more details on this type of earthquake prediction, see the entry in this blog "Earthquake Prediction: Ready for Prime Time?" for Apr. 13, 2006.) When Freund's information about the way electric currents can pass through rocks is added to the current state-of-the-art theories, he believes it will make way for a major advance in the technology and science of predicting earthquakes.
Freund's hopes may be realized, but leaving aside the technical questions of whether he is right, let's look at the degree of attention he and other earthquake-prediction scientists have received from the public, the politicians, and the media. Let's compare it to another scientific issue with global implications: global warming.
One rough way to compare general awareness of topics is to see how many results a given phrase returns on Google. The phrase "earthquake prediction" turns up about a million; "global warming" turns up 45 million. While all sorts of things influence these numbers, a difference that large means that a lot more people are thinking and writing about global warming than about earthquake prediction.
Now why is that? One reason has to do with the connection many scientists are making between the behavior of human beings—especially wealthy American human beings who drive gas-guzzling vehicles—and climate changes. If we just hadn't burned all that fossil fuel, they are saying, we might not have to put up with hotter summers, stormier winters, and coastline property values going down (or up, depending on how close you are to the coast). And any great disaster for which we believe we are culpable even a tiny bit will get our attention more than something we can have no influence over. But that doesn't mean we should ignore other things that we might be able to do something about too.
Next, consider the quality of answers to two questions: (1) Has anybody died from global warming yet? (2) Has anybody died from earthquakes and tsunamis we failed to predict? Answers to (1) will be all over the map, depending on whether you attribute this famine or that flood to global warming or to other causes. The answer to (2), by contrast, is as clear as a diamond in brilliant sunlight: yes, many thousands have died in earthquakes and tsunamis—deaths that might have been averted if we had possessed the means to predict these events. And with a fraction of the effort (and publicity) spent so far on global warming, the science of earthquake prediction could be much farther along than it is.
Part of engineering ethics, at least the way I view it, is to decide what technical matters deserve attention—what to do, as opposed to simply how to do it well, whatever it is. Professional inertia, which is a tendency of professions to circle the wagons whenever a cherished idea is threatened by an outsider, has slowed recognition of Freund's work and the work of others in earthquake prediction. I'm not saying the outsiders are right. But they deserve a much wider hearing, and encouragement in terms of funding and programs, than they've been getting so far. Even if spending money to look into earthquake prediction turns out to have been a bad bet, it is a wager society ought to make. And personally, I bet they are more right than wrong.
Sources: I thank Alberto Enriquez, the author of a recent IEEE Spectrum article on Freund's research, for drawing my attention to it. His article can be found at http://www.spectrum.ieee.org/feb07/4886 (free registration required for viewing). A nine-page thesis explaining some of Freund's recent ideas can be found at a website whose URL is so long I have to split it in half. You will have to copy and paste it into one line for it to work. Here are the pieces (no space between the two halves):
http://joshua-j-mellon.tripod.com/sitebuilder
content/sitebuilderfiles/Thesis_16Aug06.doc.
One of the people who think so is Friedemann Freund, long associated with the NASA Ames Research Center at Moffett Field, California. Freund is a mineralogist who has never been afraid to go against the prevailing climate of opinion, even as a child growing up in post-World-War II Germany. His interest in how rocks behave under conditions of extreme temperature and pressure that exist deep below the earth's surface led him to the discovery that their electrical conductivity changes in unexpected ways. Freund believes his research is a key to understanding why attempts to predict earthquakes using electromagnetic measurements have failed to live up to early expectations. (For more details on this type of earthquake prediction, see the entry in this blog "Earthquake Prediction: Ready for Prime Time?" for Apr. 13, 2006.) When Freund's information about the way electric currents can pass through rocks is added to the current state-of-the-art theories, he believes it will make way for a major advance in the technology and science of predicting earthquakes.
Freund's hopes may be realized, but leaving aside the technical questions of whether he is right, let's look at the degree of attention he and other earthquake-prediction scientists have received from the public, the politicians, and the media. Let's compare it to another scientific issue with global implications: global warming.
One rough way to compare general awareness of topics is to see how many results a given phrase returns on Google. The phrase "earthquake prediction" turns up about a million; "global warming" turns up 45 million. While all sorts of things influence these numbers, a difference that large means that a lot more people are thinking and writing about global warming than about earthquake prediction.
Now why is that? One reason has to do with the connection many scientists are making between the behavior of human beings—especially wealthy American human beings that drive gas-guzzling vehicles—and climate changes. If we just hadn't burned all that fossil fuel, they are saying, we might not have to put up with hotter summers, stormier winters, and coastline property values going down (or up, depending on how close you are to the coast). And any great disaster for which we believe we are culpable even a tiny bit will get our attention more than something we can have no influence over. But that doesn't mean we should ignore other things that we might be able to do something about too.
Next, consider the quality of answers to two questions: (1) Has anybody died from global warming yet? (2) Has anybody died from earthquakes and tsunamis we failed to predict yet? Answers to (1) will be all over the map, depending on whether you attribute this famine or that flood to global warming or to other causes. Compared to that answer, the answer to (2) is like the difference between the sky on a foggy day and a diamond in brilliant sunlight. Yes, many thousands have died in earthquakes and tsunamis—deaths that might have been averted if we had possessed the means to predict these events. And with a fraction of the effort (and publicity) spent so far on global warming, the science of earthquake prediction could be much farther along than it is.
Part of engineering ethics, at least the way I view it, is to decide what technical matters deserve attention—what to do, as opposed to simply how to do it well, whatever it is. Professional inertia, which is a tendency of professions to circle the wagons whenever a cherished idea is threatened by an outsider, has slowed recognition of Freund's work and the work of others in earthquake prediction. I'm not saying the outsiders are right. But they deserve a much wider hearing, and encouragement in terms of funding and programs, than they've been getting so far. Even if spending money to look into earthquake prediction turns out to have been a bad bet, it is a wager society ought to make. And personally, I bet they are more right than wrong.
Sources: I thank Alberto Enriquez, the author of a recent IEEE Spectrum article on Freund's research, for drawing my attention to it. His article can be found at http://www.spectrum.ieee.org/feb07/4886 (free registration required for viewing). A nine-page thesis explaining some of Freund's recent ideas can be found at http://joshua-j-mellon.tripod.com/sitebuildercontent/sitebuilderfiles/Thesis_16Aug06.doc.
Tuesday, February 13, 2007
Non-Lethal Weapons, Part II: Taser, Anyone?
Up to now, the taser has been a device used mainly by law enforcement authorities. It delivers a painful but allegedly non-lethal electrical charge that effectively disables an aggressor, without permanent injury in the vast majority of cases. Since Taser International was founded in 1993, hundreds of thousands of tasers have been sold to and used by police worldwide, and now the company is trying to enter the consumer market in a big way. In April, you will be able to buy a $300 unit called the C2, styled in a pink-and-black housing that makes it look more like a lady's shaver than a weapon.
An Austin American-Statesman report of Feb. 4, 2007 on the introduction of this latest taser model raises the question of safety. Is carrying around a high-voltage generator in your handbag really any better than packing a rod, as the saying goes? Even if the user doesn't harm himself or herself, are these devices really safe in both a technical and societal sense, or are they a step down the road to a police state where torture is routinely carried out by ordinary citizens?
Amnesty International seems to think tasers are a bad idea all around, and wants a moratorium on their sale. Not surprisingly, Taser's co-founder, Tom Smith, thinks a moratorium is a bad idea, since his company seems to be the main if not the sole supplier of non-lethal electrical-shock devices for use on humans. What facts should guide one's decisions about these things?
Medically speaking, the taser people seem to be standing on pretty firm ground. Without going into a lot of details about amps, volts, watts, joules (not jewels, although it's pronounced the same way) and so on, I will simply say that the taser is carefully designed to deliver enough electrical energy to cause loss of voluntary control of the main skeletal muscles, but not enough to stop your heart or cause significant burns or the other injuries typically associated with electrical shock. If you can't control your leg muscles, you fall down, which is exactly the posture police officers want to see a recalcitrant subject in.
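To make that design trade-off concrete, here is a back-of-the-envelope calculation in Python. The numbers are commonly published approximate specifications for a police-model taser, not figures from this post or from Taser International, so treat them as illustrative assumptions only:

```python
# Rough sketch of why a taser can be painful yet (usually) not lethal.
# All figures below are assumed, commonly cited approximations for a
# police-model taser, used here for illustration only.

PULSES_PER_SECOND = 19      # assumed pulse repetition rate
CHARGE_PER_PULSE = 100e-6   # assumed delivered charge per pulse, coulombs
ENERGY_PER_PULSE = 0.07     # assumed delivered energy per pulse, joules

avg_current = CHARGE_PER_PULSE * PULSES_PER_SECOND  # amperes
avg_power = ENERGY_PER_PULSE * PULSES_PER_SECOND    # watts

print(f"average current: {avg_current * 1000:.1f} mA")  # about 1.9 mA
print(f"average power:   {avg_power:.1f} W")            # about 1.3 W

# For comparison, a sustained 60-Hz current of a few tens of milliamperes
# through the chest can trigger ventricular fibrillation. The taser's short,
# widely spaced pulses keep the average current well below that level.
```

The headline-grabbing high voltage is mainly there to arc through clothing and skin; what the safety claims rest on is the low average current and energy sketched above.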
Taser International has posted a disquieting video showing the CEO and other higher-ups receiving taser shocks. The grimaces of agony and cries of anguish are not faked, but all of them lived to tell about it. When I did a brief web search for taser injuries, one of the first articles that came up was about a series of lawsuits filed against Taser International, not by criminals (who don't usually have the wherewithal to sue anyway), but by police departments whose members claimed they sustained heart damage and other injuries while demonstrating the taser during training sessions. That was a couple of years ago, and the company now has four-page warning statements on its website describing all the things to watch out for, from sprained ankles to heart attacks in people with pre-existing heart conditions.
For the sake of argument, say the taser is as safe as its maker claims, and the people who get tasered suffer no permanent harm in nearly all cases. Do we still want Joe and Jane Public walking around with a C2 model, even if it is equipped with identification confetti that sprays out so that any use of a taser by the wrong person can be traced?
I once knew a guy who was a truck driver by profession, plenty big enough to take care of himself in a barroom brawl. For a long time he carried a gun, but after a while he married a young Christian woman and decided to quit carrying it. I asked him why. He said he didn't like the way just having the gun on him changed his attitude toward people and situations. He didn't go into detail, but what he may have meant is that he had those first thoughts that must always come before someone actually uses a weapon: what if this happens? should I pull it out then? does this guy deserve to be shot? And I guess he just got tired of having those kinds of thoughts.
If tasers get wildly popular, you can count on more people misusing them, because despite all the training brochures and videos in the world, if a consumer buys a thing and throws the training material away, there's nothing to stop him. Fortunately, the consequences of misusing a taser are less severe than those of misusing a handgun. Wouldn't it be nice if we could replace all handguns with tasers? Unfortunately, we'd get right back into the arms race the minute somebody went out and got a handgun. So I think any hopes of getting criminals to use tasers instead of guns are fruitless, especially since tasers have the anti-crime confetti feature.
From a historical point of view, tasers are an interesting step backward in the grand arms race that has been going on since the first caveman hit another caveman with a rock—or since Cain murdered Abel, if you please. It is an effort to find a kinder, gentler way to subdue your fellow man (or woman). I find it rather charming that the acronym "taser" is supposed to stand for "Thomas A. Swift's Electric Rifle." Tom Swift was the inventor hero of the eponymous series of adventure stories for boys that were popular in the early 1900s. In Tom Swift and His Electric Rifle (1911), Tom never actually deploys his weapon, which shoots ball-lightning-like glowing bombs, at another person. He hies himself off to Africa in an airship and shoots elephants instead. Jack Cover, who invented the taser and coined the acronym, must have had some familiarity with the series, which has attracted a kind of cult following among engineers and inventors over the years.
Tom Swift's world was a very black-and-white place, both in the racial sense and in the moral sense. In Tom Swift's world, the only people with tasers would be the good guys, who could always subdue the bad guys, save the girl, and return home in triumph to a hero's welcome. Let's hope that everybody who uses one lives up to that ideal—but let's also plan on what to do if they don't.
(Correction added 2/18/2007: A more careful re-reading of Tom Swift and His Electric Rifle reveals that Tom did indeed use his weapon against people, namely a tribe of entirely fictional three-foot-high natives covered with red hair. At first, he "regulated the charge" (p. 166) so as to stun, not kill, just like the modern taser, but toward the end of the book desperation overcame moderation and he blasted away at full power, bowling over hordes of the "red imps.")
Sources: The article "New Tasers Alarm Safety Advocates" by Joshunda Sanders appeared in the Austin American-Statesman print edition of Feb. 4, 2007, on the front page. Taser International's website is at www.taser.com. The article describing the lawsuits against Taser International appeared in August 2005 in the Arizona Republic and is found at http://www.azcentral.com/arizonarepublic/local/articles/0820taser20.html. Medical information about typical taser injuries can be found in an article by Sir (first name, maybe?) Scott Savage at http://www.ncchc.org/pubs/CC/tasers.html. And Wikipedia has a nice, though apparently controversial, article on the Tom Swift series.