Last week, I said here that in light of a tragedy such as the shootings at Virginia Tech, engineering ethics paled into insignificance. The question for today is, why should engineering ethics deserve any attention at all, when there are so many more pressing matters demanding our attention?
There are those who take the view that codes of engineering ethics as they now exist are little more than window-dressing, apparently designed to create a good impression on the public, but not to do anything more substantial than that. One such is Joe Carson, an engineer whose experiences as an employee of the Department of Energy taught him that the engineering profession does not rush to defend every engineer who is fired or otherwise penalized for "whistle-blowing." According to Carson's website, many engineering-related disasters and hazards result from the engineering profession's reluctance both to take its codes of ethics seriously and to defend its members from unjust retribution by employers who are made to look bad when engineers bring such problems to light. Carson has organized an Association of Christian Engineers whose purpose is to bring Christian-based ethical principles into engineering in a way that makes a real difference.
Carson makes some good points. As things now stand, nearly all engineering codes of ethics are not binding and have no force either of law or rule. In other words, the worst that can happen if an engineer, or an entire organization, violates ethical codes but otherwise stays within the limits of statutory laws, is a guilty conscience. And many of us are used to living with those.
One reason is that most engineers in the U. S. are not required to have a Professional Engineer license in order to work in industry. This is in marked contrast to the status quo in the legal and medical professions, and even such mundane enterprises as surveying and plumbing, where some form of state or federal licensure is needed in order to make money doing those jobs. People who violate legal or medical codes of ethics (which often have the force of law) can lose their privilege to practice by the action of a professional licensing board. This economic threat must have some effect, although cases of lawyers and doctors who lose their licensure through malpractice are not as common as you might think.
Another reason is the lack of solidarity among engineers as contrasted with, for example, trade unions. The grievance procedure is a time-honored feature of all unionized workplaces. Any employer who runs afoul of union-monitored workplace rules runs the risk of getting embroiled in a lengthy and costly battle with the union, which generally rushes to the aid of its allegedly wronged member. As in any conflict involving organizational power, abuse can take place on both sides, but at least there is a restraint in place to limit the power of the employer to act arbitrarily. Not so in the case of engineering societies, which for the most part strenuously avoid acting like unions. If Mr. Carson had been a member of a federally recognized union instead of just belonging to the National Society of Professional Engineers, the American Society of Mechanical Engineers, and the American Nuclear Society, the outcome of his conflicts with the Department of Energy might have been very different, at least for him personally, and perhaps for the people who are endangered by the hazards he has spoken about publicly.
So what should be done? Mr. Carson has several suggestions. One is to make licensure a requirement for employment in any engineering job, not just for those few engineers whose need to sign off on plans for public projects makes licensure a necessity for them. Standing in the way of this goal is the fact that all states have what is called an industrial exemption, which by and large waives the license requirement for jobs in the private sector. This is a matter for state legislatures, which are notoriously tied to local industry and will loosen those ties only if another powerful force makes itself felt. The engineering societies could move in this direction, but so far they have given little sign of any interest along these lines. Another suggestion, which requires no legislation, is for the professional engineering societies to take up arms in defense of members who unjustly lose their jobs or other privileges when they act in accordance with ethical principles. At various times in the past, organizations such as the Institute of Electrical and Electronics Engineers (IEEE) have produced "friend-of-the-court" briefs in legal cases involving ethical engineers and unethical employers. But for the last decade or so, I have seen little evidence that IEEE is interested in such matters, although its Society on Social Implications of Technology (SSIT) does give out a Barus Award from time to time which honors notably courageous engineers who put their careers at risk to expose risky products or practices. (Full disclosure: I am currently treasurer of SSIT, which office is not as impressive as it may sound.)
Finally, Mr. Carson wishes that religious motivations for ethical behavior were not automatically ruled out of order in most modern technical societies. He writes that "engineering professional societies should acknowledge that faith-based motivations are valid . . . [and relate] to their efforts to uplift and defend the engineering profession, its code of ethics, and its service to society." As we have noted elsewhere (see the Jan. 2 blog herein "Science, Engineering, and Ethical Choice: Who's In Charge?"), without some larger encompassing narrative or worldview, all engineering activity becomes "sound and fury, signifying nothing." The significance of engineering must be placed in a larger context, or else the thing that should be only a means to human blessing becomes a monstrous and insatiable end in itself.
Dallas Willard, a professor of philosophy at the University of Southern California, says this about the dangers of technology unlimited by some kind of theological understanding: "Human beings have long aspired to control the ultimate foundations of ordinary reality. We have made a little progress, and there remains an unwavering sense that this is the direction of our destiny. That is the theological meaning of the scientific and technological enterprise. It has always presented itself as the instrument for solving human problems, though without its theological context it becomes idolatrous and goes mad."
Stern words. Does that mean I favor a religious belief test before any engineer can become licensed to practice in private or public enterprises? Absolutely not. But I do think we have gone so far in the other direction away from any acknowledgment of the role of supernatural belief (including but not limited to Christianity) in the engineering enterprise, that we should not be surprised when the rather feeble and often ineffective things we do regarding engineering ethics, often fail to improve the ethical behavior of people and organizations engaged in technology. I do not agree with everything Joe Carson says. But I do think he's on to something, and I hope that his efforts meet with greater success than they have so far.
Sources: Joe Carson is president of the Association of Christian Engineers, whose website is www.christianengineer.org. His account of his trials and tribulations with the Department of Energy can be found at www.carsonversusdoe.com. The quotation about engineering and faith-based motivations is from his article in the December 2005 issue of the American Association for the Advancement of Science's publication "Professional Ethics Report." Dallas Willard's words are from p. 336 of Willard's The Divine Conspiracy (Harper San Francisco, 1998). The list of engineers and others who have received the IEEE Society on Social Implications of Technology's Barus Award can be found at http://www.ieeessit.org/about.asp?Level2ItemID=5.
Tuesday, April 24, 2007
Tuesday, April 17, 2007
In Memoriam: Victims of the Virginia Polytechnic Institute and State University Shootings
On this, the evening of the day that saw the violent deaths of more than thirty victims of a shooting at Virginia Tech, ordinary concerns regarding engineering ethics pale into insignificance. Engineering has few martyrs. But these slayings took place at an institution dedicated to the education of engineers. If any of those who died had not chosen to enter that difficult and challenging field, he or she might well be alive tonight.
We are not told why one person, well-liked, promising, full of life and enthusiasm, is cut down at an early age, while another is spared to live a long, selfish, and unfruitful life. Those who believe that the things perceived by the five senses do not comprise all there is, but also believe in "that which is unseen," can hope to know the Source of all knowledge some day. And it may be that what is shocking and senseless to us now, may then seem part of a larger pattern or shape that we cannot now imagine. Whether any of this we saw today will make sense then—is another question we cannot now answer.
Those that fell today are martyrs—the word originally meant "witnesses"—as much as those engineers who accept assignments in the military to bring the blessings of clean water and electricity to Iraq, or those who fight tropical diseases and harsh conditions to build cell-phone networks in developing countries. Engineering is not an easy course of education, nor is it an easy profession. But it can be a good one—good in the sense of benevolence, in the sense of bringing things of real value to people who need them. And good things that bless people are worth doing, even at the cost of personal risk.
My profound sympathy goes to the families of the victims, the students, staff, and faculty members of the Virginia Tech community.
O God, whose mercies cannot be numbered;
Accept our prayers on behalf of the souls of thy servants departed,
And grant them entrance into the land of light and joy,
in the fellowship of thy saints;
through Jesus Christ our Lord. Amen.
Tuesday, April 10, 2007
May I Beam Your Passport, Please?
Fraudulent U. S. passports can lead to a lot of trouble, which is why a couple of years ago, the U. S. State Department announced that as of October 2006, all new passports issued would contain an RFID chip with identifying information such as the owner's photograph, name, and birth date. These chips provide their information to a suitably equipped reader placed a few inches away, without the need for physical contact.
From the viewpoint of a potential passport forger, this is bad news. From now on, he will not only have to imitate the paper quality and other distinguishing characteristics of a genuine passport, but also make or steal an RFID chip with encrypted data that matches the printed information and can be read by a U. S. customs official's machine. Or at least that seems to be the thinking of the State Department.
What they may not have counted on is the chorus of negative publicity that has greeted the introduction of the new technology. Numerous news reports over the last two years portray the RFID-equipped passport as a security risk, not a benefit. The fear is that a hacker with pirated software and enough hardware could read your name and personal information from many feet away, not just inches, and without your knowledge. To alleviate these fears, State added a metallic shield in the cover so the chip can't be read unless the booklet is open. But critics weren't satisfied: hotels, restaurants, banks, and many other establishments often want to see your passport, and who knows if you're being spied upon by radio waves at any of those places? The government has gone ahead with the rollout, but the prevailing winds of public opinion still blow cold on the idea.
I've discussed RFID at other times, so today I'd like to concentrate on a factor that many engineers either ignore or neglect in dealing with ethical issues: public perception of a technology. For better or worse, engineers tend to be a breed apart: conversant with mathematics that is unfamiliar to most people, inclined to think in terms of logical connections and detailed chains of reasoning rather than overall impressions, and often (but not always) insensitive to the emotional resonance of a situation. To a logical, problem-solving mind (many of which may work for the U. S. State Department, we hope), the problem of U. S. passport fraud suggests a technical solution: RFID chips that are hard to fake and hard to read without authorized gear. Since the cost of a passport hasn't gone up, and they will be easier to use if anything, why on earth would anyone object to such a thing?
I'll tell you why: because the notion of someone being able to view your photograph, date of birth, and other personal data by invisible means of which you are unaware, creeps out many ordinary people. (If I concentrate, I can get creeped out by it myself, although it's an effort.) I think it's this instinctive repugnance at the idea that some kind of evil twin of Superman can look through your clothes, into your wallet, and read stuff that you don't want just anybody to see, that is at the root of a lot of the opposition to RFID-equipped passports.
Technically speaking, the critics have a point. I am no RFID expert, but I do know something about antennas, and with any RFID system there are at least two antennas involved: one on the chip and one in the reader. Basic antenna theory says that the maximum distance you can read an RFID chip from depends on the characteristics of both antennas. A potential data thief can't do anything about the RFID chip's antenna, but he can certainly build a fancier and more sensitive antenna than the usual reader employs, especially if he can hide it somewhere at a distance (because it will tend to be larger than the conventional unit). So there is some truth to the idea that RFID chips which are normally read from a few inches away can sometimes be read at much larger distances if you go to enough trouble on the reader end.
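The point about reader antennas can be sketched with a back-of-the-envelope link budget. The snippet below uses the standard far-field Friis model with illustrative numbers of my own choosing (the transmit power, antenna gains, chip sensitivity, and UHF frequency are all assumptions, not passport specifications); the actual e-passport chip is a near-field device operating at 13.56 MHz, where range scales differently, but the qualitative conclusion carries over: a larger, higher-gain antenna on the reader end stretches the read distance well beyond the nominal few inches.

```python
import math

def read_range_m(p_tx_w, g_reader_dbi, g_tag_dbi, p_chip_min_w, freq_hz):
    """Estimate the maximum forward-link read range of a passive RFID tag
    using the Friis free-space model:
        P_chip = P_tx * G_reader * G_tag * (lambda / (4*pi*r))**2
    Solving for r gives the range at which the chip just barely receives
    enough power (p_chip_min_w) to wake up.  Simplified sketch only."""
    wavelength = 3e8 / freq_hz                 # speed of light / frequency
    g_reader = 10 ** (g_reader_dbi / 10)       # dBi -> linear gain
    g_tag = 10 ** (g_tag_dbi / 10)
    return (wavelength / (4 * math.pi)) * math.sqrt(
        p_tx_w * g_reader * g_tag / p_chip_min_w)

# Ordinary reader antenna (6 dBi) vs. an eavesdropper's large dish (20 dBi),
# both with 1 W transmit power, a 1 dBi tag antenna, a 50-microwatt chip
# wake-up threshold, and a 915 MHz UHF carrier -- all illustrative values.
normal = read_range_m(1.0, 6, 1, 50e-6, 915e6)
snoop = read_range_m(1.0, 20, 1, 50e-6, 915e6)
print(f"{normal:.1f} m vs {snoop:.1f} m")
```

With these made-up numbers, swapping the 6 dBi reader antenna for a 20 dBi dish roughly quintuples the range, since range grows as the square root of reader gain; the thief pays only in antenna size, which is exactly why he would hide it at a distance.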
As far as hacking the encryption software goes, unless the State Department has come up with something new that they're not talking about, it is simply a matter of bringing to bear enough resources to break virtually any computer encryption. One big problem in this department is that passports are supposed to be valid for ten years. If some bad guy out there does manage to break the RFID encryption code, is the U. S. State Department going to recall all its passports for an upgrade? The answer isn't clear.
But beyond these technical problems lies the larger public relations problem. If I were a State Department engineer, I might say something like, "Look, these people who are complaining don't understand the technology, they don't understand the problems with forgery we're having, and anyway, they don't have a choice, so they might as well pipe down." Needless to say, such an attitude is unhelpful. Whenever an organization tries to introduce a new technology, people will try to make sense of it by using whatever intellectual resources they have. For good or ill, RFID has a kind of spooky spying-at-a-distance reputation these days which seems to be predominantly negative except among a minority of enthusiasts such as the gentleman who implanted RFID chips in his hands (see this blog's "A Chip In Your Shoulder?", Mar. 27). The public doesn't seem to mind RFID chips in bags of cookies or packaged rutabagas if it helps check you out at the grocery store faster. But chips in your passport or your body, that's getting personal, and the emotional temperature falls right away.
I'm not sure how the State Department could have handled this better. But it does seem like they should have informed themselves more about what people would think of the new technology. They did respond to initial concerns with the shielding fix, but as often happens, the negative press got rolling and gained a momentum of its own. Now you can read different ideas on how to disable the chips, ranging from washing the passport with your socks and underwear (doesn't work) to running it through a microwave (throws off sparks and catches fire) to pounding the back cover with a hammer (probably effective). Nobody is saying what happens if you show up with one of the new passports in which the chip doesn't work. Maybe if it means a full-body search, people will change their minds about wrecking the chips. For me personally, I'm going to hang on to my old passport till it expires in 2011, and maybe by that time they will have come up with something even more advanced—or more controversial.
Sources: An article by Kelly Heyboer in the New Orleans Times-Picayune online edition of Apr. 8, 2007 (http://www.nola.com/national/t-p/index.ssf?/base/news-0/1176014434312450.xml&coll=1) clued me in to this issue. Bruce Schneier of the Washington Post wrote a critical piece about it in the Sept. 16, 2006 edition found at http://www.washingtonpost.com/wp-dyn/content/article/2006/09/15/AR2006091500923.html. I tried to look at the U. S. State Department's website that deals with U. S. passports, but the page was apparently down or overloaded.
Tuesday, April 03, 2007
A Nanny for Nanotech? Government and Nanotechnology Hazards
Very small things can cause us lots of trouble, from flu viruses to tiny asbestos fibers that lodge in the lungs and lead to mesothelioma, a rare form of cancer. But up to now, all the very small things we had to worry about occurred naturally. In the last few years, we've learned how to make things that small artificially as well. And some people are worried that no one is paying much attention to the question of whether tiny artificial stuff could be as dangerous as the tiny natural stuff we've learned to live with—or die with.
Scientists have developed a special unit of measure for these things: the nanometer. One billion nanometers is a meter (which is a little longer than a yard, for you non-metric types). A human hair looks like the trunk of a redwood tree compared to a virus or an asbestos fiber, which can be as small as 10 nanometers in diameter. When things get that small, they start acting peculiar, because the graininess or lumpiness of matter begins to show up—the fact that it's made of atoms. This can be either very good or very bad, depending on what you're looking at. Take carbon nanotubes, for instance. These are tiny tubes that, if you could see them, would look like elegantly woven fabric, every atom in place. Atom for atom, if you pull on one of these tubes, it's much stronger than steel, and it can conduct electricity much better than copper, but only along the direction of the tube. This stuff has already made it into some commercial products, and hopes are that it will form the basis of entire new industries. Other nano-size chemicals and particles are finding their way into everything from electrical products to cosmetics. That's the good news.
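To make the scale comparison concrete, here is a quick arithmetic sketch (the 80-micrometer hair diameter is a typical textbook figure, not something from the sources above):

```python
# Back-of-the-envelope scale comparison with illustrative figures.
nm_per_m = 1_000_000_000   # one billion nanometers to the meter

hair_nm = 80_000    # ~80 micrometers, a typical human hair diameter
fiber_nm = 10       # a thin asbestos fiber or small virus

ratio = hair_nm / fiber_nm
print(f"A hair is about {ratio:,.0f} times wider than a 10 nm fiber.")
# prints: A hair is about 8,000 times wider than a 10 nm fiber.
```

An 8,000-to-1 ratio is the difference between a one-meter pipe and an eight-kilometer-wide object, which is why even the redwood-trunk analogy undersells how small these particles are.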
The possible bad news is, no one much is looking into the question of whether these tiny engineered particles are dangerous to living organisms, and in particular, people. So far, there hasn't been a tragedy involving artificial nanotech products along the lines of the "radium girls" disaster of the 1920s. But we don't know that it won't happen, either.
In some ways, radium was the nanotech of the early 1900s. Marie and Pierre Curie, radium's discoverers, were international heroes. Women who were hired to paint glow-in-the-dark numbers on watch and clock dials with radium-bearing paint thought they were lucky to be working with such exciting stuff. Some even used it as makeup and lipstick, which must have freaked out their boyfriends when they turned off the lights.
But within a few years, these women found out their jobs were no joking matter as many of them began to fall ill with liver problems, anemia, bone fractures, and rotting jawbones. The cause, of course, was the intense doses of radiation from the radium they absorbed in their bodies. Their employers initially denied any responsibility, the U. S. government declined to get involved, and it took years of persistent work by industrial pathologists, politicians, and others sympathetic to the workers' plight to get radium recognized for the terrible occupational hazard it was.
Are we facing a similar situation in the proliferation of nanotech products for consumers? There is a technical aspect and a political aspect to the question.
The technical aspect is, nobody knows for certain. But scientific knowledge isn't free: someone has to pay for tests, investigations, reports, and the other overhead stuff that goes along with finding out things these days. We know some things about nano-scale materials and how they interact with the nano-scale machinery of living cells, but certainly not everything. One reason nanotechnology and biotechnology are so attractive to researchers and investors is the fact that we don't know all about what goes on between these two areas, and so we're trying to find out. Absolute certainty that a product is free from any hazard to humans is not something we can usually obtain at a reasonable cost. The usual product testing will often show up prompt hazards (ones that don't take years to develop), and as for the others, well, since many companies operate on a six-month product cycle, waiting fifteen years for the outcome of a longitudinal study of biohazards just doesn't make a lot of sense to them.
That brings up the political question. Partly because I'm no political scientist and like to reduce everything to vectors (at least that's what my wife says), I like to drive things to extremes in order to understand where we stand in the middle. On one extreme would be total non-regulation: anybody can make anything anywhere, and sell it to anyone, claiming anything for it, and let the buyer beware. I understand this state of affairs isn't too far from reality in parts of China nowadays. It's a pretty good environment for entrepreneurs, assuming they don't have to live downwind from a paper mill or something equally offensive. But the dangers to consumers are obvious.
The other extreme is complete and total "nanny-stateism" (hence the nanny in today's headline): no product is allowed into the hands of consumers until the manufacturer, presumed guilty of making something harmful, proves its innocence. Things are not quite this bad in some Scandinavian countries, but show signs of moving in that direction. At this extreme, companies give up on making money and spend their dwindling capital on safety studies that take years and let their competitors in less regulated regions beat them to the market. Clearly, this extreme isn't going to work very well either.
Being an engineer and not a political scientist, I tend to trust democracy to stumble around between these two extremes and find a middle road that is neither too negligent of the consumer's interests nor too stifling of the manufacturer's initiative. Nobody will be entirely happy with such a compromise, but that is how democracy works, or is supposed to work. In the past, it has taken a major tragedy, with people dying in large numbers from unusual causes, to motivate large-scale regulation of certain industries. That's too bad, from one point of view, but if the alternative is to regulate ourselves into the past and defer the use of any new nanotech products until we're absolutely, positively sure they're safe, then that's not so good either. Some studies by the Project on Emerging Nanotechnologies of the Woodrow Wilson International Center for Scholars indicate that no one—meaning no government agency charged with the responsibility—is overseeing the vast new field of consumer products that use nano-size particles. At the risk of annoying any libertarian readers of my blog, I would venture the opinion that at least somebody who is not beholden to manufacturers should look into this on a regular basis. But I would also venture that they shouldn't interfere with things until they find there is some reason to believe there is trouble brewing.
Sources: The Wilson Center website at http://www.wilsoncenter.org/ describes some of the work of their emerging nanotechnology project at http://www.nanotechproject.org/. This column was inspired by a piece in the Austin American-Statesman for Apr. 1, 2007 (p. A19) by Jeff Nesmith about the Wilson Center. Reviews of Radium Girls: Women and Industrial Health Reform, 1910-1935 by Claudia Clark (Chapel Hill, NC: Univ. of North Carolina Press, 1997), which I haven't read but would like to some day, can be found at the Amazon.com entry for the book.
Scientists have developed a special unit of measure for these things: the nanometer. One billion nanometers is a meter (which is a little longer than a yard, for you non-metric types). A human hair looks like the trunk of a redwood tree compared to a virus or an asbestos fiber, which can be as small as 10 nanometers in diameter. When things get that small, they start acting peculiar, because the graininess or lumpiness of matter begins to show up—the fact that it's made of atoms. This can be either very good or very bad, depending on what you're looking at. Take carbon nanotubes, for instance. These are tiny tubes that, if you could see them, would look like elegantly woven fabric, every atom in place. Atom for atom, if you pull on one of these tubes, it's much stronger than steel, and it can conduct electricity much better than copper, but only along the direction of the tube. This stuff has already made it into some commercial products, and hopes are that it will form the basis of entire new industries. Other nano-size chemicals and particles are finding their way into everything from electrical products to cosmetics. That's the good news.
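To put that redwood-tree comparison in numbers, here's a quick back-of-the-envelope calculation (the hair diameter is an assumed typical value of about 80 micrometers, not a figure from any particular study):

```python
# Comparing a human hair to a 10-nanometer fiber.
# The hair diameter below is an assumed typical value.
NM_PER_METER = 1e9          # one billion nanometers per meter

hair_m = 80e-6              # ~80 micrometers across (assumption)
fiber_m = 10e-9             # 10 nanometers, as for a thin asbestos fiber

hair_nm = hair_m * NM_PER_METER
ratio = hair_m / fiber_m

print(f"Hair diameter: {hair_nm:,.0f} nm")
print(f"Hair is roughly {ratio:,.0f} times wider than the fiber")
```

At roughly 8,000 to 1, the redwood-trunk image is no exaggeration.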
The possible bad news is, no one much is looking into the question of whether these tiny engineered particles are dangerous to living organisms, and in particular, people. So far, there hasn't been a tragedy involving artificial nanotech products along the lines of the "radium girls" disaster of the 1920s. But we don't know that it won't happen, either.
In some ways, radium was the nanotech of the early 1900s. Marie and Pierre Curie, radium's discoverers, were international heroes. Women who were hired to paint glow-in-the-dark numbers on watch and clock dials with radium-bearing paint thought they were lucky to be working with such exciting stuff. Some even used it as makeup and lipstick, which must have freaked out their boyfriends when they turned off the lights.
But within a few years, these women found out their jobs were no joking matter as many of them began to fall ill with liver problems, anemia, bone fractures, and rotting jawbones. The cause, of course, was the intense doses of radiation from the radium they absorbed in their bodies. Their employers initially denied any responsibility, the U. S. government declined to get involved, and it took years of persistent work by industrial pathologists, politicians, and others sympathetic to the workers' plight to get radium recognized for the terrible occupational hazard it was.
Are we facing a similar situation in the proliferation of nanotech products for consumers? There is a technical aspect and a political aspect to the question.
The technical aspect is, nobody knows for certain. But scientific knowledge isn't free: someone has to pay for tests, investigations, reports, and the other overhead stuff that goes along with finding out things these days. We know some things about nano-scale materials and how they interact with the nano-scale machinery of living cells, but certainly not everything. One reason nanotechnology and biotechnology are so attractive to researchers and investors is the fact that we don't know all about what goes on between these two areas, and so we're trying to find out. Absolute certainty that a product is free from any hazard to humans is not something we can usually obtain at a reasonable cost. The usual product testing will often show up prompt hazards (ones that don't take years to develop), and as for the others, well, since many companies operate on a six-month product cycle, waiting fifteen years for the outcome of a longitudinal study of biohazards just doesn't make a lot of sense to them.
That brings up the political question. Partly because I'm no political scientist and like to reduce everything to vectors (at least that's what my wife says), I like to drive things to extremes in order to understand where we stand in the middle. On one extreme would be total non-regulation: anybody can make anything anywhere, and sell it to anyone, claiming anything for it, and let the buyer beware. I understand this state of affairs isn't too far from reality in parts of China nowadays. It's a pretty good environment for entrepreneurs, assuming they don't have to live downwind from a paper mill or something equally offensive. But the dangers to consumers are obvious.
The other extreme is complete and total "nanny-stateism" (hence the nanny in today's headline): the manufacturer is presumed guilty until proven innocent, and no product is allowed to fall into the hands of the consumer until its maker proves it harmless. Things are not quite this bad in some Scandinavian countries, but show signs of moving in that direction. At this extreme, companies give up on making money and spend their dwindling capital on safety studies that take years and let their competitors in less regulated regions beat them to the market. Clearly, this extreme isn't going to work very well either.
Being an engineer and not a political scientist, I tend to trust democracy to stumble around between these two extremes and find a middle road that is neither too negligent of the consumer's interests nor too stifling of the manufacturer's initiative. Nobody will be entirely happy with such a compromise, but that is how democracy works, or is supposed to work. In the past, it has taken a major tragedy, with people dying in large numbers from unusual causes, to motivate large-scale regulation of certain industries. That's too bad, from one point of view, but if the alternative is to regulate ourselves into the past and defer the use of any new nanotech products until we're absolutely, positively sure they're safe, then that's not so good either. Some studies by the Project on Emerging Nanotechnologies of the Woodrow Wilson International Center for Scholars indicate that no one—meaning no government agency charged with the responsibility—is overseeing the vast new field of consumer products that use nano-size particles. At the risk of annoying any libertarian readers of my blog, I would venture the opinion that at least somebody who is not beholden to manufacturers should look into this on a regular basis. But I would also venture that they shouldn't interfere with things until they find there is some reason to believe there is trouble brewing.
Sources: The Wilson Center website at http://www.wilsoncenter.org/ describes some of the work of their emerging nanotechnology project at http://www.nanotechproject.org/. This column was inspired by a piece in the Austin American-Statesman for Apr. 1, 2007 (p. A19) by Jeff Nesmith about the Wilson Center. Reviews of Radium Girls: Women and Industrial Health Reform, 1910-1935 by Claudia Clark (Chapel Hill, NC: Univ. of North Carolina Press, 1997), which I haven't read but would like to some day, can be found at the Amazon.com entry for the book.
Tuesday, March 27, 2007
A Chip In Your Shoulder?
Back around 1987 or so, I walked by the bulletin board in the Department of Electrical and Computer Engineering at the University of Massachusetts Amherst and saw a letter with a note scrawled at the bottom, "Anybody want to help Ms. X?" A woman had written the letter to our department chair because we had a reputation for doing research in microwave remote sensing and the detection of radio waves. In the letter, she said that she was convinced the FBI had secretly embedded a radio-wave spying chip in her body. She did not go into details about the circumstances under which this had been done, nor did she say exactly where she thought the chip was. But she knew it was there, and she wanted to know if she could come to our labs to be examined by us with our sensitive equipment.
Needless to say, nobody took her up on her offer to be "examined," although her letter was the topic of some lunchtable conversation for the next few days. I understand that this sort of belief is not uncommon among individuals whom psychiatry used to term "paranoid," although I don't know what terminology would be used today. Well, yesterday's paranoid fear is today's welcome reality—welcomed by some, at least. The cover of the March 2007 issue of the engineering magazine IEEE Spectrum shows an X-ray montage of a young guy holding both hands up near the camera. In the X-ray images, two little sliver-shaped chips are clearly visible in the fleshy part of each hand between the thumb and forefinger.
Inside, the reader finds that Amal Graafstra, an entrepreneur and RFID (radio-frequency identification) enthusiast, thinks having RFID chips in each hand is just great. After convincing a plastic surgeon to insert the chips, which are a kind not officially approved for human use yet (they're sold to veterinarians for pet-tracking purposes), he rewired his house locks, motorcycle ignition, and various other gizmos that used to need keys or passwords. Now he can just make like Mandrake the Magician, waving his hand in front of his door or his motorcycle instead of hauling out keys. When he posted the initial results of his experiments on a website, he got all kinds of reactions ranging from essentially "Way to go, dude!" to negative comments based on religious convictions. As he explains, "Some Christian groups hold that the Antichrist . . . will require followers to be branded with a numeric identifier prior to the end of the world—the 'mark of the beast.' So I got some anxious notes from concerned Christians—including my own mother!"
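The access logic behind a hand-wave door lock is, in principle, just a set-membership check: the reader scans whatever tag ID is presented and compares it against the IDs the owner has enrolled. A minimal sketch follows; the tag IDs and function names here are hypothetical, invented for illustration, and are not taken from Graafstra's actual setup:

```python
# Hypothetical RFID allowlist check; all tag IDs below are made up.
AUTHORIZED_TAGS = {"04A2B9C311", "04F00D5E22"}  # IDs enrolled by the owner

def should_unlock(scanned_tag_id: str) -> bool:
    """Return True if the presented tag is on the owner's allowlist."""
    return scanned_tag_id in AUTHORIZED_TAGS

print(should_unlock("04A2B9C311"))  # enrolled tag -> True
print(should_unlock("DEADBEEF00"))  # unknown tag  -> False
```

Notice that nothing in this scheme is secret except the ID itself, so any reader that can coax a chip into announcing its number can impersonate its owner—a design choice that makes such chips easy to clone.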
Right after reading Mr. Graafstra's article, you can turn to a piece by Ken Foster and Jan Jaeger on the ethics of RFID implants in humans. (Full disclosure: I am acquainted with Prof. Foster through my work with IEEE's Society on Social Implications of Technology.) They dutifully point out the potential downsides of the technology, including the chance that what starts out as a purely voluntary thing, even a fashionable style among certain elites, might turn into a job requirement or something imposed by a government on citizens or aliens or both. They mention the grim precedent set by the Nazi regime when its concentration-camp guards forced every prisoner to receive a tattooed number on the arm. RFID chips are not nearly as visible as tattoos, but can contain vastly more information. Think of a miniature hard drive with your entire work history, your places of residence, your sensitive financial information and passwords, all carried around in your body and possibly accessible to anyone with the right (or perhaps wrong) equipment. Such large amounts of data cannot be stored in RFID chips yet, but if the rate of technical progress keeps up, it will be possible to do that soon. And following the rough-and-ready principle that anything which can be hacked, will be hacked, implanted RFID chips pose a great potential risk to privacy.
While Messrs. Graafstra, Foster, and Jaeger debate the pragmatic consequences of this technology, I would like to bring up something that the "Christian groups" alluded to, although they approached it in a way that is biased by some fairly recent innovations in Christian theology dating only to the mid-19th century.
A deeper theme that dates from the earliest Hebrew traditions of the Old Testament is the idea of the human body as a sacred thing, not to be treated like other material objects. The Old Testament prohibited tattoos, ritual cutting, and other practices common among ancient tribes other than the Israelites. The Christian tradition carried these ideas forward in various ways, but always with a sense that the human body is not simply a collection of atoms, but is a "substance" (a philosophical term) which stands in a unique relation to the soul.
The problem with trying to relate these ideas to modern practices is that hardly anybody, Christian, Jewish, or otherwise, pays any attention to them any more. What with heart transplants, cochlear implants, artificial lenses for cataract surgery, and so on, we are well down the road of messing with the human body to repair or improve its functions. And the fact that something is sacred does not necessarily mean that it cannot be touched or altered in any way. The best that I can extract from this tradition in regard to the question of RFID implants is to encourage people to give this matter special consideration. It's not the same thing as carrying around a fanny pack, or a key ring, or even a nose ring. Once it's in there, you've got it, and it can be anything from a minor annoyance to major surgery to get it out. My bottom line is that with RFID implants, you're messing with the sacred again. And there has to be some meaning to the facts that this general sort of notion was applied first on a large scale by one of the most evil governments of the twentieth century, and that it used to be an imaginary fear latched onto by mentally unbalanced individuals. Only, I don't know what the meaning is.
Sources: The articles "Hands On" by Amal Graafstra and "RFID Inside" by Kenneth Foster and Jan Jaeger appear in the March 2007 issue of IEEE Spectrum, accessible free (as of this writing) at http://spectrum.ieee.org/mar07/4940 and http://spectrum.ieee.org/mar07/4939, respectively.
Tuesday, March 20, 2007
Identities For Sale
Well, here's a way we can solve the trade imbalance between China and the U. S. According to Symantec, the computer-security company, the U. S. harbors more than half of the world's "underground economy servers"—computers that are used for criminal activities, including the control of other computers called "bots" without the knowledge or consent of their owners. And it turns out that about a fourth of all bots are in China. So we're using China's computers to steal money, data, and identities from around the world. And it's even tax-free, if the criminals who organize this sort of thing play their cards right. This market is running so well that you can buy a new electronic identity, complete with Social Security number, credit cards, and a bank account, for less than twenty bucks. Don't like who you are? Become someone else!
Lest anyone take me seriously, the above was written in the spirit of Jonathan Swift's "modest proposal" of 1729 to alleviate poverty in Ireland by encouraging families to sell their babies to be eaten. I do not think it is a good thing that we lead the world in the number of servers devoted to criminal ends. But it's a fact worth pondering, and one question in particular intrigues me: why is computer crime so organized and, well, successful in this country?
Part of the answer has to do with the extraordinary freedom we enjoy compared to many other countries, both in the economic and political realms. While businesspeople complain about Sarbanes-Oxley and other burdensome regulations here, they should compare these relatively mild restrictions with those in China or many countries in Europe, where red tape and bureaucracy, not to mention the occasional corrupt official, can bog down business deals and keep foreign firms away.
Another part of the answer has to do with the relative ease of committing computer crime, and the relative difficulty law enforcement officials have in catching bit-wise criminals. According to the Symantec report, which was summarized in an article on the San Jose Mercury News website, much of the code needed for criminal work was written during regular nine-to-five shifts. This indicates that the era of the late-night amateur hacker is giving way to the white-collar criminal who either does his work under the radar of a legitimate business, or simply sets up shop as a company whose activities are purposely vague to outsiders. And nothing could be more in keeping with modern U. S. business practices. It's easy to tell what goes on at a steel mill: there's smoke, flames, and railroad cars full of steel coming in and out. But you can walk into numberless establishments in office parks around this country, look around, even watch over somebody's shoulder, and you'll still have trouble figuring out what many of these outfits actually do.
And that's maybe a third reason the U. S. is so hospitable to computer crime: the ease with which you can hide behind anonymity here. In more traditional cultures, the loner is a rarity, and most people are tied to friends and relatives by networks of interdependent connections, obligations, and moral strictures. But here no one thinks badly of a person who lives alone in an apartment, works at a company called something like United Associated Global Enterprises, and keeps to himself. The fact that he is trading in millions of dollars' worth of stolen identities every week is known only to him and perhaps a few associates who could be scattered around the country or the world. Maybe the lack of distinctive identity that such bland, interchangeable surroundings impose on people who live and work in them makes it perversely attractive to deal in other people's identities, even for nefarious purposes.
Computer networks were designed in the early years by people who were, if not saints, at least folks who were very good at legitimate uses of computer technology, and they were dealing at first only with other people like themselves. There is a strong streak of idealism in many computer types, and that is one reason that many of them worked so hard to realize their ideal of a world community joining together on the Internet. But few of them had extensive experience with criminality, and so the possibility that someone might actually abuse this wonderful new system was not considered very seriously, in some ways. I speak as an amateur here, not as an expert. But the radically egalitarian structure of the Internet embodies a philosophy as much as it embodies a technical system.
There is no use crying over spilt idealism, and we have to deal with the way the Internet and computers are today, not the way they might have been if the founders had taken a less sanguine view of human nature when they set up the early protocols. I understand that sooner or later the Internet and its basic protocols will have to be overhauled in a far-reaching way. Maybe then we can put in some more sophisticated ways of tracking bad guys down, and of preventing the kinds of attacks that come without warning and shut down whole net-based businesses. But technology can take us only so far. As long as there are people using the Internet and not just machines, some of them are going to try to con, cheat, lie, and steal. The more that future systems are designed with that in mind, the better.
Sources: The Symantec report was summarized by Ryan Blitstein of the San Jose Mercury News on Mar. 19, 2007 at http://www.siliconvalley.com/mld/siliconvalley/16933863.htm. Jonathan Swift's "Modest Proposal," the heavy irony of which was completely missed by some of its first readers, is available complete at http://www.uoregon.edu/~rbear/modest.html.
Wednesday, March 14, 2007
Who Needs a Digital Life?
One day I rescued from the throw-out pile outside another professor's office a book entitled simply Computer Engineering, by C. Gordon Bell and two co-authors, all employees of the Digital Equipment Corporation. Published in 1978, it is a time capsule of the state of the computer art according to DEC, which around then was giving IBM a run for its money by coming out with the VAX series of minicomputers. This was just before the personal computer era changed everything.
This month I ran across the name Gordon Bell again, this time in the pages of Scientific American. By now, Bell is a vigorous-looking nearly bald guy with a strange idea that Microsoft, his current employer, has given him the resources to try out. After struggling to digitize his thirty years' career worth of documents, notes, papers, and books (including, no doubt, Computer Engineering), he decided in 2001 to not only go paperless, but to experiment with recording his life—digitally. The goal is to record and make available for future access everything Bell reads, hears, and sees (taste, touch, and smell weren't addressed, but I'm sure they're working on those too). The article shows Bell with a little digital camera slung around his neck. The camera senses heat from another body's presence or changes in light intensity, and snaps a picture along with time, GPS location, and wind speed and direction too, for all I know. So far this project has accumulated about 150 gigabytes in 300,000 records.
Two things are surprising about this. Well, more than two things, but two immediately come to mind. One is that 150 gigabytes isn't that much anymore. The computer I'm typing this on has a 75-gigabyte hard drive, and somehow or other I've managed to use 50 or so gigabytes already. Most of it is a single video project, and Bell admits most of his space is used by video. With new compression technologies video won't take up so much room in the future.
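The article's own figures also make for a quick sanity check: dividing the total storage by the record count gives the average record size (a rough average only, since video dominates the space and most records are far smaller):

```python
# Average record size in Bell's archive, from the figures in the article.
total_bytes = 150e9        # about 150 gigabytes
record_count = 300_000     # about 300,000 records

avg_megabytes = total_bytes / record_count / 1e6
print(avg_megabytes)  # -> 0.5, i.e. about half a megabyte per record
```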
The second surprising thing is, why would anybody want to do this? Yes, it is becoming technologically feasible in the last few years, but only head-in-the-sand nerds automatically assert that because we can do a technical thing, we must do it. I hope Mr. Bell is beyond that stage, but one wonders. One of Bell's main motivations was simply to be able to remember things he would otherwise forget. Things like, "Oh, what was the name of that old boy I worked on the VAX with in 1975?" If it was anywhere in his scanned-in paper archives, I guess he can find out now. But at what cost?
Cost for Bell is not that much of an obstacle, seeing as how Microsoft is behind the project, and in any case, if technology keeps heading in the same general direction, costs for this sort of thing will plummet and everybody down to kindergarteners will be able to carry around their digital lives in their cellphones, or cellphone earring, or whatever form it will take. But since this blog is about ethical implications of technology, let's look at just two for the moment: dependence and deception.
Nobody knows what will happen to a person who grows up never having to memorize anything. I mean, where do you stop? I don't need to remember my phone number, my digital assistant does that. I don't need to know the capital of South Dakota, my digital assistant knows that. I don't need to know the letters of the alphabet, my digital . . . and so on. At the very least, if we go far enough with digital-life technology, it will create a peculiar kind of mental dependence that up to now has been experienced only by people in iron lungs. When a technology becomes a necessity, and something happens to the necessity, you can be in deep trouble. So far the project doesn't seem to have done Gordon Bell any harm, except to have absorbed much of his time and energy for the last several years. But if this sort of thing becomes as commonplace as electric lighting (which did in fact revolutionize our lives in ways that are both good and bad), it would work changes in culture and human relationships that, at the very least, deserve a lot more thought and consideration than they have received up to now.
The second implication concerns deception. For practical purposes, there is no such thing as a networked computer system that is absolutely immune to jimmying of some kind: viruses, worms, falsification of data, and identity theft. Bell and Gemmell admit as much toward the end of the article when they talk about questions of privacy. If you think someone stealing your Social Security number is bad, wait till somebody steals that photo your digital assistant took of your "escort" on Saturday night in Las Vegas during that convention. Their proposed solution, as is typical with true believers of this kind, is more technology: intelligent systems to "advise" us when sharing information would be stupid. But what technology will keep us from being stupid anyway? And their solution to the storage of what they call "sensitive information that might put someone in legal jeopardy" is to have an "offshore data storage account . . . to place it beyond the reach of U. S. courts." It's so thoughtful of Scientific American to place in the hands of its readers such convenient advice about how to evade the law. This advice betrays an attitude that is increasingly common among certain groups who feel strongly that the digital community trumps all other human institutions, including legal and governmental ones.
Well, I'm glad Mr. Bell is still exploring the wonderful world of computers, even if his interests in the wider ranges of human experience appear not to have changed since his early days on the VAX project. Despite the tone of technological determinism in his article, I assert that the way digital lives will develop and be used is far from predictable, and it is even far from certain that it will happen at all. If the technology does become popular, I hope others will think more deeply than Bell and Gemmell have about the possible dangers and downsides.
Sources: "A Digital Life" by C. Gordon Bell and Jim Gemmell appeared in the March 2007 edition of Scientific American (pp. 58-65, vol. 296, no. 3).
Tuesday, March 06, 2007
The Ethics of Electronic Reproduction
Since so much of what we see, hear, read, and talk about has passed through digitization and cyberspace, it's easy to let that fact fade into the background and ignore the myriad of tricks that engineers have put into the hands of video editors, sound recording experts, and crooks. A story about sound-recording fraud with a neat ironic twist was reported recently by John von Rhein, the Chicago Tribune's music critic. It seems that one William Barrington-Coupe, the man behind a small record label called Concert Artists, wanted to make his concert-pianist wife Joyce Hatto look good, or at least sound good, on recordings issued under her name. So he "borrowed" recordings of famous pianists such as Vladimir Ashkenazy and altered the timing just enough to throw off suspicion that would arise if anyone noticed that Joyce Hatto's version of Rachmaninoff's Prelude in C sharp minor, for example, lasted two minutes and forty seconds, exactly as long as Vladimir Ashkenazy's. He did this digitally, of course, which is how he got caught.
Seems there is software out there that can compare the bits directly between two digital recordings. Although I don't know the details, I can imagine that a direct bit-by-bit comparison, even with digital time fiddling thrown in, could reveal copying of this kind much more positively than any subjective human judgment. Anyway, somebody tried it out on one of Joyce Hatto's Concert Artists CDs and found that the bits actually originated from the playing of Hungarian pianist Laszlo Simon. Confronted with the evidence, Barrington-Coupe confessed, making publicity of a kind he probably wasn't hoping for.
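The core idea can be illustrated with a toy sketch. The article doesn't name the software actually used, and real forensic tools rely on far more robust audio fingerprinting, but the principle is that a doctored copy still correlates almost perfectly with its source, while a genuinely independent performance doesn't:

```python
# Toy illustration (my own sketch, not the actual detection software):
# a rescaled copy of a "recording" correlates ~1.0 with its source;
# an unrelated "recording" correlates near zero.
import math
import random

def normalized_correlation(a, b):
    """Pearson correlation of two equal-length sample sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

rng = random.Random(0)
original = [rng.gauss(0, 1) for _ in range(2000)]   # stand-in for one recording
copy = [0.8 * x for x in original]                  # same performance, volume altered
other = [rng.gauss(0, 1) for _ in range(2000)]      # an independent performance

print(normalized_correlation(original, copy))   # near 1.0: same source
print(normalized_correlation(original, other))  # near 0.0: unrelated
```

Small timing shifts complicate things, which is why practical systems compare compact fingerprints rather than raw samples, but the statistical logic is the same.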
The Tribune critic von Rhein makes the point that this is only the most egregious case of the kind of thing that has been going on for generations: electronic manipulation of performances to make them sound better. "Better" can mean anything from editing out mistakes and poorly performed passages to complete voice makeovers that can make a raspy-voiced eight-year-old boy sound like Arnold Schwarzenegger. Von Rhein traces this trend back to the introduction of tape recording and its comparatively convenient razor-blade-and-cement editing techniques, but there's an even earlier example: reproducing piano rolls. As early as the 1920s, inventors developed a system that recorded not only the timing of keystrokes but their force, in sixteen increments from loud to soft, and reproduced these strokes on a fancy player piano that embodied elements of digital technology implemented with air valves and bellows. Famed artists such as George Gershwin recorded numerous reproducing piano rolls, whose dynamics sounded much better than the ordinary tinkly player pianos of the day. It is well known that these reproducing piano rolls were edited by the performers to remove imperfections and otherwise improve upon the live studio performance.
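The sixteen loudness increments of those 1920s reproducing pianos amount to quantizing a continuous keystroke force into discrete steps, essentially the same idea MIDI later adopted with 128 velocity levels. A hypothetical sketch (not a model of any particular historical mechanism):

```python
# Illustrative quantizer: map a normalized keystroke force onto one of
# sixteen discrete loudness levels, as a reproducing piano roll might.
# The function and its scale are my own illustration, not a documented spec.
def quantize_force(force, levels=16):
    """Map a force in [0.0, 1.0] onto one of `levels` discrete steps."""
    if not 0.0 <= force <= 1.0:
        raise ValueError("force must be normalized to [0, 1]")
    return min(int(force * levels), levels - 1)

print(quantize_force(0.0))   # 0  (softest)
print(quantize_force(0.5))   # 8
print(quantize_force(1.0))   # 15 (loudest)
```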
Most people who listen to music these days are at least vaguely aware that even so-called "live" recordings have been doctored somewhat, and few seem to care. When someone strays into outright fraud, as Mr. Barrington-Coupe did, most people would agree that this is wrong. But should we be free to take what naturally comes out of a piano or a horn and transmogrify it digitally any way we wish, while still passing it off as "live" or "original"?
The novelty of this sort of thing is largely illusory. If we pass from the auditory to the visual realm, there is abundant evidence that those who could afford to make themselves look better than reality have done so, all the way back to ancient Rome. Portrait painters, other artists, and craftspeople have always faced the dilemma of whether to be strictly honest or flattering to their subjects. If the subject also pays the bill, and if honesty will not make as many bucks as flattery, flattery often wins out. The fact that flattery can now be done digitally is not a fundamental change in the human condition, but simply reflects the fact that as our media change, we take the same old human motivations into new fields of endeavor and capability.
What is truly novel about the story of Barrington-Coupe and his wife Joyce Hatto is not the intent or act of fraud, but the way he was caught. It was said long ago that he who lives by the sword dies by the sword. It often turns out these days that he who attempts to deceive by digital means gets caught by means of digital technology as well. On balance, I don't think we have a lot to worry about concerning musicians who want to sound a little better than reality on their recordings. Those who would forbid them the use of digital improvements are to my mind in the same category as those who want to prohibit the use of makeup by women. Maybe there are good religious reasons for such a prohibition, but it would make the world a little less attractive. The greater danger of digital technology as applied to media appears to me to lie in the area of control by large, powerful interests such as corporations and governments. But that is a discussion for another time.
Sources: John von Rhein's article appeared in the Mar. 4, 2007 online edition of the Chicago Tribune at http://www.chicagotribune.com/technology/chi-0703040408mar04,1,3237470.story?coll=chi-techtopheds-hed (free registration required).
Tuesday, February 27, 2007
Cyberspace Anonymity: Good or Bad?
If you have been reading this blog for more than a few weeks, you may have noticed that I recently pulled off my mask of anonymity and posted my full name and location on it. That was a choice I made, and most choices have moral implications, if you look far enough. The internet offers abundant opportunities to those who wish to remain anonymous for whatever reason. Since the way it is engineered has contributed to this state of affairs, we are still in the realm of engineering ethics when we consider the implications of cyberspace anonymity.
In the last few days, I have been corresponding with a person halfway around the world, in Australia, about a laptop computer problem. He (I assume it's a he, although I might be wrong) and I have never met and will in all likelihood never meet in this life. But he's had the kindness to take note of my plea for help on a user's forum, and for the last three or four days we've each been posting a remark a day, me asking questions, him giving advice. I notice he usually posts around four in the afternoon his time, which is just a bit before I'll get on around six in the morning in Texas. So although the sun set decades ago on the British Empire, the sun never sets on this spontaneous two-person computer consulting organization, at least as long as it lasts. So far, I've found this to be a good and helpful interchange.
One of the issues he's helping me with is computer viruses. They are another product of the anonymity the Internet provides. As I've remarked elsewhere, many computer hackers don't view the theft of software (or the theft by virus and worm vandalism of other people's time and resources) in the same light that they'd view the act of walking into a convenience store and heisting a loaf of bread. One reason for that is you're much more likely to get caught with bread under your coat than you are to be caught with illegal software, mainly because transactions over the Internet are usually anonymous unless you go to the trouble to advertise who you are. If by some magic, the writers of viruses, worms, and all the other plague carriers of computerdom were brought into the same room with their victims, you'd need a plenty big room, for one thing. Some of the perpetrators might be shamed into confessing, but others might just brazen it out like juvenile delinquents everywhere, and deny it all. At the very least, though, the victims would have the perverse satisfaction of seeing the person who messed up their computer. If this kind of encounter happened on a regular basis, the number of virus-writers would probably decline, but not die out entirely. Unfortunately, I don't have that particular magic trick in my bag.
What you think about the anonymity of cyberspace depends on what you think about humanity. The (relatively few) hard-core materialists among us cannot make a principled distinction between the silicon-and-aluminum machines on which the meat machines communicate, and the meat machines themselves. It's all bits anyway, and so whether one meat machine "knows" who another meat machine is, doesn't really matter except for routine pragmatic reasons, which are the only kind of reasons there are. Those of us who see something unique and distinct about humanity also see something unique and distinct about one person getting to know another, and even about names themselves. In the Hebrew Bible, the knowledge of a person's name conveyed an almost magical power. At the burning bush, Moses asked God, ". . . when I come unto the children of Israel. . . and they shall say to me, What is his name? what shall I say unto them? And God said unto Moses, I AM THAT I AM: and he said, Thus shalt thou say unto the children of Israel, I AM hath sent me unto you." The fact that God told Moses His Name was the sign of a special relationship. And so it should be between people as well.
It doesn't particularly bother me that I don't know Mr. (or Ms.) Australia's name. Long before computers came along, people in cities got used to being served by employees whose names they didn't know. That may not be a good thing in itself, but if it's a moral wrong not to call a salesperson by name, it's one millions commit every day. Normal life for centuries has brought with it various degrees of interaction, from the most casual one-time encounter to the most exalted lifelong friendships and marriages. A life in which each of us knew the most intimate details of the lives of all our acquaintances would be like living on a small desert island with other castaways. We have unfortunately been exposed to the real-life consequences of that kind of life on reality-show TV, and it's obviously got its problems. On the other hand, a life lived with no marriage partner, no close friends, and no one who calls you by your first name would fall short of what most people consider a reasonably fulfilled existence.
Should we throw up our hands and say that cyberspace anonymity is neutral? Absolutely not. It depends on how it's used. If anonymity encourages otherwise shy people to risk more in the way of human encounters, then it may be a benefit. If a criminal uses it the same way he'd use a mask, then it's wrong. Anonymous criticism and anonymous hate mail, whether on paper or by email, are likewise wrong, or at least cowardly, although there may be extenuating circumstances, such as when whistleblowers fearing for their jobs expose corruption and wrongdoing anonymously on hotlines. And I include most spam in the cowardly category.
We can hide behind the masks we don online because we're having fun, or helping each other, or considering a more serious relationship, or trying to make a buck, or plotting to kill. If the Internet had been set up to be totally transparent—everyone knowing the identity of everyone else—it would be a very different place, and probably closer to that global village that Marshall McLuhan talked about. But probably our interactions on it would be very different too. And I might not have gotten any help for my computer problem—at least, not from Australia.
Sources: The Canadian social theorist and media critic Marshall McLuhan did indeed coin the phrase "global village," according to his son Eric, who writes about its origins at http://www.chass.utoronto.ca/mcluhan-studies/v1_iss2/1_2art2.htm.
Tuesday, February 20, 2007
Global Warming or Global Shaking? A Tale of Two Theories
On Dec. 26, 2004, the most deadly tsunami in recorded history struck the Indian Ocean, killing about 280,000 people. If there had been a warning system in place along the affected coastlines to move people to higher ground, many of those who died in the disaster might be alive today. Fortunately, the technology to detect tsunamis in deep water and relay the information to the proper authorities exists today. After the terrible lesson of 2004, many governments moved to improve their tsunami-warning capabilities, and this effort is already proving fruitful. But most people think earthquakes on land, which can be just as deadly as tsunamis, are inherently unpredictable. What if that isn't true? What if it turns out that we can predict earthquakes as reliably as tomorrow's weather—not perfectly, but well enough to give warnings about truly major earthquakes? Wouldn't that be worth a little time and attention?
One of the people who think so is Friedemann Freund, long associated with the NASA Ames Research Center at Moffett Field, California. Freund is a mineralogist who has never been afraid to go against the prevailing climate of opinion, even as a child growing up in post-World War II Germany. His interest in how rocks behave under the conditions of extreme temperature and pressure that exist deep below the earth's surface led him to the discovery that their electrical conductivity changes in unexpected ways. Freund believes his research is a key to understanding why attempts to predict earthquakes using electromagnetic measurements have failed to live up to early expectations. (For more details on this type of earthquake prediction, see the entry in this blog "Earthquake Prediction: Ready for Prime Time?" for Apr. 13, 2006.) When Freund's information about the way electric currents can pass through rocks is added to the current state-of-the-art theories, he believes it will open the way to a major advance in the technology and science of predicting earthquakes.
Freund's hopes may be realized, but leaving aside the technical questions of whether he is right, let's look at the degree of attention he and other earthquake-prediction scientists have received from the public, the politicians, and the media. Let's compare it to another scientific issue with global implications: global warming.
One rough way to compare general awareness of topics is to see how many results a given phrase returns on Google. The phrase "earthquake prediction" turns up about a million; "global warming" turns up 45 million. While all sorts of things influence these numbers, a difference that large means that a lot more people are thinking and writing about global warming than about earthquake prediction.
Now why is that? One reason has to do with the connection many scientists are making between the behavior of human beings—especially wealthy American human beings who drive gas-guzzling vehicles—and climate change. If we just hadn't burned all that fossil fuel, they are saying, we might not have to put up with hotter summers, stormier winters, and coastline property values going down (or up, depending on how close you are to the coast). And any great disaster for which we believe we are culpable even a tiny bit will get our attention more than something we can have no influence over. But that doesn't mean we should ignore other things that we might be able to do something about too.
Next, consider the quality of answers to two questions: (1) Has anybody died from global warming yet? (2) Has anybody died from earthquakes and tsunamis we failed to predict yet? Answers to (1) will be all over the map, depending on whether you attribute this famine or that flood to global warming or to other causes. Compared to that muddle, the answer to (2) is as clear as a diamond in brilliant sunlight. Yes, many thousands have died in earthquakes and tsunamis—deaths that might have been averted if we had possessed the means to predict these events. And with a fraction of the effort (and publicity) spent so far on global warming, the science of earthquake prediction could be much farther along than it is.
Part of engineering ethics, at least the way I view it, is to decide what technical matters deserve attention—what to do, as opposed to simply how to do it well, whatever it is. Professional inertia, which is a tendency of professions to circle the wagons whenever a cherished idea is threatened by an outsider, has slowed recognition of Freund's work and the work of others in earthquake prediction. I'm not saying the outsiders are right. But they deserve a much wider hearing, and encouragement in terms of funding and programs, than they've been getting so far. Even if spending money to look into earthquake prediction turns out to have been a bad bet, it is a wager society ought to make. And personally, I bet they are more right than wrong.
Sources: I thank Alberto Enriquez, the author of a recent IEEE Spectrum article on Freund's research, for drawing my attention to it. His article can be found at http://www.spectrum.ieee.org/feb07/4886 (free registration required for viewing). A nine-page thesis explaining some of Freund's recent ideas can be found at a website whose URL is so long I have to split it in half. You will have to copy and paste it into one line for it to work. Here are the pieces (no space between the two halves):
http://joshua-j-mellon.tripod.com/sitebuilder
content/sitebuilderfiles/Thesis_16Aug06.doc.
Tuesday, February 13, 2007
Non-Lethal Weapons, Part II: Taser, Anyone?
Up to now, the taser has been a device used mainly by law enforcement. It delivers a painful but allegedly non-lethal electrical charge that effectively disables an aggressor without permanent injury in the vast majority of cases. Since their introduction in 1993, hundreds of thousands of tasers have been sold to and used by police worldwide, and now Taser International is trying to enter the consumer market in a big way. In April, you will be able to buy a $300 unit called the C2, styled in a pink-and-black housing that makes it look more like a lady's shaver than a weapon.
An Austin American-Statesman report of Feb. 4, 2007 on the introduction of this latest taser model raises the question of safety. Is carrying around a high-voltage generator in your handbag really any better than packing a rod, as the saying goes? Even if the user doesn't harm himself or herself, are these devices really safe in both a technical and societal sense, or are they a step down the road to a police state where torture is routinely carried out by ordinary citizens?
Amnesty International seems to think tasers are a bad idea all around, and wants a moratorium on their sale. Not surprisingly, Taser's co-founder and CEO, Tom Smith, thinks a moratorium is a bad idea, since his company seems to be the main if not sole supplier of non-lethal electrical-shock devices for use on humans. What facts should guide one's decisions about these things?
Medically speaking, the taser people seem to be standing on pretty firm ground. Without going into a lot of details about amps, volts, watts, joules (not jewels, although it's pronounced the same way) and so on, I will simply say that the taser is carefully designed to deliver enough electrical energy to cause loss of voluntary control of the main skeletal muscles, but not enough to stop your heart or cause significant burns or other injuries typically associated with electrical shock. If you can't control your leg muscles, you fall down, which is the posture that police officers desire to see a recalcitrant subject in.
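To give a rough feel for why the energy levels matter, here is a back-of-the-envelope calculation in Python. Every number in it (energy per pulse, pulse rate, cycle length, and the defibrillator shock used for scale) is an illustrative assumption for the sake of the arithmetic, not a manufacturer specification:

```python
# Back-of-the-envelope comparison of the electrical energy a taser-style
# device delivers over one discharge cycle versus a cardiac defibrillator
# shock.  All figures below are illustrative assumptions, not specs.

ENERGY_PER_PULSE_J = 0.1   # assumed energy delivered per pulse, in joules
PULSE_RATE_HZ = 19         # assumed pulses per second
CYCLE_DURATION_S = 5       # assumed length of one discharge cycle, seconds

# Total energy is just pulse energy times pulse count.
total_energy_j = ENERGY_PER_PULSE_J * PULSE_RATE_HZ * CYCLE_DURATION_S

DEFIB_SHOCK_J = 200        # a typical external defibrillator shock, for scale

print(f"Total energy over one cycle: {total_energy_j:.1f} J")
print(f"Fraction of a defibrillator shock: {total_energy_j / DEFIB_SHOCK_J:.1%}")
```

Under these assumed numbers the whole five-second cycle delivers only a few joules—orders of magnitude below a deliberate cardiac shock—which is the general shape of the argument the manufacturer makes, even though the real device's waveform and delivered energy are more complicated than this sketch.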
Taser International has posted a disquieting video showing the CEO and other high-ups receiving taser shocks. The grimaces of agony and cries of anguish are not faked, but all of them lived to tell about it. When I did a brief web search for taser injuries, one of the first articles that came up was about a series of lawsuits filed against Taser International, not by criminals (who don't usually have the wherewithal to sue anyway), but by police departments whose members claimed they sustained heart damage and other injuries while demonstrating the taser during training sessions. That was a couple of years ago, and the company now has four-page warning statements on its website describing all the things to watch out for, from sprained ankles to heart attacks in people with pre-existing heart conditions.
For the sake of argument, say the taser is as safe as its maker claims, and the people who get tasered suffer no permanent harm in nearly all cases. Do we still want Joe and Jane Public walking around with a C2 model, even if it is equipped with identification confetti that sprays out so that any use of a taser by the wrong person can be traced?
I once knew a guy who was a truck driver by profession, plenty big enough to take care of himself in a barroom brawl. For a long time he carried a gun, but after a while he married a young Christian woman and decided to quit carrying it. I asked him why. He said he didn't like the way just having the gun on him changed his attitude toward people and situations. He didn't go into detail, but what he may have meant is that he had those first thoughts that must always come before someone actually uses a weapon: what if this happens? should I pull it out then? does this guy deserve to be shot? And I guess he just got tired of having those kinds of thoughts.
If tasers get wildly popular, you can count on more people misusing them, because despite all the training brochures and videos in the world, if a consumer buys a thing and throws the training material away, there's nothing to stop him. Fortunately, the consequences of misusing a taser are less severe than those of misusing a handgun. Wouldn't it be nice if we could replace all handguns with tasers? Unfortunately, we'd get right back into the arms race the minute somebody went out and got a handgun. So I think any hopes of getting criminals to use tasers instead of guns are fruitless, especially since tasers carry the anti-crime confetti feature.
From a historical point of view, tasers are an interesting step backward in the grand arms race that has been going on since the first caveman hit another caveman with a rock—or since Cain murdered Abel, if you please. It is an effort to find a kinder, gentler way to subdue your fellow man (or woman). I find it rather charming that the acronym "taser" is supposed to stand for "Thomas A. Swift's Electric Rifle." Tom Swift was the inventor hero of the eponymous series of adventure stories for boys that were popular in the early 1900s. In Tom Swift and his Electric Rifle (1911), Tom never actually deploys his weapon, which shoots ball-lightning-like glowing bombs, at another person. He hies himself off to Africa in an airship and shoots elephants instead. Taser co-founder Tom Smith must have had some familiarity with the series, which has attracted a kind of cult following among engineers and inventors over the years.
Tom Swift's world was a very black-and-white place, both in the racial sense and in the moral sense. In Tom Swift's world, the only people with tasers would be the good guys, who could always subdue the bad guys, save the girl, and return home in triumph to a hero's welcome. Let's hope that everybody who uses one lives up to that ideal—but let's also plan on what to do if they don't.
(Correction added 2/18/2007: A more careful re-reading of Tom Swift and His Electric Rifle reveals that Tom did indeed use his weapon against people, namely a tribe of entirely fictional three-foot-high natives covered with red hair. At first, he "regulated the charge" (p. 166) so as to stun, not kill, just like the modern taser, but toward the end of the book desperation overcame moderation and he blasted away at full power, bowling over hordes of the "red imps.")
Sources: The article "New Tasers Alarm Safety Advocates" by Joshunda Sanders appeared in the Austin American-Statesman print edition of Feb. 4, 2007, on the front page. Taser International's website is at www.taser.com. The article describing the lawsuits against Taser International appeared in August 2005 in the Arizona Republic and is found at http://www.azcentral.com/arizonarepublic/local/articles/0820taser20.html. Medical information about typical taser injuries can be found in an article by Sir (first name, maybe?) Scott Savage at http://www.ncchc.org/pubs/CC/tasers.html. And Wikipedia has a nice, though apparently controversial, article on the Tom Swift series.
Tuesday, February 06, 2007
Non-Lethal Weapons, Part I: Ray Gun or Ray Howitzer?
First, some housekeeping items. When I began this blog nearly a year ago, I hid behind a screen of anonymity because I was afraid of negative repercussions that might arise from incautious words I might write. Recently, eminent engineering ethics expert Steve Unger at Columbia University wrote me that he is thinking of starting a blog, and wanted to know why I didn't put my real name on mine. (He knows who I am because my emails all have a tag line with the blog's URL in it.) I thought about it and couldn't give him a good reason, so as of today my profile and the header show my real name. As always, comments are welcome. If you have sent me a comment and I haven't replied to you, it's because the blog machinery doesn't inform me of your email address. If you would like me to be able to contact you, send an email to kdstephan@txstate.edu at the same time you add a comment to this blog, and I'll be able to respond.
Now for the first-ever two-part series in this blog: non-lethal weapons. I thank George Michael Sherry of Fort Worth, Texas for bringing my attention to an Associated Press article that was carried on MSNBC on Jan. 25, 2007. According to this report, the ray gun of science-fiction legend has arrived. It takes the form of a truck that carries a kind of radar-antenna thing about fifteen feet high. Even if you're as far away as five hundred yards, the thing's beam can make you feel like you're on fire. No actual fire results, because the total amount of power involved is limited. A video clip shows a civilian—possibly a reporter—standing in a field at Moody Air Force Base outside Valdosta, Georgia. All of a sudden he jumps like a snake bit him, and starts to laugh, aware of how foolish he looks.
As a microwave engineer, I viewed these proceedings with decidedly mixed emotions. On the one hand, my pure-engineer side rejoiced to see some familiar old technology being used in a novel and possibly helpful way. The energy used—94-GHz millimeter waves—is something I have known about and done research with for years, although at a lower power level than what the military is using in the alleged ray gun. They have taken a high-power source—probably a vacuum tube of some kind—and focused the energy in a narrow beam that probably covers a few dozen yards' worth of people at a distance of 500 yards. Full disclosure requires me to say that about twenty years ago, I received some research funds from Raytheon Corporation, which built the unit used in these tests. The technology to do this has been around for years, if not decades, but perhaps the will to try this or the funding was lacking until now.
Before we get to the ethical issues, my pure-engineer side has some questions, though. I thought a ray gun was supposed to fit in your pocket. A more apt term for this thing is "ray howitzer," a howitzer being a piece of field artillery larger than a single man can conveniently carry. Not only does this gizmo require a large truck to haul it around (and probably a multi-kilowatt generator buzzing away somewhere), but because of fundamental physical laws, there is very little chance that they'll ever be able to make it much smaller than it is now. If they tried, the beam would spread out to where you'd be as likely to shoot yourself as anybody else nearby. And then there's the cost. The article didn't mention how many tax dollars the project used up, but unless vacuum-tube millimeter-wave technology has had some dramatic breakthroughs lately (and I haven't heard of any), you can bet that even in production-quantity runs this ray gun would set you back many hundreds of thousands of dollars apiece, if not more. And while a spokesperson for the military refused to comment on whether the rays would penetrate glass, I can say without fear of contradiction that it depends. What I can say for sure is that even a thin sheet of metal such as aluminum foil will block the rays completely. While you might look silly walking around in an aluminum suit, you'd have no worries about being zapped by the millimeter-wave ray howitzer.
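The physical law at work here is ordinary diffraction: an aperture of diameter D radiating at wavelength λ produces a beam roughly 1.22·λ/D radians wide, so a smaller dish means a wider, weaker beam. The sketch below runs the numbers for 94 GHz; the two antenna diameters are assumptions chosen for illustration, not the actual dimensions of the Raytheon unit:

```python
# Diffraction-limited beam spread at 94 GHz: a rough sketch of why a
# pocket-sized "ray gun" can't keep a tight beam at long range.  The
# 1.22*lambda/D beamwidth is a textbook approximation, and the two
# aperture sizes below are assumed for illustration only.

C = 3.0e8                   # speed of light, m/s
FREQ_HZ = 94e9              # 94-GHz millimeter waves
WAVELENGTH_M = C / FREQ_HZ  # about 3.2 mm

RANGE_M = 457               # roughly 500 yards

def spot_diameter(aperture_m, range_m=RANGE_M):
    """Approximate beam diameter at range: theta ~ 1.22 * lambda / D."""
    theta = 1.22 * WAVELENGTH_M / aperture_m  # beamwidth in radians
    return theta * range_m

truck_dish = spot_diameter(2.0)    # assumed truck-mounted 2-m dish
pocket_dish = spot_diameter(0.05)  # assumed 5-cm pocket-sized aperture

print(f"2-m dish:  beam roughly {truck_dish:.1f} m across at 500 yd")
print(f"5-cm dish: beam roughly {pocket_dish:.0f} m across at 500 yd")
```

Under these assumptions, the truck-sized dish keeps the beam to about a meter across at 500 yards, while the pocket-sized aperture spreads it over tens of meters—spraying everyone nearby, including the shooter, with a correspondingly diluted dose.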
Now for the ethical questions. The issue of whether non-lethal weapons should be used at all is an interesting one, but there is not space here to give it justice. My main question in this area is, does the use of this device truly have no long-term health effects? Over the years there have been several studies that link exposure to high-power microwaves with the growth of cataracts in the eye. The prevalence of convenient and effective cataract surgery these days doesn't mean that we should quit worrying about giving people cataracts. It's a legitimate question whether exposure to just one "zap" from the ray howitzer could cause enough eye damage to lead to cataracts. That is a technical question for the appropriate experts, but I raise it here simply because it may not have been asked yet.
All things considered, I don't believe we have a lot to fear from people with ray guns roaming the streets and towns of America. I will be surprised if Raytheon or anybody else can make this technology cheaply enough for it to pose a threat to water cannons, tear gas, or other popular means of dispersing angry crowds. If my experience as a lecturer on microwave engineering is any guide, you could inspire a set of rioters with the same intense longing to be somewhere else that the ray howitzer inspires, by trying to teach them the Fourier transform that relates the size of the machine's dish to the size of the beam. And the lecturer would come a lot cheaper.
Sources: The ray gun article appeared on the MSNBC website at http://www.msnbc.msn.com/id/16794717/wid/11915829?GT1=8921. Information on the relation between cataracts and microwaves can be found at places such as the Communications Workers of America website (http://www.cwa-union.org/issues/osh/articles/page.jsp?itemID=27339127) and an index of research by professor of history Nicholas Steneck on the hazards of microwave radiation (http://myweb.cableone.net/mtilton/steneck.html). It appears that "normal" exposure to microwaves and radio-frequency radiation has few if any reproducible clinical effects, although many experts disagree on the conclusions that should be drawn from the abundance of research.
Now for the first-ever two-part series in this blog: non-lethal weapons. I thank George Michael Sherry of Fort Worth, Texas for bringing my attention to an Associated Press article that was carried on MSNBC on Jan. 25, 2007. According to this report, the ray gun of science-fiction legend has arrived. It takes the form of a truck that carries a kind of radar-antenna thing about fifteen feet high. Even if you're as far away as five hundred yards, the thing's beam can make you feel like you're on fire. No actual fire results, because the total amount of power involved is limited. A video clip shows a civilian—possibly a reporter—standing in a field at Moody Air Force Base outside Valdosta, Georgia. All of a sudden he jumps like a snake bit him, and starts to laugh, aware of how foolish he looks.
As a microwave engineer, I viewed these proceedings with decidedly mixed emotions. On the one hand, my pure-engineer side rejoiced to see some familiar old technology being used in a novel and possibly helpful way. The energy used—94-GHz millimeter waves—is something I have known about and done research with for years, although at a lower power level than what the military is using in the alleged ray gun. They have taken a high-power source—probably a vacuum tube of some kind—and focused the energy in a narrow beam that probably covers a few dozen yards' worth of people at a distance of 500 yards. Full disclosure requires me to say that about twenty years ago, I received some research funds from Raytheon Corporation, which built the unit used in these tests. The technology to do this has been around for years, if not decades, but perhaps the will to try this or the funding was lacking until now.
Before we get to the ethical issues, my pure-engineer side has some questions, though. I thought a ray gun was supposed to fit in your pocket. A more apt term for this thing is "ray howitzer," a howitzer being a piece of field artillery larger than a single man can conveniently carry. Not only does this gizmo require a large truck to haul it around (and probably a multi-kilowatt generator buzzing away somewhere), but because of fundamental physical laws, there is very little chance that they'll ever be able to make it much smaller than it is now. If they tried, the beam would spread out to where you'd be as likely to shoot yourself as anybody else nearby. And then there's the cost. The article didn't mention how many tax dollars the project used up, but unless vacuum-tube millimeter-wave technology has had some dramatic breakthroughs lately (and I haven't heard of any), you can bet that even in production-quantity runs this ray gun would set you back many hundreds of thousands a piece, if not more. And while a spokesperson for the military refused to comment on whether the rays would penetrate glass, I can say that without fear of contradiction, it depends. What I can say for sure is that even a thin sheet of metal such as aluminum foil will block the rays completely. While you might look silly walking around in an aluminum suit, you'd have no worries about being zapped by the millimeter-wave ray howitzer.
Now for the ethical questions. The issue of whether non-lethal weapons should be used at all is an interesting one, but there is not space here to do it justice. My main question in this area is, does the use of this device truly have no long-term health effects? Over the years there have been several studies that link exposure to high-power microwaves with the growth of cataracts in the eye. The prevalence of convenient and effective cataract surgery these days doesn't mean that we should quit worrying about giving people cataracts. It's a legitimate question whether exposure to just one "zap" from the ray howitzer could cause enough eye damage to lead to cataracts. That is a technical question for the appropriate experts, but I raise it here simply because it may not have been asked yet.
All things considered, I don't believe we have a lot to fear from people with ray guns roaming the streets and towns of America. I will be surprised if Raytheon or anybody else can make this technology cheaply enough for it to pose a threat to water cannons, tear gas, or other popular means of dispersing angry crowds. If my experience as a lecturer on microwave engineering is any guide, you could inspire a set of rioters with the same intense longing to be somewhere else that the ray howitzer inspires, by trying to teach them the Fourier transform that relates the size of the machine's dish to the size of the beam. And the lecturer would come a lot cheaper.
Sources: The ray gun article appeared on the MSNBC website at http://www.msnbc.msn.com/id/16794717/wid/11915829?GT1=8921. Information on the relation between cataracts and microwaves can be found at places such as the Communications Workers of America website (http://www.cwa-union.org/issues/osh/articles/page.jsp?itemID=27339127) and an index of research by professor of history Nicholas Steneck on the hazards of microwave radiation (http://myweb.cableone.net/mtilton/steneck.html). It appears that "normal" exposure to microwaves and radio-frequency radiation has few if any reproducible clinical effects, although many experts disagree on the conclusions that should be drawn from the abundance of research.
Tuesday, January 30, 2007
The Engineer and The Public: How's That Again?
The Institute of Electrical and Electronics Engineers (IEEE) is probably the largest society of engineering professionals in the world, with over 300,000 members worldwide. Its Code of Ethics has a little-known clause in which IEEE members agree to "improve the understanding of technology, its appropriate application, and potential consequences." My father used to greet me as I came home from school with the question, "And what did you do to make the world a better place today?" I could equally well ask the question of engineers, "What did you do to improve the public's understanding of technology today?"
People called applications engineers do that all the time, but strictly in the context of helping their firm's customers use its products. But I don't think that's all the drafters of the Code had in mind. By virtue of our specialized knowledge, engineers are under an obligation to the public to spread the truth about technology and to counter fraud and fakery wherever found. This may be one reason you don't find more engineers in politics.
In fairness to politicians, many of them try their hardest to understand technical concepts with important political implications, and to express what they see as their essentials to the public. One such attempt which I think succeeded pretty well was published in the Jan. 30 Austin American-Statesman as an editorial by U. S. Rep. Silvestre Reyes (D-El Paso). The occasion is a plan promoted by the Republican governor of Texas to build 18 more coal-fired power plants in the state. Hold on a minute, says Rep. Reyes, we have better things in store being developed right here at Ft. Bliss, where the Army has some laboratories engaged in something called "Power the Army!" The exclamation point must mean they're serious.
If you've ever been to West Texas, you will know that the ironically-named Ft. Bliss is a good place to test systems that need to work well in dry, hot, desert-like conditions. Today's electronically-intensive military can't just find the nearest wall outlet to plug their equipment into. Traditionally, they have had to lug along heavy, expensive, noisy, inefficient diesel generators and the thousands of gallons of fuel needed to run them. So the Army has perhaps a greater motivation than the rest of us to find ways to make electric power from solar energy, of which there is plenty in dry deserts.
Most solar power research has focused on bringing down the cost of the solar cells themselves, which despite much progress over the years are still about twice as expensive as conventional sources. Judging by their website, the "Power the Army!" project engineers have turned to a neglected aspect of solar electric power, what is technically termed "power conditioning."
Like most other commodities, electric power has to meet certain standards to be used. Voltage is an important characteristic for power: if your car battery voltage falls below a certain point, your car won't start. If the voltage delivered to your house changes more than a percent or so suddenly, your lights flicker. It turns out that the raw electric power from solar cells is not in very good shape: it varies from moment to moment with cloud cover, from day to day with solar angle, and depends on temperature and other factors. Until recently, developers of solar panels more or less took what they could get, but evidently the Army initiative is working to develop very sophisticated power-conditioning modules that are small enough to fit on each yard-square panel, and are centrally computer-controlled for optimum efficiency. Together with DC-to-AC inverters of improved design, the Army hopes to deliver solar power at half the cost that prevails today.
That's the way an electrical engineer writing for the public would put it. Now read how Rep. Reyes says essentially the same thing:
"The program uses three components: the extractor, which extracts electrons from solar panels rather than the sun having to push them out of the panels; an inverter, which converts direct current (DC), which solar panels provide, into alternating current (AC), which we actually use, at very high efficiency; and a control system to regulate the process."
How do you like that? I think it's great. The bit about "extracting" electrons instead of making the sun push them out, technically speaking, is close to nonsense. But it gets the overall point across, which is that the system works better by doing something actively which up to now has been accomplished passively. And it was written (or commissioned—Rep. Reyes probably had some help) by a former immigration official with a degree in criminal justice who has taken the trouble to learn enough about an important technical matter to bring it to the public's attention.
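For what it's worth, the "extractor" Rep. Reyes describes is probably doing something like maximum power point tracking (MPPT), the standard active technique for pulling the most power a panel can deliver as sunlight and temperature change. Here is a hedged sketch of the common perturb-and-observe method; the toy panel model is my own assumption, not anything from the Army program:

```python
# Perturb-and-observe maximum power point tracking (MPPT), one common form
# of the "active extraction" a solar power conditioner performs.
# The panel model below is a toy assumption, not real hardware.

def panel_current(voltage, irradiance=1.0):
    """Toy photovoltaic I-V curve: roughly constant current that collapses
    as the voltage approaches the open-circuit value (20 V here)."""
    if voltage >= 20.0:
        return 0.0
    return irradiance * 5.0 * (1.0 - (voltage / 20.0) ** 8)

def track_max_power(v_start=10.0, step=0.1, iterations=200):
    """Nudge the operating voltage; keep moving in whichever direction
    increased the delivered power, reversing when power drops."""
    v = v_start
    direction = 1.0
    last_power = v * panel_current(v)
    for _ in range(iterations):
        v += direction * step
        power = v * panel_current(v)
        if power < last_power:
            direction = -direction  # overshot the peak; turn around
        last_power = power
    return v, last_power

v_mp, p_mp = track_max_power()
print(f"settled near {v_mp:.1f} V, delivering about {p_mp:.1f} W")
```

A real controller runs a loop like this many times a second, and the centrally computer-controlled per-panel modules the Army describes presumably coordinate many such loops at once.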
Few engineers go into fields where they communicate routinely with the general public. But some of those who do have done quite well. The civil engineer Henry Petroski has written many books that make the practice of engineering at least comprehensible, and sometimes interesting and even dramatic. The independent journalist Keith Snow was once a student of mine in electrical engineering, and although his work no longer relates only to technology, the honesty and attention to detail he learned in school have served him well in his present position. An engineering education can be used for a variety of things besides straight design engineering. Perhaps the world would understand more about what engineers do, if more engineers decided to obey that obscure clause in the code of ethics about helping the public understand technology.
Sources: The editorial by Rep. Reyes appeared on p. A9 of the print edition of the Austin American-Statesman. The "Power the Army!" project has a website at http://gina.nps.navy.mil/Projects/PowerTheArmy/tabid/61/Default.aspx. The IEEE Code of Ethics is available at http://www.ieee.org/portal/pages/about/whatis/code.html.
Wednesday, January 24, 2007
Googling Fame: Who's In Charge?
First, I will heed the proverbial warning not to bite the hand that feeds you, or in this case, the company that provides my blog free of charge. Google, that huge, somewhat mysterious entity run by a couple of thirty-somethings who are (I read recently) two of the most admired people in America, said they would let me blog here for free, and would provide easy-to-use facilities for setting up my blog and running it. Almost without exception, they have kept their word, whoever they are. I don't have to have ads on my blog unless I choose to, the system is as easy to use as they said, and in sum, my limited experience with the organization has been almost uniformly positive. And to make things even better, after nearly a year of blogging here, I find that if you type "engineering ethics blog" into Google's search engine, the first thing that comes up is this blog. Not only that, but among the next few results are references to this blog at the University of Illinois—Urbana-Champaign and Illinois Institute of Technology. (If you type just "engineering ethics," it shows up too, but not till the fourth page.)
Now before I start preening in public, I should let you know that I have friends at UIUC and IIT, and I'm almost certain that these friends are the reasons for the references to my blog at those institutions, not the fact that Google points here. But why would you or I or anybody else care about the fact that something you write shows up on Google's search engine?
The answer is obvious to anyone who is at all familiar with the way search engines work these days. In contrast to the early days five years or so ago, when a query for "dog houses" would turn up everything from frankfurters to Manhattan real estate, search engines today use techniques that not only turn up the most relevant results first, but also rank them according to popularity. Popularity is easily measured by the frequency with which people go to certain sites referred to by the search engine, and possibly by other means of which the non-computer scientist writing this blog is ignorant. (It's amazing—and sometimes a little frightening—what people can know about your web habits with the right software.)
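The best-known published example of such a ranking technique is Google's original PageRank, which scores a page by how much of the web's link structure points at it. Whatever Google runs today is surely far more elaborate, but the core idea fits in a few lines; the four-page "web" below is invented for illustration:

```python
# Minimal PageRank-style link popularity ranking - the published ancestor
# of whatever Google actually runs today. The four-page web is made up.
links = {
    "blog":    ["uiuc", "iit"],
    "uiuc":    ["blog"],
    "iit":     ["blog"],
    "obscure": ["blog"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Each page repeatedly shares its rank among the pages it links to;
    the damping factor models a reader who occasionally jumps at random."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

ranks = pagerank(links)
for page in sorted(ranks, key=ranks.get, reverse=True):
    print(f"{page:8s} {ranks[page]:.3f}")
```

On this toy web, "blog" ends up ranked first, since three of the four pages link to it, which is exactly why inbound links from places like UIUC and IIT matter.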
In the nature of things, with the billions of interactions Google handles each minute, the vast majority of what it does must be automated, in the sense that no human being is directly aware of or dealing with the activity. Somewhere on top of all the software is a cadre of superintendents who set policy for the system, but surely can't deal with it down on the level of individual rankings of individual search items, unless there is some kind of crisis or legal problem that requires manual intervention.
In the pre-web days, the closest analogy I can think of to this kind of thing is newspaper and magazine columns. Back then, real money had to be involved, either as payment from a publisher or as a self-publishing venture, before a person could set himself up to give advice in print to the public. This was a large barrier, but it also spared the public (at least in non-Communist countries) from stuff that nobody wanted to read. (I make an exception for Communist countries, because if for example Kim Jong-Il wishes to enlighten his citizens with a five-page editorial or a three-hour TV speech, nobody can stop him.) The keepers of these barriers were editors, people who had some judgment about what might attract readers and what ought to be put before the public.
Things are different now, sort of. Take for example a blog that you locate through Google's search engine. Instead of a newspaper editor who judiciously (or sometimes injudiciously) places before your breakfast ham and eggs a carefully selected column, in searching for a blog on a given subject you turn the task of discrimination over to whoever—or whatever—at Google decides how things are ranked in a search. Because Google is not (and probably couldn't be) totally forthcoming about how they do this, or who is responsible, you just have to take what you get. Of course you don't have to be satisfied with it if you don't like it, and it's not like you've paid anything (although you will be exposed to ads somewhere along the way—Google has to pay the bills somehow). But at least in principle, if you disagreed with an editor's choice of column, or choice of words in an editorial, you could write a letter to the editor in the time-honored way, and maybe he would print it. If you don't like what a search engine does, especially if it's Google, I'm not sure what recourse you could find, other than hiring a lawyer. And that is so trite nowadays.
How is this related to engineering ethics? I'm simply pointing out that engineers (software engineers, yes, but they like to be called engineers too) have created a new mass medium with fundamentally different rules. Communications technologies frequently get a free ride in engineering ethics courses because of the idea that communication between people is the responsibility of the people, not the medium. That is true up to a point. But when a technical medium is used by millions of people every day and exerts a powerful influence on what they read and how they view the world, the engineers in charge are making ethical choices in the way they design search engines, whether they realize it or not.
In an earlier column (Mar. 30, 2006), I raked Google, Yahoo, and Microsoft over the coals (gently) for bending their rules about freedom of speech to fit the constraints imposed by the People's Republic of China in order to operate there. Clearly, suppressing blogs on freedom and democracy in China is an extreme example of the power of software engineers to manipulate public opinion. And it's very unlikely (although possible) that anything to do with a search engine will result in deaths or injuries, which is generally what it takes for an engineering ethics matter to make headlines. But the power is there, and software engineers at Google and everywhere should give some thought as to how to use it responsibly.
Sources: I thought I could find a reference confirming what I read somewhere about Google founders Larry Page and Sergey Brin being some of the most admired heroes by people under thirty, but Google has failed me—for once. Or maybe they're just being modest.
Wednesday, January 17, 2007
The Electric Car Arrives—Again?
In 1990, General Motors Chairman Roger Smith announced that his firm was developing an all-electric car for the consumer market, partly in response to a California law mandating the sale of zero-emission vehicles in the future. Six years later, the EV1 made its debut in California and Arizona. Only about a thousand were made, and technically you could never own one—GM allowed only leases. In 2002, concluding that the program had failed, GM demanded the return of the vehicles, much to the dismay of some loyal EV1 drivers who saw the move as a back-door way to show that electric vehicles were still impractical. Just last week, GM announced at the Detroit International Auto Show that it plans to get back into the electric-car business with the Chevrolet Volt, a home-chargeable battery-operated model that carries a small gasoline engine. Should we believe them this time?
In fairness to GM, whose well-known financial woes have more to do with pensions and a glut in the world auto market than missing advances in technology, selling electric cars to everybody will be hard. Technologically, it is oversimplifying to think of cars as either "electric" or "gasoline." A better way is to ask what percentage of the total stored energy on board is in the battery or the gas tank. Any car that doesn't have to be cranked by hand is slightly "electric" in this sense: what's that battery for, if not to supply stored energy to start the engine? The hybrids that Toyota and Honda have marketed with great success up the battery-energy percentage to the 20%-30% range. If you run out of gas in a Prius, you won't get very far, but you'll get farther than you will in an Edsel. The new Volt that GM announced moves most of the way toward all-electric. Its large battery will store perhaps as much as 50% of the total energy on board. GM expects that normal commuter usage will draw only on the energy stored in the battery, with the gasoline engine kicking on only for long trips. This will allow people to charge the car overnight at home from the electric grid, which has great systemic advantages over conventional hybrids. Eventually, we may see cars with onboard fuel cells that circumvent the thermodynamic limitation on efficiency that internal combustion engines suffer. These could use hydrogen or possibly biofuels, and would go most of the way toward eliminating harmful tailpipe emissions.
If electric cars are so great, why aren't we all driving them? For as long as the electric car idea has been around, the chief obstacle to progress has been the battery. Pound for pound, gasoline contains nearly five hundred times as much energy as a fully charged lead-acid battery. And even the most advanced (and expensive) nickel-hydride batteries are only four times better than lead-acid, leaving gasoline way ahead.
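The "five hundred times" figure is easy to check against ballpark energy densities. The exact numbers below are illustrative textbook values, chosen to be consistent with the ratios quoted above rather than taken from any particular datasheet:

```python
# Ballpark specific energies, in watt-hours per kilogram. Illustrative
# textbook-style values; the proportions are what matter here.
ENERGY_DENSITY_WH_PER_KG = {
    "gasoline":       12900,  # chemical energy, before engine losses
    "lead-acid":         26,
    "nickel-hydride":   104,  # "four times better than lead-acid"
}

gasoline = ENERGY_DENSITY_WH_PER_KG["gasoline"]
for battery in ("lead-acid", "nickel-hydride"):
    ratio = gasoline / ENERGY_DENSITY_WH_PER_KG[battery]
    print(f"gasoline stores roughly {ratio:.0f}x the energy per pound of {battery}")
```

Even generous assumptions about battery chemistry leave gasoline far ahead, which is one reason the Volt keeps a gasoline engine on board at all.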
That's the technology in a nutshell. Now, what should engineers be doing with it? Recent advances in materials science and engineering have improved batteries to the point that they are practical—but still expensive—in hybrid vehicles like the Prius. We will have to wait and see if GM, or anyone else, can make and use batteries that are good, reliable, and cheap enough to provide the main source of energy for a commuter-type vehicle that is charged overnight. But growing in importance to overshadow these technical factors is the human appeal factor.
The human appeal factor has to do, not with the technology itself, but how people perceive it. For example, you can show through chemical analysis that some organically-grown food products are scientifically indistinguishable from their non-organic counterparts. Knowing this, some people will still buy organic products. You can view their purchases as a kind of vote in the marketplace for a certain way of living. The human-appeal factor is in play when people bypass clothing made under sweatshop conditions for essentially the same quality of clothes (at higher prices) made under better labor conditions.
With all the problems in the Mideast and other oil-producing regions, more people are making the connection between the kind of car they drive and the international political situation. Engineers who ignore this objective, testable fact (if poll results can be said to be objective and testable!) and concentrate only on some engineering-friendly factor such as efficiency or cost, will find themselves missing a few boats on down the line, if not right away.
Should all engineers be political wonks instead? By no means! Generally speaking, the kind of personality who finds delight in making and dealing with things is not all that well suited to a life in politics, although there are exceptions. But a technologist who ignores the desires and perceptions of the marketplace, and the political and social effects of a technology, is missing an important part of the picture, a part no less important than the technical aspects.
Good people can differ over the questions of whether electric cars should be in our future, whether the marketplace or the legislatures should decide this question, and whether GM is serious this time or just has another trick up its collective sleeve. But to ignore all but the technical aspects of the questions is to lose a little of your humanity, and to become a little more like the machines you are designing.
Sources: An article on the introduction of the Volt and related electric-car news was written by John O'Dell of the Los Angeles Times, and appeared in the Boston Globe online edition on Jan. 14, 2007 at http://www.boston.com/cars/news/articles/2007/01/14/vehicles_of_the_future_likely_to_be_more_plugged_in/. An advocacy group for electric vehicles maintains a website at www.pluginamerica.com. The data on the comparable energy content of batteries and gasoline was obtained from a table at http://everything2.com/index.pl?node=energy%20density. You can see a picture of the Smithsonian's EV1 at http://americanhistory.si.edu/ONTHEMOVE/collection/object_1303.html.
In fairness to GM, whose well-known financial woes have more to do with pensions and a glut in the world auto market than missing advances in technology, selling electric cars to everybody will be hard. Technologically, it is oversimplifying to think of cars as either "electric" or "gasoline." A better way is to ask what percentage of the total stored energy on board is in the battery or the gas tank. Any car that doesn't have to be cranked by hand is slightly "electric" in this sense: what's that battery for, if not to supply stored energy to start the engine? The hybrids that Toyota and Honda have marketed with great success up the battery-energy percentage to the 20%-30% range. If you run out of gas in a Prius, you won't get very far, but you'll get farther than you will in an Edsel. The new Volt that GM announced moves most of the way toward all-electric. Its large battery will store perhaps as much as 50% of the total energy on board. GM expects that normal commuter usage will draw only on the energy stored in the battery, with the gasoline engine kicking on only for long trips. This will allow people to charge the car overnight at home from the electric grid, which has great systemic advantages over conventional hybrids. Eventually, we may see cars with onboard fuel cells that circumvent the thermodynamic limitation on efficiency that internal combustion engines suffer. These could use hydrogen or possibly biofuels, and would go most of the way toward eliminating harmful tailpipe emissions.
If electric cars are so great, why aren't we all driving them? For as long as the electric-car idea has been around, the chief obstacle to progress has been the battery. Pound for pound, gasoline contains nearly five hundred times as much energy as a fully charged lead-acid battery. And even the most advanced (and expensive) nickel-metal-hydride batteries are only four times better than lead-acid, leaving gasoline way ahead.
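Working out the ratios quoted above shows just how far ahead gasoline remains. The values below are normalized to lead-acid = 1, so only the ratios from the text matter:

```python
# Energy density ratios quoted above, normalized to lead-acid = 1.
LEAD_ACID = 1.0
NIMH = 4.0 * LEAD_ACID        # "only four times better than lead-acid"
GASOLINE = 500.0 * LEAD_ACID  # "nearly five hundred times as much energy"

print(f"gasoline vs. lead-acid: {GASOLINE / LEAD_ACID:.0f}x")  # 500x
print(f"gasoline vs. NiMH:      {GASOLINE / NIMH:.0f}x")       # 125x
```

Even granting the fourfold improvement of nickel-metal-hydride, gasoline still stores on the order of a hundred times more energy per pound.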
That's the technology in a nutshell. Now, what should engineers be doing with it? Recent advances in materials science and engineering have improved batteries to the point that they are practical—but still expensive—in hybrid vehicles like the Prius. We will have to wait and see whether GM, or anyone else, can make batteries good, reliable, and cheap enough to serve as the main source of energy for a commuter-type vehicle that is charged overnight. But one factor is growing in importance to the point of overshadowing these technical ones: the human-appeal factor.
The human-appeal factor has to do not with the technology itself, but with how people perceive it. For example, you can show through chemical analysis that some organically grown food products are scientifically indistinguishable from their non-organic counterparts. Knowing this, some people will still buy organic products. You can view their purchases as a kind of vote in the marketplace for a certain way of living. The human-appeal factor is in play when people bypass clothing made under sweatshop conditions for essentially the same quality of clothes (at higher prices) made under better labor conditions.
With all the problems in the Mideast and other oil-producing regions, more people are making the connection between the kind of car they drive and the international political situation. Engineers who ignore this objective, testable fact (if poll results can be said to be objective and testable!) and concentrate only on some engineering-friendly factor such as efficiency or cost, will find themselves missing a few boats on down the line, if not right away.
Should all engineers be political wonks instead? By no means! Generally speaking, the kind of personality who finds delight in making and dealing with things is not all that well suited to a life in politics, although there are exceptions. But a technologist who ignores the desires and perceptions of the marketplace, and the political and social effects of a technology, is missing an important part of the picture, a part no less important than the technical aspects.
Good people can differ over the questions of whether electric cars should be in our future, whether the marketplace or the legislatures should decide this question, and whether GM is serious this time or just has another trick up its collective sleeve. But to ignore all but the technical aspects of the questions is to lose a little of your humanity, and to become a little more like the machines you are designing.
Sources: An article on the introduction of the Volt and related electric-car news was written by John O'Dell of the Los Angeles Times, and appeared in the Boston Globe online edition on Jan. 14, 2007 at http://www.boston.com/cars/news/articles/2007/01/14/vehicles_of_the_future_likely_to_be_more_plugged_in/. An advocacy group for electric vehicles maintains a website at www.pluginamerica.com. The data on the comparable energy content of batteries and gasoline was obtained from a table at http://everything2.com/index.pl?node=energy%20density. You can see a picture of the Smithsonian's EV1 at http://americanhistory.si.edu/ONTHEMOVE/collection/object_1303.html.
Thursday, January 11, 2007
I Spend, Therefore I'm Spied Upon?
The 17th-century philosopher René Descartes' most famous dictum was, "I think, therefore I am." While Descartes was a military man for a time, he lived long before an age when simply carrying money around in your pocket made you vulnerable to espionage. A recent Associated Press report carried in the San Francisco Examiner online edition describes "spy coins" that have been found on contractors doing classified U. S. government business in Canada. According to the report, these Canadian coins carried tiny radio transmitters that could conceivably have been used to track the contractors' movements. No details were given about who the contractors were, what work they were doing, or even what denomination of coin was used. One of the security experts consulted by the reporter said that the technique didn't seem to make a lot of sense, because there is nothing to keep a person from spending a spy coin almost as soon as he or she receives it. My guess is it's a scheme cooked up by North Korea, whose counterfeiting activities are already well-known. It would be consistent with that country's old-style cold-war mentality to dream up something so outlandish that nobody would think of it, even if it didn't have a great chance of producing useful results.
Unless you do classified work for the U. S. and travel to Canada a lot, this news probably won't make you look more closely at the change you get at your next visit to the coffee shop. But it brings up a much broader issue: in the near future, devices very much like the Canadian spy coins will appear in millions of consumer products. Radio-frequency identification ("RFID") tagging is a technology that has been in the works for decades, and it is poised to go public in a big way in the next few years. You have probably heard of systems like the New York State Thruway's "E-Z Pass," which uses an RFID device in one's car and allows the driver to pass through a toll booth without stopping. The RFID system notes the time and place and sends a bill at the end of the month.
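The billing model just described amounts to logging a record at each toll plaza and summing the records per tag at month's end. A minimal sketch, with tag IDs, plaza names, and toll amounts all invented for illustration:

```python
# Sketch of the E-Z Pass-style billing model described above: each toll
# plaza logs (tag, time, plaza, toll), and records are summed into a
# monthly bill per tag. All data here is invented for illustration.
from collections import defaultdict

def monthly_bills(records):
    """records: iterable of (tag_id, timestamp, plaza, toll) tuples."""
    totals = defaultdict(float)
    for tag_id, timestamp, plaza, toll in records:
        totals[tag_id] += toll
    return dict(totals)

records = [
    ("tag-42", "2007-01-03 08:10", "Exit 24", 1.25),
    ("tag-42", "2007-01-03 17:45", "Exit 24", 1.25),
    ("tag-42", "2007-01-10 09:02", "Exit 25", 0.75),
]
print(monthly_bills(records))  # {'tag-42': 3.25}
```

Note that the same log that produces the bill is also a record of where the tag has been, which is exactly the privacy concern taken up below.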
RFID applications like that have no apparent ethical downsides, unless maybe somebody steals your E-Z Pass. Notifying the authorities of the theft will allow them to disable that particular unit, and even nab the thief if he happens to be stupid enough to try to use it himself. But other applications of RFID, including their use as a replacement for bar-code labels on consumer products, can get into some ethical gray areas pretty quickly.
The basic RFID technology works by means of a two-way exchange of information through radio waves between the tag and another transceiver. In a grocery store, for example, RFID may eventually allow you to simply roll your supermarket cart through a kind of portal similar to the ones used at airport screening checkpoints, and a few seconds later the receipt would come out of the cash register ready for payment. Like many developments in retail-related technology, this will be good news for consumers and not so good news for the checkout people, who will now simply pack things into bags and take payment. But that trend has already started with the hands-off do-it-yourself checkout stations at many supermarkets and hardware stores.
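In software terms, the checkout portal just described reduces to reading every tag in the cart at once and looking each one up in a price catalog. A toy sketch, with tag IDs and prices invented for illustration (a real system would read the tags over the air and query a store database):

```python
# Toy sketch of the RFID checkout-portal idea described above.
# Tag IDs and the price catalog are invented for illustration.

PRICE_CATALOG = {
    "tag-0001": ("milk, 1 gal", 3.49),
    "tag-0002": ("bread", 2.19),
    "tag-0003": ("coffee, 12 oz", 7.99),
}

def itemize(tag_ids):
    """Turn the tag IDs seen by the portal into a printable receipt."""
    lines, total = [], 0.0
    for tag in tag_ids:
        name, price = PRICE_CATALOG[tag]
        lines.append(f"{name:<16} ${price:5.2f}")
        total += price
    lines.append(f"{'TOTAL':<16} ${total:5.2f}")
    return "\n".join(lines)

# The portal sees every tag in the cart at once; no scanning required.
print(itemize(["tag-0001", "tag-0002", "tag-0003"]))
```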
What is of more concern is the possibility of a personal RFID tag. This might easily be built into your driver's license, for example, or anything else you typically carry with you at all times. Depending on who is authorized to access it and the availability and cost of the necessary technology, a personal RFID tag would enable whoever runs the system to know where you are anytime you are in range of a transceiver. And eventually, that could be a lot of places. Already in this country, and especially in Great Britain, we've gotten used to the ubiquitous security cameras that monitor our every move in many public and private places. But a person's identity, Social Security number, and other vital information are not immediately available simply from one's image on a security camera, so the privacy threat from that technology is not as extensive as it is from the potential abuse of a personal RFID tag.
Of course, any time you use a credit or debit card, your financial institution has a near-real-time bit of information about your location and activities, and occasionally this data becomes of interest to law enforcement authorities, or becomes a means of identity theft. We can expect that if personal RFID tags become either necessary or desirable, someone will somehow find a way to hack the system. One can imagine a hacker-stalker who uses his ill-gotten data to hound his victim.
Developers of RFID systems are aware of at least some of these problems, but the technology deserves close scrutiny as it makes its way into increasing numbers of stores, warehouses, and other public and private locations. In the meantime, at least now you know what RFID means the next time you see it in print. And don't take any Canadian spy coins.
Sources: The article on Canadian spy coins was carried by the San Francisco Examiner on Jan. 11, 2007 at http://www.examiner.com/a-502598~U_S__Warns_About_Canadian_Spy_Coins.html.