Tuesday, January 30, 2007

The Engineer and The Public: How's That Again?

The Institute of Electrical and Electronics Engineers (IEEE) is probably the largest society of engineering professionals in the world, with over 300,000 members worldwide. Its Code of Ethics has a little-known clause in which IEEE members agree to "improve the understanding of technology, its appropriate application, and potential consequences." My father sometimes used to greet me as I came home from school with the question, "And what did you do to make the world a better place today?" I could equally well ask the question of engineers, "What did you do to improve the public's understanding of technology today?"

People called applications engineers do that all the time, but strictly in the context of helping their firm's customers use its products. But I don't think that's all the drafters of the Code had in mind. By virtue of our specialized knowledge, engineers are under an obligation to the public to spread the truth about technology and to counter fraud and fakery wherever found. This may be one reason you don't find more engineers in politics.

In fairness to politicians, many of them try their hardest to understand technical concepts with important political implications, and to express what they see as their essentials to the public. One such attempt which I think succeeded pretty well was published in the Jan. 30 Austin American-Statesman as an editorial by U. S. Rep. Silvestre Reyes (D-El Paso). The occasion is a plan promoted by the Republican governor of Texas to build 18 more coal-fired power plants in the state. Hold on a minute, says Rep. Reyes, we have better things in store being developed right here at Ft. Bliss, where the Army has some laboratories engaged in something called "Power the Army!" The exclamation point must mean they're serious.

If you've ever been to West Texas, you will know that the ironically named Ft. Bliss is a good place to test systems that need to work well in dry, hot, desert-like conditions. Today's electronically intensive military can't just find the nearest wall outlet to plug their equipment into. Traditionally, they have had to lug along heavy, expensive, noisy, inefficient diesel generators and the thousands of gallons of fuel needed to run them. So the Army has perhaps a greater motivation than the rest of us to find ways to make electric power from solar energy, of which there is plenty in dry deserts.

Most solar power research has focused on bringing down the cost of the solar cells themselves, which despite much progress over the years are still about twice as expensive as conventional sources. Judging by their website, the "Power the Army!" project engineers have turned to a neglected aspect of solar electric power, what is technically termed "power conditioning."

Like most other commodities, electric power has to meet certain standards to be used. Voltage is an important characteristic for power: if your car battery voltage falls below a certain point, your car won't start. If the voltage delivered to your house changes more than a percent or so suddenly, your lights flicker. It turns out that the raw electric power from solar cells is not in very good shape: it varies from moment to moment with cloud cover, from day to day with solar angle, and depends on temperature and other factors. Until recently, developers of solar panels more or less took what they could get, but evidently the Army initiative is working to develop very sophisticated power-conditioning modules that are small enough to fit on each yard-square panel, and are centrally computer-controlled for optimum efficiency. Together with DC-to-AC inverters of improved design, the Army hopes to deliver solar power at half the cost that prevails today.
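For readers who like to see the idea in concrete terms, here is a toy sketch of what a power conditioner buys you. All the numbers are made up for illustration (a 12-volt bus, a panel that wanders around 14 volts); none of them come from the Army project. The raw panel voltage swings all over the place, while the conditioned output stays pinned at its setpoint whenever the sun cooperates.

```python
import random

random.seed(1)

V_BUS = 12.0  # desired regulated bus voltage (illustrative)

def raw_panel_voltage():
    """Raw panel voltage wandering with cloud cover and temperature."""
    return 14.0 + random.uniform(-3.0, 3.0)

def conditioned_voltage(v_raw):
    """Idealized buck-style conditioner: steps the panel voltage
    down to the bus setpoint whenever the panel can supply it."""
    if v_raw >= V_BUS:
        return V_BUS        # regulator holds the bus at its setpoint
    return v_raw            # too little sun: output sags with the panel

def spread(voltages):
    """Worst-case voltage swing over the run."""
    return max(voltages) - min(voltages)

raw = [raw_panel_voltage() for _ in range(1000)]
out = [conditioned_voltage(v) for v in raw]

print(f"raw spread:         {spread(raw):.2f} V")
print(f"conditioned spread: {spread(out):.2f} V")
```

Real conditioners do far more than this, notably maximum-power-point tracking, but the basic bargain is the same: accept messy power in, deliver power that meets a standard.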

That's the way an electrical engineer writing for the public would put it. Now read how Rep. Reyes says essentially the same thing:

"The program uses three components: the extractor, which extracts electrons from solar panels rather than the sun having to push them out of the panels; an inverter, which converts direct current (DC), which solar panels provide, into alternating current (AC), which we actually use, at very high efficiency; and a control system to regulate the process."

How do you like that? I think it's great. The bit about "extracting" electrons instead of making the sun push them out, technically speaking, is close to nonsense. But it gets the overall point across, which is that the system works better by doing something actively which up to now has been accomplished passively. And it was written (or commissioned—Rep. Reyes probably had some help) by a former immigration official with a degree in criminal justice who has taken the trouble to learn enough about an important technical matter to bring it to the public's attention.

Few engineers go into fields where they communicate routinely with the general public. But some of those who do have done quite well. The civil engineer Henry Petroski has written many books that make the practice of engineering at least comprehensible, and sometimes interesting and even dramatic. The independent journalist Keith Snow was once a student of mine in electrical engineering, and although his work no longer relates only to technology, the honesty and attention to detail he learned in school have served him well in his present position. An engineering education can be used for a variety of things besides straight design engineering. Perhaps the world would understand more about what engineers do if more engineers decided to obey that obscure clause in the code of ethics about helping the public understand technology.

Sources: The editorial by Rep. Reyes appeared on p. A9 of the print edition of the Austin American-Statesman. The "Power the Army!" project has a website at http://gina.nps.navy.mil/Projects/PowerTheArmy/tabid/61/Default.aspx. The IEEE Code of Ethics is available at http://www.ieee.org/portal/pages/about/whatis/code.html.

Wednesday, January 24, 2007

Googling Fame: Who's In Charge?

First, I will heed the proverbial warning not to bite the hand that feeds you, or in this case, the company that provides my blog free of charge. Google, that huge, somewhat mysterious entity run by a couple of thirty-somethings who are (I read recently) two of the most admired people in America, said they would let me blog here for free, and would provide easy-to-use facilities for setting up my blog and running it. Almost without exception, they have kept their word, whoever they are. I don't have to have ads on my blog unless I choose to, the system is as easy to use as they said, and in sum, my limited experience with the organization has been almost uniformly positive. And to make things even better, after nearly a year of blogging here, I find that if you type "engineering ethics blog" into Google's search engine, the first thing that comes up is this blog. Not only that, but among the next few results are references to this blog at the University of Illinois at Urbana-Champaign and the Illinois Institute of Technology. (If you type just "engineering ethics," it shows up too, but not till the fourth page.)

Now before I start preening in public, I should let you know that I have friends at UIUC and IIT, and I'm almost certain that these friends are the reasons for the references to my blog at those institutions, not the fact that Google points here. But why would you or I or anybody else care about the fact that something you write shows up on Google's search engine?

The answer is obvious to anyone who is at all familiar with the way search engines work these days. In contrast to the early days five years or so ago, when a query for "dog houses" would turn up everything from frankfurters to Manhattan real estate, search engines today use techniques that not only turn up the most relevant results first, but also rank them according to popularity. Popularity is easily measured by the frequency with which people go to certain sites referred to by the search engine, and possibly by other means of which the non-computer scientist writing this blog is ignorant. (It's amazing—and sometimes a little frightening—what people can know about your web habits with the right software.)
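One ranking ingredient Google has described publicly is link analysis: the PageRank idea from Larry Page and Sergey Brin's original paper, in which a page counts as important if other important pages link to it. A toy version of the computation, run on a hypothetical three-page web of my own invention, looks something like this:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank: a page is important if
    important pages link to it."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Three toy pages: both A and C link to B, so B ranks highest.
toy_web = {"A": ["B"], "B": ["C"], "C": ["B"]}
ranks = pagerank(toy_web)
for page, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

In the little web above, B collects links from both A and C and so ends up with the highest rank. Real search engines layer click popularity, text relevance, and (as this post suspects) much else on top of this skeleton, and those layers are exactly the part the public never sees.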

In the nature of things, with the billions of interactions Google handles each minute, the vast majority of what it does must be automated, in the sense that no human being is directly aware of or dealing with the activity. Somewhere on top of all the software is a cadre of superintendents who set policy for the system, but surely can't deal with it down on the level of individual rankings of individual search items, unless there is some kind of crisis or legal problem that requires manual intervention.

In the pre-web days, the closest analogy I can think of to this kind of thing is newspaper and magazine columns. Back then, real money had to be involved, either as payment from a publisher or as a self-publishing venture, before a person could set himself up to give advice in print to the public. This was a large barrier, but it also spared the public (at least in non-Communist countries) from stuff that nobody wanted to read. (I make an exception for Communist countries, because if for example Kim Jong-Il wishes to enlighten his citizens with a five-page editorial or a three-hour TV speech, nobody can stop him.) The keepers of these barriers were editors, people who had some judgment about what might attract readers and what ought to be put before the public.

Things are different now, sort of. Take for example a blog that you locate through Google's search engine. Instead of a newspaper editor who judiciously (or sometimes injudiciously) places before your breakfast ham and eggs a carefully selected column, in searching for a blog on a given subject you turn the task of discrimination over to whoever—or whatever—at Google decides how things are ranked in a search. Because Google is not (and probably couldn't be) totally forthcoming about how they do this, or who is responsible, you just have to take what you get. Of course you don't have to be satisfied with it if you don't like it, and it's not like you've paid anything (although you will be exposed to ads somewhere along the way—Google has to pay the bills somehow). But at least in principle, if you disagreed with an editor's choice of column, or choice of words in an editorial, you could write a letter to the editor in the time-honored way, and maybe he would print it. If you don't like what a search engine does, especially if it's Google, I'm not sure what recourse you could find, other than hiring a lawyer. And that is so trite nowadays.

How is this related to engineering ethics? I'm simply pointing out that engineers (software engineers, yes, but they like to be called engineers too) have created a new mass medium with fundamentally different rules. Communications technologies frequently get a free ride in engineering ethics courses because of the idea that communication between people is the responsibility of the people, not the medium. That is true up to a point. But when a technical medium is used by millions of people every day and exerts a powerful influence on what they read and how they view the world, the engineers in charge are making ethical choices in the way they design search engines, whether they realize it or not.

In an earlier column (Mar. 30, 2006), I raked Google, Yahoo, and Microsoft over the coals (gently) for bending their rules about freedom of speech to fit the constraints imposed by the People's Republic of China in order to operate there. Clearly, suppressing blogs on freedom and democracy in China is an extreme example of the power of software engineers to manipulate public opinion. And it's very unlikely (although possible) that anything to do with a search engine will result in deaths or injuries, which is generally what it takes for an engineering ethics matter to make headlines. But the power is there, and software engineers at Google and everywhere should give some thought as to how to use it responsibly.

Sources: I thought I could find a reference confirming what I read somewhere about Google founders Larry Page and Sergey Brin being some of the most admired heroes by people under thirty, but Google has failed me—for once. Or maybe they're just being modest.

Wednesday, January 17, 2007

The Electric Car Arrives—Again?

In 1990, General Motors Chairman Roger Smith announced that his firm was developing an all-electric car for the consumer market, partly in response to a California law mandating the sale of zero-emission vehicles in the future. Six years later, the EV1 made its debut in California and Arizona. Only about a thousand were made, and technically you could never own one—GM allowed only leases. In 2002, concluding that the program had failed, GM demanded the return of the vehicles, much to the dismay of some loyal EV1 drivers who saw the move as a back-door way to show that electric vehicles were still impractical. Just last week, GM announced at the Detroit International Auto Show that it plans to get back into the electric-car business with the Chevrolet Volt, a home-chargeable battery-operated model that carries a small gasoline engine. Should we believe them this time?

In fairness to GM, whose well-known financial woes have more to do with pensions and a glut in the world auto market than with missed advances in technology, selling electric cars to everybody will be hard. Technologically, it is oversimplifying to think of cars as either "electric" or "gasoline." A better way is to ask what percentage of the total stored energy on board is in the battery or the gas tank. Any car that doesn't have to be cranked by hand is slightly "electric" in this sense: what's that battery for, if not to supply stored energy to start the engine? The hybrids that Toyota and Honda have marketed with great success up the battery-energy percentage to the 20%-30% range. If you run out of gas in a Prius, you won't get very far, but you'll get farther than you will in an Edsel. The new Volt that GM announced moves most of the way toward all-electric. Its large battery will store perhaps as much as 50% of the total energy on board. GM expects that normal commuter usage will draw only on the energy stored in the battery, with the gasoline engine kicking on only for long trips. This will allow people to charge the car overnight at home from the electric grid, which has great systemic advantages over conventional hybrids. Eventually, we may see cars with onboard fuel cells that circumvent the thermodynamic limitation on efficiency that internal combustion engines suffer. These could use hydrogen or possibly biofuels, and would go most of the way toward eliminating harmful tailpipe emissions.

If electric cars are so great, why aren't we all driving them? Historically, as long as the electric car idea has been around, the bottleneck has been the battery. Pound for pound, gasoline contains nearly five hundred times as much energy as a fully charged lead-acid battery. And even the most advanced (and expensive) nickel-metal-hydride batteries are only four times better than lead-acid, leaving gasoline way ahead.
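Those ratios are worth turning into numbers. Using the figures quoted above, and ignoring for the moment the fact that an electric drivetrain turns stored energy into motion several times more efficiently than a gasoline engine does, here is a back-of-the-envelope sketch of how much battery it would take to match a tank of gasoline. The 100-pound tank is my own round number, not a figure from any source.

```python
# Energy-density ratios quoted in the post (per pound of storage):
GASOLINE_VS_LEAD_ACID = 500   # gasoline vs. fully charged lead-acid
NIMH_VS_LEAD_ACID = 4         # advanced nickel-metal-hydride vs. lead-acid

def battery_weight_for_tank(tank_lbs, ratio_vs_lead_acid):
    """Pounds of battery needed to store the energy of tank_lbs of
    gasoline, given the battery's density relative to lead-acid."""
    return tank_lbs * GASOLINE_VS_LEAD_ACID / ratio_vs_lead_acid

tank = 100  # a roughly 15-gallon tank of gasoline weighs about 100 lbs
print(f"lead-acid equivalent: {battery_weight_for_tank(tank, 1):,.0f} lbs")
print(f"NiMH equivalent:      "
      f"{battery_weight_for_tank(tank, NIMH_VS_LEAD_ACID):,.0f} lbs")
```

Even after granting the electric motor a generous efficiency edge, a battery weighing thousands of pounds makes it plain why the battery, and not the motor, has been the sticking point all these years.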

That's the technology in a nutshell. Now, what should engineers be doing with it? Recent advances in materials science and engineering have improved batteries to the point that they are practical—but still expensive—in hybrid vehicles like the Prius. We will have to wait and see if GM, or anyone else, can make and use batteries that are good, reliable, and cheap enough to provide the main source of energy for a commuter-type vehicle that is charged overnight. But growing in importance, to the point of overshadowing these technical factors, is the human-appeal factor.

The human-appeal factor has to do not with the technology itself, but with how people perceive it. For example, you can show through chemical analysis that some organically-grown food products are scientifically indistinguishable from their non-organic counterparts. Knowing this, some people will still buy organic products. You can view their purchases as a kind of vote in the marketplace for a certain way of living. The human-appeal factor is in play when people bypass clothing made under sweatshop conditions for essentially the same quality of clothes (at higher prices) made under better labor conditions.

With all the problems in the Mideast and other oil-producing regions, more people are making the connection between the kind of car they drive and the international political situation. Engineers who ignore this objective, testable fact (if poll results can be said to be objective and testable!) and concentrate only on some engineering-friendly factor such as efficiency or cost, will find themselves missing a few boats on down the line, if not right away.

Should all engineers be political wonks instead? By no means! Generally speaking, the kind of personality who finds delight in making and dealing with things is not all that well suited to a life in politics, although there are exceptions. But a technologist who ignores the desires and perceptions of the marketplace, and the political and social effects of a technology, is missing an important part of the picture, a part no less important than the technical aspects.

Good people can differ over the questions of whether electric cars should be in our future, whether the marketplace or the legislatures should decide this question, and whether GM is serious this time or just has another trick up its collective sleeve. But to ignore all but the technical aspects of the questions is to lose a little of your humanity, and to become a little more like the machines you are designing.

Sources: An article on the introduction of the Volt and related electric-car news was written by John O'Dell of the Los Angeles Times, and appeared in the Boston Globe online edition on Jan. 14, 2007 at http://www.boston.com/cars/news/articles/2007/01/14/vehicles_of_the_future_likely_to_be_more_plugged_in/. An advocacy group for electric vehicles maintains a website at www.pluginamerica.com. The data on the comparable energy content of batteries and gasoline was obtained from a table at http://everything2.com/index.pl?node=energy%20density. You can see a picture of the Smithsonian's EV1 at http://americanhistory.si.edu/ONTHEMOVE/collection/object_1303.html.

Thursday, January 11, 2007

I Spend, Therefore I'm Spied Upon?

The 17th-century philosopher René Descartes' most famous dictum was, "I think, therefore I am." While Descartes was a military man for a time, he lived long before an age when simply carrying money around in your pocket made you vulnerable to espionage. A recent Associated Press report carried in the San Francisco Examiner online edition describes "spy coins" that have been found on contractors doing classified U. S. government business in Canada. According to the report, these Canadian coins carried tiny radio transmitters that could conceivably have been used to track the contractors' movements. No details were given about who the contractors were, what work they were doing, or even what denomination of coin was used. One of the security experts consulted by the reporter said that the technique didn't seem to make a lot of sense, because there is nothing to keep a person from spending a spy coin almost as soon as he or she receives it. My guess is it's a scheme cooked up by North Korea, whose counterfeiting activities are already well-known. It would be consistent with that country's old-style cold-war mentality to devise something so outlandish that nobody would think of it, even if it didn't have a great chance of producing useful results.

Unless you do classified work for the U. S. and travel to Canada a lot, this news probably won't make you look more closely at the change you get at your next visit to the coffee shop. But it brings up a much broader issue, which is the fact that in the near future, devices very much like the Canadian spy coins will appear in millions of consumer products. Radio-frequency identification ("RFID") tagging is a technology that has been in the works for decades, and is poised to go public in a big way in the next few years. You have probably heard of systems like the New York State Thruway's "E-Z Pass," which uses an RFID device in one's car and allows the driver to pass through a toll booth without stopping. The RFID system notes the time and place and sends a bill at the end of the month.

RFID applications like that have no apparent ethical downsides, unless maybe somebody steals your E-Z Pass. Notifying the authorities of the theft will allow them to disable that particular unit, and even nab the thief if he happens to be stupid enough to try to use it himself. But other applications of RFID, including their use as a replacement for bar-code labels on consumer products, can get into some ethical gray areas pretty quickly.

The basic RFID technology works by means of a two-way exchange of information through radio waves between the tag and another transceiver. In a grocery store, for example, RFID may eventually allow you to simply roll your supermarket cart through a kind of portal similar to the ones used at airport screening checkpoints, and a few seconds later the receipt would come out of the cash register ready for payment. Like many developments in retail-related technology, this will be good news for consumers and not so good news for the checkout people, who will now simply pack things into bags and take payment. But that trend has already started with the hands-off do-it-yourself checkout stations at many supermarkets and hardware stores.
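In case the portal idea sounds vague, the data flow on the register's side is simple enough to sketch. Everything below is hypothetical: the tag IDs, the catalog, and the prices are my inventions, not any real store's.

```python
# Hypothetical catalog mapping RFID tag IDs to (item, price).
CATALOG = {
    "3000-0001": ("milk, 1 gal", 2.89),
    "3000-0002": ("bread", 1.49),
    "3000-0003": ("coffee, 12 oz", 6.99),
}

def portal_read(cart_tags):
    """Simulate the checkout portal: every tag in the cart answers
    at once, and the register totals them into a receipt."""
    lines, total = [], 0.0
    for tag in cart_tags:
        item, price = CATALOG[tag]
        lines.append(f"{item:<16} ${price:.2f}")
        total += price
    lines.append(f"{'TOTAL':<16} ${total:.2f}")
    return "\n".join(lines), round(total, 2)

# Roll a cart with two tagged items through the portal:
receipt, total = portal_read(["3000-0001", "3000-0003"])
print(receipt)
```

The hard engineering is everything the sketch leaves out: reading dozens of tags reliably through a cart full of groceries, and making sure the portal reads only your cart and not the one behind you.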

What is of more concern is the possibility of a personal RFID tag. This might easily be built into your driver's license, for example, or anything else you typically carry with you at all times. Depending on who is authorized to access it and the availability and cost of the necessary technology, a personal RFID tag would enable whoever runs the system to know where you are, anytime you were in range of a transceiver. And eventually, that could be a lot of places. Already in this country, and especially in Great Britain, we've gotten used to the ubiquitous security cameras that monitor our every move in many public and private places. But a person's identity, Social Security number, and other vital information are not immediately available simply from one's image on a security camera, so the privacy threat from that technology is not as extensive as it is from the potential abuse of a personal RFID tag.

Of course, any time you use a credit or debit card, your financial institution has a near-real-time bit of information about your location and activities, and occasionally this data becomes of interest to law enforcement authorities, or becomes a means of identity theft. We can expect that if personal RFID tags become either necessary or desirable, someone somehow will find a way to hack the system. One can imagine a hacker-stalker who uses his ill-gotten data to hound his victim.

Developers of RFID systems are aware of at least some of these problems, but the technology deserves close scrutiny as it makes its way into increasing numbers of stores, warehouses, and other public and private locations. In the meantime, at least now you know what RFID means the next time you see it in print. And don't take any Canadian spy coins.

Sources: The article on Canadian spy coins was carried by the San Francisco Examiner on Jan. 11, 2007 at http://www.examiner.com/a-502598~U_S__Warns_About_Canadian_Spy_Coins.html.

Tuesday, January 02, 2007

Science, Engineering, and Ethical Choice: Who's In Charge?

Every now and then it's a good idea to look at the foundations of a field, the usually hidden and unspoken assumptions that everybody knows, but few ever talk about. A recent New York Times essay by Dennis Overbye on free will addressed the question of whether our choices are really choices, or whether we are really just "meat computers" executing a program of which we are unaware. What has that got to do with engineering ethics? Only everything.

You can put this issue in the form of a paradox. Modern engineering got where it is today by being based on science. From the many reputable scientists interviewed by Overbye, we learn that from what science can tell so far, everything in the universe is either determined by physical law (in which case we can predict it) or random (which is another way of saying we can't predict it, and may not in principle ever be able to). This includes the behavior of all physical systems, including the human brain. And if choices and decisions can be said to come from any physical object, they come from the human brain.

Now engineering ethics is all about making the right choices. But what if the idea of choice is false? If we only think we choose something when the reality is that we're just following a hugely complex but possibly predictable program, what does it mean to make the right choice, or indeed any choice at all? According to some of the scientists Overbye talked with, not much.

The view that all our supposed choices are really determined by external factors is called determinism. Daniel Dennett, a philosopher of science, thinks free will and determinism are compatible, even mutually dependent. According to Dennett, strict causality ". . . makes us moral agents. You don't need a miracle to have responsibility." On the other hand, medical researcher Mark Hallett limits the idea of free will to the perception, not the absolute fact. "People experience free will," he says. "They have the sense that they are free. The more you scrutinize it, the more you realize you don't have it."

Dr. Hallett spends his days pondering the inner workings of the brain, and understandably tends to view it as a complex system that may one day yield all of its secrets to science, which is to say, other brains. Overbye is diligent enough to note that while a system may be deterministic, it nonetheless may not be predictable. Citing mathematicians Kurt Gödel and Alan Turing, he points out that no moderately complex mathematical system can prove its own consistency, and that there will always be statements you can make within such a system that you can neither prove nor disprove by its own rules. Philosopher and historian of science Stanley Jaki has used this fact to argue that the scientist's dream of a mathematically complete "final theory" that would predict everything—all physical constants, all deterministic activity down to the end of time—is only a dream. So it seems that science has also told us that we know there are things that we will never know about the world, in the objective, testable, scientific sense.

So does this mean that a truly consistent scientific engineer will disregard ethics as an illusion and act however he or she pleases? Here is where the engineer's famed pragmatism comes into play. Most engineers I know are eminently practical people, wanting to get the job done and impatient with what they regard as hairsplitting philosophical discussions about the ultimate meaning of this or that. Most engineers would immediately realize that disregarding right and wrong simply because some philosophers and scientists say choice is an illusion would be fatal both to their careers and quite possibly to the people served by their engineering. And death is a bad thing.

These common-sense notions do not come from science. In their more sober moments, most scientists—and many philosophers—will admit that science cannot pass judgment on questions of value. The stated goal of science is knowledge, not guidance or moral instruction. But to allow a scientific conclusion about the source of free will to abolish one's ethics would be to allow science to dictate morality, or rather, the lack thereof.

Conspicuously absent from Overbye's list of interviewees was anyone who spoke for the religious viewpoint which takes free will and the reality of moral agency seriously. While there are philosophical issues that arise from the question of how God can allow free will in a universe of which he has perfect foreknowledge, at least that picture makes sense morally. The issue that Overbye sidles up to, but never quite broaches, is the one that Dostoevsky made plain when he wrote in Notes from the Underground, "For what is man without desires, without free will, and without the power of choice but a stop in an organ pipe?" In other words, a passive piece of machinery whose sound and fury signify nothing. All the shilly-shallying of the philosophers who say in effect, "Well, we don't really have it, but we think or feel that we do, and so it doesn't make much difference," simply evades the logical conclusions of their positions, which many of them are afraid to espouse openly.

Engineering is not philosophy, and most engineers are not trained philosophers. But every engineer who thinks about the reasons for professional actions must sooner or later ask, "What do I think the right thing is?" and "Can I really choose freely?" Many engineers, including yours truly, have a religious answer to these questions. And we are not bound by the dicta of scientists or philosophers to decide otherwise—especially if we couldn't decide!

Sources: The New York Times article "Free Will: Now You Have It, Now You Don't" appeared in the Jan. 2, 2007 online edition at http://www.nytimes.com/2007/01/02/science/02free.html?pagewanted=1&8dpc. The Dostoevsky quotation is from About.com's section on classic literature by Esther Lombardi at http://classiclit.about.com/od/dostoyevskyf/a/aa_fdostquote.htm.

Thursday, December 28, 2006

Electric Power: Was It Broke? Did We Fix It?

Like any other profession, engineering has its particular proverbs and sayings. One of my favorites is, "If it ain't broke, don't fix it." As with most proverbs, this one captures only part of the whole picture of a complex situation. But when I look at the potential and actual problems we have these days with the U. S. electric power system, I wish more people in authority had paid attention to that particular proverb.

Electricity is an unusual commodity in that it must be produced exactly as fast as it is sold. If a million people suddenly turn on their lights all at once, somebody somewhere has to supply that much more electricity in milliseconds, or else there is big trouble for everybody on the power distribution network. For lights to come on reliably and stay on all across the country, the systems of generating plants, transmission lines, distribution lines, and monitoring and control equipment have to work in a smooth, coordinated way. And, somebody has to pay for it all.
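Why must the response come in milliseconds rather than minutes? A cartoon of what power engineers call the swing equation shows the mechanism: any sustained mismatch between generation and load shows up as a drift in grid frequency. The constants below are illustrative stand-ins I made up, not data for any real grid.

```python
# Cartoon of the swing equation: grid frequency drifts whenever
# generation fails to track load. Constants are illustrative only.
F_NOMINAL = 60.0   # Hz, the North American nominal frequency
INERTIA = 10.0     # lumped inertia constant (made-up stand-in)

def frequency_after(p_gen, p_load, seconds):
    """Grid frequency (Hz) after a sustained generation/load
    mismatch, with power expressed in per-unit terms."""
    return F_NOMINAL + (p_gen - p_load) / INERTIA * seconds

print(f"balanced:     {frequency_after(1.00, 1.00, 5):.3f} Hz")
print(f"1% shortfall: {frequency_after(0.99, 1.00, 5):.3f} Hz")
```

When a million people flip on their lights at once, the load term jumps, the frequency starts to sag, and governors and spinning reserve have to push generation up before the drift goes far enough to trip protective equipment and black out the network.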

From an economic point of view, approaches to electric utility management and financing lie somewhere between two extremes. At one extreme is completely centralized control, billing, and coordination, often performed in many countries by the national government. France is an example of this approach. Large, complex electric systems are a natural fit to large, complex government bureaucracies, and in the hands of competent, dedicated civil servants, government-owned and -operated utilities can be a model of efficiency and advanced technology. Government control and ownership can provide the stability for long-term research and development. This is one reason that France leads the world in the development of safe, reliable nuclear power, which provides most of the electricity in that country.

The other extreme can be found in third-world countries where there is little or no effective government regulation of utilities, whether through incompetence, war, or other causes. In this type of situation, private enterprise rushes in to fill the gap and you have private "utilities"—often nothing more than a few guys with a generator and some wire—selling electricity for whatever the market will bear, in an uncoordinated and inefficient way. This approach leads to a spotty market in which the availability and reliability of electricity depend on where you live, and typically large portions of the market (in rural or dangerous areas) are not served at all.

In the U. S., we have historically swung from near one extreme to the other. When electric utilities first appeared in the late 1800s and early 1900s, they were independent companies. But the technical economies of scale quickly became apparent, and the Great Depression brought on tremendous consolidation of companies into a few large firms, which were then taken under the regulatory wing of federal and state governments. What we had then was a kind of benevolent dictatorship of the industry by government, in which private investors ceded much control to the various regulatory commissions, but received in turn a reliable but relatively small return on their investment.

This state of affairs prevailed through the 1970s, when various political forces began a move toward deregulation. The record of deregulation is spotty at best, probably because it represents an attempt to have our regulatory cake and eat it too. No one wants the electricity market here to devolve to the haphazard free-for-all that it is in places like Iraq, or even India, where electricity theft is as common as beggary. So, rightly, some regulations must be left in place in order to protect the interests of those who cannot protect themselves, which in the case of electric utilities means most of us.

The most noteworthy recent disasters having to do with deregulation were the disruptions and price explosions in California of a few years ago, caused in large part by Enron and other trading companies that manipulated the market during hot summers of high demand. Even if the loopholes allowing such abuses are closed and inadequate generating capacity is addressed with more power plants, however, many problems remain. A recent New York Times article points out that because the existing rules provide disincentives for power companies to spend money on transmission and distribution equipment (power lines), certain parts of the country have to pay exorbitant rates in the form of "congestion charges."

The basic problem is, there are not enough lines to carry cheap power from where it is available to where it is needed. Somebody would have to pay to build them, and somebody else would have to approve the construction. In these days of "not in my back yard" attitudes, it is increasingly hard to construct new power lines anywhere, even in rural areas. The net result of these complications is that as time goes on and demand for power increases, more and more areas may find themselves starved for power, and will have to pay rates that might be as high as twice the prevailing rate of surrounding regions.

My personal bias is that we have gone way too far in attempts to privatize the electric utility industry. It is a business that technologically fits better with a centralized authority and center of coordination. But in today's political climate, the chances of going back to a more centralized way of doing things are small. It looks like the best we can do is to continue to tinker with what regulations remain, fixing problems where pernicious disincentives appear, and keeping an eye out for Grandma and her electric heater that she needs to get through the winter. But in my opinion, the whole thing wasn't broke to begin with, and the fix of deregulation didn't need to be applied the way it was.

Sources: The New York Times article on congestion charges appeared in the Dec. 13, 2006 online edition at http://www.nytimes.com/2006/12/13/business/13power.html?hp&ex=1166072400&en=dcfbff42cc8f19d4&ei=5094&partner=homepage.

Tuesday, December 19, 2006

America's Chernobyl Waiting to Happen

"Dallas, Texas, Mar. 30, 2005 (AP) --- An apparent nuclear explosion in Amarillo, Texas has cut off all communications with the West Texas city and regions in a fifteen-mile radius around the blast. Eyewitness accounts by airline pilots in the vicinity report an 'incredible flash' followed by a mushroom cloud reaching at least 35,000 feet. Speculation on the source of the explosion has centered on Amarillo's Pantex plant, the nation's only facility for construction and disassembly of nuclear weapons."

In case you think you missed something a year ago last March, the news item above is fiction. But according to some sources, it is plausible. It could have happened. And there is reason to believe that unless some serious housecleaning takes place in Amarillo, the chances that something like this might happen in the future are higher than any of us would like.

The end of the Cold War brought hopes that instead of piling up megaton after megaton of mutually assured destructive power in the shape of thermonuclear weapons, the U. S. and the Soviet Union (or what was left of it) would begin to disassemble their nuclear stockpiles to make the world a safer place. Over the past fifteen years, international agreements have been reached to do exactly that. From a peak of over 30,000 nuclear warheads in 1965, the U. S. stockpile has declined to just a little over 10,000 as of 2002. And here is where the engineering issues come in, because for every downtick of that number, somebody somewhere has to disassemble a nuclear warhead.

A nuclear bomb or missile is not something that you just throw on the surplus market to dispose of. First it has to be rendered incapable of exploding. Then the plutonium and other dangerous explosive materials have to be removed in a way that is both safe to the technicians doing the work, and also to the surrounding countryside and population. As you might imagine, these operations are difficult, dangerous, and require secret specialized knowledge. For more than thirty years, the only facility in the U. S. where nuclear weapons were made or disassembled has been the Pantex plant outside Amarillo, Texas. It is currently operated by a consortium of private contractors including BWXT, Honeywell, and Bechtel, and works exclusively for the federal government, specifically the Department of Energy. If you want a nuclear weapon taken apart, you go to Pantex, period. And therein lies a potential problem.

Where I teach engineering, the job of nuclear weapon disassembler is not one that comes up a lot when students tell me what they'd like to be when they graduate. I imagine that it is hard to recruit and retain people who are both willing and qualified to do such work. But at the same time, it is not the kind of growth industry that attracts a lot of investment. So it is plausible to me that as the demand for disassembly increases, the corporate bosses in charge of the operation might tend to skimp on things like maintenance, safety training, and the hiring of additional staff. That is the picture that emerges from an anonymous letter made public recently by the Project on Government Oversight, a government watchdog group.

Anonymous letters can contain exaggerations, but what is not in dispute is the fact that on three occasions beginning Mar. 30, 2005, someone at Pantex tried to disassemble a nuclear weapon in a way that set off all kinds of alarms in the minds of experts who know the details. I'm speculating at this point, but as I read between the lines and use my knowledge of 1965-era technology, something like this may have happened.

A nuclear weapon built in 1965 probably contained no computers, relatively few transistors, and a good many vacuum tubes. Any safety interlocks to prevent accidental detonation were probably mechanical as well as electronic, and consisted of switches, relays, and possibly some rudimentary transistor circuits. But somewhere physically inside the long cylindrical structure lies a terminal which, if contacted by a grounded piece of metal, will probably set the whole thing off and vaporize Amarillo and the surrounding area.

A piece of equipment that has been sitting around since 1965 in a cold, drafty missile silo is probably a little corroded here and there. Screws and plugs that used to come apart easily are now stubborn or even frozen in place. The technician in charge of beginning disassembly of this baby probably tried all the standard approaches to unscrewing a vital part in order to disable it, without success. At that point, desperation overcame judgment. The official news release from the National Nuclear Security Administration puts it in bureaucratese thus: "This includes the failures to adhere to limits in the force applied to the weapon assembly and a Technical Safety Requirement violation associated with the use of a tool that was explicitly forbidden from use as stated in a Justification for Continued Operation." Maybe he whammed at it with a big hammer. Maybe he tried drilling out a stuck bolt with an electric drill. We may never know. But what we do know is, the reason for all these Technical Safety Requirements is that if you violate them, you edge closer to setting off an explosion of some kind.

Not every explosion that could happen at Pantex would be The Big One with the mushroom cloud and a megaton of energy. The way nuclear weapons work is by using cleverly designed pieces of conventional high explosive to create configurations that favor the initiation of the nuclear chain reactions that produce the big boom. A lot of things have to go right (or wrong, depending on your point of view) in order for a full-scale nuclear explosion to happen. Kim Jong Il of North Korea found this out not too long ago when his nuclear test fizzled rather than boomed. But even if nothing nuclear happens when the conventional explosives go off, you've got a fine mess on your hands: probably a few people killed, expensive secret equipment destroyed, and worst from an environmental viewpoint, plutonium or other hazardous nuclear material spread all over the place, including the atmosphere.

This general sort of thing was what happened at Chernobyl, Ukraine in 1986, when some technicians experimenting late at night with a badly designed nuclear power plant managed to blow it up. The bald-faced coverup that the USSR tried to mount in the disaster's aftermath may have contributed to its ultimate downfall. So even if the worst-case scenario of a nuclear explosion doesn't ever happen at Pantex, a "small" explosion of conventional weapons could cause a release of nuclear material that could harm thousands or millions of people downwind. Where I happen to live, incidentally.

I hope the concerns pointed out by the Pantex employees who apparently wrote the anonymous letter are exaggerated. I hope that the statement from Pantex's official website that "[t]here is no credible scenario at Pantex in which an accident can result in a nuclear detonation" is true. But incredible things do happen from time to time. Let's just hope they don't happen at Pantex any time soon.

Sources: The Project on Government Oversight webpage citing the Pantex employees' anonymous letter is at http://www.pogo.org/p/homeland/hl-061201-bodman.html. The official Pantex website statement about a nuclear explosion not being a credible scenario is at http://www.pantex.com/currentnews/factSheets.html. Statistics on the U. S. nuclear weapons stockpile are from Wikipedia's article on "United States and weapons of mass destruction."

Tuesday, December 12, 2006

Hacker Psych 101

Well, it's happened again. The Los Angeles Times reports that for more than a year prior to Nov. 21, 2006, somebody was siphoning personal information such as Social Security numbers from a database of more than 800,000 students and faculty at UCLA. Eventually, the system administrators noticed some unusual activity and suppressed the hack, but by the time they closed the door, a great many horses had escaped the barn.

This is one of the biggest recent breaches of data security at a university, but it is by no means the only one. The same article reports that 29 security breaches at other universities during the first six months of this year affected about 845,000 people.

Why is hacking so common? This is a profound question that goes to the heart of the nature of evil. It's good to start with the principle that, no matter how twisted, perverse, or just plain stupid a wrong action looks to observers, the person doing it sees something good about it.

For example, it's not a big mystery why people rob banks. In the famous words of 1930s gangster Willie Sutton, "Because that's where the money is." To a bank robber, simply going in and taking money by force is a way to obtain what he views as good, namely, money.

There are hackers whose motivation is essentially no different than Willie Sutton's. Identity theft turns out to be one of the easiest types of crime for them to commit, and so they turn to hacking, not because they especially enjoy it, but because it will lead to a result they want: data they can use to masquerade as somebody else in order to obtain money and goods by fraud. This motivation, although deplorable, is understandable, and fits into our historical understanding of the criminal mind, such as it is. As technology has advanced, so have the technical abilities of criminals. At this point it isn't clear whether money was the motive behind the UCLA breach or not. Because the breach had gone on so long without notable evidence of identity theft, it's possible that this was a hack for the heck of it.

Many, if not most, hacks fall into this second category. For insight into why people do these things when they're not making money or profiting in some other way, we can turn to Sarah Gordon, a senior research fellow at Symantec Security Response.

Gordon's specialty is the ethics and psychology of hacking. In her job at Symantec, she has encountered just about every kind of hack and hacker there is. In an interview published in 2003, she says that the reason many people feel little or no guilt (at least not enough for them to stop) when they write viruses and do hacks is that they don't consider computers to be part of the real world. Speaking about school-age children learning to use computers for the first time, she said, "They don't have the same morality in the virtual world as they have in the real world because they don't think computers are part of the real world."

Gordon says that parents and teachers should share part of the blame. When a child steals someone's password and uses it, for example, a teacher could ask, "Would you steal Johnny's house key and use it to poke around in his bedroom?" Presumably not. But the analogy may be a difficult one for children to make—and many adults, for that matter.

Gordon thinks it may take a generation or two for our culture's prevailing morality to catch up with the hyper-speed advances in computer technology. She sees some progress in the U. S., noting that there is a new reluctance to post viruses online, whereas a few years ago no one thought there was anything wrong with the practice. Still, she thinks that hacking and virus-writing are acts of rebellion that remain popular in countries where young people are experiencing computers and networks for the first time, and rebellion is just part of human nature. A boy who grew up in a thatched hut with no running water, moves to a city, and finds that he can disrupt the operations of thousands of computers halfway across the world with a few keystrokes can receive a power buzz that he can get nowhere else in his life.

It seems to me that the anonymity provided by the technical nature of computer networks also contributes to the problem. Some say that a test of true morality is to ask yourself whether you would do a bad thing if you were sure you'd never get caught. The nature of computer networks ensures that very few hackers and virus writers do get caught, at least not without a lot of trouble. And it looks like lots of people fail that kind of test.

Well, I'm a teacher, so if there are any students reading this, I'm here to tell you that just because you can hide behind a computer screen, you shouldn't abandon the Golden Rule. But it may take a few years for the message to soak in. At the same time, I recognize a broader generalization of Sarah Gordon's notion that rebellion is part of human nature: evil and sin are part of human nature. I think this was a feature of humanity that many computer scientists neglected to take into consideration way back when they were establishing the foundations of some very pervasive systems and protocols that would cost billions of dollars to change today. Eventually things will get better, but it may take a generation or more before password theft and bicycle theft are viewed as the same kind of thing by most people.

Sources: The Dec. 12 L. A. Times story on the UCLA security breach is at http://www.latimes.com/news/local/la-me-ucla12dec12,0,7111141.story?coll=la-home-headlines. The interview with Sarah Gordon is at http://news.com.com/2008-1082-829812.html.

Tuesday, December 05, 2006

Superman Works for Airport Security Now

I've had occasion to mention Superman before ("Sniffing Through Your Wallet with RFID", Oct. 25, 2006), but my reference then to his X-ray vision was in jest. Well, a news item from the U. S. Transportation Security Administration says that in effect, they've hired Superman (at least, the mechanical equivalent of his X-ray vision ability) to watch passengers at Phoenix's Sky Harbor International Airport. The effect is to allow strip searches without stripping.

According to the Dec. 1 Associated Press news item, in the initial tests of the system, which uses a type of X-ray technology called "backscatter," security officials will examine only people who fail the primary screening. These passengers will be offered the choice of either a pat-down search or examination by the backscatter machine. The images, which reportedly are blurred by software in "certain areas," are nevertheless detailed enough to show items as small as jewelry next to the body. The technology is already in use in prisons, and the intensity of X-rays is much lower than that of a typical medical X-ray.

When I read this story, it brought back memories of my days as a junior terrorist. Before you get up from your computer to call the FBI, let me explain. In the 1990's, I did some consulting work for a company that was developing a contraband detection system using short radio waves called millimeter waves. It turns out that the human body emits these waves just because it's warm. With a detector that is sensitive enough, you can detect the waves coming through clothing, and if you are wearing something like plastic explosive under your shirt, the shadow of it will show up in the image.
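The physics behind this can be sketched in a few lines of Python. This is my own rough illustration, not the actual design math from that project; the 94 GHz frequency and the temperatures are assumed example values.

```python
# Why a warm body "glows" at millimeter waves: in the Rayleigh-Jeans
# regime (h*nu << k*T, which holds here), thermal radiance is simply
# proportional to temperature, so skin at body temperature outshines
# a slightly cooler object concealed under the clothing.

K_B = 1.380649e-23  # Boltzmann constant, J/K
C = 2.998e8         # speed of light, m/s

def radiance(freq_hz, temp_k):
    """Rayleigh-Jeans spectral radiance: B = 2 * nu**2 * k * T / c**2."""
    return 2 * freq_hz**2 * K_B * temp_k / C**2

FREQ = 94e9                      # an assumed imaging frequency (94 GHz)
skin = radiance(FREQ, 310.0)     # skin temperature, about 37 C
hidden = radiance(FREQ, 300.0)   # a slightly cooler concealed object

# Only about a 3% contrast -- weak, which is one reason our early
# machine needed such long exposure times.
print(f"{(skin - hidden) / skin:.3f}")
```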

We built a system, and to test it, several of us took turns playing terrorist by wearing lumps of modelling clay and plastic pistols taped to our shirts underneath a windbreaker. It was a tedious task, because the machine took 15 minutes or more to make a decent picture and you had to hold still the whole time. The results looked like blurry photographic negatives, but you could see the outlines of the contraband clearly. You could also see the main features of the body underneath the clothing, and that led to some privacy concerns, as you might imagine. The wife of the company president volunteered to be our female subject. I never saw the resulting picture—apparently it was detailed enough to be censored. For a number of reasons, both technical and social, that particular machine never made it to market, but all this was before 9/11 and the sea change in our attitudes toward airport security that resulted.

This change in attitudes has done funny things to some people, notably Susan Hallowell, who is the Transportation Security Administration's security laboratory director. A picture accompanying the article shows Ms. Hallowell in the X-ray altogether, and shows about the same detail as a department-store mannequin from the 1950s, or a Barbie doll. I suppose Ms. Hallowell's willingness to pose was motivated by a sincere desire to increase the quality of airport security with less discomfort to passengers, but it wouldn't surprise me if her strategy backfires. If I put myself in the mindset of a middle-aged woman who faces the choice of either letting another woman do a pat-down search, or knowing that somewhere out of sight, somebody—possibly another woman but possibly not—is going to see every single bulge, sag, and fold underneath my clothes, I would choose the pat-down search every time. In fact, I'd go screaming to my Congressman to stop implementation of the backscatter system before my naked profile showed up on MySpace. Yes, the TSA says the images won't be stored or transmitted. And maybe they will be able to keep that promise. But if there's a leak somewhere—say Madonna goes through one of these things and a paparazzo manages to bribe an inspector—the whole plan could go up in political flames.

Besides which, there is a principle that is largely neglected today, but still deserves some attention: the Constitutional prohibition against unreasonable searches and seizures. The Fourth Amendment says in full, "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." I'm no Constitutional or legal scholar, and obviously some legal means has been found to get around constitutional challenges to airport security inspections. Probably the argument is, if you don't want to be searched, take the bus. But letting somebody I don't know see me without clothes, simply on the slight chance that I'm carrying a gun or a bomb, seems to cross a line that we as a nation have hesitated to cross before.

When George Orwell portrayed the ever-present unblinking eye of Big Brother in his dystopia 1984, the idea of being spied on constantly had the power to shock, because it was so novel. But today there are places in England where you can walk for many blocks and never be out of sight of security cameras. This has not destroyed England, and it has actually helped track down terrorists such as those who committed the London subway bombings. The thing we lose when one more privacy barrier comes down is so hard to describe because it's silent, has no public relations agent promoting it, and doesn't show up in the gross national product. But it's the kind of thing that you notice mainly after it's gone. And once it goes, it can be very hard to recover.

Sources: The AP article describing the Phoenix tests was carried by many media outlets, among them MyWay (http://apnews.myway.com/article/20061201/D8LO1JLO2.html). The paper describing my foray into contraband detection was entitled “Contraband detection through clothing by means of millimeter-wave imaging,” by G. R. Huguenin et al., SPIE Proc. 1942 Underground and Obscured Object Imaging and Detection, Orlando, FL, pp. 117-128, 15-16 April 1993.

Tuesday, November 28, 2006

Freeloading or Free Speech?

Say you have an old-fashioned wireline phone sitting on your back porch. One morning you wake up to see a stranger sitting there chatting away on it. You open the window and say, "Hey, buddy, that's my phone."

Covering the mouthpiece, the man replies, "Don't worry, it's a local call. Won't cost you a thing."

You probably wouldn't just shrug your shoulders and go back to bed. But a recent New York Times article described how a reporter did the wireless equivalent to a San Francisco man named Gary Schaffer. Using a new Wi-Fi-equipped mobile phone that made a free call over Mr. Schaffer's home wireless Internet connection, the reporter hung up, then identified himself and asked Mr. Schaffer whether it was okay with him. “If you’re a friend, I’d say, let’s give it a try,” Mr. Schaffer said, but he'd be uncomfortable if strangers tried to sponge that way.

Among other things, engineering ethics deals with ownership, property rights, and the just distribution of resources and costs involved in technology. But as communications systems blur distinctions that were once clear and unambiguous, we may have to rethink some assumptions that have been around so long, we've forgotten them.

Take the example of "plain old telephone service" (POTS, for short). For the first century or so of phone service, an individual subscriber in the U. S. leased (not owned) a considerable pile of fairly costly hardware from The Phone Company, which was usually the Bell System. The dial set, twisted-pair wires, network interface, and lines going all the way back to the central exchange building several miles away were solid, immobile, physical objects. Ownership and operating rights were clear-cut, and it was a simple matter, relatively speaking, to regulate the industry so that investors received a reasonable return on the hardware and software installed over the decades, and consumers were able to afford POTS at what passed for a reasonable cost.

Since then, technical advances have made the incremental cost of a simple phone call positively microscopic compared to what it used to be. Much of the turmoil in the telecom business in the last fifteen years or so has resulted from various attempts to deal with this fact. What is a fair charge for something that costs almost nothing? Of course, somebody had to pay for the extensive wireless, wired, and fiber-optic networks that tie the world together, but we are very far from the simple, monolithic picture the Bell System presented as late as the 1960s. If you trace the path of a phone call or an email, your signal may pass through systems owned by dozens or hundreds of different entities, ranging from the neighbor next door to the federal government. Sorting out who should pay for what is an increasingly complex business, and one that the consumer is poorly equipped to do. But everybody can understand the lure of free phone calls. Hence the potential popularity of mobile phones that use Wi-Fi links.

We can go in several directions from here. Any activity that benefits the individual and also has no incremental cost lies open to what economists call "the tragedy of the commons." The commons was an area of communally owned land in some parts of England which farmers could graze their cattle on without charge. As time went on and population increased, the overgrazing of land turned the commons into mudflats, putting an end to the practice.
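The overgrazing dynamic can be captured in a toy model (the numbers here are made up purely for illustration, not drawn from any economic data):

```python
# Tragedy of the commons, minimally: below the pasture's carrying
# capacity every cow yields full value; above it, overgrazing cuts
# per-cow yield until the commons is worthless mudflat.

CAPACITY = 100  # head of cattle the commons can sustain (assumed)

def yield_per_cow(total_cattle, capacity=CAPACITY):
    """Yield each cow produces, given how crowded the commons is."""
    if total_cattle <= capacity:
        return 1.0
    # Past capacity, yield falls linearly to zero at twice capacity.
    return max(0.0, 1.0 - (total_cattle - capacity) / capacity)

for herd in (80, 100, 150, 250):
    per_cow = yield_per_cow(herd)
    print(herd, per_cow, herd * per_cow)
```

A farmer who adds one more cow captures that cow's yield for himself, while the cost of the extra grazing is spread thinly over everyone; past the capacity, though, the total yield falls for all, and that is the whole tragedy.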

The Wi-Fi spectrum itself is a limited resource, although it is far from completely unregulated, since the Federal Communications Commission sets the basic boundaries for its use. But if you happen to live on a busy Manhattan street and Internet-ready mobile phones become popular enough, the day might come when your home's wireless Internet connection is jammed up with chattering freeloaders, and you won't be able to use it. Or the airwaves might get so crowded that most of the phones become useless.

Fortunately, the ether recovers instantly as soon as people quit using it, and if the airwaves turned into an electronic mosh pit, unusability would soon decrease the crowding to a manageable level. This sort of thing doesn't happen to conventional cell-phone systems much because they are operated by organizations which make sure there is enough capacity in a given region to handle the anticipated number of calls.

The opposite pole to the unrestricted use of a limited resource, of course, is excessive regulation. Many would argue that the Bell System monopoly broken up by court decisions in 1984 was an example of a kind of self-regulated extreme that stifled technical innovation. It's much too late to debate that argument again, except to say that clarity in ownership and consumer rights is worth something. When there was only one phone company, it was easy to pay your communications bill. Nowadays, the typical young consumer may ante up each month to a phone company or two, a cable operator, a mobile phone outfit, an Internet provider, and possibly some MMOG (massively multiplayer online game) bills. It seems to be the nature of modern technology to widen the variety and types of choices available to the consumer, but all for a price. And sometimes the price is not just dollars and cents, but undesirable changes in places we may never see.

Sources: The New York Times article "The Air Is Free, and Sometimes So Are the Phone Calls That Borrow It" was carried in the Nov. 27, 2006 online edition.

Wednesday, November 22, 2006

Vistas of Choice?

In some ways, the new worlds opening up as computer science and technology progress seem to promise an almost infinite array of choices. Multi-user online games allow you to create your own avatar by selecting from an array of virtual body features, abilities, and appearances. Type almost any search term into Google, and you have thousands of pages of information to choose from.

But in other ways, once you decide to deal with computers at all (and modern life is all but unthinkable without them), your choices are extremely limited. Suppose for some reason that you simply do not like the operating systems produced by Microsoft. Well, there are Macs, there are the various Linux machines, and an array of expensive specialized systems for various technical uses. But if you're just an ordinary consumer, not a computer specialist, and you just don't like Microsoft, you'll pay a price for your pickiness.

This paradox came to mind as I read news that Microsoft is currently shipping its new operating system for PCs called Vista. Given Microsoft's large market share, most PC users will have to switch to Vista sooner or later. Vista comes with promises that it is much more secure than the previous systems, but you can also rest assured that Vista is now the main target for writers of viruses and other mischief-making software, simply because more computers will be running with Vista than with anything else, if history is any guide.

The question of whether history is any guide to these matters engaged the attention of a historian of technology at MIT named Rosalind Williams a few years ago. In her book Retooling, she explored how people deal with the lockstep software upgrades, such as Vista, that are so often imposed upon them. Even at a supposedly future-oriented, cutting-edge institution like MIT, she realized that new administrative software was greeted with dismay as often as enthusiasm. But she also realized that in the complex, interlocking world of information and technology we have created for ourselves, not keeping up with the program (so to speak) is simply impossible without turning one's back on the way modern professional life is lived.

Of course, there are societies that do this. The religious sects collectively known as the Amish decide which modern technologies to adopt and which to forego. Sometimes their prohibitions are not absolute. For example, a whole neighborhood of Amish will share one pay telephone, but use it only for emergencies. And I recall reading about another Amish community that experimented with a video player and a set of children's films for a while. Eventually, though, the parents disposed of the equipment, because one father noticed that "the children aren't singing anymore."

Rejection of most modern technology is certainly a choice, but not one that can generally be made on an individual basis. The Amish survive, not simply because they don't watch videos or drive cars, but because they have preserved and maintained a functioning community where everyone has rights and responsibilities that are taken seriously. I understand that once an Amish child comes of age, he or she can freely choose to leave the community. But most decide to stay.

That kind of community is foreign to most of us, possibly because we demand so much in the way of freedom of choice that we refuse to be bound by obligations that would reduce that freedom. But choice comes with a price. In a peculiar way, the market is kind of a mirror of our own collective choices. Microsoft got to the place it is by giving most PC users most of what they wanted. That entails literally millions of choices. (Has any user on earth ever tried out all of Microsoft Word's features even once?) But in order to have the choices Microsoft provides, you have to forego the privilege of choosing your operating system.

Engineered systems of all kinds offer similar choices. You can choose not to own a car in a city in the western U. S., but your choices for travel will be radically restricted thereby. While millions of people do quite well in northeastern U. S. cities without cars, it is because their citizens have made a collective choice to maintain public transportation at a level that makes it possible to live without a car. The reasons for this are partly historical, partly technical, and partly political. The one thing they are not is simple.

I hope Microsoft is right about Vista's increased level of security. As for myself, I will continue to fly under the radar of many viruses by using mainly Macs. That choice is one I have found hard to maintain at times. But I enjoy being able to make it.

The ethics of choice in engineering is not a subject that comes up frequently. But since choice is a fundamental aspect of human freedom, as more of our lives are engaged with engineered products and systems all the time, those of us who create them should consider the ethics of choice more often.

Sources: Retooling: A Historian Confronts Technological Change by Rosalind Williams was published by MIT Press in 2002. My review of the book can be found in IEEE Technology and Society Magazine, vol. 23, pp. 6-8, Spring 2004.

Tuesday, November 14, 2006

Tesla and the Secret of "The Prestige"

Movies are not normally germane to engineering ethics, but I can justify the following discussion of the recent film "The Prestige" thusly. The driving force behind much good fiction, literary or cinematic, is a moral problem. When the moral problem involves technology, you have the same kind of issues that engineering ethics deals with, but in a different context.

This piece should not be read by people who have not seen the film and want to be surprised by the ending, because I'm going to give it away. If this blog had a wider readership I would hesitate to do such a thing—journalistic ethics generally forbidding it—but since all indications are that the audience is, shall we say, exclusive, I will go ahead and summarize the plot.

The time is around 1900, and two magicians, Angier and Borden, fall out when Borden ties a knot around the wrists of Angier's wife in such a way that it may have caused her death by drowning in a stage stunt. Angier, convinced that Borden killed his wife, embarks on a kind of revenge career in which he tries to out-magic Borden, who retaliates by disguising himself in Angier's audiences in order to wreak havoc with Angier's tricks, as well as devising increasingly ingenious stunts for his own London performances. Borden outdoes himself with a stunt called the Transported Man, in which it appears that he walks into a doorway on one side of the stage and emerges almost instantaneously out of a second doorway forty feet away. Angier, convinced that Borden does this trick by means of a machine he bought from the famed American inventor Nikola Tesla, visits Tesla in Colorado Springs, where Tesla has electrified (literally) the entire town in exchange for being able to use the town generator for his own experiments in transmitting energy without wires.

Here is the secret: Tesla, according to the movie, actually hits upon a way, not of transporting objects, but of duplicating them. He begins with top hats (the movie opens with a scene of a pile of top hats outside Tesla's remote laboratory), progresses to cats, and eventually duplicates Angier himself. Angier buys Tesla's machine, takes it to London, and with it stages one hundred performances of his own stunt, which requires him to drown his own double each time in order to keep the number of Angiers running around within manageable quantities, namely, one. The last scene of the movie shows where Angier (now dead—shot by Borden, who turns out to be two people who have been exchanging roles all through the movie) has hidden his one hundred dead bodies, each preserved in its own drowning tank.

As I watched that last scene, a thought flashed through my mind of the thousands of frozen embryos—babies—fetuses—whatever your preferred word is—that are preserved in in-vitro fertilization clinics around the world. They are not artfully lit and do not have all the features of a familiar screen actor, as the bodies in the tanks did. But they share with those bodies the one feature that makes them different from all other material objects: they are in some sense human, though in a state that might be termed suspended animation.

I do not believe "The Prestige" will go down in cinematic history as a great movie, although events could prove me wrong. For one thing, the main characters Angier and Borden, played by Hugh Jackman and Christian Bale, respectively, arouse little sympathy in the audience. They are each so single-mindedly focused on their rivalry that they trash the lives of women and shamelessly exploit anything within their reach to achieve their goals of mastery over the other, which necessarily involves mastery over the material world. As for the Tesla character, played by David Bowie as a kind of proto-Nazi-scientist type, he is simultaneously the enabler of the deepest wrongs committed by Angier, and the prophet who warns against the use of his own tools. When Angier offers to buy Tesla's machine, Tesla says that the best thing that could be done with it would be to sink it under the ocean.

The scientist who issues dire warnings against the use of his own creations is rather a cliché in science fiction. But with frozen embryos an everyday reality and human cloning on our doorstep, we are no longer talking about science fiction when we consider the morality of duplicating human beings. The job of the artist in a culture is not so much to solve moral problems (although artists can sometimes help, as Harriet Beecher Stowe tried to do with Uncle Tom's Cabin, which she wrote explicitly to expose the horrors and wrongs of slavery) as to bring our attention to things we do not see, whether out of familiarity, out of unfamiliarity, or for some other reason. The recent debates over so-called "therapeutic" cloning and embryonic stem cell research, frankly stated, involve the question of whether we should duplicate existing human beings and kill them for some purpose of our own. That is exactly what Angier did with his duplicated magicians. In the movie's system of justice, he died for his wrongdoing at the hand of his enemy.

I have little doubt that most of the people intellectually involved in the production of the film enthusiastically support embryonic stem cell research. Perhaps they see the connection between their film and that issue, and perhaps they don't. The typical response to someone who voices opposition to such research on the ground that it involves killing a human being is that the object in question is not a human being. In a recent Supreme Court case involving a law that prohibits partial-birth abortions, the language preferred by a Planned Parenthood lawyer was to say that the "fetus" involved in an abortion will "undergo demise." Would you feel any better if your doctor told you that you were going to "undergo demise" in a few weeks, rather than just saying flat out that you're going to die? The feelings at stake are not the baby's. Rather, people resort to this kind of language to help them deny the fact that they are dealing with other human beings, beings just like they themselves were at that age.

The writer Flannery O'Connor was once asked why she so often tended to write about the grotesque and the bizarre in her stories. She responded to the effect that as a Catholic, she knew that most of her audience did not share her beliefs: " . . . for the almost-blind, you draw large and startling figures." The makers of "The Prestige" have drawn some large and startling figures for us to ponder, perhaps without meaning to. I hope more people will draw the connection between their tale of long ago and far away, and what is going on in the halls of science and medicine today.

Sources: The Supreme Court case is described in an audio report by NPR's Nina Totenberg at http://www.npr.org/templates/story/story.php?storyId=6460614. The quotation from O'Connor is at http://thinkexist.com/quotes/flannery_o'connor/.

Tuesday, November 07, 2006

Global Warming and World Views, Part II

Last week I started from the fact that flying takes about ten times as much fossil fuel as riding trains, and imagined how an atheist would reason out a position on whether flying is morally justified, given the news about global warming. I showed that our hypothetical atheist could come out either in favor of flying or opposed to it, but the reasoning in each case came down to choosing rationales to suit a conclusion already reached. If you think getting enjoyment out of life is what it's all about, you'll fly as much as you can and leave the climate catastrophes for someone else to worry about. If you think man's presence on the planet is a bad idea on the whole, you'll favor the least intrusive modes of transportation possible. This leads to images of pre-agricultural primitive peoples tip-toeing through the jungle, leaving no trace of their passage. I'm sure the imaginative reader can come up with other rationalizations for either view, but that's what they are: rationalizations. You pick the outcome you want, and then you go looking for reasons to back it up.

I also said I'd take the example of a different worldview and see what conclusions you can draw from it as well. Here goes.

Before you say the opposite of atheism must be theism, hold on. We can keep this entirely at the level of philosophy. Instead of atheism, I should have said that the person I had in mind last time believed that there is no such thing as moral law apart from what somebody thinks. Because what I'm going to contrast that with today is the viewpoint that there IS such a thing as moral law, independent of what you or I think or say or feel, and even independent of the existence of humanity altogether.

What I mean is this. No one would quibble with the notion that whether or not people are on this planet, the law of gravity would still cause the earth to revolve around the sun. The law of gravity doesn't depend on our agreeing on it, or even knowing anything about it. Now what I'm proposing as an alternate view is the idea that common notions of right and wrong such as "don't kick babies" and "don't steal cars" are just as much an independent, inviolate part of the universe as the law of gravity. Anywhere there are sentient beings with intelligence and will, this theory goes, you find these moral principles, and they are the same everywhere.

This theory goes by various names at various times, but "natural law" is probably the most common one. It is "natural" in the sense that it is part of nature, part of the universe's structure. I won't attempt to justify it at this point, although people have done that, and not just religious people either. What I will do is to use it as a basis to adjudicate this question of whether it's moral to fly planes, knowing that you produce less greenhouse gases when you ride the train.

It turns out that the question is just as hard, but for different reasons. There seems to be a universal bias against what I'd call waste, for example. Taking a thing that is good for people and simply trashing it without benefiting from it yourself is something that hardly anybody would argue to be a good thing. If—and this is a big "if"—it turns out that our 200-year love affair with fossil fuels utterly wrecks the planet—and by this I mean, makes it completely uninhabitable, like burning down a house—then, well, I'd say anyone who burned anything combustible in the last 200 years is partly responsible. But the trouble with this notion is that we cannot know the future. That is why the "if" is so big.

You will meet people who will tell you that we have that amount of certainty about the problem, and it's time to start doing something about it. The only sure way to tell they're right is not to do anything and then wait and see. This approach has its own problems. There is a kind of prudential judgment that is part of natural law, in the sense that people are not generally expected to change their behavior based on remote possibilities that they are not intimately involved in. And that is what we should apply here.

The biggies in natural law concern how you treat your family, your friends, your neighbors, and so on. Giant geopolitical things like global warming may be the proper concern for certain specialists, but it betrays a kind of inverted set of priorities to put global warming ahead of friendships, fulfillment of duties, and charity, which is an old-fashioned word for love. I think natural lawyers would say, "If your life involves air travel and is otherwise following generally accepted moral principles, then you should consider using a less polluting form of transportation. But if your ability to do good would be seriously impaired, go ahead and fly." Of course, different people will come to different conclusions using these principles, even if they start from the same data. But that's true of almost any moral problem that isn't on the extremes. The same was true of our conclusions when we started from the atheistic or individualistic assumption.

So what good is all this? "You haven't answered the question!" you say. "Should I or should I not fly rather than ride trains?" Never mind planes or trains for the moment. Never mind global warming, even. The important question is not what mode of transportation to take, and not even whether New York City will be under water in 2106, but how you decide what is right and wrong, and what you believe the world is about, and why you are here in the first place. Get those things right, and the little stuff will take care of itself.

Tuesday, October 31, 2006

Global Warming and World Views, Part I

Did you know that if you travel on an airliner from, say, London to Frankfurt, you use about ten times the greenhouse-gas-producing fossil fuel that it takes to carry you the same distance by train? Did you care?

That idea is the gist of an ad campaign sponsored by European environmental groups. The ads take the form of statements by an imaginary airline head who makes arrogant, disparaging comments about environmentalists, whom he calls "lentil mobs." In Europe's largely pro-green culture, such comments are as inflammatory as running ads in U. S. media that show a fat white Southern sheriff saying disparaging things about blacks. Technique aside, the point the ads make is true: airline travel uses much more fossil fuel per passenger-mile than surface travel, and especially more than rail, which is more efficient than private cars. The way you react to that fact should depend on your view of the world and what it is all about.
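
The arithmetic behind the ads' claim is easy to sketch. The emission factors below are my own illustrative assumptions, chosen only to reflect the roughly ten-to-one ratio the campaign cites; real figures vary with aircraft type, train power source, and how full the seats are.

```python
# Back-of-envelope CO2 comparison for a London-Frankfurt trip.
# The per-passenger-kilometre factors are assumed for illustration,
# not taken from any official emissions inventory.

DISTANCE_KM = 640  # rough London-Frankfurt distance

EMISSIONS_KG_PER_PKM = {  # kg of CO2 per passenger-kilometre (assumed)
    "plane": 0.25,
    "train": 0.025,
}

def trip_emissions(mode: str, distance_km: float = DISTANCE_KM) -> float:
    """Estimated kg of CO2 one passenger accounts for on the trip."""
    return EMISSIONS_KG_PER_PKM[mode] * distance_km

for mode in ("plane", "train"):
    print(f"{mode}: {trip_emissions(mode):.0f} kg CO2 per passenger")
```

With these assumed factors the flight comes out to roughly 160 kg of CO2 per passenger against roughly 16 kg for the train; the absolute numbers are guesses, but the ten-to-one ratio is the point.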

Suppose you think this physical world is all there is, death is annihilation, and we are here to propagate our gene pool and along the way pick up whatever transient enjoyment we can. You may therefore view air travel as one of the greatest boons to humanity, since it lets us get from enjoyable place to enjoyable place much faster than surface transportation. Strangely, though, that attitude is uncommon in cultures where a frankly atheistic outlook prevails. In places such as France, Germany, and the Scandinavian countries, where publicly expressed religion is almost invisible, Greenpeace and similar green parties and beliefs are most common. The reasons for this are complex, but I can speculate.

If you believe man is the supreme intelligence in the universe, then he is therefore responsible for the efficient running of the planet. After all, we can't trust the elephants or the insects to do a good job. Or can we? They were here first. Down that line of thought lies the branch of environmentalism which views mankind as an unmitigated plague upon the planet, one which the Earth would be much better off without. In this view, the ideal world might be one in which the human population was reduced to the point where we could all live off the land like the pre-agriculture American Indians. The trouble with that is, estimates of the pre-Columbian population of North America run to a few tens of millions, with proportionally small numbers for the rest of the world. To achieve that ideal, then, most of the world's people would have to go away. As it happens, native Europeans (including Russians) are undergoing a population implosion that would be right on target to reduce Europe to its pre-civilization population levels, if it weren't for all the immigrants. But that is another story.

Even if you don't think mankind should commit mass suicide for the betterment of the planet, you may still feel some personal responsibility toward the globe which you cannot possibly fulfill. You may feel like a ten-year-old child put in charge of running General Motors: impossibly underqualified for the job. Accordingly, you turn to the experts, who are not quite as unqualified as you to run the planet, and they tell you that yes, the Earth is getting warmer, and yes, our burning fossil fuels has something to do with it, probably. So are you going to form an ironclad rule never to set foot on an airplane again?

Probably not. Instead, you'll fly when you can't avoid it, or maybe whenever you feel you can afford it, and feel guilty about it. And rightly so. Because if everybody quit flying and took the train, we'd burn less fossil fuel than we do now. Then what?

Well, you as an individual might live long enough to see a slight slowdown in the global-warming trend. But maybe not. And suppose it's too late? Suppose we've passed the invisible tipping point of no return, and the atmosphere is headed inexorably toward a catastrophe that will make the worst disaster movies look like child's play: storms, floods, inundated coastal cities and plains, radical rises in temperature. Again, there is nothing you can do but watch. In this case, the thought that years ago you quit flying in airplanes as a protest against what you saw as environmental irresponsibility might furnish you some small solace, but it will have done nothing significant in the long run.

I don't know about you, but I find all these alternatives profoundly depressing. Doing nothing is bad, but doing something like abstaining from flying has such a small chance of making any real difference that it's not worth the effort. Of course, there is always the great mysterious process by which public opinion changes. And something like that might happen here, as it did in the sixties in the U. S. when environmentalism grew from being viewed primarily as the peculiar obsession of a few left-wing crackpots to something that President Richard M. Nixon himself embraced when he founded the Environmental Protection Agency. But such things are hardly predictable, and to trust in their occurrence takes a kind of faith akin to that of people who regularly buy lottery tickets.

Lest I appear to be bringing a counsel of despair, I will take a look at a different world view next week. I'll tell you right now, I won't necessarily come to any different conclusions about what to do. But the reasons will be very, very different.

Sources: The report on the spoofing airline ads is an Oct. 29, 2006 New York Times article by Eric Pfanner at http://www.nytimes.com/2006/10/30/business/media/30fuel.html. According to the Wikipedia article on the population history of American indigenous peoples, estimates of the North American native population before 1492 range from 12 million to over 100 million, and are probably no more than educated guesses. Whatever the figure is, it is much less than the current population.

Wednesday, October 25, 2006

Sniffing Through Your Wallet with RFID

We should all be glad that Superman was a nice guy. I mean, with his X-ray vision, his personal jet-powered cape, not to mention his lady-killing looks when he didn't have his glasses on, he would have made a formidable criminal. Well, some nice guys in the Department of Computer Science at the University of Massachusetts Amherst have shown us that it doesn't take X-ray vision to read your name and credit-card number off some new types of credit cards that incorporate something called "RFID."

First, full disclosure (I've always wanted to say that): I taught at the University of Massachusetts Amherst for fifteen years before moving south, though not in Computer Science. And even before that, my supervising professor in graduate school and I patented a system that could have been used for RFID, although nobody but the patent lawyers ever made a nickel off the patent, which has now expired.

What is RFID? It stands for "radio frequency identification," and it includes a variety of techniques to track inventories, monitor conditions remotely, and even read credit cards. The common thread in all these things is an RFID chip that goes onto the object in question: a box of Wheaties, a credit card, or even a person's body. You can think of this technology as the next step beyond bar codes—those little symbols that the checkout person scans at the grocery store. Using the proper RFID equipment, you can receive information about where the object is, its inventory number, and so on, all without contacting the object. So in a warehouse, for instance, every time a pallet full of computers goes out the door, an RFID transponder can count them and record each computer's serial number, and the guy driving the forklift doesn't even have to slow down. You just have to be within radio range, which can vary from inches to several feet. Which is how the clever guys at UMass Amherst did their trick.

According to the New York Times, Professor Kevin Fu asked a graduate student to take a sealed envelope bearing a new credit card and just tap it against a transponder box they had designed. In a few minutes, Professor Fu's name, the credit card number, and even the expiration date appeared on a screen. All without even opening the envelope.
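
To see why that result was alarming, consider what "reading" a card amounts to if the tag answers a reader's query in the clear. The response format below is entirely made up for illustration; real contactless cards speak a binary protocol, and the UMass team's actual decoding was considerably more involved.

```python
# Toy sketch: if an RFID credit card replies to a reader's query with
# its data unencrypted, extracting the fields is trivial string parsing.
# The semicolon-separated format here is hypothetical.

def parse_card_response(response: str) -> dict:
    """Split a (hypothetical) plaintext tag reply into its fields."""
    name, number, expiry = response.split(";")
    return {"name": name, "number": number, "expiry": expiry}

# What a sniffed reply might look like (4111... is the standard
# Visa test number, not a real account):
reply = "KEVIN FU;4111111111111111;01/09"
card = parse_card_response(reply)
print(card["name"], card["expiry"])  # no envelope-opening required
```

The real defense, of course, is not to transmit the data in the clear at all: challenge-response authentication and encryption put the secret in the reader-card handshake rather than in the hope that nobody is listening.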

The Times reporter dutifully made the rounds of credit-card firms such as American Express, J. P. Morgan Chase, and Visa to describe Prof. Fu's magic trick. Visa's Brian Triplett said it was an "interesting technical exercise," but wasn't concerned that it would lead to widespread credit-card fraud. It should be noted that it wasn't Mr. Triplett's credit card number that showed up on the screen.

As with many other technologies that develop out of the public eye for years or decades before emerging into visibility, RFID has been around a lot longer than you might think. Back in World War II, a primitive form of RFID was used with aircraft to "identify friend or foe" (IFF). The equipment was far too bulky and expensive back then to be considered for consumer products, but advances in electronics have given us RFID chips cheap enough to throw away with the empty box of Wheaties. Some experts believe RFID will largely replace bar codes as the inventory technology of the future. And that's not all.

Attaching an RFID tag to one's person would lead to all sorts of situations, not all of which are pleasant. Strangely enough, one of the more popular paranoid delusions of recent years, dating from well before RFID could actually do any such thing, was the belief that the FBI or some equally secretive outfit had implanted a chip in one's body, a chip that was spying on one's whereabouts and even one's thoughts. I actually had dealings with such an individual when I was back at UMass, and it wasn't a pretty picture. It's not every day that billions of dollars are spent with the unintended byproduct of bringing some nut case's delusion into the realm of reality, but it happens. RFID is a long way from reading people's thoughts yet, but even that notion doesn't sound as goofy as it used to, what with PET scans and other noninvasive brain-monitoring techniques.

For now, RFID will begin to show up only in places like grocery stores, automated tollbooth tags such as New York State's "E-ZPass," and some credit cards. I don't think we need to worry about Prof. Fu's trick falling into the hands of some evil computer scientist, because it's fairly easy to foil. And fortunately, the laws about credit-card fraud in this country are written so that the consumer is liable only for the first $50 of loss, and the credit-card issuer is left holding the rest of the bag. So if Visa and company start losing substantial amounts of money to people who cobble together a duplicate of Prof. Fu's remote card reader, the firms will take the straightforward steps needed to fix that particular problem.

All the same, we need to think about how RFID could be abused, before some clever thief or saboteur does, and take reasonable precautions. And it's going to be a long while before yours truly consents to having any chips embedded in his person. But then, I was born old-fashioned.

Sources: The New York Times story appeared online on Oct. 23, 2006 at http://www.nytimes.com/2006/10/23/business/23card.html. I have recently received a copy of RFID Strategic Implementation and ROI: A Practical Roadmap to Success by Charles Poirier and Duncan McCollum, which has a good nontechnical discussion of RFID's history and how it works.

Tuesday, October 17, 2006

Is Any Technology Ethically Neutral? The Sony Reader

A recent New York Times article announced the debut of the Sony Reader, an electronic book that uses tiny plastic spheres to simulate the appearance of an actual page of print. Unlike a laptop display with its energy-hogging backlighting, the Reader uses only existing room light and consumes essentially no power until you turn the page. A reader of the Reader can take satisfaction in the notion that no trees were cut down and hardly any oil or coal burned to produce the minuscule amount of energy needed to operate it. A more environmentally friendly technology can hardly be imagined, it seems. So should we all pitch our old-fashioned stacks of paper bound together and buy Readers? It depends.

When I try to engage certain people in a discussion of the ethics of a given technology, an argument I often hear goes like this: "Well, technology by itself is neutral. It's only the ways people use technology that are good or bad." That is one of those nice-sounding phrases that look good at first, but tend to disintegrate under scrutiny. The Sony Reader would seem to be a good candidate to exemplify the idea of the neutrality of technology. No one is making us go out and buy Readers. It's simply another item on the market which may or may not prove popular. It seems to be environmentally benign, and as long as it does what its maker claims for it, what downsides could it possibly have?

That question actually sends us out upon deep philosophical waters. There is a school of thought popular in Europe that goes under the name of the "precautionary principle." Followers of this principle take the stand that any new technology must be examined thoroughly for possible harmful effects before it can be generally distributed. If no actual harm has occurred yet, the examination of a technology for possible harm necessarily involves reasoned speculation about what might occur. There is nothing intrinsically wrong with basing technical decisions upon hypotheticals. After all, the Sony Reader's designers were speculating that people would want to buy their product if they developed it, and so the use of speculation in evaluating its effects, both good and bad, is no less warranted.

For example, one could imagine Readers sweeping the world to become as popular as books, if not more so. (To a great extent, this has already taken place as computers have replaced reference volumes in libraries.) Would the world be a better place if every book was an e-book?

That depends. The people who make conventional books wouldn't think so. Technological unemployment has been around ever since there was technology. Somehow the world's economies have absorbed the paste-up artists, the platemakers, the hot-type linotypers, and all the other superseded occupations that pre-electronic forms of printing required. What has happened to a good fraction of the printing industry's past workers might eventually happen to all of them. But unless you believe in state control and ownership of the means of production, technological unemployment is just one of those things that happen.

How could this possibility be forestalled? In the world's continuing embrace of a free-market global economy, consumers can exert a certain amount of control over what they buy. But consumers can't buy what isn't there, and much of the power to decide what gets sold lies with those who control the large firms whose investments determine the directions of the markets. If next year, most investors decide that paper books are going the way of the slide rule when electronic calculators came along, the rest of us will not be able to do much about it.

Next, consider what the Reader is made of: probably some conventional electronics, a battery, and a display containing thousands if not millions of tiny plastic spheres suspended in some kind of liquid. Some day—probably sooner rather than later, if the useful lifetime of the typical laptop is any guide—the brand-new Readers now waiting on store shelves will accumulate in attics and closets, only to be thrown out when the next model comes along. As we have learned, you can't simply throw things away these days, because there isn't any "away" anymore. More and more environmentally conscious manufacturers are doing what is called life-cycle design, which takes into account the problem of how to dispose of a used piece of equipment with minimal impact on the environment. I have no specific information on the Sony Reader in this regard, but at the least, its disposal will take up some room in a landfill somewhere. And if it contains any hazardous chemicals in its battery or display, these chemicals could cause problems later.

Finally, there is the subtle but real change in the habits of millions who change from one form of information exchange to another. No matter how closely the makers of a new technology try to imitate the experience produced by a previous one, some things are different. And sometimes the new technology imposes a whole set of new habits on the user, not all of them good ones. How many of us have rattled out an angry email and hit the send key only to regret it later? Somehow, the act of writing or typing a paper letter, signing it, folding it, addressing it, and putting it in the mailbox provided a number of additional points of decision where we could give heed to our second thoughts and at least put the letter aside instead of mailing it. What at first looked like nothing more than obstacles to the rapid communication of thought now looks more like a kind of psychological buffer that may have made society a better place.

I have no idea whether the Reader will catch on, or whether it is only a precursor of something better, or whether, like the poor, paper books will always be with us. And my little exercise in applying the precautionary principle to such a benign-looking piece of technology as the Reader should not be misunderstood to mean that I feel it is a threat to civilization. But I hope I have made clear that any technology whatsoever that ends up in the hands of people has intrinsic potential for both good and bad consequences, and the way it is designed can influence how those consequences develop over time.

Sources: The New York Times article by David Pogue on Oct. 12, 2006 describing the Sony Reader was located at http://www.nytimes.com/2006/10/12/technology/12pogue.html.