Monday, November 26, 2007

Can Robots Be Ethical?

Earlier this month, Canadian science-fiction writer Bob Sawyer attracted a lot of attention with an editorial he wrote for a special robotics issue of the prestigious research journal Science. In his piece, Sawyer showed that writers of science fiction have been exploring the relationship between humans and robots at least since the early stories of Isaac Asimov in the 1940s. But far from coming up with a tidy solution to the moral implications of autonomous, seemingly intelligent machines, the sci-fi crowd appears to have concentrated on the dismal downsides of what could go wrong with robots despite the best intentions of humans to make them safe and obedient. Think Frankenstein, only with Energizer-Bunny endurance and superhuman powers.

Nevertheless, Sawyer is an optimist. He applauds the efforts of South Korea, Japan, and the European Robotics Research Network to develop guidelines for the ethical aspects of robot use, and chides the U. S. for lagging in this area. He uses phrases like "robot responsibilities and rights" and speculates that the main reason this country hasn't developed robot ethics is that many robots or robot-like machines are used by the military. He wants us specifically to explore the question of whether "biological and artificial beings can share this world as equals." He winds up with the hope that we might all aspire to the outcome of a 1938 story in which a man married a robot. That is, he looks forward to the time that all of us, like the lovers in countless fairy tales, can be "living, as one day all humans and robots might, happily ever after."

Well. Hope is a specifically human virtue, and is not to be thoughtlessly disparaged. But Sawyer has erred in blurring some vitally important distinctions that often get overlooked in discussions about the present and future role of robots in society.

I do not know anything about Sawyer's core beliefs and philosophy other than what he said in his editorial. But I hope he writes his fiction more carefully than he writes editorials.

The key question about any machine we call a "robot" is whether it is under the control of a human being, and to what extent that control is exercised. Sawyer begins his editorial with the story of how a remotely piloted vehicle dropped a bomb on two people who looked like they were planting an explosive device in Iraq. He terms this vehicle, which was undoubtedly under the continuous control of a human operator, a "quasi-robot." No doubt it contains numerous servomechanisms to relieve the operator of tedious hand-controlled steering and stabilization duties, but to call a remotely controlled bomber a "quasi-robot" is to grant it a degree of autonomy which it does not possess.

Autonomy is a relative term. There is no entirely autonomous (the word's roots mean "self-governing") being in the universe except God. The issue of autonomy is a red herring that distracts attention from the real question, which is this: is it even possible for a human-made machine to possess moral agency?

Now I've got to explain what moral agency is. We are used to the idea that children below a certain age are not allowed to enter into contracts, marry, smoke, or drink. Why is this? Because society has rendered a judgment that they are in general not mature enough to exercise independent (autonomous) moral judgment about these matters. They are not old enough to be regarded as moral agents in every respect under law. Of course, even young children seem to have some built-in ability to make moral judgments. Isn't "That's not fair!" one of the favorite phrases in the six-year-old set? We accord certain rights and responsibilities to humans as they mature because we recognize that they, and only they, can act as moral agents.

Sawyer's mistake (or one of them, anyway) is to assume that as artificial intelligence and robotics progress, robots will mature essentially like humans do and will be able to behave like moral agents. I would point out that this achievement is far from being demonstrated. But even if moral agency is simulated some day by a robot in a realistic way indistinguishable from humanity, this fact will always be true: machines have been, are, and always will be the products of the human mind. As such, the human mind or minds which create them also possess the ultimate moral responsibility for the robot's actions and behavior, no matter how seemingly autonomous the robot becomes. So the robot can have no "rights and responsibilities"—those are things which only moral agents, namely humans, can possess.

This fact is illustrated by one of Sawyer's own examples. He cites the case of a $10 million jury award to a man who was injured back in 1979 by a robot, probably an industrial machine. You can bet that in 1979, the robot in question was no autonomous R2D2—it was probably something like one of those advanced welders that you see in automotive ads, the ones that zip around making ten welds in the time it takes a human to make one. I merely note that the injured party did not sue the robot for $10 million—he sued the robot's operators and owners, because everybody agrees that if a machine causes injury, and one or more humans are responsible for the actions of the machine, the humans are at fault and bear the moral responsibility for the machine's actions.

Another distinction Sawyer fails to make is the difference between good and necessary laws regulating the development and use of robots as robots, and the entirely pernicious and unnecessary idea of treating robots as autonomous moral agents. But as I'm out of space for today, I will take this question up next week.

Sources: Sawyer's editorial appeared in the Nov. 16, 2007 issue of Science, vol. 318, p. 1037. I addressed some issues related to the question of robot ethics in my blog "Are Robots Human? or, Are Humans Robots?" for July 30, 2007. Bob Sawyer's webpage is at http://sfwriter.com.

Monday, November 19, 2007

Yahoo Pays—A Little—for Internet Censorship in China

Shi Tao is still languishing in a Chinese prison. But now Yahoo, the company that helped put him there, has to pay something for what they did.

Until November of 2004, Shi was a journalist working for a Chinese business journal. Earlier that year, his newspaper received a message from the Chinese government warning the journal not to run stories on the 15th anniversary of the Tiananmen Square massacre of 1989. Shi emailed a copy of this message to an editor at Democracy News, a New York-based human-rights organization. Chinese government officials found out about the email and pressured Yahoo, Shi's internet service provider, to reveal the identity of the email's author. Yahoo did so, and on Nov. 24, 2004, agents of the government arrested Shi in the northern city of Taiyuan. He was convicted the following April of revealing "state secrets" and has been in jail ever since. In a similar case, Yahoo revealed the identity of engineer Wang Xiaoning, who had posted pro-democracy comments online about the same time. He suffered the same fate as Shi Tao, but Wang's wife Yu Ling decided not to take this lying down.

After years of delay trying to obtain court documents, Yu Ling filed suit in a California court against Yahoo last April. And last week, Yahoo announced that the suit had been settled out of court, though few details were released other than the fact that Yahoo executives promised they would do "everything they can to get the men out of prison." In a fight between a totalitarian sovereign government and the CEO of one U. S. company, I think it is fair to say the odds are stacked against the company—and any prisoners the company is trying to help.

A lot of engineering ethics involves shades of gray, ambivalent situations, and other complexities. That is not the case here. At stake is the question of whether freedom to criticize one's government is a good thing or not. The founders of the United States believed it was. It is a principle enshrined in the U. S. Constitution and defended to what some might view as an absurd degree today. If it is a good thing in one culture or state, it is a good thing everywhere. That freedom is just as valuable and worth protecting in Shanghai as it is in Peoria.

So what happens to your respect for this freedom if you run a large multinational company eager to profit from the giant potential market that is China? It appears that you agree to whatever compromises with freedom the communist government demands of you, up to and including the divulging of email account holders' identities. Now internet service providers in this country also divulge account names to law enforcement officials from time to time, but only after court orders with regard to what is likely to be truly criminal activity. Posting a blog saying you don't like George Bush will not get you sent to jail here. But as we have seen, doing something similar in China will get you sent to jail there, and Yahoo helped.

Only when one of the prisoners' relatives went to great personal trouble and expense to file a lawsuit against Yahoo did that company even start to act. In the past, and as recently as last week, it has justified its betrayal of Shi and Wang by saying if it doesn't follow the Chinese government's rules, Yahoo's own employees might be in danger. Well, duh! Better our customers go to jail than us. Is this the kind of attitude you want from a company that you do business with?

Rather than admit wrongdoing or even disclose what compensation is involved in the lawsuit's settlement, Yahoo convinced the dissidents to settle out of court for an undisclosed amount plus a promise to do whatever they can to gain the prisoners' release. If I were Shi or Wang, I would not hold my breath.

This kind of thing is what happens when a corporation allows profits to overwhelm its moral sense. The pressure on publicly owned corporations to make the most money while staying just within the bounds of the law is immense. And as someone whose retirement investments include corporate stocks and bonds, I am as much a part of that problem as anyone else who invests money in today's economy. But when that legal-economic principle is allowed to trump all others, you end up with situations in which settling lawsuits for doing heinous things is simply a matter of buying off those you have injured at the lowest negotiated price. That appears to be exactly what Yahoo has done.

If you are expecting me to pull any punches here, I'm not going to. Last week we invited a Chinese graduate student and her husband over for supper at our house. Both of them were born in the People's Republic of China, but in different cities, and they met only last year when he was in his fifth year as a professor of mathematics and she was a new graduate student at Texas State University. They fell in love, got married, and now she is looking for a job. In this country there are no work committees to pass judgment on whether you can marry, what job you can take, where you can live, and so on. I did not discuss with them the reasons they emigrated to the U. S., but I think the answers are obvious.

Leaving one's native land is a terrible wrench, and these young people must have had very good reasons to abandon the land of their birth, learn a difficult foreign language, and excel academically in a strange environment. But it happens all the time. Wouldn't it be nice if stories like this could happen in China too? And some day they may, but only if the government decides to change its ways. And that will happen only when people like Shi Tao and Wang Xiaoning can make their voices heard without fear that a company based in the freedom-loving United States of America will rat on them and help send them to jail.

Sources: A report on the Yahoo settlement can be found in CNN's Asia online edition at
http://edition.cnn.com/2007/WORLD/asiapcf/11/13/yahoo.china/index.html. I first commented on this issue in a blog posted here on March 30, 2006.

Monday, November 12, 2007

Safety's Sleuths: The NTSB Investigation of the Minneapolis Bridge Collapse

Bridges are not supposed to fall down. But last August 1, the 1,900-foot-long bridge that carried I-35W over the Mississippi in Minneapolis came apart and landed in the river, carrying thirteen people to their highly unexpected deaths. We fear dying from a lot of things, but it is safe to say that nobody on that bridge that day spent a lot of time worrying about whether they would die as an interstate-highway bridge fell out from under them.

That very fact attests to the rarity, in this country anyway, of major structural failures in transportation-related public works such as bridges. One reason they are so rare is that for the last four decades, the National Transportation Safety Board has investigated accidents involving the nation's transportation infrastructure. In so doing, it performs a critical task that historian of technology Henry Petroski says is essential to the continued safety of engineered structures. In "Design Paradigms," a book of engineering case studies that includes three famous bridge failures, Petroski writes, "The surest way to obviate new failures is to understand fully the facts and circumstances surrounding failures that have already occurred." That is what the NTSB is doing now with regard to the Minneapolis bridge collapse.

The same day of the collapse, the NTSB dispatched a nineteen-member "Go Team" from its headquarters in Washington, D. C. to Minneapolis. As rescue and recovery work allowed, members of this team collected many kinds of information. They used an FBI-provided three-dimensional laser scanning device and gyro-stabilized high-resolution cameras to establish exactly where the parts of the bridge came to rest after the collapse. They collected names and recollections from dozens of eyewitnesses and secured the original of the famous security-camera recording that showed part of the bridge in the act of collapsing. By the third week of August, NTSB officials had interviewed over 300 people, including over a hundred who called in to a specially arranged witness hotline. In the following month or so, critical pieces of the bridge were removed to a nearby secure site for detailed inspection and investigation.

One of the most important tools currently available to the NTSB is powerful computer software called "finite-element analysis." This is a way to solve the fundamental materials-science equations that describe how steel (or any other solid) behaves under complicated conditions of stress. While it can't predict exactly where cracks will occur in an overstressed beam, it can reveal the locations in a complex structure such as a bridge where the local stresses exceed the tensile strength of the steel. It is in such locations that cracks and failures are most likely to occur.
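The NTSB's actual models are vastly more elaborate than anything that fits in a blog post, but the core idea of finite-element analysis can be sketched in a few lines. The toy example below, which is purely illustrative and has nothing to do with the bridge investigation itself, divides a one-dimensional steel bar under axial load into elements, assembles a stiffness matrix, solves for the displacements, and checks the computed stress against a typical yield strength for structural steel. All the numbers are assumed for the sake of the example.

```python
import numpy as np

# Toy 1-D finite-element model: a steel bar fixed at one end and pulled at
# the other. Real bridge analyses use millions of 3-D elements, but the
# principle is the same: assemble a stiffness matrix, solve for the
# displacements, then look for places where stress approaches the
# material's strength.

E = 200e9          # Young's modulus of steel, Pa (assumed)
A = 1e-4           # cross-sectional area, m^2 (10 cm^2, assumed)
L_total = 2.0      # bar length, m
n_elem = 10        # number of finite elements
P = 20e3           # axial load at the free end, N (assumed)

L_e = L_total / n_elem
k = E * A / L_e    # axial stiffness of one element

# Assemble the global stiffness matrix for n_elem + 1 nodes.
n_nodes = n_elem + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k * np.array([[1, -1], [-1, 1]])

# Load vector: pull with force P at the last node.
F = np.zeros(n_nodes)
F[-1] = P

# Node 0 is fixed; drop its row and column, then solve K u = F.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

# Element stress = E * strain = E * (u_right - u_left) / L_e.
stress = E * np.diff(u) / L_e     # Pa; uniform here since the load is axial

yield_strength = 250e6            # typical structural-steel yield, Pa (assumed)
print("max stress (MPa):", stress.max() / 1e6)
print("exceeds yield?", bool(stress.max() > yield_strength))
```

In this simple case the answer is just P/A, but the same assemble-and-solve machinery, scaled up to three dimensions and fed the kind of geometry and loading data the NTSB has been collecting, is what lets investigators pinpoint overstressed regions in a structure as complicated as a bridge.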

But as with any computer program, finite-element analysis software is only as good as the data you put into it. This is why the NTSB has spent the last three months gathering as much information as they can on not only the details of the bridge structure, including core samples showing how thick the deck was, but also other factors such as loading. You may recall that at the time of the collapse, a construction crew with heavy equipment was working on a portion of the bridge. The NTSB has concluded that a total of 287 tons of construction equipment and materials were on the bridge at the time of the accident. The exact location and weight of this extra loading is critical input to the computer analysis. The NTSB has made good progress in procuring such information by talking with eyewitnesses and viewing an aerial photograph taken by an airline passenger from a plane that passed over the bridge shortly before its collapse. Although the NTSB turned the disaster site over to the Minnesota Department of Transportation on Oct. 12, some thirty NTSB staffers are still working full-time on the investigation, which is not expected to be wrapped up for over a year.

Well-run operations are often taken for granted. Things could be very different. In places where there is nothing like the NTSB, disasters like this can be much more frequent, and citizens trying to affix blame have little if any recourse if something terrible happens to them or their loved ones. The NTSB could be corrupt, for example, or subject to bribes or falsification of its reports in response to political pressures. To my knowledge, however, its reputation for probity and "just-the-facts" scientific integrity is essentially spotless. This is no minor achievement, and the engineers who work for the Board have accomplished great things in the service of informing both the technical public and the general public about the reasons for tragedies such as the Minneapolis bridge collapse.

Every major engineering failure marks the start of a detective story. Accident investigation is one of the few lines of work where engineers can spend their professional lives in the role of detectives. Now and then the culprit is a true criminal, but most of the time, accidents are due to inattention, bad communications, or inadvertent mistakes rather than any active will to do harm. Nevertheless, harm is done.

We will have to wait a while longer before we have the full story of how a part of I-35W suddenly lost altitude that hot August day. But it will be a story worth waiting for, because we can learn from it how to keep accidents like it from happening again.

Sources: The NTSB posts updates periodically on its accident investigations at its website. The latest such release about the Minneapolis bridge collapse was posted on Oct. 23, 2007 at http://www.ntsb.gov/pressrel/2007/071023c.html.

Monday, November 05, 2007

Identity Theft Gets Personal, or, Licenses to Steal?

Well, it's happened to me—sort of. My identity wasn't stolen, exactly—just left out in a place it didn't belong for a few days. When the Commonwealth of Massachusetts discovered its error, it tried to fix the damage and then it let me know all about it. And as far as I know, no harm has been done. Still, it leaves me with an uncomfortable feeling.

Here's what happened. Some years ago, I decided to become a licensed professional engineer. Unlike the medical and legal professions, the engineering professions generally don't require a practitioner to be licensed, except in a few cases where an engineer involved in public works such as bridges or roads has to sign off on plans for legal liability reasons. The vast majority of engineers working in private industry and academia in this country do not have to be licensed in order to hold their jobs. (The reasons for this are interesting, but a story for another day.)

Nevertheless, if you're licensed you get a pretty certificate to put on your wall, and some university engineering departments technically require their professors to be licensed professional engineers, although I've never heard of anybody losing their job over it. At the time, I was living in Massachusetts, and so I got online and found out what I had to do to become licensed.

The conventional route is a two-step process. Undergraduate engineering students can take the EIT (engineer-in-training) exam, and if they pass they become engineers in training. After five years or so of practice or the equivalent, they can take a second exam and become full-fledged licensed professional engineers. For older types like me, with a lot more than four years of experience, the Massachusetts Division of Professional Licensure had an alternative: I could put together about five pounds of documentation on my career and send it in and they'd interview me, and if they thought it was enough, they would license me after that. So that was the course I adopted, and in due course I received Electrical Engineering License No. 40940.

That number is part of the public record, which, as it turns out, the Division sends out regularly in the form of computer disks when it receives requests for lists of professional engineers of various types. This is how I get all kinds of junk mail from companies selling engineering-related products, I suppose, but I don't mind that aspect of the situation too much. What I mind a little more is what prompted the letter I received from the Division last week.

For four days last September, some disks they sent out in response to requests for licensees' names and addresses also accidentally included our Social Security numbers. That is NOT supposed to be a part of the public record, and commendably, the Division caught their mistake before too much damage had been done. They called all the places they'd sent the numbers to, got them to return the disks, made them sign papers saying they didn't retain any information from the disks, and so the incident is presumably closed. Just as a precaution, however, the Division told me to call one of the national credit reporting agencies and put a fraud alert on my credit report. I may get around to doing that one of these days.

As identity thefts go, this is a pretty minor case, more of a slipup than any deliberate crime. And I must say that the Division appears to have handled it in an exemplary fashion, notifying the potential victims and so on and getting the unintended recipients of the sensitive information to promise they didn't do anything fraudulent. But it gives one pause, because I have no idea who else has my Social Security number, and how careful they are being with it, and whether they've slipped up or had stuff stolen from them without even knowing about it.

This issue is shortly going to become even more important as most medical records go online in the next few years. I'm pretty sure one of the things you are always asked for in a doctor's office is your Social Security number, and that's how many medical records are indexed. Medical records have a lot of stuff in them that's even more sensitive than Social Security numbers, and I only hope that the doctors will learn from the bankers how to protect sensitive information.

The trouble is that the motivations are different. If a crook perpetrates credit-card fraud, the consumer is liable for only the first fifty dollars, and the bank or credit card company is left holding the bag for the rest. That one law has prompted the financial sector to develop one of the most secure and reliable systems of online information transfer in the world.

Doctors and healthcare providers don't have the same kind of motivation. A breach in your medical security is no skin off their nose, so to speak. So the laws will have to be written in a way that motivates the holders of sensitive information to protect it at the price of some penalty that will be greater than the cost of doing a good job of data security.

As for my little identity problem, I do believe I'll give one of those credit agencies a call. I had a very minor problem with one of them a few years ago and they fixed it with reasonable promptness, so it can't hurt to take that extra step of caution. That's an engineer for you.

Sources: More info on becoming a licensed professional engineer can be found at the website of the National Society of Professional Engineers, www.nspe.org.

Monday, October 29, 2007

Working the Bugs Out In Space

If you see metal shavings in the oil you change out of your car, that's not an encouraging sign. But what if your vehicle cost several billion dollars and is flying hundreds of miles above the ground at fifteen thousand miles an hour? That is the problem faced by the engineers and astronauts trying to build the International Space Station.

News reports this week say that space shuttle Discovery mission specialist Daniel Tani opened a plastic cover on a gearbox during a spacewalk to reposition some solar panels. He was following orders from ground engineers who had noted excessive vibration and power consumption from the motors that move the 30,000-pound solar panels so as to collect the maximum amount of sunlight. Inside the box, Tani found an abundance of metal shavings, and collected some for analysis back home.

Everything is harder in space: repairs, inspections, lubrication, and even engineering and design. Although there are a few expensive giant vacuum chambers around that let engineers test satellites and other small to medium-size objects in something close to the reality of space, these don't simulate zero-G conditions. So the only way to check out most space-bound systems in 100% realistic conditions is to fasten them on a rocket and send them out there to see what happens. This is one reason that space exploration is so expensive and fraught with failures.

Readers of this blog know that I have serious reservations about the continuing use of the Space Shuttle (it ought to be replaced yesterday, not in two or three years) and the wisdom of spending billions on a space station which is too shaky for really good science and too small for really meaningful colonization of space. All the same, it's good to know that when something goes wrong on a system as big as the Space Station, you can send up a guy to take off the covers and have a look around, even if the service call costs millions of dollars. Discovery's latest trip was not only for maintenance—it is part of a tightly scheduled program to keep the Space Station's construction on track for completion by 2010.

Since this effort is costing several countries (Russia, the U. S., Japan, and Canada are major partners) both money and lives (if you count those who died in the 2003 Columbia disaster), it is only reasonable to ask what good it is doing. There is a scientific answer, an engineering answer, and a political answer. As is the nature of these things, they all blur into each other.

The scientific answer is, so far, not much. I cannot think of a single major scientific discovery that has resulted from work performed directly by astronauts, as opposed to research enabled by the Hubble Space Telescope or other unmanned lunar and planetary probes. This of course may change once the station is "completed" (such a project is never really finished for good, but the bulk of work will eventually shift from construction to use). But right now, it's too early to say if there will be any significant scientific payoff from the project at all.

From an engineering standpoint, building and operating the space station can tell us loads about the problems of building and operating a space station. We've had a smoke problem, a computer problem, and now a ground-up-gear problem, possibly, and those are only the ones that made headlines. As the first system of its kind, the International Space Station is bound to have all kinds of engineering issues that we can learn a lot from, assuming we try to do something like this again. As every engineer knows, the first time is mainly learning from mistakes. If your funding goes long enough to let you try a second time, you have a chance at getting it mostly right.

From a political view, the space station is an experiment in international cooperation on an intensely complex technical project, and by and large, this aspect of it seems to have gone well. When the U. S. manned space program went on hold for two years after the Columbia disaster, the Russians stepped up to the plate and kept the station in business with Soyuz launches. So far, the politicians have mostly kept out of the way of the committed engineers and managers in all the countries involved who want to see this thing go. Engineers have a way of forgetting about nationalities or political differences when they share a common technical goal, and the International Space Station is a good example of how that can work.

In the meanwhile, there's the question of where all those metal shavings are coming from. The ten-foot boxes that serve as pivots for the large solar panels could be replaced, I suppose, but that would be a major undertaking. On the other hand, if the bearings freeze up, that will severely limit the amount of electrical power available to the station. I hope this turns out to be something trivial; one engineer on the ground speculated that the shavings were just chewed-up foil insulation. My instincts tell me that such a hope is wishful thinking, but we'll just have to wait and see.

Sources: The New York Times article describing the metal-shaving problem is at http://www.nytimes.com/2007/10/29/science/space/29shuttle.html. Wikipedia has a good article on the International Space Station's history and construction.

Monday, October 22, 2007

One Laptop Per Child: Will It Fly?

Being poor and isolated is rotten. A recent book by Paul Collier entitled The Bottom Billion: Why the Poorest Countries Are Failing and What Can Be Done About It deals with the poorest one-sixth of the world's population of six billion. According to reviews, Collier identifies four main reasons that these poorest of the poor are where they are: (1) internal and regional conflicts, which are sometimes worsened by (2) concentrations of natural resources such as gold and oil that distort economies, especially when (3) you live in a country next to one where similar problems are going on, and (4) your government is corrupted by sweetheart deals with everybody from Western multinational companies all the way down to international crooks. Although I haven't read the book, the problem of a country's poor children not having laptops apparently did not make Collier's list of the top four issues. Nevertheless, an organization in Cambridge, Massachusetts is busily working on solving that problem.

The outfit called "One Laptop Per Child" aims to put specially-designed, inexpensive laptop computers into the hands of millions of children in the poorest countries in the next few years. The machine itself will be powered by solar cells, hand crank, or batteries, and uses special hardware and software to reduce its operating power consumption to less than a watt under some conditions, which is about a tenth or less of what an ordinary laptop uses. Recent reports indicate that the designers have not yet reached their target cost of $100 per unit, but present estimates are below $200 and the hope is the cost will fall as manufacturing climbs the learning curve.

The project's founder is Nicholas Negroponte, who has held various positions at MIT and related organizations for many years. Negroponte, who also founded MIT's Media Lab, is a member of what one might term the MIT computer brain trust, a group of individuals including Seymour Papert and Marvin Minsky who have shaped the direction of a great deal of computer and artificial intelligence research and publicity.

The hearts of Negroponte and company appear to be in the right place. Children don't live by bread alone, and it is a noble goal to bring the benefits of computer technology to people who are impoverished in other ways as well. The plan is to sell the laptops only to governments, which would presumably distribute the units to their citizens either free or at a heavily subsidized low cost. Although the XO-1, as it's called, will not be available for consumer purchases in general, the Wikipedia article on it reports that this Christmas, you will be able to "get one and give one": you can buy one for yourself and at the same time, donate one to a poor child somewhere.

There is a movement in engineering ethics to encourage the study of what are called "moral exemplars": people or organizations who do the right thing in engineering, furnishing good examples to the rest of us. I will say that the XO-1 project certainly has the potential to be a moral exemplar, but so far the jury is out. The organizers are still awaiting large-scale production and distribution, and until they have large numbers of units out in the field and do some studies to see how they are used, we will simply have to wait and see how the project turns out.

A few critics have pointed out that the venture is very "top-down," in the sense that a bunch of experts in Cambridge got together and designed a laptop that they thought would be good for third-world children to use. It has certainly gained Negroponte a lot of favorable media attention. For example, he introduced a kind of pre-prototype at a UN-sponsored meeting in Tunisia in 2005, sharing the platform with then-UN secretary general Kofi Annan. And judging by the specialized hardware and software, the MIT types have had a field day trying out some of their pet ideas in this thing, using it as kind of a test bed for a lot of what-if notions.

But whether the unit really meets a genuine need or truly improves the lives of children around the world remains to be seen. One concern is that all the software on the unit is open-source. This is a nice gesture toward an ideal world that some people would like to live in, where all software would be open-source, but it ignores the reality that most software used on most computers today is proprietary. And if these XO-1s can't run any proprietary software (although users might install some after purchase, since the operating system is Linux), there is a real danger that the things may turn into just expensive toys.

Years ago, I experienced what happens when a new piece of computer hardware is launched without any software available for it. One of the leading lights in the Massachusetts computing world back then was the Digital Equipment Corporation, or DEC. I spent a good chunk of my first research dollars as a professor on a DEC computer highly recommended by a colleague who, I found later, used to work for DEC. It was a good machine hardware-wise, but as the months dragged on and nobody besides DEC developed any software for it, I found that I'd bought an expensive boat-anchor, and ended up having to buy a PC.

I hope such a fate does not await the XO-1, but surely the developers have thought of this problem in advance. Most of the world's effective software has been developed under the aegis of the free-enterprise system where people had to pay something for it. Maybe the children will surprise us and develop software on their own—the system is said to allow for this. I wish the XO-1 the best, but a community that benefits from computers is more than just the sum of software, hardware, training, and distribution. Time will tell, as it usually does.

Sources: The official One Laptop Per Child website (in English) is at http://laptop.org/en/. The Wikipedia article about it is at http://en.wikipedia.org/wiki/XO-1_(laptop). I learned about the project in an article by Kirk Ladendorf in the Oct. 22 issue of the Austin American-Statesman. Collier's book was reviewed in the November 2007 issue of First Things.

Monday, October 15, 2007

Copyright or Copywrong? The Ethics of Technological Multiplication

On Oct. 4, a jury in Minneapolis ordered Jammie Thomas, a 30-year-old single mother, to pay $222,000 in damages for sharing twenty-four copyrighted songs. Thomas was the target of a lawsuit filed by the Recording Industry Association of America (RIAA) and major music labels. Although music-downloading websites have been sued successfully in the past, this is the first such suit against an individual file-sharer to reach a jury verdict.

Let’s leave aside, if we can, the picture this story gives us of six large, wealthy corporations, and a trade association representing many more, all ganging up on a woman who is not likely to be able to pay these fines any time soon. It can actually happen that a poor person does something wrong enough to be fined a lot of money for it, if not sent to jail. But is that what happened here?

Thomas’s case is just one tip of a huge iceberg that is floating around in electronic media today: the fact that making essentially flawless copies of a digital original requires fewer technical resources every week. Let’s try to clarify the issues a little bit.

Even back in the Stone Age, every tribe probably had some clowns and singers that other Cro-Magnons enjoyed listening to. These prehistoric entertainers created something of value: an economic good. Elementary justice demands that the entertainers who spend time and effort practicing and performing should receive some kind of reward for their effort. In those days, it might have been an extra joint of meat from the stewpot. Whatever the reward, the performer may have insisted on it before performing. The more people his performance attracted, the more stewpots he could sample from, but before the Internet, radio, printing, or writing, his ultimate market was pretty small.

Since the invention of writing itself (probably the oldest communications technology), the reproduction of economically desirable artifacts (stories, jokes, songs, etc.) has had a technological component. But even way back at the prehistoric origins of entertainment, there were two extremes that everyone involved had to navigate between. At one extreme, the performer has an absolute monopoly: he is the only performer in the world, everybody wants to see him perform or die, and so he can charge whatever he wants. He can demand the entire wealth of the whole tribe in exchange for one performance if he wishes. This is clearly unfair to the rest of the folks, who have themselves acted unwisely in becoming such slaves to amusement.

At the other extreme, the performer himself becomes a slave: he is threatened with death if he doesn’t perform, but he gets no rewards if he does. Anybody who wants to can walk up to him and demand a performance any time, with no charge to the members of the audience. This extreme is clearly unfair to the performer, who would be better off waking up dead some day.

You’ve been waiting for the technology to come in, right? This is an engineering ethics blog, after all. Well, here it is. All that technology can do is multiply the performer’s performance in number, magnitude, impressiveness, duration, or other ways. But without the performer, that human being who originates the thing everybody wants to see, you have nothing. Printing, radio, television, motion pictures, phonographs, DVDs, the Internet, YouTube—all these things just give more people access to the performance, whatever it is.

Now, it takes a certain amount of time and money to execute this multiplication—call it the marginal resource cost. What has happened over the last few decades is that the marginal resource cost of multiplying the performance has shrunk by many orders of magnitude. When you compare what the Bell System charged a major TV network in 1955 to operate its network transmission facilities (probably the equivalent of many millions of dollars today, once you factor in inflation) with what it costs some 14-year-old kid in Casper, Wyoming to make a video and put it on YouTube, you get some idea of how these marginal resource costs have collapsed. With some exceptions, the direction the technology has moved is to make more stuff available, for everybody, cheaper. But the same cheap copying that benefits audiences also lets anyone pass along the performance without paying its creator. So if there were no copyright laws at all, you’d get a situation in which few people would bother to do anything very good that requires a lot of resources (personal or financial), because they could never recoup their investment.
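The collapse in marginal cost can be put in toy numbers. Here is a minimal sketch of the break-even arithmetic; all figures are hypothetical, chosen only to show the orders of magnitude involved, and are not drawn from actual 1955 network tariffs:

```python
# Toy model: what a performer must charge each audience member just to
# break even, given a fixed production cost plus a per-viewer
# distribution (marginal) cost. All numbers are hypothetical.

def breakeven_price(production_cost, marginal_cost, audience):
    """Per-viewer price that exactly recoups production plus distribution."""
    return production_cost / audience + marginal_cost

# 1955-style network distribution: expensive facilities per viewer.
old = breakeven_price(production_cost=100_000, marginal_cost=1.00, audience=1_000_000)

# Internet-era distribution: marginal cost collapses by orders of magnitude.
new = breakeven_price(production_cost=100_000, marginal_cost=0.0001, audience=1_000_000)

print(f"old: ${old:.4f} per viewer, new: ${new:.4f} per viewer")
```

When the marginal term vanishes, the only cost left to recover is the production cost itself, which is why expensive productions are the first casualties when copies circulate without anyone paying the creator.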

On the other hand, strong-arm tactics like the RIAA lawsuit against Jammie Thomas attempt to move things in the other direction, toward total, perpetual control of the performance by those who own it (not necessarily those who actually did it in the first place). Many people, including Stanford law professor Lawrence Lessig, think we have already gone too far in this direction, at least on paper. Copyright terms have been extended greatly in the last few years, to the point where many artists are worried that quoting or citing anything more recent than 1910 in print, music, or film will make them liable to a lawsuit. Part of this trend, no doubt, arises from a fear on the part of corporate copyright owners that if they don’t do something quick, everybody will digitize everything and just swap it around forever without anyone making a dime off any of it. These fears are probably exaggerated, and another part of the trend arises from a much simpler cause: greed.

Mixed up in all this are things like cultural traditions, expectations of private purchasers of entertainment media, technical standards and compatibilities, and many other factors which make copyright law such a happy hunting ground for lawyers. Certain acts of technological duplication in themselves should be made illegal. I don’t think anyone seriously disagrees with the principle that counterfeiting money should be against the law, even if you do it just to have some pretty pieces of paper to look at and you never intend to spend any of it. But attempts to make simple acts of technological multiplication illegal get into murky waters involving privacy, intentionality, and the tradition that what you do in your own home is your own business. The problem is as much political as it is technical, and politics, generally speaking, is not my beat. Still, there's enough engineering involved to make it worth thinking about in an engineering ethics blog.

This blog itself is an example of how nearly-free multiplication costs are used: I don’t pay to write it (except with my time and effort) and you don’t pay to read it. Still, I hope you get more than your money’s worth.

Sources: An article describing the Jammie Thomas case is at the Australian Broadcasting Corporation’s website at http://www.abc.net.au/news/stories/2007/10/05/2051724.htm?section=entertainment. Lawrence Lessig’s webpage is at www.lessig.com. And an interesting comparison between copyright law and the way magicians safeguard the secrets of their tricks appears in Tim Harford’s blog http://www.slate.com/id/2175616.

Monday, October 08, 2007

Losing By A Whisker: Lead-Free Solder and the Tin Whisker Problem

In 1998, the $250 million Galaxy IV geostationary communications satellite carrying millions of pager signals as well as the broadcast feeds of the CBS and NPR networks failed after only five years of service. Pager service wasn't restored for days and the company operating the satellite suffered considerable financial losses. Engineers determined that the problem was tiny tin whiskers that sprouted from soldered connections in the satellite's primary control processor. Because of a decision made by the European Union to prohibit the use of lead-based solder in electronics, we may see a lot more failures due to tin whiskers in the near future. How did the simple act of choosing electronic components become a complex moral issue? First, you need to understand something about tin whiskers.

When metals such as tin, zinc, and cadmium are under some kind of mechanical stress, one way they tend to relieve this stress is by sprouting tiny threads or sticks of metal called whiskers. They are very thin, much thinner than a human hair, and grow slowly over a period of months or years to a length of a few millimeters. But in the microminiature world of modern electronics, that distance is more than enough to bridge the gap between two terminals that will cause an equipment failure if shorted together. That is exactly what happened to the Galaxy IV satellite in both its primary and backup processors.
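The danger in those numbers is easy to check: a whisker a few millimeters long dwarfs the clearances inside modern electronics. A minimal sketch follows; the clearance figures are generic illustrations, not data from the Galaxy IV investigation:

```python
# Compare a plausible maximum whisker length against typical terminal
# clearances. Clearance values below are generic examples only.

MAX_WHISKER_MM = 3.0  # whiskers can reach a few millimeters over years

def short_risk(clearance_mm, max_whisker_mm=MAX_WHISKER_MM):
    """True if a gap is narrow enough for a whisker to bridge it."""
    return clearance_mm < max_whisker_mm

clearances = {
    "fine-pitch IC leads": 0.2,
    "connector pins": 1.0,
    "power terminals": 5.0,
}

for name, gap in clearances.items():
    status = "at risk" if short_risk(gap) else "probably safe"
    print(f"{name} ({gap} mm gap): {status}")
```

Almost every spacing inside a densely packed circuit board falls under that few-millimeter threshold, which is why whiskers are a system-level reliability problem rather than an occasional nuisance.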

The whisker problem was first identified in the late 1940s, and since then engineers have found several ways to mitigate or eliminate it. Adding lead to tin plating or solder typically cures any whisker issues. Until very recently, the standard solder formulation (the tin/lead alloy melted around terminals to join most electronic components) was 60% tin and 40% lead. This alloy was reasonably inexpensive, had a low melting point, and served the electronics industry well for many decades.

In 2003, the European Union enacted a directive called Restriction of Hazardous Substances (RoHS, for short). It said that by July 1, 2006, most electronic products made or sold within the EU could not contain more than a very small amount of lead, cadmium, mercury, and a few other hazardous chemicals. Since the EU is a large market, and it is not practical for the thousands of electronics component manufacturers around the world to maintain two separate production lines, one for RoHS and another for non-RoHS products, the directive created a huge amount of turmoil in the industry as companies retooled their processes to eliminate lead from their solder, interconnection wires, plating processes, coatings, connectors, and everywhere else it was used. If you look in an electronic parts catalog these days you find "RoHS-compliant" labels on many if not most products, although non-RoHS stuff is still available, including the nasty old lead-bearing solder (which I have used, incidentally, since about the age of ten with no harmful effects). In fairness to the RoHS policy, the concern is not so much that people who use the electronics products are in any immediate danger of exposure, but that the lead can cause health problems at both the manufacturing end and the recycling or disposal end. And that is an entirely legitimate concern.

But so is the problem of multi-million-dollar systems conking out because of tiny tin whiskers. A typical RoHS-compliant solder, for example, is roughly 96% tin, with most of the balance silver. Silver is not cheap, and so the alloy costs about 50% more than the lead-bearing solder. It works all right—I've used some—but there is no lead in it to prevent the tin-whisker problem. And apparently there are few if any long-term studies of this new solder formulation that tell us how likely it is that joints soldered with it will need a shave in a few years.

The RoHS directive does exempt certain high-reliability systems such as medical devices from the no-lead requirements. But as some industry spokesmen point out, this is an empty gesture, because pretty soon it will be very hard to find any non-RoHS parts, for the simple reason that the market for them will dry up. NASA, for example, has good reason to be very concerned about the tin-whisker problem, since its satellites, and above all the Space Shuttle, contain electronic systems that are old enough to vote. So far, no life-threatening failure has occurred in the Shuttle due to tin whiskers, but the Shuttle has to keep going another two or three years at least before its replacement may be available.

So what's an engineer to do? Well, the law is the law, and if your company makes or sells anything in the EU, it better comply with RoHS. As for systems that demand high reliability, there are ways around the whisker problem even if you have to use lead-free solder: wax or other impermeable coatings, proper spacing and insulating layers of other kinds, and so on. But many of these techniques are either largely untried or have problems of their own. That is what engineering is all about: solving problems. And the world will be a better place when new electronic products don't carry the burden of toxic heavy metals that they did in the past. But engineers now have to consider a new technical problem introduced by the well-meant, but perhaps technologically immature, RoHS directive. And we'll all be dealing with the consequences, perhaps in unexpected ways.

Sources: The Oct. 8, 2007 Austin American-Statesman carried an AP article by Jordan Robertson on how the high-tech industry is dealing with the challenges of tin whiskers and RoHS. Wikipedia's article "Whiskers (metallurgy)" gives a good description of the phenomenon and problems it can cause. The NASA Tin Whisker Homepage http://nepp.nasa.gov/whisker/ contains several pictures of actual whiskers and articles and presentations about the problem.

Monday, October 01, 2007

Battle of the Airways: How to Fix the FAA

Ladies and gentlemen! Your attention please! The Battle of the Airways is about to begin!

In this corner, we have The System. Hailed as a marvel of modern engineering when he debuted in the 1960s, The System has seen better days. Last week (Sept. 25, to be exact), he suffered a defeat at the hands of a failure in a telephone switch in Memphis, Tennessee. The scene was fantastic: air traffic controllers desperately punching numbers into their personal cellphones to call their cohorts in adjacent airspace control centers, because their radios went out and a good number of radar screens went blank, too. All flights were grounded within a 250-mile radius of Memphis, and it took the rest of the day for air traffic on the Eastern Seaboard to get back to what we call normal these days.

In this corner, we have ATA, the Air Transport Association. This airline trade association is ready to come out swinging, because its members pay nearly all the taxes and fees that go to support The System. But a one-engine plane flying from Ashtabula, Ohio to a landing strip in an Iowa corn field demands as many resources from The System as a 747 carrying hundreds of passengers, or more, while paying hardly anything compared to the commercial flight.

In this corner, we have NATCA, the National Air Traffic Controllers Association. They're ready to punch somebody out before it's too late, because they've slimmed down way below weight—they've lost 10% of their numbers since 9/11/01, but air traffic's increased since then. NATCA, like The System its members operate, is getting older, smaller, and more poorly paid every day, if you believe what it tells you. And why would a fighter lie about a thing like that?

And last but not least, in this corner, we have John Q. Flying Public. Bigger than ever (individually and collectively), he's not happy about sitting in planes for hours on end and having flights canceled. Something's not right, he's pretty sure of that, but he doesn't even know who to go beat up on to fix the problem.

Waiting in the wings are the referees and the bookmakers: POTUS and Congress making the rules, and politicians and lobbyists betting on the outcome (metaphorically, we hope). The once-a-decade renewal of the FAA funding law that expired on Sept. 30, 2007 is a great opportunity for all the fighters to show their stuff. The only question is, who'll be the last man standing?

. . . Fighting is not a generally recognized way to solve complex technical disputes, but it looks like that may be how the FAA gets fixed—or doesn't, as the case may be. It may not have been a coincidence that in one week, we had a serious communications breakdown in the Memphis regional air traffic center, a Presidential statement about how the airlines had better get their act together or else, and the expiration of the current funding system for the Federal Aviation Administration, or FAA.

The technical problems are pretty clear. The present system was designed when the only way to track air traffic efficiently was with centralized radar systems that treated a 707 or a flock of birds the same way: a passive microwave-reflecting object. Identification, location, and tracking were all done either by hand or eventually by computer, but the ultimate channel through which information passed was the human air traffic controller.

That system worked great through the 70s and 80s, but as traffic has increased and newer technologies such as satellite-enabled global positioning systems (GPS) have become available, the old way of doing things has become increasingly cumbersome, unreliable, and even dangerous. Near-misses in the air are not uncommon, and it was only quick action on the part of already over-stressed air traffic controllers that kept the Memphis breakdown from ending in a major tragedy.

Okay, we need to replace the system with a satellite-GPS-based automated one. Who's going to pay? Presently, most of the money that pays for the FAA's technology and staff (in good years, anyway) comes from ticket taxes, fees, and other sources which have little directly to do with the workload that each user represents. The Air Transport Association argues that the FAA is basically a utility, and that like a water or electric company it should charge by the amount of service provided. But this is not what happens. As a result, the disconnect between funding sources and funding needs has given rise to a situation that often develops in government-provided services: lack of infrastructure investment and long-term planning.
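The disconnect the ATA complains about can be sketched in toy numbers. The tax rate and fares below are entirely hypothetical, not actual FAA figures; the point is only that two flights imposing similar controller workload can pay wildly different amounts under a ticket-tax scheme:

```python
# Toy comparison of funding contributions under a ticket-tax scheme.
# The tax rate and fares are hypothetical, for illustration only.

TICKET_TAX = 0.075  # assume a 7.5% excise tax on each ticket sold

def tax_contribution(passengers, fare):
    """Total ticket tax generated by one flight."""
    return passengers * fare * TICKET_TAX

# Both flights may need comparable handling from controllers en route...
airliner = tax_contribution(passengers=150, fare=300.00)  # commercial flight
private  = tax_contribution(passengers=1,   fare=0.00)    # no tickets sold

print(f"airliner pays ${airliner:.2f}, private plane pays ${private:.2f}")
```

A usage-based fee, by contrast, would bill each flight for the controller workload it actually consumes, which is the utility-style model the ATA favors.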

How to fix it? Well, there's the good, sensible way—and the other way. The good, sensible way is for all parties involved—folks from all five or seven corners of our boxing ring, however many there are—to sit down, look at the system's needs for the next twenty years or so, figure out a big road map of how to get from here to there, and then find the money and resources to do it. This kind of thing happens all the time in private industry—the semiconductor industry, for example, has hewed closely to a roadmap that basically ensures Moore's "Law" keeps running year after year, and integrated circuits keep getting more and more complex. Airplanes aren't computer chips, but I'm talking about a planning process, not a technology.

That's the good way. The other way is to wait for a super-Memphis: something like the entire system freezing up and planes falling out of the sky, or flight delays all over the country that take a solid week to straighten out, or something as damaging to the airline industry as 9/11 was. It is my fond wish that something like this does not happen, and that the parties involved will get together and fix the problem the good way. But in a democracy, sometimes it takes a crisis to knock everybody's heads together enough to overcome differences and get things done.

Sources: A report on the Memphis breakdown can be found at the CNN website http://www.cnn.com/2007/US/09/25/memphis.air.snafu/index.html. A report of President Bush's comments on Sept. 27 about the airline industry is at http://money.cnn.com/2007/09/27/news/economy/bush_airlines.ap/index.htm. The Air Transport Association explains its view of FAA funding at http://www.smartskies.org/LearningCenter/faa_funding/default.htm, and the National Air Traffic Controllers Association explains some of its troubles at http://www.natca.net/mediacenter/press-release-detail.aspx?id=455.

Monday, September 24, 2007

Friends, "Friends," and Facebook

Last week, a lady named Sal who uses the social-networking website Facebook showed a group of older professors (including yours truly) how the system works and what her own page looks like, and answered questions about it. Someone asked her how interactions with students through Facebook compare to dealing with them live and in person. She said some students will tell her things on her "wall" or in private messages on Facebook that they would never mention in person. She finds that these students tend to be more awkward socially than most, but can open up and be quite interesting online.

This experience comes on the heels of an article by Christine Rosen, a senior editor at The New Atlantis, which is a quarterly devoted to issues of technology, ethics, and society. Rosen writes that friendship, a kind of personal interaction which has not fared that well in the modern era in the first place, may be suffering further decline as people trade the risks and uncertainties of face-to-face relationships for the reliability and controllability of online connections. If you tire of a person who's sitting in your room, we have not yet gotten to the point where you can acceptably say, "Go away, I'd rather not see you right now." But if you're reading your latest wall entries or your latest statistics on how many "friends" you have on Facebook, you can quit and do something else at any time and nobody else is the wiser—or gets their feelings hurt, either.

Facebook, of course, is a for-profit enterprise, and they are doing pretty much everything they can to increase the number of users beyond the current 34 million or so worldwide reported on Wikipedia. So it's understandable that the system is biased to encourage quantity of connections rather than quality. We've all known people who seem to collect relationships as others collect stamps or matchbook covers. To such people, you count mainly as a number, not as a unique individual.

To a computer, everybody counts only as a number, and that is only one way that computer-mediated interactions tempt us to objectify other people. If I know Joe Schmo mainly as a particular bizarre emoticon with a peculiar expression, the next time I think of Joe Schmo, the first thing that is likely to come to mind is that weird emoticon, not a living, breathing human being with his own history, likes, dislikes, hopes, and fears. But it was Joe who chose that emoticon, and for all I know, he likes for me to associate it with him, just as certain dramatic personalities in the past went around wearing capes and waxed moustaches for effect. In a larger and larger marketplace of potential friends, people will adopt more and more attention-grabbing disguises in order to get any traffic at all.

So in one sense, there is nothing new going on here. The reality of social networks—the thing you can diagram by writing names on a big sheet of paper and drawing lines between any two people who know each other—has been around since before history began. For people who get charged up by social interaction, joining Facebook may be like putting wings on a wildcat. For those of us (myself included) whose main sensation after meeting a boatload of new people is usually just a headache, Facebook's attractions may be harder to grasp. But for everybody who uses it, whether they're out simply to increase their number of friends or whether they are seeking the deepest and most profound relationship possible, the fact that their interactions are mediated by technology set up a certain way will slant the nature of all those relationships in a direction that favors quantity over quality.
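The diagram of names and lines described above is simply a graph, and the number Facebook foregrounds, the friend count, is nothing more than a node's degree. A minimal sketch (the names and links are made up):

```python
# A social network as a graph: people are nodes, friendships are edges.
# The "friend count" is just each node's degree; it says nothing about
# the quality of any single tie. Names and links are hypothetical.

friendships = [
    ("Alice", "Bob"), ("Alice", "Carol"), ("Alice", "Dave"),
    ("Bob", "Carol"), ("Eve", "Alice"),
]

def friend_counts(edges):
    """Degree of each node in an undirected friendship graph."""
    counts = {}
    for a, b in edges:
        counts[a] = counts.get(a, 0) + 1
        counts[b] = counts.get(b, 0) + 1
    return counts

print(friend_counts(friendships))
```

Alice's count of 4 and Eve's count of 1 measure quantity only; nothing in the data structure distinguishes a lifelong confidant from a stranger clicked on once.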

There will be some people who try to abuse the system: stalkers, con artists, and so on, though according to Sal, Facebook is notably free of most such problems so far. And there will be more people who simply overuse it, like the students who neglect their homework and crash university servers when they buzz around on Facebook for hours on end. But like the Internet itself, Facebook does put more people in touch with each other, in some fashion, than would otherwise be the case, or at least it looks that way so far.

All the same, I wonder whether someone like C. S. Lewis would have found much of a use for Facebook. As a student at Oxford he was fond of meeting a few intimate friends, nearly always male, with whom he would go on long walks in the hills and forests, discussing anything and everything, from what kinds of clothes they were made to wear when they were boys to the meaning of life. He also wrote letters, but it is clear from the journal he kept as a young man that the heart and soul of his friendships (many of which he maintained through most of his life) was conversation: sitting in a room together and talking. In a time when telephoning was mainly local and telegrams were used only when needed, he clearly regarded letters, phone calls, and other means of communicating with those not present as secondary substitutes for the real thing. I can't help but think that there is some deep-seated bias in human beings that favors in-person conversation over all other forms. These other forms can be learned, used to mutual benefit, and abused as well. But if a person begins to prefer them over being in the same room with someone else, I also can't help but think that something is awry.

Sources: Rosen's article "Virtual Friendship and the New Narcissism" appears in the Summer 2007 issue of The New Atlantis, p. 15. C. S. Lewis's journal of the 1920s was edited by Walter Hooper and published as All My Road Before Me (HarperCollins, 1991).

Monday, September 17, 2007

Toying with Safety

Anybody who knows anything about the toxicity of lead paint has more sense than to put it on a kid's toy. But somehow, millions of toys painted in China carried detectable amounts of lead across the oceans and possibly into the mouths of children all over the U. S., and in other parts of the world too. Even small amounts of lead can affect a child's neurological development, and so the hue and cry over this problem is justified, by and large. I'd like to look at two questions regarding this issue: (1) how did it happen, and (2) how serious is it, really?

A complete story of the whole sequence of events is probably not available now and may not be until months or years of investigation are completed. But based on available evidence—namely, tests that show lead in paint and a knowledge of where the toys came from—I can imagine the following scenario. Government regulation in the People's Republic of China is a sometime thing. About the only activity you can count on being universally suppressed everywhere in the country is political protest. But when it comes to industrial development, economic shortcuts, and evasion of taxes and other government regulations, enforcement seems to be patchy in a way that depends on where you are and who you know. Just to give you an idea of how strange things are over there compared to the U. S. business environment, one of the largest owners of factories and other industrial facilities is the army. A Chinese friend of mine who now lives in Hong Kong described the situation to me a few years ago as "the wild wild West."

Given such a free-wheeling environment, it isn't surprising that an ambitious toy-factory owner looking to save a few yuan on his supply costs would buy paint from a source who would either lie about its chemical makeup, or simply not know. If it looked good and stayed on the toys, the paint was fine as far as he was concerned.

Although Mattel Inc. has come across looking like the bad guy in many news reports, to their credit they appear to have taken most of the right actions once they became aware of the problem. That does leave the question of how thorough their product safety testing was, if millions of toys slipped through it before the first lead was found. Clearly they were not testing as extensively as they are now, but now CEO Bob Eckert realizes his company is fighting for survival. In a video on the company website, he apologizes abjectly and shows laboratory scenes of people in white coats taking samples from toy trucks to test for lead content. Clearly, for a while someone was using lead on toys made in China and imported by Mattel, and nobody who could do anything about it knew. This was not so much an engineering problem as a management and information problem, but engineering is also about management and information. All the technical smarts in the world won't produce safe products if an organization can't use those smarts to protect consumers, and itself, from harm. Mattel's current vigilance, along with the possibility of tightened Federal regulations, will probably clear up this problem eventually, or at least make it much less likely to recur.

That being said, how serious was it? While no child should be exposed to lead in his or her environment, the paint problem itself has not caused any known fatalities. This was not the case in a parallel episode that took place in Europe in the 1800s. Around 1820, the technology of printing and paper manufacture advanced enough to make wallpaper a popular new interior decorating option. One of the most-used dyes in the new industry was something called Paris green, based on the chemical copper acetoarsenite. Bedbugs were a big problem back then, and people who bought green wallpaper noticed a side benefit: bedrooms where they'd put up the wallpaper never had problems with bedbugs. Now and then, especially in damp weather, the wallpaper gave off a slight garlicky odor, but standards of sanitation back then weren't what they are now, and that might have been a selling point too compared to other things you could smell in a house around that time.

Then rumors began to surface that people who lived in bedrooms with green wallpaper often got a mysterious illness and eventually died. Statistical epidemiology was in its infancy back then, but something looked fishy enough to the Prussian government that by 1838, it prohibited the use of poisonous substances in wallpaper. But most other countries shrugged off the issue and the mystery continued until 1897. In that year a chemist named Gosio showed that in damp weather, the starch in wallpaper paste encouraged the growth of a mold that turned the copper arsenate in green wallpaper into a gas we now know as trimethylarsine. It smells like garlic and will kill you if you breathe enough of it. That was enough to put an end to the use of Paris green in wallpaper for good, although it continued to be sold as an insecticide for years until newer organic compounds replaced it.

The moral from that little story is that ignorance of the technical principles behind a safety problem can slow down its solution for decades. We've known about the hazards of lead paint for many years, so ignorance was no excuse in this case. All the same, if you compare Mattel's problems with the green-wallpaper story, I'd say it's like comparing a fender-bender to a five-car freeway pileup that resulted in a fire and eight fatalities. No, you shouldn't even have fender-benders, but there are worse things that can happen than fender-benders.

Sources: The Mattel recall has been reported extensively at sites such as MSNBC.com, where an AP story appeared on Aug. 14 at http://www.msnbc.msn.com/id/20254745/. Mattel CEO Bob Eckert's apology can be viewed at http://www.mattel.com/safety/us/. I am indebted to a geochemistry instructor named Moore (possibly Johnnie Moore) at the University of Montana, whose course notes at http://www.umt.edu/geosciences/faculty/moore/G431/lectur17.htm contain the green-wallpaper story.

Tuesday, September 11, 2007

The Spy Under the Hood: Friend or Foe?

Most people have heard of the "black-box" data recorders that commercial airliners carry in case of a crash. Designed to survive high impact and long immersion under water, these bulletproof devices carry a record of vital statistics about the plane's speed, altitude, and control settings up to the point of impact, and have proved invaluable in countless crash investigations. What you may not have heard is that your own car very likely carries a small-scale version of the same technology. And if you ever have a wreck, the information in your car's black box might be used against you—or in your favor.

The technical name for the device is an Event Data Recorder. It typically preserves information on vehicle velocity, throttle settings, and even steering-wheel positions for the last five seconds or so before an impact. It is an outgrowth of the sensor systems originally developed to operate air bags. As more and more of the typical automobile's operation has become digitized and mediated by computers, engineers found that it would be little added trouble to store certain data in a non-volatile format (technically, an EEPROM, or electrically erasable programmable read-only memory) that can be read out even after a wreck, with the proper equipment. Already the systems have proved useful to both prosecutors and defendants in civil and criminal cases involving car wrecks.
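That "last five seconds or so before an impact" behavior is what software engineers call a circular, or ring, buffer: new sensor samples continuously overwrite the oldest ones, so memory always holds just the most recent window of data. Here is a minimal sketch in Python; the sample rate, data fields, and trigger are hypothetical illustrations, not taken from any actual EDR specification:

```python
from collections import deque

SAMPLE_RATE_HZ = 2          # hypothetical: two samples per second
WINDOW_SECONDS = 5          # keep only the last five seconds

class EventDataRecorder:
    def __init__(self):
        # a deque with maxlen silently discards the oldest sample
        # whenever a new one pushes the buffer past capacity
        self.buffer = deque(maxlen=SAMPLE_RATE_HZ * WINDOW_SECONDS)

    def record(self, speed_mph, throttle_pct, steering_deg):
        self.buffer.append((speed_mph, throttle_pct, steering_deg))

    def freeze(self):
        # on airbag deployment, this snapshot would be written to EEPROM
        return list(self.buffer)

edr = EventDataRecorder()
for t in range(20):                     # ten seconds of (accelerating) driving
    edr.record(60 + t, 50, 0)
snapshot = edr.freeze()
print(len(snapshot))                    # only the last 10 samples survive
print(snapshot[0][0], snapshot[-1][0])  # speeds 70 .. 79
```

The design choice worth noticing is that the recorder never decides in advance which five seconds matter; it simply always has the latest five on hand when the airbag sensors say an event has occurred.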

In Austin, Texas, evidence from Daniel Talamante's GMC pickup was used against him to prove that he was going 85 mph before he slammed into another car, killing two children. He was convicted of murder. On the other hand, the system worked in favor of a woman in Connecticut who was facing conviction for negligent homicide resulting from a collision she had one winter day after crossing a main-road center line. The data recorder showed that her vehicle's speed was well below the posted limit and suggested that she drove onto a patch of ice that caused the accident. As a result, the charges were reduced.

What is your reaction to the idea that your car could essentially turn government witness against you? From one point of view, the situation is not much different from a policeman using a radar gun to clock your speed. In both cases, law enforcement uses technology to monitor aspects of your driving. But if the data recorder's evidence is used against you, there is the added little sting that you paid for it yourself.

In my very limited research into this issue, it doesn't appear that evidence from the recorders is being abused or manipulated. Rather, as with most technical evidence, both defendants and plaintiffs use it, depending on which side the data favors. And in some cases, no doubt, the data is equivocal, consistent with a variety of interpretations.

The case of the automotive event data recorder is only one example of a trend that will likely grow in the future: the prospect that more and more aspects of our lives, from what websites we view, to where we go, to what we say, will get digitized and recorded somewhere. This trend will no doubt lead to great changes, just as the advent of mechanical sound and motion-picture recordings led to a revolution (or series of revolutions) in the entertainment industry, journalism, politics, and so on.

The extreme civil libertarians among us will object to any and every encroachment on what they see as the right to privacy, and such concerns should not be ignored. Some states such as California require that purchasers of new cars be notified that a black box is inside the car they are buying. This has probably had little effect but to add another sheet to be signed to the growing pile of paper that has to change hands every time you buy a car, but at least it is an effort to let people know.

There is something to be said for the principled objection that a person should not be compelled to pay for a gizmo that can potentially record evidence that is not in his or her own interest. Some people even try to disable the device, but this is not a good idea, because its function is tied in with the airbag system. In damaging the data recorder, you might disable your airbags—or even set them off, which would be quite entertaining, to say the least. I'm in favor of people at least knowing that there is such a device in most new cars, but going beyond that to a right to disable them might be a little much. And who knows?—maybe some folks drive a little more carefully knowing that every turn of the wheel could be used against them in a court of law.

On the whole, this technology looks pretty benign. In the New Testament, we read that ". . . rulers are not a terror to good works, but to the evil." What's said about rulers can be applied to this kind of technology as well. If you're a good driver, or even the innocent victim of adverse circumstances, your black box's evidence can only help, it seems. And if you're a drunk driver or otherwise misbehaving, it can provide one more witness against you, which most people would agree is a good thing.

Sources: A column by Ben Wear in the Sept. 10, 2007 Austin American-Statesman discussed event data recorders. The story of the woman who hit the patch of ice appears at http://www.clickondetroit.com/automotive/3786478/detail.html. A good technical description of the kinds of data recorded, written by an employee of a company that makes software to download the data, is at http://www-nrd.nhtsa.dot.gov/edr-site/uploads/Auto_Black_Box_Data_Recovery_Systems_by_TARO.pdf. And the New Testament quotation is from the letter of St. Paul to the Romans, chapter 13, verse 4.

Monday, September 03, 2007

Ray Guns Revisited

Back in February, I did a two-part series on non-lethal weapons. The first piece was about a system whose formal name is the Active Denial System. Despite the fact that the name sounds more like what politicians do when they get in trouble, the system in question is a rather elegant technical achievement. It consists of a microwave generator probably similar in principle to your microwave oven. Only instead of making waves that are about five inches long (the standard microwave-oven wavelength), these waves are only about a tenth of an inch long. If you're dealing with water-bearing substances such as pot roasts or people, it turns out the depth of penetration of microwaves relates to the wavelength. So while you can cook a whole pot roast that's several inches thick in your microwave, these shorter microwaves used by the Active Denial System only penetrate 1/64 of an inch into human skin. But if you pack about a kilowatt or more of short-wavelength power over an area of only a few square yards, the heat generated in that thin layer of skin with only a two-second exposure goes up to 130 degrees F. And that's uncomfortable. So uncomfortable, in fact, that the Air Force scientists who developed the thing believe it will be a sure-fire (so to speak) way to disperse unruly crowds. Better than tear gas, because it leaves no residue or long-lasting health effects (they believe). And better than rubber bullets or any of the other accepted non-lethal technologies in present use.
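The arithmetic behind those wavelengths is just the wave relation f = c / λ. A quick back-of-the-envelope check in Python, using the rough inch figures above (published reports put the actual system near 95 GHz, which works out to a wavelength of about an eighth of an inch):

```python
C = 3.0e8                      # speed of light, meters per second
INCH = 0.0254                  # meters per inch

def freq_ghz(wavelength_in):
    """Frequency in GHz for a wavelength given in inches."""
    return C / (wavelength_in * INCH) / 1e9

oven = freq_ghz(5.0)     # household microwave oven: about 2.4 GHz
ads  = freq_ghz(0.125)   # Active Denial System: roughly 95 GHz
print(round(oven, 1), round(ads))
```

The forty-fold jump in frequency is what shrinks the penetration depth from cooking-a-pot-roast scale down to the outermost fraction of a millimeter of skin.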

Well, that's the idea, anyway. But as with so many technical solutions that appeal to technologists, the wider world raises objections that the scientists maybe didn't consider. According to a recent Associated Press report by Richard Lardner, the Active Denial System has run afoul of bureaucratic hesitation. After the first major conflict in Fallujah between insurgents and U. S. troops in 2003, the head of the Air Force Space Command, Gene McCall, sent an urgent email to the chairman of the Joint Chiefs of Staff, Gen. Richard Myers, saying that the Active Denial System could take care of just such problems. In 2006, Marine Corps Major General Robert Neller requested procurement of eight commercialized versions of the same system, called Silent Guardians. But Col. Kirk Hymes, chief of the Defense Department's Joint Non-Lethal Weapons Directorate, says that one reason the system hasn't been adopted for field use is because of fears that it might be perceived as a form of torture, raising specters of Abu Ghraib. There are also outfits such as Human Rights Watch that don't want to see the system deployed. They complain that the testing and legal reviews on which the Pentagon bases its claims that the system is legal under international law and medically harmless are classified and can't be independently verified.

When I first heard about this system, I was tickled technically, and less thrilled from an ethical point of view. I honestly don't know if it would be a good idea to use this thing in a real battle or not. Given (a big given, in the case of many readers) that we ought to be fighting in Iraq in the first place, my suspicion is that the thing would be helpful up to a point. The point would be when the target population figures out a way to defend themselves against the device. I won't give aid and comfort to the enemy by spilling beans right here, but it turns out that an item that would provide pretty certain defense against the system is available in any U. S. supermarket. (In Iraq, it might not be so easy to come by.) And there's the expense factor, which nobody in the know wants to discuss. If it were as inexpensive as a Humvee with chrome trim, you can bet they'd be bragging about it. As I mentioned in my February column, these things are probably not cheap at all—a lot more costly than a conventional weapon of comparable size. But every new piece of hardware is expensive until you start making lots of them and get economies of scale.

Independently of the questions about the weapon's safety, cost, and so on, what bothers me more than anything about this whole episode is the organizational schizophrenia it reveals. Here one part of the Defense Department has been spending $60 million over twelve years to develop a potentially promising new weapon, and wants to see it used. And some commanders are eager to try it. But some other part of the Pentagon successfully throws roadblocks up and says, "Well, not yet, not quite, we're not sure. . . ." Now even in well-run organizations you get different parts running off in different directions, and stopping a thing that's gone on too long is sometimes the right thing to do. But it does seem to me that if there were a more unified spirit—I don't know what other word to use—in the military establishment today, either the project would have been rejected at the outset, at a savings of millions of dollars, or else everybody would have been in favor of it from the start and it would be out there today zapping terrorists and doing whatever other damage it can do. There's an adage that says something like, "husbanded bullets are no bullets at all." Meaning, roughly, if you go into a battle worried more about how many bullets you have than about winning, you're likely both to run out of bullets, and lose. The abstract ethical question about the Active Denial System is one that we simply lack enough information to decide, at least in public. But what is very plain is that the internal squabbling the system has created is a sign of a deeper malaise within the military that can do no good at all.

Sources: My previous blog on this subject appeared on Feb. 6, 2007. The AP story on the Active Denial System by Richard Lardner ran in the Austin American-Statesman for Sept. 2, 2007.

Monday, August 27, 2007

Hackers and Slackers: Hotz's iPhone Hack

Thanks to George Hotz, 17, of Hackensack, New Jersey, we all know how to hack into an Apple iPhone to make it work with at least one cellphone carrier besides AT&T. Of course, not everybody has the combination of manual dexterity, software skills, and access to knowledgeable friends that Hotz brought to bear on the problem. As soon as George got one of the newly released phones in June, he set to work with some fellow online hackers to crack the iPhone's secrets. A week or so ago, he succeeded, and newswires everywhere carried reports about his feat and interviews with him. Despite comments from some of his "slacker" friends that he wasted his summer, I emphatically disagree.

I must confess a fond feeling of spiritual fellowship with Hotz. When I was his age, I spent my summers on similar techie quests that mystified most of my friends and relatives, although none of my exploits gained the publicity Hotz's did. He is no stranger to techno-fame, having competed successfully in Intel's Science Talent Search several times. All the same, we know that Apple and AT&T are probably not thrilled to hear that at least a few people can use their equipment in a manner contrary to their intentions. Is what Hotz did ethical? For that matter, what are the ethics of hacking in general?

From all reports, Hotz is clearly not trying to profit from his endeavors, at least not directly. He saw the hack simply as a technical challenge to overcome, a test of his own hacker skills, and after hundreds of hours of work, he and his online buddies succeeded. The fact that using the iPhone with a network other than AT&T goes against the spirit if not the letter of the law (at least as interpreted by AT&T and Apple) is peripheral to the main issue, which was whether Hotz could make the thing work the way he wanted it to, not the way its makers intended.

Hacking can be viewed as a game. The hacker pits his (or occasionally her) brainpower against whoever or whatever made the objective to be hacked—an iPhone, a Defense Department database, or a bank's credit card system. The rules are of two kinds: technical and moral. The technical rules are determined by the existing structure of the objective, which includes software, hardware, and physical and mathematical laws. The moral rules have to be internalized—there are no moral signposts out there that have to be obeyed in the sense that the law of gravity has to be obeyed. Hotz has expressed no interest in running a business hacking iPhones, but now that his hack is on the web, somebody else may do just that. And at least indirectly, Hotz would bear that responsibility.

Believe it or not, this matter relates to a distinction made by the philosopher Alasdair MacIntyre between what he calls internal goods and external goods. In essence, MacIntyre asks the question, "Given a practice which requires attention, the development of skill, and devotion over a period of time, what are the goods that we seek in return?" That is, if one wants to be a doctor, or an engineer, or a priest, one has to devote years of life to learning how to do these things well. If human beings seek the good, what are the goods that we seek in learning how to do such practices?

MacIntyre classifies such goods into two categories. Goods internal to a practice are examples of excellence judged according to the rules of the practice itself. A good internal to the practice of surgery is a new and more effective way of doing a gall-bladder operation, for example. People who are really "into" a skill such as surgery, music, or even iPhone hacking get a thrill from doing the practice well and thus creating goods internal to the practice. On the other hand, goods external to the practice are things like money, adulation, promotions, and the other incentives that organizations use to get professionals to do their practice for them. Clearly, there are many ways to get goods external to a practice, but to achieve goods internal to a practice, you have to do the practice itself well.

All right. It looks to me like Hotz's main motivation was a good internal to the practice of hacking. Hacking the world's most famous cellphone was a truly elegant hack, and Hotz did it. The fact that he's not skipping college to go make lots of money hacking cellphones shows that he is not unduly attracted by goods external to the practice of hacking, as some may be.

MacIntyre develops these concepts of goods and practices in the context of his ethics of virtue, which he bases on Aristotle's ideas. Since nobody can put things quite like MacIntyre, I'm going to quote his definition of virtue in its entirety, from his book After Virtue: "A virtue is an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices and the lack of which effectively prevents us from achieving any such goods." To do his hack, Hotz had to be persistent, patient, attentive to detail, communicative with his hacker friends, ingenious, and largely self-educated (there are no official hacker schools, to my knowledge). All these are virtues, in MacIntyre's terms, which helped Hotz do his hack. Were he to be tempted by external goods—the money, the fame of being blatted over MSNBC, etc.—he might turn his skills to nefarious purposes. It's interesting that Hotz wants to major in neuroscience—"hacking the brain!" as he puts it in one report. And if he achieves his dream, even partly, of "hacking the brain," there is no need to expand here on what dangers and promises that goal holds.

What Hotz does next depends on not only his technical skills, but the kind of person he is and the kind of circumstances he finds in college and beyond. You may recall that as a teenager, Bill Gates engaged in a similar kind of hacking with a "blue box" that allowed him to make free long-distance phone calls, provoking the ire of what was then the monolithic Bell System. Smart, effective people generally have something of the rebel in them, and suppressing such tendencies too much would lose us some good talent. But judgment comes with age and experience, and let's just hope that in the future, Hotz and his friends use their abilities for internal goods—and the good in general.

Sources: An MSNBC story about Hotz's achievement is found at http://www.msnbc.msn.com/id/20424880/. The Austin American-Statesman carried a reprint of a story about him by Martha McKay of The Record on Aug. 27, 2007. Alasdair MacIntyre's After Virtue (2nd edition 1984) is published by University of Notre Dame Press.

Monday, August 20, 2007

Skype's Wipe-Out

Just because a surfer wipes out every now and then, you don't jump to the inevitable conclusion that he's a bad surfer. And if a relatively new technology suffers a massive failure that puts it out of action for a few days, that isn't necessarily a reason to give up on it, condemn it, or conclude that it will never work. All the same, the recent collapse of the peer-to-peer function of what one source calls the world's most popular Internet telephone service has some lessons about reliability, the Internet, and using things for what they were designed for in the first place.

First of all, what is Internet phone service? The form provided by Skype works like this. With some inexpensive hardware such as a headphone and microphone, you can log on to Skype and call any of the millions of its other subscribers without incurring a per-use or per-minute fee. My understanding is there is a flat monthly fee, but that's it. Your phone call is routed directly over the Internet, completely independently of landline telephone wires or cellphone networks. So as long as the party you wish to call is on Skype too, you can say good-bye to concerns about talking too long on long distance calls, using up your cellphone minutes, and all those other worries.

Well, the other day (Thursday, August 16, to be exact), all Skype users woke up to a rude surprise—Skype was down worldwide. Despite initial concerns that it might have been a malware attack, the latest news is that a software glitch caused it. From the description posted on Skype's official website by staffer Villu Arak, Skype inadvertently caused the problem itself. Apparently, they sent out a routine software update to every user's computer, and the update told those computers to restart. All those restarted computers all over the world then tried to log on to Skype again at more or less the same time. This massive pile of logon requests should have been handled by Skype's system, but due to a software defect, it wasn't. The end result was that the whole thing came unraveled and took a couple of days to put back together.
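The failure mode Arak describes is what network engineers call a "thundering herd": millions of clients all retrying at the same instant. One standard defense—and I offer it here only as a general technique, not as anything Skype actually uses—is randomized exponential backoff, where each client waits a random, progressively longer interval between retries, so the login requests spread out in time instead of arriving as one spike:

```python
import random

def backoff_delays(attempts, base=1.0, cap=300.0, rng=random):
    """Per-attempt wait times in seconds: a random draw under an
    exponentially growing ceiling, capped so no client waits forever."""
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, ceiling))  # "full jitter"
    return delays

rng = random.Random(42)        # fixed seed so the sketch is repeatable
delays = backoff_delays(6, rng=rng)
print([round(d, 1) for d in delays])
```

The randomness is the essential part: if every client backed off by the same deterministic schedule, the whole herd would simply thunder again a few seconds later, in unison.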

I don't know whether anyone uses Skype as their main form of telecommunications. Probably there are a few people in special situations in remote areas, but only a few. If there were, they were high and dry without a phone for the time that Skype was down. Probably most users take advantage of it as one of several communications options, an inexpensive alternative, possibly within a company where a central authority can enforce the use of Skype rather than conventional telecomm systems that cost more. But the convenience and low cost come at a price.

Technologies are not just hardware, or hardware and software, but a combination of that physical stuff and ideas, aspirations, and habits in the minds of billions of users. As new technologies come into being, to be successful they have to fit into the existing complex of human activity and the material environment, while changing both. In the process, existing technologies are often adapted for uses that their original designers never thought of.

Internet phone service is a case in point. If you were going to set up a worldwide computer network from scratch and design it mainly to provide telephone service, it would look like nothing that exists today except in a few laboratories. Why is that?

The closest thing to it is what is operated by the old-line telephone companies—the Bell System babies, or teenagers, or however you want to describe them. Their fiber-optic based networks are full of compromises because they've had to keep handling their huge amounts of traffic ever since the dawn of the telephone age. This requirement to use existing hardware rather than throwing everything away, starting from scratch, and going broke in the process has left them with a material burden that is matched by the regulatory burden which prevents them from doing a lot of things that they'd like to do. Because of the burdens of history, neither their physical environment nor their legal environment is what they'd like if they were starting over from the beginning.

The Internet was built basically from scratch over the last two or three decades, so in principle it comes closer to the ideal. But it wasn't designed for rapid, reliable, two-way audio signal transmission. You can force internet protocols to deliver up something that resembles an old-fashioned analog phone conversation, but it's difficult, it wastes bandwidth, and you're basically making the system do something it wasn't initially designed to do. Fortunately, with enough bandwidth a lot of hard things become easy, which is why Skype can be as successful as it generally is. Still, Skype has the huge problem that not everybody in the world is on it. On the other hand, everybody with a telephone of some kind can in principle dial anyone else with a phone, and that fact makes the conventional international telecomm system that much more valuable. Every person added to that system makes it incrementally more valuable to everyone else already on the system. This is why communications networks tend to be dominated by a few large players, or only one.
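This observation—that every new subscriber makes the network more valuable to everyone already on it—is often formalized as Metcalfe's law: among n users there are n(n-1)/2 possible conversations, so a network's potential value grows roughly as the square of its size. A two-line illustration, treating each possible pair of users as one unit of value (a crude simplification, of course):

```python
def possible_pairs(n):
    """Number of distinct two-party conversations among n users."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, possible_pairs(n))
# a tenfold increase in users yields roughly a hundredfold increase in pairs
```

That quadratic growth is the mathematical reason the biggest network wins: a newcomer like Skype starts out with far fewer reachable parties per subscriber than the incumbent phone system it competes with.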

And then there's the reliability problem. Since the public telecomm systems have gone heavily software-intensive, they have had their share of software glitches. But decades of conservative engineering practice have taught them to be hyper-cautious about changing anything. I once spoke with a woman who was a software engineer with one of the major "baby Bells" in an office near Chicago. She said that in order to make a small change in one line of code in the master operating software for their network, she had to put in about six months of work testing, checking, getting authorizations, and so on, before she could make the change. Only large, established organizations have the resources to take such pains, but it pays off in reliability.

Maybe Skype will learn from this experience, and spend a little more time testing new software. As it happened, the problem they had was more of an inconvenience than a disaster, except maybe to their bottom line. But as we rely more on Internet-based communications systems for things like medical records and emergency communications, reliability will move up the list of desirable features closer to the top. Let's just hope that the Internet can stand the strain.

Sources: The San Jose Mercury-News carried an article by Sarah Jane Tribble on Skype's outage at http://www.siliconvalley.com/news/ci_6656717. Mr. Arak's comments can be found on the Skype website under the title "What happened on August 16" at heartbeat.skype.com.

Tuesday, August 14, 2007

Emergency Communications: FCC To the Rescue

So much of engineering ethics deals with bad news that I'm glad to report some potentially good news for a change. At the end of last month, the U. S. Federal Communications Commission did something that may vastly improve the way first responders across the nation can communicate in large-scale emergencies. But to appreciate this good news, you need to hear some old bad news about the sorry state of emergency communications today.

During the World Trade Center attack on Sept. 11, 2001, dozens of firefighters died, and later studies showed that a contributing factor was the gridlock in radio communications that happened that day. Policemen, firemen, ambulance drivers, and other emergency organizations need fast, reliable communications to save lives of both disaster victims and their own. But in the World Trade Center collapse and during Hurricane Katrina, people died needlessly because emergency radio communications systems broke down.

First responders have used two-way radios in this country since at least the 1930s, but unfortunately, the basic design plan of the technology has improved only marginally since then. Radios are smaller, lighter, and more durable, and computer technology has made some improvements, but many if not most emergency radio systems operated by city, state, and federal jurisdictions are basically analog point-to-point links. If phone companies had stayed with this model, we would still have about ten mobile telephones per metropolitan area instead of the millions of cell phones we have today.

Why haven't emergency communications systems gotten on the cellphone bandwagon? The reasons are complex, but here are two. First, most first responders are local: town fire departments, regional sheriff's offices, etc. Cellphone-like wireless networks require vast investments in infrastructure (towers, switches, computers, etc.) and are inherently large-scale operations, covering vast geographic areas. Second, the regulatory environment reflected traditional technology—the Federal Communications Commission (our traffic cops of the airwaves) up to now has not updated the frequency spectrum allocations to allow broadband wireless technology in this sector, even if there was anyone around who wanted to do it. As a result, we have a system that works okay most of the time, but tends to collapse in a crisis such as 9/11 or Hurricane Katrina, just when you need it the most.

Well, I am happy to report that at least the FCC is getting its act together in this area. On July 31, FCC Commissioner Michael Copps issued a statement accompanying some rule changes that promise to improve the situation in emergency communications in a big way.

You may be old enough to remember TVs with tuner dials, like cheap radios have even today. One dial covered the VHF channels 2 to 13, and the other dial was labeled UHF and went from 14 to 83. Well, now that digital TV is coming along like a freight train, the new smaller frequency allocations it requires have freed up what amount to UHF channels 52 to 69, some 108 MHz of spectrum space. The FCC is going to auction this valuable natural resource off in various ways, but it has reserved a chunk of it for (drum roll, please) a national interoperable public-safety system.
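The 108 MHz figure follows directly from the U. S. television channel plan, in which each broadcast channel occupies a 6 MHz slice of spectrum:

```python
CHANNEL_BANDWIDTH_MHZ = 6          # each U.S. TV channel is 6 MHz wide
first, last = 52, 69               # the reclaimed UHF channels

num_channels = last - first + 1    # counting both endpoints
total_mhz = num_channels * CHANNEL_BANDWIDTH_MHZ
print(num_channels, total_mhz)     # prints 18 108
```

Eighteen channels of 6 MHz each is an enormous amount of spectrum by the standards of the analog radio systems first responders use today, which is why the public-safety set-aside is such a big deal.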

Now what does that mean? If all goes according to plan (and the plan, which involves both public and private funding, is by no means certain to work), we will go from creaky old analog radio systems that basically don't let firemen from Town A talk to policemen in Town B right next to them, to a broadband wireless cellphone-like system that will let anybody talk with anybody else they need to, and will have enough reserve capacity to handle the largest emergencies likely to happen. In his prepared statement, Commissioner Copps regretted that his fondest dream of a fully federally funded system wasn't going to happen, but apparently he has high hopes that a commercial outfit will step up to the plate and bid for the spectrum that can be used to achieve these ambitious goals.

I have not studied the details of the FCC plan, but I do know the present hodge-podge of emergency communications systems has big problems. I congratulate the FCC on at least trying to do something about it, and hope that Commissioner Copps' dream becomes reality. So if you have any old analog TVs that you're going to have to scrap come February of 2009 (when analog TV is scheduled to fade into the sunset), comfort yourself with the thought that at least some of the spectrum thus freed is going to be used for a good cause. In my experience, those high-band UHF channels never came in very well anyway.

Sources: Commissioner Copps' July 31, 2007 statement can be obtained from the FCC website (http://www.fcc.gov). For more about the problems with present emergency communications systems, see my article "We've Got to Talk: Emergency Communications and Engineering Ethics," scheduled for publication in the Fall 2007 issue of IEEE Technology & Society Magazine.