Back in January, I blogged about the Conficker, or Downadup, worm that had spread to millions of computers worldwide. Conficker is a worm that is intended to form "botnets" of computers owned by unsuspecting users who have no idea that their machines have been taken over for (usually) nefarious purposes. Since then, Conficker has continued to spread, and its developers have managed to stay a few steps ahead of the growing team of computer-security experts who are trying to foil it.
A recent New York Times article describes how the "Conficker Cabal," a team of leading security specialists from a variety of private and governmental organizations, has tried to frustrate the worm by pre-empting the Internet domain names through which it receives instructions for its botnets, originally a list of only 250 or so. The Conficker authors foxed the experts by modifying the program so it can now draw on about 50,000 addresses from which to receive its nefarious instructions, making the problem of combating it much harder. Even the U.S. military doesn't seem to know what to do. The situation grows more urgent as April 1 approaches, which is evidently the date on which the bots in the botnet will report for Conficker duty. But what that duty might be is a matter of speculation, ranging from a harmless April Fool prank to a severe attack on Internet sites of major importance, or even the entire Internet.
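To see why the jump from 250 to 50,000 names matters, it helps to know the trick involved: the worm and its authors each run the same date-seeded algorithm, so both can compute today's list of rendezvous domains without ever communicating. The sketch below is purely illustrative, not Conficker's actual algorithm (its real generator was reverse-engineered by researchers and is considerably more involved); the hashing scheme and domain format here are my own invented stand-ins.

```python
import hashlib
from datetime import date

def generate_domains(day, count, tlds=(".com", ".net", ".org")):
    """Deterministically generate pseudo-random domain names seeded by the
    calendar date. Anyone who knows the algorithm -- the worm's authors or
    the defenders who reverse-engineered it -- gets the identical list."""
    domains = []
    for i in range(count):
        # Hash the date plus a counter so each domain in the day's list differs.
        digest = hashlib.md5(f"{day.isoformat()}-{i}".encode()).hexdigest()
        # Map the first ten hex digits onto lowercase letters for a hostname.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
        domains.append(name + tlds[i % len(tlds)])
    return domains

# Defenders can pre-register or block a day's list in advance -- feasible
# for 250 names per day, but far harder when the list grows to 50,000.
todays_list = generate_domains(date(2009, 4, 1), 250)
```

The asymmetry is the point: the attacker need only control one domain on the day's list, while the defenders must deny all of them, so multiplying the list size by 200 multiplies the defenders' workload, not the attacker's.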
I'm trying to think of another case in which a high-tech system of international scope has been turned from good to evil purposes. It's not that hard. The Sept. 11, 2001 attacks on the World Trade Center used atoms, not bits, but the idea was similar: take a complex technology that involves large amounts of power and divert it to harmful purposes. Conficker lacks the element of surprise that 9/11 carried, but the level of planning and expertise required is comparable. Nuclear energy is another ongoing example. The beneficial use of nuclear energy for peaceful power reactors carries with it the constant hazard of diversion of nuclear fuel and knowhow to rogue regimes who want nuclear weapons.
A question we could ask that ties all these cases together is this: to what extent should engineers who develop a new technology take into account the evil purposes to which it could be applied? I'm not talking about accidental hazards, but intentional misuse. I can't help but think that the original developers of the Internet were not thinking too heavily along these lines when they came up with the protocols that they did. Obviously, the Internet is generally one of the greatest success stories of the twenty-first century, and the problems we have run into on it so far have not led to fatalities on a wide scale. But as we depend on it more and more, and as attacks grow more sophisticated, that may change.
I have mentioned previously the need for engineers to use moral imagination, but mostly in the context of imagining how a given technology employed for its intended purpose can affect various groups of people. This is not always an easy thing to do, and it takes determined effort and a kind of thinking outside the usual engineering box to do it. But it often pays off in terms of new insights about potential problems that can be avoided, sometimes with simple low-cost fixes such as notifications or minor changes.
What I haven't considered in such musings is the need for a kind of twisted or evil imagination. It seems you should think not only about how a technology will affect people if it is used as intended, but also about what happens if some evil person comes along and tries to do really nasty things with it. This line of thinking has gone farther in computer technology than in most other forms of technology, partly because attempts to defeat security measures have been a part of computer programming almost since the beginning. There are several reasons for this.
Much more than other kinds of technology, computer technology is homogeneous: there's the human programmer or user, and the machine with its software. And the prize is simple: control. While control is only one aspect of the problem with hijacking other kinds of technology, control is the major part of the battle with computer hacking. Once you have control, computers will do your bidding with entire indifference to your moral values. And computer technology is the supreme example of fungibility: a general-purpose computer can literally do almost anything, limited only by resources. So once you have control, there's no particular problem in making the botnet or whatever do your evil will.
All the same, when programmers and computer scientists create new technologies, they build into them realms of possible and impossible actions. Because of the way the system is structured, there are certain things that it is physically impossible to do with the Internet. It's too late now, but wouldn't it be nice if one of those impossible things was to create a botnet and do evil things with it? Hindsight is generally sharper than foresight, but there are always new technologies coming along, and so there is still a chance to get it right, or more nearly right, in the future.
Of course, if you're clever and wicked enough, you can take almost any technology and do something bad with it. This doesn't mean that designers should simply drop any project that could conceivably be used for malicious acts. Engineering is all about compromises and tradeoffs. All I'm suggesting is that when you can think of an obvious nefarious use for a new technology, it would be a good idea to take some small steps toward building in preventive measures that would make it harder to use in a bad way.
In the meantime, let's hope that nothing worse happens on April 1 than a few bad practical jokes here and there.
Sources: I last blogged about the Conficker worm on Jan. 16, 2009. The New York Times article "Computer Experts Unite to Hunt Worm" can be found at http://www.nytimes.com/2009/03/19/technology/19worm.html.
A Note About Broken Links: Whenever I give a source URL link, I make sure that it is working at the time I write the blog. Over time, some of these links have become broken because the source website has taken down the article or for other reasons. I do not have the resources to go back and repair old links, so if you are interested in a source URL, my suggestion is to click on it as soon as you see it show up. If you are interested in a link but find it is broken and can't locate the material any other way, you can email me at firstname.lastname@example.org. I sometimes keep local file copies of the source material referred to, and if I have done so I will be happy to provide you with a copy if the original URL is broken.