Sunday, July 24, 2011

Stuxnet and the Future of Cyberwarfare

Gunpowder led to guns, electronics led to electronic warfare, and now we can expect cyberspace to breed its own versions of armed conflict. This month, Wired Magazine published an online tale that rivals any James Bond flick in its twists, turns, intrigue, and drama. It’s the story of Stuxnet: the first computer virus explicitly designed to carry out physical sabotage against a target of international significance. And it probably worked.

There’s no room here to do justice to the whole story, but the essentials are these. In June of 2010, a computer security firm in Belarus got a call to investigate a persistently rebooting computer. The cause turned out to be an unusual virus that exploited what is called a “zero-day” vulnerability: one that the hackers themselves had discovered, and that neither the software maker nor the antivirus firms knew about yet. As both the Belarus firm and investigators at Symantec studied the virus, which came to be known as “Stuxnet,” they grew more intrigued, because it appeared to be a large, sophisticated virus designed to look for a particular kind of software made by Siemens. This software operates programmable logic controllers (PLCs): industrial minicomputers that directly interface with electromechanical gear such as pumps, valves, and, not least, uranium-enrichment centrifuges.
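For readers who have never run into a PLC, what follows is a minimal sketch of the scan loop such a controller runs over and over: read the sensors, compute a correction, drive the outputs. It is written in Python purely for illustration (real PLCs are programmed in ladder logic and similar industrial languages), and the speeds, limits, and function names are invented.

    # Minimal, purely illustrative sketch of a PLC-style scan loop.
    # Real PLCs use ladder logic or related industrial languages; the
    # "motor" here is just a simulated variable so the sketch runs standalone.
    import time

    TARGET_SPEED = 1000.0    # desired rotor speed in Hz (invented number)
    MAX_SAFE_SPEED = 1200.0  # rated maximum the hardware tolerates (invented)

    actual_speed = 0.0       # state of the simulated motor


    def read_speed_sensor() -> float:
        """Stand-in for reading the rotor speed from a real sensor."""
        return actual_speed


    def set_drive_frequency(hz: float) -> None:
        """Stand-in for commanding the frequency converter that drives the motor."""
        global actual_speed
        actual_speed = hz    # the simulated motor follows the command instantly


    def scan_cycle() -> None:
        speed = read_speed_sensor()
        # Nudge the speed toward the setpoint, clamped below the safe maximum.
        command = speed + 0.2 * (TARGET_SPEED - speed)
        set_drive_frequency(min(command, MAX_SAFE_SPEED))


    if __name__ == "__main__":
        for _ in range(50):          # a real PLC repeats its scan loop indefinitely
            scan_cycle()
            time.sleep(0.01)
        print(f"rotor speed settled near {read_speed_sensor():.0f} Hz")

The only point of the sketch is that the PLC sits directly between software and physical hardware, which is exactly what made it such an attractive target.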

After about a dozen cyber-sleuths had spent the equivalent of several man-weeks on the problem, they determined that Stuxnet spread by infecting USB drives until it found its way into a facility running the targeted Siemens software. Once it found its target, it would wait silently until a certain date, then suddenly drive a motor speed far beyond its rated maximum, all the while feeding fake signals to the control room that made everything look normal. The operators at the targeted uranium-enrichment facility would have no idea anything was wrong until their centrifuges blew up.
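To make that mechanism concrete, here is a deliberately simplified sketch of the “logic bomb” pattern just described: behave normally while recording what normal looks like, then, after a trigger condition, drive the hardware past its limits while replaying the recorded readings to the operators. This is a conceptual illustration only; the real Stuxnet payload lived in modified Siemens PLC code, and the trigger date, limits, and function names below are invented.

    # Deliberately simplified illustration of the logic-bomb pattern the
    # investigators described: behave normally while recording, then after a
    # trigger date, drive the hardware outside its limits and replay the
    # recorded "normal" readings to the operators. Purely conceptual; it has
    # no relation to the real Stuxnet payload, which lived in Siemens PLC code.
    import datetime
    import itertools

    TRIGGER_DATE = datetime.date(2010, 4, 1)   # invented trigger date
    MAX_SAFE_SPEED = 1200.0                    # invented rated maximum

    recorded_normal = []                       # "normal" readings captured beforehand
    replay = None                              # iterator that replays those readings


    def sabotaged_scan_cycle(read_sensor, drive_motor, report_to_control_room):
        """One scan cycle of a hypothetical compromised controller."""
        global replay
        speed = read_sensor()
        if datetime.date.today() < TRIGGER_DATE:
            # Before the trigger date: behave normally, while quietly recording
            # what "normal" looks like so it can be replayed later.
            recorded_normal.append(speed)
            drive_motor(speed)
            report_to_control_room(speed)
        else:
            # After the trigger date: command a speed far beyond the rated
            # maximum...
            drive_motor(MAX_SAFE_SPEED * 1.4)
            # ...while the control room sees only the old, reassuring numbers.
            if replay is None:
                replay = itertools.cycle(recorded_normal or [speed])
            report_to_control_room(next(replay))

The essential trick is that the deception lives inside the controller itself, below anything the operators can see from the control room, which is why the fake readings were so convincing.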

Six months earlier, in January of 2010, International Atomic Energy Agency personnel reviewing data from security-monitoring cameras at an Iranian nuclear facility at Natanz noticed that over a period of only a month or two, the operators had replaced over 1,000 centrifuges, far more than routine maintenance would require. So far, this is the most direct evidence that the Stuxnet virus was at least partially successful. Iran has obvious reasons for not giving out many details, and whoever developed the highly sophisticated Stuxnet virus has even less motivation for coming forward and admitting to it. But evidence within the virus itself points to either the U.S. or, more likely, Israel as the probable source of the malware.

This story bristles with so many ethical issues I don’t know where to start. For one thing, how does a company with worldwide branches in countries at cyberwar with each other treat information that could potentially ruin a planned cyberattack if it were disclosed? Symantec, where much of the deciphering was done, did not stop its employees from publishing essentially everything they found out about the virus almost as soon as they figured it out. This is the customary way cybersecurity firms work, and so far it seems to be the best way to stem the ever-flowing tide of malware that such companies exist to fight. One of the principal engineers involved, Liam O Murchu of the firm’s Culver City, California office, said that the one thing that might have made him hesitate to publish was evidence giving “100 percent attribution who was behind it.” But because no such evidence emerged, the firm went ahead with its announcements.

In the event, the investigators figured out Stuxnet only after it had apparently done most of its damage. That is hardly reassuring to those of us who worry about cyberattacks on PLC-controlled infrastructure such as power grids, water-delivery systems, gas mains, and so on. The resources needed to develop Stuxnet, although substantial, are estimated at less than a million dollars. What it took besides money was cleverness, some auxiliary secret information probably known only to a government security operation, and lots of guts. None of these commodities is in short supply in various places around the world, so the fact that Stuxnet got as far as it did is a cautionary tale for everyone who has an interest or stake in these matters, which these days means nearly everyone.

Another issue this raises is where cyberattacks fit on the moral spectrum of war. In a way, a cyberattack designed to do nothing more than disable a plant is the best kind of weapon: no one gets killed, there’s no collateral damage to speak of, and you strike surgically at exactly what you want to take out. If you compare the consequences of a Stuxnet-style attack to something cruder, such as dropping a bomb on the whole facility, the cyberattack looks a lot better when judged by the criteria of just-war theory: it is specifically targeted, it causes no unnecessary civilian deaths, and it can be proportionate to the situation that provoked it.

But by the same token, Stuxnet is now common knowledge among those whose interest it is to guard their own targets of military importance and to attack the enemy’s. This lesson will be learned, and there is a very good chance that we will see something like Stuxnet happen again. Only next time it may not be an Iranian nuclear facility. It could be a U.S. power plant, or a German steel mill, or any number of other places. We have taken a long first step down the cyberwarfare road with Stuxnet, and there is no telling where it will lead.

Sources: The excellent article by Kim Zetter, “How Digital Detectives Deciphered Stuxnet, The Most Menacing Malware in History,” appeared on July 11, 2011, in Wired’s online edition at http://www.wired.com/threatlevel/2011/07/how-digital-detectives-deciphered-stuxnet/all/1. The article also mentions a widely reported but apparently unverified 1981 incident in which the U.S. Central Intelligence Agency reportedly destroyed a natural-gas pumping station in the old Soviet Union with malware.

1 comment:

  1. I heard rumors at one point that Stuxnet was actually created by someone inside Siemens at the request of German and possibly US governments to sabotage Iran: http://www.presstv.ir/detail/144770.html
    I can't imagine that Siemens knowingly built this in, but it's plausible to think that someone familiar with the software went rogue.
    Do you think it's ethical for Software Engineers to knowingly build a tool for sabotage like this? Can they claim the same innocence as aerospace engineers who build missiles?
