Monday, July 26, 2010

Hackers in Hardware: Will It Happen and Can We Stop It?

Most famous cases involving engineering ethics start with a headline, a crisis, or a tragedy. In such situations, we can take steps to make sure it’s less likely to happen next time only after we figure out what went wrong. But every once in a while, farsighted engineers can actually anticipate a major new problem before it happens and convince people to prevent it without anyone getting hurt. This is obviously the best way to go if we can manage it. The potential problem of hardware hackers is a case in point.

As John Villasenor explains in the current issue of Scientific American, hardware hacking is the planting of a malicious circuit in hardware, typically deep within the incredible complexity of an integrated circuit (microchip). ICs are so complicated nowadays that the design of most of them is comparable to the design of a large building or an oil refinery. Because the system is so large and diverse, pieces of the design are farmed out to numerous subcontractors. Given all that complexity, a hardware-hacking scenario could come about in the following way.

Suppose some criminals want all the cell phones sold by a given firm to quit working on a certain date in the future. They infiltrate one of the subcontractors that helps design a critical IC in the new phone models, and slip in a circuit that monitors the time code and suddenly ties up the system's communication bus once the blow-up date arrives. Since it's impossible to test for every conceivable situation a phone might encounter, the likelihood that this circuit will pass unnoticed into the final design is pretty good. A few weeks before the blow-up date, the criminals in charge of this trickery send a blackmail letter to the company, telling them what will happen and offering them an encrypted key to prevent the disaster—for, say, a billion dollars.
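To make the mechanics of that scenario concrete, here is a minimal Python sketch of the kind of time-triggered logic it imagines. Everything in it, the trigger date, the disarm key, and the function names, is hypothetical and my own illustration; a real hardware hack of this kind would be a handful of hidden logic gates in the chip's design files, not software.

```python
# Hypothetical sketch of a time-triggered "bus hog" like the one described above.
# In real life this would be a few hidden logic gates in the IC design, not Python,
# but the trigger logic would be the same.
import datetime

TRIGGER_DATE = datetime.date(2012, 1, 1)   # hypothetical blow-up date
DISARM_KEY = 0xDEADBEEF                    # hypothetical key the blackmailers would sell

def bus_arbiter(requests, today, key=None):
    """Grant the communication bus to the next requester, unless the hidden
    trigger has fired, in which case the bus is never released and the phone
    appears to lock up."""
    trojan_fired = today >= TRIGGER_DATE and key != DISARM_KEY
    if trojan_fired:
        return None                        # no block ever gets the bus again
    return requests.pop(0) if requests else None

# Before the trigger date the phone behaves normally...
print(bus_arbiter(["radio", "display"], datetime.date(2011, 6, 1)))  # -> radio
# ...afterward every bus request is silently starved unless the key is supplied.
print(bus_arbiter(["radio", "display"], datetime.date(2012, 6, 1)))  # -> None
```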

That particular scenario would be enacted by criminals, but political sabotage or terrorism could also inspire such machinations. Villasenor and his fellow researchers claim there are several ways to prevent such attacks.

One, favored by the Pentagon, is a kind of security-clearance check for every organization involved in the chip design. While this may be practical for certain costly military ICs, it doesn’t seem like a plan that will work for commercial designs, where vendors change at the last minute and are spread out all across the globe in a variety of jurisdictions.

A better idea Villasenor mentions is to build inspector or security circuits into every IC to monitor for suspicious behavior that would indicate the presence of a hardware hack. While this will slightly reduce the space and speed available for the IC's main tasks, the appeal of knowing your new IC is protected against hardware hacking might make the tradeoff worthwhile.
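As a rough software analogue of what such an inspector circuit might do, here is a short Python sketch that scans a trace of bus ownership and flags any block that holds the bus suspiciously long, which is exactly the symptom of the trojan in the scenario above. The threshold and signal names are my assumptions, not anything from Villasenor's article.

```python
# Rough software analogue (assumed details) of an on-chip "inspector" circuit
# that watches for one block monopolizing the communication bus.
MAX_HOLD_CYCLES = 1000   # hypothetical limit on consecutive cycles one block may hold the bus

def bus_monitor(owner_trace):
    """Scan a per-clock-cycle trace of bus ownership and report every cycle
    at which some block exceeds the hold limit."""
    alerts = []
    current_owner, hold = None, 0
    for cycle, owner in enumerate(owner_trace):
        hold = hold + 1 if owner == current_owner else 1
        current_owner = owner
        if hold > MAX_HOLD_CYCLES:
            alerts.append((cycle, owner))  # a real circuit might force a reset here
    return alerts

# Example: an unidentified block grabs the bus at cycle 500 and never lets go.
trace = ["cpu"] * 500 + ["unknown"] * 2000
print(bus_monitor(trace)[0])   # -> (1500, 'unknown')
```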

There are two questions in my mind about this whole situation.

First, how real is the threat of hardware hacking? Villasenor says that there have been no significant incidents so far, but of course there might be unknown time bombs out there right now, ticking away. I think one reason this kind of thing hasn't happened yet is that, unlike viral malware, hardware generally has a paper trail that can be traced back to its place of origin. Being fingered as the guilty party in a hardware-hacking case would mean certain death for a firm, even if it was unaware of what was going on at the time. Nobody would ever want to buy circuits from that company again.

Another reason, at least in the case of political terrorism of the Islamofascist variety, is that companies which design ICs are generally not found in places where organizations like the Taliban have significant influence. The extreme contrast between the state of Israel, which has dozens of high-tech firms turning out world-class technology, and the surrounding Arab nations, which have to buy nearly all the technology they have from other countries, is an example of this. So unless terrorists manage to convince individual designers in critical firms to implant hardware hacks, this kind of threat seems unlikely.

The second question is whether, if manufacturers develop security measures that allegedly prevent hardware hacks, people will pay the extra price for them. However small the effect, the security features will add to the price and subtract from performance, and it then becomes a question of perceived value added. Some customers with heightened security concerns, such as the military and government agencies, might be more willing to buy such chips than commercial customers such as computer makers. Of course, once a major hardware hack actually caused damage, the feature would sell itself. So you have the perverse situation that the best incentive to buy a hardware-hack-secure IC is to have a major problem occur with hardware hacking.

Unfortunately, that may be what has to happen if hardware-hack prevention is to amount to much more than a few academic papers and articles. Let’s hope the cure arrives well before the disease, at least in this case.

Sources: John Villasenor’s article “The Hacker in Your Hardware” appeared in the August 2010 issue of Scientific American (pp. 82-87).

1 comment:

  1. Sounds a lot more like a dog chasing its own tail. Aside from the "Quis custodiet ipsos custodes?" problem, wouldn't it be easier to just exploit a vulnerability in hardware that's already been deployed? This is not the low-hanging fruit. But comparatively speaking, it is easier and much more likely than infiltrating a vendor and embedding malicious software that may or may not be included in a particular piece of hardware.
