Monday, August 25, 2008

RFID and Privacy: A Spy In Your Pants?

A few days ago, I found out that my university ID card has an RFID chip in it. A new floor of our building has labs equipped with RFID locks on the doors: little black boxes that light up red or green when you pass the right card next to them. I figured I'd have to go get some special new key fob or other to use the locks, but I was told, "Just hold your ID card near it." I did, and open sesame! I didn't even have to take the card out of my wallet. Some guys, like technicians with an armload of equipment, will just do the "butt-pass"—twist around so their back pocket gets close enough, and they're in.

This discovery aroused mixed emotions. I'm glad I don't have to go get any special new card, but on the other hand, why didn't anybody tell me that chip was in there? And what else could it be used for?

Turns out that these are not idle questions. In a special issue on privacy, this month's Scientific American carries an article by Katherine Albrecht, who heads an organization called Consumers Against Supermarket Privacy Invasion and Numbering (CASPIAN, for short). We are entering an era in which RFID chips—little inexpensive transponders that spit out data-bearing radio waves to a properly equipped interrogation unit—are spreading like fleas on a dog. Think of RFIDs as a kind of wireless barcode on steroids. Barcodes have to be out in the open to be scanned, and the data they convey is limited to the few digits encoded in the bars. But you can attach an RFID chip to an entire pallet of goods in a warehouse, and as a forklift carries the pallet past an interrogator in the doorway, the inventory control system learns that everything on the pallet has gone out the door—no manual scanning.
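If you like to see these things spelled out, here is a little sketch (in Python, with made-up tag numbers and item names) of what the inventory side of that doorway interrogator might do: every tag that answers as the pallet rolls by gets marked as shipped, with nobody scanning anything by hand.

    # Hypothetical sketch of a doorway RFID interrogator updating inventory.
    # Tag IDs and item names are invented for illustration.
    inventory = {
        "TAG-0001": {"item": "garden hose", "status": "in warehouse"},
        "TAG-0002": {"item": "paint, 1 gal", "status": "in warehouse"},
        "TAG-0003": {"item": "step ladder", "status": "in warehouse"},
    }

    def doorway_read(tags_in_range):
        """Simulate the interrogator energizing every tag within range;
        each tag simply replies with its stored ID."""
        return [tag for tag in tags_in_range if tag in inventory]

    def record_shipment(tag_ids):
        """Mark every item whose tag answered the interrogator as shipped."""
        for tag in tag_ids:
            inventory[tag]["status"] = "shipped"

    # A forklift carries a pallet bearing two tagged items past the doorway.
    seen = doorway_read(["TAG-0001", "TAG-0003"])
    record_shipment(seen)
    print(inventory)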

The financial and logistical advantages of this sort of thing are obvious to shippers, warehousemen, and supermarkets, and in fact to retailers of almost anything. So RFID chips are popping up in a lot of places.

So where's the beef? One of the places they're showing up is in identification documents such as passports, private and institutional ID cards (such as my university card), and even driver's licenses. Several states, including Washington, Arizona, Michigan, and Vermont, are making such "enhanced" driver's license cards available. Is there any potential drawback to this? It turns out that the type of technology most states are adopting is the same basic kind that is used in warehouses. So anybody with the right equipment can read the data off the chip—according to Albrecht, there is no encryption involved, unlike a different RFID standard prevalent in Europe, which does encrypt the data.
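To see why the lack of encryption matters, here is a toy sketch (again in Python, and purely illustrative, not any real RFID standard) contrasting a tag that answers anybody with one that demands proof of a shared key before giving up its data.

    # Toy contrast between an open tag and a keyed (challenge-response) tag.
    # This is an illustration only, not any actual RFID protocol.
    import hmac, hashlib, os

    class OpenTag:
        def __init__(self, data):
            self.data = data
        def respond(self):
            # Any reader that asks gets the data.
            return self.data

    class KeyedTag:
        def __init__(self, data, key):
            self.data = data
            self.key = key
            self.nonce = None
        def challenge(self):
            # The tag hands the reader a fresh random challenge.
            self.nonce = os.urandom(8)
            return self.nonce
        def respond(self, reader_proof):
            # Data is released only if the reader proves it knows the shared key.
            expected = hmac.new(self.key, self.nonce, hashlib.sha256).digest()
            return self.data if hmac.compare_digest(expected, reader_proof) else None

    key = os.urandom(16)
    open_tag = OpenTag("license data")
    keyed_tag = KeyedTag("license data", key)

    print(open_tag.respond())               # any passing reader gets the data

    nonce = keyed_tag.challenge()
    print(keyed_tag.respond(b"\x00" * 32))  # a reader without the key gets nothing

    nonce = keyed_tag.challenge()
    proof = hmac.new(key, nonce, hashlib.sha256).digest()
    print(keyed_tag.respond(proof))         # a reader holding the key succeeds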

Well, engineers like to think of worst-case scenarios, so here goes my attempt. Say I have an enhanced driver's license with an RFID chip in it. Driver's license numbers are no big secret anymore—you're typically asked for them any time you write a check. So here I am, wandering around the hardware store, and without speaking to a soul, without picking up a single item, an RFID sensor can figure out who I am and what aisle I'm in, call up my complete purchase record at that store (and maybe at other kinds of stores too, for all I know), and work out exactly what kind of stuff they ought to try to sell to me. I don't know about you, but I'm not sure I like this idea.
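And here is a sketch of how the store's side of that scenario might work (the license number and purchase records are invented): once an aisle reader captures an ID off an unencrypted chip, matching it against a customer database takes almost no code at all.

    # Hypothetical store-side lookup keyed on a captured driver's license number.
    purchase_history = {
        "TX-12345678": ["lawn fertilizer", "sprinkler heads", "hose timer"],
    }

    def pitch_for(license_number, aisle):
        past = purchase_history.get(license_number, [])
        return ("Shopper " + license_number + " is in the " + aisle +
                " aisle; suggest items related to: " + ", ".join(past))

    print(pitch_for("TX-12345678", "plumbing"))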

Now the way you react may say something about how old you are. Younger people, to whom YouTube, MySpace, and Flickr are just another part of life, tend to have different notions about privacy than older people do. You might feel pleased or special if a salesperson comes up and offers you stuff that is specially tailored to your past purchases. My main encounter with this kind of thing so far is on Amazon.com, which is constantly making wild guesses as to what kind of books I'd like to read, based on the books I've bought in the past. Most of the time its offers are either laughable or annoying, but every once in a while they hit on something good. All in all, though, I would not miss this feature a bit.

We are talking about what some would term an invasion of privacy. Privacy is a right without much of a historical pedigree, it turns out. The Wikipedia article on it says that the first serious consideration of a legal right to privacy was published in the U. S. only in 1890. Before then, it was so hard to duplicate and spread information that the question of personal privacy rarely arose. But now, with a single keystroke, you can spread intensely private information to millions of people worldwide. And with an unencrypted RFID chip on your person that holds data such as your social security number, driver's license number, or (as an RFID card that China is reportedly trying to implement would carry) your religion, ethnicity, employment record, and how many kids you have, why, you've turned into one of those pathological bean-spillers that late-night bus riders fear to encounter—the kind of person who will dump their most intimate secrets onto the first stranger who won't get up and run away. I don't know about you, but I don't want to be that kind of person, either by word of mouth or electronically.

What is the alternative? Effective regulation is one: direct limits on the kind and amount of data that can be put on RFID cards, a requirement that the data be encrypted somehow, or even simpler measures such as labels telling consumers that products carry RFID tags. Trouble is, the public awareness of this technology is so low that labels would probably just arouse confusion or fear. A little fear can be a good thing. But knowledge is even better. Consider whether you should inform yourself more about RFID technology, and make up your own mind about what kind of information you want to be giving away without ever knowing about it.

Sources: Katherine Albrecht's article "RFID Tag—You're It" appears in the September 2008 issue of Scientific American. CASPIAN operates websites www.spychips.com and www.nocards.org. Also see my blog "I Spend, Therefore I'm Spied Upon?" for Jan. 11, 2007.

Monday, August 18, 2008

Electronic Voting: Why or Why Not?

In case you hadn't noticed, we're going to elect a president here in a few months, and that means voting. Eight years ago, the humble machinery used to register ballot counts got dragged into the national spotlight when the Florida presidential election count uncertainties cast doubt on who would be sitting in the Oval Office on Jan. 20, 2001. Reports of hanging chad and other voting-system flaws motivated many local governments (which are the entities that deal with the nitty-gritty of running elections nationwide) to invest in shiny new all-electronic voting systems. But in recent years, questions have been raised about the reliability and security of these new systems, and some municipalities are reportedly going back to paper ballots (although these are still counted by computers).

What are the basic ethical issues in engineering a voting system for use by the general public? And why can't we seem to make up our minds as to which way is best?

First of all, who is involved? Every citizen meeting the legal qualifications to vote has the right to exercise that franchise. So to begin with, you have voters whose right to express their judgment in a democracy is guaranteed by law. Balloting nowadays is also secret (it didn't use to be, incidentally, even in the U. S.), so there has to be some way to ensure privacy in the voting booth.

Next you have the people being elected. They have a right to a reasonably accurate count. Not a perfect count: if we threw out the results of every election that had even one detectable flaw, we'd still be living in a monarchy. But since most elections are not photo-finish ones decided by only a few dozen votes, perfection isn't required, only accuracy that is better than the margin of victory in most cases.

Other interested parties include the election officials, the vendors selling the hardware and software used for voting, and way back in the back rooms of those firms, the engineers who design and develop the voting systems. Though these engineers are invisible to nearly everybody else, they obviously play a key role.

Now that we have identified the main parties to the matter, what can go wrong? Just to make things interesting, let's compare the latest touch-screen voting systems with the totally manual paper ballots that were used, for example, in the 1948 election that put Lyndon B. Johnson in the Senate.

There is a strong, almost intuitive, bias toward paper records in law and politics. Paper and ink are just as technological as computers and software—it's just that paper is an older and more familiar technology. It is integrated into our ways of thinking in ways that digital technology isn't, at least not yet. Besides that, paper systems can be easier to understand, and transparent in a way that software, for instance, is not. Unless a document is written in Urdu, say, or legalese that only a lawyer can decipher, you don't need an expert to read paper, but you do need one to tell what's going on in software.

All that familiarity with paper was of no avail when certain shenanigans went on in certain South Texas voting precincts back in 1948. Johnson biographer Robert Caro has shown that as many as 10,000 ballots in the Democratic primary that effectively decided the race were highly suspect. And in an election that was won by only 87 votes, that was more than enough to determine the outcome. The point is that, given enough corrupt officials and political pressure in the right places, paper ballots are no sure-fire defense against fraud. But everybody knows that.

With all-electronic voting, people worry not only that a malevolent hacker working for one party will tamper with the system software to deliver enough votes to push that party's candidate to victory, but also that mistakes or malfunctions will go undetected, because without paper records there is no way for the average non-technical election worker or politician to check the results. The only people who can even come close to doing that are the folks who can look at the software innards of the machines, and even they can't always recover a blow-by-blow description of everything that went on during the voting.

A recent New York Times editorial pointed out three instances in the last few years in which all-electronic or partly electronic voting systems cast doubt on the results. The editorial writers came out with a proposal that is also being seriously studied by engineering researchers: voter-verified paper record systems (VVPRS for short). In these systems, each voter gets to see a piece of paper that reproduces his or her choices, and if the paper doesn't match the voter's desired choices, the voter can start over and do it right. Only when the voter is satisfied does the ballot get recorded, both electronically and on good old cellulose.
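Here is a rough sketch of that workflow (the function names are invented, and real systems are far more involved); the key point is that nothing gets recorded until the voter approves the printed record.

    # Simplified voter-verified paper record loop; function names are hypothetical.
    def capture_choices_on_touchscreen():
        return {"President": "Candidate A", "Senator": "Candidate B"}

    def print_paper_record(choices):
        print("PAPER RECORD:", choices)

    def voter_approves():
        answer = input("Does the paper record match your choices? (y/n) ")
        return answer.strip().lower() == "y"

    def record_ballot(choices):
        # Only now is the vote stored, both electronically and on paper.
        print("Recorded electronically and retained on paper:", choices)

    while True:
        choices = capture_choices_on_touchscreen()
        print_paper_record(choices)
        if voter_approves():
            record_ballot(choices)
            break
        # Otherwise the voter starts over; nothing has been recorded yet.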

Of course, printing out a bunch of paper in addition to doing electronic ballot recording takes away some of the advantages of the digital system, but it's no different than in other areas where computers have found use. I remember the day when Bill Gates said that computers would eventually make the paperless office possible. As I recall, stocks in paper companies plummeted the next day, but the finance types needn't have worried. If anything, we have more paper to deal with than ever, now that it's so easy to print professional-looking documents at the touch of a button. But I digress.

Paper, electronics, white and black stones—fundamentally, voting is a non-material process mediated by physical communication systems, and the physical medium used doesn't much matter if the will of the people is adequately expressed through it. Integrity, good will, and common sense make it work pretty well most of the time, which is all you can expect of human systems. The big scandal about U. S. elections is not the technology, but the fact that so many people pass up the opportunity to vote. Don't let that be true of you this November.

Sources: The New York Times editorial appeared on July 31, 2008 at http://www.nytimes.com/2008/07/31/opinion/31observer.html. A paper describing a study of a VVPRS electronic voting system by Nirwan Ansari and others at the New Jersey Institute of Technology appeared in IEEE Security and Privacy for May/June 2008, pp. 30-39. And LBJ's South Texas ballot tricks are described in Robert A. Caro's excellent multivolume biography The Years of Lyndon Johnson.

Monday, August 11, 2008

Free Rides on the MBTA: MIT Hackers and the Law

Does the principle of freedom to share technical information about computer system vulnerabilities mean that you can tell folks how to get free rides on Boston's MBTA? A federal judge doesn't think so. And the way all this came about raises some interesting questions in engineering ethics.

A bunch of students from the Massachusetts Institute of Technology spent some time finding security flaws in the subway system: things like doors and turnstile boxes left unlocked and ways to duplicate the magnetic-stripe and RFID cards to get a free ride. That they did so is not surprising: any time you put a lot of super-competitive, technologically savvy kids in a pressure-cooker environment like MIT, they're going to seek recreational relief in activities that will showcase their expertise. But then they went further: they documented their exploits in an 87-slide PowerPoint presentation and entered it in the annual Defcon convention in Las Vegas.

Now I'll be frank that I've never attended a Defcon, but I can imagine the atmosphere: lots of under-30 guys trying to impress each other with their computer prowess amid the partying and general high jinks that Las Vegas encourages. A perfect place, you would think, to brag about hacking the MBTA. Well, the Defcon organizers thought so, because they put the MIT students' talk on the schedule and distributed it on the proceedings CD handed to all registrants. Then the MBTA lawyers found out about it and went to court to block the talk. The federal judge's restraining order blocked the talk itself, but copies of the CD found their way onto the Internet, and the presentation is now roaming freely in cyberspace.

According to a lawyer for the Electronic Frontier Foundation, an organization defending the students, they planned to omit certain key information that would have made it easy for anyone hearing the talk to get free rides. Of course, what is key information to some people is a trivial exercise for others, but we'll never know now, because the talk scheduled for Sunday wasn't delivered.

Let's consider the students to be software engineers—they are acting that way, whether or not they have their degrees yet. As software engineers, they discovered numerous flaws and security breaches in the MBTA's system of controlling access to subways. What should they have done?

The MBTA claims that the students never gave the organization a chance to fix the problems. Instead, the students went straight to Defcon with their findings. You must admit the MBTA has a point, but on the other hand, if the students had shown MBTA officials their talk first and then waited until the problems were fixed to present it in public, it would have taken the edge off, to say the least. And large municipal outfits such as the MBTA are not well known for being able to turn on a dime. The students might have all graduated and gotten real jobs before it was completely safe to talk about what they did back in their young, free undergrad days, and by then it would be ancient software history, not current events.

Back thirty years or so ago, when "computer security" meant little more than making sure the door to the mainframe computer room was locked, a computer firm approached students at my alma mater, Caltech, with a new operating system and explicitly asked them to try to hack it. The company figured that if the Caltech junior whizzes couldn't break the system, nobody else was likely to, either. Perhaps the MBTA should be grateful for the free consulting work the MIT students did, if not for the way it found out about it.

It's hard to think of a way this situation could have been handled that would have left everybody happy. If someone with diplomatic skills had approached the MBTA with an early copy of the talk and asked its help in tuning it so that it wouldn't spill all the digital beans but would still make the important points, the MBTA might have refrained from calling out the lawyers. On the other hand, sometimes it takes the sting of surprise publicity and the ensuing embarrassment to prod sluggish bureaucracies into action. You can bet that copies of the talk are being studied by MBTA engineers already, whether or not the agency pursues the legal actions it has initiated.

Anyway, happiness isn't necessarily the goal of engineering ethics. And sending around instructions on how to get a free subway ride is not in the same league as, for example, propagating directions on how to blow up subway cars. Still, it seems that the students could have taken a little more care to consider how the MBTA was going to view things. And even if they didn't do so this time, they'll have the experience to draw on later in life, when they look back on their wild undergrad days and remember how they got the MBTA on their backs over a hack they tried to show at Defcon.

Sources: The San Jose Mercury-News carried an AP article about the incident at http://www.mercurynews.com/ci_10163740?source=rss. The Electronic Frontier Foundation currently features the case prominently on its website at www.eff.org.

Monday, August 04, 2008

Guarding the Guardians

Trust is a fragile thing. But it's also the mortar that holds organizations together. Two ongoing news items have brought to mind the critical role trust plays in engineering and what can happen when it's betrayed.

Shortly after the Sept. 11, 2001 attacks, envelopes containing a white powder that turned out to be anthrax spores showed up in the offices of several Congressmen and elsewhere, killing a total of five people and shutting down an entire Congressional office building for a time. The FBI investigation of the incidents progressed largely out of public view until a scientist at the U. S. Army Medical Research Institute of Infectious Diseases (USAMRIID, for short) named Bruce Ivins committed suicide last week. Although much remains to be revealed about the situation, it appears that a recently developed genetic test has linked the anthrax spores used in the 2001 attacks to anthrax that Ivins was working on. Ironically, Ivins was one of several scientists the FBI called on to assist with the original investigation.

The second item concerned a computer engineer named Terry Childs, who worked for the city of San Francisco in a highly responsible position in which he had exclusive control of certain passwords needed to make changes in the city's computer systems. It looks like Mr. Childs and his colleagues got into some kind of dispute that devolved into Mr. Childs being arrested on four felony counts of computer tampering. When it was discovered that nobody else in San Francisco knew those passwords, Mayor Gavin Newsom accepted an invitation from Childs' attorney to meet Childs in person at the jail, and got the passwords out of him, thus averting a potential computer disaster if changes had needed to be made to the system.

Both of these cases are far from over, and I hold no particular brief for either side of either dispute. But if either Bruce Ivins or Terry Childs turns out to have done what it looks like they might have done, we've got two failures on our hands. And to continue the theme of double trouble, both failures are of two kinds.

First, the personal failures. Suppose Ivins in fact did what it seems the FBI thinks he may have done: took some of the anthrax spores he was developing solely for the purpose of devising defenses against them and used them in real attacks. His motivation for such a heinous act can only be guessed at. One newswriter speculated that if Ivins was trying to gain attention and funding for what he thought was a neglected area of research, he succeeded—but at the price of five lives and the anxiety of millions. That kind of thing gets an F on anyone's moral calculus exam. And although Childs' accusations that the information technology department in San Francisco is corrupt and incompetently run may in fact be true, that doesn't justify his holding the entire system hostage by absconding with passwords, even though there were no service disruptions as a result of his actions. There is, I hope, little or no debate that these individuals did wrong if the accusations against them turn out to be true.

But what about the organizational failures? So many times it happens that engineering tragedies come about, not because any one person did something wicked or devious, but simply because the system allowed little slipups and slight ignorance here and there to cascade into a disaster. If Ivins really was able to take anthrax spores outside his lab and mail them from post offices in New Jersey, there is something wrong with the security system at the USAMRIID. But short of 100% body searches of everyone coming in and out of the labs, I'm not sure how you would improve it.

I don't know what the organization's policy is on allowing scientists to work alone, but if they allow such things, maybe they ought to stop. If there are always at least two people present any time hot stuff like anthrax spores is being worked on, you now have to have a conspiracy in order to take some away for nefarious purposes. Conspiracies aren't impossible, but they're less likely than the actions of a single individual with malicious intent.

And the same goes for the San Francisco IT organization. Computer engineers can be notoriously poor communicators, and it is quite possible that nobody other than Childs knew that he had these powerful passwords under his exclusive control. There just seems to be something about the type of personality drawn to that line of work which delights in exclusive control of things. But once you trade your own personal computer games for a system that is essential for the safety and livelihoods of thousands of people, the penchant for exclusivity has to go out the window. No amount of organizational incompetence, personal distrust of others' motives, or the like can justify a computer engineer's taking matters into his or her own hands that way. This is an elementary lesson that ought to be drilled into the head of every computer-engineering student, but such uniformity in education is just a pipe dream at this point.

You can remember the lesson here with the adage, "two heads are better than one." Usually it's taken to mean that it's easier to solve problems with help, and that's true. But in technical organizations where life-critical matters are being dealt with, it's always dangerous when the system allows solitary individuals to do things that threaten the system's integrity. Rules enforcing the principle of never working alone or of always sharing system-critical passwords go against the personality grain of some types of engineers. But they're needed, and might have prevented the problems that were the focus of the news items we've just discussed.
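One concrete way to enforce that "never one person alone" rule for something like a master password is to split it so that no single holder can reconstruct it. Here is a minimal sketch (purely illustrative, and not how San Francisco's systems actually worked) of a two-person split: either share alone is useless, but together the two holders recover the secret.

    # Illustrative two-person split of a secret: neither share alone reveals it.
    import os

    def split_secret(secret: bytes):
        share1 = os.urandom(len(secret))
        share2 = bytes(a ^ b for a, b in zip(secret, share1))
        return share1, share2

    def recover_secret(share1: bytes, share2: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(share1, share2))

    admin_a, admin_b = split_secret(b"router-master-password")
    assert recover_secret(admin_a, admin_b) == b"router-master-password"
    # Either administrator's share by itself is indistinguishable from random bytes.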

Sources: An early report on the Ivins case can be found in the Los Angeles Times at http://www.latimes.com/news/nationworld/nation/la-na-anthrax1-2008aug01,0,2864223.story. The San Francisco Chronicle reported on the Childs incident at http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/07/22/BAGF11T91U.DTL&tsp=1