The government of China has invested billions of dollars and years of effort in creating a facial-recognition database of as many of its 1.4 billion people as possible. The goal is social control: to keep tabs on every individual and to make sure that anyone doing anything the government doesn't like will at least worry about being caught.
But those of us in the U.S. and Europe shouldn't simply sigh in relief that our governments don't do such things. Facebook and Google are virtually everywhere, as are closed-circuit TV (CCTV) cameras and dashcams, not to mention smaller companies such as Clearview AI that focus on selling facial-recognition technology to law-enforcement agencies. It is hard to go more than a few feet in most cities these days without your image, or your car's license plate, being converted into an electronic form that can reveal your identity and location.
In a recent issue of The New Yorker, reporter John Seabrook describes how some individuals and researchers are fighting back with what amounts to digital-surveillance invisibility gear. This equipment doesn't make you invisible to ordinary people, who might just think you have a fondness for eccentric outerwear: shirts printed with random license-plate numbers, or fuzzy-looking artwork that could pass for a traffic signal painted by Monet. But the same artificial-intelligence (AI) researchers who came up with powerful facial-recognition software in the first place have found that similar AI techniques can produce patterns that, when worn on the body or applied to the face, confuse the software so thoroughly that, as far as it is concerned, you might as well not be there.
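Seabrook's article doesn't include code, of course, but the trick behind such patterns, known in the research literature as an "adversarial example," is easy to sketch. Below is a minimal illustration in Python using the PyTorch library and the fast gradient sign method (FGSM); the tiny stand-in model, the random "photo," and the epsilon value are my own illustrative assumptions, not anything from the article. Real anti-surveillance designs are optimized against actual face-recognition networks and must survive printing and odd camera angles, but the core idea is the same: figure out how each pixel influences the network's answer, then nudge every pixel in exactly the direction that misleads it.

```python
# A minimal sketch of an adversarial example (FGSM), assuming PyTorch.
# The tiny linear model and random input are stand-ins for a real
# face-recognition network and a real photograph.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: any differentiable model would work here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Nudge each pixel by +/- epsilon in the direction that most
    increases the model's loss on the given label."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # One signed gradient step per pixel, clamped to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)       # a stand-in "photo"
y = model(x).argmax(dim=1)         # the model's current prediction

x_adv = fgsm_perturb(x, y)

print("before:", model(x).argmax(dim=1).item())
print("after: ", model(x_adv).argmax(dim=1).item())  # often flips
```

To a human eye the perturbed image looks essentially unchanged, since no pixel moves by more than epsilon, yet the classifier's answer frequently flips; the wearable patterns Seabrook describes apply the same principle to physical objects.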
Admittedly, the average person in the U.S. is probably unaware that CCTV cameras can track their every move, because the data thus generated is currently used mostly for commercial purposes: optimizing online ads, for example, by figuring out what kinds of people look at certain store displays. But residents of China and other places where systematic government spying using facial recognition is a part of everyday life have already adapted their behavior to the fact that anything they do outside their own homes (and maybe inside, too) is probably known to the government. It's the Big Brother of George Orwell's dystopian novel 1984 realized, not in an isolated experiment or two, but throughout the most populous nation in the world.
In today's interconnected, hyper-monitored, Internet-of-Things world, the ethical concept of privacy is a soiled and tattered thing. Every time you sign up for a "free" service such as Facebook or another social-media platform that involves images, you are obliged to lie that you have read and understood a ream of legalese that would take several lawyers a long time to understand thoroughly. Buried in that legalese is probably verbiage that allows the company to do effectively whatever it wants with your pictures.
I recently attended a seminar on ethics and technology at which philosopher Eric T. Weber argued that someday a clever lawyer may file a class-action suit on behalf of all of us who have signed our rights away in this fashion. His point was that assent without understanding is not assent; he compared the situation to the informed consent that subjects of medical experiments must grant. If you don't understand what the doctors are going to do to you but say it's okay anyway, that is not regarded as informed consent.
Prof. Weber also pointed out that European law is more advanced in this regard: the presumption there is that a person owns the data they generate until they intentionally relinquish it. That's a nice theory, but the minute you set foot in a public space (a road, a sidewalk, even a shopping mall) you are liable to have your picture taken. And with the way AI has proliferated, you are liable to be recognized and categorized, even if the data is supposedly "anonymized" so that identifiable individuals cannot be picked out.
The whole thing uncomfortably reminds me of a rivalry that began during World War II and continues today: the contest over what are called electronic countermeasures (ECM). As soon as someone invented radar, someone else invented a way to fool radar, and the game has continued ever since. But that game is played by sophisticated adversaries with access to billions of dollars in research-and-development funds.
What chance does an ordinary person with no AI knowledge or skills have to defend themselves against the nosiness of a Facebook, Google, or Clearview AI? Up till now, virtually none, unless you simply self-isolate at home indefinitely (pardon me for letting that phrase creep in, but I couldn't keep the coronavirus out of this blog entirely). But clothing and accessory designer Kate Bertash, profiled in Seabrook's article, now sells ready-to-wear "anti-surveillance" items from her small studio in Los Angeles.
The right to privacy is something that any democratic government should respect and defend, rather than ignore or even destroy, as the government of China appears to be doing. But privacy is one of those subtle rights that you may not miss as it is gradually chipped away, until one day you suddenly find that you need it and it's gone.
It's gratifying to know that at least some researchers and retailers are waking up to the problem of omnipresent surveillance and trying to do something about it. And it isn't just bad actors who want to evade facial-recognition systems. In the words of the U.S. Declaration of Independence, "life, liberty, and the pursuit of happiness," liberty implies the freedom to do all sorts of innocent and licit things without worrying that your actions will bring consequences you don't want, whether from unwelcome government intervention or from commercial entities exploiting what they know about you to sell you things. It's a shame that people even have to think about AI-generated invisibility cloaks, but such are the times we live in.
Sources: I referred to John Seabrook's article "Adversarial Man," The New Yorker, Mar. 16, 2020, pp. 44-51. Eric T. Weber is an associate professor in the Department of Educational Policy Studies at the University of Kentucky; he appeared in a panel discussion entitled "Can Ethics Keep Pace with Technological Change?" hosted by the Texas State University Department of Philosophy on Mar. 12, 2020.