Monday, June 25, 2018

Revenge Porn and Technological Progress


Nonconsensual image sharing, also known as revenge porn, has affected the lives of millions around the globe.  A 2016 survey of 3,000 U.S. residents showed that one out of 25 Americans has either been a victim of revenge porn or has had someone threaten to publicize a nude or nearly nude photo of them without their consent.  If you’re a woman under 30, the chances you’ve been threatened this way rise to one in 10.  About 5% of both men and women between 18 and 29 have had this happen to them at least once.  Consequences of revenge porn range from the trivial to the tragic, and more than a few cases have been implicated in a victim’s suicide.
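
To get a sense of the absolute numbers behind those percentages, here is a minimal back-of-the-envelope sketch in Python.  The adult-population figure is my own rough assumption, not a number from the survey:

```python
# Back-of-the-envelope scaling of the 2016 survey rates to the U.S. adult
# population.  The population figure below is a rough assumption for
# illustration, not a number taken from the survey itself.

US_ADULTS = 250_000_000              # approximate 2016 U.S. adult population (assumption)

victim_or_threatened = 1 / 25        # survey: all adults, victim or threatened
women_under_30_threatened = 1 / 10   # survey: women under 30, threatened

print(f"All adults affected: {US_ADULTS * victim_or_threatened / 1e6:.0f} million")
# -> about 10 million, consistent with the "nearly 10 million Americans"
#    figure in the Business Insider headline cited in the Sources below.
```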

This is a nasty business, and just listing all the things wrong with it would take more space than I have.  But I would like to focus on one aspect of the problem:  the way technological progress, or what’s generally regarded as progress, has taken an immoral act that once required expensive and elaborate planning and turned it into something almost anybody can do in seconds. 

Spoiler alert:  if you’re a fan of mid-twentieth-century hardboiled detective fiction but haven’t seen the Bogart-Bacall movie “The Big Sleep,” haven’t read the Raymond Chandler novel on which the movie is based, or plan some day to read Dashiell Hammett’s classic detective tale “The Scorched Face,” you might want to skip this paragraph.  The reason is that both stories involve schemes in which women were tempted to do, shall we say, inappropriate things while inadequately draped, and the criminals used hidden film cameras to obtain photos that were later used to blackmail the victims.  In these fictional tales, the victims were generally wild daughters of wealthy fathers who could afford to hire private detectives, but that was just to move the story along.  It’s unlikely that Hammett and Chandler cooked up these crime stories without some factual incidents in the news behind them.  My point is that even in the dark pre-Internet ages, there were some people around who contrived to gain an advantage—in this case, a financial one—over a victim by using photography of intimate scenes and actions.

But it was a lot of work.  For one thing, you had to develop your own film.  Most consumer photos back then were developed by local enterprises such as drug stores, and if you tried to get prints made of naughty images, the druggist was likely to call the cops on you, or at least refuse your business.  For another thing, your victim had to have enough social standing and money to make it worth your while to blackmail them.  In short, only the most dedicated and systematic criminals could successfully mount an indecent-photo blackmail scheme, and the crime was consequently rather rare.

Fast-forward to 2018.  Not only can intimate pictures now be taken with a device carried as commonly as underwear is worn, but once taken, these pictures can be duplicated ad infinitum and publicized to the world using multi-billion-dollar facilities (e.g. Facebook and Instagram) that cost the user nothing.  And anonymity is easy to achieve on the Internet and hard to penetrate.  Besides which, I suspect the barrier that once existed in people’s minds between what is appropriate to photograph in an intimate setting and what is not has eroded over the years.

In addition, both the sexual act and the act of photography have been somewhat trivialized.  Before the widespread use of birth-control pills (another technology, by the way), there was always the chance of pregnancy.  While this didn’t stop people from doing what comes naturally, it added an existential significance to the act which it commonly lacks today.  And in the old days, taking a photo indoors required either a bulky camera with a flashbulb—not exactly adding to the mood of the thing—or bright photoflood lights, again not something that two people doing intimate acts are likely to want. 

The drive toward ease of use that has steered so many aspects of technology has become a goal in itself, and we have in many cases ceased to ask what it is that we are trying to make easier, and whether some things can be made too easy.  Mark Zuckerberg likes to say that Facebook simply wants to bring people closer.  The trouble is that closeness by itself is not always a good thing.  And when intimate relationships fall apart, as they so often do, photos taken easily in the heat of the moment can become time bombs that one partner can deploy against another.

There are laws against such things in many states and countries, but the sheer prevalence of the crime, made so easy by technology, vastly outstrips the ability of law enforcement to prosecute the perpetrators.  Only the worst cases, those that end in suicide or exploit multiple victims for money, get prosecuted, and often the criminal escapes by means of the anonymity that the Internet provides.

Fortunately, revenge porn can be prevented, but it requires judgment and trust:  judgment on the part of anyone who is involved in an intimate relationship, and trust between those involved that no one will forcibly or surreptitiously take pictures of intimate moments.  Unfortunately, I suspect that I don’t have a lot of readers in the under-30 group.  But if you’re in that category, please save yourself and your friends and lovers a lot of grief.  Put away your phones before you take off your clothes, and you won’t have to worry about any of this happening to you. 

Sources:  I referred to the Wikipedia article on revenge porn, a news item carried by the website Business Insider on Dec. 13, 2016 at http://www.businessinsider.com/revenge-porn-study-nearly-10-million-americans-are-victims-2016-12, and the Data & Society Research Institute study available at https://datasociety.net/pubs/oh/Nonconsensual_Image_Sharing_2016.pdf.

Monday, June 18, 2018

Hacking Nuclear Weapons


Until I saw the title of Andrew Futter’s Hacking the Bomb:  Cyber Threats and Nuclear Weapons on the new-books shelf of my university library, I had never given any thought to what the new threat of cyber warfare means for the old threat of nuclear war.  Quite a lot, it turns out.

Futter is an associate professor of international politics at the University of Leicester in the UK, and has gathered whatever public-domain information he could find on what the world’s major nuclear players—chiefly Russia, China, and the U.S.—are doing both to modernize their nuclear command and control systems to bring them into the cyber era, and to keep both state and non-state actors (e.g. terrorists) from doing what his title mentions—namely, hacking a nuclear weapon—as well as other meddlesome things that could affect a nuclear nation’s ability to respond to threats.

The problem is a complicated one.  The worst-case scenario would be for a hacker to launch a live nuclear missile.  This was the premise of the 1983 film WarGames, back when cyberattacks were primitive attempts by hobbyists using phone-line modems.  Since then, of course, cyber warfare has matured.  Probably the best-known cases are the Stuxnet attack on Iranian nuclear-material facilities (probably carried out by a U.S.-Israeli team), discovered in 2010, and Russia’s 2015 crippling of Ukraine’s power grid by cyberweapons.  While there are no known instances in which a hacker has gained direct control of a nuclear weapon, that is only one side of the hacker coin—what Futter calls the enabling side.  Just as dangerous from a strategic point of view is the disabling side:  the potential to interfere with a nation’s ability to launch a nuclear strike if needed.  Either kind of hacking could raise the probability of nuclear war to unacceptable levels.

At the end of his book, Futter recommends three principles to guide those charged with maintaining control of nuclear weapons.  The problem is that two of the three principles he calls for run counter to the tendencies of modern computer networks and systems.  His three principles are (1) simplicity, (2) security, and (3) separation from conventional weapons systems. 

Security is perhaps the most important principle, and so far, judging by the fact that we have never seen an accidental detonation of a nuclear weapon, those in charge of such weapons have done at least an adequate job of keeping that sort of accident from happening.  But anyone who has dealt with computer systems today, which means virtually everyone, knows that simplicity went out the window decades ago.  Time and again, Futter emphasizes that while the old weapons-control systems were basically hard-wired pieces of hardware that the average technician could understand and repair, any modern computer replacement will probably involve many levels of complexity in both hardware and software.  Nobody will have the same kind of synoptic grasp of the entire system that was possible with 1960s-type hardware, and Futter is concerned that what we can’t fully understand, we can’t fully control.

Everyone outside the military organizations charged with control of nuclear weapons is at the disadvantage of having to guess at what those organizations are doing along these lines.  One hopes that they are keeping the newer computer-control systems as simple as possible, consistent with modernization.  What is more likely to be followed than simplicity is the principle of separation—keeping a clear boundary between control systems for conventional weapons and systems controlling nuclear weapons.

Almost certainly, the nuclear-weapons control networks are “air-gapped,” meaning that there is no physical or intentional electromagnetic connection between the nuclear system and the outside world of the Internet.  This was true of the control system that Iran built for its uranium centrifuges, but despite the air-gap precaution, the developers of Stuxnet were able to bridge the gap, evidently through the carelessness of someone who brought in a USB flash drive carrying the Stuxnet virus and inserted it into a machine connected to the centrifuges.

Such air-gap breaches could still occur today.  And this is where the disabling part of the problem comes in. 

One problem with live nuclear weapons is that you never get to test the entire system from initiating the command to seeing the mushroom cloud form over the target.  So we never really know from direct experience if the entire system is going to work as planned in the highly undesirable event that the decision is made to use nuclear weapons. 

The entire edifice of nuclear strategy thus relies on faith that each major player’s system will work as intended.  Anything that undermines that faith—say, a message from a hacker demanding money or a diplomatic favor, or else they will disable all your nuclear weapons in a way you can’t detect—would be highly destabilizing for the permanent standoff that exists among the nuclear powers.

Though it’s easy to ignore, Russia and the U.S. are like two gunslingers out in front of a saloon, each covering the other with a loaded pistol.  Neither one will fire unless he is sure the other one is about to fire.  But if one gunman thought that in a few seconds somebody was going to snatch his gun out of his hands, he might be tempted to fire first.  That’s how the threat of an effective disabling hack might lead to unacceptable chances of nuclear war.
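
To make the gunslinger logic concrete, here is a deliberately toy expected-payoff sketch in Python.  Every payoff number and probability below is invented purely for illustration; nothing here pretends to model real nuclear strategy:

```python
# A toy expected-payoff model of the gunslinger standoff: how a perceived
# chance of being disarmed (a "disabling hack") can tilt the incentive
# toward firing first.  All numbers are invented for illustration only.

PAYOFF_STANDOFF = 0.0        # nobody fires: the status quo
PAYOFF_DISARMED = -100.0     # you waited and your gun was snatched: at the rival's mercy
PAYOFF_FIRST_STRIKE = -50.0  # firing first is catastrophic, but on this toy
                             # scale "better" than being left defenseless

def expected_payoff_of_waiting(p_disabled: float) -> float:
    """Expected payoff of holding fire, given the perceived probability
    that your arsenal has been secretly disabled."""
    return (1 - p_disabled) * PAYOFF_STANDOFF + p_disabled * PAYOFF_DISARMED

for p in (0.0, 0.25, 0.5, 0.75):
    wait = expected_payoff_of_waiting(p)
    choice = "wait" if wait > PAYOFF_FIRST_STRIKE else "fire first"
    print(f"perceived P(disabled) = {p:.2f}: waiting pays {wait:6.1f} -> {choice}")
```

The toy numbers matter less than the flip itself:  as soon as waiting looks worse than shooting, the incentive reverses, which is exactly what a credible disabling hack threatens to bring about.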

These rather dismal speculations may not rise to the top of your worry list for the day, but it’s good that someone has at least asked the questions, and has found that the adults in the room, namely the few military brass who are willing to talk on the public record, are trying to do something about them.  Still, it would be a shame if after all these decades of successfully avoiding nuclear war, we wound up fighting one because of a software error.

Sources:  Hacking the Bomb:  Cyber Threats and Nuclear Weapons by Andrew Futter was published by Georgetown University Press in 2018.  I also referred to the Wikipedia article on Stuxnet.

Monday, June 11, 2018

What's Wrong With Police Drones?


Recently the online journal Slate carried the news that DJI, the world's largest maker of consumer drones, is teaming with Axon, which sells more body cameras to police in the U.S. than anyone else.  Their joint venture, called Axon Air, plans to sell drones to law-enforcement agencies and couple them to Axon's cloud-based database, called Evidence.com, which maintains files of video and other information gathered by police departments across the country.  Privacy experts interviewed about this development expressed concerns that when drone-generated video of crowds is processed by artificial-intelligence face-recognition software, the privacy of even law-abiding citizens will be further compromised.

Is this new development a real threat to privacy, or is it just one more step down a path we've been treading for so long that in the long run it won't make any difference?  To answer that question, we need to have a good idea of what privacy means in the context of the type of surveillance that drones can do.

The Fourth Amendment to the U.S. Constitution asserts "[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures. . . ."  The key word is "unreasonable," and for reasons both jurisprudential and technological, the meaning of that word has changed over time.  What it has meant historically is that before searching a person's private home, officers of the law must obtain a search warrant from a judge after explaining why they think such a search may turn up something illegal.

But drones don't frisk people—they generally can't see anything that anybody at the same location as the drone couldn't see.  Consequently, there are few restrictions, if any, against simply taking pictures of people who are out in public places such as streets, sidewalks, parks, and other venues that drones can easily access.  As a result, security cameras operated both by law-enforcement personnel and by private entities have proliferated to the extent that in many parts of the U.S., you can't walk down the street without leaving evidence that you did so in a dozen or so different places.

This capability has proved its value in situations such as terrorist bombings, where inspection of videos after a tragedy has provided valuable evidence.  But the price we have paid is a sacrifice of privacy so that the rare malefactor can be caught on camera.

So far, this sacrifice seems to be worthwhile.  I'm not aware of many cases in which someone who wasn't breaking the law, or didn't look like they were, has been persecuted or had their privacy violated by the misuse of privately owned security cameras.  There may be the odd case here and there, but generally speaking, such data is accessed only when a crime has occurred, and those responsible for reviewing camera data have done a good job of concentrating on genuine suspects and not misusing what they find.

Is there any reason the same situation won't obtain if police forces begin using drone-captured video and integrating it into Evidence.com, Axon's cloud-based evidence database?  Again, it all depends on the motives of those who can access the data.

If law enforcement agencies don't abuse such access and use it only for genuine criminal investigations, then it doesn't seem like moving security cameras to drones is going to make much difference to the average law-abiding citizen.  If anything, a drone is a lot more visible than a security camera stuck inside a light fixture somewhere, so people will be more aware that they're being watched than otherwise. 

But my concern is not so much misuse in the U.S. as misuse in countries that do not have the protection of the Bill of Rights, such as China, the home country of the drone-maker DJI.

The Chinese government has announced plans to develop something called a Social Credit System, and has already put elements of it in place.  According to Wikipedia, the plan is for every citizen and business to have a ranking rather like a credit score in the U.S.  But the types of behavior considered for the ranking range far beyond whether you simply pay your bills on time, and include how much you play Internet games, how you shop, and other legal activities.  Already the Social Credit System has been used to ban certain people from taking domestic airline flights, attending certain schools, and getting certain kinds of jobs.

While I have no evidence to support this, one can easily imagine a drone monitoring a Chinese citizen who goes to church, for example, and sending his or her social credit score into the basement as a result.  So whether a given surveillance technology poses a threat to the privacy and freedom of the individual depends as much on the good will (or lack of it) of those who use the data as it does on the technology itself.

Some groups in the U.S. already have little confidence in the average police organization, and see drones as yet another weapon that will be turned against them.  Genuine cases of police who abuse their authority should not be tolerated.  But statistics about arrest rates of minority populations can be used by both sides in the controversy:  to show that blatant discrimination goes on (as it surely does in some cases), or to show that because certain groups historically commit more crimes, they naturally show up more often among the suspicious persons who tend to be interrogated and surveilled.  There is no easy answer to this problem, which is best dealt with on a local level by identifying particular problems and solving them one by one.  Blanket condemnations, whether of police or of minority groups, do no good.

When all is said and done, the question really is, do we trust those who use surveillance drones and the databases where the drone data will wind up?  Any society that functions has to have a minimum level of trust among its citizens and in its vital institutions, including those that enforce the law.  Surveillance drones can help catch criminals, no doubt.  But if they are abused to persecute law-abiding private citizens, or even if they are just perceived to contribute to such abuse, surveillance drones could end up causing more problems than they solve.

Sources:  On June 7, 2018, Slate carried the article "The Next Frontier of Police Surveillance Is Drones," by April Glaser, at https://slate.com/technology/2018/06/axon-and-dji-are-teaming-up-to-make-surveillance-drones-and-the-possibilities-are-frightening.html.  I also referred to the Wikipedia articles on the U.S. Bill of Rights and on China's Social Credit System.

Monday, June 04, 2018

Should Google Censor Political Ads?


On May 25, citizens of Ireland voted in a referendum and thereby repealed the eighth amendment to the Irish Constitution, which had banned most types of abortions in Ireland for more than thirty years.  Ireland is a democratic country, and if its constitution allows such amendments by direct vote, then no one should have a problem with the way the change was made.  But most people would also agree that electorates should be informed by any reasonable means possible ahead of a vote, including advertisements paid for by interested parties who exercise their free-speech rights to let their opinions be known.

In a move that is shocking both in its drastic character and in the hypocrisy with which it was presented, Google abruptly banned all ads dealing with the referendum through its channels on May 9, two weeks before the vote, regardless of whether the ads were paid for by domestic or foreign sources.  The day before, Facebook had banned all such ads whose sponsors were outside of Ireland, although there is no current Irish legislation regarding online advertising.  Google's move was breathtaking in its scope and timing, coming at a time when support for the yes vote in favor of repeal was looking somewhat shaky.

As an editorial in the conservative U.S. magazine National Review pointed out, the mainstream Irish media were in favor of repeal.  Opponents of repeal largely resorted to online advertising as both cheaper and more effective among young people, whose vote was especially critical in this referendum.  Shutting down the online ads left the field open to the conventional media, and thus blatantly tipped the scales in favor of the yes vote.  While Google explained its move as intended to "protect the integrity" of the campaign, one person's protection is another person's interference.

As the lack of any Irish laws pertaining to online political ads testifies, online advertising has gotten way ahead of the legal and political system's ability to keep up with it.  This is not necessarily a bad thing, although issues of fairness are always present when the question of paid political ads comes up. 

The ways of dealing with political advertising lie along a spectrum.  On one end is the no-holds-barred libertarian extreme of no restrictions whatsoever.  Under this type of regime, anyone with enough money to afford advertising can spend it to say anything they want about any political issue, without revealing who they are or where they live.  With regard to online ads, if Ireland has no laws concerning them, then the libertarian end of the spectrum prevails, and neither Google nor Facebook was under any legal obligation to block any advertising regarding the referendum.

On the other extreme is the situation in which all media access is closely regulated and encumbered by restrictions on the amount of spending, when and where money can be spent, and what can be said.  I suppose the ultimate form of this extreme is state-controlled media that monopolize the political discussion and ban all private access, regardless of ability to pay.  For technological reasons, it is hard for even a super-totalitarian state such as North Korea to achieve 100% control of all media these days, but some nations come close.  Most people would agree that a state which flatly prohibits private political advertising is not likely to achieve much in the way of meaningful democracy.

But the pure-libertarian model has flaws too.  If most of the wealthy people all favor one political party or opinion, the other side is unlikely to get a fair hearing unless they are clever and exploit newer and cheaper ways to gain access to the public ear, as the pro-life groups in Ireland appear to have done. 

What is new to this traditional spectrum is the existence of institutions such as Google and Facebook which strive mightily to appear as neutral common carriers—think the old Bell System—but in fact have their own political axes to grind, and very powerful means to carry out moves that have huge political implications.  I wonder what would have happened if the situation had been reversed—if the no-vote people had been in control of the mainstream media and the yes-vote people had been forced to resort to online ads.  Would Google have shut down all online advertising two weeks before the vote in that case?  I somehow doubt it.

Like it or not, Google, Facebook, and their ilk are now publishers whose economic scale, power, and influence in some cases far exceed the old newspaper publishing empires of Hearst and Gannett and Murdoch.  But the old publishers knew they were publishers, and had some vague sense of social responsibility that went along with their access to the public's attention.  In the days before the "Victorian internet" (telegraphy) gave rise to the Associated Press, publishers were typically identified with particular political persuasions.  Everybody knew which was the Republican paper and which was the Democratic paper, and bought newspapers (and political ads) accordingly.  Even today, although the older news media make some effort to keep a wall of separation between the opinionated editorial operations and the supposedly neutral advertising and finance operations, many newspapers and TV networks take certain political positions and make no secret of it. 

But Google outgrew its fig leaf of neutrality when it claimed to be "protecting the integrity" of elections while imposing arbitrary and draconian bans on free speech, which is exactly what it did on May 9 in Ireland.  The fig leaf is now too small to hide some naughty bits, and it's clear to everybody who's paying the least attention that what Google did damaged the cause of one side in the referendum.

It is of course possible that the repeal would have happened even if Google had not banned all ads when it did.  We will never know.  But Google now bears some measure of responsibility for the consequences of that vote, and the millions of future lives that will now never see the light of day because their protection in law is gone will not learn to read, will not learn to use a computer or a smart phone—and will never experience Google.  But hey, there are plenty of other people in the world, and maybe Google will never miss the ones that will now be missing from Ireland.