Monday, July 16, 2018

Exploding E-Cigarettes and Ethical Theories


A recent article in the engineering magazine IEEE Spectrum describes how numerous users of e-cigarettes have received injuries ranging from minor to life-threatening when their devices caught fire or exploded.  E-cigarettes work by vaporizing a solution of nicotine and flavoring with a hot wire powered by a high-energy-density lithium-ion battery.  Lithium-ion batteries are in everything from mobile phones to airliners, but the particular design of e-cigarettes makes them especially hazardous in this application.  The high power required by the heater means that the battery is operating perilously near the maximum output current that it can maintain without overheating itself.  And if there are any manufacturing defects in the battery, as can often happen if substandard components are used by a fly-by-night manufacturer with inadequate quality control, the e-cigarette user ends up carrying around what amounts to a small pipe bomb.
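Just to illustrate the kind of margin involved, here is a minimal back-of-envelope sketch in Python.  The heater wattage, cell capacity, and discharge ratings below are my own illustrative assumptions, not figures from the article:

```python
# Back-of-envelope check: how close does an e-cigarette heater push its
# battery to the maximum continuous current the cell can supply?
# ALL numbers here are illustrative assumptions, not figures from the article.

heater_power_w = 50.0     # assumed heater power for a high-power device
cell_voltage_v = 3.7      # nominal voltage of a single lithium-ion cell
capacity_ah = 2.5         # assumed cell capacity (2500 mAh)

draw_a = heater_power_w / cell_voltage_v   # current the heater demands (~13.5 A)

# A quality cell honestly rated for 20C continuous discharge has headroom;
# a substandard cell mislabeled with the same rating may really stand only 5C.
for label, c_rating in [("honest 20C cell", 20.0), ("mislabeled 5C cell", 5.0)]:
    limit_a = c_rating * capacity_ah       # most the cell can safely supply
    status = "OK" if draw_a <= limit_a else "OVERLOADED"
    print(f"{label}: limit {limit_a:.1f} A, heater draws {draw_a:.1f} A -> {status}")
```

With these made-up numbers, a cell that honestly meets its rating has plenty of headroom, while a mislabeled substandard cell is pushed past its true limit, which is exactly the overheating scenario described above.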

The consequences can be dire.  The article tells the story of Otis Gooding, whose e-cigarette went off in his pants pocket, injuring both his thigh and the hand he used to try to get rid of it.  Other users have lost eyes, teeth, and parts of their cheeks to explosions and fires.

Anecdotes, however harrowing, do not constitute numerical evidence that typical e-cigarette users are taking their lives in their hands every time they light up.  But various sources have estimated that the incidence of e-cigarette explosions or fires is in the dozens, if not hundreds, per year. 

The e-cigarette market had total sales exceeding $2 billion in 2016, and assuming the average user spends $150 a year on the habit, that means over 10 million people in the U. S. are likely regular users.  Say 100 of these have fireworks-type problems with their devices, and that amounts to an incidence of about 1 per 100,000 per year, the sort of ratio that public-health epidemiologists like to use.  Just to put that in perspective, deaths in the U. S. from lung cancer in the period 2011-2015 averaged about 43 per 100,000.  One of the advantages touted for e-cigarettes is that they don’t produce the tar and other nasty stuff that leads to lung cancer in regular cigarette smokers.  While e-cigarettes haven’t been around long enough for this assertion to be empirically verified—nobody has been an e-cigarette user for forty years yet—there is probably something to this argument.
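The arithmetic behind that estimate is easy to reproduce.  Here it is as a short Python sketch, using the sales figure from the text along with the assumed $150-a-year spending and the supposed 100 incidents a year:

```python
# Reproduce the back-of-envelope incidence estimate from the paragraph above.
total_sales_usd = 2e9       # U.S. e-cigarette sales in 2016 (from the text)
spend_per_user_usd = 150    # assumed average annual spending per user
incidents_per_year = 100    # supposed number of explosions/fires per year

users = total_sales_usd / spend_per_user_usd           # ~13.3 million users
rate_per_100k = incidents_per_year / users * 100_000   # ~0.75 per 100,000

print(f"Estimated regular users: {users / 1e6:.1f} million")
print(f"Explosion/fire incidence: about {rate_per_100k:.1f} per 100,000 per year")
print("Compare: U.S. lung-cancer deaths averaged about 43 per 100,000 (2011-2015)")
```

The result, roughly one explosion or fire per 100,000 users per year, is tiny next to the lung-cancer figure, which is the comparison the next paragraph turns on.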

And such an argument would appeal to a certain type of ethical theorist called a utilitarian.  Utilitarians decide what the right thing to do is by asking what produces the greatest good for the greatest number.  A utilitarian might look at this situation and say, “Okay, we have Case A and Case B.  In Case A, 10 million people satisfy their craving for nicotine with plain old coffin-nails, and as a result a good many more than 43 per 100,000 of them die of lung cancer every year.  In Case B, we have the same 10 million people smoking e-cigarettes.  A lot fewer of them die of lung cancer, at the price of only one unlucky person whose e-cigarette explodes in his face.  Clearly, Case B is better.”

And if you put it that way, it’s hard to argue the point.  But we don’t live idealized lives in which we’re always choosing between two clearly-defined cases.  And if I were an e-cigarette user (which I am not), I would still be concerned about the chances that my device could blow up or catch fire.  But the utilitarian won’t help me.

So I go looking in the catalog of ethical theories and find something called virtue ethics.  Basically, virtue ethics encourages cultivation of the virtues, of which there are almost as many as there are virtue ethicists.  Of the virtues we could choose from, I’ll pick one that doesn’t have a particularly fancy name, but will work for our purposes:  thoughtfulness. 

If you open a cabinet door while making breakfast one morning, and then think to close it afterwards not because you want to (open cabinet doors don’t bother you at all) but because your wife has a thing about open cabinet doors, you’re being thoughtful.  What does thoughtfulness say about the situation of defective e-cigarettes leading to explosions and fires?

Well, it draws attention to the proximate causes of those explosions and fires, which in most cases prove to be defective lithium-ion batteries supplied by shady manufacturers, almost all of them in China, which is where most of the devices and their components are made.  Now there are over a billion people in China, and probably most of them are doing the best they can in their lives, trying to improve their lot and fulfill their obligations.  China is a haven for entrepreneurs right now, especially in the exploding (so to speak) growth market of e-cigarettes.  And in a world economy where low prices speak louder than almost any other consideration, the organization that can underbid everybody else tends to get the business.  So the management of a shady lithium-ion battery factory probably feels caught between the rock of maintaining quality and reliability on the one hand, and the hard place of not pricing its product above the minimum needed to get the business on the other. 

The virtue ethicists would tell the makers of e-cigarettes to clean up their act.  “How would you like to be one of the customers whose lips are sent to Kingdom Come by one of your bad batteries?” they might ask.  Of course, if one company spends the money to improve the quality of their batteries, there may be another one around the corner willing to skip quality control and underbid the first company.  The e-cigarette makers who buy batteries also have reputations to uphold, and so they could also pay attention to the sermon of a virtue ethicist and take more responsibility for the quality of their products.  In any event, it’s easy to see that virtue ethics provides you with more reasons to do something about this situation than utilitarianism does.

This is not to say that utilitarianism is useless.  In situations where there are lots of data to work with, utilitarian analyses can clarify choices between comparable courses of action.  But sooner or later, it always runs up against the questions of how to quantify good, and whom to include in the greatest number.  And there are no universally agreed-upon answers to those questions.

Sources:  The print version of IEEE Spectrum carried the article “When E-Cigarettes Go Boom” on pp. 42-45 of the July 2018 issue.  A briefer version appeared on their website in February 2018 at https://spectrum.ieee.org/consumer-electronics/portable-devices/exploding-ecigarettes-are-a-growing-danger-to-public-health.  The statistic about lung cancer was obtained from https://seer.cancer.gov/statfacts/html/lungb.html, and the sales figures for e-cigarettes are from https://www.statista.com/statistics/285143/us-e-cigarettes-dollar-sales/.

Monday, July 09, 2018

Is Dr. Google Ready to See Us Now?


Google’s parent company Alphabet has recently demonstrated an artificial-intelligence (AI) algorithm that can be used to estimate how likely hospital patients are to die soon.  A recent piece in Bloomberg News described how one particular woman with end-stage breast cancer arrived with fluid-filled lungs and underwent numerous tests.  The Google algorithm said there was a 20% chance she would not survive her hospital stay, and she died a few days later.

One data point—or one life.  The woman was both, and therein lies the challenge for researchers wanting to use AI to improve health care.  AI is a data-hungry beast, thriving on huge databases and sweeping up any scrap of information in its maw.  One of the best features of Google’s medical AI system is that it doesn’t need to have the raw data gussied up; that is, it doesn’t require a human being to retype messy notes into a form the computer can use, a process that consumes as much as 80% of the effort devoted to other AI medical software.  Google’s system takes almost any kind of hand-scrawled data and integrates it into patient evaluations.  So in order to help, the system needs to know everything, no matter how apparently trivial or unrelated to the case it may be.
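To make the contrast concrete, here is a toy sketch of the difference between hand-structured input and free-text ingestion.  This is my own illustration of the general idea, not Google’s actual pipeline:

```python
# Toy contrast between the two data-preparation approaches described above.
# This is an illustration of the general idea, NOT Google's actual system.

# Traditional medical AI: a human must first transcribe every note
# into a fixed set of fields the software understands.
structured_record = {"age": 67, "diagnosis": "breast cancer", "o2_sat_pct": 88}

# Free-text approach: the raw clinician's note is ingested as-is,
# and the model learns for itself which scraps of it matter.
raw_note = "67F metastatic breast ca, lungs filling w/ fluid, sat 88% on RA"

def tokenize(note: str) -> list[str]:
    """Crude whitespace tokenizer standing in for a learned text encoder."""
    return note.lower().split()

# Every scrap of the messy note is available to the model, with no
# human transcription step in between.
print(tokenize(raw_note))
```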

But then the human aspect enters.  To make my point, I’ll draw an analogy to a different profession—banking.  I’m old enough to remember when bankers evaluated customers with a combination of hard data—loans paid off in the past, bank balances, and so on—and intuition gained from meeting and talking with the customer.  Except for maybe a few high-class boutique banks, this is no longer the case.  The almighty credit score ground out by opaque algorithms reigns, and no amount of personal charm exerted for the benefit of a loan officer will overcome a low credit score. 

It’s one thing when we’re talking about loans, and another when the subject is human lives.  It’s easy to imagine a dystopian narrative involving a Google-like AI program that comes to dominate the decision-making process in a situation where medical resources are limited and there are more patients needing expensive care than the system can handle.  Doctors will turn to their AI assistants and ask, “Which of these five patients is most likely to benefit from a kidney transplant?”  It’s likely that some form of this process already goes on today, but is limited to comparatively rare situations such as transplants. 

The U. S. government’s Medicare system is currently forecast to become insolvent eight years from now.  Even if Congress manages to bail it out, the flood of aging baby-boomers such as myself will threaten to overwhelm the nation’s health-care system.  In such a crisis, the temptation to use AI algorithms to allocate limited resources will be overwhelming. 

From an engineering-efficiency standpoint, it all makes sense.  Why waste limited resources on someone who isn’t likely to benefit from them, when another person may get them and go on to live many years longer?  That’s fine except for two things.

One, even the best AI systems aren’t perfect, and now and then there will be mistakes—sometimes major ones.

And two, what if an AI medical system tells you you’re not going to get the treatment that might make the difference between life and death?  Even the hardiest utilitarian (“greatest benefit for the greatest number”) may have second thoughts about that outcome.

Of course, resource allocation in health care is nothing new.  There have always been more sick people than there have been facilities to take care of them all.  The way we’ve done it in the past has been a combination of economics, the judgment of medical personnel, and government intervention from time to time.  As computers made inroads into various parts of the process, it’s only natural that they be used along with other available means to make wise choices.  But there’s a difference between using computers as tools and completely turning over decision-making to an algorithm.

Another concern raised about Google’s foray into applying AI to health care is the issue of privacy.  Medical records are among the most sensitive types of personal data, and in the past, elaborate precautions have been taken to guard the sanctity of each individual’s records.  But AI algorithms work better the more data they have, and so simply for the purpose of getting better at what they do, these algorithms will need access to as much data as they can get their digital hands on.  According to one survey, less than half of the public trusts Google to keep their data private.  While that is just a perception, it’s a perception that Google, and the medical profession in general, ignore at their peril.  One scandal or major data breach involving medical records could set back the entire medical-AI industry, so all participants will need to tread carefully to make sure nothing like that happens.

Predicting when people will die is only one of the many abilities that medical AI of the future offers.  In figuring out hard-to-diagnose cases, in recommending treatment customized to the individual, and in optimizing the delivery of health care generally, it shows great promise in making health care more effective and efficient for the vast majority of patients.  But doctors and other medical authorities should beware of letting the algorithms gain the upper hand, and turning their judgment and ethics over to a printout, so to speak.  Because Google’s system is still in the prototype stage, we don’t know what the effects of its more widespread deployment will be.  But whatever form it takes, we need to make sure that the vital life-or-death decisions involved in medical care are made by responsible people, not just machines.

Sources:  The article “Google Is Training Machines to Predict When a Patient Will Die” appeared on June 18, 2018 in Bloomberg News at https://www.bloomberg.com/news/articles/2018-06-18/google-is-training-machines-to-predict-when-a-patient-will-die, and was reprinted by the Austin American-Statesman, where I first saw it.  I also referred to an article on the Modern Health Care website at http://www.modernhealthcare.com/article/20180419/NEWS/180419911.  The statistic about Medicare’s insolvency is from p. 6 of the July 9, 2018 issue of National Review.   

Monday, July 02, 2018

Unanswered Questions About the FIU Bridge Collapse


Last March 15, an innovatively-designed pedestrian bridge that had been installed for less than a week suddenly collapsed onto a busy roadway at Florida International University, killing six people and injuring several more.  Although many bridges have been erected successfully using the technique known as accelerated bridge construction that was employed here, this was one of the first times such a bridge failed during construction, and so both the engineering community and the public at large would like to know what went wrong.  But answers have been slow in coming.

Accident investigations are tedious, painstaking tasks, and it’s understandable that the U. S. National Transportation Safety Board (NTSB), which is the primary agency charged with the investigation, is going to take as long as it takes to find out what happened.  On May 23, the NTSB released a preliminary report on the accident.  But those hoping to read about a metaphorical smoking gun in the report will be disappointed.

This is not unusual for preliminary reports.  Depending on how accessible the raw data to be examined is, preliminary NTSB reports can come close to answering all the relevant questions.  But a bridge is a large physical object that doesn’t yield its secrets easily, and it’s quite possible that the agency is conducting tedious and lengthy examinations of the pieces recovered from certain sections of the bridge to reconstruct exactly what happened.

The bridge was a concrete truss, and in writing this I learned the technical definition of a truss, which “consists of two-force members only, where the members are organized so that the assemblage as a whole behaves as a single object.”  (That’s from Wikipedia.)  Trusses are everywhere in constructed objects.  Your house or apartment probably has roof trusses in it.  If a sign support arching over a highway isn’t a single large tube or pole, it’s probably a truss.  You can tell a truss by its triangles, each side of which is one of the aforementioned two-force members.  And all that means is that the force on each member (strut) is applied at only two points, generally the ends.

The NTSB report focuses on two particular truss members of the bridge, especially one designated as No. 11.  If you picture the bridge as a concrete floor and ceiling connected by slanting concrete beams (truss members), these two members were near each end, slanting downward from near the end of the ceiling to the far edge of the floor at the end of the bridge.  Each slanting member formed the diagonal of a triangle, the other two sides being the end part of the ceiling and a vertical beam connecting the ceiling and floor at each end. 

Clear?  Maybe not.  Anyway, the point of making all these triangles in trusses is that a triangular shape made of straight sides does not change its shape easily.  You can push on the opposite sides of a square made of four bars tied together with pivots at the corners, and the pivots will let you squash the square flat.  But even if a triangle is made with pivots at the corners, it won’t change its shape until one of the sides actually bends or breaks.  And that may be what happened to the FIU bridge.
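In fact, the triangle-versus-square point can be checked with Maxwell’s classic counting rule for pin-jointed plane frames, which says a frame with j joints needs at least 2j - 3 members to hold its shape.  A minimal sketch:

```python
# Maxwell's rigidity criterion for 2D pin-jointed frames:
# a frame with j joints needs at least m = 2*j - 3 members to be rigid.

def is_rigid(joints: int, members: int) -> bool:
    """Necessary (not sufficient) condition for rigidity of a plane truss."""
    return members >= 2 * joints - 3

# A triangle: 3 joints, 3 members -> rigid (3 >= 3).
print("triangle:       ", is_rigid(3, 3))   # True

# A pin-jointed square: 4 joints, 4 members -> a mechanism (4 < 5);
# it squashes flat exactly as described above, unless a diagonal is added.
print("square:         ", is_rigid(4, 4))   # False
print("square+diagonal:", is_rigid(4, 5))   # True
```

The rule is only a necessary condition (a frame can satisfy the count and still be flimsy if the members are badly arranged), but it captures why adding a diagonal stiffens a square, and why trusses are built from triangles.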

There has been much attention paid to some cracks that showed up near the bottom of No. 11 several weeks before the collapse.  Photos of these cracks were accidentally released, the NTSB griped about the release, and then the agency included the photos in its own report later.  There are good reasons why certain touchy information about accidents shouldn’t be released before an NTSB report is completed, but it’s not clear whether those reasons apply in this case.  The Miami Herald commented in an article on the report that the newspaper is trying to get more information about the bridge released from the Florida Department of Transportation under Florida’s public-records act, but the lawsuit is still in progress.

As many people know, concrete can withstand a lot of compression (squeezing), but hardly any tension.  That is why steel reinforcement bars (rebars) are embedded in concrete structures of any size, and why the FIU bridge included adjustable tensioning rods in some of its members, including No. 11.  A couple of sources indicate that construction crews were re-tensioning these rods, after the bridge had been moved into place, when it collapsed.  The Herald report speculates that the member might have been compressed too much by these tensioning rods in an attempt to close the suspicious cracks.  Every concrete structure has a limit as to how much compression it can stand.  If the workers got carried away and put too much stress on an already compromised member, it might simply have crumbled from the pressure.  Because it was in such a vital location, failure of that member would have caused exactly the kind of accident that happened.
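To see how over-tensioning could push a member past its limit, here is a minimal numerical sketch.  Every number in it is an illustrative assumption of mine; none comes from the NTSB report or the Herald:

```python
# Illustrative check of axial compressive stress in a concrete truss member.
# ALL numbers are assumptions for illustration; none come from the NTSB report.

dead_load_force_n = 4.0e6      # assumed axial force from the bridge's own weight
post_tension_force_n = 2.0e6   # assumed added force from the tensioning rods
area_m2 = 0.2                  # assumed member cross-section (e.g., 0.4 m x 0.5 m)
concrete_strength_pa = 40e6    # assumed compressive strength (40 MPa concrete)

stress_pa = (dead_load_force_n + post_tension_force_n) / area_m2
utilization = stress_pa / concrete_strength_pa

print(f"Axial stress: {stress_pa / 1e6:.0f} MPa")        # 30 MPa for these numbers
print(f"Utilization:  {utilization:.0%} of capacity")    # 75% for these numbers
# A member already weakened by cracks has less real capacity than assumed,
# so extra tensioning force can push the true utilization past 100%.
```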

Especially if the worker who made this mistake was among those who died, the NTSB is reluctant to draw any conclusions in this direction that are not supported by abundant evidence.  It’s up to the investigators to figure out what traces of such a mishap would remain in the rubble collected from the site, as well as in whatever preliminary evidence, such as photos, is available.  So rather than any nefarious conspiracy to cover up systematic wrongdoing, the delay and refusal to share information may simply be out of concern that premature release of information could lead to unnecessary agitation, hurt feelings, and even more lawsuits.  The legal system has grown to accommodate the NTSB’s role in accident investigation, and anything that would upset that particular applecart may not be helpful.

All the same, it would be good if the root causes of this very public and tragic event could be unearthed, if there are any to be found.  And other things being equal, it would be nice to have that happen sooner rather than later.  Clearly, something went wrong, and everyone using accelerated bridge construction stands to learn something potentially useful from the final report on this accident.  But as the process may take months longer, we may simply have to wait. 

Sources:  The Miami Herald’s article on the NTSB preliminary report appeared on May 23, 2018 at https://www.miamiherald.com/news/local/community/miami-dade/article211735504.html.  The NTSB report itself can be accessed at https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH009-prelim.pdf.  I also referred to the Wikipedia article on trusses (the engineering kinds).      

Monday, June 25, 2018

Revenge Porn and Technological Progress


Nonconsensual image sharing, also known as revenge porn, has affected the lives of millions around the globe.  A 2016 survey of 3,000 U. S. residents showed that one out of 25 Americans has either been a victim of revenge porn or has had someone threaten to publicize a nude or nearly nude photo of them without their consent.  If you’re a woman under 30, the chances you’ve been threatened this way rise to 1 in 10.  About 5% of both men and women between 18 and 29 have had this happen to them at least once.  Consequences of revenge porn range from the trivial to the tragic, and more than a few cases of revenge porn have been implicated in a victim’s suicide.

This is a nasty business, and just listing all the things wrong with it would take more space than I have.  But I would like to focus on one aspect of the problem:  the way technological progress, or what’s generally regarded as progress, has taken an immoral act that once required expensive and elaborate planning and turned it into something almost anybody can do in seconds. 

Spoiler alert:  if you’re a fan of mid-twentieth-century hardboiled detective fiction, but you haven’t seen the Bogart-Bacall movie “The Big Sleep,” haven’t read the novel by Raymond Chandler on which the movie is based, or intend someday to read Dashiell Hammett’s classic detective tale “The Scorched Face,” you might want to skip this paragraph.  The reason is that both stories involve schemes in which women were tempted to do, shall we say, inappropriate things while inadequately draped, and the criminals used hidden film cameras to obtain photos that were later used to blackmail the victims.  In these fictional tales, the victims were generally wild daughters of wealthy fathers who could afford to hire private detectives, but that was just to move the story along.  It’s unlikely that Hammett and Chandler cooked up these crime stories without some factual incidents behind them in the news reports of the day.  My point is that even in the dark pre-Internet ages, there were some people around who contrived to gain an advantage—in this case, a financial one—over a victim by using photography of intimate scenes and actions. 

But it was a lot of work.  For one thing, you had to develop your own film.  Most consumer photos back then were developed by local enterprises such as drug stores, and if you tried to get prints made of naughty images, the druggist was likely to call the cops on you, or at least refuse your business.  For another thing, your victim had to have enough social standing and money to make it worth your while to blackmail them.  In short, only the most dedicated and systematic criminals could successfully mount an indecent-photo blackmail scheme, and the crime was consequently rather rare.

Fast-forward to 2018.  Not only can intimate pictures now be taken with a device that is as commonly worn as underwear, but once taken, these pictures can be duplicated ad infinitum and publicized to the world using multi-billion-dollar facilities (e. g. Facebook and Instagram) that cost the user nothing.  And anonymity is easy to achieve on the Internet and hard to penetrate.  Besides which, I suspect the barrier that once existed in people’s minds between what is appropriate to photograph in an intimate setting and what is not has changed over the years. 

In addition, both the sexual act and the act of photography have been somewhat trivialized.  Before the widespread use of birth-control pills (another technology, by the way), there was always the chance of pregnancy.  While this didn’t stop people from doing what comes naturally, it added an existential significance to the act which it commonly lacks today.  And in the old days, taking a photo indoors required either a bulky camera with a flashbulb—not exactly adding to the mood of the thing—or bright photoflood lights, again not something that two people doing intimate acts are likely to want. 

The drive toward ease of use that has steered so many aspects of technology has become a goal in itself, and we have in many cases ceased to ask what it is that we are trying to make easier, and whether some things can be made too easy.  Mark Zuckerberg likes to say that Facebook simply wants to bring people closer.  The trouble is that closeness by itself is not always a good thing.  And when intimate relationships fall apart, as they so often do, photos taken easily in the heat of the moment can become time bombs that one partner can deploy against another.

There are laws against such things in many states and countries, but the widespread nature of the crime made so easy by technology vastly outstrips the ability of law enforcement to prosecute the perpetrators.  Only the worst cases that end in suicide or exploit multiple victims for money get prosecuted, and often the criminal escapes by means of the anonymity that the Internet provides. 

Fortunately, revenge porn can be prevented, but it requires judgment and trust:  judgment on the part of anyone who is involved in an intimate relationship, and trust between those involved that no one will forcibly or surreptitiously take pictures of intimate moments.  Unfortunately, I suspect that I don’t have a lot of readers in the under-30 group.  But if you’re in that category, please save yourself and your friends and lovers a lot of grief.  Put away your phones before you take off your clothes, and you won’t have to worry about any of this happening to you. 

Sources:  I referred to the Wikipedia article on revenge porn, a news item carried by the website Business Insider on Dec. 13, 2016 at http://www.businessinsider.com/revenge-porn-study-nearly-10-million-americans-are-victims-2016-12, and the Data & Society Research Institute study available at https://datasociety.net/pubs/oh/Nonconsensual_Image_Sharing_2016.pdf.

Monday, June 18, 2018

Hacking Nuclear Weapons


Until I saw the title of Andrew Futter’s Hacking the Bomb:  Cyber Threats and Nuclear Weapons on the new-books shelf of my university library, I had never given any thought to what the new threat of cyber warfare means for the old threat of nuclear war.  Quite a lot, it turns out. 

Futter is associate professor of history at the University of Leicester in the UK, and has gathered whatever public-domain information he could find on what the world’s major nuclear players—chiefly Russia, China, and the U. S.—are doing both to modernize their nuclear command and control systems to bring them into the cyber era, and to keep both state and non-state actors (e. g. terrorists) from doing what his title mentions—namely, hacking a nuclear weapon, as well as other meddlesome things that could affect a nuclear nation’s ability to respond to threats. 

The problem is a complicated one.  The worst-case scenario would be for a hacker to launch a live nuclear missile.  Something close to this happened in the 1983 film WarGames, back when cyberattacks were primitive attempts by hobbyists using phone-line modems.  Since then, of course, cyber warfare has matured.  Probably the best-known cases are the Stuxnet attack on Iranian nuclear-material facilities (probably carried out by a U. S.-Israeli team), discovered in 2010, and Russia’s 2015 crippling of Ukraine’s power grid by cyberweapons.  While there are no known instances in which a hacker has gained direct control of a nuclear weapon, that is only one side of the hacker coin—what Futter calls the enabling side.  Just as potentially dangerous from a strategic point of view is the disabling side:  the potential to interfere with a nation’s ability to launch a nuclear strike if needed.  Either kind of hacking could raise the possibility of nuclear war to unacceptable levels.

At the end of his book, Futter recommends three principles to guide those charged with maintaining control of nuclear weapons.  The problem is that two of the three principles he calls for run counter to the tendencies of modern computer networks and systems.  His three principles are (1) simplicity, (2) security, and (3) separation from conventional weapons systems. 

Security is perhaps the most important principle, and so far, judging by the fact that we have not seen an accidental detonation of a nuclear weapon up to now, those in charge of such weapons have done at least an adequate job of keeping that sort of accident from happening.  But anyone who has dealt with computer systems today, which means virtually everyone, knows that simplicity went out the window decades ago.  Time and again, Futter emphasizes that while the old weapons-control systems were basically hard-wired pieces of hardware that the average technician could understand and repair, any modern computer replacement will probably involve many levels of complexity in both hardware and software.  Nobody will have the same kind of synoptic grasp of the entire system that was possible with 1960s-type hardware, and Futter is concerned that what we can’t fully understand, we can’t fully control.

Everyone outside the military organizations charged with control of nuclear weapons is at the disadvantage of having to guess at what those organizations are doing along these lines.  One hopes that they are keeping the newer computer-control systems as simple as possible, consistent with modernization.  What is more likely to be followed than simplicity is the principle of separation—keeping a clear boundary between control systems for conventional weapons and systems controlling nuclear weapons.

Almost certainly, the nuclear-weapons control networks are “air-gapped,” meaning that there is no physical or intentional electromagnetic connection between the nuclear system and the outside world of the Internet.  This was true of the control system that Iran built for its uranium centrifuges, but despite their air-gap precaution, the developers of Stuxnet were able to bridge the gap, evidently through the carelessness of someone who brought in a USB flash drive containing the Stuxnet virus and inserted it into a machine connected to the centrifuges. 

Such air-gap breaches could still occur today.  And this is where the disabling part of the problem comes in. 

One problem with live nuclear weapons is that you never get to test the entire system from initiating the command to seeing the mushroom cloud form over the target.  So we never really know from direct experience if the entire system is going to work as planned in the highly undesirable event that the decision is made to use nuclear weapons. 

The entire edifice of nuclear strategy thus relies on faith that each major player’s system will work as intended.  Anything that undermines that faith—a message, say, from a hacker asking for money or a diplomatic favor, or else we will disable all your nuclear weapons in a way you can’t figure out—well, such an action would be highly destabilizing for the permanent standoff that exists among nuclear powers. 

Though it’s easy to ignore, Russia and the U. S. are like two gunslingers out in front of a saloon, each covering the other with a loaded pistol.  Neither one will fire unless he is sure the other one is about to fire.  But if one gunman thought that in a few seconds somebody was going to snatch his gun out of his hands, he might be tempted to fire first.  That’s how the threat of an effective disabling hack might lead to unacceptable chances of nuclear war. 

These rather dismal speculations may not rise to the top of your worry list for the day, but it’s good that someone has at least asked the questions, and has found that the adults in the room, namely the few military brass who are willing to talk on the public record, are trying to do something about them.  Still, it would be a shame if after all these decades of successfully avoiding nuclear war, we wound up fighting one because of a software error.

Sources:  Hacking the Bomb:  Cyber Threats and Nuclear Weapons by Andrew Futter was published by Georgetown University Press in 2018.  I also referred to the Wikipedia article on Stuxnet.

Monday, June 11, 2018

What's Wrong With Police Drones?


Recently the online journal Slate carried the news that DJI, the world's largest maker of consumer drones, is teaming with Axon, which sells more body cameras to police in the U. S. than anyone else.  Their joint venture, called Axon Air, plans to sell drones to law-enforcement agencies and couple them to Axon's cloud-based database called Evidence.com, which maintains files of video and other information gathered by police departments across the country.  Privacy experts interviewed about this development expressed concerns that when drone-generated video of crowds is processed by artificial-intelligence face-recognition software, the privacy of even law-abiding citizens will be further compromised. 

Is this new development a real threat to privacy, or is it just one more step down a path we've been treading for so long that in the long run it won't make any difference?  To answer that question, we need to have a good idea of what privacy means in the context of the type of surveillance that drones can do.

The Fourth Amendment to the U. S. Constitution asserts "[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures. . . . "  The key word is "unreasonable," and for reasons both jurisprudential and technological, the meaning of that word has changed over time.  What it has meant historically is that before searching a person's private home, officers of the law must obtain a search warrant from a judge after explaining the reasons why they think such a search may turn up something illegal. 

But drones don't frisk people—generally they can't see anything that anybody at the same location as the drone couldn't see.  As a result, there are few restrictions, if any, against simply taking pictures of people who are out in public places such as streets, sidewalks, parks, and other venues that drones can easily access.  Consequently, security cameras operated both by law-enforcement personnel and by private entities have proliferated to the extent that in many parts of the U. S., you can't walk down the street without leaving evidence that you did so in a dozen or so different places. 

This capability has proved its value in situations such as terrorist bombings, where inspection of videos after a tragedy has provided valuable evidence.  But the price we have paid is a sacrifice of privacy so that the rare malefactor can be caught on camera.

So far, this sacrifice seems to be worthwhile.  I'm not aware of many cases in which someone who wasn't breaking the law, or reasonably suspected of breaking it, has been persecuted or had their privacy violated through the misuse of privately-owned security cameras.  There may be the odd case here and there, but generally speaking, such data is accessed only when a crime has occurred, and those responsible for reviewing camera data have done a good job of concentrating on genuine suspects and not misusing what they find.

Is there any reason that the same situation won't obtain if police forces begin using drone-captured video, and integrating it into Evidence.com, the Axon cloud-based evidence database?  Again, it all depends on the motives of those who can access the data.

If law enforcement agencies don't abuse such access and use it only for genuine criminal investigations, then it doesn't seem like moving security cameras to drones is going to make much difference to the average law-abiding citizen.  If anything, a drone is a lot more visible than a security camera stuck inside a light fixture somewhere, so people will be more aware that they're being watched than otherwise. 

But my concern is not so much for misuse in the U. S. as it is for misuse in countries which do not have the protection of the Bill of Rights, such as China, the home country of the drone-maker DJI. 

The Chinese government has announced plans to develop something called a Social Credit System, and has already put elements of it in place.  According to Wikipedia, the plans are for every citizen and business to have some sort of ranking rather like a credit score in the U. S.  Only the types of behavior considered for the ranking range far beyond whether you simply pay your bills on time, and include how much you play Internet games, how you shop, and other legal activities.  Already the Social Credit System has been used to ban certain people from taking domestic airline flights, attending certain schools, and getting certain kinds of jobs. 

While I have no evidence to support this, one can easily imagine a drone monitoring a Chinese citizen who goes to church, for example, and sending his or her social credit score into the basement as a result.  So whether a given surveillance technology poses a threat to the privacy and the freedom of the individual depends as much on the good will (or lack of it) of those who use the data as much as it does on the technology itself.

Some groups in the U. S. have little confidence in the average police organization already, and see drones as yet another weapon that will be turned against them.  Genuine cases of police who abuse their authority should not be tolerated.  But statistics about arrest rates of minority populations can be used by both sides in a controversy:  to show either that blatant discrimination goes on (as it surely does in some cases), or that because certain groups historically commit more crimes, they naturally show up more often among the suspicious persons who tend to be interrogated and surveilled.  There is no easy answer to this problem, which is best dealt with on a local level by identifying particular problems and solving them one by one.  Blanket condemnations either of police or of minority groups do no good.

When all is said and done, the question really is, do we trust those who use surveillance drones and the databases where the drone data will wind up?  Any society that functions has to have a minimum level of trust among its citizens and in its vital institutions, including those that enforce the law.  Surveillance drones can help catch criminals, no doubt.  But if they are abused to persecute law-abiding private citizens, or even if they are just perceived to contribute to such abuse, surveillance drones could end up causing more problems than they solve.

Sources:  On June 7, 2018, Slate carried the article "The Next Frontier of Police Surveillance Is Drones," by April Glaser, at https://slate.com/technology/2018/06/axon-and-dji-are-teaming-up-to-make-surveillance-drones-and-the-possibilities-are-frightening.html.  I also referred to the Wikipedia articles on the U. S. Bill of Rights and on China's Social Credit System. 

Monday, June 04, 2018

Should Google Censor Political Ads?


On May 25, citizens of Ireland voted in a referendum and thereby repealed the eighth amendment to the Irish Constitution, which has banned most types of abortions in Ireland for more than thirty years.  Ireland is a democratic country, and if their constitution allows such amendments by direct vote, then no one should have a problem with the way the change was made.  But most people would also agree that electorates should be informed by any reasonable means possible ahead of a vote, including advertisements paid for by interested parties who exercise their free-speech rights to let their opinions be known. 

In a move that was shocking both in its drastic character and in the hypocrisy with which it was presented, on May 9, with two weeks remaining before the vote, Google abruptly banned all ads dealing with the referendum through its channels, regardless of whether the ads were paid for by domestic or foreign sources.  The day before, Facebook had banned all such ads whose sponsors were outside of Ireland, although there is no current Irish legislation regarding online advertising.  Google's move was breathtaking in its scope and timing, coming at a time when support for the yes vote in favor of repeal was looking somewhat shaky. 

As an editorial in the conservative U. S. magazine National Review pointed out, the mainstream Irish media were in favor of repeal.  Opponents of the repeal largely resorted to online advertising as being both cheaper and more effective among young people, whose vote was especially critical in this referendum.  Shutting down the online ads left the field open for conventional media, and thus blatantly tipped the scales in favor of the yes vote.  While Google explained its move as intended to "protect the integrity" of the campaign, one person's protection is another person's interference. 

As the lack of any Irish laws pertaining to online political ads testifies, online advertising has gotten way ahead of the legal and political system's ability to keep up with it.  This is not necessarily a bad thing, although issues of fairness are always present when the question of paid political ads comes up. 

The ways of dealing with political advertising lie along a spectrum.  On one end is the no-holds-barred libertarian extreme of no restrictions whatsoever.  Under this type of regime, anyone with enough money to afford advertising can spend it to say anything they want about any political issue, without revealing who they are or where they live.  With regard to online ads, if Ireland has no laws concerning them, then the libertarian end of the spectrum prevails, and neither Google nor Facebook was under any legal obligation to block any advertising regarding the referendum.

On the other extreme is the situation in which all media access is closely regulated and encumbered by restrictions as to amount of spending, when and where money can be spent, and what can be said.  I suppose the ultimate extreme of this pole is state-controlled media which monopolize the political discussion and ban all private access, regardless of ability to pay.  For technological reasons, it is hard for even super-totalitarian states such as North Korea to achieve 100% control of all media these days, but some nations come close.  Most people would agree that a state which flatly prohibits private political advertising is not likely to achieve much in the way of meaningful democracy.

But the pure-libertarian model has flaws too.  If most of the wealthy people all favor one political party or opinion, the other side is unlikely to get a fair hearing unless they are clever and exploit newer and cheaper ways to gain access to the public ear, as the pro-life groups in Ireland appear to have done. 

What is new to this traditional spectrum is the existence of institutions such as Google and Facebook which strive mightily to appear as neutral common carriers—think the old Bell System—but in fact have their own political axes to grind, and very powerful means to carry out moves that have huge political implications.  I wonder what would have happened if the situation had been reversed—if the no-vote people had been in control of the mainstream media and the yes-vote people had been forced to resort to online ads.  Would Google have shut down all online advertising two weeks before the vote in that case?  I somehow doubt it.

Like it or not, Google, Facebook, and their ilk are now publishers whose economic scale, power, and influence in some cases far exceed the old newspaper publishing empires of Hearst and Gannett and Murdoch.  But the old publishers knew they were publishers, and had some vague sense of social responsibility that went along with their access to the public's attention.  In the days before the "Victorian internet" (telegraphy) gave rise to the Associated Press, publishers were typically identified with particular political persuasions.  Everybody knew which was the Republican paper and which was the Democratic paper, and bought newspapers (and political ads) accordingly.  Even today, although the older news media make some effort to keep a wall of separation between the opinionated editorial operations and the supposedly neutral advertising and finance operations, many newspapers and TV networks take certain political positions and make no secret of it. 

But Google has outgrown its fig leaf of neutrality when it says it is "protecting the integrity" of elections by arbitrary and draconian bans on free speech, which is exactly what it did on May 9 in Ireland.  The fig leaf is now too small to hide some naughty bits, and it's clear to everybody who's paying the least attention that what Google did damaged the cause of one side in the referendum. 

It is of course possible that the repeal would have happened even if Google had not banned all ads when it did.  We will never know.  But Google now bears some measure of responsibility for the consequences of that vote, and the millions of future lives that will now never see the light of day because their protection in law is gone will not learn to read, will not learn to use a computer or a smart phone—and will never experience Google.  But hey, there are plenty of other people in the world, and maybe Google will never miss the ones that will now be missing from Ireland.