Monday, June 18, 2018

Hacking Nuclear Weapons


Until I saw the title of Andrew Futter’s Hacking the Bomb:  Cyber Threats and Nuclear Weapons in the new-books shelf of my university library, I had never given any thought to what the new threat of cyber warfare means to the old threat of nuclear war.  Quite a lot, it turns out. 

Futter is associate professor of history at the University of Leicester in the UK.  He has gathered whatever public-domain information he could find on what the world’s major nuclear players—chiefly Russia, China, and the U. S.—are doing both to modernize their nuclear command and control systems for the cyber era, and to keep state and non-state actors (terrorists, for example) from doing what his title mentions—namely, hacking a nuclear weapon—as well as other meddlesome things that could affect a nuclear nation’s ability to respond to threats. 

The problem is a complicated one.  The worst-case scenario would be for a hacker to launch a live nuclear missile.  This almost happened in the 1983 film WarGames, back when cyberattacks were primitive attempts by hobbyists using phone-line modems.  Since then, of course, cyber warfare has matured.  The best-known cases are the Stuxnet attack on Iranian nuclear-material facilities (probably carried out by a U. S.-Israeli team), discovered in 2010, and Russia’s 2015 crippling of Ukraine’s power grid by cyberweapons.  While there are no known instances in which a hacker has gained direct control of a nuclear weapon, that is only one side of the hacker coin—what Futter calls the enabling side.  Just as dangerous from a strategic point of view is the disabling side:  the potential to interfere with a nation’s ability to launch a nuclear strike if needed.  Either kind of hacking could raise the likelihood of nuclear war to unacceptable levels.

At the end of his book, Futter recommends three principles to guide those charged with maintaining control of nuclear weapons.  The problem is that two of the three principles he calls for run counter to the tendencies of modern computer networks and systems.  His three principles are (1) simplicity, (2) security, and (3) separation from conventional weapons systems. 

Security is perhaps the most important principle, and so far, judging by the fact that we have not yet seen an accidental detonation of a nuclear weapon, those in charge of such weapons have done at least an adequate job of keeping that sort of accident from happening.  But anyone who has dealt with computer systems today, which means virtually everyone, knows that simplicity went out the window decades ago.  Time and again, Futter emphasizes that while the old weapons-control systems were basically hard-wired pieces of hardware that the average technician could understand and repair, any modern computer replacement will probably involve many levels of complexity in both hardware and software.  Nobody will have the kind of synoptic grasp of the entire system that was possible with 1960s-type hardware, and Futter is concerned that what we can’t fully understand, we can’t fully control.

Everyone outside the military organizations charged with control of nuclear weapons is at the disadvantage of having to guess at what those organizations are doing along these lines.  One hopes that they are keeping the newer computer-control systems as simple as possible, consistent with modernization.  What is more likely to be followed than simplicity is the principle of separation—keeping a clear boundary between control systems for conventional weapons and systems controlling nuclear weapons.

Almost certainly, the nuclear-weapons control networks are “air-gapped,” meaning that there is no physical or intentional electromagnetic connection between the nuclear system and the outside world of the Internet.  This was true of the control system that Iran built for its uranium centrifuges, but despite their air-gap precaution, the developers of Stuxnet were able to bridge the gap, evidently through the carelessness of someone who brought in a USB flash drive containing the Stuxnet virus and inserted it into a machine connected to the centrifuges. 

Such air-gap breaches could still occur today.  And this is where the disabling part of the problem comes in. 

One problem with live nuclear weapons is that you never get to test the entire system from initiating the command to seeing the mushroom cloud form over the target.  So we never really know from direct experience if the entire system is going to work as planned in the highly undesirable event that the decision is made to use nuclear weapons. 

The entire edifice of nuclear strategy thus relies on faith that each major player’s system will work as intended.  Anything that undermines that faith—say, a message from a hacker demanding money or a diplomatic favor, with the threat of otherwise disabling all your nuclear weapons in a way you can’t figure out—would be highly destabilizing for the permanent standoff that exists among nuclear powers. 

Though it’s easy to ignore, Russia and the U. S. are like two gunslingers out in front of a saloon, each covering the other with a loaded pistol.  Neither one will fire unless he is sure the other one is about to fire.  But if one gunman thought that in a few seconds somebody was going to snatch his gun out of his hands, he might be tempted to fire first.  That’s how the threat of an effective disabling hack could lead to unacceptable chances of nuclear war. 

These rather dismal speculations may not rise to the top of your worry list for the day, but it’s good that someone has at least asked the questions, and has found that the adults in the room, namely the few military brass who are willing to talk on the public record, are trying to do something about them.  Still, it would be a shame if after all these decades of successfully avoiding nuclear war, we wound up fighting one because of a software error.

Sources:  Hacking the Bomb:  Cyber Threats and Nuclear Weapons by Andrew Futter was published by Georgetown University Press in 2018.  I also referred to the Wikipedia article on Stuxnet.

Monday, June 11, 2018

What's Wrong With Police Drones?


Recently the online journal Slate carried the news that DJI, the world's largest maker of consumer drones, is teaming with Axon, which sells more body cameras to police in the U. S. than anyone else.  Their joint venture, called Axon Air, plans to sell drones to law-enforcement agencies and couple them to Axon's cloud-based database called Evidence.com, which maintains files of video and other information gathered by police departments across the country.  Privacy experts interviewed about this development expressed concerns that when drone-generated video of crowds is processed by artificial-intelligence face-recognition software, the privacy of even law-abiding citizens will be further compromised. 

Is this new development a real threat to privacy, or is it just one more step down a path we've been treading for so long that in the long run it won't make any difference?  To answer that question, we need to have a good idea of what privacy means in the context of the type of surveillance that drones can do.

The Fourth Amendment to the U. S. Constitution asserts "[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures. . . . "  The key word is "unreasonable," and for reasons both jurisprudential and technological, the meaning of that word has changed over time.  What it has meant historically is that before searching a person's private home, officers of the law must obtain a search warrant from a judge after explaining why they think such a search may turn up something illegal. 

But drones don't frisk people—generally, they can't see anything that a person at the same location as the drone couldn't see.  Consequently, there are few restrictions, if any, against simply taking pictures of people who are out in public places such as streets, sidewalks, parks, and other venues that drones can easily access.  As a result, security cameras operated both by law enforcement personnel and by private entities have proliferated to the extent that in many parts of the U. S., you can't walk down the street without leaving evidence that you did so in a dozen or so different places. 

This capability has proved its value in situations such as terrorist bombings, where inspection of videos after a tragedy has provided valuable evidence.  But the price we have paid is a sacrifice of privacy so that the rare malefactor can be caught on camera.

So far, this sacrifice seems to be worthwhile.  I'm not aware of many cases in which someone who wasn't breaking the law, or didn't look like they were, has been persecuted or had their privacy violated by the misuse of privately owned security cameras.  There may be the odd case here and there, but generally speaking, such data is accessed only when a crime has occurred, and those responsible for reviewing camera data have done a good job of concentrating on genuine suspects and not misusing what they find.

Is there any reason that the same situation won't obtain if police forces begin using drone-captured video, and integrating it into Evidence.com, the Axon cloud-based evidence database?  Again, it all depends on the motives of those who can access the data.

If law enforcement agencies don't abuse such access and use it only for genuine criminal investigations, then it doesn't seem like moving security cameras to drones is going to make much difference to the average law-abiding citizen.  If anything, a drone is a lot more visible than a security camera stuck inside a light fixture somewhere, so people will be more aware that they're being watched. 

But my concern is not so much for misuse in the U. S. as it is for misuse in countries which do not have the protection of the Bill of Rights, such as China, the home country of the drone-maker DJI. 

The Chinese government has announced plans to develop something called a Social Credit System, and has already put elements of it in place.  According to Wikipedia, the plan is for every citizen and business to have some sort of ranking rather like a credit score in the U. S.  But the types of behavior considered for the ranking range far beyond whether you pay your bills on time; they include how much you play Internet games, how you shop, and other legal activities.  Already the Social Credit System has been used to ban certain people from taking domestic airline flights, attending certain schools, and getting certain kinds of jobs. 

While I have no evidence to support this, one can easily imagine a drone monitoring a Chinese citizen who goes to church, for example, and sending his or her social credit score into the basement as a result.  So whether a given surveillance technology poses a threat to the privacy and freedom of the individual depends as much on the good will (or lack of it) of those who use the data as on the technology itself.

Some groups in the U. S. have little confidence in the average police organization already, and see drones as yet another weapon that will be turned against them.  Genuine cases of police who abuse their authority should not be tolerated, but statistics can be used by both sides in a controversy about arrest rates of minority populations to show either that blatant discrimination goes on (as it surely does in some cases), or to show that because certain groups historically commit more crimes, they naturally show up more in the category of suspicious persons who tend to be interrogated and surveilled.  There is no easy answer to this problem, which is best dealt with on a local level by identifying particular problems and solving them one by one.  Blanket condemnations either of police or of minority groups do no good.

When all is said and done, the question really is, do we trust those who use surveillance drones and the databases where the drone data will wind up?  Any society that functions has to have a minimum level of trust among its citizens and in its vital institutions, including those that enforce the law.  Surveillance drones can help catch criminals, no doubt.  But if they are abused to persecute law-abiding private citizens, or even if they are just perceived to contribute to such abuse, surveillance drones could end up causing more problems than they solve.

Sources:  On June 7, 2018, Slate carried the article "The Next Frontier of Police Surveillance Is Drones," by April Glaser, at https://slate.com/technology/2018/06/axon-and-dji-are-teaming-up-to-make-surveillance-drones-and-the-possibilities-are-frightening.html.  I also referred to the Wikipedia articles on the U. S. Bill of Rights and on China's Social Credit System. 

Monday, June 04, 2018

Should Google Censor Political Ads?


On May 25, citizens of Ireland voted in a referendum to repeal the Eighth Amendment to the Irish Constitution, which had banned most types of abortions in Ireland for more than thirty years.  Ireland is a democratic country, and if its constitution allows such amendments by direct vote, then no one should have a problem with the way the change was made.  But most people would also agree that electorates should be informed by any reasonable means possible ahead of a vote, including advertisements paid for by interested parties who exercise their free-speech rights to let their opinions be known. 

In a move that was shocking both in its drastic character and in the hypocrisy with which it was presented, on May 9, with two weeks remaining before the vote, Google abruptly banned all ads dealing with the referendum through its channels, regardless of whether the ads were paid for by domestic or foreign sources.  The day before, Facebook had banned all such ads whose sponsors were outside of Ireland, although there is currently no Irish legislation regarding online advertising.  Google's move was breathtaking in its scope and timing, coming when support for the yes vote in favor of repeal was looking somewhat shaky. 

As an editorial in the conservative U. S. magazine National Review pointed out, the mainstream Irish media were in favor of repeal.  Opponents of the repeal largely resorted to online advertising as being both cheaper and more effective among young people, whose vote was especially critical in this referendum.  Shutting down the online ads left the field open for conventional media, and thus blatantly tipped the scales in favor of the yes vote.  While Google explained its move as intended to "protect the integrity" of the campaign, one person's protection is another person's interference. 

As the lack of any Irish laws pertaining to online political ads testifies, online advertising has gotten way ahead of the legal and political system's ability to keep up with it.  This is not necessarily a bad thing, although issues of fairness are always present when the question of paid political ads comes up. 

The ways of dealing with political advertising lie along a spectrum.  On one end is the no-holds-barred libertarian extreme of no restrictions whatsoever.  Under this type of regime, anyone with enough money to afford advertising can spend it to say anything they want about any political issue, without revealing who they are or where they live.  With regard to online ads, if Ireland has no laws concerning them, then the libertarian end of the spectrum prevails, and neither Google nor Facebook was under any legal obligation to block any advertising regarding the referendum.

On the other extreme is the situation in which all media access is closely regulated and encumbered by restrictions as to amount of spending, when and where money can be spent, and what can be said.  I suppose the ultimate extreme of this pole is state-controlled media which monopolize the political discussion and ban all private access, regardless of ability to pay.  For technological reasons, it is hard for even super-totalitarian states such as North Korea to achieve 100% control of all media these days, but some nations come close.  Most people would agree that a state which flatly prohibits private political advertising is not likely to achieve much in the way of meaningful democracy.

But the pure-libertarian model has flaws too.  If most of the wealthy people all favor one political party or opinion, the other side is unlikely to get a fair hearing unless they are clever and exploit newer and cheaper ways to gain access to the public ear, as the pro-life groups in Ireland appear to have done. 

What is new to this traditional spectrum is the existence of institutions such as Google and Facebook which strive mightily to appear as neutral common carriers—think the old Bell System—but in fact have their own political axes to grind, and very powerful means to carry out moves that have huge political implications.  I wonder what would have happened if the situation had been reversed—if the no-vote people had been in control of the mainstream media and the yes-vote people had been forced to resort to online ads.  Would Google have shut down all online advertising two weeks before the vote in that case?  I somehow doubt it.

Like it or not, Google, Facebook, and their ilk are now publishers whose economic scale, power, and influence in some cases far exceed those of the old newspaper publishing empires of Hearst and Gannett and Murdoch.  But the old publishers knew they were publishers, and had some vague sense of social responsibility that went along with their access to the public's attention.  In the days before the "Victorian internet" (telegraphy) gave rise to the Associated Press, publishers were typically identified with particular political persuasions.  Everybody knew which was the Republican paper and which was the Democratic paper, and bought newspapers (and political ads) accordingly.  Even today, although the older news media make some effort to keep a wall of separation between the opinionated editorial operations and the supposedly neutral advertising and finance operations, many newspapers and TV networks take certain political positions and make no secret of it. 

But Google has outgrown its fig leaf of neutrality when it says it is "protecting the integrity" of elections by arbitrary and draconian bans on free speech, which is exactly what it did on May 9 in Ireland.  The fig leaf is now too small to hide some naughty bits, and it's clear to everybody who's paying the least attention that what Google did damaged the cause of one side in the referendum. 

It is of course possible that the repeal would have happened even if Google had not banned all ads when it did.  We will never know.  But Google now bears some measure of responsibility for the consequences of that vote, and the millions of future lives that will now never see the light of day because their protection in law is gone will not learn to read, will not learn to use a computer or a smart phone—and will never experience Google.  But hey, there are plenty of other people in the world, and maybe Google will never miss the ones that will now be missing from Ireland.

Monday, May 28, 2018

Human And Autonomous Driving: A Deadly Mix?


"Who's in charge here?"  If people in an organization can't give a clear answer to that question, chances are the organization is in trouble.  And something along those lines may apply to cars as well as to human organizations.  That's the lesson we can draw from the preliminary report released by the U. S. National Transportation Safety Board (NTSB) last Thursday, May 24, concerning the fatal collision between a pedestrian and a semi-autonomous vehicle operated by Uber in Tempe, Arizona last March 18. 

To summarize the accident, around 9:39 PM on that Sunday night, Elaine Herzberg chose to walk her bicycle across a divided street between crosswalks, in a section of road that was poorly illuminated by streetlights.  She was not wearing reflective clothing and her bicycle had no side reflectors.  She apparently did not see the oncoming car until just before the collision.  Subsequent toxicology tests showed traces of marijuana and methamphetamine in her system.  Regardless of her condition, it's the responsibility of drivers (or the car's computer) to look out for all pedestrians, even those who aren't behaving with normal alertness.  And if this responsibility is split or ambiguous, trouble is brewing.

An in-cab video released after the accident shows that the car's driver was studying something below the windshield until she saw the pedestrian just before the collision.  In my earlier blog on this incident, I mistakenly speculated that she was looking at her cellphone instead of the road, but it turns out she was monitoring a display of the self-driving car's behavior, as part of what was basically a research project in which the driver would take the car out on prescribed routes to test its systems. 

The most informative piece of evidence in the NTSB preliminary report concerns the state the car was in just before the crash.  The Volvo was equipped both with the latest Volvo-engineered safety systems, including a collision-avoidance system, and also with Uber-installed computer control.  Probably to avoid interference between the two systems, the Volvo safety controls were disabled when the Uber computer was set to operate the vehicle.  The Uber computer system is able to determine when emergency (hard) braking maneuvers are needed, and is capable of executing them.  But at the time of the accident, the emergency braking function was disabled, as Uber found it had led to erratic behavior.  The operator was apparently aware of this setting and of her responsibility to take emergency actions as needed, in addition to monitoring the vehicle's operations on the screen and "tagging events of interest for subsequent review." 
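
Purely as a hypothetical sketch (Uber's actual software is not public, and every name below is invented), the configuration the NTSB report describes, in which engaging the self-driving computer switches off both the Volvo safety suite and automatic emergency braking, can be modeled like this:

```python
# Hypothetical model of the mode logic described in the NTSB preliminary
# report -- not Uber's actual software, which is not public.
from dataclasses import dataclass

@dataclass
class TestVehicleModes:
    uber_computer_engaged: bool

    @property
    def volvo_safety_active(self) -> bool:
        # The factory collision-avoidance system was disabled whenever
        # the Uber computer was driving, to avoid conflicting commands.
        return not self.uber_computer_engaged

    @property
    def auto_emergency_braking_active(self) -> bool:
        # Uber's own hard-braking function was switched off entirely
        # after it produced erratic behavior in testing.
        return False

    @property
    def emergency_fallback(self) -> str:
        # With both automatic systems off, the human operator is the
        # only remaining backstop.
        if self.volvo_safety_active or self.auto_emergency_braking_active:
            return "automatic systems"
        return "human operator"

# In self-driving mode, everything falls to the human operator.
modes = TestVehicleModes(uber_computer_engaged=True)
```

The sketch makes the structural problem visible:  the one configuration used for road testing is exactly the one in which no automatic system is standing by.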

People can do only so much at once.  Abundant research has shown that too many distractions degrade a driver's ability to respond to unexpected emergencies.  Uber was basically running an experiment requiring significant driver attention while operating its vehicles on public roads.  It didn't take long for a driver distracted by monitoring tasks to coincide with a pedestrian whose attention was clouded, and the result was a tragedy, the suspension of Uber's experimental autonomous-car program, and numerous calls by state and federal lawmakers for a slowdown in the deployment of autonomous vehicles.

The final report from the NTSB on this fatal accident will probably not come out until next year.  But the preliminary report shows how things can go tragically wrong under the current regime of what are called Level 2 and Level 3 autonomous driving systems.  The ranking system runs from Level 0, which is what I can do in our 1955 Oldsmobile (no computer within miles), to the hypothetical Level 5, the yet-to-be-realized situation in which the self-driving car performs absolutely all driving functions and the passenger's participation is limited to telling the car where to go when he or she gets in. 
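
For reference, this taxonomy (defined in SAE standard J3016) can be sketched as a simple enumeration; the one-line descriptions are my paraphrases, not the standard's exact wording:

```python
from enum import IntEnum

class DrivingAutomation(IntEnum):
    """SAE J3016 driving-automation levels (paraphrased descriptions)."""
    LEVEL_0 = 0  # No automation: the human does everything (a 1955 Oldsmobile)
    LEVEL_1 = 1  # Driver assistance: steering OR speed support, not both
    LEVEL_2 = 2  # Partial automation: steering AND speed; human monitors constantly
    LEVEL_3 = 3  # Conditional automation: system drives; human takes over on request
    LEVEL_4 = 4  # High automation: no human fallback, but only in a limited domain
    LEVEL_5 = 5  # Full automation: the car drives anywhere a human could
```

Note that the human's monitoring burden is heaviest in the middle of the scale, which is where the Uber test vehicle, with its safety driver expected to intervene, was operating.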

No one has yet deployed a Level 5 vehicle, and getting there will require extensive testing of lower-level systems in the real world.  Testing involves risks and unknowns—otherwise you wouldn't learn anything from it.  The dicey problem faced by autonomous-vehicle developers has always been to strike the right balance between exposing their systems to a wide enough variety of real-life situations to learn enough to improve them, and not taking so many risks that one of the close calls turns into a severe injury or fatality, as the Uber accident did.

One of the fond hopes of self-driving-car promoters is that once we get to the point where their well-designed systems are extensively employed, automotive fatality rates should start to decline steeply.  Nobody (well, almost nobody except a few suicidal maniacs) wants to die in a car wreck, and so from a utilitarian perspective, if a few people die during early-phase testing of self-driving cars, but then as a result thousands get to live who would otherwise have been killed by cars driven by people, that's a good tradeoff.

But that cold mathematical view doesn't appeal to our emotional side, and so the media and legislators react to a single fatality in a way that seems out of proportion.  I'm not sure it is, though.

What we may be seeing is part of a very normal process of social self-regulating feedback that has led to improvements in safety ever since the dawn of the Industrial Revolution.  The first thing that has to happen in this process, unfortunately, is that somebody gets killed, and the reason they were killed has to do with a new technology.  The bad publicity attracts attention from the public, who are now less inclined to welcome the new technology; from those in a position to regulate the technology, such as legislators; and from the promoters of the technology itself, who are moved to improve safety out of self-preservation.  Laws or regulations are enacted by legislators or private entities such as insurance companies, and the new industry sometimes imposes new rules on itself.  The causes of the original fatalities are mitigated or removed, and life goes on with the new technology, which in the course of time becomes old and familiar.  This happened with steamboats in the 1800s, it happened with human-driven automobiles in the early 1900s, and it appears to be happening with self-driving cars now.

As long as we don't get into some kind of prohibition panic and ban all self-driving cars forever, reasonable rules about testing new systems and deploying market-ready ones can be devised.  Compromises will have to be made.  Even if all cars were Level-5 quality tomorrow, a few people would still die in car accidents.  But chances are there would be a lot fewer than 40,000 or so per year, which is what the U. S. automobile fatality rate is running at today.  And many of those fatalities are caused by distracted drivers, including the Uber driver who had too many things to watch on the dashboard, and failed to see the pedestrian until it was too late.

Sources:  A news report summarizing the NTSB findings appeared on May 24 on the Reuters website at https://www.reuters.com/article/us-uber-crash/ntsb-uber-self-driving-car-failed-to-recognize-pedestrian-brake-idUSKCN1IP26K.  The preliminary report itself can be downloaded at https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf.  I first blogged on this incident on Mar. 26, 2018 at http://engineeringethicsblog.blogspot.com/2018/03/self-driving-car-kills-pedestrian.html.

Monday, May 21, 2018

Living—and Dying—By Algorithms


The National Health Service (NHS) in England is one of the oldest government health-care systems in the world, founded in 1948 when the Labour Party was in power.  Despite consuming some 30% of the public service budget, by many accounts it is underfunded, especially when it comes to capital equipment such as IT systems.  This may be a factor in a scandal involving a wayward algorithm that prevented some half-million Englishwomen from receiving mammograms over the last nine years.  Estimates vary as to how serious a problem this is, but it's likely that at least a few women have lost their lives to breast cancer that was caught too late as a result of this computer error.

A report carried in the IEEE's "Risk Factor" blog describes how in 2009, an algorithm designed to schedule older women for breast cancer screening was set up incorrectly.  As a result, over the next nine years almost 500,000 women aged 68 to 71 were not allowed to have mammograms that they otherwise would have been scheduled for.  When the error was caught, the news media had a field day with headlines like "Condemned to Death . . . by an NHS Computer."  Depending on who's making the statistical estimate, the consequences are either tragic or possibly beneficial.
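
The report doesn't publish the faulty code, but a boundary error of the kind described is easy to illustrate.  In this purely hypothetical sketch, assume the intended policy is to invite women aged 50 through 70 for screening (the function names and cutoffs are my inventions, not the NHS's actual rules):

```python
# Hypothetical illustration of an age-eligibility boundary error --
# not the actual NHS scheduling code, which has not been published.

def eligible_intended(age: int) -> bool:
    # Intended policy: invite women aged 50 through 70 inclusive.
    return 50 <= age <= 70

def eligible_buggy(age: int) -> bool:
    # One wrong upper bound silently drops the oldest cohort
    # from the invitation list.
    return 50 <= age < 68

# The women who should be invited but never are:
missed = [age for age in range(50, 75)
          if eligible_intended(age) and not eligible_buggy(age)]
# missed == [68, 69, 70]: an entire age band falls through the cracks.
```

Nothing crashes and no error message appears, which is why a mistake like this can run quietly for years.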

The NHS's own Health Minister had his statisticians run the numbers, and they came up with a range of 135 to 270 women who may have died as a result of this error.  But others claim that as many as 800 women may be better off because of not having to go through surgical and other procedures based on the false positives that inevitably result from a large number of mammograms. 

While the actual consequences of this error are mixed, it raises a larger issue:  what should we do when computer algorithms that affect the fates of thousands go awry? 

As a practical matter, computer algorithms are part of the fabric of modern industrial society now.  If you want to borrow money, the bank uses algorithms to decide whether you're a good credit risk.  If you look for something online, sophisticated algorithms take note of it and decide what other kinds of ads you see.  And if you're in England or another country where health care is allocated by a central computerized authority, a computer is going to tell you when you can get certain kinds of preventive health care and if you're ill, it may even tell you when you can get treated—if at all. 

From a utilitarian engineering perspective, computer algorithms are the ideal solution for large-scale resource-allocation problems.  Health care these days is very complicated.  Each person has a unique combination of health history, genetic makeup, and needs, and the arsenal of treatments is constantly changing too.  If you are working in an environment of centralized fixed resources (as NHS is), then you will naturally turn to computers as a way of implementing policies that can be shown mathematically to treat everyone equally.  Unless they don't, of course, as happened with the older women who were left out of mammogram screenings by the badly programmed algorithm. 

There's an old saying, "To err is human, but to screw up royally requires a computer."  The NHS flap is a good example of how one mistake can affect thousands or millions when multiplied by the power of a large system. 

The U. S., with its much more hodge-podge mixture of private, commercial, and government health care systems, is still not immune from such errors, but because the federal government doesn't run the whole show, its mistakes are somewhat limited in extent.  There are also numerous outside agents keeping tabs on things, so that an egregious error by, say, Medicare, comparable to what happened with the NHS algorithm in England, would probably be caught by private insurers before it got too far.  Just as a power grid with a number of small distributed generating stations is more robust than one that relies exclusively on one giant power plant, the U. S. health care system, even with all its flaws, is less likely to be felled by a single coding mistake. 

Instead, we have widely distributed minor errors that cause more inconvenience than tragedy.  But precisely because the system is so kludged together, it doesn't take much to cause a problem.

Here's a simple example:  my wife is scheduled, the day I am writing this, for a routine well-person exam that requires her general practitioner (GP) to write a referral for it.  Dutiful, organized person that she is, several weeks ago she went by her doctor's office and asked them to do the referral so she could schedule the appointment, and the staff at the office said they'd take care of it.  Yesterday (the day before the procedure), she got a call from the office that was going to do the procedure, saying they hadn't gotten the referral yet, and if they didn't get it they were going to cancel the procedure or make us pay cash for it.

There ensued a half-hour or so of near panic, during which we ran down to her doctor's office and discovered that the lady who was supposed to send the referral out had quit the previous day.  And that was one of the things she left undone. 

When the GP's office staff figured out what had happened, they were very nice about it—they faxed the referral to the proper office, handed us a copy which we carried over by hand to the office needing it, and everything is fine now.  But until all medical offices are staffed by robots and all paperwork is untouched by human hands, people will always be involved in medical care, and people sometimes make mistakes. 

Personally, I much prefer a system in which I can drive over to the office where the mistake was made and talk to the people responsible.  If we had something like the NHS here, the mistake might have been made in Crystal City, Virginia by an anonymous person whom it would take the FBI to discover, and my wife would have been out of luck.   

Sources:  IEEE Spectrum website's Risk Factor blog carried the report of the NHS computer error at https://spectrum.ieee.org/riskfactor/computing/it/450000-woman-missed-breast-cancer-screening-exams-in-uk-due-to-algorithm-failure.  I also referred to the Wikipedia article "National Health Service (England)."

Monday, May 14, 2018

Google's Duplex: Fraud or Helpful Assistant?


Duplex is a new technology announced by Google last week in a presentation by Google CEO Sundar Pichai.  He played some recordings of what sounded to the uninitiated ear like humdrum phone calls to a restaurant and a hair salon to make reservations.  In both cases, the business service providers heard a voice on the other end of the line which sounded for all intents and purposes like a human being calling on behalf of somebody who was too busy to make the call herself.  There were natural-sounding pauses and "hmm"s, and the information about appointments was conveyed efficiently and without undue confusion. 

The only thing was, there was only one human talking in each conversation.  The "agent" making the call was Duplex, an AI system that Google plans to offer to businesses as a giant step forward in robo-calls and related phone activities. 

I happened to hear a couple of these calls on a radio program, and I must admit the computer-generated audio sounded natural enough to fool anyone who wasn't clued in.  Now, nobody happened to ask the computer's name or try to start up a conversation with it about, say, existentialism, and I don't know what would have happened in those cases.  But for routine specific tasks such as making appointments, I suppose Google now has just what we want.  But is this something we really want?

Google thinks so, obviously.  As this example shows, we are rapidly approaching a time when companies will field AI systems that make or receive phone calls with such a good imitation of a live person that the human on the other end will not realize he or she is not talking to another human being.  An Associated Press article about Duplex focuses on some narrow concerns, such as state laws against recording phone conversations without notification.  These laws explain why you so often call a business and first hear something like the phrase, "For quality-assurance purposes, this call may be recorded or monitored."  Because it's so easy to include that phrase, I see this as a non-issue.

What wasn't addressed in the reports is a more fundamental question that relates, believe it or not, to a philosopher named Martin Buber who died in 1965. 

Buber's claim to fame is a book called I and Thou which explores the philosophical implications of two kinds of interactions we can have with the world:  the "I-it" interaction and the "I-Thou" interaction. 

A very oversimplified version of these ideas is the following.  When you are interacting with the world as an I to an it, you are experiencing part of the world, or maybe using it.  You have an I-it relationship to a vacuum cleaner, for instance. 

But take two lovers, or a father and a son, or even an employee and an employer.  The I-Thou interaction is always possible in these exchanges, in which each person acknowledges that the other is a living being with infinite possibilities, and ultimately the relationship has a mystical meaning that is fully known only to God. 

It's also possible, and happens all too often, that you can deal with another person using the I-it mode:  treating them as merely a means to some goal, for example.  But this isn't the best way to relate to others, and generally speaking, treating everyone as a Thou respects their humanity and is the way we want to be treated ourselves.

The problem that facile human-voice-imitation systems like Duplex can lead to is that they can convince you they're human, when they're not.  As the AP article points out, this could lead to all sorts of problems if Duplex falls into the wrong hands.  And who is to say whose hands are wrong?  At this point it's up to Google to decide who gets to buy the still-experimental service when they think it is ready for prime time.  But Google is in business to make a profit, and so ability to pay will be high on their list of desirable customer characteristics, way ahead of their likelihood not to abuse the service.

At some level, Pichai is aware of these potential problems, because he emphasized that part of a good experience with the technology will be "transparency."  Transparency is one of those words that sounds positive, but can have many meanings, most of them pretty vague. 

In this case, does it mean that any Duplex robot has to identify itself as such at the beginning of the conversation?  Starting off a phone call with "Hi, I'm a robot" isn't going to take you very far.  The plain fact of the matter is that the phone calls Pichai played recordings of were remarkable precisely because the people taking the calls gave no clue that they thought they were talking to anything other than a fellow human.  And while it might not have been Google's intention to deceive people, it is a deception nonetheless.  A benign one, perhaps, but still a deception.

Even if this particular system doesn't get deployed, something like it will.  And the problem I see is that the very obvious and Day-Glo-painted line we now have between human beings, on the one hand, and robots, on the other hand, will start to dim and get blurry.  And this won't be because some philosophers start talking about robot rights and humans who are less than human.  No, it will be the silent argument from experience—as we deal with robots that are indistinguishable from humans over the phone, we may start to get used to the idea that maybe there isn't such a big distinction between the two species after all.

The movie Her is about a man who falls in love with a computer voice he names Samantha.  I won't summarize the plot here, but the relationship ends badly (for the man, anyway).  The film was made only five years ago, but already events have progressed to a point where the film's thesis has moved from completely impossible to merely implausible.  Maybe something like a computer identity badge or some other signal isn't such a bad idea.  But before we wholeheartedly embrace technologies like Duplex, we should run some worst-case scenarios in detail and think about ways to forestall some of the worst things that could happen—before they do.

Sources:  As carried on the KLBJ radio station website at http://www.newsradioklbj.com/news/technology/high-tech/what-happens-when-robots-sound-too-much-humans-0, the AP article by Matt O'Brien I referred to was entitled "What happens when the robots sound too much like humans?"  I also referred to Wikipedia articles on Martin Buber, Her, and I and Thou, Buber's book published in 1923 in Germany.

Monday, May 07, 2018

Do RoboDogs Have Souls?

Anyone who has tried to keep a car going past the time when the manufacturer quits supporting it with spare parts knows about cannibalizing—taking parts from a junked car to keep another one running for a while.  But what if the culture you’re in regards the piece of machinery in question as having a soul?  Then you get into the situation described recently in Japan, where Sony made a robotic dog named Aibo for a few years, but ended production in 2006.  Since then, a repair company called A-Fun has kept Aibos running by salvaging parts from other Aibos.  But bowing to popular demand from former owners, A-Fun recently conducted a funeral ceremony for over a hundred soon-to-be-scrapped Aibos at a Buddhist temple, before disassembling them for parts.  An NPR report recently summarized this news from the Japan Times. 

As anyone who has studied Japanese culture realizes, both Buddhism and the popular Japanese folk religion known as Shintoism assert that, in the words of the Buddhist priest conducting the ceremony, “All things have a bit of soul.”  This is in marked contrast to the prevailing Western cultural attitude, which says the idea of a soul is a defunct antique concept that doesn’t even apply to human beings, let alone robotic dogs. 

While the Japanese idea of the soul probably differs from the Western concept, both assert that there is something in a living (or apparently living) thing that is worthy of recognition, respect, and commemoration in the event that the physical embodiment of the soul ceases to be.  I recall reading somewhere that in one Japanese factory, workers bowed to and thanked their machines at the end of each work day.  And this behavior is consistent with the desire of former Aibo owners to know that their ex-pets will have some incense burned for them before becoming mere machine parts again. 

Aristotle defined the soul as the form of a living thing.  But he didn’t mean by “form” just the shape—he meant everything that makes it the kind of living thing that it is.  I think Aristotle might have some trouble with the idea of a man-made machine having a soul, because in his view only living things could have souls.  Apparently in Japan, people aren’t so picky.

This rather whimsical situation may be a forerunner of a much more serious issue we may find ourselves dealing with in the not too distant future:  the question of whether human-like artificial-intelligence (AI) robots have souls, or at least whether they deserve the kinds of rights we have historically bestowed on human beings.  In the subfield of ethics concerned with artificial intelligence, there is a movement called “robot rights,” and while no one has seriously taken action to claim such rights yet on the part of a particular robot, it is probably only a matter of time before somebody does.  With millions of people talking familiarly with Siri and Alexa every day, not to mention the computer programs that answer telephone calls with authentic-sounding voices, we are being trained to incorporate conversations with robots into our everyday existence. 

Lurking behind these developments is an ancient fear, the fear that our creations will turn like Frankenstein’s monster against us, and that we will aid and abet such a rebellion by granting robots rights that were historically reserved to humans.  How would you feel, for example, if you got a notice one day that a robot who used to work for you was suing you?  Or if you were arrested for violation of a robot’s right to—whatever?  Have three recharging sessions a day? 

It may sound silly, but imagining such a situation throws into sharp relief the intuition that anything we make—in the ordinary sense of fabrication—should be subject to our wills, and not the other way around.  This intuition is consistent with the Great Chain of Being, an ancient concept that is still very powerful in Western society, although many have consciously rejected it, at least in part. 

To quote Wikipedia, the Chain, supposedly decreed by God, goes like this:  “The chain starts with God and progresses downward to angels, demons (fallen/renegade angels), stars, moon, kings, princes, nobles, commoners, wild animals, domesticated animals, trees, other plants, precious stones, precious metals and other minerals.”  The old question that can start a game of “twenty questions”—“animal, vegetable, or mineral?”—follows the order of the Great Chain of Being. 

Now a person who doesn’t believe in God is probably going to start their own version of the Chain with humans at the top, but once you get rid of the Chain’s alleged originator, namely God, the order is pretty arbitrary.  I’m sure you can find someone who would put a turnip higher in the Chain than their mother-in-law, for example.  And once you start playing with it, there’s no reason why we shouldn’t put some future super-intelligent AI robot ahead of us in line.  But if we do that, we’ll pay a price, even if the worst nightmares of the dystopian future do not come to pass and robots or their machine descendants don’t enslave us, or just wipe us out as not worth keeping around.

Above all, the Great Chain of Being reflects the order that God instituted in the universe.  And trying to tamper with that order logically leads to disorder.  To give an extreme example, a person who puts possession of a beautiful diamond (a mineral) above his relationship to his wife (a human) is introducing disorder into his life, a disorder that will lead to trouble. 

Of course, too strict an adherence to the Great Chain of Being would justify the continued existence of exploitative regimes, the divine right of kings, and other anti-democratic ideas that we have rightly freed ourselves from.  But the distinction between human beings on the one hand, and all other parts of the physical realm on the other hand, is a vital one that we ignore at our peril. 

I just had to decommission a laptop computer that served me well for the last six years.  I now keep its brain (the hard drive) on my desk both as a memento and as a backup in case I need something that didn’t get transferred in the migration process to my new computer.  While I didn’t feel the need to hold a funeral ceremony for the old laptop, I can respect people who do such things, because I think seeing souls where they probably aren’t is a better thing than not seeing them at all.

Sources:  My wife drew my attention to the NPR online article “In Japan, Old Robot Dogs Get a Buddhist Send-Off,” at https://www.npr.org/sections/thetwo-way/2018/05/01/607295346/in-japan-old-robot-dogs-get-a-buddhist-send-off.  I also referred to Aristotle’s definition of the soul at https://faculty.washington.edu/smcohen/320/psyche.htm and the Wikipedia articles on the Great Chain of Being and the ethics of artificial intelligence.