Monday, May 28, 2018

Human And Autonomous Driving: A Deadly Mix?


"Who's in charge here?"  If people in an organization can't give a clear answer to that question, chances are the organization is in trouble.  And something along those lines may apply to cars as well as to human organizations.  That's the lesson we can draw from the preliminary report released by the U. S. National Transportation Safety Board (NTSB) last Thursday, May 24, concerning the fatal collision between a pedestrian and a semi-autonomous vehicle operated by Uber in Tempe, Arizona last March 18. 

To summarize the accident, around 9:39 PM on that Sunday night, Elaine Herzberg chose to walk her bicycle across a divided street between crosswalks, in a section of road that was poorly illuminated by streetlights.  She was not wearing reflective clothing and her bicycle had no side reflectors.  She apparently did not see the oncoming car until just before the collision.  Subsequent toxicology tests showed traces of marijuana and methamphetamine in her system.  Regardless of her condition, it's the responsibility of drivers (or the car's computer) to look out for the behavior of all pedestrians, even those who aren't behaving with normal alertness.  And if this responsibility is split or ambiguous, trouble is brewing.

An in-cab video released after the accident shows that the car's driver was looking at something below the windshield until she saw the pedestrian just before the collision.  In my earlier blog on this incident, I mistakenly speculated that she was looking at her cellphone instead of the road, but it turns out she was monitoring a display of the self-driving car's behavior, as part of what was basically a research project in which the driver would take the car out on prescribed routes to test its systems. 

The most informative piece of evidence in the NTSB preliminary report concerns the state the car was in just before the crash.  The Volvo was equipped both with the latest Volvo-engineered safety systems, including a collision-avoidance system, and with Uber-installed computer control.  Probably to avoid interference between the two systems, the Volvo safety controls were disabled whenever the Uber computer was set to operate the vehicle.  The Uber computer system is able to determine when emergency (hard) braking maneuvers are needed, and is capable of executing them.  But at the time of the accident, the emergency braking function was disabled, as Uber had found it led to erratic vehicle behavior.  The operator was apparently aware of this setting and of her responsibility to take emergency action as needed, in addition to monitoring the vehicle's operations on the screen and "tagging events of interest for subsequent review." 
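To make the hand-off problem concrete, here is a minimal, purely hypothetical sketch in Python (not Uber's or Volvo's actual software; all names and flags are invented for illustration) of the kind of mode arbitration the report describes, in which engaging the developmental self-driving system switches out the factory safety systems and leaves hard braking to the human operator:

    from dataclasses import dataclass

    @dataclass
    class VehicleModes:
        """Hypothetical mode flags, loosely following the NTSB's description."""
        uber_computer_engaged: bool            # developmental self-driving system active
        oem_safety_enabled: bool = True        # factory collision-avoidance system
        uber_emergency_braking: bool = False   # hard braking disabled in computer mode

    def who_brakes_in_an_emergency(modes: VehicleModes) -> str:
        """Return which agent is expected to perform an emergency stop."""
        if not modes.uber_computer_engaged:
            # With the developmental system off, the factory systems are in charge.
            return "factory collision-avoidance system"
        # Engaging the developmental system switches the factory safety systems out.
        modes.oem_safety_enabled = False
        if modes.uber_emergency_braking:
            return "self-driving system"
        # Neither computer will brake hard, so everything falls to the human
        # operator, who is also watching a screen and tagging events of interest.
        return "human safety operator"

    print(who_brakes_in_an_emergency(VehicleModes(uber_computer_engaged=True)))
    # prints: human safety operator

The point of the sketch is not the code but the gap it exposes: the answer to "who's in charge here?" changes with a configuration setting, and the one agent left holding the emergency-braking job is also the one loaded down with monitoring tasks.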

People can do only so much at once.  Abundant research has shown that too many distractions degrade a driver's ability to respond to unexpected emergencies.  Uber was basically running an experiment that required significant driver attention while its vehicles operated on public roads.  It didn't take long for a driver distracted by monitoring tasks to cross paths with a pedestrian whose own attention was clouded.  The result was a tragedy, the suspension of Uber's experimental autonomous-car program, and numerous calls from state and federal lawmakers for a slowdown in the deployment of autonomous vehicles.

The final report from NTSB on this fatal accident will probably not come out until next year.  But their preliminary report shows how things can go wrong tragically under the current regime of what are called Level 2 and Level 3 autonomous driving systems.  The ranking system runs through six levels, from Level 0, which is what I can do in our 1955 Oldsmobile (no computer within miles), to the hypothetical Level 5, the yet-to-be-realized situation in which the self-driving car performs absolutely all driving functions and the passenger's participation is limited to telling the car where to go when he or she gets in. 

No one has yet deployed a Level 5 vehicle, and getting there will require extensive testing of lower-level systems in the real world.  Testing involves risks and unknowns—otherwise you wouldn't learn anything from it.  The dicey problem faced by autonomous-vehicle developers has always been to strike the right balance between exposing their systems to a wide enough variety of real-life situations to learn enough to improve them, and not taking so many risks that one of the close calls turns into a severe injury or fatality, as the Uber accident did.

One of the fond hopes of self-driving-car promoters is that once we get to the point where their well-designed systems are extensively employed, automotive fatality rates should start to decline steeply.  Nobody (well, almost nobody except a few suicidal maniacs) wants to die in a car wreck, and so from a utilitarian perspective, if a few people die during early-phase testing of self-driving cars, but then as a result thousands get to live who would otherwise have been killed by cars driven by people, that's a good tradeoff.

But that cold mathematical view doesn't appeal to our emotional side, and so the media and legislators react to a single fatality in a way that seems out of proportion.  I'm not sure it is, though.

What we may be seeing is part of a very normal process of social self-regulating feedback that has led to improvements in safety ever since the dawn of the Industrial Revolution.  The first thing that has to happen in this process, unfortunately, is that somebody gets killed.  And the reason they were killed has to do with a new technology.  The bad publicity attracts the attention of the public, which is now less inclined to welcome the new technology; of those in a position to regulate it, such as legislators; and of the promoters of the technology itself, who are moved to improve safety out of self-preservation.  Legislators enact laws or regulations, private entities such as insurance companies impose requirements of their own, and the new industry sometimes adopts rules for itself.  The causes of the original fatalities are mitigated or removed, and life goes on with the new technology, which in the course of time becomes old and familiar.  This happened with steamboats in the 1800s, it happened with human-driven automobiles in the early 1900s, and it appears to be happening with self-driving cars now.

As long as we don't get into some kind of prohibition panic and ban all self-driving cars forever, reasonable rules about testing new systems and deploying market-ready ones can be devised.  Compromises will have to be made.  Even if all cars were Level-5 quality tomorrow, a few people would still die in car accidents.  But chances are there would be a lot fewer than 40,000 or so per year, which is what the U. S. automobile fatality rate is running at today.  And many of those fatalities are caused by distracted drivers, including the Uber driver who had too many things to watch on the dashboard, and failed to see the pedestrian until it was too late.

Sources:  A news report summarizing the NTSB findings appeared on May 24 on the Reuters website at https://www.reuters.com/article/us-uber-crash/ntsb-uber-self-driving-car-failed-to-recognize-pedestrian-brake-idUSKCN1IP26K.  The preliminary report itself can be downloaded at https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf.  I first blogged on this incident on Mar. 26, 2018 at http://engineeringethicsblog.blogspot.com/2018/03/self-driving-car-kills-pedestrian.html.

Monday, May 21, 2018

Living—and Dying—By Algorithms


The National Health Service (NHS) in England is one of the oldest government health-care systems in the world, founded in 1948 when the Labour Party was in power.  Despite consuming some 30% of the public service budget, by many accounts it is underfunded, especially when it comes to capital equipment such as IT systems.  This may be a factor in a scandal involving a wayward algorithm that prevented some half-million Englishwomen from receiving mammograms over the past nine years.  Estimates vary as to how serious a problem this is, but it's likely that at least a few women have lost their lives to breast cancer that was caught too late as a result of this computer error.

A report carried in the IEEE's "Risk Factor" blog describes how in 2009, an algorithm designed to schedule older women for breast cancer screening was set up incorrectly.  As a result, over the next nine years almost 500,000 women aged 68 to 71 were never invited for mammograms that they otherwise would have been scheduled for.  When the error was caught, the news media had a field day with headlines like "Condemned to Death . . . by an NHS Computer."  Depending on who's making the statistical estimate, the consequences are either tragic or possibly beneficial.
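The public reports don't spell out the exact coding mistake, but a purely hypothetical sketch (the ages and cutoffs below are illustrative, not the NHS's actual rules or code) shows how a one-character error in an eligibility check can silently drop a whole cohort, year after year:

    def invite_for_final_screening(age: int) -> bool:
        # Intended (illustrative) policy: invite women aged 50 through 70 inclusive.
        return 50 <= age <= 70

    def invite_for_final_screening_buggy(age: int) -> bool:
        # A single misplaced comparison quietly excludes the oldest group.
        return 50 <= age < 68

    # The two versions agree for most ages, so routine spot checks may not
    # notice anything wrong; only the women near the upper boundary are dropped.
    for age in (55, 67, 68, 69, 70):
        print(age, invite_for_final_screening(age), invite_for_final_screening_buggy(age))

Nothing crashes and no error message appears; the only symptom is statistical, an age band whose invitations quietly stop going out.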

The NHS's own Health Minister had his statisticians run the numbers, and they came up with a range of 135 to 270 women who may have died as a result of this error.  But others claim that as many as 800 women may be better off because of not having to go through surgical and other procedures based on the false positives that inevitably result from a large number of mammograms. 

While the actual consequences of this error are uncertain, it raises a larger issue:  what should we do when computer algorithms that affect the fates of thousands go awry? 

As a practical matter, computer algorithms are part of the fabric of modern industrial society now.  If you want to borrow money, the bank uses algorithms to decide whether you're a good credit risk.  If you look for something online, sophisticated algorithms take note of it and decide what other kinds of ads you see.  And if you're in England or another country where health care is allocated by a central computerized authority, a computer is going to tell you when you can get certain kinds of preventive health care, and if you're ill, it may even tell you when you can get treated—if at all. 

From a utilitarian engineering perspective, computer algorithms are the ideal solution for large-scale resource-allocation problems.  Health care these days is very complicated.  Each person has a unique combination of health history, genetic makeup, and needs, and the arsenal of treatments is constantly changing too.  If you are working in an environment of centralized fixed resources (as NHS is), then you will naturally turn to computers as a way of implementing policies that can be shown mathematically to treat everyone equally.  Unless they don't, of course, as happened with the older women who were left out of mammogram screenings by the badly programmed algorithm. 

There's an old saying, "To err is human, but to screw up royally requires a computer."  The NHS flap is a good example of how one mistake can affect thousands or millions when multiplied by the power of a large system. 

The U. S., with its hodge-podge of private, commercial, and government health-care systems, is still not immune from such errors, but because the federal government doesn't run the whole show, its mistakes are somewhat limited in extent.  There are also numerous outside agents keeping tabs on things, so that an egregious error by, say, Medicare, comparable to what happened with the NHS algorithm in England, would probably be caught by private insurers before it got too far.  Just as a power grid with a number of small distributed generating stations is more robust than one that relies exclusively on one giant power plant, the U. S. health-care system, even with all its flaws, is less likely to be felled by a single coding mistake. 

Instead, we have widely distributed minor errors that cause more inconvenience than tragedy.  But precisely because the system is so kludged together, it doesn't take much to cause a problem.

Here's a simple example:  on the day I am writing this, my wife is scheduled for a routine well-person exam that requires a referral from her general practitioner (GP).  Dutiful, organized person that she is, several weeks ago she went by her doctor's office and asked them to do the referral so she could schedule the appointment, and the staff at the office said they'd take care of it.  Yesterday (the day before the procedure), she got a call from the office that was going to do the procedure, saying they hadn't gotten the referral yet, and that if they didn't get it they were going to cancel the procedure or make us pay cash for it.

There ensued a half-hour or so of near panic, during which we ran down to her doctor's office and discovered that the lady who was supposed to send the referral out had quit the previous day.  And that was one of the things she left undone. 

When the GP's office staff figured out what had happened, they were very nice about it—they faxed the referral to the proper office, handed us a copy which we carried over by hand to the office needing it, and everything is fine now.  But until all medical offices are staffed by robots and all paperwork is untouched by human hands, people will always be involved in medical care, and people sometimes make mistakes. 

Personally, I much prefer a system in which I can drive over to the office where the mistake was made and talk to the people responsible.  If we had something like the NHS here, the mistake might have been made in Crystal City, Virginia by an anonymous person whom it would take the FBI to discover, and my wife would have been out of luck.   

Sources:  IEEE Spectrum website's Risk Factor blog carried the report of the NHS computer error at https://spectrum.ieee.org/riskfactor/computing/it/450000-woman-missed-breast-cancer-screening-exams-in-uk-due-to-algorithm-failure.  I also referred to the Wikipedia article "National Health Service (England)."

Monday, May 14, 2018

Google's Duplex: Fraud or Helpful Assistant?


Duplex is a new technology announced by Google last week in a presentation by Google CEO Sundar Pichai.  He played some recordings of what sounded to the uninitiated ear like humdrum phone calls to a restaurant and a hair salon to make reservations.  In both cases, the business service providers heard a voice on the other end of the line which sounded to all intents and purposes like a human being calling on behalf of somebody who was too busy to make the call herself.  There were natural-sounding pauses, "hmm"s, and the information about appointments was conveyed efficiently and without undue confusion. 

The only thing was, there was only one human talking in each conversation.  The "agent" making the call was Duplex, an AI system that Google plans to offer to businesses as a giant step forward in robo-calls and related phone activities. 

I happened to hear a couple of these calls on a radio program, and I must admit the computer-generated audio sounded natural enough to fool anyone who wasn't clued in.  Now, nobody happened to ask the computer's name or try to start up a conversation with it about, say, existentialism, and I don't know what would have happened in those cases.  But for routine specific tasks such as making appointments, I suppose Google now has just what we want.  But is this something we really want?

Google thinks so, obviously.  As this example shows, we are rapidly approaching a time when companies will field AI systems that make or receive phone calls with such a good imitation of a live person that the live person on the other end will not realize that he or she is not talking to another human being.  An Associated Press article about Duplex focuses on some narrow concerns, such as state laws against recording phone conversations without notification.  These laws explain why you so often call a business and first hear something like the phrase, "For quality-assurance purposes, this call may be recorded or monitored."  Because it's so easy to include that phrase, I see this as a non-issue.

What wasn't addressed in the reports is a more fundamental question that relates, believe it or not, to a philosopher named Martin Buber who died in 1965. 

Buber's claim to fame is a book called I and Thou which explores the philosophical implications of two kinds of interactions we can have with the world:  the "I-it" interaction and the "I-Thou" interaction. 

A very oversimplified version of these ideas is the following.  When you are interacting with the world as an I to an it, you are experiencing part of the world, or maybe using it.  You have an I-it relationship to a vacuum cleaner, for instance. 

But take two lovers, or a father and a son, or even an employee and an employer.  The I-Thou interaction is always possible in these exchanges, in which each person acknowledges that the other is a living being with infinite possibilities, and ultimately the relationship has a mystical meaning that is fully known only to God. 

It's also possible, and happens all too often, that you can deal with another person using the I-it mode:  treating them as merely a means to some goal, for example.  But this isn't the best way to relate to others, and generally speaking, treating everyone as a Thou respects their humanity and is the way we want to be treated ourselves.

The problem with facile human-voice-imitation systems like Duplex is that they can convince you they're human when they're not.  As the AP article points out, this could lead to all sorts of problems if Duplex falls into the wrong hands.  And who is to say whose hands are wrong?  At this point it's up to Google to decide who gets to buy the still-experimental service when they think it is ready for prime time.  But Google is in business to make a profit, and so a customer's ability to pay will be high on its list of desirable characteristics, way ahead of the customer's likelihood not to abuse the service.

At some level, Pichai is aware of these potential problems, because he emphasized that part of a good experience with the technology will be "transparency."  Transparency is one of those words that sounds positive, but can have many meanings, most of them pretty vague. 

In this case, does it mean that any Duplex robot has to identify itself as such at the beginning of the conversation?  Starting off a phone call with, "Hi, I'm a robot," isn't going to take you very far.  The plain fact of the matter is that the phone calls Pichai played recordings of were remarkable precisely because the people taking the calls gave no clue that they thought they were talking to anything other than a fellow human.  And while it might not have been Google's intention to deceive people, it is a deception nonetheless.  A benign one, perhaps, but still a deception.

Even if this particular system doesn't get deployed, something like it will.  And the problem I see is that the very obvious and Day-Glo-painted line we now have between human beings, on the one hand, and robots, on the other hand, will start to dim and get blurry.  And this won't be because some philosophers start talking about robot rights and humans who are less than human.  No, it will be the silent argument from experience—as we deal with robots that are indistinguishable from humans over the phone, we may start to get used to the idea that maybe there isn't such a big distinction between the two species after all.

The movie Her is about a man who falls in love with a computer voice named Samantha.  I won't summarize the plot here, but the relationship ends badly (for the man, anyway).  The film was made only five years ago, but already events have progressed to a point where the film's thesis has moved from completely impossible to merely implausible.  Maybe something like a computer identity badge or some other signal isn't such a bad idea.  But before we wholeheartedly embrace technologies like Duplex, we should run some worst-case scenarios in detail and think about ways to forestall some of the worst things that could happen—before they do.

Sources:  As carried on the KLBJ radio station website at http://www.newsradioklbj.com/news/technology/high-tech/what-happens-when-robots-sound-too-much-humans-0, the AP article by Matt O'Brien I referred to was entitled "What happens when the robots sound too much like humans?"  I also referred to Wikipedia articles on Martin Buber, Her, and I and Thou, Buber's book published in 1923 in Germany.

Monday, May 07, 2018

Do RoboDogs Have Souls?

Anyone who has tried to keep a car going past the time when the manufacturer quits supporting it with spare parts knows about cannibalizing—taking parts from a junked car to keep another one running for a while.  But what if the culture you’re in regards the piece of machinery in question as having a soul?  Then you get into the situation described recently in Japan, where Sony made a robotic dog named Aibo for a few years, but ended production in 2006.  Since then, a repair company called A-Fun has kept Aibos running by salvaging parts from other Aibos.  But bowing to popular demand from former owners, A-Fun recently conducted a funeral ceremony for over a hundred soon-to-be-scrapped Aibos at a Buddhist temple before disassembling them for parts.  An NPR report recently summarized this news from the Japan Times. 

As anyone who has studied the Japanese culture realizes, both Buddhism and the popular Japanese folk religion known as Shintoism assert that, in the words of the Buddhist priest conducting the ceremony, “All things have a bit of soul.”  This is in marked contrast to the prevailing Western cultural attitude toward souls, which says the idea of a soul is a defunct antique concept that doesn’t even apply to human beings, let alone robotic dogs. 

While the Japanese idea of the soul probably differs from the Western concept, both assert that there is something in a living (or apparently living) thing that is worthy of recognition, respect, and commemoration in the event that the physical embodiment of the soul ceases to be.  I recall reading somewhere that in one Japanese factory, workers bowed to and thanked their machines at the end of each work day.  And this behavior is consistent with the desire of former Aibo owners to know that their ex-pets will have some incense burned for them before becoming mere machine parts again. 

Aristotle defined the soul as the form of a living thing.  But he didn’t mean by “form” just the shape—he meant everything that makes it the kind of living thing that it is.  I think Aristotle might have some trouble with the idea of a man-made machine having a soul, because in his view only living things could have souls.  Apparently in Japan, people aren’t so picky.

This rather whimsical situation may be a forerunner of a much more serious issue we may find ourselves dealing with in the not too distant future:  the question of whether human-like artificial-intelligence (AI) robots have souls, or at least whether they deserve the kinds of rights we have historically bestowed on human beings.  In the subfield of ethics concerned with artificial intelligence, there is a movement called “robot rights,” and while no one has seriously taken action to claim such rights yet on the part of a particular robot, it is probably only a matter of time before somebody does.  With millions of people talking familiarly with Siri and Alexa every day, not to mention the computer programs that answer telephone calls with authentic-sounding voices, we are being trained to incorporate conversations with robots into our everyday existence. 

Lurking behind these developments is an ancient fear, the fear that our creations will turn like Frankenstein’s monster against us, and that we will aid and abet such a rebellion by granting robots rights that were historically reserved to humans.  How would you feel, for example, if you got a notice one day that a robot who used to work for you was suing you?  Or if you were arrested for violation of a robot’s right to—whatever?  Have three recharging sessions a day? 

It may sound silly, but imagining such a situation throws into sharp relief the intuition that anything we make—in the ordinary sense of fabrication—should be subject to our wills, and not the other way around.  This intuition is consistent with the Great Chain of Being, an ancient concept that is still very powerful in Western society, although many have consciously rejected it, at least in part. 

To quote Wikipedia, the Chain, supposedly decreed by God, goes like this:  “The chain starts with God and progresses downward to angels, demons (fallen/renegade angels), stars, moon, kings, princes, nobles, commoners, wild animals, domesticated animals, trees, other plants, precious stones, precious metals and other minerals.”  The old question that can start the “twenty questions” game, “animal, vegetable, or mineral?”, observes the order of the Great Chain of Being. 

Now a person who doesn’t believe in God is probably going to start their own version of the Chain with humans at the top, but once you get rid of the Chain’s alleged originator, namely God, the order is pretty arbitrary.  I’m sure you can find someone who would put a turnip higher in the Chain than their mother-in-law, for example.  And once you start playing with it, there’s no reason why we shouldn’t put some future super-intelligent AI robot ahead of us in line.  But if we do that, we’ll pay a price, even if the worst nightmares of the dystopian future do not come to pass and robots or their machine descendants don’t enslave us, or just wipe us out as not worth keeping around.

Above all, the Great Chain of Being reflects the order that God instituted in the universe.  And trying to tamper with that order logically leads to disorder.  To give an extreme example, a person who puts possession of a beautiful diamond (a mineral) above his relationship to his wife (a human) is introducing disorder into his life, a disorder that will lead to trouble. 

Of course, too strict an adherence to the Great Chain of Being would justify the continued existence of exploitative regimes, the divine right of kings, and other anti-democratic ideas that we have rightly freed ourselves from.  But the distinction between human beings on the one hand, and all other parts of the physical realm on the other hand, is a vital one that we ignore at our peril. 

I just had to decommission a laptop computer that served me well for the last six years.  I now keep its brain (the hard drive) on my desk both as a memento and as a backup in case I need something that didn’t get transferred in the migration process to my new computer.  While I didn’t feel the need to hold a funeral ceremony for the old laptop, I can respect people who do such things, because I think seeing souls where they probably aren’t is a better thing than not seeing them at all.

Sources:  My wife drew my attention to the NPR online article “In Japan, Old Robot Dogs Get a Buddhist Send-Off,” at https://www.npr.org/sections/thetwo-way/2018/05/01/607295346/in-japan-old-robot-dogs-get-a-buddhist-send-off.  I also referred to Aristotle’s definition of the soul at https://faculty.washington.edu/smcohen/320/psyche.htm and the Wikipedia articles on the Great Chain of Being and the ethics of artificial intelligence.