"Who's in charge here?" If people in an organization can't give a
clear answer to that question, chances are the organization is in trouble. And something along those lines may apply to
cars as well as to human organizations.
That's the lesson we can draw from the preliminary report released by
the U.S. National Transportation Safety Board (NTSB) last Thursday, May 24,
concerning the fatal collision between a pedestrian and a semi-autonomous
vehicle operated by Uber in Tempe, Arizona, last March 18.
To summarize the accident, around 9:39 PM on that Sunday
night, Elaine Herzberg chose to walk her bicycle across a divided street in
between crosswalks, in a section of road that was poorly illuminated by
streetlights. She was not wearing
reflective clothing and her bicycle had no side reflectors. She apparently did not see the oncoming car
until just before the collision.
Subsequent toxicology tests showed traces of marijuana and
methamphetamine in her system. Regardless
of her condition, it's the responsibility of drivers (or the car's computer) to
look out for the behavior of all pedestrians, even those who aren't behaving
with normal alertness. And if this
responsibility is split or ambiguous, trouble is brewing.
An in-cab video released after the accident shows that the
car's driver was looking at something below the windshield until she
saw the pedestrian just before the accident.
In my earlier blog post on this incident, I mistakenly speculated that she
was looking at her cellphone instead of the road, but it turns out she was
monitoring a display of the self-driving car's behavior, as part of what was basically
a research project in which the driver would take the car out on prescribed
routes to test its systems.
The most informative piece of evidence in the NTSB
preliminary report concerns the state the car was in just before the
crash. The Volvo was equipped with both
the latest Volvo-engineered safety systems, including a collision-avoidance
system, and an Uber-installed computer control system. Probably to avoid interference between the
two systems, the Volvo safety controls were disabled when the Uber computer was
set to operate the vehicle. The Uber computer
system is able to determine when emergency (hard) braking maneuvers are needed,
and is capable of executing them. But at
the time of the accident, the emergency braking function was disabled, as Uber
found it had led to erratic behavior.
The operator was apparently aware of this setting and of her
responsibility to take emergency actions as needed, in addition to monitoring
the vehicle's operations on the screen and "tagging events of interest for
subsequent review."
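To see how lopsided that division of labor was, here is a minimal sketch in Python of who ends up holding the emergency-braking job under the configuration the report describes. The names (ControlMode, emergency_braking_authority, and so on) are my own inventions for illustration, not anything from Uber's or Volvo's actual software:

    from enum import Enum, auto

    class ControlMode(Enum):
        MANUAL = auto()          # human driving; Volvo factory safety systems active
        UBER_COMPUTER = auto()   # Uber self-driving system engaged

    def emergency_braking_authority(mode, uber_emergency_braking_enabled):
        """Return who is responsible for emergency braking in a given mode."""
        if mode is ControlMode.MANUAL:
            # Factory collision-avoidance backs up the human driver.
            return "human driver, backed by Volvo collision avoidance"
        # Per the preliminary report, the Volvo systems were disabled in this mode.
        if uber_emergency_braking_enabled:
            return "Uber self-driving system"
        # The configuration at the time of the crash: the human operator alone,
        # who was also monitoring a display and tagging events of interest.
        return "human operator only"

    # The state the report describes just before the crash:
    print(emergency_braking_authority(ControlMode.UBER_COMPUTER, False))
    # -> human operator only

Stated that baldly, the ambiguity is hard to miss: in the very mode where the computer does the driving, the only agent empowered to brake hard is the person whose attention is most divided.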
People can do only so much at once. Abundant research has shown that too many
distractions degrade a driver's ability to respond to unexpected
emergencies. Uber was basically running
an experiment requiring significant driver attention while operating its
vehicles on public roads. It didn't take
long for a driver distracted by monitoring tasks and a pedestrian whose
attention was clouded to come together in a tragedy, followed by the suspension
of Uber's experimental autonomous-car program and numerous calls from state and
federal lawmakers for a slowdown in the deployment of autonomous vehicles.
The final report from NTSB on this fatal accident will
probably not come out until next year.
But their preliminary report shows how things can go wrong tragically
under the current regime of what are called Level 2 and Level 3 autonomous
driving systems. The six-level ranking
system goes from Level 0, which is what I can do in our 1955 Oldsmobile (no
computer within miles), to the hypothetical Level 5, the yet-to-be-realized
situation in which the self-driving car performs absolutely all driving
functions and the passenger's participation is limited to telling the car where
to go when he or she gets in.
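For readers who like their taxonomies spelled out, here is a rough sketch of those six levels as a Python enumeration. The one-line glosses are my own paraphrase of the commonly used SAE J3016 definitions, not official wording:

    from enum import IntEnum

    class DrivingAutomationLevel(IntEnum):
        """Rough paraphrase of the six SAE J3016 levels (my glosses, not official text)."""
        L0_NO_AUTOMATION = 0   # the human does everything (the 1955 Oldsmobile)
        L1_ASSISTANCE = 1      # one task assisted, e.g. adaptive cruise control
        L2_PARTIAL = 2         # steering and speed automated; human must watch the road
        L3_CONDITIONAL = 3     # car drives itself, but human must take over on request
        L4_HIGH = 4            # no human fallback needed, within a limited domain
        L5_FULL = 5            # hypothetical: the car handles everything, everywhere

    def human_is_the_fallback(level):
        # Through Level 3, a human must be ready to take control on short notice;
        # the Uber test car, driving itself with its emergency braking disabled,
        # was effectively in this camp.
        return level <= DrivingAutomationLevel.L3_CONDITIONAL

By that reckoning, the Uber test car sat squarely in the territory where a human fallback is still indispensable.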
No one has yet deployed a Level 5 vehicle, and getting there
will require extensive testing of lower-level systems in the real world. Testing involves risks and unknowns—otherwise
you wouldn't learn anything from it. The
dicey problem faced by autonomous-vehicle developers has always been to strike
the right balance between exposing their systems to a wide enough variety of
real-life situations to learn enough to improve them, and not taking so many
risks that one of the close calls turns into a severe injury or fatality, as
happened in the Uber accident.
One of the fond hopes of self-driving-car promoters is that
once we get to the point where their well-designed systems are extensively
employed, automotive fatality rates should start to decline steeply. Nobody (well, almost nobody except a few
suicidal maniacs) wants to die in a car wreck, and so from a utilitarian
perspective, if a few people die during early-phase testing of self-driving
cars, but then as a result thousands get to live who would otherwise have been
killed by cars driven by people, that's a good tradeoff.
But that cold mathematical view doesn't appeal to our
emotional side, and so the media and legislators react to a single fatality in
a way that seems out of proportion. I'm
not sure it is, though.
What we may be seeing is part of a very normal process of social
self-regulating feedback that has led to improvements in safety ever since the
dawn of the Industrial Revolution. The
first thing that has to happen in this process, unfortunately, is that somebody
gets killed. And the reason they were
killed has to do with a new technology.
The bad publicity attracts attention from the public, which is now less
inclined to welcome the new technology; from those in a position to regulate
the technology, such as legislators; and from the
promoters of the technology itself, who are moved to improve safety out of self-preservation. Laws or regulations are enacted by
legislators or private entities such as insurance companies, and the new
industry sometimes imposes new rules on itself. The causes of the original fatalities are
mitigated or removed, and life goes on with the new technology, which in the
course of time becomes old and familiar.
This happened with steamboats in the 1800s, it happened with
human-driven automobiles in the early 1900s, and it appears to be happening
with self-driving cars now.
As long as we don't get into some kind of prohibition panic
and ban all self-driving cars forever, reasonable rules about testing new
systems and deploying market-ready ones can be devised. Compromises will have to be made. Even if all cars were Level-5 quality
tomorrow, a few people would still die in car accidents. But chances are there would be a lot fewer
than 40,000 or so per year, which is what the U.S. automobile fatality rate is
running at today. And many of those
fatalities are caused by distracted drivers, including the Uber driver who had
too many things to watch on the dashboard, and failed to see the pedestrian
until it was too late.
Sources: A news report summarizing the NTSB
findings appeared on May 24 on the Reuters website at https://www.reuters.com/article/us-uber-crash/ntsb-uber-self-driving-car-failed-to-recognize-pedestrian-brake-idUSKCN1IP26K. The preliminary report itself can be downloaded
at https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf. I first blogged on this incident on Mar. 26,
2018 at http://engineeringethicsblog.blogspot.com/2018/03/self-driving-car-kills-pedestrian.html.