Around midnight Sunday, Dec. 29, 2019, the driver of a Honda
Civic headed northbound on Vermont Avenue in Gardena, California, was making a
left turn from Vermont onto Artesia Boulevard.
The traffic light at the intersection was red for westbound traffic on
Artesia, and the intersection happened to be the western end of the Gardena
Freeway, where it becomes a surface road.
At that moment, a Tesla Model S zoomed off the freeway westbound through
the red light and crashed into the Honda, killing its two occupants. The Tesla driver and his passenger were not
seriously injured. Early news reports did not say whether the Tesla was on autopilot, but National Highway Traffic Safety Administration (NHTSA) officials are investigating the crash to determine whether it was engaged.
This latest fatal crash involving an autopilot-equipped Tesla inspired
an Associated Press review of recent fatalities involving Tesla cars in which
the autopilot was engaged. The curious
reader can view the website www.tesladeaths.com, where someone has attempted to
compile a complete list of worldwide statistics for fatal crashes involving
Tesla cars. As of the end of 2019, the list totaled 110 deaths, of which only 4 were in the category of "verified Tesla autopilot death." Because well over 200,000 Teslas have been sold, these statistics are not particularly remarkable, except for the fact that Tesla purports to be the leading edge of the automotive future. As such, the company deserves closer scrutiny, and that is what it's getting.
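To put those numbers in perspective, here is a short Python sketch of the back-of-envelope arithmetic, using only the figures cited above. It is a rough illustration, not a real risk analysis: it ignores miles driven, fleet age, and any non-Tesla baseline, and the 200,000 figure is only a conservative lower bound on cars sold.

# Back-of-envelope arithmetic from the figures cited above (tesladeaths.com, end of 2019).
# Illustrative only: a proper risk comparison would need miles driven,
# exposure time, and a comparable non-Tesla baseline.

total_deaths = 110        # all deaths listed worldwide
autopilot_deaths = 4      # the "verified Tesla autopilot death" category
teslas_sold = 200_000     # conservative lower bound on cars sold

autopilot_share = autopilot_deaths / total_deaths   # about 3.6%
deaths_per_car = total_deaths / teslas_sold          # about 0.055%

print(f"Share of listed deaths with autopilot verified: {autopilot_share:.1%}")
print(f"Listed deaths per Tesla sold (upper-bound rate): {deaths_per_car:.3%}")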
The problem with answering the question in our headline is,
"more dangerous than what?"
Not only does Tesla make the world's best-selling plug-in passenger car, it also offers what many regard as the most sophisticated commercially available autopilot system. And in contrast
to the more conservative approach many automakers have taken in adding
self-driving features such as lane following and automatic braking, the Tesla
driver can turn on autopilot and let go of the wheel. Such behavior is not advised by Tesla, but
since when have instruction manuals been 100% effective in keeping people from
doing stupid things?
The Associated Press article quotes Jason Levine, head of
the nonprofit Center for Auto Safety in Washington, as saying, “At some point,
the question becomes: How much evidence is needed to determine that the way
this technology is being used is unsafe?”
Levine criticized the NHTSA for dragging its feet instead of issuing
regulations as to how Tesla's autopilot feature can be used. Simply warning the driver that he or she should
be alert at all times when the autopilot is working doesn't make it happen. At least two fatal U.S. crashes (both in Florida) happened when the autopilot's sensors became confused and failed to recognize a large truck crossing the roadway ahead. Presumably, if the drivers had been paying
attention, they might have seen the truck and stopped.
Promoters of the autonomous-vehicle future face two distinct
but interrelated obstacles that could delay or even prevent widespread adoption
of self-driving cars.
The first obstacle is technology. The Society of Automotive Engineers (SAE) has defined six levels of driving automation, from Level 0 to Level 5.
Level 0 is a 1955 Plymouth—completely manual operation—and Level 5 would
be the equivalent of an electronic chauffeur—the passenger can watch TV, sleep,
or do anything else you would do if you knew a trusted and competent human
driver was in charge. No automaker
currently offers a Level 5 vehicle for sale, but Elon Musk is claiming that
early this year, Tesla will start to sell fully self-driving cars, which sounds
like Level 5 to me. Of course, this may
be nothing but vaporware. But unless
some pretty radical improvements are made in autopilot technology, it's
inevitable that some fatal crashes will happen in which the autopilot is engaged.
And here's where we run into the second obstacle: public perception, including the perception
of government regulators, lawmakers, and their constituents. I don't know about you, but I would feel a
lot worse thinking about dying in a car wreck in which my car was driving
itself than about dying in one where I was actively at the wheel. True, I'd be just as dead in either case, but
there's something about the hope that one can make a difference if one is
trying to control the situation. This is
not a completely rational state of mind, but carmakers learned decades ago to
appeal to the sub-rational "lizard brain" of the consumer. Why else do pickup ads show their products bounding
over rugged mountains and doing extreme feats that 99% of drivers will never
have to do?
The budding autonomous-car industry is still treading on
very shaky ground, at least in the U.S., where the majority of fatal accidents
involving Teslas have occurred. As the
statistics show, less than 10% of fatal crashes involving Teslas are associated
with the use of the autopilot. But
statistics do not count for much in public perception, and Elon Musk's
cowboy-style reputation lends credibility to the accusation that he and his company
are playing games with the safety of their customers, and by implication, with
the safety of anyone within collision range of a Tesla.
I concede that autonomous vehicles, if properly designed and deployed, could lower the rate of traffic fatalities while lessening traffic congestion and doing other good things such as reducing carbon emissions. But there is a world of challenges in that "if." There may be unknown factors that no one will discover until a certain critical mass of autonomous vehicles is already on the road.
In the statistical mechanics of solutions of, say, salt in water, you can apply simple rules to very dilute solutions, so dilute that each dissolved ion can be treated as if it were the only one in the solution. So far, autonomous vehicles are so rare that each one is surrounded by a sea of non-autonomous vehicles, and the software probably operates under that assumption.
But when you put enough salt in the water that each ion gets within shouting distance of another ion, things get
complicated. New effects such as saturation
and crystallization occur, and your analysis has to be more sophisticated in
order to deal with these effects.
If autonomous vehicles, especially those made by different
manufacturers, ever become common enough so that one vehicle can "see"
another one in a typical driving situation, it is very likely that novel and
perhaps hazardous effects will occur that even the designers may not have anticipated. But that will never happen if the public gets so fearful of accidents involving autopiloted cars that such cars are regulated out of existence. I hope that doesn't happen
either, but if Musk and Tesla get too careless, they might end up triggering
just such a reaction.
Sources: The
Associated Press article I referred to appeared in many locations, among which
was the San Jose Mercury-News website at https://www.mercurynews.com/2020/01/03/3-crashes-3-deaths-raise-questions-about-teslas-autopilot/. I also referred to the same site for
information on the Gardena crash at
https://www.mercurynews.com/2020/01/02/fatal-tesla-crash-in-california-investigated-by-feds/. The Tesla fatal-crash statistics website is
www.tesladeaths.com, and I also referred to the Wikipedia article on Tesla,
Inc.