In the last
couple of months, new information has emerged about the factors leading to the crashes of two
Boeing 737 Max aircraft and the loss of 346 lives. All such aircraft were grounded indefinitely
last March after investigators found that flight-control software, reacting to faulty
data from angle-of-attack sensors, started a chain of events that led to the
crashes. Airline companies around the
world have lost millions as their 737 Max fleets sit idle, and Boeing has been
under tremendous pressure from both international regulatory bodies and the
market to come up with a comprehensive fix for the problem. But as long as both humans and computers have
to work together to fly planes, the humans will need training to deal with
unusual situations that the computers come up with. And in the case of the Lion Air and Ethiopian
Airlines crashes, it looks like whatever training the pilots received left them inadequately
prepared to deal with the situation that led to the tragedies.
Modern
fly-by-wire aircraft are certainly among the most complex mobile systems in
existence today. It is literally
impossible for engineers to think of every conceivable combination of failures
that pilots would have to handle in an emergency, simply because there are so
many subsystems that can interact in almost countless ways. But so far, airliner manufacturers have done
a pretty good job of identifying the major failure conditions that would be
life-threatening, and instructing pilots about how to deal with those. The fact that Capt. Chesley Sullenberger was
able to ditch a fly-by-wire Airbus A320 on the Hudson River in 2009 after
losing both engines shows that humans and computers can work
together to deal with unusual failures.
But the ending
was not so happy with the 737 Max flights, and recent news from regulators
indicates that a wild combination of alarms, stick-shaker warnings, and other
distractions may well have paralyzed the pilots of the two planes that crashed
after faulty readings from an angle-of-attack sensor set everything off.
Flying a modern
jetliner is a little bit like what I am told it was like being in the army
during World War II. For many soldiers,
the experience was a combination of long stretches of incredible tedium
interrupted by short but terrifying bursts of combat. It's psychologically hard for a person to
remain alert and ready for any eventuality when pretty much
nothing out of the routine happens the vast majority of the time. So when the unusual failure of an angle-of-attack
sensor led to a burst of alarms and the flight computer's attempt to push the
nose down, the pilots on the ill-fated flights apparently failed to cope with
the confusion and could not sort through the distractions in order to do the
correct thing.
A month after
the Lion Air crash in 2018, the FAA issued an emergency order telling pilots
what to do in this particular situation.
Read in retrospect, it resembles instructions on how to thread a needle
in the middle of a tornado:
". . . An analysis by Boeing
found that the flight control computer, should it receive faulty readings from
one of the angle-of-attack sensors, can cause 'repeated nose-down trim commands
of the horizontal stabiliser'. The
aircraft might pitch down 'in increments lasting up to 10sec', says the order. When that happens, the cockpit might erupt
with warnings. Those could
include continuous control column shaking and low airspeed warnings – but only
on one side of the aircraft, says the order.
The pilots might also receive alerts warning that the computer has
detected conflicting airspeed, altitude and angle-of-attack readings. Also, the
autopilot might disengage, the FAA says.
Meanwhile, pilots facing such circumstances might need to apply
increasing force on the control column to overcome the nose-down trim. . . . They
should disengage the autopilot and start controlling the aircraft's pitch using
the control column and the 'main electric trim', the FAA says. Pilots should
also flip the aircraft's stabiliser trim switches to 'cutout'. Failing that,
pilots should attempt to arrest downward pitch by physically holding the
stabiliser trim wheel, the FAA adds."
If I counted
correctly, there are six separate actions a pilot is being told to take in the
midst of a chaos of bells and whistles going off and his plane repeatedly
trying to fly itself into the ground.
The very fact that the FAA issued such a warning with a straight face,
so to speak, should have set off alarms of its own. And after the second crash under similar circumstances,
reason prevailed, though regulatory agencies outside the U.S. acted first. Finally, the FAA joined the growing
global consensus and grounded the 737 Max planes until the problem could be
cleared up.
When software is
rigidly dependent on data from sensors that convey only a narrowly defined
piece of information, and those sensors go bad, the computer behaves like the
broomstick in the Disney version of Goethe's 1797 poem, "The Sorcerer's
Apprentice." It goes into an
out-of-control panic, and apparently the pilots found it was humanly impossible
to ignore the panicking computer's equivalent of "YAAAAH!" and do the
six or however many right things that were required to remedy the
situation.
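
To see why this kind of rigid dependence on a single reading is so dangerous, here is a minimal sketch in Python. Everything in it, from the threshold numbers to the function names to the two-sensor cross-check, is invented for illustration and is emphatically not Boeing's flight-control code. It simply contrasts logic that trusts a single angle-of-attack value with logic that refuses to act when two values disagree.

AOA_LIMIT_DEG = 15.0       # assumed stall-warning threshold (illustrative only)
DISAGREE_LIMIT_DEG = 5.0   # assumed allowable disagreement between sensors

def trim_command_single_sensor(aoa_left_deg):
    # Naive logic: one sensor reading stuck high produces nose-down trim
    # commands over and over, no matter what the airplane is actually doing.
    if aoa_left_deg > AOA_LIMIT_DEG:
        return "NOSE_DOWN_TRIM"
    return "NO_ACTION"

def trim_command_cross_checked(aoa_left_deg, aoa_right_deg):
    # Safer logic: if the two sensors disagree, stop trusting either one and
    # hand the problem to the pilots instead of fighting them for the controls.
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREE_LIMIT_DEG:
        return "DISENGAGE_AND_ALERT"
    if max(aoa_left_deg, aoa_right_deg) > AOA_LIMIT_DEG:
        return "NOSE_DOWN_TRIM"
    return "NO_ACTION"

# A left sensor stuck at 22 degrees while the right reads a plausible 4:
for _ in range(3):
    print(trim_command_single_sensor(22.0))       # NOSE_DOWN_TRIM, repeatedly
    print(trim_command_cross_checked(22.0, 4.0))  # DISENGAGE_AND_ALERT

Press accounts of Boeing's proposed fix suggest it moves toward the second approach, comparing both angle-of-attack sensors before commanding any trim, which is the kind of sanity check that keeps a broomstick from running away with the buckets.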
It is here that an
important difference between even the most advanced artificial-intelligence (AI)
system and human beings comes to the fore:
the ability of a human being to maintain a global awareness of a
situation, flexibly enlarging or narrowing the scope of attention as
required. Clearly, the software
designers felt that once they had delivered an emergency message to the pilot, the
situation was no longer their responsibility.
But insufficient attention was paid to the fact that in the bedlam of
alarms that the unusual sensor failure caused, some pilots—even
though they were well trained by the prevailing standards—simply could not
remember the complicated sequence of fixes required to keep their planes in the
air.
Early
indications are that the 737 Max "fix," whatever software changes it
includes, will also involve extensive pilot retraining. We can only hope that the lessons learned from
the fatal crashes have been applied, and that whenever such unusual sensor
failures happen in the future, pilots will not have to perform superhuman feats
of concentration to keep the plane from crashing itself.
Sources: A
news item about how Canadian regulators are looking at the pilot-overload
problem appeared on the Global News Canada website on Oct. 5, 2019 at https://globalnews.ca/news/5995217/boeing-737-max-startle-factor/. The November 2018 FAA directive to 737 Max
pilots is summarized at https://www.flightglobal.com/news/articles/faa-order-tells-how-737-pilots-should-arrest-runawa-453443/. I also referred to Wikipedia's articles on
the Boeing 737 Max groundings, Chesley Sullenberger, and The Sorcerer's
Apprentice.