Monday, November 04, 2019

Moral Exemplars Still Wanted: "The Current War: Director's Cut"


One way of teaching engineering ethics that is less grim than picking through the wreckage of failed projects is to portray moral exemplars:  engineers who did the right thing in a critical situation and benefited people thereby.  For example, William LeMessurier, a structural consulting engineer, played a prominent role in determining that the 60-story Boston skyscraper formerly known as the John Hancock Tower was unstable under certain types of wind loading.  The analysis he and his colleagues produced convinced the owners to install millions of dollars' worth of cross-bracing, and the building is still standing today. 

For many generations both in the U. S. and abroad, Thomas Edison was a heroic inventor whose ideas benefited millions.  In 1940, at the height of the inventor-as-hero period in U. S. history, MGM brought out a pair of panegyrics based on his life: "Young Tom Edison" starring Mickey Rooney, and "Edison the Man" with Spencer Tracy.  I must confess that seeing the former film on TV played a disproportionately large role in my decision, made around the age of eight, to become an electrical engineer.

However, cinematic heroes today tend to be mainly of the comic-book variety, so when "The Current War:  Director's Cut" was finally released after a two-year delay connected with the fall of Harvey Weinstein, whose production company financed the film, I could hardly wait to see what a present-day director and actors would do with the raw material.

And some of it is pretty raw.  There was no hint in the MGM flicks that Edison was ever anything less than selfless as he enthusiastically searched for innovations that would be boons for humanity.  In particular, there was nothing about the admittedly sordid tricks he pulled to make George Westinghouse's rival AC system look like a threat to public safety, up to and including killing horses and pigs (but not an elephant; that deed was falsely attributed to him by confused reports of an elephant's electrocution his company filmed at Coney Island in 1903).  But in "The Current War," Benedict Cumberbatch's Edison is no one's idea of a moral exemplar, though he might qualify as an object of pity.

Just the kind of historical rabbit trail I took you on about the elephant is one of the problems with the film.  Despite the director's thoughtful inclusion of little side-titles when new characters are introduced (I particularly liked "Nikola Tesla, futurist"), the people I saw the film with all complained that it was very hard to follow.  Although I have made somewhat of an amateur study of Edison, Westinghouse, and that era of the history of technology, I myself had trouble figuring out who was who unless the side-titles were there to help.  In particular, there was an older bewhiskered gent who seemed to be Westinghouse's technical Svengali, but who was addressed only as "Frank."  At the time I guessed he was Frank Sprague, who played a prominent role in early electric tramways.  But later in the film it turned out he was Franklin Pope, an "electrician" (which is what electrical specialists sometimes called themselves back then) who died, not while trying to get Westinghouse's electric fan motor to work as the film implied, but while trying to fix a motor-generator installed in the basement of his own home.

As most historical films have to do, this one takes dramatic license with the facts, but it stays reasonably close to the main narrative:  the battle between Edison's first-to-market but ultimately unsuccessful DC system and Westinghouse's cheaper and better alternative of AC, which is still with us today.  Edison's invention of the light bulb, the climax of the MGM Spencer Tracy flick, is here relegated to a long speech by Cumberbatch at the 1893 World's Columbian Exposition in Chicago.  Westinghouse won the bid to light the fair, and that triumph symbolized the end of the current war.  Westinghouse and Edison run into each other at the Japanese exhibit, and Westinghouse asks Edison what it felt like to invent the light bulb.  What ensues is perhaps the best piece of classical acting I've seen Cumberbatch do—no special effects, no superhero action, just a man painting a picture of a scene with words that answers the question better than any number of adjectives. 

Cumberbatch seems to be making a career of portraying emotionally peculiar geniuses:  first as the legendary Sherlock Holmes, then as the eccentric mathematician and codebreaker Alan Turing, and now as Edison.  Critics of the film say he does a good job of portraying our era's idea of the corporate genius:  the Steve-Jobs-type disrupter of the status quo who nevertheless betrays his usually-suppressed emotional life by playing back the voice of his dead wife on his new invention, the phonograph.  Edison's first wife Mary did die in the midst of the current war, which only added to his distress and frustration.  But he didn't let her death slow him down in pursuing his goals by whatever means he thought necessary, including secretly cooperating with a man who thought electrocution would be a more humane way of executing condemned criminals than hanging.

I suppose I should have put a spoiler alert on that last paragraph.  But it's unlikely that too many people will see the movie just for the suspense.  To techno-nerds such as myself, who are familiar with many of Edison's lesser-known inventions, the film was a feast of seeing cinematic reproductions of things like the first kinetograph projector (an early form of motion-picture machine) and his electric pen for writing on paper for mimeograph reproduction.  And the film gets some humor out of the Edison family's alleged habit of secretly communicating with each other via Morse-code taps. 

But for the average viewer who has little or no prior knowledge of the era, I'm afraid the film will be a rather confused mishmash of mysterious devices, obscure motives, and hard-to-identify characters.  One genuine strength, though, is that everything takes place against a lovingly reproduced CGI background of 1890s America, including a panoramic view of the Columbian Exposition, complete with the original Ferris wheel, that ought to be issued as a two-by-three-foot poster on its own.  I'm not sure what effect the movie will have on eight-year-olds, but don't expect a flurry of young people flocking to electrical engineering in about ten years.  Actors, maybe, but not engineers.

Sources:  "The Current War:  Director's Cut" was released on October 24, 2019, and stars Benedict Cumberbatch as Thomas Edison.  I referred to a review of the film posted at https://www.thewrap.com/the-current-war-directors-cut-film-review-benedict-cumberbatch-tom-holland/ for details of the delayed release, and also Wikipedia information concerning the 1903 elephant electrocution of Topsy.

Monday, October 28, 2019

A Pilot and Software Engineer's Take on the Boeing 737 Max


As of this writing, the ill-fated Boeing 737 Max series of jetliners is still grounded after two fatal crashes in which the pilots lost a battle with the plane's Maneuvering Characteristics Augmentation System (MCAS).  The U. S. Federal Aviation Administration (FAA) grounded the planes last March, and current estimates are that they will not fly again until 2020 at the earliest.  This is a huge blow to Boeing and to the customers who bought the planes, as billions of dollars of assets sit idle on the ground instead of making money. 

Only a month after the planes were grounded, a software engineer named Gregory Travis, who is also a pilot, wrote his thoughts on what happened with the Max 8 and why he thinks the problem may be intractable.  A version of his article appeared on the website of IEEE Spectrum recently, and to my mind it is the most comprehensive and damning examination yet of a situation that put thousands of lives at risk and ended up killing 346 people.

Travis points out that the 737 series was introduced all the way back in 1967.  Designing an airframe from the ground up is a costly enterprise, so Boeing understandably would like to make incremental changes to an existing design rather than coming up with a whole new airplane every few years.  As fuel economy became more important to airlines, Boeing decided to go with more efficient engines, which for fundamental physical reasons have to be larger.  But eventually, the newer engines got so big that the ground clearance in their original positions was too small—the front fans were going to hit the ground if the engines weren't moved.  So Boeing moved them up and forward.  But that caused another problem.

Travis drew on his experience as a pilot to note that when you move the engines around, you start changing the fundamental handling characteristics of an aircraft.  Stable flight is a complex interplay between the engine thrust vector and the center of gravity, the drag on the wings and other surfaces, and many other factors.  Moving the engines made the plane tend to pitch upward with increased power, and this is not a good thing.  Pitch is the up-or-down angle of the airplane's nose, the aviation equivalent of tilting your head up or down. 

If the wing's angle of attack (the angle between the wing and the air moving past it) exceeds a critical value, the wing stalls:  it abruptly stops producing lift, and the plane starts to fall out of the air.  The modified 737 was edging dangerously close to a dynamically unstable condition, which is not something a commercial airliner should do.  Travis said that the right thing to do at this point would have been to redesign the whole airframe to deal with the changed position of the engines.  In his words, "The airframe, the hardware, should get it right the first time and not need a lot of added bells and whistles to fly predictably. This has been an aviation canon from the day the Wright brothers first flew at Kitty Hawk." 

But instead of doing that, Boeing chose to develop a software patch that included the MCAS—a complicated system of interacting compensation fixes, pilot warnings, and poorly considered feedback loops that were vulnerable to faulty inputs from angle-of-attack sensors, which can easily be fooled by surface winds or other transient phenomena. 

Most modern airliners are "fly-by-wire" systems in which there is no direct mechanical connection between the pilot's stick and pedals, and the airplane's control surfaces.  Instead, a computer both takes in the pilot's commands and feeds back to the pilot something approximating the "feel" of manually operated controls, so that the pilot senses he or she is flying a plane and not a video game.  But the MCAS was apparently designed so that when it sensed a situation in which the nose needed to be pointed down, it would in effect grab the controls away from the pilot and do what it knew was right—even if it was wrong.  And the feedback motors that would do this were simply too powerful for the pilots to overcome.  In a reference to the famous HAL 9000 computer in the film 2001: A Space Odyssey, in which the computer tries to kill everyone on board for its own rather obscure purposes, Travis writes "MCAS gaslights the pilots. And it turns out badly for everyone. 'Raise the nose, HAL.' 'I’m sorry, Dave, I’m afraid I can’t do that.'"
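
To make the failure mode concrete, here is a minimal sketch in Python of a feedback loop that blindly trusts a single angle-of-attack sensor.  It is my illustration of the general design flaw Travis describes, not Boeing's actual code, and every name, threshold, and number in it is an invented assumption:

    # A toy model of an automatic trim loop that trusts one sensor.
    # All thresholds and step sizes here are illustrative assumptions.
    AOA_TRIGGER_DEG = 15.0   # hypothetical angle of attack that triggers nose-down trim
    TRIM_STEP_DEG = 2.5      # hypothetical stabilizer movement per activation

    def trim_step(sensed_aoa_deg, stabilizer_deg):
        """One cycle of the naive loop: if the AoA looks too high, trim nose-down."""
        if sensed_aoa_deg > AOA_TRIGGER_DEG:
            stabilizer_deg -= TRIM_STEP_DEG
        return stabilizer_deg

    # A stuck sensor keeps reporting an impossibly high angle of attack,
    # so the loop commands nose-down trim over and over, regardless of
    # what the airplane (or the pilot) is actually doing.
    stabilizer = 0.0
    stuck_reading = 40.0     # faulty; suppose the real AoA is a normal 5 degrees
    for cycle in range(5):
        stabilizer = trim_step(stuck_reading, stabilizer)
        print(f"cycle {cycle}: stabilizer trim = {stabilizer:.1f} deg nose-down")

The point of the sketch is what's missing:  nothing in the loop asks whether the reading is even plausible, and nothing yields to a pilot who is hauling back on the controls.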

We are well down the road that leads to 100% control of airplanes by robotic systems.  Nevertheless, we are far from arriving, and in the meantime there has to be effective and safe cooperation, not competition, between the human pilots and the software that runs the plane.  But in trying to cut corners by fixing an airframe problem with software, and poorly designed software at that, Boeing may have painted itself, and all its customers who bought 737 Max 8s, into a corner that it can't get out of.  Every month that goes by without an FAA-approved plan to fix or retrofit Max 8s so they can fly safely again is an indication that the problem revealed by the MCAS-related crashes may be deeper and more far-reaching than most people thought at first.  The fact that an engineer with deep expertise in both software and flying saw what was evidently going on within a month of the groundings tells me that he's probably on to something.

The historian of technology Henry Petroski says that engineers often learn more from failures than from successes.  We should learn a lot from the 737 saga, but it may prove to be an expensive lesson.  The 737 Max began commercial flights only in 2017, and I'm sure Boeing and its customers were counting on many years of revenue from their purchases.  If the design ends up being scrapped, it will amount to the largest recall in aviation history.  But if even most of what Travis says is true, that outcome is well within the realm of possibility.  Regardless of what patches Boeing may come up with, I'm never going to feel entirely comfortable flying in a 737 Max again. 

Sources:  Readers are urged to see Travis's complete article, which goes into greater depth than I have been able to here.  It is on the website of IEEE Spectrum at https://spectrum.ieee.org/aerospace/aviation/how-the-boeing-737-max-disaster-looks-to-a-software-developer.

Monday, October 21, 2019

Hard Rock Hotel Collapse: Why?


On Saturday morning, Oct. 12, a hotel under construction at the corner of Rampart and Canal streets in New Orleans, Louisiana underwent a partial collapse, killing three workers and injuring 30.  The Hard Rock Hotel, originally planned as a mixed retail/residential project, had reached a height of 13 stories when something happened to cause a collapse at the top completed level.  A chain of floor collapses ensued, leading to a partial collapse of all the floors above about the seventh level.  The collapse also damaged the two tower cranes that were being used on the project, leading to concerns that they might fall and damage some of the surrounding structures in the densely populated downtown area.  At this writing (Wednesday, Oct. 16), the body of one worker has yet to be recovered.

Any time a construction accident occurs, the entire complex process of planning, management, and actual construction activity gets called into question.  The construction of a large high-rise such as the Hard Rock Hotel is an exercise in meticulous coordination and integration of technologies ranging from computer-aided design to the kind of pumps that can send many tons of concrete all the way up to the roof of a 13-story building.  With so much heavy stuff being supported in temporary ways, it's understandable that something could go wrong. 

For example, the concrete floors that are poured at each level have to set before they are put into compression by tensioning cables.  Try to tighten those cables too early, and you're liable to squash the still-weak concrete.  But wait too long by a day or so, and you've added costly time to the construction schedule.  A huge number of time-critical matters have to be coordinated within a small margin of error for things to go smoothly, and weather, supplier problems, and other external factors can throw a monkey wrench into the works. 
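
To get a rough feel for the timing tradeoff, here is a back-of-the-envelope sketch in Python using an ACI 209-style strength-gain curve for moist-cured portland-cement concrete.  The 4,000-psi mix and the rule of reaching 75 percent of design strength before tensioning are illustrative assumptions of mine, not figures from this project:

    # Rough strength gain of moist-cured portland-cement concrete, using the
    # ACI 209-type relation f(t) = f28 * t / (4 + 0.85 * t), with t in days.
    F28_PSI = 4000.0      # assumed specified 28-day compressive strength
    THRESHOLD = 0.75      # assumed fraction of f28 required before post-tensioning

    def strength_psi(t_days):
        """Estimated compressive strength at age t_days."""
        return F28_PSI * t_days / (4.0 + 0.85 * t_days)

    for day in range(1, 11):
        f = strength_psi(day)
        status = "strong enough to tension" if f >= THRESHOLD * F28_PSI else "wait"
        print(f"day {day:2d}: about {f:4.0f} psi -> {status}")

Under these made-up numbers the slab is ready around day nine:  tension a day early and the concrete is still a few hundred psi short, but every extra day of waiting costs schedule.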

Still, most buildings go up without having multiple floors collapse on each other.  Viewed from the front, the structure looks like a giant finger just scraped all the floors above the seventh and bent them downward. 

A structural engineer named Walter Zehner worked on the project in its early stages.  When contacted by a reporter from the Lafayette Daily Advertiser, he said that it was much too early even to speculate on the cause of the collapse.  After retrieval of the remaining victim's body, engineers will have to stabilize the structure so that it won't present an ongoing hazard to surrounding buildings.  Only then will the investigation begin, and it might take months.

Construction was in progress at the time of the collapse, and Zehner says that the remaining eyewitnesses will be asked what exactly was being done at the time.  It's possible that someone accidentally knocked over a support column, for example.  If a heavy just-poured layer of concrete falls twelve or fifteen feet onto the floor below it, the impact could well cause the next floor to collapse, leading to just the kind of destruction that took place.  But all such notions are speculation at this point, and the investigation will reveal a sequence of events that may be traced backwards to a possible cause.

In the recent past there have been some indictments of city inspectors for taking bribes.  A lack of proper municipal oversight might lead to hazardous conditions that could cause such a collapse, but again, this is speculation. 

The most recent collapse of a structure under construction that was covered in this blog was the Florida International University pedestrian bridge collapse in Miami in 2018.  Six people were killed when a concrete-beam bridge collapsed just after being set in place.  The investigation of that accident is still ongoing, but late last year it was revealed that the National Transportation Safety Board (NTSB) had determined design errors were at least partly to blame. 

Accidents like the Hard Rock Hotel collapse can happen even if the plans are flawless.  The 1981 collapse of pedestrian walkways inside the atrium of the Hyatt Regency Hotel in Kansas City was due not to the original design, but to a seemingly minor change in the walkways' hanger-rod arrangement made during construction, which roughly doubled the load on a critical connection.  Investigations may reveal that while the New Orleans hotel plans were correct, the builders overlooked something.  Or it could turn out that a single mistake made by one construction worker led to the tragedy. 

Not much is known about the extent of training that typical construction workers receive.  Construction is one of the few remaining fields in which a person without a high-school diploma can earn somewhere around $13 an hour, which is the average construction-worker wage in Louisiana according to a statistic cited by the website indeed.com.  This is scarcely anything to write home about, unless home is Guatemala, in which case it looks good compared to trying to be a subsistence farmer.  Nevertheless, it's attractive enough to draw workers who are willing to face the dangers and difficulties that construction work involves, up to and including the chance of dying in a tragic accident.

We will have to wait to find out what exactly happened in New Orleans to transform a nearly completed building into a pile of dangerous rubble.  And when we do, I hope that any lessons learned will be applied to future construction sites so that tragedies like this happen less and less frequently. 

Sources:  I referred to reports from the ABC News website at https://abcnews.go.com/US/search-underway-unaccounted-person-hard-rock-hotel-partially/story?id=66261708, the Lafayette Daily Advertiser website at https://www.theadvertiser.com/story/news/2019/10/14/hard-rock-hotel-collapse-new-orleans/3979427002/, and the Wikipedia website "List of structural failures and collapses."  The hourly construction wage statistic came from https://www.indeed.com/salaries/Construction-Worker-Salaries,-Louisiana.  I discussed the FIU bridge collapse at https://engineeringethicsblog.blogspot.com/2018/12/design-flaw-identified-in-fiu-bridge.html.

Monday, October 14, 2019

PG&E Pulls the Fire Plug


If you were one of an estimated two million customers of Pacific Gas & Electric in northern California this week, your power went off for a day or more.  There was no malfunction of the power grid.  Instead, the utility deliberately shut off power in large regions where high winds were predicted, in order to avoid sparking forest wildfires of the kind that have killed over a hundred people in recent years.  During the outage, the utility's website crashed, making it difficult or impossible for people to find out if they lived in an area targeted for an outage.  According to an article about the blackouts in the Wall Street Journal, California Governor Gavin Newsom reacted with "outrage," blaming the precautionary outages on PG&E's "greed and mismanagement over the course of decades."  PG&E CEO Bill Johnson said he might have some disagreements with Gov. Newsom, but that he was not ruling out similar outages in the future.

Reliable electric power is one of the mainstays of modern civilization.  Because most utilities outside large cities rely on above-ground transmission and distribution lines, their power grids are subject to what the lawyers call acts of God:  windstorms and ice storms that down power lines, lightning and floods that damage and destroy equipment, and other natural occurrences that disrupt the smooth delivery of power.  As long as these interruptions are rare and end promptly, no one blames the utility for them.  But the deliberate large-scale blackouts PG&E imposed simply as a precautionary measure are something new.

The Journal article points out that California now has a law making utilities liable for any damage caused by fires that are ignited by their lines, even if the utility was not negligent.  This law contributed to over $30 billion in potential liability costs associated with power-line-sparked wildfires and was a big reason why PG&E went into bankruptcy proceedings at the beginning of this year.  I don't know the history of that particular law, but it's consistent with a blame-the-powerful attitude that also seemed to inspire Gov. Newsom's comments.

Blaming the powerful is one thing, especially if they're guilty, but crippling a vital utility through excessively punitive laws is another thing.  The parties to this conflict include PG&E's management, workers, and investors, who mainly just want to do their job and/or get paid for it; PG&E's customers, who want reliable electric power without having their houses burn down; and the rest of California, which includes its government, along with the accompanying laws and regulatory environment.  Each group has interests that potentially conflict with the others, and these blackouts highlight the areas of conflict.

I lived in California in the 1970s, during the four years of my undergraduate degree outside Los Angeles, and I vividly recall waking up one day to see a dark cloud of smoke covering the entire northern half of the sky as a wildfire burned out of control in the San Gabriel Mountains.  Even back then, I thought people built houses in crazy places in California, on the edges of cliffs and so on, and it's only gotten worse since then.  Fires that used to damage nothing but wildlife (which is bad enough) now threaten whole communities, and so the need to control them by whatever means necessary has grown in recent years.

Part of that control is making sure that no tree can come anywhere close to a high-voltage transmission line.  PG&E has a tree-trimming program, but they admit they are behind in their scheduled trimming operations, and they also lack the ability to monitor winds at many specific locations so as to restrict the power outages to where they are really needed.  And even if they had such monitoring abilities, their older equipment doesn't allow them to be very selective in the power lines they de-energize—hence the massive blackouts covering a wide area. 

From here, the outages look like a desperate move by a utility company that is hamstrung by regulations and unfavorable laws.  If PG&E were a human being and not a large corporation, it would strike me as unfair to hold it liable for damage even when negligence could not be proved.  If a driver is obeying all the traffic laws, and a child suddenly runs out from a hidden place and gets hit, the driver generally does not get penalized if there was nothing he could have done to avoid the accident. 

But if you have an attitude that large private corporations are infinite money pots from which lawyers and their clients can extract indefinite amounts of money, sooner or later you run up against reality.  If PG&E doesn't have enough money or staff or freedom from regulations to cut away all their trees from their lines, and they risk the corporate equivalent of death if their lines cause a fire, then the precautionary blackouts look like the least bad alternative. 

Civilization is a huge mesh of cooperation:  buyers cooperating with sellers, consumers cooperating with producers, and government, one hopes, encouraging the virtuous kind of cooperation that leads to prosperous and flourishing societies.  But when groups begin to view other groups mainly as enemies and attribute malign motives to them, you can end up with a kind of self-fulfilling prophecy.  The maligned group or entity may think, "Well, these folks believe I'm a bad hombre no matter what I do, so I might as well act like it." 

The precautionary wind blackouts are a sign that maybe PG&E has been pushed too far.  We can hope that a spirit of conciliation will prevail so that more trees get trimmed, more customers are served more reliably, and fewer fires lead to tragedy in the future.  But right now, the prospects for that don't look too bright.  Especially if your power's out.

Sources:  The Wall Street Journal article "PG&E's Big Blackout is Only the Beginning" appeared on Oct. 12, 2019 at https://www.wsj.com/articles/pg-es-big-blackout-is-only-the-beginning-11570909816.  I also referred to a New York Times article at https://www.nytimes.com/2019/10/09/us/pge-shut-off-power-outage.html.

Monday, October 07, 2019

Pilot Overload and the Boeing 737 Max Accidents


In the last couple of months, new information has emerged about the factors leading to the crashes of two Boeing 737 Max aircraft and the loss of 346 lives.  All such aircraft were grounded indefinitely last March after investigators found that a software glitch combined with faulty data from angle-of-attack sensors to start a chain of events that led to the crashes.  Airline companies around the world have lost millions as their 737 Max fleets sit idle, and Boeing has been under tremendous pressure from both international regulatory bodies and the market to come up with a comprehensive fix for the problem.  But as long as both humans and computers have to work together to fly planes, the humans will need training to deal with unusual situations that the computers come up with.  And in the case of the Lion Air and Ethiopian Airlines crashes, it looks like whatever training the pilots received left them inadequately prepared for at least one such situation, with tragic results.

Modern fly-by-wire aircraft are certainly among the most complex mobile systems in existence today.  It is literally impossible for engineers to think of every conceivable combination of failures that pilots would have to handle in an emergency, simply because there are so many subsystems that can interact in almost countless ways.  But so far, airliner manufacturers have done a pretty good job of identifying the major failure conditions that would be life-threatening, and instructing pilots about how to deal with those.  The fact that Capt. Chesley Sullenberger was able to land a fly-by-wire Airbus A320 plane in the Hudson in 2009 after experiencing failure of all engines shows that humans and computers can work together cooperatively to deal with unusual failures.

But the ending was not so happy with the 737 Max flights, and recent news from regulators indicates that a wild combination of alarms, stick-shakings, and other distractions may well have paralyzed the pilots of the two planes that crashed after faulty readings from angle-of-attack sensors set off the alarms. 

Flying a modern jetliner is a little like what I am told it was like being in the army during World War II.  For many soldiers, the experience was long stretches of incredible tedium interrupted by short but terrifying bursts of combat.  It's psychologically hard for a person to remain alert and ready for any eventuality when the norm is that nothing out of the routine happens the vast majority of the time.  So when a faulty angle-of-attack sensor set off a burst of alarms and the flight computer's attempts to push the nose down, the pilots on the ill-fated flights apparently became overwhelmed and could not sort through the distractions in order to do the correct thing.

A month after the Lion Air crash in 2018, the FAA issued an emergency order telling pilots what to do in this particular situation.  Read in retrospect, it resembles instructions on how to thread a needle in the middle of a tornado: 

            ". . . An analysis by Boeing found that the flight control computer, should it receive faulty readings from one of the angle-of-attack sensors, can cause 'repeated nose-down trim commands of the horizontal stabiliser'.  The aircraft might pitch down 'in increments lasting up to 10sec', says the order.  When that happens, the cockpit might erupt with warnings.  Those could include continuous control column shaking and low airspeed warnings – but only on one side of the aircraft, says the order.  The pilots might also receive alerts warning that the computer has detected conflicting airspeed, altitude and angle-of-attack readings. Also, the autopilot might disengage, the FAA says.  Meanwhile, pilots facing such circumstances might need to apply increasing force on the control column to overcome the nose-down trim. . . . They should disengage the autopilot and start controlling the aircraft's pitch using the control column and the 'main electric trim', the FAA say. Pilots should also flip the aircraft's stabiliser trim switches to 'cutout'. Failing that, pilots should attempt to arrest downward pitch by physically holding the stabilizer trim wheel, the FAA adds."

If I counted correctly, there are six separate actions a pilot is being told to take in the midst of a chaos of bells and whistles going off and his plane repeatedly trying to fly itself into the ground.  The very fact that the FAA issued such a warning with a straight face, so to speak, should have set off alarms of its own.  And after the second crash under similar circumstances, reason prevailed, but first with regulatory agencies outside the U. S.  Finally, the FAA complied with the growing global consensus and grounded the 737 Max planes until the problem could be cleared up.

When software is rigidly dependent on data from sensors that convey only a narrowly defined piece of information, and those sensors go bad, the computer behaves like the broomstick in the Disney version of Goethe's 1797 poem, "The Sorcerer's Apprentice."  It goes into an out-of-control panic, and apparently the pilots found it was humanly impossible to ignore the panicking computer's equivalent of "YAAAAH!" and do the six or however many right things that were required to remedy the situation. 
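
There is a standard defensive alternative, which press reports say Boeing's eventual fix adopts in some form:  compare the redundant sensors and refuse to act when they disagree.  Here is a minimal sketch of the idea in Python; the 5.5-degree disagreement threshold and everything else in it are my own illustrative assumptions, not Boeing's code:

    # Cross-check two redundant angle-of-attack sensors before acting.
    # The threshold and the sample values are illustrative assumptions.
    DISAGREE_LIMIT_DEG = 5.5

    def trusted_aoa(left_deg, right_deg):
        """Return a reading the automation may act on, or None to stand down."""
        if abs(left_deg - right_deg) > DISAGREE_LIMIT_DEG:
            return None   # sensors disagree: inhibit automatic trim, alert the crew
        return (left_deg + right_deg) / 2.0

    print(trusted_aoa(5.0, 5.8))    # plausible agreement: act on the average
    print(trusted_aoa(40.0, 5.0))   # one sensor stuck high: None, hands off

A loop like this fails quiet instead of failing active:  when its inputs are suspect, it hands the airplane back to the humans rather than wrestling them for it.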

It is here that an important difference between even the most advanced artificial-intelligence (AI) system and human beings comes to the fore:  the ability of a human being to maintain a global awareness of a situation, flexibly enlarging or narrowing the scope of attention as required.  Clearly, the software designers felt that once they had delivered an emergency message to the pilot, the situation was no longer their responsibility.  But insufficient attention was paid to the fact that in the bedlam of alarms the sensor failure caused, some pilots—even though they were well trained by the prevailing standards—simply could not remember the complicated sequence of fixes required to keep their planes in the air.

Early indications are that the 737 Max "fix," whatever software changes it involves, will also involve extensive pilot retraining.  We can only hope that the lessons learned from the fatal crashes have been applied, and that whenever such unusual sensor failures happen in the future, pilots will not have to perform superhuman feats of concentration to keep the plane from crashing itself.

Sources:  A news item about how Canadian regulators are looking at the pilot-overload problem appeared on the Global News Canada website on Oct. 5, 2019 at https://globalnews.ca/news/5995217/boeing-737-max-startle-factor/.  The November 2018 FAA directive to 737 Max pilots is summarized at https://www.flightglobal.com/news/articles/faa-order-tells-how-737-pilots-should-arrest-runawa-453443/.  I also referred to Wikipedia's articles on the Boeing 737 Max groundings, Chesley Sullenberger, and The Sorcerer's Apprentice. 

Monday, September 30, 2019

Jonathan Franzen Gives Up On Controlling the Climate


Jonathan Franzen is a novelist and also writes essays that are published in places like The New Yorker.  As he admits, he's not a scientist or a policy wonk, but that doesn't keep him from putting his oar in on climate change. 

In a recent essay posted on The New Yorker's website entitled "What If We Stopped Pretending?" Franzen gives what at first glance appears to be a counsel of despair.

First, he admits that anybody under thirty is "all but guaranteed" to witness what he calls the "radical destabilization of life on earth—massive crop failures, apocalyptic fires, imploding economies, epic flooding, hundreds of millions of refugees fleeing regions made uninhabitable by extreme heat or permanent drought."  This will happen when "climate change, intensified by various feedback loops, spins completely out of control."  The only way to keep this from happening, according to authorities he cites such as the Intergovernmental Panel on Climate Change, is if every major greenhouse-gas-emitting nation on the planet imposes what amounts to a climate dictatorship:  instituting "draconian conservation measures, shut[ting] down much of its energy and transportation infrastructure, and completely retool[ing] its economy."  And that means everybody, not just folks who agree with the idea.  And here he gets personal:  "Making New York City a green utopia will not avail if Texans keep pumping oil and driving pickup trucks."  (I live in Texas, but I don't personally drive a pickup truck.)

Then he says in effect, "Hey, I'm a realist.  This isn't going to happen.  So you know what?  I'm giving up on it.  We might as well face it:  the apocalypse is coming, and we better just get ready for it."  We shouldn't quit trying to reduce carbon emissions, but we also shouldn't con ourselves into believing that our little token individual actions are going to make much difference. 

He winds up his essay by encouraging people to make their own little corners of the world better in whatever way they can—improving democratic governance, helping the homeless, and just generally being good citizens, whether or not it makes a difference in climate change.  "To survive rising temperatures, every system, whether of the natural world or of the human world, will need to be as strong and healthy as we can make it."  In other words, we should fight smaller battles we have a reasonable chance of winning instead of putting all our eggs in the basket of averting climate change.

There is a syndrome that workers in the helping professions call "compassion fatigue."  Even if a naturally compassionate person chooses a job such as assisting Alzheimer's patients or children with terminal cancer, constantly having to come up with sympathy for someone who isn't going to get better can be tremendously draining.  And after months or years of such work, some people simply burn out—they can't take it anymore. 

Something like this seems to have happened to Franzen.  If he's like many people who see climate change as the most important existential threat to humanity, it's the kind of thing that you can never quite put out of your mind.  If you're not actively part of the solution, out there with Greta Thunberg protesting on the steps of the UN, then you're part of the problem merely by living a normal life in the U. S.  It's understandable that Franzen would choose to unburden himself by saying publicly, "Look, let's face it.  The train's coming at us in the tunnel and there's no way out.  Let's use the time we have to make things better, rather than fooling ourselves into thinking we can stop the train."

I'm not a climate scientist either, but I'm willing to make a prediction that I feel very confident about.  The way that climate change actually plays out is not going to fit anybody's prediction exactly, simply because it's far too complicated and long-term for anyone to predict with accuracy.

In 2018, the peak level of carbon dioxide in the atmosphere was 407 parts per million, up about 2.5 ppm from the previous year.  Twenty million years ago, it was about that high, and the all-time record for carbon dioxide, according to various estimates that scientists have made, was around 2000 ppm some 200 million years ago.  So it's not as if the planet has never seen such high levels before.  Life survived, although many species went extinct and others arose to take their places.
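
To put the current rate in perspective, here is the naive arithmetic in Python.  The straight-line extrapolation is an assumption of mine for illustration only; actual emissions growth is neither linear nor guaranteed:

    # Naive linear extrapolation from the figures above:
    # 407 ppm in 2018, rising about 2.5 ppm per year.
    current_ppm = 407.0
    rate_ppm_per_year = 2.5
    base_year = 2018

    for target_ppm in (450, 500, 560):   # 560 ppm is roughly double the pre-industrial 280
        years = (target_ppm - current_ppm) / rate_ppm_per_year
        print(f"{target_ppm} ppm around the year {base_year + years:.0f}")

By this crude measure, a doubling of the pre-industrial level is still decades away, which is part of why predictions about the consequences differ so widely.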

Now admittedly, we are doing a radical thing to the planet, and there will be consequences.  But just as the way an individual human deals with a threatening crisis affects the outcome, the way human beings deal with what may turn into a climate crisis will also affect the future of humanity. 

When Franzen writes that climate change may "spin out of control," I would point out that, strictly speaking, climate has never been under our control.  True, you can adjust a thermostat labeled "Climate Control," but its influence is limited to your house.  For all of human history, the weather has been something that human beings simply had to accept, not something they could control in any meaningful sense.  We are now engaged in the first-ever unintentional attempt at climate control, or at any rate climate influence, by emitting so much carbon dioxide, and in the coming years and decades we will be scrambling to deal with the consequences. 

But not in the way Franzen fantasizes in his scenario to stop worldwide emissions.  If the world really shut down much of its energy and transportation infrastructure, that in itself would cause economies to implode.  So in that case the cure for climate change would be just as bad as the disease. 

The only way humans have survived on this planet as long as we have is that we are adaptable.  If crops start failing in some parts of the world, growing conditions in other parts may improve.  If coastlines shrink, people have the ability to move, assuming their governments will let them.  Franzen has caught a lot of flak for his essay, but I think he ends up in a better place than a lot of other people who keep banging the same drum in favor of a global climate dictatorship.  I agree with his advice to do what you can to limit climate change, but mainly, start with yourself:  be a better person, and make the part of the world you can control a better place, no matter how warm it gets.

Sources:  Jonathan Franzen's essay "What If We Stopped Pretending?" appeared on Sept. 8, 2019 on The New Yorker website at https://www.newyorker.com/culture/cultural-comment/what-if-we-stopped-pretending.  For historical numbers on carbon dioxide levels, I consulted a graph published in Nature at https://www.nature.com/articles/ncomms14845/figures/4. 

Monday, September 23, 2019

Moral Machines?


By now you may be used to asking your phone or Siri or Alexa questions and expecting a reasonable answer.  The dream Alan Turing expressed in 1950, that one day computers might be powerful enough to fool people into thinking they were human, is realized every time someone calls a phone tree and mistakes the computer-generated voice on the other end for a human being.

The programmers setting up artificial-intelligence virtual assistants such as Siri and human-sounding phone trees aren't necessarily trying to deceive consumers.  They are simply trying to make a product that people will use, and so far they've succeeded pretty well.  Considering the system as a whole, the AI part is still pretty low-level, and somewhere in the back rooms there are human beings keeping track of things.  If anything gets too out of hand, the back-room folks stand ready to intervene.

But what if it were computers all the way up?  And what if the computers were, by some meaningful measure, smarter overall than humans?  Would you be able to trust what they told you if you asked them a question? 

This is no idle fantasy.  Military experts have been thinking for years about the hazards of deploying fighting drones and robots with the ability to make shoot-to-kill decisions autonomously, with no human being in the loop.  Yes, somewhere in the shooter robot's past there was a programmer, but as AI systems become more sophisticated and even the task of developing software gets automated, some people think we will see a situation in which AI systems are doing things that whole human organizations do now:  buying, selling, developing, inventing, and in short, behaving like humans in most of the ways humans behave.  The big worrisome question is:  will these future superintelligent entities know right from wrong?

Nick Bostrom, an Oxford philosopher whose book Superintelligence has jacket blurbs from Bill Gates and Elon Musk, is worried that they won't.  And he is wise to worry.  In contrast to what you might call logic-based intellectual power, in which computers already surpass humans, whatever it is that tells humans the difference between right and wrong is something that even we humans don't have a very good handle on yet.  And if we don't understand how we can tell right from wrong, let alone do right and avoid wrong, how can we expect to build a computer or AI being that does any better?

In his book, Bostrom considers several ways this could be done.  Perhaps we could speed up natural evolution in a supercomputer and let morality evolve the same way it did with human beings.  Bostrom drops that idea as soon as he's thought of it, because, as he puts it, "Nature might be a great experimentalist, but one who would never pass muster with an ethics review board—contravening the Helsinki Declaration and every norm of moral decency, left, right, and center."  (The Helsinki Declaration, adopted in 1964, sets out principles of ethical human experimentation in medicine and science.) 

But to go any farther with this idea, we need to get philosophical for a moment.  Unless Bostrom is a supernaturalist of some kind (e. g. Christian, Jew, Muslim, etc.), he thinks that humanity evolved on its own, without help or intervention, and is a product of random processes and physical laws.  And if the human brain is simply a wet computer, as most AI proponents seem to believe, one has to say it has programmed itself, or at most that later generations have been programmed (educated) by earlier generations and life experiences.  However you think about it in this context, there is no independent source of ideal rules or principles against which Bostrom or anyone else could compare the way life is today and say, "Hey, there's something wrong here." 

And yet he does.  Anybody with almost any kind of conscience can read the news, or watch the people around them, and see things going on that we all know are wrong.  But how do we know that?  And more to the point, why do we feel guilty when we do something wrong, even as young children? 

To say that conscience is simply an instinct, like the way birds know how to build nests, seems inadequate somehow.  Conscience involves human relationships and society.  The experiment has never been tried intentionally (thank the Helsinki Declaration for that), but a baby reared in total isolation from human beings—well, something close to this has happened by accident in large emergency orphanages, and the baby typically dies.  We simply can't survive without human contact, at least right after we're born. 

Dealing with other people allows for the possibility of hurting them, and I think that is at least the practical form conscience takes.  It asks, "If you do that terrible thing, what will so-and-so think?"  But a well-developed conscience keeps you from doing bad things even if you are alone on a desert island.  It doesn't even let you live at peace with yourself if you've done something wrong.  So if conscience is simply a product of blind evolution, why would it bother you to do something that never hurt anybody else, but was wrong anyway?  What's the evolutionary advantage in that?

Bostrom never comes up with a satisfying way to teach machines how to be moral.  For one thing, you would like to base a machine's morality on some logical principles, which means a moral philosophy.  And as Bostrom admits, there is no system that most moral philosophers agree on, which means that, since their rival systems can't all be right, most moral philosophers must be wrong about morality. 

Those of us who believe that morality derives not from evolution, or experience, or tradition, but from a supernatural source that we call God, have a different sort of problem.  We know where conscience comes from, but that doesn't make it any easier to obey it.  We can ask for help, but the struggle to accept that help from God goes on every day of life, and some days it doesn't go very well.  And as for whether God can teach a machine to be moral, well, God can do anything that isn't logically contradictory.  But whether he'd want to, or whether he'd just let things take their Frankensteinian course, is not up to us.  So we had better be careful. 

Sources:  Nick Bostrom's Superintelligence:  Paths, Dangers, Strategies was published in 2014 by Oxford University Press.

Monday, September 16, 2019

Facing Google In Your Living Room

An article on cnet.com recently described how Google's new smart assistant, called Google Nest Hub Max, uses facial recognition technology to tell who is talking with it.  This feature has raised privacy concerns, as Google has admitted that they reserve the right to upload facial data from it to the cloud to help improve "product experience."  But whatever Google does legitimately, a hacker might be able to do too, and so we are approaching a time when the telescreens of George Orwell's dystopian fantasy novel Nineteen Eighty-Four have become a reality—not because of the unilateral command of a totalitarian government (at least not in the U. S.), but because we want what they can do.

For those unfamiliar with the novel, Orwell's book was a warning to the free world about what a dictatorship could do with the communications technologies of the future.  Telescreens were two-way televisions on which propaganda from a dictator known only as Big Brother was transmitted, and through which images of whoever was watching were transmitted back to the party's central headquarters.  Orwell was simply extrapolating the efforts of regimes such as the Nazis of the 1930s and the Soviet Union of the 1940s to spy on their populaces twenty-four hours a day to enforce total obedience to the regime. 

At the time the novel was published in 1949, no one took the telescreen-spying idea very seriously, because it would take a huge number of human monitors to spy on a significant number of people.  Carried to its logical extreme, the only way the government could watch everybody would be if half the population spied on the other half, and then took turns. 

But neither Orwell nor anybody else at the time reckoned on the development of advanced artificial-intelligence (AI) systems using facial recognition technology.  In China, the government is deploying many thousands of cameras and collecting facial data from millions of people with the intention of developing a Social Credit rating that scores how well you measure up to the regime's model of the ideal citizen.  If computers watching the cameras catch you going to suspicious places or meetings, your Social Credit score can go in the tank, making it hard to travel, get a job, or even stay out of jail. 

None of that is happening in the U. S., but the fact that a large corporation will now have electronic access to views in millions of private residences should at least give us pause. 

Leaving the hardware aside for the moment, let's examine the difference in motives between a government, such as in Nineteen Eighty-Four, spying on its citizens for the purposes of controlling behavior, and a commercial entity such as Google using images to sell both its own services and advertising for others.  The government spying is motivated by suspicion and fear of what people might be doing while the government isn't watching them.  Whatever the regime sets out as an ideal of behavior, it watches for deviations from that ideal, and punishes those who deviate from it.  Participation is not voluntary, and people have to go to great lengths to avoid being spied on.

Now contrast that with a benign-looking thing such as the Google Nest Hub Max.  Nobody is going to make you buy one.  And if you do, there are ways of turning off the facial-recognition feature, though it will be less convenient to use.  And the device is intended to serve you, not the other way around.  It's sold with the vision portrayed in so many TV ads of people happily using it to make their lives better, not for means of social control like Orwell's telescreens. 

But maybe the differences are not as great as they first appear.  Both the telescreen and the Nest Hub Max are intended to change behavior.  If they don't, they have failed.  True, the ideal behavior that a totalitarian government wants and the ideal behavior that a company wants are two different things.  But neither ideal is the state the citizen-consumer was in before the telescreen or the Nest Hub showed up:  unwatched, and unbenefited by the products or services that the company wants to sell.

Nobody should read this blog and then go around saying "Ahh, Stephan's saying Google is Big Brother and they're trying to take over our lives!"  That's not the comparison I'm making.  My point is that the mere fact of being watched by someone, or something that can inform someone about us, is going to change our behavior.  And that change by itself is significant.

Now, the change may not necessarily be bad.  Already, virtual audio assistant devices such as Alexa have been used in criminal cases when bad actors set them off, either by accident or on purpose, and the data thus generated has proved to be incriminating.  Though this is ancient history, I am told that in the days when some middle-class and upper-class people had servants, families tended to behave better when the servants were around, although I'm sure there were exceptions.  Alexa isn't Jeeves the butler, but as virtual assistants play a more significant role in domestic life, it's not beyond imagination to think that some of the worst behavior in homes—domestic abuse, for example—might be mitigated if the victim could call 911 by just shouting it instead of having to pull out a phone.

I'm not necessarily crying doom and gloom here.  Millions of people are already using virtual assistants with few if any problems, and adding two-way video to the mix will only increase the devices' capabilities.  But we are entering new territory of connectivity here, and it's bound to have some effects that nobody has predicted yet.  Perhaps it's not too helpful to predict that there will be unpredictable effects, but right now that's all I can do.  Let's hope that the security features of the Nest Hub Max are good enough to prevent nefarious use, and that people who buy them are truly happier with them than they were before. 

Sources:  The article "Google collects face data now.  Here's what it means and how to opt out" appeared on Sept. 11, 2019 at https://www.cnet.com/how-to/google-collects-face-data-now-what-it-means-and-how-to-opt-out/#ftag=CADf328eec.  I also referred to Wikipedia articles on Nineteen Eighty-Four and Google Home.  I thank my wife for pointing this article out to me.

Monday, September 09, 2019

Vaping Turns Deadly


At this writing, three people have died and hundreds more have become ill from a mysterious lung ailment that is connected with certain types of e-cigarettes.  The victims typically have nausea or vomiting at first, then difficulty breathing.  Many end up in emergency rooms and hospitals because of lung damage.

Most of the sufferers are young people in their teens and twenties, and all were found to have been using vaping products in the previous three months.  Many but not all were using e-cigarettes laced with THC, the active ingredient in marijuana.  Others were vaping only nicotine, but early analysis indicates that a substance called vitamin E acetate was found in many of the users' devices.  It's possible that this oily compound is at fault, but investigators at the U. S. Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA) have not reached any conclusions yet. 

In fact, the two agencies have released different recommendations in response to the crisis.  The CDC is warning consumers to stay away from all e-cigarettes, but the FDA is limiting its cautions to those containing THC.  Regardless, it looks like someone has put a damper on the vaping party, and that may change a lot of things.

So far, vaping and the e-cigarette industry are largely unregulated, unlike the tobacco industry.  The modern e-cigarette found its first mass market in China in the early 2000s.  The technology was made possible by the development of high-energy-density lithium batteries, among other things.  While vaporizers for medical use have been around since at least the 1920s, it wasn't possible to squeeze everything needed into a cigarette-size package until about fifteen years ago. 

Since then, vaping has taken off among young people.  A recent survey of U. S. 12th-graders shows that about 20% of them have vaped in the last 30 days, up from only about 11% in 2017, the sharpest two-year increase in the use of any drug that the National Institutes of Health has measured in its forty-odd years of conducting such surveys.

The ethical question of the hour is this:  has vaping become popular enough, mature enough, and dangerous enough, that some kind of regulation (either industrial self-policing or governmental oversight) is needed?  The answer doesn't hinge only on technical questions, but on one's political philosophy as well.

Take the extreme libertarian position, for example.  Libertarians start out by opposing all government activity of any kind, and then grudgingly allow certain unavoidable activities that are needed for a nation to be regarded as a nation:  national defense, for instance.  It's not reasonable to expect every household to defend itself against foreign aggression, so most libertarians admit the necessity of maintaining national defense in a collective way. 
           
But on an issue such as a consumer product, the libertarian view is "caveat emptor"—let the buyer beware.  If you choose to buy an off-brand e-cigarette because it promises to have more THC in it than the next guy's does, that's your business.  And if there's risk involved, well, people do all sorts of risky things that the government pays no attention to:  telling your wife "that dress makes you look fat" is one example that comes to mind. 

On the opposite extreme is the nanny-state model, favored generally by left-of-center partisans who see most private enterprises, especially large ones, as the enemy, and feel that government's responsibility is to even out the unfair advantage that huge companies have over the individual consumer.  These folks would regulate almost anything you buy, and have government-paid inspectors constantly checking for quality and value and so on. 

It's impractical to run your own bacteriological lab to inspect your own hamburgers and skim milk, so the government is supposed to do that for you.  Arguably, it's also impractical for vapers to take samples of their e-cigarette's goop and send it to a chemical lab for testing, and then decide on the basis of the results whether it's safe to use that particular product. 

My guess at this point is that sooner or later, probably sooner, the e-cigarette industry is going to find itself subject to government standards for something.  Exactly what isn't clear yet, because we do not yet know what exactly is causing the mysterious vaping illnesses and deaths.  But when we do, you can bet there will be lawsuits, at a minimum, and at least calls for regulation of the industry. 

Whether or not those calls are heeded will depend partly on the way the industry reacts.  Juul, currently the largest maker of vaping products, is one-third owned by the corporate entity formerly known as Philip Morris Companies.  In other words, the tobacco makers have seen the vaping handwriting on the wall, and are moving into the new business as their conventional tobacco product sales flatten or decline. 

The tobacco companies gained a prominent place in the Unethical Hall of Fame when they engaged in a decades-long campaign of disinformation to combat the idea that smoking could hurt or kill you, despite having inside information that it very well could.  In the face of an ongoing disaster such as the vaping illness, this ploy doesn't work so well.  But they could claim that only disreputable firms would sell vaping products that cause immediate harm, and pay for studies that show it's better than smoking and harmless for the vast majority of users.

Sometimes the hardest thing to do is be patient, and that's what we need to do right now, rather than rushing to conclusions that aren't supported by clinical evidence.  Investigators should eventually figure out what exactly is going on with the sick and dying vapers, and once we know that, we'll at least have something to act on.  Until then, if by chance anyone under 30 is reading this blog, take my advice:  leave those e-cigarettes alone.