Monday, July 29, 2019

What Price Safety? The 737 Max 8 Saga Continues


In March and April, I blogged on the tragic and costly software problems plaguing Boeing's 737 Max 8 jetliner.  Briefly, after two crashes in Indonesia and Ethiopia in which a total of 346 people died, evidence pointed to a problem in the plane's automated flight-control software, and the U. S. Federal Aviation Administration (FAA) grounded the plane in March after numerous other nations did the same.  In May, Boeing claimed that it had fixed the software problem, and since then Boeing and the FAA have been running extensive tests to verify that the problem has in fact been solved.  On June 3, Boeing CEO Dennis Muilenburg said that he expected the FAA to declare the plane flightworthy before the end of the year, but declined to give a specific timeline. 

In the meantime, all 387 existing MAX 8s are sitting on the ground instead of flying and generating revenue for the airlines that own them.  This has caused big headaches for both American Airlines and Southwest, which recently announced that it is terminating service to New Jersey's Newark Airport simply because it doesn't have enough planes owing to the MAX 8 groundings.  And American's losses are running in the range of $400 million, largely due to the groundings.

Most of the time, when software fails to do what it should, the consequences are fairly minor.  If it's one feature on some software on your laptop that acts up, maybe you lose some work, or even get so turned off by the problem that you swear never to buy that software again. But you remain healthy and nobody dies.

Then there's the whole issue of software security, and making sure malevolent attacks don't disable or otherwise inconvenience users.  Software companies are used to dealing with such things by now, and generally stay up to date with patches that prevent hackers from doing major damage, as long as the users install the patches.

These kinds of environments are what most software developers are used to working in.  The bigger the organization and the more critical the software, the more bureaucracy is involved, but that's not necessarily a bad thing.  I spoke with a software engineer many years ago who worked for a regional telecommunications company.  She told me that she'd been spending most of the previous year on changing exactly one line of code.  The reason it took so long was that a bunch of other engineers had to take that change and try it out in all sorts of other situations and find out what its ramifications were, and whether it would cause problems down the road. 

Telecom companies are rather shielded from competition, so taking a year to change one line of code may be fairly routine, for all I know.  So maybe we shouldn't be that surprised if it takes six more months for the FAA to make sure that the changes Boeing has made to its 737 MAX 8s will really make things better and not worse. 

Thing is, the phone company didn't have to shut down and wait for my software engineer friend to finish her job.  But when software is intimately tied in with a multimillion-dollar piece of hardware that you can't use just a little of, and the software makes the whole thing unusable, it creates a spectacle that we haven't seen since the week or so after 9/11/2001 when all domestic U. S. flights were grounded.  And that period, plus the general fear of flying it engendered, hit the airlines with an economic punch that took them years to recover from.

Fortunately, the MAX 8 problem doesn't appear to have frightened people away from flying in general.  Because of the scarcity of seats, the airlines have been able to charge more, and so revenues at American and Southwest are actually up, despite the shortage of planes.  Nevertheless, Boeing has set aside nearly $5 billion in case it ends up having to pay its customers for loss of revenue, and lots of airlines around the world are going to think very hard before they place any more orders with Boeing.

Unlike mechanical failures, software failures are not simply a function of physics.  Software is so dynamic and dependent on the exact conditions and history of its environment that it is virtually impossible to "prove" it won't fail under any circumstances, except in rare and rather academic cases.  Some day, I hope the whole history of this fiasco will come out, as it will be a fascinating study in how software engineering ethics failed in this instance, and it will harbor lessons for how safety-critical software should not be written. 

The problem with such a story may be that it could be too hard for anybody except specialized software engineers to understand.  But then again, it may boil down to management problems, as so many ethical issues do.  Already there has been speculation that the FAA was allowing Boeing to conduct too many of its own safety tests, and basically just taking Boeing's word for it that everything was okay.  Only when we have enough details about how the problems happened and how they were fixed, can we judge whether the FAA has been lax or negligent in this area.

In the meantime, software engineers everywhere except Boeing can be glad that their work is not going under the microscope of the FAA's inspection.  But there are plenty of other types of software that are life-critical:  for example, software for medical devices, automotive software, even the software that lets first responders communicate with each other.  A failure with any of these products can have life-threatening implications. 

So maybe the lesson here for software engineers is:  program as though your life depended on it.  If more programmers had that attitude, we'd all have much better software.  Maybe not so much of it, but that might not be a bad thing either.

Sources:  The report describing CEO Muilenburg's comments appeared on the CNBC website on June 3, 2019 at https://www.cnbc.com/2019/06/03/boeing-plans-to-fly-a-boeing-737-max-certification-flight-soon-ceo-says.html.  Reuters reported on Southwest leaving Newark at https://www.reuters.com/article/us-american-airline-results/boeing-737-max-groundings-plague-u-s-airlines-frustrated-southwest-exits-newark-idUSKCN1UK1N5.  I also referred to the Wikipedia articles "Boeing 737 MAX" and "Boeing 737 MAX groundings."

Monday, July 22, 2019

Seeing Hasn't Been Believing For Some Time

As you may have been reminded recently, the second person to walk on the moon was Edwin Eugene "Buzz" Aldrin Jr., who accompanied Neil Armstrong in the lunar module during the Apollo 11 flight in July of 1969.  A common problem faced by the lunar astronauts was what to do with the rest of your life afterwards, and one thing Aldrin did was to make public appearances about his astronaut career.  On Sept. 9, 2002, Aldrin showed up at a Beverly Hills hotel expecting to be interviewed on camera for a Japanese children's television program.  Instead, waiting for him was one Bart Sibrel, a sometime documentary filmmaker who has made a career out of promoting the idea that NASA's moon landings were all faked in a secret CIA-operated studio.  Sibrel, accompanied by his own film crew, aggressively tried to get Aldrin to swear on a Bible that he had landed on the moon, and when Aldrin declined to do that in front of cameras and told Sibrel to leave him alone, Sibrel called him a "thief, liar, and coward" and reportedly backed him against a wall and poked him with the Bible.  In response, the 72-year-old Aldrin punched the 250-pound Sibrel in the jaw, but no charges were filed. 

In effect, Sibrel claims that all the visual evidence (as well as physical evidence in the form of moon rocks distributed around the world) showing that men have been to the moon was an elaborate "deepfake."  The word had not been invented in 1969, but the concept of lying is as old as humanity.  For reasons of his own, Sibrel thinks (or at least appears to think) that NASA and the CIA concocted an extremely elaborate lie and backed it up with artificially-generated visual evidence. 

In 1969, there was no such thing as advanced computer graphics that could take images from different sources and combine them seamlessly to make it look like, for example, actor Tom Hanks was in the same room with President John F. Kennedy, as the movie "Forrest Gump" showed in 1994 in one of the first films that took advantage of computer-generated imagery (CGI).  Well, the great democratizing force known as IT has now brought the simpler kinds of digital fakery to the masses.  Some people are worried that simple tricks such as speeding up or slowing down authentic videos will cause more trouble than the sophisticated deepfakes that even experts have problems detecting. 

A recent Associated Press article describes how U. S. Speaker of the House Nancy Pelosi was made to appear physically impaired by simply playing back a recording of her at a slower speed.  While some such tricks are pulled purely for satirical purposes, digital forensics expert Hany Farid worries that unsophisticated voters will be fooled by them nonetheless.  He called the doctored Pelosi video, which got over two million views on Facebook, "a canary in a coal mine," and expects the 2020 election year to see many more such crude trick fakes, which one could call shallowfakes.

According to some studies quoted in the article, some groups of voters (older ones and "ultra-conservatives," whatever that means) tend to trust videos more and will retweet, and otherwise treat as credible, videos that younger and less conservative people will quickly recognize as having been altered.  I have seen this sort of effect myself as I have watched an otherwise sensible and well-balanced woman of my acquaintance lap up stuff on Facebook that I consider arrant nonsense, on occasion. 

It's possible that such people hold all Facebook information in a different category in their minds from things that people they trust tell them in person, and that it's a sort of entertainment more than a serious search for the real truth.  But all this lies in the uncertain realm of what may influence voters, which includes everything from the weather to the state of one's health.  And as such, we can only speculate on its effects.

Two possible ways mentioned by the news article to mitigate any negative effects of shallowfakes are so-called "downranking" by social media outlets such as Facebook, and adding fact-checking information to suspicious-looking videos, such as labels saying "This video has been altered from its original state."  The problem with both approaches is that they put the social-media operators in the role of censor, or at least editor.  And depending on the political character of the shallowfake and the political leanings of the censor or editor, someone is sure to cry "Foul!" at such actions, or at least point out equally egregious shallowfakes on the opposite end of the political spectrum that haven't been censored, edited, or labeled as such.  That is a quagmire that organizations like Facebook may not care to wade into, and I'm not sure they should.

One thing is for certain:  the video-altering genie of cheap and easily available software is not going back into its bottle.  And any thought of government regulation or censorship moves us in the direction of dictatorships like China, where members of minority groups such as Uighurs get sent to what amounts to concentration camps simply for trying to get in touch with other Uighurs on social media. 

So perhaps the answer is a better-educated electorate.  Civics education in this country is reportedly in terrible shape anyway, as many history texts seem to concentrate on everything bad our forefathers did (excuse me, should I say "forepeople"?  No, I shouldn't.).  I do recall one lesson I learned from a high-school history book, which was that after the famous purges of the 1930s in the Soviet Union where whole rafts of bureaucrats were executed at Stalin's whim, new editions of history books carried photos of the Great Leader from which certain undesirables had been airbrushed away, without comment.  I was warned in the 1970s against that sort of thing happening, and so today we should warn our high-schoolers against the danger of both deepfakes and shallowfakes.  But after that, it's up to them to judge—and to vote.

Sources:  The Associated Press article about deepfakes and shallowfakes by Beatrice Dupuy and Barbara Ortutay, dated July 19, was carried by many news outlets, including the Longview (Texas) News-Journal at https://www.news-journal.com/ap/politics/deepfake-videos-pose-a-threat-but-dumbfakes-may-be-worse/article_20f82829-5abd-5e83-8456-2630fc4b097b.html.  The Aldrin-Sibrel incident is described in the Wikipedia article "Buzz Aldrin."

Monday, July 15, 2019

The Moon, Mars, or Stay Home?


This coming Saturday marks the fiftieth anniversary of the first landing of humans on the moon.  I remember staying up late in my bathrobe and watching the blurry images on our old black-and-white tube-model TV as Neil Armstrong first set foot on the dusty surface.  I was no more moonstruck than most fifteen-year-olds were at the time.  I enjoyed the attention that engineers and high technology were getting as a part of the space program.  But the geopolitical forces that led up to the space race in the first place and the reasons why the U. S. government was spending so much money on it were things I was almost completely ignorant of. 

NASA is still very much with us, though almost a shadow of its 1960s self in terms of its percentage of the federal budget.  The questions of whether and how to spend the many billions of dollars it would cost to either return to the moon with manned spaceflights, or eventually go beyond the moon to Mars, will inevitably rise as we look back on what turned out to be a basically one-trick achievement.  This is not to belittle the incredibly complex and, overall, well-executed program that took men to the moon.  And if you want to connect the dots that go from the lunar landings to the Star Wars research initiatives to the fall of the Berlin Wall and the collapse of the Soviet Union, you can view the Apollo program as the most successful battle in the war against global communism, a war that the West won without starting a nuclear conflict. 

But all that is history, and now the world faces the question of what to do next in space.  The answer depends on which country you ask.

China makes no secret of the fact that they want to land Chinese citizens on the moon and establish a permanently-staffed lunar base.  That is also the goal of one version of plans that NASA has been discussing.  According to an article in Physics Today, during the Obama administration NASA examined the costs associated with setting up a manned lunar base:  $60 to $80 billion.  Faced with such a price tag, the agency instead proposed a plan involving lunar orbiters, landing on an asteroid, and eventually getting to Mars, but it attracted little support and became a dead letter.

More recently, Vice President Pence announced plans to land both men and women on the moon by 2024, but Congress refused to add the $1.6 billion needed for NASA to start planning for such an early date, so it looks like the political climate is more of a problem than any strictly technical issues.

As the barnacles of bureaucracy keep accreting on the U. S. ship of state, achievements of the past look even more impressive than they did at the time.  Taking the technology of 1961, when many if not most electronic systems used vacuum tubes, and moving it forward to the point that we landed men on the moon and got them back safely in only two Presidential election cycles, was truly a stunning achievement. 

But the country was more unified then, and nations that are deeply divided have problems uniting around any goal that isn't clearly for immediate self-preservation.  Nevertheless, it's possible that younger people could unite around a space program that manages to establish a permanent outpost on another planet.

In my work as an educator at the college level, I run across students who, despite their precociously mature and somewhat cynical attitudes, show their support for space efforts by their desire to work for bold, exciting companies like SpaceX or Blue Origin, the Jeff-Bezos-funded space firm.  I suppose it's the latest version of the pioneering spirit that led adventurous Swedes, Poles, and Englishmen to endure the hardships of a transatlantic voyage in the age of sail to explore and settle the unknown territories of the New World.  We've pretty much run out of that sort of thing on this planet, so the moon or Mars is the "final frontier," as the old Star Trek series used to tell us.  It's no coincidence that the peak of that show's popularity was the late sixties, although its descendants have a cult following that continues to this day. 

Science fiction is one thing, but taking a substantial fraction of a nation's gross domestic product and spending it on sending a few people out of this world is not to be undertaken lightly, even if it is privately funded instead of paid for by governments.  It's the same kind of thing that the Egyptians did when they built the Pyramids, and it's no accident that most of the great construction achievements of the ancient and medieval world—pyramids, tombs, temples, cathedrals—involved religion in some way.  With those who reply "none" to the pollster's question about religious belief increasing in our U. S. population, it seems pointless to hope that an explicitly religious motive could be found to unite people behind a new effort to go to either the moon or Mars. 

But as the quasi-religious devotion of some Trekkies to the Star Trek franchise shows, some entirely secular things can become a religion for some people.  A lot of the folks who work on the search for extraterrestrial intelligence (SETI) have a kind of religious fervor about their jobs.  And a good many Silicon Valley types believe with an almost religious conviction that once we overheat this planet we'll have no choice but to load up our interplanetary U-Hauls and move to another one. 

The danger in that kind of thinking is that it can begin to inculcate the attitude that any sacrifice in the present is worth doing for the future paradise that awaits.  Though I don't see any sign that this is happening yet, one could imagine a dictatorial government forcing its people into grinding poverty for the sake of a space program that at most could benefit a few dozen individuals directly.  The same sacrifice of present goods for the ever-receding magnificent future was the way that Communism tempted (and still tempts) people to do highly immoral things right now for the sake of imagined future generations.  The way things look now, we're in little danger of doing that in the U. S., but it might happen in China. 

If we do land on the moon by 2024 (only five years from now), I will be happy to watch a full-color, 4K-definition image of someone young enough to be Neil Armstrong's grandson (or granddaughter) set foot on the moon once again.  But I'm not placing any bets on it, and not just because I'm not the gambling type.

Sources:   Physics Today's article "Quo Vadis, NASA:  The Moon, Mars, or both?" by David Kramer appeared on pp. 22-26 of the July 2019 issue. 

Monday, July 08, 2019

Should Engineers Think Like Computers?



Peter Kreeft is a philosopher at Boston College and author of a textbook on Socratic logic.  So far, so dull, you may think.  I have heard Prof. Kreeft speak on numerous occasions, and "dull" does not apply either to his speaking or his writing.  In the introductory section of his text, he says something that I think engineers need to hear.  It's about ways of thinking and knowing, and the two different competing kinds of logic that are taught today.

If you take a course on logic in college these days, it is most likely going to be what is called "symbolic logic" or "formal logic."  In fact, most electrical engineering undergrads get a sample of this kind of logic when they take digital electronics.  It turns out that digital computers are built out of things called logic gates.  One of the simplest logic gates, called an AND gate, has two inputs and one output.  As you may know, numbers in digital computers are in binary form, meaning each digit (or bit) has the value of either 1 or 0.  Typically, 1 is represented by a high voltage and 0 by a low voltage.  An AND gate looks at its two inputs, and its output is high only when both one input AND the other input are high.  Otherwise, its output is low.
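For readers who like to see such things spelled out, the behavior of an AND gate can be sketched as a two-input truth table.  This is only a minimal illustration of the logic, of course, not of how a gate is physically built:

```python
# A two-input AND gate: the output is 1 only when
# both inputs are 1 (i.e., both voltages are high).
def and_gate(a: int, b: int) -> int:
    return a & b  # bitwise AND of two one-bit values

# Print the full truth table.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {and_gate(a, b)}")
```

Running this prints the four rows of the table, with a 1 appearing only on the last row, where both inputs are 1.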
           
If you understood that, you have a firm grasp of an element of what is called Boolean algebra, named after the nineteenth-century logician George Boole, who formalized this type of thinking in an attempt to bring the discipline and clarity of mathematics to logical thinking.  Boole's project was taken up by others, and now most philosophers think, do, and teach highly mathematical symbolic logic when they do any kind of logic at all.
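Boole's insight was that logical reasoning could be reduced to algebraic identities over the values 0 and 1.  As one small illustration (my example, not Prof. Kreeft's), a famous identity known as De Morgan's law says that NOT (a AND b) is the same as (NOT a) OR (NOT b), and because there are only four possible input combinations, we can check it exhaustively:

```python
# Brute-force check of De Morgan's law over all binary inputs:
# NOT (a AND b) is equivalent to (NOT a) OR (NOT b).
def check_de_morgan() -> bool:
    for a in (0, 1):
        for b in (0, 1):
            lhs = 1 - (a & b)        # NOT (a AND b)
            rhs = (1 - a) | (1 - b)  # (NOT a) OR (NOT b)
            if lhs != rhs:
                return False
    return True

print(check_de_morgan())  # True: the identity holds for every input
```

This kind of exhaustive verification is exactly the sort of mechanical manipulation that symbolic logic, and the computers built on it, excel at.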

But there's a problem with symbolic logic, a problem it shares with computers, whose structure is nothing more than a physical embodiment of symbolic logic.  The old expression "garbage in, garbage out" applies to symbolic logic as well as it does to computers.  Symbolic logic can do marvelously complicated things with its inputs, but it doesn't "know" what the inputs are any more than a copy machine knows Shakespeare after you've copied a page of King Lear with it.  Symbolic logic says nothing about the truth or reality of what you give it.  To understand what things really are, you have to get outside the pristine mathematical structure of symbolic logic and embrace what Prof. Kreeft calls Socratic logic.

It could just as well be called Aristotelian logic or classical logic.  You can get a sense of what is lost when symbolic logic is treated as the only way to think by reading what Prof. Kreeft says about reason and logic: 

"The very nature of reason itself is understood differently by the new symbolic logic than it was by the traditional Aristotelian logic.  'Reason' used to mean essentially 'all that distinguishes man from the beasts,' including intuition, understanding, wisdom, moral conscience, and aesthetic appreciation, as well as calculation.  'Reason' now usually means only the last of these powers." (Socratic Logic, edition 3.1, p. 22)

He goes on to say that as first philosophers and then people in general have accepted these definitions of logic and reason, it's become harder to understand some concepts that earlier ages knew almost without thinking.  Take one of the most basic questions that even five-year-olds can ask:  "What is it?"  That goes to the heart of the being of a thing:  what is a horse or a man essentially?  Not just what is it made of, but what is its true nature?  Symbolic logic doesn't ask such questions.  It is a logic of manipulation and process.  As Kreeft says, instead of trying to understand the fundamental essentials of things, we "think about how we feel about things, about how we can use them, how we see them behave, how they work, how we can change them, or how we can predict and control their behavior by technology."  That last phrase—predicting and controlling behavior by technology—is largely what engineers do. 

And it's a good thing to do, rightly applied, and symbolic logic is essential to the task, especially these days now that everything engineers do involves information technology.  But it is a great mistake to think that symbolic logic is all we need to make our way in the world, and we can safely ignore the questions of what a thing really is on our way to bending it to our technological will.

As Kreeft points out, animals have feelings, use their environment, and even make use of rudimentary tools in some cases.  But we do not share with animals the higher operations of the human mind, including the ability to see connections between apparently different things, and to achieve true understanding about universal concepts such as freedom or love.  No animal and no computer (sorry, all you artificial-intelligence folks) will be able to do that. 

Kreeft is concerned that we will give in to the ancient temptation to idolize the work of our hands:  symbolic logic and the computers it gave rise to.  But if we do so, we will lower ourselves to the level of the beasts, and neglect those powers of the mind recognized by classical logic as what makes the difference between humans on the one hand, and animals and computers on the other hand. 

The takeaway from all this is simply to remind us that scientific thinking in the sense of symbolic logic is not the only kind of thinking or knowing that there is.  Engineers of all people should recognize that they do their work in the context of human society, which is forever beyond the grasp of symbolic logic to analyze or comprehend.  Something more is required.  We can call that something classical logic, or wisdom, or even faith.  But to limit ourselves to the kind of logic that computers can do is to leave our humanity behind. 

Sources:  Peter Kreeft's Socratic Logic, edition 3.1 was published by St. Augustine's Press in 2010.