Monday, October 07, 2019

Pilot Overload and the Boeing 737 Max Accidents


In the last couple of months, new information has emerged about the factors leading to the crashes of two Boeing 737 Max aircraft and the loss of 346 lives.  All 737 Max aircraft were grounded indefinitely last March after investigators found that a software flaw, combined with faulty data from an angle-of-attack sensor, started a chain of events that led to the crashes.  Airline companies around the world have lost millions as their 737 Max fleets sit idle, and Boeing has been under tremendous pressure from both international regulatory bodies and the market to come up with a comprehensive fix.  But as long as humans and computers have to work together to fly planes, the humans will need training to deal with unusual situations that the computers create.  And in the case of the Lion Air and Ethiopian Airlines crashes, it looks like whatever training the pilots received left them inadequately prepared for at least one situation that led to tragedy.

Modern fly-by-wire aircraft are certainly among the most complex mobile systems in existence today.  It is literally impossible for engineers to think of every conceivable combination of failures that pilots would have to handle in an emergency, simply because there are so many subsystems that can interact in almost countless ways.  But so far, airliner manufacturers have done a pretty good job of identifying the major failure conditions that would be life-threatening, and instructing pilots about how to deal with those.  The fact that Capt. Chesley Sullenberger was able to land a fly-by-wire Airbus A320 plane in the Hudson in 2009 after experiencing failure of all engines shows that humans and computers can work together cooperatively to deal with unusual failures.

But the ending was not so happy with the 737 Max flights, and recent news from regulators indicates that a wild combination of alarms, stick-shakings, and other distractions may well have paralyzed the pilots of the two planes that crashed, once faulty readings from angle-of-attack sensors set everything off. 

Flying a modern jetliner is a little like what I am told it was like being in the army during World War II.  For many soldiers, the experience was a combination of long stretches of incredible tedium interrupted by short but terrifying bursts of combat.  It's psychologically hard for a person to remain alert and ready for any eventuality when the norm is that pretty much nothing out of the routine ever happens.  So when the unusual failure of an angle-of-attack sensor led to a burst of alarms and the flight computer's attempts to push the nose down, the pilots on the ill-fated flights apparently could not cope with the confusion and sort through the distractions in order to do the correct thing.

A month after the Lion Air crash in 2018, the FAA issued an emergency order telling pilots what to do in this particular situation.  Read in retrospect, it resembles instructions on how to thread a needle in the middle of a tornado: 

            ". . . An analysis by Boeing found that the flight control computer, should it receive faulty readings from one of the angle-of-attack sensors, can cause 'repeated nose-down trim commands of the horizontal stabiliser'.  The aircraft might pitch down 'in increments lasting up to 10sec', says the order.  When that happens, the cockpit might erupt with warnings.  Those could include continuous control column shaking and low airspeed warnings – but only on one side of the aircraft, says the order.  The pilots might also receive alerts warning that the computer has detected conflicting airspeed, altitude and angle-of-attack readings. Also, the autopilot might disengage, the FAA says.  Meanwhile, pilots facing such circumstances might need to apply increasing force on the control column to overcome the nose-down trim. . . . They should disengage the autopilot and start controlling the aircraft's pitch using the control column and the 'main electric trim', the FAA say. Pilots should also flip the aircraft's stabiliser trim switches to 'cutout'. Failing that, pilots should attempt to arrest downward pitch by physically holding the stabilizer trim wheel, the FAA adds."

If I counted correctly, there are six separate actions a pilot is being told to take in the midst of a chaos of bells and whistles going off and his plane repeatedly trying to fly itself into the ground.  The very fact that the FAA issued such a warning with a straight face, so to speak, should have set off alarms of its own.  And after the second crash under similar circumstances, reason prevailed, but first with regulatory agencies outside the U. S.  Finally, the FAA complied with the growing global consensus and grounded the 737 Max planes until the problem could be cleared up.
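Just to make the workload concrete, the actions in the order can be laid out as an ordered checklist.  The wording and the grouping into six items below are my own paraphrase of the passage quoted above, not the FAA's official quick-reference card:

```python
# Runaway-stabilizer response, paraphrased from the November 2018 FAA
# emergency order quoted above.  The step wording and the grouping into
# six items are this author's reading, not the official checklist.
RUNAWAY_TRIM_CHECKLIST = [
    "Apply increasing force on the control column to oppose nose-down trim",
    "Disengage the autopilot",
    "Control the aircraft's pitch with the control column",
    "Re-trim using the main electric trim",
    "Set the stabilizer trim switches to CUTOUT",
    "If trim still runs away, physically hold the stabilizer trim wheel",
]

# Print the checklist in order, numbered from 1.
for number, action in enumerate(RUNAWAY_TRIM_CHECKLIST, start=1):
    print(f"{number}. {action}")
```

Written out coldly like this, it looks manageable; recalling and executing it amid shaking controls and conflicting alarms is another matter entirely.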

When software is rigidly dependent on data from sensors that convey only a narrowly defined piece of information, and those sensors go bad, the computer behaves like the broomstick in the Disney version of Goethe's 1797 poem, "The Sorcerer's Apprentice."  It goes into an out-of-control panic, and apparently the pilots found it was humanly impossible to ignore the panicking computer's equivalent of "YAAAAH!" and do the six or however many right things that were required to remedy the situation. 

It is here that an important difference between even the most advanced artificial-intelligence (AI) system and human beings comes to the fore.  It is the ability of a human being to maintain a global awareness of a situation, flexibly enlarging or narrowing the scope of attention as required.  Clearly, the software designers felt that once they had delivered an emergency message to the pilot, the situation was no longer their responsibility.  But insufficient attention was paid to the fact that in the bedlam of alarms that the unusual simultaneous sensor failure caused, some pilots—even though they were well trained by the prevailing standards—simply could not remember the complicated sequence of fixes required to keep their planes in the air.

Early indications are that the 737 Max "fix," whatever software changes it involves, will also involve extensive pilot retraining.  We can only hope that the lessons learned from the fatal crashes have been applied, and that whenever such unusual sensor failures happen in the future, pilots will not have to perform superhuman feats of concentration to keep the plane from crashing itself.

Sources:  A news item about how Canadian regulators are looking at the pilot-overload problem appeared on the Global News Canada website on Oct. 5, 2019 at https://globalnews.ca/news/5995217/boeing-737-max-startle-factor/.  The November 2018 FAA directive to 737 Max pilots is summarized at https://www.flightglobal.com/news/articles/faa-order-tells-how-737-pilots-should-arrest-runawa-453443/.  I also referred to Wikipedia's articles on the Boeing 737 Max groundings, Chesley Sullenberger, and The Sorcerer's Apprentice. 

Monday, September 30, 2019

Jonathan Franzen Gives Up On Controlling the Climate


Jonathan Franzen is a novelist and also writes essays that are published in places like The New Yorker.  As he admits, he's not a scientist or a policy wonk, but that doesn't keep him from putting his oar in on climate change. 

In a recent essay posted on The New Yorker's website entitled "What If We Stopped Pretending?" Franzen gives what at first glance appears to be a counsel of despair.

First, he admits that anybody under thirty is "all but guaranteed" to witness what he calls the "radical destabilization of life on earth—massive crop failures, apocalyptic fires, imploding economies, epic flooding, hundreds of millions of refugees fleeing regions made uninhabitable by extreme heat or permanent drought."  This will happen when "climate change, intensified by various feedback loops, spins completely out of control."  The only way to keep this from happening, according to authorities he cites such as the Intergovernmental Panel on Climate Change, is if every major greenhouse-gas-emitting nation on the planet imposes what amounts to a climate dictatorship:  instituting "draconian conservation measures, shut[ting] down much of its energy and transportation infrastructure, and completely retool[ing] its economy."  And that means everybody, not just folks who agree with the idea.  And here he gets personal:  "Making New York City a green utopia will not avail if Texans keep pumping oil and driving pickup trucks."  (I live in Texas, but I don't personally drive a pickup truck.)

Then he says in effect, "Hey, I'm a realist.  This isn't going to happen.  So you know what?  I'm giving up on it.  We might as well face it:  the apocalypse is coming, and we better just get ready for it."  We shouldn't quit trying to reduce carbon emissions, but we also shouldn't con ourselves into believing that our little token individual actions are going to make much difference. 

He winds up his essay by encouraging people to make their own little corner of the world better in whatever way they can—improving democratic governance, helping the homeless, and just generally being good citizens, whether or not it makes a difference in climate change.  "To survive rising temperatures, every system, whether of the natural world or of the human world, will need to be as strong and healthy as we can make it."  In other words, we should fight smaller battles we have a reasonable chance of winning instead of putting all our eggs in the basket of averting climate change.

There is a syndrome that workers in the helping professions call "compassion fatigue."  Even if a naturally compassionate person chooses a job such as assisting Alzheimer's patients or children with terminal cancer, constantly having to come up with sympathy for someone who isn't going to get better can be tremendously draining.  And after months or years of such work, some people simply burn out—they can't take it anymore. 

Something like this seems to have happened to Franzen.  If he's like many people who see climate change as the most important existential threat to humanity, it's the kind of thing that you can never quite put out of your mind.  If you're not actively part of the solution, out there with Greta Thunberg protesting on the steps of the UN, then you're part of the problem merely by living a normal life in the U. S.  It's understandable that Franzen would choose to unburden himself by saying publicly, "Look, let's face it.  The train's coming at us in the tunnel and there's no way out.  Let's use the time we have to make things better, rather than fooling ourselves into thinking we can stop the train."

I'm not a climate scientist either, but I'm willing to make a prediction that I feel very confident about.  The way that climate change actually plays out is not going to fit anybody's prediction exactly, simply because it's far too complicated and long-term for anyone to predict with accuracy.

In 2018, the peak level of carbon dioxide in the atmosphere was 407 parts per million, up about 2.5 ppm from the previous year.  Some 20 million years ago, levels were about that high, and the all-time record for carbon dioxide, according to various estimates that scientists have made, was around 2000 ppm some 200 million years ago.  So it's not as though the planet has never seen such high levels before.  Life survived, although many species went extinct and others arose to take their places.
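To put the current growth rate in rough perspective, here is a back-of-the-envelope extrapolation.  It assumes the recent rise of about 2.5 ppm per year simply continues unchanged, which is my simplifying assumption for illustration, not anyone's forecast; the commonly cited preindustrial baseline of about 280 ppm is likewise an outside figure, not from the sources above:

```python
# Crude linear extrapolation of atmospheric CO2.  Assumes the recent
# ~2.5 ppm/year growth rate stays constant, which it almost certainly
# won't; this is an illustration, not a climate projection.
PREINDUSTRIAL_PPM = 280.0   # commonly cited pre-industrial baseline (assumption)
LEVEL_2018_PPM = 407.0      # peak level cited above
GROWTH_PPM_PER_YEAR = 2.5   # recent annual increase cited above

def years_until(target_ppm, start_ppm=LEVEL_2018_PPM, rate=GROWTH_PPM_PER_YEAR):
    """Years after 2018 until CO2 reaches target_ppm, under constant linear growth."""
    return (target_ppm - start_ppm) / rate

doubling = 2 * PREINDUSTRIAL_PPM  # 560 ppm: doubled preindustrial level
# (560 - 407) / 2.5 = 61.2 years, i.e. around the year 2079
print(f"Doubled preindustrial CO2 ({doubling:.0f} ppm) would arrive around "
      f"{2018 + years_until(doubling):.0f} at a constant 2.5 ppm/year")
```

Even under this deliberately crude straight-line assumption, doubling the preindustrial level takes most of a century, which is part of why the long-term trajectory is so hard to predict in detail.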

Now admittedly, we are doing a radical thing to the planet, and there will be consequences.  But just as the way an individual human deals with a threatening crisis affects the outcome, the way human beings deal with what may turn into a climate crisis will also affect the future of humanity. 

When Franzen writes that climate change may "spin out of control," I would point out that strictly speaking, climate has never been under our control.  True, you can adjust a thermostat that is labeled "Climate Control," but its influence is limited to your house.  For all of human history, the weather is something that human beings have simply had to accept, not something they could control in any meaningful sense.  We are now engaged in the first-ever unintentional attempt at climate control, or at any rate climate influence, by emitting so much carbon dioxide, and in the coming years and decades we will be scrambling to deal with the consequences. 

But not in the way Franzen fantasizes in his scenario to stop worldwide emissions.  If the world really shut down much of its energy and transportation infrastructure, that in itself would cause economies to implode.  In that case, the cure for climate change would be just as bad as the disease. 

The only way humans have survived on this planet as long as we have is that we are adaptable.  If crops start failing in some parts of the world, growing conditions may improve in others.  If coastlines shrink, people have the ability to move, assuming their governments will let them.  Franzen has caught a lot of flak for his essay, but I think he ends up in a better place than a lot of other people who keep banging the same drum in favor of a global climate dictatorship.  I agree with his advice to do what you can to limit climate change, but mainly, start with yourself:  be a better person, and make the part of the world you can control a better place, no matter how warm it gets.

Sources:  Jonathan Franzen's essay "What If We Stopped Pretending?" appeared on Sept. 8, 2019 on The New Yorker website at https://www.newyorker.com/culture/cultural-comment/what-if-we-stopped-pretending.  For historical numbers on carbon dioxide levels, I consulted a graph published in Nature at https://www.nature.com/articles/ncomms14845/figures/4. 

Monday, September 23, 2019

Moral Machines?


By now you may be used to asking your phone or Siri or Alexa questions and expecting a reasonable answer.  Alan Turing's 1950 dream that computers might one day be powerful enough to fool people into thinking they were human is realized every time someone calls a phone tree and thinks the voice on the other end is a human when it's actually a computer.

The programmers setting up artificial-intelligence virtual assistants such as Siri and human-sounding phone trees aren't necessarily trying to deceive consumers.  They are simply trying to make a product that people will use, and so far they've succeeded pretty well.  Considering the system as a whole, the AI part is still pretty low-level, and somewhere in the back rooms there are human beings keeping track of things.  If anything gets too out of hand, the back-room folks stand ready to intervene.

But what if it were computers all the way up?  And what if the computers were, by some meaningful measure, smarter overall than humans?  Would you be able to trust what they told you if you asked them a question? 

This is no idle fantasy.  Military experts have been thinking for years about the hazards of deploying fighting drones and robots with the ability to make shoot-to-kill decisions autonomously, with no human being in the loop.  Yes, somewhere in the shooter robot's past there was a programmer, but as AI systems become more sophisticated and even the task of developing software gets automated, some people think we will see a situation in which AI systems are doing things that whole human organizations do now:  buying, selling, developing, inventing, and in short, behaving like humans in most of the ways humans behave.  The big worrisome question is:  will these future superintelligent entities know right from wrong?

Nick Bostrom, an Oxford philosopher whose book Superintelligence has jacket blurbs from Bill Gates and Elon Musk, is worried that they won't.  And he is wise to worry.  In contrast to what you might call logic-based intellectual power, in which computers already surpass humans, whatever it is that tells humans the difference between right and wrong is something that even we humans don't have a very good handle on yet.  And if we don't understand how we can tell right from wrong, let alone do right and avoid wrong, how do we expect to build a computer or AI being that does any better?

In his book, Bostrom considers several ways this could be done.  Perhaps we could speed up natural evolution in a supercomputer and let morality evolve the way it did in human beings.  Bostrom drops that idea as soon as he's thought of it, because, as he puts it, "Nature might be a great experimentalist, but one who would never pass muster with an ethics review board—contravening the Helsinki Declaration and every norm of moral decency, left, right, and center."  (The Helsinki Declaration, adopted in 1964, sets out principles of ethical human experimentation in medicine and science.) 

But to go any further with this idea, we need to get philosophical for a moment.  Unless Bostrom is a supernaturalist of some kind (a Christian, Jew, or Muslim, for example), he thinks that humanity evolved on its own, without help or intervention, as a product of random processes and physical laws.  And if the human brain is simply a wet computer, as most AI proponents seem to believe, one has to say it has programmed itself, or at most that later generations have been programmed (educated) by earlier generations and life experiences.  However you think about it in this context, there is no independent source of ideal rules or principles against which Bostrom or anyone else could compare the way life is today and say, "Hey, there's something wrong here." 

And yet he does.  Anybody with almost any kind of conscience can read the news or watch the people around them and see things going on that we know are wrong.  But how do we know that?  And more to the point, why do we feel guilty when we do something wrong, even as young children? 

To say that conscience is simply an instinct, like the way birds know how to build nests, seems inadequate somehow.  Conscience involves human relationships and society.  The experiment of rearing a baby in total isolation from human beings has never been tried intentionally (thank the Helsinki Declaration for that), but something close to it has happened by accident in large emergency orphanages, and such babies typically die.  We simply can't survive without human contact, at least right after we're born. 

Dealing with other people creates the possibility of hurting them, and I think that is at least the practical form conscience takes.  It asks, "If you do that terrible thing, what will so-and-so think?"  But a well-developed conscience keeps you from doing bad things even if you were alone on a desert island.  It doesn't even let you live at peace with yourself if you've done something wrong.  So if conscience is simply a product of blind evolution, why would it bother you to do something that never hurt anybody else, but was wrong anyway?  What's the evolutionary advantage in that?

Bostrom never comes up with a satisfying way to teach machines how to be moral.  For one thing, you would like to base a machine's morality on some logical principles, which means a moral philosophy.  And as Bostrom admits, there is no generally accepted system that most moral philosophers agree on, which means most moral philosophers are wrong about morality. 

Those of us who believe that morality derives not from evolution, or experience, or tradition, but from a supernatural source that we call God, have a different sort of problem.  We know where conscience comes from, but that doesn't make it any easier to obey it.  We can ask for help, but the struggle to accept that help from God goes on every day of life, and some days it doesn't go very well.  And as for whether God can teach a machine to be moral, well, God can do anything that isn't logically contradictory.  But whether he'd want to, or whether he'd just let things take their Frankensteinian course, is not up to us.  So we had better be careful. 

Sources:  Nick Bostrom's Superintelligence:  Paths, Dangers, Strategies was published in 2014 by Oxford University Press.

Monday, September 16, 2019

Facing Google In Your Living Room

An article on cnet.com recently described how Google's new smart assistant, the Google Nest Hub Max, uses facial recognition technology to tell who is talking with it.  This feature has raised privacy concerns, as Google has admitted that it reserves the right to upload facial data from the device to the cloud to help improve "product experience."  But whatever Google does legitimately, a hacker might be able to do too, and so we are approaching a time when the telescreens of George Orwell's dystopian novel Nineteen Eighty-Four have become a reality—not because of the unilateral command of a totalitarian government (at least not in the U. S.), but because we want what they can do.

For those unfamiliar with the novel, Orwell's book was a warning to the free world about what a dictatorship could do with the communications technologies of the future.  Telescreens were two-way televisions on which propaganda from a dictator known only as Big Brother was transmitted, and through which images of whoever was watching were transmitted back to the Party's central headquarters.  Orwell was simply extrapolating the efforts of regimes such as the Nazis of the 1930s and the Soviet Union of the 1940s to spy on their populaces twenty-four hours a day to enforce total obedience to the regime. 

At the time the novel was published in 1949, no one took the telescreen-spying idea very seriously, because it would take a huge number of human monitors to spy on a significant number of people.  Carried to its logical extreme, the only way the government could watch everybody would be if half the population spied on the other half, and then took turns. 

But neither Orwell nor anybody else at the time reckoned on the development of advanced artificial-intelligence (AI) systems using facial recognition technology.  In China, the government is deploying many thousands of cameras and taking facial data from millions of people with the intention of developing a Social Credit rating that reflects how well you measure up to the regime's model of the ideal citizen.  If computers have caught you on camera going to suspicious places or meetings, your Social Credit score could go in the tank, making it hard to travel, get a job, or even stay out of jail. 

None of that is happening in the U. S., but the fact that a large corporation will now have electronic access to views in millions of private residences should at least give us pause. 

Leaving the hardware aside for the moment, let's examine the difference in motives between a government, such as in Nineteen Eighty-Four, spying on its citizens for the purposes of controlling behavior, and a commercial entity such as Google using images to sell both its own services and advertising for others.  The government spying is motivated by suspicion and fear of what people might be doing while the government isn't watching them.  Whatever the regime sets out as an ideal of behavior, it watches for deviations from that ideal, and punishes those who deviate from it.  Participation is not voluntary, and people have to go to great lengths to avoid being spied on.

Now contrast that with a benign-looking thing such as the Google Nest Hub Max.  Nobody is going to make you buy one.  And if you do, there are ways of turning off the facial-recognition feature, though the device will be less convenient to use.  And the device is intended to serve you, not the other way around.  It's sold with the vision portrayed in so many TV ads of people happily using it to make their lives better, not as a means of social control like Orwell's telescreens. 

But maybe the differences are not as great as they first appear.  Both the telescreen and the Nest Hub Max are intended to change behavior.  If they don't, they have failed.  True, the ideal behavior that a totalitarian government wants and the ideal behavior that a company wants are two different things.  But neither ideal matches the state of the citizen-consumer before the telescreen or Nest Hub showed up:  unwatched, and unbenefited by the products or services that the company wants to sell.

Nobody should read this blog and then go around saying "Ahh, Stephan's saying Google is Big Brother and they're trying to take over our lives!"  That's not the comparison I'm making.  My point is that the mere fact of being watched by someone, or something that can inform someone about us, is going to change our behavior.  And that change by itself is significant.

Now, the change may not necessarily be bad.  Already, virtual audio assistant devices such as Alexa have been used in criminal cases when bad actors set them off, either by accident or on purpose, and the data thus generated has proved to be incriminating.  Though this is ancient history, I am told that in the days when some middle-class and upper-class people had servants, families tended to behave better when the servants were around, although I'm sure there were exceptions.  Alexa isn't Jeeves the butler, but as virtual assistants play a more significant role in domestic life, it's not beyond imagination to think that some of the worst behavior in homes—domestic abuse, for example—might be mitigated if the victim could call 911 by just shouting it instead of having to pull out a phone.

I'm not necessarily crying doom and gloom here.  Millions of people are already using virtual assistants with few if any problems, and adding two-way video to the mix will only increase the devices' capabilities.  But we are entering new territory of connectivity here, and it's bound to have some effects that nobody has predicted yet.  Perhaps it's not too helpful to predict that there will be unpredictable effects, but that's all I can do at the moment.  Let's hope that the security features of the Nest Hub Max are good enough to prevent nefarious use, and that people who buy them are truly happier with them than they were before. 

Sources:  The article "Google collects face data now.  Here's what it means and how to opt out" appeared on Sept. 11, 2019 at https://www.cnet.com/how-to/google-collects-face-data-now-what-it-means-and-how-to-opt-out/#ftag=CADf328eec.  I also referred to Wikipedia articles on Nineteen Eighty-Four and Google Home.  I thank my wife for pointing this article out to me.

Monday, September 09, 2019

Vaping Turns Deadly


At this writing, three people have died and hundreds more have become ill from a mysterious lung ailment that is connected with certain types of e-cigarettes.  The victims typically have nausea or vomiting at first, then difficulty breathing.  Many end up in emergency rooms and hospitals because of lung damage.

Most of the sufferers are young people in their teens and twenties, and all were found to have been using vaping products in the previous three months.  Many but not all were using e-cigarettes laced with THC, the active ingredient in marijuana.  Others were vaping only nicotine, but early analysis indicates that a substance called vitamin-E acetate was found in many of the users' devices.  It's possible that this oily compound is at fault, but investigators at the U. S. Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA) have not reached any conclusions yet. 

In fact, the two agencies have released different recommendations in response to the crisis.  The CDC is warning consumers to stay away from all e-cigarettes, but the FDA is limiting its cautions to those containing THC.  Regardless, it looks like a damper has been put on the vaping party, in a way that may change a lot of things.

So far, the e-cigarette industry, unlike the tobacco industry, is largely unregulated.  Vaping found its first mass market in China in the early 2000s.  The technology was made possible by the development of high-energy-density lithium batteries, among other things.  While vaporizers for medical use have been around since at least the 1920s, it wasn't possible to squeeze everything needed into a cigarette-size package until about fifteen years ago. 

Since then, vaping has taken off among young people.  A recent survey of U. S. 12th-graders shows that about 20% of them have vaped in the last 30 days, up from only about 11% in 2017, the sharpest two-year increase in the use of any drug that the National Institutes of Health has measured in its forty-odd-year history of doing such surveys.
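The arithmetic behind "sharpest increase" is worth spelling out:  the jump from 11% to 20% is nine percentage points, but relative to the 2017 baseline it is growth of more than 80 percent in two years.  A minimal sketch, using the rounded survey figures cited above:

```python
# Growth calculation from the 12th-grader vaping figures cited above
# (roughly 20% in 2019 vs. 11% in 2017 -- rounded survey values).
share_2017 = 0.11
share_2019 = 0.20

# Two ways of describing the same change:
point_change = (share_2019 - share_2017) * 100            # percentage points
relative_change = (share_2019 - share_2017) / share_2017  # growth vs. baseline

print(f"Change: {point_change:.0f} percentage points, "
      f"or {relative_change:.0%} relative growth in two years")
```

The distinction matters because a nine-point rise sounds modest, while nearly doubling in two years does not.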

The ethical question of the hour is this:  has vaping become popular enough, mature enough, and dangerous enough, that some kind of regulation (either industrial self-policing or governmental oversight) is needed?  The answer doesn't hinge only on technical questions, but on one's political philosophy as well.

Take the extreme libertarian position, for example.  Libertarians start out by opposing all government activity of any kind, and then grudgingly allow certain unavoidable activities that are needed for a nation to be regarded as a nation:  national defense, for instance.  It's not reasonable to expect every household to defend itself against foreign aggression, so most libertarians admit the necessity of maintaining national defense in a collective way. 
           
But on an issue such as a consumer product, the libertarian view is "caveat emptor"—let the buyer beware.  If you choose to buy an off-brand e-cigarette because it promises to have more THC in it than the next guy's does, that's your business.  And if there's risk involved, well, people do all sorts of risky things that the government pays no attention to:  telling your wife "that dress makes you look fat" is one example that comes to mind. 

On the opposite extreme is the nanny-state model, favored generally by left-of-center partisans who see most private enterprises, especially large ones, as the enemy, and feel that government's responsibility is to even out the unfair advantage that huge companies have over the individual consumer.  These folks would regulate almost anything you buy, and have government-paid inspectors constantly checking for quality and value and so on. 

It's impractical to run your own bacteriological lab to inspect your own hamburgers and skim milk, so the government is supposed to do that for you.  Arguably, it's also impractical for vapers to take samples of their e-cigarette's goop and send it to a chemical lab for testing, and then decide on the basis of the results whether it's safe to use that particular product. 

My guess at this point is that sooner or later, probably sooner, the e-cigarette industry is going to find itself subject to government standards for something.  Exactly what isn't clear yet, because we do not yet know what exactly is causing the mysterious vaping illnesses and deaths.  But when we do, you can bet there will be lawsuits, at a minimum, and at least calls for regulation of the industry. 

Whether or not those calls are heeded will depend partly on the way the industry reacts.  Juul, currently the largest maker of vaping products, is one-third owned by the corporate entity formerly known as Philip Morris Companies.  In other words, the tobacco makers have seen the vaping handwriting on the wall, and are moving into the new business as their conventional tobacco product sales flatten or decline. 

The tobacco companies gained a prominent place in the Unethical Hall of Fame when they engaged in a decades-long campaign of disinformation to combat the idea that smoking could hurt or kill you, despite having inside information that it very well could.  In the face of an ongoing disaster such as the vaping illness, this ploy doesn't work so well.  But they could claim that only disreputable firms would sell vaping products that cause immediate harm, and pay for studies that show it's better than smoking and harmless for the vast majority of users.

Sometimes the hardest thing to do is be patient, and that's what we need to do right now, rather than rushing to conclusions that aren't supported by clinical evidence.  Investigators should eventually figure out what exactly is going on with the sick and dying vapers, and once we know that, we'll at least have something to act on.  Until then, if by chance anyone under 30 is reading this blog, take my advice:  leave those e-cigarettes alone. 

Monday, September 02, 2019

Lawyers in Space?


A recent Washington Post article highlights what would normally be a humdrum domestic dispute alleging identity theft.  The unusual feature of the dispute is that the party who allegedly accessed a bank account without permission did it from the International Space Station, and thus may have committed the first legally recognized crime in space.

Anywhere humans go, the lawyers can't be far behind.  While Shakespeare probably got a laugh in his play Henry VI, Part 2 when the criminal type Dick the Butcher said, "The first thing we do, let's kill all the lawyers," the context was not a sober discussion of how to make a better society.  Dick and his rebel friends were imagining a fantasy world made to their liking, where all the beer barrels would be huge, all the prices low, and naturally, there wouldn't be any lawyers to get people like them into trouble.

Law-abiding citizens need have no fear of well-ordered laws, and so it's only right that there are some treaties and international agreements that govern humans and human-made artifacts in space. 

The foundational agreement is something called the Outer Space Treaty, which over 100 countries have signed, including every nation that is currently capable of orbiting hardware in space.  Implemented in 1967, its most prominent theme is that space is for peaceful uses only.  It therefore prohibits keeping nuclear weapons in space.  It also forbids any country from claiming sovereignty over any part of outer space, including any celestial body.  So when the U. S. planted its flag on the moon, it was just a symbolic gesture, not the first step in creating a fifty-first state with zero population.

Right now, there are companies making quite serious plans to do space mining, build space hotels, and engage in other profit-making activity that would involve substantial amounts of investment of both hardware and human capital.  There's a concern that the Outer Space Treaty is silent on the question of individual or corporate ownership of space property, and unless we get more specific on what the rules are, such developments may be stifled.

I don't see any critical problem here, because we have abundant precedents in a similar situation:  the international laws governing ocean-going vessels.  The Disney Company puts millions of dollars into what amounts to floating hotels, and they quite happily manage to cope with the fact that while they own the ship, it travels in international waters and docks at various ports owned by other countries.  Of course, there are hundreds of years of tradition bound up in the law of the sea, and the same isn't true of space law.  But the fact that ocean-going commerce goes on quite smoothly for the most part shows such things can be done, and so that doesn't concern me at all.

What could throw the whole situation into doubt is if somebody finds a fantastically lucrative space-based enterprise.  Diamonds on the moon sounds like something that Edgar Rice Burroughs would cook up, but there are quite serious organizations out there planning to do things like mining asteroids.  And depending on what they find, we might see something like the rush of the Old World explorers to the New World, where they in fact did discover gold.  Like most naive fantasies, that discovery didn't work out quite as nicely as the explorers hoped, what with the abysmal treatment of Native Americans and the disastrous inflation that the introduction of huge amounts of gold caused in the economies of Europe. 

It's hard to imagine something similar happening as a result of a space-based discovery, but stranger things have happened.  The optimist in me, as well as numbers of Silicon Valley types who seem to think that space colonization is not only possible, but inevitable and represents the last best hope of humanity, would like to see the future of space exploration and settlement as another chance for us to get some things right.  After all, something like that was the motivation for many Europeans to make the arduous journey to the New World where unknown hardships awaited them.  Overall I'm glad they did, or else I would not have the opportunity to live in Central Texas today.

But the identity-theft case in the International Space Station reminds us that no matter what idealistic plans we make, we will take all our mental and behavioral baggage with us wherever we go.  That is why we will always need lawyers, whether in San Marcos or a moon base.  Because a certain number of us will misbehave beyond the boundaries that ordinary people around us can deal with, and so the law will have to get involved.  As it happens, all the parties to the identity-theft dispute are U. S. citizens, so U. S. law applies.  But in the future, when space colonies (for lack of a better phrase) may want to set up their own independent governments, things may get considerably more complicated.  And complications mean lawyers.

One thing I haven't mentioned is the question of militarization in space.  President Trump recently announced the establishment of a Space Command, which is apparently a kind of umbrella under which the space activities of the various branches of the military will be gathered.  While the Outer Space Treaty prohibits "weapons of mass destruction" in space, it does nothing to stop nations from testing weapons or placing military personnel in space. 

It is perhaps inevitable that rivalries on the ground will end up being played out in space as well.  But we can hope that for the near future, anyway, the need for lawyers and law in space will be limited to minor issues such as the identity-theft case, and that we can view space as a place where for the time being, people can just get along.  But if they don't, I'm sure lawyers will find a way to get involved.

Sources:  Deanna Paul's article "Space:  The Final Legal Frontier" appeared on Aug. 31 on the Washington Post website at https://www.washingtonpost.com/technology/2019/08/31/space-final-legal-frontier/.  I also referred to Wikipedia articles on the Outer Space Treaty and "Let's kill all the lawyers." 

Monday, August 26, 2019

This Business of Engineering


Early Sunday morning, Aug. 5, 1888, a 39-year-old woman named Bertha Benz set off for her mother's house in Pforzheim, some sixty-six miles (106 km) from Mannheim, Germany.  She lived in Mannheim with her husband Karl and two teenage sons, and she took her sons along for the ride.  Visiting her mother was not unusual.  But the way she planned to get there was. 

For the last several years, Karl had been developing what he called a Patent-Motorwagen—what we would call today an automobile.  Its one-cylinder engine burned an obscure solvent called ligroin, obtainable only at pharmacies.  It had wooden brakes and only two gears, low and high.  Bertha was from a wealthy family, and she had put a considerable amount of money into her husband's invention.  But like many inventors, Karl was content to make incremental improvements to his machine and treated it gingerly, never driving it more than a few miles away from home on short test drives.  Besides, there were laws regulating such machines, and to drive it a long distance legally, he would have had to get permission from various local authorities along the way.  It was much easier just to tinker with it in his shop and drive it only around town.

But Bertha had had enough of this.  She knew Karl's invention was good, but people had to know what it was capable of.  Without telling her husband, she and her two boys left Mannheim on the rutted wagon roads leading to Pforzheim.  On steep hills, the boys had to get out and push the underpowered vehicle uphill.  At one point the fuel line clogged, and Bertha unplugged it with a hatpin.  A chain broke, and she managed to find a blacksmith willing to work on Sunday to fix it.  The brakes proved inadequate, and she stopped at a cobbler's shop and had him cut some leather strips to fit onto the brakes, thus inventing the world's first brake pads.  A little after sunset the same day, she and her boys arrived in Pforzheim, no doubt to the great surprise of her mother.  She telegraphed news of her successful trip to her husband, and by the time she drove back several days later, reports of her exploit were in newspapers all over the country.  Which was exactly what she wanted.  Benz's invention, and Bertha's exploit, were foundational steps in the worldwide automotive industry.

Somehow I had gotten to my present advanced age without learning about Bertha Benz's first-ever auto trip.  But this was just the most interesting of many such anecdotes about engineering and business that I encountered in a new book by Matt Loos:  The Business of Engineering. 

Loos is a practicing civil engineer in Fort Worth who realized, a few years into his working life, that many of the most important skills he was using every day had little or nothing to do with what he learned in engineering school. 

This is not to disparage engineering education (which is what I do for a living) but simply reflects the fact that the technical content of engineering is so voluminous these days that there isn't much room in a nominal four-year curriculum for what are (perhaps unfortunately) called "softer" subjects such as management techniques, ethical issues, and coping with the dynamics of rapidly changing technical fields. 

A newly-graduated engineer could do a lot worse than to pick up The Business of Engineering and read it to find out what the late radio commentator Paul Harvey called "the rest of the story."

If a person is going to claim to be able to use specialized technical knowledge to do something of value, they must have mastered that technical knowledge.  That fundamental requirement is the reason behind the extensive and challenging technical content of engineering undergraduate courses.  But as Loos points out in numerous ways—through anecdotes like Bertha Benz's story, through recent statistics and facts drawn from a variety of technical fields, and from his own personal experience—knowing your technical stuff by itself will not make you a successful engineer.  And even the definition of success depends on what sort of business you are in and how your own personal goals fit in with the directions that the industry is moving. 

I have to say that if I had read and taken to heart what Loos says in his book when I was, say, twenty-four, my career might have been very different.  At the time, I had a very simplistic and immature notion that all an engineer had to do was to come up with brilliant technical stuff, and the world would beat a path to his door.  But in thinking that, I was acting like Karl Benz, happily tinkering away in his shop but afraid to try his pet invention out in the real world.  The lesson I needed to learn was that if nobody but you cares about what you're doing, nothing much good will come of it.  Working engineers need to be engaged in the world around them, not only on a purely technical level, but also at the levels of economics, social relations, and ethics, to mention only a few.

This is Loos's first book, and as with most things, one's first efforts occasionally lack the polish that long experience can give.  But it is still highly readable, even if you don't read it for anything but the stories.  One of the strengths of the book is that Loos is realistic about how an engineer's personal habits can make the difference between success and something considerably below success:  things like attention to details, ability to organize one's time, problem-solving skills, and so on.  Now and then I come across a student who has more than adequate brain power to do engineering problems.  But when he confronts a problem he's not familiar with, he will simply sit there and appear to wait for inspiration.  And if inspiration doesn't come, well, it's just too bad.  The better way is to follow the advice of G. K. Chesterton (this isn't in Loos's book), who said anything worth doing is worth doing badly.  Even trying something that doesn't work will probably tell you something about what will work, and it's better than just passively waiting for something to happen.  Engineers make things happen—not always the best thing, but something that moves the process along.

Loos's book is now available on Amazon, and I recommend it especially to graduating engineers who can benefit from the experience and the stories that The Business of Engineering collects.

Sources:  The Business of Engineering by Matthew K. Loos, P. E. is available on Amazon at
https://www.amazon.com/gp/product/0998998788?pf_rd_p=183f5289-9dc0-416f-942e-e8f213ef368b&pf_rd_r=2V03WN39CX0ASA8E4VGJ.  Mr. Loos kindly provided me with a free review copy.  I enhanced the Bertha Benz story with some details from the Wikipedia page on her.

Monday, August 19, 2019

Should Social Media Be Regulated?


Last month, the youngest U. S. Senator, Josh Hawley, a freshman Republican from Missouri, filed a bill called the Social Media Addiction Reduction Technology (SMART) Act.  The purpose of the act is to do something about the harmful effects of addiction to social media.

What would the bill do?  I haven't read it, but according to media reports it would change the ways companies like Facebook, Twitter, and Google deal with their customers.  The open secret of social media is that they are designed quite consciously and intentionally to be habit-forming.  So-called "free" media make their money by selling advertising, and advertising is worthless unless someone looks at it.  So their bottom line depends on how firmly and how long they glue your eyeballs to their sites.  And they have scads of specialists—psychologists, media experts, and software engineers—whose full-time job is to squeeze an extra minute or two of attention from you every day, regardless of whatever else is going on in your life. 

Writing on the website of the religion and public life journal First Things, Jon Schweppe says the SMART Act may not be the one-stop cure-all for our social media problems, but it's a step in the right direction.  It would prohibit certain practices that are currently commonplace, apparently including one that has always reminded me of what life might be like in Hell:  the infinite webpage.

It used to be that when people first figured out how to make a web page scroll, it was only so long.  You could always get to the bottom of it, where you might find useful things like who wrote it or other masthead-and-boilerplate information.  Well, that doesn't always happen anymore.  The infinite webpage pits the pitiful finite mortal human against the practically unlimited resources of the machine to come up with more eye candy, as much as you want.  You keep scrolling, it will keep showing you new stuff. 

This particular feature reminds me of a passage from C. S. Lewis's The Lion, the Witch, and the Wardrobe featuring the candy called Turkish Delight.   The wicked-witch Queen of Narnia offered the boy Edmund his favorite type of candy to convince him to betray his friends.  The candy she offered him was enchanted so that whoever ate it always wanted more, and "would even, if they were allowed, go on eating it till they killed themselves."  No matter how much time you waste on an infinite website, there's always more.

The SMART bill would also give users a realistic option to voluntarily limit their own use of social media with daily timers, would prohibit "badge" systems (evidently a special-privilege feature that rewards heavy users and encourages still more use), and would prohibit or modify other addictive features. 

The Federalist's John Thomas sees the SMART bill as the first step in what may be a turning point in the history of social media.  He likens it to the Parisian reaction to brightly-colored advertising posters enabled by the then-new lithography process in the 1860s.  Pretty soon, a good percentage of all available vertical flat surfaces were covered with posters, and the town fathers decided to regulate how and where posters could be displayed. 

This may be the point at which the U. S. citizenry stops merely wringing its hands and saying there's nothing you can do in the face of rising teen depression and other ill effects of social media, and starts to take action.  As Thomas points out, though, there are few grass-roots organizations taking up the control-social-media banner. 

This may be because the dangers social media pose for mental health are insidious and gradual rather than abrupt and catastrophic.  Suppose that every person had an intrinsic social-media limit:  say after viewing X hours of social media (and X would be different for each person), your brain would literally explode and you'd die.  Well, you can bet that after two or three of these incidents, governments would come down on Facebook, Google, and company like a ton of bricks with all sorts of restrictions, up to and including an outright ban.

But nobody's brain literally explodes from doing too much Facebook.  The negative consequences of social-media use are much less obvious than that, but are nonetheless real.  Even the most tragic cases of teen suicides that result from peer persecution over social media can be blamed not just on the media, but on the cruelty of other teens.  Nevertheless, the nominal anonymity and ease of use that social media offer can turn what might be fairly well-behaved peers in person into abominable monsters on Facebook. 

Some writers oppose the SMART Act and similar legislation on the free-market principle that government is more likely to make things worse with legislation than otherwise.  While that can happen, it is foolish to take the hyper-libertarian position that if a good or service is bad, people just shouldn't use it.  Back when ordinary glass was used for automobile windshields, it would turn into long razor-sharp shards that decapitated numerous drivers, and Congress invited Henry Ford to testify about a proposed law that would require the use of the more expensive safety glass in windshields.  Reportedly (and this is from memory), Ford said, "I'm in the business of making cars, I'm not in the business of saving lives."

When Mark Zuckerberg testified before Congress not too long ago, he was self-controlled enough not to say anything that harsh.  But if the day has at last arrived when our elected officials are finally going to do something about the harmful effects of social media, one of two things (or perhaps a combination) is going to happen.  Either the social-media companies will have to get ahead of the proposed legislation and enact real, quantifiable reforms of their own and prove that they work, or they will have to change their ways in accordance with regulatory laws that they brought upon themselves. 

My own hope is that the companies will figure out a transparent and effective way to self-regulate.  But the choice is theirs, and if they brush off the SMART Act and think they have the raw power to squash such regulation, they may be in for a painful surprise.

Sources:  Jon Schweppe's article "Big Addiction" appeared on the First Things website on Aug. 13, 2019 at https://www.firstthings.com/web-exclusives/2019/08/big-addiction.  John Thomas's article "Hawley's SMART Act Is the Beginning Of the Revolt Against Big Tech" is on the Federalist website at https://thefederalist.com/2019/08/13/hawleys-smart-act-beginning-revolt-big-tech/. 

Monday, August 12, 2019

USB-Crummy


About a year ago, my old Mac laptop died and I had to buy a new one.  I was pleased overall with my new machine, as far as the software and operating characteristics went.  And at first I wasn't too concerned that the only physical ports it had were one 3.5-mm phone jack for a headphone and four USB-C jacks, two on each side. 

I wasn't familiar with USB-C, although like anybody else who's dealt with computers, I knew about USB (Universal Serial Bus) connections in general.  I had nothing in my possession that would work with a USB-C connector—no printer cables, no external hard drive cables, no headphones.  So I went to Best Buy and bought a docking station that promised to solve all my interface problems.

It has a single USB-C connector plug on a short cable that goes to a flat aluminum box that has nearly every kind of jack you can think of:  an old-fashioned VGA (Video Graphics Array) connector for your ancient video projector, large and small HDMI (High-Definition Multimedia Interface) connectors that work with our medium-screen TV, an Ethernet cable port for hard-wired networking, an SD/MMC jack for flash-drive cards, a micro-SD for the teeny flash drive cards, three USB-2 (regular size) jacks, and another USB-C jack in case you need it.  And for most of a year after I bought the new laptop, I used these ports on the docking station whenever I wanted to connect anything to the laptop other than the USB-C power supply that came with it.

Then a month or so ago, I began to have problems.  First it was an issue with downloading data to a flash drive.  I would be downloading something and then all of a sudden I'd get a message on my Mac criticizing me for removing the flash drive before ejecting it.  Only I hadn't touched a thing.  I could take it out and put it back in and it would start working—sometimes.  And sometimes it would drop out again without warning.

That made me wonder if something was wrong with the docking station.  So I bought a simple adapter cable with a USB-C connector on one end and a regular-size USB-2 jack on the other, and used another adapter to get to a flash drive.  That worked for a while, but then I began to have the same problem. 

The worst thing was when I would do a backup to an external hard drive.  That takes a while, and in the middle of the transfer I'd get an error saying I'd removed the drive without ejecting it.  Only I hadn't.

Finally, I went online to see if other people were having these problems.  Turns out they were.  And amid all this wonderful stuff that USB-C is supposedly capable of doing (it has six modes of operation, including everything from external display support to power delivery and 20-gigabit-per-second data transfer), there is a smelly fly in the ointment:  mechanical unreliability.
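For a sense of scale, here is a rough back-of-the-envelope sketch of what those peak rates mean for a long backup.  The figures are theoretical maximums (real-world throughput is always lower), and the 500-GB backup size is just an assumption for illustration:

```python
def transfer_time_seconds(size_gigabytes: float, rate_gigabits_per_s: float) -> float:
    """Idealized transfer time: size in gigabytes, rate in gigabits per second.

    Multiplies by 8 to convert bytes to bits; ignores protocol overhead.
    """
    return size_gigabytes * 8 / rate_gigabits_per_s

backup_gb = 500  # assumed backup size, for illustration only

# USB 2.0 tops out at 480 megabits per second (0.48 Gb/s)
print(f"USB 2.0 (0.48 Gb/s): {transfer_time_seconds(backup_gb, 0.48) / 60:.0f} minutes")

# USB-C's fastest advertised mode is 20 gigabits per second
print(f"USB-C  (20 Gb/s):    {transfer_time_seconds(backup_gb, 20) / 60:.1f} minutes")
```

The gap between those two numbers is exactly why a connector that silently drops out partway through a multi-hour transfer is so maddening.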

If you've never looked closely at a USB-C connector, get a magnifying glass and do so.  Inside that tiny rigid plug there are twenty-four pins, twelve on each side.  And apparently, for the high-speed data transfer to work, a good number of them (either four or eight, as best I can tell from online information) have to make perfect contact with their mating members in the socket you plug it into.  Otherwise it looks to the hardware like you've jerked out the connector, and you get an error.

From various online forums I read, it appears that the mechanical design of the consumer-grade USB-C jack is unreliable.  I saw tales of people with Macs like mine who had to take theirs in to get the USB-C jacks replaced because they simply wouldn't work for high-speed data transfer anymore.  I also use them (of necessity) for low-speed stuff like connecting to my keyboard and my printer, and I've never had any problems with those functions, because evidently they use different contacts than the high-speed ones.  But even if you get new jacks, they're just as unreliable as the old ones, and you're likely to have the same problem show up again in a few months.

As one online forum writer commented, every connector engineer knows that other things being equal, the more contacts you put in a connector, the less reliable it becomes.  Squeezing twenty-four pins into a tiny USB-C connector and expecting all of them to work all the time was one of the dumbest standards decisions I've come across in a long time.
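The commenter's rule of thumb is just series reliability:  if each contact independently works with probability p, a link that needs n contacts simultaneously works with probability p to the nth power.  A minimal sketch, with contact probabilities and counts that are illustrative assumptions rather than measured USB-IF figures:

```python
def link_reliability(p_contact: float, n_contacts: int) -> float:
    """Probability a link works when all n contacts must work, each with
    independent probability p_contact (the series-reliability rule)."""
    return p_contact ** n_contacts

p = 0.999  # assume each pin makes good contact 99.9% of the time
for n in (4, 8, 24):
    print(f"{n:2d} contacts required: link works {link_reliability(p, n):.4f} of the time")
```

Even with very good individual contacts, requiring twenty-four of them at once multiplies the small per-pin failure chances into a noticeably flakier connector than a four-pin design.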

The proverb saying a chain is only as strong as its weakest link applies in spades to connectors, and I am now stuck with a system that has a known weak link:  the USB-C connectors, which are the only practical physical way I have to get data in and out of my Mac. 

Now that I know it's a mechanical problem, I can do things like trying not to use one of the four USB-C ports for anything except the occasional backup, for example, and tiptoeing around any time a long data transfer is happening for fear I will set up vibrations that will break one of the eight vital connections and ruin the whole process.  This is not progress.

It may be too much to hope for, but maybe whoever devises the next standard after USB-C will come up with a fail-safe approach, or at least one that will be as reliable as the larger USB-2 standard was.  The old Bell System, which for much of the twentieth century relied on electromechanical relays for all of its network switching functions, found that the only way to make a relay reliable was to duplicate every one of its contacts, so that if a piece of dust got into one contact you had the other one that would still work.  Whoever designed the USB-C evidently forgot that hard-earned piece of wisdom.  I hope the next standards committee working on whatever comes after USB-C will not forget, but it looks like we may have to wait a while before that happens.
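The Bell System's duplicated-contact trick can be put in the same terms:  a doubled contact fails only if both halves fail, so each weak series link becomes a much stronger parallel pair.  Again, the numbers below are illustrative assumptions, not measurements:

```python
def series(p: float, n: int) -> float:
    """n single contacts, all required to work."""
    return p ** n

def series_redundant(p: float, n: int) -> float:
    """n contact pairs; a pair fails only if BOTH of its contacts fail."""
    pair_reliability = 1 - (1 - p) ** 2
    return pair_reliability ** n

p, n = 0.999, 8  # assume eight high-speed contacts, each 99.9% reliable

print(f"single contacts:     {series(p, n):.6f}")
print(f"duplicated contacts: {series_redundant(p, n):.6f}")
```

The duplicated version pushes the link's reliability from roughly "fails one time in a hundred" to nearly perfect, which is precisely the hard-earned wisdom the relay engineers had and the USB-C designers apparently set aside.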

Sources:  I referred to the Wikipedia article on USB-C connectors and the website of the USB Implementers Forum (usb.org), as well as several online discussion boards about the unreliability of USB connectors.