Monday, June 24, 2019

The Fatal Dallas Crane Collapse


Two weeks ago, on Sunday June 9, a severe thunderstorm appeared over downtown Dallas, Texas.  Sudden thunderstorms are not uncommon in this region, and the residents of the Elan City Lights Apartments had no reason for undue concern.

But they should have been worried.  Around 2 PM that afternoon, a construction crane owned by Bigge Crane and Rigging Company toppled over onto the apartments, slicing through the buildings "like a hot knife through butter," in the words of one eyewitness.  One person, 29-year-old Kiersten Smith, died in her apartment, and five other residents suffered various degrees of injury.  More than 500 people have been temporarily made homeless while the building's safety is being assessed and repairs commence.

This is only the latest in a number of construction-crane accidents that have happened in the U. S. and elsewhere in recent years.  As a CNN report pointed out, from 2011 to 2015 Texas led the nation in crane-related deaths, with nine such deaths in that period.  While this means you probably shouldn't take out a special crane-fatality rider on your life insurance, nine deaths is nine too many, especially when the victims were not construction workers but ordinary citizens who could do nothing about crane safety.

As crane-safety expert Thomas Barth pointed out in a CNN interview after the accident, there are things that crane operators can do to ensure that cranes won't blow over in case high winds arise.  The tower cranes so common in the skylines of modern cities can be designed and installed to withstand winds of up to 140 miles per hour (225 km per hour), wind speeds you would otherwise see only in a major hurricane.  But the operators have to take certain precautions in the event of high winds.
           
One such precaution Barth cited was to attach a large weight to the working end of the crane.  With no load, such cranes are only marginally stable due to the large rear-mounted counterweight that compensates for the typical load the crane carries, and so pre-weighting the front adds to the crane's stability.  Another precaution taken by some operators is to release the rotation clutch and let the crane "windmill" in the wind, so that the long front part naturally points in the direction of the wind.  This also places the counterweight in such a position as to oppose the force of the wind and lessens the chances that the crane will blow over.
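For readers who like numbers, a back-of-the-envelope moment balance shows why an unloaded jib with a big rear counterweight is a marginal proposition, and why pre-weighting the front helps.  Every figure below is invented purely for illustration; none of them describes the actual Dallas crane or any real model.

```python
# Back-of-the-envelope tipping-moment balance for a tower-crane jib.
# All numbers are made up for illustration; nothing here describes
# the actual Dallas crane.

def net_moment(counterweight_kg, counter_arm_m, front_load_kg, front_arm_m):
    """Net moment about the tower, in newton-meters.  A positive value
    means the rear counterweight dominates and the crane tends to lean
    backward; a smaller positive value means a better-balanced crane."""
    g = 9.81  # gravitational acceleration, m/s^2
    return g * (counterweight_kg * counter_arm_m - front_load_kg * front_arm_m)

# Unloaded: the rear counterweight's moment is unopposed.
unloaded = net_moment(20_000, 12.0, 0, 40.0)

# Pre-weighted: hanging a modest weight near the jib tip cancels
# much of the rearward moment, improving the stability margin.
preweighted = net_moment(20_000, 12.0, 5_000, 40.0)

print(f"unloaded:    {unloaded/1000:+.0f} kN*m")
print(f"preweighted: {preweighted/1000:+.0f} kN*m")
```

With these invented numbers, pre-weighting cuts the unbalanced rearward moment by more than 80 percent, which is the whole point of the precaution Barth described.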

Apparently, neither one of these precautions was taken with the crane in Dallas.  Both live video shot during the storm and drone video of the accident's aftermath show that the crane fell over nearly backwards, with the boom partly crosswise to the wind and partly pointing into it.  While definitive conclusions will have to await the results of the accident investigation, it appears that no one was on the construction site or charged with the responsibility of taking precautions with the crane if a storm arose.

Some cities have regulations and licensing requirements for crane operators, but Dallas, in keeping with the general laissez-faire economic atmosphere of Texas, is not one of them.  Such regulations are not guaranteed to prevent crane accidents, as the 2008 crane collapse in New York City that killed seven people showed.  In general, the lawsuits and insurance-rate increases that follow a fatal accident like this can be enough incentive to make crane operators take reasonable precautions, but sometimes leaving safety to the commercial firms isn't enough.  All the regulations and policies in the world won't make a difference if the people on the ground doing the work either get careless, or are simply never told what the safe thing to do is and paid to do it.

In the case of the Dallas crane accident, either the crane operator or the construction general contractor would have had to pay somebody to be responsible for putting the crane into a safe mode in the event of threatening weather.  In retrospect, the few hundred dollars this might have cost would have been money well spent if it had prevented the accident.  And perhaps Bigge Crane and Rigging has learned its lesson, although news reports say it has been cited some eighteen times by the U. S. Occupational Safety and Health Administration in the last ten years. 

Such a record may be typical for a large, busy firm with extensive operations in numerous states.  And some of the citations may be for fairly trivial matters, such as mislabeled safety equipment.  But we now know that in at least one case, inattention to crane safety has led to the loss of one life, the injury of several bystanders, the loss of an expensive piece of equipment, and untold damage to property.

There is an aspect to this accident that gets almost no attention these days, but deserves it nonetheless.  It concerns the wider society's attitude toward "lowly" jobs such as construction work, even when the workers operate costly pieces of equipment and hold the responsibility for dozens of lives in their hands.  Much the same could be said of airline pilots, yet the two are treated very differently.  Pilots are respected, treated with deference, and in turn receive good pay and job security, while the operator of a construction crane is unknown to everyone except perhaps his family and co-workers, certainly gets paid less than the lowliest supervisor on the job, and may not know if he has a job at all after the current project is over.

This situation reminds me of a saying attributed to Lyndon Johnson's Secretary of Health, Education, and Welfare, one John W. Gardner.  In a 1961 book called Excellence:  Can We Be Equal and Excellent Too? he wrote, "The society which scorns excellence in plumbing as a humble activity and tolerates shoddiness in philosophy because it is an exalted activity will have neither good plumbing nor good philosophy: neither its pipes nor its theories will hold water."  As a society, I think we tend both to scorn lowly but important activities such as crane operation, and to exalt others besides philosophy (think sports and entertainment) that don't necessarily deserve such exaltation.  If crane operators and their ilk were more respected, they might feel a little more responsible, and companies employing them might act more responsibly too.

Sources:  I referred to news reports on the Dallas accident from the websites of Channel 5 News at https://www.nbcdfw.com/weather/stories/Dallas-Crane-Collapse-Multiple-Injuries-511045081.html and CNN at https://www.cnn.com/2019/06/10/us/dallas-crane-collapse/index.html.  I also referred to Wikipedia articles "303 East 51st Street" for the New York crane collapse and https://en.wikiquote.org/wiki/John_W._Gardner for the quotation about plumbers and philosophy. 

Monday, June 17, 2019

Is Facebook's Libra Cash In Your Future?


A recent item in the San Jose Mercury News says that the social-media giant Facebook is planning to announce that it's venturing into the cryptocurrency business with something it has code-named Libra.  The earliest and best-known such currency—Bitcoin—has not exactly taken the financial world by storm.  But Facebook is reportedly lining up cooperation with credit-card companies Visa and Mastercard as well as PayPal to make using Libra more appealing than its predecessors.

Most of my readers are probably familiar with the basics of cryptocurrencies such as Bitcoin, which is based on a technology known as "blockchain."  From what I understand, it's a way of guaranteeing that everybody knows what has been done and who owns what, but without anyone being able to trace ownership of a particular unit of currency beyond the person you are immediately dealing with.  Anyway, it works well enough to have attracted the attention of investors who have sent the value of Bitcoin on a roller-coaster ride that has enriched a few and impoverished probably just as many.  And the fact that most cryptocurrencies are subject to just such wild and unpredictable fluctuations is one big reason that non-speculators have mostly stayed away from them, unless their desire to transact illegal business with untraceable cash has overcome their fear of short-term changes in value.
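For the curious, the "chain" part of a blockchain can be sketched in a few lines of Python: each record carries a hash of the record before it, so tampering with any past entry breaks every later link.  This toy is nothing like Bitcoin's real machinery (no mining, no network, no signatures), but it shows the core tamper-evidence idea.

```python
import hashlib

def make_block(prev_hash, data):
    """A toy block: a record plus the hash of the previous block."""
    block = {"prev": prev_hash, "data": data}
    block["hash"] = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every link; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            (block["prev"] + block["data"]).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, "Alice pays Bob 1 coin")]
chain.append(make_block(chain[-1]["hash"], "Bob pays Carol 1 coin"))
print(chain_is_valid(chain))   # True

chain[0]["data"] = "Alice pays Bob 100 coins"   # tamper with history
print(chain_is_valid(chain))   # False
```

Notice that changing the first record invalidates the whole chain: that is the property that lets everybody agree on what has been done without trusting any single record-keeper.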

The way Facebook plans to fix the fluctuation problem is to tie their Libra to a basket of government-issued currencies.  So the idea would be that anybody holding 100 Libras (or whatever they end up being called—I suggest "zuckers") could take them at any time and trade them in for a certain fixed number of pounds, dollars, and euros.
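To make the arithmetic of a basket peg concrete, here is a little Python sketch of how redemption might work.  The basket weights and exchange rates below are invented for illustration only; Facebook has published no such figures, and the function name is mine, not theirs.

```python
# Sketch of a basket-pegged redemption: one "zucker" is defined as a
# fixed bundle of government currencies.  The weights and rates are
# invented for illustration; they are not Libra's actual design.

BASKET_PER_UNIT = {"USD": 0.50, "EUR": 0.30, "GBP": 0.15}  # per zucker

def redemption_value_usd(units, usd_rates):
    """Dollar value of `units` zuckers, given the U.S.-dollar exchange
    rate for each basket currency (dollars per unit of that currency)."""
    per_unit = sum(amount * usd_rates[ccy]
                   for ccy, amount in BASKET_PER_UNIT.items())
    return units * per_unit

rates = {"USD": 1.00, "EUR": 1.12, "GBP": 1.27}  # illustrative rates
print(f"100 zuckers = ${redemption_value_usd(100, rates):.2f}")
```

Because the bundle is fixed, the zucker's dollar value moves only as much as the weighted average of the basket currencies does, which is the whole stabilization argument.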

That's fine in theory.  But if Facebook suddenly finds itself in the position of the Federal Reserve, able to issue as much or as little currency as it wants, and able to say how much the currency is worth, the firm will face temptations that few governments have been able to resist in the past.

The open secret about the U. S. Federal Reserve, and for that matter any entity that issues fiat currency, is that they can print money.  Or, what amounts to the same thing, they can loan you money that they don't have until they write you the check, but then you have to pay it back in real cash that you have to earn somehow.  This is why you almost never run across a banker in the line at the soup kitchen.  The Federal Reserve System is open to numerous criticisms, but at least it has some semblance of being under the control of the U. S. government, and if it went totally crazy one day we citizens would stand a chance of doing something about it before it bankrupted us all. 

Facebook, however, as a private (though publicly traded) entity, is under no such restrictions.  If the company chooses, it can go the way of the nineteenth-century "wildcat" banks that predated the establishment of the Federal Reserve System in 1913.  Back then, every bank that wanted to could issue its own currency, and many of them did.  But if you accepted a note from the Second National Bank of Podunk, Indiana, you were taking a chance that it wasn't worth the paper it was printed on if that bank had decided to go on a note-printing spree, which many of them did—and then closed forever, leaving holders of their notes with no recourse.

An entity as big as Facebook isn't likely to vanish in a shower of virtual zucker notes.  But there are reasons why sovereign governments typically reserve the right to issue the primary legal tender in their respective domains.  As is usually the case with governmental behavior, it has to do with power.

You may have heard the saying, "The power to tax is the power to destroy."  Nobody's saying Facebook will start taxing people and calling it that, but you can certainly picture them charging people for certain services involving their currency.  It turns out that the quotation is taken from an oral argument that the great U. S. statesman Daniel Webster made before the even greater U. S. Supreme Court Chief Justice John Marshall in 1819.  And at issue was the power of a state to tax guess what?  A bank.

Bitcoin, or for that matter all the cryptocurrencies lumped together, represent such a small fraction of all the money in circulation today that governments can afford to ignore them.  But suppose that Facebook's venture into this business turns out to be really successful.  People start getting paid in zuckers instead of dollars.  Banks start carrying your account balance in zuckers instead of dollars.  On April 15 you write a check to the U. S. Treasury in zuckers instead of dollars—uh-oh, that won't wash.  Uncle Sam gets to say how you pay Federal taxes, and he won't take anything but dollars.

If this alleged stabilization business with the currency basket works, maybe there won't be a problem with paying taxes in cryptocurrency.  But if governments haven't been able to resist the temptation to tamper with the exchange rate of their currency, my guess is that Facebook won't be able to resist the temptation either.  And presumably, Facebook (or whatever co-op ends up running the currency) will be able to determine how many zuckers are in circulation.  That right there is a temptation that is hard to resist.  Why go to all the trouble of developing a business model and charging customers and paying employees and making a profit, when you can just issue another few million zuckers and there you are?  And if the exchange rate stays constant, those zuckers are just as good as dollars or what have you. 

No, there are good reasons why any government faced with the advent of an increasingly popular medium of exchange that isn't under its control sooner or later grabs it for itself.  And I predict that either Facebook's venture into cryptocurrency will vanish in the welter of other such products without a trace, or if it becomes really popular and lots of people and companies start using it, Uncle Sam will come along and take away Mr. Zuckerberg's new toy.  Even if they are called zuckers. 

Sources:  The San Jose Mercury News carried the article "Who is Facebook getting to support Libra, its cryptocurrency?" on June 14, 2019 at https://www.mercurynews.com/2019/06/14/who-is-facebook-getting-to-support-libra-its-cryptocurrency/.  The attribution to Daniel Webster is from https://www.bartleby.com/73/1798.html.  For one of the clearest explanations of the Federal Reserve System, and for arguments that actually favor the limited issue of currency or its equivalent by private entities, I recommend distributist author John C. Médaille's Toward a Truly Free Market (ISI Press, 2010).

Monday, June 10, 2019

Death With Style and Elegance: Philip Nitschke's Death Pod


Near-universal consensus about anything is rare in ethics.  But I think you could get most people to agree on an answer to the question, "Was the invention of the Nazi gas chambers a good invention?"  In the Auschwitz concentration camp operated by Nazi Germany during World War II, an estimated 1.1 million people died, most of them Jews, and scientifically-designed gas chambers were used to kill many of them.  Those gas chambers represent a nadir in the history of engineering:  designed by a corrupt, malevolent government for industrial-scale executions of people who died in them only because they ran afoul of Hitler's regime.

Ah, but what if those wanting to try out a gas chamber are not compelled, but have made the decision of their own free will?  And have even passed an online test certifying that they are of "sound mind"?  And have read a fancy advertising brochure promising "death with style and elegance"?  Just climb into the Death Pod—which looks like what you might get if you asked Apple to design a body-length chest-style freezer—lie down, make yourself comfortable, and push the button.  The software does all the rest.  Allegedly, the user will experience first "euphoria" as the oxygen level decreases, then pass out into the Great Beyond.  Since liquid nitrogen is involved, it's not clear whether the corpse is flash-frozen after death or if that's just a convenient way to get a supply of suffocating nitrogen.  But after you're dead, it really doesn't matter. 

The inventor of the Death Pod is Philip Nitschke, a resident of the Netherlands who has been working on self-operated suicide machines since at least the 1990s.  I won't remark on the similarity of Mr. Nitschke's name to the philosopher Friedrich Nietzsche, except to say that they seem to share a pessimistic view of life that partakes of what some call the culture of death. 

The alleged problem that the Death Pod addresses is the tiresome necessity of involving medical personnel, or, worse yet, improvising one's own method, in the decision to commit suicide.  The writer and wit Dorothy Parker portrayed the difficulties facing someone who has reached the decision to do herself in with this grim little poem, titled "Résumé":

            Razors pain you,
            Rivers are damp,
            Acids stain you,
            And drugs cause cramp.
            Guns aren't lawful,
            Nooses give,
            Gas smells awful,
            You might as well live.

But Parker, who died in 1967 of mostly natural causes, didn't live to see how the Death Pod could solve all these problems in one stroke.

In a development that I am sure will be noted by future historians (assuming there are any left), the idea of legalizing suicide has spread around the world in recent decades.  In the Netherlands, Germany, Canada, and some U. S. states, both euthanasia (mercy killing by physicians) and suicide are legal, and organizations such as Nitschke's "Exit International" promote the idea that killing yourself should be—well, it's hard to say what they think it should be.  Socially acceptable?  More often considered as an option to be chosen when facing problems?  Quick, easy, and convenient, like the tag line to countless other advertisements for products that let you become more of the ideal autonomous self that modern culture seems to be encouraging us to be?  All of these things and more.

As far as I can tell, nobody has actually died in a Death Pod yet.  Nitschke enlisted the help of an industrial designer in creating a full-scale model that people visiting its display in Venice, Italy, can try on for size.  There's a photo of a gal lying underneath the clear plastic canopy of the thing, holding a bunch of lilies and grinning.  Death can be funny if you can open up the canopy and get out afterwards.  But that won't be an option once Nitschke finishes his design and publishes 3-D printing instructions for the entire system.

This whole thing may be nothing more than a publicity stunt, as the news reports about it say that Nitschke isn't planning to make and sell the device himself, apparently concerned that he might get in trouble with a government which doesn't look favorably on people selling products that are not only likely to kill their owners, but almost guaranteed to.  Instead, the suicidal customer is expected to download the plans, 3-D print the device somewhere (good luck finding a printer large enough to handle a coffin-size plastic box capable of holding the weight of the average human), and hook it up to liquid nitrogen that you get at your handy local liquid-nitrogen convenience store. 

Technically, this scenario doesn't make much sense, and it sounds as if some important ingredients are missing.  For example, a third party (Nitschke doesn't say who) has to be involved to give the user an access code to get into the thing, presumably ensuring that the user has passed an online test verifying that they are of sound mind.  But the definition of "sound mind" must be pretty skewed.  Public health officials tend to treat suicidal tendencies as aberrant behavior, not evidence of a sound mind.

There are so many things wrong with this idea that I could write several columns about it, but I will close with this thought, which readers not believing in the supernatural can skip.  Ever since the Fall in the Garden of Eden, humanity has been in a battle that is primarily waged in the spiritual realm, between God and his angels and the Devil and his angels.  The Devil would like nothing better than to kill off humanity, which he finds offensive in the highest degree.  So he likes to portray death as attractive, even as stylish and elegant, in order to achieve his purposes, which are to kill, steal, and destroy.  Philip Nitschke is an unwitting servant of the Devil when he goes around promoting attractive means of killing oneself.  The Devil, a liar from the beginning, likes to fool us with the illusion that we are autonomous individuals who can freely choose what to be or how to end our lives, with no adverse consequences.  The Death Pod is just one of the latest of his tricks.  But somehow the product's execution (pardon the expression) doesn't sound like it will live up to the hype its creator has generated, and I for one hope it doesn't. 

Monday, June 03, 2019

Bitcoin-Enabled Ransomware Attack Strikes Baltimore


Last month, the city of Baltimore became the latest target of a ransomware attack.  The city's Microsoft operating systems were held hostage by a group that demanded 13 bitcoins, which at the present rate of exchange is about $100,000.  Despite their inability to repair all the damage after nearly a month, Baltimore administrators refuse to pay the ransom, and instead have asked the federal government for help.  According to some sources, the malware used for the attack was developed at the U. S. government's National Security Agency (NSA), and somehow it leaked and was posted by a group of hackers in 2017. 

Irony is usually found more in literature than in engineering, but this incident is particularly rich in ironies.

The first irony is that a cyberweapon presumably developed to be used by the United States against its enemies was stolen, published worldwide, and used instead to attack the infrastructure of a major U. S. city. 

The second irony is that an idea traceable back to 1991, a chain of blocks developed originally just to prevent digital-document timestamps from being tampered with, has turned into a means by which ransoms can be paid with no realistic hope of tracing where the money goes.

And the third irony is that some eyebrows are being raised by the fact that the city of Baltimore is asking for help from the federal government. 

Let's do a little thought experiment and set the essential ingredients of this incident in an alternate universe which is just like ours, except that there are no computer networks and so on.  Suppose a gang of paratroopers landed in Baltimore and made their way to the city offices, holding employees at gunpoint while they absconded with tons of files and records in a heavily armored vehicle.  Then the mayor received a ransom note demanding $100,000 for the return of the records.  Not only would a nationwide manhunt be mounted for these criminals, but the FBI and other federal agencies would get involved as a matter of course.

But simply because the records and functions involved are on computers and not physical documents, attitudes and actions are vastly different here.  Now, admittedly some blame can be attached to those responsible for running Baltimore's IT systems.  Microsoft evidently does a fairly good job of sending out patches and updates in response to new viruses and malware, but these patches have to be implemented in a systematic and organized way.  And in the case of Baltimore's systems, this was not done.  In the world of our thought experiment, this amounts to not having enough armed guards surrounding your municipal buildings to fight off the attackers.  While a certain amount of security is to be expected, nobody wants to have to do the equivalent of breaking into Ft. Knox in order to pay your city water bill.

While I am not usually in favor of greater centralization of power and resources, in this case I think it is only fair for the federal government to help out Baltimore in their hour of need.  For one thing, the NSA never should have let its malware escape in the first place.  It would seem to be a fairly straightforward investigation to discover who was responsible.  But the NSA's workings are deliberately opaque and poorly supervised even by Congress, which pays the bills, and that sort of setup is an open invitation to laxity and inefficiency.  Perhaps this leak represents only 0.001% of everything that NSA has developed, most of which is still secret.  But in situations like this, even one leak can be too many.

As for bitcoins being used for ransomware payment, it makes a certain amount of perverse sense that a form of currency inspired by hyper-libertarianism is used mainly for two things nowadays:  speculation and illegal transactions.  It is an ill wind that blows nobody good, and bitcoins have benefited some people.  I may have mentioned a student of mine who managed to buy some bitcoins only a few years after they came out in 2009.  I don't know exactly what she paid, but by the time she graduated I think she had been able to pay for her entire college education with her profit in bitcoins. 

But is this advantage worth the social cost of having a virtually foolproof way of laundering money?  I leave that for the reader to decide.  It hardly matters anyway, because bitcoins and their offspring are now a permanent part of the cyberlandscape.

Perhaps the most troubling aspect of the Baltimore situation is the complete anonymity of the attackers, who could be, and probably are, anywhere in the world outside of the United States.  Prior to the Internet, the most significant threat the U. S. endured from outside its borders was the threat of intercontinental ballistic missiles carrying nuclear warheads, and billions of dollars were spent in an arms race that is in some ways still with us.  But now that anyone with sufficient skills can mount attacks on specific geographic entities in the heartland of the U. S. from halfway around the world, we still act as though it's just some sort of defect in a strictly local pile of computer networks, and treat the attacks much like acts of God—something that's always going to happen sooner or later, so you might as well just buy insurance and be ready when it happens.

Maybe that's the best approach.  Baltimore, as it turns out, did not have cyberinsurance, but the bond underwriters will soon see to that.  So in the future we will go armed not with guards, but with insurance policies to buy experts who come in and fix our computer systems, just like roofers replaced my roof after a hailstorm this spring.  Complexity begets complexity, and if Baltimore and other cities consistently refuse to pay ransomware demands, perhaps the criminals will devise some other way to make ill-gotten gains.  I can hardly wait to see what they'll do next.  (That's irony, by the way.)

Sources:  I referred to articles at https://phys.org/news/2019-05-baltimore-ransom-cyberattack.html and the website Governing.com at https://www.governing.com/topics/public-justice-safety/gov-cyber-attack-security-ransomware-baltimore-bitcoin.html, as well as the Wikipedia articles "blockchain" and "bitcoin." 

Monday, May 27, 2019

Can We Trust Alexa? Wade Roush Hopes So


Wade Roush is a journalist who writes a column on innovation for the prestigious Scientific American monthly.  In the June issue, he looks at the future of increasingly smart and omni-present artificial-intelligence (AI) agents that you can talk with—Apple's Siri, Google's Assistant, Amazon's Alexa, Microsoft's Cortana, and so on.  Apple has installed a Siri app in its AirPods so all you have to do is say, "Hey, Siri" and she's right there in your ear canals.  (Full disclosure:  I don't use any of these apps, except for a dumb talk-only GPS assistant we've named Garmina.) 

True to his column's marching orders, Roush came up with a list of five protections that he says users should "insist on in advance" before we go any further with these smart electronic assistants.  Don't get me wrong, it's a good list.  But the chances of any of the five taking hold or being realized in any substantial way are, in my view, way smaller than a snowball's chances in you-know-where. 

Take his first item:  privacy.  Inevitably, AI interactions are cloud-based because of the heavy-duty processing required.  Therefore, he calls for end-to-end encryption so even the companies running the AI assistants can't tell what's going on.  This is a contradictory requirement.  Of course they have to know what you're asking, because otherwise how are they going to respond to requests for information?  Maybe Roush is thinking of something like the old firewall idea that used to be maintained between the editorial and advertising divisions of a news organization.  But there are huge holes in those walls now even in the most traditional news outlets, and I don't see how any company could both remain ignorant of what's going on between its AI system and the user, and have the AI system do anything useful.

The next protection he asks for is unprecedented, so I will quote it directly:  "AI providers must be up front about how they are handling our data, how customer behavior feeds back into improvements in the system, and how they are making money, without burying the details in unreadable, 50-page end-user license agreements."  If any of the AI-assistant firms manage to do this, it will be the first time in recorded history.  Especially the part about how they make money.  That's called a firm's business strategy, and it's one of the most closely guarded secrets that most firms have. 

Next, he calls for every link in the communication chain to be "hacker-proof."  Good luck with that.  Hacker-resistant, I can see.  But not hacker-proof.

Next, he says the assistants must draw on "accurate data from trusted sources."  This is a hard one.  If you ask Alexa a question like, "What do you mean, an Elbonian wants my help in transferring millions out of his country?" what's she going to say in response?  The adage "garbage in, garbage out" still applies to AI systems just as it did to IBM System/360s in the 1960s.  And if we're truly talking about artificial intelligence, with no human intervention, I don't see how AI systems will filter out carefully designed phishing attacks or Russian-sponsored political tweets any better than humans do, which is to say, not very well.

And I've saved the best for last.  He calls for autonomy, for AI assistants to give us more agency over our lives:  "It would be a disaster for everyone if they morphed into vehicles for selling us things, stealing our attention or stoking our anxieties." 

Excuse me, but those three actions are how most of the Internet works.  If you took away all the activity that was designed to sell us things, the Internet would dwindle back down to a few academics sending scientific data back and forth, which is how it began in the 1980s.  If you told designers not to try stealing our attention, and turned off all the apps and sites designed to do so, Facebook, Instagram, all the online games, Twitter, newsfeeds—all that stuff would disappear.  Facebook designers are on public record as having said that their explicit conscious intention in designing the system was to make using it addictive.  And as for stoking our anxieties—well, that's a good capsule description of about 80% of all the news on the Internet.  Take that away, and maybe you'll have some good stories about rainbows, butterflies, and flowers, but only till the sponsoring companies go bankrupt for lack of business.

I have no personal animus against Mr. Roush, and in dealing with a new technology he has to say something about it.  And there's no harm in holding up an ideal for people to approach in the future, even if they don't have much of a chance of approaching it very closely.  But it's strange to see a supposedly savvy technology writer call for future protections on any high-tech innovation that are so ludicrously idealistic, not to say contradictory in some points. 

Perhaps a page from the historians of technology would be helpful here.  They make a distinction between an internalist view of history and an externalist view.  I'm radically simplifying here, but basically, an internalist (I would count Roush in that number) takes the general assumptions of a field for granted and looks at things in a we-can-do-this way.  And in principle, if you take the promises of smart-AI proponents at face value, we could in fact achieve the five goals of protection that Roush outlined.

But an externalist views a situation more broadly, in the context of what has happened before both inside and outside a given field.  In saying that the protections Roush calls for are unlikely to be realized fully, I rely on the history of how high-tech companies and other actors have behaved up to this point, which is to fall far short of every protection that Roush calls for, at one time or another. 

I hope that this time it will be different, and talking with your trusted invisible AI assistant will be just as worry-free as talking with your most trusted actual human friend on the phone.  But after writing that sentence, I'm not even sure that I want that to happen.  And if it does, I think we will have lost something in the process.

Sources:  Wade Roush's column "Safe Words for Our AI Friends" appeared on p. 22 of the June 2019 print issue of Scientific American.

Monday, May 20, 2019

How Not To Do It: Elizabeth Holmes and Theranos


As the engineering sage Henry Petroski likes to say, we often learn more from failures than from successes, at least when it comes to ethical behavior.  And we now have a book-length record of one of the most spectacular failures in recent business history:  Theranos, a medical-equipment company founded by Elizabeth Holmes when she dropped out of Stanford at the tender age of 20.  Although the ethical misdeeds of Holmes and her close associates are many and various enough to fill a book—investigative reporter John Carreyrou's excellent Bad Blood—I would like to focus on just one aspect of wrongdoing described therein:  the falsification of test data sent to regulators.

Holmes, who is presently undergoing a federal trial in connection with her time as CEO of Theranos, began her venture with a remarkable vision:  to make all medical blood tests as simple and easy as the diabetic's pin-prick test for blood-glucose monitoring.  She based this vision on nothing more than an internship she spent at Stanford in a research lab one summer.  But she brought to the table considerable family connections (she was able to raise over six million dollars for her new company in less than two years) and a personal magnetism that convinced older and wiser persons (men, mostly) to give her whatever she wanted.   And if it had been possible to achieve her goal with enough money and talented people in the time she had, she would have achieved it.

The history of technology teaches us that there are optimum times for certain technical advances, and the key to profiting from a new technology is to try it at the right time.  Even Steve Jobs, whom Holmes idolized and emulated whenever possible, could not have invented the Macintosh in 1953—the technology simply wasn't there yet.  So in retrospect, Holmes's idea of a tiny credit-card-like machine that would do everything a whole clinical blood lab currently does, with a small fraction of the blood volume now required, was simply too far ahead of its time.

The farthest her company got technically was to build some kludgy prototypes that were basically robot versions of what a human lab technician does.  When they worked, which wasn't often, their results were unreliable and error-ridden compared to those of conventional blood-testing machines.  The only way Theranos employees could get results approaching the accuracy of standard lab-testing devices already on the market was to run six of their robot machines at once on the same sample and average the results.  Needless to say, this was not a practical solution, but as Holmes had already negotiated contracts with big retailers such as Safeway and Walgreens, and blood samples from real people were coming in to be tested, the engineers had to do something, and this was their stopgap.
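The six-machine workaround leans on a basic statistical fact: averaging n independent noisy readings shrinks the random scatter by roughly a factor of the square root of n, while doing nothing about any systematic bias in the instruments.  The little simulation below illustrates the idea; the numbers (a "true" value of 100 and a per-machine error of 15) are invented for illustration and are not a model of the actual Edison devices.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # hypothetical "true" analyte concentration
NOISE_SD = 15.0      # hypothetical random error of a single machine

def one_machine_reading():
    # One noisy reading from a single (imaginary) machine
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged_reading(n_machines=6):
    # Run the same sample on several machines and average,
    # as the Theranos engineers reportedly did with six Edisons
    return statistics.mean(one_machine_reading() for _ in range(n_machines))

# Compare the spread of single readings vs. six-machine averages
singles = [one_machine_reading() for _ in range(10_000)]
averages = [averaged_reading(6) for _ in range(10_000)]

print(statistics.stdev(singles))   # roughly NOISE_SD
print(statistics.stdev(averages))  # roughly NOISE_SD / sqrt(6), i.e. much tighter
```

The catch, of course, is that this buys accuracy only against random error, and at six times the cost per test, which is why it could never have been more than a temporary expedient.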

Then the question of federal certification came up.  Labs in the U. S. that are not purely research-oriented—that is, blood testing establishments that test blood for the general public—are required to meet certain standards enforced by the U. S. government under the Clinical Laboratory Improvement Amendments (CLIA), the laws governing such labs.  At Holmes's insistence, Theranos fudged on meeting these standards as long as it could.  By the time Theranos hired a young engineer named Tyler Shultz, the firm was following a home-brew procedure to certify its lab tests that looked funny to him.

The CLIA required periodic "proficiency testing" of samples to ensure that the lab was getting accurate results.  The rule was that these samples should be tested "in the same manner" as actual patient specimens "using the laboratory's routine methods."  But at Theranos, there was nothing routine about the way they were handling tests.

In order not to look completely foolish, the company had bought a number of on-the-market blood-test machines that it was using along with its own units, called (regrettably for Edison) "Edisons."  The Edisons could handle some types of tests with the dodgy averaging method, but others were sent to the outside machines and the results passed off as having been done by Theranos devices.  When Tyler tried to run the proficiency tests on the Edisons and they failed, he got in hot water with management, who ordered him to report only the good results from the non-Edison machines.  After Tyler's protests proved fruitless, he contacted an outside state regulatory body to see whether his suspicions about Theranos's way of doing things were well-founded.

It was, and Tyler decided to go to the most powerful person he knew who was associated with Theranos:  his grandfather.  This was not just any grandfather.  It was George Shultz, former U. S. Secretary of State, now in his 90s, but very much interested in Theranos as a member of the board.

Without passing judgment on the quality of relationships in the Shultz family, I can still say that Tyler's complaints to his grandfather fell on deaf ears.  Not only that, but his grandfather told on him to Theranos management, who made things so hot for Tyler that he decided to resign, despite threats that if he quit and went public with his information, "he would lose."

Tyler turned out to be a critical source for John Carreyrou, a Wall Street Journal investigative reporter who also caught legal flak for trying to report the truth about Theranos.  But Carreyrou persisted in contacting current and former employees of the firm who gave him enough information to write a series of blockbuster articles that turned the climate of public opinion against Theranos, and led eventually to the demise of the company (it went out of existence last fall) and the indictment of Holmes and her second-in-command for fraud.

As whistleblower stories go, Tyler Shultz got off fairly easily.  He helped bring down an organization which richly deserved its fate.  But far more commonly, whistleblowers' allegations are challenged vigorously and often successfully, even if they are true.  And whatever happens ultimately to a whistleblower's career, he or she is guaranteed to undergo considerable emotional pain as the accused organization tries to defend itself, almost like an immune response to a therapy that will ultimately do a patient good, but which causes discomfort initially.

The story of Holmes and Theranos is of a noble cause gone bad, and continues to play out in the courtroom.  But we know enough already to use the story as a bad example of how startups should not handle problems with regulators.  The lesson for young engineers is, if you smell a rat, don't just hold your nose and keep on going.  Find out if it's really a rat, and if it is, deal with it, even if you get your hands dirty to do so.

Sources:  John Carreyrou's Bad Blood:  Secrets and Lies in a Silicon Valley Startup was published in 2018 by A. A. Knopf.   

Monday, May 13, 2019

China's High-Tech Persecution of Uighurs


In 1949, the newly formed communist government of China seized control of the northwest corner of the country now known as Xinjiang.  The region was home to a number of different ethnic groups, the largest of which are known as Uighurs (also spelled Uyghurs).  The Uighurs are a Turkic people with their own language and culture, and the majority of them are Muslims.  None of this sits well with the Chinese government, which began systematic attempts to force Uighurs to conform to the language and ways of the Chinese-majority Han ethnicity, and the struggle continues to this day.

Chilling details of how Beijing is using the latest high-tech surveillance methods to persecute Uighurs were reported in a recent article that appeared on the website of Wired.  In 2011, a new social-media app called WeChat took China by storm, and Uighurs seized upon it as a great new way to communicate among themselves, discussing everything from personal matters to politics and religion.  But in 2014, the Chinese government forced WeChat's owners to let them snoop on all WeChat messages, and soon after that, bad things started to happen to Uighurs who discussed sensitive issues such as Islam or Uighur separatist movements on WeChat.  In 2016, Uighur families who used WeChat incautiously were being checked on by police officers, sometimes daily.  For one family, it got so bad that they decided to emigrate to Turkey, and the father sent his wife and children ahead while he stayed behind to wait for his passport.  It never arrived, and instead he was arrested.

This is just one of thousands of cases in which Chinese officials have persecuted Uighurs.  Besides monitoring social media, the police are requiring Uighurs to provide voice and facial-recognition samples and even taking compulsory DNA samples.  Families are afraid to turn their lights on at home before dawn for fear the police will figure that they're praying, and haul them off to a re-education camp.  Yes, China is operating what amounts to massive prison camps for Uighurs, which the government claims are vocational training centers.  A diplomatic spokesman for Turkey contradicts these claims, saying that up to a million Uighurs have been detained in such camps against their will and subjected to torture and "political brainwashing."

Such actions are familiar to anyone with knowledge of China's Cultural Revolution, which lasted from 1966 until Mao Zedong's death in 1976.  This nationwide convulsion paralyzed the country, led to millions of deaths, and subjected millions more to internal exile and forced self-confessions.  While such things are a fading memory to most citizens of China today, the surveillance state is not, and all it takes for someone to fall back into those bad old days is to manifest religious faith in actions such as gathering for worship or praying openly.  And political organizing against the will of the ruling government will land you in hot water too.

China's reach even extends beyond its borders to blight the lives of Uighurs who escape to Turkey and elsewhere.  For Uighurs remaining in China, even contacting an exiled Uighur relative or friend by phone can result in police investigations and arrest.  So leaving China usually means losing all contact with family and friends who remain behind, except for the rare hand-carried letter that can be smuggled back into the country by a friendly courier. 

The Chinese government seems to be motivated by fear rather than trust.  One reason for this may be that the long history of China is one of periods of peace enforced by central control, interrupted by brief spasms of popular revolt that depose the old guard and install a new one in its place.  The leaders of China seem above all determined not to let that happen to them, which explains their brutal response to the 1989 Tiananmen Square protests and their continued harassment of ethnic and religious minorities.  In common with other totalitarian philosophies, they seem to think that if anybody, anywhere in China harbors thoughts or actions that fundamentally contradict the basic assumptions of dialectical materialism, the regime is in mortal danger and must suppress such thoughts or actions.

In a well-informed article in the journal of religion and public life First Things, Thomas F. Farr, who heads an NGO called the Religious Freedom Institute, says that U. S. diplomacy toward China has been largely ineffective in its efforts to mitigate the suffering that religious and ethnic minorities endure there.  Farr recommends reminding Chinese government leaders of their self-interest in promoting a peaceful and prosperous society.  He suggests that we provide Chinese leaders with hard evidence that religious faith can produce individuals who are peaceful, productive, and a net asset to any country which harbors them.  So instead of persecution, arrests, and forced retraining, which are likely to inspire counter-movements and even terrorism, perhaps we can persuade the Chinese government to change its attitude toward minorities like the Uighurs and allow them to practice their faiths and preserve their cultures. 

That would be nice if we could make it happen, but so far it's just a policy idea.  Right now, the Chinese government seems to think more and more surveillance is the answer, and has invested billions in technology and hiring of police to the point that in some regions of Xinjiang, the only stable, reliable job you can get is to work for the police and spy on your neighbors.

In the U. S., such difficulties seem exotic and far away, and it's easy to forget that the same technology Beijing is using to control its Uighur population is available here.  I suppose it's better for megabytes of personal information about us to be in the control of private companies who have a good reason to behave themselves if they want to stay in business, rather than a government desperate to remain in control under any circumstances.  But the information is out there just the same, and the sad plight of the Uighurs in China reminds us that except for the traditions of freedom in the U. S., we might be in the same boat.

Sources:  Isobel Cockerell's article "Inside China's Massive Surveillance Operation" appeared on May 9, 2019 at https://www.wired.com/story/inside-chinas-massive-surveillance-operation/.  Thomas F. Farr's "Diplomacy and Persecution in China" appeared in the May 2019 issue of First Things, pp. 29-36.

Monday, May 06, 2019

Faked Test Result Cost $700 Million, Says NASA


Last Tuesday, April 30, NASA announced the results of a years-long investigation by its Office of the Inspector General (OIG) and its Launch Services Program (LSP).  Back in 2009 and 2011, two climate-change-observing satellites failed to reach orbit and were lost at a total cost of $700 million.  In both cases, the payload fairing—the dome protecting the payload during launch—failed to separate on command, throwing the flight dynamics out of whack and ultimately crashing the satellites before they reached orbit.  After a long investigation in which the U. S. Department of Justice was involved, NASA's OIG found that a supplier of aluminum extrusions used to hold the fairing together had been faking materials-testing results on the extrusions not once, not twice, but literally thousands of times over a period of nineteen years.

At least, that is what the revised launch-failure report by NASA says.  Understandably, the company involved disputes some of these findings.  At the time the extrusions were supplied, it was known as Sapa Profiles Inc. (SPI), of Portland, Oregon, although it is now part of a multinational corporation called Hydro.  The report makes for chilling reading. 

To allow the fairing to separate into its clamshell halves at the right moment during the launch phase of a flight, explosive charges are set off to sever the aluminum extrusions that hold the halves of the fairing together.  But the aluminum has to have the right properties to break cleanly, and so NASA required supplier SPI to perform certain materials tests on its extrusions, probably measurements of properties such as tensile strength.  Given the right equipment, these are straightforward tests, and even entry-level engineers and the engineering students I teach know that faking test results is one of the worst, but at the same time one of the more common, engineering-ethics lapses.

According to the NASA report, such fakery became routine at SPI, so much so that a lab supervisor got in the habit of training newcomers how to fake test results.  NASA found handwritten documents showing how the faking was done.  And this was no now-and-then thing.  For whatever reason, the extrusions failed tests a lot, and so lots of faking went on, not only for NASA's extrusions but for products bound for hundreds of other customers.  But not all of them had the investigative resources, or the motivation supplied by $700 million in launch failures, to find out what was happening.

The investigation took years to complete, and once SPI was confronted with its results, the company agreed to pay $46 million in restitution to the U. S. government and other customers as a part of a settlement of criminal and civil charges.  That's a lot of money, but clearly a drop in the bucket compared to what the firm's malfeasance cost NASA and all the people who put years of work and ingenuity into the launches of the satellites which were doomed by the faulty extrusions. 

Seldom does a clear-cut violation of engineering-ethics principles have such an equally clear-cut result that makes it into the public eye.  Fortunately, none of the flights affected by the faulty extrusions were manned, but losing $700 million of hardware is bad enough.  What we don't know is how SPI's other customers were adversely affected by the falsified tests but lacked the resources to trace the problem back to its true source.  It's entirely possible that this new information will inspire other SPI customers to look back into mysterious failures of their products to see whether faulty SPI extrusions may be at fault there as well.  At any rate, NASA has suspended the firm from selling anything to the U. S. government and is considering proposing a permanent bar.

I can only speculate what went through the minds of the engineers who were asked to falsify the test results.  Clearly, a culture of falsification had to be in place for the problem to go on as long as it did.  And numerous psychology experiments have shown that we are much more creatures of our environment than we think we are.  Stanford professor of psychology Philip Zimbardo showed that in 1971 when he set up a mock prison with college-student volunteers who were randomly assigned to be either prisoners or guards.  Within six days, the guards were treating the prisoners so badly that Zimbardo terminated the experiment early for fear that someone could get injured or killed.

If ordinary law-abiding students can conform to an environment they know is not real, but which nevertheless demands behavior contrary to their everyday conduct, it is no great surprise that engineers newly hired into a company where systematic corrupt practices are in place find it all too easy to conform to their supervisor's expectations and fall in with the practice of test-result fakery.  As an educator, I don't know what we can do other than to repeat that faking test results is never, under any circumstances, a good thing to do.  Adhering to this advice requires that the listener believe in at least one moral absolute, and that itself can prove to be a challenge these days.

So sometimes, telling stories is more effective than just reciting rules.  The SPI extrusion episode will probably make it into the annals of engineering-ethics textbooks, as it should.  Maybe telling the story of how fake test results led directly to the loss of satellites will make an impression that will stick in the minds of students.  At any rate, this debacle deserves to be more widely known, as it serves as an object lesson for anyone who is responsible for testing hardware, or software, for that matter.

Sources:  I referred to a report on the NASA investigation at the website Space.com carried at https://www.space.com/nasa-determines-cause-satellite-launch-failures.html.  The revised NASA failure report is available at https://www.nasa.gov/sites/default/files/atoms/files/oco_glory_public_summary_update_-_for_the_web_-_04302019.pdf.  I also referred to the Wikipedia article "Stanford prison experiment."

Monday, April 29, 2019

Facebook, Privacy, and Regulation


In what may signal a change in attitude, the U. S. Federal Trade Commission is talking about fining Facebook billions of dollars for breaching a privacy agreement between the company and the FTC.  At issue is how Facebook uses the data it gleans from its users and whether Facebook has asked permission before sharing private data with third parties. 

In a recent AP article, reporter Barbara Ortutay says Facebook has set aside $3 billion in case the FTC fines the firm.  This is something of a drop in the bucket compared to Facebook's profits, which are estimated to be over $20 billion this year.  But it's still large enough to attract investors' attention, and so the publicly traded company mentioned it in a recent news release.

Privacy is one of the more nebulous concepts in ethics and law, as opposed to murder, say.  With murder you generally have a dead body and a definite event that produced it.  But privacy is all in the mind, or rather, minds—the mind of the person whose privacy has been violated, and the minds of those who allegedly know something about the victim that the victim doesn't want known.  And figuring out what is in people's minds isn't that easy.

One of the earliest institutional guarantees of privacy is what the Roman Catholic Church calls the "seal of confession."  Catholics are supposed to confess all their mortal sins periodically to a priest in what's called the sacrament of penance or reconciliation.  In turn, the Church promises on behalf of its priests never to reveal what is confessed.  A priest who breaks the seal of confession is subject to immediate defrocking, and so some priests have become martyrs rather than reveal secrets they learned in the confessional to government agencies, for example. 

There are many differences between confessing one's sins to a priest and posting your latest trip to a bar on Facebook, but structurally the situations are similar.  In each case, there is a person who is providing information that they would like to keep private:  the penitent in the confessional, or the person posting something on Facebook.  There is also the desired audience which the person wants to reach:  the priest (and presumably God) in the confessional, the intended circle of chosen friends in the case of Facebook.  There is the institution whose job it is to ensure that privacy is maintained:  the Church in the one case and Facebook in the other.  And finally, there's everybody else—the rest of the world which is supposed to remain wholly ignorant of what is going on in the private interchanges between priest and penitent in the one case, or Facebook and the user in the other case.

There have been isolated cases in which the seal of confession has been broken, but they have been rare, probably owing to the drastic penalty the Church exacts on a priest who breaks the seal.  In the case of Facebook, things are much different.  For one thing, individual users have no sure way of knowing if Facebook shares their private information with advertisers.  So it's reasonable that another institution with enough resources to investigate such large-scale questions systematically should get involved, in this case the FTC.  Instead of the seal of confession, we have a 2011 agreement reached between the FTC and Facebook which bound the company for twenty years to ask for "affirmative express consent" before Facebook shares any data the user hasn't made public with a third party.

Here's where things get tricky.  Anyone who deals with computers knows that whenever you sign up for a new service or install new software, you get asked to consent to something that most people blow by without reading.  If you try reading the terms and conditions, as they're called, you will either waste hours on it or have to hire a lawyer to figure out what you're really committing to.  This digital equivalent of the fine print on a written contract is where companies like Facebook sometimes try to bury things you might not like if you knew about them.  And the case the FTC has against Facebook may amount to a claim that somewhere, say in clause 4 of paragraph 3.7A, you actually agreed to let Facebook share what you thought was private data with anybody who will pay for it.

The question of whether clicking a button that says you read and understood the terms and conditions without really doing so is "affirmative express consent" has two answers.  The technical answer is, yes, it is, and if you didn't read and understand all that legal boilerplate it's your own fault.  The practical, man-on-the-street answer is, no, it isn't, because nobody but a corporate lawyer getting $300 an hour for the job is going to read and understand all that stuff in the sense that is intended, and making ordinary non-lawyers press the button is simply a CYA (cover-your-afterparts) move on the part of the company.  And the FTC may be saying that Facebook hasn't been covering well enough.

I do not personally use Facebook, although my wife does and lets me know if she finds anything important on it that she thinks I ought to know.  As I said to begin with, privacy is a fuzzy concept which in the digital age we live in has come to mean different things to different people.  Younger people especially seem not to mind sharing things on public sites that forty or fifty years ago would have been confined to the privacy of one's diary kept under lock and key.  I suppose the best we can do is to make clear what users expect in the way of privacy, in terms that users themselves can understand, and then use government regulation if necessary to keep organizations like Facebook from abusing the trust that their users place in them.  And if it takes billion-dollar fines to get a company's attention, then I say go to it. 

Sources:  The AP article by Barbara Ortutay about the potential Facebook fine was carried by numerous news outlets, including the print edition of the Austin American-Statesman on Apr. 26, 2019, where I saw it.  One online location that is not protected by a pay-to-get-behind-it firewall (an increasingly common practice these days) where the article can be viewed is the website of West Virginia's Bluefield Daily Telegraph at https://www.bdtonline.com/region/possible-b-facebook-fine-echoes-european-tech-penalties/article_6c3bfa1d-9d83-58b3-8417-08fc8688ccd4.html.