Monday, May 27, 2019

Can We Trust Alexa? Wade Roush Hopes So


Wade Roush is a journalist who writes a column on innovation for the prestigious monthly Scientific American.  In the June issue, he looks at the future of increasingly smart and omnipresent artificial-intelligence (AI) agents that you can talk with—Apple's Siri, Google's Assistant, Amazon's Alexa, Microsoft's Cortana, and so on.  Apple has built Siri into its AirPods, so all you have to do is say, "Hey, Siri," and she's right there in your ear canals.  (Full disclosure:  I don't use any of these apps, except for a dumb talk-only GPS assistant we've named Garmina.) 

True to his column's marching orders, Roush came up with a list of five protections that he says users should "insist on in advance" before we go any further with these smart electronic assistants.  Don't get me wrong, it's a good list.  But the chances of any of the five taking hold or being realized in any substantial way are, in my view, way smaller than a snowball's chances in you-know-where. 

Take his first item:  privacy.  Inevitably, AI interactions are cloud-based because of the heavy-duty processing required.  Therefore, he calls for end-to-end encryption, so that even the companies running the AI assistants can't tell what's going on.  But this requirement contradicts itself:  of course they have to know what you're asking, because otherwise how are they going to respond to requests for information?  Maybe Roush is thinking of something like the old firewall that used to be maintained between the editorial and advertising divisions of a news organization.  But there are huge holes in those walls now even at the most traditional news outlets, and I don't see how any company could both remain ignorant of what's going on between its AI system and the user and have the AI system do anything useful.
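To make the contradiction concrete, here is a minimal Python sketch, using the third-party cryptography package; the key handling and the sample utterance are my own invented illustration, not how any particular assistant actually works.  Encrypting the channel protects against eavesdroppers, but the provider still holds the key, because it has to read the request in order to answer it:

    # Minimal sketch of the dilemma behind Roush's first protection.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    # The user's utterance can certainly be encrypted in transit...
    server_key = Fernet.generate_key()  # ...but the assistant's provider holds the key
    channel = Fernet(server_key)

    utterance = b"What's the weather in Austin?"
    ciphertext = channel.encrypt(utterance)  # safe from eavesdroppers on the wire

    # To answer the question, though, the provider must recover the plaintext.
    # True end-to-end encryption, which keeps the provider ignorant, would leave
    # nothing on the server side capable of understanding the request.
    request = channel.decrypt(ciphertext)
    print(request.decode())  # the company necessarily sees what you asked

Whatever protocol you substitute, the structure is the same:  the party that acts on your words has to be able to read them.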

The next protection he asks for is unprecedented, so I will quote it directly:  "AI providers must be up front about how they are handling our data, how customer behavior feeds back into improvements in the system, and how they are making money, without burying the details in unreadable, 50-page end-user license agreements."  If any of the AI-assistant firms manage to do this, it will be the first time in recorded history.  Especially the part about how they make money.  That's called a firm's business strategy, and it's one of the most closely guarded secrets that most firms have. 

Next, he calls for every link in the communication chain to be "hacker-proof."  Good luck with that.  Hacker-resistant, I can see.  But not hacker-proof.

Next, he says the assistants must draw on "accurate data from trusted sources."  This is a hard one.  If you ask Alexa a question like, "What do you mean, an Elbonian wants my help in transferring millions out of his country?" what's she going to say in response?  The adage "garbage in, garbage out" still applies to AI systems just as it did to IBM System/360s in the 1960s.  And if we're truly talking about artificial intelligence, with no human intervention, I don't see how AI systems will filter out carefully designed phishing attacks or Russian-sponsored political tweets any better than humans do, which is to say, not very well.

And I've saved the best for last.  He calls for autonomy, for AI assistants to give us more agency over our lives:  "It would be a disaster for everyone if they morphed into vehicles for selling us things, stealing our attention or stoking our anxieties." 

Excuse me, but those three actions are how most of the Internet works.  If you took away all the activity that was designed to sell us things, the Internet would dwindle back down to a few academics sending scientific data back and forth, which is how it began in the 1980s.  If you told designers not to try stealing our attention and turned off all the apps and sites designed to do so, then Facebook, Instagram, all the online games, Twitter, newsfeeds—all that stuff would disappear.  Facebook designers are on public record as having said that their explicit, conscious intention in designing the system was to make using it addictive.  And as for stoking our anxieties—well, that's a good capsule description of about 80% of all the news on the Internet.  Take that away, and maybe you'll have some good stories about rainbows, butterflies, and flowers, but only till the sponsoring companies go bankrupt for lack of business.

I have no personal animus against Mr. Roush, and in covering a new technology he has to say something about it.  And there's no harm in holding up an ideal for people to approach in the future, even if they don't have much chance of coming very close to it.  But it's strange to see a supposedly savvy technology writer call for future protections on a high-tech innovation that are so ludicrously idealistic, not to say contradictory on some points. 

Perhaps a page from the historians of technology would be helpful here.  They make a distinction between an internalist view of history and an externalist view.  I'm radically simplifying here, but basically, an internalist (I would count Roush in that number) takes the general assumptions of a field for granted and looks at things in a we-can-do-this way.  And in principle, if you take the promises of smart-AI proponents at face value, we could in fact achieve the five goals of protection that Roush outlined.

But an externalist views a situation more broadly, in the context of what has happened before both inside and outside a given field.  In saying that the protections Roush calls for are unlikely to be realized fully, I rely on the history of how high-tech companies and other actors have behaved up to this point, which is to fall far short of every protection that Roush calls for, at one time or another. 

I hope that this time it will be different, and talking with your trusted invisible AI assistant will be just as worry-free as talking with your most trusted actual human friend on the phone.  But after writing that sentence, I'm not even sure that I want that to happen.  And if it does, I think we will have lost something in the process.

Sources:  Wade Roush's column "Safe Words for Our AI Friends" appeared on p. 22 of the June 2019 print issue of Scientific American.

Monday, May 20, 2019

How Not To Do It: Elizabeth Holmes and Theranos


As the engineering sage Henry Petroski likes to say, we often learn more from failures than from successes, at least when it comes to ethical behavior.  And we now have a book-length record of one of the most spectacular failures in recent business history:  Theranos, a medical-equipment company founded by Elizabeth Holmes when she dropped out of Stanford at the tender age of 20.  Although the ethical misdeeds of Holmes and her close associates are many and various enough to fill a book—investigative reporter John Carreyrou's excellent Bad Blood—I would like to focus on just one aspect of wrongdoing described therein:  the falsification of test data sent to regulators.

Holmes, who presently faces a federal fraud trial in connection with her time as CEO of Theranos, began her venture with a remarkable vision:  to make all medical blood tests as simple and easy as the diabetic's pin-prick test for blood-glucose monitoring.  She based this vision on nothing more than an internship she spent one summer in a Stanford research lab.  But she brought to the table considerable family connections (she was able to raise over six million dollars for her new company in less than two years) and a personal magnetism that convinced older and wiser persons (men, mostly) to give her whatever she wanted.  And if it had been possible to achieve her goal with enough money and talented people in the time she had, she would have achieved it.

The history of technology teaches us that there are optimum times for certain technical advances, and the key to profiting from a new technology is to try it at the right time.  Even Steve Jobs, whom Holmes idolized and emulated whenever possible, could not have invented the Macintosh in 1953—the technology simply wasn't there yet.  So in retrospect, Holmes's idea of a tiny credit-card-like machine that would do everything a whole clinical blood lab currently does, with a small fraction of the blood volume now required, was simply too far ahead of its time. 

The farthest her company got technically was to build some kludgy prototypes that were basically robot versions of what a human lab technician does.  When they worked, which wasn't often, their results were unreliable and error-ridden compared with those of conventional blood-testing machines.  The only way Theranos employees could get results approaching the accuracy of standard lab-testing devices already on the market was to run six of their robot machines at once on the same sample and average the results.  Needless to say, this was not a practical solution, but as Holmes had already negotiated contracts with big retailers such as Safeway and Walgreens, and blood samples from real people were coming in to be tested, the engineers had to do something, and this was their makeshift stopgap.
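For a rough sense of why the six-machine averaging kludge helped at all, here is a short Python simulation.  This is a sketch only:  the "true" value and the noise level are invented round numbers, not Theranos data.  Averaging n independent noisy readings shrinks the random scatter by about the square root of n, so six Edisons bought roughly a 2.4-fold improvement, at six times the cost in machines and blood:

    # Back-of-the-envelope look at the six-Edison averaging workaround.
    # The true value and noise level below are invented for illustration only.
    import random
    import statistics

    random.seed(42)
    TRUE_VALUE = 100.0   # hypothetical analyte concentration
    NOISE_SD = 15.0      # hypothetical per-machine random error

    def one_machine() -> float:
        """One noisy Edison-style reading (simulated)."""
        return random.gauss(TRUE_VALUE, NOISE_SD)

    def six_machine_average() -> float:
        """The workaround: run six machines on the same sample and average."""
        return statistics.mean(one_machine() for _ in range(6))

    singles = [one_machine() for _ in range(10_000)]
    averages = [six_machine_average() for _ in range(10_000)]

    print(f"single-machine scatter (SD): {statistics.stdev(singles):5.1f}")
    print(f"six-machine-average scatter: {statistics.stdev(averages):5.1f}")
    # Expect roughly NOISE_SD / sqrt(6), about 6.1 here.

And averaging only tames random error; it does nothing about any systematic bias the machines share, which is one more reason the workaround was never going to turn into a real product.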

Then the question of federal certification came up.  Labs in the U. S. that are not purely research-oriented—that is, blood testing establishments that test blood for the general public—are required to meet certain standards enforced by the U. S. government under the Clinical Laboratory Improvement Amendments (CLIA), the laws governing such labs.  At Holmes's insistence, Theranos fudged on meeting these standards as long as it could.  By the time Theranos hired a young engineer named Tyler Shultz, the firm was following a home-brew procedure to certify its lab tests that looked funny to him.

The CLIA required periodic "proficiency testing" of samples to ensure that the lab was getting accurate results.  The rule was that these samples should be tested "in the same manner" as actual patient specimens "using the laboratory's routine methods."  But at Theranos, there was nothing routine about the way they were handling tests.

In order not to look completely foolish, the company had bought a number of on-the-market blood-test machines that it was using along with its own units, called (regrettably for Edison) "Edisons."  The Edisons could handle some types of tests with the dodgy averaging method, but others were sent to the outside machines and the results were passed off as having been produced by Theranos devices.  When Tyler tried to run the proficiency tests on the Edisons and they failed, he got in hot water with management, who ordered him to report only the good results from the non-Edison machines.  After Tyler's protests proved fruitless, he contacted an outside state regulatory body to see whether his suspicions about Theranos's way of doing things were well-founded. 

It was, and Tyler decided to go to the most powerful person he knew who was associated with Theranos:  his grandfather.  This was not just any grandfather.  It was George Shultz, former U. S. Secretary of State, now in his 90s, but very much interested in Theranos as a member of the board.

Without passing judgment on the quality of relationships in the Shultz family, I can still say that Tyler's complaints to his grandfather fell on deaf ears.  Not only that, but his grandfather told on him to Theranos management, who made things so hot for Tyler that he decided to resign, despite threats that if he quit and went public with his information, "he would lose."

Tyler turned out to be a critical source for John Carreyrou, a Wall Street Journal investigative reporter who also caught legal flak for trying to report the truth about Theranos.  But Carreyrou persisted in contacting current and former employees of the firm who gave him enough information to write a series of blockbuster articles that turned the climate of public opinion against Theranos, and led eventually to the demise of the company (it went out of existence last fall) and the indictment of Holmes and her second-in-command for fraud.

As whistleblower stories go, Tyler Shultz got off fairly easily.  He helped bring down an organization which richly deserved its fate.  But far more commonly, whistleblowers' allegations are challenged vigorously and often successfully, even if they are true.  And whatever happens ultimately to a whistleblower's career, he or she is guaranteed to undergo considerable emotional pain as the accused organization tries to defend itself, almost like an immune response to a therapy that will ultimately do a patient good, but which causes discomfort initially.

The story of Holmes and Theranos is of a noble cause gone bad, and it continues to play out in the courtroom.  But we know enough already to use the story as a cautionary example of how startups should not handle problems with regulators.  The lesson for young engineers is, if you smell a rat, don't just hold your nose and keep on going.  Find out if it's really a rat, and if it is, deal with it, even if you get your hands dirty doing so.

Sources:  John Carreyrou's Bad Blood:  Secrets and Lies in a Silicon Valley Startup was published in 2018 by A. A. Knopf.   

Monday, May 13, 2019

China's High-Tech Persecution of Uighurs


In 1949, the newly formed communist government of China seized control of the country's northwest corner, the region now known as Xinjiang.  The area was home to a number of different ethnic groups, the largest of which is the Uighurs (also spelled Uyghurs).  The Uighurs are a Turkic people with their own language and culture, and the majority of them are Muslims.  None of this sits well with the Chinese government, which began systematic attempts to force the Uighurs to conform to the language and ways of the majority Han ethnicity, and the struggle continues to this day. 

Chilling details of how Beijing is using the latest high-tech surveillance methods to persecute Uighurs were reported in a recent article that appeared on the website of Wired.  In 2011, a new social-media app called WeChat took China by storm, and Uighurs seized upon it as a great new way to communicate among themselves, discussing everything from personal matters to politics and religion.  But in 2014, the Chinese government forced WeChat's owners to let them snoop on all WeChat messages, and soon after that, bad things started to happen to Uighurs who discussed sensitive issues such as Islam or Uighur separatist movements on WeChat.  In 2016, Uighur families who used WeChat incautiously were being checked on by police officers, sometimes daily.  For one family, it got so bad that they decided to emigrate to Turkey, and the father sent his wife and children ahead while he stayed behind to wait for his passport.  It never arrived, and instead he was arrested.

This is just one of thousands of cases in which Chinese officials have persecuted Uighurs.  Besides monitoring social media, the police are requiring Uighurs to provide voice and facial-recognition samples and even taking compulsory DNA samples.  Families are afraid to turn their lights on at home before dawn for fear the police will figure that they're praying, and haul them off to a re-education camp.  Yes, China is operating what amounts to massive prison camps for Uighurs, which the government claims are vocational training centers.  A diplomatic spokesman for Turkey contradicts these claims, saying that up to a million Uighurs have been detained in such camps against their will and subjected to torture and "political brainwashing."

Such actions are familiar to anyone with knowledge of China's Cultural Revolution, which lasted from 1966 until Mao Zedong's death in 1976.  This nationwide convulsion paralyzed the country, led to millions of deaths, and subjected millions more to internal exile and forced self-confessions.  While such things are a fading memory to most citizens of China today, the surveillance state is not, and all it takes for someone to fall back into those bad old days is to manifest religious faith in actions such as gathering for worship or praying openly.  And political organizing against the will of the ruling government will land you in hot water too.

China's reach even extends beyond its borders to blight the lives of Uighurs who escape to Turkey and elsewhere.  For Uighurs remaining in China, even contacting an exiled Uighur relative or friend by phone can result in police investigations and arrest.  So leaving China usually means losing all contact with family and friends who remain behind, except for the rare hand-carried letter that can be smuggled back into the country by a friendly courier. 

The Chinese government seems to be motivated by fear rather than trust.  One reason for this may be that the long history of China is one of periods of peace enforced by central control, interrupted by brief spasms of popular revolt that depose the old guard and install a new one in its place.  The leaders of China seem above all determined not to let that happen to them, which explains their brutal response to the 1989 Tiananmen Square protests and their continued harassment of ethnic and religious minorities.  In common with other totalitarian philosophies, they seem to think that if anybody, anywhere in China harbors thoughts or actions that fundamentally contradict the basic assumptions of dialectical materialism, the regime is in mortal danger and must suppress such thoughts or actions.   

In a well-informed article in the journal of religion and public life First Things, Thomas F. Farr, who heads an NGO called the Religious Freedom Institute, says that U. S. diplomacy toward China has been largely ineffective in its efforts to mitigate the suffering that religious and ethnic minorities endure there.  Farr recommends reminding Chinese government leaders of their self-interest in promoting a peaceful and prosperous society.  He suggests that we provide Chinese leaders with hard evidence that religious faith can produce individuals who are peaceful, productive, and a net asset to any country which harbors them.  So instead of persecution, arrests, and forced retraining, which are likely to inspire counter-movements and even terrorism, perhaps we can persuade the Chinese government to change its attitude toward minorities like the Uighurs and allow them to practice their faiths and preserve their cultures. 

That would be nice if we could make it happen, but so far it's just a policy idea.  Right now, the Chinese government seems to think more and more surveillance is the answer, and it has invested billions in technology and police hiring, to the point that in some regions of Xinjiang the only stable, reliable job you can get is to work for the police and spy on your neighbors.

In the U. S., such difficulties seem exotic and far away, and it's easy to forget that the same technology Beijing is using to control its Uighur population is available here.  I suppose it's better for megabytes of personal information about us to be in the hands of private companies, which have a good reason to behave themselves if they want to stay in business, than in the hands of a government desperate to remain in control under any circumstances.  But the information is out there just the same, and the sad plight of the Uighurs in China reminds us that except for the traditions of freedom in the U. S., we might be in the same boat.

Sources:  Isobel Cockerell's article "Inside China's Massive Surveillance Operation" appeared on May 9, 2019 at https://www.wired.com/story/inside-chinas-massive-surveillance-operation/.  Thomas F. Farr's "Diplomacy and Persecution in China" appeared in the May 2019 issue of First Things, pp. 29-36.

Monday, May 06, 2019

Faked Test Result Cost $700 Million, Says NASA


Last Tuesday, April 30, NASA announced the results of a years-long investigation by its Office of Inspector General (OIG) and its Launch Services Program (LSP).  Back in 2009 and 2011, two climate-change-observing satellites failed to reach orbit and were lost at a total cost of $700 million.  In both cases, the payload fairing—the dome protecting the payload during launch—failed to separate on command; burdened with the extra weight, the launch vehicles never reached orbital velocity, and the satellites were destroyed.  After a long investigation in which the U. S. Department of Justice was involved, NASA's OIG found that a supplier of the aluminum extrusions used to hold the fairing together had been faking materials-testing results on the extrusions not once, not twice, but literally thousands of times over a period of nineteen years.

At least, that is what the revised launch-failure report by NASA says.  Understandably, the company involved disputes some of these findings.  At the time the extrusions were supplied, it was known as Sapa Profiles Inc. (SPI), of Portland, Oregon, although it is now part of a multinational corporation called Hydro.  The report makes for chilling reading. 
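For readers wondering why a stuck fairing is fatal rather than merely inconvenient, here is a back-of-the-envelope sketch using the ideal (Tsiolkovsky) rocket equation.  All the numbers are invented round figures for illustration, not the actual parameters of the failed launch vehicles:

    # Why a fairing that won't drop off can doom a launch: a Tsiolkovsky
    # rocket-equation sketch with invented numbers, not the real vehicle's.
    import math

    def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
        """Ideal velocity gain for a stage: dv = Isp * g0 * ln(m0 / mf)."""
        G0 = 9.81  # standard gravity, m/s^2
        return isp_s * G0 * math.log(m0_kg / mf_kg)

    ISP = 290.0          # assumed specific impulse of the stage, seconds
    M_START = 10_000.0   # assumed stage mass with propellant, kg
    M_DRY = 2_000.0      # assumed burnout mass if the fairing separates, kg
    M_FAIRING = 400.0    # assumed fairing mass that should have been dropped, kg

    print(f"fairing separates: {delta_v(ISP, M_START, M_DRY):6.0f} m/s")
    print(f"fairing stuck on:  {delta_v(ISP, M_START, M_DRY + M_FAIRING):6.0f} m/s")

A few hundred kilograms of dead weight carried all the way to burnout shaves off hundreds of meters per second, and a vehicle sized with thin margins simply never reaches orbital velocity.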

To allow the fairing to separate into its clamshell halves at the right moment during the launch phase of a flight, explosive charges are set off to sever the aluminum extrusions that hold the halves of the fairing together.  But the aluminum has to have the right properties to break cleanly, it appears, and so NASA required supplier SPI to perform certain materials tests on its extrusions, probably things like tensile-strength measurements and the like.  Given the right equipment, these are straightforward tests, and even entry-level engineers and engineering students like the ones I teach know that faking test results is one of the worst, but at the same time one of the more common, engineering-ethics lapses. 
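To give a sense of how simple the certification logic is, here is a sketch of the sort of pass/fail check involved, in Python.  The specification limits below are hypothetical placeholders, not NASA's actual requirements:

    # Sketch of the kind of pass/fail check a materials certification involves.
    # The spec limits are hypothetical placeholders, not NASA's real numbers.

    MIN_TENSILE_STRENGTH_MPA = 290.0  # assumed minimum ultimate tensile strength
    MIN_YIELD_STRENGTH_MPA = 240.0    # assumed minimum yield strength

    def extrusion_passes(tensile_mpa: float, yield_mpa: float) -> bool:
        """True only if the measured values meet the (assumed) spec minimums."""
        return (tensile_mpa >= MIN_TENSILE_STRENGTH_MPA
                and yield_mpa >= MIN_YIELD_STRENGTH_MPA)

    # Honest reporting is just recording what the test machine measured:
    measured = {"tensile_mpa": 272.5, "yield_mpa": 231.0}  # hypothetical failing lot
    print("PASS" if extrusion_passes(**measured) else "FAIL")

Falsification means nothing more sophisticated than overwriting numbers like these before they reach the customer.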

According to the NASA report, such fakery became routine at SPI, so much so that a lab supervisor got in the habit of training newcomers how to fake test results.  NASA found handwritten documents showing how the faking was done.  And this was no now-and-then thing.  For whatever reason, the extrusions failed tests a lot, and so lots of faking went on, not only for NASA's extrusions but for products bound for hundreds of other customers.  But not all of those customers had the investigative resources, or the motivation supplied by $700 million worth of launch failures, to track down what was happening.

The investigation took years to complete, and once SPI was confronted with its results, the company agreed to pay $46 million in restitution to the U. S. government and other customers as a part of a settlement of criminal and civil charges.  That's a lot of money, but clearly a drop in the bucket compared to what the firm's malfeasance cost NASA and all the people who put years of work and ingenuity into the launches of the satellites which were doomed by the faulty extrusions. 

Seldom does a clear-cut violation of engineering-ethics principles have such an equally clear-cut result that makes it into the public eye.  Fortunately, none of the flights affected by the faulty extrusions were manned, but losing $700 million of hardware is bad enough.  What we don't know is how SPI's other customers were adversely affected by the falsified tests but lacked the resources to trace the problem back to its true source.  It's entirely possible that this new information will inspire other SPI customers to look back into mysterious failures of their products to see whether faulty SPI extrusions may be at fault there as well.  At any rate, NASA has suspended the firm from selling anything to the U. S. government and is considering proposing a permanent bar.

I can only speculate about what went through the minds of the engineers who were asked to falsify the test results.  Clearly, a culture of falsification had to be in place for the problem to go on as long as it did.  And numerous psychology experiments have shown that we are much more creatures of our environment than we think we are.  Stanford psychology professor Philip Zimbardo showed that in 1971, when he set up a mock prison with college-student volunteers who were randomly assigned to be either prisoners or guards.  Within six days, the guards were treating the prisoners so badly that Zimbardo terminated the experiment early for fear that someone could get injured or killed. 

If ordinary law-abiding students can conform to an environment they know is not real, but which nevertheless demands that they act in a way contrary to their everyday behavior, it is no great surprise that engineering graduates newly hired into a company where systematic corrupt practices are in place find it all too easy to conform to their supervisor's expectations and fall in with the practice of test-result fakery.  As an educator, I don't know what we can do other than to repeat that faking test results is never, under any circumstances, a good thing to do.  Adhering to this advice requires that the listener believe in at least one moral absolute, and that itself can prove to be a challenge these days.

So sometimes, telling stories is more effective than just reciting rules.  The SPI extrusion episode will probably make it into the engineering-ethics textbooks, as it should.  Maybe telling the story of how fake test results led directly to the loss of satellites will make an impression that sticks in the minds of students.  At any rate, this debacle deserves to be more widely known, as it serves as an object lesson for anyone who is responsible for testing hardware, or software, for that matter.

Sources:  I referred to a report on the NASA investigation at the website Space.com carried at https://www.space.com/nasa-determines-cause-satellite-launch-failures.html.  The revised NASA failure report is available at https://www.nasa.gov/sites/default/files/atoms/files/oco_glory_public_summary_update_-_for_the_web_-_04302019.pdf.  I also referred to the Wikipedia article "Stanford prison experiment."