Monday, May 20, 2019

How Not To Do It: Elizabeth Holmes and Theranos

As the engineering sage Henry Petroski likes to say, we often learn more from failures than from successes, at least when it comes to ethical behavior.  And we now have a book-length record of one of the most spectacular failures in recent business history:  Theranos, a medical-equipment company founded by Elizabeth Holmes when she dropped out of Stanford at the tender age of 20.  Although the ethical misdeeds of Holmes and her close associates are many and various enough to fill a book—investigative reporter John Carreyrou's excellent Bad Blood—I would like to focus on just one aspect of wrongdoing described therein:  the falsification of test data sent to regulators.

Holmes, who presently faces a federal fraud trial in connection with her time as CEO of Theranos, began her venture with a remarkable vision:  to make all medical blood tests as simple and easy as the diabetic's pin-prick test for blood-glucose monitoring.  She based this vision on nothing more than an internship she spent at Stanford in a research lab one summer.  But she brought to the table considerable family connections (she was able to raise over six million dollars for her new company in less than two years) and a personal magnetism that convinced older and wiser persons (men, mostly) to give her whatever she wanted.  And if it had been possible to achieve her goal with enough money and talented people in the time she had, she would have achieved it.

The history of technology teaches us that there are optimum times for certain technical advances, and the key to profiting from a new technology is to try it at the right time.  Even Steve Jobs, whom Holmes idolized and emulated whenever possible, could not have invented the Macintosh in 1953—the technology simply wasn't there yet.  So in retrospect, Holmes's idea of a tiny credit-card-like machine that would do everything a whole clinical blood lab currently does, with a small fraction of the blood volume now required, was simply too far ahead of its time.

The farthest her company got technically was to build some kludgy prototypes that were basically robot versions of what a human lab technician does.  When they worked, which wasn't often, their results were unreliable and error-ridden compared to those of conventional blood-testing machines.  The only way Theranos employees could get results approaching the accuracy of standard commercially available lab testing devices already on the market was to run six of their robot machines at once on the same sample, and average the results.  Needless to say, this was not a practical solution, but as Holmes had already gone out and negotiated contracts with big retailers such as Safeway and Walgreens, and blood samples from real people were coming in to be tested, the engineers had to do something, and this was their temporary makeshift solution.
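There is real statistics behind the six-machine workaround: averaging independent noisy measurements shrinks the standard error of the mean by a factor of the square root of the number of measurements, which is presumably why six Edisons at once came closer to acceptable accuracy than one.  A minimal Python sketch with invented numbers (the true value and noise level are illustrative assumptions, not Theranos data) shows the effect:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # hypothetical true analyte concentration
NOISE_SD = 12.0      # hypothetical per-device measurement error (std. dev.)

def one_device():
    """Simulate a single noisy reading from one machine."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged(n):
    """Average the readings of n machines run on the same sample."""
    return statistics.mean(one_device() for _ in range(n))

# Compare the spread of single readings vs. six-machine averages.
singles = [one_device() for _ in range(10_000)]
sixes = [averaged(6) for _ in range(10_000)]

print(statistics.stdev(singles))  # roughly NOISE_SD
print(statistics.stdev(sixes))    # roughly NOISE_SD / sqrt(6), about 2.4x smaller
```

The catch, of course, is that a 2.4-fold improvement in precision bought at the cost of six machines per sample is no way to run a commercial lab.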

Then the question of federal certification came up.  Labs in the U. S. that are not purely research-oriented—that is, blood testing establishments that test blood for the general public—are required to meet certain standards enforced by the U. S. government under the Clinical Laboratory Improvement Amendments (CLIA), the laws governing such labs.  At Holmes's insistence, Theranos fudged on meeting these standards as long as it could.  By the time Theranos hired a young engineer named Tyler Shultz, the firm was following a home-brew procedure to certify its lab tests that looked funny to him.

The CLIA required periodic "proficiency testing" of samples to ensure that the lab was getting accurate results.  The rule was that these samples should be tested "in the same manner" as actual patient specimens "using the laboratory's routine methods."  But at Theranos, there was nothing routine about the way they were handling tests.

In order not to look completely foolish, the company had bought a number of on-the-market blood test machines that it was using along with its own units, called (regrettably for Edison) "Edisons."  The Edisons could handle some types of tests with the dodgy averaging method, but others were sent to the outside machines and the results passed off as being done by Theranos devices.  When Tyler tried to run the proficiency tests on the Edisons and they failed, he got in hot water with management, who ordered him to report only the good results from the non-Edison machines.  After Tyler's protests proved fruitless, he contacted an outside state regulatory body to see whether his suspicions about Theranos's way of doing things were well-founded.

It was, and Tyler decided to go to the most powerful person he knew who was associated with Theranos:  his grandfather.  This was not just any grandfather.  It was George Shultz, former U. S. Secretary of State, now in his 90s, but very much interested in Theranos as a member of the board.

Without passing judgment on the quality of relationships in the Shultz family, I can still say that Tyler's complaints to his grandfather fell on deaf ears.  Not only that, but his grandfather told on him to Theranos management, who made it so hot for Tyler that he decided to resign, despite threats that if he quit and went public with his information, "he would lose."

Tyler turned out to be a critical source for John Carreyrou, a Wall Street Journal investigative reporter who also caught legal flak for trying to report the truth about Theranos.  But Carreyrou persisted in contacting current and former employees of the firm who gave him enough information to write a series of blockbuster articles that turned the climate of public opinion against Theranos, and led eventually to the demise of the company (it went out of existence last fall) and the indictment of Holmes and her second-in-command for fraud.

As whistleblower stories go, Tyler Shultz got off fairly easily.  He helped bring down an organization which richly deserved its fate.  But far more commonly, whistleblowers' allegations are challenged vigorously and often successfully, even if they are true.  And whatever happens ultimately to a whistleblower's career, he or she is guaranteed to undergo considerable emotional pain as the accused organization tries to defend itself, almost like an immune response to a therapy that will ultimately do a patient good, but which causes discomfort initially.

The story of Holmes and Theranos is of a noble cause gone bad, and continues to play out in the courtroom.  But we know enough already to use the story as a bad example of how startups should not handle problems with regulators.  The lesson for young engineers is, if you smell a rat, don't just hold your nose and keep on going.  Find out if it's really a rat, and if it is, deal with it, even if you get your hands dirty to do so.

Sources:  John Carreyrou's Bad Blood:  Secrets and Lies in a Silicon Valley Startup was published in 2018 by A. A. Knopf.   

Monday, May 13, 2019

China's High-Tech Persecution of Uighurs

In 1949, the newly formed communist government of China seized control of the northwest corner of the country, the region now known as Xinjiang.  The region was home to a number of different ethnic groups, the largest of which is the Uighurs (also spelled Uyghurs).  The Uighurs are a Turkic people with their own language and culture, and the majority of them are Muslims.  None of this sits well with the Chinese government, which began systematic attempts to force Uighurs to conform to the language and ways of the Chinese-majority Han ethnicity, and the struggle continues to this day. 

Chilling details of how Beijing is using the latest high-tech surveillance methods to persecute Uighurs were reported in a recent article that appeared on the website of Wired.  In 2011, a new social-media app called WeChat took China by storm, and Uighurs seized upon it as a great new way to communicate among themselves, discussing everything from personal matters to politics and religion.  But in 2014, the Chinese government forced WeChat's owners to let them snoop on all WeChat messages, and soon after that, bad things started to happen to Uighurs who discussed sensitive issues such as Islam or Uighur separatist movements on WeChat.  In 2016, Uighur families who used WeChat incautiously were being checked on by police officers, sometimes daily.  For one family, it got so bad that they decided to emigrate to Turkey, and the father sent his wife and children ahead while he stayed behind to wait for his passport.  It never arrived, and instead he was arrested.

This is just one of thousands of cases in which Chinese officials have persecuted Uighurs.  Besides monitoring social media, the police are requiring Uighurs to provide voice and facial-recognition samples and even taking compulsory DNA samples.  Families are afraid to turn their lights on at home before dawn for fear the police will figure that they're praying, and haul them off to a re-education camp.  Yes, China is operating what amounts to massive prison camps for Uighurs, which the government claims are vocational training centers.  A diplomatic spokesman for Turkey contradicts these claims, saying that up to a million Uighurs have been detained in such camps against their will and subjected to torture and "political brainwashing."

Such actions are familiar to anyone with knowledge of China's Great Cultural Revolution, which lasted from 1966 until Mao Zedong's death in 1976.  This nationwide convulsion paralyzed the country, led to millions of deaths, and subjected millions more to internal exile and forced self-confessions.  While such things are a fading memory to most citizens of China today, the surveillance state is not, and all it takes for someone to fall back into those bad old days is to manifest religious faith in actions such as gathering for worship or praying openly.  And political organizing against the will of the ruling government will land you in hot water too.

China's reach even extends beyond its borders to blight the lives of Uighurs who escape to Turkey and elsewhere.  For Uighurs remaining in China, even contacting an exiled Uighur relative or friend by phone can result in police investigations and arrest.  So leaving China usually means losing all contact with family and friends who remain behind, except for the rare hand-carried letter that can be smuggled back into the country by a friendly courier. 

The Chinese government seems to be motivated by fear rather than trust.  One reason for this may be that China's long history is one of periods of peace enforced by central control, interrupted by brief spasms of popular revolt that depose the old guard and install a new one in its place.  The leaders of China seem above all determined not to let that happen to them, which explains their brutal response to the 1989 Tiananmen Square protests and their continued harassment of ethnic and religious minorities.  In common with other totalitarian philosophies, they seem to think that if anybody, anywhere in China harbors thoughts or actions that fundamentally contradict the basic assumptions of dialectical materialism, the regime is in mortal danger and must suppress such thoughts or actions.

In a well-informed article in the journal of religion and public life First Things, Thomas F. Farr, who heads an NGO called the Religious Freedom Institute, says that U. S. diplomacy toward China has been largely ineffective in its efforts to mitigate the suffering that religious and ethnic minorities endure there.  Farr recommends reminding Chinese government leaders of their self-interest in promoting a peaceful and prosperous society.  He suggests that we provide Chinese leaders with hard evidence that religious faith can produce individuals who are peaceful, productive, and a net asset to any country which harbors them.  So instead of persecution, arrests, and forced retraining, which are likely to inspire counter-movements and even terrorism, perhaps we can persuade the Chinese government to change its attitude toward minorities like the Uighurs and allow them to practice their faiths and preserve their cultures. 

That would be nice if we could make it happen, but so far it's just a policy idea.  Right now, the Chinese government seems to think more and more surveillance is the answer, and has invested billions in technology and hiring of police to the point that in some regions of Xinjiang, the only stable, reliable job you can get is to work for the police and spy on your neighbors.

In the U. S., such difficulties seem exotic and far away, and it's easy to forget that the same technology Beijing is using to control its Uighur population is available here.  I suppose it's better for megabytes of personal information about us to be in the control of private companies who have a good reason to behave themselves if they want to stay in business, rather than a government desperate to remain in control under any circumstances.  But the information is out there just the same, and the sad plight of the Uighurs in China reminds us that except for the traditions of freedom in the U. S., we might be in the same boat.

Sources:  Isobel Cockerell's article "Inside China's Massive Surveillance Operation" appeared on May 9, 2019 at  Thomas F. Farr's "Diplomacy and Persecution in China" appeared in the May 2019 issue of First Things, pp. 29-36.

Monday, May 06, 2019

Faked Test Result Cost $700 Million, Says NASA

Last Tuesday, April 30, NASA announced the results of a years-long investigation by its Office of the Inspector General (OIG) and its Launch Services Program (LSP).  Back in 2009 and 2011, two climate-change-observing satellites failed to reach orbit and were lost at a total cost of $700 million.  In both cases, the payload fairing—the dome protecting the payload during launch—failed to separate on command, throwing the flight dynamics out of whack and ultimately crashing the satellites before they reached orbit.  After a long investigation in which the U. S. Department of Justice was involved, NASA's OIG found that a supplier of aluminum extrusions used to hold the fairing together had been faking materials-testing results on the extrusions not once, not twice, but literally thousands of times over a period of nineteen years.

At least, that is what the revised launch-failure report by NASA says.  Understandably, the company involved disputes some of these findings.  At the time the extrusions were supplied, it was known as Sapa Profiles Inc. (SPI), of Portland, Oregon, although it is now part of a multinational corporation called Hydro.  The report makes for chilling reading. 

To allow the fairing to separate into its clamshell halves at the right moment during the launch phase of a flight, explosive charges are set off to sever the aluminum extrusions that hold the halves of the fairing together.  But the aluminum has to have the right properties to break cleanly, it appears, and so NASA required supplier SPI to do certain materials tests on their extrusions, probably things like tensile strength and so on.  Given the right equipment, these are straightforward tests, and even entry-level engineers and engineering students such as those I teach know that faking test results is one of the worst, but at the same time one of the more common, engineering-ethics lapses. 
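The NASA report doesn't spell out exactly which properties SPI was supposed to certify, but the logic of a materials-acceptance check is conceptually simple: measure each required property of a test specimen, compare it to the spec minimum, and record pass or fail.  A hypothetical Python sketch (the property names and spec limits below are invented for illustration, not taken from any NASA or SPI document):

```python
# Hypothetical acceptance check for one aluminum-extrusion test specimen.
# Spec limits are illustrative inventions, not real aerospace requirements.
SPEC_MINIMUMS = {
    "tensile_strength_mpa": 290.0,  # minimum ultimate tensile strength
    "yield_strength_mpa": 240.0,    # minimum yield strength
    "elongation_pct": 8.0,          # minimum elongation at break
}

def certify(measurements):
    """Return (passed, list_of_failed_properties) for one specimen."""
    failures = [prop for prop, minimum in SPEC_MINIMUMS.items()
                if measurements.get(prop, 0.0) < minimum]
    return (not failures, failures)

# A specimen that falls short on elongation fails honestly:
passed, failed = certify({"tensile_strength_mpa": 310.0,
                          "yield_strength_mpa": 255.0,
                          "elongation_pct": 6.5})
print(passed, failed)  # False ['elongation_pct']
```

The simplicity is the point: when a check is this mechanical, a failing result leaves no honest wiggle room, which is exactly why altering the measured numbers before they reach this step is outright fraud rather than engineering judgment.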

According to the NASA report, such fakery became routine at SPI, so much so that a lab supervisor got in the habit of training newcomers how to fake test results.  NASA found handwritten documents showing how the faking was done.  And this was no now-and-then thing.  For whatever reason, the extrusions failed tests a lot, and so lots of faking went on, not only for NASA's extrusions but for products bound for hundreds of other customers.  But not all of them had the investigative resources, or the motivation supplied by $700 million in launch failures, to check out what was happening.

The investigation took years to complete, and once SPI was confronted with its results, the company agreed to pay $46 million in restitution to the U. S. government and other customers as a part of a settlement of criminal and civil charges.  That's a lot of money, but clearly a drop in the bucket compared to what the firm's malfeasance cost NASA and all the people who put years of work and ingenuity into the launches of the satellites which were doomed by the faulty extrusions. 

Seldom does a clear-cut violation of engineering ethics principles have such an equally clear-cut result that makes it into the public eye.  Fortunately, none of the flights affected by the faulty extrusions were manned, but losing $700 million of hardware is bad enough.  What we don't know is how SPI's other customers were adversely affected by the falsified tests, but didn't have the resources to trace the problem back to its true source.  It's entirely possible that this new information will inspire other SPI customers to look back into mysterious failures of their products to see if faulty SPI extrusions may be at fault there as well.  At any rate, NASA has suspended the firm from selling anything to the U. S. government and is thinking about proposing a perpetual bar.

I can only speculate what went through the minds of the engineers who were asked to falsify the test results.  Clearly, a culture of falsification had to be in place for the problem to go on as long as it did.  And numerous psychology experiments have shown that we are much more creatures of our environment than we think we are.  Stanford professor of psychology Philip Zimbardo showed that in 1971 when he set up a mock prison with college-student volunteers who were randomly assigned to be either prisoners or guards.  Within six days, the guards were treating the prisoners so badly that Zimbardo prematurely terminated the experiment for fear that someone could get injured or killed. 

If ordinary law-abiding students can conform to an environment they know is not real, but nevertheless demands that they act a certain way that is contrary to their everyday behavior, it is no great surprise that engineering students newly hired into a company where systematic corrupt practices are in place find it all too easy to conform to the expectations of their supervisor and fall in with the practice of test-result fakery.  As an educator, I don't know what we can do other than to repeat that faking test results is never, under any circumstances, a good thing to do.  Adhering to this advice requires that the listener believe in at least one moral absolute, and that itself can prove to be a challenge these days.

So sometimes, telling stories is more effective than just reciting rules.  The SPI extrusion episode will probably make it into the annals of engineering-ethics textbooks, as it should.  Maybe telling the story of how fake test results led directly to the loss of satellites will make an impression that will stick in the minds of students.  At any rate, this debacle deserves to be more widely known, as it serves as an object lesson for anyone who is responsible for testing hardware, or software, for that matter.

Sources:  I referred to a report on the NASA investigation at the website carried at  The revised NASA failure report is available at  I also referred to the Wikipedia article "Stanford prison experiment."

Monday, April 29, 2019

Facebook, Privacy, and Regulation

In what may signal a change in attitude, the U. S. Federal Trade Commission is talking about fining Facebook billions of dollars for breaching a privacy agreement between the company and the FTC.  At issue is how Facebook uses the data it gleans from its users and whether Facebook has asked permission before sharing private data with third parties. 

In a recent AP article, reporter Barbara Ortutay says Facebook has set aside $3 billion in case the FTC fines the firm.  This is somewhat of a drop in the bucket of Facebook profits, which are estimated to be over $20 billion this year.  But still, it's large enough to attract investors' attention, and so the publicly traded company mentioned it in a recent news release. 

Privacy is one of the more nebulous concepts in ethics and law, as opposed to murder, say.  With murder you generally have a dead body and a definite event that produced it.  But privacy is all in the mind, or rather, minds—the mind of the person whose privacy has been violated, and the minds of those who allegedly know something about the victim that the victim doesn't want known.  And figuring out what is in people's minds isn't that easy.

One of the earliest institutional guarantees of privacy is what the Roman Catholic Church calls the "seal of confession."  Catholics are supposed to confess all their mortal sins periodically to a priest in what's called the sacrament of penance or reconciliation.  In turn, the Church promises on behalf of its priests never to reveal what is confessed.  A priest who breaks the seal of confession is subject to immediate defrocking, and so some priests have become martyrs rather than reveal secrets they learned in the confessional to government agencies, for example. 

There are many differences between confessing one's sins to a priest and posting your latest trip to a bar on Facebook, but structurally the situations are similar.  In each case, there is a person who is providing information that they would like to keep private:  the penitent in the confessional, or the person posting something on Facebook.  There is also the desired audience which the person wants to reach:  the priest (and presumably God) in the confessional, the intended circle of chosen friends in the case of Facebook.  There is the institution whose job it is to ensure that privacy is maintained:  the Church in the one case and Facebook in the other.  And finally, there's everybody else—the rest of the world which is supposed to remain wholly ignorant of what is going on in the private interchanges between priest and penitent in the one case, or Facebook and the user in the other case.

There have been isolated cases in which the seal of confession has been broken, but they have been rare, probably owing to the drastic penalty the Church exacts on a priest who breaks the seal.  In the case of Facebook, things are much different.  For one thing, individual users have no sure way of knowing if Facebook shares their private information with advertisers.  So it's reasonable that another institution with enough resources to investigate such large-scale questions systematically should get involved, in this case the FTC.  Instead of the seal of confession, we have a 2011 agreement reached between the FTC and Facebook which bound the company for twenty years to ask for "affirmative express consent" before Facebook shares any data the user hasn't made public with a third party.

Here's where things get tricky.  Anyone who deals with computers knows that whenever you sign up with a new service or install new software, you get asked to consent to something that most people blow by without reading.  If you try reading the terms and conditions, as they're called, you will either waste hours on it or have to hire a lawyer to figure out what you're really committing to.  This digital equivalent of fine print on a written contract is where companies like Facebook sometimes try to bury things you may not like if you knew about them.  But the case the FTC has against Facebook may amount to something like this:  in clause 4 of paragraph 3.7A, you actually agreed to let Facebook share what you thought was private data with anybody who will pay for it. 

The question of whether clicking a button that says you read and understood the terms and conditions without really doing that is "affirmative express consent" has two answers.  The technical answer is, yes, it does, and if you didn't read and understand all that legal boilerplate it's your own fault.  The practical and man-on-the-street answer is, no it doesn't, because nobody but a corporate lawyer getting $300 an hour for the job is going to read and understand all that stuff in the sense that is intended, and making ordinary non-lawyers press the button is simply a CYA (cover-your-afterparts) action on the part of the company.  And the FTC may be saying that Facebook hasn't been covering well enough.

I do not personally use Facebook, although my wife does and lets me know if she finds anything important on it that she thinks I ought to know.  As I said to begin with, privacy is a fuzzy concept which in the digital age we live in has come to mean different things to different people.  Younger people especially seem not to mind sharing things on public sites that forty or fifty years ago would have been confined to the privacy of one's diary kept under lock and key.  I suppose the best we can do is to make clear what users expect in the way of privacy, in terms that users themselves can understand, and then use government regulation if necessary to keep organizations like Facebook from abusing the trust that their users place in them.  And if it takes billion-dollar fines to get a company's attention, then I say go to it. 

Sources:  The AP article by Barbara Ortutay about the potential Facebook fine was carried by numerous news outlets, including the print edition of the Austin American-Statesman on Apr. 26, 2019, where I saw it.  One online location that is not protected by a pay-to-get-behind-it firewall (an increasingly common practice these days) where the article can be viewed is the website of West Virginia's Bluefield Daily Telegraph at 

Monday, April 22, 2019

The Stork Goes Digital

Pregnancy and childbirth, as well as the activities leading up to these, are among the most private matters women are concerned with.  They can also be some of the most expensive medical conditions that otherwise healthy young women encounter.  It comes as no surprise, then, that a company called Ovia has developed a system that lets women employees track their fertility and resulting pregnancies digitally. 

Originally, Ovia planned to promote their product directly to the consumers—young women—but as described in a recent Washington Post article, the company began to get inquiries from employers wanting to know how they could encourage their women workers to sign up.  Why?  The hope that Ovia would reduce medical-insurance costs for expensive infertility treatments and problem pregnancies.  Ovia showed employers that women who use the app can indeed benefit from the improved monitoring and awareness of warning signs that it provides.  Presently, Ovia's website clearly prioritizes this mode of delivery, and the typical user can even receive small payments from her employer to encourage her to keep checking in and providing data.  But what happens to that data?

Ovia stresses that all identifying information is stripped from the data before it is passed to the employer, who can then use it to anticipate health-care costs and glean a detailed picture of the most intimate aspects of their women employees' lives.  The contract Ovia signs with employers includes a promise that the employer will not "de-anonymize" the data to figure out exactly who is pregnant, for example, but there have been bad actors in the business world before, and it's not hard to imagine someone doing this for illegitimate reasons. 

Ovia is one of the more prominent apps of a variety that track various aspects of user health—Fitbit being the most well known.  Using such an app simply for one's own benefit is one thing.  But signing up to share intimate details with an employer-sponsored app is a different matter.  According to statistics provided by Ovia, the benefits are real—they say that users have a 30 percent reduction in premature births, and  a 30 percent increase in natural conception, as opposed to costly and absence-inducing infertility treatments. 

In a country where the burden of health insurance typically lies with the employer, one can't fault employers for doing whatever they can to minimize this cost, and gynecological-related procedures are among the most expensive ones that young women predictably encounter.  So the synergistic cooperation among Ovia, employers, and their women employees looks mostly like a win-win situation.

All the same, the article interviewed experts who expressed privacy concerns.  As long as there are no data breaches, these concerns may be largely imaginary.  But one aspect of engineering ethics is trying to imagine what could go wrong before it happens, and here's one way an app like this could be misused.

In a free country where agreements are freely arrived at between employees and employers, voluntarily sharing information is one thing.  But in countries with less freedom, such as the People's Republic of China, the government is systematically worming its way into increasingly private aspects of its citizens' lives.  I'm sure Ovia would like to have a market of the 1.3 billion or so people in that country.  But what if women there, instead of being offered the option to use the app, were forced to use it as a requirement of employment?  And what if turning up pregnant without first informing Ovia could be a cause for fines or imprisonment? 

It sounds awful, but such regimentation is becoming just another part of life in China, where all kinds of digital information on citizens is being used to come up with a "social credit" score.  It's sort of like a financial credit rating, but measures your reliability as far as the government is concerned.  In the dystopian novel 1984, the omnipresent Big Brother monitored everyone's actions through telescreens, which were sometimes laughed off by readers at the time because it would have taken half the population to sit behind the monitors to watch the other half.  But now in the age of AI pattern recognition, the watchers are 99% digital, and what was formerly thought impossible because of the absurd manpower demands has become quite feasible for a government that sets no bounds for snooping on its own citizens.

So far, Ovia seems to be simply another employee benefit that really does make things better for both users and the companies they work for.  Working women who get pregnant always encounter more or less conflicted situations, and anything that reduces the conflict, making employers less bothered about their women employees who have children, seems to be a good thing.  Still, it's another step into the digital future, which young persons especially seem to be embracing with little or no regret.  Things that once people blushed to tell even their doctors are now fodder for online posting, and as long as the privacy Ovia and similar apps promise is not breached, I suppose this is a good thing if it leads to healthier mothers and babies. 

Still, one wonders where the sharing of formerly private and personal data will stop, if ever.  Freedom, as an abstraction, can get overlooked in the rush to convenience that so many digital advances offer.  And so far, it looks like Ovia really has kept their promises that users' privacy will not be compromised.  But in the hands of a malevolent employer, or worse, a malevolent government, these kinds of personal-health apps could lead to serious incidents of abuse.  Let's hope that we can keep the benefits of Ovia and related apps while fending off any attempts to use it for nefarious purposes.

Sources:  The Washington Post article "Is Your Pregnancy App Sharing Your Intimate Data With Your Boss?" appeared on Apr. 10, 2019 at  I also referred to Ovia's website at 

Sunday, April 21, 2019

The FCC and 5G

When I attended Cornell University in 1976 and 1977 for my master's degree, I took a microwave lab course.  In the lab room where we worked was a large glass desiccator jar, sort of like a clear cookie jar with blue desiccator crystals in the bottom to keep the contents dry.  Inside the main area of the jar were tiny rectangular copper pipes with little connectors on the ends. The pipes were about a quarter of an inch wide or less, some as small as soda straws, and a few inches long.  When I asked one of the professors what this was, he explained that the pipes were millimeter-wave waveguides.  Certain frequencies of millimeter waves were highly absorbed by water, so they had decided to keep the waveguides in a desiccator jar to make sure that they didn't have any absorbed film of water in them that would mess up the measurements they might make with them. 

Back then, millimeter-wave equipment was nothing more than a laboratory curiosity.  In terms of frequencies, millimeter waves range from 30 GHz up to 300 GHz.  Their name comes from the fact that their wavelength in air, the distance from one peak to the next, is between 1 and 10 millimeters.  Back in the 1970s, they were extremely hard to generate and detect, and nobody but a few scientists had anything to do with them.  The only large corporation that had pursued serious research on millimeter waves was Bell Laboratories, which thought for a while that the future of its network would involve millimeter-wave waveguides crisscrossing the country.  But when Corning and other companies figured out how to make extremely low-loss optical fibers, Bell dropped the millimeter-wave idea and switched to fiber optics, which is how the vast majority of network traffic travels today.
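The frequency-to-wavelength relation is just the speed of light divided by the frequency, and a quick back-of-envelope sketch (plain Python, purely illustrative) confirms that the 30-to-300-GHz range does indeed correspond to wavelengths of roughly 10 down to 1 millimeter:

```python
# Free-space wavelength = speed of light / frequency.
C = 299_792_458  # speed of light in m/s

def wavelength_mm(freq_ghz):
    """Return the free-space wavelength in millimeters for a frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1000

for f in (30, 60, 300):
    print(f"{f} GHz -> {wavelength_mm(f):.2f} mm")
# 30 GHz -> 9.99 mm; 60 GHz -> 5.00 mm; 300 GHz -> 1.00 mm
```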

But you can't attach fiber optics to a moving car, or somebody walking down the street, so as newer applications such as virtual reality and the Internet of Things grow, there is a constantly increasing need for more wireless bandwidth.  And millimeter waves will be a key player in the next generation of wireless network technology called 5G.

Last Friday, Apr. 12, the U. S. Federal Communications Commission (FCC) announced that it plans to auction off close to 5 GHz of bandwidth in millimeter-wave bands that have previously been reserved for other purposes.  These bands are at 37, 39, and 47 GHz.  For many years now, auctions have been the FCC's preferred method of allocating frequencies to private entities, and while such auctions shut out everyone except those well-heeled enough to afford to exploit the frequencies they buy, the process is a lot more transparent and fair than the former practice of simply opening applications to all comers and waiting to see who got there first.  The old process was also often subject to political log-rolling.  For example, the way Lyndon B. Johnson obtained control of station KLBJ in Austin and vastly improved its value in the 1940s does not bear a lot of scrutiny, unless you don't mind finding a lot of political wangling that Johnson engaged in with the FCC. 

While auctions of radio spectrum allocations are not inherently just in themselves, they do acknowledge that the spectrum is a limited natural resource, and an auction allows interested parties to express their perceived value of that resource in bids.  We don't often value what we don't pay for, and so an auction tends to ensure that whoever gets the right to use certain frequencies is going to exploit them so as to get their money's worth. 

Even as recently as a decade ago, an auction of millimeter-wave bands wouldn't have attracted much attention, because the technology to generate and receive such waves was far too expensive for consumer products.  But with advances in fabrication methods, microwave technology, and adaptive control of antennas, it's now feasible to start building the micro-cells that millimeter-wave wireless will need.  Around 60 GHz, millimeter waves are increasingly absorbed by oxygen in the air, and even below that frequency they do not propagate very far compared to the longer microwaves used in earlier wireless systems.  This means we will need many more millimeter-wave base stations than would be required for equivalent coverage at lower frequencies. 
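Part of the coverage penalty shows up in the standard free-space path-loss (Friis) formula even before oxygen absorption enters the picture: loss grows as 20·log10 of frequency, so moving from a 600 MHz band to 39 GHz costs about 36 dB at the same distance with the same antennas.  Here is a small illustrative calculation (the 100-meter distance is an arbitrary choice of mine, not from any FCC document):

```python
import math

C = 299_792_458  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB between isotropic antennas (Friis formula)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Compare a low-band frequency, a familiar Wi-Fi band, and a 5G auction band.
for f_ghz in (0.6, 2.4, 39):
    print(f"{f_ghz:>5} GHz at 100 m: {fspl_db(100, f_ghz * 1e9):.1f} dB")
```

In practice, millimeter-wave systems claw some of this back with highly directional antenna arrays, but the raw numbers suggest why the cells must be small and numerous.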

A millimeter-wave base station won't be a two-hundred-foot tower with antennas several feet long hanging from the top.  It will probably take the form of a box or panel just a few feet square, sitting at or near ground level, typically on a utility pole.  Such stations will show up first in big cities where the density of foot and vehicle traffic justifies the installations, and then less dense areas will be covered.  For sparsely populated areas, the FCC has announced it is thinking about allocating some frequencies as low as 600 MHz, whose waves can cover much wider areas, so suburbs and rural regions won't be totally left out in the cold, wireless-wise.

This all assumes that there's nothing harmful to human health in the increased amount of millimeter-wave radiation that people will be subjected to as 5G deploys.  There is at least one apparently well-qualified person who disputes that assumption.  Martin L. Pall is a retired professor of biological sciences at Washington State University who has published both refereed journal papers and popular talks claiming that Wi-Fi, and in particular millimeter waves, can cause everything from low sperm counts to cancer.  I know enough about electromagnetics to have reason to doubt some of his reasoning as to how this occurs, but interested parties can examine his case in the paper cited below.  If he's right, we ought to go slow on the rollout of 5G, but it looks like instead we'll be performing a massive experiment in which millions of people get exposed—and then we'll see if anything bad happens. 

Sources:  The FCC's news release about its planned 5G auction was issued on Apr. 12.  I read about the plan in an Associated Press article carried by the Austin American-Statesman on Apr. 13, a version of which can be viewed at the AP website.  Dr. M. L. Pall's expression of his concerns regarding the increasing use of Wi-Fi can be read in his paper in Environmental Research vol. 164, pp. 405-416 (July 2018).

Monday, April 08, 2019

Boeing Confirms Software At Fault In Ethiopian Crash

Last Thursday, Apr. 4, Ethiopian Transport Minister Dagmawit Moges released a preliminary report on the crash of an Ethiopian Airlines Boeing 737 Max 8 outside Addis Ababa last month, which killed all 157 people on board.  Cockpit voice recordings and data from the flight recorder make it very clear that, as Boeing CEO Dennis A. Muilenburg admitted regarding both this crash and that of an Indonesian Lion Air flight last fall, "it's apparent that in both flights the Maneuvering Characteristics Augmentation System, known as MCAS, activated in response to erroneous angle of attack information."  Boeing is currently scrambling to fix both that software problem and another, minor one uncovered recently, but as of now, no 737 Max 8s are flying in the U. S. or much of anywhere else.  And the FBI is reportedly investigating how Boeing certified the plane.

When we blogged about the Ethiopian crash three weeks ago, there were significant questions as to whether the MCAS alone was at fault, or whether pilot errors contributed to the crash.  But according to a summary published in the Washington Post, Minister Moges said that the pilots did everything recommended by the manufacturer to disable the MCAS, which was repeatedly attempting to point the plane's nose downward in response to faulty output from a single angle-of-attack sensor.  But their efforts proved futile, and the plane eventually keeled over into a 40-degree dive and crashed into the ground at more than 500 mph. 

Our sympathy is with those who lost relatives and loved ones in both crashes.  Similar words were spoken by CEO Muilenburg, on whose head lies the ultimate responsibility for fixing these problems.  In doing so, he and his underlings will be dealing with how to smoothly integrate control of life-critical systems when both humans and what amounts to artificial intelligence are involved.

This is not a new problem, but it has transformed so much over the years that it seems new. 

I once toured a museum near Lowell, Massachusetts, which preserved a good number of the original pieces of machinery used in one of the many water-powered textile mills that used to dot the landscape in the early 1800s.  Attached to the main water turbine was a large, complicated system of gears, flywheels, springs, levers, and so on which turned out to be the speed regulator for the mill.  As looms were cut in and out of the belt-and-shaft power distribution system, the load would vary, but it was important to keep the speed of the mill's shafts as constant as possible.  The complicated piece of machinery I saw turned out to be a sophisticated control system that kept the wheels turning at the same rate to within a few percent, despite wide variations in load.

I'm sure that from time to time the thing might malfunction, and in that case a human operator would have to intervene, shutting it down if it started to go too fast, for example, or if continued operation endangered someone caught in a belt, say.  So humans have been learning to get along with autonomous machinery for almost two hundred years.
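The mill's regulator was doing, with gears and flyweights, what a feedback loop does in software today: measure the speed error and adjust the turbine gate to cancel it.  Here is a toy simulation of that idea, with all numbers invented for illustration; the real mechanism was of course far more subtle:

```python
# Toy governor: nudge the turbine gate in proportion to the speed error
# each time step, so shaft speed recovers when looms (load) switch in.
SETPOINT = 100.0   # target shaft speed, arbitrary units
GAIN = 0.005       # how hard the governor corrects each step

def simulate(steps=1000):
    speed, gate, load = SETPOINT, 1.0, 1.0
    for t in range(steps):
        if t == 50:
            load = 2.0                               # extra looms cut in
        # toy plant: torque from the gate, drag from the load and friction
        speed += 5.0 * gate - 2.5 * load - 0.02 * speed
        # governor: open or close the gate in proportion to the error
        gate += GAIN * (SETPOINT - speed)
        yield speed

history = list(simulate())
print(f"dip after load change: {min(history):.1f}, "
      f"final speed: {history[-1]:.2f}")
```

The speed sags when the load doubles, then the accumulating gate correction pulls it back to the set point, which is qualitatively what the museum docents said the mill machinery accomplished.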

The difference now is that in transportation systems (autonomous cars, airplanes), timing is critical.  And because cars and planes travel into novel situations, not all of which can be anticipated by software engineers, conditions can arise which make it hard or impossible for the humans who are ultimately responsible for the safety of the craft to respond.  That increasingly seems to be what happened to Ethiopian Airlines Flight 302, as evidenced by black-box data clearly showing that a single angle-of-attack sensor was transmitting flawed data. 
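To see why a single bad sensor matters so much, consider the kind of cross-check safety engineers commonly use.  The sketch below is entirely hypothetical: it does not reflect Boeing's actual flight software, and the function name and thresholds are my inventions.  The 737 Max 8 carries two angle-of-attack sensors, so an automated system could in principle refuse to act when they disagree:

```python
# Hypothetical illustration only: compare two angle-of-attack (AoA)
# readings before allowing an automated nose-down command.
DISAGREE_LIMIT_DEG = 5.5  # assumed disagreement threshold, for illustration

def trim_command(aoa_left, aoa_right, stall_threshold=14.0):
    """Return a nose-down command only if both sensors agree one is needed."""
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
        return None  # sensors disagree: disengage automation, alert the crew
    aoa = (aoa_left + aoa_right) / 2
    return "nose_down" if aoa > stall_threshold else "hold"

# A single sensor reporting an impossible angle no longer triggers a dive:
print(trim_command(74.5, 15.3))   # disagreement -> None (disengage)
print(trim_command(12.0, 12.4))   # normal flight -> "hold"
```

Real avionics logic must also handle the case where both sensors fail, degrade gracefully, and keep the crew informed, but even this crude comparison would have flagged one wildly wrong reading rather than acting on it.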

Such issues have happened numerous times with the limited number of autonomous cars that have been fielded in recent years.  We know of at least two fatalities associated with them, and there have probably been many more near-misses or non-fatal accidents as well. 

But even a severe car wreck can kill at most a few people.  Commercial airliners are in a different category altogether.  They are operated by (mostly) seasoned professionals who should be able to trust that if they follow the procedures recommended by the manufacturer (in this case, Boeing), they will be able to deal with almost any imaginable contingency, even something like a stray plastic bag jamming an angle-of-attack sensor (this is my imagination working, but something had to make it give an erroneous reading).  In the case of the Ethiopian crash, that implied promise was broken.  The pilots did what they were told would disable the MCAS, but it didn't, with disastrous results.

It is unusual for a criminal investigation to be aimed at the civilian U. S. aircraft industry, whose safety record has been achieved under mostly cooperative conditions between the Federal Aviation Administration and the firms who make and fly the planes.  Obviously it is too soon to speculate about what, if anything, will turn up from such an investigation.  In teaching my engineering classes, I sometimes ask if anyone has encountered on-the-job situations whose ethics could be questioned.  And I have heard several stories about how inspection or test records were falsified in order to pass along defective products.  So such things do happen, but one hopes that in a firm with a reputation such as Boeing's, incidents like this are rare. 

The marketplace has ways of punishing firms for bad behavior which are not just, perhaps, but nonetheless effective.  With the growth of Airbus, Boeing knows it has a formidable rival in commercial aircraft, and any company with millions of dollars' worth of capital sitting idle on the ground as the 737 Max 8s wait for properly vetted software upgrades is bound to be having second thoughts about going with Boeing the next time it needs some planes.  I would not want to be one of the software engineers or managers dealing with this problem, as the reputation of the company may hinge on the timeliness and effectiveness of the fixes they come up with. 

Boeing has been reasonably transparent about this problem so far, and I hope they continue to be up-front and frank with customers, regulators, investigators, and the public about the progress they make toward fixing these software issues.  People have been learning to get along with smart machines for centuries now, and I am confident that engineers can overcome this issue as well.  But it will take a lot of work and continued vigilance to keep something like it from happening in the future.

Sources:  The Washington Post carried the story "Additional software problem detected in Boeing 737 Max flight control system, officials say" on Apr. 4.  I also consulted a Seattle Times article and the preliminary report from the Transport Ministry of Ethiopia, which the Washington Post currently hosts online.