Monday, August 13, 2018

Exporting Enslavement: China’s Illiberal Artificial Intelligence


In 1989, I had the privilege of visiting Tiananmen Square in Beijing only a few months after the famous June Fourth protests that the Chinese government violently suppressed.  Our tour guide showed us black marks on the pavement that were left by fires during the conflict—a memory that has not faded.

Much has changed since then.  China is now a world leader in many areas of science and technology, including artificial intelligence (AI).  But the nature of the Chinese government has not changed, and as Ryan Khurana points out in a recent online article in National Review, its illiberal policies may transform AI into a weapon that similar governments around the world can use to enslave their citizens. 

To avoid confusion, I should define a couple of political terms.  In the sense I intend here, the term liberal refers to what political scientists call “classical liberalism.”  Simply put, a liberal government in this sense encourages the liberty of its citizens.  It acknowledges fundamental rights such as the right to life, the rights to worship freely and express one’s views without fear of government reprisal, and the right to participate meaningfully in political affairs.  The intention of the founders of the United States of America was to form a liberal government in this sense.

By contrast, illiberal governments are top-down operations in which those in charge have essentially unlimited power over the mass of citizens.  Most monarchies were set up this way in theory, and from its founding the People’s Republic of China has behaved in a consistently illiberal way, as it continues to do under President Xi Jinping.  Anything that assists the Chinese government in spying on its citizens, learning about their private as well as public actions, and controlling their behavior so that they conform to a model pleasing to the government is going to get a lot of support.  And AI fits this bill perfectly, which is one reason why China is not only pouring billions into AI R&D, but exporting it to other countries that want to spy on their people too.

Khurana points out that Zimbabwe, an African country well-known for its human-rights abuses, has received advanced Chinese AI technology from a startup company in exchange for letting the firm have access to the country’s facial-recognition database.  So China is helping the government of Zimbabwe to keep tabs on its citizens as well.  Maybe Zimbabwe will come up with something like China’s recently announced Social Credit system, which is a sort of personal reliability rating that eventually every person in China will receive.  Think credit rating, only one based on the government’s electronic dossier of your behavior:  what stores you visit, what friends you have, what meetings you go to, what TV shows you watch, and whether you go to church. 

Khurana says that we are engaged in a kind of arms race reminiscent of the old Cold War conflict between the Soviet Union and its satellites, and what used to be called the Free World.  Back then, the game was to see whether the U. S. or the U. S. S. R. could dangle the most attractive technological baubles in front of this or that country to tempt it toward one side or the other.  It wasn’t only military technology, but weaponry was the trump card. 

Things are different now, and AI is not like a howitzer—you can do lots of things with it, both peaceful and warlike.  Or liberal and illiberal.  But unless a smaller country has already developed a capable AI technological base of its own, it is likely to want only turn-key systems already designed to do particular things.  And companies in China who have learned how to help the government use AI to monitor and control people will naturally find it easiest to help other governments do the same illiberal thing.

Khurana says the U. S. side is currently losing this battle.  The federal government and military have been slow to get up to speed on using AI for defensive purposes.  When the Department of Defense tried to engage Google in a cooperative AI project to identify terrorists, the company pulled out, and other attempts to use AI in the military have been stymied because critical pieces of intellectual property turn out to be linked to Russian or Chinese ownership. 

There are two aspects to this problem.  The international aspect is that around the world, Chinese AI is coming with illiberal strings attached, and governments with little interest in protecting their citizens’ freedom are eagerly following China’s lead in using AI to spy on and suppress their peoples.  The domestic aspect is that the U. S. is going perhaps too far in the direction of pretending that we are all one big happy AI family, and that we can get AI technology from anywhere in the world and use it for our own private, liberal, or defensive purposes. 

But the world is not that way.  Back when wars depended mainly on hardware, nations contemplating future conflicts made sure they stockpiled essential materials such as tungsten and vanadium before starting a war.  Now that international conflicts increasingly involve cyberwarfare and AI-powered technology, it is foolish and shortsighted to ignore the fact that China is flooding the globe with its AI products and services, and to pretend we don’t have to worry about it and will always be able to outsmart them.  Physical weapons have a way of being used, and the same is true of AI designed for illiberal purposes.  Let’s hope that freedom doesn’t get trampled underfoot in the rush of many countries to get on the Chinese AI bandwagon.

Sources:  Ryan Khurana’s article “The Rise of Illiberal Artificial Intelligence” appeared on the website of National Review on Aug. 10, 2018 at https://www.nationalreview.com/2018/08/china-artificial-intelligence-race/.

Monday, August 06, 2018

In Professionals We Trust—Or Do We?


In a recent New York Times opinion piece, science journalist Melinda Wenner Moyer bemoans the fact that vaccine researchers are getting paranoid about publishing scientific papers that contain anything negative about vaccines, out of fear that the anti-vaccine movement will weaponize such results.  This problem has important implications for public trust in professionals generally, including engineers.

First, a little background.  Life before vaccines was shorter and riskier.  Smallpox, diphtheria, tetanus, and the lesser but still potentially fatal childhood diseases of measles and mumps killed millions and left survivors scarred for life or otherwise disabled.  That is why the world’s most advanced thinkers, including the New England minister and Princeton president Jonathan Edwards, embraced smallpox inoculation decades before Edward Jenner introduced the safer cowpox-based vaccine in 1796.  Unfortunately, when Edwards was inoculated during an outbreak of the disease in 1758, the procedure led to full-blown smallpox that killed him.

Immunization methods were crude back then, and over the following decades Jenner’s vaccine was refined to the point that in 1980, the World Health Organization declared that smallpox had been eradicated.  But Edwards’ death is a reminder that progress isn’t uniform, and bad news as well as good news has to be shared among professional practitioners if progress in any technology is to be made.

Up to about the year 2000, the attitude of the public in most industrialized nations toward vaccines was almost uniformly positive, and not controversial.  Each new and more effective vaccine, such as the Salk and Sabin vaccines against polio in the 1950s, was hailed as one more example of science’s triumph over disease.  Then in 1998, a gastroenterologist named Andrew Wakefield published the results of a small study based on 12 cases that seemed to indicate a link between autism and the measles-mumps-rubella (MMR) vaccine that was routinely given to millions of small children every year.

Wakefield’s paper was published in the respected medical journal The Lancet, and created a huge controversy.  Parents of autistic children now had something on which to blame the mysterious syndrome, and as time went on, activist groups of parents formed and made Wakefield a hero.  The nascent Internet became a powerful tool in the hands of these groups, as it bypassed the usual peer-review process that scientists must adhere to and enabled isolated parents of autistic children to band together.  The failure of any subsequent scientific studies to confirm Wakefield’s findings didn’t slow down the anti-vaccine movement significantly.

It wasn’t until 2004 that serious questions were raised about Wakefield’s integrity.  It turned out that he was being paid by attorneys who wanted to sue vaccine manufacturers, and after further investigation revealed that Wakefield had fabricated some data, Lancet withdrew the paper and Wakefield had his British medical license revoked.  But the horse had left the barn long before that.  Currently, many well-educated and otherwise rational people refuse to have their children vaccinated for what are generally termed “philosophical reasons.”  As epidemiologists know, there is a threshold for the percent of unvaccinated people in a given population above which the risk of epidemics increases rapidly, and widespread refusal to vaccinate is partly blamed for recent outbreaks such as the 147 cases of measles centered at Disneyland in California in 2015. 
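
For readers curious where that threshold comes from, the standard rule of thumb is that sustained transmission stops once more than 1 - 1/R0 of the population is immune, where R0 is the number of people a single case infects in a fully susceptible population.  Here is a minimal Python sketch; the R0 values are illustrative textbook figures, not numbers from Moyer’s piece:

    def herd_immunity_threshold(r0):
        """Fraction of a population that must be immune to stop sustained spread."""
        return 1.0 - 1.0 / r0

    # Illustrative basic reproduction numbers (assumed, not from the article)
    for disease, r0 in [("measles (R0 ~ 15)", 15.0), ("mumps (R0 ~ 5)", 5.0)]:
        print(f"{disease}: about {herd_immunity_threshold(r0):.0%} immunity needed")

    # measles (R0 ~ 15): about 93% immunity needed
    # mumps (R0 ~ 5): about 80% immunity needed

With a disease as contagious as measles, even a few percent of a community opting out can push coverage below that threshold, which is part of why outbreaks tend to cluster where vaccine refusal is concentrated.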

This story of the anti-vaccination trend is perhaps one of the clearest examples of what is a relatively new thing in Western civilization: widespread distrust of expert authority.  Back when everyone knew someone who had died of smallpox and many survivors bore scars, the promise of being able to immunize yourself and your offspring against such a terrible disease was so attractive that intelligent people such as Jonathan Edwards took the risks of what was by modern standards a very dangerous procedure.

Today, when the chances of anything bad happening from a vaccination are well known and down in the fifth decimal place (a few per 100,000), and the ill effects of not getting vaccinated are also well known and clearly worse than taking the vaccine, why would anybody refuse, especially on behalf of their innocent children?  Clearly, because they believe in something or someone other than the conventional scientific wisdom represented by institutions such as the medical profession, government and private research organizations, and even people as supposedly trustworthy as their own family doctor.

The problem with all this is that some professionals really do know more about a subject than non-professionals, and when experts talk about their own fields, they are generally more worth listening to than some random website you find with Google.  The paranoia among vaccine researchers that Moyer discusses is a sad result of ignoring this basic fact of life. 

It’s like a child who is repeatedly accused falsely of stealing from the cookie jar.  If he’s punished often enough for something he didn’t do, he may go ahead and steal anyway, figuring he’s going to get blamed for it whether or not he’s done it, so he might as well enjoy the ill-gotten gains of stealing, because the negative consequences will be the same.

In embracing bogus and disproved theories of harm from vaccines, anti-vaccine groups appear to be creating the very behavior they suspected was already happening among scientists:  namely, a reluctance to report negative aspects of vaccine use.  Of course, this will cripple any efforts to improve vaccines, because you have to know what went wrong before you can fix it. 

Let’s hope that engineers keep their collective noses clean in this regard.  Few polls of trust in the professions even ask the public about engineers.  I had to dig for a while before I came up with a global poll from 2015 that lumped engineers in with technicians, and that combined group came in on the trust scale about in the middle, just below pilots and just above soldiers.  Firefighters were the most trusted profession, and bankers the least.

Things could be worse, certainly.  In this fishbowl Internet age when anybody who says anything eye-catching, whether true or not, is liable to become world-famous overnight, engineers need to be especially careful in their public pronouncements.  It’s good to let the public know your considered expert opinion about something.  But first, be sure you’re right.  Lying about a matter of expert opinion that’s of vital interest can create harmful effects that go on for decades, as the anti-vaccine movement has shown.

Sources:  Melinda Wenner Moyer’s opinion piece entitled “Anti-Vaccine Activists Have Taken Vaccine Science Hostage” appeared on the New York Times website on Aug. 4, 2018 at https://www.nytimes.com/2018/08/04/opinion/sunday/anti-vaccine-activists-have-taken-vaccine-science-hostage.html.  I referred to the website https://www.historyofvaccines.org/content/articles/do-vaccines-cause-autism for information about the Wakefield controversy, to https://blogs.cdc.gov/publichealthmatters/2015/12/year-in-review-measles-linked-to-disneyland/ about the Disneyland measles outbreak, and to https://www.gfk-verein.org/en/compact/focustopics/worldwide-ranking-trust-professions for the global survey of trust in various professions.  Jonathan Edwards’ death as the result of a smallpox vaccination is well known and reported in numerous sources. 

Monday, July 30, 2018

Printing Guns, Again


Back in May of 2013, I blogged in this space about Cody Wilson, then a law student at the University of Texas at Austin, who had gotten in hot water with the U. S. State Department for posting plans online for using 3-D printers to make guns.  At that time, the Obama administration’s State Department took a dim view of anybody encouraging the production of non-registered plastic guns with no serial numbers.  The uses of such things for terrorism and other purposes were obvious, and while at least 100,000 people downloaded the plans before Wilson was forced to take them down, he said at the time he wasn’t abandoning plans for his company Defense Distributed to make such plans more widely available.

A lot of things have changed since 2013.  Donald Trump is in the White House, 3-D printers have been getting cheaper, better, and more available, but Cody Wilson hasn’t given up his efforts.  And last month they paid off, at least to the extent that the State Department notified him it was going to let him go online with his plans again after July 31.  Other than issuing a brief victory cry on Twitter, Wilson and his company have kept silent about the ruling, but a coalition of gun-control organizations filed suit in Federal court to block the ruling and keep Wilson from going public with his plans again.  On Friday July 27, a Federal judge in Austin denied the coalition’s request, saying they were attempting to “litigate a political dispute in court.” 

Lisa Marie Pane, a crime and justice reporter for the Associated Press, quoted gun-control advocate Nick Suplina, who said “There is a market for these guns and it's not just among enthusiasts and hobbyists. . . There's a real desire and profit mode in the criminal underworld as well.”  But a spokesman for the National Shooting Sports Foundation discounted the notion that the availability of such plans will lead to a significant increase in gun-related crime, pointing out that 3-D printers are expensive, the plastic guns work poorly (if at all) and usually come apart after a round or two, and a criminal is more likely just to steal a weapon than to go to the trouble of 3-D printing.

My own take on this matter is that 3-D-printed guns are both inevitable and unlikely to change the situation in the U. S. regarding gun safety.  The inevitability comes from the rapid pace of advances in both performance and price of 3-D printers.  In 2013, most people had not seen a 3-D printer in the flesh, so to speak, and they were still specialty items found mostly in universities and industrial research labs.  But today, you can buy them online for less than $200 (although the cheapest ones will make only toy guns, not real ones), and the technical skills needed to run such printers are being mastered by elementary-school children. 

That being said, if Cody Wilson and others like him make 3-D-printed gun plans easily available, will that lead to a flood of firearms that can pass through security checks and show up in the hands of terrorists and other criminals?  Somehow I don’t think so.

The availability of guns is only one term in the equation that equals gun violence.  As gun-control advocates never cease to remind us, it is very easy to obtain a gun in the U. S., both legally and illegally.  And criminals, being criminals, are not fastidious about using only legitimate means to get their weapons.  The many channels through which the huge inventory of existing weaponry moves in this country mean that most efforts to lower gun violence by cutting off the supply of guns are doomed to failure.

That doesn’t mean we should hand out derringers as door prizes.  Reasonable restrictions on the purchase and use of guns to prevent spur-of-the-moment bad choices by people who are likely to misuse a gun are justifiable.  But the other term in the gun-violence equation is the person holding the gun.  And that is where the problem gets complicated.

Ever since Cain did in Abel, murder has been a part of human existence.  Some cultures tend to be more violent than others, and one measure of the degree of civilization a culture possesses is how violent it is.  For complicated reasons having to do with the way the nation was settled and the kinds of people who settled it, the United States is both a place where gun ownership is a lot more common than in many other countries, and also a place where guns are used fairly frequently in violent crimes. 

I know people who own guns and pose virtually no threat whatsoever to any law-abiding citizen.  They have guns solely for self-protection or for sports such as hunting, and if every gun owner were like these people, the rate of violent gun-related crime in the U. S. would be zero.

But even one of these people could end up shooting somebody if the gun owner felt threatened.  And aside from the rare psychopath who literally shoots people for fun, most gun-related deaths that are not accidental have some justification in the mind of the shooter.  The best way to reduce gun violence is to create a culture in which no one, or almost no one, feels threatened enough to shoot their way out of the situation. 

That’s a hard, long, complicated task—the work of generations, really.  And it requires a kind of unity of purpose that is presently largely lacking in this country.  It’s much easier to spot changes that threaten to increase the availability of guns and try to stop them, as gun-control advocates are doing to Cody Wilson.  But I think we should spend at least as much energy on studying the cultural and spiritual conditions that lead to gun violence, and at a grass-roots level try to do something about them as well.

Sources:  Lisa Marie Pane’s article entitled “Texas company cleared to put 3D-printed gun designs online” appeared on July 26, 2018 in the Chicago Tribune at http://www.chicagotribune.com/news/nationworld/ct-texas-3d-printed-gun-20180726-story.html, and in other media outlets as well.   Reuters reported on the decision to deny the gun-control coalition’s attempt to block the release at https://www.reuters.com/article/us-usa-court-guns/us-judge-denies-gun-control-groups-attempt-to-block-3-d-gun-blueprints-idUSKBN1KH2I2.  My previous blog on Cody Wilson’s 3-D-printed gun plans appeared on May 13, 2013, at https://engineeringethicsblog.blogspot.com/2013/05/printing-guns.html.

Monday, July 23, 2018

Zero Waste: Eccentric Hobby or Wave of the Future?


As I learned from living there during four years of college, California is a land of extremes.  The all-time U. S. record high temperature of 134 degrees Fahrenheit was recorded in Death Valley, which also has the lowest land elevation in the country, and the same taste for extremes extends to human behavior as well.  A recent post on the San Jose Mercury-News website tells the story of Anne-Marie Bonneau, whose teenage daughter read about floating islands of plastic trash in the oceans back in 2011.  Just as monks and nuns of old took vows of poverty, chastity, and obedience, Ms. Bonneau took a vow then and there never to buy another piece of plastic again.

She’s still living up to her vow, and has extended the notion of cutting back on waste to the point that she has to take out the garbage only once a year, in a shopping bag (paper, not plastic, no doubt).  Not everybody would care to go to the extremes that Ms. Bonneau does.  She collects glass jars to store liquids in, buys shampoo in bulk, and makes her own deodorant and granola.  Her daughter Charlotte, while she is still on board with the zero-waste idea most of the time, has committed acts of rebellion from time to time, such as buying a plastic water bottle.  And she threatened to leave home if her mother switched from toilet paper to rewashable cloths, so they still buy toilet paper (but not in plastic bags).  Bonneau, whose day job as an editor must leave her with enough energy to run what amounts to a part-time two-person recycling and manufacturing operation, admits she is “hard-core.”  But she evidently feels strongly enough about how the planet’s oceans are getting trashed with an estimated 19 billion pounds of plastic every year that it gives her the energy and motivation to do what she and her daughter do along these lines.

Ms. Bonneau is not alone, though few go as close to the limit of zero waste as she has.  Recycling in some form happens in most large cities in the U. S., and smaller towns are catching up too.  While we sometimes think of it as a new idea, nature has been recycling since the beginning of life, at least.  Organic material comes from the soil into plants, animals eat the plants, and both during and after the animal’s life, the material returns to the soil.  And there is a line of thought out there that seems to say if we would only get rid of all this Industrial Revolution stuff—fossil fuels and plastic and climate-change-causing combustion—and live like people did prior to, say, 1500 A. D., then we’d have a sustainable economy and ecosystem.

The trouble is, there would be a lot fewer of us to enjoy it.  Before Europeans came along, the North American continent supported only a small fraction of its present population.  And if you think making your own deodorant is hard, try catching enough fish and wildlife every day to live on.  Any archaeologist knows that the most informative thing you can discover about an ancient settlement is its trash heap.  And from trash heaps, we know that even ancient civilizations were pretty wasteful and generated a fair amount of trash.  I don’t know of any numerical comparisons, but my point is that just getting rid of plastic packaging would not automatically solve all our trash problems, though it would help.

Under the present circumstances that prevail in most parts of the U. S., there is not much motivation for those who do not have the exquisitely sensitive global-political conscience of Ms. Bonneau to reduce their weekly trash production.  To speak personally, my city provides us with three (plastic, sorry) trash barrels, each capable of holding a couple of cubic yards of stuff.  One is for green waste that presumably goes to a compost pile somewhere, another is for specific types of recyclable materials (plastic, glass, aluminum), and the third is for any general trash that can’t go into the first two.  For what it’s worth, our contribution to the recycle bin is usually larger than our contribution to the trash bin, and I’ve often felt kind of silly tossing one small bag of trash into that great big bin each week.  I suppose I should have felt guilty for using it that much.

Here’s an idea that probably won’t be very popular, because it would cost people money.  But I’ll float it anyway.  The technology exists for trash-pickup trucks to register the net weight of everybody’s trash as they stop and lift it with those big fork kind of things into the truck.  What if you were charged by the pound for your trash?  And what if it was a pretty steep charge after a first flat rate?  That would get a lot of attention from people who currently don’t give a flip about how much stuff they throw out.  It has the advantage of letting folks who value plastic above money keep doing what they’re doing, but it would re-train the rest of us to buy stuff that leaves less trash behind.
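
Just to make the incentive concrete, here is a toy Python sketch of how a two-tier, pay-by-weight charge might be computed.  Every number in it (the flat fee, the included allowance, the per-pound rate) is invented for illustration; no actual utility’s rates are implied.

    def monthly_trash_bill(pounds, flat_fee=15.00, included_lb=40.0, rate_per_lb=0.50):
        """Flat rate covering a modest allowance, then a steep per-pound charge."""
        excess = max(0.0, pounds - included_lb)
        return flat_fee + excess * rate_per_lb

    for household_lb in (25, 60, 150):
        print(f"{household_lb:4d} lb/month -> ${monthly_trash_bill(household_lb):.2f}")

    #   25 lb/month -> $15.00
    #   60 lb/month -> $25.00
    #  150 lb/month -> $70.00

The steep second tier is what would do the re-training: a light-trash household pays about what it does now, while a heavy one sees the difference on every bill.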

A brief Internet search turned up a single instance where this idea has been tried:  in 1993, in California, naturally.  A trash-pickup firm serving Thousand Oaks and Ventura experimented with a prototype system from a North Carolina firm.  But on the day a reporter came to witness the first public trial, the system got weights wrong by three to six pounds and missed one trash can entirely.  For whatever reason, even though the idea has been around for more than twenty years, it hasn’t caught on.

So that means if you want to reduce your contribution to the global trash pile, you are more or less on your own.  Ms. Bonneau’s example is out there for anyone who wishes to try it, but outside of the famously ecologically-minded culture of the Bay Area, you may be regarded as a little eccentric.  For most of us, making small changes in what and how we buy things will help some.  For example, after I read Jen Hatmaker’s 7 Experiment:  Staging Your Own Mutiny Against Excess, I started reusing the plastic bag I put my lunch sandwich in, keeping it for a week instead of throwing it away every day and getting a new one.  That’s about as small a change as you can make and still be able to say you’ve done something, but it’s better than nothing, I suppose.

Sources:  The article “Way beyond recycling:  How some Bay Area families are trying to get to zero waste” appeared on the San Jose Mercury-News website on July 20, 2018 at https://www.mercurynews.com/2018/07/20/way-beyond-recycling-how-some-bay-area-families-are-trying-to-get-to-zero-waste/.  I also referred to the Los Angeles Times story from May 12, 1993, “Pay-Per-Pound Trash Pickup System Tested” at http://articles.latimes.com/1993-05-12/local/me-34368_1_trash-weight.  And if you are interested in why Christians should get with the reduce-waste program, you can read Jen Hatmaker’s book 7 Experiment:  Staging Your Own Mutiny Against Excess published in 2012 by Lifeway Christian Resources. 

Monday, July 16, 2018

Exploding E-Cigarettes and Ethical Theories


A recent article in the engineering magazine IEEE Spectrum describes how numerous users of e-cigarettes have received injuries ranging from minor to life-threatening when their devices caught fire or exploded.  E-cigarettes work by vaporizing a solution of nicotine and flavoring with a hot wire powered by a high-energy-density lithium-ion battery.  Lithium-ion batteries are in everything from mobile phones to airliners, but the particular design of e-cigarettes makes them especially hazardous in this application.  The high power required by the heater means that the battery is operating perilously near the maximum output current that it can maintain without overheating itself.  And if there are any manufacturing defects in the battery, as can often happen if substandard components are used by a fly-by-night manufacturer with inadequate quality control, the e-cigarette user ends up carrying around what amounts to a small pipe bomb.
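
To see how little margin there is, consider some representative numbers (assumed from typical hobbyist hardware, not measurements from the Spectrum article): a low-resistance heater coil across a freshly charged lithium-ion cell draws a current, by Ohm’s law, that is not far below the continuous-discharge rating of a common 18650 cell.

    # Representative values only -- assumed, not taken from the Spectrum article.
    coil_resistance_ohms = 0.5     # a common low-resistance ("sub-ohm") heater coil
    cell_voltage = 4.2             # a fully charged lithium-ion cell
    rated_continuous_amps = 10.0   # continuous-discharge rating of many 18650 cells

    current_amps = cell_voltage / coil_resistance_ohms   # Ohm's law: I = V / R
    power_watts = cell_voltage * current_amps            # P = V * I

    print(f"Coil draws {current_amps:.1f} A ({power_watts:.0f} W); "
          f"cell rated for {rated_continuous_amps:.0f} A continuous")
    # Coil draws 8.4 A (35 W); cell rated for 10 A continuous

A cell with hidden manufacturing defects, or a counterfeit whose real limit is lower than its label, has essentially no margin left, and overheating can tip it into thermal runaway.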

The consequences can be dire.  The article tells the story of Otis Gooding, whose e-cigarette went off in his pants pocket, injuring both his thigh and the hand he used to try to get rid of it.  Other users have lost eyes, teeth, and parts of their cheeks to explosions and fires.

Anecdotes, however harrowing, do not constitute numerical evidence that typical e-cigarette users are taking their lives in their hands every time they light up.  But various sources have estimated that the number of e-cigarette explosions and fires is in the dozens if not hundreds a year.

The e-cigarette market had total sales exceeding $2 billion in 2016, and assuming the average user spends $150 a year on the habit, that means over 10 million people in the U. S. are likely regular users.  Say 100 of these have fireworks-type problems with their devices, and that amounts to an incidence of roughly 1 per 100,000 per year, which is the type of ratio that public-health epidemiologists like to use.  Just to put that in perspective, deaths in the U. S. from lung cancer in the period 2011-2015 averaged about 43 per 100,000.  One of the advantages touted for e-cigarettes is that they don’t produce the tar and other nasty stuff that leads to lung cancer in regular cigarette smokers.  While e-cigarettes haven’t been around long enough for this assertion to be empirically verified—nobody has been an e-cigarette user for forty years yet—there is probably something to this argument.
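
The arithmetic behind those back-of-the-envelope figures is simple enough to lay out explicitly.  The sketch below just reproduces the estimate in the paragraph above, using the same assumed inputs:

    total_sales_dollars = 2.0e9       # U.S. e-cigarette sales, 2016
    spend_per_user_per_year = 150.0   # assumed average spending per user
    incidents_per_year = 100          # assumed number of explosions/fires

    users = total_sales_dollars / spend_per_user_per_year          # ~13.3 million
    incidence_per_100k = incidents_per_year / users * 100_000      # ~0.75

    print(f"Estimated users: {users / 1e6:.1f} million")
    print(f"Explosion/fire incidence: ~{incidence_per_100k:.1f} per 100,000 per year")
    print("Lung-cancer deaths (U.S., 2011-2015): ~43 per 100,000 per year")

    # Rounding the user base down to an even 10 million gives the "1 per
    # 100,000" figure quoted in the text.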

And such an argument would appeal to a certain type of ethical theorist called a utilitarian.  Utilitarians decide what the right thing to do is by asking what produces the greatest good for the greatest number.  A utilitarian might look at this situation and say, “Okay, we have Case A and Case B.  In Case A, 10 million people satisfy their craving for nicotine with plain old coffin-nails, and as a result a good many more than 43 of them die of lung cancer every year.  In Case B, we have the same 10 million people smoking e-cigarettes.  A lot fewer of them die of lung cancer, at the price of the hundred or so unlucky people whose e-cigarettes explode in their faces.  Clearly, Case B is better.”

And if you put it that way, it’s hard to argue the point.  But we don’t live idealized lives in which we’re always choosing between two clearly-defined cases.  And if I were an e-cigarette user (which I am not), I would still be concerned about the chances that my device could blow up or catch fire.  But the utilitarian won’t help me.

So I go looking in the catalog of ethical theories and find something called virtue ethics.  Basically, virtue ethics encourages cultivation of the virtues, of which there are almost as many as there are virtue ethicists.  Of the virtues we could choose from, I’ll pick one that doesn’t have a particularly fancy name, but will work for our purposes:  thoughtfulness. 

If you open a cabinet door while making breakfast one morning, and then think to close it afterwards not because you want to (open cabinet doors don’t bother you at all) but because your wife has a thing about open cabinet doors, you’re being thoughtful.  What does thoughtfulness say about the situation of defective e-cigarettes leading to explosions and fires?

Well, it draws attention to the proximate causes of those explosions and fires, which in most cases prove to be defective lithium-ion batteries supplied by shady manufacturers who are almost exclusively in China, which is where most of the devices and their components are made.  Now there are over a billion people in China, and probably most of them are doing the best they can in their lives, trying to improve their lot and fulfill their obligations.  China is a haven for entrepreneurs right now, especially in the exploding (so to speak) growth market of e-cigarettes.  And in a world economy where low prices speak louder than almost any other consideration, the organization that can underbid everybody else tends to get the business.  And so the management of a shady lithium-ion battery factory probably feels caught between the rock of maintaining quality and reliability on the one hand, and the hard place of not pricing its product above the minimum needed to get the business.

The virtue ethicists would tell the makers of e-cigarettes to clean up their act.  “How would you like to be one of the customers whose lips are sent to Kingdom Come by one of your bad batteries?” they might ask.  Of course, if one company spends the money to improve the quality of their batteries, there may be another one around the corner willing to skip quality control and underbid the first company.  The e-cigarette makers who buy batteries also have reputations to uphold, and so they could also pay attention to the sermon of a virtue ethicist and take more responsibility for the quality of their products.  In any event, it’s easy to see that virtue ethics provides you with more reasons to do something about this situation than utilitarianism does.

This is not to say that utilitarianism is useless.  In situations where there are lots of data to work with, utilitarian analyses can clarify choices between comparable courses of action.  But sooner or later, it always runs up against the questions of how to quantify good, and whom to include in the greatest number.  And there are no universally agreed-upon answers to those questions.

Sources:  The print version of IEEE Spectrum carried the article “When E-Cigarettes Go Boom” on pp. 42-45 of the July 2018 issue.  A briefer version appeared on their website in February 2018 at https://spectrum.ieee.org/consumer-electronics/portable-devices/exploding-ecigarettes-are-a-growing-danger-to-public-health.  The statistic about lung cancer was obtained from https://seer.cancer.gov/statfacts/html/lungb.html, and the sales figures for e-cigarettes are from https://www.statista.com/statistics/285143/us-e-cigarettes-dollar-sales/.

Monday, July 09, 2018

Is Dr. Google Ready to See Us Now?


Google’s parent company Alphabet has recently demonstrated an artificial-intelligence (AI) algorithm that can be used to estimate how likely hospital patients are to die soon.  A recent piece in Bloomberg News described how one particular woman with end-stage breast cancer arrived with fluid-filled lungs and underwent numerous tests.  The Google algorithm said there was a 20% chance she would not survive her hospital stay, and she died a few days later.

One data point—or one life.  The woman was both, and therein lies the challenge for researchers wanting to use AI to improve health care.  AI is a data-hungry beast, thriving on huge databases and sweeping up any scrap of information in its maw.  One of the best features of Google’s medical AI system is that it doesn’t need to have the raw data gussied up: it does not require a human being to retype messy notes into a form the computer can use, a chore that consumes as much as 80% of the effort devoted to other AI medical software.  Google’s system takes almost any kind of hand-scrawled data and integrates it into patient evaluations.  So in order to help, the system needs to know everything, no matter how apparently trivial or unrelated to the case it may be.
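
To make that concrete, here is a deliberately toy sketch of the general idea of scoring free-text notes directly, with no manual re-keying into structured fields.  It uses scikit-learn and bears no resemblance to Google’s actual model, which is far larger and trained on real electronic health records; every note and label below is invented.

    # Toy illustration only -- not Google's system.  Notes and outcomes are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    notes = [
        "patient alert, vitals stable, tolerating diet, ambulating with assistance",
        "end-stage disease, fluid in both lungs, declining despite treatment",
        "routine post-op course, pain controlled, discharge planned tomorrow",
        "unresponsive episodes overnight, family meeting about goals of care",
    ]
    died_in_hospital = [0, 1, 0, 1]   # made-up outcome labels

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(notes, died_in_hospital)

    new_note = "worsening shortness of breath, fluid overload, not responding to treatment"
    risk = model.predict_proba([new_note])[0][1]
    print(f"Estimated in-hospital mortality risk: {risk:.0%}")

A real system also has to cope with scanned handwriting, time-stamped vital signs, lab results, and much more, which is exactly why skipping the manual clean-up step is such a selling point.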

But then the human aspect enters.  To make my point, I’ll draw an analogy to a different profession—banking.  I’m old enough to remember when bankers evaluated customers with a combination of hard data—loans paid off in the past, bank balances, and so on—and intuition gained from meeting and talking with the customer.  Except for maybe a few high-class boutique banks, this is no longer the case.  The almighty credit score ground out by opaque algorithms reigns, and no amount of personal charm exerted for the benefit of a loan officer will overcome a low credit score.

It’s one thing when we’re talking about loans, and another when the subject is human lives.  It’s easy to imagine a dystopian narrative involving a Google-like AI program that comes to dominate the decision-making process in a situation where medical resources are limited and there are more patients needing expensive care than the system can handle.  Doctors will turn to their AI assistants and ask, “Which of these five patients is most likely to benefit from a kidney transplant?”  It’s likely that some form of this process already goes on today, but is limited to comparatively rare situations such as transplants. 

The U. S. government’s Medicare system is currently forecast to become insolvent eight years from now.  Even if Congress manages to bail it out, the flood of aging baby-boomers such as myself will threaten to overwhelm the nation’s health-care system.  In such a crisis, the temptation to use AI algorithms to allocate limited resources will be overwhelming. 

From an engineering-efficiency standpoint, it all makes sense.  Why waste limited resources on someone who isn’t likely to benefit from them, when another person may get them and go on to live many years longer?  That’s fine except for two things.

One, even the best AI systems aren’t perfect, and now and then there will be mistakes—sometimes major ones.

And two, what if an AI medical system tells you you’re not going to get that treatment that might make the difference between life and death?  Even the hardiest utilitarian (“greatest benefit for the greatest number”) may have second thoughts about that outcome.

Of course, resource allocation in health care is nothing new.  There have always been more sick people than there have been facilities to take care of them all.  The way we’ve done it in the past has been a combination of economics, the judgment of medical personnel, and government intervention from time to time.  As computers made inroads into various parts of the process, it’s only natural that they be used along with other available means to make wise choices.  But there’s a difference between using computers as tools and completely turning over decision-making to an algorithm.

Another concern raised about Google’s foray into applying AI to health care is privacy.  Medical records are among the most sensitive types of personal data, and in the past, elaborate precautions have been taken to guard the sanctity of each individual’s records.  But AI algorithms work better the more data they have, and so simply for the purpose of getting better at what they do, these algorithms will need access to as much data as they can get their digital hands on.  According to one survey, less than half of the public trusts Google to keep their data private.  While that is just a perception, it’s a perception that Google, and the medical profession in general, ignore at their peril.  One scandal or major data breach involving medical records could set back the entire medical-AI industry, or even bring the whole experiment to a screeching halt, so all participants will need to tread carefully.

Predicting when people will die is only one of the many abilities that medical AI of the future offers.  In figuring out hard-to-diagnose cases, in recommending treatment customized to the individual, and in optimizing the delivery of health care generally, it shows great promise in making health care more effective and efficient for the vast majority of patients.  But doctors and other medical authorities should beware of letting the algorithms gain the upper hand, and turning their judgment and ethics over to a printout, so to speak.  Because Google’s system is still in the prototype stage, we don’t know what the effects of its more widespread deployment will be.  But whatever form it takes, we need to make sure that the vital life-or-death decisions involved in medical care are made by responsible people, not just machines.

Sources:  The article “Google Is Training Machines to Predict When a Patient Will Die” appeared on June 18, 2018 in Bloomberg News at https://www.bloomberg.com/news/articles/2018-06-18/google-is-training-machines-to-predict-when-a-patient-will-die, and was reprinted by the Austin American-Statesman, where I first saw it.  I also referred to an article on the Modern Health Care website at http://www.modernhealthcare.com/article/20180419/NEWS/180419911.  The statistic about Medicare’s insolvency is from p. 6 of the July 9, 2018 issue of National Review.   

Monday, July 02, 2018

Unanswered Questions About the FIU Bridge Collapse


Last March 15, an innovatively designed pedestrian bridge that had been in place for less than a week suddenly collapsed onto a busy roadway at Florida International University, killing six people and injuring several more.  Many bridges have been erected successfully using the technique known as accelerated bridge construction that was used here, but this was one of the first times such a bridge failed during construction, and so both the engineering community and the public at large would like to know what went wrong.  But answers have been slow in coming.

Accident investigations are tedious, painstaking tasks, and it’s understandable that the U. S. National Transportation Safety Board (NTSB), which is the primary agency charged with the investigation, is going to take as long as it takes to find out what happened.  On May 23, the NTSB released a preliminary report on the accident.  But those hoping to read about a metaphorical smoking gun in the report will be disappointed.

This is not unusual for preliminary reports.  Depending on how accessible the raw data to be examined is, preliminary NTSB reports can come close to answering all the relevant questions.  But a bridge is a large physical object that doesn’t yield its secrets easily, and it’s quite possible that the agency is conducting tedious and lengthy examinations of the pieces recovered from certain sections of the bridge to reconstruct exactly what happened.

The bridge was a concrete truss, and in writing this I learned the technical definition of a truss, which “consists of two-force members only, where the members are organized so that the assemblage as a whole behaves as a single object.”  (That’s from Wikipedia.)  Trusses are everywhere in constructed objects.  Your house or apartment probably has roof trusses in it.  If a sign support arching over a highway isn’t a single large tube or pole, it’s probably a truss.  You can tell a truss by its triangles, each side of which is one of the aforementioned two-force members.  And all that means is that the force on each member (strut) is applied at only two points, generally the ends.

The NTSB report focuses on two particular truss members of the bridge, especially one designated as No. 11.  If you picture the bridge as a concrete floor and ceiling connected by slanting concrete beams (truss members), these two members were near each end, slanting downward from near the end of the ceiling to the far edge of the floor at the end of the bridge.  Each slanting member formed the diagonal of a triangle, the other two sides being the end part of the ceiling and a vertical beam connecting the ceiling and floor at each end. 

Clear?  Maybe not.  Anyway, the point of making all these triangles in trusses is that a triangular shape made of straight sides does not change its shape easily.  You can push on the opposite sides of a square made of four bars tied together with pivots at the corners, and the pivots will let you squash the square flat.  But even if a triangle is made with pivots at the corners, it won’t change its shape until one of the sides actually bends or breaks.  And that may be what happened to the FIU bridge.
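
For readers who want to see what being a “two-force member” buys you, here is a minimal statics sketch with my own illustrative numbers (a generic textbook triangle, not the FIU geometry): a joint carrying a downward load W is held by a horizontal member running back to a support and a diagonal member running up to the support at an angle theta.  Because each member carries force only along its own axis, two equilibrium equations at the loaded joint give both member forces directly.

    import math

    # Illustrative numbers only -- a generic textbook triangle, not the FIU bridge.
    W = 10.0                     # downward load at the joint, kN
    theta = math.radians(30.0)   # diagonal's angle above horizontal

    # Vertical equilibrium at the joint:  T_diag * sin(theta) = W
    T_diag = W / math.sin(theta)           # tension in the diagonal member
    # Horizontal equilibrium:  C_horiz = T_diag * cos(theta)
    C_horiz = T_diag * math.cos(theta)     # compression in the horizontal member

    print(f"Diagonal: {T_diag:.1f} kN tension; horizontal: {C_horiz:.1f} kN compression")
    # Diagonal: 20.0 kN tension; horizontal: 17.3 kN compression

Take away any one side of the triangle and the joint can no longer be in equilibrium, which is why the loss of a single diagonal can bring down an entire span.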

There has been much attention paid to some cracks that showed up near the bottom of No. 11 several weeks before the collapse.  Photos of these cracks were accidentally released, the NTSB griped about it, and then the agency included them in their report later.  There are good reasons why certain touchy information about accidents shouldn’t be released before an NTSB report is completed, but it’s not clear if these reasons apply in this case.  The Miami Herald commented in an article on the report that the newspaper is trying to get more information about the bridge released from the Florida Department of Transportation under Florida’s public-records act, but the lawsuit is still in progress.

As many people know, concrete can withstand a lot of compression (squeezing), but hardly any tension.  That is why steel reinforcement bars (rebars) are embedded in concrete structures of any size, and why the FIU bridge included adjustable tensioning rods in some of its members, including No. 11.  A couple of sources indicate that construction crews were re-tensioning these rods, after the bridge had been moved into place, when it collapsed.  The Herald report speculates that the member might have been compressed too much by these tensioning rods in an attempt to close the suspicious cracks.  Every concrete structure has a limit as to how much compression it can stand.  If the worker got carried away and put too much stress on an already compromised member, it might have simply crumbled from the pressure.  Because it was in such a vital location, failure of that member would have caused exactly the kind of accident that happened.
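
To give a feel for the kind of check involved, the crudest version is just average compressive stress, axial force divided by cross-sectional area, compared against the concrete’s compressive strength.  The numbers below are purely hypothetical and are not the dimensions, forces, or concrete strength of the actual FIU member:

    # Purely hypothetical numbers -- only the form of the check is the point.
    axial_force_kip = 1500.0          # assumed total compression in the member, kips
    width_in, depth_in = 21.0, 24.0   # assumed cross-section, inches
    f_c_psi = 6000.0                  # assumed concrete compressive strength, psi

    area_in2 = width_in * depth_in
    stress_psi = axial_force_kip * 1000.0 / area_in2   # 1 kip = 1,000 lb

    print(f"Average stress: {stress_psi:.0f} psi, about {stress_psi / f_c_psi:.0%} of f'c")
    # Average stress: 2976 psi, about 50% of f'c

    # Re-tensioning the rods adds to the axial force; on a member that is already
    # cracked and carrying dead load, the extra compression eats into whatever
    # margin remains.

A real design check is far more involved (load factors, reinforcement, the reduced section at a crack), but the basic trade-off between added post-tensioning force and remaining compressive capacity is the one the speculation above turns on.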

Especially if the worker who made this mistake was the one who died, the NTSB is reluctant to draw any conclusions in this direction that are not supported by abundant evidence.  It’s up to them to figure out what traces of such a mishap would remain in the rubble that was collected from the site, as well as whatever preliminary evidence such as photos that are available.  So rather than any nefarious conspiracy to cover up systematic wrongdoing, the delay and refusal to share information may simply be out of concern that premature release of information could lead to unnecessary agitation, hurt feelings, and even more lawsuits.  The legal system has grown to accommodate the NTSB’s role in accident investigation, and anything that would upset that particular applecart may not be helpful.

All the same, it would be good if the root causes of this very public and tragic event could be unearthed, if there are any to be found.  And other things being equal, it would be nice to have that happen sooner rather than later.  Clearly, something went wrong, and everyone using accelerated bridge construction stands to learn something potentially useful from the final report on this accident.  But as the process may take months longer, we may simply have to wait.

Sources:  The Miami Herald’s article on the NTSB preliminary report appeared on May 23, 2018 at https://www.miamiherald.com/news/local/community/miami-dade/article211735504.html.  The NTSB report itself can be accessed at https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH009-prelim.pdf.  I also referred to the Wikipedia article on trusses (the engineering kinds).