Monday, November 23, 2015

VW's In A Fix With Their Fix

Back in September, the U. S. Environmental Protection Agency (EPA) accused Volkswagen of cheating on the emissions controls of many of its diesel-engine cars.  VW admitted as much, its CEO resigned, and now the firm faces the problem of fixing all the cars that violate emissions standards.  One way or another, some 11 million cars worldwide are implicated, with about half a million in the U. S. alone.  How did VW get into this fix, and how are they going to dig themselves out?

As new information has emerged on exactly how the cheating was done, it's pretty easy to tell that this was no single-line software tweak by a lone rogue engineer.  According to a Nov. 4 BBC report, someone (probably several someones) designed software to detect when the car was on a test stand used for EPA checks.  Such a check typically involves running the car while it is on a dynamometer, which uses rollers underneath the wheels to load the engine and simulate actual road conditions.  But in order for the stationary test equipment to be connected to the vehicle, the car is usually sitting still in a laboratory somewhere during the test.  I'm not saying I know how the software engineers did it, but if I had to figure out whether a test-stand situation like this was going on, I'd look at the built-in accelerometers that every airbag-equipped car has.  If nobody's at the steering wheel and the car isn't going anyplace even though it's in "drive" and the engine's running, chances are it's on a test stand. 

However they did it, when an emissions-test situation was detected the car switched into a mode that made it pass the emissions test.  But the price was severely crippled power and lowered engine performance, which would not typically show up on an emissions test—after all, nobody's actually driving it to tell.  Once the test was over, the software readjusted the engine settings to produce normal power and performance—and as much as forty times more oxides of nitrogen (NOx) than the EPA allows.  But hey—it passed the test.  That's all that counts, right?
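To make the speculation above concrete, here is a minimal sketch of the kind of detect-and-switch logic being described.  Everything in it (sensor names, thresholds, engine modes) is my own invention for illustration; VW's actual code has not been published.

```python
# Hypothetical sketch of a "defeat device" decision loop.
# Sensor names, thresholds, and mode names are all invented for
# illustration; this is NOT VW's actual code, which hasn't been published.

def looks_like_test_stand(speed_kmh, accel_g, steering_deg, gear):
    """Guess that the car is on a dynamometer: the wheels register speed,
    but the body isn't accelerating and nobody is steering."""
    return (gear == "drive"
            and speed_kmh > 0                # rollers make the wheels "move"
            and abs(accel_g) < 0.02          # but the car goes nowhere
            and abs(steering_deg) < 1.0)     # and nobody's at the wheel

def choose_engine_map(sensors):
    """Pick an engine calibration based on what the sensors suggest."""
    if looks_like_test_stand(**sensors):
        return "low-NOx test mode"           # passes the test, weak performance
    return "normal mode"                     # full power, much higher NOx

# A stationary car on rollers, in drive, with no steering input:
print(choose_engine_map({"speed_kmh": 50.0, "accel_g": 0.0,
                         "steering_deg": 0.0, "gear": "drive"}))
```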

This mode of cheating is why fixing the problem with many diesel models, especially older ones, is not going to be some simple reload-new-software exercise.  If you've gone on a road trip recently and looked around in a truck-stop convenience store, you may have noticed piles of plastic bottles full of something called "diesel exhaust fluid."  Turns out that this stuff is now needed for many tractor-trailer diesel engines in order to meet the EPA's requirements for NOx emissions.  There's machinery on board the truck that squirts the fluid—which contains urea—into the exhaust, and the urea decomposes in the hot exhaust to form ammonia and carbon dioxide.  The ammonia, in the presence of a catalyst in a thing called a selective catalytic reduction (SCR) system, combines with the nasty NOx molecules to form nitrogen and water, which finally leave the exhaust pipe and rejoin Mother Nature, leaving her nearly as pristine as she was before the truck came by. 
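For readers who want the chemistry spelled out, these are the standard urea-SCR reactions the paragraph is describing (generic textbook forms, not tied to any particular engine):

```latex
% Injected urea solution decomposes in the hot exhaust to ammonia and CO2:
\[ \mathrm{(NH_2)_2CO + H_2O \;\longrightarrow\; 2\,NH_3 + CO_2} \]
% The ammonia then reduces NOx over the SCR catalyst, for example:
\[ \mathrm{4\,NH_3 + 4\,NO + O_2 \;\longrightarrow\; 4\,N_2 + 6\,H_2O} \]
```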

It's one thing for truck engineers to see the regulations coming down the pike, and take time to redesign the power plant to accommodate another anti-pollution system that requires valves, heaters to keep the urea solution from freezing, pipes, level-monitoring systems, and all the other stuff needed to do the NOx-killing job.  It's quite another thing for VW to be under the gun to retrofit small diesel passenger cars that are maybe four or five years old with a kit of SCR hardware they were never designed to have.  You'll need someplace to stick the SCR unit in the exhaust line, a way to run a pipe from the SCR to the urea tank, a place to put the urea tank, control lines, and so on.  Engineers estimate the cost could run to $1000 or more per vehicle.  With some cars, it may be cheaper for VW simply to buy them back from the owners and send them to the scrapyard.  Software-only fixes may be possible for some diesel models, but it looks like millions of cars worldwide will need expensive hardware installations to meet current emissions requirements.

VW says its internal investigation into how all this happened is continuing.  For their sake, I hope they wind it up pretty soon, at least well enough to publish a timeline with names and actions.  But even without such information, it's obvious by now that deception with regard to emissions controls was an established policy.  Maybe the conspiracy—that's not too strong a term at this point—was concealed from upper management, and that's one of the things we need to know.  But even if it was, it's clear that a group of engineers inside VW deliberately set out to cheat the system of pollution controls.  And they got away with it for several years.

It's not often that such a clear-cut case of wrongdoing by engineers makes the headlines.  Far more often, engineers face a dilemma in which either choice has advantages and disadvantages, both morally and otherwise.  And sometimes engineers make the wrong choice, basing their decisions on incomplete information.  But in engineering, information is almost always incomplete.  There's always more you'd like to know, but at some point the project must go on, choices must be made, and sometimes they turn out to be wrong ones.

But the VW emissions case is different.  Deception was intended from the start.  I don't know what internal company dynamics brought pressure to bear on engineers to the extent that developing a software evasion of emissions controls seemed like a good idea, but clearly something was wrong with the way ethical principles were stated and handed down. 

Sometimes, companies that do bad things are unrepentant and fight tooth and nail despite being in the wrong.  In such cases, large government fines may be the only thing that will make an impression.  But in VW's case, its CEO resigned, sales are dropping, and there are news stories with graphics that show the famed chrome VW emblem breaking apart.  It's starting to look like the market and the news media will do more punishing than the EPA is likely to do.  Whether that's fair or not is almost beside the point.  To survive, VW will have to own up fully, fix the mess it made to the best of its ability, and be a different company from the inside out—from now on.

Sources:  An Associated Press article on the types of fixes needed by VW was published in numerous outlets, including the U. S. News and World Report website on Nov. 19 at  Information on the details of how the cheating software worked was carried by the BBC on Nov. 4 at  I also referred to the Wikipedia article on diesel exhaust fluid.  I last blogged on the VW emissions scandal on Sept. 21, 2015.

Monday, November 16, 2015

Rolling Back Mass Surveillance

Bruce Schneier is a man worth listening to.  In 1993, just as the Internet was gaining speed, he wrote one of the earliest books on applying cryptography to network communications, and has since become a well-known security specialist and author of about a dozen books on Internet security and related matters.  So when someone like Schneier says we're in big trouble and we need to do something fast to keep it from getting worse, we should at least pay attention.

The trouble is mass surveillance.  In his latest book, Data and Goliath, he explains that mass surveillance is the practice of indiscriminately collecting giant data banks of information on people first, and then deciding what you can do with it.  One of the best-known and most controversial examples of this is the practice of the U. S. National Security Agency (NSA) of grabbing telecommunications metadata (basically, who called whom when) covering the entire U. S., which was revealed when Edward Snowden made his stolen NSA files public in 2013.  Advocates of the NSA defend the call database by saying the content of the calls is not monitored, only the fact that they were made.  But Schneier makes short work of that argument in a few well-chosen examples showing that such metadata can easily reveal extremely private facts about a person:  medical conditions or sexual orientation, for example. 

It's not only government overreaching that Schneier is concerned about. Businesses come in for criticism too.  With data storage getting cheaper all the time, many Internet firms and network giants such as Google and Yahoo find that it's easier simply to collect all the data they can on their customers, and then pick through it to see what useful information they can extract—or sell to others.  This happens all the time.  Maybe the most visible evidence of it comes when you go online and look for, say, a barbecue grill at a hardware-store website.  Then, maybe several days later, you will be on a completely different site.  Say a vegetarian friend is coming over and you're looking up how to make vegan stew.  Lo and behold, right next to the vegan recipe, there's an ad for that barbecue grill you were looking at a few days ago.  How did they know?  Through "cookies" (bits of data retained by your browser) and behind-the-scenes trading of information about you and your browsing habits.
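Here is a minimal sketch of how that behind-the-scenes matching can work: an ad network whose code is embedded on both sites sees the same cookie ID twice and connects the dots.  All of the names and identifiers below are made up for illustration.

```python
# Toy illustration of cookie-based ad retargeting.  The same ad network
# is embedded on the hardware-store site and on the recipe site, so one
# cookie ID ties your visits together.  All names and IDs are invented.

ad_network_db = {}   # cookie_id -> list of products this browser has viewed

def record_view(cookie_id, product):
    """Runs when a tagged page loads and reports what the visitor looked at."""
    ad_network_db.setdefault(cookie_id, []).append(product)

def choose_ad(cookie_id):
    """Runs days later on a completely different tagged site."""
    viewed = ad_network_db.get(cookie_id, [])
    return f"Ad for: {viewed[-1]}" if viewed else "Generic ad"

record_view("cookie-abc123", "barbecue grill")   # Tuesday, hardware-store site
print(choose_ad("cookie-abc123"))                # Friday, vegan-recipe site
```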

But Schneier reserves his greatest concern for something that is perhaps hardest to define:  the loss of privacy.  Privacy is a vital if poorly defined right, and its absence makes normal life almost impossible.  Schneier says, "Privacy is an inherent human right. . . . It is about choice, and having the power to control how you present yourself to the world."  Mass surveillance tramples over that right and subtly trains millions to alter their ways of living to avoid the pain of secrets revealed.  This way of living was familiar to those whose lives were monitored by totalitarian regimes such as the old East Germany or the Soviet Union.  True, Google isn't going to send a jackbooted corporal to your door if you say something nasty about Sergey Brin, Google's co-founder.  Brin himself was born behind the Iron Curtain, though his family emigrated when he was six, and he probably remembers little or nothing about the USSR.  Nevertheless, Google and other firms that collect massive amounts of private data from their customers have set up a situation in which the privacy rights of millions, even billions, depend solely on the good intentions of a few powerful decision-makers in private companies. 

So what do we do about this?  Schneier has lots of suggestions, and points to Europe as a place where privacy is more respected in law and custom.  Changing laws is a necessary first step.  Whenever anyone moves to restrict the mass-surveillance habits of government entities such as the NSA or the Federal Bureau of Investigation, their defenders threaten us with a terrorist apocalypse, saying if we don't give up this or that privacy right, we'll tie the government's hands and be helpless before terrorist assaults.  Schneier spends a lot of time taking apart this argument, to my mind pretty convincingly.  For one thing, mass-surveillance data has not proved that useful in uncovering terrorist plots, compared to old-fashioned detective work focused intensely on a few known troublemakers. In general, government should abandon most mass-surveillance practices in favor of concentrating on specific investigations, with permission granted by courts whose workings are made public to the extent possible.

As for massive snooping by private enterprises, Schneier thinks regulations are the best option.  These regulations would impose a kind of "opt-in" system.  Currently, if you have any privacy-related choice in dealing with Internet firms, you have to go to a lot of trouble to make them respect your privacy, assuming they allow such a thing at all.  Under Schneier's proposed policy, companies could not take away your rights to your data without your explicit permission, and the choice would be explained clearly enough that you wouldn't need a techno-lawyer to read the fine print to understand what's going on. 
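The difference is easiest to see in the defaults.  A toy sketch, with invented field names, of opt-out (today's common practice) versus the opt-in rule Schneier favors:

```python
# Toy contrast between opt-out and opt-in data collection.
# Field names are invented for illustration only.

def may_collect_opt_out(user_settings):
    """Today's usual model: collection is on unless the user found
    the buried setting and turned it off."""
    return not user_settings.get("declined_tracking", False)

def may_collect_opt_in(user_settings):
    """Schneier-style model: collection is off unless the user gave
    explicit, clearly explained permission."""
    return user_settings.get("granted_tracking", False)

brand_new_user = {}   # someone who never touched the privacy settings
print(may_collect_opt_out(brand_new_user))   # True  -- tracked by default
print(may_collect_opt_in(brand_new_user))    # False -- left alone by default
```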

Schneier is not a political scientist, and neither am I, so it's hard to say how we would get from the current parlous situation to one in which online privacy is respected, and nobody can snoop on you unless they go to a lot of trouble and get special permission to do it.  But he's told us what the problem is, and now it's up to us to do something about it.

Sources:  Bruce Schneier's book Data and Goliath:  The Hidden Battles to Collect Your Data and Control Your World was published by W. W. Norton in 2015.  The quotation from it above is from p. 126.  I also referred to Wikipedia articles on Edward Snowden, MAINWAY (the NSA call database), and Sergey Brin.

Monday, November 09, 2015

Did Exxon Mobil Lie About Climate Change?

The energy giant Exxon Mobil is being investigated by New York State's attorney general, according to a report last week in the New York Times.  The issue appears to be whether Exxon properly stated the risks of climate change to its future business in light of its own internal scientific climate research.  Critics of the company say it has engaged in deception similar to what tobacco companies did in the 1960s and 1970s, when cigarette makers funded research that cast doubt on the health dangers of tobacco use even as they knew the grim truth and concealed it.  For its part, Exxon's spokesman Kenneth P. Cohen said, "We unequivocally reject the allegations that Exxon Mobil has suppressed climate change research." 

Under a law called the Martin Act, the New York attorney general is charged with the investigation of financial fraud, and can issue subpoenas for records and documents relating to such an investigation.  Exxon got a subpoena along these lines last week, and is in the process of responding to it. 

Let's step back a moment and examine how this case relates to the well-known practices of tobacco companies, which attacked the credibility of research showing that smoking and chewing their products were hazardous to one's health.

The history of how Big Tobacco muddied the research waters is pretty clear.  After the tobacco firms fought what became a rear-guard action against the mounting evidence that smoking kills, both state and U. S. federal attorneys general sued large companies such as R. J. Reynolds beginning in the 1990s, claiming that they deceived consumers about the dangers of smoking even as the companies' own internal research revealed the hazards involved.  These successful suits cost the companies billions of dollars in fines and continuing payments into state-controlled public-health funds. 

One of my high-school teachers loved questions that began, "Compare and contrast. . ." so let's do that here.  What are the comparisons and the contrasts between what Big Tobacco did, and what Big Oil is supposedly doing?

First, the similarities.  Exxon may have funded some researchers at times who opposed the general scientific consensus about climate change.  This consensus has itself been somewhat of a moving target as more data, more sophisticated computer models, and a better understanding of climatology in general have contributed to knowledge of the problem.  So for Exxon to be liable in the way that, say, R. J. Reynolds was liable, someone would have to show that (a) Exxon was publicly saying climate change isn't going to bother us, and (b) Exxon privately knew pretty much the opposite. 

There is also the question of harm.  It's pretty easy for a lawyer to argue that his late client died from smoking, which the client might have ceased and desisted from doing had he not been lied to by the maker of his cigarettes.  If some of the more dire forecasts of the climate-change prophets come to pass, we will have widespread death and destruction from that too.  And to the extent that companies like Exxon were responsible for it, they could conceivably be held liable in some way.

Now for the contrasts.  Apparently the worst thing that the New York attorney general thinks Exxon has done is not murder or criminal negligence, but financial fraud.  Fraud generally involves the premeditated intent to trick or deceive someone to your own advantage.  The idea here seems to be that if (and that is a big "if") laws are passed or other factors intervene to make it harder for Exxon to profit from fossil fuels because of climate change, and Exxon knew this was likely to happen, and Exxon told its investors otherwise, then it has tricked its investors. 

Whatever you want to call this alleged action, it's a far cry from what blatant deceivers like Bernie Madoff did.  Madoff, you may recall, ran a Ponzi scheme and kept one set of books for public consumption and another set for his secret fraudulent operations.  While some European countries have begun to restrict fossil-fuel use in various ways—high fossil-fuel taxes, for example—their reasons for doing so often go beyond the threat of climate change.  And in the U. S., to the frustration of environmentalists, very few meaningful climate-change-inspired restrictions have been placed so far on the consumption of oil, gas, and coal.  This may change in the future, but it's hard to sue somebody for something that hasn't happened yet.  Oil prices have recently tanked (so to speak), but the reasons have little or nothing to do with climate-change laws and a lot more to do with higher domestic production and international politics. 

Another question is whether an engineering-intensive firm that operates legally to fulfill a widespread public need, as energy companies do, can be held liable for the free consumption decisions of millions of its customers.  Again, we come to the question of who has been harmed.  While lying is bad, if we find out that Exxon made some forecasts of future climate change that turn out to be wrong, that's not exactly the same as lying.  Overall, this investigation seems to be based on speculation about future harms more than it is a realistic assessment of how investors have been harmed up to now.  And such a thing will be hard to put across to a reasonable jury, assuming the case gets that far.

Of course, this may be the beginning of what some might view as a government shakedown.  Rather than face the prospect of spending years or decades in court, Exxon may choose to settle out of court by paying fines or changing its way of business to make the New York attorney general happy.  Such proceedings always smack of blackmail to a greater or lesser degree, although sometimes they are the least bad alternative if a genuine wrong has occurred.

But to find out if that is the case, we'll just have to wait.  Wait to see what the attorney general of New York does next; wait to see if states and countries pass much more restrictive legislation inspired by climate change; and wait to see how much hotter it gets.  It may be a long wait for any or all of these things, so stay tuned.

Sources:  The New York Times article "Exxon Mobil Investigated for Possible Climate Change Lies by New York Attorney General" appeared on Nov. 6, 2015 at  I also referred to the Wikipedia article "Tobacco politics."  I blogged on a related matter pertaining to climate change and university-funded research in "A Chunk of (Climate) Change", posted on Mar. 2, 2015.

Monday, November 02, 2015

Arms Control for Cyberwarfare Weapons

Say you're a high-tech software security firm in the U. S. that sells a spyware application that lets your corporate customers monitor all the encrypted traffic going through their servers.  A benign reason that a customer of yours wants to buy your software is to catch encrypted malware that might otherwise mess up the customer's system operations.  But that's not the only way your software product could be used.

Say a repressive government wants to ferret out members of an opposition group who are trying to organize a grass-roots protest campaign.  The protesters use encrypted Internet communications to do so, and using the software your company makes, the repressive government finds out who the protest ringleaders are, rounds them up, and decapitates them all at sunrise.  Should you have sold your software to that government?

Quandaries like these are at the heart of a dispute between the U. S. Department of Commerce and Silicon Valley computer-security-software firms.   According to a recent New York Times report, back in May the Commerce Department proposed new export restrictions on a wide variety of security software.  Following howls of protest by software firms, the proposal was shelved, but the Obama administration has continued to prosecute isolated cases of software showing up in Iran or Syria, which are the only two countries that are currently subject to export bans specifically targeted at surveillance technology. 

Unfortunately, such bans are not that difficult to evade, given enough resources.  Modern-day gun runners (code runners?) can have the stuff sent to dummy firms in non-banned countries, and then turn around and send it from there through a few more countries to its true banned destination.  According to the report, that is exactly what a couple of alleged smugglers from the United Arab Emirates did to get products from computer-security firm Blue Coat Systems to Syria, where the use of that software by the Syrian government was detected and published by a Canadian firm, which told the U. S. Commerce Department about it. 

A number of my recent blogs have dealt with aspects of cyberwarfare, and the increasing arms trade in software such as Blue Coat's products is one more sign that warfare and its associated activities such as spying are moving rapidly into the cyber arena.  Trade restrictions on conventional arms are a familiar part of the diplomatic landscape, but deciding which physical weapons to keep to ourselves is easier than dealing with certain kinds of security software.  A nuclear weapon is good for only one thing, for instance, but the type of security system that companies like Blue Coat sell can be used for either good or bad reasons, as my example shows. 

The current compromise restricts direct sales of such software to Iran and Syria, but as we've seen, it's pretty easy to evade even those restrictions.  The fact of the matter is that small countries can buy pretty much anything they want, given enough time and determination, and larger countries such as China have enough resources to develop their own spyware.

So it looks like the most realistic position these days is to realize that one way or another, bad governments (whatever your criterion of "bad" is) will probably be able to spy on Internet traffic and do other things online that we would wish they couldn't do.  In such an environment, what are the prospects for free speech, freedom of association, and other democratic activities that presume citizens are not under the constant baleful glare of Big Brother, whose cybernetic eye never closes?

A little historical perspective is in order here.  Things like the U. S. Constitution's Bill of Rights are fairly recent innovations.  For most of recorded history, nobody except maybe a few favored upper-class rich people had anything resembling what we consider to be legal rights.  Even in peacetime, if you were a peasant or a slave, and the king or some rich guy came along and took away your donkey, your land, or even your life, there wasn't much you could do about it.  In the West, the rise of Enlightenment ideas about universal rights took centuries to develop, and it was by no means clear, when the founders of the United States wrote them into the Constitution, that the experiment would work.  But work it did, and recognition of these rights achieved a high point in 1948 when the United Nations adopted its Universal Declaration of Human Rights, which includes rights to privacy and free speech.

As the old saying goes, the price of liberty is eternal vigilance.  And lately, even in the U. S., we have seen actions at the highest levels of government that smack of the suppression of free speech.  I have not read The Silencing:  How the Left Is Killing Free Speech, a book by conservative commentator Kirsten Powers, but reports of the book cite incidents in which the Obama White House banned conservative Fox News correspondents from certain press briefings.  These are isolated incidents, but they indicate that at least in some circles, the fundamental right of free speech has lost some of its appeal when other urgent issues come to the fore.

It's a far cry from disinviting reporters to spying on everyone's Internet traffic, but the idea is the same:  control of what people are saying to other people.  The Silicon Valley contingent has a lot to say about open-source software and the idea that "information wants to be free."  But the fact that repressive governments can use computer-security products for suppression of freedom is a grim reminder that engineers have to use their imaginations when they make new tools.  Imagining how you, a presumably nice guy or gal, would use your newly invented computer-security product is one thing.  But you should also try the experiment of thinking about how some evil genius could use your product—and then maybe try to do something that would make it harder for the bad guys to succeed.

Sources:  The New York Times report by James Risen, "Battle Heats Up over Exports of Surveillance Technology" appeared on Oct. 31, 2015 online at  I also referred to a discussion of Kirsten Powers' book at RealClearPolitics,, and the U. N.'s Universal Declaration of Human Rights at 

Monday, October 26, 2015

Kids and Smartphones: Does the Good Outweigh the Bad?

If you have children, do you regulate their use of smartphones?  In particular, what do you do about smartphones when you sit down for a meal together?  These questions came to mind when my wife told me about a little episode she'd witnessed in a restaurant one evening last week. 

The mother and father sat on either side of the daughter, who was perhaps 11.  Shortly after they got there, all three got out their smartphones, and each person escaped into a different electronic world.  The parents actually put down their phones and started a conversation after a while over the girl's head, but she held onto her phone till the food came, and after she was finished eating she picked it up again. 

In the lobby of the restaurant we'd passed a lady who was singing pop tunes and accompanying herself on the accordion.  (This is Canyon Lake, Texas, you understand, not New York City.)  Later in the evening, the singer picked up a hand puppet and went around entertaining guests who had brought along their children.  According to my wife, the puppet struck out with the smartphone girl, who looked up uncomprehendingly and then went back to her phone.  Evidently, live entertainment can't compete with electronic media, at least in that particular girl's world. 

When a new technology gets adopted as widely and rapidly as smartphones have, there is always at least a theoretical concern that some long-term effect that hasn't shown up in pilot marketing tests will pop up later to surprise and harm us.  The worst historical case of this kind that I can think of is the thalidomide crisis of the 1960s. 

Thalidomide was a drug introduced in West Germany in 1957 and marketed as, among other things, a treatment for morning sickness in pregnant women.  While it appeared to help, it took several years for doctors to figure out that if a woman took it early enough in her pregnancy, thalidomide caused severe birth defects:  deformed or missing arms and legs, facial defects, and other disabling problems.  Although thalidomide is still available and prescribed for certain conditions such as cancer, the medical community knows to avoid any possibility of its use by women who could be pregnant. 

If something as bad as the thalidomide episode was going to happen with kids using smartphones, I think we'd probably know by now.  Nearly two billion such devices are out there, and a survey in Britain showed that more than half of eleven-year-olds use their own smartphone.  But not every technological problem can be studied with surveys and statistics.

What my wife witnessed in that restaurant was the clash of tradition and something else—"modernity" isn't the right word, nor is "technology."  One way to put it was expressed by a friend of mine, Bruce Hunt, who is a historian of technology.  We talk a lot about "cyberspace" without always knowing quite what we mean by it.  His definition of cyberspace is this:  "Cyberspace is where you are when you're on the phone."  At the time, he meant a traditional POTS phone (Plain Old Telephone Service), but saying that all three members of the family were in cyberspace before the food arrived is a pretty accurate statement.  So it was a clash between traditional space and activities, and whatever each individual happened to be doing in cyberspace.

By traditional, I mean nothing more than activities that have gone on more or less the same for a long time.  There have been restaurants and inns and families eating in them as long as there have been civilizations, I suppose.  And the same goes for live entertainers, going all the way back to cave men who put on masks and danced around the campfire.  Just because a thing has been done a long time doesn't mean it's necessarily good—it's just durable. 

When it comes to a family eating meals together, though, you can find studies that correlate all sorts of good things with families who eat together at least five nights a week.  Their kids are less likely to get involved in drug and alcohol use, they make better grades, and they feel closer to their parents.  I don't know whether the studies were fine-grained enough to notice how often smartphones were brought to the table, but it doesn't take a Ph. D. to tell that a family meal without smartphones is going to allow more opportunities for interpersonal interaction than one with them. 

The age at which a child should gain access to a smartphone is a question each parent has to decide.  Not having children myself, I have never had to make that decision, but I hear that it's a hard one to make.  Like driving, watching R-rated movies, and drinking alcohol, using smartphones is something that adults are free to do, and it's a judgment call on the part of parents as to when a child is mature enough to use one responsibly. 

But the little drama in the restaurant made me think that the family that brings their smartphones to the dinner table is missing something valuable that has no corporate-sponsored PR in its favor, no guaranteed payoff, and no particular immediate harm that results when it goes missing.  It's the chance to be with other people, in the time-honored sense of devoting one's embodied attention to the experience of the real, actual bodily presence of other human beings.  The very name "media" means "that which goes between," and anything between us can separate us as well as bring us together. 

So I'm not going to issue any blanket condemnations of smartphones at the dinner table.  But I would ask parents to consider first how you use your smartphone and what kind of example you are setting for your children to follow.  Do you let it interrupt quality time with your spouse or children?  Or do you put it away at specific regular times, and devote your full attention to other members of your family?  Children have a powerful built-in instinct that says, "Whatever mommy or daddy does is okay," and if you tell your son to put away his smartphone at the dinner table and then whip yours out when it goes off, you've just wasted your breath.  The kids won't always be young, and you won't always be around to talk with them.  Do it while you have the chance.

Sources:  I referred to an article on the website PsychCentral by Amy Williams entitled, "How Do Smartphones Affect Childhood Psychology?" at, and a rather touching essay on the benefits of family meals by Cody C. Delistraty in The Atlantic online edition for July 18, 2014 at, as well as the Wikipedia article on thalidomide.

Monday, October 19, 2015

Will ISIS Hack the U. S. Power Grid?

In a meeting of electric-power providers last week, U. S. law enforcement officials revealed that Islamic State operatives have tried to hack into parts of the American power grid, so far without success.  But the mere fact that they're trying has some grim implications.

One of the officials, Caitlin Durkovich, is assistant secretary for infrastructure protection at the U. S. Department of Homeland Security.  She refused to provide specific details of the attacks, but an FBI official said that so far the attacks have been characterized by "low capability." 

For some time now, it's been obvious that cyberwarfare may play an increasing role in future conflicts.  Perhaps the most significant successful attack up to now was mounted by a team of U. S. and Israeli experts in what came to be known as Stuxnet.  The attack was aimed at Iran's nuclear-material centrifuges and allegedly disabled many of them in 2010 before operators figured out what was going on. 

That attack was aimed at one specific facility, and the attackers had access to abundant information on the particular equipment involved.  Doing something similar to a significant part of the U. S. power grid would be a harder proposition for several reasons.

A Stuxnet-style attack on one generator, or even an entire plant, might damage that plant and take it out of commission temporarily.  But the power grid is designed to deal with just such occurrences without major disruptions.  At any given time, a certain number of generators are offline for repairs or maintenance, and every so often a problem will cause one or more generators to trip out unexpectedly.  Unless the loss of capacity is very large or happens at a critical high-demand time (say on the hottest day of summer), the system absorbs the loss and reroutes power from other sources to make up the difference, often with no noticeable interruption to customers. 
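A toy calculation shows why the loss of a single unit is usually survivable.  The numbers below are invented, but the logic is the standard "can the rest cover the load?" check, often called the N-1 criterion:

```python
# Toy reserve-margin check: if one generator trips, can the others still
# cover demand?  All numbers are invented for illustration.

online_capacity_mw = [800, 650, 500, 400, 300, 250]   # units currently online
demand_mw = 2000                                      # current system load

def survives_loss_of(unit_index):
    """True if the remaining units can still meet demand."""
    remaining = sum(c for i, c in enumerate(online_capacity_mw) if i != unit_index)
    return remaining >= demand_mw

# The classic worst single contingency: losing the largest unit online.
largest = online_capacity_mw.index(max(online_capacity_mw))
print(survives_loss_of(largest))   # True: 2100 MW remaining covers 2000 MW of load
```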

So in order to produce a large-scale blackout that would do some good from a terrorism point of view, a different approach would be needed. 

The most vulnerable parts of the power grid from a hacking point of view are the network control systems themselves—the SCADA (supervisory control and data acquisition) devices and communications systems that tell system operators (both human and electronic) what the status of the grid is, and open and close the big high-voltage switches that route the energy.  A simultaneous order to a lot of circuit breakers to open up all across a large grid would throw the whole system into chaos, tripping other automatic breakers everywhere and necessitating a total shutdown and resynchronization, which could take hours or days—even longer if widespread mechanical damage occurred, which is possible. 

But doing that sort of attack would be very hard.  I am no power-grid expert, but I do know that long before the Internet came along, power utilities constructed their own special-purpose communication networks that carried the switch-command instructions, often by means of microwave relays or dedicated cables.  Originally, these specialized networks were entirely independent of the Internet because there was no such thing yet, and so were perfectly secure from Internet-based hacking.  Utilities tend not to throw anything away that still works, so my suspicion is that a good bit of network-control data still gets carried on these physically isolated communications links.  For a set of hackers halfway around the world to get into those specialized communications systems would require either amazing hacking abilities, or inside information, or most likely both. 

This is not to say that it's impossible.  But the job is orders of magnitude harder than disabling one uniform set of machines in one location.  As reports on the power-grid hacking attempts pointed out, the U. S. grid is a hodge-podge of widely different equipment, systems, protocols, hardware, and software.  A hack that might take out a power plant in Hackensack would probably be useless on a plant in Houston.  So to mount a coordinated attack that would create a politically significant amount of trouble would be a monumental undertaking—so hard that evil guys with limited resources may decide that some other type of troublemaking would be a better use of their time.

Does that mean we can just sit back and enjoy the fact that the Islamic State hackers don't know what they're doing?  Not necessarily.  Hackers come in all flavors, and as the Internet has played an increasing role in the day-to-day operation of electric utilities, those same firms have had to deal with the accompanying hazards of malevolent cyberattacks from who knows where.  So the fact that Islamic State hackers are going after the power grid is not exactly a surprise.

While the recent revelations have led to some calls for increased government oversight of cybersecurity for the power grid, the industry so far seems to have done a fairly good job of policing itself.  A report in USA Today back in March of 2015 said that the North American Electric Reliability Corporation (NERC), the non-profit, industry-sponsored enforcer of security standards, has slacked off on the number of penalties and fines it has assessed on its members in recent years.  But the president of NERC says this doesn't necessarily mean that his organization is getting lazy—it could just as well be that utilities are following the rules better.

Rules or no rules, the danger that foreign and domestic terrorist organizations could cause massive power blackouts in the U. S. is real.  And constant vigilance on the part of the utility operators is needed to prevent these attacks from getting anywhere.  Fortunately, the present structure of the grid makes it a particularly difficult target.  But that doesn't mean it couldn't ever happen.

Sources:  I referred to reports of the disclosures about cyberattacks on utility infrastructures carried by CNN on Oct. 15, 2015 at, and by the Washington Examiner at  USA Today carried an in-depth study of the issue by Steve Reilly on Mar. 24, 2015 at I blogged on Stuxnet on July 24, 2011 and July 2, 2012.

Monday, October 12, 2015

Can Technology Stop Mass Shootings?

The mass shooting at Umpqua Community College on Oct. 1 brought a violent end to the lives of nine victims (eight students and one professor), besides the death of the perpetrator, Christopher Harper-Mercer, at the hands of police called to the scene.  This tragedy has inspired a predictable chorus of editorials calling for something to be done about such things. 

Two voices heard on opposite sides of the political fence are E. J. Dionne, based at the Washington Post, and Charles Krauthammer, a familiar face on Fox TV.  In a recent column, Dionne decries the standard knee-jerk responses of his fellow liberals who call for gun control laws that they know won't pass Congress.  He rightly regards this as a futile gesture, especially now that Republicans control both houses of Congress and the National Rifle Association's influence is strengthened thereby.  Dionne's idea is to focus on gunmakers, who sell almost half their output to governments of various forms (federal, state, and local) and who might start making safer guns if that segment of the market demanded them. 

Safer how?  Dionne mentions two technologies that might mitigate unlawful gun use:  smart guns that can be used only by their owner, and microstamping of guns and bullets.  Several gunmakers have marketed various versions of smart guns, which typically use some add-on such as a magnetic ring or RFID chip worn by the owner to allow use of the gun.  These things are not popular with the gun lobby, and a sea change in attitudes would have to happen for any one of the smart-gun technologies to become common.  Microstamping is a patented technique of engraving a tiny serial number on the firing pin of a gun, which is then stamped into the cartridge when the gun fires.  If the cartridge is recovered, it can be matched with the microstamped gun.  Although California passed a law requiring microstamping of semi-automatic guns, it specifically exempted law-enforcement weapons (there goes the government tie-in), and two gun manufacturers have quit selling semi-automatic weapons in that state, citing the microstamping requirement as a major reason. 

The main weakness of Dionne's technological fixes has nothing to do with the virtues or flaws of any given new technology.  As Charles Krauthammer pointed out in his column last week, even if every new gun sold were smart enough to shoot only at truly bad guys, there were some 350 million guns in the U. S. as of last year (more than one for every man, woman, and child), and any gun law that stood a real chance of reducing mass shootings would have to round up the ones already out there.  Krauthammer cites Australia's compulsory buy-back program as an example, but for a number of reasons it would never work in the U. S.  To stop such a program here, all that gun proponents would need to do is cite the Second Amendment, which the U. S. Supreme Court has interpreted as granting citizens the right to bear arms.

And that gets to the tradeoff involved in this situation.  Australia decided that the risk of gun-related crime was so great that they sacrificed the freedom of average citizens to bear arms, by and large.  In this country, the right of private citizens to own guns is valued more highly, and the result is that we have to run the risk of unstable individuals now and then getting hold of a gun and shooting lots of people.

Is that problem any worse now than it has been?  Every mass shooting is a unique tragedy, but if we look at them in the same light as other unlikely but spectacularly awful ways to die, such as airplane crashes, the problem takes on a different look.  According to the Stanford Mass Shootings in America Database, a comprehensive but not exhaustive study of mass shootings in the U. S. since 1966, 1011 people have died in mass shootings in the last 49 years.  To put that into perspective, more than 1300 passengers have died in commercial airline crashes in the U. S. since 1996 alone, although many of those fatalities happened in the 9/11 terrorist attacks.  Graphing the Stanford data versus time produces a curve that has no clear upward or downward trend—just noticeable spikes that don't seem to be clustering toward the recent past. 
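For anyone who wants to check that claim, the exercise is nothing fancier than totaling fatalities by year and fitting a line.  The sketch below uses random placeholder numbers, not the actual Stanford figures.

```python
# Sketch of the trend check described above: total fatalities per year,
# then fit a straight line.  The data are random placeholders, NOT the
# actual Stanford Mass Shootings in America Database.
import numpy as np

years = np.arange(1966, 2015)
fatalities_per_year = np.random.default_rng(0).poisson(20, size=years.size)  # placeholder

slope, intercept = np.polyfit(years, fatalities_per_year, 1)
print(f"Fitted trend: {slope:+.2f} change in annual deaths per year")
# A slope near zero is what "no clear upward or downward trend" looks like.
```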

Maybe it's coldhearted to view these things as statistics, but one way to view this is that as a society, we have decided to tolerate a certain risk of a small number of unstable people getting hold of a gun as the price we pay for the freedom of the vast majority of well-behaved, law-abiding gun owners to keep their firearms.  Krauthammer speculates as to how you could stop the isolated mass shooters, but most of them, prior to their flame-outs, never do anything illegal enough to warrant taking their guns away before they come out shooting.  What has emerged about Christopher Harper-Mercer's background has eerie resonances with that of another mass shooter, Adam Lanza, who walked into Sandy Hook Elementary School in Newtown, Connecticut and killed 26 people after shooting his mother, and then killed himself on Dec. 14, 2012.  Both were loners with absent fathers whose mothers struggled to socialize their autistic-spectrum sons.  But if having minor autistic tendencies is made a crime, we'll have to lock up a lot of engineers.

These matters come close to home here at my university, just down the road from Austin where Charles Whitman inaugurated the modern era of mass shootings in 1966 from the famed University of Texas tower.  In its most recent session, the Texas legislature passed a law making it legal for qualified concealed-weapons owners to carry their firearms into classrooms and other buildings at public and private universities.  The idea seems to be that if a nut case suspects that somebody besides himself may have a gun in the room, he'll at least hesitate before he starts anything.  Even if he does, maybe dead-eye Annie there in the back row will take him out before he gets too far. 

Needless to say, I don't look forward to the Shootout at the Mitte Engineering Building taking place in my classroom.  Fortunately, you have to be 21 to get a concealed-carry permit, and so only a small minority of our students would qualify. 

We can count on oceanic news coverage of any mass shooting, but it's hard to keep a sense of perspective while the media rattles on.  Unless the great majority of gun owners in the U. S. decide it's just not a good idea to have a gun around, those 350 million weapons are not going to go away any time soon.  And anybody without a serious criminal record (and even some with one) can still get one of them.  Current technological fixes for the problem simply don't seem to have the political traction to get very far.  Maybe smart, unobtrusive metal detectors with RFID chips for people authorized to carry concealed weapons could work, but that would be a lot of expense for an unlikely problem.  In the meantime, I'm going to act like nobody in my classroom has a gun.  But all the same, I'm glad my podium is close to the exit.

Sources:  E. J. Dionne's column "Let's focus on gun makers and smart-gun technology" was carried by the Austin American-Statesman on Oct. 9, 2015.  Charles Krauthammer's "Massacre begets charade with confiscation a no-go" appeared in the same publication on Oct. 10.  The Stanford Mass Shootings in America Database is available to anyone (after a check-in procedure) at  I also referred to Wikipedia articles on smart guns, microstamping, and airline fatality statistics.