Monday, December 28, 2015

The Ironies of Carbon Capture Technology

In a recent article in Scientific American, reporter David Biello summarizes the current state of carbon-capture technology, and it's not good.  If a negative view of carbon capture appeared in some obscure climate-change-denier publication, it could be dismissed as biased reporting.  But the elite-establishment Scientific American has been at the forefront of the campaign against climate change, and so for such an organ to publish such bad news means that we would do well to take it seriously.

The basic problem is that capturing a gas like carbon dioxide, compressing it, and injecting it deep enough underground that it won't come out again for a few thousand years is not cheap.  And the worst fossil-fuel offenders—coal-fired power plants—make literally tons of the stuff every second.  It would be hard enough to transport and bury tons of solid material (and coal ash is a nasty enough waste product), but we're talking about tons of a gas, not a solid.  Just the energy required to compress it is huge, and the auxiliary operations (cleaning the gas, drilling wells, finding suitable geologic structures to hold it underground) add millions to billions of dollars to the cost of an average-size coal-fired plant.  Worst of all, the goal for which all this effort is expended—slowing carbon-dioxide emissions—is a politically tinged goal whose merit is doubted by many, and which is being ignored wholesale by some of the world's worst offenders in this regard, namely China and India.
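How big is the tonnage problem?  Here is a quick back-of-the-envelope sketch, using round assumed numbers throughout (the plant size, efficiency, coal heating value, and carbon fraction are all generic assumptions, not data for any particular plant):

```python
# Rough estimate of the CO2 output rate of one large coal-fired plant.
# Every figure below is an assumed round number, not measured data.

PLANT_OUTPUT_W = 1.0e9        # assume a 1-gigawatt (electric) plant
EFFICIENCY = 0.35             # assume ~35% thermal-to-electric efficiency
COAL_ENERGY_J_PER_KG = 25e6   # assume ~25 MJ/kg heating value for coal
CARBON_FRACTION = 0.7         # assume coal is ~70% carbon by mass
CO2_PER_C = 44.0 / 12.0       # molar-mass ratio of CO2 to carbon

thermal_power = PLANT_OUTPUT_W / EFFICIENCY              # watts of heat needed
coal_burn_rate = thermal_power / COAL_ENERGY_J_PER_KG    # kg of coal per second
co2_rate = coal_burn_rate * CARBON_FRACTION * CO2_PER_C  # kg of CO2 per second

print(f"Coal burned: {coal_burn_rate:.0f} kg/s")
print(f"CO2 made:    {co2_rate:.0f} kg/s")
```

Under these assumptions, a single gigawatt-class plant makes on the order of 300 kg of carbon dioxide per second—a ton every few seconds—so a nation's fleet of such plants easily produces many tons per second.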

However, shrinking the U. S. carbon footprint is regarded by many as a noble cause, and a few years ago Mississippi Power got on the bandwagon by designing a new lignite-burning power plant to capture its own carbon-dioxide emissions and send them into a nearby oil field, where they push out oil that is, uh, eventually burned to make more carbon dioxide.  Here is the first irony.  Evidently, one of the few large-scale customers for large quantities of carbon dioxide is the oil industry, which sends it underground (good) to make more oil come to the surface (not so good).

The second irony is an economic one.  It is the punishment meted out by economics to the few good corporate citizens in a situation where most citizens are not being so good.

Currently in the U. S., there is no uniform, rational, and legally enacted set of rules regarding carbon-capture requirements.  So far, the citizenry as a whole has not risen up and said, "In our constitutional role as the supreme power in the U. S., we collectively decide that capturing carbon dioxide is worth X billion a year to us, and we want it done pronto."  Instead, there is a patchwork of voluntary feel-good individual efforts, showcase projects here and there, and large-scale operations such as the one Mississippi Power got permission to do from the state's utility commission, as long as they didn't spend more than $2.88 billion on the whole thing.

So far, it's cost $6.3 billion, and it's still not finished.  This means big problems for the utility and its customers, in the form of future rate hikes.  Capturing carbon is not a profitable enterprise.  The notion of carbon-trading laws would have made it that way, sort of, but for political reasons it never got off the ground in the U. S., and unless we get a world government with enforcement powers, such an idea will probably never succeed on an international level.  So whatever carbon capturing is going to be done will be done not because it is profitable, but for some other reason.

The embarrassment of Mississippi Power's struggling carbon-capture plant is only one example of the larger irony, which is that we don't know how much it is appropriate to spend on carbon capture, because we don't know exactly, or even approximately, what it will cost if we don't, and who will pay.  Probably the poorer among us will pay the most, but nobody can be sure.  (There's a lot of very expensive real estate on coasts around the world, and sometimes I wonder if that influences the wealthy class to support anti-global-warming efforts as much as they do.)

The time factor is a problem in all this as well.  Nearly all forecasts of global-warming tragedies are long-term things with timelines measured in many decades.  That is good in the sense that we have a while to figure out what to do.  But in terms of making economic decisions that balance profit against loss—which is what all private firms have to do—such long-run and widely distributed problems are chimerical and can't be captured by any reasonable accounting system.  Try to put depreciation on an asset you plan to own from 2050 to 2100 on your income-tax return, and see how far you get. 

So the only alternative in many places for large-scale carbon capture to happen is by government fiat.  A dictatorial government such as China's could do this tomorrow if it wanted to, but as the recent Paris climate-accord meeting showed, it doesn't want to—not for a long time yet, anyway.  In a nominal democracy such as the United States, the political will is strong in some quarters, but the unilateral non-democratic way the present administration has been trying to implement carbon limits has run into difficulties, to say the least.

My sympathies to residents of Mississippi who face the prospect of higher electric bills when, and if, their carbon-capturing power plant goes online.  Whatever else the project has done, it has revealed the problems involved in building a hugely expensive engineering project for a payoff that few of those living today may ever see.

Sources:  The article "The Carbon Capture Fallacy" by David Biello appeared on pp. 58-65 of the January 2016 edition of Scientific American.

Monday, December 21, 2015

California Puts the Brakes on Autonomous Vehicles

The California Department of Motor Vehicles (CDMV) has issued proposed regulations for self-driving cars (also known as autonomous vehicles, or AVs), and what they are planning isn't all good news, at least to hopeful AV developers such as Google.  Last Wednesday, the CDMV released a draft version of rules that would apply to AVs used not for experiments and tests (these have been allowed for some time already), but by paying customers.  They are pretty restrictive.

For one thing, the CDMV doesn't want anybody selling AVs yet—only leasing.  For another thing, a specially licensed person has to be in the vehicle whenever it's operating, and able to take over manual control at any time.  These restrictions rule out one of the most commercially promising potential applications for AVs: driverless delivery vehicles.  In its defense, the CDMV says that more testing is needed before such vehicles can be let loose on California freeways.  And having driven on California freeways myself, I have to say they may have a point.

You can't blame the CDMV for being cautious.  So far, the testing Google and automakers such as Mercedes and Tesla Motors have done has not turned up any show-stopper problems with autonomous vehicle systems.  But effects that don't show up in small-scale tests can rear their ugly heads later.  I'm not a traffic engineer, but there may be new types of problems that don't arise until the percentage of AVs on the road rises above a certain threshold.  Despite all the manufacturers' efforts, AVs will act differently than human-driven cars, and depending on the programming, sensor layout, and other factors, there may be some unknown interactions, perhaps between cars of different makes, that will lead to weird and possibly hazardous problems that nobody could have suspected in advance.  We simply don't know.  So going slow in the largest automotive market of any state is perhaps a good thing.

On the other hand, history shows that government restrictions on new technology can quickly become absurd and even obstruct progress.  Historians of the automobile are familiar with the "red flag laws" that the English Parliament enacted in the latter part of the 1800s.  A typical law of this type required any "powered locomotive" on a public road to be accompanied by a person walking at least sixty yards (55 m) ahead of the vehicle, holding a red flag to be used as a signal to the operator to halt, and also to warn passersby of the machine's approach.  Despite rumors that these laws were passed specifically to slow down the spread of self-powered passenger vehicles, they were actually aimed at steam tractors, which were mobile steam engines used to operate agricultural machinery.  Steam tractors were developed as early as the 1860s, and the larger ones could do considerable damage to the roads of the day and frighten horses, so the regulations were appropriate at the time they were first passed.

However, when the newer, smaller passenger automobiles of the 1890s came along, the 4-miles-per-hour speed limits and other restrictions that were appropriate for steam tractors made little sense for autos, and it took some time for popular demand and pressure from automakers to change the red-flag laws.  Something similar happened in a few U. S. states, but by 1900 most red-flag laws had been repealed or transformed into regulations more suitable for internal-combustion cars.

There are a couple of lessons here for what could happen next with regard to AV regulations.

First, we should expect some overreacting on the part of government regulators.  No regulator I know of ever got fired for being too vigilant.  Unfortunately, very few regulators get fired for not being vigilant enough, either, but the tendency of a bureaucracy whose mission is to regulate an industry is to do more than necessary rather than less, up to the limit of the resources the regulator has at hand.  Some commentators have said that what's bad for California is going to be good for Texas, which has taken a much more laissez-faire attitude toward AV experiments by Google and others.  So we can thank what remnants of federalism remain in the U. S. for the fact that if one state passes excessively restrictive laws on an activity, companies can simply pull up stakes and go to a more friendly state.

The second lesson is more subtle, but has deeper and broader implications.  It has to do with the gradual but pervasive spread of what is called "administrative law."  To explain this problem, we need another historical detour.

Those familiar with the U. S. Constitution know that the powers of the federal government were purposely divided into three parts:  the legislative branch for making the laws on behalf of the people it represents, the executive branch for enforcing the laws, and the judicial branch for judging whether citizens have violated the laws.  This was done in reaction to the so-called "prerogative" that the English kings of the 1600s and earlier liked to exercise.  In those bad old days, a king could haul off and make a law (legislative power), have his royal officers drag a subject in off the street (executive power), and pass judgment on whether the guy had broken the King's law (judicial power).  Combining these distinct powers in one person was a great way to encourage despotism and tyranny.  The authors of the U. S. Constitution had enough of that, thank you, so they strictly divided the operations of government into three distinct branches corresponding to the three basic functions of government, and made sure that new laws could be originated only by representatives elected by the people.

But over the last century or so, the dam holding back government by prerogative has sprung lots of leaks in the form of administrative laws.  Nobody elects anyone who serves in the California Department of Motor Vehicles.  It's just a bunch of bureaucrats who can make up regulations (legislate), pronounce penalties for violation of those regulations (execute), and in some cases even decide on whether a party is guilty or innocent of violating the regulations (judge).  Yes, the California Senate, a representative body, asked the CDMV to do this, but in turning over the power to make laws to the CDMV, the Senate abdicated its legislative function and handed it over to a non-representative body.

This is an oversimplified version of a huge and pervasive issue, but once you understand the nature of the problem, you can see versions of it everywhere, especially in the alphabet soup of federal agencies:  OSHA, FDA, FCC, etc.  At least in the case of the red-flag laws, it was Parliament itself which passed the laws, and which modified them in response to public demand when the time came.  But if the voters of California don't like what the CDMV does, they don't have a lot of options.

Perhaps the streets of Austin will see lots of consumer-owned AVs before you can find any in Los Angeles.  That's fine with me, as long as they drive at least as well as the average Texas driver.  And that shouldn't be too hard.

Sources:  I learned about the proposed CDMV regulations from an article by Kevin Williamson "The Long Road to Self-Driving Cars" in National Review at  I also referred to an article in Fortune's online edition at and Wired at  A summary of the proposed CDMV regulations can be found at  I also referred to the Wikipedia article "Locomotive Acts."  I am currently reading law scholar Philip Hamburger's lengthy tome Is Administrative Law Unlawful? (Univ. of Chicago Press, 2014), which contains hundreds of arguments against administrative law.

Monday, December 14, 2015

Collecting Thoughts, Ethical and Otherwise

The world of publishing is changing rapidly as electronic media such as ebooks open up new distribution channels that allow authors to bypass the traditional gatekeepers of hard-copy publishing houses.  One effect of this is to allow writers with small audiences to consider publishing their own books without having to sink thousands of dollars into a vanity press run of a thousand copies, for example.  Instead, these days you can spend some time learning how various ebook-publishing software and distributors work, and do the whole thing yourself (or at most, with the help of an artist for covers).  That is just what I've done with a collection of many of the most popular articles in this blog, and the result is Ethical and Otherwise: Engineering In the Headlines, the cover of which you can see in the sidebar to the right.

I apologize for taking over the blog this week for self-promotion, but I promise not to do it more than once per book.  So here goes.

Ethical and Otherwise has a total of 46 articles culled from the nearly ten years that I've been writing this blog.  They were selected largely on the basis of page views, and so to that extent you, the reader, have played an essential role in its production. 

It's organized into three broad sections:  "Tragedies Large and Small", "Cautionary Tales", and "The Engineering Profession."

The "Tragedies" section is the largest and describes disasters of various types:  "Earth, Air, Wind, and Fire" (natural or nature-assisted disasters); "Planes, Trains, and Automobiles" (transportation accidents); "Mines, Wells, Oil, and Gas"; and "Construction and Destruction."  In this section you'll find out what really caused the Titanic to sink, what set off the natural-gas explosion that killed three hundred students and others in New London, Texas in 1937, and what caused the submarine theater in the Aquarena Springs amusement park in San Marcos, Texas to flip over, besides many other disasters, both well-known and obscure.  In an interview with an engineering podcast a few years ago, I expressed some regret that so many of my blogs deal with death and mayhem, but that's what grabs the headlines, and it's apparently what people like to read about too. 

The "Cautionary Tales" section deals with engineering and technical-enterprise wrongdoings of various kinds:  cyber attacks, counterfeit electronic components, bribery, corruption, copyright battles, and similar matters.  Following that, the section on the engineering profession takes up questions about licensing, engineering education and employment, and other thoughts about the human enterprise of engineering.  Finally, I had to put in a section called "Engineering Ethics In Movies" because (for reasons that are still not clear), the most popular blog article of all time by far is a review of the Tesla film "The Prestige" I wrote back in 2006, and it didn't fit any of the other categories.

So far, the book is available in two formats:  as an iBook in the iTunes bookstore, and as a Kindle book at  If demand warrants, I will consider issuing a hard-copy paper version through an on-demand publisher, though I have not explored that option much up to this point. 

The selections are distributed fairly evenly throughout the history of the blog, so if you have started reading this blog only recently, you will encounter pieces in the book that you probably haven't read before.  While it's true that all the articles are out there for the reading without your having to buy the book, there's something to be said for the selection process, as the book represents less than 10% of the total number of articles—the most interesting 10%, I hope.

Those of you who have instructional responsibilities regarding engineering ethics may have found engineering-ethics case studies on the web in various places.  For example, Texas A&M maintains a website with case studies, as does the Illinois Institute of Technology, the National Academy of Engineering's OnlineEthics Center, and the National Society of Professional Engineers.  What the NSPE has is actually summaries of cases brought before their board of review, stripped of identifying information.  While these collections are useful, their scope is sometimes limited to certain types of engineering (e. g. civil), and they can sometimes be on the dry side. 

While I didn't put together Ethical and Otherwise exclusively with the classroom in mind, I hope ethics instructors will find it useful.  All the articles are about the same length (I aim for a thousand words, more or less), and they are all drawn from real-life situations of one kind or another.  While I haven't tried to do a full-dress scholarly bibliography, all the URLs referenced in the book were still working at the time of publication.  So I think it will be a useful and possibly even entertaining resource for those who teach ethics-related technical subjects.

Because most of the articles are independent of the others, it's the kind of book you can pick up and put down almost at random.  To be frank, I don't use a Kindle much myself, but my impression is that the kind of lighter, tell-me-a-story reading that Ethical and Otherwise offers in abundance is fairly well suited to the ebook format.

At any rate, that's what the book is about, and so if you're looking for more of a dose of this sort of thing than my weekly posts provide, consider buying Ethical and Otherwise.  As far as sales go, I'll be happy if it earns back the $125 it cost to buy the ISBN.  If after reading it, you like it, you will earn my undying gratitude by writing a favorable review on Amazon.  But don't let my urging bias your review.  That would be unethical, wouldn't it?

Sources:  Ethical and Otherwise:  Engineering In the Headlines is available in the Kindle format at  To find the iBook version in the iTunes store, go all the way to the bottom of the iTunes main page where it says "Explore" and click Books, then in the search box at the upper-right corner type "Ethical and Otherwise."  Texas A&M's collection of civil-engineering ethics cases can be found at  The case collection at the Illinois Institute of Technology is at  The National Academy of Engineering's Online Ethics Center has case studies and other ethics-related material on its main website at  And the National Society of Professional Engineers keeps their review board cases at  The phenomenon of a medium (such as a blog) advertising itself is known (at least to me) as Stephan's Law, as described in my blog of Dec. 15, 2014.

Monday, December 07, 2015

Child's Play: Hacking the Internet of Things

A company called VTech based in Hong Kong makes smart toys for kids.  One of their tablet products can connect to a parent's smartphone with a service called KidConnect, allowing children to send photos and text messages to their parents.  Sounds all nice and family-friendly, yes?  Well, in November the website Motherboard revealed that a hacker had managed to get into VTech's servers and download thousands of private photos, messages, passwords, and other identifying information that KidConnect users had sent and received.  This has understandably upset digital media commentator Dan Gillmor, who swears in a recent Slate article that not only will he never buy any Internet-enabled toys for children, but he doesn't think anybody else should, either.  Reportedly, VTech has shut down the KidConnect service until they can do something about security.  But this incident brings up a wider question:  what dangers does the Internet of Things pose for children?

In case you've been living in a cave somewhere, the Internet of Things (IoT, for short) is the idea that in the very near future—by some measures, right now—internet connections, sensors, and the hardware and software needed to use them will be so cheap and ubiquitous that lots of everyday items will be connected to the Internet, sending and receiving data that will make great changes in our lives.  The promoters of IoT naturally hope that these changes will be for the better, and can point to examples that have done that.

This matter gets close to home for me personally, because for the last several years I have supervised electrical engineering senior design teams at my university, and several of the past and current teams have worked on projects that are IoT-related.  About four years ago, one team's project was a communications system designed to monitor electric-power consumption in the home, at a finer-grain level than just what the electric meter could sense about overall power consumption.  The idea was that if consumers have a detailed profile of their electricity usage, they can make more intelligent choices about what to turn on when.  Maybe doing the laundry late at night instead of right when you get home in the afternoon will put usage into a more favorable rate period, for example. 
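The laundry example can be put in numbers.  Here is a minimal sketch of the kind of arithmetic such a monitor would enable, with rates and a load profile that are entirely made up for the sake of illustration:

```python
# Hypothetical time-of-use comparison for one laundry load.
# The rates and the 5 kWh figure are invented for illustration only.

PEAK_RATE = 0.25      # $/kWh, assumed late-afternoon rate
OFF_PEAK_RATE = 0.08  # $/kWh, assumed late-night rate
LAUNDRY_KWH = 5.0     # assumed energy for one washer-plus-dryer cycle

peak_cost = LAUNDRY_KWH * PEAK_RATE
off_peak_cost = LAUNDRY_KWH * OFF_PEAK_RATE

print(f"Laundry at 5 p.m.:  ${peak_cost:.2f}")
print(f"Laundry at 11 p.m.: ${off_peak_cost:.2f}")
print(f"Savings per load:   ${peak_cost - off_peak_cost:.2f}")
```

Multiply a saving like that across every appliance and every day of the year, and you can see why consumers (and utilities) might want this data—and, as the next paragraph notes, why burglars might want it too.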

As I was discussing the project with the team, it occurred to me that this information could be used for nefarious purposes.  You can tell a lot about a person if you have the kind of usage information the team was planning to measure: whether the user is home, for instance, and even what appliances are used and how often.  So I brought up this ethical issue with the team and made sure that they mentioned it in their final report. 

Since then, companies such as Freescale Semiconductor have jumped into IoT-related products and devices in a big way.  (Full disclosure:  Freescale has donated equipment and funds to the Ingram School of Engineering, where I work.)  From all I can tell, the Internet of Things is going to happen one way or another, and it behooves both engineers and the general public to give some thought to any possible downsides before something really bad happens.

Returning to the question of children and IoT, we are in a peculiar position these days.  Many children and young adults are vastly more tech-savvy than their parents, and this makes it hard for the parents to institute meaningful controls on what kids do online.  In the bad old days when the list of dangerous things in the home was mainly physical—guns, knives, poison, screwdrivers near electric outlets—it was a fairly simple matter for parents to keep toddlers out of harm's way.  But in the case of some toy that hooks up to your WiFi network, odds are that the parents are as clueless as the children regarding the privacy and security measures taken by the device's maker.  VTech itself didn't know how vulnerable its servers were until some enterprising hacker cracked into them and notified the media. 

Despite living with the Internet for close to thirty years now, we still have some things to learn about it, among which are new ways of using it that are potentially hazardous.  And children are an especially vulnerable population, as everyone agrees.  It's shortsighted to think of children always as the innocent parties in these matters too.  Some kids can be downright wicked, bullying others mercilessly.  Before we got so interconnected, a bully's sphere of influence was limited to the radius reachable by his fists, but hand a bully a smartphone with some sort of anonymous chatting app on it, and it's like putting wings on a wildcat.  His bullying sphere has instantly widened to include the entire globe, limited only by language ability and time.  And we have already seen instances in which Internet bullying has driven some vulnerable individuals to suicide.

Nobody is calling for a wholesale ban on Internet-enabled toys or anything like that.  But as I have often emphasized to my students in discussions of engineering ethics, many ethical lapses in the area of engineering can be traced to a lack of imagination.  When you are dealing with a physical structure like a bridge, it's relatively easy to calculate the maximum loads and find out how strong each member has to be for the bridge not to fall down.  But in any system that is intimately bound up with the behavior of people—especially millions of people at a time—your imagination has to anticipate the character and intentions of persons perhaps very different from you, who will twist your system around to serve their possibly sinister purposes. 

That is why privacy and security concerns need to be considered at the very beginning of any project that involves the Internet, and especially when a product is intended to be used by children.  VTech clearly did an inadequate job in this area, but they can serve as a bad example to warn future designers and users of IoT-enabled gizmos.  The craft of lockmaking is nearly as old as the craft of housebuilding, and for a good reason.  There are bad actors out there, and any time we open up a channel of communication involving a private citizen or residence, it needs to be guarded with the same care that we would extend to our own physical possessions.  Beyond mere technical ability, doing that well requires moral imagination, which should be in the toolkit of every good designer.

Sources:  The online magazine Slate carried the article "Parents: This Holiday Season, Do Not Buy Internet-Connected Toys for Your Kids" by Dan Gillmor at  That article referenced a report at Motherboard describing the VTech hack and what the hacker found, which is at

Sunday, November 29, 2015

Privacy and Backscatter X-Ray Technology

The New York City Police Department owns an unknown number of high-tech vans that allow their operators to play Superman—at least with regard to his X-ray vision abilities.  An online article in The Atlantic last month describes how NYPD Commissioner Bill Bratton indirectly admitted his organization was using X-ray vans when he refused to discuss the matter at a press conference, citing security concerns.  Superman was a fictional character whose strictly limited flaws were in service of a plot that always ended in the triumph of the unquestionably good over the irrevocably bad.  But real life isn't so simple.  And there are real concerns about the way the NYPD may be using this technology.

First, how does it work?  Offhand, it sounds highly irresponsible for somebody to just shoot X-rays at random passersby.  X-rays are a form of what is called ionizing radiation, meaning that they have enough energy to knock electrons out of atoms to make ions.  Such ions can wreak havoc in the DNA of a biological target, for instance, and lead to cancer and other problems.  That is why medical X-ray systems are highly regulated and only properly trained operators are allowed to use them.

But there is apparently a sort of escape clause for non-medical equipment that uses X-rays.  If it meets a certain technical standard that limits the amount of exposure someone would get from a typical spying operation that lasts a few seconds, then the FDA is not involved and the rules change.  According to numerous sources, the type of X-ray machinery used by the NYPD uses so-called "backscatter" X-ray technology that falls into the low-dose category. 

It's really rather clever.  Conventional X-ray machines use a transmission approach, sending X-rays through the item to be examined and recording what comes out on the other side.  Your dentist uses this type of machine, and the image of your teeth shows up because bone is denser than air or soft tissue and absorbs and scatters more X-rays.  But obviously, for transmission X-rays to work, the rays have to be strong enough to get all the way through the item being examined.

Backscatter X-rays work differently.  Instead of producing a strong beam that illuminates the whole target at once and goes through it, a scanning type of backscatter unit sends a "flying spot" of X-rays sweeping across the target, which could be a person inside his or her clothes, or even a car.  These X-rays don't have to penetrate the target.  Instead, all they have to do is cause a thing called Compton scattering, which is basically what happens when an X-ray encounters an electron and is generous enough to share some of its energy.  The electron takes off with some of the energy and a new X-ray photon appears carrying the energy that's left. 
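For the quantitatively inclined, the Compton relation pins down exactly how much energy the scattered photon keeps.  A short sketch follows; the 50 keV incident energy is an assumed, plausible-sounding value, not a figure from any actual scanner:

```python
import math

# Compton scattering: the scattered photon's energy is
#   E' = E / (1 + (E / m_e c^2) * (1 - cos(theta)))
# where m_e c^2 = 511 keV is the electron rest energy.

ELECTRON_REST_KEV = 511.0

def scattered_energy_kev(e_kev, theta_deg):
    """Energy (keV) of a photon Compton-scattered through theta_deg."""
    return e_kev / (1 + (e_kev / ELECTRON_REST_KEV)
                    * (1 - math.cos(math.radians(theta_deg))))

# A 50 keV photon scattered straight back (180 degrees) toward the detector:
e_back = scattered_energy_kev(50.0, 180.0)
print(f"50 keV in, {e_back:.1f} keV back")  # roughly 42 keV
```

The straight-back case is the worst case for energy loss, and even there the photon keeps most of its energy—which is why a modest-energy beam still gives the detector something usable to measure.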

It is these new lower-energy X-rays that are detected by the backscatter machine, which consequently does not need to use X-rays that are as energetic as those used by conventional transmission machines.  That, plus the fact that any one point on the target is exposed to X-rays for only a small fraction of the total exposure time, means that the X-ray dose of a backscatter unit is much smaller than, say, what you'd get from a medical chest X-ray.  Numerous sources confirm that the dose from a surveillance backscatter X-ray device is so small that you would get something similar just by standing on a street corner for an hour or so and exposing yourself to background radiation.  This comes from sources like cosmic rays and the potassium in construction materials, and everybody gets it every day, twenty-four hours a day.  So despite some concerns on the part of investigative journalists that there are health hazards from backscatter X-ray technology, as long as the systems are working properly and used properly there are much more important things to worry about.
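To put the street-corner comparison in rough numbers: taking an assumed per-scan dose near the 0.25-microsievert per-screening limit often cited for general-use security systems, and an assumed typical background dose of about 3.1 millisieverts per year, one scan works out to somewhat under an hour of ordinary background exposure.  Both figures are order-of-magnitude assumptions, not measurements of any particular device:

```python
# Comparing an assumed backscatter-scan dose with everyday background
# radiation.  Both input figures are approximate assumptions.

SCAN_DOSE_USV = 0.25              # assume a scan near the ~0.25 microsievert
                                  # per-screening limit often cited
BACKGROUND_USV_PER_YEAR = 3100.0  # assume ~3.1 mSv/yr typical background

background_per_hour = BACKGROUND_USV_PER_YEAR / (365 * 24)
equivalent_hours = SCAN_DOSE_USV / background_per_hour

print(f"Background dose: {background_per_hour:.2f} uSv/hour")
print(f"One scan is roughly {equivalent_hours:.1f} hours of background")
```

So under these assumptions the health question really is a small one; the privacy question, taken up next, is the one that deserves the attention.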

To my mind, the greater concern about these systems is privacy.  You wouldn't think that research directors of government agencies would go in for public cheesecake photos of themselves, but the Wikipedia article on backscatter X-rays shows such an image of Susan Hallowell, director of the U. S. Transportation Security Administration's research lab.  It is, er, very clear that the image is that of a woman.  We can all be glad that she wasn't a man.

At this point, we should take a look at the U. S. Constitution's Fourth Amendment, which is short enough to quote in full here:  "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."  Updating its 1791-era language to modern terms, it means that no government official can snoop on you unless they swear or affirm that they have a good reason to do so, and can say exactly where they're going to look and what they're going to look for.

I'm not a lawyer, and since the Fourth Amendment was passed it has accumulated various qualifications and exceptions, the way a ship picks up barnacles.  But the principle that the writers of that amendment had in mind is still clear.  A person's body, house, papers (which were the only way to record data back then), and "effects" (as in personal effects—what a ten-year-old would call "my stuff") are inviolable against government attempts to mess with them—taking them, looking at them, or anything along those lines.  The only cause strong enough to justify such violation is when the government has a good case to show that something is amiss and can describe what they want to look for and where.  This clearly rules out fishing expeditions, in which an official simply snoops at will and starts investigating a crime when the snooping itself provides evidence.

Right now, the NYPD is in effect saying "Trust us, we're using this X-ray van the right way."  But if citizens can't even know how many vans there are, let alone how they're being used, that is asking for a heck of a lot of trust.

Sources:  The Atlantic's article "The NYPD Is Using Mobile X-Ray Vans to Spy on Unknown Targets" appeared on Oct. 19, 2015 at  I also consulted the websites of firms that make such devices, including American Science and Engineering at and the Tek84 Engineering Group at  I thank my nephew Matt, a graduate student in criminal justice, for bringing this matter up at the Thanksgiving dinner table. 

Monday, November 23, 2015

VW's In A Fix With Their Fix

Back in September, the U. S. Environmental Protection Agency (EPA) accused Volkswagen of cheating with regard to emissions controls of many of its cars that use diesel engines.  VW admitted as much, its CEO resigned, and now the firm faces the problem of fixing all the cars that violate emissions standards.  One way or another, some 11 million cars worldwide are implicated, with about half a million in the U. S. alone.  How did VW get into this fix, and how are they going to dig themselves out?

As new information has emerged on exactly how the cheating was done, it's pretty easy to tell that this was no single-line software tweak by a lone rogue engineer.  According to a Nov. 4 BBC report, someone (probably several someones) designed software to detect when the car was on a test stand used for EPA checks.  Such testing typically involves running the car on a dynamometer, which uses rollers underneath the wheels to load the engine and simulate actual road conditions.  But in order for the stationary test equipment to be connected to the vehicle, the car is usually sitting still in a laboratory somewhere during the test.  I'm not saying that I know how the software guys did it, but if I were faced with the problem of figuring out whether a test-stand situation like this was going on, I'd look at the built-in accelerometers that every airbag-equipped car has.  If nobody's at the steering wheel and the car isn't going anyplace even though it's in "drive" and the engine's running, chances are it's on a test stand. 

However they did it, when an emissions-test situation was detected the car switched into a mode that made it pass the emissions test.  But the price was severely reduced power and engine performance, which would not typically show up on an emissions test—after all, nobody's actually driving the car to notice.  Once the test was over, the software readjusted the engine settings to restore normal power and performance—and to produce as much as forty times more nitrogen oxides (NOx) than the EPA allows.  But hey, it passed the test.  That's all that counts, right?
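To make the idea concrete, here is a hypothetical sketch of the kind of logic involved.  This is emphatically not VW's actual code, which has not been published; the sensor names, thresholds, and the "engine map" labels are all invented for illustration.

```python
# Hypothetical sketch of a dyno-detection heuristic and a "defeat
# device" -- NOT VW's actual code.  All names and thresholds are
# invented for illustration.

def looks_like_test_stand(wheel_speed_kph, lateral_accel_g, steering_angle_deg):
    """Guess whether the car is on a dynamometer.

    On a dyno the driven wheels turn, but the car body goes nowhere:
    no lateral acceleration and essentially no steering input.
    """
    wheels_turning = wheel_speed_kph > 5.0
    body_not_moving = abs(lateral_accel_g) < 0.01
    no_steering = abs(steering_angle_deg) < 1.0
    return wheels_turning and body_not_moving and no_steering

def select_engine_map(on_test_stand):
    # The cheat in one line: clean-but-weak settings under test,
    # dirty-but-powerful settings on the road.
    return "low-NOx test map" if on_test_stand else "full-power road map"
```

The detection logic itself is trivial; what made the scheme a conspiracy is that someone had to specify, code, calibrate, and quietly maintain it across many engine-control-unit variants for years.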

This mode of cheating is why fixing the problem with many diesel models, especially older ones, is not going to be some simple reload-new-software exercise.  If you've gone on a road trip recently and looked around in a truck-stop convenience store, you may have noticed piles of plastic bottles full of something called "diesel exhaust fluid."  Turns out that this stuff is now needed for many tractor-trailer diesel engines in order to meet the EPA's requirements for NOx emissions.  There's machinery on board the truck that squirts the fluid—which contains urea—into the hot exhaust, where the urea decomposes into ammonia and carbon dioxide.  The ammonia, in the presence of a catalyst in a thing called a selective catalytic reduction system (SCR), combines with the nasty NOx molecules to form nitrogen and water, which finally leave the exhaust pipe and rejoin Mother Nature, leaving her nearly as pristine as she was before the truck came by. 
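For the chemically inclined, the chain just described can be written as two reactions.  These are the standard textbook forms; real SCR systems involve additional side reactions and a second pathway for NO2.

```latex
% Urea decomposes in the hot exhaust to ammonia and carbon dioxide:
\mathrm{(NH_2)_2CO + H_2O \longrightarrow 2\,NH_3 + CO_2}

% Standard SCR reaction over the catalyst:
\mathrm{4\,NH_3 + 4\,NO + O_2 \longrightarrow 4\,N_2 + 6\,H_2O}
```

The end products that leave the tailpipe, nitrogen and water, are exactly the "rejoin Mother Nature" step in the paragraph above.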

It's one thing for truck engineers to see the regulations coming down the pike, and take time to redesign the power plant so as to accommodate another anti-pollution system which requires valves, heaters to keep the urea solution from freezing, pipes, level-monitoring systems, and all the other stuff needed to do the NOx-killing job.  It's quite another thing for VW to be under the gun to retrofit small diesel passenger cars that are maybe four or five years old, with a kit of SCR stuff they were not designed to have.  You'll need someplace to stick the SCR unit in the exhaust line, a way to get a pipe from the SCR to the urea tank, a place to put the urea tank, control lines, etc.  Engineers estimate the cost per vehicle could range up to $1000 or more.  With some cars, it may be cheaper for VW simply to buy them back from the owners and send them to the scrapyard.  Software-only fixes may be possible for some diesel models, but it looks like millions of cars worldwide will need expensive hardware installations to meet current emissions requirements.

VW says its internal investigation into how all this happened is continuing.  For their sake, I hope they wind it up pretty soon, at least well enough to publish a timeline with names and actions.  But even without such information, it's obvious by now that deception with regard to emissions controls was an established policy.  Maybe the conspiracy—that's not too strong a term at this point—was concealed from upper management, and that's one of the things we need to know.  But even if it was, it's clear that there was a group of engineers inside VW who deliberately set out to cheat the system of pollution controls.  And they got away with it for several years.

It's not often that such a clear-cut case of wrongdoing by engineers makes the headlines.  Far more often, engineers will face a dilemma in which either choice has advantages and disadvantages, both morally and otherwise.  And sometimes engineers make the wrong choice, basing their decisions on incomplete information.  But in engineering, information is almost always incomplete.  There's always more you'd like to know, but at some point the project must go on, choices must be made, and sometimes they turn out to be wrong ones.

But the VW emissions case is different.  Deception was intended from the start.  I don't know what internal company dynamics brought pressure to bear on engineers to the extent that developing a software evasion of emissions controls seemed like a good idea, but clearly something was wrong with the way ethical principles were stated and handed down. 

Sometimes, companies who do bad things are unrepentant and fight tooth and nail despite being in the wrong.  In such cases, large government fines are sometimes the only thing that will make an impression.  But in VW's case, its CEO resigned, sales are dropping, and there are news stories with graphics that show the famed chrome VW emblem breaking apart.  It's starting to look like the market and news media will do more punishing than the EPA is likely to do.  Whether that's fair or not is almost beside the point.  To survive, VW will have to own up fully, fix the mess it made to the best of its ability, and be a different company from the inside out—from now on.

Sources:  An Associated Press article on the types of fixes needed by VW was published in numerous outlets, including the U. S. News and World Report website on Nov. 19 at  Information on the details of how the cheating software worked was carried by the BBC on Nov. 4 at  I also referred to the Wikipedia article on diesel exhaust fluid.  I last blogged on the VW emissions scandal on Sept. 21, 2015.

Monday, November 16, 2015

Rolling Back Mass Surveillance

Bruce Schneier is a man worth listening to.  In 1993, just as the Internet was gaining speed, he wrote one of the earliest books on applying cryptography to network communications, and has since become a well-known security specialist and author of about a dozen books on Internet security and related matters.  So when someone like Schneier says we're in big trouble and we need to do something fast to keep it from getting worse, we should at least pay attention.

The trouble is mass surveillance.  In his latest book, Data and Goliath, he explains that mass surveillance is the practice of indiscriminately collecting giant data banks of information on people first, and then deciding what you can do with it.  One of the best-known and most controversial examples of this is the practice of the U. S. National Security Agency (NSA) of grabbing telecommunications metadata (basically, who called whom when) covering the entire U. S., which was revealed when Edward Snowden made his stolen NSA files public in 2013.  Advocates of the NSA defend the call database by saying the content of the calls is not monitored, only the fact that they were made.  But Schneier makes short work of that argument in a few well-chosen examples showing that such metadata can easily reveal extremely private facts about a person:  medical conditions or sexual orientation, for example. 
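Here is a small illustration of Schneier's point that metadata alone can be revealing.  The reverse directory and call log below are entirely invented for the example; the code knows nothing about what was said in any call, only which numbers were dialed.

```python
# Hypothetical illustration: call metadata alone (who called whom,
# when) can reveal sensitive facts.  All numbers and categories below
# are invented for the example.
REVERSE_DIRECTORY = {
    "555-0101": "oncology clinic",
    "555-0102": "wig shop",
    "555-0103": "hospice",
    "555-0200": "pizza place",
}

SENSITIVE = {"oncology clinic", "wig shop", "hospice"}

def sensitive_inferences(call_log):
    """Return sensitive categories a subscriber called, from metadata only."""
    called = {REVERSE_DIRECTORY.get(number, "unknown") for number, _ in call_log}
    return sorted(called & SENSITIVE)

calls = [("555-0101", "2015-11-02"),   # oncology clinic
         ("555-0102", "2015-11-04"),   # wig shop
         ("555-0200", "2015-11-05")]   # pizza place
print(sensitive_inferences(calls))     # enough to guess a serious illness
```

No call content is needed: a subscriber who phones an oncology clinic and then a wig shop has disclosed a probable medical condition to anyone holding the metadata.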

It's not only government overreaching that Schneier is concerned about. Businesses come in for criticism too.  With data storage getting cheaper all the time, many Internet firms and network giants such as Google and Yahoo find that it's easier simply to collect all the data they can on their customers, and then pick through it to see what useful information they can extract—or sell to others.  This happens all the time.  Maybe the most visible evidence of it appears when you go online and look for, say, a barbecue grill at a hardware-store website.  Then, maybe several days later, you will be on a completely different site.  Say a vegetarian friend is coming over and you're looking up how to make vegan stew.  Lo and behold, right next to the vegan recipe, there's an ad for that barbecue grill you were looking at a few days ago.  How did they know?  With "cookies" (bits of data retained by your browser) and behind-the-scenes trading of information about you and your browsing habits.
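The barbecue-grill trick can be sketched in a few lines.  This is a deliberately simplified model of cookie-based retargeting: the site names and the single shared "ad network" dictionary are invented, and real systems add ad auctions, tracking pixels, and ID syncing on top.

```python
# Simplified sketch of cookie-based ad retargeting.  The sites and the
# ad network here are hypothetical; real systems are far more elaborate.
ad_network_profiles = {}  # cookie_id -> list of products viewed, shared across sites

def visit(site, cookie_id, viewed_product=None):
    """Simulate a page visit on any site that carries the ad network."""
    profile = ad_network_profiles.setdefault(cookie_id, [])
    if viewed_product:
        profile.append(viewed_product)  # the hardware site logs your interest
    # Any later site carrying the same network reads the profile back:
    return profile[-1] if profile else "generic ad"

visit("hardware-store.example", cookie_id="abc123",
      viewed_product="barbecue grill")
# Days later, on an unrelated recipe site, the same cookie ID comes along:
print(visit("vegan-recipes.example", cookie_id="abc123"))
```

The key point is that neither website "knows" the other exists; the linkage lives entirely in the third party's database, keyed by the cookie your browser carries from site to site.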

But Schneier reserves his greatest concern for something that is perhaps hardest to define:  the loss of privacy.  Privacy is a vital if poorly defined right, and its absence makes normal life almost impossible.  Schneier says, "Privacy is an inherent human right. . . . It is about choice, and having the power to control how you present yourself to the world."  Mass surveillance tramples over the right to privacy and trains millions subtly to alter their ways of living to avoid the pain of secrets revealed.  This way of living was familiar to those whose lives were monitored by totalitarian regimes such as the old East Germany or the Soviet Union.  True, Google isn't going to send a jackbooted corporal to your door if you say something nasty about Sergey Brin, Google's co-founder.  Brin himself was born behind the Iron Curtain, though his family emigrated when he was six, and he probably remembers little or nothing about the USSR.  Nevertheless, Google and other firms that collect massive amounts of private data from their customers have set up a situation in which the privacy rights of millions, even billions, depend solely on the good intentions of a few powerful decision-makers in private companies. 

So what do we do about this?  Schneier has lots of suggestions, and points to Europe as a place where privacy is more respected in law and custom.  Changing laws is a necessary first step.  Whenever anyone moves to restrict the mass-surveillance habits of government entities such as the NSA or the Federal Bureau of Investigation, their defenders threaten us with a terrorist apocalypse, saying if we don't give up this or that privacy right, we'll tie the government's hands and be helpless before terrorist assaults.  Schneier spends a lot of time taking apart this argument, to my mind pretty convincingly.  For one thing, mass-surveillance data has not proved that useful in uncovering terrorist plots, compared to old-fashioned detective work focused intensely on a few known troublemakers. In general, government should abandon most mass-surveillance practices in favor of concentrating on specific investigations, with permission granted by courts whose workings are made public to the extent possible.

As for massive snooping by private enterprises, Schneier thinks regulations are the best option.  These regulations would impose a kind of "opt-in" system.  Currently, if you have a privacy-related choice at all in dealing with Internet firms, you have to go to a lot of trouble to make them respect your privacy, if they will allow such a thing at all.  Under Schneier's proposed policy, companies could not take away your rights to your data without your explicit permission, and the choice would be explained clearly enough so that you wouldn't need to have your techno-lawyer read the fine print to understand what's going on. 

Schneier is not a political scientist, and neither am I, so it's hard to say how we would get from the current parlous situation to one in which online privacy is respected, and nobody can snoop on you unless they go to a lot of trouble and get special permission to do it.  But he's told us what the problem is, and now it's up to us to do something about it.

Sources:  Bruce Schneier's book Data and Goliath:  The Hidden Battles to Collect Your Data and Control Your World was published by W. W. Norton in 2015.  The quotation from it above is from p. 126.  I also referred to Wikipedia articles on Edward Snowden, MAINWAY (the NSA call database), and Sergey Brin.

Monday, November 09, 2015

Did Exxon Mobil Lie About Climate Change?

The energy giant Exxon Mobil is being investigated by New York State's attorney general, according to a report last week in the New York Times.  The issue appears to be whether Exxon properly stated the risks of climate change to its future business in light of its own internal scientific climate research.  Critics of the company say it has engaged in deception similar to what tobacco companies did in the 1960s and 1970s, when cigarette makers funded research that cast doubt on the health dangers of tobacco use even as they knew the grim truth and concealed it.  For its part, Exxon's spokesman Kenneth P. Cohen said, "We unequivocally reject the allegations that Exxon Mobil has suppressed climate change research." 

Under a law called the Martin Act, the New York attorney general is charged with the investigation of financial fraud, and can issue subpoenas for records and documents relating to such an investigation.  Exxon got a subpoena along these lines last week, and is in the process of responding to it. 

Let's step back a moment and examine the question of how this case relates to the well-known practices of tobacco companies that attacked the credibility of research showing that smoking and chewing their products were hazardous to one's health.

The history of how Big Tobacco muddied the research waters is pretty clear.  After the tobacco firms fought what became a rear-guard action against the mounting evidence that smoking kills, both state and U. S. federal attorneys general sued large companies such as R. J. Reynolds beginning in the 1990s, claiming that they deceived consumers about the dangers of smoking even as the companies' own internal research revealed the hazards involved.  These successful suits cost the companies billions of dollars in fines and continuing payments into state-controlled public-health funds. 

One of my high-school teachers loved questions that began, "Compare and contrast. . ." so let's do that here.  What are the comparisons and the contrasts between what Big Tobacco did, and what Big Oil is supposedly doing?

First, the similarities.  Exxon may have funded some researchers at times who opposed the general scientific consensus about climate change.  This consensus has itself been something of a moving target as more data, more sophisticated computer models, and a better understanding of climatology in general have contributed to knowledge of the problem.   So for Exxon to be liable in the way that, say, R. J. Reynolds was liable, someone would have to show that (a) Exxon was publicly saying climate change isn't going to bother us, and (b) Exxon privately knew pretty much the opposite. 

There is also the question of harm.  It's pretty easy for a lawyer to argue that his late client died from smoking, which the client might have ceased and desisted from doing had he not been lied to by the maker of his cigarettes.  If some of the more dire forecasts of the climate-change prophets come to pass, we will have widespread death and destruction from climate change too.  And to the extent that companies like Exxon were responsible for it, they could conceivably be held liable in some way.

Now for the contrasts.  Apparently the worst thing that the New York attorney general thinks Exxon has done is not murder or criminal negligence, but financial fraud.  Fraud generally involves the premeditated intent to trick or deceive someone to your own advantage.  The idea here seems to be that if (and that is a big "if") laws are passed or other factors intervene to make it harder for Exxon to profit from fossil fuels because of climate change, and Exxon knew this was likely to happen, and Exxon told its investors otherwise, then it has tricked its investors. 

Whatever you want to call this alleged action, it's a far cry from what blatant deceivers like Bernie Madoff did.  Madoff, you may recall, ran a Ponzi scheme and kept one set of books for public consumption and another set for his secret fraudulent operations.  While some European countries have begun to restrict fossil-fuel use in various ways—high fossil-fuel taxes, for example—their reasons for doing so often go beyond the threat of climate change.  And in the U. S., to the frustration of environmentalists, very few meaningful climate-change-inspired restrictions have been placed so far on the consumption of oil, gas, and coal.  This may change in the future, but it's hard to sue somebody for something that hasn't happened yet.  Oil prices have recently tanked (so to speak), but the reasons have little or nothing to do with climate-change laws and a lot more to do with higher domestic production and international politics. 

Another question is whether an engineering-intensive firm that operates legally to fulfill a widespread public need, as energy companies do, can be held liable for the free consumption decisions of millions of its customers.  Again, we come to the question of who has been harmed.  While lying is bad, if we find out that Exxon made some forecasts of future climate change that turn out to be wrong, that's not exactly the same as lying.  Overall, this investigation seems to be based on speculation about future harms more than on a realistic assessment of how investors have been harmed up to now.  And such a thing will be hard to put across to a reasonable jury, assuming the case gets that far.

Of course, this may be the beginning of what some might view as a government shakedown.  Rather than face the prospect of spending years or decades in court, Exxon may choose to settle out of court by paying fines or changing its way of business to make the New York attorney general happy.  Such proceedings always smack of blackmail to a greater or lesser degree, although sometimes they are the least bad alternative if a genuine wrong has occurred.

But to find out if that is the case, we'll just have to wait.  Wait to see what the attorney general of New York does next; wait to see if states and countries pass much more restrictive legislation inspired by climate change; and wait to see how much hotter it gets.  It may be a long wait for any or all of these things, so stay tuned.

Sources:  The New York Times article "Exxon Mobil Investigated for Possible Climate Change Lies by New York Attorney General" appeared on Nov. 6, 2015 at  I also referred to the Wikipedia article "Tobacco politics."  I blogged on a related matter pertaining to climate change and university-funded research in "A Chunk of (Climate) Change", posted on Mar. 2, 2015.