Monday, October 24, 2016

The Day The Internet Goes Down

This hasn't happened—yet.  But Bruce Schneier, an experienced Internet security expert with a track record of calling attention to little problems before they become big ones, says he's seeing signs that somebody may be considering an all-out attack on the Internet.  In an essay he posted last month called "Someone Is Learning How to Take Down the Internet," he tells us that several Internet-related companies which perform essential functions, such as running domain-name servers (DNS), have come to him recently to report a peculiar kind of distributed denial-of-service (DDoS) attack.

For those who may not have read last week's blog about ICANN, let's back up and do a little Internet 101.  The URLs you use to find various websites end in domain names—for example, .com or .org.  One company that has gone public on its own with some limited information about the attacks is Verisign, a Virginia-based firm whose involvement with the Internet goes back to the 1990s, when it served for a while as a kind of Internet telephone book for every domain ending in .com, before ICANN, now an internationally-governed nonprofit organization, took over supervision of that job.  Without domain-name servers, networked computers can't figure out how to find websites, and the whole Internet communication process pretty much grinds to a halt.  So the DNS function is pretty important.
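
To make the telephone-book analogy concrete, here is a minimal Python sketch of the lookup that every networked program quietly performs.  The hostname is just an illustrative example, and the failure branch is roughly what an effective attack on DNS would look like from your computer's point of view:

```python
# Minimal illustration of a DNS lookup using Python's standard library.
# gethostbyname() asks the domain-name system to translate a
# human-readable name into the numeric address computers actually use.
import socket

try:
    address = socket.gethostbyname("example.com")
    print("example.com resolves to", address)
except socket.gaierror:
    # If no domain-name server answers, the name can't be turned into
    # an address, and as far as your computer is concerned the site
    # has vanished.
    print("DNS lookup failed")
```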

As Schneier explains in his essay, companies such as Verisign have been experiencing DDoS attacks that start small and ramp up over a period of time.  He likens them to the way the old Soviet Union used to play tag with American air defenses and radar sites in order to see how good they were, in case it ever had to mount an all-out attack.  From the victim's point of view, a DDoS attack is like being an old-fashioned telephone switchboard operator whose incoming-call lights all light up at once—and stay lit for hours, or however long the attack lasts.  It's a battle of bandwidths, and if the attacker generates enough dummy requests over a wide enough bandwidth (meaning more servers and more high-speed Internet connections), the attack overwhelms the victim's ability to keep answering the phone, so to speak.  Legitimate users of the attacked site are blocked out and simply can't connect as long as the attack is effective.  If a critical DNS provider is attacked, there's a good chance that most of the domain names it serves will also disappear for the duration.  That hasn't happened yet on a large scale, but some small incidents have occurred along these lines recently, and Schneier thinks that somebody is rehearsing for a large-scale attack.
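
To see why it's a battle of bandwidths, a toy calculation helps.  Here is a little Python sketch of an overloaded server; all the request rates are numbers I made up for illustration, not measurements of any real attack:

```python
# Toy model of a DDoS as a battle of bandwidths: when requests arrive
# faster than the server can answer them, and dummy requests vastly
# outnumber real ones, legitimate users get only a proportional share.
server_capacity = 50_000    # requests/second the site can answer (assumed)
legit_rate = 10_000         # requests/second from real users (assumed)
attack_rate = 990_000       # dummy requests/second from attacker (assumed)

total_demand = legit_rate + attack_rate
served_fraction = min(1.0, server_capacity / total_demand)
print(f"Fraction of legitimate requests that get through: {served_fraction:.1%}")
# With these made-up numbers, only 5.0% of real users connect.
```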

The Internet was designed from the start to be robust against attack, but back in the 1970s and 1980s, the primary fear was an attack on the physical network, not one using the Internet itself.  Nobody goes around chopping up fiber cables in hopes of bringing down the Internet, because it's simply not that vulnerable physically.  But it's likely that few if any of the originators thought of the possibility that the Internet's strengths—universal access, global reach—would be turned against it by malevolent actors.  It's also likely that few of them believed in original sin, but that's another matter.

Who would want to take down the Internet?  For the rest of the space here I'm going to engage in a little dismal speculation, starting with e-commerce.  Whatever else happens if the Internet goes down, you're not going to be able to buy stuff that way.  Schneier isn't sure, but he thinks these suspicious probing attacks may be the work of a "state actor," namely Russia or China.  Independent hackers, or even criminal rings, seldom have access to entire city blocks of server farms, and high-bandwidth attacks like these generally require such resources.

If one asks the simple question, "What percent of retail sales are transacted over the Internet for these three countries:  China, the U. S., and Russia?" one gets an interesting answer.  It turns out that as of 2015, China transacted about 12.9% of all retail sales online.  The U. S. was next, at about 8.1%.  Bringing up the rear is Russia, at around 2%, which is where the U. S. was in 2004.  Depending on how it's done, a massive attack on DNS sites could be designed to damage some geographic areas more than others, and without knowing more details about China's Internet setup I can't say whether China could manage to cripple the Internet in the U. S. without messing up its own part.  But there is so much U. S.-China trade that Chinese exports would start to suffer pretty fast anyway.  So there are a couple of reasons that if China did anything along these lines, they would be shooting themselves in the foot, so to speak.

Russia, on the other hand, has much less in the way of direct U. S. trade, and while it would be inconvenient for them to lose the use of the Internet for a while, their economy, such as it is, would suffer a much smaller hit.  So based purely on economic considerations, my guess is that Russia would have more to gain and less to lose in an all-out Internet war than China would.

A total shutdown of the Internet is unlikely, but even a partial shutdown could have dire consequences.  Banks use the Internet.  Lots of essential utility services, ranging from electric power to water and natural gas, use the Internet for what's called SCADA (supervisory control and data acquisition) functions.  The Internet has gradually become a critical piece of infrastructure whose vulnerabilities have never been fully tested in an all-out attack.  That's not a comfortable place for a country to be, and in these days of political uncertainty and the waning of dull, expert competence in the upper reaches of government, one hopes that someone, somewhere has both considered these possibilities in detail and figured out some kind of contingency plan to act on in case it happens.

If there is such a plan, I don't know about it.  Maybe it's secret and we shouldn't know.  But if it's there, I'd at least like to know that we have it.  And if we don't, maybe we should make plans on our own for the Day The Internet Goes Down.

Sources:  Bruce Schneier's essay "Someone Is Learning How to Take Down the Internet" was posted on his blog, Schneier on Security, in September 2016.  I obtained the statistics on the percentages of retail sales transacted online in the U. S., China, and Russia from e-commerce statistics websites.  I also referred to the Wikipedia article on Verisign.

Monday, October 17, 2016

Internet Technical Governance: ICANN Says "I can," But Can It?

At a time when politics seems to have gotten into everything, like sand after a trip to the beach, it's not too surprising to hear that Senator Ted Cruz and some state attorneys general have seized upon a largely technical issue involving the Internet domain name system (DNS), specifically the transfer of supervision from the U. S. Department of Commerce to an independent non-profit organization called ICANN (Internet Corporation for Assigned Names and Numbers).  This matter highlights a little-known fact about engineers:  they often handle political matters a good deal better than many politicians do.

I still think the best definition of politics is one I heard from my eighth-grade civics teacher:  "Politics is just the conduct of public affairs."  Given the nature of something as widespread and influential as the Internet, in one sense every issue affecting its operation and integrity is political, in that it could potentially affect every user.  But that is not the usual sense in which the word is used.

The facts of the issue are these.  When you type in a URL that uses letters that stand some chance of being understood by a normal human, the Domain Name System serves as a sort of phone book in which networked computers look up the numeric addresses, which computers actually use, that go with those names.  Up until a couple of weeks ago (Oct. 1, to be exact), certain operations pertaining to the assignment of domain names and other more technical matters were performed under the supervision of the U. S. Department of Commerce's National Telecommunications and Information Administration (NTIA), through a contract with the already-existing ICANN, a nonprofit organization based in California.  This tie to the U. S. government was viewed by some as a liability, in that it has led in the past to calls from Russia and China to transfer supervision of ICANN to a United Nations agency called the International Telecommunication Union (ITU).  (You can tell there are engineers involved by the number of alphabet-soup outfits in this piece.)  Partly to counter this, for many years both Democratic and Republican presidential administrations have been moving to cut the last formal ties between the Department of Commerce and ICANN, and finally a date was set:  October 1 of this year.

For reasons known best to themselves, but possibly having to do with businesses which were not happy with how domain-name disputes turned out, the attorneys general of the states of Arizona, Nevada, Oklahoma, and Texas filed suit to block the transfer.  But a federal judge denied the request and for two weeks now, ICANN has been running without its former Department of Commerce supervision.  I for one have not noticed any big changes, but it was never the kind of thing that was supposed to lead to the sudden appearance of massive censorship on the Internet in the first place.

While the assigning of Internet domain names and keeping them straight on the "root servers" could conceivably be manipulated for devious or sinister purposes, I am unaware of any major instances of this.  As numerous reports pointed out, Internet censorship of the type that goes on in China or Egypt from time to time is committed by the host governments, not ICANN, and there's nothing ICANN can do about it if a sovereign government chooses to pull its citizens' Internet plug.  I won't say that the concerns of Sen. Cruz and company are entirely without merit, but whether those concerns will materialize is one of those things that can't be known in advance.

So far, ICANN, and many other technical matters pertaining to the Internet, seem to have been run in a way that is familiar to many engineers, but little known outside the engineering community.  There is not a single term that describes this process, but the phrases "consensus," "just-in-time governance," and "ad-hoc committees" pertain to it.  It is most prominent in the development of engineering standards, which the Internet vitally depends on.

Many times in the course of engineering history a need for a standard has arisen.  Technology gives rise to a new capability—precisely-machined screw threads, or radio transmissions, or computer networks—but it will work in a widespread way only if the parties making and using the technology agree on certain standards, so that everybody's screws will fit, or everybody's computer can talk to the others without a lot of fuss.  So engineers have learned to form standards committees whose members bring both technical knowledge and the interests of private and public entities concerned with the new technology.  These committees are very lean organizations—usually the members' firms or departments pay for their participation, so there is little or nothing in the way of staff, buildings, or tax money involved.  The committee meets as long as it takes to figure out a standard, agrees on it, and then publishes its results, in effect saying, "If you want to play this new game, here are the rules."  Often the committee then disbands, and life goes on, only better than before, because now there's a new standard that engineers can use to implement a new technology.

Because these standards committees work almost entirely out of the public eye, most people don't even know they exist.  But without them, we wouldn't have, well, most of the highly sophisticated technology we have.  Wireless networks depend on standards.  The Internet depends on standards.  Electric power depends on standards (the battle of Westinghouse's AC versus Edison's DC was in large part an issue of standards).  And all these things get done almost invisibly, without much publicity or public expense.

Some political scientists have floated the idea of adapting the engineering-standards style of governance to more public matters, and they may have a point.  As anyone who has attended standards meetings can attest, they are not without controversy.  But by and large, standards organizations and technical outfits such as ICANN operate in this mode successfully and efficiently.  And unless future events prove otherwise, it's likely that the fears of Sen. Cruz and company will turn out to be groundless.

I hope ICANN can keep doing its generally good job without the Department of Commerce looking over its shoulder any more.  Instead of politicians making politics out of what looks to be a smoothly-functioning situation, perhaps we could encourage them to ask how engineers deal with technical matters that have political aspects, and learn something about how to get work done.  But at this point in history, it might be too much to ask.

Sources:  CNET carried two stories to which I referred on the transfer of ICANN to independent status, one published on Sept. 16 and one on Oct. 1.  I also referred to the Wikipedia article on ICANN.

Monday, October 10, 2016

115 Years Young?

Vannevar Bush, the head of the U. S. Office of Scientific Research and Development during World War II, tells the story of how during the war he was trying to gain more funding from Congress for medical research.  Hoping to further his cause, he convinced A. Newton Richards, President Roosevelt's chairman of the Committee on Medical Research, to testify in favor of more funding before a Congressional committee.  As Bush recounts, "It was towards the end of the war, and Richards was feeling tired and a bit old.  One of the congressmen asked him, 'Doctor, will all these researches you are carrying on tend to lengthen the span of human existence?'

'God forbid,' said Richards, smack into the record."

While not every medical researcher shares Dr. Richards's reluctance to extend the human lifespan, Dr. Jan Vijg of the Albert Einstein College of Medicine thinks he has discovered the true limit to how long humans can live.  It's about 115 years, he says.

According to a recent New York Times report, Dr. Vijg and his colleagues studied the mortality records of a number of countries to see which age group experienced the most rapid growth in recent decades.  As the general level of health care in industrialized countries has improved, the average lifespan has increased, but Dr. Vijg guessed that if there was a natural limit, it would show up first as a leveling off of the age of the fastest-growing group of old people.  For example, in the 1920s in France, 85-year-olds were the fastest-growing group, but by the 1990s that honor belonged to 102-year-olds.  In the last decade or so, the trend has stagnated at about 115.  There are exceptions, of course, such as Jeanne Calment, who before her death in France in 1997 at age 122 was fond of retelling her recollections of meeting Vincent Van Gogh.  But statistically, Dr. Vijg has strong evidence that no matter what specific diseases we conquer, we have a built-in expiration date of about 115 years.
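
For the curious, here's the shape of that analysis in a few lines of Python.  The death counts below are invented purely to show the logic; they are not the study's data:

```python
# Sketch of Dr. Vijg's approach: for each age at death, compare counts
# between an earlier and a later period, and find the age whose count
# grew fastest.  If that age stops climbing over the decades, it hints
# at a ceiling on lifespan.  All counts here are invented placeholders.
deaths_by_age = {
    # age: (deaths in earlier decade, deaths in later decade)
    100: (2000, 2600),
    105: (400, 700),
    110: (30, 75),
    115: (2, 3),
}

def growth_rate(counts):
    earlier, later = counts
    return (later - earlier) / earlier

fastest_age = max(deaths_by_age, key=lambda age: growth_rate(deaths_by_age[age]))
print("Fastest-growing age at death:", fastest_age)   # 110 with these numbers
```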

Lots of people disagree with Dr. Vijg, of course.  One of the scientists who collected data used in Dr. Vijg's study deplores his conclusion, calling it a "travesty."  But we should distinguish between a descriptive study, whose purpose is simply to give us insight into what is in fact happening, and a claim of proof.  Not even Dr. Vijg is claiming to have proved nobody can live longer than 115, if for no other reason than the fact of Ms. Calment's achievement.  What he presents is persuasive statistical data that, unless we discover the root causes of aging and get a handle on how to manipulate them, we are unlikely to push the maximum lifespan higher than 115.

It's curious, but a friend of mine has been going around for some time claiming approximately the same thing on the basis of a sermon he heard.  I didn't hear the sermon, but evidently the preacher was looking forward to living to 120 on the basis of a Bible verse in the Old Testament.

He was probably talking about Genesis 6:3, which reads in the King James Version, "And the Lord said, 'My spirit shall not always strive with man, for that he also is flesh:  yet his days shall be an hundred and twenty years.'"

Now, this passage occurs in the midst of a number of other sayings that are, to say the least, hard to interpret—things about the sons of God marrying the daughters of men, giants in the earth, and so on.  Some interpreters say this passage has nothing to do with a limit on human lifespans; rather, it refers to the time during which God put up with man's increasing misbehavior before he decided to put an end to it with the Flood, which only Noah and his fellow shipmates survived. 

Whatever the meaning of the Genesis passage, it has historically been a truism that everyone's going to die sooner or later, and society has been arranged with that assumption in mind.  Dr. Vijg's claim that we shouldn't expect to live longer than 115 years or so just confirms what nearly everyone assumes, and puts a number on it.

However, these ideas are rejected by a small but vociferous group called transhumanists, who increasingly put their faith in the idea that humanity is shortly going to figure out how to extend useful, fruitful life indefinitely.  They don't always say "forever," but many of them mean that; one wing of the movement, the "immortalists," seems to think that we can figure out how to live forever.  While I understand why a person, especially one who doesn't believe in God, would get interested in extending human life—it's the only show in town, in their view—the danger in this movement is that in trying to move us toward a glorious paradisiacal future, they will unwittingly turn the present into Hell on earth.  This sort of thing went on during the Cold War, and continues in some places today, as millions were subjugated by Communist regimes which promised a wonderful future of abundance at the price of sacrificed freedoms now.  Lest we dismiss the transhumanists as a powerless fringe group, one of their leading lights, Ray Kurzweil, currently holds a high-level position at Google.

Perhaps the best thing is not to focus just on how long we can live, but how long we can live well.  Dr. Vijg takes this approach, saying that even if there is a natural limit of 115, there's a lot we can still do to prevent diseases such as Alzheimer's, osteoporosis, and other age-related maladies from robbing us of the enjoyment of those later years.  So the news that we can't live past 115 is not a counsel of despair, by any means.  Still, for those who think death is the end, it's not good news either.

However long one lives, the length matters less than what one does with it.  Old paintings of philosophers would sometimes show a human skull prominently displayed in the philosopher's study.  It was a reminder that life is limited, and every minute is one of a finite number of minutes we will have, so we should make the most of them.  Some of the best advice along these lines, for believers and non-believers alike, is from Psalm 90, which says, "So teach us to number our days, that we may apply our hearts unto wisdom."

Sources:  The New York Times article describing Dr. Vijg's work appeared on Oct. 5, 2016.  The quotation from Dr. Richards is from Vannevar Bush's book of memoirs Pieces of the Action (NY:  William Morrow, 1970), p. 130.  The patience-of-God interpretation of Genesis 6:3 can be found in online commentaries.  I also referred to the Wikipedia articles on transhumanism and Ray Kurzweil.

Monday, October 03, 2016

Freeing Information—But How?

Last week I attended a talk by philosopher William "Trey" Brant III on a crisis in the academic publishing world involving either massive data piracy, or a blow for the freedom of information, depending on your point of view.  I was particularly interested in this topic because earlier in the day, I'd found out that the electronic version of one of my books had been "cracked," meaning people could now download it for free. 

The phrase "information wants to be free" was coined by Whole Earth Catalog author Stewart Brand in a conversation with Steve Wozniak in 1984.  Brand was a techno-optimist who participated in one of the first online communities in the San Francisco Bay area, and hoped that computer networks would foster a kind of egalitarian new age of togetherness and sharing.  That was before big money got involved.

As Brant described, and as I have been able to confirm since, the world of academic publishing, specifically scientific and technical papers, is now an exceedingly profitable one.  Here is the way it works.  People who write academic papers are not often great entrepreneurs, although there are exceptions.  They are typically in a situation where they have to publish a certain number of papers a year to keep their jobs—the old "publish or perish" paradigm, which has taken hold in more and more universities and colleges throughout the world.  In the old pre-Internet days, there were only so many journals, because the expense of printing and shipping pieces of paper around was nontrivial, and so the whole enterprise had a kind of natural limit.  The main subscribers to academic journals were either libraries (which had to subscribe to all the big ones) or individual academics (who typically subscribed to only one or two journals that matched their specialized interests).  If you go back far enough, say before World War II, academic publishing of journals was a very small business, not worth the time it would take for any major publisher to fool with. 

But then big money came to the hard sciences, professional organizations grew, and the Internet came along.  Now all you need to set up a journal is a website and connections to some academics willing to edit the thing.  And many of the established journals have now been liberated from the hard-copy page limits and can easily publish 10,000 pages a year—it's just more bits, not more trees.  So it's gotten a lot cheaper to run an academic journal, but it hasn't gotten any cheaper to subscribe.

Publishers such as Elsevier (full disclosure:  my latest technical paper was published through an Elsevier journal) have their profitable cake and get to eat it too.  The academics who write their content send their papers in for free, or in some cases even pay the publishers page charges.  The other academics who review the papers also usually review for free.  And then the publisher gets to charge five-figure subscription fees to the libraries of the same institutions where the papers were generated.  It is indeed good to be an academic publisher these days.  At least until Alexandra Elbakyan came along.

In 2011, Elbakyan was a neurotechnology graduate student in Kazakhstan, dismayed by the charges she had to pay per research article to places like Elsevier unless she was affiliated with a university whose library had a subscription.  (Anybody can get individual articles from these publishers, but unless you have a connection to a research library, they will ask you to pay on the order of $30 US per article.  Academics such as myself are shielded from these costs, which are borne by the libraries.)  So she found a willing hacker or two and started something called Sci-Hub.  According to Dr. Brant, you can find millions of documents on it that you'd otherwise have to pay for, and outside of North America and Northern Europe, especially in Russia and China, nobody fools with the official academic publishers any more.  They just go to Sci-Hub.

This is not a stable situation.  No matter what treaties and agreements say, if Country A harbors some folks who are violating Country B's copyright laws, and Country A doesn't want to cooperate, there's not much that anybody in Country B can do.  If everybody switches to using Sci-Hub, the academic publishers' revenue streams dry up and the whole system collapses.  It hasn't happened yet, but the house of cards is teetering.

The question is, who should pay for academic publishing, and how much is fair to charge for it?  Saying the free market will decide isn't going to work, because we're not talking about a commodity like oil or wheat.  Each academic paper is unique, and there are studies showing that only two or three other people ever read a typical journal paper anyway.  But if it's not in principle available to everybody, you can't say it's really published, so the availability has to be there.  In a way, academic publishing is a kind of vanity press, but one in which the writers don't typically pay up front.  Recently the concept of open-access publishing has made some headway, in which the author pays a lump sum (typically on the order of $1000 US), and the publisher promises to keep the paper on the Internet forever.  To my mind, that goes too far the other way, in that your typical English professor at a lowbrow college is not going to have that kind of money, and neither is his department.

I don't know what the solution is.  Obviously, the peer-review process is still necessary, in which qualified experts pass judgment on what should be published, and it costs something to organize that and put papers in shape to go online.  But I seriously doubt that it costs as much as places like Elsevier are charging.  Maybe the pressure brought to bear by free-access sites such as Sci-Hub will lead to lower prices.  Or some mechanism or international agreement may be found by which people will still have access to the information they need, but at a price which reflects something closer to the true cost of the service, and not just whatever the traffic will bear.

And as for my book, well, there's something to be said for paper after all. 

Sources:  I thank Trey Brant for bringing my attention to this matter.  I referred to the Wikipedia articles on "Information wants to be free" and "Sci-Hub."  And no, I'm not going to tell you where to find my book for free, or what Sci-Hub's current domain name is.  You have to find those on your own.

Monday, September 26, 2016

Fracking and Earthquakes: The Tightest Link Yet

Stanford scientists have found the best evidence so far that injections of wastewater from hydraulic fracturing (fracking) oil and gas wells definitely cause earthquakes.  The next question is, how will the Texas Railroad Commission and the oil and gas industry respond?  But first, the scientists' study.

As readers of this blog may know, fracking involves the injection of special mixtures of water and proprietary stuff at extremely high pressure into specially drilled wells that penetrate oil- and gas-bearing formations which normally would not produce enough to be worth drilling into.  The producing wells are not the problem.  The problem is that a byproduct of the process is a huge amount of wastewater contaminated with salt, chemicals, and sometimes even radioactive stuff, and these days you don't just dump it out on the ground or into a nearby stream.  The drillers gather it up with tank trucks and ship it to disposal wells, where it is squirted several kilometers deep into rock formations under tremendous pressure. 

It's these disposal wells that seem to be associated with spates of earthquakes in north Texas and Oklahoma, which up to 2000 or so were some of the most earthquake-free areas in the U. S.  Fortunately, most of the earthquakes have been small—around 3 on the "moment magnitude" scale, which replaced the old Richter scale in the 1970s.  But a 4.8-magnitude quake on May 17, 2012 in the East Texas town of Timpson (about halfway between Lufkin and Longview) knocked down a brick wall, and turned out to be the largest quake ever recorded in that area.
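
To get a feel for the difference between a magnitude 3 and a magnitude 4.8, the standard rule of thumb is that the energy a quake radiates scales as 10 raised to 1.5 times the magnitude.  A quick Python calculation:

```python
# Rule-of-thumb energy comparison on the moment magnitude scale:
# radiated energy scales roughly as 10**(1.5 * magnitude).
def energy_ratio(m_big, m_small):
    return 10 ** (1.5 * (m_big - m_small))

print(f"A magnitude-4.8 quake releases about {energy_ratio(4.8, 3.0):.0f}"
      " times the energy of a magnitude-3.0 quake")
# -> about 501 times
```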

Stanford geologist William Ellsworth, working with an international team of geophysicists, remote sensing experts, and others, decided to build a model of the subsurface rocks to see if they could reproduce the conditions that may have led to the earthquake.  Fortunately, that part of Texas is well understood geologically, and Ellsworth's team obtained data on how much wastewater was injected into two pairs of wells, each pair at a different depth.  They also found and enhanced satellite-radar data that can measure movement at the earth's surface as slight as 1 millimeter per year.  They put all this data into a "poroelastic layered Earth model," meaning they accounted for porosity and elasticity—how holey and how flexible the rocks are.  They also knew about existing faults, and ran their model to predict how much the surface might bulge after getting some 800,000 cubic meters of wastewater injected into it per year for several years.  Then they compared their model's predicted bulge to the measured bulge, which was several centimeters, and got pretty good agreement between their model and the actual satellite data.

That told them that another number their model produced—the increase in pore pressure—was also probably right.  When pore pressure increases by about 10 times atmospheric pressure (1 megapascal or more), this has been shown to cause earthquakes.  The mechanics are complicated, and I'm not a mechanical engineer.  But basically, the reason fault lines under stress don't slip is that there is a lot of force squeezing the two sides together, and the resulting friction keeps things stationary.  When the pore pressure rises by something on the order of 1 megapascal, the pressurized fluid in the rock's pores partially counteracts that squeezing force, the friction drops, and the fault can start to slip.  And slip it did, causing Timpson's quake and others.
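
For readers who want a bit more mechanics than I can supply, the standard bookkeeping here is the Coulomb friction law with effective stress: a fault slips once the shear stress exceeds the friction coefficient times the squeezing stress minus the pore pressure.  Here is a minimal Python sketch, with stress values I made up for illustration (they are not the Timpson study's numbers):

```python
# Effective-stress sketch of why injection can unlock a fault.
# Coulomb friction says a fault slips when
#     tau > mu * (sigma_n - p)
# where tau is shear stress, sigma_n the normal (squeezing) stress,
# p the pore pressure, and mu a friction coefficient.
# All values below are illustrative assumptions, in megapascals.
mu = 0.6         # typical friction coefficient for rock
tau = 20.5       # shear stress trying to slide the fault
sigma_n = 35.0   # normal stress squeezing the fault shut

for p in (0.0, 0.5, 1.0):   # pore-pressure increase from injection
    strength = mu * (sigma_n - p)
    status = "slips" if tau > strength else "holds"
    print(f"pore pressure +{p:.1f} MPa: strength {strength:.2f} MPa -> fault {status}")
# With these numbers the fault holds until the increase nears 1 MPa.
```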

Although most of the bulging occurred around the eastern pair of wells, the western wells were where the earthquake happened.  Ellsworth's team could explain this by citing differences in the porosity and elasticity of the rocks around each set of wells. 

So the scientists have made a model of the rocks under Timpson, injected their rock model with wastewater, and observed both a surface bulge that matches what satellites actually measured, and noted pore-pressure changes of a size that is known to cause earthquakes elsewhere.  And in fact, an earthquake happened.  Looks pretty conclusive to me.  But I'm not a Texas Railroad Commissioner.

What have railroads got to do with oil and gas production?  It's a long story, but basically, the Texas Railroad Commission (TRC), which originally did regulate railroads, backed into the business of granting permits for oil and gas production in the 1930s, and as time went on nobody has had the temerity to change its name.  It apparently did some useful work in the 1930s by putting the brakes on absurd overproduction and keeping oil prices from vanishing.  Nowadays, its regulatory duties are different, and involve environmental concerns as well as the usual support and encouragement of the industry it is charged with regulating. 

In reports describing the Stanford study, attempts by reporters to get a reaction out of the TRC were initially unsuccessful.  The Commission's mission statement has three bullets, saying it serves Texas through (1) "our stewardship of natural resources and the environment" (2)  "our concern for personal and community safety" and (3) "our support of enhanced development and economic vitality for the benefit of Texans."  Judging by the Commission's past reluctance to admit any causal link between fracking and earthquakes, their mission statement's bottom line, about enhancing development and economic vitality, appears to be taking precedence over the other two items, just as a company's bottom line tends to take precedence over other concerns.

Ellsworth and company have confirmed what many other geologists, as well as numbers of ordinary citizens, have been suspecting for a long time.  Most, if not all, of the increased earthquake activity in regions near wastewater injection wells can probably be attributed to those wells. 

By and large, Texans are reasonable people.  Fracking has been an economic blessing to many parts of the state, and it's unlikely that anything like the blanket fracking bans in New York and Maryland could happen here.  But now that there is reasonably good evidence of the connection between wastewater wells and earthquakes, it would only be reasonable for people who have lost property or been injured in such events to ask for compensation from the owners of the wells.  Of course, any time lawyers get involved, reason may fly out the window, but I think we can work these issues out without either continuing to deny that there's any association at all, or saying that fracking is an invention of the Devil and must be abolished from the planet.  Let's hope so, anyway.

Sources:  I referred to a report published online by the Dallas Observer on Sept. 23, 2016, a report in the Dallas Morning News, and the paper by M. Shirzaei, W. L. Ellsworth, K. F. Tiampo, P. J. González, and M. Manga, "Surface uplift and time-dependent seismic hazard due to fluid injection in eastern Texas," Science, vol. 353, issue 6306, pp. 1416-1419, as well as the Texas Railroad Commission website.

Monday, September 19, 2016

Time To Make Airbags Optional?

For at least a couple of years, we have known that certain airbag inflators made by the Japanese firm Takata have been exploding like small bombs, sending shrapnel into drivers and passengers who otherwise would almost certainly have survived the collisions that set off the airbags.  A recent investigative article published in the New York Times says that at least fourteen people have died as a result of exploding airbags.  There's no good way to die, but getting killed by a defective safety device has to be one of the worst.  And especially if the company making the things was doing a coverup to keep selling them, as the Times reports.

The coverup was revealed in testimony taken as part of a lawsuit filed by Honda against Takata.  The active chemical in many Takata inflators is ammonium nitrate (AN), the same stuff that was responsible for the explosion in West, Texas in 2013.  One of the main attractions of AN is that it's cheap, which is one reason that Takata has historically been so successful in beating out competitive inflator companies.  But AN easily absorbs water and can undergo changes when subjected to heat or humidity that make it much more likely to detonate when ignited.  There's a difference between fast controlled burning, which is what an inflator is supposed to do, and detonation, which is a practically instantaneous explosion that will shatter almost any container.  And preventing AN from detonating involves keeping all moisture away from it for as long as it's in the car, which can be many years. 

Accordingly, automakers buying Takata inflators insisted that the company do very sensitive leak tests of its containers.  These tests involved injecting a certain amount of helium gas into a container before sealing it, and then putting the whole thing in a vacuum chamber attached to a helium mass spectrometer, an instrument that can detect as little as a trace of helium—a gas present only in tiny amounts in ordinary sea-level air.  It's a great system when it's not abused.  The problem was that the containers being tested at Takata's plant in LaGrange, Georgia, kept flunking.

So the engineers decided to fudge the results.  They pumped down the vacuum chamber several times, "testing" the same container repeatedly until it ran out of helium.  Then they checked it off as passing, put new bar codes on it to conceal what they'd done, and reported that the container passed.  One engineer involved in this scheme complained to his manager about the deception, and was told "not to come back to any more meetings."  He subsequently quit the company.

Up till now, it looked like the worst that Takata was guilty of was gross incompetence, but now there is evidence of outright fraud. 

When I blogged about this matter in 2014, I fully disclosed that both of the cars my wife and I drive are affected by the airbag recall.  We are certainly not alone.  It now looks like over sixty million of the suspect inflators are out there somewhere, and at least nine separate carmakers are struggling to manage the most massive and nightmarish recall in automotive history.  Right now I am waiting to hear from our local Honda dealer about a recall notice we received for our Element last July, telling us that the passenger-side inflator was suspect and we should get it replaced.  Only, they didn't have replacement parts yet, so in the meantime, try not to let anybody sit there.  Now and then I still live dangerously and sit in the passenger seat anyway.  I can only imagine what this has done to the resale value of the vehicle.  So we'll hang onto it until Honda gets a replacement inflator for it.  But I'm not exactly happy to learn from the Times article that the replacement inflator may use ammonium nitrate too.

This whole sad situation brings up a question that was supposedly settled back in the 1990s, when airbags became mandatory on new cars in the U. S.  Is the incremental protection airbags provide worth the hassle, and now the hazards, they involve?

In a calculation performed back in 2005, a writer at the libertarian website Freakonomics claimed that he'd figured out how much it costs to save a life with a seatbelt versus an airbag.  I don't know the details of his calculations, but the results are astonishing.  Seatbelts are pretty cost-effective as safety devices go.  It's about $30,000 to save a life with a seatbelt.  Airbags?  Not so much.  They are vastly more complicated and are effective mainly in head-on collisions.  So the cost to save a life with an airbag is—fasten your seatbelt—$1.8 million.  Now this fellow said that $1.8 million still isn't bad by regulatory standards.  If it was my life saved by an airbag, I would be glad that somebody, somewhere spent that $1.8 million.  But that calculation was done before the massive airbag recall happened, and so you would have to add on to that figure however many millions of dollars have been spent by the automakers on the recall, not to mention the time, anxiety, and waste associated with such recalls.  And the isolated but not negligible accidents involving deaths or injuries directly attributable to airbags.  I've heard that some people have simply stopped driving cars with defective airbags.  This is a little extreme, but if you have another car you can use, I can understand.
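
Since I don't know the details of the Freakonomics calculation, here is only its general shape, in Python, with placeholder inputs I chose so the arithmetic lands near the $1.8 million figure; they are not the writer's actual numbers:

```python
# The shape of a cost-per-life-saved calculation: total money spent on
# the safety device across the fleet, divided by the lives it saves.
# All inputs are hypothetical placeholders, not the original's data.
def cost_per_life_saved(cost_per_car, cars_equipped, lives_saved):
    return (cost_per_car * cars_equipped) / lives_saved

airbag_cost = cost_per_life_saved(
    cost_per_car=400,            # assumed added cost of airbags per car
    cars_equipped=45_000_000,    # assumed number of equipped cars
    lives_saved=10_000,          # assumed lives saved over fleet life
)
print(f"Cost per life saved: ${airbag_cost:,.0f}")   # $1,800,000 here
```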

It has always seemed a little dubious to me to install shock-triggered explosive charges in cars, even if they are proved to be a lifesaving measure.  And now we have even more reason to wonder whether airbag use ought to be made optional.  Because even properly working airbags can be hazardous to small children, I believe some cars were equipped to turn off the airbag if the weight of a child was detected on the corresponding seat.  The way things are now, if I knew how to disable the airbags in my cars, I'd do it, but they're so complicated nowadays I'd have to go through half a year of technician school, and even then I'd probably end up setting the thing off when I tried to disconnect it.  You shouldn't have to be qualified as a bomb disarmer to work on your own car, but that's the way it is these days.

In the meantime, let's hope that whoever is making the replacement airbag inflators does a really good job this time, and the millions of car owners around the world driving around with potential bombs can get rid of them.  But maybe it's time to reconsider the whole question of whether using airbags is something that a government should order you to do, or something that is best left to the decision of the consumer.

Sources:  The article "A Cheaper Airbag, and Takata's Road to a Deadly Crisis" by Hiroko Tabuchi appeared in the Aug. 26, 2016 online edition of the New York Times.  I also referred to a useful website where updates on the crisis over the last two years have been collected, to the Freakonomics piece on the cost of seatbelts versus airbags, and to my previous blog on this subject, which appeared on Oct. 27, 2014.

Monday, September 12, 2016

Alternatives to Facebook: The Mondragon Solution

Whatever it is, Facebook is the largest of its kind in the world.  Having surpassed by its own measures the one-billionth-user mark in 2012 after only eight years in existence, it can boast that about one out of every seven people on Earth is a Facebook user.  (Just to get this out of the way, yours truly isn't one of them, but my wife is, and I trust her to tell me anything she sees on it that I need to know.)  Depending on your point of view, the fact that Facebook connects that many people—in principle, at least—is either one of the greatest communications achievements of all time, or a threatening sign that a private corporation now holds tremendous power over the lives of more people than most governments do.  This being an ethics blog, I'd like to look at the potential downside for a moment.

As thinkers all the way back to Aristotle have recognized, mankind is a social animal.  What others think and say about us is of vital, even fundamental, importance.  And if a significant part of one's social life takes place through media such as Facebook, social media become more than just one of a number of things you do, like gardening on weekends.  It becomes a near-essential part of your life, especially if you are younger.  A co-worker recently told me about a surprising result of a survey of college-age job seekers.  When asked if they would work for a company that prohibited them from using Facebook on the job, a substantial number of them said no.  As I haven't been able to track down the original survey, this may be an urban legend, but it rings true.

But the essential sometimes becomes the impossible.  I have known several people who have gotten so much grief from people commenting on their Facebook posts that they have quit using it altogether, either temporarily or permanently.  In the more extreme cases, otherwise responsible people have been forced out of jobs for actions which are entirely legal but go crossways to the opinions of large groups—or should I say "mobs"—who enact the online equivalent of a lynching.  As just one example, I can cite the case of Brendan Eich, developer of the JavaScript programming language, who became CEO of Mozilla Corporation in March of 2014.  Within days, a tweet revealing that he had contributed $1000 to the campaign for California's Proposition 8, which banned same-sex marriage, led to an online shaming campaign by gay activists and ultimately to his resignation, less than two weeks after taking the job.  And there have been rare but widely reported cases of cyberbullying (not necessarily involving Facebook) leading teens to commit suicide.

Here's where Mondragon comes in.  In 1956, students at a Spanish technical college founded by a Catholic priest named José María Arizmendiarrieta formed a company that was eventually named after the town where it was based, Mondragón.  This company was not organized along the usual capitalist-ownership lines.  It was a co-operative enterprise and purposely incorporated the principle of subsidiarity, which is a concept from Catholic social teaching that says basically, if you can do it locally, do it locally.  From a humble beginning as a kerosene-stove factory, the Mondragon organization has grown to be one of the largest employers in the Basque part of Spain, and still substantially adheres to the principle of worker ownership, although not every single employee automatically gets a share of the company's profits (or losses, as the case may be). 

In a recent issue of the journal of technology and society The New Atlantis, Baylor University humanities professor Alan Jacobs makes the following proposition:  "If instead of thinking of the Internet in statist terms we apply the logic of subsidiarity, we might be able to imagine the digital equivalent of a Mondragon cooperative."  That may require a little unpacking.

By "statist," I think Jacobs means that vast outfits like Facebook take on the nature of a nation more than a company. In fact, elsewhere in the same article he refers to Facebook as "a state—a Hobbesian state."  More unpacking:  Philosopher Thomas Hobbes (1588-1679) was famous for writing Leviathan, in which he argued that societies form political communities because the alternative was dog-eat-dog chaos in which life would be "solitary, poor, nasty, brutish, and short."  In exchange for the modicum of protection that belonging to society provides, we agree to let go of certain kinds of freedom or autonomy—in the case of Facebook, you are trusting the organization with highly personal things such as photographs, death notices, and other pieces of yourself. 

In the space I have left, I can only sketch out what a Mondragon-cooperative Facebook-like entity would be like.  For one thing, the money it made would return to the users.  For another thing, it would be a lot smaller than Facebook.  And it might connect only people who are really and truly friends (or relatives), not just some random person who happens by your Facebook site and thinks you are clever, stupid, or whatever. 

As far as I know, there is no fundamental technical barrier to making such an entity.  Right here in my neighborhood, people use some email-related software to send messages to others within a few blocks, and it's already done me a lot more good than Facebook.  The thing Facebook has going for it is the famous leverage of the network effect, first discovered by the telephone companies around 1900.  Because the addition of one more user to a network costs only one unit (say) and leads to everybody on the network having one more person to talk with, economists argue that costs in a network go as N and the value of the network goes as N squared.  While this might have been true in the early days of hardwired networks and has become a universally-assumed truism today, it is not clear to me that with modern software capabilities that old rule still applies.  Of course, real friends and relatives can be nasty too, but at least in the case of a Mondragon-like social media network, the damage would be much more limited.
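
The arithmetic behind the network effect is easy to sketch in Python, counting possible person-to-person connections as the usual stand-in for a network's value:

```python
# Network-effect arithmetic: cost grows linearly with users, while the
# number of possible pairwise connections grows as N*(N-1)/2, i.e.
# roughly as N squared.
def possible_connections(n_users):
    return n_users * (n_users - 1) // 2

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9,} users: cost ~ {n:,} units, "
          f"value ~ {possible_connections(n):,} possible connections")
```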

A pipe dream?  Maybe.  But I think it's worth considering.

Sources:  Alan Jacobs' article "Attending to Technology:  Theses for Disputation" (pp. 16-45) appeared in the Winter 2016 issue of The New Atlantis.  I also referred to a website detailing six cyberbullying suicides and to the Wikipedia articles on Facebook, Brendan Eich, and the network effect.