Monday, August 26, 2013

Of Pecans, Profits, and Piety


Broadly speaking, the system of international trade we live under is a kind of technology.  It’s certain that without modern engineered means of transportation and communications, international markets would be much less significant than they are today.  And while the particular story I’m going to relate pertains to the oldest technology in human history, namely farming, the lesson behind it applies to many fields of engineering.

The pecan tree is the state tree of Texas.  (In case you’ve never heard a Texan say it, it’s pronounced “puh-cawn”).  Pecan trees can grow to a height of 100 feet (30 meters) or more, live up to 1000 years, and for most of those years can produce an abundant annual crop of tasty, highly edible nuts.  When my late grandfather moved to Fort Worth, Texas in 1930, he planted a pecan tree in his back yard.  The last time I visited his house (currently occupied by other relatives), that tree was still going strong, providing shade for most of the back yard and a good bit of the house too.  Pecan trees are native to Texas and grace thousands of acres of river banks and bottom lands, besides furnishing an important food crop to pecan growers who grow hundreds of different varieties.  Pecans are sold both for direct consumption, either in the shell or shelled, and also as ingredients for processed foods that benefit from the addition of chopped or blended pecans.  But until about a decade ago, the pecan market was almost entirely domestic, with a large share of sales taking place in Texas.

Then someone in China caught on to the fact that the huge market there for snack nuts, sold often in vending machines in locations such as gas stations and convenience stores, might benefit from imported pecans.  Up until then, most of the snack nuts sold were Chinese walnuts, but the cheaper pecan tastes just as good (in my opinion, anyway), and some clever Chinese importers introduced the new nut to Chinese consumers around 2001.

They liked it—liked it so much that since 2007, shipments of pecans to China from the U. S. (which includes exports from relatively new pecan-growing states such as Georgia and New Mexico as well as Texas) averaged almost 60 million pounds annually.  But there is a fly in this profitable ointment, which is the fact that the Chinese market wants a particular kind of “improved” pecan, not the rich variety of our native pecans.

According to an article in Texas Monthly by James McWilliams, the hybrid improved pecans have a uniform size, uniformly thin shells, and uniform quality.  These improved varieties will work in the Chinese vending machines, which can’t handle the variation in shapes and sizes of native varieties.  Texas pecan growers have known about the Chinese market for years, but so far they have exhibited a marked reluctance to chop down their existing groves, many of which are native varieties, to plant the improved type that produces machine-vendable pecans.   In so doing, they are losing year by year a potential market that could allow Texas to surpass the newer pecan-growing states and once more lead the nation in pecan exports.

That takes care of pecans and profits; now for the piety.  I have been reading a book called Food & Faith, a work of theological musings about the connections between eating and Christianity.  The author, Norman Wirzba, relies on the works of agrarians such as Wendell Berry as well as more explicitly theological writers.  But I was struck by the following passage from the book as expressing exactly what is going on between the Chinese pecan market and Texas pecan growers:  “Food that may have begun in the ground [or on a tree] must lose all traces of soil, sunlight, and fragile plant and animal life so that it can be redesigned, engineered [!], improved, packaged, stored, and delivered in whatever ways the food producer sees fit.”  Wirzba’s book, among many other things, is an impassioned plea to stop thinking about food and eating merely in material and economic terms. 

Viewed one way, it only makes sense for Texas pecan farmers to replant their groves with machine-friendly pecan trees, for the more efficient production of pecans that will contribute to the efficient international trade that efficiently fills vending machines with pecans that Chinese consumers can eat to fuel the machines called their bodies. 

But viewed another way, there is incalculable value in tending native pecan trees which are so deeply connected at multiple levels to a part of the world that Texans, at least, view as God’s country.  Not that His title to the rest of the world is defective in any serious way.  But as a recent arrival here from California said to me the other day, “Texans seem to have a loyalty to their state that I haven’t noticed anywhere else.”  And native pecan trees are part of what makes Texas the place it is.  I find reassuring the fact that just down the road from where I live, in Seguin, you can visit Pape’s Pecan Nutcracker Museum, and view both stationary and portable World’s Largest Pecans.  One is a concrete model on a pedestal on the town square, and the other, welded out of steel, is mounted on a trailer for convenient towing in parades.  And I would like to think that at least part of the reason that Texas pecan growers haven’t done the economically sensible and efficient thing of whacking down all their old-fashioned native trees to plant new ones for the Chinese market, is that, well, there’s more important things than money. 

It takes ten years for a new pecan sapling to mature enough to start producing.  That induces a natural tendency in pecan farmers to take the long view.  Ten years from now, the Chinese may have dropped pecans for Brazil nuts, for all I know.  But the rich biological and cultural heritage represented by the native pecan trees of Texas will live on, I hope, for many generations to come.

Sources:  I learned about the Chinese pecan market from James McWilliams’ article “Shell Game” in the Sept. 2013 edition of Texas Monthly.  It is an excerpt from his book The Pecan:  A History of America’s Native Nut to be published in October 2013.  I also referred to an online article about the pecan market posted by Nature’s Finest Foods Ltd. (a brokerage firm) at http://www.nffonline.com/industry-news/2013/06/19/pecan-exports-china-falter and an item by the Whitney Consulting Group posted on Google Docs (account required) at https://docs.google.com/file/d/1l9XwHsObwgS8O9OERlQwUqaF_IEoBXSVhQLpcnLvwnLpB_fwr-kY1pSeeVdl/edit.  Norman Wirzba’s Food & Faith:  A Theology of Eating was published in 2011 by Cambridge University Press.

Monday, August 19, 2013

Guarding U. S. Nuclear Facilities: The ABCs of DBTs


Earlier this summer, I blogged about a small but determined team of anti-nuclear protesters, including a nun, who managed to get uncomfortably close to a supposedly secure stockpile of nuclear material maintained by the U. S. Department of Energy in Oak Ridge, Tennessee.  Fortunately, the most damage they caused was spray-painting some slogans on a wall, but if they had been terrorists determined to steal enough enriched uranium to make a nuclear weapon, the story might have ended differently. 

A recent report by a group of researchers at the LBJ School of Public Affairs at the University of Texas at Austin points out what they consider to be serious flaws in the way we currently establish levels of security for the various nuclear facilities in the U. S., which range from small research reactors and commercial nuclear power reactors up to full-scale armed nuclear weapons.  According to their report, the present method of deciding how much security is enough is based on something called the Design Basis Threat (DBT).  While the basic idea seems sound, the devil, as always, is in the details.

In order to protect something, you have to know (or guess) what you’re protecting it against.  The way the Design Basis Threat approach works is as follows.  Say you run a small research-type nuclear reactor, the kind operated by many universities, including for example the University of Texas at Austin.  You go to the appropriate agency, in this case the Nuclear Regulatory Commission, and ask what the appropriate Design Basis Threat is for your facility.  It turns out that “research reactors generally do not have to protect against radiological sabotage or provide an armed response to an attack.”  The Design Basis Threat is presumably an attack so feeble that the usual class of security guards found on college campuses would be able to handle it.  So you just go with the minimal kind of security you will typically find at a high-dollar lab of any kind in a public university, and you’re set.

On the other hand, if you run a large commercial power reactor near, say, New York City, such as the Indian Point plant on the Hudson, you are told that your Design Basis Threat includes “multiple groups attacking from multiple entry points; willing to kill or be killed; possessing knowledge about target selection; aided by active and/or passive insiders; employing a broad range of weapons and equipment, including ground and water vehicles.”  This typically means you have to maintain a dozen or so military-style armed guards at all times who are ready to fight off an attack by people who intend either to steal fissionable material or to blow up the place and spread the hot stuff around.  However, no commercial nuclear facility is required to be secure against an attack from the air. 

The requirements for safeguarding nuclear weapons, generally held only by the U. S. military, are even more stringent, as you might imagine. 

Anyone familiar with risks and accident histories knows that for every major disaster in a reasonably complex system, there are usually several less damaging minor incidents that can be called near misses or close calls.  The Oak Ridge intrusion I described in my May 27 post is just such a near miss, and to my mind it indicates that there may be cracks in the armor with which we protect our nuclear assets.  And some of these cracks may be due to the uneven way Design Basis Threats are assigned, depending on the size and nature of the nuclear facility.

The main criticism that the UT Austin researchers mount against the current DBT regime is that while the larger facilities may be more likely to attract certain types of attacks, the nuclear material in the smaller facilities could be just as dangerous if stolen.  And the very fact that research reactors are not heavily guarded the way commercial nuclear power plants are makes the smaller operations more attractive to a potential terrorist, not less, if all they are trying to do is obtain a fissionable amount of material.  The UT Austin researchers point out that there are several examples of regulatory agencies backing down on the level of the assumed DBT because of industry’s protests that the resulting required protective measures would be too expensive.

This is one of those matters that may never be resolved unless we wake up some morning to the news that a major attack on a nuclear facility has succeeded.  And I hope that never happens.  But I can’t help but agree at least with the report’s claim that some of the ways that DBTs are currently established are lacking in logic.  For example, the Nuclear Regulatory Commission has stated that current nuclear plants have enough strength in their existing containment vessels to withstand aircraft attack without any further enhancements.  But on the other hand, it has made a rule for new nuclear-plant designs:  designers must show how the plant will withstand the intentional crash of a commercial airliner into it.  Probably the truth of the matter is that nobody knows what would have happened if the 9/11 attackers had targeted the Indian Point plant instead of the symbolically much more attractive World Trade Center towers.  But it’s clearly something we don’t want to learn about from experience.

The UT Austin report will probably be criticized as an academic armchair exercise by those who spend their lives in the nuclear industry.  But academics who are remote from day-to-day issues in an industry can nevertheless bring different and sometimes valuable perspectives to a problem, and so I hope the report’s suggestions of how to improve nuclear security in the U. S. contribute to the ongoing challenges of living with nuclear materials, benefiting from them where possible, and not allowing them to fall into the wrong hands.

Sources:  I referred to a news article about the Nuclear Proliferation Prevention Project’s report which appeared on the CNN website on Aug. 15, 2013 at http://www.cnn.com/2013/08/15/us/nuclear-plants-security/.  The Project’s working paper itself can be accessed at http://blogs.utexas.edu/nppp/files/2013/08/NPPP-working-paper-1-2013-Aug-15.pdf.  Full disclosure:  I hold a Ph. D. in electrical engineering from the University of Texas at Austin and a part-time research professor appointment there. My blog on the protesting nun and her group appeared on May 27, 2013. 

Monday, August 12, 2013

Cybercrime: Prevention or Punishment?


Last week I needed an item at a Harbor Freight store in Austin.  Harbor Freight deals in low- to mid-priced tools imported from China, and unless you’re looking for something that will last for decades, it’s a good place to shop.  As soon as I walked in the door, one of the cash-register attendants came up to me and said, “Just to let you know, our registers are down and all we’re taking is cash right now.”  I’m one of those troglodytes (look it up) who prefers cash anyway, so this didn’t bother me other than the fact that I had to wait in a long line that was backed up because the sales clerk had to look up each item’s SKU on a handheld unit, write down the price by hand, add up the total on a calculator, and make change. When I paid for my item, the clerk asked me if I minded not getting a receipt.  I replied, “Not as long as somebody doesn’t stop me at the door for shoplifting.” 

While I was waiting in line, I saw posted next to the register a notice from Eric Smidt, Harbor Freight’s president.  It was about a recent incident of hacking that resulted in the theft of a large number of their customers’ credit-card numbers, and said that the firm was taking every possible step to deal with the problem.  Whether this issue had anything to do with their registers going down that day is unclear, but it got me to thinking about the differences between old-fashioned analog theft and cybercrime.

Now if dozens of Harbor Freight customers had been coshed on the head as they left the stores and had their wallets taken, I bet you would have heard about it in the news.  Old-fashioned personalized one-on-one crime like that is much more likely to be reported by the injured individual, and because the criminals tend to be local, the local jurisdiction responsible has a fairly straightforward job on its hands, once the crook is identified.  But those responsible for the Harbor Freight data breach could be literally anywhere in the world that has an Internet connection, which means just about anywhere in the world. 

Cybercrime is a lot less risky.  According to online reports, the Harbor Freight breach may have been one of 2013’s largest in terms of numbers stolen, comparable to a similar attack that netted about 2.4 million customer debit and credit card numbers.  The company found out about the attack in June, when credit-card firms began noticing a lot of fraudulent charges to accounts owned by Harbor Freight customers.  Apparently the hackers penetrated the company’s main network and gained access to data from all 400 of its retail stores.

There are several ways the criminals can profit from their ill-gotten numbers.  The retail way is to use the cards themselves to buy stuff they want.  My own credit-card number was stolen this way once, and in the list of charges that my bank seriously doubted I’d made were things like services at an upstate New York spa and jewelry charged to a Las Vegas store.  But the big money is in the wholesale underground exchange of hard cash for hot credit-card lists, and I suspect that is what the Harbor Freight crooks did with their numbers.

Because it’s so hard to catch and convict cyber criminals, most companies rely instead on anti-virus software, firewalls, and other protective measures rather than spending a lot of effort in working with law enforcement personnel to catch the perpetrators.  But a recent study by a group of researchers based in Cambridge, England points out that this may not be the most cost-effective approach. 

The study shows that the amount of money lost per person to number thievery such as occurred with the Harbor Freight customers is in the range of a few dollars per customer per year.  On the other hand, the money spent by firms on computer security measures may exceed what is lost to this type of cybercrime.  The authors say it might be cheaper overall to spend more money on tracking down the relatively small number of cyber criminals, and less on security measures.
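The study’s argument is at bottom a back-of-the-envelope comparison, and it is worth seeing how little arithmetic it takes.  The figures below are illustrative placeholders of my own, not numbers from the Cambridge report:

```python
# Illustrative cost comparison in the spirit of the Cambridge study.
# All figures are hypothetical placeholders, not the study's data.

customers = 2_400_000            # scale of a breach like the one described
loss_per_customer = 3.00         # direct fraud loss per customer per year (assumed)
security_spend = 10_000_000      # a firm's annual defensive spending (assumed)

total_loss = customers * loss_per_customer
print(f"Direct losses:  ${total_loss:,.0f}")     # -> Direct losses:  $7,200,000
print(f"Security spend: ${security_spend:,.0f}")

# The study's point: when defensive spending exceeds the direct losses it
# prevents, shifting some of that money toward catching the (few) perpetrators
# may be more cost-effective than buying still more defenses.
if security_spend > total_loss:
    print("Defense costs more than the losses it prevents.")
```

Of course, real defensive spending also prevents losses that never show up in the statistics, which is why the authors frame their conclusion as a rebalancing rather than an abandonment of security measures.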

That is good advice as far as it goes, but it neglects the hard problem of jurisdictional diversity, as you might call it.  Say you can locate the Harbor Freight perpetrators, and they turn out to live in a country that has a dysfunctional government that can’t enforce ordinary laws, let alone laws about cybercrime.  Short of mounting an armed invasion of the country to catch the crooks, a private firm or even another sovereign country has its hands tied.  Unless some effective international agreements could be made for the extradition of cyber criminals, and some uniform laws passed in every host country that make the same actions illegal everywhere, it will continue to be very hard to punish those who steal data across international boundaries.  Look at the trouble the U. S. government has had with Edward Snowden, who committed a data breach of NSA information right here in the U. S. and then ran off with it to Russia, which has recently granted him asylum.  Once international relations and antagonisms get mixed into a criminal act, things get vastly more complicated.

Overall, we benefit greatly from the worldwide coverage of the Internet for both global commerce and less quantifiable benefits such as the freedom to communicate political and cultural ideas across boundaries.  These benefits come at a cost, however, and it looks like unless the international jurisdiction problem can be addressed more effectively than it has been in the past, we will have international cybercrime with us for the foreseeable future.  And despite Eric Smidt’s assurances, which I’m sure are sincere, the next time I go to Harbor Freight I think I’ll bring cash along.  But I think I’ll ask for a receipt.

Sources:  A report on the Harbor Freight data breach can be found at the Bank Info Security website at http://www.bankinfosecurity.com/impact-harbor-freight-attack-grows-a-5970/op-1.  The Cambridge cybercrime report is discussed at gcn.com/Articles/2012/06/18/Cost-of-cybercrime-Cambridge-study.aspx.  And the difficulties of prosecuting crimes in different jurisdictions are described well by Deb Shinder at http://www.techrepublic.com/blog/it-security/what-makes-cybercrime-laws-so-difficult-to-enforce/.

Monday, August 05, 2013

The Things That Didn’t Happen To Flight 214


Just moments before Asiana Airlines Flight 214 was to land at the San Francisco International Airport on July 6, some passengers noticed that backdraft from the jet engines was kicking up seawater.  This usually doesn’t happen on normal approaches to Runway 28L, which extends from just behind a seawall that faces San Francisco Bay onto land.  A few seconds later, the main landing gear hit the seawall and sheared off.  After that impact, both engines and the tail section came off, carrying some passengers and crew with it.  The main fuselage slammed into the runway and spun almost completely around before grinding to a halt. 

Flight attendants sprang into action, assisting passengers who needed help in exiting the aircraft.  One injured girl was pulled from the plane by a first responder, only to be covered in firefighting foam from arriving fire trucks.  Sadly, another emergency vehicle’s driver failed to see her underneath the foam, and she was struck and killed.  Another passenger died at the scene and a third passed away a few days later from injuries.  All of the other 304 people aboard survived, including all the pilots and crew, although some sustained serious injuries.  After the plane was evacuated, a fire from an oil leak demolished much of the fuselage, but without injuring anyone.

Any fatal accident involving air travel is a tragedy—usually an avoidable one.  But this accident could have been much worse, and that fact carries with it some implicit good news. 

For one thing, the Boeing 777 involved is a model that was introduced in 1995, and this 2013 crash was the first in the type’s history in which passengers died in a flight-related accident.  Fatal incidents had occurred earlier, but they involved refueling or other ground-based situations.  This is an outstanding safety record compared to planes developed during the earlier years of aviation.

Another fact worth noting is that the landing gear was purposely designed to break away under a sufficiently large impact, rather than staying attached to cause a destructive nosedive.  We are familiar with breakaway traffic signs on highways, but I wasn’t aware until now that the same principle has been designed into landing gear.

Finally, the fact that the fuselage endured the abuse of skidding thousands of feet down the runway sans landing gear and kept the remaining fuel from catching fire, staying together long enough for everyone to escape, is a testimonial to its structural engineering.  I am no mechanical engineer, but somebody did something right to make a fuselage that would hang in there during such a trial.

There are things that no airframe can endure, of course.  If the plane had encountered a large immovable object, for example, the outcome might have been quite different.  An accident similar in some ways to the Asiana Airlines crash took place on August 2, 1985.  A Delta Airlines Lockheed L-1011 with 163 people on board was caught in a microburst and windshear during a thunderstorm at the Dallas-Fort Worth Airport during its final landing approach.  The sudden loss of airspeed and accompanying downdraft forced the plane to the ground north of the runway, where it skidded into some giant water tanks and exploded.  Only 26 people survived.  Windshear detectors have since been installed at many airports, and pilots are much more aware of the dangers of such conditions, so the cause of that particular crash is much less likely to occur these days.

The cause of the Asiana crash is still under investigation, but attention has been focused on the flight crew, which consisted of three captains and a first officer.  The man actually flying the plane at the time of the crash had less than fifty hours’ experience on 777s, and was being instructed by the pilot in command, who occupied the co-pilot’s seat at the time.  The runway’s instrument landing system (ILS) vertical glide slope was out of service and a notice had been issued to that effect.  This made it impossible to execute an ILS landing to the runway.  Records indicate that the various automated landing-assistance systems were manipulated during the approach, and it may not have been clear to the flight crew that their approach was too low and slow until it was too late to do anything about it.  The laws of inertia are always in force, and a lot of advance planning has to be done to bring a huge heavy object like a 777 in contact with the ground safely.  Although final conclusions will have to await the completion of the ongoing investigations, it appears that pilot error may be at the bottom of this accident.

As long as human pilots fly planes, we will always have to contend with the possibility of pilot error.  But in general, air travel is safer now than it has ever been, in terms of fatalities per passenger-mile flown.  Even the absolute number of fatalities per year, which obviously stood at zero until the invention of the airplane, continues a downward trend that began in the 1970s and is now the lowest it has been since about 1954.  And the total number of passenger-miles flown in 1950 was only about 2% of the 1990 figure. 

The Asiana crash may have stemmed from confusion about who was in charge—the autopilot mechanisms or the real pilot.  But for the vast majority of planes and flights, the amazing system of man and machine called air travel operates efficiently, economically, and with a safety record that was unimaginable in the early days of flight.

Sources:  I referred to the Wikipedia articles on “Asiana Airlines Flight 214,” “Delta Airlines Flight 191,” “USAirways Flight 1549,” and “Aviation safety.”  I also obtained statistics on air travel safety from a paper by Prof. Dan Bogart of UC Irvine which can be found at http://www.socsci.uci.edu/~dbogart/transport_momentusBogart_6.11.12.pdf.

Friday, July 26, 2013

The Medieval Wisdom of Google’s “Don’t Be Evil”


Back in 2000, when the founders of Google were discussing ways to express their core philosophy, Paul Buchheit (employee No. 23) suggested “Don’t be evil.”  At the time, he was simply trying to contrast the way Google did business with the less salutary practices of some of their competitors.  Nobody dared to disagree with the principle of not being evil, so the phrase was adopted, and it remains one of Google’s official core values to this day.  Along the way it has acquired another phrase, so the complete statement is “Do the right thing; don’t be evil.”  In promulgating this notion, Google has (perhaps unwittingly) taken a stand on the side of Aristotle, St. Thomas Aquinas, and countless other ancient sages against much of what today passes for acceptable moral principles.  It would surprise me, however, to discover that more than a few Google employees are aware of this.

Many of them, in fact, would probably subscribe to the notion that no one should impose one’s moral principles on another person.  Even Google doesn’t explicitly recommend their “do good, avoid evil” principle for everybody; the most they are saying is that Google employees will try to live up to it.  If you like doing evil, fine, just don’t go to work for Google.  But as physicist Anthony Rizzi points out in his book The Science Before Science, the advice not to impose one’s moral views on another is itself a moral view. 

If I see an adult male in a shopping mall beating up a two-year-old, and I rush to intervene, and the man says, “Leave us alone, you’ve got no business imposing your morality on me,” I could respond with, “Sir, that itself is a moral principle which you are trying to impose on me.”  (What I would really do is call the cops, but that’s another matter.)  And in any event, as Rizzi points out, no one consistently acts as though all moral principles are simply matters of personal preference, even though they may give lip service to the idea in academic papers, for example.  If the chair of a philosophy department read a paper by one of his philosophers claiming that all morality is relative, and called the author up one day and said, “Because all morality is relative and I don’t like your looks, I’m reducing your pay by half,” I seriously doubt that the philosopher would calmly accept this as a logical consequence of his own philosophical position.  So even if some people say morality is relative, on matters that affect them personally they usually don’t act like they really believe it.

So where does that leave us?  It begins to look as though there really may be some objective moral principles “out there” so to speak, independent of whatever we say or think about them.  And behind them all, at the head of the logical chain of reasoning where first things must always be, stands the principle embraced by Google:  “Do the right thing; don’t be evil.”  You can’t derive that principle from anything else.  It is one of those self-evident statements that can’t come from another more basic notion.  As it stands, of course, it needs development before it can help you live your life.  But all other moral principles can be logically derived from what Rizzi calls “the first principle of ethics”:  do good and avoid evil.

Ah, but what is good and what is evil?  In a thousand-word column, I obviously can’t do justice to that question.  The short answer is, good is that which fulfills one’s purposes, and evil is the absence of such good.  One reason there is so much evil in the world is that, while every person does what seems good at a particular time and place, what seems good at the time may not really help one to fulfill one’s purposes.  It may seem good to an alcoholic to take one more drink, even if it’s the one that makes him so drunk he gets in his car and causes the death of another driver.  It’s not always easy to figure out what the true good is, which is one reason why ethics can get complicated—so complicated that the analytically-minded tend to throw up their hands and say it’s all hopeless. 

But it’s not hopeless.  Most people figure out what good to do, and what evil to avoid, with a good bit of success every day.  The lapses happen when our emotions or our hasty judgments lead us astray.  It requires just as much thought and attention, if not more, to be a good person as it does to be a good engineer.  But the technical and the ethical sides of engineering start from different foundations.

When Mr. Buchheit hit on “Don’t be evil” to guide what would become one of the greatest corporations of the twenty-first century, he was saying more than he knew.  Neither Google (through whose facilities this blog appears, by the way) nor any other firm can completely live up to their core principles, including that one.  But having it out there to shoot for is a start.  And in having that core principle to live up to, all the Googleites are following in the footsteps of medieval thinkers such as St. Thomas Aquinas, who clearly saw that the first logical step in being good is to admit there are such things as universal moral principles, and that the one to start with is “do good and avoid evil.” 

Sources:  Anthony Rizzi is a practicing research physicist at the Institute for Advanced Physics at Baton Rouge, Louisiana (www.iapweb.org) and author of The Science Before Science:  A Guide to Thinking in the 21st Century (IAP Press, 2004).  Of all books that I’ve read about scholastic philosophy (which is the term for the type of philosophy done in the High Middle Ages by St. Thomas Aquinas), Rizzi’s does the best job of defining terms and explaining concepts in ways that the average non-philosopher can understand.  I also referred to the Wikipedia articles on Paul Buchheit and “Don’t be evil.” 

Monday, July 22, 2013

Open Spectrum: An Ethical Open Question


My personal dealings with the Federal Communications Commission have been limited to holding a commercial “radio operator” license that qualified me to be the night watchman at a small AM station for two weeks in high school, and holding an amateur radio license, which I put in jeopardy once by building a ham transmitter that splattered signals all over the 40-meter band and got me a “no-no” notice from an FCC monitoring station.  Although I am no longer an active ham, I still have the license, and so I was interested to see a proposal recently that seemed to threaten not only my license, but those of all ham radio operators in the U. S.

The proposal is for moving from the present FCC-mandated allocations of the radio spectrum to something called “open spectrum.”  Before I explain what open spectrum means, I should mention some basics about the radio spectrum and how it is allocated today.

Radio waves are characterized by their frequency.  For example, WRR in Dallas, Texas (one of the oldest stations west of the Mississippi) operates on a frequency of 101.1 MHz.  All the frequencies from less than 100 kHz (a hundred thousand cycles per second) up to many GHz (billions) are usable for communications under some conditions, but you can’t just keep piling more and more signals onto the same frequencies.  Eventually, interference between signals gets so bad that the whole thing becomes unusable.  A problem like this arose in the 1920s when AM radio broadcasting got off the ground, and it led eventually to the formation of the FCC, along with international organizations with similar purposes:  to regulate how each frequency is used. 
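For the technically curious, a back-of-the-envelope calculation shows how quickly a band fills up.  The sketch below uses the U. S. FM broadcast band, which runs from 88 to 108 MHz with stations spaced 200 kHz apart (a rough illustration, not an official channel plan):

```python
# How many FM stations can share the dial in one area?
# Band edges and channel spacing for the U.S. FM broadcast band.
band_start_mhz = 88.0
band_end_mhz = 108.0
channel_spacing_mhz = 0.2   # 200 kHz per station

channels = round((band_end_mhz - band_start_mhz) / channel_spacing_mhz)
print(f"FM channels available: {channels}")  # prints 100
```

A hundred channels sounds like a lot until you remember that every city in the country has to share them, which is why nearby cities are assigned non-overlapping subsets.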

Currently, the FCC mandates in most cases exactly who can use what frequencies and how, and issues licenses to the authorized users, whether they are hams, radio stations, or the operators of wireless hubs that link computers and cellphones.  I said “in most cases”:  there are a few spectrum segments that are “license-free,” and anyone can use these for anything as long as they follow a few general rules.  The FCC in recent years has also held a limited number of auctions, which is a baby step toward one form of open-spectrum idea.

You can think of the various open-spectrum proposals as themselves spread out along a spectrum.  At one end is the old-fashioned central-control approach that the FCC used prior to about 1985:  nobody does anything without being told exactly where and how to do it by the FCC.  At the other end is the (impractical) libertarian extreme of no regulations by anyone anywhere, and everybody just does the best they can.  No one seriously advocates that extreme, because the resulting interference could disable the many billions of dollars invested by operators of cellphone systems, wireless and broadcast networks, and so on. 

The most viable-looking forms of open-spectrum plans propose something like the following, which was broached in a recent issue of National Review by Christopher DeMuth.  One day the FCC gets up in the morning and declares that everyone who currently holds a license henceforward owns property rights in that portion of the spectrum licensed to them.  The licensees can freely buy and sell their spectrum chunks and use them for anything they like, and the only role the FCC will play is like the land-records office of a county:  just keeping track of who owns what.  DeMuth says that this would free up a lot of spectrum that is currently misallocated under the old rules, such as the underutilized spectrum in the UHF TV bands.  These vast regions of largely empty MHz were originally allocated in the 1950s to make up for defects in the crude TV tuners of the day.  But today’s spread-spectrum and software-radio approaches could exploit these regions much more effectively, if there were only a way to get at them.  DeMuth claims that the form of open-spectrum policy he advocates would do this and more.

The ethical aspects of open-spectrum notions come into play when you consider the radio spectrum as a limited public resource, similar to a national forest.  There is only so much of it available in a given geographic area, and if you trash it, the resulting problems will affect everybody one way or another, including those who will not be able to afford their present spectrum allocations under the privatizing style of open-spectrum proposal.  The parties I am concerned about under the free-market open-spectrum approach are non-commercial individuals and organizations such as radio amateurs (whose licenses explicitly forbid them to do anything with their hobby for pay), scientists such as radio astronomers, non-profit agencies of various kinds, and other special cases such as emergency communications entities with limited budgets. 

A different open-spectrum proposal by Professor Eli Noam at Columbia University would take a less radical approach.  Instead of buying and selling, spectrum users would pay rent based on demand, and the rental fees would go to the government.  Under this proposal, the non-profit entities could receive protection from the vicissitudes of the market by receiving their licenses free of charge, courtesy of the FCC, much as the situation is now.  But most of the spectrum would be turned over to a market-based approach in which the more valuable segments would be more costly to use.  This idea does most of what the private-ownership concept does, but leaves a slightly larger role for the government to do what I think is an appropriate thing:  to guard the rights of those who cannot defend themselves on an economic basis, but who nevertheless perform useful functions in society. 

At the risk of getting more complaints about mentioning politics, I will say that neither of these open-spectrum proposals seems to stand much of a chance of being adopted under an administration that generally favors top-down approaches to regulation.  But I could be wrong, and if enough people of different political stripes get behind the open-spectrum concept, it could actually happen.  If it does, I hope the form adopted will protect the rights and privileges of those who would no longer be able to afford to use the spectrum under the totally free-market approach. 

Sources:  Christopher DeMuth’s article “Open Skies and Open Spectrum” appeared in the July 15, 2013 issue of National Review, pp. 32-36.  Mr. DeMuth is a distinguished fellow at the Hudson Institute think tank, which is also where Harold Furchtgott-Roth works, the author of a paper that embodies the open-spectrum proposal Mr. DeMuth describes.  Eli Noam’s earlier open-spectrum paper “Taking the Next Step Beyond Spectrum Allocation:  Open Spectrum Access,” written in 1995, is available at http://www.columbia.edu/dlc/wp/citi/citinoam21.html.  I also referred to the Wikipedia article on open spectrum. 

Sunday, July 14, 2013

The Quebec Rail Disaster: Nine Notorious Necessities


In the science of logic, a necessary condition for a thing to occur is a circumstance without which the thing cannot occur.  Having fuel in my car is a necessary condition for getting it to run.  No fuel, no go. 

In the study of industrial disasters, you often find that instead of a single cause, there are multiple interlocked causes, each one of which was a necessary condition for the accident to take place.  The longer the chain of necessary conditions, the less likely it is that all of them will happen in a way that leads to a problem.  This is one reason why such situations are hard to anticipate.  The disaster that struck the town of Lac-Mégantic, Quebec on the night of July 5 and 6 is just such a tragedy.  While the full details have yet to be investigated, by going only a little beyond what is known I can assemble a chain of nine separate conditions, each of which had to prevail in order for the accident to happen. 
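The arithmetic behind that observation is simple:  if the conditions are roughly independent, the chance that all of them line up at once is the product of their individual chances, and that product shrinks rapidly as the chain grows.  Here is a sketch in Python with made-up probabilities, purely to show the effect:

```python
# Illustration only: these probabilities are invented, not taken from
# any accident investigation.  Each entry is the chance that one
# necessary condition holds on a given night.
probabilities = [0.05, 0.9, 0.8, 0.001, 0.5, 1.0, 0.5, 0.1, 1.0]

joint = 1.0
for p in probabilities:
    joint *= p  # every necessary condition must hold simultaneously

print(f"Chance all nine conditions coincide: {joint:.1e}")
```

With these invented numbers the joint probability comes out near one in a million, far smaller than any single condition's chance, which is exactly why long chains of necessary conditions are so hard to anticipate.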

Lac-Mégantic is a community of about 5,000 on the tip of a lake of the same name, near the Canada-Maine border.  It was founded during the construction of the Canadian transcontinental railway in 1884, and a heavily-used rail line still runs directly through the center of town and extends westward up a rather steep grade to the next town of Nantes.  About 11 P. M. on the night of July 5, a five-locomotive eastbound train consisting of 73 tank cars filled with crude oil, besides other cargo, pulled into Nantes and stopped.  The engineer, Tom Harding, was done with his day’s run and planned to spend the night in a hotel in nearby Lac-Mégantic.  Before he left for the night, he followed what was apparently standard procedure in securing the train:  setting the handbrakes on all five locomotives and ten freight cars, and leaving one engine running to maintain pressure on the air brakes.

Railroad air brakes were an invention of George Westinghouse, who had the clever idea that a loss of brake pressure, such as would occur if part of a train broke away and lost its air connection to the engine’s compressor, should apply the brakes.  The same basic system is used today.  Each car has an air storage tank that provides the actual pressurized air that is applied to the brake cylinders on each “truck” or set of wheels.  The air from the tank is valved to the brakes when the main-line pressure falls, either gradually for controlled braking or suddenly, as when a train comes apart. 

But each tank holds only so much air, and so the other function of the air in the line is to maintain pressure in the tanks on each car. 

By setting the handbrakes, Harding was taking extra precautions.  In principle, as long as the engine was running to maintain air pressure, the air tanks would stay pressurized and the brakes would hold.  Satisfied that he had done his job, he made his way to Lac-Mégantic and to bed.

At 11:30 P. M., a concerned citizen reported to the Nantes fire department that the one running engine was on fire.  Local firemen reported to the scene and rousted out the nearest available rail employee, who turned out to be a track worker unfamiliar with the operation of locomotives.  The firemen and the railway employee managed to turn off the engine and put out the fire, but no one re-started the engine after the fire, and everyone left the scene by midnight.

For reasons that remain to be determined, about an hour later the train’s brakes ceased to hold, probably because the pressure in the air tanks dropped after the engine and its compressor were shut off.  The steep (1.2%) grade combined with the heavy load to send the train accelerating six miles (10 km) down the line to a sharp bend in the track at the center of Lac-Mégantic, near a strip of bars where late-night patrons were enjoying themselves.  Several people noticed the train rushing past at up to 66 MPH (100 km/hr).  When the leading locomotives hit the curve, they left the track and landed a few blocks away.  The easily-punctured single-walled tank cars ruptured and caught fire, turning Lac-Mégantic into an inferno.  At the time of this writing (July 14), thirty-three bodies have been recovered, and seventeen more persons are missing and presumed dead. 
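For readers who want to check the physics, a rough upper bound on a runaway train’s speed can be computed from the grade and distance alone.  The sketch below uses the 1.2% grade and six-mile run reported above; the rolling-resistance figure is my own assumption, and air drag is ignored, so the answer is only a ceiling:

```python
import math

# Back-of-the-envelope estimate (my own, not from the investigation):
# a train rolling freely down a constant grade reaches v = sqrt(2*a*d),
# where a = g * (grade - rolling-resistance coefficient).
g = 9.81            # m/s^2
grade = 0.012       # the 1.2% grade between Nantes and Lac-Megantic
rolling = 0.002     # typical steel-wheel-on-rail resistance (assumed)
distance = 10_000   # meters, roughly the six-mile run into town

a = g * (grade - rolling)
v = math.sqrt(2 * a * distance)   # ignores air drag, so an upper bound
print(f"Upper-bound speed: {v * 3.6:.0f} km/hr")
```

The estimate comes out around 160 km/hr; the reported speed of about 66 MPH is well below that bound, which is consistent with air drag and the partially-set handbrakes absorbing some of the energy on the way down.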

For this tragedy to occur as it did, you had to have

1.  A long heavy train full of
2.  Crude oil or other flammable material in
3.  Easily-ruptured tank cars attached to an
4.  Engine that caught fire and was turned off by
5.  A track worker who didn’t know how to start it again, and
6.  A steep grade under the train, and
7.  The engineer asleep in a hotel where nobody could find him, who
8.  Set enough handbrakes to keep the train from moving under normal circumstances, but not enough under the peculiar conditions that prevailed, and
9.  A town built around the closest turn in the tracks downhill from where the train started to roll.

If any of those nine conditions had been otherwise, the accident would not have happened, at least not to the extent that it did.  A shorter, lighter train might not have overcome the fading brakes.  Tanks of a non-flammable substance would have caused physical damage, but would not have burned.  Double-walled tanks, which are recommended but not required for crude-oil service, might not have ruptured as easily.  If the engine hadn’t caught fire, it would have kept the brakes going.  If the railroad employee had known how to start another engine, the brakes would have worked.  If the track had been flat instead of on a grade, the train might not have rolled away.  If the engineer had left a note saying where to call him in case of emergency, he might have restarted the engine.  If he had set more handbrakes, that might have prevented the runaway.  And if the bend in the tracks had been in a cornfield instead of in the center of town, no one might have died.

These “ifs” are cold comfort to those who have lost family and friends in the tragedy.  But they show what a long chain of unusual circumstances had to come about for the accident to happen.  Investigations may show that using more rupture-resistant tank cars would have reduced the chances of fire.  And there will certainly be questions raised about the advisability of crude-by-rail, or CBR, as a means of transporting large quantities of crude oil, instead of politically controversial pipelines, for example.  Let’s hope that what engineers, regulators, and the public learn from this accident will help in preventing the next one.

Sources:  I referred to articles on the accident in The Globe and Mail at http://www.theglobeandmail.com/news/national/mapping-the-tragedy-a-timeline-of-the-lac-megantic-train-disaster/article13105115/ and The Guardian at http://www.guardian.co.uk/world/2013/jul/12/quebec-oil-train-crash-disaster-24-bodies, a blog by Lloyd Alter at http://www.treehugger.com/energy-disasters/what-caused-train-disaster-not-brake-failure.html, and a blog in Railway Age by Tony Kruglinski at http://www.railwayage.com/index.php/blogs/tony-kruglinski/so-now-we-know-they-can-blow-up.html, as well as the Wikipedia articles on Lac-Mégantic (the town) and the Lac-Mégantic derailment. 
  


Note added July 18:  Several comments indicate I did not adequately explain the operation of locomotive air brakes.  They are indeed essentially fail-safe in the short term, but not in the long term. 

An analogy can be made to the type of emergency lighting in stores and restaurants that is powered by storage batteries.  Under normal circumstances, the utility power charges the batteries and signals the emergency lights to remain off.  But when the utility power fails, the loss of voltage signals the emergency lights to turn on and operate from their storage batteries.  However, the storage batteries will eventually lose their charge if the power is not restored within a certain time.  When the storage batteries are depleted, the emergency lights will fail as well.

Here is the analogy.  The utility’s electric power is like the locomotive’s air compressor.  The emergency-light storage batteries are like the compressed-air tanks in each rail car.  The emergency lights coming on are like the brakes on each car becoming actuated by the air pressure from the tanks when the pressure from the locomotive fails (either deliberately or accidentally).  The tanks can supply only so much air (which escapes due to leaks, imperfect seals, etc.) and eventually the pressure falls to the point that the brakes no longer hold, just as the emergency lights will eventually go out. 
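To put some illustrative numbers on the analogy, here is a toy calculation (every figure below is invented, not taken from the investigation) of how long a slowly leaking tank can keep the brakes applied:

```python
# A toy model of why air brakes are fail-safe only in the short term:
# each car's tank leaks slowly, and once the pressure drops below the
# level needed to hold the brake cylinders, the brakes release.
initial_psi = 90.0   # starting tank pressure (assumed)
holding_psi = 40.0   # minimum pressure that still holds the brakes (assumed)
leak_rate = 0.01     # fraction of pressure lost per minute (assumed)

psi = initial_psi
minutes = 0
while psi > holding_psi:
    psi *= (1 - leak_rate)   # exponential decay through seals and fittings
    minutes += 1

print(f"Brakes release after about {minutes} minutes")
```

With these made-up numbers the brakes let go after roughly an hour and twenty minutes, the same order of magnitude as the interval between the engine shutdown at Nantes and the runaway.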

One reason the brakes are not actuated by a non-pneumatic method (e. g. springs) is that there are situations in which railway workers need cars to roll freely without being connected to a source of air pressure, as when the cars are sorted by being rolled one at a time over a “hump” and down a controlled grade through a sequence of switches.  If the brakes were always applied in the absence of air pressure, this kind of thing would not be possible. 

I hope this clarifies the question of how air brakes on trains work.
 

Sunday, July 07, 2013

Global Warming, Solar Energy, and $300,000 Tortoises: The Morality of Energy Production


On Tuesday, June 25, in a speech before enthusiastic students at Georgetown University, President Obama delivered a message outlining his vision for what the United States ought to do, and what he personally is going to do, about the moral issue of energy production.  Now at first glance, you would think that energy production is a technical issue that should be left to engineers and economists.  But it was clear from the President’s speech that he thinks it is also a moral issue, as moral as which side you should fight on in a war.  His speech, in fact, was peppered with militant terminology.  He spoke of having the “courage to act,” he talked of the “fight against climate change,”  and expressed his desire for America to “win the race for clean energy.”  Toward the end, he called for citizens “who will stand up, and speak up, and compel us to do what this moment demands.”  To that end, he announced that he was going to ask the Environmental Protection Agency (EPA) to issue regulations that, according to Obama critic Charles Krauthammer, will “make it impossible to open any new coal plant and will systematically shut down existing plants.”

If the construction of new coal-fired power plants is going to come to an end, maybe we can start building more Ivanpahs instead.  Ivanpah is a Paiute term meaning “good water,” and is the name of a giant solar-energy project not too far from where Interstate 15 crosses the California-Nevada line on its way to Las Vegas.  Built by a consortium of construction and solar-energy firms, plus money from Google investors, Ivanpah consists of three circular arrays of tracking mirrors that direct sunlight onto four-hundred-foot-tall “power towers.”  Atop each power tower is a cubical black boiler to make steam that turns turbines that drive generators to make electricity.  This is the kind of thing that President Obama sees as the future of energy production:  it is solar-based, it adds nothing in operation to the nation’s carbon footprint, and it is even respectful of the rights of the 150 or so desert tortoises on the construction site who were carefully inventoried and transported to an equally suitable habitat at a cost of about $50 million—or roughly $300,000 per tortoise. 

The engineer in me applauds the Ivanpah project.  It is an elegant yet simple solution to several of the problems that plague direct photovoltaic energy production using solar cells, one of which is the fact that all days are not equally sunny.  When clouds show up, energy production from solar cells drops instantly, and this is not the sort of behavior that power grids like. 

The Ivanpah plant mitigates the cloud problem in a couple of ways.  First of all, unless there’s a solid cloud cover (not too common in the desert), smaller cloud shadows won’t put the five square miles of mirrors out of commission all at once.  And even if insolation, as it’s called, varies over a time period of minutes or even hours, the thermal inertia of the large power-tower boilers means that the plant will still be producing energy even when it is temporarily in the shade due to clouds.  So without any extra effort, the Ivanpah project has sidestepped one of the significant technical obstacles faced by solar-cell arrays.
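The smoothing effect of thermal inertia can be illustrated with a toy model that treats the boiler as a first-order low-pass filter acting on the incoming solar power.  The time constant and cloud duration below are invented purely for illustration:

```python
# Toy model (invented parameters): when a cloud cuts insolation to zero
# for ten minutes, a photovoltaic array's output drops instantly, but a
# boiler's stored heat keeps steam flowing.
tau = 20.0   # boiler thermal time constant in minutes (assumed)
dt = 1.0     # one-minute time steps

# Insolation: full sun, a 10-minute cloud, then full sun again.
insolation = [1.0] * 30 + [0.0] * 10 + [1.0] * 30

output = 1.0       # boiler output, as a fraction of full power
min_output = 1.0
for s in insolation:
    output += (s - output) * dt / tau   # first-order lag toward the input
    min_output = min(min_output, output)

print(f"Lowest boiler output during the cloud: {min_output:.0%} of full power")
```

In this sketch a ten-minute cloud only drops the boiler’s output to about 60 percent of full power, while a photovoltaic array’s output would fall essentially to zero for the duration.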

Still, Ivanpah is expensive.  According to Wikipedia, the whole project, now nearing completion, will cost about $2 billion when finished.  This is roughly four times what a new coal-fired plant of equivalent peak output would cost.  And the coal plant will run any time you want it to.  True, you have to buy coal over the life of the plant, but this can be factored into the cost, and the economics of that calculation tells you why so much of our electric energy is still supplied by coal.
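Capacity factor, the fraction of the time a plant actually delivers full output, is what makes that calculation so lopsided.  In the sketch below, only Ivanpah’s roughly $2 billion price tag and the four-to-one construction-cost ratio come from the figures above; the capacity factors, plant size, and lifetime are my own assumptions for illustration:

```python
# Rough capital-cost-per-delivered-energy comparison (illustrative only).
HOURS_PER_YEAR = 8766
lifetime_years = 30            # assumed plant life

ivanpah_capital = 2.0e9        # dollars, roughly the project cost
coal_capital = ivanpah_capital / 4   # "roughly four times" cheaper to build
capacity_mw = 377              # assumed nameplate output for both plants

cf_solar = 0.30   # the sun shines only part of the day (assumed)
cf_coal = 0.85    # a coal plant "will run any time you want it to" (assumed)

def capital_cost_per_mwh(capital, cf):
    # Total energy delivered over the plant's life, in megawatt-hours.
    mwh = capacity_mw * cf * HOURS_PER_YEAR * lifetime_years
    return capital / mwh

print(f"Solar capital cost: ${capital_cost_per_mwh(ivanpah_capital, cf_solar):.0f}/MWh")
print(f"Coal capital cost:  ${capital_cost_per_mwh(coal_capital, cf_coal):.0f}/MWh")
```

Even before buying a single ton of coal, the coal plant’s capital cost per delivered megawatt-hour comes out roughly an order of magnitude lower in this sketch, simply because it can run around the clock.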

If we stopped building new fossil-fuel power plants tomorrow and allowed only nuclear, solar, and other renewable forms of new plant construction henceforth, several things would happen.  Electricity would become gradually more expensive and possibly less reliable than it would be otherwise.  And America’s contribution to the world’s output of carbon in the atmosphere, which has already fallen to 1992 levels, would fall faster and be overwhelmed by the soaring use of coal and other fossil fuels by China, India, and the rest of the world.  The overall objective effect on global warming would be minimal.

Everyone has a moral compass that helps prioritize ethical decisions.  For most people, murder is a more significant moral issue than jaywalking.  President Obama views the threat of global warming as a moral equivalent of war, to judge by his Georgetown speech.  He clearly wishes to unite the country around a common set of sacrifices that will allow us to hold our heads up before our grandchildren, whose world we should literally save from destruction by the evil forces of climate change.

I will merely point out, as Krauthammer has, that climate change comes in dead last in a poll of 21 matters of concern to Americans.  Jobs and the economy are things that the average U. S. citizen is far more concerned about, but the President’s moral compass seems to be insensitive to such concerns.  Or perhaps, as a practical politician, he realizes that in his lame-duck term he should spend his limited time on matters where he can act unilaterally, as with his instructions to the EPA, and not waste his energy on proposals he will not be able to get through Congress.  Leadership is a mysterious thing, and some leaders who have received the laurels of historical honor were excoriated in their own time.  The President obviously feels he is in this category, and often refers to his unpopular proposals as being on the “right side of history.” 

But there sometimes is not much difference between being ahead of the pack and simply being out in left field.  If there were a truly united sense among Americans that the nation was under an existential threat and climate change was the culprit, President Obama’s rhetoric would fit the national mood and history might go his way.  But I fear what we are witnessing is instead the desperate actions of a leader who wants to force his vision of the future on a public that is unwilling to pay the high price for a dubious honor that may not come for generations, if ever. 

Sources:  I learned about Ivanpah from the print edition of the June 24, 2013 issue of Time Magazine.  I used information from the project website http://ivanpahsolar.com, President Obama’s speech of June 25 as transcribed by the Wall Street Journal at http://blogs.wsj.com/washwire/2013/06/25/full-transcript-of-obamas-remarks-on-climate-change/, Charles Krauthammer’s column for July 6, 2013 as presented in the Pittsburgh Post-Gazette at http://www.post-gazette.com/stories/opinion/perspectives/charles-krauthammer-obama-will-risk-the-economy-for-no-impact-on-climate-change-694450/, and the Wikipedia articles on “Ivanpah Solar Power Facility” and “Fossil fuel power station.”