Monday, September 23, 2013

Engineers and Technological Unemployment: What Are People For?

 
The ways that people make a living today are very different from what they were a generation ago.  In 1970, Detroit was still a bustling manufacturing metropolis, thousands of women earned a decent living as telephone operators, and many newspapers provided employment to linotype operators who spent their days at the keyboard of a clunky pile of machinery that molded molten type metal into sticks.  Needless to say, you will have problems finding a manufacturing job in Detroit these days, and the other jobs are history too.  Furthermore, engineers played an essential role in these changes, and leading the charge are those engineers who made computers and information technology the chief engine of creative destruction over the last few decades. 

And we are by no means done, according to a report by some Oxford researchers that was mentioned in the online magazine Salon.com recently.  Carl B. Frey and Michael A. Osborne studied the U. S. employment picture and used sophisticated (and undoubtedly computerized) data-grinding methods to discover that almost half of current U. S. jobs could eventually vanish as they are taken over by "computerisation."

Some parts of this forecast are easy to believe.  If experiments by Google and others succeed in replacing human car and truck drivers with robot drivers, long-distance truckers, bus drivers, taxi drivers, and anybody else who gets paid to drive something somewhere will have to find something else to do.  Among the ranks of engineers themselves, this sort of thing has been going on for decades too.  If you look at old photographs of the offices of large engineering firms in the 1970s, one of the most typical images is that of a huge open room filled with row after row of drawing boards, each one with its white-shirted male engineer.  The output of a roomful of engineers at drawing boards can be matched today by one modern engineer armed with CAD (computer-aided design) software.  The processes are so different that they are not directly comparable, but obviously, today's engineers have very different skills than the engineers of 1970, many of whom were little more than glorified draftsmen.  But is all this real cause for concern?  Or are we looking at the problem from too narrow an angle?

What so often goes completely unmentioned in discussions of technological unemployment is the question of anthropology:  what is your model of the human being?  I think the model that most secular economists and researchers use is something like this.  All life is basically economic in character, and the ultimate good in this life is a smoothly functioning economy, wherein everyone capable of contributing to it works to the best of their ability and receives in turn the material benefits of their work.  That is a nice picture as far as it goes, but as a philosophy of life it's somewhat lacking.

For a completely different take on technological unemployment, you should read one of a number of works that were popular in the 1930s.  Even in the teeth of the Great Depression, writers such as C. C. Furnas in The Next Hundred Years went into optimistic technophilic raptures about how the increasing efficiency and productivity brought about by technological advances would let most people earn all the money they needed by working only one or two hours a day, leaving the rest of the time for leisure pursuits such as art appreciation and charitable work.  We have certainly gone beyond Mr. Furnas's wildest dreams of increased productivity.  So what went wrong with his vision?

I'm not entirely sure, but one factor seems to be the social consensus of what kinds of work and lives are to be desired, and what kinds are to be disparaged.  A short list of the occupations that are admired and envied in the U. S. today might start out like this:  movie stars, rock stars, the wealthy (regardless of how they made their money), sports figures, politicians (a few, anyway), . . . and it's going to be a short list because for the most part, we don't have true heroes anymore, just people who are famous for a bit and then fall victim to that favorite journalistic enterprise, The Mighty Brought Low. 

An even more important reason most people seem dissatisfied with their occupational lot in life is that respect for ordinary, non-intellectual, perennial jobs that are nonetheless useful to society has largely vanished from the scene.  This disrespect can do tremendous psychological damage to those who hold such jobs, which in any economy is going to be the majority of workers.  Take janitors, for instance.  Janitorial work is the classic job today that "don't get no respect," in Rodney Dangerfield's phrase.  But it was not always thus.

My first job, outside of being paid by family friends, was sweeping the floor in a sign plant in Fort Worth, Texas.  It took me just about all day to work my way around the band saws, the bending brakes, and the vacuum molding machine.  There was no air conditioning, no breaks except for lunch, and by the time five o'clock rolled around I was ready to go home and flop—no heavy reading for this boy that summer.  But I was grateful for the work and the pay, and did it as well as I could. 

An amazing thing happened my last day on the job.  They gave me a going-away party, complete with a little cake.  I can still see their faces:  the grizzled old shop foreman who showed me the ropes my first day on the job, the skinny bandsaw operator with slicked-back black hair who always talked about how he'd rather be fishing, the red-headed cowboy whom I saw one day taking liberties with the secretary (it bothered me until I found out they were married)—they thought enough of my humble sweeping up after them to honor me and wish me well in the future. 

They did this, not because I had proved to be in the 99th percentile of floor sweepers nationwide, but because I had done a simple job with a reasonable amount of dedication.  They saw even such humble work as worthy of honor, and decided to honor me because I had done it well. 

If that deep respect for those who do any kind of honest work, technological or otherwise, were embedded in the ethos and psyche of this nation, the employment picture would largely take care of itself.  But those in charge of the economy would first have to believe in the honor of work, and then put their money where their hearts are.  And by and large, neither the money nor the hearts are in the right place today.

Sources:  The article "Don't look back—the machines are gaining on you" by Andrew Leonard appeared in Salon.com at http://www.salon.com/2013/09/20/dont_look_back_the_machines_are_gaining_on_you/.  The Oxford study "The future of employment:  how susceptible are jobs to computerisation?" is available for download at http://www.futuretech.ox.ac.uk/sites/futuretech.ox.ac.uk/files/The_Future_of_Employment_OMS_Working_Paper_1.pdf.  The Next Hundred Years by C. C. Furnas was published in 1936 by Williams & Wilkins of New York.  

Interview:  After this blog was posted, it was carried on www.MercatorNet.com and led to an interview of yours truly with Drew Mariani, host of a talk radio show on the U. S. radio network Relevant Radio.  Streaming audio of the interview can be found on the Sept. 27, 2013 download page at 
http://www.relevantradio.com/audios/the-drew-mariani-show.  

Monday, September 16, 2013

Cars, Cameras, and Computers: License Plates for the 21st Century


Soon after we moved from Texas to Massachusetts in 1983, I went to the Registry of Motor Vehicles (which we always referred to thereafter as the Registry of Woes, but that's another story) and got Massachusetts license plates.  Ours had some fairly typical arrangement of letters and numbers (e. g. HGQ 796), but as we spent more time there, I began to notice that a few cars had plates with only three digits, or maybe two:  "967" or "76."  It took some asking around to discover what was so special about those plates, but eventually I found out.

Turns out that those two- and three-digit plates were handed down from generation to generation, possibly even bequeathed in wills.  You see, Massachusetts was the first state in the nation to issue license plates, in 1903, and the first plates did have only two or three digits.  Somewhere along the line, the bluebloods who owned the first cars with license plates decided that some visible trace of this distinction should be left to their descendants.  They evidently persuaded their buddies in the legislature to allow these old plate numbers to be passed to younger relatives.  So eighty years later, any latecomers to Massachusetts (and never mind if you moved there as a three-month-old, that makes you a latecomer) could look around and tell which drivers were descended from families old enough, and presumably rich enough, to have owned one of the first cars in the Commonwealth. 

I don't know if this curious habit continues there today, but if it does, it looks like Massachusetts may have to figure out a way to tell computers about the achievements of remote ancestors as well as people.  There is a good chance that every license plate in the near future will bear not only visible characters, easily readable by humans and with some difficulty by machines, but also an invisible barcode that is much easier for machines to read than the visible characters are. 

Most people know by now that computers can read license-plate numbers almost as well as people can.  These devices, known as Automated License Plate Readers (ALPR for short), use a digital camera and a series of algorithms to separate the alphanumeric characters from the background, a task that is increasingly challenging these days when custom plates have pictures of everything from blue whales to your favorite grandchild.  Once that's done, the algorithms interpret the characters and send the result to law-enforcement officials or whoever else is interested.  These ALPR devices are used in automated tollbooths on toll roads, as well as in their more controversial role in camera-equipped traffic signals that generate tickets for people who run red lights. 
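The two stages just described—separating character pixels from the background, then interpreting each character—can be sketched in miniature.  Everything below is invented for illustration:  the tiny 5x3 "fonts," the fake plate images, and the brute-force template matching are a toy, not how a real ALPR system works.

```python
# Toy sketch of the two ALPR stages described above: (1) separate
# character pixels from the background by thresholding, then
# (2) interpret each character segment by template matching.
# The 5x3 "fonts" and images here are invented for illustration.

THRESHOLD = 128  # grayscale values below this count as character pixels

TEMPLATES = {
    "7": ["111", "001", "001", "010", "010"],
    "1": ["010", "110", "010", "010", "111"],
}

def binarize(image):
    """Convert a grayscale image (rows of 0-255 ints) to 0/1 pixels."""
    return [[1 if px < THRESHOLD else 0 for px in row] for row in image]

def segment_columns(binary):
    """Split the plate into (start, end) column ranges at blank columns."""
    segments, current = [], []
    for col in range(len(binary[0])):
        if any(row[col] for row in binary):
            current.append(col)
        elif current:
            segments.append((current[0], current[-1]))
            current = []
    if current:
        segments.append((current[0], current[-1]))
    return segments

def match(binary, segment):
    """Return the template character that agrees with the most pixels."""
    start, end = segment
    glyph = ["".join(str(row[c]) for c in range(start, end + 1))
             for row in binary]
    def score(template):
        return sum(g == t for g_row, t_row in zip(glyph, template)
                   for g, t in zip(g_row, t_row))
    return max(TEMPLATES, key=lambda ch: score(TEMPLATES[ch]))

def read_plate(image):
    binary = binarize(image)
    return "".join(match(binary, seg) for seg in segment_columns(binary))

def render(text):
    """Build a fake grayscale plate image from the templates (demo only)."""
    rows = [[] for _ in range(5)]
    for ch in text:
        for i, t_row in enumerate(TEMPLATES[ch]):
            rows[i].extend(0 if bit == "1" else 255 for bit in t_row)
            rows[i].append(255)  # one blank column between characters
    return rows

print(read_plate(render("171")))  # prints "171"
```

The toy works only because its characters are clean, high-contrast, and perfectly aligned; the hard part of real ALPR is that decorated plates, dirt, glare, and odd viewing angles break exactly these assumptions, which is where the invisible barcode comes in.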

But ALPR does not read the plates correctly 100% of the time, and so 3M and other firms have developed a type of infrared-readable ink that can be used to print a certain form of bar code directly over the visible image on the plate.  3M claims that the invisible barcode is much more reliably read by automatic bar-code readers than the visible characters, and at least one state (Virginia) is seriously considering adopting the machine-readable barcodes.  I have heard a rumor (which was the inspiration of this blog, incidentally) that many if not most states already use them, but I have not been able to confirm this rumor.  It may be the sort of thing that some states would prefer not to be known, anyway. 

We Americans are very attached to our cars, partly because the automobile is perhaps the single most significant technology that enables millions to live more independent lives in many senses.  The mobility permitted by the automobile has altered much of the country's built environment and contributes to the sense of freedom symbolized in movies when a solitary car speeds away from the camera down a lonely desert road. 

Anything that compromises the privacy of the very private space represented by the automobile tends to get our attention.  Many new cars now carry in their onboard computers a system that amounts to a "black box" which records data on control settings, acceleration, and other information that is of interest to insurance companies and lawyers in the event of an accident involving the vehicle.  And now that many cars come with GPS and wireless transceivers, not to mention the cellphones people carry, it is no long stretch of the imagination to picture a Big-Brother government knowing exactly which checkpoints you passed when, any time it wants.  The technology is already largely in place.

But in a way, the invisible-ink barcode idea is only applying to automobiles what we have already applied to our persons.  We are long since used to carrying forms of personal identification that are designed to be read by both humans and machines.  The magnetic strips on your credit cards, the RFID chips in your driver license (that's the way Texas refers to it, not as a "driver's license") and possibly a company or university ID card, and the cellphone in your pocket that is kept track of by your phone company are earlier steps in this direction.  There has been a lot of speculation (including an article that I contributed to in a professional magazine) that sooner or later, having some sort of RFID chip permanently implanted in your body will either become popular as a voluntary form of self-imposed cyborgism, or will be required by the state at some point.

Compared to having an RFID chip implanted on your person, letting your state's motor vehicle office put invisible ink on your license plate is not that big a deal.  From a technical point of view, it's just an incremental improvement that will simplify and improve the accuracy of machines that read license plates.  But the very fact that someone thought it interesting enough to spread a rumor about it says that invisible ink on license plates may cross another invisible line on the way to a future that not all of us would like to see happen.

Sources:  In 2012, the Commonwealth of Virginia (that's how some former colonies refer to themselves) commissioned a study of license plates that mentions the invisible-ink-barcode technology, and is downloadable at http://leg2.state.va.us/dls/h&sdocs.nsf/fc86c2b17a1cf388852570f9006f1299/db715763b38b14da85257ad200653d0d/$FILE/RD383.pdf.
The 3M firm has a news item on a "license-plate shootout" field test of various ID technologies, including their own, at one of the longest URLs I've ever seen:
http://solutions.3m.com/wps/portal/3M/en_US/NA_Motor_Vehicle_Services_Systems/Motor_Vehicle_Industry_Solutions/3m-motor-vehicle-services-systems-resources/newsletter-signup-3m-motor-vehicle-services-systems/newsletter-archive-3m-motor-vehicle-services-systems/?PC_7_U00M8B1A00PAD0A0C2MU390ED1000000_assetId=1361625382051.  And I also referred to the Wikipedia article on "vehicle registration plates."  The magazine article I contributed to appeared in Proceedings of the IEEE, "Social implications of technology:  the past, the present, and the future," vol. 100, pp. 1752-1781, May 2012.

Monday, September 09, 2013

Wisdom Literature for 21st-century Engineers


The other day I received a copy of a book written by a retired engineering professor and academic administrator named Lyle Feisel.  Prof. Feisel has found plenty of good works to do in his retirement, one of which was to write a column for The Bent, the magazine of the Tau Beta Pi engineering honor society.  He has collected these columns in a book with the title Lyle's Laws:  Reflections on Ethics, Engineering, and Everything Else. University administrators as a group are not noted for their literary brilliance or scintillating wit, and I will admit I opened the book with some trepidation.  But even after I had read (and enjoyed) it, it took me a while to figure out what category of literature it was. 

It's not an ethics textbook, by any means.  There are no homework problems, and each of its forty or so chapters is only a few pages long, dealing with a separate topic introduced by the "law" in question:  a single word or phrase followed by a brief aphorism.  Even though the chapters are independent, a particular view of the world emerges from the whole as you read.  That doesn't mean it's a work of philosophy, either—Prof. Feisel uses no fancy philosophical vocabulary, and makes no pretense of adhering to any particular philosophical or religious system. 

Finally it struck me what the book was:  it's a work of wisdom literature for 21st-century engineers. 

Wisdom literature is what scholars call the literary genre represented by the books of Proverbs and Ecclesiastes in the Hebrew Bible.  These books are collections of short, informal words of advice, without much in the way of overall organization or pattern, but rich with anecdotes, stories with a moral, and observations on human nature.  So is Lyle's Laws. 

Wisdom is not a word that gets a lot of use these days.  I once heard it defined as the ability to apply knowledge effectively, and that covers not only what engineers should do but what anyone with specialized knowledge has an obligation to do.  Many, if not most, of Lyle's Laws are not original.  For instance, No. 25, "Possibility:  If it can happen, it will happen" derives from that principle known to all working engineers, Murphy's Law ("If anything can go wrong, it will").  But Feisel's form of the law allows for unexpected good things to happen as well, though you shouldn't count on them happening as a part of your design!  I heard a version of another law—"Discoverability:  Don't record anything you don't want the whole world to see"—from an older engineering professor in the 1990s, who told me he always warned his students not to write down anything they wouldn't mind seeing reprinted on the front page of the New York Times.  But that's what wisdom consists of:  basic truths about human nature and human relations that are often learned by experience and passed on from generation to generation.  As C. S. Lewis pointed out in The Abolition of Man, it's as hard to devise a truly original moral principle as it is to come up with a new primary color besides red, blue, and green. 

But if there is moral medicine in Lyle's Laws, it is covered with a pleasant and engaging outer coating of war stories (some of them literally that:  the author is a Navy veteran), professional and personal tales that introduce many of the chapters, and a tone that is never preachy or didactic.  Sometimes you read a book and wish you could meet the author afterwards, and this is that kind of a book. 

This is true despite the fact that I found myself mentally squirming after reading a few of the chapters, notably the one entitled "Comfort:  Beware the cozy comfort zone."  Somewhere in the book I came across the question, "What do you know how to do now that you didn't know how to do a year ago?"  That prompted me to think about how much of what I do is simply more of the same, and how much is something I don't know how to do, but want to learn, even at the cost of some mental anguish and frustration.  This is an especially good question for tenured professors, who sometimes appear to the outside world to be entitled to coast for the rest of their lives.  Fortunately, I was able to come up with a few things I've learned in the past year, anyway, and I hope to add to the list as time goes on. 

Who should read this book?  I think there's a difference between who should read it and who will read it.  I would like every undergraduate engineering student in the English-speaking world to read the book (and so would Prof. Feisel, obviously).  If they did, and if they took the advice in the book to heart, they could avoid a lot of the errors, screwups, and cases of bad judgment that sometimes make the lives of young engineers as interesting as they are.  But that is a dream impossible of realization, short of some rich guy taking the notion to send free copies to all engineering schools.  I suspect that many of the people who will read the book are those of us in the late summer and fall of our careers, who can relate to the historical situations that Prof. Feisel alludes to and resonate with the truths he elucidates from his stories and experiences.  But the book would also serve as a good recommended read for engineering ethics courses, and I hope it will be used that way.

In my technical lectures, I occasionally mention a historical anecdote in connection with my technical topic of the day, and I have learned that a little of such material goes a long way.  Most young people, at least most young engineering students, are not that interested in history.  The spirit of our age is inherently forward-looking and views history as something to be overcome and surpassed, not something to learn from.  And for the most part, that is a good thing.  Too much regard for the past keeps you from moving into the future as fast as the next guy, as I have learned from my own experience.  But the human side of engineering is a function of human nature, which doesn't change.  And Lyle's Laws is one of the most easily read, and yet rewarding, works on human nature and engineering that I have come across in years.

Sources:  Lyle's Laws:  Reflections on Ethics, Engineering, and Everything Else, by Lyle D. Feisel, was published in 2013 by Brooklyn River Press, New York.

Monday, September 02, 2013

What We Don't Know About Chemical Accidents


The fertilizer-plant explosion last April 17 in West, Texas that killed 15 and demolished a good part of the town was only the most recent of a number of accidents involving hazardous chemicals that have happened in Texas over the years.  Home to a large number of refining and petrochemical plants and other high-tech industries, Texas has had more than its share of explosions, fires, releases of toxic and polluting chemicals, and other chemical-related accidents.  But when a team of Dallas Morning News reporters tried to answer what they thought was a simple, straightforward question about the frequency of chemical accidents, they found a mare's nest of conflicting and incomplete statistics.  Is this a basic problem that leads to a higher rate of accidents than we would otherwise have?  Or is it just an inherent difficulty that comes about because of the nature of chemical accidents?

The News reporters were unable to find a single database of national scope that answered the question they were asking.  I think what they wanted to find was something like what the U. S. Centers for Disease Control and Prevention (CDC) maintains on statistics such as cases of measles or rabies, or the National Transportation Safety Board's database on fatal accidents involving air transport.  But what they found instead was a hodgepodge of things:  raw unfiltered lists of emergency calls to the U. S. Coast Guard, lists of incidents investigated by the Occupational Safety and Health Administration (OSHA), and data collected by the Chemical Safety Board, which relies primarily on media reports—in other words, the reporters themselves!  They found glaring inconsistencies among the numbers cited by the various sources of information, and although they were able to identify 24 potentially serious chemical accidents in Texas between 2008 and 2011, they were almost sure that the true number was higher.

The first task in compiling statistics on something is to define exactly what you are compiling statistics on.  The problem of defining a chemical accident is not a trivial one.  Clearly, if I'm working in my garage and accidentally knock over a can of used oil that spills into the ground, that is not something that should be treated with the same seriousness as the West explosion.  But by some definitions, both are chemical accidents.  So first, a line needs to be drawn defining how serious an accident should be before it is logged into a database.  But how do you draw that line?  Should you log only accidents that resulted in casualties (deaths or injury to persons), or a minimum amount of property damage, or all accidents that involve certain types of particularly hazardous chemicals?  There are millions of kinds of chemicals, and the hazards to humans of many of them are simply unknown. 
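To see how concrete the line-drawing gets, here is a minimal sketch of what a reporting rule might look like once someone has to write it down.  The casualty rule, the dollar threshold, and the chemical list are all invented for illustration; they are not drawn from any actual regulation or database.

```python
# Hypothetical sketch of a reporting rule for a chemical-accident
# database.  Every number and chemical name below is invented for
# illustration; no real regulation is being quoted.

HIGH_HAZARD_CHEMICALS = {"ammonium nitrate", "chlorine", "anhydrous ammonia"}
DAMAGE_THRESHOLD_DOLLARS = 50_000  # an arbitrary cutoff for this sketch

def is_reportable(casualties, damage_dollars, chemicals):
    """Decide whether an incident crosses the (invented) reporting line."""
    if casualties > 0:
        return True            # any death or injury makes it reportable
    if damage_dollars >= DAMAGE_THRESHOLD_DOLLARS:
        return True            # as does serious property damage
    # Otherwise, log it only if a listed high-hazard chemical was involved.
    return any(c.lower() in HIGH_HAZARD_CHEMICALS for c in chemicals)

# The garage oil spill stays out; the West explosion goes in.
print(is_reportable(0, 50, ["used motor oil"]))               # prints False
print(is_reportable(15, 100_000_000, ["ammonium nitrate"]))   # prints True
```

Even this toy rule shows where the arguments begin:  every constant in it is a policy decision, and two agencies that pick different thresholds or chemical lists will count different sets of accidents—which is exactly the inconsistency the reporters found.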

Even if we agree that only accidents involving casualties are what we want to count, the problems of privacy and legal liability come into play.  As noted by the Dallas Morning News reporters, private firms are reluctant to share details of their inner workings that might leave them open to lawsuits or might prove repellent to potential investors.  As the aftermath of the West explosion has shown, the legal environment of chemical accidents is complex, poorly defined, and the result of a tangle of criminal, regulatory, and civil codes that do not produce the kind of clear-cut situations that are easy to record in databases.

Not mentioned by the investigative reporters is a powerful external force on the chemical industry that makes most firms maintain and enforce strict internal safety rules and records of accidents.  This force is applied by insurance companies.  My brother-in-law is the chief safety officer for a large firm that operates refineries in several states.  One of his main jobs is to travel to the home offices of the company's main insurers annually, and present detailed reports of his firm's safety records and the measures they are taking to make sure lessons are learned from near-misses in order to prevent bigger accidents in the future.  While these matters are handled out of the public eye, the desire to keep insured is one reason that the chemical industry as a whole has a safety record that is much better than it might otherwise be, considering the millions of pounds of hazardous material that pass through its facilities every year.  And in conversations with my brother-in-law, I have learned that firms quickly learn about accidents at other firms, and take steps to make sure those types of incidents don't happen to them too.  In other words, a good bit of what a comprehensive nationwide database of chemical accident reports would do, is already taking place: namely, information-sharing among the plant operators themselves.

Of course, there are always exceptions, which often tend to be among the smaller independent operators that can't afford full-time safety officers and large staffs.  The West fertilizer plant was one such operation, but it is not clear that having an accurate national database of fertilizer-plant explosions would have made much difference in the way that particular accident transpired. 

More and better publicly accessible information about chemical accidents is a desirable thing, and I hope that the West explosion will lead to a better system of gathering and presenting such data nationwide.  But if this goal is achieved at the cost of burdensome, onerous, and unjustly harsh regulations of industries which already do a fairly good job of self-policing due to the economic interests of their insurers, the price tag may be more than we should pay.

Sources: The Dallas Morning News report appeared on that paper's website on Aug. 24-25 at http://www.dallasnews.com/news/west-explosion/headlines/20130824-after-west-disaster-news-study-finds-u.s.-chemical-safety-data-wrong-about-90-percent.ece.

Monday, August 26, 2013

Of Pecans, Profits, and Piety


Broadly speaking, the system of international trade we live under is a kind of technology.  It’s certain that without modern engineered means of transportation and communications, international markets would be much less significant than they are today.  And while the particular story I’m going to relate pertains to the oldest technology in human history, namely farming, the lesson behind it applies to many fields of engineering.

The pecan tree is the state tree of Texas.  (In case you’ve never heard a Texan say it, it’s pronounced “puh-cawn”).  Pecan trees can grow to a height of 100 feet (30 meters) or more, live up to 1000 years, and for most of those years can produce an abundant annual crop of tasty, highly edible nuts.  When my late grandfather moved to Fort Worth, Texas in 1930, he planted a pecan tree in his back yard.  The last time I visited his house (currently occupied by other relatives), that tree was still going strong, providing shade for most of the back yard and a good bit of the house too.  Pecan trees are native to Texas and grace thousands of acres of river banks and bottom lands, besides furnishing an important food crop to pecan growers who grow hundreds of different varieties.  Pecans are sold both for direct consumption, either in the shell or shelled, and also as ingredients for processed foods that benefit from the addition of chopped or blended pecans.  But until about a decade ago, the pecan market was almost entirely domestic, with a good number sold in Texas itself.

Then someone in China caught on to the fact that the huge market there for snack nuts, often sold in vending machines in locations such as gas stations and convenience stores, might benefit from imported pecans.  Up until then, most of the snack nuts sold were Chinese walnuts, but the cheaper pecan tastes just as good (in my opinion, anyway), and some clever Chinese importers introduced the new nut to Chinese consumers around 2001.

They liked it—liked it so much that since 2007, shipments of pecans to China from the U. S. (which includes exports from relatively new pecan-growing states such as Georgia and New Mexico as well as Texas) have averaged almost 60 million pounds annually.  But there is a fly in this profitable ointment:  the Chinese market wants a particular kind of “improved” pecan, not the rich variety of our native pecans.

According to an article in Texas Monthly by James McWilliams, the hybrid improved pecans have a uniform size, uniformly thin shells, and uniform quality.  These improved varieties will work in the Chinese vending machines, which can’t handle the variation in shapes and sizes of native varieties.  Texas pecan growers have known about the Chinese market for years, but so far they have exhibited a marked reluctance to chop down their existing groves, many of which are native varieties, to plant the improved type that produces machine-vendable pecans.   In so doing, they are losing year by year a potential market that could allow Texas to surpass the newer pecan-growing states and once more lead the nation in pecan exports.

That takes care of pecans and profits; now for the piety.  I have been reading a book called Food & Faith, a work of theological musings about the connections between eating and Christianity.  The author, Norman Wirzba, relies on the works of agrarians such as Wendell Berry as well as more explicitly theological writers.  But I was struck by the following passage from the book as expressing exactly what is going on between the Chinese pecan market and Texas pecan growers:  “Food that may have begun in the ground [or on a tree] must lose all traces of soil, sunlight, and fragile plant and animal life so that it can be redesigned, engineered [!], improved, packaged, stored, and delivered in whatever ways the food producer sees fit.”  Wirzba’s book, among many other things, is an impassioned plea to stop thinking about food and eating merely in material and economic terms. 

Viewed one way, it only makes sense for Texas pecan farmers to replant their groves with machine-friendly pecan trees, for the more efficient production of pecans that will contribute to the efficient international trade that efficiently fills vending machines with pecans that Chinese consumers can eat to fuel the machines called their bodies. 

But viewed another way, there is incalculable value in tending native pecan trees which are so deeply connected at multiple levels to a part of the world that Texans, at least, view as God’s country.  Not that His title to the rest of the world is defective in any serious way.  But as a recent arrival here from California said to me the other day, “Texans seem to have a loyalty to their state that I haven’t noticed anywhere else.”  And native pecan trees are part of what makes Texas the place it is.  I find reassuring the fact that just down the road from where I live, in Seguin, you can visit Pape’s Pecan Nutcracker Museum, and view both stationary and portable World’s Largest Pecans.  One is a concrete model on a pedestal on the town square, and the other, welded out of steel, is mounted on a trailer for convenient towing in parades.  And I would like to think that at least part of the reason that Texas pecan growers haven’t done the economically sensible and efficient thing of whacking down all their old-fashioned native trees to plant new ones for the Chinese market, is that, well, there’s more important things than money. 

It takes ten years for a new pecan sapling to mature enough to start producing.  That induces a natural tendency in pecan farmers to take the long view.  Ten years from now, the Chinese may have dropped pecans for Brazil nuts, for all I know.  But the rich biological and cultural heritage represented by the native pecan trees of Texas will live on, I hope, for many generations to come.

Sources:  I learned about the Chinese pecan market from James McWilliams’ article “Shell Game” in the Sept. 2013 edition of Texas Monthly.  It is an excerpt from his book The Pecan:  A History of America’s Native Nut to be published in October 2013.  I also referred to an online article about the pecan market posted by Nature’s Finest Foods Ltd. (a brokerage firm) at http://www.nffonline.com/industry-news/2013/06/19/pecan-exports-china-falter and an item by the Whitney Consulting Group posted on Google Docs (account required) at https://docs.google.com/file/d/1l9XwHsObwgS8O9OERlQwUqaF_IEoBXSVhQLpcnLvwnLpB_fwr-kY1pSeeVdl/edit.  Norman Wirzba’s Food & Faith:  A Theology of Eating was published in 2011 by Cambridge University Press.

Monday, August 19, 2013

Guarding U. S. Nuclear Facilities: The ABCs of DBTs


Earlier this summer, I blogged about a small but determined team of anti-nuclear protesters, including a nun, who managed to get uncomfortably close to a supposedly secure stockpile of nuclear material maintained by the U. S. Department of Energy in Oak Ridge, Tennessee.  Fortunately, the most damage they caused was spray-painting some slogans on a wall, but if they had been terrorists determined to steal enough enriched uranium to make a nuclear weapon, the story might have ended differently. 

A recent report by a group of researchers at the LBJ School of Public Affairs at the University of Texas at Austin points out what they consider to be serious flaws in the way we currently establish levels of security for the various nuclear facilities in the U. S., which range from small research reactors and commercial nuclear power reactors up to full-scale armed nuclear weapons.  According to their report, the present method of deciding how much security is enough is based on something called the Design Basis Threat (DBT).  While the basic idea seems sound, the devil, as always, is in the details.

In order to protect something, you have to know (or guess) what you’re protecting it against.  The way the Design Basis Threat approach works is as follows.  Say you run a small research-type nuclear reactor, the kind operated by many universities, including for example the University of Texas at Austin.  You go to the appropriate agency, in this case the Nuclear Regulatory Commission, and ask what the appropriate Design Basis Threat is for your facility.  It turns out that “research reactors generally do not have to protect against radiological sabotage or provide an armed response to an attack.”  The Design Basis Threat is presumably an attack so feeble that the usual class of security guards found on college campuses would be able to handle it.  So you just go with the minimal kind of security you will typically find at a high-dollar lab of any kind in a public university, and you’re set.

On the other hand, if you run a large commercial power reactor near, say, New York City, such as the Indian Point plant on the Hudson, you are told that your Design Basis Threat includes “multiple groups attacking from multiple entry points; willing to kill or be killed; possessing knowledge about target selection; aided by active and/or passive insiders; employing a broad range of weapons and equipment, including ground and water vehicles.”  This typically means you have to maintain a dozen or so military-style armed guards at all times who are ready to fight off an attack by people who intend either to steal fissionable material or to blow up the place and spread the hot stuff around.  However, no commercial nuclear facility is required to be secure against an attack from the air. 

The requirements for safeguarding nuclear weapons, generally held only by the U. S. military, are even more stringent, as you might imagine. 
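
The graded tiers described above can be sketched as a simple lookup.  This is only an illustration of the shape of the DBT scheme; the category names and threat descriptions below paraphrase the paragraphs above and are not actual NRC or DOE regulatory language:

```python
# Illustrative sketch of graded Design Basis Threat (DBT) tiers.
# Categories and descriptions are paraphrased for illustration only,
# not taken from any actual regulation.

DBT_TIERS = {
    "research_reactor": {
        "armed_response_required": False,
        "assumed_threat": "minimal; campus-style security suffices",
    },
    "commercial_power_reactor": {
        "armed_response_required": True,
        "assumed_threat": ("multiple armed groups, insider help, ground "
                           "and water vehicles; air attack excluded"),
    },
    "nuclear_weapons_site": {
        "armed_response_required": True,
        "assumed_threat": "most stringent; military-grade defense",
    },
}

def security_requirements(facility_type: str) -> dict:
    """Return the assumed DBT for a facility type, defaulting to the
    strictest tier when the type is unrecognized (a fail-safe choice)."""
    return DBT_TIERS.get(facility_type, DBT_TIERS["nuclear_weapons_site"])

print(security_requirements("research_reactor")["armed_response_required"])
```

Note that even this toy version makes a policy choice the real system arguably does not: an unknown facility type defaults to the strictest tier, whereas the UT Austin researchers’ complaint is that the real scheme keys security to facility size rather than to how dangerous the material inside would be if stolen.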

Anyone familiar with risks and accident histories knows that for every major disaster in a reasonably complex system, there are usually several less damaging minor incidents that can be called near misses or close calls.  The May 27 intrusion at Oak Ridge is just such a near miss, and to my mind seems to indicate that there may be cracks in the armor with which we protect our nuclear assets.  And some of these cracks may be due to the uneven way the Design Basis Threats are assigned, depending on the size and nature of the nuclear facility. 

The main criticism that the UT Austin researchers mount against the current DBT regime is that while the larger facilities may be more likely to attract certain types of attacks, the nuclear material in the smaller facilities could be just as dangerous if stolen.  And the very fact that research reactors are not heavily guarded like commercial nuclear power plants are makes the smaller operations more attractive to a potential terrorist, not less, if all they are trying to do is obtain a fissionable amount of material.  The UT Austin researchers point out that there are several examples of regulatory agencies backing down on the level of the assumed DBT because of industry’s protests that the resulting required protective measures would be too expensive. 

This is one of those matters that may never be resolved unless we wake up some morning to the news that a major attack on a nuclear facility has succeeded.  And I hope that never happens.  But I can’t help but agree at least with the report’s claim that some of the ways that DBTs are currently established are lacking in logic.  For example, the Nuclear Regulatory Commission has stated that current nuclear plants have enough strength in their existing containment vessels to withstand aircraft attack without any further enhancements.  But on the other hand, it has made a rule for new nuclear-plant designs:  designers must show how the plant will withstand the intentional crash of a commercial airliner into it.  Probably the truth of the matter is that nobody knows what would have happened if the 9/11 attackers had targeted the Indian Point plant instead of the symbolically much more attractive World Trade Center towers.  But it’s clearly something we don’t want to learn about from experience.

The UT Austin report will probably be criticized as an academic armchair exercise by those who spend their lives in the nuclear industry.  But academics who are remote from day-to-day issues in an industry can nevertheless bring different and sometimes valuable perspectives to a problem, and so I hope the report’s suggestions of how to improve nuclear security in the U. S. contribute to the ongoing challenges of living with nuclear materials, benefiting from them where possible, and not allowing them to fall into the wrong hands.

Sources:  I referred to a news article about the Nuclear Proliferation Prevention Project’s report which appeared on the CNN website on Aug. 15, 2013 at http://www.cnn.com/2013/08/15/us/nuclear-plants-security/.  The Project’s working paper itself can be accessed at http://blogs.utexas.edu/nppp/files/2013/08/NPPP-working-paper-1-2013-Aug-15.pdf.  Full disclosure:  I hold a Ph. D. in electrical engineering from the University of Texas at Austin and a part-time research professor appointment there. My blog on the protesting nun and her group appeared on May 27, 2013. 

Monday, August 12, 2013

Cybercrime: Prevention or Punishment?


Last week I needed an item at a Harbor Freight store in Austin.  Harbor Freight deals in low- to mid-priced tools imported from China, and unless you’re looking for something that will last for decades, it’s a good place to shop.  As soon as I walked in the door, one of the cash-register attendants came up to me and said, “Just to let you know, our registers are down and all we’re taking is cash right now.”  I’m one of those troglodytes (look it up) who prefer cash anyway, so this didn’t bother me other than the fact that I had to wait in a long line that was backed up because the sales clerk had to look up each item’s SKU on a handheld unit, write down the price by hand, add up the total on a calculator, and make change.  When I paid for my item, the clerk asked me if I minded not getting a receipt.  I replied, “Not as long as somebody doesn’t stop me at the door for shoplifting.” 

While I was waiting in line, I saw posted next to the register a notice from Eric Smidt, Harbor Freight’s president.  It was about a recent incident of hacking that resulted in the theft of a large number of their customers’ credit-card numbers, and said that the firm was taking every possible step to deal with the problem.  Whether this issue had anything to do with their registers going down that day is unclear, but it got me to thinking about the differences between old-fashioned analog theft and cybercrime.

Now if dozens of Harbor Freight customers had been coshed on the head as they left the stores and had their wallets taken, I bet you would have heard about it in the news.  Old-fashioned personalized one-on-one crime like that is much more likely to be reported by the injured individual, and because the criminals tend to be local, the local jurisdiction responsible has a fairly straightforward job on its hands, once the crook is identified.  But those responsible for the Harbor Freight data breach could be literally anywhere in the world that there is an Internet connection, which means just about anywhere in the world. 

Cybercrime is a lot less risky.  According to online reports, the Harbor Freight breach may have been one of 2013’s largest in terms of numbers stolen, comparable to a similar attack that netted about 2.4 million customer debit and credit card numbers.  The company found out about the attack in June, when credit-card firms began noticing a lot of fraudulent charges to accounts owned by Harbor Freight customers.  Apparently the hackers penetrated the company’s main network and gained access to data from all 400 of its retail stores.

There are several ways the criminals can profit from their ill-gotten numbers.  The retail way is to use the cards themselves to buy stuff they want.  My own credit-card number was stolen this way once, and in the list of charges that my bank seriously doubted I’d made were things like services at an upstate New York spa and jewelry charged to a Las Vegas store.  But the big money is in the wholesale underground exchange of hard cash for hot credit-card lists, and I suspect that is what the Harbor Freight crooks did with their numbers.

Because it’s so hard to catch and convict cyber criminals, most companies rely instead on anti-virus software, firewalls, and other protective measures rather than spending a lot of effort in working with law enforcement personnel to catch the perpetrators.  But a recent study by a group of researchers based in Cambridge, England points out that this may not be the most cost-effective approach. 

The study shows that the amount of money lost per person to number thievery such as occurred with the Harbor Freight customers is in the range of a few dollars per customer per year.  On the other hand, the money spent by firms on computer security measures may exceed what is lost to this type of cybercrime.  The authors say it might be cheaper overall to spend more money on tracking down the relatively small number of cyber criminals, and less on security measures.
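
The shape of that cost-effectiveness argument can be shown with a toy calculation.  Every figure below is hypothetical, chosen only to illustrate the comparison; none comes from the Cambridge study or from Harbor Freight:

```python
# Toy comparison of cybercrime losses vs. defensive spending.
# All figures are hypothetical, for illustration only.

customers = 2_400_000          # on the order of a large 2013 breach
loss_per_customer = 3.00       # "a few dollars per customer per year"
annual_fraud_loss = customers * loss_per_customer

security_spend = 10_000_000    # hypothetical yearly defensive budget
enforcement_spend = 2_000_000  # hypothetical spend on catching crooks

print(f"Fraud losses:      ${annual_fraud_loss:,.0f}")
print(f"Security spending: ${security_spend:,.0f}")

# The report's point: when defensive spending exceeds the losses it
# prevents, shifting some of it toward law enforcement may be the
# more cost-effective allocation.
if security_spend > annual_fraud_loss:
    print("Defense costs more than the losses it prevents.")
```

Of course, the calculation assumes the enforcement dollars actually catch someone, which brings us back to the jurisdiction problem discussed next.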

That is good advice as far as it goes, but it neglects the hard problem of jurisdictional diversity, as you might call it.  Say you can locate the Harbor Freight perpetrators, and they turn out to live in a country that has a dysfunctional government that can’t enforce ordinary laws, let alone laws about cybercrime.  Short of mounting an armed invasion of the country to catch the crooks, a private firm or even another sovereign country has its hands tied.  Unless some effective international agreements could be made for the extradition of cyber criminals, and some uniform laws passed in every host country that make the same actions illegal everywhere, it will continue to be very hard to punish those who steal data across international boundaries.  Look at the trouble the U. S. government has had with Edward Snowden, who committed a data breach of NSA information right here in the U. S. and then ran off with it to Russia, which has recently granted him asylum.  Once international relations and antagonisms get mixed into a criminal act, things get vastly more complicated.

Overall, we benefit greatly from the worldwide coverage of the Internet for both global commerce and less quantifiable benefits such as the freedom to communicate political and cultural ideas across boundaries.  These benefits come at a cost, however, and it looks like unless the international jurisdiction problem can be addressed more effectively than it has been in the past, we will have international cybercrime with us for the foreseeable future.  And despite Eric Smidt’s assurances, which I’m sure are sincere, the next time I go to Harbor Freight I think I’ll bring cash along.  But I think I’ll ask for a receipt.

Sources:  A report on the Harbor Freight data breach can be found at the Bank Info Security website at http://www.bankinfosecurity.com/impact-harbor-freight-attack-grows-a-5970/op-1.  The Cambridge cybercrime report is discussed at gcn.com/Articles/2012/06/18/Cost-of-cybercrime-Cambridge-study.aspx.  And the difficulties of prosecuting crimes in different jurisdictions are described well by Deb Shinder at http://www.techrepublic.com/blog/it-security/what-makes-cybercrime-laws-so-difficult-to-enforce/.

Monday, August 05, 2013

The Things That Didn’t Happen To Flight 214


Just moments before Asiana Airlines Flight 214 was to land at the San Francisco International Airport on July 6, some passengers noticed that backdraft from the jet engines was kicking up seawater.  This usually doesn’t happen on normal approaches to Runway 28L, which extends from just behind a seawall that faces San Francisco Bay onto land.  A few seconds later, the main landing gear hit the seawall and sheared off.  After that impact, both engines and the tail section came off, carrying some passengers and crew with it.  The main fuselage slammed into the runway and spun almost completely around before grinding to a halt. 

Flight attendants sprang into action, assisting passengers who needed help in exiting the aircraft.  One injured girl was pulled from the plane by a first responder, only to be covered in firefighting foam from arriving fire trucks.  Sadly, another emergency vehicle’s driver failed to see her underneath the foam, and she was struck and killed.  Another passenger died at the scene and a third passed away a few days later from injuries.  All of the other 304 people aboard survived, including all the pilots and crew, although some sustained serious injuries.  After the plane was evacuated, a fire from an oil leak demolished much of the fuselage, but without injuring anyone.

Any fatal accident involving air travel is a tragedy—usually an avoidable one.  But this accident could have been much worse, and that fact carries with it some implicit good news. 

For one thing, the Boeing 777 involved is a model that was introduced in 1995, and this 2013 crash is the first in the model’s history to involve loss of passenger lives in a flight-related accident.  Although fatal accidents occurred earlier, they involved refueling or other ground-based situations.  This is an outstanding safety record compared to planes developed during the earlier years of aviation.

Another fact worth noting is that the landing gear was purposely designed to break away under a sufficiently large impact, rather than staying attached to cause a destructive nosedive.  We are familiar with breakaway traffic signs on highways, but I wasn’t aware until now that the same principle has been designed into landing gear.

Finally, the fact that the fuselage endured the abuse of skidding thousands of feet down the runway sans landing gear and kept the remaining fuel from catching fire, staying together long enough for everyone to escape, is a testimonial to its structural engineering.  I am no mechanical engineer, but somebody did something right to make a fuselage that would hang in there during such a trial.

There are things that no airframe can endure, of course.  If the plane had encountered a large immovable object, for example, the outcome might have been quite different.  An accident similar in some ways to the Asiana Airlines crash took place on August 2, 1985.  A Delta Airlines Lockheed L-1011 with 163 people on board was caught in a microburst and windshear during a thunderstorm at the Dallas-Fort Worth Airport during its final landing approach.  The sudden loss of airspeed and accompanying downdraft forced the plane to the ground north of the runway, where it skidded into some giant water tanks and exploded.  Only 26 people survived.  Windshear detectors have since been installed at many airports, and pilots are much more aware of the dangers of such conditions, so the cause of that particular crash is much less likely to occur these days.

The cause of the Asiana crash is still under investigation, but attention has been focused on the flight crew, which consisted of three captains and a first officer.  The man actually flying the plane at the time of the crash had less than fifty hours’ experience on 777s, and was being instructed by the pilot in command, who occupied the co-pilot’s seat at the time.  The runway’s instrument landing system (ILS) vertical glide slope was out of service and a notice had been issued to that effect.  This made it impossible to execute an ILS landing to the runway.  Records indicate that the various automated landing-assistance systems were manipulated during the approach, and it may not have been clear to the flight crew that their approach was too low and slow until it was too late to do anything about it.  The laws of inertia are always in force, and a lot of advance planning has to be done to bring a huge heavy object like a 777 in contact with the ground safely.  Although final conclusions will have to await the completion of the ongoing investigations, it appears that pilot error may be at the bottom of this accident.

As long as human pilots fly planes, we will always have to contend with the possibility of pilot error.  But in general, air travel is safer now than it has ever been, in terms of fatalities per passenger-mile flown.  Even the absolute number of fatalities per year, which obviously stood at zero until the invention of the airplane, continues a downward trend that began in the 1970s, and is the lowest since about 1954.  And the total number of passenger-miles flown in 1950 was only about 2% of what it was in 1990. 
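
A rough calculation shows why fatalities per passenger-mile is the fairer metric.  The traffic volumes and fatality counts below are hypothetical round numbers; only the roughly 2% traffic ratio comes from the figures above:

```python
# Rough illustration of fatalities per passenger-mile as a safety
# metric.  Traffic volumes and fatality counts are hypothetical
# round numbers; only the ~2% ratio reflects the text.

miles_1950 = 10e9      # hypothetical passenger-miles flown in 1950
miles_1990 = 500e9     # 50x the traffic (1950 is ~2% of 1990)

fatalities_1950 = 500  # hypothetical annual toll
fatalities_1990 = 500  # same absolute toll...

rate_1950 = fatalities_1950 / miles_1950
rate_1990 = fatalities_1990 / miles_1990

# ...but the per-mile risk in 1990 is 50x lower.
print(f"1950: {rate_1950:.2e} fatalities per passenger-mile")
print(f"1990: {rate_1990:.2e} fatalities per passenger-mile")
print(f"Risk ratio (1950/1990): {rate_1950 / rate_1990:.0f}x")
```

In other words, an unchanged yearly death toll against fifty times the traffic is a fifty-fold improvement in the risk any individual traveler faces, which is why the real numbers, with the absolute toll actually falling, are even better news than they first appear.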

The Asiana crash may have stemmed from confusion about who was in charge—the autopilot mechanisms or the real pilot.  But for the vast majority of planes and flights, the amazing system of man and machine called air travel operates efficiently, economically, and with a safety record that was unimaginable in the early days of flight.

Sources:  I referred to the Wikipedia articles on “Asiana Airlines Flight 214,” “Delta Airlines Flight 191,” “USAirways Flight 1549,” and “Aviation safety.”  I also obtained statistics on air travel safety from a paper by Prof. Dan Bogart of UC Irvine which can be found at http://www.socsci.uci.edu/~dbogart/transport_momentusBogart_6.11.12.pdf.

Friday, July 26, 2013

The Medieval Wisdom of Google’s “Don’t Be Evil”


Back in 2000, when the founders of Google were discussing ways to express their core philosophy, Paul Buchheit (employee No. 23) suggested “Don’t be evil.”  At the time, he was simply trying to contrast the way Google did business with the less salutary practices of some of their competitors.  Nobody dared to disagree with the principle of not being evil, so the phrase was adopted and down to today remains one of Google’s official core values.  Along the way it has acquired another phrase, so the complete statement is “Do the right thing; don’t be evil.”  In promulgating this notion, Google has (perhaps unwittingly) taken a stand on the side of Aristotle, St. Thomas Aquinas, and countless other ancient sages against much of what today passes for acceptable moral principles.  It would surprise me, however, to discover that more than a few Google employees are aware of this.

Many of them, in fact, would probably subscribe to the notion that no one should impose one’s moral principles on another person.  Even Google doesn’t explicitly recommend their “do good, avoid evil” principle for everybody; the most they are saying is that Google employees will try to live up to it.  If you like doing evil, fine, just don’t go to work for Google.  But as physicist Anthony Rizzi points out in his book The Science Before Science, the advice not to impose one’s moral views on another is itself a moral view. 

If I see an adult male in a shopping mall beating up a two-year-old, and I rush to intervene, and the man says, “Leave us alone, you’ve got no business imposing your morality on me,” I could respond with, “Sir, that itself is a moral principle which you are trying to impose on me.”  (What I would really do is call the cops, but that’s another matter.)  And in any event, as Rizzi points out, no one consistently acts as though all moral principles are simply matters of personal preference, even though they may give lip service to the idea in academic papers, for example.  If the chair of a philosophy department read a paper by one of his philosophers claiming that all morality is relative, and called the author up one day and said, “Because all morality is relative and I don’t like your looks, I’m reducing your pay by half,” I seriously doubt that the philosopher would calmly accept this as a logical consequence of his own philosophical position.  So even if some people say morality is relative, on matters that affect them personally they usually don’t act like they really believe it.

So where does that leave us?  It begins to look as though there really may be some objective moral principles “out there” so to speak, independent of whatever we say or think about them.  And behind them all, at the head of the logical chain of reasoning where first things must always be, stands the principle embraced by Google:  “Do the right thing; don’t be evil.”  You can’t derive that principle from anything else.  It is one of those self-evident statements that can’t come from another more basic notion.  As it stands, of course, it needs development before it can help you live your life.  But all other moral principles can be logically derived from what Rizzi calls “the first principle of ethics”:  do good and avoid evil.

Ah, but what is good and what is evil?  In a thousand-word column, I obviously can’t do justice to that question.  The short answer is, good is that which fulfills one’s purposes, and evil is the absence of such good.  One reason there is so much evil in the world is that, while every person does what seems good at a particular time and place, what seems good at the time may not really help one to fulfill one’s purposes.  It may seem good to an alcoholic to take one more drink, even if it’s the one that makes him so drunk he gets in his car and causes the death of another driver.  It’s not always easy to figure out what the true good is, which is one reason why ethics can get complicated—so complicated that the analytically-minded tend to throw up their hands and say it’s all hopeless. 

But it’s not hopeless.  Most people figure out what good to do, and what evil to avoid, with a good bit of success every day.  The lapses happen when our emotions or our hasty judgments lead us astray.  It requires just as much thought and attention, if not more, to be a good person as it does to be a good engineer.  But the technical and the ethical sides of engineering start from different foundations.

When Mr. Buchheit hit on “Don’t be evil” to guide what would become one of the greatest corporations of the twenty-first century, he was saying more than he knew.  Neither Google (through whose facilities this blog appears, by the way) nor any other firm can completely live up to their core principles, including that one.  But having it out there to shoot for is a start.  And in having that core principle to live up to, all the Googleites are following in the footsteps of medieval thinkers such as St. Thomas Aquinas, who clearly saw that the first logical step in being good is to admit there are such things as universal moral principles, and that the one to start with is “do good and avoid evil.” 

Sources:  Anthony Rizzi is a practicing research physicist at the Institute for Advanced Physics at Baton Rouge, Louisiana (www.iapweb.org) and author of The Science Before Science:  A Guide to Thinking in the 21st Century (IAP Press, 2004).  Of all books that I’ve read about scholastic philosophy (which is the term for the type of philosophy done in the High Middle Ages by St. Thomas Aquinas), Rizzi’s does the best job of defining terms and explaining concepts in ways that the average non-philosopher can understand.  I also referred to the Wikipedia articles on Paul Buchheit and “Don’t be evil.”