Monday, December 27, 2010

A Night to Remember on the Deepwater Horizon

Walter Lord, in his classic nonfiction book A Night to Remember, used dozens of interviews and historical documents to recount the 1912 sinking of the Titanic in vivid and harrowing detail. Now David Barstow, David Rohde, and Stephanie Saul of the New York Times have done something similar for the Deepwater Horizon disaster of last April 20. While official investigators will probably take years to complete a final technical reconstruction with all the available information, the story these reporters have pieced together already highlights some of the critical shortcomings that led to the worst deepwater-drilling disaster (and consequent environmental damage) in recent memory.

Their 12-page report makes disturbing reading. They describe how Transocean, the company which owned the rig and operated it for the international oil giant BP, was under time pressure to cap off the completed well and move to the next project. They show something of the complex command-and-control system for the rig that involved all kinds of safety systems (both manual and automatic) as well as dozens of specialists out of the hundred or so engineers, managers, deckhands, drillers, cooks, and cleaning personnel who were on the rig at the time. And they reveal that while the blowout that killed the rig was about the worst that can happen on an offshore platform, there were plenty of ways the disaster could have been minimized or even avoided—at least in theory. But as any engineering student knows, there can be a long and rocky road between theory and practice. I will highlight some of the critical missteps that struck me as common to other disasters that have made headlines over the years.

I think one lesson that will be learned from the Deepwater Horizon tragedy is that current control and safety systems on offshore oil rigs need to be more integrated and simplified. The description of the dozens of buttons, lights, and instruments in physically separate locations that went off in response to the detection of high levels of flammable gas during the blowout reminds me of what happened at the Three Mile Island nuclear power reactor in 1979. One of the most critical people on the rig was Andrea Fleytas, a 23-year-old bridge officer who was one of the first to witness the huge number of gas alarms going off on her control panel. With less than two years’ experience on the rig, she had received safety training but had never before experienced an actual rig emergency. She, like everyone else on the rig, faced some crucial decisions in the nine minutes that elapsed between the first signs of the blowout and the point where the explosions began. Similarly, at Three Mile Island, investigators found that the operators were confused by the multiplicity of alarms going off during the early stages of the meltdown, and actually took actions that were counterproductive. In the case of the oil-rig disaster, inaction was the problem, but the cause was similar.

Andrea Fleytas or others could have sounded the master alarm, instantly alerting everyone that the rig was in serious trouble. She could have also disabled the engines driving the rig’s generators, which were potent sources of ignition for flammable gas. And the crew could have taken the drastic step of cutting the rig loose from the well, which would have stopped the flow of gas and given them a chance to survive.

But each one of these actions would have exacted a price, ranging from the minor (waking up tired drill workers who were asleep at 11 o’clock at night with a master alarm) to the major (cutting the rig loose from the well meant millions of dollars in expense to recover the well later). And in the event, the confusion created by unprecedented combinations of alarms and the lack of coordination among critical personnel in the command structure meant that none of the actions that might have mitigated or avoided the disaster was in fact taken.

It is almost too easy to sit in a comfortable chair nine months after the disaster and criticize the actions of those who afterward did courageous and self-sacrificing things while the rig burned and sank. None of what I say is meant as criticism of individuals. The Deepwater Horizon was above all a system, and when systems go wrong, it is pointless to focus on this or that component (human or otherwise) to the exclusion of the overall picture. In fact, a lack of overall big-picture planning appears to be one of the more significant flaws in the way the system was set up. Independent alarms were put in place for specific locations, but there were no overall coordinated automatic systems that would, for example, sound the master alarm if more than a certain number of gas detectors sensed a leak. The master alarm was placed under manual control to avoid waking up people with false alarms. But this meant that in a truly serious situation, human judgment had to enter the loop, and in this case it failed.

Similarly, the natural hesitancy of a person with limited experience to take an action that they know will cost their firm millions of dollars was just too much to overcome. This sort of thing can’t be dealt with in a cursory paragraph in a training manual. Safety officers in organizations have to grow into a peculiar kind of authority that is strictly limited as to scope, but absolute within its proper range. It needs to be the kind of thing that would let a brand-new safety officer in an oil refinery dress down the refinery’s CEO for not wearing a safety helmet. That sort of attitude is not easy to cultivate, but it is vitally necessary if safety personnel are to do their jobs.

Disasters teach engineers more than success, it is said, and I hope that the sad lessons learned from the Deepwater Horizon disaster will lead to positive changes in safety training, drills, and designs for future offshore operations.

Sources: The New York Times article “The Deepwater Horizon’s Final Hours” appeared in the Dec. 25, 2010 online edition at http://www.nytimes.com/2010/12/26/us/26spill.html.

Sunday, December 19, 2010

Cheaters 1, Plagiarism-Detection Software 0

The Web and computer technology have revolutionized the way students research and write papers. Unfortunately, these technologies have also made it vastly easier to plagiarize material: that is, to lift verbatim chunks of text from published work and pass them off as your own original creation. In response, many universities have promoted the use of commercial plagiarism-detection software, marketed under names such as Turnitin and MyDropBox. Still more unfortunately, in a systematic test of how effective these programs are in detecting blatant, wholesale plagiarism, the software bombed.

Why is plagiarism perceived as getting worse than it used to be? One factor is the physical ease of plagiarism nowadays. Back in the Dark Ages when I did my undergraduate work, it was not quite the quill-pen-by-kerosene-lamp era, but if I had ever decided to plagiarize something, it would have taken a good amount of effort: hauling books from the library, photocopying journal papers, dragging them to my room, and typing them into my paper letter by letter with a manual typewriter. With all that physical work and dead time involved, copying a few paragraphs with the intent of cheating wasn’t much easier than simply thinking up something on your own. The physical labor was the same.

Fast-forward to 2010: there’s Microsoft Word, there’s Google, and if you’re under 22 or so these things have been there for at least half your life. The “copy” and “paste” commands are vastly easier than hunting and pecking out your own words. And you suspect that a good bit of everything out on the Web was copied and pasted from somewhere else anyway. So what is the big deal some professors make about this plagiarism thing? The big deal is this: it’s wrong, because it constitutes theft of another person’s ideas, and fraud in that you give the false impression that you wrote it yourself.

In engineering, essays and library-research reports make up only a small part of what students turn in, so I do not face the mountains of papers that instructors in English or philosophy have to wade through every semester. But with plagiarism being so easy, I do not blame them for resorting to an alleged solution: the use of plagiarism-detection software. Supposedly, this software goes out and compares the work under examination with web-accessible material and if it finds a match, it flags the work with a color code ranging from yellow to red. Work that passes muster gets a green.

In a recent paper in IEEE Technology and Society Magazine, Rebecca Fiedler and Cem Kaner report their tests of how well two popular brands of plagiarism-detection software actually work on papers that were copied word-for-word from academic journals. The journals themselves were not listed in the article, but appear to be the usual type of research journal which requires payment (either from an individual or a library) for online access. Therein, I think, lies the key to why the software failed almost completely to disclose that the entire submission was copied wholesale, in twenty-four trials of different papers. If I interpret their data correctly, only one of the two brands tested was able to figure this out, and even then only in two of the twenty-four cases. Fiedler and Kaner conclude that professors who rely exclusively on such software for catching plagiarism are living with a false sense of security, at least where journal-paper plagiarism is concerned.
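To picture why sources the software cannot reach would defeat it so thoroughly, it may help to sketch what such a comparison presumably boils down to. The little Python fragment below is emphatically not any vendor’s actual algorithm; the function names, the n-gram length, and the color thresholds are all my own inventions, meant only to illustrate the general idea of matching a submission against whatever reference text the service can see.

```python
# Minimal sketch of text matching for plagiarism detection (illustrative only):
# break a submission into overlapping word n-grams and see what fraction
# also appear in a reference collection of reachable documents.

def ngrams(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, reference_documents, n=8):
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    reference = set()
    for doc in reference_documents:
        reference |= ngrams(doc, n)
    return len(sub & reference) / len(sub)  # fraction of shared n-grams

def flag(score):
    # Mimics the red/yellow/green color coding described above;
    # the thresholds here are arbitrary choices of mine.
    if score > 0.5:
        return "red"
    if score > 0.1:
        return "yellow"
    return "green"
```

The point of the sketch is simply this: if the journal article a student copied from never makes it into the reference collection, the overlap score stays near zero and the paper comes back green, no matter how wholesale the copying was.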

I think the results might have been considerably better for the software if the authors had chosen to submit material that is openly accessible on the Web, rather than publications sitting behind fee-for-service walls that require downloading particular papers. In my limited experience with doing my own plagiarism detection, I was able simply to Google a suspiciously well-written passage out of an otherwise almost incomprehensible essay and locate the university lab’s website where the writer had found the material he plagiarized. And I didn’t need the help of any detection software to do that.

As difficult as it may seem, the best safeguard against plagiarism (other than honesty on the part of students, which is always encouraged) is the experience of instructors who become familiar with the kind of material that students typically turn in, and even with passages from well-known sources which might be plagiarized. No general-purpose software could approach the sophistication of the individual instructor who deals with this particular class of students about a particular topic.

Of course, if we’re talking about a U. S. History class with 400 students, the personal touch is hard to achieve. Especially at the lower levels, books are more likely to be plagiarized from than research papers, and as Google puts pieces of more and more copyrighted books on the Web, plagiarism detection software will probably take advantage of that to catch more students who try to steal material. It’s like any other form of countermeasure: the easy cheats are easily caught, but the hard-working cheats who go find stuff from harder-to-access places are harder to catch. But it’s not impossible, and one hopes that by the time students get to be seniors, they have adopted enough of their chosen discipline’s professionalism to leave their early cheating ways behind. Sounds like a country-western song. . . .

If any students happen to be reading this, please do not take it as an encouragement to plagiarize, even from obscure sources. The fact that your instructors’ cheating-detection software doesn’t work as well as it should is no reason to take advantage of the situation. Anybody reading a blog on engineering ethics isn’t likely to be thinking about how to plagiarize more effectively, anyway—unless they have to write a paper on engineering ethics. In that case, leave this blog alone!

Sources: The article “Plagiarism Detection Services: How Well Do They Actually Perform?” by Rebecca Fiedler and Cem Kaner appeared in the Winter 2010 (Vol. 28, no. 4) issue of IEEE Technology and Society Magazine, pp. 37-43.

Monday, December 13, 2010

The Irony of Technology in “Voyage of the Dawn Treader”

I write so often about bad news involving engineering and technology because engineers usually learn from mistakes more than they learn from success. But not always. A more positive theme in engineering ethics takes exemplary cases of how engineering was done right, and asks why and how things worked out so well. That’s what I’m going to do today with the latest installment of the series of “Chronicles of Narnia” movies, namely “The Voyage of the Dawn Treader.”

It is ironic that the most advanced computer-generated imagery (CGI) and computer animation was used to bring to the screen a story by a man who was a self-proclaimed dinosaur, an author who wrote all his manuscripts by hand with a steel pen and never learned how to drive a car. C. S. Lewis, who died in 1963 after achieving fame as one of the greatest imaginative Christian writers of the twentieth century, also wrote one of the most prescient warnings about the damage that applied science and technology could do to society. In The Abolition of Man, Lewis warned that the notion of man’s power over technology was wrongly conceived. What increased scientific and technological abilities really allow is for those in control of the technology to wield more power over those who are not in control. Of course, he granted that technological progress had also led to great benefits, but that was not his point.

Perhaps the most popular of all his works of fiction is the “Chronicles of Narnia” series, a set of seven interrelated books for children in which he drew upon his vast learning as a scholar of medieval and Renaissance literature to produce one of the most completely realized works of fantasy ever written. I have read all of the stories many times. And like many other readers, I had my doubts that any cinematic version of them would stand a chance of living up to the unique standard set by the books. For one thing, Lewis’s descriptions of fantastic beings such as minotaurs, centaurs, and fauns are suggestive rather than exhaustive, leaving much to the reader’s imagination, as most good literature does. This throws a great burden upon anyone who attempts to render the stories in a graphic medium. I was saddened to see at the end of the movie the dedication “Pauline Baynes 1922-2008.” Baynes was the artist chosen by both Lewis and his friend J. R. R. Tolkien to provide illustrations for the “Chronicles” and for many of Tolkien’s imaginative works as well. Baynes’s drawings fit in with Lewis’s descriptions so well because they did what book illustrations are supposed to do: namely, they enhanced the reader’s experience without turning the story in a direction not intended by the author.

And that is what the hundreds of IT professionals, artists, technicians, computer scientists, entrepreneurs, and others involved in “The Voyage of the Dawn Treader” film have done. As computer graphics has advanced, people engaged in what began as a purely engineering task—to render a realistic image of a natural feature such as the hair on a rat being blown by the breeze atop the mast of a sailing ship—find themselves not only dealing with the sciences of mechanics and fluid dynamics, but now and then making fundamental advances in our understanding of how air flows through fibrous surfaces or how light travels through a complex mineral surface. Fortunately for the moviegoing public, none of this needs to be understood in order to watch the movie, the production of which is comparable in today’s terms with the effort needed to build part of a medieval cathedral. But anyone can walk into a cathedral and enjoy the stained-glass windows without understanding how they were made. This connection is not lost on the moviemakers. In fact, the very first scene in the film focuses on a stained-glass window showing the Dawn Treader ship, just before the camera zooms away to reveal a tower in the city of Cambridge, where the story begins.

It is this sensitivity to the spirit of the tales and the style, if you will, of Narnia that makes the movie both an essentially faithful rendition of the book, and an excellent adventure on its own. For cinematic reasons, the screenwriters did some mixing of plot elements and originated a few new ones, but entirely within the spirit of what G. K. Chesterton calls the “ethics of elfland.” Chesterton expresses the ethic this way: “The vision always hangs upon a veto. All the dizzy and colossal things conceded depend upon one small thing withheld.” The chief plot innovation concerns a search for the seven swords of the lost lords of Narnia, which unless I’m mistaken were not in the original story. But until these swords are placed on a certain table, the Narnians cannot triumph over a strong force of evil that threatens to undo them.

What would C. S. Lewis think? Well, those who believe in an afterlife can conclude that he will find out eventually about what has been done with his stories, and perhaps some of us will some day be able to ask the man himself. He may answer, but then again he may view his earthly works in the same light that St. Thomas Aquinas viewed his own magisterial works of philosophy toward the end of his life. According to some reports, Aquinas was celebrating Mass one day when he had a supernatural experience. He never spoke of it or wrote it down, but it caused him to abandon his regular routine of dictation. After his secretary Reginald urged him to get back to work, Aquinas said, “Reginald, I cannot, because all that I have written seems like straw to me.” Once one encounters that joy which, in Lewis’s words, is the “serious business of Heaven,” the fate of a children’s story at the hands of this or that film crew may not seem all that important. But those of us still here in this life can rejoice in a faithful rendition of a spiritually profound work, made possible in no little part by engineers who simply did their jobs well and with sensitivity to the spirit of the project.

Sources: The Chesterton quotation is from chapter 4, “The Ethics of Elfland,” of Chesterton’s 1908 book Orthodoxy. I used material from the Wikipedia article on St. Thomas Aquinas in the preparation of this article.

Monday, December 06, 2010

TSA Has Gone Too Far

It’s not too often that I take an unequivocal stand on a controversial issue. But this time I will. The U. S. Transportation Security Administration (TSA) is wasting millions of dollars putting thousands of harmless passengers through humiliating, indecent, and probably unconstitutional searches, while failing in its primary mission to catch potential terrorists. I say this as a participant in the invention of one of the two main technologies currently being deployed for whole-body scans at U. S. airports.

Back in 1992 when airport security checks of any kind were a novelty, I was consulting for a small New England company whose visionary president anticipated the future demand for whole-body contraband scans. I helped in the development of a primitive version of the millimeter-wave scanning technology that is now made by L3Comm. The scan took 45 minutes, had very low resolution, but produced recognizable images of non-metallic objects hidden under clothes. As I recall, the main reason the company didn’t pursue the technology further was that it revealed too many details of the human body, and we thought the public would rise up in revolt if some bureaucrat proposed to electronically strip-search all passengers.

Well, here we are eighteen years later, and the TSA is now installing that technology plus a similar (but even more detail-revealing) X-ray technology at dozens of airports across the land. The agency is reluctant to share any information that would cause it problems, but the few images that have gotten into the public media are enough to tell us that Superman’s X-ray vision is indeed here. In the movie of the same name starring the late Christopher Reeve, the X-ray vision thing was played for a joke in his encounter with Lois Lane. But forcing thousands of ordinary, harmless citizens, including elderly folks and young children, none of whom have been charged with a crime, to subject themselves to electronic invasions of privacy, with the potential for abuse that entails, is an outrage.

Not only is it an outrage, but it is unlikely to achieve the purpose which the TSA says it is achieving at this tremendous price: lowering the risk of terrorist acts in the air. So far, airport body scans have caught zero terrorists. None. All the plots and near-misses we have had lately have been thwarted either by alert passengers (and incompetent terrorists), by tips from people with knowledge of the plots, or by old-fashioned detective work that doesn’t stop looking when it runs up against a matter of political correctness. The U. S. is nearly unique among major nations in relying on this inefficient and intrusive blanket of technologically intensive measures to achieve safe air travel, rather than focusing limited resources on the groups and individuals most likely to cause trouble, as the Israelis do.

The current administration is bending over backwards not to offend Muslim sensibilities in this or any other situation. I am all for respecting and allowing religious freedom, but when nearly all crimes of a certain kind are associated with members of an identifiable group, whether they be Muslim, Jewish, Christian, liberal, conservative, red-haired, or whatever, I don’t want those charged with the responsibility of catching them to purposely throw away that information and instead impose punitive and humiliating (and ineffective) searches on every single person who chooses to fly. And I haven’t even gotten to the “enhanced” pat-downs that the TSA offers as alternatives to the whole-body scans. That amounts to asking whether you would rather have your thumb squeezed with a pair of pliers or in a vise.

The public statements of the TSA on this matter have been about what you’d expect from a rogue bureaucracy. Inanities such as saying “if you don’t want to be searched, just don’t fly” are as useful today as saying “if you don’t like risking your life in automobile traffic, get out and walk.” Here is where the best hope of reversing this egregious and unconstitutional overreaching lies: in the boycotting of airports where the new systems are used. If air travel decreases to the point that the airlines notice it, they will become allies to the public in the battle, and there will be at least a chance that Washington will listen to corporations that employ a lot of union workers, rather than the great unwashed masses that have been ignored repeatedly on everything from health care to offshore oil drilling already.

Civilizations can decline either with a bang or by slow degrees. In historian Jacques Barzun’s monumental From Dawn to Decadence: 1500 to the Present, we find described as one of the characteristics of modern life a slow encrustation of restrictions on freedom exacted by bureaucracies whose ostensible purpose is to make life better in the progressive fashion. I think Barzun had in mind things like income-tax forms and phone trees, but he lives right down the road in San Antonio, whose airport just installed the new scanning systems. I doubt that he flies much anymore (he turned 103 last month), but if he does, he will be faced with a good example of his own observation: some hun-yock* in a blue uniform will treat the dean of American historians, a man whose family fled World War I to the U. S. and freedom, to the degrading and wholly unnecessary humiliation of being suspected as a terrorist and having his naked body exposed to the eyes of some nosy minion of the government.

To Jacques Barzun and to all the other people who simply want to get from A to B on a plane and have no malevolent intentions regarding their mode of transportation, I apologize on behalf of the engineers and scientists whose work has been misused, among whom I count myself.

Sources: The millimeter-wave technology used for whole-body scans is described well in the Wikipedia article “Millimeter-wave scanner,” and the X-ray system can be read about at http://epic.org/privacy/airtravel/backscatter/#resources. My Jan. 10, 2010 entry in this blog has a reference to my published work on the early version of the millimeter-wave scanner. *The word “hun-yock”, which I find spelled on the Web as “honyock” or “honyocker” was used by my father to indicate a person who did something unwise and publicly irritating. I can think of no better term for the present situation.

Monday, November 29, 2010

Holes in the Web

Tim Berners-Lee, inventor of the World Wide Web, thinks we should worry about several threats to the Web’s continued integrity and usefulness. When someone of this importance says there are things to worry about, we should at least listen to him. I for one think he has some good points, which I will now summarize from a recent article he wrote in Scientific American magazine.

The first threat Berners-Lee points out is the practice of creating what he calls “silos” of information on the otherwise universally accessible Web. Facebook, iTunes, and similar proprietary sites treat information differently than a typical website does. The original intent was that every bit of information on the Web could be accessed through a URL, but as those (such as myself) who have no Facebook page have discovered, there is information inside Facebook that only people who have Facebook pages can gain access to. And the iTunes database of information about songs and so on is accessible only through Apple’s proprietary software of the same name.

The second threat he sees is the potential breaching of the firewall between the Web (which is a software application) and the Internet (which is basically the networking hardware used to run the Web). Again, the original intent was that once you pay for an Internet connection of a certain speed, you are able to access absolutely anything on the Web just as easily as anyone else with the same speed of connection. This is called “net neutrality,” and recently it has been under attack by institutions as powerful as Google and Verizon, who, as Berners-Lee points out, moved last August to create special rules for Internet connections using mobile phones. They say that the limited spectral bandwidth of mobile phones makes it necessary for companies to discriminate (e.g., charge extra) for certain types of applications, or to make it harder for users to access sites that are not part of the institution’s own setup.

One motivation for Berners-Lee’s cautions is an old communications-network principle that dates back to the early days of the telephone. Larger communications networks are more valuable to the users than smaller ones, but the value increases faster than just the number of users. Since each new user can not only gain access to all the others, but all the other users can also access the new user, the usefulness of a network tends to increase as the square of the number of users. That is, a network with 20 users is not twice as useful as one with ten, but four times as useful. Extrapolate this to the billions that apply to the Web, and you see how organizations that persist in walling off information and users may reap some short-term selfish benefits, but at a cost to the usefulness of the Web as a whole.
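For readers who like to see the arithmetic, here is a small sketch of that scaling argument, often called Metcalfe’s law. The numbers are mine, chosen only to illustrate the square-law growth described above.

```python
# Minimal sketch of the network-effect arithmetic: if every user can
# reach every other user, the number of possible connections grows
# roughly as the square of the number of users.

def possible_connections(users):
    # Each of n users can reach the other n - 1 users.
    return users * (users - 1)

for n in (10, 20, 40):
    print(n, possible_connections(n))
# 10 -> 90, 20 -> 380, 40 -> 1560: doubling the user count roughly
# quadruples the number of connections.
```

Every user walled off inside a silo is, in effect, subtracted from that count for everyone outside the wall.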

The last major concerns that Berners-Lee voices are matters of privacy and due process. There is now a way, generally known as deep packet inspection, to crack open the individual packets of information that carry Web traffic and associate particular URLs with particular users. He sees this as a major privacy threat, although it isn’t clear how widely it’s being used yet. Another thing that threatens the freedom of people to use the Web is a recent trend by some European governments to cut off Web access to people who are even suspected of illegal downloading of copyrighted material. No trial, no defendant in court, no hearing: just a company’s word that they think you did something wrong. Since access to the Web is now as taken for granted as access to electricity, Berners-Lee sees this as a violation of what in Finland is now regarded as a fundamental human right: the right to access the Web.

These warnings need to be taken seriously. As director of the World Wide Web Consortium, the organization that is formally charged with the continued development of the Web, Berners-Lee is in a good position to do something about them. But he can’t control the actions of private companies or governments, so consumers and voters (at least in countries where votes mean something) will have to go along with his ideas to make a difference.

The Web is a new kind of creature in political, governmental, and economic terms. There has never before been a basically technical artifact which is simultaneously international in scope, beyond the regulatory authority of any single governmental entity, not produced by a single firm or monopolistic group of firms, and fundamentally egalitarian in nature without any controlling hierarchy. Of course, a good deal of the nature of the Web was expressly intended by its founder, who, because of his youth at the time he developed it (he was only 35), is still very much with us and able to give helpful suggestions on this, the twentieth anniversary of the Web. (For those who care, Berners-Lee got the first Web client-server connection running on Christmas Day, 1990.)

What actually happens with the Web in the future, therefore, depends in a peculiar way on what its own users decide, and to much less of a degree what private companies or governments choose to do. There is probably much good in that way of doing things, since it prevents anything from happening that violently opposes the will or desires of the majority of users. But it also builds in a lot of immunity from what you might call reform efforts that go against common but less than salutary desires: the need to reduce Web pornography traffic, for instance.

For better or worse, Sir Timothy (he was knighted by his native England in 2004) has impressed a good deal of his open-source, egalitarian philosophy on his brainchild the Web, which has grown vastly beyond his initial expectations. As any good father does, he wants his child to grow and prosper and be a good citizen. Now that you have heard some of Berners-Lee’s cautionary words, you can do your part, however minor, to see that this happens.

Sources: The December 2010 issue of Scientific American carried Berners-Lee’s article “Long Live the Web” on pp. 80-85. It can also be accessed (without charge!) at the Scientific American website http://www.scientificamerican.com/article.cfm?id=long-live-the-web.

Monday, November 22, 2010

An Open Letter In Response to Stephen H. Unger’s Essay “Unwanted Newborns: A Painful Problem”

Dear Steve,

Your “Ends and Means” essays are always worth reading, and I appreciate being on your email list. I value the experience of being on the IEEE Society on Social Implications of Technology board with you for a time, and it has been a privilege to know one of the founding fathers of engineering ethics. It is with that same sense of collegiality and fair discussion which we shared in SSIT board meetings that I undertake to disagree with you regarding your latest essay, “Unwanted Newborns: A Painful Problem.”

I will first try to restate your argument as accurately as I can. Your analysis is basically an exercise in utilitarian ethics, and its underlying premise is that actions are to be judged according to whether they accrue to the greater happiness of people, and by the same token, whether they result in less unhappiness. Another premise is your definition of who counts as a person. In previous essays you have put forward the idea that murder is wrong mainly because it makes people afraid of being murdered, which is, among other things, a type of unhappiness. You draw upon this notion to examine the case of newborn babies who for various reasons (handicaps, disabilities, the unfitness or unwillingness of the mother) are not desired by their parents and are a potential burden on society. Because newborn babies give no evidence of fearing or even understanding death, you conclude that they cannot be subject to such fear. They are therefore exempt from the main reason you have previously put forward to justify society’s no-murder rule. You conclude that the government (or society) should defer to the persons most concerned—namely, the parents—in the case of unwanted newborns, and if the parents decide to kill the child (either actively by execution or passively by withholding treatment, medication, or food), your ethical reasoning tells you that such an act is not wrong, as difficult as it might be for all concerned.

Steve, if I agreed with your premises, I would have to agree with your conclusion because it follows logically from your premises. But your premises are mistaken. Here is how.

Your major misstep is to exclude newborn babies from the category of those with a right to life for the reason that they cannot (apparently) conceive of or fear death. This is in contrast to the approach favored by many right-to-life groups and individuals (including myself), which is to confer the right to life upon any biological entity that can be shown to be a human being at any stage of development or consequent stage of life. The latter criterion includes everyone from a just-fertilized human ovum all the way to a 100-year-old man in a persistent vegetative state.

There are important differences between these two criteria.

Your criterion is based on a behavior, or rather, the lack of a specific insight on the part of the baby which we deduce from the baby’s behavior: namely, the knowledge and fear of death. My criterion is based on physical evidence that can be easily and scientifically verified, e. g. by a DNA test to show whether the being in question is a member of the species Homo sapiens. This criterion essentially says, “If it is alive, and if it is human, then it qualifies as a person.” I think you will admit that applying my criterion is a fairly simple and straightforward matter that in most cases can be done by inspection.

But what about your criterion? Exactly when does a baby reach the age at which it can understand and fear death? Moreover, does this fear have to be active, or merely a potential fear? And how do you know whether it is present or not in any given individual? Must we develop an “awareness-of-death inventory” and administer it before legitimately taking a newborn child’s life? We are talking about an extremely serious matter here—the qualifications for membership in the rights-endowed human race—and it will not do to make unverified assumptions or generalizations. I hope you will not accuse me of undue levity if I say that I have known some teenagers who, at least by their behavior, showed no evidence whatever of the fear of death. Should they therefore be disqualified from membership in the category of humans, allowing us to kill them at will? Handicapped infants are not the only beings who can become a burden to their parents.

I find it ironic that you, who make no secret of your Jewish heritage, nevertheless contrast your position with that of the Nazi regime, which engaged in the killing of innocent human beings on a mass-production scale. I disagree when you say that adopting your idea to exclude newborns from the right to life will not lead to a slippery slope downward toward another Holocaust. Both you and the Nazi regime have already taken the first step: namely, the act of dehumanizing someone who most people would normally regard as a person.

If you study the memoirs of former concentration-camp guards, you will find that their need to view the prisoners as cattle, vermin, raw material for soap—anything other than a fellow human being—was critical to the guards’ ability to continue the heinous work in which they were engaged. Once they admitted to themselves that the naked, trembling body before them embodied a soul just like theirs, the game was over, and there was nothing left to them but suicide, desertion, or insanity.

Once you make membership in the human race dependent on any manifested ability and not on objective physical facts, you have crossed a critical line, and the rest is merely details. Whether you specify the ability to fear death, or to say “Heil Hitler” or “Hail, Caesar,” or integrate sin(x) dx, or anything else, you have fundamentally changed the category of qualification from that of physical nature to that of performance or behavior. To qualify as human, one must not simply be human, one must do something that is regarded as characteristic of humans. Exactly what activity is used as a criterion is secondary.

Even if you and I were to agree that a performance-based qualification for human rights should be adopted, the one you choose is problematic in the extreme. You cannot prove that babies do not fear death in the same way that I can prove by a DNA test that a baby is human. The idea that babies do not fear death is a speculative conclusion based on the lack of evidence to the contrary. And as most scientists would agree, absence of evidence is not evidence of absence. Newborn babies cannot talk and therefore cannot communicate the essence of their experiences to us; no one except a newborn has any idea of what it is like to be a newborn, any more than I have an idea of exactly what it is like to be a wombat or a gerbil. Who knows but what any unpleasant experience, from hunger to the sight of an unfamiliar face, is more terrifying to an infant than facing a firing squad is for an adult? Certainly some babies scream as though it is.

Your failure to follow out some of the implications of your logic leads you to make some unguarded statements that, on examination, turn out to be little more than hopeful wishes rather than reasonable bases upon which to erect a system of ethics. For example, in discussing your principle that murder is to be avoided because it causes the fear of death in others, you write, “Nobody should have valid cause to worry that they might at some time in the immediate, or even remote, future be deprived of their lives.” You probably had in mind an unwritten supporting clause, namely, “deprived, that is, by the intentional actions of others.” But Steve, every one of us has valid cause to worry that we might at some time in the future be deprived of our lives. So far, the death rate among humans is 100%. The art of living consists largely in resisting the despair that continued contemplation of our deaths will induce. And there are two main ways to fight this despair.

One way is to philosophize. And philosophizing is anything but an exact science. One reason that different philosophers reach such a variety of conclusions about any given moral issue is that they begin with different assumptions. And while many given philosophical systems can be made internally consistent, each must begin from assumptions that can only be granted or denied, not argued about logically unless the participants share a deeper basis of assumptions on which their argument is based.

There are many moral philosophies that deny it is permissible to kill newborns. The fact that your particular line of reasoning concludes the opposite simply expresses the fact that you have chosen a different set of assumptions than other moral philosophers have. But having chosen those assumptions, you can go on about the business of life knowing that you have at least made an effort to be logically consistent. And until death puts an end to all philosophizing, your philosophy can provide you with a guide to moral action.

The other way humanity has found to combat the despair of the contemplation of death is through religion. You mention religion, or rather “religions,” toward the end of your essay, but after admitting that most world religions do not condone “neonaticide,” you say that because we in the U. S. live in a pluralistic society, religious scruples about killing newborns should not be imposed on those who do not subscribe to them. So your ultimate response to religious arguments against killing newborns is to claim political immunity from such proscriptions.

How did it come about that you live in a society which respects your right to dissent from the beliefs of a religious majority? The founders of the United States wisely saw that a coerced religion is really no religion at all. They valued religion too much to make it compulsory. They left citizens responsible only to God, or their own consciences, with regard to religious belief, and prohibited the governmental establishment of religion, as well as any law preventing the free exercise thereof. Why they did so is a matter of some historical complexity, but an important contributing factor was the then relatively new idea in Protestant Christianity that faith in God was a matter for individual inquiry and decision, rather than a government-imposed requirement.

By contrast, history shows that governments based on an explicitly atheistic philosophy have no compunction about defining humanity in almost arbitrary ways, and in terminating those whose right to life has been revoked by their failure to meet certain requirements for behavior, or descent, or income level, or almost anything else you care to name. In adopting a policy such as the one you urge, the U. S. government would be endorsing the idea that personhood depends on an aspect of intellectual capacity, not on the simple fact of being human. Of course, the Roe v. Wade decision arbitrarily deprived millions of unborn children of the right to life, but the fact that opposition to that decision and its consequences is as strong now as it was thirty years ago shows that many U. S. citizens disagree with the arbitrary removal of the right to life simply because the life in question is inconvenient or painful for others.

Steve, I value my memories of the many times that you spoke your mind regardless of the consequences, and made a positive difference in the way engineering ethics is discussed, debated, and practiced. It saddens me to see you apply your great abilities to a moral problem and come out on the wrong side. While I do not have much hope that my words will persuade you to change your position, I would like to think that my respectful opposition to it is in the same honorable tradition that you yourself have established.

Yours sincerely,

Karl Stephan

Sources: Stephen H. Unger, Professor Emeritus of computer science and electrical engineering at Columbia University and author of Controlling Technology (1982), one of the earliest engineering ethics textbooks, posts his “Ends and Means” essays at http://www1.cs.columbia.edu/~unger/myBlog/endsandmeansblog.html, where his latest essay “Unwanted Babies: A Painful Problem” can be found.

Monday, November 15, 2010

Wendell Berry and the Two Economies

Modern engineering as it is currently practiced is deeply embedded in the context of the global economy of modern industrial societies. Large corporations are the only organizations complex enough to coordinate the production of things as intricate as computers or airliners. So when someone such as writer and philosopher Wendell Berry criticizes the economic basis on which current engineering depends, his words are worth considering for their indirect implication that engineering, too, in some respects, is a house built on sand.

In an essay entitled “Two Economies,” Berry first recognizes the thing we usually mean when we say “economy”: a global system of exchange based on what is called fiat money—money that is a creature of governments, which can create as much as $600 billion out of thin air over a period of a few months, as the U. S. Federal Reserve recently announced plans to do. And that is one of Berry’s complaints about that economy: the fact that it is not based on anything beyond the say-so of certain powerful people and interests who attempt to control it to their advantage.

But beyond the thing that is usually meant by “the economy” lies an all-encompassing principle or entity that Berry chooses to call the Great Economy. It is, he says, “. . . the ultimate condition of our experience and of the practical questions rising from our experience” and is “both known and unknown, visible and invisible, comprehensible and mysterious.” The idea of the Great Economy makes no sense outside of religious considerations, but that need not detain us, since every great classical religion says something meaningful about the Great Economy, though not in those terms.

In contrast to the human-created economy which is to some extent manageable, the Great Economy cannot be managed. It can only be conformed to by individuals and groups who acknowledge their inability to be fundamentally in control of their existence. Only when we admit that can we go about the business of constructing a human economy that works according to the terms of the Great Economy.

How would the world’s economy change if it conformed more to the Great Economy? I can mention only a couple of Berry’s ideas in the limited space available here. One is to cease viewing the various goods of the Great Economy as resources to be exploited. Berry says of the modern industrial economy that the “invariable mode of its relation both to nature and to human culture is that of mining: withdrawal from a limited fund until that fund is exhausted.” According to Berry, the industrial economy acknowledges no limits and recognizes no ultimate goals: it “cannot prescribe the terms of its own success.”

Berry sees much that is fundamentally wrong with things that most of us take for granted and rarely think about. He is not surprised that proponents of free enterprise end up so often on the dockets of criminal courts, and that much of modern medicine has become an “exploitive industry, profitable in direct proportion to its hurry and its mechanical indifference.” The reason is that as long as one never looks beyond the limits of a human-created economy, one ignores the Great Economy at his peril. But such ignorance comes at a price, and billions of people around the world pay that price every day.

Berry is probably the leading living proponent of the philosophical and economic movement called agrarianism. More than just a simplistic back-to-the-farm philosophy, agrarianism sees humanity in a holistic way that views work, leisure, money, community, and government as integrated parts of the Great Economy. One of the first arguments most engineers might think of when confronting the ideas of agrarianism is that if everybody tried to live out its principles, our present way of life would be destroyed. Not everybody can live as a subsistence farmer, or even has the interest, ability, or resources for such a life.

But that argument is itself taken from the industrial-economy playbook, which instantly takes any proposal and tries to homogenize it, duplicate it, and apply it worldwide. Those very actions are counter to agrarian principles, which are primarily local, personal, and can take form only in the context of small communities where people know each other. And in fact, as Berry points out, there are thousands of Amish farmers and other members of certain religious communities (including monasteries) where a good bit of the agrarian ideal works in practice.

So what should an engineer take away from Berry’s picture of the Great Economy that surrounds our human economies as a family home encompasses children in a back-yard treehouse? Well, short of dropping out and joining an Amish community, religion and all, I think engineers could benefit in several ways from thinking about Berry’s ideas. No plans turn out quite the way we expect, for one thing. That sounds like a restatement of Murphy’s Law (“if anything can go wrong, it will”), but in fact it is an admission that the imponderable and unpredictable, especially if human beings are involved, can easily overwhelm the calculable and certain. And thinking about the people who ultimately use the things engineers work on, and the cultural and spiritual contexts of their lives, is something we could all do more of, to the benefit of both society and our own organizations.

Many of the dire predictions Berry has made over the years have either come to pass to some degree, or are so much a chronic condition of our times that we have ceased to notice them. But we can sometimes learn more from our critics than from our friends, and I would urge anyone who has never read anything by Wendell Berry to do so soon.

Sources: Berry’s essay “Two Economies” appears in a collected edition of his essays entitled The Art of the Commonplace (Shoemaker & Hoard, 2002), pp. 219-235.

Monday, November 08, 2010

Engineering Ethics and Natural Law

As anybody who has read this blog for a while knows, I do not view engineering ethics as a narrow, specialized field where only experts can render the right opinions. I believe anyone who has enough moral sense to graduate with an engineering degree has the ability to think ethically, and with a little help and advice can make good ethical judgments about a wide variety of professional concerns. Today I will explain why I think this is true.


If a person can think clearly enough to do engineering, he or she has what I will term a “deep knowledge” of right and wrong. This knowledge is not the same as what we conventionally call “conscience”: it is more like the fundamental principles on which everyone’s conscience is based. Some examples of this deep knowledge are things like:


Being fair is better than being unfair.


Betraying a friend is wrong.


Marital infidelity is wrong.


Evidence for this deep knowledge is to be found on every children’s playground and in the legends, literature, and law of every culture. It is simply an empirical fact that normal human beings have an inborn knowledge of right and wrong at a deep level.


This is not to say that everyone in every culture agrees on every detail of every ethical question. This deep knowledge combines with cultural norms, life experiences, training, and other factors to produce a conscience of which we are consciously aware. Some people manage to suppress their deep knowledge so that even their conscience does not bother them as they go about committing serial murders or turn themselves into suicide bombers. But rest assured the knowledge is there; it has simply been suppressed by other influences. The idea that this deep knowledge of right and wrong exists at some level in every human being is called “natural law.”


A person who has mastered the technical material of an engineering discipline has the intellectual capacity to understand and imagine the ethical consequences of engineering activity. Whether or not they apply their minds to this question is a matter of training and discipline. Up to the twentieth century, most people (including a good many who benefited from college educations) belonged to a religious tradition which encouraged acceptance of the principles of natural law, and legal codes were largely in conformance both with religious tradition and natural law as well. But with the advent of various totalitarian governments and a broad rejection of religion as a serious matter in higher education and elite classes, things changed.


Today you will find little support for the idea that everyone has a deep built-in knowledge of right and wrong which simply needs to be elucidated to become effective. Colleges and universities either avoid the subject altogether or teach ethics in a way that would never work for mathematics or physics. Imagine in your first physics class if the instructor got up and said something like, “There are many physics traditions: some people believe F = ma, while others believe F = m + a and still others believe F = m/a. We will not insist on any one of these, and simply want to tolerate everyone’s opinions on the subject while thinking how to apply these principles to practical situations.” It sounds absurd, and yet many instructors of professional ethics take what amounts to that position with regard to ethical principles. And if you go to experts who base their ethics on elaborately wrought philosophical structures, you can find someone who will justify anything from drug testing using people hired off the streets, to infanticide (the famous Princeton philosopher Peter Singer has said that killing newborns is not the same thing as killing a person).


As natural-law philosopher J. Budziszewski has said, our deep knowledge of right and wrong is still there, but factors such as the atrophy of tradition, the cult of the expert, and the disabling of shock and shame have made it harder for us to connect with that deep knowledge and act on it. Thus it can be that trying to do ethics in accordance with certain complex philosophical approaches can take you to a conclusion that makes logical sense, given your philosophical assumptions, and yet feels wrong. I am here to say that in such a case, you probably ought to go with your feelings.


But not always: “going with your feelings” is one more factor that has landed us in more trouble with regard to natural law. A moment’s thought will reveal how wrong it is to say that one’s feelings must always be followed as a guide to action. Yet for people who have no belief in the deep knowledge of right and wrong, and base their moral decisions on examples from popular culture where following your feelings is a bedrock principle, there may be nothing better to turn to. Feelings are real, and paying attention to your feelings is important, but unless you are some kind of saint, obeying your feelings is not going to lead to the right decision all the time (and even the saints admitted to having wrong feelings from time to time).


If I had room, I could explore the reasons for believing in this deep knowledge, which ultimately lead back to the idea of a Creator who designed it into us in conformance with the way the world is. But the nice thing about natural law is that even if a person doesn’t believe in God, the deep knowledge is there, and if you can help bring it to the surface, their conscience will guide them to the right decision regardless.


This is why I believe engineering ethics is not just a field for experts. Everyone can do it, but it requires thought as well as feelings, will as well as intelligence, and reliance on something that is ultimately not of our own making.


Sources: I relied on J. Budziszewski’s book What We Can’t Not Know: A Guide (Spence, 2003) for the basic ideas in this blog. It is highly recommended as a readable yet sophisticated and thorough treatment of applied natural law. I last mentioned natural law in my blog of Nov. 23, 2009 (“Ethics: Evolved or Given?”).

Monday, November 01, 2010

One Spammer Down, Thousands to Go

The people who invented what we now know as the Internet almost certainly did not intend for it to be used mostly for sending unwanted messages that cost the recipient a lot more than the sender, seldom get read, and serve almost no redeeming social purpose. But according to the Wikipedia article on e-mail spam, 78% of all e-mail messages sent over the Internet are the mass-produced, often illegal type of advertising known as spam. Last month, this incredible flood of junk slowed down by about a fifth after Russian authorities took actions against Igor A. Gusev, the head of Spamit.com. Gusev, who fled the country after his house was raided Sept. 27, ran Spamit.com as a kind of spam wholesaler, paying “retail” spammers to send junk e-mails. Once his website closed, many senders of spam saw no point in carrying on and shut down, at least temporarily. Experts cite this as the main cause of about a 20% decrease in the volume of spam worldwide. However, they expect that other entrepreneurs in this sordid activity will soon show up to take up the slack left by Gusev’s departure.


The Internet has taken its place alongside power grids, water-supply systems, and snail mail as one of the modern-day utilities we all rely on. But I was amazed to learn that nearly four-fifths of all e-mails sent are spam. If I consider what fraction of snail mail delivered to my door is in the same category, however, it’s not so surprising.


Every form of unsolicited advertising entails some effort, however minor, on the part of the intended recipient. Even ignoring billboards on the freeway takes a bit of mental effort, although it’s so minuscule as to be negligible. Throwing away physical pieces of paper that come in the postal mail is a more substantial time-and-effort sink, although one’s expenditure is limited to the reading needed to save the desirable or necessary things such as bills and toss the rest. But the main cost of snail-mail advertising is borne by the sender.


Not so for spam. As the Wikipedia article on e-mail spam points out, spam is equivalent to postage-due advertising, since the per-message cost to the spammer is an insignificant fraction of what it costs the recipient to deal with spam, either through blocking filters or the old-fashioned way of selecting and deleting it. Either way, the estimated cost to the recipient per spam message is about ten cents, which multiplied by the many billions of spams per year means that U. S. businesses alone spend on the order of $20 billion a year that they would not have to spend if spam weren’t such a problem. And this doesn’t even begin to address the other issues connected with spam, such as the illegal “botnets” set up by spammers to send most of the stuff, the phishing attacks that many spam messages contain, and the viruses and other malware that spam can infest your computer with.
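

For readers who like to see the arithmetic spelled out, here is a minimal back-of-the-envelope sketch in Python. The ten-cent figure and the message volume are assumed round numbers chosen only to be consistent with the rough estimates above, not measured data:

    # Back-of-the-envelope spam cost estimate (illustrative assumptions only)
    cost_per_message = 0.10   # assumed cost to the recipient per spam, in dollars
    spam_per_year = 200e9     # assumed spam messages reaching U.S. businesses each year

    annual_cost = cost_per_message * spam_per_year
    print(f"Rough annual cost to U.S. businesses: ${annual_cost / 1e9:.0f} billion")
    # With these assumptions, the total works out to about $20 billion a year.

The point is not the exact number, which nobody knows precisely, but the way a per-message cost too small to notice adds up to a staggering sum once it is multiplied by the volume of spam actually sent.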


So why has this pernicious situation been allowed to develop? One big problem is the prevailing U. S. law governing spam, the CAN-SPAM Act of 2003. This law sets certain standards for spam in order for it to be legal, such as truth in the subject line and no forged addresses. As long as spam meets these fairly low standards, it is not illegal, at least according to Federal law, which pre-empted most state laws about spam. CAN-SPAM is a lot more lenient than many European laws against spam, and since the U. S. is the place where much spam originates (or at least is sent from by botnets, which are themselves illegal), our relatively lax laws exacerbate the problem for the rest of the world. Since spammers can easily live in one country, conduct their business transactions in a second country, and run their technical operations in several other places around the world, combating the problem in an organized way requires international cooperation. And unless things get really serious, such as the prospect of some kind of organized attack on a nation’s Internet infrastructure, ordinary spam does not attract much attention from international law-enforcement organizations, whose resources are limited.


All the same, it is too bad that the present Internet structure makes it so easy for spammers to get away with their nefarious activities. Engineers like efficiency, and to see nearly four-fifths of a resource go to something that is usually illegal, almost never succeeds in the sense of generating responses, and does nothing but annoy most recipients and cost them money to get rid of, is just a shame. It may be too late to do much about it short of drastic measures such as redesigning the Internet to make unsolicited e-mails harder to send. There are occasional discussions about redoing the fundamental technical structure of the Internet, right down to the protocols, but this would be like switching the world’s electric utilities from AC to DC, a huge production that would not be easy to carry out. Short of that, I suppose we will all just have to regard spam as one of those necessary evils, like noise in communications channels. Noise is due to fundamental physical laws, while spam derives ultimately from choices people make. The fact that some people will make wrong or evil choices seems to be as reliable as the law of gravity, though. It was G. K. Chesterton who said that the Christian doctrine of original sin—the idea that everyone is born with a tendency to sin—is the only one for which there is abundant empirical evidence. And spam seems to bear that out.


Sources: The action taken against Gusev and its consequences are described in an Oct. 28, 2010 article at http://www.techspot.com/news/40897-spam-drops-by-20-after-russia-takes-down-one-man.html.

Monday, October 25, 2010

Wave Shield: Prize, Placebo, or Fraud?

The other day, my wife brought home from the health-food store a brochure advertising something called a “Wave Shield.” The headline on it reads, “Finally, cellular protection.” They’re not talking about body cells, but cell phones and cordless phones. Even before wireless consumer products hit the market, scientists were studying the question of whether the low-level radio-frequency electromagnetic emissions from cell phones and similar wireless devices could cause bodily harm. While this is no place to review the vast literature on the subject, I have read summaries and am qualified to express an opinion, because electromagnetics is my professional specialty. The best I can tell from the decades of research is that, if there is any deleterious effect of cell-phone use in terms of causing brain cancer or other serious health problems, it is a very small effect and probably insignificant compared to most other elective hazards of daily life, such as using cell phones while driving.


However, there have been enough scary news reports over the years to raise at least a suspicion in the public mind that something bad may result from using cell phones and other wireless gizmos. The Wave Shield company of Boca Raton, Florida has decided to cash in on that suspicion. Here is how.


Their advertisements are carefully designed to stay within the letter of the scientific facts. Every statement of possible damage due to wireless-device use is couched in terms of “may” or “could,” not “will.” But their pictures of three-year-olds using cell phones and MRI cross-sections of before-and-after rat brains subjected to RF (the intensity is not stated, but it was probably far in excess of anything a cell phone would radiate) are all designed to inspire fear in the reader.


Once that happens, here comes the solution: a little metal ring with a tiny piece of window-screen-like metal mesh in it. The idea is that you fit this thing around the earpiece of your phone, and it reduces the radiation in the immediate vicinity of the mesh, say, within half an inch or so. It does essentially nothing to keep most of the radiation from the phone away from your brain. To their credit, the Wave Shield people admit as much: “The vast majority of electromagnetic radiation emitted by cellular and cordless phones comes from the antenna and parts of the phone other than the ear piece. Wave shield products have no effect on this electromagnetic energy.” But they are counting on the public’s “innumeracy” (the inability to make quantitative judgments beyond comparing prices), fearfulness, and ignorance of electromagnetic theory to yield them a customer base willing to pay twenty or thirty bucks for a little ring with a screen in it.
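

To put some numbers on why that admission matters, here is a minimal sketch; the fractions are purely illustrative assumptions of mine, not the company’s figures or measured values:

    # Illustrative only: these fractions are assumptions, not measurements
    earpiece_fraction = 0.05   # assume 5% of the phone's emitted power leaks near the earpiece
    shield_blocking = 1.0      # assume the mesh blocks that small leakage completely

    exposure_reduction = earpiece_fraction * shield_blocking
    print(f"Total exposure reduced by at most {exposure_reduction:.0%}")
    # The antenna's emission is untouched, so overall exposure barely changes.

Even granting the ring perfect performance on the small slice of energy it covers, the user’s total exposure drops by only a few percent under these assumptions, which is why the disclaimer quoted above gives the game away.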


The company’s website has testimonials, which I have no reason to believe are not genuine. But here we encounter the placebo effect: the fact that taking even sugar pills that cannot possibly have any objective chemical action on a given malady will nevertheless make a certain percentage of ill people feel better. Nobody in the testimonials says they have been cured of brain cancer, but they cite reduction of headaches and a perception that the phone is cooler as benefits of the Wave Shield. The “cooling” probably results from the space opened up between the head and the phone by the thickness of the ring. You could get the same effect from a piece of cardboard taped to the phone, but it wouldn’t look as good. And the perceived reduction in headaches could be a classic case of the placebo effect in operation.


This campaign is a specific example of a common phenomenon in the advertising of technical products which, if not strictly unethical, at least takes unfair advantage of potential customers. It is what I call the “appeal to the lizard brain.” Apparently, psychologists working with advertisers have found that while most people are capable of following logical arguments and making buying decisions based on conscious rational thought, we all have a more primitive part of the brain which we share with lower animals such as lizards. This lizard brain knows nothing of logic, and instead operates on emotionally based criteria such as fear and the striving to satisfy physical appetites for comfort, food, and sex. The popularity of large SUVs, for example, derives largely from the fact that the lizard brain thinks driving around in a big, intimidating car will keep you safer than driving a small, pipsqueak car. This is despite the fact that the better maneuverability of small cars often makes them safer.


Another way advertisers appeal to the lizard brain is by saying the cautious legal things in words, but showing emotionally charged pictures that contradict the words. Wave Shield shows they have learned this lesson with their photos of innocent children using cell phones and ghastly deteriorated brains designed to make you ask, “Is that happening to my brain?” Makers of drugs that lower cholesterol do the same thing. If you watch an ad for one of those drugs, while you hear the announcer reading the long list of side effects and saying the drug should be supplemented by changes in diet and exercise, the screen shows photos of luscious, fat-heavy pot roasts and bowls of ice cream—all things you are NOT supposed to eat if you’re lowering your cholesterol. The message, of course, is that if you take our drug, you can eat anything you like and not worry about your cholesterol.


Engineers engaged in such enterprises may take the attitude that “hey, all they pay me to do is to make sure the product meets the technical specifications. What marketing does with it is not my problem.” Well, if advertising strays over the line into fraud, it can be your problem. And even if the advertising is technically within the letter of the law—as with Wave Shield’s disclaimer that their product basically doesn’t do squat, but couched in language that most consumers won’t understand or pay attention to—if the overall effect is to sell somebody a thing that doesn’t really do what the lizard brain thinks it will, the spirit of the laws against fraud has been violated.


All the same, there is nothing new about this sort of thing. The best guard against it is an educated public and a profession of engineers who will not tolerate misleading advertising of products they contribute to.


Sources: The Wave Shield company’s website, for those who just can’t stand not to know what it’s like, is www.waveshield.com. An article by Malcolm Gladwell explaining how the lizard-brain approach to marketing SUVs works appeared in The New Yorker on Jan. 12, 2004, and is viewable in part at the website http://www.gladwell.com/2004/2004_01_12_a_suv.html.