Monday, December 31, 2012
On August 28, 2009, an off-duty California highway patrolman named Mark Saylor was driving his Lexus (made by Toyota) near Santee, California with three members of his family. Suddenly the car accelerated to speeds of up to 100 MPH, and one of the occupants called 911 to report that they were in trouble and the car “had no brakes.” Seconds later, the Lexus collided with another vehicle, rolled down an embankment, and caught fire, killing everyone inside. This was the first highly publicized incident in what came to be known as Toyota’s gas-pedal problem.
On Mar. 6, 2010, I blogged on what had transpired since that and other similar sudden-acceleration incidents had come to light with regard to a variety of Toyota models. By then, Toyota had already been cited by the U. S. National Highway Traffic Safety Administration (NHTSA) for a letter it sent out to owners about the problem which the NHTSA said was “misleading.” Toyota later paid a fine to the NHTSA for not notifying the agency promptly enough when reports of unintended acceleration began to reach the automaker.
There were at least two main suspected causes of these incidents. One, which Toyota admitted and issued massive recall notices to fix, involved a misfit between the gas pedal and certain floor mats that could catch in the pedal mechanism, making it difficult or impossible to slow down. The second suspected cause was glitches in the electronic control software that links the gas pedal to the engine, appearing either randomly or in response to unpredictable RF interference. Toyota has insisted all along that there is no problem with the software.
But now, after a large class-action lawsuit was filed against the company in California, Toyota has offered a $1.1-billion settlement, which has yet to be approved by the judge in the case. If the judge approves it, the worst may be over for the car company.
The details of how the settlement breaks down are interesting, to say the least. Apparently to widen the class of harmed individuals, lawyers in the suit are suing on behalf of anyone who sold or traded Toyotas between September of 2009 and the end of 2010, presumably because the resale value of all Toyotas was depressed by the ongoing bad news. Under the proposed settlement, that particular class is getting $250 million as compensation. Toyota has developed a “brake-override” system that will evidently guarantee your ability to stop the car even if you put a brick on the accelerator (which is not recommended in any case). Some Toyotas can’t accommodate this new system as a retrofit, so owners of those vehicles get up to $125 apiece instead; the rest get the brake-override system installed free of charge. About $400 million is going for extended warranties on several components that came under suspicion during the investigation: tail-light switches, onboard computers, and so on. And the lawyers, without whom this whole settlement would not have been possible, get $227 million. Such is justice in today’s world.
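As a rough sanity check, the itemized pieces of the settlement account for most of the reported $1.1-billion total. The sketch below just does the arithmetic; the figures are the approximate ones reported in press coverage, and the cost of the brake-override retrofits and per-owner cash payments is not itemized separately here.

```python
# Approximate settlement components as reported in press coverage.
components = {
    "resale-value class compensation": 250e6,
    "extended warranties": 400e6,
    "attorneys' fees": 227e6,
}
itemized = sum(components.values())
total = 1.1e9  # reported settlement total

print(f"Itemized components: ${itemized / 1e6:.0f} million")
print(f"Remainder (retrofits, cash payments, etc.): ${(total - itemized) / 1e6:.0f} million")
```

The roughly $223 million left over is consistent with the retrofit program and the individual cash payments making up the balance.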
Toyota does not admit to any wrongdoing in the settlement, although it is pretty clear that they have decided things are amiss enough to spend a billion dollars fixing them. To put this amount in perspective, Toyota’s total revenues for the year ending March of 2012 were $226 billion. So a billion dollars is not a huge chunk of their revenues, though it will certainly cut into their profits, which run a few billion dollars a year when they make money at all. Nevertheless, the financial world has generally looked kindly on this news, because it clears up a good deal of the uncertainty surrounding the cloud of lawsuits arising from the acceleration problem.
The unintended-acceleration problem is a good example of how non-technical factors come into play once a problem has entered the public mind. It is possible (though not likely) that a similar crisis could strike any engineering-intensive business with a large customer base of non-technical consumers. Rumors do spread, even if they are not founded in fact. One instinctive response that many engineers might take toward such a situation—the spread of unfounded rumors about a technical problem—would be simply to state the technical reasons and results of tests that show the rumor to be false.
If everyone listening were engineers, this action alone might clear up the issue. But most people, thank God, are not engineers. And hearing a bunch of incomprehensible techno-speak will not allay their fears about an unlikely but graphically grisly possibility of something as awful as dying in a car crash caused by a runaway accelerator you are helpless to control. After a good bit of fumbling early on, Toyota’s public relations and legal departments got their acts together and came up with a settlement that seems to go the extra mile to alleviate not only the technical problems Toyota itself discovered—the gas-pedal-floormat interference—but a range of other issues which may or may not be based in reality: extended warranties for parts that some people think may be defective, and a new technical fix that will prevent accidents from unintended acceleration even if the driver does something stupid like stomping on the gas and the brake at the same time.
And drivers do stupid things sometimes, no doubt about it. An investigation sponsored by the U. S. government found that most of the cases of unintended acceleration were due to driver error. This could mean anything from a loose bottle of shampoo rolling under the gas pedal at the wrong time to a person freezing stiff-legged in terror as the car roars out of control. But if the new brake-override system really does its job, Toyotas will have an edge over most other cars that don’t have it. And a brake-override system may become standard on all new cars in the future, which would be a generally good thing, I suppose.
But it’s too bad that the process took so long, cost so much money, and involved so many lawyers. However, that’s the way things get done in today’s systems of justice, where problems are always viewed with one eye on the bottom line. Let’s hope that automotive engineers of the future, both at Toyota and elsewhere, will pay more attention to customer complaints and be more proactive when similar safety problems arise.
Sources: I referred to the article “Toyota in $1.1 Billion Gas-Pedal Settlement” in the Dec. 27, 2012 online edition of the Wall Street Journal.
Monday, December 24, 2012
For the most part, universities have resigned from the business of teaching morals, but with one important exception: cheating on homework and exams. While cheating is apparently a fairly widespread practice—recent surveys of college students indicate that between 65 and 80 percent of students admit to cheating at least once—that doesn’t make it right. As counterfeiting is to the economy, cheating is to grades, which professors sometimes refer to as the “coin of the realm” in academia. I won’t waste a lot of time here explaining why cheating is wrong. It combines lying, sometimes stealing (if you turn in someone else’s work), and indulging in rule-flouting behavior that can form a lifelong habit of cheating in other areas besides academics.
While there have probably been cheaters ever since there have been students, the Internet has provided more ammunition both to the cheaters and to those who try to catch them. The other day I stumbled onto a website whose URL I will not give here, lest I encourage visits to it. But believe me, it does exist. It is a commercial site at which you can submit an essay question or homework problem, and for a fee, you get a finished essay or solution. Its homepage has some smarmy lingo saying that everybody who’s been a student has thought at some time or another of getting “help” with homework, and we’re just making it easier for you.
They have a “legal” section which is the most hypocritical boilerplate of its kind I have ever seen. In one part of the text, the site warns that anything provided to the user is simply “for reference” and should be cited like any other reference work—as though a student would copy the essay in question and then write, “Oh, by the way, I got this entire essay from a pay-per-assignment website.” But in another section, the company absolves itself of any responsibility for adverse consequences should you turn the stuff in as your own work, which is clearly the whole purpose of the site.
If a student availed himself of this type of service, the cheating would be hard to detect, provided the product really was original. (In the same legal boilerplate, the cheating site guaranteed that its product would be 90% free of plagiarism, which tells me they allow an internal plagiarism rate of 10%.) But the Internet, while tempting students to plagiarize sources by copying and pasting wholesale without attribution, also makes it easier to discover such cheating. I have run across two cases of plagiarism that I was able to detect, one with the help of the Internet.
In one case, two students in a class of mine were turning in letter-for-letter identical homework solutions, which could not have been just coincidental. The grading assistant pointed it out to me, and I invited each student into my office individually and showed them the evidence. Each student said they had never copied from the other one, and this was technically true. But a few hours later, one of them came by my office and admitted he had found a website, posted by the textbook publisher, of solutions to all the homework problems, and both he and the other student were just copying that site, rather than doing the homework by themselves.
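Catching letter-for-letter duplication like that lends itself to automation. Here is a minimal sketch using Python’s standard-library difflib; the 0.9 flagging threshold is an illustrative assumption, not a calibrated value, and the sample submissions are invented for the example.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 ratio of how much two texts overlap."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical homework submissions.
sub1 = "I = V / R, so the current is 2.0 A for V = 10 V and R = 5 ohms."
sub2 = "I = V / R, so the current is 2.0 A for V = 10 V and R = 5 ohms."
sub3 = "Using Ohm's law, I = V/R = 10/5 = 2 amperes."

assert similarity(sub1, sub2) == 1.0   # letter-for-letter identical: flag it
assert similarity(sub1, sub3) < 0.9    # independent work scores well below the threshold
```

A grader could run every pair of submissions through such a check; what it cannot do, as the anecdote shows, is tell whether two identical papers were copied from each other or from a common third source.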
I suppose I was partly responsible for that incident, because I was unaware that the publisher had done such a thing. If I’d known that a complete solution set to all the homeworks was out there on the web, I would have thought twice about using that textbook. But the second case of cheating was both more surprising and more blatant.
Because I have published a few papers on the subject of ball lightning, I have started to receive requests from journals to review similar papers in the peer-review process that most reputable academic journals use. In reviewing one such paper, I came across a passage that seemed both better-written than the average level of the rest of the paper, and rather hard to understand. I looked up the Wikipedia article on the subject of the passage, and to my surprise, I found that the author of the paper had copied a whole paragraph almost word-for-word out of the Wikipedia article, and had not cited Wikipedia as a source.
The paper had other problems too, but when I pointed out this blatant plagiarism to the journal editors, they summarily rejected the paper. And they should have, too. I have come to anticipate a certain amount of that kind of thing from undergraduates who have not learned what plagiarism is, and may have gotten away with it repeatedly at the high-school level. But I was shocked to find that a scientist would be so careless, although plagiarism and even fabrication of data are not unknown in journal papers.
What is the solution for cheating at the undergraduate level? While I suspect we will never reduce the level of cheating to zero, the same article (from an American Psychological Association online journal cited below) where I found the statistics on cheating also cites a study finding that it helps to create a peer-level atmosphere that discourages academic dishonesty.
I can personally attest to the effectiveness of this approach. I attended a small private undergraduate school which had a stringent, and largely effective, honor code. Most of my exams were take-home exams that allowed a specified time for completion; I stayed within the time limits and, to the best of my knowledge, never cheated. This is not to say “oh, what a good boy am I,” but to point out that if a student knows that cheating is rare and frowned upon both by other students and faculty, it is less likely that whatever pressures are present will push people over the edge into cheating. And psychologist David Rettinger, interviewed in the APA article, says that “the key is to create this community feeling of disgust at the cheating behavior.” Sometimes this comes from student-led groups such as Academic Integrity Matters! at the University of California at San Diego, which sponsored a petition drive asking faculty members to be more explicit about what cheating is and what the penalties are.
As with many other things, the Internet is both a blessing and a curse when it comes to academic cheating. I think I will be a little more clear to my students in the future about what I consider cheating and plagiarism, and hope that they will take my words to heart.
Sources: The American Psychological Association online publication “Monitor on Psychology” is the source of the statistics and quotation used in this article. The article “Beat the Cheat” by Amy Novotney appeared in its June 2011 edition at http://www.apa.org/monitor/2011/06/cheat.aspx.
Monday, December 17, 2012
Pornography is a big business, something that millions of people around the world indulge in, and while viewing it can get you into trouble if you hold a prominent corporate or political office or if you get involved in the child variety, it’s a private affair and not a big deal most of the time.
Pornography is exploitation that twists and defaces a type of relationship that is the earthly model of how Christ relates to his Church, and it can spiritually damage and enslave anyone who gets involved in it, crippling one's ability to relate to the opposite sex in the way God intended.
Which view do you agree with? Probably most readers will incline toward the first view, which says that in most cases, viewing porn is a private decision that should be left up to the individual, and the legal system should get involved only in situations for which there is near-universal agreement that innocents are being harmed, such as the production or viewing of child pornography. But the second view (which happens to be mine, more or less) is rooted in a Christian model of humanity which sees human sexuality as a gift from God, which men and women have a responsibility to use according to divine instructions. In the second view, pornography exploits those who are involved in producing it as well as those consuming it, and debauches (a nice old-fashioned word) the users, accustoming their sexual responses to images which cannot be approached by the reality of any actual woman. As such, pornography—especially the online variety, which is by far the most common nowadays—is worth opposing, restricting, and fighting with the legal system, even at the cost of one’s own well-being.
Over the past year or two, my views on the relationship between God’s law and human laws have changed. When religious conservatives who are in the numerical minority in a democratic country manage to gain access to levers of power, they sometimes indulge a fantasy which goes something like this: “Pass a law against a popular but immoral thing, and people will quit doing it.” This happened in 1919 when the amendment to the U. S. Constitution prohibiting the manufacture or sale of alcoholic beverages, which came to be known as Prohibition, was ratified by enough states to become law. Prohibition was a long-term goal of the Anti-Saloon League, an organization supported by many Protestant churches but with its power base mainly in rural areas. What did not happen was that alcohol abuse vanished overnight. Instead, the consumption of alcohol went underground, leading to smuggling, bootlegger gang warfare, and a lowering of the respect for law, all of which finally led to the repeal of Prohibition in 1933. The bottom line of this lesson is that law works better as a mirror of a society’s mores than it does as a bridle that tries to jerk the society in a direction it generally does not wish to go. That is, laws against a so-called “private” sin such as pornography should be enacted only when a substantial number of citizens in a country think it should be illegal. So, while I am personally unhappy that online pornography is as popular and successful as it is, my view is that passing lots of laws against it, at least in the U. S., would probably be a waste of time. But not in South Korea.
According to a recent Associated Press article, a good many South Koreans not only dislike online pornography, they are trying to do something about it. Making anything illegal on the Internet is a challenge because of the intrinsically global nature of the medium. But that hasn’t stopped South Korean law enforcement officials from arresting about 6400 people in only six months for producing, selling, or posting pornography online.
Almost a third of South Koreans are Christians (counting both Protestants and Catholics), which makes it the most Christian nation in East Asia by far. And Christianity in South Korea tends to be taken seriously by its adherents, who now send more missionaries overseas than many Western countries do, including those which evangelized their nation in the first place. Many of these Christians make up a cadre of about 800 volunteer Internet “Nuri Cops” who regularly spend time patrolling the Internet for South Korean porn, turning in the results of their searches to police for further investigation and prosecution.
About now, you may be wondering what kind of person would devote their spare time to viewing pornography with the sole purpose of wiping it out. To some, it may sound suspiciously like a member of the Anti-Saloon League who insists on tasting all the wine and beer before pouring the rest into the gutter. I would imagine it takes a particular type of person to do this work without being harmed by it, and perhaps no one is totally immune. But you could compare this type of work to that of the religious orders during the Black Plague of the 14th century in Europe, which devoted themselves to the care of the ill, although many of their number ended up catching the disease and dying of it themselves.
For some readers, this comparison will seem completely wacky. What possible parallel can there be between caring for the innocent victims of a physical disease like the bubonic plague, and snooping around on the Internet for websites that seem to provide harmless (or at least, not very harmful) entertainment for people in the privacy of their homes?
It boils down to whether one believes in the soul as well as the body. If the body can die, so can the soul. Enslavement to sin—any sort of sin—is a road that leads the soul to death, and one way to help souls escape death is to make it harder to find opportunities to sin. That is just what the Nuri Cops are trying to do.
While I would like to see something like that take place in the U. S., we would first have to have a cultural shift of seismic proportions: one that would involve a resurgence of authentic belief in Christianity at the highest as well as the lowest levels of society, in the cities, editorial offices, studios, and corporate headquarters as well as the farms and private homes of America. In the meantime, all I can do is congratulate the South Koreans for acting on their beliefs, and hope that maybe they will return the favor that Western missionaries did for them by evangelizing us some day.
Sources: The Austin American-Statesman carried the article “South Korea’s cyberporn vigilantes” on pp. F3 and F5 of its print edition of Dec. 16, 2012. I referred to the Wikipedia articles on “Religion in South Korea” and “Prohibition.”
Monday, December 10, 2012
The U. S. National Aeronautics and Space Administration (NASA) has just been judged by a blue-ribbon panel appointed by the National Research Council (NRC) at the instigation of Congress. That branch of government wanted an independent assessment of NASA’s strategic direction and goals in light of continuing fiscal constraints and national priorities. The resulting 80-page report is the best summary I have seen of NASA’s past successes, present ills, and possibilities for improving itself in the future.
Besides the agency’s spectacular successes, ranging from the 1969 lunar landing on down, NASA has also been the organization behind some of the most famous tragedies in the engineering ethics literature. The losses of the space shuttles Challenger in 1986 and Columbia in 2003 were both preventable disasters that revealed serious problems with NASA’s management and safety structures. More strategically, NASA has been perceived by many as a set of solutions looking for problems, and the NRC report confirms this picture.
It’s a cliché to say that if you don’t know where you’re going, it will be hard to tell when you get there or how long it will take. But that is the picture that emerges from the investigation and analysis performed by members of the NRC panel, who visited ten different sites in the widespread NASA organization and took most of a year to compile their results.
The good news is that there are patches of well-organized high-achieving activity within the organization. The unmanned space exploration effort, characterized by projects such as the Curiosity Mars rover, has had notable successes and largely stays within budget and on schedule. It is significant that NASA carries out a periodic ten-year “decadal survey” of the science communities interested in these projects, and a strategic plan for them is thereby updated with extensive international input.
But with regard to manned spaceflight, the picture is, if not dismal, at least discouraging. For one thing, the target keeps moving around. I happened to be in Washington, D. C. the day President George Herbert Walker Bush called for a manned flight to Mars, way back in 1989. But President Obama has instead brought up the idea of a trip to an asteroid, without saying which one. The NRC report notes a lack of widespread enthusiasm within NASA for the asteroid journey, and in the meantime, if the U. S. wants to put a man in space for any reason right now, we have to buy tickets from the Russians.
In some ways, NASA is the victim of its own past successes. During the Apollo buildup in the 1960s, expensive new facilities were built purposely in many different states to solidify Congressional support for the space effort. NASA is now saddled with billions of dollars’ worth of real estate occupied by aging specialized test facilities, many of which carry large backlogs of deferred maintenance. Turning these facilities into commercial operations is a nice idea, and in a few places this has worked, but frankly there are not many commercial users in need of a test stand for a Saturn-V rocket engine, for example. About a third of NASA’s employees are government civil servants, not contractors, and there are special complications in shifting civil-servant staffing to meet changing needs.
The NRC report doesn’t simply list NASA’s ills; it contains a list of recommendations as well. Many of the difficulties NASA has encountered result from trying too many ambitious things with insufficient funds. The NRC realistically admits that the chances of increasing NASA’s total budget are small, so they don’t see an overall increase in funding as a realistic solution.
They list three other options as more realistic possibilities. One is strictly downsizing: sell off underutilized facilities and lay off or retire surplus staff. This option might work, but as someone who has lived through an organizational downsizing at a university, I can attest that it creates a poisonous work environment, and there is a possibility the treatment might succeed only in killing the patient. A second option, which is compatible with downsizing, is to reduce the size of NASA’s program portfolio: in other words, try doing fewer things well rather than many things not so well. To me, this makes the most sense, and is consistent with my blog of June 20, 2011, which examined a proposal for reorganizing NASA around the model of the U. S. Coast Guard.
The most interesting option proposed by the NRC is to greatly increase national and international cooperative efforts with other U. S. government agencies, private industry, and foreign entities. Currently, NASA is already moving in that direction with regard to future manned flight hardware, saying that it will assume more of a supervisory role to contractors, who will have greater freedom in developing spacecraft to go to wherever NASA finally decides to go. But as other nations continue to develop space capabilities that in some ways outstrip those of the U. S., cooperation rather than competition would seem pretty sensible in many cases.
NASA was born in the midst of the Cold War between the U. S. and the USSR, and without that war that wasn’t a war, it never would have received the massive support for the race to the moon, which it won with almost no help from anybody outside the U. S. Unfortunately, that “not-invented-here” attitude seems to have lingered on in the institution long after it outlived its usefulness, at least with regard to major manned-flight programs. The result has been the end of the Shuttle program without a viable alternative to take its place.
The NRC report ends with an optimistic call to the executive branch and Congress to do something that will focus NASA on a meaningful strategic plan. (The current NASA planning document is a fuzzy kind of thing that amounts to mom-and-apple-pie for space.) Given the ongoing near-chaos in Washington, I am not hopeful that the NRC will get what it calls for, and what the rest of the nation deserves from an agency which still has great talent and capabilities. But if we don’t get action from Washington that puts NASA back on track, at least we have heard clearly from the NRC about exactly what the problems are.
Sources: The NRC report (a draft version at this writing) can be downloaded in its entirety from http://www.nap.edu/catalog.php?record_id=18248.
Monday, December 03, 2012
If you have driven a considerable distance in West Texas (and it’s hard not to drive a considerable distance when things are as far apart as they are out there), you have seen the slightly Martian-looking sight of a forest of identical white towers, each with a triplet of whirling blades, covering a good part of the whole visible horizon. Wind energy from huge wind farms has been one of the big success stories in renewable energy by some measures, and Texas leads the nation in the amount of installed capacity per state (over 10,000 megawatts). And according to a recent article in the Austin American-Statesman, about 70 firms in Texas supply products or services to the wind-generation industry. But all this may hit a serious roadblock January 1 if the federal tax credit that has encouraged commercial wind-powered generation for two decades comes to an end, along with a lot of other tax cuts and incentives. This is one effect of the so-called “fiscal cliff” that will automatically take effect if the U. S. Congress and the President don’t do something to stop it. The prospective end of the wind tax credit has important implications for what some philosophers call engineering “macro-ethics”: the engineering ethics of public policy and related matters.
The tax credit is substantial: anyone selling wind energy commercially can qualify for a 2.2 cents-per-kilowatt-hour tax credit from the government for a period of ten years. This has led some wind-power producers to give away energy for free on occasion, just to get the tax credit. And note that a credit is better than a deduction: a deduction means you pay less tax than you would have otherwise, but a credit means you get a check straight from the Treasury, even if you owed no taxes to begin with. No wonder parts of West Texas look like the Jolly White Giant has scattered around three-petalled dandelion seeds.
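To get a feel for the money involved, here is a back-of-the-envelope calculation for a hypothetical 100-megawatt wind farm. The 35% capacity factor is an illustrative assumption; actual wind-farm output varies widely with site and season.

```python
# Back-of-the-envelope value of the 2.2-cent/kWh production tax credit
# for a hypothetical 100 MW wind farm.
capacity_kw = 100_000       # 100 MW nameplate capacity
capacity_factor = 0.35      # assumed fraction of nameplate actually produced
hours_per_year = 8760
credit_per_kwh = 0.022      # dollars

annual_kwh = capacity_kw * hours_per_year * capacity_factor
annual_credit = annual_kwh * credit_per_kwh

print(f"Annual energy: {annual_kwh / 1e6:.0f} million kWh")
print(f"Annual credit: ${annual_credit / 1e6:.1f} million")
print(f"Ten-year credit: ${annual_credit * 10 / 1e6:.0f} million")
```

On those assumptions, the credit is worth roughly $6.7 million a year, or about $67 million over the guaranteed ten years, for a single 100 MW farm—easily enough to change an investment decision.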
The rationale behind the tax credit, enacted in the last days of the administration of the elder George Bush in 1992, was that a strictly free-market approach to wind energy might never get off the ground, because the vagaries of fossil-fuel prices would discourage private investors from putting their money into it. Nobody would want to build a lot of wind generators when fuel prices were high, only to see their investment turn to nothing when fuel prices fell and wind became uncompetitive. So the tax credit gave investors a guaranteed return for ten years, which is a reasonable payback time for a major investment such as a wind farm.
Viewed just from the standpoint of installed capacity, the tax credit has been a major success. On one (presumably windy) day in October of this year, wind accounted for over a fourth of all the electric energy produced in the ERCOT network (the Electric Reliability Council of Texas, which is the name of Texas’ largely independent transmission network). The growth of wind-related manufacturing and service firms has been a bright spot in the nationwide economy, and Texas is not the only state to benefit from the growth of wind farms.
That’s the good news. The bad news is that already, the prospect that the tax credit might end has hurt bookings of new business at wind-related firms and caused concern that new construction of wind farms might come to a screeching halt. And sure enough, another energy-related technology—“fracking,” which makes abundant new sources of natural gas available—has caused the price of natural gas to plummet. This means that from a subsidy-free economic viewpoint, anyone wanting to build new generation capacity would be crazy to build wind generators when a natural-gas-fired plant would be cheaper and much more reliable (no wind, no power). In fact, the free market for energy in Texas and many other states is providing little if any incentive for anyone to build new power plants, despite the ongoing need and the fact that brownouts on hot summer days have become uncomfortably common.
The parties involved in this matter are roughly as follows. There are people who build and install and own and run wind-generation facilities; there are consumers of electricity (basically everybody) who have various preferences about both price and the nature of how electricity is made; there are government entities, mainly the federal government and the state governments; and there are investors whose money can come into the game as long as they see they’ll get a good return on their investment.
If the tax credits go away, most investors will walk away from future wind farms, at least under the present circumstances of low natural-gas prices. We will be stuck with what wind farms we’ve got, though if running the farms becomes unprofitable their owners will let them stand idle at best, and will tear them down if things get too bad. As long as fossil fuel prices stay low, electricity consumers won’t have to pay a lot more, but they may well run into increasingly serious brownouts and blackouts if more generation capacity isn’t built, or if serious conservation efforts are not made. And conservation isn’t free: the largest users of electricity have to justify it on a dollars-and-cents basis, not just because it feels good.
Back in the early days of networked power in the 1920s, the free market reigned because no legislators had given much thought to the need to regulate electric utilities yet. After notable abuses such as monopolistic practices by single-owner utilities (among whom was numbered my great-uncle L. L. Stephenson, who was an ice-plant and electric-power mogul in San Antonio until he died in 1929), first individual cities, and finally the State of Texas, decided that electricity was too necessary a thing to be left entirely to the whims of private firms with no regulation. So in the next couple of decades, the industry came under the supervision of state public utility commissions, and a kind of deal was reached. The state commissions had the authority to set electric rates, but agreed (“colluded with” would be too strong a term) to allow the utility companies a fixed and reasonable rate of profit. The best thing about being regulated from the viewpoint of the utilities was the fact that their fiscal environment was largely predictable. This meant planning and investment, which for electric utilities extends decades into the future, could be made with some reassurance that the plans would work out and investments would not be wrecked by unexpected changes in allowable rates and so on.
A number of things conspired to overthrow this regulatory regime. Both the oil crisis of the 1970s, which introduced unpredictability into the fuel-cost equation, and a spirit of deregulation that extended from the airline industry to the telecommunications business led to the experiment of a free market in electric energy, which has been the case in Texas for many years now. The prospective end of the tax credit for wind generation would be yet another step towards a totally free market in this regard. While I think it is a good thing to generate some power from wind, we may soon be seeing the harm that comes from relying too much on legislation that produces artificial incentives for certain kinds of technologies. But there is also a harm done when anyone, including a government, breaks a promise such as the promise of ten years of tax credits. Let’s hope governments at all levels move toward providing a somewhat more predictable environment in which to do business, including the business of making electricity from wind.
Sources: The article “End of Wind?” appeared in the Dec. 2, 2012 print edition of the Austin American-Statesman, p. A1 and A10. I also referred to the Wikipedia article “Wind power in the United States” for statistics on Texas wind generation.
Monday, November 26, 2012
When I was teaching an engineering ethics module for a few semesters, one of the first things I asked the students to do was to spend five minutes writing an answer to this question: “How do you tell the difference between right and wrong conduct?” The responses usually fell into three categories.
Many people would say that they rely on what amounts to intuition, a “gut feeling” that a course of action is right or wrong. Nearly as popular was the response that they look to how other people would act in similar circumstances. Very rarely, a student would say that he relies on religious guidelines such as the Ten Commandments or the Quran. These results are consistent with a claim by neuroscientist Josh Greene that many of our moral decisions are guided, if not determined, by the way our brains are wired. But we can rise above our instinctive moral habits by consciously thinking about our ethical dilemmas and applying reason to them.
An article in Discover magazine outlines the research Greene and others have done with sophisticated brain-imaging techniques such as fMRI (functional magnetic resonance imaging), which reveals spots in the brain that become more active when the mind is engaged in certain activities. Greene finds that routine choices such as whether to get up in the morning are handled by lower-level parts of the brain that we share with other less-developed animals. But when he poses hard ethical dilemmas to people, it is clear that the more sophisticated reasoning areas of the brain go to work (handling reasoning about probabilities, for example) alongside the parts that control simpler instinctive actions.
One of the ethical dilemmas Greene uses is a form of the “trolley problem” conceived by some philosophers as a test of our ethical reasoning abilities. As Philippa Foot posed the problem in 1967, you are asked to assume that you are the driver of a tram or trolley that is out of control, and the only choice of action you have is which track to follow at an upcoming switch. There is one man working on the section of track following one branch of the switch, and five men on the other branch. Which branch do you choose, given that someone is going to be killed either way?
Greene has found that these and similar hard-choice ethical problems cause the brain to light up in objectively different ways than it does when a simpler question is posed such as, “Is it right to kill an innocent person?” Whether or not these findings make a difference in how you approach ethical decision-making depends on things that go much deeper than Greene’s experiments with brain analysis.
But first, let me agree with Greene when he says that the world’s increasing complexity means we often have to take more thought than we are used to when making ethical decisions. One reason I favor formal instruction in engineering ethics is that the gut-reaction or peer-pressure methods of ethical decision-making that many students use coming into an ethics class are not adequate when the students find themselves dealing after graduation with complex organizations, multiple parties affected by engineering decisions, and complicated technology that can be used in a huge number of different ways. Instinct is a poor guide in such situations, which is why I encourage students to learn basic steps of ethical analysis, so that they are prepared to think about such situations with at least as much brain power as they would use to solve a technical problem. This is a novel idea to most of them, but it’s necessary in today’s complex engineering world.
That being said, I believe Greene, and many others who take a materialist view of the human person, are leaving out an essential fact about moral reasoning and the brain. The reigning assumption made by most neuroscientists is that the self-conscious thing we call the mind is simply a superficial effect of what is really going on in the brain. Once we figure out how the brain works, they believe, we will also understand how the mind works. While it is important to study the brain, I am convinced that the mind is a non-material entity which uses the brain, but is not reducible to the brain. And I also believe we cannot base moral decisions upon pure reason, because reason always has to start somewhere. And where you start has an immense influence on where you end up.
As a Christian supernaturalist, I maintain that God has put into every rational person’s heart a copy, if you will, of the natural laws of morality. This is largely, but not exclusively, what Greene and other neuroscientists would refer to as instinctive moral inclinations, which they would trace back to brain structures they claim were devised by evolution to cope with the simpler days our ancestors lived in. (If they really think ancient times were simpler, let them try living in the jungle by their wits for a week and see how simple it is.) God has also made man the one rational animal, giving him the ability to reason and think, and God intends us to use our minds to make the best possible ethical decisions in keeping with what we know about God and His revealed truth. This is a very different approach to ethics from the secular neuroscience view, but I am trying to make vividly clear what the differences are in our respective foundational beliefs.
So both Greene and I think there are moral decisions that can be made instinctively, and those that require higher thought processes. But what those higher thought processes use, and the assumptions they start from, are very different in the two cases. I applaud Greene for the insights he and his fellow scientists have obtained about how the mind uses the brain to reach moral decisions. But I radically disagree with him about what the outcomes of some of those decisions should be, and about the very nature of the mind itself.
Sources: The online version of the Discover magazine article on Josh Greene’s research appeared in the July-August 2011 edition at http://discovermagazine.com/2011/jul-aug/12-vexing-mental-conflict-called-morality. I also referred to the Wikipedia article on “Trolley problem.”
Monday, November 19, 2012
The other day, a friend of mine had emergency surgery for a strangulated hernia. While I have not pressed him for details (he’s still recovering), it’s possible that the surgeons used robotic surgery aids during the operation. Believe it or not, there are ethical questions one can ask about such devices, and because they involve highly engineered robotic systems, I think it’s appropriate to discuss the question of whether these gizmos are worth the money they cost in a blog about engineering ethics.
Before we go any farther, I should make clear that we are not talking about replacing surgeons with robots. (The surgeons wouldn’t stand for it, for one thing.) A live human surgeon is always in control. In the case of robotic laparoscopic surgery, for example, three small incisions are made to allow the insertion of a camera and operating tools. The surgeon sits at a control console a few feet away from the patient, watches the camera field of view on a display console, and manipulates the tools remotely. Obviously, such surgery requires special training, but many surgeons say they can do a better job with robotic aids once they have mastered the techniques involved. An endoscopic camera can give a better view than you could get by standing above the patient and looking through an old-fashioned open-wound type of incision, which is much larger than the laparoscopic type used in most robotic operations. It’s also easier to do finely-calibrated motions with the robotic instruments, because one’s hand motions get scaled down to allow better control of the tools. A new million-dollar robotic surgery system recently described in the New York Times reduces the number of laparoscopic incisions from three to one. Patients undergoing operations with this kind of robot for gallbladder removal (the only type for which this particular unit is approved by the U. S. Food and Drug Administration, or FDA) can go home with only one small incision, which can be near the navel and practically invisible.
It appears that most surgeons are generally in favor of robotic surgery—what about the patients? Other things being equal, fewer, smaller incisions would be better. But what about the cost? Here is where things get complicated.
According to the Times article, one use of the million-dollar robot can add up to $60,000 to a surgical bill, depending on what auxiliary equipment a hospital already has. Under the present fee-for-service model, if insurance companies approve, the patient can have the surgery and may not even notice the extra charge on the bill. But if people had to pay directly out of pocket for their operations, I wonder how many folks would shell out an extra $60,000 to wind up with only one abdominal scar instead of three? Maybe a few bathing-suit supermodels could justify the expense, but what about the rest of us?
This is a hypothetical question, because not many people pay for operations out of pocket. Very poor people have Medicaid in many cases and most employed people (not all) have health insurance. In the U. S., we are looking at a large change in how medical care is funded, with the gradual rollout of what even the President himself now terms Obamacare. Without going into details, the net effect of this law will be to require more people to have health insurance and to extend governmental control of the system with regard to what interventions will and will not be paid for. It is far from clear that Obamacare will have the net effect of making robotic surgery more accessible.
Expensive robotic surgery systems are just one specific example of a trend that has been going on for decades: the soaring sophistication and cost of modern medical care. Perhaps because U. S. health care is more free-market-oriented than in many other countries, this trend has been more noticeable here than elsewhere. Although one might expect the U. S. to be more friendly to costly healthcare innovations than nationalized-medicine countries such as Canada, it turns out that the firm making the million-dollar robot is based in Toronto. One reason may be that the FDA runs one of the more restrictive regulatory operations in the world. It is very difficult, expensive, and risky to get drugs or other medical interventions approved in this country, and a good bit of the million-dollar price tag for the robot machinery goes to recoup the expense of getting the thing approved. No one wants to go back to the bad old days when any quack could set up shop as a doctor and inflict wanton harm upon uninformed patients, but there is a lot of evidence that the FDA has gone too far in the other direction, making it too hard to get innovative medical advances approved.
From an engineering point of view, robotic surgery seems to be a basically beneficent kind of technology. If it leads to fewer complications, faster surgery, and more rapid recovery, it might even be shown to pay for itself compared to the old-fashioned open-wound method, simply on the basis of efficiency. Unfortunately, the medical profession is still learning how to measure its own performance in quantitative ways, and so it is hard even to obtain reliable data on such cost-saving possibilities. This is where industrial engineers can help, but only if the medical community asks for assistance.
Industrial engineers are the efficiency experts of engineering. They look at any process and find ways to measure how resources are used, what the goals are, and how the process can be made more efficient. Up to now, applying industrial engineering principles to medicine has been something of a novelty. But if we as a nation are serious about doing something about the rising costs of health care, I see a great future role for industrial engineering in the evaluation and comparison of medical procedures. The tricky part will be to apply these techniques intelligently, and not in a one-size-fits-all way that simply centralizes control in Washington without making anything better.
My best wishes to my friend for his rapid recovery, and for the betterment of the nation’s health care system in general. I for one hope robots will be a part of it.
Sources: The New York Times article “When Robotic Surgery Leaves Just a Scratch” appeared in the online edition on Nov. 17, 2012 at http://www.nytimes.com/2012/11/18/business/single-incision-surgery-via-new-robotic-systems.html.
Monday, November 12, 2012
My father didn’t like to spend money when he didn’t have to, so when my mother expressed a wish for an automatic dishwasher, one day he showed up with an old portable unit that some friends of ours got rid of when they bought a newer model. It was a big floor-model box on rollers, and you ran one hose to the kitchen sink and another to the sink drain and plugged it into a wall outlet. It worked fine for a few weeks. Then one day it refused to drain. We opened the door and saw all this dirty dishwater, so we bailed it out and I volunteered to fix it. Because I was cheaper than calling a repairman, my father agreed to let me tear into the thing. After a lot of gross and messy work, I found the problem: a toothpick had lodged between the drain pump impeller and the housing. That little toothpick had jammed the pump, and as a result the whole washer couldn’t drain.
I learned several things from that experience (not the least of which was to avoid appliance repair as a future career). But the most important one was that fairly small, common, almost unnoticeable things can have big negative effects. And the things don’t need to be physical ones at all. In fact, immaterial things can make a lot more difference than any physical object, especially if they are so widespread that you don’t notice them, like fish who don’t realize it’s water that they’re swimming in. The little thing I’d like to draw your attention to is nominalism.
The word “nominal” is often used by engineers to mean “typical” or “according to the specifications.” But its original meaning is “relating to names.” Nominalism is a philosophical position first proposed by William of Ockham (c. 1288 - c. 1348). Until he came along, most philosophers thought the word “apple,” for example, referred to a real and essential, though immaterial, “appleness” that is shared by all things properly called apples. William of Ockham claimed instead that there was no such thing as appleness—no essence of what it is to be an apple. “Apple” is just a name for certain kinds of objects that we, in our human wisdom, have decided to call apples. In other words, he denied that there are any universals—that is, essences of things. There are just a lot of round red fruits out there that, for convenience, we have decided to group under the name of “apple”; in reality, all apples are different individuals, and there is nothing more to the word than the sum of all things called apples.
After William of Ockham proposed nominalism, the other philosophers had to think of a name to call themselves, and the term they chose was “realists.” A realist, in this technical sense, thinks that there is indeed a universal concept, objective and independent of our minds, which in English is denoted by the word “apple.” These concepts, which the moderate realist Aristotle called essences, are as objectively real as a bank account. A bank account is not a material thing, though there may be material records of it. A bank account is a non-material concept, and so are the concepts of “apple,” “tree,” “horse,” and “man.”
Unless you are aware of this historical controversy, as a typical 21st-century person you probably think and act as a nominalist most of the time. For example, if you agree with the words of the 1992 U. S. Supreme Court decision in Planned Parenthood vs. Casey that “At the heart of liberty is the right to define one's own concept of existence, of meaning, of the universe, and of the mystery of human life,” you are a nominalist, because defining is what a nominalist does. First comes the name, then come the items to be grouped under that name. But the namer is always in charge, and things can be arbitrarily regrouped by the namer to suit one’s convenience. As the philosopher Richard Weaver has pointed out, an important consequence of nominalism is that “if words no longer correspond to objective realities, it seems no great wrong to take liberties with words.” So, for example, the genocide of Jews by Nazi Germany in World War II is euphemized to “the final solution.”
Engineers are perhaps less apt to fall into the grosser errors of nominalism, because we have frequent encounters with objective reality. If a computer chip you design doesn’t work, calling it by a different name isn’t going to make it start working. But even the way engineers use logic has been affected by nominalism. The digital logic that all digital computers use is based on symbolic logic devised by George Boole, a nineteenth-century mathematician whose hope was to reduce all logic to symbols. The trouble is, symbolic logic assumes that nominalism is true, and throws out a great deal of material that traditional Aristotelian logic relied on, including the notion that understanding is a uniquely human power essential to right thinking. But if everybody uses nominalist logic that can be expressed by Boole’s “boolean algebra,” we have reduced our thought processes to those that can be done by computers. This is an important source of the idea put forth by artificial intelligence proponents that the brain is really nothing more than an advanced wet computer. If we can’t make computers act like humans, we’ll reduce humans to the point that they act like computers.
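Boole’s algebra itself fits in a few lines. As a purely illustrative sketch (in Python, my choice of language, which inherits its boolean operators directly from Boole’s system), here is an exhaustive check of De Morgan’s laws, the kind of identity on which symbolic logic is built:

```python
# Boole's algebra of logic reduces reasoning to operations on two values.
# Here we verify two of its identities (De Morgan's laws) by exhaustively
# checking every possible truth assignment of p and q.
from itertools import product

for p, q in product([False, True], repeat=2):
    # not (p and q)  is equivalent to  (not p) or (not q)
    assert (not (p and q)) == ((not p) or (not q))
    # not (p or q)   is equivalent to  (not p) and (not q)
    assert (not (p or q)) == ((not p) and (not q))

print("De Morgan's laws hold for all truth assignments")
```

The exhaustive check works precisely because, in Boole’s system, only the two values matter; this is the reduction of thought to symbol manipulation that the paragraph above describes.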
Let’s hope that doesn’t happen. Regular readers of this blog may have noted that I try to approach philosophical matters from an Aristotelian perspective that moves from the real, objective world to the world of thoughts. Nominalism tempts us to do the opposite: to define things the way we want them to be, and then look for pieces of reality that fit our preconceived notions. I think engineers of all people should be aware of the dangers of nominalism. Realism is more than just being practical; it means realizing that there is more to the world than we can possibly understand or control, and the proper attitude toward nature is one of humility. Otherwise, like that toothpick in the dishwasher, nominalism can throw a whole culture out of whack.
Sources: The quotation by Richard Weaver is from his 1948 book Ideas Have Consequences (Univ. of Chicago Press), p. 7. I was inspired to write about nominalism and realism by reading one of the few logic textbooks in print which employ realist Aristotelian logic rather than symbolic logic as its basis: Peter Kreeft’s Socratic Logic (South Bend, Indiana: St. Augustine’s Press, 2004).
Monday, November 05, 2012
Here in the U. S. we are in the last three days of the 2012 election season. At stake are the Presidency, hundreds of seats in the U. S. Congress, and thousands of state and local races. Our question for today is this: have advances in technology made democracy as it is practiced in the U. S. better or worse?
That question immediately leads to another: by what standard are we to judge improvements in democracy? Unlike technical concepts such as ideal 100% efficiency, it’s hard to imagine what an ideal democracy would look like. Just imagining ideal people running it doesn’t do any good. It was James Madison who pointed out, “If men were angels, no government would be necessary.” So to my mind, at least, an ideal democracy would allow real fallible people to govern themselves to the best of their abilities.
That being said, we must distinguish between direct democracy and representative democracy. An old-fashioned New England town meeting where the citizens simply represent themselves is a direct democracy, but obviously such a method gets impractical as the size of the political entity becomes larger than a few thousand people. So what we are discussing is an ideal representative democracy: one in which people elect representatives periodically to embody their interests and judgments in the operation of government.
In such a situation, we can assess how accurately the elected representatives really mirror the inclinations of their constituents, and how effective this representation is in running the government. By this standard, technological advances have helped matters in some ways, but have led to big distortions and injustices in other ways.
The good news is that, for example, ballot-counting is a lot faster, and probably more accurate overall, with electronic voting machines and computer networks to handle the math and presentation of election returns. And electronic news media allow interested parties to learn a great deal more, and more diverse, information than was available in the old days, when most major media markets had at most two or three TV networks, as many radio networks, and a couple of daily newspapers, but no Internet or social media.
What about the negatives? One serious problem I have seen is the way that computer-intensive calculations have been employed in demographic analyses to gerrymander U. S. House of Representatives districts. “Gerrymandering” is a term that comes from an odd-shaped Massachusetts congressional district drawn by Governor Elbridge Gerry to benefit his political party in 1812. The odd shape basically divided his opponents’ constituents and united his own, but one resulting district looked like a salamander and was satirized in a political cartoon that labeled it a “gerrymander.” The term caught on as a way of criticizing the drawing of voting districts so as to favor one party or another.
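The arithmetic behind gerrymandering is simple enough to sketch. The following toy example (in Python, with entirely hypothetical precinct numbers of my own invention) shows how the very same votes, drawn into districts two different ways, produce different seat counts; party A holds 60 percent of the vote under both plans:

```python
# Toy illustration of gerrymandering: identical precinct votes, grouped
# into districts two different ways, yield different seat counts.
# All numbers are hypothetical.
precincts = [  # (votes for party A, votes for party B) in six precincts
    (10, 0), (10, 0), (4, 6), (4, 6), (4, 6), (4, 6),
]

def seats_for_A(districting):
    """Count the districts in which party A outpolls party B."""
    wins = 0
    for district in districting:
        a = sum(precincts[i][0] for i in district)
        b = sum(precincts[i][1] for i in district)
        if a > b:
            wins += 1
    return wins

packed  = [(0, 1), (2, 3), (4, 5)]  # A's strongholds packed into one district
cracked = [(0, 2), (1, 3), (4, 5)]  # A's strongholds spread across districts

print(seats_for_A(packed))   # A wins 1 of 3 districts
print(seats_for_A(cracked))  # A wins 2 of 3 districts
```

With computer-intensive analysis of real election returns, the same packing-and-cracking search is run over thousands of precincts instead of six, which is why modern gerrymanders can all but guarantee outcomes.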
Unfortunately, gerrymandering has become a way of life, and it is a routine thing for the party in power in a state to take advantage of every U. S. Census to push the gerrymandering art to new highs. With computer-intensive analysis of election returns, this process has become one that often guarantees the outcome of an election in a given district. One adverse consequence is a complete reversal of the original intent of the Founders as to the different purposes of the House of Representatives and the Senate.
Senators, with their revolving six-year terms, were intended to be a steady, long-term, stabilizing influence on government, while representatives, all of whom are elected every two years, were meant to reflect rapidly changing constituent opinion. The gerrymandering of House districts has turned this intention on its head. With computer-aided gerrymandering, many representatives enjoy fifteen-, twenty-, and even thirty-year tenures in their House seats. But because Senators by law are elected from an entire state (and so far, no one has seriously contemplated gerrymandering state borders), they are the ones who can be turned out after a single term, and often are.
I haven’t even mentioned such things as targeted campaign ads aimed at specific demographic groups; the overwhelming power of electronic media and its crippling expense, which excludes all but the best-funded candidates (meaning that no one without rich friends or wealthy corporations on their side can do much of anything nationally); or the handing over of Congressional authority to unelected bureaucrats, which, while not directly aided by technological advances, seems to have become more popular as technology has advanced. And there are now instant polls conducted daily if not more frequently, with dozens of places online to see the poll results almost minute by minute—or instantly, in the case of those public-opinion meters displayed on some TV channels during the Presidential debates. Just about the only thing that hasn’t changed is the formal mechanism of voting and what it means.
The optimist in me says not to worry too much—perhaps the modern voter really takes advantage of the superabundance of information and delivers a more informed decision than those in the past whose media sources were so much more limited. But the pessimist in me thinks that technologized democracy tends to pander to the worst and the simplest arguments and procedures: mass-media scare tactics, reducing voters to a single demographic characteristic (e. g. poor, black, Hispanic, working class, etc.) and manipulating voters based on that characteristic, and other techniques that tend to remove power as a practical matter from the average non-politician citizen, and concentrate it into the hands of the few elite who operate the handles of the analysis and publicity machines.
I recently read a book that pointed out many of these flaws and recommended some radical changes that might move the situation back closer to where it was a few decades ago. Arnold Kling’s Unchecked and Unbalanced is primarily an analysis of the 2008 financial crisis, but along the way he shows how un-representative our U. S. government has become. One main takeaway from the book is the fact that while the current system of constituting the House and Senate was set up back when the population was only a few million, it has not undergone any basic change since that time, and now our population is more than 300 million. In order to get back to the point that a single congressperson represents about the same number of people that he or she did in 1800, say, we would have to have perhaps 5,350 of them rather than the 535 that we do now. As things are, there is simply too much power and money concentrated in the hands of too few people, and the great mass of voters go largely unrepresented.
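That figure can be checked on the back of an envelope. The numbers below are rough assumptions of mine (an 1800 census population of about 5.3 million and a House of about 106 seats), not data from Kling’s book, but they land in the same ballpark of several thousand representatives:

```python
# Back-of-envelope check of the representation ratio, using rough
# historical figures (assumptions, not exact data).
pop_1800 = 5_300_000      # approximate 1800 census population
house_1800 = 106          # approximate House size after the 1790 apportionment
pop_2012 = 311_000_000    # approximate U.S. population circa 2012
house_2012 = 435          # House size fixed by statute since the early 1900s

per_rep_1800 = pop_1800 / house_1800   # ~50,000 constituents per member
per_rep_2012 = pop_2012 / house_2012   # ~715,000 constituents per member

# House size needed today to restore the 1800 ratio:
needed = pop_2012 / per_rep_1800

print(round(per_rep_1800), round(per_rep_2012), round(needed))
```

However you round the inputs, the conclusion is the same: each member of today’s House answers to more than ten times as many constituents as an 1800 representative did, and restoring the old ratio would take a chamber of several thousand members.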
Largely, but not completely. It still means something to vote, and I hope all of my U. S. readers with that privilege will go out and exercise it on Tuesday. And we will all have to abide by the outcome, so you better vote wisely.
Sources: Arnold Kling’s book Unchecked and Unbalanced: How the Discrepancy Between Knowledge and Power Caused the Financial Crisis and Threatens Democracy was published by Rowman & Littlefield in 2010. I found the James Madison quote, from Federalist No. 51, at http://www.goodreads.com/quotes/70829-if-men-were-angels-no-government-would-be-necessary-if, and used the Wikipedia articles on “U. S. population,” “democracy,” “gerrymander,” and “republic.”