Book Search is a portal that Google, Inc. is developing to provide access to all the world's books in digital form. How many is that? If you count editions (not individual copies), a recent Associated Press article about the project says there are between 50 and 100 million books in the world. The largest research library that I deal with on a regular basis, at the University of Texas at Austin, has only eight million of these. So clearly, Google will have done a great thing if and when it finishes—although with new books coming out all the time, a project like that is never really finished.
At first glance, this sounds like a great step forward in the history of information, on a par with the invention of printing. There are many parallels between the two events. Before movable type made it possible to produce thousands of identical copies of a manuscript, hand-copied books were rare, expensive treasures that, by and large, only the wealthy and powerful classes could afford. But once Europe had dozens of print shops churning out books and pamphlets by the hundreds, prices came down to the point that artisans, shopkeepers, and even some farmers and peasants could afford them. You can make arguments that the Renaissance, the Protestant Reformation, and the Industrial Revolution all depended vitally on the invention of printing.
However, there is one critical difference between the invention of printing and what Google is doing. Print shops, publishers, and the whole network of book production, distribution, and the libraries that developed to house them were under the control of a diverse array of entrepreneurs, private organizations, schools, and governments. On the other hand, Google is, well, Google—a single, monolithic, centrally controlled corporation. Is there any ethical problem with that? It depends.
One thing that may be in danger is what I would term the universal freedom of library access. At any university library worthy of the name, anywhere in the world, any person can simply walk in and look at the general collections, usually without charge. And if you can produce scholarly credentials, you will generally be allowed to examine even the rarest items in their collections, under proper security controls, of course. The only limitation (and this is a severe one, admittedly) is that you have to travel physically to the library in question. But once you're there, you're in.
We have already seen how many internet firms have submitted to the will of dictatorial nations in exchange for the privilege of operating there. In my Mar. 30, 2006 blog, I criticized Google, Yahoo, and Microsoft for kowtowing to the government of the People's Republic of China by restricting users' access to certain sites that the government deemed objectionable. Surely the books and other published works of Chinese dissidents will not be welcome there in electronic form any more than the people themselves, many of whom have endured long prison terms or even death for the "crime" of expressing their opinions.
But that is only one example of how Google, or any entity which has exclusive legal rights to the propagation of large amounts of information in a single medium, could distort or restrict access to the written heritage of the human race.
Am I being paranoid in sensing the potential for some sinister goings-on? I do not presently attribute evil or malign motives to Google, but sometimes things that look good to start with have bad unintended consequences. All I'm saying is that letting a single firm control the way most of the world will access its own written heritage in the future is, at the least, an unprecedented step, and potentially a very dangerous one.
The management of Google may all be nice folks now. But what if China gets more prosperous and has so much money in its government-controlled stock investment option that one day it hauls off and buys Google? Sounds ridiculous now, but if you had said in 1965 that in forty years, General Motors would be a money-losing basket case and Japanese car makers would beat them in worldwide sales, you would have gotten peculiar glances then too. Then China would get to say who gets access to what—an eventuality that few people would enjoy or benefit from.
My point is that the concentration of information control in the hands of a few is something to be regarded with caution, to say the least. The same goes for news media, but here we're talking about a lot more than just news media—the intellectual heritage of the entire human race is at stake.
Do I have any suggestions? Well, no, in this case I'm just trying to get the ball rolling on a discussion. Even if I owned stock in Google, I have no illusions that they would listen to my opinions about their project. But if we're going to go ahead with this thing, we should at least go into it with our eyes open—as long as we can still see on our own.
Sources: The Associated Press article by Natasha Robinson on Google's Book Search project and its efforts toward the preservation of historical books was published in numerous venues. I saw it in print in the Austin American-Statesman (p. D3 of the Apr. 28, 2008 edition), and a version is accessible online at http://abcnews.go.com/Technology/wireStory?id=4722073.
Monday, April 28, 2008
Monday, April 21, 2008
Human Biological Enhancement and the Ethics of Personhood
Some philosophers of mind like to try a little thought experiment on their students. It goes something like this. Suppose some years from now, a person—an ordinary human being—gets some dreaded brain disease that gradually destroys his gray matter. But also suppose that medical technology has advanced to the point that as the brain's biological tissue dies, it can be replaced by silicon (or some equivalent futuristic material) that is functionally equivalent to the dying brain part. And so as time goes on, Mr. Brain Patient has more and more of his brain replaced by the future's equivalent of computer chips. At what point, the philosopher asks, does the patient cease to be a human and begin to be a computer?
At one time, you could laugh off the whole thing by saying nobody has ever done such a thing and it's unlikely that they ever will. But no longer. Writing in Technology and Culture, historian Michael D. Bess points out that numerous blind and otherwise disabled people have received brain implants that allow them to see or communicate in ways that are utterly impossible for the rest of us mortals. Having a bunch of wires attached to your brain is not the same thing as replacing your cerebellum with a mainframe, but the border has been crossed. What happens from now on is more a matter of degree than of kind.
Bess foresees not just advances in brain science, but in genetic engineering and pharmacology as well, all leading to what he calls "human biological enhancement." Currently, the goal of most such projects is to use technology to restore the abilities of disabled people to something close to normal: curing genetic diseases, allowing the blind to see, allowing people with strokes or myasthenia gravis who end up "locked in" (unable to move or talk) to communicate via brain waves, and so on. But what is to prevent a person who sees through a computer from attaching an infrared camera to their input so they can see in the dark? Or what if we find a drug that restores Alzheimer's patients to normal brain function, and also gives normal people an IQ of 200? What is to keep us from taking human nature as merely raw material, a rough design to be improved on with increasingly advanced engineering? And what do we call these improved beings? People? Cyborgs? Or something in between?
Bess, for his part, sees no practical way to avoid these changes. The science will keep progressing, and as the natural desire on the part of people to take advantage of enhancements pulls the technology into the marketplace, we will face the issue of how to treat folks who have version numbers after their names (Bess titled his essay "Icarus 2.0"). He imagines that the only way to stop or regulate human biological enhancement would be to pass a worldwide set of laws together with a huge enforcement mechanism to chase down any miscreants trying to do enhancements under the table, so to speak. He sees the very public failure of the attempt to regulate performance-enhancing drugs in sports as a sign that this road is doomed to futility.
What we ought to do instead, he says, is get used to it. Start now to develop an "ethics of personhood" that in his words constitutes "an expanded conception of human dignity, a more generous understanding of the word 'us'." If one day you go to your job and find that the new hire you have to work with moves on wheels, sees through cameras, and accesses the Internet just by thinking, Bess is concerned that somehow you will be tempted to view that being as something other than human. We need to start now to work on that problem so that it doesn't lead to disastrous social consequences.
Well, I'm doing my little bit by drawing your attention to this matter. I'm already working with a colleague who gets around on wheels—he has osteomyelitis and spends most of his day in an electric wheelchair. Perhaps if these changes come along slowly enough, we can get used to them.
But for some reason, in searching history for an encounter between two very different orders of being who both happened to be human, the story of the early Spanish explorations of the New World comes to mind. With their armor, ships, and guns, the Spaniards must have looked to the Native Americans like R2D2 looks to us. And sure enough, a whole lot of social disruption and suffering came about as a result of that encounter. But most of the misery and suffering was experienced by the Native Americans, not the "enhanced" Spaniards.
Bess seems to be worried that un-enhanced humans will discriminate against the enhanced types, because they'll look odd or peculiar. But the case of Spanish exploitation of the New World suggests that the problems will mostly be experienced by those who, for whatever reason, don't benefit from technologically enhanced abilities. Especially if enhancement is expensive (it will always be at first), you could easily end up with an elite class of enhanced humans who would regard political and social power as their right.
Aldous Huxley's 1932 dystopia Brave New World divided the genetically engineered population of the future into castes, from alphas at the top down to epsilons at the bottom. The alphas were the natural-born leaders with enhanced intelligence, and the epsilons were bred (or manufactured, really) for menial jobs such as elevator operators (Huxley's crystal ball didn't include much in the way of automation). Huxley avoided the problem of having the lower castes rise up in revolt by making their genetic makeup include a natural-born enjoyment of menial tasks.
I don't know about you, but I wouldn't want to live in such a world. Bess is to be congratulated for raising a concern that we ought to start thinking about now. But I believe he's looking in the wrong places for problems. The enhanced types will do just fine—the people we need to start thinking about defending are the poor, the discriminated against, and the unborn, now and perhaps even more in the future.
Sources: Bess's essay "Icarus 2.0: A Historian's Perspective on Human Biological Enhancement" appears in the January 2008 issue of Technology and Culture (vol. 49, no. 1, pp. 114-126).
Monday, April 14, 2008
Thoughts on the Passing of a Zip Drive
In my household we try not to let too much old technology pile up, so after my wife bought a new laptop the other day, we began saying good-bye to her old Mac tower. It gave good service from about 2002 to a couple of years ago, and one of its features we're going to miss is its Zip drive. Zip disks were a removable magnetic-disk storage medium that was popular from the mid-nineties until flash drives came along. The first Zip disks held 100 MB, which was later boosted to 250 MB, but with 1-gig flash drives so cheap these days, I can't imagine there's much of a market for Zip drives anymore. Thing is, we have about 40 or so Zip disks that have stuff on them going all the way back to 1988, when my wife first learned to do graphics on a computer. Some of it has been backed up here and there, but if I had to tell you where, I'd be in trouble. So I spent yesterday afternoon transferring a good many of those old Zip disks to a backup drive, and it got me to thinking about the permanent impermanence of digital storage.
Every two to five years or so, a new generation of storage media comes along. If the new generation didn't rise up and commit parricide on the previous generation, it wouldn't be so bad. But the hallmark of modern technology is "creative destruction," so for a new storage medium to be successful, it has to drive the previous medium out of existence. True, you can usually find antique drives, media, and even computers that use them if you look hard enough, but having to hunt around and assemble your own computer museum just to read some old files is hardly practical for most people. So the only alternative, if you don't want your old data to go away as surely as if you wrote it on paper and threw the paper on a bonfire, is to transfer it to the next medium. Which is fine for another two to five years, and then. . . .
And that gets me to wondering, what am I saving all this stuff for anyway? The inventor and futurist Ray Kurzweil wrote about this in one of the most human-sounding passages of a book about how we're all eventually going to live as software on hardware that will take over the universe (if you think I'm kidding, go read The Singularity Is Near). His father Fredric was a musician and music teacher who fled Germany in the 1930s for the U. S. When he died at 58, the son inherited a large volume of paper documents, recordings, and other memorabilia. After starting a project to digitize all this stuff, Ray reached a conclusion which is as simple as it is startling. It was this: "Information lasts only so long as someone cares about it."
Like many of Kurzweil's philosophical epigrams, it contains elements of truth. I'm sure lots of information, in the form of paper, hard drives, old floppy disks, and so on, is eradicated every day simply because nobody needs or wants it any more, and the space or money it takes up is needed for something else. But just because somebody cares about information doesn't mean it will necessarily endure. Along with caring, the people interested in the data need the resources it takes to preserve it—whether that means space, funding for periodic migrations to new media, or archeological work.
In a way there's nothing new about this. People have been making choices about what information to save and what to toss ever since the invention of writing. Writing and paper are different in degree from Zip disks and flash drives, but not in kind. They are all technologies for the storage of a non-material entity—namely, information—using material media. You can make a good argument that the invention of writing made civilization possible, in that laws, history, customs, religious traditions, and most of what makes a culture could then be preserved independently of particular people with both good memories and the ability to pass their memories on to other people who could do the same. And I'm not one of those people who sit up at night worrying that historians of the future will have nothing to go on after the global catastrophe that wipes out all computer memories everywhere—although if that did happen, we'd all have a lot to worry about, not just the historians.
If we knew for certain whether anybody in the future would care about this or that data file, things would be easier. But you never know. Certain kinds of information, such as emails in the Executive Branch of the U. S. government, are just assumed to have historical importance, which is why the Bush administration got in some trouble a few months ago after admitting that they appear to have "lost" some emails covering several years, and had to recover them from backup tapes.
But for most ordinary, non-historical personages like me, the candidates for people who will care about your information include yourself in the future, your relatives and children, and maybe a few friends and associates. It's actually a pretty short list. And unless you're a professional historian or plan to become the subject of one, if you don't think your list of carers-in-the-future would be interested in your tax return for 1982, you can just go ahead and throw it away.
Sources: Ray Kurzweil's The Singularity Is Near (Viking, 2005) carries the story of his attempts to archive his father's legacy on pp. 326-330. Zip is a registered trademark of Iomega Corporation, which still sells Zip drives, so maybe I won't worry about backing up those remaining disks just yet.
Monday, April 07, 2008
Whistleblowing on Southwest Airlines: Cracks of Doom or Paperwork Errors?
The lot of a whistleblower is not an easy one. And I'm not talking about football referees. In engineering ethics parlance, a whistleblower is someone who goes public with information about a safety issue, after trying without success to deal with the problem through normal organizational channels. Whistleblowers can toot either before or after something terrible happens, but the consequences for them are usually the same: isolation, criticism, and often the loss of a job or even a career. Their only compensation is the knowledge that, in most cases at least, they did the right thing.
Charalambe "Bobby" Boutris is finding out right now what life as a whistleblower is like. In 1998, the Federal Aviation Administration (FAA) hired him, and an important part of his job was to make sure that airlines complied with what are called Airworthiness Directives (ADs for short). These are rules that the FAA makes to ensure the safety of aircraft, and they detail such things as regular fuselage inspections, especially for older planes.
You'd think nothing much could go wrong with the fuselage compared to moving parts like the engine and so on, but think again. If you've ever been on a jet aircraft and looked through a window with a view over the wing, you have probably noticed that the wingtip wiggles up and down several inches during air turbulence. That is perfectly normal, and designed into the way the plane works. If the wing were built so stiffly that it didn't wiggle at all, the plane would be so heavy that it couldn't get off the ground.
But if you've ever bent a paper clip back and forth until it breaks, you know about a thing called metal fatigue. Not only the wing, but all stress-bearing parts of the fuselage experience tiny movements that, over time, can cause metal fatigue and cracks. Most of the time these cracks are small and don't spread. But in 1988, they were responsible for one of the most spectacular airline accidents in aviation history.
Passengers in the first-class section of an Aloha Airlines flight over Maui were astonished to see the roof of the plane pop off and rip away in the violent decompression, taking a flight attendant with it. The pilot, not even fully aware of what happened, quickly adapted to the altered flying characteristics of his plane and safely landed at a nearby airport. The attendant was the only fatality, but clearly, airlines did not want to take the chance of this kind of thing happening again. Investigation showed that the plane, which was one of the oldest in Aloha's fleet, had developed fatigue cracks that had spread to cause the whole top section of the fuselage to fly off.
For this and other very good reasons, the FAA requires air carriers to inspect their fleets for fatigue cracks on a regular basis. Now, these cracks are a statistical thing, like mortality rates. It's hard to predict whether a given plane will develop a crack at a given place by a given time, but the inspections are timed so that on average, any cracks can be caught and repaired well before they become dangerous. But the system works only if you keep to the schedule.
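The arithmetic behind such a schedule can be sketched in a few lines. Every number below is hypothetical, made up purely for illustration; real inspection intervals come from detailed fatigue analyses, not a calculation this simple:

```python
# Illustrative only: these lengths and rates are invented numbers,
# not real FAA or airline parameters.
detectable_mm = 5.0    # smallest crack an inspection reliably finds
critical_mm = 50.0     # length at which the structure is at risk
growth_mm_per_1000_cycles = 2.0  # assumed worst-case crack growth

# Worst case: a crack just under detectable size is missed at one
# inspection. It must not reach critical length before the next one.
margin_mm = critical_mm - detectable_mm
max_interval_cycles = (margin_mm / growth_mm_per_1000_cycles) * 1000

# Apply a safety factor (here arbitrarily 2) to stay well inside
# the computed margin.
recommended_interval = max_interval_cycles / 2
print(int(max_interval_cycles), int(recommended_interval))  # 22500 11250
```

The safety factor acknowledges the statistical nature of crack growth: an assumed rate can be exceeded in practice, so the interval is set well inside the computed margin. Skip an inspection cycle or two, and that margin quietly evaporates.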
Well, it appears that Southwest Airlines didn't keep to the inspection schedule. In testimony before Congress on April 4, Inspector Boutris told the story of how he found numerous cases in which inspection records were either too mixed up to tell whether the inspections had been done, or showed definitely that planes had gone as long as 30 months past the time when ADs specified they had to be pulled out of service to be inspected. It's illegal to fly a plane in revenue service if it's behind in certain kinds of inspections.
What made matters worse was that when Boutris asked permission from his FAA supervisor to issue a letter of investigation to Southwest in 2007, the supervisor told him to tone it down to a letter of concern, which does not carry the same impact. Eventually, in late March of 2007, Southwest did finish up the late inspections, but only after some airplanes had gone months or years without them. The FAA has announced its intention to fine Southwest ten million dollars for flying the uninspected planes, at least one of which was found to have fatigue cracks after inspections were finally performed.
On a scale running from "who cares?" to "stick it to 'em," you can identify two extreme views of this story. If you take the side of Southwest Airlines, you can point out that besides being one of the most profitable airlines in the business, they have never had a catastrophic accident in which more than one person was killed. And that incident, when a ground crew member was pulled into an engine, was due to pilot error, not mechanical failure. True, they didn't follow all the rules, but no harm was done—none of their planes popped their tops like the Aloha Airlines flight did.
On the other extreme, you can say that you keep good safety records like that by following the rules, even if it means grounding a large fraction of your fleet to make overdue inspections. The attitude of Boutris' supervisor appears to be one of "don't rock the boat," which might indicate that he was more concerned with how Southwest Airlines would fare than he was worried about the safety of the flying public, despite the fact that he worked for the government. That indicates systemic organizational problems both within the FAA and Southwest Airlines.
Back in high school, I attended Explorer Scout meetings that were held in the basement of a telephone exchange building. On the wall of the break room was a brass plaque, as I recall, and its words went something like this: "No service is so urgent or no business need is so critical that we fail to perform our work safely." Back then, Ma Bell had a guaranteed monopolistic income, and could afford to make safety priority number one. But I thought it was a great motto at the time, no matter what the business was or how it was doing financially. And I still do. I hope Southwest Airlines agrees with me, not just in words, but in actions as well.
Sources: A video of Mr. Boutris' opening statement before a Congressional committee investigating this matter can be viewed at http://salon.glenrose.net/?view=plink&id=6899. A CNN article on the Southwest Airlines actions and the FAA's response is at http://www.cnn.com/2008/US/03/06/southwest.planes/. The Wikipedia article on Aloha Airlines has a brief description of the 1988 accident.
Charalambe "Bobby" Boutris is finding out right now what life as a whistleblower is like. In 1998, the Federal Aviation Administration (FAA) hired him, and an important part of his job was to make sure that airlines complied with what are called Airworthiness Directives (ADs for short). These are rules that the FAA issues to ensure the safety of aircraft, covering such things as regular fuselage inspections, especially for older planes.
You'd think nothing much could go wrong with the fuselage compared to moving parts like the engine and so on, but think again. If you've ever been on a jet aircraft and looked through a window with a view over the wing, you have probably noticed that the wingtip wiggles up and down several inches during air turbulence. That is perfectly normal, and designed into the way the plane works. If the wing were built solidly enough not to wiggle at all, it would make the plane so heavy that it couldn't get off the ground.
But if you've ever bent a paper clip back and forth until it breaks, you know about a thing called metal fatigue. And not only the wing, but all stress-bearing parts of the fuselage experience tiny movements that, over time, can cause metal fatigue and cracks. Most of the time these cracks are small and don't spread. But in 1988, they were responsible for one of the most spectacular airline accidents in aviation history.
Passengers in the first-class section of an Aloha Airlines flight over Maui were astonished to see the roof of the plane pop off and rip away in the violent decompression, taking a flight attendant with it. The pilot, not even fully aware of what happened, quickly adapted to the altered flying characteristics of his plane and safely landed at a nearby airport. The attendant was the only fatality, but clearly, airlines did not want to take the chance of this kind of thing happening again. Investigation showed that the plane, which was one of the oldest in Aloha's fleet, had developed fatigue cracks that had spread to cause the whole top section of the fuselage to fly off.
For this and other very good reasons, the FAA requires air carriers to inspect their fleets for fatigue cracks on a regular basis. Now, these cracks are a statistical thing, like mortality rates. It's hard to predict whether a given plane will develop a crack at a given place by a given time, but the inspections are timed so that on average, any cracks can be caught and repaired well before they become dangerous. But the system works only if you keep to the schedule.
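The record-keeping logic at issue here is simple to sketch. Below is a toy illustration, not actual FAA or airline software; the 18-month interval, the tail numbers, and the dates are all invented for the example. It checks each plane's last recorded inspection against an AD's required interval and reports how many months overdue any plane is:

```python
from datetime import date

# Hypothetical AD interval for the sketch; real ADs specify their own
# intervals in flight hours, cycles, or calendar time.
AD_INTERVAL_MONTHS = 18

def months_between(earlier: date, later: date) -> int:
    """Whole months elapsed between two dates."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def overdue_aircraft(fleet: dict[str, date], today: date) -> dict[str, int]:
    """Map tail numbers of non-compliant planes to months overdue."""
    report = {}
    for tail, last_inspected in fleet.items():
        elapsed = months_between(last_inspected, today)
        overdue = max(0, elapsed - AD_INTERVAL_MONTHS)
        if overdue:
            report[tail] = overdue
    return report

fleet = {
    "N100SW": date(2004, 1, 15),  # inspected four years ago -- badly overdue
    "N200SW": date(2007, 6, 1),   # still within the interval
}
print(overdue_aircraft(fleet, today=date(2008, 1, 15)))
# {'N100SW': 30} -- 30 months past due, the figure Boutris reported
```

The point the sketch makes is that this check is trivially automatable, which is what makes records "too mixed up to tell whether the inspections had been done" so troubling.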
Well, it appears that Southwest Airlines didn't keep to the inspection schedule. In testimony before Congress on April 4, Inspector Boutris told the story of how he found numerous cases in which inspection records were either too mixed up to tell whether the inspections had been done, or showed definitely that planes had gone as long as 30 months past the time when ADs specified they had to be pulled out of service to be inspected. It's illegal to fly a plane in revenue service if it's behind in certain kinds of inspections.
What made matters worse was that when Boutris asked permission from his FAA supervisor to issue a letter of investigation to Southwest in 2007, the supervisor told him to tone it down to a letter of concern, which does not carry the same impact. Eventually, in late March of 2007, Southwest did finish up the late inspections, but only after some airplanes had gone months or years without them. The FAA has announced its intention to fine Southwest ten million dollars for flying the uninspected planes, at least one of which was found to have fatigue cracks after inspections were finally performed.
On a scale of "who cares?" to "stick it to 'em," you can identify two extremes of how one can view this story. If you take the side of Southwest Airlines, you can point out that besides being one of the most profitable airlines in the business, they have never had a catastrophic accident in which more than one person was killed. Their single fatal incident, in which a ground crew member was pulled into an engine, was due to pilot error, not mechanical failure. True, they didn't follow all the rules, but no harm was done—none of their planes popped their tops like the Aloha Airlines flight did.
On the other extreme, you can say that you keep safety records like that by following the rules, even if it means grounding a large fraction of your fleet to make overdue inspections. The attitude of Boutris' supervisor appears to have been "don't rock the boat," which suggests that he was more concerned with how Southwest Airlines would fare than with the safety of the flying public, despite the fact that he worked for the government. That indicates systemic organizational problems within both the FAA and Southwest Airlines.
Back in high school, I attended Explorer Scout meetings that were held in the basement of a telephone exchange building. On the wall of the break room was a brass plaque, as I recall, and its words went something like this: "No service is so urgent or no business need is so critical that we fail to perform our work safely." Back then, Ma Bell had a guaranteed monopolistic income, and could afford to make safety priority number one. But I thought it was a great motto at the time, no matter what the business was or how it was doing financially. And I still do. I hope Southwest Airlines agrees with me, not just in words, but in actions as well.
Sources: A video of Mr. Boutris' opening statement before a Congressional committee investigating this matter can be viewed at http://salon.glenrose.net/?view=plink&id=6899. A CNN article on the Southwest Airlines actions and the FAA's response is at http://www.cnn.com/2008/US/03/06/southwest.planes/. The Wikipedia article on Aloha Airlines has a brief description of the 1988 accident.
Monday, March 31, 2008
BitTorrent and Comcast: Who Pays and How?
Back on Feb. 4 of this year, I noted how a group of Swedish software experts got in trouble for running a peer-to-peer system for distributing video content over the Internet. The claim made by the prosecutors was that most of the content was pirated. Well, that turned out to be a sign of things to come. For some months now, the major U. S. cable television and Internet network operator Comcast has been in a dispute with BitTorrent Inc., a firm that provides software allowing peer-to-peer sharing of video. And the outcome of the fight may affect how all of us pay for Internet services for years to come.
The first punch in the public fight came when BitTorrent accused Comcast of singling out users of BitTorrent's protocol for interference and interruptions when Comcast's network traffic got too heavy for comfort. At first Comcast denied any such discrimination, but later, under pressure, spokesmen for the cable and network firm admitted that they were doing exactly that. Then the Federal Communications Commission got involved and has held public hearings about the matter. On Mar. 27 (last Thursday), Comcast announced that it was making a number of changes that will both eliminate the discriminatory network measures against BitTorrent users and improve everyone's service through increased software and hardware efficiency and investment. But that hasn't stopped the FCC from announcing another hearing set for Apr. 17 at Stanford University in the heart of Silicon Valley, where I'm sure they will find people with an abundance of opinions on both sides.
What is BitTorrent and how does it work? You may recall the flaps about peer-to-peer sharing of audio files over the Internet a few years ago. BitTorrent's protocol also uses the fact that a file that one person wants is usually stored on thousands of other computers on the network. But video files are thousands of times bigger than audio files, especially if we're talking about HD video, which is becoming increasingly popular. The process of getting only one source computer to send a gigabyte-size file (1,000,000,000 bytes) over the Internet to another computer is tedious, error-prone, and slow. So BitTorrent draws upon many of the other computers that have the file in question and gets them to cooperate by sending different pieces of the file to the target computer. Somehow the software coordinates all this confusion of activity, and the end result to the user is that he or she gets the desired file a lot faster than if only two computers were involved.
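The coordination described above can be sketched in miniature: split the file into fixed-size pieces, publish a hash of each piece, fetch different pieces from different peers, verify each piece against its hash before accepting it, and reassemble. This toy sketch is not BitTorrent's actual wire protocol; the peer names and the tiny piece size are invented for illustration:

```python
import hashlib

PIECE_SIZE = 4  # bytes, for the demo; real clients use 256 KB to 4 MB pieces

original = b"the quick brown fox jumps over the lazy dog"
pieces = [original[i:i + PIECE_SIZE] for i in range(0, len(original), PIECE_SIZE)]
# In real BitTorrent, per-piece hashes come from the .torrent metadata file.
piece_hashes = [hashlib.sha1(p).hexdigest() for p in pieces]

# Pretend three peers each hold the complete file; we spread the piece
# requests across them instead of pulling everything from one source.
peers = ["peer_a", "peer_b", "peer_c"]
downloaded = {}
for index, expected in enumerate(piece_hashes):
    peer = peers[index % len(peers)]  # round-robin piece assignment
    piece = pieces[index]             # stands in for a network fetch from `peer`
    # Verify the piece before accepting it -- this is what lets a client
    # safely take data from strangers on the network.
    assert hashlib.sha1(piece).hexdigest() == expected
    downloaded[index] = piece

reassembled = b"".join(downloaded[i] for i in range(len(pieces)))
print(reassembled == original)  # True
```

The per-piece hash check is the key design choice: because every fragment is independently verifiable, the client doesn't need to trust any individual peer, only the metadata that supplied the hashes.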
But as with so many things, what's good for the individual may not bode well for the group. Comcast and other network service providers estimate that because of BitTorrent's popularity, as much as half of all Internet traffic at certain times consists of peer-to-peer file sharing of this type. Comcast has defended its actions against BitTorrent protocols simply as their attempt to manage their limited network capacity fairly so that other customers were not left out in the cold with impaired service.
The word "fairly" means ethics has come into the picture. This ethical question arises from a tension that was born with the Internet some two decades ago, a tension between two competing philosophies.
Call the first the egalitarian-vision philosophy: the idea that information should be free, all Internet users should have the same privileges and access, and that such ideas should be built into the technical machinery of the Internet. The founders and early users of the Internet were imbued with this philosophy, and its legacy lives on in the basic structure of Internet protocols.
The second philosophy is the commercial free-enterprise notion that the Internet is a means to make money, and you should charge whatever the traffic will bear. It was years before anyone figured out how to make money with the Internet, but with the coming of Google I think it is fair to say that some people, anyway, have managed to do that. This philosophy sees the market as the best arbiter of resource distribution and even matters of fairness. Although there are now a few coarse-grained ways of charging people who want faster Internet service more money, hardly anyone pays a surcharge that depends on how much they actually use it. That is, if you ask your service provider for high-speed Internet service, you get a monthly bill that's the same whether you never touched your computer that month, or whether you downloaded seventeen movies in ten days using BitTorrent.
The network operators argue, and with some merit, that if five percent of their customers tie up half the resources of the entire network, it is not fair to the other 95% who pay just as much but have their service degraded by the overcrowding due to BitTorrent traffic. One alternative that Time Warner Cable is reported to be trying out in Beaumont, Texas on a trial basis is "metered" Internet use. That is, if you use more than a certain bandwidth-time product, let's call it, then you pay an extra fee. Metered use flies in the face of decades of Internet tradition and egalitarian philosophy, but if such distortions of the market as those caused by BitTorrent users continue, something will have to change, and the network companies may resort to metering on a wider scale.
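To make the "bandwidth-time product" idea concrete, here is a hypothetical metered-billing calculation: a flat fee covers an included allowance, and usage beyond it incurs an overage charge. The fee, allowance, and per-gigabyte rate are invented for illustration, not the terms of Time Warner Cable's actual Beaumont trial:

```python
FLAT_FEE = 40.00       # dollars per month (hypothetical)
INCLUDED_GB = 40       # allowance, expressed as GB transferred per month
OVERAGE_PER_GB = 1.50  # dollars per GB beyond the allowance (hypothetical)

def monthly_bill(gb_used: float) -> float:
    """Flat fee plus overage for usage past the included allowance."""
    overage = max(0.0, gb_used - INCLUDED_GB)
    return round(FLAT_FEE + overage * OVERAGE_PER_GB, 2)

print(monthly_bill(5))    # light user: 40.0 -- same as today's flat rate
print(monthly_bill(140))  # heavy BitTorrent user: 40 + 100 * 1.50 = 190.0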
A curious analogy to what is happening now with BitTorrent and Comcast went on for over a century in New York City. Until the late 1980s, residential users of the Big Apple's water supply had no meters—they just paid a flat monthly fee. You can imagine how this affected the way people used water. Finally, meters were installed, and the city as a whole used 28% less water in 2006 than it did in 1979. The Internet isn't water, but like water, it is not an infinite resource, and we may have to start paying by the drink if we don't want the whole thing to break down.
Sources: Bob Fernandez of the Philadelphia Inquirer has reported extensively on the BitTorrent-Comcast dispute, and I used his articles published on Mar. 23 (http://www.philly.com/philly/business/20080323_Online_Video__Data_Tidal_Wave_.html) and Mar. 27 (http://www.philly.com/philly/business/20080327_Comcast_agreement_in_dispute_with_BitTorrent.html). The statistic about New York City water use came from the Wikipedia article "Environmental issues in New York City."
Monday, March 24, 2008
Sustainable—But At What Cost?
I read a lot of discussion these days about "sustainability," "sustainable engineering," "sustainable agriculture," and so on. Sustainability, we are told, is the key to solving everything from global warming to finding world peace. What exactly is sustainability, and what are its implications?
One of the most obvious features of today's technological economy is not sustainable: the use of fossil fuels, which means mainly oil, natural gas, and coal. However these resources were formed (and there is still a good bit of debate about that), everybody agrees it took millions of years, and we stand a fair chance of running through them in a good deal less than 0.1% of that time, say a few hundred years. So the use of fossil fuels for energy is not sustainable.
So what? If you look around for anything at all, technological or not, which has turned out to be truly sustainable over recorded history, the list is fairly short. Things like the practice of begetting and raising families, farming, the life of some cities (e.g., Damascus, which is one of the oldest cities on earth), and even a few (very few) business firms have gone on for hundreds of years or more, and show no sign of disappearing because of lack of resources. I could add the professions of doctors and lawyers, and let's not forget taxes, but not governments that levy taxes—the habit endures even though the taxing entities don't.
The proponents of sustainability want basically everything we do to be a part of that kind of list—a list of things which have long traditions going back over many cultures and governments into the past.
In an article in the current issue of The New Atlantis, Yuval Levin makes the point that certain ideas vigorously promoted by political liberals in the U. S. are actually quite conservative. Sustainability, if successfully implemented, fits right into this pattern. If all social activities, technological and otherwise, were sustainable in the sense that liberals usually mean, the activities would go on and on without having to end because of physical limitations. While certain features might change, the physical resources needed would be either renewable or permanent.
Now that is a very conservative picture, meaning that the physical essentials of technology would not change. If new materials were invented that required using something that couldn't be recycled and reused, then they wouldn't be sustainable, and you couldn't use them. Everything would be recycled, with energy coming only from the sun. (Strictly speaking, even the sun isn't sustainable, although we can count on it shining for a few billion more years.)
What if we went to such a totally sustainable economy? Some things wouldn't change much at all. Most steel is now made from recycled scrap, for instance, so that wouldn't be much of a problem.
But what about concrete? I have toyed with the idea of recycling concrete, because as far as I know, you could apply enough heat to it, drive off the water, and get back the calcium silicate that was in the original Portland cement. The trouble is that it would be vastly more expensive (and energy-intensive) to make cement from recycled concrete (laboriously hauled back from wherever it was poured to a recycling plant where huge amounts of energy would be required) than it would be simply to dig up some more limestone and sand from the ground. Ah, but limestone and sand are not renewable resources. Yes, there is enough limestone and sand to last us a long time, but if you're going to be a sustainability absolutist, you can't use anything that isn't recycled or, in principle, recyclable.
I'm pushing this idea to the limits to make a point, but the point is a valid one. Namely, some things are more easily sustainable than others, and it simply doesn't make sense to hold sustainability up as a practical goal for every technological field, unless we are willing to make some very weird and silly changes in the way we do things.
While I was on vacation last week, I toured Indian City U. S. A. outside Anadarko, Oklahoma. It's a sort of outdoor museum where seven different kinds of Native American dwellings have been constructed and preserved. It was pouring rain at the time, but that didn't stop our guide from pointing out the different features of the various structures which were, of course, made from all-natural materials: tree trunks, mud, grass, and so on. Native Americans were the first recyclers, he said, since when they were finished with a structure they just abandoned it and let it return to Nature.
Though I didn't say anything at the time, I had a big "Yes, but..." in mind. Although estimates of how many people lived in what is now called North and South America before 1492 vary from 8 million to over 100 million, the figure is certainly less than the approximately 900 million people that the New World harbors today. And the Americas are some of the least densely populated regions of the developed world. If we all went back to living the way the first Native Americans did, there is no way that we would all be able to survive, even if we all suddenly acquired the hunting, gathering, and rudimentary agricultural skills necessary for such a life. And if we managed somehow to eke out a living, few of us would enjoy rising at dawn, doing back-breaking manual labor all day, and retiring at dusk only to do it all over again the next morning.
The only time when something like this has been tried on a massive scale recently was the Great Cultural Revolution under Mao Tse-tung in the People's Republic of China, from 1966 to 1976. Millions of intellectuals and other suspect persons, including most of the faculty members at all Chinese universities, were summarily hauled off to the countryside for a little bucolic "re-education" that lasted seven or eight years. I have known citizens of that country who lived through that period, and they tell me that it set back their lives a decade or more, and the progress of the country by a generation. But it was certainly sustainable, in the sense that they were still living and probably consuming fewer resources than they would have in the cities.
Few if any of the proponents of sustainability have in mind a radical, total shift to something like that. Or if they do, they're not talking about it openly. I favor a reasoned, appropriate move toward more nearly sustainable technology when it makes economic sense, when its adoption won't cause undue suffering or disruption, and when it leads to more human thriving than formerly. But a draconian swift transition to a totally sustainable economy would be in most respects indistinguishable from a worldwide depression. And I hope we don't get to that point any time soon.
Sources: Yuval Levin's article "Science and the Left" appears in the Winter 2008 edition of The New Atlantis.
Saturday, March 15, 2008
Robot Rats and SARs for PEPs
Sometimes things happen fast in politics. On Sunday morning, March 9, Eliot Spitzer woke up to the beginning of his 63rd week in office as Governor of New York State, an office which served as a stepping stone to the White House for his predecessors Theodore and Franklin D. Roosevelt. He had an apparently unstained reputation for fighting corruption in high places, which he had earned during his seven years as New York State's Attorney General, going after everything from Enron-type financial scandals to prostitution rings.
Two days from now—on Monday, March 17—he will hand over the keys of office and become Private Citizen Spitzer. Earlier this week, the New York Times revealed that Spitzer had been a customer of a prostitution ring that was under federal investigation. The evidence came from a computer scan of Spitzer's banking transactions—a robot rat, if you will. The political firestorm that the news report touched off must have convinced him that trying to stay in office was an exercise in futility. On March 12, he announced his resignation. Ironies abound in a situation like this, but a twist of special interest to the technical community is that Spitzer was caught by software he had himself encouraged banks to use during his years as Attorney General. How did it work?
Banks have ethical obligations both to their customers and to the governments in whose jurisdictions they operate. Customers expect banks to keep their collective mouths shut about private financial matters, and by and large, banks are pretty good at doing this. But law enforcement officials realized long ago that banks are where the money is, including ill-gotten gains from enterprises such as drug dealing and prostitution. That is why in 1970, Congress passed the Bank Secrecy Act. This act is why you have to fill out a form with some identifying information any time you engage your bank in a single cash transaction of more than $10,000.
Criminals are as adaptable as anybody, and soon they learned not to trip that $10,000 wire by breaking up transactions into smaller amounts. To plug this leak in the dike, Congress enacted the Money Laundering Control Act of 1986. Besides asking banks to report any transactions over $5,000 that looked like they were evasions of the $10,000 limit, it removed liability for over-reporting. This meant that if you got annoyed at being called by the FBI for a series of legitimate but large financial transactions, you could no longer sue your bank for falsely tattling on you.
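To make the idea concrete, here is a minimal sketch of how a structuring check might work. The dollar triggers, the seven-day window, and the function names are all invented for illustration; they are not the actual regulatory rules or any real bank's detection logic.

```python
from datetime import datetime, timedelta

# Illustrative thresholds only -- not the real regulatory figures.
REPORT_TRIGGER = 10_000   # single-transaction reporting trigger
REVIEW_TRIGGER = 5_000    # amount worth a second look
WINDOW = timedelta(days=7)

def flag_structuring(transactions):
    """transactions: list of (datetime, amount) pairs, any order.

    Flags sub-threshold transactions whose nearby sub-threshold
    neighbors sum past the reporting trigger -- the classic pattern
    of breaking one big deposit into several smaller ones.
    """
    txns = sorted(transactions)
    flagged = []
    for when, amount in txns:
        if amount >= REPORT_TRIGGER:
            continue  # already reported on the standard form
        if amount < REVIEW_TRIGGER:
            continue  # too small to bother reviewing
        # sum all sub-threshold transactions inside the window
        total = sum(a for t, a in txns
                    if abs(t - when) <= WINDOW and a < REPORT_TRIGGER)
        if total > REPORT_TRIGGER:
            flagged.append((when, amount, total))
    return flagged
```

Two $6,000 and $7,000 deposits a couple of days apart would each be flagged, since together they exceed the $10,000 trigger; a single $2,000 deposit would pass unnoticed.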
As time went on, $5,000 became less and less money in real terms, meaning that without doing a thing, Congress gradually lowered the threshold on what banks had to report. After a few banks got in trouble for under-reporting and computerized banking became nearly universal, the banks had the bright idea of automatically reporting everything that looked suspicious. But first they had to tell the computers what "looking suspicious" meant.
One factor they loaded into their software, believe it or not, was the degree to which their customers are "politically exposed persons" (PEPs for short). If you are a governor, senator, UN delegate, or other personage whose position makes you more likely either to be the victim of a corrupt action (e.g., blackmail) or perhaps the perpetrator, you get a high PEP rating, and the threshold for making the computer spit out summaries of fishy-looking activity is accordingly set very low. Spitzer, needless to say, was a PEP, and when several large transactions to one firm showed up on a report, the bank decided to file a Suspicious Activity Report (SAR for short) with the IRS.
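The risk-weighting idea can be sketched in a few lines. The rating scale, base threshold, and scaling formula below are hypothetical stand-ins for whatever proprietary scoring the banks actually use; the point is only that the same dollar amount that passes unnoticed for an ordinary customer can trip a report for a highly exposed one.

```python
# Hypothetical risk-weighted SAR trigger. The base threshold, the
# PEP rating scale (0.0 = ordinary customer, 1.0 = highly exposed),
# and the linear scaling are all invented for illustration.
BASE_SAR_THRESHOLD = 10_000

def sar_threshold(pep_rating):
    # a rating of 1.0 cuts the reporting bar to a tenth of the base
    return BASE_SAR_THRESHOLD * (1.0 - 0.9 * pep_rating)

def should_file_sar(amount, pep_rating):
    return amount >= sar_threshold(pep_rating)
```

Under this toy formula, a $4,000 transfer by an ordinary customer stays below the $10,000 bar, while the same transfer by a maximally exposed official crosses the reduced $1,000 bar and generates a report.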
At this point, humans got involved, but they could not have done their jobs without the aid of large software programs that inspect millions if not billions of transactions every year. Initially the investigators thought the governor might be the victim of blackmail, but when they found out the firm was a front for a prostitution ring, things took a different turn altogether.
Computers don't join political parties, but the people who program and operate them do. This story shows how technology can help law enforcement with investigations that in times past would have been impossible because of the sheer volume of data to inspect. Back in the days when the most advanced technology in a bank was the Friden calculating machine sitting on the comptroller's desk, a person's eyes were the only way to inspect records. That limited the nature and scope of investigations, although it also probably made it easier to do things informally that were strictly against the law, as favors both to criminals and to policemen and detectives. Today, the same criteria can be applied impartially and exactly to millions of accounts, but at some point human judgment always comes into play. Once the computers provided the information to investigators, the investigators had to decide what to do with it.
And it was human judgment, however flawed, that made Governor Spitzer think that maybe he would escape detection of his expensive dalliances. Perhaps he was unconsciously hewing to an outmoded habit he developed before his own actions helped to tighten the screws on money launderers and others who do not care for banks to report their transactions to the government. Whatever the reason, this episode shows that the power to analyze large amounts of private computerized data can make or break very influential people. And without software engineers, no one would have that power.
Sources: A good summary of the laws and processes that led investigators to Spitzer's transactions is at http://firedoglake.com/2008/03/12/money-laundering-suspicious-activities-reports-and-structuring/. A Newsday account of how Spitzer's bank discovered the specific transactions is at http://www.newsday.com/news/local/state/ny-stspitzerbank0312,0,4637246.story.
Tuesday, March 11, 2008
Engineering the End of Malaria
In my Feb. 25 entry, I used the idea of wiping out malaria as an example of what might be done with "a few billion dollars" that would otherwise go toward dealing with global warming. I will admit that I simply pulled that number out of the air. Since then I have learned that while eliminating malaria is something that people as wealthy as Bill and Melinda Gates have tried to do, it is by no means a simple or straightforward task. But engineers may be able to help in some ways you wouldn't expect.
As you probably know, people contract malaria from the bite of a certain kind of mosquito that is infected with the protozoan parasite that causes the disease. The parasite hides inside liver cells or red blood cells in its human host, which is one reason that no one has devised an effective vaccine for the disease. Drugs are available to prevent it, but you have to take them all the time, sick or well, and such prophylactic treatment is too expensive for many residents of areas such as Africa where malaria is endemic. So many anti-malaria campaigns in the past have concentrated on eliminating the animal host: the Anopheles mosquito that carries the malaria parasite.
The New York Times recently carried a report about whether malaria can be eliminated as smallpox has been. It seems that the consensus of public-health experts is that you can markedly reduce the incidence of malaria through spraying mosquito-infested areas with insecticide, but absolute elimination is an elusive goal at best. In Sri Lanka, for example, systematic spraying programs helped reduce the number of malaria cases from over a million in 1955 to only 18 in 1963. But the government cut back its programs, and malaria came back, reaching a level of over half a million cases in 1968. That lesson finally learned, Sri Lanka started spraying again and hasn't stopped, and the annual rate of malaria cases is now down to a few thousand.
At a 2007 malaria conference, Bill and Melinda Gates challenged public health leaders around the world to eradicate malaria altogether. Their foundation has already spent over a billion dollars fighting malaria, but clearly more than just money will be needed.
One commentator in Scientific American has pointed out that the free mosquito-net programs sponsored by many governments may not be as effective as they could be. Here is one area where engineers can get involved. The classic kind of mosquito net hangs from a string tied to the ceiling and drapes down to the edges of the mattress, protecting the sleeper from bites at night, when the Anopheles mosquito is active. This is fine as long as you have a mattress for the net to tuck under. But in thousands of villages where a mattress for every family member would be an unheard-of luxury, young children sleep on the ground. There are rectangular frame-type mosquito nets available that will work in this situation, but they aren't as convenient as the single-string type.
This little net problem is an example of how complex the malaria issue is. Even if engineers devised a new type of net that was ideal for the poorest residents, there are a lot of problems that remain. How do you get this net into the hands of those who can use it? How do you persuade them that using it will keep their children healthier? Who pays for all of this, especially if the new net costs more than the old ineffective ones?
In times past, some engineers would have said these issues were not engineering problems. But organizations like Engineers Without Borders (EWB) realize that the hardware or software part of a solution is only a part, and often not the most important part. An effective technical solution to any problem also has to factor in economics, motivation, distribution, education, and so on. EWB is an organization dedicated to providing engineering solutions for disadvantaged communities through sustainable engineering. Through its many chapters at universities and colleges with engineering schools, it recruits student volunteers who get a holistic picture not just of a technical problem, but of its cultural and social context as well. Though I never had such an experience in my student days, I think I might have been a very different kind of engineer if I had.
Only time will tell whether the wealth of the Gates Foundation, the ingenuity of engineers, medical researchers, and public health officials, and the willingness of affected communities will converge to defeat that old tropical enemy, malaria. For the reasons I've discussed, it is a much harder task than the smallpox battle. But I wish the best for everyone involved.
Sources: The New York Times article on whether malaria can be defeated was carried in the Mar. 4, 2008 online edition at http://www.nytimes.com/2008/03/04/health/04mala.html. Scientific American's article on mosquito-net engineering appeared in the January 2008 issue, available online at http://www.sciam.com/article.cfm?id=a-better-mosquito-net. And Engineers Without Borders-International has a website at http://www.ewb-international.org.
Monday, March 03, 2008
Locked-In Profits or Service to the Downtrodden?
Suppose you're the wife of a man who got arrested in Oakland, California. You weren't with him at the time, and all you know is the bare fact that he was arrested. Until recently, your only alternative was to call the Alameda County public information number, work your way through a phone tree, and hope there would be a live person at the other end who could tell you something. Sometimes there was and sometimes there wasn't. But now, thanks to the initiative of some staff in the Alameda County Information Technology department, there is an Inmate Locator on the county's website. If you have the person's full name, or even if all you know is that they were booked in the last twenty-four hours, you can get online and see identifying information, the "custody status," and which jail they're in. Of course, you have to have a computer and a high-speed internet connection to do this efficiently, but doesn't everybody?
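The article describes two kinds of lookup the Locator supports: a search by the person's full name, and a listing of everyone booked in the last twenty-four hours. A rough sketch of those two queries might look like the following; the record fields and sample data are invented for illustration and are not Alameda County's actual schema.

```python
from datetime import datetime, timedelta

# Invented sample records -- not real people or real bookings.
RECORDS = [
    {"name": "John Doe", "booked": datetime(2008, 3, 3, 2, 15),
     "custody_status": "in custody", "facility": "Santa Rita Jail"},
    {"name": "Richard Roe", "booked": datetime(2008, 2, 28, 19, 40),
     "custody_status": "released", "facility": "Glenn E. Dyer"},
]

def find_by_name(full_name, records=RECORDS):
    """Case-insensitive search on the person's full name."""
    return [r for r in records if r["name"].lower() == full_name.lower()]

def booked_last_24h(now, records=RECORDS):
    """Everyone booked within the last twenty-four hours."""
    cutoff = now - timedelta(hours=24)
    return [r for r in records if r["booked"] >= cutoff]
```

Either query returns the identifying information, custody status, and jail the article mentions, without anyone having to navigate a phone tree.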
Despite the drawback of needing a computer to use it, this little advance in IT touches on a subject that I have seldom seen addressed in the engineering ethics literature. What special obligations or ethical issues are related to engineering as it applies to prisoners and jails? And in particular, what should we say about the recent trend toward privatization in U. S. prisons?
You may have read that the United States has both the highest documented rate of incarceration in the world (over 700 per 100,000 population) and the largest absolute number of people behind bars (over 2 million, plus another 5 million or so on probation or parole). The reasons for this are worth going into, but for now let's just say they're a given. All these people have to be housed, fed, treated for medical conditions when necessary, shipped around, and maybe allowed some education and communication privileges. In addition, there are the families and friends of prisoners who have certain rights and privileges with regard to those behind bars. As the Alameda County IT folks have shown, engineering can benefit both the prisoners and their friends and relatives, in an entirely legal way (I'm not talking about high-tech jailbreaks here, which I suppose would be another way engineering could enter the picture).
I think it's significant that the people who came up with this idea were government employees (the article describing the system did not state otherwise). Along with the boom in prison populations has come a related boom in private prisons and companies that operate them. One of the largest, the Corrections Corporation of America, has gotten some coverage in this week's New Yorker magazine for its less-than-ideal operation of an illegal-immigrant holding facility outside Taylor, Texas, just up the road from my university here in San Marcos.
Privatization has been sold as a kind of universal solution to every government cost problem, but there are limits to what it can do. Somehow I suspect if Alameda County had outsourced its jail operations to a private firm, that firm would not have hired five web developers to come up with the Inmate Locator. Abuses can happen both in private and in public organizations, but the incentives are different.
As an employee of a state university, I view the advantages of well-run government-operated services as chiefly these: (1) Stability: the turnover in government employment is much lower than in comparable private operations; (2) Esprit de corps: in well-run government operations, public-spiritedness can foster a selfless dedication to the needs of those served; (3) Relative freedom from cost-squeezing pressures: assuming the management makes a good case to the appropriate legislature, expenditures can be planned and justified without the concern that a lower bidder will come along and end the whole enterprise.
I'm well aware that a critic could come along and turn each of those arguments on its head. Stability can mean that once a goof-off gets a government job, he's set for life. Private companies can develop esprit de corps too, and cost-squeezing pressures can happen in government as well as private industry.
But I would point out a philosophical difference between the two approaches. The bottom line of government service is just that: service. Ideally, the public servant is as dedicated to his or her clients as the nuns of centuries ago who founded and staffed the first hospitals. At least, there is no philosophical conflict between having a totally dedicated public servant and the overall goals of the organization.
With private companies, especially those which are joint-stock (publicly owned) firms, the fundamental philosophy is different. If a company doesn't make money for more than a certain length of time, it should disappear, and often does (despite evidence such as General Motors to the contrary). Companies can provide good services, but there is a built-in conflict between the ultimate raison d'être of a company, which is making money for the owners, and service to its customers or clients, at least to the extent that improvements in the service or product make less profit available to the owners.
This is not to say that all corporate enterprise is morally suspect—absolutely not. But prisoners are a special kind of client, and are treated specially along with children, the elderly, and medical patients in a number of ethical contexts such as the rules for ethical conduct of research studies. Unlike a customer at a hardware store, if a prisoner doesn't like the service he's getting, he can't just walk away and go to another prison. I think that is the main reason why for nearly the entire history of prisons in the U. S., they have been exclusively a government-run operation. Maybe the government didn't do that good a job, but at least there was a way, in principle, for abuses in government-run prisons to be corrected through the democratic process. Private companies that run prisons can and do claim that vital information about their operations is a trade secret, and therefore not available for public access, at least not without a lengthy and often unsuccessful series of inquiries under the Freedom of Information Act. This kind of secrecy can hide abuses and wrongdoing that would be harder to hide in a public setting.
So what is the bottom line here? First, kudos to the IT folks in the Alameda County Sheriff's Office, who make it possible for the over 100 inmates booked each 24 hours to be found by their relatives or friends much more easily than before. Second, any time an engineer does something related to prisons or prisoners, he or she should remember that prisoners are not just any old client. They have special rights and privileges. Yes, many of them have done something wrong. But the fact that we are a country of laws means that we need to hold those laws in high regard, especially when we deal with people who may have broken them.
Sources: The article on Inmate Finder appeared in the online issue of the San Francisco Examiner for Mar. 3, 2008 at http://extra.examiner.com/linker/?url=http%3A%2F%2Fwww%2Einsidebayarea%2Ecom%2Fci%5F8435580%3Fsource%3Drss. The New Yorker article by Margaret Talbot on CCA's operation in Taylor is entitled "Lost Children," on p. 58 of the Mar. 3, 2008 edition. Statistics on U. S. prisons were found at the Wikipedia article "Prisons in the United States."
Despite the drawback of needing a computer to use it, this little advance in IT touches on a subject that I have seldom seen addressed in the engineering ethics literature. What special obligations or ethical issues are related to engineering as it applies to prisoners and jails? And in particular, what should we say about the recent trend toward privatization in U. S. prisons?
You may have read that the United States has the both the highest documented rate of incarceration in the world (over 700 per 100,000 population) and the largest absolute number of people behind bars (over 2 million, plus another 5 million or so on probation or parole). The reasons for this are worth going into, but for now let's just say they're a given. All these people have to be housed, fed, treated for medical conditions when necessary, shipped around, and maybe allowed some education and communication privileges. In addition, there are the families and friends of prisoners who have certain rights and privileges with regard to those behind bars. As the Alameda County IT folks have shown, engineering can benefit both the prisoners and their friends and relatives, in an entirely legal way (I'm not talking about high-tech jailbreaks here, which I suppose would be another way engineering could enter the picture).
I think it's significant that the people who came up with this idea were government employees (the article describing the system did not state otherwise). Along with the boom in prison populations has come a related boom in private prisons and companies that operate them. One of the largest, the Corrections Corporation of America, has gotten some coverage in this week's New Yorker magazine for its less-than-ideal operation of an illegal-immigrant holding facility outside Taylor, Texas, just up the road from my university here in San Marcos.
Privatization has been sold as a kind of universal solution to every government cost problem, but there are limits to what it can do. Somehow I suspect if Alameda County had outsourced its jail operations to a private firm, that firm would not have hired five web developers to come up with the Inmate Locator. Abuses can happen both in private and in public organizations, but the incentives are different.
As an employee of a state university, I view the advantages of well-run government-operated services as chiefly these: (1) Stability---the turnover in government employment is much lower than in comparable private operations; (2) Esprit de corps---in well-run government operations, a public-spiritedness can foster a selfless dedication to the needs of those serviced; (3) Relative lack of cost-squeezing pressures---assuming the management makes a good case to the appropriate legislature, expenditures can be planned and justified without concern that they will risk ending the whole enterprise if a lower bidder comes along.
I'm well aware that a critic could come along and turn each of those arguments on its head. Stability can mean that once a goof-off gets a government job, he's set for life. Private companies can develop esprit de corps too, and cost-squeezing pressures can happen in government as well as private industry.
But I would point out a philosophical difference between the two approaches. The bottom line of government service is just that: service. Ideally, the public servant is as dedicated to his or her clients as the nuns of centuries ago who founded and staffed the first hospitals. At least, there is no philosophical conflict between having a totally dedicated public servant and the overall goals of the organization.
With private companies, especially those which are joint-stock (publicly owned) firms, the fundamental philosophy is different. If a company doesn't make money for more than a certain length of time, it should disappear, and often does (despite counterexamples such as General Motors). Companies can provide good services, but there is a built-in conflict between the ultimate raison d'être of a company, which is making money for the owners, and service to its customers or clients, at least to the extent that improvements in the service or product leave less profit for the owners.
This is not to say that all corporate enterprise is morally suspect—absolutely not. But prisoners are a special kind of client, and are treated specially along with children, the elderly, and medical patients in a number of ethical contexts such as the rules for ethical conduct of research studies. Unlike a customer at a hardware store, if a prisoner doesn't like the service he's getting, he can't just walk away and go to another prison. I think that is the main reason why for nearly the entire history of prisons in the U. S., they have been exclusively a government-run operation. Maybe the government didn't do that good a job, but at least there was a way, in principle, for abuses in government-run prisons to be corrected through the democratic process. Private companies that run prisons can and do claim that vital information about their operations is a trade secret, and therefore not available for public access, at least not without a lengthy and often unsuccessful series of inquiries under the Freedom of Information Act. This kind of secrecy can hide abuses and wrongdoing that would be harder to hide in a public setting.
So what is the bottom line here? First, kudos to the IT folks in the Alameda County Sheriff's Office, who made it possible for the more than 100 inmates booked every 24 hours to be found by their relatives or friends much more easily than before. Second, any time an engineer does something related to prisons or prisoners, he or she should remember that prisoners are not just any old client. They have special rights and privileges. Yes, many of them have done something wrong. But the fact that we are a country of laws means that we need to hold those laws in high regard, especially when we deal with people who may have broken them.
Sources: The article on Inmate Finder appeared in the online issue of the San Francisco Examiner for Mar. 3, 2008 at http://extra.examiner.com/linker/?url=http%3A%2F%2Fwww%2Einsidebayarea%2Ecom%2Fci%5F8435580%3Fsource%3Drss. The New Yorker article by Margaret Talbot on CCA's operation in Taylor is entitled "Lost Children," on p. 58 of the Mar. 3, 2008 edition. Statistics on U. S. prisons were found at the Wikipedia article "Prisons in the United States."
Monday, February 25, 2008
Discounting Global Warming, Revisited
Running this blog is a pretty one-sided deal most of the time. Every week I send out some thoughts into the blogosphere, and rarely do I get a response. But last week's post about applying the economics of discounting to global warming got not just one, but two responses, both making similar criticisms. For this blog, that amounts to a storm of controversy, and I can't resist responding. But first, let me summarize the criticisms.
The first post (to be found under Nov. 19, 2007's "Yahoo Pays. . . ", to which it refers) accuses me of being either "sloppy or inconsistent." Here is some of what it says: "In the post about Yahoo, you get wrought up about the company not doing more to protect their [the Chinese citizens'] identity for engaging in free speech, but in 'Should we discount global warming?' you advocate using a discount rate even though some of that $50 billion is lost lives due to less reliable weather, increased flooding, and more famine. (NOT jail time, death.) . . . . So should Yahoo continue its economic discounting, knowing that the occasional customer is jailed; or should the Yahoo-wannabes stop counting human suffering in dollars?"
The second post responding to last week's blog, signed "Cousin Mike" (yes, he is my cousin) says this, among other things: "A courtroom-drama movie once depicted an auto manufacturer as having made a conscious decision not to fix a problem with their brakes because they calculated economically that it was less expensive to pay off claims to people killed by the brake failures than to fix the flaw. The movie-makers obviously wanted the audience to view such conduct as morally odious, and I agree . . . . I know that if we really thought every life was infinitely valuable, we'd build autos like bumper cars, incapable of a fatal crash . . . . But it still gives me chills to think that the economically correct engineering solution to global warming is to leave the brakes flawed 'cause it'll cost too much money to fix."
The point these respondents are making, it seems to me, is that while I seem to hold up certain principles as absolutes (e.g., freedom rather than jail time for Chinese users of Yahoo), when I propose discounting global warming I appear to be throwing away all these fine moral distinctions in favor of a cold economic calculation.
Allow me to differ.
Imagine a set of scales, like Lady Liberty (the gal with the blindfold) is often portrayed as holding up. If I were to do an editorial cartoon summarizing the criticisms above, it would show a pile of currency and gold coins on one pan of the scales, pulling it down, as a crowd of impoverished coastal fishermen drown in a miniature version of Hurricane Katrina on the rising pan. (You see why I don't do editorial cartoons for a living.) It looks like I'm cynically trading off money for lives. But that was not my intention.
When economic analyses are used on a large-scale problem such as global warming over a time scale of decades, the dollars involved are not exactly the same kind of thing that you pull out of your wallet. They are a symbol. Well, all money is symbolic in one sense, but what I mean is, the dollars in the global-warming discount calculation are a placeholder for the energy and wealth of nations. It isn't just dollars versus lives. It's lives versus lives, and dollars versus dollars, and Statues of Liberty versus whatever unimaginable architectural achievements the next century might bring, if we don't first wreck the world's economy with a misinformed economic dictatorship whose counterproductive effects could cost lives as well.
You want to talk lives? I'll talk lives. Malaria kills between one and three million people every year, most of them poor African youths and children, and debilitates hundreds of millions more. It is entirely possible to treat a population with prophylactic anti-malarial drugs so as to reduce the incidence of malaria to near zero. Doing so would not only eliminate an important direct cause of death, but would result in the equivalent of billions of dollars of economic stimulus to the areas affected because of the increased productivity of those who would no longer contract this disease.
I don't know what it would cost to wipe out malaria worldwide, but something similar has been done at least once: we eradicated smallpox. Say it would cost a few billion dollars. Now that few billion dollars is money that cannot be spent on reducing global warming. If you like, you can consider it as part of the money we could spend now on things other than global warming, if we buy into the economic-discounting idea that there is a reasonable and finite amount of money we should spend on global warming, and no more. And that money not spent on global warming, but spent on eradicating malaria, will absolutely save lives.
My point is, there are lives on both sides of the equation, not just dollars versus lives. What we're really talking about is the grand question of how to expend our current capital resources—natural, monetary, and most of all, human—and how much of them to expend on efforts to reduce global warming.
I have no objections to a calm, rational approach to reducing our use of fossil fuels. I think it's terrible that we fight over that black liquid that comes out of the ground in places that are inconvenient to get at, and I would love to see a coordinated global effort devoted to developing renewable energy sources that would eventually replace most of what we now use petroleum for. But the critical question is how this is to be done. I was listening to a discussion on the BBC the other morning about how air travel contributes to global warming. Both sides agreed that we had to quit burning fossil fuels to fly. To me, that poses a whole series of awkward questions. Okay, if we quit flying, how are we going to sustain the global economy? And if we keep flying without fossil fuels, how are we going to do it? The only battery-powered airplanes I know of could carry maybe a mouse, at a strain.
We saw what a hit the U. S. economy took with just a slight reduction in air travel after 9/11. Imagine what would happen to the world economy if somehow the U. N. passed a binding resolution to reduce air travel by 80% or something, and everybody stuck to it. The Great Depression in the U. S. is only a distant memory, but economic disasters are a lot more real to residents of many other countries which have suffered them more recently. If some ill-considered global-warming measure ended up putting the world economy in the tank for a few years, do you think that's not going to cost lives? And do you think the poorest and most vulnerable people won't pay the price in lost jobs and starvation? Think again.
In large measure, we are discussing imponderables, and that's one reason why talk about global warming inspires such overwrought emotions on both sides. The fact is, nobody knows exactly what would happen if we don't do anything about it, and nobody can guarantee that any given measure will avert the spectrum of catastrophes that Al Gore and company have laid out for our viewing pleasure. Like many things in life, it is a crapshoot. But we can definitely say what wrecking economies with arbitrary regulations can do, and whatever is done, we should avoid doing that to the extent possible and consistent with a measured approach toward the problem of global warming.
Sources: Statistics on malaria can be found at the Wikipedia entry under "Malaria."
Monday, February 18, 2008
Should We Discount Global Warming?
No, by "discount," I don't mean "ignore altogether." What I mean is what bankers and economists mean by the word. The discount rate is an assumed interest rate that is used to make economic decisions, as anyone who has taken engineering economics will recall. And the funny thing is, although discussions of global warming invariably deal with matters fifty or a hundred years in the future, hardly anyone applies the simple economics of discount rates to the problem. When you do, the result is a surprise.
Gary S. Becker is a Nobel-Prize-winning economist who thinks any discussion of global warming should factor in a reasonable discount rate. Here is his argument in a nutshell. Suppose, for the sake of argument, that if we do nothing about global warming, fifty years from now it will cause $2 trillion of damage (technically termed "utility costs" in terms of lost income from flooded coastlands, etc.). It turns out that if you roll the tape of time back to 2008, you could pay for that $2 trillion by investing only $500 billion at a rate of return of 3 percent, which is pretty easy to do (assuming you have the $500 billion in the first place). Becker makes the point that if we went ahead now with most of the more radical proposals for doing something about global warming—reducing carbon emissions by 70%, putting big restrictions on fossil-fuel-burning technologies, and so on—they would cost a lot more than $500 billion in the next few years. If these restrictions cost, say, $1 trillion, we are being foolish by spending all that money now to avert something we could offset with half that amount.
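Becker's nutshell numbers are easy to verify with the standard present-value formula from any engineering-economics course. A minimal sketch in Python (the $2 trillion, 3 percent, and fifty-year figures come from his example; the function name is just mine):

```python
# Present value of a future cost under a constant annual discount rate:
# PV = future_cost / (1 + rate) ** years
def present_value(future_cost, rate, years):
    return future_cost / (1 + rate) ** years

# Becker's example: $2 trillion of damage fifty years from now, at 3 percent
pv = present_value(2.0e12, rate=0.03, years=50)
print(f"${pv / 1e9:.0f} billion")  # about $456 billion, i.e. Becker's "only $500 billion"
```

Run forward instead of backward, the same arithmetic says $500 billion invested today at 3 percent grows to roughly $2.2 trillion in fifty years, which is where the offset comes from.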
This is not an argument to do nothing. On the contrary, it is one of the few arguments I've seen on the subject that requires us to come up with some quantitative information in order to make a rational economic decision, which is what engineers do all the time. The usual approach used by advocates of extreme measures is to paint a picture of the end of civilization as we know it if we don't go green 24/7 and never allow the problem to leave our consciousness for the rest of our lives. Put more quantitatively, these folks use a discount rate of zero, which I suppose is a reasonable one if you assume that the alternative is either peace and security on the one hand by doing everything they advocate, or death to humanity on the other. If a mugger walks up to you in a dark alley, puts a knife to your ribs, and mutters, "Your money or your life," you're not likely to deliberate a long time before handing over all your cash, not just some of it.
But implicit in Becker's economic argument is the assumption that, as damaging as global warming and its consequences might be, it will not be the equivalent of a giant meteor smashing the earth to bits. Its effects will be gradual, not sudden; spotty, not universally bad everywhere; and will be quantifiable in economic terms. Anything with a finite future cost can be discounted using standard economic assumptions. The rate of 3 percent that Becker uses is quite conservative—many investments in physical capital pay rates of return much higher than that. What Becker is saying is that we shouldn't stop all economic growth and divert all our resources to fighting global warming, because we're wasting resources that would pay off better if invested in other things. Wise investment in future economic growth, which over the last century has raised billions of people from poverty into something approaching a middle class, can continue to bring prosperity to future generations even in the face of problems like global warming.
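To see just how much hangs on the choice of rate, it helps to watch the present value of Becker's $2 trillion shrink as the assumed rate rises; the zero rate is the doomsday assumption described above, and the 6 percent case is my own illustrative figure for a more aggressive return on physical capital:

```python
# Present value of $2 trillion in damage, fifty years out, at several discount rates.
# A zero rate leaves the full bill on today's doorstep; higher rates shrink it fast.
future_cost, years = 2.0e12, 50
results = {}
for rate in (0.00, 0.03, 0.06):
    results[rate] = future_cost / (1 + rate) ** years
    print(f"{rate:.0%}: ${results[rate] / 1e9:,.0f} billion")
```

Doubling the rate from 3 to 6 percent cuts the justified present expenditure roughly fourfold, which is why the debate over the "right" discount rate is really a debate over how much to spend now.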
Economics isn't everything, of course. If we took a poll to find out what Americans would pay to keep the Statue of Liberty from submerging (which would also flood most of the East and West Coasts), the answer would probably come out close to "whatever it takes." But engineering is about economics as much as it is about technology. And any analysis of global warming that makes unrealistic economic assumptions is simply bad engineering, whatever else you might call it.
Sources: Becker makes his argument in an essay in the Hoover Digest (2007), no. 2, published by the Hoover Institution, at http://www.hoover.org/publications/digest/7465817.html.
Monday, February 11, 2008
The Price of Life: Industrial Accidents Then and Now
The refining giant British Petroleum has been in the news again lately, and not in a good way. At the firm's Texas City, Texas refinery on Jan. 14, a worker named William Gracia died when a lid blew off a water filtration vessel during a startup procedure and hit him in the head. The day before that, BP's board of directors fired its CEO, Lord Browne of Madingley, not quite three years after an explosion at the same refinery killed 15 people and injured 170 in the worst U. S. industrial accident in a decade. Although reasons are not usually given when a CEO is dismissed, one can speculate that the disaster had something to do with Lord Browne's departure—that and the $1.6 billion the firm paid out to settle some 4,000 lawsuits, and the $1 billion repair bill to get the refinery operating again. The $22 billion in profits that BP made in 2006 puts these numbers into perspective. Or does it?
What is a human life worth? The time was (and still is, unfortunately, in a few places) when a human life was a market commodity like any other. Fortunately, the human race has seen fit to abolish slavery nearly everywhere, but that doesn't mean that you can't figure out what a human life is worth in certain contexts.
Look at the BP situation from an economic point of view. I'm not saying that BP managers thought this way, but one way of looking at it is this. Okay, in 2005 something happened that ended up costing us an additional $2.6 billion. We might have been able to avoid that accident by spending more time and money on safety regulations, training, equipment, and so on. But who knows how much of that is enough? If we'd spent more than $2.6 billion extra on such programs, we would have ended up cutting into our 2006 profits of $22 billion. So how much safety is enough? And at what price?
Another way of looking at it is to ask how much BP spent on settlements per worker injured or killed: an average of about $8.6 million each, it turns out. Much, if not most, of that went to lawyers: BP's lawyers, the contingency-fee lawyers that workers without other financial resources have to go to in situations like this, and the miscellaneous lawyers, experts, and other highly paid professionals who tend to accumulate around disasters like flies around honey. And some of it probably went to the injured and the families of those who died. Is that what a worker's life is worth? At least in this case, it turned out to be that way for BP.
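That per-casualty figure is a back-of-envelope calculation anyone can reproduce from the numbers already quoted: $1.6 billion in settlements spread over the 15 workers killed and 170 injured in the 2005 explosion.

```python
# Average settlement per worker killed or injured in the 2005 BP Texas City blast
total_settlements = 1.6e9   # dollars paid to settle some 4,000 lawsuits
casualties = 15 + 170       # workers killed plus workers injured
average = total_settlements / casualties
print(f"${average / 1e6:.1f} million per person")  # about $8.6 million
```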
It's interesting to contrast the way these things are handled today with the way similar casualties were handled in the 1800s. The nineteenth century was an era of ambitious construction projects: bridges, dams, tunnels. Everybody knows about the Brooklyn Bridge. You may even know that its original designer, John Roebling, had his foot crushed while doing surveys for the bridge, and died of the resulting tetanus infection. His son Washington took over, but after going into an underwater high-pressure caisson during construction of the foundations, he succumbed to decompression sickness and became an invalid. His wife Emily taught herself enough engineering to serve as his chief assistant during the rest of the bridge's 13-year construction. Although many hundreds of workers were employed on the site, the project had a relatively good safety record for the time: only 27 people died, an average of about two a year.
On the other hand, the Hoosac Tunnel project, otherwise known as the "Bloody Pit", cost 193 lives to build. This 4.75-mile railroad tunnel in Western Massachusetts served as a test bed for modern construction techniques using pneumatic drills and nitroglycerine. It was completed in 1873, three years after the Brooklyn Bridge project began.
In those days, construction-worker fatalities were regarded as regrettable, but no one appears to have thought much the worse of the companies or engineers responsible if a few workers died on the job. The general attitude was that a worker taking on a job knew it was dangerous, and it was his lookout to stay alive.
Thomas Edison was (and is) one of my heroes, but in many ways Edison held some very typical 19th-century attitudes about the safety of his employees. In a new biography of Edison by Randall Stross, I read how Edison sent people far and wide in the summer of 1880 to search for bamboo that might have fibers suitable for incandescent-lamp filaments. One of the less popular members of his lab staff was named John Segredor, a hot-tempered man who had once responded to a sarcastic remark from another staff worker by going to his rooming house and getting a gun. Edison sent Segredor on an odyssey first to Georgia, then Florida, and finally to Cuba in search of different varieties of bamboo. Three days after his arrival in Cuba, Segredor died of yellow fever. In a private letter about the matter, Edison blamed Segredor for his own death, saying he was careless about drinking cold drinks in hot places "and this I doubt not caused his death." No lawsuits there, it seems.
Ideally, nobody would die in industrial accidents, or any other kind, for that matter. Considering the much larger number of people engaged in industry today compared to a hundred years ago, it is likely that modern accident and fatality rates are much lower than comparable rates in the 1880s. And at least in the U. S., our attitudes are much harsher nowadays toward the companies and executives involved in industrial accidents. True, the enforcement mechanism is largely a private-enterprise affair using the civil justice system and freelance contingency-fee lawyers, but I suppose free-market justice is better than no justice at all. Wouldn't it be nice if the lawyers ended up with nothing to do because nobody was dying in industrial accidents anymore? We should still hold out the ideal of no accidents or injuries due to technical causes as one to be striven for. But for a long time to come, I think, there will be more to be done.
Sources: The latest BP accident is described in the San Francisco Examiner online edition at http://www.examiner.com/a-1160942~BP__victim_s_family_probing_fatal_Texas_City_refinery_accident.html. Lord Browne's departure and the BP financial statistics were carried in an article on the Ergoweb website, an ergonomics services company, at http://www.ergoweb.com/news/detail.cfm?id=1693. I also consulted Wikipedia articles on the Brooklyn Bridge and the Hoosac Tunnel. The Segredor incident is recounted on p. 110 of Stross's The Wizard of Menlo Park (New York: Crown, 2007).
What is a human life worth? The time was (and still is, unfortunately, in a few places) when a human life was a market commodity like any other. Fortunately, the human race has seen fit to abolish slavery nearly everywhere, but that doesn't mean you can't figure out what a human life is worth in certain contexts.
Look at the BP situation from an economic point of view. I'm not saying that BP managers thought this way, but one way of looking at it is this. Okay, in 2005 something happened that ended up costing us an additional $2.6 billion. We might have been able to avoid that accident by spending more time and money on safety regulations, training, equipment, and so on. But who knows how much of that is enough? If we'd spent more than $2.6 billion extra on such programs, we would have ended up cutting into our 2006 profits of $22 billion. So how much safety is enough? And at what price?
Another way of looking at it is to ask how much BP spent on settlements per worker injured or killed: an average of about $8.6 million each, it turns out. Now much if not most of that went to lawyers: BP's lawyers, the contingency-fee lawyers that workers without other financial resources have to go to in situations like this, and miscellaneous lawyers, experts, and other highly paid professionals that tend to accumulate around disasters like flies around honey. And some of it probably went to the injured and the families of those who died. Is that what a worker's life is worth? At least in this case, it turned out to be that way for BP.
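The rough arithmetic behind these figures can be laid out explicitly. The dollar amounts below are the ones quoted in this post; the implied number of workers covered by the settlements is my own back-of-the-envelope inference, not a reported figure.

```python
# Back-of-the-envelope check on the settlement figures quoted above.
# Dollar amounts are the ones given in the post; the implied worker
# count is an inference for illustration, not a reported number.

total_cost = 2.6e9      # additional cost of the 2005 accident, per the post
per_worker = 8.6e6      # average settlement per worker injured or killed
profit_2006 = 22e9      # BP's 2006 profits, per the post

implied_workers = total_cost / per_worker
cost_share_of_profit = total_cost / profit_2006

print(f"Implied workers covered by settlements: {implied_workers:.0f}")
print(f"Accident cost as a share of 2006 profit: {cost_share_of_profit:.0%}")
```

On these numbers, the $2.6 billion works out to roughly three hundred workers at $8.6 million apiece, and to a bit over a tenth of the company's 2006 profits.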
It's interesting to contrast the way these things are handled today with the way similar casualties were handled in the 1800s. The nineteenth century was an era of ambitious construction projects: bridges, dams, tunnels. Everybody knows about the Brooklyn Bridge. You may even know that its original designer, John Roebling, had his foot crushed while doing surveys for the bridge, and died of the resulting tetanus infection. His son Washington took over, but after going into an underwater high-pressure caisson during construction of the foundations, he succumbed to decompression sickness and became an invalid. His wife Emily taught herself enough engineering to serve as his chief assistant during the rest of the bridge's 13-year construction. Although many hundreds of workers were employed on the site, the project had a relatively good safety record for the time: only 27 people died, an average of about two a year.
On the other hand, the Hoosac Tunnel project, otherwise known as the "Bloody Pit", cost 193 lives to build. This 4.75-mile railroad tunnel in Western Massachusetts served as a test bed for modern construction techniques using pneumatic drills and nitroglycerine. It was completed in 1873, three years after the Brooklyn Bridge project began.
In those days, construction-worker fatalities were regarded as regrettable, but no one appears to have thought much the worse of the companies or engineers responsible if a few workers died on the job. The general attitude was that a worker taking on a job knew it was dangerous, and it was his own lookout to stay alive.
Thomas Edison was (and is) one of my heroes, but in many ways Edison held some very typical 19th-century attitudes about the safety of his employees. In a new biography of Edison by Randall Stross, I read how Edison sent people far and wide in the summer of 1880 to search for bamboo that might have fibers suitable for incandescent-lamp filaments. One of the less popular members of his lab staff was named John Segredor, a hot-tempered man who had once responded to a sarcastic remark from another staff worker by going to his rooming house and getting a gun. Edison sent Segredor on an odyssey first to Georgia, then Florida, and finally to Cuba in search of different varieties of bamboo. Three days after his arrival in Cuba, Segredor died of yellow fever. In a private letter about the matter, Edison blamed Segredor for his own death, saying he was careless about drinking cold drinks in hot places "and this I doubt not caused his death." No lawsuits there, it seems.
Ideally, nobody would die in industrial accidents, or any other kind, for that matter. Considering the much larger number of people engaged in modern industry today compared to a hundred years ago, it is likely that the accident and fatality rates in modern industry are much lower than comparable rates in the 1880s. And at least in the U. S., our attitudes are much harsher nowadays toward the companies and executives who are involved in industrial accidents. True, the enforcement mechanism is largely a private-enterprise affair using the civil justice system and freelance contingency-fee lawyers, but I suppose free-market justice is better than no justice at all. But wouldn't it be nice if the lawyers ended up with nothing to do because nobody was dying of industrial accidents anymore? We should still hold out the ideal of no accidents or injuries due to technical causes as one to be striven for. But for a long time to come, I think, there will be more to be done.
Sources: The latest BP accident is described in the San Francisco Examiner online edition at http://www.examiner.com/a-1160942~BP__victim_s_family_probing_fatal_Texas_City_refinery_accident.html. Lord Browne's departure and the BP financial statistics were carried in an article on the website of Ergoweb, an ergonomics services company, at http://www.ergoweb.com/news/detail.cfm?id=1693. I also consulted Wikipedia articles on the Brooklyn Bridge and the Hoosac Tunnel. The Segredor incident is recounted on p. 110 of Stross's The Wizard of Menlo Park (New York: Crown, 2007).
Monday, February 04, 2008
If You Can't Trust the Experts. . .
Being an expert at something is both a privilege and a responsibility. Experts who abuse their special abilities make things harder for experts who follow the rules. There's nothing new about these ideas. But the experts who follow the rules often get ignored in the flaps over experts who violate the rules.
Let me get specific. David Kravets of Wired reports in his Threat Level column that four Swedish men have been charged with facilitating copyright infringement. Seems that they operate a "BitTorrent tracking site" called The Pirate Bay. According to Wikipedia, BitTorrent is a type of peer-to-peer network protocol that makes it easier to download large amounts of data through the Internet. Instead of requiring the user to receive an entire file from one central server, BitTorrent allows the user to get pieces of the file from multiple locations and assemble them later, making the whole process easier and often faster. Although the protocol can be used for almost any type of file, it is often used to obtain pirated copies of movies and software.
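The piece-wise transfer idea is easy to illustrate with a toy sketch: a "file" is split into fixed-size pieces, each piece is fetched from whichever simulated peer happens to hold it, and the pieces are verified against their hashes and reassembled. Everything here (peer contents, piece size) is invented for illustration; a real BitTorrent client also handles tracker communication, peer discovery, and piece scheduling, all omitted.

```python
import hashlib

# Toy model of piece-wise download: split a file into pieces, fetch
# each piece from a simulated peer that holds it, verify, reassemble.
PIECE_SIZE = 8

data = b"the quick brown fox jumps over the lazy dog"
pieces = [data[i:i + PIECE_SIZE] for i in range(0, len(data), PIECE_SIZE)]
hashes = [hashlib.sha1(p).hexdigest() for p in pieces]  # akin to .torrent metadata

# Three simulated peers, each holding only some pieces (keyed by index).
peers = [
    {0: pieces[0], 3: pieces[3]},
    {1: pieces[1], 4: pieces[4]},
    {2: pieces[2], 5: pieces[5]},
]

# Fetch each piece from the first peer that has it, checking its hash.
received = {}
for index, expected in enumerate(hashes):
    for peer in peers:
        if index in peer and hashlib.sha1(peer[index]).hexdigest() == expected:
            received[index] = peer[index]
            break

reassembled = b"".join(received[i] for i in sorted(received))
print("reassembled OK:", reassembled == data)
```

The point of the sketch is simply that no single peer needs the whole file, which is what makes the protocol both efficient for legitimate distribution and attractive for piracy.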
The Pirate Bay's operators claim they have spread their operation out so far with third-party intermediaries that they don't even know where the servers are. According to the report in Wired, they seem to think they're doing nothing wrong, and certainly aren't making money at it. If you had to boil down their motivation to one sentence, it might be something like "every bit deserves to be free."
This situation is a good example of what I'd call "technology gone bad" in the sense that some people have taken a clever and useful technological idea—BitTorrent protocols, in this case—and used it for, at best, quasi-legal purposes. Who are the injured parties in a case like this?
Copyright owners such as giant media and software companies will be quick to point out that they are losing revenue every time somebody gets a "free" copy of content via The Pirate Bay rather than through legitimate channels. And since the companies' revenue has to be made up somehow in order for them to stay in business, this leads to higher prices for everybody who gets the stuff legally. And there's your second group of wronged individuals: the consumers of legitimate content who have to pay more for it.
But one group that is often ignored in analyses of this kind of thing is the experts, such as yours truly, whose legitimate operations may be hampered or stifled altogether by draconian or ill-considered regulations. Although I don't think this will happen, it might come about that the corporate interests who dislike the illegal applications of BitTorrent protocols could push through some sort of binding regulation that would make the whole protocol illegal. That sounds almost unenforceable—the notion that simply having a protocol on your computer, without using it, would make you liable to jail time—but there are precedents in the area of child pornography. It is illegal simply to have child pornography in your computer, and if it's found, you can go to jail.
I have no argument against making child pornography illegal, but when you start getting into technologies where most users are legitimate technical people going about their harmless business, there's a real problem. I'm facing a situation like that right now. For a research project I'm engaged in, it turns out I would like to convert so-called "NTSC analog video" (the standard that's going to disappear from U. S. airwaves in about a year) to digital video. I'm not copying anything—I'm generating the video myself—and my need to convert analog to digital video is a legitimate research requirement. But I have had a heck of a time finding any equipment to do it. I mentioned this to my wife, and she said, "Well, sure. People are wanting to take their old analog VHS tapes and turn them into DVDs illegally." Yes, that can be done with the equipment I want, but I don't want to do that.
After much web searching, I found two companies that make such a device, or used to. Oddly (and somewhat suspiciously), both firms have either removed all mention of the units from their websites altogether, or have put up a big notice saying "This product has been discontinued." Fortunately, I think I have found a supplier who still has some in stock, and I'm waiting to find out if I can get one. But it's beginning to look like some corporation or trade group's lawyer has been sending out letters threatening legal action if such devices aren't withdrawn from the market.
Of course, maybe I'm just being paranoid. But whenever a few experts turn to unethical practices, remember that it isn't only the people directly involved who suffer: all the other experts who use the same technology for legitimate reasons may be inconvenienced, or worse, when corporations and their lawyers overreact and cripple or ban an entire useful technology because of the malfeasance of a few bad actors. I hope I can get my video converter unit, but if I can't, I may have folks with attitudes like The Pirate Bay guys to thank.
Sources: The article describing The Pirate Bay's latest legal troubles is dated Feb. 1 and can be found at http://blog.wired.com/27bstroke6/2008/02/the-pirate-bay.html.
Monday, January 28, 2008
One Laptop Per Reviewer
A few months ago (in "One Laptop Per Child: Will It Fly?" on Oct. 22, 2007), I commented on the XO laptop designed by some MIT folks who want to bring the benefits of computer technology to millions of children in third-world countries. It's now been long enough for several reviewers to write independent judgments of the unit, and the results are interesting, to say the least.
Andrew "bunnie" Huang, a recent MIT Ph. D. graduate who writes a blog on computer hardware, thinks the mechanical design of the unit is "brilliant." He was impressed by clever little tricks such as the way the designers used the WiFi antennas to fold down and seal the ventilation holes when the unit's not being used. Along with several other reviewers, Huang liked the way the screen remains visible even in bright sunlight—an intentional design choice that makes the unit usable in outdoor settings.
Huang was less impressed with the software, which consists mainly of a custom-tailored word processing program, a web browser, and a few games. The games ran okay, but the web browser was challenged by all but the simplest websites, and the keyboard, a sealed-membrane type, was tiring to use for more than a few minutes.
Since the XO is designed for children, several reviewers turned the unit over to their kids to see what would happen. This is hardly a fair test of how the device will fare in Ulan Bator or Rwanda, because the children of people who write computer reviews for a living are going to be a little more tech-savvy than your average child in a developing country. Not surprisingly, the reports from the younger set were mixed. One kid liked the "squishy" feel of the membrane keyboard, but gave up on the gizmo when he found he couldn't use some functions on one of his favorite websites. In order to get her XO to work properly, another reviewer had to face the challenges of downloading a new operating system from the OLPC website. She pointed out that following instructions like "At your root prompt, type: olpc-update (build-no) where (build-no) is the name of the build you would like" is not something that many non-techie adults will be able to handle. Kids adapt faster, but they have to have some initial guidance too.
Many of the units reviewed were pre-production prototypes, and so we should make allowances for that. Also, since each reviewer got only one unit, no one ever tried the mesh-networking capability. Mesh networking means that in a village with a dozen XO's, every laptop could in principle communicate with every other laptop as well as with the one internet hub in the village, all without fancy network setups or wires. We have to take the developers' word that this feature works as advertised.
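The mesh idea can be sketched in a few lines: each laptop can hear only its nearest neighbors, but traffic is relayed hop by hop, so a machine with no direct link to the village's one internet hub can still reach it through intermediate peers. The node names and link layout below are invented for illustration; real mesh routing protocols are far more involved.

```python
from collections import deque

# Toy mesh: who can hear whom directly (undirected links, invented layout).
links = {
    "hub": ["xo1"],
    "xo1": ["hub", "xo2", "xo3"],
    "xo2": ["xo1", "xo4"],
    "xo3": ["xo1"],
    "xo4": ["xo2"],
}

def can_reach(start: str, goal: str) -> bool:
    """Breadth-first search: is there a multi-hop path from start to goal?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for neighbor in links[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

# xo4 has no direct link to the hub, but reaches it via xo2 and xo1.
print("xo4 online via relay:", can_reach("xo4", "hub"))
print("all laptops online:", all(can_reach(n, "hub") for n in links))
```

The appeal for a village setting is exactly this relay property: one internet connection serves every laptop in radio range of the mesh, with no wires or network configuration.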
Overall, the reviewers were enthusiastic about the genuinely good features—mainly hardware ones—and tried to be kind about the limitations, mainly in software and capabilities.
I've been sitting here racking my brain for an example of something like this in the history of technology that actually worked. What I'm trying to think of is a situation where a group of experts saw a need for a specially stripped-down version of something that had already succeeded in wealthy economies, and then designed and implemented it through government channels. And the only example I can come up with is the Trabant, which can hardly be termed an unqualified success.
For those who don't recognize the name (probably nearly everyone), the Trabant was the only car made in East Germany from 1957 to the end of the Cold War and the fall of the Berlin Wall. It had a two-cycle lawnmower-type engine, a plastic body, and could go from 0 to 60 in only 21 seconds—on good days. I remember reading in the early 1990s about a resident of East Germany who bought a "real" car and was so disgusted with his Trabant that he drove it into a dumpster and left it there. By now, the few remaining "Trabis" have become collector's items, but back when the Trabant was the only car you could buy in East Germany, demand for them outside the country was approximately zero.
Will the XO become the computer world's version of the Trabant? One reason to think not is that the XO seems to be designed better in some ways than most of today's laptops. My guess is that engineers will cherry-pick the XO's design, taking the good features and putting them into higher-end commercial models, but probably leaving the software alone. And unless some huge institution like the Department of Defense or a national government mandates the use of a particular piece of software that is otherwise less attractive than commercially viable products, that software's doom is generally sure.
All the professional computer reviewers in the world can say nice things about the XO, but that won't make it popular among its target audience: children in the poorest parts of the world. Trying to do something about poverty—economic and intellectual—is a good thing. And it's only natural for computer experts to try to use their expertise to benefit poor people with computers. But in trying to get the technology to the people who need it, the OLPC people will have to deal with matters even more complex than open operating systems and mesh networks: the root causes of poverty, unemployment, and oppression. And the realm of those matters is not to be found in hardware or software, but in the human soul.
Sources: I consulted XO reviews by Martha Mendoza of AP (reprinted in the Jan. 28, 2008 edition of the Austin American-Statesman), Jamie and Nicholas Bsales at http://www.laptopmag.com/Review/My-8-Year-Old-Reviews-the-OLPC-XO.htm, Kenneth Barrow at http://www.notebookreview.com/default.asp?newsID=4093, and "bunnie" at http://bunniestudios.com/blog/?p=218. A story on the collector's renaissance of the Trabant can be found at http://www.nytimes.com/2007/06/17/world/europe/17trabant.html. The One Laptop Per Child website is http://laptop.org/.
Monday, January 21, 2008
Did Morality Evolve? Part 2
Last week I commented on an article by Harvard psychologist Steven Pinker about what he called the "moral instinct." Pinker reviewed scientific efforts to study moral thinking in the brain and across cultures which showed that (a) moral issues are treated differently in the brain than other kinds of thought processes and (b) there seems to be a core of moral principles or categories that show up in every culture studied. I pointed out that the second fact was noticed long before Pinker and his colleagues came along, in the form of the theory of natural law. But I left for today the question of where these core principles come from.
As a subscriber to the evolutionary origin of everything human, Pinker believes that morality is ultimately attributable to evolution. However, he is sensitive to the jaundiced eye with which the general public tends to view evolutionary psychology. As Pinker puts this dim view, "Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes."
This is all wrong, Pinker says, and he gives two reasons for why we shouldn't be afraid or concerned when people like him show us the true foundations of morality.
For one thing, the idea of the "selfish gene" is only a metaphor. Genes aren't really selfish, he says, but in order to simplify complex concepts for mass consumption, geneticists have sometimes talked about genes as though they had personality traits such as selfishness and a determination to survive. And people take this the wrong way to mean that if my genes are selfish, then I must be too, even when I think I'm being generous or self-sacrificing, because it's all really a ploy to perpetuate my genes.
Okay, but Pinker can't have things both ways in this regard. Either the idea of the selfish gene is a reality, or it is a metaphor. If we are moral and believe in the absolute rightness of certain moral principles merely because we evolved that way, then the selfish gene is more than a metaphor: it is the bottom level of reality, the ultimate explanation. And if talking about selfish genes is just a metaphor, and the reality is that genes are just molecules, then what does that make people? Just bigger collections of molecules. And if genes can't be selfish in any meaningful sense, why can the larger collections of molecules called people be selfish, or moral, or anything else other than passive followers of physical law?
To his credit, Pinker senses these questions at some level, because next he asks with regard to the idea that morality evolved, ". . . where does it leave the concept of morality itself?" Does it have a real, objective existence independent of genes or evolution, or is it just foam on the ocean of evolved life, a superficial feature that would cease to exist if the evolved creatures called human beings died out?
Pinker notes that many people attribute the origin of moral principles to God. Then he misapplies what is known to philosophers as the "Euthyphro dilemma." Euthyphro is the title of one of Plato's dialogs in which Plato describes a conversation between Socrates and a young man named Euthyphro who wants to prosecute his own father for murder. Disrespect for elders was an impiety in Greek society, but so was murder—hence the dilemma. Socrates asks why the moral or pious act is regarded as moral or pious: "Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?"
Pinker takes this dilemma and uses it as a supposedly bulletproof response to anyone who claims a divine origin for morality. And he does it not by asserting anything, but by throwing up a cloud of questions which he leaves to the reader to answer in the desired way: "Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist?"
After disposing of the God alternative, Pinker admits that maybe moral principles have a kind of Platonic existence "out there," like the truths of mathematics. Even atheists can believe in the Pythagorean Theorem, and Pinker seems to be comfortable with the idea of "moral realism"—the notion that maybe there really are moral principles that we discover, but which would be there even if people weren't around to understand them. And he winds up by saying that maybe we'll behave better if we understand where our morality comes from and how our bodies work when we deal with moral issues.
If Pinker had looked a little more seriously at the Euthyphro dilemma, he would have realized that Socrates didn't so much dispose of the idea of a divine origin for morality as he tried to lead Euthyphro to a deeper understanding of what piety is. Philosophers still discuss various ways of concluding the Euthyphro argument, which is by no means universally regarded as a knockout response to the contention that God invented morality.
If one believes in a God outside the natural universe and time, a God who created everything, then morality must be one of the things God created. Philosophers like to pose "what-if" questions that are titillating to our intellects, but often these questions disregard the character of the personalities involved. My own answer to the question of whether God would suddenly turn around and make the good today bad tomorrow, is that "God wouldn't do a thing like that." Maybe in some abstract God-of-the-philosophers world, such a thing is a logical possibility. But those who know God, which is just an extension of how one person knows another, know that God doesn't act that way. Never has and never will.
So a viable alternative to Pinker's Platonic moral realism is a theologically informed belief that somehow—perhaps by using evolutionary processes—God wrote the moral law on our hearts. Either way, I can say along with Pinker that we didn't just make it up by ourselves.
Sources: Pinker's article can be found at http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html. Both Wikipedia ("Euthyphro dilemma") and the Stanford Encyclopedia of Philosophy ("Religion and morality") have good discussions of the Euthyphro dialogue and its implications. The quotation from Socrates above was taken from the Wikipedia article.
As a subscriber to the evolutionary origin of everything human, Pinker believes that morality is ultimately attributable to evolution. However, he is sensitive to the jaundiced eye with which the general public tends to view evolutionary psychology. As Pinker puts this dim view, "Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes."
This is all wrong, Pinker says, and he gives two reasons why we shouldn't be afraid or concerned when people like him show us the true foundations of morality.
For one thing, the idea of the "selfish gene" is only a metaphor. Genes aren't really selfish, he says, but in order to simplify complex concepts for mass consumption, geneticists have sometimes talked about genes as though they had personality traits such as selfishness and a determination to survive. And people take this the wrong way to mean that if my genes are selfish, then I must be too, even when I think I'm being generous or self-sacrificing, because it's all really a ploy to perpetuate my genes.
Okay, but Pinker can't have things both ways in this regard. Either the idea of the selfish gene is a reality, or it is a metaphor. If we are moral and believe in the absolute rightness of certain moral principles merely because we evolved that way, then the selfish gene is more than a metaphor: it is the bottom level of reality, the ultimate explanation. And if talking about selfish genes is just a metaphor, and the reality is that genes are just molecules, then what does that make people? Just bigger collections of molecules. And if genes can't be selfish in any meaningful sense, why can the larger collections of molecules called people be selfish, or moral, or anything else other than passive followers of physical law?
To his credit, Pinker senses these questions at some level, because next he asks with regard to the idea that morality evolved, ". . . where does it leave the concept of morality itself?" Does it have a real, objective existence independent of genes or evolution, or is it just foam on the ocean of evolved life, a superficial feature that would cease to exist if the evolved creatures called human beings died out?
Pinker notes that many people attribute the origin of moral principles to God. Then he misapplies what is known to philosophers as the "Euthyphro dilemma." Euthyphro is the title of one of Plato's dialogues, in which Plato describes a conversation between Socrates and a young man named Euthyphro who wants to prosecute his own father for murder. Disrespect for elders was an impiety in Greek society, but so was murder—hence the dilemma. Socrates asks why the moral or pious act is regarded as moral or pious: "Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?"
Pinker takes this dilemma and uses it as a supposedly bulletproof response to anyone who claims a divine origin for morality. And he does it not by asserting anything, but by throwing up a cloud of questions which he leaves to the reader to answer in the desired way: "Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist?"
After disposing of the God alternative, Pinker admits that maybe moral principles have a kind of Platonic existence "out there," like the truths of mathematics. Even atheists can believe in the Pythagorean Theorem, and Pinker seems to be comfortable with the idea of "moral realism"—the notion that maybe there really are moral principles that we discover, but which would be there even if people weren't around to understand them. And he winds up by saying that maybe we'll behave better if we understand where our morality comes from and how our bodies work when we deal with moral issues.
If Pinker had looked a little more seriously at the Euthyphro dilemma, he would have realized that Socrates didn't so much dispose of the idea of a divine origin for morality as he tried to lead Euthyphro to a deeper understanding of what piety is. Philosophers still discuss various ways of concluding the Euthyphro argument, which is by no means universally regarded as a knockout response to the contention that God invented morality.
If one believes in a God outside the natural universe and time, a God who created everything, then morality must be one of the things God created. Philosophers like to pose "what-if" questions that are titillating to our intellects, but often these questions disregard the character of the personalities involved. My own answer to the question of whether God would suddenly turn around and make what is good today bad tomorrow is that "God wouldn't do a thing like that." Maybe in some abstract God-of-the-philosophers world, such a thing is a logical possibility. But those who know God, which is just an extension of how one person knows another, know that God doesn't act that way. Never has and never will.
So a viable alternative to Pinker's Platonic moral realism is a theologically informed belief that somehow—perhaps by using evolutionary processes—God wrote the moral law on our hearts. Either way, I can say along with Pinker that we didn't just make it up by ourselves.
Sources: Pinker's article can be found at http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html. Both Wikipedia ("Euthyphro dilemma") and the Stanford Encyclopedia of Philosophy ("Religion and morality") have good discussions of the Euthyphro dialogue and its implications. The quotation from Socrates above was taken from the Wikipedia article.
Monday, January 14, 2008
Did Morality Evolve? Part 1
Every now and then I like to ruminate on the paradox of engineering ethics. Modern engineering is founded on the principles of objectivity, the scientific method, and the rule of accepting only ideas that can be defended by logical arguments based on observations and measurements. But the foundations of ethics and morality look very different, to say the least. So how can you do engineering ethics without betraying the principles of either engineering or ethics?
The latest stimulus to re-examine this topic came in the form of an article in the Jan. 13 online edition of the New York Times Magazine by Steven Pinker. Pinker holds a chair in the Department of Psychology at Harvard University. He is a comparative rarity among academic psychologists in that he writes clearly and actually listens to the arguments of his opponents. In "The Moral Instinct," Pinker surveys the rapidly advancing science of studying moral behavior by using the tools of experimental psychology.
One of the most interesting recent findings is that the brain has a kind of morality switch built into it. Psychologists can study the activity of particular areas of the brain by using a technique called functional MRI, which shows a picture of brain regions that are taking up more oxygen and presumably working harder. A region called the "dorsolateral surface of the frontal lobes" handles rational thinking such as trying to balance your checkbook without a calculator. On the other hand, the medial frontal-lobe regions deal with emotions about other people—a morality switch that is turned on at some times but not others.
In one study, the researchers posed a series of moral dilemmas to the subjects and asked them to decide what to do. One question—call it the utilitarian question—involved throwing a streetcar-track switch to save five workers' lives by sending a runaway train to run over a sixth worker. Another question—call it the emotional question—was basically the same dilemma, but instead of throwing a switch, the subject had to decide whether to throw a fat man off a bridge. Of the tests that were not spoiled when the subject laughed so hard at the questions that he fell out of the chair and away from the fMRI machine, the researchers found that only the rational part of the brain got involved when the critical act was just throwing a switch. But when the subject had to imagine walking up to a living, breathing man and throwing him to his death, even if it would save five other lives, the emotional part of the brain lit up and got into a fight with the rational part, which also woke up a third part of the brain that acts as a kind of referee between conflicting signals.
The point of this is that psychologists can now use fMRI and other techniques to distinguish between questions and issues that we use mainly rational thinking to answer, and ones which we respond to by appealing to a more basic, non-rational process that Pinker calls the "moral instinct." And Pinker says some very interesting things about this instinct.
For one thing, studies of people from all walks of life and from a variety of cultures all indicate that there may be a core of instinctive moral beliefs that we all have in common. The very fact that Pinker is willing to admit this shows that he is not captive to the "morality-is-subjective" school of thought which has flourished in academia in recent years. Pinker says what he says, not because of any ideological conviction, but because survey and laboratory data from all over the world confirm it. He cites the work of another psychologist, Jonathan Haidt, who says there are basically five categories of moral principles that cover most of the ground for everybody. What are they?
Without going into too much detail, here's the list: (1) Harm—don't hurt other people and help them if you can. (2) Fairness—people in comparable situations should be treated comparably. (3) Group loyalty—other things being equal, take care of your own (family, friends, city, nation) first. (4) Authority—there are rules, rulers, and rulemakers who should be respected and deferred to. (5) Purity—saintliness, cleanliness, and being without spot or blemish are good things, and grubbiness, filth, and disorder are bad ones.
Pinker says a lot more, but perhaps I will save some of it for next week. I'd like to stop right there and note that what Pinker and his psychological colleagues are doing is searching for experimental validation of something called natural law. And it looks to me like they've found it.
Natural law is the idea that certain principles of morality are not simply agreed upon by mutual consent, but somehow inhere in the nature of things. And not only that, but in some sense these principles of natural law are built into human nature. The idea of natural law goes back at least to St. Thomas Aquinas, who saw it as something God put into all human beings, whether or not they believed in God. It was viewed as a strong basis for human laws until the Enlightenment, when other philosophies of law became more popular. But natural law still has its defenders in the legal profession, political science, and religion.
One of the most articulate defenses of natural law was written in 1947 by C. S. Lewis, the Oxford literary scholar and author. In a small book called The Abolition of Man, Lewis appended a list of what he discerned to be the central principles of what he called the "Tao" or universal laws of morality. Lewis's "Law of General Beneficence" and his "Law of Mercy" look a lot like the moral principle pertaining to Harm above. His "Duties to Parents, Elders, Ancestors" pertain to the principle of Authority, and you can link Lewis's "Duties to Children and Posterity" and his "Law of Special Beneficence" (that is, to family, country, etc.) to the Group Loyalty principle above.
How did Lewis come up with a list that overlaps in so many ways with the product of the latest modern psychological research? By studying the writings of ancient cultures: Babylonia, Egypt, China, and the Norsemen, among others. Pretty good for a guy with no research funding or graduate assistants, way back in the dark ages of 1947.
The point of this little lesson is that ethics and morality, far from being founded on criteria that are purely subjective, and therefore culturally bound and changeable, seem to come from a source that is pretty constant in its basic outlines across time, space, and cultures. And the latest deliverances of modern experimental psychology back up that idea. We will say more about Pinker's article next week, but this point is worth pondering till then.
Sources: Pinker's article appeared at http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html. Besides Lewis and his The Abolition of Man, another highly readable proponent of natural law is J. Budziszewski, professor of political science at the University of Texas at Austin and author of Written On the Heart: The Case for Natural Law (1997).