On Monday, May 12, the Sichuan region of China was devastated by one of the worst earthquakes in recent memory. At this writing, the death toll stands at over 50,000, and more bad news about the disaster arrives daily. One of the strangest news stories to come out of the region concerns rumors spread on the Internet that scientists working for the Chinese government knew the earthquake was going to happen, and suppressed the information out of fear that making their prediction public would cause panic ahead of the Olympic Games.
A news source almost certainly affiliated with the Chinese government (China Radio International) issued a release Wednesday which quoted Zhang Guomin, a research fellow at China's Institute for Earthquake Science, as saying that earthquake forecasts should be based on scientific analysis and not tailored to political requirements. According to him, earthquake forecasts are not possible with our present state of knowledge. However, another researcher, Zhang Xiaodong of the China Earthquake Networks Center, seems to wish that predictions were possible, because he told the reporters, "I feel deeply regretful and sorrowful at the failure to predict the earthquake."
What if we could predict earthquakes with, say, the same accuracy as we can predict tornadoes today? At least one leading authority believes that such predictions may be possible. A NASA researcher named Friedemann Freund has published a series of papers over the years that connect measurable changes in the earth's electromagnetic fields to strong earthquakes that happen shortly after the changes. (My blogs of Feb. 20, 2007 and Apr. 13, 2006 describe more technical details.)
Without taking sides on whether this is in fact possible, let's do a little thought experiment. Suppose after X years of research and development, we assemble the expertise, equipment, and networks needed to predict major deadly earthquakes. No prediction system is going to be perfect, so let's say its accuracy can be quantified this way: when the system predicts an earthquake of at least a given magnitude in a given geographic area during a given time window (probably at least a week, and maybe much longer), the prediction is borne out 80% of the time. And let's say false positives and false negatives are equally likely. That is, the remaining 20% of cases split evenly: 10% are misses, major earthquakes that strike when none was predicted, and 10% are false alarms, predicted earthquakes that never happen.
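To make the thought experiment concrete, here is a small sketch of how that 80/10/10 split would play out as raw counts. The figure of 1,000 forecast windows is an invented number for illustration, not anything from a real prediction program:

```python
# Toy tally of the hypothetical prediction system's track record.
# The 80/10/10 split is the one posited above; the 1,000 forecast
# windows are an assumed figure for illustration only.
def tally_outcomes(n_windows=1000):
    return {
        "correct": int(n_windows * 0.80),        # predictions borne out
        "missed_quakes": int(n_windows * 0.10),  # quakes with no warning
        "false_alarms": int(n_windows * 0.10),   # warnings, no quake
    }

print(tally_outcomes())
```

Even at this respectable accuracy, roughly one warning in nine (100 false alarms against 800 borne out) would send a whole region scrambling for nothing, which is exactly the policy problem the rest of this piece worries about.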
Given this imaginary system, what do we do with it? Do we treat the forecasts like hurricane forecasts and order mass evacuations? That's certainly one approach. Originally, Hurricane Rita was predicted to hit the Houston area, and a graduate student I knew was pretty perturbed when he wasn't able to arrange for transportation out of the city. As it turned out, he was one of the lucky ones—nothing too bad happened to Houston, but everybody who tried to flee had to endure the granddaddy of all traffic jams on the already-clogged Houston freeways.
Hurricanes generally end up somewhere, so hurricane forecasters are given the benefit of the doubt when they miss on exact predictions of a storm's path. But what if earthquake experts made a prediction that turned out to be a complete bust—that is, everybody evacuates for the full term of the warning and exactly nothing happens? A single fiasco like that might sully the field's reputation for years, and nobody would take its warnings seriously afterward.
To bring the matter closer to home, what if this hypothetical system predicted The Big One for the San Francisco Bay area? If we shut down everything that goes on in Silicon Valley for a week, that would constitute a major economic disaster of its own. You don't just walk up to a huge semiconductor plant and turn off the switch, unless you want to turn it into scrap. Of course, a major earthquake might do that for you, but then you get into the question of how to deal with an evacuation order that would cost a private company billions of dollars. Lives are more valuable than property, but property isn't negligible. And that's just one example of the many problems we would face in dealing with accurate earthquake forecasts.
The approach California has taken in the absence of reliable earthquake predictions is to mandate earthquake-resistant construction. But that costs more than ordinary construction, and requires a well-functioning regulatory system and a cooperative construction industry, neither of which is always found in other countries. Mass evacuations are simpler, and might be the best path for countries that can't afford to replace their entire infrastructure with earthquake-resistant structures.
Clearly, even if we had reliable earthquake prediction, we would face a lot of issues in deciding how to act on the knowledge it would provide. But it seems to me that knowledge is always better than ignorance, especially when it comes to earthquakes. And considering the terrible loss of life and property that major earthquakes usually cause, I wish that we spent more intellectual capital on serious efforts to predict earthquakes, and tried to evaluate the predictions in a statistically meaningful way.
Sources: The China Radio International article I quoted appeared at http://english.cri.cn/2946/2008/05/15/48@357631.htm.
Saturday, May 17, 2008
Saturday, May 10, 2008
Ethics of the Smart Car
The relationship between drivers and their cars has always been a complex one, fraught with emotional and moral overtones. Maybe that was why some television writers with more enthusiasm than judgment came up with the concept of "My Mother the Car." I'm old enough to remember watching that show, which aired on U. S. television back in 1965. The basic idea was that this guy buys an antique car, only to discover that somehow his deceased mother's spirit has taken up residence in it. The radio dial flashed whenever she spoke to him, I guess so TV viewers could tell that it really was the car and not some hallucinogen-inspired inner voice. The show lasted only one season and is remembered, if at all, for being one of the worst TV series of all time. But if Prof. Clifford Nass of Stanford University has his way, we all may be talking with our cars in the future—and the cars may talk back in tones to match our emotions.
A recent Wired article profiled Prof. Nass's research on the future of the human-automobile interface, and how smart cars may be used. Smart in what sense? Well, with current GPS (global positioning system) technology and computer power, coupled with broadband wireless networks that will be ubiquitous soon, you can imagine driving down the street and saying to your car, "Hey, I'd like a pizza. Any good places within a couple of blocks?" Advertisers and automakers would like your car to reply, "Well, there's Gino's in the next block and Papa's one block over—they're having a lunch special today. What shall it be?"
Of course, the same smarts that let your car give you dining advice will also empower it to remember how you drive. Auto insurance companies currently give discounts to good drivers and raise rates on poor ones, but the quality of your driving is determined mainly by very coarse measures: the number of accidents and traffic violations. Suppose every week your insurer could download and process (by software, of course) hundreds of details about how you drive: how fast you pulled out after a light changed, whether you were speeding and by how much, and whether you ran red lights without getting caught. Most of the technology is already there; it's mainly a matter of deploying it.
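As a rough sketch of what that processing might look like, here is a toy scoring routine. The event names and penalty weights below are invented for illustration; a real insurer would use its own actuarial models:

```python
# Hypothetical penalty weights for driving events logged by the car.
# These names and numbers are invented for illustration only.
WEIGHTS = {
    "hard_acceleration": 1.0,
    "speeding_minor": 2.0,   # modestly over the limit
    "speeding_major": 5.0,   # well over the limit
    "red_light_run": 10.0,
}

def weekly_risk_score(events):
    """Sum the penalties for a week's worth of logged event names."""
    return sum(WEIGHTS.get(e, 0.0) for e in events)

log = ["hard_acceleration", "speeding_minor", "speeding_minor",
       "red_light_run"]
print(weekly_risk_score(log))  # 15.0
```

A scheme like this is trivial to compute; the hard questions are who sets the weights, who audits them, and what the driver gets to see.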
Some people would think this amounts to turning one's private car into a spy. The matter gets even more complex if we move to cars that partially or totally take over the functions of driving. (See my column "The Human Side of Automated Driving," Dec. 10, 2007.) Clearly, if you take your hands completely away from the controls and let the car do everything, your responsibility for accidents that ensue is limited, if not absent entirely. But many plans for computer-assisted driving don't go that far. Nass imagines a heavy-footed driver negotiating with his car for permission to step on the gas after a stoplight changes. "Aw, c'mon, just this once?" "No, you're wasting gas, and at five dollars a gallon!" Nass says that changing the car's tone of voice to match the driver's mood may help the situation, but I'm not so sure.
As soon as it became economically feasible to put computer-generated voices in cars, sometime in the early 1990s, a few manufacturers experimented with the idea. It proved to be almost universally unpopular, as the mechanical female tone reminded everybody of their worst nagging nightmares of school librarians and mothers (there it is again), and the feature disappeared within a model year or so.
Where is engineering ethics in all this? The first responsibility of engineers who are working on these things is to make sure they don't make driving more dangerous. Of course, that doesn't mean things can't ever go wrong occasionally, but tests will have to show a general improvement in safety before new features can be adopted. As for insurance companies and driving information, there is a public-policy aspect which has not been debated yet. It's the same kind of question that arises when health insurers want to use a person's genetic information to restrict health coverage, except in that case you can't help what genes you were born with, but you presumably have some control over how you drive. But should a taxi driver in New York pay higher rates than the legendary little old lady from Pasadena who only drives to church on Sundays? These are questions that involve technology as well as issues of fairness, economics, and what insurers like to call "moral hazard"—that is, the idea that you should not be exempt from all the consequences of your own voluntary bad behavior.
For my part, I'll be content to drive my old, dumb cars (dumb in two senses) until the wheels fall off. And maybe by then I can buy a car named James and commute by saying, "Home, James," and just enjoy the scenery while the car worries about the congestion on IH-35.
A Note To Readers
For the next two to four weeks I will be pursuing some research in a rather remote location where Internet access is not as reliable as it could be. So I apologize in advance for any delays in my weekly postings, which I will try to keep current as much as possible. For more information about the subject of my research, see www.nightorbs.net.
Sources: The Wired article appeared on May 9, 2008 at http://blog.wired.com/cars/2008/05/a-data-mining-c.html. And Wikipedia has an article that will tell you more than you will ever need to know about the show "My Mother the Car."
Monday, May 05, 2008
I Got the Botts About Bots
My father, God rest his soul, had enough South Texas German in him to be subject to occasional fits of Teutonic depression. He had enough self-awareness to know what was going on when these moods hit him. When we asked him what was bothering him, he'd generally say, "Aw, I've got the botts." (I never saw him write the word down, but for some reason I think it's spelled with two t's.) He passed on many years before the Internet was more than a gleam in a few researchers' eyes, but if he were alive now, he might well have the botts about bots.
A bot is a piece of malevolent software (malware) that infects your computer and puts it under the control of a remote operator. The things bots are told to do are generally not nice. In the case of one of the worst botnets, Storm Worm, some observers say that over a million computers took orders from operators who apparently went on the black market to offer denial-of-service attacks to the highest bidder. If a criminal takes up the offer, the victim's website is inundated with many millions of emails or other automated requests for service, whereupon the target website gets overwhelmed and becomes inaccessible to legitimate users. Creators of botnets have progressed in the last few years from random vandalism to coordinated criminal activity, which is why computer security firms and software providers from Microsoft on down have lately spent so much time and effort combating the problem.
Until recently, people like me who use Macintosh computers could ignore bots, since until 2004 or so no one had bothered to write a bot for Macs. Since only a relatively small percentage of all computers online at a given time are Macs, a malware writer who wants access to the largest number of computers in the shortest time is probably not going to bother writing two different bot programs, one for Macs and one for PCs. (Most legitimate software companies don't either, but that's another story.) But this supposed invulnerability has evidently come to an end. The other day I received a message from the IT division of a university where I do research. It informed me that a Mac on a network node in the lab I was working in was being remotely controlled by a bot. I was alarmed until I called the people and checked the MAC address, an identifying number unique to my computer's network hardware. The number didn't match mine, so my computer must not have been the one that was zombified. Still, it means there could be a problem in the future.
It turns out that bots tend to communicate through something called IRC, which stands for Internet Relay Chat. This is the original protocol that enabled internet-based chat, before companies started selling proprietary versions. I am not a computer scientist, and I don't know all the reasons this particular protocol is so useful to botnet masterminds, but it is.
Wouldn't it be nice if we could rewind to the day when the first wide-eyed innocent programmer came up with the neat idea of IRC in the first place? "Hey, kids, let's make it so we can chat over the Internet in real time." Sounds great. But apparently, there is something about the IRC protocol that makes it an ideal command channel for malware that has already taken over people's computers.
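Part of the answer may simply be that IRC is a plain-text, line-oriented protocol that takes only a few lines of code to speak: a bot need only connect, join a channel, and read its orders out of ordinary chat messages. As a rough illustration (not taken from any actual bot), here is a parser for the PRIVMSG lines that carry channel messages in the original RFC 1459 protocol:

```python
# Minimal parser for an IRC channel message, per the RFC 1459 line
# format ":nick!user@host PRIVMSG <target> :<message>". A bot listening
# on a channel would read commands out of lines like these.
def parse_privmsg(line):
    """Return sender/target/message for a PRIVMSG line, else None."""
    if not line.startswith(":") or " PRIVMSG " not in line:
        return None
    prefix, rest = line[1:].split(" PRIVMSG ", 1)
    target, _, message = rest.partition(" :")
    sender = prefix.split("!", 1)[0]  # the part before ! is the nickname
    return {"sender": sender, "target": target, "message": message}

line = ":boss!op@host.example PRIVMSG #channel :update now"
print(parse_privmsg(line))
```

The names in the sample line are made up, but the format is the real one, and its very simplicity is the point: anything this easy to automate is easy to automate for bad ends too.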
I'm sure the last thing on the programmer's mind was building in a flaw that would later be exploited by criminal elements, to the harm of thousands of victims and to the possible legal compromise of millions of people who unknowingly participate in these crimes simply because their computers host bots and follow the orders of their evil digital masters. But hey—with opportunity comes responsibility.
There is an idea in the engineering ethics world called the precautionary principle. Wikipedia defines it this way: "If there is a risk that an action could cause harm, and there is a lack of scientific consensus on the matter, the burden of proof is on those who would support taking the action." You hear more about it in European ethics discussions than in the U. S., and taking it seriously would severely hamper the development of new technologies of all kinds. I wonder, though, whether we might have avoided some of the problems we struggle with today if the people who developed the early Internet protocols had taken a more cynical view of human nature and tried to think of all the evil things ill-willed programmers could do with the neat tools they were putting out there.
If, for example, the developers of the IRC had taken a prototype version to some creative young bucks who spent their days trying to devise malevolent uses for new software, they might have discovered the extreme usefulness of IRC in botnets. And who knows?—they might have fixed it in a way that stayed permanently embedded in the Internet as it grew faster than almost anyone expected.
It's obviously too late to close the barn door on that particular horse. Now that Macs can harbor bots, I'll just have to be careful and try to make sure I follow good computer hygiene, for whatever good that will do. But people are writing new software all the time, and some of it is destined to be as influential and ubiquitous as the infamous IRC protocol is now. Surely we have learned a lesson about the depths of depravity to which some programmers will stoop. I just hope that people who write software these days take some thought as to how what they develop could be misused in the future, and even twist their minds around to be creative about it—and then fix it so it can't be used that way.
Sources: Slate has a good introduction to the subject of bots at http://www.slate.com/id/2190275/. A recent overview of the subject from a technical perspective can be found at http://8e6labs.com/2007/11/02/overview-of-the-threats-posed-by-bots/.
Monday, April 28, 2008
Should Google Be the World's Librarian?
Book Search is a portal that Google, Inc. is developing to provide access to all the world's books in digital form. How many is that? If you count editions (not individual copies), a recent Associated Press article about the project says there are between 50 and 100 million books in the world. The largest research library that I deal with on a regular basis, at the University of Texas at Austin, has only eight million of these. So clearly, Google will have done a great thing if and when it finishes—although with new books coming out all the time, a project like that is never really finished.
At first glance, this sounds like a great step forward in the history of information, on a par with the invention of printing. There are many parallels between the two events. Before movable type made it possible to produce thousands of identical copies of a manuscript, hand-copied books were rare, expensive treasures that only the wealthy and powerful classes could afford, by and large. But once Europe had dozens of print shops churning out books and pamphlets by the hundreds, prices came down to the point that artisans, shopkeepers, and even some farmers and peasants could afford them. You can make arguments that the Renaissance, the Protestant Reformation, and the Industrial Revolution all depended vitally on the invention of printing.
However, there is one critical difference between the invention of printing and what Google is doing. Print shops, publishers, and the whole network of book production, distribution, and the libraries that developed to house them were under the control of a diverse array of entrepreneurs, private organizations, schools, and governments. On the other hand, Google is, well, Google—a single, monolithic, centrally controlled corporation. Is there any ethical problem with that? It depends.
One thing that may be in danger is what I would term the universal freedom of library access. At any university library worthy of the name, anywhere in the world, any person can simply walk in and look at the general collections, generally without charge. And if you can produce scholarly credentials, you will usually be allowed to examine even the rarest items in their collections, under proper security controls, of course. The only limitation (and this is a severe one, admittedly) is that you have to travel physically to the library in question. But once you're there, you're in.
We have already seen how many internet firms have submitted to the will of dictatorial nations in exchange for the privilege of operating there. In my Mar. 30, 2006 blog, I criticized Google, Yahoo, and Microsoft for kowtowing to the government of the People's Republic of China by restricting users' access to certain sites that the government deemed objectionable. Surely the books and other published works of Chinese dissidents will not be welcome there in electronic form any more than the people themselves, many of whom have endured long prison terms or even death for the "crime" of expressing their opinions.
But that is only one example of how Google, or any entity which has exclusive legal rights to the propagation of large amounts of information in a single medium, could distort or restrict access to the written heritage of the human race.
Am I being paranoid in sensing the potential for some sinister goings-on? I do not presently attribute evil or malign motives to Google, but sometimes things that look good to start with have bad unintended consequences. All I'm saying is that letting a single firm control the way most of the world will access its own written heritage is at the least an unprecedented step, and potentially a very dangerous one.
The management of Google may all be nice folks now. But what if China gets more prosperous and has so much money in its government-controlled stock investment option that one day it hauls off and buys Google? Sounds ridiculous now, but if you had said in 1965 that in forty years, General Motors would be a money-losing basket case and Japanese car makers would beat them in worldwide sales, you would have gotten peculiar glances then too. Then China would get to say who gets access to what—an eventuality that few people would enjoy or benefit from.
My point is that the concentration of information control in the hands of a few is something to be regarded with caution, to say the least. Same goes for news media, but here we're talking a lot more than just news media—the intellectual heritage of the entire human race is at stake.
Do I have any suggestions? Well, no, in this case I'm just trying to get the ball rolling on a discussion. Even if I owned stock in Google, I have no illusions that they would listen to my opinions about their project. But if we're going to go ahead with this thing, we should at least go into it with our eyes open—as long as we can still see on our own.
Sources: The Associated Press article by Natasha Robinson on Google's Book Search project and its efforts toward the preservation of historical books was published in numerous venues. I saw it in print in the Austin American-Statesman (p. D3 of the Apr. 28, 2008 edition), and a version is accessible online at http://abcnews.go.com/Technology/wireStory?id=4722073.
At first glance, this sounds like a great step forward in the history of information, on a par with the invention of printing. There are many parallels between the two events. Before movable type made it possible to produce thousands of identical copies of a manuscript, hand-copied books were rare, expensive treasures that only the wealthy and powerful classes could afford, by and large. But once Europe had dozens of print shops churning out books and pamphlets by the hundreds, prices came down to the point that artisans, shopkeepers, and even some farmers and peasants could afford them. You can make arguments that the Renaissance, the Protestant Reformation, and the Industrial Revolution all depended vitally on the invention of printing.
However, there is one critical difference between the invention of printing and what Google is doing. Print shops, publishers, and the whole network of book production, distribution, and the libraries that developed to house them were under the control of a diverse array of entrepreneurs, private organizations, schools, and governments. On the other hand, Google is, well, Google—a single, monolithic, centrally controlled corporation. Is there any ethical problem with that? It depends.
One thing that may be in danger is what I would term the universal freedom of library access. At any university library worthy of the name, anywhere in the world, any person can simply walk in and look at the general collections, usually without charge. And if you can produce scholarly credentials, you will usually be allowed to examine even the rarest items in their collections, under proper security controls, of course. The only limitation (and this is a severe one, admittedly) is that you have to travel physically to the library in question. But once you're there, you're in.
We have already seen how many Internet firms have submitted to the will of dictatorial nations in exchange for the privilege of operating there. In my Mar. 30, 2006 blog, I criticized Google, Yahoo, and Microsoft for kowtowing to the government of the People's Republic of China by restricting users' access to certain sites that the government deemed objectionable. Surely the books and other published works of Chinese dissidents will be no more welcome there in electronic form than the people themselves, many of whom have endured long prison terms or even death for the "crime" of expressing their opinions.
But that is only one example of how Google, or any entity which has exclusive legal rights to the propagation of large amounts of information in a single medium, could distort or restrict access to the written heritage of the human race.
Am I being paranoid in sensing the potential for some sinister goings-on? I do not presently attribute evil or malign motives to Google, but sometimes things that look good to start with have bad unintended consequences. All I'm saying is that letting a single firm control the way most of the world will in the future access its own written heritage is, at the least, an unprecedented step, and potentially a very dangerous one.
The management of Google may all be nice folks now. But what if China gets more prosperous and has so much money in its government-controlled stock investment option that one day it hauls off and buys Google? Sounds ridiculous now, but if you had said in 1965 that in forty years, General Motors would be a money-losing basket case and Japanese car makers would beat them in worldwide sales, you would have gotten peculiar glances then too. Then China would get to say who gets access to what—an eventuality that few people would enjoy or benefit from.
My point is that the concentration of information control in the hands of a few is something to be regarded with caution, to say the least. Same goes for news media, but here we're talking a lot more than just news media—the intellectual heritage of the entire human race is at stake.
Do I have any suggestions? Well, no, in this case I'm just trying to get the ball rolling on a discussion. Even if I owned stock in Google, I have no illusions that they would listen to my opinions about their project. But if we're going to go ahead with this thing, we should at least go into it with our eyes open—as long as we can still see on our own.
Sources: The Associated Press article by Natasha Robinson on Google's Book Search project and its efforts toward the preservation of historical books was published in numerous venues. I saw it in print in the Austin American-Statesman (p. D3 of the Apr. 28, 2008 edition), and a version is accessible online at http://abcnews.go.com/Technology/wireStory?id=4722073.
Monday, April 21, 2008
Human Biological Enhancement and the Ethics of Personhood
Some philosophers of mind like to try a little thought experiment on their students. It goes something like this. Suppose some years from now, a person—an ordinary human being—gets some dreaded brain disease that gradually destroys his gray matter. But also suppose that medical technology has advanced to the point that as the brain's biological tissue dies, it can be replaced by silicon (or some equivalent futuristic material) that is functionally equivalent to the dying brain part. And so as time goes on, Mr. Brain Patient has more and more of his brain replaced by the future's equivalent of computer chips. At what point, the philosopher asks, does the patient cease to be a human and begin to be a computer?
At one time, you could laugh off the whole thing by saying nobody has ever done such a thing and it's unlikely that they ever will. But no longer. Writing in Technology and Culture, historian Michael D. Bess points out that numerous blind and otherwise disabled people have received brain implants that allow them to see or communicate in ways that are utterly impossible for the rest of us mortals. Having a bunch of wires attached to your brain is not the same thing as replacing your cerebellum with a mainframe, but the border has been crossed. What happens from now on is more a matter of degree than of kind.
Bess foresees not just advances in brain science, but in genetic engineering and pharmacology as well, all leading to what he calls "human biological enhancement." Currently, the goal of most such projects is to use technology to restore the abilities of disabled people to something close to normal: curing genetic diseases, allowing the blind to see, allowing people with strokes or myasthenia gravis who end up "locked in" (unable to move or talk) to communicate via brain waves, and so on. But what is to prevent a person who sees through a computer from attaching an infrared camera to their input so they can see in the dark? Or what if we find a drug that restores Alzheimer's patients to normal brain function, and also gives normal people an IQ of 200? What is to keep us from taking human nature as merely raw material, a rough design to be improved on with increasingly advanced engineering? And what do we call these improved beings? People? Cyborgs? Or something in between?
Bess, for his part, sees no practical way to avoid these changes. The science will keep progressing, and as the natural desire on the part of people to take advantage of enhancements pulls the technology into the marketplace, we will face the issue of how to treat folks who have version numbers after their names (Bess titled his essay "Icarus 2.0"). He imagines that the only way to stop or regulate human biological enhancement would be to pass a worldwide set of laws together with a huge enforcement mechanism to chase down any miscreants trying to do enhancements under the table, so to speak. He sees the very public failure of the attempt to regulate performance-enhancing drugs in sports as a sign that this road is doomed to futility.
What we ought to do instead, he says, is get used to it. Start now to develop an "ethics of personhood" that in his words constitutes "an expanded conception of human dignity, a more generous understanding of the word 'us'." If one day you go to your job and find that the new hire you have to work with moves on wheels, sees through cameras, and accesses the Internet just by thinking, Bess is concerned that somehow you will be tempted to view that being as something other than human. We need to start now to work on that problem so that it doesn't lead to disastrous social consequences.
Well, I'm doing my little bit by drawing your attention to this matter. I'm already working with a colleague who gets around on wheels—he has osteomyelitis and spends most of his day in an electric wheelchair. Perhaps if these changes come along slowly enough, we can get used to them.
But for some reason, in searching history for an encounter between two very different orders of being who both happened to be human, the story of the early Spanish explorations of the New World comes to mind. With their armor, ships, and guns, the Spaniards must have looked to the native Americans like R2D2 looks to us. And sure enough, a whole lot of social disruption and suffering came about as a result of that encounter. But most of the misery and suffering was experienced by the native Americans, not the "enhanced" Spaniards.
Bess seems to be worried that un-enhanced humans will discriminate against the enhanced types, because they'll look odd or peculiar. But the case of Spanish exploitation of the New World suggests that the problems will mostly be experienced by those who, for whatever reason, don't benefit from technologically enhanced abilities. Especially if enhancement is expensive (it will always be at first), you could easily end up with an elite class of enhanced humans who would regard political and social power as their right.
Aldous Huxley's 1932 dystopia Brave New World divided the genetically engineered population of the future into castes ranging from alphas down to epsilons. The alphas were the natural-born leaders with enhanced intelligence, and the epsilons were bred (or manufactured, really) for menial jobs such as elevator operators (Huxley's crystal ball didn't include much in the way of automation). Huxley avoided the problem of having the lower castes rise up in revolt by making their genetic makeup include a natural-born enjoyment of menial tasks.
I don't know about you, but I wouldn't want to live in such a world. Bess is to be congratulated for raising a concern that we ought to start thinking about now. But I believe he's looking in the wrong places for problems. The enhanced types will do just fine—the people we need to start thinking about defending are the poor, the discriminated against, and the unborn, now and perhaps even more in the future.
Sources: Bess's essay "Icarus 2.0: A Historian's Perspective on Human Biological Enhancement" appears in the January 2008 issue of Technology and Culture (vol. 49, no. 1, pp. 114-126).
Monday, April 14, 2008
Thoughts on the Passing of a Zip Drive
In my household we try not to let too much old technology pile up, so after my wife bought a new laptop the other day, we began saying good-bye to her old Mac tower. It gave good service from about 2002 to a couple of years ago, and one of the features we're going to miss is its Zip drive. Zip disks were a removable magnetic-disk storage medium that was popular from the mid-nineties until flash drives came along. The first Zip disks held 100 MB, which was later boosted to 250 MB, but with 1-gig flash drives so cheap now, I can't imagine there's much of a market for Zip drives anymore. Thing is, we have about 40 or so Zip disks that have stuff on them going all the way back to 1988, when my wife first learned to do graphics on a computer. Some of it has been backed up here and there, but if I had to tell you where, I'd be in trouble. So I spent yesterday afternoon transferring a good many of those old Zip disks to a backup drive, and it got me to thinking about the permanent impermanence of digital storage.
Every two to five years or so, a new generation of storage media comes along. If the new generation didn't rise up and commit parricide on the previous generation, it wouldn't be so bad. But the hallmark of modern technology is "creative destruction," so for a new storage medium to be successful, it has to drive the previous medium out of existence. True, you can usually find antique drives, media, and even computers that use them if you look hard enough, but having to hunt around and assemble your own computer museum just to read some old files is hardly practical for most people. So the only alternative, if you don't want your old data to go away as definitively as if you wrote it on paper and threw the paper on a bonfire, is to transfer it to the next medium. Which is fine for another two to five years, and then. . . .
And that gets me to wondering, what am I saving all this stuff for anyway? The inventor and futurist Ray Kurzweil wrote about this in one of the most human-sounding passages of a book about how we're all eventually going to live as software on hardware that will take over the universe (if you think I'm kidding, go read The Singularity Is Near). His father Fredric was a musician and music teacher who fled Germany in the 1930s for the U. S. When he died at 58, the son inherited a large volume of paper documents, recordings, and other memorabilia. After starting a project to digitize all this stuff, Ray reached a conclusion which is as simple as it is startling. It was this: "Information lasts only so long as someone cares about it."
Like many of Kurzweil's philosophical epigrams, it contains elements of truth. I'm sure lots of information, in the form of paper, hard drives, old floppy disks, and so on, is eradicated every day simply because nobody needs or wants it any more, and the space or money it takes up is needed for something else. But just because somebody cares about information doesn't mean it will necessarily endure. Along with caring, the people interested in the data need the resources it takes to preserve it—whether that means space, funding for periodic migrations to new media, or archeological work.
In a way there's nothing new about this. People have been making choices about what information to save and what to toss ever since the invention of writing. Writing and paper are different in degree from Zip disks and flash drives, but not in kind. They are all technologies for the storage of a non-material entity—namely, information—using material media. You can make a good argument that the invention of writing made civilization possible, in that laws, history, customs, religious traditions, and most of what makes a culture could then be preserved independently of particular people with both good memories and the ability to pass their memories on to other people who could do the same. And I'm not one of those people who sit up at night worrying that historians of the future will have nothing to go on after the global catastrophe that wipes out all computer memories everywhere—although if that did happen, we'd all have a lot to worry about, not just the historians.
If we knew for certain whether anybody in the future would care about this or that data file, things would be easier. But you never know. Certain kinds of information, such as emails in the Executive Branch of the U. S. government, are just assumed to have historical importance, which is why the Bush administration got in some trouble a few months ago after admitting that they appear to have "lost" some emails covering several years, and had to recover them from backup tapes.
But for most ordinary, non-historical personages like myself, the candidates for people who will care about your information include yourself in the future, your relatives and children, and maybe a few friends and associates. It's actually a pretty short list. And unless you're a professional historian or plan to become the subject of one, if you don't think your list of carers-in-the-future would be interested in your tax return for 1982, you can just go ahead and throw it away.
Sources: Ray Kurzweil's The Singularity Is Near (Viking, 2005) carries the story of his attempts to archive his father's legacy on pp. 326-330. Zip is a registered trademark of Iomega Corporation, which still sells Zip drives, so maybe I won't worry about backing up those remaining disks just yet.
Monday, April 07, 2008
Whistleblowing on Southwest Airlines: Cracks of Doom or Paperwork Errors?
The lot of a whistleblower is not an easy one. And I'm not talking about football referees. In engineering ethics parlance, a whistleblower is someone who goes public with information about a safety issue, after trying without success to deal with the problem through normal organizational channels. Whistleblowers can toot either before or after something terrible happens, but the consequences for them are usually the same: isolation, criticism, and often the loss of a job or even a career. Their only compensation is the knowledge that, in most cases at least, they did the right thing.
Charalambe "Bobby" Boutris is finding out right now what life as a whistleblower is like. In 1998, the Federal Aviation Administration (FAA) hired him, and an important part of his job was to make sure that airlines complied with what are called Airworthiness Directives (ADs for short). These are rules that the FAA makes to ensure the safety of aircraft, and detail such things as regular fuselage inspections, especially for older planes.
You'd think nothing much could go wrong with the fuselage compared to moving parts like the engine and so on, but think again. If you've ever been on a jet aircraft and looked through a window with a view over the wing, you have probably noticed that the wingtip wiggles up and down several inches during air turbulence. That is perfectly normal, and designed into the way the plane works. If the wing were built solidly enough not to wiggle at all, it would make the plane so heavy that it couldn't get off the ground.
But if you've ever bent a paper clip back and forth until it breaks, you know about a thing called metal fatigue. And not only the wing, but all stress-bearing parts of the fuselage experience tiny movements that, over time, can cause metal fatigue and cracks. Most of the time these cracks are small and don't spread. But in 1988, they were responsible for one of the most spectacular airline accidents in aviation history.
Passengers in the first-class section of an Aloha Airlines flight over Maui were astonished to see the roof of the plane pop off and rip away in the violent decompression, taking a flight attendant with it. The pilot, not even fully aware of what happened, quickly adapted to the altered flying characteristics of his plane and safely landed at a nearby airport. The attendant was the only fatality, but clearly, airlines did not want to take the chance of this kind of thing happening again. Investigation showed that the plane, which was one of the oldest in Aloha's fleet, had developed fatigue cracks that had spread to cause the whole top section of the fuselage to fly off.
For this and other very good reasons, the FAA requires air carriers to inspect their fleets for fatigue cracks on a regular basis. Now, these cracks are a statistical thing, like mortality rates. It's hard to predict whether a given plane will develop a crack at a given place by a given time, but the inspections are timed so that on average, any cracks can be caught and repaired well before they become dangerous. But the system works only if you keep to the schedule.
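The statistics behind those inspection schedules come from fracture mechanics. A common textbook model is the Paris law, which says a crack grows each loading cycle at a rate set by the stress intensity range at its tip—so growth starts out glacially slow and accelerates as the crack lengthens, which is exactly why catching cracks while they're small buys you so much margin. Here is a minimal sketch of that idea; every number in it is purely illustrative (made-up, roughly aluminum-like values, not parameters for any real aircraft):

```python
import math

# Illustrative Paris-law parameters -- assumed, not from any airworthiness data
C = 1e-12            # growth coefficient (m/cycle, for delta-K in MPa*sqrt(m))
M = 3.0              # Paris-law exponent
DELTA_SIGMA = 60.0   # stress range per pressurization cycle, MPa
Y = 1.12             # geometry factor for a surface crack

def cycles_until(a0: float, a_limit: float, step: int = 1000) -> int:
    """Numerically integrate da/dN = C * (delta-K)^M in blocks of `step`
    cycles until the crack grows from length a0 to a_limit (meters).
    Returns the approximate number of pressurization cycles that takes."""
    a, n = a0, 0
    while a < a_limit:
        delta_k = Y * DELTA_SIGMA * math.sqrt(math.pi * a)  # stress intensity range
        a += C * delta_k**M * step                          # crack growth this block
        n += step
    return n
```

With numbers like these, a 1 mm crack takes vastly more cycles to reach a dangerous length than a 5 mm crack does—the model's way of saying that an inspection interval chosen for small cracks stops being conservative once inspections slip and cracks get a head start.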
Well, it appears that Southwest Airlines didn't keep to the inspection schedule. In testimony before Congress on April 4, Inspector Boutris told the story of how he found numerous cases in which inspection records were either too mixed up to tell whether the inspections had been done, or showed definitely that planes had gone as long as 30 months past the time when ADs specified they had to be pulled out of service to be inspected. It's illegal to fly a plane in revenue service if it's behind in certain kinds of inspections.
What made matters worse was that when Boutris asked permission from his FAA supervisor to issue a letter of investigation to Southwest in 2007, the supervisor told him to tone it down to a letter of concern, which does not carry the same impact. Eventually, in late March of 2007, Southwest did finish up the late inspections, but only after some airplanes had gone months or years without them. The FAA has announced its intention to fine Southwest ten million dollars for flying the uninspected planes, at least one of which was found to have fatigue cracks after inspections were finally performed.
On a scale of "who cares?" to "stick it to 'em," you can identify two extremes of how one can view this story. If you take the side of Southwest Airlines, you can point out that besides being one of the most profitable airlines in the business, they have never had a catastrophic accident in which more than one person was killed. And that incident, when a ground crew member was pulled into an engine, was due to pilot error, not mechanical failure. True, they didn't follow all the rules, but no harm was done—none of their planes popped their tops like the Aloha Airlines flight did.
On the other extreme, you can say that you keep good safety records like that by following the rules, even if it means grounding a large fraction of your fleet to make overdue inspections. The attitude of Boutris' supervisor appears to be one of "don't rock the boat," which might indicate that he was more concerned with how Southwest Airlines would fare than with the safety of the flying public, despite the fact that he worked for the government. That indicates systemic organizational problems within both the FAA and Southwest Airlines.
Back in high school, I attended Explorer Scout meetings that were held in the basement of a telephone exchange building. On the wall of the break room was a brass plaque, as I recall, and its words went something like this: "No service is so urgent or no business need is so critical that we fail to perform our work safely." Back then, Ma Bell had a guaranteed monopolistic income, and could afford to make safety priority number one. But I thought it was a great motto at the time, no matter what the business was or how it was doing financially. And I still do. I hope Southwest Airlines agrees with me, not just in words, but in actions as well.
Sources: A video of Mr. Boutris' opening statement before a Congressional committee investigating this matter can be viewed at http://salon.glenrose.net/?view=plink&id=6899. A CNN article on the Southwest Airlines actions and the FAA's response is at http://www.cnn.com/2008/US/03/06/southwest.planes/. The Wikipedia article on Aloha Airlines has a brief description of the 1988 accident.
Charlambe "Bobby" Boutris is finding out right now what life as a whistleblower is like. In 1998, the Federal Aviation Administration (FAA) hired him, and an important part of his job was to make sure that airlines complied with what are called Airworthiness Directives (ADs for short). These are rules the FAA issues to ensure the safety of aircraft, detailing such things as regular fuselage inspections, especially for older planes.
You'd think nothing much could go wrong with the fuselage compared to moving parts like the engine and so on, but think again. If you've ever been on a jet aircraft and looked through a window with a view over the wing, you have probably noticed that the wingtip wiggles up and down several inches during air turbulence. That is perfectly normal, and designed into the way the plane works. If the wing were built solidly enough not to wiggle at all, it would make the plane so heavy that it couldn't get off the ground.
But if you've ever bent a paper clip back and forth until it breaks, you know about a thing called metal fatigue. And not only the wing, but all stress-bearing parts of the fuselage experience tiny movements that, over time, can cause metal fatigue and cracks. Most of the time these cracks are small and don't spread. But in 1988, they were responsible for one of the most spectacular airline accidents in aviation history.
Passengers in the first-class section of an Aloha Airlines flight over Maui were astonished to see the roof of the plane pop off and rip away in the violent decompression, taking a flight attendant with it. The pilot, not even fully aware of what happened, quickly adapted to the altered flying characteristics of his plane and safely landed at a nearby airport. The attendant was the only fatality, but clearly, airlines did not want to take the chance of this kind of thing happening again. Investigation showed that the plane, which was one of the oldest in Aloha's fleet, had developed fatigue cracks that had spread to cause the whole top section of the fuselage to fly off.
For this and other very good reasons, the FAA requires air carriers to inspect their fleets for fatigue cracks on a regular basis. Now, these cracks are a statistical thing, like mortality rates. It's hard to predict whether a given plane will develop a crack at a given place by a given time, but the inspections are timed so that on average, any cracks can be caught and repaired well before they become dangerous. But the system works only if you keep to the schedule.
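The reason inspection intervals can be set statistically is that fatigue life shrinks very fast as stress amplitude grows. A rough sketch using Basquin's relation (a standard high-cycle fatigue model) shows the sensitivity; the material constants here are hypothetical illustration values, not data for any real airframe alloy:

```python
# Basquin's relation: sigma_a = A * N**b, so N = (sigma_a / A)**(1 / b).
# A and b below are invented illustration values, not real alloy data.

def cycles_to_failure(stress_amplitude_mpa, A=900.0, b=-0.1):
    """Estimated cycles to failure at a given stress amplitude (MPa)."""
    return (stress_amplitude_mpa / A) ** (1.0 / b)

life_rough = cycles_to_failure(200.0)   # heavier flexing
life_gentle = cycles_to_failure(100.0)  # gentler flexing
# Halving the stress amplitude multiplies the life by about 2**10 = 1024
# with these constants, which is why small repeated flexing adds up.
```

The steep exponent is the whole story: a wing can flex safely for millions of cycles, but a crack concentrates stress locally and pushes the nearby metal far down that curve, which is why catching cracks early matters so much.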
Well, it appears that Southwest Airlines didn't keep to the inspection schedule. In testimony before Congress on April 4, Inspector Boutris told the story of how he found numerous cases in which inspection records were either too mixed up to tell whether the inspections had been done, or showed definitely that planes had gone as long as 30 months past the time when ADs specified they had to be pulled out of service to be inspected. It's illegal to fly a plane in revenue service if it's behind in certain kinds of inspections.
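At bottom, the compliance question Boutris was raising is a simple bookkeeping check: compare each plane's last inspection date against the AD's interval. A minimal sketch, with invented tail numbers and an invented 180-day interval:

```python
# Minimal sketch of an AD compliance check. The tail numbers and the
# 180-day inspection interval are invented for illustration.

from datetime import date, timedelta

def overdue_aircraft(fleet, interval_days, today):
    """Return tail numbers whose last inspection is older than the interval."""
    deadline = today - timedelta(days=interval_days)
    return sorted(tail for tail, last in fleet.items() if last < deadline)

fleet = {
    "N100XX": date(2005, 9, 1),    # far past due in this example
    "N200XX": date(2007, 11, 15),
    "N300XX": date(2008, 2, 1),
}
late = overdue_aircraft(fleet, interval_days=180, today=date(2008, 4, 4))
# late == ["N100XX"]
```

A check this mechanical is exactly what made the muddled records so damning: if the dates can't be trusted, no amount of software can say which planes are legal to fly.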
What made matters worse was that when Boutris asked permission from his FAA supervisor to issue a letter of investigation to Southwest in 2007, the supervisor told him to tone it down to a letter of concern, which does not carry the same impact. Eventually, in late March of 2007, Southwest did finish up the late inspections, but only after some airplanes had gone months or years without them. The FAA has announced its intention to fine Southwest ten million dollars for flying the uninspected planes, at least one of which was found to have fatigue cracks after inspections were finally performed.
On a scale of "who cares?" to "stick it to 'em," you can identify two extremes of how one can view this story. If you take the side of Southwest Airlines, you can point out that besides being one of the most profitable airlines in the business, they have never had a catastrophic accident in which more than one person was killed. And that incident, when a ground crew member was pulled into an engine, was due to pilot error, not mechanical failure. True, they didn't follow all the rules, but no harm was done—none of their planes popped their tops like the Aloha Airlines flight did.
On the other extreme, you can say that you keep good safety records like that by following the rules, even if it means grounding a large fraction of your fleet to make overdue inspections. The attitude of Boutris' supervisor appears to be one of "don't rock the boat," which might indicate that he was more concerned with how Southwest Airlines would fare than with the safety of the flying public, despite the fact that he worked for the government. That points to systemic organizational problems within both the FAA and Southwest Airlines.
Back in high school, I attended Explorer Scout meetings that were held in the basement of a telephone exchange building. On the wall of the break room was a brass plaque, as I recall, and its words went something like this: "No service is so urgent or no business need is so critical that we fail to perform our work safely." Back then, Ma Bell had a guaranteed monopolistic income, and could afford to make safety priority number one. But I thought it was a great motto at the time, no matter what the business was or how it was doing financially. And I still do. I hope Southwest Airlines agrees with me, not just in words, but in actions as well.
Sources: A video of Mr. Boutris' opening statement before a Congressional committee investigating this matter can be viewed at http://salon.glenrose.net/?view=plink&id=6899. A CNN article on the Southwest Airlines actions and the FAA's response is at http://www.cnn.com/2008/US/03/06/southwest.planes/. The Wikipedia article on Aloha Airlines has a brief description of the 1988 accident.
Monday, March 31, 2008
BitTorrent and Comcast: Who Pays and How?
Back on Feb. 4 of this year, I noted how a group of Swedish software experts got in trouble for running a peer-to-peer system for distributing video content over the Internet. The claim made by the prosecutors was that most of the content was pirated. Well, that turned out to be a sign of things to come. For some months now, the major U. S. cable television and Internet network operator Comcast has been in a dispute with BitTorrent Inc., a firm that provides software allowing peer-to-peer sharing of video. And the outcome of the fight may affect how all of us pay for Internet services for years to come.
The first punch in the public fight came when BitTorrent accused Comcast of singling out users of BitTorrent's protocol for interference and interruptions when Comcast's network traffic got too heavy for comfort. At first Comcast denied any such discrimination, but later, under pressure, spokesmen for the cable and network firm admitted that they were doing exactly that. Then the Federal Communications Commission got involved and has held public hearings about the matter. On Mar. 27 (last Thursday), Comcast announced that it was making a number of changes that will both eliminate the discriminatory network measures against BitTorrent users and improve everyone's service through increased software and hardware efficiency and investment. But that hasn't stopped the FCC from announcing another hearing set for Apr. 17 at Stanford University in the heart of Silicon Valley, where I'm sure they will find people with an abundance of opinions on both sides.
What is BitTorrent and how does it work? You may recall the flaps about peer-to-peer sharing of audio files over the Internet a few years ago. BitTorrent's protocol also uses the fact that a file that one person wants is usually stored on thousands of other computers on the network. But video files are thousands of times bigger than audio files, especially if we're talking about HD video, which is becoming increasingly popular. The process of getting only one source computer to send a gigabyte-size file (1,000,000,000 bytes) over the Internet to another computer is tedious, error-prone, and takes a long time. So BitTorrent draws upon many of the other computers that have the file in question and gets them to cooperate by sending different pieces of the file to the target computer. Somehow the software coordinates all this confusion of activity, and the end result to the user is that he or she gets the desired file a lot faster than if only two computers were involved.
But as with so many things, what's good for the individual may not bode well for the group. Comcast and other network service providers estimate that because of BitTorrent's popularity, as much as half of all Internet traffic at certain times consists of peer-to-peer file sharing of this type. Comcast has defended its actions against BitTorrent protocols simply as their attempt to manage their limited network capacity fairly so that other customers were not left out in the cold with impaired service.
The word "fairly" means ethics has come into the picture. This ethical question arises from a tension that was born with the Internet some two decades ago, a tension between two competing philosophies.
Call the first the egalitarian-vision philosophy: the idea that information should be free, all Internet users should have the same privileges and access, and that such ideas should be built into the technical machinery of the Internet. The founders and early users of the Internet were imbued with this philosophy, and its legacy lives on in the basic structure of Internet protocols.
The second philosophy is the commercial free-enterprise notion that the Internet is a means to make money, and you should charge whatever the traffic will bear. It was years before anyone figured out how to make money with the Internet, but with the coming of Google I think it is fair to say that some people, anyway, have managed to do that. This philosophy sees the market as the best arbiter of resource distribution and even matters of fairness. Although there are now a few coarse-grained ways of charging people who want faster Internet service more money, hardly anyone pays a surcharge that depends on how much they actually use it. That is, if you ask your service provider for high-speed Internet service, you get a monthly bill that's the same whether you never touched your computer that month or whether you downloaded seventeen movies in ten days using BitTorrent.
The network operators argue, and with some merit, that if five percent of their customers tie up half the resources of the entire network, it is not fair to the other 95% who pay just as much but have their service degraded by the overcrowding due to BitTorrent traffic. One alternative that Time Warner Cable is reported to be trying out in Beaumont, Texas on a trial basis is "metered" Internet use. That is, if you use more than a certain bandwidth-time product, let's call it, then you pay an extra fee. Metered use flies in the face of decades of Internet tradition and egalitarian philosophy, but if such distortions of the market as those caused by BitTorrent users continue, something will have to change, and the network companies may resort to metering on a wider scale.
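A metered scheme of the kind reportedly being tried might look like the sketch below. The flat fee, included allowance, and overage rate are invented numbers for illustration, not Time Warner's actual prices:

```python
# Sketch of metered Internet billing: a flat monthly fee covers a usage
# allowance (a crude bandwidth-time product, here just gigabytes
# transferred), and usage beyond it is billed per gigabyte.
# All prices and caps are invented for illustration.

def monthly_bill(gb_used, flat_fee=40.0, included_gb=50.0, per_gb_over=1.00):
    """Flat fee plus per-GB charge on any usage beyond the allowance."""
    overage = max(0.0, gb_used - included_gb)
    return flat_fee + overage * per_gb_over

light_user = monthly_bill(5.0)     # stays at the flat fee
heavy_user = monthly_bill(250.0)   # pays for 200 GB of overage
```

Under a scheme like this, the five percent of heavy users described above would finally see a bill proportional to the load they place on the network, while light users' bills would not change at all.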
A curious analogy to what is happening now with BitTorrent and Comcast went on for over a century in New York City. Until the late 1980s, residential users of the Big Apple's water supply had no meters—they just paid a flat monthly fee. You can imagine how this affected the way people used water. Finally, meters were installed, and the city as a whole used 28% less water in 2006 than it did in 1979. The Internet isn't water, but like water, it is not an infinite resource, and we may have to start paying by the drink if we don't want the whole thing to break down.
Sources: Bob Fernandez of the Philadelphia Inquirer has reported extensively on the BitTorrent-Comcast dispute, and I used his articles published on Mar. 23 (http://www.philly.com/philly/business/20080323_Online_Video__Data_Tidal_Wave_.html) and Mar. 27 (http://www.philly.com/philly/business/20080327_Comcast_agreement_in_dispute_with_BitTorrent.html). The statistic about New York City water use came from the Wikipedia article "Environmental issues in New York City."
Monday, March 24, 2008
Sustainable—But At What Cost?
I read a lot of discussion these days about "sustainability," "sustainable engineering," "sustainable agriculture," and so on. Sustainability, we are told, is the key to solving everything from global warming to finding world peace. What exactly is sustainability, and what are its implications?
One of the most obvious features of today's technological economy is not sustainable: the use of fossil fuels, which means mainly oil, natural gas, and coal. However these resources were formed (and there is still a good bit of debate about that), everybody agrees it took millions of years, and we stand a fair chance of running through them in a good deal less than 0.1% of that time, say a few hundred years. So the use of fossil fuels for energy is not sustainable.
So what? If you look around for anything at all, technological or not, which has turned out to be truly sustainable over recorded history, the list is fairly short. Things like the practice of begetting and raising families, farming, the life of some cities (e. g. Damascus, which is one of the oldest cities on earth), and even a few (very few) business firms have gone on for hundreds of years or more, and show no sign of disappearing because of lack of resources. I could add the professions of doctors and lawyers, and let's not forget taxes, but not governments that levy taxes—the habit endures even though the taxing entities don't.
The proponents of sustainability want basically everything we do to be a part of that kind of list—a list of things which have long traditions going back over many cultures and governments into the past.
In an article in the current issue of The New Atlantis, Yuval Levin makes the point that certain ideas vigorously promoted by political liberals in the U. S. are actually quite conservative. Sustainability, if successfully implemented, fits right into this pattern. If all social activities, technological and otherwise, were sustainable in the sense that liberals usually mean, the activities would go on and on without having to end because of physical limitations. While certain features might change, the physical resources needed would be either renewable or permanent.
Now that is a very conservative picture, meaning that the physical essentials of technology would not change. If new materials were invented that required using something that couldn't be recycled and reused, then they wouldn't be sustainable, and you couldn't use them. Everything would be recycled, with energy coming only from the sun. (Strictly speaking, even the sun isn't sustainable, although we can count on it shining for a few billion more years.)
What if we went to such a totally sustainable economy? Some things wouldn't change much at all. Most steel is now made from recycled scrap, for instance, so that wouldn't be much of a problem.
But what about concrete? I have toyed with the idea of recycling concrete, because as far as I know, you could apply enough heat to it, drive off the water, and get back the calcium silicate that was in the original Portland cement. The trouble with this is, it would be vastly more expensive (and energy-intensive) to make cement from recycled concrete (laboriously hauled back from wherever it was poured to the recycling plant, where huge amounts of energy would be required) than it would be simply to dig up some more limestone and sand from the ground. Ah, but limestone and sand are not renewable resources. Yes, there is enough limestone and sand to last us a long time, but if you're going to be a sustainability absolutist, you can't use anything that isn't recycled or, in principle, recyclable.
I'm pushing this idea to the limits to make a point, but the point is a valid one. Namely, some things are more easily sustainable than others, and it simply doesn't make sense to hold sustainability up as a practical goal for every technological field, unless we are willing to make some very weird and silly changes in the way we do things.
While I was on vacation last week, I toured Indian City U. S. A. outside Anadarko, Oklahoma. It's a sort of outdoor museum where seven different kinds of Native American dwellings have been constructed and preserved. It was pouring rain at the time, but that didn't stop our guide from pointing out the different features of the various structures which were, of course, made from all-natural materials: tree trunks, mud, grass, and so on. Native Americans were the first recyclers, he said, since when they were finished with a structure they just abandoned it and let it return to Nature.
Though I didn't say anything at the time, I had a big "Yes, but. . . " in mind. Although estimates of how many people lived in what is now called North and South America before 1492 vary from 8 million to over 100 million, the figure is certainly less than the approximately 900 million people that the New World harbors today. And the Americas are some of the least densely populated regions of the developed world. If we all went back to living the way the first Native Americans did, there is no way that we would all be able to survive, even if we all suddenly acquired the hunting, gathering, and rudimentary agricultural skills necessary for such a life. And if we managed somehow to eke out a living, few of us would enjoy rising at dawn, doing back-breaking manual labor all day, and retiring at dusk only to do it all over again the next morning.
The only time when something like this has been tried on a massive scale recently was the Great Cultural Revolution under Mao Tse-tung in the People's Republic of China, from 1966 to 1976. Millions of intellectuals and other suspicious persons, including most of the faculty members at all Chinese universities, were summarily hauled off to the countryside for a little bucolic "re-education" that lasted seven or eight years. I have known citizens of that country who lived through that period, and they tell me that it set back their lives a decade or more, and the progress of the country by a generation. But it was certainly sustainable, in the sense that they were still living and probably consuming fewer resources than they would have in the cities.
Few if any of the proponents of sustainability have in mind a radical, total shift to something like that. Or if they do, they're not talking about it openly. I favor a reasoned, appropriate move toward more nearly sustainable technology when it makes economic sense, when its adoption won't cause undue suffering or disruption, and when it leads to more human thriving than formerly. But a draconian swift transition to a totally sustainable economy would be in most respects indistinguishable from a worldwide depression. And I hope we don't get to that point any time soon.
Sources: Yuval Levin's article "Science and the Left" appears in the Winter 2008 edition of The New Atlantis.
Saturday, March 15, 2008
Robot Rats and SARs for PEPs
Sometimes things happen fast in politics. On Sunday morning, March 9, Eliot Spitzer woke up to the beginning of his 63rd week in office as Governor of New York State, an office which served as a stepping stone to the White House for his predecessors Theodore and Franklin D. Roosevelt. He had an apparently unstained reputation for fighting corruption in high places, which he had earned during his seven years as New York State's Attorney General, going after everything from Enron-type financial scandals to prostitution rings.
Two days from now—on Monday, March 17—he will hand over the keys of office and become Private Citizen Spitzer. Earlier this week, the New York Times revealed that Spitzer had been a customer of a prostitution ring that was under federal investigation. This evidence was revealed by a computer scan of Spitzer's banking transactions—a robot rat, if you will. The political firestorm that the news report touched off must have convinced him that trying to stay in office was an exercise in futility. On March 12, he announced that he was resigning. Ironies abound in a situation like this, but an ironic twist of special interest to the technical community is that Spitzer was caught by software that he had himself encouraged banks to use during his years as Attorney General. How did it work?
Banks have ethical obligations both to their customers and to the governments in whose jurisdictions they operate. Customers expect banks to keep their collective mouths shut about private financial matters, and by and large, banks are pretty good at doing this. But law enforcement officials realized long ago that banks are where the money is, including ill-gotten gains from enterprises such as drug dealing and prostitution. That is why in 1970, Congress passed the Bank Secrecy Act. This act is why you have to fill out a form with some identifying information any time you engage your bank in a single cash transaction of more than $10,000.
Criminals are as adaptable as anybody, and soon they learned not to trip that $10,000 wire by breaking up transactions into smaller amounts. To plug this leak in the dike, Congress enacted the Money Laundering Control Act of 1986. Besides asking banks to report any transactions over $5,000 that looked like they were evasions of the $10,000 limit, it removed liability for over-reporting. This meant that if you got annoyed at being called by the FBI for a series of legitimate but large financial transactions, you could no longer sue your bank for falsely tattling on you.
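The kind of "structuring" detection described above lends itself naturally to automation. Here is a minimal sketch of the idea in Python; the function name, the seven-day window, and the sample data are all invented for illustration, and real bank compliance systems are of course far more elaborate:

```python
# Hypothetical sketch of "structuring" detection: flag customers whose
# individually sub-threshold cash deposits, taken together over a short
# window, add up to more than the $10,000 reporting limit.

from collections import defaultdict
from datetime import date, timedelta

REPORT_LIMIT = 10_000         # single-transaction reporting threshold
LOOKBACK = timedelta(days=7)  # window over which deposits are aggregated

def find_structuring(transactions):
    """transactions: list of (customer, date, amount) tuples.
    Returns the set of customers whose small deposits sum past the limit."""
    by_customer = defaultdict(list)
    for customer, day, amount in transactions:
        if amount < REPORT_LIMIT:   # each deposit alone avoids the form
            by_customer[customer].append((day, amount))
    flagged = set()
    for customer, deposits in by_customer.items():
        deposits.sort()
        for i, (start, _) in enumerate(deposits):
            window_total = sum(a for d, a in deposits[i:]
                               if d - start <= LOOKBACK)
            if window_total > REPORT_LIMIT:
                flagged.add(customer)
                break
    return flagged
```

The point of the sketch is simply that once records are computerized, the pattern Congress had in mind in 1986 reduces to a straightforward aggregation over a sliding time window.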
As time went on, $5,000 became less and less money in real terms, meaning that without doing a thing, Congress gradually lowered the threshold on what banks had to report. After a few banks got in trouble for under-reporting and computerized banking became nearly universal, the banks had the bright idea of just reporting everything automatically that looked suspicious. But first they had to tell the computers what "looking suspicious" meant.
One factor they loaded into their software, believe it or not, was the degree to which their customers are "politically exposed persons" (PEPs for short). If you are a governor, senator, UN delegate, or other personage whose position makes you more likely either to be the victim of a corrupt action (e.g., blackmail) or perhaps the perpetrator, you get a high PEP rating, and the threshold for making the computer spit out summaries of fishy-looking activity is accordingly set very low. Spitzer, needless to say, was a PEP, and when several large transactions to one firm showed up on a report, the bank decided to file a Suspicious Activity Report (SAR for short) with the IRS.
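The way a PEP rating lowers the review threshold can be illustrated with a few lines of code. This is only a toy model; the rating scale, the formula, and the dollar figures are my own inventions, not anything a real bank discloses:

```python
# Toy illustration: the higher a customer's political exposure,
# the lower the dollar threshold at which activity gets summarized
# for human review. All numbers here are invented for the example.

BASE_THRESHOLD = 10_000  # ordinary customer: only large transfers flagged

def review_threshold(pep_rating):
    """pep_rating: 0 (private citizen) up to, say, 10 (head of state)."""
    return BASE_THRESHOLD / (1 + pep_rating)

def needs_review(amount, pep_rating):
    """True if a transaction of this size merits a human look."""
    return amount >= review_threshold(pep_rating)
```

Under a scheme like this, a $4,000 transfer from an ordinary customer passes unremarked, while the same transfer from a sitting governor lands on a compliance officer's desk.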
At this point, humans got involved, but they could not have done their jobs without the aid of large software programs that inspect millions if not billions of transactions every year. Initially the investigators thought the governor might be the victim of blackmail, but when they found out the firm was a front for a prostitution ring, things took a different turn altogether.
Computers don't join political parties, but the people who program and operate them do. This story shows how technology can help law enforcement with investigations that in times past would have been impossible because of the sheer volume of data to inspect. Back in the days when the most advanced technology in a bank was the Friden calculating machine sitting on the comptroller's desk, a person's eyes were the only way to inspect records. That limited the nature and scope of investigations, although it also probably made things easier to do informally that were strictly against the law, as favors both to criminals and to policemen and detectives. Today, the same criteria can be applied impartially and exactly to millions of accounts, but at some point human judgment always comes into play. Once the computers provided the information to investigators, the investigators had to decide what to do with it.
And it was human judgment, however flawed, that made Governor Spitzer think that maybe he would escape detection of his expensive dalliances. Perhaps he was unconsciously hewing to an outmoded habit he developed before his own actions helped to tighten the screws on money launderers and others who do not care for banks to report their transactions to the government. Whatever the reason, this episode shows that the power to analyze large amounts of private computerized data can make or break very influential people. And without software engineers, no one would have that power.
Sources: A good summary of the laws and processes that led investigators to Spitzer's transactions is at http://firedoglake.com/2008/03/12/money-laundering-suspicious-activities-reports-and-structuring/. A Newsday account of how Spitzer's bank discovered the specific transactions is at http://www.newsday.com/news/local/state/ny-stspitzerbank0312,0,4637246.story.
Tuesday, March 11, 2008
Engineering the End of Malaria
In my Feb. 25 entry, I used the idea of wiping out malaria as an example of what might be done with "a few billion dollars" that would otherwise go toward dealing with global warming. I will admit that I simply pulled that number out of the air. Since then I have learned that while eliminating malaria is something that people as wealthy as Bill and Melinda Gates have tried to do, it is by no means a simple or straightforward task. But engineers may be able to help in some ways you wouldn't expect.
As you probably know, people contract malaria from the bite of a certain kind of mosquito that is infected with the protozoan parasite that causes the disease. The parasite hides inside liver cells or red blood cells in its human host, which is one reason that no one has devised an effective vaccine for the disease. Drugs are available to prevent it, but you have to take them all the time, sick or well, and such prophylactic treatment is too expensive for many residents of areas such as Africa where malaria is endemic. So many anti-malaria campaigns in the past have concentrated on eliminating the animal host: the anopheles mosquito that carries the malaria parasite.
The New York Times recently carried a report about whether malaria can be eliminated as smallpox has been. It seems that the consensus of public-health experts is that you can markedly reduce the incidence of malaria through spraying mosquito-infested areas with insecticide, but absolute elimination is an elusive goal at best. In Sri Lanka, for example, systematic spraying programs helped reduce the number of malaria cases from over a million in 1955 to only 18 in 1963. But the government cut back its programs, and malaria came back, reaching a level of over half a million cases in 1968. That lesson finally learned, Sri Lanka started spraying again and hasn't stopped, and the annual rate of malaria cases is now down to a few thousand.
At a 2007 malaria conference, Bill and Melinda Gates challenged public health leaders around the world to eradicate malaria altogether. Their foundation has already spent over a billion dollars fighting malaria, but clearly more than just money will be needed.
One commentator in Scientific American has pointed out that the free mosquito-net programs sponsored by many governments may not be as effective as they could be. Here is one area where engineers can get involved. The classic kind of mosquito net hangs from a string tied to the ceiling and drapes down to the edges of the mattress, protecting the sleeper from night-time mosquito bites, which is when the anopheles variety is active. This is fine as long as you have a mattress for the net to tuck under. But in thousands of villages where a mattress for every family member would be an unheard-of luxury, young children sleep on the ground. There are rectangular frame-type mosquito nets available that will work in this situation, but they aren't as convenient as the single-string type.
This little net problem is an example of how complex the malaria issue is. Even if engineers devised a new type of net that was ideal for the poorest residents, there are a lot of problems that remain. How do you get this net into the hands of those who can use it? How do you persuade them that using it will keep their children healthier? Who pays for all of this, especially if the new net costs more than the old ineffective ones?
In times past, some engineers would have said these issues were not engineering problems. But organizations like Engineers Without Borders (EWB) realize that the hardware or software part of a solution is only a part, and often not the most important part. An effective technical solution to any problem also has to factor in economics, motivation, distribution, education, and so on. EWB is an organization dedicated to providing engineering solutions for disadvantaged communities through sustainable engineering. Through many chapters at universities and colleges with engineering schools, they recruit volunteer students who get a holistic picture not just of a technical problem, but of the entire cultural and social context in which it arises. Though I never had such an experience in my student days, I think I might have been a very different kind of engineer if I had.
Only time will tell whether the wealth of the Gates Foundation, the ingenuity of engineers, medical researchers, and public health officials, and the willingness of affected communities will converge to defeat that old tropical enemy, malaria. For the reasons I've discussed, it is a much harder task than the smallpox battle. But I wish the best for everyone involved.
Sources: The New York Times article on whether malaria can be defeated was carried in the Mar. 4, 2008 online edition at http://www.nytimes.com/2008/03/04/health/04mala.html. Scientific American's article on mosquito-net engineering appeared in the January 2008 issue, available online at http://www.sciam.com/article.cfm?id=a-better-mosquito-net. And Engineers Without Borders-International has a website at http://www.ewb-international.org.
Monday, March 03, 2008
Locked-In Profits or Service to the Downtrodden?
Suppose you're the wife of a man who got arrested in Oakland, California. You weren't with him at the time, and all you know is the bare fact that he was arrested. Until recently, your only alternative was to call the Alameda County public information number, work your way through a phone tree, and hope there would be a live person at the other end who could tell you something. Sometimes there was and sometimes there wasn't. But now, thanks to the initiative of some staff in the Alameda County Information Technology department, there is an Inmate Locator on the county's website. If you have the person's full name, or even if all you know is that they were booked in the last twenty-four hours, you can get online and see identifying information, the "custody status," and which jail they're in. Of course, you have to have a computer and a high-speed internet connection to do this efficiently, but doesn't everybody?
Despite the drawback of needing a computer to use it, this little advance in IT touches on a subject that I have seldom seen addressed in the engineering ethics literature. What special obligations or ethical issues are related to engineering as it applies to prisoners and jails? And in particular, what should we say about the recent trend toward privatization in U. S. prisons?
You may have read that the United States has both the highest documented rate of incarceration in the world (over 700 per 100,000 population) and the largest absolute number of people behind bars (over 2 million, plus another 5 million or so on probation or parole). The reasons for this are worth going into, but for now let's just say they're a given. All these people have to be housed, fed, treated for medical conditions when necessary, shipped around, and maybe allowed some education and communication privileges. In addition, there are the families and friends of prisoners who have certain rights and privileges with regard to those behind bars. As the Alameda County IT folks have shown, engineering can benefit both the prisoners and their friends and relatives, in an entirely legal way (I'm not talking about high-tech jailbreaks here, which I suppose would be another way engineering could enter the picture).
I think it's significant that the people who came up with this idea were government employees (the article describing the system did not state otherwise). Along with the boom in prison populations has come a related boom in private prisons and companies that operate them. One of the largest, the Corrections Corporation of America, has gotten some coverage in this week's New Yorker magazine for its less-than-ideal operation of an illegal-immigrant holding facility outside Taylor, Texas, just up the road from my university here in San Marcos.
Privatization has been sold as a kind of universal solution to every government cost problem, but there are limits to what it can do. Somehow I suspect if Alameda County had outsourced its jail operations to a private firm, that firm would not have hired five web developers to come up with the Inmate Locator. Abuses can happen both in private and in public organizations, but the incentives are different.
As an employee of a state university, I view the advantages of well-run government-operated services as chiefly these: (1) Stability---the turnover in government employment is much lower than in comparable private operations; (2) Esprit de corps---in well-run government operations, a public-spiritedness can foster a selfless dedication to the needs of those served; (3) Relative lack of cost-squeezing pressures---assuming the management makes a good case to the appropriate legislature, expenditures can be planned and justified without concern that they will risk ending the whole enterprise if a lower bidder comes along.
I'm well aware that a critic could come along and turn each of those arguments on its head. Stability can mean that once a goof-off gets a government job, he's set for life. Private companies can develop esprit de corps too, and cost-squeezing pressures can happen in government as well as private industry.
But I would point out a philosophical difference between the two approaches. The bottom line of government service is just that: service. Ideally, the public servant is as dedicated to his or her clients as the nuns of centuries ago who founded and staffed the first hospitals. At least, there is no philosophical conflict between having a totally dedicated public servant and the overall goals of the organization.
With private companies, especially those which are joint-stock (publicly owned) firms, the fundamental philosophy is different. If a company doesn't make money for more than a certain length of time, it should disappear, and often does (despite evidence such as General Motors to the contrary). Companies can provide good services, but there is a built-in conflict between the ultimate raison d'etre of a company, which is making money for the owners, and service to its customers or clients, at least to the extent that improvements in the service or product make less profit available to the owners.
This is not to say that all corporate enterprise is morally suspect—absolutely not. But prisoners are a special kind of client, and are treated specially along with children, the elderly, and medical patients in a number of ethical contexts such as the rules for ethical conduct of research studies. Unlike a customer at a hardware store, if a prisoner doesn't like the service he's getting, he can't just walk away and go to another prison. I think that is the main reason why for nearly the entire history of prisons in the U. S., they have been exclusively a government-run operation. Maybe the government didn't do that good a job, but at least there was a way, in principle, for abuses in government-run prisons to be corrected through the democratic process. Private companies that run prisons can and do claim that vital information about their operations is a trade secret, and therefore not available for public access, at least not without a lengthy and often unsuccessful series of inquiries under the Freedom of Information Act. This kind of secrecy can hide abuses and wrongdoing that would be harder to hide in a public setting.
So what is the bottom line here? First, kudos to the IT folks in the Alameda County Sheriff's Office, who make it possible for the over 100 inmates booked each 24 hours to be found by their relatives or friends much more easily than before. Second, any time an engineer does something related to prisons or prisoners, he or she should remember that prisoners are not just any old client. They have special rights and privileges. Yes, many of them have done something wrong. But the fact that we are a country of laws means that we need to hold those laws in high regard, especially when we deal with people who may have broken them.
Sources: The article on the Inmate Locator appeared in the online issue of the San Francisco Examiner for Mar. 3, 2008 at http://extra.examiner.com/linker/?url=http%3A%2F%2Fwww%2Einsidebayarea%2Ecom%2Fci%5F8435580%3Fsource%3Drss. The New Yorker article by Margaret Talbot on CCA's operation in Taylor is entitled "Lost Children," on p. 58 of the Mar. 3, 2008 edition. Statistics on U. S. prisons were found at the Wikipedia article "Prisons in the United States."
Despite the drawback of needing a computer to use it, this little advance in IT touches on a subject that I have seldom seen addressed in the engineering ethics literature. What special obligations or ethical issues are related to engineering as it applies to prisoners and jails? And in particular, what should we say about the recent trend toward privatization in U. S. prisons?
You may have read that the United States has the both the highest documented rate of incarceration in the world (over 700 per 100,000 population) and the largest absolute number of people behind bars (over 2 million, plus another 5 million or so on probation or parole). The reasons for this are worth going into, but for now let's just say they're a given. All these people have to be housed, fed, treated for medical conditions when necessary, shipped around, and maybe allowed some education and communication privileges. In addition, there are the families and friends of prisoners who have certain rights and privileges with regard to those behind bars. As the Alameda County IT folks have shown, engineering can benefit both the prisoners and their friends and relatives, in an entirely legal way (I'm not talking about high-tech jailbreaks here, which I suppose would be another way engineering could enter the picture).
I think it's significant that the people who came up with this idea were government employees (the article describing the system did not state otherwise). Along with the boom in prison populations has come a related boom in private prisons and companies that operate them. One of the largest, the Corrections Corporation of America, has gotten some coverage in this week's New Yorker magazine for its less-than-ideal operation of an illegal-immigrant holding facility outside Taylor, Texas, just up the road from my university here in San Marcos.
Privatization has been sold as a kind of universal solution to every government cost problem, but there are limits to what it can do. Somehow I suspect if Alameda County had outsourced its jail operations to a private firm, that firm would not have hired five web developers to come up with the Inmate Locator. Abuses can happen both in private and in public organizations, but the incentives are different.
As an employee of a state university, I view the advantages of well-run government-operated services as chiefly these: (1) Stability---the turnover in government employment is much lower than in comparable private operations; (2) Esprit de corps---in well-run government operations, a public-spiritedness can foster a selfless dedication to the needs of those serviced; (3) Relative lack of cost-squeezing pressures---assuming the management makes a good case to the appropriate legislature, expenditures can be planned and justified without concern that they will risk ending the whole enterprise if a lower bidder comes along.
I'm well aware that a critic could come along and turn each of those arguments on its head. Stability can mean that once a goof-off gets a government job, he's set for life. Private companies can develop esprit de corps too, and cost-squeezing pressures can happen in government as well as private industry.
But I would point out a philosophical difference between the two approaches. The bottom line of government service is just that: service. Ideally, the public servant is as dedicated to his or her clients as the nuns of centuries ago who founded and staffed the first hospitals. At least, there is no philosophical conflict between having a totally dedicated public servant and the overall goals of the organization.
With private companies, especially those which are joint-stock (publicly owned) firms, the fundamental philosophy is different. If a company doesn't make money for more than a certain length of time, it should disappear, and often does (despite evidence such as General Motors to the contrary). Companies can provide good services, but there is a built-in conflict between the ultimate raison d'etre of a company, which is making money for the owners, and service to its customers or clients, at least to the extent that improvements in the service or product make less profit available to the owners.
This is not to say that all corporate enterprise is morally suspect—absolutely not. But prisoners are a special kind of client, and are treated specially along with children, the elderly, and medical patients in a number of ethical contexts such as the rules for ethical conduct of research studies. Unlike a customer at a hardware store, a prisoner who doesn't like the service he's getting can't just walk away and go to another prison. I think that is the main reason why, for nearly the entire history of prisons in the U. S., they have been exclusively government-run. Maybe the government didn't do that good a job, but at least there was a way, in principle, for abuses in government-run prisons to be corrected through the democratic process. Private companies that run prisons can and do claim that vital information about their operations is a trade secret, and therefore not available for public access, at least not without a lengthy and often unsuccessful series of inquiries under the Freedom of Information Act. This kind of secrecy can hide abuses and wrongdoing that would be harder to conceal in a public setting.
So what is the bottom line here? First, kudos to the IT folks in the Alameda County Sheriff's Office, who make it possible for the over 100 inmates booked each 24 hours to be found by their relatives or friends much more easily than before. Second, any time an engineer does something related to prisons or prisoners, he or she should remember that prisoners are not just any old client. They have special rights and privileges. Yes, many of them have done something wrong. But the fact that we are a country of laws means that we need to hold those laws in high regard, especially when we deal with people who may have broken them.
Sources: The article on Inmate Finder appeared in the online issue of the San Francisco Examiner for Mar. 3, 2008 at http://extra.examiner.com/linker/?url=http%3A%2F%2Fwww%2Einsidebayarea%2Ecom%2Fci%5F8435580%3Fsource%3Drss. The New Yorker article by Margaret Talbot on CCA's operation in Taylor is entitled "Lost Children," on p. 58 of the Mar. 3, 2008 edition. Statistics on U. S. prisons were found at the Wikipedia article "Prisons in the United States."
Monday, February 25, 2008
Discounting Global Warming, Revisited
Running this blog is a pretty one-sided deal most of the time. Every week I send out some thoughts into the blogosphere, and rarely do I get a response. But last week's post about applying the economics of discounting to global warming got not just one, but two responses, both making similar criticisms. For this blog, that amounts to a storm of controversy, and I can't resist responding. But first, let me summarize the criticisms.
The first post (to be found under Nov. 19, 2007's "Yahoo Pays. . . ", to which it refers) accuses me of being either "sloppy or inconsistent." Here is some of what it says: "In the post about Yahoo, you get wrought up about the company not doing more to protect their [the Chinese citizens'] identity for engaging in free speech, but in "Should we discount global warming?" you advocate using a discount rate even though some of that $50 billion is lost lives due to less reliable weather, increased flooding, and more famine. (NOT jail time, death.) . . . . So should Yahoo continue its economic discounting, knowing that the occasional customer is jailed; or should the Yahoo-wannabes stop counting human suffering in dollars?"
The second post responding to last week's blog, signed "Cousin Mike" (yes, he is my cousin) says this, among other things: "A courtroom-drama movie once depicted an auto manufacturer as having made a conscious decision not to fix a problem with their brakes because they calculated economically that it was less expensive to pay off claims to people killed by the brake failures than to fix the flaw. The movie-makers obviously wanted the audience to view such conduct as morally odious, and I agree . . . . I know that if we really thought every life was infinitely valuable, we'd build autos like bumper cars, incapable of a fatal crash . . . . But it still gives me chills to think that the economically correct engineering solution to global warming is to leave the brakes flawed 'cause it'll cost too much money to fix."
The point these respondents are making, it seems to me, is that while I seem to hold up certain principles as absolutes (e. g. freedom rather than jail time for Chinese users of Yahoo), when I propose discounting global warming, I appear to be throwing away all these fine moral distinctions in preference to a cold economic calculation.
Allow me to differ.
Imagine a set of scales, like Lady Justice (the gal with the blindfold) is often portrayed as holding up. If I were to do an editorial cartoon summarizing the criticisms above, it would show a pile of currency and gold coins on one pan of the scales, pulling it down, as a crowd of impoverished coastal fishermen drown in a miniature version of Hurricane Katrina on the rising pan. (You see why I don't do editorial cartoons for a living.) It looks like I'm cynically trading off money for lives. But that was not my intention.
When economic analyses are used on a large-scale problem such as global warming over a time scale of decades, the dollars involved are not exactly the same kind of thing that you pull out of your wallet. They are a symbol. Well, all money is symbolic in one sense, but what I mean is, the dollars in the global-warming discount calculation are a placeholder for the energy and wealth of nations. It isn't just dollars versus lives. It's lives versus lives, and dollars versus dollars, and Statues of Liberty versus whatever unimaginable architectural achievements the next century might hold, if we don't wreck the world's economy with misinformed economic dictates whose highly counterproductive effects could cost lives as well.
You want to talk lives? I'll talk lives. Malaria kills between one and three million people every year, most of them poor African youths and children, and debilitates hundreds of millions more. It is entirely possible to treat a population with prophylactic anti-malarial drugs so as to reduce the incidence of malaria to near zero. Doing so would not only eliminate an important direct cause of death, but would result in the equivalent of billions of dollars of economic stimulus to the areas affected because of the increased productivity of those who would no longer contract this disease.
I don't know what it would cost to wipe out malaria worldwide, but something similar has been done at least once: we eradicated smallpox. Say it would cost a few billion dollars. Now that few billion dollars is money that cannot be spent on reducing global warming. If you like, you can consider it as part of the money we could spend now on things other than global warming, if we buy into the economic-discounting idea that there is a reasonable and finite amount of money we should spend on global warming, and no more. And that money not spent on global warming, but spent on eradicating malaria, will absolutely save lives.
My point is, there are lives on both sides of the equation, not just dollars versus lives. What we're really talking about is the grand question of how to expend our current capital resources—natural, monetary, and most of all, human—and how much of them to expend on efforts to reduce global warming.
I have no objections to a calm, rational approach to reducing our use of fossil fuels. I think it's terrible that we fight over that black liquid that comes out of the ground in places that are inconvenient to get at, and I would love to see a coordinated global effort devoted to developing renewable energy sources that would eventually replace most of what we now use petroleum for. But the critical question is how this is to be done. I was listening to a discussion on the BBC the other morning about how air travel contributes to global warming. Both sides agreed that we had to quit burning fossil fuels to fly. To me, that poses a whole series of awkward questions. Okay, if we quit flying, how are we going to sustain the global economy? And if we keep flying without fossil fuels, how are we going to do it? The only battery-powered airplanes I know of could carry maybe a mouse, at a strain.
We saw what a hit the U. S. economy took with just a slight reduction in air travel after 9/11. Imagine what would happen to the world economy if somehow the U. N. passed a binding resolution to reduce air travel by 80% or something, and everybody stuck to it. The Great Depression in the U. S. is only a distant memory, but economic disasters are a lot more real to residents of many other countries which have suffered them more recently. If some ill-considered global-warming measure ended up putting the world economy in the tank for a few years, do you think that's not going to cost lives? And do you think the poorest and most vulnerable people won't pay the price in lost jobs and starvation? Think again.
In large measure, we are discussing imponderables, and that's one reason why talk about global warming inspires such overwrought emotions on both sides. The fact is, nobody knows exactly what would happen if we don't do anything about it, and nobody can guarantee that any given measure will avert the spectrum of catastrophes that Al Gore and company have laid out for our viewing pleasure. Like many things in life, it is a crapshoot. But we can definitely say what wrecking economies with arbitrary regulations can do, and whatever is done, we should avoid doing that to the extent possible and consistent with a measured approach toward the problem of global warming.
Sources: Statistics on malaria can be found at the Wikipedia entry under "Malaria."
Monday, February 18, 2008
Should We Discount Global Warming?
No, by "discount," I don't mean "ignore altogether." What I mean is what bankers and economists mean by the word. The discount rate is an assumed interest rate that is used to make economic decisions, as anyone who has taken engineering economics will recall. And the funny thing is, although discussions of global warming invariably deal with matters fifty or a hundred years in the future, hardly anyone applies the simple economics of discount rates to the problem. When you do, the result is a surprise.
Gary S. Becker is a Nobel-Prize-winning economist who thinks any discussion of global warming should factor in a reasonable discount rate. Here is his argument in a nutshell. Suppose, for the sake of argument, that if we do nothing about global warming, fifty years from now it will cause $2 trillion of damage (technically termed "utility costs" in terms of lost income from flooded coastlands, etc.). It turns out that if you roll the tape of time back to 2008, you could pay for that $2 trillion by investing only $500 billion at a rate of return of 3 percent, which is pretty easy to do (assuming you have the $500 billion in the first place). Becker makes the point that if we went ahead now with most of the more radical proposals for doing something about global warming—reducing carbon emissions by 70%, putting big restrictions on fossil-fuel-burning technologies, and so on—they would cost a lot more than $500 billion in the next few years. If these restrictions cost, say, $1 trillion, we are being foolish by spending all that money now to avert something we could offset with half that amount.
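Becker's figure is straightforward present-value arithmetic, which can be checked in a few lines of Python (the $2 trillion damage estimate and the 3 percent rate are the essay's assumptions, not mine):

```python
# Present value of a future cost, discounted at a constant annual rate:
#   PV = FV / (1 + r)^n
def present_value(future_cost, rate, years):
    return future_cost / (1.0 + rate) ** years

# Becker's example: $2 trillion of damage 50 years out, 3 percent discount rate.
pv = present_value(2e12, 0.03, 50)
print(f"${pv / 1e9:.0f} billion")  # about $456 billion -- roughly the $500 billion quoted
```

Note how sensitive the answer is to the assumed rate: at 5 percent the same $2 trillion discounts to under $175 billion, which is one reason the choice of discount rate dominates these debates.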
This is not an argument to do nothing. On the contrary, it is one of the few arguments I've seen on the subject that requires us to come up with some quantitative information in order to make a rational economic decision, which is what engineers do all the time. The usual approach used by advocates of extreme measures is to paint a picture of the end of civilization as we know it if we don't go green 24/7 and never allow the problem to leave our consciousness for the rest of our lives. Put more quantitatively, these folks use a discount rate of zero, which I suppose is a reasonable one if you assume that the alternative is either peace and security on the one hand by doing everything they advocate, or death to humanity on the other. If a mugger walks up to you in a dark alley, puts a knife to your ribs, and mutters, "Your money or your life," you're not likely to deliberate a long time before handing over all your cash, not just some of it.
But implicit in Becker's economic argument is the assumption that, as damaging as global warming and its consequences might be, it will not be the equivalent of a giant meteor smashing the earth to bits. Its effects will be gradual, not sudden; spotty, not universally bad everywhere; and will be quantifiable in economic terms. Anything with a finite future cost can be discounted using standard economic assumptions. The rate of 3 percent that Becker uses is quite conservative—many investments in physical capital pay rates of return much higher than that. What Becker is saying is that we shouldn't stop all economic growth and divert all our resources to fighting global warming, because we're wasting resources that would pay off better if invested in other things. Wise investment in future economic growth, which over the last century has raised billions of people from poverty into something approaching a middle class, can continue to bring prosperity to future generations even in the face of problems like global warming.
Economics isn't everything, of course. If we took a poll to find out what Americans would pay to keep the Statue of Liberty from submerging (which would also flood most of the East and West Coasts), the answer would probably come out close to "whatever it takes." But engineering is about economics as much as it is about technology. And any analysis of global warming that makes unrealistic economic assumptions is simply bad engineering, whatever else you might call it.
Sources: Becker makes his argument in an essay in the Hoover Digest (2007), no. 2, published by the Hoover Institution, at http://www.hoover.org/publications/digest/7465817.html.
Monday, February 11, 2008
The Price of Life: Industrial Accidents Then and Now
The refining giant British Petroleum has been in the news again lately, and not in a good way. At the firm's Texas City, Texas refinery on Jan. 14, a worker named William Gracia died when a lid blew off a water filtration vessel during a startup procedure and hit him in the head. The day before that, BP's board of directors fired its CEO, Lord Browne of Madingley, not quite three years after an explosion at the same refinery killed 15 people and injured 170 in the worst U. S. industrial accident in a decade. Although reasons are not usually given when a CEO is dismissed, one can speculate that the disaster had something to do with Lord Browne's departure—that and the $1.6 billion the firm paid out to settle some 4,000 lawsuits, and the $1 billion repair bill to get the refinery operating again. The $22 billion in profits that BP made in 2006 puts these numbers into perspective. Or does it?
What is a human life worth? The time was (and still is, unfortunately, in a few places) when a human life was a market commodity like any other. Fortunately, the human race has seen fit to abolish slavery nearly everywhere, but that doesn't mean that you can't figure out what a human life is worth in certain contexts.
Look at the BP situation from an economic point of view. I'm not saying that BP managers thought this way, but one way of looking at it is this. Okay, in 2005 something happened that ended up costing us an additional $2.6 billion. We might have been able to avoid that accident by spending more time and money on safety regulations, training, equipment, and so on. But who knows how much of that is enough? If we'd spent more than $2.6 billion extra on such programs, we would have ended up cutting into our 2006 profits of $22 billion. So how much safety is enough? And at what price?
Another way of looking at it is to ask how much BP spent on settlements per worker injured or killed: an average of about $8.6 million each, it turns out. Now much if not most of that went to lawyers: BP's lawyers, the contingency-fee lawyers that workers without other financial resources have to go to in situations like this, and miscellaneous lawyers, experts, and other highly paid professionals that tend to accumulate around disasters like flies around honey. And some of it probably went to the injured and the families of those who died. Is that what a worker's life is worth? At least in this case, it turned out to be that way for BP.
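The per-casualty figure follows directly from the numbers given earlier in the piece; a quick sketch of the arithmetic:

```python
# BP's Texas City settlement divided among the workers killed and injured
# in the 2005 explosion (all figures as stated in the text above).
settlements = 1.6e9        # total paid to settle some 4,000 lawsuits
killed, injured = 15, 170  # casualties of the 2005 explosion
per_casualty = settlements / (killed + injured)
print(f"${per_casualty / 1e6:.1f} million")  # about $8.6 million each
```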
It's interesting to contrast the way these things are handled today with the way similar casualties were handled in the 1800s. The nineteenth century was an era of ambitious construction projects: bridges, dams, tunnels. Everybody knows about the Brooklyn Bridge. You may even know that its original designer, John Roebling, had his foot crushed while doing surveys for the bridge, and died of the resulting tetanus infection. His son Washington took over, but after going into an underwater high-pressure caisson during construction of the foundations, he succumbed to decompression sickness and became an invalid. His wife Emily taught herself enough engineering to serve as his chief assistant during the rest of the bridge's 13-year construction. Although many hundreds of workers were employed on the site, the project had a relatively good safety record for the time: only 27 people died, an average of about two a year.
On the other hand, the Hoosac Tunnel project, otherwise known as the "Bloody Pit", cost 193 lives to build. This 4.75-mile railroad tunnel in Western Massachusetts served as a test bed for modern construction techniques using pneumatic drills and nitroglycerine. It was completed in 1873, three years after the Brooklyn Bridge project began.
In those days, construction-worker fatalities were regarded as regrettable, but no one appears to have thought much the worse of the companies or engineers responsible if a few workers died on the job. The general attitude was that a worker taking on a job knew it was dangerous, and it was his own lookout to stay alive.
Thomas Edison was (and is) one of my heroes, but in many ways Edison held some very typical 19th-century attitudes about the safety of his employees. In a new biography of Edison by Randall Stross, I read how Edison sent people far and wide in the summer of 1880 to search for bamboo that might have fibers suitable for incandescent-lamp filaments. One of the less popular members of his lab staff was named John Segredor, a hot-tempered man who had once responded to a sarcastic remark from another staff worker by going to his rooming house and getting a gun. Edison sent Segredor on an odyssey first to Georgia, then Florida, and finally to Cuba in search of different varieties of bamboo. Three days after his arrival in Cuba, Segredor died of yellow fever. In a private letter about the matter, Edison blamed Segredor for his own death, saying he was careless about drinking cold drinks in hot places "and this I doubt not caused his death." No lawsuits there, it seems.
Ideally, nobody would die in industrial accidents, or any other kind, for that matter. Considering the much larger number of people engaged in modern industry today compared to a hundred years ago, it is likely that the accident and fatality rates in modern industry are much lower than comparable rates in the 1880s. And at least in the U. S., our attitudes are much harsher nowadays toward the companies and executives who are involved in industrial accidents. True, the enforcement mechanism is largely a private-enterprise affair using the civil justice system and freelance contingency-fee lawyers, but I suppose free-market justice is better than no justice at all. But wouldn't it be nice if the lawyers ended up with nothing to do because nobody was dying of industrial accidents anymore? We should still hold out the ideal of zero accidents or injuries due to technical causes as one to strive for. But for a long time to come, I think, there will be more to be done.
Sources: The latest BP accident is described in the San Francisco Examiner online edition at http://www.examiner.com/a-1160942~BP__victim_s_family_probing_fatal_Texas_City_refinery_accident.html. Lord Browne's departure and the BP financial statistics were carried in an article on the Ergoweb website, an ergonomics services company, at http://www.ergoweb.com/news/detail.cfm?id=1693. I also consulted Wikipedia articles on the Brooklyn Bridge and the Hoosac Tunnel. The Segredor incident is recounted on p. 110 of Stross's The Wizard of Menlo Park (New York: Crown, 2007).
What is a human life worth? The time was (and still is, unfortunately, in a few places) where a human life was a market commodity like any other. Fortunately, the human race has seen fit to abolish slavery nearly everywhere, but that doesn't mean that you can't figure out what a human life is worth in certain contexts.
Look at the BP situation from an economic point of view. I'm not saying that BP managers thought this way, but one way of looking at it is this. Okay, in 2005 something happened that ended up costing us an additional $2.6 billion. We might have been able to avoid that accident by spending more time and money on safety regulations, training, equipment, and so on. But who knows how much of that is enough? If we'd spent more than $2.6 billion extra on such programs, we would have ended up cutting into our 2006 profits of $22 billion. So how much safety is enough? And at what price?
Another way of looking at it is to ask how much BP spent on settlements per worker injured or killed: an average of about $8.6 million each, it turns out. Now much if not most of that went to lawyers: BP's lawyers, the contingency-fee lawyers that workers without other financial resources have to go to in situations like this, and miscellaneous lawyers, experts, and other highly paid professionals that tend to accumulate around disasters like flies around honey. And some of it probably went to the injured and the families of those who died. Is that what a worker's life is worth? At least in this case, it turned out to be that way for BP.
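That $8.6 million average is easy to check against the totals. The worker count below is not a number from BP; it is simply what the two quoted figures imply about each other, assuming the $2.6 billion and the $8.6 million average refer to the same pool of settlements:

```python
# Back-of-envelope check on the settlement figures quoted above.
# Both dollar amounts come from the text; the worker count is just
# what they imply when divided, not an official BP statistic.

total_settlement_cost = 2.6e9   # the ~$2.6 billion accident cost
avg_per_worker = 8.6e6          # the ~$8.6 million average per worker

implied_workers = total_settlement_cost / avg_per_worker
print(f"Implied number of workers injured or killed: {implied_workers:.0f}")
```

That works out to roughly three hundred people, which gives some sense of the scale of the casualty list behind the averages.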
It's interesting to contrast the way these things are handled today with the way similar casualties were handled in the 1800s. The nineteenth century was an era of ambitious construction projects: bridges, dams, tunnels. Everybody knows about the Brooklyn Bridge. You may even know that its original designer, John Roebling, had his foot crushed while doing surveys for the bridge, and died of the resulting tetanus infection. His son Washington took over, but after going into an underwater high-pressure caisson during construction of the foundations, he succumbed to decompression sickness and became an invalid. His wife Emily taught herself enough engineering to serve as his chief assistant during the rest of the bridge's 13-year construction. Although many hundreds of workers were employed on the site, the project had a relatively good safety record for the time: only 27 people died, an average of about two a year.
On the other hand, the Hoosac Tunnel project, otherwise known as the "Bloody Pit", cost 193 lives to build. This 4.75-mile railroad tunnel in Western Massachusetts served as a test bed for modern construction techniques using pneumatic drills and nitroglycerine. It was completed in 1873, three years after the Brooklyn Bridge project began.
In those days, construction-worker fatalities were regarded as regrettable, but no one appears to have thought much the worse of the companies or engineers responsible if a few workers died on the job. The general attitude was that a worker taking on a job knew it was dangerous, and it was his own lookout to stay alive.
Thomas Edison was (and is) one of my heroes, but in many ways Edison held some very typical 19th-century attitudes about the safety of his employees. In a new biography of Edison by Randall Stross, I read how Edison sent people far and wide in the summer of 1880 to search for bamboo that might have fibers suitable for incandescent-lamp filaments. One of the less popular members of his lab staff was named John Segredor, a hot-tempered man who had once responded to a sarcastic remark from another staff worker by going to his rooming house and getting a gun. Edison sent Segredor on an odyssey first to Georgia, then Florida, and finally to Cuba in search of different varieties of bamboo. Three days after his arrival in Cuba, Segredor died of yellow fever. In a private letter about the matter, Edison blamed Segredor for his own death, saying he was careless about drinking cold drinks in hot places "and this I doubt not caused his death." No lawsuits there, it seems.
Ideally, nobody would die in industrial accidents, or any other kind, for that matter. Considering the much larger number of people engaged in modern industry today compared to a hundred years ago, it is likely that the accident and fatality rates in modern industry are much lower than comparable rates in the 1880s. And at least in the U. S., our attitudes are much harsher nowadays toward the companies and executives who are involved in industrial accidents. True, the enforcement mechanism is largely a private-enterprise affair using the civil justice system and freelance contingency-fee lawyers, but I suppose free-market justice is better than no justice at all. But wouldn't it be nice if the lawyers ended up with nothing to do because nobody was dying of industrial accidents anymore? We should still hold out the ideal of no accidents or injuries due to technical causes as one to be striven for. But for a long time, I think, there will always be more to be done.
Sources: The latest BP accident is described in the San Francisco Examiner online edition at http://www.examiner.com/a-1160942~BP__victim_s_family_probing_fatal_Texas_City_refinery_accident.html. Lord Browne's departure and the BP financial statistics were carried in an article on the Ergoweb website, an ergonomics services company, at http://www.ergoweb.com/news/detail.cfm?id=1693. I also consulted Wikipedia articles on the Brooklyn Bridge and the Hoosac Tunnel. The Segredor incident is recounted on p. 110 of Stross's The Wizard of Menlo Park (New York: Crown, 2007).
Monday, February 04, 2008
If You Can't Trust the Experts. . .
Being an expert at something is both a privilege and a responsibility. Experts who abuse their special abilities make things harder for experts who follow the rules. There's nothing new about these ideas. But the experts who follow the rules often get ignored in the uproar over experts who violate them.
Let me get specific. David Kravets of Wired reports in his Threat Level column that four Swedish men have been charged with facilitating copyright infringement. Seems that they operate a "BitTorrent tracking site" called The Pirate Bay. According to Wikipedia, BitTorrent is a type of peer-to-peer network protocol that makes it easier to download large amounts of data through the Internet. Instead of requiring the user to receive an entire file from one central server, BitTorrent allows the user to get pieces of the file from multiple locations and assemble them later, making the whole process easier and often faster. Although the protocol can be used for almost any type of file, it is often used to obtain pirated copies of movies and software.
The Pirate Bay's operators claim they have spread their operation out so far with third-party intermediaries that they don't even know where the servers are. According to the report in Wired, they seem to think they're doing nothing wrong, and certainly aren't making money at it. If you had to boil down their motivation to one sentence, it might be something like "every bit deserves to be free."
This situation is a good example of what I'd call "technology gone bad" in the sense that some people have taken a clever and useful technological idea—BitTorrent protocols, in this case—and used it for, at best, quasi-legal purposes. Who are the injured parties in a case like this?
Copyright owners such as giant media and software companies will be quick to point out that they are losing revenue every time somebody gets a "free" copy of content via The Pirate Bay rather than through legitimate channels. And since the companies' revenue has to be made up somehow in order for them to stay in business, this leads to higher prices for everybody who gets the stuff legally. And there's your second group of wronged individuals: the consumers of legitimate content who have to pay more for it.
But one group that is often ignored in analyses of this kind of thing is the experts, such as yours truly, whose legitimate operations may be hampered or stifled altogether by draconian or ill-considered regulations. Although I don't think this will happen, it might come about that the corporate interests who dislike the illegal applications of BitTorrent protocols could enact some sort of binding regulation that would make the whole protocol illegal. That sounds almost unenforceable—the notion that simply having a protocol on your computer, without using it, would make you liable to jail time—but there are precedents in the area of child pornography. It is illegal simply to have child pornography on your computer, and if it's found, you can go to jail.
I have no argument against making child pornography illegal, but when you start getting into technologies where most users are legitimate technical people going about their harmless business, there's a real problem. I'm facing a situation like that right now. For a research project I'm engaged in, it turns out I would like to convert so-called "NTSC analog video" (the standard that's going to disappear from U. S. airwaves in about a year) to digital video. I'm not copying anything—I'm generating the video myself—and my need to convert analog to digital video is a legitimate research requirement. But I have had a heck of a time finding any equipment to do it. I mentioned this to my wife, and she said, "Well, sure. People are wanting to take their old analog VHS tapes and turn them into DVDs illegally." Yes, that can be done with this equipment I want, but I don't want to do that.
After much web searching, I found two companies that make such a device, or used to. Oddly (and somewhat suspiciously), both firms have either removed all mention of the units from their websites altogether, or have put up a big notice saying "This product has been discontinued." Fortunately, I think I have found a supplier who still has some in stock, and I'm waiting to find out if I can get one. But it's beginning to look like some corporation or trade group's lawyer has been sending out letters threatening legal action if such devices aren't withdrawn from the market.
Of course, maybe I'm just being paranoid. But whenever a few experts turn to unethical practices, remember that the people directly involved are not the only ones affected. All the other experts who use the same technology for legitimate reasons may be inconvenienced or worse when corporations and their lawyers overreact, crippling or banning an entire useful technology because of the malfeasance of a few bad actors. I hope I can get my video converter unit, but if I can't, I may have folks with attitudes like The Pirate Bay guys to thank.
Sources: The article describing The Pirate Bay's latest legal troubles is dated Feb. 1 and can be found at http://blog.wired.com/27bstroke6/2008/02/the-pirate-bay.html.