Tuesday, October 31, 2006
Did you know that if you travel on an airliner from, say, London to Frankfurt, you use about ten times the greenhouse-gas-producing fossil fuel that it takes to carry you the same distance by train? Did you care?
That idea is the gist of an ad campaign sponsored by European environmental groups. The ads take the form of statements by an imaginary airline head who makes arrogant, disparaging comments about environmentalists, whom he calls "lentil mobs." In Europe's largely pro-green culture, such comments are as inflammatory as running ads in U. S. media that show a fat white Southern sheriff saying disparaging things about blacks. Technique aside, the point the ads make is true: airline travel uses much more fossil fuel per passenger-mile than surface travel, and especially more than rail, which is more efficient even than private cars. The way you react to that fact should depend on your view of the world and what it is all about.
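To see what per-passenger-mile arithmetic of this kind looks like, here is a minimal sketch in Python. The emission factors and the London-Frankfurt distance are rough assumed figures for illustration only, not numbers from the ads or from the Times article; the ads' factor of ten refers to fuel burned, and actual ratios vary with the aircraft, the load factor, and how the train's electricity is generated.

    # Illustrative arithmetic only; both emission factors are assumptions.
    PLANE_KG_CO2_PER_PKM = 0.25   # assumed short-haul jet figure
    TRAIN_KG_CO2_PER_PKM = 0.04   # assumed intercity electric rail figure

    distance_km = 640  # rough London-Frankfurt great-circle distance

    plane_kg = PLANE_KG_CO2_PER_PKM * distance_km
    train_kg = TRAIN_KG_CO2_PER_PKM * distance_km
    print(f"Plane: {plane_kg:.0f} kg CO2 per passenger")
    print(f"Train: {train_kg:.0f} kg CO2 per passenger")
    print(f"Ratio: {plane_kg / train_kg:.1f} to 1")

Even with these cautious made-up numbers, the airliner comes out several times worse per passenger, which is at least the direction of the ads' claim.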
Suppose you think this physical world is all there is, death is annihilation, and we are here to propagate our gene pool and along the way pick up whatever transient enjoyment we can. You may therefore view air travel as one of the greatest boons to humanity, since it lets us get from enjoyable place to enjoyable place much faster than surface transportation. Strangely, though, that attitude is uncommon in cultures where a frankly atheistic outlook prevails. In places such as France, Germany, and the Scandinavian countries, where publicly expressed religion is almost invisible, Greenpeace and similar green parties and movements are strongest. The reasons for this are complex, but I can speculate.
If you believe man is the supreme intelligence in the universe, then he is responsible for the efficient running of the planet. After all, we can't trust the elephants or the insects to do a good job. Or can we? They were here first. Down that line of thought lies the branch of environmentalism that views mankind as an unmitigated plague upon the planet, one which the Earth would be much better off without. In this view, the ideal world might be one in which the human population was reduced to the point where we could all live off the land like the pre-agriculture American Indians. The trouble with that is that estimates of the pre-Columbian population of North America run only in the low dozens of millions, and proportionally similar figures would hold for the rest of the world. To achieve that ideal, then, most of the world's people would have to go away. As it happens, the native European population (including Russia's) is undergoing an implosion that would be right on target to reduce Europe to its pre-civilization levels, if it weren't for all the immigrants. But that is another story.
Even if you don't think mankind should commit mass suicide for the betterment of the planet, you may still feel some personal responsibility toward the globe that you cannot possibly fulfill. You may feel like a ten-year-old child put in charge of running General Motors: impossibly underqualified for the job. Accordingly, you turn to the experts, who are not quite as unqualified as you to run the planet, and they tell you that yes, the Earth is getting warmer, and yes, our burning of fossil fuels probably has something to do with it. So are you going to form an ironclad rule never to set foot on an airplane again?
Probably not. Instead, you'll fly when you can't avoid it, or maybe whenever you feel you can afford it, and feel guilty about it. And rightly so. Because if everybody quit flying and took the train, we'd burn less fossil fuel than we do now. Then what?
Well, you as an individual might live long enough to see a slight slowdown in the global-warming trend. But maybe not. And suppose it's too late? Suppose we've passed the invisible tipping point of no return, and the atmosphere is headed inexorably toward a catastrophe that will make the worst disaster movies look like child's play: storms, floods, inundated coastal cities and plains, radical rises in temperature. In that case, there is nothing you can do but watch. The thought that years ago you quit flying in airplanes as a protest against what you saw as environmental irresponsibility might furnish you some small solace, but it will have done nothing significant in the long run.
I don't know about you, but I find all these alternatives profoundly depressing. Doing nothing is bad, but doing something like abstaining from flying has such a small chance of making any real difference that it's not worth the effort. Of course, there is always the great mysterious process by which public opinion changes. And something like that might happen here, as it did in the U. S. in the sixties and seventies, when environmentalism grew from being viewed primarily as the peculiar obsession of a few left-wing crackpots to something that President Richard M. Nixon himself embraced when he founded the Environmental Protection Agency. But such things are hardly predictable, and to trust in their occurrence takes a kind of faith akin to that of people who regularly buy lottery tickets.
Lest I appear to be bringing a counsel of despair, I will take a look at a different world view next week. I'll tell you right now, I won't necessarily come to any different conclusions about what to do. But the reasons will be very, very different.
Sources: The report on the spoofing airline ads is an Oct. 29, 2006 New York Times article by Eric Pfanner at http://www.nytimes.com/2006/10/30/business/media/30fuel.html. According to the Wikipedia article on the population history of American indigenous peoples, estimates of the North American native population before 1492 range from 12 million to over 100 million, and are probably no more than educated guesses. Whatever the figure is, it is much less than the current population.
Wednesday, October 25, 2006
Sniffing Through Your Wallet with RFID
We should all be glad that Superman was a nice guy. I mean, with his X-ray vision, his personal jet-powered cape, not to mention his lady-killing looks when he didn't have his glasses on, he would have made a formidable criminal. Well, some nice guys in the Department of Computer Science at the University of Massachusetts Amherst have shown us that it doesn't take X-ray vision to read your name and credit-card number off some new types of credit cards that incorporate something called "RFID."
First, full disclosure (I've always wanted to say that): I taught at the University of Massachusetts Amherst for fifteen years before moving south, though not in Computer Science. And even before that, my supervising professor in graduate school and I patented a system that could have been used for RFID, although nobody but the patent lawyers ever made a nickel off the patent, which has now expired.
What is RFID? It stands for "radio frequency identification," and it includes a variety of techniques to track inventories, monitor conditions remotely, and even read credit cards. The common thread in all these things is an RFID chip that goes onto the object in question: a box of Wheaties, a credit card, or even a person's body. You can think of this technology as on beyond bar codes—those little symbols that the checkout person scans at the grocery store. Using the proper RFID equipment, you can receive information about where the object is, its inventory number, and so on, all without contacting the object. So in a warehouse, for instance, every time a pallet full of computers goes out the door, an RFID transponder can count them and record each computer's serial number, and the guy driving the forklift doesn't even have to slow down. You just have to be within radio range, which can vary from inches to several feet. Which is how the clever guys at UMass Amherst did their trick.
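To make the warehouse scenario concrete, here is a minimal sketch in Python of what the dock-door logging might look like. The reader class, tag IDs, and read range are all invented for illustration; nothing here reflects any particular RFID hardware or protocol.

    # Hypothetical dock-door RFID logger; class, names, and IDs are made up.
    import datetime

    class DockDoorReader:
        """Stand-in for an RFID transponder mounted at a warehouse door."""
        def __init__(self, range_feet=6):
            self.range_feet = range_feet  # radio range: inches to several feet
            self.log = []                 # (timestamp, serial number) records

        def read_pallet(self, tag_ids):
            # Each tag within range answers with its serial number;
            # no line of sight or physical contact is required.
            now = datetime.datetime.now()
            for tag in tag_ids:
                self.log.append((now, tag))
            return len(tag_ids)

    reader = DockDoorReader()
    count = reader.read_pallet(["SN-10441", "SN-10442", "SN-10443"])
    print(f"{count} computers logged; the forklift never slowed down")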
According to the New York Times, Professor Kevin Fu asked a graduate student to take a sealed envelope containing a new credit card and just tap it against a transponder box they had designed. In a few minutes, Professor Fu's name, the credit card number, and even the expiration date appeared on a screen. All without even opening the envelope.
The Times reporter dutifully made the rounds of credit-card firms such as American Express and J. P. Morgan Chase to describe Prof. Fu's magic trick. Visa's Brian Triplett said it was an "interesting technical exercise," but wasn't concerned that it would lead to widespread credit-card fraud. It should be noted that it wasn't Mr. Triplett's credit card number that showed up on the screen.
As with many other technologies that develop out of the public eye for years or decades before emerging into visibility, RFID has been around a lot longer than you might think. Back in World War II, a primitive form of RFID was used with aircraft to "identify friend or foe" (IFF). The equipment was far too bulky and expensive back then to be considered for consumer products, but advances in electronics have given us RFID chips cheap enough to throw away with the empty box of Wheaties. Some experts believe RFID will largely replace bar codes as the inventory technology of the future. And that's not all.
Attaching an RFID tag to one's person could lead to all sorts of situations, not all of them pleasant. Strangely enough, one of the more popular paranoid delusions in recent years, dating from well before RFID could actually do any such thing, is that the FBI or some equally secretive outfit has implanted a chip in the sufferer's body to spy on his whereabouts and even his thoughts. I actually had dealings with such an individual when I was back at UMass, and it wasn't a pretty picture. It's not every day that billions of dollars are spent with the unintended byproduct of bringing some nut case's delusion into the realm of reality, but it happens. RFID is still a long way from reading people's thoughts, but even that notion doesn't sound as goofy as it used to, what with PET scans and other noninvasive brain-monitoring techniques.
For now, RFID will begin to show up only in places like grocery stores, automated tollbooth tags such as New York State's "E-ZPass," and some credit cards. I don't think we need to worry about Prof. Fu's trick falling into the hands of some evil computer scientist, because it's fairly easy to foil. And fortunately, the laws about credit-card fraud in this country are written so that the consumer is liable only for the first $50 of loss, and the credit-card issuer is left holding the rest of the bag. So if Visa and company start losing substantial amounts of money to people who cobble together a duplicate of Prof. Fu's remote card reader, the firms will take the straightforward steps needed to fix that particular problem.
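As a worked example of that liability rule, here is a small sketch. The dollar amounts are invented, and the $50 cap applies in most, though not all, cases of credit-card fraud.

    # Sketch of the U. S. credit-card fraud liability split described above.
    def fraud_liability(loss, cap=50.0):
        """Return (consumer share, issuer share) of a fraud loss."""
        consumer = min(loss, cap)  # consumer pays at most the $50 cap
        issuer = loss - consumer   # issuer is left holding the rest of the bag
        return consumer, issuer

    for loss in (30.00, 500.00, 12000.00):
        consumer, issuer = fraud_liability(loss)
        print(f"Loss ${loss:.2f}: consumer ${consumer:.2f}, issuer ${issuer:.2f}")

The asymmetry is the point: since the issuer absorbs nearly all of a large loss, the issuer, not the consumer, has the financial motive to fix holes like the one Prof. Fu demonstrated.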
All the same, we need to think about how RFID could be abused, before some clever thief or saboteur does, and take reasonable precautions. And it's going to be a long while before yours truly consents to having any chips embedded in his person. But then, I was born old-fashioned.
Sources: The New York Times story appeared online on Oct. 23, 2006 at http://www.nytimes.com/2006/10/23/business/23card.html. I have recently received a copy of RFID Strategic Implementation and ROI: A Practical Roadmap to Success by Charles Poirier and Duncan McCollum, which has a good nontechnical discussion of RFID's history and how it works.
Tuesday, October 17, 2006
Is Any Technology Ethically Neutral? The Sony Reader
A recent New York Times article announced the debut of the Sony Reader, an electronic book that uses tiny plastic spheres to simulate the appearance of an actual page of print. Unlike a laptop display with its energy-hogging backlighting, the Reader uses only existing room light and consumes essentially no power until you turn the page. A reader of the Reader can take satisfaction in the notion that no trees were cut down and hardly any oil or coal burned to produce the minuscule amount of energy needed to operate it.
A more environmentally friendly technology can hardly be imagined, it seems. So should we all pitch our old-fashioned stacks of paper bound together and buy Readers? It depends.
When I try to engage certain people in a discussion of the ethics of a given technology, an argument I often hear goes like this: "Well, technology by itself is neutral. It's only the ways people use technology that are good or bad." That is one of those nice-sounding phrases that seem plausible at first but tend to disintegrate under scrutiny. The Sony Reader would seem to be a good candidate to exemplify the idea of the neutrality of technology. No one is making us go out and buy Readers. It's simply another item on the market which may or may not prove popular. It seems to be environmentally benign, and as long as it does what its maker claims for it, what downsides could it possibly have?
That question actually sends us out upon deep philosophical waters. There is a school of thought popular in Europe that goes under the name of the "precautionary principle." Followers of this principle take the stand that any new technology must be examined thoroughly for possible harmful effects before it can be generally distributed. If no actual harm has occurred yet, the examination of a technology for possible harm necessarily involves reasoned speculation about what might occur. There is nothing intrinsically wrong with basing technical decisions upon hypotheticals. After all, the Sony Reader's designers were speculating that people would want to buy their product if they developed it, and so the use of speculation in evaluating its effects, both good and bad, is no less warranted.
For example, one could imagine Readers sweeping the world to become as popular as books, if not more so. (To a great extent, this has already taken place as computers have replaced reference volumes in libraries.) Would the world be a better place if every book were an e-book?
That depends. The people who make conventional books wouldn't think so. Technological unemployment has been around ever since there was technology. Somehow the world's economies have absorbed the paste-up artists, the platemakers, the hot-type linotypers, and all the other superseded occupations that pre-electronic forms of printing required. What has happened to a good fraction of the printing industry's past workers might eventually happen to all of them. But unless you believe in state control and ownership of the means of production, technological unemployment is just one of those things that happen.
How could this possibility be forestalled? In the world's continuing embrace of a free-market global economy, consumers can exert a certain amount of control over what they buy. But consumers can't buy what isn't there, and much of the power to decide what gets sold lies with those who control the large firms whose investments determine the directions of the markets. If next year, most investors decide that paper books are going the way of the slide rule when electronic calculators came along, the rest of us will not be able to do much about it.
Next, consider what the Reader is made of: probably some conventional electronics, a battery, and a display containing thousands if not millions of tiny plastic spheres suspended in some kind of liquid. Some day, probably sooner rather than later if the useful lifetime of the typical laptop is any guide, the brand-new Readers now waiting on store shelves will accumulate in attics and closets, only to be thrown out when the next model comes along. As we have learned, you can't simply throw things away these days, because there isn't any "away" anymore. More and more environmentally conscious manufacturers are doing what is called life-cycle design, which takes into account the problem of how to dispose of a used piece of equipment with minimal impact on the environment. I have no specific information on the Sony Reader in this regard, but at the least, its disposal will take up some room in a landfill somewhere. And if its battery or display contains any hazardous chemicals, these could cause problems later.
Finally, there is the subtle but real change in the habits of millions who change from one form of information exchange to another. No matter how closely the makers of a new technology try to imitate the experience produced by a previous one, some things are different. And sometimes the new technology imposes a whole set of new habits on the user, not all of them good ones. How many of us have rattled out an angry email and hit the send key only to regret it later? Somehow, the act of writing or typing a paper letter, signing it, folding it, addressing it, and putting it in the mailbox provided a number of additional points of decision where we could give heed to our second thoughts and at least put the letter aside instead of mailing it. What at first looked like nothing more than obstacles to the rapid communication of thought now looks more like a kind of psychological buffer that may have made society a better place.
I have no idea whether the Reader will catch on, or whether it is only a precursor of something better, or whether, like the poor, paper books will always be with us. And my little exercise in applying the precautionary principle to such a benign-looking piece of technology as the Reader should not be misunderstood to mean that I feel it is a threat to civilization. But I hope I have made clear that any technology whatsoever that ends up in the hands of people has intrinsic potential for both good and bad consequences, and the way it is designed can influence how those consequences develop over time.
Sources: The New York Times article by David Pogue on Oct. 12, 2006 describing the Sony Reader was located at http://www.nytimes.com/2006/10/12/technology/12pogue.html.
Wednesday, October 11, 2006
Doctors, Data, and Doomsday
Nearly every business, government office, and organization of any size, down to the local barber shop, has made the transition from paper records to computers, except for doctors and hospitals. Go into any doctor's office and you will still see big file cabinets filled with cardboard folders bearing colored tabs. The system of keeping a file for each patient was an innovation when the Mayo Clinic came up with the idea in the early 1900s. As Robert Charrette reports in a recent article in IEEE Spectrum, the Clinic is one of the few medical facilities so far to make a successful transition to all-electronic records. But he warns that while we aren't necessarily facing a medical Doomsday, troubles lie ahead along the way to converting the entire U. S. medical system to computerized recordkeeping.
As Charrette points out, the history of large-scale software projects is littered with the bones of huge, expensive failures. One of the most egregious was the FBI's attempt to computerize their elaborate system of case files, which had been kept on paper since the days of J. Edgar Hoover in the 1930s. After spending over $100 million, the FBI gave up on the project altogether. Why is it that society tolerates such disasters in software engineering? If banks lost your money as readily as some software firms do, people would still be keeping their cash in mattresses.
Software engineering differs from almost every other kind of engineering in two fundamental ways. First, in electrical, mechanical, civil, and chemical engineering, the subject matter of the discipline is something physical: steel, dirt, chemicals, or electromagnetic waves. But in software engineering, the "material cause" (as Aristotle would put it), the matter out of which the discipline emerges, is thought. And thoughts are notoriously hard things to pin down. Second, large-scale software projects invariably deal with the largely undocumented and tremendously variable behavior of thousands of people as they do comparatively complex intellectual tasks. This is nowhere more true than in the medical profession, where some of the most highly educated and individualistic professionals deal daily with life-or-death situations. These two factors make software engineering the most unpredictable of engineering disciplines, in that despite the best plans of competent engineers, projects often run off the rails of budgets and schedules to crash in the woods of failure (metaphorically speaking).
To what extent are software engineers morally culpable for the failure of a major software project they are involved in? Failures are a normal part of engineering. And it can be said in defense of most software project failures that no one dies or is seriously injured, at least directly. A building that collapses usually takes someone with it, but a failed software project's worst consequences for individuals are usually the loss of jobs, not life itself. But the expenditure of millions of dollars toward an end that is ultimately never realized is hardly a social good, either.
Despite such notable failures, no one seems inclined to give up on the idea that computerizing paper medical records, if we can do it, will be better than the situation we have now, where limited access to data results in thousands of misdiagnoses and hundreds of deaths every year. Of course, along with the promise of better access for those who need to see medical records comes the threat of abuse by unscrupulous businesses and criminals. Patient advocacy groups have already weighed in to oppose the present versions of health information technology legislation, which in their opinion do not protect the privacy rights of patients enough. This is a problem that can be dealt with, as the largely successful effort to put private banking records on the Internet has shown. But the challenges are greater with medical records, and it would be easy to deploy a system with as many security holes as a Swiss cheese if things aren't done right.
Some people advocate an increased role for the federal government in this area, pointing out that many medical practices are small and simply don't have the resources to adapt on their own. The track record of government involvement in medicine in this country is excellent with regard to research, problematic with regard to large-scale social programs such as Medicare, and largely unknown with regard to standardized software. As with anything else, if enough good people of good will are put to the task, it could be made to work. But in the present political atmosphere in which government is often regarded as the enemy of the free market and the good in general, it is hard to imagine how enough public and professional support for a government-sponsored project could be raised.
The field of software engineering itself is only about a generation old, and its practitioners are increasingly aware of the need to borrow from fields such as sociology, ethics, and psychology to do their jobs better. The old days of a geeky nerd sitting alone in a cubicle churning out code that no one else can understand are passing, if not completely over. Good software engineers study a project's intended users as thoroughly as anthropologists observe primitive tribes, in order to figure out not only what the customers say they want, but also the existing methods and connections that the users may not even know about in themselves and their organizations. The ideal paper-to-software transition in the medical profession will still be a lot of work. But if it is staged properly, using good examples such as the Mayo Clinic as paradigms and checking results in each new case before proceeding, it could work as smoothly as the introduction of computers into banking. But in this case, it won't be your money, it will be your life.
Sources: The article "Dying for Data" in IEEE Spectrum's October 2006 issue is available online at http://www.spectrum.ieee.org/oct06/4589. Charrette also wrote about the FBI project failure in "Why Software Fails" at http://www.spectrum.ieee.org/sep05/inthisissue. An example of one organization advocating in favor of better patient privacy rights can be found at http://www.patientprivacyrights.org.
Tuesday, October 03, 2006
Legislating Morality: The Unlawful Internet Gambling Enforcement Act
Over the weekend, the U. S. Congress approved and sent to the President a bill to prohibit financial institutions from sending payments to offshore internet gambling websites. President Bush is expected to sign it. The internet gambling industry was taken somewhat by surprise, and stocks in online casinos are tumbling all over the globe. Some view the action as a purely political ploy to help Republicans retain control of Congress after the November elections. Others see it as one more belated attempt for the law to catch up with technology.
The name of a popular bestseller some years ago was "Please Don't Eat the Daisies." The author, a mother of several young children, was preparing a dinner party and told her kids not to track mud into the living room, not to touch the china on the table, and so on. But she forgot to tell them not to eat the daisies in the centerpiece, and so they did. People will come up with ways of doing things that regulators, legislatures, and competitors simply cannot think of in advance. But the effects of these novel ideas are not always welcome.
Once enough people got onto the Internet, gambling websites were probably inevitable. The same privacy, anonymity, and ability to operate anywhere in the world with T1 lines that make the Internet so attractive for pornographers also attract internet gaming firms. As I noted in my Aug. 1 column, various governments over the centuries have taken attitudes toward gambling ranging from pure laissez-faire to near-total prohibition. But until recently, a government that wanted to regulate gambling could identify the bookies, their hangouts, and their customers without too much trouble. The advent of the Internet changed all that.
Because of the dispersed nature of communication over computer networks, it is impractical to identify individuals who place bets online without serious curtailment of individual liberties. In principle, Federal agents could stage raids on college dorm rooms and other places where they suspect Internet gambling is occurring, but this kind of action would be tantamount to creating a police state.
If you examine the machinery of internet gambling by U. S. customers who use offshore companies, most of it is dispersed widely. Customers gamble online, paying mostly by credit card to foreign internet casinos. The thousands of individual customers are spread all over the place. There are fewer foreign sites with servers and operators, but they are inaccessible to U. S. enforcement officials. The one link in the chain that is both accessible and fairly concentrated is the group of U. S. financial institutions which forward their customers' money to the internet casinos. This is precisely the group targeted by the law that Congress just passed.
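What that targeting might look like in practice is a transaction filter at the bank. The sketch below keys on merchant category codes, which card networks use to classify merchants (7995 is the customary code for betting and casino gambling); the law itself specifies no mechanism, so treat the whole approach, along with the merchant names, as an assumption for illustration.

    # Hypothetical bank-side payment filter; merchant names are invented,
    # and keying on category code 7995 is an assumed implementation detail.
    BLOCKED_CATEGORIES = {7995}  # betting / casino gambling

    def authorize(merchant, category_code, amount):
        if category_code in BLOCKED_CATEGORIES:
            return f"DECLINED {merchant} ${amount:.2f} (internet gambling)"
        return f"APPROVED {merchant} ${amount:.2f}"

    print(authorize("Island Poker Ltd.", 7995, 200.00))
    print(authorize("Corner Grocery", 5411, 42.17))

Of course, a casino determined to evade such a filter could present itself to the network under an innocent-looking code, which is one more reason the banks' compliance may not settle the matter.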
If you are a credit-card company, what your customers do with their money is normally none of your business. Outright fraud is a concern, since by law a customer's liability in most cases of credit-card fraud is limited to $50, with banks picking up the rest of the tab. Thus motivated, banks have developed sophisticated ways of ferreting out fraudulent companies that abuse their credit-card systems. But most internet gamblers tacitly agree to the rules of the game, which over the long term mean that most gamblers lose big to the casinos, just as in real life. Nevertheless, in the eyes of the law they have not been defrauded. Rather, they chose to take an action which is technically illegal, so they have no recourse except to deduct gambling losses on their tax returns to the extent allowed by law (a loophole I have never understood).
The present law simply prohibits banks from forwarding payments to online casinos, which puts the banks in a bind. If they disobey the law and keep on sending funds to the casinos, they will be liable to legal penalties. But if they obey and refuse to pay online casinos, how will this affect the other parties involved?
Well, pretty soon you will see lists of unacceptable credit cards on the online gambling sites: cards issued by companies that have begun to obey the law. Depending on how dedicated a gambler is, he may shift to another card, or he may just drop that site for another one that is less picky about credit cards. What he probably won't do is quit gambling, especially if he has an established habit.
If the U. S. banking industry as a whole stands firm, foreign-owned credit firms will rush in to fill the vacuum. If this occurs, we will simply have succeeded in moving a major part of the system offshore. After all, the only pieces that have to stay here are the customers.
While I have no special insight into the mentality of those who passed this law, I suspect that they view gambling as an intrinsic evil which should be curtailed or eliminated where possible. I happen to be in sympathy with that view, but I also happen to be in sympathy with the outlook that says when you decide to do a thing, find a good way of doing it.
What is gambling, after all? In my very limited experience, accumulated chiefly in convenience store lines behind people who just wanted one more scratch ticket and yeah, lemme have five of them Texas Holdems, gambling is a way people have of facing the apparent randomness of life head-on, and trying to win. It has everything to do with emotion, desire, and the consumer mentality, and very little to do with logic, higher education (except for the ill-gotten gambling dollars that pay for some of it), or the nobler aspects of life. If we went about creating a society of self-controlled, self-directed citizens who knew who they were, were largely content with their lot in life, and could count on their circumstances maintaining some stability over the next few years, I suspect we'd have a lot fewer gamblers to start with. The ones who were left could send all their money to Bermuda for all I care.
So while I agree with the goal of the anti-gambling legislation just passed, as an engineer I can see several big problems that stand in the way of its achieving that goal. Maybe I'm wrong, and this will put a big damper on the whole business. I hope so. But some problems lie deeper than legislation can reach.
Sources: A summary of the recent legislation is at http://www.canada.com/nationalpost/columnists/story.html?id=101747ec-8d41-42f5-9209-1236e3ced739&p=1