Back on June 20, I wrote about the Texas Attorney General's efforts to track down cyber predators who abuse popular social-networking websites such as MySpace. At last report, he had rounded up eighty alleged criminals who tried to meet cute under-age girls or boys for nefarious purposes, only to find themselves at the wrong end of a sting operation. The very next day, on June 21, MySpace.com announced a series of new restrictions to help fix the problem. I am certain that this blog played no role in MySpace's decision, but it is equally certain that publicity about the potential for abuse as well as the potential for lawsuits did have an effect.
According to an Associated Press report, the changes make it impossible for anyone registered as being over 18 to view the full profiles of members under 16, unless the older user knows the younger one's email address or full name. (MySpace has long had a lower age limit of 14.) While this is undoubtedly an improvement, the report also pointed out that MySpace simply takes a user's word about age. There is still nothing like the credit-card verification mechanism recommended by the Texas Attorney General to verify the user's age by independent means. So if I decided to masquerade as a 14-year-old boy in order to view the full profiles of 14-year-old girls, I could still do so.
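The new visibility rule amounts to a simple access-control check. Here is a minimal sketch of the logic as the AP report describes it; the field names are hypothetical, since MySpace's actual implementation is of course not public, and the sketch makes the weakness obvious: "age" is whatever the user claimed at registration.

```python
def can_view_full_profile(viewer, member, knows_email_or_name=False):
    """Sketch of MySpace's 2006 visibility rule (field names are hypothetical).

    A viewer registered as over 18 may not see the full profile of a
    member under 16 unless the viewer already knows the member's email
    address or full name.
    """
    if viewer["claimed_age"] > 18 and member["claimed_age"] < 16:
        return knows_email_or_name
    return True

# An adult posing as a 14-year-old sails right through the check,
# because nothing verifies the claimed age independently:
impostor = {"claimed_age": 14}   # actually an adult
teen = {"claimed_age": 14}
print(can_view_full_profile(impostor, teen))  # True
```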
The controversy over MySpace is just one battle in the larger war about privacy and technology. These days, "technology" usually means computers, networks, and the whole communications infrastructure of iPods, websites, and other hardware and software that makes us the most connected society in history. In examining a problem, engineers sometimes like to cook up a worst-case scenario in which everything that could conceivably go wrong does go wrong. If the system they are designing nevertheless withstands such a perfect storm of Murphy's Law ("whatever can go wrong will go wrong"), then the engineers can generally breathe a sigh of relief that the system will make it through more likely incidents in which only some things go wrong. Of course, this assumes that the system is simple enough, and the engineers are imaginative enough, to come up with a truly worst-case situation. But even if these conditions don't always apply, the technique is still a useful one.
What is a worst-case scenario in terms of privacy and technology? The answer may depend on what your own worst fears are.
Say you feel strongly that your financial matters are nobody else's business, and that you value your good credit rating. Your worst cyber-privacy nightmare might then be to have your identity stolen by a gang of hot-check-writing, heroin-using, credit-card-busting criminals who pay for a million-dollar orgy of consumer spending with your financial resources and then flee the country, leaving your credit rating in tatters that will take years to repair.
Say that you like to speak your mind about politics or anything else. Then your worst fears might be that a kind of super-Patriot Act would allow the government to spy on everything you email, blog, say, or see online. Imagine what Joseph Stalin would have done with a Communist version of the Internet. In the old days of manual telephone taps and flesh-and-blood spies, the ability of a government to spy on its citizens was limited by the fact that you could hire only so many spies, and there were never enough to keep tabs on all the citizens all the time. But new automated spyware has lifted that restriction and brought the blessings of increased productivity to the espionage business. My blog on "Engineering Censorship in China" shows how a totalitarian government can use technology to monitor or censor the online activities of over a billion people, with the help of companies like Microsoft.
Say that you have a rare genetic disorder that has a good, but not certain, chance of striking you as a young adult. It won't be fatal, but it will require many thousands of dollars' worth of specialized health care over the rest of your lifetime. Do you want your prospective employers or health insurance companies to know this fact about you? Even if they say they will not let it influence their decisions about you, do you believe them? There are laws currently under consideration by the U. S. Congress that would mandate the electronic storage of medical data, which is now largely maintained in the form of paper files. This change does not guarantee that any Joe or Jane off the street will be able to access your medical records, but it is not clear that it will safeguard them perfectly, either.
In each of these cases, something that was at first intended to be a good, convenient, or more efficient way of doing things gets twisted around and used to harm. Systems designed to make it easier to buy things also make it easier to steal things. Those who built features into the Internet to encourage the small-d democratic exchange of ideas now find that some governments use it to repress ideas. Attempts to make medical records more accurate and accessible can also hurt someone with a costly medical problem if insurers or employers use their medical records against them. And a great idea about how to bring people closer together with technology-assisted social networking occasionally helps cyber predators carry out their evil intentions.
While there are many laws of physics that engineers disobey at their peril, there is one principle of human behavior that is equally important. It goes by various names. In the Christian tradition, it is called "original sin": the doctrine that everyone on Earth has an inherent tendency to do the wrong thing, even when they know the right thing. G. K. Chesterton called this doctrine "the only part of Christian theology which can really be proved." The proof, of course, is empirical. No technology that has actually been used has ever failed to cause at least some harm along with its good. And it is foolish to design anything without taking this tried-and-true human factor into account.
Sources: The Associated Press report on MySpace's new restrictions is at http://www.msnbc.msn.com/id/13447786/. One view of the issue of medical privacy rights (the patient-advocate view) can be found at http://www.patientprivacyrights.org. The Chesterton quote is from Orthodoxy (New York: Doubleday, 1990, orig. published 1908), p. 15.
Wednesday, July 26, 2006
Wednesday, July 19, 2006
The Big Dig in Big Trouble
Boston's Big Dig project, which buried the elevated Central Artery (Interstate 93) beneath downtown and extended Interstate 90 to Logan Airport, spanned parts of two centuries and cost more than any other single highway project in the United States. On the night of July 10, when the project was mostly finished and people in Massachusetts thought they could begin to put the disruption and cost overruns behind them, a three-ton concrete ceiling tile came loose in the I-90 connector tunnel and killed a newlywed woman. Further investigation has revealed that over a thousand fasteners used to hold up similar tiles are probably defective. What can we learn from all this?
The first lesson is an old one: nothing draws attention like death and destruction. According to a report by Sean Murphy and Raja Mishra in the July 18 Boston Globe, lab tests of the epoxy glue used to hold the fasteners in place were originally scheduled during construction. But officials of Bechtel/Parsons Brinckerhoff, the engineering firm in charge of the Big Dig, felt so confident in the epoxy that they canceled the tests. Now it looks like the tests would have been a good idea, because they might have revealed the kind of problems that ultimately led to the fatal ceiling collapse. But there was no immediate harm that resulted from skipping the tests, so the incident went by unnoticed.
The next lesson is one we hear starting in kindergarten: be sure to follow instructions. Engineering is a constant battle between expensive over-caution on the one hand, and reckless negligence on the other hand. Where lives are at stake, as in the construction of bridges and tunnels, laws require licensed engineers to sign off on plans and specifications. But all the licensed engineers in the world won't do any good if the contractors and builders don't carry out the engineers' instructions to the letter.
Speculation by experts centers on the possibility that the epoxy used to hold the concrete ceiling tiles up was either not prepared and applied correctly, or used with oily steel. Steel as it comes from the factory has a thin coating of oil on it, and unless this oil is cleaned off prior to use, adhesives such as epoxy cannot form a good bond. Even if the steel was clean, the widely varying temperatures at a Boston construction site may have interfered with the chemical changes that epoxy goes through in order to harden. Inadequately hardened plastic adhesives can "creep" under stress, moving a tiny fraction of an inch every month, until the entire joint fails. Whatever was done wrong, it appears to have been done wrong consistently, because Governor Mitt Romney has announced that over 1300 fasteners are suspect and will have to be removed or replaced.
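To see why slow creep matters, a back-of-the-envelope calculation helps. The numbers below are assumed for illustration, not measurements from the Big Dig investigation, but they show how a joint can pass inspection and still fail within a few years:

```python
# Illustrative creep arithmetic. The rate, embedment, and failure threshold
# are assumptions chosen to show the time scale, not investigation data.
creep_rate_in_per_month = 0.01   # assumed slip of the anchor each month
critical_slip_in = 0.5           # assumed slip at which the bond lets go

months_to_failure = critical_slip_in / creep_rate_in_per_month
print(months_to_failure)  # about 50 months -- a little over four years
```

A creep rate far too small to notice at any single inspection still dooms the joint on a time scale comparable to the gap between the tunnel's opening and the collapse.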
Further investigations will eventually reveal what went wrong, and possibly who was responsible. Structural engineering is based mostly on physical science, and things don't generally fall down for no reason at all. But finding the physical cause gets us only part way toward preventing similar accidents in the future. Until the human organizations that let such things happen are repaired and kept in order, the same thing can happen again. In a way, it has.
The Boston tunnel collapse is strangely similar in some ways to a much more serious tragedy that happened twenty-five years ago this month. On July 17, 1981, several hundred people gathered on suspended concrete walkways to watch a tea dance in the newly opened Hyatt Regency Hotel in Kansas City, Missouri. The walkways were held up by steel rods that should have been strong enough to support the weight of the crowd. If the rods had been installed according to the original engineering plan, everything would have been fine. But on site, a contractor made a subtle change in the way the rods were made and assembled. The change greatly weakened the structure, which collapsed that evening, killing 114 people and injuring more than 200. Again, we had heavy concrete slabs, dangerous to life, suspended by thin steel rods. Again, if the plans had been carried out to the letter, the disaster would not have occurred. This is not to say that nobody should ever suspend heavy concrete slabs from thin steel rods again, or that engineers never make mistakes. They do. But the point is that responsibility inheres not only in those who make plans, but also in those who carry them out and those charged with making sure that the work agrees with the plans.
Everyone involved in a building project, from those who pay for it, to the architects and engineers, to the contractors, to inspectors, down to the lowliest laborer cleaning up afterwards, has to walk that same line between excessive over-caution and reckless carelessness. Since the vast majority of engineering projects work without major failures or loss of life, we can assume that most of these folks do their job well enough most of the time. But an accident like the Big Dig tunnel collapse reminds us of what has to happen at every step of the way, and what can go wrong if somebody doesn't pay enough attention to details that don't seem to matter at the time.
Sources: The Boston Globe articles cited are at http://www.boston.com/news/globe/city_region/breaking_news/2006/07/romney_number_o.html (Gov. Romney's announcement) and http://www.boston.com/news/traffic/bigdig/articles/2006/07/18/workers_doubted_ceiling_method/ (the neglected lab tests). A string of technical discussions on the general subject of epoxy ceiling fasteners and how they can fail is at the Engineering Tips website http://www.eng-tips.com/viewthread.cfm?qid=159632&page=1. The Wikipedia article about the Kansas City Hyatt Regency walkway collapse is at http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse.
Monday, July 10, 2006
Counterfeit Electronics: Coming to a Store Near You
Ten days ago, on July 1, 2006, it became illegal in the European Union to sell electronics that contain more than a very small amount of lead, mercury, cadmium, and a few other hazardous substances. These new Restriction of Hazardous Substances (RoHS) regulations present a golden opportunity for electronics counterfeiters to re-label and re-package lead-containing electronics to look like they meet the RoHS requirements.
What is electronics counterfeiting? Anyone who has strolled through a crowded street-level market in New York City has had the chance to buy things like "Rolix" watches and maybe even "Ipods" (not "iPods"). This kind of counterfeiting, where someone makes a cheap imitation of an expensive product and labels it with an almost-like name, is pretty easy to spot and avoid. But it is only the tip of a huge iceberg that costs legitimate manufacturers up to $100 billion a year in lost revenue, according to some estimates.
Most of the counterfeiting goes on far out of sight of consumers, among the thousands of manufacturers, suppliers, and parts brokers who provide the components for both consumer items and industrial electronics systems. Electronics supply chains are increasingly global, and increasingly use the Internet as a marketing and communications tool. The problem with global Internet-based supply chains is that purchasers and suppliers rarely meet face-to-face. This makes it easy for an unethical engineering firm to set up as a legitimate manufacturer and resell used ICs salvaged from old computers as new parts, for example. Another ploy is to relabel cheap, poorly performing parts as expensive better-performing ones. The manufacturer who trusts the part's label and builds a bogus two-dollar IC into a five-hundred-dollar motherboard, which thereupon fails, has got a huge financial headache on his hands. And even worse, the part can perform just well enough to leave the factory, only to fail when it gets to the consumer.
A recent article in IEEE Spectrum Magazine by Michael Pecht and Sanjay Tiku describes some of the ways manufacturers can guard against these problems. One obvious way would be to test parts as they arrive. Years ago, this practice was not uncommon, but it is costly and recent trends have been to move component testing away from the user and toward the supplier. But this requires a level of trust between supplier and user that some suppliers obviously don't deserve.
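The incoming-test approach amounts to a go/no-go screen: measure a sample of each lot against the datasheet and reject the lot if anything falls outside spec. A minimal sketch of the idea follows; the parameter names and limits are invented for illustration, and a real inspection would use the actual part's datasheet:

```python
# Go/no-go incoming inspection sketch. The spec limits are invented
# for illustration; real limits come from the part's datasheet.
SPEC = {"supply_current_mA": (0.0, 25.0), "prop_delay_ns": (0.0, 12.0)}

def part_passes(measurements):
    """Return True if every measured parameter is inside its spec window."""
    return all(lo <= measurements[name] <= hi
               for name, (lo, hi) in SPEC.items())

def lot_accepted(sample):
    """Accept a lot only if every sampled part passes.

    Remarked or salvaged parts often surface here: they may work after a
    fashion, but off-spec timing or supply current betrays a different
    die inside the package.
    """
    return all(part_passes(p) for p in sample)

genuine = [{"supply_current_mA": 18.0, "prop_delay_ns": 9.5}]
remarked = [{"supply_current_mA": 18.0, "prop_delay_ns": 21.0}]  # too slow
print(lot_accepted(genuine), lot_accepted(remarked))  # True False
```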
If the supply chain consisted of just two links, a manufacturer might be able to vet each supplier thoroughly and establish trustworthiness that way. But take the example of a criminally incompetent supplier a few years ago who stole a formula for the electrolyte used in electrolytic capacitors, a very common type of cheap electronic component. He got the formula wrong, but went ahead and mixed up a batch anyway and sold it to some capacitor manufacturers. They used it to make their capacitors and sold the capacitors to a board-making company, which sold the boards to computer makers. Some time later, capacitors filled with the bad electrolyte began to fail, ruining hundreds, if not thousands, of computers. There were at least five links in this defective supply chain, not counting middlemen, and the only problem was at the head of the chain, where it was hardest to detect. The harm in this case was a flurry of failed computers, but suppose a bad capacitor had gone into a heart pacemaker? The harm that counterfeit parts cause isn't only financial. Reputations can be ruined and people can die. But connecting the dots to find out who was responsible is often an impossible task.
Counterfeit electronics is an obvious case of unethical engineering. Someone with enough technical expertise to know what parts are in demand and how to fake them is profiting illegally and immorally from counterfeiting of this kind. Although it happens all over the world, including the United States, the fact that a huge part of all electronics manufacturing is done in Asia means that many counterfeiters also hail from the East. Ironically, a friend of mine who is a native of Hong Kong characterizes the engineering environment in China in recent years as "the wild wild West," associating it with California gold rushes, wide-open cities, and general hell-raising. This anything-goes atmosphere encourages fly-by-night counterfeiting operations and worse. Although China has anti-counterfeiting laws on the books and stages highly publicized raids on counterfeiters from time to time, the sheer volume of fake goods produced means that most fakers never get caught.
If there weren't so many fakers in the first place, things would improve on their own. What if more engineers in China joined professional organizations with a strong commitment to ethical behavior? The Chinese government is suspicious of any organization that is not tightly under its control, but it would certainly have no objection to professional organizations that oblige their members not to engage in counterfeiting.
By many measures, the economies of China, India, Malaysia, Singapore, and elsewhere in Asia are still maturing. In the 1800s, when the British Empire's economy vastly overshadowed that of the United States, it was very common for unethical U. S. publishers to print unauthorized editions of British authors' works. Eventually, an international copyright agreement was hammered out, and as more U. S. publishers agreed to pay royalties to British authors, British publishers did the same for U. S. authors, and the marketplace became more efficient overall. Something like this may take place in Asia, but first, as happened in the United States, the professional culture will have to change.
Counterfeiting electronics, like counterfeiting money, is an act that benefits the counterfeiter substantially (for a while, anyway) while spreading harm randomly and diffusely everywhere else. There will always be some criminals, but wherever there are enough professionals to band together to take common action and to declare themselves committed to upholding the highest principles of their profession, they can bring about a change in their culture. And this is something no amount of law enforcement can do.
Sources: The IEEE Spectrum article "Bogus" is at http://www.spectrum.ieee.org/may06/3423.
Thursday, July 06, 2006
Willie Nelson, Environmental Engineer
The last time I drove from San Marcos up to Fort Worth on Interstate 35, I passed a billboard that bore the grizzled visage of Willie Nelson, the living legend of country music. But instead of advertising his latest album, the billboard urged me to go "BioWillie." Mr. Nelson, it turns out, is using his popularity among truckers to promote biodiesel, a diesel fuel made from animal and vegetable fats that is usually sold blended with ordinary petroleum diesel. A recent New York Times article said that he drives cars and runs tractors on his farm that have been modified to operate on 100% renewable oil, which according to some reports makes the exhaust smell like French fries. So far, biodiesel is available at only a few truck stops, mostly in Texas, but the entertainer has high hopes that his environmentally friendly fuel will become at least as popular as his music.
Does this make Willie Nelson an environmental engineer? I'm not sure he can even spell "methyl ester," much less synthesize it from the used restaurant frying oil that forms much of the raw stock that his refinery uses to make the stuff. But his interest in biodiesel and his clever promotion of the fuel to a market of likely users shows the kind of imagination and initiative that characterizes good engineers.
For that matter, the definition of a good engineer has been changing. It used to be the case in my grandfather's day that technical ability was the only thing expected of engineers. Before the dawn of the computer age, designs of any complexity, from a bridge to a telephone network, needed lengthy, tedious calculations combined with the kind of judgment learned only from experience. But today, technical expertise surpassing even the best of the earlier engineers has been canned into computer software packages. It requires a different kind of genius to use these packages, but the need to spend time on all the nitty-gritty details is less than it used to be.
In many fields, engineers have been freed by these changes to consider other matters beyond the strictly technical features of a project. These include safety concerns, marketing and cost factors, manufacturing problems, and environmental issues. Not that the earlier engineers ignored these factors altogether. But back then simply getting a design to work took so much effort that the other things didn't receive as much attention as they could have.
Biodiesel is a good example of a product whose appeal derives from the simple fact that it is made or grown in an environmentally friendly way, even if it costs more and doesn't perform much better than a competing product. These so-called "soft" issues can actually be harder to deal with than the "hard" technical questions, which nowadays can often be settled in a few computer runs rather than having to build prototype after prototype until the right combination of design factors falls together. And the soft issues are where engineering ethics comes in.
Take for example the thing that is making biodiesel and other bio-derived fuels such as ethanol (made from corn) so attractive: the relatively high price of oil, seventy-five dollars a barrel at this writing. There is one school of economic thought that favors minimal interference in markets, from the convenience store down the street to the global market for oil or any other commodity. If oil becomes too expensive, they say, people will scout around for other ways to get from A to B: a hybrid car, biodiesel, hydrogen, or even a bicycle. In the meantime, such meddlesome practices as higher fuel taxes to force drivers to conserve are counterproductive. When the price of oil gets high enough, the chance to make money with alternative fuels will attract inventors, engineers, and entrepreneurs like Willie Nelson, and until then, we should leave things alone.
That argument is fine as far as it goes, but the trouble is, sometimes it doesn't go far enough. Simple free-market analyses often leave out what are called "externalities." These are things like air pollution, global warming, and other effects that result from the use of a certain commodity, but are not easily expressed in terms of the commodity's market cost. In an insightful article in IEEE Technology and Society Magazine, regional planning expert Clint Andrews showed what happens if you look at global energy costs in recent history and include the externality of military expenditures.
Andrews supposes for the sake of argument that concerns over energy security represent half of the reasons that the U. S. went to war in Iraq in 2003. If the annual cost of the war is estimated at $40 billion, half of that figure is $20 billion a year. Andrews points out that $20 billion is also about what the U. S. spends on imported Persian Gulf oil annually. So if we include only half of a modest estimate of what we spend on the Iraq war as an externality of our oil supply and "internalize" it, we really spend $40 billion a year, not $20 billion. And of course this neglects the cost in human lives, which is—or should be—incalculable.
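Andrews's back-of-the-envelope arithmetic is simple enough to lay out explicitly. The figures below are the estimates quoted above, not independent data:

```python
# Andrews's externality arithmetic, using the figures quoted above
# (all amounts in billions of dollars per year; estimates, not hard data).
war_cost = 40.0                # rough annual cost of the Iraq war
energy_security_share = 0.5    # fraction of the war rationale attributed to oil
externality = war_cost * energy_security_share    # 20.0

direct_oil_spending = 20.0     # annual U.S. spending on Persian Gulf imports
internalized_cost = direct_oil_spending + externality

print(internalized_cost)       # 40.0 -- double the apparent market cost
```

The point of "internalizing" the externality is that the true cost of the oil is the sum of both lines, not just the market price we see at the pump.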
Andrews concludes that while a reasonably free market is a necessary condition for good energy policies, it isn't sufficient. When you include externalities such as wars and other government interventions in energy markets (the billions of dollars in state and federal highway taxes are another example), we are very far from the ideal free market envisioned by libertarians.
An ethical engineer will not simply sell technical services to the highest bidder, but will also think about the far-reaching effects of each project or job. That's exactly what Willie Nelson is doing with his French-fry-smelling tractors and BioWillie billboards. May all engineers do the same.
Sources: Willie Nelson's activities in biodiesel were described in an article by Eric O'Keefe in the New York Times on July 5, 2006 at http://www.nytimes.com/2006/07/05/business/05biowillie.html. Mr. Nelson's website describing his project is at http://www.wnbiodiesel.com. Clinton Andrews' article "Energy security as a rationale for government action" was in the Summer 2005 issue of IEEE Technology and Society Magazine, available through many university libraries and at www.ieeessit.org.
Tuesday, June 27, 2006
Discovery Launch: Hopes, Prayers, and Engineering Judgment
This morning, Tuesday, June 27, 2006, four days and some hours before the scheduled launch of NASA's Space Shuttle Discovery, the director of engineering at the Johnson Space Center, Charlie Camarda, was removed from the mission's management team. The Houston Chronicle reports that this reassignment, which Camarda says was against his will, took place after Camarda sent an email to colleagues supporting them for expressing their "dissenting opinions and your exceptions/constraints for flight." Ten days ago, in the June 17 flight readiness review meeting, NASA's head safety official Bryan O'Connor and Christopher Scolese, NASA's chief engineer, voted not to launch. Despite their opposition, NASA managers decided to proceed with the scheduled flight anyway. According to comments the two made after the meeting, their concerns were more that Discovery might suffer irreparable damage during the launch, not that the crew of seven astronauts was in more than the usual danger involved in a ride into space. Nevertheless, it's very clear from these and other reports that NASA is far from one big happy family these days.
Camarda's dismissal may have more to do with internal NASA politics than with shuttle safety. But the two cannot be separated. NASA maintains the shuttles, trains the astronauts, and decides when and how often to fly the remaining three orbiters: Atlantis, Discovery, and Endeavour. NASA head Michael Griffin has gone on record as saying that if Discovery is seriously damaged by pieces of insulating foam—the same problem that doomed Columbia in 2003—he would consider shutting down the entire shuttle program. That policy no doubt influenced the votes of O'Connor and Scolese, who feel that engineering modifications to foam on a number of support brackets should be made to prevent irreparable damage to Discovery's vital heat shield. Everyone agrees that if the kind of damage sustained by Columbia occurs, and is discovered in orbit, and can't be repaired, then the astronauts can take refuge in the International Space Station until a rescue flight can be arranged with one of the two remaining shuttles. This despite the fact that the Station has lately had trouble accommodating even two or three residents at a time. But being uncomfortable and cramped in weightlessness for a few weeks is better than a fiery death. You haven't seen a lot of news items about billionaires paying for rides into space lately, have you? Maybe there's a reason.
In my Mar. 21, 2006 blog, "Retire the Space Shuttle Now," I stated a number of good reasons that we should go straight to the next model of space orbiter without risking any more people's lives in antiquated, patched-up shuttles that deserve an honored place in the Smithsonian, not reuse in space long after their design lifetimes. The recent news out of NASA has only increased my concern that yet another known problem that we haven't heard about in public, but which the engineers are all too familiar with, will reach out and cause another hair-raising space adventure like Apollo 13's near-disaster, if not worse.
Unfortunately, the shuttle program has achieved canonical status in the engineering ethics literature for a couple of reasons. One is that NASA, being a public agency, is unusually open about its internal processes and debates, which means that records of data and decisions are easy to obtain. The second is that both the Challenger and Columbia disasters were caused by known problems that were technically fairly well understood. The failures were not mysterious scientific puzzles; they were failures in management decision-making.
In most well-run organizations, the chief safety officer is king in his or her limited domain. In an oil refinery, for instance, if the president and owner of the plant walks into a hazardous area and attempts to light a cigar, the lowliest safety official present is entirely within his rights to do anything necessary to prevent it, including knocking the president down. On June 17, we witnessed the spectacle of not only NASA's chief safety officer, but its chief engineer as well, saying that for reasons of property protection, the launch should not proceed—and being overruled. And Charles Camarda, an engineer who himself flew on the 2005 Discovery flight, the first one after the Columbia disaster, has just gotten sacked from his mission responsibilities for commending the way some of his underlings spoke out at the flight review. It is not a pretty picture.
In Greek mythology, a young woman named Cassandra had the misfortune to attract the eye of the god Apollo. In an attempt to put himself in her good graces, he gave her the gift of prophecy. But when she refused his advances, he ran up against the rule that says what the gods giveth, the gods can't taketh away. He couldn't keep her from being a prophet, but he could spoil it another way: he made sure that whatever Cassandra prophesied in the way of dire forecasts would not be believed by anybody else. So when she ran around in Troy saying, "You'll be sorry if you bring that big wooden horse in here," she warned the Trojans in vain, the Greeks popped out anyway, and Troy fell. This made Cassandra wish she had never seen Apollo in the first place. Since then her name has passed into the language to mean one whose accurate foretellings of disaster are ignored.
I don't want to be a NASA Cassandra. I have no illusions that one blogger, or even an entire Greek chorus of bloggers, will influence NASA's decision-making process. My hopes and my prayers are that STS-121 will go smoothly, with no headlines other than the routine ones. But we face three possible outcomes on this trip: a routine flight with no significant problems, a flight in which Discovery is damaged enough to scuttle the remaining Shuttle fleet, or a more serious problem that endangers life. May God grant that the third possibility doesn't happen. But I'm going to leave it up to Him as to which of the other two takes place.
Sources: For Camarda's reassignment, see the Houston Chronicle at http://www.chron.com/disp/story.mpl/front/4004817.html. For Camarda's comments on NASA's changed culture, see the 2004 interview at the NASA website
http://www.nasa.gov/vision/space/preparingtravel/rtf_interview_camarda_04.html. For a report on the June 17 meeting, see
http://news.yahoo.com/s/space/20060620/sc_space/nasaschiefengineersafetyofficerweighinonsts121launchdecision.
Tuesday, June 20, 2006
Hunting the Cyber Predator
The scene: a ballroom in a fancy hotel in Denver, Colorado. The room is crammed with teenagers of both sexes, as well as a preponderance of young men in their twenties, from all across the U. S. and from many foreign countries as well. Each person wears a mask and a costume that completely disguises identity. What brought them here? In malls and shopping centers all across the nation, attractive advertisements enticed these young people to a free party. To respond to an ad, you entered a small office where you encountered a man wearing a blindfold. The man asked you a few not particularly personal questions about yourself, and handed you a free round-trip airline ticket to Colorado. Some of the younger teens told their parents what they were up to, but many of them neglected that little detail.
The episode above is fiction. It sounds like the beginning of a bad suspense novel, bad because of its unbelievability. Any outfit making such an offer would risk kidnapping charges or worse. But if you substitute the Internet for the free airline tickets, and the elementary requirements for entering such social-networking sites as MySpace.com for the interview with the blindfolded man, you have a fairly good approximation of what goes on online every day, twenty-four hours a day. And while the vast majority of social encounters on these sites do no harm, there are enough folks out there trying to abuse the system for purposes of sex or child pornography to keep the Texas attorney general's Cyber Crimes Unit busy. That office recently marked the third anniversary of its founding in 2003 with the arrest of its 80th alleged cyber predator.
Although many social networking websites have minimum age limits and warnings against putting too much personal or identifying information online, these restrictions are easy to evade, for either innocent or sinister reasons. For example, MySpace.com has a section on "Safety Tips" in which they warn users to "avoid meeting people in person whom you do not fully know." What "fully" knowing somebody means is left up to the user to decide. You are warned that "if you lie about your age, MySpace will delete your profile," which fails to explain how MySpace is going to find out how old you really are in the first place. Texas Attorney General Greg Abbott has called for social-networking websites to require a credit card number from users, which would at least ensure the involvement of someone over seventeen years of age. But so far the sites have resisted this proposal.
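The attorney general's proposal amounts to a gate at sign-up time: no verified card, no adult account. Here is a minimal sketch of how such a gate might work. Everything in it is illustrative—no such MySpace mechanism existed—and the card check shown is only the Luhn digit test, a stand-in for a real verification service:

```python
# Hypothetical sketch of a credit-card age gate like the one the Texas
# Attorney General proposed. Names and logic are illustrative only.

def card_is_valid(card_number):
    """Stand-in for an external card-verification service.
    Here it performs only the Luhn checksum test on the digits."""
    digits = [int(c) for c in card_number if c.isdigit()]
    if len(digits) < 13:
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def may_register(claimed_age, card_number=None):
    """Allow registration only when a valid card vouches for an adult.
    Under-18 users would need a parent's verified card (omitted here)."""
    if claimed_age < 14:        # MySpace's stated lower age limit
        return False
    if claimed_age >= 18:
        return card_number is not None and card_is_valid(card_number)
    return True

# A claimed adult with no card (or a bad number) is turned away:
print(may_register(25, None))                  # False
print(may_register(25, "4111111111111111"))    # True (standard Visa test number)
```

Note what the gate does and doesn't do: it ensures an adult cardholder is involved somewhere, but it cannot by itself prove the person at the keyboard is the cardholder—which is exactly why the sites' resistance to the idea is only part of the story.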
None of the good things the Internet has brought us—and none of the bad things, either—could have come about without the vision and labor of many thousands of software engineers and others who came up with the idea and who manage to keep the whole unruly thing going. It is a truism of the history of technology that people will use—and abuse—new technologies in ways that the designers never thought of. As it has become easier for more people without technical backgrounds to present more personal online information about themselves, including photos and up-to-date identifying data, the dangers of letting the whole world see your virtual persona online have increased as well. No responsible parent would let their ten-year-old daughter wander around alone in an unfamiliar city. But there are some children that age who can run cybercircles around older adults and do things that we literally can't imagine, because we older folks are unfamiliar with that world.
Where does the responsibility for protecting children from Internet predators lie? For the most part, not with the children themselves. Both in law and in fact, even children who can write C++ code at the age of ten are still emotionally immature, and can't be expected to follow all the "safety tips" a well-meaning site manager posts. Parents are the next logical choice. But parents find it hard to be in the room watching every last second that Jack or Jill spends online, even if they want to. That leaves the operators of the social-networking sites themselves on the front lines.
No doubt there are some security measures already in operation that are invisible to the user. But if the attorney general of only one state has been able to catch eighty suspected cyber predators in three years from a dead start, you know there are lots more out there to be caught. Clearly, whatever measures are already in place at the sites are not foolproof, nor could they be. But it seems that the looseness and open-ended nature of these sites, while encouraging people to meet new friends, leaves children wide open to becoming victims of a sufficiently ingenious and dedicated predator.
Some feel that since software got us into this problem, software can help us solve it too. Increasingly sophisticated automatic systems for detecting pornographic content (both text and visual forms) are being used here and there. But that is only part of the problem. To make sure no one underage uses these systems, something like the credit-card-number idea needs to be implemented. People with past criminal records having to do with child molestation should be positively identified and blocked from such sites. And while it is a challenge to come up with a system that would sense when a potential predator is "pumping" a victim for identifying information, equally sophisticated systems now routinely develop elaborate and finely graduated profiles of our tastes in books, food, entertainment, and other online purchases. If software engineers devoted a fraction of the energy to the problem of cyber predators that they have expended on figuring out exactly what we want to buy, maybe the Cyber Crimes Unit in Austin will eventually have to look for other kinds of criminals to catch. For example, there's that Nigerian princess who hasn't got back to me lately . . . .
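A system that senses "pumping" for identifying information could begin life as something as crude as pattern matching over a chat transcript. The toy sketch below makes the idea concrete; the phrase list, scoring, and threshold are purely illustrative, and a real system would need far richer models than this:

```python
import re

# Toy sketch of a screening system that flags chats which press a user for
# identifying information. Patterns and threshold are illustrative only.
PROBING_PATTERNS = [
    r"\bwhat school\b",
    r"\bwhere do you live\b",
    r"\bhome alone\b",
    r"\byour (real name|address|phone number)\b",
    r"\bsend (me )?a (pic|picture|photo)\b",
]

def risk_score(messages):
    """Count how many probing patterns appear anywhere in the conversation."""
    text = " ".join(messages).lower()
    return sum(1 for pattern in PROBING_PATTERNS if re.search(pattern, text))

def should_flag(messages, threshold=2):
    """Refer a conversation for human review when enough patterns match."""
    return risk_score(messages) >= threshold

chat = [
    "hey, what school do you go to?",
    "are you home alone after class?",
    "cool, send me a pic",
]
print(should_flag(chat))    # True -- three patterns match
```

The same machinery that builds a profile of your taste in books could, pointed the other way, build a profile of a conversation's intent—which is the essay's point about where the engineering effort is currently going.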
Sources: The Texas Attorney General's announcement "Texas Attorney General Greg Abbott’s Cyber Crimes Unit Marks 3-year Anniversary With 80th Arrest" is at http://www.oag.state.tx.us/oagNews/release.php?id=1573. MySpace.com's list of safety tips is at http://collect.myspace.com/misc/safetytips.html?z=1.
Tuesday, June 13, 2006
Engineering the Perfect Baby
Most engineering societies publish codes of ethics, and most of these codes say something about the health and welfare of the public. My own professional society, the IEEE, has over 300,000 members involved in electrotechnology of all kinds, including the ultrasound machines that produce images of unborn babies. The IEEE code of ethics says among other things that its members agree "to accept responsibility in making decisions consistent with the safety, health and welfare of the public" and "to treat fairly all persons regardless of such factors as race, religion, gender, disability, age, or national origin." Many people—and several U. S. states—include unborn babies in the category of "persons," even if they are found to have disabilities.
Before the advent of ultrasonic medical imaging, amniocentesis testing, and other prenatal diagnostic techniques, the mother's womb was a mysterious and inviolable sanctum. But now, due largely to the efforts of biomedical engineers and scientists, we can monitor heart rates, blood chemistry, and even perform surgery on babies who have several months to go before their regularly scheduled arrival. We can also discern defects such as clubfeet, extra digits, webbed fingers, and cleft palates. None of these defects are life-threatening, but they mar the ideal image that all parents want of a "perfect baby."
In her June 7 Orlando Sentinel column, Kathleen Parker deplored the cases of several British parents who had aborted their babies precisely because they had one of the defects I just mentioned. The numbers were not large—twenty or so with clubfeet, four with hand problems, one with a cleft palate—but numbers are not always the most important thing. While we have no comparable data for the U. S., our larger population and access to largely unrestricted abortion probably means that even more abortions in this country are performed for comparable reasons. In India, it is well known that many abortions take place simply because the unborn baby is female. And this fact is usually disclosed by an ultrasound imaging machine.
I am not about to issue a blanket condemnation of prenatal diagnostic technology. It is a classic case of the two-edged sword. Some anti-abortion groups have found that one of the most effective ways they can persuade a potential mother to carry her baby to term is to show her an ultrasound image of the live, kicking infant inside. And until recently, the universal unstated purpose of medical technology was to save lives and preserve health, abortion and euthanasia notwithstanding.
But if you consider unborn babies persons and members of the public, in these cases technology is a hazard to their health, safety, and welfare. And even more obviously, technology is being used to discriminate against (that is, kill) those with disabilities or those who happen to be the wrong gender, at an age when they are most defenseless. Such use of technology clearly violates the IEEE Code of Ethics.
Well, you say, technology is neutral, and the person who designs equipment can't always predict how people will use or misuse it. As I have mentioned elsewhere, the "technology is neutral" argument is a shaky one, especially in the case of technologies designed explicitly to harm people. As for predicting how technology will be used, engineers are responsible for making sure that when a new technology is introduced, they have taken reasonable safety precautions in terms of warning labels, training in safe procedures, and so on. But when an unsafe condition arises in use, it seems to me that turning a blind eye to the situation is irresponsible.
I don't ordinarily like to talk philosophy in this blog, but in this case it's unavoidable. There are scientists and engineers today who take the view that the human being is essentially no different from a computer, and a primitive early-stage computer at that. I'm thinking of such "posthumanists" as Ray Kurzweil and Hans Moravec, who see humanity as just a crude sketch of what we are now obliged to improve upon using genetic engineering, robotics, and artificial intelligence. One way to approach this improvement process is to throw away defective units, which is the approach the British parents of defective infants took. This reminds me of the early days of transistor manufacturing, when the chemistry and physics of semiconductors were poorly understood. A factory would do its best to make a batch of a hundred transistors, then sort through them one by one to find the ten or twenty that worked acceptably, and throw the rest away. But people aren't transistors, or computers, or machines. They're people.
Kathleen Parker began her column with a poetic quotation from the famous—and clubfooted—Lord Byron, who wouldn't have made it out of the womb if he had been conceived by a British couple equipped with an ultrasound machine and a false ideal of bodily perfection. People with minor or major bodily defects, and yes, even mental defects, who went on to achieve incredible feats are among the most encouraging examples of what it means to be human. Paradoxically, you will find in many biographies of the great, from Homer the blind poet down to Lance Armstrong the cancer survivor, some severe physical challenge that forced them to develop the kind of character capable of overcoming it.
It is time to divide the medical wheat from the chaff. Given a human life, the job medical science and technology should tackle is how to help that human life overcome problems and difficulties with a reasonable use of limited resources. That is the wheat. But any technology or procedure that is used to end a defenseless human life because others decide that for whatever reason—status, economics, politics—it is not worth living, is chaff. And the sooner the chaff is gone with the wind, the better.
Sources: The IEEE Code of Ethics is at http://www.ieee.org/portal/pages/about/whatis/code.html. Kathleen Parker's article "Abortion's dead poets society" is at http://www.orlandosentinel.com/news/opinion/columnists/orl-parker07_106jun07,0,2091692.column. The Alan Guttmacher Institute study she mentions, "Reasons U. S. Women Have Abortions: Quantitative and Qualitative Perspectives," is at http://www.guttmacher.org/sections/abortion.php.
Sunday, June 04, 2006
Hurricane Katrina: Good News for Flood Control Engineering
Last August's Hurricane Katrina left well over a thousand people dead, most of New Orleans flooded, and many thousands homeless. You have to look long and hard to find any good news in the aftermath of the worst natural disaster to hit the United States in many decades. But ironically, one of the best things that may happen as a result is a badly needed top-to-bottom reorganization of coastal flood control work.
Engineer and author Henry Petroski likes to say that engineers learn a lot more from failure than they learn from success. You have to know a certain amount in order to succeed at all, of course. But if you are a young engineer and you just apply book learning to a project where everything goes smoothly, all that tells you is that the books were right. Failure is Nature's way of telling an engineer that the books didn't tell the whole story, and that the state of the art needs improving. Katrina overwhelmed a complex system of levees, dams, and canals that clearly wasn't up to the challenge. But now everybody concerned is motivated to find out what went wrong and how to fix it in a way that will prevent another Katrina disaster.
On June 1—the start of the 2006 hurricane season—the U. S. Army Corps of Engineers released a huge, detailed report on the failures that contributed to the New Orleans floods. More important than the details of the report is the fact that the Corps accepted full responsibility for the failure. The Corps and the Mississippi go back more than a century, to the days when many people doubted that the Big Muddy could ever be contained or controlled by the works of man. In "Life on the Mississippi," Mark Twain's memoir of his years as a riverboat pilot, he reports on how bold engineers had just begun to erect levees and dams to channel the river's unceasing powerful currents in the 1880s. Despite Twain's generally optimistic attitude toward the modern age's advances in technology, he expressed considerable skepticism that the Corps of Engineers, or anyone else short of the Almighty, could make much of a difference in the way the Mississippi found its way to the sea.
In the intervening decades, the Corps found ways of doing just that. The South still saw severe floods from time to time. In 1927, the Mississippi inundated hundreds of square miles of Delta land, and in 1965 a hurricane caused serious flooding in New Orleans. And here we are in 2006, a year after another major flood-control disaster. It may not be entirely coincidental that these events are about a generation apart. A pattern Petroski has found over and over in the history of technology goes like this: In the early stages of a new technology, engineers tend to overdesign a system to make sure it doesn't get a bad reputation that would kill it off right away. But as more designs succeed, newer engineers on the job tend to become not exactly careless, but overconfident. It's easy to assume that because there haven't been any major problems so far, there aren't likely to be any in the future. This is when new circumstances or long-term failure mechanisms are most likely to cause trouble. What we may be seeing here is a pattern of disaster, followed by a few years of overcautious design, followed by reduced attention, less funding, and complacency, until a new generation of engineers who aren't old enough to remember the last big failure arrives just in time for the next one.
But there are other factors as well. A system of dams and levees protecting a certain land mass has one thing in common with power lines, high-voltage insulation, and chains. All it takes is one failure in one little place—one tree touching a sagging transmission line, one piece of insulation failing, one link breaking—and the whole system collapses. Enough water can—and did—flow through a twenty-foot breach in a dike to flood most of a city like New Orleans. Historically, the best way engineers have found to deal with such chain-like systems is to design and build them consistently, to uniform plans, and perform a rigorous and thorough quality-control inspection to make sure every single part of the system is up to snuff.
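The weakest-link arithmetic behind that observation is stark. If every segment of a chain-like system must hold for the system to hold, and each segment fails independently, the system's reliability is the product of the segment reliabilities. The numbers below are illustrative only, not figures for any actual levee system:

```python
def series_reliability(segment_reliability: float, n_segments: int) -> float:
    """Probability that a chain-like system survives, assuming each of
    n_segments holds or fails independently. One weak link dooms the whole."""
    return segment_reliability ** n_segments

# Illustrative: even if each of 100 levee segments is 99.9% likely to hold
# through a given storm, the system as a whole holds only about 90% of the
# time -- which is why every segment must be inspected to the same standard.
print(series_reliability(0.999, 100))
```

This is why uniform construction and uniform inspection matter so much: raising the worst segment's reliability does far more good than gold-plating the best one.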
Unfortunately, it appears that the political structure of New Orleans at least partly militated against such a procedure. Although the U. S. Army Corps of Engineers had overall responsibility for the integrity of the flood-control system for New Orleans, there were also state and local authorities whose job it was to inspect and maintain parts of the system. I almost wrote "critical parts," but in a system of dams, every single part is just as critical as every other. In the nature of things, some parts of the system received better attention than others. But Katrina went for the weak spots regardless of politics, and the result filled New Orleans with filthy water and emptied it of people.
The good news I referred to above is that no one now needs convincing that the old way of doing flood-control business along the Mississippi, and especially in New Orleans, doesn't work. There were many technical problems with the levees such as inadequate construction and failure to take into account the poor quality and subsidence of the soil. People are now discussing the construction of "fail-safe" levees that have secondary landfill areas behind them, but of course, that takes up valuable real estate. What should result from the sad images we saw of flooded New Orleans is a revitalized and chastened Corps that will coordinate with reorganized state and local authorities to do a good job next time. It will take money and political will, but the alternative is too fresh in our minds to allow them to do anything less—at least for the next thirty years.
Sources: The U. S. Army Corps of Engineers draft report released June 1, 2006 is currently online at https://ipet.wes.army.mil/. A personal recollection of the 1927 Mississippi floods is contained in the memoir Lanterns on the Levee: Recollections of a Planter's Son by William Alexander Percy, who was author Walker Percy's uncle.
Monday, May 29, 2006
Model Railroading: Coming to Your Town in a Big Way
A friend of mine is an avid model railroader. He has spent countless hours assembling intricate scale-model railroad cars and locomotives, constructing miles of model track, and attending meets where dozens of his fellow enthusiasts put together entire scale-model counties of rail routes through scenic landscapes and busy towns. The remote controls for these toys have grown increasingly sophisticated with time as well, all the way down to realistic engine noises produced digitally. The only people who may resent the time and energy spent on such a harmless hobby are the wives thus deprived of their husbands' time (and the husbands of any women who pursue this avocation, though I don't know of any). But a parallel development—the remote control of real railroad locomotives with no one on board—is stirring up a considerable controversy.
Since the decline of passenger rail transportation in the U. S. in the last half of the twentieth century, the U. S. rail system has faded into the background of public consciousness. But the freight operations that rail lines support have actually become more critical than ever to the country's economy. Nearly all the coal that fuels our coal-fired power plants (and that is about half of them) is carried by rail, as well as numerous other bulk materials such as gravel, cement, chemicals, and food products, not to mention imported merchandise, automobiles, and so on. Since very few additional rail lines are being built, the railroad industry is searching for ways to put more and more freight through a physically limited system. And one of these ways involves remote control of unmanned locomotives.
An article in the May 28 issue of the Austin American-Statesman describes how this works. An operator who has completed an 80-hour training course stands by a track on which a remote-control locomotive sits. Strapped to his chest is a box sprouting joysticks, crank knobs, and a stubby antenna, rather like an overgrown model-airplane radio-control unit. With this remote control system, the operator can perform most of the operations that the engineer in the cab can do, only without any engineer in the cab. If radio control is lost for any reason, the system automatically stops the train.
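That stop-on-signal-loss behavior is a classic watchdog pattern: the locomotive must keep hearing valid commands from the operator's transmitter, and prolonged silence is itself treated as a command to stop. A minimal sketch of the logic, with a timeout value I have made up for illustration (the article doesn't give the real figure):

```python
def should_apply_brakes(last_signal_time: float, now: float,
                        timeout_s: float = 5.0) -> bool:
    """Watchdog logic: if no valid radio command has arrived within
    timeout_s seconds, fail safe by stopping the train.
    The 5-second default is a hypothetical value, not taken from any
    real remote-control locomotive system."""
    return (now - last_signal_time) > timeout_s
```

The design choice worth noticing is that the system fails toward the safe state: losing the radio link produces a stopped train, not a runaway one.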
Most of these systems are being used in switchyards, where the relatively short range of the radio transmitter is not a problem. But recently, some lines have been experimenting with using the system to send trains to nearby industrial sites for short hauls.
Safety is an obvious concern. If there is nobody in the cab, how can the operator stop the train if an obstruction unexpectedly shows up? Unfortunately, stopping a train is not an instantaneous act. Depending on speed and size, it can take up to a mile or more to stop a train even under emergency conditions. The engineers who designed the remote-control systems have presumably taken these factors into consideration, but as with many technologies, the way it is used has a lot to do with how safe it is.
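The scale of the problem is easy to estimate from basic kinematics: at constant braking deceleration a, a train moving at speed v needs a distance of v²/2a to stop. The deceleration figure below is a rough illustrative value for a heavy freight train, not a published specification:

```python
def stopping_distance_m(speed_mps: float, decel_mps2: float) -> float:
    """Distance needed to stop from speed_mps at constant deceleration
    decel_mps2, from the kinematic relation d = v^2 / (2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

# Illustrative values: 60 mph is about 26.8 m/s, and a loaded freight
# train in an emergency stop might manage only ~0.22 m/s^2 of
# deceleration -- giving a stopping distance of roughly a mile (1600 m).
print(stopping_distance_m(26.8, 0.22))
```

Note the square on the speed: halving the train's speed cuts the stopping distance by a factor of four, which is one reason remote operation has so far been confined to low-speed switchyard work.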
Railroads are one of the most highly unionized industries in America, and opinions among the unions about the new technology are divided. The Brotherhood of Locomotive Engineers' feelings about the matter are clear from their main website, which shows a tipped-over railway engine with the legend "Remote Control" plastered across it. Since a locomotive running without an engineer represents direct job loss, their concern is understandable. They are, in the colloquial phrase, "agin it," and have commissioned a report criticizing wider adoption of the technology before better operating rules are put in place. Numerous attempts by the BLE to slow the technology through strikes or other means have been blocked by federal judges.
On the other hand, the United Transportation Union, which represents conductors and switchmen, has come out, after some waffling, in favor of limited use of the technology. The Federal Railroad Administration, for its part, has studied the issue and allowed limited experimentation as long as the operators (generally switchmen) have received an 80-hour training course. This annoys the railway engineers, who have to take a six-month-long course and pass tests to qualify for their jobs.
What about accidents? There have not been many serious ones reported as yet, possibly because the technology is so new: a few derailments and three fatalities, but no large-scale accidents with multiple loss of life. It is not clear how far the rail lines wish to go with remote-control locomotives. It is easy to imagine a single model-railroad-style system the size of the U. S. with thousands of trains running completely under computer control. Even now, locomotive engineers are like airline pilots in that they do what centralized traffic-control operators tell them to do, via microwave radio links from a few control centers that continuously monitor train positions and movements. So replacing the engineers with "robotic" control would not be as great a change as you might think. What the people on the train supply now, of course, is eyes and ears and hands to do the great variety of things that computers and robots cannot yet do. Some of these things are related to safety and some are not.
So it will be some time before the average train you see trundling across a grade crossing while you wait in your car will be nothing but a pile of steel and cargo, bereft of any human presence. If the Brotherhood of Locomotive Engineers has its way, it will never happen. On the other hand, remote control may spread gradually until some big disaster occurs with a remotely-controlled locomotive, which might energize legislators to prohibit the practice altogether. In the meantime, you might visit the next model-railroaders meet in your town to see what the future of real railroading may be like.
Sources: The Federal Railroad Administration has a statement "Remote Control Locomotive Operations" at http://www.fra.dot.gov/us/content/94. The website http://www.labornotes.org/archives/2003/08/b.html has an article "Rail Workers Battle Unsafe Remote Control Technology" written by Ron Hume. The Brotherhood of Locomotive Engineers website has an article "BLET releases remote control hazard study" at http://www.ble.org/pr/news/newsflash.asp?id=4156.
Since the decline of passenger rail transportation in the U. S. in the last half of the twentieth century, the U. S. rail system has faded into the background of public consciousness. But the freight operations that rail lines support have actually become more critical than ever to the country's economy. Nearly all the coal that fuels our coal-fired power plants (and that is about half of them) is carried by rail, as well as numerous other bulk materials such as gravel, cement, chemicals, and food products, not to mention imported merchandise, automobiles, and so on. Since very few additional rail lines are being built, the railroad industry is searching for ways to put more and more freight through a physically limited system. And one of these ways involves remote control of unmanned locomotives.
An article in the May 28 issue of the Austin American-Statesman describes how this works. An operator who has completed an 80-hour training course stands by a track on which a remote-control locomotive sits. Strapped to his chest is a box sprouting joysticks, crank knobs, and a stubby antenna, rather like an overgrown model-airplane radio-control unit. With this remote control system, the operator can perform most of the operations that the engineer in the cab can do, only without any engineer in the cab. If radio control is lost for any reason, the system automatically stops the train.
Most of these systems are being used in switchyards, where the relatively short range of the radio transmitter is not a problem. But recently, some lines have been experimenting with using the system to send trains to nearby industrial sites for short hauls.
Safety is an obvious concern. If there is nobody in the cab, how can the operator stop the train if an obstruction unexpectedly shows up? Unfortunately, stopping a train is not an instantaneous act. Depending on speed and size, it can take up to a mile or more to stop a train even under emergency conditions. The engineers who designed the remote-control systems have presumably taken these factors into consideration, but as with many technologies, the way it is used has a lot to do with how safe it is.
Railroads are one of the most highly unionized industries in America, and opinions among the unions about the new technology are divided. The Brotherhood of Locomotive Engineers' feelings about the matter are clear from their main website, which shows a tipped-over railway engine with the legend "Remote Control" plastered across it. Since a locomotive running without an engineer represents direct job loss, their concern is understandable. They are, in the colloquial phrase, "agin it," and have commissioned a report that criticizes wider adoption of the technology before better operating rules are put in place. Numerous attempts by the BLE to slow the technology through strikes or other means have been blocked by federal judges.
On the other hand, the United Transportation Union, which represents conductors and switchmen, has come out, after some waffling, in favor of limited use of the technology. The Federal Railroad Administration, for its part, has studied the issue and allowed limited experimentation as long as the operators (generally switchmen) have received an 80-hour training course. This annoys the railway engineers, who have to take a six-month-long course and pass tests to qualify for their jobs.
What about accidents? There have not been many serious accidents reported as yet, possibly because the technology is so new: a few derailments and three fatalities, but no large-scale accidents with multiple loss of life. It is not clear how far the rail lines wish to go with remote-control locomotives. It is easy to imagine a single model-railroad-style system the size of the U. S. with thousands of trains running completely under computer control. Even now, locomotive engineers are like airline pilots in that they do what centralized traffic-control operators tell them to via microwave radio links from a few control centers that continuously monitor train positions and movements. So replacing the engineers with "robotic" control would not be as great a change as you might think. What the people on the train supply now, of course, is eyes and ears and hands to do the great variety of things that computers and robots cannot yet do. Some of these things are related to safety and some are not.
So it will be some time before the average train you see trundling across a grade crossing while you wait in your car will be nothing but a pile of steel and cargo, bereft of any human presence. If the Brotherhood of Locomotive Engineers has its way, it will never happen. On the other hand, remote control may spread gradually until some big disaster occurs with a remotely-controlled locomotive, which might energize legislators to prohibit the practice altogether. In the meantime, you might visit the next model-railroaders meet in your town to see what the future of real railroading may be like.
Sources: The Federal Railroad Administration has a statement "Remote Control Locomotive Operations" at http://www.fra.dot.gov/us/content/94. The website http://www.labornotes.org/archives/2003/08/b.html has an article "Rail Workers Battle Unsafe Remote Control Technology" written by Ron Hume. The Brotherhood of Locomotive Engineers website has an article "BLET releases remote control hazard study" at http://www.ble.org/pr/news/newsflash.asp?id=4156.
Thursday, May 25, 2006
Engineering Laptop Data Security, or, 26.5 Million Veterans Can't Be Wrong
On Monday, May 22, we learned that some time in the preceding three weeks, a burglar broke into the house of a mid-level analyst in the Department of Veterans Affairs in Washington, D. C. Among the items missing was the employee's laptop computer. That by itself is not news—laptops are stolen every day. But the thing that motivated Veterans Affairs Secretary Jim Nicholson to announce the theft to the news media was the fact that on that laptop's hard drive were the names, Social Security numbers, and other personal information belonging to over 26 million veterans.
It is not hard to imagine what someone with the scruples of a burglar could do with that information. We can only hope that the miscreant does not read the newspapers, watch TV news, or listen to podcasts, and that he fenced the machine to someone who will divest it of all identifying indications, including the hard drive data. But the very small chance that a very big problem will occur is still a very big problem. And since Social Security numbers last for the lifetime of their owners, the concern that one of those veterans will be a victim of identity theft may not go away unless the machine is recovered with the knowledge that the data wasn't copied. This happy eventuality is, to say the least, unlikely.
As it does in many other areas, the advance of technology has blurred the distinction between two groups of people who formerly had very different responsibilities. Back in the 1970s when it took a roomful of refrigerator-size tape drives to store twenty-seven million personal records, there were only a handful of people in any given organization who had the technical ability to manipulate or copy the information. The computer-science specialists who designed, operated, and maintained the systems were generally aware of their special responsibilities that came with the power to work with personal data. Besides which, a putative thief would have had to bring a small loading van along to steal such a large amount of data. Although data theft and identity theft have been a problem at some level since the earliest days of computers, the sheer bulk and awkwardness of large amounts of data, and the relatively scarce and highly secure computer rooms in which they were housed, meant that such a theft had to be carefully planned and executed like a bank or payroll heist. For the average non-technical user of such information, the most data handled at once was contained in a bulky folder of green-and-white-striped computer paper, which nobody wanted to carry out of the office anyway. So computer security was an issue mainly for those few specialists who dealt directly with mainframe computers, and the rest of us scarcely knew it existed.
No longer. Because of the democratization of technology we now enjoy, most laptops sold today with 100 GB hard drives can hold the digital equivalent of all the printed contents of a small town's public library. The size of digital storage has changed, but the responsibilities are still the same. Every person who is in charge of a laptop with sensitive information on it has the same moral obligations as those (now retired) computer operators in the glass-walled computer rooms of yore. But in these days of high-pressure work and high-speed internet connections at home, what is more natural than to throw the laptop in the car and finish that special project in the evening just this once, even though you seem to recall some office rule against taking work home? That is just what the anonymous Veterans Administration employee did, and now look what's happened.
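Rough arithmetic makes this scale shift concrete. Every figure below is an assumed round number for illustration (record size, the capacity of a 1970s 9-track tape reel), except the 100 GB laptop drive mentioned above.

```python
RECORDS = 26_500_000
BYTES_PER_RECORD = 100        # assumed: name, SSN, birth date, address
TAPE_CAPACITY_BYTES = 140e6   # assumed capacity of one 1970s 9-track reel
LAPTOP_DRIVE_BYTES = 100e9    # the 100 GB laptop drive mentioned above

total_bytes = RECORDS * BYTES_PER_RECORD            # ~2.65 GB of records
tapes_needed = total_bytes / TAPE_CAPACITY_BYTES    # ~19 reels of tape
laptop_fraction = total_bytes / LAPTOP_DRIVE_BYTES  # under 3% of one drive
```

On these assumptions, a dataset that once filled a cart of tape reels now occupies a few percent of one commodity hard drive, which is exactly why the security perimeter has moved from the glass-walled computer room to every briefcase.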
There are technological fixes for this technological problem, of course. A ten-second Google search turns up companies such as Eracom Technologies, which offers a variety of data encryption methods for servers, desktops, and laptops. The idea is that the authorized user types in a special password, and for extra security plugs in a special module to enable the laptop to boot up. Once the computer is satisfied that it is being used by the right person, it acts just like a normal computer. But all the data on the hard drive is actually encrypted with advanced techniques and decrypted as needed. Were a thief to steal the unit, he or she would be unable to start the machine. Even if the hard drive were removed and copied, the result would be nonsense.
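The vendors' actual schemes are proprietary, but the general approach (derive a key from the user's password, transform the data on the fly) can be illustrated with a toy stream cipher in Python. This is emphatically a sketch of the concept, not production-grade disk encryption, and none of it reflects Eracom's real design.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Counter-mode keystream built from SHA-256 (illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def transform(password: str, salt: bytes, data: bytes) -> bytes:
    # Derive a disk key from the password (the "special password" step
    # above), then XOR the data with a keystream. XOR is its own
    # inverse, so the same call both encrypts and decrypts.
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

The point the toy makes is the one in the text: without the password, a copied drive yields only keystream-scrambled bytes, so physical theft of the hardware no longer implies theft of the data.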
Of course, Eracom doesn't give this technology away for free. I don't know what it costs, but it must be considerably less than the cost of a laptop, and they probably give quantity discounts for large organizations such as the U. S. Department of Veterans Affairs. But even advanced security technology like this can be thwarted if the user does something dumb, like writing the password on a note taped to the keyboard, or keeping the special unlocking module in the same bag with the computer. As an engineer told me recently, he tries to design systems that are foolproof, but doesn't bother to make them "damn-fool proof."
If a pattern of identity theft matching the stolen records does not emerge soon, our returning soldiers may not have to worry about the consequences of this particular laptop burglary. After all, they have seen and dealt with a lot bigger problems than this one. The rest of us, especially those who have any kind of sensitive data that we carry around in laptops, Blackberries, or data storage devices, should think twice before we take it out of a secure area. And ask what your organization does in case such data is stolen. If the answer isn't satisfactory, maybe someone should invest in a little added security. But all the data-security technology in the world cannot substitute for simply being careful.
Sources: An article describing the news conference at which Jim Nicholson revealed the laptop theft is at http://www.acm.org/serving/se/code.htm. Information on encrypting hard-drive data is available at such sites as http://www.eracom-tech.com/hard_disk_encryption.0.html.
Thursday, May 18, 2006
Engineering Privacy in the Computer Age
The Association for Computing Machinery (ACM) is the world's leading society for computer professionals. Founded in 1947, it is for professionals involved in information technology what the American Medical Association is for U. S. doctors. Prominently displayed on the ACM's website is a lengthy Code of Ethics, which includes the following words about privacy rights:
"Computing and communication technology enables the collection and exchange of personal information on a scale unprecedented in the history of civilization. Thus there is increased potential for violating the privacy of individuals and groups. . . . It is the responsibility of professionals to maintain the privacy and integrity of data describing individuals."
So far, so good. Few will argue that the ubiquity of computers has made it possible to collect, analyze, or steal unimaginable amounts of highly personal information. But the code doesn't simply stop with a call to maintain privacy. It goes into further detail:
". . . This imperative implies that only the necessary amount of personal information be collected in a system, that retention and disposal periods for that information be clearly defined and enforced, and that personal information gathered for a specific purpose not be used for other purposes without consent of the individual(s). These principles apply to electronic communications, including electronic mail, and prohibit procedures that capture or monitor electronic user data, including messages, without the permission of users . . . ."
President Bush has been in hot water this week after a report in USA Today that the National Security Agency has been collecting the phone call records of millions of Americans. One phone company after another has denied providing such information. While it is perhaps too early to decide the truth about the matter, the record of numbers dialed and calls received is something that most citizens would regard as personal information.
On the other hand, we have all seen TV shows in which the dialing records of a criminal suspect have provided important clues to the solution of a crime. Phone taps, call records, and traces have been a part of domestic law enforcement for decades. And of course, computers are involved in nearly all electronic communications of any description these days. How do the computer professionals deal with these cases? Here's how:
"User data observed during the normal duties of system operation and maintenance must be treated with strictest confidentiality, except in cases where it is evidence for the violation of law, organizational regulations, or this Code. In these cases, the nature or contents of that information must be disclosed only to proper authorities."
So, at least according to the ACM Code of Ethics, information such as call records should be disclosed to the "proper authorities" (e.g., the NSA) only when the user data is evidence for the violation of (1) law, (2) "organizational regulations," or (3) the Code itself. We may hope that neither the ACM Code of Ethics nor the phone companies' internal regulations is what inspires NSA activities. So it seems that an ACM member in good standing could participate in such an activity only if the records obtained were evidence for the violation of law.
That's a pretty narrow scope. Somehow I doubt that the phone records of all Americans, or even a substantial fraction of all Americans, constitute evidence for the violation of law. Maybe some of them do, but that is why most phone tap, trace, and call record requests are made by law enforcement officials only for specific individuals who are already under suspicion. If anything like the reported wholesale phone-record transfer took place, those members of the ACM who participated in it are under a cloud ethically, to say the least.
Some days it seems like the great internet-website-phone-fax-TV-MP3-instant message-chatroom behemoth runs on its own without human intervention of any kind. But there are people behind all the systems, and people make the decisions that protect or violate your privacy. Just the other day, I learned that the operator of the website at my church (!) has a way to tell if particular viewers bookmark the site. When I heard this, I had a chilling vision of some invisible guy looking over my shoulder as I sat in front of my computer in my supposedly private room at home. So far, no harm that I know of has come to me because people I don't know and will never meet can tell which websites I bookmark. But it may have something to do with the fact that even after we signed up for the national do-not-call list, I keep getting phone calls right at suppertime from organizations I could swear I have never had any dealings with. But maybe if bookmarking a website counts as a "dealing," this gives them the right to call me. Who knows?
The truth will eventually emerge about the NSA and national calling records. Laws always lag behind rapidly advancing technologies, and a certain amount of confusion and injustice results. But at some point, if things get too out of hand, the legal system may overreact with burdensome regulations that in some cases are worse than the disease they were designed to cure. The best protection against such an outcome is for everyone, especially members of the Association for Computing Machinery, to abide by sound ethical principles and every so often ask, "If I were on the receiving end of this, would it bother me?"
Sources: The Association for Computing Machinery's Code of Ethics is at http://www.acm.org/serving/se/code.htm.
Tuesday, May 09, 2006
Mobile Phones on Airplanes: Too Soon to Talk?
To some airline passengers, a mobile phone is God's gift to air travel. You can see how eagerly they relieve the boredom of watching other passengers struggle into their seats by chatting with friends and relatives until the last possible second—and sometimes longer. I've watched the test of wills as a flight attendant stood by an oblivious businessman who simply would not put up his phone until she repeated her request three times and threatened to delay the flight for everybody. And it sometimes looks like a contest to see who can whip out their phone and make the first call after the announcement that it's okay to use phones again after landing. Clearly, people would like to use their mobile phones all the time, not just on the ground. Possibly in view of this fact, the Federal Communications Commission has announced that it is considering whether to lift the restriction on in-flight mobile phone calls. So is there anything to the notion that electronic devices such as mobile phones can seriously affect the avionics of a modern jet aircraft? Or is it just a silly bureaucratic exhibition of meaningless power without foundation in fact?
Surprisingly little research has been done into whether people actually use mobile phones on plane flights, and whether such use can interfere with navigation or communication systems. In the March 2006 issue of the magazine IEEE Spectrum, a publication for professional electronic engineers, researchers at Carnegie Mellon University reported the findings of a three-month investigation in which they placed a radio-wave "sniffer" on board numerous commercial flights. This instrument package was designed to receive and record radio emissions in the frequencies used by mobile phones. After the equipment flew in the overhead luggage rack on 37 different commercial flights, the data was downloaded and analyzed.
It turned out that on average, at least one person on every flight, and sometimes several people, made one or more mobile phone calls at times that clearly violated FAA and airline rules. While none of the planes in the study crashed or reported any harmful interference with avionics, the researchers found from independent data collected by NASA that there have been over seventy incidents in which portable electronic devices on board a plane have interfered with aircraft systems. The increasing use of global positioning system (GPS) navigation tools makes newer avionics even more vulnerable to interference than in the past, since GPS relies on receiving weak satellite signals that can disappear under interference from onboard phones, laptops, or other unauthorized electronics. While the Carnegie Mellon study does not cite a particular plane crash as being caused by interference from portable electronic devices, it implies that interference may have contributed to crashes in the past, given what we now know about mobile phone use on airliners.
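A back-of-the-envelope link calculation shows why the weakness of GPS signals is the worry. The GPS figure below is roughly the specified minimum received signal power; the handset's in-band spurious emission level and the five-meter cabin distance are assumptions chosen only to illustrate the comparison.

```python
import math

def fspl_db(distance_m, freq_mhz):
    # Free-space path loss in dB: 20*log10(d_m) + 20*log10(f_MHz) - 27.55
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

GPS_SIGNAL_DBM = -128.5  # approximate specified minimum GPS L1 power at a receiver
spurious_dbm = -70.0     # assumed in-band spurious emission from a handset

loss_db = fspl_db(5.0, 1575.42)            # ~50 dB across a cabin-sized distance
received_dbm = spurious_dbm - loss_db      # ~ -120 dBm at the GPS antenna
margin_db = received_dbm - GPS_SIGNAL_DBM  # interference several dB above the signal
```

On these assumed numbers, even a tiny spurious emission from a phone a few rows away arrives at the GPS antenna stronger than the satellite signal itself, which is all interference needs to do to degrade a position fix.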
Based on the results of their study, the researchers made several recommendations. A total ban on mobile phones in airplanes was not one of them. One of their most innovative proposals is to equip flight crews with a hand-held version of their "sniffer." This could be made as small as a pager and could be slipped into a pocket. At the same time that the flight attendant offers coffee, tea, or snacks, he or she could be patrolling the aisles for illicit mobile-phone use. Simply warning passengers that any mobile phone use can be detected in this way would probably go far toward discouraging the practice.
Other recommendations include better coordination between the Federal Aviation Administration, in charge of airline safety, and the Federal Communications Commission, in charge of the airwaves. Also, the NASA program that accumulated data about airline safety problems has had its budget cut in recent years, and the researchers called for its funding to be restored. All of these ideas are good ones, but unless politicians, industry representatives, and regulators take action, things may go on as they are until a tragedy occurs.
Tragedies are, unfortunately, great motivators for regulators and politicians to do something. The trouble with the interference problem in this regard is that, unlike a broken turbine blade or other physical cause, radio interference leaves little or no trace of itself after a crash. Even if a crash was caused by interference that produced a false reading from a GPS display, discovering this cause after the fact would be difficult or impossible without much better in-flight data recording than we now have.
So this is one problem that may be difficult to fix technologically. Of course, if everybody followed the rules, it would disappear. And here is one instance where you, the individual airline passenger, can do something. Not only can you refrain from using your mobile phone during prohibited parts of the flight, but if you see someone else doing it, you might try speaking to them about it. The life you save may be your own!
Sources: An online version of the March 2006 IEEE Spectrum article, "Unsafe at Any Airspeed," is at http://www.spectrum.ieee.org/mar06/3069.
Tuesday, May 02, 2006
Engineering the Distracted Driver
On the afternoon of June 19, 1999, Bryan Smith was driving along Maine's Route 5 in the White Mountains near the New Hampshire border. His Rottweiler was with him in the back of his Dodge Caravan. The dog did something that caused Smith to turn around to see what was the matter. While his attention was diverted from the roadway in front of him, his vehicle hit an object on the edge of the road. When Smith stopped the car to see what he'd hit, he found that it was famed author Stephen King, who subsequently underwent five operations for the injuries he sustained. Smith was not intoxicated or speeding. The only thing that kept him from seeing King in time to avoid the collision was the distraction caused by his dog.
While this is probably the most famous recent automotive accident involving a distracted driver, recent research by the Virginia Tech Transportation Institute indicates that it was the tip of an iceberg that is much larger than we thought. Using high-tech instrumentation such as Doppler radars, accelerometers, and five channels of compressed video to provide a second-by-second record of over two million miles of driving, the Virginia Tech researchers analyzed events leading up to over 60 crashes documented during the study of 100 instrumented cars and their drivers. The researchers were surprised to find that driver inattention was a factor in nearly four out of five crashes. This category includes fatigue and glancing away from the forward roadway for any reason. The most common cause of driver inattention was found to be "wireless devices," which includes cellphones, although other passengers, radios, and CD players were also implicated. Further information on the study can be found at the website of the sponsoring agency, the National Highway Traffic Safety Administration, at http://www-nrd.nhtsa.dot.gov/departments/nrd-13/newDriverDistraction.html.
Over 43,000 people die in U. S. auto crashes every year. In the hierarchy of things to be concerned about in engineering ethics, death is at the top. Any innovation that leads to increasing fatalities needs to be scrutinized thoroughly. From a system point of view, however, the things people do in their cars are almost uncharted territory, as the Virginia Tech research shows.
Consider a typical Saturday-morning outing for a mother and her children. Their vehicle may contain a built-in GPS navigation system, a satellite radio, a conventional radio, a CD player, and air-conditioning controls, all of which need attention at various times. She may be carrying her own cellphone and Blackberry, and her children may be watching a DVD on a player in the back seat, in addition to carrying their own phones. All of these pieces of equipment were designed without the knowledge that driver inattention is apparently a factor in almost four out of five crashes. The timing and usage of all these devices is left entirely up to the owners and operators, whose last drivers' ed course might have been two decades ago, if ever. The wonder is that anybody can drive more than a couple of miles amid such electronic chaos without hitting something.
This kind of problem has been faced before by the military, whose interest in giving fighter pilots the information they need without unduly distracting them is truly a life-or-death matter. A fighter-plane cockpit is a highly coordinated and uniform environment in which pilots know exactly what to expect, and where instruments and visual cues are placed with careful attention to their effects on the ability of the pilot to perform his job quickly and without needless fumbling.
I don't propose that we hand over control of everyone's car interior to the Department of Defense. But at some point we need to recognize that the original purpose of the automobile driver's seat—to provide a place where the operator can devote his or her full attention to the demanding task of controlling a potentially fatal piece of equipment moving at high speed—is becoming lost in the proliferation of options, gadgets, and distractions that most state driving laws permit. The one exception I am aware of is a law in most states that prohibits the operation of a television screen within the driver's line of vision. But watching TV while driving would be safer than trying to operate some of the latest digital gizmos with their multiple menus and tiny display screens.
Laws almost always lag behind technology, and with good reason. Unless a new technology poses a "clear and present danger," it is best to let enough history accumulate to allow a reasoned judgment based on sufficient evidence. The evidence of driver inattention has been long in coming, but it has now arrived. Engineers need to consider safety ideas that are out of the conventional boxes with regard to technologies used in automobiles. For example, it is technically feasible, given enough standards and agreements, to devise an interlock system that makes all controls for non-essential electronics (GPS, cellphones, etc.) inoperable while the car is in motion. If everyone had to stop or pull off to the side of the road to make a phone call or read a map, would the world come to an end? No. Time was when nobody could make phone calls from cars at all, and somehow people survived.
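To make the interlock idea concrete, here is a minimal sketch in Python. Everything in it—the control names, the five-mile-per-hour threshold—is an illustrative assumption, not any existing automotive standard; the point is only how simple the gating logic could be.

```python
# Hypothetical sketch of a motion interlock for non-essential electronics.
# Control names and the 5 mph threshold are illustrative assumptions.

NON_ESSENTIAL = {"gps_menu", "phone_dial", "media_browse"}
SPEED_THRESHOLD_MPH = 5.0  # below this, the car counts as stopped

def control_enabled(control: str, speed_mph: float) -> bool:
    """Essential controls always work; non-essential ones only when stopped."""
    if control not in NON_ESSENTIAL:
        return True
    return speed_mph < SPEED_THRESHOLD_MPH

# A driver cruising at 60 mph cannot open the phone dialer...
assert not control_enabled("phone_dial", 60.0)
# ...but climate control still works, and everything unlocks when pulled over.
assert control_enabled("climate", 60.0)
assert control_enabled("phone_dial", 0.0)
```

The hard part, of course, would not be this logic but the standards and agreements needed to decide which controls count as non-essential.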
This isn't necessarily a call for regulation. The people with the greatest financial interest at stake in automotive safety are the insurance companies. What if they offered deep discounts for people who drove interlock-equipped cars? The automakers know that safety sells to a certain segment of consumers, primarily those with young families. Enough clever people working on this problem could come up with solutions that would not require drastic laws and would end up making the highways safer, and probably the electronics easier to operate too. The evidence is in. Now it's time to do something about it.
In the meantime, I suggest adopting the "two-second rule." The 100-car study found that short glances away from the roadway, especially for environmental checks like looking at one's rear-view mirror, were not risky as long as they took less than two seconds. But taking your eyes away from front and center for any longer than that led to increased chances of a wreck. So look away if you must, but not for longer than two seconds if you can avoid it.
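Applied to the kind of glance data the 100-car study collected, the two-second rule amounts to a simple threshold test. A sketch follows; the timestamps and glance targets are invented for illustration, not taken from the study's data.

```python
# Flag glances away from the forward roadway longer than two seconds.
# The glance log below is invented for illustration.

TWO_SECONDS = 2.0

glances = [  # (start_time_s, end_time_s, target)
    (10.0, 11.2, "rear-view mirror"),   # 1.2 s: a normal environmental check
    (25.0, 28.5, "radio"),              # 3.5 s: over the threshold
    (40.0, 41.9, "left mirror"),        # 1.9 s: just under
]

risky = [(s, e, t) for (s, e, t) in glances if e - s > TWO_SECONDS]
for start, end, target in risky:
    print(f"{end - start:.1f} s glance at {target} exceeds the two-second rule")
```

Only the 3.5-second look at the radio gets flagged; the quick mirror checks pass, which matches the study's finding that short environmental glances were not risky.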
Sources: The National Highway Traffic Safety Administration has more information on the Virginia Tech study "The Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-Car Naturalistic Driving Study Data" at http://www-nrd.nhtsa.dot.gov/departments/nrd-13/newDriverDistraction.html. The biographical information on Stephen King is from the Wikipedia article on King, http://en.wikipedia.org/wiki/Stephen_King.
Monday, April 24, 2006
Nuclear Power Reconsidered
Twenty years ago this week, a late-night experiment at an obscure nuclear power plant in the former Soviet Union turned into the worst nuclear accident in history. During the early morning hours of April 26, 1986, operators at the graphite-core plant in Chernobyl, some eighty miles north of the Ukrainian capital of Kiev, violated numerous regulations and disabled safety mechanisms during an ill-considered reactor test. The reactor blew apart and the graphite (carbon) core caught fire like a giant nuclear barbecue pit, sending radioactive smoke into the atmosphere. The accident was compounded by the criminally slow response of the Soviet government, which first attempted to cover up the incident. When Scandinavian nations detected abnormal levels of airborne radioactivity and started asking questions, the USSR reluctantly admitted there was a problem, but not before thousands of people living near the plant had been exposed to dangerous levels of radioactivity.
An Associated Press story by Mara D. Bellaby published this week recounts estimates of the total number of fatalities and illnesses caused by the accident. Not as many people died from Chernobyl as was originally feared. Eventually the government got around to evacuating some 116,000 people who lived within twenty miles of the plant. Official reports released by United Nations agencies recently say that only 50 people have died so far as a direct result of radiation poisoning traceable to the accident. Surprisingly, this includes those who fought the fire in the first hours of the accident and who were exposed to the most intense levels of radiation. The most significant problem in the general public has turned out to be a sharp increase in thyroid cancer among young people. Since radioactive iodine is taken up preferentially by the thyroid in children and adolescents, this increase was expected. Careful screening for early signs of thyroid cancer and prompt treatment have cured nearly all of those who contracted the disease, according to the reports. So if the world's worst nuclear accident caused only 50 deaths, why is it that no new nuclear power plants have been ordered in the United States since 1978?
The last nuclear plant to be completed in this country was finished in 1996. The nearly twenty-year span between these two dates alone gives you some idea as to why utilities are reluctant to order nuclear plants. For a variety of reasons, many of them good, the nuclear power industry in the U. S. is hedged with an incredible number of regulations, permit processes, and controls from overlapping Federal, state, and local jurisdictions. Our own worst nuclear-plant disaster, Three Mile Island, happened in Pennsylvania in 1979, and compares to Chernobyl as a fender-bender compares to a bus full of children tumbling down a mountain. Nevertheless, it was serious enough to create political turmoil that effectively shut down the nuclear power construction industry in this country. There are still U. S. companies that make nuclear plants—they just don't sell them here.
As a consequence, the increased demand for electricity in the U. S. has been met since the 1980s largely by more coal-fired plants, with a small but significant amount contributed from renewable sources such as wind power. There are many good reasons to oppose nuclear power: the problem of what to do with the highly hazardous wastes created by plant operation, the danger of nuclear proliferation to unstable countries, and the "yuck factor" that some people will always feel about a technology that is associated with nuclear weapons. But assuming that the nation's use of electric energy is not going to decrease in absolute terms any time soon, the power has to come from somewhere, namely coal in the last few years. And opponents of greenhouse-gas emissions, many of whom also oppose nuclear power, know (or should know) that you can't burn coal without making carbon dioxide, which is the greenhouse gas of most concern. Nuclear power, whatever its other drawbacks, produces virtually no greenhouse gases, which is one reason that even "greens" have been giving it a second glance lately.
Some countries such as France never abandoned nuclear power. France's example shows that given a moderate, stable regulatory environment and good engineering, nuclear power can be a safe and reliable source of electricity, leaving aside the question of wastes. Still, it is not at all clear that the nuclear industry will ever be able to build substantial numbers of new plants in the U. S. The new free-enterprise model of partially deregulated utilities makes it even more risky to plan a long-term capital investment such as a nuclear plant, which sucks in millions of dollars for years before even starting to produce revenue. So if we can't build new nuclear plants, and we don't want to contribute to global warming by building new coal-, oil-, or natural-gas-fired plants, where will the energy come from?
Radical conservation combined with renewable and distributed energy generation is one possible answer. Here and there, enterprising architects have built houses and even commercial buildings whose net use of externally supplied energy in the form of electricity or natural gas is only a small fraction of what typical construction uses. The drawback, of course, is that it takes expensive custom engineering and materials to achieve these radical savings, and in the current economic environment there is no incentive to do these things. Perhaps some radical economic experimentation is in order here. If large tax breaks or even subsidies were provided for building structures whose energy usage was, say, 50% or less of the average level, the subsidy could really be regarded as a loan, since in an economy where energy is costly, every unit of energy not consumed is a net gain for the country as a whole. A whole raft of vested interests would first have to be placated, but that is what politics is for.
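The arithmetic behind the subsidy-as-loan idea is simple enough to sketch. The dollar figures below are invented for illustration; only the 50%-of-average-usage target comes from the discussion above.

```python
# Rough payback estimate for an efficiency subsidy, treating it as a loan
# repaid by the energy the building no longer consumes. All dollar amounts
# are invented assumptions; only the 50% savings target is from the text.

average_annual_energy_cost = 2400.0   # $/yr for a typical house (assumed)
savings_fraction = 0.50               # build to 50% of average usage
subsidy = 12000.0                     # one-time subsidy (assumed)

annual_savings = average_annual_energy_cost * savings_fraction
payback_years = subsidy / annual_savings
print(f"Annual savings: ${annual_savings:.0f}; payback in {payback_years:.1f} years")
```

Under these made-up numbers the subsidy "repays itself" in ten years of avoided energy purchases; whether the real numbers work out that favorably is exactly the kind of question the economic experiment would answer.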
As the aftermath of Chernobyl has proved, our worst fears in some areas sometimes turn out to be not as bad as we thought. But before we in the U. S. go nuclear in a big way, we have time to consider other options.
Sources: An article by Mara Bellaby similar to the one carried in the Austin American-Statesman is at http://www.newsobserver.com/104/story/431637.html.
Wednesday, April 19, 2006
Patent or Blackmail?
Here is a list of some of the great human achievements of the past five hundred years: the Scientific Revolution, the Industrial Revolution, the patent system . . . . What's that last one doing there? Historians of technology rightly regard the development of patent law as one of the most significant intellectual innovations of the early modern period. Beginning in Renaissance Europe and spreading to America, the idea that an inventor's rights to make and sell his invention should be protected by law for a limited period encouraged innovation while ensuring that the rights of the general public would also be protected from monopolies of indefinite lifetime. Engineers, whose ideas form the basis of many patents, should be interested to know that the present U. S. patent system is being gamed in a major way, to the detriment of nearly all concerned.
The most recent example of this concerns the firm Research in Motion, which makes the popular Blackberry wireless communication system. It used to be the case that patents were fairly difficult to obtain. The inventor's patent attorneys were pretty evenly matched by the U. S. government's patent examiners, whose job it was to make sure that trivial, obvious, or otherwise meritless patents were not issued. Patenting an idea was a serious and sometimes difficult undertaking, but when you got one, you knew you had something, and so did everyone else.
Not so anymore. A combination of factors—inadequate Patent Office funding, a hyper-pro-business attitude in government, and speedups in the pace of innovation—has made it much easier to get a patent in the last ten to twenty years. This includes dubious ones sometimes called "submarine patents"—not patents on the submarine, but patents deliberately designed to cover all parts of an emerging field, whether or not the supposed inventor has any genuinely innovative ideas. In the past, these types of patents would never have been issued, but in the current almost-anything-goes atmosphere, all it takes is enough money paid to a good patent firm.
What happened to Research in Motion this year shows what kind of harm can result from this over-liberalized issuing of patents. In the early 1990's, one Thomas Campana patented some ideas for wireless email. In the meantime, Research in Motion put in a lot of work to develop the Blackberry, and obtained its own patents. In 2001, a company named NTP, formed to exploit Campana's patents, sued RIM for patent infringement. The resulting legal hassle threatened to produce an injunction that would shut down all Blackberry services in the U. S., clearly an outcome that would benefit no one. This was despite the fact that the U. S. Patent and Trademark Office re-examined and rejected at least seven of NTP's patents along the way. In March of this year, RIM announced a settlement in which NTP would receive over $600 million. No doubt RIM views this as part of the cost of staying in business. But if the shady NTP patents had never been issued in the first place, none of this would have happened.
What has this got to do with engineering ethics? A lot. First, engineers can refrain from participating in the generation of "junk" patents. Unfortunately, this may not have much of an effect, since unscrupulous patent lawyers don't need much in the way of technical help to cobble together useless patents. This is not to say that patenting is unethical in general. Properly used in a well-conducted system, patents help to achieve the balance between monopolistic profit, innovation, and reasonably-priced new products and services that characterizes modern industrial societies. But the pendulum has swung way too far in favor of patent owners and patent attorneys to the detriment of the general public and those who actually do the hard work of developing and marketing new products, only to have their resources diverted into pointless patent battles. Under the present circumstances, the danger is that innovation will be stifled by artificially extended patents that allow established firms to exclude competition indefinitely. This is already happening in the pharmaceutical industry as some firms come up with patented repackaging of old patented drugs to prevent a cheaper generic form from coming onto the market. Who pays for this? The beleaguered patient who has to pay beaucoup bucks for the name-brand drug longer than necessary.
The second thing engineers can do is to make a political issue out of the patent system. True, it doesn't have the popular appeal of antiwar movements or tax reform. But it is critically important to fix a badly broken system before R&D departments of multinational firms decide to relocate to countries where the system is more rational. Ever since the U. S. patent system was founded in 1790, it has differed in significant ways from most European systems. One of the most important differences is that most European patent holders must show that they are licensing their patents to others or using them themselves, while there is no such requirement in the U. S. This allows U. S. patent holders to "sit on" submarine patents that lie dormant until a well-heeled company comes within the sights of the patent-holder's legal gun. Besides changes in the legal structure of patents, the U. S. Patent and Trademark Office simply needs a lot more good help in the form of funding and staff to stay competitive with the best private patent lawyers. Only then will it be able to reinstate the rigorous examination of patents that prevailed before the recent gold-rush atmosphere developed.
With their specialized training, engineers stand in a unique position to make an important political difference in this situation. Consider writing your U. S. senator or congressman about this matter, and see what happens. The worst that can happen is nothing, and the best could be a lot better than that.
Sources: The New York Times article "In Silicon Valley, A Man Without a Patent" by John Markoff was published online on Apr. 16, 2006, and is available from the NYT archives at http://select.nytimes.com/gst/abstract.html?res=F20811FA3D5B0C758DDDAD0894DE404482 for a fee. The Forbes.com article "More Patents Rejected in BlackBerry Case" by Arik Hesseldahl is at http://www.forbes.com/business/2005/06/22/rim-patent-infringement-cx_ah_0622rim.html.
The most recent example of this concerns the firm Research in Motion, which makes the popular BlackBerry wireless communication system. It used to be the case that patents were fairly difficult to obtain. The inventor's patent attorneys were pretty evenly matched by the U. S. government's patent examiners, whose job it was to make sure that trivial, obvious, or otherwise meritless patents were not issued. Patenting an idea was a serious and sometimes difficult undertaking, but when you got one, you knew you had something, and so did everyone else.
Not so anymore. A combination of factors—inadequate Patent Office funding, a hyper-pro-business attitude in government, and speedups in the pace of innovation—has made it much easier to get a patent in the last ten to twenty years. This includes dubious ones sometimes called "submarine patents"—not patents on the submarine, but patents deliberately designed to cover all parts of an emerging field, whether or not the supposed inventor has any genuinely innovative ideas. In the past, these types of patents would never have been issued, but in the current almost-anything-goes atmosphere, all it takes is enough money paid to a good patent firm.
What happened to Research in Motion this year shows the kind of harm that can result from this over-liberalized issuing of patents. In the early 1990s, one Thomas Campagna patented some ideas for wireless email. In the meantime, Research in Motion put in a lot of work to develop the BlackBerry, and obtained its own patents. In 2001, a company named NTP, formed to exploit Campagna's patents, sued RIM for patent infringement. The resulting legal hassle threatened to produce an injunction that would shut down all BlackBerry services in the U. S., clearly an outcome that would benefit no one. This was despite the fact that the U. S. Patent and Trademark Office re-examined and rejected at least seven of NTP's patents along the way. In March of this year, RIM announced a settlement in which NTP would receive over $600 million. No doubt RIM views this as part of the cost of staying in business. But if the shady NTP patents had never been issued in the first place, none of this would have happened.
What has this got to do with engineering ethics? A lot. First, engineers can refrain from participating in the generation of "junk" patents. Unfortunately, this may not have much of an effect, since unscrupulous patent lawyers don't need much in the way of technical help to cobble together useless patents. This is not to say that patenting is unethical in general. Properly used in a well-conducted system, patents help to achieve the balance between monopolistic profit, innovation, and reasonably-priced new products and services that characterizes modern industrial societies. But the pendulum has swung way too far in favor of patent owners and patent attorneys to the detriment of the general public and those who actually do the hard work of developing and marketing new products, only to have their resources diverted into pointless patent battles. Under the present circumstances, the danger is that innovation will be stifled by artificially extended patents that allow established firms to exclude competition indefinitely. This is already happening in the pharmaceutical industry as some firms come up with patented repackaging of old patented drugs to prevent a cheaper generic form from coming onto the market. Who pays for this? The beleaguered patient who has to pay beaucoup bucks for the name-brand drug longer than necessary.
The second thing engineers can do is to make a political issue out of the patent system. True, it doesn't have the popular appeal of antiwar movements or tax reform. But it is critically important to fix a badly broken system before the R&D departments of multinational firms decide to relocate to countries where the system is more rational. Ever since the U. S. patent system was founded in 1790, it has differed in significant ways from most European systems. One of the most important differences is that most European patent holders must show that they are either licensing their patents to others or using them themselves, while there is no such requirement in the U. S. This allows U. S. patent holders to "sit on" submarine patents that lie dormant until a well-heeled company comes within the sights of the patent-holder's legal gun. Besides changes in the legal structure of patents, the U. S. Patent and Trademark Office simply needs a lot more good help in the form of funding and staff to stay competitive with the best private patent lawyers. Only then will it be able to reinstate the rigorous examination of patents that prevailed before the recent gold-rush atmosphere developed.
With their specialized training, engineers stand in a unique position to make an important political difference in this situation. Consider writing your U. S. senator or congressman about this matter, and see what happens. The worst that can happen is nothing, and the best could be a lot better than that.
Sources: The New York Times article "In Silicon Valley, A Man Without a Patent" by John Markoff was published online on Apr. 16, 2006, and is available from the NYT archives at http://select.nytimes.com/gst/abstract.html?res=F20811FA3D5B0C758DDDAD0894DE404482 for a fee. The Forbes.com article "More Patents Rejected in BlackBerry Case" by Arik Hesseldahl is at http://www.forbes.com/business/2005/06/22/rim-patent-infringement-cx_ah_0622rim.html.
Thursday, April 13, 2006
Earthquake Prediction: Ready for Prime Time?
Earthquakes and the tsunamis that sometimes accompany them are among the most frightening and deadly of natural disasters. The December 26, 2004 earthquake and tsunami that struck in and around the Indian Ocean killed more than 200,000 people, and millions more have died in similar disasters. One of the main ways people die in an earthquake is in collapsing buildings, and over the years civil engineers have developed building codes and other techniques that reduce (but do not eliminate) the danger of structural collapse during an earthquake. Unfortunately for the billions of people who live in developing countries, these measures are expensive. If the choice is between living in shaky but affordable housing on the one hand, and going without shelter on the other, most people take their chances with a house that may fall down in an earthquake. The poor of this world have more pressing things to worry about than earthquake safety, but that doesn't make their lives any less valuable.
Viewed as an engineering problem, the question of how to save lives in earthquakes and tsunamis has several possible solutions. The only one we have pursued to any great extent up to now is to make sure that structures will withstand the likely force of an earthquake. (As far as tsunamis go, there is little one can do except run for higher ground.) If—and this is a big "if"—earthquakes could be predicted with good accuracy, the problem becomes simpler. A few hours before an earthquake strikes, simply clear everyone out of dangerous buildings until the danger is past. This second solution is not without its own problems, but if it could be implemented, the cost of an early-warning system would be much less than earthquake-proof buildings for everybody, and the potential to save lives would therefore be much greater. The only problem is, how do you predict earthquakes?
Historically, earthquake prediction has been regarded as a pseudo-science. Stories of premonitions told after an earthquake abound: animals acting strangely, unusual sounds, lights in the sky. Few scientists take such reports seriously, and with some justification. Human beings are not emotionless recording machines, and memory is a highly subjective thing. Perfectly ordinary and random incidents that happen just before a frightening event take on an ominous cast when recalled later. But the shady neighborhood that earthquake prediction has resided in up to now should not prevent scientists and engineers from exploring ideas about how to do it.
The December 2005 issue of IEEE Spectrum, a highly regarded magazine for professional electrical and electronic engineers, carried an article on recent efforts to develop technical means of predicting earthquakes. (The article can be found at http://www.spectrum.ieee.org/dec05/2367.) The lead author, Tom Bleier, described how ELF waves (extremely-low-frequency electromagnetic waves) and other measures, such as satellite-sensed electromagnetic waves and surface temperatures, have appeared at times to be correlated with certain large earthquake events. He made what sounded to this author like a good case that such correlations are real. However, a good physical explanation for why they should occur is presently lacking.
The article inspired three geophysicists to write a letter to the editors of IEEE Spectrum protesting the publication of claims that they said should be rejected (the letter can be viewed at http://www.spectrum.ieee.org/apr06/3275). Robert J. Geller, Alex I. Braginski, and Wallace H. Campbell argued that there is no scientific basis for the kind of earthquake prediction that Bleier and his colleagues are doing. They claim there is so much noise from other natural and man-made sources at the frequencies in question that any exercise in earthquake prediction amounts to sophisticated tea-leaf reading. Their opinion is that the scientific community has examined the methods of Bleier and company and found them wanting.
This controversy reminds me of the early days of tornado prediction. From the late 19th century until 1938, forecasters at the U. S. Weather Bureau were forbidden even to use the word "tornado" in a forecast. The prevailing opinion was that there was no reliable way to predict tornadoes, and that such a forecast was likely only to cause needless panic. It wasn't until 1948, when some U. S. Air Force weathermen at Tinker Air Force Base in Oklahoma had their airfield trashed by a tornado, that anyone began to apply serious scientific effort to the problem of tornado forecasting. They came up with a combination of conditions that looked like it would work. Five days later, they noted that the same conditions prevailed, and, not being under the restrictions of the civilian Weather Bureau, took it upon themselves to issue a tornado forecast to Air Force personnel. Later that same evening, probably the only tornado in history to be greeted with jubilation struck Tinker Air Force Base again! The weathermen published their findings in 1950 and 1951, but for several years afterwards tornado forecasts were restricted to military facilities unless they were leaked to the media. Other researchers attempting to publish papers on tornado forecasting were blocked by skeptical reviewers. It took the better part of a decade to overcome the attitude that forecasting tornadoes was so chancy as not to be worth upsetting the public. But once tornado forecasts were combined with the radar-based early warning systems put in place in the 1950s, annual tornado fatalities in the Midwest plummeted. (The story of tornado prediction is told in Marlene Bradford's Scanning the Skies: A History of Tornado Forecasting.)
Time will tell whether the new techniques of earthquake forecasting will bear fruit in the form of reliable, specific predictions. In the meantime, its proponents should prepare themselves for a long battle with skeptics. We can hope that if there is anything to it, engineers, scientists, and the public will be open-minded enough to welcome the practice and take it seriously enough to save lives with it in the future.
Sources: See URLs above referring to items in IEEE Spectrum. Marlene Bradford's Scanning the Skies: A History of Tornado Forecasting was published in 2001 by the University of Oklahoma Press, Norman.