Owing to circumstances too complex to summarize here, the other weekend I found myself working on the electrical system of a 1955 Oldsmobile Super 88 sedan. The original owner needed help getting it running again after it had sat unused in a garage for three years. All it needed was a new battery, a set of ignition cables and distributor parts, and a little gasoline poured down the carburetor to give the fuel pump a chance to draw fresh fuel from the gas tank. After a few tries, it started up fine and made all of us very happy, just as it did in 1955 when it became the owner's first new car.
While working around the vehicle, I was struck by the tremendous differences in design and engineering between what was considered a good, safe, responsible-family-man's car in 1955, and what would pass muster by today's standards. The biggest difference is the list of what is missing from the '55 compared to what you would find in virtually any new car today. That list includes: air bags, computers, shoulder belts, high-impact bumpers, a catalytic converter, numerous other anti-pollution technologies, anti-skid brakes, steel-belted radial tubeless tires, and unibody construction with added side-collision protection. On the other hand, what the '55 has in abundance that has been greatly reduced in most modern cars is expressed in one word: steel. The main longitudinal members of the chassis under the car look like they're strong enough to support a skyscraper. The bumpers are thick chrome-plated armorplate that must weigh close to a hundred pounds each. Carry this theme through the whole car, and you have a behemoth that needs a 300-cubic-inch V-8 just to get moving. And of course, the gas mileage (premium only, now that leaded gas is no longer generally available) can't be much better than 20 mpg.
Because you couldn't legally market a new car designed like this 1955 specimen today, does that mean that the engineers who designed it then were bad engineers? That they were doing something wrong in foisting such a dangerous vehicle upon the public? Of course not. No one is blamed for not using safety technology that doesn't yet exist. Today's air bags, for instance, depend on a micro-electromechanical IC to detect the rapid deceleration of a collision. Anti-skid brakes need computer technology for similar reasons, and such advances were simply not available in 1955. Seat belts, on the other hand, are hardly advanced technology. According to a document on the Ford Motor Company's website, 1955 was the year Ford became the first U. S. automaker to offer seat belts as an option. But it took decades for car companies to offer seat belts and later air bags as standard equipment. And until states began to pass laws mandating the use of seat belts in the mid-1980s, only about one out of ten drivers used them.
There are two morals to this little history lesson. The first one is pretty obvious: standards of what is considered safe and legal design change with time. Laws change as a matter of course, and engineers need to be aware of any changes in laws that affect their firm's products or services. But always staying just barely legal is not usually the best position for either a company or an individual. The best engineering looks ahead of the present legal environment to combine what new technology offers with desirable features that make products safer as well as better. And if you widen the concept of safety to include broader public goods such as less environmental pollution or less fossil fuel consumed for a given output, the range of potential engineering development grows even wider.
The second moral is related to the first: even if laws and technology change, changes in behavior don't always follow. It took a combination of public-awareness campaigns and legal sanctions to get most (not all!) drivers to buckle up. Many today would view the emission of leaded-gas fumes from a 20-mpg 1955 vehicle as at least as dangerous as the absence of seat belts. Many cars sold today get no better mileage than this, and the main reason is not technology, but buyer preference.
There is a debate going on today about how we in the U. S. can reduce our dependence on foreign oil imports. Some favor increasing mandatory federal fleet mileage standards, but auto manufacturers have shown how they can "game" that system in the past. Others take the view that as gasoline becomes more expensive, Americans will just naturally start buying more fuel-efficient cars. A third answer is to radically increase the taxes on gasoline to more accurately reflect the "externalities" of driving gas-hog vehicles: the cost of importing oil, the cost of military measures to stabilize the Middle East, the hidden costs caused by air pollution, and the highway-construction costs associated with more cars on the roads, period.
While the means vary, the goal is the same: to change the way drivers behave toward driving and cars. And changes in behavior usually come slowly, if at all. But given the right combination of suitable new technology, economic incentives, and what for lack of a better word can be called the cultural factor, people can change the way they behave pretty fast. With most drivers having cell phones, it's trivially easy to call 911 when you see a road accident, and help can arrive much faster than when someone had to find a pay phone and call the local authorities. If the desire to help others wasn't there, all the cell phones in the world wouldn't do any good. But before it was so easy, the Good Samaritan who would undertake either to drive for help or to stop and render aid was not that common.
That sort of thing was probably not in the minds of cell-phone designers, but it is an unexpected benefit. Engineers can intentionally look for such benefits and design them into new projects, but only if they take a wider view of the ways their work will be used and consider the human factors as well as the purely technical ones.
We still haven't taken the '55 Olds out on the road. The brakes need a little work yet. But when we do, it will only be for special occasions, perhaps car shows, where we can show younger generations what dangers their parents faced in the bad old days of primitive technology. And will today's 20-somethings in their Honda Civics look adventurous and risk-taking, or profligate and careless about the environment, to their grandchildren? Only time will tell.
Sources: The Ford press release describing their introduction of safety belts in 1955 is at http://media.ford.com/newsroom/release_display.cfm?release=23485. The statistics on belt use are from http://en.wikipedia.org/wiki/Click_it_or_ticket.
Tuesday, September 05, 2006
Monday, August 28, 2006
When Is A Gallon Not a Gallon? When You're Buying Gas
On a hot August day in Texas, you notice your big pickup is nearly running on fumes, so you drive into a gas station where the price is $2.69 a gallon. You fill up the tank with 35 gallons of gas. Simple math tells you that's going to cost you $94.15. You're not happy about it, but at least you know across the country people are paying something close to the same for gas, so you go inside and get ready to pay. You tell the clerk behind the counter what pump it was, and she says, "Okay, $94.15 plus the $1.14 surcharge, that's $95.29."
"Hey, what's this surcharge business?"
"That's the heat surcharge. Any time the gas is hotter than sixty degrees, we get to charge extra for the same amount of gas."
You think something unprintable, but you've been brought up to be polite to ladies, so all you say is, "Well, I'll pay it, but this is the last time I'm buying gas here."
"Doesn't matter. They all do it."
Sound crazy? Well, it isn't. It happens every day, all across the country. Only the surcharge is a hidden one, and perfectly legal, so the sales clerks don't talk about it. As an article in the Aug. 27 Kansas City Star describes, if you buy gas in the U. S. that is hotter than sixty degrees F—and as a recent U. S. government study shows, that is most of us in the warmer parts of the country—you don't get what you think you're paying for. Here's how it works.
Each molecule of gasoline provides a certain amount of energy to your car's engine. What you're really buying when you pay for gas is energy, and so you'd think that the fairest way to charge for gas is so much money for so many molecules. Well, counting molecules is not too easy, so long ago it was decided that gasoline would be sold by the gallon, since measuring volume is simple and accurate.
The only trouble with that is that gasoline expands when it gets warmer. That means the same number of molecules take up more room at higher temperatures. If you have thirty-five gallons of gas at a temperature of sixty degrees F and warm it up to eighty degrees F, it's still a liquid. But it expands to occupy a volume of 35.42 gallons. Same number of molecules, same amount of energy—but a larger volume. And if you come along and buy 35 gallons of warm gas when it's eighty degrees, you pay the same money, but you get less gas (fewer molecules, less energy) than if it was at sixty degrees.
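The arithmetic behind this can be sketched in a few lines of Python. The expansion coefficient used here is an assumption, chosen at roughly 0.0006 per degree Fahrenheit so that it reproduces the 35.42-gallon figure; the true value varies with the gasoline blend.

```python
# Thermal expansion of gasoline: a rough sketch, not a metrology-grade model.
# The coefficient below is an assumed value (~0.0006 per deg F) chosen to
# match the 35.42-gallon example; actual blends vary somewhat.
K = 0.0006  # assumed volumetric expansion coefficient, per degree F

def expanded_volume(gallons_at_60f, temp_f):
    """Volume the same molecules occupy at temp_f, given their 60 F volume."""
    return gallons_at_60f * (1 + K * (temp_f - 60))

def hidden_surcharge(gallons_pumped, temp_f, price_per_gallon):
    """Dollars paid for energy you don't get when pumping warm gas."""
    gallons_at_60f = gallons_pumped / (1 + K * (temp_f - 60))
    missing_fraction = 1 - gallons_at_60f / gallons_pumped
    return gallons_pumped * price_per_gallon * missing_fraction

print(round(expanded_volume(35.0, 80), 2))         # 35.42
print(round(hidden_surcharge(35.0, 80, 2.69), 2))  # about 1.12 on a $94.15 fill-up
```

With this assumed coefficient, a twenty-degree warm-up costs the buyer a little over a dollar on a 35-gallon fill-up, which is close to the "heat surcharge" in the opening anecdote.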
Petroleum engineers knew all about this decades ago, so they agreed on a standard temperature of 60 F for measuring volumes. And in the bulk transactions among refiners, pipeline operators, and wholesalers, you can bet that they take this expansion into account. In Canada, where the gas that's sold is on average colder than 60 F, an enterprising inventor went around to gasoline retailers and pointed out that when they sold cold gas, they were giving their customers more than they legally had to. So now nearly every gas pump in Canada compensates for temperature and delivers slightly less gas than it used to, for the same price on the pump screen. And the retailers happily paid the inventor for his idea, since they now make more money on each gallon of cold gas.
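A temperature-compensating pump like the Canadian ones essentially runs that conversion in reverse, billing for "net" gallons referenced to 60 F. Here is a hypothetical sketch, again using an assumed linear coefficient of about 0.0006 per degree F; real pumps rely on standardized correction tables rather than a single constant.

```python
# Hypothetical sketch of temperature-compensated billing. Real pumps use
# standardized correction tables (e.g., API petroleum measurement tables);
# the simple linear coefficient here is an assumption for illustration.
K = 0.0006  # assumed volumetric expansion coefficient, per degree F

def net_gallons(metered_gallons, temp_f):
    """Convert gallons metered at temp_f to equivalent gallons at 60 F."""
    return metered_gallons / (1 + K * (temp_f - 60))

# Cold Canadian gas: 35 metered gallons at 40 F hold more molecules than
# 35 gallons at 60 F, so a compensating pump bills for the larger net figure.
print(round(net_gallons(35.0, 40), 2))  # about 35.43 net gallons
```

The same function applied to warm gas gives a net figure smaller than the metered one, which is why U. S. retailers in hot climates have little incentive to adopt it.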
Nobody that the Star reporter talked to would admit it, but the reason temperature-compensated gasoline sales haven't spread to the U. S. is probably because the retailers would lose money instead of making money.
Engineering ethics is about experts who use their specialized knowledge for the good of their employers and society in general. Here is a clear case where an obscure technical effect is taking billions of dollars a year away from consumers. Engineers know about it, but the law permits it. As long as a gas pump delivers one gallon with a certain accuracy, it gets the stamp of approval from the local weights and measures authority and there is nothing illegal going on. But all that is legal is not moral, and the example of Canada shows that the technical fix for the problem is available at a cost that would not drive gas stations out of business.
Is this the worst problem you'll run into today? If it is, you're having a pretty good day. But it's real, and as gas prices go even higher, any changes in a direction that would improve matters would be welcome. Sooner or later, some clever advertising executive may think of a way to turn temperature-compensated gas sales into a competitive advantage. Unfortunately, logic does not play a large role in advertising, and the campaign to get this idea across would have to be very well planned. Another alternative is to change the laws regulating retail gas sales to require all gas to be sold with temperature compensation. The effect of this would be to make prices slightly lower in the summer and higher in winter, which might do something to offset the annual rise caused by summer driving. The prospects of either happening are not good. But until retailers start selling gas with the price compensated for temperature, at least now you know what you're paying for—which is not what it seems to be.
Sources: The original Kansas City Star article is at http://www.kansascity.com/mld/kansascity/15370193.htm. A method of figuring out what temperature does to the volume of gasoline is available from the Ohio Department of Transportation at www.dot.state.oh.us/construction/oca/Specs/SSandPN2002/10600402for2002.pdf.
Wednesday, August 23, 2006
Lithium Laments
Right now, Michael Dell is probably wishing he'd never heard of lithium. After Dell Inc. recalled over four million laptop batteries on Aug. 14 in the largest consumer-electronics recall ever, the New York Times sent out a photographer to Lake Mead to get a picture of one Thomas Forqueran looking at the gutted and smoky interior of his pickup truck. He had stored a Dell laptop in the glove box next to some ammo, and when the laptop battery caught fire, the ammo went too. Fortunately, Mr. Forqueran was not in the truck at the time.
The culprit in this case was a faulty lithium battery manufactured by Sony. Why is it that lithium batteries are so dangerous? Why did the National Transportation Safety Board prohibit cargoes containing lithium batteries on passenger planes back in 2004, and why was a shipment of lithium batteries in a UPS plane suspected as the cause of a fire last February that destroyed the plane? Basically, for the same reasons that lithium is used in batteries at all.
Batteries store energy in the form of chemical compounds. The more energy you can store in a given size and weight of battery, the longer the battery can power a device such as an iPod or a laptop. Electrochemical reactions with lithium provide more voltage than almost any other single reaction, and lithium is the lightest known metal. For these and other reasons, battery makers have been using lithium in their latest and greatest products.
But for many of the same reasons that make lithium attractive for batteries, it is a nasty element to handle. If you throw pure lithium into water, it will spontaneously catch fire and give off noxious fumes. This makes it hard to battle fires involving lithium, needless to say. Even throwing sand or using CO2 fire extinguishers doesn't work—burying the fire in table salt or lime is about the only thing that works. The lithium compounds used in rechargeable batteries are also hazardous, and can catch fire even if slightly contaminated by moisture. Once a lithium battery overheats and starts to burn, it tends to feed on itself as the cell ruptures and the lithium gets into contact with more material it can react with. What apparently happened with the Sony batteries is that a flaw in the manufacturing process left small metal particles in the wrong place. Mechanical stress on the battery once it was installed may have moved these particles around to short out the battery, creating enough heat for it to catch fire.
There are several lessons here. First, as we demand more and more from our portable electronics, we are also asking for more and more energy to be packed into batteries. On the horizon are fuel-cell batteries that run off propane or gasoline. Theoretically, one of these could run your laptop for days between fillups, but then there's the price of gasoline to worry about, not to mention the potential for leaks or spills. So there will be more battery hazards to watch out for if manufacturers don't enforce rigorous quality controls at every step of the way.
Next, it is unclear how long Sony had the manufacturing problem before fires started to occur. As engineer and author Henry Petroski likes to say, engineers often learn a lot more from failure than success. This emphasizes the importance of analyzing failures of products in the field until the engineers know exactly what caused the problem, and exactly how to fix it. But none of that can occur without good communication among vendors, suppliers, repair facilities, salesmen, and others.
This writer recalls an incident he heard about many years ago, when he was working for a large communications company which shall remain nameless. The company made the amplifiers for cable TV systems, metal boxes that hang on telephone poles and keep the cable TV signal strong enough to travel several miles between the "head-end" and the homes that take cable TV service. It seems that after several hundred of these amplifiers were shipped, they all started to fail in the same way. The circuit chip that was the heart of the amplifier was mounted to a metal heat sink, and when the engineers back at the plant opened up the failed amplifiers, they found that somehow the chips had separated from the heat sink, which caused them to burn up.
The engineers had been using this type of chip for some time in other products, and so they went back through the repair records to see if there had been any similar problems earlier. Sure enough, the problem began to show up several years before, but then it seemed to disappear—no more records of that kind of repair. The engineers called up the technician who had signed the failure reports and asked him what had happened at the point when the failures stopped occurring.
"Oh, we kept getting busted amps like that," he replied. "There was just so many of 'em, I got tired of filling out the same old failure report."
One hopes the quality-control system at Sony operates better than that. But any organization is only as good as the people in it, and if only one critical person fails to follow the procedure that others are expecting, the whole system can fail.
We can be glad that there have not been any reported fatalities resulting from flaming Dell laptops. Dell as a company will probably survive this incident. But a safety recall like this can ruin a small or new company's reputation permanently and put it out of business, even if no one is hurt. The daily routines of reliability engineering, quality control, and other related technical and managerial jobs can seem boring or even pointless at times. But like police patrols, they protect the safety and welfare of the public, and negligence in these areas can lead to disaster.
Sources: The New York Times article on the Dell laptop battery recall is at http://www.nytimes.com/2006/08/15/technology/15battery.html. The NTSB notice prohibiting passenger planes from carrying cargoes of lithium batteries is discussed at http://www.dot.gov/affairs/faa001.htm. The UPS plane fire is reported in TG Daily at http://www.tgdaily.com/2006/07/13/ntsb_laptopbattery_upsfire/.
Tuesday, August 15, 2006
The Price of Airline Security
On August 11, we received the unwelcome news that terrorists were planning yet another attack, this one involving US-bound flights from Britain that were targeted for demolition with liquid explosives. Fortunately, authorities rounded up many of the alleged plotters before they could do any damage, but the effects of their plans were felt immediately by thousands of would-be airline passengers whose flights were cancelled, or who missed them, because of tightened security checks. The problem of airline security is an interesting one from an engineering ethics point of view, because it brings to the surface matters of safety and expense that otherwise get little attention.
Air travel has not always been a relatively safe way to get from A to B. The primitive state of aviation technology in the 1920s meant that the few commercial passengers who flew back then were undertaking substantial risks. But improvements over the decades have made aviation one of the safest modes of transportation around, if only hazards from accidental crashes due to pilot error and hardware failures are considered. While every design effort has been expended to make planes intrinsically safe, modern commercial (as opposed to military) aircraft were not designed with terrorism in mind. The idea that someone inside the plane would brandish arms or set off a bomb was simply not in the imagination of design engineers until recently.
Air travel has not always been a relatively safe way to get from A to B. The primitive state of aviation technology in the 1920s meant that the few commercial passengers who flew back then were undertaking substantial risks. But improvements over the decades have made aviation one of the safest modes of transportation around, if only hazards from accidental crashes due to pilot error and hardware failures are considered. While every design effort has been expended to make planes intrinsically safe, modern commercial (as opposed to military) aircraft were not designed with terrorism in mind. The idea that someone inside the plane would brandish arms or set off a bomb was simply not in the imagination of design engineers until recently.
Now, of course, it is. After the World Trade Center attacks of Sept. 11, 2001, the only visible change to the structure of commercial aircraft was the presumably bullet-proof steel door that now protects the flight deck from assault from within the cabin. This was an obvious step, and cost the airlines something, but clearly isn't going to solve all of their terrorism problems. Once a person with a reasonably powerful bomb gets on board a commercial airliner, the game is over if the bomb is exploded. There is no practical way to make planes impervious to explosives detonated from within. Flying is a very weight-sensitive business, so the heavy armor required to withstand bomb-force blasts literally won't fly. And so the only way to keep planes from being blown up by terrorists with bombs on board is to keep the bombs off the planes in the first place.
But that isn't free either. Since early terrorist bombs were stowed in luggage, inspection of checked baggage by X-ray was one of the first security measures to be implemented. After the attempted shoe-bombing of an airliner, passengers got used to taking off their shoes for X-ray inspection as well. And since the latest plot involved liquid explosives, most liquids are now banned from carry-on luggage. One almost hates to speculate about these matters in a semi-public forum, but there is always the possibility of a suicide bomber who swallows a time bomb. Not even the most dedicated terrorists have gone to this extent yet, possibly because bomb technology cannot yet put a powerful enough charge into a volume small enough to swallow. But if such an infernal deed is ever done, we can reconcile ourselves to whole-body low-dosage X-rays of all passengers, which would be the ultimate invasion of privacy.
Loss of privacy, delays, inconvenience, and the high cost of inspection machinery are only some of the prices we pay for being able to fly. A company called Ahura is test-marketing a book-size device that can do a chemical analysis of any liquid that you can see, even through glass or plastic bottles. It uses a laser to stimulate vibrations in the molecules of the liquid, which in turn give off light that the device analyzes and interprets in terms of chemical composition. The process, called Raman spectroscopy, has until recently been confined to chemistry research labs. But high demand for security inspections and advances in compact computer and sensor technology have allowed companies like Ahura to develop these devices. Still, they are not cheap. According to an account in Time Magazine, the Ahura unit retails for about $30,000. It will be a while before every airport is equipped with such a device, and in the meantime, even bottled water has become a rarity in the air.
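The physics behind such a detector can be sketched in a few lines. In Raman spectroscopy, the scattered light is shifted from the laser line by amounts characteristic of the liquid's molecular vibrations, conventionally measured in wavenumbers. The sketch below uses the standard Raman-shift formula with illustrative numbers (a common 785 nm laser wavelength); none of these figures come from Ahura's actual specifications.

```python
# Raman shift: scattered light is offset from the laser line by an amount
# characteristic of the molecule's vibrational modes. In wavenumbers:
#   shift (cm^-1) = 1/lambda_laser - 1/lambda_scattered  (wavelengths in cm)

def raman_shift_cm1(laser_nm, scattered_nm):
    """Raman shift in wavenumbers (cm^-1), given wavelengths in nanometers."""
    # 1 nm = 1e-7 cm, so 1/lambda in cm^-1 is 1e7 / lambda_in_nm
    return 1e7 / laser_nm - 1e7 / scattered_nm

# Illustrative example: a 785 nm laser and a scattered peak near 852 nm
# correspond to a shift of roughly 1000 cm^-1, in the region where many
# organic liquids have strong Raman bands.
shift = raman_shift_cm1(785.0, 851.9)
print(round(shift))  # prints 1000
```

A real instrument records a whole spectrum of such shifts and matches the pattern against a library of known compounds, which is where the compact computing power mentioned above comes in.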
Engineers like to view a problem in enough detail to have a good idea of how design choices will affect the performance of the system in question. In the case of airline security, the system is the whole complex of air travel. The design choices include how much we will pay for increasingly sophisticated security technology on the ground, how much we charge for air travel, how much of the cost of security is borne by the government versus private sources, and (not least important) what kind of research and development we do to prepare for future security problems. This last item is currently being covered, if at all, by small private firms such as Ahura in conjunction with government-sponsored research related to terrorism. It is a well-known fact among researchers that adding the word "terrorism" to a research proposal with almost any conceivable justification increases its likelihood of funding, other things being equal. Whether or not this results in better ideas for anti-terrorism technology is so far an open question.
While the U. S. government has taken steps to coordinate anti-terrorism efforts with the creation of such entities as the Department of Homeland Security, it is not clear that such efforts are coordinated enough or directed well enough to do a good job at not only reacting to, but anticipating, new terrorist threats to airline safety. Most successful crash programs, from the Manhattan Project to the Apollo program, have been coordinated by a single, central authority with a single-minded purpose and enough resources to get the job done. Commercial airline safety differs from those programs in many ways, of course. Millions of ordinary citizens, hundreds of private companies large and small, and international relations all make it a complex picture. But whatever else it is, it is an engineering problem. And a more coordinated and focused effort to make airlines as safe from terrorists as they are now from accidental crashes would be worth whatever we paid for it. Even if I can't carry my soft drink onto the plane for a while.
Sources: The Time Magazine article "A New Way to Detect Liquid Explosives" on Ahura is at http://www.time.com/time/business/article/0,8599,1225412,00.html.
Tuesday, August 08, 2006
A Bribe By Many Other Names
When is a bribe a bribe? When is it a token of appreciation? And when is it a campaign contribution? Finally, why should engineers worry about these questions?
All engineering involves money, and wherever lots of money flows, you can find people who will try to get some in nefarious ways. The news that provokes these thoughts concerns one Brent R. Wilkes, a U. S. defense contractor whose enterprises have included a company that converts paper documents into digital form, and another that offers a noise-suppressing technology for military radio communications. In the nature of things, Mr. Wilkes has undoubtedly hired and paid engineers who work for these companies.
The reason Mr. Wilkes is in the news is that in order to procure defense contracts, he paid over two million dollars in cash and gifts to U. S. Rep. Randy Cunningham of California, who confessed to the bribes in a plea bargain with Federal prosecutors. Rep. Cunningham was sentenced to prison. Mr. Wilkes, for his part, feels that he himself did nothing illegal and was simply playing the game by the rules he learned. Unless a contractor pays for preferential treatment in the form of “earmarks,” according to Mr. Wilkes, he doesn’t stand a chance. The New York Times reports that over 12,000 such earmarks were inserted in this year’s Federal spending bills, amounting to a total of some $64 billion, and the number of earmarks is rising every year. Of course, not every earmark is the result of a bribe, but some clearly were.
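The scale of those earmark figures is worth a moment of back-of-envelope arithmetic. Dividing the Times's two numbers gives the average size of an earmark; this is a rough illustration only, since the actual distribution is surely lopsided.

```python
# Rough average size of a Federal earmark, from the figures cited above
total_dollars = 64e9    # roughly $64 billion in earmarks (NYT figure)
num_earmarks = 12_000   # "over 12,000" earmarks, so this is a lower bound

average = total_dollars / num_earmarks
print(f"${average / 1e6:.1f} million per earmark, on average")
```

At better than five million dollars apiece, it is not hard to see why a contractor might conclude that the price of "playing the game" is worth paying.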
Suppose you were an engineer working at one of Mr. Wilkes’ companies. Should this affair bother you? The writer of this blog has received in-kind support (not money) from a research center in Massachusetts that was set up via a funding mechanism that could be considered an earmark, so the question is a personal one. The answer depends on your ideas about how government should work, and what representative democracy really is.
Western democracies trace their roots to ancient Greece, where the Athenian democracy gave rise to the most influential culture the West has ever known. Plato could be called the first (and probably the best) political scientist. In The Republic, he put forward his views on the different types of government and the strengths and weaknesses of each. While it is impossible to know what Plato would think of the government of the United States today, if he were looking at how things are really done, as opposed to how we say they are done, he might well classify it as an oligarchy.
Plato defines an oligarchy as “a government resting on a valuation of property, in which the rich have power and the poor man is deprived of it.” Although ownership qualifications for voters and poll taxes have been abolished in this country, we now have a system that still requires candidates for state and Federal office to raise millions of dollars, becoming temporarily rich if only for the duration of the campaign. Why? Because without cash, no one can pay for campaign ads. And since it is easier to raise money from rich people than from poor ones, guess who gets special attention at the very least, and occasionally illegal favors such as those Rep. Cunningham granted in exchange for the bribes from Mr. Wilkes.
We should distinguish between legal campaign contributions made to a candidate on the one hand, and illegal bribes paid for specific legislative favors on the other hand. Unfortunately, members of Congress don’t always distinguish between the two. The point is, whether legally or illegally, money has come to have a peculiarly loud voice in U. S. government today, overpowering the voices of people who suffer injustice but don’t have money to do anything about it.
Well, what of it? Is that so bad? Plato thinks it is: “. . . in proportion as riches and rich men are honoured in the State, virtue and the virtuous are dishonoured.” He says that just being rich doesn’t make you wise in the ways of government. All it shows is that you know how to get rich, or at least to keep the riches you inherited. The rich rulers’ “fondness for money makes them unwilling to pay taxes.” And “oligarchies have both the extremes of great wealth and great poverty.” You don’t have to look very far to see both of those effects in action today.
Now, Plato doesn’t say that an oligarchy won’t work. It will, after a fashion, but if you live in an oligarchy, you should get used to certain drawbacks. Lower taxes for the rich and extremes of wealth and poverty are two. The rich having virtually all the effective power is another. The worst, he says, is that being poor makes you a kind of non-person, without influence or the hope of justice.
The founders of this country did build in some property qualifications for voters in Federal elections at first. But the wave of Jacksonian democracy that swept through the country in the early nineteenth century did away with most of them, and the civil rights movement of the 1960s abolished poll taxes. At the time, people thought these were good things. They brought the country closer to the ideal enunciated by Lincoln: government of, by, and for the people, not just for some particular favored group with well-funded influence in Washington. Of course, well-funded groups with influence in Washington have been with us always. But the balance between radically egalitarian democracy and highly discriminatory oligarchy has swung back and forth over the years.
Right now, it is swinging pretty heavily toward oligarchy. If you see this as a good thing, or at least an inevitable feature of the way things are done nowadays, then maybe you would not feel a qualm at reading about the adventures of your company’s founder in the realm of bribery. After all, it seems to be only an extreme form of making campaign contributions, and who can draw the line? But if you think bribery and corruption are corrosive to the body politic and need to be fought at every turn, then you won’t be so happy at the news. Maybe you’ll quit and go into politics yourself. The least you can do is vote, and not just for the candidate who runs the most campaign ads, either.
Sources: The New York Times article on Wilkes is at http://www.nytimes.com/2006/08/06/washington/06wilkes.html. Plato’s The Republic can be found at http://www.literaturepage.com/read/therepublic, and his comments on oligarchy are from Chapter 8. I thank Jeff Bogumil, former president of the IEEE Society on Social Implications of Technology, for drawing my attention to this matter.
Tuesday, August 01, 2006
Online Gambling in the U. S.: Don't Bet On It
If you log on to BetOnSports.com today, and your Internet address identifies you as living in the U. S., all you will see besides their colorful logo is the following message:
IN LIGHT OF COURT PAPERS FILED IN THE UNITED STATES, THE COMPANY HAS TEMPORARILY SUSPENDED THIS FACILITY PENDING ITS ABILITY TO ASSESS ITS FULL POSITION. DURING THIS PERIOD NO FINANCIAL OR WAGERING TRANSACTIONS CAN BE EXECUTED. FURTHER INFORMATION WILL BE POSTED ONCE THE COMPANY IS IN A POSITION TO DO SO.
The BETonSPORTS.com
customer support team
The reason for this is simple: A U. S. District Court in St. Louis has issued a restraining order against BetOnSports PLC, forbidding them to take any bets from U. S. residents. The reason for the court order is a civil case filed by the U. S. Department of Justice to stop the company's U. S. operations. On July 16, the CEO of BetOnSports, David Carruthers, a British citizen, was on his way from London to the company's online operations in Costa Rica by way of the Dallas-Fort Worth Airport. Federal authorities arrested him at the airport. How you view all these goings-on depends on your view of gambling, the Internet, and what is right and wrong about both.
Engineering ethics often deals with the unexpected consequences of a new technology. Most of the time, the surprise comes not from purely technical or scientific causes, but from the ways people find to use or misuse the new development. The designers of the Arpanet, an early predecessor of the Internet, were thinking in terms of Cold War national defense in 1969 when they put together a computer network that they hoped would withstand partial destruction in a nuclear war. I would be surprised to find that the thought of placing bets over their new medium of communication ever entered their minds. But as millions of ordinary people gained access to the Internet, that thought did occur to gamblers, bookies, and "gaming industry" professionals, who set up gambling websites, mostly outside the continental U. S., to avoid state and Federal laws against unauthorized games of chance. But now the Department of Justice seems to believe it can make a good case against one of the highest-volume online betting operations.
As a strong opponent of gambling in any organized form, I hope that Mr. Carruthers' recent experiences make other online gambling outfits think twice about continuing their U. S. operations. In my view, gambling approaches the perfect temptation, as defined by the demon Screwtape in C. S. Lewis' The Screwtape Letters. The perfect temptation is to entice someone into a trap and give them nothing in return. And most of the time, that's exactly what gamblers get, on individual bets and in the long run. I think it is a shame that most U. S. states have corrupted themselves to the extent of conducting lotteries. Never mind that the profits so gained are used for good purposes, including education. Studies have shown that people with lower incomes spend a much larger portion of their income on lotteries and gambling than upper-income groups. So organized gambling robs from the poor and gives to the rich, the rich being either state governments or the wealthy owners and operators of casinos and online gambling companies.
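The long-run arithmetic behind that claim can be made concrete with a standard textbook example: the single-number bet in American roulette. The odds below are the conventional ones for that game, not anything drawn from the BetOnSports case.

```python
# Expected value of a $1 single-number bet in American roulette.
# These are the standard published odds for the game, used here purely
# as an illustration of why the house wins in the long run.
stake = 1.00
p_win = 1 / 38           # 38 pockets: numbers 1-36 plus 0 and 00
payout = 35 * stake      # a winning number pays 35-to-1

expected_value = p_win * payout - (1 - p_win) * stake
print(f"{expected_value:.4f}")  # prints -0.0526
```

A loss of about 5.3 cents per dollar wagered, on every spin, forever: Screwtape could hardly have designed the arrangement better himself.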
Does my personal opinion about gambling make me think that we therefore ought to roll up the Internet and put it away, simply because it can be used for nefarious purposes? Not necessarily. A lot of bad things on the Internet are there simply because people have always been doing them, and people are now using the Internet a lot.
Gambling is a very old social problem. It became a popular recreation in China as much as 3500 years ago. The sage Confucius opposed the practice and several Chinese emperors tried to prohibit it, with mixed success. The fact that gambling has become an issue on the Internet is no more surprising than the fact that people occasionally tell lies in emails as well as in person.
What the Internet has done with gambling that is new is to internationalize it, making it much trickier for any single jurisdiction to enforce its laws or prosecute violators. When you had to fly to Las Vegas or Monaco to gamble in a big way, the volume was necessarily small, but now numerous gambling sites are just a click of the mouse away. Just as the development of radio broadcasting in the 1920s led to a whole new set of laws to regulate international broadcasting, which were (and are) both obeyed and violated to various degrees, the global nature of the Internet has challenged the sovereignty of nations in an unprecedented way.
As I have mentioned elsewhere, some countries such as China have chosen to spend a lot of effort to control their part of the Internet in various ways. I don't know what China's policy is toward Internet gambling, but the great firewall of China can probably block those sites as effectively as it blocks sites with the word "freedom." Such a restrictive system is unthinkable in this country, where the Internet acquired much of its egalitarian and democratic nature. But the Department of Justice seems to believe that other approaches such as restraining orders and arresting CEOs in airports can have the same effect.
What if you think there's nothing wrong with gambling, even after reading what I have to say about it? Well, if you are an engineer, I suppose you could join the technical support staff of BetOnSports.com without having your conscience bother you. But it seems to me that engineers have a special calling to make life better in some way, and not just one's own life, as in getting a high-paying job. After all, if your only criterion about a career is pay, you should go right out and start running drugs: the hourly rate can't be beat and no higher education is required. If you disagree with that idea, that means you have some moral feelings and intuitions about your career. The thing to do is not to ignore them, but ask yourself what they are, and why you have them. If you figure all this out and still think it's fine to work for an online gaming outfit, then go ahead. But just be careful about where your flights land.
Sources: A report on Carruthers' predicament was carried in the Aug. 1, 2006 online edition of the New York Times at http://www.nytimes.com/2006/08/01/technology/01gamble.html. More information is at the Internet News Bureau site http://www.internetnews.com/ec-news/article.php/3622341. An interesting history of gambling in China by Desmond Lam is at http://www.urbino.net/articles.cfm?specificArticle=A%20Brief%20Chinese%20History%20of%20Gambling. The Maryland study of gambling is cited in a philosophical argument against state lotteries by Verna V. Gehring, at http://www.publicpolicy.umd.edu/IPPP/Winter-Spring00/The_American_State_Lottery.htm.
IN LIGHT OF COURT PAPERS FILED IN THE UNITED STATES, THE COMPANY HAS TEMPORARILY SUSPENDED THIS FACILITY PENDING ITS ABILITY TO ASSESS ITS FULL POSITION. DURING THIS PERIOD NO FINANCIAL OR WAGERING TRANSACTIONS CAN BE EXECUTED. FURTHER INFORMATION WILL BE POSTED ONCE THE COMPANY IS IN A POSITION TO DO SO.
The BETonSPORTS.com
customer support team
The reason for this is simple: A U. S. District Court in St. Louis has issued a restraining order against BetOnSports PLC, forbidding them to take any bets from U. S. residents. The reason for the court order is a civil case filed by the U. S. Department of Justice to stop the company's U. S. operations. On July 16, the CEO of BetOnSports, David Carruthers, a British citizen, was on his way from London to the company's online operations in Costa Rica by way of the Dallas-Fort Worth Airport. Federal authorities arrested him at the airport. How you view all these goings-on depends on your view of gambling, the Internet, and what is right and wrong about both.
Engineering ethics often deals with the unexpected consequences of a new technology. Most of the time, the surprise comes not for purely technical or scientific reasons alone, but from the ways people find to use or misuse the new development. The designers of the Arpanet, an early predecessor of the Internet, were thinking in terms of Cold War national defense in 1969 when they put together a computer network that they hoped would withstand partial destruction in a nuclear war. I would be surprised to find that the thought of placing bets over their new medium of communication ever entered their minds. But as millions of ordinary people gained access to the Internet, that thought did occur to gamblers, bookies, and "gaming industry" professionals, who set up gambling websites, mostly outside the continental U. S. to avoid state and Federal laws against unauthorized games of chance. But now the Department of Justice seems to believe it can make a good case against one of the highest-volume online betting operations.
As a strong opponent of gambling in any organized form, I hope that Mr. Carruthers' recent experiences make other online gambling outfits think twice about continuing their U. S. operations. In my view, gambling approaches the perfect temptation, as defined by the demon Screwtape in C. S. Lewis' The Screwtape Letters. The perfect temptation is to entice someone into a trap and give them nothing in return. And most of the time, that's exactly what gamblers get, on individual bets and in the long run. I think it is a shame that most U. S. states have corrupted themselves to the extent of conducting lotteries. Never mind that the profits so gained are used for good purposes, including education. Studies have shown that people with lower incomes spend a much larger portion of their income on lotteries and gambling than upper-income groups. So organized gambling robs from the poor and gives to the rich, the rich being either state governments or the wealthy owners and operators of casinos and online gambling companies.
Does my personal opinion about gambling make me think that we therefore ought to roll up the Internet and put it away, simply because it can be used for nefarious purposes? Not necessarily. A lot of bad things on the Internet are there simply because people have always been doing them, and people are now using the Internet a lot.
Gambling is a very old social problem. It became a popular recreation in China as much as 3500 years ago. The sage Confucius opposed the practice and several Chinese emperors tried to prohibit it, with mixed success. The fact that gambling has become an issue on the Internet is no more surprising than the fact that people occasionally tell lies in emails as well as in person.
What the Internet has done with gambling that is new is to internationalize it, making it much trickier for any single jurisdiction to enforce its laws or prosecute violators. When you had to fly to Las Vegas or Monaco to gamble in a big way, the volume was necessarily small, but now numerous gambling sites are just a click of the mouse away. Just as the development of radio broadcasting in the 1920s led to a whole new set of laws to regulate international broadcasting, which were (and are) both obeyed and violated to various degrees, the global nature of the Internet has challenged the sovereignty of nations in an unprecedented way.
As I have mentioned elsewhere, some countries such as China have chosen to spend a lot of effort to control their part of the Internet in various ways. I don't know what China's policy is toward Internet gambling, but the Great Firewall of China can probably block those sites as effectively as it blocks sites with the word "freedom." Such a restrictive system is unthinkable in this country, where the Internet acquired much of its egalitarian and democratic nature. But the Department of Justice seems to believe that other approaches, such as restraining orders and arresting CEOs in airports, can have the same effect.
What if you think there's nothing wrong with gambling, even after reading what I have to say about it? Well, if you are an engineer, I suppose you could join the technical support staff of BetOnSports.com without having your conscience bother you. But it seems to me that engineers have a special calling to make life better in some way, and not just one's own life, as in getting a high-paying job. After all, if your only criterion about a career is pay, you should go right out and start running drugs: the hourly rate can't be beat and no higher education is required. If you disagree with that idea, that means you have some moral feelings and intuitions about your career. The thing to do is not to ignore them, but ask yourself what they are, and why you have them. If you figure all this out and still think it's fine to work for an online gaming outfit, then go ahead. But just be careful about where your flights land.
Sources: A report on Carruthers' predicament was carried in the Aug. 1, 2006 online edition of the New York Times at http://www.nytimes.com/2006/08/01/technology/01gamble.html. More information is at the Internet News Bureau site http://www.internetnews.com/ec-news/article.php/3622341. An interesting history of gambling in China by Desmond Lam is at http://www.urbino.net/articles.cfm?specificArticle=A%20Brief%20Chinese%20History%20of%20Gambling. The Maryland study of gambling is cited in a philosophical argument against state lotteries by Verna V. Gehring at http://www.publicpolicy.umd.edu/IPPP/Winter-Spring00/The_American_State_Lottery.htm.
Wednesday, July 26, 2006
Is MySpace a Safer Place?
Back on June 20, I wrote about the Texas Attorney General's efforts to track down cyber predators who abuse popular social-networking websites such as MySpace. At last report, he had rounded up eighty alleged criminals who tried to meet cute under-age girls or boys for nefarious purposes, only to find themselves at the wrong end of a sting operation. The very next day, on June 21, MySpace.com announced a series of new restrictions to help fix the problem. I am certain that this blog played no role in MySpace's decision, but it is equally certain that publicity about the potential for abuse as well as the potential for lawsuits did have an effect.
According to an Associated Press report, the changes make it impossible for anyone registered as being over 18 to view the full profiles of members under 16, unless the older user knows the younger one's email address or full name. (MySpace has long had a lower age limit of 14.) While this is undoubtedly an improvement, the report also pointed out that MySpace simply takes a user's word about age. There is still nothing like the credit-card verification mechanism recommended by the Texas Attorney General to verify the user's age by independent means. So if I decided to masquerade as a 14-year-old boy in order to view the full profiles of 14-year-old girls, I could still do so.
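The reported rule boils down to a simple access-control predicate. Here is a minimal sketch of it (the function name and parameters are my own invention; MySpace's actual implementation is of course not public):

```python
def can_view_full_profile(viewer_age, target_age, knows_contact_info):
    """Sketch of the reported rule: a user registered as over 18 may not
    view the full profile of a member under 16, unless the older user
    already knows the younger member's email address or full name."""
    if viewer_age > 18 and target_age < 16:
        return knows_contact_info  # only permitted with prior knowledge
    return True  # all other pairings are unrestricted
```

The weakness is visible right in the signature: viewer_age is self-reported, so the check is only as good as its inputs, which is exactly the loophole described above.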
The controversy over MySpace is just one battle in the larger war about privacy and technology. These days, "technology" usually means computers, networks, and the whole communications infrastructure of iPods, websites, and other hardware and software that makes us the most connected society in history. In examining a problem, engineers sometimes like to cook up a worst-case scenario in which everything that could conceivably go wrong does go wrong. If the system they are designing nevertheless withstands such a perfect storm of Murphy's Law ("whatever can go wrong will go wrong"), then the engineers can generally breathe a sigh of relief that the system will make it through more likely incidents in which only some things go wrong. Of course, this assumes that the system is simple enough, and the engineers are imaginative enough, to come up with a truly worst-case situation. But even if these conditions don't always apply, the technique is still a useful one.
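The difference between checking only the likely single failures and checking the true worst case can be sketched in a few lines. This is a toy illustration with made-up failure modes and a made-up survival rule, not any real reliability tool:

```python
from itertools import combinations

# Hypothetical failure modes for some system under review.
failure_modes = ["power_loss", "sensor_stuck", "network_down"]

def system_survives(active_faults):
    # Toy rule: the system tolerates any single fault, but not
    # power loss combined with a stuck sensor.
    return not ({"power_loss", "sensor_stuck"} <= set(active_faults))

# Single-fault testing passes every check...
assert all(system_survives((f,)) for f in failure_modes)

# ...but enumerating every combination of faults (the worst-case
# mindset) exposes the failures that only appear when faults coincide.
all_combos = [c for r in range(len(failure_modes) + 1)
              for c in combinations(failure_modes, r)]
surviving = [c for c in all_combos if system_survives(c)]
print(f"{len(all_combos)} combinations, {len(surviving)} survive")
# prints: 8 combinations, 6 survive
```

As the paragraph above notes, this exhaustive approach only works when the system is simple enough to enumerate; the number of combinations doubles with every added failure mode.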
What is a worst-case scenario in terms of privacy and technology? The answer may depend on what your own worst fears are.
Say you feel strongly that your financial matters are nobody else's business, and that you value your good credit rating. Your worst cyber-privacy nightmare might then be to have your identity stolen by a gang of hot-check-writing, heroin-using, credit-card-busting criminals who pay for a million-dollar orgy of consumer spending with your financial resources and then flee the country, leaving your credit rating in tatters that will take years to repair.
Say that you like to speak your mind about politics or anything else. Then your worst fears might be that a kind of super-Patriot Act would allow the government to spy on everything you email, blog, say, or see online. Imagine what Joseph Stalin would have done with a Communist version of the Internet. In the old days of manual telephone taps and flesh-and-blood spies, the ability of a government to spy on its citizens was limited by the fact that you could hire only so many spies, and there were never enough to keep tabs on all the citizens all the time. But new automated spyware has lifted that restriction and brought the blessings of increased productivity to the espionage business. My blog on "Engineering Censorship in China" shows how a totalitarian government can use technology to monitor or censor the online activities of over a billion people, with the help of companies like Microsoft.
Say that you have a rare genetic disorder that has a good, but not certain, chance of striking you as a young adult. It won't be fatal, but will require many thousands of dollars' worth of specialized health care over the rest of your lifetime. Do you want your prospective employers or health insurance companies to know this fact about you? Even if they say they will not let it influence their decisions about you, do you believe them? There are laws currently under consideration by the U. S. Congress that would mandate the electronic storage of medical data, which is now largely maintained in the form of paper files. This change does not mean that any Joe or Jane off the street will be able to access your medical records, but neither is it clear that they will be safeguarded perfectly.
In each of these cases, something that was at first intended to be a good, convenient, or more efficient way of doing things gets twisted around and used to harm. Systems designed to make it easier to buy things also make it easier to steal things. Those who built features into the Internet to encourage the small-d democratic exchange of ideas now find that some governments use it to repress ideas. Attempts to make medical records more accurate and accessible can also hurt someone with a costly medical problem if insurers or employers use their medical records against them. And a great idea about how to bring people closer together with technology-assisted social networking occasionally helps cyber predators carry out their evil intentions.
While there are many laws of physics that engineers ignore at their peril, there is also one principle of human behavior that is equally important. It goes by various names. In the Christian tradition, it is called "original sin," which means that everyone on Earth has an inherent tendency to do the wrong thing, even when they know the right thing. G. K. Chesterton called this doctrine "the only part of Christian theology which can really be proved." The proof, of course, is empirical. There has never been a technology in actual use that did not end up causing at least some harm along with its good. And it is foolish to design anything without taking this tried-and-true human factor into account.
Sources: The Associated Press report on MySpace's new restrictions is at http://www.msnbc.msn.com/id/13447786/. One view of the issue of medical privacy rights (the patient-advocate view) can be found at http://www.patientprivacyrights.org. The Chesterton quote is from Orthodoxy (New York: Doubleday, 1990, orig. published 1908), p. 15.
Wednesday, July 19, 2006
The Big Dig in Big Trouble
Boston's Big Dig project to put much of Interstate 93 underground spanned parts of two centuries and cost more than any other single highway project in the United States. On July 11, when the project was mostly finished and people in Massachusetts thought they could begin to put the disruption and cost overruns behind them, a three-ton ceiling tile came loose in a connector tunnel and killed a newlywed woman. Further investigation has revealed that over a thousand fasteners used to hold up similar tiles are probably defective. What can we learn from all this?
The first lesson is an old one: nothing draws attention like death and destruction. According to a report by Sean Murphy and Raja Mishra in the July 18 Boston Globe, lab tests of the epoxy glue used to hold the fasteners in place were originally scheduled during construction. But officials of Bechtel/Parsons Brinckerhoff, the engineering firm in charge of the Big Dig, felt so confident in the epoxy that they canceled the tests. Now it looks like the tests would have been a good idea, because they might have revealed the kind of problems that ultimately led to the fatal ceiling collapse. But there was no immediate harm that resulted from skipping the tests, so the incident went by unnoticed.
The next lesson is one we hear starting in kindergarten: be sure to follow instructions. Engineering is a constant battle between expensive over-caution on the one hand, and reckless negligence on the other hand. Where lives are at stake, as in the construction of bridges and tunnels, laws require licensed engineers to sign off on plans and specifications. But all the licensed engineers in the world won't do any good if the contractors and builders don't carry out the engineers' instructions to the letter.
Speculation by experts centers on the possibility that the epoxy used to hold the concrete ceiling tiles up was either not prepared and applied correctly, or used with oily steel. Steel as it comes from the factory has a thin coating of oil on it, and unless this oil is cleaned off prior to use, adhesives such as epoxy cannot form a good bond. Even if the steel was clean, the widely varying temperatures at a Boston construction site may have interfered with the chemical changes that epoxy goes through in order to harden. Inadequately hardened plastic adhesives can "creep" under stress, moving a tiny fraction of an inch every month, until the entire joint fails. Whatever was done wrong, it appears to have been done wrong consistently, because Governor Mitt Romney has announced that over 1300 fasteners are suspect and will have to be removed or replaced.
Further investigations will eventually reveal what went wrong, and possibly who was responsible. Structural engineering is based mostly on physical science, and things don't generally fall down for no reason at all. But finding the physical cause gets us only part way toward preventing similar accidents in the future. Until the human organizations that let such things happen are repaired and kept in order, the same thing can happen again. In a way, it has.
The Boston tunnel collapse is strangely similar in some ways to a much more serious tragedy that happened twenty-five years ago this month. On July 17, 1981, several hundred people gathered on a suspended concrete walkway to watch a dance party in the newly opened Hyatt Regency Hotel in Kansas City, Missouri. The walkway was held up by steel rods that should have been strong enough to support the weight of the crowd. If they had been installed according to the original engineering plan, everything would have been fine. But on the site, a contractor decided to make a subtle change in the way the rods were made and assembled. This change greatly weakened the structure and caused it to collapse that evening, killing 114 people and injuring 200. Again, we had heavy concrete slabs, dangerous to life, suspended by thin steel rods. Again, if the plans had been carried out to the letter, the disaster would not have occurred. This is not to say that nobody should ever suspend heavy concrete slabs with thin steel rods again, or that engineers never make mistakes. They do. But the point is that responsibility inheres not only in those who make plans, but in those who carry them out and those charged with making sure that the work agrees with the plans.
Everyone involved in a building project, from those who pay for it, to the architects and engineers, to the contractors, to inspectors, down to the lowliest laborer cleaning up afterwards, has to walk that same line between excessive over-caution and reckless carelessness. Since the vast majority of engineering projects work without major failures or loss of life, we can assume that most of these folks do their job well enough most of the time. But an accident like the Big Dig tunnel collapse reminds us of what has to happen at every step of the way, and what can go wrong if somebody doesn't pay enough attention to details that don't seem to matter at the time.
Sources: The Boston Globe articles cited are at http://www.boston.com/news/globe/city_region/breaking_news/2006/07/romney_number_o.html (Gov. Romney's announcement) and http://www.boston.com/news/traffic/bigdig/articles/2006/07/18/workers_doubted_ceiling_method/ (the neglected lab tests). A string of technical discussions on the general subject of epoxy ceiling fasteners and how they can fail is at the Engineering Tips website http://www.eng-tips.com/viewthread.cfm?qid=159632&page=1. The Wikipedia article about the Kansas City Hyatt Regency walkway collapse is at http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse.
Monday, July 10, 2006
Counterfeit Electronics: Coming to a Store Near You
Ten days ago, on July 1, 2006, it became illegal in the European Union to sell electronics that contain more than a very small amount of lead, mercury, cadmium, and a few other hazardous chemicals. These new Restriction of Hazardous Substances (RoHS) regulations present a golden opportunity for electronics counterfeiters to re-label and re-package lead-containing electronics to look like they meet the RoHS requirements.
What is electronics counterfeiting? Anyone who has strolled through a crowded street-level market in New York City has had the chance to buy things like "Rolix" watches and maybe even "Ipods" (not "iPods"). This kind of counterfeiting, where someone makes a cheap imitation of an expensive product and labels it with an almost-like name, is pretty easy to spot and avoid. But it is only the tip of a huge iceberg that costs legitimate manufacturers up to $100 billion a year in lost revenue, according to some estimates.
Most of the counterfeiting goes on far out of sight of consumers, among the thousands of manufacturers, suppliers, and parts brokers who provide the components for both consumer items and industrial electronics systems. Electronics supply chains are increasingly global, and increasingly use the Internet as a marketing and communications tool. The problem with global Internet-based supply chains is that purchasers and suppliers rarely meet face-to-face. This makes it easy for an unethical engineering firm to set up as a legitimate manufacturer and resell used ICs salvaged from old computers as new parts, for example. Another ploy is to relabel cheap, poorly performing parts as expensive better-performing ones. The manufacturer who trusts the part's label and builds a bogus two-dollar IC into a five-hundred-dollar motherboard, which thereupon fails, has got a huge financial headache on his hands. And even worse, the part can perform just well enough to leave the factory, only to fail when it gets to the consumer.
A recent article in IEEE Spectrum Magazine by Michael Pecht and Sanjay Tiku describes some of the ways manufacturers can guard against these problems. One obvious way would be to test parts as they arrive. Years ago, this practice was not uncommon, but it is costly and recent trends have been to move component testing away from the user and toward the supplier. But this requires a level of trust between supplier and user that some suppliers obviously don't deserve.
If the supply chain consisted of just two links, a manufacturer might be able to vet each supplier thoroughly and establish trustworthiness that way. But take the example of a criminally incompetent supplier a few years ago, who stole a formula for the electrolyte used in electrolytic capacitors, a very common type of cheap electronic component. He got the formula wrong, but went ahead and mixed up a batch anyway and sold it to some capacitor manufacturers. They used it to make their capacitors, sold the capacitors to a board-making company, which sold the boards to computer makers. Some time later, capacitors made with the bad electrolyte began to fail, ruining hundreds, if not thousands, of computers. There were at least five links in this defective supply chain, not counting middlemen, and the only problem was at the head of the chain, where it was hard to detect. The harm in this case was a flurry of failed computers, but suppose a bad capacitor went into a heart pacemaker? The harm that counterfeit parts cause isn't only financial. Reputations can be ruined and people can die. But connecting the dots to find out who was responsible is often an impossible task.
Counterfeit electronics is an obvious case of unethical engineering. Someone with enough technical expertise to know what parts are in demand and how to fake them is profiting illegally and immorally from counterfeiting of this kind. Although it happens all over the world, including the United States, the fact that a huge part of all electronics manufacturing is done in Asia means that many counterfeiters also hail from the East. Ironically, a friend of mine who is a native of Hong Kong characterizes the engineering environment in China in recent years as "the wild wild West," associating it with California gold rushes, wide-open cities, and general hell-raising. This anything-goes atmosphere encourages fly-by-night counterfeiting operations and worse. Although China has anti-counterfeiting laws on the books and stages highly publicized raids on counterfeiters from time to time, the sheer volume of fake goods produced means that most fakers never get caught.
If there weren't so many fakers in the first place, things would improve on their own. What if more engineers in China joined professional organizations with a strong commitment to ethical behavior? The Chinese government is suspicious of any organization that is not tightly under its control, but it would certainly have no objection to professional organizations that oblige their members not to engage in counterfeiting.
By many measures, the economies in China, India, Malaysia, Singapore, and elsewhere in Asia are still maturing. In the 1800s, when the British Empire's economy vastly overshadowed that of the United States, it was very common for unethical U. S. publishers to print unauthorized editions of British authors' works. Eventually, an international copyright agreement was hammered out, and as more U. S. publishers agreed to pay royalties to British authors, British publishers did the same for U. S. authors, and the marketplace became more efficient overall. Something like this may take place in Asia, but first, as in the United States, the professional culture will have to change.
Counterfeiting electronics, like counterfeiting money, is an act that benefits the counterfeiter substantially (for a while, anyway) while spreading harm randomly and diffusely everywhere else. There will always be some criminals, but wherever there are enough professionals to band together to take common action and to declare themselves committed to upholding the highest principles of their profession, they can bring about a change in their culture. And this is something no amount of law enforcement can do.
Sources: The IEEE Spectrum article "Bogus" is at http://www.spectrum.ieee.org/may06/3423.
What is electronics counterfeiting? Anyone who has strolled through a crowded street-level market in New York City has had the chance to buy things like "Rolix" watches and maybe even "Ipods" (not "iPods"). This kind of counterfeiting, where someone makes a cheap imitation of an expensive product and labels it with an almost-like name, is pretty easy to spot and avoid. But it is only the tip of a huge iceberg that costs legitimate manufacturers up to $100 billion a year in lost revenue, according to some estimates.
Most of the counterfeiting goes on far out of sight of consumers, among the thousands of manufacturers, suppliers, and parts brokers who provide the components for both consumer items and industrial electronics systems. Electronics supply chains are increasingly global, and increasingly use the Internet as a marketing and communications tool. The problem with global Internet-based supply chains is that purchasers and suppliers rarely meet face-to-face. This makes it easy for an unethical engineering firm to set up as a legitimate manufacturer and resell used ICs salvaged from old computers as new parts, for example. Another ploy is to relabel cheap, poorly performing parts as expensive better-performing ones. The manufacturer who trusts the part's label and builds a bogus two-dollar IC into a five-hundred-dollar motherboard, which thereupon fails, has got a huge financial headache on his hands. And even worse, the part can perform just well enough to leave the factory, only to fail when it gets to the consumer.
A recent article in IEEE Spectrum Magazine by Michael Pecht and Sanjay Tiku describes some of the ways manufacturers can guard against these problems. One obvious way would be to test parts as they arrive. Years ago, this practice was not uncommon, but it is costly, and the recent trend has been to move component testing away from the user and toward the supplier. But this requires a level of trust between supplier and user that some suppliers obviously don't deserve.
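A back-of-the-envelope calculation shows why incoming testing is such an imperfect shield. The sketch below uses the standard acceptance-sampling estimate, P(detect) = 1 - (1 - p)^n, where p is the fraction of counterfeit parts in a lot and n is the sample size; all the numbers are invented for illustration and do not come from the Spectrum article.

```python
# Hypothetical sketch of incoming-inspection sampling. The formula is the
# textbook acceptance-sampling estimate; the lot sizes, counterfeit rates,
# and sample size below are illustrative assumptions.

def detection_probability(p: float, n: int) -> float:
    """Probability that testing n randomly chosen parts catches at least
    one counterfeit, if a fraction p of the lot is counterfeit."""
    return 1.0 - (1.0 - p) ** n

# A badly tainted lot (10% fake) is very likely caught by a 30-part sample...
print(round(detection_probability(0.10, 30), 3))   # about 0.96

# ...but a lightly tainted lot (0.5% fake) usually slips through the same
# sample, which is why testing alone can't replace a trustworthy supplier.
print(round(detection_probability(0.005, 30), 3))  # about 0.14
```

The asymmetry is the point: sampling catches gross fraud cheaply, but catching a low counterfeit rate requires testing so many parts that the cost pushes manufacturers back toward trusting the supplier's label.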
If the supply chain consisted of just two links, a manufacturer might be able to vet each supplier thoroughly and establish trustworthiness that way. But take the example of a criminally incompetent supplier a few years ago, who stole a formula for the electrolyte used in electrolytic capacitors, a very common type of cheap electronic component. He got the formula wrong, but mixed up a batch anyway and sold it to some capacitor manufacturers. They used it to make their capacitors, sold the capacitors to a board-making company, which in turn sold the boards to computer makers. Some time later, the bad electrolyte began to fail and ruined hundreds, if not thousands, of computers. There were at least five links in this defective supply chain, not counting middlemen, and the only problem was at the head of the chain, where it was hard to detect. The harm in this case was a flurry of failed computers, but suppose a bad capacitor went into a heart pacemaker? The harm that counterfeit parts cause isn't only financial. Reputations can be ruined and people can die. But connecting the dots to find out who was responsible is often an impossible task.
Counterfeiting electronics is an obvious case of unethical engineering. Someone with enough technical expertise to know what parts are in demand and how to fake them is profiting illegally and immorally from counterfeiting of this kind. Although it happens all over the world, including the United States, the fact that a huge part of all electronics manufacturing is done in Asia means that many counterfeiters also hail from the East. Ironically, a friend of mine who is a native of Hong Kong characterizes the engineering environment in China in recent years as "the wild wild West," associating it with California gold rushes, wide-open cities, and general hell-raising. This anything-goes atmosphere encourages fly-by-night counterfeiting operations and worse. Although China has anti-counterfeiting laws on the books and stages highly publicized raids on counterfeiters from time to time, the sheer volume of fake goods produced means that most fakers never get caught.
If there weren't so many fakers in the first place, things would improve on their own. What if more engineers in China joined professional organizations with a strong commitment to ethical behavior? The Chinese government is suspicious of any organization that is not tightly under its control, but it would certainly have no objection to professional organizations that oblige their members not to engage in counterfeiting.
By many measures, the economies in China, India, Malaysia, Singapore, and elsewhere in Asia are still maturing. In the 1800s, when the British Empire's economy vastly overshadowed that of the United States, it was very common for unethical U. S. publishers to print unauthorized editions of British authors' works. Eventually, an international copyright agreement was hammered out, and as more U. S. publishers agreed to pay copyright to British authors, British publishers did the same for U. S. authors, and the marketplace became more efficient overall. Something like this may take place in Asia, but first, as in the United States, the professional culture will have to change.
Counterfeiting electronics, like counterfeiting money, is an act that benefits the counterfeiter substantially (for a while, anyway) while spreading harm randomly and diffusely everywhere else. There will always be some criminals, but wherever there are enough professionals to band together to take common action and to declare themselves committed to upholding the highest principles of their profession, they can bring about a change in their culture. And this is something no amount of law enforcement can do.
Sources: The IEEE Spectrum article "Bogus" is at http://www.spectrum.ieee.org/may06/3423.
Thursday, July 06, 2006
Willie Nelson, Environmental Engineer
The last time I drove from San Marcos up to Fort Worth on Interstate 35, I passed a billboard that bore the grizzled visage of Willie Nelson, the living legend of country music. But instead of advertising his latest album, this billboard urged me to go "BioWillie." Mr. Nelson, it turns out, is using his popularity among truckers to promote biodiesel, a type of diesel fuel made partly from animal and vegetable fats as well as ordinary petroleum. A recent New York Times article said that he drives cars and runs tractors on his farm that have been modified to operate on 100% renewable oil, which according to some reports makes the exhaust smell like French fries. So far, biodiesel is available at only a few truck stops, mostly in Texas, but the entertainer has high hopes that his environmentally friendly fuel will become at least as popular as his music.
Does this make Willie Nelson an environmental engineer? I'm not sure he can even spell "methyl ester," much less synthesize it from the used restaurant frying oil that forms much of the raw stock that his refinery uses to make the stuff. But his interest in biodiesel and his clever promotion of the fuel to a market of likely users shows the kind of imagination and initiative that characterizes good engineers.
For that matter, the definition of a good engineer has been changing. It used to be the case in my grandfather's day that technical ability was the only thing expected of engineers. Before the dawn of the computer age, designs of any complexity, from a bridge to a telephone network, needed lengthy, tedious calculations combined with the kind of judgment learned only from experience. But today, technical expertise surpassing even the best of the earlier engineers has been canned into computer software packages. It requires a different kind of genius to use these packages, but the need to spend time on all the nitty-gritty details is less than it used to be.
In many fields, engineers have been freed by these changes to consider other matters beyond the strictly technical features of a project. These include safety concerns, marketing and cost factors, manufacturing problems, and environmental issues. Not that the earlier engineers ignored these factors altogether. But back then simply getting a design to work took so much effort that the other things didn't receive as much attention as they could have.
Biodiesel is a good example of a product whose appeal derives from the simple fact that it is made or grown in an environmentally friendly way, even if it costs more and doesn't perform much better than a competing product. These so-called "soft" issues can actually be harder to deal with than the "hard" technical questions, which nowadays can often be settled in a few computer runs rather than having to build prototype after prototype until the right combination of design factors falls together. And the soft issues are where engineering ethics comes in.
Take for example the thing that is making biodiesel and other bio-derived fuels such as ethanol (made from corn) so attractive: the relatively high price of oil, seventy-five dollars a barrel at this writing. There is one school of economic thought that favors minimal interference in markets, from the convenience store down the street to the global market for oil or any other commodity. If oil becomes too expensive, they say, people will scout around for other ways to get from A to B: a hybrid car, biodiesel, hydrogen, or even a bicycle. In the meantime, such meddlesome practices as higher fuel taxes to force drivers to conserve are counterproductive. When the price of oil gets high enough, the chance to make money with alternative fuels will attract inventors, engineers, and entrepreneurs like Willie Nelson, and in the meantime, we should leave things alone.
That argument is fine as far as it goes, but the trouble is, sometimes it doesn't go far enough. Simple free-market analyses often leave out what are called "externalities." These are things like air pollution, global warming, and other effects that result from the use of a certain commodity, but are not easily expressed in terms of the commodity's market cost. In an insightful article in IEEE Technology and Society Magazine, regional planning expert Clint Andrews showed what happens if you look at global energy costs in recent history and include the externality of military expenditures.
Andrews supposes for the sake of argument that concerns over energy security account for half of the reasons the U. S. went to war in Iraq in 2003. If the annual cost of the war is estimated at $40 billion, half of that figure is $20 billion a year. Andrews points out that $20 billion is also about what the U. S. spends on imported Persian Gulf oil annually. So if we include only half of a modest estimate of what we spend on the Iraq war as an externality of our oil supply and "internalize" it, we really spend $40 billion a year on that oil, not $20 billion. And of course this neglects the cost in human lives, which is—or should be—incalculable.
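Andrews' bookkeeping reduces to a one-line calculation. The sketch below simply replays the figures quoted above; the function name and the idea of parameterizing the attributed share are mine, not his.

```python
# Illustrative restatement of Andrews' externality arithmetic.
# Figures ($20B/yr oil imports, $40B/yr war cost, 50% attribution)
# are the ones quoted in the post; everything else is invented.

def internalized_oil_cost(direct_spend_billion: float,
                          externality_billion: float,
                          share_attributed: float) -> float:
    """Direct annual spending on imported oil, plus the share of an
    external cost (here, war spending) attributed to securing that oil."""
    return direct_spend_billion + share_attributed * externality_billion

# Internalizing half the war cost doubles the apparent cost of the oil:
print(internalized_oil_cost(20.0, 40.0, 0.5))  # 40.0 (billion dollars/yr)

# A pure free-market analysis implicitly sets the attributed share to zero:
print(internalized_oil_cost(20.0, 40.0, 0.0))  # 20.0
```

The interesting variable is the attribution share: even a modest nonzero value changes the comparison between petroleum and alternatives like biodiesel, which carry no comparable military externality.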
Andrews concludes that while a reasonably free market is a necessary condition to good energy policies, it isn't sufficient. When you include externalities such as wars and other government interventions in energy markets (the billions of dollars in state and federal highway taxes are another example), we are very far from the ideal free market envisioned by libertarians.
An ethical engineer will not simply sell technical services to the highest bidder, but will also think about the far-reaching effects of each project or job. That's exactly what Willie Nelson is doing with his French-fry-smelling tractors and BioWillie billboards. May all engineers do the same.
Sources: Willie Nelson's activities in biodiesel were described in an article by Eric O'Keefe in the New York Times on July 5, 2006 at http://www.nytimes.com/2006/07/05/business/05biowillie.html. Mr. Nelson's website describing his project is at http://www.wnbiodiesel.com. Clinton Andrews' article "Energy security as a rationale for government action" was in the Summer 2005 issue of IEEE Technology and Society Magazine, available through many university libraries and at www.ieeessit.org.
Tuesday, June 27, 2006
Discovery Launch: Hopes, Prayers, and Engineering Judgment
This morning, Tuesday, June 27, 2006, four days and some hours before the scheduled launch of NASA's Space Shuttle Discovery, the director of engineering at the Johnson Space Center, Charlie Camarda, was removed from the mission's management team. The Houston Chronicle reports that this reassignment, which Camarda says was against his will, took place after Camarda sent an email to colleagues supporting them for expressing their "dissenting opinions and your exceptions/constraints for flight." Ten days ago, in the June 17 flight readiness review meeting, NASA's head safety official Bryan O'Connor and Christopher Scolese, NASA's chief engineer, voted not to launch. Despite their opposition, NASA managers decided to proceed with the scheduled flight anyway. According to comments the two made after the meeting, their concerns were more that Discovery might suffer irreparable damage during the launch, not that the crew of seven astronauts was in more than the usual danger involved in a ride into space. Nevertheless, it's very clear from these and other reports that NASA is far from one big happy family these days.
Camarda's dismissal may have more to do with internal NASA politics than with shuttle safety. But the two cannot be separated. NASA maintains the shuttles, trains the astronauts, and decides when and how often to fly the remaining three orbiters: Atlantis, Discovery, and Endeavour. NASA head Michael Griffin has gone on record as saying that if Discovery is seriously damaged by pieces of insulating foam—the same problem that doomed Columbia in 2003—he would consider shutting down the entire shuttle program. That policy no doubt influenced the votes of O'Connor and Scolese, who feel that engineering modifications to foam on a number of support brackets should be made to prevent irreparable damage to Discovery's vital heat shield. Everyone agrees that if the kind of damage sustained by Columbia occurs, and is discovered in orbit, and can't be repaired, then the astronauts can take refuge in the International Space Station until a rescue flight can be arranged with one of the two remaining shuttles. This despite the fact that the Station has lately had trouble accommodating even two or three residents at a time. But being uncomfortable and cramped in weightlessness for a few weeks is better than a fiery death. You haven't seen a lot of news items about billionaires paying for rides into space lately, have you? Maybe there's a reason.
In my Mar. 21, 2006 blog, "Retire the Space Shuttle Now," I stated a number of good reasons that we should go straight to the next model of space orbiter without risking any more people's lives in antiquated, patched-up shuttles that deserve an honored place in the Smithsonian, not reuse in space long after their design lifetimes. The recent news out of NASA has only increased my concern that yet another known problem that we haven't heard about in public, but which the engineers are all too familiar with, will reach out and cause another hair-raising space adventure like Apollo 13's near-disaster, if not worse.
Unfortunately, the shuttle program has achieved canonical status in the engineering ethics literature for a couple of reasons. One is that NASA, being a public agency, is unusually open about its internal processes and debates, which means that records of data and decisions are easy to obtain. The second is that both the Challenger and Columbia disasters were caused by known problems that were technically fairly well understood. The failures were not mysterious scientific puzzles; they were failures in management decision-making.
In most well-run organizations, the chief safety officer is king in his or her limited domain. In an oil refinery, for instance, if the president and owner of the plant walks into a hazardous area and attempts to light a cigar, the lowliest safety official present is entirely within his rights to do anything necessary to prevent it, including knocking the president down. On June 17, we witnessed the spectacle of not only NASA's chief safety officer, but its chief engineer as well, saying that for reasons of property protection, the launch should not proceed—and they were overruled. And Charles Camarda, an engineer who himself flew on the 2005 Discovery flight, the first one after the Columbia disaster, has just been sacked from his mission responsibilities for commending the way some of his underlings spoke out at the flight review. It is not a pretty picture.
In Greek mythology, a young woman named Cassandra had the misfortune to attract the eye of the god Apollo. In an attempt to put himself in her good graces, he gave her the gift of prophecy. But when she refused his advances, he ran up against the rule that says what the gods giveth, the gods can't taketh away. He couldn't keep her from being a prophet, but he could spoil it another way: he made sure that whatever Cassandra prophesied in the way of dire forecasts would not be believed by anybody else. So when she ran around in Troy saying, "You'll be sorry if you bring that big wooden horse in here," she warned the Trojans in vain, the Greeks popped out anyway, and Troy fell. This made Cassandra wish she had never seen Apollo in the first place. Since then her name has passed into the language to mean one whose accurate foretellings of disaster are ignored.
I don't want to be a NASA Cassandra. I have no illusions that one blogger, or even an entire Greek chorus of bloggers, will influence NASA's decision-making process. My hopes and my prayers are that STS-121 will go smoothly, with no headlines other than the routine ones. But we face three possible outcomes on this trip: a routine flight with no significant problems, a flight in which Discovery is damaged enough to scuttle the remaining Shuttle fleet, or a more serious problem that endangers life. May God grant that the third possibility doesn't happen. But I'm going to leave it up to Him as to which of the other two takes place.
Sources: For Camarda's reassignment, see the Houston Chronicle at http://www.chron.com/disp/story.mpl/front/4004817.html. For Camarda's comments on NASA's changed culture, see the 2004 interview at the NASA website
http://www.nasa.gov/vision/space/preparingtravel/rtf_interview_camarda_04.html. For a report on the June 17 meeting, see
http://news.yahoo.com/s/space/20060620/sc_space/nasaschiefengineersafetyofficerweighinonsts121launchdecision.
Tuesday, June 20, 2006
Hunting the Cyber Predator
The scene: a ballroom in a fancy hotel in Denver, Colorado. The room is crammed with teenagers of both sexes, as well as a preponderance of young men in their twenties, from all across the U. S. and from many foreign countries as well. Each person wears a mask and a costume that completely disguises identity. What brought them here? In malls and shopping centers all across the nation, attractive advertisements enticed these young people to a free party. To respond to an ad, you entered a small office where you encountered a man wearing a blindfold. The man asked you a few not particularly personal questions about yourself, and handed you a free round-trip airline ticket to Colorado. Some of the younger teens told their parents what they were up to, but many of them neglected that little detail.
The episode above is fiction. It sounds like the beginning of a bad suspense novel, bad because of its unbelievability. Any outfit making such an offer would risk kidnapping charges or worse. But if you substitute the Internet for the free airline tickets, and the elementary requirements for entering such social-networking sites as MySpace.com for the interview with the blindfolded man, you have a fairly good approximation of what goes on online every day, twenty-four hours a day. And while the vast majority of social encounters on these sites do no harm, there are enough folks out there trying to abuse the system for purposes of sex or child pornography to keep the Texas attorney general's Cyber Crimes Unit busy. That office recently marked the third anniversary of its founding in 2003 with the arrest of its 80th alleged cyber predator.
Although many social networking websites have minimum age limits and warnings against putting too much personal or identifying information online, these restrictions are easy to evade, for either innocent or sinister reasons. For example, MySpace.com has a section on "Safety Tips" in which they warn users to "avoid meeting people in person whom you do not fully know." What "fully" knowing somebody means is left up to the user to decide. You are warned that "if you lie about your age, MySpace will delete your profile," which fails to explain how MySpace is going to find out how old you really are in the first place. Texas Attorney General Greg Abbott has called for social-networking websites to require a credit card number from users, which would at least ensure the involvement of someone over seventeen years of age. But so far the sites have resisted this proposal.
None of the good things the Internet has brought us—and none of the bad things, either—could have come about without the vision and labor of many thousands of software engineers and others who came up with the idea and who manage to keep the whole unruly thing going. It is a truism of the history of technology that people will use—and abuse—new technologies in ways that the designers never thought of. As it has become easier for more people without technical backgrounds to put personal information about themselves online, including photos and up-to-date identifying data, the dangers of letting the whole world see your virtual persona have increased as well. No responsible parent would let their ten-year-old daughter wander around alone in an unfamiliar city. But there are some children that age who can run cybercircles around older adults and do things that we literally can't imagine, because we older folks are unfamiliar with that world.
Where does the responsibility for protecting children from Internet predators lie? For the most part, not with the children themselves. Both in law and in fact, even children who can write C++ code at the age of ten are still emotionally immature, and can't be expected to follow all the "safety tips" a well-meaning site manager posts. Parents are the next logical choice. But parents find it hard to be in the room watching every last second that Jack or Jill spends online, even if they want to. That leaves the operators of the social-networking sites themselves on the front lines.
No doubt there are some security measures already in operation that are invisible to the user. But if the attorney general of only one state has been able to catch eighty suspected cyber predators in three years from a dead start, you know there are lots more out there to be caught. Clearly, whatever measures are already in place at the sites are not foolproof, nor should they be. But it seems that the looseness and open-ended nature of these sites, while encouraging people to meet new friends, leaves children wide open to becoming victims of a sufficiently ingenious and dedicated predator.
Some feel that since software got us into this problem, software can help us solve it too. Increasingly sophisticated automatic systems for detecting pornographic content (both text and visual forms) are being used here and there. But that is only part of the problem. To make sure no one under age uses these sites, something like the credit-card-number idea needs to be implemented. People with criminal records for child molestation should be positively identified and blocked from such sites. And while it is a challenge to come up with a system that would sense when a potential predator is "pumping" a victim for identifying information, equally sophisticated systems now routinely develop elaborate and finely graduated profiles of our tastes in books, food, entertainment, and other online purchases. If software engineers devoted a fraction of the energy to the problem of cyber predators that they have expended on figuring out exactly what we want to buy, maybe the Cyber Crimes Unit in Austin would eventually have to look for other kinds of criminals to catch. For example, there's that Nigerian princess who hasn't gotten back to me lately . . . .
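To make the "pumping"-detection idea concrete, here is a toy sketch of the kind of filter imagined above. It is not any real site's system; the patterns and threshold are invented for illustration, and a production system would need far more than keyword matching.

```python
import re

# Hypothetical patterns suggesting a request for identifying information.
RISK_PATTERNS = [
    r"\bwhat school\b",
    r"\bhome address\b",
    r"\bwhere do you live\b",
    r"\bhome alone\b",
    r"\bhow old are you\b",
]

def risk_score(message: str) -> int:
    """Count how many risk patterns a single message matches."""
    text = message.lower()
    return sum(1 for pattern in RISK_PATTERNS if re.search(pattern, text))

def flag_conversation(messages, threshold=2):
    """Flag a conversation once its cumulative risk score reaches the threshold."""
    return sum(risk_score(m) for m in messages) >= threshold

chat = ["hey, how old are you?", "cool. what school do you go to?"]
print(flag_conversation(chat))  # True: two risky questions in one conversation
```

The point of the sketch is only that the same pattern-matching and scoring machinery already used for marketing profiles could, in principle, watch for predatory conversational patterns instead.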
Sources: The Texas Attorney General's announcement "Texas Attorney General Greg Abbott’s Cyber Crimes Unit Marks 3-year Anniversary With 80th Arrest" is at http://www.oag.state.tx.us/oagNews/release.php?id=1573. MySpace.com's list of safety tips is at http://collect.myspace.com/misc/safetytips.html?z=1.
Tuesday, June 13, 2006
Engineering the Perfect Baby
Most engineering societies publish codes of ethics, and most of these codes say something about the health and welfare of the public. My own professional society, the IEEE, has over 300,000 members involved in electrotechnology of all kinds, including the ultrasound machines that produce images of unborn babies. The IEEE code of ethics says among other things that its members agree "to accept responsibility in making decisions consistent with the safety, health and welfare of the public" and "to treat fairly all persons regardless of such factors as race, religion, gender, disability, age, or national origin." Many people—and several U. S. states—include unborn babies in the category of "persons," even if they are found to have disabilities.
Before the advent of ultrasonic medical imaging, amniocentesis testing, and other prenatal diagnostic techniques, the mother's womb was a mysterious and inviolable sanctum. But now, due largely to the efforts of biomedical engineers and scientists, we can monitor heart rates and blood chemistry, and even perform surgery on babies who have several months to go before their regularly scheduled arrival. We can also discern defects such as clubfeet, extra digits, webbed fingers, and cleft palates. None of these defects are life-threatening, but they mar the ideal image that all parents want of a "perfect baby."
In her June 7 Orlando Sentinel column, Kathleen Parker deplored the cases of several British parents who had aborted their babies precisely because they had one of the defects I just mentioned. The numbers were not large—twenty or so with clubfeet, four with hand problems, one with a cleft palate—but numbers are not always the most important thing. While we have no comparable data for the U. S., our larger population and access to largely unrestricted abortion probably mean that even more abortions in this country are performed for comparable reasons. In India, it is well known that many abortions take place simply because the unborn baby is female. And this fact is usually disclosed by an ultrasound imaging machine.
I am not about to issue a blanket condemnation of prenatal diagnostic technology. It is a classic case of the two-edged sword. Some anti-abortion groups have found that one of the most effective ways they can persuade a potential mother to carry her baby to term is to show her an ultrasound image of the live, kicking infant inside. And until recently, the universal unstated purpose of medical technology was to save lives and preserve health, abortion and euthanasia notwithstanding.
But if you consider unborn babies persons and members of the public, in these cases technology is a hazard to their health, safety, and welfare. And even more obviously, technology is being used to discriminate against (that is, kill) those with disabilities or those who happen to be the wrong gender, at an early age when they are most defenseless. Such use of technology clearly violates the IEEE Code of Ethics.
Well, you say, technology is neutral, and the person who designs equipment can't always predict how people will use or misuse it. As I have mentioned elsewhere, the "technology is neutral" argument is a shaky one, especially in the case of technologies designed explicitly to harm people. As for predicting how technology will be used, engineers are responsible for making sure that when a new technology is introduced, they have taken reasonable safety precautions in terms of warning labels, training in safe procedures, and so on. But when an unsafe condition arises in use, it seems to me that turning a blind eye to the situation is irresponsible.
I don't like to talk philosophy in this blog ordinarily, but in this case it's unavoidable. There are scientists and engineers today who take the view that the human being is essentially no different from a computer, and an early-stage primitive computer at that. I'm thinking of such "posthumanists" as Ray Kurzweil and Hans Moravec, who see humanity as just a crude sketch of what we are now obliged to improve upon using genetic engineering, robotics, and artificial intelligence. One way to approach this improvement process is to throw away defective units, which is the approach the British parents of defective infants used. This reminds me of the early days of transistor manufacturing, when the chemistry and physics of semiconductors were poorly understood. The factory would do its best to make a batch of a hundred transistors, and then workers would sort through them one by one to find the ten or twenty that worked acceptably, and throw the rest away. But people aren't transistors, or computers, or machines. They're people.
Kathleen Parker began her column with a poetic quotation from the famous—and clubfooted—Lord Byron, who wouldn't have made it out of the womb if he had been conceived by a British couple equipped with an ultrasound machine and a false ideal of bodily perfection. People with minor or major bodily defects, and yes, even mental defects, who went on to achieve incredible feats of human endeavor are among the most encouraging examples of what it means to be human. Paradoxically, you will find in many of the biographies of the great, from Homer the blind poet down to Lance Armstrong the cancer survivor, some great physical challenge that forced them to develop the kind of character that can overcome great obstacles.
It is time to divide the medical wheat from the chaff. Given a human life, the job medical science and technology should tackle is how to help that human life overcome problems and difficulties with a reasonable use of limited resources. That is the wheat. But any technology or procedure that is used to end a defenseless human life because others decide that for whatever reason—status, economics, politics—it is not worth living, is chaff. And the sooner the chaff is gone with the wind, the better.
Sources: The IEEE Code of Ethics is at http://www.ieee.org/portal/pages/about/whatis/code.html. Kathleen Parker's article "Abortion's dead poets society" is at http://www.orlandosentinel.com/news/opinion/columnists/orl-parker07_106jun07,0,2091692.column. The Alan Guttmacher Institute study she mentions, "Reasons U. S. Women Have Abortions: Quantitative and Qualitative Perspectives," is at http://www.guttmacher.org/sections/abortion.php.
Sunday, June 04, 2006
Hurricane Katrina: Good News for Flood Control Engineering
Last August's Hurricane Katrina left well over a thousand people dead, most of New Orleans flooded, and many thousands homeless. You have to look long and hard to find any good news in the aftermath of the worst natural disaster to hit the United States in many decades. But ironically, one of the best things that may happen as a result is a badly needed top-to-bottom reorganization of coastal flood control work.
Engineer and author Henry Petroski likes to say that engineers learn a lot more from failure than they learn from success. You have to know a certain amount in order to succeed at all, of course. But if you are a young engineer and you just apply book learning to a project where everything goes smoothly, all that tells you is that the books were right. Failure is Nature's way of telling an engineer that the books didn't tell the whole story, and that the state of the art needs improving. Katrina overwhelmed a complex system of levees, dams, and canals that clearly wasn't up to the challenge. But now everybody concerned is motivated to find out what went wrong and how to fix it in a way that will prevent another Katrina disaster.
On June 1—the start of the 2006 hurricane season—the U. S. Army Corps of Engineers released a huge, detailed report on the failures that contributed to the New Orleans floods. More important than the details of the report is the fact that the Corps accepted full responsibility for the failure. The Corps and the Mississippi go back more than a century, to the days when many people doubted that the Big Muddy could ever be contained or controlled by the works of man. In "Life on the Mississippi," his memoir of his years as a riverboat pilot, Mark Twain reports on how bold engineers had just begun, in the 1880s, to erect levees and dams to channel the river's unceasing powerful currents. Despite Twain's generally optimistic attitude toward the modern age's advances in technology, he expressed considerable skepticism that the Corps of Engineers, or anyone else short of the Almighty, could make much of a difference in the way the Mississippi found its way to the sea.
In the intervening decades, the Corps found ways of doing just that. The South still saw severe floods from time to time: in 1927, the Mississippi inundated hundreds of square miles of Delta land, and in 1965 a hurricane caused serious flooding in New Orleans. And here we are in 2006, a year after another major flood-control disaster. It may not be entirely coincidental that these events are about a generation apart. A pattern Petroski has found over and over in the history of technology goes like this: In the early stages of a new technology, engineers tend to overdesign a system to make sure it doesn't get a bad reputation that would kill it off right away. But as more designs succeed, newer engineers on the job tend to become not exactly careless, but overconfident. It's easy to assume that because there haven't been any major problems so far, there aren't likely to be any in the future. This is when new circumstances or long-term failure mechanisms are most likely to cause trouble. What we may be seeing here is a pattern of disaster, followed by a few years of overcautious design, then reduced attention, less funding, and complacency, until a new generation of engineers who aren't old enough to remember the last big failure arrives just in time for the next one.
But there are other factors as well. A system of dams and levees protecting a certain land mass has one thing in common with power lines, high-voltage insulation, and chains. All it takes is one failure in one little place—one tree touching a sagging transmission line, one piece of insulation failing, one link breaking—and the whole system collapses. Enough water can—and did—flow through a twenty-foot breach in a dike to flood most of a city like New Orleans. Historically, the best way engineers have found to deal with such chain-like systems is to design and build them consistently, to uniform plans, and perform a rigorous and thorough quality-control inspection to make sure every single part of the system is up to snuff.
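The weakest-link arithmetic behind such chain-like systems can be made concrete with a small sketch. The segment count and reliability figures here are hypothetical, chosen only to show how quickly many "almost perfect" parts add up to a substantial chance of one failure somewhere:

```python
# A chain-like system survives only if every link holds, so its overall
# reliability is the product of the individual link reliabilities.
def system_reliability(segment_reliabilities):
    """Probability that every segment in a series system holds."""
    total = 1.0
    for p in segment_reliabilities:
        total *= p
    return total

# One hundred levee segments, each 99.9% reliable, still leave nearly
# a one-in-ten chance that the system breaches somewhere.
segments = [0.999] * 100
print(round(system_reliability(segments), 3))  # prints 0.905
```

This is why uniform construction and inspection of every segment matters: the overall number is dominated by the worst links, not the average ones.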
Unfortunately, it appears that the political structure of New Orleans at least partly militated against such a procedure. Although the U. S. Army Corps of Engineers had overall responsibility for the integrity of the flood-control system for New Orleans, there were also state and local authorities whose job it was to inspect and maintain parts of the system. I almost wrote, "critical parts," but in a system of dams, every single part is just as critical as every other part. In the nature of things, some parts of the system received better attention than others. But Katrina went for the weak spots regardless of politics, and the result filled New Orleans with filthy water and emptied it of people.
The good news I referred to above is that no one now needs convincing that the old way of doing flood-control business along the Mississippi, and especially in New Orleans, doesn't work. There were many technical problems with the levees, such as inadequate construction and a failure to account for the poor quality and subsidence of the soil. People are now discussing the construction of "fail-safe" levees that have secondary landfill areas behind them, but of course, that takes up valuable real estate. What should result from the sad images we saw of flooded New Orleans is a revitalized and chastened Corps that will coordinate with reorganized state and local authorities to do a good job next time. It will take money and political will, but the alternative is too fresh in our minds to allow them to do anything less—at least for the next thirty years.
Sources: The U. S. Army Corps of Engineers draft report released June 1, 2006 is currently online at https://ipet.wes.army.mil/. A personal recollection of the 1927 Mississippi floods is contained in the memoir Lanterns on the Levee: Recollections of a Planter's Son by William Alexander Percy, who was author Walker Percy's uncle.
Monday, May 29, 2006
Model Railroading: Coming to Your Town in a Big Way
A friend of mine is an avid model railroader. He has spent countless hours assembling intricate scale-model railroad cars and locomotives, constructing miles of model track, and attending meets where dozens of his fellow enthusiasts put together entire scale-model counties of rail routes through scenic landscapes and busy towns. The remote controls for these toys have grown increasingly sophisticated with time as well, all the way down to realistic engine noises produced digitally. The only people who may resent the time and energy spent on such a harmless hobby are the wives thus deprived of their husbands' time (or the husbands, if any women pursue this avocation, though I don't know of any). But a parallel development—the remote control of real railroad locomotives with no one on board—is stirring up a considerable controversy.
Since the decline of passenger rail transportation in the U. S. in the last half of the twentieth century, the U. S. rail system has faded into the background of public consciousness. But the freight operations that rail lines support have actually become more critical than ever to the country's economy. Nearly all the coal that fuels our coal-fired power plants (and that is about half of them) is carried by rail, as well as numerous other bulk materials such as gravel, cement, chemicals, and food products, not to mention imported merchandise, automobiles, and so on. Since very few additional rail lines are being built, the railroad industry is searching for ways to put more and more freight through a physically limited system. And one of these ways involves remote control of unmanned locomotives.
An article in the May 28 issue of the Austin American-Statesman describes how this works. An operator who has completed an 80-hour training course stands by a track on which a remote-control locomotive sits. Strapped to his chest is a box sprouting joysticks, crank knobs, and a stubby antenna, rather like an overgrown model-airplane radio-control unit. With this remote control system, the operator can perform most of the operations that the engineer in the cab can do, only without any engineer in the cab. If radio control is lost for any reason, the system automatically stops the train.
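The stop-on-signal-loss behavior described above is a classic watchdog-timer pattern: the onboard controller defaults to braking unless it keeps hearing valid commands from the operator's transmitter. The sketch below is purely illustrative (the class and method names are my own invention, not any vendor's actual protocol), but it shows the fail-safe logic in miniature:

```python
import time

# Toy watchdog: the locomotive applies the brakes unless it keeps
# hearing from the operator's radio unit. All names here are
# hypothetical illustration, not any real vendor's design.
class LocomotiveWatchdog:
    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s          # max radio silence before braking
        self.last_heard = time.monotonic()  # time of last valid command
        self.braking = False

    def on_command_received(self):
        """Called whenever a valid radio command arrives."""
        self.last_heard = time.monotonic()

    def tick(self):
        """Called periodically by the onboard control loop."""
        if time.monotonic() - self.last_heard > self.timeout_s:
            self.braking = True             # fail-safe: default to stopped
        return self.braking

wd = LocomotiveWatchdog(timeout_s=2.0)
wd.on_command_received()
assert wd.tick() is False   # link alive: keep running
wd.last_heard -= 3.0        # simulate 3 seconds of radio silence
assert wd.tick() is True    # link lost: brakes applied
```

The key design choice is that the dangerous state (moving) requires continuous positive evidence of control, so any failure, in the radio, the operator, or the software, degrades toward the safe state.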
Most of these systems are being used in switchyards, where the relatively short range of the radio transmitter is not a problem. But recently, some lines have been experimenting with using the system to send trains to nearby industrial sites for short hauls.
Safety is an obvious concern. If there is nobody in the cab, how can the operator stop the train if an obstruction unexpectedly shows up? Unfortunately, stopping a train is not an instantaneous act. Depending on speed and size, it can take up to a mile or more to stop a train even under emergency conditions. The engineers who designed the remote-control systems have presumably taken these factors into consideration, but as with many technologies, the way it is used has a lot to do with how safe it is.
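The mile-or-more figure follows from simple kinematics: stopping distance grows with the square of speed, d = v²/(2a). The deceleration value below is a rough assumption chosen for illustration, not a published specification for any particular train:

```python
# Back-of-the-envelope stopping distance: d = v^2 / (2 * a).
# The ~0.2 m/s^2 emergency deceleration is an illustrative
# assumption for a heavy freight train, not official data.
MPH_TO_MS = 0.44704

def stopping_distance_m(speed_mph, decel_ms2=0.2):
    v = speed_mph * MPH_TO_MS        # convert to meters per second
    return v ** 2 / (2 * decel_ms2)  # from v^2 = 2*a*d

# A heavy freight train at 60 mph under emergency braking:
print(round(stopping_distance_m(60)))  # roughly 1800 m, a bit over a mile
```

Note the quadratic dependence: halving the speed cuts the stopping distance by a factor of four, which is why yard speeds are kept so low.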
Railroads are one of the most highly unionized industries in America, and opinions among the unions about the new technology are divided. The Brotherhood of Locomotive Engineers' feelings about the matter are clear from their main website, which shows a tipped-over railway engine with the legend "Remote Control" plastered across it. Since a locomotive running without an engineer represents direct job loss, their concern is understandable. They are, in the colloquial phrase, "agin it," and have commissioned a report which criticizes wider adoption of the technology before better operating rules are put in place. Numerous attempts by the BLE to slow the technology through strikes or other means have been blocked by federal judges.
On the other hand, the United Transportation Union, which represents conductors and switchmen, has come out, after some waffling, in favor of limited use of the technology. The Federal Railroad Administration, for its part, has studied the issue and allowed limited experimentation as long as the operators (generally switchmen) have received an 80-hour training course. This annoys the railway engineers, who have to take a six-month-long course and pass tests to qualify for their jobs.
What about accidents? There have not been many serious accidents reported as yet, possibly because the technology is so new: a few derailments and three fatalities, but no large-scale accidents with multiple loss of life. It is not clear how far the rail lines wish to go with remote-control locomotives. It is easy to imagine a single model-railroad-style system the size of the U. S. with thousands of trains running completely under computer control. Even now, locomotive engineers are like airline pilots in that they do what centralized traffic-control operators tell them to via microwave radio links from a few control centers that continuously monitor train positions and movements. So replacing the engineers with "robotic" control would not be as great a change as you might think. What the people on the train supply now, of course, is eyes and ears and hands to do the great variety of things that computers and robots cannot yet do. Some of these things are related to safety and some are not.
So it will be some time before the average train you see trundling across a grade crossing while you wait in your car will be nothing but a pile of steel and cargo, bereft of any human presence. If the Brotherhood of Locomotive Engineers has its way, it will never happen. On the other hand, remote control may spread gradually until some big disaster occurs with a remotely-controlled locomotive, which might energize legislators to prohibit the practice altogether. In the meantime, you might visit the next model-railroaders meet in your town to see what the future of real railroading may be like.
Sources: The Federal Railroad Administration has a statement "Remote Control Locomotive Operations" at http://www.fra.dot.gov/us/content/94. The website http://www.labornotes.org/archives/2003/08/b.html has an article "Rail Workers Battle Unsafe Remote Control Technology" written by Ron Hume. The Brotherhood of Locomotive Engineers website has an article "BLET releases remote control hazard study" at http://www.ble.org/pr/news/newsflash.asp?id=4156.
Thursday, May 25, 2006
Engineering Laptop Data Security, or, 26.5 Million Veterans Can't Be Wrong
On Monday, May 22, we learned that some time in the preceding three weeks, a burglar broke into the house of a mid-level analyst in the Department of Veterans Affairs in Washington, D. C. Among the items missing the next day was the employee's laptop computer. That by itself is not news—laptops are stolen every day. But the thing that motivated Veterans Affairs Secretary Jim Nicholson to announce the theft to the news media was the fact that on that laptop's hard drive were the names, Social Security numbers, and other personal information belonging to over 26 million veterans.
It is not hard to imagine what someone with the scruples of a burglar could do with that information. We can only hope that the miscreant does not read the newspapers, watch TV news, or download iPod newsblogs, and that he fenced the machine to someone who will divest it of all identifying indications, including the hard drive data. But the very small chance that a very big problem will occur is still a very big problem. And since Social Security numbers last for the lifetime of their owners, the concern that one of those veterans will be a victim of identity theft may not go away unless the machine is recovered with the knowledge that the data wasn't copied. This happy eventuality is, to say the least, unlikely.
As it does in many other areas, the advance of technology has blurred the distinction between two groups of people who formerly had very different responsibilities. Back in the 1970s when it took a roomful of refrigerator-size tape drives to store twenty-seven million personal records, there were only a handful of people in any given organization who had the technical ability to manipulate or copy the information. The computer-science specialists who designed, operated, and maintained the systems were generally aware of their special responsibilities that came with the power to work with personal data. Besides which, a putative thief would have had to bring a small loading van along to steal such a large amount of data. Although data theft and identity theft have been a problem at some level since the earliest days of computers, the sheer bulk and awkwardness of large amounts of data, and the relatively scarce and highly secure computer rooms in which they were housed, meant that such a theft had to be carefully planned and executed like a bank or payroll heist. For the average non-technical user of such information, the most data handled at once was contained in a bulky folder of green-and-white-striped computer paper, which nobody wanted to carry out of the office anyway. So computer security was an issue mainly for those few specialists who dealt directly with mainframe computers, and the rest of us scarcely knew it existed.
No longer. Because of the democratization of technology we now enjoy, most laptops sold today with 100 GB hard drives can hold the digital equivalent of all the printed contents of a small town's public library. The size of digital storage has changed, but the responsibilities are still the same. Every person who is in charge of a laptop with sensitive information on it has the same moral obligations as those (now retired) computer operators in the glass-walled computer rooms of yore. But in these days of high-pressure work and high-speed internet connections at home, what is more natural than to throw the laptop in the car and finish that special project in the evening just this once, even though you seem to recall some office rule against taking work home? That is just what the anonymous Veterans Administration employee did, and now look what's happened.
There are technological fixes for this technological problem, of course. A ten-second Google search turns up companies such as Eracom Technologies, which offers a variety of data encryption methods for servers, desktops, and laptops. The idea is that the authorized user types in a special password, and for extra security plugs in a special module to enable the laptop to boot up. Once the computer is satisfied that it is being used by the right person, it acts just like a normal computer. But all the data on the hard drive is actually encrypted with advanced techniques and decrypted as needed. Were a thief to steal the unit, he or she would be unable to start the machine. Even if the hard drive were removed and copied, the result would be nonsense.
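The core idea, deriving the encryption key from a password so that a copied drive is gibberish without it, can be sketched in a few lines. This is a toy illustration only: real disk-encryption products use vetted ciphers such as AES, not the hash-based keystream below, and every name and value here is invented for the example:

```python
import hashlib

# Toy sketch of password-derived-key encryption. NOT real disk
# encryption -- real products use vetted ciphers like AES -- but it
# shows why a stolen, copied drive is useless without the password.
def derive_key(password, salt):
    # Deliberately slow key derivation makes password guessing costly.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_with_keystream(data, key):
    # Keystream built by hashing key + block counter (toy cipher only).
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

salt = b"per-disk-salt"
secret = b"name, SSN, date of birth"   # hypothetical record, not real data
ciphertext = xor_with_keystream(secret, derive_key("hunter2", salt))

# The right password recovers the data; a wrong one yields nonsense.
assert xor_with_keystream(ciphertext, derive_key("hunter2", salt)) == secret
assert xor_with_keystream(ciphertext, derive_key("guess", salt)) != secret
```

The salt ensures two disks encrypted with the same password still get different keys, and the slow derivation function is what makes brute-forcing the password impractical for a thief with only the hardware.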
Of course, Eracom doesn't give this technology away for free. I don't know what it costs, but it must be considerably less than the cost of a laptop, and they probably give quantity discounts for large organizations such as the U. S. Department of Veterans Affairs. But even advanced security technology like this can be thwarted if the user does something dumb, like writing the password on a note taped to the keyboard, or keeping the special unlocking module in the same bag with the computer. As an engineer told me recently, he tries to design systems that are foolproof, but doesn't bother to make them "damn-fool proof."
If a pattern of identity theft matching the stolen records does not emerge soon, our returning soldiers may not have to worry about the consequences of this particular laptop burglary. After all, they have seen and dealt with a lot bigger problems than this one. The rest of us, especially those who have any kind of sensitive data that we carry around in laptops, Blackberries, or data storage devices, should think twice before we take it out of a secure area. And ask what your organization does in case such data is stolen. If the answer isn't satisfactory, maybe someone should invest in a little added security. But all the data-security technology in the world cannot substitute for simply being careful.
Sources: An article describing the news conference at which Jim Nicholson revealed the laptop theft is at http://www.acm.org/serving/se/code.htm. Information on encrypting hard-drive data is available at such sites as http://www.eracom-tech.com/hard_disk_encryption.0.html.