Monday, February 27, 2012

Train Crash in Argentina Claims 51 Lives

Last Wednesday, an eight-car commuter train headed into a station named Once in Buenos Aires at the normal speed of 26 kilometers per hour (about 16 miles per hour). The track ahead dead-ended past the platform, so it was even more important than usual that the train slow down and stop at the right place. On a CCTV recording, you can see that the train simply keeps moving at the same speed until the lead car hits the buffers at the end of the track. A cloud of brownish dust flies up to obscure what becomes all too clear seconds later: the second car telescopes into the first car, killing a total of 51 people and injuring about 700—roughly half of all the people on the train. On Saturday, after relatives of a man who was not officially listed as missing kept up intense pressure on rescue workers, the man’s body—that of the 51st victim—was finally pulled from the wreckage, touching off riots in Buenos Aires.

Railway fatalities have nearly as long a history as railways. One of the first successful steam locomotives, George Stephenson’s Rocket, was responsible in 1830 for the death of William Huskisson, a well-known member of Parliament who failed to notice the approaching train until it was too late and fell on the tracks. Stephenson himself drove the train that carried the dying Huskisson away for treatment, but to no avail. Ironically, Huskisson’s death was so widely reported that it served to draw worldwide attention to the beneficial aspects of the new railway technology as well as its dangers.

An investigation will be needed before all the essential facts are known about the Argentine disaster. But the driver survived and told reporters that the brakes failed, and that he had tried to contact supervisors by radio earlier about brake problems but was told to “carry on.” Earlier accidents have happened in the same area, and poor maintenance has been cited as a contributory factor in the past. Recordings, inspection of equipment, and other evidence will either confirm or contradict the driver’s claims. But a lot can be learned already from the way the Buenos Aires commuter rail transportation system is operated.

Privately owned companies contract with the government for the privilege of operating the trains, but apparently they are restricted by regulations governing fares and fare increases. It is often useful in engineering-ethics situations to list the interested and affected parties in the case. In this situation the major players are: (1) the commuting public, which has only indirect control of the system through their elected representatives’ regulations, but has to use and pay for the system; (2) the railway workers, including operating and maintenance personnel; (3) the management and owners of the private firms that run the railway concessions; (4) the Argentine government agencies and officials who regulate, deal with, inspect, and investigate railways and incidents such as this accident; and (5) the general Argentine public which does not ride the Buenos Aires commuter-rail system, but which elects government officials, pays taxes, and shares in such national tragedies as railway accidents of this kind.

Worldwide, rail-based public transportation systems are rarely profitable on a strictly private-enterprise basis, despite the superficial attractiveness of monopoly status and large customer bases. Despite economies of scale, the personnel, maintenance, and upkeep costs of rail systems would price them out of the market they are intended to serve—namely, the poorer populace who can’t afford cars—if they charged enough both to make a reasonable profit and to reinvest sufficiently to allow for depreciation of equipment and so on. This is why, after a disastrous experiment with private ownership of subways in New York City in the early 20th century, the city government took over all of them and now operates the subways at a deficit made up from general tax revenues.

The Buenos Aires system went the other way a decade or so ago, when the national railway system was largely privatized. Faced with a situation in which raising fares is not an option, a private firm compelled to make a profit is going to cut costs by deferring maintenance and improvements. But you can defer them only so long before something dreadful happens, such as last week’s accident.

The particular question of what immediately caused this accident will very likely be cleared up in a matter of months, if not weeks. But that leaves the larger problem of how to make sure that accidents like this don’t happen again. For every major accident with a mechanical cause, there are usually dozens of near-miss incidents that serve as warnings to perceptive engineers and operators, unless their hands are tied by lack of resources. It looks like major changes will be needed in the economics and management of Argentine commuter rail lines if another such disaster is to be avoided.

Sources: I used several articles on the accident, specifically these: from the BBC news services at http://www.bbc.co.uk/news/world-latin-america-17169315 and http://www.bbc.co.uk/news/world-latin-america-17149814, Britain’s Daily Mirror at http://www.mirror.co.uk/news/world-news/buenos-aires-train-crash-violence-743309, and Yahoo News at http://news.yahoo.com/argentine-train-crash-kills-49-people-hurts-600-162519779.html.

Sunday, February 19, 2012

Half a Century After Glenn’s Flight, NASA Tries to Make Up Its Mind

In February of 1962, I was eight years old. That was plenty old enough to watch Walter Cronkite on CBS-TV narrate the countdown for astronaut John Glenn’s attempt to be the first American to orbit the earth. I say “attempt” because at the time, nobody knew for sure if it would work. When the rocket fired up and sailed safely into the sky, Cronkite dropped his objectivity enough to say, “Go, baby!” The world of space flight (and journalism, for that matter) would never be the same.

By that time, NASA had gotten on the one track to the moon, passing the logical waystations during the sixties: the one-man Mercury capsule, the two-man Gemini, and the three-man Apollo, which ultimately took us there. There was a breathtaking simplicity about the program, which belied the immense technical complexities of manned spaceflight. We were going to land men on the moon before the Russians did—it was that simple. Even an eight-year-old could understand that. And we succeeded.

Now I’m 58, John Glenn is 90, and NASA—well, I hate to admit it, but NASA is no longer the stripped-down, single-minded Cold-War-by-other-means fighting machine it was. Try explaining the current NASA budget to an eight-year-old, or a twenty-eight-year-old. Unless he has degrees in accounting and political science, he’s not likely to see much to be excited about.

There are two main initiatives at NASA these days concerning manned space flight (all of which, incidentally, is anathema to many scientists who would rather see dollars go to more efficient unmanned robotic flights). One initiative is called the Commercial Crew Program. This is aimed at developing not only crews, but an entire program that commercial firms design and build with NASA’s “guidance.” NASA has always had contractors—it has never been in the business of manufacturing major flight hardware without commercial help—but the Commercial Crew Program is intended to move the entire enterprise closer to a free-market model, somewhat like the airlines. Of course, the average profit margin of commercial airlines over several decades is about zero, so that may not be a good model. Add to that the fact that there are not a lot of customers for manned-space-flight services, other than the U. S. and some other governments, and you have a very strange economic proposition, to say the least. This has not kept lots of companies from flocking to NASA’s information sessions to see how they can get a piece of the pie, but when Congress cut the 2011 allocation for this project to only $400 million, the schedule stretched out, and it is not clear that NASA will get the $830 million it’s asking for in the present budget cycle.

One big reason for that is that the Obama administration is proposing an overall flat 2013 budget for NASA, which means the increase for the Commercial Crew Program might have to come out of the other big initiative for manned space flight, the Orion/Space Launch System. Orion (for short) is intended for deep-space missions, to asteroids or beyond. It has gone through several transformations, but clearly needs a lot of money (around $1 billion a year) to go anywhere anytime soon, which probably means ten to fifteen years. Orion is the logical extension of the quasi-religious feeling that man is destined to keep exploring farther and farther reaches of space. Its supporters include hard-core spaceniks and a lot of Congressmen and contractors (many in Texas) who want to keep NASA’s existing facilities busy and its employees employed. If all NASA did was contract out manned space flight to commercial firms, you could do that out of a couple of buildings in Washington, and what would we do with all those other labs and things?

I am sympathetic with people who do not want to lose their jobs. But I would also like to know that their jobs are worth doing, and will issue in some worthwhile goal achieved within the foreseeable future. The way NASA is thrashing around like a canvas bag full of fighting cats does not encourage the belief that we will see strong, clear, directed effort come from the agency or its contractors any time soon.

NASA was once a great organization, and achieved great things. It still has pockets of high-quality and unique talent that we should keep around in some form for reasons of national pride and capability. John Glenn was once a strong, brave, 40-year-old astronaut. And in 1998, at age 77, he became the oldest person to go into space, on a Space Shuttle flight. But even Glenn has wisely put space flight behind him, personally, and long ago passed the torch to younger people.

As some commentators have proposed recently, perhaps NASA in its present form has outlived its ability to achieve simple, clear goals, and has become such a battered political football that it would be easier to start over with two or three different agencies, each directed at a specific goal that you could explain to an eight-year-old. But the way things are going with political paralysis in Washington, the chances of this getting done are small.

Manned space flight is a novel activity in historical terms, deeply tied to technology, and I think it deserves to continue on some basis. It is so costly that turning the whole thing over to private hands amounts practically to giving up, so the government needs to be involved at some level. But trying to do too many things at once, especially when you’re older, is a recipe, if not for disaster, at least for a lot of wasted effort. And engineers hate to waste effort.

Sources: I consulted two articles on recent NASA activities in the Commercial Crew Program, one published by Aviation Week at http://www.aviationweek.com/aw/generic/story_channel.jsp?channel=space&id=news/awx/2012/02/14/awx_02_14_2012_p0-425174.xml&headline=Commercial%20Crew%20Push%20Has%20Some%20Concerned

and another at a website that promotes the space industry called www.spacefellowship.com: http://spacefellowship.com/news/art27725/commercial-crew-program-introduces-ccicap-initiative.html.

For those of you who never saw it, YouTube has a clip of the actual launch of John Glenn’s three-orbit flight on Feb. 22, 1962 at http://www.youtube.com/watch?v=whSYzSbJvsc.

Monday, February 13, 2012

Discovery Channel Seeks a Top Engineer

Writing a blog attracts many kinds of responses, some of which are more interesting than others. A few days ago I got an email from someone at an outfit called Pilgrim Productions. Turns out they are looking for cast members for a new reality show, and the reason they contacted me, I suppose, is that the reality show has the tentative title of “Top Engineer.” This fact filled me with mixed emotions, and while poetry is allegedly the best way to express mixed emotions, I will forgo any attempts at verse and try to say how I feel in ordinary prose.

Part of me is glad to see this. A few years ago the Institute of Electrical and Electronics Engineers, my 300,000-member professional society, sponsored a discussion of how to get a popular TV show going whose theme would be connected with engineering. This was back when most TV shows were still scripted, and so the ideas that came out were pretty feeble, along the lines of “My Three Sons,” only we follow Fred MacMurray to his engineering office instead of staying home. But now that reality shows are all the rage, I can easily picture some kind of build-it challenge carried out in a well-equipped design lab. The Discovery Channel has procured the cooperation of an outfit called WET, which makes fancy servo-controlled fountains for places like Dubai, so they probably have plenty of toys in their labs to do fun things with. That’s the good news.

The bad news is, I recently had a small personal experience with the way TV deals with intellectually challenging concepts, and I am not optimistic about how that aspect of engineering is going to fare on the small screen. And face it, engineering of any sophistication has to involve some intellectually challenging concepts. What happened was that I agreed to be interviewed for a TV show called “Weird or What?” Outside the U. S. it’s hosted by William Shatner, but some legal tussle or other prevents it from being shown in the fifty states. So sometime last fall, those of you reading this in certain English-speaking countries might have had the privilege of seeing yours truly talking about ball lightning, which is a current research topic of mine.

As is always the case, they taped far more of me than they used, and I expected that. I even rigged up a demo to show them that small burning spheres of liquid silicon look sort of like ball lightning; I made a joke on camera that was so funny the cameraman laughed; and I tried to be as serious and clear as possible when they asked me specific questions. The silicon didn’t fit into their narrative, and as for the joke, all I can figure is that the only person on that show who is allowed to be funny is William Shatner. And how funny he is, I will allow the unbiased viewer to decide.

What they used me for was one side of a conflict between wild-and-crazy theories of a certain incident on a Canadian island in the 1960s, and the “sober-scientist” view. I played the sober scientist, and they found some other folks with interesting backgrounds to propose the wild-and-crazy theories. And believe me, when I saw the final DVD of the show, I was halfway embarrassed even to be seen in the same segment with some of those people, even though I was presented as saying reasonable, scientifically-based things that countered their wackiness.

I shouldn’t have been surprised, though. TV has to have conflict, movement, and surprises, or its viewers fall asleep. Sometimes they fall asleep anyway, but the kind of thing I give to my students every day in the lecture room would not make good TV, unless you could give all the viewers a grade for comprehension at the end. And the sponsors wouldn’t like that.

Judging by some of the legalese on the Pilgrim Productions website and the casting call, they are not looking for your standard-issue behind-the-keyboard type of engineer, which, for better or worse, describes most engineers today. They want “visual effects experts, accomplished home shop machinists, contractors and engineers with backgrounds in electrical, civil, structural, or mechanical engineering.” And in all caps near the bottom we find this interesting section: “As part of your participation in and/or in connection with the program, you will engage in activities that may be considered dangerous, including without limitation activities involving electrical and hydraulic equipment, power tools and machinery, heavy objects, combustibles, and other potentially hazardous materials and fire.” I like that “and fire” at the end. Some legal intern probably put that in.

I can also tell you this. If you are physically unappealing, without being so ugly that it’s funny, you probably won’t get in either. For a TV show designed merely to entertain, that’s not so bad, but it’s too bad that TV is so heavily involved in the way we choose politicians today. If you look at a photo album of congressmen from the pre-TV era, you will notice that a good many of them, including some of the greatest ones, were simply not much to look at. In particular, Abraham Lincoln, whose birthday we celebrated yesterday, was described, not entirely inaccurately, as being as homely as a baboon. He never would have made it if they had had TV in 1860.

So I wish the best to the producers of “Top Engineer” and hope that the image of engineering which emerges from their labors bears at least some slight resemblance to what real engineers really do most of the time. If they do their job right, the show will be fun to watch, nobody will get killed (although it will look like someone might be), and maybe some young people watching will get the idea that engineering is fun as well as remunerative, and it certainly can be both. But be forewarned: most engineers aren’t that good-looking.

Sources: The best rundown on what Pilgrim Productions is looking for can be found at their website, http://pilgrimstudios.com/casting/topengineer/. If you’re interested, check it out soon because their deadline for submitting applications is March 7.

Monday, February 06, 2012

Bad Apple in China?

Apple Inc. currently enjoys one of the most positive consumer perceptions of any company in America. A New York Times poll last November revealed that more than half of those surveyed couldn’t think of anything negative about the firm, and when pressed, the worst thing they could say was that their products cost too much. So when the same paper came out with a long, carefully researched story about hazardous and onerous working conditions in China where Apple products are made, it was a little bit like reading that Santa Claus was hauled in for heroin possession.

Full disclosure: my wife and I have been Macintosh fans since the 1980s, and I bought her an iPad this last Christmas. But now that I’ve told her some of the things I read about how they are made, she may never view her iPad in the same light again.

First, the salient facts. Most consumer electronics products are made in China in factories that are Chinese-owned and operated. But the ties between Chinese manufacturers and U. S.-based firms like Apple are very close. When Apple chooses a new supplier, it asks all kinds of nosy questions about costs, facilities, numbers of workers, and so on, and allows only a small profit margin. Since 2005, Apple has also informed each supplier about its “Supplier Code of Conduct,” which reportedly requires adherence to basic standards of safety, worker rights, and other good things. And commendably, Apple actually conducts audits of its suppliers and has found and publicly reported many violations of the Code—so many, in fact, that some former Apple executives say the program is largely window-dressing, and Apple may not be that serious about enforcement. Apple says it will drop a supplier if too many violations are found, but in the case of a firm such as Foxconn, which makes about 40% of all the consumer electronics manufactured in China, alternative suppliers simply may not be there. So in some cases it’s a matter of either Foxconn or no (or fewer) iPads. And in the highly competitive and fast-paced world of consumer electronics, an entire generation of products can come and go in a few months. Supply delays can mean not just reduced profits, but complete failure.

How bad are conditions for workers in Chinese consumer-electronics factories? It depends. If you picked up a well-paid auto worker from his production line making Toyota pickups in San Antonio, say, and plopped him down in a Chinese plant where he was making less than $7,000 a year working ten- to twelve-hour days, five or six days a week, living in a dorm with nine other guys in a three-room apartment, and eating nothing but Chinese food—well, he’d scream bloody murder. On the other hand, if you were like Times-profiled worker Lai Xiaodong, taking the same job would seem at first glance to be a stroke of good fortune, because you likely grew up in a small farming community where city life in Chengdu looked like Heaven, even with the long hours and crowded living conditions (Mr. Lai could afford a single apartment, tiny as it was). Unfortunately, Mr. Lai was one of two workers killed in an apparent aluminum-dust explosion last May at a plant that makes iPad cases. That beautiful smooth-grained aluminum finish is not easy to make, and the plants where the cases are finished are potential firetraps. The firm where the explosion happened has since made safety improvements, but there are millions of other Chinese workers at hundreds of other plants where similar accidents may be just waiting to happen.

Back when most products sold in America were also made in America, you could sometimes buy an item made by someone you knew personally. But even in the 1800s, this was increasingly not the case: first raw materials, and later low-tech manufactured goods such as toys and clothing, began to be imported from abroad in large volumes. Geography textbooks from the 1930s showed photographs of supposedly happy natives carrying bushels of raw rubber so that Mr. Ford could sell more cars with rubber tires. The happiness of the natives was assumed, not verified, and in fact, exploitation of workers of all kinds has been a chronic problem ever since exchange economies came into being.

Apple may have to join the ranks of Nike and other firms who have squirmed in the spotlight of exposure when maltreatment of workers making their products became public knowledge. A new and positive trend in the retail economy is the practice of buying according to conscience rather than just price or performance. With commodities such as clothing or coffee, the fact that one supplier can guarantee its product was made by genuinely contented workers in safe, comfortable factories sometimes gives it the only edge it needs over a similar product with no such guarantee. However, Apple is nowhere close to that situation. There is literally nothing like an iPod or an iPad, at least for many consumers, and Apple wants to keep it that way. But part of the way it keeps it that way is by squeezing the last drop of fast, agile production out of its (largely Chinese) suppliers, and so you get clouds of aluminum dust and an explosion here and there.

We may be seeing part of what can be regarded as a normal maturing process for Apple Inc. It began as the small, impudent upstart against IBM, and played the underdog role for years. Underdogs don’t have time to get all self-conscious and introspective—they’re too busy fighting. But the underdog label no longer fits Apple, and these latest revelations are a kind of loss of innocence. We will still probably buy Apple products as long as they are good ones, but I sure hope Apple slows down enough to do the right thing by the workers who make its products. It would be a shame if it doesn’t.

Sources: The New York Times article “In China, Human Costs are Built Into an iPad” appeared on Jan. 25, 2012 at http://www.nytimes.com/2012/01/26/business/ieconomy-apples-ipad-and-the-human-costs-for-workers-in-china.html.

Sunday, January 29, 2012

Engineers, the Public, and Crime and Punishment

Fyodor Dostoevsky’s novel Crime and Punishment was completed in 1866, but even that long ago there were signs of the coming upheavals that would lead to the Russian Revolution of 1917 and the establishment of the USSR, a government founded on the principle that the coming future utopia of fulfilled Communism justified any amount of butchery in the present. This idea was presaged in miniature by Raskolnikov, the protagonist of the novel. A failed law student in whom noble idealism waged a constant struggle with depression and anger, Raskolnikov tried his hand at journalism and wrote an essay on the idea that humanity could be divided into two types: ordinary and extraordinary. For the vast majority of ordinary souls, obedience to law and custom was obligatory and kept the wheels of society turning. But for a few rare extraordinary individuals—Keplers, Newtons, or Napoleons—rules and morality itself were things to be overcome along the journey to break new ground for the ever-upward progression of history. Here is Raskolnikov explaining his idea to a friend:

"I believe that if circumstances prevented the discovery of a Kepler or a Newton from becoming known except through the sacrifice of a man’s life, or of ten, or of a hundred. . . Newton would have the right. . . to remove these ten men, or these hundred men, so he could make his discoveries known to all mankind."

His point is that if a person has something of great enough worth to give to mankind, its value to later generations is worth the sacrifice of a few lives, if killing a few people makes the gift possible.

Raskolnikov’s academic speculations turn to grim reality when he later finds himself actually carrying out the murder of an old pawnbroker woman and her sister. The rest of the novel is a brilliant exploration of Raskolnikov’s complex psychological turmoil as he struggles with the burden of his crime and what it means to himself and others.

The lessons of this novel should be borne in mind by engineers who participate in ambitious projects that propose to reshape the way people live. The 19th-century world that Dostoevsky lived in was just beginning to be changed by technological innovations such as the railroad, steam power, and the electromagnetic telegraph. Physics and chemistry transformed the world of the twentieth century, and technologists are now learning how to use biological knowledge to meddle with things that former generations viewed as immutably fixed by evolution—or God.

The recent debate about the use of embryonic stem cells in medical research turns on the questions of what people are for, and who counts as a person. Raskolnikov liked to reassure himself that the old woman he murdered was of no more value than a cockroach, and that he was doing the rest of humanity a favor by exterminating her. Those who advocate the destruction of frozen embryos for embryonic stem cell research must believe that the potential good, in the form of possible cures and treatments for presently incurable illnesses, outweighs any harm to the embryos, which are only potential human beings, after all. And some philosophers have been outdoing Raskolnikov’s essay by proposing that some mature animals may be of more intrinsic worth than some immature human beings: it is permissible to kill certain disabled infants, for example, according to some schools of thought.

Engineers are extraordinary people, in the statistical sense. Out of the world’s population of some six billion, perhaps 15 to 20 million could generously be classed as engineers. That is about a third of one percent. But from Raskolnikov down through the abominations committed by dictatorships of the last two hundred years, we have seen what can happen when we start to view some elite individuals as exempt from the usual laws, rules, and moral strictures that most of us obey. Surely we can allow some moral license to those men and women in the white coats who promise us such wondrous treatments, and eventually biological enhancements, at the price of a few frozen embryos whose fate was probably annihilation anyway, can we not? We can, but we may not like what happens to the elites who get used to flouting the rules, or what happens to us when the elites take advantage of their privileges.

Without spoiling the novel for those who haven’t read it, I will say that Raskolnikov comes to regret his willingness to put his academic theory into practice. Dostoevsky being Dostoevsky, it is a complex regret, full of ambivalence and shot through with seemingly good things that could happen if Raskolnikov conceals his crime and tries to live out his dream that he is indeed one of the few extraordinary souls for whom ordinary law is nullified. Dostoevsky, ever the Christian artist, portrays both the simple trust of believers who have never questioned God and the convoluted thoughts of Raskolnikov, who at some points confesses belief in the miracles of the Bible, but at other times talks like an atheist from Central Casting. While fiction cannot directly teach us to be better people, a thoughtful reading of Crime and Punishment will challenge you to think about the meaning of life, the purpose of love, and the values of will and judgment.

Sources: The quotation from Crime and Punishment is from the Sidney Monas translation (New York: New American Library, 1968), p. 257. The latest (Winter 2012) edition of The New Atlantis magazine carries a fine series of articles on the theme of “The Stem Cell Debates.” For more details about how some philosophers have valued mature animals over some immature humans, see the works of Peter Singer.

Monday, January 23, 2012

SOPA, PIPA, and the Wikipedia Blackout

As regular readers of this blog know, one of my favorite sources of online information is Wikipedia. While not perfect, this largely volunteer-maintained site is a generally reliable, up-to-date, and accurate source of many kinds of information. It is especially good for technical and scientific data where there is general consensus, and even in controversial areas it tends to be pretty even-handed. So imagine my surprise last Wednesday when I clicked onto Wikipedia for something and was greeted instead by a blacked-out screen for 24 hours and a plea for me (if I was a U. S. citizen, which many Wikipedians are not) to contact my congressional representatives to protest the consideration of SOPA and PIPA.

What are SOPA and PIPA? Legislative acronyms for the Stop Online Piracy Act and the (get ready for this one) Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act. That second title is a tortured backronym designed to spell out PROTECT IP, and PIPA comes from the informal name, which is just the Protect IP Act. SOPA is being considered by the House of Representatives and PIPA by the Senate.

Why is Wikipedia (and many other online service providers of various kinds) so upset about these proposed laws? According to the text on the blacked-out screen, the laws pose a potentially crippling threat to the freedom of information exchange. If they were passed in their present form and the Attorney General, or a civil court, or some bureaucrat somewhere, decided that Wikipedia was an internet search engine, then it would be Wikipedia’s responsibility not to link to certain nefarious websites, the list of which the government would apparently determine. (It is not clear to this non-lawyer exactly who would enforce the acts, but you are welcome to read the 10,000-word text of SOPA yourself and figure it out if you so choose.) And if Wikipedia (or any other website falling under the jurisdiction of the act) failed to do its court-mandated duty, the court would be free to impose penalties, probably in the nature of fines and/or injunctions to stop or start doing things.

Well, right there we have a problem. One of the better aspects of the Web is the way it has grown to its present stature largely without government aid or regulation. True, there are many problems, and illegal or immoral things go on, some of which we have discussed in this space. But overall, Internet commerce and Internet entities such as Wikipedia have behaved pretty well, and do no more than reflect society at large, which is made up mostly of fairly decent people with a few bad apples here and there.

In my limited, non-lawyer view, SOPA and PIPA would try to change that by putting a huge number of Internet entities under the watchful eye of the courts. It is probably not an exaggeration to say they might create for the Internet a regulatory regime not too different from what the Interstate Commerce Commission (ICC) was for interstate commerce, which kept all kinds of businesses ranging from bus companies to railroads and trucking firms under its thumb for decades. The difference is that the ICC came into being to curb genuine piratical abuses by monopolistic railroad companies, which shook down their customers (mostly farmers) shamelessly with exploitative and discriminatory rates. And when the deregulatory wave arrived a few decades ago, the ICC bit the dust, and with the much greater level of competition in interstate commerce that prevails today, nobody much misses it.

Nothing much like monopolistic exploitation is going on with the Internet organizations targeted by SOPA and PIPA, with the possible exception of Google. The proposed laws’ supporters claim that their only targets are the truly bad actors: the crooks who set up phony or phishing websites, those who sell pirated software, child pornographers using the Web, and so on. But from my (again) limited reading of the proposals, that judgment call is left strictly up to the lawyers enforcing the act. Once power is created, bureaucrats seem to develop an irresistible urge to use it, and so I have concluded that it would be a bad idea to pass SOPA and PIPA in their present form. And I made that opinion known to my Federal legislative representatives here in Texas.

So did several million other people, evidently, because a few days after Wikipedia and many other sites did their blackout bit, Congress announced that it was “indefinitely postponing” consideration of the bills. At the rate Congress gets truly important work done these days, that means you can forget about SOPA and PIPA unless you run out of other things to worry about first.

I am not a libertarian; appropriate legislation to curb some of the more blatant abuses found on the Internet is a good idea if it can be enforced without an undue burden on the service providers or the public using the services. Most law enforcement has to take a “good-enough” approach, given limited resources. You want enough highway patrols to keep speeding and other vehicular misbehavior down to a reasonable level, but getting the public to obey the speed laws 100% of the time would require something on the order of a speed-cop Reign of Terror. From my point of view, SOPA and PIPA moved too far in the Reign of Terror direction. I am sure that interested legislators will go back to the drawing board to craft something that deals with the worst abuses without being so intrusive on the vast majority of sites and users who are behaving themselves, but I do not myself believe there is any big rush about the matter.

Reportedly, major Hollywood interests (copyright holders) were behind SOPA and PIPA, and were disappointed when the proposals went down in flames. It is a disturbing time for content providers, as file-sharing and online movies become more and more technically facile. The phrase “rent-seeking” has shown up a lot lately in editorials about how powerful business interests have influenced government so as to direct more revenue their way. One could view the SOPA-PIPA business in that light, with what fairness I’m not sure. But it looks like this time, anyway, a grass-roots effort by millions of users (admittedly led by organizations with influence, though influence more relational than financial) prevailed over the rent-seekers, if that is the right phrase for them. Unfortunately, the shock value of a first-ever Wikipedia protest blackout can be used only once. Any more and it will get to be a drag. So the future will reveal how this continuing conflict gets resolved, if it ever does.

Sources: For policy wonks, or anyone dealing with a sleepless night, a website has posted the “markup” versions of both SOPA and PIPA at http://www.keepthewebopen.com/sopa and http://www.keepthewebopen.com/pipa, respectively. Though they are mostly of academic interest now, they show just how complicated modern legislation has become.

Monday, January 16, 2012

From Dreamcatchers to Soulcatchers

The day after Christmas, I was asked to contribute to a long paper on the past, present, and future of the social implications of technology. One of the other contributors cited an idea called the “soulcatcher chip” as something that would have profound social implications, if it ever comes to pass.

The phrase “soulcatcher” presumably derives from the word “dreamcatcher.” A dreamcatcher, at least in the original versions made by the Ojibwe and Sioux tribes of native North Americans, was a small frame or loop of willow twigs hung with feathers. Mothers would make dreamcatchers and hang them above their children’s beds to filter out nightmares and send only good dreams to their offspring. I am unaware of any scientific studies on dreamcatchers, but the idea has caught on in the commercial world and you can buy such things to hang on your rear-view mirror.

A soulcatcher chip, as envisioned by Peter Cochrane, former Chief Technology Officer of British Telecom, is a piece of silicon that you would implant in your brain. Early versions would simply be an interface between your brain and the Internet, bypassing all those old-fashioned electromechanical keyboards and eye-tiring display screens. Later versions would live up to the name: the interface would have enough bandwidth to “capture all a human’s thoughts and feelings on a single silicon chip,” according to a 1998 posting on the website of Wired Magazine. In the same piece, Cochrane predicted that an external version of the soulcatcher would be available in about five years, that is, by 2003.

As far as I know, that prediction has fallen flat. While functional magnetic resonance imaging (fMRI) technology has advanced to the point that we can observe which parts of the brain get active when a wide variety of mental events happen, this is very far from directly reading a person’s thoughts in general, or being able to get onto Wikipedia by thinking instead of moving your mouse or typing.

The soulcatcher idea is basically a communications problem, and can be broken down into the parts of transmitting (brain to the external world) and receiving (external world to brain). While fMRI technology has made a fair amount of progress on the transmitting end, the receiving end is much trickier. Implanting stuff in the brain is a risky thing, even if the object you’re implanting is only a protective cover to replace a missing part of the skull, for example. And running wires into the brain, or even silicon-chip substitutes for wires, appears to be a very crude way of conveying data to one’s mental world. While some progress has been made in brain implants as a type of therapy for conditions such as epilepsy and even depression, this is a far cry from conveying novel detailed data into the brain.

The idea of a soulcatcher chip brings up a problem that has up till now stayed within the halls of philosophy departments. When Cochrane asked his wife how many parts of himself he could replace with synthetic components before she rejected him as a machine, she said she was revolted by the idea. This is an indirect compliment to Cochrane, because I can think of some marriages in which the wife would welcome the process (“Let me at that off switch!”). Of course, such speculations will remain hypothetical for some time, perhaps forever, because there is no hard experimental or theoretical evidence that it is even possible to simulate the workings of the human mind with a computer, or to do anything close to downloading all a person’s thoughts and feelings onto a computer.

This is just personal speculation on my part, but there may even be some sort of psycho-physiological uncertainty principle out there, analogous to the Heisenberg uncertainty principle of quantum physics. The Heisenberg uncertainty principle says that you cannot measure both the momentum and position of a particle simultaneously with arbitrarily great precision. If you get the momentum exactly right, you will have no idea where the particle was at the time, and vice versa.
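
In symbols, the principle is usually written as

\[ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}, \]

where \(\Delta x\) and \(\Delta p\) are the uncertainties in a particle’s position and momentum, and \(\hbar\) is the reduced Planck constant, about \(1.05 \times 10^{-34}\) joule-seconds. That number is so small that the tradeoff is invisible for everyday objects, but inescapable for something like an electron.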

The soulcatcher analogue of that may be that it is impossible to go beyond a certain point in measurement (and especially in two-way communication) with the mind by means of physical actions related to the brain, without seriously disrupting or possibly even destroying the mind you are dealing with. Given the complexity of the brain and its interactions with the mind, any such uncertainty principle will also be more complicated and less straightforward than the physics principle first enunciated by Heisenberg. But that doesn’t mean no such principle exists. It may simply work out that way experimentally before we understand the brain well enough to realize theoretically what the true limitations are.

The dreamcatcher was a physical object constructed by people who wanted to change something the mind was doing, namely, giving their children nightmares. And in the nature of a placebo, it may well have had a good effect, if the mother felt she was doing something positive and became more reassuring to the child as a result. The hopes for a soulcatcher chip are more ambitious: nothing less than the direct connection of one’s mind to external data in a way that would be hard to ignore. If I get tired of surfing the Internet, I can always just turn off the computer and walk away. But if the thing were directly piped into my brain, all sorts of dire possibilities come to mind. So far, computer viruses have stayed outside the body, but what if one got into your brain and you couldn’t get it out? The ethical challenges alone would be enough to stop me from even contemplating such a project, but ethical considerations do not always stop researchers who are fascinated by an idea.

As we’ve seen, the soulcatcher is an idea that is already delayed in transit, if indeed it ever gets here. Even if it never comes to pass, it has given us a lot of mileage in the form of science-fiction tales and movies, and that may be the place where it does as much good as dreamcatchers, if not more.

Sources: The forecast by Peter Cochrane was published at http://www.wired.com/wired/archive/6.11/wired25_pr.html in the Nov. 1998 issue of Wired Magazine. I also referred to the Wikipedia article on dreamcatchers. If all goes well, the May 2012 issue of the Proceedings of the IEEE will carry an article entitled “Social Implications of Technology: Past, Present, and Future.”

Sunday, January 08, 2012

Ethics of Calendar Technology

With the turn of this new year, most people have at least heard about the Mayan calendar sequence ending that allegedly forecasts dire disasters to come on or about December 21, 2012. The Wikipedia article “Mayan calendar” has a quotation from Sandra Noble, who is an expert on ancient Mayan customs and practices. She says that contrary to a lot of the hype that has been promoted about the event, all that it really means is that a “long-count” period of one “b’ak’tun”, lasting 394.3 years, is going to end on that day. Far from prognosticating disaster, the ancient Mayans usually had a huge celebration, a kind of super-New-Year’s-Eve party, whenever they reached the end of a b’ak’tun. She says that the notion of a cycle end as a doomsday date is “a complete fabrication and a chance for a lot of people to cash in.” To the extent that calendars are a type of technology, such misrepresentation is a kind of violation of calendar ethics—if there is such a thing. As far as I’m concerned, there is now.
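
The arithmetic behind that oddly precise 394.3-year figure is simple enough. In the Mayan Long Count, a tun is 360 days, a k’atun is 20 tuns, and a b’ak’tun is 20 k’atuns:

\[ 1\ \text{b'ak'tun} = 20 \times 20 \times 360\ \text{days} = 144{,}000\ \text{days}, \qquad \frac{144{,}000}{365.2425} \approx 394.3\ \text{years}. \]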

Just as the technology of clocks helps us regulate and coordinate the way we use short intervals of time during the day, the technology of calendars allows us to plan and coordinate longer intervals involving days, weeks, and years. That is why nearly every civilization worthy of the name has come up with some kind of calendar. Although the traditional seven-day week dates back at least to the Babylonian captivity of the Jews around 600 B. C., many other week lengths have been used in other calendars, ranging from the three-day weeks of an early Basque calendar to the 13-day weeks used by the Mayans. In most ancient civilizations, the calendar was used to establish times for religious festivals as well as for more practical matters such as the planting of crops. Because religion was an all-pervasive thing to ancient peoples, the calendar was an intrinsic part of their culture, and anyone with the temerity to change it was in effect challenging the foundation of a way of life.

This tie to tradition was an aspect of the calendar appreciated by the French revolutionaries, who in 1793 threw out the Gregorian calendar with its religious associations and replaced it with a novel arrangement of three ten-day weeks in each 30-day month, enamored as they were of the decimal system. (We owe the metric system largely to this same regularizing spirit, which found its proper place in scientific measurements.) The revolutionary French calendar lasted about a dozen years before the confusion caused by interconverting between the French days and weeks and the traditional ones used by everybody else got to be too much, and the country changed back. A similar stunt was tried by the leaders of the young Soviet Union, who foisted a calendar of 72 five-day weeks onto their reluctant citizens in 1929. That attempt failed after only two years; a six-day week was then tried, but it led to further confusion and too many holidays, and the whole experiment was dropped in 1940, when the Gregorian calendar was quietly resumed.

Since then, there have been no major attempts to fiddle with what many people regard as a God-given system of accounting for days. In Christianity’s early years, believers adopted Sunday, the first day of the week, as their new Sabbath, partly to distinguish themselves from the Jews, who observed their Sabbath from sundown Friday to sundown Saturday. Making Sunday the first day of the week was a nearly universal practice of calendar-makers until the last few years, when I began to notice European calendars that put Monday as the first day of the week, demoting Sunday to the last.

I don’t know why, exactly, but this change annoyed me exceedingly. I suppose it was because, as a Christian, I resented the implied insult to Sunday, which Christians regard as a day set aside by the Lord for rest and avoidance of routine labor. Far from being just another part of the weekend (or something brought to us by labor unions, “the folks that brought you the weekend,” as one bumper sticker says), Sunday is supposed to be the day when you stop to realize that what you have depends not only on your own efforts, but is really the gift of God, Who set aside the sabbath because He rested on the seventh day after making the world on the first six days. If God decides to rest after His labors, the least we can do is imitate Him in that regard. That is why Sunday used to be printed in red and lead the week, because it was special, even holy (which just means “set aside for a special purpose”). Holidays were originally holy-days, that is, religious festivals.

So all these traditional religious associations go by the board when you pick up a calendar with Monday as the first day, as I unwittingly did the day I went Christmas shopping for my wife at Half Price Books. I bought the calendar because it had pictures of Volkswagen Beetles. A Beetle was our first car, and my wife has ever since had an unreasonable admiration for those machines. Because the calendar was sealed, I was unable to tell how the weeks were arranged. Imagine my horror (okay, displeasure is closer to it) when she opened it up, hung it on the wall, and I discovered that it was laid out in the European style of Monday first, Sunday last. I fussed about it till she offered to print up little strips of Sundays, one per month, tape them to the left side of each sheet, and cover up the blasphemous tail-end-of-the-week Sundays. I admitted this would be silly and said for her not to do it, but that calendar is going to annoy me for the whole coming year, I can tell.

By such subtle means are cultural shifts manipulated, or at least indicated. I have no idea why European calendar makers demoted Sunday, unless their customers demanded it by saying that Monday is when their weeks begin, so why not put it first? But in that demand itself is expressed the increasing secularization of Europe we hear about all the time. And now it is spreading to the U. S., at least in the form of specialty calendars. I fully expect nothing particularly bad to happen on Dec. 21 of this year, Mayans or no Mayans. But when it comes to the trend symbolized by moving Sunday to the last day of the week, I think the results are becoming plainer every week—and every year.

Sources: I referred to the Wikipedia articles on “week,” “French Republican calendar,” “Calendar,” and “Mayan calendar,” where the quotation from Sandra Noble appears.

Monday, January 02, 2012

Computer Wars, Now and Then

This Christmas, my wife received not one, but two tablet computers—an iPad and a Nook—from gift-givers who obviously did not coordinate their purchases. It’s too early to tell which, if either, will win her attention, affection, and devotion, which is always the goal when companies introduce new products. But at least no one gave her a TouchPad.

The TouchPad, for those of you who, like myself, were too occupied with other matters to notice, was HP’s attempt to crack the tablet-computer market. Released last July, the device used an operating system called WebOS that HP acquired when it bought the smartphone developer Palm. But according to an article in yesterday’s New York Times, the TouchPad was an example of too little, too late in a number of ways.

Users quickly found that WebOS, or something, made the TouchPad’s speed resemble molasses in January compared to its competitors, mainly Apple’s iOS and Google’s Android. This, plus the fact that the slew of programs HP expected developers to produce for the machine failed to materialize, led the firm to announce, only seven weeks after the TouchPad’s introduction, that it was going to discontinue manufacturing all WebOS devices. Although it has since announced one more production run of the TouchPad, this is thought to be mainly for the purpose of clearing out inventory and meeting unmet obligations to large customers.

Playing catch-up is hard in any game, and especially so in the rapidly moving world of software and novel information and communications technologies such as tablet computers. The fifteen months between April of 2010, when Apple began selling the iPad, and July 2011 might as well have been a decade in more staid industries. And even if HP’s device had been simply a hardware knockoff that used Apple’s iOS (which Apple would have allowed only if the Indian Ocean froze over), the lead enjoyed by Apple and Google would still have posed problems for HP. Add in the totally novel WebOS, which was so new that HP had a lot of trouble finding programmers who could work in it, and you had a disaster in the making.

I was once on the receiving end of a similar situation, though it moved in comparative slow motion because it took place thirty years ago. IBM (remember them?) introduced its personal computer in 1981, and inspired a similar mad rush on the part of other computer makers to come up with rivals for the then-burgeoning PC market. One logical candidate to win the race was the Digital Equipment Corporation of Maynard, Mass., which had shown IBM a thing or two with its PDP line of mini-computers. Back when an IBM mainframe needed the floor space (and air conditioning) of a small house, you could stow a PDP-8 in a couple of relay racks the size of a large closet. So you’d think DEC would know how to out-hardware IBM.

DEC tried, and the result was a thing called the Pro-350. It looked more or less like an IBM PC, only it ran some software that DEC had cooked up on their own. And to use it, you had to either write your own software or wait for programmers to write some and buy it from them. At the time I was a young assistant professor and knew that I needed some kind of a computer. I asked the advice of another professor in my department, who I later learned used to work for DEC. He said for me to buy a Pro-350, and I’d never regret it.

Let us say merely that he turned out to be a false prophet. I spent a good chunk of my start-up research money on that Pro-350. Because I could program in FORTRAN back then, I was able to write a few programs and run them on the thing (it had a FORTRAN compiler that worked halfway decently). But as other people started to buy word processing software, spreadsheet software, and other pre-packaged applications for their new IBM PCs, I was stuck with either writing my own FORTRAN for these things, which was as ridiculous as building my own laboratory building to do research in, or waiting for versions that would run on the Pro-350 to come onto the market. I’m still waiting.

I wound up using the thing as a terminal to get into the department’s mainframe for a few years, but eventually I bought a Mac at home and the world was never the same after that. DEC struggled on for another decade or so, mainly by maintaining its fleet of aging computers for former customers, and was bought by Compaq (which is now itself history) in 1998.

The lesson here, if there is one, is that if you’re going to compete in the consumer electronics business with something new, you’d better be first with something that works really well, or else you’re probably wasting your time and money. As I said, it’s too early to tell whether the Nook or the iPad will win out here at our household. But so far, the iPad has worked flawlessly, whereas my wife and I together spent hours twiddling with our wireless network before her Nook would even see our base station and connect. Not a promising start for the Nook in 2012.

Sources: The New York Times article on the TouchPad appeared online at http://www.nytimes.com/2012/01/02/technology/hewlett-packards-touchpad-was-built-on-flawed-software-some-say.html. I also referred to Wikipedia articles on the TouchPad, the iPad, and DEC.

Saturday, December 24, 2011

Enhancing the Humans of the Future

The summer 2011 issue of The New Atlantis carries a series of articles addressing the question of enhancing humanity through technical means: cyborgs, indefinite extension of lifespan, uploading one’s mind to computers, and other dreams of a group who call themselves “transhumanists.” It is a question fraught with implications for engineering ethics, because engineers will be the ones who develop many of these technologies if they come to pass.

In many ways, we are already living in a future where human performance is enhanced beyond what the “natural” human body can do. Is there an essential difference between a man who climbs into the cab of a backhoe and does the work of fifty men with shovels (or five hundred men digging with their fingers), on the one hand, and a man whose mind has been uploaded into a computer that controls a giant robot which can dig ditches as well as a man with a backhoe can, on the other? We are accustomed to seeing construction workers use powerful machinery all the time. But we might be surprised to see a gang of giant robots show up at a construction site, especially if we strike up a conversation with one and it claims to have a name, a Social Security number, and opinions on the upcoming Presidential election.

To my way of thinking, a human being with the freedom to get into a backhoe cab in the morning and out of it in the evening is better off than a man (if that is still the right word) who has been permanently embodied in some piece of hardware subject to all the ills of engineered machinery, including obsolescence, breakdowns, and power failures. If all the imaginable enhancements to human performance become reality, a given human being can’t choose them all, because some will be incompatible with others. And in making a choice, he or she will be shutting a lot of doors, not only on enhancements incompatible with the set chosen, but also on living as a normal, natural human being, with the incredible and even now not fully understood flexibility that such a life implies.

Great wisdom is found in old myths, such as the myth of King Midas. To a certain frame of mind, what better gift could be received than that of turning everything you touch into gold? If you substitute for gold the ability to achieve all the transhumanist dreams of indefinite lifespan, superhuman intelligence, artistic ability, athletic ability, vision, hearing, and so on, I think the myth’s point is still valid. Oscar Wilde is alleged to have said, “When the gods wish to punish us they answer our prayers.” Depending on how the thing is done, we may find that at least some of the supposedly desirable enhancements so fondly wished for turn out to be curses in disguise.

This is the stuff of science-fiction novels, and the point of such tales is generally to make us realize that we have a wider and more complicated set of values than we often think we do. Midas found that he loved his daughter more than he loved gold, but he fully realized this only when he touched her by mistake. In my view, the whole transhumanist program of wanting whatever we can imagine suffers from a severe lack of philosophical and emotional depth. If Ray Kurzweil is a good example of the transhumanist frame of mind (and I think he is), his books about the future blessings of transhumanism are great at explaining how we may get there technologically. But the most you will find in them with regard to moral philosophy is that he cites as his moral exemplar a fictional hero of novels for boys: Tom Swift.

Now I was an admirer of Tom Swift myself, from about the age of ten when I found “Tom Swift and His Television Detector” in my grandmother’s attic, left over from when her boys were growing up in the 1930s. I continued to enjoy the series when it was revived for a time in the 1960s, but when I went away to college I slowly began to realize that the cardboard world of technological whizzes whose inventions always made for good and banished evil was just that: two-dimensional, unsophisticated, and inadequate for helping me to understand the complex ambiguous world that real technology exists in.

I don’t think Mr. Kurzweil and his transhumanist friends have realized that Tom Swift couldn’t fix all our problems, and neither can we simply by acting like Tom Swift. Almost without exception, the transhumanists are people who disclaim any serious belief in the Judeo-Christian God and an afterlife of rewards and punishments. If you don’t have a hope of heaven, your only chance to get there is to make it yourself, and that’s what the transhumanist movement is trying to do.

I do not fault their motives. Kurzweil has personally developed machines to help blind people read, and I am sure that he and his fellow transhumanists sincerely believe that their plans are the best possible thing for humanity. But they rarely take original sin into account, or the fact that, somehow, the limited scope of power, space, and time that living in normal human bodies gives us is the ground from which every human achievement has sprung.

Like most heresies, the hope of indefinite human enhancement takes a small idea which is proper in its place in the overall scheme of things, and blows it up out of all proportion. The myths of Midas and of Frankenstein’s monster, and even Oscar Wilde’s quip, all tell us that we had better think in a deeper and more multifaceted way about the promise of human enhancement before we cross a line that we may someday regret crossing.

Sources: The summer 2011 edition of The New Atlantis carries extended discussions on “Science, Virtue, and the Future of Humanity.” The Oscar Wilde quote was found at http://www.brainyquote.com/quotes/quotes/o/oscarwilde139151.html. I consulted the Wikipedia article on King Midas, which says that the incident of Midas touching his daughter was first presented in a short retelling of the legend by American author Nathaniel Hawthorne.

Monday, December 19, 2011

The Air France 447 Crash: The Rest of the Story

On June 1, 2009, the aviation world was shocked to learn of the disappearance of Air France flight 447 over the Atlantic Ocean during a flight from Rio de Janeiro to Paris. All 228 people aboard died, and it took until the spring of 2011 to recover the flight-data recorder from its watery grave. Until then, the main clues to the cause of the crash of the fly-by-wire Airbus A330 were some telemetered data received during the final moments of the flight, which indicated that the airspeed instruments had iced up and were giving false readings. Serious and potentially confusing to pilots as that was, it seemed an insufficient reason by itself to make a modern jet aircraft fall out of the sky.

We now have a much fuller picture of what happened that day, thanks to the diligent efforts of the French air-accident investigation agency and the publication of a book about the crash that contains a complete transcript of the words spoken in the cockpit and captured by the flight’s voice recorder. As it turns out, the frozen pitot tubes that sense airspeed were only one of a number of confusing factors that led to a fatal mistake on the part of one of the two co-pilots. So human error combined with mechanical problems, as it so often does in accidents of this kind.

An article in Popular Mechanics magazine presents the following story. The trouble began around 2 AM local time, when the plane entered a region of frequent thunderstorms near the equator. A large airliner such as the A330 carries a complement of a captain and two co-pilots, and shortly after 2 AM the captain left the cockpit in the charge of the two co-pilots while he went to take a nap. Instead of taking evasive action to avoid a large line of thunderstorms in their path, the co-pilots decided to maintain their course. They soon entered the thunderstorm area, where the pitot tubes iced up. At this point a critical transition in the operation of the airplane occurred.

The Airbus A330 is one of a new generation of fly-by-wire aircraft in which a computer sits in the path between the pilots’ controls and the actual control surfaces of the plane. The normal flight mode is autopilot, in which the computer is essentially flying the aircraft. But certain unusual conditions, such as the pitot tubes icing over, make the autopilot trip out and hand control of the plane back to the pilots. That handover occurred at about 2:10 AM, but the cockpit was full of other distractions (turbulence, ice crystals on the windshield, and strange electrical phenomena such as St. Elmo’s fire), and it is not clear that the junior co-pilot realized it had happened. While we will never know why co-pilot Bonin (the one with the least experience) did what he did, the fact remains that at 2:10 he pulled his stick back and essentially kept it there until it was too late to correct his mistake.
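
To make that handoff concrete, here is a minimal sketch in Python of the kind of rule involved. It is emphatically not Airbus’s actual control logic; the three-reading voting scheme, the tolerance, and all the names are illustrative assumptions of mine.

```python
# A minimal sketch of an autopilot-disengage rule of the kind described
# above. This is NOT Airbus's actual logic; the tolerance, names, and
# structure are illustrative assumptions only.

def airspeed_valid(readings, tolerance_knots=20.0):
    """Trust the airspeed data only if the redundant pitot-tube
    readings agree with each other within a tolerance."""
    return max(readings) - min(readings) <= tolerance_knots

def select_control_mode(pitot_readings, current_mode):
    """Trip the autopilot out and hand the plane to the pilots
    when the airspeed data become unreliable."""
    if not airspeed_valid(pitot_readings):
        return "MANUAL"
    return current_mode

# Normal cruise: all three pitot readings agree; autopilot stays engaged.
print(select_control_mode([272.0, 274.0, 273.0], "AUTOPILOT"))  # AUTOPILOT

# One tube ices up and reads far too low: control goes to the pilots.
print(select_control_mode([272.0, 90.0, 273.0], "AUTOPILOT"))   # MANUAL
```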

Even non-pilots such as myself know that if you try to make a plane climb too steeply, its airspeed falls. Eventually the airflow past the wings is insufficient to provide enough lift, and the plane “stalls.” In a stall, the plane becomes a piece of metal falling through the sky. The only remedy is to reorient the craft by pushing the stick forward to get air flowing past the wings in the right direction and recover enough lift to pull out of the resulting dive. But you need a lot of room to do this in. Once the plane stalled, it began to lose altitude rapidly—almost two miles a minute—and the stall began at an altitude of about seven miles.
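
Those numbers are worth a quick back-of-the-envelope check. A few lines of Python, using the article’s rough figures rather than official flight-data values, show just how little time the crew had:

```python
# Back-of-the-envelope check of the descent figures quoted above,
# using the article's approximate numbers, not official accident data.

FEET_PER_MILE = 5280

stall_altitude_miles = 7.0   # roughly where the stall began
descent_rate_mpm = 2.0       # "almost two miles a minute"
captain_return_feet = 10000  # altitude when the captain came back

total_fall = stall_altitude_miles / descent_rate_mpm
time_left = (captain_return_feet / FEET_PER_MILE) / descent_rate_mpm

print(f"Total fall: about {total_fall:.1f} minutes")          # ~3.5
print(f"Left at 10,000 feet: about {time_left:.1f} minutes")  # ~0.9
```

That works out to roughly three and a half minutes from the onset of the stall to the water, and well under a minute by the time the plane reached 10,000 feet.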

If the captain had arrived from his nap earlier, or if the senior co-pilot had shoved his colleague out of the way and done the right thing with both sticks, the stall might have been recoverable. But the confusion that happened next was also abetted by the fly-by-wire situation.

In older aircraft, the two pilots’ sticks are mechanically coupled together, so only one command goes from the cockpit to the control surfaces. If the two pilots disagree on what to do with such a stick, they find themselves in a literal tug-of-war in the cockpit, and most reasonable people would react by at least talking about what to do next.

But even in the autopilot-off mode, the Airbus sticks can be moved independently, and the plane responds to the average of the two sticks’ motions. To my ears, this sounds like a software engineer’s solution to a human-factors problem. In the event, even though the senior co-pilot eventually did the right thing with his stick, the computer averaged it with Bonin’s all-the-way-back stick, and the stall continued.
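
To see why this matters, here is a minimal sketch of the averaging rule in Python. The averaging itself is what the accident reporting describes; the -1-to-+1 scale and the function name are illustrative assumptions of mine, not Airbus’s actual software.

```python
# A minimal sketch of the sidestick-averaging behavior described above.
# The scale and names are illustrative assumptions only.

def averaged_pitch_command(stick_a, stick_b):
    """Combine two independent sidesticks, each running from
    -1.0 (full forward, nose down) to +1.0 (full back, nose up)."""
    return (stick_a + stick_b) / 2.0

# Bonin holds his stick all the way back while the senior co-pilot
# finally pushes his all the way forward, the correct stall recovery:
print(averaged_pitch_command(+1.0, -1.0))  # 0.0: the two inputs cancel,
# so no net nose-down command ever reaches the control surfaces.
```

With mechanically coupled sticks, that same disagreement would have been felt in both pilots’ hands instead of silently averaging out to nothing.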

The rest of the story is short and bitter. When the plane was about 10,000 feet above the ocean, the captain returned to the cockpit. Cursing, he realized what was happening, but no power on earth could have saved them at that point. Two miles of air was not enough to stop tons of aluminum and human bodies from plunging into the ocean less than a minute later.

What can be learned from this tragedy? For one thing, pilots of fly-by-wire craft around the world now have a vivid example of what not to do. Also, I hope the software and hardware engineers working on the next Airbus rethink the strategy of independent sticks and averaging. While human-machine communication is important, this accident underlines the fact that interpersonal communication in a crisis is vital. The single additional channel of communication provided by a mechanical link between the sticks might have been enough to avoid this accident.

Despite such avoidable tragedies, air travel is still one of the safest modes of transport. But it stays that way only through the constant vigilance, training, and competent execution of duty of thousands of pilots, engineers, maintenance personnel, air-traffic controllers, and others. Let’s hope that the Air France 447 disaster teaches a lesson that makes air travel even safer in the future.

Sources: The Popular Mechanics article which carried much of the cockpit transcript appeared online at http://www.popularmechanics.com/print-this/what-really-happened-aboard-air-france-447-6611877. I also referred to the Wikipedia article on the Airbus series. And I thank James Bunnell for drawing my attention to this article. I blogged on the Airbus crash on June 8, 2009, the week after it took place.