Saturday, June 21, 2008

Does The Internet Flatten Your Mind?

If you are reading this, you must spend at least some time on the Internet, and possibly many hours a day. If you're older than 30 or so, you can remember a time before the Internet when "reading" and "holding a piece of paper in your hands" were generally synonymous. And if you're younger than that, believe me, there was such a time and people actually managed to live under such conditions.

The question for today is: does using the Internet make us less able to do certain important mental feats that we may miss after they're gone? More specifically, does it take from us the ability to give sustained attention to a long, complex piece of reading that requires deep thought?

I am moved to this inquiry by a couple of things. In a recent column, Miami Herald columnist Leonard Pitts Jr. says "amen" to an article in the latest issue of the Atlantic Monthly entitled "Is Google Making Us Stupid?" In the Atlantic article, author Nicholas Carr argues that people who use the Internet routinely tend to zip from info-nugget to ad to email to YouTube to . . . well, you get the idea, all without thinking thoughts any deeper than a puddle on a sidewalk. Both Carr and Pitts report that since adapting to the Internet, they find it much harder to sit still with a book that makes a complex, sustained argument over many chapters. They end up getting restless or sleepy. And they wonder whether the instant-gratification style of thinking that Google and the rest of the Internet encourage militates against the deep, contemplative, often temporarily aimless and associative, but sometimes very productive type of thinking that reading at length rewards.

There is some quantitative evidence for this suspicion. Carr cites a study that found most Internet users do not read more than a few paragraphs of any resource they find, even if it is many pages long. In the technology and society journal The New Atlantis, Christine Rosen cites numerous studies showing that the work style known as multitasking actually decreases efficiency rather than increasing it. And the Internet makes multitasking so easy—just open three or four windows on your email, a favorite blog, a video news feed, and go to it.

I must admit that the Internet has profoundly changed the way I do what I used to call library research. My professional research is eclectic in that I often find myself working in fields that I do not have much educational background in. Suppose (as recently happened) that I want to find out about an arcane subject such as astronomical spectrophotometry. (For those who just have to know, it means measuring the light output of stars at various wavelengths.) In the pre-Internet days, this would have meant a trip to the library (preferably the multi-million-volume University of Texas library system), perhaps talking with a reference librarian, hauling six or eight books to a study carrel, writing down references to papers, going back to the shelves and looking up the papers in big heavy volumes of bound journals, and so on. It would have taken a whole day if done properly, and I might have ended up with two or three photocopied papers, some notes, and a whole lot more questions than answers.

Contrast that to what I managed to do yesterday. I Googled the topic, found a few papers online, got more confused than anything else, and ended up going to the library anyway (the local Texas State library, not Austin). I found two books that addressed the subject, but from an insider's point of view. Fortunately, one of them listed some references for introductory works—most of them were books, but one was an online source. Turns out that a professor at Oklahoma University has written an introductory text that he posts online for free. It turned out to be exactly what I needed.

That's a fairly typical story for any of my ventures into new fields. The online stuff helps some (especially Wikipedia, which seems to have very good articles about the basics of technical topics). But at some point I usually end up going to books, sometimes old books. It's unusual that I can find everything I need to know online, especially if I want an overall picture of a field as an introduction.

Now Google and company are working hard to change that by putting all the world's books online. And yes, they may succeed. But once that happens, somehow I don't think people will write new books the same way they used to write old books. Why put a 300-page book online if nobody reads past the first three or four pages anyway?

It takes a certain kind of personality to write a good book. A psychological test called the Myers-Briggs Type Indicator purports to measure a dichotomy between two distinct lifestyles, termed "judging" versus "perceiving." One author summarized the difference between the two poles of the dichotomy this way: people who rate high on the "judging" end of the scale are "job-oriented jumpers" who like to size up a task, do it, get it out of their way, wipe their hands, and go on to the next thing. Perceiving types, on the other hand, tend to be "pendulous postponers" who can always think of one more touch to add to their creation, or one more angle from which to look at a subject.

Many college professors turn out to be pendulous postponers, delving endlessly into the infinite ramifications of a specialized topic. And since they will stick with a subject longer than anyone else does, they often find things that nobody else has found. The supreme example of this type that I can think of is the cultural historian Jacques Barzun, who turned 100 last November. A few years ago he wrote From Dawn To Decadence, a history of Western culture over the last five centuries, in which he summed up a long lifetime of learning that made connections and associations of ideas that even historical duffers like me could understand.

My mind doesn't work that way. I am a "judging" type, which is one reason I write a blog on a different topic each week, rather than using the same time to write a book or two a year (much as I'd like to write a book!). But the world needs both kinds of thinkers. It's pretty clear that the Internet encourages the superficial, list-of-numbers kind of judging thinking over the long-term study, contemplation, pondering, and sustained attention needed for the perceiving kind of thinking. It would be tragic if the Internet wiped out any future hope of having more of the Jacques Barzun type of personality arise in the intellectual world of the future. As long as we don't get doctrinaire about banning books in favor of the Internet or something, I don't think we have much to worry about. But the same end may be achieved by other means, and possibly even by accident rather than design.

Sources: The Atlantic Monthly article appears at http://www.theatlantic.com/doc/200807/google. Christine Rosen's article "The Myth of Multitasking" appears in the Spring 2008 issue of The New Atlantis.

Monday, June 16, 2008

The Micro- and Macro-Ethics of Plug-in Hybrids

The online version of Wired Magazine carried an article recently that took a dim view of the Bush Administration's commitment of $30 million toward plug-in hybrid vehicle research, saying it was grossly inadequate in view of our present oil-price exigencies. A plug-in hybrid car is like a conventional hybrid (e. g. the Toyota Prius) in that it has both batteries to run the electric motor coupled to the wheels, and an internal combustion engine to supplement power from the batteries when necessary. But in addition, a plug-in hybrid can be plugged into your house current overnight to draw power from the electric grid. If the batteries are large enough, some people claim that a plug-in hybrid can travel up to 100 miles per gallon of gasoline consumed, although that figure doesn't count what the car does to your electric bill.
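
To put the electric-bill caveat in perspective, here is a rough back-of-the-envelope comparison, written as a short Python sketch. Every number in it (the gasoline price, the electricity rate, and the efficiency figures) is an assumption chosen for illustration, not a measurement of any actual vehicle.

    # Rough cost-per-mile comparison: gasoline versus grid electricity.
    # All numbers are illustrative assumptions, not data for any real car.
    gas_price = 4.00        # dollars per gallon (assumed)
    gas_mpg = 30.0          # miles per gallon for a conventional car (assumed)
    electric_rate = 0.10    # dollars per kilowatt-hour (assumed residential rate)
    kwh_per_mile = 0.30     # energy drawn per mile in electric mode (assumed)

    gas_cost_per_mile = gas_price / gas_mpg
    electric_cost_per_mile = electric_rate * kwh_per_mile
    print(f"gasoline:    {gas_cost_per_mile:.3f} dollars per mile")
    print(f"electricity: {electric_cost_per_mile:.3f} dollars per mile")

    # A claimed "100 miles per gallon" counts only the gasoline burned.
    # If, say, 70 of every 100 miles came from grid power, the overall
    # energy cost per mile would be a blend of the two figures above.
    blend = 0.7 * electric_cost_per_mile + 0.3 * gas_cost_per_mile
    print(f"blended (70% electric, assumed): {blend:.3f} dollars per mile")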

Sounds great, doesn't it? Let's look at the decision to go with a plug-in hybrid from two points of view. First, there's what ethicists call the micro-ethical view: what should you as an individual do about the situation? Then, there is the macro-ethical view: what should large institutions—corporations, professional societies, governments, nations—do about it? As we will see, the answers aren't necessarily the same.

What an individual should do depends on what kind of individual you are. If you're just an average consumer, the choice is simply, "Should I buy a plug-in hybrid or not?" Of course, this assumes that they are out there to buy. And they aren't—not just yet, anyway, although the much-hyped Chevrolet Volt is supposed to make it to showroom floors by 2010. This shows the limitations of microethical reasoning: options are limited to what one person can realistically do.

If you are an engineer, and you think a plug-in hybrid is a good idea, you might try getting a job related to power electronics or automotive R&D. Or you could even start your own company to address one of the many technical problems that lie in the way of plug-in hybrid development. The most promising type of battery, the lithium-ion cell, still has lots of problems with safety and lifetime, although these may be ironed out with time. So one's career choice is fraught with ethical implications that many young people don't even consider before taking a job, let alone afterward.

When we turn to the macroethical side of the question, a whole array of sub-questions arises. If a company goes into a market not because it's profitable but because it is the morally right thing to do, that company either has to subsidize its activity by drawing funds from other more profitable lines, or face the prospect of going broke, after which the company will no longer exist to do anything at all, moral or otherwise. There are specialty firms right now that will convert conventional cars to plug-in hybrids, but my impression is they are not growing fast and simply don't have the resources to compete with the major automakers. The automotive industry is a strange mixture of century-old traditions (the way car dealership economics works, for instance) and cutting-edge technology. Any organization that wants to succeed in it has to work within the complex environment of existing companies, regulations, and market forces.

The problem is even more complex when you ask what the U. S. government might best be doing in this area. Obviously, the Wired reporter (as well as several private and public sources he quoted) thought that $30 million was so small an amount as to be an empty gesture. He quoted a source at the Brookings Institution who said that to make a major impact on the auto market, plug-in hybrids would need about $18 billion of government subsidies and funds over the next ten years. That is a lot, but compared to many other things the government does, it's not all that much.

Over against that notion is the sense, supported by many conservative schools of economics, that we will have plug-in hybrids when fuel costs and other economic factors make it profitable to sell them, and any government intervention to hasten that day is liable to be counterproductive. Macroethics in engineering gets tangled up in economics and public policy pretty quickly, as you can see.

My own opinion of the matter is that there are technical solutions out there, but those who have the nominal power to implement them (both in private corporations and in government) lack the courage to go ahead and do something. The "something" might be in a variety of directions, either liberal or conservative. But my sense is that lately, no one has been willing to step up and put their hands on the wheel and steer. And just as with an individual who drifts through life reacting to things without making or implementing specific plans, institutional drift is sooner or later bound to lead to disaster.

As far as buying a plug-in hybrid goes, I plan to hang on to my own two cars for a while yet. One of them has 183,000 miles on it and the other, which already gets about 37 miles a gallon, is about to turn over 100,000 miles. The car I had before that made it to 200,000 before the wheels began to fall off (literally). So I figure by the time I'm in the market for another car, one of my choices is likely to be a plug-in hybrid. But whether I'll be able to afford it is another question.

Sources: The Wired article on plug-in hybrids appeared on June 13, 2008 at http://blog.wired.com/cars/2008/06/feds-scrape-tog.html.

Monday, June 09, 2008

New York's Crane Collapses: Who Inspects the Inspectors?

New York City is undergoing something of a building boom, and building in large cities means tower cranes—those improbably spindly structures that symbolize major construction these days. Last May 30, a crane in use at 91st Street in Manhattan collapsed, killing the operator and another construction worker, seriously injuring a third, and damaging several buildings as it fell to the street below. What made it even worse is that this collapse was the second in less than three months. On March 15, another crane collapsed in midtown Manhattan, killing seven. And in both cases, it appears that the inspection process designed to prevent just such accidents was flawed, to say the least.

What do crane inspectors do? What pressures do they experience in their jobs? And what changes can be made in the system to improve it?

On paper, at least, New York City appears to have a rigorous and exacting system of required inspections for the erection and use of tower cranes. Every contractor has to have a permit to operate a crane, the operators themselves must be licensed by passing tests or showing an equivalent amount of specialized experience, and the cranes themselves must be inspected periodically by crane inspectors, who are city employees. And most of the time, the cranes operate without major accidents or injuries. But it looks like all is not as it should be with the inspection process.

In the March accident, a crane inspector was arrested on suspicion of falsifying a statement saying that he had inspected, on March 4, the crane that later collapsed. And just last Friday, the acting chief inspector of cranes, James Delayo, was arrested on charges that he took bribes to supply a construction firm with answers to the crane operator's test, as well as to report inspections on cranes that he never in fact inspected. But even if all the inspectors involved had done their jobs, it appears that the May collapse might not have been prevented. A New York Times reporter found that the collapsed crane's turntable was a rebuilt unit that had earlier been struck by lightning and welded back together. It is entirely possible that a hidden defect in the weld contributed to the accident, although further investigations will have to confirm that theory. If so, a routine visual inspection might not have revealed any problem.

Inspectors, quality control engineers, traffic policemen—the job of all these people is to make sure that what is supposed to happen actually happens, and what isn't supposed to happen, doesn't. And if they see problems, or potential problems, they have the authority to act. Any time a person holds authority over others, there is the temptation to abuse that authority. And it is no news that from time to time, inspectors take bribes instead of doing the harder thing—actually making the inspection or penalizing a crane operator for careless actions.

A chronic problem with government-operated departments of inspection—whether the things inspected are cranes, X-ray machines in dentists' offices, or sides of beef—is a shortage of inspectors. The benefits of inspection are largely invisible, while the negative consequences of inadequate inspection are blatted all over the news media. The political tendency is therefore to fund inspection agencies just enough to prevent too-frequent accidents, but not so much that the inspected industries and businesses get sore from being plagued with swarms of supernumerary inspectors. The technical abilities required of an inspector can be equal to or greater than those of his or her counterpart in private enterprise, but government pay is almost always less than private-sector pay, which adds to the temptation to take bribes.

Some states have decided to outsource certain kinds of inspection to private third-party firms. This leaves the free market to decide the pay rates and numbers of inspectors, but has its own problems as well. How do you ensure that a private inspection firm, which is basically a kind of consulting operation, is doing its job? Hire government inspectors to inspect the inspectors? Whether an inspector works for a public or private firm, the issue always comes down to professional integrity: does the inspector know enough technically to do a good inspection? And if so, do they have the moral fiber to resist the temptations to bribery, shortcuts, and other forms of professional corruption?

In today's short-term bottom-line world, the kind of long-term relationships and institutional reputations needed for inspection systems to work well can be hard to establish. But it is too easy to forget that lives are at stake. New York City appears to be trying a short-term fix by prosecuting some crane inspectors who were alleged to be on the take. While that is certainly something that needs to be done, one wonders whether corruption in the process may be endemic, and the arrests happened only in response to headlines. Is privatization a better approach? Maybe, but as in so many other aspects of engineering, you have to work with the materials, culture, and political environment you have, and privatization in certain political circles is a dirty word. Here's hoping that however it gets done, the system of crane inspections in New York improves to the point that seeing those giant towers swinging across the skyline will be only a source of pride, and not of fear.

Sources: I used reports from the New York Times on the crane accidents and bribery arrests available at http://www.nytimes.com/2008/06/07/nyregion/07crane.html and http://www.nytimes.com/2008/06/08/nyregion/08building.html. A technical description of the March 15 collapse is available at http://www.gostructural.com/article.asp?id=2788.

Saturday, May 31, 2008

California Supreme Court Damages Future of Engineering

I'm going to go out on a limb here. But I'm sure that the limb's pretty solid.

On May 15, the California Supreme Court struck down a ban on same-sex marriages that the state has had in place for some time. I'm not going to talk about the issue of judicial activism, or the question of whether California's citizens will assert their rights to reverse this action by approving a referendum amending their state constitution next fall. Instead, I am going to argue that allowing same-sex marriage will endanger the future of the engineering profession in this country.

Seems like a stretch, doesn't it? Here is my line of reasoning.

First, let me show that allowing same-sex marriages damages the institution of marriage. Some people simply do not see how conventional marriages between a man and a woman are in any way affected if we also let men marry men or women marry women. For these readers, let me make an analogy.

We have a nice solid base of well-functioning, highly capable engineering colleges in the U. S. Most of them are accredited through a rigorous process of inspections, visits, and continuous improvement. Suppose we passed a law that said all employers must recognize engineering degrees from any institution calling itself a college of engineering, whether it was accredited or not. It would be illegal to refuse to hire an engineer simply on the basis of what college he or she got a degree from. (=all of society must recognize marriage certificates of all kinds, whether for same-sex marriages or not.) We would leave the whole accreditation machinery in place, and universities capable of giving a good accredited education would still be able to do so. (=men and women who want to marry the opposite sex can still do so.)

What do you think would happen to the institution of engineering higher education in this country? Outfits handing out engineering degrees would spring up like newsstands on every corner, and students would flock to them. The average competency level of degree-holding engineers in this country would go into a precipitous decline, and the whole process of engineering education might undergo permanent damage that would take years or decades to repair, if ever. And note: in this hypothetical scenario, we did nothing whatever to the good schools. They were still free to stay accredited and do their good, competent job. We simply forced everyone to recognize as competent fly-by-night institutions that were in fact incompetent.

The adjective "incompetent" often carries negative connotations, but it need not do so. It simply means that the noun modified is incapable of doing something or other. I have no shame in admitting the fact that, being a male, I am incompetent to bear a child. Women are incompetent to beget children without a male being involved somewhere along the line. And two men together, or two women together, are incompetent when it comes to fulfilling the practical duties and responsibilities of marriage, namely: being a biological and social unit that consists of a man as father, a woman as mother, and children who each have the same mother and father.

There are many scientific studies—thousands, in fact—performed by sociologists with all kinds of backgrounds and personal beliefs, which examine the question, "Do children who grow up in a family consisting of one mother and one father who are married and stay married do better than children raised in any other kind of environment?" To measure "better" you can look at social adjustment, criminal records, levels of school achievement, early or frequent sex and drug use, rates of depression and suicide, and so on. And the resounding, repeatable, monotonously consistent answer is, "Yes." This is not to say that kids raised by a single mother or two gay men are doomed to failure and a miserable existence. The human spirit can triumph over adversity of whatever kind. But when children are examined in statistically significant numbers, there is no question that the social institution we call conventional intact marriage beats any other way of raising children hands-down. That is not an ideological statement. It is a social-science statement backed up by years of the best kind of research that social science can offer these days. If you don't believe me on that, see David Blankenhorn's The Future of Marriage.

Now for the connection to engineering. It is my subjective impression, which I wish some social scientist would check out with the machinery of their trade, that the better grade of engineering students come from just the kind of stable family background that same-sex marriage will militate against. The National Science Foundation, among other institutions in this country, is concerned that very few students of either sex (and especially few women) choose engineering as an undergraduate degree, and even fewer decide to go on to graduate school. That is part of why it is increasingly rare to find engineering professors who were born in the U. S.: whatever mysterious factor it is that makes people want engineering graduate degrees is in short supply in this country, but seems to be plentiful abroad.

I will not claim that unstable marriages, divorced and remarried couples, single parents, and same-sex parenting are responsible for the entire decline in interest in engineering among young people in the U. S. But I believe a part of it is. And if we damage the institution of marriage further by insisting that same-sex unions get the same recognition as conventional marriages, I forecast a worsening dearth of U. S. students able to muster the discipline and deferred gratification necessary to pursue careers in engineering. I suspect we will wait a long time before the National Science Foundation comes out in opposition to same-sex marriage. Nevertheless, if I'm right, it might do more good for them to work in that direction than to spend their money on some of the programs they have supported in the past to encourage students to become engineers.

There. I made the connection. Like it, hate it, argue with it as you will. But that is my opinion, and as far as the marriage part goes, I'm on solid ground, not hanging from a tree by a limb.

Sources: Although I have not read the book, David Blankenhorn's The Future of Marriage comes highly recommended as a careful, scientifically reasoned argument written by a person who favors equal rights for homosexuals, but is convinced by scientific evidence that same-sex marriage would be too high a price to pay.

Sunday, May 25, 2008

Remembering Brian O'Connell

Last Thursday, May 22 brought the sad news of the passing of Brian O'Connell the previous day. Anyone who knew Brian, or met him even once, was not likely to forget him. For those of you who did not have the privilege of meeting him, I would like first to offer you my sympathy. Then I will try to describe one of the most colorful personalities ever to grace the field of engineering ethics.

This business tends to attract people with mixed backgrounds who are both conversant with the intricacies of some technical field and also interested in the human side of things. Brian was no exception. He once told me he was one of the youngest people ever to run a planetarium show at Hartford's Gengras Planetarium, when as a young teenager he was asked to fill in for the regular operator whom Brian had become friends with. But his interest in the depths of the human soul expressed itself soon thereafter when he attended seminary for a while. Deciding he wasn't quite cut out to be a priest, he switched to computer science, and then back to humanities as he took a law degree and practiced law for several years. Eventually he joined Central Connecticut State University and served with distinction in both their computer science and philosophy departments.

I met Brian shortly after he discovered the Society on Social Implications of Technology (SSIT), a society within the Institute of Electrical and Electronics Engineers (IEEE). He was the guy with long blond hair, horn-rim glasses, and a suave and engaging manner, and he saw something humorous in just about everything. Among the more staid, business-suit-clad engineers that often showed up at SSIT meetings, Brian looked like a hippie who had wandered into a Rotary Club meeting by accident. He was the kind of person who could walk into a room and change the whole tone of conversation in five minutes from boredom to excitement, and he often did.

Naturally, not everybody always agreed with Brian's ideas. But he had the ability to see the other person's point of view instinctively, sometimes better than the other person himself. I'm sure that's what made him a good lawyer, and it is also what made him an excellent advocate of engineering ethics in a wide variety of fields, starting with computer ethics and ranging over other areas it would take a detailed study of Brian's writings to enumerate. As I have said elsewhere, seeing the other person's point of view is an essential first step in good engineering ethics, and Brian could do that better than just about anyone I know. In everything Brian did, there was a foundational joy in living and a desire to see other people blessed by the same joy, not harmed. And technology, since it is such a big part of life nowadays, was something Brian wanted to use to bless people rather than to harm them.

I think that desire is what drove him to work so energetically on behalf of the SSIT (which he served in many capacities, including President), on behalf of his law clients when he practiced law, and on behalf of his students at CCSU, many of whom he invited to his own basement lab in his house in West Hartford. When I last saw him in July of 2007, he showed me where he pursued robotics projects with his students and we talked about what he could do with robotics and remote control radio links, which he had obtained an amateur radio license to use.

Brian's actions in his chosen professions (and I count at least three: law, computer science, and engineering ethics) all sprang from a view of life that was deeply rooted in his religious and philosophical outlook. We never spoke about it much, but he was familiar with the classics and liked to quote thoughtful people of faith, from St. Augustine to G. K. Chesterton. Like Chesterton, Brian believed life was a thing to be enjoyed with all one's might. Chesterton enjoyed a glass of wine and a cigar, and Brian was partial to tobacco as well (his lung cancer was diagnosed in the spring of 2007). His legacy continues in the lives of the hundreds or thousands of students, colleagues, and fellow professionals who, I hope, will know more about engineering ethics and act on that knowledge because of something Brian did, said, or wrote. His life crossed the paths of the rest of us like a skyrocket shooting up through the trees. Perhaps Edna St. Vincent Millay had someone like Brian in mind when she wrote

My candle burns at both ends;
It will not last the night;
But ah, my foes, and oh, my friends--
It gives a lovely light!

Requiescat in pace, Brian.

Saturday, May 17, 2008

China's Earthquake: What If We Had Known?

On Monday, May 12, the Sichuan region of China was devastated by one of the worst earthquakes in recent memory. At this writing, the death toll stands at over 50,000, and more bad news about the disaster arrives daily. One of the strangest news stories to come out of the region concerns rumors spread on the Internet that scientists working for the Chinese government knew the earthquake was going to happen, and suppressed the information out of fear that making their prediction public would cause panic ahead of the Olympic games.

A news source almost certainly affiliated with the Chinese government (China Radio International) issued a release Wednesday which quoted Zhang Guomin, a research fellow at China's Institute for Earthquake Science, as saying that earthquake forecasts should be based on scientific analysis and not tailored to political requirements. According to him, earthquake forecasts are not possible with our present state of knowledge. However, another researcher, Zhang Xiaodong of the China Earthquake Networks Center, seems to wish that predictions were possible, because he told the reporters, "I feel deeply regretful and sorrowful at the failure to predict the earthquake."

What if we could predict earthquakes with the same accuracy as, say, we can predict tornadoes today? At least one leading authority believes that such predictions may be possible. A NASA researcher named Friedemann Freund has published a series of papers over the years that connect measurable changes in the earth's electromagnetic fields to strong earthquakes that happen shortly after the changes. (My blogs of Feb. 20, 2007 and Apr. 13, 2006 describe more technical details.)

Without taking sides on whether this is in fact possible, let's do a little thought experiment. Suppose after X years of research and development, we assemble the expertise, equipment, and networks needed to predict major deadly earthquakes. No prediction system is going to be perfect, so let's say its accuracy can be quantified this way: when the system predicts an earthquake of at least a given magnitude in a given geographic area during a given time window (probably at least a week, and maybe much longer), the prediction is borne out 80% of the time. And let's say the system's errors are evenly split between false positives and false negatives: half are predicted earthquakes that never materialize, and half are major earthquakes that strike when none was predicted.
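
To get a feel for what such an imperfect warning system would mean in practice, here is a small Monte Carlo sketch in Python. The weekly chance of a major quake, the detection rate, and the false-alarm rate are all invented numbers, chosen only so the results come out roughly in line with the 80% figure above; the point is just to see how warnings, false alarms, and missed quakes pile up over time.

    import random

    # Monte Carlo sketch of an imperfect earthquake-warning system.
    # Every number below is an assumption made for illustration only.
    random.seed(1)
    weeks = 52 * 2000            # many week-long warning windows
    p_quake = 0.0005             # assumed chance of a major quake in any given week
    p_detect = 0.9               # assumed chance the system warns when a quake is coming
    p_false_alarm = 0.0001       # assumed chance of a warning in a quiet week

    warnings = hits = false_alarms = missed = 0
    for _ in range(weeks):
        quake = random.random() < p_quake
        warned = random.random() < (p_detect if quake else p_false_alarm)
        if warned:
            warnings += 1
            if quake:
                hits += 1
            else:
                false_alarms += 1      # a needless mass evacuation
        elif quake:
            missed += 1                # a deadly surprise

    print(f"warnings issued: {warnings}, borne out: {hits}, "
          f"false alarms: {false_alarms}, quakes missed: {missed}")
    if warnings:
        print(f"fraction of warnings borne out: {hits / warnings:.0%}")

Because major earthquakes are rare, even a very small false-alarm rate produces a steady trickle of needless evacuations, and a few real quakes still slip through unannounced. Those are exactly the cases that make such a system's credibility so fragile.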

Given this imaginary system, what do we do with it? Do we treat the forecasts like hurricane forecasts and order mass evacuations? That's certainly one approach. Hurricane Rita was originally predicted to hit the Houston area, and a graduate student I knew was pretty perturbed when he wasn't able to arrange for transportation out of the city. As it turned out, he was one of the lucky ones—nothing too bad happened to Houston, but everybody who tried to flee had to endure the granddaddy of all traffic jams on the already-clogged Houston freeways.

Hurricanes generally end up somewhere, so hurricane forecasters are given the benefit of the doubt when they miss on exact predictions of the storm's path. But what if earthquake experts made a prediction that turned out to be a complete bust—that is, everybody evacuates for the full term of the warning and exactly nothing happens? That might sully the reputation of the field indefinitely, and nobody would take its predictions seriously again.

To bring the matter closer to home, what if this hypothetical system predicted The Big One for the San Francisco Bay area? If we shut down everything that goes on in Silicon Valley for a week, that would constitute a major economic disaster of its own. You don't just walk up to a huge semiconductor plant and turn off the switch, unless you want to turn it into scrap. Of course, a major earthquake might do that for you, but then you get into the question of how to deal with an evacuation order that would cost billions of dollars to a private company. Lives are more valuable than property, but property isn't negligible. And that's just one example of many problems that we would face in dealing with accurate earthquake forecasts.

The approach California has taken in the absence of reliable earthquake predictions is to mandate earthquake-resistant construction. But that costs more than ordinary construction, and requires a well-functioning regulatory system and a cooperative construction industry, neither of which is always found in other countries. Mass evacuations are simpler, and might be the best path to pursue for countries that can't afford to replace their entire infrastructure with earthquake-resistant structures.

Clearly, even if we had reliable earthquake prediction, we would face a lot of issues in deciding how to act on the knowledge it would provide. But it seems to me that knowledge is always better than ignorance, especially when it comes to earthquakes. And considering the terrible loss of life and property that major earthquakes usually cause, I wish that we spent more intellectual capital on serious efforts to predict earthquakes, and tried to evaluate the predictions in a statistically meaningful way.

Sources: The China Radio International article I quoted appeared at http://english.cri.cn/2946/2008/05/15/48@357631.htm.

Saturday, May 10, 2008

Ethics of the Smart Car

The relationship between drivers and their cars has always been a complex one, fraught with emotional and moral overtones. Maybe that was why some television writers with more enthusiasm than judgment came up with the concept of "My Mother the Car." I'm old enough to remember watching that show, which aired on U. S. television back in 1965. The basic idea was that this guy buys an antique car, only to discover that somehow his deceased mother's spirit has taken up residence in it. The radio dial flashed whenever she spoke to him, I guess so TV viewers could tell that it really was the car and not some hallucinogen-inspired inner voice. The show lasted only one season and is remembered, if at all, for being one of the worst TV series of all time. But if Prof. Clifford Nass of Stanford University has his way, we all may be talking with our cars in the future—and the cars may talk back in tones to match our emotions.

A recent Wired article profiled Prof. Nass's research on the future of the human-automobile interface, and how smart cars may be used. Smart in what sense? Well, with current GPS (global positioning system) technology and computer power, coupled with broadband wireless networks that will be ubiquitous soon, you can imagine driving down the street and saying to your car, "Hey, I'd like a pizza. Any good places within a couple of blocks?" Advertisers and automakers would like your car to reply, "Well, there's Gino's in the next block and Papa's one block over—they're having a lunch special today. What shall it be?"
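
Strip away the conversational interface and the "any good places within a couple of blocks?" question is just a geometric query against a list of businesses. The Python sketch below shows the bare idea; the restaurant names, coordinates, and the quarter-mile cutoff are all made up for the example, and a real navigation system would pull its listings from an online directory.

    from math import radians, sin, cos, asin, sqrt

    def distance_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in miles (haversine formula)."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 3959 * asin(sqrt(a))    # 3959 = Earth's radius in miles

    # Hypothetical listings and a hypothetical GPS fix for the car.
    pizza_places = [("Gino's", 30.268, -97.743),
                    ("Papa's", 30.266, -97.741),
                    ("Far-Away Pizza", 30.350, -97.700)]
    car_lat, car_lon = 30.267, -97.742

    nearby = [name for name, lat, lon in pizza_places
              if distance_miles(car_lat, car_lon, lat, lon) < 0.25]
    print(nearby)    # ["Gino's", "Papa's"]

The advertising angle comes in when the system decides which of the nearby matches to mention first, and in what tone of voice.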

Of course, the same smarts that let your car give you dining advice will also empower it to remember how you drive. Auto insurance companies currently give discounts to good drivers and raise rates on poor ones, but the quality of your driving is determined mainly by very coarse measures: the number of accidents and traffic violations. Suppose every week your insurer could download and process (by software, of course) hundreds of details about how you drive: how fast you pulled out after a light changed, whether you were speeding and by how much, and whether you ran red lights without getting caught. Most of the technology for this already exists; it's just a matter of putting the pieces together and deploying them.
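
As a purely hypothetical illustration of what "download and process by software" might look like, here is a sketch of the kind of trip records a car could upload and a crude scoring function an insurer might run over them. The field names and the weights are invented for the example; real actuaries would no doubt do something far more sophisticated.

    # Hypothetical trip records a "smart" car might report to an insurer each week.
    trips = [
        {"mph_over_limit": 12, "hard_accelerations": 3, "red_lights_run": 0},
        {"mph_over_limit": 0,  "hard_accelerations": 0, "red_lights_run": 0},
        {"mph_over_limit": 25, "hard_accelerations": 7, "red_lights_run": 1},
    ]

    def risk_score(trip):
        """Crude weighted score; the weights are arbitrary stand-ins."""
        return (1.0 * trip["mph_over_limit"]
                + 2.0 * trip["hard_accelerations"]
                + 25.0 * trip["red_lights_run"])

    weekly_average = sum(risk_score(t) for t in trips) / len(trips)
    print(f"average risk score this week: {weekly_average:.1f}")
    # An insurer could map a score like this onto a premium adjustment;
    # whether it should be allowed to is the policy question discussed below.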

Some people would think this amounts to turning one's private car into a spy. The matter gets even more complex if we move to cars which partially or totally take over many of the functions of driving. (See my column "The Human Side of Automated Driving" Dec. 10, 2007). Clearly, if you take your hands completely away from the controls and let the car do everything, your responsibility for accidents that ensue is limited, if not absent entirely. But many plans for computer-assisted driving don't go that far. Nass imagines a heavy-footed driver negotiating with his car for permission to step on the gas after a stop light changes. "Aw, c'mon, just this once?" "No, you're wasting gas, and at five dollars a gallon!" Nass says that changing the car's tone of voice to match the driver's mood may help the situation, but I'm not so sure.

Soon after it became economically feasible to put computer-generated voices in cars, some time in the early 1990s, a few manufacturers experimented with the idea. It proved to be almost universally unpopular, since the mechanical female tone reminded everybody of their worst nagging nightmares of school librarians and mothers (there it is again), and the feature disappeared within a model year or so.

Where is engineering ethics in all this? The first responsibility of engineers who are working on these things is to make sure they don't make driving more dangerous. Of course, that doesn't mean things can't ever go wrong occasionally, but tests will have to show a general improvement in safety before new features can be adopted. As for insurance companies and driving information, there is a public-policy aspect which has not been debated yet. It's the same kind of question that arises when health insurers want to use a person's genetic information to restrict coverage, with this difference: you can't help what genes you were born with, but you presumably have some control over how you drive. But should a taxi driver in New York pay higher rates than the legendary little old lady from Pasadena who only drives to church on Sundays? These are questions that involve technology as well as issues of fairness, economics, and what insurers like to call "moral hazard"—that is, the idea that you should not be exempt from all the consequences of your own voluntary bad behavior.

For my part, I'll be content to drive my old, dumb cars (dumb in two senses) until the wheels fall off. And maybe by then I can buy a car named James and commute by saying, "Home, James," and just enjoy the scenery while the car worries about the congestion on IH-35.

A Note To Readers

For the next two to four weeks I will be pursuing some research in a rather remote location where Internet access is not as reliable as it could be. So I apologize in advance for any delays in my weekly postings, which I will try to keep current as much as possible. For more information about the subject of my research, see www.nightorbs.net.

Sources: The Wired article appeared on May 9, 2008 at http://blog.wired.com/cars/2008/05/a-data-mining-c.html. And Wikipedia has an article that will tell you more than you will ever need to know about the show "My Mother the Car."

Monday, May 05, 2008

I Got the Botts About Bots

My father, God rest his soul, had enough South Texas German in him to be subject to occasional fits of Teutonic depression. He had enough self-awareness to know what was going on when these moods hit him. When we asked him what was bothering him, he'd generally say, "Aw, I've got the botts." (I never saw him write the word down, but for some reason I think it's spelled with two t's.) He passed on many years before the Internet was more than a gleam in a few researchers' eyes, but if he were alive now, he might well have the botts about bots.

A bot is a piece of malevolent software (malware) that infects your computer and puts it under the remote control of whoever runs the bot network. The things these operators want done are generally not nice. In the case of one of the worst botnets, Storm Worm, some observers say that over a million computers took orders from people who apparently went on the black market to offer denial-of-service attacks to the highest bidder. If a criminal takes up the offer, the victim's website is likely to be inundated with many millions of emails or other automated requests for service, whereupon the target website immediately gets overwhelmed and becomes inaccessible to legitimate users. Creators of botnets have progressed in the last few years from random vandalism to coordinated criminal activity, which is why computer security firms and software providers from Microsoft on down have lately spent so much time and effort combating the problem.

Until recently, people such as myself who use Macintosh computers could ignore bots, since up to 2004 or so no one had bothered to write a bot for Macs. Since only a relatively small percentage of all computers online at a given time are Macs, a malware writer who wants access to the largest number of computers in the shortest time is probably not going to bother writing two different bot programs, one for Macs and one for PCs. (Most legitimate software companies don't either, but that's another story.) But this supposed invulnerability has evidently come to an end. The other day I received a message from the IT division of a university where I do research. It informed me that a Mac on a network node in the lab I was working in was being remotely controlled by a bot. I was alarmed until I called the people and checked the Ethernet hardware address (the MAC address, an identifying number unique to each machine's network interface). The number didn't match mine, so my computer must not have been the one that was zombified. Still, it means there could be a problem in the future.

It turns out that bots tend to use something called IRC, which stands for Internet Relay Chat. This is the old original protocol that enabled the first internet-based chats, before companies started selling proprietary versions. I am not a computer scientist and I don't know why this particular protocol is so useful to botnet masterminds, but it is.
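
For readers who have never seen it, IRC is strikingly simple: a client opens an ordinary TCP connection, announces a nickname, joins a channel, and from then on just reads lines of text, any of which a bot can treat as commands from its master. The harmless, listen-only sketch below (with a made-up server and channel) is an illustration of the protocol, not of any actual bot, but it shows how little code a command channel requires.

    import socket

    # Minimal, listen-only IRC client: connect, register a nickname, join a
    # channel, and print whatever is said there. Server and channel are made up.
    server, port = "irc.example.net", 6667
    nick, channel = "demo_listener", "#demo"

    sock = socket.create_connection((server, port))
    sock.sendall(f"NICK {nick}\r\n".encode())
    sock.sendall(f"USER {nick} 0 * :{nick}\r\n".encode())
    sock.sendall(f"JOIN {channel}\r\n".encode())

    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break                          # server closed the connection
        buffer += data
        while b"\r\n" in buffer:
            line, buffer = buffer.split(b"\r\n", 1)
            text = line.decode(errors="replace")
            if text.startswith("PING"):    # keep-alive handshake
                sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
            elif "PRIVMSG" in text:
                print(text)                # a bot would parse lines like this as orders

That same simplicity, plus the fact that one message in a channel reaches every connected client at once, is part of what makes IRC such a convenient rallying point for thousands of compromised machines.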

Wouldn't it be nice if we could rewind to the day when the first wide-eyed innocent programmer came up with the neat idea of IRC in the first place? "Hey, kids, let's make it so we can chat over the Internet in real time." Sounds great. But apparently, there is something about the IRC protocol, in its very openness and simplicity, that makes it an ideal channel for controlling computers that have already been taken over by other means.

I'm sure that was the last thing in the programmer's mind, to put in a built-in flaw that would later be exploited by criminal elements to the harm of thousands of victims, and to the possible legal compromise of millions of people who unknowingly participate in these crimes simply because their computers are hosting bots and follow the orders of their evil digital masters. But hey—with opportunity comes responsibility.

There is an idea in the engineering ethics world called the precautionary principle. Wikipedia defines it this way: "If there is a risk that an action could cause harm, and there is a lack of scientific consensus on the matter, the burden of proof is on those who would support taking the action." You hear more about it in European ethics discussions than in the U. S. Taking it seriously would severely hamper development of new technologies of all kinds. I wonder, though, whether we might have been spared some of the problems we struggle with today if the people who developed the early Internet protocols had taken a more cynical view of human nature and tried to think of all the evil things ill-willed programmers could do with the neat tools they were putting out there.

If, for example, the developers of the IRC had taken a prototype version to some creative young bucks who spent their days trying to devise malevolent uses for new software, they might have discovered the extreme usefulness of IRC in botnets. And who knows?—they might have fixed it in a way that stayed permanently embedded in the Internet as it grew faster than almost anyone expected.

It's obviously too late to close the barn door on that particular horse. Now that Macs can harbor bots, I'll just have to be careful and try to make sure I follow good computer hygiene, for whatever good that will do. But people are writing new software all the time, and some of it is destined to be as influential and ubiquitous as the infamous IRC protocol is now. Surely we have learned a lesson about the depths of depravity to which some programmers will stoop. I just hope that people who write software these days take some thought as to how what they develop could be misused in the future, and even twist their minds around to be creative about it—and then fix it so it can't be used that way.

Sources: Slate has a good introduction to the subject of bots at http://www.slate.com/id/2190275/. A recent overview of the subject from a technical perspective can be found at http://8e6labs.com/2007/11/02/overview-of-the-threats-posed-by-bots/.

Monday, April 28, 2008

Should Google Be the World's Librarian?

Book Search is a portal that Google, Inc. is developing to provide access to all the world's books in digital form. How many is that? If you count editions (not individual copies), a recent Associated Press article about the project says there are between 50 and 100 million books in the world. The largest research library that I deal with on a regular basis, at the University of Texas at Austin, has only eight million of these. So clearly, Google will have done a great thing if and when it finishes—although with new books coming out all the time, a project like that is never really finished.

At first glance, this sounds like a great step forward in the history of information, on a par with the invention of printing. There are many parallels between the two events. Before movable type made it possible to produce thousands of identical copies of a manuscript, hand-copied books were rare, expensive treasures that only the wealthy and powerful classes could afford, by and large. But once Europe had dozens of print shops churning out books and pamphlets by the hundreds, prices came down to the point that artisans, shopkeepers, and even some farmers and peasants could afford them. You can make arguments that the Renaissance, the Protestant Reformation, and the Industrial Revolution all depended vitally on the invention of printing.

However, there is one critical difference between the invention of printing and what Google is doing. Print shops, publishers, and the whole network of book production, distribution, and the libraries that developed to house them were under the control of a diverse array of entrepreneurs, private organizations, schools, and governments. On the other hand, Google is, well, Google—a single, monolithic, centrally controlled corporation. Is there any ethical problem with that? It depends.

One thing that may be in danger is what I would term the universal freedom of library access. At any university library worthy of the name, anywhere in the world, any person can simply walk in and look at the general collections, generally without charge. And if you can produce scholarly credentials, you will usually be allowed to examine even the rarest items in their collections, under proper security controls, of course. The only limitation (and this is a severe one, admittedly) is that you have to travel physically to the library in question. But once you're there, you're in.

We have already seen how many Internet firms have submitted to the will of dictatorial nations in exchange for the privilege of operating there. In my Mar. 30, 2006 blog, I criticized Google, Yahoo, and Microsoft for kowtowing to the government of the People's Republic of China by restricting users' access to certain sites that the government deemed objectionable. Surely the books and other published works of Chinese dissidents will not be welcome there in electronic form any more than the people themselves, many of whom have endured long prison terms or even death for the "crime" of expressing their opinions.

But that is only one example of how Google, or any entity which has exclusive legal rights to the propagation of large amounts of information in a single medium, could distort or restrict access to the written heritage of the human race.

Am I being paranoid in sensing the potential for some sinister goings-on? I do not presently attribute evil or malign motives to Google, but sometimes things that look good to start with have bad unintended consequences. All I'm saying is that letting a single firm control the way most of the world will access its own written heritage is at the least an unprecedented step, and potentially a very dangerous one.

The management of Google may all be nice folks now. But what if China gets more prosperous and has so much money in its government-controlled stock investment option that one day it hauls off and buys Google? Sounds ridiculous now, but if you had said in 1965 that in forty years, General Motors would be a money-losing basket case and Japanese car makers would beat them in worldwide sales, you would have gotten peculiar glances then too. Then China would get to say who gets access to what—an eventuality that few people would enjoy or benefit from.

My point is that the concentration of information control in the hands of a few is something to be regarded with caution, to say the least. Same goes for news media, but here we're talking a lot more than just news media—the intellectual heritage of the entire human race is at stake.

Do I have any suggestions? Well, no, in this case I'm just trying to get the ball rolling on a discussion. Even if I owned stock in Google, I have no illusions that they would listen to my opinions about their project. But if we're going to go ahead with this thing, we should at least go into it with our eyes open—as long as we can still see on our own.

Sources: The Associated Press article by Natasha Robinson on Google's Book Search project and its efforts toward the preservation of historical books was published in numerous venues. I saw it in print in the Austin American-Statesman (p. D3 of the Apr. 28, 2008 edition), and a version is accessible online at http://abcnews.go.com/Technology/wireStory?id=4722073.

Monday, April 21, 2008

Human Biological Enhancement and the Ethics of Personhood

Some philosophers of the mind like to try a little thought experiment on their students. It goes something like this. Suppose some years from now, a person—an ordinary human being—gets some dreaded brain disease that gradually destroys his gray matter. But also suppose that medical technology has advanced to the point that as the brain's biological tissue dies, it can be replaced by silicon (or some equivalent futuristic material) that is functionally equivalent to the dying brain part. And so as time goes on, Mr. Brain Patient has more and more of his brain replaced by the future's equivalent of computer chips. At what point, the philosopher asks, does the patient cease to be a human and begin to be a computer?

At one time, you could laugh off the whole thing by saying nobody has ever done such a thing and it's unlikely that they ever will. But no longer. Writing in Technology and Culture, historian Michael D. Bess points out that numerous blind and otherwise disabled people have received brain implants that allow them to see or communicate in ways that are utterly impossible for the rest of us mortals. Having a bunch of wires attached to your brain is not the same thing as replacing your cerebellum with a mainframe, but the border has been crossed. What happens from now on is more a matter of degree than of kind.

Bess foresees not just advances in brain science, but in genetic engineering and pharmacology as well, all leading to what he calls "human biological enhancement." Currently, the goal of most such projects is to use technology to restore the abilities of disabled people to something close to normal: curing genetic diseases, allowing the blind to see, allowing people with strokes or myasthenia gravis who end up "locked in" (unable to move or talk) to communicate via brain waves, and so on. But what is to prevent a person who sees through a computer from attaching an infrared camera to their input so they can see in the dark? Or what if we find a drug that restores Alzheimer's patients to normal brain function, and also gives normal people an IQ of 200? What is to keep us from taking human nature as merely raw material, a rough design to be improved on with increasingly advanced engineering? And what do we call these improved beings? People? Cyborgs? Or something in between?

Bess, for his part, sees no practical way to avoid these changes. The science will keep progressing, and as the natural desire on the part of people to take advantage of enhancements pulls the technology into the marketplace, we will face the issue of how to treat folks who have version numbers after their names (Bess titled his essay "Icarus 2.0"). He imagines that the only way to stop or regulate human biological enhancement would be to pass a worldwide set of laws together with a huge enforcement mechanism to chase down any miscreants trying to do enhancements under the table, so to speak. He sees the very public failure of the attempt to regulate performance-enhancing drugs in sports as a sign that this road is doomed to futility.

What we ought to do instead, he says, is get used to it. Start now to develop an "ethics of personhood" that in his words constitutes "an expanded conception of human dignity, a more generous understanding of the word 'us'." If one day you go to your job and find that the new hire you have to work with moves on wheels, sees through cameras, and accesses the Internet just by thinking, Bess is concerned that somehow you will be tempted to view that being as something other than human. We need to start now to work on that problem so that it doesn't lead to disastrous social consequences.

Well, I'm doing my little bit by drawing your attention to this matter. I'm already working with a colleague who gets around on wheels—he has osteomyelitis and spends most of his day in an electric wheelchair. Perhaps if these changes come along slowly enough, we can get used to them.

But for some reason, in searching history for an encounter between two very different orders of being who both happened to be human, the story of the early Spanish explorations of the New World comes to mind. With their armor, ships, and guns, the Spaniards must have looked to the native Americans like R2D2 looks to us. And sure enough, a whole lot of social disruption and suffering came about as a result of that encounter. But most of the misery and suffering was experienced by the native Americans, not the "enhanced" Spaniards.

Bess seems to be worried that un-enhanced humans will discriminate against the enhanced types, because they'll look odd or peculiar. But the case of Spanish exploitation of the New World suggests that the problems will mostly be experienced by those who, for whatever reason, don't benefit from technologically enhanced abilities. Especially if enhancement is expensive (it will always be at first), you could easily end up with an elite class of enhanced humans who would regard political and social power as their right.

Aldous Huxley's 1932 dystopia Brave New World divided the genetically engineered population of the future into castes, from alphas at the top down to epsilons at the bottom, as I recall. The alphas were the natural-born leaders with enhanced intelligence, and the lowest castes were bred (or manufactured, really) for menial jobs such as elevator operators (Huxley's crystal ball didn't include much in the way of automation). Huxley avoided the problem of having the lower castes rise up in revolt by making their genetic makeup include a natural-born enjoyment of menial tasks.

I don't know about you, but I wouldn't want to live in such a world. Bess is to be congratulated for raising a concern that we ought to start thinking about now. But I believe he's looking in the wrong places for problems. The enhanced types will do just fine—the people we need to start thinking about defending are the poor, the discriminated against, and the unborn, now and perhaps even more in the future.

Sources: Bess's essay "Icarus 2.0: A Historian's Perspective on Human Biological Enhancement" appears in the January 2008 issue of Technology and Culture (vol. 49, no. 1, pp. 114-126).

Monday, April 14, 2008

Thoughts on the Passing of a Zip Drive

In my household we try not to let too much old technology pile up, so after my wife bought a new laptop the other day, we began saying good-bye to her old Mac tower. It gave good service from about 2002 until a couple of years ago, and one of the features we're going to miss is its Zip drive. Zip disks were a removable magnetic-disk storage medium that was popular from the mid-nineties until flash drives came along. The first Zip disks held 100 MB, later boosted to 250 MB, but with 1-gig flash drives now so cheap I can't imagine there's much of a market for Zip drives anymore. Thing is, we have about 40 Zip disks holding material that goes all the way back to 1988, when my wife first learned to do graphics on a computer. Some of it has been backed up here and there, but if I had to tell you where, I'd be in trouble. So I spent yesterday afternoon transferring a good many of those old Zip disks to a backup drive, and it got me thinking about the permanent impermanence of digital storage.

Every two to five years or so, a new generation of storage media comes along. If the new generation didn't rise up and commit parricide on the previous one, it wouldn't be so bad. But the hallmark of modern technology is "creative destruction," so for a new storage medium to succeed, it has to drive the previous medium out of existence. True, you can usually find antique drives, media, and even the computers that use them if you look hard enough, but having to assemble your own computer museum just to read some old files is hardly practical for most people. So the only alternative, if you don't want your old data to vanish as surely as if you had written it on paper and thrown the paper on a bonfire, is to transfer it to the next medium. Which is fine for another two to five years, and then. . . .
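
For what it's worth, the mechanics of a migration like yesterday's are simple enough to script. Here is a minimal Python sketch, assuming the old disk and the backup drive are both mounted as ordinary folders (the paths at the bottom are made up for illustration); it copies everything over, preserves the folder layout, and records a checksum of each file so the copy can be verified later.

    import hashlib
    import shutil
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Return a SHA-256 checksum so the copy can be verified later."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def migrate(source: Path, destination: Path, manifest: Path) -> None:
        """Copy every file from a mounted old disk to a backup folder,
        preserving the directory layout and logging a checksum per file."""
        with manifest.open("a") as log:
            for src in source.rglob("*"):
                if src.is_file():
                    dst = destination / src.relative_to(source)
                    dst.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(src, dst)  # copy2 preserves timestamps
                    log.write(f"{sha256_of(dst)}  {dst}\n")

    if __name__ == "__main__":
        # Hypothetical mount points; substitute whatever your system uses.
        migrate(Path("/Volumes/ZIP_DISK_07"),
                Path("/Volumes/Backup/zip_disk_07"),
                Path("/Volumes/Backup/manifest.txt"))

Of course, a script like this only postpones the problem; the backup drive will itself be obsolete in a few years, and the cycle starts again.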

And that gets me to wondering: what am I saving all this stuff for anyway? The inventor and futurist Ray Kurzweil wrote about this in one of the most human-sounding passages of a book about how we're all eventually going to live as software on hardware that will take over the universe (you think I'm kidding, go read The Singularity Is Near). His father Fredric was a musician and music teacher who fled Europe in the 1930s for the U. S. When he died at 58, the son inherited a large volume of paper documents, recordings, and other memorabilia. After starting a project to digitize all this material, Ray reached a conclusion as simple as it is startling: "Information lasts only so long as someone cares about it."

Like many of Kurzweil's philosophical epigrams, it contains elements of truth. I'm sure lots of information, in the form of paper, hard drives, old floppy disks, and so on, is eradicated every day simply because nobody needs or wants it any more, and the space or money it takes up is needed for something else. But just because somebody cares about information doesn't mean it will necessarily endure. Along with caring, the people interested in the data need the resources it takes to preserve it—whether that means space, funding for periodic migrations to new media, or archeological work.

In a way there's nothing new about this. People have been making choices about what information to save and what to toss ever since the invention of writing. Writing and paper are different in degree from Zip disks and flash drives, but not in kind. They are all technologies for the storage of a non-material entity—namely, information—using material media. You can make a good argument that the invention of writing made civilization possible, in that laws, history, customs, religious traditions, and most of what makes a culture could then be preserved independently of particular people with both good memories and the ability to pass their memories on to other people who could do the same. And I'm not one of these people who sit up at night worrying that historians of the future will have nothing to go on after the global catastrophe that wipes out all computer memories everywhere—although if that did happen, we'd all have a lot to worry about, not just the historians.

If we knew for certain whether anybody in the future would care about this or that data file, things would be easier. But you never know. Certain kinds of information, such as emails in the Executive Branch of the U. S. government, are just assumed to have historical importance, which is why the Bush administration got in some trouble a few months ago after admitting that they appear to have "lost" some emails covering several years, and had to recover them from backup tapes.

But for most ordinary, non-historical personages like myself, the candidates for people who will care about your information include yourself in the future, your relatives and children, and maybe a few friends and associates. It's actually a pretty short list. And unless you're a professional historian or plan to become the subject of one, if you don't think your list of carers-in-the-future would be interested in your tax return for 1982, you can just go ahead and throw it away.

Sources: Ray Kurzweil's The Singularity Is Near (Viking, 2005) carries the story of his attempts to archive his father's legacy on pp. 326-330. Zip is a registered trademark of Iomega Corporation, which still sells Zip drives, so maybe I won't worry about backing up those remaining disks just yet.

Monday, April 07, 2008

Whistleblowing on Southwest Airlines: Cracks of Doom or Paperwork Errors?

The lot of a whistleblower is not an easy one. And I'm not talking about football referees. In engineering ethics parlance, a whistleblower is someone who goes public with information about a safety issue, after trying without success to deal with the problem through normal organizational channels. Whistleblowers can toot either before or after something terrible happens, but the consequences for them are usually the same: isolation, criticism, and often the loss of a job or even a career. Their only compensation is the knowledge that, in most cases at least, they did the right thing.

Charlambe "Bobby" Boutris is finding out right now what life as a whistleblower is like. In 1998, the Federal Aviation Administration (FAA) hired him, and an important part of his job was to make sure that airlines complied with what are called Airworthiness Directives (ADs for short). These are rules that the FAA makes to ensure the safety of aircraft, and detail such things as regular fuselage inspections, especially for older planes.

You'd think nothing much could go wrong with the fuselage compared to moving parts like the engine, but think again. If you've ever been on a jet aircraft and looked through a window with a view over the wing, you have probably noticed that the wingtip wiggles up and down several inches during air turbulence. That is perfectly normal, and designed into the way the plane works. If the wing were built solidly enough not to wiggle at all, the plane would be so heavy that it couldn't get off the ground.

But if you've ever bent a paper clip back and forth until it breaks, you know about a thing called metal fatigue. And not only the wing, but all stress-bearing parts of the fuselage experience tiny movements that, over time, can cause metal fatigue and cracks. Most of the time these cracks are small and don't spread. But in 1988, they were responsible for one of the most spectacular airline accidents in aviation history.

Passengers in the first-class section of an Aloha Airlines flight over Maui were astonished to see the roof of the plane pop off and rip away in the violent decompression, taking a flight attendant with it. The pilot, not even fully aware of what happened, quickly adapted to the altered flying characteristics of his plane and safely landed at a nearby airport. The attendant was the only fatality, but clearly, airlines did not want to take the chance of this kind of thing happening again. Investigation showed that the plane, which was one of the oldest in Aloha's fleet, had developed fatigue cracks that had spread to cause the whole top section of the fuselage to fly off.

For this and other very good reasons, the FAA requires air carriers to inspect their fleets for fatigue cracks on a regular basis. Now, these cracks are a statistical thing, like mortality rates. It's hard to predict whether a given plane will develop a crack at a given place by a given time, but the inspections are timed so that on average, any cracks can be caught and repaired well before they become dangerous. But the system works only if you keep to the schedule.
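
To get a feel for why the schedule matters so much, here is a back-of-the-envelope Python sketch of the textbook Paris-law model of fatigue crack growth. It is emphatically not what the FAA or the airlines actually use, and every constant in it is invented for illustration, but it captures the key behavior: each pressurization cycle grows a crack a tiny bit, and the growth accelerates as the crack lengthens, which is why an inspection that slips by months or years can let a once-harmless crack become a dangerous one.

    import math

    def cycles_to_grow(a_initial_m, a_critical_m, stress_range_mpa,
                       c=1e-11, m=3.0, geometry=1.12):
        """Numerically integrate the Paris law da/dN = C * (delta_K)^m,
        with delta_K = geometry * stress_range * sqrt(pi * a) in MPa*sqrt(m),
        counting the load cycles for a crack to grow from a_initial to
        a_critical.  All constants here are invented for illustration."""
        a = a_initial_m
        cycles = 0
        while a < a_critical_m:
            delta_k = geometry * stress_range_mpa * math.sqrt(math.pi * a)
            a += c * delta_k ** m   # crack growth this cycle, in meters
            cycles += 1
        return cycles

    # Made-up numbers: a 1 mm crack growing to 25 mm under 200 MPa stress cycles.
    n = cycles_to_grow(a_initial_m=0.001, a_critical_m=0.025, stress_range_mpa=200.0)
    print(f"Roughly {n:,} pressurization cycles before the crack reaches 25 mm")

Run with these made-up constants, the model predicts on the order of tens of thousands of cycles from a barely detectable crack to a critical one; the exact count means nothing, but the accelerating shape of the curve is the whole argument for inspecting on schedule.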

Well, it appears that Southwest Airlines didn't keep to the inspection schedule. In testimony before Congress on April 4, Inspector Boutris told the story of how he found numerous cases in which inspection records were either too mixed up to tell whether the inspections had been done, or showed definitely that planes had gone as long as 30 months past the time when ADs specified they had to be pulled out of service to be inspected. It's illegal to fly a plane in revenue service if it's behind in certain kinds of inspections.

What made matters worse was that when Boutris asked permission from his FAA supervisor to issue a letter of investigation to Southwest in 2007, the supervisor told him to tone it down to a letter of concern, which does not carry the same impact. Eventually, in late March of 2007, Southwest did finish up the late inspections, but only after some airplanes had gone months or years without them. The FAA has announced its intention to fine Southwest ten million dollars for flying the uninspected planes, at least one of which was found to have fatigue cracks after inspections were finally performed.

On a scale of "who cares?" to "stick it to 'em," you can identify two extremes of how one can view this story. If you take the side of Southwest Airlines, you can point out that besides being one of the most profitable airlines in the business, they have never had a catastrophic accident in which more than one person was killed. And that incident, when a ground crew member was pulled into an engine, was due to pilot error, not mechanical failure. True, they didn't follow all the rules, but no harm was done—none of their planes popped their tops like the Aloha Airlines flight did.

On the other extreme, you can say that you keep safety records like that by following the rules, even if it means grounding a large fraction of your fleet to catch up on overdue inspections. The attitude of Boutris' supervisor appears to have been "don't rock the boat," which suggests he was more concerned with how Southwest Airlines would fare than with the safety of the flying public, despite the fact that he worked for the government. That points to systemic organizational problems within both the FAA and Southwest Airlines.

Back in high school, I attended Explorer Scout meetings that were held in the basement of a telephone exchange building. On the wall of the break room was a brass plaque, as I recall, and its words went something like this: "No service is so urgent or no business need is so critical that we fail to perform our work safely." Back then, Ma Bell had a guaranteed monopolistic income, and could afford to make safety priority number one. But I thought it was a great motto at the time, no matter what the business was or how it was doing financially. And I still do. I hope Southwest Airlines agrees with me, not just in words, but in actions as well.

Sources: A video of Mr. Boutris' opening statement before a Congressional committee investigating this matter can be viewed at http://salon.glenrose.net/?view=plink&id=6899. A CNN article on the Southwest Airlines actions and the FAA's response is at http://www.cnn.com/2008/US/03/06/southwest.planes/. The Wikipedia article on Aloha Airlines has a brief description of the 1988 accident.

Monday, March 31, 2008

BitTorrent and Comcast: Who Pays and How?

Back on Feb. 4 of this year, I noted how a group of Swedish software experts got in trouble for running a peer-to-peer system for distributing video content over the Internet. The claim made by the prosecutors was that most of the content was pirated. Well, that turned out to be a sign of things to come. For some months now, the major U. S. cable television and Internet network operator Comcast has been in a dispute with BitTorrent Inc., a firm that provides software allowing peer-to-peer sharing of video. And the outcome of the fight may affect how all of us pay for Internet services for years to come.

The first punch in the public fight came when BitTorrent accused Comcast of singling out users of BitTorrent's protocol for interference and interruptions when Comcast's network traffic got too heavy for comfort. At first Comcast denied any such discrimination, but later, under pressure, spokesmen for the cable and network firm admitted that they were doing exactly that. Then the Federal Communications Commission got involved and has held public hearings about the matter. On Mar. 27 (last Thursday), Comcast announced a number of changes intended both to eliminate the discriminatory network measures against BitTorrent users and to improve everyone's service through greater investment in software and hardware efficiency. But that hasn't stopped the FCC from announcing another hearing set for Apr. 17 at Stanford University in the heart of Silicon Valley, where I'm sure they will find people with an abundance of opinions on both sides.

What is BitTorrent and how does it work? You may recall the flaps about peer-to-peer sharing of audio files over the Internet a few years ago. BitTorrent's protocol also uses the fact that a file that one person wants is usually stored on thousands of other computers on the network. But video files are thousands of times bigger than audio files, especially if we're talking about HD video, which is becoming increasingly popular. The process of getting only one source computer to send a gigabyte-size file (1,000,000,000 bytes) over the Internet to another computer is tedious, error-prone, and takes a long time. So BitTorrent draws upon many of the other computers that have the file in question and gets them to cooperate by sending different pieces of the file to the target computer. Somehow the software coordinates all this confusion of activity, and the end result to the user is that he or she gets the desired file a lot faster than if only two computers were involved.
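
The coordination is easier to picture with a toy example. The sketch below is not BitTorrent's real protocol (there is no tracker, no networking, and the peer selection is naive); it is just a minimal Python illustration of the core idea: split the file into numbered pieces, fetch each piece from any peer that has it, verify the piece against a published hash, and reassemble everything in order at the end.

    import hashlib
    import random

    def split_into_pieces(data: bytes, piece_size: int) -> list[bytes]:
        """Chop a file into fixed-size pieces, as a torrent's metadata describes it."""
        return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

    def download(piece_hashes: list[str], peers: dict) -> bytes:
        """Toy swarm download: for each missing piece, pick any peer that has it,
        verify the piece against its published hash, and assemble the result."""
        assembled = {}
        while len(assembled) < len(piece_hashes):
            for index, expected in enumerate(piece_hashes):
                if index in assembled:
                    continue
                holders = [p for p, pieces in peers.items() if index in pieces]
                peer = random.choice(holders)          # naive peer selection
                candidate = peers[peer][index]
                if hashlib.sha1(candidate).hexdigest() == expected:
                    assembled[index] = candidate       # keep only verified pieces
        return b"".join(assembled[i] for i in range(len(piece_hashes)))

    # Tiny demonstration: a "file" scattered across three simulated peers.
    original = bytes(random.randrange(256) for _ in range(50_000))
    pieces = split_into_pieces(original, piece_size=16_384)
    hashes = [hashlib.sha1(p).hexdigest() for p in pieces]
    peers = {
        "peer_a": {0: pieces[0], 2: pieces[2]},
        "peer_b": {1: pieces[1], 3: pieces[3]},
        "peer_c": {0: pieces[0], 1: pieces[1], 3: pieces[3]},
    }
    assert download(hashes, peers) == original
    print("Reassembled the file from pieces held by three different peers.")

As I understand it, real clients add a tracker (or a distributed hash table) to find peers, rarest-first piece selection, and tit-for-tat upload incentives, but the pieces-plus-hashes structure above is the heart of the scheme.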

But as with so many things, what's good for the individual may not bode well for the group. Comcast and other network service providers estimate that because of BitTorrent's popularity, as much as half of all Internet traffic at certain times consists of peer-to-peer file sharing of this type. Comcast has defended its actions against BitTorrent protocols simply as their attempt to manage their limited network capacity fairly so that other customers were not left out in the cold with impaired service.

The word "fairly" means ethics has come into the picture. This ethical question arises from a tension that was born with the Internet some two decades ago, a tension between two competing philosophies.

Call the first the egalitarian-vision philosophy: the idea that information should be free, all Internet users should have the same privileges and access, and that such ideas should be built into the technical machinery of the Internet. The founders and early users of the Internet were imbued with this philosophy, and its legacy lives on in the basic structure of Internet protocols.

The second philosophy is the commercial free-enterprise notion that the Internet is a means to make money, and you should charge whatever the traffic will bear. It was years before anyone figured out how to make money with the Internet, but with the coming of Google I think it is fair to say that some people, anyway, have managed to do that. This philosophy sees the market as the best arbiter of resource distribution and even matters of fairness. Although there are now a few coarse-grained ways of charging more to people who want faster Internet service, hardly anyone pays a surcharge that depends on how much they actually use the thing. That is, if you ask your service provider for high-speed Internet service, you get a monthly bill that's the same whether you never touched your computer that month or whether you downloaded seventeen movies in ten days using BitTorrent.

The network operators argue, with some merit, that if five percent of their customers tie up half the resources of the entire network, it is not fair to the other 95% who pay just as much but have their service degraded by the overcrowding due to BitTorrent traffic. One alternative that Time Warner Cable is reported to be trying out in Beaumont, Texas is "metered" Internet use. That is, if you use more than a certain amount (a bandwidth-time product, let's call it), you pay an extra fee. Metered use flies in the face of decades of Internet tradition and egalitarian philosophy, but if market distortions like those caused by BitTorrent users continue, something will have to change, and the network companies may resort to metering on a wider scale.
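
To make the idea concrete, here is a trivial sketch of what a metered bill might look like. The allowance and prices are invented for illustration and have nothing to do with whatever Time Warner is actually testing.

    def metered_bill(gigabytes_used: float, base_fee: float = 30.0,
                     included_gb: float = 40.0, price_per_extra_gb: float = 1.0) -> float:
        """A flat fee covers a monthly allowance; usage beyond it is billed
        per gigabyte.  All numbers are hypothetical, for illustration only."""
        overage = max(0.0, gigabytes_used - included_gb)
        return base_fee + overage * price_per_extra_gb

    # A light user and a heavy BitTorrent user under the same (made-up) plan:
    print(metered_bill(5))      # 30.0  (well under the allowance)
    print(metered_bill(250))    # 240.0 (pays for the capacity actually consumed)

The design question is simply where to draw the allowance and how steep to make the overage price, which is exactly where the egalitarian and free-enterprise philosophies will collide.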

A curious analogy to what is happening now with BitTorrent and Comcast went on for over a century in New York City. Until the late 1980s, residential users of the Big Apple's water supply had no meters—they just paid a flat monthly fee. You can imagine how this affected the way people used water. Finally, meters were installed, and the city as a whole used 28% less water in 2006 than it did in 1979. The Internet isn't water, but like water, it is not an infinite resource, and we may have to start paying by the drink if we don't want the whole thing to break down.

Sources: Bob Fernandez of the Philadelphia Inquirer has reported extensively on the BitTorrent-Comcast dispute, and I used his articles published on Mar. 23 (http://www.philly.com/philly/business/20080323_Online_Video__Data_Tidal_Wave_.html) and Mar. 27 (http://www.philly.com/philly/business/20080327_Comcast_agreement_in_dispute_with_BitTorrent.html). The statistic about New York City water use came from the Wikipedia article "Environmental issues in New York City."

Monday, March 24, 2008

Sustainable—But At What Cost?

I read a lot of discussion these days about "sustainability," "sustainable engineering," "sustainable agriculture," and so on. Sustainability, we are told, is the key to everything from stopping global warming to achieving world peace. What exactly is sustainability, and what are its implications?

One of the most obvious features of today's technological economy is not sustainable: the use of fossil fuels, which means mainly oil, natural gas, and coal. However these resources were formed (and there is still a good bit of debate about that), everybody agrees it took millions of years, and we stand a fair chance of running through them in a good deal less than 0.1% of that time, say a few hundred years. So the use of fossil fuels for energy is not sustainable.

So what? If you look around for anything at all, technological or not, which has turned out to be truly sustainable over recorded history, the list is fairly short. Things like the practice of begetting and raising families, farming, the life of some cities (e. g. Damascus, which is one of the oldest cities on earth), and even a few (very few) business firms have gone on for hundreds of years or more, and show no sign of disappearing because of lack of resources. I could add the professions of doctors and lawyers, and let's not forget taxes, but not governments that levy taxes—the habit endures even though the taxing entities don't.

The proponents of sustainability want basically everything we do to be a part of that kind of list—a list of things which have long traditions going back over many cultures and governments into the past.

In an article in the current issue of The New Atlantis, Yuval Levin makes the point that certain ideas vigorously promoted by political liberals in the U. S. are actually quite conservative. Sustainability, if successfully implemented, fits right into this pattern. If all social activities, technological and otherwise, were sustainable in the sense that liberals usually mean, the activities would go on and on without having to end because of physical limitations. While certain features might change, the physical resources needed would be either renewable or permanent.

Now that is a very conservative picture, meaning that the physical essentials of technology would not change. If new materials were invented that required using something that couldn't be recycled and reused, then they wouldn't be sustainable, and you couldn't use them. Everything would be recycled, with energy coming only from the sun. (Strictly speaking, even the sun isn't sustainable, although we can count on it shining for a few billion more years.)

What if we went to such a totally sustainable economy? Some things wouldn't change much at all. Most steel is now made from recycled scrap, for instance, so that wouldn't be much of a problem.

But what about concrete? I have toyed with the idea of recycling concrete, because as far as I know, you could apply enough heat to it, drive off the water, and get back the calcium silicate that was in the original Portland cement. The trouble is, it would be vastly more expensive (and energy-intensive) to make cement from recycled concrete, laboriously hauled back from wherever it was poured to a recycling plant where huge amounts of energy would be required, than it would be simply to dig up some more limestone and sand from the ground. Ah, but limestone and sand are not renewable resources. Yes, there is enough limestone and sand to last us a long time, but if you're going to be a sustainability absolutist, you can't use anything that isn't recycled or, in principle, recyclable.

I'm pushing this idea to the limits to make a point, but the point is a valid one. Namely, some things are more easily sustainable than others, and it simply doesn't make sense to hold sustainability up as a practical goal for every technological field, unless we are willing to make some very weird and silly changes in the way we do things.

While I was on vacation last week, I toured Indian City U. S. A. outside Anadarko, Oklahoma. It's a sort of outdoor museum where seven different kinds of Native American dwellings have been constructed and preserved. It was pouring rain at the time, but that didn't stop our guide from pointing out the different features of the various structures which were, of course, made from all-natural materials: tree trunks, mud, grass, and so on. Native Americans were the first recyclers, he said, since when they were finished with a structure they just abandoned it and let it return to Nature.

Though I didn't say anything at the time, I had a big "Yes, but. . . " in mind. Although estimates of how many people lived in what is now called North and South America before 1492 vary from 8 million to over 100 million, the figure is certainly less than the approximately 900 million people that the New World harbors today. And the Americas are some of the least densely populated regions of the developed world. If we all went back to living the way the first Native Americans did, there is no way that we would all be able to survive, even if we all suddenly acquired the hunting, gathering, and rudimentary agricultural skills necessary for such a life. And if we managed somehow to eke out a living, few of us would enjoy rising at dawn, doing back-breaking manual labor all day, and retiring at dusk only to do it all over again the next morning.

The only time something like this has been tried recently on a massive scale was the Cultural Revolution under Mao Tse-tung in the People's Republic of China, from 1966 to 1976. Millions of intellectuals and other suspicious persons, including most of the faculty members at Chinese universities, were summarily hauled off to the countryside for a little bucolic "re-education" that lasted seven or eight years. I have known citizens of that country who lived through that period, and they tell me that it set back their lives by a decade or more, and the progress of the country by a generation. But it was certainly sustainable, in the sense that they were still living and probably consuming fewer resources than they would have in the cities.

Few if any of the proponents of sustainability have in mind a radical, total shift to something like that. Or if they do, they're not talking about it openly. I favor a reasoned, appropriate move toward more nearly sustainable technology when it makes economic sense, when its adoption won't cause undue suffering or disruption, and when it leads to more human thriving than formerly. But a swift, draconian transition to a totally sustainable economy would be in most respects indistinguishable from a worldwide depression. And I hope we don't get to that point any time soon.

Sources: Yuval Levin's article "Science and the Left" appears in the Winter 2008 edition of The New Atlantis.

Saturday, March 15, 2008

Robot Rats and SARs for PEPs

Sometimes things happen fast in politics. On Sunday morning, March 9, Eliot Spitzer woke up to the beginning of his 63rd week in office as Governor of New York State, an office which served as a stepping stone to the White House for his predecessors Theodore and Franklin D. Roosevelt. He had an apparently unstained reputation for fighting corruption in high places, earned during his eight years as New York State's Attorney General, going after everything from Enron-type financial scandals to prostitution rings.

Two days from now—on Monday, March 17—he will hand over the keys of office and become Private Citizen Spitzer. Earlier this week, the New York Times revealed that Spitzer had been a customer of a prostitution ring that was under federal investigation. This evidence was revealed by a computer scan of Spitzer's banking transactions—a robot rat, if you will. The political firestorm that the news report touched off must have convinced him that trying to stay in office was an exercise in futility. On March 12, he announced that he was resigning. Ironies abound in a situation like this, but an ironic twist of special interest to the technical community is that Spitzer was caught by software that he had himself encouraged banks to use during his years as Attorney General. How did it work?

Banks have ethical obligations both to their customers and to the governments in whose jurisdictions they operate. Customers expect banks to keep their collective mouths shut about private financial matters, and by and large, banks are pretty good at doing this. But law enforcement officials realized long ago that banks are where the money is, including illegally gotten gains from enterprises such as drug dealing and prostitution. That is why in 1970, Congress passed the Bank Secrecy Act. This act is why you have to fill out a form with some identifying information any time you engage your bank in a single cash transaction of more than $10,000.

Criminals are as adaptable as anybody, and soon they learned not to trip that $10,000 wire by breaking up transactions into smaller amounts. To plug this leak in the dike, Congress enacted the Money Laundering Control Act of 1986. Besides asking banks to report any transactions over $5,000 that looked like they were evasions of the $10,000 limit, it removed liability for over-reporting. This meant that if you got annoyed at being called by the FBI for a series of legitimate but large financial transactions, you could no longer sue your bank for falsely tattling on you.

As time went on, $5,000 became less and less money in real terms, meaning that without doing a thing, Congress gradually lowered the threshold on what banks had to report. After a few banks got in trouble for under-reporting and computerized banking became nearly universal, the banks had the bright idea of automatically reporting everything that looked suspicious. But first they had to tell the computers what "looking suspicious" meant.

One factor they loaded into their software, believe it or not, was the degree to which their customers are "politically exposed persons" (PEPs for short). If you are a governor, senator, UN delegate, or other personage whose position makes you more likely either to be the victim of a corrupt action (e. g. blackmail) or perhaps the perpetrator, you get a high PEP rating, and the threshold for making the computer spit out summaries of fishy-looking activity is accordingly set very low. Spitzer, needless to say, was a PEP, and when several large transactions to one firm showed up on a report, the bank decided to file a Suspicious Activity Report (SAR, for short) with the IRS.
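
A toy version of that screening logic might look like the following Python sketch. It is not any bank's real system; the thresholds, the PEP adjustment, and the structuring test are all invented to illustrate the one idea that matters here: the rules get stricter as the customer's political exposure goes up.

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        amount: float        # dollars
        day: int             # day of the month, used to spot clustered payments
        counterparty: str

    def looks_suspicious(transactions, pep_score):
        """Toy screening rule, invented for illustration.  pep_score runs from
        0 (an ordinary customer) to 1 (a governor); a higher score lowers the
        dollar threshold at which activity gets flagged for a report."""
        threshold = 10_000 * (1.0 - 0.8 * pep_score)

        # Rule 1: any single transaction over the PEP-adjusted threshold.
        if any(t.amount > threshold for t in transactions):
            return True

        # Rule 2: a crude structuring test, flagging several smaller payments
        # to the same counterparty, close together in time, whose total
        # exceeds the threshold.
        by_party = {}
        for t in transactions:
            by_party.setdefault(t.counterparty, []).append(t)
        for group in by_party.values():
            days = [t.day for t in group]
            if max(days) - min(days) <= 14 and sum(t.amount for t in group) > threshold:
                return True
        return False

    # Three mid-sized wires to one (fictitious) shell company:
    history = [Transaction(4_400, 3, "Shellco Consulting"),
               Transaction(4_900, 5, "Shellco Consulting"),
               Transaction(4_700, 9, "Shellco Consulting")]
    print(looks_suspicious(history, pep_score=0.0))  # True: the total tops $10,000
    print(looks_suspicious(history, pep_score=0.9))  # True: one wire alone trips a PEP

The real systems are far more elaborate, but the asymmetry is the same: the more politically exposed the customer, the less it takes to generate a report.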

At this point, humans got involved, but they could not have done their jobs without the aid of large software programs that inspect millions if not billions of transactions every year. Initially the investigators thought the governor might be the victim of blackmail, but when they found out the firm was a front for a prostitution ring, things took a different turn altogether.

Computers don't join political parties, but the people who program and operate them do. This story shows how technology can help law enforcement with investigations that in times past would have been impossible because of the sheer volume of data to inspect. Back in the days when the most advanced technology in a bank was the Friden calculating machine sitting on the comptroller's desk, a person's eyes were the only way to inspect records. That limited the nature and scope of investigations, although it also probably made it easier to do things informally that were strictly against the law, as favors both to criminals and to policemen and detectives. Today, the same criteria can be applied impartially and exactly to millions of accounts, but at some point human judgment always comes into play. Once the computers provided the information to investigators, the investigators had to decide what to do with it.

And it was human judgment, however flawed, that made Governor Spitzer think that maybe he would escape detection of his expensive dalliances. Perhaps he was unconsciously hewing to an outmoded habit he developed before his own actions helped to tighten the screws on money launderers and others who do not care for banks to report their transactions to the government. Whatever the reason, this episode shows that the power to analyze large amounts of private computerized data can make or break very influential people. And without software engineers, no one would have that power.

Sources: A good summary of the laws and processes that led investigators to Spitzer's transactions is at http://firedoglake.com/2008/03/12/money-laundering-suspicious-activities-reports-and-structuring/. A Newsday account of how Spitzer's bank discovered the specific transactions is at http://www.newsday.com/news/local/state/ny-stspitzerbank0312,0,4637246.story.

Tuesday, March 11, 2008

Engineering the End of Malaria

In my Feb. 25 entry, I used the idea of wiping out malaria as an example of what might be done with "a few billion dollars" that would otherwise go toward dealing with global warming. I will admit that I simply pulled that number out of the air. Since then I have learned that while eliminating malaria is something that people as wealthy as Bill and Melinda Gates have tried to do, it is by no means a simple or straightforward task. But engineers may be able to help in some ways you wouldn't expect.

As you probably know, people contract malaria from the bite of a certain kind of mosquito that is infected with the protozoan parasite that causes the disease. The parasite hides inside liver cells or red blood cells in its human host, which is one reason that no one has devised an effective vaccine for the disease. Drugs are available to prevent it, but you have to take them all the time, sick or well, and such prophylactic treatment is too expensive for many residents of areas such as Africa where malaria is endemic. So many anti-malaria campaigns in the past have concentrated on eliminating the animal host: the anopheles mosquito that carries the malaria parasite.

The New York Times recently carried a report about whether malaria can be eliminated as smallpox has been. It seems that the consensus of public-health experts is that you can markedly reduce the incidence of malaria through spraying mosquito-infested areas with insecticide, but absolute elimination is an elusive goal at best. In Sri Lanka, for example, systematic spraying programs helped reduce the number of malaria cases from over a million in 1955 to only 18 in 1963. But the government cut back its programs, and malaria came back, reaching a level of over half a million cases in 1968. That lesson finally learned, Sri Lanka started spraying again and hasn't stopped, and the annual rate of malaria cases is now down to a few thousand.

At a 2007 malaria conference, Bill and Melinda Gates challenged public health leaders around the world to eradicate malaria altogether. Their foundation has already spent over a billion dollars fighting malaria, but clearly more than just money will be needed.

One commentator in Scientific American has pointed out that the free mosquito-net programs sponsored by many governments may not be as effective as they could be. Here is one area where engineers can get involved. The classic kind of mosquito net hangs from a string tied to the ceiling and drapes down to the edges of the mattress, protecting the sleeper from night-time mosquito bites, which is when the anopheles variety is active. This is fine as long as you have a mattress for the net to tuck under. But in thousands of villages where a mattress for every family member would be an unheard-of luxury, young children sleep on the ground. There are rectangular frame-type mosquito nets available that will work in this situation, but they aren't as convenient as the single-string type.

This little net problem is an example of how complex the malaria issue is. Even if engineers devised a new type of net that was ideal for the poorest residents, there are a lot of problems that remain. How do you get this net into the hands of those who can use it? How do you persuade them that using it will keep their children healthier? Who pays for all of this, especially if the new net costs more than the old ineffective ones?

In times past, some engineers would have said these issues were not engineering problems. But organizations like Engineers Without Borders (EWB) realize that the hardware or software part of a solution is only a part, and often not the most important part. An effective technical solution to any problem also has to factor in economics, motivation, distribution, education, and so on. EWB is an organization dedicated to providing engineering solutions for disadvantaged communities through sustainable engineering. Through many chapters at universities and colleges with engineering schools, they recruit volunteer students who get a holistic picture of not just a technical problem, but an entire culture and the cultural and social context of the problem as well. Though I never had such an experience in my student days, I think I might have been a very different kind of engineer if I had.

Only time will tell whether the wealth of the Gates Foundation, the ingenuity of engineers, medical researchers, and public health officials, and the willingness of affected communities will converge to defeat the old tropical enemy, malaria. For the reasons I've discussed, it is a much harder task than the smallpox battle. But I wish the best for everyone involved.

Sources: The New York Times article on whether malaria can be defeated was carried in the Mar. 4, 2008 online edition at http://www.nytimes.com/2008/03/04/health/04mala.html. Scientific American's article on mosquito-net engineering appeared in the January 2008 issue, available online at http://www.sciam.com/article.cfm?id=a-better-mosquito-net. And Engineers Without Borders-International has a website at http://www.ewb-international.org.

Monday, March 03, 2008

Locked-In Profits or Service to the Downtrodden?

Suppose you're the wife of a man who got arrested in Oakland, California. You weren't with him at the time, and all you know is the bare fact that he was arrested. Until recently, your only alternative was to call the Alameda County public information number, work your way through a phone tree, and hope there would be a live person at the other end who could tell you something. Sometimes there was and sometimes there wasn't. But now, thanks to the initiative of some staff in the Alameda County Information Technology department, there is an Inmate Locator on the county's website. If you have the person's full name, or even if all you know is that they were booked in the last twenty-four hours, you can get online and see identifying information, the "custody status," and which jail they're in. Of course, you have to have a computer and a high-speed internet connection to do this efficiently, but doesn't everybody?
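
Stripped of its web front end, the core of such a lookup is not much code. Here is a purely hypothetical Python sketch of the two queries described above, search by full name or by recent booking; the records and facility names are invented, and none of this reflects Alameda County's actual implementation.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Booking:
        full_name: str
        booked_at: datetime
        custody_status: str
        facility: str

    def locate(records, name=None, booked_within_hours=None):
        """Toy inmate-locator query: filter a list of booking records by exact
        name, by how recently the booking happened, or both."""
        results = list(records)
        if name is not None:
            results = [r for r in results if r.full_name.lower() == name.lower()]
        if booked_within_hours is not None:
            cutoff = datetime.now() - timedelta(hours=booked_within_hours)
            results = [r for r in results if r.booked_at >= cutoff]
        return results

    # Hypothetical records and queries:
    records = [Booking("John Q. Example", datetime.now() - timedelta(hours=5),
                       "in custody", "County Jail No. 1"),
               Booking("Jane R. Sample", datetime.now() - timedelta(days=40),
                       "released", "County Jail No. 2")]
    print(locate(records, name="john q. example"))
    print(locate(records, booked_within_hours=24))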

Despite the drawback of needing a computer to use it, this little advance in IT touches on a subject that I have seldom seen addressed in the engineering ethics literature. What special obligations or ethical issues are related to engineering as it applies to prisoners and jails? And in particular, what should we say about the recent trend toward privatization in U. S. prisons?

You may have read that the United States has both the highest documented rate of incarceration in the world (over 700 per 100,000 population) and the largest absolute number of people behind bars (over 2 million, plus another 5 million or so on probation or parole). The reasons for this are worth going into, but for now let's just say they're a given. All these people have to be housed, fed, treated for medical conditions when necessary, shipped around, and maybe allowed some education and communication privileges. In addition, there are the families and friends of prisoners who have certain rights and privileges with regard to those behind bars. As the Alameda County IT folks have shown, engineering can benefit both the prisoners and their friends and relatives, in an entirely legal way (I'm not talking about high-tech jailbreaks here, which I suppose would be another way engineering could enter the picture).

I think it's significant that the people who came up with this idea were government employees (the article describing the system did not state otherwise). Along with the boom in prison populations has come a related boom in private prisons and companies that operate them. One of the largest, the Corrections Corporation of America, has gotten some coverage in this week's New Yorker magazine for its less-than-ideal operation of an illegal-immigrant holding facility outside Taylor, Texas, just up the road from my university here in San Marcos.

Privatization has been sold as a kind of universal solution to every government cost problem, but there are limits to what it can do. Somehow I suspect if Alameda County had outsourced its jail operations to a private firm, that firm would not have hired five web developers to come up with the Inmate Locator. Abuses can happen both in private and in public organizations, but the incentives are different.

As an employee of a state university, I view the advantages of well-run, government-operated services as chiefly these: (1) Stability: turnover in government employment is much lower than in comparable private operations. (2) Esprit de corps: in well-run government operations, public-spiritedness can foster a selfless dedication to the needs of those served. (3) Relative lack of cost-squeezing pressures: assuming the management makes a good case to the appropriate legislature, expenditures can be planned and justified without concern that a lower bidder will come along and end the whole enterprise.

I'm well aware that a critic could come along and turn each of those arguments on its head. Stability can mean that once a goof-off gets a government job, he's set for life. Private companies can develop esprit de corps too, and cost-squeezing pressures can happen in government as well as private industry.

But I would point out a philosophical difference between the two approaches. The bottom line of government service is just that: service. Ideally, the public servant is as dedicated to his or her clients as the nuns of centuries ago who founded and staffed the first hospitals. At least, there is no philosophical conflict between having a totally dedicated public servant and the overall goals of the organization.

With private companies, especially those which are joint-stock (publicly owned) firms, the fundamental philosophy is different. If a company doesn't make money for more than a certain length of time, it should disappear, and often does (despite evidence such as General Motors to the contrary). Companies can provide good services, but there is a built-in conflict between the ultimate raison d'etre of a company, which is making money for the owners, and service to its customers or clients, at least to the extent that improvements in the service or product make less profit available to the owners.

This is not to say that all corporate enterprise is morally suspect—absolutely not. But prisoners are a special kind of client, and are treated specially along with children, the elderly, and medical patients in a number of ethical contexts such as the rules for ethical conduct of research studies. Unlike a customer at a hardware store, if a prisoner doesn't like the service he's getting, he can't just walk away and go to another prison. I think that is the main reason why for nearly the entire history of prisons in the U. S., they have been exclusively a government-run operation. Maybe the government didn't do that good a job, but at least there was a way, in principle, for abuses in government-run prisons to be corrected through the democratic process. Private companies that run prisons can and do claim that vital information about their operations is a trade secret, and therefore not available for public access, at least not without a lengthy and often unsuccessful series of inquiries under the Freedom of Information Act. This kind of secrecy can hide abuses and wrongdoing that would be harder to hide in a public setting.

So what is the bottom line here? First, kudos to the IT folks in the Alameda County Sheriff's Office, who make it possible for the over 100 inmates booked each 24 hours to be found by their relatives or friends much more easily than before. Second, any time an engineer does something related to prisons or prisoners, he or she should remember that prisoners are not just any old client. They have special rights and privileges. Yes, many of them have done something wrong. But the fact that we are a country of laws means that we need to hold those laws in high regard, especially when we deal with people who may have broken them.

Sources: The article on the Inmate Locator appeared in the online issue of the San Francisco Examiner for Mar. 3, 2008 at http://extra.examiner.com/linker/?url=http%3A%2F%2Fwww%2Einsidebayarea%2Ecom%2Fci%5F8435580%3Fsource%3Drss. The New Yorker article by Margaret Talbot on CCA's operation in Taylor is entitled "Lost Children," on p. 58 of the Mar. 3, 2008 edition. Statistics on U. S. prisons were found at the Wikipedia article "Prisons in the United States."