Monday, March 30, 2009

Can Google Save Emailers From Themselves?

Most people, it seems, have sent off emails they later regretted. In last Friday's edition of Slate magazine, reporter Michael Agger comments on what Google is calling its "Gmail embarrassment reduction pack." Among the new features is a five-second window during which you can hit an "undo send" button. Agger wishes Google would come up with a more powerful version that could reach into recipients' mailboxes minutes or hours after you send a regrettable email, but there is a technical barrier that makes such a thing practically impossible: once a message has been handed off to the recipient's mail server, the sender no longer has any control over it.
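For the technically curious, an "undo send" feature of this sort almost certainly works not by recalling anything, but by simply holding each message for a few seconds before handing it to the mail system. Here is a minimal sketch, in Python, of how such a grace-period sender might work; the transport object, the message IDs, and the five-second default are assumptions for illustration, not Google's actual implementation.

    import threading

    class UndoableSender:
        """Hold each outgoing message for a short grace period before
        really sending it, so the sender can change their mind."""

        def __init__(self, transport, delay_seconds=5.0):
            self.transport = transport   # any object with a send(message) method
            self.delay = delay_seconds
            self._pending = {}           # message id -> pending timer

        def send(self, msg_id, message):
            # Schedule the real send for a few seconds from now.
            timer = threading.Timer(self.delay, self._really_send,
                                    args=(msg_id, message))
            self._pending[msg_id] = timer
            timer.start()

        def undo(self, msg_id):
            # Cancel the send if the grace period hasn't expired yet.
            timer = self._pending.pop(msg_id, None)
            if timer is not None:
                timer.cancel()
                return True
            return False   # too late: the message has already gone out

        def _really_send(self, msg_id, message):
            self._pending.pop(msg_id, None)
            self.transport.send(message)   # the message now leaves our control

Once the timer fires and the transport hands the message to the outgoing mail server, copies start propagating to machines the sender does not control, which is exactly why the more powerful recall Agger wishes for is impractical.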

Saying or doing things you are sorry for later is nothing new, but email has made it treacherously easy to fire off flaming ripostes, jokes from the Poor-Taste Review, and confidential memos, sometimes to people you later wish you hadn't contacted, and sometimes to people you never intended to contact at all, when the automatic address-completion feature guesses your intentions incorrectly. The other day I watched an old suspense movie about a woman whose husband falsely accused her of murder in a letter he mailed to the local district attorney's office. The plot's engine ran on her efforts to get the letter back from the post office, and we got a little tour of how a small-town 1950s post office handled such requests: badly, it turned out. They refused her requests at every turn. Just when it seemed that all was lost and the letter was about to fall into the hands of the DA, here it came back to the woman in the next day's mail—returned for insufficient postage!

So even when a letter took days to arrive, people could get into trouble, whether by someone's malicious hand or their own. With email, it just happens faster, and there are no friendly (or unfriendly) postal employees to go talk to and beg for your emails back.

Google is to be commended for something that software engineers do too rarely, which is to take into account the real ways that average people (not other software engineers) actually use and misuse their products.

Sometimes this works well, but other times it backfires. For example, I am using two different versions of Microsoft Word at work. The old familiar version makes .doc files, but the new version produces something called ".docx" files that my old version can't make heads or tails of. I understand that one reason each version of Word is bigger than the previous one is that "backward compatibility" is something they've tried to preserve over the years. What this means is that even files made by nearly prehistoric software (meaning, anything older than five years) should be readable by the latest applications. Evidently this got to be impossible with Word 2007, or so difficult that Microsoft decided to bite the bullet and pitch it—hence the .docx problem. This incidentally pressures anyone who receives .docx attachments to get the new version of Word, but that's another issue.

So at least part of the time, I'm using the new 2007 Word, and it tries to read my mind. For example, any time I type a period, it capitalizes the following word. If I'm typing regular sentences, that's appropriate, but when I'm typing lists, software code, or other non-prose text, a lower-case letter after a period is exactly what I mean. So then I have to go back and type the same thing again. There must be a "that's what I meant the first time" detector built into the program, because at least it doesn't keep capitalizing the letter over and over again. I've searched all over the preferences controls for a way to turn this irritating feature off, but I can't find it. Perhaps a merciful reader will write in with the solution. In the meantime, it slows me down and adds an incremental bit to the annoyance level of my job.
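To see why features like this misfire, it helps to spell out what a capitalize-after-a-period rule actually has to do. The toy Python below is hypothetical, and certainly not Microsoft's code, but it captures the rule and its failure mode on lists.

    import re

    def autocapitalize(text):
        # Capitalize the first letter following a sentence-ending mark,
        # the way a word processor's autocorrect rule might.
        return re.sub(r'([.!?]\s+)([a-z])',
                      lambda m: m.group(1) + m.group(2).upper(),
                      text)

    print(autocapitalize("it rained. the game was cancelled."))
    # -> "it rained. The game was cancelled."   (what you wanted)

    print(autocapitalize("step 1. mix the flour. step 2. add water."))
    # -> "step 1. Mix the flour. Step 2. Add water."   (every list item mangled)

The rule has no way of knowing whether a period ends a sentence or merely numbers a list item, which is why it keeps "correcting" text that was right the first time.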

It would be an interesting exercise for some anthropologists with time on their hands to try to reconstruct what software engineers and developers think human beings are like from the way we are expected to use computers. We love generic, inane clip art that tries to look different but always looks like cheap clip art; we make common grammatical errors all the time and require the help of our word processors to fix them; but we always mean to send emails immediately after we write them and never have any regrets (unless we're using Gmail, in which case the regrets always show up within five seconds). We demand tons of new features in every new software package even though we end up using only a few percent of them. We love new things of any kind, even though their added value or usefulness is sometimes hard to see. A good number of us respond to web ads placed anywhere in our visual field, regardless of whether the ad pertains to the website we happen to be looking at, especially if the ads have little animated figures of women wiggling their behinds. And enough people to make the scam worthwhile apparently believe that there really are usurped former princes in Nigeria who email strangers at random, trusting them with their fortunes, if the strangers will only send a few bucks to Nigeria first to prime the pump, so to speak.

This is not an edifying picture. To a great extent, general-purpose software and the web are a free-market response to what people are actually like, and to that extent, the picture is accurate. But instead of just extracting money from our wallets, it is good to read that some software developers are at least trying to appeal to the better angels of our nature, in Lincoln's famous phrase. I hope Google's efforts reduce the number of email flaming incidents and to that extent, make the world a better place. But human nature being what it is, I'm sure we'll find ways around it too.

Sources: The article "Can't Believe I Just Sent That" appeared in Slate magazine on Friday, Mar. 27, at http://www.slate.com/id/2214733/.

Monday, March 23, 2009

Conficker Stumps the Experts, So Far

Back in January, I blogged on the Conficker or Downadup worm that had spread to millions of computers worldwide. Conficker is a worm that is intended to form "botnets" of computers owned by unsuspecting users who have no idea that their machines have been taken over for (usually) nefarious purposes. Since then, Conficker has continued to spread, and its developer (or developers) has managed to stay a few steps ahead of the growing team of computer-security experts who are trying to foil it.

A recent New York Times article describes how the "Conficker Cabal," a team of leading security specialists from a variety of private and governmental organizations, has tried to frustrate the worm's attempts to control its botnets through a list of Internet domain names that originally numbered only 250 or so. The Conficker authors foxed the experts by modifying the program so that it can now draw on about 50,000 addresses from which to fetch its nefarious instructions, making the problem of combating it much harder. Even the U. S. military doesn't seem to know what to do. The situation grows more urgent as April 1 approaches, which is evidently the date on which the bots in the botnet will report for Conficker duty. But what that duty might be is a matter of speculation, ranging from a harmless April Fool prank to a severe attack on Internet sites of major importance, or even the entire Internet.
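A rough sketch shows why the domain-name trick is so hard to counter. The toy Python below is a hypothetical domain-generation scheme in the general spirit of what Conficker is reported to do; the hashing details are invented for illustration and are not the worm's actual algorithm.

    import hashlib
    from datetime import date

    def candidate_domains(day, count, seed=b"toy-dga"):
        # Deterministically turn today's date into a list of pseudorandom
        # domain names. Both the worm and its author compute the same list.
        tlds = [".com", ".net", ".org", ".info", ".biz"]
        domains = []
        for i in range(count):
            digest = hashlib.md5(seed + day.isoformat().encode()
                                 + str(i).encode()).hexdigest()
            name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
            domains.append(name + tlds[i % len(tlds)])
        return domains

    print(candidate_domains(date.today(), 5))

The asymmetry is what matters: the worm's author needs to register only one of the day's candidate names in order to issue commands, while defenders must anticipate and block all of them, every day. At 250 names a day that was tedious but feasible; at 50,000 it becomes prohibitively expensive.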

I'm trying to think of another case in which a high-tech system of international scope has been turned from good to evil purposes. It's not that hard. The Sept. 11, 2001 attacks on the World Trade Center used atoms, not bits, but the idea was similar: take a complex technology that involves large amounts of power and divert it to harmful purposes. Conficker lacks the element of surprise that 9/11 carried, but the level of planning and expertise required is comparable. Nuclear energy is another ongoing example. The beneficial use of nuclear energy for peaceful power reactors carries with it the constant hazard of diversion of nuclear fuel and knowhow to rogue regimes who want nuclear weapons.

A question we could ask that ties all these cases together is this: to what extent should engineers who develop a new technology take into account the evil purposes to which it could be applied? I'm not talking about accidental hazards, but intentional misuse. I can't help but think that the original developers of the Internet were not thinking too heavily along these lines when they came up with the protocols that they did. Obviously, the Internet is generally one of the greatest success stories of modern times, and the problems we have run into with it so far have not led to fatalities on a wide scale. But as we depend on it more and more and as attacks grow more sophisticated, that may change.

I have mentioned previously the need for engineers to use moral imagination, but mostly in the context of imagining how a given technology employed for its intended purpose can affect various groups of people. This is not always an easy thing to do, and it takes determined effort and a kind of thinking outside the usual engineering box to do it. But it often pays off in terms of new insights about potential problems that can be avoided, sometimes with simple low-cost fixes such as notifications or minor changes.

What I haven't considered in such musings is the need for a kind of twisted or evil imagination. It looks like you should think not only of how a technology will affect people if it is used as intended, but also of what could happen if some evil person comes along and tries to do really nasty things with it. For some reason, this line of thinking has gone farther in computer technology than in most other forms of technology, partly because attempts to defeat security measures have been a part of computer programming almost since the beginning. There are several reasons for this.

Much more than other kinds of technology, computer technology is homogeneous: there's the human programmer or user, and the machine with its software. And the prize is simple: control. While control is only one aspect of the problem with hijacking other kinds of technology, control is the major part of the battle with computer hacking. Once you have control, computers will do your bidding with entire indifference to your moral values. And computer technology is the supreme example of fungibility: a general-purpose computer can literally do almost anything, limited only by resources. So once you have control, there's no particular problem in making the botnet or whatever do your evil will.

All the same, when programmers and computer scientists create new technologies, they build into them realms of possible and impossible actions. Because of the way the system is structured, there are certain things that it is physically impossible to do with the Internet. It's too late now, but wouldn't it be nice if one of those impossible things was to create a botnet and do evil things with it? Hindsight is generally sharper than foresight, but there are always new technologies coming along, and so there is still a chance to get it right, or more nearly right, in the future.

Of course, if you're clever and wicked enough, you can take almost any technology and do something bad with it. This doesn't mean that designers should simply drop any project that could conceivably be used for malicious acts. Engineering is all about compromises and tradeoffs. All I'm suggesting is that when you can think of an obvious nefarious use for a new technology, it would be a good idea to take some small steps toward building in preventive measures that would make it harder to use in a bad way.

In the meantime, let's hope that nothing worse happens on April 1 than a few bad practical jokes here and there.

Sources: I last blogged about the Conficker worm on Jan. 16, 2009. The New York Times article "Computer Experts Unite to Hunt Worm" can be found at http://www.nytimes.com/2009/03/19/technology/19worm.html.

A Note About Broken Links: Whenever I give a source URL link, I make sure that it is working at the time I write the blog. Over time, some of these links have become broken because the source website has taken down the article or for other reasons. I do not have the resources to go back and repair old links, so if you are interested in a source URL, my suggestion is to click on it as soon as you see it show up. If you are interested in a link but find it is broken and can't locate the material any other way, you can email me at kdstephan@txstate.edu. I sometimes keep local file copies of the source material referred to, and if I have done so I will be happy to provide you with a copy if the original URL is broken.

Monday, March 16, 2009

Nuclear Power: Technical Assets and Political Liabilities

With the coming of the new U. S. presidential administration, we as a country have a rare chance to debate and decide on a new course in energy policy: specifically, where we will get our electricity during the remainder of the twenty-first century. For a number of reasons ranging from geopolitical issues to fear of global warming, many people want to get away from burning fossil fuels. Technically, one of the most promising and accessible ways to do that is to build more nuclear plants. But politically, doing that will be an uphill battle.

France seems to be one of the models that the new administration is using as an example of how to run things. It turns out that France generates over three-fourths of its electricity from nuclear power, and the French have beaten us out of the gate in the race to start building new plants. France has never had a major nuclear accident on the order of Three Mile Island or Chernobyl, and it is the only country in the world that successfully reprocesses nuclear fuel on a commercial basis (think recycling for nuclear waste). Reprocessing and a variety of yet-to-be-commercialized techniques such as fast breeder reactors promise to reduce or eliminate the need for storing large amounts of nuclear waste. While it is true that such promises have yet to be delivered on, and nuclear waste is so far stored on site at many plants, good engineering and planning are capable of dealing safely with that problem too. Unfortunately, the budget proposed by the Obama Administration eliminates funding for continuing the development of the best project the U. S. has sponsored for dealing coherently with nuclear waste, namely the Yucca Mountain program.

So why don't we follow France's example and go nuclear in a big way? I can think of at least two reasons, both of them mainly political rather than technical: fear of nuclear anything and competition from renewable energy.

A small, vocal minority in the U. S. has dedicated itself, it seems, to the proposition that all nuclear technology must be banished from the face of the earth forever. I agree with them that if we could wave a magic wand somehow and make it impossible to build nuclear weapons forever, the world would probably be a better place. (Human cussedness being what it is, I'm not sure, but on balance I think it would be.) But to this minority, nuclear power and nuclear waste are just as evil and just as deserving of eradication. A larger number of people are influenced by these minority views and hold a deep, almost instinctive revulsion for nuclear technology, especially if a new nuclear plant is proposed in their neighborhood (where "neighborhood" often means anywhere within one's state or region). Technical people can talk themselves blue in the face about how non-rational this fear is, but in a democracy, the fears of millions of voters can and should make a difference. Nuclear power has had a mainly bad press in the U. S. and many other parts of the world for decades, and that fact cannot be ignored in any effort to go nuclear with our power systems.

The flip side of that coin is the popularity that anything green enjoys these days (I'm writing this on St. Patrick's Day, incidentally, but the Irish green isn't the kind I'm talking about). You can tell by the almost desperate way companies claim to be going green with products and services that if you can label yourself green, you get a publicity boost almost regardless of whether you can back up the claim. Renewable energy sources such as wind and solar power benefit immensely from this green buzz. And that is good to the extent that we can use them as auxiliary energy sources. But the problem with most renewable sources that remain to be exploited (a category that excludes hydropower in most places, since it is already fully developed) is that they depend on the fickleness of their natural drivers. Wind blows sometimes and doesn't sometimes. The sun never comes out at night and has problems coming out on cloudy days. And since it's not practical to store electric energy in large quantities (although this issue could be addressed if we wanted to), wind and solar sources are best used for what is called "peak load": the surge in demand that occurs when everybody has turned on their air conditioners on a hot summer day, and the utility companies are desperately scrambling to squeeze every last kilowatt out of their generators. At times like those, it's great to have arrays of solar panels you can call on, and every solar-powered kilowatt you get during a peak-load period is one less kilowatt you have to generate with coal or oil.

But to go completely renewable is impractical. Solar arrays take up huge amounts of real estate and are very expensive. Some estimates I've read say that to supply even the majority of U. S. electric power with solar, you'd have to cover most of New Mexico with solar panels, and that deals only with the daytime. Wind energy is equally problematic as a source of what is called "base-load" power that you can rely on 24 hours a day, which is most of what electric utilities need to keep going. And that doesn't even address the problem of how to get the energy from where it would be generated (mainly in low-population rural areas) to where it would be used (mainly cities).

Most of these technical issues never come up in political discussions of the future of energy policy. If we went with the inclinations of the average voter, we'd get all our power from wind and solar and none from nuclear or fossil fuels. That's fine if you happen to be an off-the-grid type living by yourself in the wilds of Montana, but we simply can't run our cities and industries and homes that way, unless we tear them all down and redesign them to use about 25% or less of the power they now consume.

In Europe there is a small building boom in nearly zero-power-consumption homes. It turns out that by using vast quantities of insulation, air-based heat exchangers that take up a large part of the basement (assuming you have a basement), and a structure that approaches the shape of a sphere (which minimizes the surface area through which heat can escape), you can build a (small) residence of a few hundred square feet that uses almost no energy for heating or cooling. Somehow I don't think we're all going to enjoy living in tiny insulated igloos in the future. But if we simply go with how the majority feels about energy and ignore the technical realities, we might end up that way.

Sources: A good article on France's reprocessing facilities was carried by IEEE Spectrum in their February 2007 online edition at http://www.spectrum.ieee.org/feb07/4891. The statistic about France's nuclear power as a percentage of all power was obtained from an International Herald Tribune article at http://www.iht.com/articles/2008/08/17/europe/17francenuke.php.

Monday, March 09, 2009

Stem Cells and "The Prestige"

If you haven't seen the remarkable 2006 film The Prestige, quit reading this blog and go rent it, because there's a "spoiler" in the next paragraph.

If you have, you will remember among the final scenes the sight of one hundred tanks of water, each containing the drowned body of a "duplicate" of the magician Angier. Each body was created and destroyed in a matter of minutes during the performance of a magic trick. The fictional form of cinema drives home, as no dry argument can do, the horror of how a man driven by worldly ambition for fame and fortune could bring himself to produce and then kill dozens of human beings.

That scene comes to mind as I am writing this blog early on the morning of March 9. Later today, if all goes according to plan, President Obama will announce the rescinding of President Bush's order restricting federal funding of embryonic stem-cell research. According to the New York Times, the President is doing this as part of his pledge to "separate science and politics."

How will increased federal support, by tax money designated by the duly elected Congress of the United States, for research that destroys human beings who under normal circumstances would develop into babies, children, and adults more or less like the rest of us, be a step in the direction of "separating science and politics"? If anyone deserves credit for separating science and politics, it is former President Bush, who, after careful consideration early in his first term, decided to allow limited federal support of embryonic stem-cell research using only existing stem-cell lines, so that no more embryos would be destroyed for the purposes of this research.

That was a long time ago. Since then, science has progressed to the point that cells from the adult body can be made to do nearly everything that embryonic stem cells do, and without the destruction of embryos. According to Yuval Levin, director of the Bioethics and American Democracy program at Washington's Ethics and Public Policy Center, the number of labs using these non-embryonic "induced pluripotent stem cells" had increased to about 800 by the fall of 2008.

But in the meantime, politicians shanghaied the science for their own purposes. We were showered with TV ads and shows portraying victims of neurological damage such as Michael J. Fox and the late quadriplegic Christopher Reeve as being made to suffer primarily because of Bush's partial ban on embryonic stem-cell funding. Voters in the state of California were persuaded to approve Proposition 71 in 2004, which authorized a $3-billion bond issue designated for human stem-cell research. Despite these efforts, and despite privately funded research in this country and abroad, not a single therapy based on human embryonic stem cells has even reached the stage of clinical trials, according to Levin.

The claim that to allow unrestricted federal funding for embryonic stem-cell research is to separate science from politics is the exact opposite of the truth. Decades ago when the government was smaller, federal funds were treated with a certain amount of deference and respect. Having been forcibly extracted from the entire populace, federal money was held in special regard and used only for causes such as national defense and scientific projects that showed clear and unequivocal promise of furthering the public good.

Not only has science recently shown that embryonic stem cells are probably not the way to go in stem-cell research, but the old idea that we would need lots of them to inject into patients for treatment is also becoming passé. More recent studies indicate that molecular biology directed at particular genetic switches will be more effective than the crude injection of stem cells, which tends to produce malignancies and other problems that are often worse than the diseases the cells were originally intended to cure.

This is the science that needs to be separated from politics to a greater extent than it is already. Any time you have public funding of science, science tends to become politicized. But it is at least possible for the influence of politics on science to be minimized by a hierarchy of authority. The best people to decide on a tactical level which science should be funded are the scientists themselves, which is why agencies like the National Science Foundation and the National Institutes of Health conduct peer reviews of proposals. It is by no means a perfect system, but it is vastly superior to earmarks or other political approaches that channel funds directly to certain projects or institutions regardless of their scientific merit or qualifications. However, scientists cannot always be trusted to do things in keeping with the moral inclinations of the public, and that is why Bush decided the way he did about limiting funding for embryonic stem-cell research, as a part of his strategic outlook on the broad politics of science research. Not everything that can be done should be done, and scientists should not have the last word in all cases over how public money is spent.

But political causes, once set in motion, tend to take on a life of their own independent of rational thought or scientific progress. There are millions of people out there convinced by politicians that the only thing standing between us and Heaven on earth is Bush's restrictions on embryonic stem-cell research.

It looks like President Obama is going to do what he said he would. A lot of people (embryos are people, they're just a lot younger than you and me) will die as a result, and a lot of other people will be disappointed that all the claims of miracle cures don't pan out. And science will get more deeply embroiled in politics than it ever was before.

Sources: The New York Times story on Obama's plans to rescind the Bush rules can be found at http://www.nytimes.com/2009/03/07/us/politics/07stem.html. Yuval Levin's report "Biotech: What to Expect" is carried in the March 2009 issue of the journal First Things, pp. 17-20.

Monday, March 02, 2009

Software Engineers as Legislators: Is Code Law?

The other day someone (perhaps a publisher's representative, or a colleague who thought I'd be interested in it) put in my mailbox a copy of David G. Post's new book In Search of Jefferson's Moose: Notes on the State of Cyberspace. Whoever did it was right to think I'd be interested. But rather than review the whole book (which tries to tie together cyberspace, Thomas Jefferson, a stuffed moose he went to great trouble and expense to have shipped to him from the U. S. to France, and a great variety of other matters), I would like to cogitate on just one idea from it: the notion that in cyberspace, "code is law."

The word "law" has at least two distinct common meanings. If I say, "You can't drive over 40 MPH on this street, it's against the law," the word means the set of rules enacted by a duly authorized governmental body. In a democracy the laws are presumably made by representatives of the people. In a dictatorship they're made by the dictator. But in either case they are human constructions. And when I say "can't" I'm not being strictly accurate. It's not physically impossible to drive faster than 40 MPH on that street, but if you do, you are liable to get caught and pay a fine.

The other important meaning of "law" is what we mean when we talk about the law of gravity, for instance: a natural principle that governs how the universe works. Try as you might, you simply can't defeat the law of gravity. It's part of the structure of the physical world. Obviously, we can do something about human laws—debate them, even change them if necessary—but all we can do about physical laws is try to understand them better so we can work within their constraints.

Which of these two meanings applies better to the idea that in cyberspace, "code is law"? That's actually a quotation in the book from Lawrence Lessig, a law professor who has written extensively on intellectual property in cyberspace. What he means is perhaps best illustrated by an example.

It turns out that embedded in the underlying HTTP protocol on which all web browsers run is a provision for what is called a "referrer field" (spelled "Referer" in the protocol itself). This is how Google gets paid for sending people to its advertisers' websites. The referrer field tells the advertiser that the visitor came from a Google page, and Google can collect its fee by using this information. The only way Google can do this, though, is by means of the "law" that provides for the referrer field.
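Concretely, the referrer field is just one line of text in the request a browser sends to a web server. The Python sketch below constructs such a request; all the URLs are placeholders, and this is an illustration of the mechanism, not Google's actual accounting system.

    import urllib.request

    # A hypothetical ad click: the browser requests the advertiser's page
    # and volunteers where the link was clicked. The URLs are placeholders.
    req = urllib.request.Request(
        "http://advertiser.example.com/landing-page",
        headers={"Referer": "http://www.google.com/search?q=widgets"},
    )

    # What actually travels over the wire is plain text like this:
    print("GET %s HTTP/1.1" % req.selector)
    print("Host: %s" % req.host)
    for name, value in req.header_items():
        print("%s: %s" % (name, value))

The advertiser's server simply logs that Referer line, and the log entry is the evidence that the visitor arrived by way of Google.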

Post points out that if, way back in the beginning of hypertext and browsers in the 1990s, the engineers who wrote the HTTP protocol had neglected to allow for the referrer field, Internet commerce would be very different. More specifically, millions of common transactions we are used to making today would be impossible. What kind of law would make them impossible?

If you say that it's a physical law, like gravity, I will point out that the code enabling or disabling the referrer field was written by ordinary (more or less, anyway) human beings calling themselves software engineers.

But if you say it's just like a law on the books of a state or country, I will point out that, unlike such man-made rules, "code laws" are not, strictly speaking, illegal to break; breaking them simply doesn't work. If you pretend there's a referrer field that isn't there, something very much like physical law intervenes to stop you, since you are denying a part of reality.

So the constraints and allowances imposed by the software structure of cyberspace borrow characteristics from both physical law and legislative law. This fact is underappreciated by at least two groups of people.

The first group is the software engineers themselves. I don't know why the early HTTP code warriors put that referrer field in there, but making the founders of Google fabulously rich was probably not foremost in their minds. It probably served some minor technical function that paled into insignificance once the commercial possibilities of its use came to the fore. No one can foretell the future with perfect accuracy, but it would be nice if software engineers working in fields that are likely to influence the behavior, and even the freedom, of millions of people would at least recognize that they are playing the part of legislators, usually without knowing it. Maybe a few of them do realize the broader implications of what they're doing, but it is a rare engineer who has even an average legislator's appreciation for the needs and wants of the public. That is one reason why so much software has annoying habits that make you want to go hunt up the guy who wrote it and give him a piece of your mind before you lose it on account of the software.

The second group, which includes practically everybody nowadays, is the public at large, who uses, deals with, or is (sometimes) victimized by software. You need to know that it is possible, at least in principle, for things to change, even in software. Unfortunately, the governance systems erected by those in charge of the Internet and allied software standards are typically as complicated as the software itself. I have noticed that whenever engineers are left to themselves to design an organization, whether it's a five-person committee or something as large as the 300,000-member Institute of Electrical and Electronics Engineers, they will typically devise a legislative monstrosity with interlocking boards, districts, criss-crossing lines of authority, and other features that leave the outside observer with no clear sense of who is in charge. It's hard even for technical people to get anything useful out of such organizations, and as for the general public—well, "forget it" may be a tad discouraging as advice, but the systems are usually not designed for ease of access by non-experts.

But as with any problem, people of good will can at least try to make things better. For you software engineers out there, try to think outside your little code box and consider the wider implications of your work, especially if you're fooling with stuff that millions of people will use. And as for the rest of us, if you ever get a chance to have some input on software design, take it and run. You stand a good chance of making cyberspace a better place.

Sources: In Search of Jefferson's Moose: Notes on the State of Cyberspace by David G. Post was published in February 2009 by Oxford University Press.