Friday, May 29, 2009

Does the U. S. Need a Cyber Czar?

On Friday May 29, President Obama is scheduled to announce a plan to name a “cyber czar” whose responsibility will be to oversee computer security both in and outside the federal government. The term “czar” in Russian originally meant an emperor whose reign was maintained by the authority of God. Somehow I doubt that such overtones of meaning are intended by the PR people who put together these news releases and the members of the press who report them. But it is a good place to start asking whether the U. S. really needs such a czar for this increasingly important area of technology, and what the good and bad aspects of such an appointment might be.

From time to time we have discussed various cyberthreats in this blog, and so far, none of them have turned out to be the Armageddon of viruses or cyberattacks. The trend in recent years, however, is not reassuring. Back when email was a novelty engaged in by a few nerds and their friends, the worst motivation of those who wrote viruses or produced spamware was a kind of intellectual mischievousness: “Gee, can I really get away with this?” But eventually, people figured out there was serious money to be made, either quasi-legitimately via spamware advertising of kooky products, or illegally via shakedowns and blackmail threats (“If you don’t want your whole website to go down next Tuesday, leave $100,000 in unmarked bills in the trash can next to the entrance of the Kremlin tonight.”). And in the last year or two we’ve seen pretty definite evidence that nations are using cyberattacks as part of more conventional warfare, as when Russia evidently coordinated a cyberattack on Georgia’s government websites last August during its attack on contested territory between the two countries.

So the threats are real, no doubt about that. The question is, can we defend ourselves better against them if we have some centralized governmental authority taking some as-yet-undefined actions? That question has to be answered in the context of how things are done currently.

Like the Internet itself, the U. S. system (if you can call it that) of defense against cyberattacks consists of a not very organized, highly distributed network of specialty firms, companies that simply want to use the Internet legitimately without hindrance, and the various governmental entities that use computers, which is (I hope by now) all of them. Judging by various reports, the private firms seem to do a better job of security and upgrading, including defense against attacks, than the government does. But this may simply be an artifact of accessibility. Reporters can file requests under the Freedom of Information Act to obtain a wide variety of government records, but there is no such privilege with regard to the internal documents of private firms. So if (to take an example) Bank of America makes a big goof in purchasing vulnerable ATM machines that can be programmed to spurt out piles of twenty-dollar bills to a waiting kid on his tricycle, as long as they catch the problem and fix it before it hits the news wires, no one is the wiser. But let that happen in a government agency, and reporters can get all the documentation on it they want, usually.

That doesn’t mean the government is necessarily less competent in dealing with cyberattacks. One danger I can foresee is that of burdensome regulations in what is historically a very unregulated industry. If Microsoft had to prove to some government bureaucrat that its new software upgrade was bulletproof against cyberattacks before it could be released, we’d all still be running OS/2 on our PCs (except for those of us using Macs). But the advance reports indicate that the new cyber czar won’t have even the authority of a cabinet official, nor the highest level of Presidential access.

So if the new czar can’t do much, why should we bother? One aspect of the situation appears to pertain to public education. I suppose if the President talks about what you as an individual can do to improve computer security, a certain number of people will pay more attention, but it does seem like it might be a needless expenditure of political capital. On the other hand, if we are made aware of the cost of cyberattacks in terms of centrally analyzed statistics publicized by the government, that might motivate some changes.

This problem resembles environmental issues in that it is essentially a global, not strictly a national matter. The Internet knows no boundaries, and in fact many if not most cyberattacks on U. S. institutions come from abroad. That means a solution, or more likely a range of solutions, will have to have international aspects to it: international agreements, international coordination, and so on. And for this the federal government is probably the best choice.

In sum, let’s wait and see how czar-like the new czar acts. There is no need to worry that the designee will take over the universe, even the cyber-universe. And there is a lot of room for improvement both in the public and the private sector. But government can do only so much, and it will be interesting to see whether the person chosen makes a positive difference, or disappears after the next federal initiative grabs all the headlines.

Sources: An Associated Press article describing the results of a Presidential study of cyber security and related issues can be found on the Business Week website. My blog “War Comes to the Internet” was posted on Sept. 8, 2008.

Monday, May 25, 2009

Ethics Education: How Can You Tell?

One reason I started this blog was to use it in a short (four-week) engineering ethics segment of a freshman course for engineering and technology majors. When you teach something, you obviously expect the lessons to make some kind of positive change in your students. You hope that they will be able to do or understand something that they couldn’t do, or didn’t understand, before you went to work on them. With technical subjects such as circuit theory or computer programming, it’s fairly easy to tell whether the students learn what you want them to learn. That’s what exams are for. But how can you tell whether ethics instruction has achieved its goal, which is to turn students into ethical engineers?

In an ideal world of infinite educational-evaluation resources, you would do a longitudinal study of two groups of students: one group who took engineering ethics, and a second matched cohort of students who took the same courses as the test group except for the engineering ethics parts. You would then follow every student, doing in-depth interviews and gathering third-party information about the ethical aspects of their work over their entire careers. And at the end of this process (which would take thirty-five years or so), you could write a paper saying we know for sure that X number of students who took Y ethics module thirty-five years ago were Z per cent more ethical than the control group of students who didn’t. Only, of course, Z might turn out to be negative. All it takes is one determined crook in your test sample to throw everything off.

And that ties in to something I learned last week. It’s not only universities that try to improve their students’ ethics with educational modules; companies and governments try it too. In particular, every employee of the state of Illinois has to take a brief online ethics module periodically, up to and including (I presume) the governor. You may have heard the name Rod Blagojevich in the news over the last six months or so. He is the (now ex-) governor of Illinois who was impeached for trying to sell the Senate seat vacated by now-President Obama. This was hardly ethical behavior under any standard, yet Blagojevich was following a tradition honored by numerous Illinois governors, of engaging in indictable behavior. If anyone evaluating the ethics education of Illinois employees included the governor in the sample, they would have a lot of trouble showing that it helps.

But wait. Should one or two spectacularly bad apples spoil the barrel? That is, you are always going to have what are called “outliers.” If you are familiar with a Gaussian distribution, often called a “bell curve,” you know that it looks like a hill with gently sloping sides. If one of these distributions represents some measure of “ethicalness” (an ugly word, but I can’t think of a better one), then the peak of the hill represents the bulk of students who have what you might call typical or average ethics. Mother Teresa would be in the right-hand tail of the distribution, way off to one end, and the ex-governor would be somewhere in the left-hand tail.

You can make the argument that even though ethics education doesn’t prevent the occasional Blagojevich, if it moves the whole distribution to the right it makes the average person more ethical, which is worth something. But then you get into the utilitarian bind of evaluating the worth of ethics to society in general. Is it better that most people in a profession are a little more ethical even though some are still news-worthily unethical, or would it be better if somehow we could prevent only the worst ethical lapses and leave the rest alone? And all this assumes that there is some fail-safe way to evaluate ethics education other than the impossibly expensive and lengthy longitudinal study I described above, which is by no means clear.
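The distribution-shift argument can be put in concrete terms with a back-of-the-envelope calculation. The numbers here are purely hypothetical (a made-up “ethicalness” scale and a made-up definition of the bad tail), but they show how even a modest rightward shift of the whole bell curve thins out the left-hand tail where the Blagojeviches live:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Probability that a normally distributed variate falls below x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical "ethicalness" scale: population mean 0, standard deviation 1.
# Suppose anything below -2 counts as news-worthily unethical behavior.
threshold = -2.0

before = normal_cdf(threshold, mu=0.0)  # bad-tail fraction with no training
after = normal_cdf(threshold, mu=0.2)   # same tail after shifting the mean right by 0.2

print(f"bad-tail fraction before: {before:.4f}")  # about 0.0228
print(f"bad-tail fraction after:  {after:.4f}")   # about 0.0139
```

Under these made-up numbers, a shift of one-fifth of a standard deviation cuts the bad tail by roughly forty percent, yet the tail never reaches zero, which is the utilitarian bind in miniature: the average improves, but the occasional outlier remains.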

All education involves some degree of faith, which is the certain knowledge of things we don’t see yet. Even if my students pass exams on digital logic or electromagnetics, I can’t say for sure what they’re going to do with those pieces of knowledge and ability. I can only trust that they will remember them and use them somehow in a good way. Experience has shown that the vast majority of our students do just that, although I can’t instantly pull up tons of documentation to prove it.

In the last several years, whenever the National Science Foundation funds programs to augment and encourage engineering ethics education, it insists that the outcomes of these programs be evaluated by some independent means. Their argument is that taxpayer money is being used for these programs, and the agency has to go back to Congress and show that the money did some good. Although the motive is laudable, I have questions about the method. The same person who told me about the online ethics course in Illinois is an expert in evaluating ethics education, which he admits is not a perfect process either. The preferred way is to administer a survey in which students answer questions about hypothetical ethical situations. As I say, it’s better than nothing. But it seems to me that the process of evaluating ethics education mainly to generate some paperwork to send back to Washington is motivated by the same spirit that causes the government of the state of Illinois to insist that all its employees take the online ethics module. In both cases, the ostensible motive seems to differ from the real motive.

Ostensibly, one is doing something that will genuinely improve (or measure) the ethics of the target population. But in reality, both the online ethics module and the ethics evaluation process serve mainly to shift responsibility for any possible bad outcomes. If another Rod Blagojevich shows up, Illinois government administrators can say, “Well, we did all we could—we made him take that ethics module.” And if despite all efforts, the next couple of decades turn up a few engineers who, despite taking NSF-evaluated engineering ethics education, go ahead and do something unethical anyway, the National Science Foundation can turn to Congress and say, “Well, we did all we could—we evaluated those programs with the best available evaluation instruments.”

Am I saying we should chuck all attempts at evaluation, or even ethics education? By no means. But let’s be realistic about what we’re trying to do, and not pretend that it’s capable of more than it can really do, which is simply to give us some reason to hope, but not to be certain, that we are making people more ethical.

Sources: Michael Loui, professor at the University of Illinois Urbana-Champaign, told me about these matters. I would point you to some information about Rod Blagojevich, but I think he’s already had more attention than he deserves. And my definition of faith is taken from the New Testament book of Hebrews, 11:1.

Monday, May 18, 2009

Crash of Flight 3407: The Human Factor

Last February 12, Continental Airlines regional Flight 3407, operated by Colgan Air, crashed short of the Buffalo, New York runway it was headed for, killing all 49 aboard and one person on the ground. When I wrote about it shortly afterward, speculation centered on how well the deicing systems were working, since icy conditions had been reported in the area. But after a three-day hearing on the crash held by the National Transportation Safety Board last week, it looks like human error may be the root cause of the crash.

Working with voice-recorder transcripts and flight data from the "black boxes" recovered from the crash, NTSB investigators painted a picture of the last minute or so of the flight that did not show pilot Marvin Renslow and his 24-year-old copilot, Rebecca Shaw, in a good light. During their final approach, when FAA regulations prohibit nonessential communications in the cockpit, the pair are heard chatting about careers and the copilot's lack of experience flying in icing conditions. Renslow himself had only three months of experience flying the particular Dash-8 involved in the crash, and had failed several flight simulator tests in the preceding few years. Besides these factors, fatigue may have further dulled the crew's responses: Shaw had joined the flight after commuting all night from her home in Seattle, where she lived with her parents. Her raising the plane's flaps without a command from the captain compounded the already critical situation the pilot found himself in when the plane lost airspeed and began to stall. Under these conditions an automatic system activates a "stick-shaker" intended to alert the pilot to the danger. The proper response is to move the stick forward to regain airspeed, but records indicate Renslow pulled it back. After stalling, the plane rolled and crashed.

The impressive and improving safety record of U. S. air travel says that on balance, nearly all pilots do the right thing in critical moments nearly all the time. But the fact that the safety record for smaller regional carriers such as Colgan is not as good as for the major carriers flying larger aircraft says there may be something about the difference in working conditions between long-range and regional carriers that bears watching, to say the least. A lot of the news coverage of the NTSB hearing centered on co-pilot Shaw's meager annual salary, which was less than $17,000 (not counting extra flying time). Deregulation of the airline industry plus the recent recession has brought intense competitive pressure to regional operators, who may be cutting corners and hiring inexperienced pilots with less-than-stellar records simply because they're cheaper. The Federal Aviation Administration has regulations about minimum standards for pilot training, performance, work hours, and rest breaks, but these things are human rules, and rules can be bent or broken without automatic penalties coming into play. At least, until something bad happens.

The loss of any life in an engineered system is a tragedy. But if the publicity surrounding the accident and its investigation result in corrective action, we can look forward to further improvements in safety procedures and their enforcement.

At last week's hearing, a NASA expert in cockpit communications acknowledged that more could be done to give pilots even earlier warning of potential stall conditions than the stick-shaker provides. This is a problem in what is called human-factors engineering: how to effectively interface a machine to a person so that the person has the right information at the right time in order to take the right action. By the time the stick-shaker went off, the pilot's options were very limited. If an earlier warning had been provided, the crew might have snapped out of their inattentive mood sooner and realized their difficulties in time to avert the accident. We will never know about this particular case, but if the investigation results in improved cockpit instrumentation that saves other inattentive crews from getting into the same fix, something good will have come from this crash.

The current federal administration seems to be more interested in regulation than deregulation, and there may be areas where such a change is appropriate. One reason that co-pilot Shaw's low pay got so much attention was that it is such a contrast to the typical popular perception of airline pilots: distinguished-looking former military flyers with some dignified gray around their temples (nearly always men), good pay, and years of flying experience. Stereotypes are made to be broken, and my hat is off to any young woman who goes through the arduous process of becoming a commercial pilot, but in the bad old days of high airfares and closely regulated airlines, the companies could afford to hire the very best pilots available, and generally did. The case of Shaw may indicate that inexperienced crews are being pushed too fast into positions of great responsibility without adequate training, or even sleep.

As sad as this accident was, we are starting to see the feedback system of engineering work. I don't mean the stick-shaker; I mean the corrective process that learns from mistakes, errors, and tragedies, and does things to make them less likely in the future. This kind of work takes place out of the spotlight, in quiet offices and labs around the world, but it is the reason that air travel is as safe and reliable as it generally is. And as long as we pay attention to the rare cases when something goes wrong, and have the courage to fix problems—whether mechanical or human—it will keep on getting even safer.

Sources: Two good reports on last week's NTSB hearings may be found online. My article "The Crash of Flight 3407: Better Deicing Needed?" appeared on Feb. 16, 2009.

Monday, May 11, 2009

An Orbital Service Call to Hubble

Today, if all goes well, the Space Shuttle will take off with a cadre of astronauts whose main job will be to act as glorified technicians. There's nothing wrong with doing a technician's job well, and although I have said critical things about the Space Shuttle and NASA in the past, this trip is more justifiable than most. The Hubble space telescope, launched in 1990, has already outlived its nominal lifetime, and with some judicious repairs, scientists hope it will run for at least another five years or so. But as a recent National Public Radio report describes, fixing Hubble is no ordinary service call.

Take the 111 screws, for example. I have enough trouble in an ordinary 1-G lab keeping track of small screws involved in my research projects. If I spend a day or so building something, I'm pretty sure that at least a few minutes will pass with me on my hands and knees on the floor, looking for a critical nut or bolt that jumped off the edge of the workbench. Well, it turns out there's an instrument box on Hubble that needs to be accessed for repairs, but the designers never meant for it to be fooled with anywhere but on the ground. Hence the 111 screws, which would form a toxic cloud of malicious orbiting metal if just released around the telescope. Never fear, though. NASA engineers under the direction of Jill McGuire devised a plate with 111 or so tiny plastic boxes that fit exactly over the screws. A hole in each box is just big enough for the screwdriver to go through, but when the screw comes loose the only drifting it can do is inside the box. A snap-on replacement cover is part of the repair kit, so the astronaut doesn't have to find all those screws and put them back on.

This is engineering of an extreme kind, and I suppose that in testing the extremes of repair operations in the vacuum and weightlessness of space, NASA may come up with something that we ordinary Jills and Jacks could use as well. Back in the days when NASA was searching for reasons to justify itself after the end of the Apollo moon program, you heard a lot about "spinoff technologies"—ideas that were originally developed for the space program and turned out to be useful for earthbound applications as well. I have the unconfirmed impression that Velcro may be in this category, but other than that, I can't think of anything that's made a huge difference to the economy. I'd like to have one of those sleek little vacuum-and-zero-G-adapted hand drills they're using for my own toolbox, but not if I had to pay $180,000 or whatever the equivalent cost would be.

The Hubble, like most astronomy, is pure science, and science is its own justification, culturally. To do certain kinds of science, you end up developing some weird engineering, such as plates that capture 111 screws in the vacuum of space. Offhand, I can't think of any other circumstance in which you'd need a screw-capturer like that, but maybe tools developed for some other obscure task the astronauts will do up there will turn out to have beneficial consequences down here. Even if they don't, just getting the astronauts up there safely and back is something that takes a lot more resources than developing the hundred or so tools they'll carry with them. But that would get us into the manned-versus-unmanned space flight argument, and hey, I'm on vacation. I'd rather not argue. Let's just hope the repair trip goes well and Hubble gives us another half-decade or so of fine science. By which time, I also hope, we're well on the way to replacing the outmoded Shuttle with something better.

Sources: A written form of the report about NASA tools carried on NPR can be found on NPR's website.

Monday, May 04, 2009

In Search of the Perfect Email Software

Email is as much a fact of life nowadays for most knowledge workers as opening the morning snail mail used to be. I don't know about you, but just dealing with email has lately gotten to be a time-sink and chore I don't look forward to. Anyone who can improve this situation will certainly do a lot of people a lot of good, and that's a good example of engineering ethics in my book. Part of the problem, no doubt, is my high expectations for what should happen to my email. In what follows I'm probably going to show off my ignorance and prejudices in a good strong light, but it may be worth it if something close to my ideal software ever turns up.

I'm one of those people who takes seriously the thought that months or years after I get an email message I care about, I should be able to find it any time my computer is on, whether it's connected to the network or not. This means (unless I'm blessed with a total-recall photographic memory, which I'm not) that important emails (that is, ones I decide to keep) need to be sorted somehow and should physically reside somewhere on my laptop for access without a network connection.

Back when email was a novelty and getting three emails a day was a comparative blizzard, these requirements were easy to meet. Sorting email into files on my computer took maybe thirty seconds. But nowadays, if I skip reading my email for only twenty-four hours, when I check it again there are easily fifty or a hundred of the little jewels, only a few of which I am interested in. The rest runs from notices about worker-training courses I don't need, to offers to help princes get their money out of countries I've never been to, and worse.

I used to pride myself on doing what the older generation called "clearing my correspondence," which meant that every day, I checked out every email (at least by its source and subject line), either threw it away or filed it somewhere using the software filing routine, and got the inbox down to either zero or the two or three emails I hadn't decided what to do with yet. Filing consists of negotiating one of those multiple-level popup menus, most layers of which have so many items that I have to use the scroll function, which on no email program I've tried has a scroll bar, so I have to slide to the bottom of the visible list and stand on the mouse till the desired category comes into view, at which point I select it and sometimes have to do the whole thing over again at the next menu level (I have files within files within files, sort of like wheels within wheels). This means that filing a single email sometimes takes twenty or thirty seconds, and oh! the joy when the very next email in the list turns out to belong right where the previous one went—another thirty or forty seconds, because this time I'm mad and slip up and select "Nutcases" instead of "NosferatuTheorists"—well, in that case it wouldn't matter, but you know what I mean. So after a half hour or forty minutes of this kind of thing, I struggle back up the Sisyphean slope to a mostly empty email box, only to turn my back for a few hours and face a door-filling pile flooding in again, metaphorically speaking.

So how would the perfect email software help me? For one thing, I could use it on either of my two main computers. The way it is now, I can have part of what I want—files of old email without Internet access—only on my office computer. For some obscure reason known only to IT professionals, I can send emails with computer-resident software like Thunderbird (or the old Eudora) only if I'm physically plugged in to my university server. If I'm anywhere else, I have to log into the internet-based software program the University runs (it's like Gmail in that respect), send an email, and copy it to myself in order to have a permanent copy that I'll later download into my Thunderbird resident software, but that adds to my already tedious task of sorting email.

Returning to the elusive purpose of describing the perfect email software, I'd better resort to bullets if I'm going to finish at all. It would:

--- Store all the email I decide to keep in an intuitive, use-frequency-based filing system (one that makes the more frequently used files easier to get at, and saves the four-layer menus for ones I access every three years or so)
--- Be accessible anywhere in the world, for sending as well as receiving, and leave a permanent sorted record of sent emails on my machine as well as on some server somewhere
--- Automatically figure out the procedure for getting off an email list and write the necessary messages once I put a sample undesirable email into a "get rid of this junk" file
--- Use some kind of quasi-intelligent processing to figure out which email sources I'm really interested in and which I'm not, and rank order these within some kind of time-based presentation, that is, most recent interesting ones first, older interesting ones later, and so on
--- Give me access to all emails I decided to keep, going back to the dawn of time (email time, anyway), with or without internet access
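The use-frequency idea in the first item on the wish list is simple enough that it can be sketched in a few lines. This isn't tied to any real mail client; it's a minimal illustration in Python, with hypothetical folder names, of ordering filing destinations by how often they've actually been used:

```python
from collections import Counter

def rank_folders(filing_history):
    """Order folders by how often mail has been filed into them,
    so the most-used destinations surface at the top of the menu."""
    counts = Counter(filing_history)
    return [folder for folder, _ in counts.most_common()]

# Hypothetical filing history: the folder chosen for each kept email.
history = ["Teaching", "NSF", "Teaching", "Family", "Teaching", "NSF"]
print(rank_folders(history))  # ['Teaching', 'NSF', 'Family']
```

A real mail client would presumably recompute this ranking in the background as you file, which is exactly the point: the software does the reorganizing, not the user.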

There, that'll do for starters. So far, I haven't been able to find the perfect software. None of the server-based systems (Gmail, Microsoft Outlook) will do, because you have to be hooked to the Internet to find old emails, and some of them throw away old ones anyway, drat it. But the resident programs that store mail physically on your laptop can't be used to send mail except from the one server. That seems like a simple thing to fix, but maybe fixing it would violate the computer-science equivalent of the law of gravity, or something. And moving categories around so that the most frequently used folders are easy to get at doesn't sound hard. Note that I don't want to do it—I want the software to do it for me. Sure, I could reorganize my own files, but that would add a three-hour task every few months to my already excessive time spent on computer housekeeping, and I thought time saving was what software was all about. Hah.

And don't tell me to get a new email account to cut down on the junk email, either. That way folly lies, because it just trades a few months of quiet now for the heinous duty of checking more than one email account—forever. No, thanks.

Any suggestions?

Sources: If you want to know what "Sisyphean" means, check out the back story on the founder of Corinth—he was quite a tricky guy, it turns out, and well deserved the punishment meted out to him by the gods. I think some of his descendants must be writing spamware today.