Monday, October 31, 2011

Fighting Dengue Fever with Engineered Mosquitoes: Good News or Otherwise?

Today’s online New York Times carries a story about a new way to fight dengue fever, a tropical disease that afflicts an estimated 50 to 100 million people annually and kills about 25,000 people a year. The disease is carried primarily by a single species of mosquito, so if you can reduce or eliminate that mosquito, you also reduce the risk of dengue fever. Historically, spraying noxious pesticides was the only way to kill mosquitoes over large regions, but a British firm called Oxitec has developed a clever way to decimate populations of Aedes aegypti, the species that carries the disease. They have used genetic engineering to create a line of mosquitoes that die before reaching adulthood unless they are fed a particular chemical (specifically, tetracycline, an antibiotic). So to spell doom for Aedes aegypti, Oxitec breeds thousands of these mosquitoes in captivity by feeding them tetracycline, then releases them into the wild. In order not to make the mosquito problem temporarily worse, only males (which do not bite humans) are released. These little genetic time-bomb males look just as attractive to the native females as wild males do, but their progeny don’t get their tetracycline fix in the wild, and die before they can breed or spread dengue fever. Population simulations show that once a certain percentage of wild females mate with the modified males, the entire mosquito population should collapse in that region.
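
The simulations the researchers rely on are far more elaborate, but a toy calculation shows why a steady release of doomed males can crash a population. The sketch below is my own back-of-the-envelope illustration in Python, not Oxitec’s model: the release size, offspring count, and carrying capacity are made-up numbers, and it assumes the engineered males compete equally well for mates.

```python
def simulate(wild_pop=10_000.0, released_males=30_000.0,
             offspring_per_female=3.0, capacity=20_000.0, generations=10):
    """Return the wild adult population after each generation (toy model)."""
    history = []
    for _ in range(generations):
        wild_males = wild_pop / 2.0
        wild_females = wild_pop / 2.0
        # Chance a female mates with a wild male rather than a released,
        # engineered one (assumes engineered males compete equally well).
        p_wild_mate = wild_males / (wild_males + released_males)
        # Only offspring of wild-male matings reach adulthood; the rest die
        # young for lack of tetracycline.
        next_gen = wild_females * p_wild_mate * offspring_per_female
        wild_pop = min(next_gen, capacity)   # crude cap on population size
        history.append(round(wild_pop))
    return history

if __name__ == "__main__":
    print(simulate())                   # collapses within a few generations
    print(simulate(released_males=0))   # no releases: population just grows to the cap
```

In this toy run the population essentially vanishes within a handful of generations, which is the qualitative point: keep enough engineered males in the field for long enough, and the wild females’ offspring simply stop replacing their parents.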

Dengue fever is a truly miserable disease, as you can tell from its informal name, “breakbone fever.” I have known several people who have had it, and although they didn’t endure the severe hemorrhagic form (which is often fatal), it was one of the worst experiences of their lives because of the nightmarish bone and joint pain. So anything reasonable that will keep people from getting this disease is welcome news in my book.

This blog is about engineering ethics, not media ethics, although I must say that the news article reporting this work emphasizes the possible hazards of the new development. For example, it’s hard to tell male mosquitoes from female ones, at least if you aren’t a mosquito yourself, so in selecting which mosquitoes to release, a few biting females inevitably get loose along with the males. And the Times reporter found an academic spokesperson who criticized some field tests as inadequately reviewed and conducted with too little public notification, pointing out that the tests have been made in places such as the Cayman Islands, which have relatively weak regulatory structures.

From what I can tell, however, the Oxitec people have followed all applicable protocols, and the first notification of their work to the scientific community was a peer-reviewed publication in Nature Biotechnology. In this they are following accepted scientific procedure rather than rushing out with a news conference in advance of peer review.

For various reasons, the phrase “genetically modified” has become a trigger for fear and opposition, especially in Europe but in other regions as well. There is no single word for the situation in which a technology is feared, not because it has ever led to any significant harm to the general public, but for other reasons. One cynical view holds that because genetically modified crops were first developed, to a large extent, in the U. S., they posed an economic threat to European farmers, who then mounted a scare campaign to induce public fear and obtain legal restrictions against the sale of such products. If this was the case, the farmers largely succeeded, and now the fear of genetically modified anything is one of the background assumptions of millions of people.

One thing that is hard for some engineers to learn is that when millions of people, including potential customers or otherwise affected parties, hold a particular view about something, one cannot simply ignore that view and pretend it doesn’t exist, even if the view cannot be logically or reasonably supported. This may be one reason that Oxitec chose to try out their mosquitoes in places where people generally have more important things to worry about than genetically modified insects. Most places where dengue fever is a problem are poor and have inferior healthcare systems, and illness can mean loss of a job (assuming one has a job to start with). So to people in sub-Saharan Africa or Papua New Guinea, a company that lets a few non-biting mosquitoes loose in order to reduce the chances of your getting dengue fever looks like a good deal.

No engineering can be carried out without money, and Oxitec is hoping that they can show enough good results for their process to be paid for by governments that see the doomed-mosquito trick as more cost-effective than treating millions for the effects of dengue fever or, even worse, doing nothing. Obviously they have some challenges ahead of them, but it seems short-sighted to me to throw up roadblocks just because the whole idea of genetically modified critters is under a cloud in some places.

The best argument I can think of for opposing the general release of genetically modified mosquitoes is that there may be some sinister unintended consequence lurking in the background. But that’s why people do field tests: to uncover such problems and deal with them before they cause widespread harm. Here’s hoping that Oxitec does a good job of looking out for such problems, fixes them if they occur, and then goes on to alleviate the miseries of dengue fever for millions of people worldwide.

Sources: The New York Times article by Andrew Pollack on Oxitec’s effort appeared on Oct. 31, 2011 at http://www.nytimes.com/2011/10/31/science/concerns-raised-about-genetically-engineered-mosquitoes.html. I also referred to the Wikipedia article about dengue fever.

Saturday, October 22, 2011

Steve Jobs and the Esthetics of Computers

When Apple co-founder and longtime CEO Steve Jobs passed away earlier this month, the tributes, laments, and commentaries were all out of proportion to the usual “captain-of-industry” obituaries. That’s because Jobs was not just a captain of industry, though he was that. More than anyone else, he was identified with the esthetic associated with everything Apple, including the Macintosh computer.

Pardon me if I get shamelessly nostalgic for a moment. I saw my first Macintosh at the home of a missionary friend of ours in North Carolina in the late 1980s. Because the first Mac came out in 1984, this must have been either a Mac Plus or perhaps an SE. Anyway, the minute I saw how the mouse worked and how you could escape the hated DOS magic words by just clicking on things on the screen, I fell in love with it. In fact, one day not too long after that, my wife found me in flagrante delicto in bed—with a rented Mac. Despite her initial objections to my taking a computer to bed with me, she soon came around, and when she found out you could do really cool graphics on the thing, she started to learn drawing programs on it. This eventually turned into a full-time job for her at the University of Massachusetts Amherst, where I worked at the time, and she became the staff technical illustrator for their College of Engineering.

Through thick and thin (notably thin in the 1990s during Jobs’ absence), we have stuck with Macs ever since. As Bill Gates and Microsoft came to dominate the non-artistic consumer and business computer world, I eventually had to buy a PC or two to run software that was not available on the Mac platform. But I always sort of mentally hold my nose whenever I have to do that, putting up with what are, relatively speaking, cheesy-looking graphics and the just-good-enough compromises that characterize Windows products compared to Apple stuff.

That is pure personal opinion, and while the Mac-versus-PC battles have largely subsided since the turn of the twenty-first century, there is an odd analogy between what brand of computer one uses and the religion one subscribes to. Ideally, a religion shapes one’s entire worldview and positively influences one’s daily life. The mechanical reliability and elegance of Macintosh products and the style of the OS X operating environment do those things for me, on a small scale. And the fanatic perfectionism that Steve Jobs was famous (or infamous) for is no small reason why Macs and other Apple products such as the iPod, the iPhone, and the iPad are so easy to incorporate into one’s life.

Jobs was as much an artist as he was an engineer or entrepreneur. His superabundant talents allowed him to metaphorically zoom along at 80 in a sports car while most people were still on foot, whether the task was designing a motherboard, inventing a new form of entertainment medium, or rescuing a moribund company.

It’s interesting, though not always illuminating, to inquire into a famous person’s faith, or at least evidences of it. By most accounts I’ve read, the faith Jobs was most often associated with was Buddhism. He ate no meat (only fish), and in his early years made a pilgrimage to an ashram in India. Why someone deeply influenced by Buddhism would wind up inventing world-changing, people-friendly hardware is not obvious. There are plenty of Buddhists who have not done anything like that, so Buddhism alone isn’t the answer to the “why” of Jobs’ career.

But I think the style of Apple can be traced, at least in part, to elements of Buddhism.

The essence of the old DOS operating system was the command. You typed in a command, and the computer (presumably) obeyed. But you first had to learn all those commands; until you did, the computer was effectively in charge, not you. So the first thing a person had to do in order to use an old-style IBM PC was to go to school to learn some arbitrary commands cooked up by programmers, who were the real commanders of the whole business. It was a hierarchy: the programmers, the user, the computer.

Now, hierarchies have their place. I’m a Christian, and in many places the Christian universe is portrayed hierarchically: God at the top, then angels, then mankind, then the lower animals, plants, and inanimate objects (the Great Chain of Being, so called). But it’s not necessarily the best way to organize computing for the average person.

The Apple esthetic is to subsume the programmers, the technical junk, and the magic words into the invisible interior of the machine. What the user sees is simplicity, elegance, and hardware and software that are seamlessly integrated, both with each other and with the natural way humans do things: we point at things we’re interested in, we touch things to get stuff, our visual field focuses in on things of interest, and so on. With Apple machines, you get the feeling that people come first. The hardware and software are only means to the end of accomplishing something truly beneficial to humanity, with the rough edges smoothed off. To the greatest extent possible, the design lets the average person’s native abilities suffice for the task. The highest compliment an operating system can get is that it’s “intuitive,” and that is true of most Apple designs.

It’s not a coincidence or fluke that after his difficulties that led to his leaving Apple in 1985, Jobs found success with the movie studio that eventually became Pixar. The vital thing in movies is to appeal to as many people as you can, tapping typical emotions that the vast majority of us have in common. This sense of what the average person wants to do, and what they’ll think is funny or appealing, was at the heart of everything Jobs did, whether it was encouraging innovative animation at Pixar or innovative hardware design at Apple.

How is that more like Buddhism than Christianity? Well, it’s certainly not hierarchical. The Beatles, who were heavily influenced by Eastern spirituality, made famous the phrase “Let it be.” Jobs let human nature be, and adapted his signature products to the way people are, not the way some programmer wants them to be. And the world is richer for all he did. Not necessarily in the sense of more wealthy, though that has happened too. But richer in the way that every truly meritorious work of art makes us all richer. Requiescat in pace.

Sunday, October 16, 2011

Bitcoin: Currency of the Future?

The first person who came up with the idea of money did more to advance civilization than most inventors whose names we know. Trade, commerce, and financial operations would be inconvenient or impossible without it. Money is one of the most real and commonplace things that people deal with, but for all that, it is fundamentally a non-material entity. And a person (or persons) known as “Mr. Nakamoto” has recently pushed the idea of money that much closer to its true essence, which belongs wholly in the realm of ideas.

In January of 2009, Bitcoin was born. It is a totally digital form of currency. Instead of dollar bills, or numbers in a bank account representing dollar bills or any other sovereign currency, bitcoin exists only as data on computers running highly sophisticated cryptographic software written by Nakamoto himself (we will refer to him thus for convenience, although he may in fact be a team of programmers, or even a secret branch of a government agency, for all I know).

Where do bitcoins come from? A computer-operated lottery that anyone with a fast enough computer can play. Practically speaking, only people who devote considerable computational resources to the task stand a reasonable chance of winning the lottery, which is played every ten minutes or so and keeps shrinking its payout until about 21 million bitcoins are in circulation—most of them within the next couple of decades, though the final slivers won’t be issued until around 2140. These specialists are called “miners” because of the obvious analogy to gold mining.
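
For the technically curious, the “lottery” is essentially a brute-force hashing contest: miners keep hashing slightly different versions of a block of transaction data until someone finds a hash value below a target the network sets. Here is a stripped-down sketch of that idea in plain Python; the block text and the difficulty (five leading zero hex digits) are made up for illustration and are vastly easier than the real network’s target, and real Bitcoin hashes a binary block header twice rather than a text string once.

```python
import hashlib
from itertools import count

def mine(block_data, difficulty=5):
    """Guess nonces until SHA-256(block_data + nonce) starts with `difficulty`
    zero hex digits. A toy version of proof-of-work, not Bitcoin's exact format."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

if __name__ == "__main__":
    nonce, digest = mine("toy block: Alice pays Bob 1 bitcoin")
    print(nonce, digest)
    # Winning takes roughly a million guesses at this difficulty, but anyone
    # can verify the win by re-running the hash exactly once.
```

The asymmetry is the whole trick: winning takes enormous guessing effort, but checking a claimed win takes a single hash, so the rest of the network can police the miners essentially for free.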

What is a bitcoin worth, in terms of a dollar? Right now, about three dollars and eighty cents, depending on market fluctuations. Bitcoins have been much higher—up to twenty-nine dollars last June—but there’s been a bit of a bitcoin bubble, apparently, and so anyone who bought some last summer is probably regretting their decision.

Can you counterfeit bitcoins? Not so far. Mr. Nakamoto is a very good cryptographer, and no one has been able to fool his software into taking bogus bitcoins. The system relies on cryptographic verification that each bitcoin is the genuine article (I suppose it’s sort of like serial numbers on dollar bills, except that anyone can check them), and so faking bitcoins is not practically possible.
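
The verification rests on public-key signatures recorded in a shared ledger: only the holder of a coin’s private key can sign it over to someone else, but anyone can check the signature. Just to show the shape of that idea, here is a minimal sketch using the third-party python-ecdsa package; it illustrates the general mechanism, not Bitcoin’s actual transaction format.

```python
# pip install ecdsa -- a third-party package used here purely for illustration.
from ecdsa import SigningKey, SECP256k1, BadSignatureError

signing_key = SigningKey.generate(curve=SECP256k1)   # kept secret by the coin's owner
verifying_key = signing_key.get_verifying_key()      # published to the whole network

payment = b"pay 1 bitcoin from address A to address B"
signature = signing_key.sign(payment)

print(verifying_key.verify(signature, payment))      # True: the genuine article
try:
    verifying_key.verify(signature, b"pay 100 bitcoins from address A to address B")
except BadSignatureError:
    print("tampered payment rejected")               # forgeries don't check out
```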

Is the bitcoin legal tender? Here we get into a hazy area. There are U. S. laws against making and selling coins or other currency “intended as current money.” In other words, places such as the Franklin Mint have the excuse that they’re really selling their commemorative coins and other objects of value to coin collectors for esthetic reasons, despite the fact that many “collectors” simply buy them as an investment. But bitcoin goes farther than that. You can buy things with it, keep it in a digital “wallet,” exchange it online for a wide variety of other currencies, and treat it in most ways just like money. The only thing that stops Mr. Nakamoto from getting in trouble with the FBI is, well, nobody knows who or where he is. Which is, I’m sure, one reason he took care to erase his digital footprints, despite the efforts of many people, including New Yorker writer Joshua Davis, to identify him.

I first learned about bitcoin from Davis’s article in last week’s New Yorker magazine. The essentially non-material aspect of money has fascinated me for years, and to find that someone has pursued an individual idea to the point that the equivalent of millions of dollars in bitcoin is now floating around the Internet is intriguing, to say the least.

According to an article Mr. Nakamoto reportedly posted, he invented bitcoin because he was hacked (meaning irritated, not digitally outsmarted) at the tendency of governments to inflate their fiat currencies whenever they got into a financial bind. Fiat currency is any type of money not tied by law to an article of intrinsic value. The U. S. dollar has been fiat currency ever since the country began leaving the gold standard in 1933 (the last formal tie to gold was cut in 1971), and most other countries have the same setup.

Technologically, bitcoin wasn’t possible until the Internet developed, and now that it’s here I’m not sure what its fate will be. In perilous economic times, people tend to run away from ethereal-sounding concepts and head toward investment instruments that have historically proven to be sound: U. S. government bonds, gold, diamonds, that sort of thing. Of course, bonds are just the paper documentation for a promise made by a government. And all we have to back bitcoin is Mr. Nakamoto’s promise that he won’t flood the market with bitcoins at some time in the future, if he happens to get hard up for lunch money some day. This shows that any currency that exists in time must of necessity be based on some mass expectation of how its issuer will behave in the future. When bitcoin goes up in comparison to the dollar, I suppose it says that some number of people concerned about both currencies trust Mr. Nakamoto more than they trust Uncle Sam, at least for the moment. Which, if you think about it, is a remarkable thing to do.

I own no bitcoin, nor do I plan to invest in any, although I may check out its price from time to time. As the world’s first supra-national digital currency, it deserves some attention from ethicists who no doubt have opinions about this novel way of dealing with the problem of currency. And it will be interesting if we ever find out who Mr. Nakamoto is, assuming he shows up some day. But unless the world economy gets a whole lot weirder than it is already, I expect bitcoin may just fade away and become a historical memory, like emu farms. Remember emu farms and emu oil? If you don’t, don’t worry about it.

Sources: Joshua Davis’s article “The Crypto-Currency” appeared in the Oct. 10, 2011 issue of The New Yorker. I also referred to an article on pseudo-currency laws in the Wall Street Journal at http://online.wsj.com/article/SB10001424052748704425804576220383673608952.html and the Wikipedia article on bitcoin.

Sunday, October 09, 2011

Neutrinos Faster Than Light? Not So Fast

Last month, researchers at CERN, the European high-energy-physics lab that houses the world’s most powerful atom smasher, announced that they had detected a subatomic particle in the act of breaking the speed limit posted by Officer Einstein. In other words, their data indicated that the particles—very light, hard-to-detect items called neutrinos—covered a distance of some 400-plus miles and took about sixty-billionths of a second less time than they should have. It’s like the man who said good-by to his wife and drove his sports car to a town 80 miles away. When he got there she called him and asked: “Are you there yet?” When he said yes, she said, “Well, you’ve been speeding again. You left 45 minutes ago.” The neutrinos got from CERN’s accelerator (it’s so big it covers portions of France and Switzerland, I believe) to an underground detector in Gran Sasso, Italy, a little faster than they should have.
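
To get a feel for how small the claimed effect is, the back-of-the-envelope arithmetic runs as follows; the roughly 730 km CERN-to-Gran Sasso baseline and the 60-nanosecond early arrival are the approximate figures reported at the time.

```python
# Rough 2011 figures; treat everything here as approximate.
c = 299_792_458.0        # speed of light in vacuum, m/s
baseline = 730_000.0     # CERN to Gran Sasso, roughly 730 km (the "400-plus miles")
early = 60e-9            # neutrinos reportedly arrived about 60 ns early

light_time = baseline / c                 # a bit under 2.5 milliseconds
fractional_excess = early / light_time    # about 2.5e-5, i.e. 0.0025 percent over c
distance_equivalent = early * c           # about 18 meters of path length

print(f"light travel time: {light_time * 1e3:.2f} ms")
print(f"claimed excess over c: {fractional_excess:.1e}")
print(f"60 ns is worth about {distance_equivalent:.0f} m of baseline")
```

Eighteen meters out of 730 kilometers is why the survey of the baseline and the synchronization of clocks at both ends had to be extraordinarily precise before a 60-nanosecond anomaly could mean anything.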

If the results are independently confirmed, they will rank as one of the most important experimental discoveries of the century. And if they are not, they will show how the way science is conducted has changed over the last few decades, and not for the better.

I am not a professional physicist, though I know enough about the subject to start squinting when someone comes up with a claim that anything, from a microwave to a piece of peanut brittle, has been measured as traveling faster than the speed of light. One of the two pillars of modern science is relativity, whose fundamental postulate is that no signal (an event capable of carrying information) can travel faster than the speed of light in a vacuum. (The other pillar is quantum mechanics.) So within hours of the announcement, theorists had lined up on both sides of the aisle, one group claiming that the discovery confirmed their pet theories, and the other citing seventeen different reasons why the experimentalists had to be wrong. It was quite a show.

And the show-business aspect is why the whole scene bothers both me and Lawrence Krauss, a highly qualified physicist who wrote an op-ed piece in the L. A. Times criticizing the way the discovery was presented.

The normal procedure is for scientists to submit new work to a refereed journal. In many fields, this step is preceded by informal publication on a site such as arxiv.org, where non-refereed papers can be published online under certain conditions. But everybody understands this is tantamount to thinking out loud, and an essential part of the scientific process of investigating the validity of new results is for qualified colleagues to critique the paper during the process of peer review by referees. Sometimes this results in the paper being rejected, but more often, criticism in the right spirit points out weaknesses or omissions that the authors can correct to make the final publication even better. Only when the paper is published (and in the case of unusual results such as CERN’s, duplicated by different laboratories), should the public in general be informed that, hey, we may really have something here.

As is so often done these days, however, the CERN authors called a news conference to discuss the implications of their paper before it had been peer-reviewed. This short-circuits the referee process and puts highly digested and simplified versions of the science out into the general blogosphere, where hun-yocks like me can have a go at it whether we are qualified or not.

As Krauss points out in his opinion piece, the result is a brief flurry of ambiguous reports of the original news, together with pro- and con-comments from other experts which largely cancel each other out. Then the whole thing is forgotten, at least by the non-experts who hear about it between an ad for an energy drink and the latest on how the Texas Rangers are doing. It’s this kind of thing that gives science a bad name. It will take a lot of work to degrade the prestige of science down to the point where the average citizen will take the word of a Congressman over the word of a physicist, but news conferences about results that haven’t been peer-reviewed take us a little bit in that direction.

I have probably mentioned that years ago, I attended a conference in China at which an otherwise well-reputed professor of microwave engineering reported that he had measured microwaves traveling at a speed greater than light. I wrote an article on his work, and included some other examples of instances where experimentalists deluded themselves, not out of a desire to deceive, but out of a combination of inadequate care to keep their own psychology from skewing the process, and a strong wish to discover something remarkable.

There’s nothing wrong with wanting to discover something remarkable. I have had that desire off and on myself. But doing new science is very hard work, and when you get a result that would upset most of the applecart of existing physical law, the first thing to do is not rush out and invite in a bunch of science reporters. The thing to do is to ask your colleagues, including your worst enemies, to come over and throw as many rocks as they can at your methods, your calculations, your assumptions, and your instrumentation (metaphorically speaking, of course). The more remarkable the result, the more rocks.

The worst that can happen this way is that you will be humbled to discover where you have made your mistake, and both you and your critic will have learned something. You will learn what you did wrong, and your critic will learn that you are the type of physicist who can benefit with good grace from constructive criticism. And you will avoid looking foolish in public, because the situation will have been kept among professionals rather than plastered all over the Internet.

Instead, the CERN people have stuck their necks out and made headlines (and engendered a lot of jokes, too, but that’s somewhat beside the point). Whether they deserve the headlines remains to be seen. If the result holds up (which I personally think it won’t—their geographic accuracy alone had to be on the order of ten feet in 400 miles), they will deserve not only the favorable publicity they have gotten already, but probably a Nobel Prize. If the result is shown to be in error, you probably won’t see a single headline about it. There will just be a lot of question marks in the public’s mind like, “Didn’t they say something goes faster than light? Wonder what happened to that? Those scientists can’t make up their minds about anything anymore.” And that won’t be good for science.

Sources: The CERN news was reported by many sources, and appeared with commentary on the Scientific American website http://blogs.scientificamerican.com/degrees-of-freedom/2011/10/02/superluminal-neutrinos-would-wimp-out-en-route/

Lawrence M. Krauss’ op-ed piece appeared at http://www.latimes.com/news/opinion/commentary/la-oe-krauss-neutrino-20111004,0,7882894.story

And my article on the supposedly superluminal microwaves was entitled "N-rays, super-dielectrics, and microwaves faster than light: improbable discoveries in electromagnetics," and appeared in IEEE Antennas and Propagation Magazine, vol. 35, no. 3, pp. 13-18, June 1993; a letter by me pertaining to the same subject was published in IEEE Antennas and Propagation Magazine, vol. 36, no. 5, p. 72, Oct. 1994. And I’ve used “hun-yock” before—look it up at http://www.urbandictionary.com/define.php?term=Hunyock.

Monday, October 03, 2011

Cybervetting and You

Although I hope any engineer can benefit on occasion from this blog, I am especially keen to discuss matters that younger engineers and engineering students can use. One concern I’ve addressed before continues to come up in news reports, and now in a scholarly article by William Herbert in the IEEE Technology and Society Magazine. For those of you who use social media such as Facebook and Twitter, it’s something you ignore at your peril: the increasing practice of “cybervetting” by firms looking to hire new staff.

In the old days, meaning pre-Internet, young people did foolish things just as much as, or more than, they do now. Suppose a group of engineering students in 1980 went out and had a little too much to drink, found some young women in the same condition (back then the vast majority of engineering students were men), and did things that, say, they wouldn’t want their parents to know about. And suppose that one of the engineers brought along a camera and took pictures of the proceedings. Short of blackmail, there was nothing much one could do with such pictures, and normally they’d either be forgotten, or turn up some day when the now-soberly-married-with-children husband gets a question from his wife who is cleaning out the attic: “Is this you in this picture? And who’s that girl?”

But fast-forward to today. We engineers have transformed the social world of young people around the world. Cell phones and digital cameras and Internet connections are ubiquitous. The barriers to mass duplication of embarrassing images of all kinds are so low that the only thing standing in the way is often one’s inhibitions, which alcohol or a party atmosphere can lower so much that one does things without thinking of the consequences. Only now, the consequences can damage your career.

In his article, Herbert cites two surveys that show over 70% of job recruiters admit that they do online searches (“cybervetting”) of potential hires, and just as many say they have rejected applications based on information they have found that way. This means that when you apply for a job, you can assume your potential employer is going to do some checking up on whatever publicly accessible information is out there about you. And because the operators of social media have strong financial motivations to make more information easy to get at, it’s likely that if you’ve ever posted anything you might not want someone to see, they’ll be able to see it anyway.

In the current economy there are all kinds of reasons for employers to reject applicants. But this is one you can do something about. It’s unreasonable to recommend that young people avoid social media entirely. It’s just a part of life now, and unless you become a hermit and stay away from all social events, it’s hard not to be included in group pictures, and some of those pictures may end up looking ambiguous later, to say the least. The point is, things that used to be private matters are now no longer under a single person’s control. Some guy you don’t even know might take a picture of some foolishness at a bar and you just might happen to be in the picture, and someone might tell the photographer your name. That’s all it takes for you to be famous (or infamous) in places where you have no idea your image has turned up.

And that assumes you have good judgment, which is not always the case. Many employees have lost their good jobs because of a lack of discretion regarding negative things they say about their employer on semi-private blogs. This is a gray area that requires judgment and discretion. Many companies have internal chatrooms, sort of online water-coolers, where employees virtually gather to discuss all kinds of things, and obviously one’s employer is aware of those comments. What you may not realize is that at least in the U. S., your employer has the presumptive right to look at all your electronic communications that use facilities provided by the employer. So if you send emails having nothing to do with your job, but use your employer’s network or computers to do it, they can legally read those emails as long as they have met certain minimal requirements regarding notification and so on. This is a surprise to a lot of people, but it’s true.

Employers have limits too. Herbert describes a case in which an airline pilot ran a blog that was password-protected explicitly to keep the airline he worked for from seeing it, but a vice-president of the airline persuaded another pilot to let him gain access to the blog. This action violated the terms of the blog’s access rules, and the pilot sued. In this case, the pilot was taking what I regard as reasonable precautions to restrict access to persons who were sympathetic to his position. But privacy is only a password away from being violated on the web, and all it takes is one person violating such trust to blow the lid off, so to speak.

By now I hope you have a bigger picture of what can happen if one’s youthful romps of an indiscreet kind show up on the computer of a recruiter whose idea of a good time may not match yours. Things are more complicated than they used to be, but that doesn’t mean you can’t both enjoy the benefits of social media and get and hold a good job. Perhaps it sounds a little extreme these days, but it’s worth considering the advice of an older professor I knew who was asked about things to watch out for when writing reports and other technical documents. He said: “One rule I’ve always followed is never to write down anything that I wouldn’t mind showing up on the front page of the New York Times.” As a matter of fact, I don’t believe anything he did or wrote ever did show up anywhere in the New York Times, but it’s not bad advice, all the same.

Sources: William A. Herbert’s article “Workplace Consequences of Electronic Exhibitionism and Voyeurism” appeared in the fall 2011 (vol. 30 no. 3) edition of IEEE Technology and Society Magazine, pp. 25-33.