Sunday, January 29, 2012

Engineers, the Public, and Crime and Punishment

Fyodor Dostoevsky’s novel Crime and Punishment was completed in 1866, but even that long ago there were signs of the coming upheavals that would lead to the Russian Revolution of 1917 and the establishment of the USSR, a government founded on the principle that a future utopia of fulfilled Communism justified any amount of butchery in the present. This idea was presaged in miniature by Raskolnikov, the protagonist of the novel. A failed law student in whom noble idealism waged a constant struggle with depression and anger, Raskolnikov tried his hand at journalism and wrote an essay on the idea that humanity could be divided into two types: ordinary and extraordinary. For the vast majority of ordinary souls, obedience to law and custom was obligatory and kept the wheels of society turning. But for a few rare extraordinary individuals—Keplers, Newtons, or Napoleons—rules and morality itself were things to be overcome along the journey to break new ground for the ever-upward progression of history. Here is Raskolnikov explaining his idea to a friend:

"I believe that if circumstances prevented the discovery of a Kepler or a Newton from becoming known except through the sacrifice of a man’s life, or of ten, or of a hundred. . . Newton would have the right. . . to remove these ten men, or these hundred men, so he could make his discoveries known to all mankind."

His point is that if a person has something of great enough worth to give to mankind, its value to later generations justifies the sacrifice of a few lives, if killing those few is what makes the gift possible.

Raskolnikov’s academic speculations turn to grim reality when he later finds himself actually carrying out the murder of an old pawnbroker woman and her sister. The rest of the novel is a brilliant exploration of Raskolnikov’s complex psychological turmoil as he struggles with the burden of his crime and what it means to himself and others.

The lessons of this novel should be borne in mind by engineers who participate in ambitious projects that propose to reshape the way people live. The 19th-century world that Dostoevsky lived in was just beginning to be changed by technological innovations such as the railroad, steam power, and the electromagnetic telegraph. Physics and chemistry transformed the world of the twentieth century, and technologists are now learning how to use biological knowledge to meddle with things that former generations viewed as immutably fixed by evolution—or God.

The recent debate about the use of embryonic stem cells in medical research turns on the questions of what people are for, and who counts as a person. Raskolnikov liked to reassure himself that the old woman he murdered was of no more value than a cockroach, and that he was doing the rest of humanity a favor by exterminating her. Those who advocate the destruction of frozen embryos for embryonic stem cell research must believe that the potential good, in the form of possible cures and treatments for presently incurable illnesses, outweighs any harm to the embryos, which are only potential human beings, after all. And some philosophers have been outdoing Raskolnikov’s essay by proposing that some mature animals may be of more intrinsic worth than some immature human beings: it is permissible to kill certain disabled infants, for example, according to some schools of thought.

Engineers are extraordinary people, in the statistical sense. Out of the world’s population of some seven billion, perhaps 15 to 20 million could generously be classed as engineers. That is less than one percent. But from Raskolnikov down through the abominations committed by dictatorships of the last two hundred years, we have seen what can happen when we start to view some elite individuals as exempt from the usual laws, rules, and moral strictures that most of us obey. Surely we can allow some moral license to those men and women in the white coats who promise us such wondrous treatments, and eventually biological enhancements, at the price of a few frozen embryos whose fate was probably annihilation anyway, can we not? We can, but we may not like what happens to the elites who get used to flouting the rules, or what happens to us when the elites take advantage of their privileges.

Without spoiling the novel for those who haven’t read it, I will say that Raskolnikov comes to regret his willingness to put his academic theory into practice. Dostoevsky being Dostoevsky, it is a complex regret, full of ambivalence and shot through with seemingly good things that could happen if Raskolnikov conceals his crime and tries to live out his dream that he is indeed one of the few extraordinary souls for whom ordinary law is nullified. Dostoevsky, ever the Christian artist, portrays both the simple trust of believers who have never questioned God as well as the convoluted thoughts of Raskolnikov, who at some points confesses belief in the miracles of the Bible, but at other times talks like an atheist from Central Casting. While fiction cannot directly teach us to be better people, a thoughtful reading of Crime and Punishment will challenge you to think about the meaning of life, the purpose of love, and the values of will and judgment.

Sources: The quotation from Crime and Punishment is from the Sidney Monas translation (New York: New American Library, 1968), p. 257. The latest (Winter 2012) edition of The New Atlantis magazine carries a fine series of articles on the theme of “The Stem Cell Debates.” For more details about how some philosophers have valued mature animals over some immature humans, see the works of Peter Singer.

Monday, January 23, 2012

SOPA, PIPA, and the Wikipedia Blackout

As regular readers of this blog know, one of my favorite sources of online information is Wikipedia. While not perfect, this largely volunteer-maintained site is a generally reliable, up-to-date, and accurate source of many kinds of information. It is especially good for technical and scientific data where there is a general consensus of agreement, and even in controversial areas it tends to be pretty even-handed. So imagine my surprise last Wednesday when I clicked onto Wikipedia for something and was greeted instead by a screen blacked out for 24 hours and a plea for me (if I was a U. S. citizen, which many Wikipedians are not) to contact my congressional representatives to protest the consideration of SOPA and PIPA.

What are SOPA and PIPA? Legislative acronyms for the Stop Online Piracy Act and the (get ready for this one) Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act. If you read that second title too quickly, it sounds as though Congress wants to prevent economic creativity rather than protect it, but the acronym PIPA comes from the bill’s short title, the PROTECT IP (Protect Intellectual Property) Act. SOPA is being considered by the House of Representatives and PIPA by the Senate.

Why is Wikipedia (and many other online service providers of various kinds) so upset about these proposed laws? According to the text on the blacked-out screen, the laws pose a potentially crippling threat to the freedom of information exchange. If they were passed in their present form and the Attorney General, or a civil court, or some bureaucrat somewhere, decided that Wikipedia was an internet search engine, then it would be Wikipedia’s responsibility not to link to certain nefarious websites, the list of which the government would apparently determine. (It is not clear to this non-lawyer exactly who would enforce the acts, but you are welcome to read the 10,000-word text of SOPA yourself and figure it out if you so choose.) And if Wikipedia (or any other website falling under the jurisdiction of the act) failed to do its court-mandated duty, the court would be free to impose penalties, probably in the nature of fines and/or injunctions to stop or start doing things.

Well, right there we have a problem. One of the better aspects of the Web is the way that it has grown to its present stature largely without government aid or regulation. True, there are many problems and illegal or immoral things that go on, some of which we have discussed in this space. But overall, Internet commerce and Internet entities such as Wikipedia have behaved pretty well and do no more than reflect the general makeup of society: mostly fairly decent people, with a few bad apples here and there.

In my limited, non-lawyer view, SOPA and PIPA would try to change that by putting a huge number of Internet entities under the watchful eye of the courts. It is probably not an exaggeration to say they might create for the Internet a regulatory regime not too different from what the Interstate Commerce Commission (ICC) was for interstate commerce, which kept all kinds of business ranging from bus companies to railroads and trucking firms under its thumb for decades. The difference is that the ICC was created to curb genuine piratical abuses by monopolistic railroad companies, who shook down their customers (mostly farmers) shamelessly with exploitative and discriminatory rates. And when the deregulatory fad arrived a few decades ago, the ICC bit the dust, and with the much greater level of competition in interstate commerce that prevails today, nobody much misses it.

Nothing much like monopolistic exploitation is going on with the Internet organizations targeted by SOPA and PIPA, with the possible exception of Google. The proposed laws’ supporters claim that their only targets are the truly bad actors: the crooks who set up phony or phishing websites, those who sell pirated software, child pornographers using the Web, and so on. But from my (again) limited reading of the proposals, that judgment call is left strictly up to the lawyers enforcing the act. Once power is created, bureaucrats seem to develop an irresistible urge to use it, and so I have concluded that it would be a bad idea to pass SOPA and PIPA in their present form. And I made that opinion known to my Federal legislative representatives here in Texas.

So did several million other people, evidently, because a few days after Wikipedia and many other sites did their blackout bit, Congress announced that it was “indefinitely postponing” consideration of the bills. At the rate Congress gets truly important work done these days, that means you can forget about SOPA and PIPA unless you run out of other things to worry about first.

I am not a libertarian, and I think appropriate legislation to curb some of the more blatant abuses found on the Internet is a good idea if it can be enforced without an undue burden on the service providers or the public using the services. Most law enforcement has to take a “good-enough” approach, given limited resources. You want enough highway patrols to keep speeders and other vehicular misbehavior down to a reasonable level, but getting the public to obey the speeding laws 100% of the time would require something on the order of a speed-cop Reign of Terror. From my point of view, SOPA and PIPA moved too far in the Reign of Terror direction. I am sure that interested legislators will go back to the drawing board to craft something that will deal with the worst abuses without being so intrusive on the vast majority of sites and users who are behaving themselves, but I do not myself believe there is any big rush about the matter.

Reportedly, major Hollywood interests (copyright holders) were behind SOPA and PIPA, and were disappointed when the proposals went down in flames. These are disturbing times for content providers, as file-sharing and online movie delivery become technically easier and easier. The phrase “rent-seeking” has shown up a lot lately in editorials about how powerful business interests have influenced government so as to direct more revenue their way. One could view the SOPA-PIPA business in that light, with what fairness I’m not sure. But it looks like this time, anyway, a grass-roots effort by millions of users (admittedly led by organizations with influence, though not so much financial as merely relational) prevailed over the rent-seekers, if that is the right phrase for them. Unfortunately, Wikipedia can stage a first-time protest blackout only once; do it again and it will get to be a drag. So the future will reveal how this continuing conflict gets resolved, if it ever does.

Sources: For policy wonks, or anyone else coping with a sleepless night, a website has posted the “markup” versions of both SOPA and PIPA at http://www.keepthewebopen.com/sopa and http://www.keepthewebopen.com/pipa, respectively. Though they are mostly of academic interest now, they show just how complicated modern legislation has become.

Monday, January 16, 2012

From Dreamcatchers to Soulcatchers

The day after Christmas, I was asked to contribute to a long paper on the past, present, and future of the social implications of technology. One of the other contributors cited an idea called the “soulcatcher chip” as something that would have profound social implications, if it ever comes to pass.

The phrase “soulcatcher” presumably derives from the word “dreamcatcher.” A dreamcatcher, at least in the original versions made by the Ojibwe and Sioux tribes of native North Americans, was a small frame or loop of willow twigs hung with feathers. Mothers would make dreamcatchers and hang them above their children’s beds to filter out nightmares and send only good dreams to their offspring. I am unaware of any scientific studies on dreamcatchers, but the idea has caught on in the commercial world and you can buy such things to hang on your rear-view mirror.

A soulcatcher chip, as envisioned by Peter Cochrane, former Chief Technology Officer of British Telecom, is a piece of silicon that you would implant in your brain. Early versions would simply be an interface between your brain and the Internet, bypassing all those old-fashioned electromechanical keyboards and eye-tiring display screens. Later versions would live up to the name: the interface would be broadband enough to “capture all a human’s thoughts and feelings on a single silicon chip,” according to a 1998 posting on the website of Wired Magazine. In the same piece, Cochrane predicted that an external version of the soulcatcher would be available in about five years, that is, by 2003.

As far as I know, that prediction has fallen flat. While functional magnetic resonance imaging (fMRI) technology has advanced to the point that we can observe which parts of the brain get active when a wide variety of mental events happen, this is very far from directly reading a person’s thoughts in general, or being able to get onto Wikipedia by thinking instead of moving your mouse or typing.

The soulcatcher idea is basically a communications problem, and can be broken down into the parts of transmitting (brain to the external world) and receiving (external world to brain). While fMRI technology has made a fair amount of progress on the transmitting end, the receiving end is much trickier. Implanting stuff in the brain is a risky thing, even if the object you’re implanting is only a protective cover to replace a missing part of the skull, for example. And running wires into the brain, or even silicon-chip substitutes for wires, appears to be a very crude way of conveying data to one’s mental world. While some progress has been made in brain implants as a type of therapy for conditions such as epilepsy and even depression, this is a far cry from conveying novel detailed data into the brain.

The idea of a soulcatcher chip brings up a problem that has up till now stayed within the halls of philosophy departments. When Cochrane asked his wife how many parts of himself he could replace with synthetic components before she rejected him as a machine, she said she was revolted by the idea. This is an indirect compliment to Cochrane, because I can think of some marriages in which the wife would welcome the process (“Let me at that off switch!”). Of course, such speculations will remain hypothetical for some time, perhaps forever, because there is no hard experimental or theoretical evidence that it is even possible to simulate the workings of the human mind with a computer, or to do anything close to downloading all a person’s thoughts and feelings onto a computer.

This is just personal speculation on my part, but there may even be some sort of psycho-physiological uncertainty principle out there, analogous to the Heisenberg uncertainty principle of quantum physics. The Heisenberg uncertainty principle says that you cannot measure both the momentum and position of a particle simultaneously with arbitrarily great precision. If you get the momentum exactly right, you will have no idea where the particle was at the time, and vice-versa.
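For reference, the physics version can be stated compactly (this is just the standard textbook relation, not anything specific to brains or minds):

\[
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
\]

where \(\Delta x\) and \(\Delta p\) are the uncertainties in a particle’s position and momentum and \(\hbar\) is the reduced Planck constant. The more precisely one quantity is pinned down, the larger the minimum uncertainty in the other must be.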

The soulcatcher analogue of that may be that it is impossible to go beyond a certain point in measurement (and especially in two-way communication) with the mind by means of physical actions related to the brain, without seriously disrupting or possibly even destroying the mind you are dealing with. Given the complexity of the brain and its interactions with the mind, any such uncertainty principle will also be more complicated and less straightforward than the physics principle first enunciated by Heisenberg. But that doesn’t mean no such principle exists. It may simply work out that way experimentally before we understand the brain well enough to realize theoretically what the true limitations are.

The dreamcatcher was a physical object constructed by people who wanted to change something the mind was doing, namely, giving their children nightmares. And in the nature of a placebo, it may well have had a good effect, if the mother felt she was doing something positive and became more reassuring to the child as a result. The hopes for a soulcatcher chip are more ambitious: nothing less than the direct connection of one’s mind to external data in a way that would be hard to ignore. If I get tired of surfing the Internet, I can always just turn off the computer and walk away. But if the thing were piped directly into my brain, all sorts of dire possibilities come to mind. So far, computer viruses have stayed outside the body, but what if one got into your brain and you couldn’t get it out? The ethical challenges alone would be enough to stop me from even contemplating such a project, but ethical considerations do not always stop researchers who are fascinated by an idea.

As we’ve seen, the soulcatcher is an idea that is already delayed in transit, if indeed it ever gets here. Even if it never comes to pass, it has given us a lot of mileage in the form of science-fiction tales and movies, and that may be the place where it does as much good as dreamcatchers, if not more.

Sources: The forecast by Peter Cochrane was published at http://www.wired.com/wired/archive/6.11/wired25_pr.html in the Nov. 1998 issue of Wired Magazine. I also referred to the Wikipedia article on dreamcatchers. If all goes well, the May 2012 issue of the Proceedings of the IEEE will carry an article entitled “Social Implications of Technology: Past, Present, and Future.”

Sunday, January 08, 2012

Ethics of Calendar Technology

With the turn of this new year, most people have at least heard about the Mayan calendar sequence ending that allegedly forecasts dire disasters to come on or about December 21, 2012. The Wikipedia article “Mayan calendar” has a quotation from Sandra Noble, who is an expert on ancient Mayan customs and practices. She says that contrary to a lot of the hype that has been promoted about the event, all that it really means is that a “long-count” period of one “b’ak’tun”, lasting 394.3 years, is going to end on that day. Far from prognosticating disaster, the ancient Mayans usually had a huge celebration, a kind of super-New-Year’s-Eve party, whenever they reached the end of a b’ak’tun. She says that the notion of a cycle end as a doomsday date is “a complete fabrication and a chance for a lot of people to cash in.” To the extent that calendars are a type of technology, such misrepresentation is a kind of violation of calendar ethics—if there is such a thing. As far as I’m concerned, there is now.
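A quick aside on where the 394.3-year figure comes from (this is my own back-of-the-envelope arithmetic, not anything from Noble or the article): in the Long Count, a b’ak’tun is 144,000 days, that is, 20 k’atuns of 20 tuns of 360 days each, so

\[
20 \times 20 \times 360 = 144{,}000 \ \text{days}, \qquad
\frac{144{,}000 \ \text{days}}{365.2425 \ \text{days/year}} \approx 394.3 \ \text{years}.
\]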

Just as the technology of clocks helps us to regulate and coordinate the way we use short intervals of time during the day, the technology of calendars allows us to plan and coordinate longer intervals involving days, weeks, and years. That is why nearly every civilization worthy of the name has come up with some kind of calendar. Although the traditional seven-day week dates back at least to the Babylonian captivity of the Jews around 600 B. C., many other lengths of week have been used in other calendars, ranging from the three-day weeks of an early Basque calendar to the 13-day weeks used by the Mayans. In most ancient civilizations, the calendar was used to establish times for religious festivals as well as more practical matters such as the planting of crops. Because religion was a kind of all-pervasive thing to ancient peoples, the calendar was an intrinsic part of their culture, and anyone with the temerity to change it was in effect challenging the foundation of a way of life.

This tie to tradition was an aspect of the calendar well understood by the French revolutionaries, who in 1793 threw out the Gregorian calendar with its religious associations and replaced it with a novel arrangement of three ten-day weeks in each 30-day month, enamored as they were of the decimal system. (We owe the metric system largely to this same regularizing spirit, which found its proper place in scientific measurements.) The revolutionary French calendar lasted for about a dozen years before the confusion caused by interconverting between the French days and weeks and the traditional ones used by everybody else got to be too much, and they changed it back. A similar stunt was tried by the leaders of the young Soviet Union, who foisted a calendar of 72 five-day weeks onto their reluctant citizens in 1929. That attempt was abandoned after only two years in favor of a six-day week, which led to further confusion and too many holidays. Finally, the whole thing was dropped in 1940 and the Gregorian calendar was quietly resumed.

Since then, there have been no major attempts to fiddle with what many people regard as a God-given system of accounting for days. In Christianity’s early years, believers adopted Sunday, the first day of the Roman week, as their new Sabbath, partly to distinguish themselves from the Jews, who observed their Sabbath beginning Friday night and going to Saturday at sundown. Making Sunday the first day of the week has been a nearly universal practice of calendar-makers until the last few years, when I began to notice European calendars that put Monday as the first day of the week, demoting Sunday to the last day.

I don’t know why, exactly, but this change annoyed me exceedingly. I suppose it was because, as a Christian, I resented the implied insult to Sunday, which Christians regard as a day set aside by the Lord for rest and avoidance of routine labor. Far from being just another part of the weekend (or something brought to us by labor unions, “the folks that brought you the weekend” as one bumper sticker says), Sunday is supposed to be the day when you stop to realize that what you have depends not only on your own efforts, but is really the gift of God, Who set aside the Sabbath because He rested on the seventh day after making the world on the first six days. If God decides to rest after His labors, the least we can do is imitate Him in that regard. That is why Sunday used to be printed in red and placed at the head of the week: it was special, even holy (a word that just means “set aside for a special purpose”). Holidays were originally holy-days, that is, religious festivals.

So all these traditional religious associations go by the board when you pick up a calendar with Monday as the first day, as I unwittingly did the day I went Christmas shopping for my wife at Half Price Books. I bought it because it had pictures of Volkswagen Beetles. A Beetle was our first car, and my wife has ever since had an unreasonable admiration for those machines. Because the calendar was sealed, I was unable to tell how the weeks were arranged. Imagine my horror (okay, displeasure is closer to it) when she opened it up, hung it on the wall, and I discovered that it was laid out in the European style of Monday first, Sunday last. I fussed about it till she offered to print up little strips of Sundays, one per month, tape them to the left side of each sheet, and cover up the blasphemous tail-end-of-the-week Sundays. I admitted this would be silly and told her not to bother, but that calendar is going to annoy me for the whole coming year, I can tell.

By such subtle means are cultural shifts manipulated, or at least indicated. I have no idea why European calendar makers demoted Sunday, unless their customers demanded it by saying that Monday is when their weeks begin, and why not put it first? But that demand itself expresses the increasing secularization of Europe we hear about all the time. And now it’s spreading to the U. S., at least in the form of specialty calendars. I fully expect nothing particularly bad to happen on Dec. 21 of this year, Mayans or no Mayans. But when it comes to the trend symbolized by moving Sunday to the last day of the week, I think the results are becoming plainer every week—and every year.

Sources: I referred to the Wikipedia articles on “week,” “French Republican calendar,” “Calendar,” and “Mayan calendar,” where the quotation from Sandra Noble appears.

Monday, January 02, 2012

Computer Wars, Now and Then

This Christmas, my wife received not one, but two tablet computers—an iPad and a Nook—from gift-givers who obviously did not coordinate their purchases. It’s too early to tell which, if either, will win her attention, affection, and devotion, which is always the goal when companies introduce new products. But at least no one gave her a TouchPad.

The TouchPad, for those of you who, like myself, were too occupied with other matters to notice, was HP’s attempt to crack the tablet-computer market. Released last July, the device used an operating system called WebOS that HP acquired when it bought the smartphone developer Palm. But according to an article in yesterday’s New York Times, the TouchPad was an example of too little, too late in a number of ways.

Users quickly found that WebOS, or something, made the TouchPad’s speed resemble molasses in January compared to its competitors, mainly Apple’s iOS and Google’s Android. This, plus the fact that the slew of programs HP expected developers to produce for the machine failed to materialize, led the firm to announce only seven weeks after the TouchPad’s introduction that it was going to discontinue manufacturing all WebOS devices. Although it has since announced one more production run of the TouchPad, this is thought to be mainly for the purposes of clearing out inventory and meeting unmet obligations to large customers.

Playing catch-up is hard in any game, and especially so in the rapidly moving world of software and novel information and communications technologies such as tablet computers. The fifteen months between April of 2010, when Apple began selling the iPad, and July 2011 might as well have been a decade in more staid industries. And even if HP’s device had been simply a hardware knockoff running Apple’s iOS (which Apple would have allowed only if the Indian Ocean froze over), the lead enjoyed by Apple and Google would have made for problems for HP. Add in the totally novel WebOS, which was so new that HP had a lot of trouble finding programmers who could work in it, and you had a disaster in the making.

I was once on the receiving end of a similar situation, though it moved in comparative slow motion because it took place thirty years ago. IBM (remember them?) introduced its personal computer in 1981, and inspired a similar mad rush on the part of other computer makers to come up with rivals for the then-burgeoning PC market. One logical candidate to win the race was the Digital Equipment Corporation of Maynard, Mass., which had shown IBM a thing or two with its PDP line of mini-computers. Back when an IBM mainframe needed the floor space (and air conditioning) of a small house, you could stow a PDP-8 in a couple of relay racks the size of a large closet. So you’d think DEC would know how to out-hardware IBM.

DEC tried, and the result was a thing called the Pro-350. It looked more or less like an IBM PC, only it ran software that DEC had cooked up on its own. And to use it, you had to either write your own software or wait for programmers to write some and buy it from them. At the time I was a young assistant professor and knew that I needed some kind of a computer. I asked the advice of another professor in my department, who I later learned used to work for DEC. He told me to buy a Pro-350 and said I’d never regret it.

Let us say merely that he turned out to be a false prophet. I spent a good chunk of my start-up research money on that Pro-350. Because I could program in FORTRAN back then, I was able to write a few programs and run them on the thing (it had a FORTRAN compiler that worked halfway decently). But as other people started to buy word processing software, spreadsheet software, and other pre-packaged applications for their new IBM PCs, I was stuck with either writing my own FORTRAN for these things, which was as ridiculous as putting up my own laboratory building to do research in, or waiting for versions that would run on the Pro-350 to come onto the market. I’m still waiting.

I wound up using the thing as a terminal to get into the department’s mainframe for a few years, but eventually I bought a Mac at home and the world was never the same after that. DEC struggled on for another decade or so, mainly by maintaining its fleet of aging computers for former customers, and was bought by Compaq (which is now itself history) in 1998.

The lesson here, if there is one, is that if you’re going to compete in the consumer electronics business with something new, you’d better be first with something that works really well, or else you’re probably wasting your time and money. As I said, it’s too early to tell whether the Nook or the iPad will win out here at our household. But so far, the iPad has worked flawlessly, whereas my wife spent hours trying to get her Nook to see our base station, and I then had to spend a couple more twiddling with our wireless network before it would connect. Not a promising start for the Nook in 2012.

Sources: The New York Times article on the TouchPad appeared online at http://www.nytimes.com/2012/01/02/technology/hewlett-packards-touchpad-was-built-on-flawed-software-some-say.html. I also referred to Wikipedia articles on the TouchPad, the iPad, and DEC.