Monday, July 28, 2014

Imagine There's No Email


For people of a certain age, you're supposed to sing that title to the tune of the John Lennon song that uses the word "heaven" instead of "email."  The other day our wireless hub here at home went out, and it took a day or two before we could get a new one going.  In the interim, my wife, who was initially distressed at her lack of connectivity, remarked that it was actually refreshing to go without email or the Internet for a couple of days.  Without meaning to, we had endured what you might call a fast from email and the Internet.  And we found that it wasn't all that bad.

Mention the word "fasting" to most people, and you may conjure up images of scrawny half-crazed religious fanatics who lived a long time ago.  Or if you have had personal experience of fasting, it was probably just an unpleasant prelude to a medical procedure.  The whole spirit of the age militates against voluntarily refraining from consumption of one kind or another, which is all fasting is.  We are told without letup that we live in a consumer-driven economy, and so it's positively unpatriotic to consume less if you can consume more.

Well, if it's so economically harmful, why do people do it at all?  What is the point of fasting?

Theologians have an umbrella word for fasting, abstinence, and other kinds of things discussed in magazines with titles like A Simple Life, The Simple Things, or just Real Simple.  The word is "simplicity."  Simplicity is a type of spiritual discipline, meaning that it's a habit you can practice that will make you a better person if you get better at it.  Or at least, it stands a chance of doing that.  What is certain is that if you don't practice the discipline, it won't do you any good.

You don't have to be a theologian, or even a religious believer, to benefit from spiritual disciplines, especially fasting.  The reason is that human nature is meant to be a certain way, and habits that make us more the way we were intended to be have benefits, whether or not you believe there is a God who designed you that way.  The habit or discipline of fasting helps the rational part of you gain mastery over the less-rational part. 

All of us have what some sociologists refer to as a "lizard brain":  a primitive part of the brain that we appear to share with lower animals such as lizards.  Lizards are good at what they do.  We have bright-green anoles around our yard here, and they move in a way that I have to admit is quite human:  slowly, guardedly creeping up on a bug until it's within reach, and then snatching it before the bug can figure out what hit him.  But lizards are slaves to their instincts.  When they're hungry, they hunt.  When it's breeding time, they breed.  You don't see lizards wearing little hooded robes and rope belts around their waists refraining from eating juicy bugs right in front of them.  At least, not outside Geico commercials.

But humans can voluntarily refrain from consuming or doing something that is otherwise good, helpful, or even necessary, simply to practice what you might call ordinate self-control.  Take email as an example of such a thing.  Some small fraction of what most people with email accounts receive is worth reading:  it's from a person you know, or your boss, or your long-lost Cousin Max, and you get a benefit or pleasure from reading it.  But the temptation of email, at least for me, is to jump on the computer every time that little bing goes off and see what the newest email is.  If I give in to the temptation to monitor my email more or less constantly like that, I will get little if anything else done. 

An occasional fast from email can teach me several things.  One is that, with the proper preliminary precautions and notices to others, I won't die or lose my job (not necessarily, depending on the job) if I don't read my email for a couple of days.  Another lesson is that life without email is not only possible, but has advantages too.  I can spend hours reading a book, for instance (remember books?—the paper kind, I mean).  Or I can take a walk in a park and observe, really observe, nature and its manifold wonders—not just treat it as some green-screen CGI background to the movie of my life. 

Much as engineers like rules, there are no universal rules for fasting (aside from rules promulgated by various religions for their members, that is).  If you want to try it, think of a bad habit you have that you'd really like to be able to control, a habit that involves something necessary in its proper amount, but something that you find yourself going overboard with.  I'm not trying to start a twelve-step program here, I'm simply suggesting how you can pick a feature of your life that you might consider fasting from.  Then decide on some period of time in which you could afford to stop or reduce that activity, and try to stick to it.  If it's something you really think you can do without altogether, go slow at first.  Trying too much too soon is a classic mistake of novice fasters.  If you can do without the thing for an hour, or a day, do it.  Don't be too hard on yourself if you fail, but if you succeed, try two hours or two days next time.

Fasting is currently a countercultural thing, and except for the magazines I've mentioned and some books I will refer to below, you won't find much support from other people if you decide to fast.  They may secretly feel jealous or threatened by your abstaining from what they view as a normal, healthy part of life.  They may even tell you you're foolish or going to cause yourself trouble, and you should at least listen to them.  But if you've made up your mind to try a fast, go ahead and try it.  The worst that can happen is that you find out the thing has got a tighter grip on you than you thought—and that's worth knowing too.

Sources:  I have found very helpful a couple of books that relate to fasting, simplicity, and related spiritual disciplines.  Richard J. Foster's Celebration of Discipline:  The Path to Spiritual Growth, 3rd ed. (HarperCollins, 1998) is a classic that treats many types of spiritual disciplines, including fasting, in an organized way that respects a wide variety of religious traditions.  For a more personal take on how a very busy wife, mother and author up the road here in Austin implemented seven types of simple living in her household, I recommend Jen Hatmaker's 7:  An Experimental Mutiny Against Excess (B&H Publishers, 2012).

Monday, July 21, 2014

Books and E-books


Last Christmas, someone gave me a Kindle, and I have made intermittent attempts to get engaged in reading e-books on it.  These attempts have met with only mixed success.  A book that was highly recommended by my pastor, who makes no secret that he's not much of a reader, left me unimpressed, and I abandoned it.  More recently, out of a sense of duty to a cultural icon more than genuine interest, I downloaded (for free) a copy of Swann's Way, the first volume of Marcel Proust's encyclopedic multivolume Remembrance of Things Past.  Proust wins my nomination for Greatest Introspector of the Nineteenth Century Award, but I'm afraid I've abandoned him too, somewhere in his childhood garden among his maiden aunts and the eccentric visitor Mr. Swann. 

The only books I've managed to finish on the thing were a couple of mass-produced page-turners written for young adults.  They managed to keep me turning the electronic pages, all right, but after I finished the last one I felt a little like you might feel after binge-watching five recorded episodes in a row of some trashy TV series—I had to ask myself, "Was that really the best use of my time?" 

Despite numerous prophecies that the days of the printed book are numbered, e-books have not yet done to the paper-book publishing business what hand-held electronic calculators did to the slide-rule business.  Electronic calculators were so obviously superior to slide rules in nearly every way that only die-hard traditionalists clung to their slide rules, which took a one-way trip to the museum and never came back.  That is not happening with paper books.

Once the market stabilized on a few common platforms such as Kindle, e-book sales took off and increased steadily for several years.  Some of the biggest sales boosts came from mass-market fiction series such as the hugely popular Hunger Games franchise.  But in the last year or so, e-book sales have flattened out, while paper-book sales are increasing, both in the U. S. and worldwide, in many cases faster than e-books.  A report on the Digital Book World website says that U. S. sales of e-books through August 2013 were $647 million, about a 5% increase from the previous year, while hardcover printed books accounted for sales of $778 million, up nearly 12% from a year earlier.  This trend is continuing in 2014, and is not the picture of a situation where one medium is simply being dropped for a newer one. 

Instead, it's beginning to look like the book medium one chooses will depend on the message it carries.  This is a familiar phenomenon in other fields—music, for example.  Take two music lovers.  One is a busy college student whose part-time job is standing in front of a tax office waving a big arrow sign.  He wants something to listen to while doing this mindless task.  The other is a professional music critic with exquisite taste and highly discriminating ears, wishing to evaluate the latest recordings of a particular Mozart string quartet.  The college student will be happy with an iPod (or smartphone) with earbuds, while the music critic will want to listen in a quiet room through a high-dollar stereo system and speakers.  Different kinds of messages are just naturally suited to different kinds of media, and the same may be true of book publishing going forward.

So will e-books destroy the paper-book publishing business?  No, but they will change the makeup of what gets published that way.  Books with mainly transient value—what an acquaintance of mine once called "nonce books," meaning they're of interest for the nonce, but not much longer—will probably show up as e-books.  Fiction mega-hits that masses of otherwise non-literary folk gobble up are perfectly suited to the e-book format, which makes it easy for the reader to plow through in a straight line as fast as he or she can read.  But for more scholarly publications that someone might want to keep around for reference or contemplation, I think the paper format is more suitable, and current sales statistics say that paper books are not on the verge of immediate extinction.

If you think about it, there is a physical connection, however tenuous, between a person holding a mechanically typeset book in his hands, and the original author, no matter how long ago the author lived.  If you pick up a copy of Aristotle printed before about 1960, the chain goes like this:  from handwritten manuscript to medieval scribes, to nineteenth-century editor, to typist copying the editor's manuscript, to the Linotype operator setting the type, to the stereotype plates that impressed the ink into the very paper you hold in your hands.  

Maybe some computer geek can figure out the analogous path for an e-book, but I'm not sure I want to hear about it.

I think one of the most profound differences between the natures of the two media is that paper books are inclined to permanence, while e-books are suited to transience.  In the nature of things, I expect that today's e-books will not be readable by future generations of machines, or if they are, it will become a bigger and bigger hassle to do so as time goes on, just as it is probably hard for you right now to recover files on a computer you used more than a decade ago.  But unless the ink has faded to invisibility or the paper has crumbled to dust, we can still read writings that were penned thousands of years ago. 

There is a story, possibly apocryphal, that the only copy of the writings of Aristotle, upon whose ideas much of Western civilization is based, lay forgotten in some heir's basement for a couple of hundred years before being rediscovered.  Good thing they were written down, because if Aristotle had used a Kindle, in two centuries the batteries would have died and the operating system would have been, well, ancient history.

Sources:  I referred for statistics on U. S. publishing of print and e-books to the websites http://www.digitalbookworld.com/2013/adult-ebooks-up-slightly-in-2013-through-august-hardocovers-up-double-digits/ and http://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/62031-print-digital-settle-down.html, and for worldwide sales to http://www.publishingtechnology.com/2013/07/year-on-year-ebook-sales-fall-for-the-first-time-says-nielsen-research/.  The popular fiction I read on Kindle was the first two books in the "Airel" series by Aaron Patterson and Chris White.  The story of the rediscovery of Aristotle's works is reported by at least two ancient historians, according to the Wikipedia article on Aristotle.   

Monday, July 14, 2014

The Birth Control Chip


An MIT spinoff called MicroCHIPS has announced plans to market an implantable contraceptive chip that can be turned on and off remotely, and lasts for as long as sixteen years.  Funded by the (Bill) Gates Foundation to the tune of $5 million, the chip contains enough of the contraceptive drug levonorgestrel to provide contraception for the major part of a woman's fertile years.  Once implanted, the device will automatically melt a seal to release a few micrograms of the drug every month until it receives a wireless command to stop, or to start again if desired.  When developers were questioned about hacking concerns, they said the device will incorporate such precautions as individual password-protected remote controls and the need for an external transmitter to be held within a few inches of the device, which will be implanted in a region of fatty tissue.  MicroCHIPS hopes to market the device in some regions of the world starting in 2018.

This announcement raises two distinct ethical issues. 

One is the question of security relating to any kind of medical chip implanted in the human body.  One of the news reports on the contraceptive device noted that former U. S. Vice President Dick Cheney asked his doctors to disable his heart pacemaker's wireless interface out of concerns that someone might hack into it and zap him into eternity.  Such fears are not without foundation.  For example, password protection is notably weak in many cases, and short-range low-power RF links can be manipulated from greater distances by (illegal) high-power transmitters. 
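To make that weakness concrete, here is a deliberately simplified sketch of the two safeguards described: a per-device password and a proximity check based on received signal strength.  The names and logic are my own illustration in Python; MicroCHIPS has published no such code, and a real implant would be far more elaborate.

```python
import hmac

class ImplantAuth:
    """Toy model of the safeguards described: a password check plus a
    proximity heuristic (a legitimate remote held within a few inches
    should arrive with a strong received signal).  Hypothetical sketch."""

    def __init__(self, password, min_signal_dbm=-30.0):
        self._password = password
        self.min_signal_dbm = min_signal_dbm  # weaker than this => reject

    def accept(self, command, password, signal_dbm):
        """Accept 'start'/'stop' only from a nearby, authenticated remote."""
        if signal_dbm < self.min_signal_dbm:
            return False  # signal too weak: transmitter presumably far away
        if not hmac.compare_digest(password, self._password):
            return False  # wrong password (constant-time comparison)
        return command in ("start", "stop")
```

Note that the proximity check is only a heuristic: an illegal high-power transmitter far away can present the same received signal strength as a legal low-power one held close by, which is exactly the loophole mentioned above.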

It is a sign of a narrow mindset to consider only technical means of hacking.  In the developing-world environments where the Gates Foundation intends the contraceptive chip to be used, there is often a strong animus against any method of birth control on the part of husbands and boyfriends. Why should a man bother with sophisticated technical hacking when he can threaten to beat the stuffing out of the woman if she doesn't tell him her password?  No one has figured out a foolproof way to prevent that kind of hack.

The second ethical issue, and the one that will probably get me into hot water shortly, is the question of contraception in general.  Contraception is an existential question for the human race as a whole, and thus goes to the very heart of what you think humanity is about. 

Until the mid-twentieth century, the consensus of both learned and popular opinion was that engaging in sexual intercourse while intentionally preventing the conception of a child was wrong.  Here is what none other than the great psychologist (and atheist) Sigmund Freud said in a lecture delivered in 1915:  "We actually describe a sexual activity as perverse if it has given up the aim of reproduction and pursues the attainment of pleasure as an aim independent of it. So, as you will see, the breach and turning-point in the development of sexual life lies in its becoming subordinate to the purposes of reproduction."

While he said this in the context of the subject of infantile sexuality, Freud is essentially making the distinction between the animal type of intercourse, in which creatures such as dogs and cats simply follow their instinctive sexual urges wherever they lead, and the mature human type of intercourse, in which the main reproductive function of sex is recognized by the rational animal known as a human being, and used with that function fully in mind. 

Now this is an ideal, obviously, and many people have fallen short of the ideal since prehistoric times.  But when pharmaceutical contraceptives became available in the 1950s, moral authorities in Western societies gradually abandoned the ideal, with one notable exception:  the Roman Catholic Church.  Since then, nearly everyone has adopted a model of the human being that views sexuality as independent of reproduction. 

If you believe that human beings arose by means of mindless undirected evolution and no God was ever in the picture, it's hard for me to understand how you can also believe sexuality should be independent of reproduction.  Isn't that how we got here, by means of sexual attraction between opposite-sex fertile men and women?  Oh, but now we're beyond all that, you say.  We've taken control of our own evolution and can do anything we like, implant chips to turn our women into sex robots or what have you.  Reproducing is somebody else's job—seems like we will never run out of people.  To that I would say, ask Japan.

Japan is the incredible shrinking country.  For four years in a row, Japan's population has suffered a net decline, even with immigration taken into account.  In 2013 there were about 238,000 more deaths than births in the famously insular island nation.  While not all of this decline can be attributed to contraceptive technologies, those means go together with a cultural mindset that focuses people on careers and individual success to the detriment of families, marriage, and (in Japan) even romantic relationships between the sexes, which many Japanese have given up on altogether.  The future for Japan looks grim, as it does to a greater or lesser degree for many European countries whose birth rates are not much better than Japan's.

I was going to bring religion into this argument, but I don't think there is a need to.  Plain lunkheaded observation of simple statistics shows that cultures and countries that discourage reproduction, whether by abortion, birth control, or a mindset that disses family life, will tend to grow smaller, will experience widespread economic and social dislocations, and possibly disappear altogether.  And in the course of time they will be replaced, if at all, by other cultures that encourage reproduction and promote stable family structures that produce mature, competent people who have the long-term interests of their societies at heart.  And that is a totally Darwinist secular evolutionary argument.

Excuse me, but DUH.

One of my favorite Eudora Welty short stories ends with a small boy being punished for a minor infraction in a hair salon.  He breaks loose from his mother and runs out the door, but as he leaves he stops to get in the last word: "If you're so smart, why ain't you rich?"  I would turn it around and ask Mr. Gates, "If you're so rich, why ain't you smart enough to realize that contraceptive technology is not in the best interests of humanity?" 

Mr. Gates is not going to pay any attention to me, and I expect that many of my readers will not see eye-to-eye with my position on this either.  Though not a Catholic myself, after many years of experience, both personal and second-hand, I have come to the conclusion that the Roman Catholic Church has the most philosophically and theologically sound positions on human sexuality of any institution around—scientific, cultural, religious, political, or otherwise.  But that is a story for another time and place.

Sources:  For information on the contraceptive chip, I referred to an article at http://mashable.com/2014/07/10/wireless-birth-control/, and also one at http://www.medicalnewstoday.com/articles/279323.php.  Sigmund Freud's Lecture XX, "The Sexual Life of Human Beings," from which the above quotation was taken, is available in numerous print editions of his 1915 lectures, Introductory Lectures on Psycho-analysis, which is apparently in the public domain in some translations.  My particular source online was a George Mason University site, http://chnm.gmu.edu/courses/honors130/freud3.html.  The Eudora Welty short story I referred to is "Petrified Man."  Readers interested in knowing more about the Roman Catholic Church's position on sexuality in a highly readable and useful form can consult Christopher West's Good News About Sex & Marriage (Cincinnati, OH:  St. Anthony Messenger Press, 2004).  This book is especially recommended for young people who have most of their lifetimes ahead of them in which to avoid the mistakes of an older generation.

Monday, July 07, 2014

The Robot Says You Flunked: Algorithms versus Judgment


Harvard and MIT have teamed to develop an artificial-intelligence system that grades essay questions on exams.  The way it works is this.  First, a human grader manually grades a hundred essays, and feeds the essays and the grades to the computer.  Then the computer allegedly learns to imitate the grader, and goes on to grade the rest of the essays a lot faster than any manual grader could—so fast, in fact, that often the system provides students nearly instant feedback on their essays, and a chance to improve their grade by rewriting the essay before the final grade is assigned.  So we have finally gotten to the point of grading essays by algorithms, which is all computers can do.
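The learn-from-a-hundred-examples workflow described above can be caricatured in a few lines.  The sketch below (plain Python; the names and method are my own illustration, not edX's actual system, which is far more sophisticated) grades a new essay as a similarity-weighted average of the grades of the most similar human-graded essays:

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words representation: lowercased word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def train(essays, grades):
    """'Training' here is just storing the vectorized graded examples."""
    return [(vectorize(e), g) for e, g in zip(essays, grades)]

def predict(model, essay, k=3):
    """Grade a new essay as the similarity-weighted average grade of
    the k most word-similar human-graded essays."""
    v = vectorize(essay)
    scored = sorted(((cosine(v, ev), g) for ev, g in model), reverse=True)[:k]
    total = sum(s for s, _ in scored)
    if total == 0:
        return sum(g for _, g in scored) / len(scored)  # fallback: plain mean
    return sum(s * g for s, g in scored) / total
```

Even this toy makes the professor's worry tangible: the "grader" has no idea what any essay means; it only measures resemblance to essays a human has already judged.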

Joshua Schulz, a philosophy professor at DeSales University, doesn't think much of using machines to grade essays.  His criticisms appeared in the latest issue of The New Atlantis, a quarterly on technology and society, and he accuses the software developers of "functionalism."  Functionalism is a theory of the mind that says, basically, the mind is nothing more than what the mind does.  So if you have a human being who can grade essays and a computer that can grade the same essays just as well, why, then, with regard to grading essays, there is no essential difference between the two. 

With all due respect to Prof. Schulz, I think he is speculating, at least when he supposes that the essay-grading-software developers espouse a particular theory of the mind, or for that matter, any theory of the mind whatsoever.  The head of the consortium that developed the software is an electrical engineer, not a philosopher.  Engineers as a group are famously impatient with theorizing, and simply use whatever tools fall to hand to get the job done.  And that's what apparently happened here.  Problem:  tons and tons of essay questions and not enough skilled graders to grade them.  Solution:  an automated essay grader whose output can't be distinguished from the work of skilled human graders.  So where is the beef?

The thing that bothers Prof. Schulz is that the use of automated essay-grading tends to blur the distinction between the human mind and everything else.  And here he touches on a genuine concern:  the tendency of large bureaucracies to turn matters of judgment into automatic procedures that a machine can perform. 

Going to extremes can make a point clearer, so let's try that here.  Suppose you are unjustly accused of murder.  By some unlikely coincidence, you were driving a car of a similar make to the car driven by a bank robber who shot and killed three people and escaped in a car whose license plate number matches yours except for the last two digits, which the eyewitness to the crime didn't remember.  The detectives on the case didn't find the real bank robber, but they did find you.  You are arrested, and in due time you enter the courtroom to find seated at the judge's bench, not a black-robed judge, but a computer terminal at which a data-entry clerk has entered all the relevant data.  The computer determines that statistically, the chances of your being guilty are greater than the chances that you're innocent, and the computer has the final word.  Welcome to Justice 2.0. 

Most people would object to such a delicate thing as a murder trial being turned over to a machine.  But nobody has a problem with lawyers who use word processors or PowerPoints in their courtroom presentations.  The difference is that when computers and technology are used as tools by humans exercising that rather mysterious trait called judgment, no one being judged can blame the machines for an unjust judgment, because the persons running the machines are clearly in charge. 

But when a grade comes out of a computer untouched by human hands (or unseen by human eyes until the student gets the grade), you can question whether the grader who set the example for the machine is really in charge or not.  Presumably, there is still an appeals process in which a student could protest a machine-assigned grade to a human grader, and perhaps this type of system will become more popular and cease to excite critical comments.  If it does, we will have moved another step along the road that further systematizes and automates interactions that used to be purely person-to-person.

Something similar has happened in a very different field:  banking.  My father was a loan officer for many years at a small, independent bank.  He never finished college, but that didn't keep him from developing a finely honed gut feel for the credit-worthiness of prospective borrowers.  He wouldn't have known an algorithm if it walked up and introduced itself, but he got to know his customers well, and his personal interactions with them were what he based his judgment on.  He would guess wrong once in a great while, but usually because he allowed some extraneous factor to sway his judgment.  For example, once my mother asked him to loan money to a work colleague of hers, and it didn't work out.  But if he stuck to only the things he knew he should pay attention to, he did pretty well.

Recently I had the occasion to borrow some money from one of the largest national banks in the U. S., and it was not a pleasant experience.  I will summarize the process by saying it was based about 85% on a bunch of numbers that came out of computer algorithms that worked from objective data.  At the very last step in the process, there were a few humans who intervened, but only after I had jumped through a long series of obligatory hoops that allowed the bankers to check off "must-do" boxes.  If even one of those boxes had been left blank, no judgment would have been required—the machine would say no, and that would have been the end of it.  I got the strong impression that the people were there mainly to serve the machines, and not the other way around.

The issue boils down to whether you think there is a genuine essential difference between humans and machines.  If you do, as most people of faith do, then no non-human should judge a human about anything important, whether it's for borrowing money, assigning a grade, or going to jail.  If you don't think there's a difference, there's no reason at all why computers can't judge people, except for purely performance-based factors such as the machines not being good enough yet.  Let's just hope that the people who think there's no difference between machines and people don't end up running all the machines.  Because there's a good chance that soon afterwards, the machines will be running the people instead.

Sources:  The Winter 2014 issue of The New Atlantis carried Joshua Schulz's article "Machine Grading and Moral Learning" on pp. 109-119.  The New York Times article from which Prof. Schulz learned about the AI-based essay grading system is available at http://www.nytimes.com/2013/04/05/science/new-test-for-computers-grading-essays-at-college-level.html.  The Harvard-MIT consortium's name is edX.

Note to Readers:  In my blog of June 16, 2014, I asked for readers to comment on the question of monetizing this blog.  Of the three or four responses received, all but one were mostly positive.  I have decided to attempt it at some level, always subject to reversal if I think it's going badly.  So in the coming weeks, you may see some changes in the blog format, and eventually some ads (I hope, tasteful ones) may appear.  But I will try to preserve the basic format as it stands today as much as possible.

Monday, June 30, 2014

When Is a TV Not a TV?


When the U. S. Supreme Court says so, that's when.  Last Wednesday, June 25, the Court issued a split decision (6-3) against Aereo, a provider of over-the-Internet broadcast TV service that used a unique technology to get around the requirement to pay retransmission fees to program originators, as cable TV companies do.  Without such fees, Aereo's service was really cheap—as little as eight bucks a month—and over the last year or two the firm had expanded into several urban U. S. markets.  In response to the ruling, on Saturday, June 28, Aereo's CEO Chet Kanojia pulled the plug on the service "temporarily," although it will be surprising if Aereo ever makes a comeback, at least in its present form.

For readers who missed my blog on Aereo last February 3, a little background is in order.  Copyright laws exist so that creators of original content won't starve to death while unscrupulous people copy or retransmit the content without paying for it.  It seems to this non-lawyer that there is a happy medium of copyright law between two extremes.  One extreme is that of no law at all, which stifles originality because nobody can make money doing creative stuff.  The other extreme is copyright control, by the originators, of everything in perpetuity, which leads to permanent monopolies that work against the interests of the consumer.  Copyright law is largely a federal matter, so the U. S. Congress is where it comes from, and the proper job of the courts, including the U. S. Supreme Court, is to interpret the law the way Congress intended.

When cable TV arose in the 1950s as a way of providing TV service for isolated communities beyond the reach of TV signals, the content providers (mostly the big three networks back in those days) were miffed, because here was a bunch of companies taking money from their customers for signals they didn't pay for.  In response, Congress amended the copyright laws in 1976 to make it clear that cable TV was a "public performance," legally speaking.  The basic idea is that if you take somebody else's content and make it available to all comers, you are profiting from it and should compensate the parties you got it from.  Hence, the big retransmission fees that cable companies pay to content providers.  The only exception to this rule is the end user or consumer, for whom the whole system operates.  If you sit in your own house and watch an over-the-air broadcast on your own TV using your own antenna, then it's not a public performance, and you don't have to pay retransmission fees because you're not retransmitting.

It was Chet Kanojia's dream to take that exact situation, and just stretch it out technically while staying within what he thought was the letter of the law.  An Aereo subscriber was at the end of a long chain of technology that started with a paperclip-looking antenna at an Aereo "head-end."  Each head-end had thousands of individual antennas, so that every active subscriber controlled a different antenna.  The signal the consumer selected was picked up by the assigned antenna, converted to digital form, and sent over the Internet to the receiver of choice—a phone or computer or iPad or what have you.  The effect, broadly considered, was not essentially different from what a cable TV company would do:  a lot of hardware delivering someone else's content to a lot of consumers.  But technically, each consumer controlled a virtual TV of his or her own, so Aereo claimed it wasn't like cable TV at all—it was just a whole lot of individual TVs controlled by individual consumers.  And therefore, Aereo didn't have to pay retransmission fees.
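The one-antenna-per-subscriber bookkeeping at the heart of that design can be sketched in a few lines.  This is a hypothetical illustration in Python, not Aereo's actual implementation (which was, after all, mostly hardware):

```python
class AntennaFarm:
    """Toy model of Aereo-style bookkeeping: each active subscriber
    is pinned to a dedicated antenna, so no two streams ever share one."""

    def __init__(self, num_antennas):
        self.free = list(range(num_antennas))   # idle antenna IDs
        self.assigned = {}                      # subscriber -> antenna ID

    def start_stream(self, subscriber):
        """Assign a dedicated antenna; fail if none are idle."""
        if subscriber in self.assigned:         # already streaming
            return self.assigned[subscriber]
        if not self.free:
            raise RuntimeError("no free antennas")
        antenna = self.free.pop()
        self.assigned[subscriber] = antenna
        return antenna

    def stop_stream(self, subscriber):
        """Release the subscriber's antenna back to the idle pool."""
        self.free.append(self.assigned.pop(subscriber))
```

The legal theory rested entirely on that invariant: because no antenna is ever shared, each stream could be argued to be a private performance from the subscriber's "own" antenna rather than a public one.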

Naturally, the content providers hated this into the ground, and quickly got their lawyers to sue Aereo.  Back in February, the lawsuits were working their way up the legal ladder to the Supreme Court, which heard the arguments in April, and finally last week the Court issued its decision.

The basic argument of the Court's majority was what I would call the duck approach:  if it walks like a duck and quacks like a duck, it must be a duck.  If you ignore the technical insides of how Aereo provides its service and just treat it like a black box, it's not that much different from a cable TV provider.  Hence, Aereo has to pay up just like the cable companies.  With his whole business carefully tailored to the assumption that his firm would not have to pay such fees, Kanojia recognized that the jig was up, and shut it down.

Three conservative members of the Court—Alito, Thomas, and Scalia—sided with Aereo, but not because they thought Aereo should be left alone to go about its business.  Even the dissenters agreed that what Aereo was doing smacked of copyright infringement; they simply thought the similarity argument with cable TV was a weak one.  They were concerned that the adverse decision against Aereo would stifle technological innovation, and wanted to see a more technically savvy argument as to exactly what Aereo was doing wrong besides looking broadly like a cable TV company.

They may be right, but frankly, I'm not sure Aereo's kind of innovation is the sort we need.  Remember, if it weren't for lawyers and copyright laws, Aereo never would have designed its system the way it did in the first place.  It was a brilliant technical dodge, configured to imitate a legal technology in order to evade retransmission fees.  Unfortunately for Aereo, a majority of the Supreme Court justices didn't think the technical details made that much difference.  And when all is said and done, I tend to agree with them.

It seems to me that we need people like Chet Kanojia engaged in technical challenges that really matter, rather than devising clever ways to avoid legal obstacles.  I'm sure Kanojia believes that what he was doing was a true service to the consumer, but at least in the U. S., his Aereo venture looks like it has made its final performance—public or otherwise. 

Sources:  I consulted these news items on the Supreme Court Aereo decision:  http://www.huffingtonpost.com/2014/06/28/aereo-suspension-operatio_n_5539559.html and http://www.businessinsider.com/aereo-supreme-court-ruling-2014-6.  As mentioned, I last blogged on Aereo on Feb. 3, 2014.

Monday, June 23, 2014

The Two-Edged Sword of Email Archives


Lois Lerner, former head of the U. S. Internal Revenue Service Exempt Organizations Division, is a lawyer by training.  Don't forget that fact, which is significant for what follows.  When her division came under fire for selectively persecuting conservative organizations with everything from delays in processing tax-exemption applications to leaks of confidential donor lists, she refused to testify before a House of Representatives investigative committee, claiming the Fifth Amendment's guarantee against self-incrimination.  The House later voted to hold her in contempt of Congress.  And more recently, investigators working the scandal have learned that Ms. Lerner's emails for a critical period ending in 2011 are probably lost because the hard drive she had them on crashed and was thrown away.  Besides throwing a big monkey wrench into the investigation, this fact highlights a question of interest to engineers who design information systems and everyone who uses email:  what do you do with old emails?

Anyone reading this blog is very likely a daily user of email.  Email has been a routine part of life for so long that it is hard to imagine a time when it was available only to a select few computer scientists and physicists in the 1970s.  The transition year for my use of email was 1993.  Here is an excerpt from my journal for Oct. 2 of that year:  "This is the year I have gone whole-hog into email.  Before . . . a year ago I hardly ever used it, but now it’s a rare day I don’t get at least three or four email messages, and send almost that many."  Ah, the good old days.

Most emails, like most conversations, are fleeting in significance.  Once the meeting is set up or the news is shared, the bits representing the message have served their purpose, and you face the problem of what to do with them.  Some people just let the stuff accumulate in their inboxes as close to forever as the operating system permits, using search engines to locate the occasional old email that needs to be found.  A subset of these folks end up declaring "email bankruptcy," a term attributed to Lawrence Lessig for what happens when a person overwhelmed by email in a given account ignores it while it piles up to intimidating proportions, and then flushes the whole thing.  Others keep old emails until some external factor intervenes, like a hard-drive crash or a notice from IT support saying their inbox is full.  And then there are the email packrats like me.

Once a month, I go through my email inbox and pitch emails I no longer need.  This is the majority of them, because even after my university's spam filter has disposed of the worst offenders, I still get hundreds of emails every month from conferences I will never attend, organizations I will never join, and product and service providers whose products or services I will never need.  When I encounter an email I would like to keep, I sort it into a folder in my Mac Mail application, which physically resides on my computer.  (No email clouds for this guy—not yet, anyway.)  This tedious task takes me the better part of a morning or afternoon each month, but at the end of it I have the satisfaction of a clean email inbox and the knowledge that I can find any important emails I need without using a search engine, which I've never found that helpful for emails anyway.  Together with regular backups to an external hard drive, this process allows me to locate, or at least have possession of, any email I have received going back as far as 1998, and earlier if I cared to dig up some legacy email software.  It's not quite true that I still have the first email I ever received, but I've got some pretty old ones in there.
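That monthly triage amounts to a simple sorting rule: keepers go into topical folders, everything else gets pitched.  Here is a toy sketch of the idea; the message format, folder names, and keywords below are all hypothetical, and no real mail client's API is involved.

```python
# Toy sketch of a monthly email triage.  KEEP_RULES, the folder names,
# and the message format are assumptions for illustration only.

KEEP_RULES = {
    "Students": ["thesis", "homework"],
    "Editing": ["manuscript", "review"],
}

def triage(inbox):
    """Sort keepers into folders by subject keyword; pitch the rest."""
    folders = {name: [] for name in KEEP_RULES}
    pitched = []
    for msg in inbox:
        subject = msg["subject"].lower()
        for folder, keywords in KEEP_RULES.items():
            if any(k in subject for k in keywords):
                folders[folder].append(msg)
                break
        else:  # no rule matched: conference spam, vendor pitches, etc.
            pitched.append(msg)
    return folders, pitched

inbox = [
    {"subject": "Question about my thesis draft"},
    {"subject": "Last chance!  Conference registration closes soon"},
    {"subject": "Manuscript review request"},
]
folders, pitched = triage(inbox)
```

The automated version trades away the one thing the manual chore provides: a human judgment call on each borderline message, which is presumably why packrats like me still do it by hand.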

Now if I were a lawyer, this clinging to old emails would be unwise behavior on my part.  Why?  Because, as a patent lawyer once told me when I asked whether I should email him or phone him about a delicate and confidential matter, "emails are discoverable, and phone conversations aren't."  "Discoverable" means if somebody sues or indicts you and sends you a subpoena, they can legally grab documents of all kinds, including emails.  But if you happened to have a phone conversation that wasn't recorded, there's nothing to discover, and they have to use oral questioning to grill the participants' memories, which can be conveniently feeble at times. 

It is not for me to say whether Lois Lerner's conveniently timed hard-drive crash and disposal were just the normal way the IRS did business, or whether there were more nefarious things going on.  Experience has taught me that for every bad thing that happens due to evil intent, six or eight bad things happen due to simple incompetence.  It turns out that the IRS's email system managers kept tapes of all emails, but these were routinely erased and reused every six months because they were backups kept for emergencies, not archives retained for permanent storage.  Investigators have turned up thousands of Lerner's emails preserved on the machines of people she sent emails to, so that is something, but the electronic records will never be as complete as they might have been if Lerner had been more of an email packrat.

But if she had been, she probably would have gone into some other field than law, like engineering, and nobody would care about her old emails.  Lawyers know that old records of all kinds, not just emails, can be both helpful and harmful.  The IRS people in charge of email may have made a considered judgment not to keep old emails longer than a certain time.  Or it may have just been up to the individual employee.  At any rate, the lesson from this situation is that old emails are two-edged swords.  If you get rid of them, your life is simpler, but you may be called to account for something, like purging old emails, that you didn't consider wrong at the time.  But if you keep them, they may come back to haunt you.

Sources:  I referred to these news stories describing the details of Lois Lerner's hard-drive vicissitudes:  from Politico at http://www.politico.com/story/2014/06/irs-lois-lerner-emails-108044.html, from USA Today at http://www.usatoday.com/story/news/politics/2014/06/17/how-the-irs-lost-lois-lerners-e-mails/10695507/, and from National Review Online at http://www.nationalreview.com/article/362667/investigation-ids-irs-leaker-eliana-johnson.  I also referred to the Wikipedia articles on email, Lois Lerner, and the 2013 IRS scandal.

Monday, June 16, 2014

To Monetize or Not To Monetize—Reader Responses Requested


One of the more prominent concerns in engineering ethics is the improper influence of money.  It's impossible to do engineering of any magnitude without money being involved somehow, because doing work for pay is what engineering is mostly about.  Without meaning to, G. K. Chesterton provided one of the best and most succinct definitions of engineering I've ever come across:  "the application of physical science to practical commerce."  And commerce involves money, so money changes hands in most engineering work.

It's how money changes hands, and who knows about it, that can lead one into an ethical quagmire.  Two items in the IEEE's Code of Ethics address this problem.  In its code, the IEEE charges its members "to avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist" and "to reject bribery in all its forms."  Getting paid for engineering work is not a problem.  But if an engineer gives the impression of doing something on an objective basis—selecting competing bids for an engineering project based on technical criteria, for instance—but in fact has been secretly influenced to favor one party over others by means of money or its value equivalent paid by that party, then you have a conflict of interest, at least, and possibly a case of bribery.  And while engineers of all stripes should be careful about such things, one who writes a blog on engineering ethics must be especially cautious.

Preserving not only objectivity, but the appearance of objectivity, is the main reason that since I began this blog about eight years ago, I have kept it as non-commercial as possible.  It is brought to you by Google, a notably profit-making enterprise, but I pay them nothing and they pay me nothing, unless you count the value of the technical facilities they provide me to enter the blog text into their system every week.  In exchange, of course, they hope that my blog encourages people to use their search engine, and I suppose in that way I'm responsible for a vanishingly small fraction of Google's profit.  But other than that very tenuous exchange, I get no monetary or economic benefit from writing this blog.  In fact, on occasion Google has come in for its share of criticism in this space, and nobody has ever pulled my plug that I'm aware of. 

I am now considering an experiment in what is called "monetizing."  Basically, I would tell Google that it's okay to put some amount of advertising on my blog.  Some aspects of this would be under my control, I think, although I haven't pursued it far enough to know for sure.  I do know that if I want to stop it after a while, I can do that, so if it doesn't work out or isn't worth the annoyance, I can always go back to being non-monetized. 

Before I take this step, I am checking with you, my readers, as to your thoughts and opinions on the question of whether I should try monetizing this blog.  I am under no illusions that I am addressing a vast multitude.  The last time I checked, there were a few dozen people who follow this blog regularly, and more who find it via search engines and so on for one-time views on certain topics.  But whether you've been following it for years or just came across it today, I am grateful for your attention, which is so valuable in this media-overload era, and do not wish to do anything that would turn off or disappoint numbers of you. 

So I am asking for your input.  I promise not to do anything about monetizing at least through the end of June.  In turn, if you have any opinion about this—favorable or unfavorable—please let me know in the next week or so.  If you wish to make your thoughts public, use the comment space below this blog (rather unfortunately labeled "NO COMMENTS" until somebody clicks on it and makes the first one—maybe I can fix that too, and I'd get more comments).  Or if you'd prefer to send me a private response, you can email me at kdstephan@txstate.edu.  While this blog is not a democracy and I don't promise to do whatever the majority says, I will certainly take every response into consideration in deciding whether or not to proceed with this experiment.  And if the response is primarily negative, it's unlikely I will try it.  I don't need money that badly, and if monetizing would turn off a lot of readers, it's a bad idea.

If I do proceed with it, I will do my best to preserve the objectivity which I hope has been a characteristic of this blog so far.  I came across a useful philosophical distinction the other day between two types of objectivity:  psychological and rational objectivity.  Psychological objectivity is the state of being neutral on a topic, of having no strong opinion one way or another.  Typically, we can be psychologically objective only about things we know little about, or haven't thought about deeply.  On the other hand, rational objectivity is the ability to distinguish between good and bad arguments on a topic, and to believe a thing for reasons that are genuinely good ones.  Rational objectivity has been my goal in this blog from the start, and I plan to keep it that way even if I receive some money from advertisements that I may not be psychologically objective about.  For that matter, I'm not psychologically objective about engineering ethics itself:  I care deeply about it, and I'm biased in favor of it.  But that doesn't prevent me, I hope, from being rationally objective about it and judging various arguments on their logical and evidential merits, rather than just going with my feelings about a question.

If you respond, I'm not asking you to be psychologically objective.  Rationally objective would be nice, but I won't insist on that either.  Unless it makes no difference to you, consider letting me know your opinion on monetizing this blog by June 30.  After that I'll summarize the responses and announce the next step:  to monetize or not to monetize?

Sources:  In a discussion of American character in Generally Speaking (London:  Methuen, 1928, p. 63), G. K. Chesterton said that Americans favored action over contemplation and excelled in the application of physical science to practical commerce.  The distinction between psychological and rational objectivity is made in J. P. Moreland and W. L. Craig, Philosophical Foundations for a Christian Worldview (Downers Grove, Ill.:  InterVarsity Press, 2003), p. 150.  The IEEE Code of Ethics can be found at http://www.ieee.org/about/corporate/governance/p7-8.html.