Monday, August 18, 2014

"Survivor" On Mars: Poor Ratings Could Be Deadly

Reality shows on TV claim to present life as it happens.  Never mind that the kind of life that happens on these shows is something that most of us would pay money to avoid:  getting tossed into a wilderness with next to nothing to live on, or being expelled from the show altogether by a vote of your peers.  But reality shows continue in various forms to be one of the more popular TV genres.

A Dutch nonprofit startup called Mars One is planning a reality show that is literally out of this world.  The organization's plan is to send four astronauts—two men and two women—to Mars by 2025, that is, eleven years from now.  And they plan for their main source of revenue to be fees charged by the outfit for continuous media coverage of the entire venture. 

Did I say anything about bringing the astronauts back?  No, and neither does Mars One.  From the get-go, the organization's plan has been to get their stars to Mars, and after that, well, they knew what they were getting into, didn't they?  And there are always phone calls—with a seven-minute one-way delay.  Despite this, er, disadvantage, about 200,000 people have reportedly expressed interest in being selected for the first trip.  As of last May, Mars One had culled the list of prospects down to about 700 lucky (or unlucky, as the case may be) people.  Eventually it will have to be cut down to a few dozen or so at most who will undergo the planned seven or eight years of training, which has to commence no later than 2016 for the project to keep on schedule for the launch in 2024 (it will take over a year to get there). 
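The seven-minute figure is only a middling case, by the way; the one-way delay depends on where Earth and Mars happen to be in their orbits.  Here is a quick sketch of the arithmetic, using approximate published closest-approach and farthest-separation distances (the numbers are illustrative astronomical values, not anything from Mars One):

```python
# One-way radio delay between Earth and Mars at two extremes of separation.
# Distances are approximate astronomical values, assumed for illustration.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Light travel time for a radio signal over the given distance, in minutes."""
    return distance_km / C_KM_PER_S / 60.0

closest_km = 54.6e6    # Earth-Mars at a very close opposition
farthest_km = 401.0e6  # Earth-Mars near solar conjunction

print(f"closest:  {one_way_delay_minutes(closest_km):.1f} min")
print(f"farthest: {one_way_delay_minutes(farthest_km):.1f} min")
```

So a call home would involve anywhere from about three to twenty-two minutes of silence each way, with the quoted seven minutes falling toward the shorter end.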

The Mars One website has that characteristically Dutch tone of modesty combined with a tolerance for things that other cultures consider beyond the pale.  It may be no coincidence that the same country harboring Mars One is also where euthanasia has made its biggest advances.  And as far as living on earth is concerned, the Mars One trip would be just a long-drawn-out, televised, technologically implemented end to your earthly existence. 

In a way, there's nothing new about Mars One's invitation to become famous and historical at the price of never being on earth again.  In wars and disasters, individuals have at various times chosen to throw away their lives with a vanishingly small chance of survival, in order to achieve a greater good.  Japanese fighter pilots flew suicide missions in the closing days of World War II.  Arland D. Williams, Jr., one of only six survivors of a plane that crashed into the Potomac River on January 13, 1982, repeatedly handed lifelines to the other survivors, only to drown when the plane's wing he was standing on sank.  But rightly or wrongly, these people were sacrificing their lives for a cause greater than themselves.

What is the comparable cause that Mars One is proposing to achieve, at the price of its passengers' lives?  Whatever it is, 200,000 people around the world at least considered it worthwhile enough to apply. 

National glory doesn't seem to be much of a motive.  Mars One is probably the most extreme existing example of the turn toward private space ventures that began about a decade ago.  When space exploration was something so difficult that only governments could afford it, those who volunteered and went through the arduous training and took great risks—and those who lost their lives, too—had the satisfaction of knowing that their actions were on behalf of the United States, or the USSR, or (more lately) the People's Republic of China.  During the space race of the 1960s, being an astronaut was a way of fighting the Cold War by other means.  But the Mars One venture has a deliberately international tone to it, and I suspect that most of their applicants consider themselves mainly citizens of the world, rather than of the particular country where they happened to be born.

What if Mars One barely manages to get their first folks on Mars and then runs out of money?  Even the most debauched reality-TV shows up to now have not proposed to show us live scenes of slow starvation, but that's what we'd be dealing with.  What would the dying colonists be thinking? 

There are precedents for this sort of thing, after all.  We can look at the record left behind of a man who knowingly ventured on a risky expedition that turned out badly:  Robert Falcon Scott.  In 1912, his team was the second in history to reach the South Pole, after Roald Amundsen.  A few weeks later, after his team consumed the last of their provisions, Scott was the last of his five-person party to die of cold and starvation.  Knowing what was coming, he left a "Message to the Public" which reads in part: "We took risks, we knew we took them; things have come out against us, and therefore we have no cause for complaint, but bow to the will of Providence. . . . Had we lived, I should have had a tale to tell of the hardihood, endurance, and courage of my companions which would have stirred the heart of every Englishman."  He lived and died an Englishman to the last, and expressed his dying thoughts in prose that has stood the test of time.

If Mars One ever gets off the ground, the adventure may end in tragedy—suddenly, with no time for last words or regrets, or slowly, allowing its victims to reflect on their fate as Scott did in his last letters.  Maybe in the applicant pool of 700 there are one or two Robert F. Scotts whose grasp of reality, and what the human spirit is capable of, would be equal to a supreme crisis like the one Scott faced.  But the track record of reality TV is not promising in this regard.  

Sources:  The Mars One website is  I referred to reports on their activities at (posted April 13, 2014) and a CNN report at  I also consulted Wikipedia articles on reality television and Robert Falcon Scott.

Monday, August 11, 2014

Dodging Solar Bullets

Massive blackouts—pipeline explosions—whole regions of Europe or North America plunged into the nineteenth century, but without even the rudiments of that century's technology.  Elevators that don't elevate, ventilators that don't ventilate, gas pumps that don't pump, hospitals that turn into charnel houses.  Entire cities evacuated and their populations dying in their frantic attempts to escape to nowhere.

No, this isn't a movie review of the latest mega-disaster flick.  It is a fairly realistic scenario of what could have happened on July 23, 2012, if a certain cluster of sunspots had been facing directly toward the earth, rather than pointing away from us toward a space probe called STEREO A.  As it happened, STEREO A had a front-row seat at a performance that engineers hope we will never witness here—but one that could happen any time.

What happened that day was not just one, but two coronal mass ejections (CMEs).  Often associated with, but distinct from, the brilliant solar flares that arc above the sun's surface every now and then, coronal mass ejections contain the energy of millions of nuclear bombs and send tons of charged particles flying out into space.  Entangled with the particles are spaghetti-like snarls of magnetic field lines, and the magnetic fields are what can damage our electrical and mechanical infrastructure. 

When a CME encounters the earth's magnetic field, the normally fairly stable domestic field jumps around like the proverbial cat on a hot tin roof.  And as every electrical engineer knows, changing magnetic fields near conductors induce voltages and currents in those conductors.  Substitute "power lines" and "pipelines" for "conductors" and you begin to see the problem. 
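To put rough numbers on the induction effect:  during a severe geomagnetic storm, the fluctuating field drives a quasi-DC "geoelectric" field along the ground, and a long transmission line integrates that field over its entire length.  Here is a back-of-the-envelope sketch; the field strength, line length, and circuit resistance are assumed illustrative values, not measurements from any particular storm or grid:

```python
# Rough estimate of geomagnetically induced current (GIC) in a long power line.
# All input values are assumed for illustration; real storms and grids vary widely.

geoelectric_field_v_per_km = 6.0  # severe-storm ballpark figure
line_length_km = 500.0            # a long high-voltage transmission line
loop_resistance_ohms = 5.0        # line + transformer windings + grounding, end to end

induced_voltage = geoelectric_field_v_per_km * line_length_km  # volts along the line
quasi_dc_current = induced_voltage / loop_resistance_ohms      # amps of quasi-DC

print(f"induced voltage:  {induced_voltage:.0f} V")
print(f"quasi-DC current: {quasi_dc_current:.0f} A")
```

Hundreds of amps of quasi-DC flowing through windings designed for pure AC is what drives transformer cores into saturation, and saturated transformers overheat.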

While these structures are protected against the normal kinds of mishaps that can befall them—lightning in the case of power lines, breaks in the case of pipelines—relatively few such installations are also protected against the unique sort of stresses that a record-breaking geomagnetic storm can induce.  And geomagnetic storms, along with brilliant auroras near the polar regions, are what happens when a large CME hits the earth. 

The last major geomagnetic storm that did considerable damage occurred in 2003, knocking out a series of electric-grid transformers in Sweden.  Utility operators usually have on hand one or two spare transmission transformers—the big boxes in substations that cost upwards of millions of dollars each—but not a dozen.  And even if they did, hauling those multi-ton pieces of gear around the country to replace ones burned out by a geomagnetic storm is not the light task of a few hours' time.  Multiply this actual event by a factor of two or ten or twenty, and you can see how bad things could get.

What can be done from an engineering point of view to protect infrastructure assets from a large geomagnetic storm?  We will concentrate on the protection of the electric grid, since its loss would be far more immediately consequential than the loss of pipelines.

If grid operators are given enough warning, they can call for a pre-emptive voluntary blackout that disconnects vulnerable transformers from the long lines that will pick up the high currents and voltages that would otherwise cause damage.  The problem with this is, nobody wants to be the one to decide to pull the switch, especially if the storm turns out to be less severe than expected.  Another problem is that there is currently no good way to predict the exact effects of a given geomagnetic storm on a particular part of the grid.  So the safe thing to do would be to shut down the whole system for the duration of the storm, which usually lasts only a few hours.  But a region-wide blackout lasting several hours is a serious disruption of its own, and few grid operators are currently willing to do such a thing based on only the fuzzy and general forecasts of geomagnetic storms that are presently available.

Another alternative is to install special protective gear designed to bypass the large energy generated in power grids by geomagnetic storms.  This would allow the grid to keep working right through the storm, but has the disadvantage of costing millions of dollars and not doing a blessed thing until the storm hits.  This reminds me of those vending machines you used to see at airports where you could buy $50,000 of life insurance for something like a quarter, valid only during your upcoming flight.  I suppose somebody may have collected on one of those policies, but I doubt it.  Still, this would be the safest course, all things considered.

Healthy societies have institutions that look ahead to unlikely eventualities, so that when they happen, as sooner or later they surely will, the society rolls through the crisis while maybe sustaining some damage, but otherwise stays intact.  The closest we have come in the U. S. to a crisis like the one a geomagnetic storm might cause was Hurricane Katrina, the one that devastated New Orleans in 2005.  Sad to say, New Orleans was grossly unprepared for Katrina.  Its infrastructure of dikes and canals had been neglected for decades, despite warnings that if something like Katrina hit, large parts of the city would be underwater, and they were.  Over 1,800 people died in a disaster that was, fortunately, of limited geographic extent.  Multiply Katrina by ten or twenty times the area, and you can begin to see what a perfect geomagnetic storm might do.

In a recent issue of National Review, Christopher DeMuth points out that past generations of U. S. citizens allowed the federal government to go into debt, but always for a reason that was forward-looking:  to win a war, for example, or to finance infrastructure improvements such as canals, railroads, and interstate highways.  By contrast, today we are continually warned of our crumbling infrastructure, but the massive debt we are incurring is going mainly for payments to persons—consumption, in other words, not investment for the future. 

The amount of money it would take to improve geomagnetic-storm forecasting and power-grid protection to the point that we could cross a geomagnetic-storm disaster off our list of things to worry about, is not large.  Whether public or private funds, or a combination, should pay for it is not the question.  The question is whether society still has enough foresight to avoid needless disasters—or whether we have to experience them first before we do anything about them.

Sources:  A good brief description of the nearly-disastrous CME event of July 23, 2012 was carried online by IEEE Spectrum at  The technical paper on which the report was based is Liu, Y. D. et al. "Observations of an extreme storm in interplanetary space caused by successive coronal mass ejections." Nature Communications 5:3481 (doi: 10.1038/ncomms4481) (2014).  The problem has not gone entirely unnoticed by government officials, as the threat evaluation report on geomagnetic storms at the U. S. Department of Homeland Security found at shows.  I also referred to Wikipedia articles on coronal mass ejections, solar rotation, and Hurricane Katrina.  Christopher DeMuth's article "Our Democratic Debt" appeared on pp. 28-34 of the July 21, 2014 issue of National Review.

Monday, August 04, 2014

Israel's Iron Dome and the Ethics of War

Just before I wrote this, I learned that a cease-fire negotiated last Friday between Israel and Hamas collapsed after less than two hours.  For the last few weeks, the Gaza-based Hamas organization has been shooting Grad-type rockets at Israel, and Israel has lately been responding both with aerial attacks and ground action in Gaza itself.  By many reports, the damage done by the rockets fired from Gaza into Israel would be much worse if it were not for Israel's air-defense system called Iron Dome.  According to the Israeli Defense Force (IDF), Iron Dome succeeds in intercepting about 80% of rockets that come within its zone of protection, and is one reason why civilian casualties in Israel from the rocket attacks have been so low. 

The ethics of the Israeli-Hamas conflict is, shall we say, outside the scope of this blog.  Rather, I would like to look at the ethics of war as it concerns engineers, with Iron Dome as a case in point.  From the viewpoint of a student about to graduate from engineering school, should you consider job offers from military contractors?  And if not, why not?  Just to make it interesting, let's say you're graduating from the Technion, Israel's premier technology university.  What choices do you face regarding the military and working for military contractors?

As many people know, there is universal conscription in Israel.  Theoretically, all men over the age of 18 serve in the Israeli Defense Force (IDF) for three years (two for women).  It's not as universal as it sounds:  according to Wikipedia, about half of those drafted manage to avoid serving for various reasons having to do with religious exemptions, being members of exempted communities, or even by being a conscientious objector.  To avoid service, a conscientious objector (CO) has to have a principled opposition to all war and conflict, not just particular conflicts that the IDF is engaged in.  And this is not an easy path to tread:  one study of applicants for CO status in Israel from around 2000 found that only about ten percent of applicants were granted the exemption.

So, say you've served your three years in the IDF and you now want to have nothing to do with the military ever again.  There are plenty of job opportunities in technical fields in Israel for non-defense work.  You could work for Given Imaging, for example.  They're a medical-device outfit that has pioneered the development of capsule endoscopy:  swallowable video cameras, to be specific.  A cousin of mine took one of these as a part of an investigation of why he was having acid reflux.  I don't think the results wound up on YouTube, but if there had been anything serious wrong, the pictures would have been courtesy Given Imaging, or maybe one of their imitators.

But wait—you look into the background of Given Imaging, and you find that it's actually a spinoff from a company that specializes in commercializing military technology.  Look hard enough, and you find that the same organization that makes Iron Dome also spun off the medical firm Given Imaging.  Originally called the Science Corps at Israel's founding in 1948, the government-funded military R&D organization was renamed Rafael in 1958, and restructured as a profit-making, though still government-owned, company in 2002, now known as Rafael Advanced Defense Systems.  As anyone striving for total purity in association or support will find, if you trace money, influence, or history back far enough, sooner or later you'll find something you don't like.

So let's take the opposite view:  say your sister was one of the few Israeli civilians killed by a Grad rocket fired by Hamas from Gaza, and you'd like to do what you can to prevent it from happening again.  If you had joined Rafael back in 2007, you could have gotten in on the ground floor of the development of Iron Dome.  The idea of a rocket defense system occurred to the IDF long before then, but American advisers looked at the relatively small Rafael organization in the relatively small country of Israel and told the Israelis not to waste their time, that such an idea was "doomed to fail."  Antimissile defense systems developed by the U. S. have a checkered past, to be sure, and the only one that seemed to have had a major effect on global politics—Star Wars in the 1980s—was never actually deployed fully.  When President Reagan just threatened to build it, it scared the socks off the USSR.  And the mere threat of making an enemy's weapons useless is often a good strategic weapon of its own.

But the threats Israel was experiencing in recent years were not theoretical.  Grad rockets were originally developed by the USSR in the 1960s as dumb weapons whose inaccuracy (they are less aimable than even conventional gun-fired shells) is intended to be overcome by sheer numbers.  Nobody knows where a Grad rocket will fall, including those who fire them.  These types of rockets make a good target for a sophisticated radar-guided defense system like Iron Dome, whose optical-tracking missiles can home in on a target and explode it before it reaches the ground.  Of course, the resulting debris doesn't just go away—even after a successful interception, you will have pieces of hot scrap metal falling to the ground, which can be inconvenient, to say the least.  But what you won't have is 6 to 22 kg (14 to 50 pounds) of high explosive propelling shrapnel all around your back yard, which is what a Grad rocket can do if it lands and explodes.

The choice of an engineering career is always an interesting one, but for Israeli engineering graduates these days, it must be especially so.   There is room in the discipline of engineering for those who believe wholeheartedly in war, for those who oppose war with every fiber of their being, and for those who may not want to work on systems that actually kill people, but who want to defend innocent lives against attacks.  Iron Dome looks like a lifesaver to me, and whatever your beliefs about the Israel-Hamas conflict in general, I think most engineers would agree that the system is a fine piece of work.

Sources:  I referred to Wikipedia articles with the following titles:  Iron Dome, Conscription in Israel, Rafael Advanced Defense Systems, and Hamas.  The report on the short-lived ceasefire was carried online by CNN at

Monday, July 28, 2014

Imagine There's No Email

For people of a certain age, you're supposed to sing that title to the tune of the John Lennon song that uses the word "heaven" instead of "email."  The other day our wireless hub here at home went out, and it took a day or two before we could get a new one going.  In the interim, my wife, who was initially distressed at her lack of connectivity, remarked that actually it was a refreshing thing to go without email or looking at the Internet for a couple of days.  Without meaning to, we endured what you might call a period of fasting from email and the Internet.  And we found that it wasn't all that bad.

Mention the word "fasting" to most people, and you may conjure up images of scrawny half-crazed religious fanatics who lived a long time ago.  Or if you have had personal experience of fasting, it was probably just an unpleasant prelude to a medical procedure.  The whole spirit of the age militates against voluntarily refraining from consumption of one kind or another, which is all fasting is.  We are told without letup that we live in a consumer-driven economy, and so it's positively unpatriotic to consume less if you can consume more.

Well, if it's so economically harmful, why do people do it at all?  What is the point of fasting?

Theologians have an umbrella word for fasting, abstinence, and other kinds of things discussed in magazines with titles like A Simple Life, The Simple Things, or just Real Simple.  The word is "simplicity."  Simplicity is a type of spiritual discipline, meaning that it's a habit you can practice that will make you a better person if you get better at it.  Or at least, it stands a chance of doing that.  What is certain is that if you don't practice the discipline, it won't do you any good.

You don't have to be a theologian, or even a religious believer, to benefit from spiritual disciplines, especially fasting.  The reason is that human nature is meant to be a certain way, and habits that make us more the way we were intended to be have benefits, whether or not you believe there is a God who designed you to be that way.  The habit or discipline of fasting helps the rational part of you gain mastery over the less-rational part. 

All of us have what some sociologists refer to as a "lizard brain":  a primitive part of the brain that we appear to share with lower animals such as lizards.  Lizards are good at what they do.  We have bright-green anoles around our yard here, and they move in a way that I have to admit is quite human:  slowly, guardedly creeping up on a bug until it's within reach, and then snatching it before the bug can figure out what hit him.  But lizards are slaves to their instincts.  When they're hungry, they hunt.  When it's breeding time, they breed.  You don't see lizards wearing little hooded robes and rope belts around their waists refraining from eating juicy bugs right in front of them.  At least, not outside Geico commercials.

But humans can voluntarily refrain from consuming or doing something that is otherwise good, helpful, or even necessary, simply to practice what you might call ordinate self-control.  Take email as an example of such a thing.  Some small fraction of what most people with email accounts receive is worth reading:  it's from a person you know, or your boss, or your long-lost Cousin Max, and you get a benefit or pleasure from reading it.  But the temptation of email, at least for me, is to jump on the computer every time that little bing goes off and see what the newest email is.  If I give in to the temptation to monitor my email more or less constantly like that, I will get little if anything else done. 

An occasional fast from email can teach me several things.  One is, I won't die or lose my job (not necessarily, depending on the job) if I don't read my email for a couple of days, with the proper preliminary precautions and notices to others.  Another lesson is, life without email is not only possible, but has advantages too.  I can spend hours reading a book, for instance (remember books?—the paper kind, I mean).  Or I can take a walk in a park and observe, really observe, nature and its manifold wonders—not just treat it as some green-screen CGI background to the movie of my life. 

Much as engineers like rules, there are no universal rules for fasting (aside from rules promulgated by various religions for their members, that is).  If you want to try it, think of a bad habit you have that you'd really like to be able to control, a habit that involves something necessary in its proper amount, but something that you find yourself going overboard with.  I'm not trying to start a twelve-step program here, I'm simply suggesting how you can pick a feature of your life that you might consider fasting from.  Then decide on some period of time in which you could afford to stop or reduce that activity, and try to stick to it.  If it's something you really think you can do without altogether, go slow at first.  Trying too much too soon is a classic mistake of novice fasters.  If you can do without the thing for an hour, or a day, do it; don't be too hard on yourself if you fail, but if you succeed, try two hours or two days next time.

Fasting is currently a countercultural thing, and except for the magazines I've mentioned and some books I will refer to below, you won't find much support from other people if you decide to fast.  They may secretly feel jealous or threatened by your abstaining from what they view as a normal, healthy part of life.  They may even tell you you're foolish or going to cause yourself trouble, and you should at least listen to them.  But if you've made up your mind to try a fast, go ahead and try it.  The worst that can happen is that you find out the thing has got a tighter grip on you than you thought—and that's worth knowing too.

Sources:  I have found very helpful a couple of books that relate to fasting, simplicity, and related spiritual disciplines.  Richard J. Foster's Celebration of Discipline:  The Path to Spiritual Growth, 3rd ed. (HarperCollins, 1998) is a classic that treats many types of spiritual disciplines, including fasting, in an organized way that respects a wide variety of religious traditions.  For a more personal take on how a very busy wife, mother and author up the road here in Austin implemented seven types of simple living in her household, I recommend Jen Hatmaker's 7:  An Experimental Mutiny Against Excess (B&H Publishers, 2012).

Monday, July 21, 2014

Books and E-books

Last Christmas, someone gave me a Kindle, and I have made intermittent attempts to get engaged in reading e-books on it.  These attempts have met with only mixed success.  A book that was highly recommended by my pastor, who makes no secret that he's not much of a reader, left me unimpressed, and I abandoned it.  More recently, out of a sense of duty to a cultural icon more than genuine interest, I downloaded (for free) a copy of Swann's Way, the first volume of Marcel Proust's encyclopedic multivolume Remembrance of Things Past.  Proust wins my nomination for Greatest Introspector of the Nineteenth Century Award, but I'm afraid I've abandoned him too, somewhere in his childhood garden among his maiden aunts and the eccentric visitor Mr. Swann. 

The only books I've managed to finish on the thing were a couple of mass-produced page-turners written for young adults.  They managed to keep me turning the electronic pages, all right, but after I finished the last one I felt a little like you might feel after binge-watching five recorded episodes in a row of some trashy TV series—I had to ask myself, "Was that really the best use of my time?" 

Despite numerous prophecies that the days of the printed book are numbered, e-books have not yet done to the paper-book publishing business what hand-held electronic calculators did to the slide-rule business.  Electronic calculators were so obviously superior to slide rules in nearly every way that only die-hard traditionalists clung to their slide rules, which took a one-way trip to the museum and never came back.  That is not happening with paper books.

Once the market stabilized on a few common platforms such as Kindle, e-book sales took off and increased steadily for several years.  Some of the biggest sales boosts came from mass-market fiction series such as the hugely popular Hunger Games franchise.  But in the last year or so, e-book sales have flattened out, while paper-book sales are seeing increases, both in the U. S. and worldwide, that in many cases show faster growth than e-books.  A report on the Digital Book World website says that U. S. sales of e-books through August 2013 were $647 million, about a 5% increase from the previous year, while hardcover printed books accounted for sales of $778 million, up nearly 12% from a year earlier.  This trend is continuing in 2014, and is not the picture of a situation where one medium is simply being dropped for a newer one. 
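For what it's worth, those growth percentages let you back out rough year-earlier figures; this is just a sanity check using the numbers as stated, and the report's exact prior-year totals may differ:

```python
# Back out approximate prior-year sales from the stated totals and growth rates.
ebook_2013_millions = 647.0
ebook_growth = 0.05            # "about a 5% increase"
hardcover_2013_millions = 778.0
hardcover_growth = 0.12        # "up nearly 12%"

ebook_2012 = ebook_2013_millions / (1 + ebook_growth)
hardcover_2012 = hardcover_2013_millions / (1 + hardcover_growth)

print(f"e-books 2012:    ~${ebook_2012:.0f}M")
print(f"hardcovers 2012: ~${hardcover_2012:.0f}M")
```

In other words, hardcovers were already outselling e-books a year earlier, and the gap widened rather than narrowed.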

Instead, it's beginning to look like the book medium one chooses will depend on the message it carries.  This is a familiar phenomenon in other fields—music, for example.  Take two music lovers.  One is a busy college student whose part-time job is standing in front of a tax office waving a big arrow sign.  He wants something to listen to while doing this mindless task.  The other is a professional music critic with exquisite taste and highly discriminating ears, wishing to evaluate the latest recordings of a particular Mozart string quartet.  The college student will be happy with an iPod (or smartphone) with earbuds, while the music critic will want to listen in a quiet room through a high-dollar stereo system and speakers.  Different kinds of messages are just naturally suited to different kinds of media, and the same may be true of book publishing going forward.

So will e-books destroy the paper-book publishing business?  No, but they will change the makeup of what gets published that way.  Books with mainly transient value—what an acquaintance of mine once called "nonce books," meaning it's of interest for the nonce, but not much longer—will probably show up as e-books.  Fiction mega-hits that masses of otherwise non-literary folk gobble up are perfectly suited to the e-book format, which makes it easy for the reader to plow through in a straight line as fast as he or she can read.  But for more scholarly publications that someone might want to keep around for reference or contemplation, I think the paper format is more suitable, and current sales statistics say that paper books are not on the verge of immediate extinction.

If you think about it, there is a physical connection, however tenuous, between a person holding a mechanically typeset book in his hands, and the original author, no matter how long ago the author lived.  If you pick up a copy of Aristotle printed before about 1960, the chain goes like this:  from handwritten manuscript to medieval scribes, to nineteenth-century editor, to typist copying the editor's manuscript, to the Linotype operator setting the type, to the stereotype plates that impressed the ink into the very paper you hold in your hands.  

Maybe some computer geek can figure out the analogous path for an e-book, but I'm not sure I want to hear about it.

I think one of the most profound differences between the natures of the two media is that paper books are inclined to permanence, while e-books are suited to transience.  In the nature of things, I expect that today's e-books will not be readable by future generations of machines, or if they are, it will become a bigger and bigger hassle to do so as time goes on, just as it is probably hard for you right now to recover files on a computer you used more than a decade ago.  But unless the ink has faded to invisibility or the paper has crumbled to dust, we can still read writings that were penned thousands of years ago. 

There is a story, possibly apocryphal, that the only copy of the writings of Aristotle, upon whose ideas much of Western civilization is based, lay forgotten in some heir's basement for a couple of hundred years before being rediscovered.  Good thing they were written on paper, because if Aristotle had used a Kindle, in two centuries the batteries would have died and the operating system would have been, well, ancient history.

Sources:  I referred for statistics on U. S. publishing of print and e-books to the websites and, and for worldwide sales to  The popular fiction I read on Kindle was the first two books in the "Airel" series by Aaron Patterson and Chris White.  The story of the rediscovery of Aristotle's works is reported by at least two ancient historians, according to the Wikipedia article on Aristotle.   

Monday, July 14, 2014

The Birth Control Chip

An MIT spinoff called MicroCHIPS has announced plans to market an implantable contraceptive chip that can be turned on and off remotely, and lasts for as long as sixteen years.  Funded by the Bill & Melinda Gates Foundation to the tune of $5 million, the chip contains enough of the contraceptive drug levonorgestrel to provide contraception for the major part of a woman's fertile years.  Once implanted, the device will automatically melt a seal to release a few micrograms of the drug every month until it receives a wireless command to stop, or to start again if desired.  When developers were questioned about hacking concerns, they said the device will incorporate such precautions as individual password-protected remote controls and the need for an external transmitter to be held within a few inches of the device, which will be implanted in a region of fatty tissue.  MicroCHIPS hopes to market the device in some regions of the world starting in 2018.
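For the technically curious, here is roughly what that gatekeeping might look like in logic form.  This is my own sketch, not anything published by MicroCHIPS:  the password, the signal-strength threshold (standing in for the few-inches proximity requirement), and the command names are all invented for illustration.

```python
# Hypothetical sketch of the two safeguards the developers describe:
# a per-device password, and a proximity requirement (proxied here by
# received signal strength--a transmitter inches away produces a much
# stronger signal than one across the room).
# All names and numbers below are invented, not from MicroCHIPS.

DEVICE_PASSWORD = "s3cret"   # would be set individually per device
MIN_SIGNAL_DBM = -30         # strong signal => transmitter within inches

def accept_command(command, password, signal_dbm):
    """Return True only if both security checks pass and the command
    is one of the two the device actually understands."""
    if password != DEVICE_PASSWORD:
        return False         # wrong password: ignore the command
    if signal_dbm < MIN_SIGNAL_DBM:
        return False         # transmitter too far away: ignore it
    return command in ("start", "stop")
```

The point of the sketch is how little it takes to defeat it in the way described above:  the password check is only as strong as the woman's ability to keep the password secret, and the proximity check assumes an attacker obeys legal power limits on transmitters.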

This announcement raises two distinct ethical issues. 

One is the question of security relating to any kind of medical chip implanted in the human body.  One of the news reports on the contraceptive device noted that former U. S. Vice President Dick Cheney asked his doctors to disable his heart pacemaker's wireless interface out of concerns that someone might hack into it and zap him into eternity.  Such fears are not without foundation.  For example, password protection is notably weak in many cases, and short-range low-power RF links can be manipulated from greater distances by (illegal) high-power transmitters. 

It is a sign of a narrow mindset to consider only technical means of hacking.  In the developing-world environments where the Gates Foundation intends the contraceptive chip to be used, there is often a strong animus against any method of birth control on the part of husbands and boyfriends. Why should a man bother with sophisticated technical hacking when he can threaten to beat the stuffing out of the woman if she doesn't tell him her password?  No one has figured out a foolproof way to prevent that kind of hack.

The second ethical issue, and the one that will probably get me into hot water shortly, is the question of contraception in general.  Contraception is an existential question for the human race as a whole, and thus goes to the very heart of what you think humanity is about. 

Until the mid-twentieth century, the consensus of both learned and popular opinion was that engaging in sexual intercourse while intentionally preventing the conception of a child was wrong.  Here is what none other than the great psychologist (and atheist) Sigmund Freud said in a lecture delivered in 1915:  "We actually describe a sexual activity as perverse if it has given up the aim of reproduction and pursues the attainment of pleasure as an aim independent of it. So, as you will see, the breach and turning-point in the development of sexual life lies in its becoming subordinate to the purposes of reproduction."

While he said this in the context of the subject of infantile sexuality, Freud is essentially making the distinction between the animal type of intercourse, in which creatures such as dogs and cats simply follow their instinctive sexual urges wherever they lead, and the mature human type of intercourse, in which the main reproductive function of sex is recognized by the rational animal known as a human being, and used with that function fully in mind. 

Now this is an ideal, obviously, and many people have fallen short of the ideal since prehistoric times.  But when pharmaceutical contraceptives became available in the 1950s, moral authorities in Western societies gradually abandoned the ideal, with one notable exception:  the Roman Catholic Church.  Since then, nearly everyone has adopted a model of the human being that views sexuality as independent of reproduction. 

If you believe that human beings arose by means of mindless undirected evolution and no God was ever in the picture, it's hard for me to understand how you can also believe sexuality should be independent of reproduction.  Isn't that how we got here, by means of sexual attraction between opposite-sex fertile men and women?  Oh, but now we're beyond all that, you say.  We've taken control of our own evolution and can do anything we like, implant chips to turn our women into sex robots or what have you.  Reproducing is somebody else's job—seems like we will never run out of people.  To that I would say, ask Japan.

Japan is the incredible shrinking country.  For the last four years in a row, Japan's population has suffered a net decline, even with immigration taken into account.  In 2013 there were about 238,000 more deaths than births in the famously insular island nation.  While not all of this decline can be attributed to contraceptive technologies, those means go together with a cultural mindset that focuses people on careers and individual success to the detriment of families, marriage, and (in Japan) even relationships between the sexes, which many Japanese have given up on altogether.  The future for Japan looks grim, as it does to a greater or lesser degree for many European countries whose birth rates are not much better than Japan's.

I was going to bring religion into this argument, but I don't think there is a need to.  Plain lunkheaded observation of simple statistics shows that cultures and countries that discourage reproduction, whether by abortion, birth control, or a mindset that disses family life, will tend to grow smaller, will experience widespread economic and social dislocations, and possibly disappear altogether.  And in the course of time they will be replaced, if at all, by other cultures that encourage reproduction and promote stable family structures that produce mature, competent people who have the long-term interests of their societies at heart.  And that is a totally Darwinist secular evolutionary argument.

Excuse me, but DUH.

One of my favorite Eudora Welty short stories ends up with a small boy being punished for a minor infraction in a hair salon.  He breaks loose from his mother and runs out the door, but as he leaves he stops to get in the last word: "If you're so smart, why ain't you rich?"  I would turn it around and ask Mr. Gates, "If you're so rich, why ain't you smart enough to realize that contraceptive technology is not in the best interests of humanity?" 

Mr. Gates is not going to pay any attention to me, and I expect that many of my readers will not see eye-to-eye with my position on this either.  Though not a Catholic myself, after many years of experience, both personal and second-hand, I have come to the conclusion that the Roman Catholic Church has the most philosophically and theologically sound positions on human sexuality of any institution around—scientific, cultural, religious, political, or otherwise.  But that is a story for another time and place.

Sources:  For information on the contraceptive chip, I referred to an article at, and also one at  Sigmund Freud's Lecture XX, "The Sexual Life of Human Beings," from which the above quotation was taken, is available in numerous print editions of his 1915 lectures, Introductory Lectures on Psycho-analysis, which is apparently in the public domain in some translations.  My particular source online was a George Mason University site,  The Eudora Welty short story I referred to is "Petrified Man."  Readers interested in knowing more about the Roman Catholic Church's position on sexuality in a highly readable and useful form can consult Christopher West's Good News About Sex & Marriage (Cincinnati, OH:  St. Anthony Messenger Press, 2004).  This book is especially recommended for young people who have most of their lifetimes ahead of them in which to avoid the mistakes of an older generation.

Monday, July 07, 2014

The Robot Says You Flunked: Algorithms versus Judgment

Harvard and MIT have teamed to develop an artificial-intelligence system that grades essay questions on exams.  The way it works is this.  First, a human grader manually grades a hundred essays, and feeds the essays and the grades to the computer.  Then the computer allegedly learns to imitate the grader, and goes on to grade the rest of the essays a lot faster than any manual grader could—so fast, in fact, that often the system provides students nearly instant feedback on their essays, and a chance to improve their grade by rewriting the essay before the final grade is assigned.  So we have finally gotten to the point of grading essays by algorithms, which is all computers can do.
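To make concrete what "learning to imitate the grader" means, here is a deliberately simplified stand-in for that pipeline.  The consortium's actual model is not described in my sources; this nearest-neighbor scheme on word counts is an illustration of the approach, with invented example essays, not their algorithm.

```python
# A toy version of "train on a hundred human-graded essays, then grade
# the rest":  represent each essay as a bag of word counts, and give a
# new essay the grade of the most similar human-graded example.
import math
from collections import Counter

def vectorize(essay):
    """Bag-of-words: map each lowercase word to its count."""
    return Counter(essay.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def train(graded_essays):
    """'Training' here just stores the human-graded examples."""
    return [(vectorize(text), grade) for text, grade in graded_essays]

def predict(model, essay):
    """Assign the grade of the most similar stored example."""
    vec = vectorize(essay)
    return max(model, key=lambda pair: cosine(pair[0], vec))[1]

# Two invented human-graded essays serve as the "hundred":
model = train([
    ("the mitochondria is the powerhouse of the cell", "B"),
    ("cells produce energy in organelles called mitochondria "
     "through respiration", "A"),
])
```

Whatever the real system does is surely more sophisticated, but the structure is the same:  the machine never understands the essay; it only measures resemblance to essays a human already judged.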

Joshua Schulz, a philosophy professor at DeSales University, doesn't think much of using machines to grade essays.  His criticisms appeared in the latest issue of The New Atlantis, a quarterly on technology and society, and he accuses the software developers of "functionalism."  Functionalism is a theory of the mind that says, basically, the mind is nothing more than what the mind does.  So if you have a human being who can grade essays and a computer that can grade the same essays just as well, why, then, with regard to grading essays, there is no essential difference between the two. 

With all due respect to Prof. Schulz, I think he is speculating, at least when he supposes that the essay-grading-software developers espouse a particular theory of the mind, or for that matter, any theory of the mind whatsoever.  The head of the consortium that developed the software is an electrical engineer, not a philosopher.  Engineers as a group are famously impatient with theorizing, and simply use whatever tools fall to hand to get the job done.  And that's what apparently happened here.  Problem:  tons and tons of essay questions and not enough skilled graders to grade them.  Solution:  an automated essay grader whose output can't be distinguished from the work of skilled human graders.  So where is the beef?

The thing that bothers Prof. Schulz is that the use of automated essay-grading tends to blur the distinction between the human mind and everything else.  And here he touches on a genuine concern:  the tendency of large bureaucracies to turn matters of judgment into automatic procedures that a machine can perform. 

Going to extremes can make a point clearer, so let's try that here.  Suppose you are unjustly accused of murder.  By some unlikely coincidence, you were driving a car of a similar make to the car driven by a bank robber who shot and killed three people and escaped in a car whose license plate number matches yours except for the last two digits, which the eyewitness to the crime didn't remember.  The detectives on the case didn't find the real bank robber, but they did find you.  You are arrested, and in due time you enter the courtroom to find seated at the judge's bench, not a black-robed judge, but a computer terminal at which a data-entry clerk has entered all the relevant data.  The computer determines that statistically, the chances of your being guilty are greater than the chances that you're innocent, and the computer has the final word.  Welcome to Justice 2.0. 

Most people would object to such a delicate thing as a murder trial being turned over to a machine.  But nobody has a problem with lawyers who use word processors or PowerPoints in their courtroom presentations.  The difference is that when computers and technology are used as tools by humans exercising that rather mysterious trait called judgment, no one being judged can blame the machines for an unjust judgment, because the persons running the machines are clearly in charge. 

But when a grade comes out of a computer untouched by human hands (or unseen by human eyes until the student gets the grade), you can question whether the grader who set the example for the machine is really in charge or not.  Presumably, there is still an appeals process in which a student could protest a machine-assigned grade to a human grader, and perhaps this type of system will become more popular and cease to excite critical comments.  If it does, we will have moved another step along the road that further systematizes and automates interactions that used to be purely person-to-person.

Something similar has happened in a very different field:  banking.  My father was a loan officer for many years at a small, independent bank.  He never finished college, but that didn't keep him from developing a finely honed gut feel for the credit-worthiness of prospective borrowers.  He wouldn't have known an algorithm if it walked up and introduced itself, but he got to know his customers well, and his personal interactions with them were what he based his judgment on.  He would guess wrong once in a great while, but usually because he allowed some extraneous factor to sway his judgment.  For example, once my mother asked him to loan money to a work colleague of hers, and it didn't work out.  But if he stuck to only the things he knew he should pay attention to, he did pretty well.

Recently I had the occasion to borrow some money from one of the largest national banks in the U. S., and it was not a pleasant experience.  I will summarize the process by saying it was based about 85% on a bunch of numbers that came out of computer algorithms that worked from objective data.  At the very last step in the process, there were a few humans who intervened, but only after I had jumped through a long series of obligatory hoops that allowed the bankers to check off "must-do" boxes.  If even one of those boxes had been left blank, no judgment would have been required—the machine would say no, and that would have been the end of it.  I got the strong impression that the people were there mainly to serve the machines, and not the other way around.
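The process I went through could be sketched as the following decision gate.  This is my own reconstruction of the flavor of the thing, not the bank's actual rules:  the checkbox names and score thresholds are invented, but the structure is faithful—one unchecked box and the machine says no before any human ever looks, and people are consulted only in the gray zone.

```python
# A hypothetical sketch of a machine-first loan process:  every
# "must-do" box has to be checked before a score is even considered,
# and only borderline scores ever reach a human being.
# Criteria and thresholds below are invented for illustration.

MUST_DO_BOXES = ["identity_verified", "income_documented", "no_open_default"]

def loan_decision(checklist, credit_score):
    """Return 'deny', 'approve', or 'human review'."""
    # One unchecked box and the answer is no--no judgment involved.
    if not all(checklist.get(box, False) for box in MUST_DO_BOXES):
        return "deny"
    if credit_score >= 740:
        return "approve"        # clear pass: the algorithm decides alone
    if credit_score < 620:
        return "deny"           # clear fail: the algorithm decides alone
    return "human review"       # only the gray zone reaches a person
```

Notice where the humans sit in this flowchart:  at the end, inside a narrow band, after the machine has already disposed of every case it considers clear.  That is the "85% algorithm" feeling I described.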

The issue boils down to whether you think there is a genuine essential difference between humans and machines.  If you do, as most people of faith do, then no non-human should judge a human about anything important, whether it's for borrowing money, assigning a grade, or going to jail.  If you don't think there's a difference, there's no reason at all why computers can't judge people, except for purely performance-based factors such as the machines not being good enough yet.  Let's just hope that the people who think there's no difference between machines and people don't end up running all the machines.  Because there's a good chance that soon afterwards, the machines will be running the people instead.

Sources:  The Winter 2014 issue of The New Atlantis carried Joshua Schulz's article "Machine Grading and Moral Learning" on pp. 109-119.  The New York Times article from which Prof. Schulz learned about the AI-based essay grading system is available at  The Harvard-MIT consortium's name is edX.

Note to Readers:  In my blog of June 16, 2014, I asked for readers to comment on the question of monetizing this blog.  Of the three or four responses received, all but one were mostly positive.  I have decided to attempt it at some level, always subject to reversal if I think it's going badly.  So in the coming weeks, you may see some changes in the blog format, and eventually some ads (I hope, tasteful ones) may appear.  But I will try to preserve the basic format as it stands today as much as possible.