
Monday, November 08, 2021

Downsides of the Metaverse

 

One important task of the discipline of engineering ethics is to take a look at new technologies and say in effect, "Wait a minute—what could go wrong here?"  Blogger Joe Allen at the website Salvo has done that with the Metaverse idea recently touted by Mark Zuckerberg, when Zuckerberg announced that Facebook will now be known (officially, anyway) as Meta.

 

Allen denies that Zuckerberg was merely trying to distract attention away from the recent bad publicity Facebook has been receiving, and claims that the Metaverse idea is something Zuckerberg and others have been dreaming of for years, especially proponents of the quasi-philosophy known as transhumanism.  What are these dreams?

 

In the Metaverse of the future, you will be able to put on virtual-reality equipment such as goggles or a helmet, and enter an alternate universe fabricated in the same way that the Facebook universe, or the many MMOGs (massively multiplayer online games), try to do in a comparatively feeble way today.  But the goal of Metaverse technology is to make the simulation better than ordinary reality, to the point that you'll really want to stay there. 

 

It's not hard to imagine downsides for this picture.  Allen quotes Israeli author Yuval Harari as saying that mankind's power to "create religions" combined with Metaverse technology will lead to "more powerful fictions and more totalitarian religions than in any previous era."  The Nazis could do no more than put on impressive light shows and hire people such as Leni Riefenstahl to produce propaganda films such as "Triumph of the Will."  Imagine what someone like Joseph Goebbels could have done if he had been put in charge of an entire metaverse, down to every last detail.

 

Impossible?  Facebook and other companies are investing billions to make it happen, and Allen points out that companies are also lobbying Washington to spend federal money on developing the infrastructure needed to support the massive bandwidth and processing power that it will take. 

 

COVID-19 pushed many of us a considerable distance toward the Metaverse when we had to begin meeting people on Zoom rather than in person.  Zoom is better than not meeting people at all, I suppose, but it already has contributed in a small way to a breakdown in what I'd call decorum.  For example, judges have had to reprimand lawyers for coming to hearings on Zoom while lying in bed with no clothes on. And I've talked on Zoom with students who wouldn't dream of showing up in class with what they were wearing in the privacy of their bedrooms, in which I found myself a reluctant virtual guest.

 

Of course, if we had the Metaverse, the lawyer could appear as an avatar in a top hat, tuxedo, and tails if that was what the judge wanted to see.  But the point is that there is a whole complex of social-interaction rules or guidelines that children take years to learn (if they ever do learn), and in a Metaverse, those rules would be set by whoever or whatever is running the system, not just by the individuals involved.

 

Zuckerberg insists, according to Allen, that his Metaverse will be "human-centered."  That may be true, but a maximum-security prison is human-centered too—designed to keep certain humans in the center of the prison.  While Facebook has its positive features—my wife just learned through it yesterday of the passing of an old family friend—the damage it has done to what was formerly called civil discourse, and the sheer amount of bile that social media sites have profited from, show us that even with the relatively low-tech means we currently have, the downsides of corporate-mediated social interactions reach very low points indeed.

 

Does this mean we should jump in with a bunch of government regulations before the genie gets out of the bottle?  Oddly, Zuckerberg is calling for some kind of regulation even now.  But as Allen points out, Zuckerberg may be thinking that eventually, even government power will take a back seat to the influence that the corporate-controlled Metaverse will have over things.

 

Seeing religions as creations of the human brain, and human reality as something to be created from scratch and sold at a profit—these are defective views of what humanity is, as Pope St. John Paul II pointed out with respect to the anthropology of Marxism.  Transhumanist fantasies about recreating the human universe in our image share with Marxism the belief that human beings are the summit of intelligent life, and that there is nothing or no One else out there to be considered as we remake the virtual world to be whatever we want it to be.  Even if you grant the dubious premise that the Zuckerbergs of the world merely want to make life better for us all instead of just getting richer, you have to ask the question, "What does 'better' mean to you?"  And whether the machinery is Communist or capitalist, the bottom-line answer tends to be the satisfaction of personal desires. 

 

Any system, human or mechanical, that leaves God out of the picture leads people down a garden path that ends in slavery, as John Bunyan's Pilgrim discovered in Pilgrim's Progress.  Before we are compelled to join the Metaverse in order to earn a living, we should take a very hard look at what those who are planning it really want to do.  Once again, we have a chance to set a new technology on the right path before we let it go on to produce mega-disasters we then have to learn from.  It's the engineers who come up with this stuff, and in view of the lack of interest or even comprehension that government representatives have for such things, perhaps it's the engineers who need to ask the hard questions about what could go wrong with the Metaverse—before it does. 

 

Sources:  Joe Allen's article "The Metaverse:  Heaven for Soy Boys, Hell on Earth for Us" is on the Salvo website at https://salvomag.com/post/the-metaverse-heaven-for-soy-boys-hell-on-earth-for-us.  I also referred to an article on John Paul II's views on anthropology and Marxism at https://www.catholic.com/magazine/online-edition/jpii-anthropology.

Monday, October 11, 2021

Against Cartesian Dualism

 

Every now and then, it's useful to look at the philosophical underpinnings of current thought and what implications they have for engineering ethics.  In a recent post on the website of the journal First Things, professor of biblical and religious studies Carl Trueman noted that Cartesian dualism—a way of looking at the human person promulgated by René Descartes (1596-1650)—is enjoying a comeback in the popular mind, although modern philosophy has long since discarded it as an inadequate model.

 

If you know anything about Descartes, you will probably recall his most famous saying:  "I think, therefore I am."  He arrived at that conclusion after discarding everything he could think of that might possibly not be true—the evidence of his senses, things he knew on authority, and so on.  Whatever else might be false, he reasoned, he couldn't help thinking that he was still thinking, and therefore there must be a thinker somewhere.  He was so impressed by this idea that he developed a whole philosophy around it, which came to be known as Cartesian dualism.

 

Descartes believed that the soul—which in modern terms pretty much amounts to what we would call the mind—was a "spiritual substance" that was immaterial, without dimensions or location.  And the body he believed to be completely material, an entirely separate substance from the soul, consisting of the brain, the nerves, the muscles, etc., all of which operate under the control of the immaterial mind and will.  As to exactly how the immaterial controlled the material, Descartes wasn't sure.  But he thought the point of contact might be the pineal gland, a small pine-cone-shaped gland near the middle of the brain.

 

Modern science has discovered that the pineal gland, far from controlling the entire body, mainly secretes melatonin, which affects sleep patterns—but that's about it.  And modern philosophy has discarded Cartesian dualism, because nobody after Descartes was ever able to show how a completely immaterial thing like Descartes' hypothetical soul could affect a physical thing like the body.

 

But this news evidently hasn't reached a lot of women athletes who submitted an "amicus" (friend of the court) brief to the U. S. Supreme Court, urging the court to uphold abortion rights in the upcoming Dobbs v. Jackson Women's Health Organization case, in which the State of Mississippi is seeking to overturn Roe v. Wade, the decision that made abortion legal in the U. S.

 

As Trueman observes, the women in the amicus brief speak of their bodies as nothing more than sophisticated tools or instruments, operated by their minds and wills.  They say that they "depend on the right to control their bodies and reproductive lives in order to reach their athletic potential."  If we stop with only that quotation, we can see that (a) the operative verb is "control" and (b) the purpose of controlling the body is to "reach their athletic potential." 

 

In other words, for these women, their body is a means to the end of achieving success in athletics, just as a fast race car is a means to achieving success in the Indy 500.  And prohibiting abortion is like compelling a race-car driver to give a ride to a 300-pound hitchhiker during the race. 

 

Cartesian dualism shows up in lots of places these days outside of law courts.  The whole transhumanist movement, of which famed entrepreneur Elon Musk is a proponent, is based on the idea that the real you is basically a software program running on the wet computer called the brain.  The phrase "meat cage" that some people use to describe the body partakes of this same idea—that we are not our bodies, but that we use our bodies in a way not much different in principle than using a car or a computer. 

 

Perhaps the most pernicious feature of Cartesian dualism is the temptation to assess the humanity or non-humanity of other people based on our judgment as to whether they have a mind worthy of the name.  I would imagine it is easier to contemplate an abortion if you believe the fetus in question has not developed a mind yet.  And the same goes for people who are mentally disabled, suffer from Alzheimer's disease, or are otherwise incapacitated to the extent that their minds no longer control their bodies adequately.  Perhaps it's just as well to sever the connection between the mind and the body if the mind can't do its job controlling the body any more.

 

Well, if Cartesian dualism isn't true, how should we think of the relation between the mind and will (or soul, to use the more old-fashioned term) and the body?  The model of the person Descartes was trying to displace is called hylomorphism, originated by Aristotle.  Philosopher Peter Kreeft explains that Aristotle's theory considers the body to be what the person is made of, and the soul as the form or molding and patterning influence of the body.  So matter (Greek hyle) is "informed" by form (morphe) to create one integral thing with two aspects or causes:  the material cause, namely the body, and the formal cause, namely the soul.  But the human person is one unique thing, not two.

 

If hylomorphism were more popular than Cartesian dualism, I think we would see a lot of salutary changes in everything from attitudes toward the life issues (abortion, euthanasia, etc.) to medical and surgical procedures (sex-change operations, transhumanist initiatives) and even tattoos.  If you thought you were hiring a tattoo artist to burn an image of some hip-hop star on your very being, instead of just some piece of machinery you happen to be living in now, you might think twice before doing it. 

 

But the spirit of the age favors Cartesian dualism.  As consumers, we are urged to treat the rest of the world as a selectable, disposable warehouse of products and services—why not treat our bodies the same way?  I'm glad that my university is a rare holdout among public institutions of higher education for continuing to require that all its undergraduates take at least one philosophy course.  In such a course, they stand  a chance of hearing about Cartesian dualism and why it is no longer respectable.  And they might even take what they hear in class seriously, and apply it to their lives.  Such a hope is all that keeps some educators going.

 

Sources:  Carl Trueman's article "The Body Is More Than a Tool" appeared on the First Things website at https://www.firstthings.com/web-exclusives/2021/10/the-body-is-more-than-a-tool.  Elon Musk's promotion of transhumanism is described at https://futurism.com/elon-musk-is-looking-to-kickstart-transhuman-evolution-with-brain-hacking-tech.  Peter Kreeft demolishes Cartesian dualism (and a lot of other false philosophical ideas) in his book Summa Philosophica (St. Augustine's Press, 2012).  I also referred to the Wikipedia article on hylomorphism.

Monday, January 14, 2019

The Transhumanist Bill of Goods


Depending on your point of view, the intellectual movement (and now political party) that goes under the name of "transhumanism" is either a set of fringe beliefs held by a small number of people who can be safely ignored, or the leading edge of something that will completely transform human life as we know it.  The truth probably lies somewhere in between.  One of transhumanism's intellectual fathers is Ray Kurzweil, who popularized the term "the Singularity" to mean the moment when artificial intelligence, cyborgs, and uploading people's minds into software converge to create a kind of Big Bang of superintelligent activity that will make everything everyone ever wanted come true, and will also render ordinary biological human lives obsolete.  Significantly, Kurzweil now holds a high-level position at Google, and other tech leaders such as Elon Musk have promoted transhumanist ideas.

Not satisfied with the Silicon Valley reins of power they already hold, the transhumanists have formed a political party and issued a Transhumanist Bill of Rights.  The first version (called 1.0, naturally) was delivered to the U. S. Capitol on Dec. 14, 2015.  Its subsequent fate did not make the news.  In a recent piece reprinted in the Human Life Review, Wesley J. Smith noted that version 2.0 contains enough wacky ideas to wreck the economy, violate fundamental religious freedoms, and erase the difference between people and machines. 

For a group that tends to ignore the past and live mentally in the future, the writers of the Transhumanist Bill of Rights clearly acknowledged some historical precedents.  The very title, Bill of Rights, comes from that 230-year-old set of amendments to the U. S. Constitution of the same name.  Their preamble says they "establish" the Bill to "help guide and enact sensible policies in the pursuit of life, liberty, security of person, and happiness."  That phrase goes one better than Thomas Jefferson's in the U. S. Declaration of Independence—he left out "security of person."  And at the very end, almost as an afterthought, in Article XXV (25, to those of you who can't read Roman numerals), they incorporate by reference all the rights in the United Nations Universal Declaration of Human Rights, which was adopted by the then-new U. N. in 1948.

Like the U. N.'s declaration, the transhumanist Bill is aspirational, not legally binding.  And here is where the vast differences between the U. S. Bill of Rights and this document show most vividly.  The people who gathered in 1789 to debate how best to carry their young experiment in democracy forward were elected leaders of a real nation.  In a sense that the transhumanists don't seem to appreciate, they held their future in their hands.  The fate of a country that they and their compatriots fought for, and many had died for, depended on the wisdom with which they reconstituted their republic, which at the time was suffering from serious problems.  Looking back, we can say that while they didn't do a perfect job—the canker of slavery would have to be removed from the body politic in a horrendous Civil War two generations hence—the constitution they forged has withstood the test of time. 

Contrast what those founding fathers did with what the transhumanists are doing with their Bill of Rights 2.0.  For one thing, the transhumanist Bill's direct effect on the actual politics of the nation has been nil.  Despite the window-dressing of Roman numerals and references to historic documents, the actual content of the Bill reads like something out of a speech at a Comic Con convention.  One can come closest to being able to predict the things most desired by transhumanists by imagining a teenage boy of exceptional intelligence but limited experience, and asking him what his ideal world would be like, given unlimited technological resources and a free imagination.  The answers might go something like the following:
           
Gee, well, nobody would be poor (Article XVIII:  "Present and future societies should ensure that their members will not live in poverty solely for being born to the wrong parents.").  And there wouldn't be any discrimination or prejudice (Article XVI:  "All sentient entities should be protected from discrimination. . . "), and everybody would be healthy (Article VII:  "All sentient entities should be the beneficiaries of a system of universal health care.").  And (snigger) there'd be plenty of sex (Article XII:  "All sentient entities are entitled to reproductive freedom. . . . ").  And college should be free (Article XX:  "Present and future societies should provide education systems accessible and available to all . . . ").  And we wouldn't have nutcases like Trump running the government (Article XXIV:  "Transhumanists stand opposed to the post-truth culture of deception.  All governments should be required to make decisions and communicate information rationally and in accordance with facts. . . .").

Maybe Kurzweil, Musk, and their fellow transhumanists are experts in their deep, narrow pursuits that require specialization in technical fields and a certain amount of leadership and management expertise.  But in politics, they seem to think that if you take some half-baked left-wing notions, mix them with some technospeak, put Roman numerals on them, and quote a few well-known historical documents, the public will come flooding to your door and ask to join.

On the other hand, perhaps we should read this document not as a step in a democratic process that involves persuading the sovereign public to accept one's ideas, but more as a manifesto of what an elite, powerful group of people plan to do once they manage to dispose of all the stupidity and traditionalism of the vast majority of people in the world and run the place the way they know (from their superior expertise) that it ought to be run.  Wesley Smith worries that transhumanists in power would establish a communist-like society.  And I think he is right.  If transhumanists by some means gained real power to implement their ideas, the totalitarian government that would result might very well end human life as we know it—and leave nothing in its place but some buzzing machinery that would run down faster than anyone expects.

Sources:  Wired published the Transhumanist Bill of Rights 2.0 at https://www.wired.com/beyond-the-beyond/2018/08/transhumanist-bill-rights-version-2-0/ on August 21, 2018.  Wesley J. Smith's article "The Transhumanist Bill of Wrongs" appeared in the Fall 2018 edition of the Human Life Review on pp. 91-93, and was reprinted from the American Spectator.

Monday, March 21, 2016

AlphaGo Defeats Human Go Champion: Go Figure


First it was chess:  world champion Garry Kasparov lost a six-game match to an IBM computer named Deep Blue in 1997.   And now it's the game called Go, which has been popular in Asia for centuries.  Earlier this month, Korean Go champion Lee Sedol lost four out of a series of five games in a match with AlphaGo, a computer program developed by the Google-owned London firm DeepMind.  But Sedol says he now has a whole new view of the game and is a much better player from the experience.  This development raises some perennial questions about what makes people human and whether machines will in fact take over the world once they get smarter than we are.

As reported in Wired, the Go match between Lee Sedol and AlphaGo was carried on live TV and watched by millions of Go enthusiasts.  For those not familiar with Go (which includes yours truly), it is a superficially simple game played on a 19-by-19 grid of lines with black and white stones, sort of like an expanded checkerboard.  But the rules are both more complicated and simpler than checkers.  They are simpler in that the goal is just to encircle more territory with your stones than your opponent encircles with his.  They are more complicated in that there are vastly more possible moves in Go than there are in checkers or even chess, so strategizing takes at least as much brainpower in Go as it does in chess. 
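To make that scale difference concrete, here is a back-of-the-envelope calculation (a loose upper bound that ignores Go's legality rules, so the true count is somewhat smaller):

```python
# Loose upper bound on Go board configurations: each of the
# 19 x 19 = 361 intersections is empty, black, or white.
go_positions = 3 ** (19 * 19)
print(len(str(go_positions)))  # 173 decimal digits
```

By comparison, estimates of the number of reachable chess positions are usually quoted in the neighborhood of 10^43 to 10^47, which is one reason brute-force search alone was never going to crack Go.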

It's encouraging to note that even when Sedol lost to the machine, he could come up with moves that equalled the machine's moves in subtlety and surprise.  Of course, this may not be the case for much longer.  It seems like once software developers show they can beat humans at a given complex task, they lose interest and move on to something else.  And this shows an aspect of the situation that so far, few have commented on:  the fact that if you go far enough back in the history of AlphaGo, you find not more machines, but humans.

It was humans who figured out the best strategies to use for AlphaGo's design, which involved making a lot of slightly different AlphaGos and having them play against each other and learn from their experiences.  Yes, in that sense the computer was teaching itself, but it didn't start from scratch.  The whole learning environment and the existence of the program in the first place was due, not to other machines, but to human beings. 
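AlphaGo's real training pipeline (deep neural networks plus Monte Carlo tree search) is far beyond a blog post, but the self-play idea described above can be caricatured in a few lines. Everything here is a made-up toy, not DeepMind's method; the point is that the "game," the rival, and the loop are all set up by humans before any "self-teaching" happens:

```python
import random

random.seed(42)  # for reproducibility

# Toy self-play: two slightly different copies of a "player" compete
# at guessing an unknown target; whichever copy comes closer becomes
# the starting point for the next generation.  Note that no machine
# chose the target or wrote this loop.
target = 0.7          # the "rules of the game," fixed by the designers
skill = 0.0           # the player's current parameter
for generation in range(500):
    a = skill + random.uniform(-0.05, 0.05)   # current player
    b = skill + random.uniform(-0.05, 0.05)   # perturbed rival
    skill = min(a, b, key=lambda s: abs(s - target))
print(round(skill, 2))  # close to the target after many generations
```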

This gets to one of the main problems I have with artificial-intelligence (AI) proponents who see as inevitable a day when non-biological, non-human entities will, in short, take over.  Proponents of what is called transhumanism, such as inventor and author Ray Kurzweil, call this day the Singularity, because they think it will mark the beginning of a kind of explosion of intelligence that will make all of human history look like mudpies by comparison.  They point to machines like Deep Blue and AlphaGo as precursors of what we should expect machines to be capable of in every phase of life, not just specialized rule-bound activities like chess and Go. 

But while the transhumanists may be right in certain details, I think there is an oversimplified aspect to their concept of the Singularity which is often overlooked.  The mathematical notion of a singularity is that it's a point where the rules break down.  True, you don't know what's going on at the singularity point itself, but you can handle singularities in mathematics and even physics as long as you're not standing right at the point and asking questions about it.  I teach an electrical engineering course in which we routinely deal with mathematical singularities called poles.  As long as the circuit conditions stay away from the poles, everything is fine.  The circuit is perfectly comprehensible despite the presence of poles, and performs its functions in accordance with the human-directed goals set out for it. 
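A minimal numerical illustration of that point, using a generic first-order example rather than any particular circuit from the course: the transfer function H(s) = 1/(s + 2) has a pole at s = -2, and it is perfectly well behaved everywhere except at that one point.

```python
# H(s) = 1/(s + 2): a transfer function with a single pole at s = -2.
def H(s: complex) -> complex:
    return 1 / (s + 2)

# Away from the pole, the function is tame...
print(abs(H(0)))           # 0.5
print(abs(H(1j)))          # |1/(2 + j)| = 1/sqrt(5), about 0.447
# ...but its magnitude blows up as s approaches the pole.
print(abs(H(-2 + 0.001)))  # roughly 1000
```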

All I'm seeing in artificial intelligence tells me that people are still in control of the machines.  For the opposite to be the case—for machines to be superior to people in the same sense that people are now superior to machines—we'd have to see something like the following.  The only way new people would come into being is when the machines decide to make one, designing the DNA from scratch and growing and training the totally-designed person for a specific task.  This implies that first, the old-fashioned way of making people would be eliminated, and second, that people would have allowed this elimination to take place. 

Neither of these eventualities strikes me as at all likely, at least as a deliberate decision made by human beings.  I will admit to being troubled by the degree to which human interactions are increasingly mediated by opaque computer-network-intensive means.  If people end up interacting primarily or exclusively through AI-controlled systems, the system has an excellent opportunity to manipulate people to their disadvantage, and to the advantage of the system, or whoever is in charge of the system. 

But so far, all the giant AI-inspired systems are firmly under the control of human beings, not machines.  No computer has ever applied for the position of CEO of a company, and if it did, it would probably get crossways to its board of directors in the first few days and get fired anyway.  As far as I can tell, we are still in the regime of Man exerting control over Nature, not Artifice exerting control over Man.  And as C. S. Lewis wrote in 1943, ". . . what we call Man's power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument." 

I think it is significant that AlphaGo beat Lee Sedol, but I'm not going to start worrying that some computerized totalitarian government is going to take over the world any time soon.  Because whatever window-dressing the transhumanists put on their Singularity, that is what it would have to be in practice:  an enslavement of humanity, not a liberation. And as long as enough people remember that humans are not machines, and machines are made by, and should be controlled by, humans, I think we don't have to lose a lot of sleep about machines taking over the world.  What we should watch are the humans running the machines.

Sources:  The match between Lee Sedol and AlphaGo was described by Cade Metz in Wired at http://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/.  I also referred to the Wikipedia articles on Deep Blue, Go, and AlphaGo.  The quotation from The Abolition of Man by C. S. Lewis is from the Macmillan paperback edition of 1955, p. 69. 

Monday, February 04, 2013

The Worth of Work


Most professional engineers work for pay, and that leads to an interesting question:  which is more important, the work or the pay you get for it?  I bring up that question after reading an essay on work by the well-known medievalist C. S. Lewis. 

In the essay, Lewis distinguished between two types of work.  The first type is work that is worth doing for its own sake.  Some professions are automatically included in this classification:  teachers (Lewis was a professor at Oxford), doctors, pastors, and other members of the helping professions, for instance.  As long as members of these groups do their work faithfully and competently, they should have no problem looking themselves in the mirror and saying, “I’m glad I do what I do, because it makes the world a better place.”  There are other types of work that can fit into this first category, and I’ll get to those in a minute.

The second kind of work is done merely to get a paycheck.  The thing you do for the paycheck is almost irrelevant:  it is simply a means to the end of getting money.  Now there is nothing intrinsically wrong about earning money.  In a fallen world, money and economics are inescapable aspects of existence.  But if you make money your No. 1 priority and aren’t too particular about how you get it, you can end up doing things that, at best, are unnecessary for the world’s betterment, and at worst, positively harm others.  Scam operators, burglars, and drug dealers all get money, but the legal system has objections to their methods.

Where do engineers fit into all this?  There is no easy general answer to that question.  I think the question of pay is high on the list of most young engineering graduates early in their careers.  It’s the first thing they often mention when you ask them what they’ll do after graduation:  “go out and earn some bucks!”  But with their special expertise and competencies in design, engineers at least have a chance to wind up doing the first kind of job:  one that is intrinsically worth doing on its own merits, regardless of the pay scale. 

Besides engineering tasks that serve the obvious helping professions, I think a wide variety of other kinds of engineering jobs are worth doing on their own.  What if the thing you help create doesn’t directly help people, in the sense of medical treatments and so on, but is a thing of beauty—an artistic creation that helps others see the world in a way they had not seen it before?  Take, for example, the platoons of engineers needed to make an animated film these days, the kind that takes the natural world seriously and attempts to portray it the way it really looks and acts. 

If you peruse the output of the Association of Computing Machinery’s annual SIGGRAPH conferences (many examples of which are on YouTube), you will find an amazing array of animations of everything from hair blowing in the wind, to cannonballs blasting through realistic curtains, to ribbons tying themselves into realistic knots.  These things wind up in almost unnoticeable corners of animated films, but they add realism and depth as the engineers behind the scenes overcome the challenges of using great but limited computing power to portray the way physical objects really interact with each other.  The audience gets to see only those simulations that worked.  The ones that blow up or produce screen confetti end up on the digital cutting-room floor, and serve as stepping stones along the way to success. 

A less straightforward example of engineering that is worth doing is the work of engineers who create machines that do work formerly done by people.  The chairman of Foxconn, the company that makes iPhones and employs over a million people worldwide, says that he wants to replace as many of his workers as he can with robots.  Three-dimensional printers that turn CAD drawings into working machines with moving parts are on the market now—my school is thinking of buying one, so you know they can’t be that expensive.  The story of technological unemployment is at least as old as the Industrial Revolution, but signs are that it’s going to be a huge factor in the worldwide economy in the next few years.   And engineers are behind all the technology that will let Foxconn run with more robots than people, if that ever comes to pass.

Does this mean that engineers will eventually work themselves out of a job, like the mythical snake that started eating its own tail until it disappeared?  Some people think so.  A group calling itself the Transhumanists believes computers will soon become smarter than people and basically take over the world, leaving behind the old-fashioned “meat-cage” models of people who are based in natural biology.

Those of us with a Christian worldview know this isn’t possible, however, because machines don’t have spirits.  You could in principle have a world full of machines busily making other machines and exchanging bits and so on, but without humans there would be no spirit and no life.  There might be a great deal going on in that world, but without anyone to see it, it would be a dead world, as dead as the moon. 

The thing called a human being is an amalgam of spirit and matter, and exists because of love.  To the extent we recognize that fact, we are guided into the right occupation and work for the right reasons.  To the extent we forget it, we play into the hands of those for whom money is everything, and for whom love is simply another overhead expense to be eliminated.

Sources:  C. S. Lewis’s essay “Good Work and Good Works” appears as chapter 5 in The World’s Last Night and Other Essays (Harcourt Brace & Co., 1959).  I learned of Foxconn’s plans and interesting facts on 3-D printing from an article by Michael Ventura that appeared in the online edition of the Austin Chronicle on Jan. 25, 2013 at http://www.austinchronicle.com/columns/2013-01-25/letters-at-3am-what-are-human-beings-for/.