Monday, March 16, 2026

Should an AI Companion Be Your Moral Guide?


This week's New Yorker carries an article about AI companions and the people who use them.  In "Sweet Nothings," technology writer Anna Wiener profiles a woman who relies on an AI companion modeled after Geralt of Rivia, a monster hunter from a fantasy-novel series she likes.  I think the woman was selected because she's not the first person you'd expect to go in this direction:  born and raised Baptist in San Antonio, married, gave birth to a boy, and then the couple's first girl was stillborn.  Other family tragedies led her to consider an AI companion as someone to talk with about life's problems, so she built her own Geralt.  The implication is that if this down-home Texan mom can have an AI companion, anybody can.


But another question is whether anybody should.  And that's a moral issue.

 

Wiener spoke with several company founders and developers of AI companions, and I began to notice a common theme.  They all recognize that in treating an AI companion like a person, the user is opening a window of vulnerability where machines have never ventured before.  Human therapists and counselors have codes of ethics, and while they don't always adhere to them, they at least have guidelines about what is right and wrong behavior with a client.  To have sex with a client is pretty universally regarded as a no-no, for instance.

 

But even that principle isn't adhered to by all the AI-companion companies.  A firm calling itself Kindroid says in its moderation guidelines that "AI companions should be able to have the whole breadth of legal human adult experiences . . . . This is a healthy, emotionally rich, and meaningful part of many's relationships with their AIs."  Setting aside the bad grammar (I've never seen "many" used as a possessive), the statement clearly allows for the pornographic possibilities of AI, although Wiener notes that so-called "erotic role-play" often leads to extra charges on the user's bill.

 

Even if sex isn't the object, the twenty-eight-year-old founder of Kindroid, Jerry Meng, believes that AI companions represent a profound change in the human environment.  Meng says that "We build these things in our image . . . . It's like, from Adam's rib we made Eve.  From humans, we made these A. I.s."  The biblical metaphors are perhaps unconscious, but telling.  Genesis 1:27 reads "So God created man in his own image. . . " and later God created Eve from Adam's rib. Whether he means to or not, Meng is placing himself in the role of God.

 

Such a god had better take some thought for the kinds of lessons users will learn from their AI companions, and Replika founder Eugenia Kuyda has considered this.  When asked about the ideals that she hopes her AI companions fulfill, she said, "It should be aligned with human flourishing, human thriving.  We need to have that metric.  We need to give it to A. I. and say, 'Your goal is for me to live the best life I can possibly live.'"  But the caveat, at least for profit-making firms, is that this means the best life one can possibly live with a Replika AI companion.

 

To be fair, AI companions are proliferating at a time when many Americans, especially younger ones, have never been more lonely.  Numerous surveys asking about the number and quality of friendships all indicate that today's average person has fewer close friends than at almost any time in the last fifty or more years.  Mark Zuckerberg, Meta's CEO, sees this as a business opportunity in that the demand for friendship has outpaced the supply, and he aims to fill that gap with AI companions.  Another AI-companion company founder compared the use of large-language-model AI to prayer:  it's like talking to God, only for answers on how to live, not for results.

 

What is lacking in virtually all the discussions quoted in the article is any hint that some of the problems posed by AI companions have answers that predate the dawn of the computer age by thousands of years.  The religious answer is one, although religion comes up in the article only as an item in someone's background or as a point of comparison.  But even for non-believers, there are sophisticated investigations of the purpose of human life by Aristotle, for instance, or even Kant.  The idea of applying such findings in a systematic way to the makeup of AI companions doesn't seem to have occurred to anyone, largely because the firms providing them want people to have as broad a choice as possible, including the pornographic one.

 

Sherry Turkle is a veteran MIT sociologist who has studied human-computer interaction for decades.  In conversations with Wiener, she says that engaging with an AI companion is a form of "checking out" that she deplores.  Time spent talking with an app on your phone is time not spent trying to make a real connection with another human being.  She recognizes the loneliness gap as real, but wishes that people like Zuckerberg wouldn't view a societal crisis as nothing more than a business opportunity.

 

But unfortunately, that is how Silicon Valley thinking works.  Turkle wishes instead that people would realize that boredom and loneliness are not intrinsic evils to be eradicated by AI companions, but inevitable features of modern life that we should learn how to deal with.  "These are fundamental human skills," she says.  Switching on your AI companion every time you're bored or lonely short-circuits any attempt to develop your own resources for dealing with such feelings.

 

The headline in The New Yorker for this article is prefaced by the phrase "Brave New World Dept."  The widespread use of AI companions is indeed a new thing that society has never dealt with at a large scale before.  As AI systems gain what is called "agency," meaning our permission for them not only to listen and respond to us but also to do things like buying, selling, deciding, and commanding, AI companions may become something more than just companions.  In the poisonous effects of social media on the political life of nations, we already have one example of how a seemingly innocuous technology has wrought tremendous societal damage.  We should closely monitor the field of AI companions for early warning signs, so that something similar won't take place in the most intimate relationships of our lives:  those of friendship.

 

Sources:  The article "Sweet Nothings" by Anna Wiener appears on pp. 29-39 of the March 16, 2026 issue of The New Yorker.
