Monday, March 16, 2026

Should an AI Companion Be Your Moral Guide?

  

This week's New Yorker carries an article about AI companions and the people who use them.  In "Sweet Nothings," technology writer Anna Wiener profiles a woman who relies on an AI companion modeled after Geralt of Rivia, a monster hunter in a fantasy-novel series she likes (The Witcher).  I think the woman was selected because she's not the first person you'd think of to go in this direction:  born and raised Baptist in San Antonio, she married and gave birth to a boy, and then her first girl was stillborn.  Other family tragedies led the woman to consider an AI companion as someone to talk with about life's problems, so she built her own Geralt.  The implication is that if this down-home Texan mom can have an AI companion, anybody can.

           

But another question is whether anybody should.  And that's a moral issue. 

 

Wiener spoke with several company founders and developers of AI companions, and I began to notice a common theme.  They all recognize that in treating an AI companion like a person, the user is opening a window of vulnerability where machines have never ventured before.  Human therapists and counselors have codes of ethics, and while they don't always adhere to them, they at least have guidelines about what is right and wrong behavior with a client.  To have sex with a client is pretty universally regarded as a no-no, for instance.

 

But even that principle isn't adhered to by all the AI-companion companies.  A firm calling itself Kindroid says in its moderation guidelines that "AI companions should be able to have the whole breadth of legal human adult experiences . . . . This is a healthy, emotionally rich, and meaningful part of many's relationships with their AIs."  Overlooking the bad grammar (I've never seen "many" used as a possessive), it's clear that the pornographic possibilities of AI are allowed for in this statement, although Wiener notes that so-called "erotic role-play" often leads to extra charges on the user's bill.

 

Even if sex isn't the object, the twenty-eight-year-old founder of Kindroid, Jerry Meng, believes that AI companions represent a profound change in the human environment.  Meng says that "We build these things in our image . . . . It's like, from Adam's rib we made Eve.  From humans, we made these A. I.s."  The biblical metaphors are perhaps unconscious, but telling.  Genesis 1:27 reads "So God created man in his own image. . . " and later God created Eve from Adam's rib. Whether he means to or not, Meng is placing himself in the role of God.

 

Such a god had better take some thought for the kinds of lessons users will learn from their AI companions, and Replika founder Eugenia Kuyda has considered this.  When asked about the ideals that she hopes her AI companions fulfill, she said, "It should be aligned with human flourishing, human thriving.  We need to have that metric.  We need to give it to A. I. and say, 'Your goal is for me to live the best life I can possibly live.'"  But the caveat, at least for profit-making firms, is that it's the best life one can possibly live with a Replika AI companion.

 

To be fair, AI companions are proliferating at a time when many Americans, especially younger ones, have never been more lonely.  Numerous surveys asking about the number and quality of friendships all indicate that today's average person has fewer close friends than at almost any time in the last fifty or more years.  Mark Zuckerberg, Meta's CEO, sees this as a business opportunity in that the demand for friendship has outpaced the supply, and he aims to fill that gap with AI companions.  Another AI-companion company founder compared the use of large-language-model AI to prayer:  it's like talking to God, only for answers on how to live, not for results.

 

What is lacking in virtually all the discussions quoted in the article is any hint that there are answers to some of these problems that the use of AI companions poses—answers that predate the dawn of the computer age by thousands of years.  The religious answer is one, although religion comes up only as an item in one's background or as a comparison.  But even for non-believers, there are sophisticated investigations and findings about the purpose of human life by Aristotle, for instance, or even Kant.  The idea of applying these findings in a systematic way to the makeup of AI companions doesn't seem to have occurred to anyone, largely because the firms providing them want people to have as broad a choice as possible, including the pornographic one.

 

Sherry Turkle is an MIT professor who has studied human-computer interactions for decades.  In discussions with Wiener, she says that engaging with an AI companion is a form of "checking out" that she deplores.  Time spent talking with an app on your phone is time not spent trying to make a real human connection with another human being.  She recognizes the loneliness gap as real, but wishes that people like Zuckerberg wouldn't view a societal crisis as nothing more than a business opportunity. 

 

But unfortunately, that is how Silicon Valley thinking works.  Turkle wishes instead that people would realize that boredom and loneliness are not intrinsic evils to be eradicated by AI companions, but inevitable features of modern life that we should learn how to deal with.  "These are fundamental human skills," she says.  And just switching on your AI companion every time you're bored or lonely short-circuits any attempt to develop your own resources for dealing with such issues. 

 

The headline in The New Yorker for this article is prefaced by the phrase "Brave New World Dept."  The widespread use of AI companions is indeed a new thing that society has never dealt with at a large scale before.  As AI systems gain what is called "agency"—the permission we grant them not only to listen and respond to us, but to do things like buying, selling, deciding, and commanding—AI companions may become something more than just companions.  In the poisonous effects of social media on the political life of nations, we already have one example of how a seemingly innocuous technology has wrought tremendous societal damage.  We should closely monitor the field of AI companions for early warning signs so that something similar won't take place in the most intimate relationships of our lives—those of friendship.

 

Sources:  The article "Sweet Nothings" by Anna Wiener appears on pp. 29-39 of the March 16, 2026 issue of The New Yorker.

Monday, March 09, 2026

Meta Considers X-Ray Glasses That Work

  

Superman supposedly had X-ray vision, which, in the movies and comic books about him, he used only for noble and righteous purposes such as catching crooks.  But teenage boys could easily think of other things they might do with such an ability, and I'm sure there are jokes out there involving Superman, Lois Lane, and—well, on to more serious matters. 

           

In the back pages of Superman comic books in the 1950s, you might find an ad for X-ray glasses for the amazingly low price of $1.25.  Like most things that are too good to be true, these turned out to be nothing but cardboard specs with two small holes where the lenses would go.  The holes were covered with a textured plastic that created a diffraction effect whenever something with a sharp outline in front of you was strongly backlit.  Instead of just dark and light, the glasses produced a kind of broad gray area extending a fixed distance inside the outline.  When you viewed a hand this way, the effect was reminiscent of an X-ray, but only if you used your imagination.  And as for looking at backlit women, it still took a lot of imagination to see anything other than a slightly smaller silhouette of the actual person.

 

The thing Meta is considering is no joke, however.  According to a report in The Independent, a UK media outlet, the New York Times revealed last month an internal Meta memo that considered adding AI facial-recognition technology to the company's smart glasses. 

 

No such features are yet available commercially from Meta, but the idea drew strong criticism from women's-rights groups such as Refuge.  The charity tracks technology-related abuse, and claims that in 2025, referrals to its "technology-facilitated abuse and economic empowerment team" rose by 62% over 2024, to 829. 

 

Clearly, stalkers and other malefactors who are invading women's privacy are exploiting whatever technology they can get their hands on, from general Internet searches to facial-recognition technology applied to online images.

 

Suppose a man could simply wander around in a crowded place such as a shopping mall or bus terminal and find out all kinds of details—name, address, email, phone—for any woman he looks at.  It doesn't take much imagination to see how this situation could go bad very fast.  And even if Meta takes steps to prevent such potentially harmful activity, once the hardware is in place, determined individuals will figure out ways to bypass safeguards. 

 

It remains to be seen whether Meta can overcome the apparently steep barrier that has fended off efforts by Google, Apple, and others to turn smart glasses into a popular product.  I don't know whether the hardware is still inadequate (battery life, resolution, weight, etc.) or whether people simply don't like the idea of weird internet stuff cropping up all the time in their field of vision, but the track record of smart glasses, as opposed to virtual-reality headsets that take over your visual field completely, is not good. 

 

But it's foolish to think that just because they haven't caught on yet, they never will.  And if they do, what is to prevent a bad actor from picking out a likely-looking woman in a crowd and digging up whatever he can find on her?

 

Today, someone trying to do that has to at least carry a smartphone around and point it at the intended victim.  But smart glasses, like a spy camera, make the act of photography invisible, so no one is aware of being imaged.  Of course, the ubiquity of security cameras in both commercial and residential spaces means that much of the time we're being photographed without our knowledge anyway.  Stores and banks have a motivation not to misuse the data thus captured, however.  Some guy walking around with smart glasses doesn't.

 

Whatever Meta decides to do about facial recognition with smart glasses, the prospect will be only one more brick pulled down from the wall of privacy that used to surround us, but is now not much more than a pile of rubble.  And as the Independent article points out, even if a company promises to keep data private subject to state and federal law, governments can and do require companies to divulge such data on demand.  So privacy is only privacy if the government lets you have it, rather like right-of-way on a freeway:  you don't have it unless someone gives it to you.

 

Of course, there are positive and defensive ways of using the same technology that can be used for stalking.  As a teacher, my poor ability to connect names to faces means that I rarely learn all my students' names before the end of the semester.  It would be great if I could just look at each one and see their names pop up underneath their faces—with their permission, of course.  And a few months ago, I was on a jury trying a case of indecent exposure in a public place.  During the trial, a critical piece of evidence turned out to be clips from the victim's dashcam that caught clear images of the man who a few minutes later committed the crime for which he was convicted.

 

So I have no doubt that good things could result from smart glasses becoming cheaper, better-performing, and more widespread.  But at the same time, it should be possible for a well-resourced outfit like Meta to come up with technological means to prevent unauthorized identification of people via facial-recognition technology.  It might involve some larger-scale privacy initiative that would offer potential victims the option not to be recognized by smart glasses.  Wouldn't that be nice?  The company would moan and groan about how expensive it would be, but they should compare whatever the safeguards would cost them to the prospect of lawsuits by victims and family members who get stalked, attacked, or even killed by men who use Meta's technology to find their victims. 

 

Mark Zuckerberg, the choice is yours.

 

Sources:  An article on the National Review website at https://www.nationalreview.com/2026/03/you-cant-escape-the-ai-grid/ informed me of the Meta memo, which was also covered by The Independent at https://www.independent.co.uk/news/uk/home-news/meta-glasses-facial-recognition-domestic-abuse-b2923551.html.  A well-researched YouTube video tracing "X-ray" glasses back to the early 1900s and showing what you see through them is available at the Laura Legends channel at https://www.youtube.com/watch?v=rdVrTqaJrS4. 

Monday, March 02, 2026

Up Close and Personal with AI

  

After writing about AI for years, I still hadn't had what you might call a serious encounter with it in its personalized form.  Anyone who uses Google has probably been offered their "AI summary" before the conventional search results.  I've found these summaries helpful sometimes and not so helpful other times, but I haven't sought them out. 

 

What made me turn the corner was something a friend sent me by a software engineer named Matt Shumer, whose essay "Something Big is Happening" appeared on the Fortune website on Feb. 11.  Shumer's point was that the latest iterations of AI systems are so much more capable than what has gone before that whole swathes of what George Gilder calls "symbolic manipulators"—lawyers, engineers, judges, doctors, architects, you name it—now face a radical choice.  Either embrace AI and by doing so outperform your peers by orders of magnitude, or turn away from it and watch your career flame out.  That's a little exaggerated, but not much.

 

This reinforced something another friend has told me about his own personal use of AI:  that it has benefited his writing and research greatly, acting as a mostly trustworthy assistant to summarize large bodies of literature and help him clarify his thoughts.  The biggest problem this friend has had with it is that it tends to be sycophantic and flatter him excessively.  But he sat down with it one day and told it to refer to him as "the researcher" and itself as the "AI system," instead of "you" and "me," and things got better. 

 

So I opted for a paid version of Anthropic's AI product, which Shumer said was significantly better than the free version, and decided to give it a major task that I would ordinarily give to a grad student, if I had one (funding is very hard to find in my research area).

 

The job involved reading about a thousand rows of data in a big spreadsheet to fill out some yes/no questions about each row.  The data was in the form of comments submitted by various individuals in response to questions.  I gave the AI system examples to follow and what I thought were pretty detailed instructions.

 

All this was in the form of the usual chat format, with me and the AI system taking turns typing into chat boxes.

 

After a misunderstanding in which I thought the system was working on the problem and it thought I hadn't told it to start yet, it got to work and spat out various things like "Ran 6 commands . . . Examine the spreadsheet structure" and so on.

 

It was done in about ten minutes.  Then I spent an hour or so going over its work.

 

I wish I could say I couldn't have done better myself.  But I could have, by a long shot. 

I didn't exhaustively examine all 700 rows of entries that the system produced—that would have taken many hours, about as long as it would take me to just do the job myself.  So I sampled every tenth row for a hundred rows to see how the thing did.

 

In looking at ten rows, I found nine mistakes.  This is not a good average.
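The spot-check just described is simple enough to sketch in a few lines of Python.  Everything here is hypothetical (the function name, the answer lists, and the data stand in for the real spreadsheet, which isn't described in detail); it merely shows the arithmetic of sampling every tenth row among the first hundred and computing an error rate:

```python
# A minimal sketch (not the actual workflow) of the spot-check described
# above: compare the AI's yes/no answers to hand-checked answers on every
# tenth row among the first hundred rows, and report the sampled error rate.

def sample_error_rate(ai_answers, checked_answers, step=10, limit=100):
    """Return (errors, sampled, error_rate) over every `step`-th row
    among the first `limit` rows."""
    errors = 0
    sampled = 0
    for i in range(0, limit, step):
        sampled += 1
        if ai_answers[i] != checked_answers[i]:
            errors += 1
    return errors, sampled, errors / sampled

# Hypothetical data: ten sampled rows, nine of which the AI got wrong.
ai_answers = ["yes"] * 100
checked_answers = ["no"] * 100
checked_answers[0] = "yes"  # the one sampled row the AI got right
print(sample_error_rate(ai_answers, checked_answers))  # (9, 10, 0.9)
```

Nine mistakes in ten sampled rows works out to a 90 percent error rate on the sample, which is why an hour of checking was enough to sour me on the result.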

 

In the system's defense, this is absolutely the first time I've ever tried anything like this.  I could go back and get a lot more explicit about the rules for answering the yes/no questions about each row, and let it try again.  But in comments online about this particular version of AI (Sonnet 4.6, I think), some people said that you get results faster, but you have to fix problems more often.  That is consistent with my experience.

 

Good things about this exercise:  the system ran fast, basically grasped what I wanted, and produced something in only ten minutes.  But speed isn't everything. 

 

Some not-so-good aspects include the errors, and a kind of weird fawning or flattery I also noticed.  I'd call it "gushing":  at one point it spontaneously responded, "This is a genuinely exciting dataset — ball lightning is one of the most mysterious atmospheric phenomena ever reported!" 

 

I suppose that sort of thing has been cultivated by the AI's keepers, probably to keep the user engaged, or encouraged, or something.  I found myself wishing that instead, they had adopted the mien of Joe Friday in the old Dragnet true-crime series.  Friday was famed for his flat "Just the facts, ma'am" aspect, and that seems more in keeping with a system that supposedly can tackle highly sophisticated and challenging jobs of major import. 

 

But those of us who don't work for Anthropic or the other four or five leading AI companies will simply have to take what we can get and deal with the negative aspects as well as we can.

 

Will I try again?  Probably, but maybe with a different task.  As part of my signup process, Anthropic has been emailing me little suggestions of other things to try:  writing recipes, managing emails, creating content, solving problems, visualizing data, or helping me decide whether to go to Portugal or Spain on vacation (no-brainer for me:  Spain, but I don't have time right now). 

 

I am not especially tempted to try any of these suggestions just yet.  But I do admit that if I can get the thing to turn out useful work, it could be worth what I spent on it.  I paid for a year's subscription in advance, perhaps not the wisest thing to do, but I'm the type of person who is motivated to get his money's worth, and spending the money in advance may get me engaged when nothing else would.

 

I see that Anthropic just had a dustup with the Pentagon, which banned its use within the armed forces as punishment for noncooperation, or something.  Now that we are apparently in a war with Iran, the leaders of Anthropic may feel glad that their product isn't part of the war.  But not all battles are fought with bombs and bullets, and I have a feeling that the greatest battles involving AI are yet to come.

 

Sources:  The essay by Matt Shumer, who runs an AI applications company, appeared on Feb. 11, 2026 at https://fortune.com/2026/02/11/something-big-is-happening-ai-february-2020-moment-matt-shumer/.