Monday, August 25, 2025

RAND Says AI Apocalypse Unlikely

 

In 2024, several hundred artificial-intelligence (AI) researchers signed a statement calling for serious actions to avert the possibility that AI could break bad and kill the human race.  In an interview last February, Elon Musk mused that there is "only" a 20% chance of annihilation from AI.  With so many prominent people speculating that AI may spell the end of humanity, Michael J. D. Vermeer of the RAND Corporation began a project to explore just how AI could wipe out all humans.  It's not as easy as you think.

 

RAND is one of the original think tanks, founded in 1948 to develop U.S. military policy, and it has since studied a wide range of issues in quantitative ways.  As Vermeer writes in the September Scientific American, he and his fellow researchers considered three main approaches to the extinction problem:  (1) nuclear weapons, (2) pandemics, and (3) deliberately induced global warming. 

 

It turns out that nuclear weapons, although capable of killing billions if set off in densely populated areas, would not do the job.  Small remnants of the population would survive, scattered in remote places, and they would probably be enough to perpetuate the human race indefinitely.

 

The most likely scenario to work is a combination of pathogens that together would kill nearly every human who caught them.  The problem here ("problem" from the AI's point of view) is that once people figured out what was going on, they would impose quarantines, much as New Zealand did during COVID, and entire island nations or other isolated regions could survive until the pandemic burned itself out.

 

Artificially induced global warming was the hardest way to do it.  There are compounds such as sulfur hexafluoride, which has about 25,000 times the global-warming potential of carbon dioxide.  If you made a few million tons of it and spread it around, it could raise the global average temperature so much that "there would be no environmental niche left for humanity."  But factories pumping megatons of bad stuff into the atmosphere would be hard to hide from people, who naturally would want to know what's going on.
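
 

To get a feel for the scale involved, here is a back-of-envelope calculation in Python.  The five-million-ton quantity is my own hypothetical stand-in for "a few million tons," and the 25,000 multiplier is the article's round figure for sulfur hexafluoride's global-warming potential; neither number comes from the RAND study itself.

# Rough CO2-equivalent of a hypothetical sulfur hexafluoride release.
SF6_GWP = 25_000        # warming potential relative to CO2 (the article's round figure)
SF6_TONS = 5_000_000    # hypothetical release:  "a few million tons"

co2_equivalent_tons = SF6_TONS * SF6_GWP
print(f"{co2_equivalent_tons:.2e} tons of CO2-equivalent")   # 1.25e+11

# For scale:  global CO2 emissions currently run roughly 4e10 tons per year.
print(f"about {co2_equivalent_tons / 4e10:.1f} years of world CO2 output")

Even under these rough assumptions, the release would be comparable to several years of the world's entire CO2 output, which suggests both why the researchers took the scenario seriously and why chemical plants operating on that scale would be impossible to hide.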

 

So while an AI apocalypse is theoretically possible, all the scenarios the researchers considered shared common flaws.  For any of them to happen, the AI system would first have to make up its mind, so to speak, to persist in the goal of wiping out humanity until the job was actually done.  Then it would have to wrest control of the relevant technology (nuclear or biological weapons, chemical plants) and conduct extensive projects with them to execute the goal.  It would also have to obtain the cooperation of humans, or at least their unwitting participation.  And finally, as civilization collapsed, the AI system would have to carry on without human help, since the few remaining humans would be useless for its purposes and simply targets for extinction.

 

While this is an admirable and objectively scientific study, I think it overlooks a few things. 

 

First, it draws an arbitrary line between the AI system (which in practice would be a conglomeration of systems) and human beings.  Both now and for the foreseeable future, humans will be an essential part of AI, because it needs us.  Imagine the opposite scenario:  how would humans wipe out all AI from the planet?  If every IT person in the world simply didn't show up for work tomorrow, what would happen?  A lot of bad things, certainly, because computers (not just AI, but increasingly systems involving AI) are intimately woven into modern economies.  But problems (caused, probably, by clueless non-IT humans) would start piling up, and in short order we would have a global computer crash the likes of which has never been seen.  True, millions of people would die along with the AI systems.  But I'm not aware of any truly autonomous AI system of any complexity and importance that runs with no humans dealing with it in any way, as was apparently the case in the 1970 sci-fi film "Colossus:  The Forbin Project."

 

So if an AI-powered system showed signs of getting out of hand—taking over control of nuclear weapons, doing back-room pathogen experiments on its own, etc.—we could kill it by just walking away from it, at least the way things are now.

 

More likely than any of the hypothetical disasters imagined by the RAND folks is a possibility they didn't seem to consider.  What if AI just gradually supplants humans until the last human dies?  This is essentially the stated goal of many transhumanists, who foresee the uploading of human consciousness into computer hardware as their equivalent of eternal life.  They don't realize that this idea is equivalent to thinking that an animated effigy of oneself will guarantee one's survival after death, much as the ancient Egyptians prepared their pharaohs for the afterlife. 

 

But pernicious ideas like this can gain traction, and we are already seeing an unexpected downturn in fertility worldwide as civilizations benefit from technology-powered prosperity.  If AI and its auxiliary technological forms ever put an end to humanity, I think the gradual replacement of humans by AI-powered systems is more likely than any sudden, concentrated catastrophe like the ones the RAND people considered.  And the creepy thing about this scenario is that it's happening already, right now, every day.

 

Romano Guardini was a theologian and philosopher who in 1956 wrote The End of the Modern World, in which he foresaw in broad terms what was going to happen to modernity as the last vestiges of Christian influence were replaced by a focus on the achievement of power for power's sake alone.  Here are a few quotes from near the end of the book:  "The family is losing its significance as an integrating, order-preserving factor . . . . The modern state . . . is losing its organic structure, becoming more and more a complex of all-controlling functions.  In it the human being steps back, the apparatus forward."  As Guardini saw it, the only power rightly controlled is exercised under God.  And once God is abolished and man sets up technology as an idol, looking to it for salvation, the spiritual death of humanity is assured, and physical death may not be far behind.

 

I'm glad the RAND people assure us we don't have to worry about the kind of AI apocalypse that would make a good, fast-paced, dramatic movie.  But there are other dangers from AI, and the slow, insidious attack is the one to guard against most vigilantly.

 

Sources:  Michael J. D. Vermeer's "Could AI Really Kill Off Humans?" appeared on pp. 73-74 of the September 2025 issue of Scientific American, and is also available online at https://www.scientificamerican.com/article/could-ai-really-kill-off-humans/.  I also referred to the Wikipedia article on sulfur hexafluoride.  The Romano Guardini quotes are from pp. 161-162 of his The End of the Modern World, in an edition published by ISI Press in 1998. 

Monday, August 18, 2025

Is the Internet Emulsifying Society?

 

About a year ago I had cataract surgery, which these days means replacing the natural lens in the eye with an artificial one.  Curious about what happens to the old lens, I looked up the details of the process.  It turns out that one of the most common procedures uses an ultrasonic probe to emulsify the old lens, turning a highly structured and durable object that served me well for 70 years into a liquefied mess that was easily removed. 

 

If you're wondering what this has to do with the internet and society, be patient.

 

A recent report in The Dispatch by Yascha Mounk describes the results of an analysis by Financial Times journalist John Burn-Murdoch of data from the large Understanding America survey of more than 14,000 respondents.  Psychologists have standardized certain personality traits that are fairly easy to assess in surveys and also predictive of how well people do in society.  Among these traits are conscientiousness, agreeableness, extraversion, and neuroticism.  People who are conscientious make good citizens and employees:  they are "organized, responsible, and hardworking."  Extraversion makes for better social skills and community involvement, while neuroticism indicates a tendency toward anxiety and depression.

 

Burn-Murdoch divided the results into age categories, the youngest being 16 to 39, and compared the rates of these traits to those prevailing in the full population in 2014, about a decade ago.  The results are shocking.

 

Every age group (16-39, 40-59, and 60+) has declined in extraversion, though by only ten percentile points out of 100:  from the 50th percentile to the 40th.  (If a trait were unchanged from 2014, the result would be the 50th percentile today.)  But in neuroticism, those under 40, who were already at the 60th percentile in 2014, have now zoomed up to the 70th.  Lots of young neurotics out there.  And they have distinguished themselves even more in agreeableness (declining from the 45th percentile to the 35th) and, most of all, in conscientiousness.  From a relatively good 47th percentile or so in 2014, the younger set has plummeted to an abysmal 28th percentile in about a decade.
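
 

To see concretely how this kind of benchmarking works, here is a minimal Python sketch in which a group's typical score is ranked against a frozen 2014 baseline distribution.  The baseline and the scores below are made up for illustration; they are not the actual Understanding America data.

import numpy as np
from scipy.stats import percentileofscore

# Hypothetical stand-in for the 2014 baseline:  trait scores for the full
# population (a made-up normal distribution, not the real survey data).
rng = np.random.default_rng(42)
baseline_2014 = rng.normal(loc=50, scale=10, size=14_000)

# A group whose typical score is unchanged still lands near the 50th percentile.
print(percentileofscore(baseline_2014, 50.0))   # close to 50

# A group whose typical score has slipped to about 44 lands near the 28th
# percentile of the 2014 distribution.
print(percentileofscore(baseline_2014, 44.0))   # roughly 27

The point is that every group is ranked against the same frozen 2014 yardstick, so a drop from the 47th percentile to the 28th means that today's typical young adult would rank far down in the 2014 crowd.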

 

When the conscientiousness results are broken down into their constituent parts, things look even worse.  Starting around 2016, the 16-39 group shows jumps in positive responses to statements like "is easily distracted" and "can be careless." 

 

If the survey had been restricted to teenagers, you might expect such results, although not necessarily this large.  But we're talking about people in their prime earning years too, twenty- to forty-year-olds. 

 

Mounk ascribes most of these disastrous changes to influences traceable to the Internet, and specifically to social media.  He contrasts the ballyhoo and wild optimism that greeted various Internet-based developments, such as online dating and free worldwide phone and Zoom calls, with the reality of cyberbullying, trolling, cancel culture, and the mob psychology on steroids for which the Internet provides fertile soil. 

 

Now for the emulsion part.  An emulsion takes something that tends to keep its integrity—such as a blob of oil in water or the natural lens of an eye—and breaks it up into individual pieces that are surrounded by a foreign agent.  Oil doesn't naturally mix with water, but an emulsifier (the lecithin in egg yolk, in the case of mayonnaise) reduces the surface tension and disperses the oil into tiny droplets surrounded by water.

 

That's fine in the case of mayonnaise.  But in the case of a society, when each individual is surrounded by a foreign film of Internet-mediated software, supplied by firms interested not primarily in the good of society but in making a profit, all kinds of pernicious effects can follow.

 

There is nothing intrinsically wrong with making money, so this is not a diatribe against big tech as such.  But consider cigarettes:  when a popular habit that made the tobacco companies rich was shown to have hidden dangers, it took a lot of political will and persistence to change things so that at least the dangers are now known to anyone who picks up a pack.

 

Mounk thinks it may be too late to do much about the social and psychological harms caused by the Internet, but we are still at the early stage of adoption when it comes to generative artificial intelligence (AI).  I tend not to draw such a sharp distinction between the way the Internet is currently used and what difference the widespread deployment of free software such as ChatGPT will make.  For decades, the tech companies have been using what amount to AI systems to addict people to their social media services and to profit from political polarization.  So as AI becomes more commonplace, the change will be one of degree, not necessarily of kind.

 

AI or no, we have had plenty of time already to see the pernicious results among young people of interacting with other humans mainly through the mediation of mobile phones.  It's not good.  Just as man does not live by bread alone, people aren't intended to interact by smartphone alone.  If they do, they get less conscientious, more neurotic, more isolated and lonely, and more easily distracted and error-prone.  They also find it increasingly difficult to follow any line of reasoning of more than one step.

 

Several states have recently passed laws restricting the use of smartphones in K-12 education.  This is a controversial but beneficial step in the right direction, although it will take a while to see how seriously individual school districts take it and whether it makes much of a difference in how young people think and act.  For those of you who believe in the devil, I'm pretty sure he is delighted to see society breaking up into isolated individuals who can communicate only through the foreign agent of the Internet, rather than being fully present—physically, emotionally, and spiritually—to the Other. 

 

Perhaps warnings like these will help us realize how bad things have become, and what we need to do to stop them from getting any worse.  In the meantime, enjoy your mayonnaise.

 

Sources:  Yascha Mounk's article "How We Got the Internet All Wrong" appeared in The Dispatch on Aug. 12, 2025 at https://thedispatch.com/article/social-media-children-dating-neurotic/.  I also referred to the Understanding America survey data on which it was based at https://uasdata.usc.edu/index.php. 

Monday, August 11, 2025

"Winter's Tale" and the Spirit of Engineering

 

Once in a great while I review a book in this space that I think is worth paying attention to if one is interested in engineering ethics, and it is almost always non-fiction.  Winter's Tale by Mark Helprin is a novel, published in 1983, and even now I can't say exactly why I think it should be more widely known among engineers and those interested in engineering.  But it should be.

 

Every profession has a spirit:  a bundle of intuitive and largely emotional feelings that go along with the objective knowledge and actions that constitute the profession.  Among many other things, Winter's Tale captures the spirit of engineering better than any other work of fiction I know.  And for that reason alone, it deserves praise.

 

The book is hard to describe.  There are some incontestable facts about it, so I'll start with those.  It is set mainly in New York City, with excursions to an imaginary upstate region called Lake of the Coheeries, and side trips to San Francisco.  It is not a realistic novel, in the sense that some characters in it live longer than normal lifespans, and various other meta-realistic things happen.  There are more characters in it than you'd find in a typical nineteenth-century Russian novel.  There is no single plot, but instead a complex tapestry that dashes back and forth in time like a squirrel crossing a street. 

 

But all these matters are secondary.  The novel's chief virtue is the creation of an atmosphere of hope and—not optimism, exactly, for some truly terrible things happen to people in it—a temperate yet powerful energy and drive shared by nearly all the characters, except for a few completely evil ones.  And even the evil ones are interesting. 

 

The fertility of Helprin's imagination is astounding, as he creates technical terms, flora and fauna, and other things that are, strictly speaking, imaginary yet somehow make sense within the story.  One of the many recurring elements in the book is the appearance of a "cloud wall" which seems to be a kind of matrix of creation and time travel.  Here is how Virginia, one of the principal characters, describes it to her son Martin:

 

           ". . . It swirls around the city in uneven cusps, sometimes dropping down like a tornado to spirit people away or deposit them there, sometimes opening white roads from the city, and sometimes resting out at sea while connections are made with other places.  It is a benevolent storm, a place of refuge, the neutral flow in which we float.  We wonder if there is anything beyond it, and we think that perhaps there is."

           "Why?" Martin asked from within the covers.

            "Because," said Virginia, "in those rare times when all things coalesce to serve beauty, symmetry, and justice, it becomes the color of gold—warm and smiling, as if God were reminded of the perfection and complexity of what He had long ago set to spinning, and long ago forgotten."

 

The whole novel is like that.

 

Although there is no preaching, no doctrine expounded, and very few explicitly religious characters such as ordained ministers, a thread of holiness, or at least awareness of life beyond this one, runs throughout the book.  This is probably why I learned about it from a recommendation by the Catholic philosopher Peter Kreeft, who mentioned it in Doors in the Walls of the World.

 

The reason engineers might benefit from reading it is that machines and other engineered structures—steam engines, cranes, bridges, locomotives—and those who design, build, and tend them, are portrayed in a way that is both appealing and transcendent.  At this moment I feel a frustration stemming from my inability to express what is so attractive about this book. 

 

You may learn something from the fact that the reviews of it I could find fell into two camps.  One camp loved it and wished it would go on forever.  The other camp, of which I turned out to be a member, found the book annoying after a while and almost didn't finish it.  I think one reason for the latter reaction is that, structurally, it is all trees and very little forest.

 

The very fertility of Helprin's imagination leads him to introduce novel and fascinating creations, incidents, and characters every page or two, and the result is a loss of coherence in the overall story and sequence of events.  A chart of every character and incident with lines drawn among them would look like the wiring diagram of a Boeing 747. 

 

But every time I said to myself that I was going to stop reading it, I picked it up again, and finally chose one free day to finish the thing, all the time hoping that it would get to the point.  There is no crashing finale in which everything is tied up neatly with a bow.  There is, however, a climax of sorts, and toward the end events occur which have parallels in the New Testament.  Farther than that I shouldn't go, for fear of spoiling the ending for anyone who wants to read it. 

 

The only other novel I can think of that bears even a faint resemblance to Winter's Tale is G. K. Chesterton's The Man Who Was Thursday.  It is also a fantasy in the sense that unrealistic things happen, and it features characters who are what Kreeft calls archetypes, embodied representations of ideas.  Not everyone likes or can even make sense of Chesterton's novel, and the same will undoubtedly be true of Winter's Tale.

 

For a fantasy, Helprin's book is rather earthy in spots, and for that reason I wouldn't recommend it for children.  But the earthiness is not gratuitous, and rounds out the realism of his character portrayals.  Many of the main actors behave courageously and even nobly, and would be good subjects for the exemplary mode of engineering ethics, in which one describes how engineering went right in a particular case with ethical implications. 

 

If you pick up the book, you will know in the first few pages whether you can stand to read the rest.  If you persist till the end, you will have experienced a world unlike our own in some ways, but very like what it could be if we heeded, in Lincoln's phrase, the better angels of our nature. 

 

Sources:  Winter's Tale was published in 1983 by Harcourt Brace Jovanovich.  Peter Kreeft's Doors in the Walls of the World was published in 2018 by Ignatius Press.

Monday, August 04, 2025

Should We Worry About Teens Befriending AI Companions?

A recent survey-based study by Common Sense Media shows that a substantial minority of the teenagers surveyed use AI "companions" for social interaction and relationships.  In a survey of over a thousand young people aged 13 to 17 conducted last April and May, the researchers found that 33% used applications such as ChatGPT, Character.AI, or Replika for things like conversation, role-playing, emotional support, or just as a friend.  Another 43% of those surveyed used AI as a "tool or program," and about a third reported no use of AI at all.

 

Perhaps more troubling than the percentages were some comments made by teens who were interviewed in an Associated Press report on the survey.  An 18-year-old named Ganesh Nair said, "When you're talking to AI, you are always right.  You're always interesting.  You are always emotionally justified."

           

The researchers also found that teens were more sophisticated than you might think about the reliability of AI and the wisdom of using it as a substitute for "meat" friends.  Half of those surveyed said they do not trust advice given to them by AI, although younger teens tended to be more trusting.  Two-thirds said that their interactions with AI were less satisfying than those with real-life friends, though one-third said they were about the same or better.  And four out of five teens spend more time with real friends than with AI.

 

The picture that emerges from the survey itself, as opposed to somewhat hyped news reports, is one of curiosity, cautious use, and skepticism.  However, there may be a small number of teens who either turn to AI as a more trusted interlocutor than live friends, or develop unhealthy dependencies of various kinds on AI chatbots. 

 

At present, we are witnessing an uncontrolled experiment in how young people deal with AI companions.  The firms backing these systems with their multibillion-dollar server farms and sophisticated software are motivated to engage young people especially, as habits developed before age 20 or so tend to stay with us for a lifetime.  It's hard to picture a teenager messaging ChatGPT to "grow old along with me," but it may be happening somewhere.

 

I once knew a woman in New England who kept a life-size cloth doll in her house, made to resemble a former husband.  Most people would regard this as a little peculiar.  But what difference is there between that sort of thing and spending time in online chats with a piece of software that simulates a caring and sympathetic friend?  The interaction with AI is more private, at least until somebody hacks the system.  But why does the notion of teenagers who spend time chatting with Character.AI as though it were a real person bother us?

 

By saying "us," I implicitly separate myself from teens who do this sort of thing.  But there are teens who realize the dangers of AI overuse or misuse, and older teens especially expressed concerns to the AP reporter that too much socializing with chatbots could be bad. 

 

The same teen quoted above got "spooked" about AI companions when he learned that a friend of his had used his companion to compose a Dear Jill message to his girlfriend of two years when he decided to break up.  I suppose that is not much different from a nineteenth-century swain paging through a tome entitled "Letters for All Occasions," although I doubt that even the Victorians were that thorough in providing examples for the troubled ex-suitor. 

 

Lurking in the background of all this is a very old theological principle:  idolatry.  An idol is anything less than God that we treat as God, in the sense of resorting to it for help instead of God.  For those who don't believe in God, idolatry would seem to be an empty concept.  But even atheists can see the effects of idolatry in extreme cases, even if they don't acknowledge the God who should be worshipped instead of the idol.

 

For a teen in a radically dysfunctional household, turning to an AI companion might be a good alternative, but a kind, loving human being would always be better.  Kind, loving human beings aren't always available, though, and so perhaps an AI companion would suffice in a pinch like a "donut" spare tire until you can get the flat fixed.  But you shouldn't drive on a temporary tire indefinitely, and teens who make AI companions a regular and significant part of their social lives are probably headed for problems.

 

What kind of problems?  Dependency, for one thing.  The AI firms are not promoting their companions out of the kindness of their collective hearts; the more people rely on their products, the more money they make.  The researchers who conducted the survey are concerned that teens who use AI companions that never argue, never disagree, and validate everything they say will be ill-prepared for the real world, where other humans have their own priorities, interests, and desires. 

 

In an ideal world, every teen would have a loving mother and father they would trust with their deepest concerns, and perhaps friends as well who would give them good advice.  Not many of us grew up in that ideal world, however, and so perhaps teens in really awful situations may find some genuine solace in turning to AI companions rather than humans.

 

The big news of this survey is that the use of AI companions among teens is so widespread, though still a minority practice.  The next thing to do is to focus on the small number of teens for whom AI companions are not simply something fun to play with, but form a deep and significant part of their emotional lives.  These are the teens we should be most concerned about, and finding out why they get so wrapped up with AI companions and what needs the companions satisfy will take us a long way toward understanding this new potential threat to the well-being of teenagers, who are the future of our society.

 

Sources:  The AP article "Teens say they are turning to AI for friendship" appears on the AP website at https://apnews.com/article/ai-companion-generative-teens-mental-health-9ce59a2b250f3bd0187a717ffa2ad21f, and the Common Sense Media survey on which it was based is at https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf.