In 2024, several hundred artificial-intelligence (AI) researchers signed a statement calling for serious actions to avert the possibility that AI could break bad and kill the human race. In an interview last February, Elon Musk mused that there is "only" a 20% chance of annihilation from AI. With so many prominent people speculating that AI may spell the end of humanity, Michael J. D. Vermeer of the RAND Corporation began a project to explore just how AI could wipe out all humans. It's not as easy as you think.
RAND is one of the original think tanks, founded in 1948 to develop U.S. military policy, and it has since studied a wide range of issues in quantitative ways. As Vermeer writes in the September Scientific American, he and his fellow researchers considered three main approaches to the extinction problem: (1) nuclear weapons, (2) pandemics, and (3) deliberately induced global warming.
It turns out that nuclear weapons, although capable of killing billions if set off in densely populated areas, would not do the job. Small remnants of humanity would survive scattered in remote places, and they would probably be enough to reconstitute human life indefinitely.
The most likely scenario that would work is a combination of pathogens that together would kill nearly every human who caught them. The problem here ("problem" from the AI's point of view) is that once people figured out what was going on, they would impose quarantines, much as New Zealand did during COVID, and entire island nations or other isolated regions could survive until the pandemic burned itself out.
Artificially induced global warming turned out to be the hardest way to do it. There are compounds such as sulfur hexafluoride, which has about 25,000 times the global-warming potential of carbon dioxide. And if you made a few million tons of it and spread it around, it could raise the global average temperature so much that "there would be no environmental niche left for humanity." But factories pumping megatons of bad stuff into the atmosphere would be hard to hide from people, who naturally would want to know what's going on.
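To get a feel for the scale involved, here is a back-of-the-envelope calculation using the article's 25,000-to-1 figure. The five-million-ton quantity is my own round number for illustration, not a figure from the RAND study:

\[
5 \times 10^{6}\ \text{tons SF}_6 \times 25{,}000 \approx 1.25 \times 10^{11}\ \text{tons CO}_2\text{-equivalent}.
\]

That works out to roughly 125 billion tons of CO2-equivalent, on the order of three times the world's current annual CO2 emissions, delivered all at once by a gas that persists in the atmosphere for thousands of years.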
So while an AI apocalypse is theoretically possible, all the scenarios the researchers considered had common flaws. For any of them to happen, the AI system would first have to make up its mind, so to speak, to persist in the goal of wiping out humanity until the job was actually done. Then it would have to wrest control of the relevant technologies (nuclear or biological weapons, chemical plants) and conduct extensive projects with them to execute the goal. It would also have to obtain the cooperation of humans, or at least their unwitting participation. And finally, as civilization collapsed, the AI system would have to carry on without human help, since the few remaining humans would be useless for its purposes and simply targets for extinction.
While this is an admirable and objectively scientific study, I think it overlooks a few things.
First, it draws an arbitrary line between the AI system (which in practice would be a conglomeration of systems) and human beings. Both now and for the foreseeable future, humans will be an essential part of AI, because it needs us. Let's imagine the opposite scenario: how would humans wipe out all AI from the planet? If every IT person in the world just didn't show up for work tomorrow, what would happen? A lot of bad things, certainly, because computers (not just AI, but increasingly systems involving AI) are intimately woven into modern economies. Before long, I think, failures (caused by stupid non-IT humans, probably) would start showing up, and in short order we would have a global computer crash the likes of which have never been seen. True, millions of people would die along with the AI systems. But I'm not aware of any truly autonomous AI system of any complexity and importance that has no humans dealing with it in any way, as apparently was the case in the 1970 sci-fi film "Colossus: The Forbin Project."
So if an AI-powered system showed signs of getting out of hand—taking over control of nuclear weapons, doing back-room pathogen experiments on its own, etc.—we could kill it by just walking away from it, at least the way things are now.
More likely than any of the hypothetical disasters imagined by the RAND folks is a possibility they didn't seem to consider. What if AI just gradually supplants humans until the last human dies? This is essentially the stated goal of many transhumanists, who foresee the uploading of human consciousness into computer hardware as their equivalent of eternal life. They don't realize that their idea is equivalent to thinking that making an animated effigy of oneself will guarantee one's survival after death, much as the ancient Egyptians prepared their pharaohs for the afterlife.
But pernicious ideas like this can gain traction, and we are already seeing an unexpected downturn in fertility worldwide as civilizations benefit from technology-powered prosperity. If AI, and its auxiliary technological forms, ever puts an end to humanity, I think the slow, gradual replacement of humans by AI-powered systems is more likely than any sudden, concentrated catastrophe like the ones the RAND people considered. And the creepy thing about this one is that it's happening already, right now, every day.
Romano Guardini was a theologian and philosopher who in 1956 wrote The End of the Modern World, in which he foresaw in broad terms what was going to happen to modernity as the last vestiges of Christian influence were replaced by a focus on the achievement of power for power's sake alone. Here are a few quotes from near the end of the book: "The family is losing its significance as an integrating, order-preserving factor . . . . The modern state . . . is losing its organic structure, becoming more and more a complex of all-controlling functions. In it the human being steps back, the apparatus forward." As Guardini saw it, the only power rightly controlled is exercised under God. And once God is abolished and man sets up technology as an idol, looking to it for salvation, the spiritual death of humanity is assured, and physical death may not be far behind.
I'm glad we don't have to worry about the kind of AI apocalypse that would make a good, fast-paced dramatic movie; the RAND people assure us it won't happen. But there are other dangers from AI, and the slow, insidious attack is the one to guard against most vigilantly.
Sources: Michael J. D. Vermeer's "Could AI Really Kill Off Humans?" appeared on pp. 73-74 of the September 2025 issue of Scientific American, and is also available online at https://www.scientificamerican.com/article/could-ai-really-kill-off-humans/. I also referred to the Wikipedia article on sulfur hexafluoride. The Romano Guardini quotes are from pp. 161-162 of his The End of the Modern World, in an edition published by ISI Press in 1998.