Monday, May 29, 2023

Will AI End Civilization?


Notice I didn't say, "Will AI (artificial intelligence) End Civilization As We Know It?"  Because it will.  It already has.  Civilization as we knew it as recently as five years ago is considerably different from what we have now—better in some ways, worse in others—and a good part of those changes has been due to the widespread adoption of AI.  But the speakers in a Mar. 9, 2023 talk posted to YouTube by the Center for Humane Technology raise a more fundamental question:  what are the chances that humans as a species will "go extinct" because we lose control of AI?


Based on internal evidence, these speakers—Tristan Harris, a co-founder of the Center, and Aza Raskin—are worth listening to.  The Center is in the heart of Silicon Valley and seems to be very well connected with Big Tech insiders, as attested by the fact that they were introduced before their talk by Steve Wozniak, co-founder of Apple.  And at numerous points during the talk, they emphasized that things are happening very fast, so fast that they have to revise the content of their talk almost weekly to keep up with the explosion of AI progress.


I'd like to concentrate here on a list that Harris and Raskin showed when they examined the potential downside of the way AI is currently being deployed as a competitive edge by corporations such as Microsoft and Google.  This list appeared after they cited a chilling statistic from a survey in which 738 leading AI experts were asked, "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?"  About half of the researchers think there is a greater than 10% chance of this disaster happening.  To put this result in perspective, Harris and Raskin then showed the burned-out hulk of a crashed airliner and asked, in effect, "If 50% of the engineers who designed an airplane thought there was a 10% chance of it crashing, would you get on it?"  Yet we are all being taken down the AI road at breakneck speed by the corporations that see it as a business necessity.


OpenAI, a formerly non-profit AI behemoth that was recently turned into a profit-making enterprise, has famously offered its ChatGPT software to the public.  Simply turning powerful AI systems loose on the populace can lead to a number of dire consequences, many of which Raskin and Harris listed and showed examples of.  I'll focus on just a few of these that I think are most likely in the near term.


"Trust collapse"—One of the leading features of modern economies is the mutual extension of trust.  If you fundamentally do not trust the person you're dealing with, you will spend most of your time and effort trying to avoid getting cheated, and won't have much energy left to do actual productive business.  In some countries, people with even moderate wealth by U. S. standards feel compelled to erect high masonry walls topped with broken glass around their dwellings, simply because if they don't, they will be robbed as a matter of course.  If messages or communications get so easy to fake that bad actors mimic your most close and trusted colleagues, it's hard to see how we could trust anybody anymore unless they are in the room with us. 


"Exponential scams [and] blackmail"—The AI experts seem to be most concerned that eventually, AI will develop a kind of super-con-artist ability that will fool even the cleverest and most sophisticated human being into doing stupid and harmful things.  In an interview on Fox News recently, Elon Musk worried that super-intelligent AI would be so persuasive that it could get us to do the civilizational equivalent of walking off a cliff.  It's hard to imagine a scenario that would make that credible, but I will have more to say about that below.


"Automated exploitation of code"—Computerized hacking, in other words.  Harris and Raskin showed an example of just such an activity they had carried out with ChatGPT after they told it in essence, "Hack this code." 

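To give a concrete sense of how little effort such a request now takes, here is a minimal sketch of my own—an assumption for illustration only, not the actual demo Harris and Raskin ran—using the OpenAI Python client to hand a deliberately flawed snippet to a chat model and ask it to point out the weaknesses.  The model name and the toy snippet are hypothetical choices on my part.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A deliberately vulnerable toy function (hypothetical, for illustration only):
# user input is concatenated straight into a SQL string, a classic injection flaw.
vulnerable_code = """
def get_user(db, username):
    return db.execute(
        "SELECT * FROM users WHERE name = '" + username + "'"
    ).fetchall()
"""

# Ask the model to audit the snippet.  A bad actor could just as easily ask it
# to write a working exploit, which is the kind of danger the talk described.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any capable chat model would do
    messages=[
        {"role": "user",
         "content": "Review this code and list any security flaws:\n" + vulnerable_code}
    ],
)

print(response.choices[0].message.content)

The point is not the particular library or model; it is that a task which once required real expertise now takes a dozen lines and a question phrased in plain English.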

"Automated fake religions" and "Synthetic relationships"—I was a little surprised to see religion mentioned, but I put these two consequences together because religion involves the worship of something, or someone, and a synthetic relationship means a human would begin to treat a synthesized AI "person" as real.  Already there have been experiments in which disabled individuals (dementia patients, etc.) have gotten to know AI robots as "caregivers," and it is far from clear whether the patients understood that their new companion was only a pile of wires.  From a utilitarian point of view, there seems to be nothing wrong with this—after all, if we don't have enough real caregivers, why not make robots do the job?  But this approach puts superficial happiness above truth and reality, which is always a mistake.


For most of these dire things to happen, some human beings with either evil intent or a short-sighted eagerness to profit ahead of the competition have to implement AI in a way that corrodes the social contract and pits ordinary human beings against a giant automated system that makes them putty in the hands of the robot.  As C. S. Lewis said long ago in The Abolition of Man, humanity's power over Nature, which has enabled it to produce the amazing advances in AI we see today, is really just the power of some small group of people (call them the controllers) over the rest of humanity—the controlled.  The tendency of all too many AI forecasters—including Harris and Raskin—is to treat AI as a wholly autonomous entity beyond the ability of anyone to control.


While that is not a logical impossibility, the far more likely case is one in which bad actors take control of super-AI and use it for malevolent purposes—purposes which may not seem malevolent at the time, but which turn out to be that way long after we have become dependent on the systems that embody them.  This is a real and present danger, and I hope the scary scenarios portrayed by Harris and Raskin in their talk motivate the major players to stop driving us toward the AI cliff before it's too late—whenever that might be.


Sources:  A May 17, 2023 blog post by Robin Phillips on the Salvo website at https://salvomag.com/post/sam-altmans-greatest-fear had a link to the Center for Humane Technology talk by Tristan Harris and Aza Raskin at https://www.youtube.com/watch?v=xoVJKj8lcNQ, and that is how I found out about the talk.  It's over an hour long, but anyone concerned about the current dangers of AI should watch it.
