Monday, April 06, 2026

US Fears AI, Uses It Anyway

  

A recent report in National Review summarized opinion polls about what U.S. residents think of artificial intelligence (AI) and how much they are using it.  Paradoxically, the more people use AI, the more they fear it.

 

A poll by NBC News showed that 46% of those queried had a negative opinion of AI versus only 26% positive.  Other polls show that citizens expect mostly or entirely negative effects on society from the widespread use of AI, and believe it will lead to serious job losses.  Over half the Americans polled by the Democratic research firm Blue Rose feared that AI will cost them or a relative a job.

 

At the same time, polls asking about AI use show that most people queried have used an AI tool in the past month, and a fourth say they use it every day.  So the old saying "familiarity breeds contempt" may be a guiding principle in how AI is viewed by the general public.

 

In a way, none of this matters.  If a new technology gets widely used and the companies providing it make money, who cares what people think about it?  Another technology that spread rapidly in only a few years, and also had profound effects on society, was television.  In 1950, only 9% of households had a TV, but by 1955 over half did.  And while there may have been a few voices raised in opposition to its growth, I think it's fair to say that the only groups that looked on the spread of TV with disfavor were industries threatened by it:  Hollywood, for instance.  And Hollywood long ago made peace with the advent of television.  Your average person in the early 1950s was just waiting to see when TV sets got affordable enough to buy, and any negative consequences of TV use were not noted much in public before the 1960s.

 

One difference between the advent of TV and the advent of AI is that TV didn't threaten jobs the way AI does.  And one job sector that is already seeing big effects from AI is computer science and computer programming.  The thing about public perception, whether or not it's accurate, is that it can easily become reality.  I work at a university, and I have heard in the last week that enrollment in computer-science programs is dropping across the board, after years of steady growth.  The reasons for this are not entirely clear, but one factor may well be that students fear spending four or more years getting a degree and then finding that all the entry-level work is now being done by a few senior people writing AI prompts.

 

On the other hand, one of the most enthusiastic proponents of AI I know is an 80-plus-year-old professor of biology who has been using ChatGPT in his research for the last year or two.  He says it helps him write papers more clearly and organize his thoughts, and claims it's the greatest thing that's happened to him research-wise in a long time.

 

Many of the polls mentioned were commissioned by political interests with a view toward forming policies about AI.  Currently, the Trump administration favors few if any regulations on the technology, and wants to keep states from enacting a patchwork of legislation that would encumber the field.  Historically, this approach has worked well for computer- and network-intensive industries themselves, allowing them to create vast new economies and profit mightily therefrom.  But it has also led to a number of real and lasting problems, ranging from the maleficent effects of social media on politics to the quantified and well-known harms to children and teenagers whose lives are distorted by smartphone use.

 

The crystal ball of predicting how technologies will affect society is always more or less cloudy, and I will not venture to say what the future effects of AI's negative polling will be.  Even if AI were universally detested, it's not clear that Washington could get its act together enough to pass meaningful regulatory legislation, especially when Big Tech and the federal government sometimes seem to blur into each other.  On the state level, if the feds don't stop them, some states may pass laws attempting to regulate AI, but it's a little bit like trying to nail Jell-O to the wall.  When the thing you are trying to regulate is so protean and shape-changing, it's hard to decide what regulations to pass, let alone to figure out if they've been violated.

 

Some of the anxiety the public feels about AI is simply due to the breathtaking speed with which it has advanced and improved.  Arthur C. Clarke's principle that any sufficiently advanced technology is indistinguishable from magic applies here.  Real magic is scary if it happens, and I still feel a kind of queasiness when I type commands into a chat box and the program comes back with "I did this and that."  It's understandable that millions of teenagers use AI chatbots as a substitute friend, and it's also very creepy.

 

At one extreme, a few people believe AI will bring about the end of civilization as we know it.  At the other, hyper-tech-optimists such as Ray Kurzweil look forward to being uploaded to an eternal cloud and think it will be heaven on earth.  The truth probably lies somewhere in between.  What we can do as individuals is to keep reminding ourselves that AI systems are not human beings, and that human beings are not machines.  But both of those truths may become harder to keep in mind as time goes on.

 

Sources:  The National Review website carried James Lynch's article "The More Americans Use AI, the More They Fear It" on Mar. 25, 2026 at https://www.nationalreview.com/news/the-more-americans-use-ai-the-more-they-fear-it/. 
