Monday, June 23, 2025

Should Chatbots Replace Government-Worker Phone Banks?

 

The recent slashes in federal-government staffing and funding have drawn the attention of the Distributed AI Research Institute (DAIR), and two of the Institute's members warn of impending disaster if the Department of Government Efficiency (DOGE) carries through its stated intention of replacing hordes of government workers with AI chatbots.  In the July/August issue of Scientific American, DAIR founder Timnit Gebru, joined by staffer Asmelash Teka Hadgu, decry the current practice of applying general-purpose large-language-model AI to the specific task of speech recognition, a task that would have to be automated if the human-staffed phone banks answering the Social Security and IRS lines were replaced with machines. 

 

The DAIR authors give vivid examples of the kinds of things that can go wrong.  They focus on Whisper, OpenAI's speech-recognition system, and on studies by researchers at four universities of how well Whisper converted audio recordings of a person talking into transcribed text.

 

The process of machine transcription has come a long way since the very early days of computers in the 1970s, when I heard John R. Pierce, Bell Labs' former head of research, say that he doubted speech recognition would ever be computerized.  But anyone who phones a large organization today is likely to deal with some form of automated speech recognition, as is anyone who has Siri or another voice-controlled device in the home.  Just last week I was on vacation, and the TV in my room had to be controlled with voice commands.  Simple operations such as asking for music or a TV channel are performed fairly well by these systems, but that's not what the DAIR people are worried about.

 

With more complex language, Whisper was shown not only to misunderstand things but also to make up material that was not in the original audio at all.  For example, the phrase "two other girls and one lady" in the audio became, after Whisper transcribed it, "two other girls and one lady, um, which were Black." 
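The kind of comparison those university studies performed can be illustrated with a short sketch.  This is not the researchers' actual code; it is a minimal, hypothetical example using Python's standard-library difflib to flag words a transcript contains that the reference text of the audio does not — the signature of a hallucinated insertion.

```python
from difflib import SequenceMatcher

def inserted_words(reference: str, transcript: str) -> list[str]:
    """Return words in the transcript that have no counterpart
    in the reference text of what was actually said."""
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    extra = []
    # get_opcodes() labels each aligned region; 'insert' regions
    # are words present only in the transcript.
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, ref, hyp).get_opcodes():
        if tag == "insert":
            extra.extend(hyp[j1:j2])
    return extra

# The example quoted in the Scientific American article:
reference = "two other girls and one lady"
transcript = "two other girls and one lady um which were black"
print(inserted_words(reference, transcript))  # → ['um', 'which', 'were', 'black']
```

A real evaluation would also score substitutions and deletions (word error rate), but even this simple check makes the fabricated phrase stand out.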

 

This is an example of what AI proponents charitably call "hallucinating."  If a human being did something like this, we'd just call it lying, but lying requires a will and an intellect that chooses a lie over the truth.  Few AI experts want to attribute will and intellect to AI systems, so they default to calling untruths hallucinations.

 

This problem arises, the authors claim, when companies try to develop AI systems that can do everything and train them on huge unedited swaths of the Internet, rather than tailoring the design and training to a specific task, which of course costs more in human input and guidance.  They paint a picture of a dystopian future in which somebody who calls Social Security can never talk to a human being, but is instead shunted among chatbots that misinterpret, misinform, and simply lie about what the caller said.

 

Both government-staffed interfaces with the public and speech-recognition systems vary greatly in quality.  Most people have encountered at least one or two government workers who are memorable for their surliness and aggressively unhelpful demeanor.  But there are also many such people who go out of their way to pay personal attention to the needs of their clients, and these are the kinds of employees we would miss if they got replaced by chatbots.

 

Elon Musk's brief tenure as head of DOGE is profiled in the June 23 issue of The New Yorker, and the picture that emerges is that of a techie dude roaming around organizations that he and his tech bros didn't understand, causing havoc and throwing monkey wrenches into finely adjusted clock mechanisms.  The only likely outcome in such cases is that the clock stops working.  Improvements are not in the picture, and in many cases neither are cost savings.  As an IRS staffer pointed out, many IRS employees bring in many times their salary's worth of added tax revenue by catching tax evaders.  Firing those people may look like an immediate short-term economy, but in the long term it will cost billions.

 

Now that Musk has left DOGE, the threat of massive-scale replacement of federal customer-service people by chatbots is less than it was.  But we would be remiss in ignoring DAIR's warning that AI systems can be misused or abused by large organizations in a mistaken attempt to save money.

 

In the private sector, there are limits to the harm that can be done.  If a business depends on answering phone calls accurately and helpfully, and it installs a chatbot that offends every caller, that business will soon lose its customers and go out of business.  But in the U.S. there is only one Social Security Administration and one Internal Revenue Service, and competition isn't part of that picture. 

 

The Trump administration does seem to want to do some revolutionary things to the way government operates.  But at some level, its members are also aware that if they do anything that adversely affects millions of citizens, they will be blamed for it. 

 

So I'm not too concerned that all the local Social Security offices scattered around the country will be shuttered, leaving callers no alternative but a chatbot that hallucinates, concludes the caller is dead, and cuts off his Social Security check.  Along with almost every other politician in the country, Trump recognizes that Social Security is a third rail he touches at his peril. 

 

But that still leaves plenty of room for future abuse of AI in trying to make it do things that people still do better, and perhaps even more economically, than computers.  While the immediate threat may have passed with Musk's departure from DOGE, the tendency is still there.  Let's hope that sensible mid-level managers prevail against the lightning strikes of DOGE and its ilk, and that the needed work of government goes on.

 

Sources:  The article "A Chatbot Dystopian Nightmare" by Asmelash Teka Hadgu and Timnit Gebru appeared in the July/August 2025 Scientific American on pp. 89-90.  I also referred to the article "Move Fast and Break Things" by Benjamin Wallace-Wells on pp. 24-35 of the June 23, 2025 issue of The New Yorker.
