In an online commentary on The New Yorker website, writer Joshua Rothman tackles the question of whether artificial intelligence (AI) is a bubble. On this first week of the new year, that seems like an appropriate question to ask. It's pretty clear that AI is not going away. Too many organizations have embedded it in their production processes for that to happen. But Rothman raises two related questions that only time will answer for sure.
The first question is whether the money spent on AI is going to be worth it. "Worth it" can mean a variety of things. The most obvious (and, to some, most frightening) application of AI is the direct replacement of workers: think of a roomful of draftsmen replaced by three engineers at computer workstations. Accountants can most easily justify this way of leveraging AI by showing their managers how much the firm is saving in salaries, offset by whatever the AI system cost. And assuming the tasks, whatever they were, are being done just as well by AI as they were by people before, the difference is the net savings AI can effect.
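The accountant's back-of-the-envelope calculation implied here can be sketched in a few lines. All of the figures below are invented for illustration; nothing in the article supplies actual numbers:

```python
# Hypothetical worker-replacement accounting. Every figure here is
# made up for illustration, not drawn from the article.
salaries_saved = 12 * 85_000   # a roomful of twelve draftsmen
salaries_added = 3 * 120_000   # three engineers at workstations
ai_system_cost = 400_000       # annual cost of the AI tooling

# Net savings: what was saved, minus what the replacement costs.
net_savings = salaries_saved - salaries_added - ai_system_cost
print(net_savings)  # 260000
```

The tidiness of this arithmetic is exactly why, as the next paragraphs argue, it misleads: the augmentation-style benefits of AI never appear as a line item at all.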
But as Rothman points out, that approach is overly simplistic, and it doesn't reflect how AI is typically being used most effectively. The most powerful mode of use he has found in his own life is AI as a mind-augmenting tool. He gives the example of helping his seven-year-old son write better code. (I will overlook the implications of a future world full of people who were coding when they were seven.) ChatGPT helped Rothman find several applications that his son was able to master and enjoyed as well.
And in general, the most fruitful way AI is used seems to be as a quasi-intelligent assistant to a human being, not a wholesale replacement. The problem for businesses is that this sort of employee augmentation is much harder to account for.
He points out that if an employee uses AI to become better educated and more capable, that fact does not show up on the firm's balance sheet. Yet it is a form of capital, capital being broadly defined as anything that enables a firm to be productive. Rothman cites economist Theodore Schultz as the originator of the term "human capital," which captures the idea that an employee has value for his or her abilities, and that those abilities can depreciate or be improved just as physical capital such as factory buildings or machinery can.
In a book I read recently called Redeeming Economics, John D. Mueller points out that modern economic theory simply cannot account for human capital in a logically consistent way. This constitutes a basic flaw that is still in the process of being remedied. The usual metrics of economics such as GNP (gross national product) treat investments in human capital such as education and training as consumption, the same as if you took your college tuition and blew it on a vacation to Aruba.
So it's no surprise that businesses are unsure about how to justify spending billions on AI if they can't point to their balance sheets and say, "Here's how we made more money by buying all those AI resources."
Something similar happened with CAD software. When companies discovered how much more effective their designers were when they began using computer-aided design programs such as AutoCAD, and their competitors began underbidding them as a result, they had to get with the program and spend what it took to keep up.
It's not clear that the results of widespread use of AI will be quite as obvious as that. Some bubbles are just that: illusory things that pop and leave no significant remnants. Rothman cites a rather cynical writer named Cory Doctorow who believes the AI bubble will pop soon, leaving scrap data servers and unhappy accountants all over the world.
But other bubbles turn out to be merely the youthful exuberance of an industry that was just getting established. A good example of that kind of bubble was the automotive industry in the years 1910 to 1925. There were literally dozens of automakers that popped up like mushrooms after a rain. Most of them failed in a few years, but that didn't take us back to riding horses.
Both Rothman and I suspect that the AI boom, or bubble, will be more like what happened with automobiles and CAD software. The feverish pace of expansion will slow down, because anything that can't go on forever has to stop sometime. But the long-term future of AI depends on the answer to Rothman's second question: how good will AI get?
It's clearly not equal in any general sense to human intelligence today. As Rothman puts it, AI assistants are "disembodied, forgetful, unnatural, and sometimes glaringly stupid." These characteristics may simply be defects that further research will iron out, in ways that aren't yet obvious.
While I'm not in the market for a job right now, I nevertheless receive lists of possible jobs from my LinkedIn subscription. A surprising number of them lately have been what I'd call "AI checking" jobs: companies seeking a subject-matter expert to make queries of AI systems and critique the results. Clearly, the purpose of that is to fix problems that show up so the mistakes aren't made the next time.
It's entirely possible that some negative world event will trigger an AI panic and a rush to the exits. But even if short-term spending on AI does crash, we have still come a long way in the last five years, and that progress isn't going to go away. As Rothman says, AI is a weird field to try to make forecasts for, because it involves human-like capabilities that are not well defined, let alone well understood. My guess is that things will slow down, but it's unlikely that humanity will abandon AI altogether, unless some terrifying doomsday-sci-fi tragedy involving it scares us away. And that hasn't happened so far.
Sources: Joshua Rothman's article "Is A. I. Actually a Bubble?" appeared on Dec. 12 on The New Yorker website at https://www.newyorker.com/culture/open-questions/is-ai-actually-a-bubble?. I also referred to John D. Mueller's Redeeming Economics (ISI Books, 2010), pp. 84-86.