Monday, January 12, 2026

The Best Investment for Engineering—and Everything Else

 

Engineering is inseparable from economics.  Doing engineering without regard for costs or return on investment (ROI) is doomed to failure.  So here's a question to ponder:  what investment can produce a return over time of $60.04 for every dollar invested?  Is it something like Tesla, or SpaceX, or a particular stock? 

 

Not according to Ray Perryman, a consulting economist who runs the Perryman Group and writes a newspaper column.  The investment he has identified that returns sixty-for-one is surprising in its nature and simplicity. 

 

It's early childhood education, conventionally defined as any formal education that takes place from birth up to the third grade in elementary school. 

 

The returns break down roughly as follows.  Every dollar invested in early childhood education returns about $15 in future earnings of the child educated.  It returns almost as much ($14) in parental earnings, presumably because the parents benefit from having a better-educated child.  Improved earnings from future generations amount to $6.  The largest single return is reduced social costs from decreased need for crimefighting and social services:  $21.  And because the child will be healthier and live longer, those effects are quantified at about $2.50.
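
 

The rounded components above total a bit less than the $60.04 headline figure (the study's own components are presumably more precise).  As a minimal arithmetic check, here is a short Python sketch that simply adds up the approximate per-dollar returns quoted in this post:

# Approximate per-dollar returns, using the rounded figures quoted above.
# Perryman's study works from more precise components behind the $60.04 total.
returns = {
    "child's future earnings": 15.00,
    "parental earnings": 14.00,
    "earnings of future generations": 6.00,
    "reduced crime and social-service costs": 21.00,
    "health and longevity effects": 2.50,
}

total = sum(returns.values())
print(f"Rounded components: ${total:.2f} per dollar invested")
print("Headline figure from the study: $60.04 per dollar invested")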

 

Economists are not used to treating people as anything other than utility-maximizing consumers.  But as both popular writer John C. Médaille and academic John D. Mueller have pointed out, modern economics subscribes to what Mueller calls the "stork theory" of human development.  In the economists' mathematical models, people simply appear fully grown and ready to consume.  The effort of parents in raising children, all educational work, and the process of procreation itself are simply regarded as consumption, the same as if the money were all blown on trips to Aruba.  Critically, modern economics ignores everything in the nature of gift:  the giving of parents to children, the giving of charity to the needy, and so on. 

 

Fortunately, some economists are beginning to remedy these glaring omissions in the way economics models the human world.  And Perryman's analysis of the profound effects of early childhood education is a good step in this direction. 

 

Engineering education is as blind as modern economics in this regard.  We set up engineering schools and just assume that somehow, qualified students will show up and be ready to learn how to be engineers.  And so far, it's worked.  But declining birthrates worldwide and massive educational disruptions such as those caused by COVID may lead to a severe shortage of qualified students, one that is already causing problems for some institutions of higher education. 

 

In his book Redeeming Economics, Mueller shows how these blind spots in modern economics have caused profound distortions in the way economic predictions and advice turn out.  Perryman's calculation of the stupendous ROI of early childhood education shows that, systemically speaking, we should be pouring more money into Head-Start-like programs, preschools, and grades 1 through 3 than we spend on highways, the Internet, or AI.  But especially in the U. S., education is an unglamorous and often politically contentious field that enjoys little prestige and no united opinion as to what should be done.  This is one aspect of the larger anti-child tone of current culture, but that's a discussion for another day.

 

While money can help improve early childhood education, what helps most of all is interested and engaged parents.  And here we go way beyond economics, at least the modern kind.  I was once invited to sit in on a focus-group panel convened by some STEM (science-technology-engineering-math) educators who had recently received a National Science Foundation grant to improve engineering education.  The details of what they were trying to do have faded from my memory.  But at one point, the panel leaders asked us members what we thought were important factors in making students ready for engineering education.

 

I said that a stable family environment with two biological parents of different sexes was one of the most important factors in fostering children who could grow up to be engineers.  What I won't forget are the expressions of strained indifference that met my statement.  Family structure and stability were clearly beyond NSF's scope.  And this is not to say that children from broken homes or abusive parents can't make it as engineers.  They can, but it's harder. 

 

Back in the Early Pleistocene when I was a child, my mother, an intelligent woman educated as a teacher who went on to get her master's degree, looked upon me as a kind of project, and challenged both me and herself to see how much I could learn about the alphabet and reading before I entered first grade.  I was too young to appreciate it at the time, but she blessed me (an act of giving) with what Perryman would value at many dollars' worth of early childhood education.  And that blessing enabled me to go on to have a moderately successful career, first in industry and then in academia. 

She put aside her teaching career to raise her children until her youngest was in school, and then went back to teaching.  But the dollars she deferred while she was raising us paid off handsomely, as Perryman's analysis indicates.

 

If it ever happens at all, it will probably take a generation before economists as a group learn how to quantify giving as well as consuming.  But if they do, we will get a vastly different picture of the economy than the one we see now, which is focused on stuff and organizations and neglects the vast amount of giving among people that is fundamentally vital to any economy that isn't going to die out in one generation.  My metaphorical hat is off to Ray Perryman and his colleagues, who have taken one small step in a much-needed direction:  acknowledging that investment in people is not only morally right, but pays much better than almost any other kind. 

 

Sources:  Ray Perryman's column "The Economist:  Essential Early Education" appeared in the Jan. 2, 2026 edition of the San Marcos Daily Record.  The study on which the column was based can be found at https://www.perrymangroup.com/media/uploads/brief/perryman-essential-early-education-12-09-25.pdf.  For more on the blind spots of modern economics, see John C. Médaille's Toward a Truly Free Market (ISI Books, 2010) and John D. Mueller's Redeeming Economics (ISI Books, 2010). 


Monday, January 05, 2026

What Will Happen to the AI Bubble?

 

In an online commentary on The New Yorker website, writer Joshua Rothman tackles the question of the artificial-intelligence (AI) bubble.  On this first week of the new year, that seems like an appropriate question to ask.  It's pretty clear that AI is not going away.  Too many systems have embedded it in their productive processes for that to happen.  But Rothman raises two related questions that only time will answer for sure.

 

The first question is whether the money spent on AI is going to be worth it.  "Worth it" can mean a variety of things.  The most obvious (and frightening, to some) application of AI is direct replacement of workers:  think a roomful of draftsmen replaced by three engineers at computer workstations.  Accountants can most easily justify this way of leveraging AI by showing their managers how much the firm is saving in salaries, offset by whatever the AI system cost.  And assuming the tasks, whatever they were, are being done just as well by AI as they were by people before, the difference is the net savings AI can effect.
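
 

As a purely illustrative sketch of that accountant's arithmetic (the figures below are hypothetical, from neither Rothman nor any study), the net savings is just the salaries no longer paid minus whatever the AI system costs:

# Hypothetical numbers for the simple accounting view described above;
# nothing here comes from Rothman's article or any real firm.
salaries_saved_per_year = 240_000.00   # pay for tasks now handled by AI
ai_system_cost_per_year = 90_000.00    # licenses, compute, and integration

net_savings = salaries_saved_per_year - ai_system_cost_per_year
print(f"Net annual savings under this simple view: ${net_savings:,.2f}")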

 

But as Rothman points out, that approach is overly simplistic and doesn't reflect how AI is typically being used most effectively.  The most powerful mode he has found in his own life is using AI as a mind-augmenting tool.  He gives the example of helping his seven-year-old son write better code.  (I will overlook the implications of what the future will be like with a world full of people who were coding when they were seven.)  ChatGPT helped Rothman find several applications that his son was able both to master and to enjoy. 

 

And in general, the most fruitful way AI is used seems to be as a quasi-intelligent assistant to a human being, not a wholesale replacement.  The problem for businesses is that this sort of employee augmentation is much harder to account for.

 

He points out that if an employee uses AI to become better educated and more capable, that fact does not show up on the firm's balance sheet.  Yet it is a form of capital, capital being broadly defined as anything that enables a firm to be productive.  Rothman cites economist Theodore Schultz as the originator of the term "human capital," which captures the concept that an employee has value for his or her abilities, which can depreciate or be improved just as physical capital such as factory buildings or machinery can be. 

 

In a book I read recently called Redeeming Economics, John D. Mueller points out that modern economic theory simply cannot account for human capital in a logically consistent way.  This constitutes a basic flaw that is still in the process of being remedied.  The usual metrics of economics such as GNP (gross national product) treat investments in human capital such as education and training as consumption, the same as if you took your college tuition and blew it on a vacation to Aruba. 

 

So it's no surprise that businesses are unsure about how to justify spending billions on AI if they can't point to their balance sheets and say, "Here's how we made more money by buying all those AI resources." 

 

Something similar happened with CAD software.  Once companies discovered how much more effective designers became with computer-aided design programs such as AutoCAD, competitors who had adopted them began underbidding the firms that hadn't, and the holdouts had to get with the program and spend what it took to keep up. 

 

It's not clear that the results of widespread use of AI will be quite as obvious as that.  Some bubbles are just that:  illusory things that pop and leave no significant remnants.  Rothman cites a rather cynical writer named Cory Doctorow who believes the AI bubble will pop soon, leaving scrap data servers and unhappy accountants all over the world. 

 

But other bubbles turn out to be merely the youthful exuberance of an industry that was just getting established.  A good example of that kind of bubble was the automotive industry in the years 1910 to 1925.  There were literally dozens of automakers that popped up like mushrooms after a rain.  Most of them failed in a few years, but that didn't take us back to riding horses. 

 

Both Rothman and I suspect that the AI boom, or bubble, will be more like what happened with automobiles and CAD software.  The feverish pace of expansion will slow down, because anything that can't go on forever has to stop sometime.  But the long-term future of AI depends on the answer to Rothman's second question:  how good will AI get?

 

It's clearly not equal in any general sense to human intelligence today.  As Rothman puts it, AI assistants are "disembodied, forgetful, unnatural, and sometimes glaringly stupid."  These characteristics may simply be the defects that further research will iron out in ways that aren't obvious.

 

While I'm not in the market for a job right now, I nevertheless receive lists of possible jobs from my LinkedIn subscription.  A surprising number of them lately have been what I'd call "AI checking" jobs:  companies seeking a subject-matter expert to make queries of AI systems and critique the results.  Clearly, the purpose of that is to fix problems that show up so the mistakes aren't made the next time.

 

It's entirely possible that some negative world event will trigger an AI panic and a rush to the exits.  But even if short-term spending on AI does crash, we have still come a long way in the last five years, and that progress isn't going to go away.  As Rothman says, AI is a weird field to make forecasts for, because it involves human-like capabilities that are not well defined, let alone well understood.  My guess is that things will slow down, but it's unlikely that humanity will abandon AI altogether, unless some terrifying doomsday-sci-fi tragedy involving it scares us away.  And that hasn't happened so far.

 

Sources:  Joshua Rothman's article "Is A. I. Actually a Bubble?" appeared on Dec. 12 on The New Yorker website at https://www.newyorker.com/culture/open-questions/is-ai-actually-a-bubble?.  I also referred to John D. Mueller's Redeeming Economics (ISI Books, 2010), pp. 84-86.