Monday, January 05, 2026

What Will Happen to the AI Bubble?

 

In an online commentary on The New Yorker website, writer Joshua Rothman tackles the question of the artificial-intelligence (AI) bubble.  In this first week of the new year, that seems like an appropriate question to ask.  It's pretty clear that AI is not going away.  Too many systems have embedded it in their productive processes for that to happen.  But Rothman raises two related questions that only time will answer for sure.

 

The first question is whether the money spent on AI is going to be worth it.  "Worth it" can mean a variety of things.  The most obvious (and frightening, to some) application of AI is direct replacement of workers:  think a roomful of draftsmen replaced by three engineers at computer workstations.  Accountants can most easily justify this way of leveraging AI by showing their managers how much the firm is saving in salaries, offset by whatever the AI system cost.  And assuming the tasks, whatever they were, are being done just as well by AI as they were by people before, the difference is the net savings AI can effect.

 

But as Rothman points out, that approach is not only overly simplistic but also fails to reflect how AI is typically being used most effectively.  The most powerful use mode he has found in his own life is to use AI as a mind-augmenting tool.  He gives the example of helping his seven-year-old son write better code.  (I will overlook the implications of what the future will be like with a world full of people who were coding when they were seven.)  ChatGPT helped Rothman find several applications that his son was both able to master and enjoy. 

 

And in general, the most fruitful way AI is used seems to be as a quasi-intelligent assistant to a human being, not a wholesale replacement.  The problem for businesses is that this sort of employee augmentation is much harder to account for.

 

He points out that if an employee uses AI to become better educated and more capable, that fact does not show up on the firm's balance sheet.  Yet it is a form of capital, capital being broadly defined as anything that enables a firm to be productive.  Rothman cites economist Theodore Schultz as the originator of the term "human capital," which captures the concept that an employee has value for his or her abilities, which can depreciate or be improved just as physical capital such as factory buildings or machinery can be. 

 

In a book I read recently called Redeeming Economics, John D. Mueller points out that modern economic theory simply cannot account for human capital in a logically consistent way.  This constitutes a basic flaw that is still in the process of being remedied.  The usual metrics of economics such as GNP (gross national product) treat investments in human capital such as education and training as consumption, the same as if you took your college tuition and blew it on a vacation to Aruba. 

 

So it's no surprise that businesses are unsure about how to justify spending billions on AI if they can't point to their balance sheets and say, "Here's how we made more money by buying all those AI resources." 

 

Something similar happened with CAD software.  When companies discovered how much more effective their designers were when they began using computer-aided design programs such as AutoCAD, and their competitors began underbidding them as a result, they had to get with the program and spend what it took to keep up. 

 

It's not clear that the results of widespread use of AI will be quite as obvious as that.  Some bubbles are just that:  illusory things that pop and leave no significant remnants.  Rothman cites a rather cynical writer named Cory Doctorow who believes the AI bubble will pop soon, leaving scrap data servers and unhappy accountants all over the world. 

 

But other bubbles turn out to be merely the youthful exuberance of an industry that was just getting established.  A good example of that kind of bubble was the automotive industry in the years 1910 to 1925.  There were literally dozens of automakers that popped up like mushrooms after a rain.  Most of them failed in a few years, but that didn't take us back to riding horses. 

 

Both Rothman and I suspect that the AI boom, or bubble, will be more like what happened with automobiles and CAD software.  The feverish pace of expansion will slow down, because anything that can't go on forever has to stop sometime.  But the long-term future of AI depends on the answer to Rothman's second question:  how good will AI get?

 

It's clearly not equal in any general sense to human intelligence today.  As Rothman puts it, AI assistants are "disembodied, forgetful, unnatural, and sometimes glaringly stupid."  These characteristics may simply be the defects that further research will iron out in ways that aren't obvious.

 

While I'm not in the market for a job right now, I nevertheless receive lists of possible jobs from my LinkedIn subscription.  A surprising number of them lately have been what I'd call "AI checking" jobs:  companies seeking a subject-matter expert to make queries of AI systems and critique the results.  Clearly, the purpose of that is to fix problems that show up so the mistakes aren't made the next time.

 

It's entirely possible that some negative world event will trigger an AI panic and a rush to the exits.  But even if short-term spending on AI does crash, we have still come a long way in the last five years, and that progress isn't going to go away.  As Rothman says, AI is a weird field to try to make forecasts for, because it involves human-like capabilities that are not well defined, let alone well understood.  My guess is that things will slow down, but it's unlikely that humanity will abandon AI altogether, unless some terrifying doomsday-sci-fi tragedy involving it scares us away.  And that hasn't happened so far.

 

Sources:  Joshua Rothman's article "Is A. I. Actually a Bubble?" appeared on Dec. 12 on The New Yorker website at https://www.newyorker.com/culture/open-questions/is-ai-actually-a-bubble?.  I also referred to John D. Mueller's Redeeming Economics (ISI Books, 2010), pp. 84-86. 

 

Monday, December 29, 2025

AI For Kids: Be the Hero of Your Own Story

 

Charlie is six years old.  He is the grandson of a cousin of mine whom I met at supper over the Christmas holidays last week.  His grandmother showed me a book she bought Charlie for Christmas.  It was a short but well-made book with slick paper and professional-looking illustrations.  What amazed me about it was the way it was customized.  Not only did it have Charlie's name in bold black type on the cover, but Charlie himself, complete with thick glasses, was portrayed in recognizable artwork on the front cover, and showed up as the main character on most of the thirty or so pages. 

 

I asked the grandmother how much it cost.  "It wasn't cheap," she said, "but it was worth it."  For his part, Charlie seemed to be tickled with it too.

 

Back home, I looked up the outfit that makes these books.  It's something called CoziTales.  Because I have no research staff and spend only an hour a week on this blog, I cannot say whether CoziTales is associated in any way with the homonymous COZITV, an over-the-air TV network that shows reruns of old TV shows. 

 

Their website makes plain what they do.  If you supply them with some basic information about the target child (name, age, sex) and presumably some photos (I didn't have a photo of a kid available to try it out, but they need a good photo or two), they will insert a recognizable artistic likeness of said child into any one of a number of storybooks.  The available stories fall into the categories of Seasonal, Popular Adventures, Magical Realms, Nature and Exploration, and Imagination and Wonder.  I failed to see the category of Budding AI Developer among the themes, but maybe it's in there somewhere.  If you want your kid to have a job that hasn't been replaced by AI when he or she grows up, you might want to get them interested in AI now.

 

Presumably, the firm takes the images and generates (almost certainly by AI, although the existence of a horde of fantastically skilled and underpaid artists somewhere is remotely possible) the images for the chosen story, with your selected child appearing as the main character.  Then it's only a matter of approving it and paying for it.  According to (sorry) an AI search, prices start at $40, and you should allow four days for production.  That's all—four days.  Plus shipping time, of course.

 

I suppose back in 1958, when I was of an age to be in the market for such a thing, somebody like J. Paul Getty could have hired an artist, handed him some photos of a favored grandson, and commissioned a customized storybook of this kind.  But it would have cost many thousands of dollars for the artist's pay, the printing of a single copy, etc.  Now in 2025, it's a mass-market thing—anybody with a good picture of one's little darling can create a truly unique artifact at a price most people can afford.

 

It's another specific example of a process I've seen happen in countless ways over the course of my lifetime:  the transformation of an economically valued product or service from something only governments or rich people can afford, to something that nearly everyone can afford.  It happened with watching any movie you want privately (a privilege formerly reserved for the President of the United States, who could get any movie for the White House viewing room in 24 hours), with sending messages worldwide (something that only businessmen and news organizations could afford to do routinely until the 1970s), and with doing the fantastically complex calculations involved in artificial intelligence (something that only researchers could do as recently as twenty years ago, but is now devolving to your laptop or phone). 

 

If the product or service involved is benign, there's really no ethical issue of significance involved in the mass-consumer transformation of a formerly costly and one-off product or service.  Person-to-person telecommunication, for instance, whether by email, voice, or video, is almost entirely a benign technology.  It's a good thing that I can send a message to a friend in Ukraine in two minutes for basically no cost, rather than scribbling on flimsy blue paper, licking it and putting it in the mail, and hoping he'll get it two or three weeks later—or months, before air mail. 

 

And this business of putting a kid into a customized storybook looks pretty benign to me, at least at first glance.  CoziTales is aware of the concern that they are in a position to collect names, ages, faces, and Internet info on thousands of young children, and they address those concerns in their frequently-asked-questions (FAQ) page.  They have a simple statement answering the question, "Is my child's photo secure?" which goes, "Absolutely.  All photos are encrypted, used only to create your story, and automatically deleted after processing."  And I'm sure that's the company's intention too.  But determined hackers can invade even the best-intentioned firms.  It depends on the value of the data to the hacker and the quality of the IT defenses that the company maintains.

 

CoziTales seems to be pretty new, as there is not much auxiliary information I could find on the Internet, and many of the FAQs have a newbie air about them.  Neither international shipping nor foreign languages are yet available, for instance, but the firm hopes to add these capabilities soon. 

 

If I encounter a new AI-powered thing and I can think of a pre-AI analogy to it that turned out okay, I feel better about it, whether or not the analogy proves anything.  The first analogy I could think of for a CoziTales book is those painted boards you see outside tourist attractions, at which you position your face behind a hole in the board and have someone take your picture as Caveman and Wife Dragged Along Behind at a show cave, or Sexy Bathing-Suit-Clad Gal and Hunky Boyfriend at a beach.  I'm not aware that any child photographed in such a setting has suffered long-term emotional damage from it, and I doubt that Charlie's CoziTales book will bring anything but pleasure to him. 

 

So why talk about it in an engineering ethics column?  Well, I say so many negative things about AI in this space that it's only fair I put out something positive for a change.  It's too late to get your favorite nephew one for Christmas, but there's always birthdays.

 

Sources:  I referred to the CoziTales website https://cozitales.com/ and a Google AI query about how much a book costs.  And Charlie, of course.  His age is an estimate, as I didn't have time to interview him in detail. 

 

Monday, December 22, 2025

The Spy in Your Living Room

 

Did you know that your smart TV will in all likelihood (a) use audio and video recognition technology to determine exactly what programming you are watching every second, whether it's over-the-air TV, cable TV, streaming, or even something you're watching from a computer hooked up to your TV as a monitor, (b) send this data to servers run by the TV manufacturers, who then (c) market the data to whoever's interested in it, and can combine it with other cross-platform data to create a detailed profile of your viewing, phoning, and living habits? 

 

I didn't know either.

 

Just possibly, you may be one of the few people who know about the Vizio lawsuit filed by the New Jersey attorney general in 2017, joined by the Federal Trade Commission.  The issue there was the Automated Content Recognition (ACR) software installed in every Vizio set.  ACR takes audio and video snippets of whatever is being played on the TV and sends them to be matched to vast databases of content.  In this way, the TV maker has a second-by-second profile of exactly what you are watching.  Simply knowing that some third party can spy on your viewing habits is bad enough.  But when they turn around and sell that information without your permission to advertisers or even nefarious actors (what if you use your TV as a monitor to check your bank account?), insult can turn quickly into injury.
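To make the mechanism concrete, here is a toy sketch of the database-matching idea.  This is my own illustration, not Vizio's actual system:  real ACR uses robust perceptual fingerprints that survive compression, scaling, and noise rather than exact hashes, and the snippet bytes and show names below are made up.

```python
import hashlib

# Toy content-fingerprint lookup. Real ACR systems use perceptual
# fingerprints (robust to compression and noise); exact SHA-256
# hashing here just illustrates the match-against-a-database idea.
def fingerprint(snippet: bytes) -> str:
    return hashlib.sha256(snippet).hexdigest()[:16]

# Hypothetical reference database an ACR vendor would build by
# indexing broadcast and streaming content ahead of time.
database = {fingerprint(b"frame-data-show-X-t=120s"): ("Show X", 120)}

def identify(snippet: bytes):
    # Returns (title, offset in seconds) on a match, else None.
    return database.get(fingerprint(snippet))

print(identify(b"frame-data-show-X-t=120s"))  # → ('Show X', 120)
```

Once the TV knows you are 120 seconds into Show X, that tuple is all it needs to send upstream; the snippet itself never has to leave the set.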

 

Vizio settled the lawsuit by paying fines and promising to improve its disclosure practices.  But clearly, if the new lawsuit filed by Texas Attorney General Ken Paxton on Dec. 15 of this year is any indication, ACR is still very much with us.

 

The companies named in Paxton's suit are Sony, Samsung, LG, Hisense, and TCL Technology.  Paxton is concerned that China's National Security Law will allow its government to access data on U. S. citizens gathered by TV companies based in China.  ACR data-harvesting is a big deal.  One report said that at one point, Vizio was making more money selling ACR-acquired data than it was from selling TVs. 

 

In retrospect, this isn't surprising.  The idea that the customer is the product made the Google founders billionaires, so if the technical capability is there, why shouldn't TV makers share in the moolah?  It's virtually impossible to even come near a mobile phone without having it figure out what kind of coffee you like, where you go for vacation, and what you talk about while you're waiting in line at the grocery store.  That particular privacy horse is miles away from the barn, and there seems to be no way of getting it back.

 

But the idea that the outfit which made my TV is profiting from selling my viewing habits is a new one on me, anyway.

 

I'm old enough to remember when companies had to go to a lot of trouble to find out what shows were being watched, back in the 1960s when there were only three networks and maybe a local channel or two.  Firms such as the Nielsen ratings people first asked selected consumers to keep written diaries of what they watched.  This was a rather irksome process of questionable accuracy, so later Nielsen developed a machine they called the Audimeter.  It automatically recorded the channel setting of the selected family's TV so they no longer had to write anything down. 

 

Of course, such families were highly aware that they were being watched, and perhaps even took pride in being able to "vote" in a sense for shows that they liked.  But that system is worlds away from the invasive practice of ACR, which every purchaser of TVs from certain brands participates in without his or her knowledge.

 

One study looked at how much trouble it is to turn off the ACR on a new TV, and found that the number of clicks required ranges from 11 to 27, once you figure out what the manufacturer calls its variant of ACR.  Names like "Smart Features," "Enhanced Viewing Experience," and my favorite, "Personalized Recommendations" obscure the true nature of the function. 

 

Opinions of Attorney General Paxton vary widely, and I am no fan of his in general.  But in the case of ACR, I think his actions are highly warranted, especially because the data-gathering has been successfully concealed from millions of consumers. 

 

Privacy is an odd kind of right that doesn't come first to mind when one thinks of human rights.  The phrase "life, liberty, and the pursuit of happiness" from the Declaration of Independence encapsulates the gist of rights that we in the United States lay claim to.  But the right to privacy, which can be defined as the right of keeping one's activities, personal data, and other identifying information away from scrutiny by strangers, is essential to one's freedom of action and even thought. 

 

We are living at a time when, rightly or wrongly, certain opinions and trends of thought are being seized upon by government agencies and used to penalize individuals and groups.  A case affecting my own university comes to mind.  A professor at Texas State University gave a talk last September at an online conference that was being monitored by a conservative blogger, who re-posted his words as an example of inflammatory speech advocating the overthrow of the U. S. government.  Texas Governor Greg Abbott's office reposted the item and brought pressure to bear on President Damphousse of Texas State University, who fired the tenured faculty member.  The faculty member is suing the University, but currently he is out of a job.

 

Right now, the worst use that TV manufacturers are making of their ACR-gathered data is to sell it to advertisers and marketers.  Perhaps it's annoying to watch a show on water skiing and then get bombarded by water-ski-equipment ads, but it's not a fundamental breach of your civil rights, exactly.  What if at some point, the government decides that anybody who watches Show X is a traitor to the country, and deserves to be deported?  That sounds ludicrous now, but if you had told me even five years ago that a tenured professor would be summarily dismissed for something he said in an online conference, I would have been reluctant to believe it. 

 

Sources:  The news release about Ken Paxton's lawsuit against five TV makers is at https://texasattorneygeneral.gov/news/releases/attorney-general-paxton-sues-five-major-tv-companies-including-some-ties-ccp-spying-texans.  The website Captain Compliance has a detailed explanation of ACR and its implications at https://captaincompliance.com/education/privacy-alert-how-automated-content-recognition-acr-is-watching-everything-you-watch/.  I also referred to a news item at https://www.kxan.com/news/texas-politics/texas-ag-sues-several-tv-companies-says-smart-tvs-are-spying-on-texans/ and the Wikipedia articles on Vizio and audience measurement.


Monday, December 15, 2025

G. K. Chesterton on Enjoying Daily Life

Someone writing around 1930 that civilization might be in ruins in about fifteen years is worth paying attention to even nearly a century later.  One such person was Gilbert Keith Chesterton (1874-1936), whose biography by Maisie Ward I have been reading lately. 

 

Chesterton was unquestionably a genius, but of a type that we are unfamiliar with today.  In our science-besotted age, we cannot conceive of geniuses other than the Albert Einstein type:  the scientific or technological wizard whose superiority is manifested by the fact that nobody except perhaps a few other geniuses can understand what he or she is doing. 

 

Chesterton wasn't that kind of genius.  He was a genius for the masses.  His gift was to take everyday things that everyone knew about—sunrises, beer, family affection, work—and look at them in a paradoxical way that made people sit up and see them in a new light.  He called himself a journalist, and if writing enough published material to fill a bookshelf several feet long with one's collected works qualifies you to be a journalist, he was not wrong to do so.

 

But he was so much more than that:  philosopher, theologian, controversialist, debater, radio personality, and apologist for Christianity in general and Catholicism in particular.  Believe it or not, he wrote many things that reflect on current issues in engineering ethics, and I'd like to focus on one quote in particular.

 

Toward the end of his life, he looked around the England that he loved, and saw how people were being drawn toward what we would now call electronic media (back then, mainly radio and especially sound movies) as a distraction from everyday life.  The very word "boredom" dates only from the early 1800s, and seems to be a peculiarly modern malady.  By the 1930s, Chesterton saw a danger in the fact that ordinary life, unless spiced up with a distraction that called for the observer to do nothing more than sit back and watch, was seen as increasingly boring.  This insight led him to pen the following words, which are quoted from the extensive biography of Chesterton by his friend Maisie Ward:

 

"Unless we can bring men back to enjoying the daily life which moderns call a dull life, our whole civilization will be in ruins in about fifteen years.  Whenever anybody proposes anything really practical, to solve the economic evil today, the answer always is that the solution would not work, because the modern town populations would think life dull.  That is because they are entirely unacquainted with life.  They know nothing but distractions from life; dreams which may be found in the cinema; that is, brief oblivions of life . . . . Unless we can make daybreak and daily bread and the creative secrets of labour interesting in themselves, there will fall on all our civilisations a fatigue which is the one disease from which civilisations do not recover.  So died the great Pagan civilisation; of bread and circuses and forgetfulness of the household gods."

 

The "economic evil" of which he spoke meant the Great Depression of the 1930s, which was arguably harsher in England than in the United States.  What he called for in this passage was not any innovations in technology.  While he was in favor of the good which advances in science and technology can produce, he saw that when people are provided with distractions that let them forget about the daily task at hand, they all too often choose the distraction over the task.

 

This is an ethical issue, and cannot be evaded with the old saw that "technology is neutral—the good and bad lies in how you use it."  Moving to today, vast fortunes and huge industries rely on the power of smartphone-enabled technology to divert the attention of billions into channels that profit corporations while delivering distractions that waste time at best.  And one of the harms it causes is the tidal wave of depression sweeping over the children and youth of the world—a tidal wave that Australia is attempting to sweep back with their recently-enacted ban on social media for anyone under 16, which took effect on Dec. 10.

 

I'm pretty sure that Chesterton, once he understood what social media is about, would be in favor of that ban.  He might even go so far as to recommend that every smartphone on earth be subjected to the treatment that a Houston private school gives every phone that is confiscated from students who violate their school-day ban on them:  slicing in two with a diamond saw. 

 

But the problem isn't just the phones:  it's what we have allowed the phones to do to our psyches.  When was the last time you sat outside for more than five minutes and did not look at your phone, but instead looked at the world around you?  Perhaps it's a busy cityscape.  Perhaps it's a field of wheat in the countryside.  Perhaps it's a forest.  But unless you live in Antarctica or on a desert island, there is life and there are other people around to observe and wonder at. 

 

That was the kind of thing that Chesterton himself could do regardless of where he was, whether in the dullest, dingiest part of London or in the beauty of his countryside home in Beaconsfield.  And appreciation for the ordinary aspects of real life is what he saw lacking in the lives of his fellow citizens of England in the 1930s, just fifteen short years before the devastation at the end of World War II.

 

I'm no prophet, and recycling old prophecies usually doesn't work.  But if mass distraction is a sign of civilizational breakdown, and Chesterton called it right in the 1930s, we should at least pause to consider whether radical-seeming steps such as the one Australia is taking may save us from a breakup that will not be World War III, exactly, but might be just as devastating.  Already, the political systems of many countries have suffered tremendous damage from the pernicious influence of social media.  And worse effects may come. 

 

If the tide is to be turned, it will happen as one person at a time learns how to use technology as a tool, and not allow it to be a master.  Put down the phone and spend time with the real world this Christmas—and with people you love.

 

Sources:  The quote above is from Chapter 21 of Maisie Ward's Gilbert Keith Chesterton, originally published in 1942.  The Kindle edition I'm reading lacks page numbers or references to the source in Chesterton's voluminous writings. 

Monday, December 08, 2025

Pros and Cons of Proposed Fuel Economy Standards

 

That's not a very exciting headline, perhaps.  But the Trump administration's proposed changes to the so-called Corporate Average Fuel Economy (CAFE) rules have already drawn criticism from many quarters.  The Environmental Defense Fund, for example, claims in a headline that the changes will "cost Americans more for gas, weaken national security, and increase pollution."  If it's so bad, why is the administration doing it? 

 

At the news conference announcing the proposal, Trump was surrounded by representatives of several domestic automakers, who favor the move.  It is actually a further step in a series of actions that Trump has taken to step back from the Biden administration's CAFE standards.

 

Under the previous administration, each automaker had to ensure that its average "fleet" economy (all its current model-year production, basically), measured in miles per gallon, increased by 2% per year.  (Electric vehicles are assigned an equivalent fuel-economy value on the order of 140 MPG.)  Companies not meeting the standards pay fines or purchase credits from firms that exceed them. 
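Incidentally, the "average" in CAFE is a production-weighted harmonic mean of fuel economy, not a simple average, which is why one gas-guzzler drags the fleet number down more than one efficient car pulls it up.  A small sketch with made-up production figures (the 140-MPG entry stands in for the EV equivalence mentioned above):

```python
# CAFE-style fleet average: total units divided by the sum of
# units-per-mpg, i.e. a production-weighted harmonic mean.
def cafe_average(fleet):
    total_units = sum(units for units, mpg in fleet)
    return total_units / sum(units / mpg for units, mpg in fleet)

# Hypothetical fleet: (units produced, mpg). Numbers are illustrative only.
fleet = [(60_000, 30.0), (30_000, 20.0), (10_000, 140.0)]
print(round(cafe_average(fleet), 1))  # → 28.0
```

Note that 10,000 notional EVs at 140 MPG lift this fleet's average only modestly, while the 30,000 vehicles at 20 MPG hold it well below the simple mean of the three ratings.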

 

Already, the Trump administration has ceased levying the associated fines under the spending bill recently passed by Congress.  They are now proposing to lower the 2% figure to 0.5%, and roll back the "baseline" from which the percentages are calculated to 2022. 

 

Let's compare two cases:  the former policy with fines and the 2% rate versus the proposed policy.  And let's see how various constituencies are affected by the two cases.

 

In the former Biden-administration case, automakers faced the fact that in less than 10 years, the CAFE standards would require raising their average fuel economy by 20%.  Any time an engineering system has some of its performance mandated by law, engineers eventually run up against another law:  physical law.  While modern automobiles differ in thousands of ways from the typical 1955 car—computer-controlled engines, greater use of plastic for reduced weight, etc.—there is still only so much energy in a gallon of gasoline.  And beyond a certain point, the accessible design space shrinks as the required fuel economy rises.  What the old CAFE standards were doing in practice was to compel the auto industry to move toward smaller, lighter cars and more electric vehicles.

 

Yes, that would save people money in fuel costs, make the U. S. more energy-independent, and reduce our carbon footprint.  But it also makes cars somewhat more expensive, at least at first, and gradually would eliminate certain larger sizes that consumers might want to buy.  So for automakers, the old rules meant compulsory redesigns against fundamental constraints that might eventually eliminate whole classes of vehicles.  For consumers, they meant more limited choices of somewhat more expensive cars, although ones that would be slightly cheaper to run.  And for the environment, they meant slower increases in carbon and other emissions, which is a good thing. 

 

The proposed reduction in CAFE increases to 0.5% means that the time to get to 20% higher than at present goes from 10 years to 36 years.  And in any case, there are no longer fines for violating the standards, so they are essentially an aspirational goal with no teeth in them.  Under the proposed rules, automakers will no longer be obliged to make cars steadily more fuel-efficient unless consumers ask for that.  Consumers will have a wider choice of cars that won't be more expensive simply because of the CAFE standards.  And while presumably we will have more carbon emissions than if the old standards were retained, unexpected advances in electric-vehicle technology may change this picture.
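That 10-versus-36-year comparison is just compound growth, and a quick back-of-the-envelope check (my own arithmetic, not a figure from the rulemaking) confirms it:

```python
import math

# Years of compound annual improvement needed to raise fleet
# fuel economy to a target ratio above the baseline.
def years_to_improve(target_ratio, annual_rate):
    return math.log(target_ratio) / math.log(1 + annual_rate)

print(round(years_to_improve(1.20, 0.02), 1))    # old 2% rule:      ≈ 9.2 years
print(round(years_to_improve(1.20, 0.005), 1))   # proposed 0.5%:    ≈ 36.6 years
```

So a 20% improvement that the old standards would have forced in under a decade stretches to well over three decades at the proposed rate.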

 

For example, in the December issue of Physics Today, the umbrella publication of the American Institute of Physics, researchers describe their work on solid-state lithium batteries that could vastly out-perform current lithium-ion batteries.  One radical improvement they hope to make is to replace the current graphite anodes, which can absorb only one lithium ion for every six carbon atoms, with solid-lithium ones, which raises the charge capacity of the anode by a factor of ten.  There are problems with solid-state battery technology, but if they are overcome, it might be possible to manufacture electric vehicles that are both cheaper to buy than gasoline-powered ones and travel farther on a charge. 
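The factor of ten can be checked from first principles.  Theoretical specific capacity follows from Faraday's constant and the mass of host material per transferable electron:  graphite stores one lithium per six carbon atoms (LiC6), while lithium metal contributes one electron per atom.  A quick sketch of that arithmetic (my own check, not a calculation from the Physics Today article):

```python
F = 96485.3  # Faraday constant, coulombs per mole of electrons

# Specific capacity in mAh/g: (F / molar mass) gives C/g;
# dividing by 3.6 converts coulombs to milliamp-hours.
def specific_capacity_mAh_per_g(molar_mass_per_electron):
    return F / (3.6 * molar_mass_per_electron)

graphite = specific_capacity_mAh_per_g(6 * 12.011)  # LiC6: 6 carbons host one Li
lithium  = specific_capacity_mAh_per_g(6.94)        # Li metal: one electron per atom
print(round(graphite), round(lithium), round(lithium / graphite, 1))  # → 372 3862 10.4
```

The results match the commonly published figures of about 372 mAh/g for graphite and about 3,860 mAh/g for lithium metal, consistent with the roughly tenfold gain the researchers are after.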

 

And if consumers are presented with such a choice, it's quite likely that the internal-combustion-engine-powered vehicles would be relegated to specialist uses in construction, etc., leaving most of the field to electric vehicles.  That would come about not because of any government mandate, but because competitive forces in the marketplace produced innovations that consumers genuinely want, and have the byproduct of increasing CAFE mileage and reducing pollution.

 

Obviously, there's no guarantee that solid-state batteries or any other innovation will ever make all-electric vehicles outperform gas-guzzlers in all significant ways:  first cost, performance (including range), and per-mile costs, including power and maintenance.  But it could happen, just as we saw a huge reduction in carbon emissions from power plants when coal was replaced by cheaper abundant natural gas, due not mainly to government mandates but to the privately-funded development of fracking technology. 

 

You can't count on these serendipitous things happening.  But it's equally short-sighted to think that the only good things that go on in a market are government-mandated changes. 

 

The CAFE changes proposed by the Trump administration are still open to comments before they are implemented.  I'm not holding my breath that the current regime will take negative comments into consideration, but it might happen.  Perhaps what is most harmful in this whole situation is the every-four-year policy shifts that manufacturers have been trying to deal with, as Obama was replaced by Trump, who was replaced by Biden, who was replaced by Trump again.  But a small-r republican form of government, as messy as it is, is better than being dictated to by a small group of powerful individuals with no term limits, which is how China is governed.  And for now, it looks like we may be reverting to more of a free-market model in the auto industry.  Consumers and manufacturers should enjoy it while they can, because it may not last. 

 

Sources:  I referred to an NPR report on the Trump administration's proposed CAFE-standard changes at https://www.npr.org/2025/12/03/nx-s1-5630389/trump-administration-rolls-back-fuel-economy-standards.  I also referred to a webpage of the Environmental Defense Fund at https://www.edf.org/media/trump-administration-announces-plan-weaken-fuel-economy-standards-cars-and-trucks.  The Physics Today article "Solid-State Batteries:  Hype, Hope, and Hurdles" by S. Muy, K. Hatzell, S. Meng, and Y. Shao-Horn appeared on pp. 40-46 of the December 2025 issue.

Monday, December 01, 2025

The Hong Kong Apartment Complex Fire

 

On the afternoon of Wednesday, November 26, it was windy in the part of Hong Kong where eight 31-story apartment towers called Wang Fuk Court were undergoing renovations.  As has been the custom for hundreds of years, exterior scaffolding made of bamboo was erected around the towers.  To prevent damage to the single-pane windows of the complex, workers had covered the windows with foam-plastic sheets.  Nylon safety nets surrounded the scaffolding.  What could go wrong?

 

The world found out when an unknown cause started a fire in the lower level of one of the scaffolds.  Fanned by high winds, the flames quickly spread to the flammable plastic sheets covering the windows.  Burning debris and flames spread from tower to tower until seven of the eight towers were engulfed.  About 40% of the residents were over 65, and the fire alarms in many of the buildings were later found to be out of order.

 

For the next day or so, the fire defeated efforts of thousands of firefighters to control it.  As of this writing, the confirmed death toll stands at 146, with dozens more missing.  About 4,600 people lived in the complex, so many got out alive.  But many others, probably including most of those who needed mobility assistance to escape, didn't make it.

 

News reports are calling this tragedy a "man-made disaster," and I couldn't agree more.  Authorities have arrested several officials of the construction company in charge of the renovations, Prestige Construction and Engineering Company, and have halted the firm's work on all other projects in order to conduct safety inspections. 

 

Whoever made the choice of protective window panels may have chosen low cost over safety.  In any event, that choice directly contributed to the fire perhaps more than anything else, although final conclusions will have to await a thorough investigation.  The use of nylon for safety netting and of bamboo for scaffolding is also questionable, although China has a centuries-old tradition of bamboo scaffolding that has only recently come into question as metal scaffolding gradually supplants it.  Photos of the scene after the disaster seem to indicate that the scaffolding largely stayed in place, but was charred to the point of being structurally unsound.

 

As soon as I learned of this fire, I thought of the 2017 Grenfell Tower fire in London.  In that disaster, a refrigerator caught fire in a lower apartment and the flames spread to flammable exterior insulating sheathing that had been installed in an upgrade some years before.  The fire propagated behind the sheathing and was difficult to extinguish, and before it was all over, 72 people died and many more were injured.  Again, negligent planning and a failure to take account of the flammability of exterior sheathing were at fault, just as they appear to be in the Wang Fuk Court fire.

 

Engineering ethics requires imagination of a particularly informed type:  imagination bolstered by in-depth technical knowledge.  Engineers sometimes have a reputation for being dour pessimists who always jump to the worst-case scenario of a given situation.  But in the case of the Hong Kong fire, engineers were not pessimistic enough.  Nobody apparently imagined what would happen if one of the plastic window-sheathing panels caught fire, especially if it was on a lower floor of a building that happened to be upwind of most of the others on a blustery day. 

 

It's possible that workers were instructed not to smoke or to engage in operations that might lead to a fire.  That's all very well, but not everybody follows instructions. And even the best-intentioned workers can be using equipment that shorts out or otherwise becomes a source of ignition.  Good safety practitioners imagine that something statistically unlikely will nevertheless go wrong, and then draw conclusions from that premise. 

 

Judging from the number of residents listed and the casualty list, it's possible that about 5% of the listed residents died in the fire.  That means roughly 95% escaped with no injuries or only non-life-threatening ones, though most lost the bulk of their worldly possessions.  No one would willingly go through an unplanned experience that strips them of their belongings and carries a one-in-twenty chance of death.  So while fire escapes and other built-in safety features allowed most residents to escape the flames, over a hundred didn't.
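The one-in-twenty figure is easy to check against the numbers in the news reports.  The sketch below uses the reported resident count (4,600) and confirmed deaths (146); the number of missing is an illustrative assumption standing in for "dozens," not a reported figure.

```python
# Rough casualty-rate estimate for the Wang Fuk Court fire.
# Residents and confirmed deaths are from the news reports cited below;
# assumed_missing is a hypothetical stand-in for "dozens more missing."
residents = 4600
confirmed_dead = 146
assumed_missing = 80  # assumption, not a reported number

confirmed_rate = confirmed_dead / residents
worst_case_rate = (confirmed_dead + assumed_missing) / residents

print(f"Confirmed fatality rate: {confirmed_rate:.1%}")   # about 3.2%
print(f"If all missing perished: {worst_case_rate:.1%}")  # about 4.9%
```

The confirmed toll alone works out to about one in thirty; only if most of the missing also perished does the rate approach the one-in-twenty figure.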

 

Major renovation projects are routinely inspected by civil authorities, and I can't imagine that this project was an exception.  If a construction firm neglects to take due safety precautions, it is the civil authority's responsibility to step in and halt work if necessary in order for safety hazards to be addressed.  This obviously wasn't done. 

 

I don't know the nature of safety inspection services in Hong Kong, but among those to be held accountable should be whoever permitted the work to proceed.  Such officials can be subject to corruption pressures, and until a tragedy like this occurs, any corner-cutting enabled by paying off inspectors may never come to light.  I have no reason to believe that this was the case here, but it is certainly an avenue worth investigating.

 

Disasters like this one can have the silver lining of making future safety regulations and inspections much more rigorous.  As the investigations proceed and the chain of causation is revealed, Hong Kong engineers, construction firms, and officials can all learn valuable lessons that are driven home by the horrible example of what can go wrong if safety measures are neglected.  My sympathy is with those who lost loved ones in the fire.  And my hope is that nobody anywhere in the world puts flammable sheathing on high-rises ever again.

 

Sources:  I referred to a BBC report at https://www.bbc.com/news/articles/cdxe9r7wjgro, a report from the news outlet livemint.com at https://www.livemint.com/news/world/hong-kong-apartment-fire-death-toll-climbs-to-146-probe-reveals-fire-code-violations-11764496767266.html, and the Wikipedia article "Grenfell Tower fire."

Monday, November 24, 2025

AI Ghosts and the Yuck Factor

 

The December Scientific American highlights an article by David Berreby that gets personal.  Berreby's father was born in 1927, the same year as my father, and died in 2013.  Yet the article opens with Berreby asking, "How is your existence these days?" and getting a reply:  "... Being dead is a strange experience."

 

In this conversation, Berreby is using a generative-artificial-intelligence (genAI) version of his father to investigate what it is like to interact with an AI ghost:  a digital simulation of a dead loved one that some psychologists and computer scientists are promoting as an aid for grieving people.

 

I'll be frank about my initial reaction to this idea.  I thought it was terrible.  The "yuck factor" is a phrase popularized by ethicist Leon Kass in describing a gut-level negative reaction to a thing.  He says we should at least pay attention whenever we have such a reaction, because such a sensation may embody wisdom that we can't articulate.  The AI-ghost idea reminded me of Norman Bates, the mentally defective protagonist of Alfred Hitchcock's movie Psycho, who kept his mother around long after her bury-by date and talked with her as though she were still alive. 

 

And to his credit, Berreby admits that there may be dangers for some people whose mental makeup makes them vulnerable to confusing fiction with reality, and who could become harmfully addicted to the use of this type of AI.  But in the limited number of cases examined (only 10 in one study) in which grieving patients were encouraged to interact with AI ghosts, they all reported positive outcomes and a better ability to interact with live humans.  As one subject commented, "Society doesn't really like grief."  Who better to discuss your feelings of loss with than an infinitely-patient AI ghost who is both the cause and the solace of one's grief? 

 

Still, it troubles me that AI ghosts could become a widespread way of coping with the death of those we love.  One's worldview context is important here. 

 

Historically, death has been viewed as the portal to the afterlife.  Berreby chose to title his article "Mourning Becomes Electric," a takeoff on Eugene O'Neill's play cycle "Mourning Becomes Electra," which itself was based on the Oresteia play cycle by Aeschylus, a famous Greek playwright who died around 450 B.C.  In the plays, Aeschylus describes the tragic murder of the warrior Agamemnon by his unfaithful wife Clytemnestra, and how gods interacted with humans as things went from bad to worse.  That reference, and a few throwaway lines about ectoplasm and Edison's boast that if there was life after death, he could detect it with a new invention of his, are the only mentions of the possibility that the dead are no longer in existence in any meaningful way.

 

If you believe that death is the final end of the existence of any given human personality, and you miss interacting with that personality, it only makes sense to use any technical means at your disposal to scratch that itch and conjure up your late father, mother, or Aunt Edna.  Berreby quotes Amy Kurzweil, artist and daughter of famed transhumanist Ray Kurzweil, as saying that we don't usually worry that children will do things like expecting the real Superman to show up in an emergency, because they learn early to distinguish fiction from reality.  And so she isn't concerned that grieving people will start to treat AI ghosts like the real person the machine is intended to simulate.  It's like looking at an old photo or video of a dead person:  there's no confusion, only a stimulus to memory, and nobody complains about keeping photos of our dear departed around.

 

In the context of secular psychology, where the therapeutic goal is to minimize distress and increase the feeling of well-being, anything that moves the needle in that direction is worth doing.  And if studies show that grieving people feel better after extensive chats with custom-designed AI ghosts, then that's all the evidence therapists need that it's a useful thing to do.

 

The article is written in the nearly-universal etsi Deus non daretur style—a Latin phrase meaning roughly "as though God doesn't exist."  And in a secular publication such as Scientific American, this is appropriate, I suppose, though it leaves out the viewpoints of billions of people who believe otherwise.  But what if these billions are right?  That puts a different light on the thing.

 

Even believers in God acknowledge that grieving over the loss of a loved one is an entirely appropriate and natural response.  A couple we have known for 45 years was sitting at the breakfast table last summer praying, and the man suddenly had a massive hemorrhagic stroke, dying later that same day.  It was a terrible shock, and at the funeral there were photos and memorabilia of him to remind those in attendance of what he was like.  But everyone there had a serene confidence that David Jenkins was in the hands of a merciful God.  While it was a sad occasion, there was an undercurrent of bottomless joy that we knew he was enjoying, and that we the mourners participated in by means that cannot be fully expressed in writing.

 

In Christian theology, an idol is something that humans create which takes the place of God.  While frank ancestor worship is practiced by some cultures, and is idolatry by definition, a more subtle temptation to idolatry is offered by AI chatbots of all kinds, and especially by AI ghosts. 

 

While I miss my parents, they died a long time ago (my father was the last to go, in 1984).  I will confess to writing a note to my grandmother once, not long after she died.  So did Richard Feynman write a note to his late wife, who died tragically young of tuberculosis, and a less likely believer in the supernatural would be hard to find. 

 

I suppose it might do no harm for me to cobble up an AI ghost of my father.  But for me, anyway, the temptation to credit it with more existence than it actually possessed would be strong, and I will take no steps in that direction. 

 

As for people who don't believe in an afterlife, AI ghosts may help them cope for a while with death.  But only the truth will make them free of loss, grieving, and the fear of death once and for all.  And however good an AI ghost gets at imitating the lost reality of a person, it will never be the real thing.

 

Sources:  "Mourning Becomes Electric" by David Berreby appeared on pp. 64-67 of the December 2025 issue of Scientific American.  I referred to Wikipedia articles on "Wisdom of repugnance" and "Oresteia."