Monday, November 24, 2025

AI Ghosts and the Yuck Factor

 

The December issue of Scientific American features an article by David Berreby that gets personal.  Berreby's father was born in 1927, the same year as my father, and died in 2013.  Yet the article opens with Berreby asking, "How is your existence these days?" and getting a reply:  "... Being dead is a strange experience."

 

In this conversation, Berreby is using a generative-artificial-intelligence (genAI) version of his father to investigate what it is like to interact with an AI ghost:  a digital simulation of a dead loved one that some psychologists and computer scientists are promoting as an aid for grieving people.

 

I'll be frank about my initial reaction to this idea.  I thought it was terrible.  The "yuck factor" is a phrase popularized by ethicist Leon Kass to describe a gut-level negative reaction to something.  He says we should at least pay attention whenever we have such a reaction, because the sensation may embody wisdom that we can't articulate.  The AI-ghost idea reminded me of Norman Bates, the deranged protagonist of Alfred Hitchcock's movie Psycho, who kept his mother around long after her bury-by date and talked with her as though she were still alive.

 

And to his credit, Berreby admits that the technology may pose dangers to people whose mental makeup makes them vulnerable to confusing fiction with reality, and who could become harmfully addicted to this type of AI.  But in the limited number of cases examined (only 10 in one study) in which grieving patients were encouraged to interact with AI ghosts, they all reported positive outcomes and a better ability to interact with live humans.  As one subject commented, "Society doesn't really like grief."  Who better to discuss your feelings of loss with than an infinitely patient AI ghost who is both the cause and the solace of one's grief?

 

Still, it troubles me that AI ghosts could become a widespread way of coping with the death of those we love.  One's worldview context is important here. 

 

Historically, death has been viewed as the portal to the afterlife.  Berreby chose to title his article "Mourning Becomes Electric," a takeoff on Eugene O'Neill's play cycle "Mourning Becomes Electra," which itself was based on the Oresteia play cycle by Aeschylus, a famous Greek playwright who died around 450 B. C.  In the plays, Aeschylus describes the tragic murder of the warrior Agamemnon by his unfaithful wife Clytemnestra, and how gods interacted with humans as things went from bad to worse.  That reference, and a few throwaway lines about ectoplasm and Edison's boast that if there was life after death, he could detect it with a new invention of his, are the article's only nods to the possibility that the dead still exist in any meaningful way.

 

If you believe that death is the final end of the existence of any given human personality, and you miss interacting with that personality, it only makes sense to use any technical means at your disposal to scratch that itch and conjure up your late father, mother, or Aunt Edna.  Berreby quotes Amy Kurzweil, artist and daughter of famed transhumanist Ray Kurzweil, as saying that we don't usually worry that children will do things like expecting the real Superman to show up in an emergency, because they learn early to distinguish fiction from reality.  And so she isn't concerned that grieving people will start to treat AI ghosts like the real person the machine is intended to simulate.  It's like looking at an old photo or video of a dead person:  there's no confusion, only a stimulus to memory, and nobody complains about keeping photos of our dear departed around.

 

In the context of secular psychology, where the therapeutic goal is to minimize distress and increase the feeling of well-being, anything that moves the needle in that direction is worth doing.  And if studies show that grieving people feel better after extensive chats with custom-designed AI ghosts, then that's all the evidence therapists need that it's a useful thing to do.

 

The article is written in the nearly-universal etsi Deus non daretur style—a Latin phrase meaning roughly "as though God doesn't exist."  And in a secular publication such as Scientific American, this is appropriate, I suppose, though it leaves out the viewpoints of billions of people who believe otherwise.  But what if these billions are right?  That puts a different light on the thing.

 

Even believers in God acknowledge that grieving over the loss of a loved one is an entirely appropriate and natural response.  A couple we have known for 45 years was sitting at the breakfast table last summer praying, and the man suddenly had a massive hemorrhagic stroke, dying later that same day.  It was a terrible shock, and at the funeral there were photos and memorabilia of him to remind those in attendance of what he was like.  But everyone there had a serene confidence that David Jenkins was in the hands of a merciful God.  While it was a sad occasion, there was an undercurrent of bottomless joy that we knew he was enjoying, and that we the mourners participated in by means that cannot be fully expressed in writing.

 

In Christian theology, an idol is something that humans create which takes the place of God.  While frank ancestor worship is practiced by some cultures, and is idolatry by definition, a more subtle temptation to idolatry is offered by AI chatbots of all kinds, and especially by AI ghosts. 

 

While I miss my parents, they died a long time ago (my father was the last to go, in 1984).  I will confess to writing a note to my grandmother once, not long after she died.  So did Richard Feynman write a note to his late wife, who died tragically young of tuberculosis, and a less likely believer in the supernatural would be hard to find. 

 

I suppose it might do no harm for me to cobble up an AI ghost of my father.  But for me, anyway, the temptation to credit it with more existence than it really would have would be strong, and I will take no steps in that direction. 

 

As for people who don't believe in an afterlife, AI ghosts may help them cope for a while with death.  But only the truth will make them free of loss, grieving, and the fear of death once and for all.  And however good an AI ghost gets at imitating the lost reality of a person, it will never be the real thing.

 

Sources:  "Mourning Becomes Electric" by David Berreby appeared on pp. 64-67 of the December 2025 issue of Scientific American.  I referred to Wikipedia articles on "Wisdom of repugnance" and "Oresteia." 

Monday, November 17, 2025

Can AI Make Roads Safer?

 

The answer is almost certainly yes, as a recent Associated Press report shows. 

 

I have been less than complimentary in my treatment of artificial intelligence in some of these columns.  Any new technology can have potential hazards, and one of the main tasks of those who do engineering ethics is to examine technologies for problems that might occur in the future as well as current concerns.  But there's no denying that AI has potential for saving lives in numerous ways, including the improvement of road safety.  The AP article highlights AI's contributions to road safety, mainly in the area of information gathering to allow road crews to allocate their resources more intelligently.

 

Road maintenance is unusual in that the physical extent of the property a government or organization is responsible for exceeds that of almost any private entity.  In my town of San Marcos, home to about 70,000 people, there are hundreds of stop signs, dozens of signals, and hundreds of miles of road.  And in the whole state of Texas there are over 680,000 "lane miles" of road, more than in any other state.  Just inspecting those miles for damage such as potholes, worn-out signs, and debris is a gargantuan task.  Especially if a problem shows up in a remote low-traffic area, it could be years before highway workers are even aware of it, and longer before it gets fixed.
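
For a sense of scale, here's a back-of-envelope calculation; the 680,000-lane-mile figure is from the text, while the survey speed and workday length are my own assumptions for illustration:

```python
# Rough estimate of how long one crew would need to visually survey
# every lane mile of Texas road.  The 680,000-mile figure comes from
# the text; the speed and workday length are assumed for illustration.
lane_miles = 680_000
survey_mph = 40          # assumed average speed while inspecting
hours_per_day = 8        # assumed workday

days = lane_miles / (survey_mph * hours_per_day)
print(f"{days:,.0f} crew-days")   # 2,125 crew-days: nearly six years for one crew
```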

 

In Hawaii, the highway authorities are giving away 1,000 dashcams equipped to send what they see on the road to a central AI processing facility which will pick up on things like damaged guardrails, faulty signs, and other safety hazards.  The ability of sophisticated AI software to pore through millions of photos for specific problems like these makes it possible to use the floods of data from cameras to do this with minimal human involvement.  Hawaii has to import virtually all its equipment and supplies for road maintenance, so resource allocation there is especially important.
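
The triage idea behind the Hawaii program can be sketched in a few lines; everything here, from the stub classifier to the threshold, is hypothetical and stands in for a trained vision model running on real dashcam frames:

```python
# Minimal sketch of camera-based hazard triage: an image classifier
# scores each dashcam frame, and only frames flagged as likely hazards
# (with their GPS position) go on to a human reviewer.  The classifier
# below is a stand-in stub, not a real model.
from dataclasses import dataclass

@dataclass
class Frame:
    lat: float
    lon: float
    has_damage: bool   # stand-in for real pixel data

def classify(frame: Frame) -> float:
    """Stub hazard score in [0, 1]; a real model would look at pixels."""
    return 0.9 if frame.has_damage else 0.05

frames = [Frame(21.31, -157.86, False),
          Frame(21.32, -157.85, True),
          Frame(21.33, -157.84, False)]

THRESHOLD = 0.5
flagged = [(f.lat, f.lon) for f in frames if classify(f) >= THRESHOLD]
print(flagged)   # only the one suspect location reaches a human reviewer
```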

 

San Jose, California has mounted cameras on streetsweepers and found that they identified potholes at a rate of 97%; the city will now expand the program to parking-enforcement vehicles.  And driver cellphone data can pinpoint issues such as a stop sign hidden by bushes at a particular intersection in Washington, D. C., where AI software noticed that drivers were braking suddenly far more often than usual.  The mayor of San Jose, a former tech entrepreneur, hopes that cities will begin to share their databases so that problems common to more than one area can be identified more quickly.

 

The application of AI to identifying road maintenance needs seems to be one of the most benign AI-application cases around.  The information being gathered is not personal.  Rather, it's simply factual data about the state of the road bed and its surroundings.  While places such as Hawaii, and other locations that use anonymized cellphone data, do rely on private citizens to gather this data, the intent is not to spy on the people whose dashcams or phones are sensing the information.  And it would be hard to exploit such databases for illicit purposes, although evil people can twist even the best-intended system to nefarious ends.

 

All the data in the world won't help fix a pothole if nobody goes out and fixes it, of course.  But the beauty of AI-assisted data gathering is that a much better global picture of the state of the road inventory becomes available, allowing officials to prioritize their maintenance resources more intelligently.  A dangerous pothole in a part of town where nobody much goes or complains won't be ignored as long, now that AI is being used to find it.  And data-sharing among municipal and state governments seems to have mostly upsides, although due precautions would have to be taken to make sure the larger accumulation of data isn't hacked.

 

As autonomous vehicles become more widespread, the database of road hazards could be made available to driverless cars, which would then "know" hazards to avoid even before the hazards are repaired.  To a limited extent, this is happening already. 
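
One way such a shared hazard database might serve a driverless car is a simple proximity query; the coordinates, field names, and flat-earth distance shortcut below are all illustrative assumptions, not any real system's API:

```python
# Sketch: store reported hazards as coordinates and warn when one lies
# within a given radius of the car's position.  The equirectangular
# approximation is fine over distances of a kilometer or so.
import math

hazards = [
    {"kind": "pothole",   "lat": 37.335, "lon": -121.893},
    {"kind": "guardrail", "lat": 37.400, "lon": -121.950},
]

def nearby(car_lat, car_lon, radius_km=1.0):
    found = []
    for h in hazards:
        # convert degree offsets to km (111.32 km per degree of latitude)
        dx = (h["lon"] - car_lon) * 111.32 * math.cos(math.radians(car_lat))
        dy = (h["lat"] - car_lat) * 111.32
        if math.hypot(dx, dy) <= radius_km:
            found.append(h["kind"])
    return found

print(nearby(37.336, -121.894))   # ['pothole'] -- close enough to warn about
```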

 

Whenever my wife runs Waze on her phone on a trip, the voice warns us of stalled vehicles or law-enforcement personnel before we get to them.  I've often wondered how the system obtains such ephemeral information.  It seems almost inevitable that it uses anonymized location data from the stalled cars themselves, which gives one a slightly creepy feeling.  But if it keeps somebody from plowing into me some day while I'm fixing a tire on the side of the road, I'll put up with the creepy feeling in the meantime.

 

Highway fatalities in the U. S. have been declining overall since at least the 1960s, reaching a minimum of just under 33,000 a year in 2011.  Since then, they rose significantly, peaking after COVID at about 43,000 in 2021, with a slight decline since then.  Part of the increase has been attributed to the rise in cellphone use, although that is difficult to disentangle from many other factors.  While most traffic accidents are due to driver error, bad road conditions can also contribute to accidents and fatalities, and everything we can do to minimize this factor will help save lives.

 

The engineering idealists among us would like to see autonomous vehicles taking over, as there are some indications that accident rates per mile driven for such vehicles can be lower than those of cars with human drivers.  But the comparison does not take into account the fact that most truly autonomous cars operate over highly restricted areas such as city centers, where circumstances are fairly predictable and well known.  General suburban or rural driving seems to pose serious challenges for autonomous vehicles, and until they can prove that they are safer than human-driven vehicles in every type of driving environment, it's not clear that replacing humans with robots behind the wheel will decrease traffic fatalities overall, even if the robots get clued in about road hazards from a national database.

 

At least in this country, the citizens will decide how many driverless cars get on the road, and for the foreseeable future we will have mostly humans behind the wheel.  It's good to know that AI is helping to identify and fix road hazards, but even if such systems work perfectly, other things can go wrong on the road.

 

Sources:  The AP article "Cities and states are turning to AI to improve road safety" by Jeff McMurry appeared on the AP website on Nov. 15, 2025 at https://apnews.com/article/ai-transportation-guardrails-potholes-hawaii-san-jose-9b34a62b2994177ece224a8ed9645577.  I also referred to the websites https://blog.cubitplanning.com/2010/02/road-miles-by-state for the lane-miles figures and https://cdan.dot.gov/tsftables/Fatalities%20and%20Fatality%20Rates.pdf for road fatality rates.


Monday, November 10, 2025

Questions About UPS Flight 2976

 

At 5:15 PM on Tuesday, November 4, UPS Flight 2976, bound for Hawaii, took off from Louisville Muhammad Ali International Airport in Kentucky.  Louisville is the main worldwide UPS hub, from which millions of packages are shipped weekly on aircraft such as Flight 2976's three-engine McDonnell-Douglas MD-11.  The MD-11 is somewhat of an orphan, as it was originally developed to be a wide-body passenger aircraft in competition with Boeing's 767.  But only a couple hundred of them were built before production shut down in 2000 after Boeing acquired McDonnell-Douglas.  Like most surviving MD-11s, this one, owned originally by Thai Airways, was later converted to freight service, and it was 34 years old at the time of takeoff.

 

During the takeoff roll, the left engine and its supporting pylon separated from the wing, and a fire broke out.  An alarm bell went off in the cockpit, and for the next 25 seconds Captain Richard Wartenberg and First Officer Lee Truitt struggled to control the plane.  But after reaching an altitude of only 100 feet, the plane began to roll to the left.  It tore a 300-foot gash in a UPS warehouse south of the airport, left a blazing trail of fuel along its path, and collided with oil tanks at an oil-recycling facility, leading to explosions and a much bigger fire before the bulk of the plane came to rest in a truck parking area and an auto junkyard.  Besides the three crew members, including Relief Officer Captain Dana Diamond, eleven people on the ground died and about as many were injured, some critically.  It was the deadliest accident in UPS's history.  On Saturday, the FAA temporarily grounded all MD-11s for inspections in case a mechanical defect is at fault.

 

This crash was one of the best-documented ones in recent memory, as it was in full view of a major road (Grade Lane), many security cameras, and numerous stationary and moving dashcams.  One dashcam video posted on the Wikipedia site about the crash shows a blazing trail of fuel sweeping from right to left across the scene, engulfing trucks and other structures within seconds.  An aerial view from south of the airport looking north shows what looks like the path of a tornado, as a wide swath of destruction leads from the runway to the foreground. 

 

Because the support pylons are essential to the structural integrity of the entire aircraft, aerospace engineers normally make sure that the engines are fastened very securely to the rest of the plane.  Typically there are multiple points of attachment between the engine and the pylon, but the pylon itself is a structural member permanently affixed to the wing.  It's possible that the last time the engine was removed from the plane, somebody didn't finish the job of reattaching it, but because of the multiple attachment points, it's unlikely that any single mistake would lead to the whole assembly falling off.

 

Instead, my uninformed non-mechanical-engineer's initial guess is that fatigue or some other issue weakened the pylon's attachment to the wing, causing a crack or cracks that eventually led to the failure of the attachment point, which would let the engine and pylon fall off as they did.  And it's natural that this would occur at a moment of maximum stress on the pylon, which occurs during takeoff.

 

Planes are supposed to be inspected regularly for such hidden flaws.  But sometimes they can show up in inaccessible areas that might require X-ray or ultrasound equipment to detect.  That is the main reason that the FAA has grounded the remaining fleet of MD-11s: so they can be inspected for similar flaws. 

 

This early in the investigation, it's unclear whether the pieces of the aircraft will tell a definite story of what happened.  It's a good sign that the left engine was recovered presumably without major fire damage near the runway, as the end of the attached pylon will give investigators a lot of information about how the thing came loose. 

 

Inspections and maintenance are boring compared to design and construction, and so they sometimes get short shrift in an organization with limited resources.  But there's an engineering version of the old saying, "the price of liberty is eternal vigilance."  It goes something like, "the price of reliable operation is regular maintenance."  I'm facing a much smaller-scale but similar situation here in my own home. 

 

In Texas, air conditioning has become a well-nigh necessity, a fact recognized by everyone except the Texas legislature, which steadfastly refused once again this year to use some of the budget surplus to air-condition all Texas prisons.  (Sorry for the soapbox moment, but I couldn't resist.)  Anyway, every spring and fall I have an HVAC company come out and inspect our heat-pump heating and cooling unit.  Last spring they said it needed a new contactor that was about to go out, and the fan motor bearing didn't look too good, but it was otherwise okay.

 

Things changed over the summer.  Now the evaporator has sprung three leaks, the compressor has been working so hard that its insulation and capacitor are compromised, and to make a long sad story short, we need a whole new unit.

 

I could have just ignored matters till something major disabled the unit:  the compressor shorting out, the fan motor freezing, any number of things.  As often happens in such cases, it might have failed either in the middle of the coldest day of the year, or next August when the thermometer reads 102 in the shade.  Not wishing for such emergencies, I choose to have regular maintenance checks, which have paid off, both for me and for the HVAC people who get to install a new unit under less-than-urgent conditions.

 

My sympathy is with those who lost loved ones, both in the air and on the ground, in the crash of Flight 2976.  And my hope is that, if lack of maintenance is found to be a contributing cause, the grounding of the other MD-11s will prevent another accident like the one we saw last Tuesday.

 

Sources:  I referred to an article on the FAA action at https://abcnews.go.com/US/final-moments-ups-plane-crash-detailed-ntsb/story?id=127313407, a comment on engine support pylons at https://aviation.stackexchange.com/questions/79872/what-are-different-components-of-an-engine-pylon, and the Wikipedia articles on MD-11 and UPS Airlines Flight 2976.

Monday, November 03, 2025

Can We Afford to Power AI?

 

That is the question posed by Stephen Witt's article in this week's New Yorker, "Information Overload."  Witt, a biographer of Nvidia founder Jensen Huang, has toured several of the giant data centers operated by CoreWeave, the leading independent data-center operator in the U. S.  He brings from these visits and interviews some news that may affect everyone in the country, whether or not you think you use artificial-intelligence (AI) services:  the need to power the growing number of data centers may send electric-power costs through the roof.

 

Witt gives a fascinating inside view of what actually goes on in the highly secure, anonymous-looking giant buildings that house the Nvidia equipment used by virtually all large-scale AI firms.  Inside are rack after rack of "nodes," each node holding four water-cooled graphics-processing units (GPUs), which are the workhorse silicon engines of current AI operations.  Thousands of these nodes are housed in each data center, and each runs at top speed, emphasizing performance over energy conservation. 

 

Gigawatts of power are consumed both in the training phase of AI implementation, which feeds on vast quantities of raw data (books, images, etc.) to develop the weights that AI networks use to perform "inferences" (which basically means answering queries), and in the inference phase itself.  Of the two, training appears to demand the more intense bursts of energy.
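
To see why the numbers get big quickly, here is a back-of-envelope estimate for a single hypothetical data center; the node count, per-GPU draw, and overhead multiplier are my assumptions, since the article gives no specific figures:

```python
# Back-of-envelope power estimate for one hypothetical data center:
# 5,000 nodes of 4 GPUs each, roughly 1 kW per GPU all-in, and a
# cooling/overhead multiplier (PUE) of 1.3.  All figures assumed.
nodes = 5_000
gpus_per_node = 4
kw_per_gpu = 1.0          # assumed draw per GPU
pue = 1.3                 # assumed power-usage-effectiveness

it_load_mw = nodes * gpus_per_node * kw_per_gpu / 1_000
total_mw = it_load_mw * pue
print(f"{total_mw:.0f} MW")   # 26 MW -- and operators run many such halls
```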

 

The global economy has taken a headlong plunge into AI, making Nvidia the first company worldwide to surpass $5 trillion in market capitalization.  (Just in case anybody's thinking about a government takeover of Nvidia, $5 trillion would pay off only about one-eighth of the U. S. government's debt, but it's still a lot of money.)  With about 90% of the market share for GPU chips, Nvidia is in a classic monopoly position, and significant competitors are not yet on the horizon.

 

Simple rules of supply and demand tell us that the other commodity needed by data centers, namely electric power, will also rise in price unless a lot of new suppliers rush into the market.  Unfortunately, increasing the supply of electricity overnight is well-nigh impossible. 

 

The U. S. electric-utility industry is coming off a period when demand was increasing slowly, if at all.  Because both generation and transmission systems take years to plan and build, the industry was expecting only gradual increases in demand for both, and planned accordingly.  But only in the last couple of years has it become clear that data centers are like the hungry baby bird of the utility industry:  mouths always open demanding more. 

 

This is putting a severe strain on the existing power grid, and promises to get worse if the rate of data-center construction keeps up its current frenetic pace.  Witt cites an analysis by Bloomberg showing that wholesale electricity costs near data centers have about doubled in the last five years.  And at some point, the power simply won't be available no matter what companies like CoreWeave are willing to pay.  If that happens, the data centers may move offshore to more hospitable climes where power is cheaper, such as China.  As China's power still comes largely from coal, that would bode no good for the climate.
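
A doubling over five years corresponds to a steep compound annual growth rate, which is easy to check:

```python
# If wholesale prices near data centers doubled over five years, the
# implied compound annual growth rate is (2)^(1/5) - 1.
rate = 2 ** (1 / 5) - 1
print(f"{rate:.1%} per year")   # about 14.9% per year
```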

 

Witt compares the current data-center boom to the U. S. railroad-building boom of the nineteenth century, which consumed even more of the country's GNP than data centers are doing now.  That boom resulted in overbuilding and a crash, and there are signs that something similar may be in the offing with AI.  Something that can't go on forever must eventually stop, and besides limitations in power production, another limit that may be even harder to overcome is the finite amount of data available for training.  Witt says there are concerns that in less than a decade, AI developers could use up the entire "usable supply of human text."  Of course, once that's done, the AI systems can still deal with it in more sophisticated ways.  But lawyers are starting to go to work suing AI developers for copyright infringement.  Recently, the AI developer Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit brought by writers whose material was used without permission.  That is a drop in the bucket of the trillions sloshing around in the AI business, but once the lawyers get in their stride, the copyright-infringement leak in the bucket might get bigger.

 

The overall picture is of a disruptive new technology wildly boosted by sheep-like business leaders to the point of stressing numerous more traditional sectors, and causing indirect distress to electricity consumers, namely everybody else.  Witt cites Jevons's Paradox in this connection, which says increasing the efficiency with which a resource is used can cause it to be used even more.  A good example is the use of electricity for lighting.  When only expensive one-use batteries were available to power noisy, inefficient arc lamps, electric lighting was confined to the special-effects department of the theatrical world.  But when Edison and others developed both efficient generators and more efficient incandescent lamps, the highly price-sensitive market embraced electric lighting, which underwent a boom comparable in some ways to the current AI boom. 
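
Jevons's Paradox is easy to illustrate with made-up numbers: if lighting becomes ten times more efficient, but people respond to cheap light by using twenty times as many lumens, total energy consumption doubles rather than falls:

```python
# Toy illustration of Jevons's Paradox (all numbers invented):
# a tenfold efficiency gain cuts the energy cost per lumen tenfold,
# but a twentyfold jump in demand more than cancels the savings.
old_energy_per_lumen = 10.0   # arbitrary units
new_energy_per_lumen = 1.0    # ten times more efficient
old_lumens = 100.0
new_lumens = 2_000.0          # demand response to cheaper light

old_total = old_energy_per_lumen * old_lumens    # 1,000
new_total = new_energy_per_lumen * new_lumens    # 2,000
print(new_total / old_total)  # 2.0 -- consumption doubled despite efficiency
```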

 

Booms always overshoot to some degree, and we don't yet know what overbuilding or saturation looks like in AI development.  The market for AI is so new that pricing structures are still uncertain, and many firms are operating at a loss in an attempt to gain market share.  That can't go on forever either, and so five years from now we will see a very different picture in the AI world than the one we see now. 

 

Whether it will be one of only modest electric-price increases and a stable stock of data centers, or a continuing boom in some out-of-the-way energy-rich and regulation-poor country remains to be seen.  Independent of the morality and social influences of AI, the sheer size of the hardware footprint needed and its insatiable demand for fresh human-generated information may place natural limits on it.  After the novelty wears off, AI may be like a new guest at a party who has three good jokes, but after that can't say anything that anybody wants to listen to.  We will just have to wait and see.

 

Sources:  Stephen Witt's article "Information Overload" appeared on pp. 20-25 of the Nov. 3, 2025 issue of The New Yorker.  I also referred to an article from the University of Wisconsin at https://ls.wisc.edu/news/the-hidden-cost-of-ai and Wikipedia articles on Nvidia and Jevons Paradox.