Monday, November 24, 2025

AI Ghosts and the Yuck Factor

 

The December issue of Scientific American features an article by David Berreby that gets personal.  Berreby's father was born in 1927, the same year as my father, and died in 2013.  Yet the article opens with Berreby asking, "How is your existence these days?" and getting a reply:  "... Being dead is a strange experience."

 

In this conversation, Berreby is using a generative-artificial-intelligence (genAI) version of his father to investigate what it is like to interact with an AI ghost:  a digital simulation of a dead loved one that some psychologists and computer scientists are promoting as an aid for grieving people.

 

I'll be frank about my initial reaction to this idea.  I thought it was terrible.  The "yuck factor" is a phrase popularized by ethicist Leon Kass to describe a gut-level negative reaction to something.  He says we should at least pay attention whenever we have such a reaction, because the sensation may embody wisdom that we can't articulate.  The AI-ghost idea reminded me of Norman Bates, the disturbed protagonist of Alfred Hitchcock's movie Psycho, who kept his mother around long after her bury-by date and talked with her as though she were still alive.

 

And to his credit, Berreby admits that there may be dangers for some people whose mental makeup makes them vulnerable to confusing fiction with reality, and who could become harmfully addicted to this type of AI.  But in the limited number of cases examined (only 10 in one study) in which grieving patients were encouraged to interact with AI ghosts, they all reported positive outcomes and a better ability to interact with live humans.  As one subject commented, "Society doesn't really like grief."  Who better to discuss your feelings of loss with than an infinitely patient AI ghost who is both the cause and the solace of your grief?

 

Still, it troubles me that AI ghosts could become a widespread way of coping with the death of those we love.  One's worldview context is important here. 

 

Historically, death has been viewed as the portal to the afterlife.  Berreby chose to title his article "Mourning Becomes Electric," a takeoff on Eugene O'Neill's play cycle "Mourning Becomes Electra," which itself was based on the Oresteia play cycle by Aeschylus, the Greek playwright who died around 450 B. C.  In the plays, Aeschylus describes the tragic murder of the warrior Agamemnon by his unfaithful wife Clytemnestra, and how gods interacted with humans as things went from bad to worse.  That reference, and a few throwaway lines about ectoplasm and Edison's boast that if there was life after death, he could detect it with a new invention of his, are the only hints in the article that the dead might still exist in any meaningful way.

 

If you believe that death is the final end of the existence of any given human personality, and you miss interacting with that personality, it only makes sense to use any technical means at your disposal to scratch that itch and conjure up your late father, mother, or Aunt Edna.  Berreby quotes Amy Kurzweil, artist and daughter of famed transhumanist Ray Kurzweil, as saying that we don't usually worry that children will expect the real Superman to show up in an emergency, because they learn early to distinguish fiction from reality.  And so she isn't concerned that grieving people will start to treat AI ghosts like the real person the machine is intended to simulate.  It's like looking at an old photo or video of a dead person:  there's no confusion, only a stimulus to memory, and nobody complains about keeping photos of our dear departed around.

 

In the context of secular psychology, where the therapeutic goal is to minimize distress and increase the feeling of well-being, anything that moves the needle in that direction is worth doing.  And if studies show that grieving people feel better after extensive chats with custom-designed AI ghosts, then that's all the evidence therapists need that it's a useful thing to do.

 

The article is written in the nearly-universal etsi Deus non daretur style—a Latin phrase meaning roughly "as though God doesn't exist."  And in a secular publication such as Scientific American, this is appropriate, I suppose, though it leaves out the viewpoints of billions of people who believe otherwise.  But what if these billions are right?  That puts a different light on the thing.

 

Even believers in God acknowledge that grieving over the loss of a loved one is an entirely appropriate and natural response.  Last summer, a couple we have known for 45 years was praying at the breakfast table when the man suddenly had a massive hemorrhagic stroke; he died later that same day.  It was a terrible shock, and at the funeral there were photos and memorabilia of him to remind those in attendance of what he was like.  But everyone there had a serene confidence that David Jenkins was in the hands of a merciful God.  While it was a sad occasion, there was an undercurrent of bottomless joy:  the joy we knew he was now enjoying, and one that we the mourners shared in ways that cannot be fully expressed in writing.

 

In Christian theology, an idol is something that humans create which takes the place of God.  While frank ancestor worship is practiced by some cultures, and is idolatry by definition, a more subtle temptation to idolatry is offered by AI chatbots of all kinds, and especially by AI ghosts. 

 

While I miss my parents, they died a long time ago (my father was the last to go, in 1984).  I will confess to writing a note to my grandmother once, not long after she died.  Richard Feynman did something similar, writing a note to his late wife, who died tragically young of tuberculosis, and a less likely believer in the supernatural would be hard to find.

 

I suppose it might do no harm for me to cobble up an AI ghost of my father.  But for me, anyway, the temptation to credit it with more existence than it really would have would be strong, and I will take no steps in that direction. 

 

As for people who don't believe in an afterlife, AI ghosts may help them cope for a while with death.  But only the truth will make them free of loss, grieving, and the fear of death once and for all.  And however good an AI ghost gets at imitating the lost reality of a person, it will never be the real thing.

 

Sources:  "Mourning Becomes Electric" by David Berreby appeared on pp. 64-67 of the December 2025 issue of Scientific American.  I referred to Wikipedia articles on "Wisdom of repugnance" and "Oresteia." 

Monday, November 17, 2025

Can AI Make Roads Safer?

 

The answer is almost certainly yes, as a recent Associated Press report shows. 

 

I have been less than complimentary in my treatment of artificial intelligence in some of these columns.  Any new technology can have potential hazards, and one of the main tasks of those who do engineering ethics is to examine technologies for problems that might occur in the future as well as current concerns.  But there's no denying that AI has potential for saving lives in numerous ways, including the improvement of road safety.  The AP article highlights AI's contributions to road safety, mainly in the area of information gathering to allow road crews to allocate their resources more intelligently.

 

Road maintenance is unusual in that the physical extent of the property a government or organization is responsible for exceeds that of almost any private entity.  Just in my town of San Marcos, with about 70,000 people, there are hundreds of stop signs, dozens of signals, and hundreds of miles of road.  And in the whole state of Texas there are over 680,000 "lane miles" of road, more than in any other state.  Just inspecting those miles for damage such as potholes, worn-out signs, and debris is a gargantuan task.  Especially if a problem shows up in a low-traffic remote area, it could be years before highway workers are even aware of it, and longer before it gets fixed.

 

In Hawaii, the highway authorities are giving away 1,000 dashcams equipped to send what they see on the road to a central AI processing facility which will pick up on things like damaged guardrails, faulty signs, and other safety hazards.  The ability of sophisticated AI software to pore through millions of photos for specific problems like these makes it possible to use the floods of data from cameras to do this with minimal human involvement.  Hawaii has to import virtually all its equipment and supplies for road maintenance, so resource allocation there is especially important.
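Under the hood, a pipeline like this mostly amounts to running a trained vision model over incoming frames and then aggregating whatever it flags by location, so that repeat sightings of the same hazard rise to the top of a maintenance crew's list.  As a purely illustrative sketch (the hazard names, confidence threshold, and data layout below are my own assumptions, not Hawaii's actual system), the triage step might look roughly like this in Python:

# Minimal sketch (not Hawaii's actual system): aggregating hazard detections
# from dashcam frames so road crews can prioritize repairs. The detection
# scores are stand-ins for whatever a trained vision model would output.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Detection:
    lat: float          # where the frame was taken
    lon: float
    hazard: str         # e.g. "pothole", "damaged_guardrail", "faded_sign"
    confidence: float   # model score between 0 and 1

def prioritize(detections, min_confidence=0.8):
    """Group confident detections by rough location and rank by report count."""
    buckets = defaultdict(list)
    for d in detections:
        if d.confidence >= min_confidence:
            # Round coordinates to roughly 100 m so repeat sightings of the
            # same hazard from different vehicles fall into the same bucket.
            key = (round(d.lat, 3), round(d.lon, 3), d.hazard)
            buckets[key].append(d)
    # More independent sightings means higher priority for a maintenance crew.
    return sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)

if __name__ == "__main__":
    reports = [
        Detection(21.3099, -157.8581, "pothole", 0.92),
        Detection(21.3100, -157.8582, "pothole", 0.88),
        Detection(21.4389, -158.0001, "damaged_guardrail", 0.95),
    ]
    for (lat, lon, hazard), sightings in prioritize(reports):
        print(f"{hazard} near ({lat}, {lon}): {len(sightings)} sighting(s)")

The interesting engineering is in the vision model itself, of course; the point of the sketch is only that once detections exist, turning them into a ranked work order is simple bookkeeping.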

 

San Jose, California, has mounted cameras on street sweepers and found that they identified potholes at a 97% rate; the city will now expand the program to parking-enforcement vehicles.  And driver cellphone data can pinpoint issues such as a stop sign hidden by bushes at a particular intersection in Washington, D. C., which AI software flagged after the data showed drivers braking suddenly there.  The mayor of San Jose, a former tech entrepreneur, hopes that cities will begin to share their databases so that problems common to more than one area can be identified more quickly.

 

The application of AI to identifying road maintenance needs seems to be one of the most benign AI applications around.  The information being gathered is not personal.  Rather, it's simply factual data about the state of the road bed and its surroundings.  While Hawaii and the other places that use dashcam footage or anonymized cellphone data do rely on private citizens to gather it, their intent is not to spy on the people whose dashcams or phones are sensing the information.  And it would be hard to exploit such databases for illicit purposes, although some evil people can twist even the best-intended system to nefarious ends.

 

All the data in the world won't help fix a pothole if nobody goes out and fixes it, of course.  But the beauty of AI-assisted data gathering is that a much better global picture of the state of the road inventory becomes available, allowing officials to prioritize their maintenance resources accordingly.  A dangerous pothole in a part of town where nobody much goes or complains won't be ignored as long now that AI is being used to find it.  And data-sharing among municipal and state governments seems to have mostly upsides, although due precautions would have to be taken to make sure the larger accumulation of data isn't hacked.

 

As autonomous vehicles become more widespread, the database of road hazards could be made available to driverless cars, which would then "know" hazards to avoid even before the hazards are repaired.  To a limited extent, this is happening already. 

 

Whenever my wife runs Waze on her phone during a trip, the voice warns us of stalled vehicles or law-enforcement personnel before we get to them.  I've often wondered how the system obtains such ephemeral information.  It seems almost inevitable that it uses anonymized location data from the stalled cars themselves, which gives one a slightly creepy feeling.  But if it keeps somebody from plowing into me some day while I'm fixing a tire on the side of the road, I'll put up with the creepy feeling in the meantime.

 

Highway fatalities in the U. S. have been declining overall since at least the 1960s, reaching a minimum of less than 33,000 a year in 2011.  They then rose significantly, peaking after COVID at about 43,000 in 2021, and have declined only slightly since.  Part of the increase has been attributed to the rise in cellphone use, although that is difficult to disentangle from many other factors.  While most traffic accidents are due to driver error, bad road conditions can also contribute to accidents and fatalities, and everything we can do to minimize this factor will help to save lives.

 

The engineering idealists among us would like to see autonomous vehicles taking over, as there are some indications that accidents per road mile for such vehicles can be lower than for cars with human drivers.  But the comparison does not take into account the fact that most truly autonomous cars operate over highly restricted areas such as city centers, where circumstances are fairly predictable and well known.  General suburban or rural driving seems to pose serious challenges for autonomous vehicles, and until they can prove that they are safer than human-driven vehicles in every type of driving environment, it's not clear that replacing humans with robots behind the wheel will decrease traffic fatalities overall, even if the robots get clued in about road hazards from a national database.

 

At least in this country, the citizens will decide how many driverless cars get on the road, and for the foreseeable future we will have mostly humans behind the wheel.  It's good to know that AI is helping to identify and fix road hazards, but even if such systems work perfectly, other things can go wrong on the road.

 

Sources:  The AP article "Cities and states are turning to AI to improve road safety" by Jeff McMurry appeared on the AP website on Nov. 15, 2025 at https://apnews.com/article/ai-transportation-guardrails-potholes-hawaii-san-jose-9b34a62b2994177ece224a8ed9645577.  I also referred to the website https://blog.cubitplanning.com/2010/02/road-miles-by-state for the lane-miles figures and https://cdan.dot.gov/tsftables/Fatalities%20and%20Fatality%20Rates.pdf for road fatality rates.


Monday, November 10, 2025

Questions About UPS Flight 2976

 

At 5:15 PM on Tuesday, November 4, UPS Flight 2976 bound for Hawaii took off from Louisville Muhammad Ali International Airport in Kentucky.  Louisville is the main worldwide UPS hub, from which millions of packages are shipped weekly on aircraft such as Flight 2976's three-engine McDonnell-Douglas MD-11.  The MD-11 is somewhat of an orphan:  it was originally developed as a wide-body passenger aircraft in competition with Boeing's 767, but only a couple hundred were built before production shut down in 2000, after Boeing acquired McDonnell-Douglas.  As with most of the surviving MD-11s, this one, originally owned by Thai Airways, was later converted to freight service, and it was 34 years old at the time of the accident.

 

Almost simultaneously with rotation, the left engine and its supporting pylon separated from the wing, and a fire broke out.  An alarm bell went off in the cockpit, and for the next 25 seconds Captain Richard Wartenberg and First Officer Lee Truitt struggled to control the plane.  But after reaching an altitude of only 100 feet, the plane began to roll to the left.  It tore a 300-foot gash in a UPS warehouse south of the airport, left a blazing trail of fuel along its path, and collided with tanks at an oil-recycling facility, leading to explosions and a much bigger fire before the bulk of the plane came to rest in a truck parking area and an auto junkyard.  Besides the three crew members, including Relief Officer Captain Dana Diamond, eleven people on the ground died and about as many were injured, some critically.  This was the most fatalities incurred in any UPS flight accident.  On Saturday, the FAA temporarily grounded all MD-11s so they could be inspected in case a mechanical defect was at fault.

 

This crash was one of the best-documented ones in recent memory, as it was in full view of a major road (Grade Lane), many security cameras, and numerous stationary and moving dashcams.  One dashcam video posted on the Wikipedia site about the crash shows a blazing trail of fuel sweeping from right to left across the scene, engulfing trucks and other structures within seconds.  An aerial view from south of the airport looking north shows what looks like the path of a tornado, as a wide swath of destruction leads from the runway to the foreground. 

 

As the integrity of the support pylons is necessary for the structural integrity of the entire aircraft, aerospace engineers normally make sure that the engines are fastened really well to the rest of the plane.  Typically, there are multiple points of attachment between the engine and the pylon, and the pylon itself is a structural member that is permanently affixed to the wing.  It's possible that the last time the engine was detached from the plane, somebody didn't finish the job of reattaching it, but because of the multiple attachment points it's unlikely that such a mistake would lead to the whole assembly falling off.

 

Instead, my uninformed non-mechanical-engineer's initial guess is that fatigue or some other issue weakened the pylon's attachment to the wing, causing a crack or cracks that eventually led to the failure of the attachment point, which would let the engine and pylon fall off as they did.  And it's natural that this would occur at a moment of maximum stress on the pylon, which occurs during takeoff.

 

Planes are supposed to be inspected regularly for such hidden flaws.  But sometimes they can show up in inaccessible areas that might require X-ray or ultrasound equipment to detect.  That is the main reason that the FAA has grounded the remaining fleet of MD-11s: so they can be inspected for similar flaws. 

 

This early in the investigation, it's unclear whether the pieces of the aircraft will tell a definite story of what happened.  It's a good sign that the left engine was recovered presumably without major fire damage near the runway, as the end of the attached pylon will give investigators a lot of information about how the thing came loose. 

 

Inspections and maintenance are boring compared to design and construction, and so they sometimes get short shrift in an organization with limited resources.  But there's an engineering version of the old saying, "the price of liberty is eternal vigilance."  It goes something like, "the price of reliable operation is regular maintenance."  I'm facing a much smaller-scale but similar situation here in my own home. 

 

In Texas, air conditioning has become a well-nigh necessity, a fact recognized by everyone except the Texas legislature, which steadfastly refused once again this year to use some of the budget surplus to air-condition all Texas prisons.  (Sorry for the soapbox moment, but I couldn't resist.)  Anyway, every spring and fall I have an HVAC company come out and inspect our heat-pump heating and cooling unit.  Last spring they said it needed a new contactor that was about to go out, and the fan motor bearing didn't look too good, but it was otherwise okay.

 

Things changed over the summer.  Now the evaporator has sprung three leaks, the compressor has been working so hard that its insulation and capacitor are compromised, and to make a long sad story short, we need a whole new unit.

 

I could have just ignored matters till something major disabled the unit:  the compressor shorting out, the fan motor freezing, any number of things.  As often happens in such cases, it might have failed either in the middle of the coldest day of the year, or next August when the thermometer reads 102 in the shade.  Not wishing for such emergencies, I choose to have regular maintenance checks, which have paid off, both for me and for the HVAC people who get to install a new unit under less-than-urgent conditions.

 

My sympathy is with those who lost loved ones both in the air and on the ground in the crash of Flight 2976.  And my hope is that if lack of maintenance is found to be a contributing cause, the grounding of the other MD-11s will prevent another accident like the one we saw last Tuesday.

 

Sources:  I referred to an article on the FAA action at https://abcnews.go.com/US/final-moments-ups-plane-crash-detailed-ntsb/story?id=127313407, a comment on engine support pylons at https://aviation.stackexchange.com/questions/79872/what-are-different-components-of-an-engine-pylon, and the Wikipedia articles on MD-11 and UPS Airlines Flight 2976.

Monday, November 03, 2025

Can We Afford to Power AI?

 

That is the question posed by Stephen Witt's article in this week's New Yorker, "Information Overload."  Witt, a biographer of Nvidia founder Jensen Huang, has toured several of the giant data centers operated by CoreWeave, the leading independent data-center operator in the U. S.  He brings from these visits and interviews some news that may affect everyone in the country, whether or not you think you use artificial-intelligence (AI) services:  the need to power the growing number of data centers may send electric-power costs through the roof.

 

Witt gives a fascinating inside view of what actually goes on in the highly secure, anonymous-looking giant buildings that house the Nvidia equipment used by virtually all large-scale AI firms.  Inside are rack after rack of "nodes," each node holding four water-cooled graphics-processing units (GPUs), which are the workhorse silicon engines of current AI operations.  Thousands of these nodes are housed in each data center, and each runs at top speed, emphasizing performance over energy conservation. 

 

Gigawatts of power are consumed both in the training phase of AI implementation, which feeds on gigabytes of raw data (books, images, etc.) to develop the weights an AI network uses, and in the "inference" phase, which basically means answering queries.  Both training and inference use power, although training appears to demand more intense bursts of energy.
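To get a feel for why the numbers climb so quickly, here is a back-of-the-envelope calculation.  Every figure in it is an illustrative assumption of mine (node count, per-GPU draw, cooling overhead, and electricity price), not anything reported by Witt or CoreWeave:

# Back-of-the-envelope sketch of data-center power demand. All figures are
# illustrative assumptions for the arithmetic, not CoreWeave's or Nvidia's
# actual numbers.

NODES = 10_000            # nodes in one hypothetical data center
GPUS_PER_NODE = 4         # as described in the article
WATTS_PER_GPU = 1_000     # assumed draw per high-end GPU under full load
OVERHEAD = 1.4            # assumed multiplier for cooling, networking, etc.

power_mw = NODES * GPUS_PER_NODE * WATTS_PER_GPU * OVERHEAD / 1e6
annual_mwh = power_mw * 24 * 365
annual_cost = annual_mwh * 1_000 * 0.08   # assuming $0.08 per kWh wholesale

print(f"Continuous draw: {power_mw:.0f} MW")          # about 56 MW
print(f"Annual energy:   {annual_mwh:,.0f} MWh")      # about 490,000 MWh
print(f"Annual cost:     ${annual_cost:,.0f}")        # about $39 million

Even under these modest assumptions, a single facility draws as much power as a small city, around the clock, and a company operating dozens of them is quickly into the gigawatts.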

 

The global economy has taken a headlong plunge into AI, making Nvidia the first company worldwide to surpass $5 trillion in market capitalization.  (Just in case anybody's thinking about a government takeover of Nvidia, $5 trillion would pay off only about one-eighth of the U. S. government's debt, but it's still a lot of money.)  With about 90% of the market share for GPU chips, Nvidia is in a classic monopoly position, and significant competitors are not yet on the horizon.

 

Simple rules of supply and demand tell us that the other commodity needed by data centers, namely electric power, will also rise in price unless a lot of new suppliers rush into the market.  Unfortunately, increasing the supply of electricity overnight is well-nigh impossible. 

 

The U. S. electric-utility industry is coming off a period when demand was increasing slowly, if at all.  Because both generation and transmission systems take years to plan and build, the industry was expecting only gradual increases in demand for both, and planned accordingly.  But only in the last couple of years has it become clear that data centers are like the hungry baby bird of the utility industry:  mouths always open demanding more. 

 

This is putting a severe strain on the existing power grid, and promises to get worse if the rate of data-center construction keeps up its current frenetic pace.  Witt cites an analysis by Bloomberg showing that wholesale electricity costs near data centers have about doubled in the last five years.  And at some point, the power simply won't be available no matter what companies like CoreWeave are willing to pay.  If that happens, the data centers may move offshore to more hospitable climes where power is cheaper, such as China.  As China's power still comes largely from coal, that would bode no good for the climate.

 

Witt compares the current data-center boom to the U. S. railroad-building boom of the nineteenth century, which consumed even more of the country's GNP than data centers are doing now.  That boom resulted in overbuilding and a crash, and there are signs that something similar may be in the offing with AI.  Something that can't go on forever must eventually stop, and besides limitations in power production, another limit that may be even harder to overcome is the finite amount of data available for training.  Witt says there are concerns that in less than a decade, AI developers could use up the entire "usable supply of human text."  Of course, once that's done, AI systems can still process what they already have in more sophisticated ways.  But lawyers are starting to go to work suing AI developers for copyright infringement.  Recently, the AI developer Anthropic paid $1.5 billion to settle a class-action lawsuit brought by authors whose material was used without permission.  That is a drop in the bucket of the trillions sloshing around in the AI business, but once the lawyers get in their stride, the copyright-infringement leak in the bucket might get bigger.

 

The overall picture is of a disruptive new technology wildly boosted by sheep-like business leaders to the point of stressing numerous more traditional sectors, and causing indirect distress to electricity consumers, namely everybody else.  Witt cites Jevons's Paradox in this connection, which says increasing the efficiency with which a resource is used can cause it to be used even more.  A good example is the use of electricity for lighting.  When only expensive one-use batteries were available to power noisy, inefficient arc lamps, electric lighting was confined to the special-effects department of the theatrical world.  But when Edison and others developed both efficient generators and more efficient incandescent lamps, the highly price-sensitive market embraced electric lighting, which underwent a boom comparable in some ways to the current AI boom. 
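The paradox is easy to see with a little arithmetic.  In the toy calculation below, the demand curve and elasticity are assumptions I picked only to show the effect:  when demand for a service is elastic enough, every improvement in efficiency ends up increasing total resource consumption rather than reducing it.

# Toy illustration of Jevons's Paradox: if demand for a service is elastic
# enough, making the service more efficient (cheaper per unit of service)
# can raise total resource consumption. The numbers and elasticity here are
# assumptions chosen only to show the effect.

def resource_use(efficiency, base_demand=100.0, base_price=1.0, elasticity=1.5):
    """Resource consumed when one unit of resource delivers `efficiency`
    units of service, under constant-elasticity demand."""
    price_per_service = base_price / efficiency      # efficiency cuts the effective price
    demand = base_demand * (base_price / price_per_service) ** elasticity
    return demand / efficiency                       # resource needed to meet that demand

for eff in (1.0, 2.0, 4.0):
    print(f"efficiency x{eff}: resource used = {resource_use(eff):.0f}")
# With elasticity greater than 1, doubling efficiency here increases total
# resource use by a factor of about 1.4: 100 -> 141 -> 200.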

 

Booms always overshoot to some degree, and we don't yet know what overbuilding or saturation looks like in AI development.  The market for AI is so new that pricing structures are still uncertain, and many firms are operating at a loss in an attempt to gain market share.  That can't go on forever either, and so five years from now we will see a very different picture in the AI world than the one we see now. 

 

Whether it will be one of only modest electric-price increases and a stable stock of data centers, or a continuing boom in some out-of-the-way energy-rich and regulation-poor country remains to be seen.  Independent of the morality and social influences of AI, the sheer size of the hardware footprint needed and its insatiable demand for fresh human-generated information may place natural limits on it.  After the novelty wears off, AI may be like a new guest at a party who has three good jokes, but after that can't say anything that anybody wants to listen to.  We will just have to wait and see.

 

Sources:  Stephen Witt's article "Information Overload" appeared on pp. 20-25 of the Nov. 3, 2025 issue of The New Yorker.  I also referred to an article from the University of Wisconsin at https://ls.wisc.edu/news/the-hidden-cost-of-ai and Wikipedia articles on Nvidia and Jevons Paradox.

Monday, October 27, 2025

Asking the Wrong Question About Artificial General Intelligence

 

An article in the October issue of IEEE Spectrum investigates the status of artificial-intelligence IQ tests, and speculates on when and whether we will see the arrival of so-called artificial general intelligence (AGI), which author Matthew Hutson says is "AI technology that can match the abilities of humans at most tasks."  But mostly unasked in the article is an even more basic question:  what do we mean by human intelligence? 

 

To be fair, Hutson has done a good job of surveying several popular benchmark tests for AI intelligence.  One of the more popular is the Abstraction and Reasoning Corpus (ARC for short), developed by François Chollet, which has become something of a go-to standard; charts in the article show the scores of over a dozen different AI programs on various versions of Chollet's tests.  Engineers like numbers, and standardizing a test is a good thing as long as the test measures what you want to know.  But does ARC do that?

 

The version of the ARC test described in the article consists largely of inferring, from a few examples, the rule that turns one pattern of colored figures into another, and then applying that rule to a new pattern.  Human beings can score higher than AI systems on these tests, although the systems are improving.  But it's an open question as to whether abstracting patterns from geometric shapes has a lot to do with being generally intelligent.
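To make the format concrete, here is a toy sketch of what "abstracting a rule from examples" means when the patterns are small grids of colored squares encoded as integers.  Real ARC tasks are far more varied, and this is my own illustration, not Chollet's code:

# Toy version of an ARC-style task: infer a grid-transformation rule from
# example input/output pairs, then apply it to a test grid. This sketch only
# checks a few hand-picked candidate rules; real ARC tasks are open-ended.

def flip_horizontal(grid):
    return [list(reversed(row)) for row in grid]

def transpose(grid):
    return [list(col) for col in zip(*grid)]

def invert_colors(grid, palette=9):
    return [[palette - cell for cell in row] for row in grid]

CANDIDATE_RULES = {
    "flip_horizontal": flip_horizontal,
    "transpose": transpose,
    "invert_colors": invert_colors,
}

def infer_rule(examples):
    """Return the first candidate rule consistent with every example pair."""
    for name, rule in CANDIDATE_RULES.items():
        if all(rule(inp) == out for inp, out in examples):
            return name, rule
    return None, None

if __name__ == "__main__":
    examples = [
        ([[1, 2], [3, 4]], [[2, 1], [4, 3]]),
        ([[5, 0, 7]], [[7, 0, 5]]),
    ]
    name, rule = infer_rule(examples)
    print("inferred rule:", name)                   # flip_horizontal
    print("applied to test grid:", rule([[8, 9]]))  # [[9, 8]]

The hard part, of course, is that the real test does not hand the solver a short list of candidate rules; the rule has to be invented on the spot, which is exactly the ability Chollet wants to measure.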

 

Chollet feels that his test measures the ability to acquire new abilities easily, which he thinks is the prime measure of intelligence.  Whether it actually does that is debatable, but it looks like ARC is to the AI world what the old Stanford-Binet IQ test is to people.  That IQ test was developed over a century ago and is now in its fifth edition.

 

Hutson comes close to the problem when he admits that "notions of intelligence vary across place and time."  The Stanford-Binet test is mainly used to identify people who don't fit in well with the public-school system, which is mainly designed to produce worker bees for the modern economy.  As the modern economy is shifting all the time, what counts as intelligence does too. 

And even if we could perfectly track these shifts, the admittedly infinite array of "tasks" that people perform presents an almost insurmountable problem to anyone who wants not only to define, but also to evaluate, something that could justifiably be called artificial general intelligence.

 

Geoffrey Hinton, who recently won a Nobel Prize for his work on AI, is quoted in the article as saying that if an AI robot could successfully do household plumbing, that would be a milestone in AGI, and he thinks it's still about ten years off.  I hope I'm around in ten years to check this prediction, which I personally feel is optimistic.  For one thing, humanoid robots will have to get a lot cheaper before people even consider using one to fix a toilet.

 

All these approaches to AGI ignore a distinction in the field of human psychology which was first pointed out by Aristotle.  The distinction has been described in various ways, but the most succinct is to differentiate between perceptual thought and conceptual thought. 

 

Perceptual thought, which humans share with other animals and machines, consists in perceiving, remembering, imagining, and making associations among perceptions and memories, broadly speaking.  Inanimate material objects like computers can display perceptual thought, and in crawling the Internet for raw material by which to answer queries, all AI chatbots and similar systems use perceptual thought, which ultimately has to do with concrete individual things.

 

On the other hand, conceptual thought involves the consideration of universals:  freedom, for example, or the color blue, or triangularity as a property of a geometric figure, as opposed to considering any individual triangle.  There are good reasons to believe that no strictly material system (and this includes all AI) can engage in truly conceptual thought.  With suitable programming by humans, a computing system may provide a good simulation of conceptual thought, as a movie provides a good simulation of human beings walking around and even engaging in conceptual thought.  But a movie is just a sequence of images and sounds, and can't respond to its environment in an intelligent way.

 

Neither can an AI program engage in conceptual thought, although by finding examples of such thought in its training, it can provide a convincing simulation of it.  While having a robot do plumbing is all very well, the real goal sought by those who want to achieve AGI is human-likeness in every significant respect.  And a human incapable of conceptual thought would at the least be considered severely disabled, though still worthy of respect as a member of the human community.

 

The vital and provable distinction between perceptual and conceptual thought has been all but forgotten by AI researchers and the wider culture.  But if we ignore it, and allow AI to take over more and more tasks formerly done by humans, we will surround ourselves with concept-free entities.  This will be dangerous. 

 

A good example of a powerful concept-free entity is a tiger.  If you walk into the cage of a hungry tiger, all it sees in you is a specific perception:  here's dinner.  There is no reasoning over abstractions with a tiger, just a power struggle in which the human has a distinct disadvantage.

 

Aristotle restricted the term "intellect" to mean that part of the human mind capable of dealing with concepts.  It is what distinguishes us from the other animals, and from every AI system as well.  Try as they might, AI researchers will not be able to develop anything that can entertain concepts.  And attempts to replace humans in jobs where concepts are important, such as almost any occupation that involves dealing with humans as one ethical being to another, can easily turn into the kind of hungry-tiger encounter that humans generally lose.  Anyone who has struggled with an AI-powered phone-answering system to gain the privilege of talking with an actual human being will know what I mean.

 

ARC may become the default IQ test for new AI prototypes vying for the title of AGI.  But the concept symbolized by the acronym AGI is itself incomprehensible to AI.  As long as there are humans left, we will be the ones awarding the titles, not the AI bots.  But only if they let us.

 

Sources:  Matthew Hutson's article "Can We Build a Better IQ Test for AI?" appears on pp. 34-39 of the October 2025 issue of IEEE Spectrum.  I also referred to the Stanford Encyclopedia of Philosophy article on Aristotle.  For a more detailed argument about why AI cannot perform conceptual thought, see the article in AI & Society "Artificial Intelligence and Its Natural Limits," by Karl Stephan and Gyula Klima, pp. 9-18, vol. 36 (2021).

Monday, October 20, 2025

Shackleton's Flawed Ship

 

Ernest Shackleton (1874-1922) almost reached the South Pole in 1909, although he lost the title of being first to get there to Roald Amundsen, who achieved that record in 1911.  Shackleton then set his sights on being the first to traverse Antarctica from one side to the other, and for that project purchased the wooden excursion ship Endurance.  He embarked on his grandly-named "Imperial Trans-Antarctic Expedition" on Dec. 5, 1914 from the small South Atlantic island of South Georgia, intending to land at Vahsel Bay in the Weddell Sea, reach the South Pole, and cross to the other side with the aid of a second provisions-laying party.

 

It was a complicated and ambitious undertaking.  Endurance got stuck in the ice in mid-January after nearly reaching Vahsel Bay, and Shackleton decided to wait on board the ship until the following spring, nine months later.  Meanwhile, over the Antarctic winter, the drifting ice slowly moved the ship several hundred miles northwest until the following October, when the spring thaws began to exert extreme pressure on the hull.

 

On October 24, the hull broke, water rushed in, and Shackleton ordered the ship abandoned.  The crew transferred to camps on the ice, and the ship finally sank on Nov. 21, 1915.  This began a series of adventures for Shackleton and his men which would be too long to recite here, but eventually they made it back to something like civilization in August of 1916.

 

Now we fast-forward to 2022, when an equally daring expedition called Endurance22 found Endurance below 3000 meters of water (9900 feet) and did extensive photographic documentation of the wreck, which by international agreement will remain undisturbed. 

The details of what they found about why the Endurance sank are described in a recent UPI report by Stephen Feller.

 

After the wreck was found, researcher Jukka Tuhkuri and his colleagues of Aalto University in Finland conducted an investigation into why Endurance sank.  Even as long ago as the 1910s, shipwrights knew how to construct ships that would withstand the extreme pressures exerted by ice in the Antarctic.  But as their analysis of documents such as ship's plans, diaries, and other sources indicated, Endurance wasn't built that way.

 

On the lowest deck, which contained the boilers and steam engine, there was only one beam that crossed the entire ship from one side to the other.  Ships designed to withstand the compressive forces of ice normally had several such beams spaced along the length of the ship to resist the ice, which otherwise will crack a hull like someone squeezing an eggshell too hard.  But that is apparently what happened to Endurance.  Although it lasted nearly a year stuck in the ice, the stresses caused by the following spring's thaw exceeded its capacity to resist them, and it cracked and sank.

 

According to Tuhkuri, Shackleton was aware that the ship he bought was built only for polar excursions and not for what amounted to ice-breaking duty.  The researchers even found a letter written by Shackleton mentioning that he had recommended adding internal hull-bracing beams to another polar exploration ship that had got stuck in ice but didn't get crushed.  Tuhkuri speculates that Shackleton was in a hurry and bought the ship knowing it wasn't sufficiently braced, but hoping that he could avoid putting it in a situation where the missing beams would be needed.  Unfortunately, that's not what happened.

 

Although he wasn't an engineer, Shackleton was making engineering decisions as he prepared for his grand expedition.  Engineering is the application of limited resources to a technical problem, and that's exactly what Shackleton was doing.  Sometimes the time and expense required to prevent a somewhat unlikely event from happening simply isn't available.  Rather than abort the whole project, Shackleton decided to go ahead, trusting in his navigation skills and previous Antarctic experience to avoid disaster.  But his gamble with the missing beams didn't pay off, and he had to rely even more than he expected to on his survival skills to get him and his crew off the ice and back to civilization.

 

Shackleton's ill-fated expedition reminds me of the Apollo 13 near-disaster, in which three astronauts were stranded in space on their way to the moon in 1970 by an oxygen tank that exploded when some damaged insulation resulted in an internal fire.  The resulting damage led to a long series of improvised solutions to unexpected problems, which the astronauts carried out in coordination with the extensive NASA support staff on the ground.  Like Shackleton, the Apollo 13 team never reached their goal, but simply getting back to civilization after the accident was a bigger triumph than even landing on the moon again. 

 

Apollo 13's accident was caused by a manufacturing flaw, not a design flaw.  Still, the improvisation and backup systems used to rescue the mission were similar to what Shackleton used in getting his crew back safely. 

 

Most of us will never go on expeditions to unexplored lands or planets.  But the civilizational urge is still there, which is why several countries continue to plan both manned and unmanned expeditions to the Moon, Mars, and even farther. 

 

If and when these plans come to fruition, we can count on several things.  One is that not everything will go according to plan.  When engineers encounter a novel situation, despite all the information they can gather about it in advance, there is always something unexpected.  Sometimes it's just a matter for curiosity, but other times it can be a matter of life and death.

 

Another thing is that good engineering practice and planning can provide enough backup resources to allow clever individuals to create a survival plan even in the face of a major disaster.  Losing the Endurance was a big setback, but Shackleton had packed enough auxiliary supplies in the form of food, shelter, and other necessaries, so that he and his crew could perform the extraordinary feat of extracting themselves from what must have looked like certain death at times.  And the ingenuity of NASA engineers combined with the intrepid actions of the Apollo 13 crew to get them safely home, despite major damage to the Service Module that contained the oxygen tank which blew up.

 

Companies such as SpaceX are now leading the way into space, and it remains to be seen how well they balance the goals of achieving a mission first at any cost, including death, versus proceeding more slowly with more backup systems and more thoughtful engineering.  So far, none of the commercial space enterprises has ever lost a life in space.  Let's hope it stays that way as long as possible.

 

Sources:  The article "Shackleton's sunken polar ship may have been weaker than thought" by Stephen Feller was published on the UPI website on Oct. 6, 2025 at https://www.upi.com/Science_News/2025/10/06/shackleton-endurance-ship-crushed-in-ice/9321759774913/.  The Endurance22 website is at https://endurance22.org/.  I also referred to the Wikipedia articles on Apollo 13,  Ernest Shackleton, and Endurance. 

Monday, October 13, 2025

What Managers Think About Replacing Workers With AI

 

It's hard to look at a website, talk to anybody in business, or read a magazine for very long these days without encountering something about artificial intelligence (AI).  One of the biggest concerns for the average symbolic-manipulator employee, in George Gilder's phrase, is whether AI will replace them in their jobs.  Examples of symbolic manipulators are software developers, accountants, writers, and to some degree sales people and counselors.  Hospital nurses and construction workers, on the other hand, are not symbolic manipulators, at least not most of the time.

 

A company called Trio realized that even if AI was available to replace a lot of these folks, somebody would have to decide to do it.  And that somebody would be middle-level managers, for the most part.  So last month, they performed an online survey of about 3000 U. S. managers in all 50 states to find out their attitudes toward replacing their employees with AI.  And the results are illuminating.

 

First of all, when broken down by state, there are wide variations in how enthusiastic managers are about replacing flesh-and-blood workers with AI software.  Openness to doing this varies from a high of 67% in Maine to a low of 8% in Idaho.  Both are fairly rural states, so the difference is hard to account for except by cultural factors.  If I had to guess, I'd say finding good, reliable workers is more of a challenge in Maine, and that may be one reason why managers in our most northeastern state would rather skip the hassle of hiring people and go straight to an AI program.

 

When all states were lumped together, the top reason managers would replace workers with AI turns out to be pressure from upper management or shareholders, at 36%.  Presumably this was one of a list of "choose-one" options handed to the survey respondents.  The next most favored reasons were productivity gains (31%) and cost savings (27%).  This pressure from above makes me think that a sheep-like mentality which now and then manifests itself in the boardroom may be why we are hearing so much about AI replacing workers.  No CEO wants to be left behind in a stampede to the next great thing, even if the thing turns out to be not so great.

 

What is perhaps most disturbing to engineers about the survey results is the kinds of jobs that managers see as most ripe for replacement by AI.  "Technical roles like coding and design" (sounds like engineering to me) were perceived as most replaceable at 33%, while the least replaceable jobs were seen to be sales (11%) and "creative work" (15%).  There are a lot of sales jobs that could pretty easily be replaced by good AI software, but evidently managers still believe in the personal touch that good salespeople can bring to the task.  Engineers and programmers, on the other hand, who are always carried as overhead on budgets, don't have as direct a connection between sales and their salaries as salespeople do.

 

Independent of the question about whether a given job can actually be done better by AI than by a human, this survey looks at those who would be making the immediate decision to do so.  Of course, the options presented to those surveyed were simplified ones.  The fact is that rather than making a simple choice between AI and a human in a given job, what seems to be happening is that almost anybody in the symbolic-manipulation business is adopting some form of AI almost by default—some deliberately and enthusiastically, others (like myself) reluctantly and only if it can't be avoided. 

 

These large workplace shifts tend to be hard to discern over the short term, because they happen gradually.  Take the development of computer-aided design (CAD) software as an example.  My late father-in-law never obtained a four-year college degree, yet in the 1950s he got a good job as a civil engineer and worked for the Texas Highway Department.  If you'd visited him shortly after he went to work there, he would have been sitting at a drafting table in a huge room full of guys (all guys) sitting at drafting tables, churning out drawings that were turned into blueprints for the construction crews working on the new interstate-highway system.

 

Visit that same office today (the building is still there), and you'll see fewer engineers than those old drafting rooms held, and they'll be sitting at computers.  The computer can't design anything by itself, and while the engineer is in some sense in charge of the process, the amount of sheer dogwork handled by the computer far exceeds the mental effort put forth by the engineer, who now does the work of ten or fifteen (or more) of the old drafting-room people.  And the field of civil engineering didn't collapse:  my school (Texas State University) started a new civil-engineering program a few years ago, and we have no problem placing our graduates.

 

The advent of AI, which has actually been going on for a decade at least and isn't as sudden as news reports make it sound, will probably be like the advent of CAD, only more so.  It's easy to forget that the computers need us as much as we need the computers.  Take away the largely-human-produced Internet from ChatGPT, and you'd have a lot of useless server farms on your hands. 

 

There are clearly dangers, of course, if we get too lazy and allow AI to make decisions that should remain in human hands, or minds.  And there are sectors where AI has already done serious damage, such as the harm AI-fueled social media has done to the psychological health of children and teenagers.  But we're not letting the pied piper of AI march away with all our kids.  Schools all across the U. S. are starting to ban smartphone use during classes, and parents are wising up to how harmful too-early use of smartphones can be to young people. 

 

Even if all managers were dying to replace their staff with AI as fast as they could, the software simply isn't available yet.  At the present time, AI has the looks of a fad, indicated by the survey's showing that pressure from upper management is the biggest reason bosses are considering it.  So it's no time to panic, but keep your wits about you and be ready to deal with AI in your job, assuming you still have one.

 

Sources:  The summarized results of the Trio survey can be seen at https://trio.dev/managers-are-ready-to-replace-employees-with-ai/.  The San Marcos Daily Record of Oct. 10, 2025 carried a story on the survey on p. 8, which is how I found out about it, from an old-fashioned piece of paper.  But then I went online. 

Monday, October 06, 2025

Waymo 80% Safer than Human Drivers: Why Not Switch?

 

According to Waymo, the self-driving-car subsidiary of Google's parent company Alphabet that operates robotaxis in several U. S. cities, its vehicles are involved in 80% fewer injury-causing crashes than human-driven cars.  This was a surprise to me, as it may also have been to Kelsey Piper, who writes in a substack called The Argument that if we want to reduce the number of traffic fatalities in the U. S. by 80%, all we have to do is switch to Waymos.

 

At the current rate of nearly 40,000 U. S. traffic fatalities a year, that would mean saving about 31,000 lives a year.  But she cites a poll conducted by The Argument which found overall that only about 28% of respondents favored allowing self-driving cars in their town or city, and 41% favored a ban, as Boston and other cities have considered doing.

 

Piper speculates that a general distaste for AI-powered things may be behind what looks like irrational opposition to self-driving cars.  Some Waymos have even been attacked by modern-day Luddites.  As she puts it, ". . . you can't vandalize ChatGPT, so anti-AI sentiment finds its expression in harassing Waymos."

 

She realizes that anything like a wholesale move to self-driving cars would cause massive disruption to our present transportation system.  She also acknowledges the tendency of government to make optional things mandatory.  Right now, Waymo is one private company offering a specific service in a few carefully chosen markets such as Atlanta, Austin, San Francisco, and Phoenix.  With the possible exception of Atlanta, these are all locations with plenty of sunshine and relatively few days of inclement weather.  And the service in Atlanta only commenced last June, so Waymos in that city have not gone through a typical Atlanta winter, which almost always includes an ice storm, as I know from living there a couple of years.  Ice may present serious challenges to Waymo's algorithms.

 

So there are practical limitations on the areas in which Waymo can operate.  As one of the commenters on Piper's article mentions, Waymo has somewhat cherry-picked markets in which it can compete while maintaining its very good safety record.  Even if Boston decided to allow Waymos, I expect a long development period would delay their deployment as the company figured out how to deal with snow, ice, and slush on the former cowpaths that are now Boston streets.  At least the Waymo cars probably wouldn't get lost as much as I did whenever I drove to Boston when I lived in New England, which was, well, pretty much every time I went there.

 

But there's those 31,000 lives.  As Piper points out, that's more lives than are lost every year to homicides.  Wouldn't it be nice if we could eliminate all homicides?  Well, numerically, changing to self-driving cars would do that. 

 

The psychology of why we tolerate such a lot of annual deaths to traffic accidents is interesting.  People don't always act toward various risks in reasonable ways.  The classic example is driving to the airport to fly somewhere.  Unless you live within walking distance of the airport, you're going to drive or ride.  And unless you take a Waymo there, you will ride in a human-operated car.  While lots of people are afraid of flying much more than they are afraid of driving, the chances of dying in a plane crash are much lower than the chances of dying in a car wreck on the way to or from the airport. 

 

I suspect that we have gotten used to the 40,000 or so traffic deaths every year by telling ourselves some variation on the notion that it always happens to someone else, and that if we were in the situation somebody else died in, we would have been clever enough to save ourselves.  These are pure rationalizations, but the alternative is to experience a little squirt of adrenaline every time we buckle the seat belt and pull out of the garage.

 

Five or ten years ago, there was a lot of hype about how every new car would be self-driving within a few years.  That obviously hasn't happened, for a variety of reasons.  One factor is the expense per vehicle.  Waymo doesn't advertise how much each of its vehicles costs, but various estimates say it's probably on the order of $160,000 to $300,000.  This puts them in the super-luxury class, and explains why they are deployed only in areas that will generate pretty good revenue.  Even with the carefully-chosen markets that Waymo has selected, it appears that the company is not making a profit yet, which means the whole thing is still an elaborate experiment oriented toward some future situation that hasn't materialized.

 

Still, I will admit that if I could have a self-driving car that was absolutely trustworthy, a Level-5 one according to the SAE autonomous-vehicle rating system, one you could read or sleep in while the machine takes care of all driving tasks, and not pay a whole lot more for it than I'm paying now, I would at least consider it.  But in my relatively small town, which would not provide enough revenue for Waymo under its present operating circumstances, that's not going to happen.

 

Now and then on my trips to Austin, I see a Waymo eerily coasting along with nobody inside, and once I saw two in a row.  I will admit to having a flash of mischief when I saw one the first time, and wondering what would happen if you pulled in front of it and slammed on the brakes suddenly.  Probably nothing bad.  Fortunately, I was a passenger in the car I was in, not the driver, and so the experiment was never performed.

 

Unless something radical happens in the areas of AI, sensors, or legislation regarding the right to drive your own vehicle, it looks like Waymo and similar autonomous vehicle companies may not spread their wares much farther than they've already done.  And that's too bad for the 40,000 or so people who die on roadways each year.  Some ideas look good in theory, but when you start examining everything that would have to change to put them into practice, it just falls apart.  And converting our fleet of vehicles to nearly 100% self-driving looks like another one of those nice ideas that has had an unfortunate encounter with reality.

 

Sources:  A note in the online magazine The Dispatch referred me to Kelsey Piper's article "Please let the robots have this one," at https://www.theargumentmag.com/p/please-let-the-robots-have-this-one.  I also referred to a Reddit item on the estimated cost of Waymo vehicles at https://www.reddit.com/r/SelfDrivingCars/comments/1g8vv7o/where_did_the_whole_talk_about_the_cost_of_waymo/, and the Wikipedia article "Waymo."

Monday, September 29, 2025

Life For Teenagers Without Social Media

 

About a year ago, the Associated Press ran an article profiling two teenagers who were bucking the social-media frenzy by consciously limiting their phone use.  Since that time, teenagers without social media have only become more newsworthy.  As of June of this year, fourteen states have enacted some form of statewide ban on the use of cellphones in classrooms, and the trend is for more states to get on the bandwagon. 

 

What is it like for a teenager to do without most of the social media that their peers use?  Reporter Jocelyn Gecker profiled two teenage girls:  Gabriela Durham, who hopes to pursue a dancing career once she graduates from her Brooklyn high school, and Kate Bulkeley, a fifteen-year-old high schooler who is co-president of her Bible study club and has participated in a Model UN conference.  Kate ran into a problem when the other conference participants wanted to exchange only Instagram addresses rather than phone numbers.  And sometimes she relies on friends with Snapchat to tell her about important student government messages.  But overall, she is glad her social-media use is as low as it is.

 

Kate's parents knew that their daughter's school had a cellphone ban, but it wasn't enforced.  They were concerned about the bad publicity surrounding teens' use of social media, so when she became a freshman in high school they told her she couldn't use social media.  Fine with the rule at first, she found as a sophomore that she needed Instagram to coordinate after-school activities.

 

But the 15-year-old says she still uses social media only about two hours a week.  This is far below typical teen use:  according to one study cited in the report, half of today's teens spend more than 35 hours a week on social media.  Kate simply sees most uses of social media as a waste of time, and prefers to use her time studying and encountering friends in the flesh, so to speak.

 

Gabriela Durham received a cellphone as soon as she was old enough to use public transportation in New York.  This was much later than her peers, who have often been using their cellphones since early in elementary school.  Her mother, Elena Romero, enforces a strict ban on social media until her daughters are 18.  The girls have fallen off the wagon only once, secretly using TikTok for a few weeks until Romero found out about it.

           

As a dance major at the Brooklyn High School of the Arts, Gabriela dances outside of school every day.  Dancing and dance practice, plus commuting on the subway, take up time that she might otherwise use for social media.  But she is scandalized by peers who say they log 60 hours or more of social-media use weekly, calling it "insane."

 

Both girls admit there are inconveniences to not being on social media in high school.  So much of what goes on socially happens online, and so they miss out on jokes, memes, rumors, and a lot of other things that people in the pre-cellphone days were familiar with from high school.  But back then you didn't need advanced technology to be in the know.

 

These young women serve as test cases showing that teenagers who make only minimal use of social media can still lead reasonably happy and fulfilling lives.

 

I think it's important to note, however, that both sets of parents began their restrictions by delaying the time when their children received a cellphone for the first time.  As an educator, I long ago learned the lesson that it is much easier to start out with strict rules and then ease them gradually, than it is to begin with laxness and tighten up later. 

 

Parents who have laid no cellphone restrictions on their children and suddenly become convinced that they have to do something may find it very difficult to remove social-media privileges their children have grown accustomed to.  Education, whether in college or the nursery, is a long-term enterprise.  So it behooves parents to give serious thought to how they will deal with cellphones long before their children start asking for one.

 

I am acquainted with a family of five who seem to have negotiated the cellphone issue pretty well so far.  Their oldest child is fourteen.  After she was homeschooled up to the age of twelve, her parents put her in a local Christian school for a year.  But she came home complaining that "those other kids spend all their spare time on their phones!" and her parents eventually moved her to a home-school cooperative, where cellphones are not in evidence.  This shows that good habits can be ingrained in children to the extent that they embody virtuous attitudes themselves, even when placed in tempting situations.

 

This is what all parents want for their kids, I hope.  All children are different, and in the same family one child is biddable and does everything she is told, while the other one raised in the same environment rebels against all strictures and cheats every chance he gets. 

 

But the current trend of recognizing on an institutional scale that constant access to social media does teenagers more harm than good, on balance, is vastly encouraging to those parents out there who saw the dangers years ago and have been taking positive action ever since.

 

While there are still situations in which using some aspect of social media is logistically necessary, the hope is that the same philosophy of delaying social-media use becomes generally acceptable in K-12 educational institutions, whether forced on them by state legislators or guided from within by enlightened educators at all levels.

 

The damage has already been done to millions of children and teens, however.  And my point about suddenly withdrawing social media from those who have made it an integral part of their lives is still valid.  The results are likely to be comparable to Prohibition, which became effective in 1920 and only made alcohol-consumption problems worse. 

           

But if the educational system is adapted to minimize social-media dependence at all levels, there is real hope that the current increases in teen depression and suicide can be reversed. 

 

Sources:  The article "Life as a teen without social media isn't easy.  These families are navigating adolescence offline" by Jocelyn Gecker was dated June 5, 2024 and appeared at https://apnews.com/article/influenced-social-media-teens-mental-health-e32f82d46ea74b807c9099d61aec25d5.  I also referred to data about state-adopted school cellphone bans at https://www.newsweek.com/map-shows-us-states-school-phone-bans-2090411.