Monday, December 08, 2025

Pros and Cons of Proposed Fuel Economy Standards

 

That's not a very exciting headline, perhaps.  But the Trump administration's proposed changes to the so-called Corporate Average Fuel Economy (CAFE) rules have already drawn criticism from many quarters.  The Environmental Defense Fund, for example, claims in a headline that the changes will "cost Americans more for gas, weaken national security, and increase pollution."  If it's so bad, why is the administration doing it? 

 

At the news conference announcing the proposal, Trump was surrounded by representatives of several domestic automakers, who favor the move.  It is the latest in a series of actions the Trump administration has taken to roll back the Biden administration's CAFE standards.

 

Under the previous administration, each automaker had to ensure that its average "fleet" fuel economy (all of its current model-year production, basically), measured in miles per gallon, increased by 2% per year.  (Electric vehicles are assigned an equivalent fuel economy on the order of 140 MPG.)  Companies that miss the standards pay fines or purchase credits from firms that exceed them. 
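
 

For readers who like to see the arithmetic, here is a minimal sketch of how a fleet average of this kind is computed.  CAFE averages are, roughly speaking, production-weighted harmonic means, and all of the model names, volumes, and MPG figures below are invented for illustration:

# Rough sketch of a CAFE-style fleet average: a production-weighted harmonic mean.
# All models, volumes, and MPG figures below are made up for illustration.
fleet = [
    # (model, vehicles produced, rated MPG or MPG-equivalent)
    ("midsize sedan", 200_000, 35.0),
    ("full-size pickup", 300_000, 24.0),
    ("electric crossover", 50_000, 140.0),  # EVs get a high MPG-equivalent rating
]

total_vehicles = sum(volume for _, volume, _ in fleet)
fleet_mpg = total_vehicles / sum(volume / mpg for _, volume, mpg in fleet)
print(round(fleet_mpg, 1))  # about 29.6 MPG for this made-up fleet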

 

Already, the Trump administration has ceased levying the associated fines, under the spending bill recently passed by Congress.  It is now proposing to lower the 2% figure to 0.5% and to roll the "baseline" year from which the percentages are calculated back to 2022. 

 

Let's compare two cases:  the former policy with fines and the 2% rate versus the proposed policy.  And let's see how various constituencies are affected by the two cases.

 

In the former Biden-administration case, automakers faced the fact that in less than 10 years, the CAFE standards would require raising their average fuel economy by 20%.  Any time an engineering system has some of its performance mandated by law, engineers eventually run up against another law:  physical law.  While modern automobiles differ in thousands of ways from the typical 1955 car—computer-controlled engines, greater use of plastic for reduced weight, etc.—there is still only so much energy in a gallon of gasoline.  And beyond a certain point, the accessible design space shrinks as the required fuel economy rises.  What the old CAFE standards were doing in practice was to compel the auto industry to move toward smaller, lighter cars and more electric vehicles.

 

Yes, that would save people money in fuel costs, make the U. S. more energy-independent, and reduce our carbon footprint.  But it also makes cars somewhat more expensive, at least at first, and gradually would eliminate certain larger sizes that consumers might want to buy.  So for automakers, the old rules meant compulsory redesigns against fundamental constraints that might eventually eliminate whole classes of vehicles.  For consumers, they meant more limited choices of somewhat more expensive cars, although ones that would be slightly cheaper to run.  And for the environment, they meant slower growth in carbon and other emissions, which is a good thing. 

 

The proposed reduction in CAFE increases to 0.5% means that the time to get to 20% higher than at present goes from 10 years to 36 years.  And in any case, there are no longer fines for violating the standards, so they are essentially an aspirational goal with no teeth in them.  Under the proposed rules, automakers will no longer be obliged to make cars steadily more fuel-efficient unless consumers ask for that.  Consumers will have a wider choice of cars that won't be more expensive simply because of the CAFE standards.  And while presumably we will have more carbon emissions than if the old standards were retained, unexpected advances in electric-vehicle technology may change this picture.
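
 

The arithmetic behind those two timelines is simple compounding.  Here is a quick back-of-the-envelope check (the exact figures depend on how the annual increases are applied):

import math

def years_to_gain(target_gain, annual_rate):
    """Years of compounded annual increases needed to reach a given total improvement."""
    return math.log(1 + target_gain) / math.log(1 + annual_rate)

print(round(years_to_gain(0.20, 0.02), 1))    # about 9.2 years at 2% per year
print(round(years_to_gain(0.20, 0.005), 1))   # about 36.6 years at 0.5% per year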

 

For example, in the December issue of Physics Today, the flagship publication of the American Institute of Physics, researchers describe their work on solid-state lithium batteries that could vastly out-perform current lithium-ion batteries.  One radical improvement they hope to make is to replace the current graphite anodes, which can absorb only one lithium ion for every six carbon atoms, with solid-lithium ones, a change that raises the charge capacity of the anode by a factor of ten.  There are problems with solid-state battery technology, but if they are overcome, it might be possible to manufacture electric vehicles that are both cheaper to buy than gasoline-powered ones and travel farther on a charge. 
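
 

That factor of ten is easy to check with a back-of-the-envelope calculation using standard textbook values (my own sketch, not figures from the Physics Today article).  Graphite can hold at most one lithium atom for every six carbons, while a lithium-metal anode is nothing but charge-carrying lithium:

# Theoretical specific capacity of the two anode materials, in milliamp-hours per gram
FARADAY = 96485.0      # coulombs per mole of electrons
C_PER_MAH = 3.6        # one milliamp-hour is 3.6 coulombs
M_CARBON = 12.011      # g/mol
M_LITHIUM = 6.941      # g/mol

# Graphite (LiC6): one electron stored per six carbon atoms
graphite_mah_per_g = FARADAY / C_PER_MAH / (6 * M_CARBON)   # about 372 mAh/g
# Lithium metal: one electron per lithium atom
li_metal_mah_per_g = FARADAY / C_PER_MAH / M_LITHIUM        # about 3860 mAh/g

print(round(li_metal_mah_per_g / graphite_mah_per_g, 1))    # roughly a factor of 10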

 

And if consumers are presented with such a choice, it's quite likely that the internal-combustion-engine-powered vehicles would be relegated to specialist uses in construction, etc., leaving most of the field to electric vehicles.  That would come about not because of any government mandate, but because competitive forces in the marketplace produced innovations that consumers genuinely want, and have the byproduct of increasing CAFE mileage and reducing pollution.

 

Obviously, there's no guarantee that solid-state batteries or any other innovation will ever make all-electric vehicles outperform gas-guzzlers in all significant ways:  first cost, performance (including range), and per-mile costs, including power and maintenance.  But it could happen, just as we saw a huge reduction in carbon emissions from power plants when coal was replaced by cheaper abundant natural gas, due not mainly to government mandates but to the privately-funded development of fracking technology. 

 

You can't count on these serendipitous things happening.  But it's equally short-sighted to think that the only good things that go on in a market are government-mandated changes. 

 

The CAFE changes proposed by the Trump administration are still open to comments before they are implemented.  I'm not holding my breath that the current regime will take negative comments into consideration, but it might happen.  Perhaps what is most harmful in this whole situation is the every-four-year policy shifts that manufacturers have been trying to deal with, as Obama was replaced by Trump, who was replaced by Biden, who was replaced by Trump again.  But a small-r republican form of government, as messy as it is, is better than being dictated to by a small group of powerful individuals with no term limits, which is how China is governed.  And for now, it looks like we may be reverting to more of a free-market model in the auto industry.  Consumers and manufacturers should enjoy it while they can, because it may not last. 

 

Sources:  I referred to an NPR report on the Trump administration proposed CAFE-standard changes at https://www.npr.org/2025/12/03/nx-s1-5630389/trump-administration-rolls-back-fuel-economy-standards.  I also referred to a webpage of the Environmental Defense Fund at https://www.edf.org/media/trump-administration-announces-plan-weaken-fuel-economy-standards-cars-and-trucks.  The Physics Today article "Solid-State Batteries:  Hype, Hope, and Hurdles" by S. Muy, K. Hatzell, S. Meng, and Y. Shao-Horn appeared on pp. 40-46 of the December 2025 issue.

Monday, December 01, 2025

The Hong Kong Apartment Complex Fire

 

On the afternoon of Wednesday, November 26, it was windy in the part of Hong Kong where eight 31-story apartment towers called Wang Fuk Court were undergoing renovations.  As has been the custom for hundreds of years, exterior scaffolding made of bamboo had been erected around the towers.  To prevent damage to the complex's single-pane windows, workers had covered them with foam-plastic sheets.  Nylon safety nets surrounded the scaffolding.  What could go wrong?

 

The world found out when a fire of unknown origin started in the lower level of one of the scaffolds.  Driven by high winds, the flames quickly spread to the flammable plastic sheets over the windows.  Burning debris and flames then spread from tower to tower until seven of the eight towers were engulfed.  About 40% of the residents were over 65, and the fire alarms in many of the buildings were later found to be out of order.

 

For the next day or so, the fire defeated efforts of thousands of firefighters to control it.  As of this writing, the confirmed death toll stands at 146, with dozens more missing.  About 4,600 people lived in the complex, so many got out alive.  But many others, probably including most of those who needed mobility assistance to escape, didn't make it.

 

News reports are calling this tragedy a "man-made disaster," and I couldn't agree more.  Authorities have arrested several officials of the construction company in charge of the renovations, Prestige Construction and Engineering Company, and have halted the firm's work on all other projects in order to conduct safety inspections. 

 

Whoever made the choice of protective window panels may have chosen low cost over safety.  In any event, that choice probably contributed to the fire more directly than anything else, although final conclusions will have to await a thorough investigation.  The use of nylon for safety netting and bamboo for scaffolding is also questionable, although China has a centuries-old tradition of bamboo scaffolding that has only recently come into question as metal scaffolding gradually supplants it.  Photos of the scene after the disaster seem to indicate that the scaffolding largely stayed in place, but was charred to the point of being structurally unsound.

 

As soon as I learned of this fire, I thought of the 2017 Grenfell Tower fire in London.  In that disaster, a refrigerator caught fire in a lower apartment and the flames spread to flammable exterior insulating sheathing that had been installed in an upgrade some years before.  The fire propagated behind the sheathing and was difficult to extinguish, and before it was all over, 72 people died and many more were injured.  Again, negligent planning and a failure to take account of the flammability of exterior sheathing were at fault, just as they appear to be in the Wang Fuk Court fire.

 

Engineering ethics requires imagination of a particularly informed type:  imagination bolstered by in-depth technical knowledge.  Engineers sometimes have a reputation for being dour pessimists who always jump to the worst-case scenario of a given situation.  But in the case of the Hong Kong fire, engineers were not pessimistic enough.  Nobody apparently imagined what would happen if one of the plastic window-sheathing panels caught fire, especially if it was on a lower floor of a building that happened to be upwind of most of the others on a blustery day. 

 

It's possible that workers were instructed not to smoke or to engage in operations that might lead to a fire.  That's all very well, but not everybody follows instructions. And even the best-intentioned workers can be using equipment that shorts out or otherwise becomes a source of ignition.  Good safety practitioners imagine that something statistically unlikely will nevertheless go wrong, and then draw conclusions from that premise. 

 

Judging from the number of residents listed and the casualty figures, something like 3 to 4 percent of the listed residents died in the fire.  That means the great majority escaped with either no injuries or non-life-threatening ones, though many lost most of their worldly possessions.  No one I know would want to go through an unplanned experience that strips you of your things and carries a chance of death on the order of one in thirty.  So while fire escapes and other built-in safety features allowed most residents to escape the flames, well over a hundred didn't.

 

Major renovation projects are routinely inspected by civil authorities, and I can't imagine that this project was an exception.  If a construction firm neglects to take due safety precautions, it is the civil authority's responsibility to step in and halt work if necessary in order for safety hazards to be addressed.  This obviously wasn't done. 

 

I don't know the nature of safety-inspection services in Hong Kong, but among those to be held accountable should be whoever permitted the work to proceed.  Such officials can be subject to corruption pressures, and until a tragedy like this occurs, any corner-cutting enabled by paying off inspectors may never come to light.  I have no reason to believe that was the case here, but it is certainly an avenue worth investigating.

 

Disasters like this one can have the silver lining of making future safety regulations and inspections much more rigorous.  As the investigations proceed and the chain of causation is revealed, Hong Kong engineers, construction firms, and officials can all learn valuable lessons that are driven home by the horrible example of what can go wrong if safety measures are neglected.  My sympathy is with those who lost loved ones in the fire.  And my hope is that nobody anywhere in the world puts flammable sheathing on high-rises ever again.

 

Sources:  I referred to a BBC report at https://www.bbc.com/news/articles/cdxe9r7wjgro, a report from the news outlet livemint.com at https://www.livemint.com/news/world/hong-kong-apartment-fire-death-toll-climbs-to-146-probe-reveals-fire-code-violations-11764496767266.html, and the Wikipedia article "Grenfell Tower fire."

Monday, November 24, 2025

AI Ghosts and the Yuck Factor

 

The December Scientific American highlights an article by David Berreby that gets personal.  Berreby's father was born in 1927, the same year as my father, and died in 2013.  Yet the article opens with Berreby asking, "How is your existence these days?" and getting a reply:  "... Being dead is a strange experience."

 

In this conversation, Berreby is using a generative-artificial-intelligence (genAI) version of his father to investigate what it is like to interact with an AI ghost:  a digital simulation of a dead loved one that some psychologists and computer scientists are promoting as an aid for grieving people.

 

I'll be frank about my initial reaction to this idea.  I thought it was terrible.  The "yuck factor" is a phrase popularized by ethicist Leon Kass in describing a gut-level negative reaction to a thing.  He says we should at least pay attention whenever we have such a reaction, because such a sensation may embody wisdom that we can't articulate.  The AI-ghost idea reminded me of Norman Bates, the mentally defective protagonist of Alfred Hitchcock's movie Psycho, who kept his mother around long after her bury-by date and talked with her as though she were still alive. 

 

And to his credit, Berreby admits that there may be dangers for some people whose mental makeup makes them vulnerable to confusing fiction with reality, and who could become harmfully addicted to this type of AI.  But in the limited number of cases examined (only 10 in one study) in which grieving patients were encouraged to interact with AI ghosts, they all reported positive outcomes and a better ability to interact with live humans.  As one subject commented, "Society doesn't really like grief."  Who better to discuss your feelings of loss with than an infinitely patient AI ghost that is both the cause and the solace of your grief? 

 

Still, it troubles me that AI ghosts could become a widespread way of coping with the death of those we love.  One's worldview context is important here. 

 

Historically, death has been viewed as the portal to the afterlife.  Berreby chose to title his article "Mourning Becomes Electric," a takeoff on Eugene O'Neill's play cycle "Mourning Becomes Electra," which itself was based on the Oresteia, the play cycle by Aeschylus, the Greek playwright who died around 450 B. C.  In the plays, Aeschylus describes the tragic murder of the warrior Agamemnon by his unfaithful wife Clytemnestra, and how the gods interacted with humans as things went from bad to worse.  That reference, and a few throwaway lines about ectoplasm and Edison's boast that if there was life after death he could detect it with a new invention of his, are the only places where the article even touches on the question of whether the dead still exist in any meaningful way.    

 

If you believe that death is the final end of the existence of any given human personality, and you miss interacting with that personality, it only makes sense to use any technical means at your disposal to scratch that itch and conjure up your late father, mother, or Aunt Edna.  Berreby quotes Amy Kurzweil, artist and daughter of famed transhumanist Ray Kurzweil, as saying that we don't usually worry that children will do things like expecting the real Superman to show up in an emergency, because they learn early to distinguish fiction from reality.  And so she isn't concerned that grieving people will start to treat AI ghosts like the real person the machine is intended to simulate.  It's like looking at an old photo or video of a dead person:  there's no confusion, only a stimulus to memory, and nobody complains about keeping photos of our dear departed around.

 

In the context of secular psychology, where the therapeutic goal is to minimize distress and increase the feeling of well-being, anything that moves the needle in that direction is worth doing.  And if studies show that grieving people feel better after extensive chats with custom-designed AI ghosts, then that's all the evidence therapists need that it's a useful thing to do.

 

The article is written in the nearly-universal etsi Deus non daretur style—a Latin phrase meaning roughly "as though God doesn't exist."  And in a secular publication such as Scientific American, this is appropriate, I suppose, though it leaves out the viewpoints of billions of people who believe otherwise.  But what if these billions are right?  That puts a different light on the thing.

 

Even believers in God acknowledge that grieving over the loss of a loved one is an entirely appropriate and natural response.  A couple we have known for 45 years was sitting at the breakfast table last summer praying, and the man suddenly had a massive hemorrhagic stroke, dying later that same day.  It was a terrible shock, and at the funeral there were photos and memorabilia of him to remind those in attendance of what he was like.  But everyone there had a serene confidence that David Jenkins was in the hands of a merciful God.  While it was a sad occasion, there was an undercurrent of bottomless joy that we knew he was enjoying, and that we the mourners participated in by means that cannot be fully expressed in writing.

 

In Christian theology, an idol is something that humans create which takes the place of God.  While frank ancestor worship is practiced by some cultures, and is idolatry by definition, a more subtle temptation to idolatry is offered by AI chatbots of all kinds, and especially by AI ghosts. 

 

While I miss my parents, they died a long time ago (my father was the last to go, in 1984).  I will confess to writing a note to my grandmother once, not long after she died.  Richard Feynman did the same, writing a note to his late wife, who died tragically young of tuberculosis, and a less likely believer in the supernatural would be hard to find. 

 

I suppose it might do no harm for me to cobble up an AI ghost of my father.  But for me, anyway, the temptation to credit it with more existence than it really would have would be strong, and I will take no steps in that direction. 

 

As for people who don't believe in an afterlife, AI ghosts may help them cope for a while with death.  But only the truth will make them free of loss, grieving, and the fear of death once and for all.  And however good an AI ghost gets at imitating the lost reality of a person, it will never be the real thing.

 

Sources:  "Mourning Becomes Electric" by David Berreby appeared on pp. 64-67 of the December 2025 issue of Scientific American.  I referred to Wikipedia articles on "Wisdom of repugnance" and "Oresteia." 

Monday, November 17, 2025

Can AI Make Roads Safer?

 

The answer is almost certainly yes, as a recent Associated Press report shows. 

 

I have been less than complimentary in my treatment of artificial intelligence in some of these columns.  Any new technology can have potential hazards, and one of the main tasks of those who do engineering ethics is to examine technologies for problems that might occur in the future as well as current concerns.  But there's no denying that AI has potential for saving lives in numerous ways, including the improvement of road safety.  The AP article highlights AI's contributions to road safety, mainly in the area of information gathering to allow road crews to allocate their resources more intelligently.

 

Road maintenance is unusual in that the physical extent of the property a government or agency is responsible for exceeds what almost any private entity has to manage.  In my town of San Marcos, home to about 70,000 people, there are hundreds of stop signs, dozens of signals, and hundreds of miles of road.  And in the whole state of Texas there are over 680,000 "lane miles" of road, more than in any other state.  Just inspecting those miles for damage such as potholes, worn-out signs, and debris is a gargantuan task.  Especially if a problem shows up in a low-traffic remote area, it could be years before highway workers are even aware of it, and longer before it gets fixed.

 

In Hawaii, the highway authorities are giving away 1,000 dashcams equipped to send what they see on the road to a central AI processing facility, which will pick up on things like damaged guardrails, faulty signs, and other safety hazards.  The ability of sophisticated AI software to pore through millions of images for specific problems like these makes it possible to handle the flood of camera data with minimal human involvement.  Hawaii has to import virtually all its equipment and supplies for road maintenance, so resource allocation there is especially important.
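
 

Conceptually, the pipeline is straightforward.  The sketch below is my own illustration of the idea, not Hawaii's actual system, and the detect_road_hazards function is a stand-in for whatever image-recognition model the state actually runs:

import time

def detect_road_hazards(frame):
    """Stand-in for a trained vision model that labels hazards in one dashcam frame."""
    # A real deployment would run an object detector here; this stub returns a fixed guess.
    return ["damaged guardrail"]

def frame_to_reports(frame, latitude, longitude):
    """Turn any detections in a frame into reports a maintenance office can act on."""
    return [{"type": hazard, "lat": latitude, "lon": longitude, "time": time.time()}
            for hazard in detect_road_hazards(frame)]

# A dashcam client would batch reports like these and upload them to the central facility.
print(frame_to_reports(frame=None, latitude=21.31, longitude=-157.86))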

 

San Jose, California, has mounted cameras on street sweepers, found that they identified potholes at a 97% rate, and will now expand the program to parking-enforcement vehicles.  Driver cellphone data can also pinpoint issues such as a stop sign hidden by bushes:  AI software flagged one intersection in Washington, D. C. where such a hidden sign was causing drivers to brake suddenly.  The mayor of San Jose, a former tech entrepreneur, hopes that cities will begin to share their databases so that problems common to more than one area can be identified more quickly.

 

The application of AI to identifying road maintenance needs seems to be one of the most benign AI-application cases around.  The information being gathered is not personal.  Rather, it's simply factual data about the state of the road bed and its surroundings.  While places such as Hawaii and other locations that use anonymized cellphone data do interact with private citizens to gather this data, their intent is not to spy on the people whose dashcams or phones are sensing the information.  And it would be hard to exploit such databases for illicit purposes, although some evil people can twist the best-intended system to their nefarious purposes. 

 

All the data in the world won't help fix a pothole if nobody goes out and fixes it, of course.  But the beauty of the AI-assisted data gathering is that a much better global picture of the state of the road inventory is available, allowing officials to prioritize their available maintenance resources much better.  A dangerous pothole in a part of town where nobody much goes or complains won't get ignored as long now that AI is being used to find it.  And data-sharing among municipal and state governments seems to have mostly upsides to it, although due precautions would have to be taken to make sure the larger accumulation of data isn't hacked.
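
 

To make the prioritization point concrete, here is a minimal sketch, with made-up descriptions and numbers rather than any city's actual formula, of how a maintenance office might rank AI-reported hazards once it has that global picture:

# Each entry: (description, severity on a 1-to-5 scale, vehicles passing per day) -- all invented
reported_hazards = [
    ("deep pothole, low-traffic county road",   4,   300),
    ("faded stop sign, busy frontage road",     3, 12000),
    ("debris on shoulder, farm-to-market road", 2,   800),
]

# Rank by a crude exposure score: severity times daily traffic.
ranked = sorted(reported_hazards, key=lambda h: h[1] * h[2], reverse=True)

for description, severity, traffic in ranked:
    print(f"{severity * traffic:>8}  {description}")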

 

As autonomous vehicles become more widespread, the database of road hazards could be made available to driverless cars, which would then "know" hazards to avoid even before the hazards are repaired.  To a limited extent, this is happening already. 

 

When my wife runs her Waze software on her phone on a trip, the voice warns us of stalled vehicles or law-enforcement personnel before we get to them.  I've often wondered how the system obtains such ephemeral information.  It seems almost inevitable that it uses either reports from other drivers or anonymized location data from the stalled cars themselves, which gives one a slightly creepy feeling.  But if it keeps somebody from plowing into me some day while I'm fixing a tire on the side of the road, I'll put up with the creepy feeling in the meantime.

 

Highway fatalities in the U. S. have been declining overall since at least the 1960s, with a minimum of less than 33,000 a year in 2011.  Since then, they rose significantly, peaking after COVID at a rate of 43,000 in 2021, with a slight decline since then.  Part of the increase has been attributed to the rise in cellphone use, although that is difficult to disentangle from many other factors.  While most traffic accidents are due to driver error, bad road conditions can also contribute to accidents and fatalities, and everything we can do to minimize this factor will help to save lives.

 

The engineering idealists among us would like to see autonomous vehicles taking over, as there are some indications that accidents per road-mile of such vehicles can be lower than those shown by cars with human drivers.  But the comparison does not take into account the fact that most truly autonomous cars are operating over highly restricted areas such as city centers, where circumstances are fairly predictable and well known.  General suburban or rural driving seems to pose serious challenges for autonomous vehicles, and until they can prove that they are safer than human-driven vehicles in every type of driving environment, it's not clear that replacing humans with robots behind the wheel will decrease traffic fatalities overall, even if the robots get clued in about road hazards from a national database.

 

At least in this country, the citizens will decide how many driverless cars get on the road, and for the foreseeable future we will have mostly humans behind the wheel.  It's good to know that AI is helping to identify and fix road hazards, but even if such systems work perfectly, other things can go wrong on the road.

 

Sources:  The AP article "Cities and states are turning to AI to improve road safety" by Jeff McMurry appeared on the AP website on Nov. 15, 2025 at https://apnews.com/article/ai-transportation-guardrails-potholes-hawaii-san-jose-9b34a62b2994177ece224a8ed9645577.  I also referred to https://blog.cubitplanning.com/2010/02/road-miles-by-state for the lane-miles figures and https://cdan.dot.gov/tsftables/Fatalities%20and%20Fatality%20Rates.pdf for road fatality rates.


Monday, November 10, 2025

Questions About UPS Flight 2976

 

At 5:15 PM on Tuesday, November 4, UPS Flight 2976, bound for Hawaii, took off from Louisville Muhammad Ali International Airport in Kentucky.  Louisville is the main worldwide UPS hub, from which millions of packages are shipped weekly on aircraft such as Flight 2976's three-engine McDonnell-Douglas MD-11.  The MD-11 is something of an orphan, as it was originally developed to be a wide-body passenger aircraft in competition with Boeing's 767.  But only a couple hundred of them were built before production shut down in 2000 after Boeing acquired McDonnell-Douglas.  As with most of the surviving MD-11s, this one, owned originally by Thai Airways, was later converted to freight service, and was 34 years old at the time of the accident.

 

During the takeoff roll, the left engine and its supporting pylon separated from the wing, and a fire broke out.  An alarm bell went off in the cockpit, and for the next 25 seconds Captain Richard Wartenberg and First Officer Lee Truitt struggled to control the plane.  But after reaching an altitude of only 100 feet, the plane began to roll to the left.  It tore a 300-foot gash in a UPS warehouse south of the airport, left a blazing trail of fuel along its path, and collided with tanks at an oil-recycling facility, leading to explosions and a much bigger fire before the bulk of the plane came to rest in a truck parking area and an auto junkyard.  Besides the three crew members, including Relief Officer Captain Dana Diamond, eleven people on the ground died and about as many were injured, some critically.  This was the deadliest accident in UPS Airlines' history.  On Saturday, the FAA temporarily grounded all MD-11s so they could be inspected in case a mechanical defect was at fault.

 

This crash was one of the best-documented ones in recent memory, as it was in full view of a major road (Grade Lane), many security cameras, and numerous stationary and moving dashcams.  One dashcam video posted on the Wikipedia site about the crash shows a blazing trail of fuel sweeping from right to left across the scene, engulfing trucks and other structures within seconds.  An aerial view from south of the airport looking north shows what looks like the path of a tornado, as a wide swath of destruction leads from the runway to the foreground. 

 

As the integrity of the support pylons matters to the structural integrity of the entire aircraft, aerospace engineers normally make sure that the engines are fastened very securely to the rest of the plane.  Typically, there are multiple points of attachment between the engine and the pylon, and the pylon itself is a structural member permanently affixed to the wing.  It's possible that the last time the engine was removed, somebody didn't finish the job of reattaching it properly, but because of the multiple attachment points it's unlikely that a single mistake would let the whole assembly fall off. 

 

Instead, my uninformed, non-mechanical-engineer's initial guess is that fatigue or some other issue weakened the pylon's attachment to the wing, causing a crack or cracks that eventually led to the failure of the attachment point, which would let the engine and pylon fall off as they did.  And it's natural that such a failure would show up at the moment of maximum stress on the pylon, which comes during takeoff.

 

Planes are supposed to be inspected regularly for such hidden flaws.  But sometimes they can show up in inaccessible areas that might require X-ray or ultrasound equipment to detect.  That is the main reason that the FAA has grounded the remaining fleet of MD-11s: so they can be inspected for similar flaws. 

 

This early in the investigation, it's unclear whether the pieces of the aircraft will tell a definite story of what happened.  It's a good sign that the left engine was recovered presumably without major fire damage near the runway, as the end of the attached pylon will give investigators a lot of information about how the thing came loose. 

 

Inspections and maintenance are boring compared to design and construction, and so they sometimes get short shrift in an organization with limited resources.  But there's an engineering version of the old saying, "the price of liberty is eternal vigilance."  It goes something like, "the price of reliable operation is regular maintenance."  I'm facing a much smaller-scale but similar situation here in my own home. 

 

In Texas, air conditioning has become a well-nigh necessity, a fact recognized by everyone except the Texas legislature, which steadfastly refused once again this year to use some of the budget surplus to air-condition all Texas prisons.  (Sorry for the soapbox moment, but I couldn't resist.)  Anyway, every spring and fall I have an HVAC company come out and inspect our heat-pump heating and cooling unit.  Last spring they said it needed a new contactor that was about to go out, and the fan motor bearing didn't look too good, but it was otherwise okay.

 

Things changed over the summer.  Now the evaporator has sprung three leaks, the compressor has been working so hard that its insulation and capacitor are compromised, and to make a long sad story short, we need a whole new unit.

 

I could have just ignored matters till something major disabled the unit:  the compressor shorting out, the fan motor freezing, any number of things.  As often happens in such cases, it might have failed either in the middle of the coldest day of the year, or next August when the thermometer reads 102 in the shade.  Not wishing for such emergencies, I choose to have regular maintenance checks, which have paid off, both for me and for the HVAC people who get to install a new unit under less-than-urgent conditions.

 

My sympathy is with those who lost loved ones, both in the air and on the ground, in the crash of Flight 2976.  And my hope is that if lack of maintenance is found to be a contributing cause, the grounding of the other MD-11s will prevent another accident like the one we saw last Tuesday.

 

Sources:  I referred to an article on the FAA action at https://abcnews.go.com/US/final-moments-ups-plane-crash-detailed-ntsb/story?id=127313407, a comment on engine support pylons at https://aviation.stackexchange.com/questions/79872/what-are-different-components-of-an-engine-pylon, and the Wikipedia articles on MD-11 and UPS Airlines Flight 2976.

Monday, November 03, 2025

Can We Afford to Power AI?

 

That is the question posed by Stephen Witt's article in this week's New Yorker, "Information Overload."  Witt, a biographer of Nvidia founder Jensen Huang, has toured several of the giant data centers operated by CoreWeave, the leading independent data-center operator in the U. S.  He brings from these visits and interviews some news that may affect everyone in the country, whether or not you think you use artificial-intelligence (AI) services:  the need to power the growing number of data centers may send electric-power costs through the roof.

 

Witt gives a fascinating inside view of what actually goes on in the highly secure, anonymous-looking giant buildings that house the Nvidia equipment used by virtually all large-scale AI firms.  Inside are rack after rack of "nodes," each node holding four water-cooled graphics-processing units (GPUs), which are the workhorse silicon engines of current AI operations.  Thousands of these nodes are housed in each data center, and each runs at top speed, emphasizing performance over energy conservation. 
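
 

To get a feel for the scale involved, here is a back-of-the-envelope sketch.  Every number in it is my own assumption for illustration, not a figure from Witt's article:

# Hypothetical facility, sized very roughly like the ones described in the article
nodes = 8_000            # number of 4-GPU nodes (assumed)
gpus_per_node = 4
watts_per_gpu = 1_000    # assumed draw per GPU under full load
overhead = 1.3           # assumed multiplier for cooling, networking, and power conversion

facility_watts = nodes * gpus_per_node * watts_per_gpu * overhead
print(facility_watts / 1e6, "megawatts")            # about 42 MW for this made-up facility
print(facility_watts * 8760 / 1e9, "GWh per year")  # about 364 GWh if run around the clock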

 

Gigawatts of power go into both the training phase of AI, in which gigabytes of raw data (books, images, etc.) are fed in to develop the weights that AI networks use, and the phase of performing "inferences," which basically means answering queries.  Both training and inference consume power, although training appears to demand more intense bursts of energy. 

 

The global economy has taken a headlong plunge into AI, making Nvidia the first company worldwide to surpass $5 trillion in market capitalization.  (Just in case anybody's thinking about a government takeover of Nvidia, $5 trillion would pay off only about one-eighth of the U. S. government's debt, but it's still a lot of money.)  With about 90% of the market share for GPU chips, Nvidia is in a classic monopoly position, and significant competitors are not yet on the horizon.

 

Simple rules of supply and demand tell us that the other commodity needed by data centers, namely electric power, will also rise in price unless a lot of new suppliers rush into the market.  Unfortunately, increasing the supply of electricity overnight is well-nigh impossible. 

 

The U. S. electric-utility industry is coming off a period when demand was increasing slowly, if at all.  Because both generation and transmission systems take years to plan and build, the industry was expecting only gradual increases in demand for both, and planned accordingly.  But only in the last couple of years has it become clear that data centers are like the hungry baby bird of the utility industry:  mouths always open demanding more. 

 

This is putting a severe strain on the existing power grid, and promises to get worse if the rate of data-center construction keeps up its current frenetic pace.  Witt cites an analysis by Bloomberg showing that wholesale electricity costs near data centers have about doubled in the last five years.  And at some point, the power simply won't be available no matter what companies like CoreWeave are willing to pay.  If that happens, the data centers may move offshore to more hospitable climes where power is cheaper, such as China.  As China's power still comes largely from coal, that would bode no good for the climate.

 

Witt compares the current data-center boom to the U. S. railroad-building boom of the nineteenth century, which consumed an even larger share of the country's GNP than data centers are consuming now.  That boom resulted in overbuilding and a crash, and there are signs that something similar may be in the offing with AI.  Something that can't go on forever must eventually stop, and besides limitations in power production, another limit that may be even harder to overcome is the finite amount of data available for training.  Witt says there are concerns that in less than a decade, AI developers could use up the entire "usable supply of human text."  Of course, once that's done, the AI systems can still deal with it in more sophisticated ways.  But lawyers are starting to go to work suing AI developers for copyright infringement.  Recently, the AI developer Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors whose material was used without permission.  That is a drop in the bucket compared to the trillions sloshing around in the AI business, but once the lawyers get into their stride, the copyright-infringement leak in the bucket might get bigger.

 

The overall picture is of a disruptive new technology wildly boosted by sheep-like business leaders to the point of stressing numerous more traditional sectors, and causing indirect distress to electricity consumers, namely everybody else.  Witt cites Jevons's Paradox in this connection:  the observation that increasing the efficiency with which a resource is used can cause it to be used even more.  A good example is the use of electricity for lighting.  When only expensive one-use batteries were available to power noisy, inefficient arc lamps, electric lighting was confined to the special-effects department of the theatrical world.  But when Edison and others developed both efficient generators and more efficient incandescent lamps, the highly price-sensitive market embraced electric lighting, which underwent a boom comparable in some ways to the current AI boom. 

 

Booms always overshoot to some degree, and we don't yet know what overbuilding or saturation looks like in AI development.  The market for AI is so new that pricing structures are still uncertain, and many firms are operating at a loss in an attempt to gain market share.  That can't go on forever either, and so five years from now we will see a very different picture in the AI world than the one we see now. 

 

Whether it will be one of only modest electric-price increases and a stable stock of data centers, or a continuing boom in some out-of-the-way energy-rich and regulation-poor country remains to be seen.  Independent of the morality and social influences of AI, the sheer size of the hardware footprint needed and its insatiable demand for fresh human-generated information may place natural limits on it.  After the novelty wears off, AI may be like a new guest at a party who has three good jokes, but after that can't say anything that anybody wants to listen to.  We will just have to wait and see.

 

Sources:  Stephen Witt's article "Information Overload" appeared on pp. 20-25 of the Nov. 3, 2025 issue of The New Yorker.  I also referred to an article from the University of Wisconsin at https://ls.wisc.edu/news/the-hidden-cost-of-ai and Wikipedia articles on Nvidia and Jevons Paradox.

Monday, October 27, 2025

Asking the Wrong Question About Artificial General Intelligence

 

An article in the October issue of IEEE Spectrum investigates the status of artificial-intelligence IQ tests, and speculates on when and whether we will see the arrival of so-called artificial general intelligence (AGI), which author Matthew Hutson says is "AI technology that can match the abilities of humans at most tasks."  But mostly unasked in the article is an even more basic question:  what do we mean by human intelligence? 

 

To be fair, Hutson has done a good job of surveying several popular benchmark tests for AI intelligence.  One of the more popular is something called the Abstraction and Reasoning Corpus (ARC for short), which its developer François Chollet has made something of a go-to standard; charts in the article show the scores of over a dozen different AI programs on various versions of Chollet's tests.  Engineers like numbers, and standardizing a test is a good thing as long as the test measures what you want to know.  But does ARC do that?

 

The version of the ARC test described in the article consists largely of coming up with patterns of colored figures that follow rules abstracted from examples.  Human beings can score higher than AI systems on these tests, although the systems are improving.  But it's an open question as to whether abstracting patterns from geometric shapes has a lot to do with being generally intelligent.
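
 

To make the format concrete, here is a toy task in the spirit of ARC.  It is my own invented example, not one of Chollet's:  a solver sees a few input/output grid pairs, has to infer the hidden rule, and is then judged on a held-out test grid.

# Each grid is a list of rows of color indices (0 = blank).  The hidden rule in this
# made-up task is "mirror the grid left-to-right."
train_examples = [
    ([[1, 0, 0],
      [2, 0, 0]],
     [[0, 0, 1],
      [0, 0, 2]]),
    ([[0, 3],
      [4, 0]],
     [[3, 0],
      [0, 4]]),
]
test_input = [[5, 0, 0],
              [0, 6, 0]]

def candidate_rule(grid):
    """A solver's guess at the transformation: reverse each row."""
    return [list(reversed(row)) for row in grid]

# The guess must reproduce every training pair...
assert all(candidate_rule(inp) == out for inp, out in train_examples)
# ...and is then scored on the held-out test grid.
print(candidate_rule(test_input))  # [[0, 0, 5], [0, 6, 0]]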

 

Chollet feels that his test measures the ability to acquire new skills easily, which he thinks is the prime measure of intelligence.  Whether it actually does that is debatable, but it looks like ARC is to the AI world what the old Stanford-Binet IQ test is to people.  That IQ test was developed over a century ago and is now in its fifth edition.

 

Hutson comes close to the problem when he admits that "notions of intelligence vary across place and time."  The Stanford-Binet test is mainly used to identify people who don't fit in well with the public-school system, which is mainly designed to produce worker bees for the modern economy.  As the modern economy is shifting all the time, what counts as intelligence does too. 

And even if we could perfectly track these shifts, the admittedly infinite array of "tasks" that people perform presents an almost insurmountable problem to anyone who wants not only to define, but to evaluate, something that could justifiably be called artificial general intelligence.

 

Geoffrey Hinton, who recently won a Nobel Prize for his work on neural networks, is quoted in the article as saying that if an AI robot could successfully do household plumbing, that would be a milestone in AGI, and he thinks it's still about ten years off.  I hope I'm around in ten years to check this prediction, which I personally feel is optimistic.  For one thing, humanoid robots will have to get a lot cheaper before people even consider using one to fix a toilet.

 

All these approaches to AGI ignore a distinction in the field of human psychology which was first pointed out by Aristotle.  The distinction has been described in various ways, but the most succinct is to differentiate between perceptual thought and conceptual thought. 

 

Perceptual thought, which humans share with other animals and machines, consists in perceiving, remembering, imagining, and making associations among perceptions and memories, broadly speaking.  Inanimate material objects like computers can display perceptual thought, and in crawling the Internet for raw material by which to answer queries, all AI chatbots and similar systems use perceptual thought, which ultimately has to do with concrete individual things.

 

On the other hand, conceptual thought involves the consideration of universals:  freedom, for example, or the color blue, or triangularity as a property of a geometric figure, as opposed to considering any individual triangle.  There are good reasons to believe that no strictly material system (and this includes all AI) can engage in truly conceptual thought.  With suitable programming by humans, a computing system may provide a good simulation of conceptual thought, as a movie provides a good simulation of human beings walking around and even engaging in conceptual thought.  But a movie is just a sequence of images and sounds, and can't respond to its environment in an intelligent way.

 

Neither can an AI program engage in conceptual thought, although by finding examples of such thought in its training, it can provide a convincing simulation of it.  While having a robot do plumbing is all very well, the real goal sought by those who want to achieve AGI is human-likeness in every significant respect.  And a human incapable of conceptual thought would at the least be considered severely disabled, though still worthy of respect as a member of the human community.

 

The vital and provable distinction between perceptual and conceptual thought has been all but forgotten by AI researchers and the wider culture.  But if we ignore it, and allow AI to take over more and more tasks formerly done by humans, we will surround ourselves with concept-free entities.  This will be dangerous. 

 

A good example of a powerful concept-free entity is a tiger.  If you walk into the cage of a hungry tiger, all it sees in you is a specific perception:  here's dinner.  There is no reasoning over abstractions with a tiger, just a power struggle in which the human has a distinct disadvantage.

 

Aristotle restricted the term "intellect" to mean that part of the human mind capable of dealing with concepts.  It is what distinguishes us from the other animals, and from every AI system as well.  Try as they might, AI researchers will not be able to develop anything that can entertain concepts.  And attempts to replace humans in jobs where concepts are important, such as almost any occupation that involves dealing with humans as one ethical being to another, can easily turn into the kind of hungry-tiger encounter that humans generally lose.  Anyone who has struggled with an AI-powered phone-answering system to gain the privilege of talking with an actual human being will know what I mean.

 

ARC may become the default IQ test for new AI prototypes vying for the title of AGI.  But the concept symbolized by the acronym AGI is itself incomprehensible by AI.  As long as there are humans left, we will be the ones awarding the titles, not the AI bots.  But only if they let us.

 

Sources:  Matthew Hutson's article "Can We Build a Better IQ Test for AI?" appears on pp. 34-39 of the October 2025 issue of IEEE Spectrum.  I also referred to the Stanford Encyclopedia of Philosophy article on Aristotle.  For a more detailed argument about why AI cannot perform conceptual thought, see the article in AI & Society "Artificial Intelligence and Its Natural Limits," by Karl Stephan and Gyula Klima, pp. 9-18, vol. 36 (2021).