Monday, May 04, 2026

California Ends Free Ride for Robotaxi Ticketing

  

For good or ill, many things that start in California spread to the rest of the U. S. sooner or later, from Hollywood movies to the Hula Hoop.  So when California's Assembly Bill No. 1777 takes effect this July, companies such as Waymo that operate robotaxis across the country may feel its effects well beyond California.

 

The bill, and its implementation by the California Department of Motor Vehicles (DMV), closes a loophole in California law that has until now allowed driverless vehicles to escape being ticketed for traffic violations.  Previous laws assumed there would always be a driver behind the wheel to cite for illegal U-turns or other roadway malfeasance, and so made no provision for ticketing driverless vehicles.

 

That will all change come July 1.  After that, the corporation operating a driverless vehicle will be the designated legal recipient of any citations the vehicle incurs.  The ticket takes the form of a "notice of noncompliance," but the effect is the same.  To allow the DMV to track robotaxi companies' violations, a company must report any citation to the DMV within 72 hours of receipt.  If a collision was involved, the window shrinks to 24 hours.  And if a firm receives too many citations, the DMV is empowered to take drastic action such as limiting the firm's fleet size or even suspending its operating license. 

 

The new law also addresses the problem of driverless vehicles that get in the way of first responders at a fire or accident scene, for example.  It mandates that operating companies answer calls from first responders within 30 seconds, and that they observe geofencing rules clearing their vehicles from a designated area within two minutes.  Autonomous vehicles have apparently obstructed fire and emergency vehicles in the past, and these new provisions address that issue.

 

In the modern world, technological developments generally outpace the social infrastructure of laws and customs.  That is why robotaxis in California have been escaping traffic tickets until now:  lawyers and legislators are not prophets, and they cannot be expected to anticipate every possible technological development so that the laws are waiting when the technology finally arrives.

 

Most consumer technology is at least intended to have benign effects, but the good of taking people and goods from one place to another is accompanied by problems such as traffic violations.  Getting a traffic ticket is often the only interaction most law-abiding citizens have with law enforcement, but it is a little galling that robotaxis were effectively exempt from such experiences until this year in California. 

 

It's part of the normal progress of technological development for new laws to arise that deal with unexpected problems such as the ones the California bill addresses.  The issue of clearing an area for emergency operations couldn't even arise until there were enough robotaxis around that one got caught in such a situation, creating frustrations and hazards that, while probably not costing a life, made enough trouble for first responders that they reported it to the appropriate authorities.  And while many uncomplimentary things can be said about the California legislature, in this case it seems to have done a good thing in mandating effective communications and actions whenever a driverless vehicle impedes the work of first responders.

 

It's fair to say that there probably wasn't a popular groundswell of grassroots opinion demanding that robotaxis be able to receive traffic tickets.  The issue is not one to get the average Joe Public's juices going.  Contrast this matter with another technological fault line that is currently the subject of much (mostly local) legislation and discussion:  the construction of data centers around the country. 

 

As an editorial by Charles C. W. Cooke points out, data centers are not new.  In some form we've had them around for decades, and currently there are about 5,000 of them in the U. S. already, with the largest concentration in the liberal state of Virginia. 

 

Yet, to listen to discussions at city council meetings around the country or to see the "No Data Centers" signs popping up in yards, one would think that each data center is a direct portal to Dante's Inferno.  Why are people so upset at data centers, while robotaxis being immune from tickets was never a big deal?

 

Fear has something to do with it—fear of the known unknown.  For most people, robotaxi immunity from ticketing was an unknown unknown; I didn't even know it was an issue until California fixed it with the new law.  But data centers, with their notorious connection to the mysterious and frightening acronym AI, are well publicized, even though their consequences in terms of water and energy depletion or increased costs remain largely unknown.  After all, if something isn't built yet, you can't say exactly what will happen if you build it.

 

Opponents have rushed into that gap of ignorance with highly inflated predictions of disasters that will hit you in your pocketbook:  higher power and water prices, water shortages, and even loss of jobs as AI takes over what you and twenty of your friends do, and does it better and cheaper.  Nobody is threatened in that manner by the fact that robotaxis couldn't get traffic tickets.  Injustice somewhere else never matters as much as injustice to my bank account. 

 

So while even the California legislature can take effective action about a matter that few people knew or cared about, it's not clear that laws slowing down the construction of data centers are going to make anything better.  Saying "not in my back yard" doesn't stop construction as long as back yards don't cover the entire U. S., and that won't happen for a while.  But unless the public understands the issues well enough to make an informed decision, there is not much chance that legislation about data centers that is driven by public opinion will do much good.

 

Sources:  I referred to several articles on California Assembly Bill No. 1777, including https://driveteslacanada.ca/news/california-makes-big-changes-to-autonomous-vehicle-legislation/, https://finance.yahoo.com/economy/policy/articles/robotaxis-driverless-vehicles-now-ticketed-150019014.html, and https://www.carscoops.com/2026/04/california-robotaxi-citation-rules/.  Charles C. W. Cooke's article "Hatred of Data Centers Is Irrational and Self-Defeating" is at https://www.nationalreview.com/2026/04/hatred-of-data-centers-is-irrational-and-self-defeating/. 

Monday, April 27, 2026

Driverless Cars: Not as Autonomous As We Thought

  

As part of an investigation by U. S. Sen. Edward Markey (D-Mass.), autonomous-vehicle operators Tesla, Waymo, and others have made public some details about how their supposedly driverless cars can be remotely assisted or controlled under certain circumstances.  In a report published on Apr. 8, Andrea Guzman of the Austin American-Statesman described how the companies reluctantly shared some but not all the information Sen. Markey requested.  Specifically, they refused to say how often such interventions take place.

 

It was a surprise to me, though it probably shouldn't have been, that autonomous-vehicle firms have staffs of remote operators ready to take control of a car or truck that gets into trouble.  It's a given that every such vehicle is in continuous wireless communication with "headquarters," whatever that amounts to.  And when the onboard systems determine that the situation the vehicle is facing is beyond its capabilities, it's reasonable that it sends an alert to headquarters to flag a human being to take a look and decide what ought to be done.  But in at least a few situations, the human-assisted decisions haven't been all they should be. 

 

For example, an investigation by a safety agency found that a remotely operated Waymo vehicle illegally passed a school bus where children were boarding.  Apparently, Waymo's remote assistance operators do not take direct control of the vehicle, but merely guide it in making its own decisions.

 

Tesla's process of remote assistance seems to be more assertive.  In responding to Sen. Markey's inquiry, Tesla revealed that in rare cases its remote assistance operators can take direct control of a vehicle, but the vehicle's speed is then limited to either 2 or 10 MPH, depending on the degree of direct control the internal automated system grants to the human operator.

 

That makes it sound like the final decision is up to the machine, not the human.  Whatever the details, the fact that human remote assistants are sometimes involved in driverless car control adds another layer to the already complex mixture of responsibilities involved. 

 

Both Tesla and Waymo emphasized that the remote assistants do not sit there watching each car passively.  For one thing, there are too many cars to monitor.  Waymo revealed that by spreading the work among staff in the U. S. and the Philippines, it can maintain about 70 remote assistance operators on duty at any given time.  Tesla's operators are all in the U. S., and presumably work day and night shifts.  All of them have held valid U. S. driver's licenses for at least three years prior to employment, for what that is worth. 

 

The information that all companies queried refused to reveal was how often it happens that remote-assistance operators intervene in a vehicle's operation.  Perhaps justifiably, they claim that is a trade secret that might compromise their competitive position.  My guess is that they have a pretty good idea of what the competition is doing, but they don't want the competition to know that.  Until it becomes an issue in a court case, then, we'll just have to guess at whether the driverless vehicle you see cruising down the street is truly on its own, or is being supervised from Palo Alto or Manila.

 

The mixing of autonomous and human control is going to become a more urgent question in the future as more and more systems become capable of running themselves.  Less than two weeks ago, on Apr. 14, Ukrainian President Volodymyr Zelenskyy announced that for the first time, his soldiers had captured an enemy position using only remotely controlled equipment, namely unmanned drones and "ground robotic systems," meaning armed robots. 

 

This historic achievement speaks volumes for the progress that Ukraine has made in developing agile, cost-effective military technology that augments their strength in human troop numbers.  After four years of war against a numerically superior foe, the Ukrainians have become world leaders in such technology, motivated by one of the strongest reasons around:  survival.  It is a truism in the history of technology that wars advance technical developments faster than peace does, and the Ukrainians have shown its validity once again.

 

For years, much ink has been spilled in engineering ethics journals about the dangers of autonomous warfare.  While this may not seem to have much to do with remote assistance for Waymo and Tesla, the two cases are extremes on a continuum.  In the case of peacetime vehicles, you want the machinery to do what it's supposed to do benignly without hurting anybody, but in cases where there is potential danger, you want the human backup person to intervene.  In the case of autonomous warfare, you want to damage and destroy the enemy and its resources, but you don't want the autonomous systems going haywire and attacking your own troops. 

 

Getting back to remotely-assisted civilian vehicles, it appears that the operating companies have found a comfortable medium between the extremes of completely autonomous operation—no humans involved anywhere along the line—and what would amount to a remote-control cabdriver, namely one remote assistant dedicated to each vehicle full-time.  Their comfort zone appears to be a mix of mostly automated control, but some manual control in emergencies. 

 

While it would be nice to know more details, in a free market it's up to the firms themselves to decide the optimum mix, and they seem to want to keep the details to themselves.  Until there's a major lawsuit in which the remote-assistant issue becomes a critical factor, we may not learn a lot more about it. 

 

The question of responsibility is fundamental in engineering ethics.  With the rise of all sorts of autonomous systems that are not quite autonomous, the question gets complicated.  The best we can do is to learn as much as we can about how things are actually done, so when problems arise the public and regulatory agencies are not kept in the dark about where responsibility may really lie. 

 

And whether it's a Waymo car or an armed robot capturing an enemy soldier, it's a little comforting to know that humans are still at the top of the command structure, however remotely.  Unless I'm the soldier—then it may not matter.

 

Sources:  Andrea Guzman's article "Self-driving cars still rely on humans—but how often remains unclear" appeared in the Austin American-Statesman on Apr. 8, 2026 at https://www.statesman.com/business/article/tesla-waymo-remote-operators-self-driving-cars-22192151.php.  I also referred to an article describing the Ukrainian remote-controlled capture of a Russian position at https://www.politico.eu/article/volodymyr-zelenskyy-robotic-systems-russia-army-positions-ukraine/. 

Monday, April 20, 2026

Getting to the Truth about AI Water Use

   

It might have been Mark Twain who said, or at least repeated, that a lie will get halfway around the world while the truth is still putting on its pants.  And that expression finds no better example than the way water usage by data centers has been portrayed over the last couple of years.

 

I live in Texas, which has seen some of the most rapid growth of data centers in the U. S.  There has even been a local controversy here in San Marcos, with citizen groups organizing rallies and protests against the construction of new data centers.  Every day on my bike-exercise route, I pass two or three "No Data Centers" signs showing a big faucet with a single drop of water coming out.  Unfortunately, that sign represents about the typical level of discourse in discussions of the environmental concerns raised by data centers, specifically their water use. 

 

Last May, a book by Karen Hao, Empire of AI, hit the market and received a wildly enthusiastic reception.  Amazon currently ranks it the No. 1 best seller in International Economics and No. 3 in General Technology and Reference, a very broad category.  It won the National Book Critics Circle Award in 2025. 

 

And yet it contains numerous factual errors, many of which blogger Andy Masley pointed out in detail in his Substack blog over a period of months.  In discussing plans Google had for a data center in a town in Chile, Hao wrote that the data center would use 1,000 times more water than the city of 88,000 used. 

 

Masley, a close reader of the technical stuff in such a book, thought this looked strange, although it is not out of line with similar claims by others for the vast quantities of water that data centers will supposedly use.  So he looked into the problem.

 

When he took the number that Hao said was the city's total water usage and divided it by the population of 88,000, he got the result that each citizen was using less than a cup of water (0.2 liters) per day.  That is nonsense, of course.  When he found the original data, it turned out that the city's consumption was stated in cubic meters, not liters.  As Hao later confirmed, someone on her staff of assistants mistook the cubic-meter figure for a liter figure. 

 

As every student of the metric system knows, one cubic meter contains 1,000 liters, hence the error by a factor of 1,000. 
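Masley's sanity check is easy to reproduce.  The sketch below uses an illustrative consumption figure of my own invention (the city's actual reported number isn't given here), just to show how dividing through exposes the factor-of-1,000 error:

```python
# Sanity-checking a city water-use claim.  The consumption figure below is
# hypothetical, chosen only to be of a plausible order for a city of 88,000.
population = 88_000
city_m3_per_year = 6_400_000  # assumed annual consumption in cubic meters

LITERS_PER_M3 = 1_000
DAYS_PER_YEAR = 365

# Correct reading: the figure is in cubic meters, so convert to liters first.
per_capita_l_per_day = city_m3_per_year * LITERS_PER_M3 / population / DAYS_PER_YEAR

# The error: treating the cubic-meter figure as if it were already liters.
mistaken_l_per_day = city_m3_per_year / population / DAYS_PER_YEAR

print(round(per_capita_l_per_day))      # 199 -- a plausible liters/day figure
print(round(mistaken_l_per_day, 2))     # 0.2 -- absurdly low, flagging the error
```

The mistaken reading is exactly 1,000 times too small, which is the same factor by which Hao's data-center comparison was inflated.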

 

What Masley then wanted to know was how such an egregious error made it past the fact-checkers acknowledged in Hao's book, the more than 1,000 reviewers on Amazon, the adulatory commentators in prominent media outlets such as the New York Times and The New Yorker, and the people who decide on the National Book Critics Circle award.

 

The only answer he could come up with is that the people writing the reviews don't give a flip about numbers.  And the people who care about numbers don't write book reviews, except for him, evidently.

 

Hao has since admitted that the error happened in the manner Masley suspected.  But that's a little bit like apologizing for not wiping your feet on the welcome mat after you burn your host's house down.  Masley has spent what amounts to a full-time job calling out and correcting this and innumerable other errors in the vast outpouring of what may well be called propaganda opposing the construction of new data centers.  His words are finally gaining some traction in the media, which is why I now know about his work.

 

But something is way out of whack with public discourse if one side of a controversy can put out tons of material that is chock-full of factual errors, and those errors can go unchallenged for months or years before one person willing to do the math finds and exposes them and gets enough attention to be heard.

 

This story reveals laziness and groupthink among those who control prominent media outlets, plus something that could be called math avoidance.  You would think that at a time when mathematics has never been a more vital underpinning of civilization, most people would be at least somewhat used to doing math when an issue that vitally involves math comes up.

 

But this story about liters and cubic meters shows that a lot of people who should do their math homework before blatting the glories of a book, or taking sides in a controversy, simply don't.  They find an expert whose vibes agree with their own and enthusiastically fall in line without the slightest regard for accuracy or healthy skepticism. 

 

I don't know what the future of data centers holds.  Their wild growth is starting to look to me like some sort of bubble, much as we had a wild growth of fiber-cable capacity in the early days of fiber-optics, and for years the industry contended with the problem of "dark fiber"—too much capacity.  Dark fiber cables don't use water or power—they just lie there.  But data centers need both, and it's possible that we may find ourselves awash in surplus data-center capacity and the boom will become a bust. 

 

But so far, there seems to be no limit to the appetite for "compute," as the experts call it, that AI wizards have.  The only limit at this point will be economic.  If the people who build the capacity can't figure out how to make money with it, they'll eventually quit adding capacity.  But as long as the public and corporations are willing to get more AI results—and so far, they are—the demand will drive the supply and more data centers will be built.

 

That is, unless people keep on believing the nonsensical claims of the more extreme opposition to data centers, and do something like getting a nationwide ban on new construction passed.  Then we'll get to see what a data-center shortage looks like, and it might be worse than you think.

 

Sources:  Numerous news outlets have covered Andy Masley's debunking efforts directed at correcting errors in the claims of data-center opponents.  The details of the particular Empire of AI error described herein can be found at his Substack post https://blog.andymasley.com/p/empire-of-ai-is-wildly-misleading. 

Monday, April 13, 2026

Whither the U. S. Postal Service?

  

We don't usually think of mail as a technology.  But if we define a technology broadly as any system engineered for the accomplishment of a practical purpose, the U. S. Postal Service is not only a technology, but a vital one.  Like any technology that doesn't go extinct, it has to change with circumstances or die.  And those two alternatives are becoming more obvious by the day as competition from electronic media brings intense pressure on so-called "snail mail" services, not only in the U. S. but worldwide. 

 

This column is brought on by an incident which nonetheless may be symptomatic of wider problems in the system.  Ever since my father taught me how to use a checkbook, I have paid many monthly bills by mailing checks.  Until recently, this was a reliable way to pay things like utility bills.  But in February and March of this year, four checks I mailed simply disappeared, including the payment for the electric, water, and sewer bill. 

 

These incidents have forced me to join most of the rest of the world in switching to electronic payments for those bills.  But it also made me wonder how the U. S. Postal Service is doing in general, and the answer is:  not well.

 

At its inception under the guidance of the first U. S. postmaster—some dude named Benjamin Franklin—the Post Office, as it was known then, became a powerful nation-binding force, putting even the most remote state or territory in contact with the rest of the country through both private letters and favorable rates for periodicals such as newspapers and magazines.  It was operated as a Cabinet-level department and was not expected to show a profit.  That remained unchanged until 1970, when thousands of postal workers walked out in the largest wildcat strike (that is, one unauthorized by union leadership) in U. S. history.  The strike led directly to Congress's passage of the Postal Reorganization Act of 1970, which President Nixon signed as a way of giving the postal unions the right to collective bargaining, though it still made strikes illegal.

 

The Act did more than authorize unions, however.  It changed the name of the organization to the U. S. Postal Service and set it up as a quasi-independent corporation that was expected to be self-supporting without government subsidies.

 

In 1970, the highly remunerative monopoly on first-class letter carrying enjoyed by the USPS was more than enough to allow it to make a profit.  But the era of electronic communications was just around the corner.  If you are old enough to remember all the things that used to be done by mail that are now done by means of the internet, that's a lot of mail that has simply disappeared, and most of it was first-class mail.  Bills, checks, legal documents, and the whole volume of commercial first-class mail that used to support the old Post Office—virtually all of that has now turned into bits transmitted on fiber cables. 

 

Analyses by Elena Patel of the Brookings Institution show that the declining volume of first-class mail has led the USPS to run a deficit every year since 2007.  By law, it can borrow money only from the U. S. Treasury, and there is a cap on its total indebtedness, which it has already hit.  When it can't borrow any more, it has to rely on its cash reserves, and these days those are running out too.  So we face the near-term prospect of the U. S. Postal Service going bankrupt unless its governing laws are changed.

 

This would have happened even earlier if the volume of package deliveries had not increased in a way that partly compensates for the huge loss of first-class mail.  But in package delivery, the USPS faces stiff competition from fully private businesses such as FedEx and UPS that operate on slimmer margins than a quasi-governmental service like the postal system, with its built-in labor costs and obligation to serve every single post office in the U. S. 

 

I don't know if all these adverse circumstances are directly responsible for my checks getting lost, but they didn't help.

 

So what should be done?  This problem of electronic-media competition upsetting the fiscal status of mail service is worldwide, not just in the U. S., and different countries are dealing with it in various ways.  In some places, the national government simply absorbs the losses and regards the mail service as a necessary part of national infrastructure.  That's the way our old Post Office began—as a nation-binding service that was simply paid for out of government funds—but by law the current USPS can't operate that way. 

 

Some might feel that the practice of laboriously carrying little pieces of paper around and physically delivering them is an outmoded practice that should be allowed to die a natural death.  But one of the Brookings studies shows that postal services form an important part of the economy of certain areas of the country, especially where population is sparse but people can still operate businesses with nationwide clientele through the postal service.

 

I don't have any brilliant solution to these problems.  But it's clear that things can't go on the way they're going, with the laws governing the USPS assuming economic conditions that simply no longer exist.  The first-class-mail monopoly that formerly subsidized everything else the USPS did has vanished as a source of profit.  And unless the federal government recognizes that what the postal system does is important enough to pay for with taxes, we will sooner or later hit a crisis resembling the one that recently struck the Transportation Security Administration, which wasn't funded in the latest Congressional budget.  The result was snarled air transportation, as TSA workers increasingly showed a reluctance to go to work without being paid. 

 

Maybe a lot of young people wouldn't miss the postman (post-person, these days).  But one way to find out the significance of a technology is to imagine that tomorrow you woke up and all of it had vanished into thin air.  If the USPS doesn't get its finances straightened out by Congress soon, we may find out what that thought experiment looks like in reality.  And the results won't be good.

 

Sources:  I referred to two Brookings studies by Elena Patel on postal systems at https://www.brookings.edu/articles/postal-systems-worldwide-confront-the-same-financial-pressures/ and https://www.brookings.edu/articles/the-us-postal-services-fiscal-crisis/, and to the Wikipedia articles "1970 United States postal strike" and "Postal Reorganization Act."

 

Monday, April 06, 2026

US Fears AI, Uses It Anyway

  

A recent report in National Review summarized opinion polls about what U. S. residents think of artificial intelligence (AI) and how much they are using it.  Paradoxically, the more people use AI, the more they fear it.

 

A poll by NBC News showed that 46% of those queried had a negative opinion of AI versus only 26% positive.  Other polls show that citizens expect mostly or entirely negative effects on society from the widespread use of AI, and believe it will lead to serious job losses.  Over half the Americans polled by the Democratic research firm Blue Rose feared that AI will cost them or a relative a job. 

 

At the same time, polls asking about AI use show that most people queried have used an AI tool in the past month, and a fourth say they use it every day.  So the old saying "familiarity breeds contempt" may be a guiding principle in how AI is viewed by the general public.

 

In a way, none of this matters.  If a new technology gets widely used and the companies providing it make money, who cares what people think about it?  Another technology that spread rapidly in only a few years, and also had profound effects on society, was television.  In 1950, only 9% of households had a TV, but by 1955 over half did.  And while there may have been a few voices raised in opposition to its growth, I think it's fair to say that the only groups that looked on the spread of TV with disfavor were industries threatened by it:  Hollywood, for instance.  And Hollywood long ago made peace with the advent of television.  The average person in the early 1950s was just waiting for TV sets to become affordable enough to buy, and the negative consequences of TV use were not much noted in public before the 1960s.

 

One difference between the advent of TV and the advent of AI is that TV didn't threaten jobs the way AI does.  And one job sector that is already seeing big effects from AI is computer science and computer programming.  The thing about public perception, accurate or not, is that it can easily become reality.  I work at a university, and I heard just last week that enrollment in computer-science programs is dropping across the board after years of steady growth.  The reasons for this are not entirely clear, but one factor may well be that students fear spending four or more years getting a degree, only to find that all the entry-level work is now being done by a few senior people writing AI prompts. 

 

On the other hand, one of the most enthusiastic proponents of AI I know is a biology professor past 80 who has been using ChatGPT in his research for the last year or two.  He says it helps him write papers more clearly and organize his thoughts, and claims it's the greatest thing that's happened to him research-wise in a long time. 

 

Many of the polls mentioned were commissioned by political interests with a view toward forming policies about AI.  Currently, the Trump administration favors few if any regulations on the technology, and wants to keep states from enacting a patchwork of legislation that would encumber the field.  Historically, this approach has worked well for computer- and network-intensive industries themselves, allowing them to create vast new economies and profit mightily therefrom.  But it has also led to a number of real and lasting problems, ranging from the maleficent effects of social media on politics to the quantified and well-known harms to children and teenagers whose lives are distorted by the use of smartphones. 

 

The crystal ball of predicting how technologies will affect society is always more or less cloudy, and I will not venture to say what the future effects of AI's negative polling will be.  Even if AI were universally detested, it's not clear that Washington could get its act together enough to pass meaningful regulatory legislation, especially when Big Tech and the federal government sometimes seem to blur into each other.  On the state level, if the feds don't stop them, some states may pass laws attempting to regulate AI, but it's a little bit like trying to nail Jell-O to the wall.  When the thing you are trying to regulate is so protean and shape-changing, it's hard to decide what regulations to pass, let alone to figure out if they've been violated.

 

Some of the anxiety the public feels about AI is simply due to the breathtaking speed with which it has advanced and improved.  Arthur C. Clarke's principle that any sufficiently advanced technology is indistinguishable from magic applies here.  Real magic is scary if it happens, and I still feel a kind of queasiness when I type commands into a chat box and the program comes back with "I did this and that."  It's understandable that millions of teenagers use AI chatbots as a substitute friend, and it's also very creepy.

 

Going to extremes, a few people believe AI will engender the end of civilization as we know it.  Other hyper-tech-optimists such as Ray Kurzweil look forward to being uploaded to an eternal cloud and think it will be heaven on earth.  The truth probably lies somewhere in between.  What we can do as individuals is to keep reminding ourselves that AI systems are not human beings, and that human beings are not machines.  But both of those truths may become harder to keep in mind as time goes on. 

 

Sources:  The National Review website carried James Lynch's article "The More Americans Use AI, the More They Fear It" on Mar. 25, 2026 at https://www.nationalreview.com/news/the-more-americans-use-ai-the-more-they-fear-it/. 

Monday, March 30, 2026

Two Views of the Universe

 

Imagine two little boys growing up in their father's household.  The older boy enjoys the company of his father.  He runs up to his father and hugs him when he gets home from work.  He listens to his father even when he's getting disciplined, or when his father tells him to do things he doesn't want to do.  He'll complain about things to his father sometimes, and even object to discipline, but the connection is always there.

 

The other little boy likes to play by himself in his room.  What he wants is to be in complete control of things.  His father has given him lots of toys, but the toys are all that he's interested in.  He's gotten very good at a number of games that he plays with the toys, but they're all play-by-yourself games, and don't involve either his brother or his father.  When his father tries to get his attention by bringing him a new toy, the kid just grabs it and slams the door closed. 

 

This image came to my mind while I was reading Edward Feser's Scholastic Metaphysics:  A Contemporary Introduction.  Assuming you know nothing about this topic, let me explain it briefly.

 

In the Middle Ages, say 1000 to 1400 A. D., the Roman Catholic Church developed not only theological doctrines, but an entire philosophy that explained to the best of their knowledge what the world was about.  They started with ancient Greek philosophers, primarily Aristotle (384-322 B. C.), and modified his thought to be compatible with Christianity.  Although this work was done by many scholars over centuries, it is generally conceded that it reached its apex with St. Thomas Aquinas (1225-1274 A. D.).  Broadly speaking, the philosophy (as opposed to theology) was known as scholasticism.

 

Metaphysics (from the Greek "beyond physics") is the study of being, the most fundamental aspects of reality.  Scholastic metaphysics is therefore what Aquinas and his cohort thought about the fundamentals of reality. 

 

This topic is virtually unknown today except to specialists, and not many of those, either, which is why I've gone to the trouble to explain it to you.  The reason for its obscurity is that, with the advent of modern thought, people like Francis Bacon (1561-1626), David Hume (1711-1776), and others intentionally discarded scholasticism in favor of other ways of thinking about reality, most of which favor the powerful methods of quantitative science, which reduce everything to mathematical models. 

 

Everybody agrees that the scientific and industrial revolutions have made huge differences in our ability to feed, clothe, and care for ourselves—by and large, positive differences.  If tomorrow, all physical signs of scientific advancements since 1700 vanished, the vast majority of people on earth would die within weeks, leaving only a few survivalists and primitive hunter-gatherers. 

 

Modern thought is sometimes summarized under the title of the Enlightenment.  In this view, the West suffered under the repressive domination of the Church, which stifled intellectual progress, until a few brave souls (e. g. Bacon and Hume) threw off the chains of darkness and led us into the light of knowledge that we didn't have to concern ourselves with God, and that our own intellects can take us wherever we want to go.  C. S. Lewis satirized the arguments sometimes made in favor of the Enlightenment in a book about his intellectual conversion to Christianity, The Pilgrim's Regress.  His pilgrim, who refers to God as "the Landlord," encounters a Mr. Enlightenment, and asks him, "But how do you know there is no Landlord?"

 

Mr. Enlightenment replies, "Christopher Columbus, Galileo, the earth is round, invention of printing, gunpowder!!"

 

When John asks how this non sequitur applies to his question, Mr. Enlightenment answers "Your people . . . believe in the Landlord because they have not had the benefits of a scientific training."

 

Whether they realize it or not, everyone growing up in the U. S. has had a "scientific training" simply by existing in the modern world and unconsciously absorbing the assumptions that surround us like the air we breathe.  One of these assumptions concerns what is "real," and I won't stop to define that further, because it means just what common sense means by it. 

 

One commonplace notion in which the scientific worldview diverges from scholastic metaphysics is the question of which is more real:  your body, or the atoms from which it is made?  The modern tendency is to think that atoms, or ultimately quarks, are what physical reality consists of, and everything else is, if not illusion, at most secondary and somehow less important. 

 

Feser shows that scholastic metaphysics is not in conflict with scientific knowledge, because modern science is highly limited in the things it can explain using its methods.  Essentially, modern science has no metaphysics, so although it is practically useful, it's no good in discovering the ultimate foundations of reality. 

 

On the other hand, scholastic metaphysics says that once the atoms are in your body, you, as a substantial being, are more real than the atoms, which now have only a virtual existence in you.  To quote Feser, "The level of basic particles is in no way privileged.  The particles are not somehow 'more real' than the substances of which they are parts.  On the contrary, it is the substances that are more real insofar [as] the particles, like every other part, exist only virtually rather than actually in the whole."

 

That little fragment uses scholastic vocabulary that may seem mysterious.  But if you read the rest of the book, you can appreciate the grand intellectual structure that scholastic metaphysics is. 

 

What good is it today, though?  In a word, sanity.  I learned about Feser's book from an online talk by Mary Harrington, who found it invaluable in understanding why modern thought has led to situations such as the transgender movement that previous generations would have considered simply crazy.  Such things are the final workings-out of perverse logic set in motion by the abandonment of the (formerly!) common-sense notions about reality that scholastic metaphysics upholds.

 

You can now guess which little boy is which.  Clearly, the one playing alone in his room should at least talk with his brother—and maybe his Father as well.

 

Sources:  Mary Harrington's First Things lecture "Our Crisis is Metaphysical" is available among other locations at https://firstthings.com/our-crisis-is-metaphysical-2026-d-c-lecture/.  Edward Feser's Scholastic Metaphysics:  A Contemporary Introduction (2014) is published by editiones scholasticae and distributed in the U. S. by Rutgers University.  The quotation from The Pilgrim's Regress is from pp. 24-25 of the Wade Annotated Edition (ed. D. C. Downing), published by Eerdmans in 2014. 

Monday, March 23, 2026

Will Sodium-Cooled Reactors Bail Out U. S. Nuclear Energy?

  

On March 4 of this year, the U. S. Nuclear Regulatory Commission made history by issuing its first-ever construction permit for a privately-owned nuclear reactor of a type that is advanced beyond the standard light-water reactors (LWRs) that have been the mainstay of the nuclear-power industry in the U. S. since its beginning in the 1950s.  TerraPower, founded by Bill Gates in 2006, obtained the permit to build a full-scale nuclear power plant in Kemmerer, Wyoming.  The plant will use TerraPower's sodium-cooled fast reactor (SFR) technology, which has the potential to solve or alleviate many of the problems with existing reactors.  A recent report in National Review describes how mainly Democratic opposition to nuclear innovation has delayed this type of permit for over fifty years.

 

Although SFR technology is advanced beyond the LWR approach, it isn't exactly new.  In 1951, the world's first breeder reactor using a sodium-potassium mixture as coolant was put into service in Idaho by the Argonne National Laboratory.  A breeder reactor is designed mainly to make more nuclear fuel than it consumes by transforming the relatively non-reactive uranium isotope U-238 into the plutonium isotope Pu-239, which can be used either for reactors or nuclear weapons. 
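For readers curious about the underlying physics, the breeding process is a standard piece of nuclear chemistry rather than anything proprietary to any one reactor design:  a U-238 nucleus captures a neutron and then undergoes two beta decays in fairly quick succession, ending up as fissionable Pu-239.  Schematically:

```latex
^{238}\mathrm{U} + n \longrightarrow {}^{239}\mathrm{U}
\xrightarrow{\beta^{-},\; t_{1/2}\approx 23.5\ \text{min}} {}^{239}\mathrm{Np}
\xrightarrow{\beta^{-},\; t_{1/2}\approx 2.4\ \text{d}} {}^{239}\mathrm{Pu}
```

Because the intermediate isotopes decay on a timescale of minutes to days, a reactor running on this cycle steadily converts its blanket of U-238 into new fuel while it operates.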

 

A modified form of the breeder approach is used in TerraPower's reactor, in that as time goes on, a small core of enriched fuel breeds fissionable material in its surrounding non-fissionable nuclear material, which can even be obtained by processing existing nuclear waste from light-water reactors.  In one stroke, this approach both conserves new fuel and gives us something useful to do with some of the nuclear waste that is now sitting around consuming space and worrying people.  The operating parameters of the TerraPower type of reactor can be tweaked to minimize its own waste stream, and avoid producing pure plutonium that would be of interest to terrorists wanting to make their own nuclear weapons.

 

Another advantage of the SFR reactor is that the coolant is liquid sodium, not water.  Admittedly, liquid sodium is not something you want just lying around in your living room.  When exposed to air, especially moist air, it tends to catch fire, as the Russians, who have operated SFR reactors for many years, have discovered.  But TerraPower is going to bury most of the nuclear part of their plant underground and submerge the reactor in a passive pool of sodium.  If the nuclear core overheats, the great thermal mass of the sodium pool tends to absorb excess heat until the core self-stabilizes by expansion. 

 

Unlike light-water reactors, which have to keep their coolant water under high pressure, an SRF keeps its sodium coolant at atmospheric pressure.  This means the containment vessel can be much thinner and still protect the environment from unplanned releases of radioactive material.  Although TerraPower's first reactor will cost some $5 billion, the hope is that the new design can be standardized so that such reactors can be mass-produced at much less cost.

 

Nuclear power in the U. S. has undergone a checkered career, from boom times in the 1950s when optimists claimed it would make electricity too cheap to meter, to the doomsayer times of the 1980s when the Three Mile Island partial meltdown in Pennsylvania in 1979 and the much worse Chernobyl disaster in Ukraine in 1986 turned the political winds against it.  Ever since then, as Andrew Follett of National Review explains, opponents of nuclear power have tried to obstruct new construction of light-water reactors, and imposed a rigid conservatism that made licensing so-called "innovative" designs such as TerraPower's almost unthinkable. 

 

Fortunately for TerraPower, the Nuclear Regulatory Commission has sped up its approval process, completing the effort for this license in only a year and a half.  One of the main issues in building new nuclear plants of any kind since the 1970s has been the morass of regulatory hurdles that companies have had to wade through for many years.  It doesn't hurt that TerraPower is backed by one of the world's richest men, but even rich men get bored sometimes. It appears that Mr. Gates has maintained enough interest in TerraPower to bring it to the point of actually constructing a reactor that breaks the restrictive mold of light-water reactors that has held back innovation in the U. S. nuclear industry for decades.

 

Of course, cost overruns are another bête noire for the nuclear industry, and only time will show whether TerraPower can keep construction of the Kemmerer plant within budget and on schedule.  But the simpler safety and containment requirements that sodium-cooled reactors allow should make it easier. 

 

This development comes at a time when the U. S. electric grid faces a great challenge:  to meet vastly increased demand for power from data centers that are proliferating across the country.  The data-center boom took the electric industry largely by surprise, and strains are showing in the form of increased rates in some areas and local not-in-my-back-yard fights. 

 

But if the nuclear industry can get back on track with standardized, predictable designs that produce less nuclear waste, have a greater capacity for meeting peak loads (as the TerraPower design does through the great thermal mass of sodium and auxiliary molten-salt heat storage), and store enough fuel in them to run for thirty or forty years, the future looks brighter for nuclear power than it has in my lifetime, and I'm 73. 

 

As with any innovative design, TerraPower's Kemmerer plant will be under extreme scrutiny.  Any accident or mishap, no matter how small, is likely to be seized upon by opponents as evidence that the new design is "too dangerous."  So I hope the firm is using an extra measure of caution to ensure that the eggs they are walking on will not break, and the U. S. can look forward to a power-production source that is more reliable than most renewable sources and produces less nuclear waste than existing designs. 

 

Sources:  The article "After 52 Years, Democrats' Red Tape Unravels" appeared on the National Review website Mar. 21, 2026 at https://www.nationalreview.com/2026/03/after-52-years-democrats-red-tape-unravels/.  I also referred to Wikipedia articles on sodium-cooled fast reactors, TerraPower, and experimental breeder reactors.