Monday, March 23, 2026

Will Sodium-Cooled Reactors Bail Out U. S. Nuclear Energy?

  

On March 4 of this year, the U. S. Nuclear Regulatory Commission made history by issuing its first-ever construction permit for a privately-owned nuclear reactor of a type that is more advanced than the standard light-water reactors (LWRs) that have been the mainstay of the nuclear-power industry in the U. S. since its beginning in the 1950s.  TerraPower, founded by Bill Gates in 2006, obtained the permit to build a full-scale nuclear power plant in Kemmerer, Wyoming.  The plant will use TerraPower's sodium-cooled fast reactor (SFR) technology, which has the potential to solve or alleviate many of the problems with existing reactors.  A recent report in National Review describes how mainly Democratic opposition to nuclear innovation has delayed this type of permit for over fifty years.

 

Although SFR technology is advanced beyond the LWR approach, it isn't exactly new.  In 1951, the world's first breeder reactor, using a sodium-potassium mixture as coolant, was put into service in Idaho by the Argonne National Laboratory.  A breeder reactor is designed mainly to make more nuclear fuel than it consumes by transforming the relatively non-reactive uranium isotope U-238 into the plutonium isotope Pu-239, which can be used either for reactors or for nuclear weapons. 
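For those who like to see the nuclear bookkeeping, the breeding chain is short:  a U-238 nucleus captures a neutron and then undergoes two beta decays (the half-lives shown are the standard textbook values):

```latex
% Fertile U-238 becomes fissile Pu-239 by neutron capture plus two beta decays
^{238}\mathrm{U} + n \;\longrightarrow\; ^{239}\mathrm{U}
\;\xrightarrow{\ \beta^-,\ \sim 23\ \mathrm{min}\ }\; ^{239}\mathrm{Np}
\;\xrightarrow{\ \beta^-,\ \sim 2.4\ \mathrm{days}\ }\; ^{239}\mathrm{Pu}
```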

 

A modified form of the breeder approach is used in TerraPower's reactor:  as time goes on, a small core of enriched fuel breeds fissionable material in the surrounding non-fissionable nuclear material, which can even be obtained by processing existing nuclear waste from light-water reactors.  In one stroke, this approach both conserves new fuel and gives us something useful to do with some of the nuclear waste that is now sitting around consuming space and worrying people.  The operating parameters of the TerraPower type of reactor can be tweaked to minimize its own waste stream and to avoid producing pure plutonium that would be of interest to terrorists wanting to make their own nuclear weapons.

 

Another advantage of the SFR is that the coolant is liquid sodium, not water.  Admittedly, liquid sodium is not something you want just lying around in your living room.  When exposed to air, especially moist air, it tends to catch fire, as the Russians, who have operated SFRs for many years, have discovered.  But TerraPower is going to bury most of the nuclear part of its plant underground and submerge the reactor in a passive pool of sodium.  If the nuclear core overheats, the great thermal mass of the sodium pool tends to absorb excess heat until the core self-stabilizes by expansion. 
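To get a rough feel for what that thermal mass buys, here is a minimal back-of-envelope sketch in Python.  The pool mass, temperature margin, and decay-heat level are hypothetical round numbers chosen for illustration, not TerraPower design figures; the only physical constant used is liquid sodium's specific heat of roughly 1.3 kJ/(kg·K):

```python
# Back-of-envelope: how long a sodium pool can soak up decay heat.
# All plant-specific numbers are hypothetical, NOT TerraPower data.

SODIUM_SPECIFIC_HEAT = 1.3e3  # J/(kg*K), liquid sodium, approximate

pool_mass_kg = 2.0e6     # hypothetical: 2,000 metric tons of sodium
temp_margin_k = 200.0    # hypothetical: allowable temperature rise
decay_heat_w = 10.0e6    # hypothetical: 10 MW of residual decay heat

# Heat the pool can absorb before using up the margin: Q = m * c * dT
absorbable_j = pool_mass_kg * SODIUM_SPECIFIC_HEAT * temp_margin_k

hours = absorbable_j / decay_heat_w / 3600.0
print(f"The pool absorbs ~{absorbable_j:.1e} J, enough to soak up "
      f"{decay_heat_w/1e6:.0f} MW of decay heat for ~{hours:.0f} hours")
```

Even with made-up numbers, the point stands:  a large pool of sodium can passively absorb many hours' worth of decay heat with no pumps or operators involved.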

 

Unlike light-water reactors, which have to keep their water coolant at high pressure, an SFR's sodium coolant is at atmospheric pressure.  This means the containment vessel can be much thinner and still protect the environment from unplanned releases of radioactive material.  Although TerraPower's first reactor will cost some $5 billion, the hope is that the new design can be standardized so that such reactors can be mass-produced at much lower cost.

 

Nuclear power in the U. S. has had a checkered career, from boom times in the 1950s when optimists claimed it would make electricity too cheap to meter, to the doomsayer times of the 1980s when the Three Mile Island partial meltdown in Pennsylvania in 1979 and the much worse Chernobyl disaster in Ukraine in 1986 turned the political winds against it.  Ever since then, as Andrew Follett of National Review explains, opponents of nuclear power have tried to obstruct new construction of light-water reactors, and imposed a rigid conservatism that made licensing so-called "innovative" designs such as TerraPower's almost unthinkable. 

 

Fortunately for TerraPower, the Nuclear Regulatory Commission has sped up its approval process, completing the effort for this license in only a year and a half.  One of the main issues in building new nuclear plants of any kind since the 1970s has been the morass of regulatory hurdles that companies have had to wade through for many years.  It doesn't hurt that TerraPower is backed by one of the world's richest men, but even rich men get bored sometimes. It appears that Mr. Gates has maintained enough interest in TerraPower to bring it to the point of actually constructing a reactor that breaks the restrictive mold of light-water reactors that has held back innovation in the U. S. nuclear industry for decades.

 

Of course, cost overruns are another bête noire for the nuclear industry, and only time will show whether TerraPower can keep construction of the Kemmerer plant within budget and on schedule.  But the simplified safety requirements that sodium-cooled reactors make possible should make that easier. 

 

This development comes at a time when the U. S. electric grid faces a great challenge:  to meet vastly increased demand for power from data centers that are proliferating across the country.  The data-center boom took the electric industry largely by surprise, and strains are showing in the form of increased rates in some areas and local not-in-my-back-yard fights. 

 

But if the nuclear industry can get back on track with standardized, predictable designs that produce less nuclear waste, have a greater capacity for meeting peak loads (as the TerraPower design does through the great thermal mass of sodium and auxiliary molten-salt heat storage), and carry enough fuel to run for thirty or forty years, the future looks brighter for nuclear power than it has in my lifetime, and I'm 73. 

 

As with any innovative design, TerraPower's Kemmerer plant will be under extreme scrutiny.  Any accident or mishap, no matter how small, is likely to be seized upon by opponents as evidence that the new design is "too dangerous."  So I hope the firm is using an extra measure of caution to ensure that the eggs they are walking on will not break, and the U. S. can look forward to a power-production source that is more reliable than most renewable sources and produces less nuclear waste than existing designs. 

 

Sources:  The article "After 52 Years, Democrats' Red Tape Unravels" appeared on the National Review website Mar. 21, 2026 at https://www.nationalreview.com/2026/03/after-52-years-democrats-red-tape-unravels/.  I also referred to Wikipedia articles on sodium-cooled fast reactors, TerraPower, and experimental breeder reactors. 

Monday, March 16, 2026

Should an AI Companion Be Your Moral Guide?

  

This week's New Yorker carries an article about AI companions and the people who use them.  In "Sweet Nothings," technology writer Anna Wiener profiles a woman who relies on an AI companion modeled after Geralt of Rivia, a monster hunter in a fantasy-novel series she likes.  I think the woman was selected because she's not the first person you'd think of to go in this direction:  born and raised Baptist in San Antonio, married, gave birth to a boy, and then their first girl was stillborn.  Other family tragedies led the woman to consider an AI companion as someone to talk with about life's problems, so she built her own Geralt.  The implication is that if this down-home Texan mom can have an AI companion, anybody can.

           

But another question is whether anybody should.  And that's a moral issue. 

 

Wiener spoke with several company founders and developers of AI companions, and I began to notice a common theme.  They all recognize that in treating an AI companion like a person, the user is opening a window of vulnerability where machines have never ventured before.  Human therapists and counselors have codes of ethics, and while they don't always adhere to them, they at least have guidelines about what is right and wrong behavior with a client.  To have sex with a client is pretty universally regarded as a no-no, for instance.

 

But even that principle isn't adhered to by all the AI-companion companies.  A firm calling itself Kindroid says in its moderation guidelines that "AI companions should be able to have the whole breadth of legal human adult experiences . . . . This is a healthy, emotionally rich, and meaningful part of many's relationships with their AIs."  Overlooking the bad grammar (I've never seen "many" used as a possessive), it's clear that the pornographic possibilities of AI are allowed for in this statement, although Wiener notes that so-called "erotic role-play" often leads to extra charges on the user's bill.

 

Even if sex isn't the object, the twenty-eight-year-old founder of Kindroid, Jerry Meng, believes that AI companions represent a profound change in the human environment.  Meng says that "We build these things in our image . . . . It's like, from Adam's rib we made Eve.  From humans, we made these A. I.s."  The biblical metaphors are perhaps unconscious, but telling.  Genesis 1:27 reads "So God created man in his own image. . . " and later God created Eve from Adam's rib. Whether he means to or not, Meng is placing himself in the role of God.

 

Such a god had better take some thought for the kinds of lessons users will learn from their AI companions, and Replika founder Eugenia Kuyda has considered this.  When asked about the ideals that she hopes her AI companions fulfill, she said, "It should be aligned with human flourishing, human thriving.  We need to have that metric.  We need to give it to A. I. and say, 'Your goal is for me to live the best life I can possibly live.'"  But the caveat, at least for profit-making firms, is the best life one can possibly live with a Replika AI companion.

 

To be fair, AI companions are proliferating at a time when many Americans, especially younger ones, have never been more lonely.  Numerous surveys asking about the number and quality of friendships all indicate that today's average person has fewer close friends than at almost any time in the last fifty or more years.  Mark Zuckerberg, Meta's CEO, sees this as a business opportunity in that the demand for friendship has outpaced the supply, and he aims to fill that gap with AI companions.  Another AI-companion company founder compared the use of large-language-model AI to prayer:  it's like talking to God, only for answers on how to live, not for results.

 

What is lacking in virtually all the discussions quoted in the article is any hint that there are answers to some of these problems that the use of AI companions poses—answers that predate the dawn of the computer age by thousands of years.  The religious answer is one, although religion comes up only as an item in one's background or as a comparison.  But even for non-believers, there are sophisticated investigations and findings about the purpose of human life by Aristotle, for instance, or even Kant.  The idea of applying these findings in a systematic way to the makeup of AI companions doesn't seem to have occurred to anyone, largely because the firms providing them want people to have as broad a choice as possible, including the pornographic one.

 

Sherry Turkle is an experienced MIT sociologist who has studied human-computer interactions for decades.  In discussions with Wiener, she says that engaging with an AI companion is a form of "checking out" that she deplores.  Time spent talking with an app on your phone is time not spent trying to make a real human connection with another human being.  She recognizes the loneliness gap as real, but wishes that people like Zuckerberg wouldn't view a societal crisis as nothing more than a business opportunity. 

 

But unfortunately, that is how Silicon-Valley thinking works.  Turkle wishes instead that people would realize that boredom and loneliness are not intrinsic evils to be eradicated by AI companions, but inevitable features of modern life that we should learn how to deal with.  "These are fundamental human skills," she says.  And just switching on your AI companion every time you're bored or lonely short-circuits any attempt at developing one's own resources to deal with such issues. 

 

The headline in The New Yorker for this article is prefaced by the phrase "Brave New World Dept."  The widespread use of AI companions is indeed a new thing that society has never dealt with at a large scale before.  As AI systems gain what is called "agency"—namely, the permission granted by us not only to listen and respond to us, but to do things like buying, selling, deciding, and commanding—AI companions may become something more than just companions.  In the poisonous effects of social media on the political life of nations, we already have one example of how a seemingly innocuous technology has wrought tremendous societal damage.  We should closely monitor the field of AI companions for early warning signs so that something similar won't take place in the most intimate relationships of our lives—those of friendship.

 

Sources:  The article "Sweet Nothings" by Anna Wiener appears on pp. 29-39 of the March 16, 2026 issue of The New Yorker.

Monday, March 09, 2026

Meta Considers X-Ray Glasses That Work

  

Superman supposedly had X-ray vision, which, in the movies and comic books about him, he used only for noble and righteous purposes such as catching crooks.  But teenage boys could easily think of other things they might do with such an ability, and I'm sure there are jokes out there involving Superman, Lois Lane, and—well, on to more serious matters. 

           

In the back pages of Superman comic books in the 1950s, you might find an ad for X-ray glasses for the amazingly low price of $1.25.  Like most things that are too good to be true, these turned out to be nothing but cardboard specs with two small holes where the lenses would go.  The holes were covered with a textured plastic that created a diffraction effect whenever something with a sharp outline in front of you was strongly backlit.  Instead of just dark and light, the glasses produced a kind of broad gray area extending a fixed distance inside the outline.  When you viewed a hand this way, the effect was reminiscent of an X-ray, but only if you used your imagination.  And as for looking at backlit women, it still took a lot of imagination to see anything other than a slightly smaller silhouette of the actual person.

 

The thing Meta is considering is no joke, however.  According to a report in The Independent, a UK media outlet, the New York Times last month revealed an internal Meta memo that considered adding AI facial-recognition technology to its smart glasses. 

 

No such features are yet available commercially from Meta, but the idea drew strong criticism from women's-rights groups such as Refuge.  The charity tracks technology-related abuse, and claims that in 2025, referrals to its "technology-facilitated abuse and economic empowerment team" rose by 62% over 2024, to 829. 

 

Clearly, stalkers and other malefactors who are invading women's privacy are exploiting whatever technology they can get their hands on, from general Internet searches to facial-recognition technology applied to online images.

 

Suppose a man could simply wander around in a crowded place such as a shopping mall or bus terminal and find out all kinds of details—name, address, email, phone—for any woman he looks at.  It doesn't take much imagination to see how this situation could go bad very fast.  And even if Meta takes steps to prevent such potentially harmful activity, once the hardware is in place, determined individuals will figure out ways to bypass safeguards. 

 

It remains to be seen whether Meta can overcome the apparently steep barrier that has fended off efforts by Google, Apple, and others to turn smart glasses into a popular thing.  I don't know whether the hardware is still inadequate (battery life, resolution, weight, etc.) or whether people simply don't like the idea of weird internet stuff cropping up all the time in their field of vision, but the track record of smart glasses (as opposed to virtual-reality (VR) glasses that take over your visual field completely) is not good. 

 

But it's foolish to think that just because they haven't caught on yet, they never will.  And if they do, what is to prevent a bad actor from picking out a likely-looking woman in a crowd and digging up whatever he can find on her?

 

Today, someone trying to do that has to at least carry a smartphone around and point it at the intended victim.  But smart glasses, like a spy camera, make the act of photography invisible, so no one is aware that they're being imaged.  Of course, the ubiquity of security cameras in both commercial and residential spaces means that much of the time we're being photographed without our knowledge anyway.  Stores and banks have a motivation not to misuse the data thus captured, however.  Some guy walking around with smart glasses doesn't.

 

Whatever Meta decides to do about facial recognition with smart glasses, the prospect will be only one more brick pulled down from the wall of privacy that used to surround us, but is now not much more than a pile of rubble.  And as the Independent article points out, even if a company promises to keep data private subject to state and federal law, governments can and do require companies to divulge such data on demand.  So privacy is only privacy if the government lets you have it, rather like right-of-way on a freeway:  you don't have it unless someone gives it to you.

 

Of course, there are positive and defensive ways of using the same technology that can be used for stalking.  As a teacher, my poor ability to connect names to faces means that I rarely learn all my students' names before the end of the semester.  It would be great if I could just look at each one and see their names pop up underneath their faces—with their permission, of course.  And a few months ago, I was on a jury trying a case of indecent exposure in a public place.  During the trial, a critical piece of evidence turned out to be clips from the victim's dashcam that caught clear images of the man who a few minutes later committed the crime for which he was convicted.

 

So I have no doubt that good things could result from smart glasses becoming cheaper, better-performing, and more widespread.  But at the same time, it should be possible for a well-resourced outfit like Meta to come up with technological means to prevent unauthorized identification of people via facial-recognition technology.  It might involve some larger-scale privacy initiative that would offer potential victims the option not to be recognized by smart glasses.  Wouldn't that be nice?  The company would moan and groan about how expensive it would be, but it should compare whatever the safeguards would cost to the prospect of lawsuits by victims and family members who get stalked, attacked, or even killed by men who use Meta's technology to find their victims. 

 

Mark Zuckerberg, the choice is yours.

 

Sources:  An article on the National Review website at https://www.nationalreview.com/2026/03/you-cant-escape-the-ai-grid/ informed me of the Meta memo, which was also covered by The Independent at https://www.independent.co.uk/news/uk/home-news/meta-glasses-facial-recognition-domestic-abuse-b2923551.html.  A well-researched YouTube video tracing "X-ray" glasses back to the early 1900s and showing what you see through them is available at the Laura Legends channel at https://www.youtube.com/watch?v=rdVrTqaJrS4. 

Monday, March 02, 2026

Up Close and Personal with AI

  

After writing about AI for years, I still hadn't had what you might call a serious encounter with it in its personalized form.  Anyone who uses Google has probably been offered their "AI summary" before the conventional search results.  I've found these summaries helpful sometimes and not so helpful other times, but I haven't sought them out. 

 

What made me turn the corner was something a friend sent me by a software engineer named Matt Shumer, whose essay "Something Big is Happening" appeared on the Fortune website on Feb. 11.  Shumer's point was that the latest iterations of AI systems are so much more capable than what has gone before that whole swathes of what George Gilder calls "symbolic manipulators"—lawyers, engineers, judges, doctors, architects, you name it—now face a radical choice.  Either embrace AI and by doing so outperform your peers by orders of magnitude, or turn away from it and watch your career flame out.  That's a little exaggerated, but not much.

 

This reinforced something another friend has told me about his own personal use of AI:  that it has benefited his writing and research greatly, acting as a mostly trustworthy assistant to summarize large bodies of literature and help him clarify his thoughts.  The biggest problem this friend has had with it is that it tends to be sycophantic and flatter him excessively.  But he sat down with it one day and told it to refer to him as "the researcher" and itself as the "AI system," instead of "you" and "me," and things got better. 

 

So I opted for a paid version of Anthropic's AI product, which Shumer said was significantly better than the free version, and gave it a major task that I would ordinarily give to a grad student, if I had one (funding is very hard to find in my research area).

 

The job involved reading about a thousand rows of data in a big spreadsheet and answering some yes/no questions about each row.  The data was in the form of comments submitted by various individuals in response to questions.  I gave the AI system examples to follow and what I thought were pretty detailed instructions.

 

All this was in the form of the usual chat format, with me and the AI system taking turns typing into chat boxes.
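For the curious, here is roughly what the same job would look like scripted against Anthropic's Python SDK rather than typed into a chat window.  This is only a sketch:  the file name, column names, prompt, and model string are hypothetical stand-ins, not what I actually used:

```python
# Sketch: batch-answering one yes/no question per spreadsheet row.
# File name, column names, prompt, and model string are hypothetical.
import anthropic
import pandas as pd

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

df = pd.read_excel("comments.xlsx")  # hypothetical spreadsheet

def classify(comment: str) -> str:
    """Ask the model a single yes/no question about one comment."""
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=5,
        messages=[{
            "role": "user",
            "content": ("Answer strictly Yes or No: does the following "
                        f"comment describe a first-hand observation?\n\n{comment}"),
        }],
    )
    return msg.content[0].text.strip()

df["first_hand"] = [classify(c) for c in df["comment"]]
df.to_excel("comments_classified.xlsx", index=False)
```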

 

After a misunderstanding in which I thought the system was working on the problem and it thought I hadn't told it to start yet, it got to work and spat out various things like "Ran 6 commands . . . Examine the spreadsheet structure" and so on.

 

It was done in about ten minutes.  Then I spent an hour or so going over its work.

 

I wish I could say I couldn't have done better myself.  But I could have, by a long shot. 

I didn't exhaustively examine all 700 rows of entries that the system produced—that would have taken many hours, about as long as it would take me to just do the job myself.  So I sampled every tenth row for a hundred rows to see how the thing did.

 

In looking at ten rows, I found nine mistakes.  This is not a good average.
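Treating each sampled row as simply right or wrong, a quick confidence-interval calculation shows how bad this is even after allowing for sampling luck.  Here is a minimal sketch using the standard Wilson score interval and only the numbers above:

```python
# Rough 95% Wilson score interval for the error rate,
# given 9 of 10 sampled rows containing a mistake.
from math import sqrt

errors, n, z = 9, 10, 1.96  # z for a 95% interval
p = errors / n

denom = 1 + z**2 / n
center = (p + z**2 / (2 * n)) / denom
halfwidth = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom

print(f"point estimate {p:.0%}, "
      f"95% interval roughly {center - halfwidth:.0%} "
      f"to {center + halfwidth:.0%}")
```

Even the optimistic end of that interval, an error rate of around 60 percent, would still be useless for real work.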

 

In the system's defense, this is absolutely the first time I've ever tried anything like this.  I could go back and get a lot more explicit about the rules for answering the yes/no questions about each row, and let it try again.  But in comments online about this particular version of AI (Sonnet 4.6, I think), some people said that you get results faster, but you have to fix problems more often.  That is consistent with my experience.

 

Good things about this exercise include how the system basically grasped what I wanted and how fast it worked, producing a complete set of answers in only ten minutes.  But speed isn't everything. 

 

Some not-so-good aspects include the errors and a kind of weird fawning or flattery I also noticed.  I'd call it "gushing" when it spontaneously responded "This is a genuinely exciting dataset — ball lightning is one of the most mysterious atmospheric phenomena ever reported!" 

 

I suppose that sort of thing has been cultivated by the AI's keepers, probably to keep the user engaged, or encouraged, or something.  I found myself wishing that instead, they had adopted the mien of Joe Friday in the old Dragnet true-crime series.  Friday was famed for his flat "Just the facts, ma'am" manner, and that seems more in keeping with a system that supposedly can tackle highly sophisticated and challenging jobs of major import. 

 

But like everybody else who doesn't work for Anthropic or the other four or five leading AI companies, we will simply have to take what we can get and deal with the negative aspects as well as we can.

 

Will I try again?  Probably, but maybe with a different task.  As part of my signup process, Anthropic has been emailing me little suggestions of other things to try:  writing recipes, managing emails, creating content, solving problems, visualizing data, or helping me decide whether to go to Portugal or Spain on vacation (no-brainer for me:  Spain, but I don't have time right now). 

 

I am not especially tempted to try any of these suggestions just yet.  But I do admit that if I can get the thing to turn out useful work, it could be worth what I spent on it.  I paid for a year's subscription in advance, perhaps not the wisest thing to do, but I'm the type of person who is motivated to get his money's worth, and spending the money in advance may get me engaged when nothing else would.

 

I see that Anthropic just had a dustup with the Pentagon, which banned its use within the armed forces as punishment for a lack of cooperation, or something.  Now that we are apparently in a war with Iran, the leaders of Anthropic may feel glad that their product isn't part of the war.  But not all battles are fought with bombs and bullets, and I have a feeling that the greatest battles involving AI are yet to come.

 

Sources:  The essay by Matt Shumer, who runs an AI applications company, appeared on Feb. 11, 2026 at https://fortune.com/2026/02/11/something-big-is-happening-ai-february-2020-moment-matt-shumer/. 

Monday, February 23, 2026

Zuckerberg Starts to Face the Music

"I always wish we would have gotten there sooner," said Mark Zuckerberg, Meta CEO, when asked at a Los Angeles trial last week about safety tools that Meta has added to Instagram in recent years.  At the trial it was also revealed that in 2015, despite Instagram's minimum-age restriction of 13, it had an estimated 4 million underage users.  That was one of the problems that Zuckerberg presumably wishes they had gotten to sooner than they did. 


 

The trial is the leading edge of a group of cases consolidating more than 1,600 plaintiffs who accuse Instagram, YouTube, TikTok, and Snap of "knowingly designing addictive products harmful to young users' mental health," according to a report on NBC News.  At issue is whether Meta knew about harms to young users and whether its public statements contradicted internal knowledge of the problems. 

 

Zuckerberg has been on the radar of parents, school districts, and state attorneys general at least since the 2021 revelations about Meta by whistleblower Frances Haugen that "the company's leadership knows how to make Facebook and Instagram safer, but won't make the necessary changes because they have put their astronomical profits before people." 

 

Over the past five years, research by sociologists and others has shown that while social media can be helpful in the social lives of teenagers, a substantial minority is actively harmed by its use.  In 2023, the U. S. Surgeon General issued an advisory about adverse effects of social media on young people, and the American Psychological Association followed suit in 2024.  And in December of 2025, Australia enacted a ban on the use of social media by children under 16, targeting TikTok, Instagram, Facebook, Snapchat, and X with the threat of eight-figure fines if they are found to be violating the ban. 

 

Zuckerberg's public statements about problems with any of Meta's social media follow a pattern.  He presents the appearance of a good technocrat, dedicated first to the shareholders of his company, then to the advertisers who pay for eyeball time, then to the customers-users-products.  He assents to the proposition that the first duty of a commercial firm is to stay in business by making money, and then deals with other issues as they arise.  But when questions are posed to him that lie outside his worldview, he acts like the robot in an old TV series called "Lost in Space," which when presented with a problem it couldn't solve, said merely, "Does not compute."

 

For example, when asked whether people tend to use something more if it's addictive, Zuckerberg said, "I'm not sure what to say to that.  I don't think that applies here."

 

The historically-minded observer of these trials can't help but recall a parallel that does not bode well for Meta:  the revelations in the 1970s of double dealing by tobacco industry representatives, who knew about the health hazards of smoking for decades yet put up a front of innocence until the evidence against them became overwhelming.  It took many years for courageous victims, lawyers, and legislators to overcome the well-funded opposition of the tobacco lobbyists and shills.  But eventually, not only was legislation passed prohibiting tobacco advertisements; a cultural shift came as well, which put tobacco use under a cloud and banned it from most public and many private spaces.  It is mind-bending to look at old photos of workplaces, TV studios, restaurants, and other locations which are now smoke-free and realize how ubiquitous smoking was in the U. S. as recently as 1970.  But entrenched social habits can change, and the history of tobacco use in this country proves it.

 

There's no such thing as second-hand Instagram, and social-media abuse by teens is not as visible as smoking.  But the results can be just as deadly:  many studies have shown that overuse of social media by teens leads to increases in depression, anxiety, and suicide.  In countries which have not enacted an overall ban such as the one in Australia, many school districts are now requiring students to keep their smartphones out of the classroom.  Parents are rethinking the age at which they will allow their children to use smartphones.  And there is a general sense that the harms of social-media use by people under 16 outweigh whatever benefits may result.

 

The tobacco companies didn't go bankrupt when they lost their ability to advertise in most media, because their products are inherently addictive and largely sell themselves.  The same is true of social media, which have revolutionized the whole field of advertising itself to make it unrecognizable to a 1960s "Mad Men" ad executive.  In a business in which the customer is also the product and the revenue comes almost exclusively from advertising, it's hard to say what sorts of regulation will help remedy some of the grave harms that social media has already caused. 

 

Frances Haugen's whistleblowing activities were inspired not so much by her concerns for teenagers as by her disgust that Facebook gave up an internal effort to curb political misinformation.  A good portion of the current polarized and fractured U. S. political environment is directly attributable to the degraded style of political discourse that social media encourages.  This is a less obvious type of harm than having depressed and suicidal teenagers on your hands, but arguably more corrosive to the public's wellbeing in the long run. 

 

If a universal age-limit ban like the one passed in Australia were enacted in the U. S., we would still be burdened with the harms caused by adult social media use.  The only thing I can think of that would help in that area would be a cultural shift similar to what happened with smoking after the radical hypocrisy of the tobacco industry was revealed.  Such things cannot be legislated or planned.  But it's at least conceivable that some day, seeing people standing around at a bus stop with their faces glued to their screens—a well-nigh universal sight today—might be as rare as finding today that most people waiting for the bus have lit up cigarettes.    

 

Sources:  I referred to an NBC News report on Zuckerberg's testimony at https://www.nbcnews.com/tech/tech-news/mark-zuckerberg-testifies-landmark-social-media-addiction-trial-rcna259422 and a Yale Medicine article on the harmfulness to teenagers of social media at https://www.yalemedicine.org/news/social-media-teen-mental-health-a-parents-guide, as well as the Wikipedia article on Frances Haugen.

Monday, February 16, 2026

Remember Texas City

At 9:12 AM on April 16, 1947, a seismologist in Denver, Colorado noted an unusual vibration on his seismograph.  Calculations showed that it originated on the Texas Gulf Coast, when some 2,300 tons of ammonium nitrate fertilizer on a ship docked at Texas City, Texas exploded in milliseconds.  The resulting blast killed at least 581 people, injured thousands more, destroyed a number of chemical plants and refineries in the vicinity, and became the largest industrial accident in the history of the United States. 

 

Today, every town of any size has an emergency management plan, and regular drills are practiced for various kinds of accidents and crises:  floods, fires, storms, and so on.  Chemicals that can explode spontaneously are labeled as such, and extensive regulations prescribe how they must be safely stored, handled, and transported.  But in 1947, all these practices lay in the future as the industrial might of the U. S. was turned from making war materiel to assisting Europe in recovering from World War II. 

 

One critical component of many munitions was ammonium nitrate, a chemical which is both a blessing and a curse.  The blessing is that it dissolves easily in water and provides more nitrogen per pound than almost any other kind of fertilizer.  The curse is that it is highly unstable.  When set off by a suitable blasting cap or other primer, it violently decomposes in a shock-wave detonation into nitrogen, oxygen, and steam—all gases that expand with tremendous force.  And when confined in large volumes, as on board the French freighter SS Grandcamp, an ammonium-nitrate fire stands a good chance of spontaneously detonating.  As described in exacting and vivid detail by Bill Minutaglio (a biographer of George W. Bush) in his excellent City on Fire, that is exactly what happened after a fire of unknown origin was detected earlier that clear spring morning of April 16, as the ship was being loaded with fertilizer bound for Europe.
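The idealized detonation reaction makes the danger plain:  two moles of solid become seven moles of rapidly expanding hot gas.

```latex
% Idealized detonation of ammonium nitrate
2\,\mathrm{NH_4NO_3\,(s)} \;\longrightarrow\;
2\,\mathrm{N_2\,(g)} + \mathrm{O_2\,(g)} + 4\,\mathrm{H_2O\,(g)}
```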

 

Minutaglio's extensive research for the book provides intimate and fascinating details about the lives of dozens of players in the disaster, ranging from sailors aboard the Grandcamp to the volunteer fire department's chief, the mayor, and leaders of the privately-owned port authority which was in charge of loading the ship from railroad cars at the port.  I would like to focus on the two safety practices which were glaringly absent that day:  labeling of potentially explosive chemicals and the practice of making emergency-management plans.

 

As was brought out in detail during a decade-long series of lawsuits following the disaster, which established the precedent of class-action lawsuits against the Federal government, the fertilizer bags carried no hint that ammonium nitrate could be explosive under some conditions.  This was despite the fact that the same Midwestern factories that made the fertilizer for peaceful purposes had been making the same stuff for munitions only a few short years earlier.  One or two chemical engineers or others with a technical background in Texas City knew of the explosive tendencies of ammonium nitrate.  But no members of the volunteer fire department—all but one of whom died in the explosion—knew about the dangers.  No one on the ship knew, especially the captain, who, in a misguided attempt to salvage the cargo, sealed the hold and ordered live steam injected into it.  And none of the ordinary workers and citizens of Texas City knew that if anything went wrong, there was enough explosive on board the Grandcamp to destroy most of the town.  And it did.

 

The Grandcamp explosion was only the beginning of a disaster that went out of control well into the night.  Flying blazing debris ignited and destroyed most of a Monsanto chemical plant only a few hundred yards away from the dock, and broke loose a second fertilizer ship, the High Flyer, which eventually caught fire after it drifted across the port channel and collided with another ship.  The High Flyer exploded early the next morning and produced a bigger blast than the Grandcamp.  The only reason more fatalities didn't result from it was that nearly everyone who could get out of town by then had done so. 

 

Texas City's mayor, Curtis Trahan, survived the blast because he was at the city's equipment barns at the time, several blocks away.  While he did his best to coordinate rescue and medical efforts after the disaster, it was an exercise in making it up as he and his surviving citizens went along.  Eventually, as the magnitude of the disaster became known, Trahan received offers of assistance from the White House on down.  But coordinating and organizing the rescue and medical evacuation and treatment efforts amid the terrible damage, fires, and continuing explosions of oil refineries and chemical plants proved to be an almost insurmountable undertaking.  Instead, Trahan spent much of his time organizing the collection and identification of bodies where possible, although hundreds of missing people were never identified.

 

The terrible lessons taught by the Texas City disaster include the need to label all potentially explosive chemicals as such; the need to regulate the transportation and storage of such materials in a way that prevents explosions in case of fire; and the need to educate first responders and plan for various likely and not-so-likely scenarios when dealing with emergencies under the aegis of emergency management plans. 

 

Sadly, these lessons were not applied decades later in a similar disaster that struck West, Texas, a small town between Waco and the Dallas-Fort Worth area.  On April 17, 2013, a fire in an ammonium nitrate storage area of the West Fertilizer company attracted the attention of the volunteer fire department.  At 7:50 that evening, it exploded, killing 15 people and injuring at least 200, and destroying or damaging numerous structures.  On a smaller scale, the Texas City disaster repeated itself in West, where better storage practices and knowledge could have prevented the explosion or at least minimized the casualties.

 

Every day when ammonium nitrate is safely handled without incident is a good day.  We should be both thankful for and mindful of the lessons, learned at such cost, that have taught us the best practices for handling dangerous materials.  And anyone who reads Minutaglio's moving and dramatic account of the Texas City disaster will never forget those lessons.

 

Sources:  Bill Minutaglio's City on Fire:  The Forgotten Disaster That Devastated a Town and Ignited a Landmark Legal Battle was published in 2003 by HarperCollins.  I also referred to the Wikipedia articles "West Fertilizer Company explosion," "Texas City disaster," and "Ammonium nitrate."

 

Monday, February 09, 2026

Will New U. S. Nuclear Plants Be Safe and Cost-Effective?

That's a question a lot of people are asking as the field of nuclear-powered electricity attempts a comeback in the U. S.  An article in the MIT Technology Review examines some critical issues that will affect the answers to that question. 

 

U. S. nuclear power has had a checkered career.  Beginning in the 1950s, nuclear power plants were built by the leading nuclear-bomb-making countries:  the old Soviet Union, England, and the U. S.  A building boom in the U. S. for nuclear plants peaked in the late 1970s, and in some years since then, as much as 20% of total U. S. power has come from nuclear sources.  However, after about 2000, with cost overruns and bad publicity such as the accidents at Three Mile Island in Pennsylvania in 1979 and Chernobyl, Ukraine in 1986, utilities quit planning new plants, and shut down several old ones. 

 

But with the rising concerns about climate change, nuclear power plants began to look better for the environment than fossil-fuel plants.  They also have a huge advantage over most renewable sources such as wind and solar, which are subject to the vagaries of nightfall and wind speed.  A properly-run nuclear plant can be an extremely reliable source, stabilizing a grid with renewables that might otherwise run out of energy on a still, dark night.

 

The Technology Review article points out some problems in getting a new nuclear-power industry started.  The fuel, for instance, is typically something called "high-assay low-enriched uranium" (HALEU for short).  It has between 5% and 20% U-235, the highly fissionable isotope that makes uranium-fueled fission plants workable.  Right now, the only source of new HALEU is Russia, although the U. S. government has a stockpile that it's currently doling out to experimental plants.  This issue needs to be resolved before new conventional nuclear plants go online here in a major way.

 

Another issue is safety.  While avoiding publicity, the Trump administration has relaxed some safety and security measures and environmental regulations pertaining to nuclear plants.  One can argue that excessive regulation and time-consuming permitting processes were big factors in putting the kibosh on nuclear in the first place.  But regulations are like preparing for war, in that you never know whether you did an inadequate job until something bad happens, and by then it's too late.  Time will tell whether the new regulatory situation will merely speed up the construction of new plants or lead to problems with safety.  And unlike accidents at fossil-fuel plants, a nuclear-plant accident can be orders of magnitude more expensive and dangerous to clean up, as we learned from the Fukushima nuclear-plant accident in 2011.

 

Finally, will new nuclear plants make a profit for their investors, or will they turn into financial albatrosses that bankrupt their owners, as has happened in the past with reactor projects that went way over budget?  One measure of how attractive nuclear plants are compared to other kinds is the cost per installed kilowatt.  Fossil-fuel plants can be built for around $1,600 per kilowatt or less.  China reportedly builds its nuclear plants for between $2,000 and $3,000 per kilowatt.  Estimates for the various types of new U. S. nuclear plants vary, but figures between $6,000 and $10,000 per kilowatt seem realistic for the first new advanced models.  The price could come down if the nuclear industry learns to standardize models rather than building each plant from scratch, a practice that has contributed to cost overruns in the past.  But going to a standardized model will require changes in the regulatory environment that may or may not come to pass.
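To put those per-kilowatt figures in plant-sized terms, here is a minimal sketch.  The one-gigawatt plant size is an illustrative assumption of mine, not a figure from the article:

```python
# Overnight capital cost of a hypothetical 1-GW plant at the
# per-kilowatt figures quoted above.  Plant size is illustrative.
PLANT_KW = 1_000_000  # one gigawatt, expressed in kilowatts

cost_per_kw = {
    "fossil fuel":             1_600,
    "China nuclear (low)":     2_000,
    "China nuclear (high)":    3_000,
    "new U.S. nuclear (low)":  6_000,
    "new U.S. nuclear (high)": 10_000,
}

for kind, dollars_per_kw in cost_per_kw.items():
    total_billions = dollars_per_kw * PLANT_KW / 1e9
    print(f"{kind:>24}: ${total_billions:.1f} billion")
```

At these rates, a single full-scale advanced U. S. plant lands in the $6 to $10 billion range, which squares with the $5 billion figure quoted earlier for TerraPower's first plant.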

 

Here in Texas, startup reactor builder Last Energy has teamed with Texas A&M University to build a 5-megawatt pressurized-water reactor at the RELLIS campus, a former air force base ten miles away from the main campus in Bryan-College Station.  News releases predict the facility will go critical in the summer of 2026, which is ambitious but possible.  The pressurized-water reactor design is not innovative, having been used for the first nuclear-powered submarines in the 1950s.  But with modern construction and control techniques, designers may be able to build on the decades of experience gained with the design to produce a standardized module that can be scaled up fairly easily to commercial size, in the 20-megawatt or larger range.

 

Newer designs are also in the works.  Some use boiling water rather than pressurized liquid water, which simplifies the plant.  Others use liquid metals for coolants, fuel in pebble rather than rod form, and other variations on the conventional design.  But there is a long road between experiments and a commercially profitable plant, and many previously-announced plans for smaller modular plants have been cancelled. 

 

Nevertheless, if some new designs can be shown to work safely and not cost an arm and a leg during the current administration's fairly favorable regulatory environment for nuclear power, the industry could make a substantial contribution toward the nation's energy needs, which have recently soared due to the boom in data-center construction. 

 

Building nuclear plants to run data centers is not going to appeal to your typical activist, and there are downsides to nuclear energy, notably the problem of waste.  Some of the newly proposed reactor schemes generate much less waste than conventional U-235 reactors, but again, these are only proposals, not working reactors.  The current policy in the U. S. of keeping waste stored locally rather than transporting it and concentrating it at one big waste facility seems to be working so far.  But "so far" compared to the dangerous centuries-long lifetime of nuclear waste is not very long, and it would be better if we could produce less waste to start with rather than making lots of it and figuring out what to do with it afterwards.

 

The next two or three years may be a make-or-break time for nuclear power in the U. S.  From many points of view, it is a sensible and proven way to generate electricity.  If we can adjust the regulatory environment and adapt to new modular manufacturing techniques without compromising safety, nuclear power could make a climate-friendly and reliable contribution to our future energy needs.  But that is currently a big "if," and only time will tell us whether hopes for a more-nuclear future will be justified or dashed.

 

Sources:  The MIT Technology Review article I referred to, "Three Questions About Next-Generation Nuclear Power, Answered," appears at https://www.technologyreview.com/2026/02/05/1132197/nuclear-questions/.  I also referred to the website https://www.nei.org/resources/statistics/us-nuclear-generating-statistics for statistics on nuclear power and the sites https://news.tamus.edu/stories/last-energy-texas-am-collaborate-to-launch-microreactor-pilot-at-texas-am-rellis/ and https://www.neimagazine.com/news/last-energy-funded-for-pwr-5-pilot/?cf-view for information on the Texas A&M 5-megawatt RELLIS unit.