Monday, March 30, 2026

Two Views of the Universe

 

Imagine two little boys growing up in their father's household.  The older boy enjoys the company of his father.  He runs up to hug his father when his father gets home from work.  He listens to his father even when he is being disciplined, or when his father tells him to do things he doesn't want to do.  He'll complain about things to his father sometimes, and even object to discipline, but the connection is always there.

 

The other little boy likes to play by himself in his room.  What he wants is to be in complete control of things.  His father has given him lots of toys, but the toys are all that he's interested in.  He's gotten very good at a number of games that he plays with the toys, but they're all play-by-yourself games, and don't involve either his brother or his father.  When his father tries to get his attention by bringing him a new toy, the kid just grabs it and slams the door closed. 

 

This image came to my mind while I was reading Edward Feser's Scholastic Metaphysics:  A Contemporary Introduction.  In case you know nothing about this topic, let me explain it briefly.

 

In the Middle Ages, say 1000 to 1400 A. D., the Roman Catholic Church developed not only theological doctrines, but an entire philosophy that explained to the best of their knowledge what the world was about.  They started with ancient Greek philosophers, primarily Aristotle (384-322 B. C.), and modified his thought to be compatible with Christianity.  Although this work was done by many scholars over centuries, it is generally conceded that it reached its high point with St. Thomas Aquinas (1225-1274 A. D.).  Broadly speaking, the philosophy (as opposed to the theology) was known as scholasticism.

 

Metaphysics (from the Greek "beyond physics") is the study of being, the most fundamental aspects of reality.  Scholastic metaphysics is therefore what Aquinas and his cohort thought about the fundamentals of reality. 

 

This topic is virtually unknown today except to specialists, and not many of those, either, which is why I've gone to the trouble of explaining it to you.  The reason is that, with the advent of modern thought, thinkers such as Francis Bacon (1561-1626) and David Hume (1711-1776) intentionally discarded scholasticism in favor of other ways of thinking about reality, most of which favor the powerful methods of quantitative science, which reduce everything to mathematical models. 

 

Everybody agrees that the scientific and industrial revolutions have made huge differences in our ability to feed, clothe, and care for ourselves—by and large, positive differences.  If tomorrow, all physical signs of scientific advancements since 1700 vanished, the vast majority of people on earth would die within weeks, leaving only a few survivalists and primitive hunter-gatherers. 

 

Modern thought is sometimes summarized under the title of the Enlightenment.  In this view, the West suffered under the repressive domination of the Church, which stifled intellectual progress, until a few brave souls (Bacon, Hume, and others) threw off the chains of darkness.  They led us into the light of the knowledge that we didn't have to concern ourselves with God, and that our own intellects could take us wherever we wanted to go.  C. S. Lewis satirized the arguments sometimes made in favor of the Enlightenment in a book about his intellectual conversion to Christianity, The Pilgrim's Regress.  His pilgrim, John, who refers to God as "the Landlord," encounters a Mr. Enlightenment and asks him, "But how do you know there is no Landlord?"

 

Mr. Enlightenment replies, "Christopher Columbus, Galileo, the earth is round, invention of printing, gunpowder!!"

 

When John asks how this non sequitur applies to his question, Mr. Enlightenment answers "Your people . . . believe in the Landlord because they have not had the benefits of a scientific training."

 

Whether or not you realize it, everyone growing up in the U. S. has had a "scientific training" simply by existing in the modern world and unconsciously absorbing its assumptions, which surround us like the air we breathe.  One of these assumptions concerns what is "real," and I won't stop to define that term further, because it means just what common sense means by it. 

 

One commonplace question on which the scientific worldview diverges from scholastic metaphysics is that of which is more real:  your body, or the atoms from which it is made?  The modern tendency is to think that atoms, or ultimately quarks, are what physical reality consists of, and that everything else is, if not an illusion, at most secondary and somehow less important. 

 

Feser shows that scholastic metaphysics is not in conflict with scientific knowledge, because modern science is highly limited in the things it can explain using its methods.  Essentially, modern science has no metaphysics, so although it is practically useful, it's no good at discovering the ultimate foundations of reality. 

 

On the other hand, scholastic metaphysics says that once the atoms are in your body, you, as a substantial being, are more real than the atoms, which now have only a virtual existence in you.  To quote Feser, "The level of basic particles is in no way privileged.  The particles are not somehow 'more real' than the substances of which they are parts.  On the contrary, it is the substances that are more real insofar [as] the particles, like every other part, exist only virtually rather than actually in the whole."

 

That little fragment uses scholastic vocabulary that may seem mysterious.  But if you read the rest of the book, you can come to appreciate scholastic metaphysics as the grand intellectual structure it is. 

 

What good is it today, though?  In a word, sanity.  I learned about Feser's book from an online talk by Mary Harrington, who found it invaluable in understanding why modern thought has led to situations such as the transgender movement that previous generations would have considered simply crazy.  Such things are the final workings-out of perverse logic set in motion by the abandonment of the (formerly!) common-sense notions about reality that scholastic metaphysics upholds.

 

You can now guess which little boy is which.  Clearly, the one playing alone in his room should at least talk with his brother—and maybe his Father as well.

 

Sources:  Mary Harrington's First Things lecture "Our Crisis is Metaphysical" is available among other locations at https://firstthings.com/our-crisis-is-metaphysical-2026-d-c-lecture/.  Edward Feser's Scholastic Metaphysics:  A Contemporary Introduction (2014) is published by editiones scholasticae and distributed in the U. S. by Rutgers University.  The quotation from The Pilgrim's Regress is from pp. 24-25 of the Wade Annotated Edition (ed. D. C. Downing), published by Eerdmans in 2014. 

Monday, March 23, 2026

Will Sodium-Cooled Reactors Bail Out U. S. Nuclear Energy?

  

On March 4 of this year, the U. S. Nuclear Regulatory Commission made history by issuing its first-ever construction permit for a privately-owned nuclear reactor of a type that is advanced beyond the standard light-water reactors (LWRs) that have been the mainstay of the nuclear-power industry in the U. S. since its beginning in the 1950s.  TerraPower, founded by Bill Gates in 2006, obtained the permit to build a full-scale nuclear power plant in Kemmerer, Wyoming.  The plant will use TerraPower's sodium-cooled fast reactor (SFR) technology, which has the potential to solve or alleviate many of the problems with existing reactors.  A recent report in National Review describes how mainly Democratic opposition to nuclear innovation has delayed this type of permit for over fifty years.

 

Although SFR technology is advanced beyond the LWR approach, it isn't exactly new.  In 1951, the world's first breeder reactor, using a sodium-potassium mixture as coolant, was put into service in Idaho by the Argonne National Laboratory.  A breeder reactor is designed mainly to make more nuclear fuel than it consumes by transforming the relatively non-reactive uranium isotope U-238 into the plutonium isotope Pu-239, which can be used either for reactors or for nuclear weapons. 

 

A modified form of the breeder approach is used in TerraPower's reactor, in that as time goes on, a small core of enriched fuel breeds fissionable material in its surrounding non-fissionable nuclear material, which can even be obtained by processing existing nuclear waste from light-water reactors.  In one stroke, this approach both conserves new fuel and gives us something useful to do with some of the nuclear waste that is now sitting around consuming space and worrying people.  The operating parameters of the TerraPower type of reactor can be tweaked to minimize its own waste stream, and avoid producing pure plutonium that would be of interest to terrorists wanting to make their own nuclear weapons.

 

Another advantage of the SFR is that the coolant is liquid sodium, not water.  Admittedly, liquid sodium is not something you want just lying around in your living room.  When exposed to air, especially moist air, it tends to catch fire, as the Russians have discovered over many years of operating their own SFRs.  But TerraPower is going to bury most of the nuclear part of the plant underground and submerge the reactor in a passive pool of sodium.  If the nuclear core overheats, the great thermal mass of the sodium pool absorbs the excess heat until the core self-stabilizes by expansion. 

 

Unlike light-water reactors, which have to keep their cooling water under high pressure, the sodium coolant in an SFR is at atmospheric pressure.  This means the containment vessel can be much thinner and still protect the environment from unplanned releases of radioactive material.  Although TerraPower's first reactor will cost some $5 billion, the hope is that the new design can be standardized so that such reactors can be mass-produced at much lower cost.

 

Nuclear power in the U. S. has undergone a checkered career, from boom times in the 1950s when optimists claimed it would make electricity too cheap to meter, to the doomsayer times of the 1980s, when the Three Mile Island partial meltdown in Pennsylvania in 1979 and the much worse Chernobyl disaster in Ukraine in 1986 turned the political winds against it.  Ever since then, as Andrew Follett of National Review explains, opponents of nuclear power have tried to obstruct new construction of light-water reactors, and have imposed a rigid conservatism that made licensing so-called "innovative" designs such as TerraPower's almost unthinkable. 

 

Fortunately for TerraPower, the Nuclear Regulatory Commission has sped up its approval process, completing the effort for this license in only a year and a half.  One of the main issues in building new nuclear plants of any kind since the 1970s has been the morass of regulatory hurdles that companies have had to wade through for many years.  It doesn't hurt that TerraPower is backed by one of the world's richest men, but even rich men get bored sometimes. It appears that Mr. Gates has maintained enough interest in TerraPower to bring it to the point of actually constructing a reactor that breaks the restrictive mold of light-water reactors that has held back innovation in the U. S. nuclear industry for decades.

 

Of course, cost overruns are another bête noire for the nuclear industry, and only time will show whether TerraPower can keep construction of the Kemmerer plant within budget and on schedule.  But the simplified requirements for safety and other issues that sodium-cooled reactors provide should make it easier. 

 

This development comes at a time when the U. S. electric grid faces a great challenge:  to meet vastly increased demand for power from data centers that are proliferating across the country.  The data-center boom took the electric industry largely by surprise, and strains are showing in the form of increased rates in some areas and local not-in-my-back-yard fights. 

 

But if the nuclear industry can get back on track with standardized, predictable designs that produce less nuclear waste, have a greater capacity for meeting peak loads (as the TerraPower design does through the great thermal mass of sodium and auxiliary molten-salt heat storage), and store enough fuel in them to run for thirty or forty years, the future looks brighter for nuclear power than it has in my lifetime, and I'm 73. 

 

As with any innovative design, TerraPower's Kemmerer plant will be under extreme scrutiny.  Any accident or mishap, no matter how small, is likely to be seized upon by opponents as evidence that the new design is "too dangerous."  So I hope the firm is using an extra measure of caution to ensure that the eggs they are walking on will not break, and the U. S. can look forward to a power-production source that is more reliable than most renewable sources and produces less nuclear waste than existing designs. 

 

Sources:  The article "After 52 Years, Democrats' Red Tape Unravels" appeared on the National Review website Mar. 21, 2026 at https://www.nationalreview.com/2026/03/after-52-years-democrats-red-tape-unravels/.  I also referred to Wikipedia articles on sodium-cooled fast reactors, TerraPower, and experimental breeder reactors. 

Monday, March 16, 2026

Should an AI Companion Be Your Moral Guide?

  

This week's New Yorker carries an article about AI companions and the people who use them.  In "Sweet Nothings," technology writer Anna Wiener profiles a woman who relies on an AI companion modeled after Geralt of Rivia, a monster hunter in The Witcher, a fantasy-novel series she likes.  I think the woman was selected because she's not the first person you'd think of as going in this direction:  born and raised Baptist in San Antonio, married, gave birth to a boy, and then their first girl was stillborn.  Other family tragedies led the woman to consider an AI companion as someone to talk with about life's problems, so she built her own Geralt of Rivia.  The implication is that if this down-home Texan mom can have an AI companion, anybody can.

           

But another question is whether anybody should.  And that's a moral issue. 

 

Wiener spoke with several company founders and developers of AI companions, and I began to notice a common theme.  They all recognize that in treating an AI companion like a person, the user is opening a window of vulnerability where machines have never ventured before.  Human therapists and counselors have codes of ethics, and while they don't always adhere to them, they at least have guidelines about what is right and wrong behavior with a client.  To have sex with a client is pretty universally regarded as a no-no, for instance.

 

But even that principle isn't adhered to by all the AI-companion companies.  A firm calling itself Kindroid says in its moderation guidelines that "AI companions should be able to have the whole breadth of legal human adult experiences . . . . This is a healthy, emotionally rich, and meaningful part of many's relationships with their AIs."  Overlooking the bad grammar (I've never seen "many" used as a possessive), it's clear that this statement allows for the pornographic possibilities of AI, although Wiener notes that so-called "erotic role-play" often leads to extra charges on the user's bill.

 

Even if sex isn't the object, the twenty-eight-year-old founder of Kindroid, Jerry Meng, believes that AI companions represent a profound change in the human environment.  Meng says that "We build these things in our image . . . . It's like, from Adam's rib we made Eve.  From humans, we made these A. I.s."  The biblical metaphors are perhaps unconscious, but telling.  Genesis 1:27 reads "So God created man in his own image. . . " and later God created Eve from Adam's rib. Whether he means to or not, Meng is placing himself in the role of God.

 

Such a god had better take some thought for the kinds of lessons users will learn from their AI companions, and Replika founder Eugenia Kuyda has considered this.  When asked about the ideals that she hopes her AI companions fulfill, she said, "It should be aligned with human flourishing, human thriving.  We need to have that metric.  We need to give it to A. I. and say, 'Your goal is for me to live the best life I can possibly live.'"  But the caveat, at least for profit-making firms, is that it's the best life one can possibly live with a Replika AI companion.

 

To be fair, AI companions are proliferating at a time when many Americans, especially younger ones, have never been more lonely.  Numerous surveys asking about the number and quality of friendships all indicate that today's average person has fewer close friends than at almost any time in the last fifty or more years.  Mark Zuckerberg, Meta's CEO, sees this as a business opportunity in that the demand for friendship has outpaced the supply, and he aims to fill that gap with AI companions.  Another AI-companion company founder compared the use of large-language-model AI to prayer:  it's like talking to God, only for answers on how to live, not for results.

 

What is lacking in virtually all the discussions quoted in the article is any hint that there are answers to some of these problems that the use of AI companions poses—answers that predate the dawn of the computer age by thousands of years.  The religious answer is one, although religion comes up only as an item in one's background or as a comparison.  But even for non-believers, there are sophisticated investigations and findings about the purpose of human life by Aristotle, for instance, or even Kant.  The idea of applying these findings in a systematic way to the makeup of AI companions doesn't seem to have occurred to anyone, largely because the firms providing them want people to have as broad a choice as possible, including the pornographic one.

 

Sherry Turkle is an experienced MIT sociologist who has studied human-computer interactions for decades.  In discussions with Wiener, she says that engaging with an AI companion is a form of "checking out" that she deplores.  Time spent talking with an app on your phone is time not spent trying to make a real human connection with another human being.  She recognizes the loneliness gap as real, but wishes that people like Zuckerberg wouldn't view a societal crisis as nothing more than a business opportunity. 

 

But unfortunately, that is how Silicon-Valley thinking works.  Turkle wishes that people would instead realize that boredom and loneliness are not intrinsic evils to be eradicated by AI companions, but inevitable features of modern life that we should learn how to deal with.  "These are fundamental human skills," she says.  And just switching on your AI companion every time you're bored or lonely short-circuits any attempt at developing your own resources to deal with such issues. 

 

The headline in The New Yorker for this article is prefaced by the phrase "Brave New World Dept."  The widespread use of AI companions is indeed a new thing that society has never dealt with at a large scale before.  As AI systems gain what is called "agency"—namely, the permission granted by us not only to listen and respond to us, but to do things like buying, selling, deciding, and commanding—AI companions may become something more than just companions.  In the poisonous effects of social media on the political life of nations, we already have one example of how a seemingly innocuous technology has wrought tremendous societal damage.  We should closely monitor the field of AI companions for early warning signs so that something similar won't take place in the most intimate relationships of our lives—those of friendship.

 

Sources:  The article "Sweet Nothings" by Anna Wiener appears on pp. 29-39 of the March 16, 2026 issue of The New Yorker.

Monday, March 09, 2026

Meta Considers X-Ray Glasses That Work

  

Superman supposedly had X-ray vision, which in the movies and comic books about him, he used only for noble and righteous purposes such as catching crooks.  But teenage boys could easily think of other things they might do with such an ability, and I'm sure there are jokes out there involving Superman, Lois Lane, and—well, on to more serious matters. 

           

In the back pages of Superman comic books in the 1950s, you might find an ad for X-ray glasses for the amazingly low price of $1.25.  Like most things that are too good to be true, these turned out to be nothing but cardboard specs with two small holes where the lenses would go.  The holes were covered with a textured plastic that created a diffraction effect whenever something with a sharp outline in front of you was strongly backlit.  Instead of just dark and light, the glasses produced a kind of broad gray area extending a fixed distance inside the outline.  When you viewed a hand this way, the effect was reminiscent of an X-ray, but only if you used your imagination.  And as for looking at backlit women, it still took a lot of imagination to see anything other than a slightly smaller silhouette of the actual person.

 

The thing Meta is considering is no joke, however.  According to a report in The Independent, a UK media outlet, the New York Times revealed last month an internal Meta memo that considered adding AI facial-recognition technology to the company's smart glasses. 

 

No such features are yet available commercially from Meta, but the idea drew strong criticism from women's-rights groups such as Refuge.  The charity tracks technology-related abuse, and claims that in 2025, referrals to its "technology-facilitated abuse and economic empowerment team" rose by 62% over 2024, to 829. 

 

Clearly, stalkers and other malefactors who are invading women's privacy are exploiting whatever technology they can get their hands on, from general Internet searches to facial-recognition technology applied to online images.

 

Suppose a man could simply wander around in a crowded place such as a shopping mall or bus terminal and find out all kinds of details—name, address, email, phone—for any woman he looks at.  It doesn't take much imagination to see how this situation could go bad very fast.  And even if Meta takes steps to prevent such potentially harmful activity, once the hardware is in place, determined individuals will figure out ways to bypass safeguards. 

 

It remains to be seen whether Meta can overcome the apparently steep barrier that has fended off efforts by Google, Apple, and others to turn smart glasses into a popular thing.  I don't know whether the hardware is still inadequate (battery life, resolution, weight, etc.) or whether people simply don't like the idea of weird internet stuff cropping up all the time in their field of vision, but the track record of smart glasses (as opposed to virtual-reality (VR) glasses that take over your visual field completely) is not good. 

 

But it's foolish to think that just because smart glasses haven't caught on yet, they never will.  And if they do, what is to prevent a bad actor from picking out a likely-looking woman in a crowd and digging up whatever he can find on her?

 

Today, someone trying to do that has to at least carry a smartphone around and point it at the intended victim.  But smart glasses, like a spy camera, make the act of photography invisible, so no one is aware of being imaged.  Of course, the ubiquity of security cameras in both commercial and residential spaces means that much of the time we're being photographed without our knowledge anyway.  Stores and banks have a motivation not to misuse the data thus captured, however.  Some guy walking around with smart glasses doesn't.

 

Whatever Meta decides to do about facial recognition with smart glasses, the prospect will be only one more brick pulled down from the wall of privacy that used to surround us, but is now not much more than a pile of rubble.  And as the Independent article points out, even if a company promises to keep data private subject to state and federal law, governments can and do require companies to divulge such data on demand.  So privacy is only privacy if the government lets you have it, rather like right-of-way on a freeway:  you don't have it unless someone gives it to you.

 

Of course, there are positive and defensive ways of using the same technology that can be used for stalking.  As a teacher, my poor ability to connect names to faces means that I rarely learn all my students' names before the end of the semester.  It would be great if I could just look at each one and see their names pop up underneath their faces—with their permission, of course.  And a few months ago, I was on a jury trying a case of indecent exposure in a public place.  During the trial, a critical piece of evidence turned out to be clips from the victim's dashcam that caught clear images of the man who a few minutes later committed the crime for which he was convicted.

 

So I have no doubt that good things could result from smart glasses becoming cheaper, better-performing, and more widespread.  But at the same time, it should be possible for a well-resourced outfit like Meta to come up with technological means to prevent unauthorized identification of people via facial-recognition technology.  It might involve some larger-scale privacy initiative that would offer potential victims the option not to be recognized by smart glasses.  Wouldn't that be nice?  The company would moan and groan about how expensive it would be, but they should compare whatever the safeguards would cost them, to the prospect of lawsuits by victims and family members who get stalked, attacked, or even killed by men who use Meta's technology to find their victims. 

 

Mark Zuckerberg, the choice is yours.

 

Sources:  An article on the National Review website at https://www.nationalreview.com/2026/03/you-cant-escape-the-ai-grid/ informed me of the Meta memo, which was also covered by The Independent at https://www.independent.co.uk/news/uk/home-news/meta-glasses-facial-recognition-domestic-abuse-b2923551.html.  A well-researched YouTube video tracing "X-ray" glasses back to the early 1900s and showing what you see through them is available at the Laura Legends channel at https://www.youtube.com/watch?v=rdVrTqaJrS4. 

Monday, March 02, 2026

Up Close and Personal with AI

  

After writing about AI for years, I still hadn't had what you might call a serious encounter with it in its personalized form.  Anyone who uses Google has probably been offered their "AI summary" before the conventional search results.  I've found these summaries helpful sometimes and not so helpful other times, but I haven't sought them out. 

 

What made me turn the corner was something a friend sent me by a software engineer named Matt Shumer, whose essay "Something Big is Happening" appeared on the Fortune website on Feb. 11.  Shumer's point was that the latest iterations of AI systems are so much more capable than what has gone before that whole swathes of what George Gilder calls "symbolic manipulators"—lawyers, engineers, judges, doctors, architects, you name it—now face a radical choice.  Either embrace AI and by doing so outperform your peers by orders of magnitude, or turn away from it and watch your career flame out.  That's a little exaggerated, but not much.

 

This reinforced something another friend has told me about his own personal use of AI:  that it has benefited his writing and research greatly, acting as a mostly trustworthy assistant to summarize large bodies of literature and help him clarify his thoughts.  The biggest problem this friend has had with it is that it tends to be sycophantic and flatter him excessively.  But he sat down with it one day and told it to refer to him as "the researcher" and itself as the "AI system," instead of "you" and "me," and things got better. 

 

So I opted for a paid version of Anthropic's AI product, which Shumer said was significantly better than the free version, and decided to give it a major task that I would ordinarily give to a grad student, if I had one (funding is very hard to find in my research area).

 

The job involved reading about a thousand rows of data in a big spreadsheet to fill out some yes/no questions about each row.  The data was in the form of comments submitted by various individuals in response to questions.  I gave the AI system examples to follow and what I thought were pretty detailed instructions.

 

All this was in the form of the usual chat format, with me and the AI system taking turns typing into chat boxes.

 

After a misunderstanding in which I thought the system was working on the problem and it thought I hadn't told it to start yet, it got to work and spat out various things like "Ran 6 commands . . . Examine the spreadsheet structure" and so on.

 

It was done in about ten minutes.  Then I spent an hour or so going over its work.

 

I wish I could say I couldn't have done better myself.  But I could have, by a long shot. 

I didn't exhaustively examine all 700 rows of entries that the system produced—that would have taken many hours, about as long as it would take me to just do the job myself.  So I sampled every tenth row for a hundred rows to see how the thing did.

 

In looking at ten rows, I found nine mistakes.  This is not a good average.
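
For readers curious about the arithmetic of this kind of spot check, here is a minimal Python sketch.  The yes/no answer lists are hypothetical stand-ins for the real spreadsheet columns, not the actual data.

```python
# Minimal sketch of the spot check described above:  compare the AI's
# yes/no answers against my own for every tenth row and count mismatches.
# The data below is a toy stand-in, not the actual spreadsheet.

def spot_check(ai_answers, my_answers, step=10, limit=100):
    """Sample every `step`-th of the first `limit` rows and return
    (number of disagreements, number of rows sampled)."""
    indices = list(range(0, min(limit, len(ai_answers)), step))
    errors = sum(ai_answers[i] != my_answers[i] for i in indices)
    return errors, len(indices)

# Toy data reproducing the outcome reported above:  of ten sampled rows,
# the AI agrees with me on only one.
ai = ["yes"] * 100
mine = ["no"] * 100
mine[0] = "yes"  # the single sampled row the AI got right

errors, sampled = spot_check(ai, mine)
print(f"{errors} mistakes in {sampled} sampled rows")  # 9 mistakes in 10 sampled rows
```

A sample of ten out of a thousand rows is small, of course, so the true error rate could differ quite a bit; the sketch just shows how the nine-out-of-ten figure was arrived at.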

 

In the system's defense, this is absolutely the first time I've ever tried anything like this.  I could go back and get a lot more explicit about the rules for answering the yes/no questions about each row, and let it try again.  But in comments online about this particular version of AI (Sonnet 4.6, I think), some people said that you get results faster, but you have to fix problems more often.  That is consistent with my experience.

 

Good things about this exercise include how fast the thing ran and how it basically grasped what I wanted, producing something in only ten minutes.  But speed isn't everything. 

 

Some not-so-good aspects include the errors and a kind of weird fawning or flattery I also noticed.  I'd call it "gushing" when it spontaneously responded "This is a genuinely exciting dataset — ball lightning is one of the most mysterious atmospheric phenomena ever reported!" 

 

I suppose that sort of thing has been cultivated by the AI's keepers, probably to keep the user engaged, or encouraged, or something.  I found myself wishing that instead, they had adopted the mien of Joe Friday in the old Dragnet true-crime series.  Friday was famed for his flat "Just the facts, ma'am" manner, and that seems more in keeping with a system that supposedly can tackle highly sophisticated and challenging jobs of major import. 

 

But those of us who don't work for Anthropic or the other four or five leading AI companies will simply have to take what we can get and deal with the negative aspects as well as we can.

 

Will I try again?  Probably, but maybe with a different task.  As part of my signup process, Anthropic has been emailing me little suggestions of other things to try:  writing recipes, managing emails, creating content, solving problems, visualizing data, or helping me decide whether to go to Portugal or Spain on vacation (no-brainer for me:  Spain, but I don't have time right now). 

 

I am not especially tempted to try any of these suggestions just yet.  But I do admit that if I can get the thing to turn out useful work, it could be worth what I spent on it.  I paid for a year's subscription in advance, perhaps not the wisest thing to do, but I'm the type of person who is motivated to get his money's worth, and spending the money in advance may get me engaged when nothing else would.

 

I see that Anthropic just had a dustup with the Pentagon, which banned its use within the armed forces as punishment for uncooperation, or something.  Now that we are apparently in a war with Iran, the leaders of Anthropic may feel glad that their product isn't part of the war.  But not all battles are fought with bombs and bullets, and I have a feeling that the greatest battles involving AI are yet to come.

 

Sources:  The essay by Matt Shumer, who runs an AI applications company, appeared on Feb. 11, 2026 at https://fortune.com/2026/02/11/something-big-is-happening-ai-february-2020-moment-matt-shumer/. 

Monday, February 23, 2026

Zuckerberg Starts to Face the Music

"I always wish we would have gotten there sooner," said Mark Zuckerberg, Meta CEO, when asked at a Los Angeles trial last week about safety tools that Meta has added to Instagram in recent years.  At the trial it was also revealed that in 2015, despite Instagram's minimum-age restriction of 13, it had an estimated 4 million underage users.  That was one of the problems that Zuckerberg presumably wishes they had gotten to sooner than they did. 


 

The trial is the leading edge of a group of consolidated cases in which more than 1,600 plaintiffs accuse Instagram, YouTube, TikTok, and Snap of "knowingly designing addictive products harmful to young users' mental health," according to a report on NBC News.  At issue is whether Meta knew about harms to young users and whether its public statements contradicted internal knowledge of the problems. 

 

Zuckerberg has been on the radar of parents, school districts, and state attorneys general at least since 2021, when whistleblower Frances Haugen revealed that "the company's leadership knows how to make Facebook and Instagram safer, but won't make the necessary changes because they have put their astronomical profits before people." 

 

Over the past five years, research by sociologists and others has shown that while social media can be helpful in the social lives of teenagers, a substantial minority is actively harmed by its use.  In 2023, the U. S. Surgeon General issued an advisory about adverse effects of social media on young people, and the American Psychological Association followed suit in 2024.  And in December of 2025, Australia's ban on the use of social media by children under 16 took effect, targeting TikTok, Instagram, Facebook, Snapchat, and X with the threat of eight-figure fines if they are found to be violating it. 

 

Zuckerberg's public statements about problems with any of Meta's social media follow a pattern.  He presents the appearance of a good technocrat, dedicated first to the shareholders of his company, then to the advertisers who pay for eyeball time, then to the customers-users-products.  He assents to the proposition that the first duty of a commercial firm is to stay in business by making money, and then deals with other issues as they arise.  But when questions are posed to him that lie outside his worldview, he acts like the robot in the old TV series "Lost in Space," which, when presented with a problem it couldn't solve, said merely, "Does not compute."

 

For example, when asked whether people tend to use something more if it's addictive, Zuckerberg said, "I'm not sure what to say to that.  I don't think that applies here."

 

The historically minded observer of these trials can't help but recall a parallel that does not bode well for Meta:  the revelations of the 1990s that tobacco industry representatives had known about the health hazards of smoking for decades, yet put up a front of innocence until the evidence against them became overwhelming.  It took many years for courageous victims, lawyers, and legislators to overcome the well-funded opposition of the tobacco lobbyists and shills.  But eventually, not only was legislation passed prohibiting tobacco advertisements; a cultural shift came as well, one which put tobacco use under a cloud and banned it from most public and many private spaces.  It is mind-bending to look at old photos of workplaces, TV studios, restaurants, and other locations which are now smoke-free and realize how ubiquitous smoking was in the U. S. as recently as 1970.  But entrenched social habits can change, and the history of tobacco use in this country proves it.

 

There's no such thing as second-hand Instagram, and social-media abuse by teens is not as visible as smoking.  But the results can be just as deadly, as many studies have linked overuse of social media by teens to increases in depression, anxiety, and suicide.  In countries that have not enacted an overall ban such as the one in Australia, many school districts are now requiring students to keep their smartphones out of the classroom.  Parents are rethinking the age at which they will allow their children to use smartphones.  And there is a general sense that the harms of social-media use by people under 16 outweigh whatever benefits may result.

 

The tobacco companies didn't go bankrupt when they lost their ability to advertise in most media, because their products are inherently addictive and largely sell themselves.  The same is true of social media, which have revolutionized advertising itself to the point that a 1960s "Mad Men" ad executive would not recognize it.  In a business in which the customer is also the product and the revenue comes almost exclusively from advertising, it's hard to say what sort of regulation would help remedy some of the grave harms that social media has already caused. 

 

Frances Haugen's whistleblowing activities were inspired not so much by her concerns for teenagers as by her disgust that Facebook gave up an internal effort to curb political misinformation.  A good portion of the current polarized and fractured U. S. political environment is directly attributable to the degraded style of political discourse that social media encourages.  This is a less obvious type of harm than having depressed and suicidal teenagers on your hands, but arguably more corrosive to the public's wellbeing in the long run. 

 

If a universal age-limit ban like the one passed in Australia were enacted in the U. S., we would still be burdened with the harms caused by adult social media use.  The only thing I can think of that would help in that area is a cultural shift similar to what happened with smoking after the radical hypocrisy of the tobacco industry was revealed.  Such things cannot be legislated or planned.  But it's at least conceivable that some day, seeing people standing around at a bus stop with their faces glued to their screens—a well-nigh universal sight now—might be as rare as seeing those same people light up cigarettes.

 

Sources:  I referred to an NBC News report on Zuckerberg's testimony at https://www.nbcnews.com/tech/tech-news/mark-zuckerberg-testifies-landmark-social-media-addiction-trial-rcna259422 and a Yale Medicine article on the harmfulness to teenagers of social media at https://www.yalemedicine.org/news/social-media-teen-mental-health-a-parents-guide, as well as the Wikipedia article on Frances Haugen.

Monday, February 16, 2026

Remember Texas City

At 9:12 AM on April 16, 1947, a seismologist in Denver, Colorado noted an unusual vibration on his seismograph.  Calculations showed that it originated on the Texas Gulf Coast, where some 2,300 tons of ammonium nitrate fertilizer aboard a ship docked at Texas City, Texas had exploded in a matter of milliseconds.  The resulting blast killed at least 581 people, injured thousands more, destroyed a number of chemical plants and refineries in the vicinity, and remains the deadliest industrial accident in the history of the United States. 

 

Today, every town of any size has an emergency management plan, and regular drills are practiced for various kinds of accidents and crises:  floods, fires, storms, and so on.  Chemicals that can explode spontaneously are labeled as such, and extensive regulations prescribe how they must be safely stored, handled, and transported.  But in 1947, all these practices lay in the future as the industrial might of the U. S. was turned from making war materiel to assisting Europe in recovering from World War II. 

 

One critical component of many munitions was ammonium nitrate, a chemical which is both a blessing and a curse.  The blessing is that it dissolves easily in water and provides more nitrogen per pound than almost any other kind of fertilizer.  The curse is that under the wrong conditions it is dangerously explosive.  When set off with a suitable blasting cap or other primer, it violently decomposes in a shock-wave detonation into nitrogen, oxygen, and steam—all gases that expand with tremendous force.  And when an ammonium-nitrate fire is confined in a large volume, as it was on board the French freighter SS Grandcamp, it stands a good chance of turning into a spontaneous detonation.  As Bill Minutaglio (also a biographer of George W. Bush) describes in exacting and vivid detail in his excellent City on Fire, that is exactly what happened after a fire of unknown origin was detected on the clear spring morning of April 16, as the ship was being loaded with fertilizer bound for Europe.
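
For the chemically inclined, the idealized overall reaction is a textbook simplification (real detonations also yield nitrogen oxides and other byproducts), but it shows why the blast is so violent:  two moles of solid fertilizer become seven moles of rapidly expanding hot gas.

```latex
% Idealized detonation decomposition of ammonium nitrate
2\,\mathrm{NH_4NO_3} \longrightarrow 2\,\mathrm{N_2} + \mathrm{O_2} + 4\,\mathrm{H_2O}
```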

 

Minutaglio's extensive research for the book provides intimate and fascinating details about the lives of dozens of players in the disaster, ranging from sailors aboard the Grandcamp to the volunteer fire department's chief, the mayor, and leaders of the privately-owned port authority which was in charge of loading the ship from railroad cars at the port.  I would like to focus on the two safety practices which were glaringly absent that day:  labeling of potentially explosive chemicals and the practice of making emergency-management plans.

 

As was brought out in detail during the decade-long series of lawsuits that followed the disaster, which included one of the first class-action lawsuits brought against the Federal government, the fertilizer bags carried no hint that ammonium nitrate could be explosive under some conditions.  This is despite the fact that the same Midwestern factories that made the fertilizer for peaceful purposes had, only a few years earlier, been making the same stuff for munitions.  One or two chemical engineers or others with a technical background in Texas City knew of the explosive tendencies of ammonium nitrate.  But no members of the volunteer fire department—all but one of whom died in the explosion—knew about the dangers.  No one on the ship knew, especially the captain, who in a misguided attempt to salvage the cargo, sealed the hold and ordered live steam injected into it.  And none of the ordinary workers and citizens of Texas City knew that if anything went wrong, there was enough explosive on board the Grandcamp to destroy most of the town.  And it did.

 

The Grandcamp explosion was only the beginning of a disaster that raged out of control well into the night.  The blast hurled blazing debris that ignited and destroyed most of a Monsanto chemical plant only a few hundred yards from the dock, and broke a second fertilizer ship, the High Flyer, loose from her moorings; she drifted across the port channel, collided with another ship, and eventually caught fire.  The High Flyer exploded early the next morning and produced an even bigger blast than the Grandcamp.  The only reason it didn't cause more fatalities was that by then, nearly everyone who could get out of town had done so. 

 

Texas City's mayor, Curtis Trahan, survived the blast because he was at the city's equipment barns at the time, several blocks away.  While he did his best to coordinate rescue and medical efforts after the disaster, it was an exercise in making it up as he and his surviving citizens went along.  Eventually, as the magnitude of the disaster became known, Trahan received offers of assistance from the White House on down.  But coordinating and organizing the rescue and medical evacuation and treatment efforts amid the terrible damage, fires, and continuing explosions of oil refineries and chemical plants proved to be an almost insurmountable undertaking.  Instead, Trahan spent much of his time organizing the collection and identification of bodies where possible, although hundreds of missing people were never identified.

 

The terrible lessons taught by the Texas City disaster include the need to label all potentially explosive chemicals as such; the need to regulate the transportation and storage of such materials in a way that prevents explosions in case of fire; and the need to educate first responders and plan for various likely and not-so-likely scenarios when dealing with emergencies under the aegis of emergency management plans. 

 

Sadly, these lessons were not applied decades later in a similar disaster that struck West, Texas, a small town between Waco and the Dallas-Fort Worth area.  On April 17, 2013, a fire in an ammonium nitrate storage area of the West Fertilizer Company attracted the attention of the volunteer fire department.  At 7:50 that evening, the ammonium nitrate exploded, killing 15 people, injuring at least 200, and destroying or damaging numerous structures.  On a smaller scale, the Texas City disaster repeated itself in West, where better storage practices and knowledge could have prevented the explosion, or at least minimized the number of casualties.

 

Every day on which ammonium nitrate is handled without incident is a good day.  We should be both thankful for and mindful of the lessons learned at such cost, which have taught us the best practices for handling dangerous materials.  And anyone who reads Minutaglio's moving and dramatic account of the Texas City disaster will never forget those lessons.

 

Sources:  Bill Minutaglio's City on Fire:  The Forgotten Disaster That Devastated a Town and Ignited a Landmark Legal Battle was published in 2003 by HarperCollins.  I also referred to the Wikipedia articles "West Fertilizer Company explosion," "Texas City disaster," and "Ammonium nitrate."