Monday, September 22, 2025

Are $100K H-1B Visas a Good Idea?

 

President Trump seems to think so.  The H-1B visa was intended to be used by foreigners wishing to work in the United States in "an occupation that requires theoretical and practical application of a body of highly specialized knowledge" and typically requires a bachelor's degree or higher.  When President Bush signed the Immigration Act of 1990, creating the H-1B as we know it today, it established a quota on the number of persons to be admitted and required companies hiring such people to fill out a Labor Condition Application, showing that it was unusually hard to find qualified individuals in the existing U. S. labor pool.

 

On the face of it, the H-1B visa sounds like a good idea.  If we are going to allow immigration at all (a question that is more debatable now than it has been in the past), it would make sense to choose those immigrants who are more capable of contributing to the economy in employment sectors where there are presently shortages. 

 

But there are always unintended consequences for any law, and lately there have been accusations that the H-1B system has been abused by companies simply wanting to hire cheaper foreign workers for jobs that they could fill with better-paid U. S. workers.

 

Evaluating the truth of that accusation is something I'm not personally prepared to do.  But current H-1B visas are allocated largely by lottery, with only a nominal fee of a few hundred dollars, and a recent article in the Los Angeles Times indicates that companies have been gaming the lottery by submitting multiple applications for the same person or position.  Authorities claim they have changed the rules to reduce such abuse.  But clearly there is room for improvement in the way the H-1B visa is administered.
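
 

To see why multiple entries pay off, here is a minimal Python sketch of a uniform lottery.  The cap and registration counts below are round illustrative numbers I chose, not official totals, and the model (equal, independent odds for each entry) is a simplification of the real selection process:

# Sketch only:  assumes every registration has an equal, independent
# chance of winning one of the available slots.
def selection_odds(cap, total_entries, my_entries):
    p_single = cap / total_entries            # chance any one entry wins
    return 1 - (1 - p_single) ** my_entries

# Hypothetical round:  85,000 slots, 400,000 registrations.
print(round(selection_odds(85_000, 400_000, 1), 2))   # 0.21 with one entry
print(round(selection_odds(85_000, 400_000, 5), 2))   # ~0.7 with five duplicates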

 

President Trump's solution to these problems is to raise the annual fee for an H-1B visa to at least $100,000, along with a variety of more expensive visas whose main qualification is that you're already rich and can pay a million dollars or more.  Statistics cited by the LA Times indicate that many H-1B visa holders may be earning as little as $60,000 a year, which suggests both that the prevailing wages these visa holders should be getting in the high-tech industry are not being paid, and that slapping a $100K surcharge on such visas will simply make most of them disappear, along with the current visa holders.
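
 

A back-of-the-envelope sketch shows why the surcharge would swamp any wage savings.  Only the $60,000 figure comes from the LA Times; the comparison salary is my assumption:

h1b_salary = 60_000       # low-end H-1B salary cited by the LA Times
us_salary = 100_000       # assumed prevailing U. S. wage for the same job
fee = 100_000             # proposed annual visa fee
cost_with_fee = h1b_salary + fee
print(cost_with_fee)              # 160000:  far above the U. S. hire
print(cost_with_fee > us_salary)  # True, so the cheap-labor motive evaporates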

 

There are two main groups of stakeholders in this situation.  One group consists of high-tech U. S. firms wanting to hire the best workers at the lowest wages they can get by with.  The other group (or set of groups) consists of foreign workers who are qualified to do high-tech jobs and would like to do them in the U. S.  Over 70% of H-1B visa holders are from India, it turns out, but other countries are involved as well. 

 

Adam Smith's invisible hand would let these workers in at whatever wage they would accept, which would be way below the prevailing U. S. wage scale.  While Smith's rule about trade barriers (namely, the fewer the better) was intended for goods, not people, extending it to free immigration is in the spirit, if not the letter, of his ideas.  In the free-for-all of illegal immigration we saw in the years before Trump came into office, high-tech companies did not benefit as much as they might have, because even with troops of lawyers at their disposal they would have found it difficult to blatantly violate immigration law on a large scale, in contrast to the thousands of construction and other more menial jobs that most illegal immigrants find to do, at least at first. 

 

On the other extreme, you have people such as President Trump and my late friend Steve Unger, who was a classic "leftie" of the old school.  Two more different personalities can scarcely be imagined, but on this issue I think they would agree:  keep out them durn foreigners so that U. S. workers can make higher wages. 

 

Taken to an extreme, this principle would halt all immigration of whatever kind, even temporary student visas, which are a kind of back-door way that many well-qualified foreigners get here in the first place.  From personal experience, I can tell you that would be an unmitigated disaster for U. S. higher education, which depends on non-U. S. citizens for a large fraction of its students, and ultimately its professors, who have to start out as students.

 

Would wages for high-tech jobs and graduate students go up?  Perhaps some, but that assumes other things are equal, and after a while they wouldn't be.  It's easy to forget that the world's most important resource is people, not rare-earth minerals or oil or even water and air.  We are a nation of immigrants, and historically we have turned that into a unique strength by means of converting all kinds of immigrants to something called the American Way.

 

But if the American Way turns into something that people from other countries either can't afford, or can't buy into for some other reason, be it political, ethnic, or what have you, then the future of this country is dim.  We have had practically open borders with toleration of scofflaws for far too long, and it makes sense to reform the system so that the rule of law becomes more respected.  But it should be a rule of law, not a rule of men, or of one man.  And laws, or rules, that change with the whims of one energetic guy in the White House are hard to have respect for.

 

In sum, the hundred-thousand-dollar H-1B visa looks a lot like the tariff situation or the random roundups by masked ICE enforcers.  It's flashy and attracts a lot of attention and support from President Trump's base, but from a practical view it makes little logical sense in the greater scheme of things.  Not only the H-1B visa system but the whole immigration process needs a major overhaul.  In a republic, though, the ideas for that overhaul should originate with the concerned public, as well as with stakeholders such as high-tech companies.  That's not being done, and until it is, we can expect further chaos and distress among highly qualified people who are here to contribute to our economy and just want to better their lives. 

 

Sources:  The Los Angeles Times carried the article "India expresses concern about Trump's move to hike fees for H-1B visas" at https://www.latimes.com/world-nation/story/2025-09-20/india-expresses-concern-about-trump-plan-to-hike-fees-on-h-1b-visas-that-bring-tech-workers-to-us on Sept. 20, 2025.  I also referred to the Wikipedia article on the H-1B visa.

Monday, September 15, 2025

Data Centers On the Grid: Ballast or Essential Cargo?

 

Back in the days of sailing ships, the captain had a choice when a storm became so severe that it threatened to sink the ship.  He could throw the cargo overboard, lightening the ship enough to save it and its crew for another day.  But doing that would ruin any chance of profiting from the voyage. 

 

It was a hard decision then, and an equally hard decision is facing operators of U. S. power grids as they try to cope with increasing demand for reliable power from data centers, many of which are being built to power the next generation of artificial-intelligence (AI) technologies. 

 

An Associated Press report by Marc Levy reveals that one option many grid operators are considering is to write into their agreements with new data centers an option to cut off power to them in emergencies. 

 

Texas, whose power grid is largely independent of the rest of the country's networks, recently passed a law that prescribes situations in which the grid operator can disconnect big electricity users such as semiconductor-fab plants and data centers.  This is not an entirely new practice.  For some years, large utility customers have accepted the option of being disconnected in emergencies such as extremely hot or cold days that put a peak strain on the grid.  Typically they receive a discount on normal power usage in exchange for giving the grid operator that option.

 

But according to Levy, the practice is being considered in other parts of the country as well.  PJM Interconnection, a large grid operator serving 65 million customers in the mid-Atlantic region, has proposed a rule similar to the one adopted in Texas for its data-center customers.  But an organization called the Digital Power Network, which includes data-center operators and bitcoin miners (another big class of energy users), complained that if PJM adopts this policy, it may scare off future investment in data centers and drive them to other parts of the U. S. 

 

Another concern is rising electricity prices, which some attribute to the increased demand from data centers.  These prices are being borne by the average consumer, who in effect is subsidizing the gargantuan power needs of data centers, which typically pay less per kilowatt-hour than residential consumers anyway.

 

In a way, this issue is just an extreme example of a problem that power-grid operators have faced since there were power grids:  how to handle peak loads.  Electricity has to be generated at the same instant it's consumed; battery storage has made some progress recently, though not enough yet to make much of a large-scale difference.  This immediacy requires a power grid to have enough generating capacity to supply the peak load—the most electricity it will ever have to supply on the hottest (or coldest) day under worst-case conditions. 

 

The problem with peak loads from an economic view is that many of those generating facilities sit idle most of the time, not producing a return on their investment.  So it has always been a tradeoff between taking a chance that your grid will manage the peak load and scrimping on capacity, versus spending enough to make sure you have margin even with the worst peak load imaginable, but having a lot of idle generators and network equipment on your hands most of the time.

 

When the electric utility business was highly regulated and companies had a guaranteed rate of return, they could build excess capacity without being punished by the market.  But since the deregulatory era of the 1970s, and especially in hyper-free-market environments such as Texas, the grids no longer have this luxury.  This is one reason why load-shedding (the practice of cutting off certain big customers in emergencies) looks so attractive now:  instead of building excess capacity, the grid operator can simply throw some switches and pull through an emergency while ticking off only a few big customers, rather than cutting off power to everybody, including the old ladies who might freeze or die of heat exhaustion without it. 
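
 

To make the tradeoff concrete, here is a minimal Python sketch of the dispatch choice just described, using made-up numbers rather than any real grid's figures:  shed the interruptible customers first, and only a remaining shortfall forces blackouts for everyone else.

def shortfall_after_shedding(capacity_mw, peak_load_mw, interruptible_mw):
    # Remaining shortfall (MW) after cutting off customers who signed
    # load-shedding agreements; zero means the grid rides through.
    shortfall = peak_load_mw - capacity_mw
    if shortfall <= 0:
        return 0.0
    return max(0.0, shortfall - interruptible_mw)

# Hypothetical grid:  80 GW of capacity, an 84 GW heat-wave peak, and
# 5 GW of data centers and fabs on interruptible contracts.
print(shortfall_after_shedding(80_000, 84_000, 5_000))   # 0.0 -> no rolling blackouts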

 

Understandably, the data-center operators are upset.  They would rather the grid operators spend money on spare capacity than spend it themselves on backup generators.  But the semiconductor manufacturers have already learned how to do this, and build the costs of giant emergency-generation facilities into their budgets from the start. 

 

Some data-center operators are starting to build their own backup generators so that they can agree to go off-grid in emergencies without interrupting their operations.  After all, it's a lot easier to restart a data center after a shutdown than a semiconductor plant, which could suffer extreme damage after a disorganized shutdown that could put it out of action for months and cost many millions of dollars. 

 

Compared to plants that make real stuff, data centers can easily offload work to other centers in different parts of the country, or even outside the U. S.  So if there is a regional power emergency, and a global operation such as Google has to shut down one data center, they have plenty more to take up the slack. 

 

It looks to me like the data centers don't have much of a rhetorical leg to stand on when they argue that they shouldn't be subjected to load-shedding agreements that many other large power users already tolerate.  We are probably seeing the usual huffing and puffing that accompanies an industry-wide shift to a policy that makes sense for consumers, power-grid operators, and even the data centers themselves, if they will take more responsibility for their own power in emergencies. 

 

If electricity gets expensive enough, data-center operators will have an incentive to figure out how to do what they do more efficiently.  There's plenty of low-power technology out there, developed for the Internet of Things and personal electronics.  We all want cheap electricity, but if it's too cheap it leads to inefficiencies that are wasteful on a large scale.  Parts of California in the 1970s had water bills that were practically indistinguishable from zero.  When I moved out there for school in 1972 from water-conscious Texas, I was amazed to see shopkeepers cleaning their sidewalks every morning, not with a broom or a leafblower, but with a spray hose, washing down the whole sidewalk. 

 

I don't think they do that anymore, and I don't think we should guarantee all data centers that they'll never lose power in an emergency either.

 

Sources:  Marc Levy's article "US electric grids under pressure from power-hungry data centers" appeared on the Associated Press website on Sept. 13 at https://apnews.com/article/big-tech-data-centers-electricity-energy-power-texas-pennsylvania-46b42f141d0301d4c59314cc90e3eab5. 

Monday, September 08, 2025

The Fading Glory of Subsidiarity

 

I'll get to subsidiarity in a minute.  First, here is why I'm writing about it this morning.

 

For many years, I have subscribed to the Austin American-Statesman, first in its hard-copy paper form, and then, when that got insupportably expensive, in its digital form only.  Already by then it was owned by a large media conglomerate, the Cox Media Group, but the operations and editorial control of the paper remained in Austin.  An outfit called GateHouse Media bought it from Cox in 2018, but relatively little changed when the owners of GateHouse bought Gannett Media, the company that ran USA Today, and moved the Statesman under the Gannett umbrella.  That caused some changes, but they were tolerable.  Back in February 2025, however, Gannett sold the Statesman to Hearst Communications, another media conglomerate. 

 

This may or may not have anything to do with what happened to me this week, but I suspect it does. 

 

I've been accustomed to propping my iPad on the breakfast table and reading the "e-edition" of the Statesman along with having my cereal and orange juice.  The software worked reasonably well most of the time, and until Wednesday of this week (Sept. 3) everything went smoothly. 

 

Suddenly on Wednesday, I was asked for a password, and the system rejected it.  After futilely trying to reset the password and getting no response from the paper's system, I called a help number and got connected to a man who said there was a software problem, and I should uninstall the Statesman app on my iPad and reinstall it. 

 

I tried that Thursday, but it didn't help.  Then I tried signing up as if I were a new subscriber (although I had found a place online which said my subscription was paid up until December of 2025).  Even that didn't work. 

 

Finally, I called the help line again.  I spoke to one person, who silently connected me to another person, who sounded like she was working in a boiler room with fifteen other people crowded into a space the size of a VW bus. 

 

She tried to identify me by name and phone number, but all those records had been lost.  (This was also the case when I called the day before.)  Finally, she managed to locate me by street address, but the system said I wasn't a subscriber.  I asked if she could look up my subscription record to tell when it expired.  She said that because of the transfer to Hearst they didn't have that information, and would I like to subscribe now?

 

Seeing no other option, I said yes.  I'd already spent about half an hour on the phone, and I figured this was the only way to get my paper back.  It took about twenty minutes for her to take my information and put it in the system, and I could hear her asking for help in the background.  Then it took another twenty minutes for me to log on and get my new subscription going, and we never could change the password they started me with. 

 

The entire megillah cost me an hour of time I was not planning to spend, and $360 for a year's subscription to the e-edition only, which was the price point that made me cancel the hard-copy edition a few years ago.  We've had some inflation since then, but not that much.

 

If there is anybody under 40 reading this, you are probably wondering why this old guy insists on paying that much money for stuff he could get for free.  Well, while I disagree with the editorial positions of the Statesman staff on most matters, it is still an edited entity that does a fairly good job of telling me what's been going on.  And for another thing, you can't find that many comics all in one place on the internet for free, or if you can, I don't know where to go.

 

Now for subsidiarity.  It is a term from Catholic social teaching which describes the principle that "issues should be dealt with at the most immediate or local level that is consistent with their resolution."  That's according to Wikipedia, and while that source is slanted on some matters, they are right on with this definition.

 

Going straight to my problem with the Statesman, most of the paper is written and edited thirty-five miles away up in Austin.  My e-edition trouble was a local issue, extending in principle no farther than Austin and San Marcos.  The principle of subsidiarity says that the problem with my subscription, and the list of when I've subscribed, and my credit balance which has apparently vanished into the bit void, and my passwords, and whatever other stuff is relevant to the problem, including the authority to do something about it, should all be right there in Austin, not stuck in some anonymous server farm in Seattle, controlled from a boiler-room operation in God knows where, and owned by a corporation based in Manhattan which clearly doesn't give a flip about how it treats its new customers.

 

Simply because technology did not yet permit otherwise, newspaper operations prior to about 1970 had to be local, in accordance with the principle of subsidiarity.  If my father had a problem with his subscription to the Fort Worth Star-Telegram, he'd get on the phone and call their office downtown.  A human being less than 20 miles away would answer the phone, and flip through physical pages of paper until he or she found the hand-written subscription records, and the issue would be resolved, or not.  People made mistakes with paper records too, but they were more easily resolved.  I have no idea what's gone wrong with my subscription to the Statesman, but again, only God knows exactly what the problem is, where all the loose ends are, and whether and how it can be resolved, because it's now so complex and involves incompatible computer systems and who knows what else. 

 

I don't have an answer to this problem, except to point out that if we try moving toward systems that are more in accordance with the principle of subsidiarity, a lot of these kinds of problems might take care of themselves. 

 

Sources:  I referred to the Wikipedia articles on subsidiarity and the Austin American-Statesman.

Monday, September 01, 2025

Will AI Doom A Good Chunk of Hollywood?

 

This week's New Yorker carries an article by Joshua Rothman about what artificial intelligence (AI) is poised to do to the arts, broadly speaking.  I'd like to focus on one particularly creepy novelty that AI has recently empowered:  the ability of three guys (one in the U. S., one in Canada, and one in Poland) to produce fully realized short movies without actors, sets, cameras, lights, or any of the production equipment familiar to the motion-picture industry.  The collaborators, who call themselves AI OR DIE, use only prompts to their AI software to do what they do.

 

I spent a few minutes sampling some of their wares.  Their introductory video appears to show what a camera sees as it progresses through a mobile-home park and into one home, where a door opens, then another, and finally a piece of toast appears, floating in the air.  Another one-minute clip shows a big guy posing as some kind of karate expert, knocking down smaller, harmless people.  If all the characters in that clip were creations of AI, without any rotoscoping or other involvement of real humans or their voices, we are very far down a road that I wasn't aware we were even traveling on.

 

Software from a firm called Runway is used not only by AI OR DIE, but also by commercial production companies on mainstream films.  So far, however, nobody has produced an entire successful feature film using only AI.  It's only a matter of time, it seems to me.

 

Rothman quotes the AI OR DIE collaborators as saying how thrilled they are when they can have an idea for a scene one day, and then start making it happen the next day.  No years in production hell, fundraising, hiring people, and all the other pre-production hassles that conventional filmmaking entails—just straight from idea to product.  So far, most of what they've done is what Rothman calls "darkly surrealistic comedies."  If the samples I saw were representative, their work is reminiscent of an afternoon I spent at Hampshire College in Massachusetts, in the 1990s I think, when a screening of student animations was held.

 

Early in the development of any new medium, you will come across works that were made simply to exploit the medium, without much thought given to what ought to be said through it.  The short animations we saw that afternoon were like that.  The students were thrilled to be able to express themselves in this semi-autonomous way.  Chuck Jones, the famous Warner Brothers animated-film director, once said that animators are the only artists who "create life."  Most of the time, though, these students let their thrills outweigh their judgment.  A good many of the films we saw back in Massachusetts that afternoon were in the category of the classic "Bambi Meets Godzilla."  That film, which I am not too surprised to learn was No. 38 in a book of fifty classic animated films, at least meets the Aristotelian criteria of the unities of action, time, and place.  There is one principal action, it happens over less than a 24-hour period, and it happens in only one location.  We see the fawn Bambi, or a reasonable facsimile thereof, browsing peacefully among flowers to idyllic background music.  Suddenly a giant foot—Godzilla's, in fact—drops into the frame and squashes Bambi flat.  End of story.  Most of the other films were like that:  a silly, stupid, or even mildly obscene idea, realized through the painful and tedious process of sole-author animation.

 

Just as our ability to manipulate human life technologically has led us to face fundamental questions about what it means to be human, the ability of only a few people to digitally synthesize works of art that formerly required the intense collaboration and technology-aided actions of hundreds of people will lead us to ask, "What is art?"  And here I'm going to fall back on some classical sources.

 

Plato held that the transcendentals of truth, goodness, and beauty lie at the roots of the universe.  According to theologian and philosopher Peter Kreeft, art is the cultivation of beauty.  Filmmaking is a type of storytelling, one in which the way the story is told plays as much of a role as the story itself.  And it's obvious that AI can now replace many older and more expensive ways of moviemaking without compromising what are called production values.  The authentic quality of the AI OR DIE clips I saw would fool anybody not thoroughly familiar with the technology into thinking those were real people and real mobile homes.

 

But AI is in the same category as film, cameras, lights, microphones, technicians, and all the other paraphernalia we traditionally associate with film.  These are all means to an end.  And the end is what Kreeft said:  the cultivation of beauty. 

 

I think the biggest change that the use of AI in film and animation is going to make will be economic.  Just as the advent of phototypesetting obsoleted entire technology sectors (platemaking, Linotyping, etc.), the advent of AI in film is going to obsolete a lot of technical jobs associated with real actors standing in front of real scenery and being photographed with real cameras. 

 

Will we still have movie stars?  Well, is Bugs Bunny a movie star?  You can't get his autograph, but nobody would deny he's famous.  And he's just as alive as he ever was. 

 

Before we push the panic button and write off most jobs in Hollywood, bear in mind that live theater survived the advent of radio, film, and television.  It was no longer something you could find in small towns every week, but it survived in some form.  I think film production with real actors in front of cameras will survive in some form too.  But the economic pressures to use AI for more chunks of major-studio-produced films will be so immense that some companies won't be able to resist.  And if the creatives come up with a way to make a film that cultivates beauty, and also uses mostly AI-generated images and sounds, well, that's the way art works.  Artists use whatever medium comes to hand to cultivate beauty.  But it's beauty that must be cultivated, not profits or gee-whiz dirty jokes.  And unfortunately, the dirty jokes and the profits often win out.

 

Sources:  Joshua Rothman's "After the Algorithm" appeared on pp. 31-39 of the Sept. 1-8, 2025 issue of The New Yorker.  I also referred to the Wikipedia articles on "Bambi Meets Godzilla" and the software company Runway.  Peter Kreeft's idea of art as the cultivation of beauty can be found in his Doors in the Walls of the World (Ignatius Press, 2018).

Monday, August 25, 2025

RAND Says AI Apocalypse Unlikely

 

In 2024, several hundred artificial-intelligence (AI) researchers signed a statement calling for serious actions to avert the possibility that AI could break bad and kill the human race.  In an interview last February, Elon Musk mused that there is "only" a 20% chance of annihilation from AI.  With so many prominent people speculating that AI may spell the end of humanity, Michael J. D. Vermeer of the RAND Corporation began a project to explore just how AI could wipe out all humans.  It's not as easy as you think.

 

RAND is one of the original think tanks, founded in 1948 to develop U. S. military policies, and it has since studied a wide range of issues in quantitative ways.  As Vermeer writes in the September Scientific American, he and his fellow researchers considered three main approaches to the extinction problem:  (1) nuclear weapons, (2) pandemics, and (3) deliberately induced global warming. 

 

It turns out that nuclear weapons, although capable of killing billions if set off in densely populated areas, would not do the job.  There would be small remnants of people scattered in remote places, and they would probably be enough to reconstitute human life indefinitely.

 

The most likely scenario that would work is a combination of pathogens that together would kill nearly every human who caught them.  The problem here ("problem" from AI's point of view) is that once people figured out what was going on, they would invoke quarantines, much as New Zealand did during COVID, and entire island nations or other isolated regions could survive until the pandemic burned itself out.

 

Artificially induced global warming was the hardest way to do it.  There are compounds such as sulfur hexafluoride which have about 25,000 times the global-warming capability of carbon dioxide.  If you made a few million tons of it and spread it around, it could raise the global average temperature so much that "there would be no environmental niche left for humanity."  But factories pumping megatons of bad stuff into the atmosphere would be hard to hide from people, who naturally would want to know what's going on.
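
 

To get a feel for the scale, here is a rough conversion into carbon-dioxide-equivalent terms, sketched in Python.  The 25,000 figure is the one cited above; the three-million-ton quantity is my stand-in for "a few million tons":

gwp_sf6 = 25_000            # warming potential relative to CO2, cited above
tons_sf6 = 3_000_000        # "a few million tons" (assumed:  3 million)
co2_equivalent = gwp_sf6 * tons_sf6
print(co2_equivalent)       # 75,000,000,000 tons, i.e. about 75 gigatons
# For comparison, all human activity emits roughly 40 gigatons of CO2 a
# year, so this is like adding about two extra years of global emissions
# in one long-lived gas.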

 

So while an AI apocalypse is theoretically possible, all the scenarios they considered had common flaws.  In order for any of them to happen, the AI system would first have to make up its mind, so to speak, to persist in the goal of wiping out humanity until the job was actually done.  Then it would have to wrest control of the relevant technology (nuclear or biological weapons, chemical plants) and conduct extensive projects with them to execute the goal.  It would also have to obtain the cooperation of humans, or at least their unwitting participation.  And finally, as civilization collapsed, the AI system would have to carry on without human help, as the few remaining humans would be useless for AI's purposes and simply targets for extinction.

 

While this is an admirable and objectively scientific study, I think it overlooks a few things. 

 

First, it draws an arbitrary line between the AI system (which in practice would be a conglomeration of systems) and human beings.  Both now and in the foreseeable future, humans will be an essential part of AI because it needs us.  Let's imagine the opposite scenario:  how would humans wipe out all AI from the planet?  If every IT person in the world just didn't show up for work tomorrow, what would happen?  A lot of bad things, certainly, because computers (not just AI, but increasingly systems involving AI) are intimately woven into modern economies.  Nevertheless, I think issues (caused by stupid non-IT humans, probably) would start showing up, and in a short time we would have a global computer crash the likes of which have never been seen.  True, millions of people would die along with the AI systems.  But I'm not aware of any truly autonomous AI system of any complexity and importance that has no humans dealing with it in any way, as apparently was the case in the 1970 sci-fi film "Colossus:  The Forbin Project."

 

So if an AI-powered system showed signs of getting out of hand—taking over control of nuclear weapons, doing back-room pathogen experiments on its own, etc.—we could kill it by just walking away from it, at least the way things are now.

 

More likely than any of the hypothetical disasters imagined by the RAND folks is a possibility they didn't seem to consider.  What if AI just gradually supplants humans until the last human dies?  This is essentially the stated goal of many transhumanists, who foresee the uploading of human consciousness into computer hardware as their equivalent of eternal life.  They don't realize that their idea is equivalent to thinking that making an animated effigy of myself will guarantee my survival after death, much as the ancient Egyptians prepared their pharaohs for the afterlife. 

 

But pernicious ideas like this can gain traction, and we are already seeing an unexpected downturn in fertility worldwide as civilizations benefit from technology-powered prosperity.  If AI, and its auxiliary technological forms, ever puts an end to humanity, I think the gradual, slow replacement of humans by AI-powered systems is more likely than any sudden, concentrated catastrophe, like the ones the RAND people considered.  And the creepy thing about this one is that it's happening already, right now, every day.

 

Romano Guardini was a theologian and philosopher who in 1956 wrote The End of the Modern World, in which he foresaw in broad terms what was going to happen to modernity as the last vestiges of Christian influence were replaced by a focus on the achievement of power for power's sake alone.  Here are a few quotes from near the end of the book:  "The family is losing its significance as an integrating, order-preserving factor . . . . The modern state . . . is losing its organic structure, becoming more and more a complex of all-controlling functions.  In it the human being steps back, the apparatus forward."  As Guardini saw it, the only power rightly controlled is exercised under God.  And once God is abolished and man sets up technology as an idol, looking to it for salvation, the spiritual death of humanity is assured, and physical death may not be far behind.

 

I'm glad the RAND people assure us we don't have to worry about an AI apocalypse of the kind that would make a good, fast, dramatic movie.  But there are other dangers from AI, and the slow, insidious attack is the one to guard against most vigilantly.

 

Sources:  Michael J. D. Vermeer's "Could AI Really Kill Off Humans?" appeared on pp. 73-74 of the September 2025 issue of Scientific American, and is also available online at https://www.scientificamerican.com/article/could-ai-really-kill-off-humans/.  I also referred to the Wikipedia article on sulfur hexafluoride.  The Romano Guardini quotes are from pp. 161-162 of his The End of the Modern World, in an edition published by ISI Press in 1998. 

Monday, August 18, 2025

Is the Internet Emulsifying Society?

 

About a year ago I had cataract surgery, which these days means replacing the natural lens in the eye with an artificial one.  Curious about what happens to the old lens, I looked up the details of the process.  It turns out that one of the most common procedures uses an ultrasonic probe to emulsify the old lens, turning a highly structured and durable object that served me well for 70 years into a liquefied mess that was easily removed. 

 

If you're wondering what this has to do with the internet and society, be patient.

 

A recent report in The Dispatch by Yascha Mounk describes the results of an analysis by Financial Times journalist John Burn-Murdoch of data from a large Understanding America survey of more than 14,000 respondents.  Psychologists have standardized certain personality traits as being fairly easy to assess in surveys and also predictive of how well people do in society.  Among these traits are conscientiousness, extraversion, and neuroticism.  People who are conscientious make good citizens and employees:  they are "organized, responsible, and hardworking."  Extraversion makes for better social skills and community involvement, while neuroticism indicates a trend toward anxiety and depression.

 

Burn-Murdoch divided up the results by age categories, the youngest being 16 to 39, and compared the rates of these traits to what prevailed in the full population in 2014, about a decade ago.  The results are shocking.

 

Every age group (16-39, 40-59, and 60+) has declined in extraversion from the 50th to the 40th percentile, a drop of ten percentile points out of 100.  (If a number were unchanged from 2014, the result would be the 50th percentile today.)  But in neuroticism, those under 40, who were already at the 60th percentile in 2014, have now zoomed up to the 70th.  Lots of young neurotics out there.  And they have distinguished themselves even more in the categories of agreeableness (declining from 45 to 35) and, most of all, conscientiousness.  From a relatively good 47th percentile or so in 2014, the younger set have plummeted to an abysmal 28th percentile in about a decade.
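
 

Gathering those percentile shifts into one place (the numbers are as reported above; the grouping labels are mine):

# Percentile standings, 2014 vs. today, for the traits discussed above.
shifts = {
    "extraversion (all ages)":   (50, 40),
    "neuroticism (16-39)":       (60, 70),
    "agreeableness (16-39)":     (45, 35),
    "conscientiousness (16-39)": (47, 28),
}
for trait, (in_2014, now) in shifts.items():
    print(f"{trait:28s} {in_2014} -> {now}  ({now - in_2014:+d} points)")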

 

When conscientiousness is broken down into its constituent parts, the picture gets even worse.  Starting about 2016, the 16-39 group shows jumps in positive responses to "is easily distracted" and "can be careless." 

 

If the survey had been restricted to teenagers, you would expect such results, although not necessarily ones this large.  But we're talking about people in their prime earning years too, twenty- to forty-year-olds. 

 

Mounk ascribes most of these disastrous changes to influences traceable to the Internet, and specifically, social media.  He contrasts the ballyhoo and wild optimism that greeted various Internet-based developments such as online dating and worldwide free phone and Zoom calls with the reality of cyberbullying, trolling, cancel culture, and the mob psychology on steroids that the Internet provides fertile soil for. 

 

Now for the emulsion part.  An emulsion takes something that tends to keep its integrity—such as a blob of oil in water or the natural lens of an eye—and breaks it up into individual pieces that are surrounded by a foreign agent.  In the case of mayonnaise, the oil used is separated into tiny drops surrounded by water.  Oil doesn't naturally mix with water, but when an emulsifier is used (the lecithin in egg yolk, in this case), it reduces surface tension and breaks up the oil into tiny droplets.

 

That's fine in the case of mayonnaise.  But in the case of a society, surrounding each individual with a foreign film of Internet-mediated software that passes through firms interested not primarily in the good of society, but in making a profit, all kinds of pernicious effects can happen.

 

There is nothing intrinsically wrong with making money, so this is not a diatribe against big tech as such.  But in the case of cigarettes, when a popular habit that made the tobacco companies rich was shown to have hidden dangers, it took a lot of political will and persistence to change things so that at least the dangers were known to anyone who picks up a pack of cigarettes.

 

Mounk thinks it may be too late to do much about the social and psychological harms caused by the Internet, but we are still at the early stage of adoption when it comes to generative artificial intelligence (AI).  I tend not to make such a sharp distinction between the way the Internet is currently used and the difference that widespread deployment of free software such as ChatGPT will make.  For decades, the tech companies have been using what amounts to AI to addict people to their social media services and to profit from political polarization.  So as AI becomes more commonplace, it will be a change only in degree, not in kind.

 

AI or no, we have had plenty of time already to see the pernicious results among young people of interacting with other humans mainly through the mediation of mobile phones.  It's not good.  Just as man does not live by bread alone, people aren't intended to interact by smartphone alone.  If they do, they get less conscientious, more neurotic, more isolated and lonely, and more easily distracted and error-prone.  They also find it increasingly difficult to follow any line of reasoning of more than one step.

 

Several states have recently passed laws restricting the use of smartphones in K-12 education.  This is a controversial but ultimately beneficial step in the right direction, although it will take a while to see how seriously individual school districts take it and whether it makes much of a difference in how young people think and act.  For those of you who believe in the devil, I'm pretty sure he is delighted to see that society is breaking up into isolated individuals who can communicate only through the foreign agent of the Internet, rather than being fully present—physically, emotionally, and spiritually—to the Other. 

 

Perhaps warnings like these will help us realize how bad things have become, and what we need to do to stop them from getting any worse.  In the meantime, enjoy your mayonnaise.

 

Sources:  Yascha Mounk's article "How We Got the Internet All Wrong" appeared in The Dispatch on Aug. 12, 2025 at https://thedispatch.com/article/social-media-children-dating-neurotic/.  I also referred to the Understanding America survey data on which it was based at https://uasdata.usc.edu/index.php. 

Monday, August 11, 2025

"Winter's Tale" and the Spirit of Engineering

 

Once in a great while I will review a book in this space that I think is worth paying attention to if one is interested in engineering ethics.  Winter's Tale by Mark Helprin is a novel, published in 1983, and even now I can't say exactly why I think it should be more widely known among engineers and those interested in engineering.  But it should be.

 

Every profession has a spirit: a bundle of intuitive and largely emotional feelings that go along with the objective knowledge and actions that constitute the profession.  Among many other things, Winter's Tale captures the spirit of engineering better than any other fiction work I know.  And for that reason alone, it deserves praise.

 

The book is hard to describe.  There are some incontestable facts about it, so I'll start with those.  It is set mainly in New York City, with excursions to an imaginary upstate region called Lake of the Coheeries, and side trips to San Francisco.  It is not a realistic novel, in the sense that some characters in it live longer than normal lifespans, and various other meta-realistic things happen.  There are more characters in it than you'd find in a typical nineteenth-century Russian novel.  There is no single plot, but instead a complex tapestry that dashes back and forth in time like a squirrel crossing a street. 

 

But all these matters are secondary.  The novel's chief virtue is the creation of an atmosphere of hope:  not optimism, exactly (some truly terrible things happen to people in it), but a temperate yet powerful energy and drive shared by nearly all the characters, except for a few completely evil ones.  And even the evil ones are interesting. 

 

The fertility of Helprin's imagination is astounding, as he creates technical terms, flora and fauna, and other things that are, strictly speaking, imaginary yet somehow make sense within the story.  One of the many recurring elements in the book is the appearance of a "cloud wall" which seems to be a kind of matrix of creation and time travel.  Here is how Virginia, one of the principal characters, describes it to her son Martin:

 

           ". . . It swirls around the city in uneven cusps, sometimes dropping down like a tornado to spirit people away or deposit them there, sometimes opening white roads from the city, and sometimes resting out at sea while connections are made with other places.  It is a benevolent storm, a place of refuge, the neutral flow in which we float.  We wonder if there is anything beyond it, and we think that perhaps there is."

           "Why?" Martin asked from within the covers.

            "Because," said Virginia, "in those rare times when all things coalesce to serve beauty, symmetry, and justice, it becomes the color of gold—warm and smiling, as if God were reminded of the perfection and complexity of what He had long ago set to spinning, and long ago forgotten."

 

The whole novel is like that.

 

Although there is no preaching, no doctrine expounded, and very few explicitly religious characters such as ordained ministers, a thread of holiness, or at least awareness of life beyond this one, runs throughout the book.  This is probably why I learned about it from a recommendation by the Catholic philosopher Peter Kreeft, who mentioned it in Doors in the Walls of the World.

 

The reason engineers might benefit from reading it is that machines and other engineered structures—steam engines, cranes, bridges, locomotives—and those who design, build, and tend them, are portrayed in a way that is both appealing and transcendent.  At this moment I feel a frustration stemming from my inability to express what is so attractive about this book. 

 

You may learn something from the fact that the reviews of it I could find fell into two camps.  One camp loved it and wished it would go on forever.  The other camp, of which I turned out to be a member, said that after a while they found the book annoying, and almost didn't finish it.  I think one reason for the latter reaction is that structurally, it is all trees and very little forest.

 

The very fertility of Helprin's imagination leads him to introduce novel and fascinating creations, incidents, and characters every page or two, and the result is a loss of coherence in the overall story and sequence of events.  A chart of every character and incident with lines drawn among them would look like the wiring diagram of a Boeing 747. 

 

But every time I said to myself that I was going to stop reading it, I picked it up again, and finally chose one free day to finish the thing, all the time hoping that it would get to the point.  There is no crashing finale in which everything is tied up neatly with a bow.  There is, however, a climax of sorts, and toward the end events occur which have parallels in the New Testament.  Farther than that I shouldn't go, for fear of spoiling the ending for anyone who wants to read it. 

 

The only other novel I can think of that bears even a faint resemblance to Winter's Tale is G. K. Chesterton's The Man Who Was Thursday.  It is also a fantasy in the sense that unrealistic things happen, and it features characters who are what Kreeft calls archetypes, embodied representations of ideas.  Not everyone likes or can even make sense of Chesterton's novel, and the same will undoubtedly be true of Winter's Tale.

 

For a fantasy, Helprin's book is rather earthy in spots, and for that reason I wouldn't recommend it for children.  But the earthiness is not gratuitous, and rounds out the realism of his character portrayals.  Many of the main actors behave courageously and even nobly, and would be good subjects for the exemplary mode of engineering ethics, in which one describes how engineering went right in a particular case with ethical implications. 

 

If you pick up the book, you will know in the first few pages whether you can stand to read the rest.  If you persist till the end, you will have experienced a world unlike our own in some ways, but very like what it could be if we heeded, in Lincoln's phrase, the better angels of our nature. 

 

Sources:  Winter's Tale was published in 1983 by Harcourt Brace Jovanovich.  Peter Kreeft's Doors in the Walls of the World was published in 2018 by Ignatius Press.