Monday, October 27, 2025

Asking the Wrong Question About Artificial General Intelligence

 

An article in the October issue of IEEE Spectrum investigates the status of artificial-intelligence IQ tests, and speculates on when and whether we will see the arrival of so-called artificial general intelligence (AGI), which author Matthew Hutson says is "AI technology that can match the abilities of humans at most tasks."  But mostly unasked in the article is an even more basic question:  what do we mean by human intelligence? 

 

To be fair, Hutson has done a good job of surveying several popular benchmark tests for AI intelligence.  One of the more popular is something called the Abstraction and Reasoning Corpus (ARC for short), which its developer François Chollet has made something of a go-to standard:  charts in the article show the scores of over a dozen different AI programs on various versions of Chollet's tests.  Engineers like numbers, and standardizing a test is a good thing as long as the test measures what you want to know.  But does ARC do that?

 

The version of the ARC test described in the article consists largely of inferring rules from examples of patterns of colored figures, and applying those rules to new cases.  Human beings can still score higher than AI systems on these tests, although the systems are improving.  But it's an open question whether abstracting patterns from geometric shapes has much to do with being generally intelligent.
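
 

To make that format concrete, here is a minimal sketch in Python of the shape an ARC-style task takes.  The grids, the hidden rule, and the solver's guess below are all invented for illustration; this is not one of Chollet's actual puzzles.

```python
# A sketch of the *form* of an ARC-style task (hypothetical example, not an
# actual puzzle from Chollet's corpus).  Grids are small matrices of integers,
# each integer standing for a color.  The solver sees a few input -> output
# demonstration pairs, infers the hidden rule, and applies it to a test input.

task = {
    "train": [  # demonstration pairs; the hidden rule here is "mirror left-right"
        {"input":  [[1, 0, 0],
                    [2, 0, 0]],
         "output": [[0, 0, 1],
                    [0, 0, 2]]},
        {"input":  [[0, 3],
                    [4, 0]],
         "output": [[3, 0],
                    [0, 4]]},
    ],
    "test": {"input": [[5, 0, 0],
                       [0, 6, 0]]},
}

def candidate_rule(grid):
    """A solver's guessed rule: mirror each row left-to-right."""
    return [list(reversed(row)) for row in grid]

# A guess only counts if it reproduces every demonstration pair exactly.
assert all(candidate_rule(p["input"]) == p["output"] for p in task["train"])
print(candidate_rule(task["test"]["input"]))  # [[0, 0, 5], [0, 6, 0]]
```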

 

Chollet holds that his test measures the ability to acquire new skills easily, which he thinks is the prime measure of intelligence.  Whether it actually does that is debatable, but it looks like ARC is to the AI world what the old Stanford-Binet IQ test is to people.  That IQ test was developed over a century ago and is now in its fifth edition.

 

Hutson comes close to the problem when he admits that "notions of intelligence vary across place and time."  The Stanford-Binet test is mainly used to identify people who don't fit in well with the public-school system, which is mainly designed to produce worker bees for the modern economy.  And since the modern economy is shifting all the time, what counts as intelligence shifts too.

And even if we could perfectly track these shifts, the admittedly infinite array of "tasks" that people perform presents an almost insurmountable problem to anyone who wants not only to define, but to evaluate, something that could justifiably be called artificial general intelligence.

 

Geoffrey Hinton, who recently won a Nobel Prize for his work on AI, is quoted in the article as saying that if an AI robot could successfully do household plumbing, that would be a milestone in AGI, and he thinks it's still about ten years off.  I hope I'm around in ten years to check this prediction, which I personally think is optimistic.  For one thing, humanoid robots will have to get a lot cheaper before people even consider using one to fix a toilet.

 

All these approaches to AGI ignore a distinction in the field of human psychology which was first pointed out by Aristotle.  The distinction has been described in various ways, but the most succinct is to differentiate between perceptual thought and conceptual thought. 

 

Perceptual thought, which humans share with other animals and machines, consists, broadly speaking, in perceiving, remembering, imagining, and making associations among perceptions and memories.  Inanimate material objects like computers can display perceptual thought, and in crawling the Internet for raw material with which to answer queries, all AI chatbots and similar systems use perceptual thought, which ultimately has to do with concrete individual things.

 

On the other hand, conceptual thought involves the consideration of universals:  freedom, for example, or the color blue, or triangularity as a property of a geometric figure, as opposed to considering any individual triangle.  There are good reasons to believe that no strictly material system (and this includes all AI) can engage in truly conceptual thought.  With suitable programming by humans, a computing system may provide a good simulation of conceptual thought, as a movie provides a good simulation of human beings walking around and even engaging in conceptual thought.  But a movie is just a sequence of images and sounds, and can't respond to its environment in an intelligent way.

 

Neither can an AI program engage in conceptual thought, although by finding examples of such thought in its training, it can provide a convincing simulation of it.  While having a robot do plumbing is all very well, the real goal sought by those who want to achieve AGI is human-likeness in every significant respect.  And a human incapable of conceptual thought would at the least be considered severely disabled, though still worthy of respect as a member of the human community.

 

The vital and provable distinction between perceptual and conceptual thought has been all but forgotten by AI researchers and the wider culture.  But if we ignore it, and allow AI to take over more and more tasks formerly done by humans, we will surround ourselves with concept-free entities.  This will be dangerous. 

 

A good example of a powerful concept-free entity is a tiger.  If you walk into the cage of a hungry tiger, all it sees in you is a specific perception:  here's dinner.  There is no reasoning over abstractions with a tiger, just a power struggle in which the human has a distinct disadvantage.

 

Aristotle restricted the term "intellect" to mean that part of the human mind capable of dealing with concepts.  It is what distinguishes us from the other animals, and from every AI system as well.  Try as they might, AI researchers will not be able to develop anything that can entertain concepts.  And attempts to replace humans in jobs where concepts are important, such as almost any occupation that involves dealing with humans as one ethical being to another, can easily turn into the kind of hungry-tiger encounter that humans generally lose.  Anyone who has struggled with an AI-powered phone-answering system to gain the privilege of talking with an actual human being will know what I mean.

 

ARC may become the default IQ test for new AI prototypes vying for the title of AGI.  But the concept symbolized by the acronym AGI is itself incomprehensible to AI.  As long as there are humans left, we will be the ones awarding the titles, not the AI bots.  But only if they let us.

 

Sources:  Matthew Hutson's article "Can We Build a Better IQ Test for AI?" appears on pp. 34-39 of the October 2025 issue of IEEE Spectrum.  I also referred to the Stanford Encyclopedia of Philosophy article on Aristotle.  For a more detailed argument about why AI cannot perform conceptual thought, see "Artificial Intelligence and Its Natural Limits" by Karl Stephan and Gyula Klima, AI & Society, vol. 36 (2021), pp. 9-18.

Monday, October 20, 2025

Shackleton's Flawed Ship

 

Ernest Shackleton (1874-1922) almost reached the South Pole in 1909, although he lost the title of being first to get there to Roald Amundsen, who achieved that record in 1911.  Shackleton then set his sights on being the first to traverse Antarctica from one side to the other, and for that project purchased the wooden excursion ship Endurance.  He embarked on his grandly named "Imperial Trans-Antarctic Expedition" on Dec. 5, 1914 from South Georgia, a small island in the South Atlantic southeast of South America, intending to land at Vahsel Bay in the Weddell Sea, reach the South Pole, and cross to the other side with the aid of a second provisions-laying party.

 

It was a complicated and ambitious undertaking.  Endurance got stuck in the ice in mid-January of 1915 after nearly reaching Vahsel Bay, and Shackleton decided to wait on board the ship until the following spring, nine months later.  Meanwhile, over the Antarctic winter, the drifting ice slowly carried the ship several hundred miles northwest until October, when the spring thaws began to exert extreme pressure on the hull.

 

On October 24, the hull broke, water rushed in, and Shackleton ordered the ship abandoned.  The crew transferred to camps on the ice, and the ship finally sank on Nov. 21, 1915.  This began a series of adventures for Shackleton and his men which would be too long to recite here, but eventually they made it back to something like civilization in August of 1916.

 

Now we fast-forward to 2022, when an equally daring expedition called Endurance22 found Endurance under 3000 meters (about 9800 feet) of water and did extensive photographic documentation of the wreck, which by international agreement will remain undisturbed. 

The details of what they found about why the Endurance sank are described in a recent UPI report by Stephen Feller.

 

After the wreck was found, researcher Jukka Tuhkuri and his colleagues at Aalto University in Finland conducted an investigation into why Endurance sank.  Even as long ago as the 1910s, shipwrights knew how to construct ships that would withstand the extreme pressures exerted by ice in the Antarctic.  But their analysis of ship's plans, diaries, and other documents indicated that Endurance wasn't built that way.

 

On the lowest deck, which contained the boilers and steam engine, there was only one beam that crossed the entire ship from one side to the other.  Ships designed to withstand the compressive forces of ice normally had several such beams spaced along the length of the ship to resist the ice, which otherwise will crack a hull like someone squeezing an eggshell too hard.  But that is apparently what happened to Endurance.  Although it lasted nearly a year stuck in the ice, the stresses caused by the following spring's thaw exceeded its capacity to resist them, and it cracked and sank.
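
 

To get a feel for why beam spacing matters so much, here is a toy calculation in Python (made-up numbers, not an analysis of Endurance):  treating a run of hull under uniform ice pressure as a simply supported beam, the peak bending moment grows with the square of the unsupported span, so every transverse beam that shortens the span pays off quadratically.

```python
# Toy structural sketch (illustrative numbers only; not an analysis of
# Endurance).  Treat a run of hull under uniform ice load w as a simply
# supported beam: the peak bending moment is M_max = w * L**2 / 8, where
# L is the unsupported span between transverse support beams.

def peak_bending_moment(w, span):
    """Peak bending moment (N*m) of a simply supported beam under uniform load."""
    return w * span**2 / 8

w = 50_000.0  # hypothetical ice load, newtons per meter of hull

for label, span_m in [("single midship beam, long span", 12.0),
                      ("several beams, short spans", 4.0)]:
    print(f"{label}: M_max = {peak_bending_moment(w, span_m):,.0f} N*m")

# Cutting the span from 12 m to 4 m cuts the peak moment, and hence the
# peak stress in the hull planking, by a factor of nine.
```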

 

According to Tuhkuri, Shackleton was aware that the ship he bought was built only for polar excursions and not for what amounted to ice-breaking duty.  The researchers even found a letter in which Shackleton mentioned having recommended additional internal hull-bracing beams for another polar exploration ship, one that got stuck in ice but didn't get crushed.  Tuhkuri speculates that Shackleton was in a hurry and bought the ship knowing it wasn't sufficiently braced, hoping he could avoid putting it in a situation where the missing beams would be needed.  Unfortunately, that's not what happened.

 

Although he wasn't an engineer, Shackleton was making engineering decisions as he prepared for his grand expedition.  Engineering is the application of limited resources to a technical problem, and that's exactly what Shackleton was doing.  Sometimes the time and expense required to prevent a somewhat unlikely event from happening simply isn't available.  Rather than abort the whole project, Shackleton decided to go ahead, trusting in his navigation skills and previous Antarctic experience to avoid disaster.  But his gamble with the missing beams didn't pay off, and he had to rely even more than he expected to on his survival skills to get him and his crew off the ice and back to civilization.

 

Shackleton's ill-fated expedition reminds me of the Apollo 13 near-disaster, in which three astronauts were stranded in space on their way to the moon in 1970 by an oxygen tank that exploded when some damaged insulation resulted in an internal fire.  The resulting damage led to a long series of improvised solutions to unexpected problems, which the astronauts carried out in coordination with the extensive NASA support staff on the ground.  Like Shackleton, the Apollo 13 team never reached their goal, but simply getting back to civilization after the accident was a bigger triumph than even landing on the moon again. 

 

Apollo 13's accident was caused by a manufacturing flaw, not a design flaw.  Still, the improvisation and backup systems used to rescue the mission were similar to what Shackleton used in getting his crew back safely. 

 

Most of us will never go on expeditions to unexplored lands or planets.  But the civilizational urge is still there, which is why several countries continue to plan both manned and unmanned expeditions to the Moon, Mars, and even farther. 

 

If and when these plans come to fruition, we can count on several things.  One is that not everything will go according to plan.  When engineers encounter a novel situation, despite all the information they can gather about it in advance, there is always something unexpected.  Sometimes it's just a matter for curiosity, but other times it can be a matter of life and death. 

 

Another thing is that good engineering practice and planning can provide enough backup resources to allow clever individuals to create a survival plan even in the face of a major disaster.  Losing the Endurance was a big setback, but Shackleton had packed enough auxiliary supplies in the form of food, shelter, and other necessaries, so that he and his crew could perform the extraordinary feat of extracting themselves from what must have looked like certain death at times.  And the ingenuity of NASA engineers combined with the intrepid actions of the Apollo 13 crew to get them safely home, despite major damage to the Service Module that contained the oxygen tank which blew up.

 

Companies such as SpaceX are now leading the way into space, and it remains to be seen how well they balance the goal of achieving a mission first at any cost, including death, against proceeding more slowly with more backup systems and more thoughtful engineering.  So far, none of the commercial space enterprises has lost a life in space.  Let's hope it stays that way as long as possible.

 

Sources:  The article "Shackleton's sunken polar ship may have been weaker than thought" by Stephen Feller was published on the UPI website on Oct. 6, 2025 at https://www.upi.com/Science_News/2025/10/06/shackleton-endurance-ship-crushed-in-ice/9321759774913/.  The Endurance22 website is at https://endurance22.org/.  I also referred to the Wikipedia articles on Apollo 13,  Ernest Shackleton, and Endurance. 

Monday, October 13, 2025

What Managers Think About Replacing Workers With AI

 

It's hard to look at a website, talk to anybody in business, or read a magazine for very long these days without encountering something about artificial intelligence (AI).  One of the biggest concerns for the average symbolic-manipulator employee, in George Gilder's phrase, is whether AI will replace you in your job.  Examples of symbolic manipulators are software developers, accountants, writers, and to some degree salespeople and counselors.  Hospital nurses and construction workers, on the other hand, are not symbolic manipulators, at least not most of the time. 

 

A company called Trio realized that even if AI were available to replace a lot of these folks, somebody would have to decide to do it.  And that somebody would be middle-level managers, for the most part.  So last month, the company conducted an online survey of about 3000 U. S. managers in all 50 states to find out their attitudes toward replacing their employees with AI.  The results are illuminating.

 

First of all, when broken down by state, the results show wide variations in how enthusiastic managers are about replacing flesh-and-blood workers with AI software.  Openness to doing this varies from a high of 67% in Maine to a low of 8% in Idaho.  Both are fairly rural states, so the difference is hard to account for except by cultural factors.  If I had to guess, I'd say finding good, reliable workers is more of a challenge in Maine, and that may be one reason why managers in our most northeastern state would rather skip the hassle of hiring people and go straight to an AI program.

 

When all states were lumped together, the top reason managers would replace workers with AI turned out to be pressure from upper management or shareholders, at 36%.  Presumably this was one of a list of "choose-one" options handed to the survey respondents.  The next most favored reasons were productivity gains (31%) and cost savings (27%).  This pressure from above makes me think that a sheep-like mentality which now and then manifests itself in the boardroom may be why we are hearing so much about AI replacing workers.  No CEO wants to be left behind in a stampede to the next great thing, even if the thing turns out to be not so great.

 

What is perhaps most disturbing to engineers about the survey results is the kinds of jobs that managers see as most ripe for replacement by AI.  "Technical roles like coding and design" (sounds like engineering to me) were perceived as most replaceable at 33%, while the least replaceable jobs were seen to be sales (11%) and "creative work" (15%).  There are a lot of sales jobs that could pretty easily be replaced by good AI software, but evidently managers still believe in the personal touch that good salespeople can bring to the task.  Engineers and programmers, by contrast, are always carried as overhead on budgets, and don't have as direct a connection between sales and their salaries as salespeople do. 

 

Independent of the question of whether a given job can actually be done better by AI than by a human, this survey looks at those who would be making the immediate decision to make the replacement.  Of course, the options presented to those surveyed were simplified ones.  The fact is that rather than making a simple choice between AI and a human in a given job, almost anybody in the symbolic-manipulation business seems to be adopting some form of AI almost by default—some deliberately and enthusiastically, others (like myself) reluctantly and only if it can't be avoided. 

 

These large workplace shifts tend to be hard to discern over the short term, because they happen gradually.  Take the development of computer-aided design (CAD) software as an example.  My late father-in-law never obtained a four-year college degree, yet in the 1950s he got a good job as a civil engineer and worked for the Texas Highway Department.  If you'd visited him shortly after he went to work there, he would have been sitting at a drafting table in a huge room full of guys (all guys) sitting at drafting tables, churning out drawings that were turned into blueprints for the construction crews working on the new interstate-highway system.

 

Visit that same office today (the building is still there), and you'll see fewer engineers than those old drafting rooms held, and they'll be sitting at computers.  The computer can't design anything by itself, and while the engineer is in some sense in charge of the process, the amount of sheer dogwork handled by the computer far exceeds the mental effort put forth by the engineer, who now does the work of ten or fifteen (or more) of the old drafting-room people.  And the field of civil engineering didn't collapse:  my school (Texas State University) started a new civil-engineering program a few years ago, and we have no problem placing our graduates.

 

The advent of AI, which has actually been going on for a decade at least and isn't as sudden as news reports make it sound, will probably be like the advent of CAD, only more so.  It's easy to forget that the computers need us as much as we need the computers.  Take away the largely-human-produced Internet from ChatGPT, and you'd have a lot of useless server farms on your hands. 

 

There are clearly dangers, of course, if we get too lazy and allow AI to make decisions that should remain in human hands, or minds.  And there are sectors where AI has already done serious damage, such as the harm AI-fueled social media has done to the psychological health of children and teenagers.  But we're not letting the pied piper of AI march away with all our kids.  Schools all across the U. S. are starting to ban smartphone use during classes, and parents are wising up to how harmful too-early use of smartphones can be to young people. 

 

Even if all managers were dying to replace their staffs with AI as fast as they could, the software simply isn't available yet.  At the present time, AI has the look of a fad, as indicated by the survey's finding that pressure from upper management is the biggest reason bosses are considering it.  So it's no time to panic, but keep your wits about you and be ready to deal with AI in your job, assuming you still have one.

 

Sources:  The summarized results of the Trio survey can be seen at https://trio.dev/managers-are-ready-to-replace-employees-with-ai/.  The San Marcos Daily Record of Oct. 10, 2025 carried a story on the survey on p. 8, which is how I found out about it, from an old-fashioned piece of paper.  But then I went online. 

Monday, October 06, 2025

Waymo 80% Safer than Human Drivers: Why Not Switch?

 

According to Waymo, the Google division that operates self-driving robotaxis in several U. S. cities, its vehicles are involved in 80% fewer injury-causing crashes than human-driven cars.  This was a surprise to me, as it may also have been to Kelsey Piper, who writes in a Substack called The Argument that if we want to reduce the number of traffic fatalities in the U. S. by 80%, all we have to do is switch to Waymos.

 

At the current rate of nearly 40,000 U. S. traffic fatalities a year, that would save some 31,000 lives a year.  But she cites a poll conducted by The Argument which found that overall only about 28% of respondents favored allowing self-driving cars in their town or city, and 41% favored a ban, as Boston and other cities have considered imposing.
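
 

The arithmetic behind that 31,000 figure is worth making explicit.  Here is a back-of-the-envelope sketch in Python, with the obvious caveat that it extrapolates a claimed drop in injury-causing crashes to fatalities nationwide:

```python
# Back-of-the-envelope check of the lives-saved figure (round numbers).
annual_fatalities = 39_000   # "nearly 40,000" U.S. traffic deaths per year
claimed_reduction = 0.80     # Waymo's claimed cut in injury-causing crashes

print(f"{annual_fatalities * claimed_reduction:,.0f} lives per year")  # ~31,200

# Caveat: this assumes the 80% reduction in injury crashes translates
# directly into an 80% reduction in fatalities, and that Waymo's record in
# sunny, carefully chosen markets would hold up everywhere.
```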

 

Piper speculates that a general distaste for AI-powered things may be behind what looks like irrational opposition to self-driving cars.  Some Waymos have even been attacked by modern-day Luddites.  As she puts it, ". . . you can't vandalize ChatGPT, so anti-AI sentiment finds its expression in harassing Waymos."

 

She realizes that anything like a wholesale move to self-driving cars would cause massive disruption to our present transportation system.  She also acknowledges the tendency of government to make optional things mandatory.  Right now, Waymo is one private company offering a specific service in a few carefully chosen markets such as Atlanta, Austin, San Francisco, and Phoenix.  With the possible exception of Atlanta, these are all locations with plenty of sunshine and relatively few days of inclement weather.  And the service in Atlanta only commenced last June, so Waymos in that city have not yet gone through a typical Atlanta winter, which almost always includes an ice storm, as I know from living there for a couple of years.  Ice may present serious challenges to Waymo's algorithms.

 

So there are practical limitations on the areas in which Waymo can operate.  As one of the commenters on Piper's article mentions, Waymo has somewhat cherry-picked markets in which it can compete while maintaining its very good safety record.  Even if Boston decided to allow Waymos, I expect a long development period would delay deployment while the company figured out how to deal with snow, ice, and slush on the former cowpaths that are now Boston streets.  At least the Waymo cars probably wouldn't get lost as much as I did whenever I drove to Boston when I lived in New England, which was, well, pretty much every time I went there. 

 

But there are those 31,000 lives.  As Piper points out, that's more lives than are lost every year to homicides.  Wouldn't it be nice if we could eliminate all homicides?  Well, numerically speaking, changing to self-driving cars would do the equivalent. 

 

The psychology of why we tolerate so many annual deaths from traffic accidents is interesting.  People don't always respond to risks in reasonable ways.  The classic example is driving to the airport to fly somewhere.  Unless you live within walking distance of the airport, you're going to drive or ride.  And unless you take a Waymo there, you will ride in a human-operated car.  While lots of people are much more afraid of flying than they are of driving, the chances of dying in a plane crash are much lower than the chances of dying in a car wreck on the way to or from the airport. 

 

I suspect that we reconcile ourselves to the 40,000 or so traffic deaths every year with some variation on the notion that it always happens to someone else, and that if I were in the same situation that somebody else died in, I would have been clever enough to save myself.  These are pure rationalizations, but the alternative is to experience a little squirt of adrenaline every time we buckle the seat belt and pull out of the garage. 

 

Five or ten years ago, there was a lot of hype about how every new car would be self-driving within a few years.  That obviously hasn't happened, for a variety of reasons.  One factor is the expense per vehicle.  Waymo doesn't advertise how much each of its vehicles costs, but various estimates put it on the order of $160,000 to $300,000.  This puts them in the super-luxury class, and explains why they are deployed only in areas that will generate pretty good revenue.  Even in the carefully chosen markets that Waymo has selected, it appears that the company is not yet making a profit, which means the whole thing is still an elaborate experiment oriented toward some future situation that hasn't materialized.

 

Still, I will admit that if I could have a self-driving car that was absolutely trustworthy (a Level 5 vehicle according to the SAE autonomous-vehicle rating system, one you could read or sleep in while the machine takes care of all driving tasks), and not pay a whole lot more for it than I'm paying now, I would at least consider it.  But in my relatively small town, which would not provide enough revenue for Waymo under its present operating circumstances, that's not going to happen.

 

Now and then on my trips to Austin, I see a Waymo eerily coasting along with nobody inside, and once I saw two in a row.  I will admit to having a flash of mischief when I saw one the first time, and wondering what would happen if you pulled in front of it and slammed on the brakes suddenly.  Probably nothing bad.  Fortunately, I was a passenger in the car I was in, not the driver, and so the experiment was never performed.

 

Unless something radical happens in the areas of AI, sensors, or legislation regarding the right to drive your own vehicle, it looks like Waymo and similar autonomous-vehicle companies may not spread their wares much farther than they already have.  And that's too bad for the 40,000 or so people who die on roadways each year.  Some ideas look good in theory, but when you start examining everything that would have to change to put them into practice, they just fall apart.  And converting our fleet of vehicles to nearly 100% self-driving looks like another one of those nice ideas that has had an unfortunate encounter with reality.

 

Sources:  A note in the online magazine The Dispatch referred me to Kelsey Piper's article "Please let the robots have this one," at https://www.theargumentmag.com/p/please-let-the-robots-have-this-one.  I also referred to a Reddit item on the estimated cost of Waymo vehicles at https://www.reddit.com/r/SelfDrivingCars/comments/1g8vv7o/where_did_the_whole_talk_about_the_cost_of_waymo/, and the Wikipedia article "Waymo."

Monday, September 29, 2025

Life For Teenagers Without Social Media

 

About a year ago, the Associated Press ran an article profiling two teenagers who were bucking the social-media frenzy by consciously limiting their phone use.  Since that time, teenagers without social media have only become more newsworthy.  As of June of this year, fourteen states have enacted some form of statewide ban on the use of cellphones in classrooms, and the trend is for more states to get on the bandwagon. 

 

What is it like for a teenager to do without most of the social media that their peers use?  Reporter Jocelyn Gecker profiled two teenage girls:  Gabriela Durham, who hopes to pursue a dancing career once she graduates from her Brooklyn high school, and Kate Bulkeley, a fifteen-year-old high schooler who is co-president of her Bible study club and has participated in a Model UN conference.  Kate ran into a problem when the other conference participants wanted to exchange only Instagram handles rather than phone numbers.  And sometimes she relies on friends with Snapchat to tell her about important student-government messages.  But overall, she is glad her social-media use is as low as it is.

 

Kate's parents knew that their daughter's school had a cellphone ban, but it wasn't enforced.  They were concerned about the bad publicity surrounding teens' use of social media, so when she became a freshman in high school they told her she couldn't use it.  Fine with the rule at first, she found as a sophomore that she needed Instagram to coordinate after-school activities. 

 

But the 15-year-old says she still uses social media only about two hours a week.  This is far below typical teen usage:  half of today's teens log more than 35 hours a week, according to one study cited in the report.  Kate simply sees most uses of social media as a waste of time, and prefers to spend her time studying and encountering friends in the flesh, so to speak.

 

Gabriela Durham received a cellphone as soon as she was old enough to use public transportation in New York.  This was much later than her peers, many of whom had been using cellphones since early in elementary school.  Her mother, Elena Romero, enforces a strict ban on social media until her daughters are 18.  They have fallen off the wagon only once, secretly using TikTok for a few weeks until Romero found out about it.

           

As a dance major at the Brooklyn High School of the Arts, Gabriela dances outside of school every day.  Dancing and dance practice, plus commuting on the subway, take up time that she might otherwise spend on social media.  But she is scandalized by peers who report logging 60 hours or more of social-media use weekly, calling it "insane."

 

Both girls admit there are inconveniences to not being on social media in high school.  So much of what goes on socially happens online that they miss out on jokes, memes, rumors, and a lot of the other things people in the pre-cellphone days simply picked up in the halls of high school.  But back then you didn't need advanced technology to be in the know.

 

These young women serve as test cases proving that teenagers who make minimal use of social media can nevertheless lead reasonably happy and fulfilling lives. 

 

I think it's important to note, however, that both sets of parents began their restrictions by delaying the time when their children received a cellphone for the first time.  As an educator, I long ago learned the lesson that it is much easier to start out with strict rules and then ease them gradually, than it is to begin with laxness and tighten up later. 

 

Parents who have laid no cellphone restrictions on their children and suddenly become convinced that they have to do something may find it very difficult to remove social-media privileges that their children have already grown accustomed to.  Education, whether in college or the nursery, is a long-term enterprise.  So it behooves parents to give serious thought to how they will deal with cellphones long before their children start asking for one.

 

I am acquainted with a family of five who seem to have negotiated the cellphone issue pretty well so far.  Their oldest child is fourteen.  After homeschooling her up to the age of twelve, her parents put her in a local Christian school for a year.  But she came home complaining that "those other kids spend all their spare time on their phones!" and her parents eventually moved her to a home-school cooperative, where cellphones are not in evidence.  This shows that good habits can be so ingrained in children that they embody virtuous attitudes themselves, even when placed in tempting situations.

 

This is what all parents want for their kids, I hope.  But all children are different:  in the same family, one child may be biddable and do everything she is told, while another raised in the same environment rebels against all strictures and cheats every chance he gets. 

 

But the current trend of recognizing on an institutional scale that constant access to social media does teenagers more harm than good on balance is vastly encouraging to those parents who saw the dangers years ago and have been taking positive action ever since. 

 

While there are still situations in which using some aspects of social media is logistically necessary, the hope is that the same philosophy of delaying social-media use becomes generally accepted in K-12 educational institutions, whether they are forced to change by state legislators or guided from within by enlightened educators at all levels. 

 

The damage has already been done to millions of children and teens, however.  And my point about suddenly withdrawing social media from those who have made it an integral part of their lives is still valid.  The results are likely to be comparable to Prohibition, which became effective in 1920 and only made alcohol-consumption problems worse. 

           

But if the educational system is adapted to minimize social-media dependence at all levels, there is real hope that the current increases in teen depression and suicide can be reversed. 

 

Sources:  The article "Life as a teen without social media isn't easy.  These families are navigating adolescence offline" by Jocelyn Gecker was dated June 5, 2024 and appeared at https://apnews.com/article/influenced-social-media-teens-mental-health-e32f82d46ea74b807c9099d61aec25d5.  I also referred to data about state-adopted school cellphone bans at https://www.newsweek.com/map-shows-us-states-school-phone-bans-2090411.

Monday, September 22, 2025

Are $100K H-1B Visas a Good Idea?

 

President Trump seems to think so.  The H-1B visa was intended to be used by foreigners wishing to work in the United States in "an occupation that requires theoretical and practical application of a body of highly specialized knowledge" and typically requires a bachelor's degree or higher.  When President Bush signed the Immigration Act of 1990, creating the H-1B as we know it today, it established a quota of persons to be admitted and required companies hiring such people to fill out a Labor Condition Application showing that it was unusually hard to find qualified individuals in the existing U. S. labor pool.

 

On the face of it, the H-1B visa sounds like a good idea.  If we are going to allow immigration at all (a question that is more debatable now than it has been in the past), it would make sense to choose those immigrants who are more capable of contributing to the economy in employment sectors where there are presently shortages. 

 

But there are always unintended consequences for any law, and lately there have been accusations that the H-1B system has been abused by companies simply wanting to hire cheaper foreign workers for jobs that they could fill with better-paid U. S. workers.

 

Evaluating the truth of that accusation is something I'm not personally prepared to do.  But the current H-1B visas are allocated largely by lottery, plus a nominal fee of a few hundred dollars, and a recent article in the Los Angeles Times indicates that companies have been gaming the lottery system by putting in multiple applications for the same person or position.  Authorities claim they have changed the rules to reduce such abuse.  But clearly there is room for improvement in the way the H-1B visa is administered.
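
 

A quick probability sketch shows why duplicate entries were worth the trouble.  The selection rate below is an illustrative round number, not an actual USCIS figure:

```python
# Why gaming the lottery with duplicate entries pays off: if each entry is
# selected with probability p, then k independent entries give a chance of
# 1 - (1 - p)**k of at least one selection.  p = 0.25 is illustrative only,
# not an actual USCIS selection rate.

p = 0.25
for k in (1, 2, 3, 5):
    print(f"{k} entries -> {1 - (1 - p)**k:.0%} chance of selection")
# 1 -> 25%, 2 -> 44%, 3 -> 58%, 5 -> 76%
```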

 

President Trump's solution to these problems is to raise the annual fee for an H-1B visa holder to at least $100,000, with a variety of more expensive visas for which the main qualification is that you're already rich and can pay a million dollars or more.  Some statistics cited by the LA Times indicate that many H-1B visa holders may be earning as little as $60,000 a year.  That is an indication both that these visa holders are not being paid the prevailing wages of the high-tech industry as they should be, and that slapping a $100K surcharge on such visas will simply make most of them disappear, along with the current visa holders.

 

There are two main groups of stakeholders in this situation.  One group consists of high-tech U. S. firms wanting to hire the best workers at the lowest wages they can get by with.  The other consists of foreign workers who are qualified to do high-tech jobs, and would like to do them in the U. S.  Over 70% of H-1B visa holders are from India, it turns out, but there are other countries involved as well. 

 

Adam Smith's invisible hand would let these workers in at whatever wage they would accept, which would be way below the prevailing U. S. wage scale.  While Smith's rule about trade barriers (namely, the fewer the better) was intended for goods, not people, extending it to free immigration is in the spirit if not the letter of his ideas.  In the free-for-all of illegal immigration we saw in the years before Trump came into office, high-tech companies did not benefit as much as they might have.  Even with troops of lawyers at their disposal, they would find it difficult to blatantly violate immigration law on a large scale, in contrast to the thousands of construction and other more menial jobs that most illegal immigrants find to do, at least at first. 

 

At the other extreme, you have people such as President Trump and my late friend Steve Unger, who was a classic "leftie" of the old school.  Two more different personalities can scarcely be imagined, but on this issue I think they would agree:  keep out them durn foreigners so that U. S. workers can make higher wages. 

 

Taken to an extreme, this principle would halt all immigration of whatever kind, even temporary student visas, which are a kind of back door through which many well-qualified foreigners get here in the first place.  From personal experience, I can tell you that would be an unmitigated disaster for U. S. higher education, which depends on non-U. S. citizens for a large fraction of its students and ultimately its professors, who have to start out as students.

 

Would wages for high-tech jobs and graduate students go up?  Perhaps some, but that assumes other things are equal, and after a while they wouldn't be.  It's easy to forget that the world's most important resource is people, not rare-earth minerals or oil or even water and air.  We are a nation of immigrants, and historically we have turned that into a unique strength by means of converting all kinds of immigrants to something called the American Way.

 

But if the American Way turns into something that people from other countries either can't afford, or can't buy into for some other reason, be it political, ethnic, or what have you, then the future of this country is dim.  We have had practically open borders with toleration of scofflaws for far too long, and it makes sense to reform the system so that the rule of law becomes more respected.  But it should be a rule of law, not a rule of men, or of one man.  And laws, or rules, that change with the whims of one energetic guy in the White House are hard to have respect for.

 

In sum, the hundred-thousand-dollar H-1B visa looks a lot like the tariff situation or the random roundups by masked ICE enforcers.  It's flashy and attracts a lot of attention and support from President Trump's base, but it makes little logical sense in the greater scheme of things if you look at it from a practical view.  Not only the H-1B visa system, but the whole immigration process needs a major overhaul.  But in a republic, the ideas for the overhaul should originate with the concerned public, as well as with stakeholders such as high-tech companies.  That's not being done, and until it is, we can expect further chaos and distress among highly qualified people who are here to contribute to our economy and just want to better their lives. 

 

Sources:  The Los Angeles Times carried the article "India expresses concern about Trump's move to hike fees for H-1B visas" at https://www.latimes.com/world-nation/story/2025-09-20/india-expresses-concern-about-trump-plan-to-hike-fees-on-h-1b-visas-that-bring-tech-workers-to-us on Sept. 20, 2025.  I also referred to the Wikipedia article on the H-1B visa.

Monday, September 15, 2025

Data Centers On the Grid: Ballast or Essential Cargo?

 

Back in the days of sailing ships, the captain had a choice when a storm became so severe that it threatened to sink the ship.  He could throw the cargo overboard, lightening the ship enough to save it and its crew for another day.  But doing that would ruin any chance of profiting from the voyage. 

 

It was a hard decision then, and an equally hard decision is facing operators of U. S. power grids as they try to cope with increasing demand for reliable power from data centers, many of which are being built to power the next generation of artificial-intelligence (AI) technologies. 

 

An Associated Press report by Marc Levy reveals that one option many grid operators are considering is to write into their agreements with new data centers an option to cut off power to them in emergencies. 

 

Texas, whose power grid is largely independent of the rest of the country's networks, recently passed a law that prescribes situations in which the grid operator can disconnect big electricity users such as semiconductor-fab plants and data centers.  This is not an entirely new practice.  For some years, large utility customers have taken the option of being disconnected in emergencies such as extremely hot or cold days that put a peak strain on the grid.  Typically they receive a discount on normal power usage when they allow the grid operator to have that option.

 

But according to Levy, the practice is being considered in other parts of the country as well.  A large grid operator called PJM Interconnection serves 65 million customers in the mid-Atlantic region.  It has proposed a rule similar to the one adopted in Texas for its data-center customers.  But an organization called the Digital Power Network, which includes data-center operators and bitcoin miners (another big energy-user class), complained that if PJM adopts this policy, it may scare off future investment in data centers and cause them to flee to other parts of the U. S. 

 

Another concern is rising electricity prices, which some attribute to the increased demand from data centers.  These prices are being borne by the average consumer, who in effect is subsidizing the gargantuan power needs of data centers, which typically pay less per kilowatt-hour than residential consumers anyway.

 

In a way, this issue is just an extreme example of a problem that power-grid operators have faced since there were power grids:  how to handle peak loads.  Electricity has to be generated at the same time it's consumed; battery storage has made some progress recently, but not enough to make much of a large-scale difference yet.  This immediacy requires a power grid to have enough generating capacity to supply the peak load—the most electricity it will ever have to supply on the hottest (or coldest) day under worst-case conditions. 

 

The problem with peak loads from an economic view is that many of those generating facilities sit idle most of the time, not producing a return on their investment.  So it has always been a tradeoff:  scrimp on capacity and take a chance that your grid will manage the peak load, or spend enough to have margin even under the worst peak load imaginable, at the cost of having a lot of idle generators and network equipment on your hands most of the time.

 

When the electric utility business was highly regulated and companies had a guaranteed rate of return, they could build excess capacity without being punished by the market.  But since the deregulatory era of the 1970s, and especially in hyper-free-market environments such as Texas, the grids no longer have this luxury.  This is one reason why load-shedding (the practice of cutting off certain big customers in emergencies) looks so attractive now:  instead of building excess capacity, the grid operator can simply throw some switches and pull through an emergency while ticking off only a few big customers, rather than cutting off power to everybody, including the old ladies who might freeze or die of heat exhaustion without it. 
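
 

The appeal of these interruptible-load arrangements is easy to see in a toy dispatch sketch.  The numbers and customer names below are invented for illustration; this is not any actual grid operator's procedure:

```python
# Toy sketch of emergency load-shedding with interruptible contracts
# (invented numbers; not any actual grid operator's procedure).

capacity_mw = 80_000.0   # what the grid can generate during the emergency
demand_mw   = 84_500.0   # forecast peak demand

# Customers that accepted a rate discount in exchange for being cut first.
interruptible = [("data center A", 1_500.0),
                 ("bitcoin miner",  1_200.0),
                 ("data center B",  2_000.0),
                 ("fab plant",        900.0)]

shortfall = demand_mw - capacity_mw
shed = []
for name, load_mw in sorted(interruptible, key=lambda c: -c[1]):
    if shortfall <= 0:
        break
    shed.append(name)
    shortfall -= load_mw

if shortfall > 0:
    print(f"Still short {shortfall:,.0f} MW after shedding: rolling blackouts")
else:
    print("Emergency covered by curtailing:", ", ".join(shed))
```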

 

Understandably, the data-center operators are upset.  They would rather the grid operators spend money on extra capacity than spend it themselves on backup generators.  But the semiconductor manufacturers have already learned how to do this, and build the costs of giant emergency-generation facilities into their budgets from the start. 

 

Some data-center operators are starting to build their own backup generators so that they can agree to go off-grid in emergencies without interrupting their operations.  After all, it's a lot easier to restart a data center after a shutdown than a semiconductor plant, which can suffer extreme damage in a disorganized shutdown, putting it out of action for months and costing many millions of dollars. 

 

Compared to plants that make real stuff, data centers can easily offload work to other centers in different parts of the country, or even outside the U. S.  So if there is a regional power emergency, and a global operation such as Google has to shut down one data center, they have plenty more to take up the slack. 

 

It looks to me like the data centers don't have much of a rhetorical leg to stand on when they argue that they shouldn't be subjected to load-shedding agreements like many other large power users tolerate already.  We are probably seeing the usual huffing and puffing that accompanies an industry-wide shift to a policy that makes sense for consumers, power-grid operators, and even the data centers themselves, if they agree to take more responsibility for their own power in emergencies. 

 

If electricity gets expensive enough, data-center operators will have an incentive to figure out how to do what they do more efficiently.  There's plenty of low-power technology out there, developed for the Internet of Things and personal electronics.  We all want cheap electricity, but electricity that is too cheap leads to inefficiencies that are wasteful on a large scale.  Parts of California in the 1970s had water bills that were practically indistinguishable from zero.  When I moved out there for school in 1972 from water-conscious Texas, I was amazed to see shopkeepers cleaning their sidewalks every morning, not with a broom, or a leafblower, but with a spray hose, washing down the whole sidewalk. 

 

I don't think they do that anymore, and I don't think we should guarantee all data centers that they'll never lose power in an emergency either.

 

Sources:  Marc Levy's article "US electric grids under pressure from power-hungry data centers" appeared on the Associated Press website on Sept. 13 at https://apnews.com/article/big-tech-data-centers-electricity-energy-power-texas-pennsylvania-46b42f141d0301d4c59314cc90e3eab5.