Monday, August 04, 2025

Should We Worry About Teens Befriending AI Companions?

A recent survey-based study by Common Sense Media shows that a substantial minority of teenagers surveyed use AI "companions" for social interaction and relationships.  In a survey of over a thousand young people aged 13 to 17 last April and May, the researchers found that 33% used applications such as ChatGPT, Character.AI, or Replika for things like conversation, role-playing, emotional support, or just as a friend.  Another 43% of those surveyed used AI as a "tool or program," and about a third reported no use of AI at all.

 

Perhaps more troubling than the percentages were some comments made by teens who were interviewed in an Associated Press report on the survey.  An 18-year-old named Ganesh Nair said, "When you're talking to AI, you are always right.  You're always interesting.  You are always emotionally justified."

           

The researchers also found that teens were more sophisticated than you might think about the reliability of AI and the wisdom of using it as a substitute for "meat" friends.  Half of those surveyed said they do not trust advice given to them by AI, although the younger teens tended to be more trusting.  And two-thirds said that their interactions with AI were less satisfying than those with real-life friends, but one-third said they were either about the same or better.  And four out of five teens spend more time with real friends than with AI.

 

The picture that emerges from the survey itself, as opposed to somewhat hyped news reports, is one of curiosity, cautious use, and skepticism.  However, there may be a small number of teens who either turn to AI as a more trusted interlocutor than live friends, or develop unhealthy dependencies of various kinds on AI chatbots.

 

At present, we are witnessing an uncontrolled experiment in how young people deal with AI companions.  The firms backing these systems with their multibillion-dollar server farms and sophisticated software are motivated to engage young people especially, as habits developed before age 20 or so tend to stay with us for a lifetime.  It's hard to picture a teenager messaging ChatGPT to "grow old along with me," but it may be happening somewhere.

 

I once knew a woman in New England who kept a life-size cloth doll in her house, made to resemble a former husband.  Most people would regard this as a little peculiar.  But what difference is there between that sort of thing and spending time in online chats with a piece of software that simulates a caring and sympathetic friend?  The interaction with AI is more private, at least until somebody hacks the system.  But why does the notion of teenagers who spend time chatting with Character.AI as though it were a real person bother us?

 

By saying "us," I implicitly separate myself from teens who do this sort of thing.  But there are teens who realize the dangers of AI overuse or misuse, and older teens especially expressed concerns to the AP reporter that too much socializing with chatbots could be bad. 

 

The same teen quoted above got "spooked" about AI companions when he learned that a friend of his used his companion to compose a Dear Jill message to his girlfriend of two years when he decided to break up.  I suppose that is not much different than a nineteenth-century swain paging through a tome entitled "Letters for All Occasions," although I doubt that even the Victorians were that thorough in providing examples for the troubled ex-suitor. 

 

Lurking in the background of all this is a very old theological principle:  idolatry.  An idol is anything less than God that we treat as God, in the sense of resorting to it for help instead of God.  For those who don't believe in God, idolatry would seem to be an empty concept.  But even atheists can see the effects of idolatry in extreme cases, even if they don't acknowledge the God who should be worshipped instead of the idol.

 

For a teen in a radically dysfunctional household, turning to an AI companion might be a good alternative, but a kind, loving human being would always be better.  Kind, loving human beings aren't always available, though, and so perhaps an AI companion would suffice in a pinch like a "donut" spare tire until you can get the flat fixed.  But you shouldn't drive on a temporary tire indefinitely, and teens who make AI companions a regular and significant part of their social lives are probably headed for problems.

 

What kind of problems?  Dependency, for one thing.  The AI firms are not promoting their companions out of the kindness of their collective hearts, and the more people rely on their products the more money they make.  The researchers who conducted the survey are concerned that teens who use AI companions that never argue, never disagree with them, and validate everything they say, will be ill-prepared for the real world where other humans have their own priorities, interests, and desires.

 

In an ideal world, every teen would have a loving mother and father they would trust with their deepest concerns, and perhaps friends as well who would give them good advice.  Not many of us grew up in that ideal world, however, and so perhaps teens in really awful situations may find some genuine solace in turning to AI companions rather than humans.

 

The big news of this survey is the fact that use of AI companions among teens is so widespread, though still in the minority.  The next thing to do is to focus on the small number of teens for whom AI companions are not simply something fun to play with, but form a deep and significant part of their emotional lives.  These are the teens we should be the most concerned about, and finding out why they get so wrapped up with AI companions and what needs the companions satisfy will take us a long way toward understanding this new potential threat to the well-being of teenagers, who are the future of our society.

 

Sources:  The AP article "Teens say they are turning to AI for friendship" appears on the AP website at https://apnews.com/article/ai-companion-generative-teens-mental-health-9ce59a2b250f3bd0187a717ffa2ad21f, and the Common Sense Media survey on which it was based is at https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf.

 

Monday, July 28, 2025

The Tea App Data Breach: Not Everybody's Cup Of . . .

 

A phone app called Tea became the No. 1 app downloaded from the U. S. Apple App Store last week.  Then Friday, news came that the app had been hacked, exposing thousands of images and other identifying information online.  The users of Tea are women who want to exchange information about men they are considering as dates, or have dated and want to evaluate for other women.  So any kind of data breach is disturbing, although it could have been worse.

 

Tea, an app introduced in 2023, is exclusively for women, and requires some form of identification to use.  Once a user is logged in, she can either post information about a certain man, or research his background as reported by other users, similar to the popular Yelp app that uses crowdsourcing to rate businesses. 

 

Understandably, some men take a dim view of Tea, and claim it violates their privacy or could even provide grounds for defamation lawsuits.  An attorney named Aaron Minc has been getting "hundreds" of calls from men whose descriptions on Tea are less than complimentary.  In an interview, Minc said Tea users could be sued for spreading false information.  But as the Wikipedia site describing Tea points out, the truth is an absolute defense against such a charge.  Nevertheless, being sued for any reason is not a picnic, and so with data breaches and lawsuits in the air, women may now think twice before signing up for Tea and posting the story of their latest disastrous date, which may have been arranged via social media anyway.

 

You might think most couples meet electronically these days, but a recent survey shows that even among those aged 18-29, only 23% of those who are either married or "in a relationship" met online.  So meeting the old-fashioned eyeball-to-eyeball way is still how most couples get together.  A woman meeting a new guy in person could still use Tea to check out his credentials, but that raises the larger issue of how reliable another woman's report would be, especially if it is anonymous.

 

Lawsuits over relationships are nothing new, of course.  One of the running plot threads that Charles Dickens milked for a lot of laughs in his first published novel, The Pickwick Papers, was how Mr. Pickwick's landlady Mrs. Bardell misunderstood a stray comment he made as a proposal of marriage, and filed a suit against him for breach of promise.  Though the legal details differ, this kind of action pitted the woman against the man, who then had to prove that his intentions were honorable or lose the suit.

 

I will admit that the idea of anonymous women posting ratings on me is somewhat disquieting, but as a teacher at a university, I'm subject to somewhat the same treatment by "Rate the Prof" websites, which take anonymous reports by students of various professors and post them online.  I never look at such sites, and if anything scurrilous has been posted about me, I've remained blissfully unaware of it. 

 

The way Tea works raises the question of whether anonymity online should be as widespread as it is.  That issue has been in the news lately as several states have passed laws requiring robust systems to verify ages for users of pornographic websites, for example.  That is another example where anonymity leads to problems, and positive identification with regard to age can at least mitigate harms to children who are too young to be exposed to porn.

 

Unfortunately, anonymity is almost a default setting online, while tying one's identity to every online communication would be not only technically burdensome, but downright dangerous.  How would we do anonymous hotlines and tip lines?  There would have to be exceptions for such cases.  And VPNs and other technologies already frustrate even the most vigorous attempts to identify people online, such as those made in criminal investigations.

 

The Tea app is facing not only its data-breach problem, which always is disturbing to users, but also the moral question of whether anonymous comments by women about men they have dated are fair to the men.  Such a question can be answered on a case-by-case basis, but in general, if women had to sign their real names to every comment they posted on Tea instead of remaining anonymous, the comments might not be as frank as they are now. 

 

The same principle applies to student evaluations conducted not by a commercial app, but by my university.  Students are guaranteed anonymity for the obvious reason that if they make a negative comment, and the professor finds out who said it, the student might have to take another one of the professor's classes, in which the professor might be tempted to wreak vengeance upon the unhappy but honest student.  So I accept the idea of anonymity in that situation, because I might otherwise be tempted to abuse my authority over students that way.

 

If Tea were really confined only to women users, there wouldn't seem to be any danger of that sort of thing happening.  But a woman could turn traitor, so to speak:  seeing a bad review of her current boyfriend on Tea, she could show it to him, and if the review was tied to the identity of the woman who gave him the bad review, he might consider some kind of revenge.  That would be bad news as well, so anonymity makes sense for Tea too. 

 

Still, when people know their names are associated with things they say, they tend to be more moderate in their expressions than if they hide behind a pseudonym and can flame to their hearts' content without any fear of retribution.  Some systems allow the option of either signing your name or remaining anonymous, and possibly that is the best approach.

 

Tea presents itself as a way to find "green flags," that is, women going online and saying what a good guy this was and you ought to date him.  If he's so good, why not keep him for yourself?  Realistically, I expect most of the comments are negative, which is why the site has been criticized to the extent it has been. Assuming the operators of Tea address their data breach, they can take comfort in the old saying attributed (perhaps apocryphally) to P. T. Barnum:  "There's no such thing as bad publicity."  More women know about Tea now, and so more men may get reviewed.  I only hope they get the reviews they deserve.

 

Sources:  The Associated Press article "The Tea app was intended to help women date safely.  Then it got hacked," appeared on July 26, 2025 at https://apnews.com/article/tea-app-data-breach-leak-4chan-c95d5bb2cabe9d1b8ec0ca8903503b29.  I also referred to an article at https://www.hims.com/news/dating-in-person-vs-online for the statistic about percentage of couples meeting online, and to the Wikipedia article on Tea (app). 

Monday, July 21, 2025

Can Teslas Drive Themselves? Judge and Jury To Decide

 

On a night in April of 2019, Naibel Benavides and her boyfriend Dillon Angulo had parked the Tahoe SUV they were in near the T-intersection of Card Sound Road and County Road 905 in the upper Florida Keys.  George McGee was heading toward the intersection "driving" a Model S Tesla.  I put "driving" in quotes, because he had set the vehicle on the misleadingly-named "full self-driving" mode.  It was going about 70 MPH when McGee dropped his cellphone and bent down to look for it.

 

According to dashcam evidence, the Tesla ran a stop sign, ignoring the blinking red light above the intersection, crashed through reflective warning signs, and slammed into the SUV, spinning it so violently that it struck Naibel and threw her 75 feet into the bushes, where her lifeless body was found by first responders shortly thereafter.  Dillon survived, but received a brain injury from which he is still recovering.

 

Understandably, the families of the victims have sued Tesla, and in an unusual move, Tesla is refusing to settle and is letting the case go to trial. 

 

The firm's position is that McGee was solely at fault for not following instructions on how to operate his car safely.  The driver should be prepared to take over manual control at all times, according to Tesla, and McGee clearly did not do that. 

 

The judge in the federal case, Beth Bloom, has thrown out claims of defective manufacturing and negligent misrepresentation, but has indicated she is open to arguments that Tesla "acted in reckless disregard of human life for the sake of developing their product and maximizing profit."

 

Regardless of the legal details, the outlines of what happened are fairly clear.  Tesla claims that McGee was pressing the accelerator, "speeding and overriding the car's system at the time of the crash."  While I am not familiar with exactly how one overrides the autopilot in a Tesla, if it is like the cruise control on many cars, the driver's manual interventions take priority over whatever the autopilot is doing.  If you press the brake, the car's going to stop, and if you press the accelerator, it's going to speed up, regardless of what the computer thinks should be happening. 

 

The Society of Automotive Engineers (SAE) has promulgated its six levels of vehicle automation, from Level 0 (plain old human-driven cars without even cruise control) up to Level 5 (complete automation in which the driver can be asleep or absent and the car will still operate safely).  The 2019 Tesla involved in the Florida crash was a Level 2 vehicle, whose driver-assistance system can steer and manage speed in some conditions and locations, but which requires the driver to supervise and be prepared to take over at any time.

 

McGee appears to have done at least two things wrong.  First, he was using the "full self-driving" mode on a rural road at night, while it is more suitable for limited-access freeways with clear lane markings.  Second, for whatever reason, when he dropped his phone he hit the accelerator at the wrong time.  This could conceivably have happened even if he had been driving a Level 0 car.  But I think it is much less likely, and here's why.

 

Tesla drivers obviously accumulate experience with their "self-driving" vehicles, and just as drivers of non-self-driving cars learn how hard you have to brake and how far you have to turn the steering wheel to go where you want, Tesla drivers learn what they can get by with when the car is in self-driving mode.  It appears that McGee had set the car in that mode, and while I don't know what was going through his mind, it is likely that he'd been able to do things such as look at his cellphone in the past while the car was driving itself, and nothing bad had happened.  That may be what he was doing just before he dropped the phone.

 

At 70 MPH, a car is traveling over 100 feet per second.  In a five-second pause to look for a phone, the car would have traveled as much as a tenth of a mile.  If McGee had been consciously driving a non-autonomous car the whole time, he probably would have seen the blinking red light ahead and mentally prepared to slow down.  But the way things happened, his eyes might have been on the phone the whole time, even after it dropped, and when he (perhaps accidentally) pressed the accelerator, the rest of the situation played out all too naturally, and tragically for Naibel and Dillon.
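
For readers who want to check that arithmetic, here is a minimal Python sketch; the five-second pause and the exact speed are assumed figures for illustration, not numbers from the trial record.

    # Back-of-the-envelope check of the distances involved (values assumed).
    MPH_TO_FPS = 5280 / 3600          # feet per mile divided by seconds per hour

    speed_mph = 70                    # approximate speed reported for the Tesla
    pause_s = 5                       # assumed length of the distraction

    speed_fps = speed_mph * MPH_TO_FPS     # about 103 feet per second
    distance_ft = speed_fps * pause_s      # about 513 feet
    distance_mi = distance_ft / 5280       # roughly a tenth of a mile

    print(f"{speed_fps:.0f} ft/s; {distance_ft:.0f} ft in {pause_s} s ({distance_mi:.2f} mi)")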

 

So while Tesla may be technically correct that McGee's actions were the direct cause of the crash, there is plenty of room to argue that the way Tesla presents their autonomous-driving system encourages drivers to over-rely on it.  Tesla says they have upgraded the system since 2019, and while that may be true, the issue at trial is whether they had cut corners and encouraged ways of driving in 2019 that could reasonably have led to this type of accident.

 

In an article unrelated to automotive issues but focused on the question of AI in general, I recently read that the self-driving idea has "plateaued."  A decade or so ago, the news was full of forecasts that we would all be able to read our phones, play checkers with our commuting partners, or catch an extra forty winks on the way to work as the robot drove us through all kinds of traffic and roads.  That vision obviously has not come to pass, and while there are a few fully autonomous driverless vehicles plying the streets of Austin right now—I've seen them—they are "geofenced" to traverse only certain areas, and equipped with many more sensors and other devices than a consumer could afford to purchase for a private vehicle.

 

So we may find that unless you live in certain densely populated regions of large cities, the dream of riding in a robot-driven car will remain just that:  a dream.  But when Tesla drivers presume that the dream has become reality and withdraw their attention from their surroundings, the dream can quickly become a nightmare.

 

Sources:  I referred to an Associated Press article on the trial beginning in Miami at https://apnews.com/article/musk-tesla-evidence-florida-benavides-autopilot-3ffab7fb53e93feb4ecfd3023f2ea21f.  I also referred to news reports on the accident and trial at https://www.nbcmiami.com/news/local/trial-against-tesla-begins-in-deadly-2019-crash-in-key-largo-involving-autopilot-feature/3657076/ and

https://www.nbcmiami.com/news/local/man-wants-answers-after-deadly-crash/124944/. 

 

Monday, July 14, 2025

Are AI Chatbots Really Just Pattern Engines?

 

That's all they are, according to Nathan Beacom, who recently wrote an article in the online journal The Dispatch with the title "There Is No Such Thing as Artificial Intelligence."

 

His point is an important one, as the phrase "artificial intelligence" and its abbreviation "AI" have enjoyed a boom in usage since 2000, according to Google's Ngrams analysis of words appearing in published books.  The system plots frequency of occurrence, as a percentage, versus time.  The term "AI" peaked around 1965 and again around 1987, which correspond to the first and second comings of AI, both of which fizzled out as the digital technology of the time and the algorithms used were inadequate to realize most of the hopes of developers.

 

But starting around 2014, usage of "AI" soared, and by 2022 (the last year surveyed by the Ngram machine) it stood at a level higher than the highest peak ever enjoyed by a common but very different term, "IRS."  So perhaps people now think the only inevitable things are death and AI, not death and taxes.

 

Kidding aside, Beacom says that the language we use about the technology generally referred to as AI has a profound influence on how we view it.  And his point is that a basic philosophical fallacy is embedded in the term "artificial intelligence."  Namely, what AI does is not intelligent in any meaningful sense of the term.  And fooling ourselves into thinking it is, as millions are doing every day, can lead to big problems.

 

He begins his article with a few extreme cases.  A woman left her husband because he developed weird conspiracy theories after he became obsessed with a chatbot, and another woman beat up on her husband after he told her the chatbot she was involved with was not "real."

 

The problem arises when we fall into the easy trap of behaving as though Siri, Alexa, Claude, and their AI friends are real people, as real as the checkout person at the grocery store or the voice who answers on the other end of the line when we call Mom.  Admittedly, quite a few of the chatbots out there would pass the Turing test, in which a judge tries to tell a computer from a human being by nothing more than their typed responses to commands and queries.

 

But to believe that an AI chatbot can think and possesses human intelligence is a mistake.  According to Beacom, it's as much a mistake to believe that as it was for the probably apocryphal New Guinea tribesman who, when first hearing a voice come out of a radio, opened it up to see the little man inside.  The tribesman was disappointed, and it didn't do the radio any good either.

 

We can't open up AI systems to see the works, as they consist of giant server farms in remote areas that are intentionally hidden from view, like the Wizard of Oz who hid behind a curtain as he tried to astound his guests with fire and smoke.  Instead, companies promote chatbots as companions for elderly people or entertainment for the lonely young.  And if you decide to establish a personal relationship with a chatbot, you are always in control.  The algorithms are designed to please the human, not the other way around, and such a relationship will have little if any of the unpredictability and friction that always happens when one human being interacts with another human being. 

 

That is because human intelligence is a wholly different kind of thing from what AI does.  Beacom makes the point that all the machines can do is a fancy kind of autocomplete.  Large language models use their huge databases of what has been said on the Internet to predict what is most likely to come after whatever the human interlocutor says or asks.  And so the only way they can sound human is by basing their responses on the replies of millions of other humans.
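
To make the autocomplete analogy concrete, here is a toy Python sketch of next-word prediction from simple word counts.  Real large language models use neural networks trained on vastly more text, so this is only an illustration of the general idea, not a description of how any actual chatbot is built.

    from collections import Counter, defaultdict

    # Toy "pattern engine": count which word follows which in a tiny corpus,
    # then always emit the most frequent continuation.  Real systems predict
    # over tokens with neural networks, not literal counts, but the goal is
    # the same: produce a statistically plausible continuation.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    next_word = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        next_word[a][b] += 1

    def complete(word, length=5):
        out = [word]
        for _ in range(length):
            if word not in next_word:
                break
            word = next_word[word].most_common(1)[0][0]   # most likely next word
            out.append(word)
        return " ".join(out)

    print(complete("the"))   # prints a statistically likely continuation of "the"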

 

But an AI system no more understands what it is saying, or thinks about your question, than a handheld calculator thinks about what the value of pi is.  Both a simple electronic calculator and the largest system of server farms and Internet connections are fundamentally the same kind of thing.  A newborn baby and its elemental responses to its environment, as simple as they are, represents a difference in kind, not degree, from any type of machine.  A baby has intelligence, as rudimentary as it is.  But a machine, no matter how complex and no matter what algorithms are running on it, is just a machine, as predictable (in principle, but no longer in practice in many cases) as a mechanical adding machine's motions.

 

Nobody in his right mind is going to treat a calculator like it was his best friend.  But the temptation is there to attribute minds and understanding to AI systems, and the systems' developers often encourage that attitude, because it leads to further engagement and, ultimately, profits.

 

There is nothing inherently wrong with profits, but Beacom says we need to begin to disengage ourselves from the delusion that AI systems have personalities or minds or understanding.  And the way he wants us to start is to quit referring to them as AI. 

 

His preferred terminology is "pattern engine," as nearly everything AI does can be subsumed under the general category of pattern repetition and modification. 

 

Beacom probably realizes his proposal is unlikely to catch on.  Terms are important, but even more important are the attitudes we bring toward things we deal with.  Beacom touches on the real spiritual problem involved in all this when he says that those who recognize the true nature of what we now call AI will "be able to see the clay feet of the new idol." 

 

Whenever a friend of mine holds his phone flat, brings it to his mouth, and asks something like "What is the capital of North Dakota?" I call it "consulting the oracle."  I mean it jokingly, and I don't think my friend is seriously worshipping his phone, but some people treat AI, or whatever we want to call it, as something worthy of the kind of devotion that only humans deserve.  That  is truly idolatry.  And as the Bible and history prove, idolatry almost always ends badly.

 

Sources:  Nathan Beacom's article "There Is No Such Thing as Artificial Intelligence" appeared in The Dispatch at https://thedispatch.com/article/artificial-intelligence-morality-honesty-pattern-engines/.

 

Monday, July 07, 2025

Has AI Made College Essays Pointless?

 

That's the question that Bard College professor of literature Hua Hsu asks in the current issue of The New Yorker.  Anyone who went to college remembers having to write essay papers on humanities subjects such as art history, literature, or philosophy.  Even before computers, the value of these essays was questionable.  Ideally, the point of writing an essay to be graded by an expert in the field was to give students practice in analyzing a body of knowledge, taking a point of view, and expressing it with clarity and even style.  The fact that few students achieved these ideals was beside the point, because, as Hsu says in his essay, "I have always had a vague sense that my students are learning something, even when it is hard to quantify."

 

The whole process of assigning and grading essay papers has recently been short-circuited by the widespread availability of large-language-model artificial intelligence (AI) systems such as ChatGPT.  Curious to see whether students at large schools used AI any differently than the ones at his exclusive small liberal-arts college, Hsu spent some time with a couple of undergraduates at New York University, which has a total graduate-plus-undergraduate enrollment of over 60,000.  They said things such as "Any type of writing in life, I use A. I."  At the end of the semester, one of them spent less than an hour using AI to write two final papers for humanities classes, and estimated doing it the hard way might have taken eight hours or more.  The grades he received on the papers were A-minus and B-plus.

 

If these students are representative of most undergraduates who are under time pressure to get the most done with the least amount of damage to their GPA in the subjects they are actually interested in, one can understand why they turn to resources such as ChatGPT to deal with courses that require a lot of writing.  Professors have taken various tacks to deal with the issue, which has mostly flummoxed university administrations.  Following an initial panic after ChatGPT was made publicly available in 2022, many universities have changed course and now run faculty-education courses that teach professors how to use ChatGPT more effectively in their research and teaching.  A philosophy professor, Barry Lam, who teaches at the University of California Riverside, deals with it by telling his class on the first day, "If you're gonna just turn in a paper that's ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach."  Presumably his class isn't spending all their time at the beach yet, but Lam is pretty sure that a lot of his students use AI in writing their papers anyway.

 

What are the ethical challenges in this situation?  It's not plagiarism pure and simple.  As one professor pointed out, there are no original texts that are being plagiarized, or if there are, the plagiarizing is being done by the AI system that scraped the whole Internet for the information it comes up with.  The closest non-technological analogy is the "paper mills" that students can pay to write a custom paper for them.  This is universally regarded as cheating, because students are passing off another person's work (the paper mill's employee) as their own. 

 

When Hsu asked his interviewees about the ethics of using AI heavily in writing graded essays, they characterized it as a victimless crime and an expedient to give them more time for other important tasks.  If I had been there, I might have pointed out something that I tell my own students at the beginning of class when I tell them not to cheat on homework. 

 

A STEM (science, technology, engineering, math) class is different than a humanities class, but just as vulnerable to the inroads of AI, as a huge amount of routine coding is now reportedly done with clever prompts to AI tools rather than writing the code directly.  What I tell them is that if they evade doing the homework either by using AI to do it all or paying a homework service, they are cheating themselves.  The point of doing the homework isn't to get a good grade; it is to give your mind practice in solving problems that (a) you will face without the help of anything or anybody when I give a paper exam in class, and (b) you may face in real life.  Yes, in real life you will be able to use AI assistance.  But how do you know it's not hallucinating?  Hsu cites experts who say that the hallucination problem—AI saying things that aren't true—has not gone away, and may actually be getting worse, for reasons that are poorly understood.  At some level, important work, whether it's humanities research papers to be published or bridges to be built, must pass through the evaluative minds of human beings before it goes straight to the public. 

 

That raises the question of what work qualifies as important.  It's obvious from reading the newspapers I read (a local paper printed on actual dead trees, and an electronic version of the Austin American-Statesman) that "important" doesn't seem to cover a lot of what passes for news in these documents.  Just this morning, I read an article about parking fees, and the headline and "see page X" at the end both referred to an article on women's health, not the parking-fees article.  And whenever I read the Austin paper on my tablet, whatever system they use to convert the actual typeset words into more readable type when you select the article oftenmakesthewordscomeoutlikethis.  I have written the managing editor about this egregious problem, but to no avail.

 

The most serious problem I see in the way AI has taken over college essay writing is not the ethics of the situation per se, although that is bad enough.  It is the general lowering of standards on the part of both originators and consumers of information.  In Culture and Anarchy (1869) (which I have not read, by the way), British essayist Matthew Arnold argued that a vital aspect of education was to read "the best which has been thought and said" as a way of combating the degradation of culture that industrialization was bringing on.  But if nobody except machines thinks or says those gems of culture after a certain point, we all might as well go to the beach.  I actually went to the beach a few weeks ago, and it was fun, but I wouldn't want to live there.

 

Sources:  Hua Hsu's "The End of the Essay" appeared on pp. 21-27 of the July 7 & 14, 2025 issue of The New Yorker.  I referred to the website https://newlearningonline.com/new-learning/chapter-7/committed-knowledge-the-modern-past/matthew-arnold-on-learning-the-best-which-has-been-thought-and-said for the Arnold quote. 

Monday, June 30, 2025

Supreme Court Validates Texas Anti-Porn Law

 

On Friday, June 27, the U. S. Supreme Court issued its decision in the case of Free Speech Coalition, Inc. v. Paxton.  The Free Speech Coalition is an organization representing the interests of the online pornography industry, and Kenneth Paxton is the controversial attorney general of Texas, whose duty it is to enforce a 2023 law which "requires pornography websites to verify the age of users before they can access explicit material," according to a report by National Review.  The Court upheld the Texas law, finding that the law was a constitutional exercise of a state's responsibility to prevent children from "accessing sexually explicit content."

 

This ruling has implications beyond Texas, as 22 other states have adopted similar laws, and the decision of the court means that those states are probably safe from federal lawsuits as well.

 

This is a matter of interest to engineering ethicists because, whether we like it or not, pornography has played a large role in electronic media at least since the development of consumer video-cassette recorders in the 1970s.  As each new medium has appeared, the pornographers have been among its earliest adopters.  Around 1980, as I was considering a career change in the electronic communications industry, one of the jobs I was offered was as engineer for a satellite cable-TV company.  One of the factors that made me turn it down was that a good bit of their programming back then was of the Playboy Channel ilk.  I ended up working for a supplier of cable TV equipment, which wasn't much better, perhaps, but that job lasted only a couple of years before I went back to school and remained in academia thereafter.

 

The idea behind the Texas law is that children exposed to pornography suffer objective harm.  The American College of Pediatricians has a statement on their website attesting to the problems caused by pornography to children:  depression, anxiety, violent behavior, and "a distorted view of relationships between men and women."  And it's not a rare problem.  The ubiquity of mobile phones means that even children who do not have their own phone are exposed to porn by their peers, and so even parents who do not allow their children to have a mobile phone are currently pretty defenseless against the onslaught of online pornography. 

 

Requiring porn websites to verify a user's age is a small but necessary step in reducing the exposure of young people to the social pathology of pornography.  In an article in the online journal The Dispatch, Charles Fain Lehman proposes that we dust off obscenity laws to prosecute pornographers regardless of the age of their clientele.  The prevalence of porn in the emotional lives of young people has ironically led to a dearth of sexual activity in Gen Z, who have lived with its presence all their lives.  In a review of several books that ask why people in their late teens and 20s today are having less sex than previous generations, New Yorker writer Jia Tolentino cites the statistic that nearly half of adults in this age category regard porn as harmful, but only 37% of older millennials do.  And fifteen percent of young Americans have encountered porn by the age of 10.

 

There are plenty of science-based reasons to keep children and young teenagers from viewing pornography.  For those who believe in God, I would like to add a few more.  In the gospel of Matthew, Jesus tells his disciples that they must "become like children" to enter the kingdom of Heaven.  Then he warns that "whoever causes one of these little ones who believe in me to sin [the Greek word means "to stumble"], it would be better for him to have a great millstone fastened round his neck and to be drowned in the depths of the sea."  (Matt. 18:6).  People who propagate pornography that ten-year-olds can watch on their phones seem to fill the bill for those who cause children to stumble. 

 

The innocence of children can be overrated, as anyone who has dealt with a furious two-year-old can attest.  But it is really a kind of mental virginity that children have:  the absence of cruel and exploitative sexual images in their minds helps keep them away from certain kinds of sin, even before they could understand what was involved.  Until a few decades ago, most well-regulated societies protected children from the viewing, reading, or hearing of pornography, and those who wished to access it had to go to considerable efforts to seek out a bookstore or porn theater.

 

But that is no longer the case, and as Carter Sherman, the author of a book quoted in the New Yorker, says, the internet is a "mass social experiment with no antecedent and whose results we are just now beginning to see."  Among those results are a debauching of the ways men and women interact sexually, to the extent that one recent college-campus survey showed that nearly two-thirds of women said they'd been choked during sex.

 

This is not the appropriate location to explore the ideals of how human sexuality should be expressed.  But suffice it to say that the competitive and addictive nature of online pornography invariably degrades its users toward a model of sexual attitudes that are selfish, exploitative, and unlikely to lead to positive outcomes. 

 

The victory of Texas's age-verification law at the Supreme Court is a step in the right direction toward the regulation of the porn industry, and gives hope to those who would like to see further legal challenges to its very existence.  Perhaps we are at the early stages of a trend comparable to what happened with the tobacco industry, which denied the objective health hazards of smoking until the evidence became overwhelming.  It's not too early for pornographers to start looking for millstones as a better alternative to their current occupation. 

 

Sources:  The article "Supreme Court Upholds Texas Age-Verification Law" appeared at https://www.nationalreview.com/news/supreme-court-upholds-texas-age-verification-porn-law/, and the article "It's Time to Prosecute Pornhub" appeared at https://thedispatch.com/article/pornhub-supreme-court-violence-obscenity-rape/.  I also referred to the Wikipedia article "Free Speech Coalition, Inc. v. Paxton" and the New Yorker article "Sex Bomb" by Jia Tolentino on pp. 58-61 of the June 30, 2025 issue. 


Monday, June 23, 2025

Should Chatbots Replace Government-Worker Phone Banks?

 

The recent slashes in federal-government staffing and funding have drawn the attention of the Distributed AI Research Institute (DAIR), and two of the Institute's members warn of impending disaster if the Department of Government Efficiency (DOGE) carries through on its stated intention to replace hordes of government workers with AI chatbots.  In the July/August issue of Scientific American, DAIR founder Timnit Gebru and staffer Asmelash Teka Hadgu decry the current practice of applying general-purpose large-language-model AI to the specific task of speech recognition, which would be necessary if one wanted to replace with machines the human-staffed phone banks at the other end of the Social Security and IRS telephone numbers.

 

The DAIR people give vivid examples of the kinds of things that can go wrong.  They focused on Whisper, OpenAI's speech-recognition model, and on the results of studies by four universities of how well Whisper converted audio files of a person talking into transcribed text.
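
For the technically curious, running a recording through the open-source version of Whisper takes only a few lines of Python.  This is a minimal sketch assuming the openai-whisper package and a hypothetical local audio file; it is not the exact setup the university researchers used.

    # Minimal transcription sketch (assumes "pip install openai-whisper" and
    # ffmpeg installed on the system; the audio filename is hypothetical).
    import whisper

    model = whisper.load_model("base")              # a small general-purpose checkpoint
    result = model.transcribe("caller_audio.mp3")   # hypothetical recording of a caller
    print(result["text"])                           # the transcript, hallucinations and all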

 

The process of machine transcription has come a long way since the very early days of computers in the 1970s, when I heard Bell Labs' former head of research John R. Pierce say that he doubted speech recognition would ever be computerized.  But anyone who phones a large organization today is likely to deal with some form of automated speech recognition, as well as anyone who has a Siri or other voice-controlled device in the home.  Just last week I was on vacation, and the TV in the room had to be controlled with voice commands.  Simple operations like asking for music or a TV channel are fairly well performed by these systems, but that's not what the DAIR people are worried about.

 

With more complex language, Whisper was shown not only to misunderstand things, but also to make up material that was not in the original audio file at all.  For example, the phrase "two other girls and one lady" in the audio file became, after Whisper transcribed it, "two other girls and one lady, um, which were Black."

 

This is an example of what is charitably called "hallucinating" by AI proponents.  If a human being did something like this, we'd just call it lying, but to lie requires a will and an intellect that chooses a lie rather than the truth.  Not many AI experts want to attribute will and intellect to AI systems, so they default to calling untruths hallucinations.

 

This problem arises, the authors claim, when companies try to develop AI systems that can do everything and train them on huge unedited swaths of the Internet, rather than tailoring the design and training to a specific task, which of course costs more in terms of human input and guidance.  They paint a picture of a dystopian future in which somebody who calls Social Security can't ever talk to a human being, but just gets shunted around among chatbots which misinterpret, misinform, and simply lie about what the speaker said.

 

Both government-staffed interfaces with the public and speech-recognition systems vary greatly in quality.  Most people have encountered at least one or two government workers who are memorable for their surliness and aggressively unhelpful demeanor.  But there are also many such people who go out of their way to pay personal attention to the needs of their clients, and these are the kinds of employees we would miss if they got replaced by chatbots.

 

Elon Musk's brief tenure as head of DOGE is profiled in the June 23 issue of The New Yorker magazine, and the picture that emerges is that of a techie dude roaming around in organizations he and his tech bros didn't understand, causing havoc and basically throwing monkey wrenches into finely-adjusted clock mechanisms.  The only thing that is likely to happen in such cases is that the clock will stop working.  Improvements are not in the picture, not even cost savings in many cases.  As an IRS staffer pointed out, many IRS employees end up bringing in many times their salary's worth of added tax revenue by catching tax evaders.  Firing those people may look like an immediate short-term economy, but in the long term it will cost billions.

 

Now that Musk has left DOGE, the threat of massive-scale replacement of federal customer-service people by chatbots is less than it was.  But we would be remiss in ignoring DAIR's warning that AI systems can be misused or abused by large organizations in a mistaken attempt to save money.

 

In the private sector, there are limits to what harm can be done.  If a business depends on answering phone calls accurately and helpfully, and they install a chatbot that offends every caller, pretty soon that business will not have any more business and will go out of business.  But in the U. S. there's only one Social Security Administration and one Internal Revenue Service, and competition isn't part of that picture. 

 

The Trump administration does seem to want to do some revolutionary things to the way government operates.  But at some level, they are also aware that if they do anything that adversely affects millions of citizens, they will be blamed for it. 

 

So I'm not too concerned that all the local Social Security offices scattered around the country will be shuttered, and one's only alternative will be to call a chatbot which hallucinates by concluding the caller is dead and cuts off his Social Security check.  Along with almost every other politician in the country, Trump recognizes Social Security is a third rail that he touches at his peril. 

 

But that still leaves plenty of room for future abuse of AI by trying to make it do things that people really still do better, and maybe even more economically than computers.  While the immediate threat may have passed from the scene with Musk's departure from DOGE, the tendency is still there.  Let's hope that sensible mid-level managers will prevail against the lightning strikes of DOGE and its ilk, and the needed work of government will go on.

 

Sources:  The article "A Chatbot Dystopian Nightmare" by Asmelash Teka Hadgu and Timnit Gebru appeared in the July/August 2025 Scientific American on pp. 89-90.  I also referred to the article "Move Fast and Break Things" by Benjamin Wallace-Wells on pp. 24-35 of the June 23, 2025 issue of The New Yorker.

Monday, June 16, 2025

Why Did Air India Flight 171 Crash?

 

That is the question that investigators will be asking in the days, weeks, and months to come.  On Thursday, June 12, a Boeing 787 Dreamliner took off from Ahmedabad in northwest India, bound for London.  On board were 242 passengers and crew.  It was a hot, clear day.  Videos taken from the ground show that after rolling down the runway, the plane "rotated" into the air (orienting flight surfaces to make the plane take off), and assumed a nose-up attitude.  But after rising for about fifteen seconds, it began to sink back toward the ground and plowed into a building housing students of a medical college.  All but one person on the plane were killed, and at least 38 people on the ground died as well.

 

This is the first fatal crash of a 787 since it was introduced in 2011.  The data recorder was recovered over the weekend, so experts have abundant information to comb through in determining what went wrong.  The formal investigation will take many weeks, but understandably, friends and relatives of the victims of the crash would like answers earlier than that.

 

Air India, the plane's operator, became a private entity only in 2022 after spending 69 years under the control of the Indian government.  An AP news report mentions that fatal crashes killing hundreds of people involved Air India equipment in 1978 and 2010.  The quality of training is always a question in accidents of this kind, and that issue will be addressed along with many others.

 

An article in the Seattle Times describes the opinions of numerous aviation experts as to what might have led to a plane crashing shortly after takeoff in this way.  While they all emphasized that everything they say is speculative at this point, they had some specific suggestions as well.

 

One noted that the appearance of dust in a video of the takeoff just before the plane becomes airborne might indicate that the pilot used up the entire runway in taking off.  This is not the usual procedure at major airports, and might have indicated issues with available engine power.

 

Several experts mentioned that the flaps may not have been in the correct position for takeoff.  Flaps are parts of the wing that can be extended downward during takeoff and landing to provide extra lift, and are routinely extended for the first few minutes of any flight.  The problem with this theory, as one expert mentioned, is that modern aircraft have alarms to alert a negligent pilot that the flaps haven't been extended, and unless there was a problem with hydraulic pressure that overwhelmed other alarms, the pilots would have noticed the issue immediately.

 

Another possibility involves an attempt to take off too soon, before the plane had enough airspeed to leave the ground safely.  Rotation, as the actions to make the plane leave the ground are called, cannot come too early, or else the plane is likely to either stall or lose altitude after an initial rise.  Stalling is an aerodynamic effect that happens when an airfoil has an excessive angle of attack to the incoming air, which no longer flows in a controlled way over the upper surface but separates from it.  The result is that lift decreases dramatically.  An airplane entering a sudden stall can appear to pitch upward and then simply drop out of the air.  While such a stall was not obvious in the videos of the flight obtained so far, something obviously caused a lack of sufficient lift that led to the crash.
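
To give a rough idea of why airspeed matters so much at rotation, here is a toy Python calculation based on the standard lift equation.  The wing area, lift coefficient, and speeds are ballpark figures assumed for illustration, not data from this flight.

    # Toy use of the lift equation L = 0.5 * rho * v^2 * S * CL.
    # Lift grows with the square of airspeed, so rotating even modestly early
    # costs a lot of lift; and past the critical angle of attack (a stall),
    # the lift coefficient CL itself collapses, so pulling the nose up harder
    # only makes things worse.  All numbers below are assumptions.
    def lift_newtons(v_ms, cl, wing_area_m2=377.0, rho=1.2):
        return 0.5 * rho * v_ms**2 * wing_area_m2 * cl

    cl_takeoff = 1.9      # assumed lift coefficient with takeoff flaps set
    v_proper = 80.0       # assumed proper rotation speed, meters per second
    v_early = 72.0        # ten percent slower

    ratio = lift_newtons(v_early, cl_takeoff) / lift_newtons(v_proper, cl_takeoff)
    print(f"Lift at the lower speed is {ratio:.0%} of the lift at the proper speed")   # about 81%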

 

Other, more remote possibilities include engine problems that would limit the available thrust to less than what is needed for a safe takeoff.  It is possible that some systemic control issue limited the available thrust, but there was no obvious mechanical failure of the engines before the crash, so this possibility is not a leading one.

 

In sum, initial signs are that some type of pilot error may have at least contributed to the crash:  too-early rotation, misapplication of flaps, or other more subtle mistakes.  A wide-body aircraft cannot be stopped on a dime, and once it has begun a rollout to takeoff there are not a lot of options left to the pilot should a sudden emergency occur.  A decision to abort takeoff beyond a certain point will result in overrunning the runway.  And depending on how much extra space there is at the end of the runway, an overrun can easily lead to a crash, as happened when Jeju Air Flight 2216, arriving from Bangkok, overran the runway at Muan, South Korea, and crashed into the concrete foundation of some antennas in December 2024.

 

The alternative of taking off and trying to stay in the air may not succeed either, unless enough thrust can be applied to gain altitude.  Although no expert mentioned the following possibility and there may be good reasons for that, perhaps there was an issue with brakes not being fully released on the landing-gear wheels.  This would slow down the plane considerably, and the unusual nature of the problem might not give the pilots time enough to figure out what was happening.

 

Modern jetliners are exceedingly complicated machines, and the more parts there are in a system, the more combinations of things can happen to cause difficulties.  The fact that there have so far been no calls to ground the entire fleet of 787 Dreamliners indicates that the consensus of experts is that a fundamental issue with the plane itself is probably not at fault. 

 

Once the flight-recorder data has been studied, we will know a great deal more about things such as flap and engine settings, precise timing of control actions, and other matters that are now a subject of speculation.  It is entirely possible that the accident happened due to a combination of minor mechanical problems and poor training or execution by the flight crew.  Many major tragedies in technology occur because a number of problems, each of which could be overcome by itself, combine to cause a system failure.

 

Our sympathies are with those who lost loved ones in the air or on the ground.  And I hope that whatever lessons we learn from this catastrophe will improve training and design so that these are the last fatalities involving a 787 for a long time to come.

 

Sources:  I referred to AP articles at https://apnews.com/article/air-india-survivor-crash-boeing-e88b0ba404100049ee730d5714de4c67 and https://apnews.com/article/india-plane-crash-what-to-know-4e99be1a0ed106d2f57b92f4cc398a6c, a Seattle Times article at https://www.seattletimes.com/business/boeing-aerospace/what-will-investigators-be-looking-for-in-air-india-crash-data/, and the Wikipedia articles on Air India and Air India Flight 171.

Monday, June 09, 2025

Science Vs. Luck: DNA Sequencing of Embryos

 

"Science Vs. Luck" was the title of a sketch by Mark Twain about a lawyer who got his clients off from a charge of gambling by recruiting professional gamblers, who convinced the jury that the game was more science than luck—by playing it with the jury and cleaning them out! Of course, there was more going on than met the eye, as professional gamblers back then had some tricks up their sleeves that the innocents on the jury wouldn't have caught.  So while the verdict of science looked legitimate to the innocents, there was more going on than they suspected, and the spirit of the law against gambling was violated even though the letter seemed to be obeyed.

 

That sketch came to mind when I read an article by Abigail Anthony, who wrote on the National Review website about a service offered by the New York City firm Nucleus Genomics:  whole-genome sequencing of in-vitro-fertilized embryos.  For only $5,999, Nucleus will take the genetic data provided by the IVF company of your choice and give you information on over 900 different possible conditions and characteristics the prospective baby might have, ranging from Alzheimer's to the likelihood that the child will be left-handed. 

 

There are other companies offering services similar to this, so I'm not calling out Nucleus in particular.  What is peculiarly horrifying about this sales pitch is the implication that having a baby is no different in principle than buying a car.  If you go in a car dealership and order a new car, you get to choose the model, the color, a range of optional features, and if you don't like that brand you can go to a different dealer and get even more choices. 

 

The difference between choosing a car and choosing a baby is this:  the cars you don't pick will be sold to somebody else.  The babies you don't pick will die. 

 

We are far down the road foreseen by C. S. Lewis in his prescient 1943 essay "The Abolition of Man."  Lewis realized that what was conventionally called man's conquest of nature was really the exertion of power by some men over other men.  And the selection of IVF embryos by means of sophisticated genomic tests such as the ones offered by Nucleus are a fine example of such power.  In the midst of World War II when the fate of Western civilization seemed to hang in the balance, Lewis wrote, " . . .  if any one age attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power."

 

Eugenics was a highly popular and respectable subject from the late 19th century up to right after World War II, when its association with the horrors of the Holocaust committed by the Nazi regime gave it a much-deserved bad name.  The methods used by eugenicists back then were crude ones:  sterilization of the "unfit," where the people deciding who was unfit always had more power than the unfit ones; encouraging the better classes to have children and the undesirable classes (such as negroes and other minorities) to have fewer ones; providing birth control and abortion services especially to those undesirable classes (a policy which is honored by Planned Parenthood to this day); and in the case of Hitler's Germany, the wholesale extermination of whoever was deemed by his regime to be undesirable:  Jews, Romani, homosexuals, and so on. 

 

But just as abortion hides behind a clean, hygienic medical facade to mask the fact that it is the intentional killing of a baby, the videos on Nucleus's website conceal the fact that in order to get that ideal baby with a minimum of whatever the parents consider to be undesirable traits, an untold number of fertilized eggs—all exactly the same kind of human being that you were when you were that age—have to be "sacrificed" on the altar of perfection. 

 

If technology hands us a power that seems attractive, that enables us to avoid pain or suffering even on the part of another, does that mean we should always avail ourselves of it?  The answer depends on what actions are involved in using that power. 

 

If the Nucleus test enabled the prospective parents to avert potential harms and diseases in the embryo analyzed without killing it, there would not be any problem.  But we don't know how to do that yet, and by the very nature of reproduction we may never be able to.  The choice being offered is made by producing multiple embryos, and disposing of the ones that don't come up to snuff. 

 

Now, at $6,000 a pop, it's not likely that anyone with less spare change than Elon Musk is going to keep trying until they get exactly what they want.  But the clear implication of advertising such genomic testing as a choice is that you don't have to take what Nature (or God) gives you.  If you don't like it, you can put it away and try something else.

 

And that's really the issue:  whether we acknowledge our finiteness before God and take the throw of the genetic dice that comes with having a child, the way it's been done since the beginning; or cheat by taking extra turns and drawing cards until we get what we want. 

 

The range of human capacity is so large and varied that even the 900 traits analyzed by Nucleus do not even scratch the surface of what a given baby may become.  This lesson is brought home in a story attributed to an author named J. John.  In a lecture on medical ethics, the professor confronts his students with a case study.  "The father of the family had syphilis and the mother tuberculosis.  They already had four children.  The first child is blind, the second died, the third is deaf and dumb, and the fourth has tuberculosis.  Now the mother is pregnant with her fifth child.  She is willing to have an abortion, so should she?"

 

After the medical students vote overwhelmingly in favor of the abortion, the professor says, "Congratulations, you have just murdered the famous composer Ludwig van Beethoven!"

 

Sources:  Abigail Anthony's article "Mail-order Eugenics" appeared on the National Review website on June 5, 2025 at https://www.nationalreview.com/corner/mail-order-eugenics/.  My source for the Beethoven anecdote is https://bothlivesmatter.org/blog/both-lives-mattered.