Monday, July 21, 2025

Can Teslas Drive Themselves? Judge and Jury To Decide

 

On a night in April of 2019, Naibel Benavides and her boyfriend Dillon Angulo had parked the Tahoe SUV they were in near the T-intersection of Card Sound Road and County Road 905 in the upper Florida Keys.  George McGee was heading toward the intersection "driving" a Model S Tesla.  I put "driving" in quotes, because he had engaged the vehicle's misleadingly named "Autopilot" driver-assistance mode.  The car was going about 70 MPH when McGee dropped his cellphone and bent down to look for it.

 

According to dashcam evidence, the Tesla ran a stop sign, ignoring the blinking red light above the intersection, crashed through reflective warning signs, and slammed into the SUV, spinning it so violently that it struck Naibel and threw her 75 feet into the bushes, where her lifeless body was found by first responders shortly thereafter.  Dillon survived, but suffered a brain injury from which he is still recovering.

 

Understandably, the families of the victims have sued Tesla, and in an unusual move, Tesla is refusing to settle and is letting the case go to trial. 

 

The firm's position is that McGee was solely at fault for not following instructions on how to operate his car safely.  The driver should be prepared to take over manual control at all times, according to Tesla, and McGee clearly did not do that. 

 

The judge in the federal case, Beth Bloom, has dismissed claims of defective manufacturing and negligent misrepresentation, but says she is open to arguments that Tesla "acted in reckless disregard of human life for the sake of developing their product and maximizing profit."

 

Regardless of the legal details, the outlines of what happened are fairly clear.  Tesla claims that McGee was pressing the accelerator, "speeding and overriding the car's system at the time of the crash."  While I am not familiar with exactly how one overrides the autopilot in a Tesla, if it is like the cruise control on many cars, the driver's manual interventions take priority over whatever the autopilot is doing.  If you press the brake, the car's going to stop, and if you press the accelerator, it's going to speed up, regardless of what the computer thinks should be happening. 
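
To make that priority idea concrete, here is a minimal conceptual sketch in Python of how such an arbitration rule might look.  It is emphatically not Tesla's actual control code, and the numbers in it are made up; it only illustrates the principle that manual pedal input wins over whatever the automation wants.

```python
# Conceptual sketch only -- not Tesla's actual control logic.
# It illustrates the usual arbitration rule in driver-assistance systems:
# any manual pedal input takes priority over the automation's command.

def arbitrate_speed_command(autopilot_cmd_mph, brake_pedal, accel_pedal_pct, current_mph):
    """Return the speed the car will actually try to hold."""
    if brake_pedal:
        return 0.0                      # braking always wins: slow toward a stop
    if accel_pedal_pct > 0:
        # Driver is pressing the accelerator: command more speed than the
        # automation asked for, roughly in proportion to pedal travel (made-up gain).
        return max(autopilot_cmd_mph, current_mph + accel_pedal_pct * 0.3)
    return autopilot_cmd_mph            # otherwise defer to the automation

# Example: the automation wants 55 mph, but the driver is holding the pedal down.
print(arbitrate_speed_command(55, brake_pedal=False, accel_pedal_pct=100, current_mph=62))
# -> 92.0: the car keeps accelerating past what the automation asked for
```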

 

The Society of Automotive Engineers (SAE) has promulgated its six levels of vehicle automation, from Level 0 (plain old human-driven cars without even cruise control) up to Level 5 (complete automation in which the driver can be asleep or absent and the car will still operate safely).  The 2019 Tesla involved in the Florida crash was equipped with what SAE would classify as a Level 2 system, which can steer and manage speed in some conditions and locations, but requires the driver to supervise and be prepared to take over at any time.
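
For reference, here is a short script summarizing the six levels.  The wording is my own rough paraphrase, not SAE's official J3016 definitions.

```python
# Rough paraphrase of the SAE J3016 automation levels (my wording, not SAE's official text).
SAE_LEVELS = {
    0: "No automation: the human does all the driving (not even cruise control).",
    1: "Driver assistance: steering OR speed is assisted, but not both.",
    2: "Partial automation: steering AND speed assisted; driver must supervise constantly.",
    3: "Conditional automation: car drives itself in limited conditions; driver takes over on request.",
    4: "High automation: no driver needed within a defined domain (e.g. a geofenced city area).",
    5: "Full automation: the car drives anywhere a human could; a driver is optional.",
}

for level, meaning in SAE_LEVELS.items():
    print(f"Level {level}: {meaning}")
```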

 

McGee appears to have done at least two things wrong.  First, he was using Autopilot on a rural road at night, although the system is designed mainly for limited-access freeways with clear lane markings.  Second, for whatever reason, when he dropped his phone he hit the accelerator at the wrong time.  This could conceivably have happened even if he had been driving a Level 0 car.  But I think it is much less likely, and here's why.

 

Tesla drivers obviously accumulate experience with their "self-driving" vehicles, and just as drivers of non-self-driving cars learn how hard to brake and how far to turn the steering wheel to go where they want, Tesla drivers learn what they can get away with when the car is in self-driving mode.  It appears that McGee had set the car in that mode, and while I don't know what was going through his mind, it is likely that he had been able to do things such as look at his cellphone in the past while the car was driving itself, and nothing bad had happened.  That may be what he was doing just before he dropped the phone.

 

At 70 MPH, a car is traveling over 100 feet per second.  In a five-second pause to look for a phone, the car would have traveled as much as a tenth of a mile.  If McGee had been consciously driving a non-autonomous car the whole time, he probably would have seen the blinking red light ahead and mentally prepared to slow down.  But the way things happened, his eyes might have been on the phone the whole time, even after it dropped, and when he (perhaps accidentally) pressed the accelerator, the rest followed naturally, and tragically for Naibel and Dillon.
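
For readers who want to check the arithmetic, here is the unit conversion spelled out.  The five-second look-away is an assumption chosen only for illustration.

```python
# Quick check of the distances involved (simple unit conversion, nothing more).
MPH_TO_FPS = 5280 / 3600          # feet per second in one mile per hour

speed_mph = 70
speed_fps = speed_mph * MPH_TO_FPS
lookaway_seconds = 5              # assumed length of the glance away from the road
distance_ft = speed_fps * lookaway_seconds

print(f"{speed_mph} mph = {speed_fps:.0f} ft/s")
print(f"In {lookaway_seconds} s the car covers {distance_ft:.0f} ft "
      f"({distance_ft/5280:.2f} mile)")
# 70 mph is about 103 ft/s; in 5 s the car covers roughly 513 ft, about a tenth of a mile.
```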

 

So while Tesla may be technically correct that McGee's actions were the direct cause of the crash, there is plenty of room to argue that the way Tesla presents its autonomous-driving system encourages drivers to over-rely on it.  Tesla says it has upgraded the system since 2019, and while that may be true, the issue at trial is whether the company cut corners and encouraged ways of driving in 2019 that could reasonably have led to this type of accident.

 

In an article unrelated to automotive issues but focused on the question of AI in general, I recently read that the self-driving idea has "plateaued."  A decade or so ago, the news was full of forecasts that we would all be able to read our phones, play checkers with our commuting partners, or catch an extra forty winks on the way to work as the robot drove us through all kinds of traffic and roads.  That vision obviously has not come to pass, and while there are a few fully autonomous driverless vehicles plying the streets of Austin right now—I've seen them—they are "geofenced" to traverse only certain areas, and equipped with many more sensors and other devices than a consumer could afford to purchase for a private vehicle. 

 

So we may find that unless you live in certain densely populated regions of large cities, the dream of riding in a robot-driven car will remain just that:  a dream.  But when Tesla drivers presume that the dream has become reality and withdraw their attention from their surroundings, the dream can quickly become a nightmare.

 

Sources:  I referred to an Associated Press article on the trial beginning in Miami at https://apnews.com/article/musk-tesla-evidence-florida-benavides-autopilot-3ffab7fb53e93feb4ecfd3023f2ea21f.  I also referred to news reports on the accident and trial at https://www.nbcmiami.com/news/local/trial-against-tesla-begins-in-deadly-2019-crash-in-key-largo-involving-autopilot-feature/3657076/ and

https://www.nbcmiami.com/news/local/man-wants-answers-after-deadly-crash/124944/. 

 

Monday, July 14, 2025

Are AI Chatbots Really Just Pattern Engines?

 

That's all they are, according to Nathan Beacom, who recently wrote an article in the online journal The Dispatch titled "There Is No Such Thing as Artificial Intelligence." 

 

His point is an important one, as the phrase "artificial intelligence" and its abbreviation "AI" have enjoyed a boom in usage since 2000, according to Google's Ngrams analysis of words appearing in published books.  The system plots frequency of occurrence, as a percentage, versus time.  The term "AI" peaked around 1965 and again around 1987, which correspond to the first and second comings of AI, both of which fizzled out as the digital technology of the time and the algorithms used were inadequate to realize most of the hopes of developers.

 

But starting around 2014, usage of "AI" soared, and by 2022 (the last year surveyed by the Ngram tool) it stood at a level higher than the highest peak ever enjoyed by a common but very different term, "IRS."  So perhaps people now think the only inevitable things are death and AI, not death and taxes.

 

Kidding aside, Beacom says that the language we use about the technology generally referred to as AI has a profound influence on how we view it.  And his point is that a basic philosophical fallacy is embedded in the term "artificial intelligence."  Namely, what AI does is not intelligent in any meaningful sense of the term.  And fooling ourselves into thinking it is, as millions are doing every day, can lead to big problems.

 

He begins his article with a few extreme cases.  A woman left her husband because he developed weird conspiracy theories after he became obsessed with a chatbot, and another woman beat up on her husband after he told her the chatbot she was involved with was not "real."

 

The problem arises when we fall into the easy trap of behaving as though Siri, Alexa, Claude, and their AI friends are real people, as real as the checkout person at the grocery store or the voice who answers on the other end of the line when we call Mom.  Admittedly, quite a few of the chatbots out there would pass the Turing test, in which a human judge tries to tell from typed responses to commands and queries whether the party on the other end is a human being or a computer. 

 

But to believe that an AI chatbot can think and possesses human intelligence is a mistake.  According to Beacom, it's as much a mistake to believe that as it was for the probably apocryphal New Guinea tribesman who, when first hearing a voice come out of a radio, opened it up to see the little man inside.  The tribesman was disappointed, and it didn't do the radio any good either.

 

We can't open up AI systems to see the works, as they consist of giant server farms in remote areas that are intentionally hidden from view, like the Wizard of Oz who hid behind a curtain as he tried to astound his guests with fire and smoke.  Instead, companies promote chatbots as companions for elderly people or entertainment for the lonely young.  And if you decide to establish a personal relationship with a chatbot, you are always in control.  The algorithms are designed to please the human, not the other way around, and such a relationship will have little if any of the unpredictability and friction that always happens when one human being interacts with another human being. 

 

That is because human intelligence is an entirely different kind of thing from what AI does.  Beacom makes the point that all the machines can do is a fancy kind of autocomplete function.  Large-language models use their huge databases of what has been said on the Internet to predict what kind of thing is most likely to come after whatever the human interlocutor says or asks.  And so the only way they can sound human is by basing their responses on the replies of millions of other humans. 
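
For the curious, here is a toy version of that autocomplete idea: a few lines of Python that "learn" which word tends to follow which in a tiny sample text and then generate a reply one predicted word at a time.  Real large-language models are neural networks trained on billions of words, but the basic job, predicting a likely next token, is the same.

```python
# Toy illustration of the "fancy autocomplete" idea: a bigram model that predicts
# the next word purely from how often words followed each other in its training text.
from collections import Counter, defaultdict
import random

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
).split()

# Count which word follows which
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Pick a next word in proportion to how often it followed `word` in training."""
    counts = following[word]
    return random.choices(list(counts), weights=counts.values())[0]

# Generate a short "reply" one predicted word at a time
word = "the"
print(word, end="")
for _ in range(8):
    word = predict_next(word)
    print(" " + word, end="")
print()
```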

 

But an AI system no more understands what it is saying, or thinks about your question, than a handheld calculator thinks about what the value of pi is.  Both a simple electronic calculator and the largest system of server farms and Internet connections are fundamentally the same kind of thing.  A newborn baby, with its elemental responses to its environment, simple as they are, represents a difference in kind, not degree, from any type of machine.  A baby has intelligence, however rudimentary.  But a machine, no matter how complex and no matter what algorithms are running on it, is just a machine, as predictable (in principle, but no longer in practice in many cases) as a mechanical adding machine's motions.

 

Nobody in his right mind is going to treat a calculator like it was his best friend.  But the temptation is there to attribute minds and understanding to AI systems, and the systems' developers often encourage that attitude, because it leads to further engagement and, ultimately, profits.

 

There is nothing inherently wrong with profits, but Beacom says we need to begin to disengage ourselves from the delusion that AI systems have personalities or minds or understanding.  And the way he wants us to start is to quit referring to them as AI. 

 

His preferred terminology is "pattern engine," as nearly everything AI does can be subsumed under the general category of pattern repetition and modification. 

 

Beacom probably realizes his proposal is unlikely to catch on.  Terms are important, but even more important are the attitudes we bring toward things we deal with.  Beacom touches on the real spiritual problem involved in all this when he says that those who recognize the true nature of what we now call AI will "be able to see the clay feet of the new idol." 

 

Whenever a friend of mine holds his phone flat, brings it to his mouth, and asks something like "What is the capital of North Dakota?" I call it "consulting the oracle."  I mean it jokingly, and I don't think my friend is seriously worshipping his phone, but some people treat AI, or whatever we want to call it, as something worthy of the kind of devotion that only humans deserve.  That  is truly idolatry.  And as the Bible and history prove, idolatry almost always ends badly.

 

Sources:  Nathan Beacom's article "There Is No Such Thing as Artificial Intelligence" appeared in The Dispatch at https://thedispatch.com/article/artificial-intelligence-morality-honesty-pattern-engines/.

 

Monday, July 07, 2025

Has AI Made College Essays Pointless?

 

That's the question that Bard College professor of literature Hua Hsu asks in the current issue of The New Yorker.  Anyone who went to college remembers having to write essay papers on humanities subjects such as art history, literature, or philosophy.  Even before computers, the value of these essays was questionable.  Ideally, the point of writing an essay to be graded by an expert in the field was to give students practice in analyzing a body of knowledge, taking a point of view, and expressing it with clarity and even style.  The fact that few students achieved these ideals was beside the point, because, as Hsu says in his essay, "I have always had a vague sense that my students are learning something, even when it is hard to quantify."

 

The whole process of assigning and grading essay papers has recently been short-circuited by the widespread availability of large-language-model artificial intelligence (AI) systems such as ChatGPT.  Curious to see whether students at large schools used AI any differently than the ones at his exclusive small liberal-arts college, Hsu spent some time with a couple of undergraduates at New York University, which has a total graduate-plus-undergraduate enrollment of over 60,000.  They said things such as "Any type of writing in life, I use A. I."  At the end of the semester, one of them spent less than an hour using AI to write two final papers for humanities classes, and estimated doing it the hard way might have taken eight hours or more.  The grades he received on the papers were A-minus and B-plus.

 

If these students are representative of most undergraduates who are under time pressure to get the most done with the least amount of damage to their GPA in the subjects they are actually interested in, one can understand why they turn to resources such as ChatGPT to deal with courses that require a lot of writing.  Professors have taken various tacks to deal with the issue, which has mostly flummoxed university administrations.  Following an initial panic after ChatGPT was made publicly available in 2022, many universities have changed course and now run faculty-education courses that teach professors how to use ChatGPT more effectively in their research and teaching.  A philosophy professor, Barry Lam, who teaches at the University of California Riverside, deals with it by telling his class on the first day, "If you're gonna just turn in a paper that's ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach."  Presumably his class isn't spending all their time at the beach yet, but Lam is pretty sure that a lot of his students use AI in writing their papers anyway.

 

What are the ethical challenges in this situation?  It's not plagiarism pure and simple.  As one professor pointed out, there are no original texts that are being plagiarized, or if there are, the plagiarizing is being done by the AI system that scraped the whole Internet for the information it comes up with.  The closest non-technological analogy is the "paper mills" that students can pay to write a custom paper for them.  This is universally regarded as cheating, because students are passing off another person's work (the paper mill's employee) as their own. 

 

When Hsu asked his interviewees about the ethics of using AI heavily in writing graded essays, they characterized it as a victimless crime and an expedient to give them more time for other important tasks.  If I had been there, I might have pointed out something I tell my own students at the beginning of the course when I warn them not to cheat on homework. 

 

A STEM (science, technology, engineering, math) class is different from a humanities class, but just as vulnerable to the inroads of AI, as a huge amount of routine coding is now reportedly done with clever prompts to AI tools rather than by writing the code directly.  What I tell them is that if they evade doing the homework either by using AI to do it all or paying a homework service, they are cheating themselves.  The point of doing the homework isn't to get a good grade; it is to give your mind practice in solving problems that (a) you will face without the help of anything or anybody when I give a paper exam in class, and (b) you may face in real life.  Yes, in real life you will be able to use AI assistance.  But how do you know it's not hallucinating?  Hsu cites experts who say that the hallucination problem—AI saying things that aren't true—has not gone away, and may actually be getting worse, for reasons that are poorly understood.  At some level, important work, whether it's humanities research papers to be published or bridges to be built, must pass through the evaluative minds of human beings before it reaches the public. 

 

That raises the question of what work qualifies as important.  It's obvious from reading the newspapers I read (a local paper printed on actual dead trees, and an electronic version of the Austin American-Statesman) that "important" doesn't seem to cover a lot of what passes for news in these documents.  Just this morning, I read an article about parking fees, and the headline and "see page X" at the end both referred to an article on women's health, not the parking-fees article.  And whenever I read the Austin paper on my tablet, whatever system they use to convert the actual typeset words into more readable type when you select the article oftenmakesthewordscomeoutlikethis.  I have written the managing editor about this egregious problem, but to no avail.

 

The most serious problem I see in the way AI has taken over college essay writing is not the ethics of the situation per se, although that is bad enough.  It is the general lowering of standards on the part of both originators and consumers of information.  In Culture and Anarchy (1869) (which I have not read, by the way), British essayist Matthew Arnold argued that a vital aspect of education was to read "the best which has been thought and said" as a way of combating the degradation of culture that industrialization was bringing on.  But if nobody except machines thinks or says those gems of culture after a certain point, we all might as well go to the beach.  I actually went to the beach a few weeks ago, and it was fun, but I wouldn't want to live there. 

 

Sources:  Hua Hsu's "The End of the Essay" appeared on pp. 21-27 of the July 7 & 14, 2025 issue of The New Yorker.  I referred to the website https://newlearningonline.com/new-learning/chapter-7/committed-knowledge-the-modern-past/matthew-arnold-on-learning-the-best-which-has-been-thought-and-said for the Arnold quote. 

Monday, June 30, 2025

Supreme Court Validates Texas Anti-Porn Law

 

On Friday, June 27, the U. S. Supreme Court issued its decision in the case of Free Speech Coalition, Inc. v. Paxton.  The Free Speech Coalition is an organization representing the interests of the online pornography industry, and Kenneth Paxton is the controversial attorney general of Texas, whose duty it is to enforce a 2023 law which "requires pornography websites to verify the age of users before they can access explicit material," according to a report by National Review.  The Court upheld the Texas law, finding that the law was a constitutional exercise of a state's responsibility to prevent children from "accessing sexually explicit content." 

 

This ruling has implications beyond Texas, as 22 other states have adopted similar laws, and the decision of the court means that those states are probably safe from federal lawsuits as well.

 

This is a matter of interest to engineering ethicists because, whether we like it or not, pornography has played a large role in electronic media at least since the development of consumer video-cassette recorders in the 1970s.  As each new medium has appeared, the pornographers have been among its earliest adopters.  Around 1980, as I was considering a career change in the electronic communications industry, one of the jobs I was offered was as an engineer for a satellite cable-TV company.  One of the factors that made me turn it down was that a good bit of their programming back then was of the Playboy Channel ilk.  I ended up working for a supplier of cable TV equipment, which wasn't much better, perhaps, but that job lasted only a couple of years before I went back to school and remained in academia thereafter.

 

The idea behind the Texas law is that children exposed to pornography suffer objective harm.  The American College of Pediatricians has a statement on their website attesting to the problems caused by pornography to children:  depression, anxiety, violent behavior, and "a distorted view of relationships between men and women."  And it's not a rare problem.  The ubiquity of mobile phones means that even children who do not have their own phone are exposed to porn by their peers, and so even parents who do not allow their children to have a mobile phone are currently pretty defenseless against the onslaught of online pornography. 

 

Requiring porn websites to verify a user's age is a small but necessary step in reducing the exposure of young people to the social pathology of pornography.  In an article in the online journal The Dispatch, Charles Fain Lehman proposes that we dust off obscenity laws to prosecute pornographers regardless of the age of their clientele.  The prevalence of porn in the emotional lives of young people has ironically led to a dearth of sexual activity in Gen Z, who have lived with its presence all their lives.  In a review of several books that ask why people in their late teens and 20s today are having less sex than previous generations, New Yorker writer Jia Tolentino cites the statistic that nearly half of adults in this age category regard porn as harmful, but only 37% of older millennials do.  And fifteen percent of young Americans have encountered porn by the age of 10.

 

There are plenty of science-based reasons to keep children and young teenagers from viewing pornography.  For those who believe in God, I would like to add a few more.  In the gospel of Matthew, Jesus tells his disciples that they must "become like children" to enter the kingdom of Heaven.  Then he warns that "whoever causes one of these little ones who believe in me to sin [the Greek word means "to stumble"], it would be better for him to have a great millstone fastened round his neck and to be drowned in the depths of the sea."  (Matt. 18:6).  People who propagate pornography that ten-year-olds can watch on their phones seem to fill the bill for those who cause children to stumble. 

 

The innocence of children can be overrated, as anyone who has dealt with a furious two-year-old can attest.  But it is really a kind of mental virginity that children have:  the absence of cruel and exploitative sexual images in their minds helps keep them away from certain kinds of sin, even before they could understand what was involved.  Until a few decades ago, most well-regulated societies protected children from the viewing, reading, or hearing of pornography, and those who wished to access it had to go to considerable efforts to seek out a bookstore or porn theater.

 

But that is no longer the case, and as Carter Sherman, the author of a book quoted in the New Yorker, says, the internet is a "mass social experiment with no antecedent and whose results we are just now beginning to see."  Among those results is a debauching of the ways men and women interact sexually, to the extent that one recent college-campus survey showed that nearly two-thirds of women said they'd been choked during sex. 

 

This is not the appropriate location to explore the ideals of how human sexuality should be expressed.  But suffice it to say that the competitive and addictive nature of online pornography invariably degrades its users toward a model of sexual attitudes that are selfish, exploitative, and unlikely to lead to positive outcomes. 

 

The victory of Texas's age-verification law at the Supreme Court is a step in the right direction toward the regulation of the porn industry, and gives hope to those who would like to see further legal challenges to its very existence.  Perhaps we are at the early stages of a trend comparable to what happened with the tobacco industry, which denied the objective health hazards of smoking until the evidence became overwhelming.  It's not too early for pornographers to start looking for millstones as a better alternative to their current occupation. 

 

Sources:  The article "Supreme Court Upholds Texas Age-Verification Law" appeared at https://www.nationalreview.com/news/supreme-court-upholds-texas-age-verification-porn-law/, and the article "It's Time to Prosecute Pornhub" appeared at https://thedispatch.com/article/pornhub-supreme-court-violence-obscenity-rape/.  I also referred to the Wikipedia article "Free Speech Coalition, Inc. v. Paxton" and the New Yorker article "Sex Bomb" by Jia Tolentino on pp. 58-61 of the June 30, 2025 issue. 


Monday, June 23, 2025

Should Chatbots Replace Government-Worker Phone Banks?

 

The recent slashing of federal-government staffing and funding has drawn the attention of the Distributed AI Research Institute (DAIR), and two of the Institute's members warn of impending disaster if the Department of Government Efficiency (DOGE) carries through its stated intention to replace hordes of government workers with AI chatbots.  In the July/August issue of Scientific American, DAIR founder Timnit Gebru, joined by staffer Asmelash Teka Hadgu, decries the current method of applying general-purpose large-language-model AI to the specific task of speech recognition, which would be necessary if one wanted to replace with machines the human-staffed phone banks at the other end of the telephone numbers for Social Security and the IRS. 

 

The DAIR people give vivid examples of the kinds of things that can go wrong.  They focused on Whisper, OpenAI's speech-recognition model, and the results of studies by four universities of how well Whisper converted audio recordings of a person talking into transcribed text.

 

The process of machine transcription has come a long way since the very early days of computers in the 1970s, when I heard Bell Labs' former head of research John R. Pierce say that he doubted speech recognition would ever be computerized.  But anyone who phones a large organization today is likely to deal with some form of automated speech recognition, as is anyone who has a Siri or other voice-controlled device in the home.  Just last week I was on vacation, and the TV in the room had to be controlled with voice commands.  Simple operations like asking for music or a TV channel are fairly well performed by these systems, but that's not what the DAIR people are worried about.

 

With more complex language, Whisper was shown not only to misunderstand things, but also to make up material that was not in the original audio file at all.  For example, the phrase "two other girls and one lady" in the audio file became, after Whisper transcribed it, "two other girls and one lady, um, which were Black." 
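
For those curious how easy it is to produce such a transcript, here is a minimal sketch using OpenAI's open-source whisper Python package.  The audio file name is hypothetical, and this is not the pipeline the university studies used; it simply shows how little stands between an audio recording and a machine-generated transcript that may contain invented words.

```python
# Minimal sketch using OpenAI's open-source "whisper" package (pip install openai-whisper).
# The file name below is hypothetical; the university studies described above
# used their own recordings and evaluation pipelines.
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("interview.wav")  # returns a dict with the text and timed segments

print(result["text"])
# Everything printed here is the model's best guess at what was said --
# nothing prevents it from "hallucinating" words that were never spoken.
```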

 

This is an example of what is charitably called "hallucinating" by AI proponents.  If a human being did something like this, we'd just call it lying, but to lie requires a will and an intellect that chooses a lie rather than the truth.  Not many AI experts want to attribute will and intellect to AI systems, so they default to calling untruths hallucinations.

 

This problem arises, the authors claim, when companies try to develop AI systems that can do everything and train them on huge unedited swaths of the Internet, rather than tailoring the design and training to a specific task, which of course costs more in terms of human input and guidance.  They paint a picture of a dystopian future in which somebody who calls Social Security can't ever talk to a human being, but just gets shunted around among chatbots which misinterpret, misinform, and simply lie about what the speaker said.

 

Both government-staffed interfaces with the public and speech-recognition systems vary greatly in quality.  Most people have encountered at least one or two government workers who are memorable for their surliness and aggressively unhelpful demeanor.  But there are also many such people who go out of their way to pay personal attention to the needs of their clients, and these are the kinds of employees we would miss if they got replaced by chatbots.

 

Elon Musk's brief tenure as head of DOGE is profiled in the June 23 issue of The New Yorker magazine, and the picture that emerges is that of a techie dude roaming around in organizations he and his tech bros didn't understand, causing havoc and basically throwing monkey wrenches into finely-adjusted clock mechanisms.  The only thing that is likely to happen in such cases is that the clock will stop working.  Improvements are not in the picture, not even cost savings in many cases.  As an IRS staffer pointed out, many IRS employees end up bringing in many times their salary's worth of added tax revenue by catching tax evaders.  Firing those people may look like an immediate short-term economy, but in the long term it will cost billions.

 

Now that Musk has left DOGE, the threat of massive-scale replacement of federal customer-service people by chatbots is less than it was.  But we would be remiss in ignoring DAIR's warning that AI systems can be misused or abused by large organizations in a mistaken attempt to save money.

 

In the private sector, there are limits to what harm can be done.  If a business depends on answering phone calls accurately and helpfully, and they install a chatbot that offends every caller, pretty soon that business will not have any more business and will go out of business.  But in the U. S. there's only one Social Security Administration and one Internal Revenue Service, and competition isn't part of that picture. 

 

The Trump administration does seem to want to do some revolutionary things to the way government operates.  But at some level, they are also aware that if they do anything that adversely affects millions of citizens, they will be blamed for it. 

 

So I'm not too concerned that all the local Social Security offices scattered around the country will be shuttered, and one's only alternative will be to call a chatbot which hallucinates by concluding the caller is dead and cuts off his Social Security check.  Along with almost every other politician in the country, Trump recognizes Social Security is a third rail that he touches at his peril. 

 

But that still leaves plenty of room for future abuse of AI by trying to make it do things that people really still do better, and maybe even more economically than computers.  While the immediate threat may have passed from the scene with Musk's departure from DOGE, the tendency is still there.  Let's hope that sensible mid-level managers will prevail against the lightning strikes of DOGE and its ilk, and the needed work of government will go on.

 

Sources:  The article "A Chatbot Dystopian Nightmare" by Asmelash Teka Hadgu and Timnit Gebru appeared in the July/August 2025 Scientific American on pp. 89-90.  I also referred to the article "Move Fast and Break Things" by Benjamin Wallace-Wells on pp. 24-35 of the June 23, 2025 issue of The New Yorker.

Monday, June 16, 2025

Why Did Air India Flight 171 Crash?

 

That is the question investigators will be asking in the days, weeks, and months to come.  On Thursday June 12, a Boeing 787 Dreamliner took off from Ahmedabad in western India, bound for London.  On board were 242 passengers and crew.  It was a hot, clear day.  Videos taken from the ground show that after rolling down the runway, the plane "rotated" into the air (pitching nose-up to lift off the runway) and began to climb.  But after rising for about fifteen seconds, it began to sink back toward the ground and plowed into a building housing students of a medical college.  All but one person on the plane were killed, and at least 38 people on the ground died as well.

 

This is the first fatal crash of a 787 since it was introduced in 2011.  The data recorder was recovered over the weekend, so experts have abundant information to comb through in determining what went wrong.  The formal investigation will take many weeks, but understandably, friends and relatives of the victims of the crash would like answers earlier than that.

 

Air India, the plane's operator, became a private entity only in 2022, after spending 69 years under the control of the Indian government.  An AP news report mentions that fatal crashes involving Air India aircraft killed hundreds of people in 1978 and 2010.  The quality of training is always a question in accidents of this kind, and that issue will be addressed along with many others.

 

An article in the Seattle Times describes the opinions of numerous aviation experts as to what might have led to a plane crashing shortly after takeoff in this way.  While they all emphasized that everything they say is speculative at this point, they had some specific suggestions as well.

 

One noted that the appearance of dust in a video of the takeoff just before the plane becomes airborne might indicate that the pilot used up the entire runway in taking off.  This is not the usual procedure at major airports, and might have indicated issues with available engine power.

 

Several experts mentioned that the flaps may not have been in the correct position for takeoff.  Flaps are parts of the wing that can be extended downward during takeoff and landing to provide extra lift, and are routinely extended for the first few minutes of any flight.  The problem with this theory, as one expert mentioned, is that modern aircraft have alarms to alert a negligent pilot that the flaps haven't been extended, and unless there was a problem with hydraulic pressure that overwhelmed other alarms, the pilots would have noticed the issue immediately.

 

Another possibility involves an attempt to take off too soon, before the plane had enough airspeed to leave the ground safely.  Rotation, as the actions to make the plane leave the ground are called, cannot come too early, or else the plane is likely to either stall or lose altitude after an initial rise.  Stalling is an aerodynamic effect that happens when an airfoil has an excessive angle of attack to the incoming air, which no longer flows in a controlled way over the upper surface but separates from it.  The result is that lift decreases dramatically.  An airplane entering a sudden stall can appear to pitch upward and then simply drop out of the air.  While such a stall was not obvious in the videos of the flight obtained so far, something obviously caused a lack of sufficient lift that led to the crash.
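
To put a rough number on how much lift disappears in a stall, here is a toy calculation using the standard lift equation with made-up but plausible figures and a crude lift-coefficient model; real 787 aerodynamics are of course far more complicated.

```python
# Toy illustration of why a stall costs lift, using the standard lift equation
# L = 0.5 * rho * v^2 * S * CL.  All numbers and the crude CL model are
# illustrative assumptions, not 787 data.

rho = 1.2          # air density near sea level, kg/m^3
S = 360.0          # rough wing area of a large twinjet, m^2 (assumed)
v = 80.0           # airspeed shortly after liftoff, m/s (assumed)

def lift_newtons(angle_of_attack_deg):
    # Crude model: CL rises roughly linearly with angle of attack up to a
    # "critical" angle (about 15 degrees here), then falls off sharply as the
    # airflow separates from the upper wing surface (the stall).
    if angle_of_attack_deg <= 15:
        cl = 0.1 * angle_of_attack_deg
    else:
        cl = 1.5 - 0.15 * (angle_of_attack_deg - 15)
    return 0.5 * rho * v**2 * S * max(cl, 0.0)

for alpha in (5, 10, 15, 20):
    print(f"angle of attack {alpha:2d} deg -> lift about {lift_newtons(alpha)/1000:5.0f} kN")
# Past the critical angle, lift drops instead of rising -- pulling the nose up
# further only makes the problem worse.
```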

 

Other more remote possibilities include engine problems that would limit the amount of thrust available below that needed for a safe takeoff.  It is possible that some systemic control issue may have limited available thrust, but there was no obvious mechanical failure of the engines before the crash, so this possibility is not a leading one.

 

In sum, initial signs are that some type of pilot error may have at least contributed to the crash:  too-early rotation, misapplication of flaps, or other more subtle mistakes.  A wide-body aircraft cannot be stopped on a dime, and once it has begun its takeoff roll there are not a lot of options left to the pilot should a sudden emergency occur.  A decision to abort takeoff beyond a certain point will result in overrunning the runway.  And depending on how much extra space there is at the end of the runway, an overrun can easily lead to a crash, as happened in December 2024 when Jeju Air Flight 2216 overran the runway at Muan, South Korea, and crashed into the concrete foundation of some navigation antennas. 
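
A back-of-the-envelope calculation shows why a late abort eats up runway so quickly.  The speed, deceleration, and reaction time below are assumed round numbers, not data from Flight 171.

```python
# Back-of-the-envelope illustration of why aborting a takeoff late is risky.
# All numbers are assumed round figures, not data from Flight 171.
speed_knots = 150                    # roughly a wide-body's decision/rotation speed
speed_ms = speed_knots * 0.5144      # knots to meters per second
decel = 3.0                          # m/s^2, a plausible heavy-braking deceleration
reaction_time = 2.0                  # seconds before braking actually begins

distance_reacting = speed_ms * reaction_time
distance_braking = speed_ms**2 / (2 * decel)
total = distance_reacting + distance_braking

print(f"{speed_knots} kt = {speed_ms:.0f} m/s")
print(f"Stopping needs about {total:.0f} m of runway "
      f"({distance_reacting:.0f} m reacting + {distance_braking:.0f} m braking)")
# With only a few hundred meters of runway remaining, that distance may simply not exist.
```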

 

The alternative of taking off and trying to stay in the air may not be successful either, unless enough thrust is available to gain altitude.  Although no expert mentioned the following possibility and there may be good reasons for that, perhaps there was an issue with brakes not being fully released on the landing-gear wheels.  This would slow down the plane considerably, and the unusual nature of the problem might not give the pilots time enough to figure out what was happening. 

 

Modern jetliners are exceedingly complicated machines, and the more parts there are in a system, the more combinations of things can happen to cause difficulties.  The fact that there have so far been no calls to ground the entire fleet of 787 Dreamliners indicates that the consensus of experts is that a fundamental issue with the plane itself is probably not at fault. 

 

Once the flight-recorder data has been studied, we will know a great deal more about things such as flap and engine settings, precise timing of control actions, and other matters that are now a subject of speculation.  It is entirely possible that the accident happened due to a combination of minor mechanical problems and poor training or execution by the flight crew.  Many major tragedies in technology occur because a number of problems, each of which could be overcome by itself, combine to cause a system failure.

 

Our sympathies are with those who lost loved ones in the air or on the ground.  And I hope that whatever lessons we learn from this catastrophe will improve training and design efforts to make these the last fatalities involving a 787 in a long time.

 

Sources:  I referred to AP articles at https://apnews.com/article/air-india-survivor-crash-boeing-e88b0ba404100049ee730d5714de4c67 and https://apnews.com/article/india-plane-crash-what-to-know-4e99be1a0ed106d2f57b92f4cc398a6c, a Seattle Times article at https://www.seattletimes.com/business/boeing-aerospace/what-will-investigators-be-looking-for-in-air-india-crash-data/, and the Wikipedia articles on Air India and Air India Flight 171.

Monday, June 09, 2025

Science Vs. Luck: DNA Sequencing of Embryos

 

"Science Vs. Luck" was the title of a sketch by Mark Twain about a lawyer who got his clients off from a charge of gambling by recruiting professional gamblers, who convinced the jury that the game was more science than luck—by playing it with the jury and cleaning them out! Of course, there was more going on than met the eye, as professional gamblers back then had some tricks up their sleeves that the innocents on the jury wouldn't have caught.  So while the verdict of science looked legitimate to the innocents, there was more going on than they suspected, and the spirit of the law against gambling was violated even though the letter seemed to be obeyed.

 

That sketch came to mind when I read an article by Abigail Anthony, who wrote on the National Review website about a service offered by the New York City firm Nucleus Genomics:  whole-genome sequencing of in-vitro-fertilized embryos.  For only $5,999, Nucleus will take the genetic data provided by the IVF company of your choice and give you information on over 900 different possible conditions and characteristics the prospective baby might have, ranging from Alzheimer's to the likelihood that the child will be left-handed. 

 

There are other companies offering services similar to this, so I'm not calling out Nucleus in particular.  What is peculiarly horrifying about this sales pitch is the implication that having a baby is no different in principle than buying a car.  If you go in a car dealership and order a new car, you get to choose the model, the color, a range of optional features, and if you don't like that brand you can go to a different dealer and get even more choices. 

 

The difference between choosing a car and choosing a baby is this:  the cars you don't pick will be sold to somebody else.  The babies you don't pick will die. 

 

We are far down the road foreseen by C. S. Lewis in his prescient 1943 essay "The Abolition of Man."  Lewis realized that what was conventionally called man's conquest of nature was really the exertion of power by some men over other men.  And the selection of IVF embryos by means of sophisticated genomic tests such as the ones offered by Nucleus are a fine example of such power.  In the midst of World War II when the fate of Western civilization seemed to hang in the balance, Lewis wrote, " . . .  if any one age attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power."

 

Eugenics was a highly popular and respectable subject from the late 19th century up to right after World War II, when its association with the horrors of the Holocaust committed by the Nazi regime gave it a much-deserved bad name.  The methods used by eugenicists back then were crude ones:  sterilization of the "unfit," where the people deciding who was unfit always had more power than the unfit ones; encouraging the better classes to have children and the undesirable classes (such as negroes and other minorities) to have fewer ones; providing birth control and abortion services especially to those undesirable classes (a policy which is honored by Planned Parenthood to this day); and in the case of Hitler's Germany, the wholesale extermination of whoever was deemed by his regime to be undesirable:  Jews, Romani, homosexuals, and so on. 

 

But just as abortion hides behind a clean, hygienic medical facade to mask the fact that it is the intentional killing of a baby, the videos on Nucleus's website conceal the fact that in order to get that ideal baby with a minimum of whatever the parents consider to be undesirable traits, an untold number of fertilized eggs—all exactly the same kind of human being that you were when you were that age—have to be "sacrificed" on the altar of perfection. 

 

If technology hands us a power that seems attractive, that enables us to avoid pain or suffering even on the part of another, does that mean we should always avail ourselves of it?  The answer depends on what actions are involved in using that power. 

 

If the Nucleus test enabled the prospective parents to avert potential harms and diseases in the embryo analyzed without killing it, there would not be any problem.  But we don't know how to do that yet, and by the very nature of reproduction we may never be able to.  The choice being offered is made by producing multiple embryos, and disposing of the ones that don't come up to snuff. 

 

Now, at $6,000 a pop, it's not likely that anyone with less spare change than Elon Musk is going to keep trying until they get exactly what they want.  But the clear implication of advertising such genomic testing as a choice is that you don't have to take what Nature (or God) gives you.  If you don't like it, you can put it away and try something else.

 

And that's really the issue:  whether we acknowledge our finiteness before God and take the throw of the genetic dice that comes with having a child, the way it's been done since the beginning; or cheat by taking extra turns and drawing cards until we get what we want. 

 

The range of human capacity is so large and varied that even the 900 traits analyzed by Nucleus do not even scratch the surface of what a given baby may become.  This lesson is brought home in a story attributed to an author named J. John.  In a lecture on medical ethics, the professor confronts his students with a case study.  "The father of the family had syphilis and the mother tuberculosis.  They already had four children.  The first child is blind, the second died, the third is deaf and dumb, and the fourth has tuberculosis.  Now the mother is pregnant with her fifth child.  She is willing to have an abortion, so should she?"

 

After the medical students vote overwhelmingly in favor of the abortion, the professor says, "Congratulations, you have just murdered the famous composer Ludwig van Beethoven!"

 

Sources:  Abigail Anthony's article "Mail-order Eugenics" appeared on the National Review website on June 5, 2025 at https://www.nationalreview.com/corner/mail-order-eugenics/.  My source for the Beethoven anecdote is https://bothlivesmatter.org/blog/both-lives-mattered. 

Monday, June 02, 2025

AI-Induced False Memories in Criminal Justice: Fiction or Reality?

 

A filmmaker in Germany named Hashem Al-Ghaili has come up with an idea to solve our prison problems:  overcrowding, high recidivism rates, and all the rest.  Instead of locking up your rapist, robber, or arsonist for five to twenty years, you offer him a choice:  conventional prison and all that entails, or a "treatment" taking only a few minutes, after which he could return to society a free . . . I was going to say, "man," but once you find out what the treatment is, you may understand why I hesitated.

 

Al-Ghaili works with an artificial-intelligence firm called Cognify, and his treatment would do the following.  After a detailed scan of the criminal's brain, false memories would be inserted into his brain, the nature of which would be chosen to make sure the criminal doesn't commit that crime again.  Was he a rapist?  Insert memories of what the victim felt like and experienced.  Was he a thief?  Give him a whole history of realizing the loss he caused to others, repentance on his part, and rejection of his criminal ways.  And by the bye, the same brain scans could be used to create a useful database of criminal minds to figure out how to prevent these people from committing crimes in the first place.

 

Al-Ghaili admits that his idea is pretty far beyond current technological capabilities, but at the rate AI and brain research is progressing, he thinks now is the time to consider what we should do with these technologies once they're available. 

 

Lest you think these notions are just a pipe dream, a sober study from the MIT Media Lab experimented with implanting false memories simply by having some of a study group of 200 people converse with a chatbot (either a pre-scripted one or a generative, large-language-model one) about a crime video they all watched.  The members of the study did not know that the chatbots were designed to mislead them with questions that would confuse their memories of what they saw.  The researchers also tried the same trick with a simple set of survey questions, and left a fourth division of the group alone as a control.

 

What the MIT researchers found was that the generative type of chatbot induced its subjects to have more than three times the false memories of the control group, who were not exposed to any memory-clouding techniques, and more than those who took the survey experienced.  What this study tells us is that using chatbots to interrogate suspects or witnesses in a criminal setting could easily be misused to distort the already less-than-100%-reliable recollections that we base legal decisions on. 

 

Once again, we are looking down a road where we see some novel technologies in the future beckoning us to use them, and we face a decision:  should we go there or not?  Or if we do go there, what rules should we follow? 

 

Let's take the Cognify prison alternative first.  As ethicist Charles Camosy pointed out in a broadcast discussion of the idea, messing with a person's memories by direct physical intervention and bypassing their conscious mind altogether is a gross violation of the integrity of the human person.  Our memories form an essential part of our being, as the sad case of Alzheimer's sufferers attests.  To implant a whole set of false memories into a person's brain, and therefore mind as well, is as violent an intrusion as cutting off a leg and replacing it with a cybernetic prosthesis.  Even if the person consents to such an action, the act itself is intrinsically wrong and should not be done. 

 

We laugh at such things when we see them in comedies such as Men in Black, when Tommy Lee Jones whips out a little flash device that makes everyone in sight forget what they've seen for the last half hour or so.  But each person has a right to experience the whole of life as it happens, and wiping out even a few minutes is wrong, let alone replacing them with a cobbled-together script designed to remake a person morally. 

 

Yes, it would save money compared to years of imprisonment, but if you really want to save money, just chop off the head of every offender, even for minor infractions.  That idea is too physically violent for today's cultural sensibilities, but in a culture inured to the death of many thousands of unborn children every year, we can apparently get used to almost any variety of violence as long as it is implemented in a non-messy and clinical-looking way.

 

C. S. Lewis saw this type of thing coming as long ago as 1949, when he criticized the trend of replacing retributive fixed-term punishment, the delivery of a just penalty to one who deserves it, with a therapeutic approach that treats criminals as patients suffering from a disease.  He wrote "My contention is that this doctrine [of therapy rather than punishment], merciful though it appears, really means that each one of us, from the moment he breaks the law, is deprived of the rights of a human being." 

 

No matter what either C. S. Lewis or I say, there are some people who will see nothing wrong with this idea, because they have a defective model of what a human being is.  One can show entirely from philosophical, not religious, presuppositions that the human intellect is immaterial.  Any system of thought which neglects that essential fact is capable of serious and violent errors, such as the Cognify notion of criminal memory-replacement would be.

 

As for allowing AI to implant false memories simply by persuasion, as the MIT Media Lab study appeared to do, we are already well down that road.  What do you think is going on any time a person "randomly" trolls the Internet looking at whatever the fantastically sophisticated algorithms show him or her?  AI-powered persuasion, of course.  And the crisis in teenage mental health and many other social-media ills can be largely attributed to such persuasion.

 

I'm glad that Hashem Al-Ghaili's prison of the future is likely to stay in the pipe-dream category at least for the next few years.  But now is the time to realize what a pernicious thing it would be, and to agree as a culture that we need to move away from treating criminals as laboratory animals and restore to them the dignity that every human being deserves. 

 

Sources:  Charles Camosy was interviewed on the Catholic network Relevant Radio on a recent edition of the Drew Mariani Show, where I heard about Cognify's "prison of the future" proposal. The quote from C. S. Lewis comes from his essay "The Humanitarian Theory of Punishment," which appears in his God in the Dock (Eerdmans, 1970).  I also referred to an article on Cognify at https://www.dazeddigital.com/life-culture/article/62983/1/inside-the-prison-of-the-future-where-ai-rewires-your-brain-hashem-al-ghaili and the MIT Media Lab abstract at https://www.media.mit.edu/projects/ai-false-memories/overview/. 

Monday, May 26, 2025

NSF and Women in STEM: Removing Barriers or Chartering Jets?

 

Anyone even remotely connected with the academic world knows that the Trump administration has recently been playing Attila the Hun to the Italy of the government-funded research establishment, slashing billions in already-granted money, firing staff, and generally raising Hades.  A recent article by Andrew Follett in National Review highlights the shakeup at what many academics consider to be the crown jewel of such funding, the U. S. National Science Foundation (NSF).  Follett points out that the long-established woke-diversity-equity-inclusion slant at the agency may be repressed for the moment, but making permanent changes will require Congressional action. 

 

Follett may well be right regarding the correct political strategy, but what I would like to focus on is one particular goal which the NSF holds dear to its bureaucratic heart:  expanding participation in STEM (science, technology, engineering, and math) for women. 

 

It is not quite the case, as Follett says it is, that NSF is abandoning this goal completely.  Rather, according to some updated guidelines on the NSF website, investigators may pursue it, but only "as part of broad engagement activities" that are open to all Americans, regardless of sex, race, or other "protected characteristics."  Even if Congress gets involved, I suspect NSF will keep doing what it wants to do while staying within the letter of the law, because I've seen up close how they do it in the case of a specific grant aimed at increasing the participation of women in engineering.

 

I state categorically that women should not be barred from pursuing degrees or jobs in engineering, either de jure or de facto.  As recently as 1970, women were not admitted to many all-male engineering-intensive schools, and many engineering programs at coed universities refused to take women.  Accordingly, the U. S. Dept. of Labor reports that only 3% of engineers were women that year.  Second-wave feminism, equal-employment laws, and other societal changes knocked down virtually all the legal and cultural barriers that kept women from being engineers by around 1990, and the percentage of engineers in the U. S. who were women rose to around 16%, where it has hovered to this day. 

 

But since 1990, NSF has expended probably a total of billions of dollars trying to raise that percentage above 16%, with the presumed goal being "equity":  that is, a percentage of women in engineering equal to their percentage in the general population.  We can say several things about these efforts.

 

The most obvious thing is, they have failed.  If NSF had poured billions of dollars into a pure-science project—just to take one at random, say, the nature of ball lightning—and gotten precisely zip results by now, one would hope that common sense would prevail and they would turn their attention to other matters.  But that is not how these things work.  This is not to say that all the money was wasted.  In a grant I was familiar with at my own university, special scholarships and academic support networks were set up in a way that mainly attracted women, although when I asked the principal investigator whether a male student could apply, she said technically yes, though they weren't getting any to speak of.  And scholarships are good for students, other things being equal. 

 

But in terms of NSF's original goals of funding science research that otherwise would not get done, paying for scholarships that are legally for everybody but (wink-wink) are really focused on women is a classic example of politics corrupting science.

 

I use the word "corrupting" deliberately, in the sense that betrayal of an agency's stated purpose for political reasons—any political reasons, right, left, or slantwise—is a step down a long road that led to distinctions such as "Aryan science" in Germany before World War II. 

 

As a wise junior-high civics teacher once told me, politics is just the conduct of public affairs, and of course it's not possible to keep any human institution, let alone a governmental one, completely free of political considerations. 

 

But as in so many other ethical situations, the intent is the key.  If Congress manipulates an agency's budget to favor certain regions, there's not much the agency's director can do about it other than jawbone.  But that is vastly different from setting up entire divisions directed not at the discovery of new knowledge, but at the changing of certain demographic statistics such as the percentage of women in engineering. 

 

It is entirely possible, but in the nature of things it cannot be proved, that about as many women as want to go into engineering today actually do so.  As we said, most legal barriers that kept women out of engineering were gone by 1990, and since then the two professions that are even more prestigious than engineering—law and medicine—have become thoroughly feminized.  And the stereotypical engineering image has changed radically from the 1940s, when a clipart drawing of an engineer would depict a rugged guy wearing work boots and toting a transit tripod on one shoulder and a big hammer in his hand.  Nowadays, your typical engineer does exactly what I'm doing now—sits at a computer, something that women and men can do equally well. 

 

I agree with Follett that the NSF, along with other federal agencies, will require extensive Congressional action and supervision in order for it to reorient its intentions and priorities.  Old habits die hard, and old bureaucrats die harder.  But some such sea change may be necessary if we are to avoid a wholesale turn away from government support of science research, which from the 1940s up to at least the 1990s enjoyed the benign support of most citizens.  In a democracy, if most people no longer want a thing done by the government, it shouldn't be done, generally speaking.  And if the science establishment has betrayed its origins and allowed itself to be corrupted by political winds that inevitably go out of fashion, the day of reckoning we are currently seeing the dawn of may go on longer than we think. 

 

I'm glad there are women in engineering.  I miss them when I don't have any in my undergraduate class, which happened last semester.  But I think it's time NSF quit trying to move political needles and go back to funding science.

 

Sources:  Andrew Follett's article "How Republicans Can Actually Defund Woke Science" appeared on the National Review website at https://www.nationalreview.com/2025/05/how-republicans-can-actually-defund-woke-science/.  I also referred to the Dept. of Labor site at https://www.dol.gov/agencies/wb/data/occupations-stem for women-in-engineering statistics, and the NSF website https://www.nsf.gov/updates-on-priorities for their updated priorities.