Monday, June 16, 2025

Why Did Air India Flight 171 Crash?

 

That is the question investigators will be asking in the days, weeks, and months to come.  On Thursday, June 12, a Boeing 787 Dreamliner took off from Ahmedabad in western India, bound for London.  On board were 242 passengers and crew.  It was a hot, clear day.  Videos taken from the ground show that after rolling down the runway, the plane "rotated" into the air (pitching its nose up so the wings generate enough lift to leave the ground) and climbed in a nose-up attitude.  But after rising for about fifteen seconds, it began to sink back toward the ground and plowed into a building housing students of a medical college.  All but one person on the plane were killed, and at least 38 people on the ground died as well.

 

This is the first fatal crash of a 787 since it was introduced in 2011.  The data recorder was recovered over the weekend, so experts have abundant information to comb through in determining what went wrong.  The formal investigation will take many weeks, but understandably, friends and relatives of the victims of the crash would like answers earlier than that.

 

Air India, the plane's operator, became a private entity only in 2022, after spending 69 years under the control of the Indian government.  An AP news report notes that Air India aircraft were involved in fatal crashes that killed hundreds of people in 1978 and 2010.  The quality of training is always a question in accidents of this kind, and that issue will be addressed along with many others.

 

An article in the Seattle Times gathers the opinions of several aviation experts as to what might cause a plane to crash shortly after takeoff in this way.  While all of them emphasized that anything they say at this point is speculative, they offered some specific possibilities.

 

One noted that the dust visible in a video of the takeoff just before the plane became airborne might indicate that the pilot used up the entire runway before lifting off.  Needing the full runway is not usual at a major airport, and could point to a shortfall in available engine power.

 

Several experts mentioned that the flaps may not have been in the correct position for takeoff.  Flaps are movable sections of the wing that are extended downward during takeoff and landing to provide extra lift, and they are routinely extended for the first few minutes of any flight.  The problem with this theory, as one expert noted, is that modern aircraft have alarms to alert a negligent pilot that the flaps haven't been extended, and unless some other problem, such as a loss of hydraulic pressure, set off competing alarms and masked the warning, the pilots would have noticed the issue immediately.
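
To make the expert's point concrete, here is a minimal sketch of the kind of takeoff-configuration check being described: if takeoff thrust is applied while the flaps are not in a valid takeoff setting, the crew gets an alert.  The flap settings, names, and logic below are illustrative assumptions of mine, not Boeing's actual implementation.

    # Conceptual sketch of a takeoff-configuration warning.  The flap detents
    # and warning text are illustrative assumptions, not Boeing 787 logic.
    TAKEOFF_FLAP_SETTINGS = {5, 10, 15, 17, 18, 20}   # assumed valid takeoff flap settings, degrees

    def takeoff_config_warning(flap_setting_deg, thrust_levers_advanced):
        """Return an alert string if takeoff thrust is set with the flaps out of takeoff range."""
        if thrust_levers_advanced and flap_setting_deg not in TAKEOFF_FLAP_SETTINGS:
            return "CONFIG FLAPS"   # the crew would see and hear a configuration alert
        return None

    print(takeoff_config_warning(0, True))   # flaps retracted -> CONFIG FLAPS
    print(takeoff_config_warning(5, True))   # normal takeoff flap -> None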

 

Another possibility is an attempt to take off too soon, before the plane had enough airspeed to leave the ground safely.  Rotation, as the maneuver that lifts the plane off the ground is called, must not come too early, or else the plane is likely either to stall or to lose altitude after an initial rise.  Stalling is an aerodynamic effect that happens when an airfoil meets the oncoming air at an excessive angle of attack: the airflow no longer follows the upper surface in a controlled way but separates from it, and lift decreases dramatically.  An airplane entering a sudden stall can appear to pitch upward and then simply drop out of the air.  While no such stall is obvious in the videos of the flight obtained so far, something clearly deprived the plane of the lift it needed to stay airborne.
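
For readers who want to see why an excessive angle of attack is so punishing, here is a rough numerical illustration of the standard lift equation L = ½ρv²SC_L, using a toy lift-coefficient curve that collapses past an assumed stall angle.  The density, wing area, speed, and coefficients are round numbers I chose for scale only; they are not figures from this aircraft or this flight.

    # Illustration of the lift equation L = 0.5 * rho * v^2 * S * CL with a toy
    # lift-coefficient curve.  All numbers are assumptions for scale, not 787 data.
    RHO = 1.2          # air density near sea level, kg/m^3
    WING_AREA = 377.0  # assumed wing area, m^2

    def lift_coefficient(alpha_deg, stall_deg=15.0):
        """Toy CL curve: rises about 0.1 per degree, then collapses once the flow separates."""
        if alpha_deg <= stall_deg:
            return 0.1 * alpha_deg
        return 0.1 * stall_deg * 0.6   # abrupt loss of lift past the stall angle

    def lift_kilonewtons(speed_ms, alpha_deg):
        return 0.5 * RHO * speed_ms**2 * WING_AREA * lift_coefficient(alpha_deg) / 1000.0

    # Pulling the nose up further does not help once the wing stalls:
    for alpha in (10, 14, 18):
        print(f"{alpha} deg angle of attack -> {lift_kilonewtons(80.0, alpha):.0f} kN of lift")

The point of the toy curve is simply that beyond the stall angle, raising the nose produces less lift, not more.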

 

Other, more remote possibilities include engine problems that would reduce the available thrust below what a safe takeoff requires.  Some systemic control issue may have limited thrust, but there was no obvious mechanical failure of the engines before the crash, so this is not a leading theory.

 

In sum, initial signs are that some type of pilot error may have at least contributed to the crash:  too-early rotation, misapplication of flaps, or other more subtle mistakes.  A wide-body aircraft cannot be stopped on a dime, and once it has begun its takeoff roll there are not many options left to the pilot should a sudden emergency occur.  A decision to abort the takeoff beyond a certain speed will result in overrunning the runway.  And depending on how much extra space there is at the end of the runway, an overrun can easily lead to a crash, as happened in December 2024 when Jeju Air Flight 2216, arriving from Bangkok, overran the runway at Muan International Airport in South Korea and struck the concrete foundation of an antenna installation.
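
The arithmetic behind that abort decision is unforgiving, because stopping distance grows with the square of speed.  Here is a back-of-the-envelope sketch using the basic kinematic relation d = v²/(2a); the speeds, deceleration, and remaining-runway figure are assumed round numbers, not data from this accident.

    # Back-of-the-envelope look at rejecting a takeoff: stopping distance
    # d = v^2 / (2*a) grows with the square of speed, so a late abort can need
    # more pavement than is left.  All numbers are assumptions, not flight data.
    def stopping_distance_m(speed_ms, decel_ms2=3.0):
        """Distance needed to brake to a stop from speed_ms at a steady deceleration."""
        return speed_ms**2 / (2.0 * decel_ms2)

    runway_remaining_m = 1000.0   # assumed pavement left when trouble appears

    for speed_ms in (50.0, 70.0, 90.0):   # roughly 100 to 175 knots
        need = stopping_distance_m(speed_ms)
        verdict = "stops on the runway" if need <= runway_remaining_m else "overruns"
        print(f"abort at {speed_ms:.0f} m/s: needs {need:.0f} m -> {verdict}")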

 

The alternative of continuing the takeoff and trying to stay in the air may not succeed either, unless enough thrust is available to climb.  Although no expert mentioned the following possibility, and there may be good reasons for that, perhaps the brakes on the landing-gear wheels were not fully released.  That would slow the plane considerably, and a problem so unusual might not have given the pilots enough time to figure out what was happening.

 

Modern jetliners are exceedingly complicated machines, and the more parts a system has, the more ways they can combine to cause trouble.  The fact that there have so far been no calls to ground the entire fleet of 787 Dreamliners suggests that the consensus of experts is that a fundamental flaw in the plane itself is probably not at fault.

 

Once the flight-recorder data has been studied, we will know a great deal more about things such as flap and engine settings, precise timing of control actions, and other matters that are now a subject of speculation.  It is entirely possible that the accident happened due to a combination of minor mechanical problems and poor training or execution by the flight crew.  Many major tragedies in technology occur because a number of problems, each of which could be overcome by itself, combine to cause a system failure.

 

Our sympathies are with those who lost loved ones in the air or on the ground.  And I hope that whatever lessons we learn from this catastrophe will improve training and design so that these are the last fatalities involving a 787 for a long time to come.

 

Sources:  I referred to AP articles at https://apnews.com/article/air-india-survivor-crash-boeing-e88b0ba404100049ee730d5714de4c67 and https://apnews.com/article/india-plane-crash-what-to-know-4e99be1a0ed106d2f57b92f4cc398a6c, a Seattle Times article at https://www.seattletimes.com/business/boeing-aerospace/what-will-investigators-be-looking-for-in-air-india-crash-data/, and the Wikipedia articles on Air India and Air India Flight 171.

Monday, June 09, 2025

Science Vs. Luck: DNA Sequencing of Embryos

 

"Science Vs. Luck" was the title of a sketch by Mark Twain about a lawyer who got his clients off from a charge of gambling by recruiting professional gamblers, who convinced the jury that the game was more science than luck—by playing it with the jury and cleaning them out! Of course, there was more going on than met the eye, as professional gamblers back then had some tricks up their sleeves that the innocents on the jury wouldn't have caught.  So while the verdict of science looked legitimate to the innocents, there was more going on than they suspected, and the spirit of the law against gambling was violated even though the letter seemed to be obeyed.

 

That sketch came to mind when I read an article by Abigail Anthony, who wrote on the National Review website about a service offered by the New York City firm Nucleus Genomics:  whole-genome sequencing of in-vitro-fertilized embryos.  For only $5,999, Nucleus will take the genetic data provided by the IVF company of your choice and give you information on over 900 different possible conditions and characteristics the prospective baby might have, ranging from Alzheimer's to the likelihood that the child will be left-handed. 

 

Other companies offer similar services, so I'm not calling out Nucleus in particular.  What is peculiarly horrifying about this sales pitch is the implication that having a baby is no different in principle from buying a car.  If you go into a car dealership and order a new car, you get to choose the model, the color, and a range of optional features, and if you don't like that brand you can go to a different dealer and get even more choices.

 

The difference between choosing a car and choosing a baby is this:  the cars you don't pick will be sold to somebody else.  The babies you don't pick will die. 

 

We are far down the road foreseen by C. S. Lewis in his prescient 1943 essay "The Abolition of Man."  Lewis realized that what was conventionally called man's conquest of nature was really the exertion of power by some men over other men.  And the selection of IVF embryos by means of sophisticated genomic tests such as the ones offered by Nucleus is a fine example of such power.  In the midst of World War II, when the fate of Western civilization seemed to hang in the balance, Lewis wrote, " . . . if any one age attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power."

 

Eugenics was a highly popular and respectable subject from the late 19th century up until shortly after World War II, when its association with the horrors of the Holocaust committed by the Nazi regime gave it a much-deserved bad name.  The methods eugenicists used back then were crude ones:  sterilization of the "unfit," where the people deciding who was unfit always had more power than the unfit ones; encouraging the better classes to have more children and the classes deemed undesirable (such as Black people and other minorities) to have fewer; providing birth control and abortion services especially to those undesirable classes (a policy which is honored by Planned Parenthood to this day); and, in the case of Hitler's Germany, the wholesale extermination of whoever the regime deemed undesirable:  Jews, Romani, homosexuals, and so on.

 

But just as abortion hides behind a clean, hygienic medical facade to mask the fact that it is the intentional killing of a baby, the videos on Nucleus's website conceal the fact that in order to get that ideal baby with a minimum of whatever the parents consider to be undesirable traits, an untold number of fertilized eggs—all exactly the same kind of human being that you were when you were that age—have to be "sacrificed" on the altar of perfection. 

 

If technology hands us a power that seems attractive, that enables us to avoid pain or suffering even on the part of another, does that mean we should always avail ourselves of it?  The answer depends on what actions are involved in using that power. 

 

If the Nucleus test enabled the prospective parents to avert potential harms and diseases in the embryo analyzed without killing it, there would not be any problem.  But we don't know how to do that yet, and by the very nature of reproduction we may never be able to.  The choice being offered is made by producing multiple embryos, and disposing of the ones that don't come up to snuff. 

 

Now, at $6,000 a pop, it's not likely that anyone with less spare change than Elon Musk is going to keep trying until they get exactly what they want.  But the clear implication of advertising such genomic testing as a choice is that you don't have to take what Nature (or God) gives you.  If you don't like it, you can put it away and try something else.

 

And that's really the issue:  whether we acknowledge our finiteness before God and take the throw of the genetic dice that comes with having a child, the way it's been done since the beginning; or cheat by taking extra turns and drawing cards until we get what we want. 

 

The range of human capacity is so large and varied that even the 900 traits analyzed by Nucleus do not even scratch the surface of what a given baby may become.  This lesson is brought home in a story attributed to an author named J. John.  In a lecture on medical ethics, the professor confronts his students with a case study.  "The father of the family had syphilis and the mother tuberculosis.  They already had four children.  The first child is blind, the second died, the third is deaf and dumb, and the fourth has tuberculosis.  Now the mother is pregnant with her fifth child.  She is willing to have an abortion, so should she?"

 

After the medical students vote overwhelmingly in favor of the abortion, the professor says, "Congratulations, you have just murdered the famous composer Ludwig van Beethoven!"

 

Sources:  Abigail Anthony's article "Mail-order Eugenics" appeared on the National Review website on June 5, 2025 at https://www.nationalreview.com/corner/mail-order-eugenics/.  My source for the Beethoven anecdote is https://bothlivesmatter.org/blog/both-lives-mattered. 

Monday, June 02, 2025

AI-Induced False Memories in Criminal Justice: Fiction or Reality?

 

A filmmaker in Germany named Hashem Al-Ghaili has come up with an idea to solve our prison problems:  overcrowding, high recidivism rates, and all the rest.  Instead of locking up your rapist, robber, or arsonist for five to twenty years, you offer him a choice:  conventional prison and all that entails, or a "treatment" taking only a few minutes, after which he could return to society a free . . . I was going to say, "man," but once you find out what the treatment is, you may understand why I hesitated.

 

Al-Ghaili works with an artificial-intelligence firm called Cognify, and his treatment would work like this:  after a detailed scan of the criminal's brain, false memories would be inserted into it, their nature chosen to make sure the criminal doesn't commit that crime again.  Was he a rapist?  Insert memories of what the victim felt and experienced.  Was he a thief?  Give him a whole history of realizing the loss he caused to others, repenting, and rejecting his criminal ways.  And by the bye, the same brain scans could be used to build a database of criminal minds to help figure out how to prevent such people from committing crimes in the first place.

 

Al-Ghaili admits that his idea is pretty far beyond current technological capabilities, but at the rate AI and brain research are progressing, he thinks now is the time to consider what we should do with such technologies once they become available.

 

Lest you think these notions are just a pipe dream, a sober study from the MIT Media Lab experimented with implanting false memories simply by having some of a group of 200 participants converse with a chatbot about a crime video they had all watched.  The participants did not know that the chatbots were designed to mislead them with questions that would confuse their memories of what they saw.  The researchers also tried the same trick with a pre-scripted chatbot and with a simple set of survey questions, and left a fourth division of the group alone as a control.

 

What the MIT researchers found was that the generative chatbot induced more than three times as many false memories as the control group, which was not exposed to any memory-clouding techniques, and more than the survey group experienced.  What this study tells us is that chatbots used to interrogate suspects or witnesses in a criminal setting could easily distort the already less-than-perfectly-reliable recollections on which we base legal decisions.

 

Once again, we are looking down a road where we see some novel technologies in the future beckoning us to use them, and we face a decision:  should we go there or not?  Or if we do go there, what rules should we follow? 

 

Let's take the Cognify prison alternative first.  As ethicist Charles Camosy pointed out in a broadcast discussion of the idea, messing with a person's memories by direct physical intervention and bypassing their conscious mind altogether is a gross violation of the integrity of the human person.  Our memories form an essential part of our being, as the sad case of Alzheimer's sufferers attests.  To implant a whole set of false memories into a person's brain, and therefore mind as well, is as violent an intrusion as cutting off a leg and replacing it with a cybernetic prosthesis.  Even if the person consents to such an action, the act itself is intrinsically wrong and should not be done. 

 

We laugh at such things when we see them in comedies such as Men in Black, when Tommy Lee Jones whips out a little flash device that makes everyone in sight forget what they've seen for the last half hour or so.  But each person has a right to experience the whole of life as it happens, and wiping out even a few minutes is wrong, let alone replacing them with a cobbled-together script designed to remake a person morally. 

 

Yes, it would save money compared to years of imprisonment, but if you really want to save money, just chop off the head of every offender, even for minor infractions.  That idea is too physically violent for today's cultural sensibilities, but in a culture inured to the death of many thousands of unborn children every year, we can apparently get used to almost any variety of violence as long as it is implemented in a non-messy and clinical-looking way.

 

C. S. Lewis saw this type of thing coming as long ago as 1949, when he criticized the trend of treating criminals therapeutically, as sufferers from a disease, instead of punishing them retributively with a fixed term as the just penalty they deserve.  He wrote, "My contention is that this doctrine [of therapy rather than punishment], merciful though it appears, really means that each one of us, from the moment he breaks the law, is deprived of the rights of a human being."

 

No matter what either C. S. Lewis or I say, there are some people who will see nothing wrong with this idea, because they have a defective model of what a human being is.  One can show entirely from philosophical, not religious, presuppositions that the human intellect is immaterial.  Any system of thought which neglects that essential fact is capable of serious and violent errors, such as the Cognify notion of criminal memory-replacement would be.

 

As for allowing AI to implant false memories simply by persuasion, as the MIT Media Lab study appeared to do, we are already well down that road.  What do you think is going on any time a person "randomly" trolls the Internet looking at whatever the fantastically sophisticated algorithms show him or her?  AI-powered persuasion, of course.  And the crisis in teenage mental health and many other social-media ills can be largely attributed to such persuasion.

 

I'm glad that Hashem Al-Ghaili's prison of the future is likely to stay in the pipe-dream category at least for the next few years.  But now is the time to realize what a pernicious thing it would be, and to agree as a culture that we need to move away from treating criminals as laboratory animals and restore to them the dignity that every human being deserves. 

 

Sources:  Charles Camosy was interviewed on the Catholic network Relevant Radio on a recent edition of the Drew Mariani Show, where I heard about Cognify's "prison of the future" proposal. The quote from C. S. Lewis comes from his essay "The Humanitarian Theory of Punishment," which appears in his God in the Dock (Eerdmans, 1970).  I also referred to an article on Cognify at https://www.dazeddigital.com/life-culture/article/62983/1/inside-the-prison-of-the-future-where-ai-rewires-your-brain-hashem-al-ghaili and the MIT Media Lab abstract at https://www.media.mit.edu/projects/ai-false-memories/overview/.