Monday, June 30, 2025

Supreme Court Validates Texas Anti-Porn Law

 

On Friday, June 27, the U. S. Supreme Court issued its decision in the case of Free Speech Coalition, Inc. v. Paxton.  The Free Speech Coalition is an organization representing the interests of the online pornography industry, and Kenneth Paxton is the controversial attorney general of Texas, whose duty it is to enforce a 2023 law that "requires pornography websites to verify the age of users before they can access explicit material," according to a report by National Review.  The Court upheld the Texas law, finding it a constitutional exercise of a state's responsibility to prevent children from "accessing sexually explicit content."

 

This ruling has implications beyond Texas, as 22 other states have adopted similar laws, and the Court's decision means that those laws are probably safe from First Amendment challenges in federal court as well.

 

This is a matter of interest to engineering ethicists because, whether we like it or not, pornography has played a large role in electronic media at least since the development of consumer video-cassette recorders in the 1970s.  As each new medium has appeared, pornographers have been among its earliest adopters.  Around 1980, as I was considering a career move into the electronic communications industry, one of the jobs I was offered was as an engineer for a satellite cable-TV company.  One of the factors that made me turn it down was that a good bit of their programming back then was of the Playboy Channel ilk.  I ended up working for a supplier of cable-TV equipment, which wasn't much better, perhaps, but that job lasted only a couple of years before I went back to school and remained in academia thereafter.

 

The idea behind the Texas law is that children exposed to pornography suffer objective harm.  The American College of Pediatricians has a statement on its website attesting to the problems pornography causes children:  depression, anxiety, violent behavior, and "a distorted view of relationships between men and women."  And it's not a rare problem.  The ubiquity of mobile phones means that even children who do not have their own phone are exposed to porn by their peers, so even parents who do not allow their children to have a mobile phone are currently pretty defenseless against the onslaught of online pornography.

 

Requiring porn websites to verify a user's age is a small but necessary step in reducing the exposure of young people to the social pathology of pornography.  In an article in the online journal The Dispatch, Charles Fain Lehman proposes that we dust off obscenity laws to prosecute pornographers regardless of the age of their clientele.  The prevalence of porn in the emotional lives of young people has ironically led to a dearth of sexual activity in Gen Z, who have lived with its presence all their lives.  In a review of several books that ask why people in their late teens and 20s today are having less sex than previous generations, New Yorker writer Jia Tolentino cites the statistic that nearly half of adults in this age category regard porn as harmful, while only 37 percent of older millennials do.  And 15 percent of young Americans have encountered porn by the age of 10.

 

There are plenty of science-based reasons to keep children and young teenagers from viewing pornography.  For those who believe in God, I would like to add a few more.  In the Gospel of Matthew, Jesus tells his disciples that they must "become like children" to enter the kingdom of Heaven.  Then he warns that "whoever causes one of these little ones who believe in me to sin [the Greek word means 'to stumble'], it would be better for him to have a great millstone fastened round his neck and to be drowned in the depths of the sea" (Matt. 18:6).  People who propagate pornography that ten-year-olds can watch on their phones seem to fill the bill as those who cause children to stumble.

 

The innocence of children can be overrated, as anyone who has dealt with a furious two-year-old can attest.  But children do have a kind of mental virginity:  the absence of cruel and exploitative sexual images in their minds helps keep them away from certain kinds of sin before they can even understand what is involved.  Until a few decades ago, most well-regulated societies protected children from viewing, reading, or hearing pornography, and those who wished to access it had to go to considerable effort to seek out a bookstore or porn theater.

 

But that is no longer the case, and as Carter Sherman, the author of a book quoted in the New Yorker, says, the internet is a "mass social experiment with no antecedent and whose results we are just now beginning to see."  Among those results is a debauching of the ways men and women interact sexually, to the extent that in one recent college-campus survey nearly two-thirds of women said they'd been choked during sex.

 

This is not the appropriate place to explore the ideals of how human sexuality should be expressed.  But suffice it to say that the competitive and addictive nature of online pornography invariably degrades its users, pushing them toward sexual attitudes that are selfish, exploitative, and unlikely to lead to positive outcomes.

 

The victory of Texas's age-verification law at the Supreme Court is a step in the right direction toward the regulation of the porn industry, and gives hope to those who would like to see further legal challenges to its very existence.  Perhaps we are at the early stages of a trend comparable to what happened with the tobacco industry, which denied the objective health hazards of smoking until the evidence became overwhelming.  It's not too early for pornographers to start looking for millstones as a better alternative to their current occupation. 

 

Sources:  The article "Supreme Court Upholds Texas Age-Verification Law" appeared at https://www.nationalreview.com/news/supreme-court-upholds-texas-age-verification-porn-law/, and the article "It's Time to Prosecute Pornhub" appeared at https://thedispatch.com/article/pornhub-supreme-court-violence-obscenity-rape/.  I also referred to the Wikipedia article "Free Speech Coalition, Inc. v. Paxton" and the New Yorker article "Sex Bomb" by Jia Tolentino on pp. 58-61 of the June 30, 2025 issue. 


Monday, June 23, 2025

Should Chatbots Replace Government-Worker Phone Banks?

 

The recent slashing of federal-government staffing and funding has drawn the attention of the Distributed AI Research Institute (DAIR), and two of the Institute's members warn of impending disaster if the Department of Government Efficiency (DOGE) carries through its stated intention of replacing hordes of government workers with AI chatbots.  In the July/August issue of Scientific American, DAIR founder Timnit Gebru, joined by staffer Asmelash Teka Hadgu, decries the current method of applying general-purpose large-language-model AI to the specific task of speech recognition, which would be necessary to replace the human-staffed phone banks at the other end of the Social Security and IRS telephone lines with machines.

 

The DAIR authors give vivid examples of the kinds of things that can go wrong.  They focus on Whisper, OpenAI's speech-recognition model, and on studies by researchers at four universities of how well Whisper converts audio recordings of a person talking into transcribed text.

 

Machine transcription has come a long way since the early days of the field in the 1970s, when I heard Bell Labs' former head of research John R. Pierce say that he doubted speech recognition would ever be computerized.  But anyone who phones a large organization today is likely to deal with some form of automated speech recognition, as is anyone who has a Siri or other voice-controlled device in the home.  Just last week I was on vacation, and the TV in the room had to be controlled with voice commands.  Simple operations like asking for music or a TV channel are performed fairly well by these systems, but that's not what the DAIR people are worried about.

 

With more complex language, Whisper was shown not only to misunderstand things, but also to make up material that was not in the original audio file at all.  For example, the phrase "two other girls and one lady" in the audio file became, after Whisper transcribed it, "two other girls and one lady, um, which were Black."
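
 

The transcription step itself is easy to try at home.  What follows is a minimal sketch using OpenAI's open-source "whisper" Python package (installed with pip install openai-whisper); the audio filename is a hypothetical placeholder, and notice that the output gives no hint of which words were actually spoken and which the model may have invented.

import whisper

# Load a pretrained model; "base" is small enough to run on a laptop.
model = whisper.load_model("base")

# Transcribe a recording ("interview.wav" is a placeholder filename).
# transcribe() returns a dict whose "text" field holds the full transcript.
result = model.transcribe("interview.wav")

# Nothing in the output distinguishes heard words from hallucinated ones.
print(result["text"])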

 

This is an example of what is charitably called "hallucinating" by AI proponents.  If a human being did something like this, we'd just call it lying, but to lie requires a will and an intellect that chooses a lie rather than the truth.  Not many AI experts want to attribute will and intellect to AI systems, so they default to calling untruths hallucinations.

 

This problem arises, the authors claim, when companies try to develop AI systems that can do everything and train them on huge unedited swaths of the Internet, rather than tailoring the design and training to a specific task, which of course costs more in terms of human input and guidance.  They paint a picture of a dystopian future in which somebody who calls Social Security can't ever talk to a human being, but just gets shunted around among chatbots which misinterpret, misinform, and simply lie about what the speaker said.

 

Both government-staffed interfaces with the public and speech-recognition systems vary greatly in quality.  Most people have encountered at least one or two government workers who are memorable for their surliness and aggressively unhelpful demeanor.  But there are also many such people who go out of their way to pay personal attention to the needs of their clients, and these are the kinds of employees we would miss if they got replaced by chatbots.

 

Elon Musk's brief tenure as head of DOGE is profiled in the June 23 issue of The New Yorker, and the picture that emerges is that of a techie dude roaming around in organizations he and his tech bros didn't understand, causing havoc and basically throwing monkey wrenches into finely adjusted clock mechanisms.  The only thing likely to happen in such cases is that the clock will stop working.  Improvements are not in the picture, and in many cases not even cost savings.  As an IRS staffer pointed out, many IRS employees bring in many times their salary's worth of added tax revenue by catching tax evaders.  Firing those people may look like an immediate short-term economy, but in the long term it will cost billions.

 

Now that Musk has left DOGE, the threat of massive-scale replacement of federal customer-service people by chatbots is less than it was.  But we would be remiss in ignoring DAIR's warning that AI systems can be misused or abused by large organizations in a mistaken attempt to save money.

 

In the private sector, there are limits to the harm that can be done.  If a business depends on answering phone calls accurately and helpfully, and it installs a chatbot that offends every caller, pretty soon that business will not have any more business and will go out of business.  But in the U. S. there's only one Social Security Administration and one Internal Revenue Service, and competition isn't part of that picture.

 

The Trump administration does seem to want to do some revolutionary things to the way government operates.  But at some level, they are also aware that if they do anything that adversely affects millions of citizens, they will be blamed for it. 

 

So I'm not too concerned that all the local Social Security offices scattered around the country will be shuttered, and one's only alternative will be to call a chatbot which hallucinates by concluding the caller is dead and cuts off his Social Security check.  Along with almost every other politician in the country, Trump recognizes Social Security is a third rail that he touches at his peril. 

 

But that still leaves plenty of room for future abuse of AI by trying to make it do things that people still do better, and maybe even more economically, than computers.  While the immediate threat may have passed with Musk's departure from DOGE, the tendency is still there.  Let's hope that sensible mid-level managers will prevail against the lightning strikes of DOGE and its ilk, and that the needed work of government will go on.

 

Sources:  The article "A Chatbot Dystopian Nightmare" by Asmelash Teka Hadgu and Timnit Gebru appeared in the July/August 2025 Scientific American on pp. 89-90.  I also referred to the article "Move Fast and Break Things" by Benjamin Wallace-Wells on pp. 24-35 of the June 23, 2025 issue of The New Yorker.

Monday, June 16, 2025

Why Did Air India Flight 171 Crash?

 

That is the question investigators will be asking in the days, weeks, and months to come.  On Thursday, June 12, a Boeing 787 Dreamliner took off from Ahmedabad in western India, bound for London.  On board were 242 passengers and crew.  It was a hot, clear day.  Videos taken from the ground show that after rolling down the runway, the plane "rotated" into the air (pitching its nose up to lift off) and assumed a nose-up attitude.  But after climbing for about fifteen seconds, it began to sink back toward the ground and plowed into a building housing students of a medical college.  All but one person on the plane were killed, and at least 38 people on the ground died as well.

 

This is the first fatal crash of a 787 since the plane was introduced in 2011.  The flight data recorder was recovered over the weekend, so experts have abundant information to comb through in determining what went wrong.  The formal investigation will take many weeks, but understandably, friends and relatives of the victims of the crash would like answers earlier than that.

 

Air India, the plane's operator, became a private entity only in 2022, after spending 69 years under the control of the Indian government.  An AP news report mentions that Air India planes were involved in fatal crashes that killed hundreds of people in 1978 and 2010.  The quality of training is always a question in accidents of this kind, and that issue will be addressed along with many others.

 

An article in the Seattle Times describes the opinions of numerous aviation experts as to what might have led to a plane crashing shortly after takeoff in this way.  While they all emphasized that everything they say is speculative at this point, they had some specific suggestions as well.

 

One noted that the appearance of dust in a video of the takeoff, just before the plane becomes airborne, might indicate that the pilot used up the entire runway in taking off.  That is not the usual procedure at major airports, and it might point to a problem with available engine power.

 

Several experts mentioned that the flaps may not have been in the correct position for takeoff.  Flaps are parts of the wing that can be extended downward during takeoff and landing to provide extra lift, and they are routinely extended for the first few minutes of any flight.  The problem with this theory, as one expert mentioned, is that modern aircraft have alarms to alert a negligent pilot that the flaps haven't been extended, so unless some other problem, such as low hydraulic pressure, set off so many alarms that the flap warning was lost in the noise, the pilots would have noticed the issue immediately.

 

Another possibility involves an attempt to take off too soon, before the plane had enough airspeed to leave the ground safely.  Rotation, as the actions to make the plane leave the ground are called, cannot come too early, or else the plane is likely to either stall or lose altitude after an initial rise.  Stalling is an aerodynamic effect that happens when an airfoil meets the incoming air at an excessive angle of attack:  the air no longer flows in a controlled way over the upper surface but separates from it, and lift decreases dramatically.  An airplane entering a sudden stall can appear to pitch upward and then simply drop out of the air.  While such a stall is not obvious in the videos of the flight obtained so far, something clearly deprived the plane of the lift it needed to stay in the air.
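
 

To put rough numbers on the stall effect, the standard lift equation says that lift equals one-half times air density times airspeed squared times wing area times a lift coefficient that depends on the angle of attack.  The little Python sketch below uses round, made-up figures (illustrative assumptions, not Boeing data or values from this flight) to show how a post-stall collapse of the lift coefficient slashes lift even when airspeed stays the same.

# Back-of-the-envelope stall arithmetic: lift = 0.5 * rho * v^2 * S * CL.
# Every number here is an illustrative assumption, not Flight 171 data.

def lift_kilonewtons(rho, v, S, CL):
    """Lift in kN from air density (kg/m^3), speed (m/s), wing area (m^2), lift coefficient."""
    return 0.5 * rho * v**2 * S * CL / 1000.0

rho = 1.1    # hot-day air density near sea level, kg/m^3 (hot air is thinner)
v = 75.0     # airspeed, m/s, about 146 knots (assumed)
S = 360.0    # wing area, m^2, roughly wide-body class (assumed)

# A crude lift-coefficient curve: CL climbs with angle of attack until a
# critical angle near 15 degrees, then collapses as the airflow separates.
for alpha_deg, CL in [(5, 0.8), (10, 1.3), (15, 1.6), (18, 0.9)]:
    print(f"angle of attack {alpha_deg:2d} deg: lift is roughly {lift_kilonewtons(rho, v, S, CL):4.0f} kN")

 

The exact figures matter less than the shape of the curve:  past the critical angle the computed lift drops by nearly half, which is the kind of sudden deficit that can make a heavily loaded plane sink back toward the ground.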

 

Other, more remote possibilities include engine problems that would limit the available thrust to less than that needed for a safe takeoff.  Some systemic control issue may have limited the thrust, but there was no obvious mechanical failure of the engines before the crash, so this possibility is not a leading one.

 

In sum, the initial signs are that some type of pilot error may have at least contributed to the crash:  too-early rotation, misapplication of flaps, or other more subtle mistakes.  A wide-body aircraft cannot be stopped on a dime, and once it has begun its takeoff roll there are not many options left to the pilot should a sudden emergency occur.  A decision to abort the takeoff beyond a certain point will result in overrunning the runway.  And depending on how much extra space there is at the end of the runway, an overrun can easily lead to a crash, as happened in December 2024 when Jeju Air Flight 2216 overran the runway at Muan International Airport in South Korea and crashed into the concrete foundation of some antennas.

 

The alternative of taking off and trying to stay in the air may not be successful either, unless enough thrust can be applied to gain altitude.  Although no expert mentioned the following possibility, and there may be good reasons for that, perhaps there was an issue with brakes not being fully released on the landing-gear wheels.  That would slow the plane considerably, and the unusual nature of the problem might not have given the pilots enough time to figure out what was happening.

 

Modern jetliners are exceedingly complicated machines, and the more parts a system has, the more ways there are for things to go wrong in combination.  The fact that there have so far been no calls to ground the entire fleet of 787 Dreamliners indicates that the consensus of experts is that a fundamental issue with the plane itself is probably not at fault.

 

Once the flight-recorder data has been studied, we will know a great deal more about things such as flap and engine settings, precise timing of control actions, and other matters that are now a subject of speculation.  It is entirely possible that the accident happened due to a combination of minor mechanical problems and poor training or execution by the flight crew.  Many major tragedies in technology occur because a number of problems, each of which could be overcome by itself, combine to cause a system failure.

 

Our sympathies are with those who lost loved ones in the air or on the ground.  And I hope that whatever lessons we learn from this catastrophe will improve training and design so that these are the last 787 fatalities for a long time to come.

 

Sources:  I referred to AP articles at https://apnews.com/article/air-india-survivor-crash-boeing-e88b0ba404100049ee730d5714de4c67 and https://apnews.com/article/india-plane-crash-what-to-know-4e99be1a0ed106d2f57b92f4cc398a6c, a Seattle Times article at https://www.seattletimes.com/business/boeing-aerospace/what-will-investigators-be-looking-for-in-air-india-crash-data/, and the Wikipedia articles on Air India and Air India Flight 171.

Monday, June 09, 2025

Science Vs. Luck: DNA Sequencing of Embryos

 

"Science Vs. Luck" was the title of a sketch by Mark Twain about a lawyer who got his clients off from a charge of gambling by recruiting professional gamblers, who convinced the jury that the game was more science than luck—by playing it with the jury and cleaning them out! Of course, there was more going on than met the eye, as professional gamblers back then had some tricks up their sleeves that the innocents on the jury wouldn't have caught.  So while the verdict of science looked legitimate to the innocents, there was more going on than they suspected, and the spirit of the law against gambling was violated even though the letter seemed to be obeyed.

 

That sketch came to mind when I read an article by Abigail Anthony, who wrote on the National Review website about a service offered by the New York City firm Nucleus Genomics:  whole-genome sequencing of in-vitro-fertilized embryos.  For only $5,999, Nucleus will take the genetic data provided by the IVF company of your choice and give you information on over 900 different possible conditions and characteristics the prospective baby might have, ranging from Alzheimer's to the likelihood that the child will be left-handed. 

 

There are other companies offering similar services, so I'm not calling out Nucleus in particular.  What is peculiarly horrifying about this sales pitch is the implication that having a baby is no different in principle from buying a car.  If you go into a car dealership and order a new car, you get to choose the model, the color, and a range of optional features, and if you don't like that brand you can go to a different dealer and get even more choices.

 

The difference between choosing a car and choosing a baby is this:  the cars you don't pick will be sold to somebody else.  The babies you don't pick will die. 

 

We are far down the road foreseen by C. S. Lewis in his prescient 1943 essay "The Abolition of Man."  Lewis realized that what was conventionally called man's conquest of nature was really the exertion of power by some men over other men.  And the selection of IVF embryos by means of sophisticated genomic tests such as the ones offered by Nucleus is a fine example of such power.  In the midst of World War II, when the fate of Western civilization seemed to hang in the balance, Lewis wrote, " . . .  if any one age attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power."

 

Eugenics was a highly popular and respectable subject from the late 19th century up to right after World War II, when its association with the horrors of the Holocaust committed by the Nazi regime gave it a much-deserved bad name.  The methods used by eugenicists back then were crude:  sterilization of the "unfit," where the people deciding who was unfit always had more power than the unfit ones; encouraging the "better" classes to have children and the "undesirable" classes (such as Black people and other minorities) to have fewer; providing birth control and abortion services especially to those undesirable classes (a policy which is honored by Planned Parenthood to this day); and, in the case of Hitler's Germany, the wholesale extermination of whoever his regime deemed undesirable:  Jews, Romani, homosexuals, and so on.

 

But just as abortion hides behind a clean, hygienic medical facade to mask the fact that it is the intentional killing of a baby, the videos on Nucleus's website conceal the fact that in order to get that ideal baby with a minimum of whatever the parents consider to be undesirable traits, an untold number of fertilized eggs—all exactly the same kind of human being that you were when you were that age—have to be "sacrificed" on the altar of perfection. 

 

If technology hands us a power that seems attractive, that enables us to avoid pain or suffering even on the part of another, does that mean we should always avail ourselves of it?  The answer depends on what actions are involved in using that power. 

 

If the Nucleus test enabled the prospective parents to avert potential harms and diseases in the embryo analyzed without killing it, there would not be any problem.  But we don't know how to do that yet, and by the very nature of reproduction we may never be able to.  The choice being offered is made by producing multiple embryos, and disposing of the ones that don't come up to snuff. 

 

Now, at $6,000 a pop, it's not likely that anyone with less spare change than Elon Musk is going to keep trying until they get exactly what they want.  But the clear implication of advertising such genomic testing as a choice is that you don't have to take what Nature (or God) gives you.  If you don't like it, you can put it away and try something else.

 

And that's really the issue:  whether we acknowledge our finiteness before God and take the throw of the genetic dice that comes with having a child, the way it's been done since the beginning; or cheat by taking extra turns and drawing cards until we get what we want. 

 

The range of human capacity is so large and varied that even the 900 traits analyzed by Nucleus do not even scratch the surface of what a given baby may become.  This lesson is brought home in a story attributed to an author named J. John.  In a lecture on medical ethics, the professor confronts his students with a case study.  "The father of the family had syphilis and the mother tuberculosis.  They already had four children.  The first child is blind, the second died, the third is deaf and dumb, and the fourth has tuberculosis.  Now the mother is pregnant with her fifth child.  She is willing to have an abortion, so should she?"

 

After the medical students vote overwhelmingly in favor of the abortion, the professor says, "Congratulations, you have just murdered the famous composer Ludwig van Beethoven!"

 

Sources:  Abigail Anthony's article "Mail-order Eugenics" appeared on the National Review website on June 5, 2025 at https://www.nationalreview.com/corner/mail-order-eugenics/.  My source for the Beethoven anecdote is https://bothlivesmatter.org/blog/both-lives-mattered. 

Monday, June 02, 2025

AI-Induced False Memories in Criminal Justice: Fiction or Reality?

 

A filmmaker in Germany named Hashem Al-Ghaili has come up with an idea to solve our prison problems:  overcrowding, high recidivism rates, and all the rest.  Instead of locking up your rapist, robber, or arsonist for five to twenty years, you offer him a choice:  conventional prison and all that entails, or a "treatment" taking only a few minutes, after which he could return to society a free . . . I was going to say, "man," but once you find out what the treatment is, you may understand why I hesitated.

 

Al-Ghaili works with an artificial-intelligence firm called Cognify, and his treatment would work as follows.  After a detailed scan of the criminal's brain, false memories would be inserted into it, their nature chosen to make sure the criminal doesn't commit that crime again.  Was he a rapist?  Insert memories of what the victim felt and experienced.  Was he a thief?  Give him a whole history of realizing the loss he caused to others, repentance on his part, and rejection of his criminal ways.  And by the bye, the same brain scans could be used to create a useful database of criminal minds, to figure out how to prevent these people from committing crimes in the first place.

 

Al-Ghaili admits that his idea is well beyond current technological capabilities, but at the rate AI and brain research are progressing, he thinks now is the time to consider what we should do with these technologies once they're available.

 

Lest you think these notions are just a pipe dream, a sober study from the MIT Media Lab experimented with implanting false memories simply by having some of a group of 200 subjects converse with a chatbot about a crime video they had all watched.  The participants did not know that the chatbots were designed to mislead them with questions that would confuse their memories of what they saw.  The researchers also tried the same trick with a simple set of survey questions, and they left a fourth subgroup alone as a control.

 

What the MIT researchers found was that the generative chatbot induced more than three times as many false memories as the control group experienced, and more than the survey-takers experienced as well; the control group was not exposed to any memory-clouding techniques.  What this study tells us is that using chatbots to interrogate suspects or witnesses in a criminal setting could easily be misused to distort the already less-than-fully-reliable recollections on which we base legal decisions.

 

Once again, we are looking down a road where we see some novel technologies in the future beckoning us to use them, and we face a decision:  should we go there or not?  Or if we do go there, what rules should we follow? 

 

Let's take the Cognify prison alternative first.  As ethicist Charles Camosy pointed out in a broadcast discussion of the idea, messing with a person's memories by direct physical intervention and bypassing their conscious mind altogether is a gross violation of the integrity of the human person.  Our memories form an essential part of our being, as the sad case of Alzheimer's sufferers attests.  To implant a whole set of false memories into a person's brain, and therefore mind as well, is as violent an intrusion as cutting off a leg and replacing it with a cybernetic prosthesis.  Even if the person consents to such an action, the act itself is intrinsically wrong and should not be done. 

 

We laugh at such things when we see them in comedies such as Men in Black, when Tommy Lee Jones whips out a little flash device that makes everyone in sight forget what they've seen for the last half hour or so.  But each person has a right to experience the whole of life as it happens, and wiping out even a few minutes is wrong, let alone replacing them with a cobbled-together script designed to remake a person morally. 

 

Yes, it would save money compared to years of imprisonment, but if you really want to save money, just chop off the head of every offender, even for minor infractions.  That idea is too physically violent for today's cultural sensibilities, but in a culture inured to the death of many thousands of unborn children every year, we can apparently get used to almost any variety of violence as long as it is implemented in a non-messy and clinical-looking way.

 

C. S. Lewis saw this type of thing coming as long ago as 1949, when he criticized the trend of treating criminals therapeutically, as patients suffering from a disease, instead of punishing them with fixed-term sentences as a just penalty for what they had done.  He wrote, "My contention is that this doctrine [of therapy rather than punishment], merciful though it appears, really means that each one of us, from the moment he breaks the law, is deprived of the rights of a human being."

 

No matter what either C. S. Lewis or I say, there are some people who will see nothing wrong with this idea, because they have a defective model of what a human being is.  One can show entirely from philosophical, not religious, presuppositions that the human intellect is immaterial.  Any system of thought which neglects that essential fact is capable of serious and violent errors, such as the Cognify notion of criminal memory-replacement would be.

 

As for allowing AI to implant false memories simply by persuasion, as the MIT Media Lab study appeared to do, we are already well down that road.  What do you think is going on any time a person "randomly" trawls the Internet, looking at whatever the fantastically sophisticated algorithms show him or her?  AI-powered persuasion, of course.  And the crisis in teenage mental health, along with many other social-media ills, can be largely attributed to such persuasion.

 

I'm glad that Hashem Al-Ghaili's prison of the future is likely to stay in the pipe-dream category at least for the next few years.  But now is the time to realize what a pernicious thing it would be, and to agree as a culture that we need to move away from treating criminals as laboratory animals and restore to them the dignity that every human being deserves. 

 

Sources:  Charles Camosy was interviewed on the Catholic network Relevant Radio on a recent edition of the Drew Mariani Show, where I heard about Cognify's "prison of the future" proposal. The quote from C. S. Lewis comes from his essay "The Humanitarian Theory of Punishment," which appears in his God in the Dock (Eerdmans, 1970).  I also referred to an article on Cognify at https://www.dazeddigital.com/life-culture/article/62983/1/inside-the-prison-of-the-future-where-ai-rewires-your-brain-hashem-al-ghaili and the MIT Media Lab abstract at https://www.media.mit.edu/projects/ai-false-memories/overview/.