Monday, March 27, 2023

Utah Clamps Down on Social Media for Teens

 

On Thursday, March 23, Utah Governor Spencer Cox signed a sweeping bill that will radically change the way social media are used by under-18 residents of the state.  The bill requires social-media outlets to verify the age of users, prohibits the use of social media by anyone under 18 from 10:30 PM to 6:30 AM, and allows users to sue companies for alleged harm caused by social media, among other things.  The bill's provisions won't take effect until a year from now, and TikTok and other social-media firms whose products are popular with teens are widely expected to file lawsuits charging that the new law abridges First Amendment rights.

 

Back in June of 2022, we reported on Christine Rosen's call for a total ban on social media for anyone under 16.  By then it was obvious that teenagers using social media could be seriously harmed by it.  Even if we set aside the isolated cases of social-media-enabled bullying and induced suicides, numerous academic studies have shown a positive correlation between certain types of social media use and "ill-being", as one meta-analysis puts it.  Especially for those teens who become addicted to the extent that their online activities hamper their daily lives, depression, anxiety, and suicidal ideation all rise above the general population's rates. 

 

At the time, Rosen's call was something of an outlier, but in the conservative, pro-family state of Utah, concern has turned into legislative action.  Besides imposing a curfew, insisting on age verification, and allowing lawsuits, the legislation also requires that parents be able to control their teens' access to social media.  In an Associated Press report on the new law, the Electronic Frontier Foundation, a digital-privacy advocacy group, was quoted as saying that "the majority of young Utahns will find themselves effectively locked out of much of the web."  And lobbyists for social-media tech companies decried the law as violating First Amendment rights of free speech.

 

My two cents on this are that our Federal system allows states to try things that might be unwise to experiment with nationwide.  If something goes badly wrong with a new law that only one state passes, the damage is limited, and other states can learn from the mistakes that were made.

 

I'm sure there will be problems implementing the Utah social-media law.  For one thing, TikTok, Facebook, and the other major social-media players will lead a charge of lawyers to have it declared unconstitutional or voided in some other way, possibly with a case that makes it to the U. S. Supreme Court.  Such a game should be played with caution, however, because the Court's conservative makeup these days may not be favorable to causes that formerly would have been slam-dunks. 

 

In the meantime, I think it is refreshing that legislators in Utah have tried to place the responsibility for supervising teenagers back where it belongs:  in the hands of parents. 

 

We can make an analogy between social-media use and the use of the family car.  Back when most teenagers desperately wanted to drive and got their driver's licenses at the earliest possible moment, Mom and Dad were the keepers of the keys.  Driving was a potentially dangerous activity, and parents exercised judgment over whether Junior was mentally and emotionally ready to drive, and how much, and where.  Such privileges could be revoked at any moment for bad behavior, which furnished a powerful lever of discipline that was entirely within reasonable child-rearing bounds. 

 

Now, of course, kids just call an Uber on their smartphones if they want to go somewhere, but usually they don't—social media is much more interesting than reality, or at least some teens act like it is.  The meta-analysis I cited above cautioned that not all types of social media are bad.  Kids simply staying in touch with other individual kids and posting funny or positive things does no harm, and can even contribute to well-being.  But the disruptive downside of social media use in teens has been so obvious for so long that it is high time one of our laboratories of democracy ran an experiment in restricting it to see what effect it has on rates of depression, anxiety, and suicide. 

 

The progressive push toward radical autonomy of the individual, teenage or otherwise, dictates that old-fashioned notions such as parental discipline and control get thrown out the window.  As studies have shown, however, the teenage brain has a long way to go before it produces mature, adult-style reasoned decisions (and some never get there at all).  For teens' own safety, parents need to supervise what teens do with their time, and increasingly that means social media.  While there are loopholes in every law, Utah's attempt to rein in the unbridled access to teens (and younger people) that social media companies have enjoyed up to this point should be applauded.  Some things really are more important than the free market, and even the First Amendment, when it comes to the raising of kids.  And the Utah law recognizes this.

 

State legislatures can often be wound around the little finger of lobbyists, and we will see what happens when multibillion-dollar international corporations throw their legal weight behind a concerted effort to overturn Utah's restrictions on social media.  It will be a test of integrity for the political system, as money and cultural influence seem to prevail over all kinds of better principles these days.  But maybe Utah can resist the onslaught, and show the rest of the country what can be done about the issue of social-media abuse in teens, a problem that has become a serious public-health crisis.  The next year or so will tell us whether state laws can do anything about it, or whether Big Tech can steamroll the stated preferences of parents and legislators without regard to their feelings or the well-being of their children.

 

Sources:  The Associated Press article "Kids in Utah will need parents' OK to access social media" appeared on Mar. 24, 2023 at https://apnews.com/article/social-media-utah-kids-84bd1f6481071726327bce25cf3e7522.  I also referred to an article in Frontiers in Public Health, "Digital Media Use and Adolescents' Mental Health During the Covid-19 Pandemic:  A Systematic Review and Meta-Analysis," at https://www.frontiersin.org/articles/10.3389/fpubh.2021.793868/full, and a reference to a Pew Research study on teen suicidal ideation and social media at https://www.levylaw.com/blog/2022/august/social-media-linked-to-rising-teen-suicide-rates/.

Monday, March 20, 2023

Trying Out ChatGPT

 

Since the artificial-intelligence laboratory OpenAI made its latest major project, ChatGPT, available to the public last fall, the chatbot's popularity, not to say notoriety, has soared.  Chatbots—software that responds to human-typed inputs with conversation-like output—are nothing new, but the combination of speed, apparent knowledge, and polish with which ChatGPT responds to a huge variety of "prompts"—basically, commands to write something about a subject—has attracted probably millions of users, a ton of publicity, and expressions of concern.

 

One of the most understandable concerns is that students will simply take any given writing assignment, put it into ChatGPT, and cut-and-paste the result into their homework.  Plagiarism is a chronic problem in education, and universities across the world have been holding special meetings to deal with the advent of ChatGPT and how to detect and prevent such cheating. 

 

I wish them luck, because when I tried the system this morning on a topic that's very familiar to me, it came up with verbiage of such high quality that I wouldn't hesitate to use it as the lead section of a research proposal, for instance.  That is, if I didn't mind the fact that I was using some computer's synthetic prose rather than my own. 

 

In case you want to judge for yourself how ChatGPT did, here's a sample.  The prompt I gave it was this:  "Describe ball lightning in two paragraphs or less (under 250 words) and quote experts in the field." 
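
 

(I used ChatGPT's web interface, but readers who want to reproduce the experiment programmatically can send the same prompt through OpenAI's API.  What follows is a minimal sketch in Python using the openai package as it existed in early 2023; the API key is a placeholder, and the model named is my assumption, since OpenAI doesn't say exactly which model sits behind the free web version.)

    # A minimal sketch, not the setup I actually used for this column.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder: substitute your own key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed; the web interface may use another model
        messages=[{
            "role": "user",
            "content": "Describe ball lightning in two paragraphs or less "
                       "(under 250 words) and quote experts in the field.",
        }],
    )

    # Print the chatbot's reply text.
    print(response.choices[0].message.content)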

 

The response begins, "Ball lightning is a rare and mysterious phenomenon in which a glowing sphere of light appears during thunderstorms and floats through the air for several seconds to several minutes before disappearing."  So far, so good.  It goes on for 136 words, which is under 250, and quotes only one expert, John Abrahamson.  The quote itself is a long one—57 words—and seems to be taken from an interview that I was not immediately able to identify by typing it into Google, a favorite trick I used to pull with student essays that I suspected of being copied wholesale from the Internet.  Either Google doesn't do that type of search very well anymore, or ChatGPT may have used some obscure transcription of a radio or TV interview, but not even part of the original two sentences shows up in my search.  So I simply have to take ChatGPT's word for it that it's accurately quoting Prof. Abrahamson, a New Zealand chemical engineer who published a well-publicized theory of ball lightning around 2000.

 

And that points out one of the big problems with some forms of AI:  they behave like black boxes, and figuring out how they work and where they get their information can be difficult or simply impossible.  I suppose I could go back and ask ChatGPT where it got the quote, but then I wouldn't have time to finish this column. 

 

So is access to powerful software such as ChatGPT a threat to the integrity of education and the livelihood of copywriters and grant writers everywhere, or on the other hand a great boon to the millions of people who can't put two coherent sentences together?  To some extent, I'd have to say "all of the above." 

 

Whatever else the ChatGPT developers have done, I have to congratulate them on the generally flawless grammar in all ChatGPT outputs I've seen so far.  They must have come up with some way of assessing the grammatical quality of sources and picking only the best ones, because believe me, there is a lot of bad English grammar out there, especially in the reams of technical publications that attract authors whose first language isn't English.  So that's the good news.

 

What is perhaps not so good news is that lots of us could become dependent on ChatGPT and its successors.  Now, is this a dependence that is harmless, like our dependence on pocket calculators instead of doing long division by hand?  Or is it a malignant dependence such as some people have for porn or alcohol or video games, distorting their lives and inhibiting human flourishing? 

 

My first impression is that the main hazard so far of using ChatGPT is that of letting the machine do one's writing and thinking too.  Now, technically, I let my pocket calculator do my thinking when I use it, but the kind of thinking it does is extremely mechanical—that's why mechanical calculators were successful—and it's no loss to my mental integrity to outsource the taking of square roots to a machine. 
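
 

(In case the word "mechanical" sounds dismissive, here is essentially the whole trick a machine needs for square roots—Newton's method, sketched below in Python.  The starting guess and the tolerance are arbitrary choices of mine, nothing special.)

    # A minimal sketch of Newton's method for square roots:
    # repeat one fixed arithmetic step until the guess stops changing much.
    def sqrt_newton(x, tolerance=1e-12):
        if x < 0:
            raise ValueError("square root of a negative number")
        if x == 0:
            return 0.0
        guess = x if x >= 1 else 1.0  # arbitrary starting guess
        while abs(guess * guess - x) > tolerance * x:
            guess = (guess + x / guess) / 2  # the entire "thinking" step
        return guess

    print(sqrt_newton(2))  # 1.4142135623..., matching a pocket calculator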

 

But expressing a complicated original idea in clear prose is something that has thus far been reserved for humans.  If I take out the word "original," it appears that ChatGPT can do as well as or better than your average human being at expressing complicated ideas clearly.  And of course, original is a relative term, as nobody can come up with a fourth primary color, for example.  We quickly get into philosophical waters here, but I will leave it with the Christian observation that God is the only Person who can truly originate things from nothing.  All so-called human inventions and discoveries are the unearthing or understanding of things and ideas that have always been latent in the universe, waiting for us to find them. 

 

I don't know whether some puckish mathematician has yet typed into ChatGPT, "Prove Goldbach's conjecture true or false."  Goldbach's conjecture is the proposition that every even number greater than 2 is the sum of two primes.  It's one of those things that seems like it ought to be true, and nobody can find a counterexample, but nobody so far has been able to prove it one way or the other.  From everything I understand about ChatGPT, it would come up with a lot of verbiage, and maybe equations, but as it simply pulls from whatever is already out there on the Internet (and according to its developers, it's skimpy on anything after 2021), if a proof isn't out there it's not likely to come up with one.
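
 

(Checking the conjecture is the easy part—it's proving it for all even numbers that has stumped mathematicians since 1742.  Here is a minimal brute-force sketch in Python, with 10,000 as an arbitrary limit of my own choosing.)

    # Brute-force check of Goldbach's conjecture for small even numbers.
    def is_prime(n):
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return False
            f += 2
        return True

    def goldbach_pair(n):
        """Return two primes summing to even n > 2, or None if none exist."""
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None

    # Every even number from 4 to 10,000 should have at least one pair.
    for n in range(4, 10001, 2):
        if goldbach_pair(n) is None:
            print("Counterexample found:", n)
            break
    else:
        print("No counterexample below 10,000 -- consistent with the conjecture.")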

 

So the mathematicians are safe.  For the rest of us, I'm not so sure.

 

Sources:  A good description of ChatGPT and instructions on how to use it were published on the website Digital Trends at https://www.digitaltrends.com/computing/how-to-use-openai-chatgpt-text-generation-chatbot/.  I also referred to a list of the ten hardest unsolved math problems at https://www.popularmechanics.com/science/math/g29251596/impossible-math-problems/.  (You don't think I really go around worrying about Goldbach's conjecture, do you?)

Monday, March 13, 2023

Justice and Artificial Intelligence

 

Historians of technology are familiar with the problem of technological advances that outstrip the legal system, leading to situations that are clearly unfair, but leave some people with no legal recourse.  In a recent article in The Dispatch, artificial intelligence (AI) expert Matthew Mittelstaedt calls for a case-by-case approach to the problem of AI advancing beyond the borders of the law.

           

In the article, reporter Alec Dent cited the case of someone who recently filed for a copyright on an AI-generated piece of artwork.  The U. S. Copyright Office rejected the application, which was filed on behalf of the AI program itself, because it lacked the "human authorship" needed to create a valid copyright claim.  We don't know what would have happened if the programmer or software developer had filed for a copyright on his or her own behalf, saying that the AI program was just a tool like an artist's brush.  But as AI systems take over more and more formerly creative jobs done by humans, the distinction may become harder to make.

 

ChatGPT, the powerful AI chatbot developed by the OpenAI firm, has attracted a lot of attention in academia for its ability to come up with plausible-sounding paragraphs of English text on virtually any topic, including those typically assigned as essay questions in the humanities.  It's not exactly like students weren't cheating before ChatGPT—there are numerous online stores where essays and even completed lab reports and exams can be purchased.  But ChatGPT creates work that, if not exactly original, hasn't existed in exactly the same form before, and thus falls between two stools:  previously existing work that is merely copied, and truly original work that a human mind has originated. 

 

The original intent of copyright law, and anti-cheating rules at universities for that matter, was to allocate the rewards (or penalties) due to a piece of original work fairly.  If Mr. A wrote an essay on the downfall of the Soviet Union, whether it was for his history class or for The Atlantic, Mr. A should get the credit (or blame) for it, not some word-processing software or AI program. 

 

One big problem I foresee will arise from the defective anthropology that prevails in our modern culture.  Human beings are different from other animals and from machines due to a difference in kind, not merely in degree.  In the rush to embrace ever more impressive applications of AI, a lot of people have lost sight of this fact, if they ever believed it in the first place. 

 

The people at the U. S. Copyright Office seem to believe it, at least so far, but the person who applied for a copyright in the name of an AI program seems to think that there's no essential difference between human intelligence and AI.  Unfortunately, that view is very popular in high places—much of academia, the Silicon Valley world, and even in parts of government. 

 

A degraded view of humanity results when one assumes that there is no essential distinction between humans and advanced AI programs.  Of course there is a practical distinction, so far—AI systems still make stupid mistakes that a normally intelligent five-year-old wouldn't make.  But the AI optimists see this as merely a temporary condition that will disappear with further advances in the field.

 

Saying that human intelligence and AI are basically the same drags humans down to the level of machines.  And all the consequences of treating humans as machines will result from that attitude.  No AI expert wants to be treated like a machine.  But allowing AI programs to hold copyrights or be in charge of decisions that formerly required human input does exactly that to the subjects or patients of the AI system in question.

 

Justice is something that happens among human beings, and human beings are not machines.  People advocating for robot rights and similar policies that attempt to attribute human-like characteristics to AI programs are not exalting robots—they are degrading humans in a subtle and indirect way, a way that selectively degrades some humans more than others. 

 

We have already heard of cases in which AI-informed medical or legal decisions turned out to be highly discriminatory against certain minority groups.  The AI developers are rightly exercised about such problems, but it takes human beings to recognize that other human beings are being treated unfairly.  The whole body of the law can be regarded as a huge algorithm for executing justice among people—after all, what are laws but a set of rules and procedures for carrying them out? 

 

But as of yet, we have not handed over the execution of the laws to machines.  Lawyers, judges, and juries do that.  The institution of the jury trial, as rare as it's getting to be in criminal justice, is a common-law recognition that ordinary people deserve to be judged by other ordinary people, not just experts, whether the experts are human beings or computers. 

 

Turning over works of creativity and judgment to AI systems may be efficient.  It may even be fairer in a human sense than leaving such actions in the fallible hands of human beings.  But beyond a certain point—that point to be judged by humans, not by machines—it becomes a dereliction of duty, just as a student typing in his history assignment to ChatGPT and handing in the AI program's answer is a dereliction of his duty to think for himself.

 

The law will eventually catch up to today's AI innovations, as it always does.  Of course, by then we will have new advances, and so for a time at least we will see a kind of legal-AI arms race with AI leading and the law lagging behind.  But we will all be losers if legislators and AI developers forget that human beings are different in kind, not just degree, from AI programs.  If we forget that critical fact, we may deserve what happens.

 

Sources:  Alec Dent's article "The Gaps Between the Law and Artificial Intelligence" was published on Mar. 8, 2023 at https://thedispatch.com/article/the-gaps-between-the-law-and-artificial-intelligence/.  I also referred to Wikipedia articles on ChatGPT and OpenAI.

 

NOTE:  Readers who wished to be notified of new blog articles, which are issued every Monday morning, used to be able to subscribe to an RSS feed.  A reader recently pointed out to me that this feature was no longer functioning.  I am currently trying to repair the problem and add an automatic email notification system, but it may take a while, so I ask readers interested in these features to be patient.

Monday, March 06, 2023

Is Twitter a Wholly Owned Subsidiary of the FBI?

 

When Elon Musk took over Twitter last October, he made available to reporters a large number of internal company emails relating to content moderation, deplatforming, and other interventions that the firm carried out at the request of, or under the influence of, the U. S. government.  Like most people, I was dimly aware of these revelations, but the news coverage of them was intermittent and depended greatly on the political orientation of the media outlet reporting it.  And I'm sure that remains the case today.

 

But recently I came across one report that summarizes the facts in a chilling and alarming way.  If what this report says is true, we indeed have a major problem that involves not only electronic social media, but the government and fundamental constitutional issues. 

 

In all such cases, one should consider the source.  The source of this report is John Daniel Davidson, a senior editor at The Federalist, a conservative website which Wikipedia says has carried false and misleading information at times.  The particular report I refer to did not appear on that website, but in a newsletter called Imprimis issued by Hillsdale College, a private college that is one of the few serious colleges in the U. S. that refuse to take federal funds on principle.  Adapted from a talk Davidson gave at the college, the report is entitled "The Twitter Files Reveal an Existential Threat."

 

Davidson details three examples of how the FBI, working both on its own behalf and as a liaison between a number of other federal agencies and Twitter, directed the firm to flag, suppress, or suspend numerous accounts:  that of the New York Post, whose offense was to break the news of the Hunter Biden laptop; that of President Trump, whose suspension after the January 6, 2021 Capitol riot was sui generis in its disregard for internal suspension policies; and, during the COVID-19 pandemic, those of users whose posts did not follow the official line on the pandemic that prevailed at the time.

 

The main point of Davidson's article is summed up in these words toward the end of the piece:  ". . . the entire concept of 'content moderation' is a euphemism for censorship by social media companies that falsely claim to be neutral and unbiased."  Davidson presents evidence that in 2017, Twitter publicly announced that all content moderation took place "at [Twitter's] sole discretion," but internally, the company would censor anything that "U. S. intelligence identified as a state-sponsored entity conducting cyber-operations," whether the intelligence community was right or not.  As later events proved, the suspected Russian influence on U. S. elections was largely a smokescreen for allowing the federal government to suppress a wide variety of actors, most of which were not sponsored by any state, in direct violation of the First Amendment.

 

Currently, the U. S. Supreme Court is considering two cases that involve Section 230 of the Communications Decency Act.  The basic thrust of the section is to allow social-media companies to claim immunity from civil liability regarding material posted on their sites by third parties—namely, anybody but the company itself.  It also exempts the companies from lawsuits involving content moderation as long as the company can show such moderation was a good-faith effort to remove "objectionable" material. 

 

This law was passed in the very early days of social media, when it was not at all clear that internet-based systems such as Facebook and Twitter would ever make money.  Those days are long gone, and the pipsqueak upstarts of the 1990s have become the 900-pound gorillas of the 2020s. 

 

Far from being a minor sideshow in the ways the public learns what their elected officials and the rest of the government are up to, Twitter is arguably the primary source of breaking news from officialdom, equivalent to the Associated Press wire service of the long-ago day when news really traveled mainly over copper wires to teletype machines.  As publishers, the newspapers, radio, and TV outlets of yore (yore being anytime before about 1980) knew that they were legally responsible for what they printed or broadcast, and made careful distinctions between what was news and what was analysis or opinion.  They had the freedom to print what they wanted to print, courtesy of the First Amendment, which prohibits the federal government from "abridging the freedom of speech, or of the press."  But they also had the responsibility of standing behind what they printed as facts, and so they stressed fact-checking and accuracy, plus an effort to present all the significant news and suppress none of it, no matter how far it strayed from the newspaper's own political position.

 

Granted, this was an ideal that was only approached in practice.  But if you transpose what Twitter has done in the last few years to the register of how news was produced in, say, 1970, the results can be shocking.

 

Suppose the 1970s Watergate break-in, Deep Throat's revelations, and the secretly recorded Nixon White House tapes had been systematically expunged from all newspaper, radio, and TV coverage through the intervention of an FBI that called it all a plot by the Russians?  And suppose, after Nixon told the news media that they wouldn't have him to kick around anymore following his 1962 loss to Pat Brown in the California governor's race, all the networks had agreed to ban him from ever appearing on radio or television again, again at the behest of the federal government? 

 

I am no fan of Richard Nixon.  But my point is that none of these acts of censorship happened back then, because the reigning media companies kept their distance from the government, no matter who was running it.

 

Needless to say, the situation is different now.  Davidson's summary of the Twitter Files is an indictment of the hand-in-glove way that the federal government, using the channel of the FBI, has succeeded in manipulating the media landscape to suit its purposes, and not the best interests of the American people at large.  It is far past time to restore a responsible distance between social media and the government, but doing that will require a well-informed public, and the media we have may not be up to the job.

 

Sources:  John Daniel Davidson's article "The Twitter Files Reveal an Existential Threat" appeared in Vol. 52, No. 1 (Jan. 2023) of Imprimis, a publication of Hillsdale College.  I also referred to a report on the Supreme Court Section 230 cases at https://www.cnbc.com/2023/02/21/supreme-court-justices-in-google-case-hesitate-to-upend-section-230.html and Wikipedia articles on The Federalist and Richard Nixon's November 1962 news conference.