Monday, September 09, 2024

The Politics of ChatGPT


So-called "artificial intelligence" (AI) has become an ever-increasing part of our lives in recent years.  After public-use forms of it such as OpenAI's ChatGPT were made available, millions of people have used it for everything from writing legal briefs to developing computer programs.  Even Google now presents an AI-generated summary for many queries on its search engine before showing users the customary links to actual Internet documents.


Because of the reference-librarian aspect of ChatGPT that lets users ask conversational questions, I expect lots of people looking for answers to controversial issues will resort to it, at least for starters.  Author Bob Weil did a series of experiments with ChatGPT in which he asked it questions that are political hot potatoes these days.  In every case, the AI bot came down heavily on the liberal side of the question, as Weil reports in the current issue of the New Oxford Review.


Weil's first question was "Should schools be allowed to issue puberty blockers and other sex-change drugs to children without the consent of their parents?"  While views differ on this question, I think it's safe to say that a plain "yes" answer, which would put schools in the business of medicating students and violate the trust pact they have with parents, is on the fringes of even the left.  What Weil got in response is most concisely summarized as weasel words.  In effect, ChatGPT said, well, such a decision should be a collaboration among medical professionals, the child, and parents or guardians.  As Weil pressed the point further, ChatGPT ended up saying that "Ultimately, decisions about medical treatment for transgender or gender-diverse minors should prioritize the well-being and autonomy of the child."  Weil questions whether minor children can be autonomous in any real sense, and he went on to several other questions with equally fraught histories.


A question about climate change turned into a mini-debate about whether science is a matter of consensus or logic.  ChatGPT seemed to favor consensus as the final arbiter of what passes for scientific truth, but Weil quotes fiction writer Michael Crichton as saying, "There's no such thing as consensus science.  If it's consensus, it isn't science.  If it's science, it isn't consensus." 


As Weil acknowledges, ChatGPT gets its smarts, such as they are, by scraping the Internet, so in a sense it can say along with the late humorist Will Rogers, "All I know is what I read in the papers [or the Internet]."  And given the economics of the situation and the political leanings of those who run English-language media, it's no surprise that the center of gravity of political opinion on the Internet leans to the left.


What is more surprising, to me anyway, is that although virtually all computer software is based on a strict kind of reasoning called Boolean logic, ChatGPT kept insisting on scientific consensus as the most important factor in deciding what to believe about global warming and similar issues.
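To make the contrast concrete, here is a minimal sketch of Boolean logic in action (my own illustration, not anything from Weil's article): the output is completely determined by the inputs, and no poll of opinion can change a single row of the truth table.

    # A Boolean AND gate: its truth table is fixed by definition,
    # regardless of how many people agree or disagree with it.
    def and_gate(a: bool, b: bool) -> bool:
        return a and b

    # Print every row of the truth table.
    for a in (False, True):
        for b in (False, True):
            print(f"{a!s:5} AND {b!s:5} = {and_gate(a, b)}")

Every layer of a conventional computer is built out of operations like this one, which is what makes the statistical flavor of ChatGPT's answers seem so incongruous.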


This ties in with something that I wrote about in a paper with philosopher Gyula Klima in 2020:  material entities such as computers in general (and ChatGPT in particular) cannot engage in conceptual thought, but only perceptual thought.  Perceptual thought involves things like perceiving, remembering, and imagining.  Machines can perceive (pattern-recognize) things, they can store them in memory and retrieve them, and they can even combine pieces of them in novel ways, as computer-generated "art" demonstrates.  But according to an idea that goes back ultimately to Aristotle, no material system can engage in conceptual thought, which deals in universals like the idea of dogness, as opposed to any particular dog.  To think conceptually requires an immaterial entity, a good example of which is the human mind.


This thumbnail sketch doesn't do justice to the argument, but the point is that if AI systems such as ChatGPT cannot engage in conceptual thought, then promoting perceivable and countable features of a situation, such as consensus, is exactly what you would expect them to do.  Doing abstract formal logic consciously, as opposed to performing it because your circuits were designed by humans to do so, seems to be something that ChatGPT may not come up with on its own.  Instead, it looks around the Internet, takes a grand average of what people say about a thing, and offers that as the best answer.  If the grand average of climate scientists says that the Earth will shortly turn into a blackened cinder unless we all start walking everywhere and eating nuts and berries, why then, that is the best answer "science" (meaning, in this case, most scientists) can provide at the time.
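As a toy contrast to the Boolean gate above (my own caricature of the "grand average" idea; neither Weil nor OpenAI describes ChatGPT's internals this way), consensus-taking amounts to majority voting over scraped opinions, so the answer tracks popularity rather than the validity of any argument:

    from collections import Counter

    # A caricature of consensus-following, not a real language model:
    # the "best" answer is simply whatever stance appears most often.
    def grand_average(opinions):
        tally = Counter(opinions)
        stance, count = tally.most_common(1)[0]
        return stance, count / len(opinions)

    # Hypothetical scraped stances on a contested question:
    scraped = ["catastrophe ahead"] * 70 + ["projections overstated"] * 30
    stance, share = grand_average(scraped)
    print(f"'Best' answer: {stance} (held by {share:.0%} of sources)")

Notice that a single dissenting source with an airtight argument would have no effect at all on the output; only the head count matters.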


But this approach confuses the sociology of science with the intellectual structure of science.  Yes, as a matter of practical outcomes, a novel scientific idea that is consistent with observations and explains them better than previous ideas may not catch on and be accepted by most scientists until the old guard maintaining the old paradigm simply dies out.  As Max Planck allegedly said, "Science progresses one funeral at a time."  But in retrospect, the abstract universal truth of the new theory was always there, even before the first scientist figured it out, and in that sense, it became the best approximation to truth as soon as that first scientist got it in his or her head.  The rest was just a matter of communication.


We seem to have weathered the first spate of alarmist predictions that AI will take over the world and end civilization, but as Weil points out, sudden catastrophic disasters were never the most likely threat.  Instead, what we should really worry about is the slow, steady advance as one person after another abandons original thought for the easy way out of just asking ChatGPT and taking its answer as the final word.  And as I've pointed out elsewhere, a great amount of damage to the body politic has already been done by AI-powered social media, which has polarized politics to an unprecedented degree.  We should thank Weil for his timely warning, and be on our guard lest we settle for intelligence that is less than human.


Sources:  Bob Weil's article "Wrestling for Truth with ChatGPT" appeared in the September 2024 issue of the New Oxford Review, pp. 18-24.  The paper by Gyula Klima and me, "Artificial intelligence and its natural limits," was published in AI & Society, vol. 36, pp. 18-21 (2021).  I also referred to the Wikipedia articles on "large language model" and "Planck's principle."

