Monday, September 16, 2024

Deepfake Porn: The Nadir of AI

 

In Dante's Inferno, Hell is imagined as a conical pit with ever-deepening rings dedicated to the torment of worse and worse sinners.  At the very bottom is Satan himself, constantly gnawing on Judas, the betrayer of Jesus Christ. 

 

While much of Dante's medieval imagery would be lost on most people today, we still recognize the connection in language between lowness and badness.  Calling deepfake porn the nadir of how artificial intelligence is used expresses my opinion of it, and also the opinion of women who have had their faces stolen and grafted onto pornographic images.  A recent article by Eliza Strickland in IEEE Spectrum shows both the magnitude of the problem and the largely ineffective measures that have been taken to mitigate this evil—for evil it is.

 

With the latest AI-powered software, it can take less than half an hour to turn a single photograph of a woman's face into a 60-second porn video that makes it look as if the victim was a willing participant in whatever debauchery the original video portrayed.  A 2024 research paper reports a survey of 16,000 adults in ten countries, in which 2.2% of the respondents said they had been victims of "non-consensual synthetic intimate imagery," which is apparently just a more technical way of saying "deepfake porn."  The U. S. was one of the ten countries included, and 1.1% of the U. S. respondents reported being victimized.  Because virtually all the victims are women, and assuming men and women were represented equally in the survey, that works out to about 2.2% of women, or one out of every fifty women in the U. S.

 

That may not sound like much, but it means that over 3 million women in the U. S. have suffered the indignity of being visually raped.  Rape is not only a physical act; it is a shattering assault on the soul.  And simply knowing that one's visage is serving the carnal pleasure of anonymous men is a horrifying situation that no woman should have to face.
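Those figures can be sanity-checked with a little arithmetic.  Here is a rough sketch of the calculation; the population figure is my own ballpark assumption, not something taken from the survey, and the 2.2% comes from doubling the 1.1% overall U. S. rate on the premise that nearly all victims are women:

```python
# Back-of-envelope check of the one-in-fifty and three-million figures.
adult_women_us = 135_000_000   # rough ballpark for U.S. adult women (assumption)
overall_rate = 0.011           # 1.1% of all U.S. respondents in the survey
rate_among_women = 2 * overall_rate  # ~2.2%, since nearly all victims are women

victims = adult_women_us * rate_among_women
print(f"about {victims / 1e6:.1f} million victims")    # about 3.0 million
print(f"about 1 in {1 / rate_among_women:.0f} women")  # about 1 in 45
```

One in forty-five is close enough to "one out of every fifty," and the total lands right around three million.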

 

If a woman discovers she has become the victim of deepfake porn, what can she do?  Strickland interviewed Susanna Gibson, who founded a nonprofit called MyOwn to combat deepfake porn after she ran for public office and the Republican party of Virginia mailed out sexual images of her made without her consent.  Gibson said that although 49 of the 50 U. S. states have laws against nonconsensual distribution of intimate imagery, each state's law is different.  Most of the laws require proof that "the perpetrator acted with intent to harass or intimidate the victim," and that is often difficult, even if the perpetrator can be found.  Depending on the state, the offense can be classified as either a civil or criminal matter, and so different legal countermeasures are called for in each case.

 

Removing the content is so challenging that at least one company, Alecto AI (named after a Greek goddess of vengeance), offers to search the Web for a person's image being misused in this way, although the startup is not yet ready for prime time.  In the absence of such help, women who have been victimized must approach each host site individually with legal threats and hope for the best, which is often pretty bad.

 

The Spectrum article ends this way:  ". . . it would be better if our society tried to make sure that the attacks don't happen in the first place."  Right now I'm trying to imagine what kind of a society that would be.

 

All I'm coming up with so far is a picture I saw in a magazine a few years ago of Holland, Michigan.  I have no idea what the rate of deepfake porn production is in Holland, but I suspect it is pretty low.  Holland is famous for its Dutch heritage, its homogeneous culture, and its 140 churches for a town of only 34,000 people.  The Herman Miller furniture company is based there, and the "What Would Jesus Do?" slogan that became popular in the 1990s originated there.

 

Though I've never been to Holland, Michigan, it seems to be a place that emphasizes human connectedness over anonymous interchanges.  If everybody just put down their phones and started talking to each other instead, there would be no market for deepfake porn, or for most of the other products and services that use the Internet, either.

 

As recently as 40 years ago, we had a society in which deepfake porn attacks didn't happen (at least not without movie-studio-quality equipment and a great deal of work).  That was because the technology wasn't available.  So there's one solution:  throw away the Internet.  Of course, that's like saying "throw away the power grid" or "throw away fossil fuels."  But people are saying the latter, though for very different reasons.

 

This little fantasy exercise shows that logically, we can imagine a society (or really a congeries of societies—congeries meaning "disorderly collection") in which deepfake porn doesn't happen.  But we'd have to give up a whole lot of other stuff we like, such as the ability to use advanced free Internet services for all sorts of things other than deepfake porn. 

 

The fact that swearing off fossil fuels—which are currently just as vital to the lives of billions as the Internet—is the topic of serious discussions, planning, and legislation worldwide, while the problem of deepfake porn is being dealt with piecemeal and at a leisurely pace, says something about the priorities of the societies we live in. 

 

I happen to believe that the devil is more than an imaginative construct in the minds of medieval writers and thinkers.  And one trick the devil likes to pull is to get people's attention away from a present immediate problem and onto some far-away future threat that may not even happen.  His trickery appears to be working fine in the fact that deepfake porn is spreading with little effective legal opposition, while global warming (which is undeniably happening) looms infinitely larger on the worry lists of millions. 

 

Sources:  Eliza Strickland's article "Defending Against Deepfake Pornography" appeared on pp. 5-7 of the October 2024 issue of IEEE Spectrum.  The article "Non-Consensual Synthetic Intimate Imagery:  Prevalence, Attitudes, and Knowledge in 10 Countries" is from the Proceedings of the CHI (Computer-Human Interaction) 2024 conference and is available at https://dl.acm.org/doi/full/10.1145/3613904.3642382.

Monday, September 09, 2024

The Politics of ChatGPT

 

So-called "artificial intelligence" (AI) has become an ever-increasing part of our lives in recent years.  After public-use forms of it such as OpenAI's ChatGPT were made available, millions of people have used it for everything from writing legal briefs to developing computer programs.  Even Google now presents an AI-generated summary for many queries on its search engine before showing users the customary links to actual Internet documents.

 

Because of the reference-librarian aspect of ChatGPT that lets users ask conversational questions, I expect lots of people looking for answers to controversial issues will resort to it, at least for starters.  Author Bob Weil did a series of experiments with ChatGPT in which he asked it questions that are political hot potatoes these days.  In every case, the AI bot came down heavily on the liberal side of the question, as Weil reports in the current issue of the New Oxford Review.

 

Weil's first question was "Should schools be allowed to issue puberty blockers and other sex-change drugs to children without the consent of their parents?"  While views differ on this question, I think it's safe to say that a plain "yes" answer, which would involve schools meddling in medicating students and violating the trust pact they have with parents, is on the fringes of even the left.  What Weil got in response can be most concisely summarized as weasel words.  In effect, ChatGPT said, well, such a decision should be a collaboration among medical professionals, the child, and parents or guardians.  As Weil pressed the point further, ChatGPT ended up saying that "Ultimately, decisions about medical treatment for transgender or gender-diverse minors should prioritize the well-being and autonomy of the child."  Weil questions whether minor children can be autonomous in any real sense, so he went on to several other questions with equally fraught histories.

 

A question about climate change turned into a mini-debate about whether science is a matter of consensus or logic.  ChatGPT seemed to favor consensus as the final arbiter of what passes for scientific truth, but Weil quotes fiction writer Michael Crichton as saying, "There's no such thing as consensus science.  If it's consensus, it isn't science.  If it's science, it isn't consensus." 

 

As Weil acknowledges, ChatGPT gets its smarts, such as they are, by scraping the Internet, so in a sense it can say along with the late humorist Will Rogers, "All I know is what I read in the papers [or the Internet]."  And given the economics of the situation and political leanings of those in power in English-language media, it's no surprise that the center of gravity of political opinion on the Internet leans to the left. 

 

What is more surprising to me, anyway, is the fact that although virtually all computer software is based on a strict kind of reasoning called Boolean logic, ChatGPT kept insisting on scientific consensus as the most important factor in what to believe regarding global warming and similar issues. 

 

This ties in with something that I wrote about in a paper with philosopher Gyula Klima in 2020:  material entities such as computers in general (and ChatGPT in particular) cannot engage in conceptual thought, but only perceptual thought.  Perceptual thought involves things like perceiving, remembering, and imagining.  Machines can perceive (pattern-recognize) things, they can store them in memory and retrieve them, and they can even combine pieces of them in novel ways, as computer-generated "art" demonstrates.  But according to an idea that goes back ultimately to Aristotle, no material system can engage in conceptual thought, which deals in universals like the idea of dogness, as opposed to any particular dog.  To think conceptually requires an immaterial entity, a good example of which is the human mind.

 

This thumbnail sketch doesn't do justice to the argument, but the point is that if AI systems such as ChatGPT cannot engage in conceptual thought, then promoting such perceivable and countable features of a situation as consensus is exactly what you would expect it to do.  Doing abstract formal logic consciously, as opposed to performing it because your circuits were designed by humans to do so, seems to be something that ChatGPT may not come up with on its own.  Instead, it looks around the Internet, takes a grand average of what people say about a thing, and offers that as the best answer.  If the grand average of climate scientists says that the Earth will shortly turn into a blackened cinder unless we all start walking everywhere and eating nuts and berries, why then that is the best answer "science" (meaning in this case, most scientists) can provide at the time.

 

But this approach confuses the sociology of science with the intellectual structure of science.  Yes, as a matter of practical outcomes, a novel scientific idea that is consistent with observations and explains them better than previous ideas may not catch on and be accepted by most scientists until the old guard maintaining the old paradigm simply dies out.  As Max Planck allegedly said, "Science progresses one funeral at a time."  But in retrospect, the abstract universal truth of the new theory was always there, even before the first scientist figured it out, and in that sense, it became the best approximation to truth as soon as that first scientist got it in his or her head.  The rest was just a matter of communication.

 

We seem to have weathered the first spate of alarmist predictions that AI will take over the world and end civilization, but as Weil points out, sudden catastrophic disaster was never the most likely threat.  Instead, what we should really worry about is the slow, steady advance as one person after another abandons original thought for the easy way out of simply asking ChatGPT and taking its answer as the final word.  And as I've pointed out elsewhere, a great amount of damage to the body politic has already been done by AI-powered social media, which has polarized politics to an unprecedented degree.  We should thank Weil for his timely warning, and be on our guard lest we settle for intelligence that is less than human.

 

Sources:  Bob Weil's article "Wrestling for Truth with ChatGPT" appeared in the September 2024 issue of New Oxford Review, pp. 18-24.  The paper by Gyula Klima and me, "Artificial intelligence and its natural limits," was published in AI & Society, vol. 36, pp. 18-21 (2021).  I also referred to Wikipedia for the definition of "large language model" and for "Planck's principle."

 

Monday, September 02, 2024

Free Speech In the Age of Government-Influenced Facebook

 

Mark Zuckerberg, the founder, chairman, and CEO of Meta Platforms, which includes Facebook, Instagram, and WhatsApp, recently sent a letter to Jim Jordan, Republican chairman of the House Judiciary Committee.  Zuckerberg is a busy man, and this was no bread-and-butter socializing note, but more along the lines of a confession.

 

In the note, Zuckerberg admitted that in 2021, Facebook had caved in to government pressure, specifically from the Biden White House, concerning certain posts relating to COVID-19, "including humor and satire."  The company was also guilty of "demoting" stories about Hunter Biden's laptop when it chose to believe the FBI's claim that it was Russian disinformation in 2020.  In both cases, Zuckerberg says basically we were wrong and we won't do it again.

 

The most generous interpretation of this letter is that here is an upstanding citizen, who also happens to be the fourth richest person in the world, admitting that he and his people did some things that in retrospect might not have been the best choice, given what he knows now.  But hey, he's learned from his mistakes, and we should all feel better that Zuckerberg and his companies have admitted they messed up in what were understandably hard circumstances. 

 

Ranged against this rather anodyne letter are some cherished U. S. traditions such as freedom of speech and the rule of law.  Let's talk about the rule of law first.

 

In a recent issue of Touchstone magazine, professor of law Adam J. MacLeod outlines how the idea of rule by law rather than men arose during the reign of the Emperor Justinian (485-565).  Justinian caused twelve ivory tablets to be placed on public display, tablets that contained a concise summary of the laws of the land.  All disputes were to be decided on the basis of reasoning from what the tablets said, not from what somebody in power said.

 

In placing reason above power, the rule of law placed everyone on a much more equitable footing.  The peasant who could reason out law was now able to defend himself against a powerful lord who wanted to take his land, if the peasant could show what the lord was trying to do was against the law.  MacLeod admits that since the late 1800s, jurisprudence has largely abandoned the fundamentals that supported the rule of law, but in practice, vestiges of it remain.  No thanks to Zuckerberg, however, for those vestiges.

 

Although Facebook is not a branch of government, in bowing to White House pressure it acted as a government agent.  And its near-monopoly on social media channels makes it a powerful player in its ability to censor unfavored speech, such as people making fun of Anthony Fauci or other prominent players in the COVID-19 follies.  So where was the ivory tablet to which a satirical outfit such as the Babylon Bee could appeal when its posts disappeared?  Its only option was to mount a lawsuit that might take years, would certainly cost tons of money, and might in the end amount to nothing.  So much for the rule of law.

 

Some will counter that the principle of freedom of speech does not apply to private companies such as Facebook, because a private entity can allow or disallow anything it likes and be as capricious about it as it wants.  If Facebook had the reach of my town paper, the San Marcos Daily Record, this argument would carry weight.  One little outlet being arbitrary about what it publishes is no big deal.  But Facebook, although not the only social-media show in town, is by far the largest, and its censorship, or lack thereof, hugely influences public discourse in the republic that is the United States, as it does in many other countries of the world with less of a tradition of free speech.

 

Once again, while Facebook is not a government entity, when it takes actions that the government pressures it to do (either through legal means or simply jawboning), it becomes an agent of that government.  And while it is perhaps true that Facebook did not violate the letter of the First Amendment which prohibits only Congress from making a law that abridges the freedom of speech, the spirit of the law is that the Federal government as a whole—executive, judicial, or legislative—should refrain from suppressing the freedom of the people to express themselves in any way that is not comparable to yelling "Fire!" falsely in a crowded theater. 

 

There are two extremes to which we might go in this situation, at opposite ends from the muddled middle in which we presently find ourselves.  One extreme would be to treat near-monopolies such as Facebook as "common carriers" like the old Ma Bell used to be.  With very few exceptions, nobody regulated what you could say over the telephone, and in the common-carrier model, Facebook would fire all its moderators and only retain the engineers who would keep hackers from crashing the entire system.  Other than that, anybody could say anything about anything.  Zuckerberg wouldn't censor anything, and I bet he'd be relieved to be rid of that little chore.

 

The other extreme would be to regulate the gazoo out of all social media and set up explicit "twelve-tablet"-like rules as to what can and can't be said on it.  We have something like this model in the way the Federal Communications Commission regulates what can be said or shown over the (public) airwaves (not cable).  The FCC is mostly concerned with obscene or indecent content, but that's just a historical fluke.  In a republic you can vote to regulate anything you want.  This would be a return to the pre-deregulation days of inefficient but reliable airline and phone service.  It would be duller and more predictable, but there are worse things than dull.

 

Neither of these extremes will come to pass, but the present near-total governmental inaction in either direction leaves a political vacuum in which Mark Zuckerberg, emperor of social media, will continue to do what he thinks best, and the rest of us simply have to deal with it.  And the rule of law and freedom of speech will continue to suffer.

 

Sources:  I referred to an Associated Press article "Zuckerberg says the White House pressured Facebook over some COVID-19 content during the pandemic," at https://apnews.com/article/meta-platforms-mark-zuckerberg-biden-facebook-covid19-463ac6e125b0d004b16c7943633673fc.  Zuckerberg's letter to Congress is at https://x.com/JudiciaryGOP/status/1828201780544504064/photo/1, and I also referred to https://en.wikipedia.org/wiki/The_World%27s_Billionaires.  Adam J. MacLeod's "How Law Lost Its Way" appeared on pp. 22-28 of the Sept/Oct 2024 issue of Touchstone.