
Monday, December 16, 2024

Will TAKE IT DOWN Take It Down?

Deepfake porn, that is.  Last week, Republican Senator Ted Cruz held a news conference in which he supported passage by the U. S. House of a bill called "TAKE IT DOWN," which the Senate passed on Dec. 4.  Together with Democratic Senator Amy Klobuchar, he has called for the House of Representatives to pass the bill, which would provide federal criminal penalties for those responsible for posting deepfake porn and would require the platforms that host it to take it down within 48 hours of receiving a request to do so. 

 

Several victims spoke at the news conference about the life-shattering harm that deepfake porn causes.  Elliston Berry, who was 14 in October of 2023, woke up one morning to find that a fellow high-school student had created pornographic images bearing her face and posted them on Snapchat.  Although she and her parents immediately set about trying to get Snapchat to remove the images, it took eight months and phone calls to Sen. Cruz's office to achieve that. 

 

Another tragic story, this one involving sextortion, was related by South Carolina state representative Brandon Guffey.  In 2022, scammers used Instagram to contact Rep. Guffey's 17-year-old son Gavin, pretending to be a young woman and asking the boy for nude photos.  After he complied, the scammers demanded blackmail payments from him.  Tragically, Gavin committed suicide within two hours of receiving these threats, and his father was mystified until the scammers began texting him and other relatives for cash too.  Sen. Klobuchar has counted over 20 such "sextortion" suicides between October 2021 and March of 2023.  Both she and Sen. Cruz are urging the House of Representatives to schedule an early vote on their bill before more teenagers die, according to a story in the Austin American-Statesman.

 

We hear a lot about how polarized politics is and how each party will ostracize any member who has any dealings with the other side.  Perhaps this rule doesn't apply to senators who aren't running for re-election soon, but last week's news conference is an example that belies that rule. 

 

Death knows no political affiliation, and the unstable minds of teenagers are fertile grounds for sowing seeds of digital manipulation and criminal exploitation.  The TAKE IT DOWN act has severe criminal penalties for anyone who creates deepfake porn without the victim's consent, or uses such material for criminal purposes, including fines and imprisonment of up to 30 months for intimidating minors. 

 

What I was curious about was the penalties spelled out for the platforms which harbor such evil.  What would have happened to Snapchat, for example, if the TAKE IT DOWN act had been enacted and they still dawdled eight months before removing the deepfake porn that used Elliston Berry's image? 

 

The worst that could happen to the company is that it would be found in violation of a Federal Trade Commission (FTC) rule.  Violating FTC rules is not something as familiar to me as a speeding ticket, for instance, so I had to look it up.  The main way the FTC enforces its rules is by levying fines, and indirectly, by raising a stink with bad publicity.  Now fines to a multibillion-dollar-revenue company can easily be written off as just a cost of doing business.  Bad publicity is less easily dealt with sometimes, but its effect is uncertain and depends on what else is going on in the media universe at the time.  While the penalties for laggard companies are there, they don't impress me as being rigorous enough to ensure that deepfake porn will really be taken down inside of 48 hours once the bill is passed.

 

Nevertheless, the bill is a step in the right direction.  We are in a situation with regard to teenagers and social media that is comparable to the situation we were in around 1970, when scientific evidence was accumulating that smoking led to lung cancer, but the tobacco companies were stonewalling, insisting that the evidence was insubstantial and refusing to take responsibility for the millions of additional deaths that smoking caused every year. 

 

The remarkable thing about the smoking-and-health issue was that not only did tobacco companies eventually pay big in monetary terms, but the climate of social opinion turned largely from one that favored smoking as a romantic and adult thing to do, to one that opposed smoking as both harmful to oneself and others.  And that change has persisted to this day.

 

There are hints that something similar may happen with social media's use by children and teenagers.  Schools and parents are increasingly realizing that any superficial benefits of social media are vastly outweighed by the potential and actual harms it works on the developing minds of young people.  Many schools now collect smartphones at the beginning of a school day and prohibit their use until kids leave for home.  I don't know many people who have school-age children, but the ones I know give a great deal of thought to how old a teenager should be before they get a smartphone, and none of them let children under 12 have one, as far as I know.

 

The day may come when letting someone under 18, say, use social media—at least social media as it is today—will be regarded as, well, I'm trying to think of something that everybody agrees kids shouldn't do.  Bungee-jumping over the Grand Canyon?  My point is, laws can follow public opinion as well as mold it.  If the great majority of adults raising kids in the U. S. conclude that letting social media corrupt their children's minds is simply wrong, we almost don't have to worry about the laws, because the parents will deal with it themselves.  But we need laws to keep sneaky teenagers from evading their parents' prohibitions, and the TAKE IT DOWN act will help tremendously in this regard.

 

The fate of any piece of legislation is uncertain until it's signed, but the indications are hopeful that this bill will make it into law.  It will be only one brick in the wall of protection that we need to erect to keep social media from wreaking more havoc, misery, and death upon children and teenagers.  But every brick counts.

 

Sources:  The Austin American-Statesman online edition carried a story entitled "US House Urged to Ban Deepfake Porn" on Dec. 12, 2024.  I also referred to the draft version of the bill itself at https://www.congress.gov/bill/118th-congress/senate-bill/4569/text, the story of Gavin Guffey's suicide at https://www.cnn.com/2024/01/30/us/rep-brandon-guffey-instagram-lawsuit-cec/index.html, and the Wikipedia article on Snapchat.  I previously blogged on deepfake porn at https://engineeringethicsblog.blogspot.com/2024/09/deepfake-porn-nadir-of-ai.html and https://engineeringethicsblog.blogspot.com/2024/09/deepfake-porn-rest-of-story.html.

 


Monday, September 23, 2024

Deepfake Porn: The Rest of a Story

 

This blog is a species of journalism, and while it's more of an opinion blog than a place to find new facts, I acknowledge the journalistic obligation of accuracy.  So when someone questions the accuracy of something I write, it naturally concerns me, and on occasion I will add corrections to my blogs as necessary.  Something like that happened with last week's blog, and the details are involved and interesting enough to devote today's blog to the issue.

 

I write this blog in about an hour or two every Sunday morning.  It is devoted to commenting on news articles and other published material related to engineering ethics because, among other things, contacting live sources at 5 AM on a Sunday is not likely to produce positive results.  So I depend only on material that I can get from the Internet, books, magazines, and other sources that are indifferent to the time at which they are consulted. 

 

Last week's blog was based on an item carried by the print version of the professional-organization magazine IEEE Spectrum, which still pays real reporters to talk or otherwise communicate with live people.  One of those live persons was the former Virginia House of Delegates candidate Susanna Gibson, who spoke with Spectrum reporter Eliza Strickland. 

 

Here is the relevant quote from that article:  "[Gibson] was running for a seat in the Virginia House of Delegates in 2023 when the Republican party of Virginia mailed out sexual imagery of her that had been created and shared without her consent, including, she says, screenshots of deepfake porn."  This inspired her to start "MyOwn," an organization devoted to passing laws against such malfeasance.

 

From time to time, the website Mercatornet.com picks up this blog and republishes it, with my permission.  That happened with last week's blog, and readers of Mercatornet began commenting on it.  I get copies of these comments, and one of them said the following about the sentence saying that the Republican party of Virginia mailed out sexual images of her made without her consent:  "This is a false statement.  Gibson took and streamed the videos herself while soliciting viewers for money.  Deep fake porn is terrible, but it has nothing to do with the Gibson porn videos."

 

Another comment right after that says this:  "According to Wikipedia:  In September 2023, a Republican operative provided The Washington Post with videos showing Gibson performing sex acts with her husband on the adult streaming site . . ." and it goes on to name the site.

 

The Wikipedia article on Susanna Gibson referred to a Washington Post article of Sept. 11, 2023.  Now, one can question that story too, but I have only so much space, so we're going to stop following the chain of sources at this point and see what the Post says.  According to that news source, Gibson and her husband recorded video of themselves doing certain things, and offering to do certain other things if viewers would send them monetary tokens, on a website called. . . well, let's leave that out, shall we?  Suffice it to say that members of that website are privileged to view videos that other members like the Gibsons post, but even that website's own rules forbid its users to ask other users to exchange money for seeing certain things. 

 

So according to the Post, as many as 5,000 people could have been watching the Gibsons doing things that former ages regarded as suitable only for the total privacy of one's bedroom.  And Susanna Gibson was apparently okay with that, especially if it raised money for her campaign, or next week's groceries, or whatever her motivation was. 

 

What got Ms. Gibson upset was not the fact that 5,000 strangers had a fly's-eye view of their bedroom, but that somebody copied and posted these videos onto non-subscription publicly available sites, and some Republican sent a note to the Washington Post telling the paper where the videos could be found.  And they found them, and published an article about them while Gibson was still running for office.  That's when she got mad and lawyered up and accused the paper of "an illegal invasion of my privacy designed to humiliate me and my family."

 

So where does the truth lie?  I don't think the Washington Post made up the details they published, which have the ring of authenticity.  And I suppose there are people around whose moral formation is so twisted that they think making money from porn seen by 5,000 total strangers is fine, but that when news of it gets around in a way that interferes with one's election campaign, one is suddenly a victim of invasion of privacy.  Was there even any mailing of any deepfake porn?  Only according to Gibson. 

 

There's enough mud in this story that nobody involved comes out quite clean.  IEEE Spectrum could have tried checking Gibson's story, for one thing, instead of just taking her word for it.  I could have checked around myself, but it was just a small part of a larger article, and I didn't.  And one can ask whether sites like the one that Gibson was using to get egg money should even exist, although as one of the lawyers involved pointed out, everybody on that site is a consenting adult and as long as they're okay with the rules and what people do on it, apparently nobody can stop them. 

 

None of this affects the main point of last week's blog, which is that deepfake porn is a terrible thing and something ought to be done about it.  But this weird little side story shows that deepfake porn is the tip of an iceberg of behaviors that technologies associated with the Internet have encouraged, not all of which are illegal or generally regarded as immoral, but which certainly couldn't be done as easily or as extensively as they are now with technological help. 

 

A professor I knew years ago once told me, "Never write anything you don't want to show up on the front page of the New York Times."  Updated for today to cover video as well as writing, I still think that's good advice.  Gibson now knows this to her regret, and this is the last blog I'm going to do on deepfake porn for a while. 

 

Sources:  My reprinted blog of last week with comments can be viewed at https://www.mercatornet.com/deepfake_porn_where_ai_goes_to_die, and the Washington Post story referred to in the Wikipedia article on Susanna Gibson is at https://www.washingtonpost.com/dc-md-va/2023/09/11/susanna-gibson-sex-website-virginia-candidate/. 

Monday, September 16, 2024

Deepfake Porn: The Nadir of AI

 

In Dante's Inferno, Hell is imagined as a conical pit with ever-deepening rings dedicated to the torment of worse and worse sinners.  At the very bottom is Satan himself, constantly gnawing on Judas, the betrayer of Jesus Christ. 

 

While much of Dante's medieval imagery would be lost on most people today, we still recognize a connection in language between lowness and badness.  Calling deepfake porn the nadir of how artificial intelligence is used expresses my opinion of it, and also the opinion of women who have become victims of having their faces stolen and applied to pornographic images.  A recent article by Eliza Strickland in IEEE Spectrum shows both the magnitude of the problem and the largely ineffective measures that have been taken to mitigate this evil—for evil it is.

 

With the latest AI-powered software, it can take less than half an hour to use a single photograph of a woman's face to produce a 60-second porn video that makes it look like the victim was a willing participant in whatever debauchery the original video portrayed.  A 2024 research paper reports on a survey of 16,000 adults in ten countries and finds that 2.2% of the respondents reported being a victim of "non-consensual synthetic intimate imagery," which is apparently just a more technical way of saying "deepfake porn."  The U. S. was one of the ten countries included, and 1.1% of the respondents in the U. S. reported being victimized by it.  Because virtually all the victims are women, and assuming men and women were represented equally in the survey, that means roughly one out of every fifty women in the U. S. has been a victim of deepfake porn. 

 

That may not sound like much, but it means that over 3 million women in the U. S. have suffered the indignity of being visually raped.  Rape is not only a physical act; it is a shattering assault on the soul.  And simply knowing that one's visage is serving the carnal pleasure of anonymous men is a horrifying situation that no woman should have to face.
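For readers who want to check that arithmetic, here is a rough back-of-envelope sketch in Python.  The figure of roughly 130 million adult women in the U. S. is my own assumption, not a number from the article or the survey:

us_victim_rate_all_adults = 0.011     # 1.1% of U.S. respondents reported victimization
female_share_of_respondents = 0.5     # assumption: men and women equally represented
rate_among_women = us_victim_rate_all_adults / female_share_of_respondents  # about 2.2%
us_adult_women = 130_000_000          # assumed U.S. adult female population (not from the article)
victims = rate_among_women * us_adult_women
print(round(1 / rate_among_women))            # about 45, i.e. roughly one woman in fifty
print(round(victims / 1_000_000, 1))          # about 2.9 million

The exact total depends on which population base one assumes, but any reasonable choice lands in the neighborhood of the 3 million figure quoted above.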

 

If a woman discovers she has become the victim of deepfake porn, what can she do?  Strickland interviewed Susanna Gibson, who founded a nonprofit called MyOwn to combat deepfake porn after she ran for public office and the Republican party of Virginia mailed out sexual images of her made without her consent.  Gibson said that although 49 of the 50 U. S. states have laws against nonconsensual distribution of intimate imagery, each state's law is different.  Most of the laws require proof that "the perpetrator acted with intent to harass or intimidate the victim," and that is often difficult, even if the perpetrator can be found.  Depending on the state, the offense can be classified as either a civil or criminal matter, and so different legal countermeasures are called for in each case.

 

Removing the content is so challenging that at least one company, Alecto AI (named after a Greek goddess of vengeance), offers to search the Web for a person's image being misused in this way, although the startup is not yet ready for prime time.  In the absence of such help, women who have been victimized have to approach each host site individually with legal threats and hope for the best, which is often pretty bad.

 

The Spectrum article ends this way:  ". . . it would be better if our society tried to make sure that the attacks don't happen in the first place."  Right now I'm trying to imagine what kind of a society that would be.

 

All I'm coming up with so far is a picture I saw in a magazine a few years ago of Holland, Michigan.  I have no idea what the rates of deepfake porn production are in Holland, but I suspect they are pretty low.  Holland is famous for its Dutch heritage, its homogeneous culture, and its 140 churches for a town of only 34,000 people.  The Herman Miller furniture company is based nearby, and the "What Would Jesus Do?" meme that became popular some years ago originated there. 

 

Though I've never been to Holland, Michigan, it seems like it's a place that emphasizes human connectedness over anonymous interchanges.  If everybody just put down their phones and started talking to each other instead, there would be no market for deepfake porn, or for most of the other products and services that use the Internet either.

 

As recently as 40 years ago, we had a society in which deepfake porn attacks didn't happen (at least not without a lot of work and movie-studio-quality equipment).  That was because the technology wasn't available.  So there's one solution:  throw away the Internet.  Of course, that's like saying "throw away the power grid," or "throw away fossil fuels."  But people are saying the latter, though for very different reasons. 

 

This little fantasy exercise shows that logically, we can imagine a society (or really a congeries of societies—congeries meaning "disorderly collection") in which deepfake porn doesn't happen.  But we'd have to give up a whole lot of other stuff we like, such as the ability to use advanced free Internet services for all sorts of things other than deepfake porn. 

 

The fact that swearing off fossil fuels—which are currently just as vital to the lives of billions as the Internet—is the topic of serious discussions, planning, and legislation worldwide, while the problem of deepfake porn is being dealt with piecemeal and at a leisurely pace, says something about the priorities of the societies we live in. 

 

I happen to believe that the devil is more than an imaginative construct in the minds of medieval writers and thinkers.  And one trick the devil likes to pull is to get people's attention away from a present immediate problem and onto some far-away future threat that may not even happen.  His trickery appears to be working fine in the fact that deepfake porn is spreading with little effective legal opposition, while global warming (which is undeniably happening) looms infinitely larger on the worry lists of millions. 

 

Sources:  Eliza Strickland's article "Defending Against Deepfake Pornography" appeared on pp. 5-7 of the October 2024 issue of IEEE Spectrum.  The article "Non-Consensual Synthetic Intimate Imagery:  Prevalence, Attitudes, and Knowledge in 10 Countries" appeared in the Proceedings of the 2024 ACM CHI Conference on Human Factors in Computing Systems and is available at https://dl.acm.org/doi/full/10.1145/3613904.3642382.