Updated Feb 3, 2024 - Technology

Taylor Swift fake nudes show this harassment could happen to anyone

Illustration: Natalie Peeples/Axios

AI has made it easy to generate realistic-looking fake porn, and the targets of malicious deepfakes are finding they have little recourse.

Why it matters: When everyone with a computer can create a convincing and harmful image, anyone from high school teens to the world's biggest pop star could fall victim to these potentially damaging deepfakes.

  • With no federal legal protection and only a smattering of state laws on the issue, those affected can be left to deal with lasting consequences for their mental health and reputations.
  • For ordinary people, especially without an army of fans, that self-protection "can be really challenging, if not impossible," Bernard Marr, a futurist and generative AI expert, tells Axios.
  • Targets of this harassment can request social media companies take down posts or report accounts spreading them, "but there's no good way or structured way" to mitigate such situations, Srijan Kumar, an assistant professor at Georgia Tech who has researched AI, tells Axios.

Driving the news: A bipartisan group of senators on Tuesday introduced a bill designed to hold people responsible for sharing "digital forgery."

  • "Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real," Judiciary Committee Chair Sen. Dick Durbin (D-Ill.) and ranking member Sen. Lindsey Graham (R-S.C.) said in a statement.
  • "Victims have lost their jobs, and may suffer ongoing depression or anxiety."
  • If passed, the legislation would provide a civil remedy for identifiable victims.
  • The proposed legislation came a day before senators grilled Big Tech CEOs on Capitol Hill over the exploitation of children on social media.

Between the lines: Victims of harmful AI-generated images are limited in their legal options, Mary Anne Franks, the president of the Cyber Civil Rights Initiative, tells Axios.

  • Even in states with relevant laws, the abuse is defined "in really, really specific terms that are hard for the average victim to meet," she says. Other hurdles include having to know who the original perpetrator was.

Catch up quick: Taylor Swift was the target of explicit images generated with text-to-image AI tools, created using workarounds to Microsoft's anti-porn safeguards, 404 Media found.

  • A different method of creating explicit images requires as little as one photo of a person. Teenage girls at a New Jersey high school last year discovered that explicit deepfakes made with their faces had been circulating among students.

By the numbers: Fake nudes have increased more than 290% since 2018 on the top 10 websites that host them, the Washington Post reported last year.

  • 86% of people polled by the AI Policy Institute in January said they believe "it should be illegal" to use AI to create deepfake porn. 91% said people who use models to do so should be held liable.

The bottom line: History offers several examples of technology being misused in ways that disproportionately harm women, such as the 4chan celebrity nude photo leak and AI image apps producing sexually objectified avatars.

  • "The alarm has been sounded on this for so long," but conversations on regulation weren't taken seriously, Franks says.
  • "That, I think, speaks to the long-standing indifference that our law has toward crimes that disproportionately affect women and girls," she added.

Go deeper: Behind the Curtain: What AI architects fear most (in 2024)