Nude Deepfakes Spur Legislators: Taylor Swift deepfake outrage prompts U.S. lawmakers to propose anti-AI pornography laws.

Sexually explicit deepfakes of Taylor Swift galvanized public demand for laws against nonconsensual, AI-enabled pornography.

What’s new: U.S. lawmakers responded to public outcry over lewd AI-generated images of the singer by proposing legislation that would crack down on salacious images generated without their subjects’ permission.

High-profile target: In late January, AI-generated images that appeared to depict Swift in the nude circulated on social media sites including X (formerly known as Twitter) and messaging apps such as Telegram. The deepfakes originated on the image-sharing site 4chan, where users competed to prompt text-to-image generators in ways that bypassed their keyword filters. Swift fans reported the images, which were subsequently removed. Swift is reportedly considering legal action against websites that hosted them.

  • Senators of both major U.S. political parties proposed the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which would allow targets of AI-generated deepfakes to sue and collect financial damages from people who produce, possess, distribute, or receive sexually explicit, nonconsensual images.
  • Other U.S. laws under consideration would also permit legal action against deepfakes. The No AI FRAUD Act would allow public figures to sue for unlicensed uses of their likenesses. That legislation would apply not only to images but also to generated music. Opponents argue that it’s too broad and could outlaw parodies and harmless memes.
  • While U.S. federal law doesn’t regulate deepfakes, 10 states forbid nonconsensual deepfake pornography or provide legal recourse. However, it is often difficult to identify perpetrators and seek damages.

Behind the news: Sexually explicit deepfakes often target celebrities, but several recent incidents involve private citizens who were minors at the time.

  • In October 2023, students at a New Jersey high school distributed deepfakes that depicted more than 30 of their female classmates. One victim, 15-year-old Francesca Mani, is advocating passage of the U.S. No AI FRAUD Act and pushing New Jersey state lawmakers to introduce a similar bill.
  • In September 2023, more than 20 teenage girls in Extremadura, Spain, received messages that included AI-generated nudes of themselves. The perpetrators reportedly downloaded images from the victims’ Instagram accounts and used a free Android app to regenerate them without clothing. In Europe, only the Netherlands prohibits the dissemination of such deepfakes. The incident triggered an international debate over whether such activity constitutes distributing child pornography, which is widely illegal.
  • Law-enforcement agencies face a growing quantity of AI-generated imagery that depicts sexual abuse of both real and fictitious children, The New York Times reported. In 2002, the U.S. Supreme Court struck down a ban on computer-generated child pornography, ruling that it violated the constitutional guarantee of free speech.

Why it matters: The Swift incident dramatizes the growing gap between technological capabilities and legal restrictions. The rapid progress of image generators enables unscrupulous (or simply cruel) parties to prey on innocent victims in ways that exact a terrible toll for which reparation may be inadequate or impossible. In many jurisdictions, laws against nonconsensual pornography don’t account for AI-generated or AI-edited images. Under some of those laws, for instance, an image is actionable only if it depicts the victim’s own body rather than a generated look-alike.

We’re thinking: No one, whether a public or private figure, child or adult, should be subject to the humiliation and abuse of being depicted in nonconsensual pornographic images. The U.S., whose constitution guarantees free speech, has weaker tools for silencing harmful messages than other countries. Nonetheless, we hope that Swift gets the justice she seeks and that lawmakers craft thoughtful legislation to protect citizens and provide recourse for victims without banning legitimate applications.
