Performing artists are taking action to protect their earning power against scene-stealing avatars.
What’s new: Equity, a union of UK performing artists, launched a campaign to pressure the government to prohibit unauthorized use of a performer’s AI-generated likeness. The union published tips to help artists who work on AI projects exercise control over their performances and likenesses.
Protections for performers: Equity demands that the UK revise existing copyright laws and adopt guidelines enacted by other jurisdictions.

  • The union is pressing lawmakers to revise the UK Copyright, Designs and Patents Act 1988, which gives performers rights with respect to their performances, to extend those rights to computer-generated likenesses as well.
  • Equity wants to give performers greater control over AI-generated representations they believe are negative or harmful, such as deepfakes that expound hateful rhetoric. Under existing law, such rights cover only audio.
  • The union has called for lawmakers to implement three measures: the 2012 Beijing Treaty, which ensures that artists control reproduction and distribution of their audiovisual performances; the image rights provided by the British dependency of Guernsey, which empower performers to control their voice, mannerisms, and other distinctive attributes; and elements of the 2019 EU Copyright Directive that grant copyright protection to artists whose work is used to train or inspire replicas.

What performers think of AI: Equity conducted a survey of its members between November 2021 and January 2022. Among the 430 people who responded:

  • 65 percent believed that AI poses a threat to employment opportunities. This figure jumped to 93 percent among audio artists.
  • 24 percent had worked on projects that involved synthesizing a voice or avatar.
  • 29 percent had recorded audio for a text-to-speech system.
  • 93 percent supported prohibiting AI-generated replication of an artist’s performance without consent.

Why it matters: While synthetic images, video, and audio contribute to countless exciting works, they’re an obvious source of concern for artists who wish to preserve — never mind increase — their earning power. These developments also affect members of the audience, who may find that their favorite performers have less and less to do with the productions they nominally appear in.
We’re thinking: Using autotune to fix a wayward vocal performance doesn’t require the performer’s permission (though perhaps it should). The emerging generation of media production tools can generate performances entirely without the artist’s participation, further concentrating power in the hands of studios that own the technology. Defining the legal and ethical boundaries of generated media should help tip the balance toward performers, and it might lead to more fruitful creative collaborations between artists and machines.