Screen capture of a DeviantArt question about consent to the use of artwork in AI training datasets

Artists are rebelling against AI-driven imitation.

What’s new: DeviantArt, an online community and marketplace where artists display and sell digital art, launched DreamUp, a text-to-image generator that aims to help artists thwart attempts to imitate their styles or works.

How it works: DreamUp is a vanilla implementation of the open-source Stable Diffusion text-to-image generator.

  • Artists can fill out a form that adds their name, aliases, and named creations to a list of blocked prompt phrases.
  • DreamUp labels all output images as AI-generated. Users who upload the system’s output to DeviantArt are required to credit artists whose work influenced it. DeviantArt users can report images that they believe imitate an artist’s style. In unclear cases, DeviantArt will ask the artist in question to judge.
  • DeviantArt offers five free prompts a month. Members, who pay up to $14.95 for a monthly subscription, get up to 300 prompts a month; additional prompts cost up to $0.20 each.
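The blocked-prompt list described above can be sketched as a simple phrase filter. This is a hypothetical illustration, not DeviantArt's actual implementation; the phrase list and function names are invented for the example.

```python
import re

# Hypothetical blocklist of artist names, aliases, and named creations,
# as collected from the opt-out form described in the article.
BLOCKED_PHRASES = {"artist name", "artist alias", "named creation"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt mentions any blocked phrase (whole words only)."""
    normalized = prompt.lower()
    return not any(
        re.search(r"\b" + re.escape(phrase) + r"\b", normalized)
        for phrase in BLOCKED_PHRASES
    )

print(is_prompt_allowed("a castle in the style of artist name"))  # False
print(is_prompt_allowed("a castle at sunset"))  # True
```

A production filter would also need to catch misspellings and paraphrases, which is why disputed cases still go to human review.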

Opting out: Stable Diffusion was trained on images scraped from the web, including works from DeviantArt. Upon its release, some artists objected to the model’s ability to replicate their style via prompts like, “in the style of ____.”

  • DeviantArt opened fresh wounds upon releasing DreamUp by offering members the opportunity to add HTML and HTTP tags specifying that their work is not to be included in future training datasets, but only if they opted in.
  • Members objected to having to opt in to mark their works as off limits to AI developers. DeviantArt responded by adding the tags to all uploaded images by default.
  • It’s not clear what consequences would follow if an AI developer were to train a learning algorithm on such tagged images.
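The article doesn’t spell out the tags themselves. DeviantArt’s published opt-out directives use `noai` and `noimageai` values, delivered either in a page’s markup or as an HTTP response header, roughly as follows:

```html
<!-- Page-level directive, placed in the page's <head>: -->
<meta name="robots" content="noai, noimageai">

<!-- The same directive can be served as an HTTP response header: -->
<!-- X-Robots-Tag: noai, noimageai -->
```

Like `robots.txt`, these directives are advisory: they rely on scrapers choosing to honor them, which is why the consequences of ignoring them remain unclear.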

Behind the news: AI’s increasing ability to mimic the styles of individual artists has become a flashpoint between engineers and artists. When acclaimed artist Kim Jung Gi died in early October, within one day a former game developer released a model trained to produce works in his style. While the developer justified the work “as an homage,” responses included not only criticism and insults but also threats of violence. Such comments, one commenter noted, were part of a recent rise in “extremely violent rhetoric directed at the AI art community.”

Why it matters: Generative AI is attracting attention and funding, but the ethics of training and using such systems are still coming into focus. For instance, lawyers are preparing to argue that GitHub’s Copilot code-generation system, which was trained on open-source code, violates open-source licenses by failing to properly credit coders for their work. The outcome may resolve some uncertainty about how to credit a generative model’s output, but it seems unlikely to address issues of permission and compensation.

We’re thinking: Artists who have devoted years to developing a distinctive style are justifiably alarmed to see machines crank out imitations of their work. Some kind of protection against copycats is only fair. For the time being, though, the limit of fair use in training and using AI models remains an open question.
