Back in October last year, a Walled Culture post noted that generative AI programs were likely to have a massive impact on both copyright and creation. When programs can produce free texts, images and sounds that are “good enough” for most everyday purposes, copyright becomes largely irrelevant. Creativity is impacted too, but not just in the obvious, possibly negative way. The free availability of an endless supply of AI generated works will make truly original, human creations more valuable. But of course, many artists don’t see those positives. Obsessed as they are with ownership and its infringements, they have responded to generative AI in the only way they know: by bringing a lawsuit. Their claim:
we’ve heard from people all over the world — especially writers, artists, programmers, and other creators — who are concerned about AI systems being trained on vast amounts of copyrighted work with no consent, no credit, and no compensation.
Today, we’re taking another step toward making AI fair & ethical for everyone. On behalf of three wonderful artist plaintiffs — Sarah Andersen, Kelly McKernan, and Karla Ortiz — we’ve filed a class-action lawsuit against Stability AI, DeviantArt, and Midjourney for their use of Stable Diffusion, a 21st-century collage tool that remixes the copyrighted works of millions of artists whose work was used as training data.
there is a big issue with how things are described in the lawsuit that clashes with how machine learning and diffusion models work in reality. The disparity is that there appears to be a big leap in understanding between the training of a model and how the model stores that knowledge. According to the complaint, Stability.ai takes the images in the training dataset and these are “stored at and incorporated into Stable Diffusion as compressed copies”. This is not what happens at all: a trained model does not contain copies of the training data, as that would create an unwieldy behemoth of unfathomable size. What happens instead is the creation of clusters of representations of things, namely a latent space.
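The size argument can be checked with some back-of-the-envelope arithmetic. The figures below are approximate public numbers, not taken from the lawsuit: Stable Diffusion v1 is reported to have been trained on a LAION subset of roughly 2.3 billion images, and its released checkpoint is roughly 4 GB.

```python
# Back-of-the-envelope check of the "compressed copies" claim.
# Both figures are approximate public numbers for Stable Diffusion v1
# (assumptions, not facts asserted in the lawsuit):
TRAINING_IMAGES = 2_300_000_000   # approx. images in the LAION-2B-en subset
CHECKPOINT_BYTES = 4 * 1024**3    # approx. size of the v1 checkpoint (~4 GB)

bytes_per_image = CHECKPOINT_BYTES / TRAINING_IMAGES
print(f"{bytes_per_image:.2f} bytes per training image")
# Under two bytes per image: nowhere near enough to hold even a
# heavily compressed copy of each work, which is consistent with the
# latent-space account rather than the "compressed copies" one.
```

Even if both figures are off by a factor of two, the conclusion is the same: the model is orders of magnitude too small to contain copies of its training set.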
What is likely to happen during the trial, if it gets to that, is that there will be expert testimony, and this claim is likely to fall easily.
IANAL, but I agree that the lawsuit seems flawed in its understanding of how generative AI works, and that flaw is likely to cause the action to fail. If it does, it will probably also make it harder for future lawsuits to succeed in this area (Getty Images has just announced one in the UK against Stability AI, but there are no details yet). The following paragraph from the home page of the legal action is also rather telling:
These resulting images may or may not outwardly resemble the training images. Nevertheless, they are derived from copies of the training images, and compete with them in the marketplace.
This admits that generative AI images may not even look like the input data, but still tries to claim that they represent some kind of infringement because they are “derived” from the training images, even though they do not copy them; as Guadamuz notes, they analyse them. By the logic of this lawsuit, artists who look at other works, and dare to think about how they are put together, are also infringing by virtue of the “input” those creations provide for other, non-copying works.
What’s sad about this lawsuit is that it represents a further instance of copyright-obsessed creators reflexively fighting against exciting new developments in technology. It comes from a misplaced sense of ownership of intangible creative elements that belong to the artistic commons, and thus to everyone. It’s yet another result of copyright’s malign influence on creativity and creators.
Featured image created with Stable Diffusion.