The first lawsuit against generative AI seems doomed to fail because it misunderstands the technology

Back in October last year, a Walled Culture post noted that generative AI programs were likely to have a massive impact on both copyright and creation. When programs can produce free texts, images and sounds that are “good enough” for most everyday purposes, copyright becomes largely irrelevant. Creativity is impacted too, but not just in the obvious, possibly negative way. The free availability of an endless supply of AI generated works will make truly original, human creations more valuable. But of course, many artists don’t see those positives. Obsessed as they are with ownership and its infringements, they have responded to generative AI in the only way they know: by bringing a lawsuit. Their claim:

we’ve heard from people all over the world — especially writers, artists, programmers, and other creators — who are concerned about AI systems being trained on vast amounts of copyrighted work with no consent, no credit, and no compensation.

Today, we’re taking another step toward making AI fair & ethical for everyone. On behalf of three wonderful artist plaintiffs — Sarah Andersen, Kelly McKernan, and Karla Ortiz — we’ve filed a class-action lawsuit against Stability AI, DeviantArt, and Midjourney for their use of Stable Diffusion, a 21st-century collage tool that remixes the copyrighted works of millions of artists whose work was used as training data.

Andres Guadamuz, who was interviewed by Walled Culture last year, has put together a useful first analysis of this lawsuit. Here’s the key passage:

there is a big issue with how things are described in the lawsuit that clash with how machine learning and diffusion models work in reality. The disparity is that there appears to be a big leap in understanding between the training of a model, and how the model stores that knowledge. According to the complaint, [Stable Diffusion] takes the images in the training dataset and these are “stored at and incorporated into Stable Diffusion as compressed copies”. This is not what happens at all, a trained model does not have copies of the training data, that would create an unwieldy behemoth of unfathomable size. What happens is the creation of clusters of representation of things, namely latent space.

What is likely to happen during the trial, if it gets to that, is that there will be expert testimony, and this claim is likely to fall easily.
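A rough back-of-the-envelope calculation illustrates why the “compressed copies” claim is implausible. The figures below are approximate, publicly reported numbers, not exact values: the Stable Diffusion v1 checkpoint weighs in at roughly 4 GB, and its training set (a LAION subset) contains on the order of two billion images.

```python
# Back-of-the-envelope check: could a trained diffusion model really
# contain "compressed copies" of all its training images?
# Assumed approximate figures (public, rounded): ~4 GB checkpoint,
# ~2 billion training images.

model_size_bytes = 4 * 1024**3       # ~4 GB model checkpoint
training_images = 2_000_000_000      # ~2 billion training images

bytes_per_image = model_size_bytes / training_images
print(f"Storage available per training image: {bytes_per_image:.2f} bytes")
# Even a heavily compressed thumbnail needs thousands of bytes, so the
# model cannot plausibly be storing a copy of each training image.
```

On these assumed numbers, the model has only a couple of bytes of capacity per training image, which is consistent with Guadamuz’s point that what is stored is a learned representation, not copies.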

IANAL, but I agree that the lawsuit seems flawed in its understanding of how generative AI works, and that is likely to cause the action to fail. If it does, it will probably also make it harder for future lawsuits to succeed in this area (Getty Images has just announced one in the UK against Stability AI, though no details are available yet). The following paragraph from the home page of the legal action is also rather telling:

These resulting images may or may not outwardly resemble the training images. Nevertheless, they are derived from copies of the training images, and compete with them in the marketplace.

This admits that generative AI images may not even look like the input data, yet still claims that they represent some kind of infringement because they are “derived” from the training images. But, as Guadamuz notes, the models do not copy those images; they analyse them. By the logic of this lawsuit, artists who look at other works, and dare to think about how they are put together, are also infringing by virtue of the “input” those creations provide for other, non-copying works.

What’s sad about this lawsuit is that it represents a further instance of copyright-obsessed creators reflexively fighting against exciting new developments in technology. It comes from a misplaced sense of ownership of intangible creative elements that belong to the artistic commons, and thus to everyone. It’s yet another result of copyright’s malign influence on creativity and creators.

Featured image created with Stable Diffusion.

Follow me @glynmoody on Mastodon or Twitter