Last week Walled Culture noted that there are already two lawsuits against the generative AI systems causing such a buzz at the moment. Both those legal actions involved the visual arts, but as this blog noted back in October last year, generative AI is going to have a massive impact across all the creative arts. Further evidence of that comes from a new paper written by engineers at Google Research:
We introduce MusicLM, a model for generating high-fidelity music from text descriptions such as “a calming violin melody backed by a distorted guitar riff”.
MusicLM is already highly capable:
we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption.
If you’d like to hear the kind of thing it produces, Google Research has put online a huge number of impressive examples. Despite the evident power of MusicLM, the paper’s authors write: “we have no plans to release models at this point”, for the following reason:
We acknowledge the risk of potential misappropriation of creative content associated to the use-case. In accordance with responsible model development practices, we conducted a thorough study of memorization, adapting and extending a methodology used in the context of text-based [large language models], focusing on the semantic modeling stage. We found that only a tiny fraction of examples was memorized exactly, while for 1% of the examples we could identify an approximate match.
In other words, Google’s engineers are acutely aware of the likelihood that the music industry would sue over the creations of MusicLM, in the same way that visual artists have already done with Stable Diffusion. In fact the issue of what the Google team calls “memorization” – that is, literal copying of input samples – occurs only rarely, and in any case could be addressed with relatively minor technical changes, for example by filtering out generated outputs that closely match training examples. Regardless of that, an article on Euractiv makes it clear that the copyright world is gearing up to attack generative AI as a matter of principle:
Artists’ organisations are preparing a push for regulatory changes over concerns that EU law fails to protect the creative industries from fast-developing generative AI technologies such as ChatGPT.
The article notes that the copyright industry has a perfect opportunity to hobble this new technology in the EU with the AI Act, currently under discussion:
The original version of the draft law did not cover AI systems like ChatGPT that can be adapted for various purposes.
Artist associations are mobilising to introduce a specific section in the Act dedicated to the creative arts, including safeguards requiring that rightsholders give explicit informed consent before their work is used.
This knee-jerk reaction to new technology has been a constant feature of the legal landscape for the last hundred years or so. Rather than embracing a new technology, the copyright world has always sought to kill it, and failing that, to limit it as much as possible by demanding unreasonable rights to decide what is permitted. Moreover, it generally wants to get paid handsomely for allowing even this limited use, a product of copyright’s perpetual sense of entitlement.
As Walled Culture wrote last year, generative AI is in fact a great opportunity for artists, because it will place a premium on (human) originality, freeing creators from the drudge work that AI systems can now handle. But as usual the copyright world isn’t interested in benefitting from this positive change, and would rather fight it, indifferent to the harm that fight will cause to everyone else in the process.
Featured image created with Stable Diffusion.