Research suggests more open access training for academics could help boost its uptake and support

Open access publishing, which allows people to read academic papers without a subscription, seems such a good idea. It means that anyone, anywhere in the world, can read the latest research without needing to pay. Academic institutions can spend less to keep their scholars up-to-date with work in their field. It also helps disseminate research, which means that academics receive more recognition for their achievements, boosting their career paths.

And yet despite these manifest benefits, open access continues to struggle. As Walled Culture has noted several times, one reason is that traditional academic publishers have managed to subvert the open access system, apparently embracing it, but in such a way as to negate the cost savings for institutions. Many publishers also tightly control the extent to which academic researchers can share their own papers that are released as open access, which rather misses the point of moving to this approach.

Another reason why open access has failed to take off in the way that many hoped is that academics often don’t seem to care much about supporting it or even using it. Again, given the clear benefits for themselves, their institutions and their audience, that seems extraordinary. Some new research sheds a little light on why this may be happening. It is based on an online survey examining the extent and nature of open access training offered to doctoral students, the sources of respondents’ open access knowledge, and their perspectives on open access. The results are striking:

a large majority of current (81%) and recent (84%) doctoral students are or were not required to undertake mandatory open access training. Responses from doctoral supervisors aligned with this, with 66% stating that there was no mandatory training for doctoral students at their institution. The Don’t know figure was slightly higher for supervisors (16%), suggesting some uncertainty about what is required of doctoral students.

The surprisingly high figures quoted above matter, because

a statistically significant difference was observed between respondents who have completed training and those who have not. These findings provide some solid evidence that open access training has an impact on researcher knowledge and practices

One worrying aspect is where else researchers are obtaining their knowledge of open access principles and practices:

Web resources and colleagues were found to be the most highly rated sources, but publisher information also scored highly, which may be cause for some concern. While it is evident that publisher information about open access may be of value to researchers, if for no other reason than to explain the specific open access options available to authors submitting to a particular journal, publishers are naturally incentivised to describe positively the forms of open access they offer to authors, and therefore can hardly be said to represent an objective source of information about open access in general terms.

What this means in practice is that academics may simply accept the publishers’ version of open access, without calling into question why it is so expensive or so restrictive in allowing papers to be shared freely. It could explain why the publishers’ distorted form of the original open access approach does not meet greater resistance. On the plus side, the survey revealed widespread support for more open access training:

First, only 27% of respondents answered that the level of open access training offered as part of their doctoral studies was sufficient. Second, there was widespread agreement with a number of statements presented to respondents that related to actions institutions could take to support researcher understanding of open access. There was widest agreement with the notion that institutions should provide Web resources about open access specifically for doctoral students, followed by optional training for these students. The statement that suggested institutions should require doctoral students to undertake open access training received agreement or strong agreement from almost half of respondents (45%).

Although the research reveals widely differing views on requirements for open access training, and who exactly should provide it, there does seem to be an opportunity to increase researchers’ familiarity with the concept and its benefits. Rather than lamenting the diluted form of open access that major publishers now offer, open access advocates might usefully spend more time spreading the word about its benefits to the people who can make it happen – new and established researchers – by helping to provide training in a variety of forms.

Featured image by Hans Wolff.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

A French collecting society wants a tax on generative AI, payable to…collecting societies

Back in October last year, Walled Culture wrote about a proposed law in France that would see a tax imposed on AI companies, with the proceeds being paid to a collecting society. Now that the EU’s AI Act has been adopted, it is being invoked as another reason why just such a system should be set up. The French collecting society SPEDIDAM (which translates as “Society for the collection and distribution of performers’ rights”) has issued a press release on the idea, including the following (translation via DeepL):

SPEDIDAM advocates a right to remuneration for performers for AI-generated content without protectable human intervention, in the form of fair compensation that would benefit the entire community of artists, inspired by proven and virtuous collective management models, similar to that of remuneration for private copy.

This remuneration, collected from AI system suppliers, would also help support the cultural activities of collective management organizations, thus ensuring the future employment of artists and the constant renewal of the sources feeding these tools.

That sounds all well and good, but as we noted last year, collecting societies around the world have a terrible record when it comes to sharing that remuneration with the creators they supposedly represent. Walled Culture the book (free digital versions available) quotes from a report revealing “a long history of corruption, mismanagement, confiscation of funds, and lack of transparency [by collecting societies] that has deprived artists of the revenues they earned”. They also have a tendency to adopt a maximalist interpretation of their powers. Here are a few choice examples of their actions over the years:

  • Soza (Slovenský Ochranný Zväz Autorský/Slovak Performing and Mechanical Rights Society), a Slovakian collecting society, has sought money from villages when their children sing. One case involved children singing to their mothers on Mothers’ Day.
  • SABAM (Société d’Auteurs Belge/Belgische Auteurs Maatschappij/Belgian Authors’ Society), a Belgian collecting society, sought expanded protection for readings of copyrighted works. One consequence of their action was that it would require librarians to pay for a licence to read books to children in a children’s library.
  • SABAM sought a licensing fee from truck drivers who listened to the radio alone in their trucks.
  • The British collecting society PPL (Phonographic Performance Limited) sought a fee from a hardware store owner who listened to the radio in his store while cleaning it after he had closed.
  • The Performing Rights Society in the UK sought performance licensing fees from a woman who played classical music to her horses.

SPEDIDAM’s press release is interesting as perhaps the first hint of a wider pan-European campaign to bring in some form of levy on the use of training data for generative AI services. That would just take a new bad idea – taxing companies for simply analysing training material – and add it to an old bad idea, that of hugely-inefficient collecting societies. The resulting system would be a disaster for the European AI industry, since it would favour deep-pocketed US companies. Moreover, this approach would produce no meaningful benefit for creators, as the sorry history of collecting societies has shown time and again.

Featured image by Enrico van Leeuwen.

Follow me @glynmoody on Mastodon.

How private equity has used copyright to cannibalise the past at the expense of the future

Walled Culture has been warning about the financialisation and securitisation of music for two years now. Those obscure but important developments mean that the owners of copyrights are increasingly detached from the creative production process. They regard music as just another asset, like gold, petroleum or property, to be exploited to the maximum. A Guest Essay in the New York Times points out one of the many bad consequences of this trend:

Does that song on your phone or on the radio or in the movie theater sound familiar? Private equity — the industry responsible for bankrupting companies, slashing jobs and raising the mortality rates at the nursing homes it acquires — is making money by gobbling up the rights to old hits and pumping them back into our present. The result is a markedly blander music scene, as financiers cannibalize the past at the expense of the future and make it even harder for us to build those new artists whose contributions will enrich our entire culture.

As well as impoverishing our culture, the financialisation and securitisation of music is making life even harder for the musicians it depends on:

In the 1990s, as the musician and indie label founder Jenny Toomey wrote recently in Fast Company, a band could sell 10,000 copies of an album and bring in about $50,000 in revenue. To earn the same amount in 2024, the band’s whole album would need to rack up a million streams — roughly enough to put each song among Spotify’s top 1 percent of tracks. The music industry’s revenues recently hit a new high, with major labels raking in record earnings, while the streaming platforms’ models mean that the fractions of pennies that trickle through to artists are skewed toward megastars.
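
The arithmetic implied by those figures is worth making explicit. Reading “a million streams” as a million plays of the whole album, and assuming a 10-track album (my assumption, not the article’s), the quoted numbers work out to roughly half a cent per track stream – about a thousandth of the revenue from a single 1990s album sale. A quick sketch:

```python
# Back-of-the-envelope check of the quoted figures. Album sales,
# revenue and stream counts are from the quote above; the 10-track
# album size is an illustrative assumption.
album_sales = 10_000
revenue = 50_000                  # dollars, both eras, per the quote
album_streams_needed = 1_000_000  # plays of the whole album, per the quote
tracks_per_album = 10             # assumption

per_album_sale = revenue / album_sales                  # $5.00 per copy
per_album_stream = revenue / album_streams_needed       # $0.05 per album play
per_track_stream = per_album_stream / tracks_per_album  # $0.005 per track

print(f"${per_album_sale:.2f} per copy, "
      f"${per_track_stream:.4f} per track stream")
# The implied half-cent per track is in line with commonly cited
# streaming payout rates.
```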

Part of the problem is the extremely low rates paid by streaming services. But the larger issue is the power imbalance within all the industries based on copyright. The people who actually create books, music, films and the rest are forced to accept bad deals with the distribution companies. Walled Culture the book (free ebook versions) details the painfully low income the vast majority of artists derive from their creativity, and how most are forced to take side jobs to survive. This daily struggle is so widespread that it is no longer remarked upon. It is one of the copyright world’s greatest successes that the public and many creators now regard this state of affairs as a sad but unavoidable fact of life. It isn’t.

The New York Times opinion piece points out that there are signs private equity is already moving on to its next market/victim, having made its killing in the music industry. But one thing is for sure. New ways of financing today’s exploited artists are needed, and not ones cooked up by Wall Street. Until musicians and creators in general take back control of their works, rather than acquiescing in the hugely unfair deal that is copyright, it will always be someone else who makes most of the money from their unique gifts.

Featured image by GoginkLobabi.

Follow me @glynmoody on Mastodon and on Bluesky.

We risk losing access to the world’s academic knowledge, and copyright makes things worse

The shift from analogue to digital has had a massive impact on most aspects of life. One area where that shift has the potential for huge benefits is in the world of academic publishing. Academic papers are costly to publish and distribute on paper, but in a digital format they can be shared globally for almost no cost. That’s one of the driving forces behind the open access movement. But as Walled Culture has reported, resistance from the traditional publishing world has slowed the shift to open access, and undercut the benefits that could flow from it.

That in itself is bad news, but new research from Martin Paul Eve (available as open access) shows that the way the shift to digital has been managed by publishers brings with it a new problem. For all their flaws, analogue publications have the great virtue that they are durable: once a library has a copy, it is likely to be available for decades, if not centuries. Digital scholarly articles come with no such guarantee. The Internet is constantly in flux, with many publishers and sites closing down each year, often without notice. That’s a problem when sites holding archival copies of scholarly articles vanish, making it harder, perhaps impossible, to access important papers. Eve explored whether publishers were placing copies of the articles they published in key archives. Ideally, digital papers would be available in multiple archives to ensure resilience, but Eve found that very few publishers do so. Ars Technica has a good summary of Eve’s results:

When Eve broke down the results by publisher, less than 1 percent of the 204 publishers had put the majority of their content into multiple archives. (The cutoff was 75 percent of their content in three or more archives.) Fewer than 10 percent had put more than half their content in at least two archives. And a full third seemed to be doing no organized archiving at all.

At the individual publication level, under 60 percent were present in at least one archive, and over a quarter didn’t appear to be in any of the archives at all. (Another 14 percent were published too recently to have been archived or had incomplete records.)

This very patchy coverage is concerning, for reasons outlined by Ars Technica:

The risk here is that, ultimately, we may lose access to some academic research. As Eve phrases it, knowledge gets expanded because we’re able to build upon a foundation of facts that we can trace back through a chain of references. If we start losing those links, then the foundation gets shakier. Archiving comes with its own set of challenges: It costs money, it has to be organized, consistent means of accessing the archived material need to be established, and so on.

Given the importance of ensuring the long-term availability of academic research, the manifest failure of most publishers to guarantee it by placing articles in multiple archives is troubling. What makes things worse is that there is an easy way to improve the resilience of the academic research system. If all papers could be shared freely, there could be many new archives located around the world holding the contents of all academic journals. One or two such archives already exist, for example the well-established Sci-Hub, and the more recent Anna’s Archive, which currently claims to hold around 100,000,000 papers.

Despite the evident value to the academic world and society in general of such multiple, independent backups, traditional publishing houses are pursuing them in the courts, in an attempt to shut them down. It seems that preserving their intellectual monopoly is more important to publishers than preserving the world’s accumulated academic knowledge. It’s a further sign of copyright’s twisted values that those archives offering solutions to the failure of publishers to fulfil their obligations to learning are regarded not as public benefactors, but as public enemies.

Featured image by H. Melville after T. H. Shepherd.

Follow me @glynmoody on Mastodon and on Bluesky.

Of true fans and superfans: the rise of an alternative business model to copyright

One of the commonest arguments from supporters of copyright is that creators need to be rewarded and that copyright is the only realistic way of doing that. The first statement may be true, but the second certainly isn’t. As Walled Culture the book (free digital versions available) notes, most art was created without copyright, when the dominant way of rewarding creators was patronage – from royalty, nobility, the church etc. Indeed, nearly all of the greatest works of art were produced under this system, not under copyright.

It’s true that it is no longer possible to depend on these outdated institutions to sustain a large-scale modern creative ecosystem, but the good news is we don’t have to. The rise of the Internet means not only that anyone can become a patron, sending money to their favourite creators, but also that, collectively, such support can amount to serious sums of money. The first person to articulate this Internet-based approach was Kevin Kelly, in his 2008 essay “1000 True Fans”:

A true fan is defined as a fan that will buy anything you produce. These diehard fans will drive 200 miles to see you sing; they will buy the hardback and paperback and audible versions of your book; they will purchase your next figurine sight unseen; they will pay for the “best-of” DVD version of your free youtube channel; they will come to your chef’s table once a month. If you have roughly a thousand true fans like this (also known as super fans), you can make a living — if you are content to make a living but not a fortune.

It’s taken a while, but the music industry in particular is finally waking up to the potential of this approach. For example, a 2023 post on Music Business Worldwide, with the title “15% of the general population in the US are ‘superfans.’ Here’s what that means for the music business” reported that the incidence of superfans was probably even higher in some groups, for example among customers of Universal Music Group (UMG):

Speaking on UMG’s Q1 earnings call, Michael Nash, UMG’s EVP and Chief Digital Officer, indicated that an “artist-centric” model would look to increase revenue flow from “superfans” – or in other words, individuals who are willing to pay more for subscriptions in exchange for additional content.

“Our consumer research says that among [music streaming] subscribers, about 30% are superfans of one or more of our artists,” said Nash.

In January of this year, the head of UMG, Sir Lucian Grainge, gave another signal that superfans were a key component of the company’s future strategy: “The next focus of our strategy will be to grow the pie for all artists, by strengthening the artist-fan relationship through superfan experiences and products.” Spotify, too, is joining the superfan fan club, writing that “we’re looking forward to a future of superfan clubs”. UMG started implementing its superfan strategy just a few weeks later. Music Business Worldwide reported that it was joining a move to create a new superfan destination:

A press release issued by Universal Music Group today stated that the NTWRK consortium’s acquisition of [the youth-orientated media platform] Complex will “create a new destination for ‘superfan’ culture that will define the future of commerce, digital media, and music”.

Here’s why leading music industry players are so interested in the superfan idea:

In Goldman’s latest Music In The Air report, it claimed that if 20% of paid streaming subscribers today could be categorized as ‘superfans’ and, furthermore, if these ‘superfans’ were willing to spend double what a non-superfan spends on digital music each year, it implies a $4.2 billion (currently untapped) annual revenue opportunity for the record industry.
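
It is easy to see where a figure like that comes from. Working backwards from the quoted assumptions, and taking “spend double” to mean one extra subscription’s worth of spending per superfan (my reading, not Goldman’s), the $4.2 billion implies a total subscription revenue pool of about $21 billion – in the same ballpark as global paid music streaming spend:

```python
# Reverse-engineering the quoted Goldman Sachs figure. The 20% share
# and the $4.2bn opportunity come from the quote; treating "double"
# as one extra subscription's worth of spend is an assumption.
superfan_share = 0.20
extra_spend_multiple = 1.0  # "double" = current spend + 1x on top
opportunity = 4.2e9         # dollars, per the quote

# extra revenue = share * multiple * total subscription revenue
implied_total = opportunity / (superfan_share * extra_spend_multiple)
print(f"Implied subscription pool: ${implied_total / 1e9:.0f}bn")  # $21bn
```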

For the music industry, then, it’s about making even more money from their customers – no surprise there. But this validation of the true fans/superfans idea goes well beyond that. By acknowledging the power and value of the relationship between creators and their most enthusiastic supporters, the music companies are also providing a huge hint to artists that there’s a better way than the unbalanced and unfair deals they currently sign up to. When it comes to making a decent living from creativity, what matters is not using heavy-handed enforcement of copyright law to make people pay, but building on the unique and natural connection between creators and their true fans, who want to pay.

Featured image by Antonio Mette.

Follow me @glynmoody on Mastodon and on Bluesky.

Forgotten books and how to save them

On the Neglected Books site, there is a fine meditation on rescuing forgotten writers and their works from oblivion, and why this is important. As its author Brad Bigelow explains:

I have been searching for neglected books for over forty years and the one thing I can say with unshakeable confidence is that there are more great (and even just seriously good) books out there in the thickets off the beaten path of the canon than I or anyone else can ever hope to discover.

His post mentions three questions that “reissue” publishers must answer when looking at some of these neglected books as potential candidates for re-printing:

Is the book good (meaning of sufficient merit to justify being associated with the imprint)? Is the book in the public domain or are the rights attainable for a reasonable price? Will enough readers buy the book to recoup costs and, with some luck, earn a profit?

The first is an aesthetic judgement, but the other two are essentially about copyright. Walled Culture the book (free ebook versions available) discusses at length the issue of “orphan works” – works that are still in copyright, but which cannot be re-issued because it is not clear who owns the rights, and thus who could give permission for new editions. Bigelow makes a good point about why this is such a problem:

Even in the U.K., which has the advantage of a national database of wills, it can be practically impossible to track down who has inherited the copyrights from a dead author. The database, for one thing, is incomplete. There are millions of wills missing. There are plenty of writers who failed to recognize their copyrights as inheritable assets and didn’t bother to mention them in the will. And there are plenty of writers who simply didn’t bother to have a will drawn up in the first place. Every publisher involved in the reissue business can name a dozen or more writers they’d love to publish, if only they could find legatees empowered to sign the necessary contracts.

The last question for publishers – will enough readers buy the book to recoup costs and earn a profit? – is the other main stumbling block to re-issuing out-of-print books for a new audience. Bigelow explains that this often comes down to a key challenge: how does a publisher get a reader who knows nothing about the book, the writer, or the publisher’s reputation to look at it, let alone buy it?

If copyright terms were a more reasonable length, no more than the original 14 years (plus an option of renewal for 14 years) of the 1710 Statute of Anne, then both these problems would disappear. Relatively soon after the original publication of a book, before it sinks into obscurity, anyone could turn it into an ebook, and circulate it freely online under a public domain licence. Publishers could do the same, perhaps adding forewords and other critical apparatus, and they could also print new, analogue editions without worrying about copyright issues. The costs for both book forms would be lower without the need for expensive legal searches, which would encourage more publishers to bring out new editions, and increase the availability of these works, perhaps guided by the online popularity of the freely-circulating copies made by individuals.

It is the absurdly long intellectual monopoly created by copyright – typically the author’s life plus 70 years more – that has created the near-impenetrable thickets that Bigelow refers to. Slash the copyright term, and you slash the thickets. If that could be done, the main obstacles to finding, reading, enjoying and – above all – sharing those great but forgotten books would all disappear at a stroke.

Featured image by Famartin.

Follow me @glynmoody on Mastodon and on Bluesky.

The new Hadopi? Piracy Shield blocks innocent Web sites and makes it hard for them to appeal

Italy’s newly-installed Piracy Shield system, put in place by the country’s national telecoms regulator, Autorità per le Garanzie nelle Comunicazioni (Authority for Communications Guarantees, AGCOM), is already failing in significant ways. One issue became evident in February, when the VPN provider AirVPN announced that it would no longer accept users resident in Italy because of the “burdensome” requirements of the new system. Shortly afterwards, TorrentFreak published a story about the system crashing under the weight of requests to block just a few hundred IP addresses. Since there are now around two billion copyright claims being made every year against YouTube material, it’s unlikely that Piracy Shield will be able to cope once takedown requests start ramping up, as they surely will.

That’s a future problem, but something that has already been encountered concerns one of the world’s largest and most important content delivery networks (CDNs), Cloudflare. CDNs have a key function in the Internet’s ecology. They host and deliver digital material to users around the globe, using their large-scale infrastructure to provide this quickly and efficiently on behalf of Web site owners. Blocking CDN addresses is reckless: it risks affecting thousands or even millions of sites, and compromises some of the basic plumbing of the Internet. And yet according to a post on TorrentFreak, that is precisely what Piracy Shield has now done:

Around 16:13 on Saturday [24 February], an IP address within Cloudflare’s AS13335, which currently accounts for 42,243,794 domains according to IPInfo, was targeted for blocking [by Piracy Shield]. Ownership of IP address 188.114.97.7 can be linked to Cloudflare in a few seconds, and double checked in a few seconds more.

The service that rightsholders wanted to block was not the IP address’s sole user. There’s a significant chance of that being the case whenever Cloudflare IPs enter the equation; blocking this IP always risked taking out the target plus all other sites using it.
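
The structural problem is simple to model: a CDN terminates traffic for many unrelated domains on a small pool of shared addresses, so an IP-level block is far blunter than a domain-level one. A minimal sketch, using invented domains and addresses from the IP documentation ranges:

```python
# Toy model of why IP-level blocking of a CDN overblocks: many
# unrelated domains sit behind each shared address. All names and
# addresses below are invented for illustration.
cdn_ip_map = {
    "203.0.113.7": ["pirate-iptv.example", "school.example",
                    "pharmacy.example", "local-news.example"],
    "203.0.113.8": ["blog.example"],
}

def block_ip(ip: str, ip_map: dict) -> list:
    """Return every domain taken offline by blocking one address."""
    return ip_map.get(ip, [])

# Rightsholders target one infringing site...
affected = block_ip("203.0.113.7", cdn_ip_map)
# ...but the block takes down every co-hosted domain with it.
print(f"1 target, {len(affected) - 1} innocent sites blocked: {affected}")
```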

The TorrentFreak article lists a few of the evidently innocent sites that were indeed blocked by Piracy Shield, and notes:

Around five hours after the blockade was put in place, reports suggest that the order compelling ISPs to block Cloudflare simply vanished from the Piracy Shield system. Details are thin, but there is strong opinion that the deletion may represent a violation of the rules, if not the law.

That lack of transparency about what appears to be a major overblocking is part of a larger problem, which affects those who are wrongfully cut off. As TorrentFreak writes, AGCOM’s “rigorous complaint procedure” for Piracy Shield “effectively doesn’t exist”:

information about blocks that should be published to facilitate correction of blunders, is not being published, also in violation of the regulations.

That matters, because appeals against Piracy Shield’s blocks can only be made within five working days of their publication. As a result, the lack of information about erroneous blocks makes it almost impossible for those affected to appeal in time:

That raises the prospect of a blocked innocent third party having to a) proactively discover that their connectivity has been limited b) isolate the problem to Italy c) discover the existence of AGCOM d) learn Italian and e) find the blocking order relating to them.

No wonder, then, that:

some ISPs, having seen the mess, have decided to unblock some IP addresses without permission from those who initiated the mess, thus contravening the rules themselves.

In other words, not only is the Piracy Shield system wrongly blocking innocent sites, and making it hard for them to appeal against such blocks, but its inability to follow the law correctly is causing ISPs to ignore its rulings, rendering the system pointless.

This combination of incompetence and ineffectiveness brings to mind an earlier failed attempt to stop people sharing unauthorised copies. It’s still early days, but there are already indications that Italy’s Piracy Shield could well turn out to be a copyright fiasco on the same level as France’s Hadopi system, discussed in detail in Walled Culture the book (digital versions available free).

Featured image by Kimberlym21.

Follow me @glynmoody on Mastodon and on Bluesky.

How copyright makes the climate crisis worse

Many of the posts here on the Walled Culture blog examine fairly niche problems that copyright is causing. Although they are undoubtedly important, in the overall scheme of things they can hardly be called major. But sometimes copyright can have important repercussions in the wider world, as an interesting post on The Conversation makes clear.

It reports on a paper published in the Environmental Science & Policy journal by a group of researchers in the UK. It explores how policymakers make planning decisions for new offshore wind turbine developments in the UK, and what evidence they draw on. There are two kinds of literature that are used: “primary literature”, which refers to studies published in academic journals following a peer review process; and “grey literature”, which the University of Exeter Library defines as follows:

“information produced on all levels of government, academia, business and industry in electronic and print formats not controlled by commercial publishing” i.e. where publishing is not the primary activity of the producing body.

According to the post, it is grey literature, not the more rigorous primary literature, that policymakers prefer to draw on in this area:

Policymakers tend to favour grey literature even though it gives a less balanced outlook, perhaps due to access issues. Primary literature often sits behind paywalls, the process of review can lead to lengthy delays in publication, and these studies may just investigate one species or process in detail. Grey literature is easier to access, available much sooner, and can provide a useful overview or synthesis of available knowledge, which is exactly what regulators need.

Although surprising – you’d think that policymakers would want the best information, not simply the most accessible – it might seem a harmless bias. In fact, it has serious consequences:

Overall, 71% of outcomes reported in grey literature for the impacts of offshore wind farms are negative, compared with 36% in primary literature. This disparity could in part be due to the fact that environmental impact assessments address potential rather than specific impacts, and reflect a high proportion of the grey literature.

The considerably more negative view that grey literature takes of offshore wind farms is likely to have slowed the rollout of this technology, which is highly contested in many countries. That, in its turn, will have meant more carbon dioxide entering the atmosphere through burning fossil fuels instead, causing additional global heating and its associated problems. The unnecessary exacerbation of the climate crisis is a consequence of the difficulty of accessing rigorous academic studies. It is copyright that allows publishers to lock away knowledge behind the paywalls mentioned above, in order to impose a fee for accessing it.

Open access to academic papers goes some way to alleviating that problem, since it aims to make research more widely available. But it is not a panacea. As readers of this blog know, there are various kinds of open access, with different restrictions on how and when papers can be accessed, viewed and shared. As a result, the open access landscape is complicated and confusing. Even if all research were available under open access licences, it would still be easier to turn to grey literature, which generally imposes no conditions on how material is used – or if it does, they are rarely enforced.

Unless all academic research is routinely placed in the public domain – which seems unlikely – this new paper suggests that copyright will continue to act as a brake on taking necessary action to address arguably the most serious crisis facing us today.

Featured image by US Department of Energy.

Follow me @glynmoody on Mastodon and on Bluesky.

Texts of laws must be freely available, not locked away by copyright; in Germany, many still aren’t

It is often said that “ignorance of the law is no defence”. But the corollary of this statement is that laws must be freely available so that people can find them, read them and obey them. Secret laws, or laws that are hard to access, undermine the ability and thus the willingness of citizens to follow them. And yet just such a situation is found in many countries around the world, including Germany, as a post on the Communia blog by Judith Doleschal from the FragDenStaat (“Ask the Government”) organisation describes. It concerns what are known as “law gazettes”. These are crucial documents that define German regulations for a wide range of areas such as work safety, health insurance tariffs, directives on the use of police tasers or guidelines for pesticide applications. They are not primary legislation, but they are nonetheless legally binding, which means that they should be freely available to anyone who might have to obey them. They are not, for reasons explained by Doleschal:

The [German] Federal Ministry of the Interior is the editor of the gazette. However, it is published by a private publishing house owned by the billion-dollar Wolters Kluwer group. Wolters Kluwer charges €1.70 per 8 pages for individual copies of the documents. If you were to buy all official issues of the [Federal Gazette of Ministerial Orders] with a total of 63,983 pages individually from the publisher, it would cost a whopping €13,596.
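
The quoted total follows directly from the per-page pricing, assuming the €1.70-per-8-pages rate is applied pro rata across every page:

```python
# Sanity check of the quoted €13,596 figure, assuming pro-rata
# pricing at €1.70 per 8 pages.
total_pages = 63_983
price_per_8_pages = 1.70  # euros

total_cost = total_pages / 8 * price_per_8_pages
print(f"€{total_cost:,.2f}")  # -> €13,596.39, matching the quote
```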

Given the healthy profits such pricing presumably generates from material that is provided by the German government, the publisher is naturally unwilling to allow anyone else to provide free access to these official documents. The reason why it can do that is interesting:

[The publisher] doesn’t hold the copyright to the official documents. Instead, it argues that the database of the law gazette is protected under related rights („Leistungsschutzrecht“ in German).

This Leistungsschutzrecht is also known as an “ancillary copyright”, and is a good demonstration of how fans of copyright try to spread its monopoly beyond the usual domains. Whether to create a new Leistungsschutzrecht was one of the important battles that took place during the passage of the EU’s Copyright Directive, discussed at length in Walled Culture the book (free digital versions available). In that instance, it resulted in a new ancillary copyright for newspaper publishers – another example of yet more money being channelled to the copyright world simply because its beneficiaries were able to lobby for it effectively. As usual, there is no corresponding benefit for the public flowing from this extension of copyright. In the case of the Leistungsschutzrecht claimed by the publisher of the German law gazettes, it results in a ridiculous situation:

the state publish[es] binding regulations in documents that are in the public domain, but still not publicly available without a paywall. A private billion-dollar publisher earns money referring to an alleged investment protection for the database. An absurd construction, but still quite convenient for the [German] Federal Ministry of Interior as it has zero costs and hardly any effort for the publication.

An absurd situation indeed, and one that FragDenStaat wants to change:

We at FragDenStaat are willing to take the risk of being sued for the publication of the law gazette as we believe that official documents of general interest belong in the public domain – not in the hands of private publishers. Free access to documents is not only lawful, but also necessary. So by publishing the most important state databases, we make available to the public what is already theirs. We will continue to open up more public databases in the next months.

That’s a laudable move, and one that everyone who cares about a society based on the rule of law, and therefore on publicly-accessible laws, should support. The publisher currently benefiting from this unjustified monopoly will doubtless fight this attempt to open up the German law gazettes, but FragDenStaat is optimistic, because it has managed to change official behaviour before:

Four years ago our campaign „Offene Gesetze“ („Open Laws“) helped to free the Federal Law Gazette in the same manner. All laws of the Federal Republic of Germany are published in the Federal Law Gazette. Laws only come into force when they are published there. Back then, the publisher was the Bundesanzeiger Verlag, which was privatized in 2006 and belongs to the Dumont publishing group. Anyone who wanted to search, copy or print out federal law gazettes needed to pay.

After we published the documents as freely reusable information, the Federal Ministry of Justice decided to publish the Law Gazette on its own open platform.

It’s great to see brave organisations like FragDenStaat righting the wrongs that copyright has enabled by locking up key public documents behind paywalls. But it is outrageous that it needs to.

Featured image by Daderot.

Follow me @glynmoody on Mastodon and on Bluesky.

Italy’s new Piracy Shield has just gone into operation and is already harming human rights there

Back in October, Walled Culture wrote about the grandly-named “Piracy Shield”. This is Italy’s new Internet blocking system, which assumes people are guilty until proven innocent, and gives the copyright industry a disproportionate power to control what is available online, no court orders required. Piracy Shield went live in December, and has just issued its first blocking orders. But a troubling new aspect of Piracy Shield has emerged, reported here by TorrentFreak:

A document detailing technical requirements of Italy’s Piracy Shield anti-piracy system confirms that ISPs are not alone in being required to block pirate IPTV services. All VPN and open DNS services must also comply with blocking orders, including through accreditation to the Piracy Shield platform. Google has already agreed to dynamically deindex sites and remove infringing adverts.

This is no mere theoretical threat. The VPN (Virtual Private Network) service AirVPN has just announced that it will no longer accept users resident in Italy. As AirVPN explains:

The list of IP addresses and domain names to be blocked is drawn up by private bodies authorised by AGCOM (currently, for example, Sky and DAZN). These private bodies enter the blocking lists in a specific platform. The blocks must be enforced within 30 minutes of their first appearance by operators offering any service to residents of Italy.

There is no judicial review and no review by AGCOM. The block must be enforced inaudita altera parte [without hearing the other party] and without the possibility of real time refusal, even in the case of manifest error. Any objection by the aggrieved party can only be made at a later stage, after the block has been imposed.

As a result, AirVPN says it can no longer offer its service in Italy:

The above requirements are too burdensome for AirVPN, both economically and technically. They are also incompatible with AirVPN’s mission and would negatively impact service performance. They pave the way for widespread blockages in all areas of human activity and possible interference with fundamental rights (whether accidental or deliberate). Whereas in the past each individual blockade was carefully evaluated either by the judiciary or by the authorities, now any review is completely lost. The power of those private entities authorized to compile the block lists becomes enormous as the blocks are not verified by any third party and the authorized entities are not subject to any specific fine or statutory damage for errors or over-blocking.

That’s a good summary of all that is wrong with Piracy Shield. Companies can compile block lists without any constraint or even oversight. If the blocks are unjustified, there are no statutory damages, which will obviously encourage overblocking. And proving they are unjustified is a slow and complex process, and only takes place after the block has been effected.

What is particularly troubling here is that Italian residents are now losing access to a popular VPN as a result of this new law. In a world where privacy threats from companies and governments are constantly increasing, VPNs are a vital tool, and it is crucial to have a range of them to choose from. The fact that AirVPN has been forced to discontinue this service for people in Italy is a further demonstration of how here, as elsewhere, copyright is evidently regarded by the authorities as more important than fundamental human rights such as privacy and security.

Featured image by Anastasiya Lobanovskaya.

Follow me @glynmoody on Mastodon and on Bluesky.

Important court ruling on copyright ought to lead to a blossoming of UK open culture – but will it?

There’s a post on the Creative Commons blog with some important news about copyright (in the UK, at least):

In November 2023, the Court of Appeal in THJ v Sheridan offered an important clarification of the originality requirement under UK copyright law, which clears a path for open culture to flourish in the UK.

In setting the copyright originality threshold, the court stated: “What is required is that the author was able to express their creative abilities in the production of the work by making free and creative choices so as to stamp the work created with their personal touch.” Crucially, the court affirmed that “this criterion is not satisfied where the content of the work is dictated by technical considerations, rules or other constraints which leave no room for creative freedom.”

The post points out that the case is potentially a “game-changer in the UK open culture landscape”:

Because by setting the standard for copyright to arise based on “free and creative choices” it effectively bars copyright claims from being made over faithful reproductions of public domain materials (i.e., materials that are no longer or never were protected by copyright).

This touches on a topic that Walled Culture has written about many times: the fact that many museums and art galleries around the world try to claim copyright on faithful reproductions of artistic creations in their collections that are unequivocally in the public domain. Their argument, such as it is, seems to be that taking a digital photo or making a 3D copy requires such an immense intellectual effort that a new monopoly should be granted on it. It’s really about money, of course.

The Creative Commons post mentions “A Culture of Copyright”, a useful report by Dr Andrea Wallace that looked at how widespread the problem was in the UK. The blog post also refers to a CC Open Culture Platform working group that developed proposals for “technical, legal, and social interventions” to address the problem of “PD BY” (that is, the use of CC-BY licences to share reproductions of public domain works).

Although the group’s idea of adding some kind of courtesy (non-binding) request to all deed pages is interesting and well-intentioned, it takes a dangerous step towards compromising the public domain, which is already under constant attack from copyright maximalists. The full and undiluted version of the public domain must be maintained – that’s the supposed bargain of copyright: after being locked down by an (over-long) government-backed intellectual monopoly, works enter the public domain without restriction.

In any case, as an important analysis by Douglas McCarthy points out, just because a UK court has ruled that faithful reproductions of public domain works are in the public domain doesn’t mean that galleries and museums there will gracefully allow us to access millions of images, and to use them for any purpose. Once people become hooked on the powerful drug of a copyright monopoly, they are very reluctant to give it up. McCarthy explains how some UK cultural institutions are likely to respond:

I anticipate that THJ v Sheridan will accelerate the existing trend towards contract law replacing copyright law as a means of controlling access to public domain collections. If this happens, one shouldn’t assume that access policies will become any more open – the old status quo may simply persist in new form.

Basically, before people can access a Web site with the public domain images of public domain artworks, they are forced to accept terms and conditions that require them to pay for the privilege:

The model of restricted access to digitised public domain works, governed by contract law, has been around for some time. It is, to take one example, practised by Tate Images, enabling Tate to charge fees (on the usual sliding scale of cost, based on the scope and scale of intended reproduction) for licensing images without copyright.

The Tate Gallery should be ashamed of this approach, as should any other public institution that adopts it. In case they have forgotten, they are entrusted with the masterpieces of the past on behalf of the public, which has generally paid for them and their preservation. As such, the public should have free and unrestricted access not just to the works themselves, but also to the public domain reproductions of them. If they refuse to allow this, museums and galleries are not only abandoning their mission and betraying the trust placed in them, but also thumbing their noses at an important court ruling.

If this shift to contract law becomes common, it will be a further proof that copyright not only harms the public and artists, as Walled Culture the book (free ebook versions) lays out in detail, but also seems to cause those who are obsessed with it to lose their collective minds.

Featured image by Stefan Bellini.

Follow me @glynmoody on Mastodon and on Bluesky.

Two important reasons for keeping AI-generated works in the public domain

Generative AI continues to be the hot topic in the digital world – and beyond. A previous blog post noted that this has led to people finally asking the important question of whether copyright is fit for the digital world. As far as AI is concerned, there are two sides to the question. The first is whether generative AI systems can be trained on copyright materials without the need for licensing. That has naturally dominated discussions, because many see an opportunity to impose what is effectively a copyright tax on generative AI. The other question is whether the output of generative AI systems can be copyrighted. As another Walled Culture post explained, the current situation is unclear. In the US, purely AI-generated art cannot currently be copyrighted and forms part of the public domain, but it may be possible to copyright works that include significant human input.

Given the current interest in generative AI, it’s no surprise that there are lots of pundits out there pontificating on what it all means. I find Christopher S Penn’s thoughts on the subject to be consistently insightful and worth reading, unlike those of many other commentators. Even better, his newsletter and blog are free. His most recent newsletter will be of particular interest to Walled Culture readers, and has a bold statement concerning AI and copyright:

We should unequivocally ensure machine-made content can never be protected under intellectual property laws, or else we’re going to destroy the entire creative economy.

His newsletter includes a short harmonised tune generated using AI. Penn points out that it is trivially easy to automate the process of varying that tune and its harmony using AI, in a way that scales to billions of harmonised tunes covering a large proportion of all possible songs:

If my billion songs are now copyrighted, then every musician who composes a song from today forward has to check that their composition isn’t in my catalog of a billion variations – and if it is (which, mathematically, it probably will be), they have to pay me.
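
Penn’s scale claim survives a napkin calculation. Even a crude enumeration – the parameters below are arbitrary illustrative assumptions, not a description of his actual method – shows how quickly short melodies multiplied by a handful of harmonic choices run into the billions:

```python
# Illustrative combinatorics for the "billion songs" point. The
# parameters are arbitrary assumptions chosen to show scale.
pitches = 12         # pitch choices per note (one octave, chromatic)
melody_length = 8    # notes in a short motif
harmonisations = 4   # alternative chord settings per motif

melodies = pitches ** melody_length   # 429,981,696 distinct motifs
variants = melodies * harmonisations  # ~1.7 billion harmonised tunes
print(f"{melodies:,} motifs x {harmonisations} harmonies = {variants:,}")
```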

Moreover, allowing copyright in this way would result in a computing arms race. Those with the deepest pockets could use more powerful hardware and software to produce more AI tunes faster than anyone else, allowing them to copyright them first:

That wipes out the music industry. That wipes out musical creativity, because suddenly there is no incentive to create and publish original music for commercial purposes, including making a living as a musician. You know you’ll just end up in a copyright lawsuit sooner or later with a company that had better technology than you.

That’s one good reason for not allowing music – or images, videos or text – generated by AI to be granted copyright. As Penn writes, doing so would just create a huge industry whose only purpose is generating a library of works that is used for suing human creators for alleged copyright infringement. The bullying and waste already caused by the similar patent troll industry show why this is not something we would want. Here’s another reason why copyright for AI creations is a bad idea, according to Penn:

If machine works remain non-copyrightable, there’s a strong disincentive for companies like Disney to use machine-made works. They won’t be able to enforce copyright on them, which makes those works less valuable than human-led works that they can fully protect. If machine works suddenly have the same copyright status as human-led works, then a corporation like Disney has much greater incentive to replace human creators as quickly as possible with machines, because the machines will be able to scale their created works to levels only limited by compute power.

This chimes with something that I have argued before: that generative AI could help to make human-generated art more valuable. The value of human creativity will be further enhanced if companies are unable to claim copyright in AI-generated works. It’s an important line of thinking, because it emphasises that it is not in the interest of artists to allow copyright on AI-generated works, whatever Big Copyright might have them believe.

Featured image by Christopher S Penn.

Follow me @glynmoody on Mastodon and on Bluesky.

A Swiftian solution to some of copyright’s problems

Copyright is generally understood to be for the benefit of two groups of people: creators and their audience. Given that modern copyright often acts against the interests of the general public – forbidding even the most innocuous sharing of copyright material online – copyright intermediaries such as publishers, recording companies and film studios typically place great emphasis on how copyright helps artists. As Walled Culture the book spells out in detail (digital versions available free), the facts show otherwise. It is extremely hard for creators in any field to make a decent living from their profession. Mostly, artists are obliged to supplement their income in other ways. In fact, copyright doesn’t even work well for the top artists, particularly in the music world. That’s shown by the experience of one of the biggest stars in the world of music, Taylor Swift, reported here by The Guardian:

Swift is nearing the end of her project to re-record her first six albums – the ones originally made for Big Machine Records – as a putsch to highlight her claim that the originals had been sold out from under her: creative and commercial revenge served up album by album. Her public fight for ownership carried over to her 2018 deal with Republic Records, part of Universal Music Group (UMG), where an immovable condition was her owning her future master recordings and licensing them to the label.

It seems incredible that an artist as successful as Swift should be forced to re-record some of her albums in order to regain full control over them – control she lost because of the way that copyright works, splitting copyright between the written song and its performance (the “master recording”). A Walled Culture post back in 2021 explained that record label contracts typically contain a clause in which the artist grants the label an exclusive and total licence to the master.

Swift’s need to re-record her albums through a massive but ultimately rather pointless project is unfortunate. However, some good seems to be coming of Swift’s determination to control both aspects of her songs – the score and the performance – as other musicians, notably female artists, follow her example:

Olivia Rodrigo made ownership of her own masters a precondition of signing with Geffen Records (also part of UMG) in 2020, citing Swift as a direct inspiration. In 2022, Zara Larsson bought back her recorded music catalogue and set up her own label, Sommer House. And in November 2023, Dua Lipa acquired her publishing from TaP Music Publishing, a division of the management company she left in early 2022.

It’s a trend that has been gaining in importance in recent years, as more musicians realise that they have been exploited by recording companies through the use of copyright, and that they have the power to change that. The Guardian article points out an interesting reason why musicians have an option today that was not available to them in the past:

This recalibration of the rules of engagement between artists and labels is also a result of the democratisation of information about the byzantine world of music contract law. At the turn of the 2000s, music industry information was highly esoteric and typically confined to the pages of trade publications such as Billboard, Music Week and Music & Copyright, or the books of Donald S Passman. Today, industry issues are debated in mainstream media outlets and artists can use social media to air grievances or call out heinous deal terms.

Pervasive use of the Internet means that artists’ fans are more aware of how the recording industry works, and thus better able to adjust their purchasing habits to punish the bad behaviour, and reward the good. One factor driving this is that musicians can communicate directly to their fans through social media and other platforms. They no longer need the marketing departments of big recording companies to do that, which means that the messages to fans are no longer sanitised or censored.

This is another great example of how today’s digital world makes the old business models of the copyright industry redundant and vulnerable. That’s great news, because it is a step on the path to realising that creators – whatever their field – don’t need copyright to thrive, despite today’s dogma that they do. What they require is precisely what innovative artists like Taylor Swift have achieved – full control over all aspects of their own creations – coupled with the Internet’s direct channels to their fans that let them turn that into fair recompense for their hard work.

Featured image of Jonathan Swift based on painting by Charles Jervas.

Follow me @glynmoody on Mastodon and on Bluesky.

A lawsuit against OpenAI has mainstream media finally asking if copyright is fit for the digital world

Last year saw great excitement over a new wave of AI services based on large language models (LLMs). That enthusiasm was somewhat overshadowed by a subsequent wave of lawsuits claiming that the LLMs were guilty of copyright infringement because of the training materials they used. Just before the start of 2024, a new lawsuit was filed, this time by The New York Times (NYT), against OpenAI and Microsoft. As an article in the NYT itself explains:

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

Around the same time, Microsoft and OpenAI were also sued for alleged copyright infringement by non-fiction authors, but it is the NYT action that has really caught people’s attention, and led to a flurry of analysis and opinion pieces. There are two main elements to the lawsuit. One alleges that use of material from the NYT to train OpenAI’s LLMs without permission is illegal, and the other that the output from OpenAI’s ChatGPT infringes on NYT’s copyrights.

The first point is the old argument that LLMs infringe on copyright because they are copying their training materials. But as previous posts on Walled Culture (and many others elsewhere) have explained, that’s not how LLMs work. They don’t copy, they analyse, in order to create a database of probabilities that represent existing patterns in text, images, videos and sounds. They then use these patterns to generate new material given a prompt by the user. The second element of the NYT lawsuit is the following, as described by the NYT story:

The complaint cites several examples when a chatbot provided users with near-verbatim excerpts from Times articles that would otherwise require a paid subscription to view. It asserts that OpenAI and Microsoft placed particular emphasis on the use of Times journalism in training their A.I. programs because of the perceived reliability and accuracy of the material.

At first sight, the examples provided look quite compelling. In a blog post commenting on the lawsuit, OpenAI makes the following points about these “regurgitations” that the NYT says are evidence of copyright infringement by ChatGPT:

Interestingly, the regurgitations The New York Times induced appear to be from years-old articles that have proliferated on multiple third-party websites. It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate. Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts.

Moreover, as Mike Masnick noted on Techdirt:

If you actually understand how these systems work, the output looking very similar to the original NY Times piece is not so surprising. When you prompt a generative AI system like GPT, you’re giving it a bunch of parameters, which act as conditions and limits on its output. From those constraints, it’s trying to generate the most likely next part of the response. But, by providing it paragraphs upon paragraphs of these articles, the NY Times has effectively constrained GPT to the point that the most probabilistic responses is… very close to the NY Times’ original story.
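
Both of those points – that an LLM stores statistics about patterns rather than copies, and that a long enough excerpt in the prompt can steer the “most likely continuation” straight back to the source – can be made concrete with a toy sketch. The Python below is emphatically not how GPT is implemented (real systems use neural networks trained over billions of sub-word tokens); it is a minimal word-counting model, with invented corpora and function names, that exhibits the same two behaviours in miniature:

import random
from collections import Counter, defaultdict

def train(words):
    # Count which word follows each pair of words: patterns, not articles.
    model = defaultdict(Counter)
    for a, b, c in zip(words, words[1:], words[2:]):
        model[(a, b)][c] += 1
    return model

def generate(model, prompt, length=20, greedy=False):
    # Extend the prompt one word at a time using the learned counts.
    words = prompt.split()
    for _ in range(length):
        counts = model.get((words[-2], words[-1]))
        if not counts:
            break
        if greedy:
            # Always take the single most probable next word.
            words.append(counts.most_common(1)[0][0])
        else:
            # Sample the next word in proportion to its count.
            options, weights = zip(*counts.items())
            words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

# 1) Trained on varied text, sampling recombines the patterns into new sequences.
varied = train(("the times sued over training data and "
                "the times sued over chatbot output and "
                "the model learns patterns from text").split())
print(generate(varied, "the times sued"))

# 2) But prompt the model with a long excerpt of a passage it was trained on,
# and the likeliest continuation is the rest of that passage, near-verbatim.
passage = ("when you prompt a generative model with long excerpts "
           "you constrain it until the likeliest continuation "
           "is the original text itself")
memorised = train(passage.split())
print(generate(memorised, "when you prompt a generative model", greedy=True))

Nothing in the model is a stored copy – it holds only counts – yet a sufficiently constraining prompt can still elicit near-verbatim output. That, in miniature, is the crux of the regurgitation dispute.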

That same OpenAI blogpost says that “The New York Times is not telling the full story” because the two companies had been negotiating “a high-value partnership around real-time display with attribution in ChatGPT, in which The New York Times would gain a new way to connect with their existing and new readers, and our users would gain access to their reporting.” Mike Masnick speculates that the NYT may have decided to bring its lawsuit because at the beginning of December last year OpenAI announced a “Partnership with Axel Springer to deepen beneficial use of AI in journalism”. This followed an earlier deal with The Associated Press. Other publishers are also discussing licensing deals with OpenAI. The NYT might be using its legal action to press OpenAI into offering a better licensing deal, and sooner.

Licensing has always been a favourite approach for the copyright world – however inappropriately – as Walled Culture the book details (free digital downloads). But a comment from the venture capital company Andreessen Horowitz, submitted to the US Copyright Office as part of the latter’s inquiry into AI, points out:

The reason AI models are able to do what they can do today is that the internet has given AI developers ready access to a broad range of content, much of which can’t reasonably be licensed—everything from blog posts to social media threads to customer reviews on shopping sites. Indeed, the cost of paying to license even a fraction of the content needed to properly train an AI model would be prohibitive for all but the deepest-pocketed AI developers, resulting in dominance by a few technology incumbents. This would undermine competition by the technology startups which are the source of the greatest innovation in AI.

OpenAI said something similar in a submission to the UK’s House of Lords communications and digital select committee:

Because copyright today covers virtually every sort of human expression–including blog posts, photographs, forum posts, scraps of software code, and government documents–it would be impossible to train today’s leading AI models without using copyrighted materials. Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.

Some people have mocked this comment, on the grounds that it seems to say that breaking the law is justified if your business model depends upon it. This overlooks two points. First, there is an assumption among big content that copyright law should be allowed to throttle exciting new technologies with major benefits for society simply because copyright is sacred and must always be protected, regardless of the harm it causes. Secondly, big content has pushed, and keeps pushing, for legislation that ensures that copyright continues to sustain their current business models.

One of the most positive aspects of the NYT lawsuit against OpenAI and Microsoft is that it has led to articles in the mainstream media noting that it raises important questions about copyright law, and whether it is fit for purpose in the digital world. The answer is unlikely to come quickly – the question may need to go all the way to the US Supreme Court for a definitive ruling – but it is good news that the problem is finally being acknowledged in this way.

Featured image by OpenAI.

Follow me @glynmoody on Mastodon and on Bluesky.

Mickey Mouse is public domain now, but the battle to prevent copyright term extensions is not over

The beginning of the year is a great time for the public domain, since it sees thousands of copyrighted works released from the intellectual monopoly that prevents their free creative use. Which works enter the public domain depends on the details of local copyright law, which varies around the world. But there’s a liberation that has taken place in the US that is particularly worth celebrating. Among the many important works that are now in the US public domain, there is the long-awaited arrival of Mickey and Minnie Mouse as they appeared in the short animation Steamboat Willie.

Beyond its cultural significance, the release of Steamboat Willie into the public domain is notable because of the role that the character has played in the field of copyright law. It was Disney’s obsession with maintaining control over Mickey Mouse that led to the US copyright term being extended multiple times to prevent the character from entering the public domain. The last extension, formally the Sonny Bono Copyright Term Extension Act, is widely known as the Mickey Mouse Protection Act. Had that law only extended copyright protection for Mickey Mouse, it would have been a minor if annoying legal aberration. But as a long and fascinating post on the Center for the Study of the Public Domain explains, Disney’s successful lobbying had much wider consequences:

Disney pushed for the law that extended the copyright term to 95 years, which became referred to derisively as the “Mickey Mouse Protection Act.” This extension has been criticized by scholars as being economically regressive and having a devastating effect on our ability to digitize, archive, and gain access to our cultural heritage. It locked up not just famous works, but a vast swath of our culture, including material that is commercially unavailable. Even though calling it the “Mickey Mouse Protection Act” may overstate Disney’s actual role in the legislative process – the measure passed because of a much broader lobbying effort – Disney was certainly a prominent supporter, and the Mouse was sometimes a figurehead.

It was feared that Disney would lobby for another extension to copyright in order to retain control of Mickey Mouse after 2023. Fortunately that did not happen, possibly as a result of the growing awareness of, and resistance to, copyright’s imbalance, discussed in Walled Culture the book (free digital versions available). There has already been a rapid flowering of creative re-use, including the application of AI to generate some very un-Disney-like images of Mickey.

The early versions of Mickey Mouse in Steamboat Willie have definitely entered the public domain in the US, but elsewhere it is less clear. Mike Masnick notes on Techdirt that YouTube is still blocking access to Steamboat Willie in some jurisdictions, including in the EU:

The EU is supposed to apply the “rule of the shorter term”, respecting the entrance into the public domain in other countries if the work originated in those countries, though as that article notes, a German court decided that an 1892 treaty between the US and Germany pre-empted that obligation.

Even in the US, it seems that Disney is unwilling to let Mickey go. On 4 January, voice actor and YouTuber Brock Baker uploaded a new video, with the title “Steamboat Willie (Brock’s Dub),” to his YouTube channel with more than 1 million subscribers. As a post on Mashable explained:

shortly after uploading the clip though, YouTube demonetized the video, evidently on behalf of the erstwhile copyright owner, Disney. Baker also shared a screenshot to his X account showing the video was also being blocked from view in some territories as well.

Baker disputed the copyright claim, and Disney backed down, allowing the new version to be monetised, embedded and shared worldwide. But only for a day or two: on 7 January, Disney again demonetised the Mickey Mouse video, claiming this time that the audio element infringed on its copyright. At the time of writing, it’s not clear whether Disney will drop this claim too, or whether it is aiming to use this avenue as a way of continuing to control aspects of Mickey Mouse. In addition, Disney still has trademarks that it can wield to limit how people use the liberated Mickey.

The Mickey Mouse saga is an excellent demonstration of the fact that even when a work has unequivocally entered the public domain (in the US at least), copyright can still be used to limit its creative use. A widespread bias in the legal framework favours copyright owners over the general public. The recent events also underline the reluctance of companies whose profits are built on copyright, such as Disney, to fulfil their side of the implicit copyright bargain: that in return for a fixed term of government-backed monopoly protection, the work enters the public domain afterwards for all to use as they wish. As many more popular characters such as Pluto, Donald Duck, Superman, J.R.R. Tolkien’s The Hobbit and James Bond are poised to follow Mickey Mouse into the public domain soon, we might even see Disney and other companies push for yet another copyright term extension.

Featured image by Disney.

Follow me @glynmoody on Mastodon and on Bluesky.

Generative AI will be a huge boon for the public domain – unless copyright blocks it

A year ago, I noted that many of Walled Culture’s illustrations were being produced using generative AI. During that time, AI has developed rapidly. For example, in the field of images, OpenAI has introduced DALL-E 3 in ChatGPT:

When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.
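
For what it’s worth, the same model is also exposed programmatically, outside the ChatGPT interface the quote describes. A minimal sketch using OpenAI’s Python SDK – assuming the openai package (v1 or later) is installed, an API key is set in the environment, and with a prompt invented purely for illustration – looks something like this:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="A woodcut-style illustration of a walled garden with an open gate",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # temporary URL of the generated image

Via the API the prompt is whatever you supply, although the service may still rewrite it for detail and safety; in ChatGPT, that prompt-expansion step happens automatically, as described above.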

Ars Technica has written a good intro to the new DALL-E 3, describing it as “a wake-up call for visual artists” in terms of its advanced capabilities. The article naturally touches on the current situation regarding copyright for these creations:

In the United States, purely AI-generated art cannot currently be copyrighted and exists in the public domain. It’s not cut and dried, though, because the US Copyright Office has supported the idea of allowing copyright protection for AI-generated artwork that has been appreciably altered by humans or incorporated into a larger work.

The article goes on to explore an interesting aspect of that situation:

there’s suddenly a huge new pool of public domain media to work with, and it’s often “open source”—as in, many people share the prompts and recipes used to create the artworks so that others can replicate and build on them. That spirit of sharing has been behind the popularity of the Midjourney community on Discord, for example, where people typically freely see each other’s prompts.

When several mesmerizing AI-generated spiral images went viral in September, the AI art community on Reddit quickly built off of the trend since the originator detailed his workflow publicly. People created their own variations and simplified the tools used in creating the optical illusions. It was a good example of what the future of an “open source creative media” or “open source generative media” landscape might look like (to play with a few terms).

There are two important points there. First, that the current, admittedly tentative, status of generative AI creations as being outside the copyright system means that many of them, perhaps most, are available for anyone to use in any way. Generative AI could drive a massive expansion of the public domain, acting as a welcome antidote to constant attempts to enclose the public domain by re-imposing copyright on older works – for example, as attempted by galleries and museums.

The second point is that without the shackles of copyright, these creations can form the basis of collaborative works among artists willing to embrace that approach, and to work with this new technology in new ways. That’s a really exciting possibility that has been hard to implement without recourse to legal approaches like Creative Commons. Although the intention there is laudable, most people don’t really want to worry about the finer points of licensing – not least out of fear that they might get it wrong, and be sued by the famously litigious copyright industry.

A situation in which generative AI creations are unequivocally in the public domain could unleash a flood of pent-up creativity. Unfortunately, as the Ars Technica article rightly points out, the status of AI-generated artworks is already slightly unclear. We can expect the copyright world to push hard to exploit that opening, and to demand that everything created by computers should be locked down under copyright for decades, just as human creations generally are from the moment they exist in a fixed form. Artists should enjoy this new freedom to explore and build on generative AI images while they can – it may not last.

Featured image created with Stable Diffusion.

Follow me @glynmoody on Mastodon.
