An elegant weapon from a more civilized age.

AI copyright infringement cases are multiplying rapidly, not only in the US but also in the UK and China. There are now so many that it's impossible to keep track of all of them, but some common elements are emerging for analysis. While the emphasis has so far rightly been on the input phase, namely the training of AI models using copyrighted works, there are hints that more cases will require analysis of the outputs generated by those models. That analysis may rely on some very old case law dealing with analogue technology, such as video and audio tape recorders.

We've discussed the input phase extensively, and most of the ongoing litigation will probably centre on whether training falls under fair use or fair dealing, so there's no need to go into that again. But while most cases currently focus on inputs, outputs are gaining prominence, and they call for a different approach from defendants.

The biggest output case is The New York Times v OpenAI and Microsoft, where the newspaper produced some allegedly infringing outputs using ChatGPT, as discussed here. OpenAI has responded that these outputs are the result of heavy manipulation of its API, going so far as to call it hacking, pretty much the exploitation of a bug to produce the outputs; this is an interesting aspect of the defence that I will not go into for now. What has prompted this blog post is something in Microsoft's own motion to dismiss, which may have wider repercussions for how the cases proceed.

Instead of concentrating on the input aspects, Microsoft decided to deal specifically with the potential uses of the technology, calling back to the VCR era. In the opening section of its motion to dismiss, it comments:

“By harnessing humanity’s collective wisdom and thinking, LLMs help us learn from each other, solve problems, organize our lives, and launch bold new ideas. Because Microsoft firmly believes in LLMs’ capacity to improve the way people live and work, it has collaborated with OpenAI to help bring their extraordinary power to the public, while leading the way in promoting safe and responsible AI development.
Despite The Times’s contentions, copyright law is no more an obstacle to the LLM than it was to the VCR (or the player piano, copy machine, personal computer, internet, or search engine). Content used to train LLMs does not supplant the market for the works, it teaches the models language.”

What is this talk of the VCR doing in an artificial intelligence case? Microsoft is calling back to a foundational case in US copyright law, Sony v. Universal, commonly known as the "Betamax case", which established the Sony Doctrine. Universal City Studios (along with other movie studios) sued Sony, claiming that the company's Betamax VCRs contributed to copyright infringement by allowing users to record television broadcasts without permission. The studios argued that Sony should be held liable for facilitating this infringement. The US Supreme Court, however, held that Sony was not liable for copyright infringement committed by individuals using its Betamax VCRs for non-commercial, personal use. The Court introduced the concept of "time-shifting", where individuals record television programs for viewing at a later time, and deemed it a fair use under copyright law. A key aspect of the ruling was the doctrine of "substantial non-infringing uses": if a product is capable of substantial lawful uses, its manufacturer cannot be held liable for the infringing uses that individuals make of it.

A similar principle can be found in other systems, particularly in the UK with the case CBS Songs Limited v Amstrad Consumer Electronics. The case centred on Amstrad's manufacture and sale of a twin cassette recorder, which allowed users to copy music cassettes easily. CBS Songs Limited (representing the interests of copyright holders) argued that by manufacturing and selling these devices, Amstrad was authorising or contributing to copyright infringement by users who used them to make unauthorised copies of copyrighted music. The House of Lords held that Amstrad was not liable: the manufacture of a device that could be used to infringe copyright was not in itself an infringement, provided that the device had substantial non-infringing uses. The Lords emphasised the intention of the device's manufacturer and concluded that Amstrad had not authorised infringement by producing and selling the twin cassette recorders. This could be relevant to the ongoing Getty Images case.

So what Microsoft is doing in this case is starting to mount a Sony Doctrine defence, arguing that while large language models could potentially be used to infringe copyright, the vast majority of users will utilise them for non-infringing purposes, and therefore the technology should be allowed to develop. Microsoft argues that The New York Times has not been able to point to a single infringing output produced by an actual user; on the contrary, all of the allegedly infringing outputs were generated by the claimants themselves. Discussing this point, the defendants comment:

“The Complaint is filled with attempts to establish that GPT-based products “will output near-verbatim copies of significant portions of Times Works when prompted to do so” […] And it warns that if The Times does not prevail in this case, the “cost to society will be enormous.” […] And yet its 68 pages and 204 paragraphs of allegations do not contain a single allegation concerning something that has actually happened in the real world as a result of the development, offering, or use of GPT-based products. The Times is at pains to insinuate that with “minimal prompting” these products “will recite large portions” of The Times’s works […], offering various examples that compare the text of an article to model output. But The Times buries this “minimal prompting” in an exhibit, where it admits that its prompts each “comprise[] a short snippet from the beginning of an article from The New York Times”—often verbatim snippets of paragraphs on end. […] The Complaint does not explain why any real person, already in possession of a New York Times article, would ever want to feed part of that prompt into a GPT-based product to generate a part of the rest of the article. It certainly does not allege that any real person has done so. It has manufactured the story.”
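To make the mechanics concrete, here is a minimal sketch of what the "minimal prompting" described above might look like in practice, using the OpenAI Python client. This is an illustration only: the snippet text is invented and the model name is a placeholder; neither is taken from the complaint or its exhibits.

# A hypothetical illustration of "snippet prompting": feeding the opening
# of an article to a model and letting it continue. The snippet below is
# invented and stands in for the verbatim article text alleged in the exhibit.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

article_snippet = (
    "An invented opening sentence standing in for the beginning "
    "of a newspaper article..."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": article_snippet}],
)

# If the model has memorised the article, the continuation may resemble it;
# note, though, that the user had to supply the original text to begin with.
print(response.choices[0].message.content)

The construction of the prompt is the very point Microsoft is pressing: a user would already need to possess the article in order to build it.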

I happen to find this line of reasoning rather compelling, and I think it could resonate positively at trial. By now members of the public are becoming acquainted with tools like Gemini and ChatGPT, and more and more people are familiar with their everyday use. While some people may use image generators to try to produce copyright-infringing outputs, I don't think the same applies to language models. I can't think of a single instance in which I have tried to use them to reproduce the content of a copyright work, other than for legal research; most people use them as intended, as language tools.

But I also think that the "substantial non-infringing uses" argument can be illustrated by looking at the source of the content used to train an AI. It would be trite to say that the Internet is a giant copyright infringement machine: we infringe all the time by sharing images, text, GIFs, memes, quotes, movie stills, screenshots, and so on. Such works are prevalent in the training data precisely because we share them on social media and the Web all the time. Infringement is prevalent, yet the Internet continues to exist because of its substantial non-infringing uses. It could be argued that a similar thing is happening with generative AI, as it did with the VCR: substantial non-infringing uses could outweigh the potential existence of infringing outputs.

Concluding

I have been thinking about what the endgame will be for AI. The current crop of lawsuits will eventually lead to some form of stability (pun unintended), though we do not yet know what shape that will take. My guess is that in the input phase a combination of fair use, licensing, and opt-outs will allow copyright holders to obtain some remuneration while also allowing AI tools to continue to be developed. With regard to outputs, infringing commercial outputs could still be pursued, but for the most part we may come to see AI tools as having substantial non-infringing uses. Only time will tell.

In the meantime, I wonder if a good way to test for machine intelligence would be to have it program a VCR. AGI achieved; please be kind, rewind.

