A gathering storm, or a collage?

What many of us had expected has finally happened: artists have sued a couple of AI companies, as well as an art repository site, for copyright infringement (complaint here). Is this the end of AI tools? I don’t think so, and I’ll try to explain why. This will not be a detailed look at the lawsuit; there will be more time for that. This is my own take on some of the technical issues that I think the complaint gets wrong, so it is not intended as an in-depth look at the law, particularly as I suspect this may not get to trial (more on that later). I’m also aware that this is at a very early stage: things may change and, most importantly, nobody can be sure of what the result will be. This is my own early speculation on the first filing as it stands, and I’ll update and write further blog posts as needed.

The claims

Three artists are starting a class-action lawsuit against Stability.ai, Midjourney, and DeviantArt alleging direct copyright infringement, vicarious copyright infringement, DMCA violations, violation of publicity rights, and unfair competition. DeviantArt appears to be included as punishment for its “betrayal of its artist community”, so I will mostly ignore its part in this analysis for now. With regard to the copyright claims specifically, the lawsuit alleges that Stability.ai and Midjourney have scraped the Internet to copy billions of works without permission, including works belonging to the claimants. It alleges that these works are then stored by the defendants, and that these copies are used to produce derivative works.

This is at the very core of the lawsuit. The complaint is very clear that the resulting images produced by Stable Diffusion and Midjourney are not directly reproducing the works by the claimants, no evidence is presented of even a close reproduction of one of their works. What they are claiming is something quite extraordinary: “Every output image from the system is derived exclusively from the latent images, which are copies of copyrighted images. For these reasons, every hybrid image is necessarily a derivative work.” Let that sink in. Every output image is a derivative of every input, so following this logic, anyone included in the data scraping of five billion images can sue for copyright infringement. Heck, I have quite a few images in the training data, maybe I should join! But I digress.

The argument goes something like this: images are scraped from the Internet without permission; these images are then copied, compressed, and stored by the defendants; and these copies are used as a “modern day collage tool” to put together images from the training data. This is because machines cannot reason like people, so it stands to reason that they just put stuff together; hence all images are derivatives of the works in the training data.

The technology

I think that the argument in the claim is flawed because it does not accurately represent the technology, so I will attempt a very quick explanation of how tools such as Stable Diffusion or Midjourney produce images. What follows uses some excerpts from my forthcoming article, so stay tuned for a lengthier explanation.

I like to classify what happens in AI generative tools in two stages: the input phase and the output phase. The input phase comprises the gathering of data to create a dataset, which is used to train a model. In the case of Stable Diffusion, it uses a dataset called LAION, which has over 5 billion entries, each consisting of the pairing of a hyperlink to a web image (not the image itself) with its ALT text description. This dataset is then used to train a model. I will not go into detail about models; suffice it to say that a model is a mathematical representation of a real-world process that is trained using a dataset, and it can be used to make predictions or decisions without being explicitly programmed to perform the task. There are various types of models, but Stable Diffusion and Midjourney both use diffusion models (see an explanation in a previous blog post). Long story short, diffusion models take an image, add noise to it, and then learn to put it back together.
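To make the noising idea concrete, here is a toy numerical sketch of my own (not anyone’s actual training code): a small random array stands in for an image, and a simple linear noise schedule gradually destroys it. Reversing this destruction, step by step, is what a diffusion model is trained to do.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, t, num_steps=1000):
    """Forward diffusion: blend the image with Gaussian noise.
    At t=0 the image is untouched; by t=num_steps it is pure noise."""
    alpha = 1.0 - t / num_steps  # how much of the original survives at step t
    noise = rng.standard_normal(image.shape)
    noisy = np.sqrt(alpha) * image + np.sqrt(1 - alpha) * noise
    return noisy, noise

# A toy 8x8 "image" of random values.
image = rng.standard_normal((8, 8))

# Early in the schedule the noisy version still correlates strongly with
# the original; late in the schedule it is almost pure noise.
slightly_noisy, _ = add_noise(image, t=10)
very_noisy, _ = add_noise(image, t=990)

corr_early = np.corrcoef(image.ravel(), slightly_noisy.ravel())[0, 1]
corr_late = np.corrcoef(image.ravel(), very_noisy.ravel())[0, 1]
print(corr_early, corr_late)  # high early, near zero late
```

The trained model learns to predict (and remove) the noise at each step, which is a very different thing from memorising the training images.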

But what is the model from a practical perspective? It is a common misconception that a machine learning model is just a store of images that then generates a collage; the current lawsuit uses the word “collage” repeatedly, so it is perpetuating this myth. This is where another model comes in, known as CLIP, which is designed to improve the performance of AI models on a wide range of tasks involving both language and images. The model is trained using a large dataset of images and their corresponding text descriptions, and it learns to understand the relationship between language and images. This allows it to perform tasks such as image captioning and image classification with high accuracy. So AI tools use a combination of a diffusion model trained on reconstructing images, as well as CLIP to understand the words used to describe an image.
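CLIP’s mechanism can be sketched with a toy example. The embedding vectors below are hand-picked numbers standing in for the output of CLIP’s trained text and image encoders, but the mechanism is the same: a caption is scored against an image by the similarity of their vectors in a shared space.

```python
import numpy as np

def cosine_similarity(a, b):
    """Score two embeddings by the cosine of the angle between them."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: in real CLIP these come from trained text and
# image encoders with hundreds of dimensions; here they are hand-picked.
text_embeddings = {
    "a photo of a cat": np.array([0.9, 0.1, 0.0]),
    "a photo of a dog": np.array([0.1, 0.9, 0.0]),
}
image_embedding = np.array([0.85, 0.15, 0.05])  # stands in for a cat photo

scores = {caption: cosine_similarity(vec, image_embedding)
          for caption, vec in text_embeddings.items()}
best_caption = max(scores, key=scores.get)
print(best_caption)  # the cat caption scores highest
```

What the model ends up with is a learned association between words and visual features, not a filing cabinet of the training pictures.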

There is another very important element involved in generative models, and this is called latent space. In order to train a model with millions, and sometimes billions, of single data points, it would be inefficient to treat every data point in the same way; there could be clusters of similar works. If we are thinking about images, you may not have to look at every single cat picture; it may suffice to cluster data that is similar. Imagine data as a room: you would put the cat pictures in one space, the dog pictures in another, and so on. Latent space is the space of hidden or underlying factors that can explain the observed data by clustering similar data. It is used in generative models where the goal is to learn a representation of the data that can be used to generate new samples that are like the ones in the training set. This is very valuable because it helps to compress the inputs: there’s no need to copy all images of cats, because the model contains latent representations of cats.
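A toy sketch of what latent space does, with made-up two-dimensional latents standing in for the much larger vectors real models use:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend each training picture has already been encoded as a 2-dimensional
# latent vector. Cat pictures cluster in one region, dog pictures in another.
cat_latents = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2))
dog_latents = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(50, 2))

# The model does not keep the 100 training images; what it captures
# (implicitly) is the regions of the space they occupy.
cat_centre = cat_latents.mean(axis=0)
dog_centre = dog_latents.mean(axis=0)

# Sampling near the "cat" region yields a new latent vector that matches
# no training example exactly but shares its cat-ness.
new_cat = cat_centre + rng.normal(scale=0.1, size=2)
dist_to_cats = np.linalg.norm(new_cat - cat_centre)
dist_to_dogs = np.linalg.norm(new_cat - dog_centre)
print(dist_to_cats < dist_to_dogs)  # the sample lands in cat territory
```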

The output phase is the generation of the image using all of the above models, and it is done using apps that can take a text prompt and generate a new image based on a combination of statistical data, language models, and latent space.
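Putting the pieces together, the output phase can be caricatured as follows. This is a deliberately simplified sketch: the `prompt_targets` values are made up, and a real diffusion model’s denoising steps are vastly more sophisticated, but the shape of the process (pure noise nudged step by step towards the region the prompt describes) is the point.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical prompt embeddings standing in for CLIP's output.
prompt_targets = {"cat": np.array([0.0, 0.0]), "dog": np.array([5.0, 5.0])}

def generate(prompt, steps=50):
    """Toy output phase: start from pure noise and repeatedly nudge the
    latent towards the region of latent space the prompt describes, the
    way a diffusion model removes a little noise at every step."""
    latent = rng.normal(size=2) * 10          # start from pure random noise
    target = prompt_targets[prompt]
    for _ in range(steps):
        latent = latent + 0.1 * (target - latent)  # one "denoising" step
    return latent

image_latent = generate("cat")
# The result sits near the "cat" region of latent space, not the "dog" one,
# and does not coincide with any particular training example.
```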

In other words, this is not a collage.

Analysing the claims

As you can start seeing from the above description of the technology, there is a big issue with how things are described in the lawsuit, which clashes with how machine learning and diffusion models work in reality. The disparity is that there appears to be a big leap in understanding between the training of a model and how the model stores that knowledge. According to the complaint, Stability.ai takes the images in the training dataset and these are “stored at and incorporated into Stable Diffusion as compressed copies”. This is not what happens at all: a trained model does not contain copies of the training data, which would create an unwieldy behemoth of unfathomable size. What happens instead is the creation of clusters of representations of things, namely latent space.
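A quick back-of-envelope calculation illustrates the size problem. A Stable Diffusion v1 checkpoint weighs in at roughly 4 GB, while LAION-5B references roughly 5.85 billion images (both figures are approximate):

```python
# Back-of-envelope check on the "compressed copies" claim.
model_bytes = 4 * 1024**3          # a ~4 GB model checkpoint
training_images = 5_850_000_000    # ~5.85 billion LAION-5B entries

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes per image")  # well under 1 byte each
```

Even an aggressively compressed thumbnail needs kilobytes, so the model simply cannot contain a compressed copy of each training image.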

What is likely to happen during the trial, if it gets that far, is that there will be expert testimony, and this claim is likely to fall easily. Sure, there is some temporary copying at some stage (it is important to remember that LAION doesn’t copy images either, although there is scraping of images in the training process), but these images are not stored in the model as claimed.

This will be a vital point, because as mentioned, the complaint doesn’t make any claims that the outputs are reproductions of any of the training images belonging to the claimants.

The other problematic issue in the complaint is the claim that all resulting images are necessarily derivatives of the five billion images used to train the model. I’m not sure if I like the implications of such level of dilution of liability, this is like homeopathy copyright, any trace of a work in the training data will result in a liable derivative. That way madness lies.

Other legal considerations

Perhaps the biggest surprise in the complaint is who is missing as a defendant, in particular two very conspicuous names: LAION and OpenAI. I think that LAION’s absence is easier to explain: it is a German research organisation, and what it does is collect hyperlinks and text descriptions, which I believe falls under the text and data mining exception contained in the EU’s DSM Directive. OpenAI’s absence is more difficult to explain. I think that the main reason is that OpenAI does not disclose which dataset it is using, so it is not very easy for a claimant to prove that their works have been used in the training data. As this lawsuit is entirely based on the input phase, this missing information is vital.

The other question is whether the lawsuit will be successful, and the honest answer is that I do not know. I am not impressed by the technical errors described above, and I think that these errors will be an important part of the defence. The defendants are likely to claim fair use, and this case has the potential to be the test case for whether training an AI without permission can meet the fair use requirements. We do not know, but I find that this lawsuit could be a risky gamble for artists. A defeat would finally settle the question that has been left open since Google Books, and I do not think that this is the strongest case, at least as it stands. Edited to add: The response actually didn’t claim fair use, which is interesting.

Concluding

This is the chronicle of a lawsuit foretold. Now that it has arrived, it will be analysed endlessly and pored over during the next few weeks, and I’m looking forward to reading what others think; perhaps my skepticism will prove to be misplaced, we will see. My first impression is that this has “out of court settlement” written all over it, but if there is no compromise then this suit could last for years, as any result will likely be appealed.

Paraphrasing Yoda, begun, this AI War has.


Comments

C · January 15, 2023 at 3:45 pm

Stable Diffusion is based in the UK and trains its machines there I assume. Can US copyright laws be applicable here?

    Andres Guadamuz · January 15, 2023 at 4:44 pm

    SD has a US subsidiary based in Delaware, so both the UK company and the subsidiary are named defendants.
    This lawsuit would never fly under UK law, I may write a blog post about that in the future.

AP · January 16, 2023 at 3:04 am

The Concept Art Association is planning on lobbying for artists. What are your thoughts on that?
https://www.gofundme.com/f/protecting-artists-from-ai-technologies

    Andres Guadamuz · January 16, 2023 at 10:38 pm

    I think that it’s a bad idea. Getting a lobbyist to go to Washington DC is a narrow response that will be futile at best, and possibly detrimental to artists in the long run. Are we really sure that we want stronger copyright protection? That could affect things like fan art. At least a lawsuit is taking more direct action, even if I disagree with the way this one has been formulated.

Anonymous · January 16, 2023 at 11:30 pm

A toothless class action suit that’s not brave enough to also take on OpenAI?

Péter Faragó · January 20, 2023 at 2:32 pm

As an A.I. researcher myself, who nonetheless operates a startup in the E.U. that is facing a similar lawsuit here, here is my food for thought: As Andres explained, A.I. does not store the images themselves, but learns an abstraction of what the images represent. Show it 1000 images of dancing figures and it learns the abstraction of dancing, not the images of the dancing figures. Show it 1000 images of families and it learns the abstraction of families. If you are asked to draw a family, you don’t draw an existing image; you probably draw a few naive lines of two parents and one or a couple of children holding each other’s hands, like in kindergarten drawings. If artists were to say that observing their images to learn and extract the concepts of (make abstractions of) the things in them is an unfair use of their performance, it would cut both ways. It would mean that every image that artists learned from in their lives to create their own style and art was used in an unfair way, and infringed the original artists’ rights. I expect this decade will provide an interesting clash between the fields of copyright, database rights, sui generis rights, search engines, AI corpus generation and competition law.

Susie · January 21, 2023 at 2:36 am

The AI is no different from other artists using previous works of art to inspire their own artwork, so this will never fly. And it still comes down to what words you use to make the AI art, and no same set of words produces the same image. Technology is moving us forward; perhaps the artists are scared that their work will no longer measure up, but that is silly, as all artwork has value in the eyes of the beholder.

Metatrix · January 27, 2023 at 10:03 pm

“The other problematic issue in the complaint is the claim that all resulting images are necessarily derivatives of the five billion images used to train the model. I’m not sure if I like the implications of such level of dilution of liability, this is like homeopathy copyright, any trace of a work in the training data will result in a liable derivative. That way madness lies.”

Haha. Well said.

I’m starting a business as a freelancer aiming at legal technical writing so I’m currently browsing what kind of articles are being made in that department. Your article stood out for me (on Justia) because I happen to be part of a state-funded project primarily based on the use of Stable Diffusion.

My initial thoughts were exactly what you pointed out in that quote.

Also, if this lawsuit is based on the misconception that this technology is doing something like a collage, then yeah this whole thing is bound to fall apart; it is just a big misunderstanding. But ignorance can make a misunderstanding last… well, eternally some would say.

It could also be a cash grab. Either way, I think it is going away.

Although I would end by saying that the ethical implications of the advent of AI are far from resolved, and they are as important to cover legally as they are philosophically.

Aren’t they!?!

Farewell.

    Andres Guadamuz · February 2, 2023 at 9:19 pm

    Absolutely agreed.

Clifford · January 28, 2023 at 10:55 am

I find it disturbing how the instigators of this lawsuit are trying their very best to make this into an “us vs them” situation – in this case “artists vs the evil AI corporations”. This could not be further from the truth. A HUGE portion (if not most) of AI art users are traditional artists themselves, who are delighted to find a tool that allows them to expand their creativity, just as creative artists reacted to the advent of digital art, 3D modeling, or animation.
This feeble attempt at demonizing the other side and treating AI art users as “traitors” is despicable, and quite worryingly, I see that it has already made an inconspicuous appearance inside this article under the “Other legal considerations” section: “I find that this lawsuit could be a risky gamble for artists” suggests that artists are on the anti-AI side, while the other side is composed of… what exactly? Faceless corporations who are simply trying to squeeze artists’ poor souls like lemons? Certainly not, but that’s what the lawsuit’s creators want you to think.

