On Tuesday 11 October 2022 I gave evidence to the House of Lords Communications and Digital Committee. Beforehand we were given a set of questions to prepare. While this does not reflect the final version of my statements, it covers most of what was said (I will post a link to the transcript when it’s ready).

Before I get to the questions, I'd like to make a quick comment about the experience. Needless to say, this was a huge honour for me. I've been interested in this subject for years, and it is good to see that my work in this area is being recognised and noticed, particularly as an immigrant to these islands. So I have to thank the Committee's clerks for their kind invitation. Edited to add: the committee members were lovely, friendly, and supportive, and their questions were well crafted and very astute.

Much to my surprise, there was quite a lot of press surrounding this session of the committee, but not because of a surge of people becoming interested in the detail of copyright and AI; it was because Ai-Da the robot was also giving evidence. I will be honest: I was not particularly happy about this. What was supposed to be a quiet day suddenly became the focus of considerable media scrutiny, and for what I think were the wrong reasons. I have nothing against the makers of Ai-Da, but I'm on record stating that such robots are fun but gimmicky. Lost in the media attention was the very important subject that was being discussed, and I hope this will be evident in the text below. People's livelihoods may be affected by all of this, and that fact was lost in the media circus.

On with my prepared statements.

Question 1. How is AI changing the way creative and cultural content is produced, and what positive opportunities will this offer the creative industries over the next 5-10 years?

While I am not a technologist, I have been following developments in artificial intelligence closely over the past ten years. I can say without a doubt that we are in the middle of a technological revolution on a par with the development of the personal computer and the smartphone. When I started looking at the application of AI to the production of cultural content, tools such as Google's Deep Dream seemed impressive, but in retrospect they were acting mostly as mere filters. The adoption of more sophisticated models, such as generative adversarial networks and, more recently, diffusion models, has brought the entire field forward.

In text, GPT-3 and other large language models are capable of producing readable prose, but the real advance has been with images and the development of tools such as DALL-E, MidJourney, and Stable Diffusion. In just the last six months the advances have been mind-blowing, and right now it's difficult to keep up: a new development appears almost every week. Beyond text-to-image and text-to-text, we now have 3D animation and virtual reality, and we are starting to see music and sound models.

Content creation is being changed as we speak. I'm able to use tools to produce images that may not be works of art, and are not intended to replace artists, but that fill me with joy, and I'm not alone in this. Some can be quite marvellous and fun. I can't say what the future holds, but one thing is true: AI is here to stay.

As for opportunities, companies and creators can take advantage of all of these changes. Companies in the UK are already starting to adopt some of these technologies; Stability.ai, for example, is a startup valued at 1 billion USD.

Question 2. What are the main challenges AI presents to the creative industries regarding intellectual property?

I'm going to concentrate on copyright here. There are issues regarding AI in patents, and potentially designs, but those are not my area of expertise and I want to keep my remarks relatively brief.

When thinking about AI and copyright, it’s useful to make the distinction between authorship and liability.

The authorship question in AI is simple: do works generated using an artificial intelligence have copyright? There are two options. If the answer is negative, then all AI works are in the public domain. If the answer is positive, then the question is who gets the copyright: the maker of the program, or the user?

We are fortunate in the UK that there is already a provision that deals with this question: section 9(3) of the Copyright, Designs and Patents Act 1988, which states that the author of a computer-generated work is the person who made the arrangements necessary for the work to be created. Needless to say, this provision came into being in a time before modern AI, and it was drafted mostly with automated systems in mind, more like robotic painters such as Ai-Da; there were already some precursors back then. However, there is nothing stopping the provision as it exists from applying to AI works. On the contrary, in a consultation conducted by the UK IP Office this year, most respondents considered the existing law sufficient for now and recommended that no changes be made, so at least for now s9(3) remains applicable.

So who should the copyright go to? I often use the analogy of a pen: the maker of the pen does not get the copyright over what you write with it. I will admit that this is not a perfect analogy, but we see similar things in the computer industry; Adobe doesn't get the copyright over the images created with Photoshop, even though the software is instrumental in the creation of some artwork. I think that a similar test should apply here.

That brings us to the second issue, that of liability. This is a considerably trickier question to navigate. It can be divided between inputs and outputs, but in the interest of time I will not go into the possible liability for outputs. From the perspective of inputs, machine learning models require a lot of data, and the collection, storage, and processing of that data could be copyright infringement. In 2014 Parliament passed a series of exceptions to copyright; one of these is an exception for text and data mining for non-commercial research purposes, contained in section 29A of the CDPA. As far as I know, the UK was the first country to implement such an exception. Other countries have implemented similar ones, in particular Japan, and most recently the EU with articles 3 and 4 of the Digital Single Market Directive of 2019.

Article 3 is similar to s29A of the CDPA: it enacts an exception for research and scientific purposes. What is different in the EU's approach is Art 4, which allows the reproduction and extraction of lawfully accessible works for the purposes of text and data mining, even for commercial uses, as long as the authors of those works have not reserved their rights. In other words, this acts as an opt-out.

The aforementioned UK IPO consultation suggested the adoption of a provision that goes further than the DSM Directive: an exception for text and data mining for all commercial purposes, with no ability to opt out. During the consultation I suggested that the UK should at least match the EU exception, to bring us in line with European practice.

Question 3. What are the broader challenges AI presents to the creative industries, in particular regarding workers’ rights and fair treatment?

The main challenge is competition for human creators.

AI is already bringing about challenges at the wider societal level that we have not even started to get to grips with. Artists everywhere have rightly been complaining that the prevalence of some AI tools will have a detrimental effect on their livelihoods, but I think this goes beyond art. What happens to academia when anyone can write a decent essay based on a few prompts? What happens to art when everything you can describe can be easily painted by an AI? What happens to music when an AI composer can churn out thousands of passable tunes? What happens to programmers when AI can code faster and better than any one person? What happens to journalism when AI can produce quick outputs based on a few key prompts? What happens to photography when you can create images of people who do not exist?

All of these technologies are already here; they're just not widespread. But we have to start thinking about content creation in different terms. We're reaching a point at which AI outputs are passable, sometimes even beautiful, and while one could argue that there's no competition with human creativity, that is beside the point: human creators will be competing with unlimited, cheap, often free content. It doesn't matter if that picture is not perfect; it is free, and you made it by writing a couple of prompts to an AI that is probably free because it is harvesting your data to further train the model.

These changes will happen, are happening, have already happened. We just haven't realised that the world has changed, and we haven't even started having the conversations about what this new AI-enabled world will mean.

3.1 What action do you recommend is taken to address these issues?

Other than considering universal basic income, I really cannot see how some of the effects of AI on the workplace can be avoided short of an outright ban. I do, however, believe that we need a widespread societal discussion of this issue.

Question 4. What are the strengths and weaknesses of Government and industry responses to the opportunities and challenges of AI for the creative industries?

I believe that the response so far has been very strong, and I have to commend the UK IP Office for being so proactive in formulating consultations in this area that have been both informed and well conducted.

The next stage will be to make sure that our IP laws, in particular copyright law, are up to the challenge.

4.1. How internationally competitive is the UK in this area, and what can the UK learn from abroad?

I believe that the UK is already an international leader on some of these issues: we were the first country with a provision for computer-generated works in our copyright law, and, as I mentioned, we were the first country to pass a text and data mining exception.

As I also mentioned earlier, the best way forward is to adopt an approach similar to that found in the DSM Directive, at the very least one that matches the European exception; otherwise AI companies may start leaving the UK and setting up shop in Paris or Amsterdam. I also think that the government should be encouraging and funding research into the effects of AI on the creative industries, and potentially the adoption of tools that empower human creators to take control over how their works are used in this new AI age.


1 Comment


Andy J · October 13, 2022 at 11:17 pm

Thanks for sharing your input to the committee, Andres. Politicians often get a bad press, but like you, my impression, based on the work of the various committees which look into IP matters (such as the Digital, Culture, Media and Sport Committee), is that the committee members are generally well informed. No doubt much of what they are briefed about comes from highly partisan lobby groups, but as long as they are also listening to independent academics such as you, there is hope that the policies they help to shape will be fair to all concerned.

I'm afraid I don't share your faith in section 9(3) as a helpful stopgap measure when it comes to AI and copyright. The whole issue of authorship in AI (especially fully autonomous AI, which we can expect in the very near future) requires a fundamental re-examination, and it would, in my opinion, be a mistake to start from the premise of s. 9(3).
