If you pay attention to technology news, you may have come across deepfakes. In case you haven't, "deepfake" has become an umbrella term for various types of image and video manipulation in which a realistic computer rendering of a person is constructed, seemingly looking and sounding exactly like the original.

There is a growing body of amusing examples of deepfakes, from the Queen to Obama; in a growing number of cases a variation of deepfake technology is being used in movies and TV to de-age actors, or even bring them back from the dead. In the last couple of years, the technology has become so good that people started fearing that it would eventually be used in politics to make it seem like politicians were saying objectionable things. Quite a lot of the press interest in deepfakes has been precisely on this point, with some commentators fearing a deepfake future in which it is difficult to tell reality from fiction.

However, this infocalypse has failed to materialise for various reasons. Firstly, although deepfakes can look impressive, they have yet to cross the uncanny valley, so it is usually obvious that what you are looking at is fake. Secondly, there are much easier ways to disseminate disinformation or to make a politician look bad; often it is sufficient to use their own words, or to deploy old-fashioned selective editing. Finally, our capacity for shock has taken a hit this year, and perhaps a video of the Queen dancing on TikTok doesn't even make it past mysterious monoliths and Galactic Federations in our over-informed brains.

But there is a truly sinister underbelly to the deepfake phenomenon that often goes unreported, and that is the use of deepfakes in porn. Back in October a report found a deepfake-generating bot on Telegram through which faked pictures of over 100,000 women were being shared online, often without their knowledge. Similarly, there has been an increase in deepfakes made from TikTok videos, often depicting minors.

The solution for now has been to fight this new threat with technological tools such as Sensity, which uses deep learning to detect deepfakes. While this is a good option, it does require proactive action from those affected, as you need to train the tool with images before it can recognise a deepfake. It is also a for-profit company. So the existence of technological tools is welcome, but we may need a much more robust legal response. So what is the law?
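As an aside, the train-then-detect workflow described above can be sketched in a few lines. This is purely illustrative and assumes a great deal: the synthetic "feature" vectors stand in for the deep image features a real system would extract from photographs, and `make_features` and `is_deepfake` are hypothetical names, not Sensity's actual API.

```python
import random
import math

random.seed(0)

def make_features(n, centre):
    # Stand-in for deep image features: 4-dimensional vectors clustered
    # around a centre point (genuine photos and deepfakes cluster apart).
    return [[random.gauss(centre, 0.5) for _ in range(4)] for _ in range(n)]

# Training data: features from genuine photos vs. known deepfakes of a person.
real = make_features(200, 0.0)
fake = make_features(200, 2.0)
X = real + fake
y = [0.0] * 200 + [1.0] * 200   # label 1 = deepfake

# A simple logistic-regression detector, trained by gradient descent.
w = [0.0] * 4
b = 0.0
lr = 0.5
for _ in range(300):
    gw = [0.0] * 4
    gb = 0.0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        err = 1 / (1 + math.exp(-z)) - yi   # predicted P(fake) minus label
        for j in range(4):
            gw[j] += err * xi[j]
        gb += err
    w = [wj - lr * gwj / len(y) for wj, gwj in zip(w, gw)]
    b -= lr * gb / len(y)

def is_deepfake(features):
    # Flag a feature vector as a suspected deepfake.
    z = sum(wj * xj for wj, xj in zip(w, features)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

# A vector near the 'genuine' cluster is not flagged; one near the
# 'deepfake' cluster is.
print(is_deepfake([0.0, 0.0, 0.0, 0.0]))   # False
print(is_deepfake([2.0, 2.0, 2.0, 2.0]))   # True
```

The point of the sketch is the dependency it makes visible: the detector only works after it has been trained on images of the person concerned, which is exactly why such tools require proactive action from those affected.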

This is a less explored area, and while some researchers have started tackling the question, the law is all over the place. For some, it may be possible to use defamation law, copyright, or false advertising; others propose privacy or data protection; still others claim that the law is simply not prepared to tackle the issue.

I have been thinking about this for a while, and I would like to put forward a different avenue, and that is to revisit the concept of image rights. Strictly speaking, the term image rights refers to a broad set of legal protections for a person's likeness; it spans various legal fields, from privacy to tort, and is often directed at protecting celebrities and other people who have a commercial interest in their image. In the UK this falls under passing-off, while in the US it is called a publicity right. Some countries have more comprehensive legal protection for non-celebrities; I first became interested in the subject after following the Technoviking case in Germany.

My proposal is to use these tools to try to tackle deepfakes. For now, image rights have been used mostly to protect celebrities from unauthorised commercial use of their image. My proposal would be to establish a comprehensive type of IP protection for a person's own likeness, based on the systems in existence in countries such as Germany and France, and perhaps even to have this harmonised in some sort of treaty.

You may rightly point out that it may be folly to propose a system that creates new rights that do not exist in many jurisdictions. I would argue that image rights should already be under serious policymaking scrutiny. The influencer economy is in full flow, and this means that a growing number of people rely on their image to make a living. While it may be easy to dismiss this as a passing fad, image rights are becoming more important, and if we add the spread of deepfakes, we could be witnessing a conjuncture that requires a regulatory response.

I would also add to these two trends the rise of facial-recognition technology as something that could prompt further protection of people's images.

To conclude, I was having an interesting Twitter conversation about this topic, in which several people pointed out the danger of porn deepfakes. One of the things that we may want to do is to distinguish deepfakes of public figures and politicians from the use of deepfakes in porn, particularly without the subject's permission. While I don't have a concrete suggestion, we need to make sure that when discussing the phenomenon, the real nature of the threat is made clear.



Andy J · January 8, 2021 at 10:40 pm


While I don’t think your proposal for the wider use of image rights legislation is a non-starter, I would want to see the law narrowly targeted to just prevent instances of real harm, somewhat similar to the way defamation is framed in the UK.

As you know, in most jurisdictions which have privacy and image rights legislation which goes further than standard ECHR Article 8 protection, there is an exception for the editorial use of a person's likeness. I'm rather surprised that that line of defence wasn't successfully deployed in the Technoviking case, as Fritsch's original video was pure reportage. Yes, the subsequent exploitation went way beyond that, but much of it was beyond Fritsch's control.

My concern is that poorly drafted legislation (Italy’s perhaps? – remember the Audrey Hepburn case) just leads to censorship rather than the genuine protection of individuals.

On the specific issue of deepfakes of a pornographic nature, firstly I am not convinced that new laws are necessarily the answer, and inevitably the law will always be playing catch-up. For instance, what to do about deepfakes entirely created by autonomous AI, something which is surely just around the corner? Where is the mens rea? And secondly, I think the main issue is not so much going after the perpetrators as giving the courts the powers to get such images or videos taken down quickly. As you are aware, this is already under consideration in the UK under the banner of Online Harms. And as we have seen, courts dealing with large-scale copyright infringement within the EU and UK have been very successful in issuing blocking injunctions against websites which host such infringing material, so I don't see it being a big stretch to bring in similar powers over abusive deepfakes. However, getting worldwide consensus on the subject sounds pretty improbable to me, simply because at present, if this is a problem at all, it is a first world problem.


Sean Harris · April 21, 2021 at 3:21 am

Law would be helpful, as would AI tools that can detect and fight the misuse of cloning technologies. The change brought on by artificial intelligence poses several distinct legal and ethical challenges. The companies using this technology should have strong principles and a responsibility to mitigate the impact of unethical use of related technology, no matter who produces it.

News of the Week; December 30, 2020 – Communications Law at Allard Hall · January 2, 2021 at 9:21 am

[…] Revamp image rights to fight deepfakes (Andres Guadamuz) […]
