If you are paying attention to technology news, you may have come across deepfakes. In case you haven’t, a deepfake has become an umbrella term for various types of image and video manipulation in which a realistic computer rendering of a person can be constructed, seemingly looking and sounding exactly like the original.
There is a growing body of amusing examples of deepfakes, from the Queen to Obama, and in an increasing number of cases a variation of deepfake technology is being used in movies and TV to de-age actors, or even to bring them back from the dead. In the last couple of years the technology has become so good that people started fearing it would eventually be used in politics to make it seem like politicians were saying objectionable things. Quite a lot of the press interest in deepfakes has been precisely on this point, with some commentators fearing a deepfake future in which it’s difficult to tell reality from fiction.
However, this infocalypse has failed to materialise for various reasons. Firstly, although deepfakes can look impressive, they have yet to cross the uncanny valley, so it’s usually obvious that what you’re looking at is fake. Secondly, there are much easier ways to disseminate disinformation or to make a politician look bad: often it’s enough to use their own words, or to deploy old-fashioned selective editing. Finally, our capacity for shock has taken a hit this year, and perhaps a video of the Queen dancing on TikTok doesn’t even make it past mysterious monoliths and Galactic Federations in our over-informed brains.
But there is a truly sinister underbelly to the deepfake phenomenon that often goes unreported, and that is the use of deepfakes in porn. Back in October, a report found a deepfake bot generator on Telegram through which manipulated pictures of over 100,000 women were being shared online, often without their knowledge. Similarly, there has been an increase in deepfakes on TikTok, often depicting minors.
The response so far has been to fight this new threat with technological tools such as Sensity, which uses deep learning to detect deepfakes. While this is a good option, it requires proactive action from those affected, as you need to train the tool with images so it can recognise a deepfake when one appears. Sensity is also a for-profit company. So the presence of technological tools is welcome, but we may need a much more robust legal response. What, then, does the law say?
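To make the detection approach concrete, here is a purely illustrative toy sketch of the general workflow such tools rely on: gather labelled examples, extract features, train a classifier, then flag new images. This is emphatically not Sensity’s actual system (which uses deep neural networks on real imagery); the data, features, and classifier below are all hypothetical stand-ins.

```python
# Illustrative sketch only -- NOT Sensity's real pipeline. A toy
# "detector" trained on labelled real/fake examples.
import math
import random

def features(image):
    """Reduce an image (a flat list of pixels in [0, 1]) to two toy
    features: mean brightness and (scaled) pixel variance. Real
    detectors learn far subtler cues, e.g. blending artefacts."""
    mean = sum(image) / len(image)
    var = sum((p - mean) ** 2 for p in image) / len(image)
    return [mean, var * 100]  # scale so both features are roughly O(1)

def train(samples, labels, epochs=300, lr=0.1):
    """Fit logistic regression with plain stochastic gradient descent.
    Labels: 1 = fake, 0 = real."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1 / (1 + math.exp(-z))  # sigmoid probability of "fake"
            err = p - y                 # gradient of the log-loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def is_fake(w, b, image):
    x = features(image)
    return w[0] * x[0] + w[1] * x[1] + b > 0

# Synthetic training data: "real" images have low pixel variance,
# "fake" ones high variance (a stand-in for rendering artefacts).
random.seed(0)
real = [[random.uniform(0.45, 0.55) for _ in range(64)] for _ in range(20)]
fake = [[random.uniform(0.0, 1.0) for _ in range(64)] for _ in range(20)]
samples = [features(img) for img in real + fake]
labels = [0] * 20 + [1] * 20
w, b = train(samples, labels)
```

The point of the sketch is the workflow, not the model: those affected must supply labelled images before any detection can happen, which is exactly the proactive burden described above.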
This is a less explored area, and while some researchers have started tackling the question, the law is all over the place. Some suggest it may be possible to use defamation law, copyright, or false advertising; others propose privacy or data protection; yet others claim that the law is simply not prepared to tackle the issue.
I have been thinking about this for a while, and I would like to put forward a different avenue: revisiting the concept of image rights. Strictly speaking, the term image rights refers to a broad range of legal protections of a person’s likeness. It spans various legal fields, from privacy to tort, and is often directed at protecting celebrities and others who have a commercial interest in their image. In the UK this falls under passing-off, while in the US it is called a publicity right. Some countries have more comprehensive legal protection for non-celebrities; I first became interested in the subject after following the Technoviking case in Germany.
My proposal is to use these tools to try to tackle deepfakes. For now, image rights have been used mostly to protect celebrities from unauthorised commercial use of their image. My proposal would be to extend a comprehensive type of IP protection to a person’s own likeness, based on the systems already in existence in countries such as Germany and France, and perhaps even to harmonise this in some sort of treaty.
You may rightly point out that it may be folly to propose a system that creates new rights which do not exist in many jurisdictions. I would argue that image rights should already be under serious policymaking scrutiny. The influencer economy is in full flow, which means that a growing number of people rely on their image to make a living. While it may be easy to dismiss this as a passing fad, image rights are becoming more important, and if we add the spread of deepfakes, we could be witnessing a conjuncture that requires a regulatory response.
To these two trends I would add the rise of facial-recognition technology as a further development that could prompt stronger protection of people’s images.
To conclude, I was having an interesting Twitter conversation about this topic in which several people pointed out the danger of porn deepfakes. One thing we may want to do is draw a distinction between deepfakes of public figures and politicians on the one hand, and the use of deepfakes in porn, particularly without the subject’s permission, on the other. While I don’t have a concrete suggestion, we need to make sure that when discussing the phenomenon, the real nature of the threat is made clear.