It’s been an eventful couple of weeks for YouTube specifically, and for Internet moderation in general. It all started with YouTube’s controversial decision not to ban a famous content creator for instigating anti-gay and racist abuse. The platform was then applauded in some circles for banning a number of neo-Nazi and white supremacist channels, only to be criticised again for removing channels that were either documenting neo-Nazis or hosting educational content about the Holocaust and Hitler. Before that, Facebook had faced calls to take action over a doctored video that made Democratic House Speaker Nancy Pelosi appear to be drunk.

Damned if you do, and damned if you don’t, or as postulated in Graham Smith’s First Law of Internet Intermediaries: “Anything you agree to do will never be enough.”

These incidents come on the back of a wider discussion regarding online moderation by the large platforms. As more and more people get their information online, the role of the tech giants becomes more and more important. Long gone are the days when most content was left unattended and unmoderated; moderation is now expected, so every decision the platforms make comes under increasing scrutiny.

The problem is that moderation is hard. Really hard. Democratic societies have been struggling with the question of freedom of speech and the role of communicators for centuries, and entire bodies of law have been developed to deal with this specific question, yet somehow we expect tech giants to get things right almost overnight. As the platforms fail to provide the content moderation that is increasingly expected of them, governments threaten more regulation, and the courts start to tinker with the problem.

One of the latest examples is the ongoing case before the Court of Justice of the European Union regarding Facebook content moderation, Eva Glawischnig-Piesczek v Facebook Ireland Limited (C-18/18). In this case, a Facebook user in Austria posted disparaging comments about Austrian MP Eva Glawischnig-Piesczek. She responded by asking Facebook to remove the comments, and when it did not respond, she went to an Austrian court to ask for an injunction to have the defamatory post removed. So far so normal. What sets this case apart is that the request was made to Facebook Ireland, and it did not only ask for the offending post to be removed; it also asked for the removal of all instances of the Austrian politician being called “a ‘lousy traitor of the people’ and/or a ‘corrupt oaf’ and/or a member of a ‘fascist party’.” The Austrian court granted the injunction, and Facebook Ireland disabled access to the post in Austria.

On appeal, the court agreed that the comments were defamatory and degrading, and should therefore be removed, but the question arose as to the extent of the injunction. Should the content be removed only in Austria, or should Facebook Ireland be ordered to block access to the post all around the world? Or just in Europe? The Austrian Supreme Court referred this question to the CJEU, and Advocate General Szpunar has given an answer that has sent chills down the spines of many Internet regulation experts.

There is an interesting question in the referral regarding intermediary liability, one with wider implications for the continuing existence of the limitation of liability contained in the E-commerce directive. If one were to take the injunction to its logical conclusion, it does not only ask for the removal of one specific post (which is relatively easy to do), but requires the constant removal of new posts which may contain the offending combination of defamatory words. This would seem to require an obligation to monitor content, which goes against Art 15 of the E-commerce directive. It would be yet another nail in the coffin of the shrinking system of limitation of liability that arose around the 2000s, and perhaps we can just bury it for good.

After a lengthy and interesting discussion about jurisdiction, AG Szpunar does not preclude the extra-territorial application of injunctions, but asks that courts be careful in taking such a route, as there is no harmonised approach to personal rights, and some of the content could be legal in other jurisdictions. Courts should adopt a policy of self-limitation, and he says:

“The implementation of a removal obligation should not go beyond what is necessary to achieve the protection of the injured person. Thus, instead of removing the content, that court might, in an appropriate case, order that access to that information be disabled with the help of geo-blocking.”
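For readers who have not come across the term, the sketch below shows roughly what geo-blocking amounts to in practice: the post stays online, but is withheld from anyone whose IP address appears to be in a country covered by the injunction. This is an illustrative toy rather than a description of Facebook’s actual systems; the post identifier, the blocked-country list and the lookup function are all invented.

```python
# Toy sketch of geo-blocking: the post stays up, but is withheld from viewers
# whose IP address appears to resolve to a country covered by the injunction.
# Everything here is invented for illustration; real systems rely on IP
# geolocation databases, which VPNs and proxies can defeat.

BLOCKED_IN = {"post-123": {"AT"}}  # hypothetical: this post is blocked only in Austria

def country_of(ip_address: str) -> str:
    """Stand-in for an IP-to-country lookup (normally a GeoIP database)."""
    return "AT" if ip_address.startswith("193.") else "US"

def can_view(post_id: str, viewer_ip: str) -> bool:
    """Serve the post unless the viewer appears to be in a blocked country."""
    return country_of(viewer_ip) not in BLOCKED_IN.get(post_id, set())

print(can_view("post-123", "193.0.2.1"))  # False: looks like an Austrian address
print(can_view("post-123", "8.8.8.8"))    # True: looks like it is elsewhere
```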

But perhaps the most interesting part of the opinion is the discussion on content moderation. AG Szpunar spends some time thinking about how it would be possible to moderate content that has been found to be defamatory, and this analysis is very telling of the type of expectations that the judiciary has when it comes to technology. In a striking series of paragraphs, he writes:

“60. Nonetheless, an obligation to seek and identify information identical to the information that has been characterised as illegal by the court seised is always targeted at the specific case of an infringement. In addition, the present case relates to an obligation imposed in the context of an interlocutory order, which is effective until the proceedings are definitively closed. Thus, such an obligation imposed on a host provider is, by the nature of things, limited in time.

61. Furthermore, the reproduction of the same content by any user of a social network platform seems to me, as a general rule, to be capable of being detected with the help of software tools, without the host provider being obliged to employ active non-automatic filtering of all the information disseminated via its platform.

62. In addition, imposing the obligation to seek and identify all the information identical to the information that was characterised as illegal makes it possible to ensure a fair balance between the fundamental rights involved.”

This is remarkable. Without having heard any evidence, he declares that platforms should be able to use software to find content identical to that which is infringing. Yet again, Nerd Harder. Would any legal report containing the infringing combination of words be filtered out by these magical filters? Would any discussion be censored?
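To see why this assumption troubles me, here is a deliberately naive sketch of the kind of ‘identical content’ filter the opinion seems to envisage. It is a hypothetical illustration, with invented phrases and function names, and it is nobody’s actual moderation system; even so, notice how the crude exact match catches the reporting as readily as the abuse.

```python
# Deliberately naive sketch of an "identical content" filter of the kind the
# opinion appears to envisage. Hypothetical illustration only: the phrases and
# function are invented, and no platform works exactly like this.

# Phrases a court has found defamatory in one specific context.
BLOCKED_PHRASES = [
    "lousy traitor of the people",
    "corrupt oaf",
]

def matches_illegal_content(post: str) -> bool:
    """Flag any post containing one of the offending phrases verbatim."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

# The abusive post is caught, as intended...
print(matches_illegal_content(
    "She is a lousy traitor of the people and a corrupt oaf."))             # True

# ...but so is a news report (or this very blog post) quoting the ruling.
print(matches_illegal_content(
    "The court held that calling the MP a 'corrupt oaf' was defamatory."))  # True
```

Fancier matching, whether hashes, fuzzy similarity or machine learning, changes the mechanics but not the underlying problem: the software has no way of knowing why the phrase appears.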

This is what continues to worry me about the debate regarding online moderation: we are trying to impose wide-ranging solutions on problems that are often local, and we are expecting platforms to conform to often contradictory rules, while also expecting them to get it just right, not too hot, not too cold. Ban the Nazis, but not so zealously that you also ban people discussing Nazis. Ban misinformation and hateful speech, but then the banning carries on to speech we simply don’t like, to people making off-colour jokes, or just being silly. I know slippery-slope arguments are problematic, but we may be faced with a genuine slippery slope here. If we let the platforms decide, then we are really in danger of entrenching the views of a small number of moderators in Silicon Valley.

I have to insist again that I am not calling for nothing to be done. I have been the recipient of online abuse, and for a period of time this abuse seriously affected my mental well-being, so I can see why we want more accountability and stronger online enforcement and moderation. But some of the debate seems to imply that it is as easy as flipping a switch. We have to start recognising that this is not something that will be solved with a few codes of conduct, or with malformed and misguided concepts of duty of care.

This debate cuts to the heart of who we are as a society, and what level of control we want in our lives.

