One of the most over-used (yet true) legal comparisons in Internet regulation studies is the contrast between the European and US approaches to freedom of speech in cyberspace. The United States favours an almost unlimited view of freedom of speech, while Europe has put in place large caveats and balances against other rights, particularly privacy. This clash is often seen in European legislation and case law that seem to erode freedom of speech, such as bans on Nazi memorabilia, curbs on hate speech, the right to be forgotten, and requirements for intermediaries to remove hateful content online.
Now the US-based civil liberties community is undergoing a serious soul-searching exercise about the limits of freedom of speech, after a series of events prompted a revision of the sacrosanct First Amendment. Although the debate has exploded in the last couple of weeks, it has actually been going on for a while. I would argue that the current iteration of the free speech online debate gained force after Gamergate, when several prominent feminists online started receiving serious abuse and threats. It soon became clear that much of the abuse got a free pass from online platforms, with Gamergate supporters assuming the mantle of freedom of speech (see here and here). The prevalent meme espoused by some sides in Gamergate was that they were the first line of defence against the censorious so-called Social Justice Warriors, feminists, and the “PC Brigade”. Lindy West, in an article in the New York Times, explains the situation:
“[…] The anti-free-speech charge, applied broadly to cultural criticism and especially to feminist discourse, has proliferated. It is nurtured largely by men on the internet who used to nurse their grievances alone, in disparate, insular communities around the web — men’s rights forums, video game blogs. Gradually, these communities have drifted together into one great aggrieved, misogynist gyre and bonded over a common interest: pretending to care about freedom of speech so they can feel self-righteous while harassing marginalized people for having opinions.”
Various writers have pushed back against this caricature of the issues and have tried to defend a more nuanced view of free speech, particularly when it comes to online abuse. Bishakha Datta frames freedom of expression online as a conflict of power inequality, and concludes that “no one should have the right to abuse another under the guise of freedom of expression.” Soraya Chemaly writes that “when institutions tolerate sustained online bullying, abuse, and harassment, they become complicit in it.” Similarly, Sarah Jeong, in her book The Internet of Garbage, argues that online abuse has various elements, and that it is not only a debate about freedom of speech.
The latest iteration of the free speech conflict started with the publication of an internal document at Google by engineer James Damore. The document reads like an anti-diversity manifesto, claiming that women differ from men in ways that make them less likely to be effective in the technology workplace. Damore was fired by Google, prompting an immediate backlash from free speech proponents. Then came Charlottesville, when the US woke up to the fact that there is a sizeable contingent of neo-Nazis and white supremacists. I have a theory that Charlottesville shocked many sectors of the left because the people marching looked normal, just your average white dude from down the street. It became clear that the neo-Nazis had been congregating and organising online with no opposition whatsoever, and now they were marching and committing a terrorist attack that took the life of counter-protester Heather Heyer. Something clicked, the penny dropped, the light bulb went on: tech firms finally took action. The white supremacist website Daily Stormer was removed, Facebook and Reddit took down several hate group pages, Apple Pay withdrew payment facilities from several hate sites, and even Spotify banned white-power tracks.
All of these actions have prompted a debate amongst the otherwise monolithic pro-freedom-of-expression civil liberties groups such as EFF and the ACLU. EFF has come out in favour of freedom of speech online with a strongly worded condemnation of the tech firms removing hateful sites. Their argument is familiar: “if you tolerate this, then you might be next”; they also rightly point out that corporations should not be the arbiters of what gets published online. Three Californian ACLU affiliates have stirred the pot by claiming that white supremacist violence is not freedom of speech, concluding that “the First Amendment should never be used as a shield or sword to justify violence.” The national ACLU continues to stand for freedom of speech.
From an Internet regulation perspective, this has been a very interesting week. As someone who favours the European approach to freedom of speech, I would explain what is happening in the US right now as a sudden realisation that maybe the European standards are worth a second look. I often disagree with friends and colleagues from across the pond on this very topic. A lot of people I greatly respect and admire tend to be on the free-speech-maximalist end of the spectrum, while I am in favour of things like data protection, the right to be forgotten, hate speech removal, and even the criminalisation of some online practices. I do agree, however, that platforms and intermediaries should not have the power to unilaterally decide when to remove something, and this is where some sort of regulation comes into play.
It all comes down to a basic idea about what an open and democratic society should look like, and it is best expressed by Karl Popper in his book The Open Society and Its Enemies, in what is known as the paradox of tolerance. Popper explains:
“Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. […] We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. We should claim that any movement preaching intolerance places itself outside the law, and we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal.”
This is well encapsulated in various types of legislation across Europe, particularly the banning of Nazi memorabilia and the criminal prosecution of online hate speech, but to a lesser extent it can be seen in other actions designed to balance rights, such as free expression and privacy. In the US, free speech is usually not subject to the same balancing act that European courts have performed, and this is one of the reasons why the Court of Justice of the European Union (CJEU) recognised the so-called right to be forgotten in the Google Spain case. The balancing act can be seen in more detail in a series of decisions by the European Court of Human Rights (ECtHR), which starts with Delfi v Estonia and culminates in MTE v Hungary. In these cases, the ECtHR had to balance the freedom of speech of news organisations and internet intermediaries against the right to privacy of users who were abused online. In Delfi, the court came down on the side of the victim, ruling that intermediaries were under an obligation to remove abusive content online. This decision was met with criticism from freedom of expression proponents, and the court then made adjustments in MTE v Hungary. In that case the court decided that removal of content should only take place where there is “hate speech and direct threats to the physical integrity of individuals”. This is a high threshold that still leaves room for respecting the right to freedom of expression.
In the end, the United States tech industry has already been complying with take-downs of content for many years, particularly in cases of copyright infringement, terrorism, and child pornography. For years these platforms have been acting to remove pro-ISIS content wherever it is found, and there has been little pushback from free speech advocates. The difference now is that the definition of what is considered hate speech online would be extended to include white supremacists and neo-Nazis.
I do agree with EFF and many others who are suspicious of giving tech platforms unilateral power to act as the judge, jury, and executioner of online content. In my view, this is where regulation and case law could prove useful. The problem is that in a system that enshrines free speech above other rights, there will be little legal protection against abuses of those rights. In a thoughtful response to the current situation, Access Now states:
“Freedom of expression is not an absolute right. However, governments appear too willing to obscure the most public and vocal face of hate, while failing to combat the deeper roots of racism and violence, listen to victims, and prosecute those responsible for the most heinous and violent crimes. Hate groups in the U.S. have emerged because they feel emboldened by the rhetoric of U.S. authorities, but also because the government has failed to uphold its responsibility to protect human rights, especially of minority communities.”
I absolutely agree. While the European approach can be flawed and can produce bad results from time to time, I do not feel less free in Europe because of the existing checks and balances on unfettered free speech. To paraphrase that often-mentioned maxim: freedom of speech stops where the rights of others begin.