It would be fair to say that the Christchurch terrorist attack has been one of the most shocking events in recent history, not only because of the heinous act itself, but because the perpetrator live-streamed the attack on Facebook, and the video was then shared countless times online.

After a tragedy such as this, there is usually a desire to make immediate changes so that such events cannot happen again; New Zealand reacted straight away by banning assault rifles. But the Internet question is more difficult to tackle, and policy-makers and many commentators seem baffled as to how to react.

In many ways, the story of the attack offers a perfect illustration of the difficulties of regulating the Internet. It has become evident that the terrorist was part of an online white supremacist sub-culture that radicalises young people and spreads its message through memes on various websites, most of them completely unregulated boards. In particular, the Christchurch terrorist advertised the attack on 8chan beforehand, providing a link to his manifesto and stating that he would be live-streaming the event on Facebook with a body-cam. As advertised, the assault was broadcast for 17 minutes to a live audience of between 40 and 200 people, most of whom one has to assume were 8chan users. The video was not reported until 12 minutes after the live broadcast had finished, and it was viewed by 4,000 people before it was finally removed. However, by the time it was taken down it had already been downloaded and recorded in various formats, including screen recordings. It was then re-posted all over the Internet, including on Twitter, Reddit, and YouTube, and it started being shared through messaging apps like WhatsApp. A massive digital clean-up operation ensued: within 24 hours Facebook had blocked 1.5 million attempts to re-upload the video, and YouTube removed an ‘unprecedented volume’ of videos, without specifying numbers.

The digital virality of the footage has prompted various leaders to call for more Internet regulation, with Australian PM Scott Morrison going as far as stating in a call for action that “it is unacceptable to treat the internet as an ungoverned space”. Expect some Internet regulation action to come soon. In a time of tragedy it is normal to look for somewhere to allocate blame beyond the obvious perpetrator. Could this have been prevented? If so, how?

The first reactions point towards general blame of the tech giants, with Facebook and YouTube getting the largest share as the first conduits through which the video was shared, and there is considerable talk of putting a leash on online content. I have to admit that I am unsurprised by some of the reactions, and I am also certain that most proposed “solutions” will completely miss the mark.

The main reason for my scepticism is that there is evidently some selective blame-allocation taking place in policy circles. While the Australian Prime Minister blames the Internet, it is quite ironic that he seems to have failed to criticise the fact that Australian mainstream media broadcast the video; while that decision is under review, the damage is done. Similarly, we seem to be ignoring the fact that we the public deserve quite a lot of the blame: Facebook, Twitter, and Google do not share the video, it is the users who upload and share it.

Similarly, there appears to be a complete misunderstanding of how the extreme right-wing online forums operate. Anyone who has been following the rise of the alt-right and Neo-Nazis online will know that these communities are quite sophisticated when it comes to spreading their message. Many mainstream memes begin their life in places such as 4chan, and there is often a very good understanding of the type of content that will go viral. The Christchurch terrorist was aware of this, and the streaming was done in a way that ensured widespread sharing. The manifesto itself is filled with in-jokes and references to the obscure sub-culture, including mentions of Bitconnect, PewDiePie, and various memes; the intention is almost entirely to bait mainstream media into mentioning these obscure references, as if the whole event were part of an ongoing big joke for some of the participants. These communities use sites like Facebook only to amplify their message, but the actual discussion takes place in places that have practically no oversight, and which are almost entirely devoid of regulation.

Moreover, the video was shared so widely because the Internet is built on the idea of spreading information; in reaching a large audience, the video shows the Internet working precisely as intended. Censorship is difficult online, even after the application of filters. Facebook claims that it was able to filter about 80% of the shared content, but even the remaining 20% is enough to guarantee further spread: all it takes is one copy being shared to ensure the content will remain out there.
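To illustrate why even aggressive filtering leaks, consider the simplest possible upload filter: a blocklist of exact file fingerprints. The sketch below is purely illustrative Python, not how Facebook or YouTube actually work (production systems use perceptual hashes that tolerate some alteration, and even those were evidently defeated by the screen recordings mentioned above); it shows that any re-encoding changes a file’s bytes, and therefore its fingerprint, so every new variant escapes until it is individually caught and added.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact fingerprint: SHA-256 of the file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-in for the bytes of a known harmful video.
original = b"...original video bytes..."
blocklist = {fingerprint(original)}

# A screen recording, crop, or re-encode yields entirely different bytes;
# even a single-byte change is enough to defeat an exact-match blocklist.
altered = original + b"\x00"

print(fingerprint(original) in blocklist)  # True  -> upload blocked
print(fingerprint(altered) in blocklist)   # False -> variant slips through
```

This asymmetry is the whole game: the defenders must enumerate every variant, while sharers need only one copy that has not yet been fingerprinted.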

Should we give up?

Of course not, but we need a more sober look at what is really going on, and this means also taking a very good look at ourselves and at what we really expect from technology. Firstly, we should understand that the likes of Facebook offer tools that allow users to share information. We could try to impose more restrictions on how these companies do it, but in the end this cannot be done without severely hindering the main function of online spaces. If we want a “safer” Internet, we must be prepared to give up quite a few perks of a more open Internet. No live streaming. Filtered user-generated content. National firewalls. No sharing to a public audience, only to your circle of friends and followers. Monitored private communications. No anonymity allowed. Subscription services using verified “real life” information, or only allowing verified users to broadcast. Remove all intermediary liability exceptions.

This may seem like an acceptable compromise to some, but even then the threat will not end. If we regulate the tech giants more and more, the Internet will still allow users to congregate outside of these commercial structures. Furthermore, these actions will almost surely not affect sites like 8chan. But perhaps more importantly, attempting full control of a global decentralised network is futile, because what we would be regulating are the centralised nodes that we commonly use, not the network itself.

I honestly do not have a viable solution, and I do not think that one is possible with our current system. While I am aware that some people may want to live in a fully controlled Internet, that is not how it works, and any sanitised network we might achieve would not be the Internet as we know it. Making Internet intermediaries more liable will also not result in the removal of harmful content, as for the most part this does not originate there.

But perhaps we do need to look at our own practices more. I am personally glad that I have yet to come across the Christchurch attack video, which may seem remarkable given the amount of time I spend online. I have disabled automated video playback wherever possible. I am lucky that nobody I follow on any social media shared the video (that I am aware of). Every time I encounter anyone in my timelines sharing what I would consider objectionable content, I promptly mute or unfollow; on a few extreme occasions I have contacted the person to tell them precisely that I object to the racist, misogynist, or nationalist content they shared. I practically never use WhatsApp or join Facebook groups, so there is also limited scope for someone to share content with me. These are just personal practices, but so far they seem to have helped me avoid extreme content online, and while I cannot expect others to follow them, we cannot rely only on tech companies to keep us safe.

But we live in a time of regulatory over-reaction, so at the very least, expect things to get worse.

