[Image: evergreen cartoon]

It’s a dance as old as (digital) time. When faced with a challenge, politicians will look for a scapegoat on which to pin the blame for a complex issue, and propose allegedly easy solutions to fix impossible problems. Terrorist attack? End encryption. Rise in populism? End fake news. Violent crime increase? Video games are to blame. Teen suicides? Regulate social media.

UK Health Secretary Matt Hancock has been in the news demanding that social media companies clean up their act, or regulation will be forthcoming. He wrote:

“It is appalling how easy it still is to access [harmful] content online and I am in no doubt about the harm this material can cause, especially for young people. It is time for internet and social media providers to step up and purge this content once and for all.”

His calls come in the wake of the tragic suicide of teenager Molly Russell, who her parents claim took her own life because of Instagram. Russell was following a number of accounts showing depressing content that, they say, promotes suicide and self-harm, with messages such as “i hate me”, “this world is so cruel and I don’t want to see it anymore”, and “have you ever cried just because you are you?”, or images such as this:

The main argument from the parents and people like Matt Hancock is that these images are too easily accessible, that searching for hashtags such as #selfharm, #depression and #suicide will readily produce images that are excessive and often directly or indirectly encourage harmful conduct, and that displaying such images makes vulnerable teenagers more likely to engage in self-harm. So we are told that social media must take immediate action to remove harmful content and protect children. Who could oppose such a view? Won’t anyone think of the children?

Reporting tools are already available

While I am entirely sympathetic to the plight of the victims of depression and their families, and agree that teenagers and children will often have access to images that can encourage self-harm and suicide, I disagree that this is a simple problem, or that it can be easily fixed with a change in regulation or legislation.

Firstly, there is this idea in some circles, and particularly in sectors of the press, that social media is an unregulated space where everything goes and where terabytes of harmful content are easily accessible. The fact is that companies already spend millions on content moderation, with lengthy rulebooks on what should be filtered out. Similarly, platforms such as Instagram already allow users to flag content, or to let the system know when someone might be engaged in practices that could lead to harm. When you search for some potentially problematic hashtags, you will get this message:

Secondly, removing all harmful content is very difficult because of the volumes involved. 300 hours of video are uploaded to YouTube every minute. Instagram users upload 400 million stories per day, and 52 million pictures per day. It’s impossible for humans to moderate this amount of content, and while companies could try to use machines to do some of the moderation (they already do to some extent), automated filtering can generate more problems than it solves, as there is always the chance that legitimate content will be removed by mistake.

Finally, there seems to be a serious misunderstanding of how the Internet works, and of where the problem really lies. Snapchat is the most popular social media platform amongst teens, and it could prove extremely difficult to police, so regulators may be trying to tackle the wrong technology. If every single harmful image were magically removed from Pinterest and Instagram tomorrow, there would still be other outlets, and then others that have not even been invented yet. The problem is that we treat the images on Instagram as the cause, and not as the symptom. What drives teenagers to self-harm? Probably a combination of factors, but by the time someone searches for a hashtag like #suicide, something has already gone wrong in that person’s life, and it is most likely not caused by an Instagram post.

I am not saying that we should do nothing, but I am afraid that a lack of understanding of how content is shared and curated will produce the wrong proposals. We could find that misguided solutions may end up pushing teenagers to darker corners of the Internet. Any action must be taken with clear understanding that social media is just part of the problem, and that there are other factors at work here, such as social change and even parenting strategies.

Do not trust anyone peddling easy solutions; the Internet does not come with a switch.


3 Comments

Cédric · February 4, 2019 at 1:45 pm

It’s only tangential to your post above, Andres, but I’m pretty sure you will be interested in Elie’s recent preso on CSAI at https://www.elie.net/talk/rethinking-the-detection-of-child-sexual-abuse-imagery-on-the-internet/

    Andres · February 7, 2019 at 10:20 pm

    Thanks, very useful!

News of the Week; January 30, 2019 – Communications Law at Allard Hall · February 4, 2019 at 5:30 am

[…] Can the internet be made safe for children? (Andres Guadamuz) […]
