
On 13 May 2014 the Court of Justice of the European Union handed down a ground-breaking decision in Google Spain v Costeja González (C-131/12), which heralded the creation of the Right to Be Forgotten (RTBF). As you may remember, Mario Costeja González, a Spanish citizen, wanted a newspaper article about his insolvency to be ‘forgotten’ by Google and no longer listed in its search results. The CJEU held that Google was a controller of personal data, and therefore subject to EU data protection rules, meaning individuals had the right to ask it to stop linking to material that could be deemed ‘inaccurate, inadequate, irrelevant or excessive’.

Fast forward one year, and it seems a good time to weigh up the pros and cons of the RTBF. Has the right been misused by thieves, con artists and sex offenders, as some feared? Has it improved privacy? Has it created a new regulator in the shape of Google?

The figures are quite interesting. Google's transparency report indicates that, at the time of writing, it has received 255,143 requests covering almost a million URLs. Of those URLs, it has removed just over 41% from its search results.
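As a rough sanity check on those figures, here is a minimal sketch of the arithmetic. The exact URL total is not given in the post ("almost a million"), so the round one-million figure below is an assumption, not a number from Google's report:

```python
# Rough illustration of the transparency-report figures quoted above.
# The URL total is approximate; treat these values as placeholders.
requests_received = 255_143   # delisting requests reported by Google
urls_requested = 1_000_000    # assumed round figure for "almost a million" URLs
removal_rate = 0.41           # "just over 41%" of URLs removed

urls_removed = urls_requested * removal_rate
urls_per_request = urls_requested / requests_received

print(f"URLs removed (approx.): {urls_removed:,.0f}")         # ~410,000
print(f"URLs per request (approx.): {urls_per_request:.1f}")  # ~3.9
```

In other words, each request names roughly four URLs on average, and something in the region of 400,000 URLs have been delisted, if the approximate figures above are taken at face value.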

We only have access to partial information about what is being removed. Here are some examples of requests that have been granted (source: Julia Powles):

  • A woman's request to remove a decades-old article about her husband's murder, which included her name.
  • A request to remove five-year-old stories about a person's exoneration in a child pornography case.
  • A request to remove results detailing a patient's medical history.
  • A request to remove the name of someone who appears on the membership list of a far-right party but no longer holds such views.

And here are some of the requests that have been rejected:

  • A request from a former clergyman to remove two links to articles covering an investigation into accusations of sexual abuse made against him in his professional capacity.
  • A request from a paedophile to remove links to articles about his conviction.
  • A request from a doctor to remove more than 50 links to newspaper articles about a botched procedure. Three pages that contained personal information about the doctor but did not mention the procedure have been removed from search results for his name; the remaining links to reports on the incident remain in search results.

Most of the cases reported by Google suggest that it is applying the ruling consistently, mostly erring on the side of caution where genuinely damaging personal data is being shared.

But we need more data, which is why 80 academics (including yours truly) have signed a letter asking Google to release more raw data in its transparency reports. The letter explains:

“We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the RTBF in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.”

This is one of the most important legal rulings of our lifetime, and one that has made Google judge, jury and executioner when it comes to its application. We want more data so that we can better determine whether it is being applied properly.

Based on the information we have at the moment, I strongly believe that the right to be forgotten is a good thing, but I will continue to revise my position as more data becomes available.

Edited to add:

We need to keep an eye on the proposed General Data Protection Regulation, which includes a right to erasure.

Categories: Privacy

2 Comments


Seb · May 16, 2015 at 12:32 pm

I wonder if making this information publicly available doesn't defeat the whole purpose of the right to be forgotten? We're surely talking about aggregated data, but haven't we seen too often in the past how easy it is to identify individuals from pseudonymised data?


    Andres · May 19, 2015 at 5:22 am

    Excellent point, it really had not occurred to me. I guess the idea is not so much to prevent people from being identified, since the original content is still available anyway, but to get more data into a transparency report, which won't be indexed in any case; it is that indexing which the existing right to be delisted is aimed at.

