There has been quite a lot of talk recently about the UK’s Online Safety Act, most of it negative. It is a perfect storm of Internet regulation: people decry it either for being too soft or for being too strong. Some critics point out that it is useless, while others claim that it is single-handedly responsible for eroding freedom of speech in the UK. Attacks come from both the left and the right, and there appear to be no defenders left, or at least none have crossed my various timelines.
But what is it? The Online Safety Act 2023 was passed by the previous government, and it is aimed at regulating online services to reduce harm, particularly to children. The Act imposes duties of care on internet service providers that host user-generated content or enable user interaction (such as social media platforms, forums, and certain messaging services). These providers must assess risks, put in place proportionate safety measures, and prevent the dissemination of illegal content, which includes terrorism, child sexual abuse material, and certain forms of fraud. The Act also creates specific obligations for protecting children from harmful but legal content, such as material promoting suicide, self-harm, or eating disorders, with stricter rules for services likely to be accessed by minors. Enforcement of the Act is led by Ofcom, which has the power to issue fines of up to £18 million, or 10% of a company’s global annual turnover. In serious cases, the Act allows the regulator to block access to services in the UK. The Act also includes controversial provisions for age verification and makes senior managers potentially liable for non-compliance in specific circumstances.
I have to admit that I’m not an expert on the Online Safety Act, and some people have been covering it in more detail than I ever could (I highly recommend Cyberleagle’s writing on this subject). However, I have been quite interested in following the online blowback to the implementation of the Act, as it has served to remind me of a style of Internet regulation that I thought had been abandoned. The idea that we can have a system of age verification for online content has always been baffling to me, not only because the technology is often clunky, but also because it opens the door to future privacy breaches that could end up being more problematic than the original problem it is supposed to solve.
I can’t pretend to know the extent of the problem of children accessing inappropriate content online. Some people who oppose the Act seem intent on ignoring that this is a real issue. I think that there is definitely a problem there, but the solution offered by the Act is not only mostly useless, it could prove counter-productive. By pushing children towards dodgy sites and towards using VPNs to mask their identity, we could be fostering a generation of youths that are suspicious of any effort to control their online presence.
The enforcement mechanisms within the Act also raise significant questions about proportionality and precedent. While £18 million may sound substantial, for tech giants like Meta or Google such fines represent a mere rounding error in their quarterly earnings. The real threat lies in the power to block services entirely, but this nuclear option seems almost too extreme to be credible. One wonders whether Ofcom will find itself caught between imposing ineffective financial penalties and wielding a regulatory sledgehammer that could prove politically and economically catastrophic. The resulting enforcement limbo may well render the Act’s deterrent effect negligible, particularly for international platforms that can simply relocate their legal structures beyond UK jurisdiction.
Perhaps most troubling is how the Act’s implementation appears to be occurring in a policy vacuum divorced from broader considerations of digital rights and international coordination. The UK risks creating a patchwork of compliance requirements that differ markedly from other developed countries, potentially fragmenting the internet experience for British users. This regulatory isolation could inadvertently push UK citizens towards less regulated platforms or encourage the development of parallel digital ecosystems that operate entirely outside British oversight. Rather than creating a safer online environment, the Act may simply be rearranging the deckchairs on the digital Titanic, whilst the real work of fostering digital literacy and critical thinking skills amongst young people remains woefully underfunded and largely ignored.
You may be wondering: why did the UK even pass this truly awful piece of legislation? The problem with UK policymaking when it comes to online spaces and the Internet is that it tends to be informed by some of the most backward-looking views, because most UK policies tend to be geared towards pleasing the UK’s voting bloc, which is made up mostly of Boomers and, increasingly, older Gen-Xers. Stories about a lawless Internet where predators await children at every click are popular with this demographic, and therefore governments from both the left and the right are encouraged to cater to these ideas. “Won’t somebody please think of the children” is really an important part of the UK’s regulatory landscape, and one that tends to produce clueless regulation like the Online Safety Act.
It is unlikely that the Act will be repealed, even given its unpopularity, but perhaps we can expect governments to stop enforcing it, which would have the effect of reducing its impact. For now, though, the future looks bleak for British innovation.
Personally, I don’t intend to place any age verification on my blog. I’m almost certain that this is not the destination of choice for British youth, and my comment section seems mostly inhabited by linkbacks.
