You may have heard the outcry over the recent Online Safety Act, in particular in relation to adult content online; “I now need permission from Keir Starmer to get off,” as one Reddit user succinctly put it. But along with requiring age verification to view adult sites, this Act has far worse, far-reaching, and frankly dangerous consequences for our society.
What Is the Online Safety Act?
The Online Safety Act gives unprecedented powers to Ofcom, the UK’s communications regulator, to police online content. Platforms and websites are now legally required to proactively monitor, filter, and remove content deemed ‘harmful’ - a term that is worryingly vague and open to interpretation. Laws are meant to be specific and prevent ambiguity, yet ‘harmful’ can be argued to mean practically anything. Before much-needed amendments, the law covered not just illegal material but also so-called “legal but harmful” content, which could have included anything from political opinions to satire. While this phrase was taken out of the final Act, accountability provisions remain that pressure platforms into censoring ‘by default’.
Do you remember net neutrality? When ISPs were told they couldn’t throttle or block specific content on the Internet just because they didn’t like it? That pales in comparison to this. While net neutrality mainly concerned access speeds to different types of content, this Act gives broad censorship powers to government bodies, and to tech companies by extension.
It is already blocking things such as the subreddits r/cider, r/stopsmoking, and r/sexualassault. Wonderful - children can no longer access incredibly useful internet resources to spot the signs of dangerous familial environments, and nobody can seek help with an addiction without handing all of their identification to the government (via a third-party company!). (I say blocking; what I really mean is slapping an identification wall in front of the content - but for children, and for anyone who doesn’t want to fork over their private information, it’s a de facto block.)
Of course, it is important to protect children from falling into dark rabbit holes, especially on the internet, but in my view this is absolutely not the way to do it. ‘Adult content’ policies like this inevitably slip into broader censorship. Once upon a time, a particularly strict internet firewall at my secondary school derailed an entire PSHE curriculum: for a whole week, nobody could search for ‘abortion’, ‘pregnancy’, or ‘menstruation’. Is that really the Internet we want to cultivate? Do we want to make it ‘taboo’ to talk about things like that? It draws eerie comparisons to book bans in places like Florida.
Why Is This Dangerous?
The most alarming aspect of the Act is its devastating impact on privacy and free expression. In a strange inversion of the Brussels Effect - so named because regulation usually spreads outward from the EU, not towards it - this law is already spreading: Twitter (X) has begun to block ‘adult’ content in Ireland and the rest of the European Union as well as the UK. By the way, ‘adult’ in this context includes things like war footage, such as coverage of the wars in Gaza and Ukraine. Twitter (at the time of writing) does not allow users to verify their age manually via sensible methods such as ID uploads - its AI ‘estimates’ your age from your ‘browsing patterns’ - so if the AI has not yet ‘estimated’ your age, vital news from around the world is suppressed from your feed, as it already is on hundreds of thousands of people’s social media feeds.
The vagueness of what counts as “harmful” content means that platforms will likely err on the side of caution, removing anything that could possibly be seen as problematic. Placing the legal responsibility on the shoulders of platform owners may be conceptually logical - big businesses should have to regulate their content to prevent harm to children - but it ignores the fact that their only real interest is self-preservation. They would sooner pull out of a market entirely, and many sites have already shut down or blocked UK users outright, because they simply do not have the moderation resources to verify the age of every user. The requirements are so broad and technically demanding that only the largest tech companies can realistically comply. The Act also introduces new criminal offences for senior managers at tech companies, who face up to two years in prison if they fail to comply with various requests and demands from Ofcom - all the more reason to pull out of the UK market. Would you, as a small tech company running, say, a micro-blog about niche beer, risk those penalties? Would you pay huge fees to a private company for age verification? Or would you just block access to UK residents?
Meanwhile, determined bad actors will simply move to less regulated corners of the internet, leaving the law’s effects concentrated on regular internet users. Children know how to use VPNs; pretending they don’t helps nobody. UK government ministers are busy arguing with people on Twitter, calling opponents of the law ‘friends of predators’, instead of honestly engaging with the community. This is not the way to pass a law in a democratic society, and it is certainly not the way to treat your voters.
One of the worst outcomes is the potential, though hopefully unlikely, blocking of Wikipedia. Wikipedia is one of the last strongholds of free, reliable, and widely available information on the internet. It is such a valuable resource, and we are truly lucky as a society to have it - it is absolutely disingenuous to argue there is any reason to block or restrict Wikipedia articles for the sake of ‘safety’. Invest in support, education, and outreach, not in censoring banks of vitally important data! (At the time of writing, Wikipedia has just failed in its most recent appeal against the law.)
Can’t You Just Verify Your Age?
Sure you can - if you want to hand over your sensitive information to dozens of third-party companies, that is. Reddit, for example, uses a company called Persona. Other platforms use services like Yoti, Onfido, or Jumio, all of which would sound better on the packaging of a yoghurt. While this may seem like an inevitable solution to the problem, these companies have their own, sometimes very sparse and vague, privacy policies. Persona’s, for instance, includes things such as:
- The right to aggregate the ID you provide with other data they scrape from you online
- The right to hold this data for up to 3 years
- The right to share this with third parties
- The right to use this data to ‘infer’ information about you
So if you, as an adult in the UK, wish to continue your rehab journey on r/stopsmoking, you must give up these privacies - protections that really ought to be the default. (Or use a VPN. We can always use a VPN. But that’s not the point. This is the UK I’m talking about, not North Korea, not Russia. I shouldn’t have to use a VPN to browse the web in my own country. Come on.)
What Can We Do?
The Online Safety Act is a sharp wake-up call for anyone who cares about digital rights. Net neutrality was one thing - we now have to push back even harder against this clear attempt at censorship. We can create laws that target real harm without undermining the fundamental principles of privacy and free expression. The internet should be a place where ideas can be shared freely, not a space policed by vague and overreaching laws. The Online Safety Act threatens to make the UK a global example of how not to regulate the web. It’s up to all of us to speak out before it’s too late.