The United Kingdom’s Online Safety Act was meant to keep children safe. Instead, it is keeping the public uninformed. Within days of the law taking effect in late July 2025, X (formerly Twitter) started hiding videos of Israel’s atrocities in Gaza from UK timelines behind content warnings and age barriers. A law sold as safeguarding has become one of the most effective censorship tools Britain has ever built. What is unfolding is no accident. It is the result of legislation that weaponises child-protection rhetoric to normalise censorship, identity verification and online surveillance.
The roots of Britain’s online censorship crisis go back nearly a decade, to MindGeek, now rebranded as Aylo, the scandal-ridden company behind Pornhub. This tax-dodging, exploitative porn empire worked closely with the UK government to create an age-verification scheme called AgeID, a program that would have effectively handed Aylo a monopoly over legal adult content by making smaller competitors pay or perish. Public backlash killed AgeID in 2019, but the idea survived. Once one democracy entertained the notion that access to online content should be gated by identity checks, the precedent was set. The Digital Economy Act 2017 laid the groundwork, and the Online Safety Act 2023 made it law. Today, several European Union states, including France and Germany, are exploring similar legislation, all cloaked in the same rhetoric of “protecting children”. This is not conspiracy; it is the natural convergence of corporate capture and state control, wrapped in the moral language of child safety.
The Online Safety Act empowers Ofcom to police nearly every corner of the internet, from social media and search engines to adult content platforms, under threat of fines of up to 18 million pounds ($24m) or 10 percent of global revenue. Platforms can be designated as “Category 1” services, triggering the harshest rules, including mandatory age verification, identity checks for contributors and the removal of vaguely defined “harmful” material. Wikipedia now faces this direct threat. In August 2025, the High Court dismissed the Wikimedia Foundation’s challenge to the categorisation rules, clearing the way for Ofcom to treat it as a high-risk platform. The foundation has warned that compliance would force it to censor critical information and endanger volunteer editors by linking their real identities to their writing. If it refuses, the UK could, in theory, be legally empowered to block access altogether, a breathtaking example of how “child protection” becomes a tool for information control. Already, Ofcom has opened multiple investigations into major porn sites and social networks over alleged non-compliance. The law’s chilling effect is no longer hypothetical; it is operational.
Age-verification systems are fundamentally incompatible with privacy and security; in fact, any identity-verification scheme should immediately raise suspicion. The July 25 breach of the Tea dating app, with thousands of photos and over 13,000 sensitive ID documents leaked and circulated on 4chan, or the even more recent Discord data breach exposing over 70,000 government ID documents after a third-party service was hacked, proved the point.
When systems store verification data that links real identities to online activity, they create a treasure trove for hackers, blackmailers and states. History already offers warnings, from the 2013 Brazzers leak of nearly 800,000 accounts to the FBI’s finding that pornography-related exposure scams remain one of the leading categories of online extortion. Now imagine this infrastructure applied not just to adult content, but to political speech, journalism and activism. The same tools being built for “child safety” enable unprecedented blackmail and political manipulation. A single breach could expose journalists, whistleblowers or public officials. And in a world where data often cross borders, there is no guarantee that verification databases in democracies will stay out of the hands of authoritarians. The more we digitise “trust”, the more we endanger it.
The most insidious feature of this legislative trend is how it absolves parents while empowering the state. Existing parental control tools are sophisticated: parents can already monitor and restrict children’s internet use through devices, routers and apps. The push for government-mandated age verification is not about those tools failing; it is about some parents choosing not to use them, and governments seizing that negligence as a pretext for surveillance. Rather than investing in education and digital literacy, governments are expanding their power to decide what everyone can see. The state should not be parenting the public. Yet under the Online Safety Act, every citizen becomes a suspect who must prove innocence before speaking or viewing online. What is framed as “protecting children” is, in practice, the construction of a population-wide compliance system.
Britain’s disastrous experiment is already spreading. France and Germany have recently advanced parallel drafts of age-verification and online safety legislation, while the European Union’s age-verification blueprint would link adult content access and “high-risk” platforms to interoperable digital IDs. The EU insists the system will be privacy-preserving, but its architecture is identical to the UK model: mass identity verification disguised as safeguarding. The logic repeats itself everywhere. Laws begin with the narrow goal of shielding minors from pornography, but their powers quickly expand, first to protests, then to politics. Today, it is Gaza videos and sexual content; tomorrow, it is journalism or dissent. The UK is not an outlier but a template for digital authoritarianism, exported under the banner of safety.
Supporters of these laws insist we face a binary: either adopt universal age verification or abandon children to the internet’s dangers. But this framing is dishonest. No technical system can replace engaged parenting or digital-literacy education. Determined teenagers will still find ways to access adult content; they will just be driven towards the darker corners of the web. Meanwhile, the laws do little to stop the real threat: child sexual abuse material that circulates on encrypted or hidden networks that will never comply with regulation. In reality, the only sites that follow the rules are those already capable of policing themselves, and those are precisely the ones the government is now undermining. By pushing young people towards VPNs and unregulated platforms, lawmakers risk exposing them to far greater harm. The result is not safety, but greater vulnerability to danger.
Strip away the child-protection rhetoric, and the Online Safety Act’s real function becomes clear: it builds the infrastructure for mass content control and population surveillance. Once these systems exist, expanding them is easy. We have seen this logic before. Anti-terror laws morphed into instruments for policing dissent; now “child safety” provides cover for the same authoritarian creep. The EU is already entertaining proposals that would mandate chat-scanning and weaken encryption, promising such measures will be used only against abusers, until, inevitably, they are not. The immediate consequences in the UK – restricted Gaza footage, threatened access to Wikipedia, censored protest videos – are not glitches. They are previews of a digital order built on control. What is at stake is not just privacy but democracy itself: the right to speak, to know and to dissent without being verified first.
Protecting children online does not require building a surveillance state. It requires education, accountability and support for parents, teachers and platforms alike. Governments should invest in digital literacy, prosecute genuine online exploitation and give parents better tools to manage access. Platforms should be held to clear standards of transparency and algorithmic responsibility, not forced into policing adults. Where self-regulation fails, targeted oversight can work, but universal verification cannot.
The UK’s Online Safety Act and similar legislation worldwide represent a fundamental choice about the kind of digital future we want. We can accept the false promise of safety through surveillance and control, or we can insist on solutions that protect children without sacrificing the privacy, freedom and democratic values that make protection worthwhile in the first place. The early results from the UK should serve as a warning, not a model. Before this authoritarian creep becomes irreversible, citizens and lawmakers must recognise that when governments claim they’re protecting children by controlling information, they’re usually protecting something else entirely: their own power to determine what we can see, say, and know.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial policy.
