A digital world fit for the next generation

VC
15 Mar 2026
[Image: A child looks at a phone, next to age-rating symbols: U, 13+, 16, 18]

Tech companies have for far too long treated children as data to be mined rather than young people to be protected. They have let harmful content roam free on their sites, from material perpetuating negative body image to amplified extreme and violent content. They have built addictive algorithms designed to keep children endlessly doom-scrolling at the expense of their mental health.

But we know that blanket bans cannot work in the digital age. They also disproportionately impact LGBTQ+ youth, disabled young people, and those in rural communities who rely on digital spaces for services, friendships and opportunities unavailable locally.

We must instead set a new standard of age-appropriate safety online, just as we expect in the offline world. While the Government's response to online safety has been disappointing, the Liberal Democrats are proposing a liberal, more practical alternative.

Today, Liberal Democrat members passed a policy to move beyond blanket bans and toward a system where safety is baked into the technology itself.

Our new policy calls for:

  • A Statutory Age-Rating System: Introducing a classification framework for online platforms, like those used in the film industry - where services are rated by the addictiveness of their platform design, their impact on children's mental health, and the harmfulness of the content they host. This approach is supported by children's charities such as the NSPCC.
     
  • Ending Addictive Design: Requiring platforms to offer safer modes for children that include high privacy settings, strict limits on behavioural advertising, and constraints on features designed to drive compulsive use.
     
  • Enforceable Standards: Establishing statutory, auditable standards that platforms must meet, backed by rapid response requirements to ensure that when harms occur, they are addressed immediately.
     
  • Protecting Digital Citizenship: Ensuring that young people’s rights to information and community are preserved, while protecting them from the structural drivers of harm.

We think these changes will also empower parents and young people to be informed about the risks of harmful online content and features through clear guidance and an understandable framework that mirrors existing best practices.

Importantly, this model is future-proof and builds on the lessons of Australia's social media ban. It would end the whack-a-mole approach to online safeguarding by providing a clear framework of standards for platforms. As new dangers inevitably emerge - such as unsafe chatbots, online gaming, or AI-driven harms - they can be quickly categorised and rated against clear and understandable principles of harm.

We believe that child safety is built through platform accountability and resilient design, not by shutting young people out of the digital world completely.

It is time to start building a digital world that is fit for the next generation to grow up in.
