Online Safety Act: what are the new measures to protect children on social media?


Technology platforms operating in the UK now have a legal duty to protect young people from some of the more dangerous forms of online content. This includes pornography, as well as content that encourages, promotes or provides instructions for violence, self-harm and eating disorders. Those failing to comply face hefty fines.

Until now, parents have had the unenviable role of navigating web content filters and app activity management to guard their children from harmful content. As of 25 July 2025, the Online Safety Act puts greater responsibility on platforms and content creators themselves.

In theory, this duty requires tech organisations to curb some of the features that make social media so popular. These include the recommendation algorithms that analyse a user’s typical behaviour and serve up content that other, similar users usually engage with.

This is because the echo chambers that these algorithms create can push young people towards unwanted (and crucially, unsolicited) content, such as incel-related material.

The Online Safety Act directly acknowledges the role of algorithms in targeting content at young people, and addressing them forms a key part of Ofcom’s proposed solutions. The act requires platforms to adjust their algorithms to filter out content likely to be harmful to young people.
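To see what that could look like in practice, here is a minimal, purely illustrative sketch of a recommender feed that suppresses flagged content for under-18 accounts. The labels, function names and data structure are assumptions made for the example, not anything specified by the act, by Ofcom or by any platform.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    score: float                                    # relevance score from the recommender
    harm_labels: set = field(default_factory=set)   # e.g. {"violence"} from an upstream classifier

# Illustrative categories the act expects platforms to keep away from children
BLOCKED_FOR_MINORS = {"pornography", "violence", "self_harm", "eating_disorder"}

def rank_feed(candidates, user_is_minor):
    """Rank recommended items, dropping flagged content for under-18 accounts."""
    if user_is_minor:
        candidates = [i for i in candidates if not (i.harm_labels & BLOCKED_FOR_MINORS)]
    return sorted(candidates, key=lambda i: i.score, reverse=True)

# Example: a flagged item is suppressed only in the child's feed
feed = [Item("a", 0.9, {"violence"}), Item("b", 0.7)]
print([i.item_id for i in rank_feed(feed, user_is_minor=True)])   # ['b']
print([i.item_id for i in rank_feed(feed, user_is_minor=False)])  # ['a', 'b']
```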

It is not yet clear exactly how tech companies will respond. There has been pushback against criticism of algorithms, though. A response from Meta, which owns Facebook, Instagram and WhatsApp, to Ofcom’s 2024 consultation on protecting children from harms online counters the idea that “recommender systems are inherently harmful”.

It states: “Algorithms help to sort information and to create better experiences online and are designed to help recommend content that might be interesting, timely or entertaining. Algorithms also help to personalise a user’s experience, and help connect a user with their friends, family and interests. Most importantly, we use algorithms to help young people have age-appropriate experiences on our apps.”

Age verification

A further safety measure is the use of age checks. Here, Ofcom is requiring platforms to carry out “robust age checks” and, in the case of sites hosting the most serious content, these must be “highly effective”.

Users will need to prove their age. Traditionally, age-verification checks involve submitting government-issued documents, often accompanied by a short video to verify the accuracy of the submission. Some platforms are now embracing more recent technological advances: age-estimation services, in which a user uploads a short video or photo selfie that is analysed by AI.
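As a rough sketch of how an age-estimation step might sit inside a sign-up flow (the estimation model, the thresholds and the fallback to document checks are all assumptions for illustration, not any vendor’s actual interface):

```python
def age_gate(selfie: bytes, estimate_age, min_age: int = 18, margin: float = 2.0) -> str:
    """Decide access from an AI age estimate of a selfie.

    `estimate_age` stands in for a vendor's age-estimation model; borderline
    estimates fall back to a traditional document check rather than a hard yes/no.
    """
    estimated = estimate_age(selfie)          # e.g. returns 21.4
    if estimated >= min_age + margin:
        return "allow"
    if estimated <= min_age - margin:
        return "deny"
    return "request_id_document"              # too close to call: escalate

# Example with a stand-in model that always estimates 17
print(age_gate(b"...", estimate_age=lambda s: 17.0))  # request_id_document
```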



Read more:
Porn websites now require age verification in the UK – the privacy and security risks are numerous

Age verification can include uploading a selfie that is analysed by AI.
Miljan Zivkovic/Shutterstock

If enforced, the Online Safety Act may not only restrict access to pornography and other recognised extreme content, but it could also help stem the flow of knife sales.

Research shows exposure to knife crime news on social media is linked to symptoms similar to PTSD. Research by one of us (Charlotte Coleman) and colleagues has previously shown that negative effects of seeing knife imagery may be more severe for girls and those who already feel unsafe.

Even on strongly regulated platforms, though, some harmful material can slip through the net of algorithms and age checks. Active moderation is therefore a further requirement of the act. This means platforms need processes in place to review user-generated content, assess its potential for harm and remove it where appropriate, ensuring swift action is taken against content that is harmful to children.

This may be through proactive moderation (assessing content before it is published), reactive moderation based on user reports or, more likely, a combination of the two.
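A simplified sketch of how those two routes might feed the same review process follows below. The classifier, thresholds and queue are assumptions made for illustration, not a description of any platform’s real system.

```python
from queue import Queue

review_queue = Queue()  # posts awaiting human moderation

def on_upload(post_text: str, classify) -> str:
    """Proactive route: score content with an automated classifier before publishing."""
    harm_score = classify(post_text)      # stand-in classifier returning 0..1
    if harm_score > 0.9:
        return "blocked"                  # clearly harmful: never published
    if harm_score > 0.5:
        review_queue.put(post_text)       # uncertain: held for human review
        return "held_for_review"
    return "published"

def on_user_report(post_text: str) -> None:
    """Reactive route: user reports join the same human review queue."""
    review_queue.put(post_text)

# Example with a stand-in classifier
print(on_upload("hello world", classify=lambda text: 0.1))  # published
```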

Even with these changes, invisible online spaces remain. A host of private, end-to-end encrypted messaging services, such as messages on WhatsApp and snaps on Snapchat, are impenetrable to Ofcom and to platform managers, and rightly so. It is a fundamental right that people are free to communicate with their friends and family privately, without fear of monitoring or moderation.

However, that right may also be abused. Negative content, bullying and threats may also be circulated through these services. This remains a significant problem to be addressed and one that is not currently solved by the Online Safety Act.

These invisible online spaces may be an area that, for now, will remain in the hands of parents and carers to monitor and protect. It is clear that there are still many challenges ahead.


Charlotte Coleman has previously received funding from UKRI to understand the negative online experiences of UK police staff.

Jess Scott-Lewis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.