A worldwide movement to make the internet safer for children is fueling a wave of new technologies powered by artificial intelligence. From stricter laws to AI-enabled smartphones, regulators and companies alike are racing to shield kids from harmful online content.
Stronger laws, tougher penalties
In the U.K., the new Online Safety Act puts legal pressure on tech firms to protect children from inappropriate content, hate speech, cyberbullying, fraud, and child sexual abuse material (CSAM). Companies that fail to comply could face fines of up to 10% of their global annual revenue.
The U.S. is also moving forward with landmark legislation. The proposed Kids Online Safety Act would make social media platforms legally responsible for preventing harm to young users — mirroring the U.K.’s tough stance.
These moves are forcing major tech platforms to rethink their policies. Pornhub and other adult sites now require users to verify their age before gaining access. Beyond adult content, platforms like Spotify, Reddit, and X have rolled out their own age-assurance systems to stop minors from encountering explicit or unsuitable material.
Critics, however, argue that such measures risk undermining user privacy.
AI-powered age checks
One company has quickly risen to the center of this shift: Yoti.
Yoti’s AI scans selfies to estimate age based on facial features. The company says its model, trained on millions of faces, can estimate the age of people between 13 and 24 to within two years. Yoti has already partnered with the U.K.’s Post Office and hopes to benefit from a wider rollout of digital IDs across the country.
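To make the mechanics concrete, here is a minimal Python sketch of how an age-estimation gate of this kind might be wired. The estimator is a stub, and every name in it (estimate_age, AgeEstimate, the thresholds) is an illustrative assumption rather than Yoti’s actual API; the point is the buffer logic, which uses the claimed two-year error margin to decide when a face scan alone suffices and when to fall back to a document check.

```python
# Minimal sketch of a facial age-estimation gate. The estimator is a stub;
# Yoti's real model and API are proprietary, and all names here
# (estimate_age, AgeEstimate, MIN_AGE, BUFFER_YEARS) are illustrative
# assumptions, not real interfaces.

from dataclasses import dataclass

MIN_AGE = 18       # age threshold the service must enforce
BUFFER_YEARS = 2   # margin matching the claimed two-year accuracy

@dataclass
class AgeEstimate:
    years: float         # model's point estimate of the user's age
    error_margin: float  # expected error band, in years

def estimate_age(selfie_bytes: bytes) -> AgeEstimate:
    """Stand-in for a proprietary facial age-estimation model."""
    # A real system would run a neural network over the face crop here.
    return AgeEstimate(years=21.4, error_margin=BUFFER_YEARS)

def is_clearly_of_age(estimate: AgeEstimate) -> bool:
    # Subtract the error margin so borderline users are escalated to
    # document-based verification instead of being waved through.
    return estimate.years - estimate.error_margin >= MIN_AGE

selfie = b"...raw image bytes..."
result = estimate_age(selfie)
print("allow" if is_clearly_of_age(result) else "escalate to ID check")
```

Subtracting the error margin before comparing against the threshold is the same buffering idea retailers use with “Challenge 25” checks: only people estimated to be comfortably over the limit pass on the estimate alone.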
Other players, including Entrust, Persona, and iProov, are also active in the identity verification space, but Yoti has emerged as the most visible provider under the U.K.’s new rules.
“There’s a race for child safety providers to build trust and prove reliability,” said Pete Kenyon, a partner at law firm Cripps. “The new requirements have created a marketplace where providers are scrambling to make their mark.”
Still, privacy advocates warn of risks. “Substantial privacy issues arise with this technology,” Kenyon added, emphasizing that trust depends on strong safeguards to keep personal data secure.
Rani Govender, policy manager at child protection charity NSPCC, said the technology to balance safety and privacy already exists. “Tech companies must make ethical choices,” she said. “The best solutions don’t just tick boxes; they build trust.”
Child-safe smartphones
The push for safer digital experiences goes beyond software. Finnish manufacturer HMD Global recently launched the Fusion X1, a smartphone designed with built-in AI tools to protect children.
Developed in partnership with U.K. cybersecurity firm SafeToNet, the phone can block kids from creating, sharing, or viewing sexually explicit images across the camera, screen, and apps.
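As a rough illustration of where such a safeguard sits in the pipeline, the sketch below shows a camera frame being scored by an on-device classifier before it can be captured, displayed, or shared. SafeToNet’s actual implementation is proprietary; the classifier, threshold, and function names here are assumptions for illustration only.

```python
# Illustrative sketch only: SafeToNet's on-device filter is proprietary.
# The classifier below is a stub, and all names (explicit_probability,
# screen_frame, BLOCK_THRESHOLD) are assumptions, not a real API.

BLOCK_THRESHOLD = 0.8  # assumed cut-off; real tuning is vendor-specific

def explicit_probability(frame: bytes) -> float:
    """Stand-in for an on-device image classifier scoring a camera frame."""
    return 0.02  # a real model would run inference here

def screen_frame(frame: bytes) -> bool:
    """Return True if the frame may be captured, shown, or shared."""
    # Running the check on-device keeps the image itself off any server,
    # which is how such a filter can block content without uploading it.
    return explicit_probability(frame) < BLOCK_THRESHOLD

frame = b"...raw camera frame..."
print("allow" if screen_frame(frame) else "block")
```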
“We believe more needs to be done,” said James Robinson, HMD’s vice president of family vertical. He noted that the idea for a safer phone came before the Online Safety Act took effect but welcomed stronger government action.
The device arrives amid momentum in the “smartphone-free” parenting movement, which urges families to delay giving children their own devices.
Pressure on Big Tech
Looking ahead, experts expect companies like Google and Meta to face growing demands to prioritize child safety. Both have long been criticized for the role their platforms play in online bullying, the spread of harmful content, and social media addiction. While the firms say they’ve expanded parental controls and privacy settings, critics argue it’s not enough.
“For years, tech giants have stood by while harmful and illegal content spread across their platforms,” said Govender. “That era of neglect must end.”