Selfie-based age verification technology is rapidly expanding as governments worldwide introduce stricter controls for social media platforms, gaming sites, and adult content portals. The AI-driven tools promise fast, accurate age checks — often completed within a minute — and have become central to enforcing new online safety laws such as Australia’s upcoming under-16 social media ban taking effect on December 10.
Fast AI Age Checks Become the New Norm
Users simply take a clear, front-facing selfie using their phone or computer camera. The system then analyses facial patterns and estimates age almost instantly.
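As a rough illustration of that flow, the sketch below shows the shape of such a check in Python; the face-detection and age-estimation calls are placeholders standing in for providers' proprietary models, not any real API.

```python
# Illustrative selfie age-check flow (hypothetical; not any provider's real API).
from typing import Optional

def detect_live_face(image_bytes: bytes) -> Optional[bytes]:
    """Placeholder: return a cropped face if a live, front-facing human face is
    found, or None for mannequins, masks, printed photos and the like."""
    return image_bytes  # assume success for illustration

def estimate_age(face_crop: bytes) -> float:
    """Placeholder for a facial age-estimation model."""
    return 24.3  # dummy value for illustration

def run_age_check(image_bytes: bytes) -> str:
    face = detect_live_face(image_bytes)
    if face is None:
        return "We can't be sure the image was of a real face"
    estimated = estimate_age(face)
    return f"We estimated your age is {estimated:.0f}"
```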
A pop-up on the Roblox gaming platform recently displayed the message, “We estimated your age is 18 or older,” showing how effortless the process has become.
At Yoti, one of the leading developers of this technology, mannequin heads wearing wigs and masks line the office windowsill for testing. The company's AI consistently rejects them, with the test system reporting, “We can’t be sure the image was of a real face.”
CEO Robin Tombs explained that the algorithm has learned to recognise facial patterns associated with different age groups. “It just got very good at estimating age,” he said.
Today, Yoti performs around one million age checks daily for major clients including Meta, TikTok, Sony, and Pinterest. The company turned profitable this year, recording £20 million ($26 million) in revenue and forecasting sales growth of 50%. Competing firms such as Persona, K-id, VerifyMy, and Kids Web Services are also experiencing rapid growth.
Privacy Concerns and Accuracy Debates Intensify
The Age Verification Providers Association (AVPA), which counts 34 member companies, previously predicted the sector could generate nearly $10 billion annually across OECD nations during 2031–36. However, its director, Iain Corby, says the landscape is too unpredictable for updated forecasts.
Experts warn that AI-powered age checks still pose risks. Cybersecurity professor Olivier Blazy noted that these methods can be intrusive and may impact user privacy depending on how sites share data with verification providers.
Blazy also highlighted a technical weakness: ordinary makeup can significantly alter perceived age. Others point to bias issues, as algorithms often perform less accurately on non-white faces. An Australian report revealed ongoing underrepresentation of Indigenous people in training datasets.
Yoti acknowledges gaps in age and skin-tone representation but claims its system can detect false accessories and makeup tricks. The company also stresses that all biometric data is deleted immediately after analysis.
Platforms can customise detection thresholds, often requiring the system to estimate a user as over 21 before granting access to 18+ content, a buffer intended to absorb estimation error. Users whose age cannot be estimated with confidence must fall back on traditional methods such as scanning a government ID.
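A platform-side policy of that kind might look roughly like the following sketch; the threshold and buffer values are illustrative assumptions, not any platform's actual configuration.

```python
# Hypothetical platform policy: require the age estimate to clear the legal
# threshold by a buffer, otherwise fall back to a document-based check.
from typing import Optional

LEGAL_AGE = 18
BUFFER_YEARS = 3  # i.e. a user must be estimated as 21+ to pass the 18+ check

def access_decision(estimated_age: Optional[float]) -> str:
    if estimated_age is not None and estimated_age >= LEGAL_AGE + BUFFER_YEARS:
        return "grant_access"
    return "request_id_document"  # e.g. scan of a government ID
```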

