Meta Launches AI Age Checks for Teens Without Accuracy Data
Meta announced that Instagram and Facebook will use AI to scan photos and videos users have already posted, looking for visual signals, including height and bone structure, to estimate whether an account holder may be underage. The company has not published accuracy metrics alongside the announcement.
The system moves Meta beyond self-reported birthdates, a method that, in a February 2025 report, Australia’s eSafety Commissioner found was still widely used across major platforms. If a child entered a false date of birth, that alone could be enough to create an unrestricted account.
Three questions follow from Meta’s announcement: who gets correctly placed into age-appropriate experiences, who might be wrongly flagged as a minor, and who slips through undetected. Meta has not answered those questions with data.
Why the old system was already failing
Most of the eight major platforms reviewed by Australia’s eSafety Commissioner (YouTube, Facebook, Instagram, TikTok, Snapchat, Reddit, Discord, and Twitch) relied on a user’s truthful self-declaration of date of birth at account creation, the regulator found. The flaw was straightforward: nothing stopped a child from typing a different year.
The consequences were predictable. Around 1.3 million Australian children aged 8 to 12 may have been active on social media in 2024, with roughly 80% of that age group using at least one social media service, the eSafety Commissioner found.
The failure was not uniform. TikTok, Twitch, Snapchat, and YouTube were already using proactive tools to detect users under 13, eSafety found. Meta’s platforms were not listed among them. Meta’s new update is therefore a catch-up move, not a first.
That context makes scrutiny of the proposed fix more important. YouTube’s AI age-detection rollout has already raised privacy questions for adults who may be wrongly classified as minors and asked to prove their age.
What Meta’s AI age checks actually do
The AI scans photos and videos users have already posted for visual clues about a person’s age that text might miss, Meta said. The system operates on existing content rather than only at the moment of account creation, which gives it an advantage over a birthday field but introduces its own complications.
The tool looks at general visual cues, including height and bone structure, to estimate a broad age range, not a precise year. It does not match a face to an identity database. “We want to be clear: this is not facial recognition,” Meta said in its announcement.
Age estimation infers a demographic characteristic from an image; facial recognition identifies a specific person. But the underlying process still involves analyzing physical characteristics from photos and videos, which could draw scrutiny around AI privacy issues and biometric data.
There is also a structural problem that no photo-based system can solve on its own. The person appearing in a post may not be the account holder. eSafety found that 54% of Australian children aged 8 to 12 who used social media did so through a parent’s or carer’s account. An AI system estimating the visible person’s age cannot determine who created or operates the account.
Meta said visual analysis is available in select countries and that accounts determined to be underage may be deactivated unless the account holder verifies their age.
What Meta has not disclosed: no accuracy data, no false-positive or false-negative rates, no demographic breakdown, no details on accuracy near the 13, 16, and 18 thresholds, and no clear explanation of whether flagged cases receive human review before account restrictions.
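To see why false-positive and false-negative rates matter, consider what such a disclosure would look like. The sketch below is purely illustrative (all sample data is invented, and Meta has published no comparable numbers): it computes both rates for a hypothetical binary "under 13" classifier against labeled ages.

```python
# Hypothetical illustration of the metrics Meta has not published.
# Each record: (true_age, flagged_as_minor) for a binary "under 13" check.
samples = [
    (11, True), (12, True), (12, False),   # minors: two caught, one missed
    (15, False), (19, False), (21, True),  # adults: one wrongly flagged
]

THRESHOLD = 13  # the age boundary being enforced

minors = [s for s in samples if s[0] < THRESHOLD]
adults = [s for s in samples if s[0] >= THRESHOLD]

# False negative: a real minor the system fails to flag (slips through).
false_negative_rate = sum(1 for _, flagged in minors if not flagged) / len(minors)
# False positive: an adult wrongly flagged as a minor (wrongly restricted).
false_positive_rate = sum(1 for _, flagged in adults if flagged) / len(adults)

print(f"False-negative rate: {false_negative_rate:.2f}")
print(f"False-positive rate: {false_positive_rate:.2f}")
```

The two rates trade off against each other: tuning the system to miss fewer minors typically flags more adults, which is why both numbers, not a single "accuracy" figure, are needed to evaluate a deployment.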
Because Meta has released no performance data of its own, the closest available reference point is independent testing of age estimation software. NIST’s 2024 evaluation found that age estimation technology has improved but that no single algorithm clearly outperformed the others, and that performance varied by image quality, age, sex, and country of birth.
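Evaluations of the kind NIST runs typically report mean absolute error (MAE) broken out by demographic group, which is what reveals the variation by age, sex, and country of birth. A minimal sketch, with invented data and hypothetical group names, shows why a single headline number can hide that variation:

```python
# Hypothetical illustration of a per-group error breakdown.
# All records invented: (group, true_age, estimated_age).
from collections import defaultdict

records = [
    ("group_a", 12, 14), ("group_a", 16, 15), ("group_a", 20, 19),
    ("group_b", 12, 17), ("group_b", 16, 12), ("group_b", 20, 24),
]

errors = defaultdict(list)
for group, true_age, estimated in records:
    errors[group].append(abs(true_age - estimated))

for group, errs in sorted(errors.items()):
    mae = sum(errs) / len(errs)  # mean absolute error in years
    print(f"{group}: MAE = {mae:.1f} years")
```

A gap like the one above means the same 13-, 16-, or 18-year threshold would misclassify one group far more often than another, which is exactly why a demographic breakdown belongs in any accuracy disclosure.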
NIST also described age estimation as “an enabling technology” in age assurance programs that have recently been included in law and regulation, both inside and outside the US.
Meta has been expanding teen safety tools across its platforms, including parental controls for teen AI chatbot interactions. This update focuses on age assurance itself: whether AI can enforce platform rules accurately enough when the company has not yet published the data needed to evaluate the system.
Also read: Our list of top AI companies in 2026 shows how major platforms are expanding AI across products, infrastructure, and user-facing systems.
The post Meta Launches AI Age Checks for Teens Without Accuracy Data appeared first on eWEEK.