Lehane: How California can set the standard for teen AI safety
A generation ago, we watched social media race from novelty to necessity without enough guardrails for the people most affected by it: teens. We’re still reckoning with the consequences in classrooms, in living rooms and in teen mental health statistics.
Now, a new technology is becoming part of daily life even faster. Again, there are no industry-wide rules to ensure the safety of younger users.
AI tools can help students learn, help families solve problems and help all of us do more with less. Today’s teens are the first generation growing up with AI, and they stand to gain the most from the economic benefits of the Intelligence Age. That brings enormous opportunity — and a responsibility to avoid repeating the mistakes we made with social media so teens can develop real AI literacy and a healthy relationship with this technology.
Gov. Gavin Newsom has shown, repeatedly, that California can and will lead on both AI and teen-safety measures. Now, there’s a new opportunity to build on California’s leadership — and set a de facto national regulatory standard on AI — by advancing the new Parents & Kids Safe AI Act.
The legislation — which we hope will be replicated in other states — began in California, where OpenAI partnered with Common Sense Media, an organization that has long helped families navigate new technologies, to draft and release the measure.
The basic idea is simple: AI knows a lot, but parents know best. The proposal would require AI systems that simulate conversation to use privacy-preserving age estimation so that child-protective settings kick in for users under 18. It would empower parents by requiring easy-to-use parental controls and would implement stronger protections for children under 13. It would include safeguards against manipulative designs — such as those that foster emotional dependency or simulate romantic relationships — and clear crisis-response protocols for self-harm risks. It also calls for independent child-safety audits, with accountability through enforcement.
We care deeply about adults’ right to privacy when using AI tools. But when it comes to teens, we should put safety first. Minors need significant protection so they can realize the full potential of AI, and parents deserve confidence that the right guardrails are in place. This measure provides both.
Supporting the bill would also build on Gov. Newsom’s record of making teens’ online safety a signature priority. He has backed steps to curb addictive social-media design and strengthen privacy protections, and he has pushed for distraction-free learning in schools through statewide limits on student smartphone use.
More recently, California has also advanced new guardrails for teen safety online — including stronger age assurance and new safeguards for AI chatbots used by minors — alongside broader efforts to expand access to youth mental-health training and resources.
California is pairing those teen-safety steps with a pragmatic investment in AI research capacity that universities and public-interest researchers need to study AI and help develop it responsibly. Through CalCompute, the state is laying the groundwork for a public computing cluster designed to broaden access to advanced compute so the ability to evaluate and build safe AI isn’t limited to a handful of well-resourced players.
California’s frontier AI transparency framework is already starting to serve as a reference point beyond the state. New York has moved to align its RAISE Act with California’s landmark SB 53, harmonizing the rules of the country’s two largest AI economies and moving the country toward a de facto national standard.
If California and New York can align on youth AI protections, we can create clear expectations for industry, real tools for parents and safer defaults for young people nationwide.
As a father, I can’t think of anything more important. Gov. Newsom has helped make California a leader on both AI and teen safety. Advancing the Parents & Kids Safe AI Act — first developed in California — would be the next step, and provide the kind of leadership the country needs right now.
Chris Lehane is the chief global affairs officer of OpenAI.