OpenAI backs California youth AI safety law in partnership with Common Sense Media
OpenAI has publicly backed the Parents & Kids Safe AI Act, a proposed California ballot measure that would introduce stricter protections for children and teenagers using artificial intelligence systems.
The company says the proposed legislation reflects a broader shift toward stronger child safety standards as AI becomes more widely used by younger audiences.
The announcement was detailed in a LinkedIn post by Chris Lehane, Chief Global Affairs Officer at OpenAI, who outlined the company’s partnership with Common Sense Media and its joint support for the measure.
LinkedIn post outlines scope of proposed protections
In the post, Lehane positioned the proposal as a response to growing concerns about how children interact with AI tools and the lack of consistent safeguards across platforms. “AI knows a lot, but parents know best,” he wrote.
Lehane went on to set out the specific requirements included in the proposed measure. “The new measure requires AI tools to use privacy-preserving age estimation tools to distinguish between kids and adults, offer easy-to-use parental controls, including time limits and stronger protections for children under 13, develop safeguards to prevent outputs that encourage behavior like emotional dependency and sexualized interactions, put clear crisis-response protocols in place for self-harm and suicide risks, and undergo independent child-safety audits, with public reporting,” Lehane added.
He also referenced Common Sense Media’s role in shaping the proposal and its potential impact beyond California: “That’s why we worked with them on a new ballot measure that represents the most comprehensive youth AI safety effort yet, and could serve as a model for other states and the basis for national legislation on child AI safety.”
Measure targets AI tools used by minors
The Parents & Kids Safe AI Act would apply to AI chatbots and systems that simulate conversation, including tools already used by children and teenagers.
According to the proposal, companies would be required to distinguish between adult and minor users, prohibit child-targeted advertising, and limit the collection, sale, or sharing of children’s data without parental consent. Additional requirements include safeguards to prevent AI systems from promoting self-harm, eating disorders, violence, or sexually explicit behavior, as well as restrictions on AI companions for users under 18.
The measure would also introduce enforcement mechanisms, including independent audits, annual risk assessments, and oversight by the California Attorney General, with financial penalties for non-compliance.
OpenAI links legislation to broader AI literacy goals
In the LinkedIn post, Lehane connected the proposed safeguards to wider conversations about AI literacy and long-term adoption. “Today’s kids and teens are the first generation growing up with AI,” Lehane wrote.
He argued that while AI literacy is essential for future economic participation, protections must be in place so that families and educators feel confident allowing children to use these tools. Lehane also referenced OpenAI’s recent updates to its internal Model Spec, including the addition of under-18 principles, and ongoing collaboration with the American Federation of Teachers on classroom use of AI.
“While OpenAI believes strongly in adults’ right to privacy when using AI tools, and the freedom to use those tools within broad safety bounds, when it comes to teens, we put safety ahead of privacy and freedom,” Lehane added.