OpenAI to introduce new age prediction tools that estimate whether an account user is under 18
The age prediction tool draws on signals such as the age of the account, typical usage patterns, and a user’s stated age to estimate how old the user is.
ChatGPT will automatically apply additional protections to accounts with an estimated age under 18, reducing exposure to content such as graphic violence, viral challenges, sexual or romantic role play, depictions of self-harm, and content that promotes extreme beauty standards or unhealthy dieting.
When ChatGPT is not confident about a user’s age, the tool will default to its safer settings. Users who have been incorrectly identified as underage can verify their age and restore full access through the identity-verification service Persona.
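To make the decision flow concrete, here is a minimal, purely illustrative Python sketch of how a system like this might combine an age estimate with a confidence check. The signal names, the 0.8 threshold, and the function names are hypothetical examples, not OpenAI’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    """Hypothetical output of an age-prediction model."""
    predicted_age: int   # model's best guess at the user's age
    confidence: float    # 0.0-1.0, how certain the model is

def apply_account_policy(estimate: AgeEstimate, verified_adult: bool = False) -> str:
    """Illustrative policy: default to teen safeguards unless the user is
    confidently estimated (or verified) to be 18 or older."""
    if verified_adult:
        # A user incorrectly flagged as underage could restore full access
        # after identity verification (e.g. via a service like Persona).
        return "standard_settings"
    if estimate.predicted_age < 18:
        return "teen_safeguards"
    if estimate.confidence < 0.8:  # threshold is a made-up example value
        # When the system is not confident about age, it falls back to the
        # safer, age-appropriate experience.
        return "teen_safeguards"
    return "standard_settings"

# Example: a low-confidence estimate defaults to the safer settings.
print(apply_account_policy(AgeEstimate(predicted_age=21, confidence=0.55)))
# -> teen_safeguards
```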
Parents can also choose to add controls to their child’s account, such as quiet hours, memory settings, and notifications when signs of acute distress are detected.
“Safety is foundational for OpenAI”
Writing on LinkedIn, OpenAI’s Chief Global Affairs Officer Chris Lehane said: “As I noted to reporters on the sidelines of the World Economic Forum at Davos, safety is foundational for OpenAI, especially when it comes to teens. And that belief is guiding how we build, deploy, and govern the use of our tools.
“Today’s rollout builds on the steps we’ve been taking in recent months: clear under-18 model behavior principles, stronger parental controls – including the ability to set quiet hours for their kids – and safeguards around sensitive content.”
The news follows shortly after OpenAI updated its Model Spec to introduce new under-18 principles, alongside expanded parental controls and AI literacy resources aimed at supporting safe use by teens at home and in school.
Lehane added: “This isn’t about shutting teens out of AI; it’s about opening the door the right way. Used safely, AI can help students learn faster, explore new ideas, and be better prepared for the jobs of the future. That opportunity is real, and so are the risks if we don’t act responsibly.
“That’s why we’ve been clear: AI knows a lot, but parents know best. Teen safety must come first, and parents should have real agency in how their kids use our tools. We’ll continue to share what we’re learning as this rolls out, including later in the EU, where we’re accounting for additional regional requirements in the weeks ahead. This is an important milestone, not the finish line.”