Safe AI for Children Alliance issues urgent guidance on Sora 2 risks for schools and parents
Briefing outlines deepfake misuse, safeguarding protocols, and parent advice as AI-generated video tools spread.
The Safe AI for Children Alliance (SAIFCA) has published "Sora 2 and AI-Generated Video: An Initial Briefing for Schools and Parents," warning of rapid misuse of new AI video tools and setting out immediate steps for education settings and families.
The release follows a LinkedIn post from SAIFCA highlighting “urgent safeguarding, legal, and ethical concerns” linked to hyper-realistic AI-generated video. SAIFCA is a coalition focused on protecting children from AI-related risks through awareness, policy, and practical guidance.
Briefing flags early misuse and gaps in guardrails
SAIFCA says OpenAI’s Sora 2, released on September 30, 2025, in the United States and Canada, generates realistic video with synchronized audio (dialogue, sound effects, music) and includes a social feed and “cameo” features. Within days of launch, the group reports, deepfakes of deceased public figures appeared alongside widespread copyright and brand abuse. The briefing cites limited consent controls for depictions of deceased individuals and warns of potential misuse involving children’s likenesses.
OpenAI says it has put protections in place, including age restrictions, content moderation, provenance signals, reporting tools, upload limits, and cameo verification. SAIFCA argues these measures have already been breached, pointing to what it describes as a “launch first, fix later” pattern across the AI sector.
Schools advised to strengthen policies and escalation protocols
The document recommends that schools:
- Update safeguarding and online safety policies to include AI-generated content.
- Integrate AI video literacy into teaching, emphasizing consent, legal implications, and verification skills.
- Establish procedures for incident response, evidence handling, and coordination with law enforcement.
- Recognize legal thresholds for police involvement, such as indecent imagery or serious online harassment.
- Ensure students can safely report concerns and that parents receive regular information updates.
Risks identified across bullying, sextortion, and misinformation
The report highlights misuse cases such as synthetic sexual imagery, sextortion using fake “evidence,” and reputational attacks through fabricated school-related videos. It also warns that realistic staged footage of disasters or crimes could circulate widely, compounding misinformation risks. SAIFCA adds that Sora 2’s feed-based design may contribute to addictive use and disrupted wellbeing.
SAIFCA urges parents to discuss AI-generated video openly with children, set clear boundaries, and guide them in evaluating online media. It recommends responding calmly and supportively to incidents and being aware of removal tools such as NCMEC’s Take It Down service. The group also calls for attention to the mental health impact of exposure to synthetic media.
The alliance concludes that AI video generation technology is advancing faster than oversight mechanisms. It calls for tighter regulation, improved platform accountability, stronger age verification, and mandatory safety testing before launch.