OpenAI is rolling out safety and consent controls for the Sora 2 model and Sora app, adding provenance signals, likeness protections and teen safeguards for video creation and sharing. The company says these measures aim to make generated video traceable and to give people control over how their image and voice are used.
The Sora team said in an announcement that every Sora video will include visible and invisible provenance markers, and that outputs "embed C2PA metadata" and are traceable with internal reverse-image and audio search tools. The company describes these as extensions of systems used in prior image-generation tools and notes many outputs will carry visible, dynamically moving watermarks that include the creator’s name.
Related reading
- Mistral AI launches Forge to let companies train their own frontier AI models on proprietary data
- OpenAI builds real-time monitor to catch misaligned behaviour in its own AI coding agents
- Microsoft warns AI agents risk becoming "double agents" as it unveils security controls at RSAC
OpenAI is allowing image-to-video creation from photos of real people only after users attest they have consent and the rights to upload the media; such generations are subject to stricter guardrails than those applied to Sora Characters. Images of children or young-looking people face even tighter moderation, and shared videos that include real people will always display watermarks. The company also offers consent-based "characters" that capture image and voice likenesses; users control who can use their characters, can revoke access at any time, and can review or delete videos featuring their character.
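The consent model described above amounts to an owner-controlled access list with revocation. The following is a hypothetical sketch (the class and method names are my own, not OpenAI's API) of the stated rules: the character's owner decides who may use it, can revoke that permission, and can delete videos featuring the character:

```python
# Hypothetical model of Sora's consent-based character controls, as
# described in OpenAI's announcement. Names and structure are assumptions
# for illustration, not OpenAI's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Character:
    owner: str
    allowed_users: set[str] = field(default_factory=set)
    videos: list[str] = field(default_factory=list)

    def grant(self, user: str) -> None:
        # Owner shares the character with another user.
        self.allowed_users.add(user)

    def revoke(self, user: str) -> None:
        # Consent is revocable at any time.
        self.allowed_users.discard(user)

    def may_use(self, user: str) -> bool:
        return user == self.owner or user in self.allowed_users

    def delete_video(self, video_id: str) -> None:
        # Owner-initiated removal of a video featuring this character.
        self.videos.remove(video_id)

char = Character(owner="alice")
char.grant("bob")
assert char.may_use("bob")
char.revoke("bob")
assert not char.may_use("bob")
```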
Sora uses layered defenses to filter harmful content before and after generation, blocking sexual material, terrorist propaganda and self-harm promotion by checking prompts, multiple video frames and audio transcripts. OpenAI says it has red-teamed the system, tightened policies relative to its image-generation tools, and supplements automated filters with human review. For audio, Sora scans transcripts for violations, blocks prompts that attempt to imitate living artists or existing works, and honors takedown requests from creators.
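OpenAI has not published Sora's filtering internals, but the layered pattern it describes can be sketched generically: a request is checked at several independent stages (prompt text, sampled video frames, audio transcript), and any single layer can block it. The layer names, placeholder policy list, and stand-in classifiers below are illustrative assumptions:

```python
# Illustrative defense-in-depth sketch of the layered filtering Sora is
# described as using. Policy terms and classifier stubs are placeholders.
from typing import Callable

BLOCKED_TERMS = {"example-banned-term"}  # placeholder policy list

def check_prompt(request: dict) -> bool:
    # Pre-generation check on the text prompt.
    return not any(t in request["prompt"] for t in BLOCKED_TERMS)

def check_frames(request: dict) -> bool:
    # Stand-in for a vision classifier run over sampled output frames.
    return all(frame != "flagged" for frame in request.get("frames", []))

def check_transcript(request: dict) -> bool:
    # Post-generation check on the audio transcript.
    return not any(t in request.get("transcript", "") for t in BLOCKED_TERMS)

LAYERS: list[Callable[[dict], bool]] = [check_prompt, check_frames, check_transcript]

def allow(request: dict) -> bool:
    """Defense in depth: every layer must pass for the request to proceed."""
    return all(layer(request) for layer in LAYERS)

assert allow({"prompt": "a cat on a skateboard", "frames": [], "transcript": ""})
assert not allow({"prompt": "contains example-banned-term"})
```

The point of layering is that a prompt which evades the text filter can still be caught by the frame or transcript checks, with human review as a further backstop.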
The recap
- OpenAI adds built-in safety features to its Sora video tools
- Sora videos include C2PA metadata and visible moving watermarks
- Teens have stricter filters and parental controls via ChatGPT