OpenAI strengthens Sora video safety and consent controls

The video generator is adding consent and age-based safeguards for video creation

by Defused News Writer
Photo by Red Shuheart / Unsplash

OpenAI is rolling out safety and consent controls for the Sora 2 model and Sora app, adding provenance signals, likeness protections and teen safeguards for video creation and sharing. The company says these measures aim to make generated video traceable and to give people control over how their image and voice are used.

The Sora team said in an announcement that every Sora video will include visible and invisible provenance markers, and that outputs "embed C2PA metadata" and are traceable with internal reverse-image and audio search tools. The company describes these as extensions of systems used in prior image-generation tools and notes many outputs will carry visible, dynamically moving watermarks that include the creator’s name.

OpenAI allows image-to-video creation from photos of real people only after users attest that they have consent and the rights to upload the media; these generations face stricter guardrails than Sora Characters. Images of children or young-looking people receive even tighter moderation, and shared videos featuring real people always display watermarks. The company also offers consent-based "characters" that capture a person's image and voice likeness; users control who can use their characters, can revoke access, and can review or delete videos featuring their character.

Sora uses layered defenses to filter harmful content before and after generation, blocking sexual material, terrorist propaganda and self-harm promotion by checking prompts, multiple video frames and audio transcripts. OpenAI says it has red teamed the system, tightened policies relative to image generation, and supplements automated filters with human review. For audio, Sora scans transcripts for violations, blocks prompts that attempt to imitate living artists or existing works, and honors takedown requests from creators.

The recap

  • OpenAI adds built-in safety features to its Sora video tools
  • Sora videos include C2PA metadata and visible moving watermarks
  • Teens have stricter filters and parental controls via ChatGPT