OpenAI has announced Sora, a text-to-video model that can create realistic and imaginative scenes from text instructions.
Initially, Sora will be available to red teamers to evaluate potential harms and risks in critical areas. This will not only strengthen the model’s security and safety features but also allow OpenAI to incorporate the perspectives and expertise of cybersecurity professionals.
Access will also be extended to visual artists, designers, and filmmakers. This diverse group of creative professionals is being invited to test Sora and provide feedback so the model can be refined to better serve the creative industry. Their insights are expected to guide the development of features and tools that benefit artists and designers in their work, OpenAI said in a blog post.
Sora is a sophisticated AI model capable of creating intricate visual scenes that feature numerous characters, distinct types of motion, and detailed depictions of both the subjects and their backgrounds.
Its advanced understanding extends beyond merely following user prompts; Sora interprets and applies knowledge of how these elements naturally occur and interact in the real world. This capability allows for the generation of highly realistic and contextually accurate imagery, demonstrating a deep integration of artificial intelligence with an understanding of physical world dynamics.
“We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model. We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora. We plan to include C2PA metadata in the future if we deploy the model in an OpenAI product,” OpenAI stated in the post. “In addition to us developing new techniques to prepare for deployment, we’re leveraging the existing safety methods that we built for our products that use DALL·E 3, which apply to Sora as well.”
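OpenAI has not published how its detection classifier works or made it available, so the sketch below is purely illustrative: it shows the kind of interface such a provenance check might expose, with a placeholder scoring function standing in for real model inference.

```python
# Illustrative sketch only: OpenAI's detection classifier is not public, so
# every name here is hypothetical and the score is stubbed.
from dataclasses import dataclass


@dataclass
class ProvenanceResult:
    """Outcome of a (hypothetical) check for Sora-generated video."""
    likely_sora_generated: bool
    score: float  # 0.0 (likely real footage) to 1.0 (likely generated)


def score_video(video_path: str) -> float:
    """Placeholder: a real classifier would run model inference on the video."""
    return 0.0  # stubbed value for illustration only


def check_provenance(video_path: str, threshold: float = 0.5) -> ProvenanceResult:
    score = score_video(video_path)
    return ProvenanceResult(likely_sora_generated=score >= threshold, score=score)


if __name__ == "__main__":
    print(check_provenance("clip.mp4"))
```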
OpenAI has implemented strict content moderation mechanisms within its products to maintain adherence to usage policies and ethical standards. Its text classifier can scrutinize and reject any text input prompts that request content violating these policies, such as extreme violence, sexual content, hateful imagery, celebrity likeness, or intellectual property infringement.
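OpenAI has not detailed the internals of that text classifier. Its publicly documented Moderation endpoint, however, illustrates the same gatekeeping pattern: score the prompt before any generation happens and refuse if it is flagged. The public endpoint covers categories such as violence, sexual content, and hate rather than celebrity likeness or intellectual property, so the following is only a simplified sketch of the pattern, not Sora’s actual pipeline.

```python
# Sketch of a pre-generation prompt check using OpenAI's public Moderation
# endpoint. This illustrates the gatekeeping pattern, not Sora's internal
# classifier (which OpenAI has not published).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def prompt_allowed(prompt: str) -> bool:
    """Return False if the moderation endpoint flags the prompt."""
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # List which policy categories triggered the rejection.
        flagged = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Prompt rejected; flagged categories: {flagged}")
        return False
    return True


if __name__ == "__main__":
    if prompt_allowed("A golden retriever surfing a wave at sunset"):
        print("Prompt passed moderation; safe to send to the video model.")
```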
Similarly, advanced image classifiers are utilized to review every frame of generated videos, ensuring they comply with the set usage policies before being displayed to users. These measures are part of OpenAI’s commitment to responsible AI deployment, aiming to prevent misuse and ensure that the generated content aligns with ethical guidelines.
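OpenAI has not said how its frame-level review is implemented. As a rough sketch of the idea, the loop below iterates over every decoded frame with OpenCV and withholds the video if any frame fails a policy check; the classify_frame helper is a hypothetical stand-in for OpenAI’s internal image classifier.

```python
# Illustrative frame-by-frame review loop. classify_frame is a hypothetical
# stand-in for an internal image classifier; OpenCV is used only to iterate
# over decoded frames.
import cv2
import numpy as np


def classify_frame(frame: np.ndarray) -> bool:
    """Placeholder policy check; a real classifier would run model inference."""
    return True  # treat every frame as compliant in this sketch


def video_complies_with_policy(video_path: str) -> bool:
    """Return False as soon as any frame fails the (hypothetical) policy check."""
    capture = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # end of stream
                return True
            if not classify_frame(frame):
                return False
    finally:
        capture.release()


if __name__ == "__main__":
    if video_complies_with_policy("generated_clip.mp4"):
        print("All frames passed the policy check; video can be shown to the user.")
    else:
        print("Video withheld: at least one frame violated usage policies.")
```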