Key Points
- OpenAI has released Sora, an AI tool that generates videos from written prompts.
- The program can create complex scenes from text but struggles with realistic physics and extended actions.
- OpenAI actively prevents abuse, especially regarding harmful deepfakes and illegal content.
- OpenAI collaborates with organizations to monitor and report misuse, focusing on protecting vulnerable groups.
OpenAI has officially released Sora, its artificial-intelligence video-generation program, to customers. Sora transforms written prompts into digital videos lasting up to 20 seconds. First previewed in February, the program is now available as a standalone product.
During the launch announcement, OpenAI CEO Sam Altman highlighted the company’s vision to move beyond text-based AI systems. “We don’t want the world to just be text. If the AI systems primarily interact with text, I think we’re missing something important,” Altman remarked during a livestream on Monday.
Sora leverages deep language understanding to create videos featuring complex scenes, characters, and camera movements. The tool can produce content such as animated creatures and realistic human simulations. Examples showcased by OpenAI include a cinematic trailer of a spaceman on a salt desert and a woman strolling through Tokyo. Although early demonstrations featured videos up to a minute long with intricate motion and detail, OpenAI acknowledges the program's limitations, including difficulty with long-duration actions and realistic physics.
In a blog post, OpenAI emphasized the importance of introducing this technology responsibly and encouraged societal collaboration to establish norms and safeguards. The company also outlined its efforts to prevent abuse, addressing concerns about the potential misuse of AI to create harmful content such as deepfakes. It has implemented measures to restrict uploads depicting real people, particularly in sensitive contexts, and plans to refine these protections over time.
OpenAI embeds metadata and watermarks in generated videos by default, enabling viewers to identify AI-generated content. It also collaborates with organizations such as the National Center for Missing & Exploited Children to detect and report illegal material.