AI TECHNOLOGY
OpenAI Announces 'Sora', a Cutting-Edge Generative Video Model

OpenAI has introduced a new AI model called Sora that can generate realistic videos directly from text prompts. Sora was trained to understand both language and how the physical world works, so it can bring text descriptions to life as video. Examples shown include a stylish woman walking through Tokyo at night, woolly mammoths trekking through snow, and waves crashing against the cliffs of Big Sur.

The videos Sora generates can be up to a minute long, maintaining visual quality while closely following the details in the prompt. The model understands how characters and objects should move and interact realistically, and it can create multiple shots within a single video that consistently portray the same characters, locations, and visual style.

OpenAI hopes that by training models to understand and simulate the physical world, Sora can serve as a step toward AI that helps solve problems requiring real-world interaction. The company is sharing its progress early to gather feedback from people outside the organization on how to advance the model.

Sora is currently available to researchers assessing potential risks and harms, and to artists and filmmakers offering input on how the model could be most useful to creative professionals. The goal is to develop AI that can assist creatives in generating new visual content directly from descriptions.

In summary, OpenAI's new text-to-video model uses its understanding of language and physics to automatically create realistic videos matching written prompts, demonstrating progress toward AI that can simulate the real world.
