Stability AI launches Stable Video Diffusion, its video generator

Stability AI, known for its Stable Diffusion image generator, has just announced the release of a new AI that lets you animate images by creating videos just a few seconds long. The source code is already freely available.

Stable Diffusion XL is one of the best artificial intelligence models for image generation. Its creator, Stability AI, has just announced the launch of a new AI, Stable Video Diffusion, this time for video. The idea is not new: Google, Meta, Nvidia and Adobe have already demonstrated similar technologies, and Meta has even announced its upcoming integration into Facebook and Instagram. For the moment, however, their AIs are not publicly available.

With Stable Video Diffusion, you can create videos of around four seconds. Making a film is therefore out of the question; the results are closer to animated images, like GIFs. That is in fact how the tool works, since each video is generated from a single image. Stability AI also plans to offer an interface for generating videos from text by combining its new AI with a version of Stable Diffusion. To gauge the quality of the videos, the company asked volunteers to compare its AI against its main publicly accessible competitors, namely Runway Gen-2 and Pika Labs. They judged the clips produced by Stable Video Diffusion to be of noticeably better quality.

Source code published online

The model was trained on a database of around 600 million videos and generates clips with a resolution of 576 x 1024 pixels. The AI comes in two versions: the first, called SVD, produces videos of 14 frames, while the second, called SVD-XT, is tuned to produce videos of 25 frames.

One of the things that sets Stability AI apart from its competitors is that it publishes the source code of its models online, meaning anyone can run them on their own hardware, provided they have the necessary technical knowledge. The code is available on the project's GitHub page, while the weights required to run the model locally are available on its Hugging Face page.
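As a rough illustration, here is a minimal sketch of how the downloadable model might be run locally with Hugging Face's diffusers library. The pipeline class, model identifier, file names and parameters shown are assumptions based on the library's image-to-video support, not details taken from the article.

```python
# Sketch: generating a short clip from a single image with Stable Video Diffusion
# via Hugging Face's diffusers library (model id and settings are assumptions).
# Requires a CUDA GPU with enough memory for the fp16 weights.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Download the SVD-XT weights from Hugging Face.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# The video is generated from a single conditioning image, resized to 1024 x 576.
image = load_image("input.png")  # placeholder path for the source image
image = image.resize((1024, 576))

# Generate the frames and write them out as a short MP4 clip.
generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

The decode_chunk_size and fps values here are arbitrary choices to keep memory use and clip length modest; they are not specified by the article.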