New 'Stable Video Diffusion' AI Model Can Animate Any Still Image (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video -- with mixed results. It's an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU. [...] Right now, Stable Video Diffusion consists of two models: one that produces 14-frame image-to-video clips (called "SVD"), and another that generates 25-frame clips (called "SVD-XT"). They can operate at varying speeds from 3 to 30 frames per second, and they output short (typically 2-4 second-long) MP4 video clips at 576x1024 resolution.
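The quoted clip lengths follow directly from the frame counts and playback rates above: duration is simply frames divided by frames per second. A quick sketch (plain arithmetic, no model required; the function name is illustrative) shows how 14- and 25-frame generations land in the typical 2-4 second range at mid-range frame rates:

```python
def clip_duration(frames: int, fps: float) -> float:
    """Length in seconds of a clip with `frames` frames played back at `fps`."""
    return frames / fps

# SVD-XT's 25 frames at a mid-range 7 fps is roughly a 3.6-second clip;
# SVD's 14 frames at the same rate is exactly 2 seconds.
print(round(clip_duration(25, 7), 1))  # 3.6
print(clip_duration(14, 7))            # 2.0
```

At the extremes of the supported 3-30 fps range, clips fall outside that window (25 frames at 30 fps is under a second), which is why the article hedges with "typically."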

In our local testing, a 14-frame generation took about 30 minutes to create on an Nvidia RTX 3060 graphics card, but users can experiment with running the models much faster on the cloud through services like Hugging Face and Replicate (some of which you may need to pay for). In our experiments, the generated animation typically keeps a portion of the scene static and adds panning and zooming effects or animates smoke or fire. People depicted in photos often do not move, although we did get one Getty image of Steve Wozniak to slightly come to life.

Given these limitations, Stability emphasizes that the model is still early and is intended for research only. "While we eagerly update our models with the latest advancements and work to incorporate your feedback," the company writes on its website, "this model is not intended for real-world or commercial applications at this stage. Your insights and feedback on safety and quality are important to refining this model for its eventual release." Notably, but perhaps unsurprisingly, the Stable Video Diffusion research paper (PDF) does not reveal the source of the models' training datasets, only saying that the research team used "a large video dataset comprising roughly 600 million samples" that they curated into the Large Video Dataset (LVD), which consists of 580 million annotated video clips that span 212 years of content in duration.
