Stable Video Generator


How to Run Stable Video Diffusion: A Comprehensive Guide

Stable Video Diffusion (SVD) has emerged as a significant tool in AI-driven video generation. This guide walks you through the steps to use SVD on various platforms, including Google Colab, ComfyUI, a local Windows setup, and a free online demo.

Getting Started with Stable Video Diffusion

Welcome to our Stable Video Diffusion Showcase, a cutting-edge exhibition where the marvels of the Stable Video Diffusion model are on full display. Here, we celebrate the transformative power of this advanced video generator, showcasing how it is revolutionizing the way we create and interact with digital videos.

  • Stable Video Diffusion model weights

  • Model parameters

  • Using Stable Video Diffusion on Google Colab

  • Using Stable Video Diffusion with ComfyUI

  • Installing Stable Video Diffusion on Windows

  • Try it online for free

Stable Video Diffusion model weights

Two SVD model weights are publicly available.

SVD

Trained to generate 14 frames at resolution 576×1024.

SVD-XT

Trained to generate 25 frames at resolution 576×1024.
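
If you want to fetch the weights programmatically, here is a minimal sketch using the huggingface_hub package. The repo ids and filenames are assumptions based on the official Stability AI model pages on Hugging Face, and both repos may require you to accept the model license on huggingface.co before downloading.

```python
from huggingface_hub import hf_hub_download

# Repo ids and filenames assumed from the Stability AI model pages;
# you may need to log in and accept the model license first.
svd_path = hf_hub_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid",
    filename="svd.safetensors",
)
svd_xt_path = hf_hub_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt",
    filename="svd_xt.safetensors",
)
print(svd_path, svd_xt_path)
```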

Model parameters

Below is a list of important parameters that control the video output.

Motion bucket id

The motion bucket id controls how much motion is in the video. A higher value means more motion. Accepts a value between 0 and 255.

FPS

The frames per second (fps) parameter sets how fast the generated frames play back; the total number of frames is fixed by the model (14 or 25). Stay between 5 and 30 for optimal results.

Augmentation level

The augmentation level is the amount of noise added to the initial image. Increase it to let the video deviate more from the initial image, or when generating videos at sizes other than the default.
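
If you prefer scripting to a GUI, here is a minimal sketch of setting these three parameters with the Hugging Face diffusers library. The input filename and parameter values are illustrative, and the pipeline requires a recent diffusers release.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# Load the SVD-XT weights (repo id assumed; the model page may
# require you to accept the license first).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # helps fit on cards with less VRAM

image = load_image("initial_frame.png").resize((1024, 576))

frames = pipe(
    image,
    motion_bucket_id=127,     # 0-255: higher means more motion
    fps=7,                    # stay between 5 and 30
    noise_aug_strength=0.02,  # augmentation level: noise added to the image
    decode_chunk_size=8,      # lower values reduce VRAM use
).frames[0]

export_to_video(frames, "output.mp4", fps=7)
```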

Using Stable Video Diffusion on Google Colab

You need an NVIDIA GPU card with a large amount of VRAM to run Stable Video Diffusion locally. If you don’t have one, the best option is to run it on Google Colab. The notebook works with the free account.

Step 1: Open the Colab Notebook

Visit the GitHub page hosting the SVD Colab notebook. Look for the 'Open in Colab' icon and click it to open the notebook.

Step 2: Review the notebook options

The default settings are typically sufficient. However, you have the option to adjust settings, such as choosing not to save the final video to your Google Drive.

Step 3: Run the Notebook

Click the run button to start executing the notebook. This process initializes the environment for video generation.

Step 4: Start the GUI

Upon completion, a gradio.live link should appear. Click this link to launch the graphical user interface (GUI) for video generation.

Step 5: Upload an Initial Image

Use the GUI to upload the image you wish to animate. This image will serve as the first frame of your video.
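
Since the models are trained at 576×1024, resizing your image to match before uploading usually gives better results. Here is a quick sketch with Pillow; the filenames are placeholders.

```python
from PIL import Image

# SVD is trained at 576×1024 (height × width); Pillow takes (width, height).
img = Image.open("my_photo.jpg").convert("RGB")
img = img.resize((1024, 576))
img.save("initial_frame.png")
```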

Step 6: Start Video Generation

Click 'Run' in the GUI to commence video generation. The process takes approximately 9 minutes on a T4 GPU (free Colab account) and about 2 minutes on a V100 GPU. The completed video will appear in the GUI.

Using Stable Video Diffusion with ComfyUI

ComfyUI now supports the Stable Video Diffusion SVD models. Follow the steps below to install and use the text-to-video (txt2vid) workflow. It generates the initial image using the Stable Diffusion XL model and a video clip using the SVD XT model.

Step 1: Load the Text-to-Video Workflow

Download the ComfyUI workflow for text-to-video conversion and add it to your ComfyUI setup.

Step 2: Update ComfyUI

Ensure ComfyUI is updated, along with all custom nodes. This may involve installing any missing custom nodes.

Step 3: Download Models

Download the SVD XT model and place it in the appropriate folder within ComfyUI.
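
One way to script this is with huggingface_hub, pointing local_dir at your ComfyUI checkpoints folder. The path below is an assumption, so adjust it to your installation.

```python
from huggingface_hub import hf_hub_download

# The ComfyUI path is an example; change it to match your setup.
hf_hub_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt",
    filename="svd_xt.safetensors",
    local_dir="ComfyUI/models/checkpoints",
)
```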

Step 4: Run the Workflow

With the model in place, run the workflow in ComfyUI. Parameters like video frame count, motion bucket id, FPS, and augmentation level can be adjusted to control the video output.

Installing Stable Video Diffusion on Windows

You can run Stable Video Diffusion locally if you have a GPU card with a large amount of VRAM. The following installation process was tested with a 24GB RTX 4090 card. Installing this software locally is difficult, and you may encounter issues not described in this section, so proceed only if you are tech-savvy or willing to become so.

You will need git and Python 3.10 to install and use the software. See the installation guide for Stable Diffusion for steps to install them.

Step 1: Clone the Repository

Use PowerShell (not Command Prompt) to clone the generative models repository from Stability AI's GitHub page.

Step 2: Create a Virtual Environment

In the cloned directory, create a virtual environment for Python and activate it.

Step 3: Remove the Triton Package in Requirements

Edit the requirements file to remove the 'triton' package, which is not necessary for Windows and may cause errors.
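
If you prefer to script the edit, this sketch strips any line starting with 'triton' from the requirements file. The filename requirements/pt2.txt is an assumption, so check what your checkout of the generative-models repository actually uses.

```python
from pathlib import Path

# Filename assumed; verify it against your clone of generative-models.
req = Path("requirements/pt2.txt")
kept = [line for line in req.read_text().splitlines()
        if not line.strip().startswith("triton")]
req.write_text("\n".join(kept) + "\n")
```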

Step 4: Install the Required Libraries

Install PyTorch and the other required libraries by running the installation commands in PowerShell.

Step 5: Download the Video Model

Download the SVD model weights and place them in the 'checkpoints' directory within the generative models folder.

Step 6: Run the GUI

Execute the command to run the GUI for video generation. Navigate to the local URL provided in the PowerShell output.

Step 7: Generate a Video

In the GUI, select the SVD model, upload an initial image, and start the video generation. Monitor the PowerShell for progress and errors.

Try it online for free

You can visit our website to run Stable Video Diffusion online for free.