
FastVideo is a lightweight framework for accelerating large video diffusion models.

[Demo video: FastMochi-Demo.mp4]

🤗 FastMochi | 🤗 FastHunyuan | 🔍 Discord

FastVideo currently offers (with more to come):

  • FastHunyuan and FastMochi: consistency-distilled video diffusion models offering an 8x inference speedup.
  • The first open distillation recipes for video DiTs, based on PCM.
  • Support for distilling, finetuning, and running inference on state-of-the-art open video DiTs: Mochi and Hunyuan.
  • Scalable training with FSDP, sequence parallelism, and selective activation checkpointing, with near-linear scaling to 64 GPUs (see the sketch after this list).
  • Memory-efficient finetuning with LoRA, precomputed latents, and precomputed text embeddings.
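
For orientation, the scalable-training stack builds on standard PyTorch FSDP. Below is a minimal sketch of the wrapping step, assuming `transformer` is an already-constructed video DiT module; the repo's actual wrapping policy and sharding configuration may differ.

```python
# Hedged sketch: shard a video DiT with FSDP and bf16 mixed precision.
# `transformer` is assumed to be an already-built torch.nn.Module.
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

model = FSDP(
    transformer,
    mixed_precision=MixedPrecision(
        param_dtype=torch.bfloat16,   # compute and communication in bf16
        reduce_dtype=torch.bfloat16,  # gradient reduction in bf16 as well
    ),
    use_orig_params=True,             # keeps named parameters addressable
)
```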

Development is in progress and the code is highly experimental.

🎥 More Demos

FastHunyuan compared with the original Hunyuan, achieving an 8x diffusion speedup with the FastVideo framework.

[Demo video: FastHunyuan-Demo.mp4]

Comparison between OpenAI Sora, the original Hunyuan, and FastHunyuan.

[Demo video: sora-verse-fasthunyuan.mp4]

Change Log

  • 2024/12/17: FastVideo v0.1 is released.

🔧 Installation

The code is tested with Python 3.10.0, CUDA 12.1, and H100 GPUs.

```bash
./env_setup.sh fastvideo
```
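
After setup, a quick sanity check (not part of the repo, just a hedged helper) confirms that a CUDA-enabled PyTorch build can see the GPU:

```python
# Verify the environment exposes a CUDA-enabled PyTorch build and the GPU.
import torch

print(torch.__version__)          # expect a CUDA 12.1 build, e.g. "2.x.x+cu121"
print(torch.cuda.is_available())  # should print True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA H100 80GB HBM3"
```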

🚀 Inference

We recommend a GPU with 80 GB of memory. To run inference, use the following commands:

FastHunyuan

```bash
# Download the model weights
python scripts/huggingface/download_hf.py --repo_id=FastVideo/FastHunyuan --local_dir=data/FastHunyuan --repo_type=model
# CLI inference
sh scripts/inference/inference_hunyuan.sh
```
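
download_hf.py presumably wraps huggingface_hub; if you prefer to download programmatically, the direct equivalent would be something like:

```python
# Hedged equivalent of the download command above, using huggingface_hub directly.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="FastVideo/FastHunyuan",
    local_dir="data/FastHunyuan",
    repo_type="model",
)
```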

You can also run FastHunyuan inference from the official Hunyuan GitHub repository.

FastMochi

```bash
# Download the model weights
python scripts/huggingface/download_hf.py --repo_id=FastVideo/FastMochi-diffusers --local_dir=data/FastMochi-diffusers --repo_type=model
# CLI inference
bash scripts/inference/inference_mochi_sp.sh
```
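
Since FastVideo/FastMochi-diffusers is distributed in diffusers format, a plain diffusers pipeline should also be able to load it. A hedged sketch, assuming the checkpoint follows the standard Mochi layout and a diffusers release that ships MochiPipeline; the prompt and step count are illustrative:

```python
# Minimal diffusers-based inference sketch for the FastMochi checkpoint.
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("data/FastMochi-diffusers", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # trades speed for lower peak VRAM

frames = pipe(
    prompt="A timelapse of clouds drifting over a mountain lake",
    num_inference_steps=8,  # distilled models need far fewer steps than the base model
).frames[0]
export_to_video(frames, "fastmochi_sample.mp4", fps=30)
```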

🎯 Distill

Our distillation recipe is based on the Phased Consistency Model (PCM). We did not find a significant improvement from multi-phase distillation, so we keep a one-phase setup similar to the original latent consistency model recipe. We use the MixKit dataset for distillation. To avoid running the text encoder and VAE during training, we preprocess all data to generate text embeddings and VAE latents in advance; a sketch of this idea follows the download command below. Preprocessing instructions can be found in data_preprocess.md. For convenience, we also provide preprocessed data that can be downloaded directly using the following command:

```bash
python scripts/huggingface/download_hf.py --repo_id=FastVideo/HD-Mixkit-Finetune-Hunyuan --local_dir=data/HD-Mixkit-Finetune-Hunyuan --repo_type=dataset
```
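
To make the precomputation idea concrete, here is a hedged sketch assuming generic diffusers-style VAE and text-encoder modules; the repo's actual pipeline is documented in data_preprocess.md, and the function name and arguments here are illustrative:

```python
# Hedged sketch: cache VAE latents and text embeddings once, before training.
import torch

@torch.no_grad()
def precompute_sample(vae, text_encoder, tokenizer, video, prompt, out_path):
    # Encode the video once so the VAE never runs inside the training loop.
    latents = vae.encode(video).latent_dist.sample() * vae.config.scaling_factor
    # Encode the caption once so the text encoder never runs either.
    tokens = tokenizer(prompt, return_tensors="pt", padding="max_length", truncation=True)
    text_emb = text_encoder(tokens.input_ids)[0]
    # Training then reads these tensors straight from disk.
    torch.save({"latents": latents.cpu(), "text_emb": text_emb.cpu()}, out_path)
```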

Next, download the original model weights with:

```bash
python scripts/huggingface/download_hf.py --repo_id=FastVideo/hunyuan --local_dir=data/hunyuan --repo_type=model
```

To launch the distillation process, use the following commands:

```bash
bash scripts/distill/distill_mochi.sh    # for Mochi
bash scripts/distill/distill_hunyuan.sh  # for Hunyuan
```
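
For orientation, the core of an LCM/PCM-style one-phase consistency objective looks roughly like the sketch below. This is conceptual, not the repo's implementation (see the distill scripts for the real thing); all function names are illustrative:

```python
# Hedged sketch of a one-phase consistency-distillation loss.
import torch
import torch.nn.functional as F

def consistency_loss(student, ema_student, teacher_ode_step, x_t, t, t_prev, cond):
    # The student maps a noisy latent at timestep t toward the clean sample.
    pred = student(x_t, t, cond)
    with torch.no_grad():
        # One teacher ODE step from t to t_prev along the sampling trajectory...
        x_prev = teacher_ode_step(x_t, t, t_prev, cond)
        # ...and the EMA student's prediction there is the consistency target.
        target = ema_student(x_prev, t_prev, cond)
    # Huber loss is the common choice in LCM-style recipes.
    return F.huber_loss(pred, target)
```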

We also provide an optional script for distillation with an adversarial loss, located at fastvideo/distill_adv.py. In our experiments, adding the adversarial loss did not yield significant improvements.

Finetune

⚡ Full Finetune

Ensure your data is prepared and preprocessed in the format specified in data_preprocess.md. For convenience, we also provide preprocessed Black Myth: Wukong data for Mochi that can be downloaded directly:

```bash
python scripts/huggingface/download_hf.py --repo_id=FastVideo/Mochi-Black-Myth --local_dir=data/Mochi-Black-Myth --repo_type=dataset
```

Download the original model weights with:

```bash
python scripts/huggingface/download_hf.py --repo_id=genmo/mochi-1-preview --local_dir=data/mochi --repo_type=model
python scripts/huggingface/download_hf.py --repo_id=FastVideo/hunyuan --local_dir=data/hunyuan --repo_type=model
```

Then launch finetuning with:

```bash
bash scripts/finetune/finetune_mochi.sh  # for Mochi
```

Note that for finetuning, we did not tune the hyperparameters in the provided script.

⚡ LoRA Finetune

Currently, we only provide LoRA finetuning for the Mochi model. The command is:

```bash
bash scripts/finetune/finetune_mochi_lora.sh
```
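
For context, LoRA injection in diffusion trainers typically looks like the PEFT-based sketch below. The rank and target modules are illustrative, not the values used by finetune_mochi_lora.sh, and `transformer` is assumed to be the loaded Mochi DiT:

```python
# Hedged sketch: wrap a transformer's attention projections with LoRA adapters.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,            # low-rank dimension; illustrative
    lora_alpha=16,   # scaling factor for the adapter updates
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # typical attention projections
)
transformer = get_peft_model(transformer, lora_config)
transformer.print_trainable_parameters()  # only the small LoRA adapters train
```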

Minimum Hardware Requirements

  • 40 GB of GPU memory on each of 2 GPUs with LoRA.
  • 30 GB of GPU memory on each of 2 GPUs with CPU offload and LoRA.

Finetune with Both Image and Video

Our codebase supports finetuning with both images and videos.

```bash
bash scripts/finetune/finetune_hunyuan.sh
bash scripts/finetune/finetune_mochi_lora_mix.sh
```

For image-video mixture finetuning, make sure to enable the --group_frame option in your script.

Acknowledgement

We learned and reused code from the following projects: PCM, diffusers, OpenSoraPlan, and xDiT.

We thank MBZUAI and Anyscale for their support throughout this project.