
FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality

¹The University of Hong Kong       ²S-Lab, Nanyang Technological University
³Shanghai Artificial Intelligence Laboratory
(‡: Project lead; †: Corresponding authors)

Paper | Project Page

About

We present FasterCache, a novel training-free strategy designed to accelerate the inference of video diffusion models while preserving high-quality generation. For more details and visual results, check out our Project Page.

(Teaser video: teaser.mp4)

News

  • (🔥 New) 2024/11/08: Added a multi-device inference script for CogVideoX.
  • (🔥 New) 2024/11/08: Implemented FasterCache for Mochi.

Usage

Installation

Run the following commands to create and activate an Anaconda environment and install FasterCache:

conda create -n fastercache python=3.10 -y
conda activate fastercache
git clone https://github.com/Vchitect/FasterCache
cd FasterCache
pip install -e .
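
If the installation succeeds, a quick import check should pass (this assumes the package installs under the module name fastercache; adjust the name if your install differs):

python -c "import fastercache"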

Inference

We currently support Open-Sora 1.2, Open-Sora-Plan 1.1, Latte, CogVideoX-2B & 5B, Vchitect 2.0, and Mochi. You can achieve accelerated sampling by executing the scripts we provide; a sketch of what these scripts wrap follows the list below.

  • Open-Sora

    For single-GPU inference on Open-Sora, run the following command:

    bash scripts/opensora/fastercache_sample_opensora.sh
    

    For multi-GPU inference on Open-Sora, run the following command:

    bash scripts/opensora/fastercache_sample_multi_device_opensora.sh
    
  • Open-Sora-Plan

    For single-GPU inference on Open-Sora-Plan, run the following command:

    bash scripts/opensora_plan/fastercache_sample_opensoraplan.sh
    

    For multi-GPU inference on Open-Sora-Plan, run the following command:

    bash scripts/opensora_plan/fastercache_sample_multi_device_opensoraplan.sh
    
  • Latte

    For single-GPU inference on Latte, run the following command:

    bash scripts/latte/fastercache_sample_latte.sh
    

    For multi-GPU inference on Latte, run the following command:

    bash scripts/latte/fastercache_sample_multi_device_latte.sh
    
  • CogVideoX

    For single-GPU inference on CogVideoX-2B, run the following command:

    bash scripts/cogvideox/fastercache_sample_cogvideox.sh
    

    For multi-GPU inference on CogVideoX-2B, run the following command:

    bash scripts/cogvideox/fastercache_sample_cogvideox_multi_device.sh
    

    For inference on CogVideoX-5B, run the following command:

    bash scripts/cogvideox/fastercache_sample_cogvideox5b.sh
    
  • Vchitect 2.0

    For inference on Vchitect 2.0, run the following command:

    bash scripts/vchitect/fastercache_sample_vchitect.sh
    
  • Mochi

    We also provide acceleration scripts for Mochi. Before running them, please follow the official Mochi repository to download the model weights, set up the environment, and install the genmo package. Then execute the following script:

    bash scripts/mochi/fastercache_sample_mochi.sh 
    
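Each of the scripts above is a thin launcher for a Python sampling entry point. As a rough illustration only (the entry point, config path, and flags here are placeholders rather than the repository's actual names; open the script you intend to run for the real command), a single-GPU launch has roughly this shape, and the multi-GPU variants launch the same entry point through torchrun:

# Hypothetical launcher shape -- file names and flags are placeholders,
# not the repository's actual interface.
python sample.py --config config.yaml --prompt "A panda playing guitar"

# Multi-GPU scripts typically distribute the same entry point with torchrun,
# e.g. across 8 GPUs on a single node:
torchrun --nproc_per_node=8 sample.py --config config.yaml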

BibTeX

@inproceedings{lv2024fastercache,
  title={FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality},
  author={Lv, Zhengyao and Si, Chenyang and Song, Junhao and Yang, Zhenyu and Qiao, Yu and Liu, Ziwei and Wong, Kwan-Yee K.},
  booktitle={arXiv},
  year={2024}
}

Acknowledgement

This repository borrows code from VideoSys, Vchitect-2.0, Mochi, and CogVideo. Thanks for their contributions!
