This is our implementation of ControlNeXt based on Stable Video Diffusion. It can be seen as an attempt to replicate the implementation of AnimateAnyone with a more concise and efficient architecture.
Compared to image generation, video generation poses significantly greater challenges. While directly training the generation model with our method is feasible, we also employ various engineering strategies to enhance performance, although these are independent of the academic algorithm itself.
Please refer to Inference for more details regarding installation and inference.
Please refer to Advanced Performance for more details on achieving better performance.
Please refer to Limitations for more details about the limitations of the current work.
https://www.youtube.com/watch?v=FwwhwshJW-I
- Clone repository
git clone https://github.com/newgenai79/ControlNeXt-SVD-v2
- Navigate inside cloned repo
cd ControlNeXt-SVD-v2
- Create virtual environment
python -m venv venv
- Activate virtual environment
venv\scripts\activate
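The activation command above is for Windows (cmd). On Linux or macOS, a virtual environment created the same way is activated with:
source venv/bin/activate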
- Install wheel
pip install wheel
- Install requirements
pip install -r requirements.txt
- Download the examples folder from the original repo (one approach is shown below)
https://github.com/dvlab-research/ControlNeXt/tree/main/ControlNeXt-SVD-v2
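The link above points into the full ControlNeXt repository. One way to grab just that folder is a shallow clone followed by a copy; the commands below are a sketch assuming the default clone location and Windows cmd:
git clone --depth 1 https://github.com/dvlab-research/ControlNeXt
xcopy /E /I ControlNeXt\ControlNeXt-SVD-v2\examples examples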
- Download pretrained weights
8.1. Download the pretrained weights into pretrained/ from here. (For more details, please refer to Base Model.)
8.2. Download the DWPose weights, including dw-ll_ucoco_384 and yolox_l, into pretrained/DWPose. For more details, please refer to DWPose; a command-line example follows the tree below:
ControlNeXt-SVD-v2\pretrained
│   controlnet.bin
│   unet.bin
│
└───DWPose
        dw-ll_ucoco_384.onnx
        yolox_l.onnx
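If you prefer the command line, the two ONNX files can also be fetched with huggingface-cli. The repository id yzd-v/DWPose is an assumption about where these weights are mirrored, so verify it against the DWPose link above:
huggingface-cli download yzd-v/DWPose dw-ll_ucoco_384.onnx yolox_l.onnx --local-dir pretrained/DWPose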
8.3. Clone the SVD base model into the root folder:
git clone https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1 stabilityai/stable-video-diffusion-img2vid-xt-1-1
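Hugging Face model repositories store their weights through Git LFS, so make sure it is initialized before cloning; otherwise the large .safetensors files come down as small pointer stubs:
git lfs install
The expected layout after cloning: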
ControlNeXt-SVD-v2\stabilityai
└───stable-video-diffusion-img2vid-xt-1-1
│ .gitattributes
│ LICENSE.md
│ model_index.json
│ README.md
│ svd11.webp
│ svd_xt_1_1.safetensors
│
├───feature_extractor
│ preprocessor_config.json
│
├───image_encoder
│ config.json
│ model.fp16.safetensors
│
├───scheduler
│ scheduler_config.json
│
├───unet
│ config.json
│ diffusion_pytorch_model.fp16.safetensors
│
└───vae
config.json
diffusion_pytorch_model.fp16.safetensors
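As an alternative to git clone, the same files can be downloaded with huggingface-cli. Note that SVD-XT 1.1 is a gated model, so you may first need to accept its license on the model page and authenticate with huggingface-cli login:
huggingface-cli download stabilityai/stable-video-diffusion-img2vid-xt-1-1 --local-dir stabilityai/stable-video-diffusion-img2vid-xt-1-1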
- Launch the Gradio WebUI
python app.py
--pretrained_model_name_or_path : the pretrained base model; we pretrain and fine-tune our models based on SVD-XT 1.1
--controlnet_model_name_or_path : the path to the ControlNet model (a lightweight module)
--unet_model_name_or_path : the path to the UNet model
--ref_image_path : the path to the reference image
--overlap : the number of overlapping frames between windows for long-video generation
--sample_stride : the sampling stride for the conditional controls; set it to 1 for smoother generation, at the cost of more computation
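For reference, a full invocation might look like the following, assuming app.py accepts the flags listed above. Every value here is illustrative, and the reference image path in particular is hypothetical, so adjust them to your own setup:
python app.py ^
  --pretrained_model_name_or_path stabilityai/stable-video-diffusion-img2vid-xt-1-1 ^
  --controlnet_model_name_or_path pretrained/controlnet.bin ^
  --unet_model_name_or_path pretrained/unet.bin ^
  --ref_image_path examples/ref_imgs/01.jpeg ^
  --overlap 4 ^
  --sample_stride 2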
It is crucial that the reference image is clear and easy to interpret, and in particular that the face in the reference aligns with the pose.
To significantly enhance performance on a specific pose sequence, you can continue fine-tuning the model for just a few hundred steps.
We will release the related fine-tuning code later.
We adopt DWPose for pose generation and follow the related work (1, 2) to align the poses.
We did not prioritize maintaining IP consistency during the development of the generation model and now rely on a helper model for face enhancement.
However, additional training can be implemented to ensure IP consistency moving forward.
This also leaves a possible direction for further improvement.
The base model plays a crucial role in generating human features, particularly hands and faces. We encourage collaboration to improve the base model for enhanced human-related video generation.