diff --git a/docs/source/en/api/pipelines/ltx.md b/docs/source/en/api/pipelines/ltx.md
index 17032fede952..007f43f77b0c 100644
--- a/docs/source/en/api/pipelines/ltx.md
+++ b/docs/source/en/api/pipelines/ltx.md
@@ -22,6 +22,35 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.m
+## Loading Single Files
+
+Loading the original LTX Video checkpoints is also possible with [`~ModelMixin.from_single_file`].
+
+```python
+import torch
+from diffusers import AutoencoderKLLTXVideo, LTXImageToVideoPipeline, LTXVideoTransformer3DModel
+
+single_file_url = "https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.safetensors"
+transformer = LTXVideoTransformer3DModel.from_single_file(single_file_url, torch_dtype=torch.bfloat16)
+vae = AutoencoderKLLTXVideo.from_single_file(single_file_url, torch_dtype=torch.bfloat16)
+pipe = LTXImageToVideoPipeline.from_pretrained("Lightricks/LTX-Video", transformer=transformer, vae=vae, torch_dtype=torch.bfloat16)
+
+# ... inference code ...
+```
+
+Alternatively, the pipeline can be used to load the weights with [`~FromSingleFileMixin.from_single_file`].
+
+```python
+import torch
+from diffusers import LTXImageToVideoPipeline
+from transformers import T5EncoderModel, T5Tokenizer
+
+single_file_url = "https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.safetensors"
+text_encoder = T5EncoderModel.from_pretrained("Lightricks/LTX-Video", subfolder="text_encoder", torch_dtype=torch.bfloat16)
+tokenizer = T5Tokenizer.from_pretrained("Lightricks/LTX-Video", subfolder="tokenizer")
+pipe = LTXImageToVideoPipeline.from_single_file(single_file_url, text_encoder=text_encoder, tokenizer=tokenizer, torch_dtype=torch.bfloat16)
+```
+
 ## LTXPipeline
 
 [[autodoc]] LTXPipeline