
Add from single file to StableDiffusionUpscalePipeline and StableDiffusionLatentUpscalePipeline #5194

Merged: 5 commits into main from load-single-file on Oct 6, 2023

Conversation

DN6
Collaborator

@DN6 DN6 commented Sep 27, 2023

What does this PR do?

Add from single file mixin to StableDiffusionUpscalePipeline and StableDiffusionLatentUpscalePipeline

Fixes #5150

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@DN6 DN6 changed the title add from single file to StableDiffusionUpscalePipeline and StableDiffusionLatentUpscalePipeline [WIP] Add from single file to StableDiffusionUpscalePipeline and StableDiffusionLatentUpscalePipeline Sep 27, 2023
@DN6
Collaborator Author

DN6 commented Sep 27, 2023

@patrickvonplaten Added the mixins to the upscale pipelines. I'm a little confused about how to test this, though.

Is the expectation that the user provides the appropriate YAML config when using from_single_file? Is that the case even if a single-file version of the model is present on the Hub? Or are we expected to fetch the config somehow when it is available on the Hub?

e.g.
https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/blob/main/x4-upscaler-ema.safetensors

@patrickvonplaten
Contributor

patrickvonplaten commented Sep 27, 2023

The following should work:

```shell
wget https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.safetensors
```

And then doing:

```python
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_single_file("./x4-upscaler-ema.safetensors")
```

without giving any config.

No need to test the latent upscaler; I don't think there is even an official single-file checkpoint for it.

@DN6
Collaborator Author

DN6 commented Sep 27, 2023

> The following should work:
>
> ```shell
> wget https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.safetensors
> ```
>
> And then doing:
>
> ```python
> from diffusers import StableDiffusionUpscalePipeline
>
> pipeline = StableDiffusionUpscalePipeline.from_single_file("./x4-upscaler-ema.safetensors")
> ```
>
> without giving any config.
>
> No need to test the latent upscaler, I don't think there is even an official single file for this

So that snippet will fail, since there is a mismatch in the layer shapes in the UNet:

```
ValueError: Trying to set a tensor of shape torch.Size([1024, 256]) in "weight" (which has shape torch.Size([1280, 320])), this look incorrect.
```

Without a config, from_single_file creates a default UNet config that doesn't match the one in the checkpoint. I think this would apply to other components too.
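As a toy illustration of this failure mode (plain Python, no diffusers): the loader derives parameter shapes from a default config and then copies checkpoint tensors into them, so any shape mismatch raises at load time. The function and dict names here are hypothetical; the shapes are taken from the traceback above.

```python
# Toy sketch: the model's parameter shapes come from the (default) config,
# the checkpoint's shapes come from the file; copying fails on mismatch.
def copy_checkpoint(model_shapes, checkpoint_shapes):
    for name, ckpt_shape in checkpoint_shapes.items():
        model_shape = model_shapes[name]
        if model_shape != ckpt_shape:
            raise ValueError(
                f'Trying to set a tensor of shape {ckpt_shape} in "{name}" '
                f"(which has shape {model_shape})"
            )

# The default SD UNet config expects (1280, 320) where the x4-upscaler
# checkpoint stores (1024, 256), reproducing the error above in miniature.
try:
    copy_checkpoint({"weight": (1280, 320)}, {"weight": (1024, 256)})
except ValueError as err:
    print(err)
```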

@patrickvonplaten
Contributor

> > The following should work:
> >
> > ```shell
> > wget https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.safetensors
> > ```
> >
> > And then doing:
> >
> > ```python
> > from diffusers import StableDiffusionUpscalePipeline
> >
> > pipeline = StableDiffusionUpscalePipeline.from_single_file("./x4-upscaler-ema.safetensors")
> > ```
> >
> > without giving any config.
> > No need to test the latent upscaler, I don't think there is even an official single file for this
>
> So that snippet will fail since there is a mismatch in the layer shapes in the UNet.
>
> ```
> ValueError: Trying to set a tensor of shape torch.Size([1024, 256]) in "weight" (which has shape torch.Size([1280, 320])), this look incorrect.
> ```
>
> Without a config from_single_file creates a default Unet config that doesn't match the one in the checkpoint. I think it would apply to other components too.

Then we should add a new if statement that automatically detects, from the checkpoint alone, that the input model is an upsampling model, and then sets the correct configs.
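A hypothetical sketch of such an if statement. The assumption here (not confirmed in this thread) is that upscaler-style checkpoints carry a flat "label_emb.weight" entry for the noise-level embedding, while base SD checkpoints have no label_emb keys at all; key names follow the original LDM "model.diffusion_model." prefix, and the returned config names are placeholders.

```python
# Hypothetical detection: branch on a key that only upscaler-style
# checkpoints are assumed to contain.
def looks_like_upscaler(checkpoint):
    return "model.diffusion_model.label_emb.weight" in checkpoint

def pick_unet_config(checkpoint):
    # Placeholder config names; a real loader would return actual configs.
    if looks_like_upscaler(checkpoint):
        return "x4-upscaler"
    return "sd-base"
```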

@DN6
Collaborator Author

DN6 commented Oct 4, 2023

@patrickvonplaten this is ready for another review. There's an unrelated failing Shap-E test; the rest of the CI is green.

@DN6 DN6 changed the title [WIP] Add from single file to StableDiffusionUpscalePipeline and StableDiffusionLatentUpscalePipeline Add from single file to StableDiffusionUpscalePipeline and StableDiffusionLatentUpscalePipeline Oct 4, 2023
Contributor

@patrickvonplaten patrickvonplaten left a comment


Very cool! Nice job

@DN6 DN6 merged commit 872ae1d into main Oct 6, 2023
12 of 13 checks passed
@xiaoyong-z

The LDM UNet model can no longer be loaded after this PR; it raises an exception:

```
File "/root/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 450, in convert_ldm_unet_checkpoint
    new_checkpoint["class_embedding.weight"] = unet_state_dict["label_emb.weight"]
KeyError: 'label_emb.weight'
```

(In 0.21.0 the LDM model loads correctly.)
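One hypothetical defensive rewrite of the failing line in convert_ldm_unet_checkpoint, shown here as a standalone sketch rather than the fix actually adopted in #5917: copy the class embedding only when the checkpoint actually has one, so plain (unconditional) LDM UNets keep loading instead of raising KeyError.

```python
# Hypothetical guard around the line from the traceback above: only copy
# "label_emb.weight" into the new checkpoint when it exists in the source.
def copy_class_embedding(new_checkpoint, unet_state_dict):
    if "label_emb.weight" in unet_state_dict:
        new_checkpoint["class_embedding.weight"] = unet_state_dict["label_emb.weight"]
    return new_checkpoint
```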

@patrickvonplaten
Contributor

@xiaoyong-z could you open a new PR? cc @DN6 here as well for visibility

@xiaoyong-z

#5917 @patrickvonplaten

@kashif kashif deleted the load-single-file branch December 5, 2023 09:00
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
…usionLatentUpscalePipeline (huggingface#5194)

* add from single file

* clean up

* make style

* add single file loading for upscaling
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024
…usionLatentUpscalePipeline (huggingface#5194)

* add from single file

* clean up

* make style

* add single file loading for upscaling