
Cannot load ../pretrained_models/stable-diffusion-v1-4 because encoder.conv_in.weight expected shape tensor(..., device='meta', size=(64, 3, 3, 3)), but got torch.Size([128, 3, 3, 3]). #51

Open
foxyear-kyumin opened this issue Dec 15, 2023 · 3 comments

Comments

@foxyear-kyumin

Did I choose the wrong model?

@Freedomcls

Hello, I was wondering if you solved the problem.

@maxin-cn
Contributor

> Hello, I was wondering if you solved the problem.

@Freedomcls Hi, could you please provide more details about this problem? Thanks!

@delcompan
Copy link

I get the same error:
vae = AutoencoderKL.from_pretrained(sd_path, subfolder="vae", torch_dtype=torch.float16).to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ImageAI\FreeNoise-LaVie\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 583, in from_pretrained
raise ValueError(
ValueError: Cannot load <class 'diffusers.models.autoencoder_kl.AutoencoderKL'> from G:/ImageAI/FreeNoise-LaVie/pretrained_models/stable-diffusion-v1-4 because the following keys are missing:
encoder.mid_block.attentions.0.value.weight, decoder.mid_block.attentions.0.proj_attn.weight, decoder.mid_block.attentions.0.key.bias, decoder.mid_block.attentions.0.query.bias, decoder.mid_block.attentions.0.key.weight, encoder.mid_block.attentions.0.query.bias, encoder.mid_block.attentions.0.proj_attn.weight, encoder.mid_block.attentions.0.proj_attn.bias, decoder.mid_block.attentions.0.value.weight, decoder.mid_block.attentions.0.proj_attn.bias, encoder.mid_block.attentions.0.key.weight, encoder.mid_block.attentions.0.key.bias, decoder.mid_block.attentions.0.value.bias, decoder.mid_block.attentions.0.query.weight, encoder.mid_block.attentions.0.value.bias, encoder.mid_block.attentions.0.query.weight.
Please make sure to pass low_cpu_mem_usage=False and device_map=None if you want to randomly initialize those weights or else make sure your checkpoint file is correct.
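The missing keys listed above are the deprecated diffusers attention names (`query`, `key`, `value`, `proj_attn`); newer checkpoints store the same weights under `to_q`, `to_k`, `to_v`, and `to_out.0`. That points to a version mismatch between the installed diffusers package and the downloaded VAE weights, so upgrading diffusers (or re-downloading a checkpoint matching the installed version) is the usual fix. The traceback's own suggestion of `low_cpu_mem_usage=False` would randomly initialize the missing weights, which is usually not what you want for a pretrained VAE. As a rough illustration of the mismatch only, here is a hypothetical key-renaming helper; `rename_attention_keys` and the `NEW_TO_OLD` mapping are assumptions for demonstration, not part of the diffusers API:

```python
# Hypothetical helper illustrating the naming mismatch behind the error.
# Older diffusers versions expect attention keys named query/key/value/proj_attn,
# while newer checkpoints store them as to_q/to_k/to_v/to_out.0.
# This mapping is an assumption for illustration, not a diffusers API.
NEW_TO_OLD = {
    "to_q": "query",
    "to_k": "key",
    "to_v": "value",
    "to_out.0": "proj_attn",
}

def rename_attention_keys(state_dict):
    """Rename new-style attention keys back to the deprecated names."""
    renamed = {}
    for key, value in state_dict.items():
        new_key = key
        if ".attentions." in key:
            for new_name, old_name in NEW_TO_OLD.items():
                marker = f".{new_name}."
                if marker in key:
                    new_key = key.replace(marker, f".{old_name}.")
                    break
        renamed[new_key] = value
    return renamed

# Example: a new-style key is mapped to the name the old loader expects.
sd = {"encoder.mid_block.attentions.0.to_q.weight": "W"}
print(rename_attention_keys(sd))
# {'encoder.mid_block.attentions.0.query.weight': 'W'}
```

In practice you would rarely patch a state dict by hand like this; matching the diffusers version to the checkpoint is the cleaner solution.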
