I have some remapped issues #3

Open

GrainSack opened this issue Aug 9, 2023 · 1 comment

GrainSack commented Aug 9, 2023

Below is the terminal error I get when running:

python precompute_noises_and_conditionings.py \
    --config ./config/parameter_estimation.yaml \
    --inversion_subfolder noise \
    --token_subfolder tokens \
    --triplet_file triplets.csv \
    --data_path ./dataset/data/

Model loaded
/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torchvision/transforms/functional_pil.py:42: DeprecationWarning: FLIP_LEFT_RIGHT is deprecated and will be removed in Pillow 10 (2023-07-01). Use Transpose.FLIP_LEFT_RIGHT instead.
  return img.transpose(Image.FLIP_LEFT_RIGHT)
ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.
Selected timesteps: tensor([4, 0, 5, 2, 3, 6, 7, 1])
  0%|                                                                                                                                                                                        | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "estimate_CLIP_features.py", line 65, in <module>
    output = invertor.perform_cond_inversion_individual_timesteps(file_path, None, optimize_tokens=True)
  File "/hdd1/kss/home/DIA/ddim_invertor.py", line 275, in perform_cond_inversion_individual_timesteps
    noise_prediction = self.ddim_sampler.model.apply_model(noisy_samples, steps_in, cond_init.expand(self.config.conditioning_optimization.batch_size, -1 , -1))
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/models/diffusion/ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input,  #**kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/attention.py", line 258, in forward
    x = block(x, context=context)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/attention.py", line 213, in _forward
    x = self.attn2(self.norm2(x), context=context) + x
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/attention.py", line 180, in forward
    sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/functional.py", line 330, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [64, 4096, 40]->[64, 4096, 1, 40] [8, 77, 40]->[8, 1, 77, 40]
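For context, the einsum at the bottom of the trace fails because the batch dimension of the queries (64, i.e. latent batch times attention heads) does not match the batch dimension of the conditioning keys (8), and neither is 1, so they cannot broadcast. A minimal sketch, independent of the repo, that reproduces the same failure with the shapes taken from the error message:

    import torch
    from torch import einsum

    q = torch.randn(64, 4096, 40)   # latent queries: (batch*heads, pixels, dim)
    k = torch.randn(8, 77, 40)      # conditioning keys with a smaller batch

    try:
        sim = einsum('b i d, b j d -> b i j', q, k)  # 64 vs 8 on dim 'b' -> RuntimeError
    except RuntimeError as e:
        print(e)  # "operands do not broadcast with remapped shapes ..."

    # If the conditioning batch matched the query batch (e.g. repeated 8x),
    # the same einsum succeeds:
    k_matched = k.repeat_interleave(8, dim=0)          # (64, 77, 40)
    sim = einsum('b i d, b j d -> b i j', q, k_matched)
    print(sim.shape)                                    # torch.Size([64, 4096, 77])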

And at the end, the token inversion was not found:

Traceback (most recent call last):
  File "estimate_input_noise.py", line 70, in <module>
    outputs = invertor.perform_inversion(file_name, cond = None, init_noise_init = None, loss_weights= {'latents': 1. , 'pixels':1.} )
  File "/hdd1/kss/home/DIA/ddim_invertor.py", line 93, in perform_inversion
    assert cond_out is not None, 'Token inversion was not found...'
AssertionError: Token inversion was not found...

My torch version is the same as in the repo (1.11.0), and the CUDA version reported by nvidia-smi is 12.0.

@subrtadel
Owner
Did you change the batch size? There was a bug, I believe it is fixed now.
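For anyone else hitting this, a hedged illustration of the likely mismatch (the variable names below are hypothetical; only the expand call on cond_init appears in the trace): the conditioning is expanded to conditioning_optimization.batch_size, so if that value disagrees with the batch actually used to build the noisy samples, the cross-attention batches diverge exactly as in the error above.

    import torch

    latent_batch = 8          # batch used to build noisy_samples
    configured_batch = 8      # conditioning_optimization.batch_size from the config

    cond_init = torch.randn(1, 77, 768)                  # single CLIP conditioning
    cond = cond_init.expand(configured_batch, -1, -1)    # as in the traceback
    noisy_samples = torch.randn(latent_batch, 4, 64, 64)

    # If these two batches disagree, the cross-attention einsum later fails with
    # the "operands do not broadcast" error shown above.
    if cond.shape[0] != noisy_samples.shape[0]:
        raise ValueError(
            f"conditioning batch {cond.shape[0]} != latent batch {noisy_samples.shape[0]}"
        )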
