
size mismatch error #9

Open
moclimb opened this issue Nov 19, 2024 · 2 comments
@moclimb

moclimb commented Nov 19, 2024

I tried to test the script from scripts/cogvideox/fastercache_sample_cogvideox5b.sh, with the model downloaded from https://huggingface.co/THUDM/CogVideoX1.5-5B-SAT/tree/main, but I got the error described in the title. The detailed error is as follows:
Missing keys: []
Unexpected keys: []
Restored from /nas/xxx/models/CogVideoX1.5-5B-SAT/vae/3d-vae.pt
[2024-11-19 02:39:24,249] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 10548374243
[2024-11-19 02:39:49,290] [INFO] [RANK 0] global rank 0 is loading checkpoint /nas/xxx/models/CogVideoX1.5-5B-SAT/transformer_t2v/1000/mp_rank_00_model_states.pt
/home/xxx/miniconda3/envs/fastercache/lib/python3.10/site-packages/sat/training/model_io.py:286: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
sd = torch.load(checkpoint_name, map_location='cpu')
[rank0]: Traceback (most recent call last):
[rank0]: File "/vepfs/home/xxx/project/FasterCache/scripts/cogvideox/fastercache_sample_cogvideox.py", line 658, in
[rank0]: sampling_main(args, model_cls=SATVideoDiffusionEngine)
[rank0]: File "/vepfs/home/xxx/project/FasterCache/scripts/cogvideox/fastercache_sample_cogvideox.py", line 522, in sampling_main
[rank0]: load_checkpoint(model, args)
[rank0]: File "/home/xxx/miniconda3/envs/fastercache/lib/python3.10/site-packages/sat/training/model_io.py", line 304, in load_checkpoint
[rank0]: missing_keys, unexpected_keys = module.load_state_dict(sd['module'], strict=False)
[rank0]: File "/home/xxx/miniconda3/envs/fastercache/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2584, in load_state_dict
[rank0]: raise RuntimeError(
[rank0]: RuntimeError: Error(s) in loading state_dict for SATVideoDiffusionEngine:
[rank0]: size mismatch for model.diffusion_model.mixins.patch_embed.proj.weight: copying a param with shape torch.Size([3072, 128]) from checkpoint, the shape in current model is torch.Size([3072, 16, 2, 2]).
[rank0]: size mismatch for model.diffusion_model.mixins.final_layer.linear.weight: copying a param with shape torch.Size([128, 3072]) from checkpoint, the shape in current model is torch.Size([64, 3072]).
[rank0]: size mismatch for model.diffusion_model.mixins.final_layer.linear.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
[rank0]:[W1119 02:44:44.550044996 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
DONE on di-20231208204413-8nvfq

@cszy98
Collaborator

cszy98 commented Nov 19, 2024

Thank you for your interest. We installed and downloaded the CogVideoX weights from here: https://github.com/THUDM/CogVideo/tree/main/sat
https://cloud.tsinghua.edu.cn/d/fcef5b3904294a6885e5/?p=%2F&mode=list
In addition, our implementation is not based on CogVideoX1.5, so I am unsure whether the current script can be used with it directly. We will update it as needed in the future.
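A size mismatch like the one above usually means the checkpoint comes from a different model version than the one the script builds (here, CogVideoX1.5 weights loaded into a CogVideoX model). A quick way to confirm this, sketched below with generic PyTorch (the helper name is illustrative and not part of the repository; the `"module"` key follows the SAT checkpoint layout shown in the log), is to compare the parameter shapes in the checkpoint against the model's state dict before calling `load_state_dict`:

```python
import torch


def report_shape_mismatches(model, checkpoint_path):
    """Print every parameter whose shape differs between a SAT-style
    checkpoint (state dict stored under the 'module' key) and the model."""
    sd = torch.load(checkpoint_path, map_location="cpu", weights_only=False)["module"]
    model_sd = model.state_dict()
    for name, ckpt_param in sd.items():
        if name in model_sd and model_sd[name].shape != ckpt_param.shape:
            print(f"{name}: checkpoint {tuple(ckpt_param.shape)} "
                  f"vs model {tuple(model_sd[name].shape)}")
```

Running this on the checkpoint from the log would list `patch_embed.proj.weight` and the `final_layer.linear` parameters, matching the error message.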

@moclimb
Author

moclimb commented Nov 24, 2024

> Thank you for your interest. We install and download weights of CogVideoX from here: https://github.com/THUDM/CogVideo/tree/main/sat https://cloud.tsinghua.edu.cn/d/fcef5b3904294a6885e5/?p=%2F&mode=list In addition, our implementation is not based on CogVideoX1.5, and I am unsure whether the current script can be used directly. We will update it as needed in the future.

Thanks for your reply! I ran the code successfully following your comment. I found that the variable "counter" used in the attention cache differs between CogVideoX and the other models. For example, in CogVideoX the cache is used when counter >= 18, while in Latte it is used when counter >= 16, and the ratio that balances the last step against the current step also differs. I would like to know how to set these values; I didn't find a more detailed explanation of this in your paper.

cogvideox:
(screenshot of the counter condition in the CogVideoX attention cache)

latte:
(screenshot of the counter condition in the Latte attention cache)
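The counter-gated caching pattern described above can be sketched roughly as follows. This is an illustrative reconstruction, not the repository's actual code: the class name, the even-step reuse schedule, and the extrapolation formula are assumptions, and the threshold (e.g. 18 for CogVideoX, 16 for Latte) and ratio are exactly the model-specific knobs the question is asking about.

```python
import torch


class CachedAttention:
    """Illustrative counter-gated attention cache: recompute attention for
    the first `threshold` steps, then on alternating steps reuse the cached
    outputs, extrapolated by `ratio` from the last two fresh results."""

    def __init__(self, attn_fn, threshold=18, ratio=1.25):
        self.attn_fn = attn_fn      # the real attention computation
        self.threshold = threshold  # step count after which caching kicks in
        self.ratio = ratio          # weight blending the last two fresh outputs
        self.counter = 0
        self.cache = None           # (second-to-last, last) computed outputs

    def __call__(self, x):
        self.counter += 1
        if (self.counter >= self.threshold
                and self.counter % 2 == 0
                and self.cache is not None):
            prev2, prev1 = self.cache
            # Reuse the cached trend instead of recomputing attention.
            return prev1 + (prev1 - prev2) * (self.ratio - 1.0)
        out = self.attn_fn(x)
        # Keep the two most recent fresh outputs for extrapolation.
        self.cache = (out, out) if self.cache is None else (self.cache[1], out)
        return out
```

With this structure, tuning per model amounts to choosing `threshold` (how many early denoising steps must run at full quality before reuse is safe) and `ratio` (how aggressively the cached outputs are extrapolated), which matches the per-model differences observed between the CogVideoX and Latte scripts.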
