size mismatch error #9
Thank you for your interest. We install and download the weights of CogVideoX from here: https://github.com/THUDM/CogVideo/tree/main/sat
Thanks for your reply! I ran the code successfully following your comment. I found that the variable "counter" used in
I tried to test the script scripts/cogvideox/fastercache_sample_cogvideox5b.sh with the model downloaded from https://huggingface.co/THUDM/CogVideoX1.5-5B-SAT/tree/main, but I got the size-mismatch error in the title. The detailed error is as follows:
Missing keys: []
Unexpected keys: []
Restored from /nas/xxx/models/CogVideoX1.5-5B-SAT/vae/3d-vae.pt
[2024-11-19 02:39:24,249] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 10548374243
[2024-11-19 02:39:49,290] [INFO] [RANK 0] global rank 0 is loading checkpoint /nas/xxx/models/CogVideoX1.5-5B-SAT/transformer_t2v/1000/mp_rank_00_model_states.pt
/home/xxx/miniconda3/envs/fastercache/lib/python3.10/site-packages/sat/training/model_io.py:286: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  sd = torch.load(checkpoint_name, map_location='cpu')
[rank0]: Traceback (most recent call last):
[rank0]: File "/vepfs/home/xxx/project/FasterCache/scripts/cogvideox/fastercache_sample_cogvideox.py", line 658, in <module>
[rank0]: sampling_main(args, model_cls=SATVideoDiffusionEngine)
[rank0]: File "/vepfs/home/xxx/project/FasterCache/scripts/cogvideox/fastercache_sample_cogvideox.py", line 522, in sampling_main
[rank0]: load_checkpoint(model, args)
[rank0]: File "/home/xxx/miniconda3/envs/fastercache/lib/python3.10/site-packages/sat/training/model_io.py", line 304, in load_checkpoint
[rank0]: missing_keys, unexpected_keys = module.load_state_dict(sd['module'], strict=False)
[rank0]: File "/home/xxx/miniconda3/envs/fastercache/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2584, in load_state_dict
[rank0]: raise RuntimeError(
[rank0]: RuntimeError: Error(s) in loading state_dict for SATVideoDiffusionEngine:
[rank0]: size mismatch for model.diffusion_model.mixins.patch_embed.proj.weight: copying a param with shape torch.Size([3072, 128]) from checkpoint, the shape in current model is torch.Size([3072, 16, 2, 2]).
[rank0]: size mismatch for model.diffusion_model.mixins.final_layer.linear.weight: copying a param with shape torch.Size([128, 3072]) from checkpoint, the shape in current model is torch.Size([64, 3072]).
[rank0]: size mismatch for model.diffusion_model.mixins.final_layer.linear.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
[rank0]:[W1119 02:44:44.550044996 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
DONE on di-20231208204413-8nvfq
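The three mismatches above all sit in the patch embedding and final layer, where the checkpoint stores a 2-D linear weight but the model expects a 4-D conv weight, which usually indicates the model config does not match the checkpoint (here, CogVideoX1.5-5B-SAT weights loaded by the CogVideoX-5B sampling script). A quick way to see such conflicts before load_state_dict raises is to diff the parameter shapes of the two state_dicts. This is a minimal sketch, not part of FasterCache: the helper name is hypothetical, and the shape dicts below are hand-copied from the traceback (in practice you would build them with `{k: tuple(v.shape) for k, v in state_dict.items()}`).

```python
# Hypothetical helper: report keys whose shapes differ between a checkpoint
# state_dict and a model state_dict, so a size-mismatch error can be
# diagnosed before calling load_state_dict.
def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Return {name: (ckpt_shape, model_shape)} for keys present in both
    dicts whose shapes disagree."""
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes.keys() & model_shapes.keys()
        if ckpt_shapes[name] != model_shapes[name]
    }

# Shapes transcribed from the traceback above (prefix shortened).
ckpt = {
    "mixins.patch_embed.proj.weight": (3072, 128),
    "mixins.final_layer.linear.weight": (128, 3072),
    "mixins.final_layer.linear.bias": (128,),
}
model = {
    "mixins.patch_embed.proj.weight": (3072, 16, 2, 2),
    "mixins.final_layer.linear.weight": (64, 3072),
    "mixins.final_layer.linear.bias": (64,),
}

for name, (c, m) in sorted(find_shape_mismatches(ckpt, model).items()):
    print(f"{name}: checkpoint {c} vs model {m}")
```

If every mismatch involves the same few embedding/output layers, as here, switching to the sampling script and YAML config that match the checkpoint's model version is usually the fix, rather than editing the weights.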