
SR yml is missing? #3

Open
liulanyang-alpha opened this issue Nov 13, 2024 · 7 comments

Comments

@liulanyang-alpha

SR yml is missing?

@Amirhosein-gh98
Collaborator

The SR option file has been added now.

@njzxj

njzxj commented Nov 14, 2024

latent_attn_type1 : "CTS"
latent_attn_type2 : "Channel"
latent_attn_type3 : "CTS"
latent_ffw_type : "GFFW"

In turtle_arch.py, CTS is not defined.

@Amirhosein-gh98
Collaborator

These were our old namings. I fixed the issue by renaming the modules in the option file.

@njzxj

njzxj commented Nov 20, 2024

When I set the following parameters:

config = "Turtle/options/Turtle_SR_MVSR.yml"
model_path = "models/SuperResolution.pth"

I encounter the following errors:

Error(s) in loading state_dict for Turtle:
    Unexpected key(s) in state_dict: "decoder_level3.transformer_blocks.9.attn.spatial_aligner.k2.weight", "decoder_level3.transformer_blocks.9.attn.spatial_aligner.k2_dwconv.weight", "decoder_level3.transformer_blocks.9.attn.spatial_aligner.q2.weight", "decoder_level3.transformer_blocks.9.attn.spatial_aligner.q2_dwconv.weight", "decoder_level2.transformer_blocks.5.attn.spatial_aligner.k2.weight", "decoder_level2.transformer_blocks.5.attn.spatial_aligner.k2_dwconv.weight", "decoder_level2.transformer_blocks.5.attn.spatial_aligner.q2.weight", "decoder_level2.transformer_blocks.5.attn.spatial_aligner.q2_dwconv.weight", "decoder_level1.transformer_blocks.1.attn.spatial_aligner.k2.weight", "decoder_level1.transformer_blocks.1.attn.spatial_aligner.k2_dwconv.weight", "decoder_level1.transformer_blocks.1.attn.spatial_aligner.q2.weight", "decoder_level1.transformer_blocks.1.attn.spatial_aligner.q2_dwconv.weight". 
Size mismatch for decoder_level3.transformer_blocks.9.attn.spatial_aligner.temperature: copying a parameter with shape torch.Size([1, 1, 1]) from checkpoint, while the shape in the current model is torch.Size([4, 1, 1]).
Size mismatch for decoder_level2.transformer_blocks.5.attn.spatial_aligner.temperature: copying a parameter with shape torch.Size([1, 1, 1]) from checkpoint, while the shape in the current model is torch.Size([2, 1, 1]).

The error occurs at line 121 in /root/code/Turtle/basicsr/inference_no_ground_truth.py, where model.load_state_dict(torch.load(path)['params']) is called.

Is there an issue with the configuration file or something else?
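Errors like the one above mean the instantiated architecture does not match the checkpoint: extra keys in the checkpoint and parameters whose shapes disagree. A quick way to see exactly what disagrees, before calling `load_state_dict`, is to diff the two key sets and compare shapes. The sketch below is illustrative only; the parameter names and shapes are stand-ins modeled on the error message, not the real Turtle weights (in practice the dict values would be tensors and you would compare `tensor.shape`).

```python
# Hypothetical sketch: diagnosing a load_state_dict failure by diffing
# checkpoint keys/shapes against the model's expected keys/shapes.
# Shapes are stored as plain tuples for illustration; with PyTorch you
# would use {k: v.shape for k, v in state_dict.items()} on both sides.

def diff_state_dicts(checkpoint, model):
    """Report unexpected, missing, and shape-mismatched parameter names."""
    ckpt_keys, model_keys = set(checkpoint), set(model)
    unexpected = sorted(ckpt_keys - model_keys)   # only in the checkpoint
    missing = sorted(model_keys - ckpt_keys)      # only in the model
    mismatched = sorted(
        k for k in ckpt_keys & model_keys if checkpoint[k] != model[k]
    )
    return unexpected, missing, mismatched

# Illustrative stand-ins mirroring the error above.
checkpoint = {
    "attn.spatial_aligner.k2.weight": (48, 48),
    "attn.spatial_aligner.temperature": (1, 1, 1),
}
model = {
    "attn.spatial_aligner.temperature": (4, 1, 1),
}

unexpected, missing, mismatched = diff_state_dicts(checkpoint, model)
print("unexpected:", unexpected)
print("missing:", missing)
print("mismatched:", mismatched)
```

If `unexpected` or `mismatched` is non-empty, the model was built from the wrong architecture variant for that checkpoint; fixing the config (rather than loading with `strict=False`) is the right remedy.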

@zelenooki87

Having exactly the same issue!

@zelenooki87

Solved: changed model_type="t0" to model_type="t1".

@Amirhosein-gh98
Collaborator

The model type should be changed to "SR" for Super Resolution. Could you elaborate on what you did and for which task?
An upsampling step was also missing for the SR case in inference_no_ground_truth.py; that is fixed now.
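The missing upsampling step for the SR path can be illustrated with a minimal nearest-neighbor upsample on a plain 2-D array. This is an assumption for illustration only; the repository's actual SR path presumably uses something like `torch.nn.functional.interpolate` or a learned upsampler, not this function.

```python
# Minimal, hypothetical sketch of an upsampling step like the one the
# maintainer describes as missing for SR inference. Pure Python,
# nearest-neighbor, integer scale factor; illustrative only.

def upsample_nearest(img, scale):
    """Upscale a 2-D list-of-lists image by an integer factor,
    replicating each pixel into a scale x scale block."""
    out = []
    for row in img:
        stretched = [px for px in row for _ in range(scale)]
        for _ in range(scale):
            out.append(list(stretched))  # copy so rows are independent
    return out

img = [[1, 2],
       [3, 4]]
up = upsample_nearest(img, 2)
print(up)  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Without such a step, an SR model's low-resolution input and high-resolution output never get reconciled in the inference script, which is why the fix was needed.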
