
Bugfix/inv freq #34525

Open · wants to merge 3 commits into base: main
Conversation

@cyr0930 cyr0930 commented Oct 31, 2024

What does this PR do?

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@Rocketknight1 (Member)

Hi, I don't see the bug in the original code! self.inv_freq.to(x.device) and inv_freq = self.inv_freq.to(x.device) are both equivalent - in either case, the original tensor is moved.

@vasqu (Contributor) commented Oct 31, 2024

@Rocketknight1 The two are not equivalent, precisely because of the missing assignment: `.to()` is not in-place on a tensor. You can verify this with the following script:

import torch  # tested with v2.3.1

x = torch.randn(size=(2, 4, 8)).to(device='cpu')
print(x.device)  # cpu

x.to(device='cuda')  # returns a new tensor; the result is discarded
print(x.device)  # cpu

Hence, you need an assignment, be it to a new variable or directly to self.inv_freq.
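The same non-mutating behaviour can be observed without a GPU by changing dtype instead of device. A minimal sketch (illustrative, not from the PR):

```python
import torch

x = torch.zeros(3)       # default dtype: float32
y = x.to(torch.float64)  # .to() returns a NEW tensor...
print(x.dtype)           # ...so the original is untouched: torch.float32
print(y.dtype)           # torch.float64

# binding the result back is what actually changes x
x = x.to(torch.float64)
print(x.dtype)           # torch.float64
```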

Edit: To clarify further, the in-place device movement only happens when we call .to() on an nn.Module, for example, not on a tensor / parameter. You can also verify this with the following script:

import torch


class TestModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        inv_freq = torch.tensor(1.0)
        self.register_buffer("inv_freq", tensor=inv_freq, persistent=False)

    def forward(self, x):
        self.inv_freq.to('cuda')  # no-op: the returned tensor is discarded
        print(self.inv_freq.device)


model = TestModel()

# you will see 2x cpu
print(model.inv_freq.device)
model(torch.zeros(1))

# in-place movement: .to() on an nn.Module moves its buffers
model.to('cuda')

# you will see 2x cuda:x
print(model.inv_freq.device)
model(torch.zeros(1))
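Applied to the buffer in question, the fix amounts to binding the result of `.to()` inside `forward`. A minimal sketch under illustrative names (this is not the actual transformers rotary-embedding code):

```python
import torch

class RotaryEmbeddingSketch(torch.nn.Module):
    """Hypothetical rotary-embedding-style module, for illustration only."""

    def __init__(self, dim=8):
        super().__init__()
        inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

    def forward(self, x):
        # The fix: bind the result of .to(). Calling it without an
        # assignment would leave inv_freq on its original device.
        inv_freq = self.inv_freq.to(x.device)
        t = torch.arange(x.shape[1], device=x.device).float()
        freqs = torch.outer(t, inv_freq)   # (seq_len, dim // 2)
        return torch.cat((freqs, freqs), dim=-1)

emb = RotaryEmbeddingSketch()(torch.randn(2, 4, 8))
print(emb.shape)  # torch.Size([4, 8])
```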

@Rocketknight1 (Member) left a comment

You're totally right, my bad! That's an embarrassing mistake to make, and my only defence is that I had a lot of notifications that day. The new code is absolutely more correct!

cc @LysandreJik for core maintainer review

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@vasqu (Contributor) commented Nov 1, 2024

No worries! Happens to all of us :) And tbh, it is some niche behaviour in how torch handles device movement.

@LysandreJik (Member)

I think @ArthurZucker would like to review this.

@ArthurZucker (Collaborator) left a comment

Hey! This was introduced by #30775, but TBH I am not convinced that we even need to move self.inv_freq to the device. See the Llama model: it does not do this, and there are no reports of failures in that case!
