`test_autocast_torch_bf16` fails if PyTorch is compiled with CUDA support (#6085)
The main problem here is that
@yeounoh FYI
This was referenced Dec 15, 2023.

pytorchmergebot pushed a commit to pytorch/pytorch that referenced this issue on Jan 1, 2024:
…#115924) Fix: #115900, pytorch/xla#6085

This PR adds a last resort for testing for BF16 support on CUDA. This is necessary on GPUs such as the RTX 2060, where `torch.cuda.is_bf16_supported()` returns False, but we can successfully create a BF16 tensor on CUDA.

Before this PR:

```python
>>> torch.cuda.is_bf16_supported()
False
>>> torch.tensor([1.], dtype=torch.bfloat16, device="cuda")
tensor([...], device='cuda:0', dtype=torch.bfloat16)
```

After this PR:

```python
>>> torch.cuda.is_bf16_supported()
True
>>> torch.tensor([1.], dtype=torch.bfloat16, device="cuda")
tensor([...], device='cuda:0', dtype=torch.bfloat16)
```

Pull Request resolved: #115924
Approved by: https://github.com/jansel
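The "last resort" approach described above can be sketched as follows. This is a hedged illustration, not the actual PyTorch implementation: the helper name `bf16_supported_last_resort` is hypothetical, and the idea is simply that when the regular capability check reports False, we try to allocate a bfloat16 tensor on the device and treat success as evidence of support.

```python
# Hypothetical sketch of a "last resort" BF16-support probe.
# Not the actual PyTorch patch; the helper name is made up for illustration.
import torch


def bf16_supported_last_resort() -> bool:
    """Return True if BF16 tensors can be created on the current CUDA device."""
    if not torch.cuda.is_available():
        return False
    # Fast path: trust the regular capability check when it says yes.
    if torch.cuda.is_bf16_supported():
        return True
    # Last resort: some GPUs (e.g. RTX 2060) report no BF16 support,
    # yet creating a BF16 tensor on them succeeds anyway.
    try:
        torch.tensor([1.0], dtype=torch.bfloat16, device="cuda")
        return True
    except RuntimeError:
        return False
```

On a machine without CUDA this simply returns False; on an affected GPU the tensor-creation probe upgrades the answer to True.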
🐛 Bug

Running the `test_autocast_torch_bf16` test produces the following error if PyTorch was compiled with CUDA support:

Environment
Additional Context
Blocking: #6070