CUDA setup failed on linux CUDA12.3 with version 0.45.0 #1440

Open
viki760 opened this issue Dec 9, 2024 · 0 comments
viki760 commented Dec 9, 2024

System Info

Ubuntu 22.04.1 (Linux), NVIDIA A100 80 GB

Reproduction

Running `python -m bitsandbytes` produces the output below:
python -m bitsandbytes
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
CUDA specs: None
Torch says CUDA is not available. Possible reasons:

1. CUDA driver not installed
2. CUDA not installed
3. You have multiple conflicting CUDA libraries

The directory listed in your path is found to be non-existent: /home/XXX/anaconda3/envs/salmon/etc/xml/catalog file
The directory listed in your path is found to be non-existent: /etc/xml/catalog
Found duplicate CUDA runtime files (see below).

We select the PyTorch default CUDA runtime, which is None,
but this might mismatch with the CUDA version that is needed for bitsandbytes.
To override this behavior set the BNB_CUDA_VERSION=<version string, e.g. 122> environmental variable.

For example, if you want to use the CUDA version 122,
BNB_CUDA_VERSION=122 python ...

OR set the environmental variable in your .bashrc:
export BNB_CUDA_VERSION=122

In the case of a manual override, make sure you set LD_LIBRARY_PATH, e.g.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2,

  • Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so.12
  • Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so
  • Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so.12.3.101
  • Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so.12
  • Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so
  • Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so.12.3.101
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and CUDA is callable...
Traceback (most recent call last):
  File "/home/XXX/anaconda3/envs/salmon/lib/python3.9/site-packages/bitsandbytes/diagnostics/main.py", line 66, in main
    sanity_check()
  File "/home/XXX/anaconda3/envs/salmon/lib/python3.9/site-packages/bitsandbytes/diagnostics/main.py", line 33, in sanity_check
    p = torch.nn.Parameter(torch.rand(10, 10).cuda())
  File "/home/XXX/anaconda3/envs/salmon/lib/python3.9/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Above we output some debug information.
Please provide this info when creating an issue via https://github.com/TimDettmers/bitsandbytes/issues/new/choose
WARNING: Please be sure to sanitize sensitive info from the output before posting it.
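The final `AssertionError` suggests the installed PyTorch wheel itself lacks CUDA support, independent of bitsandbytes. A small diagnostic sketch (not part of bitsandbytes; `torch_cuda_status` is a hypothetical helper) to separate the possible causes:

```python
# Hedged diagnostic sketch: distinguish "CPU-only torch build" from
# "CUDA-enabled torch but no usable GPU/driver at runtime".
import importlib.util


def torch_cuda_status() -> str:
    """Return a short description of the local PyTorch CUDA setup."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # imported lazily so the check works even without torch

    if torch.version.cuda is None:
        # A CPU-only wheel bundles no CUDA runtime at all; this matches
        # "Torch not compiled with CUDA enabled" in the traceback above.
        return "torch is a CPU-only build"
    if not torch.cuda.is_available():
        # CUDA-enabled wheel, but no visible device or driver at runtime.
        return f"torch built for CUDA {torch.version.cuda}, but no GPU is available"
    return f"CUDA {torch.version.cuda} available, {torch.cuda.device_count()} device(s)"


if __name__ == "__main__":
    print(torch_cuda_status())
```

If this reports a CPU-only build, no `BNB_CUDA_VERSION` override can fix the error; torch itself must be reinstalled with CUDA support.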

Expected behavior

I expect bitsandbytes to run correctly with CUDA support.
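For reference, the override the log recommends, applied to this machine's CUDA 12.3 install (a hedged sketch: `123` is assumed to be the right version string for CUDA 12.3, and the path is taken from the "Found CUDA runtime" lines above):

```shell
# 1) Point bitsandbytes at the local CUDA 12.3 toolkit, as the log suggests.
export BNB_CUDA_VERSION=123
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda-12.3/lib64"

# 2) If torch is a CPU-only build (as the traceback indicates), the override
#    alone will not help; a CUDA-enabled wheel must be installed instead
#    (commented out here; cu121 is the standard PyTorch index for CUDA 12.x):
# pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu121

echo "BNB_CUDA_VERSION=${BNB_CUDA_VERSION}"
```

After setting these, rerunning `python -m bitsandbytes` should show whether the library picks up the 12.3 runtime.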
