Running the command below produces the following output:

python -m bitsandbytes
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
CUDA specs: None
Torch says CUDA is not available. Possible reasons:
CUDA driver not installed
CUDA not installed
You have multiple conflicting CUDA libraries
The directory listed in your path is found to be non-existent: /home/XXX/anaconda3/envs/salmon/etc/xml/catalog file
The directory listed in your path is found to be non-existent: /etc/xml/catalog
Found duplicate CUDA runtime files (see below).
We select the PyTorch default CUDA runtime, which is None,
but this might mismatch with the CUDA version that is needed for bitsandbytes.
To override this behavior set the BNB_CUDA_VERSION=<version string, e.g. 122> environmental variable.
For example, if you want to use the CUDA version 122,
BNB_CUDA_VERSION=122 python ...
OR set the environmental variable in your .bashrc:
export BNB_CUDA_VERSION=122
In the case of a manual override, make sure you set LD_LIBRARY_PATH, e.g.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2,
Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so.12
Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so
Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so.12.3.101
Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so.12
Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so
Found CUDA runtime at: /usr/local/cuda-12.3/lib64/libcudart.so.12.3.101
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and CUDA is callable...
Traceback (most recent call last):
File "/home/XXX/anaconda3/envs/salmon/lib/python3.9/site-packages/bitsandbytes/diagnostics/main.py", line 66, in main
sanity_check()
File "/home/XXX/anaconda3/envs/salmon/lib/python3.9/site-packages/bitsandbytes/diagnostics/main.py", line 33, in sanity_check
p = torch.nn.Parameter(torch.rand(10, 10).cuda())
File "/home/XXX/anaconda3/envs/salmon/lib/python3.9/site-packages/torch/cuda/init.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Above we output some debug information.
Please provide this info when creating an issue via https://github.com/TimDettmers/bitsandbytes/issues/new/choose
WARNING: Please be sure to sanitize sensitive info from the output before posting it.
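Since the traceback ends in "Torch not compiled with CUDA enabled", the installed PyTorch wheel is very likely a CPU-only build. A quick way to confirm this (a hypothetical helper, not part of the original report) is to check `torch.version.cuda`, which is `None` on CPU-only wheels:

```python
def torch_cuda_build_info():
    """Return (torch_version, cuda_build_version), or None if torch is absent.

    A cuda_build_version of None means the installed wheel is CPU-only,
    in which case bitsandbytes cannot use the GPU no matter which CUDA
    toolkits are present on the system.
    """
    try:
        import torch
    except ImportError:
        return None  # torch is not installed in this environment
    return torch.__version__, torch.version.cuda

info = torch_cuda_build_info()
if info is None:
    print("torch is not installed")
else:
    version, cuda = info
    print(f"torch {version}, built with CUDA: {cuda}")
```

If this prints `built with CUDA: None`, reinstalling a CUDA-enabled PyTorch build for this environment should resolve the assertion error before any bitsandbytes settings matter.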
Expected behavior
bitsandbytes should detect the GPU and run with CUDA support.
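Separately, the diagnostic output above describes a BNB_CUDA_VERSION override. Assuming the CUDA 12.3 runtime found under /usr/local/cuda-12.3 is the one to target (and noting the override only helps once a CUDA-enabled PyTorch build is installed), the suggested steps combine as:

```shell
# Override sketch based on the instructions in the diagnostic output above.
# "123" corresponds to the CUDA 12.3 runtime found at /usr/local/cuda-12.3;
# adjust both values if your system CUDA version differs.
export BNB_CUDA_VERSION=123
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-12.3/lib64
```

These can be placed in `.bashrc`, as the output suggests, to make the override persistent.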
System Info
Ubuntu 22.04.1 (Linux), NVIDIA A100 80GB
Reproduction

python -m bitsandbytes

(The full diagnostic output is shown above.)