
ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or `pip install bitsandbytes` when in reality it's a torch issue #837

Open
dataf3l opened this issue Oct 22, 2023 · 11 comments
Labels
huggingface-related (A bug that is likely due to the interaction between bnb and HF libs: transformers, accelerate, peft), likely not a BNB issue

Comments

@dataf3l

dataf3l commented Oct 22, 2023

bitsandbytes reports this error:

(venv) ➜  image-captioning-v2 python captionit3.py
True
False
Traceback (most recent call last):
  File "/Users/b/study/ml/image-captioning-v2/captionit3.py", line 14, in <module>
    model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b-coco", device_map='auto', quantization_config=nf4_config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b/study/ml/image-captioning-v2/venv/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2616, in from_pretrained
    raise ImportError(
ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes`

However, the error message is inaccurate, because the real issue is in this function:

def is_bitsandbytes_available():
    if not is_torch_available():
        return False

    # bitsandbytes throws an error if cuda is not available
    # let's avoid that by adding a simple check
    import torch

    return _bitsandbytes_available and torch.cuda.is_available()

If somebody accidentally uninstalls torch, this is what happens, so maybe the error message should be improved.
Emitting a message here telling the user something like "unable to import torch" (or that CUDA is not available) would be useful, who knows.
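
For anyone hitting this, a quick diagnostic along these lines (a hypothetical snippet, not part of transformers or this repo) shows which prerequisite is actually missing before the misleading ImportError is raised:

# Hypothetical diagnostic: check each prerequisite that the transformers check relies on.
import importlib.util

for pkg in ("torch", "accelerate", "bitsandbytes"):
    installed = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if installed else 'NOT installed'}")

try:
    import torch
    # bitsandbytes also requires a CUDA device, so this should print True
    print(f"torch.cuda.is_available(): {torch.cuda.is_available()}")
except ImportError as exc:
    print(f"torch failed to import: {exc}")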

tell your friends! :)

@Its3rr0rsWRLD

Are you on macOS? I had the same issue on it, swapped to Windows (remote SSH) and am now searching for a different issue lol

@oushu1zhangxiangxuan1

I got the same error.

@SoyGema

SoyGema commented Oct 30, 2023

Are you on MacOS? Had the same issue on it, swapped to windows (remote ssh) and searching for a different issue lol

Yes, same issue on macOS.

@effortprogrammer

Same issue... are there any updates?

@pechaut78

same issue

@RamsesCamas

Same issue

@pechaut78

pechaut78 commented Dec 7, 2023

Well, as said above, the error is not that the lib is improperly installed: the error message is misleading.
The issue is that bitsandbytes is not implemented on Apple Silicon (MPS).
So bitsandbytes cannot be used there and the code should be adapted (see the sketch below)! sigh

Please see:

#485
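
A minimal sketch of what "adapting the code" could look like, purely as an assumed workaround (the model name mirrors the traceback above, and device_map="auto" still needs accelerate installed): only request bitsandbytes quantization when a CUDA device is present, and otherwise load the model unquantized.

import torch
from transformers import Blip2ForConditionalGeneration, BitsAndBytesConfig

if torch.cuda.is_available():
    # bitsandbytes 8-bit/4-bit loading only works with a CUDA device
    quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
    model = Blip2ForConditionalGeneration.from_pretrained(
        "Salesforce/blip2-opt-6.7b-coco",
        device_map="auto",
        quantization_config=quant_config,
    )
else:
    # On Apple Silicon (MPS) or CPU, skip quantization entirely
    model = Blip2ForConditionalGeneration.from_pretrained(
        "Salesforce/blip2-opt-6.7b-coco",
        torch_dtype=torch.float16,
        device_map="auto",
    )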


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

@github-actions github-actions bot closed this as completed Jan 8, 2024
@TimDettmers
Collaborator

This is a great catch. Can you please submit this to the transformers github repo? This is only indirectly a bitsandbytes issue.

@TimDettmers TimDettmers reopened this Jan 8, 2024
@Titus-von-Koeller Titus-von-Koeller added the "likely not a BNB issue" and "huggingface-related" labels Jan 26, 2024
@Titus-von-Koeller
Collaborator

To me it's not entirely clear where Mac comes into play and how we would best warn that Mac is not supported.

@pechaut78 how did you deduce that it must be Mac related? And why does the code that throws the traceback get triggered?

@younesbelkada
Collaborator

Hi - the core issue is that currently in transformers, is_bitsandbytes_available() silently returns False if you don't have a CUDA device, i.e. if torch.cuda.is_available() is False: https://github.com/huggingface/transformers/blob/cd2eb8cb2b40482ae432d97e65c5e2fa952a4f8f/src/transformers/utils/import_utils.py#L623
This is not ideal, as we should display a more informative warning instead. @Titus-von-Koeller, would you be happy to open a quick PR on transformers that adds a logger.info when torch.cuda.is_available() is False, to clearly state to users that is_bitsandbytes_available() will silently be set to False? Otherwise I'm happy to do it as well.
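
For what it's worth, a rough sketch of the kind of change being proposed (hypothetical; the actual transformers patch may well look different):

import importlib.util
import logging

logger = logging.getLogger(__name__)

_bitsandbytes_available = importlib.util.find_spec("bitsandbytes") is not None

def is_bitsandbytes_available():
    if importlib.util.find_spec("torch") is None:
        logger.info("torch is not installed, so is_bitsandbytes_available() returns False.")
        return False

    # bitsandbytes throws an error if cuda is not available,
    # so keep the existing check but explain it instead of failing silently
    import torch

    if not torch.cuda.is_available():
        logger.info(
            "torch.cuda.is_available() is False, so is_bitsandbytes_available() "
            "returns False and 8-bit/4-bit loading is disabled."
        )
        return False

    return _bitsandbytes_available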
