System Info
Ubuntu 22.04.5 LTS
CPU: 12th Gen Intel Core i7-12700KF
Reproduction
I followed the installation instructions step by step.
Then I ran the following command:
python demo_v2.py --cfg-path eval_configs/minigptv2_eval.yaml --gpu-id 0
Expected behavior
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
Initializing Chat
Traceback (most recent call last):
.....
return self.LoadFromFile(model_file)
File "/home/smusleh/anaconda3/envs/miniGPT-Med/lib/python3.9/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: could not parse ModelProto from /home/smusleh/miniGPT-Med/llama-2-7b-chat-hf/tokenizer.model
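A frequent cause of this "could not parse ModelProto" error is that `tokenizer.model` is not the real binary SentencePiece model but a small text stub, e.g. a git-lfs pointer file left behind when the llama-2-7b-chat-hf repo was cloned without git-lfs installed. A minimal sketch to check for that, assuming the standard git-lfs pointer layout (the stub path below is made up for the demo; in the real case you would pass the actual `tokenizer.model` path):

```python
from pathlib import Path

def looks_like_lfs_pointer(path: str) -> bool:
    """True if the file starts with the git-lfs pointer header
    instead of binary SentencePiece ModelProto data."""
    head = Path(path).read_bytes()[:12]
    return head.startswith(b"version http")

# Demo with a fake pointer stub; replace with your real
# .../llama-2-7b-chat-hf/tokenizer.model to diagnose the error.
stub = Path("tokenizer.model.stub")
stub.write_text(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:0000000000000000000000000000000000000000000000000000000000000000\n"
    "size 499723\n"
)
is_pointer = looks_like_lfs_pointer(str(stub))
print(is_pointer)  # True -> the file is a pointer stub, not a model
stub.unlink()
```

If the check returns True for your `tokenizer.model`, re-fetching the real file (for example with `git lfs pull` inside the model directory) usually fixes the parse error.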
I encountered a similar issue. The material I found suggests that recent updates to peft and bitsandbytes may be the cause. However, even after updating peft and bitsandbytes, I still get the following error:
File "/home/czb/miniconda3/envs/minigpt/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 22, in <module>
raise RuntimeError('''
RuntimeError:
CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment!
If you cannot find any issues and suspect a bug, please open an issue with details about your environment:
https://github.com/TimDettmers/bitsandbytes/issues
Have you been able to resolve the issue?
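One quick check for this "CUDA Setup failed despite GPU being available" error is whether the CUDA runtime library is visible to the dynamic loader at all, since bitsandbytes needs to locate it at import time. A minimal sketch (this only approximates part of what the setup code probes, not its actual logic):

```python
import ctypes.util

# bitsandbytes looks for the CUDA runtime when it is imported; if the
# loader cannot find libcudart, its CUDA setup fails even though the
# GPU itself is present and working.
cudart = ctypes.util.find_library("cudart")
print("libcudart visible to loader:", cudart)
# None usually means LD_LIBRARY_PATH does not include your CUDA
# toolkit's lib64 directory (e.g. /usr/local/cuda/lib64).
```

If this prints None, exporting the CUDA lib directory into `LD_LIBRARY_PATH` before launching the demo is worth trying.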