How to download the ready LLaVA-Lightening-7B weights #97
Hi @SIGMIND, no conversion is required; you can directly clone it from Hugging Face as below.
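The original comment's code block did not survive extraction. A minimal sketch of the clone step, assuming Git LFS is installed and using the repository name given later in this thread:

```shell
# Git LFS is required because the checkpoint shards are large binary files
git lfs install

# Clone the ready LLaVA-Lightening-7B weights from Hugging Face
git clone https://huggingface.co/mmaaz60/LLaVA-Lightening-7B-v1-1
```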
Then, download the projection weights as below.
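The download command is also missing from the extracted comment. For illustration, a sketch that fetches the projection weights from the project's Hugging Face page; the repository and file name (`MBZUAI/Video-ChatGPT-7B`, `video_chatgpt-7B.bin`) are assumptions based on the Video-ChatGPT release and may differ:

```shell
# Download the video-language projection weights
# (repo and file name assumed from the Video-ChatGPT release page)
wget https://huggingface.co/MBZUAI/Video-ChatGPT-7B/resolve/main/video_chatgpt-7B.bin
```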
Finally, you should be able to run the demo as below.
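The launch command was likewise lost in extraction. A sketch, where the demo script path and flag names are assumptions based on the Video-ChatGPT repository layout, and the two paths point at the weights prepared in the previous steps:

```shell
# Launch the offline demo, pointing at the cloned LLM weights and the
# downloaded projection weights (script path and flags are assumptions)
python video_chatgpt/demo/video_demo.py \
    --model-name ./LLaVA-Lightening-7B-v1-1 \
    --projection_path ./video_chatgpt-7B.bin
```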
I hope this helps. Let me know if you have any questions. Thanks.
Thanks, the steps helped me move forward with the models. However, is there a specific GPU requirement for running this locally? I tried to run it on an RTX 2060 but got the error below:
It seems the last sentence of your logs indicates a driver issue.
Understood, and that is resolved. But how much GPU memory is required to run it offline? I have a 12 GB RTX 2060 and I'm getting this error:
I've run the model locally on an RTX 2070 SUPER successfully, and I've also run it in the cloud with no issues. Your problem seems related to your setup. Try closing every app on your system and then run the model. In my case, during my initial local attempt, the browser was reserving GPU memory and caused the errors you mentioned.
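To find which processes are holding GPU memory before launching the demo (as with the browser mentioned above), one quick check is the stock NVIDIA tool; this is a general diagnostic, not part of the project:

```shell
# List processes currently using GPU memory, so stray apps can be closed
# before loading the model
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
```

As a rough guide, a 7B-parameter model in fp16 needs about 13 GiB for the weights alone (7e9 parameters x 2 bytes), before activations and CUDA context, so 12 GB is already tight.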
As mentioned in the offline demo README:
Alternatively you can download the ready LLaVA-Lightening-7B weights from mmaaz60/LLaVA-Lightening-7B-v1-1.
The Hugging Face repo has files named pytorch_model-00001-of-00002.bin and pytorch_model-00002-of-00002.bin.
Should I convert the model to GGUF format to use it with the offline demo?