Bug fixes for llava multimodal #5038
Conversation
Merge dev branch
@szelok could you merge the dev branch and check if the changes in this PR are still necessary?
I just did; the dev branch on its own does not work. After modifying … Note: I didn't touch the …
@oobabooga Fixed by cherry-picking 3af2cfbc3c198ddd6b351c27af868287c5eb354d from this PR. A related error occurs when using --gpu-memory in combination with --load-in-8bit: LlavaLlamaForCausalLM doesn't define from_config. It works fine when not using bitsandbytes or when not specifying --gpu-memory.
Can confirm that what @randoentity said holds true. Applying just the changes to …
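The `from_config` failure described above can be reduced to a small sketch. This is a hypothetical illustration, not the actual text-generation-webui loader code: the assumption is that when --gpu-memory is set, the loader takes a code path that calls `LoaderClass.from_config(...)`, which `AutoModelForCausalLM` provides but a custom multimodal class like `LlavaLlamaForCausalLM` may not, so a guard on the class attribute avoids the crash. All class and function names below are illustrative.

```python
# Hypothetical reduction of the --gpu-memory + --load-in-8bit failure mode.
# AutoModelLike stands in for AutoModelForCausalLM (has from_config);
# LlavaLike stands in for LlavaLlamaForCausalLM (does not define it).

class AutoModelLike:
    @classmethod
    def from_config(cls, config):
        # Real transformers code builds an empty (meta-device) model here.
        return cls()


class LlavaLike:
    # No from_config classmethod: calling LlavaLike.from_config(...)
    # raises AttributeError, mirroring the reported error.
    pass


def can_use_from_config(loader_cls):
    """Guard sketch: only take the from_config path when the loader
    class actually exposes a callable from_config attribute."""
    return callable(getattr(loader_cls, "from_config", None))


def load_model(loader_cls, config, gpu_memory=None):
    """Illustrative dispatch: fall back to a direct-construction path
    when from_config is unavailable, instead of crashing."""
    if gpu_memory is not None and can_use_from_config(loader_cls):
        return loader_cls.from_config(config)
    return loader_cls()  # stand-in for the from_pretrained path
```

With this guard, `load_model(LlavaLike, None, gpu_memory="10GiB")` falls back cleanly rather than raising `AttributeError`; whether a fallback or an added `from_config` classmethod is the right fix for the real class is a design decision outside this sketch.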
Checklist: