Is it possible to run Llama 3 with koboldcpp? When trying to run Meta-Llama-3-8B-Instruct-bf16, the program reports an error.
RX6600, 32 GB RAM

Replies: 1 comment

It may have to do with that version being quantized to bf16; try one of these GGUF quants instead: https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/tree/main
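If it helps, here's a rough sketch of fetching one of those quants and launching koboldcpp with it. The exact GGUF filename in the QuantFactory repo, the choice of quant, and the koboldcpp flags are assumptions that may differ by repo layout and koboldcpp version, so adjust to what you actually have:

```python
# Sketch: download a Q4_K_M quant of Llama 3 8B Instruct and start koboldcpp with it.
# Assumes `huggingface_hub` is installed and koboldcpp.py is in the current directory;
# the filename and flags below are guesses and may need adjusting for your setup.
import subprocess
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="QuantFactory/Meta-Llama-3-8B-Instruct-GGUF",
    filename="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # pick a quant that fits 8 GB VRAM
)

# --useclblast <platform> <device> targets AMD GPUs like the RX 6600 via OpenCL;
# --gpulayers controls how many layers are offloaded to VRAM.
subprocess.run([
    "python", "koboldcpp.py",
    "--model", model_path,
    "--useclblast", "0", "0",
    "--gpulayers", "33",
])
```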