Fixed in #6637
I am using Llama-3.2-11B-Vision-Instruct with the identity dataset. Right at the start of training, it raises `ValueError: Mllama only supports one image per sample`. The same error occurs when I use Llama-3.2-11B-Vision-Instruct directly for chat; it only works when every message is accompanied by an image. Why is this happening?
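A hedged sketch of a pre-flight check (not part of any library; the sharegpt-style sample layout with a `messages` list, `<image>` placeholders, and an `images` path list is an assumption about the dataset format): Mllama-based models such as Llama-3.2-11B-Vision-Instruct accept at most one image per sample, so scanning the dataset for offending entries before training can surface the source of this `ValueError`.

```python
# Hypothetical validator: flag samples that would trip Mllama's
# one-image-per-sample limit. The sample schema below (a "messages" list
# plus an "images" path list, with "<image>" placeholders in contents)
# is an assumed dataset layout, not a confirmed API.

def find_multi_image_samples(samples):
    """Return indices of samples containing more than one image."""
    bad = []
    for i, sample in enumerate(samples):
        # Count images both ways: explicit paths and inline placeholders.
        n_paths = len(sample.get("images", []))
        n_tokens = sum(msg.get("content", "").count("<image>")
                       for msg in sample.get("messages", []))
        if max(n_paths, n_tokens) > 1:
            bad.append(i)
    return bad

samples = [
    {"messages": [{"role": "user", "content": "<image>Who are you?"}],
     "images": ["img_0.jpg"]},                    # OK: exactly one image
    {"messages": [{"role": "user", "content": "<image><image>Compare."}],
     "images": ["img_1.jpg", "img_2.jpg"]},       # would raise the ValueError
]
print(find_multi_image_samples(samples))  # → [1]
```

Running such a check before launching training makes it easy to see whether the dataset, rather than the model config, is what violates the single-image constraint.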