batching giving weird outputs #41

Open
mukundkhanna123 opened this issue Apr 10, 2024 · 0 comments

@mukundkhanna123

Hi, I noticed that when doing batch inference with a static prompt like 'Describe the image', the model gives a wrong output such as 'in detail', as if it were just doing sentence completion. Whereas if I try a more descriptive prompt, where I tell MiniGemini that it is a 'prompt generator', it still goes into sentence-completion mode but gives me an okay-ish response.

However, I also have the original image descriptions, so I tried adding those to the prompt and then asking the model to describe the image given that information. This works perfectly fine when I use just one image at a time.
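For concreteness, the combined prompt looks roughly like this (the wording and variable names are illustrative, not my exact template):

```python
# Illustrative only -- not my exact template. The stored description is
# interpolated before the instruction, and '<image>' marks the image slot.
description = "A golden retriever catching a frisbee in a park."
prompt = (
    "<image>\n"
    f"Here is some information about the image: {description}\n"
    "Given this information, describe the image in detail."
)
```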

But when I do batch processing, I get complete garbage as output. To batch, I pad the prompts to the same length by changing line 44 in MiniGemini/minigemini/mm_utils.py to:

```python
tokenizer(chunk, padding='max_length', max_length=max_len).input_ids for chunk in prompt.split('<image>')
```
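For context, line 44 sits inside the helper that splits the prompt on '<image>' and splices an image-token placeholder between the tokenized chunks (MiniGemini appears to follow the LLaVA layout here). Below is a simplified sketch of what my modified version effectively does; the function name and placeholder index are illustrative, and I've omitted the BOS handling:

```python
import torch

IMAGE_TOKEN_INDEX = -200  # placeholder id spliced in where '<image>' appears

def tokenizer_image_token_padded(prompt, tokenizer, max_len):
    # My change: pad every text chunk to max_len instead of tokenizing as-is.
    prompt_chunks = [
        tokenizer(chunk, padding='max_length', max_length=max_len).input_ids
        for chunk in prompt.split('<image>')
    ]
    # Splice the image placeholder between chunks, as the original code does.
    # Note that with max_length padding, pad tokens now sit inside the
    # sequence, directly next to the image placeholder.
    input_ids = []
    for i, chunk in enumerate(prompt_chunks):
        if i > 0:
            input_ids.append(IMAGE_TOKEN_INDEX)
        input_ids.extend(chunk)
    return torch.tensor(input_ids, dtype=torch.long)

# Batching: stack the now equal-length sequences and generate
# (simplified; my real code also prepares the image tensors).
# batch = torch.stack([tokenizer_image_token_padded(p, tokenizer, 256) for p in prompts])
# output_ids = model.generate(batch, images=images, max_new_tokens=256)
```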

Could you give me any advice on how to do this effectively?
