I'm loading the model as specified in the demo script and running inference almost identically:
res = model.chat(tokenizer, str(image_path), ocr_type='format')
On some page images, the only output I get back from the model is:
<|im_end|>};
These images tend to be pages where the entire page is a table and each cell can contain a fair bit of text.
What causes this?
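While debugging this, it can help to detect these degenerate responses automatically so the affected pages can be logged or retried. A minimal sketch — the `looks_degenerate` helper is hypothetical, not part of the model's API; it only checks for the empty-output pattern shown above:

```python
def looks_degenerate(text: str) -> bool:
    """Return True when the OCR output contains no real text,
    e.g. only the end-of-turn token plus stray punctuation."""
    # Remove the special end token, then strip whitespace and
    # the leftover brace/semicolon characters seen in failed runs.
    cleaned = text.replace("<|im_end|>", "").strip(" \n\t};")
    return len(cleaned) == 0

# Example: filter results from a batch of pages.
# res = model.chat(tokenizer, str(image_path), ocr_type='format')
# if looks_degenerate(res):
#     failed_pages.append(image_path)
```

This doesn't explain the root cause, but it makes it easy to collect the failing pages (all-table, text-dense layouts in my case) for a reproduction set.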
Got the same issue. Did you solve it?
I have not, but the model is currently being added to transformers, and I'm going to try again once it's available through those APIs.