Hey there! We love the approach of using a vision model to generate Markdown, but it isn't foolproof. So we trained a vision LLM with traditional OCR, and now we get more consistent, cleaner output, native support for PDFs and images, and structured responses in JSON or Markdown. It's called JigsawStack vOCR.
If you think it makes sense, I'm happy to create a PR that adds this integration as an option between the default LLM and the vOCR model. Let me know what you think :)
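For discussion, here's a minimal sketch of how the option could be wired in. The `backend` flag and the `run_default_llm`/`run_vocr` helpers are hypothetical placeholders for illustration, not the repo's existing functions or the actual JigsawStack API:

```python
from typing import Literal


def run_default_llm(file_path: str) -> str:
    # Placeholder for the existing vision-LLM extraction path.
    raise NotImplementedError


def run_vocr(file_path: str) -> str:
    # Placeholder for a call to the JigsawStack vOCR endpoint.
    raise NotImplementedError


def extract_markdown(
    file_path: str, backend: Literal["llm", "vocr"] = "llm"
) -> str:
    """Route a document to the default vision LLM or to vOCR."""
    if backend == "vocr":
        return run_vocr(file_path)
    return run_default_llm(file_path)
```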