
RAG Integration Not Working Correctly with Custom AI Models from Ollama #203

Closed
TarjinderSingh opened this issue Jan 9, 2025 · 1 comment

@TarjinderSingh

There appears to be a bug in the current implementation of Retrieval-Augmented Generation (RAG) when using custom AI models, such as those served via Ollama. The issue occurs under the following conditions:

  1. Specific Note Interaction: When interacting with a specific note, the system functions as expected.
  2. Vault-Wide Search: When searching the entire vault, the system identifies the top 10 relevant chunks to include in the prompt but fails to answer the query accurately. Instead, it generates an unrelated response, often focused on markdown content. A screenshot highlighting the problem is attached below.

I am not sure this is the intended behavior of the application.

[Screenshot: Obsidian v1.8.1, vault-wide chat returning an unrelated, markdown-focused response]
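
For reference, here is a minimal sketch (TypeScript, against Ollama's `/api/chat` endpoint) of how I assume the vault-wide prompt gets assembled; `retrieveTopChunks` is a hypothetical stand-in for the plugin's actual retrieval step, which I have not inspected:

```typescript
interface Chunk {
  path: string;    // note file the chunk came from
  content: string; // raw markdown text of the chunk
}

// Hypothetical stand-in for the plugin's retrieval step (not its real API);
// in practice this would be an embedding similarity search over the vault.
async function retrieveTopChunks(query: string, k: number): Promise<Chunk[]> {
  return []; // stubbed for the sketch
}

async function askVault(query: string, model = "llama3.1"): Promise<string> {
  const chunks = await retrieveTopChunks(query, 10);

  // All ten chunks are concatenated into one system prompt. With a small
  // local model, this wall of markdown can dominate the instruction and
  // produce answers about the notes' formatting instead of the question.
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.path})\n${c.content}`)
    .join("\n\n");

  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      stream: false,
      messages: [
        { role: "system", content: `Answer using only this context:\n${context}` },
        { role: "user", content: query },
      ],
    }),
  });

  // Non-streaming /api/chat responses have shape { message: { content }, ... }
  const data = await res.json();
  return data.message.content;
}
```

If something like this is happening, the markdown-heavy context crowding out the question would match the unrelated answers I'm seeing.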

@glowingjade (Owner)

Thank you for reporting this issue. Based on your description, this appears to be related to known limitations of smaller models in handling complex contexts and longer prompts.
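
As a rough illustration of the limitation (not the plugin's actual code): small local models typically run with a limited context window (Ollama has historically defaulted `num_ctx` to 2048 tokens), so ten retrieved chunks plus instructions can overflow it and crowd out the question itself. One common mitigation is trimming lower-ranked chunks to a token budget; here is a hedged sketch using a rough ~4-characters-per-token heuristic:

```typescript
// Sketch only: cap retrieved context so it fits a small model's window,
// dropping lower-ranked chunks first. The 4-chars-per-token ratio is a
// common rough heuristic, not an exact token count.
function fitToBudget(chunks: string[], maxTokens: number): string[] {
  const kept: string[] = [];
  let used = 0;
  for (const chunk of chunks) {
    const estTokens = Math.ceil(chunk.length / 4);
    if (used + estTokens > maxTokens) break; // budget exhausted
    kept.push(chunk);
    used += estTokens;
  }
  return kept;
}

// Reserve ~1500 tokens for context, leaving room for the question and answer.
const exampleChunks = ["# Note A\n...", "# Note B\n...", "# Note C\n..."];
console.log(fitToBudget(exampleChunks, 1500));
```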

We are already tracking this behavior in our existing issue: #146

Could you please add your findings there, including details about which model you're using? This information would be valuable for our investigation.

Since this is being tracked in the other issue, I'll close this one to consolidate the discussion.

Thanks for your contribution to improving the project.
