There appears to be a bug in the current implementation of Retrieval-Augmented Generation (RAG) when using custom AI models, such as those served through Ollama. The issue occurs under the following conditions:
Specific Note Interaction: When interacting with a specific note, the system functions as expected.
Vault-Wide Search: When searching the entire vault, the system identifies the top 10 relevant chunks to include but fails to answer the query accurately. Instead, it generates an unrelated response, often focused on markdown content. I attached a screenshot that highlights this problem.
I am not sure this is the intended behavior of the application.
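To make the difference between the two cases concrete, here is a minimal sketch of the vault-wide flow described above. It is not the plugin's actual code: the keyword-overlap scoring stands in for the real embedding search, and all function names (`score`, `top_k_chunks`, `build_prompt`) are illustrative assumptions.

```python
def score(query: str, chunk: str) -> int:
    """Toy relevance score: count query words that appear in the chunk."""
    query_words = set(query.lower().split())
    return sum(1 for w in set(chunk.lower().split()) if w in query_words)

def top_k_chunks(query: str, chunks: list[str], k: int = 10) -> list[str]:
    """Pick the k highest-scoring chunks from across the whole vault."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, context_chunks: list[str]) -> str:
    """Concatenate the retrieved chunks ahead of the question.

    With k=10 chunks pulled vault-wide, the context is much longer and
    more heterogeneous than a single note, which is where a smaller
    model can drift off and answer about the context's markdown instead
    of the question."""
    context = "\n---\n".join(context_chunks)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical vault contents for illustration only.
vault = [
    "Meeting notes: discuss RAG retrieval bug with Ollama models.",
    "Groceries: milk, eggs, bread.",
    "RAG pipeline: embed chunks, search vault, answer query.",
]
query = "How does the RAG vault search work?"
prompt = build_prompt(query, top_k_chunks(query, vault, k=2))
```

In the single-note case the context is just that one note, so the model stays on topic; in the vault-wide case the same prompt template is filled with many loosely related chunks, matching the failure mode reported here.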
Thank you for reporting this issue. Based on your description, this appears to be related to known limitations of smaller models in handling complex contexts and longer prompts.
We are already tracking this behavior in our existing issue: #146
Could you please add your findings there, including details about which model you're using? This information would be valuable for our investigation.
Since this is being tracked in the other issue, I'll close this one to consolidate the discussion.
Thanks for your contribution to improving the project.