Vault Chat isn't working with Ollama #145
Thank you for reporting this issue. This behavior is likely related to the limitations of smaller models when handling complex contexts and longer prompts. I will add this to our tracking issue #146.
About this, could you provide more details about what you're experiencing?
https://imgur.com/a/w7rAPi7 Does the AI have context for file structure, tags, links, etc.? What I'm after is being able to talk to and query the assistant more generally about our vault as a whole... FYI: today I will fork your plugin to add an Amazon Bedrock integration, which offers most models with zero data retention...
The plugin does include retrieved context from your vault in the requests. Small models just struggle with longer prompts and complex contextual queries, leading to those repetitive responses. You can see how we structure the prompts and context in promptGenerator.ts.
Could you elaborate on which specific methods you'd like to see exposed? I'm not familiar with how other local models work.
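To make the "retrieved context" point concrete, here is a minimal sketch of how retrieved vault chunks might be folded into a single system prompt. This is an illustration only, not the plugin's actual promptGenerator.ts; the names `RetrievedChunk` and `buildSystemPrompt` are hypothetical.

```typescript
// Hypothetical sketch: assembling retrieved vault context into one prompt.
// Not the plugin's real implementation.
interface RetrievedChunk {
  path: string;    // vault-relative file path
  content: string; // text of the retrieved block
}

function buildSystemPrompt(question: string, chunks: RetrievedChunk[]): string {
  // Join each chunk with its source path so the model can cite files.
  const context = chunks
    .map((c) => `File: ${c.path}\n${c.content}`)
    .join("\n---\n");
  return [
    "You are an assistant for an Obsidian vault.",
    "Answer using only the context below.",
    "",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```

A long prompt of this shape is exactly what smaller local models tend to mishandle, producing the repetitive responses described above.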
I haven't taken a closer look at what exactly you pass to each model call and where. But generally the idea would be to define an interface that encapsulates all of the plugin's AI-calling logic, which the internal model API implementations would also go through; that way, external model implementations could be added more easily as well...
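The provider abstraction being proposed could be sketched as follows. All type and class names here (`ChatMessage`, `ChatProvider`, `EchoProvider`) are hypothetical, chosen only to show the shape: the plugin's chat actions would depend on the interface, and built-in backends (OpenAI, Ollama) and external ones (e.g. Bedrock) would each implement it.

```typescript
// Hypothetical provider interface; names are illustrative, not the plugin's API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatProvider {
  // Return a completion for the given conversation.
  complete(messages: ChatMessage[]): Promise<string>;
}

// A trivial implementation used here only to demonstrate the contract:
// it echoes the last message back.
class EchoProvider implements ChatProvider {
  async complete(messages: ChatMessage[]): Promise<string> {
    return messages[messages.length - 1]?.content ?? "";
  }
}
```

With this seam in place, a Bedrock backend would just be another `ChatProvider` implementation, without touching the plugin's retrieval or UI code.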
I didn't manage to make it work with AWS Bedrock... Hope you implement it some day so I'll be able to use the full functionality without privacy concerns!
If AWS Bedrock supports an OpenAI-compatible API, you should be able to use it by selecting 'Custom (OpenAI Compatible)' as your Chat model.
Thanks for following up! If I do come up with a working solution I will share it, but Amazon Bedrock uses a different API structure and authentication system (AWS credentials), so it isn't compatible with that option.
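The incompatibility can be illustrated by comparing the two request bodies. The OpenAI-compatible path POSTs a `{model, messages}` body to `/v1/chat/completions` with a bearer token, while Bedrock invokes a model at `/model/<modelId>/invoke`, signed with AWS Signature V4 rather than a bearer token, and with a model-family-specific body. The Bedrock field names below follow Anthropic's Messages format as accepted by Bedrock; treat the exact values (especially `anthropic_version`) as an assumption, not a verified contract.

```typescript
// Sketch of the two incompatible request shapes (payload construction only;
// the real blocker is also SigV4 request signing, which is omitted here).

function openAiPayload(model: string, prompt: string) {
  // Sent to POST /v1/chat/completions with "Authorization: Bearer <key>".
  return { model, messages: [{ role: "user", content: prompt }] };
}

function bedrockAnthropicPayload(prompt: string) {
  // Sent to POST /model/<modelId>/invoke on the Bedrock runtime endpoint,
  // signed with AWS Signature V4 using AWS credentials.
  return {
    anthropic_version: "bedrock-2023-05-31", // assumed version string
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  };
}
```

Because both the body schema and the authentication scheme differ, a 'Custom (OpenAI Compatible)' endpoint cannot reach Bedrock directly without a translating proxy in between.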
Thanks for sharing. While we don't have immediate plans to implement Amazon Bedrock integration, we're keeping track of user requests. If more users express interest in this support, we'll definitely consider prioritizing it in our roadmap. Would it be okay to close this issue? |
Yes please do |
Hello, when selecting Vault chat it seems to reference some of my files (while repeating some of them), and after the referencing I see this: "Understood! I will only include tags for existing markdown blocks you provide with `startLine` and `endLine` attributes when referencing them. For newly generated markdown content, line numbers won't be included. I'm ready to assist with your Obsidian editing and organization questions using markdown format as specified. Please ask away!"
After that, whatever I prompt, I get 'I don't have access to your Obsidian vault'...
I am using Gemma 2 9B from Ollama with bge-m3 for embeddings...
Any thoughts on why this is happening?
Also, the Vault chat button does the above inside a started conversation; in a new conversation, pressing it in the input box does nothing...