Ollama support for LLM backend #97
Somewhat related to #65.

In the past I've used Ollama for local inference with LLMs. Would it be useful to add this support to the library? I'd be happy to work on this and add support for using an Ollama API endpoint for the LLM.
Can Ollama use APIs? There is a PR open for the API part.
@andimarafioti I think the PR is only for the OpenAI API. I was just suggesting offering Ollama API support. In particular, it's possible to run an Ollama server (either locally or on a slightly better machine) and then query it using the Ollama REST API (like here). The changes/additions would be similar to the open PR #81, but with a class for Ollama API support that queries Ollama endpoints.
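A minimal sketch of what such a class could look like, using Ollama's documented /api/chat endpoint. This is not the project's actual handler interface; the class name OllamaModelHandler, its methods, and the default model are all illustrative, and it assumes an Ollama server reachable at the default http://localhost:11434:

```python
# Hypothetical sketch of an Ollama-backed LLM handler; interface names are
# illustrative, not taken from the repo or from PR #81.
import requests


class OllamaModelHandler:
    """Queries a running Ollama server through its REST API."""

    def __init__(self, model="llama3.2", base_url="http://localhost:11434"):
        self.model = model
        self.chat_url = f"{base_url}/api/chat"
        self.history = []  # running chat history for multi-turn prompts

    def generate(self, prompt):
        """Send the prompt plus history to /api/chat and return the reply text."""
        self.history.append({"role": "user", "content": prompt})
        response = requests.post(
            self.chat_url,
            json={"model": self.model, "messages": self.history, "stream": False},
            timeout=120,
        )
        response.raise_for_status()
        # Non-streaming /api/chat responses carry the reply under "message".
        reply = response.json()["message"]["content"]
        self.history.append({"role": "assistant", "content": reply})
        return reply


if __name__ == "__main__":
    handler = OllamaModelHandler()
    print(handler.generate("Say hello in one short sentence."))
```

Streaming (stream=True, one JSON object per line) would likely be the better fit for a speech pipeline, but the non-streaming form keeps the sketch short.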
This would be great, since some people already run Ollama. Like me :)
Happy to work on this at some point this week. Functionality was also added to Ollama last week to run any GGUF model from the HF Hub: https://huggingface.co/docs/hub/en/ollama
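Per the linked docs, a GGUF repo on the Hub can be referenced as hf.co/{username}/{repository}, so the same REST calls should work for Hub models too. A hedged example (the repo name below is only an illustration of the naming scheme, and it assumes the model was pulled first, e.g. with `ollama pull hf.co/...`):

```python
# Sketch: pointing the same Ollama REST API at a GGUF model from the HF Hub.
# The repository below is just an example of the hf.co/{username}/{repository}
# scheme; substitute any GGUF repo you have pulled locally.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF",  # example repo
        "prompt": "Hello!",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
# Non-streaming /api/generate responses carry the text under "response".
print(resp.json()["response"])
```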