
openai compatible provider #1

Open
oatmealm opened this issue Mar 27, 2024 · 5 comments

@oatmealm

Hi there! Wondering if robby can be set up to work with OpenAI-compatible providers. Being able to set a base_url, for example...

stevemolitor added a commit that referenced this issue Mar 30, 2024
Put the base API URL in a customization variable instead of hard-coding
it. This should let users use alternate OpenAI-compatible providers.

Fixes #1.
@stevemolitor
Owner

Oh, interesting idea. It was hard-coded, but I just added a commit to make the API URL customizable: 34cc413.

To use it, put this in your init.el:

(setq robby-api-url "https://alternate-api-provider")

Let me know if that works. What provider are you using?
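
In case it helps, here's a fuller sketch with the other robby variables filled in. The URL, key, and model name below are placeholders, and whether the base URL needs a /v1 suffix depends on the provider:

(setq robby-api-url "https://alternate-api-provider" ; provider base URL
      robby-openai-api-key "your-provider-key"       ; key the provider expects
      robby-chat-model "provider-model-name")        ; model id to request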

@stevemolitor stevemolitor reopened this Mar 30, 2024
@oatmealm
Author

oatmealm commented Mar 30, 2024

I'll try with together.ai and ollama (which now supports some OpenAI API endpoints).
Thanks!

@oatmealm
Copy link
Author

oatmealm commented Apr 8, 2024

I'm trying a simple setup with litellm (an OpenAI proxy), but I keep getting "405 Method Not Allowed":

(use-package! robby
  :commands (robby-chat)
  :bind ("C-c r" . robby-command-map)
  :custom
  (robby-api-url "http://localhost:4000")
  (robby-openai-api-key "sk-1234")
  (robby-chat-model "some-model"))

Otherwise this setup works out of the box with other packages I've been trying.
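
In case it helps narrow things down, here's a quick way to hit the proxy directly from Emacs and check what status comes back. This is only a diagnostic sketch: it assumes litellm serves the usual OpenAI /v1/chat/completions route on port 4000, and reuses the placeholder key and model from the config above.

(require 'json)
(require 'url)

;; POST a minimal chat request to the proxy and print the HTTP status
;; line, to see whether the 405 depends on the request path.
(let ((url-request-method "POST")
      (url-request-extra-headers
       '(("Content-Type" . "application/json")
         ("Authorization" . "Bearer sk-1234")))
      (url-request-data
       (json-encode
        '((model . "some-model")
          (messages . [((role . "user") (content . "hello"))])))))
  (with-current-buffer
      (url-retrieve-synchronously "http://localhost:4000/v1/chat/completions")
    (goto-char (point-min))
    ;; First line of the response buffer is the HTTP status line.
    (message "%s" (buffer-substring (point-min) (line-end-position)))))

If this returns 200 while robby still gets a 405, the difference is probably in the path or HTTP method robby sends; if it also returns 405, the route on the proxy side is the problem.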

@stevemolitor
Copy link
Owner

Thanks! What other providers did you try?

I'm noticing slight variations between the different providers in auxiliary things like fetching and parsing the list of available models, or in error response formats. I started working on something to allow plugging in small variations like that for specific providers. It will be a week or two, however, before I get back to this. I'll sort out the '405' response from litellm then.
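
Roughly, the kind of thing I have in mind looks like the sketch below. These are hypothetical names only; none of this is actual robby code:

;; Hypothetical sketch only: these names don't exist in robby.
;; The idea is to keep each provider's quirks (model-list endpoint,
;; error response shape, etc.) in one place.
(defvar robby--provider-quirks
  '((openai  . (:models-path "/v1/models" :error-field error))
    (litellm . (:models-path "/v1/models" :error-field detail)))
  "Alist mapping a provider symbol to its API quirks.")

(defun robby--provider-quirk (provider key)
  "Look up KEY in the quirks plist for PROVIDER."
  (plist-get (alist-get provider robby--provider-quirks) key))

;; e.g. (robby--provider-quirk 'litellm :models-path) => "/v1/models"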

@oatmealm
Author

oatmealm commented Apr 8, 2024

I have only one cohere model plugged in so far via litellm.
