
Feature Request: Integration with LiteLLM / LiteLLM-Proxy #173

Open

DBairdME opened this issue Oct 11, 2023 · 11 comments

Comments

@DBairdME

Hi. Loving the development of the chatbot. Has any thought been given to integrating the front end with LiteLLM or LiteLLM-Proxy to provide abstraction over the LLM being used? With the rapid development and availability of LLM models, having a front end such as smart-chatbot-ui able to leverage more LLMs (including those that may be locally hosted) would be a great development.

@ishaan-jaff

Hi, I'm the maintainer of LiteLLM. How can I help with this?

@DBairdME
Author

I'm keen to see what changes would be needed to allow calls to either LiteLLM or LiteLLM-Proxy in place of the current code that targets the OpenAI or Azure OpenAI APIs. This could mean simplifying the UI so that the LLM choice is determined by LiteLLM; alternatively, LiteLLM could be queried for the LLMs currently configured for use, and the chatbot UI would then show that list so the user can select the LLM they wish to use.
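For reference, a rough sketch of what that second option could look like from the UI side, assuming the proxy exposes the OpenAI-compatible /v1/models route (the constant and function names here are illustrative, not code from either repo):

```typescript
// Illustrative sketch only: ask a LiteLLM proxy which models it is
// configured to serve, so the UI could offer them in a dropdown.
// Assumes an OpenAI-compatible /v1/models route at PROXY_URL.
const PROXY_URL = process.env.OPENAI_API_HOST ?? "http://0.0.0.0:8000";

interface ModelEntry {
  id: string;
}

async function fetchAvailableModels(): Promise<string[]> {
  const res = await fetch(`${PROXY_URL}/v1/models`, {
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? ""}` },
  });
  if (!res.ok) {
    throw new Error(`Model list request failed: ${res.status}`);
  }
  // OpenAI-style response shape: { "data": [ { "id": "<model-name>", ... } ] }
  const body = (await res.json()) as { data: ModelEntry[] };
  return body.data.map((m) => m.id);
}
```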

@krrishdholakia

@DBairdME I'll have a tutorial for this today.

@krrishdholakia

Hey @DBairdME

I believe this is what you need to do, assuming you're running it locally:

1. Clone the repo
   git clone https://github.com/dotneet/smart-chatbot-ui.git
2. Install dependencies
   npm i
3. Create your env file
   cp .env.local.example .env.local
4. Set the API key and base
   OPENAI_API_KEY="my-fake-key"
   OPENAI_API_HOST="http://0.0.0.0:8000"
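Once that's set, a quick way to sanity-check that the proxy answers OpenAI-style calls before pointing the UI at it (a sketch only; the model name below is a placeholder for whatever your proxy is configured to serve):

```typescript
// Minimal smoke test (not part of the repo): send an OpenAI-style chat
// completion request to the proxy configured in OPENAI_API_HOST.
async function smokeTestProxy(): Promise<void> {
  const host = process.env.OPENAI_API_HOST ?? "http://0.0.0.0:8000";
  const res = await fetch(`${host}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? "my-fake-key"}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // placeholder: use a model your proxy serves
      messages: [{ role: "user", content: "Hello" }],
    }),
  });
  console.log(res.status, await res.text());
}

smokeTestProxy();
```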

@krrishdholakia

Let me know if this works for you @DBairdME.

Otherwise, happy to hop on a 30-minute call and get this working for you: https://calendly.com/kdholakia

@DBairdME
Author

Hi Krish, thanks for those notes. I see that if the LiteLLM-Proxy\main.py file is amended to use /v1/ as a prefix in the proxy config, I can use the proxy to communicate with the LLM (without the /v1/ prefix the proxy isn't able to respond correctly). Interestingly, if you choose a new chat within the chatbot, the call to /v1/models gets stuck and the app cannot take any user input.

@krrishdholakia

Hey @DBairdME, can you explain that a bit more? What's the error you're seeing?

We have support for both /v1/chat/completions and /chat/completions.

Here are all the available endpoints:
[screenshot of the available endpoints, 2023-10-18]

@DBairdME
Author

Hi. OK, I've redeployed the proxy using the litellm repo (rather than the litellm-proxy repo) and this addresses the /v1/ prefix issues.

smart-chatbot-ui's call to /v1/models returns a 404 when selecting 'New Chat' within the chatbot:

INFO: 172.19.8.XXX:53072 - "GET /v1/models HTTP/1.1" 404 Not Found
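For reference, a quick way to check which model-listing routes the proxy answers (illustrative sketch; adjust the host to wherever the proxy is running):

```typescript
// Debugging sketch: probe both route variants the proxy might expose,
// since the chatbot UI requests /v1/models specifically.
const HOST = "http://0.0.0.0:8000"; // adjust to your proxy address

async function probeModelRoutes(): Promise<void> {
  for (const path of ["/models", "/v1/models"]) {
    const res = await fetch(`${HOST}${path}`);
    console.log(`${path} -> ${res.status}`);
  }
}

probeModelRoutes();
```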

@krrishdholakia

krrishdholakia commented Oct 19, 2023

Great. @DBairdME, how did you find the litellm-proxy repo? It should route to litellm.

Looks like we're missing the /v1/ prefix for the models route. I'll add it now.

@DBairdME
Author

Hi, it came up when searching for "litellm-proxy". At the moment its Git repository is the top search result returned by Google.

@krrishdholakia

Change pushed @DBairdME, it should be part of v0.10.2.

Would love to give you a shoutout when we announce this on our changelog. Do you have a Twitter/LinkedIn?
