Feature Request: Integration with LiteLLM / LiteLLM-Proxy #173
Hi. Loving the development of the chatbot. Has any thought been given to integrating the front end with litellm or litellm-proxy to provide abstraction of the LLM being used? With the rapid development and availability of LLM models, having a front end such as smart-chatbot-ui able to leverage more LLMs (including those that may be locally hosted) would be a great development.
Comments
Hi, I'm the maintainer of LiteLLM - how can I help with this?
I'm keen to see what changes would be needed to allow calls to either LiteLLM or LiteLLM-Proxy in place of the current code, which targets either the OpenAI or Azure OpenAI LLMs. This could mean simplifying the UI so that the LLM choice is determined by LiteLLM; alternatively, LiteLLM could be queried for the LLMs it is currently configured to serve, with the chatbot UI showing that list so the user can select the LLM they wish to use.
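As a rough illustration of that second option, the UI could ask the proxy which models it is configured to serve and offer them to the user. A minimal sketch, assuming an OpenAI-compatible LiteLLM proxy reachable at a local address (the address and the /v1/models path are assumptions here, not details confirmed in this thread):

```python
import requests

# Assumed local address of a running LiteLLM proxy; adjust to your deployment.
PROXY_BASE = "http://localhost:8000"

# OpenAI-compatible proxies typically expose the configured models at /v1/models.
resp = requests.get(f"{PROXY_BASE}/v1/models", timeout=10)
resp.raise_for_status()

# The response follows the OpenAI "list" shape: {"object": "list", "data": [{"id": ...}, ...]}
model_ids = [m["id"] for m in resp.json().get("data", [])]
print("Models the chatbot UI could offer:", model_ids)
```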
@DBairdME I'll have a tutorial for this today.
Hey @DBairdME, I believe this is what you need to do, assuming you're running it locally:
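A minimal sketch of the general idea, assuming a LiteLLM proxy running locally and the chatbot pointed at it via an OpenAI-compatible base URL (the port, model name, and env-var name below are illustrative assumptions, not steps confirmed in this thread):

```python
import requests

# Assumed address of a locally running LiteLLM proxy (e.g. started with `litellm --model <model>`);
# use whatever host/port your proxy actually reports on startup.
PROXY_BASE = "http://localhost:8000"

# On the smart-chatbot-ui side, the idea is to point its OpenAI base-URL setting
# (e.g. an env var such as OPENAI_API_HOST) at PROXY_BASE instead of api.openai.com.

# Sanity-check that the proxy answers the OpenAI-style chat completions route the UI will call.
resp = requests.post(
    f"{PROXY_BASE}/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",  # any model the proxy is configured to route
        "messages": [{"role": "user", "content": "Hello from smart-chatbot-ui"}],
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```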
Let me know if this works for you @DBairdME. Otherwise I'm happy to hop on a 30-minute call and get this working for you - https://calendly.com/kdholakia
Hi Krish, thanks for those notes. I've found that if the LiteLLM-Proxy\main.py file is amended to use /v1/ as a route prefix in the proxy config, I can use the proxy to communicate with the LLM (without the /v1/ prefix the proxy doesn't respond correctly). Interestingly, if you choose a new chat within the chatbot, the call to /v1/models gets stuck and the app cannot take any user input.
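The prefix symptom can be checked by hitting the proxy with and without the /v1 prefix; a small illustrative sketch (address and model name are assumptions):

```python
import requests

PROXY_BASE = "http://localhost:8000"  # assumed local proxy address
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "ping"}],
}

# The chatbot calls the OpenAI-style path, which carries the /v1 prefix...
with_prefix = requests.post(f"{PROXY_BASE}/v1/chat/completions", json=payload, timeout=30)
# ...so a proxy that only mounts /chat/completions will return 404 for the prefixed call.
without_prefix = requests.post(f"{PROXY_BASE}/chat/completions", json=payload, timeout=30)

print("with /v1 prefix:   ", with_prefix.status_code)
print("without /v1 prefix:", without_prefix.status_code)
```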
Hey @DBairdME, can you explain that a bit more? What's the error you're seeing? We have support for both v1/chat/completions and /chat/completions.
Hi. OK, I've redeployed the proxy using the litellm repo (rather than the litellm-proxy repo), and this addresses the /v1/ prefix issues. Smart-chatbot-ui's call to /v1/models returns a 404 when selecting 'New Chat' within the chatbot.
Great. @DBairdME, how did you find the litellm-proxy repo? It should route to litellm. Looks like we're missing the v1/ prefix for models. I'll add it now.
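For context, the LiteLLM proxy is built on FastAPI, so serving the same handler under both the prefixed and unprefixed paths is a small change. A purely illustrative sketch of such an alias (not the actual LiteLLM commit):

```python
# Illustrative only - not the actual LiteLLM change. In a FastAPI app, one handler
# can simply be registered under both the prefixed and unprefixed routes.
from fastapi import FastAPI

app = FastAPI()

def _configured_models() -> dict:
    # A real proxy would derive this list from its model config.
    return {"object": "list", "data": [{"id": "gpt-3.5-turbo", "object": "model"}]}

@app.get("/models")
@app.get("/v1/models")
def list_models():
    return _configured_models()
```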
Hi, it came up when searching for litellm-proxy. At the moment its Git repository is the top search result returned by Google.
Change pushed @DBairdME, should be part of v. Would love to give you a shoutout when we announce this on our changelog. Do you have a Twitter/LinkedIn?