
add support for claude, cohere and llama2 #1451

Open · krrishdholakia wants to merge 1 commit into master
Conversation

@krrishdholakia commented Aug 4, 2023

Hi @MaartenGr,

Saw you expanded support to OpenAI and wanted to see how to add support for other LLM providers. I'm working on litellm (a simple library to standardize LLM API calls - https://github.com/BerriAI/litellm) and was wondering if we could be helpful.

Added support for Claude, Cohere, and Llama2 (via Replicate) by creating a new _litellm.py file. The code is pretty similar to the OpenAI class - as litellm follows the same pattern as the openai-python sdk.
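A minimal sketch of what such a wrapper could look like, assuming litellm's OpenAI-compatible `completion()` call. The class and method names below are illustrative, not the actual code in the PR's `_litellm.py`:

```python
class LiteLLM:
    """Illustrative representation model that routes a prompt to Claude,
    Cohere, or Llama2 (via Replicate) through litellm's unified interface."""

    def __init__(self, model="claude-instant-1",
                 prompt="Describe these documents:\n[DOCUMENTS]"):
        self.model = model
        self.prompt = prompt

    def _build_messages(self, documents):
        # Fill the prompt template and wrap it in the OpenAI chat format,
        # which litellm accepts for every provider it supports.
        content = self.prompt.replace("[DOCUMENTS]", "\n".join(documents))
        return [{"role": "user", "content": content}]

    def extract_label(self, documents):
        import litellm  # deferred so the class imports without the package
        response = litellm.completion(model=self.model,
                                      messages=self._build_messages(documents))
        return response["choices"][0]["message"]["content"]
```

Because litellm mirrors the openai-python response shape, the same `response["choices"][0]["message"]["content"]` access works regardless of which provider served the call.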

Would love to know if this helps.

Happy to make any additional changes as required 😊

@krrishdholakia krrishdholakia changed the title add support for claude, palm, cohere and llama2 add support for claude, cohere and llama2 Aug 4, 2023
@MaartenGr (Owner)
Oops, I think you missed the Contributing Guidelines! I think the idea of one package for many APIs is definitely welcome. Also, I see you are on a mission to add the functionality of litellm to many repos, some of which have already been merged - congrats!

Having said that, there are a few things that make me wonder. With the Langchain integration, which supports quite a number of LLMs, what is then the added value of litellm? Also, adding this as a main dependency would result in quite a number of additional dependencies to take into account, whilst other LLM-based packages, such as Langchain and OpenAI, are not used as main dependencies.

@krrishdholakia (Author) commented Aug 5, 2023

Hey @MaartenGr, thanks for the feedback.

Here's what I understand so far:

  • With litellm, because we import all the LLM packages at the top (openai, anthropic, replicate, etc.), this causes a dependency issue: you don't want to download packages you don't need.

Would a better experience here be to download packages as required (e.g. if the user is passing in an openai call, then download the package at that moment)?
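One common way to sketch that "import as required" idea in Python is a deferred import with a clear install hint, so the unused provider SDKs never need to be present. The helper name below is hypothetical:

```python
import importlib


def require_provider(package: str, provider: str):
    """Import an optional provider SDK on first use, raising a clear error
    that tells the user what to install if it is missing."""
    try:
        return importlib.import_module(package)
    except ImportError as exc:
        raise ImportError(
            f"The '{provider}' backend needs the '{package}' package, "
            f"which is not installed. Run: pip install {package}"
        ) from exc


# e.g. inside a completion call, only when the user actually picks OpenAI:
# openai = require_provider("openai", "OpenAI")
```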

> With the Langchain integration, which supports quite a number of LLMs, what is then the added value of litellm?

Langchain is good because I can work across models, but I've found it bloated: it adds a ton of dependencies I don't need to my repo (e.g. SQLAlchemy). So I wanted to figure out a way to do this in a simpler manner, which is why we built litellm.

Open to feedback on this, though.

@MaartenGr (Owner)

> Would a better experience here be to download packages as required (e.g. if the user is passing in an openai call, then download the package at that moment)?

As is done currently, which you can find in the documentation, installation of the necessary packages should be done outside of BERTopic. Otherwise, it becomes too difficult to track all optional dependencies. The nice thing about this is that it easily allows for adding specific frameworks.
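The "install the backend yourself, outside the package" approach can be paired with a fail-fast guard at construction time; a hedged sketch, with hypothetical names:

```python
from importlib.util import find_spec


def check_backend_installed(package: str, feature: str) -> None:
    """Fail fast if an optional backend is absent, so the core package
    never has to declare it as a dependency."""
    if find_spec(package) is None:
        raise ModuleNotFoundError(
            f"{feature} requires '{package}'. Since it is an optional "
            f"dependency, install it separately: pip install {package}"
        )
```

A representation class could call e.g. `check_backend_installed("openai", "the OpenAI representation")` at the top of its `__init__`, so users get the install instruction immediately rather than a bare `ImportError` mid-run.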

> langchain is good because i can work across models, but i've found it bloated - adds a ton of dependencies that i don't need to my repo (e.g. SQLAlchemy). So wanted to figure out a way to do this in a simpler manner - which is why we built lite.

Ah right, that makes sense! Here, I am not sure how much it adds compared to what is already implemented. I am always a bit hesitant to add external packages that might not be necessary as it adds another layer of complexity with respect to dependency management. Having said that, if there are many users that would like to see this implemented, I am all for it!

2 participants