Indeed, it's an interesting project. Let me go through your questions and try to address them:
Cost: Pricing depends largely on the LLM provider and varies across platforms. This project uses the litellm library, which simplifies API calls to various LLMs by standardizing inputs and outputs. litellm bundles a comprehensive JSON file listing per-token costs for each provider, making them easy to compare. I've also submitted a PR to enhance the README with a brief cost comparison for your convenience.
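As a quick illustration, litellm exposes that pricing data programmatically. Here is a minimal sketch, assuming a recent litellm version and a couple of example model names; the dictionary keys below mirror litellm's pricing JSON schema and may change between releases:

```python
# Sketch: inspecting litellm's bundled pricing table.
# `litellm.model_cost` is a dict keyed by model name; the key names
# ("input_cost_per_token", "output_cost_per_token") follow the bundled
# JSON and may differ across litellm versions.
import litellm

for model in ["gpt-4o-mini", "claude-3-haiku-20240307"]:  # example models
    pricing = litellm.model_cost.get(model, {})
    print(
        f"{model}: "
        f"input ${pricing.get('input_cost_per_token')}/token, "
        f"output ${pricing.get('output_cost_per_token')}/token"
    )
```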
Speed: In my use cases, response times over the API have been fairly consistent, without significant delays; a one- or two-second difference hasn't been an issue for me. Running a local LLM is a different scenario, and I recommend checking out a Medium post that covers this topic. Feel free to start a discussion there!
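If you want to measure latency yourself, a simple wall-clock check around a litellm call is enough for a rough comparison. A minimal sketch, assuming the relevant API key (e.g. OPENAI_API_KEY) is set in your environment and using an example model name:

```python
# Rough latency check for a single completion round-trip.
import time
from litellm import completion

start = time.perf_counter()
response = completion(
    model="gpt-4o-mini",  # example model; swap in any litellm-supported one
    messages=[{"role": "user", "content": "Reply with one word."}],
)
print(f"round-trip latency: {time.perf_counter() - start:.2f}s")
```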
Comparison Among LLMs: Several tools are dedicated to comparing LLMs, such as promptfoo. Additionally, a Colab Notebook is available for experimenting with your own data.
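Because litellm standardizes the call signature, you can also do a quick ad-hoc comparison without extra tooling. A minimal sketch, assuming API keys for both providers are configured and using example model names:

```python
# Side-by-side responses from two providers through litellm's unified
# interface; the response object mirrors the OpenAI chat schema.
from litellm import completion

prompt = [{"role": "user", "content": "Summarise LLM cost/quality trade-offs in one sentence."}]
for model in ["gpt-4o-mini", "claude-3-haiku-20240307"]:  # example models
    response = completion(model=model, messages=prompt)
    print(f"{model}: {response.choices[0].message.content}")
```

For anything more systematic, such as scored assertions or regression tests across many prompts, a dedicated tool like promptfoo is the better fit.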
Your questions are valuable for anyone trying to understand and learn more about LLMs, and you are more than welcome to continue this discussion.
Hi! Very interesting project and nice documentation overall, but I have a lot of questions right from the beginning.
I think having this info right in the README would help popularise this method 🙌