How to generate this ranking? If I added new model, how to reproduce this benchmark?
My new model is implemented in this PR: https://github.com/OpenGenerativeAI/llm-colosseum/pull/45/files You can watch the video of my model vs. Mistral here: https://github.com/Tokkiu/llm-colosseum?tab=readme-ov-file#1-vs-1-mistral-vs-solar
Hello Jingqi, I'm also very interested in this project. Could we get in touch to discuss it?
I just launched 50 rounds between the two models; the result shows which model is better. At the moment, Gemma 7B is the best, and v1.1 is worse.
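The 50-round head-to-head described above can be sketched as a simple win tally. This is only an illustration of the counting logic, not the project's actual code: `play_round` is a hypothetical stand-in (here it picks a random winner) for the real llm-colosseum match, which runs an emulated Street Fighter fight between two LLM-driven agents.

```python
import random

def play_round(model_a, model_b):
    # Hypothetical placeholder for one match; the real benchmark plays an
    # emulated fight and returns the winning model's name. A random pick
    # keeps this sketch runnable.
    return random.choice([model_a, model_b])

def run_benchmark(model_a, model_b, rounds=50):
    """Run repeated head-to-head rounds and tally wins per model."""
    wins = {model_a: 0, model_b: 0}
    for _ in range(rounds):
        wins[play_round(model_a, model_b)] += 1
    return wins

if __name__ == "__main__":
    wins = run_benchmark("gemma-7b", "mistral-7b", rounds=50)
    # Rank models by win count, highest first
    for model, count in sorted(wins.items(), key=lambda kv: -kv[1]):
        print(f"{model}: {count}/50 wins")
```

Adding a new model to the ranking would then amount to plugging its name into the same loop and comparing win counts against the existing entries.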