Feat/new models #248
Conversation
# Conflicts:
#	packages/gnosis/customs/ofv_market_resolver/component.yaml
#	packages/packages.json
@@ -348,6 +348,11 @@ def error_response(msg: str) -> Tuple[str, None, None, None]:
        "limit_max_tokens": 8192,
        "temperature": 0,
    },
    "gpt-4o-2024-05-13": {
        "default_max_tokens": 500,
Why this limit of 500?
I just copied over from above.
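For context on what the questioned value does: a `default_max_tokens`/`limit_max_tokens` pair like the one in the diff is typically consumed as a fallback plus a hard cap when building the completion request. A minimal sketch; the key names mirror the config in the diff, but `resolve_max_tokens` is an illustrative helper, not the PR's actual code:

```python
from typing import Optional

# Mirrors the shape of the per-model config added in the diff.
MODEL_SETTINGS = {
    "gpt-4o-2024-05-13": {
        "default_max_tokens": 500,   # used when the caller passes no value
        "limit_max_tokens": 4096,    # hard cap, never exceeded
        "temperature": 0,
    },
}


def resolve_max_tokens(model: str, requested: Optional[int] = None) -> int:
    """Use the caller's value if given, else the default, clamped to the cap."""
    cfg = MODEL_SETTINGS[model]
    value = requested if requested is not None else cfg["default_max_tokens"]
    return min(value, cfg["limit_max_tokens"])
```

Under that reading, 500 is only a fallback for callers who don't ask for a specific budget, which is why copying it from the neighbouring entries is plausible but worth double-checking.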
@@ -348,6 +348,11 @@ def error_response(msg: str) -> Tuple[str, None, None, None]:
        "limit_max_tokens": 8192,
        "temperature": 0,
    },
    "gpt-4o-2024-05-13": {
The "gpt-4o-2024-08-06" model is cheaper...?
Thanks, will replace.
@@ -254,6 +254,11 @@ def embeddings(self, model, input):
        "limit_max_tokens": 8192,
        "temperature": 0,
    },
    "gpt-4o-2024-05-13": {
        "default_max_tokens": 500,
        "limit_max_tokens": 4096,
limit_max_tokens should be 128_000. You can check it here:
https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json
I think that's for the input, no?
In the URL I shared, search for sonnet and compare its config with the one you wrote: for Sonnet you put the max input tokens value in limit_max_tokens, not the max output tokens value. So why, for the GPT model, are you putting the max output tokens in limit_max_tokens instead of the max input tokens?
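The disagreement above is between two different limits that litellm's pricing file tracks separately: the context-window (input) limit and the output cap. A minimal sketch, using an embedded snippet in the shape of litellm's `model_prices_and_context_window.json` (the numbers are the ones discussed in this thread and should be re-checked against the live file):

```python
import json

# Illustrative snippet mirroring litellm's schema; values should be
# verified against the live model_prices_and_context_window.json.
LITELLM_SNIPPET = json.loads("""
{
  "gpt-4o-2024-05-13": {
    "max_input_tokens": 128000,
    "max_output_tokens": 4096
  }
}
""")

entry = LITELLM_SNIPPET["gpt-4o-2024-05-13"]
# The review's point: 128_000 is the input/context limit, while 4096
# is the output cap -- two distinct fields that must not be conflated
# when filling in a single "limit_max_tokens" key.
print(entry["max_input_tokens"])
print(entry["max_output_tokens"])
```

Whichever of the two the PR's `limit_max_tokens` key is meant to hold, it should be the same field for every model in the table.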
    },
    "gpt-4o-2024-05-13": {
        "default_max_tokens": 500,
        "limit_max_tokens": 4096,
limit_max_tokens should be 128_000. You can check it here:
https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json
@@ -348,6 +348,11 @@ def error_response(msg: str) -> Tuple[str, None, None, None]:
        "limit_max_tokens": 8192,
        "temperature": 0,
    },
    "gpt-4o-2024-05-13": {
        "default_max_tokens": 500,
        "limit_max_tokens": 4096,
limit_max_tokens should be 128_000. You can check it here:
https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json
@@ -36,9 +36,11 @@ class TokenCounterCallback:
    "gpt-4-turbo-preview": {"input": 0.01, "output": 0.03},
    "gpt-4-0125-preview": {"input": 0.01, "output": 0.03},
    "gpt-4-1106-preview": {"input": 0.01, "output": 0.03},
    "gpt-4o-2024-08-06": {"input": 0.01, "output": 0.03},
Where did you get those values? They are different from the ones here:
https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json
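Once the per-1k-token prices are settled, a pricing table like `TokenCounterCallback`'s is typically consumed by a small cost helper. A hedged sketch; the gpt-4o figures below are OpenAI's published USD-per-1k prices at the time of writing and should be verified against litellm's JSON, and `call_cost` is an illustrative helper, not the PR's actual code:

```python
# USD per 1k tokens; verify against litellm's
# model_prices_and_context_window.json before relying on these.
PRICES_PER_1K = {
    "gpt-4o-2024-05-13": {"input": 0.005, "output": 0.015},
    "gpt-4o-2024-08-06": {"input": 0.0025, "output": 0.01},
}


def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one call from its token counts."""
    p = PRICES_PER_1K[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]
```

For example, 1000 input plus 500 output tokens on `gpt-4o-2024-08-06` would come to about $0.0075, roughly half the cost of the same call on `gpt-4o-2024-05-13`, which is the reviewer's point about the cheaper snapshot.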
Proposed changes

This PR adds support for `gpt4o` and `claude3.5-sonnet`.

Types of changes

What types of changes does your code introduce? (A breaking change is a fix or feature that would cause existing functionality and APIs to not work as expected.) Put an `x` in the box that applies.

Checklist

Put an `x` in the boxes that apply.

- `main` branch (left side). Also you should start your branch off our `main`.