[FEATURE] Add Anthropic LLM support via `anthropic` Python SDK #332
Conversation
✅ There are no secrets present in this pull request anymore. If these secrets were true positives and are still valid, we highly recommend revoking them. 🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process.
Codecov Report. Attention: Patch coverage is
Flags with carried forward coverage won't be shown.
@bboynton97 @areibman This is ready for review!
Really great PR! I left a few comments. Namely, async isn't parsing data as we'd expect; otherwise it looks good.
```python
for event in stream:
    if event.type == "content_block_delta":
        if event.delta.type == "text":
            response += event.delta.text
    elif event.type == "message_stop":
        print("\n")
        print(response)
        print("\n")
```
I re-ran the code and found that the tool calling isn't doing what's expected (in this case, performing a web search). This is why it's only printing the "thinking" part but not the "answer". I have two solutions in mind for a fix:
- Precisely describe the web search in the tool definition
- Use the anthropic-tools package, which is in the alpha stage
I will test both of these and push a fix soon.
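A hypothetical sketch of the first option: a tool definition whose description and schema spell out exactly when and how the model should call it (the `name`, description text, and schema fields here are illustrative, not the PR's actual tool):

```python
# Illustrative web-search tool definition in the shape Anthropic's
# Messages API expects for its `tools` parameter (name / description /
# input_schema). The description is deliberately precise so the model
# knows when the tool applies.
web_search_tool = {
    "name": "web_search",
    "description": (
        "Search the public web for up-to-date information. "
        "Use this whenever the answer depends on recent events or facts "
        "that may not be in your training data. Returns result snippets."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The search query to run.",
            }
        },
        "required": ["query"],
    },
}
print(web_search_tool["name"])
```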
I tested with both of these and so far no luck, since the models have no innate web search support. However, another solution would be to use the anthropic-tools package with a custom Python class that uses another package to perform web searches, getting us the results we want. Or we could skip such questions and ask something else in the prompt.
Please let me know your thoughts on this.
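A hedged sketch of that workaround: a small custom tool class that delegates to an injected search callable (the class shape and names here are stand-ins, not anthropic-tools' actual base classes, and the backend is a fake for demonstration):

```python
class WebSearchTool:
    """Minimal custom tool wrapping any callable that returns search snippets."""

    name = "web_search"
    description = "Search the web and return brief result snippets."

    def __init__(self, search_fn):
        # search_fn could wrap duckduckgo-search, SerpAPI, or similar
        self._search_fn = search_fn

    def use_tool(self, query: str) -> str:
        results = self._search_fn(query)
        return "\n".join(results)

# Fake backend standing in for a real search package
fake_backend = lambda q: [f"result for {q}: ..."]
tool = WebSearchTool(fake_backend)
print(tool.use_tool("anthropic tool calling"))
```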
Unfortunate timing for this PR. @bboynton97 is currently splitting llm_tracker.py into multiple files for readability. When that merge goes through, we'll make some changes to make this one work.
I'd like to help. It would be good for me to understand the workings of the software in detail, and I can use that information to draft a contributing guidelines document and/or identify where tests are needed.
This is the PR in question: #346. From your end, this would just mean splitting the Anthropic handlers into their own file. The tricky part is just merging those changes from that branch into yours.
@HowieG Can you add the Anthropic notebook to the smoke test suite? |
Cool, I will look into it and find a solution. |
Any notebook added to the tests or examples folders will be smoke tested automatically, except any we've excluded.
Force-pushed from 44428ff to cde0820
I rebased and refactored the code. I believe that this PR should merge to
Hey @the-praxs! Thank you for reworking this, it looks fantastic! I'm going to go ahead and merge this into the `llm-handler-refactor` branch. Thank you for your work here! It's very appreciated.
Merged 47633ec into AgentOps-AI:llm-handler-refactor
* separated llm tracker by model
* llm refactor progress
* working for openai
* groq provider
* cohere provider
* ollama provider
* remove support for openai v0
* cohere support
* test import fix
* test import fix
* groq test fix
* ollama tests
* litellm tests
* dont import litellm
* cohere fixes and tests
* oai version <0.1 better deprecation warning
* [FEATURE] Add Anthropic LLM support via `anthropic` Python SDK (#332)
  * fix typo
  * add anthropic support
  * add example file
  * fixed linting using black
  * add time travel support for anthropic
  * linting
  * fix kwargs key to get prompt
  * fix for extracting tokens from Message.usage
  * minor fix for output tokens
  * remove anthropic example python file
  * fix completions not show in session
  * some more fixes and cleanup
  * add Message object to pydantic models
  * fix typo
  * overhaul anthropic code
  * linting
  * add anthropic example notebook
  * linting
  * added readme examples
  * fix incorrect attribute access for the model content
  * add async example
  * refactor code
  * fix function name
  * linting
  * add provider tests

  Co-authored-by: reibs <[email protected]>
* added undo to canaries
* added anthropic tests
* undo instrumenting for litellm
* cohere considerations

Co-authored-by: Pratyush Shukla <[email protected]>
Co-authored-by: reibs <[email protected]>
📥 Pull Request

📘 Description
This PR adds functions in the `llm_tracker.py` file for asynchronous and synchronous Anthropic SDK support. An example notebook `anthropic_example` under `examples/anthropic` is also added to demo the integration.

🎯 Goal
To extend the AgentOps capability to the Anthropic LLMs.
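One detail the tracker code has to handle (per the commit history, "fix for extracting tokens from Message.usage") is pulling token counts off the returned message. A hedged sketch of that extraction, using a stand-in object rather than the real SDK `Message` type:

```python
from types import SimpleNamespace

def extract_token_counts(message):
    """Pull prompt/completion token counts from a Message-like object.

    Anthropic's Message.usage exposes input_tokens / output_tokens;
    getattr with defaults keeps this tolerant of missing fields.
    """
    usage = getattr(message, "usage", None)
    prompt_tokens = getattr(usage, "input_tokens", 0) if usage else 0
    completion_tokens = getattr(usage, "output_tokens", 0) if usage else 0
    return prompt_tokens, completion_tokens

# Stand-in for an SDK Message with a usage block
msg = SimpleNamespace(usage=SimpleNamespace(input_tokens=12, output_tokens=34))
print(extract_token_counts(msg))  # prints (12, 34)
```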
🔍 Additional Context
Tool calling in the Anthropic Python SDK seems flaky and might not produce correct responses.
🧪 Testing
Tested using the provided example notebook.
Thank you for your contribution to AgentOps!