Fix/tools #208 (Merged)
Changes from all commits (11 commits):
73c0d88  feat: apply tool fixes (0xArdi)
f0ad36b  chore: revert accidental changes (0xArdi)
eb57ae3  Merge branch 'main' into fix/tools (0xArdi)
e7b6aa9  chore: merge conflicts (0xArdi)
a4dc21b  chore: adjust deps (0xArdi)
211fe97  chore: revert accidental change (0xArdi)
edb6cb6  chore: add missing deps (0xArdi)
b41a1cd  fix: package deps (0xArdi)
beb203d  chore: generators (0xArdi)
41dd06e  chore: liccheck (0xArdi)
ed34423  fix: cot tools entry point (0xArdi)
packages/napthaai/customs/prediction_request_rag_claude/__init__.py (new file, 20 additions, 0 deletions)
@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------------------------
#
# Copyright 2024 Valory AG
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ------------------------------------------------------------------------------

"""This module contains the prediction request rag claude tool."""
packages/napthaai/customs/prediction_request_rag_claude/component.yaml (new file, 39 additions, 0 deletions)
@@ -0,0 +1,39 @@
name: prediction_request_rag_claude
author: napthaai
version: 0.1.0
type: custom
description: A tool for making binary predictions on markets using claude.
license: Apache-2.0
aea_version: '>=1.0.0, <2.0.0'
fingerprint:
  __init__.py: bafybeihd72xjbjzbqcpm3qfaimqk3ts3qx2l4w2lhca2e4ruso7jngmmce
  prediction_request_rag_claude.py: bafybeibxeaoizixha4po7apf4k7gix2hi7x54hwqx4oyp23bwdgao6fv2u
fingerprint_ignore_patterns: []
entry_point: prediction_request_rag_claude.py
callable: run
dependencies:
  google-api-python-client:
    version: ==2.95.0
  googlesearch-python:
    version: ==1.2.3
  requests: {}
  markdownify:
    version: ==0.11.6
  readability-lxml:
    version: ==0.8.1
  anthropic:
    version: ==0.21.3
  tiktoken:
    version: ==0.5.1
  pydantic:
    version: '>=1.9.0,<3'
  faiss-cpu:
    version: ==1.7.4
  openai:
    version: ==1.11.0
  docstring-parser:
    version: ==0.15
  pypdf2:
    version: ==3.0.1
  numpy:
    version: '>=1.19.0'
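For context, a minimal sketch of how a component.yaml like the one above can be used to resolve and call a custom tool: entry_point names the module file and callable names the function to invoke. This is not the repository's actual loading mechanism; load_tool is a hypothetical helper and PyYAML is assumed to be available.

import importlib.util
from pathlib import Path

import yaml  # assumption: PyYAML is installed


def load_tool(component_dir: str):
    """Illustrative only: load the callable declared in a component.yaml."""
    config = yaml.safe_load((Path(component_dir) / "component.yaml").read_text())
    module_path = Path(component_dir) / config["entry_point"]
    spec = importlib.util.spec_from_file_location(module_path.stem, module_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    # config["callable"] is "run" for the components in this PR
    return getattr(module, config["callable"])


# Hypothetical usage:
# run = load_tool("packages/napthaai/customs/prediction_request_rag_claude")
# result, prompt_trace, _, counter = run(tool="...", prompt="...", api_keys={...})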
@@ -681,114 +681,117 @@ def extract_question(prompt: str) -> str:

The change wraps the entire body of run() in a try/except so that failures are returned to the caller as an error message instead of raising. After the change the function reads:

def run(**kwargs) -> Tuple[str, Optional[str], Optional[Dict[str, Any]], Any]:
    """Run the task"""
    try:
        with OpenAIClientManager(kwargs["api_keys"]["openai"]):
            tool = kwargs["tool"]
            prompt = extract_question(kwargs["prompt"])
            num_urls = kwargs.get("num_urls", DEFAULT_NUM_URLS[tool])
            counter_callback = kwargs.get("counter_callback", None)
            api_keys = kwargs.get("api_keys", {})
            google_api_key = api_keys.get("google_api_key", None)
            google_engine_id = api_keys.get("google_engine_id", None)
            temperature = kwargs.get("temperature", DEFAULT_OPENAI_SETTINGS["temperature"])
            max_tokens = kwargs.get("max_tokens", DEFAULT_OPENAI_SETTINGS["max_tokens"])
            engine = kwargs.get("model", TOOL_TO_ENGINE[tool])
            print(f"ENGINE: {engine}")
            if tool not in ALLOWED_TOOLS:
                raise ValueError(f"Tool {tool} is not supported.")

            (
                additional_information,
                queries,
                counter_callback,
            ) = fetch_additional_information(
                client=client,
                prompt=prompt,
                engine=engine,
                google_api_key=google_api_key,
                google_engine_id=google_engine_id,
                counter_callback=counter_callback,
                source_links=kwargs.get("source_links", None),
                num_urls=num_urls,
                temperature=temperature,
                max_tokens=max_tokens,
            )

            # Adjust the additional_information to fit within the token budget
            adjusted_info = adjust_additional_information(
                prompt=PREDICTION_PROMPT,
                additional_information=additional_information,
                model=engine,
            )

            # Reasoning prompt
            reasoning_prompt = REASONING_PROMPT.format(
                user_prompt=prompt, formatted_docs=adjusted_info
            )

            # Do reasoning
            messages = [
                {"role": "system", "content": SYSTEM_PROMPT},
                {
                    "role": "user",
                    "content": reasoning_prompt,
                },
            ]

            # Reasoning
            response_reasoning = client.chat.completions.create(
                model=engine,
                messages=messages,
                temperature=temperature,
                max_tokens=max_tokens,
                n=1,
                timeout=150,
                stop=None,
            )

            # Extract the reasoning
            reasoning = response_reasoning.choices[0].message.content

            # Prediction prompt
            prediction_prompt = PREDICTION_PROMPT.format(
                user_prompt=prompt, reasoning=reasoning
            )

            # Make the prediction
            messages = [
                {"role": "system", "content": SYSTEM_PROMPT},
                {
                    "role": "user",
                    "content": prediction_prompt,
                },
            ]

            response = client.chat.completions.create(
                model=engine,
                messages=messages,
                temperature=temperature,
                max_tokens=max_tokens,
                n=1,
                timeout=150,
                stop=None,
                functions=[Results.openai_schema],
                function_call={'name':'Results'}
            )
            results = str(Results.from_response(response))

            pairs = str(results).split()
            result_dict = {}
            for pair in pairs:
                key, value = pair.split("=")
                result_dict[key] = float(value)  # Convert value to float
            results = result_dict
            results = json.dumps(results)
            if counter_callback is not None:
                counter_callback(
                    input_tokens=response_reasoning.usage.prompt_tokens
                    + response.usage.prompt_tokens,
                    output_tokens=response_reasoning.usage.completion_tokens
                    + response.usage.completion_tokens,
                    model=engine,
                    token_counter=count_tokens,
                )
            return results, reasoning_prompt + "////" + prediction_prompt, None, counter_callback
    except Exception as e:
        return f"Invalid response. The following issue was encountered: {str(e)}", "", None, None
Review comment on lines +796 to +797:
Check response here
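For orientation, a sketch of the call shape the updated run() expects. The kwarg names mirror what the function reads from kwargs; the tool string, import path, and key values are placeholders rather than values taken from this repository.

# from prediction_request_reasoning import run  # hypothetical import path

result, prompt_trace, _, counter = run(
    tool="prediction-request-reasoning",  # placeholder; must be a member of ALLOWED_TOOLS
    prompt="Will event X resolve YES by 31 December 2024?",
    api_keys={
        "openai": "<openai-api-key>",
        "google_api_key": "<google-api-key>",
        "google_engine_id": "<google-engine-id>",
    },
    num_urls=3,             # optional, defaults to DEFAULT_NUM_URLS[tool]
    temperature=0.0,        # optional, defaults to DEFAULT_OPENAI_SETTINGS["temperature"]
    max_tokens=500,         # optional, defaults to DEFAULT_OPENAI_SETTINGS["max_tokens"]
    counter_callback=None,  # optional token-accounting hook
)
# On success, result is a JSON string of the parsed Results fields; on any
# exception, the new try/except returns an "Invalid response..." message,
# an empty prompt trace, None, and None.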
packages/napthaai/customs/prediction_request_reasoning_claude/component.yaml (new file, 39 additions, 0 deletions)
@@ -0,0 +1,39 @@
name: prediction_request_reasoning_claude
author: napthaai
version: 0.1.0
type: custom
description: A tool for making binary predictions on markets using claude.
license: Apache-2.0
aea_version: '>=1.0.0, <2.0.0'
fingerprint:
  __init__.py: bafybeib36ew6vbztldut5xayk5553rylrq7yv4cpqyhwc5ktvd4cx67vwu
  prediction_request_reasoning_claude.py: bafybeian5oyo5v4qvnocxhpgtxcvitlzwsa7hag4lwydmyyth2r7u6dine
fingerprint_ignore_patterns: []
entry_point: prediction_request_reasoning_claude.py
callable: run
dependencies:
  google-api-python-client:
    version: ==2.95.0
  googlesearch-python:
    version: ==1.2.3
  requests: {}
  markdownify:
    version: ==0.11.6
  readability-lxml:
    version: ==0.8.1
  openai:
    version: ==1.11.0
  tiktoken:
    version: ==0.5.1
  pypdf2:
    version: ==3.0.1
  numpy:
    version: '>=1.19.0'
  pydantic:
    version: '>=1.9.0,<3'
  faiss-cpu:
    version: ==1.7.4
  docstring-parser:
    version: ==0.15
  anthropic:
    version: ==0.21.3
packages/napthaai/customs/prediction_url_cot/__init__.py (new file, 20 additions, 0 deletions)
@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------------------------
#
# Copyright 2024 Valory AG
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ------------------------------------------------------------------------------

"""This module contains the prediction url cot tool."""
packages/napthaai/customs/prediction_url_cot/component.yaml (new file, 37 additions, 0 deletions)
@@ -0,0 +1,37 @@
name: prediction_url_cot
author: napthaai
version: 0.1.0
type: custom
description: A tool for making binary predictions on markets.
license: Apache-2.0
aea_version: '>=1.0.0, <2.0.0'
fingerprint:
  __init__.py: bafybeiflni5dkn5fqe7fnu4lgbqxzfrgochhqfbgzwz3vlf5grijp3nkpm
  prediction_url_cot.py: bafybeiczvut645b7xma2x6ivadhdbbqrvwusjoczidv2bwfbhidazshlfu
fingerprint_ignore_patterns: []
entry_point: prediction_url_cot.py
callable: run
dependencies:
  google-api-python-client:
    version: ==2.95.0
  googlesearch-python:
    version: ==1.2.3
  requests: {}
  markdownify:
    version: ==0.11.6
  readability-lxml:
    version: ==0.8.1
  anthropic:
    version: ==0.21.3
  tiktoken:
    version: ==0.5.1
  pypdf2:
    version: ==3.0.1
  numpy:
    version: '>=1.19.0'
  pydantic:
    version: '>=1.9.0,<3'
  openai:
    version: ==1.11.0
  docstring-parser:
    version: ==0.15
@richardblythman @moarshy Wrapping in a try/except like this really helps with debugging when things go wrong. I suggest we take this approach going forward. If you have a cleaner way of doing it than a simple try/except like this, please propose it.
OK sure, sounds good.
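As an illustration of the approach agreed above, here is a minimal sketch that expresses the same error-wrapping as a reusable decorator. It is a suggestion under stated assumptions: the MechResponse alias and the with_error_reporting name are hypothetical and do not come from this PR.

import functools
from typing import Any, Callable, Dict, Optional, Tuple

# Hypothetical alias for the (result, prompt_trace, extra, counter_callback) tuple.
MechResponse = Tuple[str, Optional[str], Optional[Dict[str, Any]], Any]


def with_error_reporting(fn: Callable[..., MechResponse]) -> Callable[..., MechResponse]:
    """Wrap a tool callable so exceptions become an error response."""

    @functools.wraps(fn)
    def wrapper(**kwargs: Any) -> MechResponse:
        try:
            return fn(**kwargs)
        except Exception as e:  # broad on purpose: surface the failure to the caller
            return f"Invalid response. The following issue was encountered: {e}", "", None, None

    return wrapper


# Hypothetical usage: decorating each tool's run() with @with_error_reporting
# would give the same behaviour as the inline try/except added in this PR.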