Fix/tools #208
Merged 11 commits on Apr 8, 2024
@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------------------------
#
# Copyright 2024 Valory AG
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ------------------------------------------------------------------------------

"""This module contains the prediction request rag claude tool."""
@@ -0,0 +1,39 @@
name: prediction_request_rag_claude
author: napthaai
version: 0.1.0
type: custom
description: A tool for making binary predictions on markets using claude.
license: Apache-2.0
aea_version: '>=1.0.0, <2.0.0'
fingerprint:
__init__.py: bafybeihd72xjbjzbqcpm3qfaimqk3ts3qx2l4w2lhca2e4ruso7jngmmce
prediction_request_rag_claude.py: bafybeibxeaoizixha4po7apf4k7gix2hi7x54hwqx4oyp23bwdgao6fv2u
fingerprint_ignore_patterns: []
entry_point: prediction_request_rag_claude.py
callable: run
dependencies:
google-api-python-client:
version: ==2.95.0
googlesearch-python:
version: ==1.2.3
requests: {}
markdownify:
version: ==0.11.6
readability-lxml:
version: ==0.8.1
anthropic:
version: ==0.21.3
tiktoken:
version: ==0.5.1
pydantic:
version: '>=1.9.0,<3'
faiss-cpu:
version: ==1.7.4
openai:
version: ==1.11.0
docstring-parser:
version: ==0.15
pypdf2:
version: ==3.0.1
numpy:
version: '>=1.19.0'
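Note that most dependencies in this component.yaml are exactly pinned (`==`), while numpy and pydantic allow version ranges. A minimal sketch of a hypothetical helper (not part of this PR) that separates the two, assuming the YAML has already been parsed into a dict (e.g. with `yaml.safe_load`):

```python
def split_dependencies(spec):
    """Return ([exactly pinned], [ranged or unpinned]) dependency names.

    `spec` is a parsed component.yaml mapping; entries like `requests: {}`
    carry no version at all and count as unpinned.
    """
    pinned, ranged = [], []
    for name, meta in spec.get("dependencies", {}).items():
        version = (meta or {}).get("version", "")
        (pinned if version.startswith("==") else ranged).append(name)
    return pinned, ranged
```

Such a check is one way to catch accidental range pins before publishing, since unpinned transitive versions can break fingerprint reproducibility.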
@@ -7,7 +7,7 @@ license: Apache-2.0
aea_version: '>=1.0.0, <2.0.0'
fingerprint:
__init__.py: bafybeib36ew6vbztldut5xayk5553rylrq7yv4cpqyhwc5ktvd4cx67vwu
- prediction_request_reasoning.py: bafybeifjm24tqil3nan37dbg4u7qm3xw3jxfu5eleg3cojmka67dwaclpa
+ prediction_request_reasoning.py: bafybeid3umzaz7qzxyf4pda6ffmayyhmr7fjx35bk4qzpsi33rtanddrem
fingerprint_ignore_patterns: []
entry_point: prediction_request_reasoning.py
callable: run
Expand Up @@ -681,114 +681,117 @@ def extract_question(prompt: str) -> str:

def run(**kwargs) -> Tuple[str, Optional[str], Optional[Dict[str, Any]], Any]:
"""Run the task"""
with OpenAIClientManager(kwargs["api_keys"]["openai"]):
tool = kwargs["tool"]
prompt = extract_question(kwargs["prompt"])
num_urls = kwargs.get("num_urls", DEFAULT_NUM_URLS[tool])
counter_callback = kwargs.get("counter_callback", None)
api_keys = kwargs.get("api_keys", {})
google_api_key = api_keys.get("google_api_key", None)
google_engine_id = api_keys.get("google_engine_id", None)
temperature = kwargs.get("temperature", DEFAULT_OPENAI_SETTINGS["temperature"])
max_tokens = kwargs.get("max_tokens", DEFAULT_OPENAI_SETTINGS["max_tokens"])
engine = kwargs.get("model", TOOL_TO_ENGINE[tool])
print(f"ENGINE: {engine}")
if tool not in ALLOWED_TOOLS:
raise ValueError(f"Tool {tool} is not supported.")

(
additional_information,
queries,
counter_callback,
) = fetch_additional_information(
client=client,
prompt=prompt,
engine=engine,
google_api_key=google_api_key,
google_engine_id=google_engine_id,
counter_callback=counter_callback,
source_links=kwargs.get("source_links", None),
num_urls=num_urls,
temperature=temperature,
max_tokens=max_tokens,
)
try:
Review comment (Collaborator, author): @richardblythman @moarshy Wrapping in a try/except like this really helps to debug when things go wrong. I suggest we take this approach going forward. If you have some cleaner way of doing it than with a simple try/except like this, please propose.

Reply (Contributor): OK sure, sounds good.
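The pattern the reviewer proposes — wrapping the whole tool body in a try/except and returning the error in-band — can be sketched as follows. This is a minimal illustration, not the tool's actual body; the validation logic and return values are placeholders:

```python
from typing import Any, Dict, Optional, Tuple


def run(**kwargs) -> Tuple[str, Optional[str], Optional[Dict[str, Any]], Any]:
    """Run the task, surfacing any failure as a structured error value."""
    try:
        tool = kwargs["tool"]  # raises KeyError if the caller omitted it
        if tool != "supported-tool":  # placeholder validation
            raise ValueError(f"Tool {tool} is not supported.")
        return "ok", None, None, None
    except Exception as e:
        # Return the error instead of crashing the worker, so the caller
        # can see what went wrong without parsing a traceback.
        return f"Invalid response. The following issue was encountered: {e}", "", None, None
```

The trade-off is that a blanket `except Exception` can mask bugs that would otherwise fail loudly, which is presumably why the reviewer invites cleaner alternatives.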

with OpenAIClientManager(kwargs["api_keys"]["openai"]):
tool = kwargs["tool"]
prompt = extract_question(kwargs["prompt"])
num_urls = kwargs.get("num_urls", DEFAULT_NUM_URLS[tool])
counter_callback = kwargs.get("counter_callback", None)
api_keys = kwargs.get("api_keys", {})
google_api_key = api_keys.get("google_api_key", None)
google_engine_id = api_keys.get("google_engine_id", None)
temperature = kwargs.get("temperature", DEFAULT_OPENAI_SETTINGS["temperature"])
max_tokens = kwargs.get("max_tokens", DEFAULT_OPENAI_SETTINGS["max_tokens"])
engine = kwargs.get("model", TOOL_TO_ENGINE[tool])
print(f"ENGINE: {engine}")
if tool not in ALLOWED_TOOLS:
raise ValueError(f"Tool {tool} is not supported.")

(
additional_information,
queries,
counter_callback,
) = fetch_additional_information(
client=client,
prompt=prompt,
engine=engine,
google_api_key=google_api_key,
google_engine_id=google_engine_id,
counter_callback=counter_callback,
source_links=kwargs.get("source_links", None),
num_urls=num_urls,
temperature=temperature,
max_tokens=max_tokens,
)

# Adjust the additional_information to fit within the token budget
adjusted_info = adjust_additional_information(
prompt=PREDICTION_PROMPT,
additional_information=additional_information,
model=engine,
)
# Adjust the additional_information to fit within the token budget
adjusted_info = adjust_additional_information(
prompt=PREDICTION_PROMPT,
additional_information=additional_information,
model=engine,
)

# Reasoning prompt
reasoning_prompt = REASONING_PROMPT.format(
user_prompt=prompt, formatted_docs=adjusted_info
)
# Reasoning prompt
reasoning_prompt = REASONING_PROMPT.format(
user_prompt=prompt, formatted_docs=adjusted_info
)

# Do reasoning
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": reasoning_prompt,
},
]
# Do reasoning
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": reasoning_prompt,
},
]

# Reasoning
response_reasoning = client.chat.completions.create(
model=engine,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
n=1,
timeout=150,
stop=None,
)
# Reasoning
response_reasoning = client.chat.completions.create(
model=engine,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
n=1,
timeout=150,
stop=None,
)

# Extract the reasoning
reasoning = response_reasoning.choices[0].message.content
# Extract the reasoning
reasoning = response_reasoning.choices[0].message.content

# Prediction prompt
prediction_prompt = PREDICTION_PROMPT.format(
user_prompt=prompt, reasoning=reasoning
)
# Prediction prompt
prediction_prompt = PREDICTION_PROMPT.format(
user_prompt=prompt, reasoning=reasoning
)

# Make the prediction
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": prediction_prompt,
},
]
# Make the prediction
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": prediction_prompt,
},
]

response = client.chat.completions.create(
model=engine,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
n=1,
timeout=150,
stop=None,
functions=[Results.openai_schema],
function_call={'name':'Results'}
)
results = str(Results.from_response(response))

pairs = str(results).split()
result_dict = {}
for pair in pairs:
key, value = pair.split("=")
result_dict[key] = float(value) # Convert value to float
results = result_dict
results = json.dumps(results)
if counter_callback is not None:
counter_callback(
input_tokens=response_reasoning.usage.prompt_tokens
+ response.usage.prompt_tokens,
output_tokens=response_reasoning.usage.completion_tokens
+ response.usage.completion_tokens,
response = client.chat.completions.create(
model=engine,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
n=1,
timeout=150,
stop=None,
functions=[Results.openai_schema],
function_call={'name':'Results'}
)
return results, reasoning_prompt + "////" + prediction_prompt, None, counter_callback
results = str(Results.from_response(response))

pairs = str(results).split()
result_dict = {}
for pair in pairs:
key, value = pair.split("=")
result_dict[key] = float(value) # Convert value to float
results = result_dict
results = json.dumps(results)
if counter_callback is not None:
counter_callback(
input_tokens=response_reasoning.usage.prompt_tokens
+ response.usage.prompt_tokens,
output_tokens=response_reasoning.usage.completion_tokens
+ response.usage.completion_tokens,
model=engine,
token_counter=count_tokens,
)
return results, reasoning_prompt + "////" + prediction_prompt, None, counter_callback
except Exception as e:
return f"Invalid response. The following issue was encountered: {str(e)}", "", None, None
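The `pairs = str(results).split()` loop above converts the stringified `Results` object into a dict of floats and then into JSON. Isolated as a standalone sketch (the `p_yes`/`p_no` field names are illustrative — the actual `Results` schema is not shown in this diff):

```python
import json


def parse_results(results: str) -> str:
    """Turn a 'key=value key=value' string into a JSON object of floats.

    Mirrors the inline loop in run(): split on whitespace, split each
    pair on '=', coerce values to float, then serialize.
    """
    result_dict = {}
    for pair in results.split():
        key, value = pair.split("=")
        result_dict[key] = float(value)
    return json.dumps(result_dict)
```

Note this parsing assumes no value ever contains a space or an extra `=`; a non-numeric value raises `ValueError`, which the surrounding try/except would catch.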
Review comment on lines +796 to +797 (Collaborator, author): Check response here.
@@ -0,0 +1,39 @@
name: prediction_request_reasoning_claude
author: napthaai
version: 0.1.0
type: custom
description: A tool for making binary predictions on markets using claude.
license: Apache-2.0
aea_version: '>=1.0.0, <2.0.0'
fingerprint:
__init__.py: bafybeib36ew6vbztldut5xayk5553rylrq7yv4cpqyhwc5ktvd4cx67vwu
prediction_request_reasoning_claude.py: bafybeian5oyo5v4qvnocxhpgtxcvitlzwsa7hag4lwydmyyth2r7u6dine
fingerprint_ignore_patterns: []
entry_point: prediction_request_reasoning_claude.py
callable: run
dependencies:
google-api-python-client:
version: ==2.95.0
googlesearch-python:
version: ==1.2.3
requests: {}
markdownify:
version: ==0.11.6
readability-lxml:
version: ==0.8.1
openai:
version: ==1.11.0
tiktoken:
version: ==0.5.1
pypdf2:
version: ==3.0.1
numpy:
version: '>=1.19.0'
pydantic:
version: '>=1.9.0,<3'
faiss-cpu:
version: ==1.7.4
docstring-parser:
version: ==0.15
anthropic:
version: ==0.21.3
20 changes: 20 additions & 0 deletions packages/napthaai/customs/prediction_url_cot/__init__.py
@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------------------------
#
# Copyright 2024 Valory AG
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ------------------------------------------------------------------------------

"""This module contains the prediction url cot tool."""
37 changes: 37 additions & 0 deletions packages/napthaai/customs/prediction_url_cot/component.yaml
@@ -0,0 +1,37 @@
name: prediction_url_cot
author: napthaai
version: 0.1.0
type: custom
description: A tool for making binary predictions on markets.
license: Apache-2.0
aea_version: '>=1.0.0, <2.0.0'
fingerprint:
__init__.py: bafybeiflni5dkn5fqe7fnu4lgbqxzfrgochhqfbgzwz3vlf5grijp3nkpm
prediction_url_cot.py: bafybeiczvut645b7xma2x6ivadhdbbqrvwusjoczidv2bwfbhidazshlfu
fingerprint_ignore_patterns: []
entry_point: prediction_url_cot.py
callable: run
dependencies:
google-api-python-client:
version: ==2.95.0
googlesearch-python:
version: ==1.2.3
requests: {}
markdownify:
version: ==0.11.6
readability-lxml:
version: ==0.8.1
anthropic:
version: ==0.21.3
tiktoken:
version: ==0.5.1
pypdf2:
version: ==3.0.1
numpy:
version: '>=1.19.0'
pydantic:
version: '>=1.9.0,<3'
openai:
version: ==1.11.0
docstring-parser:
version: ==0.15