support internet meme #128
base: develop/v0.2.1
Conversation
Walkthrough

The pull request introduces a comprehensive Internet Meme Agent system with multiple components for processing user inputs, searching memes, and explaining their context. The system leverages OpenAI's GPT-4o model and includes configuration files for various workers, tools, and system settings. Key components include an input interface, meme searcher, visual question answering (VQA), and meme explanation modules. The implementation supports both text and image-based interactions, with a modular workflow design that allows for flexible task chaining and configuration.
Sequence Diagram

sequenceDiagram
participant User
participant InputInterface
participant MemeSearcher
participant MemeExplain
User->>InputInterface: Provide instruction and optional image
InputInterface->>MemeSearcher: Forward user instruction
MemeSearcher->>MemeSearcher: Search web for meme information
MemeSearcher->>MemeExplain: Pass search results
MemeExplain->>MemeExplain: Generate meme explanation
MemeExplain-->>User: Return meme details
@coderabbitai review

✅ Actions performed

Review triggered.
Actionable comments posted: 19
🧹 Nitpick comments (17)
examples/internet_meme/agent/simple_vqa/simple_vqa.py (3)
1-2: Remove unused imports to clean up the code.

The imports pathlib.Path and typing.List are not used and can be removed to keep the code clean. Apply this diff:

- from pathlib import Path
- from typing import List

🧰 Tools
🪛 Ruff (0.8.2)
1-1: pathlib.Path imported but unused. Remove unused import: pathlib.Path (F401)

2-2: typing.List imported but unused. Remove unused import: typing.List (F401)
8-8: Remove unused import StrParser.

The import StrParser is not used in the code and can be safely removed. Apply this diff:

- from omagent_core.models.llms.prompt.parser import StrParser

🧰 Tools
🪛 Ruff (0.8.2)
8-8: omagent_core.models.llms.prompt.parser.StrParser imported but unused. Remove unused import: omagent_core.models.llms.prompt.parser.StrParser (F401)
11-11: Remove unused import container.

The import container from omagent_core.utils.container is not utilized and can be removed to reduce clutter. Apply this diff:

- from omagent_core.utils.container import container

🧰 Tools
🪛 Ruff (0.8.2)
11-11: omagent_core.utils.container.container imported but unused. Remove unused import: omagent_core.utils.container.container (F401)
examples/internet_meme/configs/workers/simple_vqa.yaml (1)
2-2: Add a newline at the end of the file.

A newline at the end of the file is recommended to comply with YAML formatting conventions and avoid warnings from static analysis tools. Apply this diff:

 llm: ${sub|gpt}
+
🧰 Tools
🪛 yamllint (1.35.1)
[error] 2-2: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/configs/workers/meme_seacher.yaml (1)
3-3: Add a newline at the end of the file.

Adding a newline at the end of the file is recommended to meet YAML formatting standards and prevent potential issues. Apply this diff:

 tool_manager: ${sub|websearch}
+
🧰 Tools
🪛 yamllint (1.35.1)
[error] 3-3: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/configs/tools/websearch.yml (1)
5-5: Add a newline at the end of the file.

Including a newline at the end of the file adheres to YAML formatting conventions and avoids static analysis warnings. Apply this diff:

 llm: ${sub|text_res}
+
🧰 Tools
🪛 yamllint (1.35.1)
[error] 5-5: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/agent/meme_explain/user_prompt.prompt (1)
4-6: Consider adding English translations for better maintainability.

The Chinese placeholders might make maintenance challenging for non-Chinese speaking developers. Consider adding English comments or making the template bilingual. Example enhancement:

 Input Information:
-搜到的信息: {{info}}
-网络梗的名称: {{name}}
+搜到的信息 (Retrieved Information): {{info}}
+网络梗的名称 (Meme Name): {{name}}

examples/internet_meme/configs/llms/gpt.yml (2)
3-4: Consider HTTPS validation for custom endpoints.

While using environment variables for the API endpoint is good practice, ensure that custom endpoints (if used) are validated for HTTPS. Consider adding a validation check in the code to ensure the endpoint URL uses HTTPS when custom endpoints are configured, as sketched below.
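A minimal sketch of such a check, assuming it runs when the endpoint string is loaded from configuration (the function name and call site are illustrative, not part of the PR):

from urllib.parse import urlparse

def validate_endpoint(endpoint: str) -> str:
    """Reject LLM endpoints that are not served over HTTPS."""
    parsed = urlparse(endpoint)
    if parsed.scheme != "https":
        raise ValueError(f"LLM endpoint must use HTTPS, got: {endpoint!r}")
    return endpoint

# Example: validate_endpoint("http://example.com/v1") raises ValueError
validate_endpoint("https://api.openai.com/v1")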
6-6: Add newline at end of file.

YAML files should end with a newline character. Add a newline at the end of the file to comply with YAML best practices.
🧰 Tools
🪛 yamllint (1.35.1)
[error] 6-6: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/configs/llms/text_res.yml (2)
1-6: Consider using YAML anchors to reduce configuration duplication.

This configuration is nearly identical to gpt.yml except for the vision parameter. Consider using YAML anchors and aliases to maintain these configurations more efficiently. Note that anchors only resolve within a single YAML document, so splitting the base across files assumes the config loader supports merging an included base file. Example refactor using YAML anchors:

# _base.yml
base_config: &base_config
  name: OpenaiGPTLLM
  model_id: gpt-4o
  api_key: ${env| custom_openai_key, openai_api_key}
  endpoint: ${env| custom_openai_endpoint, https://api.openai.com/v1}
  temperature: 0

# gpt.yml
<<: *base_config
vision: true

# text_res.yml
<<: *base_config
vision: false

🧰 Tools
🪛 yamllint (1.35.1)
[error] 6-6: no new line character at the end of file
(new-line-at-end-of-file)
6-6: Add newline at end of file.

YAML files should end with a newline character. Add a newline at the end of the file to comply with YAML best practices.
🧰 Tools
🪛 yamllint (1.35.1)
[error] 6-6: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/agent/input_interface/input_interface.py (1)
22-45: Consider implementing rate limiting.

The _run method should implement rate limiting to prevent abuse of the input interface. Consider adding a rate limiter decorator or implementing rate limiting logic within the method to restrict the frequency of requests from the same user or workflow instance; a sketch follows.
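A minimal sketch of such a decorator, assuming a single shared limit (names and limits are illustrative; a production setup would likely key limits per user or workflow instance):

import threading
import time
from functools import wraps

def rate_limited(max_calls: int, period_s: float):
    """Allow at most max_calls invocations per rolling period_s window."""
    lock = threading.Lock()
    call_times = []

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                now = time.monotonic()
                # Drop timestamps that fell outside the rolling window
                call_times[:] = [t for t in call_times if now - t < period_s]
                if len(call_times) >= max_calls:
                    raise RuntimeError("Rate limit exceeded; try again later")
                call_times.append(now)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Usage (illustrative): allow at most 10 requests per minute
# @rate_limited(max_calls=10, period_s=60.0)
# def _run(self, *args, **kwargs): ...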
examples/internet_meme/agent/meme_searcher/meme_seacher.py (1)
7-7: Remove the unused variable root_path.

The variable root_path is assigned but never used:

-CURRENT_PATH = root_path = Path(__file__).parents[0]
+CURRENT_PATH = Path(__file__).parents[0]

examples/internet_meme/run_cli.py (3)
31-33: Remove commented-out code for the SimpleVQA task.

The file contains commented-out code for SimpleVQA task configuration. This should be removed if it's no longer needed.

-# # 2. Simple VQA processing based on user input
-# task2 = simple_task(task_def_name='SimpleVQA', task_reference_name='simple_vqa', inputs={'user_instruction': task1.output('user_instruction')})
46-48: Add cleanup and graceful shutdown handling.

The CLI client initialization lacks cleanup handling for graceful shutdown.

 config_path = CURRENT_PATH.joinpath('configs')
 cli_client = DefaultClient(interactor=workflow, config_path=config_path, workers=[InputInterface()])
-cli_client.start_interactor()
+try:
+    cli_client.start_interactor()
+except KeyboardInterrupt:
+    logging.info("Shutting down CLI client...")
+finally:
+    cli_client.cleanup()
39-40: Improve workflow configuration documentation.

The comment "Configure workflow execution flow: Input -> VQA" is outdated and doesn't match the actual flow.

-# Configure workflow execution flow: Input -> VQA
+# Configure workflow execution flow: Input -> MemeSearcher -> MemeExplain
 workflow >> task1 >> task2 >> task3

examples/internet_meme/configs/workers/meme_explain.yaml (1)
1-2: Add newline at end of file and consider additional configuration.

- Add a newline at the end of the file (as per YAML best practices)
- Consider adding additional configuration options for better control:
  - Model parameters (temperature, max_tokens)
  - Rate limiting settings
  - Timeout configuration
Here's an improved configuration:
 name: MemeExplain
 llm: ${sub| gpt}
+model_params:
+  temperature: 0.7
+  max_tokens: 1000
+rate_limit:
+  requests_per_minute: 60
+timeout_seconds: 30
+

🧰 Tools
🪛 yamllint (1.35.1)
[error] 2-2: no new line character at the end of file
(new-line-at-end-of-file)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
examples/internet_meme/agent/input_interface/input_interface.py (1 hunks)
examples/internet_meme/agent/meme_explain/meme_explain.py (1 hunks)
examples/internet_meme/agent/meme_explain/sys_prompt.prompt (1 hunks)
examples/internet_meme/agent/meme_explain/user_prompt.prompt (1 hunks)
examples/internet_meme/agent/meme_searcher/meme_seacher.py (1 hunks)
examples/internet_meme/agent/simple_vqa/simple_vqa.py (1 hunks)
examples/internet_meme/compile_container.py (1 hunks)
examples/internet_meme/configs/llms/gpt.yml (1 hunks)
examples/internet_meme/configs/llms/text_res.yml (1 hunks)
examples/internet_meme/configs/tools/websearch.yml (1 hunks)
examples/internet_meme/configs/workers/meme_explain.yaml (1 hunks)
examples/internet_meme/configs/workers/meme_seacher.yaml (1 hunks)
examples/internet_meme/configs/workers/simple_vqa.yaml (1 hunks)
examples/internet_meme/container.yaml (1 hunks)
examples/internet_meme/run_cli.py (1 hunks)
🧰 Additional context used
🪛 yamllint (1.35.1)
examples/internet_meme/configs/workers/meme_seacher.yaml
[error] 3-3: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/configs/llms/gpt.yml
[error] 6-6: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/configs/workers/simple_vqa.yaml
[error] 2-2: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/configs/tools/websearch.yml
[error] 5-5: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/configs/llms/text_res.yml
[error] 6-6: no new line character at the end of file
(new-line-at-end-of-file)
examples/internet_meme/configs/workers/meme_explain.yaml
[error] 2-2: no new line character at the end of file
(new-line-at-end-of-file)
🪛 Ruff (0.8.2)
examples/internet_meme/compile_container.py
10-10: Redefinition of unused Path from line 2. Remove definition: Path (F811)
examples/internet_meme/agent/simple_vqa/simple_vqa.py
1-1: pathlib.Path imported but unused. Remove unused import: pathlib.Path (F401)

2-2: typing.List imported but unused. Remove unused import: typing.List (F401)

8-8: omagent_core.models.llms.prompt.parser.StrParser imported but unused. Remove unused import: omagent_core.models.llms.prompt.parser.StrParser (F401)

11-11: omagent_core.utils.container.container imported but unused. Remove unused import: omagent_core.utils.container.container (F401)
examples/internet_meme/agent/meme_explain/meme_explain.py
47-47: Use self.stm(self.workflow_instance_id).get("search_info", None) instead of an if block. Replace with self.stm(self.workflow_instance_id).get("search_info", None) (SIM401)
🔇 Additional comments (2)
examples/internet_meme/configs/llms/gpt.yml (1)
2-2: Verify the model ID "gpt-4o".

The model ID gpt-4o is a valid, vision-capable OpenAI model, so no change is strictly required. Just confirm that the configured endpoint serves this model, particularly when a custom endpoint is supplied via custom_openai_endpoint.
examples/internet_meme/agent/meme_explain/meme_explain.py (1)
1-14: LGTM: Imports and path handling are well-structured.

The imports are appropriate for the functionality, and the path handling is done safely using Path.parents.
chat_complete_res = self.llm.generate(records=chat_message)

# Extract answer text from response
answer = chat_complete_res["choices"][0]["message"]["content"]
🛠️ Refactor suggestion
Add error handling for unexpected LLM response structure.
Accessing chat_complete_res["choices"][0]["message"]["content"] without checks may raise exceptions if the response is not as expected. Add error handling to manage unexpected responses.
Apply this diff to handle potential exceptions:

-answer = chat_complete_res["choices"][0]["message"]["content"]
+try:
+    answer = chat_complete_res["choices"][0]["message"]["content"]
+except (KeyError, IndexError, TypeError) as e:
+    # Log the failure and fall back to a default response
+    self.logger.error(f"Failed to parse LLM response: {e}")
+    answer = "I'm sorry, I couldn't process your request."
if self.stm(self.workflow_instance_id).get('image_cache', None):
    img = self.stm(self.workflow_instance_id)['image_cache']['<image_0>']
🛠️ Refactor suggestion
Add error handling for missing image key in cache.
When accessing '<image_0>' in image_cache, there's a risk of a KeyError if the key doesn't exist. Consider adding a check to handle this scenario gracefully.
Apply this diff to prevent a potential KeyError:
if self.stm(self.workflow_instance_id).get('image_cache', None):
- img = self.stm(self.workflow_instance_id)['image_cache']['<image_0>']
+ image_cache = self.stm(self.workflow_instance_id)['image_cache']
+ img = image_cache.get('<image_0>')
+ if img:
+ # Add base64 encoded image as second message
+ chat_message.append(Message(
+ role="user",
+ message_type='image',
+ content=[Content(
+ type="image_url",
+ image_url={
+ "url": f"data:image/jpeg;base64,{encode_image(img)}"
+ },
+ )]
+ ))
+ else:
+ # Handle the case where the image is missing
+ pass # Optionally log a warning or take alternative action
@@ -0,0 +1,3 @@
name: MemeSearcher
🛠️ Refactor suggestion
Correct the filename to meme_searcher.yaml for consistency.

The filename meme_seacher.yaml appears to have a typo. Renaming it to meme_searcher.yaml will maintain consistency with the component's name MemeSearcher and improve clarity. (The same typo appears in meme_seacher.py, so the two renames should be kept in sync if addressed.)
Now, it's your turn to complete the task.
Give anwer using the language according to the user's answer.
Fix grammar and spelling issues in the English instructions.
The instructions contain spelling and grammar errors that should be corrected.
Apply this diff to fix the issues:
 Now, it's your turn to complete the task.
-Give anwer using the language according to the user's answer.
+Give an answer using language that matches the user's response.
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Now, it's your turn to complete the task.
Give an answer using language that matches the user's response.
你是一个互联网网络梗百科专家。我会提供一些在网络上搜索到的关于某个梗的解释以及一些相关的使用例子,你的任务是根据网络的信息生成这个网络梗的百科页面。需要包含的信息为:

1. 网络梗的介绍,解释出处
2. 关于这个梗的3个使用案例,包括来源和使用例子的内容。如果搜到的信息没有例子,则创造三个例子,这种情况不需要输出来源。

(Roughly: "You are an internet meme encyclopedia expert. I will provide explanations of a meme found on the web along with some usage examples; your task is to generate an encyclopedia page for the meme containing: 1. an introduction to the meme and its origin; 2. three usage examples with source and content. If the search results contain no examples, invent three and omit the source.")
🛠️ Refactor suggestion
Consider adding guidelines for handling sensitive content and incomplete information
The prompt should include:
- Guidelines for handling sensitive, inappropriate, or offensive meme content
- Instructions for cases where information is incomplete or ambiguous
- Criteria for verifying the reliability of sources
Would you like me to propose additional prompt text addressing these concerns? An illustrative sketch follows.
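For example, guideline items along these lines could be appended to sys_prompt.prompt, continuing its numbered list (the wording is illustrative, not part of the PR):

3. 如果梗涉及敏感、冒犯或不当内容,用中立的措辞简要说明,不要复述或扩写冒犯性内容。(If the meme involves sensitive or offensive content, describe it briefly in neutral terms; do not repeat or elaborate on the offensive material.)
4. 如果搜到的信息不完整、模糊或相互矛盾,明确指出不确定的部分,不要编造出处或事实。(If the retrieved information is incomplete, ambiguous, or contradictory, state clearly which parts are uncertain; do not fabricate sources or facts.)
5. 优先引用可信来源,无法核实的信息需注明"未经证实"。(Prefer reliable sources, and mark unverifiable information as "unverified".)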
execution_status, execution_results = self.tool_manager.execute_task(
    task=search_query
)
self.callback.send_block(agent_id=self.workflow_instance_id, msg='Using web search tool to search for meme information')
logging.info(execution_results)
🛠️ Refactor suggestion
Improve error handling and logging for tool execution
The tool execution could benefit from better error handling and logging:
- Add timeout handling
- Log execution failures with details
- Add retry mechanism for transient failures
execution_status, execution_results = self.tool_manager.execute_task(
- task=search_query
+ task=search_query,
+ timeout=self.config.get("search_timeout", 30),
+ retries=self.config.get("max_retries", 3)
)
self.callback.send_block(agent_id=self.workflow_instance_id, msg='Using web search tool to search for meme information')
- logging.info(execution_results)
+ if execution_status == "success":
+ logging.info("Search completed successfully: %s", execution_results)
+ else:
+ logging.error("Search failed: %s", execution_results)
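Note that the timeout and retries keyword arguments in the diff above are assumptions; if execute_task does not accept them, a small wrapper can provide retries externally. A minimal sketch (the helper name and backoff policy are illustrative):

import logging
import time

def execute_with_retries(tool_manager, task, retries=3, backoff_s=1.0):
    """Retry transient tool failures with simple exponential backoff."""
    status, results = None, None
    for attempt in range(1, retries + 1):
        status, results = tool_manager.execute_task(task=task)
        if status == "success":
            return status, results
        logging.warning("Tool execution failed (attempt %d/%d): %s", attempt, retries, results)
        if attempt < retries:
            time.sleep(backoff_s * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
    return status, results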
workflow >> task1 >> task2 >> task3

# Register workflow
workflow.register(True)
🛠️ Refactor suggestion
Add error handling for workflow registration
The workflow registration lacks error handling and could fail silently.
-workflow.register(True)
+try:
+ workflow.register(True)
+except Exception as e:
+ logging.error("Failed to register workflow: %s", e)
+ raise
📝 Committable suggestion

try:
    workflow.register(True)
except Exception as e:
    logging.error("Failed to register workflow: %s", e)
    raise
def _run(self, *args, **kwargs):
    """Process user input and generate outfit recommendations.

    Retrieves user instruction and weather information from workflow context,
    generates outfit recommendations using the LLM model, and returns the
    recommendations while also sending them via callback.

    Args:
        *args: Variable length argument list
        **kwargs: Arbitrary keyword arguments

    Returns:
        str: Generated outfit recommendations
    """
Update docstring to reflect meme explanation functionality
The current docstring refers to "outfit recommendations" which appears to be copied from another module. This should be updated to reflect the actual meme explanation functionality.
Consider updating the docstring:
- """Process user input and generate outfit recommendations.
+ """Process user input and generate meme explanations.
- Retrieves user instruction and weather information from workflow context,
- generates outfit recommendations using the LLM model, and returns the
- recommendations while also sending them via callback.
+ Retrieves user instruction and search information from workflow context,
+ generates meme explanations using the LLM model, and returns the
+ explanation while also sending it via callback.
Args:
*args: Variable length argument list
**kwargs: Arbitrary keyword arguments
Returns:
- str: Generated outfit recommendations
+ str: Generated meme explanation
"""
📝 Committable suggestion

def _run(self, *args, **kwargs):
    """Process user input and generate meme explanations.

    Retrieves user instruction and search information from workflow context,
    generates meme explanations using the LLM model, and returns the
    explanation while also sending it via callback.

    Args:
        *args: Variable length argument list
        **kwargs: Arbitrary keyword arguments

    Returns:
        str: Generated meme explanation
    """
@registry.register_worker()
class MemeExplain(BaseWorker, BaseLLMBackend):
    llm: OpenaiGPTLLM

    prompts: List[PromptTemplate] = Field(
        default=[
            PromptTemplate.from_file(
                CURRENT_PATH.joinpath("sys_prompt.prompt"), role="system"
            ),
            PromptTemplate.from_file(
                CURRENT_PATH.joinpath("user_prompt.prompt"), role="user"
            ),
        ]
    )
Add validation for prompt file existence
The code loads prompt files without checking if they exist first. This could lead to runtime errors if the files are missing.
Consider adding validation:
sys_prompt_path = CURRENT_PATH.joinpath("sys_prompt.prompt")
user_prompt_path = CURRENT_PATH.joinpath("user_prompt.prompt")
for prompt_path in (sys_prompt_path, user_prompt_path):
    if not prompt_path.exists():
        raise FileNotFoundError(f"Prompt file missing: {prompt_path}")

prompts: List[PromptTemplate] = Field(
    default=[
        PromptTemplate.from_file(sys_prompt_path, role="system"),
        PromptTemplate.from_file(user_prompt_path, role="user"),
    ]
)
Also, consider adding input validation for the prompt content to prevent potential security issues; one minimal check is sketched below.
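A minimal sketch of such a content check, assuming the {{info}} and {{name}} placeholders from user_prompt.prompt are what each template must provide (the helper name is illustrative):

from pathlib import Path

def validate_prompt_file(path: Path, required=()):
    """Ensure a prompt file exists and contains the expected placeholders."""
    if not path.exists():
        raise FileNotFoundError(f"Prompt file missing: {path}")
    text = path.read_text(encoding="utf-8")
    missing = [placeholder for placeholder in required if placeholder not in text]
    if missing:
        raise ValueError(f"{path} is missing placeholders: {missing}")
    return text

# Usage (illustrative):
# validate_prompt_file(CURRENT_PATH.joinpath("user_prompt.prompt"),
#                      required=("{{info}}", "{{name}}"))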
# Retrieve user instruction and optional weather info from workflow context
user_instruct = self.stm(self.workflow_instance_id).get("user_instruction")
search_info = self.stm(self.workflow_instance_id)["search_info"] if "search_info" in self.stm(self.workflow_instance_id) else None
# Generate outfit recommendations using LLM with weather and user input
chat_complete_res = self.simple_infer(info=str(search_info), name=user_instruct)

# Extract recommendations from LLM response
outfit_recommendation = chat_complete_res["choices"][0]["message"]["content"]

# Send recommendations via callback and return
self.callback.send_answer(agent_id=self.workflow_instance_id, msg=outfit_recommendation)

self.stm(self.workflow_instance_id).clear()
return outfit_recommendation
🛠️ Refactor suggestion
Add error handling and improve code robustness
Several improvements can be made to make the code more robust:
- The search_info retrieval can be simplified (as suggested by static analysis)
- Missing error handling for LLM failures
- No validation of user_instruct
- Missing type hints
Consider these improvements:
+def _run(self, *args, **kwargs) -> str:
     # Retrieve user instruction and optional weather info from workflow context
     user_instruct = self.stm(self.workflow_instance_id).get("user_instruction")
+    if not user_instruct:
+        raise ValueError("User instruction is required")
+
-    search_info = self.stm(self.workflow_instance_id)["search_info"] if "search_info" in self.stm(self.workflow_instance_id) else None
+    search_info = self.stm(self.workflow_instance_id).get("search_info")
     # Generate outfit recommendations using LLM with weather and user input
-    chat_complete_res = self.simple_infer(info=str(search_info), name=user_instruct)
+    try:
+        chat_complete_res = self.simple_infer(info=str(search_info), name=user_instruct)
+    except Exception as e:
+        self.stm(self.workflow_instance_id).clear()
+        raise RuntimeError(f"Failed to generate meme explanation: {str(e)}")
     # Extract recommendations from LLM response
-    outfit_recommendation = chat_complete_res["choices"][0]["message"]["content"]
+    try:
+        meme_explanation = chat_complete_res["choices"][0]["message"]["content"]
+    except (KeyError, IndexError) as e:
+        self.stm(self.workflow_instance_id).clear()
+        raise RuntimeError(f"Invalid response format from LLM: {str(e)}")
     # Send recommendations via callback and return
-    self.callback.send_answer(agent_id=self.workflow_instance_id, msg=outfit_recommendation)
+    self.callback.send_answer(agent_id=self.workflow_instance_id, msg=meme_explanation)
     self.stm(self.workflow_instance_id).clear()
-    return outfit_recommendation
+    return meme_explanation
📝 Committable suggestion

def _run(self, *args, **kwargs) -> str:
    # Retrieve user instruction and optional weather info from workflow context
    user_instruct = self.stm(self.workflow_instance_id).get("user_instruction")
    if not user_instruct:
        raise ValueError("User instruction is required")

    search_info = self.stm(self.workflow_instance_id).get("search_info")

    # Generate outfit recommendations using LLM with weather and user input
    try:
        chat_complete_res = self.simple_infer(info=str(search_info), name=user_instruct)
    except Exception as e:
        self.stm(self.workflow_instance_id).clear()
        raise RuntimeError(f"Failed to generate meme explanation: {str(e)}")

    # Extract recommendations from LLM response
    try:
        meme_explanation = chat_complete_res["choices"][0]["message"]["content"]
    except (KeyError, IndexError) as e:
        self.stm(self.workflow_instance_id).clear()
        raise RuntimeError(f"Invalid response format from LLM: {str(e)}")

    # Send recommendations via callback and return
    self.callback.send_answer(agent_id=self.workflow_instance_id, msg=meme_explanation)

    self.stm(self.workflow_instance_id).clear()
    return meme_explanation
🧰 Tools
🪛 Ruff (0.8.2)
47-47: Use self.stm(self.workflow_instance_id).get("search_info", None) instead of an if block. Replace with self.stm(self.workflow_instance_id).get("search_info", None) (SIM401)