
Disable code_interpreter in tool-calling agent #407

Open · 1 of 2 tasks
subramen opened this issue Nov 8, 2024 · 0 comments
Comments

subramen (Contributor) commented Nov 8, 2024

System Info

..

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

I am setting up a search agent exactly as shown here: https://github.com/meta-llama/llama-stack-apps/blob/7c92eb274924b38b110ca1759dd487817980e5af/examples/agents/client.py#L38

Although the agent is given no instructions to write or execute code, it automatically invokes code_interpreter and errors out with `AssertionError: Tool code_interpreter not found`. This appears to happen whenever the assistant response contains any code.

How do I explicitly disable code_interpreter?
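For reference, here is a minimal sketch of the setup, condensed from the linked example. The `AgentConfig` fields, the dict-style search tool definition, and the `Agent` import path follow that snapshot of the client and should be treated as assumptions if the API has drifted since:

```python
import os

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.types.agent_create_params import AgentConfig

client = LlamaStackClient(base_url="http://localhost:5000")

agent_config = AgentConfig(
    model="Llama3.1-8B-Instruct",
    instructions="You are a helpful assistant",
    # Only the search tool is registered; code_interpreter is never
    # listed here, yet the model still emits code_interpreter tool calls.
    tools=[
        {
            "type": "brave_search",
            "engine": "brave",
            "api_key": os.environ["BRAVE_SEARCH_API_KEY"],
        }
    ],
    tool_choice="auto",
    tool_prompt_format="json",
    enable_session_persistence=False,
)
agent = Agent(client, agent_config)
```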

Error logs

role='user' content='Query: What methods are best for finetuning llama?\n\nSpecialist answers:Based on the provided context, it appears that finetuning LLaMA is not directly mentioned in the code snippets. However, I can infer that finetuning LLaMA is likely to be performed using the `llama_recipes.finetuning` module.\n\nIn the `finetuning.py` file, the `main` function is imported from `llama_recipes.finetuning`, which suggests that this file contains the code for finetuning LLaMA.\n\nTo finetun...<more>...Guard.\n\nAs for finetuning Llama in general, it seems that the provided context only provides information on finetuning Llama Guard, which is a specific application of the Llama model. For general finetuning of Llama, you may need to refer to the official documentation or other external resources.\n\nHowever, based on the provided context, it seems that the `finetune_vision_model.md` file in the `quickstart` folder may provide some information on finetuning Llama for vision tasks.' context=None


role='assistant' content='' stop_reason=<StopReason.end_of_turn: 'end_of_turn'> tool_calls=[ToolCall(call_id='83e9d075-144b-40bc-9319-ca073ad562f9', tool_name=<BuiltinTool.code_interpreter: 'code_interpreter'>, arguments={'code': 'Based on the provided context, it appears that finetuning LLaMA is not directly mentioned in the code snippets. However, I can infer that finetuning LLaMA is likely to be performed using the `llama_recipes.finetuning` module.\n\nTo finetune LLaMA, you would likely need to use the `train_config` from `llama_recipes.configs.training` module, which is imported in the `configs` module.\n\nHere are the possible methods for finetuning LLaMA:\n\n1. Using the `train_config` from `llama_recipes.configs.training` module:\n    ```python\nfrom llama_recipes.configs.training import train_config\n\n# Load the train config from a YAML file\nconfig = load_config(config_path="./config.yaml")\n\n# Update the config with the finetuning configuration\nconfig.update(train_config)\n\n# Use the updated config to finetune LLaMA\nmain(config)\n```\n\n2. Using the `lora_config` from `llama_recipes.configs.peft` module:\n    ```python\nfrom llama_recipes.configs.peft import lora_config\n\n# Load the lora config from a YAML file\nconfig = load_config(config_path="./config.yaml")\n\n# Update the config with the lora configuration\nconfig.update(lora_config)\n\n# Use the updated config to finetune LLaMA\nmain(config)\n```\n\n3. Using the `fsdp_config` from `llama_recipes.configs.fsdp` module:\n    ```python\nfrom llama_recipes.configs.fsdp import fsdp_config\n\n# Load the fsdp config from a YAML file\nconfig = load_config(config_path="./config.yaml")\n\n# Update the config with the fsdp configuration\nconfig.update(fsdp_config)\n\n# Use the updated config to finetune LLaMA\nmain(config)\n```\n\n4. Using the `wandb_config` from `llama_recipes.configs.wandb` module:\n    ```python\nfrom llama_recipes.configs.wandb import wandb_config\n\n# Load the wandb config from a YAML file\nconfig = load_config(config_path="./config.yaml")\n\n# Update the config with the wandb configuration\nconfig.update(wandb_config)\n\n# Use the updated config to finetune LLaMA\nmain(config)\n```\n\n5. Using the `quantization_config` from `llama_recipes.configs.quantization` module:\n    ```python\nfrom llama_recipes.configs.quantization import quantization_config\n\n# Load the quantization config from a YAML file\nconfig = load_config(config_path="./config.yaml")\n\n# Update the config with the quantization configuration\nconfig.update(quantization_config)\n\n# Use the updated config to finetune LLaMA\nmain(config)\n```\n\nNote that these are just possible methods and may require additional configuration and setup. The actual finetuning process may involve more steps and parameters, and may require additional libraries and dependencies.\n\nHowever, based on the provided context, it seems that the `finetune_vision_model.md` file in the `quickstart` folder may provide some information on finetuning LLaMA for vision tasks.'})]



Traceback (most recent call last):
  File "/opt/conda/envs/llamastack-vllm-stack/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 206, in sse_generator
    async for item in await event_gen:
  File "/opt/conda/envs/llamastack-vllm-stack/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agents.py", line 138, in _create_agent_turn_streaming
    async for event in agent.create_and_execute_turn(request):
  File "/opt/conda/envs/llamastack-vllm-stack/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agent_instance.py", line 179, in create_and_execute_turn
    async for chunk in self.run(
  File "/opt/conda/envs/llamastack-vllm-stack/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agent_instance.py", line 252, in run
    async for res in self._run(
  File "/opt/conda/envs/llamastack-vllm-stack/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agent_instance.py", line 560, in _run
    result_messages = await execute_tool_call_maybe(
  File "/opt/conda/envs/llamastack-vllm-stack/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agent_instance.py", line 824, in execute_tool_call_maybe
    assert name in tools_dict, f"Tool {name} not found"
AssertionError: Tool code_interpreter not found
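The assertion is the last frame of `execute_tool_call_maybe` in `agent_instance.py`. One possible mitigation, sketched here purely from the traceback (the surrounding variable names such as `tool_call`, and the `ToolResponseMessage` usage, are assumptions rather than the actual source), would be to feed the failure back to the model instead of raising, so the turn can continue:

```python
# Hypothetical guard inside execute_tool_call_maybe, replacing:
#     assert name in tools_dict, f"Tool {name} not found"
if name not in tools_dict:
    # Return the failure as a tool response so the model can retry
    # without the unavailable tool, instead of killing the whole turn.
    return [
        ToolResponseMessage(
            call_id=tool_call.call_id,
            tool_name=name,
            content=f"Tool '{name}' is not available in this agent.",
        )
    ]
```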

Expected behavior

Don't call code_interpreter; just use the search tool.
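Until there is an explicit config switch, one partial workaround I'm considering (my own assumption, not a documented fix) is forbidding code at the prompt level via `instructions`. This may reduce, but probably won't eliminate, the spurious calls:

```python
agent_config = AgentConfig(
    model="Llama3.1-8B-Instruct",
    # Prompt-level mitigation only: the model can still ignore this.
    instructions=(
        "You are a helpful search assistant. Never write or execute code; "
        "answer using only the brave_search tool."
    ),
    tools=[search_tool],  # the same dict-style search definition as above
    tool_choice="auto",
    tool_prompt_format="json",
    enable_session_persistence=False,
)
```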
