Add ReAct agent and Streamlit web app
hemajv committed Oct 28, 2024
1 parent 648baa1 commit c86044b
Showing 18 changed files with 587 additions and 0 deletions.
23 changes: 23 additions & 0 deletions config.yaml
@@ -0,0 +1,23 @@
tools:
  - name: "constitution_tool"
    description: "Answers questions about the U.S. Constitution."
    url: "https://my.app/v1/completions"
    config:
      method: 'POST'
      headers:
        'Content-Type': 'application/json'
        'Authorization': 'Basic 12345'
      body:
        prompt: '{{prompt}}'
      responseParser: 'json.answer'
      responseMetadata:
        - name: 'sources'
          loc: 'json.sources'
      responseFormat:
        agent: '{{response}}'
        json:
          - "response"
          - "sources"
    examples:
      - "What is the definition of a citizen in the U.S. Constitution?"
      - "What article describes the power of the judiciary branch?"
123 changes: 123 additions & 0 deletions react_agent/README.md
@@ -0,0 +1,123 @@
# Instructions

## Prerequisites

To run the ReAct agent setup, the following tools must be installed:
* `poetry`
* `oc` (optional)
* [Ollama](https://ollama.com/)

**Initial Steps**

* Clone the repository to your local system: `git clone git@github.com:redhat-et/llm-agents.git`

* Change directory to the project: `cd llm-agents`

* Install the project dependencies: `poetry install`

## Local Usage

To run the setup locally, follow the steps below:

1. Create a copy of `sample.env`, rename it to `.env`, and fill in any environment variables
    * The `sample.env` is configured to use Ollama by default, so if Ollama is your LLM provider you don't need to change anything after copying. To use a different LLM, update `OPENAI_URI` with its model serving endpoint.

2. The `config.yaml` provided in the repo includes an example of a dummy `Constitution Tool`. Update the `config.yaml` with information for any tool/API endpoints you'd like the ReAct agent to interact with.

***NOTE**: If you do not wish to use any APIs, remove the "tools" header completely.*

3. If using Ollama, ensure that the model specified in `OPENAI_MODEL` has been pulled into your Ollama instance with `ollama pull <model>`.

4. Run `ollama serve`. If you get an error like "Port 11434 already in use" then Ollama is running and you can move to the next step.

5. In a new terminal, run `poetry run mlflow server` to spin up the MLflow tracking server. This runs a local MLflow instance on your system that logs the traces/outputs of the agent application. Navigate to the URL provided to view the MLflow UI; the outputs are logged under the `ReAct Agent` experiment, in the `Traces` tab.

6. In a new terminal, run `poetry run python react_agent/api.py` to spin up the agent API server.

7. In a new terminal, run `poetry run streamlit run webapp/intro.py` and navigate to the URL provided to use the UI application. You can now chat with the application by asking a question and watch the outputs being generated. (A scripted alternative to the web UI is sketched below.)
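
If you prefer to script against the agent instead of using the web UI, here is a minimal sketch using `httpx` (already used elsewhere in the project); it assumes the defaults from `react_agent/constants.py` (port `2113`) and the request/response shapes in `react_agent/apispec.py`:

```python
import httpx

# POST a prompt to the agent API started in step 6 (default port 2113).
response = httpx.post(
    "http://localhost:2113/react",
    json={"prompt": "What is the square of 12?"},
    timeout=120.0,  # agent runs involve LLM calls and can be slow
)
response.raise_for_status()
print(response.json()["answer"])
```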

## Adding Tools for the ReAct agent

There are two ways you can add tools to the ReAct agent's toolbelt.
Note that for any of these tools, currently only a single input and a single output are supported.
We may add support for structured inputs in the future.

### Option 1: Custom Python Tool (Easiest)
To create a custom Python tool, open `react_agent/tools`.
Here you will see a few different categories of tools already defined.
In the `math.py` file, you'll see the most basic definition of a tool called `ComputeSquareTool`, which computes the square of a number.
You can use this tool as a template for your own.

```python
from typing import Optional, Type

from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field


class ComputeSquareInput(BaseModel):
    """Compute square tool input structure."""

    number: int = Field(description="number to square")


class ComputeSquareTool(BaseTool):
    """Tool for computing the square of a number."""

    name: str = "compute_square_tool"
    description: str = "Compute the square of a number"
    args_schema: Type[BaseModel] = ComputeSquareInput

    def _run(self, number: str, run_manager: Optional[CallbackManagerForToolRun] = None):
        """Use the tool."""
        return float(number) ** 2

    async def _arun(self, number: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("Tool does not support async")
```
The `name` and `description` of the tool are provided to the LLM when it is invoked and describe what the tool is and what it can be used for.

The `args_schema` describes the inputs the tool expects from the LLM and should be a pydantic class like `ComputeSquareInput`.
For the agents we've defined, only a single input can be passed to the tool.
Other agents can support structured (multi-input) schemas, but the ones in this PoC do not.

The execution of the tool happens in the `_run` method.
The `_run` method should take the input you described in your `args_schema` as a string and return a value that the agent can use to make its next decision.
Values returned by the tool will be converted to a string before being used by the agent.
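
Defining the class is not quite enough on its own; the agent discovers tools through `import_tools` in `react_agent/tools/__init__.py`. A minimal sketch of registering a new tool there (the `ComputeCubeTool` name is hypothetical, standing in for whatever tool you defined):

```python
# react_agent/tools/__init__.py (sketch)
# ComputeCubeTool is a hypothetical tool defined alongside ComputeSquareTool in math.py.
from react_agent.tools.math import ComputeCubeTool, ComputeSquareTool

# Inside import_tools(), add an instance of your tool to the base tools:
base_tools = [ComputeSquareTool(), ComputeCubeTool()]
```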

### Option 2: Add an API
If you have a running API (such as a web search API), you can provide it as a tool to the ReAct agent.
Currently this only supports APIs with JSON inputs and outputs.
In the `config.yaml` file, replace the `constitution_tool` entry with the config for your tool.
Every tool you define needs the following fields (unless marked "Optional").

```yaml
tools:
  - name: "constitution_tool" # A unique name for your tool
    description: "Answers questions about the U.S. Constitution." # A sentence or two describing the purpose of your tool
    url: "https://www.my.app/v1/completions" # URL to your API endpoint
    config:
      method: 'POST' # Type of request (POST, GET, etc.)
      headers: # Headers to send with the request
        'Content-Type': 'application/json' # Content type must be application/json
        'Authorization': 'Basic 12345' # Optional: Authorization header (base64 encoded)
      body: # Key-value pairs to send to the endpoint
        prompt: '{{prompt}}' # The agent's prompt to the tool is injected wherever {{prompt}} appears
        other-key: 'constant-value' # Optional: any other key-value pairs to send
      responseParser: 'json.answer' # Path to the answer to return to the agent
      responseMetadata: # Optional: any metadata in the response to capture for use in responseFormat
        - name: 'sources' # Metadata name
          loc: 'json.sources' # Metadata path in the response
      responseFormat: # Formats for providing the tool's response to the agent; must have agent and json keys
        agent: '{{response}}' # Response given to the ReAct agent; the value parsed with responseParser is injected wherever {{response}} appears
        json: # Response given to the Router agent, which in turn is given to the user
          - "response" # The response parsed with responseParser
          - "sources" # Optional: any additional keys to return; must match a name field in responseMetadata
    examples: # Optional (not yet implemented): example questions your tool can answer
      - "What is the definition of a citizen in the U.S. Constitution?"
      - "What article describes the power of the judiciary branch?"
```
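
The `responseParser` and `loc` values are dot-separated paths into the JSON body of the API's response. The actual lookup is handled by `CommonTool` (see `react_agent/tools/common.py`); the sketch below is only an illustration of the path semantics, assuming the leading `json` segment refers to the response body itself:

```python
def resolve_path(path: str, body: dict):
    """Walk a dot-separated path like 'json.answer' through a parsed JSON body."""
    value = body
    for key in path.split(".")[1:]:  # skip the leading 'json' segment
        value = value[key]
    return value

# resolve_path("json.answer", {"answer": "Article III", "sources": ["..."]})
# -> "Article III"
```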
3 changes: 3 additions & 0 deletions react_agent/__init__.py
@@ -0,0 +1,3 @@
"""Init."""

__version__ = "0.1.0"
17 changes: 17 additions & 0 deletions react_agent/agent/__init__.py
@@ -0,0 +1,17 @@
"""Define agents."""

from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder, PromptTemplate

from react_agent.constants import REACT_PROMPT
from react_agent.common.llm import chat_llm, completion_llm
from react_agent.tools import import_tools


def react_agent():
    """Create a ReAct agent."""
    response_format = "agent"
    tools = import_tools(common_tools_kwargs={"response_format": response_format})
    prompt = PromptTemplate.from_template(REACT_PROMPT)
    agent = create_react_agent(completion_llm, tools, prompt)
    agent_executor = AgentExecutor(name="ReActAgent", agent=agent, tools=tools, handle_parsing_errors=True)
    return agent_executor
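
For quick experiments outside the API server, the executor can be invoked directly; a sketch mirroring how `react_agent/api.py` calls it:

```python
from react_agent.agent import react_agent

# Build the executor once; it loads tools from config.yaml at creation time.
executor = react_agent()
result = executor.invoke({"input": "What is the square of 7?", "chat_history": []})
print(result["output"])
```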
56 changes: 56 additions & 0 deletions react_agent/api.py
@@ -0,0 +1,56 @@
from dotenv import load_dotenv

load_dotenv()

import logging
from contextlib import asynccontextmanager

import mlflow
import uvicorn
from fastapi import FastAPI

from react_agent.agent import react_agent
from react_agent.apispec import ReActRequest, ReActResponse
from react_agent.constants import APP_HOST, APP_PORT

logger = logging.getLogger(__name__)

# Start logging
mlflow.langchain.autolog(log_traces=True)

agents = {}


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Run startup sequence."""
    # Add the agents to the application components
    agents["react"] = react_agent()

    logger.info("Startup sequence successful")
    yield
    agents.clear()


app = FastAPI(lifespan=lifespan)


@app.post("/react", response_model=ReActResponse)
async def react(request: ReActRequest):
    """Interact with the ReAct agent."""
    agent = agents["react"]

    # Send the prompt to the agent
    agent_response = agent.invoke({"input": request.prompt, "chat_history": []})
    answer = agent_response["output"]
    response = ReActResponse(answer=answer)
    return response


@app.get("/health")
def health():
    """Perform a service health check."""
    return {"status": "ok"}

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(name)s - %(message)s")
    uvicorn.run(app, port=APP_PORT, host=APP_HOST)
17 changes: 17 additions & 0 deletions react_agent/apispec.py
@@ -0,0 +1,17 @@
from typing import Optional

from pydantic import BaseModel


class ReActRequest(BaseModel):
    """Request for ReAct endpoint."""

    prompt: str
    tools: list[str] = []


class ReActResponse(BaseModel):
    """Response for ReAct endpoint."""

    answer: str | dict
    tools_used: list[str] = []
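
For reference, a sketch of the wire format these models imply (assuming pydantic v2's `model_dump`):

```python
from react_agent.apispec import ReActRequest, ReActResponse

# Request body for POST /react:
ReActRequest(prompt="What is the square of 12?").model_dump()
# -> {'prompt': 'What is the square of 12?', 'tools': []}

# Response body returned by the endpoint:
ReActResponse(answer="144").model_dump()
# -> {'answer': '144', 'tools_used': []}
```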
50 changes: 50 additions & 0 deletions react_agent/common/llm.py
@@ -0,0 +1,50 @@
from typing import Any, List, Mapping, Optional

import httpx
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain_core.messages import HumanMessage
from langchain_openai.chat_models import ChatOpenAI

from react_agent.constants import OPENAI_IGNORE_SSL, OPENAI_MODEL, OPENAI_URI


class CustomOpenAI(LLM):
    """Class to define interaction with the hosted OpenAI instance at a specified URI without SSL verification."""

    base_url: str
    model: str
    api_key: str
    http_client: Optional[httpx.Client] = None
    temperature: float = 0.8

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ):
        # Create the request via a ChatOpenAI client pointed at the configured URI
        llm = ChatOpenAI(
            base_url=self.base_url, model=self.model, api_key=self.api_key, http_client=self.http_client, temperature=self.temperature
        )
        request = [HumanMessage(content=prompt)]
        response = llm.invoke(request, stop=stop, **kwargs).content
        return response

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"base_url": self.base_url, "model": self.model}


verify = False if OPENAI_IGNORE_SSL else True
http_client = httpx.Client(verify=verify)

chat_llm = ChatOpenAI(base_url=OPENAI_URI, model=OPENAI_MODEL, http_client=http_client, temperature=0, api_key="NONE")
completion_llm = CustomOpenAI(base_url=OPENAI_URI, model=OPENAI_MODEL, http_client=http_client, temperature=0, api_key="NONE")
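
A quick smoke test of the completion client, assuming Ollama (or another OpenAI-compatible server) is live at `OPENAI_URI` with `OPENAI_MODEL` available:

```python
from react_agent.common.llm import completion_llm

# CustomOpenAI._call wraps ChatOpenAI and returns the completion as a plain string.
print(completion_llm.invoke("Reply with the single word: pong"))
```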
56 changes: 56 additions & 0 deletions react_agent/constants.py
@@ -0,0 +1,56 @@
import os
import pathlib
import yaml

DIRECTORY_PATH = pathlib.Path(os.path.dirname(__file__)).parent

with open(DIRECTORY_PATH / "config.yaml") as f:
    CONFIG = yaml.load(f, yaml.SafeLoader)

APP_HOST = os.environ.get("APP_HOST", "0.0.0.0")
APP_PORT = int(os.environ.get("APP_PORT", "2113"))

OPENAI_URI = os.environ.get("OPENAI_URI", "http://localhost:11434/v1")
OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "mistral")
OPENAI_IGNORE_SSL = os.environ.get("OPENAI_IGNORE_SSL", False)

ERROR_MESSAGE = "Unable to process request, please try again later."

REACT_PROMPT = """Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant does not speak in character and uses character tools when asked to speak in character.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
TOOLS:
------
Assistant has access to the following tools:
{tools}
To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```
Begin!
Previous conversation history:
{chat_history}
New input: {input}
{agent_scratchpad}"""
32 changes: 32 additions & 0 deletions react_agent/tools/__init__.py
@@ -0,0 +1,32 @@
"""Define and import tools."""

import logging

from react_agent.constants import CONFIG
from react_agent.tools.common import CommonTool
from react_agent.tools.math import ComputeSquareTool

logger = logging.getLogger(__name__)


def create_common_tools(**kwargs):
    """Create common tools."""
    logger.info("Creating Common Tools from config.yaml")
    tool_configs = CONFIG.get("tools", [])
    tools = []
    for tool_config in tool_configs:
        tool = CommonTool(**tool_config, **kwargs)
        tools.append(tool)
        logger.info(f"Created {tool_config['name']} tool")
    return tools


def import_tools(all_return_direct: bool = False, common_tools_kwargs: dict = {}):
    """Gather tools."""
    base_tools = [ComputeSquareTool()]
    common_tools = create_common_tools(**common_tools_kwargs)
    tools = base_tools + common_tools
    if all_return_direct:
        for tool in tools:
            tool.return_direct = True
    return tools