Commit ca05b27: Merge branch 'main' into no_strong_cache

ViStefan authored Oct 30, 2024
2 parents d79c6b1 + 19837e7

Showing 39 changed files with 2,382 additions and 1,385 deletions.
3 changes: 2 additions & 1 deletion docs/source/choosing_llms.rst

@@ -30,7 +30,7 @@ That's why we have an ``init_llm`` function to help you set up the LLM.
  llm = init_llm(
      llm_framework=LLMFramework.LANGCHAIN,
      llm_provider=LLMProvider.ANTHROPIC,
-     llm_name="claude-3-5-sonnet-20240620",
+     llm_name="claude-3-5-sonnet-latest",
      llm_temperature=0
  )
  agent = ReActToolCallingMotleyAgent(llm=llm, tools=[...])
@@ -49,6 +49,7 @@ The currently supported LLM providers (:py:class:`motleycrew.common.enums.LLMProvider`)
- :py:class:`Together <motleycrew.common.enums.LLMProvider.TOGETHER>`
- :py:class:`Replicate <motleycrew.common.enums.LLMProvider.REPLICATE>`
- :py:class:`Ollama <motleycrew.common.enums.LLMProvider.OLLAMA>`
- :py:class:`Azure OpenAI <motleycrew.common.enums.LLMProvider.AZURE_OPENAI>`

Please raise an issue if you need to add support for another LLM provider.

1 change: 1 addition & 0 deletions docs/source/examples.rst

@@ -9,6 +9,7 @@ Examples
examples/research_agent
examples/validating_agent_output
examples/advanced_output_handling
examples/customer_support
examples/streaming_agent_output
examples/event_driven
autogen
73 changes: 73 additions & 0 deletions docs/source/examples/customer_support.rst

@@ -0,0 +1,73 @@
Customer support chatbot with Ray Serve
=======================================

This example demonstrates how to build a customer support chatbot using MotleyCrew and Ray Serve.
The chatbot is designed to answer customer queries based on a database of past issues and their resolutions.

The code for this example can be found `here <https://github.com/ShoggothAI/motleycrew/tree/main/motleycrew/applications/customer_support>`_.
Also, see the `blog post <https://blog.motleycrew.ai/blog/building-a-customer-support-chatbot-using-motleycrew-and-ray>`_ about this app.

Key Components
--------------

1. Issue Database

- Stores information about past issues and their solutions in a tree structure
- Intermediate nodes represent issue categories
- Leaf nodes represent individual issues
- Uses Kuzu to store and query the issue tree through our OGM (see :doc:`../knowledge_graph` for more details)

2. AI Support Agent

- Attempts to resolve customer issues based on past solutions
- Navigates the issue tree to find relevant information
- Can ask clarifying questions to the customer
- Proposes solutions or escalates to a human agent if necessary

3. Agent Tools

- IssueTreeViewTool: Allows the agent to navigate the issue tree
- CustomerChatTool: Enables the agent to ask additional questions to the customer
- ResolveIssueTool: Used to submit a solution or escalate to a human agent

4. Ray Serve Deployment

- Exposes the chatbot as an API
- Allows multiple customers to connect simultaneously
- Uses WebSockets over FastAPI for communication
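
The issue tree navigation that the IssueTreeViewTool provides can be sketched with a plain-Python tree. This is an illustrative toy, not the actual application code: the real app stores the tree in Kuzu, and the class and field names below are assumptions for the sketch.

.. code-block:: python

    # Illustrative sketch of an issue tree like the one IssueTreeViewTool
    # navigates: intermediate nodes are categories, leaves are issues.
    class IssueNode:
        def __init__(self, name, solution=None):
            self.name = name
            self.solution = solution  # set only on leaf nodes
            self.children = []

        def add(self, child):
            self.children.append(child)
            return child

        def view(self):
            """What the agent sees at this node: subtopics or a past solution."""
            if self.solution is not None:
                return f"Issue: {self.name}. Past solution: {self.solution}"
            return f"Category: {self.name}. Subtopics: {[c.name for c in self.children]}"

    root = IssueNode("Support issues")
    billing = root.add(IssueNode("Billing"))
    billing.add(IssueNode("Double charge", solution="Refund the duplicate charge"))

    print(root.view())
    print(billing.children[0].view())

The agent descends from the root one level at a time, which keeps each tool response small enough to fit comfortably in the LLM context.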

Implementation Details
----------------------

The support agent is implemented using the "every response is a tool call" design.
The agent loop can only end with a ResolveIssueTool call or when a constraint (e.g., number of iterations) is reached.
This is achieved by making the ResolveIssueTool an output handler.
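
The control flow this design produces can be sketched as a minimal loop: it only ends when the agent calls the resolve tool (the output handler) or when the iteration limit is hit. The function and tool names below mirror the tools listed above, but the loop itself is a simplified stand-in for the real agent executor.

.. code-block:: python

    # Minimal sketch of the "every response is a tool call" loop.
    def run_support_loop(agent_step, max_iterations=5):
        for _ in range(max_iterations):
            tool_name, tool_args = agent_step()  # every response is a tool call
            if tool_name == "ResolveIssueTool":
                return tool_args  # the output handler ends the loop
            # other tools (view the tree, ask the customer) feed back to the agent
        return {"escalate": True, "reason": "iteration limit reached"}

    # A stub "agent" that asks one clarifying question, then resolves:
    responses = iter([
        ("CustomerChatTool", {"question": "Which card was charged?"}),
        ("ResolveIssueTool", {"solution": "Refund the duplicate charge"}),
    ])
    result = run_support_loop(lambda: next(responses))
    print(result)  # {'solution': 'Refund the duplicate charge'}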

The Ray Serve deployment is configured using a simple decorator:

.. code-block:: python

    @serve.deployment(num_replicas=3, ray_actor_options={"num_cpus": 1, "num_gpus": 0})
    class SupportAgentDeployment:
        ...

This setup allows for easy scaling and supports multiple simultaneous sessions balanced between replicas.

Running the Example
-------------------

The project includes sample issue data that can be used to populate the issue tree.

To run this example:

.. code-block:: bash

    git clone https://github.com/ShoggothAI/motleycrew.git
    cd motleycrew
    pip install -r requirements.txt
    python -m motleycrew.applications.customer_support.issue_tree  # populate the issue tree
    ray start --head
    python -m motleycrew.applications.customer_support.ray_serve_app

This example showcases the flexibility of MotleyCrew for building agent-based applications, allowing you to choose your preferred agent framework, orchestration model, and deployment solution.
3 changes: 3 additions & 0 deletions docs/source/examples/event_driven.nblink

@@ -0,0 +1,3 @@
{
"path": "../../../examples/Event-driven orchestration for AI systems.ipynb"
}
3 changes: 0 additions & 3 deletions docs/source/quickstart.nblink

This file was deleted.

87 changes: 87 additions & 0 deletions docs/source/quickstart.rst

@@ -0,0 +1,87 @@
Quickstart
==========

This is a brief introduction to motleycrew.

For a working example of agents, tools, crew, and SimpleTask, check out the :doc:`blog with images <examples/blog_with_images>`.

For a working example of custom tasks that fully utilize the knowledge graph backend, check out the :doc:`research agent <examples/research_agent>`.

Agents and tools
----------------

Motleycrew provides thin wrappers for all the common agent frameworks: Langchain, LlamaIndex, CrewAI, and Autogen (please let us know if you want any others added!).
It also provides thin wrappers for Langchain and LlamaIndex tools, allowing you to use any of these tools with any of these agents.

MotleyCrew also supports **delegation**: you can simply give any agent as a tool to any other agent.

All the wrappers for tools and agents implement the Runnable interface, so you can use them as-is in LCEL and Langgraph code.

Output handlers (aka return_direct)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An **output handler** is a tool that the agent uses for submitting the final output instead of returning it raw. Besides defining a schema for the output, the output handler enables you to implement any validation logic inside it, including agent-based. If your agent returns invalid output, you can raise an exception that will be returned to the agent so that it can retry.

Essentially, an output handler is a tool that returns its output directly to the user, thus finishing the agent execution. This behavior is enabled by setting the ``return_direct=True`` argument for the tool. Unlike other frameworks, MotleyCrew allows an agent to have multiple output handlers, from which the agent can choose one.

MotleyCrew also supports **forced output handlers**. This means that the agent will only be able to return output via an output handler, and not directly. This is useful if you want to ensure that the agent only returns output in a specific format.

See our usage examples with a :doc:`simple validator <examples/validating_agent_output>` and an :doc:`advanced output handler with multiple fields <examples/advanced_output_handling>`.
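
The validate-and-retry contract described above can be sketched in plain Python. This is an illustrative toy, not the MotleyCrew implementation: the handler raises on invalid output, and the error message is fed back to the agent so it can try again.

.. code-block:: python

    # Sketch of the output handler contract: raise on invalid output,
    # reflect the error back to the agent, let it retry.
    class InvalidOutput(Exception):
        pass

    def output_handler(output: str) -> str:
        if "banana" not in output.lower():
            raise InvalidOutput("Output must mention a banana; please retry.")
        return output

    def run_with_retries(agent_step, max_retries=3):
        feedback = None
        for _ in range(max_retries):
            try:
                return output_handler(agent_step(feedback))
            except InvalidOutput as e:
                feedback = str(e)  # returned to the agent for the next attempt
        raise RuntimeError("Agent failed to produce valid output")

    attempts = iter(["an apple", "a banana split"])
    print(run_with_retries(lambda feedback: next(attempts)))  # a banana split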

MotleyTool
^^^^^^^^^^

A tool in motleycrew, like in other frameworks, is basically a function that takes an input and returns an output.
It is called a tool in the sense that it is usually used by an agent to perform a specific action.
Besides the function itself, a tool also contains an input schema which describes the input format to the LLM.

``MotleyTool`` is the base class for all tools in motleycrew. It is a subclass of ``Runnable`` that adds some additional features to the tool, along with necessary adapters and converters.

If you pass a tool from a supported framework (currently Langchain, LlamaIndex, and CrewAI) to a motleycrew agent, it will be automatically converted. If you want to have control over this, e.g. to customize tool params, you can do it manually.

.. code-block:: python

    motley_tool = MotleyTool.from_supported_tool(my_tool)

It is also possible to define a custom tool using the ``MotleyTool`` base class, overriding the ``run`` method. This is especially useful if you want to access context such as the caller agent or its last input, which can be useful for validation.

.. code-block:: python

    class MyTool(MotleyTool):
        def run(self, some_input: str) -> str:
            return f"Received {some_input} from agent {self.agent} with last input {self.agent_input}"

Tools can be executed asynchronously, either directly or via an asynchronous agent. By default, the async version simply runs the sync version in a separate thread.
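
That default async behavior — running the sync body in a separate thread — can be sketched with the standard library. This is an illustrative sketch of the pattern, not the actual MotleyTool code.

.. code-block:: python

    # Sketch of the default async path: delegate the synchronous tool
    # body to a worker thread so the event loop is not blocked.
    import asyncio

    def run_tool(some_input: str) -> str:  # the synchronous tool body
        return f"Processed {some_input}"

    async def arun_tool(some_input: str) -> str:
        return await asyncio.to_thread(run_tool, some_input)

    print(asyncio.run(arun_tool("hello")))  # Processed hello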

MotleyTool can reflect exceptions that are raised inside it back to the agent, which can then retry the tool call. You can pass a list of exception classes to the ``exceptions_to_reflect`` argument in the constructor (or even pass the ``Exception`` class to reflect everything).
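
The reflection mechanism amounts to catching the listed exception classes and returning the error as text instead of crashing the run. The wrapper below is an illustrative sketch of that idea, not the actual MotleyTool code.

.. code-block:: python

    # Sketch of exception reflection: listed exception classes are caught
    # and returned to the agent as a message it can react to.
    def reflecting_call(tool_fn, args, exceptions_to_reflect=(Exception,)):
        try:
            return tool_fn(*args)
        except exceptions_to_reflect as e:
            # The agent sees the error message and can retry the call
            return f"{type(e).__name__}: {e}"

    def divide(a, b):
        return a / b

    print(reflecting_call(divide, (6, 2)))  # 3.0
    print(reflecting_call(divide, (1, 0)))  # ZeroDivisionError: division by zero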

Crew and tasks
--------------

The other two key concepts in motleycrew are crew and tasks. The crew is the orchestrator for tasks, and must be passed to all tasks at creation; tasks can be connected into a DAG using the ``>>`` operator, for example ``TaskA >> TaskB``. This means that ``TaskB`` will not be started before ``TaskA`` is complete, and will be given ``TaskA``'s output.

Once all tasks and their relationships have been set up, it all can be run via ``crew.run()``, which returns a list of the executed ``TaskUnits`` (see below for details).
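
The ``>>`` dependency semantics can be illustrated with a toy scheduler: ``TaskB`` starts only after ``TaskA`` completes and receives ``TaskA``'s output. This is a simplified sketch, not the actual crew implementation.

.. code-block:: python

    # Toy illustration of TaskA >> TaskB dependency semantics.
    class Task:
        def __init__(self, name, fn):
            self.name, self.fn = name, fn
            self.upstream = []

        def __rshift__(self, other):  # TaskA >> TaskB
            other.upstream.append(self)
            return other

    def run_all(tasks):
        results = {}
        while len(results) < len(tasks):
            for t in tasks:
                if t.name in results or any(u.name not in results for u in t.upstream):
                    continue  # not ready yet, or already done
                inputs = [results[u.name] for u in t.upstream]
                results[t.name] = t.fn(*inputs)
        return results

    task_a = Task("A", lambda: "draft")
    task_b = Task("B", lambda draft: f"edited {draft}")
    task_a >> task_b
    print(run_all([task_a, task_b]))  # {'A': 'draft', 'B': 'edited draft'}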

SimpleTask
^^^^^^^^^^

``SimpleTask`` is a basic implementation of the ``Task`` API. It only requires a crew, a description, and an agent. When it's executed, the description is combined with the output of any upstream tasks and passed on to the agent, and the agent's output is the task's output.

For a working illustration of all the concepts so far, see the :doc:`blog with images <examples/blog_with_images>` example.

Knowledge graph backend and custom tasks
----------------------------------------

The functionality so far is convenient, allowing us to mix all the popular agents and tools, but otherwise fairly vanilla, little different from, for example, the CrewAI semantics. Fortunately, the above introduction just scratched the surface of the motleycrew ``Task`` API.

In motleycrew, a task is basically a set of rules describing how to perform actions. It provides a **worker** (e.g. an agent) and sets of input data called **task units**. This allows defining workflows of any complexity concisely using crew semantics. For a deeper dive, check out the page on :doc:`key concepts <key_concepts>`.

The crew queries and dispatches available task units in a loop, managing task states using an embedded :doc:`knowledge graph <knowledge_graph>`.

This dispatch method easily supports different execution backends, from synchronous to asyncio, threaded, etc.

Example: Recursive question-answering in the research agent
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The motleycrew architecture described above makes it easy to generate task units on the fly when needed. An example of the power of this approach is the :doc:`research agent <examples/research_agent>`, which dynamically generates new questions based on the context retrieved for previous questions.
This example also shows how workers can collaborate via the shared knowledge graph, storing all necessary data in a way that is natural to the task.