diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index f58ea8979..c1bf236b4 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -12,6 +12,7 @@ A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
+
1. You code
2. How to execute
3. See error
@@ -23,9 +24,10 @@ A clear and concise description of what you expected to happen.
Detailed error messages.
**Environment (please complete the following information):**
- - AgentScope Version: [e.g. 0.0.1 via `print(agentscope.__version__)`]
- - Python Version: [e.g. 3.9]
- - OS: [e.g. macos, windows]
+
+- AgentScope Version: [e.g. 0.0.2 via `print(agentscope.__version__)`]
+- Python Version: [e.g. 3.9]
+- OS: [e.g. macos, windows]
**Additional context**
-Add any other context about the problem here.
\ No newline at end of file
+Add any other context about the problem here.
diff --git a/README.md b/README.md
index d33a9833c..434a784cb 100644
--- a/README.md
+++ b/README.md
@@ -2,9 +2,11 @@ English | [**中文**](README_ZH.md)
# AgentScope
+Start building LLM-empowered multi-agent applications in an easier way.
+
[![](https://img.shields.io/badge/cs.MA-2402.14034-B31C1C?logo=arxiv&logoColor=B31C1C)](https://arxiv.org/abs/2402.14034)
[![](https://img.shields.io/badge/python-3.9+-blue)](https://pypi.org/project/agentscope/)
-[![](https://img.shields.io/badge/pypi-v0.0.1-blue?logo=pypi)](https://pypi.org/project/agentscope/)
+[![](https://img.shields.io/badge/pypi-v0.0.2-blue?logo=pypi)](https://pypi.org/project/agentscope/)
[![](https://img.shields.io/badge/Docs-English%7C%E4%B8%AD%E6%96%87-blue?logo=markdown)](https://modelscope.github.io/agentscope/#welcome-to-agentscope-tutorial-hub)
[![](https://img.shields.io/badge/Docs-API_Reference-blue?logo=markdown)](https://modelscope.github.io/agentscope/)
[![](https://img.shields.io/badge/ModelScope-Demos-4e29ff.svg?logo=data:image/svg+xml;base64,PHN2ZyB2aWV3Qm94PSIwIDAgMjI0IDEyMS4zMyIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCTxwYXRoIGQ9Im0wIDQ3Ljg0aDI1LjY1djI1LjY1aC0yNS42NXoiIGZpbGw9IiM2MjRhZmYiIC8+Cgk8cGF0aCBkPSJtOTkuMTQgNzMuNDloMjUuNjV2MjUuNjVoLTI1LjY1eiIgZmlsbD0iIzYyNGFmZiIgLz4KCTxwYXRoIGQ9Im0xNzYuMDkgOTkuMTRoLTI1LjY1djIyLjE5aDQ3Ljg0di00Ny44NGgtMjIuMTl6IiBmaWxsPSIjNjI0YWZmIiAvPgoJPHBhdGggZD0ibTEyNC43OSA0Ny44NGgyNS42NXYyNS42NWgtMjUuNjV6IiBmaWxsPSIjMzZjZmQxIiAvPgoJPHBhdGggZD0ibTAgMjIuMTloMjUuNjV2MjUuNjVoLTI1LjY1eiIgZmlsbD0iIzM2Y2ZkMSIgLz4KCTxwYXRoIGQ9Im0xOTguMjggNDcuODRoMjUuNjV2MjUuNjVoLTI1LjY1eiIgZmlsbD0iIzYyNGFmZiIgLz4KCTxwYXRoIGQ9Im0xOTguMjggMjIuMTloMjUuNjV2MjUuNjVoLTI1LjY1eiIgZmlsbD0iIzM2Y2ZkMSIgLz4KCTxwYXRoIGQ9Im0xNTAuNDQgMHYyMi4xOWgyNS42NXYyNS42NWgyMi4xOXYtNDcuODR6IiBmaWxsPSIjNjI0YWZmIiAvPgoJPHBhdGggZD0ibTczLjQ5IDQ3Ljg0aDI1LjY1djI1LjY1aC0yNS42NXoiIGZpbGw9IiMzNmNmZDEiIC8+Cgk8cGF0aCBkPSJtNDcuODQgMjIuMTloMjUuNjV2LTIyLjE5aC00Ny44NHY0Ny44NGgyMi4xOXoiIGZpbGw9IiM2MjRhZmYiIC8+Cgk8cGF0aCBkPSJtNDcuODQgNzMuNDloLTIyLjE5djQ3Ljg0aDQ3Ljg0di0yMi4xOWgtMjUuNjV6IiBmaWxsPSIjNjI0YWZmIiAvPgo8L3N2Zz4K)](https://modelscope.cn/studios?name=agentscope&page=1&sort=latest)
@@ -12,66 +14,118 @@ English | [**中文**](README_ZH.md)
[![](https://img.shields.io/badge/license-Apache--2.0-black)](./LICENSE)
[![](https://img.shields.io/badge/Contribute-Welcome-green)](https://modelscope.github.io/agentscope/tutorial/contribute.html)
-AgentScope is an innovative multi-agent platform designed to empower developers to build multi-agent applications with ease, reliability, and high performance. It features three high-level capabilities:
-
-- **Easy-to-Use**: Programming in pure Python with various pre-built components for immediate use, suitable for developers or users with varying levels of customization requirements. Detailed documentation and examples are provided to help you get started, see our [Tutorial](https://modelscope.github.io/agentscope/#welcome-to-agentscope-tutorial-hub).
-
-- **High Robustness**: Supporting customized fault-tolerance controls and retry mechanisms to enhance application stability.
-
-- **Actor-Based Distribution**: Enabling developers to build distributed multi-agent applications in a centralized programming manner for streamlined development.
-
-If you find our work helpful, please kindly cite [our paper](https://arxiv.org/abs/2402.14034).
+If you find our work helpful, please cite
+[our paper](https://arxiv.org/abs/2402.14034).
Welcome to join our community on
-| [Discord](https://discord.gg/eYMpfnkG8h) | DingTalk | WeChat |
-|---------|----------|--------|
+| [Discord](https://discord.gg/eYMpfnkG8h) | DingTalk | WeChat |
+|----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|
| | | |
----
## News
-- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png) [2024-02-27] We release **AgentScope v0.0.1** now, which is also available in [PyPI](https://pypi.org/project/agentscope/)!
-- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png) [2024-02-14] We release our paper "AgentScope: A Flexible yet Robust Multi-Agent Platform" in [arXiv](https://arxiv.org/abs/2402.14034) now!
-
-Table of Contents
-=================
-
-- [AgentScope](#agentscope)
- - [News](#news)
-- [Table of Contents](#table-of-contents)
- - [Installation](#installation)
- - [From source](#from-source)
- - [Using pip](#using-pip)
- - [Quick Start](#quick-start)
- - [Basic Usage](#basic-usage)
- - [Step 1: Prepare Model Configs](#step-1-prepare-model-configs)
- - [OpenAI API Config](#openai-api-config)
- - [DashScope API Config](#dashscope-api-config)
- - [Post Request API Config](#post-request-api-config)
- - [Step 2: Create Agents](#step-2-create-agents)
- - [Step 3: Construct Conversation](#step-3-construct-conversation)
- - [Advanced Usage](#advanced-usage)
- - [**Pipeline** and **MsgHub**](#pipeline-and-msghub)
- - [Customize Your Own Agent](#customize-your-own-agent)
- - [Built-in Resources](#built-in-resources)
- - [Agent Pool](#agent-pool)
- - [Services](#services)
- - [Example Applications](#example-applications)
- - [License](#license)
- - [Contributing](#contributing)
- - [References](#references)
+- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png)
+[2024-03-15] We release **AgentScope v0.0.2** now! In this new version,
+AgentScope supports [ollama](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) (a local CPU inference engine), [DashScope](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models), and Google [Gemini](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) APIs.
+
+- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png)
+[2024-03-15] New examples ["Autonomous Conversation with Mentions"](./examples/conversation_with_mentions) and ["Basic Conversation with LangChain library"](./examples/conversation_with_langchain) are available now!
+
+- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png)
+[2024-03-15] The [Chinese tutorial](https://modelscope.github.io/agentscope/zh_CN/index.html) of AgentScope is online now!
+
+- [2024-02-27] We release **AgentScope v0.0.1** now, which is also
+available in [PyPI](https://pypi.org/project/agentscope/)!
+- [2024-02-14] We release our paper "AgentScope: A Flexible yet Robust
+Multi-Agent Platform" in [arXiv](https://arxiv.org/abs/2402.14034) now!
+
+---
+
+## What's AgentScope?
+
+AgentScope is an innovative multi-agent platform designed to empower developers
+to build multi-agent applications with large-scale models.
+It features three high-level capabilities:
+
+- 🤝 **Easy-to-Use**: Designed for developers, with [rich built-in components](https://modelscope.github.io/agentscope/en/tutorial/204-service.html#),
+[comprehensive documentation](https://modelscope.github.io/agentscope/en/index.html), and broad compatibility.
+
+- ✅ **High Robustness**: Supporting customized fault-tolerance controls and
+retry mechanisms to enhance application stability.
+
+- 🚀 **Actor-Based Distribution**: Building distributed multi-agent
+applications in a centralized programming manner for streamlined development.
+
+**Supported Model Libraries**
+
+AgentScope provides a family of `ModelWrapper` classes to support both local
+model services and third-party model APIs.
+
+| API | Task | Model Wrapper |
+|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|
+| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |
+| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |
+| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |
+| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) |
+| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) |
+| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) |
+| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) |
+| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) |
+| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) |
+| | Embedding | [`OllamaEmbedding`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) |
+| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) |
+| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) |
+
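As an illustrative sketch (not part of the README), the table above boils down to one rule: the `model_type` field of a model config selects its wrapper class. The type strings other than `"openai"` used below (e.g. `"ollama_chat"`) are assumptions for illustration; check the linked wrapper sources for the actual registered names.

```python
# Hypothetical sketch: how a model config maps to a wrapper via "model_type".
# The "ollama_chat" type string is an assumption, not taken from the README.
configs = [
    {
        "config_name": "gpt4_config",  # name used to look the config up
        "model_type": "openai",        # selects OpenAIChatWrapper
        "model_name": "gpt-4",
    },
    {
        "config_name": "local_llama",  # a local ollama-served model
        "model_type": "ollama_chat",   # assumed string for OllamaChatWrapper
        "model_name": "llama2",
    },
]

# A minimal dispatch table mimicking the wrapper lookup:
wrapper_for = {
    "openai": "OpenAIChatWrapper",
    "ollama_chat": "OllamaChatWrapper",
}

for cfg in configs:
    print(cfg["config_name"], "->", wrapper_for[cfg["model_type"]])
```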
+**Supported Local Model Deployment**
+
+AgentScope enables developers to rapidly deploy local model services using
+the following libraries.
+
+- [ollama (CPU inference)](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#ollama)
+- [Flask + Transformers](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#with-transformers-library)
+- [Flask + ModelScope](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#with-modelscope-library)
+- [FastChat](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#fastchat)
+- [vllm](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#vllm)
+
+**Supported Services**
+
+- Web Search
+- Data Query
+- Retrieval
+- Code Execution
+- File Operation
+- Text Processing
+
+**Example Applications**
+
+- Conversation
+ - [Basic Conversation](./examples/conversation_basic)
+ - [Autonomous Conversation with Mentions](./examples/conversation_with_mentions)
+ - [Self-Organizing Conversation](./examples/conversation_self_organizing)
+ - [Basic Conversation with LangChain library](./examples/conversation_with_langchain)
+
+- Game
+ - [Werewolf](./examples/game_werewolf)
+
+- Distribution
+ - [Distributed Conversation](./examples/distribution_conversation)
+ - [Distributed Debate](./examples/distribution_debate)
+
+More models, services and examples are coming soon!
## Installation
-To install AgentScope, you need to have Python 3.9 or higher installed.
+AgentScope requires **Python 3.9** or higher.
-**_Note: This project is currently in active development, it's recommended to install AgentScope from source._**
+**_Note: This project is currently under active development, so we recommend
+installing AgentScope from source._**
### From source
-- Run the following commands to install AgentScope in editable mode.
+- Install AgentScope in editable mode:
```bash
# Pull the source code from GitHub
@@ -82,7 +136,7 @@ cd AgentScope
pip install -e .
```
-- Building a distributed multi-agent application relies on [gRPC](https://github.com/grpc/grpc) libraries, and you can install the required dependencies as follows.
+- To build distributed multi-agent applications, install the extra [gRPC](https://github.com/grpc/grpc)-related dependencies:
```bash
# On windows
@@ -93,7 +147,7 @@ pip install -e .\[distribute\]
### Using pip
-- Use the following command to install the latest released AgentScope.
+- Install the latest release of AgentScope from PyPI:
```bash
pip install agentscope
@@ -101,85 +155,46 @@ pip install agentscope
## Quick Start
-### Basic Usage
-
-Taking a multi-agent application with user and assistant agent as an example, you need to take the following steps:
-
-- [Step 1: Prepare Model Configs](#step-1-prepare-model-configs)
-- [Step 2: Create Agents](#step-2-create-agents)
-- [Step 3: Construct Conversation](#step-3-construct-conversation)
-
-#### Step 1: Prepare Model Configs
+### Configuration
-AgentScope supports the following model API services:
+In AgentScope, model deployment and invocation are decoupled through
+`ModelWrapper`.
-- OpenAI Python APIs, including
- - OpenAI Chat, DALL-E and Embedding API
- - OpenAI-Compatible platforms, e.g. [FastChat](https://github.com/lm-sys/FastChat) and [vllm](https://github.com/vllm-project/vllm)
-- Post request APIs, including
- - [HuggingFace](https://huggingface.co/docs/api-inference/index) and [ModelScope](https://www.modelscope.cn/docs/%E9%AD%94%E6%90%ADv1.5%E7%89%88%E6%9C%AC%20Release%20Note%20(20230428)) inference APIs
- - Customized model APIs
+To use these model wrappers, you need to prepare a model config as
+follows.
-| | Model Type Argument | Support APIs |
-|----------------------|---------------------|----------------------------------------------------------------|
-| OpenAI Chat API | `openai` | Standard OpenAI Chat API, FastChat and vllm |
-| OpenAI DALL-E API | `openai_dall_e` | Standard DALL-E API |
-| OpenAI Embedding API | `openai_embedding` | OpenAI embedding API |
-| DashScope Chat API | `dashscope_chat` | DashScope chat API, including Qwen series |
-| Post API | `post_api` | Huggingface/ModelScope inference API, and customized post API |
-
-##### OpenAI API Config
-
-For OpenAI APIs, you need to prepare a dict of model config with the following fields:
-
-```
-{
- "config_name": "{config name}", # The name to identify the config
- "model_type": "openai" | "openai_dall_e" | "openai_embedding",
- "model_name": "{model name, e.g. gpt-4}", # The model in openai API
-
- # Optional
- "api_key": "xxx", # The API key for OpenAI API. If not set, env
- # variable OPENAI_API_KEY will be used.
- "organization": "xxx", # The organization for OpenAI API. If not set, env
- # variable OPENAI_ORGANIZATION will be used.
-}
-```
-
-##### DashScope API Config
-
-For DashScope APIs, you need to prepare a dict of model config with the following fields:
+```python
+model_config = {
+    # The identifiers of your config and the used model wrapper
+ "config_name": "{your_config_name}", # The name to identify the config
+ "model_type": "{model_type}", # The type to identify the model wrapper
-```
-{
- "config_name": "{config name}", # The name to identify the config
- "model_type": "dashscope_chat" | "dashscope_text_embedding" | "dashscope_image_synthesis",
- "model_name": "{model name, e.g. qwen-max}", # The model in dashscope API
- "api_key": "xxx", # The API key for DashScope API.
+    # Detailed parameters used to initialize the model wrapper
+ # ...
}
```
-> Note: The dashscope APIs may have strict requirements on the `role` field in messages. Please use with caution.
+Taking the OpenAI Chat API as an example, the model configuration is as follows:
-##### Post Request API Config
-
-For post requests APIs, the config contains the following fields.
-
-```
-{
- "config_name": "{config name}", # The name to identify the config
- "model_type": "post_api",
- "api_url": "https://xxx", # The target url
- "headers": { # Required headers
- ...
- },
+```python
+openai_model_config = {
+ "config_name": "my_openai_config", # The name to identify the config
+ "model_type": "openai", # The type to identify the model wrapper
+
+    # Detailed parameters used to initialize the model wrapper
+    "model_name": "gpt-4", # The model used in OpenAI API, e.g. gpt-4, gpt-3.5-turbo
+ "api_key": "xxx", # The API key for OpenAI API. If not set, env
+ # variable OPENAI_API_KEY will be used.
+ "organization": "xxx", # The organization for OpenAI API. If not set, env
+ # variable OPENAI_ORGANIZATION will be used.
}
```
-AgentScope provides fruitful scripts to fast deploy model services in [Scripts](./scripts/README.md).
-For more details of model services, refer to our [Tutorial](https://modelscope.github.io/agentscope/index.html#welcome-to-agentscope-tutorial-hub) and [API Document](https://modelscope.github.io/agentscope/index.html#indices-and-tables).
+More details about how to set up local model services and prepare model
+configurations can be found in our
+[tutorial](https://modelscope.github.io/agentscope/index.html#welcome-to-agentscope-tutorial-hub).
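Config dicts like the one above are typically collected into a JSON file and handed to `agentscope.init` (the Create Agents step loads `./model_configs.json`). A minimal sketch of producing such a file:

```python
import json

# The dict from the README, minus the optional api_key/organization fields;
# the filename "model_configs.json" matches the Create Agents example.
openai_model_config = {
    "config_name": "my_openai_config",
    "model_type": "openai",
    "model_name": "gpt-4",
}

# agentscope.init(model_configs=...) accepts a path to a JSON list of configs.
with open("model_configs.json", "w") as f:
    json.dump([openai_model_config], f, indent=4)

with open("model_configs.json") as f:
    loaded = json.load(f)

print(loaded[0]["config_name"])  # prints: my_openai_config
```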
-#### Step 2: Create Agents
+### Create Agents
Create built-in user and assistant agents as follows.
@@ -191,11 +206,12 @@ import agentscope
agentscope.init(model_configs="./model_configs.json")
# Create a dialog agent and a user agent
-dialog_agent = DialogAgent(name="assistant", model_config_name="your_config_name")
+dialog_agent = DialogAgent(name="assistant",
+ model_config_name="my_openai_config")
user_agent = UserAgent()
```
-#### Step 3: Construct Conversation
+### Construct Conversation
In AgentScope, **message** is the bridge among agents, which is a
**dict** that contains two necessary fields `name` and `content` and an
@@ -203,6 +219,7 @@ optional field `url` to local files (image, video or audio) or website.
```python
from agentscope.message import Msg
+
x = Msg(name="Alice", content="Hi!")
x = Msg("Bob", "What about this picture I took?", url="/path/to/picture.jpg")
```
@@ -213,118 +230,32 @@ with the following code:
```python
x = None
while True:
- x = dialog_agent(x)
- x = user_agent(x)
- if x.content == "exit": # user input "exit" to exit the conversation
- break
-```
-
-### Advanced Usage
-
-#### **Pipeline** and **MsgHub**
-
-To simplify the construction of agents communication, AgentScope provides two helpful tools: **Pipeline** and **MsgHub**.
-
-- **Pipeline**: It allows users to program a communication among agents easily. Taking a sequential pipeline as an example, the following two codes are equivalent, but pipeline is more convenient and elegant.
-
- - Passing message throught agent1, agent2 and agent3 **WITHOUT** pipeline:
-
- ```python
- x1 = agent1(input_msg)
- x2 = agent2(x1)
- x3 = agent3(x2)
- ```
-
- - **WITH** object-level pipeline:
-
- ```python
- from agentscope.pipelines import SequentialPipeline
-
- pipe = SequentialPipeline([agent1, agent2, agent3])
- x3 = pipe(input_msg)
- ```
-
- - **WITH** functional-level pipeline:
-
- ```python
- from agentscope.pipelines.functional import sequentialpipeline
-
- x3 = sequentialpipeline([agent1, agent2, agent3], x=input_msg)
- ```
-
-- **MsgHub**: To achieve a group conversation, AgentScope provides message hub.
-
- - Achieving group conversation **WITHOUT** `msghub`:
-
- ```python
- x1 = agent1(x)
- agent2.observe(x1) # The message x1 should be broadcast to other agents
- agent3.observe(x1)
-
- x2 = agent2(x1)
- agent1.observe(x2)
- agent3.observe(x2)
- ```
-
- - **With** `msghub`: In a message hub, the messages from participants will be broadcast to all other participants automatically. In such case, participated agents even don't need input and output messages explicitly. All we need to do is to decide the order of speaking. Besides, `msghub` also supports dynamic control of participants as follows.
-
- ```python
- from agentscope import msghub
-
- with msghub(participants=[agent1, agent2, agent3]) as hub:
- agent1() # `x = agent1(x)` is also okay
- agent2()
-
- # Broadcast a message to all participants
- hub.broadcast(Msg("Host", "Welcome to join the group conversation!"))
-
- # Add or delete participants dynamically
- hub.delete(agent1)
- hub.add(agent4)
- ```
-
-#### Customize Your Own Agent
-
-To implement your own agent, you need to inherit the `AgentBase` class and implement the `reply` function.
-
-```python
-from agentscope.agents import AgentBase
-
-class MyAgent(AgentBase):
- def reply(self, x):
- # Do something here, e.g. calling your model and get the raw field as your agent's response
- response = self.model(x).raw
- return response
+ x = dialog_agent(x)
+ x = user_agent(x)
+    if x.content == "exit": # the user inputs "exit" to end the conversation
+ break
```
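To see the control flow of the loop above without any model backend, here is a self-contained sketch in which hypothetical stub agents mimic the `agent(msg) -> msg` call protocol of `DialogAgent`/`UserAgent` (the stubs and the scripted replies are stand-ins, not AgentScope APIs):

```python
# Minimal stand-ins for Msg and the agents; each agent is a callable that
# takes the previous message and returns the next one.
class Msg:
    def __init__(self, name, content):
        self.name, self.content = name, content

def make_agent(name, script):
    replies = iter(script)
    def agent(x):
        return Msg(name, next(replies))
    return agent

dialog_agent = make_agent("assistant", ["Hi!", "Bye!"])
user_agent = make_agent("user", ["How are you?", "exit"])

x = None
while True:
    x = dialog_agent(x)
    x = user_agent(x)
    if x.content == "exit":  # the user inputs "exit" to end the loop
        break

print(x.name, x.content)  # prints: user exit
```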
-#### Built-in Resources
-
-AgentScope provides built-in resources for developers to build their own applications easily. More built-in agents, services and examples are coming soon!
-
-##### Agent Pool
-
-- UserAgent
-- DialogAgent
-- DictDialogAgent
-- ...
-
-##### Services
-
-- Web Search Service
-- Code Execution Service
-- Retrieval Service
-- Database Service
-- File Service
-- ...
-
-##### Example Applications
-
-- Example of Conversation: [examples/conversation](examples/conversation/README.md)
-- Example of Werewolf: [examples/werewolf](examples/werewolf/README.md)
-- Example of Distributed Agents: [examples/distributed](examples/distributed/README.md)
-- ...
-
-More built-in resources are coming soon!
+## Tutorial
+
+- [Getting Started](https://modelscope.github.io/agentscope/en/tutorial/quick_start.html)
+ - [Installation](https://modelscope.github.io/agentscope/en/tutorial/102-installation.html)
+ - [About AgentScope](https://modelscope.github.io/agentscope/en/tutorial/101-agentscope.html)
+ - [Quick Start](https://modelscope.github.io/agentscope/en/tutorial/103-example.html)
+ - [Crafting Your First Application](https://modelscope.github.io/agentscope/en/tutorial/104-usecase.html)
+ - [Logging and WebUI](https://modelscope.github.io/agentscope/en/tutorial/105-logging.html#)
+- [Advanced Exploration](https://modelscope.github.io/agentscope/en/tutorial/advance.html)
+ - [Customize Your Own Agent](https://modelscope.github.io/agentscope/en/tutorial/201-agent.html)
+ - [Agent Interactions: Dive deeper into Pipelines and Messages Hub](https://modelscope.github.io/agentscope/en/tutorial/202-pipeline.html)
+ - [Model Service](https://modelscope.github.io/agentscope/en/tutorial/203-model.html)
+ - [About Service](https://modelscope.github.io/agentscope/en/tutorial/204-service.html)
+ - [About Memory](https://modelscope.github.io/agentscope/en/tutorial/205-memory.html)
+ - [Prompt Engine](https://modelscope.github.io/agentscope/en/tutorial/206-prompt.html)
+ - [Monitor](https://modelscope.github.io/agentscope/en/tutorial/207-monitor.html)
+ - [About Distribution](https://modelscope.github.io/agentscope/en/tutorial/208-distribute.html)
+- [Get Involved](https://modelscope.github.io/agentscope/en/tutorial/contribute.html)
+ - [Join AgentScope Community](https://modelscope.github.io/agentscope/en/tutorial/301-community.html)
+ - [Contributing to AgentScope](https://modelscope.github.io/agentscope/en/tutorial/302-contribute.html)
## License
@@ -334,7 +265,8 @@ AgentScope is released under Apache License 2.0.
Contributions are always welcomed!
-We provide a developer version with additional pre-commit hooks to perform checks compared to the official version:
+Compared to the official version, we provide a developer version with
+additional pre-commit hooks that run checks before each commit:
```bash
# For windows
@@ -350,7 +282,8 @@ Please refer to our [Contribution Guide](https://modelscope.github.io/agentscope
## References
-If you find our work helpful for your research or application, please cite [our paper](https://arxiv.org/abs/2402.14034):
+If you find our work helpful for your research or application, please
+cite [our paper](https://arxiv.org/abs/2402.14034):
```
@article{agentscope,
diff --git a/README_ZH.md b/README_ZH.md
index 04f915ddc..a90e3f751 100644
--- a/README_ZH.md
+++ b/README_ZH.md
@@ -1,8 +1,12 @@
+[English](./README.md) | 中文
+
# AgentScope
+更简单地构建基于LLM的多智能体应用。
+
[![](https://img.shields.io/badge/cs.MA-2402.14034-B31C1C?logo=arxiv&logoColor=B31C1C)](https://arxiv.org/abs/2402.14034)
[![](https://img.shields.io/badge/python-3.9+-blue)](https://pypi.org/project/agentscope/)
-[![](https://img.shields.io/badge/pypi-v0.0.1-blue?logo=pypi)](https://pypi.org/project/agentscope/)
+[![](https://img.shields.io/badge/pypi-v0.0.2-blue?logo=pypi)](https://pypi.org/project/agentscope/)
[![](https://img.shields.io/badge/Docs-English%7C%E4%B8%AD%E6%96%87-blue?logo=markdown)](https://modelscope.github.io/agentscope/#welcome-to-agentscope-tutorial-hub)
[![](https://img.shields.io/badge/Docs-API_Reference-blue?logo=markdown)](https://modelscope.github.io/agentscope/)
[![](https://img.shields.io/badge/ModelScope-Demos-4e29ff.svg?logo=data:image/svg+xml;base64,PHN2ZyB2aWV3Qm94PSIwIDAgMjI0IDEyMS4zMyIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCTxwYXRoIGQ9Im0wIDQ3Ljg0aDI1LjY1djI1LjY1aC0yNS42NXoiIGZpbGw9IiM2MjRhZmYiIC8+Cgk8cGF0aCBkPSJtOTkuMTQgNzMuNDloMjUuNjV2MjUuNjVoLTI1LjY1eiIgZmlsbD0iIzYyNGFmZiIgLz4KCTxwYXRoIGQ9Im0xNzYuMDkgOTkuMTRoLTI1LjY1djIyLjE5aDQ3Ljg0di00Ny44NGgtMjIuMTl6IiBmaWxsPSIjNjI0YWZmIiAvPgoJPHBhdGggZD0ibTEyNC43OSA0Ny44NGgyNS42NXYyNS42NWgtMjUuNjV6IiBmaWxsPSIjMzZjZmQxIiAvPgoJPHBhdGggZD0ibTAgMjIuMTloMjUuNjV2MjUuNjVoLTI1LjY1eiIgZmlsbD0iIzM2Y2ZkMSIgLz4KCTxwYXRoIGQ9Im0xOTguMjggNDcuODRoMjUuNjV2MjUuNjVoLTI1LjY1eiIgZmlsbD0iIzYyNGFmZiIgLz4KCTxwYXRoIGQ9Im0xOTguMjggMjIuMTloMjUuNjV2MjUuNjVoLTI1LjY1eiIgZmlsbD0iIzM2Y2ZkMSIgLz4KCTxwYXRoIGQ9Im0xNTAuNDQgMHYyMi4xOWgyNS42NXYyNS42NWgyMi4xOXYtNDcuODR6IiBmaWxsPSIjNjI0YWZmIiAvPgoJPHBhdGggZD0ibTczLjQ5IDQ3Ljg0aDI1LjY1djI1LjY1aC0yNS42NXoiIGZpbGw9IiMzNmNmZDEiIC8+Cgk8cGF0aCBkPSJtNDcuODQgMjIuMTloMjUuNjV2LTIyLjE5aC00Ny44NHY0Ny44NGgyMi4xOXoiIGZpbGw9IiM2MjRhZmYiIC8+Cgk8cGF0aCBkPSJtNDcuODQgNzMuNDloLTIyLjE5djQ3Ljg0aDQ3Ljg0di0yMi4xOWgtMjUuNjV6IiBmaWxsPSIjNjI0YWZmIiAvPgo8L3N2Zz4K)](https://modelscope.cn/studios?name=agentscope&page=1&sort=latest)
@@ -10,15 +14,7 @@
[![](https://img.shields.io/badge/license-Apache--2.0-black)](./LICENSE)
[![](https://img.shields.io/badge/Contribute-Welcome-green)](https://modelscope.github.io/agentscope/tutorial/contribute.html)
-AgentScope是一款全新的Multi-Agent框架,专为应用开发者打造,旨在提供高易用、高可靠的编程体验!
-
-- **高易用**:AgentScope支持纯Python编程,提供多种语法工具实现灵活的应用流程编排,内置丰富的API服务(Service)以及应用样例,供开发者直接使用。同时,AgentScope提供了详尽的[教程](https://modelscope.github.io/agentscope/),[API文档](https://modelscope.github.io/agentscope/)和[应用样例](https://modelscope.github.io/agentscope/)。
-
-- **高鲁棒**:确保开发便捷性和编程效率的同时,针对不同能力的大模型,AgentScope提供了全面的重试机制、定制化的容错控制和面向Agent的异常处理,以确保应用的稳定、高效运行;
-
-- **基于Actor的分布式机制**:AgentScope设计了一种新的基于Actor的分布式机制,实现了复杂分布式工作流的集中式编程和自动并行优化,即用户可以使用中心化编程的方式完成分布式应用的流程编排,同时能够零代价将本地应用迁移到分布式的运行环境中。
-
-如果您觉得我们的工作对您有帮助,请引用[我们的论文](https://arxiv.org/abs/2402.14034)。
+如果您觉得我们的工作对您有帮助,请引用我们的[论文](https://arxiv.org/abs/2402.14034)。
欢迎加入我们的社区
@@ -26,42 +22,97 @@ AgentScope是一款全新的Multi-Agent框架,专为应用开发者打造,
|---------|----------|--------|
| | | |
-目录
-=================
-
-- [AgentScope](#agentscope)
-- [目录](#目录)
- - [安装](#安装)
- - [从源码安装](#从源码安装)
- - [使用pip](#使用pip)
- - [快速开始](#快速开始)
- - [基础使用](#基础使用)
- - [第1步:准备Model Configs](#第1步准备model-configs)
- - [OpenAI API Configs](#openai-api-configs)
- - [DashScope API Config](#dashscope-api-config)
- - [Post Request API Config](#post-request-api-config)
- - [第2步:创建Agent](#第2步创建agent)
- - [第3步:构造对话](#第3步构造对话)
- - [进阶使用](#进阶使用)
- - [**Pipeline**和**MsgHub**](#pipeline和msghub)
- - [定制您自己的Agent](#定制您自己的agent)
- - [内置资源](#内置资源)
- - [Agent Pool](#agent-pool)
- - [Services](#services)
- - [Example Applications](#example-applications)
- - [License](#license)
- - [贡献](#贡献)
- - [引用](#引用)
+## 新闻
+
+- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png)
+[2024-03-15] 我们现在发布了**AgentScope** v0.0.2版本!在这个新版本中,AgentScope支持了[ollama](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models)(本地CPU推理引擎),[DashScope](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models)和[Gemini](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) APIs。
+
+- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png)
+[2024-03-15] 新的样例“[带有@功能的自主对话](./examples/conversation_with_mentions)”和“[兼容LangChain的基础对话](./examples/conversation_with_langchain)”上线了!
+
+- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png)
+[2024-03-15] AgentScope的[中文教程](https://modelscope.github.io/agentscope/zh_CN/index.html)上线了!
+
+- [2024-02-27] 我们现在发布了**AgentScope** v0.0.1版本!现在,AgentScope也可以在[PyPI](https://pypi.org/project/agentscope/)上下载
+
+- [2024-02-14] 我们在arXiv上发布了论文“[AgentScope: A Flexible yet Robust Multi-Agent Platform](https://arxiv.org/abs/2402.14034)”!
+
+---
+
+## 什么是AgentScope?
+
+AgentScope是一个创新的多智能体开发平台,旨在赋予开发人员使用大模型轻松构建多智能体应用的能力。
+
+- 🤝 **高易用**:AgentScope专为开发人员设计,提供了[丰富的组件](https://modelscope.github.io/agentscope/en/tutorial/204-service.html#)、[全面的文档](https://modelscope.github.io/agentscope/zh_CN/index.html)和广泛的兼容性。
+
+- ✅ **高鲁棒**:支持自定义的容错控制和重试机制,以提高应用程序的稳定性。
+
+- 🚀 **分布式**:支持以中心化的方式构建分布式多智能体应用程序。
+
+**支持的模型API**
+
+AgentScope提供了一系列`ModelWrapper`来支持本地模型服务和第三方模型API。
+
+| API | Task | Model Wrapper |
+|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|
+| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |
+| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |
+| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |
+| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) |
+| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) |
+| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) |
+| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) |
+| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) |
+| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) |
+| | Embedding | [`OllamaEmbedding`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) |
+| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) |
+| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) |
+
+**支持的本地模型部署**
+
+AgentScope支持使用以下库快速部署本地模型服务。
+
+- [ollama (CPU inference)](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#ollama)
+- [Flask + Transformers](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#with-transformers-library)
+- [Flask + ModelScope](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#with-modelscope-library)
+- [FastChat](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#fastchat)
+- [vllm](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#vllm)
+
+**支持的服务**
+
+- 网络搜索
+- 数据查询
+- 数据检索
+- 代码执行
+- 文件操作
+- 文本处理
+
+**样例应用**
+
+- 对话
+ - [基础对话](./examples/conversation_basic)
+ - [带有@功能的自主对话](./examples/conversation_with_mentions)
+ - [智能体自组织的对话](./examples/conversation_self_organizing)
+ - [兼容LangChain的基础对话](./examples/conversation_with_langchain)
+
+- 游戏
+ - [狼人杀](./examples/game_werewolf)
+
+- 分布式
+ - [分布式对话](./examples/distribution_conversation)
+ - [分布式辩论](./examples/distribution_debate)
+
+更多模型API、服务和示例即将推出!
## 安装
-要安装AgentScope,您需要安装Python 3.9或更高版本。
+AgentScope需要Python 3.9或更高版本。
**_注意:该项目目前正在积极开发中,建议从源码安装AgentScope。_**
### 从源码安装
-- 运行以下命令以编辑模式安装AgentScope。
+- 以编辑模式安装AgentScope:
```bash
# 从github拉取源代码
@@ -71,7 +122,7 @@ cd AgentScope
pip install -e .
```
-- 构建分布式Multi-Agent应用程序依赖于[gRPC](https://github.com/grpc/grpc)库,您可以按以下方式安装所需的依赖项。
+- 构建分布式多智能体应用依赖[gRPC](https://github.com/grpc/grpc)库,可以按以下方式安装所需依赖:
```bash
# 在windows上
@@ -82,7 +133,7 @@ pip install -e .\[distribute\]
### 使用pip
-- 使用以下命令安装最新发布的AgentScope。
+- 使用pip安装最新发布版本的AgentScope:
```bash
pip install agentscope
@@ -90,105 +141,57 @@ pip install agentscope
## 快速开始
-### 基础使用
-
-以用户和助手Agent对话的Multi-Agent应用程序为例,您需要执行以下步骤:
-
-- [第1步:准备Model Configs](#第1步准备model-configs)
-
-- [第2步:创建Agent](#第2步创建agent)
-
-- [第3步:构造对话](#第3步构造对话)
-
-#### 第1步:准备Model Configs
+### 配置
-AgentScope支持以下模型API服务:
+AgentScope中,模型的部署和调用是通过`ModelWrapper`实现解耦的。
-- OpenAI Python APIs,包括
+为了使用这些`ModelWrapper`,您需要准备如下的模型配置文件:
- - OpenAI Chat, DALL-E和Embedding API
-
- - 兼容OpenAI的Inference库,例如[FastChat](https://github.com/lm-sys/FastChat)和[vllm](https://github.com/vllm-project/vllm)
-
-- Post Request APIs,包括
-
- - [HuggingFace](https://huggingface.co/docs/api-inference/index)和[ModelScope](https://www.modelscope.cn/docs/%E9%AD%94%E6%90%ADv1.5%E7%89%88%E6%9C%AC%20Release%20Note%20(20230428)) Inference API
-
- - 自定义模型API
-
-| | 模型类型参数 | 支持的API |
-|----------------------|---------------------|----------------------------------------------------------------|
-| OpenAI Chat API | `openai` | 标准OpenAI Chat API, FastChat和vllm |
-| OpenAI DALL-E API | `openai_dall_e` | 标准DALL-E API |
-| OpenAI Embedding API | `openai_embedding` | OpenAI 嵌入式API |
-| DashScope Chat API | `dashscope_chat` | DashScope chat API,其中包含通义千问系列 |
-| Post API | `post_api` | Huggingface/ModelScope 推理API, 以及定制化的post API |
-
-##### OpenAI API Configs
-
-对于OpenAI API,您需要准备一个包含以下字段的模型配置字典:
-
-```
-{
- "config_name": "{配置名称}", # 用于识别配置的名称
- "model_type": "openai" | "openai_dall_e" | "openai_embedding",
- "model_name": "{模型名称,例如gpt-4}", # openai API中的模型
- # 可选
- "api_key": "xxx", # OpenAI API的API密钥。如果未设置,将使用环境变量OPENAI_API_KEY。
- "organization": "xxx", # OpenAI API的组织。如果未设置,将使用环境变量OPENAI_ORGANIZATION。
-}
-```
-
-##### DashScope API Config
-
-对于 DashScope API,你需要准备一个包含如下字段的配置字典:
+```python
+model_config = {
+ # 模型配置的名称,以及使用的模型wrapper
+ "config_name": "{your_config_name}", # 模型配置的名称
+ "model_type": "{model_type}", # 模型wrapper的类型
-```
-{
- "config_name": "{配置名称}", # 用于识别配置的名称
- "model_type": "dashscope_chat" | "dashscope_text_embedding" | "dashscope_image_synthesis",
- "model_name": "{模型名称,例如 qwen-max}", # dashscope 中的模型
- "api_key": "xxx", # The API key for DashScope API.
+ # 用以初始化模型wrapper的详细参数
+ # ...
}
```
-> 注意: dashscope API 可能对消息中的`role`域有严格的要求。请谨慎使用。
-
-##### Post Request API Config
-
-对于post请求API,配置包含以下字段。
+以OpenAI Chat API为例,模型配置如下:
-```
-{
- "config_name": "{配置名称}", # 用于识别配置的名称
- "model_type": "post_api",
- "api_url": "https://xxx", # 目标url
- "headers": { # 需要的头信息
- ...
- },
+```python
+openai_model_config = {
+ "config_name": "my_openai_config", # 模型配置的名称
+ "model_type": "openai", # 模型wrapper的类型
+
+ # 用以初始化模型wrapper的详细参数
+ "model_name": "gpt-4", # OpenAI API中的模型名
+ "api_key": "xxx", # OpenAI API的API密钥。如果未设置,将使用环境变量OPENAI_API_KEY。
+ "organization": "xxx", # OpenAI API的组织。如果未设置,将使用环境变量OPENAI_ORGANIZATION。
}
```
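上述配置也可以保存为JSON文件,再以文件路径的形式传入`agentscope.init`。下面是一个假设性的示意(文件路径与字段取值仅为示例):

```python
import json
import os
import tempfile

# 假设性示例:将模型配置保存为JSON文件,
# 之后可通过 agentscope.init(model_configs="model_configs.json") 加载
openai_model_config = {
    "config_name": "my_openai_config",
    "model_type": "openai",
    "model_name": "gpt-4",
}

config_path = os.path.join(tempfile.gettempdir(), "model_configs.json")
with open(config_path, "w", encoding="utf-8") as f:
    json.dump([openai_model_config], f, ensure_ascii=False, indent=2)
```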
-为了方便开发和调试,AgentScope在[scripts](./scripts/README.md)目录下提供了丰富的脚本以快速部署模型服务。
-有关模型服务的详细使用,请参阅我们的[教程](https://modelscope.github.io/agentscope/index.html#welcome-to-agentscope-tutorial-hub)和[API文档](https://modelscope.github.io/agentscope/index.html#indices-and-tables)。
+关于部署本地模型服务和准备模型配置的更多细节,请参阅我们的[教程](https://modelscope.github.io/agentscope/index.html#welcome-to-agentscope-tutorial-hub)。
-#### 第2步:创建Agent
+### 创建Agent
-创建内置的用户和助手Agent:
+创建AgentScope内置的`DialogAgent`和`UserAgent`对象:
```python
from agentscope.agents import DialogAgent, UserAgent
import agentscope
-# 载入模型配置
+# 加载模型配置
agentscope.init(model_configs="./model_configs.json")
# 创建对话Agent和用户Agent
-dialog_agent = DialogAgent(name="assistant", model_config_name="your_config_name")
+dialog_agent = DialogAgent(name="assistant",
+ model_config_name="my_openai_config")
user_agent = UserAgent()
```
-#### 第3步:构造对话
+### 构造对话
在AgentScope中,**Message**是Agent之间的桥梁,它是一个python**字典**(dict),包含两个必要字段`name`和`content`,以及一个可选字段`url`用于本地文件(图片、视频或音频)或网络链接。
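例如,一条最简单的消息可以用如下字典表示(仅为示意,实际开发中通常使用`agentscope.message.Msg`类构造消息):

```python
# 一条消息至少包含 name 和 content 两个必要字段,url 为可选字段
message = {
    "name": "assistant",                        # 消息发送者的名字
    "content": "你好!有什么可以帮助你的吗?",  # 消息内容
    "url": None,                                # 可选:本地文件或网络链接
}
```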
@@ -210,114 +213,26 @@ while True:
break
```
-### 进阶使用
-
-#### **Pipeline**和**MsgHub**
-
-为了简化Agent间通信的构建,AgentScope提供了两种语法工具:**Pipeline**和**MsgHub**。
-
-- **Pipeline**:它允许用户轻松编写Agent间的通信。以Sequential Pipeline为例,以下两种代码等效,但是pipeline的实现方式更加简洁和优雅。
-
- - **不使用** pipeline的情况下,agent1、agent2和agent3顺序传递消息:
-
- ```python
- x1 = agent1(input_msg)
- x2 = agent2(x1)
- x3 = agent3(x2)
- ```
-
- - **使用** pipeline对象的情况下:
-
- ```python
- from agentscope.pipelines import SequentialPipeline
-
- pipe = SequentialPipeline([agent1, agent2, agent3])
- x3 = pipe(input_msg)
- ```
-
- - **使用** functional pipeline的情况下:
-
- ```python
- from agentscope.pipelines.functional import sequentialpipeline
-
- x3 = sequentialpipeline([agent1, agent2, agent3], x=input_msg)
- ```
-
-- **MsgHub**:为了方便地实现多人对话,AgentScope提供了Message Hub。
-
- - **不使用** `msghub`:实现多人对话:
-
- ```python
- x1 = agent1(x)
- agent2.observe(x1) # 消息x1应该广播给其他agent
- agent3.observe(x1)
-
- x2 = agent2(x1)
- agent1.observe(x2)
- agent3.observe(x2)
- ```
-
- - **使用** `msghub`:在Message Hub中,来自参与者的消息将自动广播给所有其他参与者,因此在这种情况下,Agent的调用甚至不需要明确输入和输出消息,我们需要做的就是决定发言的顺序。此外,`msghub`还支持动态控制参与者,如下所示。
-
- ```python
- from agentscope import msghub
-
- with msghub(participants=[agent1, agent2, agent3]) as hub:
- agent1() # `x = agent1(x)`也可行
- agent2()
-
- # 向所有参与者广播一条消息
- hub.broadcast(Msg("Host", "欢迎加入群组对话!"))
-
- # 动态地添加或删除参与者
- hub.delete(agent1)
- hub.add(agent4)
- ```
-
-#### 定制您自己的Agent
-
-要实现您自己的Agent,您需要继承`AgentBase`类并实现`reply`函数。
-
-```python
-from agentscope.agents import AgentBase
-
-class MyAgent(AgentBase):
-
- def reply(self, x):
-
- # 在这里做一些事情,例如调用您的模型并获取原始字段作为agent的回应
- response = self.model(x).raw
- return response
-```
-
-#### 内置资源
-
-AgentScope提供丰富的内置资源以便开发人员轻松构建自己的应用程序。更多内置Agent、Service和Example即将推出!
-
-##### Agent Pool
-
-- UserAgent
-- DialogAgent
-- DictDialogAgent
-- ...
-
-##### Services
-
-- 网络搜索服务
-- 代码执行服务
-- 检索服务
-- 数据库服务
-- 文件服务
-- ...
-
-##### Example Applications
-
-- 对话示例:[examples/conversation](examples/conversation/README.md)
-- 狼人杀示例:[examples/werewolf](examples/werewolf/README.md)
-- 分布式Agent示例:[examples/distributed](examples/distributed/README.md)
-- ...
-
-更多内置资源即将推出!
+## 教程
+
+- [快速上手](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/quick_start.html)
+ - [安装](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/102-installation.html)
+ - [关于AgentScope](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/101-agentscope.html)
+ - [快速开始](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/103-example.html)
+ - [创建您的第一个应用](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/104-usecase.html)
+ - [日志和WebUI](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/105-logging.html#)
+- [进阶使用](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/advance.html)
+ - [定制你自己的Agent](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/201-agent.html)
+ - [智能体间交互](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/202-pipeline.html)
+ - [关于模型](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/203-model.html)
+ - [关于服务](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/204-service.html)
+ - [关于记忆](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/205-memory.html)
+ - [提示工程](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/206-prompt.html)
+ - [监控器](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/207-monitor.html)
+ - [关于分布式](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/208-distribute.html)
+- [参与贡献](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/contribute.html)
+ - [加入AgentScope社区](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/301-community.html)
+ - [贡献到AgentScope](https://modelscope.github.io/agentscope/zh_CN/tutorial_zh/302-contribute.html)
## License
diff --git a/docs/README.md b/docs/README.md
index 5aab32072..3611af1ab 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -12,7 +12,7 @@ pip install sphinx sphinx-autobuild sphinx_rtd_theme myst-parser sphinxcontrib-m
cd sphinx_doc
# step 3: build the sphinx doc
-make clean html
+make clean all
# step 4: view sphinx_doc/build/html/index.html using your browser
```
@@ -32,14 +32,14 @@ src
If a new package (`agentscope/new_package`) is added , please add the corresponding documents as follows:
-1. use the following script to generate template script (`sphinx_doc/source/agentscope.new_package.rst`) of new packages.
+1. use the following script to generate template script (`sphinx_doc/{language}/source/agentscope.new_package.rst`) of new packages.
```shell
cd sphinx_doc
-sphinx-apidoc -o source ../../src/agentscope
+sphinx-apidoc -o {language}/source ../../src/agentscope
```
-2. edit `sphinx_doc/source/agentscope.new_package.rst`, modify the content of the generated template script. For example, modify
+2. edit `sphinx_doc/{language}/source/agentscope.new_package.rst`, modify the content of the generated template script. For example, modify
```
agentscope.new\_package package
@@ -68,7 +68,7 @@ new\_module module
...
```
-1. modify the `sphinx_doc/source/index.rst`, add the new package into the table of contents.
+3. modify the `sphinx_doc/{language}/source/index.rst`, add the new package into the table of contents.
```
.. toctree::
@@ -86,8 +86,7 @@ new\_module module
4. rebuild the sphinx doc of AgentScope
```
-make clean
-make html
+make clean all
```
### Add doc for new modules
@@ -105,7 +104,7 @@ src
If a new module (agentscope/existing_package/new_module.py) is added , please add the corresponding documents as follows:
-1. edit `sphinx_doc/source/agentscope.existing_package.rst` and add the following content.
+1. edit `sphinx_doc/{language}/source/agentscope.existing_package.rst` and add the following content.
```
new\_module module
@@ -120,6 +119,5 @@ new\_module module
2. rebuild the sphinx doc of AgentScope
```
-make clean
-make html
+make clean all
```
diff --git a/docs/sphinx_doc/en/source/_templates/language_selector.html b/docs/sphinx_doc/en/source/_templates/language_selector.html
index cd289bf7e..a8aca93e0 100644
--- a/docs/sphinx_doc/en/source/_templates/language_selector.html
+++ b/docs/sphinx_doc/en/source/_templates/language_selector.html
@@ -1,5 +1,5 @@
diff --git a/docs/sphinx_doc/en/source/tutorial/103-example.md b/docs/sphinx_doc/en/source/tutorial/103-example.md
index 64e1e0af5..4b0e39ea7 100644
--- a/docs/sphinx_doc/en/source/tutorial/103-example.md
+++ b/docs/sphinx_doc/en/source/tutorial/103-example.md
@@ -12,46 +12,37 @@ agents).
AgentScope decouples the deployment and invocation of models to better build multi-agent applications.
In terms of model deployment, users can use third-party model services such
-as OpenAI API, HuggingFace/ModelScope Inference API, and can also quickly
-deploy local open-source model services through the [scripts]
-() in
-the repository. Currently, we support building basic model services quickly
-using Flask with Transformers (or ModelScope), and also support deploying
-local model services through FastChat and vllm inference engines.
-
-While in terms of model invocation, AgentScope provides a `ModelWrapper` class to encapsulate OpenAI API and RESTful Post Request calls.
-Currently, the supported OpenAI APIs include Chat, Image generation, and Embedding.
-Users can specify the model service by setting different model configs.
-
-| Model Usage | Supported APIs |
-| --------------------------- |-----------------------------------------------------------------------------|
-| Text generation | Standard *OpenAI* chat API, FastChat and vllm |
-| Image generation | *DALL-E* API for generating images |
-| Embedding | API for text embeddings |
-| General usages in POST | *Huggingface* and *ModelScope* Inference API, and other customized post API |
-
-Each API has its specific configuration requirements. For example, to configure an OpenAI API, you would need to fill out the following fields in the model config in a dict, a yaml file or a json file:
+as OpenAI API, Google Gemini API, HuggingFace/ModelScope Inference API, or
+quickly deploy local open-source model services through the [scripts](https://github.com/modelscope/agentscope/blob/main/scripts/README.md) in
+the repository.
+
+For model invocation, users should prepare a model configuration to specify the model service. Taking the OpenAI Chat API as an example, the model configuration is as follows:
```python
model_config = {
"config_name": "{config_name}", # A unique name for the model config.
"model_type": "openai", # Choose from "openai", "openai_dall_e", or "openai_embedding".
+
"model_name": "{model_name}", # The model identifier used in the OpenAI API, such as "gpt-3.5-turbo", "gpt-4", or "text-embedding-ada-002".
"api_key": "xxx", # Your OpenAI API key. If unset, the environment variable OPENAI_API_KEY is used.
"organization": "xxx", # Your OpenAI organization ID. If unset, the environment variable OPENAI_ORGANIZATION is used.
}
```
-For open-source models, we support integration with various model interfaces such as HuggingFace, ModelScope, FastChat, and vllm. You can find scripts on deploying these services in the `scripts` directory, and we defer the detailed instructions to [[Using Different Model Sources with Model API]](#203-model).
+For more details about model invocation, deployment, and open-source models, please refer to the [Model](203-model-en) section.
-You can register your configuration by calling AgentScope's initialization method as follow. Besides, you can also load more than one config by calling init multiple times.
+After preparing the model configuration, you can register your configuration by calling the `init` method of AgentScope. Additionally, you can load multiple model configurations at once.
```python
import agentscope
# init once by passing a list of config dict
-openai_cfg_dict = {...dict_filling...}
-modelscope_cfg_dict = {...dict_filling...}
+openai_cfg_dict = {
+ # ...
+}
+modelscope_cfg_dict = {
+ # ...
+}
agentscope.init(model_configs=[openai_cfg_dict, modelscope_cfg_dict])
```
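Since each configuration is identified by its `config_name`, duplicated names across configs can be confusing. A hypothetical sanity check (not part of the AgentScope API) might look like:

```python
def assert_unique_config_names(configs):
    """Raise ValueError if two model configs share the same config_name."""
    names = [cfg["config_name"] for cfg in configs]
    duplicates = {n for n in names if names.count(n) > 1}
    if duplicates:
        raise ValueError(f"Duplicate config_name(s): {sorted(duplicates)}")

# Distinct names pass silently
assert_unique_config_names([
    {"config_name": "openai_cfg", "model_type": "openai"},
    {"config_name": "modelscope_cfg", "model_type": "post_api"},
])
```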
@@ -71,7 +62,7 @@ dialogAgent = DialogAgent(name="assistant", model_config_name="gpt-4", sys_promp
userAgent = UserAgent()
```
-**NOTE**: Please refer to [[Customizing Your Custom Agent with Agent Pool]](201-agent) for all available agents.
+**NOTE**: Please refer to [Customizing Your Own Agent](201-agent-en) for all available agents.
## Step3: Agent Conversation
@@ -112,6 +103,6 @@ while x is None or x.content != "exit":
x = sequentialpipeline([dialog_agent, user_agent])
```
-For more details about how to utilize pipelines for complex agent interactions, please refer to [[Agent Interactions: Dive deeper into Pipelines and Message Hub]](202-pipeline).
+For more details about how to utilize pipelines for complex agent interactions, please refer to [Pipeline and MsgHub](202-pipeline-en).
[[Return to the top]](#103-start-en)
diff --git a/docs/sphinx_doc/en/source/tutorial/104-usecase.md b/docs/sphinx_doc/en/source/tutorial/104-usecase.md
index 5c894cc65..15ad7982c 100644
--- a/docs/sphinx_doc/en/source/tutorial/104-usecase.md
+++ b/docs/sphinx_doc/en/source/tutorial/104-usecase.md
@@ -308,11 +308,5 @@ Moderator: The day is coming, all the players open your eyes. Last night is peac
Now you've grasped how to conveniently set up a multi-agent application with AgentScope. Feel free to tailor the game to include additional roles and introduce more sophisticated strategies. For more advanced tutorials that delve deeper into more capabilities of AgentScope, such as *memory management* and *service functions* utilized by agents, please refer to the tutorials in the **Advanced Exploration** section and look up the API references.
-## Other Example Applications
-
-- Example of Simple Group Conversation: [examples/Simple Conversation](https://github.com/modelscope/agentscope/tree/main/examples/simple_chat/README.md)
-- Example of Werewolves: [examples/Werewolves](https://github.com/modelscope/agentscope/tree/main/examples/werewolves/README.md)
-- Example of Distributed Agents: [examples/Distributed Agents](https://github.com/modelscope/agentscope/tree/main/examples/distributed_agents/README.md)
-- ...
[[Return to the top]](#104-usecase-en)
diff --git a/docs/sphinx_doc/en/source/tutorial/202-pipeline.md b/docs/sphinx_doc/en/source/tutorial/202-pipeline.md
index 00841bd0e..160c7d0fa 100644
--- a/docs/sphinx_doc/en/source/tutorial/202-pipeline.md
+++ b/docs/sphinx_doc/en/source/tutorial/202-pipeline.md
@@ -1,6 +1,6 @@
(202-pipeline-en)=
-# Agent Interactions: Dive deeper into Pipelines and Message Hub
+# Pipeline and MsgHub
**Pipeline & MsgHub** (message hub) are one or a sequence of steps describing how the structured `Msg` passes between multi-agents, which streamlines the process of collaboration across agents.
diff --git a/docs/sphinx_doc/en/source/tutorial/203-model.md b/docs/sphinx_doc/en/source/tutorial/203-model.md
index f1a6d7af9..0839b26f0 100644
--- a/docs/sphinx_doc/en/source/tutorial/203-model.md
+++ b/docs/sphinx_doc/en/source/tutorial/203-model.md
@@ -1,6 +1,6 @@
(203-model-en)=
-# Model Service
+# Model
In AgentScope, the model deployment and invocation are decoupled by `ModelWrapper`.
Developers can specify their own model by providing model configurations,
@@ -11,7 +11,10 @@ model services.
Currently, AgentScope supports the following model service APIs:
-- OpenAI API, including Chat, image generation (DALL-E), and Embedding.
+- OpenAI API, including chat, image generation (DALL-E), and embedding.
+- DashScope API, including chat, image synthesis and text embedding.
+- Gemini API, including chat and embedding.
+- Ollama API, including chat, embedding and generation.
- Post Request API, model inference services based on Post
requests, including Huggingface/ModelScope Inference API and various
post request based model APIs.
@@ -29,45 +32,19 @@ import agentscope
agentscope.init(model_configs=MODEL_CONFIG_OR_PATH)
```
-An example of `model_configs` is as follows:
-
-```python
-model_configs = [
- {
- "config_name": "gpt-4-temperature-0.0",
- "model_type": "openai",
- "model_name": "gpt-4",
- "api_key": "xxx",
- "organization": "xxx",
- "generate_args": {
- "temperature": 0.0
- }
- },
- {
- "config_name": "dall-e-3-size-1024x1024",
- "model_type": "openai_dall_e",
- "model_name": "dall-e-3",
- "api_key": "xxx",
- "organization": "xxx",
- "generate_args": {
- "size": "1024x1024"
- }
- },
- # Additional models can be configured here
-]
-```
-
### Configuration Format
-In AgentScope the model configuration is a dictionary used to specify the type of model and set the call parameters.
+In AgentScope, the model configuration is a dictionary used to specify the type of model and set the call parameters.
We divide the fields in the model configuration into two categories: _basic parameters_ and _detailed parameters_.
+
Among them, the basic parameters include `config_name` and `model_type`, which are used to distinguish different model configurations and specific `ModelWrapper` types.
+The detailed parameters will be fed into the corresponding model class's constructor to initialize the model instance.
```python
{
# Basic parameters
- "config_name": "gpt-4-temperature-0.0", # Model configuration name
- "model_type": "openai", # Correspond to `ModelWrapper` type
+ "config_name": "gpt-4-temperature-0.0", # Model configuration name
+ "model_type": "openai", # Correspond to `ModelWrapper` type
# Detailed parameters
# ...
@@ -93,68 +70,308 @@ class OpenAIChatWrapper(OpenAIWrapper):
In the current AgentScope, the supported `model_type` types, the corresponding
`ModelWrapper` classes, and the supported APIs are as follows:
-| Task | model_type | ModelWrapper | Supported APIs |
-|------------------|--------------------|--------------------------|------------------------------------------------------------|
-| Text generation | `openai` | `OpenAIChatWrapper` | Standard OpenAI chat API, FastChat and vllm |
-| Image generation | `openai_dall_e` | `OpenAIDALLEWrapper` | DALL-E API for generating images |
-| Embedding | `openai_embedding` | `OpenAIEmbeddingWrapper` | API for text embeddings |
-| Post Request | `post_api` | `PostAPIModelWrapperBase` | Huggingface/ModelScope Inference API, and customized post API |
+| API | Task | Model Wrapper | `model_type` |
+|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|-------------------------------|
+| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai"` |
+| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai_embedding"` |
+| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai_dall_e"` |
+| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | `"dashscope_chat"` |
+| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | `"dashscope_image_synthesis"` |
+| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | `"dashscope_text_embedding"` |
+| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | `"gemini_chat"` |
+| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | `"gemini_embedding"` |
+| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_chat"` |
+|                        | Embedding       | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py)            | `"ollama_embedding"`          |
+| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_generate"` |
+| Post Request based API | -               | [`PostAPIModelWrapperBase`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py)             | `"post_api"`                  |
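The mapping above can be summarized programmatically. The following hypothetical helper (not part of AgentScope) checks that a config carries the two basic parameters with a supported `model_type`:

```python
# model_type values drawn from the table above
SUPPORTED_MODEL_TYPES = {
    "openai", "openai_embedding", "openai_dall_e",
    "dashscope_chat", "dashscope_image_synthesis", "dashscope_text_embedding",
    "gemini_chat", "gemini_embedding",
    "ollama_chat", "ollama_embedding", "ollama_generate",
    "post_api",
}

def has_valid_basic_params(config):
    """Check the two basic parameters every model config must carry."""
    return (
        "config_name" in config
        and config.get("model_type") in SUPPORTED_MODEL_TYPES
    )
```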
#### Detailed Parameters
-According to the different `ModelWrapper`, the parameters contained in the
-detailed parameters are different. However, all detailed parameters will be
-used to initialize the instance of the `ModelWrapper` class. Therefore, more
-detailed parameter descriptions can be viewed according to the constructor of
-their `ModelWrapper` classes.
+In AgentScope, the detailed parameters are different according to the different `ModelWrapper` classes.
+To specify the detailed parameters, you need to refer to the specific `ModelWrapper` class and its constructor.
+Here we provide example configurations for different model wrappers.
-- For OpenAI APIs including text generation, image generation, and text embedding, the model configuration parameters are as follows:
+##### OpenAI API
+
+
+OpenAI Chat API (`agentscope.models.OpenAIChatWrapper`)
```python
-{
- # basic parameters
- "config_name": "gpt-4_temperature-0.0",
+openai_chat_config = {
+ "config_name": "{your_config_name}",
"model_type": "openai",
- # detailed parameters
- # required parameters
- "model_name": "gpt-4", # OpenAI model name
+ # Required parameters
+ "model_name": "gpt-4",
- # optional
- "api_key": "xxx", # OpenAI API Key, if not provided, it will be read from the environment variable
- "organization": "xxx", # Organization name, if not provided, it will be read from the environment variable
- "client_args": { # Parameters for initializing the OpenAI API Client
+ # Optional parameters
+ "api_key": "{your_api_key}", # OpenAI API Key, if not provided, it will be read from the environment variable
+ "organization": "{your_organization}", # Organization name, if not provided, it will be read from the environment variable
+ "client_args": { # Parameters for initializing the OpenAI API Client
# e.g. "max_retries": 3,
},
- "generate_args": { # Parameters passed to the model when calling
+ "generate_args": { # Parameters passed to the model when calling
# e.g. "temperature": 0.0
},
- "budget": 100.0 # API budget
+ "budget": 100 # API budget
}
```
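The fallback behavior noted in the comments (reading the key from the environment when it is not configured) can be sketched as follows; `resolve_openai_api_key` is a hypothetical helper, not part of the AgentScope API:

```python
import os

def resolve_openai_api_key(config):
    """Return the configured api_key, falling back to OPENAI_API_KEY."""
    return config.get("api_key") or os.environ.get("OPENAI_API_KEY")
```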
-- For post request API, the model configuration parameters are as follows:
+
+
+
+OpenAI DALL·E API (`agentscope.models.OpenAIDALLEWrapper`)
```python
{
- # Basic parameters
- "config_name": "gpt-4_temperature-0.0",
+ "config_name": "{your_config_name}",
+ "model_type": "openai_dall_e",
+
+ # Required parameters
+ "model_name": "{model_name}", # OpenAI model name, e.g. dall-e-2, dall-e-3
+
+ # Optional parameters
+ "api_key": "{your_api_key}", # OpenAI API Key, if not provided, it will be read from the environment variable
+ "organization": "{your_organization}", # Organization name, if not provided, it will be read from the environment variable
+ "client_args": { # Parameters for initializing the OpenAI API Client
+ # e.g. "max_retries": 3,
+ },
+ "generate_args": { # Parameters passed to the model when calling
+ # e.g. "n": 1, "size": "512x512"
+ }
+}
+```
+
+
+
+
+OpenAI Embedding API (`agentscope.models.OpenAIEmbeddingWrapper`)
+
+```python
+{
+ "config_name": "{your_config_name}",
+ "model_type": "openai_embedding",
+
+ # Required parameters
+ "model_name": "{model_name}", # OpenAI model name, e.g. text-embedding-ada-002, text-embedding-3-small
+
+ # Optional parameters
+ "api_key": "{your_api_key}", # OpenAI API Key, if not provided, it will be read from the environment variable
+ "organization": "{your_organization}", # Organization name, if not provided, it will be read from the environment variable
+ "client_args": { # Parameters for initializing the OpenAI API Client
+ # e.g. "max_retries": 3,
+ },
+ "generate_args": { # Parameters passed to the model when calling
+ # e.g. "encoding_format": "float"
+ }
+}
+```
+
+
+
+
+
+##### DashScope API
+
+
+DashScope Chat API (`agentscope.models.DashScopeChatWrapper`)
+
+```python
+{
+ "config_name": "my_dashscope_chat_config",
+ "model_type": "dashscope_chat",
+
+ # Required parameters
+ "model_name": "{model_name}", # The model name in DashScope API, e.g. qwen-max
+
+ # Optional parameters
+ "api_key": "{your_api_key}", # DashScope API Key, if not provided, it will be read from the environment variable
+ "generate_args": {
+ # e.g. "temperature": 0.5
+ },
+}
+```
+
+
+
+
+DashScope Image Synthesis API (`agentscope.models.DashScopeImageSynthesisWrapper`)
+
+```python
+{
+ "config_name": "my_dashscope_image_synthesis_config",
+ "model_type": "dashscope_image_synthesis",
+
+ # Required parameters
+ "model_name": "{model_name}", # The model name in DashScope Image Synthesis API, e.g. wanx-v1
+
+ # Optional parameters
+ "api_key": "{your_api_key}",
+ "generate_args": {
+ "negative_prompt": "xxx",
+ "n": 1,
+ # ...
+ }
+}
+```
+
+
+
+
+DashScope Text Embedding API (`agentscope.models.DashScopeTextEmbeddingWrapper`)
+
+```python
+{
+ "config_name": "my_dashscope_text_embedding_config",
+ "model_type": "dashscope_text_embedding",
+
+ # Required parameters
+ "model_name": "{model_name}", # The model name in DashScope Text Embedding API, e.g. text-embedding-v1
+
+ # Optional parameters
+ "api_key": "{your_api_key}",
+ "generate_args": {
+ # ...
+ },
+}
+```
+
+
+
+
+
+##### Gemini API
+
+
+Gemini Chat API (`agentscope.models.GeminiChatWrapper`)
+
+```python
+{
+ "config_name": "my_gemini_chat_config",
+ "model_type": "gemini_chat",
+
+ # Required parameters
+    "model_name": "{model_name}",           # The model name in Gemini API, e.g. gemini-pro
+
+ # Optional parameters
+ "api_key": "{your_api_key}", # If not provided, the API key will be read from the environment variable GEMINI_API_KEY
+}
+```
+
+
+
+
+Gemini Embedding API (`agentscope.models.GeminiEmbeddingWrapper`)
+
+```python
+{
+ "config_name": "my_gemini_embedding_config",
+ "model_type": "gemini_embedding",
+
+ # Required parameters
+    "model_name": "{model_name}",           # The model name in Gemini API, e.g. gemini-pro
+
+ # Optional parameters
+ "api_key": "{your_api_key}", # If not provided, the API key will be read from the environment variable GEMINI_API_KEY
+}
+```
+
+
+
+
+
+##### Ollama API
+
+
+Ollama Chat API (`agentscope.models.OllamaChatWrapper`)
+
+```python
+{
+ "config_name": "my_ollama_chat_config",
+ "model_type": "ollama_chat",
+
+ # Required parameters
+ "model": "{model_name}", # The model name used in ollama API, e.g. llama2
+
+ # Optional parameters
+ "options": { # Parameters passed to the model when calling
+        # e.g. "temperature": 0.0, "seed": 123,
+ },
+ "keep_alive": "5m", # Controls how long the model will stay loaded into memory
+}
+```
+
+
+
+
+Ollama Generation API (`agentscope.models.OllamaGenerationWrapper`)
+
+```python
+{
+ "config_name": "my_ollama_generate_config",
+ "model_type": "ollama_generate",
+
+ # Required parameters
+ "model": "{model_name}", # The model name used in ollama API, e.g. llama2
+
+ # Optional parameters
+ "options": { # Parameters passed to the model when calling
+        # e.g. "temperature": 0.0, "seed": 123,
+ },
+ "keep_alive": "5m", # Controls how long the model will stay loaded into memory
+}
+```
+
+
+
+
+Ollama Embedding API (`agentscope.models.OllamaEmbeddingWrapper`)
+
+```python
+{
+ "config_name": "my_ollama_embedding_config",
+ "model_type": "ollama_embedding",
+
+ # Required parameters
+ "model": "{model_name}", # The model name used in ollama API, e.g. llama2
+
+ # Optional parameters
+ "options": { # Parameters passed to the model when calling
+        # e.g. "temperature": 0.0, "seed": 123,
+ },
+ "keep_alive": "5m", # Controls how long the model will stay loaded into memory
+}
+```
+
+
+
+
+
+##### Post Request API
+
+
+Post request API (`agentscope.models.PostAPIModelWrapperBase`)
+
+```python
+{
+ "config_name": "my_postapiwrapper_config",
"model_type": "post_api",
- # Detailed parameters
- "api_url": "http://xxx.png",
+ # Required parameters
+ "api_url": "https://xxx.xxx",
"headers": {
# e.g. "Authorization": "Bearer xxx",
},
- # Optional parameters, need to be configured according to the requirements of the Post request API
- "json_args": {
- # e.g. "temperature": 0.0
- }
- # ...
+ # Optional parameters
+ "messages_key": "messages",
}
```
+
+
+
+
## Build Model Service from Scratch
For developers who need to build their own model services, AgentScope
@@ -164,57 +381,55 @@ directory.
Specifically, AgentScope provides the following model service scripts:
-- Model service based on **Flask + HuggingFace**
-- Model service based on **Flask + ModelScope**
-- **FastChat** inference engine
-- **vllm** inference engine
+- [CPU inference engine **ollama**](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#ollama)
+- [Model service based on **Flask + Transformers**](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#with-transformers-library)
+- [Model service based on **Flask + ModelScope**](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#with-modelscope-library)
+- [**FastChat** inference engine](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#fastchat)
+- [**vllm** inference engine](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#vllm)
-Taking the Flask + Huggingface model service as an example, we will introduce how to use the model service script of AgentScope.
-More model service scripts can be found in [scripts](https://github.com/modelscope/agentscope/blob/main/scripts/) directory.
+For instructions on how to quickly start these model services, users can refer to the [README.md](https://github.com/modelscope/agentscope/blob/main/scripts/README.md) file under the [scripts](https://github.com/modelscope/agentscope/blob/main/scripts/) directory.
-### Flask-based Model API Serving
+## Create Your Own Model Wrapper
-[Flask](https://github.com/pallets/flask) is a lightweight web application framework. It is easy to build a local model API service with Flask.
+AgentScope allows developers to customize their own model wrappers.
+The new model wrapper class should
+- inherit from `ModelWrapperBase` class,
+- provide a `model_type` field to identify this model wrapper in the model configuration, and
+- implement its `__init__` and `__call__` functions.
-#### Using transformers library
+The following is an example of creating a new model wrapper class.
-##### Install Libraries and Set up Serving
+```python
+from agentscope.models import ModelWrapperBase
-Install Flask and Transformers by following the command.
+class MyModelWrapper(ModelWrapperBase):
-```bash
-pip install Flask transformers
-```
+ model_type: str = "my_model"
-Taking model `meta-llama/Llama-2-7b-chat-hf` and port `8000` as an example, set up the model API service by running the following command.
+ def __init__(self, config_name, my_arg1, my_arg2, **kwargs):
+ # Initialize the model instance
+ super().__init__(config_name=config_name)
+ # ...
-```bash
-python flask_transformers/setup_hf_service.py
- --model_name_or_path meta-llama/Llama-2-7b-chat-hf
- --device "cuda:0" # or "cpu"
- --port 8000
+ def __call__(self, input, **kwargs) -> str:
+ # Call the model instance
+ # ...
```
-You can replace `meta-llama/Llama-2-7b-chat-hf` with any model card in the huggingface model hub.
+Once the new model wrapper class is created, it will be registered with AgentScope automatically.
+You can use it in the model configuration directly.
-##### Use in AgentScope
-
-In AgentScope, you can load the model with the following model configs: [./flask_transformers/model_config.json](https://github.com/modelscope/agentscope/blob/main/scripts/flask_transformers/model_config.json).
+```python
+my_model_config = {
+ # Basic parameters
+ "config_name": "my_model_config",
+ "model_type": "my_model",
-```json
-{
- "model_type": "post_api",
- "config_name": "flask_llama2-7b-chat",
- "api_url": "http://127.0.0.1:8000/llm/",
- "json_args": {
- "max_length": 4096,
- "temperature": 0.5
- }
+ # Detailed parameters
+ "my_arg1": "xxx",
+ "my_arg2": "yyy",
+ # ...
}
```
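To make the automatic-registration idea above concrete, here is a minimal, self-contained sketch of how a `model_type`-keyed registry can work. This is a toy re-implementation for illustration only; AgentScope's real `ModelWrapperBase` handles registration internally, and the `load_model` helper is hypothetical.

```python
class ModelWrapperBase:
    """Toy base class: subclasses register themselves by model_type."""
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        model_type = getattr(cls, "model_type", None)
        if model_type:
            ModelWrapperBase.registry[model_type] = cls

    def __init__(self, config_name):
        self.config_name = config_name

class MyModelWrapper(ModelWrapperBase):
    model_type = "my_model"

    def __init__(self, config_name, my_arg1, my_arg2, **kwargs):
        super().__init__(config_name=config_name)
        self.my_arg1, self.my_arg2 = my_arg1, my_arg2

    def __call__(self, input, **kwargs) -> str:
        # A stand-in for a real model call.
        return f"echo: {input}"

def load_model(config):
    """Look up the wrapper class by model_type and instantiate it."""
    cfg = dict(config)
    cls = ModelWrapperBase.registry[cfg.pop("model_type")]
    return cls(**cfg)

model = load_model({
    "config_name": "my_model_config",
    "model_type": "my_model",
    "my_arg1": "xxx",
    "my_arg2": "yyy",
})
print(model("hi"))  # echo: hi
```

Defining the subclass is enough to make it discoverable by its `model_type`, which is why the model configuration can reference `"my_model"` directly.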
-##### Note
-
-In this model serving, the messages from post requests should be in **STRING** format. You can use [templates for chat model](https://huggingface.co/docs/transformers/main/chat_templating) from _transformers_ with a little modification based on [`./flask_transformers/setup_hf_service.py`](https://github.com/modelscope/agentscope/blob/main/scripts/flask_transformers/setup_hf_service.py).
-
[[Return to Top]](#203-model-en)
diff --git a/docs/sphinx_doc/en/source/tutorial/204-service.md b/docs/sphinx_doc/en/source/tutorial/204-service.md
index d77fa674c..660d57c5b 100644
--- a/docs/sphinx_doc/en/source/tutorial/204-service.md
+++ b/docs/sphinx_doc/en/source/tutorial/204-service.md
@@ -1,6 +1,6 @@
(204-service-en)=
-# About Service
+# Service
Service function is a set of multi-functional utility tools that can be
used to enhance the capabilities of agents, such as executing Python code,
diff --git a/docs/sphinx_doc/en/source/tutorial/205-memory.md b/docs/sphinx_doc/en/source/tutorial/205-memory.md
index baa95589a..788a99c21 100644
--- a/docs/sphinx_doc/en/source/tutorial/205-memory.md
+++ b/docs/sphinx_doc/en/source/tutorial/205-memory.md
@@ -1,6 +1,6 @@
(205-memory-en)=
-# About Memory
+# Memory
In AgentScope, memory is used to store historical information, allowing the
agent to provide more coherent and natural responses based on context.
diff --git a/docs/sphinx_doc/en/source/tutorial/208-distribute.md b/docs/sphinx_doc/en/source/tutorial/208-distribute.md
index 29273fd46..34321f62c 100644
--- a/docs/sphinx_doc/en/source/tutorial/208-distribute.md
+++ b/docs/sphinx_doc/en/source/tutorial/208-distribute.md
@@ -1,6 +1,6 @@
(208-distribute-en)=
-# About Distribution
+# Distribution
AgentScope implements an Actor-based distributed deployment and parallel optimization, providing the following features:
diff --git a/docs/sphinx_doc/en/source/tutorial/301-community.md b/docs/sphinx_doc/en/source/tutorial/301-community.md
index 438bbd49a..1d5fcb1a2 100644
--- a/docs/sphinx_doc/en/source/tutorial/301-community.md
+++ b/docs/sphinx_doc/en/source/tutorial/301-community.md
@@ -1,6 +1,6 @@
(301-community-en)=
-# Joining The AgentScope Community
+# Joining the AgentScope Community
Becoming a part of the AgentScope community allows you to connect with other users and developers. You can share insights, ask questions, and keep up-to-date with the latest developments and interesting multi-agent applications. Here's how you can join us:
diff --git a/docs/sphinx_doc/en/source/tutorial/302-contribute.md b/docs/sphinx_doc/en/source/tutorial/302-contribute.md
index b6fb91364..e4f8b1bbe 100644
--- a/docs/sphinx_doc/en/source/tutorial/302-contribute.md
+++ b/docs/sphinx_doc/en/source/tutorial/302-contribute.md
@@ -1,6 +1,6 @@
(302-contribute-en)=
-# Contributing to AgentScope
+# Contribute to AgentScope
Our community thrives on the diverse ideas and contributions of its members. Whether you're fixing a bug, adding a new feature, improving the documentation, or adding examples, your help is welcome. Here's how you can contribute:
diff --git a/docs/sphinx_doc/en/source/tutorial/main.md b/docs/sphinx_doc/en/source/tutorial/main.md
index 54ab094e4..abc9cb528 100644
--- a/docs/sphinx_doc/en/source/tutorial/main.md
+++ b/docs/sphinx_doc/en/source/tutorial/main.md
@@ -12,24 +12,24 @@ AgentScope is an innovative multi-agent platform designed to empower developers
### Getting Started
-- [Installation Guide](102-installation-en)
-- [Fundamental Concepts](101-agentscope-en)
-- [Getting Started with a Simple Example](103-start-en)
+- [About AgentScope](101-agentscope-en)
+- [Installation](102-installation-en)
+- [Quick Start](103-start-en)
- [Crafting Your First Application](104-usecase-en)
- [Logging and WebUI](105-logging-en)
### Advanced Exploration
- [Customizing Your Own Agent](201-agent-en)
-- [Agent Interactions: Dive deeper into Pipelines and Message Hub](202-pipeline-en)
-- [Using Different Model Sources with Model API](203-model-en)
-- [Enhancing Agent Capabilities with Service Functions](204-service-en)
-- [Memory and Message Management](205-memory-en)
+- [Pipeline and MsgHub](202-pipeline-en)
+- [Model](203-model-en)
+- [Service](204-service-en)
+- [Memory](205-memory-en)
- [Prompt Engine](206-prompt-en)
-- [Monitoring](207-monitor-en)
-- [Distributed Deployment](208-distribute-en)
+- [Monitor](207-monitor-en)
+- [Distribution](208-distribute-en)
### Getting Involved
-- [Joining The AgentScope Community](301-community-en)
-- [Contributing to AgentScope](302-contribute-en)
+- [Joining the AgentScope Community](301-community-en)
+- [Contribute to AgentScope](302-contribute-en)
diff --git a/docs/sphinx_doc/en/source/tutorial/quick_start.rst b/docs/sphinx_doc/en/source/tutorial/quick_start.rst
index 8df4cb5d4..ecf79d47e 100644
--- a/docs/sphinx_doc/en/source/tutorial/quick_start.rst
+++ b/docs/sphinx_doc/en/source/tutorial/quick_start.rst
@@ -4,8 +4,8 @@ Getting Started
.. toctree::
:maxdepth: 2
- 102-installation.md
101-agentscope.md
+ 102-installation.md
103-example.md
104-usecase.md
105-logging.md
\ No newline at end of file
diff --git a/docs/sphinx_doc/zh_CN/source/_templates/language_selector.html b/docs/sphinx_doc/zh_CN/source/_templates/language_selector.html
index cd289bf7e..a8aca93e0 100644
--- a/docs/sphinx_doc/zh_CN/source/_templates/language_selector.html
+++ b/docs/sphinx_doc/zh_CN/source/_templates/language_selector.html
@@ -1,5 +1,5 @@
diff --git a/docs/sphinx_doc/zh_CN/source/index.rst b/docs/sphinx_doc/zh_CN/source/index.rst
index 8373a6b1e..6fb5ed52a 100644
--- a/docs/sphinx_doc/zh_CN/source/index.rst
+++ b/docs/sphinx_doc/zh_CN/source/index.rst
@@ -9,7 +9,7 @@ AgentScope 文档
======================================
-.. include:: tutorial_zh/main.md
+.. include:: tutorial/main.md
:parser: myst_parser.sphinx_
.. toctree::
@@ -18,9 +18,9 @@ AgentScope 文档
:hidden:
:caption: AgentScope 教程
- tutorial_zh/quick_start.rst
- tutorial_zh/advance.rst
- tutorial_zh/contribute.rst
+ tutorial/quick_start.rst
+ tutorial/advance.rst
+ tutorial/contribute.rst
.. toctree::
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/101-agentscope.md b/docs/sphinx_doc/zh_CN/source/tutorial/101-agentscope.md
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/101-agentscope.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/101-agentscope.md
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/102-installation.md b/docs/sphinx_doc/zh_CN/source/tutorial/102-installation.md
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/102-installation.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/102-installation.md
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/103-example.md b/docs/sphinx_doc/zh_CN/source/tutorial/103-example.md
similarity index 62%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/103-example.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/103-example.md
index 1095832aa..780bfe874 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/103-example.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/103-example.md
@@ -8,37 +8,24 @@ AgentScope内置了灵活的通信机制。在本教程中,我们将通过一
为了更好的构建多智能体应用,AgentScope将模型的部署与调用解耦开,以API服务调用的方式支持各种不同的模型。
-在模型部署方面,用户可以使用第三方模型服务,例如OpenAI API,HuggingFace Inference
-API,同时也可以通过仓库中的[脚本](https://github.com/modelscope/agentscope/blob/main/scripts/README.md)快速部署本地开源模型服务,
-目前已支持通过Flask配合Transformers(或ModelScope)快速建立基础的模型服务,同时也已经支持通过FastChat和vllm等推理引擎部署本地模型服务。
+在模型部署方面,用户可以使用第三方模型服务,例如OpenAI API,Google Gemini API, HuggingFace/ModelScope Inference API等,或者也可以通过AgentScope仓库中的[脚本](https://github.com/modelscope/agentscope/blob/main/scripts/README.md)快速部署本地开源模型服务。
-模型调用方面,AgentScope通过`ModelWrapper`类提供OpenAI API和RESTful Post Request调用的封装。
-目前支持的OpenAI API包括了对话(Chat),图片生成(Image generation)和嵌入式(Embedding)。
-用户可以通过设定不同的model config来指定模型服务。
-
-| 模型使用 | APIs |
-|--------------|------------------------------------------------------------------------|
-| 文本生成 | *OpenAI* chat API,FastChat和vllm |
-| 图片生成 | *DALL-E* API |
-| 文本嵌入 | 文本Embedding |
-| 基于Post请求的API | *Huggingface*/*ModelScope* Inference API,以及用户自定应的基于Post请求的API |
-
-每种API都有其特定的配置要求。例如,要配置OpenAI API,您需要在模型配置中填写以下字段:
+模型调用方面,用户需要通过设定模型配置来指定模型服务。以OpenAI Chat API为例,需要准备如下的模型配置:
```python
model_config = {
"config_name": "{config_name}", # A unique name for the model config.
"model_type": "openai", # Choose from "openai", "openai_dall_e", or "openai_embedding".
+
"model_name": "{model_name}", # The model identifier used in the OpenAI API, such as "gpt-3.5-turbo", "gpt-4", or "text-embedding-ada-002".
"api_key": "xxx", # Your OpenAI API key. If unset, the environment variable OPENAI_API_KEY is used.
"organization": "xxx", # Your OpenAI organization ID. If unset, the environment variable OPENAI_ORGANIZATION is used.
}
```
-对于开源模型,我们支持与HuggingFace、ModelScope、FastChat和vllm等各种模型接口的集成。您可以在`scripts
-`目录中找到部署这些服务的脚本,详细说明请见[[模型服务]](203-model).
+更多关于模型调用,部署和开源模型的信息请见[模型](203-model-zh)章节。
-您可以通过调用AgentScope的初始化方法来注册您的配置。此外,您还可以一次性加载多个模型配置。
+准备好模型配置后,用户可以通过调用AgentScope的初始化函数`agentscope.init`来注册配置。此外,也可以一次性加载多个模型配置。
```python
import agentscope
@@ -69,7 +56,7 @@ dialogAgent = DialogAgent(name="assistant", model_config_name="gpt-4", sys_promp
userAgent = UserAgent()
```
-**注意**:请参考[[使用Agent Pool自定义您的自定义智能体]](201-agent)以获取所有可用的智能体以及创建自定义的智能体。
+**注意**:请参考[定制你自己的Agent](201-agent-zh)以获取所有可用的智能体以及创建自定义的智能体。
## 第三步:智能体对话
@@ -112,6 +99,6 @@ while x is None or x.content != "exit":
x = sequentialpipeline([dialog_agent, user_agent])
```
-有关如何使用Pipeline进行复杂的智能体交互的更多细节,请参考[[Agent Interactions: Dive deeper into Pipelines and Message Hub]](202-pipeline)。
+有关如何使用Pipeline进行复杂的智能体交互的更多细节,请参考[Pipeline和MsgHub](202-pipeline-zh)。
[[返回顶部]](#103-example-zh)
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/104-usecase.md b/docs/sphinx_doc/zh_CN/source/tutorial/104-usecase.md
similarity index 97%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/104-usecase.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/104-usecase.md
index 6809b7143..30028be49 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/104-usecase.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/104-usecase.md
@@ -263,7 +263,7 @@ for i in range(1, MAX_GAME_ROUND + 1):
基于它们的角色和上述编码的策略进行互动:
```bash
-cd examples/werewolf
+cd examples/game_werewolf
python main.py # Assuming the pipeline is implemented in main.py
```
@@ -309,11 +309,4 @@ Moderator: The day is coming, all the players open your eyes. Last night is peac
现在你已经掌握了如何使用AgentScope方便地设置多agent应用程序。您可以随意修改游戏,包括引入额外的角色或者引入更复杂的策略。如果你想更深入地探索AgentScope的更多功能,比如agent使用的内存管理和服务函数,请参考高级探索部分的教程并查阅API参考。
-## 其他样例
-
-- 简单群聊样例: [examples/Simple Conversation](https://github.com/modelscope/agentscope/tree/main/examples/simple_chat/README.md)
-- 狼人杀样例[examples/Werewolves](https://github.com/modelscope/agentscope/tree/main/examples/werewolves/README.md)
-- 分布式agents样例[examples/Distributed Agents](https://github.com/modelscope/agentscope/tree/main/examples/distributed_agents/README.md)
-- ...
-
[[返回顶部]](#104-usecase-zh)
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/105-logging.md b/docs/sphinx_doc/zh_CN/source/tutorial/105-logging.md
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/105-logging.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/105-logging.md
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/201-agent.md b/docs/sphinx_doc/zh_CN/source/tutorial/201-agent.md
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/201-agent.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/201-agent.md
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/202-pipeline.md b/docs/sphinx_doc/zh_CN/source/tutorial/202-pipeline.md
similarity index 99%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/202-pipeline.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/202-pipeline.md
index ee3ca8575..852785fca 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/202-pipeline.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/202-pipeline.md
@@ -1,6 +1,6 @@
(202-pipeline-zh)=
-# 智能体间交互
+# Pipeline 和 MsgHub
**Pipeline**和**Message Hub**主要用于描绘应用中信息的交换和传播过程,它们极大简化了Multi-Agent应用流程的编排工作。
在本教程中,我们将详细的介绍Pipeline和Message Hub的原理和使用方式。
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md b/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md
new file mode 100644
index 000000000..bab0253fa
--- /dev/null
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md
@@ -0,0 +1,449 @@
+(203-model-zh)=
+
+# 模型
+
+AgentScope中,模型的部署和调用是通过`ModelWrapper`来解耦开的,开发者可以通过提供模型配置(Model config)的方式指定模型,同时AgentScope也提供脚本支持开发者自定义模型服务。
+
+## 支持模型
+
+目前,AgentScope内置以下模型服务API的支持:
+
+- OpenAI API,包括对话(Chat),图片生成(DALL-E)和文本嵌入(Embedding)。
+- DashScope API,包括对话(Chat),图片生成(Image Synthesis)和文本嵌入(Text Embedding)。
+- Gemini API,包括对话(Chat)和嵌入(Embedding)。
+- Ollama API,包括对话(Chat),嵌入(Embedding)和生成(Generation)。
+- Post请求API,基于Post请求实现的模型推理服务,包括Huggingface/ModelScope
+ Inference API和各种符合Post请求格式的API。
+
+## 配置方式
+
+AgentScope中,用户通过`agentscope.init`接口中的`model_configs`参数来指定模型配置。
+`model_configs`可以是一个字典,或是一个字典的列表,抑或是一个指向模型配置文件的路径。
+
+```python
+import agentscope
+
+agentscope.init(model_configs=MODEL_CONFIG_OR_PATH)
+```
+
+其中`model_configs`的一个例子如下:
+
+```python
+model_configs = [
+ {
+ "config_name": "gpt-4-temperature-0.0",
+ "model_type": "openai",
+ "model_name": "gpt-4",
+ "api_key": "xxx",
+ "organization": "xxx",
+ "generate_args": {
+ "temperature": 0.0
+ }
+ },
+ {
+ "config_name": "dall-e-3-size-1024x1024",
+ "model_type": "openai_dall_e",
+ "model_name": "dall-e-3",
+ "api_key": "xxx",
+ "organization": "xxx",
+ "generate_args": {
+ "size": "1024x1024"
+ }
+ },
+ # 在这里可以配置额外的模型
+]
+```
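The text above notes that `model_configs` may also be a path to a model configuration file. The sketch below (an illustrative snippet, not AgentScope code) saves the example list as a JSON file and reads it back; the resulting path could then be passed as `agentscope.init(model_configs=path)`.

```python
import json
import os
import tempfile

model_configs = [
    {
        "config_name": "gpt-4-temperature-0.0",
        "model_type": "openai",
        "model_name": "gpt-4",
        "api_key": "xxx",
        "generate_args": {"temperature": 0.0},
    },
]

# Persist the configs to a JSON file; the resulting path can be handed to
# agentscope.init(model_configs=path) instead of the list itself.
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump(model_configs, f, ensure_ascii=False, indent=2)

with open(path) as f:
    loaded = json.load(f)

print(loaded[0]["config_name"])  # gpt-4-temperature-0.0
os.remove(path)
```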
+
+### 配置格式
+
+AgentScope中,模型配置是一个字典,用于指定模型的类型以及设定调用参数。
+我们将模型配置中的字段分为_基础参数_和_详细参数_两类。
+其中,基础参数包括`config_name`和`model_type`两个基本字段,分别用于区分不同的模型配置和具体的`ModelWrapper`类型。
+
+```python
+{
+ # 基础参数
+ "config_name": "gpt-4-temperature-0.0", # 模型配置名称
+ "model_type": "openai", # 对应`ModelWrapper`类型
+
+ # 详细参数
+ # ...
+}
+```
+
+#### 基础参数
+
+基础参数中,`config_name`是模型配置的标识,我们将在初始化智能体时用该字段指定使用的模型服务。
+
+`model_type`对应了`ModelWrapper`的类型,用于指定模型服务的类型。对应源代码中`ModelWrapper`类的`model_type`字段。
+
+```python
+class OpenAIChatWrapper(OpenAIWrapper):
+ """The model wrapper for OpenAI's chat API."""
+
+ model_type: str = "openai"
+ # ...
+```
+
+在目前的AgentScope中,所支持的`model_type`类型,对应的`ModelWrapper`类,以及支持的API如下:
+
+| API | Task | Model Wrapper | `model_type` |
+|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|-------------------------------|
+| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai"` |
+| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai_embedding"` |
+| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai_dall_e"` |
+| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | `"dashscope_chat"` |
+| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | `"dashscope_image_synthesis"` |
+| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | `"dashscope_text_embedding"` |
+| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | `"gemini_chat"` |
+| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | `"gemini_embedding"` |
+| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_chat"` |
+|                        | Embedding       | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py)            | `"ollama_embedding"`          |
+| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_generate"` |
+| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api"` |
+
+#### 详细参数
+
+根据`ModelWrapper`的不同,详细参数中所包含的参数不同。
+但是所有的详细参数都会用于初始化`ModelWrapper`类的实例,因此,更详细的参数说明可以根据`ModelWrapper`类的构造函数来查看。
+下面展示了不同`ModelWrapper`对应的模型配置样例,用户可以修改这些样例以适应自己的需求。
+
+##### OpenAI API
+
+
+OpenAI Chat API (agentscope.models.OpenAIChatWrapper)
+
+```python
+openai_chat_config = {
+ "config_name": "{your_config_name}",
+ "model_type": "openai",
+
+ # 必要参数
+ "model_name": "gpt-4",
+
+ # 可选参数
+ "api_key": "{your_api_key}", # OpenAI API Key,如果没有提供,将从环境变量中读取
+ "organization": "{your_organization}", # Organization name,如果没有提供,将从环境变量中读取
+ "client_args": { # 用于初始化OpenAI API Client的参数
+ # 例如:"max_retries": 3,
+ },
+ "generate_args": { # 模型API接口被调用时传入的参数
+ # 例如:"temperature": 0.0
+ },
+ "budget": 100 # API费用预算
+}
+```
+
+
+
+
+OpenAI DALL·E API (agentscope.models.OpenAIDALLEWrapper)
+
+```python
+{
+ "config_name": "{your_config_name}",
+ "model_type": "openai_dall_e",
+
+ # 必要参数
+ "model_name": "{model_name}", # OpenAI model name, 例如:dall-e-2, dall-e-3
+
+ # 可选参数
+ "api_key": "{your_api_key}", # OpenAI API Key,如果没有提供,将从环境变量中读取
+ "organization": "{your_organization}", # Organization name,如果没有提供,将从环境变量中读取
+ "client_args": { # 用于初始化OpenAI API Client的参数
+ # 例如:"max_retries": 3,
+ },
+ "generate_args": { # 模型API接口被调用时传入的参数
+ # 例如:"n": 1, "size": "512x512"
+ }
+}
+```
+
+
+
+
+OpenAI Embedding API (agentscope.models.OpenAIEmbeddingWrapper)
+
+```python
+{
+ "config_name": "{your_config_name}",
+ "model_type": "openai_embedding",
+
+ # 必要参数
+ "model_name": "{model_name}", # OpenAI model name, 例如:text-embedding-ada-002, text-embedding-3-small
+
+ # 可选参数
+ "api_key": "{your_api_key}", # OpenAI API Key,如果没有提供,将从环境变量中读取
+ "organization": "{your_organization}", # Organization name,如果没有提供,将从环境变量中读取
+ "client_args": { # 用于初始化OpenAI API Client的参数
+ # 例如:"max_retries": 3,
+ },
+ "generate_args": { # 模型API接口被调用时传入的参数
+ # 例如:"encoding_format": "float"
+ }
+}
+```
+
+
+
+
+
+#### DashScope API
+
+
+DashScope Chat API (agentscope.models.DashScopeChatWrapper)
+
+```python
+{
+ "config_name": "my_dashscope_chat_config",
+ "model_type": "dashscope_chat",
+
+ # 必要参数
+ "model_name": "{model_name}", # DashScope Chat API中的模型名, 例如:qwen-max
+
+ # 可选参数
+ "api_key": "{your_api_key}", # DashScope API Key,如果没有提供,将从环境变量中读取
+ "generate_args": {
+ # 例如:"temperature": 0.5
+ },
+}
+```
+
+
+
+
+DashScope Image Synthesis API (agentscope.models.DashScopeImageSynthesisWrapper)
+
+```python
+{
+ "config_name": "my_dashscope_image_synthesis_config",
+ "model_type": "dashscope_image_synthesis",
+
+ # 必要参数
+ "model_name": "{model_name}", # DashScope Image Synthesis API中的模型名, 例如:wanx-v1
+
+ # 可选参数
+ "api_key": "{your_api_key}",
+ "generate_args": {
+ "negative_prompt": "xxx",
+ "n": 1,
+ # ...
+ }
+}
+```
+
+
+
+
+DashScope Text Embedding API (agentscope.models.DashScopeTextEmbeddingWrapper)
+
+```python
+{
+ "config_name": "my_dashscope_text_embedding_config",
+ "model_type": "dashscope_text_embedding",
+
+ # 必要参数
+ "model_name": "{model_name}", # DashScope Text Embedding API中的模型名, 例如:text-embedding-v1
+
+ # 可选参数
+ "api_key": "{your_api_key}",
+ "generate_args": {
+ # ...
+ },
+}
+```
+
+
+
+
+
+#### Gemini API
+
+
+Gemini Chat API (agentscope.models.GeminiChatWrapper)
+
+```python
+{
+ "config_name": "my_gemini_chat_config",
+ "model_type": "gemini_chat",
+
+ # 必要参数
+    "model_name": "{model_name}",                 # Gemini Chat API中的模型名,例如:gemini-pro
+
+ # 可选参数
+ "api_key": "{your_api_key}", # 如果没有提供,将从环境变量GEMINI_API_KEY中读取
+}
+```
+
+
+
+
+Gemini Embedding API (agentscope.models.GeminiEmbeddingWrapper)
+
+```python
+{
+ "config_name": "my_gemini_embedding_config",
+ "model_type": "gemini_embedding",
+
+ # 必要参数
+    "model_name": "{model_name}",                 # Gemini Embedding API中的模型名,例如:gemini-pro
+
+ # 可选参数
+ "api_key": "{your_api_key}", # 如果没有提供,将从环境变量GEMINI_API_KEY中读取
+}
+```
+
+
+
+
+
+#### Ollama API
+
+
+Ollama Chat API (agentscope.models.OllamaChatWrapper)
+
+```python
+{
+ "config_name": "my_ollama_chat_config",
+ "model_type": "ollama_chat",
+
+ # 必要参数
+ "model": "{model_name}", # ollama Chat API中的模型名, 例如:llama2
+
+ # 可选参数
+ "options": { # 模型API接口被调用时传入的参数
+ # 例如:"temperature": 0., "seed": "123",
+ },
+ "keep_alive": "5m", # 控制一次调用后模型在内存中的存活时间
+}
+```
+
+
+
+
+Ollama Generation API (agentscope.models.OllamaGenerationWrapper)
+
+```python
+{
+ "config_name": "my_ollama_generate_config",
+ "model_type": "ollama_generate",
+
+ # 必要参数
+    "model": "{model_name}",                      # ollama Generation API中的模型名, 例如:llama2
+
+ # 可选参数
+ "options": { # 模型API接口被调用时传入的参数
+ # "temperature": 0., "seed": "123",
+ },
+ "keep_alive": "5m", # 控制一次调用后模型在内存中的存活时间
+}
+```
+
+
+
+
+Ollama Embedding API (agentscope.models.OllamaEmbeddingWrapper)
+
+```python
+{
+ "config_name": "my_ollama_embedding_config",
+ "model_type": "ollama_embedding",
+
+ # 必要参数
+    "model": "{model_name}",                      # ollama Embedding API中的模型名, 例如:llama2
+
+ # 可选参数
+ "options": { # 模型API接口被调用时传入的参数
+ # "temperature": 0., "seed": "123",
+ },
+ "keep_alive": "5m", # 控制一次调用后模型在内存中的存活时间
+}
+```
+
+
+
+
+
+#### Post Request API
+
+
+Post request API (agentscope.models.PostAPIModelWrapperBase)
+
+```python
+{
+ "config_name": "my_postapiwrapper_config",
+ "model_type": "post_api",
+
+ # 必要参数
+ "api_url": "https://xxx.xxx",
+ "headers": {
+ # 例如:"Authorization": "Bearer xxx",
+ },
+
+ # 可选参数
+ "messages_key": "messages",
+}
+```
+
+
+
+
+
+## 从零搭建模型服务
+
+针对需要自己搭建模型服务的开发者,AgentScope提供了一些脚本来帮助开发者快速搭建模型服务。您可以在[scripts](https://github.com/modelscope/agentscope/tree/main/scripts)目录下找到这些脚本以及说明。
+
+具体而言,AgentScope提供了以下模型服务的脚本:
+
+- [CPU推理引擎ollama](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#ollama)
+- [基于Flask + Transformers的模型服务](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#with-transformers-library)
+- [基于Flask + ModelScope的模型服务](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#with-modelscope-library)
+- [FastChat推理引擎](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#fastchat)
+- [vllm推理引擎](https://github.com/modelscope/agentscope/blob/main/scripts/README.md#vllm)
+
+关于如何快速启动这些模型服务,用户可以参考[scripts](https://github.com/modelscope/agentscope/blob/main/scripts/)目录下的[README.md](https://github.com/modelscope/agentscope/blob/main/scripts/README.md)文件。
+
+## 创建自己的Model Wrapper
+
+AgentScope允许开发者自定义自己的模型包装器。新的模型包装器类应该
+- 继承自`ModelWrapperBase`类,
+- 提供`model_type`字段以在模型配置中标识这个Model Wrapper类,并
+- 实现`__init__`和`__call__`函数。
+
+```python
+from agentscope.models import ModelWrapperBase
+
+class MyModelWrapper(ModelWrapperBase):
+
+ model_type: str = "my_model"
+
+ def __init__(self, config_name, my_arg1, my_arg2, **kwargs):
+ # 初始化模型实例
+ super().__init__(config_name=config_name)
+ # ...
+
+ def __call__(self, input, **kwargs) -> str:
+ # 调用模型实例
+ # ...
+```
+
+在创建新的模型包装器类之后,模型包装器将自动注册到AgentScope中。
+您可以直接在模型配置中使用它。
+
+```python
+my_model_config = {
+ # 基础参数
+ "config_name": "my_model_config",
+ "model_type": "my_model",
+
+ # 详细参数
+ "my_arg1": "xxx",
+ "my_arg2": "yyy",
+ # ...
+}
+```
+
+[[返回顶部]](#203-model-zh)
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/204-service.md b/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md
similarity index 99%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/204-service.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/204-service.md
index 47ddca013..78ddbcb9b 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/204-service.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md
@@ -1,6 +1,6 @@
(204-service-zh)=
-# 关于服务
+# 服务函数
服务函数(Service function)是可以增强智能体能力工具,例如执行Python代码、网络搜索、
文件操作等。本教程概述了AgentScope中可用的服务功能,同时介绍如何使用它们来增强智能体的能力。
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/205-memory.md b/docs/sphinx_doc/zh_CN/source/tutorial/205-memory.md
similarity index 99%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/205-memory.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/205-memory.md
index 0949b4387..25b20de64 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/205-memory.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/205-memory.md
@@ -1,6 +1,6 @@
(205-memory-zh)=
-# 关于记忆
+# 记忆
AgentScope中,记忆(memory)用于存储历史消息,从而使智能体能够根据上下文提供更加连贯,更加
自然的响应。
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/206-prompt.md b/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/206-prompt.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/207-monitor.md b/docs/sphinx_doc/zh_CN/source/tutorial/207-monitor.md
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/207-monitor.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/207-monitor.md
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/208-distribute.md b/docs/sphinx_doc/zh_CN/source/tutorial/208-distribute.md
similarity index 99%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/208-distribute.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/208-distribute.md
index e64d2b492..d882b7690 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/208-distribute.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/208-distribute.md
@@ -1,6 +1,6 @@
(208-distribute-zh)=
-# 关于分布式
+# 分布式
AgentScope实现了基于Actor模式的智能体分布式部署和并行优化,并提供以下的特点:
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/301-community.md b/docs/sphinx_doc/zh_CN/source/tutorial/301-community.md
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/301-community.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/301-community.md
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/302-contribute.md b/docs/sphinx_doc/zh_CN/source/tutorial/302-contribute.md
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/302-contribute.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/302-contribute.md
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/advance.rst b/docs/sphinx_doc/zh_CN/source/tutorial/advance.rst
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/advance.rst
rename to docs/sphinx_doc/zh_CN/source/tutorial/advance.rst
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/contribute.rst b/docs/sphinx_doc/zh_CN/source/tutorial/contribute.rst
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/contribute.rst
rename to docs/sphinx_doc/zh_CN/source/tutorial/contribute.rst
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/main.md b/docs/sphinx_doc/zh_CN/source/tutorial/main.md
similarity index 72%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/main.md
rename to docs/sphinx_doc/zh_CN/source/tutorial/main.md
index 70430a995..3a45ee7f2 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/main.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/main.md
@@ -12,24 +12,24 @@ AgentScope是一款全新的Multi-Agent框架,专为应用开发者打造,
### 快速上手
+- [关于AgentScope](101-agentscope-zh)
- [安装](102-installation-zh)
-- [基础概念](101-agentscope-zh)
-- [快速上手案例](103-example-zh)
-- [创建您的第一个应用](104-usecase-zh)
+- [快速开始](103-example-zh)
+- [创建您的第一个应用](104-usecase-zh)
- [日志和WebUI](105-logging-zh)
### 进阶使用
-- [定制自己的Agent](201-agent-zh)
-- [智能体间交互](202-pipeline-zh)
-- [关于模型](203-model-zh)
-- [关于服务](204-service-zh)
-- [关于记忆](205-memory-zh)
+- [定制你自己的Agent](201-agent-zh)
+- [Pipeline和MsgHub](202-pipeline-zh)
+- [模型](203-model-zh)
+- [服务函数](204-service-zh)
+- [记忆](205-memory-zh)
- [提示工程](206-prompt-zh)
- [监控器](207-monitor-zh)
-- [关于分布式](208-distribute-zh)
+- [分布式](208-distribute-zh)
### 参与贡献
-- [加入 AgentScope 社区](301-community-zh)
-- [为 AgentScope 做贡献](302-contribute-zh)
+- [加入AgentScope社区](301-community-zh)
+- [贡献到AgentScope](302-contribute-zh)
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/quick_start.rst b/docs/sphinx_doc/zh_CN/source/tutorial/quick_start.rst
similarity index 100%
rename from docs/sphinx_doc/zh_CN/source/tutorial_zh/quick_start.rst
rename to docs/sphinx_doc/zh_CN/source/tutorial/quick_start.rst
index 8153daefe..afbef5de7 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/quick_start.rst
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/quick_start.rst
@@ -4,8 +4,8 @@
.. toctree::
:maxdepth: 2
- 102-installation.md
101-agentscope.md
+ 102-installation.md
103-example.md
104-usecase.md
105-logging.md
\ No newline at end of file
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial_zh/203-model.md b/docs/sphinx_doc/zh_CN/source/tutorial_zh/203-model.md
deleted file mode 100644
index 1f56943b4..000000000
--- a/docs/sphinx_doc/zh_CN/source/tutorial_zh/203-model.md
+++ /dev/null
@@ -1,209 +0,0 @@
-(203-model-zh)=
-
-# 关于模型
-
-AgentScope中,模型的部署和调用是通过`ModelWrapper`来解耦开的,开发者可以通过提供模型配置(Model config)的方式指定模型,同时AgentScope也提供脚本支持开发者自定义模型服务。
-
-## 支持模型
-
-目前,AgentScope内置以下模型服务API的支持:
-
-- OpenAI API,包括对话(Chat),图片生成(DALL-E)和文本嵌入(Embedding)。
-- Post请求API,基于Post请求实现的模型推理服务,包括Huggingface/ModelScope
- Inference API和各种符合Post请求格式的API。
-
-## 配置方式
-
-AgentScope中,用户通过`agentscope.init`接口中的`model_configs`参数来指定模型配置。
-`model_configs`可以是一个字典,或是一个字典的列表,抑或是一个指向模型配置文件的路径。
-
-```python
-import agentscope
-
-agentscope.init(model_configs=MODEL_CONFIG_OR_PATH)
-```
-
-其中`model_configs`的一个例子如下:
-
-```python
-model_configs = [
- {
- "config_name": "gpt-4-temperature-0.0",
- "model_type": "openai",
- "model_name": "gpt-4",
- "api_key": "xxx",
- "organization": "xxx",
- "generate_args": {
- "temperature": 0.0
- }
- },
- {
- "config_name": "dall-e-3-size-1024x1024",
- "model_type": "openai_dall_e",
- "model_name": "dall-e-3",
- "api_key": "xxx",
- "organization": "xxx",
- "generate_args": {
- "size": "1024x1024"
- }
- },
- # 在这里可以配置额外的模型
-]
-```
-
-### 配置格式
-
-AgentScope中,模型配置是一个字典,用于指定模型的类型以及设定调用参数。
-我们将模型配置中的字段分为_基础参数_和_调用参数_两类。
-其中,基础参数包括`config_name`和`model_type`两个基本字段,分别用于区分不同的模型配置和具体的`ModelWrapper`类型。
-
-```python
-{
- # 基础参数
- "config_name": "gpt-4-temperature-0.0", # 模型配置名称
- "model_type": "openai", # 对应`ModelWrapper`类型
-
- # 详细参数
- # ...
-}
-```
-
-#### 基础参数
-
-基础参数中,`config_name`是模型配置的标识,我们将在初始化智能体时用该字段指定使用的模型服务。
-
-`model_type`对应了`ModelWrapper`的类型,用于指定模型服务的类型,对应源代码中`ModelWrapper`类的`model_type`字段。
-
-```python
-class OpenAIChatWrapper(OpenAIWrapper):
- """The model wrapper for OpenAI's chat API."""
-
- model_type: str = "openai"
- # ...
-```
-
-在目前的AgentScope中,所支持的`model_type`类型,对应的`ModelWrapper`类,以及支持的
-API如下:
-
-| 任务 | model_type | ModelWrapper | 支持的 API |
-|--------|--------------------|--------------------------|------------------------------------------------------------|
-| 文本生成 | `openai` | `OpenAIChatWrapper` | 标准 OpenAI 聊天 API,FastChat 和 vllm |
-| 图像生成 | `openai_dall_e` | `OpenAIDALLEWrapper` | 用于生成图像的 DALL-E API |
-| 文本嵌入 | `openai_embedding` | `OpenAIEmbeddingWrapper` | 用于文本嵌入的 API |
-| POST请求 | `post_api` | `PostAPIModelWrapperBase` | Huggingface/ModelScope inference API 和自定义的post request API |
-
-#### 详细参数
-
-根据`ModelWrapper`的不同,详细参数中所包含的参数不同。
-但是所有的详细参数都会用于初始化`ModelWrapper`类的实例,因此,更详细的参数说明可以根据`ModelWrapper`类的构造函数来查看。
-
-- OpenAI的API,包括文本生成,图像生成,文本嵌入,其模型配置参数如下
-
-```python
-{
- # 基础参数
- "config_name": "gpt-4_temperature-0.0",
- "model_type": "openai",
-
- # 详细参数
- # 必要参数
- "model_name": "gpt-4", # OpenAI模型名称
-
- # 可选参数
- "api_key": "xxx", # OpenAI API Key,如果没有提供则会从环境变量中读取
- "organization": "xxx", # 组织名称,如果没有提供则会从环境变量中读取
- "client_args": { # 初始化OpenAI API Client的参数
- "max_retries": 3,
- },
- "generate_args": { # 调用模型时传入的参数
- "temperature": 0.0
- },
- "budget": 100.0 # API费用预算
-}
-```
-
-- Post request API,其模型配置参数如下
-
-```python
-{
- # 基础参数
- "config_name": "gpt-4_temperature-0.0",
- "model_type": "post_api",
-
- # 详细参数
- "api_url": "http://xxx.png",
- "headers": {
- # e.g. "Authorization": "Bearer xxx",
- },
-
- # 可选参数,需要根据Post请求API的要求进行配置
- "json_args": {
- # e.g. "temperature": 0.0
- }
- # ...
-}
-```
-
-## 从零搭建模型服务
-
-针对需要自己搭建模型服务的开发者,AgentScope提供了一些脚本来帮助开发者快速搭建模型服务。这些脚本可以在[scripts](https://github.com/modelscope/agentscope/blob/main/scripts/)中找到。
-
-具体而言,AgentScope提供了以下模型服务的脚本:
-
-- 基于Flask + HuggingFace的模型服务
-- 基于Flask + ModelScope的模型服务
-- FastChat推理引擎
-- vllm推理引擎
-
-下面我们以Flask + HuggingFace的模型服务为例,介绍如何使用AgentScope的模型服务脚本。
-更多的模型服务脚本可以在[scripts](https://github.com/modelscope/agentscope/blob/main/scripts/)中查看。
-
-### 基于Flask 的模型 API 服务
-
-[Flask](https://github.com/pallets/flask)是一个轻量级的Web应用框架。利用Flask可以很容易地搭建本地模型API服务。
-
-#### 使用transformers库
-
-##### 安装transformers并配置服务
-
-按照以下命令安装 Flask 和 Transformers:
-
-```bash
-pip install Flask transformers
-```
-
-以模型 `meta-llama/Llama-2-7b-chat-hf` 和端口 `8000` 为例,通过运行以下命令来设置模型 API 服务。
-
-```bash
-# --device 也可以设置为 "cpu"
-python flask_transformers/setup_hf_service.py \
-    --model_name_or_path meta-llama/Llama-2-7b-chat-hf \
-    --device "cuda:0" \
-    --port 8000
-```
-
-您可以将 `meta-llama/Llama-2-7b-chat-hf` 替换为 huggingface 模型中心的任何模型卡片。
-
-##### 在AgentScope中调用
-
-在 AgentScope 中,您可以使用以下模型配置加载模型:[./flask_transformers/model_config.json](https://github.com/modelscope/agentscope/blob/main/scripts/flask_transformers/model_config.json)。
-
-```json
-{
- "model_type": "post_api",
- "config_name": "flask_llama2-7b-chat",
- "api_url": "http://127.0.0.1:8000/llm/",
- "json_args": {
- "max_length": 4096,
- "temperature": 0.5
- }
-}
-```
-
-##### 注意
-
-在这种模型服务中,来自 post 请求的消息应该是 **STRING** 格式。您可以使用来自 *transformers* 的[聊天模型模板](https://huggingface.co/docs/transformers/main/chat_templating),只需在[`./flask_transformers/setup_hf_service.py`](https://github.com/modelscope/agentscope/blob/main/scripts/flask_transformers/setup_hf_service.py)做一点修改即可。
-
-[[返回顶部]](#203-model-zh)
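The removed tutorial above splits a model config into basic fields (`config_name`, `model_type`) and detailed fields used to initialize the `ModelWrapper`. That split can be sketched with plain dictionaries (a minimal illustration only; `split_model_config` is a hypothetical helper, not part of AgentScope):

```python
# Basic fields identify the config and select the ModelWrapper class;
# all remaining fields are passed to the wrapper's constructor.
BASIC_FIELDS = ("config_name", "model_type")

def split_model_config(config: dict) -> tuple[dict, dict]:
    """Return (basic, detailed) views of a model config dict."""
    basic = {k: v for k, v in config.items() if k in BASIC_FIELDS}
    detailed = {k: v for k, v in config.items() if k not in BASIC_FIELDS}
    return basic, detailed

config = {
    "config_name": "gpt-4-temperature-0.0",  # identifies this config
    "model_type": "openai",                  # selects the ModelWrapper type
    "model_name": "gpt-4",
    "generate_args": {"temperature": 0.0},
}

basic, detailed = split_model_config(config)
```

Here `basic` carries only the two identifying fields, while `detailed` carries everything the wrapper's constructor consumes.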
diff --git a/examples/conversation/README.md b/examples/conversation_basic/README.md
similarity index 84%
rename from examples/conversation/README.md
rename to examples/conversation_basic/README.md
index 1bdd093a2..ba108d157 100644
--- a/examples/conversation/README.md
+++ b/examples/conversation_basic/README.md
@@ -5,8 +5,8 @@ assistant agent to have a conversation. When user input "exit", the
conversation ends.
You can modify the `sys_prompt` to change the role of assistant agent.
```bash
-# Note: Set your api_key in conversation.py first
-python conversation.py
+# Note: Set your api_key in conversation_basic.py first
+python conversation_basic.py
```
To set up model serving with open-source LLMs, follow the guidance in
[scripts/README.md](../../scripts/README.md).
\ No newline at end of file
diff --git a/examples/conversation/conversation.py b/examples/conversation_basic/conversation.py
similarity index 97%
rename from examples/conversation/conversation.py
rename to examples/conversation_basic/conversation.py
index 8eb51af56..f19d5dd1b 100644
--- a/examples/conversation/conversation.py
+++ b/examples/conversation_basic/conversation.py
@@ -7,7 +7,7 @@
def main() -> None:
- """A conversation demo"""
+ """A basic conversation demo"""
agentscope.init(
model_configs=[
diff --git a/examples/agent_builder/agent_builder_instruct.txt b/examples/conversation_self_organizing/agent_builder_instruct.txt
similarity index 100%
rename from examples/agent_builder/agent_builder_instruct.txt
rename to examples/conversation_self_organizing/agent_builder_instruct.txt
diff --git a/examples/agent_builder/auto-discussion.py b/examples/conversation_self_organizing/auto-discussion.py
similarity index 91%
rename from examples/agent_builder/auto-discussion.py
rename to examples/conversation_self_organizing/auto-discussion.py
index a6544afea..28b505127 100644
--- a/examples/agent_builder/auto-discussion.py
+++ b/examples/conversation_self_organizing/auto-discussion.py
@@ -11,7 +11,7 @@
{
"model_type": "openai",
"config_name": "gpt-3.5-turbo",
- "model": "gpt-3.5-turbo",
+ "model_name": "gpt-3.5-turbo",
"api_key": "xxx", # Load from env if not provided
"organization": "xxx", # Load from env if not provided
"generate_args": {
@@ -29,7 +29,7 @@
agentscope.init(model_configs=model_configs)
-# init agent_builder
+# init the self-organizing conversation
agent_builder = DialogAgent(
name="agent_builder",
sys_prompt="You're a helpful assistant.",
@@ -43,7 +43,9 @@
telescope gather than your eye?"
# get the discussion scenario and participant agents
-x = load_txt("examples/agent_builder/agent_builder_instruct.txt").format(
+x = load_txt(
+ "examples/conversation_self_organizing/agent_builder_instruct.txt",
+).format(
question=query,
)
diff --git a/examples/agent_builder/tools.py b/examples/conversation_self_organizing/tools.py
similarity index 100%
rename from examples/agent_builder/tools.py
rename to examples/conversation_self_organizing/tools.py
diff --git a/examples/groupchat/README.md b/examples/conversation_with_mentions/README.md
similarity index 100%
rename from examples/groupchat/README.md
rename to examples/conversation_with_mentions/README.md
diff --git a/examples/groupchat/configs/agent_configs.json b/examples/conversation_with_mentions/configs/agent_configs.json
similarity index 100%
rename from examples/groupchat/configs/agent_configs.json
rename to examples/conversation_with_mentions/configs/agent_configs.json
diff --git a/examples/groupchat/configs/model_configs.json b/examples/conversation_with_mentions/configs/model_configs.json
similarity index 100%
rename from examples/groupchat/configs/model_configs.json
rename to examples/conversation_with_mentions/configs/model_configs.json
diff --git a/examples/groupchat/groupchat_utils.py b/examples/conversation_with_mentions/groupchat_utils.py
similarity index 100%
rename from examples/groupchat/groupchat_utils.py
rename to examples/conversation_with_mentions/groupchat_utils.py
diff --git a/examples/groupchat/main.py b/examples/conversation_with_mentions/main.py
similarity index 100%
rename from examples/groupchat/main.py
rename to examples/conversation_with_mentions/main.py
diff --git a/examples/werewolf/README.md b/examples/game_werewolf/README.md
similarity index 100%
rename from examples/werewolf/README.md
rename to examples/game_werewolf/README.md
diff --git a/examples/werewolf/configs/agent_configs.json b/examples/game_werewolf/configs/agent_configs.json
similarity index 100%
rename from examples/werewolf/configs/agent_configs.json
rename to examples/game_werewolf/configs/agent_configs.json
diff --git a/examples/werewolf/configs/model_configs.json b/examples/game_werewolf/configs/model_configs.json
similarity index 100%
rename from examples/werewolf/configs/model_configs.json
rename to examples/game_werewolf/configs/model_configs.json
diff --git a/examples/werewolf/prompt.py b/examples/game_werewolf/prompt.py
similarity index 100%
rename from examples/werewolf/prompt.py
rename to examples/game_werewolf/prompt.py
diff --git a/examples/werewolf/werewolf.py b/examples/game_werewolf/werewolf.py
similarity index 100%
rename from examples/werewolf/werewolf.py
rename to examples/game_werewolf/werewolf.py
diff --git a/examples/werewolf/werewolf_utils.py b/examples/game_werewolf/werewolf_utils.py
similarity index 100%
rename from examples/werewolf/werewolf_utils.py
rename to examples/game_werewolf/werewolf_utils.py
diff --git a/notebook/conversation.ipynb b/notebook/conversation.ipynb
index dd9da7772..cfef40067 100644
--- a/notebook/conversation.ipynb
+++ b/notebook/conversation.ipynb
@@ -51,7 +51,7 @@
" {\n",
" \"model_type\": \"openai\",\n",
" \"config_name\": \"gpt-3.5-turbo\",\n",
- " \"model\": \"gpt-3.5-turbo\",\n",
+ " \"model_name\": \"gpt-3.5-turbo\",\n",
" \"api_key\": \"xxx\", # Load from env if not provided\n",
" \"organization\": \"xxx\", # Load from env if not provided\n",
" \"generate_args\": {\n",
diff --git a/notebook/distributed_debate.ipynb b/notebook/distributed_debate.ipynb
index 3acd38397..5ce024b4a 100644
--- a/notebook/distributed_debate.ipynb
+++ b/notebook/distributed_debate.ipynb
@@ -50,7 +50,7 @@
" {\n",
" \"model_type\": \"openai\",\n",
" \"config_name\": \"gpt-3.5-turbo\",\n",
- " \"model\": \"gpt-3.5-turbo\",\n",
+ " \"model_name\": \"gpt-3.5-turbo\",\n",
" \"api_key\": \"xxx\",\n",
" \"organization\": \"xxx\",\n",
" \"generate_args\": {\n",
@@ -60,7 +60,7 @@
" {\n",
" \"model_type\": \"openai\",\n",
" \"config_name\": \"gpt-4\",\n",
- " \"model\": \"gpt-4\",\n",
+ " \"model_name\": \"gpt-4\",\n",
" \"api_key\": \"xxx\",\n",
" \"organization\": \"xxx\",\n",
" \"generate_args\": {\n",
diff --git a/notebook/distributed_dialog.ipynb b/notebook/distributed_dialog.ipynb
index 10829dee4..cb20c6a46 100644
--- a/notebook/distributed_dialog.ipynb
+++ b/notebook/distributed_dialog.ipynb
@@ -43,7 +43,7 @@
" {\n",
" \"model_type\": \"openai\",\n",
" \"config_name\": \"gpt-3.5-turbo\",\n",
- " \"model\": \"gpt-3.5-turbo\",\n",
+ " \"model_name\": \"gpt-3.5-turbo\",\n",
" \"api_key\": \"xxx\",\n",
" \"organization\": \"xxx\",\n",
" \"generate_args\": {\n",
@@ -53,7 +53,7 @@
" {\n",
" \"model_type\": \"openai\",\n",
" \"config_name\": \"gpt-4\",\n",
- " \"model\": \"gpt-4\",\n",
+ " \"model_name\": \"gpt-4\",\n",
" \"api_key\": \"xxx\",\n",
" \"organization\": \"xxx\",\n",
" \"generate_args\": {\n",
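The notebook hunks above all apply the same rename of the `model` key to `model_name` in OpenAI model configs. A tiny migration helper for old config dicts might look like this (a sketch under stated assumptions; `migrate_config` is hypothetical, not an AgentScope API):

```python
def migrate_config(config: dict) -> dict:
    """Rename the deprecated `model` key to `model_name` in OpenAI
    configs, as done in the notebooks above; other configs (e.g.
    ollama, which legitimately uses `model`) pass through unchanged."""
    migrated = dict(config)  # shallow copy; the input is left intact
    if migrated.get("model_type") == "openai" and "model" in migrated:
        migrated["model_name"] = migrated.pop("model")
    return migrated

old = {
    "model_type": "openai",
    "config_name": "gpt-3.5-turbo",
    "model": "gpt-3.5-turbo",
}
new = migrate_config(old)
```

Note the `model_type` guard: the ollama configs introduced later in this PR keep `model` as their required key, so only OpenAI-style configs are rewritten.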
diff --git a/scripts/README.md b/scripts/README.md
index 5031a1f90..10293ab14 100644
--- a/scripts/README.md
+++ b/scripts/README.md
@@ -1,36 +1,108 @@
-# Set up Model API Serving
+# Set up Local Model API Serving
-In AgentScope, in addition to OpenAI API, we also support open-source
-models with post request API. In this document, we will introduce how to
-fast set up local model API serving with different inference engines.
+AgentScope enables developers to build local model API serving with different inference engines and libraries.
+This document introduces how to quickly set up a local model API service using the provided scripts.
Table of Contents
=================
-- [Set up Model API Serving](#set-up-model-api-serving)
-- [Table of Contents](#table-of-contents)
- - [Local Model API Serving](#local-model-api-serving)
- - [Flask-based Model API Serving](#flask-based-model-api-serving)
- - [With Transformers Library](#with-transformers-library)
- - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving)
- - [How to use in AgentScope](#how-to-use-in-agentscope)
- - [Note](#note)
- - [With ModelScope Library](#with-modelscope-library)
- - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving-1)
- - [How to use in AgentScope](#how-to-use-in-agentscope-1)
- - [Note](#note-1)
- - [FastChat](#fastchat)
- - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving-2)
- - [Supported Models](#supported-models)
+- [Local Model API Serving](#local-model-api-serving)
+ - [ollama](#ollama)
+ - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving)
+ - [How to use in AgentScope](#how-to-use-in-agentscope)
+ - [Flask-based Model API Serving](#flask-based-model-api-serving)
+ - [With Transformers Library](#with-transformers-library)
+ - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving)
+ - [How to use in AgentScope](#how-to-use-in-agentscope-1)
+ - [Note](#note)
+ - [With ModelScope Library](#with-modelscope-library)
+ - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving-1)
- [How to use in AgentScope](#how-to-use-in-agentscope-2)
- - [vllm](#vllm)
- - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving-3)
- - [Supported models](#supported-models-1)
- - [How to use in AgentScope](#how-to-use-in-agentscope-3)
- - [Model Inference API](#model-inference-api)
+ - [Note](#note-1)
+ - [FastChat](#fastchat)
+ - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving-2)
+ - [Supported Models](#supported-models)
+ - [How to use in AgentScope](#how-to-use-in-agentscope-3)
+ - [vllm](#vllm)
+ - [Install Libraries and Set up Serving](#install-libraries-and-set-up-serving-3)
+ - [Supported models](#supported-models-1)
+ - [How to use in AgentScope](#how-to-use-in-agentscope-4)
+- [Model Inference API](#model-inference-api)
## Local Model API Serving
+### ollama
+
+[ollama](https://github.com/ollama/ollama) is a local inference engine for LLMs that runs on CPU. With ollama, developers can build a local model API service without requiring a GPU.
+
+#### Install Libraries and Set up Serving
+
+- First, install ollama by following the instructions in its [official repository](https://github.com/ollama/ollama) for your system (e.g. macOS, Windows, or Linux).
+
+- Follow ollama's [guidance](https://github.com/ollama/ollama) to pull or create a model and start serving it. Taking llama2 as an example, run the following command to pull the model files.
+
+```bash
+ollama pull llama2
+```
+
+#### How to use in AgentScope
+
+In AgentScope, you can use the following model configurations to load the model.
+
+- For ollama Chat API:
+
+```python
+{
+ "config_name": "my_ollama_chat_config",
+ "model_type": "ollama_chat",
+
+ # Required parameters
+ "model": "{model_name}", # The model name used in ollama API, e.g. llama2
+
+ # Optional parameters
+ "options": { # Parameters passed to the model when calling
+        # e.g. "temperature": 0., "seed": 123,
+ },
+ "keep_alive": "5m", # Controls how long the model will stay loaded into memory
+}
+```
+
+- For ollama generate API:
+
+```python
+{
+ "config_name": "my_ollama_generate_config",
+ "model_type": "ollama_generate",
+
+ # Required parameters
+ "model": "{model_name}", # The model name used in ollama API, e.g. llama2
+
+ # Optional parameters
+ "options": { # Parameters passed to the model when calling
+        # "temperature": 0., "seed": 123,
+ },
+ "keep_alive": "5m", # Controls how long the model will stay loaded into memory
+}
+```
+
+- For ollama embedding API:
+
+```python
+{
+ "config_name": "my_ollama_embedding_config",
+ "model_type": "ollama_embedding",
+
+ # Required parameters
+ "model": "{model_name}", # The model name used in ollama API, e.g. llama2
+
+ # Optional parameters
+ "options": { # Parameters passed to the model when calling
+        # "temperature": 0., "seed": 123,
+ },
+ "keep_alive": "5m", # Controls how long the model will stay loaded into memory
+}
+```
+
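The three ollama configs above are distinguished only by `config_name` and `model_type`. How a list of such configs can be resolved by name is sketched below (a plain-Python illustration; `select_config` is a hypothetical helper, not AgentScope's actual resolution logic):

```python
import json

# Inlined, trimmed versions of the configs from
# scripts/ollama/model_config.json, so the example is self-contained.
OLLAMA_CONFIGS = json.loads("""
[
  {"config_name": "my_ollama_chat_config", "model_type": "ollama_chat",
   "model": "llama2", "keep_alive": "5m"},
  {"config_name": "my_ollama_generate_config", "model_type": "ollama_generate",
   "model": "llama2", "keep_alive": "5m"},
  {"config_name": "my_ollama_embedding_config", "model_type": "ollama_embedding",
   "model": "llama2", "keep_alive": "5m"}
]
""")

def select_config(configs: list, config_name: str) -> dict:
    """Look up a model config by its config_name, raising if absent."""
    for cfg in configs:
        if cfg["config_name"] == config_name:
            return cfg
    raise KeyError(config_name)

cfg = select_config(OLLAMA_CONFIGS, "my_ollama_chat_config")
```

An agent would then be wired to the chosen service purely through the `config_name` string, keeping deployment details out of application code.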
### Flask-based Model API Serving
[Flask](https://github.com/pallets/flask) is a lightweight web application
diff --git a/scripts/ollama/model_config.json b/scripts/ollama/model_config.json
new file mode 100644
index 000000000..40e4e41e5
--- /dev/null
+++ b/scripts/ollama/model_config.json
@@ -0,0 +1,32 @@
+[
+ {
+ "config_name": "my_ollama_chat_config",
+ "model_type": "ollama_chat",
+ "model": "{model_name}",
+ "options": {
+ "temperature": 0.5,
+      "seed": 123
+ },
+ "keep_alive": "5m"
+ },
+ {
+ "config_name": "my_ollama_generate_config",
+ "model_type": "ollama_generate",
+ "model": "{model_name}",
+ "options": {
+ "temperature": 0.5,
+      "seed": 123
+ },
+ "keep_alive": "5m"
+ },
+ {
+ "config_name": "my_ollama_embedding_config",
+ "model_type": "ollama_embedding",
+ "model": "{model_name}",
+ "options": {
+ "temperature": 0.5,
+      "seed": 123
+ },
+ "keep_alive": "5m"
+ }
+]
\ No newline at end of file
diff --git a/scripts/ollama/ollama.sh b/scripts/ollama/ollama.sh
new file mode 100644
index 000000000..b3ca8596d
--- /dev/null
+++ b/scripts/ollama/ollama.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+
+ollama pull llama2
\ No newline at end of file
diff --git a/src/agentscope/_version.py b/src/agentscope/_version.py
index 3d4246429..5c3a69705 100644
--- a/src/agentscope/_version.py
+++ b/src/agentscope/_version.py
@@ -1,4 +1,4 @@
# -*- coding: utf-8 -*-
""" Version of AgentScope."""
-__version__ = "0.0.1"
+__version__ = "0.0.2"
diff --git a/src/agentscope/agents/dict_dialog_agent.py b/src/agentscope/agents/dict_dialog_agent.py
index 6e86acbb8..b6fab29df 100644
--- a/src/agentscope/agents/dict_dialog_agent.py
+++ b/src/agentscope/agents/dict_dialog_agent.py
@@ -42,7 +42,7 @@ class DictDialogAgent(AgentBase):
the speak field as the output response.
For usage example, please refer to the example of werewolf in
- `examples/werewolf`"""
+ `examples/game_werewolf`"""
def __init__(
self,