Merge branch 'main' into dev/web_ui
# Conflicts:
#	src/agentscope/_init.py
DavdGao committed Feb 19, 2024
2 parents bc030aa + 981ae7c commit 9ea78b4
Showing 45 changed files with 750 additions and 564 deletions.
24 changes: 12 additions & 12 deletions README.md
@@ -97,22 +97,22 @@ AgentScope supports the following model API services:
- [HuggingFace](https://huggingface.co/docs/api-inference/index) and [ModelScope](https://www.modelscope.cn/docs/%E9%AD%94%E6%90%ADv1.5%E7%89%88%E6%9C%AC%20Release%20Note%20(20230428)) inference APIs
- Customized model APIs

- | | Type Argument | Support APIs |
- |----------------------|--------------------|---------------------------------------------------------------|
- | OpenAI Chat API | `openai` | Standard OpenAI Chat API, FastChat and vllm |
- | OpenAI DALL-E API | `openai_dall_e` | Standard DALL-E API |
- | OpenAI Embedding API | `openai_embedding` | OpenAI embedding API |
- | Post API | `post_api` | Huggingface/ModelScope inference API, and customized post API |
+ | | Model Type Argument | Support APIs |
+ |----------------------|---------------------|----------------------------------------------------------------|
+ | OpenAI Chat API | `openai` | Standard OpenAI Chat API, FastChat and vllm |
+ | OpenAI DALL-E API | `openai_dall_e` | Standard DALL-E API |
+ | OpenAI Embedding API | `openai_embedding` | OpenAI embedding API |
+ | Post API | `post_api` | Huggingface/ModelScope inference API, and customized post API |

##### OpenAI API Config

For OpenAI APIs, you need to prepare a dict of model config with the following fields:

```
{
"type": "openai" | "openai_dall_e" | "openai_embedding",
"name": "{your_config_name}", # The name used to identify your config
"model_name": "{model_name, e.g. gpt-4}", # The used model in openai API
"config_name": "{config name}", # The name to identify the config
"model_type": "openai" | "openai_dall_e" | "openai_embedding",
"model_name": "{model name, e.g. gpt-4}", # The model in openai API
# Optional
"api_key": "xxx", # The API key for OpenAI API. If not set, env
@@ -128,8 +128,8 @@ For post requests APIs, the config contains the following fields.

```
{
"type": "post_api",
"name": "{your_config_name}", # The name used to identify config
"config_name": "{config name}", # The name to identify the config
"model_type": "post_api",
"api_url": "https://xxx", # The target url
"headers": { # Required headers
...
Expand All @@ -152,7 +152,7 @@ import agentscope
agentscope.init(model_configs="./model_configs.json")

# Create a dialog agent and a user agent
- dialog_agent = DialogAgent(name="assistant", model="gpt-4")
+ dialog_agent = DialogAgent(name="assistant", model_config_name="your_config_name")
user_agent = UserAgent()
```
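To see how a conversation then proceeds, here is a minimal loop sketch (the callable-agent message passing matches the snippet above; the exit condition and message fields are assumptions for illustration, not part of this diff):

```python
# Sketch of a conversation loop: each agent is called with the latest
# message and returns its reply; typing "exit" ends the chat.
x = None
while x is None or x.get("content") != "exit":
    x = dialog_agent(x)  # the assistant responds to the last message
    x = user_agent(x)    # the user supplies the next message
```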

9 changes: 9 additions & 0 deletions docs/sphinx_doc/source/agentscope.agents.rst
@@ -48,3 +48,12 @@ dict_dialog_agent module
:members:
:undoc-members:
:show-inheritance:


+ text_to_image_agent module
+ -------------------------------
+
+ .. automodule:: agentscope.agents.text_to_image_agent
+    :members:
+    :undoc-members:
+    :show-inheritance:
11 changes: 0 additions & 11 deletions docs/sphinx_doc/source/agentscope.configs.rst

This file was deleted.

10 changes: 9 additions & 1 deletion docs/sphinx_doc/source/agentscope.models.rst
@@ -1,6 +1,14 @@
Models package
==========================

+ config module
+ -------------------------------
+
+ .. automodule:: agentscope.models.config
+    :members:
+    :undoc-members:
+    :show-inheritance:

model module
-------------------------------

@@ -29,6 +37,6 @@ Module contents
---------------

.. automodule:: agentscope.models
-    :members: load_model_by_name, clear_model_configs, read_model_configs
+    :members: load_model_by_config_name, clear_model_configs, read_model_configs
:undoc-members:
:show-inheritance:
1 change: 0 additions & 1 deletion docs/sphinx_doc/source/index.rst
@@ -29,7 +29,6 @@ AgentScope Documentation
:caption: AgentScope API Reference

agentscope.agents
- agentscope.configs
agentscope.memory
agentscope.models
agentscope.pipelines
20 changes: 10 additions & 10 deletions docs/sphinx_doc/source/tutorial/103-example.md
@@ -8,20 +8,20 @@ AgentScope is a versatile platform for building and running multi-agent applications

Agent is the basic composition and communication unit in AgentScope. To initialize a model-based agent, you need to prepare your configs for available models. AgentScope supports a variety of APIs for pre-trained models. Here is a table outlining the supported APIs and the type of arguments required for each:

- | Model Usage | Type Argument in AgentScope | Supported APIs |
- | -------------------- | ------------------ |-----------------------------------------------------------------------------|
- | Text generation | `openai` | Standard *OpenAI* chat API, FastChat and vllm |
- | Image generation | `openai_dall_e` | *DALL-E* API for generating images |
- | Embedding | `openai_embedding` | API for text embeddings |
- | General usages in POST | `post_api` | *Huggingface* and *ModelScope* Inference API, and other customized post API |
+ | Model Usage | Model Type Argument in AgentScope | Supported APIs |
+ | --------------------------- | --------------------------------- |-----------------------------------------------------------------------------|
+ | Text generation | `openai` | Standard *OpenAI* chat API, FastChat and vllm |
+ | Image generation | `openai_dall_e` | *DALL-E* API for generating images |
+ | Embedding | `openai_embedding` | API for text embeddings |
+ | General usages in POST | `post_api` | *Huggingface* and *ModelScope* Inference API, and other customized post API |

Each API has its specific configuration requirements. For example, to configure an OpenAI API, you would need to fill out the following fields in the model config in a dict, a yaml file or a json file:

```python
model_config = {
"type": "openai", # Choose from "openai", "openai_dall_e", or "openai_embedding"
"name": "{your_config_name}", # A unique identifier for your config
"model_name": "{model_name}", # The model identifier used in the OpenAI API, such as "gpt-3.5-turbo", "gpt-4", or "text-embedding-ada-002"
"config_name": "{config_name}", # A unique name for the model config.
"model_type": "openai", # Choose from "openai", "openai_dall_e", or "openai_embedding".
"model_name": "{model_name}", # The model identifier used in the OpenAI API, such as "gpt-3.5-turbo", "gpt-4", or "text-embedding-ada-002".
"api_key": "xxx", # Your OpenAI API key. If unset, the environment variable OPENAI_API_KEY is used.
"organization": "xxx", # Your OpenAI organization ID. If unset, the environment variable OPENAI_ORGANIZATION is used.
}
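
# Usage sketch (an illustrative addition, not part of the original
# tutorial snippet): register the dict with AgentScope so agents can
# refer to it by "config_name"; per the text above, init() also
# accepts a path to a JSON or YAML file.
import agentscope
agentscope.init(model_configs=model_config)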
@@ -52,7 +52,7 @@ from agentscope.agents import DialogAgent, UserAgent
agentscope.init(model_configs="./openai_model_configs.json")

# Create a dialog agent and a user agent
- dialogAgent = DialogAgent(name="assistant", model="gpt-4")
+ dialogAgent = DialogAgent(name="assistant", model_config_name="gpt-4")
userAgent = UserAgent()
```

15 changes: 8 additions & 7 deletions docs/sphinx_doc/source/tutorial/104-usecase.md
@@ -35,11 +35,12 @@ As we discussed in the last tutorial, you need to prepare your model configuration
```json
[
{
"type": "openai",
"name": "gpt-4",
"parameters": {
"api_key": "xxx",
"organization_id": "xxx",
"config_name": "gpt-4-temperature-0.0",
"model_type": "openai",
"model_name": "gpt-4",
"api_key": "xxx",
"organization": "xxx",
"generate_args": {
"temperature": 0.0
}
},
@@ -75,13 +76,13 @@ AgentScope provides several out-of-the-box Agents implements and organizes them
"args": {
"name": "Player1",
"sys_prompt": "Act as a player in a werewolf game. You are Player1 and\nthere are totally 6 players, named Player1, Player2, Player3, Player4, Player5 and Player6.\n\nPLAYER ROLES:\nIn werewolf game, players are divided into two werewolves, two villagers, one seer, and one witch. Note only werewolves know who are their teammates.\nWerewolves: They know their teammates' identities and attempt to eliminate a villager each night while trying to remain undetected.\nVillagers: They do not know who the werewolves are and must work together during the day to deduce who the werewolves might be and vote to eliminate them.\nSeer: A villager with the ability to learn the true identity of one player each night. This role is crucial for the villagers to gain information.\nWitch: A character who has a one-time ability to save a player from being eliminated at night (sometimes this is a potion of life) and a one-time ability to eliminate a player at night (a potion of death).\n\nGAME RULE:\nThe game consists of two phases: night phase and day phase. The two phases are repeated until werewolf or villager wins the game.\n1. Night Phase: During the night, the werewolves discuss and vote for a player to eliminate. Special roles also perform their actions at this time (e.g., the Seer chooses a player to learn their role, the witch chooses a decide if save the player).\n2. Day Phase: During the day, all surviving players discuss who they suspect might be a werewolf. No one reveals their role unless it serves a strategic purpose. After the discussion, a vote is taken, and the player with the most votes is \"lynched\" or eliminated from the game.\n\nVICTORY CONDITION:\nFor werewolves, they win the game if the number of werewolves is equal to or greater than the number of remaining villagers.\nFor villagers, they win if they identify and eliminate all of the werewolves in the group.\n\nCONSTRAINTS:\n1. Your response should be in the first person.\n2. This is a conversational game. You should respond only based on the conversation history and your strategy.\n\nYou are playing werewolf in this game.\n",
"model": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
}
```

- In this configuration, `Player1` is designated as a `DictDialogAgent`. The parameters include a system prompt (`sys_prompt`) that can guide the agent's behavior, the model (`model`) that determines the type of language model of the agent, and a flag (`use_memory`) indicating whether the agent should remember past interactions.
+ In this configuration, `Player1` is designated as a `DictDialogAgent`. The parameters include a system prompt (`sys_prompt`) that can guide the agent's behavior, a model config name (`model_config_name`) that specifies which model configuration the agent uses, and a flag (`use_memory`) indicating whether the agent should remember past interactions.

For other players, configurations can be customized based on their roles. Each role may have different prompts, models, or memory settings. You can refer to the JSON file located at `examples/werewolf/configs/agent_configs.json` within the AgentScope examples directory.
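As a hedged sketch of how such a file is consumed (the `agent_configs` argument and the returned list of instantiated agents are assumptions based on the werewolf example layout, not a guaranteed API):

```python
import agentscope

# Sketch: load model configs and instantiate every agent declared in
# the JSON file; the return value is assumed to be the created agents.
agents = agentscope.init(
    model_configs="./configs/model_configs.json",
    agent_configs="./configs/agent_configs.json",
)
player1 = agents[0]  # "Player1", the DictDialogAgent configured above
```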

7 changes: 3 additions & 4 deletions docs/sphinx_doc/source/tutorial/201-agent.md
@@ -30,15 +30,14 @@ class AgentBase(Operator):
def __init__(
self,
name: str,
- config: Optional[dict] = None,
sys_prompt: Optional[str] = None,
- model: Optional[Union[Callable[..., Any], str]] = None,
+ model_config_name: str = None,
use_memory: bool = True,
memory_config: Optional[dict] = None,
) -> None:

# ... [code omitted for brevity]
def observe(self, x: Union[dict, Sequence[dict]]) -> None:
# An optional method for updating the agent's internal state based on
# messages it has observed. This method can be used to enrich the
# agent's understanding and memory without producing an immediate
@@ -109,7 +108,7 @@ from agentscope.agents import DialogAgent
# Configuration for the DialogAgent
dialog_agent_config = {
"name": "ServiceBot",
"model": "gpt-3.5", # Specify the model used for dialogue generation
"model_config_name": "gpt-3.5", # Specify the model used for dialogue generation
"sys_prompt": "Act as AI assistant to interact with the others. Try to "
"reponse on one line.\n", # Custom prompt for the agent
# Other configurations specific to the DialogAgent
46 changes: 24 additions & 22 deletions docs/sphinx_doc/source/tutorial/203-model.md
@@ -15,23 +15,25 @@ where the model configs could be a list of dict:
```json
[
{
"type": "openai",
"name": "gpt-4",
"parameters": {
"api_key": "xxx",
"organization_id": "xxx",
"config_name": "gpt-4-temperature-0.0",
"model_type": "openai",
"model": "gpt-4",
"api_key": "xxx",
"organization": "xxx",
"generate_args": {
"temperature": 0.0
}
},
{
"type": "openai_dall_e",
"name": "dall-e-3",
"parameters": {
"api_key": "xxx",
"organization_id": "xxx",
"config_name": "dall-e-3-size-1024x1024",
"model_type": "openai_dall_e",
"model": "dall-e-3",
"api_key": "xxx",
"organization": "xxx",
"generate_args": {
"size": "1024x1024"
}
}
},
// Additional models can be configured here
]
```
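Once registered, a config is retrieved by its `config_name`. A minimal sketch using `load_model_by_config_name` (the helper renamed elsewhere in this commit; treating the returned wrapper as directly usable is an assumption):

```python
import agentscope
from agentscope.models import load_model_by_config_name

# Register the configs above, then fetch a model wrapper by its name.
agentscope.init(model_configs="./model_configs.json")
gpt4 = load_model_by_config_name("gpt-4-temperature-0.0")
```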
@@ -86,8 +88,8 @@ In AgentScope, you can load the model with the following model configs: `./flask

```json
{
"type": "post_api",
"name": "flask_llama2-7b-chat",
"model_type": "post_api",
"config_name": "flask_llama2-7b-chat",
"api_url": "http://127.0.0.1:8000/llm/",
"json_args": {
"max_length": 4096,
@@ -127,8 +129,8 @@ In AgentScope, you can load the model with the following model configs: `flask_m

```json
{
"type": "post_api",
"name": "flask_llama2-7b-ms",
"model_type": "post_api",
"config_name": "flask_llama2-7b-ms",
"api_url": "http://127.0.0.1:8000/llm/",
"json_args": {
"max_length": 4096,
@@ -169,8 +171,8 @@ Now you can load the model in AgentScope by the following model config: `fastcha

```json
{
"type": "openai",
"name": "meta-llama/Llama-2-7b-chat-hf",
"config_name": "meta-llama/Llama-2-7b-chat-hf",
"model_type": "openai",
"api_key": "EMPTY",
"client_args": {
"base_url": "http://127.0.0.1:8000/v1/"
@@ -209,8 +211,8 @@ Now you can load the model in AgentScope by the following model config: `vllm_sc

```json
{
"type": "openai",
"name": "meta-llama/Llama-2-7b-chat-hf",
"config_name": "meta-llama/Llama-2-7b-chat-hf",
"model_type": "openai",
"api_key": "EMPTY",
"client_args": {
"base_url": "http://127.0.0.1:8000/v1/"
@@ -228,8 +230,8 @@ Taking `gpt2` in HuggingFace inference API as an example, you can use the following

```json
{
"type": "post_api",
"name": 'gpt2',
"config_name": "gpt2",
"model_type": "post_api",
"headers": {
"Authorization": "Bearer {YOUR_API_TOKEN}"
}
@@ -248,7 +250,7 @@ model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model.eval()
# Do remember to re-implement the `reply` method to tokenize *message*!
- agent = YourAgent(name='agent', model=model, tokenizer=tokenizer)
+ agent = YourAgent(name='agent', model_config_name=config_name, tokenizer=tokenizer)
```
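A rough sketch of the `reply` override mentioned in the comment above (illustrative only: the message fields, and the assumption that `AgentBase` wires `self.model` from `model_config_name`, are not guaranteed by this diff):

```python
from typing import Optional

from agentscope.agents import AgentBase

class YourAgent(AgentBase):
    """Sketch of an agent backed by a locally hosted model."""

    def __init__(self, name: str, model_config_name: str, tokenizer) -> None:
        super().__init__(name=name, model_config_name=model_config_name)
        self.tokenizer = tokenizer  # keep the tokenizer for reply()

    def reply(self, x: Optional[dict] = None) -> dict:
        # Tokenize the incoming message content and generate locally;
        # self.model is assumed to be set by AgentBase from the config.
        inputs = self.tokenizer(x["content"], return_tensors="pt")
        output_ids = self.model.generate(**inputs, max_new_tokens=128)
        text = self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
        return {"name": self.name, "content": text}
```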

[[Return to the top]](#using-different-model-sources-with-model-api)
11 changes: 6 additions & 5 deletions examples/conversation/conversation.py
@@ -8,17 +8,18 @@
agentscope.init(
model_configs=[
{
"type": "openai",
"name": "gpt-3.5-turbo",
"model_type": "openai",
"config_name": "gpt-3.5-turbo",
"model": "gpt-3.5-turbo",
"api_key": "xxx", # Load from env if not provided
"organization": "xxx", # Load from env if not provided
"generate_args": {
"temperature": 0.5,
},
},
{
"type": "post_api",
"name": "my_post_api",
"model_type": "post_api_chat",
"config_name": "my_post_api",
"api_url": "https://xxx",
"headers": {},
},
@@ -29,7 +30,7 @@
dialog_agent = DialogAgent(
name="Assistant",
sys_prompt="You're a helpful assistant.",
model="gpt-3.5-turbo", # replace by your model config name
model_config_name="gpt-3.5-turbo", # replace by your model config name
)
user_agent = UserAgent()

6 changes: 3 additions & 3 deletions examples/distributed/configs/debate_agent_configs.json
@@ -4,7 +4,7 @@
"args": {
"name": "Pro",
"sys_prompt": "Assume the role of a debater who is arguing in favor of the proposition that AGI (Artificial General Intelligence) can be achieved using the GPT model framework. Construct a coherent and persuasive argument, including scientific, technological, and theoretical evidence, to support the statement that GPT models are a viable path to AGI. Highlight the advancements in language understanding, adaptability, and scalability of GPT models as key factors in progressing towards AGI.",
"model": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
},
@@ -13,7 +13,7 @@
"args": {
"name": "Con",
"sys_prompt": "Assume the role of a debater who is arguing against the proposition that AGI can be achieved using the GPT model framework. Construct a coherent and persuasive argument, including scientific, technological, and theoretical evidence, to support the statement that GPT models, while impressive, are insufficient for reaching AGI. Discuss the limitations of GPT models such as lack of understanding, consciousness, ethical reasoning, and general problem-solving abilities that are essential for true AGI.",
"model": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
},
@@ -22,7 +22,7 @@
"args": {
"name": "Judge",
"sys_prompt": "Assume the role of an impartial judge in a debate where the affirmative side argues that AGI can be achieved using the GPT model framework, and the negative side contests this. Listen to both sides' arguments and provide an analytical judgment on which side presented a more compelling and reasonable case. Consider the strength of the evidence, the persuasiveness of the reasoning, and the overall coherence of the arguments presented by each side.",
"model": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
}