Commit
enhance distributed debate example
pan-x-c committed Mar 14, 2024
1 parent e489305 commit 2f613d4
Showing 4 changed files with 135 additions and 55 deletions.
41 changes: 23 additions & 18 deletions examples/distributed/README.md
@@ -27,39 +27,44 @@ Now, you can chat with the assistant agent using the command line.
## Distributed debate competition (`distributed_debate.py`)

This example simulates a debate competition with three participant agents, including the affirmative side (**Pro**), the negative side (**Con**), and the adjudicator (**Judge**).
**You can join in the debate as Pro or Con or both.**

Pro believes that AGI can be achieved using the GPT model framework, while Con contests it. Judge listens to both sides' arguments and provides an analytical judgment on which side presented a more compelling and reasonable case.
Pro believes that AGI can be achieved using the GPT model framework, while Con contests it.
Judge listens to both sides' arguments and provides an analytical judgment on which side presented a more compelling and reasonable case.

Each agent is an independent process and can run on different machines.
You can join the debate as Pro or Con by providing the `--is-human` argument.
Messages generated by any agent can be observed by the other agents in the debate.
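
The message sharing mentioned above is handled by AgentScope's `msghub` context manager. Below is a minimal sketch of that pattern, mirroring `run_main_process` in `distributed_debate.py` (shown later in this commit); it assumes `pro_agent`, `con_agent`, and `judge_agent` have already been constructed and connected via `to_dist`, so it is not runnable on its own.

```python
# Sketch of the msghub broadcast pattern used by this example; assumes the
# three agents are already constructed and connected (see distributed_debate.py).
from agentscope.message import Msg
from agentscope.msghub import msghub

hint = Msg(name="System", content="Welcome to the debate on AGI ...")  # placeholder announcement

with msghub(participants=[pro_agent, con_agent, judge_agent], announcement=hint):
    # Every reply produced inside the hub is observed by the other
    # participants, so agents can be called without explicitly passing
    # the previous message.
    for _ in range(3):
        pro_agent()
        con_agent()
    judge_agent()
```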

```
# step 1: setup Pro, Con, Judge agent server separately
# please make sure the ports are available and the ip addresses are accessible, here we use localhost as an example.
# if you run all agent servers on the same machine, you can ignore the host field, it will use localhost by default.
### Step 1: set up the Pro and Con agent servers

# setup Pro
```shell
cd examples/distributed
# setup LLM-based Pro
python distributed_debate.py --role pro --pro-host localhost --pro-port 12011
# or join the debate as Pro by yourself
# python distributed_debate.py --role pro --pro-host localhost --pro-port 12011 --is-human
```

# setup Con
```shell
cd examples/distributed
# setup LLM-based Con
python distributed_debate.py --role con --con-host localhost --con-port 12012
# or join the debate as Con by yourself
# python distributed_debate.py --role con --con-host localhost --con-port 12012 --is-human
```

# setup Judge
cd examples/distributed
python distributed_debate.py --role judge --judge-host localhost --judge-port 12013
> Please make sure the ports are available and the IP addresses are accessible; here we use localhost as an example.
> If you run all agent servers on the same machine, you can omit the host arguments; they default to localhost.
# step 2: run the main process
### Step 2: run the main process

```shell
# set up the main process (the Judge agent runs inside it)
cd examples/distributed
python distributed_debate.py --role main \
--pro-host localhost --pro-port 12011 \
--con-host localhost --con-port 12012 \
--judge-host localhost --judge-port 12013
# step 3: watch the debate process in the terminal of the main process.
--con-host localhost --con-port 12012
```

### Step 3: watch or join the debate in your terminal
6 changes: 3 additions & 3 deletions examples/distributed/configs/debate_agent_configs.json
@@ -1,6 +1,6 @@
[
{
"class": "DictDialogAgent",
"class": "DialogAgent",
"args": {
"name": "Pro",
"sys_prompt": "Assume the role of a debater who is arguing in favor of the proposition that AGI (Artificial General Intelligence) can be achieved using the GPT model framework. Construct a coherent and persuasive argument, including scientific, technological, and theoretical evidence, to support the statement that GPT models are a viable path to AGI. Highlight the advancements in language understanding, adaptability, and scalability of GPT models as key factors in progressing towards AGI.",
@@ -9,7 +9,7 @@
}
},
{
"class": "DictDialogAgent",
"class": "DialogAgent",
"args": {
"name": "Con",
"sys_prompt": "Assume the role of a debater who is arguing against the proposition that AGI can be achieved using the GPT model framework. Construct a coherent and persuasive argument, including scientific, technological, and theoretical evidence, to support the statement that GPT models, while impressive, are insufficient for reaching AGI. Discuss the limitations of GPT models such as lack of understanding, consciousness, ethical reasoning, and general problem-solving abilities that are essential for true AGI.",
@@ -18,7 +18,7 @@
}
},
{
"class": "DictDialogAgent",
"class": "DialogAgent",
"args": {
"name": "Judge",
"sys_prompt": "Assume the role of an impartial judge in a debate where the affirmative side argues that AGI can be achieved using the GPT model framework, and the negative side contests this. Listen to both sides' arguments and provide an analytical judgment on which side presented a more compelling and reasonable case. Consider the strength of the evidence, the persuasiveness of the reasoning, and the overall coherence of the arguments presented by each side.",
70 changes: 36 additions & 34 deletions examples/distributed/distributed_debate.py
@@ -4,6 +4,8 @@
import argparse
import json

from user_proxy_agent import UserProxyAgent

import agentscope
from agentscope.msghub import msghub
from agentscope.agents.dialog_agent import DialogAgent
@@ -30,9 +32,10 @@ def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--role",
        choices=["pro", "con", "judge", "main"],
        choices=["pro", "con", "main"],
        default="main",
    )
    parser.add_argument("--is-human", action="store_true")
    parser.add_argument("--pro-host", type=str, default="localhost")
    parser.add_argument(
        "--pro-port",
@@ -59,28 +62,34 @@ def setup_server(parsed_args: argparse.Namespace) -> None:
    agentscope.init(
        model_configs="configs/model_configs.json",
    )
    with open(
        "configs/debate_agent_configs.json",
        "r",
        encoding="utf-8",
    ) as f:
        configs = json.load(f)
    configs = {
        "pro": configs[0]["args"],
        "con": configs[1]["args"],
        "judge": configs[2]["args"],
    }
    config = configs[parsed_args.role]
    host = getattr(parsed_args, f"{parsed_args.role}_host")
    port = getattr(parsed_args, f"{parsed_args.role}_port")
    server_launcher = RpcAgentServerLauncher(
        agent_class=DialogAgent,
        agent_kwargs=config,
        host=host,
        port=port,
    )
    server_launcher.launch()
    server_launcher.wait_until_terminate()
    host = getattr(parsed_args, f"{parsed_args.role}_host")
    port = getattr(parsed_args, f"{parsed_args.role}_port")
    if parsed_args.is_human:
        agent_class = UserProxyAgent
        config = {"name": parsed_args.role}
    else:
        with open(
            "configs/debate_agent_configs.json",
            "r",
            encoding="utf-8",
        ) as f:
            configs = json.load(f)
        configs = {
            "pro": configs[0]["args"],
            "con": configs[1]["args"],
            "judge": configs[2]["args"],
        }
        config = configs[parsed_args.role]
        agent_class = DialogAgent

    server_launcher = RpcAgentServerLauncher(
        agent_class=agent_class,
        agent_kwargs=config,
        host=host,
        port=port,
    )
    server_launcher.launch(in_subprocess=False)
    server_launcher.wait_until_terminate()


def run_main_process(parsed_args: argparse.Namespace) -> None:
@@ -99,24 +108,17 @@ def run_main_process(parsed_args: argparse.Namespace) -> None:
        port=parsed_args.con_port,
        launch_server=False,
    )
    judge_agent = judge_agent.to_dist(
        host=parsed_args.judge_host,
        port=parsed_args.judge_port,
        launch_server=False,
    )
    participants = [pro_agent, con_agent, judge_agent]
    hint = Msg(name="System", content=ANNOUNCEMENT)
    x = None
    with msghub(participants=participants, announcement=hint):
        for _ in range(3):
            pro_resp = pro_agent(x)
            pro_resp = pro_agent()
            logger.chat(pro_resp)
            con_resp = con_agent(pro_resp)
            con_resp = con_agent()
            logger.chat(con_resp)
            x = judge_agent(con_resp)
            logger.chat(x)
        x = judge_agent(x)
        logger.chat(x)
            judge_agent()
        judge_agent(x)


if __name__ == "__main__":
73 changes: 73 additions & 0 deletions examples/distributed/user_proxy_agent.py
@@ -0,0 +1,73 @@
# -*- coding: utf-8 -*-
"""User Proxy Agent class for distributed usage"""
import time
from typing import Sequence, Union
from typing import Optional

from agentscope.agents import AgentBase
from agentscope.message import Msg
from agentscope.web.studio.utils import user_input


class UserProxyAgent(AgentBase):
    """User proxy agent class"""

    def __init__(self, name: str = "User", require_url: bool = False) -> None:
        """Initialize a UserProxyAgent object.
        Arguments:
            name (`str`, defaults to `"User"`):
                The name of the agent. Defaults to "User".
            require_url (`bool`, defaults to `False`):
                Whether the agent requires user to input a URL. Defaults to
                False. The URL can lead to a website, a file,
                or a directory. It will be added into the generated message
                in field `url`.
        """
        super().__init__(name=name)

        self.name = name
        self.require_url = require_url

    def reply(
        self,
        x: dict = None,
        required_keys: Optional[Union[list[str], str]] = None,
    ) -> dict:
        if x is not None:
            self.speak(x)
            self.memory.add(x)

        time.sleep(0.5)
        content = user_input()

        kwargs = {}
        if required_keys is not None:
            if isinstance(required_keys, str):
                required_keys = [required_keys]

            for key in required_keys:
                kwargs[key] = input(f"{key}: ")

        # Input url of file, image, video, audio or website
        url = None
        if self.require_url:
            url = input("URL: ")

        # Add additional keys
        msg = Msg(
            self.name,
            content=content,
            url=url,
            **kwargs,  # type: ignore[arg-type]
        )

        # Add to memory
        self.memory.add(msg)

        return msg

    def observe(self, x: Union[dict, Sequence[dict]]) -> None:
        if x is not None:
            self.speak(x)  # type: ignore[arg-type]
            self.memory.add(x)

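For context, below is a minimal sketch of how this `UserProxyAgent` is served so that a human can play one side of the debate, mirroring the `--is-human` branch of `setup_server()` in `distributed_debate.py` above. The import path of `RpcAgentServerLauncher` is not visible in this diff and is assumed here, as are the host and port values.

```python
# Sketch: serve UserProxyAgent so a human can play the Pro side.
# Mirrors the --is-human branch of setup_server(); the RpcAgentServerLauncher
# import path and the host/port values below are assumptions for illustration.
import agentscope
from agentscope.agents.rpc_agent import RpcAgentServerLauncher  # assumed import path

from user_proxy_agent import UserProxyAgent

agentscope.init(model_configs="configs/model_configs.json")

launcher = RpcAgentServerLauncher(
    agent_class=UserProxyAgent,
    agent_kwargs={"name": "pro"},  # the debate role doubles as the agent name
    host="localhost",
    port=12011,
)
launcher.launch(in_subprocess=False)  # stay in the current process, as setup_server() does
launcher.wait_until_terminate()
```

With this server running, the main process connects to it exactly as in Step 2 of the README above.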