Merge pull request #13 from manuelseeger/debarcode
Debarcode - unmask barcode players
manuelseeger authored Oct 12, 2024
2 parents 5c7fee9 + af61053 commit a0ab345
Showing 64 changed files with 1,409 additions and 1,470 deletions.
6 changes: 5 additions & 1 deletion .gitignore
@@ -17,4 +17,8 @@ mongodb/mongo-seed/*.json
obs_tools/ttsmodels/
tests/coverage/
.coverage
tests/test-results.xml
tests/test-results.xml
*report.html
mongodb/mongo-seed/*.json
mongodb/dump/
playground/external/
38 changes: 13 additions & 25 deletions .vscode/launch.json
@@ -11,7 +11,6 @@
"program": "coach.py",
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env",

"env": {
"PYDEVD_DISABLE_FILE_VALIDATION": "1"
}
@@ -23,7 +22,6 @@
"program": "coach.py",
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env",

"env": {
"PYDEVD_DISABLE_FILE_VALIDATION": "1",
"AICOACH_audiomode": "text",
@@ -37,7 +35,6 @@
"program": "coach.py",
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env",

"env": {
"PYDEVD_DISABLE_FILE_VALIDATION": "1",
"AICOACH_aibackend": "Mocked"
@@ -50,34 +47,25 @@
"program": "build.py",
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env",
"args": ["--deploy"]
"args": [
"--deploy"
]
},
{
"name": "CLI sync",
"name": "CLI full sync",
"type": "debugpy",
"request": "launch",
"program": "repcli.py",
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env",
"args": ["--debug", "--clean", "sync"]
},
{
"name": "CLI add players",
"type": "debugpy",
"request": "launch",
"program": "repcli.py",
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env",
"args": ["--debug", "addplayers", "--from=2024-06-07"]
},
{
"name": "CLI deamon",
"type": "debugpy",
"request": "launch",
"program": "repcli.py",
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env",
"args": ["--debug", "--clean", "deamon"]
"args": [
"--debug",
"--clean",
"sync",
"--from=2024-01-01",
"players",
"replays"
]
},
{
"name": "OBS client",
@@ -89,4 +77,4 @@
"args": []
}
]
}
}
11 changes: 11 additions & 0 deletions .vscode/tasks.json
@@ -0,0 +1,11 @@
{
"version": "2.0.0",
"tasks": [
{
"label": "Env File Lock",
"type": "shell",
"command": "powershell -Command 'conda env export --no-builds | findstr -v \"prefix\" > environment.yml'",
"problemMatcher": []
}
]
}
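For reference, the export pipeline that this task wraps can also be run directly from a PowerShell prompt; this is just the command from the task above, without the JSON escaping:

```sh
conda env export --no-builds | findstr -v "prefix" > environment.yml
```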
90 changes: 46 additions & 44 deletions README.md
@@ -19,15 +19,15 @@ This is my personal research project to explore the latest in LLM based agents.

Looking up past games when a new game is being played:

![Example new game](archive/aicoach-scanner-example.png "New game started")
![Example new game](assets/aicoach-scanner-example.png "New game started")

Analyzing a replay after a game just finished:

![Example replay](archive/aicoach-newreplay-manners.png "New Replay, discussing player's manners")
![Example replay](assets/aicoach-newreplay-manners.png "New Replay, discussing player's manners")

Answering arbitrary questions on SC2:

![Example replay](archive/aicoach-hey-goat.png "Weighing in on the GOAT debate")
![Example replay](assets/aicoach-hey-goat.png "Weighing in on the GOAT debate")

## Minimal Setup

@@ -63,41 +63,11 @@ The rest of the settings will be taken from `config.yml`.

Secrets are configured with environment variables. Either provide them at runtime or put them in a dotenv file, like [.env.example](.env.example). (But copy to `.env` since the example file is ignored).

### OpenAI

Prerequisites:

- Setup an OpenAI account and fund with credits
- Create an OpenAI Assistant
- Create an API key.

Add your OpenAI organization, Assistant ID, and API key to the env variables, `AICOACH_ASSISTANT_ID`, `AICOACH_OPENAI_API_KEY`, `AICOACH_OPENAI_ORG_ID`.

Note on cost: Long conversations can cost up to one dollar ($1.00) in OpenAI API usage. AICoach will not incur API costs until one of the wake events is triggered - see below.

If you just want a database with your replays you can skip this step and the next or do it later.

### Build and deploy assistant

```sh
> python build.py
```

to build the assistant. You should have a new file [aicoach/assistant.json](aicoach/assistant.json).

```sh
> python build.py --deploy
```

to deploy the assistant to OpenAI. Check on https://platform.openai.com/playground if the assistant is initialized with tools and instructions.

### Database

Any MongoDB > 4.5 will do. Either set one up yourself, or follow the instructions in [mongodb/](mongodb/README.md) on how to set up a local database for dev/testing.

If you setup your own MongoDB, create a database, and add 3 collections `replays`, `replays.meta`, `replays.players`.

Add the DB name to settings:
If you set up your own MongoDB, create a database and add the DB name to settings:

```yaml
# config.yourname.yml
@@ -123,35 +93,66 @@ Use the tool [repcli.py](repcli.py) to populate your DB with replays. The tools
Usage: repcli.py [OPTIONS] COMMAND [ARGS]...
Options:
--clean Delete replays from instant-leave games
--debug Print debug messages, including replay parser
--simulation Run in simulation mode, don't actually insert to DB
--help Show this message and exit.
--clean Delete replays from instant-leave games
--debug Print debug messages, including replay parser
--simulation Run in simulation mode, don't actually insert to DB
-v, --verbose Print verbose output
--help Show this message and exit.
Commands:
deamon Monitor replay folder, add new replays to MongoDB
echo Echo pretty-printed parsed replay data from a .SC2Replay file
sync Sync replays from replay folder to MongoDB
echo Echo pretty-printed parsed replay data from a .SC2Replay file
query Query the DB for replays and players
sync Sync replays and players from replay folder to MongoDB
```

Run

```sh
> python repcli.py --simulation sync --from=2024-01-01
> python repcli.py --simulation sync players replays --from=2024-01-01
```

to read all 1v1 ladder replays from beginning of 2024. With the `--simulation` flag the replays will not actually be commited to DB. Remove the `--simulation` flag and run again to store all replay in DB.
to read all 1v1 ladder replays from the beginning of 2024, and add the replays and the players from those replays to the DB. With the `--simulation` flag the replays will not actually be committed to the DB. Remove the `--simulation` flag and run again to store all replays in the DB.

The `replays` collection of the DB should now be populated with replay documents.

See `python repcli.py sync --help` for more options. You can always repopulate the DB from replay files without destroying anything. AICoach never modifies the replay data in the DB.
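To keep the DB updated continuously while you play, the `deamon` command listed above watches the replay folder and adds new replays as they arrive. A typical invocation might look like this (a sketch using the flags documented above):

```sh
> python repcli.py --clean deamon
```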

### OpenAI

Prerequisites:

- Set up an OpenAI account and fund it with credits
- Create an OpenAI Assistant
- Create an API key.

Add your OpenAI organization, Assistant ID, and API key to the env variables, `AICOACH_ASSISTANT_ID`, `AICOACH_OPENAI_API_KEY`, `AICOACH_OPENAI_ORG_ID`.
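For example, the corresponding entries in a `.env` file could look like this (placeholder values, not real credentials):

```sh
AICOACH_OPENAI_ORG_ID="your-org-id"
AICOACH_OPENAI_API_KEY="your-api-key"
AICOACH_ASSISTANT_ID="your-assistant-id"
```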

Note on cost: Long conversations can cost up to one dollar ($1.00) in OpenAI API usage. Typically, however, interactions stay below $0.10. AICoach will not incur API costs until one of the wake events is triggered - see below.

If you just want a database with your replays, you can skip this step and the next, or do it later.

### Build and deploy assistant

```sh
> python build.py
```

to build the assistant. You should have a new file [aicoach/assistant.json](aicoach/assistant.json).

```sh
> python build.py --deploy
```

to deploy the assistant to OpenAI. Check at https://platform.openai.com/playground whether the assistant is initialized with tools and instructions.


### (Optional) Additional settings

Configure a wake hotkey. On pressing this key (combination) AICoach will wake up and ask for input. Default: `ctrl+alt+w`.

Configure student.emoji if you want to show a [different icon](./playground/emojis.txt) in the terminal output.

You can disable interactions with the `interactive` flag. If it is off, AICoach will speak but won't listen for input.

```yaml
# config.yourname.yml
@@ -162,6 +163,7 @@ student:
emoji: ":woman_student:"
db_name: "YOURDB"
wake_key: "ctrl+alt+w"
interactive: False
```

## Run AICoach
42 changes: 34 additions & 8 deletions aicoach/aicoach.py
@@ -1,7 +1,7 @@
import json
import logging
from datetime import datetime
from typing import Callable, Dict, Generator
from typing import Callable, Dict, Generator, Generic, Type, TypeVar

from openai import APIError, AssistantEventHandler, OpenAI
from openai.lib.streaming import AssistantStreamManager
@@ -17,6 +17,7 @@
Run,
)
from openai.types.beta.threads.run import Usage
from pydantic import BaseModel

from config import config

@@ -27,6 +28,8 @@

client = OpenAI(api_key=config.openai_api_key, organization=config.openai_org_id)

TBaseModel = TypeVar("T", bound=BaseModel)


class AICoach:
threads: Dict[str, Thread] = {}
@@ -120,6 +123,29 @@ def get_response(self, message) -> str:
buffer += response
return buffer

def get_structured_response(self, message, schema: Type[TBaseModel]) -> TBaseModel:
"""Get the structured response from the assistant for a given message."""
message = client.beta.threads.messages.create(
thread_id=self.thread.id,
role="user",
content=message,
)
new_run = client.beta.threads.runs.create_and_poll(
thread_id=self.thread.id,
assistant_id=self.assistant.id,
tools=[], # structured output requires no tools
response_format={
"type": "json_schema",
"json_schema": {
"name": schema.__name__,
"schema": schema.model_json_schema(),
},
},
)

if new_run.status == "completed":
return schema(**json.loads(self.get_most_recent_message()))

def chat(self, text) -> Generator[str, None, None]:
"""Send a message to the assistant and stream the response.
@@ -135,10 +161,13 @@ def chat(self, text) -> Generator[str, None, None]:
assistant_id=self.assistant.id,
additional_instructions=self.additional_instructions,
) as stream:

for event in stream:
for token in self._process_event(event):
yield token
try:
for event in stream:
for token in self._process_event(event):
yield token
except APIError as e:
log.error(f"API Error: {e}")
yield ""

def _process_event(self, event) -> Generator[str, None, None]:
if isinstance(event, ThreadMessageDelta):
@@ -172,9 +201,6 @@ def _handle_tool_calls(self, run: Run) -> dict[str, str]:
def _handle_tool_call(
self, tool_call: RequiredActionFunctionToolCall
) -> tuple[str, str]:
if tool_call.type != "function":
return (None, None)

args = json.loads(tool_call.function.arguments)
name = tool_call.function.name
log.info(f"Calling function {name} with args: {args}")
8 changes: 6 additions & 2 deletions aicoach/aicoach_mock.py
@@ -1,15 +1,15 @@
import random
from threading import Thread
from time import sleep
from typing import Callable, Dict, Generator
from typing import Callable, Dict, Generator, Type
from uuid import uuid4

import tiktoken
from openai.types.beta.threads import Message, Text, TextContentBlock
from openai.types.beta.threads.run import Usage
from typing_extensions import override

from .aicoach import AICoach
from .aicoach import AICoach, TBaseModel

data = [
"The current supply count is not part of the game information provided. I can give insights based on replays on record but not from live games or current replays in progress. Would you like me to look up a recent replay instead?",
@@ -100,6 +100,10 @@ def chat(self, text: str) -> Generator[str, None, None]:
for token in self.generate_stream():
yield token

@override
def get_structured_response(self, message, schema: Type[TBaseModel]) -> TBaseModel:
raise NotImplementedError

@override
def get_conversation(self) -> list[Message]:
return [
12 changes: 8 additions & 4 deletions aicoach/functions/LookupPlayer.py
@@ -1,9 +1,12 @@
from .base import AIFunction
from typing import Annotated
import requests
import logging
from typing import Annotated

import httpx

from config import config

from .base import AIFunction

log = logging.getLogger(f"{config.name}.{__name__}")

PROFILE_BASE = "https://starcraft2.com/en-us/profile/"
@@ -24,13 +27,14 @@ def LookupPlayer(
Only call this function with valid toon handles in the format 2-S2-1-1849098.
"""
raise NotImplementedError("This function is not implemented yet.")
summary = {}
try:
handle = toon_handle.replace("-S2-", "/").replace("-", "/")

api_url = API_BASE + handle

with requests.Session() as s:
with httpx.Client() as s:
r = s.get(api_url, timeout=10)

if r.status_code != 200: