StarCraft II-playing agents that interface directly with DeepMind's PySC2 API.
At the moment, deep RL agents are unable to defeat even the easiest of the scripted bots in the full game. Therefore, I begin by implementing agents intended to tackle the mini-games introduced and described in StarCraft II: A New Challenge for Reinforcement Learning.
- Python 3 (tested with 3.6)
- pysc2 (tested with 2.0.1)
- tensorflow (tested with 1.9.0)
- StarCraft II + Maps
To ensure that package versions are compatible with the agents in this repository, I recommend using Pipenv. Otherwise, make sure you have the requirements listed above, along with their dependencies.
$ pip install pipenv
$ git clone https://github.com/rayheberer/SC2Agents.git
$ cd SC2Agents
$ pipenv install
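Once the dependencies are installed, commands can be run inside the virtual environment with pipenv run (or after activating it with pipenv shell), for example:
$ pipenv run python -m run --map CollectMineralShards --agent agents.deepq.DQNMoveOnly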
Download StarCraft II from http://us.battle.net/sc2/en/legacy-of-the-void/#footer. The Starter Edition is sufficient.

PySC2 expects the game to be installed in ~/StarCraftII/, but this can be overridden by setting the SC2PATH environment variable.
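For example, if the game is installed somewhere else, export the variable before running any agents (the path below is only a placeholder):
$ export SC2PATH=/path/to/StarCraftII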
Download the ladder maps and the mini-games, and extract them to your StarCraftII/Maps/ directory.
Train an agent on a mini-game:

$ python -m run --map CollectMineralShards --agent agents.deepq.DQNMoveOnly
This is equivalent to:
$ python -m pysc2.bin.agent --map CollectMineralShards --agent agents.deepq.DQNMoveOnly
However, run also makes it possible to specify agent-specific hyperparameters as flags.
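For instance, a learning rate might be set like this (the --learning_rate flag is illustrative; check the agent's module for the flags it actually defines):
$ python -m run --map CollectMineralShards --agent agents.deepq.DQNMoveOnly --learning_rate=0.0001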
Use the --save_dir and --ckpt_name flags to specify a TensorFlow checkpoint to read from and write to. By default, an agent will store checkpoints in ./checkpoints/<name-of-agent-class>.
For example, if there is a checkpoint named DQNMoveOnly2 in ./checkpoints, continue training that model by running:

$ python -m run --map CollectMineralShards --agent agents.deepq.DQNMoveOnly --ckpt_name=DQNMoveOnly2
Monitor training metrics with TensorBoard:

$ tensorboard --logdir=./tensorboard/deepq
To run a trained agent without further updating its weights, set --training=False:

$ python -m run --map CollectMineralShards --agent agents.deepq.DQNMoveOnly --training=False
To watch a saved replay:

$ python -m pysc2.bin.play --replay <path-to-replay>
Links to pretrained networks, along with reports on their results, can be found in results.
All checkpoint files are stored in this Google Drive.
The following agents are implemented in this repository:
- A2CAtari - a synchronous variant of DeepMind's baseline actor-critic agent, based on the Atari-net architecture of Asynchronous Methods for Deep Reinforcement Learning
- DQNMoveOnly - a deep Q-learner that processes a single screen feature layer through a convolutional neural network and outputs spatial coordinates for the Move_screen action (sketched after this list).
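As a rough illustration of the DQNMoveOnly idea, here is a minimal TensorFlow 1.x sketch (not the repository's actual code) of a network that maps one screen feature layer to a Q-value per screen coordinate; the layer sizes and the 84x84 resolution are assumptions:

```python
# Minimal sketch of a DQNMoveOnly-style network: one screen feature layer
# in, one Q-value per (x, y) screen coordinate out. Layer sizes and the
# 84x84 screen resolution are illustrative assumptions.
import tensorflow as tf

SCREEN_SIZE = 84

# Batch of single-channel screen feature layers.
screen = tf.placeholder(tf.float32, [None, SCREEN_SIZE, SCREEN_SIZE, 1])

conv1 = tf.layers.conv2d(screen, filters=16, kernel_size=5,
                         padding='same', activation=tf.nn.relu)
conv2 = tf.layers.conv2d(conv1, filters=32, kernel_size=3,
                         padding='same', activation=tf.nn.relu)

# A 1x1 convolution yields one Q-value per screen coordinate.
q_spatial = tf.layers.conv2d(conv2, filters=1, kernel_size=1)
q_flat = tf.layers.flatten(q_spatial)

# The greedy action's flat index unravels to (x, y) coordinates,
# which can be passed to the Move_screen action.
best = tf.argmax(q_flat, axis=1)
x, y = best % SCREEN_SIZE, best // SCREEN_SIZE
```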