apex


Apex is a small, modular library containing implementations of continuous-control reinforcement learning algorithms. It is fully compatible with OpenAI Gym.

(Demo GIFs of trained running policies)

Running experiments

Basics

Any algorithm can be run from the apex.py entry point.

To run PPO on a Cassie environment:

python apex.py ppo --env_name Cassie-v0 --num_procs 12 --run_name experiment01

To run asynchronous TD3 on the gym environment Walker2d-v2:

python apex.py td3_async --env_name Walker2d-v2 --num_procs 12 --run_name experiment02
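
Under the hood this is a standard subcommand pattern: the first positional argument selects the algorithm, and the remaining flags configure the run. A minimal sketch of that pattern (illustrative only, not Apex's actual source; run_ppo is a hypothetical stand-in for the real trainer):

import argparse

# Hypothetical stand-in for the library's real PPO training routine.
def run_ppo(args):
    print(f"PPO on {args.env_name}, {args.num_procs} procs, run '{args.run_name}'")

parser = argparse.ArgumentParser(description="apex entry point (sketch)")
subparsers = parser.add_subparsers(dest="algo", required=True)

# One sub-parser per algorithm; each carries its own flags and trainer.
ppo = subparsers.add_parser("ppo")
ppo.add_argument("--env_name", default="Cassie-v0")
ppo.add_argument("--num_procs", type=int, default=1)
ppo.add_argument("--run_name", default="experiment")
ppo.set_defaults(func=run_ppo)

args = parser.parse_args()
args.func(args)  # dispatch to the selected algorithm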

Logging details / Monitoring live training progress

Tensorboard logging is enabled by default for all algorithms. The logger expects you to supply an argument named logdir, the root directory in which to store your logfiles, and an argument named seed, which is used to seed the pseudorandom number generators.

A basic command line invocation illustrating this:

python apex.py ars --logdir logs/ars --seed 1337

The resulting directory tree would look something like this:

logs/                                   # root logdir with all of the saved models and tensorboard logs
└── ars                                 # algorithm name
    └── Cassie-v0                       # environment name
        └── 8b8b12-seed1                # unique run name created with hash of hyperparameters
            ├── actor.pt                # actor network for algo
            ├── critic.pt               # critic network for algo
            ├── events.out.tfevents     # tensorboard binary file
            ├── experiment.info         # readable hyperparameters for this run
            └── experiment.pkl          # loadable pickle of hyperparameters
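
The run name shown above is deterministic: launching twice with identical hyperparameters writes into the same directory. A hedged sketch of how such a name and writer can be constructed (the exact hashing scheme in Apex may differ; the hparams values here are illustrative):

import hashlib

from torch.utils.tensorboard import SummaryWriter

hparams = {"env_name": "Cassie-v0", "lr": 3e-4, "seed": 1}  # illustrative values
digest = hashlib.md5(repr(sorted(hparams.items())).encode()).hexdigest()[:6]
run_dir = f"logs/ars/{hparams['env_name']}/{digest}-seed{hparams['seed']}"

writer = SummaryWriter(log_dir=run_dir)  # the events.out.tfevents file lands here
writer.add_scalar("reward/episode_return", 0.0, global_step=0)
writer.close()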

This layout makes it easy to compare experiments in tensorboard and to resume training later on.
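
Because each run stores its hyperparameters in experiment.pkl, the exact settings of a finished run can be reloaded before relaunching it. A small sketch, assuming the file is a plain pickle as the tree above suggests:

import pickle

# Path taken from the example tree above.
with open("logs/ars/Cassie-v0/8b8b12-seed1/experiment.pkl", "rb") as f:
    hparams = pickle.load(f)
print(hparams)  # inspect the settings, then relaunch with the same flags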

To see live training progress

Run $ tensorboard --logdir logs/, then navigate to http://localhost:6006/ in your browser.

Cassie Environments:

  • Cassie-v0 : basic unified environment for walking/running policies
  • CassieTraj-v0 : unified environment with reference trajectories
  • CassiePlayground-v0 : environment for executing autonomous missions
  • CassieStanding-v0 : environment for training standing policies
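
Since these environments follow the OpenAI gym interface and the actor is saved as a regular PyTorch module, a trained policy can be rolled out in a few lines. A hedged sketch, assuming Cassie-v0 has been registered with gym and that actor.pt maps observations directly to actions (the pre-0.26 gym step API is used, matching this library's era):

import gym
import torch

# torch.load needs the actor's class definition to be importable,
# so run this from inside the repository.
actor = torch.load("logs/ars/Cassie-v0/8b8b12-seed1/actor.pt")
actor.eval()

env = gym.make("Cassie-v0")  # assumes the Cassie envs are registered
obs = env.reset()
done = False
while not done:
    with torch.no_grad():
        action = actor(torch.as_tensor(obs, dtype=torch.float32)).numpy()
    obs, reward, done, info = env.step(action)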

Algorithms:

Currently implemented:

  • PPO
  • TD3 (including an asynchronous variant, td3_async)
  • DDPG
  • ARS

To be implemented long term:

Maybe implemented in future:

  • DXNN
  • ACER and other off-policy methods
  • Model-based methods

Acknowledgements

Thanks to @ikostrikov, whose great implementations were used for debugging. Also thanks to @rll for rllab, which inspired a lot of the high-level interface and logging for this library, and to @OpenAI for the original PPO TensorFlow implementation. Thanks to @sfujim for the clean implementations of TD3 and DDPG in PyTorch, and to @modestyachts for the easy-to-understand ARS implementation.
