About

Implementations of deep reinforcement learning algorithms with TensorFlow Eager, such as:

  • PPO with the clipped surrogate (CLIP) objective and GAE
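
The clipped surrogate objective at the core of PPO can be sketched in a few lines. This is a NumPy illustration of the math only, not the repository's TensorFlow code; the function name and `epsilon` default are illustrative:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    # PPO's clipped surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    # where r is the probability ratio pi_new(a|s) / pi_old(a|s)
    # and A is the advantage estimate.
    clipped_ratio = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon)
    return np.minimum(ratio * advantage, clipped_ratio * advantage)
```

For instance, with ratio 1.5, advantage 1.0, and the default ε = 0.2, the objective is clipped to 1.2, which removes the incentive to push the ratio further from 1.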

Goals

  1. Build intuitions by reimplementing the algorithms.
  2. Provide implementations that are compatible with TensorFlow's Eager mode and use the functionality of recent TensorFlow versions, rather than branching off obscure, older OpenAI code. Specifically:
    • use TF summary writers instead of custom loggers;
    • use TF's distributions instead of custom sampling and custom log-likelihood calculations;
    • use TF datasets instead of custom batching code;
    • use Keras models wherever possible.
  3. Build a foundation for easily experimenting with:
    • MAML
    • DeepMimic-style reward shaping using human-provided trajectories

Results

  • CartPole-v0:
    • PPO:
      • solved: takes under 100K environment steps on average
      • prefers higher advantage-lambda and gamma, but is not very sensitive to value-lambda (though see #15)
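
The advantage-lambda and gamma mentioned above are GAE's two knobs, trading bias against variance in the advantage estimates. A minimal NumPy sketch of the estimator (function name and array shapes are illustrative, not the repository's API):

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    # Generalized Advantage Estimation: a discounted sum of TD residuals.
    # `values` has length len(rewards) + 1 (bootstrap value appended).
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros_like(deltas)
    acc = 0.0
    for t in reversed(range(len(deltas))):
        acc = deltas[t] + gamma * lam * acc
        advantages[t] = acc
    return advantages
```

With λ = 1 this reduces to Monte Carlo advantages (high variance, low bias); with λ = 0 it reduces to one-step TD residuals (low variance, higher bias).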