# Metareasoning.jl

Source code for the paper: Bhatia, A., Svegliato, J., Nashed, S. B., & Zilberstein, S. (2022). *Tuning the Hyperparameters of Anytime Planning: A Metareasoning Approach with Deep Reinforcement Learning*. Proceedings of the International Conference on Automated Planning and Scheduling, 32(1), 556-564. https://ojs.aaai.org/index.php/ICAPS/article/view/19842

This package provides RL environments, compatible with the ReinforcementLearning.jl API, for controlling the hyperparameters of the anytime algorithms in RRTStar.jl and AnytimeWeightedAStar.jl.
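As a sketch of what compatibility with the ReinforcementLearning.jl API means, any environment following that interface supports the same interaction loop. The example below uses `RandomWalk1D`, a built-in toy environment from ReinforcementLearning.jl, as a stand-in; the actual environment types for RRT* and AWA* are defined in this package's source files.

```julia
using ReinforcementLearning  # provides the AbstractEnv interface

# RandomWalk1D is a built-in toy environment standing in for this
# package's hyperparameter-control environments, which follow the
# same interface.
env = RandomWalk1D()

reset!(env)                      # start a new episode
while !is_terminated(env)
    a = rand(action_space(env))  # sample a random action
    env(a)                       # step the environment with the action
end
r = reward(env)                  # reward at episode end
```

An RL agent (such as the DQN agents trained by the scripts below) replaces the random action selection with a learned policy.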

## Install Instructions

1. Clone this repository.
2. Install Julia 1.7.2.
3. From the root directory of this package, run:

   ```
   julia --project=. -e "using Pkg; Pkg.instantiate()"
   ```

You will need a wandb account to log the runs. The first time you run either rrt_dqn.jl or aastar_dqn.jl, you will be prompted to log in to your wandb account.


## Training and evaluating models

### RRT*

Review the settings in rrt_dqn.jl, then run:

```
julia --project=. rrt_dqn.jl
```

This will train a DQN model, evaluate it, and record videos of sample episodes. The logs appear in the logs/ directory and on your wandb dashboard.

### Anytime Weighted A* (AWA*)

Review the settings in aastar_dqn.jl, uncomment the code specific to the desired search problem, and run:

```
julia --project=. aastar_dqn.jl
```


## Baselines

### RRT*

In rrt_baselines.jl, set the desired growth factor, and run:

```
julia --project=. rrt_baselines.jl
```

### AWA*

Simply run:

```
julia --project=. aastar_baselines.jl
```