This project uses deep reinforcement learning (DRL) to address waypoint-following task allocation and planning for multi-robot systems.
This software requires Python 3.9. See the requirements.txt file for the full list of dependencies. To install the dependencies, run the following command:
pip install -r requirements.txt
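If you prefer to keep the dependencies isolated from your system Python, one common option (not required by the project, assuming a POSIX shell with python3.9 on the PATH) is to install them inside a virtual environment:
python3.9 -m venv venv
source venv/bin/activate
pip install -r requirements.txt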
Alternatively, you can use the provided Dockerfile to build the environment. To build the Docker image, run the following command:
docker build -t rlwaypointmrta:latest .
To run the resulting Docker image, run the following command:
docker run --rm -it rlwaypointmrta:latest
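If you want training sessions or evaluation results to persist outside the container, you can additionally mount the trained_sessions folder as a volume; the container-side path /app used below is an assumption and should be adjusted to the image's actual working directory:
docker run --rm -it -v "$(pwd)/trained_sessions:/app/trained_sessions" rlwaypointmrta:latest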
The TSPLIB data files are from: http://elib.zib.de/pub/mp-testdata/tsp/tsplib/tsplib.html
The pre-trained RL models for solving TSP problems are from: https://github.com/wouterkool/attention-learn-to-route
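For reference, TSPLIB instances list one city per line inside a NODE_COORD_SECTION. The following is a minimal sketch (not code from this repository) for reading the coordinates of a EUC_2D instance; the file name in the usage comment is only an example:

# Minimal TSPLIB reader sketch: extracts (x, y) coordinates from the
# NODE_COORD_SECTION of a EUC_2D instance. Illustrative only; the project
# may load the data differently.
def read_tsplib_coords(path):
    coords = []
    in_section = False
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line == "NODE_COORD_SECTION":
                in_section = True
                continue
            if line == "EOF":
                break
            if in_section and line:
                _, x, y = line.split()[:3]
                coords.append((float(x), float(y)))
    return coords

# Example usage (file name is illustrative):
# coords = read_tsplib_coords("berlin52.tsp")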
To train a model, run the following command:
python main.py --train
To evaluate a trained model, for example one of the provided checkpoints, run the following command:
python main.py --eval --eval_dir trained_sessions/moe_mlp/rand_100-3/trained_model/batch31200.pt
The arguments that can be specified for training and evaluation are defined in arg_parser.py. Documentation for each argument is available by running the following command:
python main.py --help
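For orientation, the commands above suggest an argparse-based interface roughly along the following lines. This is a hypothetical sketch, not the actual contents of arg_parser.py; only --train, --eval, and --eval_dir are taken from the commands in this document, and any other option should be checked with python main.py --help.

# Hypothetical sketch of an argparse setup similar to arg_parser.py.
import argparse

def get_parser():
    parser = argparse.ArgumentParser(
        description="Training and evaluation options (illustrative sketch)")
    # Flags that appear in the commands above:
    parser.add_argument("--train", action="store_true", help="run a training session")
    parser.add_argument("--eval", action="store_true", help="evaluate a trained model")
    parser.add_argument("--eval_dir", type=str, help="path to the checkpoint to evaluate")
    return parser

# args = get_parser().parse_args()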
Pre-trained models are available in the trained_sessions folder. Within each session, the trained_model folder contains the trained model. Relevant information about the training session is accessible using TensorBoard. To run TensorBoard, run the following command:
tensorboard --logdir=trained_sessions/$MODEL/$SESSION
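For example, for the checkpoint used in the evaluation command above, $MODEL is moe_mlp and $SESSION is rand_100-3:
tensorboard --logdir=trained_sessions/moe_mlp/rand_100-3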