This repo contains the implementation of the CARLA autonomous driving benchmark for the paper *HPRM: High-Performance Robotic Middleware for Intelligent Autonomous Systems*. All code has been containerized with Docker for reproducibility.
- `/HPRM`: Code for building the HPRM Docker image.
- `/ROS2`: Code for building the ROS2 Docker image.
- `/carla_server_controller`: Because CARLA uses a server-client architecture, `carla_server_controller` serves as a utility tool to control the startup and shutdown of the CARLA server.
- `/models`: Model weights.
  - `ckpt_11833344.pth` for the autonomous driving algorithm.
  - `yolov5n.pt` for the YOLO object detection algorithm.
- `/CARLA_0.9.15_PythonAPI.tar.gz` for the CARLA PythonAPI bindings.
- Install the latest version of Lingua Franca: `curl -Ls https://install.lf-lang.org | bash -s cli`
- Install and use the desired serializer by following the instructions here.
- Install the CARLA simulator version 0.9.15. Because CARLA uses a server-client architecture, the CARLA server can be installed anywhere (Windows/Linux) as long as a network connection can be established between the CARLA simulator and the HPRM or ROS2 Docker container. The following setup assumes that CARLA is installed on the host machine and a Docker container is spawned from the host, but other setups can be configured by changing the Docker network configuration.
- Copy the CARLA server controller in `carla_server_controller`. This is a Python script that starts a web server which receives commands from the Docker container to orchestrate startup/shutdown of the CARLA server. The script should be installed on the same machine as the CARLA simulator.
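The controller's internals are not shown here, but conceptually it is a small web service wrapping a CARLA subprocess. The sketch below is a hypothetical, dependency-free approximation (the repository's script uses Flask; the `/start` and `/stop` routes, the placeholder CARLA path, and the process-handling details are all assumptions, not the actual implementation):

```python
# Hypothetical sketch of a CARLA server controller.
# The real carla_server_controller.py uses Flask; this stdlib version
# only illustrates the idea: HTTP routes that start/stop a CARLA process.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer


class ControllerState:
    carla_path = "/opt/carla/CarlaUE4.sh"  # placeholder; set from --carlapath
    process = None


class ControllerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/start":
            # Launch CARLA only if it is not already running.
            if ControllerState.process is None or ControllerState.process.poll() is not None:
                ControllerState.process = subprocess.Popen([ControllerState.carla_path])
            body = b"started"
        elif self.path == "/stop":
            # Terminate the CARLA process if one is running.
            if ControllerState.process is not None and ControllerState.process.poll() is None:
                ControllerState.process.terminate()
                ControllerState.process.wait()
            body = b"stopped"
        else:
            body = b"unknown"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port=2010):
    """Listen on all interfaces so the Docker container can reach the controller."""
    HTTPServer(("0.0.0.0", port), ControllerHandler).serve_forever()
```

Listening on `0.0.0.0` (rather than `127.0.0.1` only) is what lets the Docker container reach the controller over the host network.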
- Install Flask: `pip3 install flask`
- Run the server controller: `python3 carla_server_controller.py --carlapath=<path to CARLA executable>`. If CARLA is installed on Windows, modify `run_controller.bat` with your own CARLA install path and double-click the `.bat` file to run the server controller.
- When the script starts up, it will output something like `* Running on http://127.0.0.1:2010` and `* Running on http://198.18.0.1:2010`. Copy the second URL; it will be used to connect the Docker container to the server controller. The following setup uses http://198.18.0.1:2010 as the server URL; substitute your own network URL.
- Pull the Docker image from Docker Hub: `docker pull depetrol/hprm`
- Run a Docker container from the image: `docker run -it --gpus=all --shm-size 12g -v ./logs:/workspace/logs --net=host depetrol/hprm`
  This will do the following:
  - Start a bash shell in interactive mode, with access to all GPUs (if the drivers are configured correctly).
  - Set the container's shared memory (`/dev/shm`) size to 12 GB.
  - Mount the host's `./logs` directory at `/workspace/logs` in the container.
  - Share the host's network stack with the container, which allows the container to connect to the CARLA server and the server controller.
- Check that the Docker container can connect to the CARLA server controller with the provided script: `python server_test.py --server_controller_url http://198.18.0.1:2010`
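The essence of such a connectivity check is a single HTTP request against the controller URL. A minimal sketch of the idea (this is not the repository's `server_test.py`, whose exact behavior is not shown here):

```python
# Hypothetical connectivity check against the server controller URL.
import urllib.error
import urllib.request


def can_reach(url: str, timeout: float = 5.0) -> bool:
    """Return True if an HTTP request to the controller URL gets any response."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server answered (even with an error status), so it is reachable.
        return True
    except (urllib.error.URLError, OSError):
        return False


# Example: can_reach("http://198.18.0.1:2010")
```

If this fails, the usual suspects are the container's network mode (`--net=host`), a firewall on the host, or the controller binding to `127.0.0.1` only.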
- Modify the config file with the provided script so it points at your CARLA environment: `python modify_config.py --carla_ip="198.18.0.1" --server_controller_url="http://198.18.0.1:2010" --config_file="./config/carla.yaml"`
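Conceptually, this step just rewrites two values in the YAML config. A rough stdlib-only sketch of that idea (the key names `carla_ip` and `server_controller_url` and the regex-based rewrite are assumptions; the repository's `modify_config.py` may work differently):

```python
# Hypothetical sketch: rewrite top-level keys in a YAML config in place,
# using a regex instead of a YAML parser to stay dependency-free.
import re
from pathlib import Path


def set_yaml_value(text: str, key: str, value: str) -> str:
    """Replace the value of a top-level `key: value` line in YAML text."""
    pattern = re.compile(rf"^({re.escape(key)}\s*:\s*).*$", re.MULTILINE)
    return pattern.sub(rf"\g<1>{value}", text)


def update_config(path: str, carla_ip: str, controller_url: str) -> None:
    cfg = Path(path)
    text = cfg.read_text()
    text = set_yaml_value(text, "carla_ip", f'"{carla_ip}"')
    text = set_yaml_value(text, "server_controller_url", f'"{controller_url}"')
    cfg.write_text(text)
```

A real implementation would more likely load and dump the file with a YAML library (e.g. PyYAML) so nested keys and comments are handled predictably.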
- Compile and run HPRM:
  - Centralized version: `lfc carla_centralized.lf && ./bin/carla_centralized`
  - Decentralized version: `lfc carla_decentralized.lf && ./bin/carla_decentralized`
- Pull the Docker image from Docker Hub: `docker pull depetrol/ros2`
- Run a Docker container from the image: `docker run -it --gpus=all --shm-size 12g -v ./logs:/workspace/logs --net=host depetrol/ros2`
- Check that the Docker container can connect to the CARLA server controller with the provided script: `python server_test.py --server_controller_url http://198.18.0.1:2010`
- Modify the config file with the provided script: `python modify_config.py --carla_ip="198.18.0.1" --server_controller_url="http://198.18.0.1:2010" --config_file="./src/carla_sim/carla_sim/config/carla.yaml"`
- Build the ROS2 implementation: `colcon build --symlink-install && python disable_checker.py`
- Because ROS2 needs a new terminal to start each node, start tmux so we can open multiple windows: `tmux`
-
Create three windows with
tmux split-window -h
, and nevigate between them withCtrl B
thenleft arrow / right arrow
key. -
- Start and run ROS2 with the following steps. The startup order matters in this case: the CARLA node must be started last because it starts publishing messages to the other two nodes. After starting each node, wait until it outputs `=== ... Ready ===`, which means the node is ready to accept messages.
  - Start the YOLO object detection node: `source install/setup.bash && ros2 run carla_sim yolo`
  - Start the PPO autonomous driving node: `source install/setup.bash && ros2 run carla_sim agent`
  - Start the FUSION node: `source install/setup.bash && ros2 run carla_sim fusion`
  - Start the CARLA simulation node: `source install/setup.bash && ros2 run carla_sim carla`
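The "wait for the Ready line" discipline above can be automated. This is a hedged sketch, not part of the repository: the `start_and_wait_ready` helper is hypothetical, and the commented node commands are taken from the steps above (a real launcher would also need the `install/setup.bash` environment sourced before running them):

```python
# Hypothetical launcher: spawn a node and block until it prints its Ready line.
import subprocess


def start_and_wait_ready(cmd: list[str], ready_marker: str = "Ready ===") -> subprocess.Popen:
    """Spawn a process and block until a stdout line containing ready_marker appears."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        print(line, end="")
        if ready_marker in line:
            return proc  # node is ready; leave it running
    raise RuntimeError(f"{cmd} exited before printing {ready_marker!r}")


# Order matters: the CARLA node must come last, since it starts publishing.
# node_cmds = [
#     ["ros2", "run", "carla_sim", "yolo"],
#     ["ros2", "run", "carla_sim", "agent"],
#     ["ros2", "run", "carla_sim", "fusion"],
#     ["ros2", "run", "carla_sim", "carla"],
# ]
# procs = [start_and_wait_ready(c) for c in node_cmds]
```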
This section is for building the Docker images from source code.
First download `CARLA_0.9.15_PythonAPI.tar.gz` from the Release section of this GitHub repository and put it in the repo root directory (alongside this README.md) for the CARLA PythonAPI bindings.
- HPRM: `docker build -f HPRM.dockerfile -t depetrol/hprm:latest .`
- ROS2: `docker build -f ROS2.dockerfile -t depetrol/ros2:latest .`
Thanks to the authors of Model-Based Imitation Learning for Urban Driving for providing an implementation of the PPO autonomous driving algorithm and the corresponding configuration code. We modified the code to work with CARLA 0.9.15.
Thanks to the authors of End-to-End Urban Driving by Imitating a Reinforcement Learning Coach for providing an easy-to-use gym wrapper around CARLA, as well as the RL-expert PPO autonomous driving algorithm.