We present a novel formulation to reliably navigate a ground robot in uneven outdoor environments. Our hybrid architecture combines the intermediate output of an attention-based DRL network with the input elevation data to obtain a cost-map for local navigation. We generate locally least-cost waypoints on this cost-map and integrate our approach with an existing DRL method that computes dynamically feasible robot velocities.
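As a rough illustration of this fusion step (this is a minimal sketch, not the network code from the paper; the function names and the element-wise fusion rule are assumptions), an attention mask can be used to weight local elevation gradients into a cost-map, on which the least-cost candidate waypoint is picked:

```python
import numpy as np

def fuse_costmap(attention_mask, elevation):
    """Hypothetical sketch: weight elevation-gradient cost by the
    DRL network's attention mask to form a navigation cost-map."""
    # Surface gradient magnitude approximates traversability cost.
    gy, gx = np.gradient(elevation)
    gradient_cost = np.hypot(gx, gy)
    # Normalize both terms to [0, 1] before fusing.
    gradient_cost = gradient_cost / (gradient_cost.max() + 1e-6)
    attention = attention_mask / (attention_mask.max() + 1e-6)
    return attention * gradient_cost

def least_cost_waypoint(costmap, candidates):
    """Pick the candidate cell (row, col) with the lowest cost."""
    return min(candidates, key=lambda rc: costmap[rc])

# Toy usage with random terrain and a uniform attention mask.
elevation = np.random.rand(40, 40)
attention = np.ones((40, 40))
costmap = fuse_costmap(attention, elevation)
print(least_cost_waypoint(costmap, [(5, 5), (10, 30), (35, 12)]))
```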
A video summary and demonstrations of the system can be found here.
This implementation builds on the Robot Operating System (ROS Melodic).
conda create -n terp python=3.7
conda activate terp
conda install pytorch cudatoolkit -c pytorch
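To confirm that PyTorch can see the GPU inside the new environment, you can run a quick check (standard PyTorch calls):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"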
Go to grid_map/grid_map_demos/launch/ and modify the octomap_to_gridmap_demo.launch file as follows:
<node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
  <param name="resolution" value="0.5" />
  <param name="frame_id" type="string" value="husky/base" />
  <param name="base_frame_id" type="string" value="husky/base"/>
  <!-- maximum range to integrate (speedup!) -->
  <param name="sensor_model/max_range" value="10" />
  <param name="latch" value="false"/>
  <!-- data source to integrate (PointCloud2) -->
  <remap from="cloud_in" to="/husky/lidar_points" />
</node>
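For reference, the resulting grid map can be consumed from Python. The sketch below assumes the demo publishes a grid_map_msgs/GridMap containing an elevation layer on a topic named /grid_map (verify both with rostopic list) and extracts that layer as a numpy array:

```python
import numpy as np
import rospy
from grid_map_msgs.msg import GridMap

def on_grid_map(msg):
    # Find the elevation layer among the published layers.
    idx = msg.layers.index('elevation')
    layer = msg.data[idx]  # std_msgs/Float32MultiArray
    # Each layer is serialized as a 2-D multi-array; the dim sizes give
    # the grid shape (ordering follows the dim labels, so verify
    # 'column_index'/'row_index' on your setup).
    d0, d1 = layer.layout.dim[0].size, layer.layout.dim[1].size
    elevation = np.asarray(layer.data, dtype=np.float32).reshape(d0, d1)
    rospy.loginfo('elevation %s: min %.2f m, max %.2f m',
                  elevation.shape, np.nanmin(elevation), np.nanmax(elevation))

rospy.init_node('elevation_listener')
rospy.Subscriber('/grid_map', GridMap, on_grid_map, queue_size=1)
rospy.spin()
```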
To build from source, clone the latest version of this repository into your catkin workspace and compile the package using:
cd catkin_ws/src
git clone https://github.com/kasunweerkoon/terp.git
cd ../
catkin_make
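After the build completes, source the workspace in each new terminal so that ROS can locate the terp package:

source devel/setup.bash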
Run each of the following in a separate terminal.

1. Launch the Unity simulation with the Husky robot:
roslaunch arl_unity_ros_ground simulator_with_husky.launch
2. Start the octomap-to-gridmap conversion to generate elevation data:
roslaunch grid_map_demos octomap_to_gridmap_demo.launch
3. Start the goal publisher:
rosrun terp dwa_pozyx_goals.py
4. In a terminal with the conda environment activated, run the local waypoint planner:
conda activate terp
rosrun terp local_waypoint_planner.py
5. In another terminal with the conda environment activated, run the DRL network:
conda activate terp
rosrun terp main_ddpg.py
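The planner and DRL nodes ultimately drive the robot through ROS velocity commands. As a minimal, self-contained illustration of that interface (the /husky/cmd_vel topic name is an assumption for this simulator; verify with rostopic list), a Twist command can be published like so:

```python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('velocity_probe')
# Topic name is an assumption for the Unity Husky; check `rostopic list`.
pub = rospy.Publisher('/husky/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)  # 10 Hz command rate
cmd = Twist()
cmd.linear.x = 0.3   # m/s forward
cmd.angular.z = 0.1  # rad/s yaw
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```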
If you use any of this code, please cite our TERP paper:
@INPROCEEDINGS{9812238,
  author={Weerakoon, Kasun and Sathyamoorthy, Adarsh Jagan and Patel, Utsav and Manocha, Dinesh},
  booktitle={2022 International Conference on Robotics and Automation (ICRA)},
  title={TERP: Reliable Planning in Uneven Outdoor Environments using Deep Reinforcement Learning},
  year={2022},
  pages={9447-9453},
  doi={10.1109/ICRA46639.2022.9812238}}