Merge pull request #2 from roahmlab/develop
Push non-buggy version & start semantic versioning
BuildingAtom authored Jul 20, 2022
2 parents 51c5ac7 + 86415ee commit caea37f
Showing 63 changed files with 1,926 additions and 3,516 deletions.
11 changes: 10 additions & 1 deletion mesh_tools/readme.md
@@ -1 +1,10 @@
This version captures and modifies from commit cab2a59fc8abdcd022748d07d5c8ed6358ae3e5b from https://github.com/uos/mesh_tools.
This version captures and modifies from commit `cab2a59fc8abdcd022748d07d5c8ed6358ae3e5b` from https://github.com/uos/mesh_tools.

If running on a system that ships an out-of-date version of mesh_tools, you can also clone this specific version into `catkin_ws/src` and add a `CATKIN_IGNORE` file to the resulting `catkin_ws/src/mesh_tools/rviz_map_plugin/` folder to ensure this version of rviz_map_plugin is built instead of the system-provided one.

Step-by-step instructions are as follows:

1. `cd catkin_ws/src`
2. `git clone -n https://github.com/uos/mesh_tools`
3. `git -C mesh_tools checkout cab2a59fc8abdcd022748d07d5c8ed6358ae3e5b`
4. `touch mesh_tools/rviz_map_plugin/CATKIN_IGNORE`
46 changes: 28 additions & 18 deletions readme.md
@@ -1,21 +1,22 @@
# sel_map (Semantic ELevation Map)
**Authors:** Parker Ewen ([email protected]), Adam Li ([email protected]), Yuxin Chen ([email protected]), Steven Hong ([email protected]), and Ram Vasudevan ([email protected]).

- All authors affiliated with the Robotics Institute and department of Mechanical Engineering of the University of Michigan, 2505 Hayward Street, Ann Arbor, Michigan, USA.
- All authors are affiliated with the Robotics Institute and department of Mechanical Engineering of the University of Michigan, 2505 Hayward Street, Ann Arbor, Michigan, USA.
- This work is supported by the Ford Motor Company via the Ford-UM Alliance under award N022977, by the Office of Naval Research under award number N00014-18-1-2575, and in part by the National Science Foundation under Grant 1751093.
- `sel_map` was developed in [Robotics and Optimization for Analysis of Human Motion (ROAHM) Lab](http://www.roahmlab.com/) at University of Michigan - Ann Arbor.

## Introduction
<img align="right" height="230" src="/figures/main.png"/>
Semantic ELevation (SEL) map is a semantic Bayesian inferencing framework for real-time elevation mapping and terrain property estimation. The package takes the inputs from RGB-D cameras and robot poses, and recursively estimates both the terrain surface profile and a probability distribution for terrain properties. The package can be deplolyed on a physical legged robotic platform in both indoor and outdoor environments. The semantic networks used in this package are modular and interchangeable, better performance can be achieved using the specific trained networks for the corresponding applications. This package provides several examples such as ResNet-50.

Semantic ELevation (SEL) map is a semantic Bayesian inferencing framework for real-time elevation mapping and terrain property estimation. The package takes inputs from RGB-D cameras and robot poses, and recursively estimates both the terrain surface profile and a probability distribution over terrain properties. The package can be deployed on a physical legged robotic platform in both indoor and outdoor environments. The semantic networks used in this package are modular and interchangeable; better performance can be achieved by using networks trained for the corresponding application. This package provides several examples, such as ResNet-50. The dataset for terrain friction can be found in the [terrain_friction_dataset](https://github.com/roahmlab/terrain_friction_dataset) repository. The project website is [here](https://roahmlab.github.io/sel_map/).


<img height="270" src="/figures/flow_diagram.png"/>

<img height="230" src="/figures/terrain_class.png"/> <img height="230" src="/figures/terrain_property.png"/>
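The recursive semantic estimation described in the introduction can be loosely illustrated with a per-cell categorical Bayes update (a minimal sketch under simplifying assumptions, not sel_map's actual measurement model; the three terrain classes and uniform prior below are hypothetical):

```python
def bayes_update(prior, likelihood):
    """One recursive Bayes step: fuse a per-class semantic likelihood
    (e.g. softmax scores from a segmentation network) into a per-cell
    categorical belief over terrain classes, then renormalize."""
    posterior = [p * l for p, l in zip(prior, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Uniform prior over three hypothetical classes {grass, dirt, rock};
# each camera frame contributes one likelihood vector.
belief = [1 / 3, 1 / 3, 1 / 3]
for scores in ([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]):
    belief = bayes_update(belief, scores)
print(belief[0] > belief[1] > belief[2])  # True: evidence concentrates on class 0
```

Repeated frames sharpen the belief, which is the sense in which the map's terrain-property estimate is recursive.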

## Dependencies
The package is built on Ubuntu 20.04 with ROS Noetic Distribution, and the algorithms are compiled with C++11 and Python3.

`sel_map` has the following required dependencies:
* mesh-msgs (version 1.1.0 or higher)
@@ -35,22 +36,20 @@ curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo ap
sudo apt update
sudo apt install ros-noetic-desktop
echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
source ~/.bashr
source ~/.bashrc
```

The following commands can be used to install the dependencies, and follow the [linked instruction](https://catkin-tools.readthedocs.io/en/latest/installing.html) to install `catkin_tools`.
```
sudo apt install ros-noetic-mesh-msgs opencl-headers ros-noetic-hdf5-map-io python3-pip ros-noetic-cv-bridge git ros-noetic-robot-state-publisher ros-noetic-xacro ros-noetic-rviz
```

Make sure the system is setup with CUDA 11 following the [CUDA download manuals](https://developer.nvidia.com/cuda-downloads). One example is shown below.
Make sure the system is setup with an appropriate version of CUDA 11+ following your preferred method. One example is shown below.
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo apt install ./cuda-keyring_1.0-1_all.deb
sudo apt update
sudo apt install cuda
```

Install [pytorch](https://pytorch.org) with CUDA 11. One example is shown below.
@@ -62,7 +61,7 @@ Install the required python packages using the following command in `sel_map` folder
```
pip install -r requirements.txt
```
(Note: if you want to install them globally, add `sudo` previledge.)
(Note: if you want to install them globally, add `sudo` privilege; however, this is not recommended.)

## Building

@@ -95,6 +94,10 @@ catkin build
source devel/setup.bash
```

**NOTE:** The included [pytorch-encoding](https://github.com/zhanghang1989/PyTorch-Encoding) wrapper takes a while to compile. If you do not want to compile this, or if for some reason you cannot, you can also disable the package by adding a `CATKIN_IGNORE` file to the wrapper package directory. For example:
```
touch src/sel_map/sel_map_segmentation/pytorch_encoding_wrapper/CATKIN_IGNORE
```

## Usage
Before running the package, make sure you have built it successfully and sourced the workspace.
@@ -116,12 +119,12 @@ roslaunch sel_map spot_sel.launch semseg_config:=Encoding_ResNet50_PContext_full
```
(Note: the `terrain_properties` argument should agree with the network model: PContext with `pascal_context`, and ADE with `csail_semseg_properties`.)

- If you have `CUDA out of memory` error, you can either try a more powerful laptop, or just run the elevation mapping without sementic segmentation.
- If you have `CUDA out of memory` error, you can either try a more powerful laptop, or just run the elevation mapping without semantic segmentation.
```
roslaunch sel_map spot_sel.launch semseg_config:=Bypass.yaml
```

- If want to show the terrain class map instead of terrain properties, specify the `colorscale` argument.
- If you want to show the terrain class map instead of terrain properties, specify the `colorscale` argument.
```
roslaunch sel_map spot_sel.launch colorscale:=use_properties.yaml
```
@@ -132,10 +135,17 @@ roslaunch sel_map spot_sel.launch colorscale:=use_properties.yaml

`sel_map` is released under an [MIT license](https://github.com/roahmlab/sel_map/blob/main/LICENSE). For a list of all code/library dependencies, please check the dependency section. For a closed-source version of `sel_map` for commercial purposes, please contact the authors.

An overview of the theoretical and implementation details has been published in [to_be_added] If you use `sel_map` in an academic work, please cite using the following BibTex entry:
An overview of the theoretical and implementation details has been published in [IEEE Robotics and Automation Letters](https://ieeexplore.ieee.org/document/9792203) and IEEE International Conference on Intelligent Robots and Systems (IROS 2022). If you use `sel_map` in an academic work, please cite using the following BibTex entry:


@article{to_be_added,
title={to_be_added}
}
@article{9792203,
  author={Ewen, Parker and Li, Adam and Chen, Yuxin and Hong, Steven and Vasudevan, Ram},
  journal={IEEE Robotics and Automation Letters},
  title={These Maps are Made for Walking: Real-Time Terrain Property Estimation for Mobile Robots},
  year={2022},
  volume={7},
  number={3},
  pages={7083-7090},
  doi={10.1109/LRA.2022.3180439}}


1 change: 1 addition & 0 deletions requirements.txt
@@ -6,3 +6,4 @@ numpy>=1.18.4
opencv-python>=4.5.3.56
Pillow>=8.3.2
scipy>=1.5.4
open3d>=0.13.0
53 changes: 27 additions & 26 deletions sel_map/config/colorscales/default.yaml
@@ -1,28 +1,29 @@
type: linear_ends
stops: [0.2, 0.8]
values:
  - 252,197,192
  - 250,159,181
  - 247,104,161
  - 221,52,151
  - 174,1,126
  - 122,1,119
  - 73,0,106
unknown: 120,120,120
# optionally, also specify the absolute ends
ends:
  zero: 255,255,255
  one: 0,0,0
colorscale:
  type: linear_ends
  stops: [0.2, 0.8]
  values:
    - 252,197,192
    - 250,159,181
    - 247,104,161
    - 221,52,151
    - 174,1,126
    - 122,1,119
    - 73,0,106
  unknown: 120,120,120
  # optionally, also specify the absolute ends
  ends:
    zero: 255,255,255
    one: 0,0,0

# this can also be defined as follows
#type: custom
#stops: [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
# - 252,197,192
# - 250,159,181
# - 247,104,161
# - 221,52,151
# - 174,1,126
# - 122,1,119
# - 73,0,106
#unknown: 120,120,120
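For intuition, a `linear_ends` colorscale like the one in this file can be evaluated roughly as follows (a sketch under assumptions: values below the first stop clamp to the first color, values above the last stop clamp to the last, with linear interpolation between; `linear_ends_color` is a hypothetical helper, not the plugin's actual code):

```python
def lerp(a, b, t):
    # Component-wise linear interpolation between two RGB tuples.
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

def linear_ends_color(value, stops, colors):
    # Map a scalar in [0, 1] onto the color list, clamped at the stops.
    lo, hi = stops
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    pos = t * (len(colors) - 1)
    i = min(int(pos), len(colors) - 2)
    return lerp(colors[i], colors[i + 1], pos - i)

colors = [(252, 197, 192), (250, 159, 181), (247, 104, 161), (221, 52, 151),
          (174, 1, 126), (122, 1, 119), (73, 0, 106)]
print(linear_ends_color(0.5, [0.2, 0.8], colors))  # (221, 52, 151), the middle color
```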

3 changes: 2 additions & 1 deletion sel_map/config/colorscales/use_properties.yaml
@@ -1,2 +1,3 @@
type: bypass
colorscale:
  type: bypass

23 changes: 23 additions & 0 deletions sel_map/config/robots/anymal.yaml
@@ -0,0 +1,23 @@

# Specify what the robot base is and what the world base is
world_base: odom
robot_base: base

# TODO
num_cameras: 1
update_policy: fifo
# Configurable section
cameras_registered:
  realsense:
    image_rectified: /depth_camera_front/color/image_decomp
    depth_registered: /depth_camera_front/aligned_depth_to_color/image_decomp
    camera_info: /depth_camera_front/aligned_depth_to_color/camera_info
    # By not specifying these, we look up the transform frame instead. (They do not need to be specified.)
    pose_with_covariance: ''
    pose: ''
# TODO: make the below work
#cameras_raw:

# Define intermediate joints needed, but identify them by
# child link to keep uniqueness.
joints: ''
23 changes: 23 additions & 0 deletions sel_map/config/robots/carla_data.yaml
@@ -0,0 +1,23 @@

# Specify what the robot base is and what the world base is
world_base: odom
robot_base: body

# TODO
num_cameras: 1
update_policy: fifo
# Configurable section
cameras_registered:
  realsense:
    image_rectified: /camera/color/image_raw
    depth_registered: /camera/depth/image_raw
    camera_info: /camera/depth/camera_info
    # By not specifying these, we look up the transform frame instead. (They do not need to be specified.)
    pose_with_covariance: ''
    pose: ''
# TODO: make the below work
#cameras_raw:

# Define intermediate joints needed, but identify them by
# child link to keep uniqueness.
joints: ''
4 changes: 2 additions & 2 deletions sel_map/config/robots/spot.yaml
@@ -9,8 +9,8 @@ update_policy: fifo
# Configurable section
cameras_registered:
  realsense:
    image_rectified: /camera/color/image_raw
    depth_registered: /camera/aligned_depth_to_color/image_raw
    image_rectified: /camera/color/image_decomp
    depth_registered: /camera/aligned_depth_to_color/image_decomp
    camera_info: /camera/aligned_depth_to_color/camera_info
    # By not specifying these, we look up the transform frame instead. (They do not need to be specified.)
    pose_with_covariance: ''
71 changes: 71 additions & 0 deletions sel_map/config/robots/spot_bag.yaml
@@ -0,0 +1,71 @@

# Specify what the robot base is and what the world base is
world_base: odom
robot_base: body

# TODO
num_cameras: 1
update_policy: fifo
# Configurable section
cameras_registered:
  realsense:
    image_rectified: /camera/color/image_raw
    depth_registered: /camera/aligned_depth_to_color/image_raw
    camera_info: /camera/aligned_depth_to_color/camera_info
    # By not specifying these, we look up the transform frame instead. (They do not need to be specified.)
    pose_with_covariance: ''
    pose: ''
# TODO: make the below work
#cameras_raw:

# Define intermediate joints needed, but identify them by
# child link to keep uniqueness.
joints:
  # # Angle for hallway test
  # camera_link:
  #   parent: front_rail
  #   translation:
  #     x: 0.225
  #     y: 0.04
  #     z: 0.01
  #   rotation:
  #     x: 0
  #     y: 0.2292004
  #     z: 0
  #     w: 0.9733793
  # # Angle for the snow test
  # camera_link:
  #   parent: front_rail
  #   translation:
  #     x: 0.225
  #     y: 0.04
  #     z: 0.01
  #   rotation:
  #     x: 0
  #     y: 0.3007058
  #     z: 0
  #     w: 0.953717
  # Angle for the new main fig (19.5)
  # camera_link:
  #   parent: front_rail
  #   translation:
  #     x: 0.225
  #     y: 0.0175
  #     z: 0.037
  #   rotation:
  #     x: 0
  #     y: 0.1521234
  #     z: 0
  #     w: 0.9883615
  # Angle for the new bracket
  camera_link:
    parent: front_rail
    translation:
      x: 0.264764
      y: 0.0175
      z: 0.036
    rotation:
      x: 0
      y: 0.258819
      z: 0
      w: 0.9659258
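The `rotation` entries in this file are pure-pitch quaternions (rotation about the camera's y-axis). As a quick check of where such numbers come from (assuming the x and z components are zero, as above; `pitch_quaternion` is an illustrative helper, not part of the package):

```python
import math

def pitch_quaternion(pitch_deg):
    # Quaternion (x, y, z, w) for a rotation of pitch_deg about the y-axis.
    half = math.radians(pitch_deg) / 2.0
    return (0.0, math.sin(half), 0.0, math.cos(half))

# The active "new bracket" entry (y = 0.258819, w = 0.9659258) is a 30-degree pitch.
x, y, z, w = pitch_quaternion(30.0)
print(round(y, 6), round(w, 7))  # 0.258819 0.9659258
```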
