This ongoing project is Semantic SLAM using ROS, ORB SLAM and PSPNet101. It is intended for semantic understanding and navigation in autonomous robotics.
The visualized semantic map with topological information is now available: yellow represents buildings and constructions, green represents vegetation, blue represents vehicles, and red represents roads and sidewalks. The cubes mark the ambiguous building locations and the green line is the trajectory. You can visualize this information using Rviz.
You can also get the semantic topological map, which contains only the ambiguous building locations and the trajectory.
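As a rough sketch, one way to bring up the visualization (the display and topic setup is an assumption; the actual topic and frame names depend on the running nodes):

```bash
# with roscore and the SLAM / map nodes running, start Rviz;
# then set the Fixed Frame to the map frame and add Marker / Path
# displays for the semantic map and trajectory topics
rosrun rviz rviz
```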
The whole ROS communication structure of the project is shown below.
If you are going to use our work in your research, please use the citation below.
@INPROCEEDINGS{zhao2019slam,
  author={Z. {Zhao} and Y. {Mao} and Y. {Ding} and P. {Ren} and N. {Zheng}},
  booktitle={2019 2nd China Symposium on Cognitive Computing and Hybrid Intelligence (CCHI)},
  title={Visual-Based Semantic SLAM with Landmarks for Large-Scale Outdoor Environment},
  year={2019},
  pages={149-154},
  keywords={Semantic SLAM;Visual SLAM;Large-Scale SLAM;Semantic Segmentation;Landmark-level Semantic Mapping},
  doi={10.1109/CCHI.2019.8901910},
  month={Sep.},
}
The system has been updated to the latest version. I have merged the semantic fusion module into the SLAM system to achieve real-time fusion and better loop-closing performance. The map saving, map loading and localization modes have been completed. To run the new version of the system, please run the shell script "run_C.sh". You are welcome to open issues.
I have saved the old version of the system in the branch "version0.0.1".
Basic prerequisites:
- ROS kinetic
- Python 2.7
- scipy
- sklearn
- numpy (must be downgraded to 1.16.1)
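A minimal sketch of installing these Python packages with pip (assuming pip points at Python 2.7):

```bash
# sklearn is distributed on pip as scikit-learn;
# numpy is pinned to 1.16.1 as noted above
pip install scipy scikit-learn numpy==1.16.1
```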
To run PSPNet in ROS, you have to install the following packages:
- Tensorflow-gpu >= 1.4.0 (1.4.0 is highly recommended)
- Keras 2.2.2
You can install TensorFlow following this tutorial:
https://www.tensorflow.org/install/pip?hl=zh-cn&lang=python2
(virtualenv is recommended; my CUDA version is 10.0, so I chose tensorflow-gpu==1.13.1 and keras==2.2.2)
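For example, a virtualenv-based install matching the CUDA 10.0 setup above (the environment path ~/tf_venv is just an example name):

```bash
# create and activate a Python 2.7 virtual environment
virtualenv -p python2.7 ~/tf_venv
source ~/tf_venv/bin/activate
# TensorFlow / Keras versions matching CUDA 10.0, as noted above
pip install tensorflow-gpu==1.13.1 keras==2.2.2
```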
To run ORB_SLAM2 in ROS, you have to install the following packages:
- C++11 or C++0x compiler
- Pangolin
- OpenCV: at least 2.4.3 required. Tested with OpenCV 2.4.11 and OpenCV 3.2.
- Eigen3: at least 3.1.0 required.
- DBoW2 and g2o (included in the Thirdparty folder)
catkin_ws/
src/
map_generator/
CMakeLists.txt
src/
cluster.py
map_engine.py
Third_Part/
ORB_SLAM/
PSPNet_Keras_tensorflow/
test/
result/
.gitignore
README.md
run.sh
First you have to compile /catkin_ws using catkin_make to make sure that the custom ROS messages can be used. Then write "source YOUR_PATH/catkin_ws/devel/setup.bash" into your ~/.bashrc file.
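A minimal sketch of this step (YOUR_PATH is a placeholder for your own checkout path):

```bash
cd YOUR_PATH/catkin_ws
catkin_make
# make the generated messages visible in every new shell
echo "source YOUR_PATH/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
```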
First read Third_Part/README.md; once you have read it, you can basically skip the next paragraph.
Then you have to read the README files in /Third_Part/ORB_SLAM and /Third_Part/PSPNet_Keras_tensorflow and follow their instructions to make sure that ORB SLAM and PSPNet work correctly.
I have uploaded the raw KITTI dataset to Baidu Cloud: https://pan.baidu.com/s/1xink8C1mjpVoQwzksZs_Pw (extraction code: k1rf).
Follow this guide to turn it into a rosbag dataset: https://gitee.com/taiping.z.git/image2rosbag_KITTIodometry
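Before running the system, you can sanity-check the generated bag with rosbag (the bag filename here is just an example):

```bash
# list the topics, message counts and duration of the bag
rosbag info kitti_odometry_00.bag
```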
You can then run the script run_C.sh to use the system. You have to provide images on the rostopic /camera/image_raw. Remember to modify the paths in run_C.sh first:
cd Semantic_SLAM/
chmod +x run_C.sh
./run_C.sh
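Once the system is up, play the KITTI bag from another terminal. A sketch, assuming the bag publishes its images on /kitti/image_raw (the actual topic name depends on how you built the bag):

```bash
# remap the bag's image topic to the one the system subscribes to
rosbag play kitti_odometry_00.bag /kitti/image_raw:=/camera/image_raw
```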
- [x] Publish the point cloud information
- [x] Encode the point cloud and visual descriptors with semantic information
- [x] Cluster the point clouds into a single location point
- [x] Visualize the result
- [x] Run on the KITTI dataset
- [ ] Run on the TUM dataset
- [x] Use C++ for the ROS node
- [x] Add localization mode
- [x] Add GPS fusion
- [ ] Run in a simulation environment
- [ ] Benchmark against ground truth
- [ ] Run on the XJTU campus
- [x] Connect all the elements into a single project
- [ ] Accelerate inference
The state-of-the-art methodologies are from Raul Mur-Artal's team for ORB_SLAM and Hengshuang Zhao's team for PSPNet. Thanks for their great work.
The Keras implementation of PSPNet is provided by VladKry. Thanks for their team's work.