This repository contains a PyTorch implementation of PointSeg:
```
@article{Wang2018PointSegRS,
  title={PointSeg: Real-Time Semantic Segmentation Based on 3D LiDAR Point Cloud},
  author={Yuan Wang and Tianyue Shi and Peng Yun and Lei Tai and Ming Liu},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.06288}
}
```
- PyTorch >= 1.4
- Open3D [optional, for visualization]
- TensorBoard [optional, for inspecting the training outputs]
The dataset used for training PointSeg is the same as the one used for SqueezeSeg, which can be downloaded from here.
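The SqueezeSeg `lidar_2d` samples are stored as NumPy arrays of shape `(64, 512, 6)`; the channel layout assumed below (x, y, z, intensity, range, label) follows the SqueezeSeg data description. A minimal sketch of unpacking one scan (a zero array stands in for a real file loaded with `np.load`):

```python
import numpy as np

# Stand-in for one scan; with real data you would use e.g.:
#   scan = np.load("/path/to/Datasets/lidar_2d/2011_09_26_0001_0000.npy")
scan = np.zeros((64, 512, 6), dtype=np.float32)

# Assumed channel layout (SqueezeSeg convention):
# 0-2: x, y, z  |  3: intensity  |  4: range (depth)  |  5: class label
xyz = scan[..., :3]
intensity = scan[..., 3]
depth = scan[..., 4]
labels = scan[..., 5].astype(np.int64)

print(xyz.shape, depth.shape, labels.shape)
```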
Example:
```shell
cd PointSeg
python train.py --csv-path ImageSet/csv/ --data-path /path/to/Datasets/lidar_2d/ -c ./config.yaml -j 4 -b 16 --lr 0.01 --device cuda
```
A pretrained model is provided in the `checkpoints` folder.
```shell
cd PointSeg
python eval.py --csv-path ImageSet/csv/ --data-path /path/to/Datasets/lidar_2d/ -c ./config.yaml -j 4 -b 16 --device cuda -m checkpoints/checkpoint_30_20200502_190744.tar --ds-type=train
```
Note that during evaluation the result of each prediction is saved as a numpy file in the `test-pred` folder. Each saved numpy file consists of 5 channels (x, y, z, predicted labels, ground-truth labels).
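Given the 5-channel layout described above, a saved prediction can be inspected directly with NumPy, e.g. to compute the per-point accuracy of one scan. This is a sketch; the file name is hypothetical, and a synthetic array (with predictions set equal to the ground truth) stands in for a real file:

```python
import numpy as np

# Stand-in for one saved prediction; with real output you would use e.g.:
#   arr = np.load("test-pred/pred_000.npy")  # shape: (H, W, 5)
arr = np.zeros((64, 512, 5), dtype=np.float32)
arr[..., 3] = np.random.randint(0, 4, (64, 512))  # channel 3: predicted labels
arr[..., 4] = arr[..., 3]                         # channel 4: ground-truth labels

# Flatten the range image into a point list
xyz = arr[..., :3].reshape(-1, 3)
pred = arr[..., 3].reshape(-1).astype(np.int64)
gt = arr[..., 4].reshape(-1).astype(np.int64)

accuracy = (pred == gt).mean()
print(f"per-point accuracy: {accuracy:.3f}")
```

The `xyz` array can also be handed to Open3D (the optional dependency above) for a colored point-cloud view of the predictions.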
The results of training or evaluation can be inspected using TensorBoard. Note that one TensorBoard file for the training is already provided; see the `runs` folder.
```shell
cd PointSeg
tensorboard --logdir=runs
```