Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection (NeurIPS 2022)

Sparse2Dense transfers dense point knowledge from the dense point 3D detector (DDet) to the sparse point 3D detector (SDet). For more details, please refer to:

Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection [Paper]
Tianyu Wang, Xiaowei Hu, Zhengzhe Liu, and Chi-Wing Fu

Abstract

LiDAR-produced point clouds are the major source of data for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with dense point clouds as input and design a sparse point 3D detector (SDet) with regular point clouds as input. Importantly, we formulate a lightweight plug-in S2D module and a point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features that follow the dense 3D features in DDet. Hence, at inference time, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over state-of-the-art methods.
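To make the teacher-student idea concrete, below is a minimal, self-contained sketch of the feature densification objective. It is an illustration under assumed toy modules and shapes (`ToyBEVBackbone`, `S2DModule`, and the BEV tensor sizes are placeholders, not the repo's actual API); the full method additionally applies detection and point cloud reconstruction losses, as described in the paper.

```python
# Sketch only (not the authors' code): a dense-input teacher (DDet) supervises
# a sparse-input student (SDet) whose lightweight S2D module densifies BEV
# features in latent space. All names and shapes here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyBEVBackbone(nn.Module):
    """Stand-in for a voxelizer + backbone producing BEV feature maps."""
    def __init__(self, c=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, c, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(c, c, 3, padding=1))
    def forward(self, bev):  # bev: (B, 3, H, W) pseudo-image of the point cloud
        return self.net(bev)

class S2DModule(nn.Module):
    """Lightweight plug-in that densifies sparse BEV features."""
    def __init__(self, c=64):
        super().__init__()
        self.refine = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(c, c, 3, padding=1))
    def forward(self, feat):
        return feat + self.refine(feat)  # residual densification

ddet_backbone = ToyBEVBackbone()          # teacher, trained on dense inputs
sdet_backbone = ToyBEVBackbone()          # student, sees regular (sparse) inputs
s2d = S2DModule()

dense_bev  = torch.randn(2, 3, 128, 128)  # BEV input from the densified points
sparse_bev = torch.randn(2, 3, 128, 128)  # BEV input from the raw sparse sweep

with torch.no_grad():                     # teacher is frozen during distillation
    teacher_feat = ddet_backbone(dense_bev)

student_feat = s2d(sdet_backbone(sparse_bev))

# Feature densification loss: pull the densified student features toward the
# teacher's dense features; the full method adds detection and point cloud
# reconstruction losses on top of this term.
loss_s2d = F.mse_loss(student_feat, teacher_feat)
loss_s2d.backward()
```

At inference time only the student path (SDet backbone plus the S2D module) runs, which is why the method adds little overhead over the base detector.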

Use CenterPoint+S2D

Installation

Please refer to INSTALL to set up libraries needed for distributed training and sparse convolution.

Benchmark Evaluation and Training

Please refer to GETTING_START to prepare the data, then follow the instructions there to reproduce our detection results. All detection configurations are included in configs.
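As a rough illustration, training and evaluation follow the upstream CenterPoint conventions; treat the commands below as a sketch (CONFIG_PATH and RUN_NAME are placeholders) and defer to GETTING_START for the exact configs and flags.

```bash
# Illustrative only -- see GETTING_START for the exact commands and configs.
python -m torch.distributed.launch --nproc_per_node=8 ./tools/train.py CONFIG_PATH
python ./tools/dist_test.py CONFIG_PATH --work_dir work_dirs/RUN_NAME --checkpoint work_dirs/RUN_NAME/latest.pth
```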

Dense Object Set

The code for dense object generation still needs to be cleaned up, so we provide our generated version first. Please send us an email with your name, institute, a screenshot of the Waymo dataset registration confirmation email, and your intended usage. Please note that the Waymo Open Dataset is under a strict non-commercial license. For more details, please refer to Waymo.

TODO List

  • Clean up and release the code for dense object generation

Experimental results (trained on 20% of the Waymo Open Dataset)

Waymo Open Dataset Val set

| Model | Veh_L2 | Ped_L2 | Cyc_L2 | Overall mAPH |
| --- | --- | --- | --- | --- |
| SECOND | 59.4 | 48.0 | 55.2 | 49.7 |
| SECOND+S2D | 63.5 | 51.1 | 57.0 | 52.9 |
| CenterPoint-Pillar | 64.1 | 61.1 | 59.76 | 57.9 |
| CenterPoint-Pillar+S2D | 68.1 | 66.4 | 65.3 | 63.1 |
| CenterPoint | 65.5 | 66.3 | 66.3 | 63.78 |
| CenterPoint+S2D | 68.2 | 70.1 | 69.3 | 66.9 |

Waymo Domain Adaptation Dataset Val set

| Model | Veh_L2 mAP | Veh_L2 mAPH | Ped_L2 mAP | Ped_L2 mAPH |
| --- | --- | --- | --- | --- |
| SECOND | 42.9 | 41.2 | 9.8 | 8.5 |
| SECOND+S2D | 46.3 | 45.0 | 12.2 | 10.7 |
| CenterPoint-Pillar | 45.3 | 44.6 | 8.8 | 7.3 |
| CenterPoint-Pillar+S2D | 50.1 | 49.6 | 13.3 | 11.4 |
| CenterPoint | 48.4 | 47.9 | 21.2 | 19.8 |
| CenterPoint+S2D | 51.0 | 50.4 | 26.0 | 24.7 |

Bibtex

@inproceedings{wang2022sparse2dense,
  title={{Sparse2Dense}: Learning to Densify 3D Features for 3D Object Detection},
  author={Wang, Tianyu and Hu, Xiaowei and Liu, Zhengzhe and Fu, Chi-Wing},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022},
}

License

Sparse2Dense is released under the MIT license (see LICENSE). It is developed based on a forked version of CenterPoint and det3d. See the NOTICE for details. Note that the Waymo datasets are under non-commercial licenses.

Contact

Any questions or suggestions are welcome!

Tianyu Wang [email protected]

Acknowledgement

This project would not be possible without several great open-source codebases. We list some notable examples below.

Third-party resources

  • ONCE_Benchmark: implementation of CenterPoint on the ONCE dataset
  • CenterPoint-KITTI: reimplementation of CenterPoint on the KITTI dataset
  • AFDet: another work inspired by CenterNet that achieves good performance on the KITTI/Waymo datasets
  • mmdetection3d: CenterPoint in the mmdetection3d framework