
You can download the onnx format of trained models by clicking on the links below.

- Centerpoint: [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx)
- Centerpoint tiny: [pts_voxel_encoder_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint_tiny.onnx), [pts_backbone_neck_head_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint_tiny.onnx)
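
If you prefer the command line, the same files can also be fetched with `wget`; for example, for the `Centerpoint` model (use the `_tiny` URLs above for `Centerpoint tiny`):

```bash
# Download the two ONNX files that make up the Centerpoint model
wget https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx
wget https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx
```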

`Centerpoint` was trained on `nuScenes` (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs.
`Centerpoint tiny` was trained on `Argoverse 2` (~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.

## Training CenterPoint Model and Deploying it to Autoware

### Overview

This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository
and seamlessly deploying it within Autoware.

### Installation

#### Install prerequisites

**Step 1.** Download and install Miniconda from the [official website](https://docs.conda.io/en/latest/miniconda.html).

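One common way to do this (a minimal sketch, assuming a Linux x86_64 machine; adjust the installer name for your platform):

```bash
# Download and run the Miniconda installer (Linux x86_64 build assumed)
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
```
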
**Step 2.** Create a conda virtual environment and activate it

```bash
conda create --name train-centerpoint python=3.8 -y
conda activate train-centerpoint
```

**Step 3.** Install PyTorch

Please ensure that the PyTorch version you install is compatible with CUDA 11.6, as this is required by the current version of Autoware.

```bash
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
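
Once installed, you can sanity-check that PyTorch was built against CUDA 11.6 and can see your GPU:

```bash
# Print the PyTorch version, the CUDA version it was built with, and GPU availability
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```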

#### Install mmdetection3d

**Step 1.** Install MMEngine, MMCV, and MMDetection using MIM

```bash
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'
```

**Step 2.** Install mmdetection3d forked repository

We have introduced several valuable enhancements in our fork of the mmdetection3d repository.
Notably, we've made the PointPillars z voxel feature input optional to maintain compatibility with the original paper.
In addition, we've integrated a PyTorch-to-ONNX converter and a T4 format reader for added functionality.

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .
```

#### Use Training Repository with Docker

Alternatively, you can run mmdetection3d inside Docker. We provide a Dockerfile that builds an image containing the mmdetection3d repository and its dependencies.

Clone our fork of the mmdetection3d repository:

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
```

Build the Docker image by running the following command:

```bash
cd mmdetection3d
docker build -t mmdetection3d -f docker/Dockerfile .
```

Run the Docker container:

```bash
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```

### Preparing NuScenes dataset for training

**Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice.

**Note:** The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.

**Step 2.** Create a symbolic link to the dataset folder

```bash
ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/
```

**Step 3.** Prepare the NuScenes data by running:

```bash
cd mmdetection3d
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```

### Training CenterPoint with NuScenes Dataset

#### Prepare the config file

The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is
located at `mmdetection3d/projects/AutowareCenterPoint/configs`. This configuration file is a derived version of
[this centerpoint configuration file](https://github.com/autowarefoundation/mmdetection3d/blob/5c0613be29bd2e51771ec5e046d89ba3089887c7/configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py)
from mmdetection3d.
In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center,
aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set to **[32, 32]**.

The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file.
This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.
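
As an illustration of the kind of overrides involved, a derived config might look roughly like the sketch below; the exact keys and their location (here assumed to sit under `pts_voxel_encoder`) may differ, so consult the file shipped in `projects/AutowareCenterPoint/configs` for the authoritative values.

```python
# Hypothetical excerpt of a derived mmdetection3d config, for illustration only.
_base_ = [
    '../../../configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py'
]

model = dict(
    pts_voxel_encoder=dict(
        feat_channels=[32, 32],    # filter size set to [32, 32]
        use_voxel_center_z=False,  # drop the z coordinate of the voxel center (fork-specific option)
    )
)
```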

#### Start training

```bash
python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom
```
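
If multiple GPUs are available, mmdetection3d also ships a distributed launcher (`tools/dist_train.sh`); a minimal sketch for 4 GPUs, assuming the upstream launcher is present in the fork:

```bash
# Distributed training on 4 GPUs (adjust the GPU count to your machine)
bash tools/dist_train.sh projects/AutowareCenterPoint/configs/centerpoint_custom.py 4 --work-dir ./work_dirs/centerpoint_custom
```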

#### Evaluation of the trained model

For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors:
1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and includes 3D annotations for 5 distinct classes: 6905 cars, 3951 pedestrians,
75 cyclists, 162 buses, and 326 trucks. Frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes,
including training, evaluation, and fine-tuning of models. It is organized in the T4 format.

##### Download the sample dataset

```bash
wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz
# Extract the dataset to a folder of your choice
tar -xvf lidar_detection_sample_dataset.tar.gz
# Create a symbolic link to the dataset folder
ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/
```

##### Prepare dataset and evaluate trained model

Create `.pkl` files for training, evaluation, and testing.

The dataset was formatted according to T4Dataset specifications, with 'sample_dataset' designated as one of its versions.

```bash
python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2
```

Run evaluation

```bash
python tools/test.py projects/AutowareCenterPoint/configs/centerpoint_custom_test.py /PATH/OF/THE/CHECKPOINT --task lidar_det
```

Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset
and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar
sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises point clouds from multiple sensors, concatenated at
the base link location of the vehicle.

### Deploying CenterPoint model to Autoware

#### Convert CenterPoint PyTorch model to ONNX Format

The lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network,
such as preprocessing operations, are implemented externally. In our fork of the mmdetection3d repository,
we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format.
You can find it in the `mmdetection3d/projects/AutowareCenterPoint` directory.

```bash
python projects/AutowareCenterPoint/centerpoint_onnx_converter.py --cfg projects/AutowareCenterPoint/configs/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models
```

#### Create the config file for the custom model

Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Set the parameters of the config file, such as
point_cloud_range, point_feature_size, voxel_size, etc., according to the training config file. For example, with a `point_cloud_range` of `[-51.2, -51.2, -3.0, 51.2, 51.2, 5.0]` and a `voxel_size` of `[0.2, 0.2, 8.0]`, the resulting 512 x 512 grid must match the grid the model was trained with.

```yaml
/**:
  ros__parameters:
    class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"]
    point_feature_size: 4
    max_voxel_size: 40000
    point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0]
    voxel_size: [0.2, 0.2, 8.0]
    downsample_factor: 1
    encoder_in_feature_size: 9
    # post-process params
    circle_nms_dist_threshold: 0.5
    iou_nms_target_class_names: ["CAR"]
    iou_nms_search_distance_2d: 10.0
    iou_nms_threshold: 0.1
    yaw_norm_thresholds: [0.3, 0.3, 0.3, 0.3, 0.0]
```

#### Launch the lidar_centerpoint node

```bash
cd /YOUR/AUTOWARE/PATH/Autoware
source install/setup.bash
ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_custom model_path:=/PATH/TO/ONNX/FILE/
```

### Changelog

#### v1 (2022/07/06)
[v1-head-centerpoint]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint.onnx
[v1-encoder-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_voxel_encoder_centerpoint_tiny.onnx
[v1-head-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint_tiny.onnx

## Acknowledgment: deepen.ai's 3D Annotation Tools Contribution

Special thanks to [Deepen AI](https://www.deepen.ai/) for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset.

## Legal Notice

_The nuScenes dataset is released publicly for non-commercial use under the Creative
Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License.
Additional Terms of Use can be found at <https://www.nuscenes.org/terms-of-use>.
To inquire about a commercial license please contact [email protected]._
