docs: add ground segmentation #63

Merged 14 commits on Dec 13, 2024

`docs/use_case/ground_segmentation.en.md` (145 additions, 0 deletions)

# Evaluate Ground Segmentation

Evaluate the performance of the Obstacle Segmentation sub-component in Autoware, which is responsible for identifying the point clouds originating from obstacles that the vehicle should avoid.

## Ground Truth data

The Ground Truth data required for evaluation can be provided in the following two ways; select one by setting `Evaluation.Conditions.Method` in the scenario.
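
For orientation, a scenario excerpt selecting the method might look like the sketch below. Only the `Evaluation.Conditions.Method` and `Evaluation.Conditions.accuracy_min` keys are taken from this document; the values and any surrounding structure are placeholders, so consult the sample scenario linked under Scenario Format for the authoritative schema.

```yaml
Evaluation:
  Conditions:
    Method: annotated_rosbag # or: annotated_pcd
    accuracy_min: 0.9 # placeholder threshold; a frame is Normal when Accuracy >= this value
```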

### annotated_rosbag

This method involves adding a field to the point cloud data in the bag file to represent semantic labels.

The topics before and after ground removal are subscribed to in a synchronized manner, and accuracy is evaluated by comparing the numbers of points carrying ground and non-ground labels.

In this evaluation framework, the semantic labels are assumed to be recorded in an `INT32` field named `entity_id`.
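
As an illustration of this convention, the ground/non-ground counts for one cloud could be extracted roughly as follows. This is a minimal sketch, not the evaluator's implementation; in particular, the `GROUND_LABEL` value is an assumption that depends on how the dataset encodes its labels.

```python
import numpy as np
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2

GROUND_LABEL = 1  # assumption: the entity_id value used for ground points in the dataset


def count_ground_points(cloud: PointCloud2) -> tuple[int, int]:
    """Return (ground, non_ground) counts read from the INT32 `entity_id` field."""
    records = point_cloud2.read_points(cloud, field_names=("entity_id",))
    labels = np.array([r[0] for r in records], dtype=np.int32)
    ground = int(np.count_nonzero(labels == GROUND_LABEL))
    return ground, int(labels.size) - ground
```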

### annotated_pcd

This method involves adding a field to the point cloud data provided as a dataset (`~/driving_log_replayer_v2/ground_segmentation/dataset/data/LIDAR_CONCAT/*.pcd.bin`) to represent semantic labels.

Compare the point cloud after ground removal with the point cloud in the `pcd.bin` file, and evaluate accuracy by examining the labels associated with the points in the processed point cloud.
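
Conceptually, the comparison can be pictured as the sketch below: each point that survives ground removal is matched back to the annotated dataset cloud by nearest-neighbour search, and its label is read from the matched point. The `.pcd.bin` record layout, the label column, and the helper names here are assumptions for illustration only, not the evaluator's actual code.

```python
import numpy as np
from scipy.spatial import cKDTree

GROUND_LABEL = 1  # assumption: label value that marks ground points


def load_annotated_bin(path: str) -> np.ndarray:
    """Assumed layout: float32 records of x, y, z, intensity, label for each point."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 5)


def labels_for_output_points(output_xyz: np.ndarray, annotated: np.ndarray) -> np.ndarray:
    """Label of each output point, taken from its nearest annotated dataset point."""
    tree = cKDTree(annotated[:, :3])
    _, nearest = tree.query(output_xyz, k=1)
    return annotated[nearest, 4].astype(np.int32)
```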

## Evaluation method

Launching the launch file executes the following steps:

1. The launch file starts the evaluation node (`ground_segmentation_evaluator_node`), `logging_simulator.launch`, and the `ros2 bag play` command.
2. Autoware receives sensor data from the previously prepared rosbag and performs ground point cloud removal within the perception module.
3. The evaluation node subscribes to Autoware's output topics, evaluates the accuracy of ground point cloud removal, and dumps the results to a file (an illustrative wiring sketch follows this list).
4. When the rosbag playback finishes, Autoware's launch is automatically terminated and the evaluation is complete.
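
As an illustration of steps 1–3, a synchronized pair of subscriptions could be wired up roughly as below. This is only a skeleton under the default topic names listed later on this page; the actual `ground_segmentation_evaluator_node` is more involved.

```python
import rclpy
from message_filters import ApproximateTimeSynchronizer, Subscriber
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class EvaluatorSketch(Node):
    """Illustrative skeleton only; not the real ground_segmentation_evaluator_node."""

    def __init__(self) -> None:
        super().__init__("ground_segmentation_evaluator_sketch")
        self.before = Subscriber(self, PointCloud2, "/sensing/lidar/concatenated/pointcloud")
        self.after = Subscriber(
            self, PointCloud2, "/perception/obstacle_segmentation/single_frame/pointcloud"
        )
        # Pair messages whose timestamps are within 0.1 s of each other.
        self.sync = ApproximateTimeSynchronizer([self.before, self.after], queue_size=10, slop=0.1)
        self.sync.registerCallback(self.evaluate)

    def evaluate(self, before: PointCloud2, after: PointCloud2) -> None:
        # Compare labels between the paired clouds here and accumulate TP/FP/TN/FN per frame.
        self.get_logger().info(f"paired clouds: {before.width} -> {after.width} points")


def main() -> None:
    rclpy.init()
    rclpy.spin(EvaluatorSketch())


if __name__ == "__main__":
    main()
```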

### Points to note during evaluation

- **annotated_rosbag mode**
The [sensing module of autoware.universe](https://github.com/autowarefoundation/autoware.universe/blob/main/sensing/autoware_pointcloud_preprocessor/src/filter.cpp#L386-L394) needs to be modified as follows:

```diff
  if (utils::is_data_layout_compatible_with_point_xyzi(*cloud)) {
    RCLCPP_ERROR(
      get_logger(),
      "The pointcloud layout is compatible with PointXYZI. You may be using legacy "
      "code/data");
  }

- return;
+ //return;
}
```

- **annotated_pcd mode**
Since the evaluation process takes time, the playback rate of the rosbag needs to be reduced.
Example:
`ros2 launch driving_log_replayer_v2 driving_log_replayer_v2.launch.py scenario_path:=${scenario_file} play_rate:=0.1`

## Evaluation result

The results are calculated for each subscription. The format and available states are described below.

### Normal

If the Accuracy obtained through the evaluation is equal to or greater than the `Evaluation.Conditions.accuracy_min` specified in the scenario, the result is judged as Normal.

### Error

The result is judged as Error when the Normal condition is not met.
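
As a sketch, and assuming the TP/FP/TN/FN counts described in the result format below, the per-frame judgment reduces to a single comparison (the helper name is hypothetical):

```python
def judge_frame(tp: int, fp: int, tn: int, fn: int, accuracy_min: float) -> str:
    """Judge one frame: Success when Accuracy >= Evaluation.Conditions.accuracy_min, otherwise Fail."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total if total else 0.0
    return "Success" if accuracy >= accuracy_min else "Fail"
```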

## Topic name and data type used by evaluation node

Subscribed topics:

| topic name | Data type |
| --------------------------------------------------------- | --------------------------- |
| /sensing/lidar/concatenated/pointcloud                     | sensor_msgs/msg/PointCloud2 |
| /perception/obstacle_segmentation/single_frame/pointcloud | sensor_msgs/msg/PointCloud2 |

**NOTE: the `/perception/obstacle_segmentation/single_frame/pointcloud` topic can be modified by changing the `evaluation_target_topic` launch argument.**

Published topics:

| topic name | Data type |
| ---------- | --------- |
| - | - |

## Arguments passed to logging_simulator.launch

- localization: false
- planning: false
- control: false
- sensing: false
- perception_mode: lidar

## simulation

State the information required to run the simulation.

### Topic to be included in the input rosbag

| topic name | Data type |
| -------------------------------------- | --------------------------- |
| /sensing/lidar/concatenated/pointcloud | sensor_msgs/msg/PointCloud2 |
| /tf | tf2_msgs/msg/TFMessage |

### Topics that must not be included in the input rosbag

| topic name | Data type |
| ---------- | ----------------------- |
| /clock | rosgraph_msgs/msg/Clock |

The `/clock` topic is published by the `--clock` option of `ros2 bag play`; if it were also recorded in the bag, it would be published twice, so it must not be included in the bag.

## evaluation

State the information necessary for the evaluation.

### Scenario Format

See [sample](https://github.com/tier4/driving_log_replayer_v2/blob/develop/sample/ground_segmentation/scenario.ja.yaml)

### Evaluation Result Format

See [sample](https://github.com/tier4/driving_log_replayer_v2/blob/develop/sample/ground_segmentation/result.json)

In ground segmentation, the evaluation results for Accuracy, Precision, Recall, Specificity, and F1-score are output for each frame.

The format of each frame and the metrics format are shown below.
**NOTE: the common parts of the result file format, which have already been explained, are omitted.**

```json
{
  "GroundSegmentation": {
    "Result": { "Total": "Success or Fail", "Frame": "Success or Fail" },
    "Info": {
      "TP": "The number of ground points recognized as ground",
      "FP": "The number of obstacle points recognized as ground",
      "TN": "The number of obstacle points recognized as obstacle",
      "FN": "The number of ground points recognized as obstacle",
      "Accuracy": "Accuracy value",
      "Precision": "Precision value",
      "Recall": "Recall value",
      "Specificity": "Specificity value",
      "F1-score": "F1-score value"
    }
  }
}
```
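
The metric fields in `Info` follow the standard confusion-matrix definitions. As a reference sketch (a hypothetical helper, not the evaluator's code), they can be derived from the four counts as follows:

```python
def compute_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Standard confusion-matrix metrics; division by zero falls back to 0.0 here."""

    def ratio(numerator: float, denominator: float) -> float:
        return numerator / denominator if denominator else 0.0

    precision = ratio(tp, tp + fp)  # of points judged as ground, the fraction that is ground
    recall = ratio(tp, tp + fn)     # of actual ground points, the fraction judged as ground
    return {
        "Accuracy": ratio(tp + tn, tp + fp + tn + fn),
        "Precision": precision,
        "Recall": recall,
        "Specificity": ratio(tn, tn + fp),  # of actual obstacle points, the fraction judged as obstacle
        "F1-score": ratio(2 * precision * recall, precision + recall),
    }
```
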
`docs/use_case/ground_segmentation.ja.md` (144 additions, 0 deletions)

# Evaluate Ground Segmentation

Evaluate the function that segments and removes ground points from the input point cloud.

## Ground Truth data

The Ground Truth data required for evaluation can be provided in the following two ways; select one by setting `Evaluation.Conditions.Method` in the scenario.

### annotated_rosbag

In this method, the point cloud data contained in the bag carries an additional field that represents semantic labels.

The topics before and after ground removal are subscribed to in a synchronized manner, and accuracy is evaluated by comparing the numbers of points carrying the ground label.

In this evaluation framework, the semantic labels are assumed to be stored in an `INT32` field named `entity_id`.

### annotated_pcd

In this method, the point cloud data provided as a dataset (`~/driving_log_replayer_v2/ground_segmentation/dataset/data/LIDAR_CONCAT/*.pcd.bin`) carries an additional field that represents semantic labels.

The point cloud after ground removal is compared with the point cloud contained in the `pcd.bin` file, and accuracy is evaluated by looking at the labels of the points in the processed point cloud.

## Evaluation method

Launching the launch file executes the following steps and performs the evaluation:

1. The launch file starts the evaluation node (`ground_segmentation_evaluator_node`), `logging_simulator.launch`, and the `ros2 bag play` command.
2. Autoware receives the sensor data output from the bag and performs ground point cloud removal within the perception module.
3. The evaluation node subscribes to the topics, evaluates the accuracy of ground point cloud removal, and records the results to a file.
4. When the bag playback finishes, the launch is automatically terminated and the evaluation is complete.

### Points to note during evaluation

- **annotated_rosbag mode**
  The [sensing module of autoware.universe](https://github.com/autowarefoundation/autoware.universe/blob/main/sensing/autoware_pointcloud_preprocessor/src/filter.cpp#L386-L394) needs to be modified as follows:

```diff
  if (utils::is_data_layout_compatible_with_point_xyzi(*cloud)) {
    RCLCPP_ERROR(
      get_logger(),
      "The pointcloud layout is compatible with PointXYZI. You may be using legacy "
      "code/data");
  }

- return;
+ //return;
}
```

- **annotated_pcd mode**
  Since the evaluation process takes time, the playback rate of the rosbag needs to be reduced.
  `ros2 launch driving_log_replayer_v2 driving_log_replayer_v2.launch.py scenario_path:=${scenario_file} play_rate:=0.1`

## Evaluation result

For each subscription of the topic, the judgment result described below is output.

### Normal

If the Accuracy obtained through the evaluation is equal to or greater than the `Evaluation.Conditions.accuracy_min` specified in the scenario, the result is judged as Normal.

### Error

The result is judged as Error when the Normal condition is not met.

## Topic name and data type used by evaluation node

Subscribed topics:

| topic name                                                 | Data type                   |
| ---------------------------------------------------------- | --------------------------- |
| /sensing/lidar/concatenated/pointcloud                     | sensor_msgs/msg/PointCloud2 |
| /perception/obstacle_segmentation/single_frame/pointcloud  | sensor_msgs/msg/PointCloud2 |

**NOTE: the `/perception/obstacle_segmentation/single_frame/pointcloud` topic can be modified by changing the `evaluation_target_topic` launch argument.**

Published topics:

| topic name | Data type |
| ---------- | --------- |
| -          | -         |

## Arguments passed to logging_simulator.launch

- localization: false
- planning: false
- control: false
- sensing: false
- perception_mode: lidar

## simulation

State the information required to run the simulation.

### Topics to be included in the input rosbag

| topic name                             | Data type                   |
| -------------------------------------- | --------------------------- |
| /sensing/lidar/concatenated/pointcloud | sensor_msgs/msg/PointCloud2 |
| /tf                                    | tf2_msgs/msg/TFMessage      |

### Topics that must not be included in the input rosbag

| topic name | Data type               |
| ---------- | ----------------------- |
| /clock     | rosgraph_msgs/msg/Clock |

The `/clock` topic is published by the `--clock` option of `ros2 bag play`; if it were also recorded in the bag, it would be published twice, so it must not be included in the bag.

## evaluation

State the information necessary for the evaluation.

### Scenario Format

See the [sample](https://github.com/tier4/driving_log_replayer_v2/blob/develop/sample/ground_segmentation/scenario.ja.yaml).

### Evaluation Result Format

See the [sample](https://github.com/tier4/driving_log_replayer_v2/blob/develop/sample/ground_segmentation/result.json).

In ground_segmentation, the evaluation results for Accuracy, Precision, Recall, Specificity, and F1-score are output for each frame.

An example of the evaluation output is described below.
**NOTE: the common parts already explained in the result file format are omitted.**

```json
{
  "GroundSegmentation": {
    "Result": { "Total": "Success or Fail", "Frame": "Success or Fail" },
    "Info": {
      "TP": "The number of ground points recognized as ground",
      "FP": "The number of obstacle points recognized as ground",
      "TN": "The number of obstacle points recognized as obstacle",
      "FN": "The number of ground points recognized as obstacle",
      "Accuracy": "Accuracy value",
      "Precision": "Precision value",
      "Recall": "Recall value",
      "Specificity": "Specificity value",
      "F1-score": "F1-score value"
    }
  }
}
```