feat: drs conversion #177

Merged (46 commits, Dec 11, 2024)

Commits:
093e322 Output topic drop error during initialization (miursh, Jun 16, 2024)
9af2b62 Avoid out-of-bounds access in get_lidar_camera_synced_frame_info (miursh, Jun 16, 2024)
9da50c2 Enhance image data handling to support CompressedImage format (miursh, Jun 16, 2024)
7fb79f1 add RX2 constants (miursh, Jun 30, 2024)
9fdb7e4 Merge branch 'main' into feat/fs_rx2_conversion (miursh, Jun 30, 2024)
efb9634 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Jun 30, 2024)
7dade32 Merge branch 'main' into feat/fs_rx2_conversion (miursh, Aug 2, 2024)
fd75a01 Merge branch 'main' into feat/fs_rx2_conversion (miursh, Sep 19, 2024)
13445ff feat(rosbag2_to_non_annotated_t4): support image undistortion option (miursh, Sep 22, 2024)
09884d2 feat(rosbag2_to_non_annotated_t4): add DRS related constants/config (miursh, Sep 22, 2024)
59e860b raise error when the camera_info is not found (miursh, Sep 22, 2024)
06e16a7 overwrite distortion coefficient when applying undistortion (miursh, Sep 22, 2024)
8fa4b00 Merge branch 'feat/fs_rx2_conversion' into feat/drs_conversion (miursh, Sep 23, 2024)
8ff78be Merge branch 'feat/add_camera_undistortion_option' into feat/drs_conv… (miursh, Sep 23, 2024)
8542a24 fix config, set jpeg_quality to 100 (miursh, Sep 30, 2024)
9f56231 update config file (miursh, Oct 14, 2024)
a9de6ac modity overwrite logic for rosbag2_to_non_annotated_t4 converter (miursh, Oct 14, 2024)
cb9db4d Add elapsed time logging and increase message display interval (miursh, Oct 14, 2024)
268940d add error handling, add print (miursh, Oct 17, 2024)
56da43a add error handling (miursh, Oct 19, 2024)
d4ed3a7 update drs config file (miursh, Oct 21, 2024)
c6f06bd add output when finish conversion (miursh, Oct 22, 2024)
04bc398 add output when finish conversion (miursh, Oct 22, 2024)
607c7a1 Merge branch 'feat/drs_conversion' of github.com:tier4/tier4_percepti… (miursh, Oct 29, 2024)
d45d573 [tentative] add camera info patch (miursh, Oct 29, 2024)
6d9648e feat: update record classes for CoMLOPs format (ktro2828, Oct 15, 2024)
c660db6 ci: update test depends (ktro2828, Oct 29, 2024)
acf0809 Merge branch 'main' into feat/drs_conversion (miursh, Oct 30, 2024)
3c6642f Update perception_dataset/ros2/oxts_msgs/ins_handler.py (ktro2828, Nov 12, 2024)
a093a78 feat: update acceleration coordinate system in base_link (ktro2828, Nov 12, 2024)
8eb9eec feat: update handling of velocity/omega/acceleration (ktro2828, Nov 13, 2024)
2e8b0f4 chore: remove unused tf-transformations (ktro2828, Nov 13, 2024)
a79b5c3 ci: update to install pacmod3-msgs (ktro2828, Nov 13, 2024)
9829671 Merge remote-tracking branch 'origin/main' into feat/drs_conversion (miursh, Nov 13, 2024)
c0b45c0 Merge branch 'feat/comlops/rosbag-conversion' into feat/drs_conversion (miursh, Nov 19, 2024)
eea021d [tantative] add angular velocity conversion patch (miursh, Nov 19, 2024)
b4aa186 update config for DRS conversion (miursh, Nov 19, 2024)
6ba58e8 fix: bug in pcd conversion (miursh, Nov 19, 2024)
995767f Update convert_rosbag2_to_non_annotated_t4_drs.yaml (miursh, Dec 4, 2024)
2816102 Merge branch 'main' into feat/drs_conversion (miursh, Dec 9, 2024)
59881b9 Revert "[tantative] add angular velocity conversion patch" (miursh, Dec 9, 2024)
641e81e Revert "[tentative] add camera info patch" (miursh, Dec 9, 2024)
f5a0eda Merge branch 'main' into feat/drs_conversion (miursh, Dec 9, 2024)
73ee7b2 Apply suggestions from code review (miursh, Dec 10, 2024)
7c485e4 remove sample config yaml for Co-MLOps dataset (miursh, Dec 10, 2024)
bbe2154 Merge branch 'feat/drs_conversion' of github.com:tier4/tier4_percepti… (miursh, Dec 10, 2024)

Files changed:

@@ -4,33 +4,40 @@ description:
 conversion:
   input_base: ./data/rosbag2
   output_base: ./data/non_annotated_t4_format
-  world_frame_id: "map"
+  world_frame_id: "base_link"
   start_timestamp_sec: 0 # Enter here if there is a timestamp for the start time. If not used, enter 0.
-  skip_timestamp: 2.0 # Do not load data for the first point cloud timestamp for skip_timestamp seconds.
+  skip_timestamp: 0.5 # Do not load data for the first point cloud timestamp for skip_timestamp seconds.
   num_load_frames: 0 # Maximum number of frames to save as t4 data. Set to 0 to automatically set it based on the number of lidar topics.
-  accept_frame_drop: False # If true, the conversion will continue even if the LiDAR frame is dropped.
+  accept_frame_drop: true # If true, the conversion will continue even if the LiDAR frame is dropped.
+  undistort_image: true # If true, the camera image will be undistorted.
   with_ins: true # whether to use INS messages as a ego state instead of `/tf`
-  with_vehicle_status: true # whether to generate `vehicle_state.json`
+  with_vehicle_status: false # whether to generate `vehicle_state.json`
   # The following configuration is generally not modified unless there are changes to the vehicle sensor configuration.
   lidar_sensor:
     topic: /sensing/lidar/concatenated/pointcloud
     channel: LIDAR_CONCAT
   camera_sensors: # Keep the same order as each camera exposure timing
-    - topic: /sensing/camera/camera3/image_raw/compressed
-      channel: CAM_BACK_LEFT
-      delay_msec: 48.33
-    - topic: /sensing/camera/camera2/image_raw/compressed
-      channel: CAM_FRONT_LEFT
-      delay_msec: 65.0
     - topic: /sensing/camera/camera0/image_raw/compressed
-      channel: CAM_FRONT
-      delay_msec: 81.67
-    - topic: /sensing/camera/camera4/image_raw/compressed
+      channel: CAM_FRONT_NARROW
+      delay_msec: 16.0
+    - topic: /sensing/camera/camera1/image_raw/compressed
+      channel: CAM_FRONT_WIDE
+      delay_msec: 16.0
+    - topic: /sensing/camera/camera2/image_raw/compressed
       channel: CAM_FRONT_RIGHT
-      delay_msec: 98.33
-    - topic: /sensing/camera/camera5/image_raw/compressed
+      delay_msec: -9.0
+    - topic: /sensing/camera/camera3/image_raw/compressed
       channel: CAM_BACK_RIGHT
-      delay_msec: 115.0
-    - topic: /sensing/camera/camera1/image_raw/compressed
-      channel: CAM_BACK
-      delay_msec: 131.67
+      delay_msec: -59.0
+    - topic: /sensing/camera/camera4/image_raw/compressed
+      channel: CAM_BACK_NARROW
+      delay_msec: -34.0
+    - topic: /sensing/camera/camera5/image_raw/compressed
+      channel: CAM_BACK_WIDE
+      delay_msec: -84.0
+    - topic: /sensing/camera/camera6/image_raw/compressed
+      channel: CAM_BACK_LEFT
+      delay_msec: -59.0
+    - topic: /sensing/camera/camera7/image_raw/compressed
+      channel: CAM_FRONT_LEFT
+      delay_msec: -59.0
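
The per-camera delay_msec values above describe each camera's exposure timing relative to the LiDAR frame (note the negative values for the DRS layout). As a rough, stand-alone illustration only, not the converter's actual code (the helper name and the "LiDAR timestamp plus delay" interpretation are assumptions), a config of this shape could be read like this:

import yaml

def expected_camera_stamps(config_path: str, lidar_stamp_sec: float) -> dict:
    """Map each camera channel to its assumed capture time: LiDAR stamp + delay_msec."""
    with open(config_path) as f:
        conversion = yaml.safe_load(f)["conversion"]
    return {
        cam["channel"]: lidar_stamp_sec + cam["delay_msec"] * 1e-3
        for cam in conversion["camera_sensors"]
    }

# Example (file name as in the commit list above):
# expected_camera_stamps("convert_rosbag2_to_non_annotated_t4_drs.yaml", 1700000000.0)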
perception_dataset/constants.py (8 additions, 0 deletions)

@@ -43,6 +43,14 @@ class SENSOR_ENUM(Enum):
         "channel": "CAM_BACK",
         "modality": SENSOR_MODALITY_ENUM.CAMERA.value,
     }
+    CAM_BACK_NARROW = {
+        "channel": "CAM_BACK_NARROW",
+        "modality": SENSOR_MODALITY_ENUM.CAMERA.value,
+    }
+    CAM_BACK_WIDE = {
+        "channel": "CAM_BACK_WIDE",
+        "modality": SENSOR_MODALITY_ENUM.CAMERA.value,
+    }
     CAM_FRONT_LEFT = {
         "channel": "CAM_FRONT_LEFT",
         "modality": SENSOR_MODALITY_ENUM.CAMERA.value,
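
The two new members follow the existing dict-valued pattern. A self-contained sketch of how an enum of this shape resolves a channel string to a member (the from_channel helper below is illustrative, not the repository's API):

from enum import Enum

class SENSOR_MODALITY_ENUM(Enum):
    CAMERA = "camera"

class SENSOR_ENUM(Enum):
    CAM_BACK_NARROW = {"channel": "CAM_BACK_NARROW", "modality": SENSOR_MODALITY_ENUM.CAMERA.value}
    CAM_BACK_WIDE = {"channel": "CAM_BACK_WIDE", "modality": SENSOR_MODALITY_ENUM.CAMERA.value}

def from_channel(channel: str) -> SENSOR_ENUM:
    # Channels are unique, so a linear scan over the members is enough.
    for member in SENSOR_ENUM:
        if member.value["channel"] == channel:
            return member
    raise ValueError(f"unknown channel: {channel}")

assert from_channel("CAM_BACK_WIDE") is SENSOR_ENUM.CAM_BACK_WIDE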
@@ -5,6 +5,7 @@
 import os.path as osp
 import shutil
 import sys
+import time
 from typing import Dict, List, Optional, Tuple, Union
 import warnings
 
@@ -81,21 +82,31 @@ def convert(self):
 
         if not self._overwrite_mode:
             dir_exist: bool = False
-            for bag_dir in bag_dirs:
+            for bag_dir in bag_dirs[:]:  # copy to avoid modifying list while iterating
                 bag_name: str = osp.basename(bag_dir)
 
                 output_dir = osp.join(self._output_base, bag_name)
                 if osp.exists(output_dir):
                     logger.error(f"{output_dir} already exists.")
                     dir_exist = True
-
-            if dir_exist:
+                    bag_dirs.remove(bag_dir)
+            if dir_exist and len(bag_dirs) == 0:
+                logger.error(f"{output_dir} already exists.")
                 raise ValueError("If you want to overwrite files, use --overwrite option.")
 
-        for bag_dir in bag_dirs:
+        for bag_dir in sorted(bag_dirs):
             logger.info(f"Start converting {bag_dir} to T4 format.")
             self._params.input_bag_path = bag_dir
-            bag_converter = _Rosbag2ToNonAnnotatedT4Converter(self._params)
-            bag_converter.convert()
+            try:
+                bag_converter = _Rosbag2ToNonAnnotatedT4Converter(self._params)
+                bag_converter.convert()
+            except Exception as e:
+                logger.error(f"Error occurred during conversion: {e}")
+                continue
+            logger.info(f"Conversion of {bag_dir} is completed")
+            print(
+                "--------------------------------------------------------------------------------------------------------------------------"
+            )
 
 
 class _Rosbag2ToNonAnnotatedT4Converter:
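
The reworked check above drops bags whose output directory already exists and aborts only when every requested bag has already been converted; the remaining bags are then processed in sorted order, with failures logged and skipped. A condensed stand-alone sketch of the same skip-or-raise behaviour (hypothetical helper, same directory layout assumed):

import os.path as osp

def filter_unconverted(bag_dirs: list, output_base: str) -> list:
    """Drop bags whose output directory already exists; raise only if nothing is left."""
    remaining = [d for d in bag_dirs if not osp.exists(osp.join(output_base, osp.basename(d)))]
    if bag_dirs and not remaining:
        raise ValueError("If you want to overwrite files, use --overwrite option.")
    return sorted(remaining)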
@@ -157,7 +168,7 @@ def __init__(self, params: Rosbag2ConverterParams) -> None:
         self._output_data_dir = osp.join(
             self._output_scene_dir, T4_FORMAT_DIRECTORY_NAME.DATA.value
         )
-        self._msg_display_interval = 10
+        self._msg_display_interval = 100
 
         shutil.rmtree(self._output_scene_dir, ignore_errors=True)
         self._make_directories()
@@ -259,16 +270,23 @@ def _init_tables(self):
 
     def convert(self):
         self._convert()
 
         self._save_tables()
         self._save_config()
         if not self._without_compress:
             self._compress_directory()
 
     def _save_tables(self):
+        print(
+            "--------------------------------------------------------------------------------------------------------------------------"
+        )
         for cls_attr in self.__dict__.values():
             if isinstance(cls_attr, AbstractTable):
+                print(f"{cls_attr.FILENAME}: #rows {len(cls_attr)}")
                 cls_attr.save_json(self._output_anno_dir)
+        print(
+            "--------------------------------------------------------------------------------------------------------------------------"
+        )
 
     def _save_config(self):
         config_data = {
@@ -318,6 +336,7 @@ def _convert(self) -> None:
         - is_key_frame=True
         - fill in next/prev
         """
+        start = time.time()
         sensor_channel_to_sample_data_token_list: Dict[str, List[str]] = {}
 
         self._init_tables()
@@ -327,6 +346,7 @@
         self._connect_sample_in_scene()
         self._connect_sample_data_in_scene(sensor_channel_to_sample_data_token_list)
         self._add_scene_description(self._scene_description)
+        print(f"Total elapsed time: {time.time() - start:.2f} sec")
 
         if self._with_vehicle_status:
             self._convert_vehicle_state()
@@ -348,6 +368,7 @@ def _convert_sensor_data(
         logger.info(f"set start_timestamp to {start_timestamp}")
 
         if self._sensor_mode == SensorMode.DEFAULT:
+            start = time.time()
             lidar_sensor_channel = self._lidar_sensor["channel"]
             sensor_channel_to_sample_data_token_list[lidar_sensor_channel] = (
                 self._convert_pointcloud(
@@ -357,7 +378,7 @@
                     scene_token=scene_token,
                 )
             )
-
+            print(f"LiDAR conversion. total elapsed time: {time.time() - start:.2f} sec\n")
             for radar_sensor in self._radar_sensors:
                 radar_sensor_channel = radar_sensor["channel"]
                 sensor_channel_to_sample_data_token_list[radar_sensor_channel] = (
@@ -382,6 +403,7 @@
                 )
 
         for camera_sensor in self._camera_sensors:
+            start = time.time()
             sensor_channel = camera_sensor["channel"]
             sensor_channel_to_sample_data_token_list[sensor_channel] = self._convert_image(
                 start_timestamp=camera_start_timestamp,
@@ -400,6 +422,9 @@
                 camera_start_timestamp = misc_utils.nusc_timestamp_to_unix_timestamp(
                     first_sample_data_record.timestamp
                 )
+            print(
+                f"camera {camera_sensor['channel']} conversion. total elapsed time: {time.time() - start:.2f} sec\n"
+            )
 
     def _convert_static_data(self):
         # Log, Map
@@ -968,6 +993,7 @@ def _generate_calibrated_sensor(
         self, sensor_channel: str, start_timestamp: builtin_interfaces.msg.Time, topic_name=""
     ) -> Union[str, Tuple[str, CameraInfo]]:
         calibrated_sensor_token = str()
+        camera_info = None
         for sensor_enum in self._sensor_enums:
             channel = sensor_enum.value["channel"]
             modality = sensor_enum.value["modality"]
perception_dataset/utils/rosbag2.py (5 additions, 4 deletions)

@@ -169,12 +169,13 @@ def pointcloud_msg_to_numpy(pointcloud_msg: PointCloud2) -> NDArray:
     points = point_cloud2_to_array(pointcloud_msg)
 
     # Extract the x, y, z coordinates and additional fields if available
-    xyz = points["xyz"]
+    points_arr = points["xyz"]
     if "intensity" in points.keys():
         intensity = points["intensity"].astype(np.float32)
-        points_arr = np.hstack((xyz, intensity))
-    else:
-        points_arr = xyz
+        points_arr = np.hstack((points_arr, intensity))
+    if "lidar_index" in points.keys():
+        lidar_index = points["lidar_index"].astype(np.float32)
+        points_arr = np.hstack((points_arr, lidar_index))
 
     # Ensure the resulting array has exactly NUM_DIMENSIONS columns
     if points_arr.shape[1] > NUM_DIMENSIONS:
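
To make the resulting column layout concrete, here is a small stand-alone example of the stacking above, assuming point_cloud2_to_array returns an (N, 3) xyz block and (N, 1) columns for the optional fields (dummy data, not actual message contents):

import numpy as np

points = {
    "xyz": np.zeros((4, 3), dtype=np.float32),
    "intensity": np.ones((4, 1), dtype=np.float32),
    "lidar_index": np.full((4, 1), 2.0, dtype=np.float32),
}

points_arr = points["xyz"]
if "intensity" in points.keys():
    points_arr = np.hstack((points_arr, points["intensity"].astype(np.float32)))
if "lidar_index" in points.keys():
    points_arr = np.hstack((points_arr, points["lidar_index"].astype(np.float32)))

print(points_arr.shape)  # (4, 5): x, y, z, intensity, lidar_index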