
V2V4Real with BEV map segmentation

Introduction

This repo is modified from the V2V4Real and CoBEVT projects of the UCLA Mobility Lab. Check the original version on the main branch or their official pages for more details.

Data Download

OPV2V

Download our augmented OPV2V dataset for LiDAR-based BEV map segmentation. Reassemble and unzip the downloaded parts with:

cat train.part.* > train.zip
cat test.part.* > test.zip
unzip train.zip
unzip test.zip

The unzipped files should have the following structure:

├── opv2v
│   ├── train
│   │   ├── 2021_08_16_22_26_54
│   │   ├── ...
│   ├── test

V2V4Real

Please check the official website to download the V2V4Real dataset (OPV2V format). The unzipped files should have the following structure:

├── v2v4real
│   ├── train
│   │   ├── testoutput_CAV_data_2022-03-15-09-54-40_1
│   ├── validate
│   ├── test

Installation

To set up the codebase environment, follow these steps:

1. Create a conda environment (Python >= 3.7)

conda create -n v2v4real python=3.7
conda activate v2v4real

2. PyTorch installation (>= 1.12.0 required)

Take PyTorch 1.12.0 as an example:

conda install pytorch==1.12.0 torchvision==0.13.0 cudatoolkit=11.3 -c pytorch -c conda-forge

3. spconv 2.x Installation

pip install spconv-cu113

4. Install other dependencies

pip install -r requirements.txt
python setup.py develop

5. Install the CUDA version of the bounding-box NMS calculation

python opencood/utils/setup.py build_ext --inplace
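
After installation, you can optionally run the following quick sanity checks (not part of the original setup steps) to confirm that PyTorch sees your GPU and that spconv 2.x imports correctly (spconv 2.x exposes its modules under spconv.pytorch):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import spconv.pytorch; print('spconv OK')"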

Quick Start

Data sequence visualization

To quickly visualize the LiDAR stream in the OPV2V dataset, first modify validate_dir in opencood/hypes_yaml/visualization.yaml to the OPV2V data path on your local machine, e.g. opv2v/validate, and then run the following commands:

cd ~/v2v4real-bevseg
python opencood/visualization/vis_data_sequence.py [--color_mode ${COLOR_RENDERING_MODE} --isSim]

Arguments Explanation:

  • color_mode : str type, the LiDAR color rendering mode. Choose from 'v2vreal', 'constant', 'intensity' or 'z-value'.
  • isSim : bool type, set this flag when you are visualizing simulation data.
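
For example, assuming visualization.yaml already points to your local opv2v/validate folder, the following call would render the simulated OPV2V sequences colored by point intensity:

python opencood/visualization/vis_data_sequence.py --color_mode intensity --isSim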

Train your model

OpenCOOD uses yaml files to configure all training parameters. To train your own model from scratch or continue from a checkpoint, run the following command:

python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir  ${CHECKPOINT_FOLDER} --half]

Arguments Explanation:

  • hypes_yaml: the path of the training configuration file, e.g. opencood/hypes_yaml/v2vreal/point_pillar_fax.yaml, which trains CoBEVT with the PointPillars backbone on the V2V4Real dataset. See Tutorial 1: Config System to learn more about the rules of the yaml files.
  • model_dir (optional): the path of the checkpoints, used to fine-tune trained models. When model_dir is given, the trainer ignores hypes_yaml and loads the config.yaml in the checkpoint folder instead.
  • half (optional): if set, the model will be trained with half precision. It cannot be combined with multi-GPU training.
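
As a concrete example, the following command would train CoBEVT with the PointPillars backbone on V2V4Real from scratch with half precision on a single GPU, using the yaml mentioned above:

python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/v2vreal/point_pillar_fax.yaml --half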

To train on multiple GPUs, run the following command:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4  --use_env opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir  ${CHECKPOINT_FOLDER}]

For more details, please check the original version of this project.

Test the model

Before running the following command, make sure validation_dir in the config.yaml under your checkpoint folder points to the testing dataset path, e.g. v2v4real/test.

python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} [--show_vis] [--show_sequence] [--save_evibev]

Arguments Explanation:

  • model_dir: the path to your saved model.
  • fusion_method: the fusion strategy; currently 'nofusion', 'early', 'late', and 'intermediate' are supported.
  • show_vis: whether to visualize the detection overlay with the point cloud.
  • show_sequence : the detection results will be visualized as a video stream. It cannot be set together with show_vis.
  • save_evibev : whether to save the test output for later evaluation in the evibev project.
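
For example, assuming a trained checkpoint folder at opencood/logs/point_pillar_fax (a hypothetical path) whose config.yaml already points to v2v4real/test, intermediate-fusion inference with sequential visualization could look like:

python opencood/tools/inference.py --model_dir opencood/logs/point_pillar_fax --fusion_method intermediate --show_sequence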

BEV segmentation results

| Method   | OPV2V-road | OPV2V-object | V2V4Real-object | OPV2V ckpt | V2V4Real ckpt |
|----------|------------|--------------|-----------------|------------|---------------|
| Fcooper  | 70.3       | 52.06        | 25.87           | –          | –             |
| AttnFuse | 75.32      | 52.34        | 25.47           | –          | –             |
| V2X-ViT  | 75.03      | 50.41        | 29.87           | –          | –             |
| CoBEVT   | 75.89      | 53.34        | 29.62           | –          | –             |

Citation

@inproceedings{xu2023v2v4real,
  title={V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception},
  author={Xu, Runsheng and Xia, Xin and Li, Jinlong and Li, Hanzhao and Zhang, Shuo and Tu, Zhengzhong and Meng, Zonglin and Xiang, Hao and Dong, Xiaoyu and Song, Rui and Yu, Hongkai and Zhou, Bolei and Ma, Jiaqi},
  booktitle={The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR)},
  year={2023}
}

Acknowledgment

This dataset belongs to the OpenCDA ecosystem family. The codebase is built upon OpenCOOD, which is the first Open Cooperative Detection framework for autonomous driving.
