Prerequisite
Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
Branch
main branch https://github.com/open-mmlab/mmdetection3d
Environment
sys.platform: linux
Python: 3.9.0 (default, Nov 15 2020, 14:28:56) [GCC 7.3.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 4090
CUDA_HOME: /usr/local/cuda-11.8
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.0.0+cu118
PyTorch compiling details: PyTorch built with:
TorchVision: 0.15.1+cu118
OpenCV: 4.9.0
MMEngine: 0.10.4
MMDetection: 3.2.0
MMDetection3D: 1.3.0+5c0613b
spconv2.0: True
Reproduces the problem - code sample
# The program is projects/BEVFusion/demo/multi_modality_demo_noann.py
# Copyright (c) OpenMMLab. All rights reserved.
from argparse import ArgumentParser

import mmcv

from mmdet3d.apis import inference_multi_modality_detector, init_model
from mmdet3d.registry import VISUALIZERS


def parse_args():
    parser = ArgumentParser()
    parser.add_argument('pcd', help='Point cloud file')
    parser.add_argument('img', help='Image file')
    # parser.add_argument('ann', help='ann file')
    parser.add_argument('config', help='Config file')
    parser.add_argument('checkpoint', help='Checkpoint file')
    parser.add_argument(
        '--device', default='cuda:0', help='Device used for inference')
    parser.add_argument(
        '--cam-type',
        type=str,
        default='CAM_FRONT',
        help='choose camera type to inference')
    parser.add_argument(
        '--score-thr', type=float, default=0.0, help='bbox score threshold')
    parser.add_argument(
        '--out-dir', type=str, default='demo', help='dir to save results')
    parser.add_argument(
        '--show',
        action='store_true',
        help='show online visualization results')
    parser.add_argument(
        '--snapshot',
        action='store_true',
        help='whether to save online visualization results')
    args = parser.parse_args()
    return args


def main(args):
    # build the model from a config file and a checkpoint file
    model = init_model(args.config, args.checkpoint, device=args.device)
    # ... (rest of main omitted here; per the traceback below it calls
    # inference_multi_modality_detector with '' as the ann file)


if __name__ == '__main__':
    args = parse_args()
    main(args)
Reproduces the problem - command or script
python projects/BEVFusion/demo/multi_modality_demo_noann.py \
    demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927647951.pcd.bin \
    demo/data/nuscenes/ \
    work_dirs/without_pretrained/nuscenes_lidar_cam/nuscenes_lidar_cam.py \
    work_dirs/without_pretrained/nuscenes_lidar_cam/epoch_6.pth \
    --cam-type all --score-thr 0.2 --show --snapshot
Reproduces the problem - error message
/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/mmdet/models/task_modules/builder.py:17: UserWarning: build_sampler would be deprecated soon, please use mmdet.registry.TASK_UTILS.build()
  warnings.warn('build_sampler would be deprecated soon, please use '
/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/mmdet/models/task_modules/builder.py:39: UserWarning: build_assigner would be deprecated soon, please use mmdet.registry.TASK_UTILS.build()
  warnings.warn('build_assigner would be deprecated soon, please use '
/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
09/05 23:37:17 - mmengine - INFO - Loads checkpoint by http backend from path: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth
Loads checkpoint by local backend from path: work_dirs/without_pretrained/nuscenes_lidar_cam/epoch_6.pth
/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the save_dir argument.
  warnings.warn(f'Failed to add {vis_backend.__class__}, '
Traceback (most recent call last):
  File "/home/robot/1.code/mmdetection3d/projects/BEVFusion/demo/multi_modality_demo_noann.py", line 78, in <module>
    main(args)
  File "/home/robot/1.code/mmdetection3d/projects/BEVFusion/demo/multi_modality_demo_noann.py", line 49, in main
    result, data = inference_multi_modality_detector(model, args.pcd, args.img, '',
  File "/home/robot/1.code/mmdetection3d/mmdet3d/apis/inference.py", line 233, in inference_multi_modality_detector
    data_list = mmengine.load(ann_file)['data_list']
  File "/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/mmengine/fileio/io.py", line 832, in load
    raise TypeError(f'Unsupported format: {file_format}')
TypeError: Unsupported format:
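The final TypeError is explained by the frame just above it: the script passes an empty string '' as the annotation file, and mmengine.load picks a parser from the file extension, so an empty path yields an empty file_format (which is why the error message ends right after the colon). A minimal stdlib sketch of that extension-based detection, as an illustration of the mechanism rather than mmengine's actual implementation:

```python
import os.path as osp


def infer_format(path: str) -> str:
    # Loaders are chosen by file extension; an empty path has no
    # extension, so the detected format is the empty string.
    return osp.splitext(path)[1].lstrip('.')


def check_ann_file(path: str) -> str:
    # Mirror the style of error shown in the traceback when the
    # format cannot be recognized.
    fmt = infer_format(path)
    if fmt not in ('json', 'yaml', 'yml', 'pkl', 'pickle'):
        raise TypeError(f'Unsupported format: {fmt}')
    return fmt


print(check_ann_file('data/nuscenes/nuscenes_infos_val.pkl'))  # pkl
print(repr(infer_format('')))  # '' -> "Unsupported format: " with nothing after it
```

So the immediate fix for the crash is to pass the path of a real info file (.pkl or .json) instead of '' in the inference_multi_modality_detector call, or to skip the annotation-loading path entirely.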
Additional information
My issue:
I want to test my improved model under special environmental conditions (such as the rainy and night scenes in the nuScenes dataset) to strengthen my paper, using the inference visualization program in the demo folder of mmdet3d. However, the demo only ships daytime samples, so I cannot demonstrate the model's effectiveness across more diverse environments.
I wrote my own program that successfully extracts night and rainy samples from nuScenes, and I also modified the create_data.py script to produce the required pkl files. The images and point clouds come out fine, but the pkl files consistently fail to meet the inference program's requirements. Does mmdet3d provide a method for generating pkl files compatible with the demo folder's inference programs?
The demo seems to expect a specific data format, and I could not find clear documentation or examples for creating compatible pkl files for these conditions.
I have repeatedly adjusted create_data.py to generate pkl files for demo inference, but I keep hitting problems with the pkl structure. Clearer guidelines or examples for building these files for different environmental conditions would be extremely helpful.
Additionally, I cannot run the multi_modality_demo_noann.py script. Both problems concern visualization, so I would like to address them together.
My main questions are:
1. How can I generate pkl files for different environmental conditions (such as rain and night) that are compatible with the demo inference program?
2. What is causing the multi_modality_demo_noann.py script to fail, and how can I resolve it?
Any guidance on these visualization-related problems would be greatly appreciated, as they are crucial for demonstrating the effectiveness of my improved model across various environmental conditions.
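On the pkl question: the traceback shows exactly what the demo reads, mmengine.load(ann_file)['data_list'], i.e. a pickle whose top level is a dict containing a data_list of per-frame info dicts. Below is a hedged sketch of writing a one-frame info file with only the standard library. The field names (lidar_points, images, cam2img, lidar2cam) follow the mmdet3d 1.x nuScenes info format, and every value is a placeholder, so verify both against a pkl actually produced by tools/create_data.py before relying on this:

```python
import pickle

# One hypothetical frame; copy every field below from a real nuScenes
# sample / a pkl generated by tools/create_data.py.
frame_info = {
    'sample_idx': 0,
    'lidar_points': {
        'lidar_path': 'n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927647951.pcd.bin',
        'num_pts_feats': 5,  # nuScenes lidar: x, y, z, intensity, ring index
    },
    'images': {
        'CAM_FRONT': {
            'img_path': 'CAM_FRONT_sample.jpg',  # placeholder file name
            'cam2img': [[1266.4, 0.0, 816.3],    # placeholder intrinsics
                        [0.0, 1266.4, 491.5],
                        [0.0, 0.0, 1.0]],
            'lidar2cam': [[1.0, 0.0, 0.0, 0.0],  # placeholder extrinsics
                          [0.0, 1.0, 0.0, 0.0],
                          [0.0, 0.0, 1.0, 0.0],
                          [0.0, 0.0, 0.0, 1.0]],
        },
    },
}

ann = {'metainfo': {'dataset': 'nuscenes'}, 'data_list': [frame_info]}

with open('demo_night_rain_infos.pkl', 'wb') as f:
    pickle.dump(ann, f)
```

With such a file on disk, its path can be passed as the ann argument of the original multi_modality_demo.py instead of deleting the argument. Whether additional keys (e.g. lidar2img, ego2global) are required depends on the model's test pipeline, so cross-check against a real info pkl for your config.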