How to apply the network on my own dataset using test code #9

mostafa501 opened this issue Dec 27, 2023 · 11 comments
@mostafa501

Hi, thank you so much for publishing this work; we would like to use this network for better accuracy. How can we test our own collected data, as we did before with the TD3D network?
Below is the demo test script we used for TD3D. Is it possible to use a snippet like this, or not?

from argparse import ArgumentParser
from mmdet3d.apis import inference_segmentor, init_model
import numpy as np
import os
# Convert a comma-separated txt point cloud into a float32 .bin file
input_txt_file = '/media/navlab/GNSS/work/instance_segmentation/td3d-main/demo/tested_data/twochair.txt'
filename = os.path.splitext(os.path.basename(input_txt_file))[0]
output_bin_file = f'{input_txt_file}_ok.bin'

# Load the text file and save the bin file
point_cloud = np.loadtxt(input_txt_file, delimiter=',')
point_cloud.astype(np.float32).tofile(output_bin_file)
parser = ArgumentParser()
parser.add_argument('--pcd', default=output_bin_file, help='Point cloud file')
parser.add_argument('--config', default='/media/navlab/GNSS/work/instance_segmentation/td3d-main/configs/td3d_is/td3d_is_scannet-3d-18class.py', help='Config file')
parser.add_argument('--checkpoint', default='/media/navlab/GNSS/work/instance_segmentation/td3d-main/pretrained_models/td3d_scannet.pth', help='Checkpoint file')
parser.add_argument('--device', default='cuda:0', help='Device used for inference')
parser.add_argument('--out-dir', type=str, default=f'/media/navlab/GNSS/work/instance_segmentation/td3d-main/demo/extracted_info/{filename}', help='Directory to save results')
parser.add_argument('--show', action='store_true', help='Show online visualization results')
parser.add_argument('--snapshot', action='store_true', help='Whether to save online visualization results')
args = parser.parse_args()

# Build the model from a config file and a checkpoint file
model = init_model(args.config, args.checkpoint, device=args.device)
result, data = inference_segmentor(model, args.pcd)

Thank you so much for the continuous help.

@filaPro (Owner) commented Dec 27, 2023

Yes, a script like this should work. By the way, TD3D uses mmdet3d version 1.0, while OneFormer3D uses 1.1, and there were a lot of changes between these two versions.
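
A quick sanity check (not part of the repo) to see which mmdet3d line is installed before reusing the TD3D-style snippet above:

# Print the installed mmdet3d version.
# TD3D targets mmdet3d 1.0.x, while OneFormer3D targets 1.1.x.
import mmdet3d
print(mmdet3d.__version__)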

@Lizhinwafu

I also want to train on my own data. My data has only two semantic categories; how should I change the code in the configuration file (https://github.com/filaPro/oneformer3d/blob/main/configs/oneformer3d_1xb2_s3dis-area-5.py)?

-- (L200) stuff_cls=[0, 1, 2, 3, 4, 5, 6, 12],
thing_cls=[7, 8, 9, 10, 11]))

What do these two lines mean? I only have two categories; how can I change them?

@filaPro (Owner) commented Dec 28, 2023

These stuff_cls and thing_cls are only used for panoptic evaluation. You can simply delete the panoptic segmentation call from evaluation. Or label your background as class 0, and your two instance classes as 1 and 2.
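
For example, with background as class 0 and the two instance classes as 1 and 2, those lines of the config (a sketch using the same keys quoted above; everything else stays as in the original file) would become:

stuff_cls=[0],      # background only
thing_cls=[1, 2]))  # the two instance classes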

@mostafa501 (Author)

@filaPro Thank you for the reply. To run the test demo code, I first tried to install the required libraries and run it, but I got this error:

egmentation/oneformer3d-main$ /home/navlab/anaconda3/envs/oneformer3d1/bin/python /media/navlab/GNSS/work/instance_segmentation/oneformer3d-main/demo/pc_seg_demo.py
Traceback (most recent call last):
  File "/media/navlab/GNSS/work/instance_segmentation/oneformer3d-main/demo/pc_seg_demo.py", line 28, in <module>
    model = init_model(args.config, args.checkpoint, device=args.device)
  File "/media/navlab/GNSS/work/instance_segmentation/oneformer3d-main/mmdetection3d/mmdet3d/apis/inference.py", line 59, in init_model
    config = Config.fromfile(config)
  File "/home/navlab/anaconda3/envs/oneformer3d1/lib/python3.8/site-packages/mmengine/config/config.py", line 462, in fromfile
    import_modules_from_strings(**cfg_dict['custom_imports'])
  File "/home/navlab/anaconda3/envs/oneformer3d1/lib/python3.8/site-packages/mmengine/utils/misc.py", line 77, in import_modules_from_strings
    imported_tmp = import_module(imp)
  File "/home/navlab/anaconda3/envs/oneformer3d1/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/media/navlab/GNSS/work/instance_segmentation/oneformer3d-main/oneformer3d/oneformer3d.py", line 4, in <module>
    from torch_scatter import scatter_mean
  File "/home/navlab/.local/lib/python3.8/site-packages/torch_scatter/__init__.py", line 16, in <module>
    torch.ops.load_library(spec.origin)
  File "/home/navlab/anaconda3/envs/oneformer3d1/lib/python3.8/site-packages/torch/_ops.py", line 852, in load_library
    ctypes.CDLL(path)
  File "/home/navlab/anaconda3/envs/oneformer3d1/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory

The error seems to be related to the CUDA version, so I checked my GPU's CUDA version. Can you tell me whether I can install this model, or what I should do to install the OneFormer3D network model (given that I have two versions of CUDA)?
Thank you.

(oneformer3d1) navlab@navlab-ProLiant-DL380-Gen10:/media/navlab/GNSS/work/instance_segmentation/oneformer3d-main/mmdetection3d$ nvidia-smi
Thu Dec 28 21:04:29 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02   Driver Version: 470.223.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000000:37:00.0 Off |                    0 |
| N/A   48C    P0    29W / 250W |      4MiB / 32510MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2215      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+

(oneformer3d1) navlab@navlab-ProLiant-DL380-Gen10:/media/navlab/GNSS/work/instance_segmentation/oneformer3d-main/mmdetection3d$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

@filaPro (Owner) commented Dec 28, 2023

nvcc --version should show exactly CUDA version 11.6. This is because, if you follow our Dockerfile, we download mmcv, torch-scatter and spconv built only for CUDA 11.6.
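
A quick way to confirm the environment matches (a sanity-check sketch, not from the repo):

# Check which CUDA runtime the installed torch build was compiled against;
# it should match the cu116 wheels of mmcv / torch-scatter / spconv.
import torch
print(torch.__version__)          # expect a +cu116 build
print(torch.version.cuda)         # expect '11.6'
print(torch.cuda.is_available())  # True once the driver and runtime line up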

@simo23 commented Jan 25, 2024

@filaPro thank you so much for this amazing work. I tried to play around with a solution similar to what @mostafa501 posted above, which is what mmdet3d suggests, but I ran into some issues related to mmdet3d's internal data structures.

I believe the code is trying to find the ground truth for the provided point cloud. I've seen that you encountered and solved a similar issue here, and similar errors are happening to me as well.

Could you please provide a working example that runs the method on a custom input point cloud without any labels available? I think it would be a great addition to the repository, as Mask3D offers for example, so everyone would be able to play with the method and see how it works on their own data.

@filaPro (Owner) commented Jan 25, 2024

Unfortunately, I don't have much time now, as I am switching to a new job in a new country.

I don't quite understand why something like demo.ipynb shouldn't work. It certainly doesn't need annotations. In our test pipeline for the S3DIS dataset we return only points. You can use the same pipeline, just remove the LoadAnnotations3D step here. For ScanNet it is much more difficult, as we use superpoint clustering before running the model.
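
For reference, a hedged sketch of such an annotation-free pipeline; the transform names below are the generic mmdet3d 1.1 ones, so the exact types and arguments should be copied from the repo's S3DIS config rather than from here:

# Sketch only: unlabeled-inference pipeline with the LoadAnnotations3D step
# dropped, as suggested above. Assumes xyz+rgb points; adjust load_dim/use_dim.
test_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='DEPTH',
        shift_height=False,
        use_color=True,
        load_dim=6,
        use_dim=[0, 1, 2, 3, 4, 5]),
    # dict(type='LoadAnnotations3D', ...),  # removed: no labels for custom clouds
    dict(type='NormalizePointsColor', color_mean=None),
    dict(type='Pack3DDetInputs', keys=['points'])
]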

@iamthephd

These stuff_cls and thing_cls are only used for panoptic evaluation. You can simply delete the panoptic segmentation call from evaluation. Or label your background as class 0, and your two instance classes as 1 and 2.

I have deleted the panoptic segmentation call by commenting out lines 137 to 140 and ret_pan on line 173 in this file, but I am getting an error during evaluation:

File "/workspace/oneformer3d/oneformer3d.py", line 952, in pred_sem
    seg_map = mask_pred.argmax(0)
IndexError: argmax(): Expected reduction dim 0 to have non-zero size.

@Lizhinwafu

These stuff_cls and thing_cls are only used for panoptic evaluation. You can simply delete the panoptic segmentation call from evaluation. Or label your background as class 0, and your two instance classes as 1 and 2.

I have deleted the panoptic segmentation call by commenting out lines 137 to 140 and ret_pan on line 173 in this file, but I am getting an error during evaluation:

File "/workspace/oneformer3d/oneformer3d.py", line 952, in pred_sem
    seg_map = mask_pred.argmax(0)
IndexError: argmax(): Expected reduction dim 0 to have non-zero size.

Have you solved it now?

@iamthephd

@Lizhinwafu
Yes, I was able to solve it!

  1. Modify the class ids in mmdetection3d/tools/dataset_converters/s3dis_data_utils.py based on your dataset. Modifications are needed at lines 28, 180, and 181.
  2. Modify the class names in mmdetection3d/tools/dataset_converters/update_infos_to_v2.py at line 538.
  3. Modify the METAINFO in oneformer3d/s3dis_dataset.py by changing the class names and count (see the sketch after this list).
  4. Modify configs/oneformer3d_1xb4_scannet.py by changing num_instance_classes, num_semantic_classes, class_names, etc. There are many changes required in this file (though doing this is quite intuitive).

With this, I was able to train on my custom dataset!
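
For step 3, a sketch of what the edited METAINFO could look like for a hypothetical two-class dataset (class names, colours and ids below are made up; keep the exact key set that the repo's oneformer3d/s3dis_dataset.py defines):

# Sketch only: METAINFO for a hypothetical two-class dataset, following the
# usual mmdet3d 1.1 dataset convention.
METAINFO = dict(
    classes=('class_a', 'class_b'),      # your two class names
    palette=[(0, 255, 0), (255, 0, 0)],  # one RGB colour per class
    seg_valid_class_ids=(0, 1),
    seg_all_class_ids=tuple(range(2)))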

@Lizhinwafu commented Jul 25, 2024 via email
