We follow the procedure in [votenet](https://github.com/facebookresearch/votenet).
- Download ScanNet v2 data HERE. Link or move the `scans` folder into this directory. If you are performing segmentation tasks and want to upload the results to the official benchmark, please also link or move the `scans_test` folder into this directory. A minimal linking sketch is given after this list.
- In this directory, extract point clouds and annotations by running:

  ```bash
  python batch_load_scannet_data.py
  ```

  Add the `--scannet200` flag if you want to generate annotations for the ScanNet200 dataset.
- Enter the project root directory and generate training data by running:

  ```bash
  python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
  ```

  Or, for ScanNet200:

  ```bash
  mkdir data/scannet200
  python tools/create_data.py scannet200 --root-path ./data/scannet --out-dir ./data/scannet200 --extra-tag scannet200
  ```
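As referenced in the first step above, linking (rather than copying) keeps the raw ScanNet release in place. A minimal sketch, assuming the unpacked release lives at the hypothetical path `/path/to/scannet`:

```python
import os

# Hypothetical location of the unpacked ScanNet v2 release -- adjust to your setup.
SCANNET_ROOT = '/path/to/scannet'

# Link 'scans' (and 'scans_test' for benchmark submissions) into this
# directory instead of duplicating the raw data on disk.
for folder in ('scans', 'scans_test'):
    src = os.path.join(SCANNET_ROOT, folder)
    if os.path.isdir(src) and not os.path.exists(folder):
        os.symlink(src, folder)
```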
The overall process for ScanNet can be achieved through the following script:

```bash
python batch_load_scannet_data.py
cd ../..
python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
```
Or for ScanNet200:

```bash
python batch_load_scannet_data.py --scannet200
cd ../..
mkdir data/scannet200
python tools/create_data.py scannet200 --root-path ./data/scannet --out-dir ./data/scannet200 --extra-tag scannet200
```
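To sanity-check the result of `create_data.py`, the generated info files can be loaded with pickle. A minimal sketch, assuming the mmdetection3d-style info layout with `metainfo` and `data_list` keys (an assumption, not something this README guarantees):

```python
import pickle

# Load the generated training infos and print a quick summary.
with open('data/scannet/scannet_oneformer3d_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

# Assumed mmdetection3d-style layout: a dict with 'metainfo' and 'data_list'.
print('num scenes:', len(infos['data_list']))
print('first sample keys:', sorted(infos['data_list'][0].keys()))
```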
The directory structure after preprocessing should be as below:

```
scannet
├── meta_data
├── batch_load_scannet_data.py
├── load_scannet_data.py
├── scannet_utils.py
├── README.md
├── scans
├── scans_test
├── scannet_instance_data
├── points
│   ├── xxxxx.bin
├── instance_mask
│   ├── xxxxx.bin
├── semantic_mask
│   ├── xxxxx.bin
├── super_points
│   ├── xxxxx.bin
├── seg_info
│   ├── train_label_weight.npy
│   ├── train_resampled_scene_idxs.npy
│   ├── val_label_weight.npy
│   ├── val_resampled_scene_idxs.npy
├── scannet_oneformer3d_infos_train.pkl
├── scannet_oneformer3d_infos_val.pkl
├── scannet_oneformer3d_infos_test.pkl
```
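To spot-check a single preprocessed scene, the `.bin` files can be read back with NumPy. A minimal sketch, assuming points are stored as float32 with 6 values per point (XYZ + RGB) and the per-point masks as int64, which is the usual mmdetection3d ScanNet layout; adjust the dtypes if your loading pipeline config says otherwise:

```python
import numpy as np

# Replace xxxxx with a real scene id from data/scannet/points.
scene = 'xxxxx'

# Assumed layout: float32, 6 values per point (x, y, z, r, g, b).
points = np.fromfile(f'data/scannet/points/{scene}.bin',
                     dtype=np.float32).reshape(-1, 6)

# Assumed dtype for instance and semantic masks: int64, one label per point.
ins = np.fromfile(f'data/scannet/instance_mask/{scene}.bin', dtype=np.int64)
sem = np.fromfile(f'data/scannet/semantic_mask/{scene}.bin', dtype=np.int64)

print('points:', points.shape)
assert len(ins) == len(sem) == points.shape[0]
```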