A lightweight network for low-texture reconstruction
## Requirements
- python 3.8
- pytorch 1.12.1
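A minimal environment matching these versions can be created as follows; the environment name and the cu113 CUDA build are assumptions, and the repo may require additional packages:

```bash
# Hypothetical setup; environment name and CUDA build are assumptions.
conda create -n llrmvsnet python=3.8 -y
conda activate llrmvsnet
pip install torch==1.12.1 torchvision==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu113
```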
## Training
- Download the preprocessed DTU training data and Depths_raw (both from Original MVSNet), and unzip them into the $MVS_TRAINING folder.
- In scripts/train.sh, set MVS_TRAINING to your training data path.
- Train LLR-MVSNet by running scripts/train.sh; a rough sketch of the script follows this list.
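As a rough guide only, an MVSNet-style training script typically looks like the sketch below; train.py and every flag name are assumptions, not this repo's actual interface, so consult scripts/train.sh for the real one:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of scripts/train.sh; train.py and all flags are assumptions.
MVS_TRAINING="/path/to/dtu_training/"   # folder holding the preprocessed DTU data and Depths_raw

python train.py \
    --trainpath=$MVS_TRAINING \
    --batch_size=2 \
    --logdir=./checkpoints
```

## Testing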
- Download our pre-processed datasets: DTU's testing set (from Original MVSNet), Tanks & Temples, and the ETH3D benchmark. Each dataset is already organized as follows:
```
root_directory
├── scan1 (scene_name1)
├── scan2 (scene_name2)
│   ├── images
│   │   ├── 00000000.jpg
│   │   ├── 00000001.jpg
│   │   └── ...
│   ├── cams
│   │   ├── 00000000_cam.txt
│   │   ├── 00000001_cam.txt
│   │   └── ...
│   └── pair.txt
└── ...
```
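The pair.txt and per-view camera files follow the Original MVSNet text formats; the annotated sketch below uses placeholder values (in some variants the last line of a camera file also carries the number of depth samples and the maximum depth):

```
# pair.txt (placeholder values)
46                              # total number of views in the scan
0                               # reference view index
10 10 2346.41 1 2036.53 ...     # source-view count, then (view id, score) pairs
1                               # next reference view index
...

# 00000000_cam.txt (one file per view)
extrinsic                       # 4x4 world-to-camera matrix on the next four rows
...
intrinsic                       # 3x3 camera matrix on the next three rows
...
DEPTH_MIN DEPTH_INTERVAL        # depth sampling range for plane sweeping
```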
- In scripts/test.sh, set DTU_TESTPATH to your testing data path.
- DTU_CKPT_FILE is set to your pretrained checkpoint file by default; you can also download our pretrained model.
- Test on GPU by running scripts/test.sh (a rough sketch is shown below). The code performs depth map estimation followed by depth fusion, and the outputs are point clouds in PLY format.
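As with training, the sketch below is only a guide; test.py and its flag names are assumptions in the spirit of MVSNet-style repositories, so consult scripts/test.sh for the real interface:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of scripts/test.sh; test.py and all flags are assumptions.
DTU_TESTPATH="/path/to/dtu_test/"          # pre-processed DTU testing set
DTU_CKPT_FILE="./checkpoints/model.ckpt"   # pretrained checkpoint

python test.py \
    --testpath=$DTU_TESTPATH \
    --loadckpt=$DTU_CKPT_FILE \
    --outdir=./outputs                     # depth maps and fused point clouds (.ply)
```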
- For quantitative evaluation on the DTU dataset, download SampleSet and Points. Unzip them and place the Points folder in SampleSet/MVS Data/. The structure looks like:

```
SampleSet
└── MVS Data
    └── Points
```

In evaluations/dtu/BaseEvalMain_web.m, set dataPath to the path of SampleSet/MVS Data/, plyPath to the directory that stores the reconstructed point clouds, and resultsPath to the directory that stores the evaluation results. Then run evaluations/dtu/BaseEvalMain_web.m in MATLAB, for example as shown below.
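To run the evaluation without opening the MATLAB GUI, a headless invocation such as the following should work (matlab must be on your PATH; -nodisplay, -nosplash, and -r are standard MATLAB flags):

```bash
# Launch the DTU evaluation script headlessly.
cd evaluations/dtu
matlab -nodisplay -nosplash -r "BaseEvalMain_web; exit"
```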
- For quantitative evaluation on the Tanks & Temples and ETH3D benchmarks, please submit your reconstruction results to the corresponding benchmark websites.
## Results on DTU

| Method | Acc. (mm) | Comp. (mm) | Overall (mm) |
|---|---|---|---|
| CasMVSNet | 0.325 | 0.385 | 0.355 |
| LLR-MVSNet | 0.314 | 0.318 | 0.316 |
## Results on Tanks and Temples benchmark

F-scores (%) on the intermediate set:

| Mean | Family | Francis | Horse | Lighthouse | M60 | Panther | Playground | Train |
|---|---|---|---|---|---|---|---|---|
| 60.7 | 80.09 | 63.28 | 53.27 | 57.74 | 60.74 | 57.63 | 54.93 | 57.91 |
## Results on ETH3D benchmark
## Acknowledgements
Our work is partially based on these open-source works: MVSNet, MVSNet-pytorch, cascade-stereo, PatchmatchNet, MVSTER. We appreciate their contributions to the MVS community.