
Codes release for "DIFFER: Moving Beyond 3D Reconstruction with Differentiable Feature Rendering", (CVPRW '19)


DIFFER

Source code for the paper "DIFFER: Moving Beyond 3D Reconstruction with Differentiable Feature Rendering", accepted at 3D-WiDGET, the First Workshop on Deep GEneraTive models for 3D understanding at CVPR (CVPRW 2019).

Overview

DIFFER proposes a differentiable rendering module that obtains feature projections, such as RGB images and part segmentation maps, from point clouds. The work extends CAPNet, which could render only mask projections from point clouds. The rendering module in DIFFER is depth aware and can therefore effectively project point cloud features onto the image plane. This allows us to train a point cloud reconstruction and feature prediction network end-to-end with only 2D supervision.
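To make the idea concrete, the sketch below shows one way a depth-aware feature projection can be written. This is an illustrative simplification, not the paper's exact formulation: the pinhole camera, the Gaussian splatting kernel, and the exponential depth weighting (which lets nearer points dominate a pixel, giving occlusion-aware rendering) are all assumptions made for this example.

```python
import numpy as np

def render_features(points, feats, H=32, W=32, f=32.0, sigma=1.0, beta=5.0):
    """Softly project per-point features (N, C) from camera-frame
    points (N, 3) onto an H x W image. Illustrative sketch only."""
    # Pinhole projection of each point onto the pixel grid.
    u = f * points[:, 0] / points[:, 2] + W / 2.0
    v = f * points[:, 1] / points[:, 2] + H / 2.0
    z = points[:, 2]

    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    img = np.zeros((H, W, feats.shape[1]))
    wsum = np.zeros((H, W))

    for i in range(points.shape[0]):
        # Spatial Gaussian splat around the projected location, scaled by
        # a soft visibility term that favors points nearer the camera.
        d2 = (xs - u[i]) ** 2 + (ys - v[i]) ** 2
        w = np.exp(-d2 / (2 * sigma ** 2)) * np.exp(-beta * z[i])
        img += w[..., None] * feats[i]
        wsum += w

    # Normalize so overlapping points blend like a softmax over depth.
    return img / np.maximum(wsum, 1e-8)[..., None]
```

Because every operation is smooth, gradients flow from a 2D feature loss back to both point positions and point features, which is what enables training with only 2D supervision.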

Dataset

We use the ShapeNet dataset in our experiments. For the part segmentation ground-truth labels, we use the part-annotated point clouds provided by Yi et al. We use the code provided by Tulsiani et al. to obtain the rendered images and part segmentation maps. Download links for the datasets are provided below:

Download each of the folders, extract them, and move them into data/. Save the rendered images and part segmentation maps in data/ShapeNet_rendered/ and data/partseg/ShapeNet_labels/ respectively.
The folder structure should be as follows:
--data/
    --ShapeNet_rendered/
    --ShapeNet_pcl/
    --splits/
    --partseg/
        --ShapeNet_rendered/
        --ShapeNet_labels/
        --ShapeNet_pcl/
        --splits/

Usage

Install TensorFlow. We recommend version 1.3 so that the additional TensorFlow ops can be compiled. Then clone the repository:

git clone https://github.com/klnavaneet/differ.git
cd differ

Training

Colored Point Cloud Reconstruction (RGB)

To train the model, run

cd rgb
bash run_train_rgb.sh

Evaluation

Colored Point Cloud Reconstruction (RGB)

For visualization and metric calculation, run

cd rgb
bash run_metrics_rgb.sh

Make sure the trained model exists before running the metric calculation code. Use the --visualize option in place of --tqdm to visualize the reconstructed 3D point clouds.
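For reference, the Chamfer distance is the standard metric for point cloud reconstruction quality; a minimal NumPy version is sketched below. This is an assumption about what the metric script computes, shown for clarity — the repository's run_metrics_rgb.sh may use a different implementation and normalization.

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric Chamfer distance between point sets p1 (N, 3) and p2 (M, 3):
    mean squared distance from each point to its nearest neighbor in the
    other set, summed over both directions."""
    # Pairwise squared distances, shape (N, M).
    d = np.sum((p1[:, None, :] - p2[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

A lower value indicates that the predicted and ground-truth point clouds cover each other more closely.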

TODO

  1. Provide pre-trained models
  2. Add code for part segmentation
  3. Add dataset and code for training and evaluation on the Pix3D dataset

Citation

If you make use of the code, please cite the following work:

@inproceedings{navaneet2019differ,
 author = {Navaneet, K L and Mandikal, Priyanka and Jampani, Varun and Babu, R Venkatesh},
 booktitle = {CVPR Workshops},
 title = {{DIFFER}: Moving Beyond 3D Reconstruction with Differentiable Feature Rendering},
 year = {2019}
}
