Created by Keunhong Park, Konstantinos Rematas, Ali Farhadi, and Steven M. Seitz.
[Project Page] [arXiv]
If you find PhotoShape useful, please consider citing:
@article{photoshape2018,
  author = {Park, Keunhong and Rematas, Konstantinos and Farhadi, Ali and Seitz, Steven M.},
  title = {PhotoShape: Photorealistic Materials for Large-Scale Shape Collections},
  journal = {ACM Trans. Graph.},
  issue_date = {November 2018},
  volume = {37},
  number = {6},
  month = nov,
  year = {2018},
  articleno = {192},
}
We try to provide everything required to get up and running, but due to licensing and copyright restrictions you will have to download/purchase some data directly from the source.
I recommend pyenv for managing python environments, but anything should work.
# Install pyenv
curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash
pyenv update
# Install dependencies and python
sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
xz-utils tk-dev libffi-dev liblzma-dev
pyenv install 3.7.1
pyenv virtualenv 3.7.1 photoshape
# Enter the environment and set paths
source env.sh
- Blender 2.79 or higher
- Docker and Docker-Compose
- NVIDIA GPU compatible with PyTorch and OpenGL rendering.
If you just need to use the network, all you need are the basic dependencies.
pip install torch torchvision
pip install -r requirements.txt
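Optionally, you can sanity-check that PyTorch is installed and can see your GPU (just a quick check, not part of the pipeline):
import torch

# Confirm PyTorch is importable and the NVIDIA GPU is visible.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))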
If you want to run code that requires the dense CRF implementation, you'll need to install that package. This includes the fine alignment step and the substance segmentation for exemplars.
# For dense CRF (needed for substance segmentation).
pip install git+https://github.com/lucasb-eyer/pydensecrf.git
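To verify the install, here is a minimal dense CRF refinement pass using the standard pydensecrf API. The array shapes and kernel parameters below are illustrative, not the values our pipeline uses:
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

# Toy inputs: an RGB image and per-pixel softmax probabilities for 3 labels.
h, w, n_labels = 240, 320, 3
image = np.zeros((h, w, 3), dtype=np.uint8)
probs = np.full((n_labels, h, w), 1.0 / n_labels, dtype=np.float32)

d = dcrf.DenseCRF2D(w, h, n_labels)
d.setUnaryEnergy(unary_from_softmax(probs))
# Smoothness and appearance kernels; sxy/srgb/compat values are illustrative.
d.addPairwiseGaussian(sxy=3, compat=3)
d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=image, compat=10)
q = d.inference(5)
labels = np.argmax(q, axis=0).reshape(h, w)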
Unfortunately, our code for rendering segments and alignment renderings involves custom shaders with vispy. This can be finicky to set up. You'll need to install the following fork of PyOpenGL, which supports NVIDIA extensions.
# For custom shader support (needed for certain rendering steps in the pipeline)
pip install git+https://github.com/mcfletch/pyopengl.git
Please create an issue if something doesn't work as expected and I'll try to address it.
First we have to fetch this codebase.
# Clone this repository
git clone [email protected]:keunhong/photoshape.git
git submodule update --init --recursive
To facilitate efficient querying and data storage we use PostgreSQL for most metadata.
First install docker and docker-compose following the instructions linked. Once you have those installed, run the database using the following command:
# Remove -d to show logs
docker-compose up -d
# You can stop the service by running
# docker-compose down
Once the database is initialized and running, you can import data from the provided SQL dumps:
docker exec -i $(docker-compose ps -q postgres) psql -U photoshape_user photoshape_db < data/postgres/envmaps.sql
docker exec -i $(docker-compose ps -q postgres) psql -U photoshape_user photoshape_db < data/postgres/exemplars.sql
docker exec -i $(docker-compose ps -q postgres) psql -U photoshape_user photoshape_db < data/postgres/shapes.sql
docker exec -i $(docker-compose ps -q postgres) psql -U photoshape_user photoshape_db < data/postgres/materials.sql
docker exec -i $(docker-compose ps -q postgres) psql -U photoshape_user photoshape_db < data/postgres/exemplar_shape_pairs.sql
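You can verify the import with a quick query. This is a sketch assuming the service is exposed on localhost:5432 and that table names match the dump file names; check docker-compose.yml for the actual password and port mapping:
import psycopg2  # pip install psycopg2-binary

# Host, port, and password here are assumptions; see docker-compose.yml.
conn = psycopg2.connect(host='localhost', port=5432,
                        dbname='photoshape_db', user='photoshape_user',
                        password='CHANGE_ME')
with conn, conn.cursor() as cur:
    # Table name assumed to match the dump file name (shapes.sql).
    cur.execute('SELECT COUNT(*) FROM shapes')
    print('shapes:', cur.fetchone()[0])
conn.close()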
Data directories and paths are defined in src/terial/config.py. Modify the directories so that they point to the correct locations on your disk(s). We've created a basic skeleton of the directory structure in data.
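For illustration, the edited paths might look like the sketch below. BLOB_ROOT and the SHAPENET_* names appear elsewhere in this README, but DATA_ROOT, the exact sub-paths, and whether the values are Path objects or strings are all assumptions; check the file for the real variable set.
# Sketch of edits to src/terial/config.py (variable layout is assumed).
from pathlib import Path

DATA_ROOT = Path('/path/to/photoshape/data')
BLOB_ROOT = DATA_ROOT / 'blobs'                    # can grow very large
SHAPENET_CORE_DIR = DATA_ROOT / 'shapenet'         # extracted ShapeNetCore.v2
SHAPENET_TAXONOMY_PATH = SHAPENET_CORE_DIR / 'taxonomy.json'  # assumed location
SHAPENET_META_DIR = DATA_ROOT / 'shapenet-meta'    # assumed name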
We use Blender for shape processing and rendering. Unfortunately, installing Blender as a Python module is a fairly involved process. Follow the directions here.
There also seems to be some effort to have this prepackaged at https://pypi.org/project/bpy/, but I cannot guarantee that it will work.
Blob data associated with rows in the database is stored in a blobs directory. This directory can be configured in config.py by modifying BLOB_ROOT. We recommend using a large disk for this since the generated data can be quite large.
We've preprocessed the exemplar images. Download them to data/blobs:
cd data/blobs
curl -L -O https://homes.cs.washington.edu/~kpar/photoshape/blobs/exemplars.tar.gz
tar xvzf exemplars.tar.gz
You will have to download and process Herman Miller 3D shapes. Please review their terms of use before proceeding.
cd data/hermanmiller
# This will download zip files to the zip directory and then
# extract *.3ds models to the extracted directory.
bash download.sh
Next we will import the downloaded shapes and link them to rows in our database. Make sure that all of the *.3ds files are in the root of the directory; some may have been extracted into subdirectories.
python -m terial.shapes.link_3ds data/hermanmiller/extracted/
The associated shape paths in data/blobs/shapes should now contain uvmapped_v2.obj files.
Download the ShapeNetCore.v2.zip file from the ShapeNet website and extract it to data/shapenet (or somewhere else, but remember to update the config if you do).
You will need to preprocess the ShapeNet shapes so that they have UV maps. Ensure that SHAPENET_CORE_DIR, SHAPENET_TAXONOMY_PATH, and SHAPENET_META_DIR are properly set, then run:
# Preprocess a certain category.
python -m terial.shapes.preprocess_shapenet --category chair
Then we need to import the processed shape files so that the meshes are associated with the database rows.
python -m terial.shapes.link_shapenet
You can download the material preview renders:
cd data/blobs
curl -L -O https://homes.cs.washington.edu/~kpar/photoshape/blobs/materials.tar.gz
tar xvzf materials.tar.gz
Materials from Adobe Stock are subject to copyright and must be downloaded from their source. Please refer to the source_url field in materials.json.
Place the extracted materials, sorted by substance type, in the data/materials/adobe-stock directory.
Adobe Stock materials should look like this:
adobe-stock/fabric/AdobeStock_125038380
├── denim_track_rough
│ ├── denim_track_rough_basecolor.png
│ ├── denim_track_rough_height.png
│ ├── denim_track_rough_metallic.png
│ ├── denim_track_rough_normal.png
│ └── denim_track_rough_roughness.png
└── denim_track_rough.mdl
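If you want to double-check the layout after extracting, here is a small sketch that flags missing texture maps. The required suffixes come from the listing above; the script itself is not part of the pipeline:
from pathlib import Path

ADOBE_ROOT = Path('data/materials/adobe-stock')
REQUIRED = ('basecolor', 'height', 'metallic', 'normal', 'roughness')

# Walk substance/AdobeStock_* directories and report any texture set that
# is missing one of the maps shown in the listing above.
for mat_dir in sorted(ADOBE_ROOT.glob('*/AdobeStock_*')):
    for tex_dir in (d for d in mat_dir.iterdir() if d.is_dir()):
        missing = [s for s in REQUIRED
                   if not (tex_dir / f'{tex_dir.name}_{s}.png').exists()]
        if missing:
            print(f'{tex_dir}: missing {missing}')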
Materials from Poliigon are subject to copyright and must be downloaded from their source. Please refer to the source_url field in materials.json.
Place the extracted materials, sorted by substance type, in the data/materials/poliigon directory.
Download the 1K version of the materials. Poliigon materials should look like this:
poliigon/fabric/FabricCanvas001_1k
├── FabricCanvas001_COL_VAR1_1K.jpg
├── FabricCanvas001_COL_VAR2_1K.jpg
├── FabricCanvas001_DISP_1K.jpg
├── FabricCanvas001_GLOSS_1K.jpg
├── FabricCanvas001_NRM_1K.jpg
├── FabricCanvas001_OBJECTID_1K.png
└── FabricCanvas001_REFL_1K.jpg
We provide materials adapted from Aittala et al. as well as materials we scanned ourselves. These SVBRDFs have been converted from the original Aittala BRDF to the Beckmann model in order to support rendering in Blender.
Download the files and extract them to data/materials.
curl -L -O https://homes.cs.washington.edu/~kpar/photoshape/materials-500x500/aittala-beckmann.tar.gz
tar xvzf aittala-beckmann.tar.gz
Materials from vray-material.de. Download the files and extract them to data/materials.
curl -L -O https://homes.cs.washington.edu/~kpar/photoshape/materials-500x500/vray-materials-de.tar.gz
tar xvzf vray-materials-de.tar.gz
Materials with type BLINN_PHONG and PRINCIPLED are defined in the params field of each material in materials.json.
We've processed our data using PostgreSQL. For your convenience, we've exported the data as an easier-to-use collection of JSON files found in data/*.json.
This file contains meta-data regarding the 3D shapes used in this project.
- id: the unique identifier for the shape
- source: the source of the shape (either shapenet or hermanmiller)
- source_id: the identifier from the source
- category: the object category of the shape (e.g., chair, cabinet, etc.)
- split_set: our train/validation split used for training our material classifier
- azimuth_correction: a correction to the azimuth of the model orientation, used to create preview renderings
Contains meta-data regarding the exemplar images.
Contains meta-data for Exemplar-Shape pairs.
- id: a unique identifier for the pair
- shape_id: the id of the shape associated with this pair
- exemplar_id: the id of the exemplar associated with this pair
- fov: the field of view of the aligned camera
- azimuth: the azimuthal angle in radians of the aligned camera
- elevation: the elevation angle in radians of the aligned camera
- rank: the rank of this pair in terms of HoG distance between the shape and exemplar
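As an example of how these fields relate, here is a sketch converting a pair's azimuth and elevation to a camera position under an assumed y-up spherical convention. The convention and the camera distance are assumptions; the actual alignment code may differ:
import math

def camera_position(azimuth, elevation, distance=2.0):
    # y-up spherical convention and the distance value are assumptions.
    x = distance * math.cos(elevation) * math.cos(azimuth)
    y = distance * math.sin(elevation)
    z = distance * math.cos(elevation) * math.sin(azimuth)
    return x, y, z

# e.g., for one pair record with azimuth/elevation in radians:
print(camera_position(azimuth=1.2, elevation=0.4))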
Contains meta-data for the materials.
- id: the unique identifier for the material
- name: a name for the material
- type: the type of BRDF. One of: POLIIGON, MDL, VRAY, BLINN_PHONG, AITTALA_BECKMANN, PRINCIPLED
- author: the author of the material
- source: the source of the material (e.g., poliigon, adobe_stock, aittala, merl, etc.)
- source_id: the unique identifier from the source
- source_url: the URL the material can be downloaded from
- default_scale: the UV mapping should be scaled by 2^s, where s is this value
- MDL is the NVIDIA Material Definition Language.
- PRINCIPLED materials are the Disney principled BRDF as implemented in Blender.
- AITTALA_BECKMANN materials are the Aittala TwoShot SVBRDF materials fitted to the Beckmann model as implemented in Blender.
- For BLINN_PHONG and PRINCIPLED materials, the params field will contain the material properties.
- For MDL materials, the params field will contain material properties and/or locations of textures.
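For example, reading materials.json and applying these fields (assuming the file is a flat list of material records):
import json

with open('data/materials.json') as f:
    materials = json.load(f)  # assumed: a flat list of material records

for mat in materials:
    # default_scale is an exponent: the UV mapping is scaled by 2**s.
    uv_scale = 2 ** mat['default_scale']
    if mat['type'] in ('BLINN_PHONG', 'PRINCIPLED'):
        params = mat['params']  # BRDF properties live here for these types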
This contains the mapping from material_id to output class labels for the material classifier.
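A sketch of how this mapping could be used to decode classifier outputs; the file name and the direction of the mapping (material_id to class index) are assumptions:
import json

# The file name below is hypothetical; use whichever file holds the mapping.
with open('data/classifier/material_id_to_label.json') as f:
    material_id_to_label = json.load(f)

# Invert it so a predicted class index maps back to a material id.
label_to_material_id = {int(v): int(k) for k, v in material_id_to_label.items()}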
Download the weights for our network:
mkdir -p data/classifier
cd data/classifier
curl -L -O https://homes.cs.washington.edu/~kpar/photoshape/classifier/model.tar.gz
tar xvzf model.tar.gz
Let's try to infer the materials of our sample input in data/classifier/inputs. For this example you'll need the material blobs downloaded as above and config.py set up properly. You'll also need to have set up the database, since the script will query it for the materials.
We'll show the inference results using visdom. Please install visdom, and have it running in the background somewhere:
pip install visdom
python -m visdom.server
Now you can run the example script:
python -m terial.classifier.inference.infer_one \
--checkpoint-path data/classifier/model/model_best.pth.tar \
data/classifier/inputs/chair1.png \
data/classifier/inputs/chair1.mask.png
You should see the following in visdom if everything works correctly: