This code is a reimplementation of the following paper in Python using PyTorch:
Variational Uncalibrated Photometric Stereo under General Lighting Haefner, B., Ye, Z., Gao, M., Wu, T., Quéau, Y. and Cremers, D.; In International Conference on Computer Vision (ICCV), 2019.
We propose an efficient principled variational approach to uncalibrated PS under general illumination. To this end, the Lambertian reflectance model is approximated through a spherical harmonic expansion, which preserves the spatial invariance of the lighting. The joint recovery of shape, reflectance and illumination is then formulated as a single variational problem.
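For intuition, here is a minimal PyTorch sketch of the first-order (4-coefficient) spherical-harmonic image formation model; the 9-coefficient case adds the second-order basis functions. The function name and tensor layout are illustrative only (they follow the data shapes documented further below, not the repository's internal API):

```python
import torch

def render_sh_order1(albedo: torch.Tensor,
                     normals: torch.Tensor,
                     light: torch.Tensor) -> torch.Tensor:
    """Lambertian rendering under first-order spherical harmonic lighting.

    albedo:  (C, H, W) reflectance
    normals: (3, H, W) unit surface normals
    light:   (4, C) SH lighting coefficients per color channel
             (SH normalization constants folded into the coefficients)
    """
    ones = torch.ones_like(normals[:1])         # constant (zeroth-order) SH band
    basis = torch.cat([ones, normals], dim=0)   # (4, H, W) SH basis of the normals
    shading = torch.einsum('khw,kc->chw', basis, light)
    return albedo * shading                     # (C, H, W) rendered image
```

Per-image lighting vectors of shape {4 or 9}xC correspond to the gt_light tensor format listed further below.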
The original MATLAB version is available here.
Has been tested under:
- Ubuntu 20.04.5 LTS (Focal Fossa) with an Intel(R) Core(TM) i7-4702HQ CPU @ 2.20GHz and 15 GB of RAM
- Ubuntu 20.04.5 LTS (Focal Fossa) with an Intel(R) Xeon(R) CPU E5-2637 v3 @ 3.50GHz, 31 GB of RAM, and an NVIDIA GeForce GTX 1070 with 8192 MiB
Have a look at environment.yml
to see the Python dependencies; e.g., PyTorch is needed, but not necessarily with GPU support.
$ conda env create -f environment.yml
$ conda activate general_ups
Run
$ bash data/download.sh
to download two data sets into the data folder:
- synthetic_joyfulyell_hippie (32 MB)
- xtion_backpack_sf4_ups (359 MB)
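As a quick sanity check of the download (this only assumes the file paths used in the manual example further below), you can inspect the tensors with PyTorch:

```python
import torch

# Paths match the manual invocation shown further below.
images = torch.load('data/synthetic_joyfulyell_hippie/images.pth')
K = torch.load('data/synthetic_joyfulyell_hippie/K.pth')

print(images.shape)  # NxCxHxW, see the data format list below
print(K)             # 3x3 camera intrinsics
```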
For each of the three usages described here, we provide
- two example data sets, and
- the possibility to run your own dataset
$ python main.py example1 # Run the synthetic example data set
or
$ python main.py example2 # Run the real-world example data set
Run
$ python main.py -h
and
$ python main.py manual -h
to see all possible options.
For example you can run the provided data set manually (internally this is what happens):
$ python main.py \
--gpu \
--output=./output/synthetic_joyfulyell_hippie \
manual \
--volume=24.77 \
--mask=data/synthetic_joyfulyell_hippie/mask.png \
--images=data/synthetic_joyfulyell_hippie/images.pth \
--intrinsics=data/synthetic_joyfulyell_hippie/K.pth \
--gt_depth=data/synthetic_joyfulyell_hippie/z_gt.pth \
--gt_light=data/synthetic_joyfulyell_hippie/l_gt_25x9x3.pth \
--gt_albedo=data/synthetic_joyfulyell_hippie/rho_gt.pth
Note that the order of the arguments matters, i.e. -g, --gpu_id, and --output have to be stated before the manual keyword.
This should result in the following error metrics:
rmse albedo: 0.06496411561965942
rmse_s: 0.11562050133943558
ae_s: 28.797739028930664
ae_n: 7.723351955413818 # paper reported 7.49
rmse_z: 0.6139106154441833
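For orientation, ae_n is the mean angular error of the estimated normals in degrees. A hedged sketch of such a metric (an illustration only, not the repository's exact evaluation code):

```python
import torch

def mean_angular_error_deg(n_est: torch.Tensor, n_gt: torch.Tensor) -> torch.Tensor:
    """Mean angular error in degrees between two (3, H, W) unit-normal maps.

    In practice the mean would be taken over masked-in pixels only.
    """
    n_est = torch.nn.functional.normalize(n_est, dim=0)
    n_gt = torch.nn.functional.normalize(n_gt, dim=0)
    cos = (n_est * n_gt).sum(dim=0).clamp(-1.0, 1.0)
    return torch.rad2deg(torch.acos(cos)).mean()
```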
You can also run the ballooning and the general_ups code separately; have a look at
$ python ballooning.py -h
$ python ballooning.py manual -h
$ python general_ups.py -h
$ python general_ups.py manual -h
This is handy if, for example,
- you'd like to test our initialization on your UPS solver, or
- you have your own depth initialization and you'd like to run our UPS solver on it.
To run your own data set, the following inputs are expected:
- images of shape NxCxHxW.
- binary mask of shape 1xHxW.
- Optional: intrinsics of shape 3x3. If not provided, orthographic projection is assumed.
- Optional: gt_depth of shape HxW.
- Optional: gt_light of shape Nx{4 or 9}xC.
- Optional: gt_albedo of shape CxHxW.
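To prepare your own data for the manual mode, a minimal sketch of saving tensors in the expected shapes could look as follows. The directory, file names, and values are placeholders; note that the example data sets store the binary mask as a PNG image (mask.png) rather than a .pth file:

```python
import os
import torch

N, C, H, W = 20, 3, 480, 640                 # e.g. 20 RGB images of 640x480

images = torch.rand(N, C, H, W)              # replace with your real image stack
K = torch.tensor([[500.0,   0.0, W / 2.0],   # optional 3x3 intrinsics;
                  [  0.0, 500.0, H / 2.0],   # omit --intrinsics to assume
                  [  0.0,   0.0,   1.0]])    # orthographic projection

os.makedirs('data/my_dataset', exist_ok=True)
torch.save(images, 'data/my_dataset/images.pth')
torch.save(K, 'data/my_dataset/K.pth')
```

These files can then be passed via --images and --intrinsics, together with a binary mask of shape 1xHxW, analogous to the manual example above.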
If you make use of the library in any form in a scientific publication, please refer to https://github.com/Bjoernhaefner/general_ups_python
and cite the paper
@inproceedings{haefner2019variational,
title = {Variational Uncalibrated Photometric Stereo under General Lighting},
author = {Bjoern Haefner and Zhenzhang Ye and Maolin Gao and Tao Wu and Yvain Quéau and Daniel Cremers},
booktitle = {IEEE/CVF International Conference on Computer Vision (ICCV)},
year = {2019},
doi = {10.1109/ICCV.2019.00863},
}