Xu Ma, Yuqian Zhou, Xingqian Xu, Bin Sun, Valerii Filev, Nikita Orlov, Yun Fu, Humphrey Shi
Primary contact: Xu Ma
From left to right: (1) the input raster image, (2) output SVG of DiffVG (path=5), (3) output SVG of DiffVG (path=256), and (4) output of our LIVE (path=5). With only 5 paths, DiffVG cannot reconstruct the input image. When the path number is increased to 256 (far more than necessary), DiffVG is able to reconstruct the input. In contrast, our LIVE reconstructs the input smiling face with only 5 paths and produces a compact layer-wise representation. (The playback speeds of the three GIFs are re-scaled to match.)
We suggest using conda to create a new Python environment.
Requirements: 5.0 < GCC < 6.0; nvcc > 10.0.
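Before building, you can verify that your toolchain meets these requirements (a small sanity check; it assumes gcc and nvcc are already on your PATH):

gcc --version   # expect a 5.x release (5.0 < GCC < 6.0)
nvcc --version  # the release line should report CUDA > 10.0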
git clone git@github.com:ma-xu/LIVE.git
cd LIVE
conda create -n live python=3.7
conda activate live
conda install -y pytorch torchvision -c pytorch
conda install -y numpy scikit-image
conda install -y -c anaconda cmake
conda install -y -c conda-forge ffmpeg
pip install svgwrite svgpathtools cssutils numba torch-tools scikit-fmm easydict visdom
pip install opencv-python==4.5.4.60 # please install this version to avoid segmentation fault.
cd DiffVG
git submodule update --init --recursive
python setup.py install
cd ..
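After the build finishes, a one-line import check can confirm the installation (a minimal sketch; pydiffvg is the Python module that the DiffVG build installs):

python -c "import torch, pydiffvg; print('pydiffvg imported, CUDA available:', torch.cuda.is_available())"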
conda activate live
cd LIVE
# Please modify the parameters accordingly.
python main.py --config <config.yaml> --experiment <experiment-setting> --signature <given-folder-name> --target <input-image> --log_dir <log-dir>
# Here is a simple example:
python main.py --config config/base.yaml --experiment experiment_5x1 --signature smile --target figures/smile.png --log_dir log/
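To vectorize a whole folder of images with the same settings, you can loop over the targets (an illustrative sketch reusing the flags above; adjust the folder and config to your setup):

for f in figures/*.png; do
    python main.py --config config/base.yaml --experiment experiment_5x1 \
        --signature "$(basename "$f" .png)" --target "$f" --log_dir log/
done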
@inproceedings{xu2022live,
title={Towards Layer-wise Image Vectorization},
author={Ma, Xu and Zhou, Yuqian and Xu, Xingqian and Sun, Bin and Filev, Valerii and Orlov, Nikita and Fu, Yun and Shi, Humphrey},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2022}
}
Our implementation is mainly based on the DiffVG codebase. We gratefully thank the authors for their wonderful work.
LIVE is released under the Apache-2.0 license. Please contact the authors for commercial use.