We provide PyTorch implementations for our CVPR 2020 paper "Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping" (paper, suppl).
This project generates multi-style artistic portrait drawings from face photos using a GAN-based model.
From left to right: input, output (style1), output (style2), output (style3).
- Linux or macOS
- Python 3
- CPU or NVIDIA GPU + CUDA CuDNN
- To install the dependencies, run

```bash
pip install -r requirements.txt
```
A Colab demo is available here.
- Download the pre-trained models from BaiduYun (extract code: c9h7) or GoogleDrive and rename the folder to `checkpoints`.
- Test on example photos: generate artistic portrait drawings for the example photos in the folder `./examples` using
```bash
# with GPU
python test_seq_style.py
# without GPU
python test_seq_style.py --gpu -1
```
The test results will be saved to an HTML file: `./results/pretrained/test_200/index3styles.html`. The result images are saved in `./results/pretrained/test_200/images3styles`, where `real`, `fake1`, `fake2`, and `fake3` correspond to the input face photo and the style1, style2, and style3 drawings, respectively.
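If you prefer to consume the results programmatically instead of through the HTML page, a minimal sketch like the one below can group the output images per input photo. The suffix naming `<name>_real.png`, `<name>_fake1.png`, etc. is our assumption (borrowed from the CycleGAN-style codebase this repo builds on); check the actual filenames in your results folder before relying on it.

```python
import os
from collections import defaultdict

# Minimal sketch: group result images per input photo. The suffix naming
# (<name>_real.png, <name>_fake1.png, ...) is an assumption; verify it on
# your own output folder.
result_dir = "./results/pretrained/test_200/images3styles"

groups = defaultdict(dict)
for fname in sorted(os.listdir(result_dir)):
    stem, ext = os.path.splitext(fname)
    if ext.lower() not in {".png", ".jpg"}:
        continue
    name, _, tag = stem.rpartition("_")  # e.g. ("photo1", "_", "fake2")
    groups[name][tag] = os.path.join(result_dir, fname)

for name, imgs in sorted(groups.items()):
    print(name, "->", [imgs.get(t) for t in ("real", "fake1", "fake2", "fake3")])
```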
- Test on your own photos: first use an image editor to crop the face region of your photo (or use the optional preprocess here; a rough sketch of this cropping step is shown after this step). Then specify the folder that contains the test photos with the option `--dataroot`, specify the save folder name with the option `--savefolder`, and run the above command again:
```bash
# with GPU
python test_seq_style.py --dataroot [input_folder] --savefolder [save_folder_name]
# without GPU
python test_seq_style.py --gpu -1 --dataroot [input_folder] --savefolder [save_folder_name]
# e.g.
python test_seq_style.py --gpu -1 --dataroot ./imgs/test1 --savefolder 3styles_test1
```
The test results will be saved to an HTML file: `./results/pretrained/test_200/index[save_folder_name].html`. The result images are saved in `./results/pretrained/test_200/images[save_folder_name]`.
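If you cannot run the linked preprocess, the following is a rough, hypothetical sketch of the cropping step using dlib's frontal face detector; the margin factor and the 512x512 output size are our assumptions, and the repo's own preprocess should be preferred for proper alignment.

```python
import dlib
import numpy as np
from PIL import Image

# Rough cropping sketch (not the repo's preprocess): detect the largest
# face with dlib and crop a square region around it with some margin.
detector = dlib.get_frontal_face_detector()

def crop_face(in_path, out_path, margin=0.6, size=512):
    img = Image.open(in_path).convert("RGB")
    dets = detector(np.array(img), 1)  # upsample once to catch small faces
    if not dets:
        raise ValueError(f"no face found in {in_path}")
    d = max(dets, key=lambda r: r.width() * r.height())
    cx, cy = (d.left() + d.right()) // 2, (d.top() + d.bottom()) // 2
    half = int(max(d.width(), d.height()) * (1 + margin) / 2)
    box = (max(cx - half, 0), max(cy - half, 0),
           min(cx + half, img.width), min(cy + half, img.height))
    img.crop(box).resize((size, size)).save(out_path)

crop_face("my_photo.jpg", "./imgs/test1/my_photo.png")
```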
An example HTML screenshot is shown below:
You can contact [email protected] with any questions.
- Prepare the dataset: 1) download face photos and portrait drawings from the internet (e.g. resources); 2) align and crop the photos and drawings, and prepare nose, eyes, and lips masks according to the preprocess instructions; 3) put the aligned photos under `./datasets/portrait_drawing/train/A`, the aligned drawings under `./datasets/portrait_drawing/train/B`, and the masks under `A_nose`, `A_eyes`, `A_lips`, `B_nose`, `B_eyes`, and `B_lips`, respectively (a sanity check for this layout is sketched below).
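Before training, it can help to sanity-check the layout. The snippet below is a hypothetical check (not part of the repo) and assumes the masks reuse the filenames of the photos and drawings they belong to:

```python
import os

# Hypothetical sanity check for the dataset layout above: every image in
# train/A and train/B should have matching nose/eyes/lips masks, assuming
# the masks share the image filenames.
root = "./datasets/portrait_drawing/train"

for side in ("A", "B"):
    images = sorted(os.listdir(os.path.join(root, side)))
    for part in ("nose", "eyes", "lips"):
        masks = set(os.listdir(os.path.join(root, f"{side}_{part}")))
        missing = [f for f in images if f not in masks]
        if missing:
            print(f"{side}_{part}: missing {len(missing)} masks, e.g. {missing[:3]}")
    print(f"{side}: {len(images)} images checked")
```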
- Train a 3-class style classifier and extract the 3-dim style feature of each drawing (as described in the paper). Save the style feature of each drawing in the training set in `.npy` format in the folder `./datasets/portrait_drawing/train/B_feat` (a sketch of this step follows below).
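As a rough illustration of this step, the sketch below runs a trained classifier over `train/B` and saves one 3-dim feature per drawing. The checkpoint name `style_classifier.pt`, the 224x224 preprocessing, and the use of softmax probabilities as the feature are assumptions on our part; follow the paper for the actual classifier and feature definition.

```python
import os
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# Hedged sketch of style-feature extraction. style_classifier.pt and the
# 224x224 preprocessing are hypothetical; see the paper for the real setup.
drawing_dir = "./datasets/portrait_drawing/train/B"
feat_dir = "./datasets/portrait_drawing/train/B_feat"
os.makedirs(feat_dir, exist_ok=True)

model = torch.load("style_classifier.pt", map_location="cpu")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

with torch.no_grad():
    for fname in sorted(os.listdir(drawing_dir)):
        img = Image.open(os.path.join(drawing_dir, fname)).convert("RGB")
        logits = model(preprocess(img).unsqueeze(0))
        feat = torch.softmax(logits, dim=1).squeeze(0).numpy()  # 3-dim vector
        np.save(os.path.join(feat_dir, os.path.splitext(fname)[0] + ".npy"), feat)
```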
A subset of our training set is here.
- Train our model:

```bash
sh ./scripts/train.sh
```

Models are saved in the folder `checkpoints/portrait_drawing`.
If you use this code for your research, please cite our paper:

```
@inproceedings{YiLLR20,
  title     = {Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping},
  author    = {Yi, Ran and Liu, Yong-Jin and Lai, Yu-Kun and Rosin, Paul L},
  booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition (CVPR '20)},
  pages     = {8214--8222},
  year      = {2020}
}
```
Our code is inspired by pytorch-CycleGAN-and-pix2pix.