Neural style transfer in TensorFlow using a pretrained VGG-19.
This repo contains a TensorFlow implementation of the paper *A Neural Algorithm of Artistic Style*.
This implementation was built as part of the deeplearning.ai Convolutional Neural Networks course. Some of the utility functions for working with the pretrained VGG-19 network were provided by the course staff.
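The method optimizes a generated image so that its activations at one VGG-19 layer match those of the content image (content cost), while the Gram matrices of its activations match those of the style image across several layers (style cost). Below is a minimal sketch of these costs, assuming TensorFlow 1.x; the `alpha`/`beta` weights are illustrative and may differ from the values used in this repo:

```python
import tensorflow as tf

def content_cost(a_C, a_G):
    """Squared-error content cost between activations of the content
    image (a_C) and the generated image (a_G) at one chosen layer."""
    _, n_H, n_W, n_C = a_G.get_shape().as_list()
    return tf.reduce_sum(tf.square(a_C - a_G)) / (4 * n_H * n_W * n_C)

def gram_matrix(A):
    """Gram matrix of activations reshaped to (n_C, n_H * n_W)."""
    return tf.matmul(A, tf.transpose(A))

def layer_style_cost(a_S, a_G):
    """Squared error between Gram matrices of style and generated activations."""
    _, n_H, n_W, n_C = a_G.get_shape().as_list()
    a_S = tf.reshape(tf.transpose(a_S, perm=[0, 3, 1, 2]), [n_C, n_H * n_W])
    a_G = tf.reshape(tf.transpose(a_G, perm=[0, 3, 1, 2]), [n_C, n_H * n_W])
    GS, GG = gram_matrix(a_S), gram_matrix(a_G)
    return tf.reduce_sum(tf.square(GS - GG)) / (4 * (n_C * n_H * n_W) ** 2)

def total_cost(J_content, J_style, alpha=10, beta=40):
    """Weighted sum of content and style costs (alpha/beta are illustrative)."""
    return alpha * J_content + beta * J_style
```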
Example results: San Francisco Bay Bridge (5000 iterations) and Oxford winter skyline (500 iterations), each shown next to the original content image.
We use a pretrained VGG-19 model exported by MatConvNet, available at http://www.vlfeat.org/matconvnet/pretrained/. Download it and put it in a `data` directory together with the scripts to generate images locally.
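The course-provided utility functions handle reading this model; as a rough sketch (the file name and dictionary layout below are assumptions about the MatConvNet export, not a description of those utilities), the weights can be inspected with scipy:

```python
import scipy.io

# Assumed path: the downloaded model placed in the local `data` directory.
MODEL_PATH = "data/imagenet-vgg-verydeep-19.mat"

vgg = scipy.io.loadmat(MODEL_PATH)

# The MatConvNet export stores the network as a 'layers' array whose entries
# hold the layer type, name, and (for conv layers) the weight/bias arrays.
layers = vgg["layers"]
print(type(layers), layers.shape)  # inspect how many layers the model defines
```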
Dependencies (an example install command follows the list):
- tensorflow 1.0.0
- numpy 1.13.3
- scipy 1.0.0
- imageio 2.2.0
- Pillow (PIL) 4.1.1
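One way to install the listed versions (assuming `pip`; the PIL module is provided by the Pillow package):

```
pip install tensorflow==1.0.0 numpy==1.13.3 scipy==1.0.0 imageio==2.2.0 Pillow==4.1.1
```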
Download all Python files. In the same directory, create a `data` directory and put `imagenet-vgg-verydeep-19` from http://www.vlfeat.org/matconvnet/pretrained/ in it. Create an empty `output` directory to store the results.
Run the script as follows:

```
python generate_image.py <content_image_path> <style_image_path> <output_path> <num_iterations>
```
There is an optional `-s` flag for saving intermediate results.
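An illustrative invocation (the file names are hypothetical, and the exact placement of `-s` depends on how the script parses its arguments):

```
python generate_image.py content.jpg style.jpg output/generated.png 500 -s
```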
Generating a 300x400 image with 500 iterations takes around 2 hours on a modest CPU and less than a minute on an NVIDIA K80.
Using images that are too large may cause an out-of-memory error.
For best results, make sure style and content images are of similar size and shape.
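If the images differ in size, one option is to resize the style image to the content image's dimensions beforehand; a small helper using Pillow (hypothetical, not part of this repo):

```python
from PIL import Image

def match_size(content_path, style_path, resized_style_path):
    """Resize the style image to the content image's dimensions (illustrative helper)."""
    content = Image.open(content_path)
    style = Image.open(style_path)
    style.resize(content.size, Image.LANCZOS).save(resized_style_path)

# Example with hypothetical file names:
# match_size("content.jpg", "style.jpg", "style_resized.jpg")
```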