Neural-Style-Transfer

Neural style transfer in TensorFlow using a pretrained VGG-19.

This repo contains a TensorFlow implementation of the paper *A Neural Algorithm of Artistic Style*.
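The core idea of the paper is to represent "style" via the Gram matrix of a layer's filter responses and penalise the difference between the Gram matrices of the style image and the generated image. A minimal NumPy sketch of that style term (function names and the exact normalisation constant are illustrative, not this repo's API):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (n_C, n_H * n_W) unrolled feature map.

    Entry (i, j) is the correlation between filters i and j,
    which the paper uses as a representation of style.
    """
    return features @ features.T

def style_layer_cost(a_S, a_G):
    """Style cost for one layer, following Gatys et al.

    a_S, a_G: activations of shape (n_H, n_W, n_C) for the style and
    generated images. The normalisation 1 / (4 * n_C^2 * (n_H * n_W)^2)
    follows the paper; implementations sometimes fold it into the
    layer weights instead.
    """
    n_H, n_W, n_C = a_S.shape
    S = gram_matrix(a_S.reshape(n_H * n_W, n_C).T)
    G = gram_matrix(a_G.reshape(n_H * n_W, n_C).T)
    return np.sum((S - G) ** 2) / (4 * n_C**2 * (n_H * n_W) ** 2)
```

In the full algorithm this cost is summed over several VGG-19 layers with per-layer weights.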

This implementation was part of a deeplearning.ai Convolutional Neural Networks course. Some of the utility functions for using a pretrained VGG-19 network were provided by the course staff.

Examples

San Francisco Bay Bridge (5000 iterations)

Original:

San Francisco Bay Bridge

Oxford winter skyline (500 iterations)

Winter Oxford

Original:

Winter Oxford

Data

We used a pretrained model from MatConvNet, available at http://www.vlfeat.org/matconvnet/pretrained/. Download it and put it in the data directory, together with the scripts, to generate images locally.

Dependencies

  • tensorflow 1.0.0
  • numpy 1.13.3
  • scipy 1.0.0
  • imageio 2.2.0
  • PIL 4.1.1

Running the script

Download all Python files. In the same directory, create a data directory and put the imagenet-vgg-verydeep-19 model there (from http://www.vlfeat.org/matconvnet/pretrained/). Create an empty output directory to store outputs.

Run the script as follows:

python generate_image.py <content_image_path> <style_image_path> <output_path> <num_iterations>

There is an optional flag -s for saving intermediate results.
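Across the given number of iterations, methods of this kind minimise a weighted sum of a content cost and a style cost, J = alpha * J_content + beta * J_style. A minimal NumPy sketch of the content term and the combination (function names and the default alpha/beta are illustrative, not this script's internals):

```python
import numpy as np

def content_cost(a_C, a_G):
    """Squared-error content cost between activations of the content
    image (a_C) and the generated image (a_G) at one VGG-19 layer,
    with a common 1 / (4 * n_H * n_W * n_C) normalisation."""
    n_H, n_W, n_C = a_C.shape
    return np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)

def total_cost(J_content, J_style, alpha=10, beta=40):
    """Weighted combination; alpha/beta trade content fidelity
    against style strength (defaults here are illustrative)."""
    return alpha * J_content + beta * J_style
```

Increasing beta relative to alpha pushes the output further from the content photo and closer to the style image's textures.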

Generating a 300x400 image with 500 iterations takes around 2 hours on a modest CPU and less than a minute on an NVIDIA K80.

Using images that are too large may cause an out-of-memory error.

For best results, make sure style and content images are of similar size and shape.
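If your style image does not match the content image's dimensions, you can resize it beforehand with PIL (which is already in the dependency list). A small sketch; the path handling is illustrative and this preprocessing is not necessarily done by the repo's script:

```python
from PIL import Image

def match_style_to_content(style_path, content_path, out_path):
    """Resize the style image to the content image's (width, height)
    so both inputs have the same shape before running the script."""
    content = Image.open(content_path)
    style = Image.open(style_path)
    style.resize(content.size, Image.LANCZOS).save(out_path)
```

Note that resizing to a very different aspect ratio distorts the style image, so cropping first may give better results.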
