
WGAN-TensorFlow

This repository is a TensorFlow implementation of Martin Arjovsky's Wasserstein GAN, arXiv:1701.07875v3.

Requirements

  • tensorflow 1.9.0
  • python 3.5.3
  • numpy 1.14.2
  • pillow 5.0.0
  • scipy 0.19.0
  • matplotlib 2.2.2

Applied GAN Structure

  1. Generator (DCGAN)

  2. Critic (DCGAN)
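
A minimal sketch of these two DCGAN-style networks in TensorFlow 1.x is given below. Layer counts, kernel sizes, and the output resolution are illustrative assumptions, not the repository's exact architecture (the real definitions live in src/wgan.py and may differ):

```python
import tensorflow as tf

def generator(z, is_train=True, reuse=False):
    """DCGAN-style generator: project, reshape, then upsample with transposed convolutions."""
    with tf.variable_scope('generator', reuse=reuse):
        x = tf.layers.dense(z, 4 * 4 * 256)
        x = tf.reshape(x, [-1, 4, 4, 256])
        x = tf.nn.relu(tf.layers.batch_normalization(x, training=is_train))
        x = tf.layers.conv2d_transpose(x, 128, 5, strides=2, padding='same')
        x = tf.nn.relu(tf.layers.batch_normalization(x, training=is_train))
        x = tf.layers.conv2d_transpose(x, 64, 5, strides=2, padding='same')
        x = tf.nn.relu(tf.layers.batch_normalization(x, training=is_train))
        # tanh keeps generated pixels in [-1, 1]
        return tf.nn.tanh(tf.layers.conv2d_transpose(x, 3, 5, strides=2, padding='same'))

def critic(img, reuse=False):
    """DCGAN-style critic: strided convolutions, no sigmoid on the output."""
    with tf.variable_scope('critic', reuse=reuse):
        x = tf.nn.leaky_relu(tf.layers.conv2d(img, 64, 5, strides=2, padding='same'))
        x = tf.nn.leaky_relu(tf.layers.conv2d(x, 128, 5, strides=2, padding='same'))
        x = tf.nn.leaky_relu(tf.layers.conv2d(x, 256, 5, strides=2, padding='same'))
        x = tf.layers.flatten(x)
        # The critic outputs an unbounded scalar score, not a probability.
        return tf.layers.dense(x, 1)
```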

Generated Images

  1. MNIST

  2. CelebA

**Note:** The results are not as good as those reported in the paper. We found that the Wasserstein distance did not converge well on the CelebA dataset, though it did decrease on MNIST.

Documentation

Download Dataset

The MNIST dataset is downloaded automatically if it is not found in the data folder. Use the following command to download the CelebA dataset, then place it in the corresponding folder as described in the Directory Hierarchy section.

python download2.py celebA

Directory Hierarchy

.
├── WGAN
│   └── src
│       ├── dataset.py
│       ├── download2.py
│       ├── main.py
│       ├── solver.py
│       ├── tensorflow_utils.py
│       ├── utils.py
│       └── wgan.py
└── Data
    ├── celebA
    └── mnist

src: source code of the WGAN

Implementation Details

The implementation uses TensorFlow to train the WGAN. The generator and critic networks follow the DCGAN architectures described in Alec Radford's paper. Unlike a standard GAN, the WGAN uses neither a sigmoid in the last layer of the critic nor a log-likelihood in the cost function. RMSProp is used as the optimizer instead of Adam.
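
These choices can be sketched as follows, assuming the `generator` and `critic` functions from the sketch above. Placeholder shapes and variable names are assumptions; the learning rate of 5e-5 matches this repository's default flag, and the clipping threshold of 0.01 is the value used in the WGAN paper:

```python
import tensorflow as tf

z = tf.placeholder(tf.float32, [None, 100], name='z')
real_images = tf.placeholder(tf.float32, [None, 32, 32, 3], name='real_images')

d_real = critic(real_images)               # critic score on real data
d_fake = critic(generator(z), reuse=True)  # critic score on generated data

# Critic minimizes E[D(G(z))] - E[D(x)]; generator minimizes -E[D(G(z))].
# There is no sigmoid and no log-likelihood term, unlike a standard GAN.
d_loss = tf.reduce_mean(d_fake) - tf.reduce_mean(d_real)
g_loss = -tf.reduce_mean(d_fake)

d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='critic')
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')

# RMSProp with a small learning rate, as in the WGAN paper, instead of Adam.
d_opt = tf.train.RMSPropOptimizer(5e-5).minimize(d_loss, var_list=d_vars)
g_opt = tf.train.RMSPropOptimizer(5e-5).minimize(g_loss, var_list=g_vars)

# Clip critic weights to [-0.01, 0.01] after each critic update to (crudely)
# enforce the Lipschitz constraint required by the Wasserstein objective.
clip_ops = tf.group(*[v.assign(tf.clip_by_value(v, -0.01, 0.01)) for v in d_vars])
```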

Training WGAN

Use main.py to train a WGAN network. Example usage:

python main.py --is_train=true --dataset=[celebA|mnist]

  • gpu_index: gpu index, default: 0
  • batch_size: batch size for one feed forward, default: 64
  • dataset: dataset name for choice [celebA|mnist], default: celebA
  • is_train: training or inference mode, default: False
  • learning_rate: initial learning rate, default: 0.00005
  • num_critic: the number of critic iterations per generator iteration (illustrated in the training-loop sketch after this list), default: 5
  • z_dim: dimension of z vector, default: 100
  • iters: number of iterations, default: 100000
  • print_freq: print frequency for loss, default: 50
  • save_freq: save frequency for model, default: 10000
  • sample_freq: sample frequency for saving image, default: 200
  • sample_size: sample size for check generated image quality, default: 64
  • load_model: folder of the saved model that you wish to test (e.g. 20180704-1736), default: None
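
A hypothetical training loop illustrating how --num_critic and --iters interact: the critic is updated num_critic times for every generator update, as in Algorithm 1 of the WGAN paper. `FLAGS`, `sess`, and `next_batch` are assumed helpers, and `d_opt`, `g_opt`, `d_loss`, and `clip_ops` come from the sketch above; the repository's real loop lives in src/solver.py and may differ:

```python
import numpy as np

def sample_z(batch_size, z_dim):
    # Uniform noise in [-1, 1]; the exact prior used by the repo is an assumption.
    return np.random.uniform(-1.0, 1.0, size=(batch_size, z_dim)).astype(np.float32)

for step in range(FLAGS.iters):
    # Train the critic num_critic times, clipping its weights after each step.
    for _ in range(FLAGS.num_critic):
        x_batch = next_batch(FLAGS.batch_size)
        z_batch = sample_z(FLAGS.batch_size, FLAGS.z_dim)
        _, d_loss_val = sess.run([d_opt, d_loss],
                                 feed_dict={real_images: x_batch, z: z_batch})
        sess.run(clip_ops)

    # One generator update.
    sess.run(g_opt, feed_dict={z: sample_z(FLAGS.batch_size, FLAGS.z_dim)})

    if step % FLAGS.print_freq == 0:
        # The negative critic loss is an estimate of the Wasserstein distance.
        print('step {}: estimated Wasserstein distance {:.4f}'.format(step, -d_loss_val))
```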

Wasserstein Distance During Training

  1. MNIST

  2. CelebA

Evaluate WGAN

Use main.py to evaluate a WGAN network. Example usage:

python main.py --is_train=false --load_model=folder/you/wish/to/test/e.g./20180704-1746

Please refer to the above arguments.
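
For intuition, a hypothetical sketch of what inference mode does, continuing from the graph built above: restore the checkpoint named by --load_model and sample images from the generator. The checkpoint path and the reuse of `sample_z` are assumptions; the real logic lives in src/solver.py:

```python
import tensorflow as tf

# Rebuild the generator output for sampling, reusing the trained weights.
fake_images = generator(z, is_train=False, reuse=True)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('../models/20180704-1746'))
    samples = sess.run(fake_images, feed_dict={z: sample_z(64, 100)})
    # `samples` now holds 64 generated images in [-1, 1], ready to be rescaled
    # and written to disk, e.g. with PIL or matplotlib.
```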

Citation

  @misc{chengbinjin2018wgan,
    author = {Cheng-Bin Jin},
    title = {WGAN-tensorflow},
    year = {2018},
    howpublished = {\url{https://github.com/ChengBinJin/WGAN-TensorFlow}},
    note = {commit xxxxxxx}
  }

Attributions/Thanks

License

Copyright (c) 2018 Cheng-Bin Jin. Contact me for commercial use, or any use that is not academic research (email: [email protected]). Free for research use, as long as proper attribution is given and this copyright notice is retained.

Related Projects
