ARAE

TensorFlow implementation of Adversarially Regularized Autoencoders for Generating Discrete Structures (ARAE)

While the paper used the Stanford Natural Language Inference dataset for text generation, this implementation uses only the MNIST dataset. I implemented the continuous version here; the discrete version is included as a footnote in the code.
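For orientation, the sketch below outlines the three ARAE training objectives as they might look in TF 1.x on MNIST: a reconstruction loss for the encoder/decoder, a WGAN critic on latent codes with weight clipping, and adversarial updates that pull the encoded and generated code distributions toward each other. The layer sizes, variable names, and hyperparameters are illustrative assumptions and do not necessarily match train.py.

import tensorflow as tf

def dense(x, out_dim, scope, activation=None):
    # One fully connected layer built from raw variables (TF 1.x style).
    in_dim = x.get_shape().as_list()[-1]
    with tf.variable_scope(scope):
        w = tf.get_variable("w", [in_dim, out_dim],
                            initializer=tf.random_normal_initializer(stddev=0.02))
        b = tf.get_variable("b", [out_dim],
                            initializer=tf.constant_initializer(0.0))
    y = tf.matmul(x, w) + b
    return activation(y) if activation is not None else y

x = tf.placeholder(tf.float32, [None, 784])    # flattened MNIST image
z = tf.placeholder(tf.float32, [None, 100])    # noise fed to the generator

with tf.variable_scope("encoder"):             # c = enc(x), the "real" code
    code_real = dense(dense(x, 256, "h1", tf.nn.relu), 64, "code")
with tf.variable_scope("decoder"):             # x_hat = dec(c)
    x_hat = dense(dense(code_real, 256, "h1", tf.nn.relu), 784, "out", tf.nn.sigmoid)
with tf.variable_scope("generator"):           # c_tilde = g(z), the "fake" code
    code_fake = dense(dense(z, 256, "h1", tf.nn.relu), 64, "code")

def critic(c, reuse):
    # WGAN critic f_w operating on latent codes, not on images.
    with tf.variable_scope("critic", reuse=reuse):
        return dense(dense(c, 256, "h1", tf.nn.relu), 1, "out")

f_real = critic(code_real, reuse=False)
f_fake = critic(code_fake, reuse=True)

# 1) Autoencoder: plain reconstruction loss for encoder + decoder.
ae_loss = tf.reduce_mean(tf.square(x - x_hat))
# 2) Critic: Wasserstein objective, maximize f(real) - f(fake).
critic_loss = tf.reduce_mean(f_fake) - tf.reduce_mean(f_real)
# 3) Adversarial regularization: the encoder pulls the real-code distribution
#    toward the generated one, the generator pushes the other way.
enc_adv_loss = tf.reduce_mean(f_real)
gen_loss = -tf.reduce_mean(f_fake)

t_vars = tf.trainable_variables()
enc_vars = [v for v in t_vars if v.name.startswith("encoder")]
dec_vars = [v for v in t_vars if v.name.startswith("decoder")]
gen_vars = [v for v in t_vars if v.name.startswith("generator")]
cri_vars = [v for v in t_vars if v.name.startswith("critic")]

ae_step     = tf.train.RMSPropOptimizer(1e-3).minimize(ae_loss, var_list=enc_vars + dec_vars)
critic_step = tf.train.RMSPropOptimizer(5e-5).minimize(critic_loss, var_list=cri_vars)
enc_step    = tf.train.RMSPropOptimizer(5e-5).minimize(enc_adv_loss, var_list=enc_vars)
gen_step    = tf.train.RMSPropOptimizer(5e-5).minimize(gen_loss, var_list=gen_vars)

# WGAN weight clipping applied to the critic parameters after each critic update.
clip_critic = [v.assign(tf.clip_by_value(v, -0.01, 0.01)) for v in cri_vars]

A training step then alternates the four ops in a session: several critic_step runs (each followed by clip_critic), then ae_step, enc_step, and gen_step.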

Dependencies

  1. tensorflow == 1.0.0
  2. numpy == 1.12.0
  3. matplotlib == 1.3.1

Steps

Run the following command for image reconstruction.


python train.py

Results

  • The model was trained for 100,000 steps.

  • Generated from fake (noise) data

Samples generated from noise tend to show several digits overlapping in a single image.
  • Generated from real data

Notes

I didn't multiply the critic gradient by a scaling factor before backpropagating it into the encoder.
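For comparison, the variant the note refers to would scale the gradient coming from the critic before it reaches the encoder weights. A minimal sketch of that, reusing the hypothetical enc_adv_loss and enc_vars names from the sketch above (grad_lambda is an illustrative value, not one taken from this repository or the paper):

grad_lambda = 0.1
opt_enc = tf.train.RMSPropOptimizer(5e-5)
grads_and_vars = opt_enc.compute_gradients(enc_adv_loss, var_list=enc_vars)
# Multiply the critic-derived gradient by grad_lambda before applying it.
scaled = [(grad_lambda * g, v) for g, v in grads_and_vars if g is not None]
scaled_enc_step = opt_enc.apply_gradients(scaled)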
