Typically, autoencoders (AEs) are associated with neural networks. Yet in the paper the authors propose building an autoencoder from decision trees and claim that this approach achieves reasonable performance. Here we reproduce the paper's results: we implemented the AE forest (eForest) algorithm and compared its performance with MLP- and CNN-based AEs on image datasets (MNIST, CIFAR-10, Omniglot).
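A minimal sketch of the idea, assuming scikit-learn's `RandomForestClassifier` rather than the authors' implementation: encoding maps an input to the tuple of leaf indices it reaches in each tree, and decoding intersects the leaf path constraints across all trees (a simplified stand-in for the paper's Maximal-Compatible Rule) and takes the midpoint of the resulting region. The dataset, hyperparameters, and the midpoint/fallback choices below are illustrative assumptions, not the project's exact code.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)  # 8x8 digit images, 64 features
forest = RandomForestClassifier(n_estimators=10, max_depth=6,
                                random_state=0).fit(X, y)

def encode(forest, X):
    # Representation of a sample = index of the leaf it reaches in each tree
    return forest.apply(X)  # shape (n_samples, n_trees)

def leaf_bounds(estimator, leaf, n_features):
    # Walk from the root to `leaf`, collecting per-feature interval constraints
    t = estimator.tree_
    lo = np.full(n_features, -np.inf)
    hi = np.full(n_features, np.inf)
    path = []

    def dfs(node):
        if node == leaf:
            return True
        if t.children_left[node] == -1:  # leaf that is not our target
            return False
        for child, went_left in ((t.children_left[node], True),
                                 (t.children_right[node], False)):
            path.append((node, went_left))
            if dfs(child):
                return True
            path.pop()
        return False

    dfs(0)
    for node, went_left in path:
        f, thr = t.feature[node], t.threshold[node]
        if went_left:               # left branch means x[f] <= thr
            hi[f] = min(hi[f], thr)
        else:                       # right branch means x[f] > thr
            lo[f] = max(lo[f], thr)
    return lo, hi

def decode(forest, leaves, n_features, fallback=0.0):
    # Intersect the leaf regions of all trees, then take a representative point
    lo = np.full(n_features, -np.inf)
    hi = np.full(n_features, np.inf)
    for est, leaf in zip(forest.estimators_, leaves):
        l, h = leaf_bounds(est, leaf, n_features)
        lo, hi = np.maximum(lo, l), np.minimum(hi, h)
    # Midpoint where both bounds exist; otherwise the finite bound or a fallback
    mid = np.where(np.isfinite(lo) & np.isfinite(hi), (lo + hi) / 2,
                   np.where(np.isfinite(hi), hi,
                            np.where(np.isfinite(lo), lo, fallback)))
    return mid

codes = encode(forest, X[:1])                 # (1, 10) leaf indices
recon = decode(forest, codes[0], X.shape[1])  # reconstructed 64-dim vector
```

The real eForest decodes via the Maximal-Compatible Rule over the trees' path rules; the interval intersection above captures the same geometric intuition for axis-aligned splits.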
The code was written by:
- Egor Sevriugov - Tree ensemble based AE (MNIST, CIFAR-10, Omniglot),
- Kirill Shcherbakov - CNN based AE (MNIST, CIFAR-10, Omniglot),
- Maria Begicheva - MLP based AE (MNIST, Omniglot),
- Olga Novitskaya - MLP based AE (CIFAR-10, Omniglot)
AEbyForest: Project | Paper | Report | Presentation | Video
- Tree ensemble based AE (MNIST, CIFAR-10, Omniglot): Google Colab | Code
- CNN based AE (MNIST, CIFAR-10, Omniglot): Google Colab | Code
- MLP based AE (MNIST, Omniglot): Google Colab | Code
- MLP based AE (CIFAR-10, Omniglot): Google Colab | Code
- Python 3
- Google Colaboratory service
- PyTorch 1.4.0, TensorFlow 2.1.0, Keras 2.3.0
- MNIST and CIFAR-10 datasets were obtained from the keras.datasets module: MNIST dataset | CIFAR-10 dataset
- The Omniglot dataset was obtained from the torchvision.datasets module: Omniglot dataset
To help users understand and use our code, we created instructions for each model on running the code and reproducing the results:
- eForest: eForest Instruction
- CNN-AE: CNN-AE Instruction
- MLP-AE: MLP-AE Instruction