VILA-Lab/i-mae

A PyTorch Implementation of i-MAE: Linearly Separable Representation in MAE

Kevin Zhang*, Zhiqiang Shen*

Project Page | Paper | BibTeX

We provide a PyTorch/GPU-based implementation of our technical report i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?
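
As background, the sketch below illustrates the MAE-style per-sample random patch masking that i-MAE inherits from the MAE codebase it is built on. The function name, shapes, and signature are illustrative assumptions for exposition, not this repository's actual API.

import torch

def random_masking(x: torch.Tensor, mask_ratio: float = 0.75):
    # Illustrative sketch, not the repository's API.
    # x: patch embeddings of shape (batch, num_patches, dim).
    # Returns the visible patches, a binary mask (1 = removed),
    # and the indices needed to restore the original patch order.
    n, l, d = x.shape
    len_keep = int(l * (1 - mask_ratio))

    noise = torch.rand(n, l, device=x.device)   # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)   # ascending: lowest scores are kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    ids_keep = ids_shuffle[:, :len_keep]
    x_visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))

    mask = torch.ones(n, l, device=x.device)    # 1 marks a masked patch
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)   # back to original patch order
    return x_visible, mask, ids_restore

With the default mask ratio of 0.75, the encoder sees only a quarter of the patches, which is what keeps MAE-style pre-training inexpensive.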

Catalog

  • Pre-training demo with Colab
  • Pre-training and Fine-tuning code
  • Weights Upload

Pre-training

Pre-training instructions are in PRETRAIN.md.
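
For orientation, the sketch below shows the standard MAE objective that the pre-training recipe builds on: mean squared error in patchified pixel space, computed only on masked patches. This is a sketch of the baseline loss, not the exact i-MAE objective; PRETRAIN.md documents the actual losses, data paths, and launch commands.

import torch

def mae_reconstruction_loss(pred: torch.Tensor,
                            target: torch.Tensor,
                            mask: torch.Tensor) -> torch.Tensor:
    # pred, target: (batch, num_patches, patch_dim) patchified pixels.
    # mask: (batch, num_patches), 1 marks a masked patch.
    loss = (pred - target) ** 2
    loss = loss.mean(dim=-1)                 # mean over pixels within each patch
    return (loss * mask).sum() / mask.sum()  # average over masked patches only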

Fine-tuning

Fine-tuning instructions are in FINETUNE.md.
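
Because the report studies whether MAE latents are linearly separable, the natural evaluation alongside fine-tuning is a linear probe: a single linear classifier trained on top of a frozen encoder. The sketch below shows one probe training step; encoder and head are hypothetical placeholders rather than this repository's entry points, which FINETUNE.md documents.

import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe_step(encoder: nn.Module,     # hypothetical frozen, pre-trained encoder
                      head: nn.Linear,        # linear classifier being trained
                      images: torch.Tensor,
                      labels: torch.Tensor,
                      optimizer: torch.optim.Optimizer) -> float:
    encoder.eval()
    with torch.no_grad():            # encoder stays frozen; only the probe learns
        feats = encoder(images)      # assumed to return (batch, feature_dim)
    logits = head(feats)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

If the representations are indeed linearly separable, this probe alone should reach high accuracy without updating any encoder weights.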

Visualization demo

Please visit the interactive demo on our website, or run the visualization demo with the Colab notebook: Open in Colab

Acknowledgement

This repository is built on the timm library and the official MAE repository.

License

This project is released under the CC-BY-NC 4.0 license. See LICENSE for details.

Citation

If you find this repository helpful, please consider citing our work:

@article{zhang2022i-mae,
  title={i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?},
  author={Zhang, Kevin and Shen, Zhiqiang},
  journal={arXiv preprint arXiv:2210.11470},
  year={2022}
}
