
Large-scale Dataset Distillation

This is a collection of our work targeted at large-scale dataset distillation.

SCDD: Self-supervised Compression Method for Dataset Distillation.

CDA (@TMLR'24): Dataset Distillation via Curriculum Data Synthesis in Large Data Era.

SRe2L (@NeurIPS'23 spotlight): Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective.
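
For context, SRe2L proceeds in three stages: squeeze knowledge from the full dataset into a trained model, recover synthetic images by inverting that model while matching its BatchNorm statistics (DeepInversion-style), and relabel the synthetic images with the model's soft predictions. The PyTorch sketch below is a minimal conceptual illustration only; the placeholder teacher, loop structure, hyperparameters, and variable names are assumptions for illustration and are not the repository's actual training code.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Squeeze: a model trained on the full dataset (training loop omitted;
#    a randomly initialized ResNet-18 stands in as a placeholder teacher).
teacher = resnet18(num_classes=1000).to(device).eval()

# 2) Recover: optimize synthetic images so the teacher's BatchNorm running
#    statistics and class predictions are matched (DeepInversion-style).
bn_losses = []
def match_bn_stats(module, inputs, _output):
    x = inputs[0]
    mean = x.mean(dim=[0, 2, 3])
    var = x.var(dim=[0, 2, 3], unbiased=False)
    bn_losses.append(F.mse_loss(mean, module.running_mean)
                     + F.mse_loss(var, module.running_var))

for m in teacher.modules():
    if isinstance(m, torch.nn.BatchNorm2d):
        m.register_forward_hook(match_bn_stats)

targets = torch.randint(0, 1000, (50,), device=device)        # illustrative target classes
synthetic = torch.randn(50, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([synthetic], lr=0.1)

for _ in range(100):                                           # illustrative iteration count
    bn_losses.clear()
    optimizer.zero_grad()
    logits = teacher(synthetic)
    loss = F.cross_entropy(logits, targets) + 0.01 * torch.stack(bn_losses).sum()
    loss.backward()
    optimizer.step()

# 3) Relabel: store the teacher's soft predictions as labels for the distilled images.
with torch.no_grad():
    soft_labels = F.softmax(teacher(synthetic), dim=1)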

Citation

@article{yin2023dataset,
  title={Dataset Distillation via Curriculum Data Synthesis in Large Data Era},
  author={Yin, Zeyuan and Shen, Zhiqiang},
  journal={Transactions on Machine Learning Research},
  year={2024}
}
@inproceedings{yin2023squeeze,
  title={Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective},
  author={Yin, Zeyuan and Xing, Eric and Shen, Zhiqiang},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023}
}

About

(NeurIPS 2023 spotlight) Large-scale dataset distillation/condensation: at 50 IPC (Images Per Class), SRe2L achieves the highest reported accuracy of 60.8% on the original ImageNet-1K validation set.
