
(New) ImageNet-Only Results for Simple and Transductive CNAPS #78

Open

peymanbateni (Contributor) opened this issue Jan 15, 2022 · 0 comments

Dear Meta-Dataset Team,

Hope all is well. Thank you again for the wonderful work and effort that goes into maintaining and improving this repo. As a researcher who has worked with this benchmark a few times, I really appreciate it. As part of a recent work [1], we evaluated Simple CNAPS [2] and Transductive CNAPS [3] on the ImageNet-only variation of Meta-Dataset. Below are the results; I would really appreciate it if you could add them to the ImageNet-only leaderboard:

| Dataset | Simple CNAPS | Transductive CNAPS |
| --- | --- | --- |
| ImageNet | 54.8±1.2 | 54.1±1.1 |
| Omniglot | 62.0±1.3 | 62.9±1.3 |
| Aircraft | 49.2±0.9 | 48.4±0.9 |
| Birds | 66.5±1.0 | 67.3±0.9 |
| DTD | 71.6±0.7 | 72.5±0.7 |
| QuickDraw | 56.6±1.0 | 58.0±1.0 |
| Fungi | 37.5±1.2 | 37.7±1.1 |
| Flower | 82.1±0.9 | 82.8±0.8 |
| Signs | 63.1±1.1 | 61.8±1.1 |
| MSCOCO | 45.8±1.0 | 45.8±1.0 |
| MNIST (out-of-domain additional dataset) | 81.2±0.6 | 83.9±0.7 |
| CIFAR10 (out-of-domain additional dataset) | 69.9±0.8 | 68.9±0.8 |
| CIFAR100 (out-of-domain additional dataset) | 59.4±1.0 | 60.0±1.1 |

Thanks a lot! Additionally, if possible, I was hoping to update the existing citation for Transductive CNAPS [3] in the repository and to add our recent paper, which studies both methods. Specifically, the current citation for Transductive CNAPS [3] links to our arXiv copy, but the paper was accepted and presented at WACV 2022 (see: https://openaccess.thecvf.com/content/WACV2022/html/Bateni_Enhancing_Few-Shot_Image_Classification_With_Unlabelled_Examples_WACV_2022_paper.html). It would be awesome if we could update this.

Lastly, our new work [1], "Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning", is now on arXiv (https://arxiv.org/abs/2201.05151) and studies both methods in detail (including the new ImageNet-only results). It would be awesome if we could include this manuscript among the cited works and add its citation to the two existing entries for Simple and Transductive CNAPS on the main leaderboard, as well as to the two ImageNet-only entries to be added.

Thanks a lot; I look forward to seeing these updates! In the meantime, if you have any questions or need any clarification, please don't hesitate to let me know!

Best,
Peyman

[1] Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning, Bateni et al., arXiv 2022
[2] Improved Few-Shot Visual Classification, Bateni et al., CVPR 2020
[3] Enhancing Few-Shot Image Classification with Unlabelled Examples, Bateni et al., WACV 2022
