Hi folks, thank you all (and especially Ross) for all of the work put into this library. I searched through the documentation and code before posting this question, but couldn't find an answer: Is there a way to know (or infer) which augmentations were used to train each set of pretrained weights supplied with this library? Thank you very much!
@guydav unfortunately, that is not possible. For horizontal flipping specifically, I'm pretty sure almost every set of weights was trained with it enabled. But for the others (dropout, stochastic depth, label smoothing, AutoAugment/RandAugment, CutMix/Mixup, random erasing) there is no metadata.
Where models came from academic paper releases, the hparams match those in the papers (i.e. the `tf_`-prefixed EfficientNets, DeiT, ConvNeXt, etc.). But again, I feel horizontal flipping plus the base RandomResizedCrop is used pretty much everywhere.
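For anyone wondering what that near-universal baseline actually does, here is a minimal, dependency-free sketch of its two ingredients: sampling a random resized crop box (simplified from torchvision's `RandomResizedCrop` parameter sampling) and a 50% horizontal-flip coin toss. The function names and defaults here are illustrative, not part of timm's API.

```python
# Sketch of the standard ImageNet baseline augmentation: a random resized
# crop plus a 50% horizontal flip. The crop sampling mirrors (in simplified
# form) how torchvision's RandomResizedCrop picks its parameters.
import math
import random

def sample_crop(width, height, scale=(0.08, 1.0), ratio=(3/4, 4/3), attempts=10):
    """Sample a crop box (left, top, w, h): a random fraction of the image
    area with a random log-uniform aspect ratio; fall back to a center crop
    if no valid box is found within `attempts` tries."""
    area = width * height
    for _ in range(attempts):
        target_area = area * random.uniform(*scale)
        log_ratio = (math.log(ratio[0]), math.log(ratio[1]))
        aspect = math.exp(random.uniform(*log_ratio))
        w = int(round(math.sqrt(target_area * aspect)))
        h = int(round(math.sqrt(target_area / aspect)))
        if 0 < w <= width and 0 < h <= height:
            left = random.randint(0, width - w)
            top = random.randint(0, height - h)
            return left, top, w, h
    # Fallback: centered square crop of the largest fitting size.
    w = h = min(width, height)
    return (width - w) // 2, (height - h) // 2, w, h

def flip_horizontally():
    """50% coin flip, as used in virtually every ImageNet training recipe."""
    return random.random() < 0.5
```

In a real pipeline the sampled box is then resized to the network's input resolution; timm's `timm.data.create_transform` builds the full transform stack for you, including the optional extras (RandAugment, random erasing, etc.) mentioned above.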