-
For RandAugment you have a _MAX_LEVEL that clips the magnitude of RandAugment to 10. Note: I've read the disclaimer here.
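For reference, the clipping being asked about looks roughly like this (a sketch only; the rotate example and its 30-degree range are illustrative, not the exact timm code):

```python
_MAX_LEVEL = 10.0

def rotate_level_to_arg(level, max_degrees=30.0):
    # Magnitudes above _MAX_LEVEL are clipped, so e.g. M=15 behaves
    # exactly like M=10; the clipped level is then scaled linearly
    # into the augmentation's parameter range.
    level = min(level, _MAX_LEVEL)
    return (level / _MAX_LEVEL) * max_degrees
```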
-
I actually think this might be a bug in your code here. If we're adapting the EfficientNet RandAugment code, they too have a _MAX_LEVEL that clips the magnitude to 10.
-
@vchiley I have purposely diverged from the standard TensorFlow / 'Google' RandAugment implementations. I should really update some comments and document why, but have not prioritized that.

The TF RA magnitude is not intuitive. If you actually look at the augmentations deployed at, say, M0, M5, M10, M15, it is VERY counter to what you might expect: some augmentations go up in strength, but quite a few also go down (or are completely disabled) as you increase the magnitude, due to bugs/oversights in the original impl w.r.t. some augs like the enhancements, posterize/solarize, etc. Each M is essentially its own thing, so applying sampling to that scale doesn't work well. At M0 there are actually a few augs at max strength, and I believe some are completely off at M5, etc. (posterize is sketched below).

Worth also pointing out that I purposely remove cutout from RA. It harms the image stats (mean/std) by erasing large regions with constant values before the images are normalized. I use random erasing with mean 0, std-dev 1.0 after normalization (standardization) as a replacement for that (see the second sketch below).

I don't see max level = 10 as an issue; it's actually challenging to limit some of the augs to a sensible, symmetric range if you don't normalize appropriately. However, I do have a TODO to experiment and add a 'boost' or some other means of extending the magnitudes of some augs where it would/could make sense. I'm not sure if that would be allowing M > 10 for a subset of the augs and clipping for those where it doesn't make sense, or adding an extra 'turbo boost' magnitude for a specified/fixed subset of the augs.
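To make the posterize example concrete, here is a rough sketch of the two level-to-argument mappings (paraphrased for illustration, not the exact code from either repo):

```python
_MAX_LEVEL = 10.0

def tf_style_posterize_bits(level):
    # Paraphrase of the TF/EfficientNet-style mapping: the number of bits
    # *kept* scales up with M, so M=0 keeps 0 bits (image destroyed) and
    # M=10 keeps 4 bits, i.e. the aug gets WEAKER as magnitude goes up.
    return int((level / _MAX_LEVEL) * 4)

def increasing_posterize_bits(level):
    # A monotonically increasing strength mapping, in the spirit of timm's
    # 'increasing' variants: more magnitude -> fewer bits kept -> stronger.
    return 4 - int((level / _MAX_LEVEL) * 4)

for m in (0, 5, 10):
    print(m, tf_style_posterize_bits(m), increasing_posterize_bits(m))
# M=0:  TF-style keeps 0 bits (max strength), increasing keeps 4 (mildest)
# M=10: TF-style keeps 4 bits (mildest),      increasing keeps 0 (max strength)
```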
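And a minimal sketch of the random-erasing idea, assuming an already-standardized CHW float tensor (the function name and area handling are illustrative, not timm's RandomErasing class):

```python
import torch

def random_erase_normalized(img, area_frac=0.1):
    # Erase a random patch with N(0, 1) noise on an image that has already
    # been normalized (standardized), so the patch matches the per-channel
    # statistics instead of skewing them as a constant cutout block would.
    c, h, w = img.shape
    eh = max(1, int(h * area_frac ** 0.5))
    ew = max(1, int(w * area_frac ** 0.5))
    top = torch.randint(0, h - eh + 1, (1,)).item()
    left = torch.randint(0, w - ew + 1, (1,)).item()
    img[:, top:top + eh, left:left + ew] = torch.randn(c, eh, ew)
    return img
```

Applied after the Normalize step of the transform pipeline, the erased region then has approximately the same mean/std as the rest of the image, which is the point of doing erasing post-standardization rather than cutout pre-normalization.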