@kanlions if you have 16-bit images you should be able to use them with the timm train scripts with few to no changes if you disable the prefetcher (the prefetcher has uint8 hard coded). The typical normalization should work, but you may need to set a custom mean/std. Also, you'd be relying on torchvision's ToTensor to do the right thing for the 16-bit -> float tensor conversion. That depends on the specifics of your 16-bit format, and you may need to do the scaling yourself to get values centered at 0 with a std-dev of roughly 1.0.
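To make the 16-bit -> float step explicit, here is a minimal sketch of a drop-in replacement for ToTensor (the class name `ToTensor16` and the uint16-numpy-input assumption are mine, not part of timm or torchvision): it scales the full uint16 range into [0, 1] and reorders channels to CHW, after which a standard Normalize with your own mean/std can be applied.

```python
import numpy as np
import torch

class ToTensor16:
    """Convert a 16-bit image (uint16 numpy array, HW or HWC) to a
    float32 CHW tensor in [0, 1]. Hypothetical stand-in for
    torchvision's ToTensor, which only rescales uint8 inputs."""

    def __call__(self, img):
        arr = np.asarray(img, dtype=np.float32) / 65535.0  # full uint16 range -> [0, 1]
        if arr.ndim == 2:
            arr = arr[None, :, :]          # grayscale: add a channel dim
        else:
            arr = arr.transpose(2, 0, 1)   # HWC -> CHW
        return torch.from_numpy(arr)
```

You would compose this with `torchvision.transforms.Normalize(mean, std)` in place of ToTensor, using mean/std measured on your own data rather than the model's default config.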
-
If I have 16-bit images, what type of transformation should be applied in the data transform section?
Describe the solution you'd like
I just want to know how to use timm with 16-bit images.
Describe alternatives you've considered
I have considered dividing the pixel values by 65535, but I am not sure that alone is enough, since timm applies a model-specific config (including mean/std), and I am not sure how to handle that.
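One way to address the mean/std part of this (a sketch, assuming images load as uint16 numpy arrays in HWC layout; the helper name is mine): measure per-channel statistics of your own data after the /65535 scaling, then pass those in place of the model's default config, e.g. via the train script's `--mean`/`--std` options.

```python
import numpy as np

def dataset_mean_std(images):
    """Estimate per-channel mean/std of 16-bit images after scaling
    to [0, 1]. `images` is any iterable of uint16 arrays of shape
    (H, W, C); hypothetical helper, not a timm API."""
    s = s2 = n = 0.0
    for img in images:
        x = np.asarray(img, dtype=np.float64) / 65535.0
        s += x.sum(axis=(0, 1))            # per-channel sum
        s2 += (x ** 2).sum(axis=(0, 1))    # per-channel sum of squares
        n += x.shape[0] * x.shape[1]       # pixel count
    mean = s / n
    std = np.sqrt(s2 / n - mean ** 2)
    return mean, std
```

The returned values can then be used with a Normalize transform so the network sees inputs roughly centered at 0 with unit std-dev.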
Additional context
Any guidance is appreciated