Hello, thanks for your fabulous work! I wonder which augmentation strategy was used when training the currently released demo network. I noticed some small differences between the current training strategy and the one described in the original paper.
The original paper does not seem to mention any augmentation; it appears to feed 224x224 images directly into the network, while the current code uses 176x176 inputs with random cropping and mirroring. Which augmentation strategy was used when training the released demo network? And if some kind of cropping is involved, where is the crop taken from? Is it cropped from an image that was first center-cropped from ImageNet and resized to 256x256, or directly from the original ImageNet image, which has arbitrary resolution?
Please help me when you have free time, thanks!
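To make the question concrete, here is a small NumPy sketch of the augmentation described above (random 176x176 crop plus random horizontal mirror from a 256x256 training image). This is only an illustration of the strategy being asked about, not the repository's actual code; the function name and the 256x256 starting size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop=176):
    # img: H x W x C array, e.g. a 256x256x3 training image (assumed size)
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)    # random crop position
    left = rng.integers(0, w - crop + 1)
    patch = img[top:top + crop, left:left + crop]
    if rng.random() < 0.5:                 # random horizontal mirror
        patch = patch[:, ::-1]
    return patch

out = augment(np.zeros((256, 256, 3), dtype=np.uint8))
print(out.shape)  # (176, 176, 3)
```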
jo4y4y94 changed the title from "what is the image augmentation method while training on current demo release?" to "what is the image augmentation strategy while training on current demo release?" on Apr 14, 2021.
@richzhang Thanks for your reply! Do you know where those 256x256 images come from? I'm not sure whether an arbitrary-size image is center-cropped and then resized to 256x256, or directly resized to 256x256. The second option seems a little odd, since it changes the aspect ratio of the image.
After reading Caffe's official code, I found that if you set H and W to 256 when making the LMDB file (e.g. the `resize_height`/`resize_width` options of `convert_imageset`), it resizes all images to 256x256 regardless of their original size. I'm putting this here in case someone else is wondering about it.
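The geometric difference between the two options discussed above can be sketched in a few lines of NumPy. This is an illustration only (nearest-neighbour resize, hypothetical helper names), not Caffe's actual implementation: direct resize to 256x256 distorts the aspect ratio, while center-cropping to the short side first preserves it.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    # crude nearest-neighbour resize, enough to illustrate the geometry
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def center_crop(img, size):
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

img = np.zeros((375, 500, 3), dtype=np.uint8)  # arbitrary ImageNet-like size

# Option 1 (what Caffe's LMDB creation does when H and W are set to 256):
a = resize_nn(img, 256, 256)  # aspect ratio is distorted

# Option 2 (center-crop to the short side, then resize):
b = resize_nn(center_crop(img, min(img.shape[:2])), 256, 256)

print(a.shape, b.shape)  # both (256, 256, 3)
```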