First of all, thank you for sharing this amazing work/repository.
I'm using transfer learning to fine-tune VGG-Face (2015 model).
I know I have to apply the same image pre-processing to my training images as in the original paper (i.e. "The input to all networks is a face image of size 224×224 with the average face image (computed from the training set) subtracted"), but one doubt remains: should I subtract the average face image of the original VGG-Face training set, or the average face image of my own training set?
I've tried to find the answer but without success. Any clues?
Thanks
I am wondering about the exact same thing. When fine-tuning the model or, in my case, stacking dense layers on top of the 'pool5' layer, should I use the preprocess_input function, or should I do preprocessing based on my own dataset's statistics?
I want to use the vggface weights for training an emotion classifier. Subtracting the mean of the vgg16 dataset from my face images seems weird.
"Subtracting the mean of the vgg16 dataset from my face images seems weird." I felt exactly the same way, but it seems to be the right thing to do: the pretrained weights expect inputs normalized the same way they saw during training. At least that's the intuition I get from some VGG transfer-learning implementations I've seen online (e.g. https://discuss.pytorch.org/t/vgg-transfer-learning/33808).
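For reference, here is a minimal sketch of that approach in NumPy: subtract the *pretrained* model's per-channel training-set mean (not your own dataset's) from each 224×224 face image. The mean values below are illustrative placeholders, not necessarily the exact ones shipped with your copy of the weights; check the preprocessing utility bundled with the library you're using.

```python
import numpy as np

# Per-channel mean of the *pretrained* model's training set, in BGR order.
# Illustrative values only; use the ones bundled with your weights
# (e.g. the library's own preprocess_input utility).
VGGFACE_MEAN_BGR = np.array([93.594, 104.762, 129.186], dtype=np.float32)

def preprocess(img_rgb):
    """img_rgb: HxWx3 uint8 RGB face image, already resized to 224x224."""
    x = img_rgb.astype(np.float32)
    x = x[..., ::-1]           # RGB -> BGR, matching the Caffe-era VGG models
    x -= VGGFACE_MEAN_BGR      # subtract the pretrained training-set mean
    return x
```

The key point is that the mean is a constant fixed by the pretrained weights; your own dataset's statistics don't enter into it.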
Please share any additional information you feel may be helpful.