Hi,
I made some changes to your network: I added batch norm and Xavier initialization. I also noticed you used the Adam optimizer, whereas the paper used SGD with weight decay and momentum, and your hyperparameters don't match the paper's either. I followed the same methodology, but my loss is still quite high (~3.5). I still need to run evaluation with the WIDER FACE eval tools.
Did you train on the WIDER FACE train+val split, or only on train?
For evaluation, did you use the evaluation tool provided on the WIDER FACE site?
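For reference, the optimizer swap described above can be sketched in PyTorch roughly as follows. This is a minimal sketch, not the repository's actual code: the tiny model, the learning rate, and the momentum/weight-decay values (0.9 and 5e-4 are common SGD defaults in detection papers) are all placeholder assumptions — check the paper for the real architecture and schedule.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the detection network under discussion;
# it only illustrates where batch norm and Xavier init would go.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.BatchNorm2d(16),          # batch norm added after the conv layer
    nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1),
)

def xavier_init(m):
    # Xavier (Glorot) initialization for conv weights, zeros for biases.
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model.apply(xavier_init)

# SGD with momentum and weight decay instead of Adam. The values below
# (lr=1e-3, momentum=0.9, weight_decay=5e-4) are placeholders, not the
# paper's actual hyperparameters.
optimizer = torch.optim.SGD(
    model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4
)
```

Learning-rate decay would typically be layered on top of this with a scheduler such as `torch.optim.lr_scheduler.StepLR`, depending on what schedule the paper specifies.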