Hello! I've recently been working with your PyTorch implementation of ADDA, so first of all, thanks for your code!
For now I am only interested in testing the network Ms (trained on the source domain) on the target domain (the src_only baseline).
Surprisingly, your GitHub page reports ~84% accuracy for this src_only baseline, which is about 9 points above the accuracy reported in the original paper (75.2%, https://arxiv.org/abs/1702.05464). How do you explain such a difference?
I have tried limiting the number of source-domain (MNIST) samples to 2000 (as in the original paper) and still observed ~87% accuracy (the latest modifications on my fork of your master branch correspond to that experiment: https://github.com/emmanuelrouxfr/pytorch-adda).
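For clarity, here is a minimal sketch of how I restrict the source set to 2000 examples (variable names are mine, not from the repo); the resulting index list can be passed to torch.utils.data.Subset to build the reduced loader:

```python
import random

# Hypothetical sketch: draw 2000 source-domain (MNIST) training indices,
# matching the subsampled setup used in the ADDA paper. In the actual
# training script the list would feed torch.utils.data.Subset(mnist_train, indices).
NUM_SOURCE_SAMPLES = 2000
MNIST_TRAIN_SIZE = 60000

random.seed(0)  # fixed seed so the subset is reproducible across runs
indices = random.sample(range(MNIST_TRAIN_SIZE), NUM_SOURCE_SAMPLES)

assert len(indices) == NUM_SOURCE_SAMPLES
assert len(set(indices)) == NUM_SOURCE_SAMPLES  # sampled without replacement
```

Note that which 2000 samples end up in the subset (and hence the seed) can itself shift the src_only accuracy by a few points.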
To stay as close as possible to the paper's setup, I have also set the batch size to 128 (adjusting the number of epochs to 625 to match the 10,000 learning iterations mentioned in the paper) and used the original learning rates and optimizer parameters:
d_learning_rate = 2e-4
c_learning_rate = 2e-4
beta1 = 0.5
beta2 = 0.999
but I cannot reproduce the result presented in the original paper (~75% accuracy for src_only tested on USPS); my accuracy is always much higher than it is supposed to be.
I hope you can help me identify a possible reason for this phenomenon. Thanks!
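For reference, here is the bookkeeping behind that epoch count (the variable names are mine, and the epoch figure is my own back-calculation from the paper's iteration budget); with these hyperparameters the optimizers would be built as torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999)):

```python
import math

# Back-calculating the epoch count from the ADDA paper's setup:
# 2000 source samples, batch size 128, "10 000 learning iterations".
num_samples = 2000
batch_size = 128
target_iterations = 10_000

batches_per_epoch = math.ceil(num_samples / batch_size)  # 2000 / 128 -> 16 batches
epochs = target_iterations // batches_per_epoch          # 10000 / 16 -> 625 epochs

print(batches_per_epoch, epochs)  # 16 625
```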
Hi @JinmingZhao!
It works, but the "source only" baseline (you got 96.290323%) is much higher than in the original paper (around 75%), which is the issue I raised.
Anyway, thank you!
Bye bye
@emmanuelrouxfr
I also have the same concern: how come the results (even the baseline) here are better than what the ADDA authors reported in their paper?