Can somebody explain why the ADDA implementations (this one as well as the TensorFlow one) use a discriminator that outputs two feature maps instead of one? I am also wondering why, in adapt.py, we concatenate the source and target features and then pass the concatenated batch to the discriminator for a single prediction.
Why not use a single output and make one prediction per domain at a time, as is done in most GAN examples (e.g. https://github.com/pytorch/examples/blob/master/dcgan/main.py)? A minimal sketch of the two variants I mean is below.
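For concreteness, here is a rough sketch of the contrast, assuming a 2-logit discriminator head with `CrossEntropyLoss` for the concatenated variant and a 1-logit head with `BCEWithLogitsLoss` for the DCGAN-style variant. The feature size (500), batch size, and all module/variable names here are placeholders, not the actual identifiers from adapt.py:

```python
import torch
import torch.nn as nn

feat_dim = 500  # hypothetical feature size from the encoders
feat_src = torch.randn(32, feat_dim)  # placeholder source-encoder features
feat_tgt = torch.randn(32, feat_dim)  # placeholder target-encoder features

# Variant A -- concatenate both domains into one batch and do a single
# forward pass through a 2-logit discriminator (roughly what adapt.py does).
disc_2out = nn.Sequential(nn.Linear(feat_dim, 500), nn.ReLU(), nn.Linear(500, 2))
ce = nn.CrossEntropyLoss()

feat_concat = torch.cat((feat_src, feat_tgt), dim=0)
pred_concat = disc_2out(feat_concat)
label_concat = torch.cat(
    (torch.ones(feat_src.size(0), dtype=torch.long),    # source -> label 1
     torch.zeros(feat_tgt.size(0), dtype=torch.long)),  # target -> label 0
    dim=0)
loss_a = ce(pred_concat, label_concat)

# Variant B -- DCGAN-style: single-logit discriminator, one forward pass
# per domain, and the two losses summed.
disc_1out = nn.Sequential(nn.Linear(feat_dim, 500), nn.ReLU(), nn.Linear(500, 1))
bce = nn.BCEWithLogitsLoss()

loss_src = bce(disc_1out(feat_src), torch.ones(feat_src.size(0), 1))
loss_tgt = bce(disc_1out(feat_tgt), torch.zeros(feat_tgt.size(0), 1))
loss_b = loss_src + loss_tgt
```

If I understand correctly, with the default mean reduction and equal batch sizes the two variants should only differ by a constant scale on the loss, unless the discriminator contains batch-dependent layers such as BatchNorm, which is why I am curious about the design choice.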