This repository has been archived by the owner on Feb 20, 2020. It is now read-only.

Implementation and paper differences #4

Open
Tymyan1 opened this issue May 10, 2019 · 0 comments

Comments

Tymyan1 commented May 10, 2019

Very clean code! However, I have found what I believe are differences between the paper and the code implementation in the model structure. Could you please share why these differences exist?

  1. According to Appendix A, the last layer of the generator should be c3s1-3-T, but the code uses c7s1-3-T instead (see the first sketch after this list).
  2. The second up-scaling layer in the attention network is commented out (and if it were included, wouldn't the following conv need stride 2? See the second sketch below).
  3. The resblocks do not seem to apply ReLU to their output. The paper does not say anything specific (it just says to use resblocks), but from what I know about them, the sum (out + x) should be passed through a ReLU (see the third sketch below)?
  4. Is the s′_new term from equation (6) missing?
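
For point 1, here is a minimal PyTorch sketch of the two final-layer variants, reading the cKsS-F notation as a K×K convolution with stride S and F filters, followed by tanh (T). The 64 input channels are only an assumption for illustration:

```python
import torch.nn as nn

# c3s1-3-T as described in Appendix A: 3x3 conv, stride 1, 3 output channels, tanh
last_layer_paper = nn.Sequential(
    nn.Conv2d(64, 3, kernel_size=3, stride=1, padding=1),
    nn.Tanh(),
)

# c7s1-3-T as found in the code: 7x7 conv, stride 1, 3 output channels, tanh
last_layer_code = nn.Sequential(
    nn.Conv2d(64, 3, kernel_size=7, stride=1, padding=3),
    nn.Tanh(),
)
```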
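For point 2, the stride question is just spatial-size arithmetic: with the second up-scaling layer included, the following convolution would need stride 2 to end up at the same resolution the current code produces. A rough sketch (channel count and feature-map size are hypothetical):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)              # hypothetical attention-branch feature map

up = nn.Upsample(scale_factor=2)            # the up-scaling layer that is commented out
conv_s1 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)
conv_s2 = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)

print(conv_s1(x).shape)       # torch.Size([1, 64, 32, 32]) - upsample removed, stride 1
print(conv_s2(up(x)).shape)   # torch.Size([1, 64, 32, 32]) - upsample kept, stride 2
```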
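For point 3, this is the residual-block layout I had in mind, with ReLU applied to the sum. The InstanceNorm layers are just an assumption about the rest of the block; the point is the last line of `forward`:

```python
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm1 = nn.InstanceNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm2 = nn.InstanceNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.relu(out + x)  # ReLU applied to the residual sum, not just to `out`
```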