
Clarification on experiments #33

Open
bkumardevan07 opened this issue Nov 28, 2020 · 1 comment
@bkumardevan07

Hi,
Could you clarify what exactly you mean when you say:
"It was very important to concatenate the input and context vectors in the Attention mechanism."

Also, could you specify your stopping criterion, since you mentioned you did not use a stop loss?

@soobinseo
Owner

  1. "It was very important to concatenate the input and context vectors in the Attention mechanism."

When I ran the experiment, I found that if the output was passed to the next layer without concatenating the input and context vectors, the attention plot did not form properly and the quality of the results was poor.
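A minimal NumPy sketch of what this concatenation might look like (the function name, dot-product scoring, and dimensions are illustrative assumptions, not taken from the repo):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_and_concat(query, encoder_outputs):
    """Dot-product attention over encoder timesteps; returns the
    decoder input (query) concatenated with the context vector,
    rather than the context vector alone."""
    scores = encoder_outputs @ query          # (T,) one score per timestep
    weights = softmax(scores)                 # attention distribution
    context = weights @ encoder_outputs       # (d,) weighted sum of encoder outputs
    return np.concatenate([query, context])   # (2d,) fed to the next layer

rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))   # 5 encoder steps, hidden dim 8
q = rng.normal(size=(8,))       # decoder-side input vector
out = attend_and_concat(q, enc)
print(out.shape)                # (16,) = input dim + context dim
```

The point of the concatenation is that the next layer sees both the original input and the attended context, instead of the context overwriting the input.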

  2. When I used a stop loss, training did not go well. (I still need to find out why it didn't work.)
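For reference, "stop loss" here presumably means a stop-token prediction loss as in Tacotron 2-style models: a per-frame binary cross-entropy on whether the decoder should stop. A minimal sketch under that assumption (names and values are illustrative):

```python
import numpy as np

def stop_token_loss(logits, targets):
    """Binary cross-entropy over per-frame stop logits.
    targets are 1.0 at (and after) the final frame, else 0.0."""
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    eps = 1e-7                             # avoid log(0)
    return -np.mean(targets * np.log(probs + eps)
                    + (1 - targets) * np.log(1 - probs + eps))

logits = np.array([-3.0, -2.0, -1.0, 2.0, 4.0])  # per-frame stop logits
targets = np.array([0.0, 0.0, 0.0, 1.0, 1.0])    # stop from frame 4 onward
loss = stop_token_loss(logits, targets)
```

Without this loss, decoding is typically stopped by a fixed maximum number of frames or some other heuristic criterion.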
