
How to select relation in Attention while Testing? #17

Open
ShomyLiu opened this issue Dec 28, 2017 · 13 comments

@ShomyLiu

Hi, something about the attention confuses me a lot.

[image: the attention equation from the paper]

Here r is the query vector associated with a relation (the relation representation).
In the training phase, is r the representation of the target relation label? If so, in the test phase, which r should be chosen to calculate the attention weights for the instances in a bag?

Do I misunderstand something about the paper?

Thanks.

@Mrlyk423
Member

While testing, you need to calculate P(r|s) for every relation, and simply use the query vector of relation r' when calculating P(r'|s).
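In other words, at test time the attention is recomputed once per candidate relation, using that relation's own query vector. A minimal numpy sketch of this, with made-up sizes and random values; it assumes the query vectors are the rows of the output matrix M (as discussed later in this thread) and simplifies the bilinear form of eq. (8) to a dot product:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
dim, n_rel = 5, 4
bag = rng.normal(size=(3, dim))    # instance representations x_i in one bag
M = rng.normal(size=(n_rel, dim))  # output matrix; row r doubles as query vector of relation r
d = rng.normal(size=n_rel)         # bias

scores = np.empty(n_rel)
for r in range(n_rel):
    q = M[r]                       # query vector of relation r
    alpha = softmax(bag @ q)       # attention weights over instances w.r.t. relation r
    s = alpha @ bag                # bag representation conditioned on relation r
    scores[r] = M[r] @ s + d[r]    # o_r = M_r s + d_r; only the r-th entry is kept

p = softmax(scores)                # P(r|s), used for ranking predictions at test time
```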

@ShomyLiu
Author

@Mrlyk423
Thanks for your reply, that is quite clear. So formula (10)

[image: formula (10) from the paper]

is just a stack of each relation's score

o_r = M_r s + d_r

with each score calculated separately, rather than an extra softmax layer.
Is that right?
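The stacking can be checked in a small sketch: computing each o_r separately is the same as one affine map o = M s + d, followed by a single softmax over relations (all sizes and values below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rel, dim = 4, 5
M = rng.normal(size=(n_rel, dim))
d = rng.normal(size=n_rel)
s = rng.normal(size=dim)  # one bag representation (fixed query, for illustration)

# Per-relation scores, each computed separately...
o_loop = np.array([M[r] @ s + d[r] for r in range(n_rel)])
# ...equal the stacked affine map, then one softmax over relations.
o = M @ s + d
p = np.exp(o - o.max())
p = p / p.sum()
```

Note that at test time s itself differs per relation (the attention query differs), so only the score stacking, not the full pipeline, collapses into one matrix product.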

@Mrlyk423
Member

Yes

@ShomyLiu
Author

@Mrlyk423
Thanks, your reply helps me a lot in understanding the paper.
Best.

ShomyLiu reopened this Dec 29, 2017
@dgai91

dgai91 commented Dec 29, 2017

@Mrlyk423 But in the test phase, how do we calculate alpha (the instance attention weights), and how do we choose y_pred, i.e. as the r maximizing P(r|s)?

@ShomyLiu
Author

ShomyLiu commented Jan 16, 2018

@Mrlyk423 Sorry for another question. In the test and evaluation code, why do you only calculate the top 2000 highest-probability items?

for (int i = 0; i < 2000; i++)
{
    // aa is sorted by predicted probability in descending order;
    // a nonzero aa[i].second.first marks the i-th prediction as correct.
    if (aa[i].second.first != 0)
        correct++;
    // columns: precision@(i+1), recall, probability, predicted triple
    fprintf(f, "%lf\t%lf\t%lf\t%s\n", correct / (i + 1), correct / tot,
            aa[i].second.second, aa[i].first.c_str());
}

It seems this cannot cover all of the test data?
Looking forward to your reply.
Thanks.

@Mrlyk423
Member

For relation extraction, we only focus on the top predictions. If you want all of the predicted results, just change 2000 to the number you need.
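The evaluation loop quoted above can be mirrored in a short Python sketch; the function name and arguments here are made up for illustration. Each ranked prediction contributes one (precision, recall) point, and the cutoff (2000 in the repo) caps how far down the ranking the curve goes:

```python
def pr_points(labels, total_positives, k=2000):
    """labels: 1/0 correctness flags of predictions, sorted by confidence descending.
    Returns up to k (precision, recall) points, one per ranked prediction."""
    correct = 0
    points = []
    for i, y in enumerate(labels[:k]):
        correct += y
        points.append((correct / (i + 1),        # precision among the top i+1
                       correct / total_positives))  # recall over all true facts
    return points
```

For example, `pr_points([1, 0, 1, 1], total_positives=3)` yields the points (1.0, 1/3), (0.5, 1/3), (2/3, 2/3), (0.75, 1.0).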

@ShomyLiu
Author

@Mrlyk423 Thanks very much. By the way, do you have other versions of PCNN+ATT, such as TensorFlow or PyTorch? I tried to reproduce PCNN+ATT in PyTorch but got a much worse result compared with your C++ version. However, my PyTorch version of PCNN+ONE got a result similar to yours and to Zeng et al. (2015). This has confused me for a long time; what could be the possible reason?
Thanks.

@nayakt

nayakt commented Mar 25, 2019

Is the vector r in equation (8) obtained from the matrix M mentioned in equation (10)? Or are there two separate trainable parameters, r in eq. (8) and M in eq. (10)?

@ShomyLiu
Author

In my opinion, the vector r in equation (8) is shared with the matrix M in equation (10); each relation's query vector is the corresponding row of M.
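If that sharing interpretation is right, only one (n_rel, dim) matrix is trained, and updating it changes both roles at once. A tiny numpy sketch of this (all names and sizes are made up; numpy row indexing returns a view, which models shared storage):

```python
import numpy as np

n_rel, dim = 3, 4
# one shared trainable matrix: row r is both the query vector of eq. (8)
# and the r-th output row of eq. (10)
M = np.arange(n_rel * dim, dtype=float).reshape(n_rel, dim)

q = M[1]             # view of row 1: the attention query for relation 1
M[1, 0] = 99.0       # a "training update" to M...
# ...also changes the query vector, since both roles share the same storage
```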

@nayakt

nayakt commented Mar 25, 2019

> @Mrlyk423 Thanks very much. By the way, do you have other versions of PCNN+ATT, such as TensorFlow or PyTorch? I tried to reproduce PCNN+ATT in PyTorch but got a much worse result compared with your C++ version. However, my PyTorch version of PCNN+ONE got a result similar to yours and to Zeng et al. (2015). This has confused me for a long time; what could be the possible reason?

I am also facing the same issue with my PyTorch implementation. The F1 score is very low when applying attention over PCNN features. Were you able to fix it? Have you released your PyTorch implementation?

@ShomyLiu
Author

Yeah, my implementation is at https://github.com/ShomyLiu/pytorch-relation-extraction
It uses the same dataset as this repo, which is slightly different from the newer version of the dataset in OpenNRE.

@nayakt

nayakt commented Mar 30, 2019

> For relation extraction, we only focus on the top predictions. If you want all of the predicted results, just change 2000 to the number you need.

Is the Precision-Recall curve in the paper based on the top 2000 predicted results?
