Commit

updated union lit-meto
e-maud authored Feb 20, 2020
1 parent 7f8c1cf commit 27e1532
Showing 1 changed file with 2 additions and 1 deletion.
README.md: 3 changes (2 additions & 1 deletion)
@@ -30,7 +30,8 @@ The Slot Error Rate (SER) is dropped for the shared task evaluation.

The evaluation for NEL works similarly to that for NERC. The link of an entity is interpreted as a label. As there is no IOB tagging, a consecutive row of identical links is considered a single entity. In terms of boundaries, NEL is evaluated only according to the fuzzy scenario: to be counted as correct, a system response needs only one link label that overlaps with the gold standard.
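
The grouping and fuzzy matching can be illustrated with a short sketch (not the scorer's actual code; the per-token link format, the hypothetical `NIL` tag for unlinked tokens, and the example QIDs are assumptions made for the illustration):

```python
from itertools import groupby


def to_entities(links):
    """Group a consecutive row of identical link labels into (link, start, end) spans."""
    entities, pos = [], 0
    for link, group in groupby(links):
        length = len(list(group))
        if link != "NIL":  # hypothetical tag for tokens without a link
            entities.append((link, pos, pos + length))
        pos += length
    return entities


def fuzzy_match(gold_entity, sys_entities):
    """A system entity counts as correct if its link is identical to the gold link
    and its span overlaps the gold span by at least one token."""
    link, start, end = gold_entity
    return any(
        s_link == link and s_start < end and start < s_end
        for s_link, s_start, s_end in sys_entities
    )


gold = ["NIL", "Q90", "Q90", "NIL", "Q64"]    # e.g. "Paris" over two tokens, then "Berlin"
system = ["NIL", "Q90", "NIL", "NIL", "Q64"]  # partial span for "Paris" still overlaps
print([fuzzy_match(e, to_entities(system)) for e in to_entities(gold)])  # [True, True]
```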

[TODO: potentially add a word on fuzzy (= more than one link) NEL evaluation]
With respect to the **linking of metonymic mentions**, two evaluation scenarios will be considered: strict, where only the metonymic link is taken into account, and relaxed, where the union of the literal and metonymic annotations is taken into account. This is not yet implemented in the scorer; it will be added in the next release.
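
A minimal sketch of the intended distinction (the scorer does not implement this yet, so the data layout with one literal and one metonymic gold link per mention, as well as the placeholder QIDs, are assumptions for illustration only):

```python
def link_is_correct(sys_link, gold_literal, gold_metonymic, scenario="strict"):
    if scenario == "strict":
        # only the metonymic link is taken into account
        return sys_link == gold_metonymic
    # relaxed: the union of literal and metonymic annotations is accepted
    return sys_link in {gold_literal, gold_metonymic}


# e.g. "France" used metonymically for its national team:
# literal link Q142 (the country), metonymic link "Q-TEAM" (placeholder QID)
print(link_is_correct("Q142", "Q142", "Q-TEAM", scenario="strict"))   # False
print(link_is_correct("Q142", "Q142", "Q-TEAM", scenario="relaxed"))  # True
```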


## Scorer
To evaluate the predictions of your system on the dev set, you can run the following command:
