Build a webpage similar to https://nlpprogress.com/english/semantic_parsing.html#ucca-parsing, containing: (1) a detailed description of the official evaluation protocol (per corpus, if protocols differ), including evaluation scripts and their versions, normalization, and dataset versions; (2) a leaderboard of parser outputs, sorted by the official UCCA score, with an additional column reporting the MRP metric; (3) a bottom section linking to other (unofficial or legacy) experimental setups and their corresponding leaderboards.
Improve the UCCA score to handle unary expansions and multiple categories over the same edge more sensibly. This will become the new official score. Ask participants of the SemEval and CoNLL shared tasks whether they would like to re-evaluate their systems and post their scores. See Evaluation treats multiple categories too leniently huji-nlp/ucca#91
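To illustrate the leniency concern, here is a minimal, self-contained sketch (not the actual ucca evaluation code) contrasting the current per-category matching, which credits each category on a multi-category edge independently, with a stricter variant that requires the full category set over an edge to match. Edge representation and category names below are toy assumptions for illustration only.

```python
# Hypothetical sketch, NOT the official ucca scorer: toy edges are keyed by
# terminal yield (a span) and mapped to a set of UCCA categories.

def f1(guessed, ref):
    """Micro F1 over two sets of matchable items."""
    common = len(guessed & ref)
    p = common / len(guessed) if guessed else 0.0
    r = common / len(ref) if ref else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def per_category(edges):
    """Lenient: one matchable item per (yield, category) pair."""
    return {(y, c) for y, cats in edges.items() for c in cats}

def per_edge(edges):
    """Strict: one item per edge, carrying its full category set."""
    return {(y, frozenset(cats)) for y, cats in edges.items()}

# Toy example: the parser misses category "P" on a multi-category edge.
ref = {(0, 3): {"A", "P"}, (4, 7): {"D"}}
guess = {(0, 3): {"A"}, (4, 7): {"D"}}

lenient = f1(per_category(guess), per_category(ref))  # partial credit
strict = f1(per_edge(guess), per_edge(ref))           # no credit for the edge
```

Under the lenient scheme the first edge still earns credit for "A" despite the missing "P"; under the strict scheme it counts as a full miss, so the overall F1 drops. The actual fix may land somewhere in between (e.g. per-category recall with per-edge precision); this only demonstrates the gap.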
Run the new script on the UCCA parses submitted to MRP 2019 and 2020, after converting them from JSON to XML.
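For the conversion step, a real pipeline should use an existing converter rather than ad-hoc code; the sketch below only shows the shape of the task, assuming the MRP JSON-lines graph format (one graph per line, with "id", "nodes", and "edges"). The XML element and attribute names here are placeholders, not the actual UCCA passage schema.

```python
import json
import xml.etree.ElementTree as ET

def mrp_to_xml(line):
    """Parse one MRP JSON-lines graph and emit a bare-bones XML tree.

    Placeholder schema for illustration; a faithful converter must produce
    the layered UCCA XML passage format expected by the evaluation script.
    """
    graph = json.loads(line)
    root = ET.Element("passage", passageID=str(graph["id"]))
    for node in graph.get("nodes", []):
        ET.SubElement(root, "node", ID=str(node["id"]))
    for edge in graph.get("edges", []):
        ET.SubElement(root, "edge",
                      fromID=str(edge["source"]),
                      toID=str(edge["target"]),
                      type=edge.get("label", ""))
    return root

# Minimal MRP-style input (fields reduced to the ones used above).
example = ('{"id": "102990", "nodes": [{"id": 0}, {"id": 1}], '
           '"edges": [{"source": 0, "target": 1, "label": "P"}]}')
tree = mrp_to_xml(example)
```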
A leaderboard will require running experiments with the leading parsers on the latest data, using native UCCA evaluation (not MRP). Maybe @OfirArviv could help.