Add "pip install" support #26

Open
EmanuelaBoros opened this issue Feb 20, 2024 · 3 comments

Comments

@EmanuelaBoros
Contributor

This could be a first step towards inclusion in evaluate.
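
For concreteness, a minimal sketch of what the packaging could look like, e.g. via a `setup.py` (the distribution name, module path, and dependencies below are hypothetical, not the repository's actual layout):

```python
# setup.py -- minimal packaging sketch so the scorer could be installed with pip.
# All names below are placeholders, not the repo's real module layout.
from setuptools import setup, find_packages

setup(
    name="hipe-scorer",                      # hypothetical distribution name
    version="0.1.0",
    packages=find_packages(),
    install_requires=["pandas", "numpy"],    # assumed runtime dependencies
    entry_points={
        "console_scripts": [
            # hypothetical CLI entry point exposing the existing evaluation script
            "hipe-scorer=hipe_scorer.clef_evaluation:main",
        ]
    },
)
```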

@simon-clematide
Contributor

Interesting: the current scorer still relies on many preprocessing steps that are idiosyncratically bound to the HIPE format and evaluation scenario. In a way it could still be seen as an evaluation space: https://huggingface.co/evaluate-metric (similar to GLUE).

@EmanuelaBoros
Contributor Author

@simon-clematide (hoping I did not misunderstand) I don’t see it as such a problem that the metric depends on the annotation style (it is domain-dependent). I was raising the issue mainly because it would be easier to integrate on my side for training different models and, of course, it could be easier to integrate into a metric such as seqeval.
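
To illustrate the kind of integration meant here, this is how the existing seqeval metric is already used through the evaluate library during training (toy tags; requires `pip install evaluate seqeval`):

```python
# Minimal example of computing seqeval scores via the evaluate library.
import evaluate

seqeval = evaluate.load("seqeval")

# Toy IOB-tagged sequences; in practice these would come from model predictions.
predictions = [["O", "B-pers", "I-pers", "O"]]
references = [["O", "B-pers", "I-pers", "B-loc"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])
```

A pip-installable HIPE metric packaged the same way could be called identically from a training loop.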

@EmanuelaBoros
Contributor Author

One could see the metric not as multitask (CoNLL with one column per task, e.g. NER, chunking) but as some type of multilevel evaluation (the annotation columns in HIPE) - a multilevel-seqeval 🙂
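
One way to picture this "multilevel-seqeval" idea, purely as an illustration: run a seqeval-style evaluation once per HIPE annotation column (the column names follow the HIPE TSV format; the sequences below are toy examples):

```python
# Illustrative sketch: score each HIPE annotation level (TSV column) independently.
import evaluate

seqeval = evaluate.load("seqeval")

# Toy sequences keyed by HIPE column name; real data would be read from the TSV columns.
levels = {
    "NE-COARSE-LIT": {
        "predictions": [["B-pers", "I-pers", "O"]],
        "references": [["B-pers", "I-pers", "O"]],
    },
    "NE-FINE-LIT": {
        "predictions": [["B-pers.ind", "I-pers.ind", "O"]],
        "references": [["B-pers.ind", "O", "O"]],
    },
}

scores = {level: seqeval.compute(**data) for level, data in levels.items()}
for level, result in scores.items():
    print(level, result["overall_f1"])
```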
