
Expose relevance_score for ranker models through pipeline.eval() #8510

Open
shalinshah1993 opened this issue Oct 31, 2024 · 2 comments
Labels: 1.x, type:feature (New feature or request)

@shalinshah1993

Is your feature request related to a problem? Please describe.
There have been prior discussions about exposing model scores for re-rankers, and I believe this was added. However, it only works with the pipeline.run() API, since that returns documents with score metadata. In practice, we often run large-scale evaluations, and in those cases there is a lot of wrapping around pipeline.run() (see here).
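
For context, a minimal sketch (Haystack 1.x; the document store contents, model name, and query are placeholders) of how the ranker's relevance score already surfaces through pipeline.run():

```python
from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, SentenceTransformersRanker

# Assumes documents have already been written to the store.
document_store = InMemoryDocumentStore(use_bm25=True)
retriever = BM25Retriever(document_store=document_store)
ranker = SentenceTransformersRanker(model_name_or_path="cross-encoder/ms-marco-MiniLM-L-6-v2")

pipeline = Pipeline()
pipeline.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipeline.add_node(component=ranker, name="Ranker", inputs=["Retriever"])

result = pipeline.run(query="What is Haystack?", params={"Retriever": {"top_k": 20}})
for doc in result["documents"]:
    print(doc.id, doc.score)  # relevance score assigned by the ranker
```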

Describe the solution you'd like
A very nice solution would be: in the EvaluationResults DataFrame or CSV file, store an additional column called score, populated from document.score.

Describe alternatives you've considered
The hacky way to do it right now is to take Retriever.csv from the eval run, add a score column (by re-running ranker inference), sort by score, and add the result as a DataFrame to EvaluationResults so that a Ranker.csv is saved. A rough sketch of this is below.
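
A hedged sketch of that workaround, assuming the Retriever.csv written by the eval run has query and context columns (treat the column names and file paths as assumptions):

```python
import pandas as pd
from haystack import Document
from haystack.nodes import SentenceTransformersRanker

ranker = SentenceTransformersRanker(model_name_or_path="cross-encoder/ms-marco-MiniLM-L-6-v2")

df = pd.read_csv("eval_run/Retriever.csv")

def rank_score(row):
    # Re-run ranker inference for a single (query, context) pair.
    ranked = ranker.predict(query=row["query"], documents=[Document(content=row["context"])])
    return ranked[0].score

df["score"] = df.apply(rank_score, axis=1)
df = df.sort_values(["query", "score"], ascending=[True, False])
df.to_csv("eval_run/Ranker.csv", index=False)
```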

This is using Haystack 1.x.

@anakin87 added the 1.x and type:feature (New feature or request) labels on Oct 31, 2024
@anakin87
Member

Hello. Haystack 1.x is in maintenance mode and the idea is to fix bugs, but not to introduce new features.
Have you considered migrating to 2.x? https://docs.haystack.deepset.ai/docs/migration
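
For reference, a minimal sketch of how a ranker exposes document scores in 2.x (model name and documents are placeholders, not a prescription):

```python
from haystack import Document
from haystack.components.rankers import TransformersSimilarityRanker

ranker = TransformersSimilarityRanker(model="cross-encoder/ms-marco-MiniLM-L-6-v2")
ranker.warm_up()

docs = [
    Document(content="Haystack is an open source framework for building LLM applications."),
    Document(content="Berlin is the capital of Germany."),
]
result = ranker.run(query="What is Haystack?", documents=docs)
for doc in result["documents"]:
    print(doc.score)  # similarity score written by the ranker
```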

@shalinshah1993
Author

Is the above possible in 2.x?
