
add basic evaluators #1074

Merged (1 commit) on Dec 19, 2023
Conversation

@aybruhm (Member) commented Dec 19, 2023

No description provided.

@aybruhm merged commit 0f56932 into gh/eval-db-schema-refactor on Dec 19, 2023
0 of 3 checks passed
try:
    evaluation_function = globals()[evaluator_name]
    return evaluation_function(
        correct_answer, variant_output, *additional_args, **additional_kwargs
    )
except KeyError:
    raise ValueError(f"Evaluator '{evaluator_name}' does not exist")
A member commented on this snippet:

I have multiple notes:

  1. Note that not all evaluators require a correct_answer, while some (for instance, the AI critic) require the inputs instead. The current solution does not account for that.
     Maybe another approach is to pass two payload dictionaries to each evaluator, one data and the other config (see the sketch after this list):
     data would include all the information needed for the evaluation (the data point):
     data["prompt"], data["inputs"], data["outputs"]
     config would include all the information needed to configure the evaluator:
     config["regex"], config["ai_prompt"], ...
