
Evaluation metrics for SQL Query not found #128

Open
andreped opened this issue May 7, 2024 · 0 comments

From this blog post:
https://defog.ai/blog/open-sourcing-sqleval/

I saw this sentence:

  • "Measures both SQL complexity (think nested queries and multiple joins) and semantic diversity (uses language in different settings)"

When looking through the code, I fail to see any metric being computed that captures this. I was expecting some kind of model-graded eval using an LLM (or similar) to determine the complexity of the SQL query itself. Perhaps it is done somewhere around here:
https://github.com/defog-ai/sql-eval/blob/main/eval/eval.py#L114
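To illustrate, the kind of complexity metric I had in mind could be as simple as the following naive counter. This is purely my own sketch (nothing from the repo, and `sql_complexity` is a hypothetical name); a real metric would parse the SQL properly or use a model-graded eval rather than keyword matching:

```python
import re

def sql_complexity(sql: str) -> dict:
    """Naive complexity proxy: count JOINs and subqueries via regex.

    Illustrative only -- pattern-matching keywords will miscount SQL
    that contains these words inside string literals or comments.
    """
    joins = len(re.findall(r"\bJOIN\b", sql, re.IGNORECASE))
    subqueries = len(re.findall(r"\(\s*SELECT\b", sql, re.IGNORECASE))
    return {"joins": joins, "subqueries": subqueries}

print(sql_complexity(
    "SELECT * FROM a JOIN b ON a.id = b.id "
    "WHERE a.x IN (SELECT x FROM c)"
))  # {'joins': 1, 'subqueries': 1}
```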

Right now, I only see metrics based on whether the SQL query is valid, the completion runtime, and what the resulting SQL records contain (a pandas.DataFrame). Am I missing something?
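By result-based comparison I mean something like the following sketch (my own simplification with hypothetical names, not the repo's actual code), which checks whether two result DataFrames contain the same records regardless of row or column order:

```python
import pandas as pd

def _normalize(df: pd.DataFrame) -> pd.DataFrame:
    # Sort columns alphabetically and rows by value so that ordering
    # differences between otherwise-identical results don't matter.
    df = df.reindex(sorted(df.columns), axis=1)
    return df.sort_values(by=list(df.columns)).reset_index(drop=True)

def results_match(df_gold: pd.DataFrame, df_pred: pd.DataFrame) -> bool:
    # Hypothetical helper: assumes both results use the same column
    # names (aliasing differences would need extra handling).
    g, p = _normalize(df_gold), _normalize(df_pred)
    return g.shape == p.shape and bool((g.values == p.values).all())
```

Note that this only evaluates *correctness* of the output, not the complexity or diversity of the query itself, which is what the blog post seems to claim is measured.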
