# [Eval] Adding CoSQL eval (openai#1268)
# Thank you for contributing an eval! ♥️

🚨 Please make sure your PR follows these guidelines, **failure to follow
the guidelines below will result in the PR being closed automatically**.
Note that even if the criteria are met, that does not guarantee the PR
will be merged nor GPT-4 access be granted. 🚨

**PLEASE READ THIS**:

In order for a PR to be merged, it must fail on GPT-4. We are aware that
right now, users do not have access, so you will not be able to tell if
the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep
in mind as we run the eval, if GPT-4 gets higher than 90% on the eval,
we will likely reject it since GPT-4 is already capable of completing
the task.

We plan to roll out a way for users submitting evals to see the eval
performance on GPT-4 soon. Stay tuned! Until then, you will not be able
to see the eval performance on GPT-4. **Starting April 10, the minimum
eval count is 15 samples; we hope this makes it easier to create and
contribute evals.**

Also, please note that we're using **Git LFS** for storing the JSON
files, so please make sure that you move the JSON file to Git LFS before
submitting a PR. Details on how to use Git LFS are available
[here](https://git-lfs.com).

## Eval details 📑

### Eval name

CoSQL

### Eval description

[CoSQL](https://yale-lily.github.io/cosql) is a dataset for cross-domain,
conversational text-to-SQL systems. It is the dialogue version of the
[Spider](https://yale-lily.github.io/spider) task, which already has an
eval registered in this repo:
[sql.yaml](https://github.com/openai/evals/blob/main/evals/registry/evals/sql.yaml).

To illustrate the difference: in Spider, the task is to generate SQL for
a single, standalone question:
```
Q: How many singers do we have?
A: SELECT count(*) FROM singer
```

CoSQL, by contrast, simulates a real-world conversation in which a user
talks to a SQL expert, who writes SQL to retrieve data for them. For example:
```
Q: Which cartoon aired first?
A: SELECT title  FROM cartoon ORDER BY original_air_date LIMIT 1

Q: What was the last cartoon to air?
A: SELECT title  FROM cartoon ORDER BY original_air_date desc LIMIT 1

Q: What channel was it on?
```

and the ideal answer is:
```
A: SELECT channel FROM cartoon ORDER BY original_air_date desc LIMIT 1
```

As illustrated, the model must not only reason about the SQL tables and
schemas (supplied in the prompt as system input), but also reference and
understand the previous question-answer pairs.
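To make the sample format concrete, here is a minimal sketch in Python (a hypothetical `build_sample` helper, not code from this PR) of how one CoSQL dialogue turn maps onto the chat-format JSONL records listed under "Eval JSON data" below: the schema and every prior question/answer pair become system messages, the final question becomes the user message, and the gold SQL becomes the ideal answer.

```python
import json

def build_sample(
    schema: str,
    history: list[tuple[str, str]],
    question: str,
    gold_sql: str,
) -> dict:
    """Flatten one CoSQL dialogue turn into an evals chat-format sample."""
    task = (
        "TASK: Answer the following question with syntactically correct SQLite SQL. "
        "The SQL should be correct and be in context of the previous "
        "question-answer pairs.\n"
    )
    # The task instructions plus the schema form the first system message.
    messages = [{"role": "system", "content": task + schema}]
    # Prior turns of the conversation are replayed as system messages.
    for prev_question, prev_sql in history:
        messages.append({"role": "system", "content": f"Q: {prev_question}"})
        messages.append({"role": "system", "content": f"A: {prev_sql}"})
    # The question under evaluation is the single user message.
    messages.append({"role": "user", "content": f"Q: {question}"})
    return {"input": messages, "ideal": [f"A: {gold_sql}"]}

# The cartoon example above, serialized as one JSONL line.
sample = build_sample(
    schema="Table Cartoon, columns = [*,id,Title,Directed_by,Written_by,Original_air_date,Production_code,Channel]\n",
    history=[
        ("Which cartoon aired first?",
         "SELECT title FROM cartoon ORDER BY original_air_date LIMIT 1"),
        ("What was the last cartoon to air?",
         "SELECT title FROM cartoon ORDER BY original_air_date desc LIMIT 1"),
    ],
    question="What channel was it on?",
    gold_sql="SELECT channel FROM cartoon ORDER BY original_air_date desc LIMIT 1",
)
print(json.dumps(sample))
```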

### What makes this a useful eval?

Being able to generate semantically and logically correct SQL is an
exciting application of GPT with many immediate use-cases.

Doing so in a conversational context is more difficult and tests the
model's ability to reason about prior questions and answers, similar to
what you would expect from a domain expert (e.g., a SQL expert).

## Criteria for a good eval ✅

Below are some of the criteria we look for in a good eval. In general,
we are seeking cases where the model does not do a good job despite
being capable of generating a good response (note that there are some
things large language models cannot do, so those would not make good
evals).

Your eval should be:

- [X] Thematically consistent: The eval should be thematically
consistent. We'd like to see a number of prompts all demonstrating some
particular failure mode. For example, we can create an eval on cases
where the model fails to reason about the physical world.
- [X] Contains failures where a human can do the task, but either GPT-4
or GPT-3.5-Turbo could not.
- [X] Includes good signal around what is the right behavior. This means
either a correct answer for `Basic` evals or the `Fact` Model-graded
eval, or an exhaustive rubric for evaluating answers for the `Criteria`
Model-graded eval.
- [X] **Include at least 15 high-quality examples.**

If there is anything else that makes your eval worth including, please
document it below.

## Eval structure 🏗️

Your eval should

- [X] Check that your data is in `evals/registry/data/{name}`
- [X] Check that your YAML is registered at
`evals/registry/evals/{name}.yaml`
- [X] Ensure you have the right to use the data you submit via this eval
  - CoSQL is under the [CC-4 license](https://yale-lily.github.io/cosql)

(For now, we will only be approving evals that use one of the existing
eval classes. You may still write custom eval classes for your own
cases, and we may consider merging them in the future.)

## Final checklist 👀

### Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic
and data under the same MIT license as this repository. You must have
adequate rights to upload any data used in an Eval. OpenAI reserves the
right to use this data in future service improvements to our product.
Contributions to OpenAI Evals will be subject to our usual Usage
Policies (<https://platform.openai.com/docs/usage-policies>).

- [X] I agree that my submission will be made available under an MIT
license and complies with OpenAI's usage policies.

### Email address validation

If your submission is accepted, we will be granting GPT-4 access to a
limited number of contributors. Access will be given to the email
address associated with the commits on the merged pull request.

- [X] I acknowledge that GPT-4 access will only be granted, if
applicable, to the email address used for my merged pull request.

### Limited availability acknowledgment

We know that you might be excited to contribute to OpenAI's mission,
help improve our models, and gain access to GPT-4. However, due to the
requirements mentioned above and the high volume of submissions, we will
not be able to accept all submissions and thus not grant everyone who
opens a PR GPT-4 access. We know this is disappointing, but we hope to
set the right expectation before you open this PR.

- [X] I understand that opening a PR, even if it meets the requirements
above, does not guarantee the PR will be merged nor GPT-4 access be
granted.

### Submit eval

- [X] I have filled out all required fields of this form
- [X] I have used **Git LFS** for the Eval JSON data
- [X] (Ignore if not submitting code) I have run `pip install
pre-commit; pre-commit install` and have verified that `black`, `isort`,
and `autoflake` are running when I commit and push

Failure to fill out all required fields will result in the PR being
closed.

### Eval JSON data

Since we are using Git LFS, we are asking eval submitters to add in as
many Eval Samples (at least 5) from their contribution here:

<details>
  <summary>View evals in JSON</summary>

  ### Eval
  ```jsonl
{"input": [{"role": "system", "content": "TASK: Answer the following
question with syntactically correct SQLite SQL. The SQL should be
correct and be in context of the previous question-answer pairs.\nTable
Cartoon, columns =
[*,id,Title,Directed_by,Written_by,Original_air_date,Production_code,Channel]\nTable
TV_Channel, columns =
[*,id,series_name,Country,Language,Content,Pixel_aspect_ratio_PAR,Hight_definition_TV,Pay_per_view_PPV,Package_Option]\nTable
TV_series, columns =
[*,id,Episode,Air_Date,Rating,Share,18_49_Rating_Share,Viewers_m,Weekly_Rank,Channel]\nForeign_keys
= [TV_series.Channel = TV_Channel.id,Cartoon.Channel =
TV_Channel.id]\n"}, {"role": "system", "content": "Q: Can you please
tell me the language used on the least number of TV Channels?"},
{"role": "system", "content": "A: SELECT LANGUAGE FROM TV_Channel GROUP
BY LANGUAGE ORDER BY count ( * ) ASC LIMIT 1"}, {"role": "system",
"content": "Q: What language is used on the most number of TV
channels?"}, {"role": "system", "content": "A: SELECT LANGUAGE FROM
TV_Channel GROUP BY LANGUAGE ORDER BY count ( * ) desc LIMIT 1"},
{"role": "user", "content": "Q: What is the most common content of those
TV channels?"}], "ideal": ["A: SELECT content from tv_channel group by
content order by count ( * ) desc limit 1"]}
{"input": [{"role": "system", "content": "TASK: Answer the following
question with syntactically correct SQLite SQL. The SQL should be
correct and be in context of the previous question-answer pairs.\nTable
Addresses, columns =
[*,address_id,line_1,line_2,line_3,city,zip_postcode,state_province_county,country,other_address_details]\nTable
Courses, columns =
[*,course_id,course_name,course_description,other_details]\nTable
Degree_Programs, columns =
[*,degree_program_id,department_id,degree_summary_name,degree_summary_description,other_details]\nTable
Departments, columns =
[*,department_id,department_name,department_description,other_details]\nTable
Sections, columns =
[*,section_id,course_id,section_name,section_description,other_details]\nTable
Semesters, columns =
[*,semester_id,semester_name,semester_description,other_details]\nTable
Student_Enrolment, columns =
[*,student_enrolment_id,degree_program_id,semester_id,student_id,other_details]\nTable
Student_Enrolment_Courses, columns =
[*,student_course_id,course_id,student_enrolment_id]\nTable Students,
columns =
[*,student_id,current_address_id,permanent_address_id,first_name,middle_name,last_name,cell_mobile_number,email_address,ssn,date_first_registered,date_left,other_student_details]\nTable
Transcript_Contents, columns =
[*,student_course_id,transcript_id]\nTable Transcripts, columns =
[*,transcript_id,transcript_date,other_details]\nForeign_keys =
[Degree_Programs.department_id =
Departments.department_id,Sections.course_id =
Courses.course_id,Students.permanent_address_id =
Addresses.address_id,Students.current_address_id =
Addresses.address_id,Student_Enrolment.student_id =
Students.student_id,Student_Enrolment.semester_id =
Semesters.semester_id,Student_Enrolment.degree_program_id =
Degree_Programs.degree_program_id,Student_Enrolment_Courses.student_enrolment_id
=
Student_Enrolment.student_enrolment_id,Student_Enrolment_Courses.course_id
= Courses.course_id,Transcript_Contents.transcript_id =
Transcripts.transcript_id,Transcript_Contents.student_course_id =
Student_Enrolment_Courses.student_course_id]\n"}, {"role": "system",
"content": "Q: Could you tell me the last names of all the students who
live in the state of North Carolina?"}, {"role": "system", "content":
"A: SELECT T1.last_name FROM Students AS T1 JOIN Addresses AS T2 ON
T1.current_address_id = T2.address_id WHERE T2.state_province_county =
'NorthCarolina'"}, {"role": "system", "content": "Q: What about their
first names?"}, {"role": "system", "content": "A: SELECT T1.first_name
FROM Students AS T1 JOIN Addresses AS T2 ON T1.current_address_id =
T2.address_id WHERE T2.state_province_county = 'NorthCarolina'"},
{"role": "user", "content": "Q: What are the last names of the students
who live in North Carolina but have not registered in any degree
program?"}], "ideal": ["A: SELECT T1.last_name FROM Students AS T1 JOIN
Addresses AS T2 ON T1.current_address_id = T2.address_id WHERE
T2.state_province_county = 'NorthCarolina' EXCEPT SELECT DISTINCT
T3.last_name FROM Students AS T3 JOIN Student_Enrolment AS T4 ON
T3.student_id = T4.student_id"]}
{"input": [{"role": "system", "content": "TASK: Answer the following
question with syntactically correct SQLite SQL. The SQL should be
correct and be in context of the previous question-answer pairs.\nTable
Addresses, columns =
[*,address_id,line_1,line_2,line_3,city,zip_postcode,state_province_county,country,other_address_details]\nTable
Courses, columns =
[*,course_id,course_name,course_description,other_details]\nTable
Degree_Programs, columns =
[*,degree_program_id,department_id,degree_summary_name,degree_summary_description,other_details]\nTable
Departments, columns =
[*,department_id,department_name,department_description,other_details]\nTable
Sections, columns =
[*,section_id,course_id,section_name,section_description,other_details]\nTable
Semesters, columns =
[*,semester_id,semester_name,semester_description,other_details]\nTable
Student_Enrolment, columns =
[*,student_enrolment_id,degree_program_id,semester_id,student_id,other_details]\nTable
Student_Enrolment_Courses, columns =
[*,student_course_id,course_id,student_enrolment_id]\nTable Students,
columns =
[*,student_id,current_address_id,permanent_address_id,first_name,middle_name,last_name,cell_mobile_number,email_address,ssn,date_first_registered,date_left,other_student_details]\nTable
Transcript_Contents, columns =
[*,student_course_id,transcript_id]\nTable Transcripts, columns =
[*,transcript_id,transcript_date,other_details]\nForeign_keys =
[Degree_Programs.department_id =
Departments.department_id,Sections.course_id =
Courses.course_id,Students.permanent_address_id =
Addresses.address_id,Students.current_address_id =
Addresses.address_id,Student_Enrolment.student_id =
Students.student_id,Student_Enrolment.semester_id =
Semesters.semester_id,Student_Enrolment.degree_program_id =
Degree_Programs.degree_program_id,Student_Enrolment_Courses.student_enrolment_id
=
Student_Enrolment.student_enrolment_id,Student_Enrolment_Courses.course_id
= Courses.course_id,Transcript_Contents.transcript_id =
Transcripts.transcript_id,Transcript_Contents.student_course_id =
Student_Enrolment_Courses.student_course_id]\n"}, {"role": "system",
"content": "Q: What is the full name of the department that has the
substring computer in its name?"}, {"role": "system", "content": "A:
SELECT department_name FROM Departments WHERE department_name LIKE
'%computer%'"}, {"role": "system", "content": "Q: What's the description
of that department?"}, {"role": "system", "content": "A: SELECT
department_description FROM Departments WHERE department_name LIKE
'%computer%'"}, {"role": "user", "content": "Q: What is that
department's id?"}], "ideal": ["A: SELECT department_id FROM Departments
WHERE department_name LIKE '%computer%'"]}
{"input": [{"role": "system", "content": "TASK: Answer the following
question with syntactically correct SQLite SQL. The SQL should be
correct and be in context of the previous question-answer pairs.\nTable
singer, columns =
[*,Singer_ID,Name,Birth_Year,Net_Worth_Millions,Citizenship]\nTable
song, columns =
[*,Song_ID,Title,Singer_ID,Sales,Highest_Position]\nForeign_keys =
[song.Singer_ID = singer.Singer_ID]\n"}, {"role": "system", "content":
"Q: What are the names of the singers who were born in either 1948 or
1949?"}, {"role": "system", "content": "A: SELECT Name FROM singer WHERE
Birth_Year = 1948 OR Birth_Year = 1949"}, {"role": "system", "content":
"Q: What is their citizenship?"}, {"role": "system", "content": "A:
SELECT Citizenship FROM singer WHERE Birth_Year = 1948 OR Birth_Year =
1949"}, {"role": "user", "content": "Q: Of the singers that were born in
1948 or 1949, which had the highest net worth?"}], "ideal": ["A: SELECT
name FROM singer WHERE Birth_Year = 1948 OR Birth_Year = 1949 order by
Net_Worth_Millions desc limit 1"]}
{"input": [{"role": "system", "content": "TASK: Answer the following
question with syntactically correct SQLite SQL. The SQL should be
correct and be in context of the previous question-answer pairs.\nTable
Has_Pet, columns = [*,StuID,PetID]\nTable Pets, columns =
[*,PetID,PetType,pet_age,weight]\nTable Student, columns =
[*,StuID,LName,Fname,Age,Sex,Major,Advisor,city_code]\nForeign_keys =
[Has_Pet.StuID = Student.StuID,Has_Pet.PetID = Pets.PetID]\n"}, {"role":
"system", "content": "Q: Hey can you tell me the average age for
cats?"}, {"role": "system", "content": "A: SELECT avg ( pet_age ) FROM
pets WHERE PetType = 'cat'"}, {"role": "system", "content": "Q: What
about for dogs?"}, {"role": "system", "content": "A: SELECT avg (
pet_age ) FROM pets WHERE PetType = 'dog'"}, {"role": "user", "content":
"Q: Thanks! Now what's the maximum age for dogs?"}], "ideal": ["A:
SELECT max ( pet_age ) FROM pets WHERE PetType = 'dog'"]}
  ```
</details>
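
As a quick sanity check before submitting, a short script like the one below (assuming the LFS-tracked file has been pulled locally with `git lfs pull`) can verify that every record in the JSONL parses and follows the structure shown above: system-role history, a final user question, and a gold SQL ideal answer.

```python
import json

# Path as added in this PR; run `git lfs pull` first so the actual JSONL
# content (not just the LFS pointer file) is available locally.
PATH = "evals/registry/data/sql/co_sql.jsonl"

with open(PATH) as f:
    for line_no, line in enumerate(f, start=1):
        record = json.loads(line)
        messages, ideal = record["input"], record["ideal"]
        # All turns except the last should be system messages (schema + history).
        assert all(m["role"] == "system" for m in messages[:-1]), f"line {line_no}"
        # The final turn is the user question the model must answer.
        assert messages[-1]["role"] == "user", f"line {line_no}"
        # The ideal answer is the gold SQL, prefixed with "A: " as in the samples above.
        assert ideal and ideal[0].startswith("A: "), f"line {line_no}"

print("all samples look well-formed")
```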
pybae authored Jul 4, 2023
1 parent 534d6b5 commit be0a135
Showing 2 changed files with 14 additions and 0 deletions.
3 changes: 3 additions & 0 deletions evals/registry/data/sql/co_sql.jsonl
Git LFS file not shown
11 changes: 11 additions & 0 deletions evals/registry/evals/co-sql.yaml
```yaml
co-sql:
  id: co-sql.dev.v0
  metrics: [accuracy]
  description: Evaluates performance on 100 samples of the CoSQL dataset, a conversational version of Text-to-SQL tasks. Each conversation simulates a real-world DB scenario where a user asks natural language questions and a SQL expert retrieves answers in response. Yu, Tao, et al. \"CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases\" https://arxiv.org/abs/1909.05378
co-sql.dev.v0:
  class: evals.elsuite.modelgraded.classify:ModelBasedClassify
  args:
    samples_jsonl: sql/co_sql.jsonl
    eval_type: cot_classify
    modelgraded_spec: sql
```
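
Once the data file and this registry entry are in place, the eval should be runnable with the repo's standard CLI, e.g. `oaieval gpt-3.5-turbo co-sql` (the eval name matching the top-level key above). Because it is registered as a model-graded eval with the `sql` modelgraded spec, a grader model judges whether the generated SQL matches the ideal answer rather than requiring an exact string match.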
