lm-sys/vicuna-blog-eval

This repo is an archive of the code and data used in the Vicuna blog post.

This repo is deprecated; we recommend using our new question set and evaluation pipeline at fastchat.llm_judge instead.

We do not recommend using this repo because its questions are relatively easy and it does not address known limitations of GPT-4-based evaluation, such as position bias.


Evaluation

Our AI-enhanced evaluation pipeline is based on GPT-4. This section provides a high-level summary of the pipeline. For detailed instructions, please refer to the evaluation documentation.

Pipeline Steps

  1. Generate answers from different models: Use qa_baseline_gpt35.py for ChatGPT, or specify the model checkpoint and run get_model_answer.py for Vicuna and other models (an illustrative command sketch for all four steps appears after this list).

  2. Generate reviews with GPT-4: Use GPT-4 to generate reviews automatically. This step can also be performed manually if the GPT-4 API is not available to you.

  3. Generate visualization data: Run generate_webpage_data_from_table.py to generate data for a static website, which allows you to visualize the evaluation data.

  4. Visualize the data: Serve a static website under the webpage directory. You can use python3 -m http.server to serve the website locally.
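A rough end-to-end sketch of these four steps is shown below. The flags and file paths (--question, --output, --model-path, --question-file, --answer-file, and the table/ paths) are illustrative assumptions, not the exact interface; run each script with --help or consult the evaluation documentation for the real arguments.

```
# Step 1: generate answers (flags and paths below are assumptions; check each script's --help)
python3 qa_baseline_gpt35.py --question table/question.jsonl --output table/answer/answer_gpt35.jsonl
python3 get_model_answer.py --model-path /path/to/vicuna-checkpoint \
    --question-file table/question.jsonl --answer-file table/answer/answer_vicuna.jsonl

# Step 2: generate reviews with GPT-4 (see the evaluation documentation for the
# review script and its arguments; this step can also be done manually if you
# do not have GPT-4 API access)

# Step 3: generate data for the static visualization website
python3 generate_webpage_data_from_table.py

# Step 4: serve the static website locally
cd webpage && python3 -m http.server
```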

Data Format and Contribution

We use the JSON Lines (JSONL) format for evaluation data. The records cover models, prompts, reviewers, questions, answers, and reviews.
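For illustration only, a question record and a matching answer record in this JSON Lines style could look like the two lines below. The field names (question_id, category, model_id, text) are assumptions chosen for this example; the actual keys are defined by the data files in this repo and the evaluation documentation.

```
{"question_id": 1, "category": "generic", "text": "How can I improve my time management skills?"}
{"question_id": 1, "model_id": "vicuna-13b", "text": "Start by tracking how you currently spend your time, then block out focused work periods."}
```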

You can customize the evaluation process or contribute to our project by accessing the relevant data.

For detailed instructions, please refer to the evaluation documentation.
