This repository has been archived by the owner on Oct 11, 2024. It is now read-only.
forked from vllm-project/vllm
Commit
Merge remote-tracking branch 'origin/main' into lwilkinson/profiler-improvements
Showing 321 changed files with 10,095 additions and 2,365 deletions.

@@ -0,0 +1,103 @@
# vLLM benchmark suite

## Introduction

This directory contains the performance benchmarking CI for vllm.
The goal is to help developers understand how their PRs affect the performance of vllm.

This benchmark will be *triggered* upon:
- A PR being merged into vllm.
- Every commit for PRs with the `perf-benchmarks` label.

**Benchmarking Coverage**: latency, throughput and fixed-QPS serving on A100 (support for more GPUs is coming later), with different models.

**Benchmarking Duration**: about 1 hr.

**For benchmarking developers**: please try your best to constrain the duration of benchmarking to less than 1.5 hr so that it does not take too long to run.

## Configuring the workload

The benchmarking workload contains three parts:
- Latency tests in `latency-tests.json`.
- Throughput tests in `throughput-tests.json`.
- Serving tests in `serving-tests.json`.

See [descriptions.md](tests/descriptions.md) for detailed descriptions.

### Latency test

Here is an example of one test inside `latency-tests.json`:

```json
[
  {
    "test_name": "latency_llama8B_tp1",
    "parameters": {
      "model": "meta-llama/Meta-Llama-3-8B",
      "tensor_parallel_size": 1,
      "load_format": "dummy",
      "num_iters_warmup": 5,
      "num_iters": 15
    }
  },
]
```

In this example:
- The `test_name` attribute is a unique identifier for the test. In `latency-tests.json`, it must start with `latency_`.
- The `parameters` attribute controls the command line arguments used for `benchmark_latency.py`. Please use an underscore `_` instead of a dash `-` when specifying the arguments, and `run-benchmarks-suite.sh` will convert the underscores to dashes when feeding the arguments to `benchmark_latency.py`. For example, the corresponding command line arguments for `benchmark_latency.py` will be `--model meta-llama/Meta-Llama-3-8B --tensor-parallel-size 1 --load-format dummy --num-iters-warmup 5 --num-iters 15` (see the sketch after this list).

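To make that mapping concrete, here is a minimal Python sketch of the conversion, for illustration only: the actual conversion in this suite is performed by `run-benchmarks-suite.sh`, and the empty-string-to-bare-flag rule below is an assumption based on the serving example later in this document.

```python
import shlex

def parameters_to_cli_args(parameters: dict) -> list[str]:
    """Turn a `parameters` object into command line flags.

    Keys use underscores in the json files; the flags use dashes.
    An empty string value is assumed to mean a bare flag with no value
    (e.g. "disable_log_stats": "" -> --disable-log-stats).
    """
    args = []
    for key, value in parameters.items():
        flag = "--" + key.replace("_", "-")
        if value == "":
            args.append(flag)
        else:
            args.extend([flag, str(value)])
    return args

# The latency test shown above:
params = {
    "model": "meta-llama/Meta-Llama-3-8B",
    "tensor_parallel_size": 1,
    "load_format": "dummy",
    "num_iters_warmup": 5,
    "num_iters": 15,
}
print("benchmark_latency.py", shlex.join(parameters_to_cli_args(params)))
# -> benchmark_latency.py --model meta-llama/Meta-Llama-3-8B --tensor-parallel-size 1
#    --load-format dummy --num-iters-warmup 5 --num-iters 15
```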

Note that the performance numbers are highly sensitive to the values of the parameters. Please make sure the parameters are set correctly.

WARNING: The benchmarking script saves the json results by itself, so please do not configure the `--output-json` parameter in the json file.

### Throughput test

The tests are specified in `throughput-tests.json`. The syntax is similar to `latency-tests.json`, except that the parameters are fed to `benchmark_throughput.py`.

The result of this test is also sensitive to the parameter values -- a slight change in them can vary the performance numbers by a lot.

### Serving test

We test the throughput by using `benchmark_serving.py` with request rate = inf to cover the online serving overhead. The corresponding parameters are in `serving-tests.json`, and here is an example:

```json
[
  {
    "test_name": "serving_llama8B_tp1_sharegpt",
    "qps_list": [1, 4, 16, "inf"],
    "server_parameters": {
      "model": "meta-llama/Meta-Llama-3-8B",
      "tensor_parallel_size": 1,
      "swap_space": 16,
      "disable_log_stats": "",
      "disable_log_requests": "",
      "load_format": "dummy"
    },
    "client_parameters": {
      "model": "meta-llama/Meta-Llama-3-8B",
      "backend": "vllm",
      "dataset_name": "sharegpt",
      "dataset_path": "./ShareGPT_V3_unfiltered_cleaned_split.json",
      "num_prompts": 200
    }
  },
]
```

Inside this example:
- The `test_name` attribute is also a unique identifier for the test. It must start with `serving_`.
- The `server_parameters` attribute includes the command line arguments for the vLLM server.
- The `client_parameters` attribute includes the command line arguments for `benchmark_serving.py`.
- The `qps_list` attribute controls the list of QPS values for the test. It is used to configure the `--request-rate` parameter in `benchmark_serving.py` (see the sketch after this list).

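For illustration only, the sketch below shows how `qps_list` might expand into one `benchmark_serving.py` invocation per QPS value; the actual orchestration, including launching the vLLM server from `server_parameters`, is handled by `run-benchmarks-suite.sh`.

```python
# One benchmark_serving.py run per entry in qps_list; only --request-rate changes.
# The flags mirror the client_parameters of the example above.
qps_list = [1, 4, 16, "inf"]
base_cmd = (
    "python benchmark_serving.py "
    "--model meta-llama/Meta-Llama-3-8B --backend vllm "
    "--dataset-name sharegpt --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json "
    "--num-prompts 200"
)
for qps in qps_list:
    print(f"{base_cmd} --request-rate {qps}")
```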

The result of this test is less stable than the latency and throughput benchmarks (due to randomized sharegpt dataset sampling inside `benchmark_serving.py`), but a large change in this number (e.g. a 5% change) still indicates a real change in performance.

WARNING: The benchmarking script saves the json results by itself, so please do not configure `--save-results` or other results-saving-related parameters in `serving-tests.json`.

## Visualizing the results

The `convert-results-json-to-markdown.py` script puts the benchmarking results into a markdown table by formatting [descriptions.md](tests/descriptions.md) with the real benchmarking results.
You can find the result presented as a table inside the `buildkite/performance-benchmark` job page.
If you do not see the table, please wait until the benchmark finishes running.
The json version of the table (together with the json version of the benchmark results) will also be attached to the markdown file.
The raw benchmarking results (in the form of json files) are in the `Artifacts` tab of the benchmarking job.

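If you want to inspect the raw json artifacts locally, the following is a purely illustrative sketch of building a small markdown table from them; the metric keys (`mean_ttft_ms`, `mean_tpot_ms`) and the `test_name` field are hypothetical placeholders, not necessarily the exact schema written by the suite.

```python
import json
from pathlib import Path

# Hypothetical metric keys; adjust to the keys actually present in the result files.
METRICS = ["mean_ttft_ms", "mean_tpot_ms"]

def results_to_markdown(results_dir: str) -> str:
    """Build a markdown table from a directory of benchmark result json files."""
    rows = ["| test | " + " | ".join(METRICS) + " |",
            "|---|" + "---|" * len(METRICS)]
    for path in sorted(Path(results_dir).glob("*.json")):
        data = json.loads(path.read_text())
        cells = [str(data.get(metric, "n/a")) for metric in METRICS]
        rows.append("| " + str(data.get("test_name", path.stem)) + " | " + " | ".join(cells) + " |")
    return "\n".join(rows)

print(results_to_markdown("./results"))
```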
@@ -0,0 +1,62 @@
steps:
  - label: "Wait for container to be ready"
    agents:
      queue: A100
    plugins:
      - kubernetes:
          podSpec:
            containers:
              - image: badouralix/curl-jq
                command:
                  - sh
                  - .buildkite/nightly-benchmarks/scripts/wait-for-image.sh
  - wait
  - label: "A100 Benchmark"
    agents:
      queue: A100
    plugins:
      - kubernetes:
          podSpec:
            priorityClassName: perf-benchmark
            containers:
              - image: public.ecr.aws/q9t5s3a7/vllm-ci-test-repo:$BUILDKITE_COMMIT
                command:
                  - bash .buildkite/nightly-benchmarks/run-benchmarks-suite.sh
                resources:
                  limits:
                    nvidia.com/gpu: 8
                volumeMounts:
                  - name: devshm
                    mountPath: /dev/shm
                env:
                  - name: VLLM_USAGE_SOURCE
                    value: ci-test
                  - name: HF_TOKEN
                    valueFrom:
                      secretKeyRef:
                        name: hf-token-secret
                        key: token
            nodeSelector:
              nvidia.com/gpu.product: NVIDIA-A100-SXM4-80GB
            volumes:
              - name: devshm
                emptyDir:
                  medium: Memory
  # - label: "H100: NVIDIA SMI"
  #   agents:
  #     queue: H100
  #   plugins:
  #     - docker#v5.11.0:
  #         image: public.ecr.aws/q9t5s3a7/vllm-ci-test-repo:$BUILDKITE_COMMIT
  #         command:
  #           - bash
  #           - .buildkite/nightly-benchmarks/run-benchmarks-suite.sh
  #         mount-buildkite-agent: true
  #         propagate-environment: true
  #         propagate-uid-gid: false
  #         ipc: host
  #         gpus: all
  #         environment:
  #           - VLLM_USAGE_SOURCE
  #           - HF_TOKEN
Comment on commit c387ce5:
smaller_is_better

All results: VLLM Serving - Dense, sparsity None, benchmark_serving with nr-qps-pair_ "300,1" on the sharegpt dataset; GPU: NVIDIA L4 x 1; vllm_version 0.5.1; python_version 3.10.12 (main, Jun 7 2023, 13:43:11) [GCC 11.3.0]; torch_version 2.3.0+cu121.

| Metric | Model (max-model-len) | Current | Previous | Ratio |
|---|---|---|---|---|
| mean_ttft_ms | facebook/opt-350m (2048) | 23.64511809999575 ms | 25.236528139927636 ms | 0.94 |
| mean_tpot_ms | facebook/opt-350m (2048) | 5.9347030184767995 ms | 6.1105002335698435 ms | 0.97 |
| mean_ttft_ms | meta-llama/Meta-Llama-3-8B-Instruct (4096) | 184.62879260333315 ms | 186.83252736327026 ms | 0.99 |
| mean_tpot_ms | meta-llama/Meta-Llama-3-8B-Instruct (4096) | 84.56263215260871 ms | 85.4025971831417 ms | 0.99 |
This comment was automatically generated by workflow using github-action-benchmark.