ci: Continuous Benchmarking (#1785)
* instrument benchmarks with bencher

* remove extraneous command

* track benchmarks folder

* add copy_everything_except_instructions benchmark

* update tornado resolving cve error

* update CONTRIBUTING

* a lil more

* Update Makefile

Co-authored-by: Michael Bryant <[email protected]>

* clarify contributing docs

* add missing fixture

* debug printing

* install deps?

* install deps in both workflows

* Update .github/workflows/benchmark_base.yml

Co-authored-by: jselig-rigetti <[email protected]>

---------

Co-authored-by: Michael Bryant <[email protected]>
Co-authored-by: jselig-rigetti <[email protected]>
3 people authored Jul 1, 2024
1 parent 6665757 commit 3ed86cc
Showing 9 changed files with 24,182 additions and 14 deletions.
30 changes: 30 additions & 0 deletions .github/workflows/benchmark_base.yml
@@ -0,0 +1,30 @@
on:
  push:
    branches: main

jobs:
  benchmark_base_branch:
    name: Continuous Benchmarking with Bencher
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - uses: actions/cache@v2
        with:
          path: .venv
          key: poetry-${{ hashFiles('poetry.lock') }}
      - uses: bencherdev/bencher@main
      - name: Track base branch benchmarks with Bencher
        run: |
          . scripts/ci_install_deps
          bencher run \
            --project pyquil \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch master \
            --testbed ci-runner-linux \
            --err \
            --file results.json \
            poetry run pytest --benchmark-json results.json test/benchmarks
36 changes: 36 additions & 0 deletions .github/workflows/benchmark_pr.yml
@@ -0,0 +1,36 @@
on:
  pull_request:
    types: [opened, reopened, edited, synchronize]

jobs:
  benchmark_pr_branch:
    name: Continuous Benchmarking PRs with Bencher
    if: github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository
    permissions:
      pull-requests: write
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - uses: actions/cache@v2
        with:
          path: .venv
          key: poetry-${{ hashFiles('poetry.lock') }}
      - uses: bencherdev/bencher@main
      - name: Track PR Benchmarks
        run: |
          . scripts/ci_install_deps
          bencher run \
            --project pyquil \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch '${{ github.head_ref }}' \
            --branch-start-point '${{ github.base_ref }}' \
            --branch-start-point-hash '${{ github.event.pull_request.base.sha }}' \
            --testbed ci-runner-linux \
            --err \
            --github-actions '${{ secrets.GITHUB_TOKEN }}' \
            --file results.json \
            poetry run pytest --benchmark-json results.json test/benchmarks
27 changes: 27 additions & 0 deletions CONTRIBUTING.md
@@ -270,6 +270,33 @@ code, run all of the slow tests, and also calculate code coverage, you could run
pytest --cov=pyquil --use-seed=False --runslow <path/to/test-file-or-dir>
```

#### Benchmarks

We use benchmarks to ensure the performance of pyQuil is tracked over time, preventing unintended
regressions. Benchmarks are written and run using [pytest-benchmark](https://pytest-benchmark.readthedocs.io/en/latest/).
This plugin provides a fixture called `benchmark` that can be used to benchmark a Python function.

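For example, a minimal benchmark might look like the following sketch. The program being
built here is illustrative only; it is not one of the actual benchmarks in `test/benchmarks`:

```python
from pyquil import Program
from pyquil.gates import CNOT, H


def build_bell_program() -> Program:
    # Build a small two-qubit program; this stands in for whatever
    # operation you want to measure.
    return Program(H(0), CNOT(0, 1))


def test_build_bell_program(benchmark):
    # The `benchmark` fixture calls the function repeatedly and
    # records timing statistics for the benchmark report.
    program = benchmark(build_bell_program)
    assert len(program.instructions) == 2
```
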
For organization, all benchmarks are located in the `test/benchmarks` directory. To run the
benchmarks, use the command:

```
pytest -v test/benchmarks  # or use the Makefile: make bench
```

Note that benchmark results are unique to your machine. They can't be directly compared to benchmark
results on another machine unless it's a machine with identical specifications running in a similar
environment. To track performance over time in a controlled way, we use _continuous benchmarking_.
When a PR is opened, CI will run the benchmarks and compare the results to the most recent results
on the `master` branch. Since CI always uses the same image and workflow, the results should be
reasonably consistent. That said, runners may share resources or otherwise behave
unpredictably in ways that affect benchmark timings. If you get unexpected results, re-run the
benchmarks to check whether they are consistent (the local comparison sketch below can help).
When opening or reviewing a PR, evaluate the results and ensure there are no unexpected regressions.

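To compare two local runs directly, pytest-benchmark can save results and diff them
afterwards. A sketch, assuming the default `.benchmarks` save location and autosave IDs:

```
pytest --benchmark-autosave test/benchmarks   # saves run 0001
pytest --benchmark-autosave test/benchmarks   # saves run 0002
pytest-benchmark compare 0001 0002            # show both runs side by side
```
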
Continuous benchmarking is implemented with
[bencher](https://bencher.dev/docs/tutorial/quick-start/). See their documentation for more
information.
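
If you ever need to push results to Bencher from outside CI, the same `bencher run` invocation
used in the workflows should work locally. This is a sketch, not a documented contributor
workflow; it assumes you have a valid Bencher API token, and the `local` testbed name is an
assumption:

```
bencher run \
  --project pyquil \
  --token "$BENCHER_API_TOKEN" \
  --testbed local \
  --file results.json \
  poetry run pytest --benchmark-json results.json test/benchmarks
```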

### Building the Docs

The [pyQuil docs](https://pyquil.readthedocs.io) build automatically as part of the CI pipeline.
4 changes: 4 additions & 0 deletions Makefile
@@ -6,6 +6,10 @@ DOCKER_TAG=rigetti/forest:$(COMMIT_HASH)
.PHONY: all
all: dist

.PHONY: bench
bench:
	poetry run pytest -v test/benchmarks

.PHONY: check-all
check-all: check-format check-types check-style

59 changes: 45 additions & 14 deletions poetry.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions pyproject.toml
@@ -58,6 +58,7 @@ pytest-xdist = "^3.6.1"
pytest-rerunfailures = "^14.0.0"
pytest-timeout = "^2.3.1"
pytest-mock = "^3.14.0"
pytest-benchmark = "4.0.0"
respx = "^0.21.1"
syrupy = "^4.6.1"
