Track benchmarks in CI #2012
Conversation
Is there a reason to have separate workflow files for `benchmark master` and `benchmark pull request`? And is embedding JS in the workflow the standard way to upload the results, or is there an existing action that just doesn't apply in our case?
I'm asking out of curiosity, but since this is just experimentation, let's not block this PR in any way.
There are a couple of reasons (and I'll document them in the workflow files). The first is that we want to treat master and PR benchmarks differently when tracking statistics: when checking whether a PR has regressed performance, we only want to compare it to previous master benchmarks, not to every benchmark we've ever run. The other reason (which explains the annoying JS scripts) is that the Bencher API key isn't provided to PRs from forks, so tracking benchmarks for fork PRs takes two steps: run the benchmark in the fork and upload the result as an artifact, then download the result in a different workflow and upload it to Bencher. Meanwhile, we need to propagate the PR metadata between these workflows. The scripts for doing this mostly came from the Bencher documentation.
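For concreteness, a minimal sketch of that two-step setup might look roughly like the following. Everything specific here is a placeholder rather than the actual contents of this PR: the file and artifact names, the `run_benchmarks.sh` command, the `my-project` slug, the `BENCHER_API_TOKEN` secret name, and the exact `bencher` CLI flags.

```yaml
# --- benchmark_pull_request.yml (hypothetical sketch) ---
# Runs in the (possibly forked) PR context, where the Bencher API token
# secret is NOT available. It only produces an artifact containing the
# raw benchmark output plus the PR metadata needed later.
name: benchmark pull request
on: pull_request

jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run benchmarks
        # Placeholder benchmark command; the real one depends on the project.
        run: ./run_benchmarks.sh > benchmark_results.txt
      - name: Record PR number for the follow-up workflow
        run: echo "${{ github.event.number }}" > pr_number.txt
      - uses: actions/upload-artifact@v4
        with:
          name: benchmark_results
          path: |
            benchmark_results.txt
            pr_number.txt

# --- benchmark_pull_request_upload.yml (hypothetical sketch) ---
# Triggered by workflow_run, so it executes from the default branch with
# access to secrets. It downloads the artifact produced above and forwards
# the results to bencher.dev under a PR-specific branch.
name: benchmark pull request upload
on:
  workflow_run:
    workflows: [benchmark pull request]
    types: [completed]

jobs:
  upload:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: bencherdev/bencher@main   # installs the bencher CLI
      - uses: actions/download-artifact@v4
        with:
          name: benchmark_results
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload results to Bencher
        run: |
          bencher run \
            --project my-project \
            --token "${{ secrets.BENCHER_API_TOKEN }}" \
            --branch "pr-$(cat pr_number.txt)" \
            --file benchmark_results.txt
```

The structural point is that the first workflow never touches secrets, while the second runs from the default branch via `workflow_run` and therefore can.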
This is an initial attempt to track benchmarks in CI.
It runs (just a few) benchmarks on every push to master, and on each PR it also runs a few benchmarks and compares them to master. It uses bencher.dev to track benchmark results over time.
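As a rough illustration (not the actual workflow in this PR), the master-branch half of that setup could look something like the sketch below; the project slug, secret name, and benchmark command are placeholders.

```yaml
# benchmark_master.yml (hypothetical sketch): every push to master runs
# the benchmarks and uploads the results to bencher.dev, building the
# baseline that PR runs are later compared against.
name: benchmark master
on:
  push:
    branches: [master]

jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bencherdev/bencher@main   # installs the bencher CLI
      - name: Run and upload benchmarks
        run: |
          bencher run \
            --project my-project \
            --token "${{ secrets.BENCHER_API_TOKEN }}" \
            --branch master \
            "./run_benchmarks.sh"
```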
This probably doesn't work yet. I've been trying to debug why the PR benchmark result uploads don't work; it turns out the workflow file needs to be in the default branch before it will run...