Support reporting geometric mean by benchmark tags #132
Conversation
This requires the following changes to pyperf first: psf/pyperf#132
It seems like you want to add a new key to the metadata. In that case, you should document it: https://pyperf.readthedocs.io/en/latest/api.html#metadata

I dislike changing the default formatting to add "(all)". Can you try omitting "(all)"? For example, if no benchmark has tags, it's odd to display a magic "all" tag.
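For context, here is a minimal sketch of how a benchmark script records custom metadata with pyperf. The "tags" key and its comma-separated encoding are assumptions for illustration, not the documented API:

```python
import pyperf


def workload():
    return sum(range(1000))


runner = pyperf.Runner()
# Hypothetical "tags" key: pyperf metadata values are plain strings,
# so multiple tags are encoded here as a comma-separated string.
runner.metadata["tags"] = "micro,math"
runner.bench_func("bench_sum_range", workload)
```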
In this table, it's also not easy for me to understand which benchmarks are used to compute the geometric mean for each tag, since benchmark tags are not listed. Would it make sense to list the tags?

Do you have real examples of tags on benchmarks? I mean, what are realistic tag values?

Another option is to render one table per tag: it would only list the benchmarks matching that tag, so the "geometric mean" final row would summarize the table. And there would always be a last table with all benchmarks.
We're mostly doing this work in expectation of cleaning up the tags to be more useful. The motivation is to avoid overoptimizing for microbenchmarks, of which there are currently many in the suite. There's further discussion of how we might use tags going forward.
I like this idea. It would also resolve your other comment about "all" being surprising in the untagged case.
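As a rough illustration of the one-table-per-tag idea, here is a sketch of grouping benchmarks by tag and computing a geometric mean per group. The benchmark names, ratios, and tags below are made up for illustration:

```python
import math
from collections import defaultdict


def geometric_mean(values):
    # nth root of the product, computed via logs for numeric stability
    return math.exp(sum(math.log(v) for v in values) / len(values))


# Hypothetical (name, speed ratio vs. reference, tags) triples.
benchmarks = [
    ("deltablue", 1.10, ["apps"]),
    ("json_loads", 1.35, ["micro", "serialize"]),
    ("regex_dna", 1.20, ["micro"]),
]

by_tag = defaultdict(list)
for name, ratio, tags in benchmarks:
    for tag in tags:
        by_tag[tag].append(ratio)

# One summary row per tag, plus a final row over all benchmarks.
for tag, ratios in sorted(by_tag.items()):
    print(f"{tag}: {geometric_mean(ratios):.2f}x slower")
print(f"all: {geometric_mean([r for _, r, _ in benchmarks]):.2f}x slower")
```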
Addresses python/pyperformance#208.

This reports the geometric mean organized by the tag(s) assigned to each benchmark. This will allow us to include benchmarks in the pyperformance suite that we don't necessarily want to include in "one big overall number" to represent progress.
@vstinner: Do these changes work for you?
+----------------+---------------------+-----------------------+
| Geometric mean | (ref)               | 1.22x slower          |
+----------------+---------------------+-----------------------+
"""
Oh wow, that looks great, thank you!
LGTM.
Sadly, the tests fail.
Sorry -- forgot to commit the new test files -- let's see how this works. EDIT: I guess this needs @vstinner or someone to re-approve the CI run.
Commits:

* Support reporting geometric mean by tags (requires psf/pyperf#132 first)
* Ensure `tags` is always a list
* Use property
* Update pyperf
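The "Ensure `tags` is always a list" and "Use property" commits suggest a normalization step. Here is a minimal sketch of what such a property could look like; the class and attribute names are assumptions, not the actual pyperformance code:

```python
class Benchmark:
    def __init__(self, metadata=None):
        self._metadata = metadata or {}

    @property
    def tags(self):
        # Normalize missing, single-string, or list-valued tag metadata
        # so callers can always iterate over a list.
        raw = self._metadata.get("tags")
        if raw is None:
            return []
        if isinstance(raw, str):
            return raw.split(",")
        return list(raw)
```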