A strategy for dealing with broken benchmarks? #186
Replies: 4 comments
-
A few thoughts:

It would be really nice if pyperformance had a nightly CI job that ran itself against the current CPython main branch. That way we would catch things almost instantly. If we think this makes sense, I'd be happy to write up a quick GitHub Action to do so. (This job also doesn't have to live on …)
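For illustration only, here is a minimal sketch of what such a scheduled job could look like. The workflow name, cron time, paths, and the `--fast` flag are assumptions for the sketch, not an existing workflow in the repo:

```yaml
name: nightly-cpython-main   # hypothetical workflow name

on:
  schedule:
    - cron: "0 3 * * *"      # once a day; the exact time is arbitrary
  workflow_dispatch: {}       # allow manual runs as well

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      # Build the current CPython main branch from source.
      - uses: actions/checkout@v4
        with:
          repository: python/cpython
          ref: main
          path: cpython
      - name: Build CPython
        run: |
          cd cpython
          ./configure --prefix="$PWD/installed"
          make -j"$(nproc)"
          make install
      # Install this checkout of pyperformance under the freshly built
      # interpreter and run the suite; any breakage fails the job.
      - uses: actions/checkout@v4
        with:
          path: pyperformance
      - name: Run pyperformance
        run: |
          ./cpython/installed/bin/python3 -m pip install ./pyperformance
          ./cpython/installed/bin/python3 -m pyperformance run --fast
```

The `--fast` option is just to keep a nightly run cheap; the point is that any build or runtime breakage surfaces as a failed scheduled run that people can subscribe to.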
I think our current strategy works well: revert the change if the breakage was gratuitous; otherwise, work with the broken project to fix it.
As for dependencies that have been fixed upstream but not yet released (e.g. Cython), this one's pretty easy: just pin a Git revision instead of a PyPI release. It requires a bit of …
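For example (the commit hash below is a placeholder, not an actual pinned revision), such a pin could look roughly like this in a requirements file:

```
# A pin using a PEP 508 direct reference; the commit hash is a placeholder
# for whichever upstream revision carries the fix.
Cython @ git+https://github.com/cython/cython@<commit-sha>
```

Once the fix lands in a PyPI release, the pin can be switched back to a normal version specifier.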
-
> It would be really nice if pyperformance had a nightly CI job that ran itself against the current CPython main branch. That way we would catch things almost instantly.

The speed.python.org job does this, effectively, doesn't it? It's not the latest pyperformance, but that's not what's breaking -- it's the latest CPython main branch that keeps breaking stuff (obscure stuff, but stuff that some of the benchmarks' deps use), and it's definitely running against that nightly.
-
True. But as we've seen, that's a very quiet failure. At least with a nightly CI job, you have the option of subscribing to notifications/emails for these failures. Another option would be to stick a big red banner at the top of SPO (speed.python.org) when a benchmark is failing. But I actually can't remember the last time I visited the site myself, and it also seems like a lot more work.
-
In the last few months we've broken several pyperformance benchmarks (or rather, their dependencies) with CPython main. (See #175.) Some of the problems have been fixed upstream (e.g. in Cython, but not yet released). Regardless, it is likely that we will break extensions again with future changes.
So...
(@gvanrossum and I were talking about this the other day.)