Optimise Scenario #1434
This indeed seems like an obvious win. Presumably we can just pass in …
Both of these are the positional/keyword argument processing. I wonder if we can find a tidy way to have newer Python use the built-in support and only fall back to this on old Python.
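For reference, a minimal sketch of what the built-in support looks like on Python 3.10+, where `dataclasses.KW_ONLY` makes every field after the sentinel keyword-only with no per-instance inspection cost (the class and field names here are hypothetical, not Scenario's actual ones):

```python
import dataclasses

# Everything after the KW_ONLY sentinel must be passed by keyword; the
# generated __init__ enforces this at class-definition time rather than
# per instance. Field names are made up for illustration.
@dataclasses.dataclass(frozen=True)
class Example:
    _: dataclasses.KW_ONLY
    leader: bool = False
    app_name: str = "app"

Example(leader=True)   # fine
# Example(True)        # TypeError: takes 1 positional argument but 2 were given
```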
I wonder if this is also the same issue: lots of log lines, and each one involves creating a `JujuLogLine`.
Yeah, this is probably the single biggest win. I made this change locally when running … Happy for us to spend a few hours of rainy-day time looking at the other bottlenecks too. (Let's just be careful not to get too carried away on the long tail...)
Alex Batisse mentioned that Scenario tests take much longer, and that the culprit is YAML?
I checked, and I do have libyaml bundled with/used by PyYAML. When I profiled my tests, I noticed that they were spending a long time serializing YAML. Mind you, this is but a quick and dirty experiment, but here is what I tried:
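The actual diff from this comment isn't shown here; a minimal sketch of that kind of experiment, assuming the change was to switch PyYAML to its libyaml-backed C loader/dumper, would look something like this:

```python
import yaml

# Prefer the libyaml-backed C implementations when available; they are
# typically several times faster than the pure-Python SafeLoader/SafeDumper.
try:
    from yaml import CSafeLoader as Loader, CSafeDumper as Dumper
except ImportError:  # PyYAML built without libyaml
    from yaml import SafeLoader as Loader, SafeDumper as Dumper

def load(text: str):
    return yaml.load(text, Loader=Loader)

def dump(obj) -> str:
    return yaml.dump(obj, Dumper=Dumper)
```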
This yielded a ~15% speed increase for my test file.
Some initial times (best of 3), using traefik-k8s (branch …):
@Batalex what repo (and branch, if appropriate) are you testing against? I'll use it as a second benchmark, so that we're checking against multiple types of tests.
I think I was using the latest release (7.0.5), because I directly edited the files in the virtual env between two sessions during the sprint. I don't have this venv around any more, but the project itself has …
Hey @Batalex, sorry for being unclear. I meant: which test suite are you running (one of the data platform repos, I assume), so that I can run against it too with the changes and make sure they do improve things.
Ah, sorry, I was quite slow-witted yesterday. I was testing on https://github.com/canonical/kafka-k8s-operator (branch …).
… (#1494)

This PR changes Scenario to always use `:memory:` as the location for the unit state database, to speed up running tests. In most cases, the charm root location is a `TemporaryDirectory`, so the unit state database only exists during the event emission anyway. The database could be accessed if a specific root directory was provided (or inside the context manager context), but the correct way to interact with the database is through the State API (deferred events, saved state objects), so we don't want to leave that open for users.

Timing (best of 3):

|Suite|main|branch|
|-|-|-|
|operator unit tests (no concurrency)|4m11.923s|3m41.354s|
|traefik scenario tests|3m49.479s|1m56.795s|
|kafka-k8s-operator scenario tests|0m13.463s|0m13.236s|

Note that this [is already the case for Harness](https://github.com/canonical/operator/blob/0ad731a6a9118dfac240e84681f5c41810c4f3f2/ops/_private/harness.py#L309).

Refs #1434
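To illustrate the idea (a simplified sketch, not the actual ops code): SQLite accepts the special `:memory:` path, which keeps the database entirely in RAM for the lifetime of the connection — a good match for state that only needs to exist during a single event emission:

```python
import sqlite3

# On-disk database: persists on the filesystem and pays I/O costs.
disk_conn = sqlite3.connect("unit-state.db")  # path is illustrative

# In-memory database: no filesystem I/O; the data vanishes when the
# connection closes, just like a database file inside a TemporaryDirectory
# that is deleted after the test.
mem_conn = sqlite3.connect(":memory:")
mem_conn.execute("CREATE TABLE snapshot (handle TEXT, data BLOB)")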
… (#1495)

The Scenario code that limits the number of positional arguments (needed in Python 3.8, since the dataclasses there do not have `KW_ONLY`) is quite expensive. For the case of `JujuLogLine`, it's not actually providing any value, because both of the arguments can be positional. `JujuLogLine` objects are created frequently (one per logging call), so the cost of creation matters a lot for test performance. This PR changes `JujuLogLine` to be a regular frozen dataclass.

There are other performance improvements possible in the max-positional-args code (the most significant of which would be only using it for Python 3.8 and 3.9), but this one-line change provides most of the benefit, so it seems worth doing before any more significant work.

Timing (best of 3):

|Suite|main|branch|
|-|-|-|
|operator unit tests (no concurrency)|4m11.923s|3m20.175s|
|traefik scenario tests|3m49.479s|2m18.024s|
|kafka-k8s-operator scenario tests|0m13.463s|0m9.973s|

Refs #1434
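The shape of the change, sketched (treat this as an illustration rather than the actual diff): a plain frozen dataclass generates its `__init__` once, at class-definition time, so creating an instance does no signature inspection at all:

```python
import dataclasses

# A regular frozen dataclass: instantiation is cheap because the generated
# __init__ does no run-time introspection. Field names match my reading of
# Scenario's JujuLogLine, but are not guaranteed to be exact.
@dataclasses.dataclass(frozen=True)
class JujuLogLine:
    level: str
    message: str

line = JujuLogLine("DEBUG", "Emitting Juju event update_status.")
```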
The Scenario code that limits the number of positional arguments (needed in Python 3.8, since the dataclasses there do not have `KW_ONLY`) is quite expensive. We have a method of providing this functionality in 3.8, but it is unfortunately expensive, and therefore a regression from older versions of Scenario. I considered four options here:

* Manually write out the dozen or so `__init__` signatures (as we had at one point in the [original PR](canonical/ops-scenario#137)). I'd rather not, for the same reasons we made that decision at the time.
* Figure out a syntax that allows using regular `dataclasses.KW_ONLY` with modern Python and only uses our custom code on older Python. I spent some time looking into this, and couldn't find a way to do it where the code still seemed readable and maintainable enough.
* Cache the most expensive work (what's in this PR).
* Don't bother doing anything now, and eagerly wait for the ability to drop 3.8 support.

A roughly 5% performance regression felt worth fixing as long as the change is fairly straightforward (although this change only gets about half of that back). The most expensive parts of the `__new__` method are the ones that work with `inspect`: `inspect.signature` in particular, but also getting the default values of the parameters. If we assume that no one is altering the signature of the class at run time (and I believe no one should be), then these values never actually change, but we are currently calculating them every time an instance of the class is created. This PR changes that to cache those three values the first time they are needed. See the sketch after this description.

There's one additional performance tweak in the branch that doesn't make a significant difference but is trivial to do: when checking whether the YAML files exist, skip the filesystem `exists()` call if we just created an empty temporary directory a few lines above, since we know the files cannot exist there.

A drive-by: while working on this branch, I happened to notice Harness referring to `options.yaml`, which does not exist (any more?) as far as I know, so there's a docs tweak to address that.

Timing (best of 3):

|Suite|main|branch|
|-|-|-|
|operator unit tests (no concurrency)|165.25s|161.89s|
|traefik scenario tests|45.49s|44.30s|
|kafka-k8s-operator scenario tests|4.48s|4.38s|

Refs #1434
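A sketch of the caching approach (names and structure hypothetical, not the actual PR code): the signature-derived data cannot change unless someone mutates the class at run time, so it can be computed once per class and reused on every instantiation:

```python
import functools
import inspect

# Compute the inspect-derived data once per class; subsequent instance
# creations hit the cache instead of re-running inspect.signature.
@functools.lru_cache(maxsize=None)
def _init_details(cls):
    sig = inspect.signature(cls.__init__)
    params = [p for name, p in sig.parameters.items() if name != "self"]
    max_positional = sum(
        p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD) for p in params
    )
    defaults = {p.name: p.default for p in params if p.default is not p.empty}
    return sig, max_positional, defaults
```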
Some improvements have been made - I believe these should address the regressions since Scenario 6.x and hopefully a bit more. More can be done, of course, but I think this is sufficient for now. #1498 could be moved up in priority if there are some test suites that are long (at least 30s) where that's the issue. I would definitely like to hear from anyone who has either performance regressions from Scenario 6.x that aren't fixed now, or whose conversion from Harness to Scenario results in slower tests.
The unit testing suite for traefik is probably the largest scenario test battery around.
Today I ran it and I realized it took over a minute and a half to complete, so I decided to profile it.
This is the result: …
To produce the profile, run with this tox env:
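The tox environment itself isn't reproduced here; as a stand-in, one generic way to profile a pytest run with only the standard library (paths are illustrative):

```python
import cProfile
import pstats

import pytest

# Run the suite under the profiler and print the 20 most expensive
# frames by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
pytest.main(["tests/"])
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```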
There are some obvious candidates for optimization:
* `State.__new__`. Can we do something about that?
* Why does `juju-log` take so long?

Profiling Scenario's own test suite yields a very similar picture: