Reduce the amount of YAML load/dump calls present in Scenario tests #1498

Open
tonyandrewmeyer opened this issue Dec 12, 2024 · 1 comment · May be fixed by tonyandrewmeyer/operator#7
@tonyandrewmeyer
Contributor

We currently do quite a lot of work with the YAML files (either charmcraft.yaml, or metadata.yaml with optional actions.yaml and config.yaml), particularly in the common "autoload" case where no charm root directory is explicitly provided:

  • Whenever a Context object is created, we load the metadata, trying charmcraft.yaml first and falling back to the legacy files if needed (1x open and 1x YAML load).
  • With each run, we create a temporary directory and write the three files there to simulate the Juju state (3x open and 3x YAML dump), using the dictionary data from the charm spec. The directory is discarded after the run.
  • With each run, we create an ops.CharmMeta object, reading back the text of the metadata.yaml and actions.yaml files written above (and probably config.yaml in the future) and doing another YAML load. A simplified sketch of this round-trip is shown after the list.
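
As a rough illustration, here's a minimal sketch of the per-run round-trip described above (the helper names and structure are illustrative, not the actual Scenario internals):

```python
import pathlib
import tempfile

import yaml


def _write_charm_files(root: pathlib.Path, spec: dict) -> None:
    # 3x open and 3x YAML dump: dicts already held in memory are serialised back to disk.
    for name in ("metadata", "actions", "config"):
        (root / f"{name}.yaml").write_text(yaml.safe_dump(spec.get(name) or {}))


def _load_charm_meta(root: pathlib.Path) -> tuple[dict, dict]:
    # ...and then immediately re-read and re-parsed to build the CharmMeta for the run.
    meta = yaml.safe_load((root / "metadata.yaml").read_text())
    actions = yaml.safe_load((root / "actions.yaml").read_text())
    return meta, actions


def run_once(spec: dict) -> tuple[dict, dict]:
    with tempfile.TemporaryDirectory() as tmp:
        root = pathlib.Path(tmp)
        _write_charm_files(root, spec)
        return _load_charm_meta(root)
```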

There's definitely some optimisation possible here.
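
One possible direction (just a sketch, not a committed plan): ops.CharmMeta can be constructed from already-parsed dicts, so the dump-to-disk/re-parse step could be skipped whenever the data is already in memory, leaving the temporary files only for charm code that reads them itself:

```python
import ops


def build_meta(meta_dict: dict, actions_dict: dict | None = None) -> ops.CharmMeta:
    # Build the CharmMeta directly from the parsed dicts instead of writing them
    # out to metadata.yaml/actions.yaml and re-loading the YAML.
    return ops.CharmMeta(raw=meta_dict, actions_raw=actions_dict or {})
```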

This is partially split off from #1434, but that ticket is mostly being treated as covering regressions from earlier Scenario, which this is not (although it might result in worse performance when moving from Harness to Scenario).

@tonyandrewmeyer tonyandrewmeyer self-assigned this Dec 12, 2024
@tonyandrewmeyer
Copy link
Contributor Author

A note on timing, if using the same charms as in other performance testing: kafka-k8s-operator currently reads the three YAML files in the test code and passes the already-loaded data to the Context, so it probably won't gain much here.
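
For reference, that pattern looks roughly like the sketch below (MyCharm is a placeholder for the charm class under test, and this assumes the usual Context(charm_type, meta=..., actions=..., config=...) signature):

```python
import pathlib

import yaml

import ops
from ops import testing  # Scenario-backed testing API


class MyCharm(ops.CharmBase):  # placeholder for the real charm class under test
    pass


# Load and parse the YAML once, at module import time...
_META = yaml.safe_load(pathlib.Path("metadata.yaml").read_text())
_ACTIONS = yaml.safe_load(pathlib.Path("actions.yaml").read_text())
_CONFIG = yaml.safe_load(pathlib.Path("config.yaml").read_text())


def make_context():
    # ...and pass the parsed dicts to the Context, so autoload never opens the files.
    return testing.Context(MyCharm, meta=_META, actions=_ACTIONS, config=_CONFIG)
```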
