test scenario itself against real juju #2
This PR adds a new test suite aimed at validating the output of the simulated backend against the output of the real hook tools.
tests/consistency/conftest.py exposes a compare function, used like:
compare("relation_list", "relation_name").
This compares the _MockModelBackend result against the result of, essentially, running juju exec -u some-unit/0 -m some-model -- relation-list relation_name.
If the hook tool call exits with an error code (which _ModelBackend wraps in an ops.model.ModelError), compare() checks that _MockModelBackend raises a similar error.
If the hook tool call returns a value, compare() checks that _MockModelBackend returns exactly the same value as _ModelBackend.
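For reference, a minimal sketch of how such a compare helper could be wired up as a pytest fixture is below. The mock_backend fixture, the _run_real_hook_tool helper, the subprocess-based juju exec call, and the --format=json output handling are all assumptions for illustration, not necessarily what conftest.py actually does:

```python
import json
import subprocess

import pytest
import ops.model

MODEL = "some-model"   # hardcoded for now, as noted below
UNIT = "some-unit/0"


def _run_real_hook_tool(call: str, *args: str):
    """Run the hook tool on the live unit, roughly as _ModelBackend would."""
    tool = call.replace("_", "-")  # e.g. "relation_list" -> "relation-list"
    cmd = ["juju", "exec", "-u", UNIT, "-m", MODEL, "--",
           tool, *args, "--format=json"]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        # _ModelBackend wraps hook tool failures in ModelError
        raise ops.model.ModelError(proc.stderr)
    return json.loads(proc.stdout)


@pytest.fixture
def compare(mock_backend):  # `mock_backend` assumed to provide a _MockModelBackend
    def _compare(call: str, *args: str):
        try:
            real = _run_real_hook_tool(call, *args)
        except ops.model.ModelError:
            # the real backend errored: the mock backend must error too
            with pytest.raises(ops.model.ModelError):
                getattr(mock_backend, call)(*args)
            return
        # the real backend returned a value: the mock must return the same one
        assert getattr(mock_backend, call)(*args) == real

    return _compare
```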
For now this works with a hardcoded model and unit name, and the State used by _MockModelBackend is also hardcoded.
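As a rough illustration, the hardcoded State backing the relation_list case could look something like the snippet below; the import path and the State/Relation constructor arguments are assumptions about the scenario API, not the actual code in this PR:

```python
from scenario import Relation, State

# Hypothetical hardcoded State: one relation on the "relation_name" endpoint
# with a single remote unit, so relation-list has something to return.
HARDCODED_STATE = State(
    relations=[
        Relation(
            endpoint="relation_name",      # matches the compare() call above
            remote_app_name="remote",
            remote_units_data={0: {}},     # remote/0 with an empty databag
        )
    ]
)
```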
Next steps:
create a testing charm (or bundle?) to pack and deploy using pytest-operator, along with a corresponding State; or pick a reference charm/bundle to deploy and test with, which is probably the better option.