docs: clearly deprecate Harness in the testing how-tos (#1508)
Small adjustments to the how-to guides on testing to make it clear that
Harness is deprecated:

* Tweak the general guide so that it's still basically saying the same
thing, but refers to Scenario rather than Harness.
* Adjust example tests to use ops.testing Scenario classes rather than
Harness.
* Make sure it's clear that `ops[testing]` should be installed to write
unit tests.
* Use Scenario from `ops.testing` rather than from `scenario`.

Note that this does not substantially change the testing documentation:
there's a roadmap item to do that later in 25.04.

[Live preview](https://ops--1508.org.readthedocs.build/en/1508/)

---------

Co-authored-by: Ben Hoyt <[email protected]>
Co-authored-by: Dave Wilding <[email protected]>
3 people authored Dec 19, 2024
1 parent dea45c1 commit 91a4ea4
Showing 4 changed files with 91 additions and 103 deletions.
8 changes: 1 addition & 7 deletions docs/explanation/testing.md
@@ -31,13 +31,7 @@ Unit tests are intended to be isolating and fast to complete. These are the test
**Tools.** Unit testing a charm can be done using:

- [`pytest`](https://pytest.org/) and/or [`unittest`](https://docs.python.org/3/library/unittest.html) and
- [state transition testing](https://ops.readthedocs.io/en/latest/reference/ops-testing.html), using the `ops` unit testing framework

**Examples.**

104 changes: 38 additions & 66 deletions docs/howto/get-started-with-charm-testing.md
@@ -91,60 +91,48 @@ A 'live', deployed Juju application will have access to all the inputs we discus

You will notice that the starting point is typically always an event. A charm doesn't do anything unless it's being run, and it is only run when an event occurs. So there is *always* an event context to be mocked. This has important consequences for the unit-testing framework, as we will see below.

### The testing framework

In the charming world, unit testing means state-transition testing.

> See more: [`ops.testing`](https://ops.readthedocs.io/en/latest/reference/ops-testing.html)

`State` is the 'mocker' for most inputs and outputs you will need. Where a live charm would gather its input through context variables and calls to the Juju API (by running the hook tools), a charm under unit test will gather data using a mocked backend managed by the testing framework. Where a live charm would produce output by writing files to a filesystem, `Context` and `Container` expose a mock filesystem the charm will be able to interact with without knowing the difference. More specific outputs, however, will need to be mocked individually.

A typical test will look like this:

- set things up:
    - set up the context
    - mock any 'output' callable that you know would misfire or break (for example, a system call -- you don't want a unit test to reboot your laptop)
    - set up the Juju state in which the event will fire, including config and relation data
- **simulate an event via `Context.run`**
- get the output
- run assertions on the output

> Obviously, other flows are possible; for example, where you unit test individual charm methods without going through the whole event context setup, but this is the characteristic one.

### Understanding the testing framework

When you instantiate `Context` and `State` objects, the charm instance does not exist yet. Just like in a live charm, it is possible that when the charm is executed for the first time, the Juju model already has given it storage, relations, some config, or leadership. This delay is meant to give us a chance to simulate this in our test setup. You create a `State` object, then you prepare the 'initial state' of the model mock, then you finally initialise the charm and simulate one or more events.

The `Context` provides methods for all the Juju events. For example:

- the cloud admin changes the charm config: `ctx.on.config_changed()`
- the cloud admin relates this charm to some other: `ctx.on.relation_created(relation)`
- a remote unit joins in a relation (for example, because the cloud admin has scaled up a remote charm): `ctx.on.relation_joined(relation)`
- a remote unit touches its relation data: `ctx.on.relation_changed(relation)`
- the cloud admin removes a relation: `ctx.on.relation_departed(relation)`, then `ctx.on.relation_broken(relation)`
- a storage is attached/detached: `ctx.on.storage_attached(storage)` / `ctx.on.storage_detached(storage)`
- a container becomes ready: `ctx.on.pebble_ready(container)`
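
For instance, a relation-changed test can be driven entirely through these constructors. Below is a minimal sketch, assuming a charm class `MyCharm` whose `charmcraft.yaml` declares a `db` relation endpoint; the endpoint and remote application names are illustrative:

```python
from ops import testing

from charm import MyCharm


def test_relation_changed_is_handled():
    ctx = testing.Context(MyCharm)
    # Declare a relation on the (assumed) 'db' endpoint and include it in the input state.
    relation = testing.Relation('db', remote_app_name='remote-db')
    state_in = testing.State(relations={relation})
    # Fire relation-changed for that relation against the prepared state.
    state_out = ctx.run(ctx.on.relation_changed(relation), state_in)
    # Replace with assertions about the output state, e.g. statuses or relation data.
    assert state_out.unit_status is not None
```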

### Writing a test

The typical way in which we want to structure a test is:
- arrange the required inputs
- mock any function or system call you need to
- act, by calling `ctx.run`
- assert some output matches what is expected, or some function is called with the expected parameters, etc...

A simple test might look like this:
@@ -155,38 +143,31 @@

```python
import yaml

from charm import MyCharm
from ops import testing

def test_pebble_ready_writes_config_file():
"""Test that on pebble-ready, a config file is written"""
harness: testing.Harness[MyCharm] = testing.Harness(MyCharm)
# If you want to mock charm config:
harness.update_config({'foo': 'bar'})
# If you want to mock charm leadership:
harness.set_leader(True)

# If you want to mock relation data:
relation_ID = harness.add_relation('relation-name', 'remote-app-name')
harness.add_relation_unit(relation_ID, 'remote-app-name/0')
harness.update_relation_data(relation_ID, 'remote-app-name/0', {'baz': 'qux'})

# We are done setting up the inputs.
ctx = testing.Context(MyCharm)

harness.begin()
charm = harness.charm # This is a MyCharm instance.
relation = testing.Relation(
'relation-name',
remote_app_name='remote-app-name',
remote_units_data={1: {'baz': 'qux'}},
)

# We are done setting up the inputs:
state_in = testing.State(
config={'foo': 'bar'}, # Mock the current charm config.
leader=True, # Mock the charm leadership.
relations={relation}, # Mock relation data.
)

# This will fire a `<container-name>-pebble-ready` event.
harness.container_pebble_ready("workload")
state_out = ctx.run(ctx.on.pebble_ready(container), state_in)

# Suppose that MyCharm has written a YAML config file via Pebble.push():
container = charm.unit.get_container("workload")
file = "/opt/workload/path_to_config_file.yaml"
config = yaml.safe_load(container.pull(file).read())
container = state_out.get_container(container.name)
file = "path_to_config_file.yaml"
config = yaml.safe_load((container.get_filesystem() / file).read())
assert config[0]['foo']['bar'] == 'baz' # or whatever
```

```{note}
An important difference between a `Harness`-based test and a 'live', deployed charm is that `Harness` holds on to the charm instance between events, while a deployed charm garbage-collects the charm instance between hooks. The `ops.testing` framework matches the live behaviour: each call to `Context.run` constructs a fresh charm instance, so if your charm sets state in, say, instance attributes and relies on it in subsequent event handling loops, these unit tests (and integration tests) will catch that mistake.
```

## Integration testing

Where unit testing focuses on black-box method-by-method verification, integration testing focuses on the big picture. Typically, integration tests check that the charm does not break (generally this means: it does not end up in a `blocked` or `error` status) when a (mocked) cloud admin performs certain operations. These operations are scripted by using, in order of abstraction:
@@ -221,7 +202,6 @@ Once you have used `ops_test` to get a model in which to run your integration te
```{note}
*Pro tip*: you can prevent `ops_test` from tearing down the model on exit by passing the `--keep-models` argument. This is useful when the tests fail and the logs don't provide a sufficient post-mortem and a real live autopsy is required.
```

Detailed documentation of how to use `ops_test` and `pytest-operator` is out of scope for this document. However, this is an example of a typical integration test:
@@ -255,13 +235,6 @@ async def test_operation(ops_test: OpsTest):

A good integration testing suite will check that the charm continues to operate as expected whenever possible, by combining these simple elements.
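
To make the shape of such a test concrete, here is a minimal sketch using `pytest-operator`; the application name and timeout are illustrative rather than taken from the example above:

```python
import pytest
from pytest_operator.plugin import OpsTest


@pytest.mark.abort_on_fail
async def test_build_and_deploy(ops_test: OpsTest):
    # Build the charm from the repository root and deploy it into the test model.
    charm = await ops_test.build_charm('.')
    await ops_test.model.deploy(charm, application_name='my-charm')
    # Block until the unit settles into active/idle (or the timeout expires).
    await ops_test.model.wait_for_idle(apps=['my-charm'], status='active', timeout=600)
```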

## Continuous integration

Typically, you want the tests to be run automatically against any PR into your repository's main branch, and sometimes, to trigger a new release whenever that succeeds. CD is out of scope for this article, but we will look at how to set up a basic CI.
@@ -333,4 +306,3 @@ Integration tests are a bit more complex, because in order to run those tests, a
## Conclusion

We have examined all angles one might take when testing a charm, and given a brief overview of the most popular frameworks for implementing unit and integration tests, all the way to how one would link them up with a CI system to make sure the repository remains clean and tested.

74 changes: 48 additions & 26 deletions docs/howto/write-scenario-tests-for-a-charm.md
@@ -1,45 +1,67 @@
(write-scenario-tests-for-a-charm)=
# How to write unit tests for a charm

First of all, install the Ops testing framework. To do this in a virtual environment
while you are developing, use `pip` or another package
manager. For example:

```
pip install ops[testing]
```

Normally, you'll include this in the dependency group for your unit tests, for
example in a `test-requirements.txt` file:

```text
ops[testing] ~= 2.17
```

Or in `pyproject.toml`:

```toml
[dependency-groups]
test = [
"ops[testing] ~= 2.17",
]
```

Then, open a new `test_foo.py` file where you will put the test code.

```python
import ops
from ops import testing
```


Then declare a new charm type:
```python
class MyCharm(ops.CharmBase):
    pass
```

And finally we can write a test function. The test code should use a `Context` object to encapsulate the charm type being tested (`MyCharm`) and any necessary metadata, then declare the initial `State` the charm will be presented when run, and `run` the context with an `event` and that initial state as parameters.

In code:

```python
def test_charm_runs():
    # Arrange:
    # Create a Context to specify what code we will be running,
    ctx = testing.Context(MyCharm)
    # and create a State to specify what simulated data the charm being run will access.
    state_in = testing.State(leader=True)

    # Act:
    # Ask the context to run an event, e.g. 'start', with the state we have previously created.
    state_out = ctx.run(ctx.on.start(), state_in)

    # Assert:
    # Verify that the output state looks like you expect it to.
    assert state_out.unit_status.name == 'unknown'
```

> See more:
> - [State](https://ops.readthedocs.io/en/latest/reference/ops-testing.html#ops.testing.State)
> - [Context](https://ops.readthedocs.io/en/latest/reference/ops-testing.html#ops.testing.Context)
```{note}
@@ -48,8 +70,8 @@ If you like using unittest, you should rewrite this as a method of some TestCase

## Mocking beyond the State

If you wish to use the framework to test an existing charm type, you will probably need to mock out certain calls that are not covered by the `State` data structure.
In that case, you will have to manually mock, patch or otherwise simulate those calls on top of what the framework does for you.

For example, suppose that the charm we're testing uses the `KubernetesServicePatch`. To update the test above to mock that object, modify the test file to contain:

@@ -63,17 +85,17 @@ def my_charm():
yield MyCharm
```
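
In full, such a fixture might look like the following sketch, assuming the charm module imports the class as `charm.KubernetesServicePatch`; the lambda's signature is illustrative and only needs to match how the charm calls it:

```python
from unittest.mock import patch

import pytest

from charm import MyCharm


@pytest.fixture
def my_charm():
    # Replace KubernetesServicePatch with a no-op stand-in for the duration of
    # each test, so instantiating the charm never touches the Kubernetes API.
    with patch('charm.KubernetesServicePatch', lambda charm, ports: None):
        yield MyCharm
```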

Then you should rewrite the test to pass the patched charm type to the `Context`, instead of the unpatched one. In code:

```python
def test_charm_runs(my_charm):
    # Arrange:
    # Create a Context to specify what code we will be running
    ctx = testing.Context(my_charm)
    # ...
```

```{note}
If you use pytest, you should put the `my_charm` fixture in a top-level `conftest.py`, as it will likely be shared between all your unit tests.
```
8 changes: 4 additions & 4 deletions docs/howto/write-unit-tests-for-a-charm.md
@@ -1,17 +1,18 @@
(write-unit-tests-for-a-charm)=
# How to write legacy unit tests for a charm

`ops` provides a legacy testing harness that was previously used to check your charm does the right thing in different scenarios without having to create a full deployment.

## Testing basics

Here’s a minimal example, taken from the `charmcraft init` template with some additional comments:

```python
# Import Ops library's legacy testing harness
import ops
import ops.testing
import pytest

# Import your charm class
from charm import TestCharmCharm

@@ -192,4 +193,3 @@ harness.update_config(key_values={'the-answer': 42}) # can_connect is False
harness.container_pebble_ready('foo') # set can_connect to True
assert 42 == harness.charm.config_value
```
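
For reference, a self-contained Harness test typically follows the pattern sketched below; the fixture and the leadership check are illustrative rather than taken from the template:

```python
import ops
import ops.testing
import pytest

from charm import TestCharmCharm


@pytest.fixture
def harness():
    harness = ops.testing.Harness(TestCharmCharm)
    # Make sure the Harness tears down its mocked backend after each test.
    yield harness
    harness.cleanup()


def test_leader_flag(harness: ops.testing.Harness):
    harness.set_leader(True)
    harness.begin()  # Instantiate the charm without running the setup-phase hooks.
    assert harness.charm.unit.is_leader()
```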
