Hexanator Charm Review PR #1
Conversation
Looking good! I added a few comments, but I didn't try running this myself.
src/charm.py
Outdated
```python
# Log messages can be retrieved using juju debug-log
logger = logging.getLogger(__name__)

VALID_LOG_LEVELS = ["info", "debug", "warning", "error", "critical"]
```
This is leftover from the charmcraft profile, right? Since you're not using it it might as well be removed.
tests/integration/test_charm.py
Outdated
```python
logger = logging.getLogger(__name__)

METADATA = yaml.safe_load(Path("./metadata.yaml").read_text())
```
Does this work? It looks like it's from the older version of the charmcraft profile that still used `metadata.yaml` in the source repo. It seems like you'd need it to be `charmcraft.yaml` instead.
tests/integration/test_charm.py
Outdated
```python
    """
    # Build and deploy charm from local source folder
    charm = await ops_test.build_charm(".")
    resources = {"httpbin-image": METADATA["resources"]["httpbin-image"]["upstream-source"]}
```
Ah, I assume you haven't worked on this yet, since the source name is also the profile one not the one you're actually using.
This is a good start! It looks like you've made good use of Rockcraft and the base layer, so the charm needs to do almost nothing except start the service (replan) -- nice.
I'd suggest a few things to get this to "onboarding-ready" level:
- I think it's a bit too basic: I think we need at least a relation/integration or an action.
- Currently `tox` doesn't run cleanly (just formatting issues I think?).
- Make sure the integration test(s) work.
- I guess we don't need any config values? That's good if not -- the more self-configuring, the better. And it certainly simplifies things.
```python
def test_ingress_requirer(harness):
    assert harness.charm.ingress._auto_data == (None, None, 80)
```
Is there another way to test this without resorting to accessing lib-private fields? Could we listen for the custom events the charm lib sends instead? (If any.)
I think that the proper way is probably to examine the data bag. `test_scenario.py: test_ingress()` does that.
I'm not sure about harness tests; I feel they are meant to be lower level, but maybe I'm wrong and should send a relation event and then poke at the databag too.
It is definitely common for Harness tests to trigger relation events (either explicitly or via the shortcuts) and to assert on what's in the data bags.
Thank you for the early review, addressing...
Looks pretty reasonable for a first charm!
```python
    monkeypatch.setattr(
        "socket.getfqdn", lambda: "hexanator-0.hexanator-endpoints.mymodel.svc.cluster.local"
    )
    relation = initial_state.get_relation(1)
```
@tonyandrewmeyer It's a pity we still need (semi-)magic numbers here. Any way to avoid?
It's only there because the relation object is hidden away in a fixture.
You could use `get_relations` since you know there's only one for the endpoint in this case, or you could adjust the fixtures so that the relation is a fixture that both the initial state and the test use, so that you have the id available.
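A rough sketch of that fixture restructuring. The `Relation` and `State` classes below are plain-dataclass stand-ins for the Scenario objects (names and the `ingress` endpoint are assumptions); the structural point is that the relation is created once and shared, so no magic relation id is needed:

```python
from dataclasses import dataclass

# Stand-ins for the Scenario Relation/State objects (hypothetical shapes).
@dataclass(frozen=True)
class Relation:
    endpoint: str
    id: int = 1

@dataclass(frozen=True)
class State:
    relations: frozenset

def make_ingress_relation() -> Relation:
    # In pytest this would be its own fixture.
    return Relation("ingress")

def make_initial_state(relation: Relation) -> State:
    # The initial-state fixture takes the relation fixture as an argument.
    return State(relations=frozenset({relation}))

# A test built this way holds the relation object directly:
rel = make_ingress_relation()
state = make_initial_state(rel)
assert rel in state.relations  # no state.get_relation(1) lookup required
```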
```python
    The `gubernator` service is configured and enabled in the `rockcraft.yaml` file.
    Pebble starts with `--on-hold` in the workload container, release it.
    """
    event.workload.replan()
```
There is a very slim chance of this failing because Pebble (or the workload container) has gone away between the ready event being triggered and here. If that happened, the unit would go into error state, and the problem would probably solve itself (k8s recreation or whatever) and when the event was retried, or a new pebble-ready was received, all would be ok.
I feel like that's fine. However, charmers don't like going into error state, so here you would catch the Pebble error (probably ConnectionError) and defer. Up to you what you do :)
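A minimal sketch of that catch-and-defer pattern. The stub classes stand in for the real ops objects (`PebbleConnectionError` plays the role of `ops.pebble.ConnectionError`; the event and workload stubs are hypothetical), so only the control flow is illustrated:

```python
class PebbleConnectionError(Exception):
    """Stand-in for ops.pebble.ConnectionError."""

class FakeWorkload:
    def __init__(self, alive=True):
        self.alive = alive
        self.replanned = False

    def replan(self):
        if not self.alive:
            raise PebbleConnectionError("Pebble went away")
        self.replanned = True

class FakeEvent:
    def __init__(self, workload):
        self.workload = workload
        self.deferred = False

    def defer(self):
        self.deferred = True

def on_pebble_ready(event):
    try:
        event.workload.replan()
    except PebbleConnectionError:
        # Pebble or the container vanished between pebble-ready firing and now;
        # defer so the hook is retried later instead of going into error state.
        event.defer()

# Happy path: the service is replanned.
ok = FakeEvent(FakeWorkload(alive=True))
on_pebble_ready(ok)
assert ok.workload.replanned and not ok.deferred

# Pebble gone: the event is deferred rather than raising.
gone = FakeEvent(FakeWorkload(alive=False))
on_pebble_ready(gone)
assert gone.deferred
```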
tests/unit/test_scenario.py
Outdated
```python
@pytest.fixture
def initial_state():
    pebble_layers = {"default": ops.pebble.Layer(raw=default_pebble_layer())}  # type: ignore
```
I think you could avoid the ignore by having the function return a LayerDict, not dict.
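Roughly what that could look like. Here `LayerDict` is a local `TypedDict` stand-in for `ops.pebble.LayerDict` (a real charm would import the latter), and the layer contents, including the service command, are assumptions:

```python
from typing import TypedDict

# Local stand-in for ops.pebble.LayerDict (hypothetical subset of its keys).
class LayerDict(TypedDict, total=False):
    summary: str
    description: str
    services: dict

def default_pebble_layer() -> LayerDict:
    # With the precise return type, ops.pebble.Layer(raw=default_pebble_layer())
    # would type-check without the "# type: ignore" comment.
    return {
        "summary": "gubernator layer",
        "services": {
            "gubernator": {
                "override": "replace",
                "command": "gubernator",  # hypothetical command line
                "startup": "enabled",
            }
        },
    }
```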
```python
def test_startup(ctx, initial_state):
    container = list(initial_state.containers)[0]
```
We'd like to avoid people referring to state components by number (this is one of the reasons they are changing from lists to sets). You know the name of the container, so it seems simplest to just do:
```python
# before
container = list(initial_state.containers)[0]
# after
container = initial_state.get_container("gubernator")
```
```python
    state = ctx.run(ctx.on.pebble_ready(container), initial_state)
    assert state.unit_status == ops.ActiveStatus()
    assert (
        list(state.containers)[0].service_status["gubernator"] == ops.pebble.ServiceStatus.ACTIVE
    )
```
Same here:
```python
# before
list(state.containers)[0].service_status["gubernator"] == ops.pebble.ServiceStatus.ACTIVE
# after
state.get_container("gubernator").service_status["gubernator"] == ops.pebble.ServiceStatus.ACTIVE
```