feat: add Unit.reboot for machine charms #1041
Conversation
Co-authored-by: Ben Hoyt <[email protected]>
Looks good to me, thanks! Let's wait for @jameinel's review too.
I think just having reboot do something on k8s is also fine, and we
wouldn't want to work around the workaround in the future.
On Tue, Oct 17, 2023 at 3:41 PM Ben Hoyt wrote:
If we can tell that you're in k8s, can't we fail inside ops itself?
We don't have an easy way to do that right now. You can look for
containers in metadata.yaml, but charms with no containers can be
deployed on K8s too. Harry mentioned
<https://chat.charmhub.io/charmhub/pl/wkxduuqszt8cjdcxq7ihoe8mwr> that
JUJU_CONTAINER_NAMES will be set on K8s (it may be empty, but it'll be
set).
So maybe it's time to add a helper method is_k8s_sidecar. We've needed
this before too. Or maybe charm.type (though something more specific than
"type") or something which is an enum, one of machine, k8s-podspec,
k8s-sidecar.
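A rough sketch of what such a helper could look like, going off the `JUJU_CONTAINER_NAMES` observation above. The enum values come from this thread, but `CharmType`, `detect_charm_type`, and the detection logic are assumptions for illustration, not real ops API:

```python
import enum
import os


class CharmType(enum.Enum):
    """Charm types floated in this thread (names are from the discussion)."""
    MACHINE = "machine"
    K8S_PODSPEC = "k8s-podspec"
    K8S_SIDECAR = "k8s-sidecar"


def detect_charm_type(environ=os.environ) -> CharmType:
    # Per the thread: JUJU_CONTAINER_NAMES is set on K8s sidecar charms
    # (possibly to an empty string) and unset on machine charms.
    # Distinguishing k8s-podspec is left out; this sketch only covers
    # the two cases discussed above.
    if "JUJU_CONTAINER_NAMES" in environ:
        return CharmType.K8S_SIDECAR
    return CharmType.MACHINE
```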
I think having `if harness.last_rebooted is not None` is fine as a way to test that it actually did trigger a reboot, if there was some other thing that you were looking for.
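As a concrete illustration of that check, here's a minimal sketch; `last_rebooted` and the fake harness are hypothetical stand-ins from this thread, not the actual Harness API:

```python
import time


class FakeHarness:
    """Toy stand-in for a harness backend that records reboots."""

    def __init__(self):
        self.last_rebooted = None  # hypothetical attribute from this thread

    def reboot(self, now=False):
        # Use a monotonic timestamp so the value changes even for
        # back-to-back reboots.
        self.last_rebooted = time.monotonic()


harness = FakeHarness()
harness.reboot()
assert harness.last_rebooted is not None  # the code under test did reboot
```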
I also think we probably want some form of interrupt in Python code. I've tested at least:

```bash
echo hello
reboot
echo goodbye
```

in bash, and you will occasionally see the "goodbye", but not always. I don't know whether `juju-reboot --now` guarantees you will or won't get your process back. But likely charm authors will assume that if they pass `now=True` they won't execute any more code.
If Juju adds support for doing something on K8s when `juju-reboot` is run, and we have something like:

```python
if container_type != "machine":
    raise ModelError("reboot() is only available when running on machine controllers")
```

then you'd need to update ops as well (to the version where we presumably strip that out). That would be less surprising, which seems good.
In my tests, I couldn't get any (Python) code to run after the call to reboot (which was done with …).
…to ensure that it will change every time, even if the reboots are right next to each other.
…rather than relying on Juju.
Nice, I like it! Approved, with one nit comment on the docstring.
FWIW, I raised the question more so that we wouldn't have a flaky test suite than because I was worried about the reboot time not being precise enough.
Adds a new `reboot` method to `Unit` that's a wrapper around the `juju-reboot` command. `juju-reboot` is only available for machine charms; when it's run for a K8s charm, Juju logs an error message but doesn't return a response with the error (or exit with a non-zero code), which means that we have to fail silently.

Adds a corresponding `reboot()` method to the harness backend. From the point of view of the harness, we assume that everything is essentially the same after rebooting without `--now`, so this is a no-op. The most likely change that would impact the harness would be a leadership change, but it seems better to let the charmer model that explicitly in their tests.

When rebooting with `--now`, what should happen is that everything after that point in the event handler is skipped, there's a simulated reboot (again, roughly a no-op), and then the event is re-emitted. I went down a long rabbit hole trying to get the harness to do this, but it ended up way too messy for minimal benefit. Instead, a `SystemExit` exception is raised and the charmer is expected to handle it in their tests, for example:

Fixes #929
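The example that was elided above might look roughly like this; the handler and helper names are illustrative stand-ins, not code from the PR (which would use the real Harness and charm events):

```python
# Sketch: in the harness, reboot(now=True) is simulated by raising
# SystemExit, which the test is expected to catch.
def handler_that_reboots():
    # Stand-in for an event handler that calls self.unit.reboot(now=True).
    raise SystemExit()


def stopped_for_reboot(handler) -> bool:
    """Run a handler; report whether it ended via the simulated reboot."""
    try:
        handler()
    except SystemExit:
        return True
    return False


assert stopped_for_reboot(handler_that_reboots)
```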