Implement upgrade-relation for control plane nodes #200
Conversation
Upgrades are obviously hard. We want to guide the user rather than road-block them at every turn. In the end, the Juju admin knows best what they want their cluster to do and how to get it to the revision they want; we shouldn't put endless hurdles in their way. This process should guide them, not block them.
I say this as someone who has performed these upgrades before: putting up too many guardrails saves the casual user but can impede the person who has accepted the risk and just wants the cluster upgraded.
status.add(ops.BlockedStatus(f"Version mismatch with {unit.name}"))
raise ReconcilerError(f"Version mismatch with {unit.name}")
# NOTE: Add a check to validate if we are doing an upgrade
status.add(ops.WaitingStatus("Upgrading the cluster"))
return
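The NOTE in the diff above points at the behavior the reviewers discuss below: a version mismatch should only block reconciliation when no upgrade is underway. A minimal sketch of that decision, using hypothetical names (`check_version`, `VersionMismatchError`) rather than the charm's actual helpers, might look like:

```python
class VersionMismatchError(Exception):
    """Raised when a unit reports a version the leader does not expect."""


def check_version(local_version: str, unit_name: str, unit_version: str,
                  upgrade_in_progress: bool) -> str:
    """Return a status name for the unit, or raise on a hard mismatch.

    During an upgrade, mixed versions across units are expected, so the
    check reports 'waiting' instead of halting reconciliation. Only when
    no upgrade is in progress does a mismatch become a blocking error.
    """
    if unit_version == local_version:
        return "active"
    if upgrade_in_progress:
        # Mixed versions are normal mid-upgrade; let upgrade events engage.
        return "waiting"
    raise VersionMismatchError(f"Version mismatch with {unit_name}")
```

In the charm itself the returned names would correspond to `ops.WaitingStatus` / `ops.BlockedStatus`; the sketch only models the branching order.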
This method, _announce_kubernetes_version, feels similar to get_worker_version above, in that both read the version field from the k8s-cluster or cluster relation.
So is a version mismatch now a waiting situation because an upgrade is in progress? Is that why there's a NOTE here?
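Both reads the comment compares boil down to pulling the same `version` field out of relation unit databags. A small illustrative sketch (the function name and databag shape are assumptions, not the charm's API):

```python
from typing import Dict, Optional


def collect_versions(unit_databags: Dict[str, Dict[str, str]]) -> Dict[str, Optional[str]]:
    """Collect each unit's reported 'version' from a relation's unit databags.

    unit_databags maps unit name -> databag dict, mirroring how both
    get_worker_version and _announce_kubernetes_version read the same
    field off the k8s-cluster or cluster relation. Units that have not
    yet published a version map to None.
    """
    return {unit: bag.get("version") for unit, bag in unit_databags.items()}
```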
Ahh, I see now that it's because raising the reconciler error prevented the upgrade events from engaging. Whew. We still need a check to let folks know they're running an out-of-spec version of the applications.
For now, _announce_kubernetes_version is only run on the lead CP. Say you deployed and related a kw 1.35 to a 1.31 cluster; I imagine the 1.35 workers may not join. Should they join? Is the k8s-cp the right place to gripe about it? You're right that we should at least make sure we're not in an upgrade scenario before we raise the reconciler error.
Awesome work here. Thanks, Mateo!
Nice work! Left a few comments.
Test coverage for 7e7e480
Static code analysis report
Thanks!
Overview
Introduce upgrade orchestration for control plane nodes.
Rationale
To simplify the upgrade process for charms, this pull request adds orchestration logic for upgrading the control plane nodes, specifically the charm core (the k8s snap). It does not cover worker node upgrade orchestration, which will be addressed in a future pull request.
Changes
Added an on_upgrade_granted handler to manage the upgrade process for nodes in the cluster.
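The handler's core job is deciding whether a node actually needs to move to the target revision before refreshing the snap. A minimal sketch of that decision, with hypothetical names (`on_upgrade_granted`, the injected `refresh` callable) standing in for the charm's real handler and snap-refresh machinery:

```python
from typing import Callable


def on_upgrade_granted(current_revision: str, target_revision: str,
                       refresh: Callable[[str], None]) -> bool:
    """Sketch of an upgrade-granted step for one control plane node.

    Compares the node's current k8s snap revision against the target the
    orchestrator granted; refreshes only when they differ. Returns True
    when a refresh was triggered, False when the node was already current.
    """
    if current_revision == target_revision:
        # Nothing to do; the node already runs the granted revision.
        return False
    refresh(target_revision)
    return True
```

A usage example: calling `on_upgrade_granted("100", "120", snap_refresh)` would invoke `snap_refresh("120")` once, while a node already on revision 120 would be skipped.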