[MISC] Revert upgrade lib #489
Conversation
# skip run while in recovery. The event will be retrigged when the cluster is ready
logger.debug("Cluster in recovery, skip...")
# Only defer for vm, that will set unit states to "ready" on upgrade-charm
# on k8s only the upgrading unit will receive the upgrade-charm event
# and deferring will prevent the upgrade stack from being popped
logger.debug("Cluster in recovery, deferring...")
event.defer()
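As a hedged sketch of the pattern this hunk restores (the handler shape, `FakeEvent`, and the state constants are illustrative assumptions, not the actual charm library API): on VM, a unit in recovery must defer so the event re-runs once the cluster is ready, while on k8s deferring would keep the upgrade stack from being popped.

```python
# Minimal sketch of the defer-on-recovery pattern discussed in this hunk.
# FakeEvent, the handler shape, and the state names are illustrative
# assumptions, not the actual charm library API.
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

RECOVERY, READY = "recovery", "ready"


class FakeEvent:
    """Stand-in for an ops event that supports defer()."""

    def __init__(self):
        self.deferred = False

    def defer(self):
        self.deferred = True


def on_upgrade_changed(event, cluster_state: str, substrate: str) -> bool:
    """Handle the upgrade relation-changed event; return True if safe to proceed.

    On VM charms, every unit moves to "ready" on upgrade-charm, so a unit in
    recovery defers and retries; on k8s, only the upgrading unit receives
    upgrade-charm, and deferring would prevent the upgrade stack from popping.
    """
    if cluster_state == RECOVERY:
        if substrate == "vm":
            logger.debug("Cluster in recovery, deferring...")
            event.defer()
        else:
            logger.debug("Cluster in recovery, skip...")
        return False  # upgrade step not started
    return True  # safe to proceed with the upgrade step
```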
I think this is why the rollback gets stuck.
Hey @dragomirp, this was causing concurrent upgrades on units, so I don't think we should revert it in the lib.
Do manual upgrades also get stuck?
Is the test timing out? I've also observed that Juju takes a humongous amount of time to trigger upgrade-charm after a refresh is run, eating a lot of the wait period and forcing us to increase the timeouts for upgrade tests.
The real issue is the following:
- In the `test_upgrade_from_edge` test:
  - TimescaleDB in the snap is leading to a longer snap refresh (only a few seconds more).
  - With that, Patroni is performing a failover, so a new unit, different from the Juju leader unit (the former primary), becomes the primary.
  - Also with that, the former primary unit, the Juju leader unit, becomes a replica (not a synchronous standby).
- In the `test_fail_and_rollback` test:
  - After the failed upgrade, `pre-upgrade-check` is run, which sets the `recovery` state for the Juju leader unit (which is a replica).
  - When the refresh is started, the upgrade charm event is fired in each of the three units, setting the state to ready in all of them.
  - The upgrade relation changed event is fired only in the other units because only the relation data from the Juju leader unit changed (its state changed from `recovery` to `ready`, while in the other units it was already set to `ready`, so no change, and no events firing for the Juju leader unit).
  - As the Juju leader unit is the replica, it's the top of the stack unit (the one that should be upgraded first). However, as the upgrade relation changed event hasn't fired for it, it cannot start the upgrade (in this case, a rollback).
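The "no change, no event" behavior in the scenario above can be simulated in plain Python. `PeerData` below is a hypothetical stand-in for Juju's peer relation databags and event delivery, not a real Juju or ops API:

```python
# Toy model of Juju's relation-changed semantics: a unit only receives
# relation-changed when ANOTHER unit's databag actually changes.
# PeerData is a hypothetical stand-in, not a real Juju/ops API.


class PeerData:
    def __init__(self, units):
        self.state = {u: "ready" for u in units}
        self.events = {u: [] for u in units}  # relation-changed deliveries

    def set_state(self, unit, value):
        if self.state[unit] == value:
            return  # unchanged databag: Juju fires no relation-changed
        self.state[unit] = value
        for other in self.state:
            if other != unit:
                self.events[other].append(("relation-changed", unit))


peers = PeerData(["leader", "unit-1", "unit-2"])
peers.state["leader"] = "recovery"  # pre-upgrade-check left the leader in recovery

# On refresh, upgrade-charm sets every unit's state to "ready".
for unit in ("leader", "unit-1", "unit-2"):
    peers.set_state(unit, "ready")

# Only the leader's databag changed (recovery -> ready), so only the other
# units receive relation-changed; the leader gets nothing and, being the
# top-of-stack unit, never starts its rollback step.
```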
The new version of the upgrade library (which is being reverted through this PR) is also causing this issue to start happening because of the missing defer call that Dragomir commented about.
However, the root cause is the failover, which exposed an execution path that wasn't covered by the lib. Before adding TimescaleDB, there was no failover, so the Juju leader unit was the primary during the entire test, and what happened in the `test_fail_and_rollback` test was the following:
* After the failed upgrade, `pre-upgrade-check` is run, which sets the `recovery` state for the Juju leader unit (which is the primary).
* When the refresh is started, the upgrade charm event is fired in each of the three units, setting the state to ready in all of them.
* As the Juju leader unit was not the top of the stack unit, it was ok for it not to receive the upgrade relation event at first.
For the Kafka and MySQL operators on VM, this hasn't happened because both always build the upgrade stack with the Juju leader unit as the last unit to be upgraded. Keep in mind that `build_upgrade_stack` is always executed on the Juju leader unit.
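A hedged sketch of that ordering, assuming (as the data-platform upgrade lib does for VM charms) that the next unit to upgrade is popped from the end of the stack; the function shape and unit IDs are illustrative, not the real charms' code:

```python
# Illustrative sketch: build the upgrade stack so the Juju leader unit is
# upgraded last. Assumption: units are popped from the END of the stack,
# so the first unit to upgrade sits at the end and the leader at the front.
# This is not the real Kafka/MySQL charm code.


def build_upgrade_stack(unit_ids: list, leader_id: int) -> list:
    followers = sorted(u for u in unit_ids if u != leader_id)
    # Leader first in the list == popped last == upgraded last.
    return [leader_id] + followers
```

With three units and leader id 1, the stack is `[1, 0, 2]`: unit 2 upgrades first, then unit 0, and the leader goes last, so the leader is never the top-of-stack unit waiting on a relation-changed event it will not receive.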
It may be fixed by canonical/data-platform-libs#176.
LGTM. Sad.
Should we run the PostgreSQL tests on MySQL changes? :-D
# skip run while in recovery. The event will be retrigged when the cluster is ready
logger.debug("Cluster in recovery, skip...")
# Only defer for vm, that will set unit states to "ready" on upgrade-charm
# on k8s only the upgrading unit will receive the upgrade-charm event
# and deferring will prevent the upgrade stack from being popped
logger.debug("Cluster in recovery, deferring...")
event.defer()
Superseded by #492
Reverting the upgrade charm lib