feat: add ability to define deployment strategy #38
Conversation
For this change there are two things I'm still not sure about.
@ericgraf Thanks for the PR.
Our documentation is generated from here:
Maybe take two versions: start with the first one and upgrade to the next one.
Thank you @rchincha. I moved this PR back to draft until I have those two things done.
@rchincha I added two tests in this PR.
@Andreea-Lupu pls review this as well. @ericgraf thanks for updating the documentation as well.
lgtm
lgtm
A word of caution, though: if something goes wrong we are relying on k8s' rollback, not on a version path/plan check.
What type of PR is this?
Feature
Which issue does this PR fix:
What does this PR do / Why do we need it:
This PR introduces the ability to specify the upgrade strategy of the Zot deployment.
The Recreate strategy needs to be set when upgrading a single-replica deployment with a PV defined.
With a ReadWriteOnce accessMode PVC, the new pod is stuck creating until the old pod holding the PVC is manually deleted.
With a ReadWriteMany accessMode PVC, the new pod goes into a crash loop with the error:
operation timeout: boltdb file is already in use
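As a rough illustration of what this enables (the key names below are assumptions for this sketch, not necessarily the chart's exact values.yaml layout), setting the strategy to Recreate for a single-replica, PV-backed deployment might look like:

```yaml
# Hypothetical values.yaml override; "strategy" is assumed to be passed
# through to the Deployment's spec.strategy by the chart templates.
replicaCount: 1

strategy:
  # Recreate terminates the old pod before starting the new one, so the PVC
  # and zot's boltdb file are released before the replacement pod mounts them.
  type: Recreate
```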
If an issue # is not available please add repro steps and logs showing the issue:
Run the below commands to reproduce the failure.
New Pod is in a bad state
Crash looping pod error logs:
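The reproduction commands and logs are not reproduced here. Purely as an illustrative sketch (key names are assumptions, not this chart's actual values), the failing setup amounts to upgrading a release like the following while the Deployment is still on the default RollingUpdate strategy:

```yaml
# Hypothetical values illustrating the failure mode: a single replica with a
# PVC-backed volume and no strategy override. RollingUpdate starts the new pod
# before the old one exits, so both pods contend for the PVC / boltdb file.
replicaCount: 1

persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
```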
Testing done on this change:
The below commands were run with code from this PR.
The deployment strategy is set to Recreate in the commands, which allowed the deployment to successfully scale down the old ReplicaSet and then spin up the new ReplicaSet.
Final state after doing the upgrade test.
Automation added to e2e:
Two new tests were added:
Will this break upgrades or downgrades?
No.
This change only sets the deployment strategy on the Deployment from the deployment strategy values defined in the values.yaml file.
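As a sketch of how such a value is commonly wired into a Helm Deployment template (this is a typical pattern, not necessarily this chart's exact implementation), rendering spec.strategy only when the value is set keeps the Kubernetes default behavior for releases that do not override it:

```yaml
# Hypothetical excerpt of templates/deployment.yaml.
apiVersion: apps/v1
kind: Deployment
spec:
  {{- with .Values.strategy }}
  strategy:
    {{- toYaml . | nindent 4 }}
  {{- end }}
```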
Does this PR introduce any user-facing change?:
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.