
Migration to Gardener Operator #360

Draft · wants to merge 10 commits into base: master
Conversation

@Gerrit91 (Contributor) commented Nov 25, 2024

Description

Deployments through the old control plane Helm charts will soon be deprecated and no longer maintained. We need to migrate our environment to the Gardener operator; there is no real alternative here.

To test this entire automation, I enabled the mini-lab to run the Gardener control plane (no shoot provisioning for now): metal-stack/mini-lab#202

This release contains the migration to the Gardener operator. In order to have a successful migration, please adapt your deployment roles and read these instructions carefully.

As advised by SAP, it is recommended to deploy the Gardener operator into a dedicated Kubernetes cluster, which is now also possible with the deployment through metal-roles. You can, however, also decide to continue to deploy the Garden cluster into the same cluster as the soil.

Now, let's get to the actual migration:

The `gardener` role does not deploy the Gardener control plane anymore. Instead, there is a new, dedicated role `gardener-operator` to spin up the control plane. By default, this role requires setting the existing cluster certificates for the virtual garden as well as the ETCD encryption key, otherwise it will fail to deploy. This ensures that the new virtual garden remains accessible to existing Gardenlets without having to migrate to auto-generated certificates for now, and that the ETCD can continue to use the existing backup data. Deploying Gardenlets through the operator will be enabled in a future metal-roles release.
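The existing certificates and the encryption key can typically be extracted from the secrets of the currently running control plane. The secret names, data keys, and namespace below are assumptions and will differ depending on your deployment, so treat this as a sketch only:

```shell
# Sketch: secret names and data keys are assumptions, check your deployment.
# Extract the virtual garden cluster CA from the existing Gardener cluster:
kubectl -n garden get secret ca \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl -n garden get secret ca \
  -o jsonpath='{.data.ca\.key}' | base64 -d > ca.key

# Extract the ETCD encryption configuration of the garden kube-apiserver:
kubectl -n garden get secret etcd-encryption-secret \
  -o jsonpath='{.data.encryption-configuration\.yaml}' | base64 -d
```

The extracted values can then be provided to the `gardener-operator` role variables before the first rollout.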

It is expected that the current state of the Gardener ETCD is migrated to new PVs by leveraging the backup-restore functionality. 
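Before shutting down the old ETCD, it can be worthwhile to trigger a final full snapshot so that the restore starts from the freshest possible state. etcd-backup-restore exposes an HTTP endpoint for this; the pod and container names below are assumptions:

```shell
# Sketch: trigger a final full snapshot via the etcd-backup-restore sidecar
# (pod/container names and port are assumptions, adapt to your setup):
kubectl -n garden exec etcd-main-0 -c backup-restore -- \
  curl -s http://localhost:8080/snapshot/full
```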

In order not to have two ETCDs running and backing up data at the same time, it is required to shut down the running Gardener control plane before rolling out the `gardener-operator` role. 

- If you decide to deploy the operator into the existing Gardener cluster, the `gardener-operator` role will automatically scale down the existing control plane during deployment. 
- If you decide to deploy the operator into a separate cluster, you can use the following commands prior to the deployment in order to scale down the existing control plane:

   ```bash
   kubectl -n garden scale --replicas 0 deploy gardener-scheduler
   kubectl -n garden scale --replicas 0 deploy gardener-controller-manager
   kubectl -n garden scale --replicas 0 deploy gardener-apiserver
   kubectl -n garden scale --replicas 0 deploy garden-kube-apiserver
   kubectl -n garden scale --replicas 0 sts etcd-main
   ```

   Before running these commands, prepare the entire deployment in order to keep the Gardener downtime as minimal as possible.
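Whether the old control plane is actually down can be verified before continuing. A minimal check, assuming the resource names used in the scale commands above:

```shell
# Sketch: verify that the scaled-down components have terminated
# (resource names match the scale commands above; adapt as needed):
kubectl -n garden get deploy gardener-scheduler gardener-controller-manager \
  gardener-apiserver garden-kube-apiserver
kubectl -n garden get sts etcd-main
# All of them should report 0 ready replicas before the rollout continues.
```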

As soon as the new control plane is up and running, point the existing DNS entry of the Gardener kube-apiserver to the Istio gateway located in the Gardener operator cluster. Note that the virtual garden is **not** exposed through its own ingress-controller anymore but through the self-managed Istio instance.
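The new endpoint can be looked up and smoke-tested roughly like this; the namespace, service name, and domain below are assumptions and depend on your setup:

```shell
# Sketch: find the external address of the self-managed Istio ingress gateway
# (namespace and service name are assumptions):
kubectl -n virtual-garden-istio-ingress get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# After switching DNS, check that the virtual garden kube-apiserver responds
# (replace the domain with your actual Gardener API endpoint):
curl -sk https://api.your-garden-domain.example/healthz
```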

You can then clean up the previous Gardener control plane helm charts.

@Gerrit91 changed the title from "Gardener local" to "Migration to Gardener Operator" on Nov 26, 2024