In general, we don't document or test a procedure like this that doesn't involve a full machine reset. I would scale up before scaling down, i.e. 3 -> 4 -> 3 rather than 3 -> 2 -> 3, for etcd quorum reasons.
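A rough sketch of what that scale-up-first order could look like with `talosctl`; this is not a tested procedure from the discussion, node IPs are placeholders, and exact flags depend on your Talos version:

```sh
# Sketch only: 3 -> 4 -> 3, keeping etcd quorum at every step.
# <node-d-ip> / <node-a-ip> are placeholders.

# 1. Join Node D as a fourth control plane node (node in maintenance mode).
talosctl apply-config --insecure --nodes <node-d-ip> --file controlplane.yaml

# 2. Wait until etcd reports four healthy members.
talosctl --nodes <node-d-ip> etcd members

# 3. Only then take Node A out, dropping back to three members.
talosctl --nodes <node-a-ip> etcd leave
```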
Updating `cluster.apiServer.certSANs` is never needed here: in your case it should just contain the DNS name, and probably doesn't even need that if the control plane endpoint is configured correctly to point to the domain name.
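For reference, a sketch of how to double-check what the rendered config currently uses (resource name as in recent Talos versions; node IP and domain are placeholders):

```sh
# Sketch: inspect the active machine config on a control plane node.
talosctl --nodes <node-ip> get machineconfig -o yaml | less

# Look for something like:
#   cluster:
#     controlPlane:
#       endpoint: https://<your-domain>:6443
# If the endpoint already points at the DNS name, per-node entries under
# cluster.apiServer.certSANs are usually unnecessary.
```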
---
Hi!
My current setup looks like this: Node A has some deployments using local-path-provisioner PVCs that I'd like to keep when doing the conversion.

Now, I'd like to add a Node D, make it a control plane node without allowing scheduling on it, and convert Node A into a worker node while keeping its `/var` partition intact (to avoid losing the local-path-provisioner data). Later on I'm planning to add more worker nodes, but that shouldn't be problematic.

I've searched previous discussions and found #7187. It doesn't exactly apply to my setup, since I'm not running Rook Ceph (which could recover the data from the other nodes while doing the conversion), but it still provided some useful information.
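Not part of the plan itself, but possibly useful beforehand: a sketch (assuming local-path-provisioner's usual behaviour of pinning each PV to its node via node affinity) for listing which PVs live on Node A, so I can verify nothing goes missing after the conversion.

```sh
# Sketch: local-path PVs carry a nodeAffinity pinning them to the node that
# holds the data; list PV -> claim -> node to see what lives on Node A.
kubectl get pv -o custom-columns='NAME:.metadata.name,CLAIM:.spec.claimRef.name,NODE:.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]'
```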
This is the plan I have in mind, but I'd like to confirm with the community that it's OK and won't cause data loss or cluster failure:

1. Generate a `worker.yaml` configuration for Node A using `secrets.yaml` (see the sketch after this list), and apply it so that control plane pods stop being scheduled on it.
2. Run `kubectl label node <Node A name> node-role.kubernetes.io/control-plane-` to remove the control plane label from it.
3. Run `talosctl etcd remove-member <Node A ID>` to remove Node A from the etcd cluster.
4. Make sure the `controlplane.yaml` configuration's `cluster.apiServer.certSANs` is updated to remove Node A from the list and add Node D to it.

Is this plan OK, or am I missing something?
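For step 1, this is roughly what I imagine (a sketch, not a tested procedure; the cluster name, endpoint, and IP are placeholders, and `--output-types` assumes a reasonably recent `talosctl`):

```sh
# Sketch of step 1: generate only the worker config, reusing the cluster's
# existing secrets bundle so Node A keeps trusting the same cluster CA.
talosctl gen config my-cluster https://<your-domain>:6443 \
  --with-secrets secrets.yaml \
  --output-types worker

# Apply it to Node A. apply-config itself does not wipe /var, but whether a
# controlplane -> worker conversion is safe without a full reset is exactly
# what the reply above cautions about.
talosctl apply-config --nodes <node-a-ip> --file worker.yaml
```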