diff --git a/README.md b/README.md new file mode 100644 index 0000000..8b63a76 --- /dev/null +++ b/README.md @@ -0,0 +1,293 @@ +# Kubernetes on macOS (Apple silicon) + +## Goals +Set up a fully functional multi-node Kubernetes cluster on macOS (Apple silicon) with both Host-VM and VM-VM communication. + +## Prerequisites + +### Tools +Homebrew will be used to install all tools needed on the macOS host. +- [ ] [Homebrew](https://brew.sh/) +- [ ] [Git](https://git-scm.com/) +- [ ] [Lima VM](https://github.com/lima-vm/lima) +- [ ] [socket_vmnet](https://github.com/lima-vm/socket_vmnet/) +- [ ] [cilium-cli](https://github.com/cilium/cilium-cli/) +- [ ] [kubectl](https://github.com/kubernetes/kubectl) +- [ ] [helm](https://helm.sh/) + +### Assumptions +The Git repo has been cloned to the local macOS host. All commands are to be executed from the repo root on the host, unless stated otherwise. + +### Kubernetes cluster configurations +- Kubernetes cluster with a single Control Plane (CP) node and three worker nodes (execute the steps for the single CP cluster configuration) +- Highly Available (HA) Control Plane Kubernetes cluster with three Control Plane and three worker nodes (execute all steps) + +### Networking +The shared network `192.168.105.0/24` in macOS is used because it allows both Host-VM and VM-VM communication. By default, Lima VM uses the DHCP range up to `192.168.105.99`; therefore we use IP addresses from `192.168.105.100` onward in our Kubernetes setup. To have predictable node IPs for a Kubernetes cluster, it is necessary to [reserve IPs](https://github.com/lima-vm/socket_vmnet#how-to-reserve-dhcp-addresses) with the DHCP server in macOS. + +#### Kubernetes node IP range +Define the following [/etc/bootptab](macos/etc/bootptab) file. 
+| Host | MAC Address | IP address | Comments | +| -------- | ----------------- | --------------- | ------------------------------------------- | +| cp | 52:55:55:12:34:00 | 192.168.105.100 | Control Plane (CP) Virtual IP (VIP) address | +| cp-1 | 52:55:55:12:34:01 | 192.168.105.101 | | +| cp-2 | 52:55:55:12:34:02 | 192.168.105.102 | Additional CP node in HA CP cluster. | +| cp-3 | 52:55:55:12:34:03 | 192.168.105.103 | Additional CP node in HA CP cluster. | +| worker-1 | 52:55:55:12:34:04 | 192.168.105.104 | | +| worker-2 | 52:55:55:12:34:05 | 192.168.105.105 | | +| worker-3 | 52:55:55:12:34:06 | 192.168.105.106 | | + +Reload the macOS DHCP daemon. +``` +sudo /bin/launchctl kickstart -kp system/com.apple.bootpd +``` +#### Kubernetes API server +The Kubernetes API server is available via the VIP address `192.168.105.100`. + +#### Kubernetes L4 load balancer IP address pool +To access services from the host, the address pool for the L4 load balancer needs to be configured in the same shared subnet as the node IPs. We therefore use `192.168.105.240/28` as the L4 load balancer address pool, giving us 14 usable addresses. IP address `192.168.105.241` will be assigned to the Ingress Controller. + +#### Troubleshooting `socket_vmnet` related issues +Update the sudoers config and the _config/networks.yaml file. +Currently it is necessary to replace the `socketVMNet` field in `~/.lima/_config/networks.yaml` with an absolute path instead of a symbolic link, and to generate the sudoers configuration, before `limactl start` can be executed. + +After `socket_vmnet` is upgraded, it is necessary to adjust the absolute path in `networks.yaml` and regenerate the sudoers configuration with +``` +limactl sudoers >etc_sudoers.d_lima && sudo install -o root etc_sudoers.d_lima "/private/etc/sudoers.d/lima" +``` + +## Provision machines for Kubernetes +[Lima VM](https://github.com/lima-vm/lima) is used to provision machines for Kubernetes. + +Create machines (Virtual Machines (VMs) for the nodes) for the single Control Plane (CP) cluster configuration. 
+``` +limactl create --set='.networks[].macAddress="52:55:55:12:34:01"' --name cp-1 machines/ubuntu-machine-tmpl.yaml --tty=false +limactl create --set='.networks[].macAddress="52:55:55:12:34:04"' --name worker-1 machines/ubuntu-machine-tmpl.yaml --tty=false +limactl create --set='.networks[].macAddress="52:55:55:12:34:05"' --name worker-2 machines/ubuntu-machine-tmpl.yaml --tty=false +limactl create --set='.networks[].macAddress="52:55:55:12:34:06"' --name worker-3 machines/ubuntu-machine-tmpl.yaml --tty=false +``` + +Create machines for the other Control Plane (CP) nodes to implement the HA cluster configuration. +``` +limactl create --set='.networks[].macAddress="52:55:55:12:34:02"' --name cp-2 machines/ubuntu-machine-tmpl.yaml --tty=false +limactl create --set='.networks[].macAddress="52:55:55:12:34:03"' --name cp-3 machines/ubuntu-machine-tmpl.yaml --tty=false +``` + +Please note that the machine template file provisions components of the latest Kubernetes release. + +Start the machines for the single Control Plane (CP) configuration. +``` +limactl start cp-1 +limactl start worker-1 +limactl start worker-2 +limactl start worker-3 +``` + +Start the machines for the other Control Plane (CP) nodes to implement the HA cluster configuration. +``` +limactl start cp-2 +limactl start cp-3 +``` + +Check that all machines are running. +``` +limactl list +``` + + +## Initiate Kubernetes cluster + +### Prerequisites +Copy the `kubeadm` config files into the machines for the single Control Plane (CP) configuration. +``` +limactl cp manifests/kubeadm/cp-1-init-cfg.yaml cp-1: +limactl cp manifests/kubeadm/worker-1-join-cfg.yaml worker-1: +limactl cp manifests/kubeadm/worker-2-join-cfg.yaml worker-2: +limactl cp manifests/kubeadm/worker-3-join-cfg.yaml worker-3: +``` + +Copy the `kubeadm` config files into the other Control Plane (CP) node machines to implement the HA cluster configuration. 
+``` +limactl cp manifests/kubeadm/cp-2-join-cfg.yaml cp-2: +limactl cp manifests/kubeadm/cp-3-join-cfg.yaml cp-3: +``` + + +### Set up a single Control Plane (CP) Kubernetes cluster with three worker nodes + +#### Initiate the Kubernetes Control Plane (CP) in the cp-1 machine +The following steps are to be run inside the `cp-1` machine. + +Generate the `kube-vip` static pod manifest. +``` +export KVVERSION=v0.6.3 +export INTERFACE=lima0 +export VIP=192.168.105.100 +sudo ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION +sudo ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip manifest pod \ + --arp \ + --controlplane \ + --address $VIP \ + --interface $INTERFACE \ + --leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml +``` + +Initiate the Kubernetes Control Plane (CP). +``` +sudo kubeadm init --upload-certs --config cp-1-init-cfg.yaml +``` + +Copy the `kubeconfig` for use by a regular user. +``` +mkdir -p $HOME/.kube +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + +Install the Gateway API bundle with experimental resources support. For details, see the [Gateway API project](https://gateway-api.sigs.k8s.io/guides/). +``` +kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/experimental-install.yaml +``` + +Install the CNI (Cilium) with L2 load balancer, Ingress Controller and Gateway API support enabled. +``` +cilium install --version 1.14.4 \ + --set operator.replicas=1 \ + --set ipam.operator.clusterPoolIPv4PodCIDRList="10.244.0.0/16" \ + --set kubeProxyReplacement=true \ + --set l2announcements.enabled=true \ + --set ingressController.enabled=true \ + --set ingressController.default=true \ + --set ingressController.loadbalancerMode=shared \ + --set ingressController.service.loadBalancerIP="192.168.105.241" \ + --set gatewayAPI.enabled=true +``` +Note: `cilium status` will show a TLS error until the `kubelet-serving` certificates are approved. 
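+As an optional sanity check (the commands below are illustrative and assume the steps above completed on `cp-1`), verify that the API server answers via the kube-vip VIP and that Cilium reports ready before continuing:
+```
+# Query the API server version endpoint via the VIP announced by kube-vip.
+# -k skips TLS verification because curl does not know the cluster CA yet.
+curl -sk https://192.168.105.100:6443/version
+
+# Wait until Cilium reports all components ready (may take a few minutes).
+cilium status --wait
+```
+If the `curl` call times out, kube-vip is not announcing the VIP; check the static pod logs under `/etc/kubernetes/manifests/kube-vip.yaml` before moving on.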
+ +#### Set up `kubeconfig` on the macOS host +The following steps are to be run on the macOS host. + +Export the `kubeconfig` from a CP node to the host. +``` +limactl cp cp-1:.kube/config ~/.kube/config.k8s-on-macos +``` + +Set the context to the freshly created Kubernetes cluster. +``` +export KUBECONFIG=~/.kube/config.k8s-on-macos +``` + +Test that you are able to access the cluster from the macOS host. +``` +kubectl version +``` + +#### Join worker nodes to the Kubernetes cluster +The following steps are to be run inside the respective worker node machines. + +Join `worker-1` +``` +sudo kubeadm join --config worker-1-join-cfg.yaml +``` + +Join `worker-2` +``` +sudo kubeadm join --config worker-2-join-cfg.yaml +``` + +Join `worker-3` +``` +sudo kubeadm join --config worker-3-join-cfg.yaml +``` + + +### Join the other Control Plane (CP) nodes to implement a Highly Available (HA) Kubernetes cluster +The following steps are to be run inside the `cp-2` machine. + +Generate the `kube-vip` static pod manifest. +``` +export KVVERSION=v0.6.3 +export INTERFACE=lima0 +export VIP=192.168.105.100 +sudo ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION +sudo ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip manifest pod \ + --arp \ + --controlplane \ + --address $VIP \ + --interface $INTERFACE \ + --leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml +``` + +Join the additional Control Plane (CP) node. +``` +sudo kubeadm join --config cp-2-join-cfg.yaml +``` + +Copy the `kubeconfig` for use by a regular user. +``` +mkdir -p $HOME/.kube +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + +The following steps are to be run inside the `cp-3` machine. + +Generate the `kube-vip` static pod manifest. +``` +export KVVERSION=v0.6.3 +export INTERFACE=lima0 +export VIP=192.168.105.100 +sudo ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION +sudo ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip manifest pod \ + --arp \ + 
--controlplane \ + --address $VIP \ + --interface $INTERFACE \ + --leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml +``` + +Join the additional Control Plane (CP) node. +``` +sudo kubeadm join --config cp-3-join-cfg.yaml +``` + +Copy the `kubeconfig` for use by a regular user. +``` +mkdir -p $HOME/.kube +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + + +### Manual approval of `kubelet-serving` certificates +Approve any pending `kubelet-serving` certificates. +``` +kubectl get csr +kubectl get csr | grep "Pending" | awk '{print $1}' | xargs kubectl certificate approve +``` + +### Install add-ons +#### Metrics server +Install the metrics server. +``` +kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml +``` + +### Finalize initial configuration +Apply the address pool configuration for the L2 load balancer. +``` +kubectl apply -f manifests/cilium/l2-aware-lb-cfg.yaml +``` + +### Final checks + +Check from the macOS host that the cluster works as expected. +``` +kubectl version +cilium status +kubectl get nodes -o wide +kubectl get all -A -o wide +kubectl top nodes +kubectl top pods -A --sort-by=memory +``` +--- END --- \ No newline at end of file diff --git a/machines/ubuntu-machine-tmpl.yaml b/machines/ubuntu-machine-tmpl.yaml new file mode 100644 index 0000000..c1ca738 --- /dev/null +++ b/machines/ubuntu-machine-tmpl.yaml @@ -0,0 +1,100 @@ +# +# Kubernetes node (machine) template based on Ubuntu Cloud Image +# + +# VM type: "qemu" or "vz" (on macOS 13 and later). +vmType: qemu + +# OpenStack-compatible (cloud-init) disk image. 
+images: +- location: "https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-arm64.img" + arch: "aarch64" + +# Virtual Machine (VM) specification (CPUs, RAM and Disk) +cpus: 2 +memory: "2GiB" +disk: "8GiB" + +# Networking configuration +# Use limactl create / start --set='.networks[].macAddress="52:55:55:12:34:56"' to set a unique MAC address for each VM created +# https://github.com/lima-vm/socket_vmnet/tree/v1.1.2#how-to-use-static-ip-addresses +networks: +- lima: shared + macAddress: "12:34:56:78:9A:BC" + +# No mounts exposed +mounts: [] + +# Enable system-wide (aka rootful) containerd and its dependencies (BuildKit, Stargz Snapshotter) +containerd: + system: true + user: false + +provision: + +# https://kubernetes.io/docs/setup/production-environment/container-runtimes/#forwarding-ipv4-and-letting-iptables-see-bridged-traffic +- mode: system + script: | + #!/bin/bash + set -eux -o pipefail + + # Regenerate the default containerd config and enable the systemd cgroup driver + containerd config default > /etc/containerd/config.toml + sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml + systemctl restart containerd + +# Install Cilium CLI +# https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/ +- mode: system + script: | + #!/bin/bash + set -eux -o pipefail + + CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt) + CLI_ARCH=amd64 + if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi + curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum} + sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum + tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin + rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum} diff --git a/macos/etc/bootptab b/macos/etc/bootptab new file mode 100644 index 0000000..f7b8c91 --- /dev/null +++ b/macos/etc/bootptab @@ -0,0 +1,10 @@ +# bootptab +%% +# hostname hwtype hwaddr ipaddr bootfile +cp 1 52:55:55:12:34:00 
192.168.105.100 +cp-1 1 52:55:55:12:34:01 192.168.105.101 +cp-2 1 52:55:55:12:34:02 192.168.105.102 +cp-3 1 52:55:55:12:34:03 192.168.105.103 +worker-1 1 52:55:55:12:34:04 192.168.105.104 +worker-2 1 52:55:55:12:34:05 192.168.105.105 +worker-3 1 52:55:55:12:34:06 192.168.105.106 \ No newline at end of file diff --git a/macos/~/.lima/_config/networks.yaml b/macos/~/.lima/_config/networks.yaml new file mode 100644 index 0000000..064df7a --- /dev/null +++ b/macos/~/.lima/_config/networks.yaml @@ -0,0 +1,40 @@ +# Path to the socket_vmnet executable. Because socket_vmnet is invoked via sudo it should be +# installed where only root can modify/replace it. This also means that none of the +# parent directories should be writable by the user. +# +# The varRun directory also must not be writable by the user because it will +# include the socket_vmnet pid file. The daemons are terminated via sudo, so replacing +# the pid file would allow killing arbitrary privileged processes. varRun +# however MUST be writable by the daemon user. +# +# None of the path segments may be symlinks, which is why it has to be /private/var +# instead of /var etc. +paths: +# socketVMNet requires Lima >= 0.12. 
+# socketVMNet: "/opt/socket_vmnet/bin/socket_vmnet" +# socketVMNet: "/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet" + socketVMNet: "/opt/homebrew/Cellar/socket_vmnet/1.1.3/bin/socket_vmnet" + varRun: /private/var/run/lima + sudoers: /private/etc/sudoers.d/lima + +group: everyone + +networks: + shared: + mode: shared + gateway: 192.168.105.1 + dhcpEnd: 192.168.105.99 + netmask: 255.255.255.0 + bridged: + mode: bridged + interface: en0 + # bridged mode doesn't have a gateway; dhcp is managed by outside network + host: + mode: host + gateway: 192.168.106.1 + dhcpEnd: 192.168.106.99 + netmask: 255.255.255.0 + + # User mode network (experimental, subnet not changeable) + user-net-v2: + mode: user-v2 diff --git a/manifests/cilium/l2-aware-lb-cfg.yaml b/manifests/cilium/l2-aware-lb-cfg.yaml new file mode 100644 index 0000000..d739457 --- /dev/null +++ b/manifests/cilium/l2-aware-lb-cfg.yaml @@ -0,0 +1,21 @@ +apiVersion: cilium.io/v2alpha1 +kind: CiliumL2AnnouncementPolicy +metadata: + name: default +spec: + nodeSelector: + matchExpressions: + - key: node-role.kubernetes.io/control-plane + operator: DoesNotExist + interfaces: + - lima0 + externalIPs: true + loadBalancerIPs: true +--- +apiVersion: cilium.io/v2alpha1 +kind: CiliumLoadBalancerIPPool +metadata: + name: "default" +spec: + cidrs: + - cidr: "192.168.105.240/28" diff --git a/manifests/kube-vip/kube-vip.yaml b/manifests/kube-vip/kube-vip.yaml new file mode 100644 index 0000000..2789a3d --- /dev/null +++ b/manifests/kube-vip/kube-vip.yaml @@ -0,0 +1,61 @@ +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + name: kube-vip + namespace: kube-system +spec: + containers: + - args: + - manager + env: + - name: vip_arp + value: "true" + - name: port + value: "6443" + - name: vip_interface + value: lima0 + - name: vip_cidr + value: "32" + - name: cp_enable + value: "true" + - name: cp_namespace + value: kube-system + - name: vip_ddns + value: "false" + - name: vip_leaderelection + value: "true" + - name: 
vip_leasename + value: plndr-cp-lock + - name: vip_leaseduration + value: "5" + - name: vip_renewdeadline + value: "3" + - name: vip_retryperiod + value: "1" + - name: address + value: 192.168.105.100 + - name: prometheus_server + value: :2112 + image: ghcr.io/kube-vip/kube-vip:v0.6.3 + imagePullPolicy: Always + name: kube-vip + resources: {} + securityContext: + capabilities: + add: + - NET_ADMIN + - NET_RAW + volumeMounts: + - mountPath: /etc/kubernetes/admin.conf + name: kubeconfig + hostAliases: + - hostnames: + - kubernetes + ip: 127.0.0.1 + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes/admin.conf + name: kubeconfig +status: {} diff --git a/manifests/kubeadm/cp-1-init-cfg.yaml b/manifests/kubeadm/cp-1-init-cfg.yaml new file mode 100644 index 0000000..c0d7b05 --- /dev/null +++ b/manifests/kubeadm/cp-1-init-cfg.yaml @@ -0,0 +1,43 @@ +# kubeadm init config - Single / Multiple control plane and multiple worker nodes K8s cluster +# advertiseAddress, node-ip and controlPlaneEndpoint must match the 'lima0' interface (shared network assumed) +# The podSubnet range needs to match the CNI; Cilium uses its own resource for the pod network, and --ipam.operator.clusterPoolIPv4PodCIDRList is used to change the Cilium default +apiVersion: kubeadm.k8s.io/v1beta3 +kind: InitConfiguration +localAPIEndpoint: + advertiseAddress: 192.168.105.101 + bindPort: 6443 +nodeRegistration: + name: "cp-1" + criSocket: "unix:///var/run/containerd/containerd.sock" + kubeletExtraArgs: + node-ip: 192.168.105.101 +bootstrapTokens: +- token: "9a08jv.c0izixklcxtmnze7" + description: "kubeadm bootstrap token" + ttl: "1h" +certificateKey: "e6a2eb85817ab72a4f494f30285ec9785706a83bfcbd2204" +skipPhases: + - addon/kube-proxy +--- +apiVersion: kubeadm.k8s.io/v1beta3 +kind: ClusterConfiguration +clusterName: "kubernetes-cluster" +controlPlaneEndpoint: 192.168.105.100:6443 +apiServer: + certSANs: + - "127.0.0.1" + - "192.168.105.100" + - "192.168.105.101" + - "192.168.105.102" + - 
"192.168.105.103" + - "kubernetes-cluster.k8s.internal" +networking: + serviceSubnet: "10.96.0.0/16" + podSubnet: "10.244.0.0/16" + dnsDomain: "cluster.local" +featureGates: +--- +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration +# 'kubectl get csr' / 'kubectl certificate approve' are used to approve kubernetes.io/kubelet-serving CSRs +serverTLSBootstrap: true diff --git a/manifests/kubeadm/cp-2-join-cfg.yaml b/manifests/kubeadm/cp-2-join-cfg.yaml new file mode 100644 index 0000000..b9cdcbd --- /dev/null +++ b/manifests/kubeadm/cp-2-join-cfg.yaml @@ -0,0 +1,21 @@ +# kubeadm join config - Multiple control plane and multiple worker nodes K8s cluster +# apiServerEndpoint should have the same value as controlPlaneEndpoint in the init config +# The bootstrap token should match the one in the init config +# name must be unique for every node and node-ip must match the 'lima0' interface +apiVersion: kubeadm.k8s.io/v1beta3 +kind: JoinConfiguration +controlPlane: + localAPIEndpoint: + advertiseAddress: 192.168.105.102 + bindPort: 6443 + certificateKey: "e6a2eb85817ab72a4f494f30285ec9785706a83bfcbd2204" +discovery: + bootstrapToken: + apiServerEndpoint: 192.168.105.100:6443 + token: 9a08jv.c0izixklcxtmnze7 + unsafeSkipCAVerification: true +nodeRegistration: + name: "cp-2" + criSocket: "unix:///var/run/containerd/containerd.sock" + kubeletExtraArgs: + node-ip: 192.168.105.102 \ No newline at end of file diff --git a/manifests/kubeadm/cp-3-join-cfg.yaml b/manifests/kubeadm/cp-3-join-cfg.yaml new file mode 100644 index 0000000..5ef9a9c --- /dev/null +++ b/manifests/kubeadm/cp-3-join-cfg.yaml @@ -0,0 +1,21 @@ +# kubeadm join config - Multiple control plane and multiple worker nodes K8s cluster +# apiServerEndpoint should have the same value as controlPlaneEndpoint in the init config +# The bootstrap token should match the one in the init config +# name must be unique for every node and node-ip must match the 'lima0' interface +apiVersion: kubeadm.k8s.io/v1beta3 
+kind: JoinConfiguration +controlPlane: + localAPIEndpoint: + advertiseAddress: 192.168.105.103 + bindPort: 6443 + certificateKey: "e6a2eb85817ab72a4f494f30285ec9785706a83bfcbd2204" +discovery: + bootstrapToken: + apiServerEndpoint: 192.168.105.100:6443 + token: 9a08jv.c0izixklcxtmnze7 + unsafeSkipCAVerification: true +nodeRegistration: + name: "cp-3" + criSocket: "unix:///var/run/containerd/containerd.sock" + kubeletExtraArgs: + node-ip: 192.168.105.103 \ No newline at end of file diff --git a/manifests/kubeadm/worker-1-join-cfg.yaml b/manifests/kubeadm/worker-1-join-cfg.yaml new file mode 100644 index 0000000..2764b0f --- /dev/null +++ b/manifests/kubeadm/worker-1-join-cfg.yaml @@ -0,0 +1,16 @@ +# kubeadm join config - Single control plane, multiple worker nodes K8s cluster +# apiServerEndpoint should have the same value as controlPlaneEndpoint in the init config +# The bootstrap token should match the one in the init config +# name must be unique for every worker and node-ip must match the 'lima0' interface +apiVersion: kubeadm.k8s.io/v1beta3 +kind: JoinConfiguration +discovery: + bootstrapToken: + apiServerEndpoint: 192.168.105.101:6443 + token: 9a08jv.c0izixklcxtmnze7 + unsafeSkipCAVerification: true +nodeRegistration: + name: "worker-1" + criSocket: "unix:///var/run/containerd/containerd.sock" + kubeletExtraArgs: + node-ip: 192.168.105.104 diff --git a/manifests/kubeadm/worker-2-join-cfg.yaml b/manifests/kubeadm/worker-2-join-cfg.yaml new file mode 100644 index 0000000..9bf7033 --- /dev/null +++ b/manifests/kubeadm/worker-2-join-cfg.yaml @@ -0,0 +1,16 @@ +# kubeadm join config - Single control plane, multiple worker nodes K8s cluster +# apiServerEndpoint should have the same value as controlPlaneEndpoint in the init config +# The bootstrap token should match the one in the init config +# name must be unique for every worker and node-ip must match the 'lima0' interface +apiVersion: kubeadm.k8s.io/v1beta3 +kind: JoinConfiguration +discovery: + bootstrapToken: + 
apiServerEndpoint: 192.168.105.101:6443 + token: 9a08jv.c0izixklcxtmnze7 + unsafeSkipCAVerification: true +nodeRegistration: + name: "worker-2" + criSocket: "unix:///var/run/containerd/containerd.sock" + kubeletExtraArgs: + node-ip: 192.168.105.105 diff --git a/manifests/kubeadm/worker-3-join-cfg.yaml b/manifests/kubeadm/worker-3-join-cfg.yaml new file mode 100644 index 0000000..2b8f3e8 --- /dev/null +++ b/manifests/kubeadm/worker-3-join-cfg.yaml @@ -0,0 +1,16 @@ +# kubeadm join config - Single control plane, multiple worker nodes K8s cluster +# apiServerEndpoint should have the same value as controlPlaneEndpoint in the init config +# The bootstrap token should match the one in the init config +# name must be unique for every worker and node-ip must match the 'lima0' interface +apiVersion: kubeadm.k8s.io/v1beta3 +kind: JoinConfiguration +discovery: + bootstrapToken: + apiServerEndpoint: 192.168.105.101:6443 + token: 9a08jv.c0izixklcxtmnze7 + unsafeSkipCAVerification: true +nodeRegistration: + name: "worker-3" + criSocket: "unix:///var/run/containerd/containerd.sock" + kubeletExtraArgs: + node-ip: 192.168.105.106 \ No newline at end of file