From 89f3045ee7104053af19d180a182d7d9c1e673ad Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Mon, 10 Jun 2024 14:51:16 -0600 Subject: [PATCH 01/13] Add CAPT playground: This will allow users to try out CAPT. Signed-off-by: Jacob Weinstock --- .gitignore | 6 +- README.md | 78 +++++++++- capt/Taskfile.yaml | 115 +++++++++++++++ capt/config.yaml | 28 ++++ capt/scripts/create_vms.sh | 34 +++++ capt/scripts/generate_bmc.sh | 29 ++++ capt/scripts/generate_hardware.sh | 32 +++++ capt/scripts/generate_secret.sh | 19 +++ capt/scripts/generate_state.sh | 132 +++++++++++++++++ capt/scripts/update_state.sh | 48 +++++++ capt/scripts/virtualbmc.sh | 22 +++ capt/tasks/Taskfile-capi.yaml | 143 +++++++++++++++++++ capt/tasks/Taskfile-create.yaml | 221 +++++++++++++++++++++++++++++ capt/tasks/Taskfile-delete.yaml | 58 ++++++++ capt/tasks/Taskfile-vbmc.yaml | 43 ++++++ capt/templates/bmc-machine.tmpl | 16 +++ capt/templates/bmc-secret.tmpl | 9 ++ capt/templates/clusterctl.tmpl | 7 + capt/templates/hardware.tmpl | 34 +++++ capt/templates/kustomization.tmpl | 227 ++++++++++++++++++++++++++++++ 20 files changed, 1296 insertions(+), 5 deletions(-) create mode 100644 capt/Taskfile.yaml create mode 100644 capt/config.yaml create mode 100755 capt/scripts/create_vms.sh create mode 100755 capt/scripts/generate_bmc.sh create mode 100755 capt/scripts/generate_hardware.sh create mode 100755 capt/scripts/generate_secret.sh create mode 100755 capt/scripts/generate_state.sh create mode 100755 capt/scripts/update_state.sh create mode 100755 capt/scripts/virtualbmc.sh create mode 100644 capt/tasks/Taskfile-capi.yaml create mode 100644 capt/tasks/Taskfile-create.yaml create mode 100644 capt/tasks/Taskfile-delete.yaml create mode 100644 capt/tasks/Taskfile-vbmc.yaml create mode 100644 capt/templates/bmc-machine.tmpl create mode 100644 capt/templates/bmc-secret.tmpl create mode 100644 capt/templates/clusterctl.tmpl create mode 100644 capt/templates/hardware.tmpl create mode 100644 
capt/templates/kustomization.tmpl diff --git a/.gitignore b/.gitignore index 997ca2f8..76f98344 100644 --- a/.gitignore +++ b/.gitignore @@ -1 +1,5 @@ -.vagrant \ No newline at end of file +.vagrant +error.log +.task +.state +capt/output/ \ No newline at end of file diff --git a/README.md b/README.md index 0fca4422..0ad27e32 100644 --- a/README.md +++ b/README.md @@ -1,21 +1,29 @@ # Playground -The playground is an example deployment of the Tinkerbell stack for use in learning and testing. It is not a production reference architecture. +This playground repository holds example deployments for use in learning and testing. +The following playgrounds are available: + +- [Tinkerbell stack playground](#tinkerbell-stack-playground) +- [Cluster API Provider Tinkerbell (CAPT) playground](#cluster-api-provider-tinkerbell-capt-playground) + +## Tinkerbell Stack Playground + +The following section contains the Tinkerbell stack playground instructions. It is not a production reference architecture. Please use the [Helm chart](https://github.com/tinkerbell/charts) for production deployments. -## Quick-Starts +### Quick-Starts The following quick-start guides will walk you through standing up the Tinkerbell stack. There are a few options for this. Pick the one that works best for you. -## Options +### Options - [Vagrant and VirtualBox](docs/quickstarts/VAGRANTVBOX.md) - [Vagrant and Libvirt](docs/quickstarts/VAGRANTLVIRT.md) - [Kubernetes](docs/quickstarts/KUBERNETES.md) -## Next Steps +### Next Steps By default the Vagrant quickstart guides automatically install Ubuntu on the VM (machine1). You can provide your own OS template. To do this: @@ -38,3 +46,65 @@ By default the Vagrant quickstart guides automatically install Ubuntu on the VM ``` 1.
Restart the machine to provision (if using the vagrant playground test machine this is done by running `vagrant destroy -f machine1 && vagrant up machine1`) + +## Cluster API Provider Tinkerbell (CAPT) Playground + +The Cluster API Provider Tinkerbell (CAPT) is a Kubernetes Cluster API provider that uses Tinkerbell to provision machines. You can find more information about CAPT [here](https://github.com/tinkerbell/cluster-api-provider-tinkerbell). The CAPT playground is an example deployment for use in learning and testing. It is not a production reference architecture. + +### Getting Started + +The CAPT playground is a tool that will create a local CAPT deployment and a single workload cluster. This includes creating and/or installing a Kubernetes cluster (KinD), the Tinkerbell stack, all CAPI and CAPT components, the virtual machines that will be used to create the workload cluster, and a Virtual BMC server to manage the VMs. + +Start by reviewing and installing the [prerequisites](#prerequisites), then review and customize the [configuration file](./capt/config.yaml) as needed. + +### Prerequisites + +#### Binaries + +- [Libvirtd](https://wiki.debian.org/KVM) >= libvirtd (libvirt) 8.0.0 +- [Docker](https://docs.docker.com/engine/install/) >= 24.0.7 +- [Helm](https://helm.sh/docs/intro/install/) >= v3.13.1 +- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) >= v0.20.0 +- [clusterctl](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) >= v1.6.0 +- [kubectl](https://www.downloadkubernetes.com/) >= v1.28.2 +- [virt-install](https://virt-manager.org/) >= 4.0.0 +- [task](https://taskfile.dev/installation/) >= 3.37.2 + +#### Hardware + +- at least 60GB of free and very fast disk space (etcd is very disk I/O sensitive) +- at least 8GB of free RAM +- at least 4 CPU cores + +### Usage + +Create the CAPT playground: + +```bash +# Run the creation process and follow the next steps printed at the end of the process.
+task create-playground +``` + +Delete the CAPT playground: + +```bash +task delete-playground +``` + +### Known Issues + +#### DNS issue + +KinD on Ubuntu has a known issue with DNS resolution in KinD pod containers. This affects the download of HookOS in the Tink stack Helm deployment. There are a few [known workarounds](https://github.com/kubernetes-sigs/kind/issues/1594#issuecomment-629509450). The recommendation for the CAPT playground is to add a DNS nameserver to Docker's `daemon.json` file. This can be done by adding the following to `/etc/docker/daemon.json`: + +```json +{ + "dns": ["1.1.1.1"] +} +``` + +Then restart Docker: + +```bash +sudo systemctl restart docker +``` diff --git a/capt/Taskfile.yaml b/capt/Taskfile.yaml new file mode 100644 index 00000000..9ff16af5 --- /dev/null +++ b/capt/Taskfile.yaml @@ -0,0 +1,115 @@ +version: '3' + +includes: + create: ./tasks/Taskfile-create.yaml + delete: ./tasks/Taskfile-delete.yaml + vbmc: ./tasks/Taskfile-vbmc.yaml + capi: ./tasks/Taskfile-capi.yaml + +vars: + OUTPUT_DIR: + sh: echo $(yq eval '.outputDir' config.yaml) + CURR_DIR: + sh: pwd + STATE_FILE: ".state" + STATE_FILE_FQ_PATH: + sh: echo {{joinPath .CURR_DIR .STATE_FILE}} + +tasks: + create-playground: + silent: true + summary: | + Create the CAPT playground. Use the config.yaml file to define things like cluster size and Kubernetes version. + cmds: + - task: system-deps-warnings + - task: validate-binaries + - task: ensure-output-dir + - task: generate-state + - task: create:playground-ordered + - task: next-steps + + delete-playground: + silent: true + summary: | + Delete the CAPT playground. + cmds: + - task: validate-binaries + - task: delete:playground + + validate-binaries: + silent: true + summary: | + Validate all required dependencies for the CAPT playground.
+ cmds: + - for: ['virsh', 'docker', 'helm', 'kind', 'kubectl', 'clusterctl', 'virt-install', 'brctl', 'yq'] + cmd: command -v {{ .ITEM }} >/dev/null || echo "'{{ .ITEM }}' was not found in the \$PATH, please ensure it is installed." + # sudo apt install virtinst # for virt-install + # sudo apt install bridge-utils # for brctl + + system-deps-warnings: + summary: | + Run CAPT playground system warnings. + silent: true + cmds: + - echo "Please ensure you have the following:" + - echo "60GB of free and very fast disk space (etcd is very disk I/O sensitive)" + - echo "8GB of free RAM" + - echo "4 CPU cores" + + ensure-output-dir: + summary: | + Create the output directory. + cmds: + - mkdir -p {{.OUTPUT_DIR}} + - mkdir -p {{.OUTPUT_DIR}}/xdg + status: + - echo ;[ -d {{.OUTPUT_DIR}} ] + - echo ;[ -d {{.OUTPUT_DIR}}/xdg ] + + generate-state: + summary: | + Populate the state file. + sources: + - config.yaml + generates: + - .state + cmds: + - ./scripts/generate_state.sh config.yaml .state + + next-steps: + silent: true + summary: | + Next steps after creating the CAPT playground. + vars: + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + NODE_BASE: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KIND_KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - | + echo + echo The workload cluster is now being created. + echo Once the cluster nodes are up and running, you will need to deploy a CNI for the cluster to be fully functional. + echo + echo 1. Watch and wait for the first control plane node to be provisioned successfully: + echo "KUBECONFIG={{.KIND_KUBECONFIG}} kubectl get workflows -n {{.NAMESPACE}} -w" + echo + echo + echo 2.
Watch and wait for the Kubernetes API server to be ready and responding: + echo "until KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl get node; do echo 'Waiting for Kube API server to respond...'; sleep 5; done" + echo + echo 3. Deploy a CNI + echo Cilium + echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig cilium install" + echo or KUBEROUTER + echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml" + echo + echo 4. Watch and wait for all nodes to join the cluster and be ready: + echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl get nodes -w" + - touch {{.OUTPUT_DIR}}/.next-steps-displayed + status: + - echo ;[ -f {{.OUTPUT_DIR}}/.next-steps-displayed ] diff --git a/capt/config.yaml b/capt/config.yaml new file mode 100644 index 00000000..b09f4da0 --- /dev/null +++ b/capt/config.yaml @@ -0,0 +1,28 @@ +--- +clusterName: "capt-playground" +outputDir: "output" +namespace: "tink" +counts: + controlPlanes: 1 + workers: 1 + spares: 1 +versions: + capt: 0.5.3 + chart: 0.4.4 + kube: v1.28.3 + os: 20.04 +os: + registry: ghcr.io/jacobweinstock/capi-images + distro: ubuntu + sshKey: "" +vm: + baseName: "node" + cpusPerVM: 2 + memInMBPerVM: 2048 + diskSizeInGBPerVM: 10 + diskPath: "/tmp" +virtualBMC: + containerName: "virtualbmc" + image: ghcr.io/jacobweinstock/virtualbmc + user: "root" + pass: "calvin" diff --git a/capt/scripts/create_vms.sh b/capt/scripts/create_vms.sh new file mode 100755 index 00000000..3d113a48 --- /dev/null +++ b/capt/scripts/create_vms.sh @@ -0,0 +1,34 @@ +#!/bin/bash + +set -euo pipefail + +# Create VMs + +function main() { + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + declare BRIDGE_NAME="$(yq eval '.kind.bridgeName' "$STATE_FILE")" + declare CPUS="$(yq eval '.vm.cpusPerVM' "$STATE_FILE")" + declare MEM="$(yq eval '.vm.memInMBPerVM' 
"$STATE_FILE")" + declare DISK_SIZE="$(yq eval '.vm.diskSizeInGBPerVM' "$STATE_FILE")" + declare DISK_PATH="$(yq eval '.vm.diskPath' "$STATE_FILE")" + + while IFS=$',' read -r name mac; do + # create the VM + virt-install \ + --description "CAPT VM" \ + --ram "$MEM" --vcpus "$CPUS" \ + --os-variant "ubuntu20.04" \ + --graphics "vnc" \ + --boot "uefi,firmware.feature0.name=enrolled-keys,firmware.feature0.enabled=no,firmware.feature1.name=secure-boot,firmware.feature1.enabled=yes" \ + --noautoconsole \ + --noreboot \ + --import \ + --connect "qemu:///system" \ + --name "$name" \ + --disk "path=$DISK_PATH/$name-disk.img,bus=virtio,size=$DISK_SIZE,sparse=yes" \ + --network "bridge:$BRIDGE_NAME,mac=$mac" + done < <(yq e '.vm.details.[] | [key, .mac] | @csv' "$STATE_FILE") +} + +main "$@" \ No newline at end of file diff --git a/capt/scripts/generate_bmc.sh b/capt/scripts/generate_bmc.sh new file mode 100755 index 00000000..c962914a --- /dev/null +++ b/capt/scripts/generate_bmc.sh @@ -0,0 +1,29 @@ +#!/bin/bash + +set -euo pipefail + +# This script creates the BMC machine yaml files needed for the CAPT playground.
+ +function main() { + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + + rm -f "$OUTPUT_DIR"/bmc-machine*.yaml + + namespace=$(yq eval '.namespace' "$STATE_FILE") + bmc_ip=$(yq eval '.virtualBMC.ip' "$STATE_FILE") + + while IFS=$',' read -r name port; do + export NODE_NAME="$name" + export BMC_IP="$bmc_ip" + export BMC_PORT="$port" + export NAMESPACE="$namespace" + envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/bmc-machine.tmpl > "$OUTPUT_DIR"/bmc-machine-"$NODE_NAME".yaml + unset NODE_NAME + unset BMC_IP + unset BMC_PORT + unset NAMESPACE + done < <(yq e '.vm.details.[] | [key, .bmc.port] | @csv' "$STATE_FILE") +} + +main "$@" \ No newline at end of file diff --git a/capt/scripts/generate_hardware.sh b/capt/scripts/generate_hardware.sh new file mode 100755 index 00000000..a87516d2 --- /dev/null +++ b/capt/scripts/generate_hardware.sh @@ -0,0 +1,32 @@ +#!/bin/bash + +# Generate hardware + +set -euo pipefail + +function main() { + # Generate hardware + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + declare -r NS=$(yq eval '.namespace' "$STATE_FILE") + + rm -f "$OUTPUT_DIR"/hardware*.yaml + + while IFS=$',' read -r name mac role ip gateway; do + export NODE_NAME="$name" + export NODE_MAC="$mac" + export NODE_ROLE="$role" + export NODE_IP="$ip" + export GATEWAY_IP="$gateway" + export NAMESPACE="$NS" + envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/hardware.tmpl > "$OUTPUT_DIR"/hardware-"$NODE_NAME".yaml + unset NODE_ROLE + unset NODE_NAME + unset NODE_IP + unset NODE_MAC + unset GATEWAY_IP + done < <(yq e '.vm.details.[] | [key, .mac, .role, .ip, .gateway] | @csv' "$STATE_FILE") + +} + +main "$@" \ No newline at end of file diff --git a/capt/scripts/generate_secret.sh b/capt/scripts/generate_secret.sh new file mode 100755 index 00000000..6e3f7c19 --- /dev/null +++ b/capt/scripts/generate_secret.sh @@ -0,0 +1,19 @@ +#!/bin/bash + +# Generate 
secret. All machines share the same secret. The only customization is the namespace, user name, and password. + +function main() { + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + export NAMESPACE=$(yq eval '.namespace' "$STATE_FILE") + export BMC_USER_BASE64=$(yq eval '.virtualBMC.user' "$STATE_FILE" | tr -d '\n' | base64) + export BMC_PASS_BASE64=$(yq eval '.virtualBMC.pass' "$STATE_FILE" | tr -d '\n' | base64) + + envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/bmc-secret.tmpl > "$OUTPUT_DIR"/bmc-secret.yaml + unset BMC_USER_BASE64 + unset BMC_PASS_BASE64 + unset NAMESPACE +} + +main "$@" + diff --git a/capt/scripts/generate_state.sh b/capt/scripts/generate_state.sh new file mode 100755 index 00000000..ae9e2876 --- /dev/null +++ b/capt/scripts/generate_state.sh @@ -0,0 +1,132 @@ +#!/bin/bash +# This script generates the state data needed for creating the CAPT playground. + +# state file spec +cat <<EOF >/dev/null +--- +clusterName: "capt-playground" +outputDir: "/home/tink/repos/tinkerbell/cluster-api-provider-tinkerbell/playground/output" +namespace: "tink" +counts: + controlPlanes: 1 + workers: 1 + spares: 1 +versions: + capt: 0.5.3 + chart: 0.4.4 + kube: v1.28.8 + os: 22.04 +os: + registry: reg.weinstocklabs.com/tinkerbell/cluster-api-provider-tinkerbell + distro: ubuntu + sshKey: "" + version: "2204" +vm: + baseName: "node" + cpusPerVM: 2 + memInMBPerVM: 2048 + diskSizeInGBPerVM: 10 + diskPath: "/tmp" + details: + node1: + mac: 02:7f:92:bd:2d:57 + bmc: + port: 6231 + role: control-plane + ip: 172.18.10.21 + gateway: 172.18.0.1 + node2: + mac: 02:f3:eb:c1:aa:2b + bmc: + port: 6232 + role: worker + ip: 172.18.10.22 + gateway: 172.18.0.1 + node3: + mac: 02:3c:e6:70:1b:5e + bmc: + port: 6233 + role: spare + ip: 172.18.10.23 + gateway: 172.18.0.1 +virtualBMC: + containerName: "virtualbmc" + image: ghcr.io/jacobweinstock/virtualbmc + user: "root" + pass: "calvin" + ip: 172.18.0.3 +totalNodes: 3 +kind: + 
kubeconfig: /home/tink/repos/tinkerbell/cluster-api-provider-tinkerbell/playground/output/kind.kubeconfig + gatewayIP: 172.18.0.1 + nodeIPBase: 172.18.10.20 + bridgeName: br-d086780dac6b +tinkerbell: + vip: 172.18.10.74 +cluster: + controlPlane: + vip: 172.18.10.75 + podCIDR: 172.100.0.0/16 +EOF + +set -euo pipefail + +function generate_mac() { + declare NODE_NAME="$1" + + echo "$NODE_NAME" | md5sum|sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/' +} + +function main() { + # read in the config.yaml file and populate the .state file + declare CONFIG_FILE="$1" + declare STATE_FILE="$2" + + # update outputDir to be a fully qualified path + output_dir=$(yq eval '.outputDir' "$CONFIG_FILE") + if [[ "$output_dir" = /* ]]; then + echo + else + current_dir=$(pwd) + output_dir="$current_dir/$output_dir" + fi + config_file=$(realpath "$CONFIG_FILE") + state_file="$STATE_FILE" + + cp -a "$config_file" "$state_file" + yq e -i '.outputDir = "'$output_dir'"' "$state_file" + + # totalNodes + total_nodes=$(($(yq eval '.counts.controlPlanes' "$state_file") + $(yq eval '.counts.workers' "$state_file") + $(yq eval '.counts.spares' "$state_file"))) + yq e -i ".totalNodes = $total_nodes" "$state_file" + + # populate vmNames + base_name=$(yq eval '.vm.baseName' "$state_file") + base_ipmi_port=6230 + for i in $(seq 1 $total_nodes); do + name="$base_name$i" + mac=$(generate_mac "$name") + yq e -i ".vm.details.$name.mac = \"$mac\"" "$state_file" + yq e -i ".vm.details.$name.bmc.port = $(($base_ipmi_port + $i))" "$state_file" + # set the node role + if [[ $i -le $(yq eval '.counts.controlPlanes' "$state_file") ]]; then + yq e -i ".vm.details.$name.role = \"control-plane\"" "$state_file" + elif [[ $i -le $(($(yq eval '.counts.controlPlanes' "$state_file") + $(yq eval '.counts.workers' "$state_file"))) ]]; then + yq e -i ".vm.details.$name.role = \"worker\"" "$state_file" + else + yq e -i ".vm.details.$name.role = \"spare\"" "$state_file" + fi + unset name + unset mac + done + + 
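For reference, the `generate_mac` helper used by the loop above is deterministic: the node name is hashed and the first five bytes of the digest are placed behind a locally administered `02:` prefix, so re-running the script assigns the same MAC to the same node name. A standalone sketch of that transformation, outside the patch:

```shell
# Derive a stable, locally administered MAC address from a node name,
# mirroring generate_mac() above: md5 the name, keep the first five hex
# byte pairs of the digest, and prepend the 02: prefix (locally
# administered, unicast).
name="node1"
mac=$(echo "$name" | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/')
echo "$mac"  # same name always yields the same MAC
```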
# populate kind.kubeconfig + yq e -i '.kind.kubeconfig = "'$output_dir'/kind.kubeconfig"' "$state_file" + + # populate the expected OS version in the raw image name (22.04 -> 2204) + os_version=$(yq eval '.versions.os' "$state_file") + os_version=$(echo "$os_version" | tr -d '.') + yq e -i '.os.version = "'$os_version'"' "$state_file" +} + +main "$@" diff --git a/capt/scripts/update_state.sh b/capt/scripts/update_state.sh new file mode 100755 index 00000000..268277ef --- /dev/null +++ b/capt/scripts/update_state.sh @@ -0,0 +1,48 @@ +#!/bin/bash + +set -euo pipefail + +# this script updates the state file with the generated hardware data + +function main() { + declare -r STATE_FILE="$1" + declare CLUSTER_NAME=$(yq eval '.clusterName' "$STATE_FILE") + declare GATEWAY_IP=$(docker inspect -f '{{ .NetworkSettings.Networks.kind.Gateway }}' "$CLUSTER_NAME"-control-plane) + declare NODE_IP_BASE=$(awk -F"." '{print $1"."$2".10.20"}' <<< "$GATEWAY_IP") + declare NODE_BASE=$(yq eval '.vm.baseName' "$STATE_FILE") + declare IP_LAST_OCTET=$(echo "$NODE_IP_BASE" | cut -d. -f4) + + yq e -i '.kind.gatewayIP = "'$GATEWAY_IP'"' "$STATE_FILE" + yq e -i '.kind.nodeIPBase = "'$NODE_IP_BASE'"' "$STATE_FILE" + + # set an ip and gateway per node + idx=1 + while IFS=$',' read -r name; do + v=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx)) + ((idx++)) + yq e -i ".vm.details.$name.ip = \"$v\"" "$STATE_FILE" + yq e -i ".vm.details.$name.gateway = \"$GATEWAY_IP\"" "$STATE_FILE" + unset v + done < <(yq e '.vm.details.[] | [key] | @csv' "$STATE_FILE") + + # set the Tinkerbell Load Balancer IP (VIP) + offset=50 + t_lb=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx + offset)) + yq e -i '.tinkerbell.vip = "'$t_lb'"' "$STATE_FILE" + + # set the cluster control plane load balancer IP (VIP) + cp_lb=$(echo "$NODE_IP_BASE" | awk -F"." 
'{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx + offset + 1)) + yq e -i '.cluster.controlPlane.vip = "'$cp_lb'"' "$STATE_FILE" + + # set the cluster pod cidr + POD_CIDR=$(awk -F"." '{print $1".100.0.0/16"}' <<< "$GATEWAY_IP") + yq e -i '.cluster.podCIDR = "'$POD_CIDR'"' "$STATE_FILE" + + # set the KinD bridge name + network_id=$(docker network inspect -f '{{.Id}}' kind) + bridge_name="br-${network_id:0:12}" + yq e -i '.kind.bridgeName = "'$bridge_name'"' "$STATE_FILE" + +} + +main "$@" \ No newline at end of file diff --git a/capt/scripts/virtualbmc.sh b/capt/scripts/virtualbmc.sh new file mode 100755 index 00000000..0a0ab167 --- /dev/null +++ b/capt/scripts/virtualbmc.sh @@ -0,0 +1,22 @@ +#!/bin/bash + +set -euo pipefail + +# This script registers and starts virtual BMC entries in a running virtualbmc container + +function main() { + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + + username=$(yq eval '.virtualBMC.user' "$STATE_FILE") + password=$(yq eval '.virtualBMC.pass' "$STATE_FILE") + + container_name=$(yq eval '.virtualBMC.containerName' "$STATE_FILE") + while IFS=$',' read -r name port; do + docker exec "$container_name" vbmc add --username "$username" --password "$password" --port "$port" "$name" + docker exec "$container_name" vbmc start "$name" + done < <(yq e '.vm.details.[] | [key, .bmc.port] | @csv' "$STATE_FILE") + +} + +main "$@" \ No newline at end of file diff --git a/capt/tasks/Taskfile-capi.yaml b/capt/tasks/Taskfile-capi.yaml new file mode 100644 index 00000000..4c0ef974 --- /dev/null +++ b/capt/tasks/Taskfile-capi.yaml @@ -0,0 +1,143 @@ +version: '3' + +tasks: + + ordered: + summary: | + CAPI tasks run in order of dependency. + cmds: + - task: create-cluster-yaml + - task: init + - task: generate-cluster-yaml + - task: create-kustomize-file + - task: apply-kustomization + + create-cluster-yaml: + run: once + summary: | + Create the cluster yaml.
+ env: + CAPT_VERSION: + sh: yq eval '.versions.capt' {{.STATE_FILE_FQ_PATH}} + vars: + OUTPUT_DIR: + sh: echo $(yq eval '.outputDir' config.yaml) + cmds: + - envsubst '$CAPT_VERSION' < templates/clusterctl.tmpl > {{.OUTPUT_DIR}}/clusterctl.yaml + status: + - grep -q "$CAPT_VERSION" {{.OUTPUT_DIR}}/clusterctl.yaml + + init: + run: once + deps: [create-cluster-yaml] + summary: | + Initialize the cluster. + env: + TINKERBELL_IP: + sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}} + CLUSTERCTL_DISABLE_VERSIONCHECK: true + XDG_CONFIG_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_CONFIG_DIRS: "{{.OUTPUT_DIR}}/xdg" + XDG_STATE_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_CACHE_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_RUNTIME_DIR: "{{.OUTPUT_DIR}}/xdg" + XDG_DATA_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_DATA_DIRS: "{{.OUTPUT_DIR}}/xdg" + vars: + OUTPUT_DIR: + sh: echo $(yq eval '.outputDir' config.yaml) + KIND_GATEWAY_IP: + sh: yq eval '.kind.gatewayIP' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - KUBECONFIG="{{.KUBECONFIG}}" clusterctl --config {{.OUTPUT_DIR}}/clusterctl.yaml init --infrastructure tinkerbell + status: + - expected=1; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get pods -n capt-system |grep -ce "capt-controller"); [[ "$got" == "$expected" ]] + + generate-cluster-yaml: + run: once + deps: [init] + summary: | + Generate the cluster yaml. 
+ env: + CONTROL_PLANE_VIP: + sh: yq eval '.cluster.controlPlane.vip' {{.STATE_FILE_FQ_PATH}} + POD_CIDR: + sh: yq eval '.cluster.podCIDR' {{.STATE_FILE_FQ_PATH}} + CLUSTERCTL_DISABLE_VERSIONCHECK: true + XDG_CONFIG_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_CONFIG_DIRS: "{{.OUTPUT_DIR}}/xdg" + XDG_STATE_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_CACHE_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_RUNTIME_DIR: "{{.OUTPUT_DIR}}/xdg" + XDG_DATA_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_DATA_DIRS: "{{.OUTPUT_DIR}}/xdg" + vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + OUTPUT_DIR: + sh: yq eval '.outputDir' config.yaml + KUBE_VERSION: + sh: yq eval '.versions.kube' {{.STATE_FILE_FQ_PATH}} + CP_COUNT: + sh: yq eval '.counts.controlPlanes' {{.STATE_FILE_FQ_PATH}} + WORKER_COUNT: + sh: yq eval '.counts.workers' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - KUBECONFIG="{{.KUBECONFIG}}" clusterctl generate cluster {{.CLUSTER_NAME}} --config {{.OUTPUT_DIR}}/clusterctl.yaml --kubernetes-version "{{.KUBE_VERSION}}" --control-plane-machine-count="{{.CP_COUNT}}" --worker-machine-count="{{.WORKER_COUNT}}" --target-namespace={{.NAMESPACE}} --write-to {{.OUTPUT_DIR}}/prekustomization.yaml + status: + - grep -q "{{.KUBE_VERSION}}" {{.OUTPUT_DIR}}/prekustomization.yaml + + create-kustomize-file: + run: once + summary: | + Kustomize file for the CAPI generated config file (prekustomization.yaml). 
+ env: + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + OS_REGISTRY: + sh: yq eval '.os.registry' {{.STATE_FILE_FQ_PATH}} + OS_DISTRO: + sh: yq eval '.os.distro' {{.STATE_FILE_FQ_PATH}} + OS_VERSION: + sh: yq eval '.os.version' {{.STATE_FILE_FQ_PATH}} + VERSIONS_OS: + sh: yq eval '.versions.os' {{.STATE_FILE_FQ_PATH}} + SSH_AUTH_KEY: + sh: yq eval '.os.sshKey' {{.STATE_FILE_FQ_PATH}} + KUBE_VERSION: + sh: yq eval '.versions.kube' {{.STATE_FILE_FQ_PATH}} + TINKERBELL_VIP: + sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}} + vars: + OUTPUT_DIR: + sh: yq eval '.outputDir' config.yaml + sources: + - config.yaml + generates: + - "{{.OUTPUT_DIR}}/kustomization.yaml" + cmds: + - envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/kustomization.tmpl > {{.OUTPUT_DIR}}/kustomization.yaml + + apply-kustomization: + run: once + deps: [generate-cluster-yaml, create-kustomize-file] + summary: | + Kustomize the cluster yaml. + vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + sources: + - "{{.OUTPUT_DIR}}/kustomization.yaml" + - "{{.OUTPUT_DIR}}/prekustomization.yaml" + generates: + - "{{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml" + cmds: + - KUBECONFIG="{{.KUBECONFIG}}" kubectl kustomize {{.OUTPUT_DIR}} -o {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml \ No newline at end of file diff --git a/capt/tasks/Taskfile-create.yaml b/capt/tasks/Taskfile-create.yaml new file mode 100644 index 00000000..6d3d42b6 --- /dev/null +++ b/capt/tasks/Taskfile-create.yaml @@ -0,0 +1,221 @@ +version: '3' + +includes: + vbmc: ./Taskfile-vbmc.yaml + capi: ./Taskfile-capi.yaml + +tasks: + playground-ordered: + silent: true + summary: | + Create the CAPT playground. 
+ cmds: + - task: kind-cluster + - task: update-state + - task: deploy-tinkerbell-helm-chart + - task: vbmc:start-server + - task: vbmc:update-state + - task: hardware-cr + - task: bmc-machine-cr + - task: bmc-secret + - task: vms + - task: vbmc:start-vbmcs + - task: apply-bmc-secret + - task: apply-bmc-machines + - task: apply-hardware + - task: capi:ordered + - task: create-workload-cluster + - task: get-workload-cluster-kubeconfig + + kind-cluster: + run: once + summary: | + Install a KinD cluster. + vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - kind create cluster --name {{.CLUSTER_NAME}} --kubeconfig "{{.KUBECONFIG}}" + - until KUBECONFIG="{{.KUBECONFIG}}" kubectl wait --for=condition=ready node --all --timeout=5m; do echo "Waiting for nodes to be ready..."; sleep 1; done + status: + - KUBECONFIG="{{.KUBECONFIG}}" kind get clusters | grep -q {{.CLUSTER_NAME}} + + update-state: + silent: true + run: once + deps: [kind-cluster] + summary: | + Update the state file with the KinD cluster information. Should be run only after the KinD cluster is created. + cmds: + - ./scripts/update_state.sh "{{.STATE_FILE_FQ_PATH}}" + + hardware-cr: + run: once + deps: [update-state] + summary: | + Create Hardware objects. + sources: + - "{{.STATE_FILE_FQ_PATH}}" + generates: + - "{{.OUTPUT_DIR}}/hardware-*.yaml" + cmds: + - ./scripts/generate_hardware.sh {{.STATE_FILE_FQ_PATH}} + + bmc-machine-cr: + run: once + deps: [vbmc:update-state] + summary: | + Create BMC Machine objects. + sources: + - "{{.STATE_FILE_FQ_PATH}}" + generates: + - "{{.OUTPUT_DIR}}/bmc-machine-*.yaml" + cmds: + - ./scripts/generate_bmc.sh {{.STATE_FILE_FQ_PATH}} + + bmc-secret: + run: once + deps: [update-state] + summary: | + Create the BMC secret.
+ sources: + - "{{.STATE_FILE_FQ_PATH}}" + generates: + - "{{.OUTPUT_DIR}}/bmc-secret.yaml" + cmds: + - ./scripts/generate_secret.sh {{.STATE_FILE_FQ_PATH}} + + deploy-tinkerbell-helm-chart: + run: once + deps: [kind-cluster, update-state] + summary: | + Deploy the Tinkerbell Helm chart. + vars: + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + LB_IP: + sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}} + TRUSTED_PROXIES: + sh: KUBECONFIG={{.KUBECONFIG}} kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' + STACK_CHART_VERSION: + sh: yq eval '.versions.chart' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + CHART_NAME: tink-stack + cmds: + - KUBECONFIG="{{.KUBECONFIG}}" helm install {{.CHART_NAME}} oci://ghcr.io/tinkerbell/charts/stack --version "{{.STACK_CHART_VERSION}}" --create-namespace --namespace {{.NAMESPACE}} --wait --set "smee.trustedProxies={{.TRUSTED_PROXIES}}" --set "hegel.trustedProxies={{.TRUSTED_PROXIES}}" --set "stack.loadBalancerIP={{.LB_IP}}" --set "smee.publicIP={{.LB_IP}}" + status: + - KUBECONFIG="{{.KUBECONFIG}}" helm list -n {{.NAMESPACE}} | grep -q {{.CHART_NAME}} + + vms: + run: once + deps: [update-state, vbmc:update-state] + summary: | + Create Libvirt VMs. + vars: + TOTAL_HARDWARE: + sh: yq eval '.totalNodes' {{.STATE_FILE_FQ_PATH}} + VM_BASE_NAME: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + cmds: + - ./scripts/create_vms.sh "{{.STATE_FILE_FQ_PATH}}" + status: + - expected={{.TOTAL_HARDWARE}}; got=$(virsh --connect qemu:///system list --all --name |grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]] + + apply-bmc-secret: + run: once + deps: [kind-cluster, bmc-secret] + summary: | + Apply the BMC secret. 
+ vars: + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/bmc-secret.yaml + status: + - KUBECONFIG="{{.KUBECONFIG}}" kubectl get secret bmc-creds -n {{.NAMESPACE}} + + apply-bmc-machines: + run: once + deps: [kind-cluster, bmc-machine-cr] + summary: | + Apply the BMC machines. + vars: + NAMES: + sh: yq e '.vm.details[] | [key] | @csv' {{.STATE_FILE_FQ_PATH}} + TOTAL_HARDWARE: + sh: yq eval '.totalNodes' {{.STATE_FILE_FQ_PATH}} + VM_BASE_NAME: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - for: { var: NAMES } + cmd: KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/bmc-machine-{{.ITEM}}.yaml + status: + - expected={{.TOTAL_HARDWARE}}; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get machines.bmc -n {{.NAMESPACE}} | grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]] + + apply-hardware: + run: once + deps: [kind-cluster, hardware-cr] + summary: | + Apply the hardware. 
+ vars: + NAMES: + sh: yq e '.vm.details[] | [key] | @csv' {{.STATE_FILE_FQ_PATH}} + TOTAL_HARDWARE: + sh: yq eval '.totalNodes' {{.STATE_FILE_FQ_PATH}} + VM_BASE_NAME: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - for: { var: NAMES } + cmd: KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/hardware-{{.ITEM}}.yaml + status: + - expected={{.TOTAL_HARDWARE}}; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get hardware -n {{.NAMESPACE}} | grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]] + + create-workload-cluster: + run: once + deps: [kind-cluster, capi:ordered] + summary: | + Create the workload cluster by applying the generated manifest file. + vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + cmds: + - until KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml >>{{.OUTPUT_DIR}}/error.log 2>&1; do echo "Trying kubectl apply again..."; sleep 3; done + - echo "Workload manifest applied to cluster." + status: + - KUBECONFIG="{{.KUBECONFIG}}" kubectl get -n {{.NAMESPACE}} cluster {{.CLUSTER_NAME}} + + get-workload-cluster-kubeconfig: + run: once + deps: [create-workload-cluster] + summary: | + Get the workload cluster's kubeconfig. 
+ vars: + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + cmds: + - until KUBECONFIG="{{.KUBECONFIG}}" clusterctl get kubeconfig -n {{.NAMESPACE}} {{.CLUSTER_NAME}} >>{{.OUTPUT_DIR}}/error.log 2>&1 ; do echo "Waiting for workload cluster kubeconfig to be available..."; sleep 4; done + - KUBECONFIG="{{.KUBECONFIG}}" clusterctl get kubeconfig -n {{.NAMESPACE}} {{.CLUSTER_NAME}} > {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig + - echo "Workload cluster kubeconfig saved to {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig." + status: + - echo ; [ -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig ] \ No newline at end of file diff --git a/capt/tasks/Taskfile-delete.yaml b/capt/tasks/Taskfile-delete.yaml new file mode 100644 index 00000000..73c6d989 --- /dev/null +++ b/capt/tasks/Taskfile-delete.yaml @@ -0,0 +1,58 @@ +version: '3' + +tasks: + + playground: + summary: | + Delete the CAPT playground. + cmds: + - task: kind-cluster + - task: vbmc-container + - task: vms + - task: output-dir + + kind-cluster: + summary: | + Delete the KinD cluster. + vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + cmds: + - kind delete cluster --name {{.CLUSTER_NAME}} + status: + - got=$(kind get clusters | grep -c {{.CLUSTER_NAME}} || :); [[ "$got" == "0" ]] + + vms: + summary: | + Delete the VMs. 
+ vars: + VM_NAMES: + sh: yq e '.vm.details[] | [key] | @csv' {{.STATE_FILE_FQ_PATH}} + VM_BASE_NAME: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + cmds: + - for: { var: VM_NAMES } + cmd: (virsh --connect qemu:///system destroy {{.ITEM}} || true) ## if the VM is already off, this will fail + - for: { var: VM_NAMES } + cmd: virsh --connect qemu:///system undefine --nvram --remove-all-storage {{.ITEM}} + status: + - got=$(virsh --connect qemu:///system list --all --name | grep -ce "{{.VM_BASE_NAME}}*" || :); [[ "$got" == "0" ]] + + vbmc-container: + summary: | + Delete the Virtual BMC container. + vars: + VBMC_CONTAINER_NAME: + sh: yq eval '.virtualBMC.containerName' {{.STATE_FILE_FQ_PATH}} + cmds: + - docker rm -f {{.VBMC_CONTAINER_NAME}} + status: + - got=$(docker ps -a | grep -c {{.VBMC_CONTAINER_NAME}} || :); [[ "$got" == "0" ]] + + output-dir: + summary: | + Delete the output directory. + cmds: + - rm -rf {{.OUTPUT_DIR}} + status: + - echo ;[ ! -d {{.OUTPUT_DIR}} ] diff --git a/capt/tasks/Taskfile-vbmc.yaml b/capt/tasks/Taskfile-vbmc.yaml new file mode 100644 index 00000000..a5d66f51 --- /dev/null +++ b/capt/tasks/Taskfile-vbmc.yaml @@ -0,0 +1,43 @@ +version: '3' + +tasks: + + start-server: + run: once + summary: | + Start the virtualbmc server. Requires the "kind" docker network to exist. + vars: + VBMC_CONTAINER_NAME: + sh: yq eval '.virtualBMC.containerName' {{.STATE_FILE_FQ_PATH}} + VBMC_CONTAINER_IMAGE: + sh: yq eval '.virtualBMC.image' {{.STATE_FILE_FQ_PATH}} + cmds: + - docker run -d --privileged --rm --network kind -v /var/run/libvirt/libvirt-sock-ro:/var/run/libvirt/libvirt-sock-ro -v /var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock --name {{.VBMC_CONTAINER_NAME}} {{.VBMC_CONTAINER_IMAGE}} + status: + - docker ps | grep -q {{.VBMC_CONTAINER_NAME}} + + start-vbmcs: + run: once + deps: [start-server] + summary: | + Register and start the virtualbmc servers. Requires that the virtual machines exist. 
+ vars: + VBMC_NAME: + sh: yq e '.virtualBMC.containerName' {{.STATE_FILE_FQ_PATH}} + cmds: + - ./scripts/virtualbmc.sh {{.STATE_FILE_FQ_PATH}} + status: + - expected=$(yq e '.totalNodes' {{.STATE_FILE_FQ_PATH}}); got=$(docker exec {{.VBMC_NAME}} vbmc list | grep -c "running" || :); [[ "$got" == "$expected" ]] + + update-state: + run: once + deps: [start-server] + summary: | + Update the state file with the virtual bmc server information. + vars: + VBMC_CONTAINER_NAME: + sh: yq eval '.virtualBMC.containerName' {{.STATE_FILE_FQ_PATH}} + cmds: + - vbmc_ip=$(docker inspect -f '{{`{{ .NetworkSettings.Networks.kind.IPAddress }}`}}' {{.VBMC_CONTAINER_NAME}}); yq e -i '.virtualBMC.ip = "'$vbmc_ip'"' {{.STATE_FILE_FQ_PATH}} + status: + - vbmc_ip=$(docker inspect -f '{{`{{ .NetworkSettings.Networks.kind.IPAddress }}`}}' {{.VBMC_CONTAINER_NAME}}); [[ "$(yq eval '.virtualBMC.ip' {{.STATE_FILE_FQ_PATH}})" == "$vbmc_ip" ]] \ No newline at end of file diff --git a/capt/templates/bmc-machine.tmpl b/capt/templates/bmc-machine.tmpl new file mode 100644 index 00000000..11d8ee2c --- /dev/null +++ b/capt/templates/bmc-machine.tmpl @@ -0,0 +1,16 @@ +apiVersion: bmc.tinkerbell.org/v1alpha1 +kind: Machine +metadata: + name: $NODE_NAME + namespace: $NAMESPACE +spec: + connection: + authSecretRef: + name: bmc-creds + namespace: $NAMESPACE + host: $BMC_IP + insecureTLS: true + port: $BMC_PORT + providerOptions: + ipmitool: + port: $BMC_PORT \ No newline at end of file diff --git a/capt/templates/bmc-secret.tmpl b/capt/templates/bmc-secret.tmpl new file mode 100644 index 00000000..35fa3e9c --- /dev/null +++ b/capt/templates/bmc-secret.tmpl @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + password: $BMC_PASS_BASE64 + username: $BMC_USER_BASE64 +kind: Secret +metadata: + name: bmc-creds + namespace: $NAMESPACE +type: kubernetes.io/basic-auth \ No newline at end of file diff --git a/capt/templates/clusterctl.tmpl b/capt/templates/clusterctl.tmpl new file mode 100644 index 00000000..606bbf60 --- 
/dev/null +++ b/capt/templates/clusterctl.tmpl @@ -0,0 +1,7 @@ +providers: + - name: "tinkerbell" + url: "https://github.com/tinkerbell/cluster-api-provider-tinkerbell/releases/v$CAPT_VERSION/infrastructure-components.yaml" + type: "InfrastructureProvider" +images: + infrastructure-tinkerbell: + tag: v$CAPT_VERSION \ No newline at end of file diff --git a/capt/templates/hardware.tmpl b/capt/templates/hardware.tmpl new file mode 100644 index 00000000..bdfdd840 --- /dev/null +++ b/capt/templates/hardware.tmpl @@ -0,0 +1,34 @@ +apiVersion: tinkerbell.org/v1alpha1 +kind: Hardware +metadata: + labels: + tinkerbell.org/role: $NODE_ROLE + name: $NODE_NAME + namespace: $NAMESPACE +spec: + bmcRef: + apiGroup: bmc.tinkerbell.org + kind: Machine + name: $NODE_NAME + disks: + - device: /dev/vda + interfaces: + - dhcp: + arch: x86_64 + hostname: $NODE_NAME + ip: + address: $NODE_IP + gateway: $GATEWAY_IP + netmask: 255.255.0.0 + lease_time: 4294967294 + mac: $NODE_MAC + name_servers: + - 8.8.8.8 + - 1.1.1.1 + netboot: + allowPXE: true + allowWorkflow: true + metadata: + instance: + hostname: $NODE_NAME + id: $NODE_MAC \ No newline at end of file diff --git a/capt/templates/kustomization.tmpl b/capt/templates/kustomization.tmpl new file mode 100644 index 00000000..bb6862e4 --- /dev/null +++ b/capt/templates/kustomization.tmpl @@ -0,0 +1,227 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +namespace: $NAMESPACE +resources: + - prekustomization.yaml +patches: + - target: + group: infrastructure.cluster.x-k8s.io + kind: TinkerbellMachineTemplate + name: ".*control-plane.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/template/spec + value: + hardwareAffinity: + required: + - labelSelector: + matchLabels: + tinkerbell.org/role: control-plane + - target: + group: infrastructure.cluster.x-k8s.io + kind: TinkerbellMachineTemplate + name: ".*worker.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/template/spec + value: + hardwareAffinity: + 
required: + - labelSelector: + matchLabels: + tinkerbell.org/role: worker + - target: + group: infrastructure.cluster.x-k8s.io + kind: TinkerbellMachineTemplate + name: ".*control-plane.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/template/spec + value: + templateOverride: | + version: "0.1" + name: playground-template + global_timeout: 6000 + tasks: + - name: "playground-template" + worker: "{{.device_1}}" + volumes: + - /dev:/dev + - /dev/console:/dev/console + - /lib/firmware:/lib/firmware:ro + actions: + - name: "stream-image" + image: quay.io/tinkerbell-actions/oci2disk:v1.0.0 + timeout: 600 + environment: + IMG_URL: $OS_REGISTRY/$OS_DISTRO-$OS_VERSION:$KUBE_VERSION.gz + DEST_DISK: {{ index .Hardware.Disks 0 }} + COMPRESSED: true + - name: "add-tink-cloud-init-config" + image: quay.io/tinkerbell-actions/writefile:v1.0.0 + timeout: 90 + environment: + DEST_DISK: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg + UID: 0 + GID: 0 + MODE: 0600 + DIRMODE: 0700 + CONTENTS: | + datasource: + Ec2: + metadata_urls: ["http://$TINKERBELL_VIP:50061"] + strict_id: false + system_info: + default_user: + name: tink + groups: [wheel, adm] + sudo: ["ALL=(ALL) NOPASSWD:ALL"] + shell: /bin/bash + manage_etc_hosts: localhost + warnings: + dsid_missing_source: off + - name: "add-tink-cloud-init-ds-config" + image: quay.io/tinkerbell-actions/writefile:v1.0.0 + timeout: 90 + environment: + DEST_DISK: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + DEST_PATH: /etc/cloud/ds-identify.cfg + UID: 0 + GID: 0 + MODE: 0600 + DIRMODE: 0700 + CONTENTS: | + datasource: Ec2 + - name: "kexec-image" + image: ghcr.io/jacobweinstock/waitdaemon:0.2.0 + timeout: 90 + pid: host + environment: + BLOCK_DEVICE: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + IMAGE: quay.io/tinkerbell-actions/kexec:v1.0.0 + WAIT_SECONDS: 10 + volumes: + - 
/var/run/docker.sock:/var/run/docker.sock + - target: + group: infrastructure.cluster.x-k8s.io + kind: TinkerbellMachineTemplate + name: ".*worker.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/template/spec + value: + templateOverride: | + version: "0.1" + name: playground-template + global_timeout: 6000 + tasks: + - name: "playground-template" + worker: "{{.device_1}}" + volumes: + - /dev:/dev + - /dev/console:/dev/console + - /lib/firmware:/lib/firmware:ro + actions: + - name: "stream-image" + image: quay.io/tinkerbell-actions/oci2disk:v1.0.0 + timeout: 600 + environment: + IMG_URL: $OS_REGISTRY/$OS_DISTRO-$OS_VERSION:$KUBE_VERSION.gz + DEST_DISK: {{ index .Hardware.Disks 0 }} + COMPRESSED: true + - name: "add-tink-cloud-init-config" + image: quay.io/tinkerbell-actions/writefile:v1.0.0 + timeout: 90 + environment: + DEST_DISK: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg + UID: 0 + GID: 0 + MODE: 0600 + DIRMODE: 0700 + CONTENTS: | + datasource: + Ec2: + metadata_urls: ["http://$TINKERBELL_VIP:50061"] + strict_id: false + system_info: + default_user: + name: tink + groups: [wheel, adm] + sudo: ["ALL=(ALL) NOPASSWD:ALL"] + shell: /bin/bash + manage_etc_hosts: localhost + warnings: + dsid_missing_source: off + - name: "add-tink-cloud-init-ds-config" + image: quay.io/tinkerbell-actions/writefile:v1.0.0 + timeout: 90 + environment: + DEST_DISK: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + DEST_PATH: /etc/cloud/ds-identify.cfg + UID: 0 + GID: 0 + MODE: 0600 + DIRMODE: 0700 + CONTENTS: | + datasource: Ec2 + - name: "kexec-image" + image: ghcr.io/jacobweinstock/waitdaemon:0.2.0 + timeout: 90 + pid: host + environment: + BLOCK_DEVICE: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + IMAGE: quay.io/tinkerbell-actions/kexec:v1.0.0 + WAIT_SECONDS: 10 + volumes: + - /var/run/docker.sock:/var/run/docker.sock + - target: + group: 
infrastructure.cluster.x-k8s.io + kind: TinkerbellCluster + name: ".*" + version: v1beta1 + patch: |- + - op: add + path: /spec + value: + imageLookupBaseRegistry: "$OS_REGISTRY" + imageLookupOSDistro: "$OS_DISTRO" + imageLookupOSVersion: "$VERSIONS_OS" + - target: + group: bootstrap.cluster.x-k8s.io + kind: KubeadmConfigTemplate + name: "playground-.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/template/spec/users + value: + - name: tink + sudo: ALL=(ALL) NOPASSWD:ALL + sshAuthorizedKeys: + - $SSH_AUTH_KEY + - target: + group: controlplane.cluster.x-k8s.io + kind: KubeadmControlPlane + name: "playground-.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/kubeadmConfigSpec/users + value: + - name: tink + sudo: ALL=(ALL) NOPASSWD:ALL + sshAuthorizedKeys: + - $SSH_AUTH_KEY + From 265fbb452fc8f3f85bc11218a823adacc495ad7f Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Mon, 10 Jun 2024 14:59:02 -0600 Subject: [PATCH 02/13] Linting fixes Signed-off-by: Jacob Weinstock --- .gitignore | 3 ++- capt/Taskfile.yaml | 17 ++++++++++++++--- capt/tasks/Taskfile-capi.yaml | 5 ++--- capt/tasks/Taskfile-create.yaml | 4 ++-- capt/tasks/Taskfile-delete.yaml | 3 +-- capt/tasks/Taskfile-vbmc.yaml | 7 +++---- 6 files changed, 24 insertions(+), 15 deletions(-) diff --git a/.gitignore b/.gitignore index 76f98344..9f5727b9 100644 --- a/.gitignore +++ b/.gitignore @@ -2,4 +2,5 @@ error.log .task .state -capt/output/ \ No newline at end of file +capt/output/ +.vscode/ \ No newline at end of file diff --git a/capt/Taskfile.yaml b/capt/Taskfile.yaml index 9ff16af5..915578b7 100644 --- a/capt/Taskfile.yaml +++ b/capt/Taskfile.yaml @@ -1,4 +1,4 @@ -version: '3' +version: "3" includes: create: ./tasks/Taskfile-create.yaml @@ -41,7 +41,18 @@ tasks: summary: | Validate all required dependencies for the CAPT playground. 
cmds: - - for: ['virsh', 'docker', 'helm', 'kind', 'kubectl', 'clusterctl', 'virt-install', 'brctl', 'yq'] + - for: + [ + "virsh", + "docker", + "helm", + "kind", + "kubectl", + "clusterctl", + "virt-install", + "brctl", + "yq", + ] cmd: command -v {{ .ITEM }} >/dev/null || echo "'{{ .ITEM }}' was not found in the \$PATH, please ensure it is installed." # sudo apt install virtinst # for virt-install # sudo apt install bridge-utils # for brctl @@ -55,7 +66,7 @@ tasks: - echo "60GB of free and very fast disk space (etcd is very disk I/O sensitive)" - echo "8GB of free RAM" - echo "4 CPU cores" - + ensure-output-dir: summary: | Create the output directory. diff --git a/capt/tasks/Taskfile-capi.yaml b/capt/tasks/Taskfile-capi.yaml index 4c0ef974..b257db9f 100644 --- a/capt/tasks/Taskfile-capi.yaml +++ b/capt/tasks/Taskfile-capi.yaml @@ -1,7 +1,6 @@ -version: '3' +version: "3" tasks: - ordered: summary: | CAPI tasks run in order of dependency. @@ -140,4 +139,4 @@ tasks: generates: - "{{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml" cmds: - - KUBECONFIG="{{.KUBECONFIG}}" kubectl kustomize {{.OUTPUT_DIR}} -o {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml \ No newline at end of file + - KUBECONFIG="{{.KUBECONFIG}}" kubectl kustomize {{.OUTPUT_DIR}} -o {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml diff --git a/capt/tasks/Taskfile-create.yaml b/capt/tasks/Taskfile-create.yaml index 6d3d42b6..88c6ba14 100644 --- a/capt/tasks/Taskfile-create.yaml +++ b/capt/tasks/Taskfile-create.yaml @@ -1,4 +1,4 @@ -version: '3' +version: "3" includes: vbmc: ./Taskfile-vbmc.yaml @@ -218,4 +218,4 @@ tasks: - KUBECONFIG="{{.KUBECONFIG}}" clusterctl get kubeconfig -n {{.NAMESPACE}} {{.CLUSTER_NAME}} > {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig - echo "Workload cluster kubeconfig saved to {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig." 
status: - - echo ; [ -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig ] \ No newline at end of file + - echo ; [ -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig ] diff --git a/capt/tasks/Taskfile-delete.yaml b/capt/tasks/Taskfile-delete.yaml index 73c6d989..4af7fb51 100644 --- a/capt/tasks/Taskfile-delete.yaml +++ b/capt/tasks/Taskfile-delete.yaml @@ -1,7 +1,6 @@ -version: '3' +version: "3" tasks: - playground: summary: | Delete the CAPT playground. diff --git a/capt/tasks/Taskfile-vbmc.yaml b/capt/tasks/Taskfile-vbmc.yaml index a5d66f51..6c926933 100644 --- a/capt/tasks/Taskfile-vbmc.yaml +++ b/capt/tasks/Taskfile-vbmc.yaml @@ -1,7 +1,6 @@ -version: '3' +version: "3" tasks: - start-server: run: once summary: | @@ -28,7 +27,7 @@ tasks: - ./scripts/virtualbmc.sh {{.STATE_FILE_FQ_PATH}} status: - expected=$(yq e '.totalNodes' {{.STATE_FILE_FQ_PATH}}); got=$(docker exec {{.VBMC_NAME}} vbmc list | grep -c "running" || :); [[ "$got" == "$expected" ]] - + update-state: run: once deps: [start-server] @@ -40,4 +39,4 @@ tasks: cmds: - vbmc_ip=$(docker inspect -f '{{`{{ .NetworkSettings.Networks.kind.IPAddress }}`}}' {{.VBMC_CONTAINER_NAME}}); yq e -i '.virtualBMC.ip = "'$vbmc_ip'"' {{.STATE_FILE_FQ_PATH}} status: - - vbmc_ip=$(docker inspect -f '{{`{{ .NetworkSettings.Networks.kind.IPAddress }}`}}' {{.VBMC_CONTAINER_NAME}}); [[ "$(yq eval '.virtualBMC.ip' {{.STATE_FILE_FQ_PATH}})" == "$vbmc_ip" ]] \ No newline at end of file + - vbmc_ip=$(docker inspect -f '{{`{{ .NetworkSettings.Networks.kind.IPAddress }}`}}' {{.VBMC_CONTAINER_NAME}}); [[ "$(yq eval '.virtualBMC.ip' {{.STATE_FILE_FQ_PATH}})" == "$vbmc_ip" ]] From ba6c06a8ded234b30f69095e5832467672308675 Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Mon, 10 Jun 2024 15:19:42 -0600 Subject: [PATCH 03/13] Linting issues with shell formatting Signed-off-by: Jacob Weinstock --- capt/scripts/create_vms.sh | 48 ++++++++-------- capt/scripts/generate_bmc.sh | 34 ++++++------ capt/scripts/generate_hardware.sh | 40 
+++++++------- capt/scripts/generate_secret.sh | 19 +++---- capt/scripts/generate_state.sh | 92 +++++++++++++++---------------- capt/scripts/update_state.sh | 76 ++++++++++++------------- capt/scripts/virtualbmc.sh | 20 +++---- 7 files changed, 164 insertions(+), 165 deletions(-) diff --git a/capt/scripts/create_vms.sh b/capt/scripts/create_vms.sh index 3d113a48..7d400a70 100755 --- a/capt/scripts/create_vms.sh +++ b/capt/scripts/create_vms.sh @@ -5,30 +5,30 @@ set -euo pipefail # Create VMs function main() { - declare -r STATE_FILE="$1" - declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") - declare BRIDGE_NAME="$(yq eval '.kind.bridgeName' "$STATE_FILE")" - declare CPUS="$(yq eval '.vm.cpusPerVM' "$STATE_FILE")" - declare MEM="$(yq eval '.vm.memInMBPerVM' "$STATE_FILE")" - declare DISK_SIZE="$(yq eval '.vm.diskSizeInGBPerVM' "$STATE_FILE")" - declare DISK_PATH="$(yq eval '.vm.diskPath' "$STATE_FILE")" + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + declare BRIDGE_NAME="$(yq eval '.kind.bridgeName' "$STATE_FILE")" + declare CPUS="$(yq eval '.vm.cpusPerVM' "$STATE_FILE")" + declare MEM="$(yq eval '.vm.memInMBPerVM' "$STATE_FILE")" + declare DISK_SIZE="$(yq eval '.vm.diskSizeInGBPerVM' "$STATE_FILE")" + declare DISK_PATH="$(yq eval '.vm.diskPath' "$STATE_FILE")" - while IFS=$',' read -r name mac; do - # create the VM - virt-install \ - --description "CAPT VM" \ - --ram "$MEM" --vcpus "$CPUS" \ - --os-variant "ubuntu20.04" \ - --graphics "vnc" \ - --boot "uefi,firmware.feature0.name=enrolled-keys,firmware.feature0.enabled=no,firmware.feature1.name=secure-boot,firmware.feature1.enabled=yes" \ - --noautoconsole \ - --noreboot \ - --import \ - --connect "qemu:///system" \ - --name "$name" \ - --disk "path=$DISK_PATH/$name-disk.img,bus=virtio,size=10,sparse=yes" \ - --network "bridge:$BRIDGE_NAME,mac=$mac" - done < <(yq e '.vm.details.[] | [key, .mac] | @csv' "$STATE_FILE") + while IFS=$',' read -r name mac; do + # 
create the VM + virt-install \ + --description "CAPT VM" \ + --ram "$MEM" --vcpus "$CPUS" \ + --os-variant "ubuntu20.04" \ + --graphics "vnc" \ + --boot "uefi,firmware.feature0.name=enrolled-keys,firmware.feature0.enabled=no,firmware.feature1.name=secure-boot,firmware.feature1.enabled=yes" \ + --noautoconsole \ + --noreboot \ + --import \ + --connect "qemu:///system" \ + --name "$name" \ + --disk "path=$DISK_PATH/$name-disk.img,bus=virtio,size=10,sparse=yes" \ + --network "bridge:$BRIDGE_NAME,mac=$mac" + done < <(yq e '.vm.details.[] | [key, .mac] | @csv' "$STATE_FILE") } -main "$@" \ No newline at end of file +main "$@" diff --git a/capt/scripts/generate_bmc.sh b/capt/scripts/generate_bmc.sh index c962914a..9a2a2f66 100755 --- a/capt/scripts/generate_bmc.sh +++ b/capt/scripts/generate_bmc.sh @@ -5,25 +5,25 @@ set -euo pipefail # This script creates the BMC machine yaml files needed for the CAPT playground. function main() { - declare -r STATE_FILE="$1" - declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") - rm -f "$OUTPUT_DIR"/bmc-machine*.yaml + rm -f "$OUTPUT_DIR"/bmc-machine*.yaml - namespace=$(yq eval '.namespace' "$STATE_FILE") - bmc_ip=$(yq eval '.virtualBMC.ip' "$STATE_FILE") + namespace=$(yq eval '.namespace' "$STATE_FILE") + bmc_ip=$(yq eval '.virtualBMC.ip' "$STATE_FILE") - while IFS=$',' read -r name port; do - export NODE_NAME="$name" - export BMC_IP="$bmc_ip" - export BMC_PORT="$port" - export NAMESPACE="$namespace" - envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/bmc-machine.tmpl > "$OUTPUT_DIR"/bmc-machine-"$NODE_NAME".yaml - unset NODE_NAME - unset BMC_IP - unset BMC_PORT - unset NAMESPACE - done < <(yq e '.vm.details.[] | [key, .bmc.port] | @csv' "$STATE_FILE") + while IFS=$',' read -r name port; do + export NODE_NAME="$name" + export BMC_IP="$bmc_ip" + export BMC_PORT="$port" + export NAMESPACE="$namespace" + envsubst "$(printf 
'${%s} ' $(env | cut -d'=' -f1))" < templates/bmc-machine.tmpl > "$OUTPUT_DIR"/bmc-machine-"$NODE_NAME".yaml + unset NODE_NAME + unset BMC_IP + unset BMC_PORT + unset NAMESPACE + done < <(yq e '.vm.details.[] | [key, .bmc.port] | @csv' "$STATE_FILE") } -main "$@" \ No newline at end of file +main "$@" diff --git a/capt/scripts/generate_hardware.sh b/capt/scripts/generate_hardware.sh index a87516d2..99a75689 100755 --- a/capt/scripts/generate_hardware.sh +++ b/capt/scripts/generate_hardware.sh @@ -5,28 +5,28 @@ function main() { - # Generate hardware - declare -r STATE_FILE="$1" - declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") - declare -r NS=$(yq eval '.namespace' "$STATE_FILE") + # Generate hardware + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + declare -r NS=$(yq eval '.namespace' "$STATE_FILE") - rm -f "$OUTPUT_DIR"/hardware*.yaml + rm -f "$OUTPUT_DIR"/hardware*.yaml - while IFS=$',' read -r name mac role ip gateway; do - export NODE_NAME="$name" - export NODE_MAC="$mac" - export NODE_ROLE="$role" - export NODE_IP="$ip" - export GATEWAY_IP="$gateway" - export NAMESPACE="$NS" - envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/hardware.tmpl > "$OUTPUT_DIR"/hardware-"$NODE_NAME".yaml - unset NODE_ROLE - unset NODE_NAME - unset NODE_IP - unset NODE_MAC - unset GATEWAY_IP + while IFS=$',' read -r name mac role ip gateway; do + export NODE_NAME="$name" + export NODE_MAC="$mac" + export NODE_ROLE="$role" + export NODE_IP="$ip" + export GATEWAY_IP="$gateway" + export NAMESPACE="$NS" + envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/hardware.tmpl > "$OUTPUT_DIR"/hardware-"$NODE_NAME".yaml + unset NODE_ROLE + unset NODE_NAME + unset NODE_IP + unset NODE_MAC + unset GATEWAY_IP + done < <(yq e '.vm.details.[] | [key, .mac, .role, .ip, .gateway] | @csv' "$STATE_FILE") } -main "$@" \ No newline at end of file +main "$@" diff --git 
a/capt/scripts/generate_secret.sh b/capt/scripts/generate_secret.sh index 6e3f7c19..a83b1da8 100755 --- a/capt/scripts/generate_secret.sh +++ b/capt/scripts/generate_secret.sh @@ -3,17 +3,16 @@ # Generate secret. All machines share the same secret. The only customization is the namespace, user name, and password. function main() { - declare -r STATE_FILE="$1" - declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") - export NAMESPACE=$(yq eval '.namespace' "$STATE_FILE") - export BMC_USER_BASE64=$(yq eval '.virtualBMC.user' "$STATE_FILE" | tr -d '\n' | base64) - export BMC_PASS_BASE64=$(yq eval '.virtualBMC.pass' "$STATE_FILE" | tr -d '\n' | base64) + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + export NAMESPACE=$(yq eval '.namespace' "$STATE_FILE") + export BMC_USER_BASE64=$(yq eval '.virtualBMC.user' "$STATE_FILE" | tr -d '\n' | base64) + export BMC_PASS_BASE64=$(yq eval '.virtualBMC.pass' "$STATE_FILE" | tr -d '\n' | base64) - envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/bmc-secret.tmpl > "$OUTPUT_DIR"/bmc-secret.yaml - unset BMC_USER_BASE64 - unset BMC_PASS_BASE64 - unset NAMESPACE + envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/bmc-secret.tmpl > "$OUTPUT_DIR"/bmc-secret.yaml + unset BMC_USER_BASE64 + unset BMC_PASS_BASE64 + unset NAMESPACE } main "$@" - diff --git a/capt/scripts/generate_state.sh b/capt/scripts/generate_state.sh index ae9e2876..941cd492 100755 --- a/capt/scripts/generate_state.sh +++ b/capt/scripts/generate_state.sh @@ -2,7 +2,7 @@ # This script generates the state data needed for creating the CAPT playground. 
# state file spec -cat <<EOF > /dev/null +cat <<EOF >/dev/null --- clusterName: "capt-playground" outputDir: "/home/tink/repos/tinkerbell/cluster-api-provider-tinkerbell/playground/output" @@ -72,61 +72,61 @@ EOF set -euo pipefail function generate_mac() { - declare NODE_NAME="$1" + declare NODE_NAME="$1" - echo "$NODE_NAME" | md5sum|sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/' + echo "$NODE_NAME" | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/' } function main() { - # read in the config.yaml file and populate the .state file - declare CONFIG_FILE="$1" - declare STATE_FILE="$2" + # read in the config.yaml file and populate the .state file + declare CONFIG_FILE="$1" + declare STATE_FILE="$2" - # update outputDir to be a fully qualified path - output_dir=$(yq eval '.outputDir' "$CONFIG_FILE") - if [[ "$output_dir" = /* ]]; then - echo - else - current_dir=$(pwd) - output_dir="$current_dir/$output_dir" - fi - config_file=$(realpath "$CONFIG_FILE") - state_file="$STATE_FILE" + # update outputDir to be a fully qualified path + output_dir=$(yq eval '.outputDir' "$CONFIG_FILE") + if [[ $output_dir == /* ]]; then + echo + else + current_dir=$(pwd) + output_dir="$current_dir/$output_dir" + fi + config_file=$(realpath "$CONFIG_FILE") + state_file="$STATE_FILE" - cp -a "$config_file" "$state_file" - yq e -i '.outputDir = "'$output_dir'"' "$state_file" + cp -a "$config_file" "$state_file" + yq e -i '.outputDir = "'$output_dir'"' "$state_file" - # totalNodes - total_nodes=$(($(yq eval '.counts.controlPlanes' "$state_file") + $(yq eval '.counts.workers' "$state_file") + $(yq eval '.counts.spares' "$state_file"))) - yq e -i ".totalNodes = $total_nodes" "$state_file" + # totalNodes + total_nodes=$(($(yq eval '.counts.controlPlanes' "$state_file") + $(yq eval '.counts.workers' "$state_file") + $(yq eval '.counts.spares' "$state_file"))) + yq e -i ".totalNodes = $total_nodes" "$state_file" - # populate vmNames - base_name=$(yq eval '.vm.baseName' 
"$state_file") - base_ipmi_port=6230 - for i in $(seq 1 $total_nodes); do - name="$base_name$i" - mac=$(generate_mac "$name") - yq e -i ".vm.details.$name.mac = \"$mac\"" "$state_file" - yq e -i ".vm.details.$name.bmc.port = $(($base_ipmi_port + $i))" "$state_file" - # set the node role - if [[ $i -le $(yq eval '.counts.controlPlanes' "$state_file") ]]; then - yq e -i ".vm.details.$name.role = \"control-plane\"" "$state_file" - elif [[ $i -le $(($(yq eval '.counts.controlPlanes' "$state_file") + $(yq eval '.counts.workers' "$state_file"))) ]]; then - yq e -i ".vm.details.$name.role = \"worker\"" "$state_file" - else - yq e -i ".vm.details.$name.role = \"spare\"" "$state_file" - fi - unset name - unset mac - done + # populate vmNames + base_name=$(yq eval '.vm.baseName' "$state_file") + base_ipmi_port=6230 + for i in $(seq 1 $total_nodes); do + name="$base_name$i" + mac=$(generate_mac "$name") + yq e -i ".vm.details.$name.mac = \"$mac\"" "$state_file" + yq e -i ".vm.details.$name.bmc.port = $((base_ipmi_port + i))" "$state_file" + # set the node role + if [[ $i -le $(yq eval '.counts.controlPlanes' "$state_file") ]]; then + yq e -i ".vm.details.$name.role = \"control-plane\"" "$state_file" + elif [[ $i -le $(($(yq eval '.counts.controlPlanes' "$state_file") + $(yq eval '.counts.workers' "$state_file"))) ]]; then + yq e -i ".vm.details.$name.role = \"worker\"" "$state_file" + else + yq e -i ".vm.details.$name.role = \"spare\"" "$state_file" + fi + unset name + unset mac + done - # populate kind.kubeconfig - yq e -i '.kind.kubeconfig = "'$output_dir'/kind.kubeconfig"' "$state_file" + # populate kind.kubeconfig + yq e -i '.kind.kubeconfig = "'$output_dir'/kind.kubeconfig"' "$state_file" - # populate the expected OS version in the raw image name (22.04 -> 2204) - os_version=$(yq eval '.versions.os' "$state_file") - os_version=$(echo "$os_version" | tr -d '.') - yq e -i '.os.version = "'$os_version'"' "$state_file" + # populate the expected OS version in the raw image 
name (22.04 -> 2204) + os_version=$(yq eval '.versions.os' "$state_file") + os_version=$(echo "$os_version" | tr -d '.') + yq e -i '.os.version = "'$os_version'"' "$state_file" } main "$@" diff --git a/capt/scripts/update_state.sh b/capt/scripts/update_state.sh index 268277ef..f27a6479 100755 --- a/capt/scripts/update_state.sh +++ b/capt/scripts/update_state.sh @@ -5,44 +5,44 @@ set -euo pipefail # this script updates the state file with the generated hardware data function main() { - declare -r STATE_FILE="$1" - declare CLUSTER_NAME=$(yq eval '.clusterName' "$STATE_FILE") - declare GATEWAY_IP=$(docker inspect -f '{{ .NetworkSettings.Networks.kind.Gateway }}' "$CLUSTER_NAME"-control-plane) - declare NODE_IP_BASE=$(awk -F"." '{print $1"."$2".10.20"}' <<< "$GATEWAY_IP") - declare NODE_BASE=$(yq eval '.vm.baseName' "$STATE_FILE") - declare IP_LAST_OCTET=$(echo "$NODE_IP_BASE" | cut -d. -f4) - - yq e -i '.kind.gatewayIP = "'$GATEWAY_IP'"' "$STATE_FILE" - yq e -i '.kind.nodeIPBase = "'$NODE_IP_BASE'"' "$STATE_FILE" - - # set an ip and gateway per node - idx=1 - while IFS=$',' read -r name; do - v=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx)) - ((idx++)) - yq e -i ".vm.details.$name.ip = \"$v\"" "$STATE_FILE" - yq e -i ".vm.details.$name.gateway = \"$GATEWAY_IP\"" "$STATE_FILE" - unset v - done < <(yq e '.vm.details.[] | [key] | @csv' "$STATE_FILE") - - # set the Tinkerbell Load Balancer IP (VIP) - offset=50 - t_lb=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx + offset)) - yq e -i '.tinkerbell.vip = "'$t_lb'"' "$STATE_FILE" - - # set the cluster control plane load balancer IP (VIP) - cp_lb=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx + offset + 1)) - yq e -i '.cluster.controlPlane.vip = "'$cp_lb'"' "$STATE_FILE" - - # set the cluster pod cidr - POD_CIDR=$(awk -F"." 
'{print $1".100.0.0/16"}' <<< "$GATEWAY_IP") - yq e -i '.cluster.podCIDR = "'$POD_CIDR'"' "$STATE_FILE" - - # set the KinD bridge name - network_id=$(docker network inspect -f '{{.Id}}' kind) - bridge_name="br-${network_id:0:12}" - yq e -i '.kind.bridgeName = "'$bridge_name'"' "$STATE_FILE" + declare -r STATE_FILE="$1" + declare CLUSTER_NAME=$(yq eval '.clusterName' "$STATE_FILE") + declare GATEWAY_IP=$(docker inspect -f '{{ .NetworkSettings.Networks.kind.Gateway }}' "$CLUSTER_NAME"-control-plane) + declare NODE_IP_BASE=$(awk -F"." '{print $1"."$2".10.20"}' <<<"$GATEWAY_IP") + declare NODE_BASE=$(yq eval '.vm.baseName' "$STATE_FILE") + declare IP_LAST_OCTET=$(echo "$NODE_IP_BASE" | cut -d. -f4) + + yq e -i '.kind.gatewayIP = "'$GATEWAY_IP'"' "$STATE_FILE" + yq e -i '.kind.nodeIPBase = "'$NODE_IP_BASE'"' "$STATE_FILE" + + # set an ip and gateway per node + idx=1 + while IFS=$',' read -r name; do + v=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx)) + ((idx++)) + yq e -i ".vm.details.$name.ip = \"$v\"" "$STATE_FILE" + yq e -i ".vm.details.$name.gateway = \"$GATEWAY_IP\"" "$STATE_FILE" + unset v + done < <(yq e '.vm.details.[] | [key] | @csv' "$STATE_FILE") + + # set the Tinkerbell Load Balancer IP (VIP) + offset=50 + t_lb=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx + offset)) + yq e -i '.tinkerbell.vip = "'$t_lb'"' "$STATE_FILE" + + # set the cluster control plane load balancer IP (VIP) + cp_lb=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx + offset + 1)) + yq e -i '.cluster.controlPlane.vip = "'$cp_lb'"' "$STATE_FILE" + + # set the cluster pod cidr + POD_CIDR=$(awk -F"." 
'{print $1".100.0.0/16"}' <<<"$GATEWAY_IP") + yq e -i '.cluster.podCIDR = "'$POD_CIDR'"' "$STATE_FILE" + + # set the KinD bridge name + network_id=$(docker network inspect -f '{{.Id}}' kind) + bridge_name="br-${network_id:0:12}" + yq e -i '.kind.bridgeName = "'$bridge_name'"' "$STATE_FILE" } -main "$@" \ No newline at end of file +main "$@" diff --git a/capt/scripts/virtualbmc.sh b/capt/scripts/virtualbmc.sh index 0a0ab167..d36b7be1 100755 --- a/capt/scripts/virtualbmc.sh +++ b/capt/scripts/virtualbmc.sh @@ -5,18 +5,18 @@ set -euo pipefail # This script will registry and start virtual bmc entries in a running virtualbmc container function main() { - declare -r STATE_FILE="$1" - declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") - username=$(yq eval '.virtualBMC.user' "$STATE_FILE") - password=$(yq eval '.virtualBMC.pass' "$STATE_FILE") + username=$(yq eval '.virtualBMC.user' "$STATE_FILE") + password=$(yq eval '.virtualBMC.pass' "$STATE_FILE") - container_name=$(yq eval '.virtualBMC.containerName' "$STATE_FILE") - while IFS=$',' read -r name port; do - docker exec "$container_name" vbmc add --username "$username" --password "$password" --port "$port" "$name" - docker exec "$container_name" vbmc start "$name" - done < <(yq e '.vm.details.[] | [key, .bmc.port] | @csv' "$STATE_FILE") + container_name=$(yq eval '.virtualBMC.containerName' "$STATE_FILE") + while IFS=$',' read -r name port; do + docker exec "$container_name" vbmc add --username "$username" --password "$password" --port "$port" "$name" + docker exec "$container_name" vbmc start "$name" + done < <(yq e '.vm.details.[] | [key, .bmc.port] | @csv' "$STATE_FILE") } -main "$@" \ No newline at end of file +main "$@" From 1810f4e5bb2a493c61e31525e961fb30973e26bf Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Mon, 10 Jun 2024 15:41:13 -0600 Subject: [PATCH 04/13] Fix ssh pub key not making it to nodes: The 
name wasn't matching properly and now is. Signed-off-by: Jacob Weinstock --- capt/tasks/Taskfile-capi.yaml | 2 ++ capt/templates/kustomization.tmpl | 4 ++-- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/capt/tasks/Taskfile-capi.yaml b/capt/tasks/Taskfile-capi.yaml index b257db9f..d69d2a98 100644 --- a/capt/tasks/Taskfile-capi.yaml +++ b/capt/tasks/Taskfile-capi.yaml @@ -113,6 +113,8 @@ tasks: sh: yq eval '.versions.kube' {{.STATE_FILE_FQ_PATH}} TINKERBELL_VIP: sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}} + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} vars: OUTPUT_DIR: sh: yq eval '.outputDir' config.yaml diff --git a/capt/templates/kustomization.tmpl b/capt/templates/kustomization.tmpl index bb6862e4..7d1e6b89 100644 --- a/capt/templates/kustomization.tmpl +++ b/capt/templates/kustomization.tmpl @@ -201,7 +201,7 @@ patches: - target: group: bootstrap.cluster.x-k8s.io kind: KubeadmConfigTemplate - name: "playground-.*" + name: "$CLUSTER_NAME-.*" version: v1beta1 patch: |- - op: add @@ -214,7 +214,7 @@ patches: - target: group: controlplane.cluster.x-k8s.io kind: KubeadmControlPlane - name: "playground-.*" + name: "$CLUSTER_NAME-.*" version: v1beta1 patch: |- - op: add From 4220d7b9560ca3f1ecdbbcde3f5ca11e4722ae65 Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Mon, 10 Jun 2024 21:52:09 -0600 Subject: [PATCH 05/13] Update kube version, add kubevip version, handle kubevip for k8s > 1.29: Move the default k8s version to 1.29. Upgrade to kubevip 0.8.0. Handle the super-admin.conf in kubevip for k8s versions >= 1.29.
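The version check this patch adds to Taskfile-capi.yaml can be sketched as a standalone shell function; `conf_path_for` is a hypothetical helper name (the Taskfile embeds the same expression inline), and versions are assumed to look like "v1.29.4":

```shell
#!/usr/bin/env bash
# Sketch of the CONF_PATH selection: kube-vip needs super-admin.conf on
# Kubernetes >= 1.29 (kube-vip issue #684), admin.conf on older versions.
conf_path_for() {
  local minor
  # Split "v1.29.4" on dots; the second field is the minor version.
  minor=$(echo "$1" | awk -F. '{print $2}')
  if [[ "$minor" -gt 28 ]]; then
    echo /etc/kubernetes/super-admin.conf
  else
    echo /etc/kubernetes/admin.conf
  fi
}

conf_path_for "v1.29.4" # /etc/kubernetes/super-admin.conf
conf_path_for "v1.28.3" # /etc/kubernetes/admin.conf
```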
Signed-off-by: Jacob Weinstock --- capt/config.yaml | 3 ++- capt/tasks/Taskfile-capi.yaml | 8 ++++++++ capt/templates/kustomization.tmpl | 11 ++++++++++- 3 files changed, 20 insertions(+), 2 deletions(-) diff --git a/capt/config.yaml b/capt/config.yaml index b09f4da0..edc29ac3 100644 --- a/capt/config.yaml +++ b/capt/config.yaml @@ -9,8 +9,9 @@ counts: versions: capt: 0.5.3 chart: 0.4.4 - kube: v1.28.3 + kube: v1.29.4 os: 20.04 + kubevip: 0.8.0 os: registry: ghcr.io/jacobweinstock/capi-images distro: ubuntu diff --git a/capt/tasks/Taskfile-capi.yaml b/capt/tasks/Taskfile-capi.yaml index d69d2a98..0a82ab8b 100644 --- a/capt/tasks/Taskfile-capi.yaml +++ b/capt/tasks/Taskfile-capi.yaml @@ -115,7 +115,15 @@ tasks: sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}} CLUSTER_NAME: sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KUBEVIP_VERSION: + sh: yq eval '.versions.kubevip' {{.STATE_FILE_FQ_PATH}} + CONTROL_PLANE_VIP: + sh: yq eval '.cluster.controlPlane.vip' {{.STATE_FILE_FQ_PATH}} + CONF_PATH: # https://github.com/kube-vip/kube-vip/issues/684 + sh: "[[ $(echo {{.KUBE_VERSION}} | awk -F. 
'{print $2}') -gt 28 ]] && echo /etc/kubernetes/super-admin.conf || echo /etc/kubernetes/admin.conf" vars: + KUBE_VERSION: + sh: yq eval '.versions.kube' {{.STATE_FILE_FQ_PATH}} OUTPUT_DIR: sh: yq eval '.outputDir' config.yaml sources: diff --git a/capt/templates/kustomization.tmpl b/capt/templates/kustomization.tmpl index 7d1e6b89..da20bae3 100644 --- a/capt/templates/kustomization.tmpl +++ b/capt/templates/kustomization.tmpl @@ -224,4 +224,13 @@ patches: sudo: ALL=(ALL) NOPASSWD:ALL sshAuthorizedKeys: - $SSH_AUTH_KEY - + - target: + group: controlplane.cluster.x-k8s.io + kind: KubeadmControlPlane + name: "$CLUSTER_NAME-.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/kubeadmConfigSpec/preKubeadmCommands + value: + - mkdir -p /etc/kubernetes/manifests && ctr images pull ghcr.io/kube-vip/kube-vip:v$KUBEVIP_VERSION && ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v$KUBEVIP_VERSION vip /kube-vip manifest pod --arp --interface $(ip -4 -j route list default | jq -r .[0].dev) --address $CONTROL_PLANE_VIP --controlplane --leaderElection --k8sConfigPath $CONF_PATH > /etc/kubernetes/manifests/kube-vip.yaml From b04fc40a0a8e5d79166a0b99dea681f2d4c4b431 Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Mon, 10 Jun 2024 21:57:45 -0600 Subject: [PATCH 06/13] Add post create note on location of management and workload kubeconfigs Signed-off-by: Jacob Weinstock --- capt/Taskfile.yaml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/capt/Taskfile.yaml b/capt/Taskfile.yaml index 915578b7..fd6cbbf8 100644 --- a/capt/Taskfile.yaml +++ b/capt/Taskfile.yaml @@ -105,6 +105,8 @@ tasks: echo echo The workload cluster is now being created. echo Once the cluster nodes are up and running, you will need to deploy a CNI for the cluster to be fully functional. + echo The management cluster kubeconfig is located at: {{.KIND_KUBECONFIG}} + echo The workload cluster kubeconfig is located at: {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig echo echo 1.
Watch and wait for the first control plane node to be provisioned successfully: STATE_SUCCESS echo "KUBECONFIG={{.KIND_KUBECONFIG}} kubectl get workflows -n {{.NAMESPACE}} -w" From 666fb32cc140c86f047eadcfd68771d5397e75fb Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Wed, 26 Jun 2024 14:24:23 -0600 Subject: [PATCH 07/13] Update Helm chart version, add pause: Added a pause before applying all CAPI objects so that any User customizations that aren't supported in any Task can be applied. This is also the time where a custom build of the CAPT controller could be deployed, allowing for development of CAPT with the playground. Signed-off-by: Jacob Weinstock --- capt/config.yaml | 2 +- capt/tasks/Taskfile-create.yaml | 6 ++++++ 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/capt/config.yaml b/capt/config.yaml index edc29ac3..bffa45db 100644 --- a/capt/config.yaml +++ b/capt/config.yaml @@ -8,7 +8,7 @@ counts: spares: 1 versions: capt: 0.5.3 - chart: 0.4.4 + chart: 0.4.5 kube: v1.29.4 os: 20.04 kubevip: 0.8.0 diff --git a/capt/tasks/Taskfile-create.yaml b/capt/tasks/Taskfile-create.yaml index 88c6ba14..f655b673 100644 --- a/capt/tasks/Taskfile-create.yaml +++ b/capt/tasks/Taskfile-create.yaml @@ -24,9 +24,15 @@ tasks: - task: apply-bmc-machines - task: apply-hardware - task: capi:ordered + - task: allow-customization - task: create-workload-cluster - task: get-workload-cluster-kubeconfig + allow-customization: + prompt: The Workload cluster is ready to be provisioned. Execution is paused to allow for any User customizations. Press `y` to continue to Workload cluster creation. Press `n` to exit the whole process. 
+ cmds: + - echo 'Creating Workload cluster' + kind-cluster: run: once summary: | From 6981e7c1fdd35747d572cc2c3685eff06e37e162 Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Wed, 26 Jun 2024 21:12:11 -0600 Subject: [PATCH 08/13] Fix kustomize patch for HardwareAffinity: The patch for HardwareAffinity was being overridden by the patch for the templateOverride. This caused no HardwareAffinity to be in place and all nodes being used for any role. Signed-off-by: Jacob Weinstock --- capt/templates/kustomization.tmpl | 28 +++++----------------------- 1 file changed, 5 insertions(+), 23 deletions(-) diff --git a/capt/templates/kustomization.tmpl b/capt/templates/kustomization.tmpl index da20bae3..0931107e 100644 --- a/capt/templates/kustomization.tmpl +++ b/capt/templates/kustomization.tmpl @@ -18,29 +18,6 @@ patches: - labelSelector: matchLabels: tinkerbell.org/role: control-plane - - target: - group: infrastructure.cluster.x-k8s.io - kind: TinkerbellMachineTemplate - name: ".*worker.*" - version: v1beta1 - patch: |- - - op: add - path: /spec/template/spec - value: - hardwareAffinity: - required: - - labelSelector: - matchLabels: - tinkerbell.org/role: worker - - target: - group: infrastructure.cluster.x-k8s.io - kind: TinkerbellMachineTemplate - name: ".*control-plane.*" - version: v1beta1 - patch: |- - - op: add - path: /spec/template/spec - value: templateOverride: | version: "0.1" name: playground-template @@ -118,6 +95,11 @@ patches: - op: add path: /spec/template/spec value: + hardwareAffinity: + required: + - labelSelector: + matchLabels: + tinkerbell.org/role: worker templateOverride: | version: "0.1" name: playground-template From 2c184badba69d8e7bc8d3948d1b412d6b4bc37ef Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Thu, 27 Jun 2024 08:36:33 -0600 Subject: [PATCH 09/13] Move each playground and readme to its own directory: Signed-off-by: Jacob Weinstock --- README.md | 109 +----------------- capt/README.md | 63 ++++++++++ capt/config.yaml | 4 +-
stack/README.md | 40 +++++++ .../docs}/quickstarts/KUBERNETES.md | 0 .../docs}/quickstarts/VAGRANTLVIRT.md | 2 +- .../docs}/quickstarts/VAGRANTVBOX.md | 2 +- {vagrant => stack/vagrant}/.env | 0 {vagrant => stack/vagrant}/Vagrantfile | 0 {vagrant => stack/vagrant}/hardware.yaml | 0 {vagrant => stack/vagrant}/setup.sh | 0 {vagrant => stack/vagrant}/template.yaml | 0 .../vagrant}/ubuntu-download.yaml | 0 {vagrant => stack/vagrant}/workflow.yaml | 0 14 files changed, 110 insertions(+), 110 deletions(-) create mode 100644 capt/README.md create mode 100644 stack/README.md rename {docs => stack/docs}/quickstarts/KUBERNETES.md (100%) rename {docs => stack/docs}/quickstarts/VAGRANTLVIRT.md (99%) rename {docs => stack/docs}/quickstarts/VAGRANTVBOX.md (99%) rename {vagrant => stack/vagrant}/.env (100%) rename {vagrant => stack/vagrant}/Vagrantfile (100%) rename {vagrant => stack/vagrant}/hardware.yaml (100%) rename {vagrant => stack/vagrant}/setup.sh (100%) rename {vagrant => stack/vagrant}/template.yaml (100%) rename {vagrant => stack/vagrant}/ubuntu-download.yaml (100%) rename {vagrant => stack/vagrant}/workflow.yaml (100%) diff --git a/README.md b/README.md index 0ad27e32..c353569b 100644 --- a/README.md +++ b/README.md @@ -1,110 +1,7 @@ # Playground -This playground repository holds example deployments for use in learning and testing. +Welcome to the Tinkerbell Playground! This playground repository holds example deployments for use in learning and testing. The following playgrounds are available: -- [Tinkerbell stack playground](#tinkerbell-stack-playground) -- [Cluster API Provider Tinkerbell (CAPT) playground](#cluster-api-provider-tinkerbell-capt-playground) - -## Tinkerbell Stack Playground - -The following section containers the Tinkerbell stack playground instructions. It is not a production reference architecture. -Please use the [Helm chart](https://github.com/tinkerbell/charts) for production deployments. 
- -### Quick-Starts - -The following quick-start guides will walk you through standing up the Tinkerbell stack. -There are a few options for this. -Pick the one that works best for you. - -### Options - -- [Vagrant and VirtualBox](docs/quickstarts/VAGRANTVBOX.md) -- [Vagrant and Libvirt](docs/quickstarts/VAGRANTLVIRT.md) -- [Kubernetes](docs/quickstarts/KUBERNETES.md) - -### Next Steps - -By default the Vagrant quickstart guides automatically install Ubuntu on the VM (machine1). You can provide your own OS template. To do this: - -1. Login to the stack VM - - ```bash - vagrant ssh stack - ``` - -1. Add your template. An example Template object can be found [here](https://github.com/tinkerbell/tink/tree/main/config/crd/examples/template.yaml) and more Template documentation can be found [here](https://tinkerbell.org/docs/concepts/templates/). - - ```bash - kubectl apply -f my-OS-template.yaml - ``` - -1. Create the workflow. An example Workflow object can be found [here](https://github.com/tinkerbell/tink/tree/main/config/crd/examples/workflow.yaml). - - ```bash - kubectl apply -f my-custom-workflow.yaml - ``` - -1. Restart the machine to provision (if using the vagrant playground test machine this is done by running `vagrant destroy -f machine1 && vagrant up machine1`) - -## Cluster API Provider Tinkerbell (CAPT) Playground - -The Cluster API Provider Tinkerbell (CAPT) is a Kubernetes Cluster API provider that uses Tinkerbell to provision machines. You can find more information about CAPT [here](https://github.com/tinkerbell/cluster-api-provider-tinkerbell). The CAPT playground is an example deployment for use in learning and testing. It is not a production reference architecture. - -### Getting Started - -The CAPT playground is a tool that will create a local CAPT deployment and a single workload cluster. 
This includes creating and/or installing a Kubernetes cluster (KinD), the Tinkerbell stack, all CAPI and CAPT components, Virtual machines that will be used to create the workload cluster, and a Virtual BMC server to manage the VMs. - -Start by reviewing and installing the [prerequisites](#prerequisites) and understanding and customizing the [configuration file](./capt/config.yaml) as needed. - -### Prerequisites - -#### Binaries - -- [Libvirtd](https://wiki.debian.org/KVM) >= libvirtd (libvirt) 8.0.0 -- [Docker](https://docs.docker.com/engine/install/) >= 24.0.7 -- [Helm](https://helm.sh/docs/intro/install/) >= v3.13.1 -- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) >= v0.20.0 -- [clusterctl](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) >= v1.6.0 -- [kubectl](https://www.downloadkubernetes.com/) >= v1.28.2 -- [virt-install](https://virt-manager.org/) >= 4.0.0 -- [task](https://taskfile.dev/installation/) >= 3.37.2 - -#### Hardware - -- at least 60GB of free and very fast disk space (etcd is very disk I/O sensitive) -- at least 8GB of free RAM -- at least 4 CPU cores - -### Usage - -Create the CAPT playground: - -```bash -# Run the creation process and follow the outputted next steps at the end of the process. -task create-playground -``` - -Delete the CAPT playground: - -```bash -task delete-playground -``` - -### Known Issues - -#### DNS issue - -KinD on Ubuntu has a known issue with DNS resolution in KinD pod containers. This affect the Download of HookOS in the Tink stack helm deployment. There are a few [known workarounds](https://github.com/kubernetes-sigs/kind/issues/1594#issuecomment-629509450). The recommendation for the CAPT playground is to add a DNS nameservers to Docker's `daemon.json` file. 
This can be done by adding the following to `/etc/docker/daemon.json`: - -```json -{ - "dns": ["1.1.1.1"] -} -``` - -Then restart Docker: - -```bash -sudo systemctl restart docker -``` +- [Tinkerbell stack playground](stack/README.md) +- [Cluster API Provider Tinkerbell (CAPT) playground](capt/README.md) diff --git a/capt/README.md b/capt/README.md new file mode 100644 index 00000000..53885ff9 --- /dev/null +++ b/capt/README.md @@ -0,0 +1,63 @@ +# Cluster API Provider Tinkerbell (CAPT) Playground + +The Cluster API Provider Tinkerbell (CAPT) is a Kubernetes Cluster API provider that uses Tinkerbell to provision machines. You can find more information about CAPT [here](https://github.com/tinkerbell/cluster-api-provider-tinkerbell). The CAPT playground is an example deployment for use in learning and testing. It is not a production reference architecture. + +## Getting Started + +The CAPT playground is a tool that will create a local CAPT deployment and a single workload cluster. This includes creating and installing a Kubernetes cluster (KinD), the Tinkerbell stack, all CAPI and CAPT components, Virtual machines that will be used to create the workload cluster, and a Virtual BMC server to manage the VMs. + +Start by reviewing and installing the [prerequisites](#prerequisites) and understanding and customizing the [configuration file](./capt/config.yaml) as needed. 
+ +## Prerequisites + +### Binaries + +- [Libvirtd](https://wiki.debian.org/KVM) >= libvirtd (libvirt) 8.0.0 +- [Docker](https://docs.docker.com/engine/install/) >= 24.0.7 +- [Helm](https://helm.sh/docs/intro/install/) >= v3.13.1 +- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) >= v0.20.0 +- [clusterctl](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) >= v1.6.0 +- [kubectl](https://www.downloadkubernetes.com/) >= v1.28.2 +- [virt-install](https://virt-manager.org/) >= 4.0.0 +- [task](https://taskfile.dev/installation/) >= 3.37.2 + +### Hardware + +- at least 60GB of free and very fast disk space (etcd is very disk I/O sensitive) +- at least 8GB of free RAM +- at least 4 CPU cores + +## Usage + +Get started by looking at the [`config.yaml`](config.yaml) file. This file contains the configuration for the playground. You can customize the playground by changing the values in this file. We recommend you start with the defaults to get started and familiar with the playground before customizing. + +Create the CAPT playground: + +```bash +# Run the creation process and follow the outputted next steps at the end of the process. +task create-playground +``` + +Delete the CAPT playground: + +```bash +task delete-playground +``` + +## Known Issues + +### DNS issue + +KinD on Ubuntu has a known issue with DNS resolution in KinD pod containers. This affects the download of HookOS in the Tink stack Helm deployment. There are a few [known workarounds](https://github.com/kubernetes-sigs/kind/issues/1594#issuecomment-629509450). The recommendation for the CAPT playground is to add DNS nameservers to Docker's `daemon.json` file.
This can be done by adding the following to `/etc/docker/daemon.json`: + +```json +{ + "dns": ["1.1.1.1"] +} +``` + +Then restart Docker: + +```bash +sudo systemctl restart docker +``` diff --git a/capt/config.yaml b/capt/config.yaml index bffa45db..9cd0255f 100644 --- a/capt/config.yaml +++ b/capt/config.yaml @@ -5,11 +5,11 @@ namespace: "tink" counts: controlPlanes: 1 workers: 1 - spares: 1 + spares: 3 versions: capt: 0.5.3 chart: 0.4.5 - kube: v1.29.4 + kube: v1.28.3 os: 20.04 kubevip: 0.8.0 os: diff --git a/stack/README.md b/stack/README.md new file mode 100644 index 00000000..870b694a --- /dev/null +++ b/stack/README.md @@ -0,0 +1,40 @@ +## Tinkerbell Stack Playground + +The following section contains the Tinkerbell stack playground instructions. It is not a production reference architecture. +Please use the [Helm chart](https://github.com/tinkerbell/charts) for production deployments. + +## Quick-Starts + +The following quick-start guides will walk you through standing up the Tinkerbell stack. +There are a few options for this. +Pick the one that works best for you. + +## Options + +- [Vagrant and VirtualBox](docs/quickstarts/VAGRANTVBOX.md) +- [Vagrant and Libvirt](docs/quickstarts/VAGRANTLVIRT.md) +- [Kubernetes](docs/quickstarts/KUBERNETES.md) + +## Next Steps + +By default the Vagrant quickstart guides automatically install Ubuntu on the VM (machine1). You can provide your own OS template. To do this: + +1. Login to the stack VM + + ```bash + vagrant ssh stack + ``` + +1. Add your template. An example Template object can be found [here](https://github.com/tinkerbell/tink/tree/main/config/crd/examples/template.yaml) and more Template documentation can be found [here](https://tinkerbell.org/docs/concepts/templates/). + + ```bash + kubectl apply -f my-OS-template.yaml + ``` + +1. Create the workflow. An example Workflow object can be found [here](https://github.com/tinkerbell/tink/tree/main/config/crd/examples/workflow.yaml).
+ + ```bash + kubectl apply -f my-custom-workflow.yaml + ``` + +1. Restart the machine to provision (if using the vagrant playground test machine this is done by running `vagrant destroy -f machine1 && vagrant up machine1`) diff --git a/docs/quickstarts/KUBERNETES.md b/stack/docs/quickstarts/KUBERNETES.md similarity index 100% rename from docs/quickstarts/KUBERNETES.md rename to stack/docs/quickstarts/KUBERNETES.md diff --git a/docs/quickstarts/VAGRANTLVIRT.md b/stack/docs/quickstarts/VAGRANTLVIRT.md similarity index 99% rename from docs/quickstarts/VAGRANTLVIRT.md rename to stack/docs/quickstarts/VAGRANTLVIRT.md index eb78a430..54e710d3 100644 --- a/docs/quickstarts/VAGRANTLVIRT.md +++ b/stack/docs/quickstarts/VAGRANTLVIRT.md @@ -22,7 +22,7 @@ This option will also create a VM and provision an OS onto it. 1. Start the stack ```bash - cd vagrant + cd stack/vagrant vagrant up # This process will take about 5-10 minutes depending on your internet connection. # Hook is about 400MB in size and the Ubuntu jammy image is about 500MB diff --git a/docs/quickstarts/VAGRANTVBOX.md b/stack/docs/quickstarts/VAGRANTVBOX.md similarity index 99% rename from docs/quickstarts/VAGRANTVBOX.md rename to stack/docs/quickstarts/VAGRANTVBOX.md index b9b311a4..14c5484f 100644 --- a/docs/quickstarts/VAGRANTVBOX.md +++ b/stack/docs/quickstarts/VAGRANTVBOX.md @@ -21,7 +21,7 @@ This option will also create a VM and provision an OS onto it. 1. Start the stack ```bash - cd vagrant + cd stack/vagrant vagrant up # This process will take up to 10 minutes depending on your internet connection. # It will download HookOS, which is a couple hundred megabytes in size, and an Ubuntu cloud image, which is about 600MB. 
diff --git a/vagrant/.env b/stack/vagrant/.env similarity index 100% rename from vagrant/.env rename to stack/vagrant/.env diff --git a/vagrant/Vagrantfile b/stack/vagrant/Vagrantfile similarity index 100% rename from vagrant/Vagrantfile rename to stack/vagrant/Vagrantfile diff --git a/vagrant/hardware.yaml b/stack/vagrant/hardware.yaml similarity index 100% rename from vagrant/hardware.yaml rename to stack/vagrant/hardware.yaml diff --git a/vagrant/setup.sh b/stack/vagrant/setup.sh similarity index 100% rename from vagrant/setup.sh rename to stack/vagrant/setup.sh diff --git a/vagrant/template.yaml b/stack/vagrant/template.yaml similarity index 100% rename from vagrant/template.yaml rename to stack/vagrant/template.yaml diff --git a/vagrant/ubuntu-download.yaml b/stack/vagrant/ubuntu-download.yaml similarity index 100% rename from vagrant/ubuntu-download.yaml rename to stack/vagrant/ubuntu-download.yaml diff --git a/vagrant/workflow.yaml b/stack/vagrant/workflow.yaml similarity index 100% rename from vagrant/workflow.yaml rename to stack/vagrant/workflow.yaml From 3dcfc588a57d268ec78d498df910b790221e474b Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Thu, 27 Jun 2024 08:39:23 -0600 Subject: [PATCH 10/13] Update CAPT readme Signed-off-by: Jacob Weinstock --- capt/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/capt/README.md b/capt/README.md index 53885ff9..f09b848f 100644 --- a/capt/README.md +++ b/capt/README.md @@ -29,7 +29,7 @@ Start by reviewing and installing the [prerequisites](#prerequisites) and unders ## Usage -Get started by looking at the [`config.yaml`](config.yaml) file. This file contains the configuration for the playground. You can customize the playground by changing the values in this file. We recommend you start with the defaults to get started and familiar with the playground before customizing. +Get started ensuring all the dependencies are installed. Then look at the [`config.yaml`](config.yaml) file. 
This file contains the configuration for the playground. You can customize the playground by changing the values in this file. We recommend you start with the defaults to get familiar with the playground before customizing. Create the CAPT playground: From f52e0883024ab67bcae7dd3e381ae14180e7c546 Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Thu, 27 Jun 2024 08:40:16 -0600 Subject: [PATCH 11/13] Update CAPT readme Signed-off-by: Jacob Weinstock --- capt/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/capt/README.md b/capt/README.md index f09b848f..7a16f95e 100644 --- a/capt/README.md +++ b/capt/README.md @@ -29,7 +29,7 @@ Start by reviewing and installing the [prerequisites](#prerequisites) and unders ## Usage -Get started ensuring all the dependencies are installed. Then look at the [`config.yaml`](config.yaml) file. This file contains the configuration for the playground. You can customize the playground by changing the values in this file. We recommend you start with the defaults to get familiar with the playground before customizing. +Get started by ensuring all the dependencies are installed and you have the required hardware. Then look at the [`config.yaml`](config.yaml) file. This file contains the configuration for the playground. You can customize the playground by changing the values in this file. We recommend you start with the defaults to get familiar with the playground before customizing. 
Create the CAPT playground: From 40655c07f0e8ef579a48daf9980c6870cc3e6c9d Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Thu, 27 Jun 2024 08:46:34 -0600 Subject: [PATCH 12/13] Update CAPT readme with next steps scaffolding Signed-off-by: Jacob Weinstock --- capt/README.md | 28 ++++++++++++++++++++++++++-- 1 file changed, 26 insertions(+), 2 deletions(-) diff --git a/capt/README.md b/capt/README.md index 7a16f95e..b9744b13 100644 --- a/capt/README.md +++ b/capt/README.md @@ -6,7 +6,7 @@ The Cluster API Provider Tinkerbell (CAPT) is a Kubernetes Cluster API provider The CAPT playground is a tool that will create a local CAPT deployment and a single workload cluster. This includes creating and installing a Kubernetes cluster (KinD), the Tinkerbell stack, all CAPI and CAPT components, Virtual machines that will be used to create the workload cluster, and a Virtual BMC server to manage the VMs. -Start by reviewing and installing the [prerequisites](#prerequisites) and understanding and customizing the [configuration file](./capt/config.yaml) as needed. +Start by reviewing and installing the [prerequisites](#prerequisites) and understanding and customizing the [configuration file](./config.yaml) as needed. ## Prerequisites @@ -29,7 +29,7 @@ Start by reviewing and installing the [prerequisites](#prerequisites) and unders ## Usage -Get started by ensuring all the dependencies are installed and you have the required hardware. Then look at the [`config.yaml`](config.yaml) file. This file contains the configuration for the playground. You can customize the playground by changing the values in this file. We recommend you start with the defaults to get familiar with the playground before customizing. +Start by looking at the [`config.yaml`](./config.yaml) file. This file contains the configuration for the playground. You can customize the playground by changing the values in this file. 
We recommend you start with the defaults to get familiar with the playground before customizing. Create the CAPT playground: @@ -44,6 +44,30 @@ Delete the CAPT playground: task delete-playground ``` +## Next Steps + +With the playground up and running and a workload cluster created, you can run through a few CAPI lifecycle operations. + +### Move/pivot the Tinkerbell stack and CAPI/CAPT components to a workload cluster + +To be written. + +### Upgrade the management cluster + +To be written. + +### Upgrade the workload cluster + +To be written. + +### Scale out the workload cluster + +To be written. + +### Scale in the workload cluster + +To be written. + ## Known Issues ### DNS issue From 25bccc4c3303a9caf06898774382c2774e36ef18 Mon Sep 17 00:00:00 2001 From: Jacob Weinstock Date: Thu, 27 Jun 2024 08:49:07 -0600 Subject: [PATCH 13/13] Update the linting location for vagrant Signed-off-by: Jacob Weinstock --- .github/workflows/ci-non-go.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/ci-non-go.sh b/.github/workflows/ci-non-go.sh index 40d3d472..06ac9911 100755 --- a/.github/workflows/ci-non-go.sh +++ b/.github/workflows/ci-non-go.sh @@ -18,7 +18,7 @@ if ! shfmt -f . | xargs shfmt -s -l -d; then failed=1 fi -if ! rufo vagrant/Vagrantfile; then +if ! rufo stack/vagrant/Vagrantfile; then failed=1 fi