From 3ae07fb435a29ffca331002962af46ff7a9e255e Mon Sep 17 00:00:00 2001
From: Saurabh Parekh
Date: Mon, 29 Apr 2024 18:33:08 -0700
Subject: [PATCH] Fix htmltest errors for docs presubmit prow job

---
 .../docs/getting-started/baremetal/bare-spec.md  | 10 +++++-----
 .../docs/getting-started/vsphere/vsphere-spec.md | 16 ++++++++--------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/docs/content/en/docs/getting-started/baremetal/bare-spec.md b/docs/content/en/docs/getting-started/baremetal/bare-spec.md
index f1350210699e4..7024f63fb3131 100644
--- a/docs/content/en/docs/getting-started/baremetal/bare-spec.md
+++ b/docs/content/en/docs/getting-started/baremetal/bare-spec.md
@@ -177,7 +177,7 @@ The Kubernetes version you want to use for your cluster. Supported values: `1.28
 Identifies the name of the management cluster.
 If your cluster spec is for a standalone or management cluster, this value is the same as the cluster name.
 
-### workerNodeGroupConfigurations (optional)
+### workerNodeGroupConfigurations
 This takes in a list of node groups that you can define for your workers.
 You can omit `workerNodeGroupConfigurations` when creating Bare Metal clusters. If you omit `workerNodeGroupConfigurations`, control plane nodes will not be tainted and all pods will run on the control plane nodes. This mechanism can be used to deploy Bare Metal clusters on a single server. You can also run multi-node Bare Metal clusters without `workerNodeGroupConfigurations`.
 
@@ -260,17 +260,17 @@ Example: When this is set to n, the old worker node group can be scaled down by
 
 ## TinkerbellDatacenterConfig Fields
 
-### tinkerbellIP (required)
+### tinkerbellIP
 Required field to identify the IP address of the Tinkerbell service.
 This IP address must be a unique IP in the network range that does not conflict with other IPs.
 Once the Tinkerbell services move from the Admin machine to run on the target cluster, this IP address makes it possible for the stack to be used for future provisioning needs.
 When separate management and workload clusters are supported in Bare Metal, the IP address becomes a necessity.
 
-### osImageURL (optional)
+### osImageURL
 Optional field to replace the default Bottlerocket operating system. EKS Anywhere can only auto-import Bottlerocket. In order to use Ubuntu or RHEL see [building baremetal node images]({{< relref "../../osmgmt/artifacts/#build-bare-metal-node-images" >}}). This field is also useful if you want to provide a customized operating system image or simply host the standard image locally. To upgrade a node or group of nodes to a new operating system version (ie. RHEL 8.7 to RHEL 8.8), modify this field to point to the new operating system image URL and run [upgrade cluster command]({{< relref "../../clustermgmt/cluster-upgrades/baremetal-upgrades/#upgrade-cluster-command" >}}).
 The `osImageURL` must contain the `Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the `osImageURL` name should include 1.24, 1_24, 1-24 or 124.
 
-### hookImagesURLPath (optional)
+### hookImagesURLPath
 Optional field to replace the HookOS image.
 This field is useful if you want to provide a customized HookOS image or simply host the standard image locally.
 See [Artifacts]({{< relref "../../osmgmt/artifacts/#hookos-kernel-and-initial-ramdisk-for-bare-metal" >}}) for details.
@@ -303,7 +303,7 @@ In the example, there are `TinkerbellMachineConfig` sections for control plane (
 The following fields identify information needed to configure the nodes in each of those groups.
 >**_NOTE:_** Currently, you can only have one machine group for all machines in the control plane, although you can have multiple machine groups for the workers.
 >
-### hardwareSelector (optional)
+### hardwareSelector
 Use fields under `hardwareSelector` to add key/value pair labels to match particular machines that you identified in the CSV file where you defined the machines in your cluster.
 Choose any label name you like.
 For example, if you had added the label `node=cp-machine` to the machines listed in your CSV file that you want to be control plane nodes, the following `hardwareSelector` field would cause those machines to be added to the control plane:
diff --git a/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md b/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
index 664ad59545c36..e857115187212 100644
--- a/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
+++ b/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
@@ -137,7 +137,7 @@ range that does not conflict with other VMs.
 the control plane nodes for kube-apiserver loadbalancing.
 Suggestions on how to ensure this IP does not cause issues during cluster creation process are [here]({{< relref "../vsphere/vsphere-prereq/#prepare-a-vmware-vsphere-environment" >}})
 
-### controlPlaneConfiguration.taints (optional)
+### controlPlaneConfiguration.taints
 A list of taints to apply to the control plane nodes of the cluster.
 Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces `node-role.kubernetes.io/master`. For k8s versions 1.24+, it replaces `node-role.kubernetes.io/control-plane`. The default control plane components will tolerate the provided taints.
 
@@ -148,7 +148,7 @@ Modifying the taints associated with the control plane configuration will cause
 Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
 >
 
-### controlPlaneConfiguration.labels (optional)
+### controlPlaneConfiguration.labels
 A list of labels to apply to the control plane nodes of the cluster.
 This is in addition to the labels that EKS Anywhere will add by default.
 
@@ -159,7 +159,7 @@ the existing nodes.
 This takes in a list of node groups that you can define for your workers.
 You may define one or more worker node groups.
 
-### workerNodeGroupConfigurations.count (required)
+### workerNodeGroupConfigurations.count
 Number of worker nodes. Optional if the [cluster autoscaler curated package]({{< relref "../../packages/cluster-autoscaler/addclauto" >}}) is installed and autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
 
 Refers to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -176,14 +176,14 @@ Minimum number of nodes for this node group's autoscaling configuration.
 ### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
 Maximum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.taints (optional)
+### workerNodeGroupConfigurations.taints
 A list of taints to apply to the nodes in the worker node group.
 
 Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
 
 At least one node group must **NOT** have `NoSchedule` or `NoExecute` taints applied to it.
 
-### workerNodeGroupConfigurations.labels (optional)
+### workerNodeGroupConfigurations.labels
 A list of labels to apply to the nodes in the worker node group.
 This is in addition to the labels that EKS Anywhere will add by default.
 
@@ -195,13 +195,13 @@ The Kubernetes version you want to use for this worker node group. [Supported va
 Must be less than or equal to the cluster `kubernetesVersion` defined at the root level of the cluster spec.
 The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane's Kubernetes version.
 Removing `workerNodeGroupConfiguration.kubernetesVersion` will trigger an upgrade of the node group to the `kubernetesVersion` defined at the root level of the cluster spec.
 
-### externalEtcdConfiguration.count (optional)
+### externalEtcdConfiguration.count
 Number of etcd members
 
-### externalEtcdConfiguration.machineGroupRef (optional)
+### externalEtcdConfiguration.machineGroupRef
 Refers to the Kubernetes object with vsphere specific configuration for your etcd members. See [VSphereMachineConfig Fields](#vspheremachineconfig-fields) below.
 
-### datacenterRef (required)
+### datacenterRef
 Refers to the Kubernetes object with vsphere environment specific configuration. See [VSphereDatacenterConfig Fields](#vspheredatacenterconfig-fields) below.
 
 ### kubernetesVersion (required)
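The `hardwareSelector` hunk stops at the sentence that introduces the doc's own example, which lies outside the patch context. As an illustrative sketch only (not part of the patch; the resource name `my-cluster-cp` is a hypothetical placeholder), a `TinkerbellMachineConfig` that selects the machines labeled `node=cp-machine` in the hardware CSV could look like:

```yaml
# Sketch only — not part of the patch above; "my-cluster-cp" is a hypothetical name.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-cp
spec:
  # Matches machines that were given the label node=cp-machine in the
  # hardware CSV, selecting them for the control plane machine group.
  hardwareSelector:
    node: cp-machine
  osFamily: bottlerocket
```

The heading renames above only drop the `(optional)`/`(required)` suffixes, so the generated anchors (e.g. `#hardwareselector`) change while the field semantics described in the surrounding text stay the same.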