diff --git a/docs/content/en/docs/clustermgmt/cluster-upgrades/airgapped-upgrades.md b/docs/content/en/docs/clustermgmt/cluster-upgrades/airgapped-upgrades.md
index b9852a228dbe..207ee1f60001 100644
--- a/docs/content/en/docs/clustermgmt/cluster-upgrades/airgapped-upgrades.md
+++ b/docs/content/en/docs/clustermgmt/cluster-upgrades/airgapped-upgrades.md
@@ -22,7 +22,7 @@ The procedure to upgrade EKS Anywhere clusters in airgapped environments is simi
If the previous steps succeeded, all of the required EKS Anywhere dependencies are now present in your local registry. Before you upgrade your EKS Anywhere cluster, configure `registryMirrorConfiguration` in your EKS Anywhere cluster specification with the information for your local registry. For details see the [Registry Mirror Configuration documentation.]({{< relref "../../getting-started/optional/registrymirror/#registry-mirror-cluster-spec" >}})
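+For example, a minimal sketch of the mirror settings (endpoint and port values are illustrative; see the linked Registry Mirror Configuration documentation for the full set of fields):
+```yaml
+registryMirrorConfiguration:
+  endpoint: registry.example.com   # illustrative local registry host
+  port: 443                        # illustrative
+```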
->**_NOTE:_** If you are running EKS Anywhere on bare metal, you must configure `osImageURL` and `hookImagesURLPath` in your EKS Anywhere cluster specification with the location of the upgraded node operating system image and hook OS image. For details, reference the [bare metal configuration documentation.]({{< relref "../../getting-started/baremetal/bare-spec/#osimageurl" >}})
+>**_NOTE:_** If you are running EKS Anywhere on bare metal, you must configure `osImageURL` and `hookImagesURLPath` in your EKS Anywhere cluster specification with the location of the upgraded node operating system image and hook OS image. For details, reference the [bare metal configuration documentation.]({{< relref "../../getting-started/baremetal/bare-spec/#osimageurl-optional" >}})
### Next Steps
- [Build upgraded node operating system images for your cluster]({{< relref "../../osmgmt/artifacts/#building-images-for-a-specific-eks-anywhere-version" >}})
diff --git a/docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md b/docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md
index aed83dd67c67..d2551f2f4a42 100755
--- a/docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md
+++ b/docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md
@@ -108,7 +108,7 @@ spec:
...
```
->**_NOTE:_** If you have a custom machine image for your nodes in your cluster config yaml or to upgrade a node or group of nodes to a new operating system version (ie. RHEL 8.7 to RHEL 8.8), you may also need to update your [`TinkerbellDatacenterConfig`]({{< relref "../../getting-started/baremetal/bare-spec/#tinkerbelldatacenterconfig-fields" >}}) or [`TinkerbellMachineConfig`]({{< relref "../../getting-started/baremetal/bare-spec/#tinkerbellmachineconfig-fields" >}}) with the new operating system image URL [`osImageURL`]({{< relref "../../getting-started/baremetal/bare-spec/#osimageurl" >}}).
+>**_NOTE:_** If you have a custom machine image for your nodes in your cluster config YAML, or if you want to upgrade a node or group of nodes to a new operating system version (e.g., RHEL 8.7 to RHEL 8.8), you may also need to update your [`TinkerbellDatacenterConfig`]({{< relref "../../getting-started/baremetal/bare-spec/#tinkerbelldatacenterconfig-fields" >}}) or [`TinkerbellMachineConfig`]({{< relref "../../getting-started/baremetal/bare-spec/#tinkerbellmachineconfig-fields" >}}) with the new operating system image URL [`osImageURL`]({{< relref "../../getting-started/baremetal/bare-spec/#osimageurl-optional" >}}).
and then you will run the [upgrade cluster command]({{< relref "baremetal-upgrades/#upgrade-cluster-command" >}}).
diff --git a/docs/content/en/docs/getting-started/_configuration/cluster_clusterNetwork.html b/docs/content/en/docs/getting-started/_configuration/cluster_clusterNetwork.html
index a3f29b337ea7..6e287ea908a5 100644
--- a/docs/content/en/docs/getting-started/_configuration/cluster_clusterNetwork.html
+++ b/docs/content/en/docs/getting-started/_configuration/cluster_clusterNetwork.html
@@ -8,29 +8,29 @@
### clusterNetwork.cniConfig (required)
CNI plugin configuration. Supports `cilium`.
-### clusterNetwork.cniConfig.cilium.policyEnforcementMode
+### clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)
Optionally specify a policyEnforcementMode of `default`, `always` or `never`.
-### clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces
+### clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)
Optionally specify a network interface name or interface prefix used for
masquerading. See EgressMasqueradeInterfaces
option.
-### clusterNetwork.cniConfig.cilium.skipUpgrade
+### clusterNetwork.cniConfig.cilium.skipUpgrade (optional)
When true, skip Cilium maintenance during upgrades. Also see Use a custom
CNI.
-### clusterNetwork.cniConfig.cilium.routingMode
+### clusterNetwork.cniConfig.cilium.routingMode (optional)
Optionally specify the routing mode. Accepts `default` and `direct`. Also see RoutingMode
option.
-### clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR
+### clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)
Optionally specify the CIDR to use when RoutingMode is set to direct.
When specified, Cilium assumes networking for this CIDR is preconfigured and
hands traffic destined for that range to the Linux network stack without
applying any SNAT.
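+For example, a minimal sketch of these Cilium options in the cluster spec (the CIDR value is illustrative):
+```yaml
+clusterNetwork:
+  cniConfig:
+    cilium:
+      policyEnforcementMode: always        # default, always, or never
+      routingMode: direct                  # default or direct
+      ipv4NativeRoutingCIDR: 10.0.0.0/8    # illustrative; used when routingMode is direct
+```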
-### clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR
+### clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)
Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct.
When specified, Cilium assumes networking for this CIDR is preconfigured and
hands traffic destined for that range to the Linux network stack without
diff --git a/docs/content/en/docs/getting-started/airgapped/_index.md b/docs/content/en/docs/getting-started/airgapped/_index.md
index e2de6d3a5926..b11a375786f7 100644
--- a/docs/content/en/docs/getting-started/airgapped/_index.md
+++ b/docs/content/en/docs/getting-started/airgapped/_index.md
@@ -39,7 +39,7 @@ The process for preparing your airgapped environment for EKS Anywhere is summari
If the previous steps succeeded, all of the required EKS Anywhere dependencies are now present in your local registry. Before you create your EKS Anywhere cluster, configure `registryMirrorConfiguration` in your EKS Anywhere cluster specification with the information for your local registry. For details see the [Registry Mirror Configuration documentation.]({{< relref "../../getting-started/optional/registrymirror/#registry-mirror-cluster-spec" >}})
->**_NOTE:_** If you are running EKS Anywhere on bare metal, you must configure `osImageURL` and `hookImagesURLPath` in your EKS Anywhere cluster specification with the location of your node operating system image and the hook OS image. For details, reference the [bare metal configuration documentation.]({{< relref "../baremetal/bare-spec/#osimageurl" >}})
+>**_NOTE:_** If you are running EKS Anywhere on bare metal, you must configure `osImageURL` and `hookImagesURLPath` in your EKS Anywhere cluster specification with the location of your node operating system image and the hook OS image. For details, reference the [bare metal configuration documentation.]({{< relref "../baremetal/bare-spec/#osimageurl-optional" >}})
### Next Steps
- Review EKS Anywhere [cluster networking requirements]({{< relref "../ports" >}})
diff --git a/docs/content/en/docs/getting-started/baremetal/bare-spec.md b/docs/content/en/docs/getting-started/baremetal/bare-spec.md
index 8e5a9d05515d..130ad3f04c8e 100644
--- a/docs/content/en/docs/getting-started/baremetal/bare-spec.md
+++ b/docs/content/en/docs/getting-started/baremetal/bare-spec.md
@@ -121,7 +121,7 @@ the control plane nodes for kube-apiserver loadbalancing.
### controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See `TinkerbellMachineConfig Fields` below.
-### controlPlaneConfiguration.taints
+### controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint (For k8s versions prior to 1.24, `node-role.kubernetes.io/master`. For k8s versions 1.24+, `node-role.kubernetes.io/control-plane`). The default control plane components will tolerate the provided taints.
@@ -132,29 +132,29 @@ Modifying the taints associated with the control plane configuration will cause
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
>
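+For example (key and value are illustrative), control plane taints are expressed as a list of key/value/effect entries:
+```yaml
+controlPlaneConfiguration:
+  taints:
+  - key: "key1"
+    value: "value1"
+    effect: "NoSchedule"
+```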
-### controlPlaneConfiguration.labels
+### controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.
-#### controlPlaneConfiguration.upgradeRolloutStrategy
+#### controlPlaneConfiguration.upgradeRolloutStrategy (optional)
Configuration parameters for upgrade strategy.
-#### controlPlaneConfiguration.upgradeRolloutStrategy.type
+#### controlPlaneConfiguration.upgradeRolloutStrategy.type (optional)
Default: `RollingUpdate`
Type of rollout strategy. Supported values: `RollingUpdate`,`InPlace`.
>**_NOTE:_** The upgrade rollout strategy type must be the same for all control plane and worker nodes.
-#### controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate
+#### controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate (optional)
Configuration parameters for customizing rolling upgrade behavior.
>**_NOTE:_** The rolling update parameters can only be configured if `upgradeRolloutStrategy.type` is `RollingUpdate`.
-#### controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate.maxSurge
+#### controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)
Default: 1
This can not be 0 if maxUnavailable is 0.
@@ -163,27 +163,27 @@ The maximum number of machines that can be scheduled above the desired number of
Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts. Total number of machines in the cluster (old + new) never exceeds (desired number of machines + n). Once scale down happens and old machines are brought down, the new worker node group can be scaled up further ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.
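+As an illustrative sketch (the count shown is the documented default), a rolling-update strategy for the control plane can be expressed as:
+```yaml
+controlPlaneConfiguration:
+  upgradeRolloutStrategy:
+    type: RollingUpdate      # RollingUpdate (default) or InPlace
+    rollingUpdate:
+      maxSurge: 1            # default: 1; cannot be 0 if maxUnavailable is 0
+```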
-### controlPlaneConfiguration.skipLoadBalancerDeployment
+### controlPlaneConfiguration.skipLoadBalancerDeployment (optional)
Optional field to skip deploying the control plane load balancer. Make sure your infrastructure can handle control plane load balancing when you set this field to true. In most cases, you should not set this field to true.
-### datacenterRef
+### datacenterRef (required)
Refers to the Kubernetes object with Tinkerbell-specific configuration. See `TinkerbellDatacenterConfig Fields` below.
### kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: `1.28`, `1.27`, `1.26`, `1.25`, `1.24`
-### managementCluster
+### managementCluster (required)
Identifies the name of the management cluster.
If your cluster spec is for a standalone or management cluster, this value is the same as the cluster name.
-### workerNodeGroupConfigurations
+### workerNodeGroupConfigurations (optional)
This takes in a list of node groups that you can define for your workers.
You can omit `workerNodeGroupConfigurations` when creating Bare Metal clusters. If you omit `workerNodeGroupConfigurations`, control plane nodes will not be tainted and all pods will run on the control plane nodes. This mechanism can be used to deploy Bare Metal clusters on a single server. You can also run multi-node Bare Metal clusters without `workerNodeGroupConfigurations`.
>**_NOTE:_** Empty `workerNodeGroupConfigurations` is not supported when Kubernetes version <= 1.21.
-### workerNodeGroupConfigurations.count
+### workerNodeGroupConfigurations.count (optional)
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
Refers to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -194,52 +194,52 @@ Refers to the Kubernetes object with Tinkerbell-specific configuration for your
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)
-### workerNodeGroupConfigurations.autoscalingConfiguration
+### workerNodeGroupConfigurations.autoscalingConfiguration (optional)
Configuration parameters for Cluster Autoscaler.
>**_NOTE:_** Autoscaling configuration is not supported when using the `InPlace` upgrade rollout strategy.
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group's autoscaling configuration.
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group's autoscaling configuration.
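+For example (counts are illustrative), autoscaling bounds for a worker node group are set as:
+```yaml
+workerNodeGroupConfigurations:
+- name: md-0
+  autoscalingConfiguration:
+    minCount: 1
+    maxCount: 5
+```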
-### workerNodeGroupConfigurations.taints
+### workerNodeGroupConfigurations.taints (optional)
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.
-### workerNodeGroupConfigurations.labels
+### workerNodeGroupConfigurations.labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.
-### workerNodeGroupConfigurations.kubernetesVersion
+### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. [Supported values]({{< relref "../../concepts/support-versions/#kubernetes-versions" >}}): `1.28`, `1.27`, `1.26`, `1.25`, `1.24`
Must be less than or equal to the cluster `kubernetesVersion` defined at the root level of the cluster spec. The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane's Kubernetes version. Removing `workerNodeGroupConfiguration.kubernetesVersion` will trigger an upgrade of the node group to the `kubernetesVersion` defined at the root level of the cluster spec.
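+For example (versions are illustrative), a worker node group can stay on an older Kubernetes version than the control plane during a modular upgrade:
+```yaml
+kubernetesVersion: "1.28"
+workerNodeGroupConfigurations:
+- name: md-0
+  kubernetesVersion: "1.27"   # no more than two minor versions below the cluster-level version
+```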
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy (optional)
Configuration parameters for upgrade strategy.
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.type
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy.type (optional)
Default: `RollingUpdate`
Type of rollout strategy. Supported values: `RollingUpdate`,`InPlace`.
>**_NOTE:_** The upgrade rollout strategy type must be the same for all control plane and worker nodes.
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate (optional)
Configuration parameters for customizing rolling upgrade behavior.
>**_NOTE:_** The rolling update parameters can only be configured if `upgradeRolloutStrategy.type` is `RollingUpdate`.
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxSurge
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)
Default: 1
This can not be 0 if maxUnavailable is 0.
@@ -248,7 +248,7 @@ The maximum number of machines that can be scheduled above the desired number of
Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts. Total number of machines in the cluster (old + new) never exceeds (desired number of machines + n). Once scale down happens and old machines are brought down, the new worker node group can be scaled up further ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxUnavailable
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxUnavailable (optional)
Default: 0
This can not be 0 if MaxSurge is 0.
@@ -259,17 +259,17 @@ Example: When this is set to n, the old worker node group can be scaled down by
## TinkerbellDatacenterConfig Fields
-### tinkerbellIP
+### tinkerbellIP (required)
Required field to identify the IP address of the Tinkerbell service.
This IP address must be a unique IP in the network range that does not conflict with other IPs.
Once the Tinkerbell services move from the Admin machine to run on the target cluster, this IP address makes it possible for the stack to be used for future provisioning needs.
When separate management and workload clusters are supported in Bare Metal, the IP address becomes a necessity.
-### osImageURL
+### osImageURL (optional)
Optional field to replace the default Bottlerocket operating system. EKS Anywhere can only auto-import Bottlerocket. In order to use Ubuntu or RHEL see [building baremetal node images]({{< relref "../../osmgmt/artifacts/#build-bare-metal-node-images" >}}). This field is also useful if you want to provide a customized operating system image or simply host the standard image locally. To upgrade a node or group of nodes to a new operating system version (ie. RHEL 8.7 to RHEL 8.8), modify this field to point to the new operating system image URL and run [upgrade cluster command]({{< relref "../../clustermgmt/cluster-upgrades/baremetal-upgrades/#upgrade-cluster-command" >}}).
The `osImageURL` must contain the `Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the `osImageURL` name should include 1.24, 1_24, 1-24 or 124.
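+For example (host and image name are illustrative; the image name must include the Kubernetes version as described above):
+```yaml
+spec:
+  tinkerbellIP: "10.10.10.10"                                      # illustrative
+  osImageURL: "http://my-web-server/ubuntu-v1.27.1-eks-a-amd64.gz" # illustrative custom image location
+```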
-### hookImagesURLPath
+### hookImagesURLPath (optional)
Optional field to replace the HookOS image.
This field is useful if you want to provide a customized HookOS image or simply host the standard image locally.
See [Artifacts]({{< relref "../../osmgmt/artifacts/#hookos-kernel-and-initial-ramdisk-for-bare-metal" >}}) for details.
@@ -290,19 +290,19 @@ my-web-server
└── ubuntu-v1.23.7-eks-a-12-amd64.gz
```
-### skipLoadBalancerDeployment
+### skipLoadBalancerDeployment (optional)
Optional field to skip deploying the default load balancer for Tinkerbell stack.
EKS Anywhere for Bare Metal uses `kube-vip` load balancer by default to expose the Tinkerbell stack externally.
You can disable this feature by setting this field to `true`.
->**_NOTE:_** If you skip load balancer deployment, you will have to ensure that the Tinkerbell stack is available at [tinkerbellIP]({{< relref "#tinkerbellip" >}}) once the cluster creation is finished. One way to achieve this is by using the [MetalLB]({{< relref "../../packages/metallb" >}}) package.
+>**_NOTE:_** If you skip load balancer deployment, you will have to ensure that the Tinkerbell stack is available at [tinkerbellIP]({{< relref "#tinkerbellip-required" >}}) once the cluster creation is finished. One way to achieve this is by using the [MetalLB]({{< relref "../../packages/metallb" >}}) package.
## TinkerbellMachineConfig Fields
In the example, there are `TinkerbellMachineConfig` sections for control plane (`my-cluster-name-cp`) and worker (`my-cluster-name`) machine groups.
The following fields identify information needed to configure the nodes in each of those groups.
>**_NOTE:_** Currently, you can only have one machine group for all machines in the control plane, although you can have multiple machine groups for the workers.
>
-### hardwareSelector
+### hardwareSelector (optional)
Use fields under `hardwareSelector` to add key/value pair labels to match particular machines that you identified in the CSV file where you defined the machines in your cluster.
Choose any label name you like.
For example, if you had added the label `node=cp-machine` to the machines listed in your CSV file that you want to be control plane nodes, the following `hardwareSelector` field would cause those machines to be added to the control plane:
@@ -331,7 +331,7 @@ See TinkerbellTemplateConfig fields below.
EKS Anywhere will generate default templates based on `osFamily` during the `create` command.
You can override this default template by providing your own template here.
-### users
+### users (optional)
The name of the user you want to configure to access your virtual machines through SSH.
The default is `ec2-user`.
@@ -471,7 +471,7 @@ spec:
Pay special attention to the `BOOTCONFIG_CONTENTS` environment section below if you wish to set up console redirection for the kernel and systemd.
If you are only using a direct attached monitor as your primary display device, no additional configuration is needed here.
-However, if you need all boot output to be shown via a server’s serial console for example, extra configuration should be provided inside `BOOTCONFIG_CONTENTS`.
+However, if you need all boot output to be shown via a server's serial console, for example, extra configuration should be provided inside `BOOTCONFIG_CONTENTS`.
An empty `kernel {}` key is provided below in the example; inside this key is where you will specify your console devices.
You may specify multiple comma delimited console devices in quotes to a console key as such: `console = "tty0", "ttyS0,115200n8"`.
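+For example, to send boot output to both the primary display and a serial console, the `kernel` key inside `BOOTCONFIG_CONTENTS` could be populated as follows (device names are illustrative):
+```
+kernel {
+    console = "tty0", "ttyS0,115200n8"
+}
+```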
diff --git a/docs/content/en/docs/getting-started/baremetal/baremetal-getstarted.md b/docs/content/en/docs/getting-started/baremetal/baremetal-getstarted.md
index 1728608f9cb0..e2c45c3ef580 100644
--- a/docs/content/en/docs/getting-started/baremetal/baremetal-getstarted.md
+++ b/docs/content/en/docs/getting-started/baremetal/baremetal-getstarted.md
@@ -213,7 +213,7 @@ Follow these steps if you want to use your initial cluster to create and manage
> ```
> * For creating multiple workload clusters, it is essential that the hardware labels and selectors defined for a given workload cluster are unique to that workload cluster. For instance, for an EKS Anywhere cluster named `eksa-workload1`, the hardware that is assigned for this cluster should have labels that are only going to be used for this cluster like `type=eksa-workload1-cp` and `type=eksa-workload1-worker`.
Another workload cluster named `eksa-workload2` can have labels like `type=eksa-workload2-cp` and `type=eksa-workload2-worker`. Please note that even though labels can be arbitrary, they need to be unique for each workload cluster. Not specifying unique cluster labels can cause cluster creations to behave in unexpected ways which may lead to unsuccessful creations and unstable clusters.
- See the [hardware selectors]({{< relref "./bare-spec/#hardwareselector" >}}) section for more information
+ See the [hardware selectors]({{< relref "./bare-spec/#hardwareselector-optional" >}}) section for more information
1. Check the workload cluster:
diff --git a/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md b/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
index f2c82abdbd46..1d83b2594b79 100644
--- a/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
+++ b/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
@@ -189,7 +189,7 @@ creation process are [here]({{< relref "./cloudstack-prereq/." >}})
### controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See `CloudStackMachineConfig Fields` below.
-### controlPlaneConfiguration.taints
+### controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint, `node-role.kubernetes.io/master`. The default control plane components will tolerate the provided taints.
@@ -200,7 +200,7 @@ Modifying the taints associated with the control plane configuration will cause
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
>
-### controlPlaneConfiguration.labels
+### controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.
@@ -214,13 +214,13 @@ The `ds.meta_data.failuredomain` value will be replaced with a failuredomain nam
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.
-### datacenterRef
+### datacenterRef (required)
Refers to the Kubernetes object with CloudStack environment specific configuration. See `CloudStackDatacenterConfig Fields` below.
-### externalEtcdConfiguration.count
+### externalEtcdConfiguration.count (optional)
Number of etcd members
-### externalEtcdConfiguration.machineGroupRef
+### externalEtcdConfiguration.machineGroupRef (optional)
Refers to the Kubernetes object with CloudStack specific configuration for your etcd members. See `CloudStackMachineConfig Fields` below.
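+For example (count and machine group name are illustrative), an external etcd topology can be declared as:
+```yaml
+externalEtcdConfiguration:
+  count: 3
+  machineGroupRef:
+    kind: CloudStackMachineConfig
+    name: my-cluster-name-etcd
+```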
### kubernetesVersion (required)
@@ -234,7 +234,7 @@ If this is a standalone cluster or if it were serving as the management cluster
This takes in a list of node groups that you can define for your workers.
You may define one or more worker node groups.
-### workerNodeGroupConfigurations.count
+### workerNodeGroupConfigurations.count (required)
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
Refers to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -245,20 +245,20 @@ Refers to the Kubernetes object with CloudStack specific configuration for your
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group's autoscaling configuration.
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group's autoscaling configuration.
-### workerNodeGroupConfigurations.taints
+### workerNodeGroupConfigurations.taints (optional)
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.
-### workerNodeGroupConfigurations.labels
+### workerNodeGroupConfigurations.labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
A special label value is supported by the CAPC provider:
@@ -272,7 +272,7 @@ The `ds.meta_data.failuredomain` value will be replaced with a failuredomain nam
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.
-### workerNodeGroupConfigurations.kubernetesVersion
+### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
## CloudStackDatacenterConfig
diff --git a/docs/content/en/docs/getting-started/nutanix/nutanix-spec.md b/docs/content/en/docs/getting-started/nutanix/nutanix-spec.md
index efbdabdd17da..ac53f7c5a066 100644
--- a/docs/content/en/docs/getting-started/nutanix/nutanix-spec.md
+++ b/docs/content/en/docs/getting-started/nutanix/nutanix-spec.md
@@ -189,7 +189,7 @@ creation process are [here]({{< relref "./nutanix-prereq/#prepare-a-nutanix-envi
### workerNodeGroupConfigurations (required)
This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.
-### workerNodeGroupConfigurations.count
+### workerNodeGroupConfigurations.count (required)
Number of worker nodes. Optional if `autoscalingConfiguration` is used, in which case count will default to `autoscalingConfiguration.minCount`.
Refers to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -200,22 +200,22 @@ Refers to the Kubernetes object with Nutanix specific configuration for your nod
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: `md-0`)
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group’s autoscaling configuration.
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group’s autoscaling configuration.
-### workerNodeGroupConfigurations.kubernetesVersion
+### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
-### externalEtcdConfiguration.count
+### externalEtcdConfiguration.count (optional)
Number of etcd members
-### externalEtcdConfiguration.machineGroupRef
+### externalEtcdConfiguration.machineGroupRef (optional)
Refers to the Kubernetes object with Nutanix specific configuration for your etcd members. See `NutanixMachineConfig` fields below.
-### datacenterRef
+### datacenterRef (required)
Refers to the Kubernetes object with Nutanix environment specific configuration. See `NutanixDatacenterConfig` fields below.
### kubernetesVersion (required)
@@ -253,22 +253,22 @@ __Example__:
## NutanixMachineConfig Fields
-### cluster
+### cluster (required)
Reference to the Prism Element cluster.
-### cluster.type
+### cluster.type (required)
Type to identify the Prism Element cluster. (Permitted values: `name` or `uuid`)
-### cluster.name
+### cluster.name (required)
Name of the Prism Element cluster.
-### cluster.uuid
+### cluster.uuid (required)
UUID of the Prism Element cluster.
-### image
+### image (required)
Reference to the OS image used for the system disk.
-### image.type
+### image.type (required)
Type to identify the OS image. (Permitted values: `name` or `uuid`)
### image.name (`name` or `UUID` required)
@@ -279,37 +279,37 @@ The `image.name` must contain the `Cluster.Spec.KubernetesVersion` or `Cluster.S
UUID of the image
The name of the image associated with the `uuid` must contain the `Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the name associated with `image.uuid` field must include 1.24, 1_24, 1-24 or 124.
-### memorySize
+### memorySize (optional)
Size of RAM on virtual machines (Default: `4Gi`)
### osFamily (optional)
Operating System on virtual machines. Permitted values: `ubuntu` and `redhat`. (Default: `ubuntu`)
-### subnet
+### subnet (required)
Reference to the subnet to be assigned to the VMs.
### subnet.name (`name` or `UUID` required)
Name of the subnet.
-### subnet.type
+### subnet.type (required)
Type to identify the subnet. (Permitted values: `name` or `uuid`)
### subnet.uuid (`name` or `UUID` required)
UUID of the subnet.
-### systemDiskSize
+### systemDiskSize (optional)
Amount of storage assigned to the system disk. (Default: `40Gi`)
-### vcpuSockets
+### vcpuSockets (optional)
Amount of vCPU sockets. (Default: `2`)
-### vcpusPerSocket
+### vcpusPerSocket (optional)
Amount of vCPUs per socket. (Default: `1`)
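+As an illustrative sketch (names are placeholders; sizes and counts shown are the documented defaults), these fields appear in a `NutanixMachineConfig` spec as:
+```yaml
+spec:
+  cluster:
+    type: name
+    name: my-prism-element-cluster   # placeholder
+  subnet:
+    type: name
+    name: my-subnet                  # placeholder
+  memorySize: 4Gi                    # default
+  systemDiskSize: 40Gi               # default
+  vcpuSockets: 2                     # default
+  vcpusPerSocket: 1                  # default
+```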
### project (optional)
Reference to an existing project used for the virtual machines.
-### project.type
+### project.type (required)
Type to identify the project. (Permitted values: `name` or `uuid`)
### project.name (`name` or `UUID` required)
diff --git a/docs/content/en/docs/getting-started/snow/snow-spec.md b/docs/content/en/docs/getting-started/snow/snow-spec.md
index 096bbdb9efa9..ffd8ad9c5a18 100644
--- a/docs/content/en/docs/getting-started/snow/snow-spec.md
+++ b/docs/content/en/docs/getting-started/snow/snow-spec.md
@@ -124,7 +124,7 @@ range that does not conflict with other devices.
>**_NOTE:_** This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of
the control plane nodes for kube-apiserver loadbalancing.
-### controlPlaneConfiguration.taints
+### controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces `node-role.kubernetes.io/master`. For k8s versions 1.24+, it replaces `node-role.kubernetes.io/control-plane`. The default control plane components will tolerate the provided taints.
@@ -135,7 +135,7 @@ Modifying the taints associated with the control plane configuration will cause
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
>
-### controlPlaneConfiguration.labels
+### controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.
@@ -146,7 +146,7 @@ the existing nodes.
This takes in a list of node groups that you can define for your workers.
You may define one or more worker node groups.
-### workerNodeGroupConfigurations.count
+### workerNodeGroupConfigurations.count (required)
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
Refers to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -157,36 +157,36 @@ Refers to the Kubernetes object with Snow specific configuration for your nodes.
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group's autoscaling configuration.
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group's autoscaling configuration.
-### workerNodeGroupConfigurations.taints
+### workerNodeGroupConfigurations.taints (optional)
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.
-### workerNodeGroupConfigurations.labels
+### workerNodeGroupConfigurations.labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.
-### workerNodeGroupConfigurations.kubernetesVersion
+### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
-### externalEtcdConfiguration.count
+### externalEtcdConfiguration.count (optional)
Number of etcd members.
-### externalEtcdConfiguration.machineGroupRef
+### externalEtcdConfiguration.machineGroupRef (optional)
Refers to the Kubernetes object with Snow specific configuration for your etcd members. See `SnowMachineConfig Fields` below.
-### datacenterRef
+### datacenterRef (required)
Refers to the Kubernetes object with Snow environment specific configuration. See `SnowDatacenterConfig Fields` below.
### kubernetesVersion (required)
@@ -194,7 +194,7 @@ The Kubernetes version you want to use for your cluster. Supported values: `1.28
## SnowDatacenterConfig Fields
-### identityRef
+### identityRef (required)
Refers to the Kubernetes secret object with Snow devices credentials used to reconcile the cluster.
## SnowMachineConfig Fields
@@ -240,7 +240,7 @@ Refers to a `SnowIPPool` object which provides a range of ip addresses. When spe
### containersVolume (optional)
Configuration option for customizing containers data storage volume.
-### containersVolume.size
+### containersVolume.size (optional)
Size of the storage for containerd runtime in Gi.
The field is optional for Ubuntu and if specified, the size must be no smaller than 8 Gi.
@@ -256,10 +256,10 @@ Type of the containers volume. Permitted values: `sbp1`, `sbg1`. (Default: `sbp1
### nonRootVolumes (optional)
Configuration options for the non root storage volumes.
-### nonRootVolumes[0].deviceName
+### nonRootVolumes[0].deviceName (optional)
Non root volume device name. Must be specified and cannot have prefix "/dev/sda" as it is reserved for root volume and containers volume.
-### nonRootVolumes[0].size
+### nonRootVolumes[0].size (optional)
Size of the storage device for the non root volume. Must be no smaller than 8 Gi.
### nonRootVolumes[0].type (optional)
@@ -269,14 +269,14 @@ Type of the non root volume. Permitted values: `sbp1`, `sbg1`. (Default: `sbp1`)
## SnowIPPool Fields
-### pools[0].ipStart
+### pools[0].ipStart (optional)
Start address of an IP range.
-### pools[0].ipEnd
+### pools[0].ipEnd (optional)
End address of an IP range.
-### pools[0].subnet
+### pools[0].subnet (optional)
An IP subnet for determining whether an IP is within the subnet.
-### pools[0].gateway
+### pools[0].gateway (optional)
Gateway of the subnet for routing purpose.
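+For example (addresses are illustrative), a pool entry can be defined as:
+```yaml
+pools:
+- ipStart: 192.168.1.20
+  ipEnd: 192.168.1.30
+  subnet: 192.168.1.0/24
+  gateway: 192.168.1.1
+```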
diff --git a/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md b/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
index e493535d10b5..fd20f6f3e453 100644
--- a/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
+++ b/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
@@ -34,33 +34,33 @@ spec:
machineGroupRef: # vSphere-specific Kubernetes node config (required)
kind: VSphereMachineConfig
name: my-cluster-machines
- taints: # Taints applied to control plane nodes
+ taints: # Taints applied to control plane nodes
- key: "key1"
value: "value1"
effect: "NoSchedule"
- labels: # Labels applied to control plane nodes
+ labels: # Labels applied to control plane nodes
"key1": "value1"
"key2": "value2"
- datacenterRef: # Kubernetes object with vSphere-specific config
+ datacenterRef: # Kubernetes object with vSphere-specific config
kind: VSphereDatacenterConfig
name: my-cluster-datacenter
externalEtcdConfiguration:
- count: 3 # Number of etcd members
- machineGroupRef: # vSphere-specific Kubernetes etcd config
+ count: 3 # Number of etcd members
+ machineGroupRef: # vSphere-specific Kubernetes etcd config
kind: VSphereMachineConfig
name: my-cluster-machines
kubernetesVersion: "1.25" # Kubernetes version to use for the cluster (required)
workerNodeGroupConfigurations: # List of node groups you can define for workers (required)
- - count: 2 # Number of worker nodes
+ - count: 2 # Number of worker nodes
machineGroupRef: # vSphere-specific Kubernetes node objects (required)
kind: VSphereMachineConfig
name: my-cluster-machines
name: md-0 # Name of the worker nodegroup (required)
- taints: # Taints to apply to worker node group nodes
+ taints: # Taints to apply to worker node group nodes
- key: "key1"
value: "value1"
effect: "NoSchedule"
- labels: # Labels to apply to worker node group nodes
+ labels: # Labels to apply to worker node group nodes
"key1": "value1"
"key2": "value2"
---
@@ -136,7 +136,7 @@ range that does not conflict with other VMs.
the control plane nodes for kube-apiserver loadbalancing. Suggestions on how to ensure this IP does not cause issues during cluster
creation process are [here]({{< relref "../vsphere/vsphere-prereq/#prepare-a-vmware-vsphere-environment" >}})
-### controlPlaneConfiguration.taints
+### controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces `node-role.kubernetes.io/master`. For k8s versions 1.24+, it replaces `node-role.kubernetes.io/control-plane`. The default control plane components will tolerate the provided taints.
@@ -147,7 +147,7 @@ Modifying the taints associated with the control plane configuration will cause
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
>
-### controlPlaneConfiguration.labels
+### controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.
@@ -158,7 +158,7 @@ the existing nodes.
This takes in a list of node groups that you can define for your workers.
You may define one or more worker node groups.
-### workerNodeGroupConfigurations.count
+### workerNodeGroupConfigurations.count (required)
Number of worker nodes. Optional if the [cluster autoscaler curated package]({{< relref "../../packages/cluster-autoscaler/addclauto" >}}) is installed and autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
Refers to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -169,38 +169,38 @@ Refers to the Kubernetes object with vsphere specific configuration for your nod
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group's autoscaling configuration.
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group's autoscaling configuration.
-### workerNodeGroupConfigurations.taints
+### workerNodeGroupConfigurations.taints (optional)
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must **NOT** have `NoSchedule` or `NoExecute` taints applied to it.
-### workerNodeGroupConfigurations.labels
+### workerNodeGroupConfigurations.labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.
-### workerNodeGroupConfigurations.kubernetesVersion
+### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. [Supported values]({{< relref "../../concepts/support-versions/#kubernetes-versions" >}}): `1.28`, `1.27`, `1.26`, `1.25`, `1.24`
Must be less than or equal to the cluster `kubernetesVersion` defined at the root level of the cluster spec. The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane's Kubernetes version. Removing `workerNodeGroupConfiguration.kubernetesVersion` will trigger an upgrade of the node group to the `kubernetesVersion` defined at the root level of the cluster spec.
-### externalEtcdConfiguration.count
+### externalEtcdConfiguration.count (optional)
Number of etcd members
-### externalEtcdConfiguration.machineGroupRef
+### externalEtcdConfiguration.machineGroupRef (optional)
Refers to the Kubernetes object with vsphere specific configuration for your etcd members. See [VSphereMachineConfig Fields](#vspheremachineconfig-fields) below.
-### datacenterRef
+### datacenterRef (required)
Refers to the Kubernetes object with vsphere environment specific configuration. See [VSphereDatacenterConfig Fields](#vspheredatacenterconfig-fields) below.
### kubernetesVersion (required)
diff --git a/docs/content/en/docs/osmgmt/artifacts.md b/docs/content/en/docs/osmgmt/artifacts.md
index f7429b72d6d4..648fa41f2a20 100644
--- a/docs/content/en/docs/osmgmt/artifacts.md
+++ b/docs/content/en/docs/osmgmt/artifacts.md
@@ -25,7 +25,7 @@ Several code snippets on this page use `curl` and `yq` commands. Refer to the [T
Artifacts for EKS Anywhere Bare Metal clusters are listed below.
If you like, you can download these images and serve them locally to speed up cluster creation.
-See descriptions of the [osImageURL]({{< relref "../getting-started/baremetal/bare-spec/#osimageurl" >}}) and [`hookImagesURLPath`]({{< relref "../getting-started/baremetal/bare-spec#hookimagesurlpath" >}}) fields for details.
+See descriptions of the [`osImageURL`]({{< relref "../getting-started/baremetal/bare-spec/#osimageurl-optional" >}}) and [`hookImagesURLPath`]({{< relref "../getting-started/baremetal/bare-spec#hookimagesurlpath-optional" >}}) fields for details.
### Ubuntu or RHEL OS images for Bare Metal
@@ -627,7 +627,7 @@ These steps use `image-builder` to create an Ubuntu-based or RHEL-based image fo
osImageURL: "http:///my-ubuntu-v1.23.9-eks-a-17-amd64.gz"
```
- See descriptions of [osImageURL]({{< relref "../getting-started/baremetal/bare-spec/#osimageurl" >}}) for further information.
+ See descriptions of [`osImageURL`]({{< relref "../getting-started/baremetal/bare-spec/#osimageurl-optional" >}}) for further information.
### Build CloudStack node images
diff --git a/docs/content/en/docs/osmgmt/overview.md b/docs/content/en/docs/osmgmt/overview.md
index 1c35f72aa736..16af2b435e1d 100644
--- a/docs/content/en/docs/osmgmt/overview.md
+++ b/docs/content/en/docs/osmgmt/overview.md
@@ -30,7 +30,7 @@ With the vSphere, bare metal, Snow, CloudStack and Nutanix deployment options, E
To configure the operating system to use for EKS Anywhere clusters on vSphere, use the [`VSphereMachingConfig` `spec.template` field]({{< ref "/docs/getting-started/vsphere/vsphere-spec#template-optional" >}}). The template name corresponds to the template you imported into your vSphere environment. See the [Customize OVAs]({{< ref "/docs/getting-started/vsphere/customize/customize-ovas" >}}) and [Import OVAs]({{< ref "/docs/getting-started/vsphere/customize/vsphere-ovas" >}}) documentation pages for more information. Changing the template after cluster creation will result in the deployment of new machines.
## Bare metal
-To configure the operating system to use for EKS Anywhere clusters on bare metal, use the [`TinkerbellDatacenterConfig` `spec.osImageURL` field]({{< ref "/docs/getting-started/baremetal/bare-spec#osimageurl" >}}). This field can be used to stream the operating system from a custom location and is required to use Ubuntu or RHEL. You cannot change the `osImageURL` after creating your cluster. To upgrade the operating system, you must replace the image at the existing `osImageURL` location with a new image. Operating system changes are only deployed when an action that triggers a deployment of new machines is triggered, which includes Kubernetes version upgrades only at this time.
+To configure the operating system to use for EKS Anywhere clusters on bare metal, use the [`TinkerbellDatacenterConfig` `spec.osImageURL` field]({{< ref "/docs/getting-started/baremetal/bare-spec#osimageurl-optional" >}}). This field can be used to stream the operating system from a custom location and is required to use Ubuntu or RHEL. You cannot change the `osImageURL` after creating your cluster. To upgrade the operating system, you must replace the image at the existing `osImageURL` location with a new image. Operating system changes are deployed only when an action triggers the deployment of new machines; at this time, only Kubernetes version upgrades do so.
## Snow
-To configure the operating to use for EKS Anywhere clusters on Snow, use the [`SnowMachineConfig` `spec.osFamily` field]({{< ref "/docs/getting-started/snow/snow-spec#osfamily" >}}). At this time, only Ubuntu is supported for use with EKS Anywhere clusters on Snow. You can customize the instance image with the [`SnowMachineConfig` `spec.amiID` field]({{< ref "/docs/getting-started/snow/snow-spec#amiid-optional" >}}) and the instance type with the [`SnowMachineConfig` `spec.instanceType` field]({{< ref "/docs/getting-started/snow/snow-spec#instancetype-optional" >}}). Changes to these fields after cluster creation will result in the deployment of new machines.
+To configure the operating system to use for EKS Anywhere clusters on Snow, use the [`SnowMachineConfig` `spec.osFamily` field]({{< ref "/docs/getting-started/snow/snow-spec#osfamily" >}}). At this time, only Ubuntu is supported for use with EKS Anywhere clusters on Snow. You can customize the instance image with the [`SnowMachineConfig` `spec.amiID` field]({{< ref "/docs/getting-started/snow/snow-spec#amiid-optional" >}}) and the instance type with the [`SnowMachineConfig` `spec.instanceType` field]({{< ref "/docs/getting-started/snow/snow-spec#instancetype-optional" >}}). Changes to these fields after cluster creation will result in the deployment of new machines.
diff --git a/docs/content/en/docs/overview/faq/_index.md b/docs/content/en/docs/overview/faq/_index.md
index a13b4691493c..222d99206ebe 100644
--- a/docs/content/en/docs/overview/faq/_index.md
+++ b/docs/content/en/docs/overview/faq/_index.md
@@ -103,4 +103,4 @@ There would need to be a change to the upstream project to support ESXi.
### Can I deploy EKS Anywhere on a single node?
-Yes. Single node cluster deployment is supported for Bare Metal. See [workerNodeGroupConfigurations]({{< relref "../../getting-started/baremetal/bare-spec/#workernodegroupconfigurations">}})
+Yes. Single node cluster deployment is supported for Bare Metal. See [workerNodeGroupConfigurations]({{< relref "../../getting-started/baremetal/bare-spec/#workernodegroupconfigurations-optional">}})