Update configuration section in docs (#8076)
sp1999 committed May 1, 2024
1 parent f058cd8 commit d31ebbb
Showing 12 changed files with 107 additions and 107 deletions.
@@ -22,7 +22,7 @@ The procedure to upgrade EKS Anywhere clusters in airgapped environments is simi

If the previous steps succeeded, all of the required EKS Anywhere dependencies are now present in your local registry. Before you upgrade your EKS Anywhere cluster, configure `registryMirrorConfiguration` in your EKS Anywhere cluster specification with the information for your local registry. For details see the [Registry Mirror Configuration documentation.]({{< relref "../../getting-started/optional/registrymirror/#registry-mirror-cluster-spec" >}})
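For illustration (an editor's sketch, not part of the original commit), a minimal `registryMirrorConfiguration` stanza might look like the following; the endpoint, port, and namespace values are placeholders, and the linked Registry Mirror documentation remains the authoritative reference:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster                    # placeholder cluster name
spec:
  registryMirrorConfiguration:
    endpoint: registry.example.com    # placeholder: your local registry host
    port: "443"
    authenticate: true                # set only if your registry requires credentials
    ociNamespaces:
      - registry: public.ecr.aws      # upstream registry being mirrored
        namespace: eks-anywhere       # placeholder namespace in your mirror
```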

>**_NOTE:_** If you are running EKS Anywhere on bare metal, you must configure `osImageURL` and `hookImagesURLPath` in your EKS Anywhere cluster specification with the location of the upgraded node operating system image and hook OS image. For details, reference the [bare metal configuration documentation.]({{< relref "../../getting-started/baremetal/bare-spec/#osimageurl" >}})
>**_NOTE:_** If you are running EKS Anywhere on bare metal, you must configure `osImageURL` and `hookImagesURLPath` in your EKS Anywhere cluster specification with the location of the upgraded node operating system image and hook OS image. For details, reference the [bare metal configuration documentation.]({{< relref "../../getting-started/baremetal/bare-spec/#osimageurl-optional" >}})
### Next Steps
- [Build upgraded node operating system images for your cluster]({{< relref "../../osmgmt/artifacts/#building-images-for-a-specific-eks-anywhere-version" >}})
@@ -171,7 +171,7 @@ spec:
...
```

>**_NOTE:_** If you have a custom machine image for your nodes in your cluster config YAML, or if you want to upgrade a node or group of nodes to a new operating system version (i.e., RHEL 8.7 to RHEL 8.8), you may also need to update your [`TinkerbellDatacenterConfig`]({{< relref "../../getting-started/baremetal/bare-spec/#tinkerbelldatacenterconfig-fields" >}}) or [`TinkerbellMachineConfig`]({{< relref "../../getting-started/baremetal/bare-spec/#tinkerbellmachineconfig-fields" >}}) with the new operating system image URL [`osImageURL`]({{< relref "../../getting-started/baremetal/bare-spec/#osimageurl" >}}).
>**_NOTE:_** If you have a custom machine image for your nodes in your cluster config YAML, or if you want to upgrade a node or group of nodes to a new operating system version (i.e., RHEL 8.7 to RHEL 8.8), you may also need to update your [`TinkerbellDatacenterConfig`]({{< relref "../../getting-started/baremetal/bare-spec/#tinkerbelldatacenterconfig-fields" >}}) or [`TinkerbellMachineConfig`]({{< relref "../../getting-started/baremetal/bare-spec/#tinkerbellmachineconfig-fields" >}}) with the new operating system image URL [`osImageURL`]({{< relref "../../getting-started/baremetal/bare-spec/#osimageurl-optional" >}}).
Then run the [upgrade cluster command]({{< relref "baremetal-upgrades/#upgrade-cluster-command" >}}).

2 changes: 1 addition & 1 deletion docs/content/en/docs/getting-started/airgapped/_index.md
@@ -39,7 +39,7 @@ The process for preparing your airgapped environment for EKS Anywhere is summari

If the previous steps succeeded, all of the required EKS Anywhere dependencies are now present in your local registry. Before you create your EKS Anywhere cluster, configure `registryMirrorConfiguration` in your EKS Anywhere cluster specification with the information for your local registry. For details see the [Registry Mirror Configuration documentation.]({{< relref "../../getting-started/optional/registrymirror/#registry-mirror-cluster-spec" >}})

>**_NOTE:_** If you are running EKS Anywhere on bare metal, you must configure `osImageURL` and `hookImagesURLPath` in your EKS Anywhere cluster specification with the location of your node operating system image and the hook OS image. For details, reference the [bare metal configuration documentation.]({{< relref "../baremetal/bare-spec/#osimageurl" >}})
>**_NOTE:_** If you are running EKS Anywhere on bare metal, you must configure `osImageURL` and `hookImagesURLPath` in your EKS Anywhere cluster specification with the location of your node operating system image and the hook OS image. For details, reference the [bare metal configuration documentation.]({{< relref "../baremetal/bare-spec/#osimageurl-optional" >}})
### Next Steps
- Review EKS Anywhere [cluster networking requirements]({{< relref "../ports" >}})
38 changes: 19 additions & 19 deletions docs/content/en/docs/getting-started/baremetal/bare-spec.md
@@ -136,7 +136,7 @@ the control plane nodes for kube-apiserver loadbalancing.
### controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See `TinkerbellMachineConfig Fields` below.

### controlPlaneConfiguration.taints
### controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint (For k8s versions prior to 1.24, `node-role.kubernetes.io/master`. For k8s versions 1.24+, `node-role.kubernetes.io/control-plane`). The default control plane components will tolerate the provided taints.
@@ -147,34 +147,34 @@ Modifying the taints associated with the control plane configuration will cause
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
>
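As an illustrative sketch (added for clarity, with placeholder key/value pairs), a control plane taint could be declared like so:

```yaml
spec:
  controlPlaneConfiguration:
    count: 3
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-cp             # placeholder machine group
    taints:
      - key: dedicated                # placeholder taint key
        value: control-plane
        effect: NoSchedule            # pods must tolerate this taint to schedule here
```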

### controlPlaneConfiguration.labels
### controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.

### controlPlaneConfiguration.skipLoadBalancerDeployment
### controlPlaneConfiguration.skipLoadBalancerDeployment (optional)
Optional field to skip deploying the control plane load balancer. Make sure your infrastructure can handle control plane load balancing when you set this field to true. In most cases, you should not set this field to true.

### datacenterRef
### datacenterRef (required)
Refers to the Kubernetes object with Tinkerbell-specific configuration. See `TinkerbellDatacenterConfig Fields` below.

### kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: `1.28`, `1.27`, `1.26`, `1.25`, `1.24`

### managementCluster
### managementCluster (required)
Identifies the name of the management cluster.
If your cluster spec is for a standalone or management cluster, this value is the same as the cluster name.

### workerNodeGroupConfigurations
### workerNodeGroupConfigurations (optional)
This takes in a list of node groups that you can define for your workers.

You can omit `workerNodeGroupConfigurations` when creating Bare Metal clusters. If you omit `workerNodeGroupConfigurations`, control plane nodes will not be tainted and all pods will run on the control plane nodes. This mechanism can be used to deploy Bare Metal clusters on a single server. You can also run multi-node Bare Metal clusters without `workerNodeGroupConfigurations`.

>**_NOTE:_** Empty `workerNodeGroupConfigurations` is not supported when Kubernetes version <= 1.21.

### workerNodeGroupConfigurations.count
### workerNodeGroupConfigurations.count (optional)
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.

Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -185,44 +185,44 @@ Refers to the Kubernetes object with Tinkerbell-specific configuration for your
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)

### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group's autoscaling configuration.

### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group's autoscaling configuration.
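A sketch of how `minCount` and `maxCount` sit together in a worker node group (the group name, machine group, and bounds are placeholders):

```yaml
spec:
  workerNodeGroupConfigurations:
    - name: md-0                      # default worker node group name
      machineGroupRef:
        kind: TinkerbellMachineConfig
        name: my-cluster-worker       # placeholder machine group
      autoscalingConfiguration:
        minCount: 1                   # count defaults to this value when omitted
        maxCount: 5
```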

### workerNodeGroupConfigurations.taints
### workerNodeGroupConfigurations.taints (optional)
A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.

### workerNodeGroupConfigurations.labels
### workerNodeGroupConfigurations.labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.

### workerNodeGroupConfigurations.kubernetesVersion
### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. [Supported values]({{< relref "../../concepts/support-versions/#kubernetes-versions" >}}): `1.28`, `1.27`, `1.26`, `1.25`, `1.24`

Must be less than or equal to the cluster `kubernetesVersion` defined at the root level of the cluster spec. The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane's Kubernetes version. Removing `workerNodeGroupConfiguration.kubernetesVersion` will trigger an upgrade of the node group to the `kubernetesVersion` defined at the root level of the cluster spec.
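A sketch of the version-skew rule above (versions are placeholders that respect the two-minor-version limit):

```yaml
spec:
  kubernetesVersion: "1.28"           # control plane version, set at the root of the spec
  workerNodeGroupConfigurations:
    - name: md-0
      kubernetesVersion: "1.26"       # at most two minor versions below 1.28
```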

## TinkerbellDatacenterConfig Fields

### tinkerbellIP
### tinkerbellIP (required)
Required field to identify the IP address of the Tinkerbell service.
This IP address must be a unique IP in the network range that does not conflict with other IPs.
Once the Tinkerbell services move from the Admin machine to run on the target cluster, this IP address makes it possible for the stack to be used for future provisioning needs.
When separate management and workload clusters are supported in Bare Metal, the IP address becomes a necessity.

### osImageURL
### osImageURL (optional)
Optional field to replace the default Bottlerocket operating system. EKS Anywhere can only auto-import Bottlerocket. In order to use Ubuntu or RHEL, see [building baremetal node images]({{< relref "../../osmgmt/artifacts/#build-bare-metal-node-images" >}}). This field is also useful if you want to provide a customized operating system image or simply host the standard image locally. To upgrade a node or group of nodes to a new operating system version (i.e., RHEL 8.7 to RHEL 8.8), modify this field to point to the new operating system image URL and run the [upgrade cluster command]({{< relref "../../clustermgmt/cluster-upgrades/baremetal-upgrades/#upgrade-cluster-command" >}}).
The `osImageURL` must contain the `Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the `osImageURL` name should include 1.24, 1_24, 1-24 or 124.
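Pulling `tinkerbellIP` and `osImageURL` together, a hedged `TinkerbellDatacenterConfig` sketch might read as follows; the IP and URLs are placeholders, and note that the image name embeds the Kubernetes version as required above:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: my-cluster                    # placeholder
spec:
  tinkerbellIP: "192.168.0.10"        # placeholder: unique, conflict-free IP
  osImageURL: "http://my-web-server/ubuntu-v1.24.7-eks-a-amd64.gz"  # placeholder; "1.24" matches kubernetesVersion
  hookImagesURLPath: "http://my-web-server"  # optional placeholder HookOS mirror
```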

### hookImagesURLPath
### hookImagesURLPath (optional)
Optional field to replace the HookOS image.
This field is useful if you want to provide a customized HookOS image or simply host the standard image locally.
See [Artifacts]({{< relref "../../osmgmt/artifacts/#hookos-kernel-and-initial-ramdisk-for-bare-metal" >}}) for details.
@@ -243,7 +243,7 @@
my-web-server
└── ubuntu-v1.23.7-eks-a-12-amd64.gz
```

### skipLoadBalancerDeployment
### skipLoadBalancerDeployment (optional)
Optional field to skip deploying the default load balancer for Tinkerbell stack.

EKS Anywhere for Bare Metal uses `kube-vip` load balancer by default to expose the Tinkerbell stack externally.
@@ -255,7 +255,7 @@ In the example, there are `TinkerbellMachineConfig` sections for control plane (
The following fields identify information needed to configure the nodes in each of those groups.
>**_NOTE:_** Currently, you can only have one machine group for all machines in the control plane, although you can have multiple machine groups for the workers.
>
### hardwareSelector
### hardwareSelector (optional)
Use fields under `hardwareSelector` to add key/value pair labels to match particular machines that you identified in the CSV file where you defined the machines in your cluster.
Choose any label name you like.
For example, if you had added the label `node=cp-machine` to the machines listed in your CSV file that you want to be control plane nodes, the following `hardwareSelector` field would cause those machines to be added to the control plane:
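The field itself is collapsed in this diff; a hedged reconstruction of the example the prose describes, using the `node=cp-machine` label, would be:

```yaml
spec:                                 # in the control plane TinkerbellMachineConfig
  hardwareSelector:
    node: cp-machine                  # matches the label assigned in the hardware CSV
```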
@@ -284,7 +284,7 @@ See TinkerbellTemplateConfig fields below.
EKS Anywhere will generate default templates based on `osFamily` during the `create` command.
You can override this default template by providing your own template here.

### users
### users (optional)
The name of the user you want to configure to access your virtual machines through SSH.

The default is `ec2-user`.
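A sketch of the `users` list, assuming the `sshAuthorizedKeys` field used elsewhere in EKS Anywhere machine configs (the key material is a placeholder):

```yaml
spec:
  users:
    - name: ec2-user                  # default user
      sshAuthorizedKeys:
        - "ssh-rsa AAAA... user@host" # placeholder public key
```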
@@ -424,7 +424,7 @@ spec:

Pay special attention to the `BOOTCONFIG_CONTENTS` environment section below if you wish to set up console redirection for the kernel and systemd.
If you are only using a direct attached monitor as your primary display device, no additional configuration is needed here.
However, if you need all boot output to be shown via a servers serial console for example, extra configuration should be provided inside `BOOTCONFIG_CONTENTS`.
However, if you need all boot output to be shown via a server's serial console for example, extra configuration should be provided inside `BOOTCONFIG_CONTENTS`.

An empty `kernel {}` key is provided below in the example; inside this key is where you will specify your console devices.
You may specify multiple comma-delimited console devices, in quotes, under a console key like so: `console = "tty0", "ttyS0,115200n8"`.
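The full template example is collapsed in this diff; a hedged fragment showing only the console settings inside `BOOTCONFIG_CONTENTS` (the surrounding action and template are omitted) could look like:

```yaml
environment:
  BOOTCONFIG_CONTENTS: |              # fragment only; enables serial console redirection
    kernel {
        console = "tty0", "ttyS0,115200n8"
    }
```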
@@ -211,7 +211,7 @@ Follow these steps if you want to use your initial cluster to create and manage
> ```
> * For creating multiple workload clusters, it is essential that the hardware labels and selectors defined for a given workload cluster are unique to that workload cluster. For instance, for an EKS Anywhere cluster named `eksa-workload1`, the hardware that is assigned for this cluster should have labels that are only going to be used for this cluster like `type=eksa-workload1-cp` and `type=eksa-workload1-worker`.
Another workload cluster named `eksa-workload2` can have labels like `type=eksa-workload2-cp` and `type=eksa-workload2-worker`. Please note that even though labels can be arbitrary, they need to be unique for each workload cluster. Not specifying unique cluster labels can cause cluster creations to behave in unexpected ways which may lead to unsuccessful creations and unstable clusters.
See the [hardware selectors]({{< relref "./bare-spec/#hardwareselector" >}}) section for more information.
See the [hardware selectors]({{< relref "./bare-spec/#hardwareselector-optional" >}}) section for more information.
1. Check the workload cluster:
22 changes: 11 additions & 11 deletions docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
@@ -204,7 +204,7 @@ creation process are [here]({{< relref "./cloudstack-prereq/." >}})
### controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See `CloudStackMachineConfig Fields` below.

### controlPlaneConfiguration.taints
### controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint, `node-role.kubernetes.io/master`. The default control plane components will tolerate the provided taints.
@@ -215,7 +215,7 @@ Modifying the taints associated with the control plane configuration will cause
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
>
### controlPlaneConfiguration.labels
### controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.

@@ -229,13 +229,13 @@ The `ds.meta_data.failuredomain` value will be replaced with a failuredomain nam
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.

### datacenterRef
### datacenterRef (required)
Refers to the Kubernetes object with CloudStack environment specific configuration. See `CloudStackDatacenterConfig Fields` below.

### externalEtcdConfiguration.count
### externalEtcdConfiguration.count (optional)
Number of etcd members

### externalEtcdConfiguration.machineGroupRef
### externalEtcdConfiguration.machineGroupRef (optional)
Refers to the Kubernetes object with CloudStack specific configuration for your etcd members. See `CloudStackMachineConfig Fields` below.
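Taken together, a sketch of an external etcd stanza (the machine group name is a placeholder):

```yaml
spec:
  externalEtcdConfiguration:
    count: 3                          # use an odd member count for quorum
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-etcd           # placeholder machine group
```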

### kubernetesVersion (required)
@@ -249,7 +249,7 @@ If this is a standalone cluster or if it were serving as the management cluster
This takes in a list of node groups that you can define for your workers.
You may define one or more worker node groups.

### workerNodeGroupConfigurations.count
### workerNodeGroupConfigurations.count (required)
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.

Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -260,20 +260,20 @@ Refers to the Kubernetes object with CloudStack specific configuration for your
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)

### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group's autoscaling configuration.

### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group's autoscaling configuration.

### workerNodeGroupConfigurations.taints
### workerNodeGroupConfigurations.taints (optional)
A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.

### workerNodeGroupConfigurations.labels
### workerNodeGroupConfigurations.labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
A special label value is supported by the CAPC provider:
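The concrete label is collapsed in this diff; assuming the `cluster.x-k8s.io/failure-domain` key used by Cluster API, a hedged reconstruction is:

```yaml
labels:
  cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain  # value substituted with the node's failure domain at deploy time
```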
@@ -287,7 +287,7 @@ The `ds.meta_data.failuredomain` value will be replaced with a failuredomain nam
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.

### workerNodeGroupConfigurations.kubernetesVersion
### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

## CloudStackDatacenterConfig