Update configuration section in docs
sp1999 committed Apr 30, 2024
1 parent 7cffe23 commit a259c74
Showing 6 changed files with 96 additions and 96 deletions.
@@ -8,29 +8,29 @@
### clusterNetwork.cniConfig (required)
CNI plugin configuration. Supports `cilium`.

-### clusterNetwork.cniConfig.cilium.policyEnforcementMode
+### clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)
Optionally specify a policyEnforcementMode of `default`, `always` or `never`.

-### clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces
+### clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)
Optionally specify a network interface name or interface prefix used for
masquerading. See <a href="/docs/getting-started/optional/cni/#egressmasqueradeinterfaces-option-for-cilium-plugin">EgressMasqueradeInterfaces</a>
option.

-### clusterNetwork.cniConfig.cilium.skipUpgrade
+### clusterNetwork.cniConfig.cilium.skipUpgrade (optional)
When true, skip Cilium maintenance during upgrades. Also see <a href="/docs/getting-started/optional/cni/#use-a-custom-cni">Use a custom
CNI</a>.

-### clusterNetwork.cniConfig.cilium.routingMode
+### clusterNetwork.cniConfig.cilium.routingMode (optional)
Optionally specify the routing mode. Accepts `default` and `direct`. Also see <a href="/docs/getting-started/optional/cni/#routingmode-option-for-cilium-plugin">RoutingMode</a>
option.

-### clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR
+### clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)
Optionally specify the CIDR to use when RoutingMode is set to direct.
When specified, Cilium assumes networking for this CIDR is preconfigured and
hands traffic destined for that range to the Linux network stack without
applying any SNAT.
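
Taken together, a minimal sketch of how these Cilium fields sit in the cluster spec, assuming illustrative values for the interface name and CIDR:

```yaml
clusterNetwork:
  cniConfig:
    cilium:
      policyEnforcementMode: always      # default | always | never
      egressMasqueradeInterfaces: eth0   # assumed interface name
      routingMode: direct                # default | direct
      ipv4NativeRoutingCIDR: 10.0.0.0/16 # assumed CIDR; only meaningful with direct routing
```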

-### clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR
+### clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)
Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct.
When specified, Cilium assumes networking for this CIDR is preconfigured and
hands traffic destined for that range to the Linux network stack without
58 changes: 29 additions & 29 deletions docs/content/en/docs/getting-started/baremetal/bare-spec.md
@@ -122,7 +122,7 @@ the control plane nodes for kube-apiserver loadbalancing.
### controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See `TinkerbellMachineConfig Fields` below.

-### controlPlaneConfiguration.taints
+### controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint (For k8s versions prior to 1.24, `node-role.kubernetes.io/master`. For k8s versions 1.24+, `node-role.kubernetes.io/control-plane`). The default control plane components will tolerate the provided taints.
@@ -133,29 +133,29 @@ Modifying the taints associated with the control plane configuration will cause
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
>
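
As a hedged sketch, a taint entry follows the standard Kubernetes taint schema; the key, value, and effect below are illustrative:

```yaml
controlPlaneConfiguration:
  taints:
  - key: key1          # assumed example key
    value: value1      # assumed example value
    effect: NoSchedule # NoSchedule | PreferNoSchedule | NoExecute
```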

-### controlPlaneConfiguration.labels
+### controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.
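
A minimal sketch of the labels field, assuming the usual key/value map form with an illustrative label:

```yaml
controlPlaneConfiguration:
  labels:
    key1: value1 # assumed example label
```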

-#### controlPlaneConfiguration.upgradeRolloutStrategy
+#### controlPlaneConfiguration.upgradeRolloutStrategy (optional)
Configuration parameters for upgrade strategy.

-#### controlPlaneConfiguration.upgradeRolloutStrategy.type
+#### controlPlaneConfiguration.upgradeRolloutStrategy.type (optional)
Default: `RollingUpdate`

Type of rollout strategy. Supported values: `RollingUpdate`, `InPlace`.

>**_NOTE:_** The upgrade rollout strategy type must be the same for all control plane and worker nodes.

-#### controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate
+#### controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate (optional)
Configuration parameters for customizing rolling upgrade behavior.

>**_NOTE:_** The rolling update parameters can only be configured if `upgradeRolloutStrategy.type` is `RollingUpdate`.

-#### controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate.maxSurge
+#### controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)
Default: 1

This cannot be 0 if maxUnavailable is 0.
@@ -164,27 +164,27 @@ The maximum number of machines that can be scheduled above the desired number of

Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts. The total number of machines in the cluster (old + new) never exceeds (desired number of machines + n). Once scale down happens and old machines are brought down, the new worker node group can be scaled up further, ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.
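
As a sketch, an upgrade strategy that allows one extra control plane machine during the rollout (the values shown are the documented defaults):

```yaml
controlPlaneConfiguration:
  upgradeRolloutStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1 # at most one machine above the desired count during the upgrade
```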

-### controlPlaneConfiguration.skipLoadBalancerDeployment
+### controlPlaneConfiguration.skipLoadBalancerDeployment (optional)
Optional field to skip deploying the control plane load balancer. Make sure your infrastructure can handle control plane load balancing when you set this field to true. In most cases, you should not set this field to true.

-### datacenterRef
+### datacenterRef (required)
Refers to the Kubernetes object with Tinkerbell-specific configuration. See `TinkerbellDatacenterConfig Fields` below.

### kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: `1.28`, `1.27`, `1.26`, `1.25`, `1.24`

-### managementCluster
+### managementCluster (required)
Identifies the name of the management cluster.
If your cluster spec is for a standalone or management cluster, this value is the same as the cluster name.

-### workerNodeGroupConfigurations
+### workerNodeGroupConfigurations (optional)
This takes in a list of node groups that you can define for your workers.

You can omit `workerNodeGroupConfigurations` when creating Bare Metal clusters. If you omit `workerNodeGroupConfigurations`, control plane nodes will not be tainted and all pods will run on the control plane nodes. This mechanism can be used to deploy Bare Metal clusters on a single server. You can also run multi-node Bare Metal clusters without `workerNodeGroupConfigurations`.

>**_NOTE:_** Empty `workerNodeGroupConfigurations` is not supported when Kubernetes version <= 1.21.

-### workerNodeGroupConfigurations.count
+### workerNodeGroupConfigurations.count (optional)
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.

Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -195,52 +195,52 @@ Refers to the Kubernetes object with Tinkerbell-specific configuration for your
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)

-### workerNodeGroupConfigurations.autoscalingConfiguration
+### workerNodeGroupConfigurations.autoscalingConfiguration (optional)
Configuration parameters for Cluster Autoscaler.

>**_NOTE:_** Autoscaling configuration is not supported when using the `InPlace` upgrade rollout strategy.

-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group's autoscaling configuration.

-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group's autoscaling configuration.
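
A hedged sketch combining the two bounds for one worker node group (the group name and bounds are illustrative):

```yaml
workerNodeGroupConfigurations:
- name: md-0
  autoscalingConfiguration:
    minCount: 1 # count defaults to this value when omitted
    maxCount: 5
```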

-### workerNodeGroupConfigurations.taints
+### workerNodeGroupConfigurations.taints (optional)
A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.

-### workerNodeGroupConfigurations.labels
+### workerNodeGroupConfigurations.labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.

-### workerNodeGroupConfigurations.kubernetesVersion
+### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. [Supported values]({{< relref "../../concepts/support-versions/#kubernetes-versions" >}}): `1.28`, `1.27`, `1.26`, `1.25`, `1.24`

Must be less than or equal to the cluster `kubernetesVersion` defined at the root level of the cluster spec. The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane's Kubernetes version. Removing `workerNodeGroupConfiguration.kubernetesVersion` will trigger an upgrade of the node group to the `kubernetesVersion` defined at the root level of the cluster spec.
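
For example, a sketch of a modular version skew, assuming a `1.28` control plane with one worker group held two minor versions back:

```yaml
kubernetesVersion: "1.28"
workerNodeGroupConfigurations:
- name: md-0
  kubernetesVersion: "1.26" # at most two minor versions below the control plane
```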

-#### workerNodeGroupConfigurations.upgradeRolloutStrategy
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy (optional)
Configuration parameters for upgrade strategy.

-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.type
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy.type (optional)
Default: `RollingUpdate`

Type of rollout strategy. Supported values: `RollingUpdate`, `InPlace`.

>**_NOTE:_** The upgrade rollout strategy type must be the same for all control plane and worker nodes.

-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate (optional)
Configuration parameters for customizing rolling upgrade behavior.

>**_NOTE:_** The rolling update parameters can only be configured if `upgradeRolloutStrategy.type` is `RollingUpdate`.

-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxSurge
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)
Default: 1

This cannot be 0 if maxUnavailable is 0.
@@ -249,7 +249,7 @@ The maximum number of machines that can be scheduled above the desired number of

Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts. The total number of machines in the cluster (old + new) never exceeds (desired number of machines + n). Once scale down happens and old machines are brought down, the new worker node group can be scaled up further, ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.

-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxUnavailable
+#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxUnavailable (optional)
Default: 0

This cannot be 0 if `maxSurge` is 0.
@@ -260,17 +260,17 @@ Example: When this is set to n, the old worker node group can be scaled down by
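
A sketch showing the two rolling update parameters together for a worker node group (the values shown are the documented defaults):

```yaml
workerNodeGroupConfigurations:
- name: md-0
  upgradeRolloutStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1       # one extra machine may be created during the rollout
      maxUnavailable: 0 # no machine is taken down before its replacement is ready
```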

## TinkerbellDatacenterConfig Fields

-### tinkerbellIP
+### tinkerbellIP (required)
Required field to identify the IP address of the Tinkerbell service.
This IP address must be a unique IP in the network range that does not conflict with other IPs.
Once the Tinkerbell services move from the Admin machine to run on the target cluster, this IP address makes it possible for the stack to be used for future provisioning needs.
When separate management and workload clusters are supported in Bare Metal, the IP address becomes a necessity.

-### osImageURL
+### osImageURL (optional)
Optional field to replace the default Bottlerocket operating system. EKS Anywhere can only auto-import Bottlerocket. To use Ubuntu or RHEL, see [building baremetal node images]({{< relref "../../osmgmt/artifacts/#build-bare-metal-node-images" >}}). This field is also useful if you want to provide a customized operating system image or simply host the standard image locally. To upgrade a node or group of nodes to a new operating system version (i.e., RHEL 8.7 to RHEL 8.8), modify this field to point to the new operating system image URL and run the [upgrade cluster command]({{< relref "../../clustermgmt/cluster-upgrades/baremetal-upgrades/#upgrade-cluster-command" >}}).
The `osImageURL` must contain the `Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the `osImageURL` name should include 1.24, 1_24, 1-24 or 124.
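
For instance, a hedged example of an `osImageURL` that satisfies the version-naming rule for a 1.27 cluster (the host and file name are hypothetical):

```yaml
osImageURL: "https://my-web-server/ubuntu-v1.27.6-eks-a-12-amd64.gz"
```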

-### hookImagesURLPath
+### hookImagesURLPath (optional)
Optional field to replace the HookOS image.
This field is useful if you want to provide a customized HookOS image or simply host the standard image locally.
See [Artifacts]({{< relref "../../osmgmt/artifacts/#hookos-kernel-and-initial-ramdisk-for-bare-metal" >}}) for details.
@@ -291,7 +291,7 @@ my-web-server
└── ubuntu-v1.23.7-eks-a-12-amd64.gz
```

-### skipLoadBalancerDeployment
+### skipLoadBalancerDeployment (optional)
Optional field to skip deploying the default load balancer for the Tinkerbell stack.

EKS Anywhere for Bare Metal uses the `kube-vip` load balancer by default to expose the Tinkerbell stack externally.
@@ -303,7 +303,7 @@ In the example, there are `TinkerbellMachineConfig` sections for control plane (
The following fields identify information needed to configure the nodes in each of those groups.
>**_NOTE:_** Currently, you can only have one machine group for all machines in the control plane, although you can have multiple machine groups for the workers.
>
-### hardwareSelector
+### hardwareSelector (optional)
Use fields under `hardwareSelector` to add key/value pair labels to match particular machines that you identified in the CSV file where you defined the machines in your cluster.
Choose any label name you like.
For example, if you had added the label `node=cp-machine` to the machines listed in your CSV file that you want to be control plane nodes, the following `hardwareSelector` field would cause those machines to be added to the control plane:
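
Following the `node=cp-machine` example, such a selector might look like this sketch:

```yaml
hardwareSelector:
  node: cp-machine # matches machines labeled node=cp-machine in the CSV file
```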
@@ -332,7 +332,7 @@ See TinkerbellTemplateConfig fields below.
EKS Anywhere will generate default templates based on `osFamily` during the `create` command.
You can override this default template by providing your own template here.

-### users
+### users (optional)
The name of the user you want to configure to access your virtual machines through SSH.

The default is `ec2-user`.
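
A hedged sketch of a users entry, using the default user name and a placeholder public key:

```yaml
users:
- name: ec2-user
  sshAuthorizedKeys:
  - "ssh-rsa AAAAB3NzaC1yc2E<placeholder-public-key> user@admin-machine" # placeholder key
```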
@@ -472,7 +472,7 @@ spec:

Pay special attention to the `BOOTCONFIG_CONTENTS` environment section below if you wish to set up console redirection for the kernel and systemd.
If you are only using a direct attached monitor as your primary display device, no additional configuration is needed here.
-However, if you need all boot output to be shown via a servers serial console for example, extra configuration should be provided inside `BOOTCONFIG_CONTENTS`.
+However, if you need all boot output to be shown via a server's serial console for example, extra configuration should be provided inside `BOOTCONFIG_CONTENTS`.

An empty `kernel {}` key is provided below in the example; inside this key is where you will specify your console devices.
You may specify multiple comma-delimited console devices in quotes to a console key, as such: `console = "tty0", "ttyS0,115200n8"`.
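
Putting that together, a sketch of a `BOOTCONFIG_CONTENTS` entry with serial console redirection, assuming it sits under an action's `environment` map as in the template example above:

```yaml
environment:
  BOOTCONFIG_CONTENTS: |
    kernel {
        console = "tty0", "ttyS0,115200n8"
    }
```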
22 changes: 11 additions & 11 deletions docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
@@ -190,7 +190,7 @@ creation process are [here]({{< relref "./cloudstack-prereq/." >}})
### controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See `CloudStackMachineConfig Fields` below.

-### controlPlaneConfiguration.taints
+### controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint, `node-role.kubernetes.io/master`. The default control plane components will tolerate the provided taints.
@@ -201,7 +201,7 @@ Modifying the taints associated with the control plane configuration will cause
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
>
-### controlPlaneConfiguration.labels
+### controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.

@@ -215,13 +215,13 @@ The `ds.meta_data.failuredomain` value will be replaced with a failuredomain nam
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.

-### datacenterRef
+### datacenterRef (required)
Refers to the Kubernetes object with CloudStack environment specific configuration. See `CloudStackDatacenterConfig Fields` below.

-### externalEtcdConfiguration.count
+### externalEtcdConfiguration.count (optional)
Number of etcd members.

-### externalEtcdConfiguration.machineGroupRef
+### externalEtcdConfiguration.machineGroupRef (optional)
Refers to the Kubernetes object with CloudStack specific configuration for your etcd members. See `CloudStackMachineConfig Fields` below.
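
A hedged sketch of an external etcd stanza (a count of 3 is a common choice for quorum; the ref name is illustrative):

```yaml
externalEtcdConfiguration:
  count: 3
  machineGroupRef:
    kind: CloudStackMachineConfig
    name: my-cluster-etcd # assumed name
```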

### kubernetesVersion (required)
@@ -235,7 +235,7 @@ If this is a standalone cluster or if it were serving as the management cluster
This takes in a list of node groups that you can define for your workers.
You may define one or more worker node groups.

-### workerNodeGroupConfigurations.count
+### workerNodeGroupConfigurations.count (required)
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.

Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
@@ -246,20 +246,20 @@ Refers to the Kubernetes object with CloudStack specific configuration for your
### workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)

-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group's autoscaling configuration.

-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
+### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group's autoscaling configuration.

-### workerNodeGroupConfigurations.taints
+### workerNodeGroupConfigurations.taints (optional)
A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.

-### workerNodeGroupConfigurations.labels
+### workerNodeGroupConfigurations.labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
A special label value is supported by the CAPC provider:
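
Based on the description that follows, the special value is the `ds.meta_data.failuredomain` placeholder; a sketch (the label key is an assumption):

```yaml
workerNodeGroupConfigurations:
- name: md-0
  labels:
    cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain # assumed key; value replaced with the failure domain name
```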
@@ -273,7 +273,7 @@ The `ds.meta_data.failuredomain` value will be replaced with a failuredomain nam
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.

-### workerNodeGroupConfigurations.kubernetesVersion
+### workerNodeGroupConfigurations.kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. Supported values: `1.28`, `1.27`, `1.26`, `1.25`, `1.24`

## CloudStackDatacenterConfig