Update CNI docs #7335

Open · wants to merge 7 commits into `main`
1 change: 1 addition & 0 deletions docs/content/en/docs/getting-started/docker/_index.md
@@ -130,6 +130,7 @@ sudo install -m 0755 ./kubectl /usr/local/bin/kubectl
spec: {}

```
> Note: You can also use [`kindnetd`](https://www.tkng.io/cni/kindnet/) as an alternative to `cilium` under the `cniConfig` field. Kindnetd can only be used with the Docker provider.
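A minimal sketch of the relevant portion of the cluster spec with `kindnetd` selected (the surrounding fields are unchanged from the generated default):

```yaml
  clusterNetwork:
    cniConfig:
      kindnetd: {}   # replaces the default `cilium: {}` selection
```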

1. Create Docker Cluster. Note that the following command may take several minutes to complete. You can run the command with `-v 6` to increase logging verbosity and see the progress of the command.
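The create command referenced in this step looks like the following (a sketch; the config file name is illustrative):

```bash
eksctl anywhere create cluster -f my-cluster-name.yaml -v 6
```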

50 changes: 14 additions & 36 deletions docs/content/en/docs/getting-started/optional/cni.md
@@ -5,20 +5,19 @@ weight: 15
aliases:
/docs/reference/clusterspec/optional/cni/
description: >
EKS Anywhere cluster YAML CNI plugin specification reference
---

### Specifying CNI Plugin in EKS Anywhere cluster YAML spec

#### Provider support details
| | vSphere | Bare Metal | Nutanix | CloudStack | Snow |
|:--------------:|:-------:|:----------:|:-------:|:----------:|:----:|
| **Supported?** | ✓ | ✓ | ✓ | ✓ | ✓ |

> **Review comment (Contributor):** Given we only support a single CNI it must work on every provider so this table is superfluous. Mind removing it?

EKS Anywhere supports Cilium as its CNI plugin on all providers. The plugin cannot be changed by modifying the `cniConfig` field; however, EKS Anywhere Cilium can be replaced with a custom CNI after the cluster has been created. See [Use a custom CNI](#use-a-custom-cni) for more information.
Up until the 0.7.x releases, the plugin had to be specified using the `cni` field in the cluster YAML spec.
Starting with release 0.8.0, the plugin should be specified using the new `cniConfig` field as follows:

- For selecting Cilium as the CNI plugin:
```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium: {}
```
EKS Anywhere selects this as the default plugin when generating a cluster config.
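For reference, a starting config like the one above can be produced with the generate command (a sketch; the cluster name and provider are illustrative):

```bash
eksctl anywhere generate clusterconfig my-cluster-name --provider docker > my-cluster-name.yaml
```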


### Policy Configuration options for Cilium plugin

Cilium supports policy enforcement modes that determine the traffic allowed between pods.
The allowed values for this mode are: `default`, `always`, and `never`.
Please refer to the official [Cilium documentation]({{< cilium "policy/intro/" >}}) for more details on how each mode affects
the communication within the cluster, and choose a mode accordingly.
You can choose not to set this field, in which case Cilium is launched with the `default` mode.
Starting with release 0.8.0, Cilium's policy enforcement mode can be set through the cluster YAML spec
as follows:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        policyEnforcementMode: "always"
```

@@ -133,12 +111,12 @@

The policy enforcement mode for Cilium can be changed as a part of a cluster upgrade
through the CLI upgrade command.
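A minimal sketch of applying such a change with the upgrade command, assuming the modified spec is saved as `my-cluster-name.yaml` (the file name is illustrative):

```bash
eksctl anywhere upgrade cluster -f my-cluster-name.yaml
```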
1. To `always` mode: When switching from `default`/`never` to `always` mode,
EKS Anywhere will create the required NetworkPolicy objects for its core components (listed above).
This will ensure that the cluster gets upgraded successfully, but it is up to the user to create
the NetworkPolicy objects required for the user workloads (see the sketch after this list).

2. From `always` mode: When switching from `always` to `default` mode, EKS Anywhere
will not delete any of the existing NetworkPolicy objects, including the ones required
for EKS Anywhere components (listed above). The user must delete NetworkPolicy objects as needed.
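For reference, a minimal sketch of the kind of NetworkPolicy a user workload might need under `always` mode; the name, namespace, and selectors are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # illustrative name
  namespace: my-app            # illustrative namespace
spec:
  podSelector: {}              # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}          # allow ingress only from pods in the same namespace
```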

@@ -234,7 +212,7 @@ immediately install a CNI after uninstalling EKS Anywhere Cilium.
{{% /alert %}}

{{% alert title="Warning" color="warning" %}}
Prior to v0.15.0, clusters created using Kubernetes API-compatible tooling such as kubectl, Terraform, or GitOps that removed the EKS Anywhere Cilium CNI must manually populate their `cluster.anywhere.eks.amazonaws.com` object with the following annotation to ensure EKS Anywhere does not attempt to re-install EKS Anywhere Cilium.

```
anywhere.eks.amazonaws.com/eksa-cilium: ""
```
{{% /alert %}}

@@ -243,9 +221,9 @@
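One way to apply the annotation, assuming `kubectl` access to the management cluster and a cluster object named `my-cluster-name` (the name is illustrative):

```bash
kubectl annotate clusters.anywhere.eks.amazonaws.com my-cluster-name \
  'anywhere.eks.amazonaws.com/eksa-cilium='
```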

### Node IPs configuration option

Starting with release v0.10.0, the `node-cidr-mask-size` [flag](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#options)
for the Kubernetes controller manager (kube-controller-manager) is configurable via the EKS Anywhere cluster YAML spec. Since `clusterNetwork.nodes` is an optional field,
it is not generated in the EKS Anywhere spec by the `generate clusterconfig` command. The block for `nodes` needs to be manually added to the cluster YAML spec under the
`clusterNetwork` section:

```yaml
clusterNetwork:
  nodes:
    cidrMaskSize: 24
```

@@ -269,7 +247,7 @@ and the node CIDR mask size is `24`. This ensures the cluster 256 blocks of /24

To support more than 256 nodes, the cluster CIDR block needs to be large, and the node CIDR mask size needs to be small, to support that many IPs.

> **Review comment (Contributor):** Can we rewrite this section:
>
> To support more than 256 nodes adjust the pod CIDR block and node CIDR mask size.
>
> | Pod CIDR | Node CIDR Mask | Max Nodes | Max Pods/Node* |
> |:--------------:|:--------------:|:---------:|:--------------:|
> | 192.168.0.0/16 | 24 | 256 | 256 |
> | 192.168.0.0/16 | 25 | 512 | 128 |
> | 192.168.0.0/15 | 24 | 512 | 256 |
> | 192.168.0.0/15 | 25 | 1024 | 128 |
>
> \*Includes system pods.

For instance, to support 1024 nodes, a user can do any of the following things:
- Set the pods CIDR block to `192.168.0.0/16` and the node CIDR mask size to 26
- Set the pods CIDR block to `192.168.0.0/15` and the node CIDR mask size to 25
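A quick way to sanity-check these numbers (a sketch; max nodes = 2^(node CIDR mask size - pod CIDR prefix length)):

```bash
echo $(( 2 ** (26 - 16) ))   # 1024 nodes; each node gets a /26 (64 pod IPs)
echo $(( 2 ** (25 - 15) ))   # 1024 nodes; each node gets a /25 (128 pod IPs)
```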
