docs: add information about KubeSpan ports and topology
Update KubeSpan documentation.

Signed-off-by: Steve Francis <[email protected]>
Signed-off-by: Andrey Smirnov <[email protected]>
steverfrancis authored and smira committed May 1, 2023
1 parent 2bad74d commit f8a7a5b
Showing 3 changed files with 65 additions and 32 deletions.
26 changes: 12 additions & 14 deletions website/content/v1.5/learn-more/kubespan.md
The key pieces of information needed for WireGuard generally are:
- an IP address and port of the host you wish to connect to

The latter is really only required of _one_ side of the pair.
Once traffic is received, that information is learned and updated by WireGuard automatically.

Kubernetes, though, also needs to know which traffic goes to which WireGuard peer.
Because this information may be dynamic, we need a way to keep this information up to date.

If we already have a connection to Kubernetes, it's fairly easy: we can just keep that information in Kubernetes.
Otherwise, we have to have some way to discover it.

Talos Linux implements a multi-tiered approach to gathering this information.
Each tier can operate independently, but combining the mechanisms produces a more robust set of connection criteria.

These mechanisms are:

- an external service
- a Kubernetes-based system

See [discovery service]({{< relref "../talos-guides/discovery" >}}) to learn more about the external service.

The Kubernetes-based system utilizes annotations on Kubernetes Nodes which describe each node's public key and local addresses.

On top of this, KubeSpan can optionally route Pod subnets.
This is usually taken care of by the CNI, but in many situations the CNI is unable to do this itself across networks.
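As an illustration (the addresses and CIDRs below are invented, and the real entries are managed by KubeSpan automatically), routing a peer's Pod subnet amounts to including that subnet in the peer's WireGuard `AllowedIPs` alongside its KubeSpan address:

```shell
# Illustrative only: hypothetical addresses; KubeSpan manages AllowedIPs itself.
peer_kubespan_ip="fd7f:175a:b97c:5602::2/128"   # hypothetical KubeSpan ULA address
peer_pod_cidr="10.244.1.0/24"                    # hypothetical pod subnet
printf 'AllowedIPs = %s, %s\n' "$peer_kubespan_ip" "$peer_pod_cidr"
```

With both entries present, traffic for pods on the remote node is steered into the tunnel along with node-to-node traffic.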

## NAT, Multiple Routes, Multiple IPs

For instance, a node sitting on the same network might see its peer as `192.168.
We need to be able to handle any number of addresses and ports, and we also need to have a mechanism to _try_ them.
WireGuard only allows us to select one at a time.

KubeSpan implements a controller which continuously discovers and rotates these IP:port pairs until a connection is established.
It then starts trying again if that connection ever fails.
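A toy sketch of that rotation loop (the endpoints and the connectivity check below are invented; the real controller probes WireGuard handshake state rather than calling a shell function):

```shell
# Hypothetical endpoint rotation: cycle through candidate IP:port pairs
# until one connects. try_endpoint stands in for a handshake check.
candidates="192.168.2.10:51820 147.75.2.43:51820 10.21.4.1:51820"
try_endpoint() {
  # pretend only the second candidate is reachable
  [ "$1" = "147.75.2.43:51820" ]
}
for ep in $candidates; do
  if try_endpoint "$ep"; then
    echo "connected via $ep"
    break
  fi
done
```

The real controller additionally re-enters this loop whenever the established connection fails.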

## Packet Routing

After we have established a WireGuard connection, we have to make sure that the right packets get sent to the WireGuard interface.

WireGuard supplies a convenient facility for tagging packets which come from _it_, which is great.
But in our case, we need to allow traffic which neither comes from WireGuard nor is destined for another Kubernetes node to flow through the normal mechanisms.
8 changes: 4 additions & 4 deletions website/content/v1.5/talos-guides/discovery.md
cluster:
disabled: true
```
Disabling all registries effectively disables member discovery.
> Note: An enabled discovery service is required for [KubeSpan]({{< relref "../talos-guides/network/kubespan/" >}}) to function correctly.
The `Kubernetes` registry uses Kubernetes `Node` resource data and additional Talos annotations:

In order for nodes to communicate to the discovery service, they must be able to

## Resource Definitions

Talos provides resources that can be used to introspect the discovery and KubeSpan features.

### Discovery

Node identity is preserved across reboots and upgrades, but it is regenerated if

#### Affiliates

An affiliate is a proposed member: the node has the same cluster ID and secret.

```sh
$ talosctl get affiliates
title: "KubeSpan"
description: "Learn to use KubeSpan to connect Talos Linux machines securely across networks."
aliases:
- ../../guides/kubespan
- ../../kubernetes-guides/network/kubespan
---

KubeSpan is a feature of Talos that automates the setup and maintenance of a full mesh [WireGuard](https://www.wireguard.com) network for your cluster, giving you the ability to operate hybrid Kubernetes clusters that can span the edge, datacenter, and cloud.
Management of keys and discovery of peers can be completely automated, making it simple to create hybrid clusters.

KubeSpan consists of client code in Talos Linux, as well as a [discovery service]({{< relref "../discovery" >}}) that enables clients to securely find each other.
Sidero Labs operates a free Discovery Service, but the discovery service may, with a commercial license, be operated by your organization and can be [downloaded here](https://github.com/siderolabs/discovery-service).

## Video Walkthrough

To learn more about KubeSpan, see the video below:

<iframe width="560" height="315" src="https://www.youtube.com/embed/lPl3u9BN7j4" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

To see a live demo of KubeSpan, see one of the videos below:

<iframe width="560" height="315" src="https://www.youtube.com/embed/RRk8gYzRHJg" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

<iframe width="560" height="315" src="https://www.youtube.com/embed/sBKIFLhC9MQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Network Requirements

KubeSpan uses **UDP port 51820** to carry all KubeSpan encrypted traffic.
Because UDP traversal of firewalls is often lenient, and the Discovery Service communicates the apparent IP address of all peers to all other peers, KubeSpan will often work automatically, even when each node is behind its own firewall.
However, when both ends of a KubeSpan connection are behind firewalls, the connection may not be established correctly: it depends on each end sending out packets within a limited time window.

Thus, the best practice is to ensure that at least one end of every possible node-to-node connection allows inbound traffic on UDP port 51820.

For example, if control plane nodes are running in a corporate data center, behind firewalls, KubeSpan connectivity will work correctly so long as worker nodes on the public Internet can receive packets on UDP port 51820.
(Note that the workers will also need to accept traffic on TCP port 50000 for initial configuration via `talosctl`.)

An alternative topology would be to run control plane nodes in a public cloud, and allow inbound UDP port 51820 to the control plane nodes.
Workers could be behind firewalls, and KubeSpan connectivity will be established.
Note that if workers are in different locations, behind different firewalls, KubeSpan connectivity between the workers *should* still be established correctly, but may require opening the KubeSpan UDP port on the local firewalls as well.
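The rule of thumb in the topologies above can be stated compactly: a tunnel can reliably form when at least one side accepts inbound UDP 51820. A small sketch (the scenarios in the comments are just the examples above):

```shell
# Reachability rule of thumb: at least one side must accept inbound UDP 51820.
can_connect() {
  # $1 and $2 are "open" or "filtered" for each side's inbound UDP 51820
  [ "$1" = "open" ] || [ "$2" = "open" ]
}
can_connect filtered open && echo "firewalled control plane, open worker: ok"
can_connect open filtered && echo "open control plane, firewalled worker: ok"
can_connect filtered filtered || echo "both filtered: may not establish"
```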

### Caveats

#### Kubernetes API Endpoint Limitations

When the Kubernetes API endpoint is an IP address that is **not** part of KubeSpan, but is forwarded on to the KubeSpan address of a control plane node without changing the source address, worker nodes will fail to join the cluster.
In such a case, the control plane node has no way to determine whether the packet arrived on the private KubeSpan address or the public IP address.
If the source of the packet was a KubeSpan member, the reply will be KubeSpan-encapsulated, and thus not translated to the public IP, so the control plane will reply to the session from the wrong address.

This situation is seen, for example, when the Kubernetes API endpoint is the public IP of a VM in GCP or Azure for a single node control plane.
The control plane will receive packets on the public IP, but will reply from its KubeSpan address.
The workaround is to create a load balancer to terminate the Kubernetes API endpoint.
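A toy model of the failure mode (addresses invented): the worker's connection tracking expects the reply to come from the address it sent the request to, so a reply sourced from the KubeSpan address is discarded:

```shell
# Hypothetical addresses: the worker sent its request to the public IP,
# but the control plane replies from its KubeSpan address.
request_dst="203.0.113.10"   # public IP the worker connected to
reply_src="fd7f:175a::1"     # control plane's KubeSpan address
if [ "$request_dst" != "$reply_src" ]; then
  echo "reply dropped: source does not match the address the worker used"
fi
```

Terminating the API endpoint on a load balancer avoids this, because replies then return via the same translated path the request took.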

#### Digital Ocean Limitations

Digital Ocean assigns an "Anchor IP" address to each droplet.
Talos Linux correctly identifies this as a link-local address, and configures KubeSpan correctly, but this address will often be selected by Flannel or other CNIs as a node's private IP.
Because this address is not routable, nor advertised via KubeSpan, it will break pod-pod communication between nodes.
This can be worked around by assigning a non-Anchor private IP:

`kubectl annotate node do-worker flannel.alpha.coreos.com/public-ip-overwrite=10.116.X.X`

Then restart flannel:
`kubectl delete pods -n kube-system -l k8s-app=flannel`

## Enabling

### Creating a New Cluster

To enable KubeSpan for a new cluster, we can use the `--with-kubespan` flag in `talosctl gen config`.
This will enable peer discovery and KubeSpan.

```yaml
cluster:
service: {}
```
> The default discovery service is an external service hosted by Sidero Labs at `https://discovery.talos.dev/`.
> Contact Sidero Labs if you need to run this service privately.

### Enabling for an Existing Cluster

In order to enable KubeSpan on an existing cluster, enable `kubespan` and `discovery` settings in the machine config for each machine in the cluster (`discovery` is enabled by default):

```yaml
machine:
cluster:

KubeSpan will automatically discover all cluster members, exchange WireGuard public keys, and establish a full mesh network.

There are configuration options available which are not usually required:

```yaml
machine:
The `mtu` setting configures the WireGuard MTU, which defaults to 1420.
This default value of 1420 is safe to use when the underlying network MTU is 1500, but if the underlying network MTU is smaller, the KubeSpanMTU should be adjusted accordingly:
`KubeSpanMTU = UnderlyingMTU - 80`.
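For example, the arithmetic for a network with a 1400-byte underlying MTU, using the 80-byte overhead from the formula above:

```shell
# KubeSpanMTU = UnderlyingMTU - 80
underlying_mtu=1400
kubespan_mtu=$((underlying_mtu - 80))
echo "set KubeSpan MTU to $kubespan_mtu"   # set KubeSpan MTU to 1320
```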

The `filters` setting allows hiding some endpoints from being advertised over KubeSpan.
This is useful when some endpoints are known to be unreachable between the nodes, so that KubeSpan doesn't try to establish a connection to them.
Another use-case is hiding some endpoints when nodes can connect over multiple networks, and some of the networks are preferable to others.
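A toy illustration of the effect (the endpoints and the excluded prefix are invented; the real `filters` setting matches CIDRs in the machine config rather than shell patterns):

```shell
# Hypothetical endpoint list; drop a prefix known to be unreachable
# between sites, mimicking what a filters entry would hide.
endpoints="192.168.2.10:51820 10.50.0.7:51820 147.75.2.43:51820"
for ep in $endpoints; do
  case $ep in
    10.50.*) ;;                  # hidden from advertisement
    *) echo "advertise $ep" ;;
  esac
done
```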

spec:

Talos automatically configures a unique IPv6 address for each node within the cluster-specific IPv6 ULA prefix.

The WireGuard private key is generated and never leaves the node, while the public key is published through cluster discovery.

`KubeSpanIdentity` is persisted across reboots and upgrades in [STATE]({{< relref "../../learn-more/architecture/#file-system-partitions" >}}) partition in the file `kubespan-identity.yaml`.
