
IPv6-only support / improvements #9372

Open
3 tasks
Tracked by #9899
lion7 opened this issue Sep 25, 2024 · 3 comments

Comments


lion7 commented Sep 25, 2024

Bug Report

Description

DNS does not work when Talos is installed on a machine in an IPv6-only network.
I needed multiple customizations and workarounds that, in my opinion, should not be necessary.

Some things that I noticed:

Default nameservers

Issue:
The default nameservers are 1.1.1.1 and 8.8.8.8, which cannot be reached from IPv6.

Workaround:
Configure an IPv6 nameserver using the dashboard (requires a local screen/keyboard or a BMC).
Alternatively, configure a DNS server using the ip=:::::::<dns0-ip>:<dns1-ip>:<ntp0-ip> kernel argument.

Better solution:
Talos should fall back to the IPv6 equivalents of these defaults when no IPv4 address is configured on any interface.
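Until such a fallback exists, the IPv6 equivalents of those defaults can be set explicitly in the machine configuration. A sketch (the addresses shown are Cloudflare's and Google's public IPv6 resolvers; any IPv6-reachable resolver works):

```yaml
# Machine configuration fragment: replace the IPv4-only default
# nameservers with IPv6-reachable resolvers.
machine:
  network:
    nameservers:
      - 2606:4700:4700::1111  # Cloudflare, IPv6 counterpart of 1.1.1.1
      - 2001:4860:4860::8888  # Google, IPv6 counterpart of 8.8.8.8
```

This can be applied at generation time, e.g. as a config patch with `talosctl gen config --config-patch @patch.yaml`.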

Nameservers announced by ND are ignored

Issue:
DNS servers that are announced via ND (the RDNSS option in a SLAAC setup) are currently ignored by Talos.

Workaround:
Override DNS servers using machine configuration.

Better solution:
Talos should use the DNS servers announced via ND, similar to how it uses DNS servers announced via IPv4 DHCP.

IPv6 endpoint for Image Factory

Issue:
Talos Image Factory (factory.talos.dev) has no IPv6 addresses configured.

Workaround:
Run a pull-through registry on a dual-stack server.

Better solution:
Already tracked by siderolabs/image-factory#60

DNS forwarding to host

Issue:
DNS requests are forwarded to the host using a hardcoded 169.254.116.108 IPv4 address, but there is no matching IPv4 route in an IPv6-only environment.

Workaround:
Disable DNS forwarding.

Better solution:
Also use a hardcoded IPv6 address as a fallback.
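For the workaround, host DNS forwarding can be turned off in the machine configuration. A sketch based on the `hostDNS` feature block (field names per recent Talos versions; verify against the machine-config reference for your release):

```yaml
# Machine configuration fragment: keep the host DNS resolver enabled but
# stop forwarding kube-dns traffic to the hardcoded link-local IPv4 address.
machine:
  features:
    hostDNS:
      enabled: true
      forwardKubeDNSToHost: false
```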

Logs

N/A

Output of talosctl get routes | grep -v veth

NODE                                      NAMESPACE   TYPE          ID                                                                   VERSION   DESTINATION                                   GATEWAY                     LINK      METRIC
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   enp38s0/inet6//2a02:****:****:****::/64/256                          1         2a02:****:****:****::/64                                                  enp38s0   256
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   enp38s0/inet6//fe80::/64/256                                         1         fe80::/64                                                                 enp38s0   256
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   enp38s0/inet6/fe80::2ec8:1bff:feab:9e54//1024                        1                                                       fe80::2ec8:1bff:feab:9e54   enp38s0   1024
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/enp38s0/inet6//2a02:****:****:****::/128/0                     1         2a02:****:****:****::/128                                                 enp38s0   0
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/enp38s0/inet6//2a02:****:****:****:d250:99ff:fefa:a836/128/0   1         2a02:****:****:****:d250:99ff:fefa:a836/128                               enp38s0   0
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/enp38s0/inet6//fe80::/128/0                                    1         fe80::/128                                                                enp38s0   0
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/enp38s0/inet6//fe80::d250:99ff:fefa:a836/128/0                 1         fe80::d250:99ff:fefa:a836/128                                             enp38s0   0
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/enp38s0/inet6//ff00::/8/256                                    1         ff00::/8                                                                  enp38s0   256
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/inet4//127.0.0.0/8/0                                           1         127.0.0.0/8                                                               lo        0
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/inet4//127.0.0.1/32/0                                          1         127.0.0.1/32                                                              lo        0
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/inet4//127.255.255.255/32/0                                    1         127.255.255.255/32                                                        lo        0
2a02:****:****:****:d250:99ff:fefa:a836   network     RouteStatus   local/lo/inet6//::1/128/0                                            1         ::1/128                                                                   lo        0

Environment

  • Talos version:
Client:
	Tag:         v1.8.0
	SHA:         5cc935f7
	Built:       
	Go version:  go1.22.7
	OS/Arch:     linux/amd64
Server:
	NODE:        2a02:22a0:eee4:d110:d250:99ff:fefa:a836
	Tag:         v1.8.0
	SHA:         5cc935f7
	Built:       
	Go version:  go1.22.7
	OS/Arch:     linux/amd64
	Enabled:     RBAC
  • Kubernetes version:
Client Version: v1.31.1
Kustomize Version: v5.4.2
Server Version: v1.31.0
  • Platform: metal

Labels: area/networking, size/S, triage/needs-planning
smira (Member) commented Sep 25, 2024

DNS requests are forwarded to the host using a hardcoded 169.254.116.108 IPv4 address, but there is no matching IPv4 route in a IPv6-only environment.

I'm a bit confused by this one, as the address is assigned to the host network. Or do you mean that CoreDNS with pod networking doesn't have an IPv4 address at all, so it can't reach out?

lion7 (Author) commented Sep 25, 2024

DNS requests are forwarded to the host using a hardcoded 169.254.116.108 IPv4 address, but there is no matching IPv4 route in a IPv6-only environment.

I'm a bit confused by this one, as the address is assigned to the host network. Or do you mean that CoreDNS with pod networking doesn't have an IPv4 address at all, so it can't reach out?

Sorry, I should have mentioned that my PodCIDR is indeed IPv6-only, so the pods only have an IPv6 address assigned. My cluster is basically IPv6 single-stack instead of dual-stack.

lion7 (Author) commented Sep 25, 2024

Something else that might be of interest: I'm using the bridge CNI plugin that comes bundled with Talos.
Here is my CNI configuration:

❯ talosctl cat /etc/cni/net.d/bridge-cni.conflist
{
  "cniVersion": "1.0.0",
  "name": "cbr0",
  "plugins": [
    {
      "type": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "2a02:****::****::****::c:0/120"
      },
      "dns": {},
      "isDefaultGateway": true
    }
  ]
}

Since I'm using a GUA IPv6 prefix, the pods can directly access the internet without any need for NAT / masquerading / etc. From a networking perspective this is, I guess, the simplest setup you can get.

Also, since I have no firewall configured (yet), I can actually reach the pods directly from any other server in the world.


Sidenote: to generate the above CNI configuration I had to use a DaemonSet that writes it to /etc/cni/net.d, mounted using a hostPath volume (https://github.com/lion7/bridge-cni).

It would be nice if files could be written to /etc/cni/net.d directly from the machine configuration (right now that path is prohibited from being used), especially now that a few default CNI plugins are bundled with Talos.
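For reference, the DaemonSet workaround boils down to something like the following. This is a minimal sketch only; the names, image, and the ConfigMap assumed to hold the conflist are illustrative, and the real implementation lives in the linked repository:

```yaml
# Sketch of the DaemonSet workaround: a pod on every node copies the CNI
# conflist onto the host's /etc/cni/net.d via a hostPath mount.
# Assumes a ConfigMap "bridge-cni-config" containing bridge-cni.conflist.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: bridge-cni-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: bridge-cni-installer
  template:
    metadata:
      labels:
        app: bridge-cni-installer
    spec:
      hostNetwork: true          # must not depend on the CNI it installs
      containers:
        - name: install
          image: busybox
          command:
            - sh
            - -c
            - cp /config/bridge-cni.conflist /host-cni/ && sleep infinity
          volumeMounts:
            - name: cni-dir
              mountPath: /host-cni
            - name: config
              mountPath: /config
      volumes:
        - name: cni-dir
          hostPath:
            path: /etc/cni/net.d
        - name: config
          configMap:
            name: bridge-cni-config
```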
