
Feature: Add support for dualstack loadbalancers #202

Open · wants to merge 2 commits into base: main

Conversation

@AlexanderLieret

While trying to deploy this Helm chart on a dual-stack k3s setup, I noticed an error.
The integrated load balancer requires a single LoadBalancer service per port and does not work with the split per-IP-family services. The external IP stays <pending> because no free port is left to bind.

This PR fixes the issue by adding a config option to create dual-stack LoadBalancer services.

A minimal working example on k3s is:

dualStack:
  enabled: true
  loadBalancer: true

serviceDns:
  mixedService: false
  type: LoadBalancer

@MoJo2600 (Owner)

Hello,

thanks for your pull request. I'm a little confused about how your change relates to the previously merged PR 'Add dualstack service support #187'. I'm sorry, but I don't have a k3s cluster to test with. Which combinations of LoadBalancers are needed, and what does the result look like? Maybe you could post your service description from Kubernetes so I understand the difference better? I just want to make sure I understand it, so I might be able to add some form of documentation on how to configure LoadBalancers for this chart.

@AlexanderLieret (Author)

AlexanderLieret commented Jan 13, 2022

I just want to make sure I understand it, so I might be able to add some form of documentation on how to configure LoadBalancers for this chart.

I hope the following explanation will help with this.

My setup consists of a single Ubuntu VM running a default k3s installation in dual-stack mode. The IP addresses are 192.168.122.17 and fd17::2.

Current behavior

This minimal configuration shows the problem in action. On k3s, mixedService must be disabled.

The dual-stack feature implemented by #187 creates separate services for each IP family.
In this scenario it creates six individual services of type LoadBalancer. Depending on the load balancer implementation, this is necessary.
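
For illustration, the per-family service rendered for the IPv6 DNS endpoint might look roughly like this (a minimal sketch using the standard Kubernetes dual-stack Service fields; the name and selector are assumptions, not the chart's actual output):

apiVersion: v1
kind: Service
metadata:
  name: my-pihole-dns-udp-ipv6     # hypothetical name, mirroring the listing below
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack      # pin this Service to a single IP family
  ipFamilies:
    - IPv6
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
  selector:
    app: pihole                    # illustrative selector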

pihole.yaml

dualStack:
  enabled: true

serviceDns:
  mixedService: false
  type: LoadBalancer

serviceDhcp:
  type: LoadBalancer
$ helm install my-pihole -f pihole.yaml mojo2600/pihole
[...]
$ kubectl get pods,services
NAME                                     READY   STATUS    RESTARTS   AGE
pod/svclb-my-pihole-dns-udp-sbmxb        0/1     Pending   0          4m1s
pod/svclb-my-pihole-dns-tcp-ipv6-p9xkw   0/1     Pending   0          4m
pod/svclb-my-pihole-dhcp-8cfdq           0/1     Pending   0          4m
pod/svclb-my-pihole-dhcp-ivp6-9tt5t      1/1     Running   0          4m1s
pod/svclb-my-pihole-dns-udp-ipv6-j6x5f   1/1     Running   0          4m1s
pod/svclb-my-pihole-dns-tcp-bcthd        1/1     Running   0          4m1s
pod/my-pihole-9475685d-t76lt             1/1     Running   0          4m

NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
service/kubernetes               ClusterIP      10.43.0.1       <none>           443/TCP          21d
service/my-pihole-web            ClusterIP      10.43.77.36     <none>           80/TCP,443/TCP   4m1s
service/my-pihole-dns-udp        LoadBalancer   10.43.37.52     <pending>        53:30873/UDP     4m1s
service/my-pihole-dns-tcp-ipv6   LoadBalancer   fd17:43::5cc6   <pending>        53:31312/TCP     4m1s
service/my-pihole-dhcp           LoadBalancer   10.43.124.203   <pending>        67:30036/UDP     4m1s
service/my-pihole-dhcp-ivp6      LoadBalancer   fd17:43::1a5    fd17::2          67:32142/UDP     4m1s
service/my-pihole-dns-udp-ipv6   LoadBalancer   fd17:43::e8b    fd17::2          53:32414/UDP     4m1s
service/my-pihole-dns-tcp        LoadBalancer   10.43.218.101   192.168.122.17   53:32168/TCP     4m1s

With only a single node, each port (like 53/udp) can be bound by only one load balancer: the k3s ServiceLB (klipper-lb) pods use host ports, so two LoadBalancer services requesting the same port conflict. The other load balancer pods wait indefinitely for a free port to bind to.

$ kubectl describe pod/svclb-my-pihole-dns-udp-sbmxb
[...]
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  5m     default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  3m45s  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

DNS resolution is not fully working, as the reported status already indicates: the IPv4 query is answered, but the IPv6 query times out.

$ dig pi.hole @192.168.122.17

; <<>> DiG 9.16.1-Ubuntu <<>> pi.hole @192.168.122.17
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48182
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;pi.hole.			IN	A

;; ANSWER SECTION:
pi.hole.		0	IN	A	0.0.0.0

;; Query time: 0 msec
;; SERVER: 192.168.122.17#53(192.168.122.17)
;; WHEN: Thu Jan 13 13:57:01 UTC 2022
;; MSG SIZE  rcvd: 52

$ dig pi.hole @fd17::2

; <<>> DiG 9.16.1-Ubuntu <<>> pi.hole @fd17::2
;; global options: +cmd
;; connection timed out; no servers could be reached

With this pull request

Using the dual-stack load balancer feature from this PR, a single dual-stack LoadBalancer service is created per port.

For backwards compatibility, the config value is disabled by default.
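
For illustration, with this option enabled the rendered service might instead look roughly like this (a minimal sketch using the standard Kubernetes dual-stack Service fields; the name and selector are again assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-pihole-dns-udp            # hypothetical name
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack   # one Service serving both IP families
  ipFamilies:
    - IPv4
    - IPv6
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
  selector:
    app: pihole                      # illustrative selector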

pihole.yaml

dualStack:
  enabled: true
  loadBalancer: true

serviceDns:
  mixedService: false
  type: LoadBalancer

serviceDhcp:
  type: LoadBalancer
$ helm install my-pihole -f pihole.yaml ./pihole-kubernetes/charts/pihole
[...]
$ kubectl get pods,services
NAME                                READY   STATUS    RESTARTS   AGE
pod/svclb-my-pihole-dns-udp-llwp6   1/1     Running   0          78s
pod/svclb-my-pihole-dns-tcp-xc9hs   1/1     Running   0          77s
pod/svclb-my-pihole-dhcp-74nbw      1/1     Running   0          77s
pod/my-pihole-79f567bb6b-tcxg2      1/1     Running   0          77s

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP              PORT(S)          AGE
service/kubernetes          ClusterIP      10.43.0.1       <none>                   443/TCP          21d
service/my-pihole-web       ClusterIP      10.43.192.250   <none>                   80/TCP,443/TCP   78s
service/my-pihole-dns-udp   LoadBalancer   10.43.47.51     192.168.122.17,fd17::2   53:30749/UDP     78s
service/my-pihole-dns-tcp   LoadBalancer   10.43.198.1     192.168.122.17,fd17::2   53:31005/TCP     78s
service/my-pihole-dhcp      LoadBalancer   10.43.170.9     192.168.122.17,fd17::2   67:30152/UDP     78s

All load balancers are running and are bound to the correct IP addresses. DNS resolution works as expected.

$ dig pi.hole @192.168.122.17

; <<>> DiG 9.16.1-Ubuntu <<>> pi.hole @192.168.122.17
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23517
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;pi.hole.			IN	A

;; ANSWER SECTION:
pi.hole.		0	IN	A	0.0.0.0

;; Query time: 3 msec
;; SERVER: 192.168.122.17#53(192.168.122.17)
;; WHEN: Thu Jan 13 13:53:26 UTC 2022
;; MSG SIZE  rcvd: 52

$ dig pi.hole @fd17::2

; <<>> DiG 9.16.1-Ubuntu <<>> pi.hole @fd17::2
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29873
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;pi.hole.			IN	A

;; ANSWER SECTION:
pi.hole.		0	IN	A	0.0.0.0

;; Query time: 0 msec
;; SERVER: fd17::2#53(fd17::2)
;; WHEN: Thu Jan 13 13:53:33 UTC 2022
;; MSG SIZE  rcvd: 52

Add config value `dualStack.loadBalancer` to toggle dual-stack support
for load balancers (the LB needs to support this feature).
@DerRockWolf (Contributor)

DerRockWolf commented Feb 20, 2022

Which load balancer implementation are you using, is it the default k3s one? With MetalLB my former PR works, because MetalLB makes it possible to bind multiple LBs to one port. Since its last release, MetalLB also supports true dual-stack LBs.

Apart from this, I think your implementation isn't actually working, because the spec.loadBalancerIP field can't be used for dual-stack services and will be deprecated in the future.

Additional information for MetalLB:

MetalLB supports spec.loadBalancerIP and a custom metallb.universe.tf/loadBalancerIPs annotation. The annotation also supports a comma separated list of IPs to be used in case of Dual Stack services.

Please note that spec.LoadBalancerIP is planned to be deprecated in k8s apis.
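
For illustration, a dual-stack service using that annotation might look roughly like this (a sketch based only on the documentation quoted above; the name and IPs are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-pihole-dns-udp            # illustrative name
  annotations:
    # comma-separated dual-stack IPs, per the MetalLB docs quoted above
    metallb.universe.tf/loadBalancerIPs: "192.168.122.17,fd17::2"
spec:
  type: LoadBalancer
  # ... remaining spec as in the dual-stack example above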

@AlexanderLieret (Author)

I use the default integrated load balancer of k3s.

Apart from this, I think your implementation isn't actually working, because the spec.loadBalancerIP field can't be used for dual-stack services

It does work for me.

The soon-to-be-deprecated spec.loadBalancerIP field is what the current templates use. Switching to the new mechanism would address your concern and clean up my code.

@DerRockWolf (Contributor)

FYI, I've opened #214 to support dual-stack LB services and remove spec.loadBalancerIP.

@MoJo2600 (Owner)

Hey @DerRockWolf @AlexanderLieret, should we combine this PR and #214 and do one breaking change which adds this feature and documents the change for everyone that wants to upgrade? How could we achieve this?

@DerRockWolf (Contributor)

@MoJo2600 Sure!
I think there isn't anything to combine if we want to do it as a breaking change (i.e., remove the separate-LBs feature). I've implemented this in #214.
