[Feature Request] NodeSelector for LoadBalancer Service Pod #4756

Open
tgdfool2 opened this issue Nov 22, 2024 · 4 comments · May be fixed by #4793

@tgdfool2

Description

Hi Everyone,

In our setup, we have VLAN interfaces that are only available/configured on the Kubernetes Master Nodes.

The following NetworkAttachmentDefinition has been created:

---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: vlan201-external-subnet
  namespace: kube-system
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "bond0.201",
      "mode": "bridge",
      "ipam": {
        "type": "kube-ovn",
        "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
        "provider": "vlan201-external-subnet.kube-system"
      }
    }'

The VpcNatGateway is deployed with the following selector, which forces it to run on kube-ovn masters:

---
kind: VpcNatGateway
apiVersion: kubeovn.io/v1
metadata:
  name: vlan201-nat-gw
spec:
  vpc: vlan201-vpc
  subnet: vlan201-internal-subnet
  lanIp: 100.105.0.100
  selector:
    - "kubernetes.io/os: linux"
    - "kube-ovn/role: master"
  externalSubnets:
    - vlan201-external-subnet

The issue comes when a LoadBalancer Service is created and requests an IP in the vlan201-external-subnet Subnet: if the lb-svc-* Pod gets scheduled on a node that is not a kube-ovn master, it fails to get an IP. Manually editing the Deployment and specifying a nodeSelector fixes this:

      nodeSelector:
        kube-ovn/role: master
        kubernetes.io/os: linux
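
For clarity, that snippet goes under spec.template.spec of the generated lb-svc-* Deployment; the name below is just an example, the real one follows the lb-svc-* naming:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: lb-svc-my-service   # example name only
  namespace: default
spec:
  template:
    spec:
      nodeSelector:
        kube-ovn/role: master
        kubernetes.io/os: linux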

Is there already a way to specify this nodeSelector? Looking at the source code, it does not seem to be the case:

dp = &v1.Deployment{
  ObjectMeta: metav1.ObjectMeta{
    Name: name,
  },
  Spec: v1.DeploymentSpec{
    Replicas: ptr.To(int32(1)),
    Selector: &metav1.LabelSelector{
      MatchLabels: labels,
    },
    Template: corev1.PodTemplateSpec{
      ObjectMeta: metav1.ObjectMeta{
        Labels:      labels,
        Annotations: podAnnotations,
      },
      // note: the PodSpec never sets NodeSelector, so the lb-svc pod can land on any node
      Spec: corev1.PodSpec{
        Containers: []corev1.Container{
          {
            Name:            "lb-svc",
            Image:           vpcNatImage,
            Command:         []string{"sleep", "infinity"},
            ImagePullPolicy: corev1.PullIfNotPresent,
            SecurityContext: &corev1.SecurityContext{
              Privileged:               ptr.To(true),
              AllowPrivilegeEscalation: ptr.To(true),
            },
            Resources: resources,
          },
        },
        TerminationGracePeriodSeconds: ptr.To(int64(0)),
      },
    },
    Strategy: v1.DeploymentStrategy{
      Type: v1.RecreateDeploymentStrategyType,
    },
  },
}
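
If support for this were added, the Deployment-side change looks small: only the PodSpec's NodeSelector field needs to be set. A rough sketch (the package and helper name are my own placeholders, not existing Kube-OVN code):

package controller

import (
    v1 "k8s.io/api/apps/v1"
)

// applyNodeSelector sets spec.template.spec.nodeSelector on the generated
// lb-svc Deployment when a selector is configured, and leaves scheduling
// untouched otherwise. Hypothetical helper, only to illustrate the idea;
// where the nodeSelector value comes from is the open question.
func applyNodeSelector(dp *v1.Deployment, nodeSelector map[string]string) {
    if len(nodeSelector) == 0 {
        return
    }
    dp.Spec.Template.Spec.NodeSelector = nodeSelector
}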

Thanks in advance for your support!

Who will benefit from this feature?

No response

Anything else?

No response

tgdfool2 added the feature New network feature label Nov 22, 2024
@hongzhen-ma
Collaborator

It seems that there is indeed no way to pass labels from lb-svc down to its Deployment.

hongzhen-ma self-assigned this Nov 22, 2024
@tgdfool2
Author

Thanks for confirming!

I'm not sure what the best way to enable this kind of configuration would be; maybe a new ConfigMap similar to the ovn-vpc-nat-config one (https://kube-ovn.readthedocs.io/zh-cn/latest/en/guide/vpc/#enabling-the-vpc-gateway)?

@bobz965
Collaborator

bobz965 commented Nov 25, 2024

Maybe handling it the same way as the vpc-nat-gw pod would be better.

hongzhen-ma linked a pull request (#4793) Dec 5, 2024 that will close this issue
@hongzhen-ma
Collaborator

Maybe handling it the same way as the vpc-nat-gw pod would be better.

That really is the simplest and most reasonable way to resolve this problem.
The vpc-nat-gw pod is created from the vpc-nat-gateways CRD, and that CRD has a selector field, so it is easy to pass nodeSelector values to the pod.

What we need for lb-svc is the same kind of nodeSelector as vpc-nat-gw has, but there is no CRD field to carry this value.
Since the image for lb-svc is configured in the ovn-vpc-nat-config ConfigMap, it may be a good approach to put the nodeSelector in the same ConfigMap, as follows:

apiVersion: v1
data:
  image: docker.io/kubeovn/vpc-nat-gateway:v1.14.0
  nodeSelector: |
    kubernetes.io/hostname: kube-ovn-control-plane
    kubernetes.io/os: linux
kind: ConfigMap
metadata:
  name: ovn-vpc-nat-config
  namespace: kube-system
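
On the controller side, parsing that value could be as simple as splitting the multi-line string into key/value pairs. A rough sketch (package and function name are placeholders, not final code):

package controller

import "strings"

// parseNodeSelector turns the multi-line "key: value" text stored under the
// nodeSelector key of ovn-vpc-nat-config into a map that can be set as
// spec.template.spec.nodeSelector on the lb-svc Deployment. Sketch only.
func parseNodeSelector(raw string) map[string]string {
    selector := map[string]string{}
    for _, line := range strings.Split(raw, "\n") {
        line = strings.TrimSpace(line)
        if line == "" {
            continue
        }
        key, value, found := strings.Cut(line, ":")
        if !found {
            continue
        }
        selector[strings.TrimSpace(key)] = strings.TrimSpace(value)
    }
    return selector
}

The resulting map would then be applied to the lb-svc Deployment in the same place where the image from this ConfigMap is already used.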
