
[BUG] wrong ippool allocation #4687

Open
dgsponer opened this issue Nov 1, 2024 · 12 comments · May be fixed by #4777

Labels
bug Something isn't working

Comments

dgsponer commented Nov 1, 2024

Kube-OVN Version

v1.12.28

Kubernetes Version

v1.31.1

Operation-system/Kernel Version

PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
6.1.0-25-amd64

Description

With two or more subnets, each with its own assigned IP pool, in the same VPC, Kube-OVN picks the wrong IP pool, depending on which IP pool was created first.

...ResourceVersion:"9366984", FieldPath:""}): type: 'Warning' reason: 'AcquireAddressFailed' NoAvailableAddress
I1031 10:30:54.989888       7 pod.go:519] handle add/update pod ns-vpc-test/deployment-vpc-test-5fd5456c9d-xgqsm
I1031 10:30:54.990702       7 pod.go:576] sync pod ns-vpc-test/deployment-vpc-test-5fd5456c9d-xgqsm allocated
I1031 10:30:54.990764       7 ipam.go:62] allocate v4 , v6 , mac  for ns-vpc-test/deployment-vpc-test-5fd5456c9d-xgqsm from ippool ippool-vpc-test-001 in subnet subnet-vpc-test-002
E1031 10:30:54.990850       7 pod.go:589] NoAvailableAddress
E1031 10:30:54.990907       7 pod.go:406] error syncing 'ns-vpc-test/deployment-vpc-test-5fd5456c9d-xgqsm': NoAvailableAddress, requeuing

Steps To Reproduce

create 1 namespace

add these resources in the new namespace:
create 1 vpc
create 2 subnets (001 and 002)
create 2 ippools (001 assigned to subnet 001, 002 assigned to subnet 002)

create a deployment with the annotation ovn.kubernetes.io/logical_switch pointing to subnet 001; it works.
Now change the logical switch annotation to the other subnet (002); it fails. The only line that changes is shown in the sketch below.
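
A minimal sketch of the changed line between the two attempts (the full manifests are in a later comment; subnet names are taken from them):

# first apply: works
ovn.kubernetes.io/logical_switch: subnet-vpc-test-001
# after the change: fails with NoAvailableAddress
ovn.kubernetes.io/logical_switch: subnet-vpc-test-002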

Current Behavior

Which pool is chosen depends on the order in which the CRs are added; the controller combines the first IP pool with the wrong subnet:
ippool ippool-vpc-test-001 in subnet subnet-vpc-test-002

Expected Behavior

The IP pool belonging to the requested subnet is used:
ippool ippool-vpc-test-001 in subnet subnet-vpc-test-001

@dgsponer dgsponer added the bug Something isn't working label Nov 1, 2024

dosubot bot commented Nov 1, 2024

Hey @dgsponer! I'm here to help you with any bugs, questions, or contributions you have for Kube-OVN. Let's tackle this issue together while we wait for a human maintainer to join in.

The issue you're experiencing with Kube-OVN version v1.12.28 seems to be related to the selection of the wrong IP pool when multiple subnets and IP pools are present in the same VPC. This behavior might be influenced by the order in which the IP pools are created.

Kube-OVN handles IP pool selection by checking namespace labels and selectors to determine the appropriate subnet and IP pool. If a subnet is explicitly bound to a namespace or matches namespace label selectors, it is considered for that namespace. The IP pool selection process involves checking if an IP pool is explicitly associated with the namespace by looking at the Spec.Namespaces field of each IP pool [1].

In your case, it seems that the IP pool selection might not be correctly associating the IP pool with the intended subnet due to the creation order. This could be a limitation or bug in the current version of Kube-OVN.

To address this, you might want to ensure that the namespace and subnet configurations are correctly set up to explicitly bind the correct IP pool to the intended subnet. Additionally, reviewing the logic in the IP pool selection process might help identify any potential misconfigurations or bugs that could be causing this behavior.


@zhangzujian zhangzujian self-assigned this Nov 1, 2024
zhangzujian (Member) commented

I cannot reproduce it. Please provide more detailed information.

@dgsponer dgsponer changed the title [BUG] [BUG] wrong ippool allocation Nov 5, 2024

dgsponer commented Nov 5, 2024

Here are the manifests.
I installed a new k8s cluster with 1.31.2, CNI 1.6.0, and Kube-OVN 1.12.x.
The first attempt works like a charm: the container comes up and runs in subnet-vpc-test-001.
Then delete the deployment and change this line

ovn.kubernetes.io/logical_switch: subnet-vpc-test-001 -> ovn.kubernetes.io/logical_switch: subnet-vpc-test-002

and I get this again:

I1105 08:37:17.015696       6 pod.go:519] handle add/update pod ns-vpc-test/deployment-vpc-test-001-5bb77df576-lpxk7
I1105 08:37:17.015938       6 pod.go:576] sync pod ns-vpc-test/deployment-vpc-test-001-5bb77df576-lpxk7 allocated
I1105 08:37:17.015992       6 ipam.go:62] allocate v4 , v6 , mac  for ns-vpc-test/deployment-vpc-test-001-5bb77df576-lpxk7 from ippool ippool-vpc-test-001 in subnet subnet-vpc-test-002
E1105 08:37:17.016060       6 pod.go:589] NoAvailableAddress
E1105 08:37:17.016130       6 pod.go:406] error syncing 'ns-vpc-test/deployment-vpc-test-001-5bb77df576-lpxk7': NoAvailableAddress, requeuing
I1105 08:37:17.016844       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ns-vpc-test", Name:"deployment-vpc-test-001-5bb77df576-lpxk7", UID:"0b023fe3-77fe-4269-9484-68c61f86abb7", APIVersion:"v1", ResourceVersion:"66972", FieldPath:""}): type: 'Warning' reason: 'AcquireAddressFailed' NoAvailableAddress

The manifests

cat <<EOF | k apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ns-vpc-test
EOF

cat <<EOF | k apply -f -
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpc-vpc-test
spec:
  namespaces:
  - ns-vpc-test
EOF


cat <<EOF | kubectl apply -f -
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet-vpc-test-001
  namespace: ns-vpc-test
spec:
  vpc: vpc-vpc-test
  protocol: IPv4
  cidrBlock: 10.0.0.0/29
  namespaces:
  - ns-vpc-test
EOF


cat <<EOF | kubectl apply -f -
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet-vpc-test-002
  namespace: ns-vpc-test
spec:
  vpc: vpc-vpc-test
  protocol: IPv4
  cidrBlock: 10.0.1.0/29
  namespaces:
  - ns-vpc-test
EOF


cat <<EOF | k apply -f -
apiVersion: kubeovn.io/v1
kind: IPPool
metadata:
  name: ippool-vpc-test-001
spec:
  subnet: subnet-vpc-test-001
  ips:
  - "10.0.0.3..10.0.0.4"
  namespaces:
  - ns-vpc-test
EOF


cat <<EOF | k apply -f -
apiVersion: kubeovn.io/v1
kind: IPPool
metadata:
  name: ippool-vpc-test-002
spec:
  subnet: subnet-vpc-test-002
  ips:
  - "10.0.1.3..10.0.1.4"
  namespaces:
  - ns-vpc-test
EOF


cat <<EOF | k apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vpc-test
  name: deployment-vpc-test-001
  namespace: ns-vpc-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vpc-test
  template:
    metadata:
      annotations:
        ovn.kubernetes.io/logical_switch: subnet-vpc-test-001
      labels:
        app: vpc-test
    spec:
      containers:
        - image: nicolaka/netshoot
          name: netshoot
          command: ["/bin/bash"]
          args: ["-c", "while true; do ping localhost; sleep 60;done"]
EOF


dgsponer commented Nov 5, 2024

As a side note:

the pod will not come up.

Delete both IP pools and roll out a restart of the deployment; then the pod comes up. Commands sketched below.
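
A minimal sketch of those two steps, assuming IPPool is cluster-scoped as in the manifests above:

kubectl delete ippool ippool-vpc-test-001 ippool-vpc-test-002
kubectl -n ns-vpc-test rollout restart deployment deployment-vpc-test-001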

zhangzujian (Member) commented

Why do you add the namespace to all the subnet/ippool definitions? Is there any special requirement?

zhangzujian (Member) commented

IPPool is designed to limit the IP addresses assigned to a namespace/workload. Your kind of usage is not supported yet.


dgsponer commented Nov 5, 2024

We create FRR routers for some use cases.
From the MPLS provider we get 2 VLANs with 2 /30 subnets (/31 would be the target, but it is not supported in Kube-OVN).
In the consumer namespace we create 2 routers, one in subnet 1 and the other in subnet 2.

This is why we add the namespace: to prevent other namespaces from consuming the addresses.


dgsponer commented Nov 5, 2024

What is working:
deleting the IP pools and working with exclude IPs instead; that works.

What I understand from the IP pool: the IP pool has whitelist behavior, the subnet has blacklist behavior.
In the IP pool I list what I want to be assigned.
In the subnet I list what I want to prevent from being assigned.

Is this right? (A sketch of the excludeIps variant follows.)
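
A minimal sketch of that excludeIps variant for the first subnet, assuming the gateway stays at the default 10.0.0.1; the excluded addresses mirror what the IP pool previously reserved, leaving only 10.0.0.3..10.0.0.4 assignable:

cat <<EOF | kubectl apply -f -
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet-vpc-test-001
spec:
  vpc: vpc-vpc-test
  protocol: IPv4
  cidrBlock: 10.0.0.0/29
  excludeIps:
  - "10.0.0.2"
  - "10.0.0.5..10.0.0.6"
  namespaces:
  - ns-vpc-test
EOF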


dgsponer commented Nov 5, 2024

As additional info:
when I remove the logical_switch annotation and just add ip_pool like this:
ovn.kubernetes.io/ip_pool: "10.0.0.3,10.0.0.4"

and then change it to
ovn.kubernetes.io/ip_pool: "10.0.1.3,10.0.1.4"

the IP pool works like a charm and the right logical switch is selected.

The controller only selects the wrong pool when the logical_switch annotation is set. A pod-template sketch of the working variant is below.
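
A minimal sketch of that working pod-template fragment (annotation values copied from above; per the observation, the subnet is then derived from the requested addresses):

  template:
    metadata:
      annotations:
        # no ovn.kubernetes.io/logical_switch here
        ovn.kubernetes.io/ip_pool: "10.0.1.3,10.0.1.4"
      labels:
        app: vpc-test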

zhangzujian (Member) commented

Support for subnets with a /31 CIDR was added to Kube-OVN in #4425. Could you try the latest version, v1.13.0, which will be released soon?
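
For reference, a minimal sketch of what such a /31 subnet could look like on v1.13.0; the name and CIDR here are illustrative, not taken from this thread:

cat <<EOF | kubectl apply -f -
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet-vpc-test-p2p
spec:
  vpc: vpc-vpc-test
  protocol: IPv4
  cidrBlock: 10.0.2.0/31
  namespaces:
  - ns-vpc-test
EOF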

cnvergence (Contributor) commented

Hey @zhangzujian, would you mind if I take a look at this one?
I will try to reproduce and fix this issue.

@cnvergence cnvergence linked a pull request Dec 4, 2024 that will close this issue
cnvergence (Contributor) commented

The PR is ready to be reviewed; I have also spent some time adding an e2e test case:
#4777
