[BUG] When a pod is created with a security group bound, the gateway cannot be pinged from inside the pod #4742
Comments
@QEDQCD Hello! I'm Dosu, a bot that can help you resolve bugs, answer questions, and become a contributor. I'm here to help while you wait for a human maintainer. I found a similar closed issue that might be relevant to your problem. The issue titled "[BUG][KIND] Pod inside Custom VPC+NatGateway cannot access Internet using SNAT" (#4241) was resolved by adding a logical router static route in the VPC specification. Although your issue involves security groups, the solution might provide some insight into routing configurations that could affect connectivity. In your case, since the pod can ping the gateway when the security group is removed, it suggests that the security group rules might not be applied correctly, or that there is an issue with how the security group interacts with the gateway. Here are a few things you could check:
If these steps do not resolve the issue, further investigation into the specific interactions between security groups and the gateway in your environment might be necessary.
After detailed troubleshooting, this may be an OVN problem that occurs intermittently. See the kubectl-ko command output for the details of the executed commands.
Does the port group ovn.sg.sg.example contain the port id of this pod's NIC?
Yes, it does. It contains ports: [ba339dc8-979e-4614-bd3c-5065d446a7fb], and the port can also be found with grep when querying logical_switch_port.
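The port-group check discussed above can be reproduced with kubectl-ko against the OVN northbound database. This is a minimal sketch; the port group name ovn.sg.sg.example and the pod name sg-pod are taken from this thread:

```shell
# Show the security group's port group and the port UUIDs it contains.
kubectl-ko nbctl list port_group ovn.sg.sg.example

# Cross-check: look up the pod's logical switch port and its UUID,
# then confirm that UUID appears in the port group's "ports" column.
kubectl-ko nbctl --columns=_uuid,name find logical_switch_port | grep sg-pod
```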
@zhangzujian This looks like a bug.
This is a problem caused by upgrading from v1.12.x to v1.13.0. In v1.12.x the security group's ACL rules used tier 0; after the upgrade they use tier 2, so newly created security-group rules are all in tier 2 while the deny-all ACL rules are still in tier 0. Because tier 0 has higher priority than tier 2, deny all, which was supposed to have the lowest priority, now effectively has the highest. A temporary workaround is to delete the two deny-all rules and then restart the ovn controller so that it recreates the rules:
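The workaround above can be sketched roughly as follows. The deny-all port group name ovn.sg.kubeovn.deny.all is an assumption based on Kube-OVN's sg-name to port-group naming convention; verify the actual names and UUIDs in your cluster before deleting anything:

```shell
# Inspect ACLs together with their tier: legacy rules created under
# v1.12.x show tier=0, rules created by v1.13.x show tier=2.
kubectl-ko nbctl --columns=_uuid,tier,priority,direction,match,action find acl

# Delete the legacy deny-all ACLs from their port group
# (port group name is an assumption, check it with "find acl" first).
kubectl-ko nbctl acl-del ovn.sg.kubeovn.deny.all

# Restart the controller so it recreates the deny-all rules in tier 2.
kubectl -n kube-system rollout restart deployment kube-ovn-controller
```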
the acls in v1.13.x are in tier 2 rather than tier 0 as in v1.12.x, which means that the legacy denyall sg will drop all traffic if a pod is bound to an sg, because acls in tier 0 have the highest priority. we should recreate the acls in the denyall sg when upgrading to v1.13.x. Signed-off-by: Rain Suo <[email protected]>
@bobz965 @zhangzujian Does handling it this way look OK? #4768
Thanks, I've taken a look.
the acls in v1.13.x are in tier 2 rather than tier 0 as in v1.12.x; the legacy acls may cause some unexpected behaviors because acls in tier 0 have the highest priority. we should delete the legacy acls and recreate them when upgrading to v1.13.x. Signed-off-by: Rain Suo <[email protected]>
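The fix described in the commit message can be verified after the upgrade by confirming that no security-group ACLs remain in tier 0. A minimal sketch, using only the standard ovsdb "find" syntax:

```shell
# After upgrading to v1.13.x this should list no security-group ACLs:
# any remaining tier=0 ACL is a legacy rule that still needs recreating.
kubectl-ko nbctl --columns=_uuid,priority,match,action find acl tier=0
```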
Kube-OVN Version
v1.13.0
Kubernetes Version
Client Version: v1.29.3
Server Version: v1.29.3
Operation-system/Kernel Version
/etc/os-release
"CentOS Stream 9"
uname -r
5.14.0-407.el9.x86_64
sbctl version
kubectl-ko sbctl --version
ovn-sbctl 24.03.5
Open vSwitch Library 3.3.3
DB Schema 20.33.0
nbctl version
kubectl-ko nbctl --version
ovn-nbctl 24.03.5
Open vSwitch Library 3.3.3
DB Schema 7.3.0
Description
1. Create a security group that allows all traffic (0.0.0.0/0).
2. Create a pod bound to that security group.
3. kubectl exec into the pod; pinging the gateway address fails.
Steps To Reproduce
1. Create the security group (sg.yaml):
```yaml
apiVersion: kubeovn.io/v1
kind: SecurityGroup
metadata:
  name: sg-example
spec:
  allowSameGroupTraffic: true
  egressRules:
    - policy: allow
      priority: 1
      protocol: all
      remoteAddress: 0.0.0.0/0
      remoteType: address
  ingressRules:
    - policy: allow
      priority: 1
      protocol: all
      remoteAddress: 0.0.0.0/0
      remoteType: address
```
2. Create a pod bound to that security group (pod.yaml):
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: static
  annotations:
    ovn.kubernetes.io/port_security: 'true'
    ovn.kubernetes.io/security_groups: 'sg-example'
  name: sg-pod
  namespace: default
spec:
  nodeName: worker-1
  containers:
    - imagePullPolicy: IfNotPresent
      command: ["sleep"]
      args: ["infinity"]
      name: test
```
Current Behavior
Enter the pod and try to ping the gateway address; the ping fails:

```
kubectl exec -it sg-pod -- bash
ping 240.0.0.1
PING 240.0.0.1 (240.0.0.1): 56 data bytes
^C--- 240.0.0.1 ping statistics ---
8 packets transmitted, 0 packets received, 100% packet loss
```

As soon as the pod is dissociated from the security group, the gateway becomes reachable.
Expected Behavior
With the security group bound, the gateway address should be reachable from inside the pod.