
[BUG] Unable to create a pod after upgrading to v1.13.2. #4840

Closed
zsxsoft opened this issue Dec 17, 2024 · 3 comments
Labels
bug · security · subnet

Comments


zsxsoft commented Dec 17, 2024

Kube-OVN Version

v1.13.2

Kubernetes Version

v1.31.2

Operation-system/Kernel Version

TencentOS Server 4.2
6.6.47-12.tl4.x86_64

Description

After upgrading from v1.12.28 to v1.13.0, I noticed that whenever my Subnet is configured with any ACL, creating a pod with a security group consistently results in a "network not ready after XX ping" log from the CNI.


Subnets created in v1.12.28 can still create pods in v1.13; this issue only occurs with Subnets created in v1.13. Additionally, if I downgrade back to v1.12.28, the issue disappears.
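
To narrow down whether the rendered ACLs differ between versions, the subnet spec and the ACLs in the OVN northbound DB can be compared directly. A minimal sketch, assuming the kubectl-ko plugin is installed (the logical switch name below matches my subnet):

# Dump the subnet as Kube-OVN stores it; run under each version and diff the outputs
kubectl get subnet net-83747e7952934db3a1f0ab1602fd9bc2 -o yaml

# List the ACLs actually rendered on the logical switch in the NB database
kubectl ko nbctl acl-list net-83747e7952934db3a1f0ab1602fd9bc2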

In my test environment (v1.12.30 -> v1.13.0), if I remove the security group annotation from the pod or delete all ACLs from the subnet, the network becomes ready and the pod is created.

The following YAML applies cleanly in v1.12.30 and works.

After upgrading v1.12.30 -> v1.13.2, nothing I do gets the following YAML to produce a working pod. This happened in my two different clusters upgraded from v1.12, and in my new test cluster upgraded from v1.12.30 to v1.13.2.

apiVersion: kubeovn.io/v1
kind: SecurityGroup
metadata:
  name: sg-net-83747e7952934db3a1f0ab1602fd9bc2
spec:
  allowSameGroupTraffic: true
  ingressRules:
    - ipVersion: ipv4
      policy: allow
      priority: 200
      protocol: all
      remoteAddress: 0.0.0.0/0
      remoteType: address
  egressRules:
    - ipVersion: ipv4
      policy: drop
      priority: 199
      protocol: all
      remoteAddress: 10.0.0.0/8
      remoteType: address
    - ipVersion: ipv4
      policy: allow
      priority: 200
      protocol: all
      remoteAddress: 0.0.0.0/0
      remoteType: address
    - ipVersion: ipv4
      policy: allow
      priority: 100
      protocol: all
      remoteAddress: 10.232.82.0/24
      remoteType: address
---
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: net-83747e7952934db3a1f0ab1602fd9bc2
spec:
  acls:
  - action: allow-related
    direction: from-lport
    match: ip
    priority: 1002
  cidrBlock: 10.232.82.0/24
  default: false
  enableDHCP: true
  enableLb: true
  gateway: 10.232.82.1
  gatewayNode: ""
  gatewayType: distributed
  mtu: 1400
  natOutgoing: true
  private: false
  protocol: IPv4
  provider: ovn
  vpc: ovn-cluster
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    ovn.kubernetes.io/security_groups: sg-net-83747e7952934db3a1f0ab1602fd9bc2
    ovn.kubernetes.io/logical_switch: net-83747e7952934db3a1f0ab1602fd9bc2
spec:
  containers:
  - name: nginx
    image: nginx
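
For anyone reproducing this, a quick way to confirm the annotations actually landed on the pod and that the SecurityGroup CR was accepted — a hedged sketch, assuming the sg short name registered by Kube-OVN's CRD:

# Print the security_groups annotation as the API server stores it
kubectl get pod nginx -o jsonpath='{.metadata.annotations.ovn\.kubernetes\.io/security_groups}{"\n"}'

# Inspect the SecurityGroup CR and its status
kubectl get sg sg-net-83747e7952934db3a1f0ab1602fd9bc2 -o yaml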

Trace below. It looks like Kube-OVN didn't match any SG for this pod, so it fell back to kubeovn_deny_all (see the inspection sketch after the trace).

[root@k8s-master ~]# kubectl ko trace vm/minio 10.232.82.1 tcp 80
[root@k8s-master ~]# kubectl ko trace vm/minio 10.232.82.89 tcp 80
+ kubectl exec ovn-central-69c8f7b97b-csmpk -n kube-system -c ovn-central -- ovn-trace net-83747e7952934db3a1f0ab1602fd9bc2 'inport == "minio.vm" && ip.ttl == 64 && eth.src == be:23:11:83:32:ad && ip4.src == 10.232.82.5 && eth.dst == 4c:3d:a5:43:28:eb && ip4.dst == 10.232.82.89 && tcp.src == 10000 && tcp.dst == 80 && ct.new'
# ct_state=new|trk,tcp,reg14=0x6,vlan_tci=0x0000,dl_src=be:23:11:83:32:ad,dl_dst=4c:3d:a5:43:28:eb,nw_src=10.232.82.5,nw_dst=10.232.82.89,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=10000,tp_dst=80,tcp_flags=0

ingress(dp="net-83747e7952934db3a1f0ab1602fd9bc2", inport="minio.vm")
---------------------------------------------------------------------
 0. ls_in_check_port_sec (northd.c:9039): 1, priority 50, uuid 58bb491f
    reg0[15] = check_in_port_sec();
    next;
 4. ls_in_pre_acl (northd.c:6023): ip, priority 100, uuid e0c5b3fc
    reg0[0] = 1;
    next;
 5. ls_in_pre_lb (northd.c:6227): ip, priority 100, uuid ea5b345c
    reg0[2] = 1;
    next;
 6. ls_in_pre_stateful (northd.c:6382): reg0[2] == 1, priority 110, uuid 05f7b0b4
    ct_lb_mark;

ct_lb_mark /* default (use --ct to customize) */
------------------------------------------------
 7. ls_in_acl_hint (northd.c:6511): ct.est && ct_mark.blocked == 0, priority 1, uuid 5c869d92
    reg0[10] = 1;
    next;
 8. ls_in_acl_eval (northd.c:6724): reg8[30..31] == 0 && reg0[10] == 1 && (inport == @ovn.sg.kubeovn_deny_all && ip), priority 3003, uuid cc4049b9
    reg8[17] = 1;
    ct_commit { ct_mark.blocked = 1; };
    next;
 9. ls_in_acl_action (northd.c:6842): reg8[17] == 1, priority 1000, uuid 88f0c49d
    reg8[16] = 0;
    reg8[17] = 0;
    reg8[18] = 0;
    reg8[30..31] = 0;
+ set +x
--------
Start OVS Tracing


+ kubectl exec ovs-ovn-xfdb2 -c openvswitch -n kube-system -- ovs-appctl ofproto/trace br-int in_port=105,tcp,nw_ttl=64,nw_src=10.232.82.5,nw_dst=10.232.82.89,dl_src=be:23:11:83:32:ad,dl_dst=4c:3d:a5:43:28:eb,tcp_src=1000,tcp_dst=80
Flow: tcp,in_port=105,vlan_tci=0x0000,dl_src=be:23:11:83:32:ad,dl_dst=4c:3d:a5:43:28:eb,nw_src=10.232.82.5,nw_dst=10.232.82.89,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0

bridge("br-int")
----------------
 0. in_port=105, priority 100, cookie 0x8d1ed171
    set_field:0x1c/0xffff->reg13
    set_field:0x1a->reg11
    set_field:0x1b->reg12
    set_field:0x4->metadata
    set_field:0x6->reg14
    set_field:0/0xffff0000->reg13
    resubmit(,8)
 8. metadata=0x4, priority 50, cookie 0x58bb491f
    set_field:0/0x1000->reg10
    resubmit(,73)
    73. No match.
            drop
    move:NXM_NX_REG10[12]->NXM_NX_XXREG0[111]
     -> NXM_NX_XXREG0[111] is now 0
    resubmit(,9)
 9. metadata=0x4, priority 0, cookie 0x1541808
    resubmit(,10)
10. metadata=0x4, priority 0, cookie 0x6587e4f4
    resubmit(,11)
11. metadata=0x4, priority 0, cookie 0x4173bf59
    resubmit(,12)
12. ip,metadata=0x4, priority 100, cookie 0xe0c5b3fc
    set_field:0x1000000000000000000000000/0x1000000000000000000000000->xxreg0
    resubmit(,13)
13. ip,metadata=0x4, priority 100, cookie 0xea5b345c
    set_field:0x4000000000000000000000000/0x4000000000000000000000000->xxreg0
    resubmit(,14)
14. ip,reg0=0x4/0x4,metadata=0x4, priority 110, cookie 0x5f7b0b4
    ct(table=15,zone=NXM_NX_REG13[0..15],nat)
    nat
     -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 15.
     -> Sets the packet to an untracked state, and clears all the conntrack fields.

Final flow: tcp,reg0=0x5,reg11=0x1a,reg12=0x1b,reg13=0x1c,reg14=0x6,metadata=0x4,in_port=105,vlan_tci=0x0000,dl_src=be:23:11:83:32:ad,dl_dst=4c:3d:a5:43:28:eb,nw_src=10.232.82.5,nw_dst=10.232.82.89,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0
Megaflow: recirc_id=0,eth,tcp,in_port=105,dl_src=00:00:00:00:00:00/01:00:00:00:00:00,dl_dst=4c:3d:a5:43:28:eb,nw_dst=0.0.0.0/1,nw_frag=no
Datapath actions: ct(zone=28,nat),recirc(0x212)

===============================================================================
recirc(0x212) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
Replacing src/dst IP/ports to simulate NAT:
 Initial flow:
 Modified flow:
===============================================================================

Flow: recirc_id=0x212,ct_state=new|trk,ct_zone=28,eth,tcp,reg0=0x5,reg11=0x1a,reg12=0x1b,reg13=0x1c,reg14=0x6,metadata=0x4,in_port=105,vlan_tci=0x0000,dl_src=be:23:11:83:32:ad,dl_dst=4c:3d:a5:43:28:eb,nw_src=10.232.82.5,nw_dst=10.232.82.89,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0

bridge("br-int")
----------------
    thaw
        Resuming from table 15
15. ct_state=+new-est+trk,metadata=0x4, priority 7, cookie 0x879f1bd7
    set_field:0x80000000000000000000000000/0x80000000000000000000000000->xxreg0
    set_field:0x200000000000000000000000000/0x200000000000000000000000000->xxreg0
    resubmit(,16)
16. ip,reg0=0x200/0x200,reg8=0/0xc0000000,reg14=0x6,metadata=0x4, priority 3003, cookie 0x22e6c0cd
    set_field:0x2000000000000/0x2000000000000->xreg4
    resubmit(,17)
17. reg8=0x20000/0x20000,metadata=0x4, priority 1000, cookie 0x88f0c49d
    set_field:0/0x1000000000000->xreg4
    set_field:0/0x2000000000000->xreg4
    set_field:0/0x4000000000000->xreg4
    set_field:0/0xc000000000000000->xreg4

Final flow: recirc_id=0x212,ct_state=new|trk,ct_zone=28,eth,tcp,reg0=0x285,reg11=0x1a,reg12=0x1b,reg13=0x1c,reg14=0x6,metadata=0x4,in_port=105,vlan_tci=0x0000,dl_src=be:23:11:83:32:ad,dl_dst=4c:3d:a5:43:28:eb,nw_src=10.232.82.5,nw_dst=10.232.82.89,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0
Megaflow: recirc_id=0x212,ct_state=+new-est-rel-rpl+trk,ct_mark=0/0x1,eth,ip,in_port=105,dl_dst=4c:3d:a5:43:28:eb,nw_frag=no
Datapath actions: drop
[root@k8s-master ~]#
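
If the deny-all port group is really the culprit, the pod's logical switch port should appear in ovn.sg.kubeovn_deny_all while missing from the SG's own port group. A hedged way to inspect this, with the port-group names taken from the trace above:

# Compare membership of the SG port group and the deny-all port group
kubectl ko nbctl get port_group ovn.sg.sg.net.83747e7952934db3a1f0ab1602fd9bc2 ports
kubectl ko nbctl get port_group ovn.sg.kubeovn_deny_all ports

# ACLs attached to the deny-all group (the priority-3003 rule seen in the trace)
kubectl ko nbctl acl-list ovn.sg.kubeovn_deny_all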

Steps To Reproduce

IDK

Current Behavior

/

Expected Behavior

/

@zsxsoft zsxsoft added the bug Something isn't working label Dec 17, 2024

dosubot bot commented Dec 17, 2024

The issue you're experiencing with the "network not ready after XX ping" error when creating a pod with a security group in Kube-OVN v1.13.2 might be related to changes in how ACLs and security groups are handled in this version. Here are a few points to consider:

  1. ACL and Security Group Handling: Kube-OVN v1.13.0 introduced changes in how ACLs and security groups are managed, which might affect network readiness and pod creation. The default deny-all security group and synchronization mechanisms ensure that ACLs are correctly applied, but any misconfiguration or missing ACLs could lead to network readiness issues [1][2].

  2. Known Issues in v1.13.2: There is a known issue related to ACLs in the v1.13.2 release notes, specifically regarding node local DNS IP conntrack when setting ACLs. This might be related to your problem if the ACLs are not correctly allowing necessary traffic [3].

  3. Network Not Ready Error: The "network not ready after XX ping" error is triggered when the pinger does not receive any packets back after multiple attempts. This could be due to ACLs blocking traffic or misconfigured security group rules [4].

To troubleshoot further, consider the following steps:

  • Verify that the ACLs and security group rules are correctly configured to allow necessary traffic.
  • Check if the issue persists when using a simpler ACL configuration or by temporarily removing security group annotations.
  • Review the specific ACL and security group changes introduced in v1.13.0 to ensure compatibility with your setup.

If the problem continues, it might be beneficial to compare the configurations of subnets created in v1.12.28 with those in v1.13.0 to identify any discrepancies.
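
One concrete way to run that comparison is to snapshot the rendered NB state under each version and diff it. A rough sketch along those lines (file names are illustrative):

# Under v1.12.x:
kubectl ko nbctl acl-list net-83747e7952934db3a1f0ab1602fd9bc2 > acls-v1.12.txt
kubectl ko nbctl list port_group > pgs-v1.12.txt

# After upgrading to v1.13.x:
kubectl ko nbctl acl-list net-83747e7952934db3a1f0ab1602fd9bc2 > acls-v1.13.txt
kubectl ko nbctl list port_group > pgs-v1.13.txt

diff -u acls-v1.12.txt acls-v1.13.txt
diff -u pgs-v1.12.txt pgs-v1.13.txt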


zsxsoft commented Dec 17, 2024

Final flow: recirc_id=0x58,ct_state=new|trk,ct_zone=13,eth,tcp,reg0=0x285,reg11=0xc,reg12=0xb,reg13=0xd,reg14=0x2,metadata=0x4,in_port=28,vlan_tci=0x0000,dl_src=a6:ec:07:de:ec:6e,dl_dst=f6:44:5e:43:34:66,nw_src=10.232.82.2,nw_dst=10.232.82.1,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0
Megaflow: recirc_id=0x58,ct_state=+new-est-rel-rpl+trk,ct_mark=0/0x1,eth,ip,in_port=28,dl_dst=f6:44:5e:43:34:66,nw_frag=no
Datapath actions: drop


[root@vm-master ~]# sh install-v1.12.sh
-------------------------------
Kube-OVN Version:     v1.12.30
Default Network Mode: geneve
Default Subnet CIDR:  10.16.0.0/16
Join Subnet CIDR:     100.64.0.0/16
# ......

[root@vm-master ~]# kubectl apply -f test.yaml
securitygroup.kubeovn.io/sg-net-83747e7952934db3a1f0ab1602fd9bc2 unchanged
subnet.kubeovn.io/net-83747e7952934db3a1f0ab1602fd9bc2 unchanged
pod/nginx created
[root@vm-master ~]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          26s

[root@vm-master ~]# kubectl ko trace nginx 10.232.82.1 tcp 80
Using the logical gateway mac address as destination
+ kubectl exec ovn-central-7bcb5f6489-4hhck -n kube-system -c ovn-central -- ovn-trace net-83747e7952934db3a1f0ab1602fd9bc2 'inport == "nginx.default" && ip.ttl == 64 && eth.src == f2:22:a9:d2:66:19 && ip4.src == 10.232.82.3 && eth.dst == f6:44:5e:43:34:66 && ip4.dst == 10.232.82.1 && tcp.src == 10000 && tcp.dst == 80 && ct.new'
# ct_state=new|trk,tcp,reg14=0x2,vlan_tci=0x0000,dl_src=f2:22:a9:d2:66:19,dl_dst=f6:44:5e:43:34:66,nw_src=10.232.82.3,nw_dst=10.232.82.1,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=10000,tp_dst=80,tcp_flags=0

ingress(dp="net-83747e7952934db3a1f0ab1602fd9bc2", inport="nginx.default")
--------------------------------------------------------------------------
 0. ls_in_check_port_sec (northd.c:8990): 1, priority 50, uuid 2dc761b9
    reg0[15] = check_in_port_sec();
    next;
 4. ls_in_pre_acl (northd.c:6056): ip, priority 100, uuid 8d29cd2c
    reg0[0] = 1;
    next;
 5. ls_in_pre_lb (northd.c:6267): ip4 && ip4.dst == 10.232.82.0/24, priority 105, uuid 8a374124
    next;
 6. ls_in_pre_stateful (northd.c:6425): reg0[0] == 1, priority 100, uuid 0eb2b698
    ct_next;

ct_next(ct_state=est|trk /* default (use --ct to customize) */)
---------------------------------------------------------------
 7. ls_in_acl_hint (northd.c:6519): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 917618e6
    reg0[8] = 1;
    reg0[10] = 1;
    next;
 8. ls_in_acl (northd.c:6694): reg0[8] == 1 && (inport == @ovn.sg.sg.net.83747e7952934db3a1f0ab1602fd9bc2 && ip4 && ip4.dst == 10.232.82.0/24), priority 3200, uuid 68c2d7a4
    next;
14. ls_in_after_lb (northd.c:7993): reg0[2] == 0, priority 100, uuid 0b39517c
    next;
15. ls_in_pre_hairpin (northd.c:8062): ip && ct.trk, priority 100, uuid 3bf0d671
    reg0[6] = chk_lb_hairpin();
    reg0[12] = chk_lb_hairpin_reply();
    next;
26. ls_in_l2_lkup (northd.c:9682): eth.dst == f6:44:5e:43:34:66, priority 50, uuid 397e1847
    outport = "net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster";
    output;

egress(dp="net-83747e7952934db3a1f0ab1602fd9bc2", inport="nginx.default", outport="net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster")
-------------------------------------------------------------------------------------------------------------------------------------
 0. ls_out_pre_acl (northd.c:5885): ip && outport == "net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster", priority 110, uuid 30687282
    next;
 1. ls_out_pre_lb (northd.c:5885): ip && outport == "net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster", priority 110, uuid 31a20b63
    next;
 3. ls_out_acl_hint (northd.c:6519): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid e7801c61
    reg0[8] = 1;
    reg0[10] = 1;
    next;
 8. ls_out_check_port_sec (northd.c:5848): 1, priority 0, uuid a17eee54
    reg0[15] = check_out_port_sec();
    next;
 9. ls_out_apply_port_sec (northd.c:5853): 1, priority 0, uuid 5e238984
    output;
    /* output to "net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster", type "patch" */

ingress(dp="ovn-cluster", inport="ovn-cluster-net-83747e7952934db3a1f0ab1602fd9bc2")
------------------------------------------------------------------------------------
 0. lr_in_admission (northd.c:12528): eth.dst == f6:44:5e:43:34:66 && inport == "ovn-cluster-net-83747e7952934db3a1f0ab1602fd9bc2", priority 50, uuid 6651324d
    xreg0[0..47] = f6:44:5e:43:34:66;
    next;
 1. lr_in_lookup_neighbor (northd.c:12716): 1, priority 0, uuid 98e25a8b
    reg9[2] = 1;
    next;
 2. lr_in_learn_neighbor (northd.c:12253): reg9[2] == 1 || reg9[3] == 0, priority 100, uuid b5d8eb62
    next;
 3. lr_in_ip_input (northd.c:14447): ip4 && ip4.dst == 10.232.82.1 && !ip.later_frag && tcp, priority 80, uuid 0d54f2c1
    tcp_reset { eth.dst <-> eth.src; ip4.dst <-> ip4.src; next; };

tcp_reset
---------
    eth.dst <-> eth.src;
    ip4.dst <-> ip4.src;
    next;
12. lr_in_ip_routing_pre (northd.c:12972): 1, priority 0, uuid 6ab60692
    reg7 = 0;
    next;
13. lr_in_ip_routing (northd.c:11214): ip4.dst == 10.232.82.0/24, priority 74, uuid c570cb18
    ip.ttl--;
    reg8[0..15] = 0;
    reg0 = ip4.dst;
    reg1 = 10.232.82.1;
    eth.src = f6:44:5e:43:34:66;
    outport = "ovn-cluster-net-83747e7952934db3a1f0ab1602fd9bc2";
    flags.loopback = 1;
    next;
14. lr_in_ip_routing_ecmp (northd.c:13030): reg8[0..15] == 0, priority 150, uuid 1433a181
    next;
15. lr_in_policy (northd.c:10380): ip4.dst == 10.232.82.0/24, priority 31000, uuid 8c8311c2
    reg8[0..15] = 0;
    next;
16. lr_in_policy_ecmp (northd.c:13215): reg8[0..15] == 0, priority 150, uuid cb4dbf0c
    next;
17. lr_in_arp_resolve (northd.c:12932): outport == "ovn-cluster-net-83747e7952934db3a1f0ab1602fd9bc2" && reg0 == 10.232.82.3, priority 100, uuid 9a97f92e
    eth.dst = f2:22:a9:d2:66:19;
    next;
21. lr_in_arp_request (northd.c:13890): 1, priority 0, uuid 62882d7d
    output;

egress(dp="ovn-cluster", inport="ovn-cluster-net-83747e7952934db3a1f0ab1602fd9bc2", outport="ovn-cluster-net-83747e7952934db3a1f0ab1602fd9bc2")
-----------------------------------------------------------------------------------------------------------------------------------------------
 0. lr_out_chk_dnat_local (northd.c:15353): 1, priority 0, uuid db3fd51a
    reg9[4] = 0;
    next;
 6. lr_out_delivery (northd.c:13939): outport == "ovn-cluster-net-83747e7952934db3a1f0ab1602fd9bc2", priority 100, uuid 73c2df10
    output;
    /* output to "ovn-cluster-net-83747e7952934db3a1f0ab1602fd9bc2", type "patch" */

ingress(dp="net-83747e7952934db3a1f0ab1602fd9bc2", inport="net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster")
-------------------------------------------------------------------------------------------------------------
 0. ls_in_check_port_sec (northd.c:8990): 1, priority 50, uuid 2dc761b9
    reg0[15] = check_in_port_sec();
    next;
 4. ls_in_pre_acl (northd.c:5882): ip && inport == "net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster", priority 110, uuid f6c0c248
    next;
 5. ls_in_pre_lb (northd.c:5882): ip && inport == "net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster", priority 110, uuid 01896978
    next;
 7. ls_in_acl_hint (northd.c:6519): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 917618e6
    reg0[8] = 1;
    reg0[10] = 1;
    next;
 8. ls_in_acl (northd.c:6694): reg0[8] == 1 && (ip), priority 2002, uuid 0ec95965
    next;
14. ls_in_after_lb (northd.c:7993): reg0[2] == 0, priority 100, uuid 0b39517c
    next;
15. ls_in_pre_hairpin (northd.c:8062): ip && ct.trk, priority 100, uuid 3bf0d671
    reg0[6] = chk_lb_hairpin();
    reg0[12] = chk_lb_hairpin_reply();
    next;
26. ls_in_l2_lkup (northd.c:9611): eth.dst == f2:22:a9:d2:66:19, priority 50, uuid a324f60b
    outport = "nginx.default";
    output;

egress(dp="net-83747e7952934db3a1f0ab1602fd9bc2", inport="net-83747e7952934db3a1f0ab1602fd9bc2-ovn-cluster", outport="nginx.default")
-------------------------------------------------------------------------------------------------------------------------------------
 0. ls_out_pre_acl (northd.c:6059): ip, priority 100, uuid 523bea13
    reg0[0] = 1;
    next;
 1. ls_out_pre_lb (northd.c:6263): ip, priority 100, uuid e7ec7b05
    reg0[2] = 1;
    next;
 2. ls_out_pre_stateful (northd.c:6419): reg0[2] == 1, priority 110, uuid 4b08f800
    ct_lb_mark;

ct_lb_mark /* default (use --ct to customize) */
------------------------------------------------
 3. ls_out_acl_hint (northd.c:6519): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid e7801c61
    reg0[8] = 1;
    reg0[10] = 1;
    next;
 4. ls_out_acl (northd.c:6694): reg0[8] == 1 && (outport == @ovn.sg.sg.net.83747e7952934db3a1f0ab1602fd9bc2 && ip4 && ip4.src == 0.0.0.0/0), priority 3100, uuid 518a3421
    next;
 8. ls_out_check_port_sec (northd.c:5848): 1, priority 0, uuid a17eee54
    reg0[15] = check_out_port_sec();
    next;
 9. ls_out_apply_port_sec (northd.c:5853): 1, priority 0, uuid 5e238984
    output;
    /* output to "nginx.default", type "" */
+ set +x
--------
Start OVS Tracing


+ kubectl exec kube-ovn-cni-9j5wm -c cni-server -n kube-system -- ovs-appctl ofproto/trace br-int in_port=32,tcp,nw_ttl=64,nw_src=10.232.82.3,nw_dst=10.232.82.1,dl_src=f2:22:a9:d2:66:19,dl_dst=f6:44:5e:43:34:66,tcp_src=1000,tcp_dst=80
Flow: tcp,in_port=32,vlan_tci=0x0000,dl_src=f2:22:a9:d2:66:19,dl_dst=f6:44:5e:43:34:66,nw_src=10.232.82.3,nw_dst=10.232.82.1,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0

bridge("br-int")
----------------
 0. in_port=32, priority 100, cookie 0x7dc3f07f
    set_field:0xd->reg13
    set_field:0xc->reg11
    set_field:0x1->reg12
    set_field:0x4->metadata
    set_field:0x2->reg14
    resubmit(,8)
 8. metadata=0x4, priority 50, cookie 0x2dc761b9
    set_field:0/0x1000->reg10
    resubmit(,73)
    73. No match.
            drop
    move:NXM_NX_REG10[12]->NXM_NX_XXREG0[111]
     -> NXM_NX_XXREG0[111] is now 0
    resubmit(,9)
 9. metadata=0x4, priority 0, cookie 0x221dee88
    resubmit(,10)
10. metadata=0x4, priority 0, cookie 0xe05d28b3
    resubmit(,11)
11. metadata=0x4, priority 0, cookie 0x81308b74
    resubmit(,12)
12. ip,metadata=0x4, priority 100, cookie 0x8d29cd2c
    set_field:0x1000000000000000000000000/0x1000000000000000000000000->xxreg0
    resubmit(,13)
13. ip,metadata=0x4,nw_dst=10.232.82.0/24, priority 105, cookie 0x8a374124
    resubmit(,14)
14. ip,reg0=0x1/0x1,metadata=0x4, priority 100, cookie 0xeb2b698
    ct(table=15,zone=NXM_NX_REG13[0..15])
    drop
     -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 15.
     -> Sets the packet to an untracked state, and clears all the conntrack fields.

Final flow: tcp,reg0=0x1,reg11=0xc,reg12=0x1,reg13=0xd,reg14=0x2,metadata=0x4,in_port=32,vlan_tci=0x0000,dl_src=f2:22:a9:d2:66:19,dl_dst=f6:44:5e:43:34:66,nw_src=10.232.82.3,nw_dst=10.232.82.1,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0
Megaflow: recirc_id=0,eth,tcp,in_port=32,dl_src=00:00:00:00:00:00/01:00:00:00:00:00,dl_dst=f6:44:5e:43:34:66,nw_dst=10.232.82.0/24,nw_frag=no
Datapath actions: ct(zone=13),recirc(0xc0)

===============================================================================
recirc(0xc0) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
===============================================================================

Flow: recirc_id=0xc0,ct_state=new|trk,ct_zone=13,eth,tcp,reg0=0x1,reg11=0xc,reg12=0x1,reg13=0xd,reg14=0x2,metadata=0x4,in_port=32,vlan_tci=0x0000,dl_src=f2:22:a9:d2:66:19,dl_dst=f6:44:5e:43:34:66,nw_src=10.232.82.3,nw_dst=10.232.82.1,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0

bridge("br-int")
----------------
    thaw
        Resuming from table 15
15. ct_state=+new-est+trk,metadata=0x4, priority 7, cookie 0xf20411ad
    set_field:0x80000000000000000000000000/0x80000000000000000000000000->xxreg0
    set_field:0x200000000000000000000000000/0x200000000000000000000000000->xxreg0
    resubmit(,16)
16. ip,reg0=0x80/0x80,reg14=0x2,metadata=0x4,nw_dst=10.232.82.0/24, priority 3200, cookie 0x79b8f5b4
    set_field:0x2000000000000000000000000/0x2000000000000000000000000->xxreg0
    resubmit(,17)
17. metadata=0x4, priority 0, cookie 0x731d7252
    resubmit(,18)
18. metadata=0x4, priority 0, cookie 0x88205ffc
    resubmit(,19)
19. metadata=0x4, priority 0, cookie 0xf5c6118f
    resubmit(,20)
20. metadata=0x4, priority 0, cookie 0xee85beb6
    resubmit(,21)
21. metadata=0x4, priority 0, cookie 0xd1d85f20
    resubmit(,22)
22. reg0=0/0x4,metadata=0x4, priority 100, cookie 0xb39517c
    resubmit(,23)
23. ct_state=+trk,ip,metadata=0x4, priority 100, cookie 0x3bf0d671
    set_field:0/0x80->reg10
    resubmit(,68)
    68. No match.
            drop
    move:NXM_NX_REG10[7]->NXM_NX_XXREG0[102]
     -> NXM_NX_XXREG0[102] is now 0
    set_field:0/0x80->reg10
    resubmit(,69)
    69. No match.
            drop
    move:NXM_NX_REG10[7]->NXM_NX_XXREG0[108]
     -> NXM_NX_XXREG0[108] is now 0
    resubmit(,24)
24. metadata=0x4, priority 0, cookie 0x3dd2dea0
    resubmit(,25)
25. metadata=0x4, priority 0, cookie 0xeda36216
    resubmit(,26)
26. metadata=0x4, priority 0, cookie 0x21852660
    resubmit(,27)
27. ip,reg0=0x2/0x2002,metadata=0x4, priority 100, cookie 0x5533cc88
    ct(commit,zone=NXM_NX_REG13[0..15],nat(src),exec(set_field:0/0x1->ct_mark))
    nat(src)
    set_field:0/0x1->ct_mark
     -> Sets the packet to an untracked state, and clears all the conntrack fields.
    resubmit(,28)
28. metadata=0x4, priority 0, cookie 0xb9371c31
    resubmit(,29)
29. metadata=0x4, priority 0, cookie 0x9e6797c8
    resubmit(,30)
30. metadata=0x4, priority 0, cookie 0xa08c89cb
    resubmit(,31)
31. metadata=0x4, priority 0, cookie 0xae8f7377
    resubmit(,32)
32. metadata=0x4, priority 0, cookie 0x81037dfe
    resubmit(,33)
33. metadata=0x4, priority 0, cookie 0xac4dc650
    resubmit(,34)
34. metadata=0x4,dl_dst=f6:44:5e:43:34:66, priority 50, cookie 0x397e1847
    set_field:0x1->reg15
    resubmit(,37)
37. priority 0
    resubmit(,39)
39. priority 0
    resubmit(,40)
40. reg15=0x1,metadata=0x4, priority 100, cookie 0xa1ca6630
    set_field:0xc->reg11
    set_field:0x1->reg12
    resubmit(,41)
41. priority 0
    set_field:0->reg0
    set_field:0->reg1
    set_field:0->reg2
    set_field:0->reg3
    set_field:0->reg4
    set_field:0->reg5
    set_field:0->reg6
    set_field:0->reg7
    set_field:0->reg8
    set_field:0->reg9
    resubmit(,42)
42. ip,reg15=0x1,metadata=0x4, priority 110, cookie 0x30687282
    resubmit(,43)
43. ip,reg15=0x1,metadata=0x4, priority 110, cookie 0x31a20b63
    resubmit(,44)
44. metadata=0x4, priority 0, cookie 0x3b45bbaa
    resubmit(,45)
45. ct_state=-trk,metadata=0x4, priority 5, cookie 0x5b2415be
    set_field:0x100000000000000000000000000/0x100000000000000000000000000->xxreg0
    set_field:0x200000000000000000000000000/0x200000000000000000000000000->xxreg0
    resubmit(,46)
46. metadata=0x4, priority 0, cookie 0x9c24bc50
    resubmit(,47)
47. metadata=0x4, priority 0, cookie 0xb5e9083
    resubmit(,48)
48. metadata=0x4, priority 0, cookie 0x1ac925e8
    resubmit(,49)
49. metadata=0x4, priority 0, cookie 0x43f91f34
    resubmit(,50)
50. metadata=0x4, priority 0, cookie 0xa17eee54
    set_field:0/0x1000->reg10
    resubmit(,75)
    75. No match.
            drop
    move:NXM_NX_REG10[12]->NXM_NX_XXREG0[111]
     -> NXM_NX_XXREG0[111] is now 0
    resubmit(,51)
51. metadata=0x4, priority 0, cookie 0x5e238984
    resubmit(,64)
64. priority 0
    resubmit(,65)
65. reg15=0x1,metadata=0x4, priority 100, cookie 0xa1ca6630
    clone(ct_clear,set_field:0->reg11,set_field:0->reg12,set_field:0->reg13,set_field:0x6->reg11,set_field:0x2->reg12,set_field:0x1->metadata,set_field:0x3->reg14,set_field:0->reg10,set_field:0->reg15,set_field:0->reg0,set_field:0->reg1,set_field:0->reg2,set_field:0->reg3,set_field:0->reg4,set_field:0->reg5,set_field:0->reg6,set_field:0->reg7,set_field:0->reg8,set_field:0->reg9,resubmit(,8))
    ct_clear
    set_field:0->reg11
    set_field:0->reg12
    set_field:0->reg13
    set_field:0x6->reg11
    set_field:0x2->reg12
    set_field:0x1->metadata
    set_field:0x3->reg14
    set_field:0->reg10
    set_field:0->reg15
    set_field:0->reg0
    set_field:0->reg1
    set_field:0->reg2
    set_field:0->reg3
    set_field:0->reg4
    set_field:0->reg5
    set_field:0->reg6
    set_field:0->reg7
    set_field:0->reg8
    set_field:0->reg9
    resubmit(,8)
 8. reg14=0x3,metadata=0x1,dl_dst=f6:44:5e:43:34:66, priority 50, cookie 0x6651324d
    set_field:0xf6445e4334660000000000000000/0xffffffffffff0000000000000000->xxreg0
    resubmit(,9)
 9. metadata=0x1, priority 0, cookie 0x98e25a8b
    set_field:0x4/0x4->xreg4
    resubmit(,10)
10. reg9=0x4/0x4,metadata=0x1, priority 100, cookie 0xb5d8eb62
    resubmit(,11)
11. tcp,metadata=0x1,nw_dst=10.232.82.1,nw_frag=not_later, priority 80, cookie 0xd54f2c1
    controller(userdata=00.00.00.0b.00.00.00.00.ff.ff.00.18.00.00.23.20.00.1b.00.00.00.00.04.06.00.30.00.00.00.00.00.00.ff.ff.00.18.00.00.23.20.00.1b.00.00.00.00.02.06.00.30.00.00.00.00.00.00.ff.ff.00.18.00.00.23.20.00.1c.00.00.00.00.04.06.00.30.00.00.00.00.00.00.ff.ff.00.18.00.00.23.20.00.1c.00.00.00.00.02.06.00.30.00.00.00.00.00.00.ff.ff.00.18.00.00.23.20.00.1b.00.00.00.00.0e.04.00.20.00.00.00.00.00.00.ff.ff.00.18.00.00.23.20.00.1b.00.00.00.00.10.04.00.20.00.00.00.00.00.00.ff.ff.00.18.00.00.23.20.00.1c.00.00.00.00.0e.04.00.20.00.00.00.00.00.00.ff.ff.00.18.00.00.23.20.00.1c.00.00.00.00.10.04.00.20.00.00.00.00.00.00.ff.ff.00.10.00.00.23.20.00.0e.ff.f8.0c.00.00.00)

Final flow: recirc_id=0xc0,eth,tcp,reg0=0x300,reg11=0xc,reg12=0x1,reg13=0xd,reg14=0x2,reg15=0x1,metadata=0x4,in_port=32,vlan_tci=0x0000,dl_src=f2:22:a9:d2:66:19,dl_dst=f6:44:5e:43:34:66,nw_src=10.232.82.3,nw_dst=10.232.82.1,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=1000,tp_dst=80,tcp_flags=0
Megaflow: recirc_id=0xc0,ct_state=+new-est-rel-rpl+trk,ct_mark=0/0x3,eth,tcp,in_port=32,dl_src=f2:22:a9:d2:66:19,dl_dst=f6:44:5e:43:34:66,nw_src=10.232.82.2/31,nw_dst=10.232.82.1,nw_frag=no
Datapath actions: ct(commit,zone=13,mark=0/0x1,nat(src)),userspace(pid=4294967295,controller(reason=1,dont_send=1,continuation=0,recirc_id=193,rule_cookie=0xd54f2c1,controller_id=0,max_len=65535))
[root@vm-master ~]#


zsxsoft commented Dec 20, 2024

#4742

@zsxsoft zsxsoft closed this as completed Dec 20, 2024