Add support for loadBalancerSourceRanges in LoadBalancer Service
For #5493

This commit introduces support for loadBalancerSourceRanges for LoadBalancer
Services.

Here is an example of a LoadBalancer Service configuration allowing access
from specific CIDRs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-loadbalancer-source-ranges
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
  loadBalancerSourceRanges:
    - "192.168.77.0/24"
    - "192.168.78.0/24"
status:
  loadBalancer:
    ingress:
      - ip: 192.168.77.152
```

To implement `loadBalancerSourceRanges`, a new table `LoadBalancerSourceRanges` is introduced
after table `PreRoutingClassifier`. Here are the corresponding flows:

```text
1. table=LoadBalancerSourceRanges, priority=200,tcp,nw_src=192.168.77.0/24,nw_dst=192.168.77.152,tp_dst=80 actions=goto_table:SessionAffinity
2. table=LoadBalancerSourceRanges, priority=200,tcp,nw_src=192.168.78.0/24,nw_dst=192.168.77.152,tp_dst=80 actions=goto_table:SessionAffinity
3. table=LoadBalancerSourceRanges, priority=190,tcp,nw_dst=192.168.77.152,tp_dst=80 actions=drop
4. table=LoadBalancerSourceRanges, priority=0 actions=goto_table:SessionAffinity
```

Flows 1-2 allow packets destined for the sample [LoadBalancer] from the CIDRs specified in the `loadBalancerSourceRanges`
of the Service.

Flow 3, with lower priority, drops packets destined for the sample [LoadBalancer] that don't match any CIDRs within the
`loadBalancerSourceRanges`.
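
The check these flows encode is simple enough to restate in plain Go (an illustrative sketch only — the helper name
below is hypothetical, and the real enforcement happens entirely in OVS):

```go
package main

import (
	"fmt"
	"net"
)

// allowedBySourceRanges mirrors the decision encoded by flows 1-3: traffic to
// the LoadBalancer ingress IP is admitted only if its source address falls
// within one of the configured CIDRs; otherwise it is dropped.
func allowedBySourceRanges(src net.IP, sourceRanges []string) bool {
	for _, cidr := range sourceRanges {
		_, ipNet, err := net.ParseCIDR(cidr)
		if err != nil {
			continue // invalid entries are normally rejected by the API server
		}
		if ipNet.Contains(src) {
			return true // flows 1-2: goto_table:SessionAffinity
		}
	}
	return false // flow 3: drop
}

func main() {
	ranges := []string{"192.168.77.0/24", "192.168.78.0/24"}
	fmt.Println(allowedBySourceRanges(net.ParseIP("192.168.78.20"), ranges)) // true
	fmt.Println(allowedBySourceRanges(net.ParseIP("10.0.0.5"), ranges))      // false
}
```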

Signed-off-by: Hongliang Liu <[email protected]>
hongliangl committed Sep 25, 2024
1 parent a996421 commit f54ac3a
Showing 12 changed files with 465 additions and 117 deletions.
36 changes: 31 additions & 5 deletions docs/design/ovs-pipeline.md
@@ -319,7 +319,8 @@ spec:

### LoadBalancer

A sample LoadBalancer Service with ingress IP `192.168.77.150` assigned by an ingress controller.
A sample LoadBalancer Service with ingress IP `192.168.77.150` assigned by an ingress controller, with
`loadBalancerSourceRanges` configured to a list of CIDRs.

```yaml
apiVersion: v1
@@ -334,6 +335,9 @@ spec:
port: 80
targetPort: 80
type: LoadBalancer
loadBalancerSourceRanges:
- "192.168.77.0/24"
- "192.168.78.0/24"
status:
loadBalancer:
ingress:
@@ -919,7 +923,7 @@ If you dump the flows of this table, you may see the following:
```text
1. table=NodePortMark, priority=200,ip,nw_dst=192.168.77.102 actions=set_field:0x80000/0x80000->reg4
2. table=NodePortMark, priority=200,ip,nw_dst=169.254.0.252 actions=set_field:0x80000/0x80000->reg4
3. table=NodePortMark, priority=0 actions=goto_table:SessionAffinity
3. table=NodePortMark, priority=0 actions=goto_table:LoadBalancerSourceRanges
```

Flow 1 matches packets destined for the local Node from local Pods. `NodePortRegMark` is loaded, indicating that the
@@ -937,6 +941,28 @@ Note that packets of NodePort Services have not been identified in this table by
identification of NodePort Services will finally be done in table [ServiceLB] by matching `NodePortRegMark` and the
specific destination port of a NodePort.

### LoadBalancerSourceRanges

This table is designed to implement `loadBalancerSourceRanges` for LoadBalancer Services.

If you dump the flows of this table, you may see the following:

```text
1. table=LoadBalancerSourceRanges, priority=200,tcp,nw_src=192.168.77.0/24,nw_dst=192.168.77.152,tp_dst=80 actions=goto_table:SessionAffinity
2. table=LoadBalancerSourceRanges, priority=200,tcp,nw_src=192.168.78.0/24,nw_dst=192.168.77.152,tp_dst=80 actions=goto_table:SessionAffinity
3. table=LoadBalancerSourceRanges, priority=190,tcp,nw_dst=192.168.77.152,tp_dst=80 actions=drop
4. table=LoadBalancerSourceRanges, priority=0 actions=goto_table:SessionAffinity
```

Flows 1-2 match packets destined for the sample [LoadBalancer] whose source IPs fall within the CIDRs specified in the
`loadBalancerSourceRanges` of the Service.

Flow 3, with a lower priority than flows 1-2, matches packets destined for the sample [LoadBalancer] whose source IPs
do not fall within any of the CIDRs in `loadBalancerSourceRanges`, and drops them.

Flow 4 is the table-miss flow, forwarding packets that do not match any of the flows above to table [SessionAffinity].

### SessionAffinity

This table is designed to implement Service session affinity. The learned flows that cache the information of the
@@ -978,7 +1004,7 @@ This table is used to implement Service Endpoint selection. It addresses specifi
3. LoadBalancer, as demonstrated in the example [LoadBalancer].
4. Service configured with external IPs, as demonstrated in the example [Service with ExternalIP].
5. Service configured with session affinity, as demonstrated in the example [Service with session affinity].
6. Service configured with externalTrafficPolicy to `Local`, as demonstrated in the example [Service with
6. Service configured with `externalTrafficPolicy` to `Local`, as demonstrated in the example [Service with
ExternalTrafficPolicy Local].

If you dump the flows of this table, you may see the following:
@@ -1081,7 +1107,7 @@ If you dump the flows of this table, you may see the following:
```

Flow 1 is designed for Services without Endpoints. It identifies the first packet of connections destined for such a Service
by matching `SvcNoEpRegMark`. Subsequently, the packet is forwarded to the OpenFlow controller (Antrea Agent). For TCP
by matching `SvcRejectRegMark`. Subsequently, the packet is forwarded to the OpenFlow controller (Antrea Agent). For TCP
Service traffic, the controller will send a TCP RST, and for all other cases the controller will send an ICMP Destination
Unreachable message.

@@ -1312,7 +1338,7 @@ the following cases when Antrea Proxy is not enabled:
to complete the DNAT processes, e.g., kube-proxy. The destination MAC of the packets is rewritten in the table to
avoid it being forwarded to the original client Pod by mistake.
- When hairpin is involved, i.e. connections between 2 local Pods, for which NAT is performed. One example is a
Pod accessing a NodePort Service for which externalTrafficPolicy is set to `Local` using the local Node's IP address,
Pod accessing a NodePort Service for which `externalTrafficPolicy` is set to `Local` using the local Node's IP address,
as there will be no SNAT for such traffic. Another example could be hostPort support, depending on how the feature
is implemented.

3 changes: 3 additions & 0 deletions pkg/agent/openflow/client.go
@@ -799,6 +799,9 @@ func (c *client) InstallServiceFlows(config *types.ServiceConfig) error {
if config.IsDSR {
flows = append(flows, c.featureService.dsrServiceMarkFlow(config))
}
if len(config.LoadBalancerSourceRanges) != 0 {
flows = append(flows, c.featureService.loadBalancerSourceRangesMarkFlows(config)...)
}
cacheKey := generateServicePortFlowCacheKey(config.ServiceIP, config.ServicePort, config.Protocol)
return c.addFlows(c.featureService.cachedFlows, cacheKey, flows)
}
96 changes: 65 additions & 31 deletions pkg/agent/openflow/client_test.go
@@ -1017,8 +1017,8 @@ func Test_client_GetPodFlowKeys(t *testing.T) {
"table=1,priority=200,arp,in_port=11,arp_spa=10.10.0.11,arp_sha=00:00:10:10:00:11",
"table=3,priority=190,in_port=11",
"table=4,priority=200,ip,in_port=11,dl_src=00:00:10:10:00:11,nw_src=10.10.0.11",
"table=17,priority=200,ip,reg0=0x200/0x200,nw_dst=10.10.0.11",
"table=22,priority=200,dl_dst=00:00:10:10:00:11",
"table=18,priority=200,ip,reg0=0x200/0x200,nw_dst=10.10.0.11",
"table=23,priority=200,dl_dst=00:00:10:10:00:11",
}
assert.ElementsMatch(t, expectedFlowKeys, flowKeys)
}
@@ -1254,17 +1254,18 @@ func Test_client_InstallServiceFlows(t *testing.T) {
port := uint16(80)

testCases := []struct {
name string
trafficPolicyLocal bool
protocol binding.Protocol
svcIP net.IP
affinityTimeout uint16
isExternal bool
isNodePort bool
isNested bool
isDSR bool
enableMulticluster bool
expectedFlows []string
name string
trafficPolicyLocal bool
protocol binding.Protocol
svcIP net.IP
affinityTimeout uint16
isExternal bool
isNodePort bool
isNested bool
isDSR bool
enableMulticluster bool
loadBalancerSourceRanges []string
expectedFlows []string
}{
{
name: "Service ClusterIP",
@@ -1449,6 +1450,38 @@ func Test_client_InstallServiceFlows(t *testing.T) {
"cookie=0x1030000000064, table=DSRServiceMark, priority=200,tcp6,reg4=0xc000000/0xe000000,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=learn(table=SessionAffinity,idle_timeout=160,fin_idle_timeout=5,priority=210,delete_learned,cookie=0x1030000000064,eth_type=0x86dd,nw_proto=0x6,OXM_OF_TCP_SRC[],OXM_OF_TCP_DST[],NXM_NX_IPV6_SRC[],NXM_NX_IPV6_DST[],load:NXM_NX_REG4[0..15]->NXM_NX_REG4[0..15],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG4[25],load:NXM_NX_XXREG3[]->NXM_NX_XXREG3[]),set_field:0x2000000/0x2000000->reg4,goto_table:EndpointDNAT",
},
},
{
name: "Service LoadBalancer,LoadBalancerSourceRanges,SessionAffinity,Short-circuiting",
protocol: binding.ProtocolSCTP,
svcIP: svcIPv4,
affinityTimeout: uint16(100),
isExternal: true,
trafficPolicyLocal: true,
loadBalancerSourceRanges: []string{"192.168.1.0/24", "192.168.2.0/24"},
expectedFlows: []string{
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=200,sctp,nw_src=192.168.1.0/24,nw_dst=10.96.0.100,tp_dst=80 actions=goto_table:SessionAffinity",
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=200,sctp,nw_src=192.168.2.0/24,nw_dst=10.96.0.100,tp_dst=80 actions=goto_table:SessionAffinity",
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=190,sctp,nw_dst=10.96.0.100,tp_dst=80 actions=drop",
"cookie=0x1030000000000, table=ServiceLB, priority=210,sctp,reg4=0x10010000/0x10070000,nw_dst=10.96.0.100,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x30000/0x70000->reg4,set_field:0x200000/0x200000->reg4,set_field:0x64->reg7,group:100",
"cookie=0x1030000000000, table=ServiceLB, priority=200,sctp,reg4=0x10000/0x70000,nw_dst=10.96.0.100,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x30000/0x70000->reg4,set_field:0x200000/0x200000->reg4,set_field:0x65->reg7,group:101",
"cookie=0x1030000000065, table=ServiceLB, priority=190,sctp,reg4=0x30000/0x70000,nw_dst=10.96.0.100,tp_dst=80 actions=learn(table=SessionAffinity,hard_timeout=100,priority=200,delete_learned,cookie=0x1030000000065,eth_type=0x800,nw_proto=0x84,OXM_OF_SCTP_DST[],NXM_OF_IP_DST[],NXM_OF_IP_SRC[],load:NXM_NX_REG4[0..15]->NXM_NX_REG4[0..15],load:NXM_NX_REG4[26]->NXM_NX_REG4[26],load:NXM_NX_REG3[]->NXM_NX_REG3[],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[9],load:0x1->NXM_NX_REG4[21]),set_field:0x20000/0x70000->reg4,goto_table:EndpointDNAT",
},
},
{
name: "Service LoadBalancer,LoadBalancerSourceRanges,IPv6,SessionAffinity",
protocol: binding.ProtocolSCTPv6,
svcIP: svcIPv6,
affinityTimeout: uint16(100),
isExternal: true,
loadBalancerSourceRanges: []string{"fec0:192:168:1::/64", "fec0:192:168:2::/64"},
expectedFlows: []string{
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=200,sctp6,ipv6_src=fec0:192:168:1::/64,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=goto_table:SessionAffinity",
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=200,sctp6,ipv6_src=fec0:192:168:2::/64,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=goto_table:SessionAffinity",
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=190,sctp6,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=drop",
"cookie=0x1030000000000, table=ServiceLB, priority=200,sctp6,reg4=0x10000/0x70000,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x30000/0x70000->reg4,set_field:0x200000/0x200000->reg4,set_field:0x64->reg7,group:100",
"cookie=0x1030000000064, table=ServiceLB, priority=190,sctp6,reg4=0x30000/0x70000,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=learn(table=SessionAffinity,hard_timeout=100,priority=200,delete_learned,cookie=0x1030000000064,eth_type=0x86dd,nw_proto=0x84,OXM_OF_SCTP_DST[],NXM_NX_IPV6_DST[],NXM_NX_IPV6_SRC[],load:NXM_NX_REG4[0..15]->NXM_NX_REG4[0..15],load:NXM_NX_REG4[26]->NXM_NX_REG4[26],load:NXM_NX_XXREG3[]->NXM_NX_XXREG3[],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[9],load:0x1->NXM_NX_REG4[21]),set_field:0x20000/0x70000->reg4,goto_table:EndpointDNAT",
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
@@ -1471,17 +1504,18 @@ func Test_client_InstallServiceFlows(t *testing.T) {
cacheKey := generateServicePortFlowCacheKey(tc.svcIP, port, tc.protocol)

assert.NoError(t, fc.InstallServiceFlows(&types.ServiceConfig{
ServiceIP: tc.svcIP,
ServicePort: port,
Protocol: tc.protocol,
TrafficPolicyLocal: tc.trafficPolicyLocal,
LocalGroupID: localGroupID,
ClusterGroupID: clusterGroupID,
AffinityTimeout: tc.affinityTimeout,
IsExternal: tc.isExternal,
IsNodePort: tc.isNodePort,
IsNested: tc.isNested,
IsDSR: tc.isDSR,
ServiceIP: tc.svcIP,
ServicePort: port,
Protocol: tc.protocol,
TrafficPolicyLocal: tc.trafficPolicyLocal,
LocalGroupID: localGroupID,
ClusterGroupID: clusterGroupID,
AffinityTimeout: tc.affinityTimeout,
IsExternal: tc.isExternal,
IsNodePort: tc.isNodePort,
IsNested: tc.isNested,
IsDSR: tc.isDSR,
LoadBalancerSourceRanges: tc.loadBalancerSourceRanges,
}))
fCacheI, ok := fc.featureService.cachedFlows.Load(cacheKey)
require.True(t, ok)
@@ -1527,11 +1561,11 @@ func Test_client_GetServiceFlowKeys(t *testing.T) {
assert.NoError(t, fc.InstallEndpointFlows(bindingProtocol, endpoints))
flowKeys := fc.GetServiceFlowKeys(svcIP, svcPort, bindingProtocol, endpoints)
expectedFlowKeys := []string{
"table=11,priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.96.0.224,tp_dst=80",
"table=11,priority=190,tcp,reg4=0x30000/0x70000,nw_dst=10.96.0.224,tp_dst=80",
"table=12,priority=200,tcp,reg3=0xa0a000b,reg4=0x20050/0x7ffff",
"table=12,priority=200,tcp,reg3=0xa0a000c,reg4=0x20050/0x7ffff",
"table=20,priority=190,ct_state=+new+trk,ip,nw_src=10.10.0.12,nw_dst=10.10.0.12",
"table=12,priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.96.0.224,tp_dst=80",
"table=12,priority=190,tcp,reg4=0x30000/0x70000,nw_dst=10.96.0.224,tp_dst=80",
"table=13,priority=200,tcp,reg3=0xa0a000b,reg4=0x20050/0x7ffff",
"table=13,priority=200,tcp,reg3=0xa0a000c,reg4=0x20050/0x7ffff",
"table=21,priority=190,ct_state=+new+trk,ip,nw_src=10.10.0.12,nw_dst=10.10.0.12",
}
assert.ElementsMatch(t, expectedFlowKeys, flowKeys)
}
@@ -2787,8 +2821,8 @@ func Test_client_ReplayFlows(t *testing.T) {
"cookie=0x1020000000000, table=IngressMetric, priority=200,reg0=0x400/0x400,reg3=0xf actions=drop",
)
replayedFlows = append(replayedFlows,
"cookie=0x1020000000000, table=IngressRule, priority=200,conj_id=15 actions=set_field:0xf->reg3,set_field:0x400/0x400->reg0,set_field:0x800/0x1800->reg0,set_field:0x2000000/0xfe000000->reg0,set_field:0x1b/0xff->reg2,group:4",
"cookie=0x1020000000000, table=IngressDefaultRule, priority=200,reg1=0x64 actions=set_field:0x800/0x1800->reg0,set_field:0x2000000/0xfe000000->reg0,set_field:0x400000/0x600000->reg0,set_field:0x1c/0xff->reg2,goto_table:Output",
"cookie=0x1020000000000, table=IngressRule, priority=200,conj_id=15 actions=set_field:0xf->reg3,set_field:0x400/0x400->reg0,set_field:0x800/0x1800->reg0,set_field:0x2000000/0xfe000000->reg0,set_field:0x1c/0xff->reg2,group:4",
"cookie=0x1020000000000, table=IngressDefaultRule, priority=200,reg1=0x64 actions=set_field:0x800/0x1800->reg0,set_field:0x2000000/0xfe000000->reg0,set_field:0x400000/0x600000->reg0,set_field:0x1d/0xff->reg2,goto_table:Output",
)

// Feature Pod connectivity replays flows.
1 change: 1 addition & 0 deletions pkg/agent/openflow/framework.go
@@ -254,6 +254,7 @@ func (f *featureService) getRequiredTables() []*Table {
tables := []*Table{
UnSNATTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
6 changes: 6 additions & 0 deletions pkg/agent/openflow/framework_test.go
@@ -129,6 +129,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -260,6 +261,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -304,6 +306,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -347,6 +350,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -426,6 +430,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
NodePortMarkTable,
SessionAffinityTable,
ServiceLBTable,
@@ -474,6 +479,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
43 changes: 36 additions & 7 deletions pkg/agent/openflow/pipeline.go
@@ -138,12 +138,13 @@ var (

// Tables in stagePreRouting:
// When proxy is enabled.
PreRoutingClassifierTable = newTable("PreRoutingClassifier", stagePreRouting, pipelineIP)
NodePortMarkTable = newTable("NodePortMark", stagePreRouting, pipelineIP)
SessionAffinityTable = newTable("SessionAffinity", stagePreRouting, pipelineIP)
ServiceLBTable = newTable("ServiceLB", stagePreRouting, pipelineIP)
DSRServiceMarkTable = newTable("DSRServiceMark", stagePreRouting, pipelineIP)
EndpointDNATTable = newTable("EndpointDNAT", stagePreRouting, pipelineIP)
PreRoutingClassifierTable = newTable("PreRoutingClassifier", stagePreRouting, pipelineIP)
LoadBalancerSourceRangesTable = newTable("LoadBalancerSourceRanges", stagePreRouting, pipelineIP)
NodePortMarkTable = newTable("NodePortMark", stagePreRouting, pipelineIP)
SessionAffinityTable = newTable("SessionAffinity", stagePreRouting, pipelineIP)
ServiceLBTable = newTable("ServiceLB", stagePreRouting, pipelineIP)
DSRServiceMarkTable = newTable("DSRServiceMark", stagePreRouting, pipelineIP)
EndpointDNATTable = newTable("EndpointDNAT", stagePreRouting, pipelineIP)
// When proxy is disabled.
DNATTable = newTable("DNAT", stagePreRouting, pipelineIP)

@@ -3011,7 +3012,7 @@ func (f *featureService) preRoutingClassifierFlows() []binding.Flow {
cookieID := f.cookieAllocator.Request(f.category).Raw()
var flows []binding.Flow

targetTables := []uint8{SessionAffinityTable.GetID(), ServiceLBTable.GetID()}
targetTables := []uint8{LoadBalancerSourceRangesTable.GetID(), SessionAffinityTable.GetID(), ServiceLBTable.GetID()}
if f.proxyAll {
targetTables = append([]uint8{NodePortMarkTable.GetID()}, targetTables...)
}
@@ -3106,6 +3107,34 @@ func (f *featureService) gatewaySNATFlows() []binding.Flow {
return flows
}

func (f *featureService) loadBalancerSourceRangesMarkFlows(config *types.ServiceConfig) []binding.Flow {
cookieID := f.cookieAllocator.Request(f.category).Raw()
protocol := config.Protocol
ingressIP := config.ServiceIP
port := config.ServicePort
var flows []binding.Flow
for _, srcRange := range config.LoadBalancerSourceRanges {
_, srcIPNet, _ := net.ParseCIDR(srcRange)
flows = append(flows, LoadBalancerSourceRangesTable.ofTable.BuildFlow(priorityNormal).
Cookie(cookieID).
MatchProtocol(protocol).
MatchSrcIPNet(*srcIPNet).
MatchDstIP(ingressIP).
MatchDstPort(port, nil).
Action().NextTable().
Done(),
)
}
flows = append(flows, LoadBalancerSourceRangesTable.ofTable.BuildFlow(priorityLow).
Cookie(cookieID).
MatchProtocol(protocol).
MatchDstIP(ingressIP).
MatchDstPort(port, nil).
Action().Drop().
Done())
return flows
}
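
One note on the parsing step above: the error returned by `net.ParseCIDR` is discarded, which is presumably safe
because the Kubernetes API server validates `loadBalancerSourceRanges` entries as CIDRs before they reach the agent.
A standalone sketch of the same step with explicit error handling (illustrative only, not the committed code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical input mirroring config.LoadBalancerSourceRanges.
	sourceRanges := []string{"192.168.1.0/24", "not-a-cidr", "fec0:192:168:1::/64"}
	for _, srcRange := range sourceRanges {
		_, srcIPNet, err := net.ParseCIDR(srcRange)
		if err != nil {
			fmt.Printf("skipping invalid source range %q: %v\n", srcRange, err)
			continue
		}
		fmt.Printf("would install an allow flow matching nw_src=%s\n", srcIPNet)
	}
}
```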

func getCachedFlowMessages(cache *flowCategoryCache) []*openflow15.FlowMod {
var flows []*openflow15.FlowMod
cache.Range(func(key, value interface{}) bool {
