
5.4 Mitigating Flow Table Space Shortages with Adaptive Software-Defined Multicast

5.4.3 Adaptive Bit-Indexed Software-Defined Multicast


packet is the sum of the set length and the offset length: AL = SL + OL. Specifically, for each bit string length BL the optimal set length is calculated as follows:

SL_opt = argmin_SL (AL)

subject to:

OL = ⌈BL / 2^SL⌉
AL = SL + OL
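This optimization can be solved by a small exhaustive search over candidate set lengths. The following Python sketch (function name and search bound are our own, for illustration) mirrors the formula above:

```python
import math

def optimal_set_length(bl: int):
    """Return (SL, OL, AL) minimizing the address length AL = SL + OL,
    where OL = ceil(BL / 2**SL) is the offset length needed so that
    2**SL sets of OL bits cover all BL bit positions."""
    best = None
    for sl in range(bl.bit_length() + 1):  # larger SL cannot shorten AL
        ol = math.ceil(bl / 2 ** sl)
        al = sl + ol
        if best is None or al < best[2]:
            best = (sl, ol, al)
    return best

# For the 9-switch underlay of Figure 5.10 this yields SL = 2 and
# OL = 3, i.e., 3-bit strings with a 2-bit set ID (5 address bits).
```

For BL = 9 the search settles on SL = 2 and OL = 3, which matches the set:bitstring addresses used in the figures.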

An example of an ABSDM forwarding system is provided in Figure 5.10. Only switches that should receive multicast traffic are assigned bit IDs, to minimize the total length of the bit string. The multicast forwarding includes all data plane devices in the system and requires multicast state on every one of them. However, this resource consumption is offset by the fact that the per-group state is smaller than in other systems, because all groups share the forwarding tree. The forwarding tree is created in the same way as described for ASDM, using late replication.

[Figure: ISP topology with switches 1 to 9 between the subscribers and the rest of the Internet; the multicast forwarding region connects the BSDM edge switches, which carry bit addresses of the form set:bitstring (e.g., 1:010, 2:100).]

Figure 5.10: BSDM forwarding underlay

An example of a multicast group is given in Figure 5.11. The light green shaded areas depict the shared forwarding state. The default forwarding method proposed by BIER is late replication, which is depicted in Figure 5.11a. As with ASDM, packets addressed to the group are identified at the group ingress switch. The group is the same as used in the bit address encoding example. The destination addresses of the multicast group are part of two different bit sets, 1 and 2. Therefore, the packet is replicated at the ingress switch, a multicast address header is added to each copy, and each copy receives a different multicast address. One packet is addressed to switches 5 and 6, which are part of bit set 1, and one is addressed to switch 9, which is part of bit set 2. The multicast forwarding routers contain forwarding rules for all bit sets, which forward the packets in parallel for the first two hops. Once the packets arrive at their destination switches, a rule on each switch identifies the multicast group, replicates the packets if required, removes the multicast address header, and rewrites the destination addresses to the respective group members, as with ASDM. As is visible in Figure 5.11a, the number of switches that host per-group state is significantly reduced.
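The per-set encoding at the ingress switch can be sketched as follows. The bit-ID table is illustrative and mirrors the example group (switches 5 and 6 in bit set 1, switch 9 in bit set 2):

```python
# Illustrative bit-ID assignment: switch -> (bit set, bit position).
BIT_ID = {5: (1, 1), 6: (1, 2), 9: (2, 2)}

def encode_group(members, offset_len=3):
    """Group the members by bit set and OR their bits into one bit
    string per set; the ingress emits one packet copy per set."""
    per_set = {}
    for switch in members:
        set_id, pos = BIT_ID[switch]
        per_set[set_id] = per_set.get(set_id, 0) | (1 << pos)
    # Render "set:bitstring" destinations as drawn in Figure 5.11a.
    return {s: format(bits, f"0{offset_len}b") for s, bits in per_set.items()}

# encode_group([5, 6, 9]) -> {1: "110", 2: "100"}
```

The two resulting destinations, 1:110 and 2:100, are exactly the two packet copies emitted by the ingress switch in the late-replication example.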

[Figure: the same underlay as Figure 5.10; the group ingress switch 1 emits packets with Dst: 1:110 and Dst: 2:100 toward the group egress switches 5, 6, and 9.]

(a) BSDM late replication

[Figure: the same underlay; the group ingress switch 1 sends unicast copies directly to the group egress switches 5, 6, and 9.]

(b) ABSDM replication with unicast conversion

Figure 5.11: ABSDM forwarding.

The per-group state can be further reduced by applying adaptiveness to the multicast forwarding, as depicted in Figure 5.11b. The unicast conversion threshold in the example is 2, which means the multicast subtrees of the corresponding bit set are converted to unicast when the number of members they contain is less than or equal to 2. Therefore, the traffic for both bit sets is immediately converted to unicast, which leads to a single rule on a single switch only.
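The conversion decision can be sketched as follows (data layout and the threshold default are illustrative):

```python
def split_for_adaptiveness(per_set_members, threshold=2):
    """Per bit set, decide between bit-indexed multicast and unicast
    conversion: sets with at most `threshold` members are served by
    individual unicast copies from the ingress switch."""
    multicast, unicast = {}, []
    for set_id, members in per_set_members.items():
        if len(members) <= threshold:
            unicast.extend(members)
        else:
            multicast[set_id] = members
    return multicast, unicast

# Example group of Figure 5.11b: both sets hold at most two members,
# so everything is unicast-converted at the ingress.
# split_for_adaptiveness({1: [5, 6], 2: [9]}) -> ({}, [5, 6, 9])
```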

The forwarding pipeline of ABSDM is more complex than the one for ASDM, which is why we describe it in detail in Figure 5.12. It requires an OpenFlow flow table for every port and is assumed to start in the flow table pipeline at flow table m. The depiction shows the process for n ports to which ABSDM downstream devices are connected. ABSDM traffic is identified at the beginning of the pipeline and directed to flow table m. This first table is responsible for terminating the multicast process early to implement adaptiveness. We will discuss this table later.

The default bit-indexed forwarding works as follows for every port. For each switch that is reached through a specific port, its bit index in the respective bit set is checked. If the bit is set to 1, the packet is forwarded out of this port. To that end, the packet is sent to both an indirect group table and the flow table of the next port, i.e., flow table m+1. In the group table, all bits in the set that are not reached through this port are set to 0. This prevents the packet from being replicated to these addresses again on the next multicast switch, which would cause duplicate packets. Then the packet is output to the corresponding port. The actions applied to the packet in the group table do not affect the copy of the packet that has been forwarded to table m+1. In flow table m+1 the same process is applied, which ensures that the packet is checked against every port on the switch. Afterward, the packet is dropped. Note that
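In software, the per-port check and mask-zeroing just described might look as follows. The port-mask table is an assumption for illustration; on the switch it is realized by the per-port flow tables and indirect group tables of the pipeline:

```python
def bier_forward(set_id, bitstring, port_masks):
    """Emit one copy per port whose reachable-bit mask intersects the
    packet's bit string. Each copy keeps only the bits reachable
    through its port (all other bits are zeroed), so downstream
    switches cannot replicate to the same destinations again."""
    copies = []
    for port, mask in port_masks.get(set_id, {}).items():
        if bitstring & mask:
            copies.append((port, bitstring & mask))
    return copies

# Illustrative masks: bits 1 and 2 of set 1 lie behind port 1,
# bit 0 behind port 2.
# bier_forward(1, 0b110, {1: {1: 0b110, 2: 0b001}}) -> [(1, 0b110)]
```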


[Figure: the ABSDM pipeline; flow table m (adaptive termination, match group ID i) precedes flow tables m+1 to m+n, one per port, each matching the forwarding bit mask for its port. Each port table applies an indirect group table that zeroes the forwarding bit mask and outputs to the port, then continues to the next port's table; all-type group tables remove the bit header, write the unicast destination, and output to a port.]

Figure 5.12: The ABSDM OpenFlow pipeline

we assumed OpenFlow to support the specific bit header format used by ABSDM. This could be implemented, e.g., using the OpenFlow experimenter feature. If the header format is not supported, the 128-bit IPv6 destination address could be used to identify bit-indexed multicast traffic and to encode the bit field. The 20-bit flow label header field of IPv6 could be used for group identification.
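If the bit header is carried in the IPv6 destination address as suggested, the packing could look as follows. The split of the 128 bits between set ID and bit string is an assumption chosen for illustration:

```python
def pack_ipv6_dst(set_id, bitstring, offset_bits=120):
    """Pack the set ID and the bit string into one 128-bit value that
    can serve as an IPv6 destination address. The 8/120 bit split is
    illustrative only."""
    assert set_id < 2 ** (128 - offset_bits)
    assert bitstring < 2 ** offset_bits
    return (set_id << offset_bits) | bitstring

def unpack_ipv6_dst(addr, offset_bits=120):
    """Recover (set ID, bit string) from the packed address."""
    return addr >> offset_bits, addr & ((1 << offset_bits) - 1)
```

Matching single bits of such an address on the switch then relies on arbitrarily masked match fields (cf. Table 5.5).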

Table 5.4: Requirements for the ABSDM forwarding pipeline

Resource/Feature                     Consumption
Flow tables                          Num. of next hops + 1
Num. group tables type indirect      Num. of next hops
Num. group tables type all           Num. of groups with local unicast conversion
Num. of flow entries                 Bit string length

Table 5.5: Required OpenFlow features over using version 1.0 as baseline (adapted from [Wel16]).

Feature                                      Min. OpenFlow version   Comment
Group tables                                 1.1                     Per next-hop packet processing
Match fields with arbitrary bit masks        1.2                     Matching single bits of BIER destination addresses
Set field values with arbitrary bit masks    1.5                     Zero current forwarding bit mask

Table m terminates packets addressed directly to the switch and enables early duplication strategies for adaptiveness, both of which require the conversion of traffic from multicast to unicast. In this table, entries can be added that match a specific multicast group. Then, per group, the packets can be converted to unicast and replicated again if required using an OpenFlow all group table.
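The per-group conversion could be sketched as follows; the action dictionaries are an abstraction of OpenFlow bucket actions, not a real OpenFlow API:

```python
def unicast_conversion_buckets(members):
    """Build one replication bucket per group member, as an OpenFlow
    'all' group would: strip the bit header and rewrite the packet's
    destination to the member's unicast address."""
    return [
        {"remove_bit_header": True, "set_unicast_dst": dst}
        for dst in members
    ]

# unicast_conversion_buckets(["10.0.0.5", "10.0.0.9"]) produces two
# buckets, i.e., one unicast copy per member.
```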

The functional requirements for OpenFlow devices to be able to operate ABSDM are listed in Table 5.4. While the approach is expected to be very efficient regarding flow entry consumption, it requires the data plane element to support a high number of flow tables.

In addition to a flow table per port, the system requires some features that are only available in recent versions of OpenFlow. The features and the OpenFlow version they are available from are listed in Table 5.5.

An extended description of the BSDM approach including its application interface can be found in the master’s thesis of P. Welzel [Wel16].