
5.3 Enabling Dynamic Function Chaining to Mitigate Flow Entry Addition

5.3.1 Packet Flow Identification and Forwarding Scheme

The core design decision of the approach is the packet flow identification at the packet interface of VNF instances. Network function chain instances use a mix of dedicated

[Figure: an edge data center with spine and top-of-rack switches, NFV infrastructure nodes hosting VNF instances (e.g., deep packet inspection, firewall) on virtual switches, legacy network functions, and connections to the ISP core and access networks.]

Figure 5.4: An example edge data center with the proposed network function chaining design (adapted from [Ble+15a]).

and shared VNF instances, as depicted in Figure 5.4. The system relies on network interfaces to identify traffic sent to and received from VNF instances. In most cases, a dedicated VNF instance has two interfaces: the ingress interface for traffic from and to the subscriber and the egress interface for traffic from and to the Internet. This simple approach solves the problem of traffic identification that often arises when a large number of subscribers share one VNF instance [Qaz+13]. The problem is caused by the packet interface characteristics of certain types of network functions, specifically those that modify packet flow headers in a way that makes flows hard to identify after processing, e.g., by rewriting IP addresses, or that create an entirely new packet flow. An overview of selected packet interface types of VNFs is given in Table 5.1, which is discussed in Section 5.3.2. By using network interfaces to identify traffic, the network function chaining problem is reduced to the problem of creating interconnections between network interfaces in the data center. A network function graph instead of a chain can be implemented through VNF instances with more than two interfaces; the additional interfaces enable an instance to select alternative paths in the graph. Shared VNF instances can be implemented by using multiple, separate sets of interfaces per network function chain instance.
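To make this interface-based view concrete, the following Python sketch models a chain instance as an ordered list of dedicated VNF instances, each with a subscriber-facing ingress and an Internet-facing egress interface, and derives the interface interconnections the data center network has to provide. The class and function names as well as the example instances are illustrative and not part of the original design.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VnfInstance:
    """A dedicated VNF instance with one subscriber-facing (ingress)
    and one Internet-facing (egress) interface."""
    name: str
    ingress_if: str  # traffic from/to the subscriber
    egress_if: str   # traffic from/to the Internet

def build_links(chain: List[VnfInstance]) -> List[Tuple[str, str]]:
    """Reduce chaining to interface interconnection: connect the egress
    interface of each VNF instance to the ingress interface of its successor."""
    return [(a.egress_if, b.ingress_if) for a, b in zip(chain, chain[1:])]

# Example chain for one subscriber: deep packet inspection followed by a firewall.
chain = [
    VnfInstance("dpi-sub42", "dpi-sub42-in", "dpi-sub42-out"),
    VnfInstance("fw-sub42", "fw-sub42-in", "fw-sub42-out"),
]
print(build_links(chain))  # [('dpi-sub42-out', 'fw-sub42-in')]
```

A network function graph instead of a linear chain would simply add further interfaces per instance and further links between them.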


Creating many network interfaces is not an issue since the network interfaces are created on software switches and, therefore, are software instances only. The NFV architecture is assumed to be based on many small, disaggregated VNFs. A dedicated VNF instance per subscriber or traffic stream can be used. The idea and feasibility of operating a dedicated VNF instance per traffic flow in the form of a virtual machine (VM) were presented by Bifulco et al. [Bif+13] and confirmed by Madhavapeddy et al. [Mad+15] as well as Manco et al. [Man+17]. The concept is also proposed for the CORD-based design for residential access networks by the ONF [1]. Furthermore, the deployment process for a large number of VNF instances is supported by our design through the concept of VNF instance isolation.

The network function chaining approach is aimed at edge data centers where, in addition to other services, the service gateways for subscribers are operated. These service gateways could be implemented as virtual machines, as proposed by the ONF in their R-CORD proposal for implementing residential network access services [2], or based on hardware switches, as proposed by Nobach and Blendin et al. [Nob+17]. Independent of the implementation, subscriber identification facilities are available, either as IP addresses or prefixes or in the form of virtual LAN (VLAN) or other protocol tags. These identification facilities can be used for matching with the OpenFlow protocol [Nob+17]. In this discussion, we use the subscribers' IP addresses; however, the identification mechanism can be replaced without changing the characteristics of the design. Traffic coming from the Internet or other parts of the ISP network is always identified by its IP address.
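As an illustration of the subscriber identification step, the sketch below builds an abstract OpenFlow-style match for a subscriber, either from an IP prefix or from a VLAN tag. The dictionary keys mirror common OpenFlow match-field names, but the helper is a hypothetical example and not tied to a specific controller API.

```python
from ipaddress import ip_network
from typing import Optional

def subscriber_match(subscriber_prefix: Optional[str] = None,
                     vlan_id: Optional[int] = None) -> dict:
    """Return an abstract OpenFlow-style match identifying a subscriber
    either by IP prefix or by VLAN tag (illustrative, controller-agnostic)."""
    if subscriber_prefix is not None:
        net = ip_network(subscriber_prefix)
        return {"eth_type": 0x0800, "ipv4_src": str(net)}
    if vlan_id is not None:
        return {"vlan_vid": vlan_id}
    raise ValueError("either an IP prefix or a VLAN ID is required")

# Upstream traffic of a subscriber identified by an example prefix;
# downstream traffic from the Internet would instead match on ipv4_dst.
print(subscriber_match(subscriber_prefix="203.0.113.0/28"))
print(subscriber_match(vlan_id=42))
```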

The forwarding scheme is inspired by PortLand, the MAC-address-based data center forwarding approach proposed by Mysore et al. [Nir+09]. We use MAC addresses instead of VLAN tags [IMS13] or Segment Routing because they are well supported by existing OpenFlow hardware; this includes masked matching of MAC addresses as well as rewriting MAC addresses. Support for, e.g., VLAN tags in existing SDN hardware is incomplete. As shown, e.g., by Nobach et al. [Nob+17], even state-of-the-art switching ASICs such as the Trident II do not fully support the processing of more than one VLAN tag. While the PortLand approach fits the edge data center use case well, it is not designed for SDN and is a generic design for data centers. Since edge data centers are restricted in size, we can simplify the forwarding mechanism.

Our proposed MAC address encoding scheme is depicted in Figure 5.5. A modified

[Figure: the example address 00:09:2d:01:03:02 split into rack ID, VNFI ID, port ID, and flow ID fields.]

Figure 5.5: The MAC address encoding of forwarding information (adapted from [Ble+15a]).

[1] https://wiki.opencord.org/pages/viewpage.action?pageId=1278090
[2] https://www.opennetworking.org/r-cord/

version of the hierarchical addressing scheme of PortLand is used. The first byte of the 48-bit MAC address field encodes the rack in the data center where the NFV infrastructure server is located. The second byte signifies the server in the rack that is addressed. The 12 bits after that specify the port number of the virtual switch of this server. The last 20 bits encode a unique flow ID that can be used for verification by VNFs and debugging purposes; it is not required for the forwarding process.
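A minimal sketch of the described bit layout is given below: it packs and unpacks the four fields of the 48-bit address using the widths stated above (8-bit rack ID, 8-bit server ID, 12-bit port, 20-bit flow ID). The function names are illustrative.

```python
def encode_mac(rack_id: int, server_id: int, port: int, flow_id: int) -> str:
    """Pack rack ID (8 bit), server ID (8 bit), virtual switch port (12 bit),
    and flow ID (20 bit) into a 48-bit MAC address string."""
    assert 0 <= rack_id < 2**8 and 0 <= server_id < 2**8
    assert 0 <= port < 2**12 and 0 <= flow_id < 2**20
    value = (rack_id << 40) | (server_id << 32) | (port << 20) | flow_id
    return ":".join(f"{b:02x}" for b in value.to_bytes(6, "big"))

def decode_mac(mac: str) -> dict:
    """Extract the four forwarding fields from a MAC address string."""
    value = int(mac.replace(":", ""), 16)
    return {
        "rack_id": (value >> 40) & 0xFF,
        "server_id": (value >> 32) & 0xFF,
        "port": (value >> 20) & 0xFFF,
        "flow_id": value & 0xFFFFF,
    }

mac = encode_mac(rack_id=0x00, server_id=0x09, port=0x2D0, flow_id=0x10302)
print(mac)              # 00:09:2d:01:03:02 (the example address from Figure 5.5)
print(decode_mac(mac))  # {'rack_id': 0, 'server_id': 9, 'port': 720, 'flow_id': 66306}
```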

The maximum edge data center size supported by this design is 256 racks with 256 servers each, for a total of 65,536 servers. Each NFV infrastructure server can host a maximum of 32,768 VNF instances with two interfaces each. We argue that these limits are sufficient for edge data centers.

This design combines the advantages of hierarchical addressing with the freedom to customize forwarding in the leaf-spine layer. We assume a fully redundant leaf-spine architecture, where every VNF instance is connected to two ToR switches that are in turn connected to all available spine switches. Specifically, the ToR switches that act as leaf switches and the spine switches need at most 512 forwarding table entries for redundant paths to any NFV infrastructure server. For directly connected servers, ToR switches additionally require 256 forwarding table entries. Because only part of the MAC address is matched at each hop, wildcard matching is required. State-of-the-art OpenFlow switches support TCAM-based tables of these sizes.
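The following sketch illustrates how such wildcard matching could look at the different switch roles, rendering masked MAC matches in the common value/mask notation. Which address bytes each role matches on is an assumption made for illustration, not a verbatim description of the rule set used in the design.

```python
def _fmt(bs: bytes) -> str:
    return ":".join(f"{b:02x}" for b in bs)

def masked_match(value: bytes, mask: bytes) -> str:
    """Render a masked MAC match in value/mask notation."""
    return f"{_fmt(value)}/{_fmt(mask)}"

def spine_rule(rack_id: int) -> str:
    """Spine switch: forward on the rack byte only (first address byte)."""
    return masked_match(bytes([rack_id, 0, 0, 0, 0, 0]),
                        bytes([0xFF, 0, 0, 0, 0, 0]))

def tor_rule(rack_id: int, server_id: int) -> str:
    """ToR (leaf) switch: forward to a directly connected server on rack + server byte."""
    return masked_match(bytes([rack_id, server_id, 0, 0, 0, 0]),
                        bytes([0xFF, 0xFF, 0, 0, 0, 0]))

def vswitch_rule(port: int) -> str:
    """Virtual switch: deliver to the local port encoded in the 12-bit port field."""
    value = (port & 0xFFF) << 20
    return masked_match(bytes(2) + value.to_bytes(4, "big"),
                        bytes([0, 0, 0xFF, 0xF0, 0, 0]))

print(spine_rule(0x00))        # 00:00:00:00:00:00/ff:00:00:00:00:00
print(tor_rule(0x00, 0x09))    # 00:09:00:00:00:00/ff:ff:00:00:00:00
print(vswitch_rule(0x2D0))     # 00:00:2d:00:00:00/00:00:ff:f0:00:00
```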

In the investigated NFV environment, most interfaces are virtual interfaces. The virtualization manager assigns the MAC addresses of virtual interfaces. Hence, in this specific use case, we can control all MAC addresses of the involved devices and assign them as required. Therefore, optimizations or more general approaches such as the "MAC address as routing label" approach proposed by Schwabe et al. [SK14] are not required.