

3.1.2 Data Plane Performance Bottlenecks


mentioned in the future work section. Bottlenecks are discussed in the context of the controller only. In the presentation of OpenDaylight by Medved et al. [Med+14], the word "resource" appears only twice, and not in a relevant context. We therefore conclude that both designs strive primarily for performance. This focus makes sense for the goal of gaining acceptance in industry, but it makes these designs susceptible to resource contention.

The papers analyzed in this investigation, the locations where they suggest performing virtualization, and the resources they identified and virtualized are listed in Table 3.1. The symbols in the table that signify the handling of a hardware bottleneck have the following meaning:

• Identified only:

• Identified and virtualized: V

• Identified and virtualized with prioritization: VP

An overview of the locations where virtualization is suggested to be performed is depicted in Figure 3.1. The table links the related work to this depiction by giving the number of the virtualization location in the "Location" column.

report on are the sizes of flow tables, group tables, and meter tables. Another acknowledged control path bottleneck is sending packets from the ASIC to the control plane. The proposed solution to this issue is discussed in Section 3.1.3.

Costa et al. [Cos+17] propose a systematic approach to investigate the performance of SDN data planes. Unfortunately, the approach does not explain how the list of performance tests was derived and why. This suggests that, again, an ad-hoc approach is used to determine potential performance bottlenecks. Still, the authors investigate the behavior of the tested devices when their flow tables are full and discover that many of them exhibit unexpected behavior in this case. They therefore suggest filling flow tables to at most 90% of their capacity, which in many cases is lower than the capacity advertised by the corresponding OpenFlow primitives.
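Such a safety margin can be enforced directly in the control plane. The following is a minimal sketch of that idea, not part of Costa et al.'s work: the class name, interface, and the 0.9 default margin are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from [Cos+17]): cap flow table usage at a
# safety margin below the capacity advertised by the switch.

class FlowTableGuard:
    def __init__(self, advertised_capacity: int, safety_margin: float = 0.9):
        # Stay below the advertised size to avoid the unexpected behavior
        # observed when tables run completely full.
        self.limit = int(advertised_capacity * safety_margin)
        self.entries = {}  # match -> actions

    def add_flow(self, match, actions) -> bool:
        """Install an entry only while the table stays below the safe limit."""
        if match not in self.entries and len(self.entries) >= self.limit:
            return False  # caller should evict or aggregate instead of overfilling
        self.entries[match] = actions
        return True


guard = FlowTableGuard(advertised_capacity=2000)
print(guard.add_flow(("10.0.0.1", "10.0.0.2"), ["output:1"]))  # True
```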

Rotsos et al. [Rot+12] provide in-depth measurements of three hardware and one software OpenFlow switch. They use a glass box approach and provide insights into the data path performance as well as the control path. They investigate the performance of different OpenFlow messages as well as their interactions. They do not, however, try to systematically create a performance model of the data plane.

Lazaris et al. [Laz+14] argue that the diversity in performance characteristics of SDN devices is not captured adequately by existing SDN protocols, e.g., OpenFlow. Therefore, control planes need detailed information on the expected control path performance of the devices. To that end, the authors present an inference system that sends OpenFlow message patterns and measures the responses in the data path as well as the control path. They describe the unpredictable behavior of TCAM table sizes: on some switches, some combinations of packet match fields yield dramatically different maximum numbers of flow entries in a table. Furthermore, they investigate the flow_mod message in detail and find that the priority, as well as the order and number of existing entries in a table, impact the performance of adding entries. The performance of modifying entries, however, is mostly constant. They assume that some vendors use the management system as a slow path for packets. However, this approach cannot be considered viable for high-performance networks and multi-gigabit traffic [KMH14].
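To make the measurement idea concrete, the sketch below times batches of flow_mod-style insertions at different table occupancies. It only illustrates the shape of such a measurement loop and is not the authors' tool: SimulatedSwitch is a stand-in whose insertion cost grows with occupancy, and no real OpenFlow library is used.

```python
import random
import time


class SimulatedSwitch:
    """Toy switch model: insertion cost grows with the number of existing entries."""

    def __init__(self):
        self.table = []

    def flow_mod_add(self, priority: int) -> None:
        # Crude stand-in for TCAM reordering cost at higher occupancy.
        time.sleep(len(self.table) * 1e-6)
        self.table.append(priority)


def mean_insert_latency(switch: SimulatedSwitch, occupancy: int, batch: int = 50) -> float:
    """Mean seconds per insertion once the table already holds `occupancy` entries."""
    for _ in range(occupancy):
        switch.flow_mod_add(priority=random.randint(0, 100))
    start = time.perf_counter()
    for _ in range(batch):
        switch.flow_mod_add(priority=random.randint(0, 100))
    return (time.perf_counter() - start) / batch


for occ in (0, 500, 1000):
    print(occ, mean_insert_latency(SimulatedSwitch(), occ))
```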

The hardware performance interference patterns seem to be derived from an ad-hoc model of OpenFlow switches. The devices are investigated as black boxes, and there is no discussion of how the specific characteristics were chosen. Furthermore, the question of whether all relevant configurations and combinations of parameters are investigated is not discussed.

The authors then present an approach that analyzes application requests in order to schedule them for best performance. The analysis process is designed to ensure that the dependencies between requests are preserved even after the optimization. The results are very promising and show that the approach works well for the investigated use cases.
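The paper's analysis algorithm is not reproduced here; the sketch below only illustrates the general principle of preserving dependencies while reordering requests, using a topological sort over a hypothetical dependency graph.

```python
# Illustrative only: keep declared dependencies between requests intact while
# the scheduler is otherwise free to reorder them.
from graphlib import TopologicalSorter

# request -> set of requests that must be executed first (hypothetical example)
dependencies = {
    "install_acl": set(),
    "install_route": {"install_acl"},
    "update_meter": set(),
    "install_backup_route": {"install_route"},
}

schedule = list(TopologicalSorter(dependencies).static_order())
print(schedule)  # every valid schedule respects the declared dependencies
```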

He et al. [He+15] argue that the control path latency is crucial for many services. They dissect the components that make up the latency but do not recognize virtualization as a factor. They describe the hardware and software of OpenFlow switches and investigate


two use cases: packet forwarding along the control path and flow table updates, i.e., a reactive flow installation scenario. While the process described is detailed and includes all relevant components, it is specific to one configuration and cannot be generalized as described. The issue of virtualization and its effects is not discussed.
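As a back-of-the-envelope illustration of such a dissection, the total control path latency of a reactive flow installation can be modeled as a sum of per-component delays. The component names and values below are placeholders, not measurements from He et al.

```python
# Placeholder component model of one-way control path latency
# (values are illustrative, not taken from [He+15]).
components_us = {
    "switch_os_agent": 900.0,  # software agent processing on the switch CPU
    "pcie_transfer": 40.0,     # crossing the PCI-e link to the ASIC
    "tcam_update": 300.0,      # inserting/reordering the TCAM entry
}

total_us = sum(components_us.values())
print(f"reactive flow installation: {total_us:.1f} us")
```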

The literature on security issues in SDN focuses on specific issues but, again, works with ad-hoc models of the data plane [SNS16; Yoo+] and does not provide additional insights into the issue.

Table 3.2: Performance bottlenecks in literature compared to the analysis conducted in this thesis.

Location | Resource | Identified | Optimized | Virtualized
Management system | CPU | [CB17a; Laz+14; BR13; Nar+12; KHK13; Cur+11; Amb+17; Wan+14; Rot+12; SY12; Cos+17] | [BR13; Nar+12; KHK13; Cur+11; Wan+14; SY12] |
Management system | Management NIC | | |
Management system | PCI-e link | | |
ASIC | PCI-e controller | | |
ASIC | Flow table space | OpenFlow [ONF15], [Qia+16; Nar+12; Yu+10; Cur+11; Guo+17; Yoo+; KHK13] | [Qia+16; Yu+10; Cur+11; Guo+17; KHK13] |
ASIC | Flow table memory interface | [CB17a; Laz+14; Qia+16; HYS; Kat+16; Jin+14; Rot+12; Wen+16a; Ngu+18] | [CB17a; Laz+14; Qia+16; Kat+16; Wen+16a] |
ASIC | Group table space | OpenFlow [ONF15] | |
ASIC | Group table memory interface | | |
ASIC | Meter table space | OpenFlow [ONF15] | |
ASIC | Meter table memory interface | | |
ASIC | Statistics counter table memory interface | [Cur+11; Rot+12] | |
ASIC | Data path | [Jar+11; Rot+12] | |
ASIC | Packet output to switch controller port | OpenFlow [ONF15], [He+15; Nar+12; Bas+17; Amb+17; Wan+14] | OpenFlow [ONF15], [Nar+12] | OpenFlow [ONF15]

Chen et al. [CB17b] argue that for certain applications (traffic engineering, mobile networks, cyber-physical systems), control path performance guarantees are required.

Based on the observations made in [Kuz+18; He+15; Laz+14], they conclude that the primary source of unpredictability is the number of flow entries in a table. To mitigate this effect, they use two TCAMs or TCAM partitions, one as the main table and one as an insertion cache. The number of entries in the cache is kept small to achieve guarantees for inserting entries there; entries are then migrated to the main table to keep the cache small. The approach is evaluated using a simulator and shows promising results. The approach aims to improve the design of OpenFlow switches. However, the effect of virtualization is not discussed. Guarantees can only be given if the system has enough time to copy the entries into the main table. How this approach affects the SDN protocol is not discussed, nor is how the SDN controller learns of these guarantees.
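A minimal sketch of this cache-and-migrate idea follows; class and method names as well as the limits are illustrative assumptions and do not reproduce the actual switch design of [CB17b].

```python
# Illustrative sketch of a two-stage TCAM: a small insertion cache with a
# bounded number of entries plus a large main table that is filled by a
# background migration step.

class TwoStageTcam:
    def __init__(self, cache_limit: int = 64):
        self.cache = []        # small partition with bounded insertion latency
        self.main = []         # large main table
        self.cache_limit = cache_limit

    def insert(self, entry) -> bool:
        """Fast-path insertion; fails only if the cache is momentarily full."""
        if len(self.cache) >= self.cache_limit:
            return False       # guarantee would be violated; caller retries
        self.cache.append(entry)
        return True

    def migrate(self, budget: int = 8) -> None:
        """Background step: move up to `budget` entries into the main table."""
        for _ in range(min(budget, len(self.cache))):
            self.main.append(self.cache.pop(0))


tcam = TwoStageTcam()
tcam.insert({"match": "10.0.0.0/24", "action": "output:2"})
tcam.migrate()
print(len(tcam.cache), len(tcam.main))  # 0 1
```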

The complete overview of the literature investigated and the resources discovered there is given in Table 3.2. The table clearly shows that fixed properties like table sizes are well investigated, as are the dynamics of the management system and table updates. However, we found no papers discussing prioritization on the control path, virtualization, or its effects on control plane applications.