
4.4 Scalability and Fairness

4.4.1 Simulation Setup (Parking Lot Network)

In the previous section it was already shown that CP-TCP/EPF can adapt to changing network conditions. However, the scalability of the algorithm with regard to the number of bottleneck links, delays, and network capacity is also important. This aspect is examined here using a different, more realistic simulation scenario: the second simulation setup uses a multi-link “parking lot” topology (cf. Figure 4.4.1).

[Figure: parking lot topology with the core routers R1–R5 in a chain and the edge nodes S1–S12 attached to the routers.]

Figure 4.4.1: Parking lot network topology

Most published simulations so far were conducted using only a small number of flows. For this reason, a simulation scenario with a relatively large number of flows is additionally considered here. To evaluate the scalability of Congestion Pricing based TCP with regard to the number of active flows and the network capacity, the number of flows and the link capacities are scaled up without changing any of the other parameters. For good scalability, no further parameter changes should be necessary. Fairness between users is also evaluated.

In the “few flows” simulation scenario, up to 64 active flows compete for bandwidth, while in the “many flows” simulation scenario up to 6400 competing flows are used. The link capacities are dimensioned such that only the core links are the bottleneck. All core links have a capacity of 48 Mbps and a delay of 8 ms, except for the link between routers R2 and R3, which has a delay of 24 ms. The edge links have a capacity of 1000 Mbps and a delay of 1 ms.

Forward and reverse paths are symmetric. These capacities allow a theoretical average of five packets in transit per connection.
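How many packets a connection keeps in transit follows from its throughput share and its round-trip time. The exact per-flow share depends on how many flows traverse each core link, so the following is only a rough illustration with assumed numbers: a flow that receives 1 Mbit/s and has a round-trip time of 40 ms keeps

\[
\frac{1\,\text{Mbit/s} \cdot 40\,\text{ms}}{8000\,\text{bit/packet}} = 5\ \text{packets}
\]

in flight, which is the order of magnitude stated above.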

For this simulation, a slightly modified source algorithm is used. The modified source algorithm limits the congestion window change to at most 1 packet per acknowledgment (cf. 5.2.7). This essentially limits the step size with which the steady state is reached. The modification is made so that the results are comparable to the Random Exponential Marking (REM) variant, which will be presented in detail in Section 5.2. The link algorithm parameters are γ = 0.1 and α = 0.0023 (see footnote 3), with the willingness to pay being 20 for all sources. All sources are greedy and use a packet size of 1000 bytes.
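The effect of this per-acknowledgment limit can be illustrated with a small sketch. The function name and the example step are purely illustrative; the actual price-based window increment is the one defined in 5.2.7 and enters here only as a proposed step:

```python
def apply_window_update(cwnd: float, proposed_step: float) -> float:
    """Apply a proposed per-ACK congestion window change, limiting the change
    to at most one packet in either direction (the modification described above)."""
    step = max(-1.0, min(1.0, proposed_step))  # clip to [-1, +1] packets
    return max(1.0, cwnd + step)               # never drop below one packet

# Example: a proposed increase of 3.7 packets is reduced to +1 packet.
print(apply_window_update(10.0, 3.7))  # -> 11.0
```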

Eight different flow paths with different hop counts and different minimum round-trip delays are considered, as shown in Table 4.4.1.

Table 4.4.1: Connections and minimum round-trip time

Connection   S3→S5    S5→S7    S7→S9    S9→S11
Hop count    2        2        2        2
Min. RTT     20 ms    52 ms    20 ms    20 ms

Connection   S2→S6    S4→S8    S6→S10   S1→S12
Hop count    3        3        3        5
Min. RTT     68 ms    68 ms    36 ms    100 ms

Each of the eight sources (S1–S7, S9) produces up to eight flows, yielding a maximum of 64 flows simultaneously active in the network.

For the “many flows” scenario, the number of flows is scaled up by a factor of 100. To accommodate the increased load, the link capacities are also scaled up by a factor of 100. Everything else remains unchanged. To examine the dynamic behavior, the number of active flows per source is again varied over time by turning several flows on and off every 20 seconds (cf. Figure 4.4.2 and Table 4.4.2). As explained before, the congestion window should ideally react inversely to the number of active flows (cf. Figure 4.3.3). The desired target queue size b0 is chosen such that the corresponding queuing delay equals 3 ms. Thus b0 = 18 for the “few flows” scenario and b0 = 1800 for the “many flows” scenario.
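These values follow directly from the target queuing delay of 3 ms, the core link capacity (48 Mbps, or 4800 Mbps in the scaled scenario), and the packet size of 1000 bytes (8000 bits):

\[
b_0 = \frac{C \cdot d_0}{\text{packet size}}
    = \frac{48\,\text{Mbit/s} \cdot 3\,\text{ms}}{8000\,\text{bit}} = 18\ \text{packets},
\qquad
b_0 = \frac{4800\,\text{Mbit/s} \cdot 3\,\text{ms}}{8000\,\text{bit}} = 1800\ \text{packets}.
\]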

Footnote 3: Although a larger value for α was suggested in [AL00], the current queue size is weighted less here because oscillations were observed for larger α.
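For orientation, the following sketch shows a link price update in the Random Exponential Marking form of [AL00], which uses the same parameter names γ, α, and b0 as above. It is an illustration only; the exact EPF link algorithm and the REM variant are presented in Section 5.2 and may differ from this sketch in detail.

```python
# Illustrative sketch of a REM-style link price update in the form of [AL00];
# parameter names match the text, but this is not necessarily the exact
# EPF link algorithm used in this thesis.
GAMMA = 0.1      # price adaptation gain (gamma in the text)
ALPHA = 0.0023   # weight of the queue-size mismatch (alpha in the text)
B0 = 18.0        # target queue size in packets ("few flows" scenario)

def update_price(price: float, backlog: float, input_rate: float, capacity: float) -> float:
    """One price update per measurement interval. input_rate and capacity are
    assumed to be measured in packets per interval; the price rises when the
    queue exceeds its target or the arrival rate exceeds the link capacity."""
    mismatch = ALPHA * (backlog - B0) + (input_rate - capacity)
    return max(0.0, price + GAMMA * mismatch)
```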


[Figure: timeline of flow start and stop events at each source over 0–100 s; groups of 2 and 4 flows are switched on and off at 20 s intervals.]

Figure 4.4.2: Start and stop times of the TCP flows per source

Table 4.4.2: Active flows at each edge node during the different time intervals

                       0–20 s   20–40 s   40–60 s   60–80 s   80–100 s
Few flows scenario     2        8         4         8         2
Many flows scenario    200      800       400       800       200

4.4.2 Simulation Results

Figure 4.4.3a shows the evolution of a congestion window over time for the “few flows” scenario. It follows the change of load nearly ideally, as was seen before for the double bottleneck link topology (cf. Figure 4.4.3c). Figure 4.4.3b shows the trajectory of the instantaneous queue size at router R3 for the “few flows” case. The corresponding link performance measures are given in Table 4.4.3.

Table 4.4.3: Link performance measures for queue R3 (“few flows” scenario)

Time interval   0–20 s    20–40 s   40–60 s   60–80 s   80–100 s
Utilization     98 %      100 %     99 %      100 %     98 %
Avg. backlog    20 pkts   30 pkts   20 pkts   26 pkts   20 pkts

Utilization is nearly perfect, while the average queue size is only slightly larger than the target queue size. Only when network conditions change and additional flows are turned on or off does the queue size increase for a short time before returning to the desired equilibrium. Also note that in the time intervals 20–40 s and 60–80 s, low-frequency oscillations are visible. This is a sign of instability, which will be examined in more detail for a different TCP variant in Chapter 6 using a control-theoretic model. Even with these oscillations, however, performance is nearly perfect and far better than with conventional TCP variants.

Figures 4.4.3c and 4.4.3d show the corresponding plots for the “many flows” scenario. The link performance measures are shown in Table 4.4.4. With the scaled-up number of flows and capacity, more oscillations are visible, but on average the behavior is still similar to the “few flows” case. The link utilization is well above 90 % (cf. Table 4.4.4) and the average backlog is acceptable, even remaining below the target. Thus, CP-TCP/EPF scales with regard to the number of flows over a wide range without any adaptation of parameters.



[Figure panels: (a) congestion window of source S1, flow #0 (few flows); (b) queue size at router R3 (few flows); (c) congestion window of source S1, flow #0 (many flows); (d) queue size at router R3 (many flows); each plotted over 0–100 s, with cwnd and queue size in packets.]

Figure 4.4.3: Congestion window and queue size (no slow start) [ZHK01]

Table 4.4.4: Link performance measures for queue R3 (“many flows” scenario) [ZHK01]

Time interval   0–20 s     20–40 s    40–60 s    60–80 s    80–100 s
Utilization     94 %       94 %       94 %       94 %       95 %
Avg. backlog    279 pkts   840 pkts   471 pkts   830 pkts   283 pkts
