
(Figure: a transfer manager holding transfers 1-5 (size, parent, remaining, RTT); TCP connection 1 and MPTCP connection 1 with its subflows 1 and 2 (desired bw., outstanding bytes, transfers); interfaces 1 and 2 (bandwidth, connections, RTT).)

Figure 5.3: Simplified Simulator State Example.

Figure 5.3 shows a simplified example state of a simulator run. The solid black arrows show relationships that are stable during a simulator run, while the blue dashed arrows indicate relationships that change. In this case, transfer 1 has finished and enabled transfers 2-4. The policy (omitted in the figure) has assigned transfer 2 and transfer 3 (after transfer 2 has finished) to TCP connection 1, which uses interface 1, and transfer 4 to MPTCP connection 1. Transfer 5 is not enabled yet as it depends on transfer 4. While the transfers progress, the outstanding bytes and desired bandwidth fields get updated. The MPTCP connection uses subflow objects to interface with multiple interface objects. The outstanding bytes and desired bandwidth fields of the subflows mirror those of the MPTCP connection.
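To make this state concrete, the following sketch models the core objects from Figure 5.3 in Python: transfers with size, parent, and remaining bytes; connections with desired bandwidth, outstanding bytes, and assigned transfers; and interfaces with bandwidth and RTT. All class names, fields, and values are illustrative assumptions, not the simulator's actual code.

```python
# Illustrative sketch of the simulator state in Figure 5.3 (assumed names, not the actual implementation).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Interface:
    name: str
    bandwidth_bps: float          # available bandwidth of this interface
    rtt_s: float                  # round-trip time towards the servers
    connections: List["Connection"] = field(default_factory=list)


@dataclass
class Transfer:
    name: str
    size_bytes: int
    parent: Optional["Transfer"] = None   # dependency: enabled once the parent has finished
    remaining_bytes: int = 0

    def __post_init__(self):
        self.remaining_bytes = self.size_bytes

    @property
    def enabled(self) -> bool:
        return self.parent is None or self.parent.remaining_bytes == 0


@dataclass
class Connection:
    name: str
    interfaces: List[Interface]            # one interface for TCP, several for MPTCP (via subflows)
    desired_bw_bps: float = 0.0            # updated while the assigned transfers progress
    outstanding_bytes: int = 0
    transfers: List[Transfer] = field(default_factory=list)


# Example wiring corresponding to Figure 5.3 (sizes and interface parameters are made up).
if1 = Interface("interface 1", bandwidth_bps=50e6, rtt_s=0.030)
if2 = Interface("interface 2", bandwidth_bps=20e6, rtt_s=0.060)
t1 = Transfer("transfer 1", 100_000)
t2 = Transfer("transfer 2", 50_000, parent=t1)
t3 = Transfer("transfer 3", 60_000, parent=t1)
t4 = Transfer("transfer 4", 200_000, parent=t1)
t5 = Transfer("transfer 5", 80_000, parent=t4)
tcp1 = Connection("TCP connection 1", [if1], transfers=[t2, t3])
mptcp1 = Connection("MPTCP connection 1", [if1, if2], transfers=[t4])
```

In such a model, transfer 5 only becomes enabled once transfer 4 has no remaining bytes, mirroring the dependency arrows in the figure.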

5.5 Web Transfer Simulator Validation

To check the appropriateness of the assumptions and simplifications underlying our simulator, we use two different settings for validating it: we compare the expected timings for a set of handcrafted scenarios as well as the page load times of our web crawls against the simulated timings. To cross-check the consistency of our policy implementations, we also compare the policy implemented in our Multi-Access Prototype (see Section 6.3.5) against the simulated policies.

5.5.1 Handcrafted Scenarios

We choose twelve different scenarios to test the basic functionality of the various simulator policies. The core idea of the scenarios is to test various corner cases, including:

• Cases that require connection reuse or cannot benefit from it

• Traffic patterns that can specifically take advantage of either the MPTCP, EAF, or EAF_MPTCP policy

• Cases that stress-test the simulator

For these scenarios, we manually calculate the expected page load times and check the simulator results against them. The simulator passes all of these scenarios. In addition, we consistently use assertions and cross-checks within the simulator to identify implementation bugs that would impact the results.
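As an illustration of how such a handcrafted scenario can be checked, the sketch below encodes one simple case: two transfers fetched strictly in sequence over a single interface, where the expected page load time follows from the transfer sizes, the interface bandwidth, and one RTT per request. The scenario values, the expected_plt formula, and the run_simulator hook are hypothetical and only illustrate the checking approach, not our actual test suite.

```python
# Hypothetical handcrafted-scenario check (illustrative values and helper names).

def expected_plt(sizes_bytes, bandwidth_bps, rtt_s):
    """Expected page load time when the transfers are fetched strictly in sequence:
    one RTT of request latency plus the serialization time of each transfer."""
    return sum(rtt_s + size * 8 / bandwidth_bps for size in sizes_bytes)


def check_sequential_scenario(run_simulator):
    # Scenario: transfer B depends on transfer A, single 10 Mbit/s interface, 50 ms RTT.
    sizes = [500_000, 250_000]            # bytes of transfer A and transfer B
    bandwidth_bps, rtt_s = 10e6, 0.050
    expected = expected_plt(sizes, bandwidth_bps, rtt_s)

    simulated = run_simulator(sizes=sizes, bandwidth_bps=bandwidth_bps, rtt_s=rtt_s)

    # Cross-check: the simulated page load time must match the manual calculation.
    assert abs(simulated - expected) < 1e-6, (simulated, expected)
```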

5.5.2 Simulator vs. Actual Web Load Times

Our validation of the actual Web load times is based on the timings in the HAR files of the crawls. For comparing the page load times with the simulator, we only consider the network timings and ignore other timing information, e.g., rendering or the execution time of JavaScript.

Accordingly, we parse the HAR file, infer the inter-object dependencies, and use these to calculate the cumulative network time of the longest chain of objects fetched in sequence; a sketch of this computation follows below. Next, we compare the actual page load time to the simulated one for all Web pages of our workload. Given that our crawl uses a machine with a single interface, we also use a single interface with the Single Interface policy. To determine the interface parameters, we estimate the available bandwidth as well as the RTT to the servers from the actual download. To estimate the available bandwidth, we use all objects larger than a minimum size of 50 KB and their download times, taking into account that several of these downloads can overlap in time. Using the median of the estimated bandwidths results in a typically used bandwidth of 67.13 Mbit/s, which suggests that none of the transfers were bandwidth bound. To estimate the RTT, the simulator issues a series of pings for each Web page. The median of all measured RTTs towards that Web page is then used as the interface RTT for the validation run of that Web page.
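The following sketch illustrates how the cumulative network time of the longest chain can be computed from the inferred dependencies: the chain time of an object is its own network time plus the longest chain time among the objects it depends on, and the page's value is the maximum over all objects. The data layout (object identifiers, network_time, depends_on) is an assumption for illustration, not the exact structure produced by our HAR parser.

```python
from functools import lru_cache

# Hypothetical dependency structure inferred from a HAR file:
# object id -> (network time in seconds, ids of the objects it depends on).
objects = {
    "index.html": (0.120, []),
    "style.css":  (0.040, ["index.html"]),
    "app.js":     (0.090, ["index.html"]),
    "data.json":  (0.060, ["app.js"]),
}


def longest_chain_time(objects):
    """Cumulative network time of the longest chain of objects fetched in sequence."""
    @lru_cache(maxsize=None)
    def chain(obj_id):
        own_time, deps = objects[obj_id]
        return own_time + max((chain(dep) for dep in deps), default=0.0)
    return max(chain(obj_id) for obj_id in objects)


print(longest_chain_time(objects))  # 0.120 + 0.090 + 0.060 = 0.27 s
```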
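The bandwidth estimation can be sketched in a similar spirit: every object above the 50 KB threshold yields a per-object throughput, which is scaled by the number of concurrently active downloads to account for parallelism, and the median over all such estimates is used. The record layout and the simple overlap count are assumptions; the actual estimator may account for parallelism differently.

```python
from statistics import median

MIN_SIZE_BYTES = 50 * 1024   # only objects larger than 50 KB are used

# Hypothetical per-object records from a HAR file: (size in bytes, start, end) in seconds.
downloads = [
    (400_000, 0.00, 0.35),
    (120_000, 0.10, 0.30),
    (900_000, 0.40, 1.10),
    (30_000,  0.50, 0.55),   # below the size threshold, not used as an estimate
]


def concurrent(d, downloads):
    """Number of downloads active at the same time as d (including d itself)."""
    _, start, end = d
    return sum(1 for _, s, e in downloads if s < end and e > start)


def estimate_bandwidth_bps(downloads):
    estimates = []
    for d in downloads:
        size, start, end = d
        if size < MIN_SIZE_BYTES:
            continue
        # Per-object throughput, scaled by the number of parallel downloads so that
        # concurrent transfers do not make the link appear slower than it is.
        estimates.append(size * 8 / (end - start) * concurrent(d, downloads))
    return median(estimates)


print(estimate_bandwidth_bps(downloads) / 1e6, "Mbit/s")
```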

The simulator, as well as the validation, makes several simplifications: the simulator assumes that all Web objects share a single network bottleneck and that the RTT is the same for all servers. In reality, some embedded objects of Web pages are fetched from hosts with different network bottlenecks and RTTs. For the validation, we use ICMP ping rather than TCP ping, and the pings are not executed while the HAR files are gathered.


Figure 5.4: Simulator validation: Probability distribution of relative and absolute difference of simulated time vs. actual page load time.

Figure 5.4 shows the absolute as well as the relative differences of the simulated vs. the actual page load times for all Alexa Top 100 Web pages from Section 5.3. The main mass of both distributions is around zero, indicating that the simulated page load times are very close to the actual ones. This is confirmed by the median values of 0.3548 s for the absolute and 1.5% for the relative differences. This highlights that the simplifying assumptions of the simulator still enable us to approximate the actual page load times and that we capture most of the intra-page dependencies of the Web pages.
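A minimal sketch of how such per-page differences and their medians can be computed, assuming paired lists of actual and simulated page load times (the values are illustrative):

```python
from statistics import median

# Illustrative paired measurements in seconds; real values come from the crawl and the simulator.
actual    = [2.10, 3.40, 1.80, 4.25]
simulated = [1.95, 3.10, 1.75, 4.00]

abs_diff = [a - s for a, s in zip(actual, simulated)]                  # seconds
rel_diff = [100.0 * (a - s) / a for a, s in zip(actual, simulated)]    # percent of the actual PLT

print(f"median absolute difference: {median(abs_diff):.4f} s")
print(f"median relative difference: {median(rel_diff):.1f} %")
```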

There are some differences for some Web pages. We manually checked them and find that the majority is caused by differences in the estimated bandwidth, server delays, and name resolution overhead. These are, e.g., related to Web back-office interactions [80]. Overall, the results are rather close and show that our simulations result in reasonable approximations of the actual Web page load time.

5.5.3 Simulator vs. Multi-Access Prototype

As part of our proxy-based evaluation in Section 6.4, we also cross-validate the simulator against our testbed results with and without our proxy. As expected, the simulator is slightly more optimistic than the testbed results with and without proxy. However, these differences are consistent across scenarios and small enough to support the simulator results. See Section 6.4.2 for an extensive discussion of the cross-validation.