
5.7.3. Evaluation based on Constant Resource Availability

First of all, we are looking for reasonable values for the fixed cache size |C|. For this purpose, we investigate suitable overall cache sizes for environments in which the resource conditions are assumed not to change over time. Regarding the space consumption of the PACs, as discussed in Section 5.5 and illustrated in Figure 5.5, a green PAC for the larger application with k2 = 31 components consumes around 100 kB, while a yellow PAC consumes around 18 kB. Accordingly, the PACs for the smaller application with k1 = 7 components consume only 23 kB (green PAC) or 4 kB (yellow PAC). In the following, we determine the size |C| that has to be reserved per single application with k2 = 31 components.
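As a rough illustration of what these sizes mean for the candidate cache limits examined below, the following sketch computes how many green PACs of the larger application fit into a given cache; the helper function and its name are introduced here purely for illustration.

```python
def max_green_pacs(cache_kb, green_pac_kb=100):
    """Upper bound on concurrently cached green PACs of the k2 = 31 application."""
    return cache_kb // green_pac_kb

# A 100 kB cache holds a single green PAC, 400 kB holds four,
# and 10 MB (treated as unbounded below) holds around a hundred.
for size_kb in (100, 400, 1000, 10000):
    print(size_kb, "kB ->", max_green_pacs(size_kb), "green PACs")
```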

The application user obviously wants the application to be configured as fast as possible, which is the case when the cache miss rate is close to zero and a pre-stored configuration is simply loaded from disk. However, avoiding cache misses entirely is only possible with an unbounded cache, which is not feasible in practice, particularly in resource-constrained Ad Hoc environments. Thus, a tradeoff between a small cache size on the one hand and a low cache miss rate on the other hand is needed.


Figure 5.8.: Correlation between PAC Cache Miss Rate and configuration latency at different cache sizes between 100 kB and 10 MB

Figure 5.8 shows the correlation between the cache miss rate and the expected configuration latency, relative to the latency of a configuration that does not rely on PACs at all, when different cache sizes between 100 kB and 10 MB per application are used. A cache with 10 MB capacity can be seen as an unbounded cache, as all PACs that are possible for the investigated application can be stored in it simultaneously.

For all cache sizes, the configuration latency is lowest when no cache misses occur at all and rises monotonically with increasing cache miss rates. The figure also shows that once the cache miss rate exceeds a certain value, the latency with PAC usage becomes higher than the latency of the standard configuration, due to the contract matchings that have to be performed for the cached PACs.

With a rising cache size, the expected latency at a given cache miss rate decreases monotonically, as more PACs can be stored concurrently and, thus, the percentage of the application that needs to be configured in the traditional way is reduced. When the cache is increased from 100 kB to 400 kB, the latency drops significantly. From these results, it becomes obvious that cache sizes of 100 kB and 200 kB are too restrictive for a reasonable use of the PAC approach. The figure also shows that increasing the cache size beyond 400 kB yields only a minor latency reduction compared to the increased space overhead for the involved devices. For example, when comparing the measurements for 400 kB and 1 MB cache sizes, the factor between the space overheads is 2.5, while the latency reduction is only around 4 %. From these results, we conclude that 400 kB is a reasonable choice for the cache size |C| per application with k2 = 31 components, as the latency overhead compared to the best possible latency – obtained with a cache size of 10 MB – is only 7 %. Thus, we rely on a 400 kB cache size limit in the following. With this cache size, the latency with PAC usage exceeds the latency of the standard configuration once the cache miss rate becomes higher than 71 %. For cache miss rates close to 1 (i.e., none of the application contracts are covered by a PAC), the PAC approach's configuration latency is around 5 % higher than that of a standard configuration without PAC use.
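The crossover can be made tangible with a deliberately simplified cost model: the covered fraction of the application is loaded cheaply from the cache, the uncovered fraction is configured in the traditional way, and the contract matchings for the cached PACs add a roughly constant overhead. The function below is only a hypothetical first-order sketch with placeholder constants, not the measured curve of Figure 5.8; in particular, it is not fitted to the 71 % break-even point.

```python
def relative_latency(miss_rate, c_load=0.1, c_match=0.05):
    """Hypothetical first-order model of the tradeoff behind Figure 5.8.

    miss_rate : fraction of application contracts not covered by cached PACs
    c_load    : assumed relative cost of reusing a cached PAC (placeholder)
    c_match   : assumed relative overhead of the contract matchings performed
                for the cached PACs (placeholder)
    All costs are relative to a standard configuration without PACs (= 1.0).
    """
    return (1 - miss_rate) * c_load + miss_rate * 1.0 + c_match

# With a full miss (miss_rate = 1.0) only the matching overhead remains,
# which is why the PAC approach then ends up slightly above the baseline.
print(relative_latency(1.0))  # 1.05 with the placeholder constants
```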

Figure 5.9.: Determination of optimal static λ values for S1, S2 and S3

Next, we determine the optimal static values for λ for the LRFU strategy (cf. Section 5.4) in the different scenarios, i.e., the values at which the cache miss rate becomes minimal. In these initial evaluations, we set |Cyellow| = 0. Figure 5.9 shows that neither pure LFU (λ = 0) nor pure LRU (λ = 1) leads to the best results. On the one hand, recency is relevant, as we consider dynamic environments where components may be available only for a limited amount of time; hence, relying on PACs that have recently been shown to be usable makes sense. On the other hand, frequency also needs to be regarded, as devices that were previously available but are unavailable now may return in the future, e.g., due to periodically repeating activities such as working days. In this case, it needs to be considered how often PACs have been used before, giving a higher utility only to these PACs. Thus, both the recency and the frequency of a PAC's availability need to be taken into account to maximize the usefulness of the PAC approach and minimize the cache miss rate.

The optimal λ changes from 0.4 in the Ad Hoc scenario S1 to slightly higher values of 0.5 (S2) and 0.6 (S3): since the degree of dynamics decreases and the PACs remain valid for a longer average period of time in the heterogeneous scenarios S2 and S3, the recency of a PAC's availability becomes more relevant there. Moreover, the general influence of λ on the variance of the resulting cache miss rate increases with the dynamics of the environment. We decided to use a static value of λ = 0.5 in the following measurements, as this leads to a low cache miss rate in all three environments.
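To illustrate how a single λ blends the two criteria, the following is a minimal LRFU bookkeeping sketch. It assumes the commonly used weighting function F(Δ) = 0.5^(λ·Δ); the exact utility formula of the PAC cache is the one defined in Section 5.4, and the class and method names here are illustrative only.

```python
class LRFUSketch:
    """Combined recency/frequency (CRF) bookkeeping per cached PAC."""

    def __init__(self, lam=0.5):
        self.lam = lam        # lam = 0 behaves like LFU, lam = 1 like LRU
        self.crf = {}         # PAC id -> CRF value at the time of its last reference
        self.last_ref = {}    # PAC id -> time of the last reference

    def _decay(self, delta):
        # Older references contribute exponentially less, controlled by lam.
        return 0.5 ** (self.lam * delta)

    def reference(self, pac_id, now):
        # Decay the stored CRF value to 'now' and add the new reference's weight.
        delta = now - self.last_ref.get(pac_id, now)
        self.crf[pac_id] = self.crf.get(pac_id, 0.0) * self._decay(delta) + 1.0
        self.last_ref[pac_id] = now

    def eviction_victim(self, now):
        # Replace the PAC whose decayed CRF value is currently the smallest.
        return min(self.crf, key=lambda p: self.crf[p] * self._decay(now - self.last_ref[p]))
```

With λ = 0 the decay term is constantly 1, so the value reduces to a plain reference count (LFU); with λ close to 1 the most recent reference dominates the sum, approximating LRU.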

Subsequently, we determine the optimal split factor f, which determines the relative amount of |C| that is spent on Cyellow and, thus, the partitioning of the cache into the areas for green and yellow PACs. To this end, we perform several simulation runs with differing fractions of cache space for Cyellow. The results are shown in Figure 5.10.


Figure 5.10.: Determination of optimal size for Cyellow when LRFU-0.5 and |C| = 400 kB are used

It can be seen that reserving an increasing portion of the cache size for the yellow PACs monotonically increases the cache miss rate in the homogeneous scenario S1. This is because S1 represents a highly dynamic scenario, as it involves only mobile devices. Thus, the cache content changes very rapidly, which leads to more frequent replacements in the cache. When |Cgreen| is reduced in favor of |Cyellow|, the overall number of green PACs that can be stored and used in the configuration thus becomes the limiting factor in this highly dynamic scenario. In the heterogeneous environments (S2 and S3), the degree of dynamics is lower due to additional infrastructure devices, which are assumed to be continuously available.

Thus, the cache is filled with some PACs whose components reside only on infrastructure devices. These PACs durably occupy a specific fraction of Cgreen and form the basis for a large coverage of the application components. Moreover, the cache content changes much less between two subsequent configuration processes, and the overall cache size for Cgreen is not as crucial as in scenario S1. Hence, shifting some space from Cgreen to Cyellow, particularly for PACs involving components on the mobile devices, leads to a lower cache miss rate, as the cache holds more information about yellow PACs, which may become green PACs after some time. Figure 5.10 also shows that the cache miss rate increases again in S2 and S3 when a specific fraction for Cyellow is exceeded, as the reduced size of Cgreen then starts to become the limiting factor. This optimal fraction for Cyellow increases from 16.3 % in scenario S2 to 25.1 % in scenario S3, as the degree of dynamics in S3 is lower because it features more resource-rich devices. Introducing yellow PACs in scenarios S2 and S3 with these optimal fractions of the overall cache size reduces the cache miss rate from 10.2 % to 7.6 % (i.e., by more than a quarter) in S2, and from 6.4 % to 4.4 % (i.e., by almost a third) in S3.
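Using the rounded PAC sizes for the larger application (around 100 kB per green PAC and 18 kB per yellow PAC) together with the 400 kB cache, the split factor translates into concrete partition capacities as in the following back-of-the-envelope sketch; the function and its defaults are illustrative and not part of the simulation setup.

```python
def cache_partition(total_kb=400, split_f=0.163, green_pac_kb=100, yellow_pac_kb=18):
    """Illustrative partitioning of |C| into Cgreen and Cyellow for a split factor f.

    Defaults correspond to the 400 kB cache and the optimal Cyellow fraction
    reported for scenario S2 (16.3 %); PAC sizes are the rounded figures for
    the application with k2 = 31 components.
    """
    yellow_kb = split_f * total_kb           # space reserved for yellow PACs
    green_kb = total_kb - yellow_kb          # remaining space for green PACs
    return {
        "Cyellow_kB": round(yellow_kb, 1),                   # ~65.2 kB
        "Cgreen_kB": round(green_kb, 1),                     # ~334.8 kB
        "max_yellow_PACs": int(yellow_kb // yellow_pac_kb),  # ~3
        "max_green_PACs": int(green_kb // green_pac_kb),     # ~3
    }
```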

In the following, we compare the LRFU strategy (with λ = 0.5 and the optimal cache size fractions for Cyellow) with the standard replacement strategies First In First Out (FIFO), Remove Smallest First (RSF), and Remove Largest First (RLF), as described in Section 5.4, to show its superior performance.

Figure 5.11.: Comparison of different cache replacement strategies

Figure 5.11 illustrates that even LRFU-0.5 without usage of Cyellow outperforms FIFO, RSF and RLF in all scenarios. FIFO performs better than RSF and RLF, as it keeps all cached PACs in the cache for a certain time, but worse than LRFU, as FIFO does not consider the usability of the involved components. RSF faces the problem of storing only large PACs, which are potentially unusable for long periods of time, as they involve many more devices and components. RLF only keeps small PACs in the cache; because of this, it usually does not cover the complete application and leaves some components uncovered. This problem becomes even more severe in the heterogeneous environments S2 and S3, as the application size increases, leading to very poor performance there. Moreover, introducing yellow PACs (with the optimal fractions determined in Figure 5.10) further reduces the cache miss rate in the heterogeneous environments. Thus, using LRFU with λ = 0.5 and yellow PACs² leads to far fewer cache misses than standard strategies such as FIFO.
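The behavioral differences described above can be summarized as victim-selection rules. The sketch below encodes them for illustration; the CachedPAC record and its fields are hypothetical rather than the thesis' data structures, and the LRFU rule assumes a precomputed combined recency/frequency value as in the earlier sketch.

```python
from dataclasses import dataclass

@dataclass
class CachedPAC:
    pac_id: str
    size_kb: float      # space the PAC occupies in the cache
    inserted_at: float  # time at which the PAC entered the cache
    crf: float          # combined recency/frequency value (LRFU)

def pick_victim(cache, strategy):
    """Select the PAC to evict next under the compared strategies (sketch)."""
    if strategy == "FIFO":  # the oldest entry leaves first
        return min(cache, key=lambda p: p.inserted_at)
    if strategy == "RSF":   # Remove Smallest First: small PACs leave, large ones accumulate
        return min(cache, key=lambda p: p.size_kb)
    if strategy == "RLF":   # Remove Largest First: large PACs leave, only small ones remain
        return max(cache, key=lambda p: p.size_kb)
    if strategy == "LRFU":  # the PAC with the lowest combined recency/frequency value leaves
        return min(cache, key=lambda p: p.crf)
    raise ValueError(f"unknown strategy: {strategy}")
```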

Figure 5.12 shows the cache miss rate (depicted on the z-axis) for variable overall cache sizes (x-axis) and split factors f (y-axis) in the three evaluated scenarios. It can be seen that the cache miss rate is comparatively high in all scenarios for low cache limits and large values of f. In this case, the cache space for Cgreen becomes the limiting factor, so that only few PACs actually fit into the cache.

Moreover, the contour lines of the figures (which represent the 2.5 %, 5 %, 7.5 %, and 10 % cache miss rate bounds) show that the more resource-rich the environment gets, the smaller the achieved cache miss rates become. For example, if the cache miss rate should not exceed 10 %, Figures 5.12a to c show that the previously determined static cache size of 400 kB is sufficient to stay below this limit: with |C| = 400 kB, the cache miss rates are 9.2 % (S1), 8.6 % (S2), and 7.3 % (S3). Regarding the choice of f, the cache miss rates become minimal for values of 0 % (S1), 16.3 % (S2), and 25.1 % (S3). The corresponding data points X1 (400 kB, 0 %, 9.2 %), X2 (400 kB, 16.3 %, 8.6 %), and X3 (400 kB, 25.1 %, 7.3 %) are drawn in Figures 5.12a to c.
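For illustration, the chosen operating points X1 to X3 can be encoded as a small configuration table, e.g., to check a target miss-rate bound per scenario; the dictionary layout and the helper below are illustrative and not part of the evaluation framework.

```python
# Operating points X1-X3 for |C| = 400 kB, as reported above.
OPERATING_POINTS = {
    "S1": {"cache_kb": 400, "split_f": 0.000, "miss_rate": 0.092},  # X1
    "S2": {"cache_kb": 400, "split_f": 0.163, "miss_rate": 0.086},  # X2
    "S3": {"cache_kb": 400, "split_f": 0.251, "miss_rate": 0.073},  # X3
}

def within_bound(scenario, bound=0.10):
    """True if the scenario's operating point stays at or below the miss-rate bound."""
    return OPERATING_POINTS[scenario]["miss_rate"] <= bound

# All three scenarios stay below the 10 % bound discussed above.
assert all(within_bound(s) for s in OPERATING_POINTS)
```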

² As determined before, yellow PACs should only be used in heterogeneous environments.


Figure 5.12.: Distribution of Cache Miss Rate in a) S1, b) S2, c) S3

5.7.4. Evaluation based on Dynamically Changing Resource Availability