
offload their tasks to the server. The context-aware greedy heuristic algorithm again yields results similar to the global optimum, with a maximum deviation of less than 6% from the optimal result. Moreover, for both random and line topologies, the network energy consumption converges for large values of Eprov,n, but to two different values. At the same time, for both random and line topologies, the fraction of nodes transmitting their tasks to the server converges for large values of Eprov,n, but also to two different values. This is due to the fact that if EC,n < ET,n holds for a node, this node will never use computation offloading, no matter how large the provided energy Eprov,n is.

Hence, there exists a point after which a further increase of Eprov,n does not lead to more nodes offloading their tasks. Since the energy costs ET,n are especially high in a line topology due to the large number of hops, in this case, the fraction of offloading nodes is lower than in a random topology.
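The saturation effect described above can be illustrated with a small sketch. The model below is our own simplification, not the simulation model of this thesis: the function name, the relay-energy vector E_relay, and the per-node budget comparison are illustrative assumptions. The idea is only that a node offloads if transmission is cheaper than local computation and the relay energy its route requires fits within the provided budget, so the offloading fraction saturates once the budget is large.

```python
def fraction_offloading(E_C, E_T, E_relay, E_prov):
    """Toy model (our assumption, not the thesis's exact model).

    E_C[n]:     energy to compute task n locally
    E_T[n]:     node n's transmission energy cost for offloading
    E_relay[n]: relay energy required along node n's multi-hop route
    E_prov:     energy budget provided by relay nodes (assumed equal
                for all nodes here, purely for illustration)
    """
    offload = [e_t < e_c and e_r <= E_prov
               for e_c, e_t, e_r in zip(E_C, E_T, E_relay)]
    return sum(offload) / len(offload)
```

Increasing E_prov lets more nodes offload at first, but nodes with EC,n < ET,n never offload, so the fraction converges to a topology-dependent limit.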

From our simulations, we may conclude the following. First, computation offloading in multi-hop networks is beneficial for highly computation-intensive applications with small amounts of data to be transmitted. Second, the effect of computation offloading strongly depends on the provided energy Eprov,n. With higher amounts of provided energy, computation offloading may save more energy in the overall network, but the energy savings do not grow arbitrarily for larger values of provided energy, since for some tasks, local computation remains cheaper no matter how much energy is provided by relay nodes. Third, even though the context-aware greedy heuristic algorithm has no performance guarantee for general multi-dimensional knapsack problems, it yields very good overall results in the considered offloading scenarios, with a maximum deviation of less than 6% from the optimal results for the considered set of parameters.
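For intuition on the kind of heuristic referred to above, a generic greedy procedure for a multi-dimensional knapsack problem can be sketched as follows. This is an illustrative ranking by value density, not the context-aware variant of this thesis; all names and the example data are ours.

```python
def greedy_mdk(values, weights, capacities):
    """Greedy heuristic for a multi-dimensional knapsack (sketch).

    Items are ranked by value per aggregate normalized weight, then
    added one by one while every capacity dimension stays feasible.
    No approximation guarantee holds in general, mirroring the
    situation described in the text.
    """
    n, d = len(values), len(capacities)

    def density(i):
        # Sum of weights normalized by each dimension's capacity.
        w = sum(weights[i][k] / capacities[k] for k in range(d))
        return values[i] / w if w > 0 else float("inf")

    used = [0.0] * d
    chosen = []
    for i in sorted(range(n), key=density, reverse=True):
        if all(used[k] + weights[i][k] <= capacities[k] for k in range(d)):
            chosen.append(i)
            for k in range(d):
                used[k] += weights[i][k]
    return chosen
```

For example, with values [10, 7, 5], two-dimensional weights [[4, 3], [3, 2], [2, 2]], and capacities [6, 5], the heuristic selects items 0 and 2.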

The computational complexity of the proposed algorithm has been shown to grow at most quadratically with the number of nodes in the network. Moreover, the communication overhead of the proposed centralized decision-making architecture has been shown to be small. In addition, the proposed algorithm has shown very good performance in simulations under various network settings and task contexts, with a maximum deviation of less than 6% from the optimal results. On average, the offloading solution found by the proposed algorithm reduces the network energy consumption by 13% compared to the case in which no computation offloading is used. Our numerical as well as analytical results have revealed that devices in multi-hop networks benefit noticeably from computation offloading for highly computation-intensive applications with small amounts of data to transmit. Additionally, the outcome is strongly affected by the amount of energy provided by relay nodes, but the energy savings do not grow arbitrarily when the provided energy is increased.

Chapter 4

Caching at the Edge of Wireless Networks

4.1 Introduction

In this chapter, we study caching at the edge of wireless networks. Caching at the edge exploits caching resources at the edge of the wireless network in order to serve users locally with popular contents [BBD14b]. Such caching resources could be attached to macro base stations (MBSs) and small base stations (SBSs) owned by the mobile network operator (MNO) or they could be part of wireless infostations installed in public or commercial areas by either a content provider or a third party [GBMY97, IR02, BG14c, BG14a]. Caching popular content in local caches in a placement phase and locally serving the users in a delivery phase may reduce backhaul and cellular traffic, and it may reduce the latency for the user [WCT+14]. In order to reduce the load on the macro cellular network as much as possible, the most popular content should be cached locally such that the number of cache hits is maximized. As described in Section 1.3.3, this requires knowledge about the popularity distribution, which is typically not available a priori [BBD14b, BBZ+15, BG14b, BG14c, BG14a, SAT+14, EBSLa14]. Moreover, local content popularity may vary over time according to the preferences of the mobile users connecting to a local cache [GALM07, ZSGK09, BSW12]. The users' preferences, in turn, may depend on their contexts [BSW12, MS10, HL05, RGZ11, Zil88, ZGC+14]. Finally, cache content placement needs to take into account the cache operator's specific objective, which may include the need for service differentiation [KLAC03, LAS04].

We hence consider the problem of maximizing the number of cache hits in a local cache at the edge of the wireless network, taking into account the following aspects:

(i) A priori, there is no knowledge available about local content popularity.

(ii) Content popularity can vary across the user population.

(iii) Content popularity can depend on the users’ contexts.

(iv) The cache operator’s specific objective with respect to service differentiation needs to be taken into account.

In the sequel, we propose a machine-learning-based approach and a decentralized architecture of decision making. We use a machine-learning-based approach since the content popularity is not known in advance and needs to be learned. Moreover, we use a decentralized architecture of decision making and let the controller of a local cache take local caching decisions since the content popularity at a local cache is not necessarily the same as the global content popularity and since the set of mobile users with potentially different interests in the vicinity of a local cache changes over time. In detail, we propose an online learning algorithm for context-aware proactive caching based on a contextual multi-armed bandit (contextual MAB) model. Using this algorithm, the controller of a local cache at the edge of the wireless network is enabled to learn context-specific content popularity online over time. This chapter presents work originally published by the author in [MAvK16, MAvK17]. Compared to [MAvK16, MAvK17], in this thesis, the regret bound is improved in its constant factors due to a new proof technique.

Furthermore, in this thesis, an analysis of the computational complexity and of the communication requirements of the proposed algorithm is added. Also, in this thesis, the ideas of the mathematical proofs are summarized and discussed within the main body of the text, while the full mathematical proofs are given in the appendices.

In addition, in this thesis, the numerical results are revised to show a better comparison of the proposed algorithm with the oracle solution, which assumes a priori knowledge about local content popularity.
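The contextual MAB view of cache content placement introduced above can be sketched in code. The class below is an illustrative epsilon-greedy variant under our own assumptions, not the algorithm of this thesis: contexts are assumed to be pre-discretized labels, the cache of size m is filled with the contents of highest estimated per-context popularity, and a small fraction of placements explores at random so that unobserved contents are eventually tried.

```python
import random
from collections import defaultdict

class ContextualCacheLearner:
    """Sketch of context-aware proactive caching as a contextual
    bandit (illustrative epsilon-greedy variant, our assumption)."""

    def __init__(self, contents, cache_size, epsilon=0.1):
        self.contents = list(contents)
        self.m = cache_size
        self.eps = epsilon
        # Running estimates: context -> content -> [hits, placements].
        self.stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def place(self, context):
        """Placement phase: choose which contents to cache."""
        if random.random() < self.eps:
            # Explore: cache a uniformly random subset.
            return random.sample(self.contents, self.m)
        est = self.stats[context]

        def score(c):
            hits, trials = est[c]
            return hits / trials if trials else 0.0

        # Exploit: cache the contents with highest estimated popularity.
        return sorted(self.contents, key=score, reverse=True)[:self.m]

    def update(self, context, cached, requested):
        """Delivery phase: record observed cache hits per content."""
        for c in cached:
            s = self.stats[context][c]
            s[0] += sum(1 for r in requested if r == c)  # hits
            s[1] += 1  # one placement trial
```

Over repeated placement and delivery phases, the hit-rate estimates for each context concentrate around the context-specific popularity, so the greedy placements approach the best cache content for that context.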

The remainder of this chapter is organized as follows. Section 4.2 provides a detailed review of the state of the art on decision making for caching at the edge. In Section 4.3, we introduce the system model for context-aware proactive caching at the edge. A formal problem formulation of context-aware proactive caching for maximizing the number of cache hits under missing knowledge about content popularity is presented in Section 4.4 and it is shown that the formulated problem can be understood as a contextual MAB problem. In Section 4.5, we propose an online learning algorithm for context-aware proactive caching. In Section 4.6, properties of the proposed algorithm are discussed. In particular, an analytical upper bound on the regret of the proposed algorithm is derived, which proves that the algorithm converges to the optimal cache content placement strategy. Extensions of the proposed algorithm to practical requirements are presented in Section 4.7. The performance of the proposed algorithm is evaluated numerically in Section 4.8. Section 4.9 concludes this chapter.