
Main VNMP Instance Properties

In this section, we show the main properties of the created VNMP (full load) instances, which are presented in Table 5.1. It can be seen that the created substrate graphs are very sparse. For instances of size 20 (which means that the substrate network contains 20 nodes), we observe an average of 40.8 substrate arcs. For a connected substrate, the lowest possible number of arcs is 38, which is twice the number of edges required to form a tree. This number is doubled because, by construction of the substrate network, two connected nodes are connected in both directions. With rising instance size, the substrate networks also get marginally denser.

The main point to note with respect to the virtual networks is that the number of virtual nodes to map and the number of virtual arcs to implement stays roughly constant from size 100 onward, because each virtual network has a size limit of 30 nodes. Even if the number of virtual nodes and arcs stays the same, with rising instance size we can expect that the required implementing paths for each virtual arc get longer, which complicates the problem of finding cheap and valid solutions. In addition, for larger instance sizes there are far more mapping possibilities for each virtual node. Table 5.1 also shows the complete substrate usage costs for reference, i.e., how much it would cost to use every part of the substrate network.

CHAPTER 6

Construction Heuristics, Local Search, and Variable Neighborhood Descent

6.1 Introduction

In this chapter, we will introduce Construction Heuristics (Section 6.2), Local Search (Section 6.3), and Variable Neighborhood Descent algorithms (Section 6.4) for the VNMP. We present those algorithms together since they depend on each other and also share the property that they terminate by design, i.e., they are finished at some point and do not need to be aborted by a stopping criterion like the elapsed time. The heuristic algorithms we discuss in later chapters do not have this property. The deterministic termination also offers some great opportunities to analyze the tradeoff between required run-time and solution quality, which is more complicated if the run-time is an external parameter of the algorithm. The performance of the algorithms is compared in Section 6.5; Section 6.6 concludes and gives some directions for future work. The results presented in this chapter have been published in a more compact form in [91].

6.2 Construction Heuristics

A Construction Heuristic (CH) is used to create solutions to problems by following heuristic rules that guide the construction process towards feasible solutions of high quality. During each step, a partial solution is extended by the currently most promising component. For the VNMP, we can already see that these are conflicting objectives: guiding towards valid solutions means spreading resource usage across the whole substrate to decrease the probability of having to buy additional resources, which causes Cu to be unnecessarily high. Trying to pack virtual networks densely to keep Cu low will most likely lead to a high Ca as some substrate network components run out of resources, so some kind of balancing is required. To find the right balance, we first define a general outline of a CH for the VNMP. This outline defines sub-problems, which can

Algorithm 6.1: Outline of the Construction Heuristic for the VNMP
Input:  A VNMP instance I
Output: A solution to I

 1  Solution S(I);
 2  if use node emphasis NE then
 3      while not all virtual nodes mapped do
 4          VirtualNode k = getVirtualNode(I, S);       // virtual node selection SVN
 5          SubstrateNode i = getMapTarget(I, S, k);    // mapping target sel. IVN
 6          S.setMapping(k, i);
 7      end
 8  end
 9  while S incomplete do
10      while virtual arcs are implementable do
11          VirtualArc f = getVirtualArc(I, S);         // virtual arc selection SVA
12          Path p = getImplementingPath(I, S, f);      // path selection IVA
13          S.setPath(f, p);
14      end
15      if not all virtual nodes mapped then
16          VirtualNode k = getVirtualNode(I, S);       // virtual node selection SVN
17          SubstrateNode i = getMapTarget(I, S, k);    // mapping target sel. IVN
18          S.setMapping(k, i);
19      end
20  end
21  return S;

be solved by different heuristics. These heuristics can be tuned towards low Cu or low Ca, and by selecting the right heuristics for the sub-problems, a CH for the VNMP can be derived that creates valid results with low Cu with high probability.

Algorithm 6.1 shows the outline of the CH. It uses the solution to four sub-problems to build a VNMP solution. The four sub-problems are:

SVN Selecting a virtual node to map from the nodes that have not been mapped.

IVN Selecting an implementation of the virtual node, i.e., a substrate node to which the virtual node is mapped.

SVA Selecting a virtual arc f to implement. This arc has to be implementable, which means that it is not implemented in the current solution and s(f) and t(f) have to be mapped. Otherwise, we would not know which substrate nodes the implementing path for f has to connect.

IVA Selecting an implementation of the virtual arc, i.e., the path in the substrate network.

In addition to the heuristics used to solve those four sub-problems, there is another parameter which determines the behaviour of the CH. Basically, we can decide whether to map all virtual nodes before we start to implement virtual arcs (node emphasis NE) or to implement virtual arcs as soon as they become implementable (arc emphasis AE). We will call this the implementation emphasis. If NE is used, Algorithm 6.1 iteratively selects virtual nodes to map (SVN) and a mapping target for them (IVN), until all virtual nodes are mapped. Only then is the main loop of the CH entered. This loop is executed until the solution is complete, i.e., every virtual node has been mapped and every virtual arc has an implementing path. In the main loop, we first implement all virtual arcs that are currently implementable by selecting such an arc (SVA) and finding a path for it (IVA). Once no more implementable arcs remain, an additional node has to be mapped to (potentially) make more arcs implementable. The structure of Algorithm 6.1 might seem a bit counterintuitive, because it contains the node mapping twice, which is not strictly necessary. We chose this structure because it can be easily adapted to work with a partial solution as input, which has to be completed. This functionality will be required for the Local Search and Variable Neighborhood Descent algorithms discussed in the following sections. The required modification to Algorithm 6.1 is removing its first line and adding Solution S, the solution to be completed, as second input.
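As a runnable illustration of this control flow, the following Python sketch mirrors Algorithm 6.1 with a toy Solution class and trivial stand-in heuristics for the four sub-problems. All class and function names here are hypothetical; the thesis implementation is not shown.

```python
# Toy stand-in for a VNMP solution; none of these names are from the thesis.
class Solution:
    def __init__(self, vnodes, varcs):
        self.vnodes, self.varcs = vnodes, varcs  # varcs: list of (src, dst)
        self.mapping, self.paths = {}, {}

    def unmapped(self):
        return [k for k in self.vnodes if k not in self.mapping]

    def implementable(self):
        # an arc is implementable if unimplemented and both endpoints are mapped
        return [f for f in self.varcs if f not in self.paths
                and f[0] in self.mapping and f[1] in self.mapping]

    def complete(self):
        return not self.unmapped() and len(self.paths) == len(self.varcs)

def construct(sol, node_emphasis, select_node, map_node, select_arc, find_path):
    if node_emphasis:                              # NE: map all nodes first
        while sol.unmapped():
            k = select_node(sol)                   # SVN
            sol.mapping[k] = map_node(sol, k)      # IVN
    while not sol.complete():
        while sol.implementable():                 # AE effectively happens here
            f = select_arc(sol)                    # SVA
            sol.paths[f] = find_path(sol, f)       # IVA
        if sol.unmapped():
            k = select_node(sol)                   # SVN
            sol.mapping[k] = map_node(sol, k)      # IVN
    return sol

# Trivial plug-in heuristics, for demonstration only.
sol = construct(Solution(["a", "b"], [("a", "b")]), node_emphasis=False,
                select_node=lambda s: s.unmapped()[0],
                map_node=lambda s, k: 0,           # map everything to node 0
                select_arc=lambda s: s.implementable()[0],
                find_path=lambda s, f: [0])        # intra-node "path"
assert sol.complete()
```

Swapping the plug-in heuristics changes the behaviour of the CH without touching the outline, which is exactly the structure the four sub-problems SVN, IVN, SVA, and IVA provide.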

We implemented four heuristics for each of the four different sub-problems, which are described in the following. If the strategies define no specific order of nodes or arcs (or ties occur), the order is arbitrary. We begin with the SVN heuristics.

NextVNode Selects an unmapped virtual node from V'.

CPUHeavy Selects the virtual node with the highest sum of CPU requirement and connected bandwidth. The connected bandwidth is the sum of the bandwidth requirements of all virtual arcs that start or end at the virtual node. It is important to include this factor, since it is a (nearly) guaranteed CPU load caused by the virtual node. It slightly overestimates the CPU load because transferring data between two virtual nodes hosted on the same substrate node requires CPU capacity only once, but is counted twice in this calculation model. We want to focus on virtual nodes that require a lot of resources because they are the most problematic to fit into the substrate network.

CPUHeavyVN Selects with CPUHeavy from the virtual network (VN) with the highest total CPU and bandwidth requirements that still has unmapped nodes left. Concentrating on one virtual network when selecting nodes supports AE, since virtual arcs become implementable much faster.

DLHeavyVN Selects with CPUHeavy from the VN with the lowest total sum of allowed delays that still has unmapped nodes left. This heuristic assumes that VNs that are highly delay constrained are the most critical to implement. Especially in concert with AE, this ensures that virtual arcs with stringent delay constraints are implemented first, when the substrate still has enough resources to allow a delay feasible path without any additional costs in terms of Ca.
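The CPUHeavy rule (also used inside CPUHeavyVN and DLHeavyVN) can be sketched as follows. The data layout — CPU demands as a dict and virtual arcs as (src, dst, bandwidth) triples — is an assumption for illustration, not the thesis representation.

```python
def connected_bandwidth(node, varcs):
    # total bandwidth of all virtual arcs starting or ending at `node`
    return sum(bw for s, t, bw in varcs if node in (s, t))

def cpu_heavy(unmapped, cpu_req, varcs):
    # pick the unmapped virtual node with the largest CPU-plus-bandwidth load
    return max(unmapped, key=lambda k: cpu_req[k] + connected_bandwidth(k, varcs))

cpu = {"u": 4, "v": 10, "w": 2}
arcs = [("u", "v", 3), ("v", "w", 5)]
assert cpu_heavy(["u", "v", "w"], cpu, arcs) == "v"   # load 10 + 3 + 5 = 18
```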

We now describe the node mapping heuristics. Note that for the node mapping strategies, only substrate nodes allowed by M are regarded as candidates. If no substrate node would yield a valid solution, i.e., no allowed substrate node has enough CPU resources or connected bandwidth left, then the substrate node (allowed by M) with the most free resources is chosen. This is the substrate node with the least missing resources in case no substrate node has resources left.

NextSNode Maps a virtual node to the first substrate node with enough free CPU capacity.

NextFree Maps a virtual node to the first substrate node with enough free CPU capacity to support the CPU requirements and the total connected bandwidth of the virtual node (the total CPU load of a virtual node).

MostFree Maps to the substrate node with the most free CPU capacity. In case of ties, the substrate node with the most free connected bandwidth is chosen as map target.

CheapHost Maps to the substrate node with enough free resources (with respect to the total CPU load) and least increase of Cu, i.e., if possible to a substrate node that already hosts virtual nodes.
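The CheapHost idea, including the fallback to the node with the most free resources described above, can be sketched like this. The field names (free_cpu, usage_cost, hosts) and the simplified feasibility check are assumptions for illustration.

```python
def cheap_host(candidates, load, free_cpu, usage_cost, hosts):
    # candidates: substrate nodes allowed by the mapping constraints M
    feasible = [i for i in candidates if free_cpu[i] >= load]
    if not feasible:                       # fallback: most free resources
        return max(candidates, key=lambda i: free_cpu[i])
    # opening an unused node adds its usage cost to Cu; used nodes add nothing
    return min(feasible, key=lambda i: 0 if hosts[i] else usage_cost[i])

free = {1: 8, 2: 20, 3: 5}
cost = {1: 7, 2: 9, 3: 1}
hosting = {1: True, 2: False, 3: False}
assert cheap_host([1, 2, 3], 6, free, cost, hosting) == 1   # already hosting
```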

For the following description of virtual arc selection heuristics, keep in mind that the SVA strategies only consider implementable virtual arcs.

NextVArc Selects an arbitrary unimplemented virtual arc.

BWHeavy Selects the arc with the highest bandwidth requirement.

DLHeavy Selects the arc with the smallest allowed delay.

RelDLHeavy Selects the arc with the smallest fraction of allowed delay to shortest possible delay between m(s(f)) and m(t(f)). This might be a more accurate measure of how delay constrained a connection actually is.
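The RelDLHeavy criterion amounts to ranking arcs by a ratio, sketched below. The input dictionaries (allowed delay per arc, shortest achievable delay between the mapped endpoints) are illustrative stand-ins.

```python
def rel_dl_heavy(arcs, allowed_delay, shortest_delay):
    # smaller ratio = tighter delay constraint relative to what is achievable
    return min(arcs, key=lambda f: allowed_delay[f] / shortest_delay[f])

allowed = {"f1": 10, "f2": 6}
shortest = {"f1": 8, "f2": 2}
# f1's allowed delay is only 1.25x its shortest path, f2's is 3x: f1 is tighter
assert rel_dl_heavy(["f1", "f2"], allowed, shortest) == "f1"
```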

All four IVA strategies implement a virtual arc f by finding a Delay Constrained Shortest Path in the substrate from m(s(f)) to m(t(f)) via the Dynamic Program from [69]. The only difference between the strategies is the calculation of the substrate arc costs, which define the length of a path.

MinUse If substrate arcs have already been used, they have a cost of 0. Otherwise, their usage cost p^A_e is assigned. If arcs do not have enough free bandwidth, or their source and target nodes do not have enough CPU capacity to host f, a penalty cost of 10^6 is applied.

Spread-n The cost of a substrate arc is the sum of the fraction of the arc's remaining free bandwidth that would be used by the virtual arc and the fraction of free CPU capacity the virtual arc would use on the node the substrate arc connects to. This represents the relative resource usage the virtual arc would incur on a substrate arc. Low values mean that the virtual arc has a low impact on the available resources of a substrate arc (and the node it connects to). The relative resource usage is then taken to the power of n ∈ {0.5, 1, 2} so that it is possible to evaluate the influence of different biasing strategies. We will denote them by Spread-0.5, Spread-1 and Spread-2, respectively.
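The two arc-cost models can be sketched as small functions. These are illustrative only; in the thesis they feed the delay-constrained shortest path computation, and the exact penalty handling may differ.

```python
PENALTY = 10 ** 6   # cost for substrate arcs that cannot host the virtual arc

def min_use_cost(used, usage_cost, feasible):
    # MinUse: already-used arcs are free, fresh arcs pay their usage cost
    if not feasible:
        return PENALTY
    return 0 if used else usage_cost

def spread_cost(bw_demand, free_bw, cpu_demand, free_cpu, n):
    # Spread-n: relative bandwidth use on the arc plus relative CPU use on the
    # node it points to, raised to the power n (the biasing exponent)
    return (bw_demand / free_bw + cpu_demand / free_cpu) ** n

assert min_use_cost(used=True, usage_cost=5, feasible=True) == 0
# a lightly loaded arc/node pair gets a lower cost, biasing paths toward it
assert spread_cost(1, 10, 1, 10, n=2) < spread_cost(5, 10, 5, 10, n=2)
```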

These methods result in a total of 512 different CH configurations; the results of their evaluation can be found in Section 6.5. The strategies were kept simple to keep running times short, as the following heuristics build on the best CH variants.

6.3 Local Search

The basic idea of Local Search (LS) is that a found solution to a problem may be improved by iteratively making small changes. The solutions immediately reachable from a starting solution S are defined by a neighborhood N(S), which can be generated by the appropriate neighborhood structure. LS starts with a solution S and replaces it with a better solution from N(S) until no more improvements can be found. For selecting the neighbor, we use the two standard strategies first-improvement (select the first improving solution) and best-improvement (select the best solution from a neighborhood).

We implemented six different neighborhood structures for the VNMP. They are ruin-and-recreate [154] neighborhoods, which means (in the context of this work) that they remove a part of a complete solution, for instance the implementation of a virtual arc, and then rebuild the solution by applying a CH. Here we need the CH with the modified structure as discussed in the previous section. The discussion of the neighborhoods will skip this rebuilding step.

RemapVarc (N1) Removes the implementation of a virtual arc.

RemapVnode (N2) Removes a virtual node and the implementations of all adjacent virtual arcs.

RemapSlice (N3) Removes a virtual network from the solution by removing all virtual nodes and implementations of all virtual arcs of the virtual network.

ClearSarc (N4) Clears a substrate arc, which means it removes the implementation of all virtual arcs using this substrate arc.

ClearSnode (N5) Clears a substrate node, which means it removes the implementation of all virtual arcs that are crossing the substrate node and removes all virtual nodes that are mapped to the substrate node.

RemapVnodeTAP (N6) Works like RemapVnode, but instead of delegating the choice of substrate node for the removed virtual node to the CH, it explicitly tries to map the virtual node to all possible (TAP) substrate nodes.

Note that the description of the neighborhood structures only specifies how one neighboring solution is reached. How the neighborhood is used during Local Search is straightforward to derive from the description. For example, when performing Local Search with the RemapVarc neighborhood and best-improvement, we start with an initial solution created by the CH. Then we remove the implementation of a virtual arc and rebuild the solution with the CH. If the created solution is better, we store it as the currently best one. This procedure is repeated for all remaining virtual arcs, always starting from the initial solution. After all virtual arcs have been tested, the best found solution replaces the initial solution, and the process of finding an improving solution is repeated until no further improvements can be found.
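The best-improvement loop just described can be demonstrated on a toy problem. Here `neighbors` enumerates candidate moves and `apply_move` yields a full rebuilt solution, standing in for "remove a part and reconstruct with the CH"; the toy objective is not from the thesis.

```python
def local_search(solution, cost, neighbors, apply_move):
    improved = True
    while improved:
        improved = False
        best, best_cost = solution, cost(solution)
        for move in neighbors(solution):       # always start from `solution`
            cand = apply_move(solution, move)
            if cost(cand) < best_cost:
                best, best_cost, improved = cand, cost(cand), True
        solution = best                        # keep the best neighbor found
    return solution

# Toy problem: a solution is an int, moves are +/-1, cost is distance to 7.
result = local_search(10, cost=lambda x: abs(x - 7),
                      neighbors=lambda x: [-1, 1],
                      apply_move=lambda x, m: x + m)
assert result == 7
```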

For each neighborhood, there is a natural order in which to evaluate the neighbors, for instance the order in which virtual nodes are specified in the VNMP instance. This order is relevant when we use first-improvement. We might be able to speed up the search process if we try the most promising neighbors first. When the current solution is not valid, for example, the most promising neighbors are those which might reduce Ca. In case of RemapVnode, that means virtual nodes which are mapped to overloaded substrate nodes (where additional resources had to be bought) should be tried first. For neighborhoods that focus on the substrate (ClearSarc, ClearSnode), overloaded substrate nodes or arcs should be cleared first. We will denote this neighbor ordering by OverloadingFirst. When solving VNMP-S instead of VNMP-O, it might make sense to only consider neighbors which might reduce Ca instead of just prioritizing them. We will call this strategy OnlyOverloading. The choice of not focusing on any particular neighbors will be denoted by None. Note that even when solving VNMP-S, OnlyOverloading is not as strong as OverloadingFirst, since changing parts of a solution that do not directly contribute to Ca might make future improvements of Ca possible.
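The three neighbor orderings can be sketched as a small filter/reordering step applied before a first-improvement pass. The predicate `overloaded(n)`, marking neighbors that touch overloaded substrate resources (and so might reduce Ca), is an assumed placeholder.

```python
def order_neighbors(neighbors, strategy, overloaded):
    if strategy == "None":
        return list(neighbors)                       # natural instance order
    hot = [n for n in neighbors if overloaded(n)]
    if strategy == "OnlyOverloading":
        return hot                                   # evaluate only these
    if strategy == "OverloadingFirst":
        return hot + [n for n in neighbors if not overloaded(n)]
    raise ValueError(strategy)

ns = ["a", "b", "c"]
hot = lambda n: n == "c"                             # pretend only "c" is hot
assert order_neighbors(ns, "OverloadingFirst", hot) == ["c", "a", "b"]
assert order_neighbors(ns, "OnlyOverloading", hot) == ["c"]
```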

For an evaluation of the different neighborhoods, see Section 6.5.

6.4 Variable Neighborhood Descent

The neighborhood structures discussed in the previous section can be applied in combination within a Variable Neighborhood Descent (VND) algorithm [74]. A VND utilizes a series of neighborhoods N1 ... Nk. An initial solution is improved by N1 until no more improvements can be found, then N2 is applied to the solution, and so on. If Nk fails, VND terminates. If an improved solution is found in some neighborhood, VND restarts with N1. For more information, see Section 2.2.6.
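The VND control loop can be sketched as follows; `search_in` stands in for one full local search pass in a given neighborhood, and the toy neighborhoods below are illustrative, not the VNMP ones.

```python
def vnd(solution, cost, neighborhoods, search_in):
    i = 0
    while i < len(neighborhoods):
        improved = search_in(solution, neighborhoods[i])
        if cost(improved) < cost(solution):
            solution, i = improved, 0        # improvement found: restart with N1
        else:
            i += 1                           # neighborhood failed: try the next
    return solution

# Toy: neighborhoods are step sizes; one pass greedily walks toward 0.
def step_search(x, step):
    while abs(x - step) < abs(x):
        x -= step
    return x

result = vnd(13, cost=abs, neighborhoods=[4, 1], search_in=step_search)
assert result == 0
```

The coarse neighborhood (step 4) gets the solution close to the optimum quickly, and the fine one (step 1) finishes the job, which mirrors why ordering neighborhoods by size can matter.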

We use the neighborhood structures defined in the previous section in two variants: as described, without any neighbor prioritization, and with OnlyOverloading. We will denote the second variant with a prime. For example, N6' denotes RemapVnodeTAP in OnlyOverloading configuration.

The following neighborhood orderings (VND configurations) were tested:

All (C1): N1', N2', N3', N4', N5', N6', N1, N2, N3, N4, N5, N6

All neighborhoods, in order of their size.

OnlyOverloading (C2): N1', N2', N3', N4', N5', N6'

Only neighborhoods in OnlyOverloading configuration.

Complete (C3): N1, N2, N3, N4, N5, N6

Only complete neighborhoods.

RComplete (C4): N6, N5, N4, N3, N2, N1

Like C3, but in reverse order.

ImprovCompA (C5): N1, N2, N3, N4, N6

An improvement to Complete based on preliminary results which showed that ClearSnode does not contribute in a significant way to VND.

RImprovCompA (C6): N6, N4, N3, N2, N1

C5 in reverse order.

ImprovOverload (C7): N1', N2', N3'

The OnlyOverloading neighborhoods which, based on preliminary results, find improvements.

RImprovOverload (C8): N3', N2', N1'

C7 in reverse order.

ImprovCompB (C9): N2, N4, N5, N6

Another selection of neighborhoods to improve Complete.

ImprovCompC (C10): N3, N4, N5, N6

C9, but using RemapSlice instead of RemapVnode in the hope of speeding up the algorithm while achieving similar results.

OnlyClear (C11): N4, N5

Only the neighborhoods that try to clear parts of the substrate.

ImprovCompD (C12): N2, N4, N5

A variant of C9 which does not use RemapVnodeTAP, to improve run-times.