
In this chapter, we have presented a GRASP and a VNS algorithm for solving the Virtual Network Mapping Problem. We have shown that the VNS algorithm produces significantly better results than the MA and VND approaches introduced previously. Based on the presented results, we can conclude that the main idea of VNS (successively larger random moves away from local optima) works better for the Virtual Network Mapping Problem than learning from a set of good solutions (MA) or improving good random solutions (GRASP), at least under a rather tight run-time budget. The comparison is fair since the same local improvement strategy (VND) was used, the parameters of all algorithms were optimized, and the same time limits were employed.

Table 8.5: Average relative rank R_rel and its relation to the best result, average number of iterations (Its.) or run-time t[s], fraction of solved instances (Solv.) in percent, and average C_a for the different solution methods per instance size.

              Size   GRASP-O   VNS-O    MA-O     VND      VND-O
R_rel         20     0.476>    0.222=   0.192=   0.912>   0.753>
              30     0.518>    0.222=   0.241=   0.920>   0.705>
              50     0.598>    0.214=   0.275>   0.930>   0.696>
              100    0.606>    0.187=   0.368>   0.916>   0.564>
              200    0.577>    0.197=   0.413>   0.859>   0.372>
              500    0.628>    0.489>   0.538>   0.846>   0.171=
              1000   0.623>    0.592>   0.589>   0.569>   0.228=
Its. / t[s]   20     3142      7345     8185     0.2      0.4
              30     1248      3670     3899     0.7      1.3
              50     481       1664     1663     2.1      4.2
              100    66        346      314      16.0     29.7
              200    90        399      333      40.2     119.7
              500    30        124      94       126.6    605.1
              1000   9         39       23       397.1    828.1
Solv. [%]     20     100.0     100.0    100.0    96.7     97.5
              30     100.0     100.0    100.0    100.0    100.0
              50     100.0     100.0    100.0    99.2     98.3
              100    98.3      100.0    100.0    95.0     97.5
              200    98.3      97.5     95.8     90.0     98.3
              500    76.7      76.7     76.7     73.3     90.8
              1000   60.0      59.2     58.3     57.5     61.7
C_a           20     0.0       0.0      0.0      13.1     4.5
              30     0.0       0.0      0.0      0.0      0.0
              50     0.0       0.0      0.0      4.9      2.1
              100    2.5       0.0      0.0      6.3      3.3
              200    0.4       6.1      3.0      19.0     1.0
              500    47.1      68.5     77.3     97.6     13.9
              1000   245.5     311.2    215.9    184.1    198.9

A promising direction for future work is to look further into the behaviour of the presented algorithms on the largest instance sizes, where their performance still leaves something to be desired. The main problem is that VND, even though it was already selected for its reduced run-time requirements, takes too much time. It might be promising to try, for instance, LS-O as the local improvement strategy, or some of the other local search variants presented in Section 6.5.4. Even setting a time limit for VND might be sufficient to improve performance for the largest instance sizes.

Table 8.6: Average relative rank R_rel and its relation to the best result, average number of iterations (Its.) or run-time t[s], fraction of solved instances (Solv.) in percent, and average C_a for the different solution methods per load.

              Load   GRASP-O   VNS-O    MA-O     VND      VND-O
R_rel         0.10   0.497>    0.215=   0.315>   0.876>   0.528>
              0.50   0.616>    0.294=   0.384>   0.913>   0.450>
              0.80   0.602>    0.333=   0.397>   0.841>   0.484>
              1.00   0.586>    0.371=   0.400=   0.771>   0.532>
Its. / t[s]   0.10   2348      6100     6354     5.6      41.7
              0.50   239       877      1016     50.2     218.3
              0.80   155       458      537      111.2    316.2
              1.00   151       329      385      166.0    331.6
Solv. [%]     0.10   100.0     100.0    100.0    100.0    100.0
              0.50   97.1      98.1     97.1     95.7     99.0
              0.80   90.0      88.6     88.6     85.7     91.9
              1.00   74.8      75.2     74.8     68.1     77.1
C_a           0.10   0.0       0.0      0.0      0.0      0.0
              0.50   0.5       6.9      0.6      0.7      0.1
              0.80   18.3      29.6     21.4     30.0     11.4
              1.00   150.0     183.9    147.3    155.0    116.3

CHAPTER 9

Preprocessing of VNMP Instances

9.1 Introduction

In this chapter, we present preprocessing techniques for VNMP instances. The main aim is to extract as much information as possible from those instances and, where possible, reduce their size or complexity before we start solving them. As an example, one can determine whether a virtual arc can never use a particular substrate arc. If we know this beforehand, we can reduce the model of the problem by removing the variable that would indicate whether the virtual arc uses the substrate arc.

In addition, it is also possible to remove some constraints: if we can detect that a virtual arc can never cross a particular substrate node, we can omit the flow conservation constraint for this substrate node and virtual arc (for formulations based on network flows). The following chapters on exact approaches for solving the VNMP will make use of the preprocessing techniques discussed in this chapter.

There are also other preprocessing opportunities which we will not regard any further. One is checking the consistency of the mapping possibilities, i.e., whether for each allowed mapping of a virtual node it is possible to find a valid implementing path for every virtual arc going out of (going into) that virtual node to one of the allowed mapping targets of the target (source) of the virtual arc.

Since the VNMP instances are generated in a way that ensures this property, we do not check it during preprocessing; implementing this check would be straightforward. Another preprocessing possibility would be to check whether the capacities of nodes or arcs are actually constraining (e.g., whether all virtual arcs that could traverse a substrate arc together require more bandwidth than is available) and to add a constraint only if it is actually possible to violate it. These checks are performed within the exact solvers discussed in the next chapters, so we do not consider them any further.

Note, however, that they benefit from the domain reductions that we present in this chapter.

Before we can introduce the preprocessing methods for the VNMP, we require the following definitions:

Definition 9.1.1 (Set of Delay-Constrained Simple Paths). Given a directed graph G(V, A) and delays d_e ∀e ∈ A, P^d_{s,t} denotes the set of all simple paths from s ∈ V to t ∈ V with a total delay of at most d.

Definition 9.1.2 (Possible Nodes of a Delay-Constrained Substrate Connection). The set of possible nodes of a substrate connection from s ∈ V to t ∈ V with delay limit d, PN^d_{s,t}, is defined as PN^d_{s,t} = {i ∈ V | ∃p^d_{s,t} ∈ P^d_{s,t} : i ∈ p^d_{s,t}}.

Definition 9.1.3 (Fixed Nodes of a Delay-Constrained Substrate Connection). The set of fixed nodes of a substrate connection from s ∈ V to t ∈ V with delay limit d, FN^d_{s,t}, is defined as FN^d_{s,t} = {i ∈ V | ∀p^d_{s,t} ∈ P^d_{s,t} : i ∈ p^d_{s,t}}.

The definitions of the set of possible arcs of a delay-constrained substrate connection, PA^d_{s,t}, and of the set of fixed arcs, FA^d_{s,t}, are analogous.

Definition 9.1.4 (Domain of a Delay-Constrained Substrate Connection). The domain of a substrate connection from s ∈ V to t ∈ V with delay limit d, D^d_{s,t}, is defined as the quadruple (PN^d_{s,t}, PA^d_{s,t}, FN^d_{s,t}, FA^d_{s,t}). We will also refer to this as the substrate domain.
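Definitions 9.1.1 to 9.1.4 can be made concrete with a small sketch that computes a substrate domain by enumerating all delay-constrained simple paths with a depth-first search. This is only an illustration of the definitions, not the thesis's implementation: the dictionary representation of the graph and delays, and the function names, are assumptions, and full path enumeration is exponential in the worst case.

```python
def simple_paths(graph, delay, s, t, d):
    """Enumerate all simple s-t paths with total delay at most d (P^d_{s,t}).
    graph: dict mapping each node to a list of its successors (assumed format)
    delay: dict mapping each arc (u, v) to its delay
    """
    def dfs(u, visited, path, budget):
        if u == t:
            yield list(path)
            return
        for v in graph.get(u, []):
            w = delay[(u, v)]
            if v not in visited and w <= budget:
                visited.add(v)
                path.append(v)
                yield from dfs(v, visited, path, budget - w)
                path.pop()
                visited.remove(v)

    yield from dfs(s, {s}, [s], d)


def substrate_domain(graph, delay, s, t, d):
    """Compute D^d_{s,t} = (PN, PA, FN, FA) directly from the definitions:
    possible nodes/arcs appear on SOME path, fixed nodes/arcs on EVERY path."""
    paths = list(simple_paths(graph, delay, s, t, d))
    node_sets = [set(p) for p in paths]
    arc_sets = [set(zip(p, p[1:])) for p in paths]
    PN = set().union(*node_sets) if node_sets else set()
    PA = set().union(*arc_sets) if arc_sets else set()
    FN = set.intersection(*node_sets) if node_sets else set()
    FA = set.intersection(*arc_sets) if arc_sets else set()
    return PN, PA, FN, FA
```

On a diamond graph 1→2→4, 1→3→4 with delays 1, 1, 2, 2, a delay limit of 2 admits only the path 1-2-4, so all of its nodes and arcs are both possible and fixed; raising the limit to 4 admits both paths, so the fixed nodes shrink to {1, 4} and the fixed arcs become empty.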

Definition 9.1.5 (Possible Nodes of a Virtual Arc). Given a VNMP instance, the set of possible nodes of a virtual arc f ∈ A′, PN_f, is defined as PN_f = ⋃_{s∈M(s(f)), t∈M(t(f))} PN^{d_f}_{s,t}.

Definition 9.1.6 (Fixed Nodes of a Virtual Arc). Given a VNMP instance, the set of fixed nodes of a virtual arc f ∈ A′, FN_f, is defined as FN_f = ⋂_{s∈M(s(f)), t∈M(t(f))} FN^{d_f}_{s,t}.

The definitions of the possible arcs PA_f and the fixed arcs FA_f for a virtual arc f ∈ A′ are analogous.

Definition 9.1.7 (Domain of a Virtual Arc). Given a VNMP instance, the domain of a virtual arc f ∈ A′, D_f, is defined as the quadruple (PN_f, PA_f, FN_f, FA_f).

These definitions allow us to state the VNMP preprocessing problem as follows:

Definition 9.1.8 (The VNMP Preprocessing Problem). Given a VNMP instance, calculate the virtual arc domains D_f ∀f ∈ A′.

Given a solution to this problem, we can remove superfluous variables and constraints from the exact VNMP models. As for solving this problem, it is immediately apparent that the VNMP Preprocessing Problem can be decomposed into multiple instances of the following problem:

Definition 9.1.9 (The Substrate Domain Problem (SDP)). Given a source node s, a target node t and a delay limit d, calculate D^d_{s,t} for the substrate graph of a VNMP instance.

The decomposition of the calculation of D_f works as follows: let S = M(s(f)) be the set of all allowed sources of f, T = M(t(f)) the set of all allowed targets, and d_f the allowed delay of f. Then D_f is given as

D_f = ( ⋃_{s∈S, t∈T} PN^{d_f}_{s,t}, ⋃_{s∈S, t∈T} PA^{d_f}_{s,t}, ⋂_{s∈S, t∈T} FN^{d_f}_{s,t}, ⋂_{s∈S, t∈T} FA^{d_f}_{s,t} ),

which is the combination of all D^{d_f}_{s,t}, s ∈ S, t ∈ T.
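This combination step can be sketched as follows. The sketch assumes a callback solve_sdp(s, t, d) that returns a substrate domain quadruple (PN, PA, FN, FA); the function and parameter names are illustrative, not taken from the thesis. The possible parts are united and the fixed parts intersected over all allowed source/target pairs.

```python
def virtual_arc_domain(solve_sdp, S, T, d_f):
    """Combine the substrate domains D^{d_f}_{s,t} for all (s, t) in S x T
    into the virtual arc domain D_f = (PN_f, PA_f, FN_f, FA_f).
    solve_sdp(s, t, d) is assumed to return the quadruple (PN, PA, FN, FA)."""
    PN_f, PA_f = set(), set()   # possible parts: union over all pairs
    FN_f, FA_f = None, None     # fixed parts: intersection over all pairs
    for s in S:
        for t in T:
            PN, PA, FN, FA = solve_sdp(s, t, d_f)
            PN_f |= PN
            PA_f |= PA
            FN_f = set(FN) if FN_f is None else FN_f & FN
            FA_f = set(FA) if FA_f is None else FA_f & FA
    return (PN_f, PA_f,
            FN_f if FN_f is not None else set(),
            FA_f if FA_f is not None else set())
```

Intersecting the fixed parts is what makes a node fixed for the virtual arc only if it lies on every delay-feasible path for every allowed mapping configuration, matching Definition 9.1.6.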


Figure 9.1: A small sample substrate network.

At first glance, solving the Preprocessing Problem in this way seems wasteful. For instance, we check whether a substrate node can be used for every mapping configuration, even if we have already found a mapping configuration for which it is possible. The same holds for the fixed parts: we check for every mapping configuration whether a substrate arc has to be used, even if we have already found a mapping configuration where it is not required. As it turns out, however, it is actually wasteful to skip evaluating nodes or arcs whose result we already know when solving the SDP. When nodes or arcs are skipped based on such external assumptions, the resulting SDP solution is invalid under any other set of assumptions, so the solutions cannot be memoized, which slows down preprocessing considerably. Section 9.6.3 shows an evaluation of this effect.
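The memoization argument can be illustrated with a small hypothetical cache: because an SDP solution computed without external assumptions depends only on the triple (s, t, d), it can be reused by every virtual arc that shares the same source, target, and delay limit. The class and attribute names below are assumptions for illustration.

```python
class SDPCache:
    """Memoize SDP solutions keyed by (s, t, d). This is only sound because
    each cached domain is computed without external assumptions, so it stays
    valid regardless of which virtual arc queries it."""

    def __init__(self, solve_sdp):
        self.solve_sdp = solve_sdp  # assumed to return (PN, PA, FN, FA)
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def domain(self, s, t, d):
        key = (s, t, d)
        if key in self.cache:
            self.hits += 1          # reuse an earlier, assumption-free result
        else:
            self.misses += 1
            self.cache[key] = self.solve_sdp(s, t, d)
        return self.cache[key]
```

If the SDP were solved with per-virtual-arc shortcuts, each cache entry would only be correct for the arc that created it and the hit counter would have to stay at zero, which is exactly the slowdown described above.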