
repeat
    choose a neighboring solution x′ ∈ N(x)
    if f(x′) better than f(x) then
        x ← x′
until some termination criterion is met
return x

Best improvement: The complete neighborhood N(x) is explored and the best solution x′ ∈ N(x) is chosen.

Next improvement: The neighborhood N(x) is searched systematically and the first solution x′ with f(x′) better than f(x) is chosen.

Random neighbor: A random solution x′ ∈ N(x) is chosen and evaluated. Here the selection process itself is very fast, but in most cases f(x′) will be worse than f(x).

There is no strategy dominating all others in the general case. Whether best or next improvement is preferable depends on various parameters such as the problem to be solved, the definition of the neighborhood structure, or whether an efficient incremental evaluation scheme exists.
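To make the distinction concrete, the following is a minimal Python sketch of the three step functions for a minimization problem; the objective function f and the generator neighborhood(x), yielding all neighbors of x, are purely illustrative assumptions:

import random

def best_improvement(x, f, neighborhood):
    """Explore the complete neighborhood and take the best neighbor."""
    best = min(neighborhood(x), key=f, default=x)
    return best if f(best) < f(x) else x

def next_improvement(x, f, neighborhood):
    """Search systematically and take the first improving neighbor."""
    for y in neighborhood(x):
        if f(y) < f(x):
            return y
    return x  # no improving neighbor: x is a local optimum in N(x)

def random_neighbor(x, f, neighborhood):
    """Pick a random neighbor; very fast, but often worse than x."""
    candidates = list(neighborhood(x))
    return random.choice(candidates) if candidates else x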

Local search is performed until a stopping criterion is met, which can be the maximal number of iterations, a time limit, or when reaching a local optimum:

Definition 16 Let x be a feasible solution in S, and f(x) the objective value of x. Then x is a local optimum of a minimization problem ⇔ ∀x′ ∈ N(x): f(x′) ≥ f(x).

Of course, a local optimum with respect to a chosen neighborhood structure is not necessarily a global optimum, but each global optimum is also a local one.
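Expressed as code, Definition 16 amounts to a simple predicate; a minimal sketch reusing the illustrative f and neighborhood(x) from above:

def is_local_optimum(x, f, neighborhood):
    """Definition 16: no neighbor of x is strictly better (minimization)."""
    return all(f(y) >= f(x) for y in neighborhood(x))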

2.2.4 Greedy Randomized Adaptive Search Procedure (GRASP)

The greedy randomized adaptive search procedure [46, 47] is a multi-start method where each iteration consists of two phases: a construction phase, in which a feasible solution is computed, and an improvement phase, in which this solution is refined using local search.

Algorithm 4: Greedy Randomized Adaptive Search Procedure

Input: instance I of optimization problem P
Output: best found feasible solution x⁺

initialize so far best solution x⁺
repeat
    start with an empty solution x
    repeat
        build RCL // restricted candidate list
        select an element x_j from RCL at random
        add x_j to solution x
    until x is a complete solution
    apply local search to x
    if f(x) better than f(x⁺) then
        x⁺ ← x
until some termination criterion is met
return best found solution x⁺

The overall best solution is stored and returned as the result of GRASP when a stopping criterion, such as an exceeded runtime or a maximal number of iterations, is met; see Algorithm 4.

The construction phase does not follow a pure greedy approach but makes use of a so-called restricted candidate list (RCL). Whenever a new element is to be added to the solution, such a list is computed in advance, where each candidate for incorporation is evaluated according to a greedy cost function. The best elements are allowed to enter the RCL, and one of them is chosen at random to become part of the current solution. The size of the list can be static, i.e., the lmax best candidates form the RCL, or can depend on the incremental costs the considered elements cause to the objective value. In the latter case, a parameter α ∈ [0,1] is used to control the selection pressure, ranging from accepting only the best candidate (pure greedy) to a completely random approach where every candidate element leading to a feasible solution is allowed to join the RCL. If α is a self-tuned parameter, this is referred to as reactive GRASP.
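As an illustration, the following Python sketch shows the construction phase with a value-based RCL; candidate_elements(x) (feasible extensions of the partial solution x) and greedy_cost(e, x) (incremental cost of adding e) are hypothetical problem-specific helpers, and alpha = 0 and alpha = 1 correspond to the pure greedy and the completely random extremes described above:

import random

def greedy_randomized_construction(alpha, candidate_elements, greedy_cost):
    """GRASP construction phase with a value-based RCL (minimization)."""
    x = []                                  # partial solution, built element by element
    candidates = candidate_elements(x)
    while candidates:                       # until x is a complete solution
        costs = {e: greedy_cost(e, x) for e in candidates}
        c_min, c_max = min(costs.values()), max(costs.values())
        # value-based RCL: all candidates whose incremental cost lies within
        # an alpha-fraction of the best one
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [e for e in candidates if costs[e] <= threshold]
        x.append(random.choice(rcl))        # uniformly random choice from the RCL
        candidates = candidate_elements(x)
    return x

A cardinality-based RCL would instead keep the lmax cheapest candidates in each step.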

To improve the performance of the basic algorithm, various modifications and enhancements exist, including, for example, cost perturbations to enforce search diversification, the introduction of long-term memory storing several good solutions that are used to bias the construction phase, or the replacement of the uniformly random selection of a candidate element from the RCL by a probability distribution taking the incremental costs of the different elements into account.
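The last of these enhancements could, for instance, replace the uniform random.choice(rcl) in the sketch above by a rank-based distribution; the weighting scheme below is one common possibility, not the only one:

import random

def biased_rcl_choice(rcl, costs):
    """Draw from the RCL with probability decreasing in the candidates' rank."""
    ranked = sorted(rcl, key=lambda e: costs[e])            # cheapest first
    weights = [1.0 / (r + 1) for r in range(len(ranked))]   # bias towards low cost
    return random.choices(ranked, weights=weights, k=1)[0]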

2.2.5 Variable Neighborhood Search (VNS)

Variable neighborhood search, introduced by Hansen and Mladenović [75], is a metaheuristic making use of multiple neighborhood structures defined for the considered optimization problem, and of a technique to escape local optima. It is based on the following observations:

A local optimum with respect to one neighborhood structure is not necessarily a local optimum with respect to another one.

A global optimum is a local optimum with respect to every possible neighborhood structure.

Often, local optima with respect to different neighborhood structures are relatively close to each other.

Variable Neighborhood Descent (VND)

Variable neighborhood descent implements the first observation given above and can be seen as a local search procedure using not just one but multiple neighborhood structures N_1, N_2, ..., N_kmax. An initial solution x is thereby systematically improved with respect to the various neighborhood structures until a local optimum for all of them is reached; see Algorithm 5.

In general, the ordering of the neighborhood structures is crucial for the performance of the VND, not only with respect to the runtime but also to the achievable solution quality. Different properties of these neighborhood structures have to be considered, like their relationship to each other (overlapping, (partly) including one another, mutually exclusive), the complexity to search them, or their coverage of the whole solution space. The simplest choice is a static ordering, usually based on the runtime complexity, since the first neighborhood N_1 is searched more often than N_kmax. However, there also exist more sophisticated approaches like the self-adaptive VND, which dynamically reorders the neighborhood structures according to their execution time and their success in improving a solution [79]. Another method quickly evaluates relaxations of the different neighborhood structures to be able to choose the most promising one next [112].

Algorithm 5: Variable Neighborhood Descent (x)

k ← 1
repeat
    choose a neighboring solution x′ ∈ N_k(x)
    if f(x′) better than f(x) then
        x ← x′
        k ← 1
    else
        k ← k + 1
until k > kmax
return x
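The same scheme as a Python sketch, assuming neighborhoods to be a list of functions, each returning the best neighbor of x in the respective structure (illustrative names, best improvement assumed):

def variable_neighborhood_descent(x, f, neighborhoods):
    """Descend until x is a local optimum w.r.t. all neighborhood structures."""
    k = 0
    while k < len(neighborhoods):
        y = neighborhoods[k](x)        # best neighbor of x in N_{k+1}
        if f(y) < f(x):
            x, k = y, 0                # improvement found: restart with N_1
        else:
            k += 1                     # no improvement: switch to next structure
    return x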

In addition to a strong local improvement component, VNS also includes a mechanism to escape local optima, the so-called shaking process. For this purpose, a set of neighborhood structures N_1, N_2, ..., N_lmax, which are usually ordered by size and can be different from those exploited within VND, is used to perform random moves from the best solution found so far; cf. Algorithm 6.

When following a best improvement strategy in the local search step, a single random shaking move within the same neighborhood would not be sufficient to escape a local optimum. In such a case, a sequence of moves is often used to perform shaking in correspondingly larger neighborhoods.
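Since Algorithm 6 is not reproduced here, the following Python sketch outlines the basic VNS loop under the assumptions that shake(x, l) performs a random move in the l-th shaking neighborhood and that local_search may in particular be the VND from above:

def variable_neighborhood_search(x, f, shake, local_search, l_max, iterations):
    """Basic VNS: alternate shaking and local search, keep the best solution."""
    best = local_search(x)
    for _ in range(iterations):                # termination criterion
        l = 1
        while l <= l_max:
            y = local_search(shake(best, l))   # random move, then improvement
            if f(y) < f(best):
                best, l = y, 1                 # success: restart shaking from N_1
            else:
                l += 1                         # failure: shake in a larger neighborhood
    return best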

Like GRASP, VNS is a metaheuristic that is very simple to implement and has only a few parameters to tune, which makes it easy to use in practice. Variants of the basic VNS include the reduced VNS, which completely omits the local search phase and thus relies solely on the shaking process, as well as the general VNS, which utilizes VND to locally improve solutions.