2.4 Metaheuristics

A definition of the term metaheuristic according to Osman and Laporte [104] is the following:

Definition 7. A metaheuristic is formally defined as an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space; learning strategies are used to structure information in order to find efficiently near-optimal solutions.

Although MIP techniques are powerful, it is often the case that they are not suitable or applicable for practical problems due to excessive runtime or memory requirements for NP-hard problems. Sometimes it is even impossible to obtain any feasible solution or bounds when trying to solve huge problem instances with exact techniques such as MIP methods, CP, or SAT-based approaches. In these cases, (meta)heuristics may be a highly promising way to go in practice.

In general, (meta)heuristics do not provide guarantees on solution quality, but, if well designed, they frequently yield good approximate solutions. In the following, we describe the heuristics that have been used for solving the problems handled in this thesis.

Metaheuristics have already been studied for decades, and a good starting point for learning about and developing them is the Handbook of Metaheuristics [61].

There is a wide range of metaheuristics described in the literature, and it is beyond the scope of this work to describe all of them in detail. The following sections sketch the principles of GRASP, PILOT, and VNS, which are the ones applied in the subsequent chapters. Further prominent metaheuristics are, for example:

• population-based metaheuristics
  – ant-colony optimization [42, 43, 44]
  – genetic algorithms [35, 127]
• local-search-based metaheuristics
  – tabu search [57, 60]
  – simulated annealing [82, 152]

We also note that the term hyperheuristic is gaining popularity; a hyperheuristic is basically a special metaheuristic layer that is able to adapt the search by selectively utilizing various lower-level (meta)heuristics. The book by Sörensen et al. [31] gives an overview of the topic and discusses recent results in this area.

2.4.1 Preferred Iterative Look ahead Technique

This method, proposed by Duin and Voß [45, 46, 157] and abbreviated as PILOT, tries to reduce problems with the greedy trap by looking ahead a certain number of steps. Basically, any greedy algorithm can be enhanced to a PILOT method by performing such a look-ahead from the incumbent solution. The PILOT method can do a full dry run of, e.g., the greedy algorithm, or the look-ahead depth can be limited in case the runtimes become too large; this choice is clearly problem- and instance-dependent. Setting the depth to zero can be seen as the special case in which the algorithm corresponds to the usual greedy construction heuristic without a look-ahead mechanism.

Algorithm 2.4: Preferred Iterative Look ahead Technique
Require: candidate list C
S ← ∅
while solution is not complete do
    x_best ← uninitialized
    S_best ← uninitialized
    for all c ∈ C do
        S′ ← greedy(S ∪ {c}, C \ {c})
        if S_best = uninitialized or better(S′, S_best) then
            x_best ← c
            S_best ← S′
        end if
    end for
    S ← S ∪ {x_best}
    C ← C \ {x_best}
end while
return S


The pseudo code for PILOT is given in Algorithm 2.4. First, the solution is initialized to the empty set. Then, the algorithm adds solution components until the solution is complete. In each iteration of the outer loop, a full greedy look-ahead is done for every candidate c ∈ C, possibly limited by a given look-ahead depth, and x_best is the element from the candidate list that yields the best complete look-ahead solution.

The function greedy(S, C) calls the corresponding greedy construction heuristic, Algorithm 2.2. After all elements have been evaluated with the look-ahead mechanism, the solution is extended by the best possible choice and this element is removed from the candidate list.
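To make the procedure more concrete, the following is a minimal Python sketch of the PILOT principle from Algorithm 2.4, assuming a minimization problem; the helper functions greedy_complete, objective, and is_complete are illustrative placeholders and not part of the original formulation.

def pilot(candidates, greedy_complete, objective, is_complete):
    """Sketch of the PILOT construction from Algorithm 2.4 (minimization).

    greedy_complete(S, C) is assumed to greedily complete the partial
    solution S using the remaining candidates C, as in Algorithm 2.2.
    """
    solution = set()
    candidates = set(candidates)
    while not is_complete(solution) and candidates:
        best_candidate, best_value = None, float("inf")
        for c in candidates:
            # Look ahead: tentatively add c and complete the rest greedily.
            lookahead = greedy_complete(solution | {c}, candidates - {c})
            value = objective(lookahead)
            if value < best_value:  # corresponds to better(S', S_best)
                best_candidate, best_value = c, value
        solution.add(best_candidate)       # S <- S ∪ {x_best}
        candidates.remove(best_candidate)  # C <- C \ {x_best}
    return solution

A depth-limited variant would simply stop greedy_complete after a fixed number of extension steps and evaluate the resulting partial solution instead of a complete one.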

2.4.2 Greedy Randomized Adaptive Search Procedure

The Greedy Randomized Adaptive Search Procedure (GRASP) has been proposed by Resende and Ribeiro [128]. Basically, GRASP is a multistart metaheuristic where each step consists of a construction phase and a local search phase. The general procedure of GRASP is given in Algorithm 2.5.

Algorithm 2.5: GRASP
S_best ← ∅
while termination criterion not met do
    S ← GreedyRandomizedConstruction()
    S ← LocalSearch(S)
    if S is better than S_best then
        S_best ← S
    end if
end while
return S_best

Algorithm 2.6: GreedyRandomizedConstruction

Require: Set of candidates C

S ← ∅
∀c ∈ C: evaluate incremental costs
while solution is not complete do
    build the restricted candidate list (RCL)
    pick c ∈ RCL at random
    S ← S ∪ {c}
    reevaluate incremental costs
end while
return S



When looking at Algorithm 2.6, one can see that GRASP makes use of a restricted candidate list (RCL). This list usually contains a number of the best-ranked candidate elements, from which one element is chosen at random to become the next part of the solution. The size of the RCL obviously has a direct influence on the performance and solution quality of the algorithm and should therefore be chosen well.
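As an illustration, the following Python sketch shows one common way to realize the construction of Algorithm 2.6 with a value-based RCL for a minimization problem; the parameter alpha and the helpers incremental_cost and is_complete are assumptions of this sketch, with alpha = 0 corresponding to a purely greedy and alpha = 1 to a purely random choice.

import random

def greedy_randomized_construction(candidates, incremental_cost,
                                   is_complete, alpha=0.3):
    """Sketch of Algorithm 2.6 with a value-based RCL (minimization)."""
    solution = set()
    candidates = set(candidates)
    while not is_complete(solution) and candidates:
        # Evaluate the incremental cost of every remaining candidate.
        costs = {c: incremental_cost(solution, c) for c in candidates}
        c_min, c_max = min(costs.values()), max(costs.values())
        # RCL: all candidates whose cost is within alpha of the best one.
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [c for c in candidates if costs[c] <= threshold]
        chosen = random.choice(rcl)  # pick c ∈ RCL at random
        solution.add(chosen)
        candidates.remove(chosen)
    return solution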

2.4.3 Variable Neighborhood Search

The Variable neighborhood search (VNS) [97] metaheuristic uses a so-called shaking mechanism to construct a random point in a neighborhood of the incumbent solution, which allows the metaheuristic to escape local optima. VNS is a powerful technique for many real-world applications requiring good solutions in reasonable runtimes. Pseudo code of the VNS is shown in Algorithm 2.7.

Algorithm 2.7: Variable neighborhood search
Require: initial solution x, shaking neighborhood structures N^s_l, l = 1, ..., l_max, local search neighborhood structures N_k, k = 1, ..., k_max
while termination criteria not met do
    l ← 1
    while l ≤ l_max do
        x′ ← Shaking(x, N^s_l)
        x″ ← VND(x′, N_k)
        if x″ is better than x then
            x ← x″
            l ← 1
        else
            l ← l + 1
        end if
    end while
end while
return x

There are many variations, extensions, and hybrids of the basic VNS scheme; the most prominent ones are reduced VNS (RVNS), variable neighborhood decomposition search (VNDS), parallel VNS (PVNS), and skewed VNS (SVNS).
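To summarize the scheme of Algorithm 2.7 in code, the following is a compact Python sketch of a basic VNS for a minimization problem; shake, local_search (standing in for the VND step), the objective, and the iteration budget used as termination criterion are all placeholder assumptions of this sketch.

def vns(x, objective, shake, local_search, l_max, max_iters=1000):
    """Basic VNS sketch (minimization).

    shake(x, l) is assumed to return a random solution from the l-th
    shaking neighborhood of x.
    """
    for _ in range(max_iters):  # termination criterion: iteration budget
        l = 1
        while l <= l_max:
            x_shaken = shake(x, l)               # jump to a random neighbor
            x_improved = local_search(x_shaken)  # descend to a local optimum
            if objective(x_improved) < objective(x):
                x = x_improved  # success: accept and restart shaking
                l = 1
            else:
                l += 1          # failure: move to the next neighborhood
    return x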