For network problems such as the TSP, where solutions are represented by a set of edges, the edge recombination crossover [13] is also popular. It creates an offspring by iteratively adopting edges of the parental solutions according to certain criteria.

On network problems, this recombination operator performs considerably well since it can usually exploit problem-specific information.
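For illustration, the following is a minimal sketch of one common edge recombination variant; the starting node and the tie-breaking rule are simplified here, so it should not be read as the exact procedure of [13].

```python
import random

def edge_recombination(parent1, parent2, rng=random):
    """Illustrative sketch of edge recombination crossover (ERX) for
    tours given as permutations of node labels."""
    n = len(parent1)
    # Combined adjacency ("edge") map of both parent tours.
    edge_map = {v: set() for v in parent1}
    for tour in (parent1, parent2):
        for i, v in enumerate(tour):
            edge_map[v].add(tour[(i - 1) % n])
            edge_map[v].add(tour[(i + 1) % n])

    current = parent1[0]                    # simplified: start at the first node of parent1
    child, unvisited = [current], set(parent1) - {current}
    while unvisited:
        # The chosen node must no longer be considered as a neighbor.
        for adj in edge_map.values():
            adj.discard(current)
        candidates = edge_map[current]
        if candidates:
            # Prefer the neighbor with the fewest remaining edges (ties broken at random).
            current = min(candidates, key=lambda v: (len(edge_map[v]), rng.random()))
        else:
            current = rng.choice(tuple(unvisited))   # fallback: random unvisited node
        child.append(current)
        unvisited.discard(current)
    return child
```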

Mutation

Mutation is similar to applying a random move in a certain neighborhood to a solution in LS. This way, lost attributes that no longer appear anywhere in the population have a chance of being reintroduced. Usually, mutations are not applied to every solution in the population in each iteration; instead, they occur only with a small probability. For a permutation-based representation, a possible mutation is to exchange the attributes of two positions.
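As a concrete illustration, a minimal sketch of such a swap mutation for permutations; the mutation probability p_mut and the function name are chosen here for illustration only.

```python
import random

def swap_mutation(permutation, p_mut=0.1, rng=random):
    """With small probability p_mut, exchange the attributes of two
    randomly chosen positions; otherwise return an unchanged copy."""
    child = list(permutation)
    if rng.random() < p_mut:
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child
```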

3.7 Memetic Algorithms

A common drawback of EAs is that there is no guarantee that even the best solution found is locally optimal. Though good diversification is provided by a large population and the recombination and mutation mechanisms, EAs overall lack intensification.

Therefore, many successful EAs for complex combinatorial optimization problems additionally use hybridization to improve solution quality and/or running time.

Pablo Moscato [84] introduced the term Memetic Algorithm (MA) for EAs enhanced with local search and problem-specific knowledge. The term “meme” corresponds to a unit of imitation in cultural transmission [17]. So while genetic algorithms are inspired by biological evolution, MAs attempt to mimic cultural evolution.

In MAs, while the outer metaheuristic is an EA, individual solutions of the population are further improved, e.g. via local search heuristics. If each intermediate solution is always turned into a local optimum, the EA exclusively searches the space of local optima (w.r.t. the neighborhood structure(s) of the local improvement procedure). By adjusting how much effort is spent on local improvement, it is thus possible to tune the balance between intensification and diversification.
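The following skeleton sketches this scheme; all problem-specific operators (evaluate, select, recombine, mutate, local_search) are assumed to be supplied externally, and the survivor selection shown is only one of many possibilities.

```python
import random

def memetic_algorithm(init_population, evaluate, select, recombine,
                      mutate, local_search, generations=100, rng=random):
    """Sketch of a memetic algorithm: an EA in which every offspring is
    additionally improved by a local search procedure (minimization)."""
    population = [local_search(ind) for ind in init_population]
    for _ in range(generations):
        parents = select(population, evaluate)        # assumed to return >= 2 parents
        offspring = [mutate(recombine(*rng.sample(parents, 2)))
                     for _ in range(len(population))]
        # Intensification: push each offspring towards a local optimum.
        offspring = [local_search(ind) for ind in offspring]
        # Survivor selection: keep the best individuals of the merged pool.
        population = sorted(population + offspring, key=evaluate)[:len(population)]
    return min(population, key=evaluate)
```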

Chapter 4

Hybrid Algorithms

Looking at the various exact techniques and metaheuristics described in the previous chapters, each of them has its assets and drawbacks. It therefore appears natural to combine ideas from multiple algorithmic streams. Several publications of the last years describe different kinds of such hybrid optimizers that are often significantly more effective in terms of running time and/or solution quality since they benefit from synergy. See [24, 97], which illustrate the many different possible combinations and their huge potential.

This chapter will focus on embedded techniques since they are implemented in this thesis. They are possibly the most straightforward way to combine different approaches. The basic idea is to let one algorithm act as a subordinate of another one. One popular strategy is to apply some local search or more complex algorithms within an outer metaheuristic for “fine-tuning”. Variable neighborhood search or memetic algorithms introduced in the previous chapter are typical examples of such a collaboration – while the outer algorithm creates diversity, the inner local search heuristics emphasize intensification. To go one step further, collaborations between exact and heuristic approaches seem to provide even more hybridization possibilities.

4.1 Exact Algorithms as Subordinates of Metaheuristics

In order to enhance the performance of metaheuristics, exact algorithms can be used to solve parts or subproblems during the optimization process.


4.1.1 Explore Large Neighborhoods by Exact Methods

Numerous local search based algorithms use neighborhoods Nk that lead to moves referred to as k-exchange or k-opt with k = 1 or k = 2. These simple neighborhoods are characterized by the fact that they consider only the change of one or two component(s) of the current configuration vector at once.

Such algorithms are fast but often produce poor suboptimal solutions. To improve this behavior, one can increase k, the number of variables to be concurrently considered at each move, beyond one or two. However, as the number of neighboring solutions in Nk typically increases exponentially with k, a complete enumeration and evaluation of all neighbors of the current configuration can usually only be done for small k.
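For concreteness, the following sketch exhaustively enumerates the swap neighborhood for k = 2 on a permutation; the objective function length is an assumed input, and the quadratic neighborhood size already hints at how quickly full enumeration becomes impractical for larger k.

```python
from itertools import combinations

def best_two_exchange(tour, length):
    """Enumerate the complete 2-exchange (swap) neighborhood of a tour and
    return the best neighbor found, or the tour itself if none improves;
    `length` is an assumed objective function to be minimized."""
    best_tour, best_len = tour, length(tour)
    for i, j in combinations(range(len(tour)), 2):    # O(n^2) neighbors for k = 2
        neighbor = list(tour)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        cand = length(neighbor)
        if cand < best_len:
            best_tour, best_len = neighbor, cand
    return best_tour, best_len
```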

Instead of naively enumerating and evaluating all the solutions in a larger neighborhood in order to identify the best move (or any improving move) to be performed, we can consider more sophisticated exact algorithms for this task. So-called very large scale neighborhood search methods [1] have been described for a few selected problems, in which large neighborhoods are defined in special ways that allow identifying the best neighboring solution in reasonable (i.e. polynomial) time without explicitly considering each neighbor. For example, Ergun and Orlin [26] presented such approaches for the traveling salesman problem, Congram [9] explored large neighborhoods efficiently by means of dynamic programming, and for a class of partitioning problems, Thompson et al. [116] defined the concept of cyclic exchange neighborhoods.

Such approaches are also highly promising for many other classes of problems. However, the design of successful large neighborhoods is not trivial, since it goes hand in hand with the design of an efficient algorithm for searching them.

For several generalized network design problems in this thesis, we will use techniques such as dynamic programming and integer linear programming to efficiently search large neighborhoods.
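As one illustration of the dynamic programming case, assume a generalized-TSP-like setting in which exactly one node per cluster has to be visited. For a fixed cluster order, the neighborhood of all possible node selections contains the product of the cluster sizes many solutions, yet its best member can be found by a layered shortest-path DP; the function name and the dist interface below are assumptions of this sketch.

```python
def best_nodes_for_cluster_order(clusters, dist):
    """For a fixed sequence of clusters, select one node per cluster such
    that the closed tour through them is shortest; `dist(u, v)` is an
    assumed distance function. The neighborhood searched here has
    prod_i |C_i| members, but the DP runs in polynomial time."""
    best_cost, best_tour = float("inf"), None
    for start in clusters[0]:                       # fix the node of the first cluster
        cost, preds = {start: 0.0}, []
        for layer in clusters[1:]:
            new_cost, pmap = {}, {}
            for v in layer:
                u = min(cost, key=lambda u: cost[u] + dist(u, v))
                new_cost[v], pmap[v] = cost[u] + dist(u, v), u
            cost = new_cost
            preds.append(pmap)
        # Close the tour back to the fixed start node.
        last = min(cost, key=lambda v: cost[v] + dist(v, start))
        total = cost[last] + dist(last, start)
        if total < best_cost:
            tour = [last]
            for pmap in reversed(preds):            # backtrack through the layers
                tour.append(pmap[tour[-1]])
            best_cost, best_tour = total, list(reversed(tour))
    return best_tour, best_cost
```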

4.1.2 Merge Solutions by Exact Methods

In evolutionary algorithms, a traditional operator is recombination, which derives a new offspring solution by merging properties of two or more selected candidate solutions. This operator traditionally relies on random decisions, and poor offspring often cannot be avoided.
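One way to counter this, sketched below under the assumption of a binary encoding and a minimization objective evaluate, is to fix all components on which the parents agree and to determine the best completion of the remaining components exactly; here plain enumeration is used, and an ILP or branch-and-bound solver could take over when the parents differ in many positions.

```python
from itertools import product

def optimal_merge(parent1, parent2, evaluate):
    """Merge two binary-encoded parents exactly: components on which the
    parents agree are fixed, and the best assignment of the remaining
    (free) components is found by exhaustive enumeration. Practical only
    while the number of free components stays small."""
    free = [i for i, (a, b) in enumerate(zip(parent1, parent2)) if a != b]
    best, best_val = None, float("inf")
    for bits in product((0, 1), repeat=len(free)):
        cand = list(parent1)                 # agreed components are inherited as-is
        for i, b in zip(free, bits):
            cand[i] = b
        val = evaluate(cand)                 # assumed objective (minimization)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val
```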