
2.3 Exact Solution Approaches

2.3.3 Constraint Programming

Whereas integer linear programming is mainly applied to optimization problems, constraint programming (CP) is primarily utilized to solve constraint satisfaction problems; in this sense the two concepts are somewhat complementary to each other. Also in CP the term “programming” is actually related to “computer programming”: besides being a (declarative) programming paradigm, the user often needs to program the strategy to solve the problem as well. Nevertheless, the idea is still that the user states a problem (via constraints) which is then solved by a general-purpose constraint solver. For an introduction we refer to the book by Apt [10]; a broad overview also including some more advanced topics is given in the Handbook of CP by Rossi et al. [200], and a comparison between ILP and CP is presented by Lustig and Puget [132].

The two main, basically orthogonal, concepts for solving CSPs are search and inference. For CSPs a tree-search method is used as well: backtracking. It differs from exhaustive search by checking the constraints after each branching decision and, in the simplest case, going back to the previous node whenever a violation is encountered, continuing the search from there. Hence, here too, subtrees are pruned. Though the search component alone would be sufficient to solve the problem, it is usually too inefficient and would seldom be a viable approach on its own (even though many improvements were proposed in the literature).
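The backtracking scheme just described can be sketched as follows. This is a minimal illustration, not taken from any particular solver; the representation of constraints as (scope, predicate) pairs is our own choice, and a constraint is checked as soon as all variables of its scope are assigned.

```python
def backtrack(assignment, variables, domains, constraints):
    """Depth-first search over partial assignments; constraints are
    (scope, predicate) pairs, checked after each branching decision
    as soon as their whole scope is assigned."""
    if len(assignment) == len(variables):
        return dict(assignment)                      # complete: solution found
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        consistent = all(pred(*(assignment[v] for v in scope))
                         for scope, pred in constraints
                         if all(v in assignment for v in scope))
        if consistent:
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]                          # violation or dead end: undo, prune subtree
    return None

# Toy instance: three pairwise different variables over {1, 2, 3}.
variables = ["x", "y", "z"]
domains = {v: [1, 2, 3] for v in variables}
neq = lambda a, b: a != b
constraints = [(("x", "y"), neq), (("y", "z"), neq), (("x", "z"), neq)]
solution = backtrack({}, variables, domains, constraints)
```

On this instance the first solution found is {'x': 1, 'y': 2, 'z': 3}; note how assignments violating a disequality are undone immediately rather than extended.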


For instance, during simple backtracking it frequently happens that very similar subtrees are unnecessarily investigated again and again, only to fail and be pruned in the end (this behavior is denoted as thrashing). Here inference comes into play, “the real power engine behind CP” [13], which essentially tightens the constraints, i.e. reduces the domains of the involved variables, eventually eliminating parts of the search space. In practice one might be familiar with this concept from solving Sudokus from newspapers. More precisely, inference removes (filters) inconsistent domain values, hence is said to achieve consistency, and since the information contained in one constraint is propagated to the neighboring constraints this process is sometimes also called constraint propagation. Note that several notions (levels) of consistency with corresponding consistency techniques exist, demanding an increasing computational effort to achieve/apply. We mention three common ones: the simplest is node consistency (for unary constraints), followed by arc consistency (already achieving a high degree of consistency for binary constraints), and path consistency (to remove more, but not provably all, inconsistencies not covered by arc consistency; it was shown that achieving consistency on paths of length two is sufficient to achieve path consistency in general). Since every CSP can be transformed to an equivalent CSP using only binary constraints, much research effort was put into efficient algorithms to obtain arc consistency. Practical CSP solution approaches usually rely on incomplete consistency techniques combined with a non-deterministic search to yield a complete method. COPs can be solved too; in that case one applies a tree search based on branch-and-bound.
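The classical arc-consistency algorithm AC-3 is a concrete example of such a filtering technique. The following is a minimal sketch under our own conventions: binary constraints are stored per directed arc (x, y) as a predicate, and domains are mutable lists.

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Remove every value of x that has no supporting value in y."""
    pred = constraints[(x, y)]
    removed = False
    for vx in list(domains[x]):
        if not any(pred(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)                    # vx is arc-inconsistent
            removed = True
    return removed

def ac3(domains, constraints):
    """constraints maps directed arcs (x, y) to binary predicates.
    Returns False iff some domain becomes empty (CSP inconsistent)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            # Propagate: arcs pointing into x must be re-examined.
            queue.extend(arc for arc in constraints
                         if arc[1] == x and arc[0] != y)
    return True

# Toy instance: x < y over {1, 2, 3} prunes x = 3 and y = 1.
domains = {"x": [1, 2, 3], "y": [1, 2, 3]}
constraints = {("x", "y"): lambda a, b: a < b,
               ("y", "x"): lambda a, b: a > b}
ok = ac3(domains, constraints)
```

After the call the domains are reduced to x ∈ {1, 2} and y ∈ {2, 3}: filtering one arc triggered the re-examination of the arcs pointing into the revised variable, which is exactly the propagation described above.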

There also exist global constraints (e.g. sum, alldifferent, cumulative, etc.) that are on the one hand shorthands for frequently recurring patterns, which makes programming easier, and on the other hand facilitate the work of the constraint solver by providing it with a better view of the structure of the problem. For so-called over-constrained problems, where it is unlikely or impossible that all constraints can be fulfilled, one can use soft constraints (as opposed to the usual hard constraints). They are suited to formalize desired properties rather than requirements that may not be violated.
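To see why the global view helps, consider an alldifferent constraint over x, y ∈ {1, 2} and z ∈ {1, 2, 3}: the pairwise disequalities are already arc consistent, yet globally z can only take the value 3. The brute-force filter below (purely illustrative and exponential; real solvers use polynomial matching-based algorithms such as Régin's) makes this extra pruning explicit.

```python
from itertools import product

def filter_alldifferent(domains):
    """Keep only values that occur in at least one assignment in which
    all variables take pairwise different values (for illustration only)."""
    names = list(domains)
    supported = {v: set() for v in names}
    for combo in product(*(domains[v] for v in names)):
        if len(set(combo)) == len(combo):            # all values distinct
            for v, val in zip(names, combo):
                supported[v].add(val)
    return {v: sorted(supported[v]) for v in names}

domains = {"x": [1, 2], "y": [1, 2], "z": [1, 2, 3]}
filtered = filter_alldifferent(domains)              # z is forced to 3
```

The binary decomposition x ≠ z, y ≠ z cannot remove anything here, while the global constraint immediately fixes z = 3 because x and y together consume the values 1 and 2.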

Naturally, careful basic modeling is crucial for solving CSPs or COPs, but the performance might be further improved by: considering redundant constraints which might allow enhanced filtering; using readily available global constraints (depending on the solver); considering the inclusion of the dual model (swapping the constraints for the variables and vice versa) or an alternative view of the initial model; and avoiding symmetries by posting corresponding constraints.
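The effect of symmetry breaking can be seen on a toy model of our own: two interchangeable variables produce every solution twice, once as (a, b) and once as (b, a), and posting a simple ordering constraint keeps exactly one representative per symmetric pair.

```python
from itertools import product

domain = range(1, 5)
# Two interchangeable variables with x != y: every solution appears twice.
solutions = [(x, y) for x, y in product(domain, domain) if x != y]
# Posting the symmetry-breaking constraint x < y keeps one representative
# per mirrored pair and halves the number of solutions to enumerate.
asymmetric = [(x, y) for (x, y) in solutions if x < y]
```

Here 12 symmetric solutions collapse to 6; on larger models with more interchangeable variables the reduction of the search space grows factorially.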

In this thesis, a subproblem in Chapter 7 will be tackled with CP.

2.4 (Meta-)Heuristic Solution Approaches

Heuristic problem solving techniques range from simple constructive techniques such as ad-hoc greedy algorithms through local search methods to various metaheuristics [218, 91].

Especially the latter category is well developed and has proven to be highly useful in practice. As their name suggests, metaheuristics are defined on a higher and basically problem-independent level; note that the term “meta-heuristic” was first introduced by Fred Glover in the context of tabu search [95]. These “solving concepts” describe how to efficiently explore the search space by guiding lower-level or subordinate heuristics to find (near-)optimal solutions. It is important that a metaheuristic appropriately balances diversification and intensification of the search. Several taxonomies have been proposed in the context of metaheuristics [22, 218], which makes sense since meanwhile a lot of such solution approaches exist, with differences that are sometimes quite fuzzy. In fact, in recent years one can observe the appearance of an increasing number of rather “exotic” variants which are frequently inspired by metaphors from nature, physics and life [235]. It is sometimes hard to spot new and original ideas, as often only slight variations, if at all, of established concepts are proposed.

A critical view on this is given by Weyland [235]. Coming back to the taxonomies, they are also helpful in synthesizing (original) successful concepts and ideas among the different variants, fostering a better understanding of the vital components that yield an improved search balance as mentioned above.

A meaningful criterion is the division into single-solution based methods (i.e. following a single search trajectory), which are often sophisticated variants of local search either using a single neighborhood or several ones, and population based methods (i.e. multiple search trajectories, usually running in an intertwined way). Prominent examples of the former class are variable neighborhood search (VNS), tabu search (TS) and simulated annealing (SA), while the broad class of evolutionary algorithms (EAs), swarm intelligence methods such as ant colony optimization (ACO) algorithms and particle swarm optimization (PSO), as well as scatter search (SS) belong to the latter class. Another criterion is whether the solutions are primarily constructed (e.g. greedy randomized adaptive search procedures or ACO) or improved (e.g. VNS, TS, SA, EAs, PSO, SS).

Mostly in the early years of metaheuristics – but sometimes to a lesser extent even today – certain communities strongly promoted their method of choice and seemingly rather tried to find problems where it could be applied with success, hence gathering more evidence for why it is the “only true” method. Yet according to the “no free lunch” theorem by Wolpert and Macready [238], no single algorithm can dominate all others on all problems, since any elevated performance over one class of problems is offset by performance over another class.

Hence it only makes sense to favor one algorithm over another with respect to specific problems or classes of problems.

We define some required basic notions before dealing in more detail with some heuristic methods.

Definition 10 A neighborhood structure is a function N : S → 2^S that assigns to every s ∈ S a set of neighbors N(s) ⊆ S. N(s) is called the neighborhood of s. Often, neighborhood structures are implicitly defined by specifying the changes that must be applied to a solution s in order to generate all its neighbors. The application of such an operator that produces a neighbor s′ ∈ N(s) of a solution s is commonly called a move.
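As a concrete example, a common neighborhood structure for permutation solutions is the swap (or exchange) neighborhood, where a move exchanges the entries at two positions. The sketch below is our own illustration; solutions are represented as tuples.

```python
def swap_neighborhood(s):
    """N(s): all solutions reachable from the permutation s by one move,
    namely swapping the entries at positions i < j."""
    neighbors = []
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            t = list(s)
            t[i], t[j] = t[j], t[i]                  # apply the move
            neighbors.append(tuple(t))
    return neighbors

neighbors = swap_neighborhood((1, 2, 3))
# -> [(2, 1, 3), (3, 2, 1), (1, 3, 2)]
```

Note that the neighborhood is defined implicitly through the move operator, as in the definition above: |N(s)| equals the number of position pairs, here C(3, 2) = 3.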

In addition to the globally optimal solution defined in Section 2.1 we can also define a locally optimal solution (again for a minimization problem):


Definition 11 A locally optimal solution (or local optimum) with respect to a neighborhood structure N is a solution s such that ∀s′ ∈ N(s) : f(s) ≤ f(s′).
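A best-improvement local search connects the two definitions: it repeatedly moves to the best neighbor and stops exactly when the condition of Definition 11 holds. The numeric toy instance is our own, chosen only for brevity.

```python
def local_search(s, f, N):
    """Minimize f by best-improvement moves; stops at a local optimum,
    i.e. at a solution s with f(s) <= f(s') for all s' in N(s)."""
    while True:
        best = min(N(s), key=f)
        if f(best) >= f(s):                          # no improving neighbor
            return s
        s = best

# Toy instance over the integers with N(s) = {s - 1, s + 1}.
f = lambda s: (s - 4) ** 2
optimum = local_search(10, f, lambda s: [s - 1, s + 1])   # -> 4
```

Here the local optimum coincides with the global one since f is unimodal; for multimodal objectives the search may stop at a locally optimal solution that is not globally optimal, which is what the metaheuristics discussed above try to overcome.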