
1.3 Combinatorial Optimisation

1.3.1 Basic Terms

Each day decision makers are confronted with problems of growing complexity. The problem to be solved is often expressed as an optimisation problem. In principle an optimisation problem can be described as follows [DD04]: maximise (or minimise) the function $H(x)$ under the restrictions $g_i(x)$, where $x$ is a possible configuration in the configuration space $\Gamma$, the $g_i(x)$ are the constraints and $H(x)$ is the objective function which shall be optimised. It is a map from the set of feasible solutions (configurations) $x$ into the set of real numbers:

$$H : \Gamma \longrightarrow \mathbb{R}, \qquad x \longmapsto H(x) \qquad (1.2)$$

Regularly the total costs of a system shall be minimised. A maximisation problem can be changed into a minimisation problem by multiplication with $-1$. A combinatorial optimisation problem is defined just like a normal optimisation problem: $H(x)$ is again the objective function which shall be optimised, but this time the configuration space $\Gamma$ is finite and consists of discrete elements. A continuous optimisation problem has a configuration space which is not discrete.
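As a minimal sketch of such a discrete configuration space, the following Python snippet assumes a toy routing objective; the points, the tour objective and all names are illustrative assumptions, not taken from the text. $\Gamma$ is the finite set of permutations and $H$ maps each configuration to a real number.

```python
import itertools
import math

# Hypothetical illustration (not from the text): the configuration space Gamma is
# the finite set of all round trips over a few points, and H maps every discrete
# configuration x to a real number -- here the length of the closed tour.
POINTS = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 2.0)]

def H(x):
    """Objective function H : Gamma -> R, the total length of the closed tour x."""
    return sum(math.dist(POINTS[x[i]], POINTS[x[(i + 1) % len(x)]])
               for i in range(len(x)))

# Gamma is finite and discrete: every permutation of the point indices.
Gamma = list(itertools.permutations(range(len(POINTS))))

# Exhaustive evaluation is only feasible because this toy Gamma is tiny.
best = min(Gamma, key=H)
print(best, H(best))
```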

The restrictions of combinatorial optimisation problems are difficult to handle. The first possibility is to ban all configurations which do not fulfil the restrictions. Thereby the search space is divided into small islands which the system cannot leave once it is stranded on one of them; the optimum is then reached only by chance. The second possibility is to accept unfulfilled restrictions and to charge so-called virtual costs (penalties) whenever a restriction is not fulfilled. A penalty function $H_P$ is a map

$$H_P : \Gamma \longrightarrow \mathbb{R}^{+}, \qquad x \longmapsto H_P(x) \qquad (1.3)$$

with $x \in \Gamma$ and

$$H_P(x) = \lambda \cdot g(x) \;\begin{cases} = 0 & x \text{ fulfils the restriction} \\ > 0 & \text{else} \end{cases} \qquad (1.4)$$

$\lambda \in \mathbb{R}$ is a parameter which has to be fixed. For each restriction such a function can be defined and integrated into the objective function. If $\lambda$ is very high the restrictions practically have to be fulfilled, because violating them gives the objective function (which shall be minimised) much higher values. For $\lambda = 0$ no restrictions are considered. A solution is valid if all restrictions are fulfilled. One can distinguish between hard and weak penalties: hard penalties do not allow a restriction to be broken at all, while weak penalties allow a small non-fulfilment. In route planning, for example, a truck can be slightly overloaded.
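A minimal sketch of this penalty mechanism, assuming a single capacity restriction; the names `load` and `capacity` and the concrete numbers are illustrative assumptions, not taken from the text.

```python
# Sketch of the penalty idea in (1.3)/(1.4). The capacity-style names are
# assumptions for illustration only; the text itself only requires
# g(x) = 0 for a fulfilled restriction and g(x) > 0 otherwise.
LAMBDA = 100.0   # penalty weight lambda; has to be fixed by the user

def g(load, capacity):
    """Constraint violation: 0 if the restriction is fulfilled, > 0 otherwise."""
    return max(0.0, load - capacity)

def H_P(load, capacity):
    """Penalty term H_P(x) = lambda * g(x)."""
    return LAMBDA * g(load, capacity)

def penalised_objective(cost, load, capacity):
    """Objective to be minimised: original costs plus virtual costs."""
    return cost + H_P(load, capacity)

# A slightly overloaded truck (weak penalty): the solution is not banned,
# it just becomes more expensive than a valid one.
print(penalised_objective(cost=50.0, load=10.5, capacity=10.0))  # 50 + 100*0.5 = 100
print(penalised_objective(cost=55.0, load=9.0,  capacity=10.0))  # 55 + 0      = 55
```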

A configuration is a possible solution of a problem which does not need to fulfil all restrictions. A configuration is an element of the configuration space, which is formed by all configurations; because of the many degrees of freedom this space is called high dimensional. The configuration space contains elements which do not solve the problem, because they do not fulfil the restrictions. The solution space is the set of all valid combinations of the system parameters: each of its elements solves the problem and fulfils the restrictions. The solution space is a subspace of the configuration space; its elements only differ in quality.

It is common to describe the step from $x$ to $x' = A(x)$ as a move. $A(x)$ is the operator which changes the current configuration and depends on the shape of the underlying configuration space. The number of all moves starting from a solution $x$ is restricted; not every solution $x'$ can be reached from $x$. The possible moves are characterised by $M_x$. In principle these sets can be chosen freely for a given problem, but $x' = A(x)$ should be valid for a move $m \in M_x$. When the sets $M_x$ are given for all $x$ of the solution space $Z$, a concept of neighbourhood can be defined on the set $Z$. Thereby a problem $P$ with the solution space $Z$ shall be given; a small code sketch of these notions follows after the list:

• If $M_x$ is the set of moves which can be executed on $x$ in $Z$, then the neighbourhood of $x$ can be defined as follows:

$$N_M(x) := \{\, x' \in Z \mid \exists\, m \in M_x : x' = A(x) \,\} \qquad (1.5)$$

• The union of all neighbourhoods $N_M(x)$, $x \in Z$, is called the neighbourhood structure $N$.

• If $x' \in N(x) \Leftrightarrow x \in N(x')$ is valid, a symmetric neighbourhood structure is given.

• Let $x, y \in Z$. The sequence of solutions $x_1, \ldots, x_k$ is called a solution path from $x$ to $y$, if the following is valid:

$$x_1 \in N(x), \quad y \in N(x_k) \quad \wedge \quad x_{i+1} \in N(x_i) \;\; \forall\, i = 1, \ldots, k-1 \qquad (1.6)$$

• A neighbourhood structure $N$ is called coherent, if there is a path from $x$ to $y$ for all $x, y \in Z$.
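As announced above, here is a minimal sketch of these notions, assuming swap moves on permutations as the move set $M_x$; this concrete choice and all function names are illustrative, since the text leaves $M_x$ free.

```python
from itertools import combinations

# Sketch of definition (1.5) for an assumed move set M_x: swap moves on
# permutations. The concrete choice of M_x is free, as stated in the text.
def moves(x):
    """M_x: all swap moves (index pairs) that can be executed on the solution x."""
    return list(combinations(range(len(x)), 2))

def apply_move(x, m):
    """A(x): execute the move m = (i, j) on x and return the new solution x'."""
    i, j = m
    y = list(x)
    y[i], y[j] = y[j], y[i]
    return tuple(y)

def neighbourhood(x):
    """N_M(x) = { x' | there is an m in M_x with x' = A(x) }."""
    return {apply_move(x, m) for m in moves(x)}

x = (0, 1, 2, 3)
N = neighbourhood(x)
print(len(N), "neighbours of", x)

# Swap moves yield a symmetric neighbourhood structure: x' in N(x) <=> x in N(x').
assert all(x in neighbourhood(y) for y in N)
```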

If the operator $A$ always produces valid solutions, it generates a solution path starting with $x_0$. Then the operator should find the best $x'$ from $N(x)$:

$$H(x') = \min_{y \in N(x)} H(y) \qquad (1.7)$$

Depending on the neighbourhood structure, $N(x)$ can be very big; this means that the subproblem itself has a large computation time. In such cases, the minimum $x'$ of a subset $\bar{N}(x) \subseteq N(x)$ can be taken as a substitute. Basically, for $|\bar{N}(x)| \geq 2$ the following subsidiary optimisation problem is to be solved:

$$\min \{\, H(y) \mid y \in \bar{N}(x) \subseteq N(x) \,\} \qquad (1.8)$$

Therewith the operator $A(x)$ itself can be formulated as an algorithm: produce a subset $\bar{N}(x)$ of neighbourhood solutions $N(x)$ and find an $x'$ according to (1.7). Concerning the objective function $H$, $x'$ is called a local minimum in the solution space $Z$ and the neighbourhood $N$, if

$$H(x') \leq H(x) \quad \forall\, x \in N(x') \qquad (1.9)$$

With opposite sign, $x'$ would be a local maximum; in both cases it is a local optimum. The position of the local optimum is not only characterised by the objective function and the solution space; the chosen concept of neighbourhood plays an important role as well.
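A rough sketch of the operator $A$ formulated as an algorithm; the bit-flip neighbourhood, the toy objective and the fixed sample size for $\bar{N}(x)$ are assumptions chosen only to make the sketch runnable, and stopping when the sampled subset offers no improvement only approximates condition (1.9).

```python
import random

# Rough sketch of the operator A as an algorithm (cf. (1.7)-(1.9)). All concrete
# ingredients -- the bit-flip moves, the toy objective H and the sample size of
# N_bar(x) -- are assumptions for illustration.
def H(x):
    """Toy objective over bit tuples; replace by the real cost function."""
    return sum(i * b for i, b in enumerate(x, start=1)) + 3 * sum(x)

def neighbourhood(x):
    """N(x): all configurations reachable from x by flipping one bit."""
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

def A(x, sample_size=3, rng=random):
    """One move: pick the best x' from a random subset N_bar(x) of N(x), as in (1.8)."""
    N_bar = rng.sample(neighbourhood(x), k=sample_size)
    return min(N_bar, key=H)

def local_search(x):
    """Walk through the search space until the sampled subset offers no
    improvement (an approximation of the local-minimum condition (1.9))."""
    while True:
        x_new = A(x)
        if H(x_new) >= H(x):
            return x
        x = x_new

print(local_search((1, 1, 0, 1, 0, 1)))
```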

With the concept of neighbourhood the idea of a local and a global minimum (maximum) can be formulated. A solution $x_{min} \in \Gamma$ is a global minimum, if $H(x_{min}) \leq H(x)$ holds for all solutions $x$ in the solution space $\Gamma$. $x_{max} \in \Gamma$ is called a global maximum, if $H(x_{max}) \geq H(x)$ holds for all solutions $x$ in $\Gamma$. A solution $x_{min} \in \Gamma$ is a local minimum, if $H(x_{min}) \leq H(x') \;\; \forall\, x' \in N(x_{min})$. $x_{max} \in \Gamma$ is called a local maximum, if $H(x_{max}) \geq H(x') \;\; \forall\, x' \in N(x_{max})$.
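To make the distinction concrete, a small assumed example with a finite $\Gamma = \{0, \ldots, 9\}$, a fixed value table for $H$ and the neighbourhood $N(x) = \{x-1, x+1\}$ can be enumerated directly; the numbers are purely illustrative.

```python
# Assumed example of local vs. global minima: Gamma = {0, ..., 9}, H given by a
# fixed value table, and the neighbourhood N(x) = {x - 1, x + 1} within Gamma.
H_values = [5, 3, 4, 6, 2, 7, 8, 1, 9, 4]
Gamma = range(len(H_values))

def H(x):
    return H_values[x]

def N(x):
    return [y for y in (x - 1, x + 1) if y in Gamma]

global_min = min(Gamma, key=H)                      # H(x_min) <= H(x) for all x in Gamma
local_minima = [x for x in Gamma if all(H(x) <= H(y) for y in N(x))]
print(global_min, local_minima)   # 7 is the global minimum; 1, 4 and 9 are only local
```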

The structure of the configuration space is independent of the neighbourhood structure. If the different configurations are connected by the neighbourhood structure $N$, the so-called search space $D$ is given. During optimisation one "walks" through the search space step by step. The more moves there are in $D$, the more paths exist between two points of the search space, and thus it is easier to leave local optima on the way to the global optimum.

During optimisation a "walk" is made from one point of the search space to another. If each point of the phase space is assigned its energy $H(x)$, one gets the so-called hill-valley landscape [Mo87] as an illustration of the energy landscape. In Figure 1.4 only two dimensions of the normally high dimensional phase space are represented. For a small number of different moves it is easy to see that mostly just local minima are found and not the global optimum. A great number of moves makes it possible to bypass an energy barrier; the system does not get stuck in a local minimum.

Figure 1.4: Energy landscape
