

2.3 Evolutionary Algorithms

very powerful in their domain; on the other hand, these specialized operators do not allow for use on a wide range of optimization problems.

Algorithm 2.1: Loop of an Evolutionary Strategy

Input: function f : X → Y
Output: x ∈ X such that x is an optimum of f

t = 0;
P_t = initialize(μ, σ, n_σ);
while( !terminated(P_t, t) ) do
    P′_t = ∅;
    for i = 1, . . . , λ = μ·ν do
        P = matingSelection(P_t, ρ);
        I = recombination(P, rec_x, rec_σ);
        I = mutation(I, c_τ);
        P′_t = P′_t ∪ {I};
    P_{t+1} = selection(P_t, P′_t, κ);
    t = t + 1;

In the following, all the genetic operators of the (μ, ν, ρ, κ, n_σ, c_τ, σ, rec_x, rec_σ)-ES illustrated in Figure 2.1 are explained.

Individuals

Individuals are used to represent a potential solution of the problem under consideration. If a function f : X → Y with X ⊆ R^n is to be optimized, the individual contains a vector x ∈ R^n called the object component, which gives a point of the search space. Moreover, additional information is stored in the individual, namely the number of generations the individual has been part of the population in the form of an integer κ̃ (used for selection), the value f(x) giving the fitness of the individual (to omit multiple evaluations and hence to accelerate the optimization), and a strategy component that is realized either as a scalar σ ∈ R_+ or as a vector σ ∈ R^n_+ (used to steer the strength of the mutation). Hence an individual I is given as I = (x, σ, f(x), κ̃).
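
For illustration, such an individual can be written down as a small record type. The following Python sketch is only a possible realization; the field names are illustrative and not taken from the thesis:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Individual:
        x: List[float]                   # object component, a point of the search space
        sigma: List[float]               # strategy component (1 or n step sizes)
        fitness: Optional[float] = None  # cached value f(x), None until evaluated
        age: int = 0                     # kappa~, generations spent in the population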

Initialization

For initialization, different methods were proposed that are surveyed in (Beyer and Schwefel, 2002). In this thesis, especially the following method is appropriate: With a bounded search space, as will be the case in the problems considered here, the population can be initialized uniformly at random within the search space. This is done by assigning to each coordinate i of each individual the value [x]_i = a + U[0,1]·(b−a), where [a, b] is the domain of the i-th coordinate and U[0,1] is a uniformly distributed number in the interval [0, 1]. This initialization procedure has advantages especially on multi-modal optimization problems, where one expects several local optima. By spreading individuals over the whole search space, the probability of placing some individuals in sub-domains from which it is easier to reach the global optimum becomes much higher (Beyer and Schwefel, 2002).

The strategy component giving the step sizes for mutation is usually initialized by assigning the predefined value σ. Alternatively, the strategy component can be initialized with values drawn uniformly from a predefined interval [σ_1, σ_2]. Two different types of strategy component are distinguished by the exogenous parameter n_σ, namely one step size for all dimensions, or alternatively one step size for each dimension, thus n step sizes.
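
A possible realization of both initialization steps, building on the Individual sketch above; the bounds and the σ-interval are illustrative parameters:

    import random

    def initialize(mu, bounds, sigma_interval, n_sigma):
        """Create mu individuals uniformly at random within the given bounds.

        bounds: list of (a, b) pairs, one per coordinate of the search space.
        sigma_interval: (sigma_1, sigma_2) from which step sizes are drawn.
        n_sigma: 1 for a single step size, n for one step size per dimension.
        """
        population = []
        for _ in range(mu):
            # [x]_i = a + U[0,1] * (b - a) for every coordinate i
            x = [a + random.random() * (b - a) for (a, b) in bounds]
            sigma = [random.uniform(*sigma_interval) for _ in range(n_sigma)]
            population.append(Individual(x=x, sigma=sigma))
        return population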

Mating Selection

Mating selection I^μ → I^ρ is performed to choose ρ individuals from the population that are used to generate a new individual. The selection of individuals is based on randomness and requires that individuals having worse fitness do not have a higher probability of being chosen than individuals having better fitness. In the case of evolutionary strategies, mating selection is performed by choosing individuals uniformly at random.
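
In code this amounts to a single uniform draw of ρ parents, as in the following sketch; whether parents are drawn with or without replacement is an implementation choice (here: without):

    import random

    def mating_selection(population, rho):
        # fitness plays no role here: parents are chosen uniformly at random
        return random.sample(population, rho)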

Recombination

Evolutionary strategies distinguish between two recombination operators: intermediate recombination and discrete recombination. The recombination operator maps the selected ρ individuals onto a new individual called the offspring. While the intermediate recombination calculates for a coordinate i the average over the parents' coordinate i, the discrete recombination determines the offspring's coordinate i by choosing coordinate i from a uniformly randomly selected parent. This procedure is repeated for all coordinates, leading to a new individual. Formally, the recombination is hence realized as

[x]_i = (1/ρ) · ∑_{k=1}^{ρ} [x^(k)]_i

in the intermediate case, and as

[x]_i = [x^(U_ρ)]_i

in the discrete case, where x^(j) specifies the object component of the j-th individual and where the function U_ρ returns a uniformly distributed random number from the set {1, . . . , ρ} ⊂ N.

The recombination of the strategy component σ is performed analogously; moreover, κ̃ is set to 0 in the newly generated individual, whereas a fitness evaluation is not performed in this step. The two exogenous parameters rec_x and rec_σ are used to specify the type of recombination for the object and the strategy component.
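
Both variants can be sketched as follows. The function operates on a list of ρ parent coordinate vectors and would be applied to the object and the strategy component alike, with the offspring's age κ̃ reset to 0 afterwards; the names are illustrative:

    import random

    def recombine(parent_vectors, mode="intermediate"):
        """parent_vectors: list of rho coordinate vectors; returns one offspring vector."""
        n = len(parent_vectors[0])
        if mode == "intermediate":
            # coordinate-wise average over all rho parents
            return [sum(p[i] for p in parent_vectors) / len(parent_vectors)
                    for i in range(n)]
        # discrete: every coordinate is copied from a uniformly chosen parent
        return [random.choice(parent_vectors)[i] for i in range(n)]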

Mutation and Self-Adaptation

The recombination operator does not ensure the complete exploration of the search space. This property, however, is a necessary condition to find the global optimum. Therefore, evolutionary strategies use a mutation operator that must be able to reach each point in the search space in finite time. Evolutionary strategies realize this by using the Gaussian distribution. This distribution ensures that each point in the search space can be reached in finite time if numbers drawn from this distribution are added to each coordinate of an individual. Furthermore, it guarantees that mutation is symmetric, unbiased, and scalable (using different standard deviations), which are further conditions mutation has to fulfill. Concretely, the mutation of the object component is defined as

[x]_i = [x]_i + N(0, [σ]_i) = [x]_i + [σ]_i · N(0, 1),

where N(0, σ) is a Gaussian distributed random number with mean 0 and standard deviation σ obtained from the strategy component.
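
As a short sketch, with one step size per coordinate (random.gauss(0, s) corresponds to s · N(0, 1)):

    import random

    def mutate_object(x, sigma):
        # add a Gaussian perturbation N(0, sigma_i) to every coordinate
        return [xi + random.gauss(0.0, si) for xi, si in zip(x, sigma)]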

The step sizes σ are obviously important for the success of a mutation. One can distinguish between two properties: the success rate gives the fraction of successful mutations, and the progress rate expresses the progress towards the optimum. Choosing σ → 0 will, on the one hand, lead to success in 50% of the mutations; the progress towards the optimum, however, tends to zero due to the very small movements in the search space. On the other hand, a large step size will not increase the progress either, since in most cases mutations will be unsuccessful.

This phenomenon obviously depends on the state of the optimization. At the beginning, large step sizes are preferable to explore the search space. At the end of the optimization, however, small step sizes are required to hit the optimum precisely. Hence, a step size adaptation is required.

To realize step size adaptation, different methods were proposed, e.g., starting with a relatively high value and decreasing it with an increasing number of generations, adaptation according to the rate of successfully applied mutations, or self-adaptation techniques (Beyer and Schwefel, 2002). The latter are used in this work and will therefore be introduced in more detail. To realize self-adaptation, the individuals store additional information in the form of the strategy component, which represents the standard deviations used for mutation. Keeping in mind that standard deviations are numbers larger than zero and the properties that mutation has to fulfill (e.g. symmetry), multiplication with a log-normally distributed number is the most appropriate approach to mutate the strategy component. To allow different adaptation rates, an exogenous parameter c_τ is used to define τ = c_τ/√(2n) and the factor τ_0 = exp(c_τ · N(0, 1)/√n), which is drawn once per individual. These two values are used to realize the mutation of the strategy component σ by

[σ]_i = [σ]_i · τ_0 · exp(τ · N(0, 1)).

Since there is no fitness function for the strategy component, it is evaluated indirectly. The idea is firstly to mutate the strategy component and afterwards the object component, using the standard deviations stored in the already mutated strategy component, followed by the evaluation of the resulting individual. The assumption is that the better the strategy component is, the better the object component becomes. Thus, better strategy components will lead to a higher fitness, which in turn leads to a higher probability for the selection of the corresponding individual. Hence, individuals having better strategy components are more likely to be reproduced.
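
The following sketch combines both steps: the step sizes are mutated log-normally first, and the object component is then perturbed with the fresh step sizes. The learning rates follow one plausible reading of the definitions above (τ = c_τ/√(2n), a global factor τ_0 drawn once per individual); other parameterizations are common in the literature.

    import math
    import random

    def mutate(x, sigma, c_tau):
        n = len(x)
        tau = c_tau / math.sqrt(2.0 * n)
        # global log-normal factor, drawn once per individual
        tau_0 = math.exp(c_tau * random.gauss(0.0, 1.0) / math.sqrt(n))
        # 1) self-adapt the step sizes by log-normal multiplication
        new_sigma = [s * tau_0 * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigma]
        # 2) mutate the object component with the already mutated step sizes
        new_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, new_sigma)]
        return new_x, new_sigma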

Selection

The selection operator reduces the surplus of individuals generated by the operators recombination and/or mutation, to ensure that the population P_t contains μ individuals. In this work, the κ-selection is considered. To realize this selection operator, each individual contains an integer κ̃ called age, giving the number of generations it has already existed in the population. Given a set of m individuals, the selection operator chooses the best μ individuals for the next generation that do not exceed an age of κ. This generalized operator allows one to model the comma-selection and the plus-selection as well, since the choices κ = 1 and κ = ∞ realize the former and the latter, respectively. Furthermore, it allows for finding a trade-off between both extremes: in the former case it was observed that an evolutionary strategy does not converge to an optimum, and in the latter case that it is more likely to get stuck in a local optimum.
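
A sketch of the κ-selection on the union of parents and offspring, building on the Individual sketch above; it assumes that fitness is to be maximized and that ages are incremented at selection time, so that κ = 1 indeed keeps only the newly created offspring:

    def kappa_selection(parents, offspring, mu, kappa):
        candidates = parents + offspring
        for ind in candidates:
            ind.age += 1   # every candidate has survived one more generation
        # discard individuals that exceed the maximum age kappa
        eligible = [ind for ind in candidates if ind.age <= kappa]
        # the best mu individuals form the next population
        eligible.sort(key=lambda ind: ind.fitness, reverse=True)
        return eligible[:mu]

With kappa = 1 only the freshly created offspring can survive, which reproduces the comma-selection; with kappa = float('inf') the plus-selection is obtained.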

Termination Criteria

In contrast to classical algorithms, it is hard to determine for evolutionary algorithms whether the global optimum has (at least approximately) been found.

Different techniques exist: Some of them do not use criteria based on the quality of the solution found so far and terminate the evolutionary loop if a certain number of generations or fitness evaluations has been performed, or if a certain amount of time has been used. If it is sufficient to reach a certain quality, the best fitness value found so far can be monitored and the loop is terminated after reaching the specified quality. Criteria that are based on the progress of the optimization often use the convergence velocity, the number of stall generations, or the stall time.

Here, the assumption is that the optimum is probably reached if the best fitness could not be increased for a certain amount of time or number of generations. Accordingly, if the convergence velocity falls below a certain predefined value, it can again be assumed that the optimum has been reached. Using the self-adaptation technique, another interesting termination criterion can be based on the current step size. The step sizes decrease with increasing progress of the optimization. Therefore, the search can be terminated if the step size falls below a certain threshold.
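
A sketch that combines three of the criteria discussed here (generation budget, stall generations, and a step-size threshold); all names and threshold values are illustrative:

    def terminated(generation, max_generations,
                   stall_generations, max_stall,
                   step_sizes, sigma_threshold=1e-8):
        if generation >= max_generations:       # hard budget on the number of generations
            return True
        if stall_generations >= max_stall:      # best fitness has not improved for too long
            return True
        if max(step_sizes) < sigma_threshold:   # self-adapted step sizes have collapsed
            return True
        return False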