
This section describes the second contribution of the author to multi-objective large-scale optimisation, which was proposed in [5]. It is based on the idea of including variable groups in the mutation operators of existing evolutionary algorithms. Since almost all population-based metaheuristics utilise some kind of mutation operator, changing the operator itself to incorporate large-scale concepts is an easy and fast way to enable a large variety of metaheuristics to deal with large-scale problems.

To create special mutation operators that are able to deal with large-scale problems, we include variable groups to determine which variables undergo mutation. For this purpose, the widely used Polynomial Mutation operator [101] serves as the basis, although the concept can be extended to other mutation methods as well, for instance to those for non-real-valued representations.

In the following, we introduce the concept of the well-known Polynomial Mutation operator. Afterwards, we propose three new mutation operators. The first one is called the Linked Polynomial Mutation, which establishes a connection between the mutated variables in each individual in terms of the amount of change. Secondly, the Grouped Polynomial Mutation incorporates the concept of variable groups into the Polynomial Mutation by using groups to decide which variables are mutated. The third proposed operator, the Grouped and Linked Polynomial Mutation Operator, utilises both of the above concepts. The following explanations are based on the initial publication by the author in [5].

5.2. GROUPS IN MUTATION OPERATORS 107

Algorithm 6 Pseudocode of the Polynomial Mutation operator. Pseudocode based on [5]

Input: Solution ~x, Probability p, Distribution Index η
Output: Mutated Solution ~y

1: for i = 1 to the number of variables in ~x do
2:   r ← random(0, 1)
3:   if r < p then
4:     u ← random(0, 1)
5:     if u ≤ 0.5 then
6:       δ1 = (xi − xi,min) / (xi,max − xi,min)
7:       δq = (2u + (1 − 2u)(1 − δ1)^(η+1))^(1/(η+1)) − 1
8:     else
9:       δ2 = (xi,max − xi) / (xi,max − xi,min)
10:      δq = 1 − (2(1 − u) + 2(u − 0.5)(1 − δ2)^(η+1))^(1/(η+1))
11:     end if
12:     yi = xi + δq (xi,max − xi,min)
13:     repair(yi)
14:   else
15:     yi = xi
16:   end if
17: end for
18: return ~y

5.2.1 Polynomial Mutation

The Polynomial Mutation operator was introduced in [101] and has since been a part of many metaheuristic optimisation algorithms, for instance in [11, 32, 108]. It is designed for real-valued variables, and utilises a polynomial distribution around the original value of a variable to sample a new, mutated value. Two versions of this operator were proposed in the literature, called highly disruptive and non-highly disruptive Polynomial Mutation.

The difference between them is the distribution of the possible mutated values. The original, non-highly disruptive mutation has the disadvantage that it becomes ineffective when the variable value gets close to its domain border [109]. For this reason, the new highly disruptive version was proposed in 2008 [102]. In contrast to the original version, the complete domain of each variable is included in the distribution of the operator. A hybrid of both versions was also introduced in [109]. Due to its advantages, the highly disruptive version is used in this thesis. The functionality of this operator is depicted as pseudocode in Algorithm 6 and is described in further detail in the following [5].

Polynomial Mutation has two parameters: the mutation probability p and the distribution index η. For each variable xi (i = 1, ..., n) of a solution ~x, the probability parameter p is used independently to decide whether this variable is subject to mutation. Thus, the expected number of mutated variables in a solution is n·p. In the literature, a value of p = 1/n is often used [11, 32, 108]. As a result, only a single variable per solution is mutated in the expected case.

The second parameter, the distribution index η, influences the distribution from which the new, mutated value is chosen. The distribution index therefore determines how much change the mutation operator produces, and can be used to balance exploitation and exploration in the search process. A larger value of η corresponds to a higher probability that the value drawn from the distribution lies very close to the original xi. In the literature, η is often set to 20.0 [11, 32].

The detailed steps of the operator are as follows. For each variable, it is first decided whether a mutation is applied (Line 3 in Algorithm 6). After that, the operator draws another value u uniformly at random, which is used in the subsequent steps to determine the new value of xi, i.e. the value that is chosen from the distribution defined by η. Depending on the value of u, a new value is drawn from a distribution either on the left side or the right side of the original value of xi (Lines 5-11 in Algorithm 6), and the update of the variable xi is performed (Line 12). In case the drawn, new value for xi exceeds the upper or lower limit of the variable, a repair mechanism is used in Line 13 which sets the value to the respective border [5].
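The steps above can be sketched as a short Python function. This is a minimal illustration of the highly disruptive Polynomial Mutation, not an implementation from [5]; the function name and the list-based solution representation are assumptions made for this sketch.

```python
import random

def polynomial_mutation(x, x_min, x_max, p, eta):
    """Sketch of the highly disruptive Polynomial Mutation (cf. Algorithm 6)."""
    y = list(x)
    for i in range(len(x)):
        if random.random() < p:  # per-variable mutation decision (Line 3)
            u = random.random()  # determines side and amount of change (Line 4)
            span = x_max[i] - x_min[i]
            if u <= 0.5:
                delta1 = (x[i] - x_min[i]) / span
                delta_q = (2*u + (1 - 2*u) * (1 - delta1)**(eta + 1))**(1 / (eta + 1)) - 1
            else:
                delta2 = (x_max[i] - x[i]) / span
                delta_q = 1 - (2*(1 - u) + 2*(u - 0.5) * (1 - delta2)**(eta + 1))**(1 / (eta + 1))
            y[i] = x[i] + delta_q * span          # update (Line 12)
            y[i] = min(max(y[i], x_min[i]), x_max[i])  # repair: clamp to domain (Line 13)
    return y
```

With p = 1/n, on average one variable per call is mutated; larger η keeps the mutated value closer to the original xi.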

In the following, certain changes are made to this operator, and the three proposed mutation operators for large-scale optimisation are described in detail. The newly proposed versions are called Linked Polynomial, Grouped Polynomial and Grouped Linked Polynomial Mutation, where the last of these is the most effective operator according to the original publication and is referred to as the Grouped Linked Mutation Operator (GLMO) in the remainder of the thesis. We explain how the mutation can be equipped with a variable grouping mechanism, and how the amount of change for each variable can be influenced within these groups [5]. The experimental comparison between all the different versions inside various algorithms is carried out in Chapter 6.

5.2.2 Linked Polynomial Mutation

The Linked Polynomial Mutation operator follows essentially the same workflow as the original Polynomial Mutation. The difference lies in the amount of change, which in this operator is not independent between the mutated variables. As we have seen above, the separability of a problem is usually of concern in large-scale optimisation. In a separable problem, all variables can be optimised independently of each other to obtain the optima of the problem. In contrast, in non-separable problems (where interactions between variables exist) the solution quality can be affected in a negative way if changes are made to one variable without a corresponding, suitable change to another, interacting variable.

To address this issue, we propose a version of the Polynomial Mutation operator where the amount of change in each variable is connected to the amount of change in the other mutated variables of the same solution. This “link” between the mutated variables is implemented by influencing the choice of the value u in the operator. In the pseudocode in Algorithm 6, the value for u (the amount of change) in Line 4 is now chosen prior


to the start of the loop in Line 1. In this way, the changes to all mutated variables are determined by a single value for u, which is drawn once and fixed for all variables.

Thus, the same amount of change (relative to the domain) is applied to all variables that are actually subject to mutation (according to the probability p) [5].
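This modification can be sketched as a variant of the plain operator; as before, the function name and representation are illustrative assumptions, not the original code from [5].

```python
import random

def linked_polynomial_mutation(x, x_min, x_max, p, eta):
    """Sketch of the Linked Polynomial Mutation: one shared u links the changes."""
    y = list(x)
    u = random.random()  # drawn ONCE, before the loop, and shared by all mutated variables
    for i in range(len(x)):
        if random.random() < p:  # per-variable mutation decision is unchanged
            span = x_max[i] - x_min[i]
            if u <= 0.5:
                delta1 = (x[i] - x_min[i]) / span
                delta_q = (2*u + (1 - 2*u) * (1 - delta1)**(eta + 1))**(1 / (eta + 1)) - 1
            else:
                delta2 = (x_max[i] - x[i]) / span
                delta_q = 1 - (2*(1 - u) + 2*(u - 0.5) * (1 - delta2)**(eta + 1))**(1 / (eta + 1))
            y[i] = min(max(x[i] + delta_q * span, x_min[i]), x_max[i])
    return y
```

Because u is shared, two mutated variables with the same current value and the same domain receive exactly the same mutated value.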

Using this concept can be beneficial for problems with interacting variables. If mutation is performed on multiple variables of a solution, the link between the variables will result in a similar change in all of them. This can lead to less disturbance in solution quality in case these variables interact with each other. However, this may not always be the case and depends on the specific types of interaction among the variables. Especially since the mutation probability is usually chosen so that the expected number of changed variables is 1 out of n variables, the effect of this mechanism alone, that is, without any variable groups, might be rather limited. The effectiveness of this is examined later in the experiments in Chapter 6.

5.2.3 Grouped Polynomial Mutation

Next, the concept of variable groups is incorporated into the Polynomial Mutation operator. In this version, we assume that a grouping mechanism is used to separate the variables into multiple groups. Then, mutation is only applied to a group as a whole entity, i.e. to each variable in a group at the same time. This is expected to work well in large-scale problems when the groups are chosen in a way that interacting variables are put in the same group [5].

In the Grouped Polynomial Mutation, the decision of which variables are mutated is changed from a random probability per variable to a random choice of which group as a whole undergoes mutation. The underlying assumption is that through a suitable grouping mechanism, interacting variables, which should ideally be altered at the same time, belong to the same group. This is supposed to benefit the search process, especially in non-separable problems.

A difference to the original operator is that the grouped mutation does not require a probability parameter p. Which and how many of the n variables are subject to change is defined by the number of groups (γ), since one of the groups is chosen randomly for the mutation. The operator works as follows, as written in [5]. In a first step, a grouping mechanism is used on the solution ~x to separate the decision variables into γ groups, as described in Section 3.3. This can be done before the actual optimisation starts (in the case of interaction-based and contribution-based groups), or separately and anew each time the mutation operator is used (which might only be feasible with simple grouping methods).

The second step is to select one of the γ groups randomly with equal probability for each group. Then, Polynomial Mutation is applied to every variable in the selected group (Lines 4-13 in Algorithm 6) [5]. More precisely, a separate value for u is chosen and used in the mutation of each variable in the group. No additional probability value is used. Which and how many of the variables are subject to mutation is entirely based on the random selection of one of the groups. All variables belonging to the remaining groups keep their original values. Assuming evenly sized groups, the number of variables subject to change is n/γ, which is in the expected case significantly larger than the number of mutated variables in the original and Linked Polynomial Mutation [5].
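These two steps can be sketched in Python as follows; the representation of groups as lists of variable indices is an assumption made for this illustration.

```python
import random

def grouped_polynomial_mutation(x, x_min, x_max, groups, eta):
    """Sketch of the Grouped Polynomial Mutation: mutate one whole group."""
    y = list(x)
    group = random.choice(groups)  # step 2: pick one of the γ groups, equal probability
    for i in group:                # mutate EVERY variable of the chosen group
        u = random.random()        # independent u per variable, as in the plain operator
        span = x_max[i] - x_min[i]
        if u <= 0.5:
            delta1 = (x[i] - x_min[i]) / span
            delta_q = (2*u + (1 - 2*u) * (1 - delta1)**(eta + 1))**(1 / (eta + 1)) - 1
        else:
            delta2 = (x_max[i] - x[i]) / span
            delta_q = 1 - (2*(1 - u) + 2*(u - 0.5) * (1 - delta2)**(eta + 1))**(1 / (eta + 1))
        y[i] = min(max(x[i] + delta_q * span, x_min[i]), x_max[i])
    return y  # variables of all other groups keep their original values
```

The grouping itself (step 1) is assumed to be supplied by the caller, e.g. precomputed by one of the mechanisms from Section 3.3.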

5.2.4 Grouped and Linked Polynomial Mutation

The Grouped and Linked Polynomial Mutation Operator (GLMO) incorporates both of the concepts from the previous operators. The detailed steps of this mutation operator are shown as pseudocode in Algorithm 7.

The necessary inputs for this operator are a grouping mechanism Γ and a distribution index η. In case the groups are already precomputed by a non-simple method as described above, the groups can also serve directly as an input. Otherwise, the whole mechanism Γ can be used as an input. In case a simple mechanism is used, it can even be beneficial to obtain different groups in each iteration or usage of the operator, as the specific simple groups can be tailored to the specific solution to be mutated (e.g. Ordered Grouping as seen in Section 5.1.6). Similar to the Grouped Polynomial Mutation, no additional mutation probability for the variables is needed. One of the γ groups is chosen at random and all variables in this group are mutated (Lines 1 and 2 in Algorithm 7). In addition, as in the Linked Polynomial Mutation, only one value for u is drawn from the distribution at the beginning of the operator (Line 3). This ensures that all variables in the chosen group are changed equally, i.e. by the same amount relative to their domains.

After choosing one of the groups, Polynomial Mutation is used on all variables in that group using the given value for u (Lines 4 to 14).

As explained, it can be advantageous to the search if interacting variables are changed at the same time, and we further assume that changing them by a similar amount can be useful. This operator combines both of these functionalities. In contrast to the linked version alone, the variables that supposedly interact with each other now belong to the same group. Further, since a larger number of variables is now changed due to the mutation of a whole group, the link between the variables is expected to have an increased effect. The original study by the author in [5] shows that this GLMO version performs better than each of the two proposed changes alone. It also shows that this mutation operator is able to greatly enhance the performance of existing methods on large-scale benchmark problems. The experimental section of this thesis evaluates the performance further by applying it to a larger set of different benchmark functions with different amounts of variables (see Chapter 6).
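Combining the two mechanisms yields the following Python sketch; again, the function name and the index-list representation of groups are assumptions for this illustration, not the original code from [5].

```python
import random

def glmo(x, x_min, x_max, groups, eta):
    """Sketch of the Grouped and Linked Polynomial Mutation (cf. Algorithm 7)."""
    y = list(x)
    group = random.choice(groups)  # pick one of the γ groups uniformly at random
    u = random.random()            # ONE shared u for the whole chosen group
    for i in group:                # mutate every variable of the chosen group
        span = x_max[i] - x_min[i]
        if u <= 0.5:
            delta1 = (x[i] - x_min[i]) / span
            delta_q = (2*u + (1 - 2*u) * (1 - delta1)**(eta + 1))**(1 / (eta + 1)) - 1
        else:
            delta2 = (x_max[i] - x[i]) / span
            delta_q = 1 - (2*(1 - u) + 2*(u - 0.5) * (1 - delta2)**(eta + 1))**(1 / (eta + 1))
        y[i] = min(max(x[i] + delta_q * span, x_min[i]), x_max[i])
    return y  # variables outside the chosen group keep their original values
```

Compared to the grouped variant, the only change is that u is drawn once before the loop, so all variables of the chosen group move by the same relative amount.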


Algorithm 7 Pseudocode of the Grouped and Linked Polynomial Mutation operator. Pseudocode based on [5]

Input: Solution ~x, Grouping Mechanism Γ, Distribution Index η
Output: Mutated Solution ~y

1: {G1, ..., Gγ} ← Apply Γ to ~x, producing γ groups
2: j ← Pick a group index uniformly at random from {1, ..., γ}
3: u ← random(0, 1)
4: for all variables xi with i ∈ Gj do
5:   if u ≤ 0.5 then
6:     δ1 = (xi − xi,min) / (xi,max − xi,min)
7:     δq = (2u + (1 − 2u)(1 − δ1)^(η+1))^(1/(η+1)) − 1
8:   else
9:     δ2 = (xi,max − xi) / (xi,max − xi,min)
10:    δq = 1 − (2(1 − u) + 2(u − 0.5)(1 − δ2)^(η+1))^(1/(η+1))
11:   end if
12:   yi = xi + δq (xi,max − xi,min)
13:   repair(yi)
14: end for
15: for all variables xi with i ∉ Gj do
16:   yi = xi
17: end for
18: return ~y