

Algorithm 7 Pseudocode of the Grouped and Linked Polynomial Mutation operator. Pseudocode based on [5]

Input: Solution ~x, Grouping Mechanism Γ, Distribution Index η
Output: Mutated Solution ~y

1: {G1, ..., Gγ} ← Apply Γ to ~x, producing γ groups
2: j ← Pick a group index uniformly at random from {1, ..., γ}
3: u ← random(0, 1)
4: for all variables xi with i ∈ Gj do
5:   if u ≤ 0.5 then
6:     δ1 = (xi − xi,min) / (xi,max − xi,min)
7:     δq = (2u + (1 − 2u)(1 − δ1)^(η+1))^(1/(η+1)) − 1
8:   else
9:     δ2 = (xi,max − xi) / (xi,max − xi,min)
10:    δq = 1 − (2(1 − u) + 2(u − 0.5)(1 − δ2)^(η+1))^(1/(η+1))
11:  end if
12:  yi = xi + δq · (xi,max − xi,min)
13:  repair(yi)
14: end for
15: for all variables xi with i ∉ Gj do
16:  yi = xi
17: end for
18: return ~y

values for their convergence-related decision variables [44, 28]. These similar properties can be approximated by an optimisation algorithm, and the knowledge that certain values of certain variables are beneficial for the overall quality of solutions is implicitly encoded in the population of the algorithm.

In this LCSA approach, the aim is to make use of this inherent information in the population. The method is based on the assumption that, at each generation of the EA, the current population's members contain the information about which (sub-)vector space of the n-dimensional search space contains the (at that point in time) best or most promising solutions [7]. To utilise this concept and to search for solutions in this new subspace, a population of coefficient vectors is formed and used to create linear combinations of solutions. Optimising these coefficients with a metaheuristic can then help to increase the exploration and exploitation of the search space in areas that likely contain further improved solutions. Through such combinations of population members, the resulting EA aims to improve the search process of multi- and many-objective optimisation algorithms in terms of solution quality.

The idea of extracting knowledge from the search process can be seen as related to the concept of “innovisation” from the literature ([111, 112]), specifically online innovisation.

This concept aims to identify relevant information about the problem from the obtained solutions at runtime of the optimisation, and ideally feed it back to the optimisation process to further improve the search.

An advantage of this approach is the fact that population sizes are usually smaller than the number of decision variables (in large-scale optimisation). Therefore, even if all population members are used in the linear combinations, the proposed method is able to provide a reduction of dimensionality for large search spaces. In the following, the concept is introduced formally and the generic algorithm structure is presented. Like the other proposed approaches in this thesis, the LCSA can be used with arbitrary population-based metaheuristics, and the evaluation of the approach on large-scale and other problems with various algorithms is performed in Chapter 6.

5.3.1 Concept of Linear Combinations of Solutions

Suppose we have a real-valued optimisation problem as shown in Eq. (2.1), containing n decision variables and m objectives. As written in [7], let the population of an algorithm be P and its size be s := |P|. At each given time of the optimisation process the population consists of s solution vectors, each of dimensionality n: P = {~x^(1), ~x^(2), ..., ~x^(s)}. Each solution is a vector ∈ R^n:

~x^(i) = (x_1^(i)  x_2^(i)  ...  x_n^(i))        (5.10)


The set P can be used to define a vector (sub)space, using the population members to span this space. The dimensionality of this space is given by the rank of the matrix of the spanning vectors [7]. Using the variables of the population members, we can create the matrix X̂ ∈ R^(s×n), where each row contains one of the solutions in P.

X̂ = [ x_1^(1)  x_2^(1)  ···  x_n^(1)
      x_1^(2)  x_2^(2)  ···  x_n^(2)
        ⋮        ⋮      ⋱      ⋮
      x_1^(s)  x_2^(s)  ···  x_n^(s) ]        (5.11)

In an EA, new solutions for an optimisation problem are usually created by using recombination operators like, for instance, the arithmetic or simulated binary crossovers.

On the other hand, multiple solutions of the population can also be combined linearly instead. The focus in the following lies on general linear combinations, although it is also possible to use more restricted classes of combinations, for instance convex or conical combinations. We define a linear combination of the solutions as follows.

~x' = ~y X̂ = y_1 ~x^(1) + y_2 ~x^(2) + ... + y_s ~x^(s)        (5.12)

where the vector ~y contains the combination coefficients:

~y = (y_1  y_2  ...  y_s)        (5.13)

Using this concept, we can employ a metaheuristic algorithm to perform a search for better solutions from combinations of the existing ones. This new search space, which is spanned by the s vectors in P, has a dimensionality equal to the rank of the matrix X̂, i.e. 1 ≤ rank(X̂) ≤ min{n, s}.
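As a small illustration of Eqs. (5.11) to (5.13), the following numpy sketch, with hypothetical values for s and n, builds X̂ from a population, checks the rank bound, and forms one linear combination:

```python
import numpy as np

rng = np.random.default_rng(1)
s, n = 4, 6                                  # hypothetical population size and problem dimension
P = [rng.random(n) for _ in range(s)]        # population members x^(1), ..., x^(s)

X_hat = np.vstack(P)                         # Eq. (5.11): one solution per row, shape (s, n)
print(np.linalg.matrix_rank(X_hat))          # dimensionality of the spanned space, <= min(n, s)

y = np.array([0.5, 0.2, -0.1, 0.4])          # coefficient vector ~y as in Eq. (5.13)
x_new = y @ X_hat                            # Eq. (5.12): linear combination of all members
```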

Since this method combines all of the s existing solutions, it can also be seen as a variant of an s-parent crossover. However, in a multi-parent crossover new solutions are typically created through random combinations, while the environmental selection process of the EA is responsible for deciding whether the produced solution is an improvement. In contrast, our proposed approach uses an evolutionary process internally to find "optimal" or improving combinations, i.e. the parameters for the combination are not chosen randomly, but are subject to an optimisation process.

In the LCSA method, instead of optimising the original variables of the problem, the values of the vector ~y are optimised to search for promising linear combinations. Thus, assuming that all s population members are used in the linear combinations, the dimensionality of the new optimisation problem is reduced to s decision variables as opposed to the n variables of the original problem. The effects this procedure has on the search can vary depending on the dimensionality of the problem.

In the following example, as described in [7], consider a 30-dimensional problem that is optimised using a population size of s = 100. In this case, searching for optimal linear combination coefficients results in a search in a 100-dimensional space. The solutions created by the combinations, however, still remain in the original, 30-dimensional space, and thus the formed vector space contains redundancy, since not all the vectors used for the combinations can be linearly independent. On the other hand, if the same mechanism is used in a high-dimensional problem, for instance with n = 1000 variables, we can observe a different effect. The s = 100 population members can at most define a 100-dimensional subspace of the original search space. Thus, the optimisation algorithm searches in an at most 100-dimensional subspace of the 1000-dimensional original search space. In this case, the optimisation in the space of linear combinations serves as a dimensionality reduction technique. If this technique is used at the beginning of the search, the whole population still consists of mostly randomly created solutions, and it is not guaranteed that good solutions actually lie in the defined subspace. However, after the optimisation has progressed for a certain time, we can assume that the population has started to converge towards promising areas of the search space. Thus, the spanned vector space, i.e. the combinations of the current variable values, may contain additional promising solutions which can help to approximate the Pareto-optimal areas.
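The two regimes of this example can also be checked numerically; a short sketch with randomly generated populations:

```python
import numpy as np

rng = np.random.default_rng(0)

# s = 100 members, n = 30 variables: the rank is at most 30, so the 100 coefficient
# dimensions contain redundancy.
print(np.linalg.matrix_rank(rng.random((100, 30))))    # typically 30

# s = 100 members, n = 1000 variables: the combinations can only reach an at most
# 100-dimensional subspace, i.e. the search space is effectively reduced.
print(np.linalg.matrix_rank(rng.random((100, 1000))))  # typically 100
```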

This form of dimensionality reduction makes the LCSA suitable for large-scale problems, and through the exploration of subspaces, we can also expect a good performance in multi- and many-objective optimisation in terms of diversity of solutions. A possible drawback is, of course, that the lower-dimensional space might not contain the actual Pareto-optimal solutions, and therefore optimising only the linear combinations might make it impossible for the EA to find these optimal regions of the original search space. To counter this risk, in the algorithm structure described in the following subsection, the optimisation of the original problem and the optimisation of the linear combinations take turns to harvest the best of both search spaces.

5.3.2 Algorithm Structure of the LCSA

The proposed concept of the LCSA can be used inside arbitrary metaheuristic optimisation algorithms. In the following, the algorithm structure is described as written in the author's contribution in [7]. We define a population Q of ~y-vectors, where each vector in the population defines one linear combination of the members of P as described above. Thus, any optimisation algorithm can be used on this newly formed population to search for promising linear combinations of the underlying original solutions. This optimisation of the population Q, i.e. the search in a promising subspace of Ω defined by P, is


Algorithm 8 Linear Combination-based Search Algorithm LCSA

Input: Problem Z, Optimisation algorithm A
Output: Solution population P

1: P ← Random initial population for Z
2: while termination criterion not met do
3:   for i = 1 to iter1 do
4:     P ← Perform one iteration of A on population P
5:   end for
6:   X̂ ← Decision variable values of the current population P
7:   Q ← Random initial population of linear-combination-vectors
8:   for i = 1 to iter2 do
9:     Q ← Perform one iteration of A on population Q
10:    P ← EnvironmentalSelection(P ∪ Q)
11:  end for
12: end while
13: return P

employed inside other, existing metaheuristic algorithms as an additional search step.

More precisely, the search mechanism of the original (arbitrary) metaheuristic is used in turns with the proposed linear combination-based search [7].

The following mathematical description is identical to the one given in the earlier publication by the author in [7]. Let X̂ be the matrix of the decision variable values of all solutions in P as seen above, where each row in X̂ corresponds to one solution in P. As a result, X̂ is an s×n matrix, where s is the number of solutions in P. In the same way, let Ŷ be the matrix of the decision variable values (i.e. coefficients of linear combinations) of the solutions in Q. The population size of Q is t, therefore Ŷ ∈ R^(t×s). The original objective function evaluation can be applied to the new population by simply multiplying Ŷ with X̂ and computing ~f(Ŷ X̂), i.e. applying ~f to each row in Ŷ X̂. For practical reasons and to limit the search space of the new problem, we also define lower and upper bounds for the variables yi, i.e. yi ∈ [yi,min, yi,max], i = 1, ..., s.
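A minimal sketch of this evaluation step, assuming the original objectives are available as a callable objectives(x) that returns the m objective values:

```python
import numpy as np

def evaluate_coefficient_population(Y_hat, X_hat, objectives):
    """Map each coefficient vector (row of Y_hat, shape (t, s)) back into the
    original decision space via the product Y_hat @ X_hat (shape (t, n)) and
    evaluate it with the original objective functions."""
    candidates = Y_hat @ X_hat
    return np.array([objectives(x) for x in candidates])   # shape (t, m)
```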

The resulting LCSA optimisation approach works as follows, and is shown in Algorithm 8.

In the main loop of the algorithm, the population P of the original problem is optimised with a multi-objective algorithm A for a specified number iter1 of iterations (Lines 3 to 5 in Algorithm 8). Then, we build the matrix X̂ out of the decision variables' values of the current population P. To start the linear combination-based search phase, a random population Q of linear-combination-vectors is created (Line 7). The algorithm then optimises Q for a certain number of iterations iter2 using the same optimisation algorithm A (Lines 8 to 11). During this step, all evaluated solutions are also used to update the original population P, using the environmental selection method of A. As a result, we obtain an updated population P for the next iteration of the main loop.
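The alternation of the two phases can be sketched as follows. This is a structural sketch rather than a definitive implementation: solutions are treated as plain numpy vectors, and random_population, evaluate, iterate, and select are assumed interfaces for the problem Z and the metaheuristic A.

```python
import numpy as np

def lcsa(random_population, evaluate, iterate, select,
         n_outer, iter1, iter2, t, y_low, y_high, seed=0):
    """Structural sketch of Algorithm 8. iterate(pop, f) performs one iteration of A
    under objective function f; select(pop, size) is the environmental selection of A."""
    rng = np.random.default_rng(seed)
    P = random_population()                                     # Line 1: random initial population
    for _ in range(n_outer):                                    # Line 2: until termination
        for _ in range(iter1):                                  # Lines 3-5: search in the original space
            P = iterate(P, evaluate)
        X_hat = np.vstack(P)                                    # Line 6: s x n matrix of current variables
        s = X_hat.shape[0]
        Q = [rng.uniform(y_low, y_high, s) for _ in range(t)]   # Line 7: random coefficient vectors
        evaluate_y = lambda y_vec: evaluate(y_vec @ X_hat)      # evaluate a combination in the original space
        for _ in range(iter2):                                  # Lines 8-11: search in coefficient space
            Q = iterate(Q, evaluate_y)
            P = select(P + [y_vec @ X_hat for y_vec in Q], len(P))   # Line 10: selection on P ∪ Q
    return P
```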

5.3.3 Discussion and Modifications of the LCSA

The LCSA switches between two optimisation phases, which optimise either the original problem or the coefficients of the linear combinations. The resulting algorithm is able to explore the original search space frequently during its runtime, while at the same time giving increased attention to the exploitation of promising subspaces. If the upper and lower bounds of the coefficients (yi ∈ [yi,min, yi,max]) are limited to lie between zero and one, the search can only exploit the convex combinations of the existing solutions, i.e. the "inner" area of the simplex spanned by the (non-dominated) population members. The experiments in the previous publication [7] and the experiments conducted in this thesis, however, make use of larger domains for these coefficients, which allows extrapolation and therefore further enhances the exploration of the algorithm.

Previous results in [7] showed that this method was able to increase the performance in 60 problem instances from different benchmark families, both in many-objective and in large-scale problems. The LCSA was able to increase the solution quality for the two standard algorithms NSGA-II and GDE3, and even improved the performance of two many-objective algorithms: NSGA-III and RVEA.

Even though these results showed the potential of the linear combinations, the original version had the drawback that it used the NSGA-II optimiser exclusively for the optimisation of the coefficients ~y. This use of NSGA-II can be a disadvantage when optimising a many-objective problem. To further harvest the advantages of existing methods, the version proposed in this dissertation thesis uses the employed metaheuristic in both the original search space and the reduced space of the combination coefficients. This means, if SMPSO is used with this method, SMPSO is used to search in both spaces, in contrast to the version in [7]. In this way, using a many-objective algorithm within the LCSA framework is expected to deal well with many-objective large-scale problems as well.

A further modification to concentrate on promising solutions is to use only the non-dominated solutions in the population for the linear combinations instead of the whole population. From a theoretical point of view, this can help to achieve a faster convergence, since the algorithm concentrates more on the "best" subspace spanned by the first front. In case the first front is significantly smaller than the population size, this measure also reduces the number of decision variables of the linear-coefficient problem, as fewer coefficients are necessary. On the other hand, focussing only on the non-dominated solutions can also be disadvantageous to the overall search. For instance, it can lead to a reduced exploration of the search space, especially in the early stages of the optimisation. Moreover, it is known that Pareto-dominance is not a well-performing concept in many-objective problems, and limiting the linear combinations to those non-dominated solutions can affect the diversity in the objective space negatively. For these reasons, the version in this thesis uses the whole population for the linear combinations.
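If this modification were used, restricting the combinations to the first front would simply amount to filtering P before building X̂. A minimal sketch of such a filter, assuming minimisation and a hypothetical list objective_values holding the population's objective vectors:

```python
import numpy as np

def non_dominated_indices(objective_values):
    """Return the indices of solutions that are not Pareto-dominated by any other solution."""
    F = np.asarray(objective_values)
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Building the combination matrix only from the first front instead of the whole population:
# X_hat = np.vstack([P[i] for i in non_dominated_indices(objective_values)])
```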
