

5.1.4 The WOF Algorithm

of the transformed problem in this case can lead to fast convergence, especially in the case of Fig. 5.5d, but the diversity of the solutions may suffer. This effect is caused by the structure of the ZDT1 problem, since the diversity of the solution set is determined only by the first decision variable x_1, and the transformation function used in this case was the multiplication described above. Since the lower and upper bounds of the weights w_j were set to 0.0 and 2.0 respectively, the transformed problem cannot reach certain parts of the search space, as it can only alter the variables within a certain interval based on the values in ~x'. As mentioned above, this emphasises that transformation functions other than simple multiplication may be more suitable for the optimisation of the transformed problem.

We can conclude from these three examples that, if appropriate choices for the groups, the transformation function and ~x' are made, the proposed transformation of the problem can accelerate the optimisation process in terms of both diversity and convergence.

It is, however, important not to sacrifice diversity in favour of fast convergence, as in the case of Fig. 5.5d. In the following, these observations are used to propose the optimisation strategy of the WOF for large-scale problems.


Figure 5.6: Outline of the Weighted Optimisation Framework. Graphic based on [1].

The inputs for the WOF algorithm are an optimisation problem Z, a population-based metaheuristic A, a grouping mechanism Γ (see Section 2.4) and a transformation function ψ (see Section 5.1.1). First, the population of the algorithm is initialised with a random population for the problem Z. In the main loop of the algorithm (Lines 3 to 10 in Algorithm 3), the two different optimisation phases are carried out. In the first phase, the original problem Z is optimised using algorithm A for a predefined number of t_1 function evaluations (Line 4 in Algorithm 3).

Algorithm 3 WOF(Z, A, Γ, ψ) - Pseudocode based on [1].

Input: Problem Z, Optimisation Algorithm A, Grouping Mechanism Γ, Transformation Function ψ

Output: Solution population S

1: Initialisation

2: S ← Random initial population for Z

3: repeat

4: S ← A(Z, S, t_1) // Optimise Z with Algorithm A for t_1 evaluations, using S as a starting population.

5: {~x'_1, .., ~x'_q} ← Selection of q pivot solutions from S

6: for k = 1 to q do

7: W_k ← WeightingOptimisation(~x'_k, Z, A, Γ, ψ) // Algorithm 4

8: end for

9: S ← UpdatePopulation(W_1, .., W_q, ~x'_1, .., ~x'_q, S) // Algorithm 5

10: until δ · (total #evaluations) used

11: repeat

12: S ← A(Z, S, t_1) // Optimise Z with Algorithm A for t_1 evaluations, using S as a starting population.

13: until total #evaluations used

14: return FirstNonDominatedFront(S)

Next, the weighting optimisation (Lines 6 to 8 in Algorithm 3) performs the transformation and optimisation of the weights q times. For this, q different pivot solutions ~x'_k (k = 1, 2, ..., q) are drawn from the current population (Line 5) based on a selection mechanism.

For every one of them, the transformed optimisation is carried out as shown in Algorithm 4.

A suitable choice of the pivots helps in this step to preserve diversity in the population.

Using the problem transformation mechanism described above, multiple transformed problems Z_{~x'_k} are created. At this point, we make use of a transformation function, as shown in Definition 5.1. In our implementation, the same function ψ is used for all transformed problems, although in principle a different function could be used for each problem.
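To illustrate how such a transformed problem can be evaluated, the following minimal Python sketch maps a weight vector of length γ back to a full n-dimensional solution using the pivot ~x'_k, a variable grouping and the multiplication transformation described above. The function names, the callable problem interface and the clipping to the original variable bounds are assumptions made for this sketch and not part of the formal definition.

```python
import numpy as np

def evaluate_transformed(weights, pivot, groups, problem, lower, upper):
    """Evaluate a weight vector on a transformed problem Z_{x'_k}.

    weights : one weight per variable group (length gamma)
    pivot   : the pivot solution x'_k (length n)
    groups  : list of index lists, one per group (produced by the grouping Gamma)
    problem : callable that returns the objective vector of Z for an n-dimensional solution
    lower, upper : original variable bounds; clipping to them is an assumption of this sketch
    """
    x = np.array(pivot, dtype=float)
    for w, idx in zip(weights, groups):
        # multiplication transformation: every variable in the group is scaled by its weight
        x[idx] = w * x[idx]
    x = np.clip(x, lower, upper)   # keep the constructed solution inside the box constraints
    return problem(x)              # objective values of the original problem Z
```

A weight vector consisting only of ones reproduces the pivot solution exactly, whereas other weight vectors move whole groups of variables simultaneously.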

Next, a population W_k of randomly created solutions (weights) for each of these Z_{~x'_k} is optimised using the optimisation algorithm A. As a result of this step, we obtain a population W_k of weights whose objective function values are optimised based on the values of the originally chosen solution ~x'_k, i.e. the population W_k contains the best solutions found in the subspace defined by the transformation. This process of problem transformation and optimisation is carried out q times (once for each of the q pivot solutions), resulting in a set of q weight populations {W_1, ..., W_q} [1].

The last step is to merge the original population and the newly obtained weight populations to perform an overall environmental selection (Line 9 in Algorithm 3). The weight populations {W_1, ..., W_q} each contain weight vectors which are optimised based on one of the solutions of the original population, i.e. the chosen ~x'_k. If the population size of each of these W_k is c, and the original solution population S has a size of s individuals, combining all weight vectors of all populations with each solution of S would construct q·c·s new solution candidates for the original problem. This results in a large number of additional function evaluations needed to evaluate these new solutions.

To reduce this computational overhead, different strategies can be used. The original version of WOF in [4, 1] picked only one solution from each W_k (the one with the largest Crowding Distance in the first non-dominated front [1]). This weight vector was used to create one new solution from each of the solutions in S, resulting in q·s new solutions. In contrast, a modified approach introduced in [6] uses q selected weight vectors from each W_k and, in addition, combines each ~x'_k with all vectors in the respective W_k. This results in a total of q·q·s + q·c new function evaluations for the environmental selection step, but shows a better exploitation of the information contained in the W_k. This second, modified version of the merging step is used in this thesis and is shown in detail in Algorithm 5, Lines 2 to 9.
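For instance, with q = 3 pivot solutions, a weight population size of c = 10 and a solution population size of s = 100, the modified merging step would require 3·3·100 + 3·10 = 930 additional evaluations, whereas combining every weight vector with every solution would require 3·10·100 = 3000 (these numbers serve only as an illustration).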

The new solutions obtained in the sets S'_k for k = 1, .., q are then combined with the population S (Line 10 in Algorithm 5). In the next step, duplicate solutions are removed from this union to prevent negative effects on the population’s diversity (Line 11).

This elimination is based on the values in the objective space, i.e. if two solutions have the same objective function values, they are considered duplicates and one of them is removed. Note that this elimination can also be done using the decision variable values. This, however, increases the runtime of the algorithm, since the comparison has to be done with the high-dimensional decision variable vectors.


Algorithm 4 WeightingOptimisation(~x'_k, Z, A, Γ, ψ) - Pseudocode based on [1].

Input: Solution ~x'_k, Problem Z, Optimisation Algorithm A, Grouping Mechanism Γ, Transformation Function ψ

Output: Population of weights W_k

1: Initialisation

2: Divide the n variables into γ groups using Γ

3: Z_{~x'_k} ← Build a transformed problem with γ decision variables (weights) from Z, ~x'_k, Γ and ψ

4: W_k ← Random population of weights for Z_{~x'_k}

5: W_k ← A(Z_{~x'_k}, W_k, t_2) // Optimise Z_{~x'_k} with Algorithm A for t_2 evaluations, using W_k as a starting population.

6: return W_k
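Line 2 of Algorithm 4 only assumes that some grouping mechanism Γ partitions the n variables into γ groups. As one simple illustration (not necessarily the grouping used in the experiments), a linear grouping that splits the variable indices into contiguous blocks of roughly equal size could look as follows in Python:

```python
def linear_grouping(n, gamma):
    """Split the variable indices 0..n-1 into gamma contiguous groups of (almost) equal size.

    This is only one simple example of a grouping mechanism Gamma; WOF can be
    combined with any of the grouping strategies discussed in Section 2.4.
    """
    groups, start = [], 0
    for k in range(gamma):
        size = n // gamma + (1 if k < n % gamma else 0)  # distribute the remainder over the first groups
        groups.append(list(range(start, start + size)))
        start += size
    return groups

# Example: linear_grouping(10, 3) -> [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```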

On the other hand, an elimination based on the decision variables can be beneficial for multi-modal problems, where multiple areas of the search space map to the same objective function values. The concept of multi-modal multi-objective optimisation has recently drawn attention in the literature [103, 104, 105, 106, 107], and changing the elimination of duplicates in WOF is a possible modification for future multi-modal large-scale research.
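A minimal Python sketch of the objective-space duplicate elimination used in Line 11 of Algorithm 5 is given below; the rounding tolerance is an assumption of this sketch, introduced only to make floating-point comparisons robust.

```python
def remove_duplicate_objectives(population, objectives, decimals=12):
    """Keep one solution per distinct objective vector (duplicates in objective space).

    population : list of decision vectors
    objectives : list of the corresponding objective vectors
    decimals   : rounding used for the comparison (an assumption of this sketch)
    """
    seen, unique = set(), []
    for x, f in zip(population, objectives):
        key = tuple(round(v, decimals) for v in f)  # compare objective values, not decision variables
        if key not in seen:
            seen.add(key)
            unique.append(x)
    return unique
```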

Next, in case the newly formed solution set contains fewer than s solutions, additional solutions are created with genetic operators to fill the population (Lines 12 to 14 in Algorithm 5). As a last step, an environmental selection in the form of non-dominated sorting is carried out. This helps to eliminate inferior solutions which may have been produced in the weight optimisation, for instance due to a suboptimal selection of a pivot solution.
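Since the environmental selection relies on non-dominated sorting, a small sketch of how the first non-dominated front (as returned in Line 14 of Algorithm 3) can be extracted may be helpful. Minimisation of all objectives is assumed, and the quadratic scan shown here is only illustrative; the full sorting in Line 15 of Algorithm 5 assigns every solution to a front.

```python
def dominates(f, g):
    """True if objective vector f Pareto-dominates g (all objectives are minimised)."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def first_non_dominated_front(objectives):
    """Indices of all solutions that are not dominated by any other solution."""
    return [i for i, f in enumerate(objectives)
            if not any(dominates(g, f) for j, g in enumerate(objectives) if j != i)]
```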

After the selection process, the main loop of the algorithm starts from the beginning with a normal optimisation step to alter the variable values independently of each other.

The alternation between optimising the original and the transformed problems (Lines 3 to 10 in Algorithm 3) is repeated until a certain number of function evaluations has been used up.

Similar to the strategy used in MOEA/DVA, the performance of WOF is improved if a certain amount of computation is used for a final so-called “uniformity” optimisation.

Since the weights of WOF usually lead to large “jumps” in the search space, and do not allow for independent alteration of the variables, a normal optimisation with the used metaheuristic has been shown to work better towards the terminal phase of the search. In this stage, the optimal values of the decision variables can be approximated better with traditional evolutionary operators that do not rely on variable groups and allow for independent changes in each variable [1]. To control at which point the optimisation of weights is stopped, WOF contains a parameter δ ∈ [0, 1], which defines the share of the total function evaluations that is spent on the first phase (Line 10 in Algorithm 3).
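With δ = 0.5 and an overall budget of 100,000 function evaluations, for example, the alternating scheme of Lines 3 to 10 would be executed until roughly 50,000 evaluations have been consumed, and the remaining 50,000 evaluations would be spent on the final normal optimisation of Z (the numbers serve only as an illustration).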

Algorithm 5 UpdatePopulation(W_1, .., W_q, ~x'_1, .., ~x'_q, S)

Input: Weight populations W_1, .., W_q, Pivot solutions ~x'_1, .., ~x'_q, Solution population S

Output: Solution population S

1: s ← |S|

2: for k = 1 to q do

3: S'_k ← ∅

4: {~w_k^(1), .., ~w_k^(q)} ← Selection of q individuals from W_k

5: for r = 1 to q do

6: S'_k ← S'_k ∪ {Apply ~w_k^(r) to each solution in population S}

7: end for

8: S'_k ← S'_k ∪ {Apply each individual in W_k to the solution ~x'_k}

9: end for

10: S ← S ∪ S'_1 ∪ .. ∪ S'_q

11: Eliminate duplicate solutions from S

12: if |S| < s then

13: Fill S by applying genetic operators to solutions from S until the population size s is reached

14: end if

15: S ← Perform non-dominated sorting on S

16: return S
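To make the candidate construction of Lines 2 to 9 more tangible, the following Python sketch builds the sets S'_k under the assumption that the transformation is the groupwise multiplication from Section 5.1.1 and that the q weight vectors per W_k have already been selected; all names and interfaces are illustrative rather than part of the reference implementation.

```python
import numpy as np

def build_candidates(weight_pops, selected_weights, pivots, population, groups):
    """Sketch of Lines 2-9 of Algorithm 5: construct the new candidate solutions S'_k.

    weight_pops      : list of q weight populations W_k, each an array of shape (c, gamma)
    selected_weights : list of q arrays holding the q weight vectors chosen from each W_k
    pivots           : the q pivot solutions x'_k, each of length n
    population       : the current solution population S, shape (s, n)
    groups           : list of index lists produced by the grouping Gamma
    """
    def apply_weights(w, x):
        y = np.array(x, dtype=float)
        for wj, idx in zip(w, groups):
            y[idx] = wj * y[idx]       # groupwise multiplication transformation
        return y

    candidates = []
    for W_k, chosen, pivot in zip(weight_pops, selected_weights, pivots):
        for w in chosen:               # each selected weight vector applied to every solution in S
            candidates.extend(apply_weights(w, x) for x in population)
        for w in W_k:                  # every weight vector of W_k applied to the pivot x'_k
            candidates.append(apply_weights(w, pivot))
    return candidates                  # q*q*s + q*c new solutions in total
```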