
5.4. CLASSIFICATION OF PROPOSED METHODS 117

Building Blocks

Random Grouping | Interaction-based Grouping | Contribution-based Grouping | CC-based optimisation of large-scale problem | CC-based optimisation of convergence-variables | Optimise large-scale problem | Optimise diversity-variables | Indicator-based optimisation | Indicator-based local search | Optimise transformed problem | Optimisation of a single group of variables | Update global archive | Create independent populations | Convergence detection | Problem transformation

WOF (X) X X X

GLMO (X) X

LCSA X X X

Table 5.1: Building blocks from the related works which are present in the proposed large-scale algorithms.

solutions and variable groups to transform the problem. Since in both WOF and LCSA the new, lower-dimensional problem is created by introducing new variables that span a new search space inside the large-scale problem, both can be regarded as transformation-based approaches in the classification of dimensionality reduction (category 2). GLMO, in contrast, can be seen as part of the coevolution-based group of algorithms, since its mutation operator changes variables only in certain groups while leaving the other groups fixed. The crossover or PSO movements (depending on the metaheuristic used), however, are always performed on the original large-scale problem. Although other approaches like MOEA/DVA also apply optimisation to the original large-scale problem, the related methods usually separate the CC-based optimisation and the large-scale optimisation temporally, i.e. the algorithm applies one of these approaches at a time, one after the other. This means that whenever CC-based optimisation is used, the whole evolutionary process using crossover and mutation is applied to only a single variable group. In GLMO, no such division takes place, and only the original problem is optimised. As a result, GLMO falls into the "no reduction at all" category of dimensionality reduction (category 3). If we look again at Table 4.2, we see that the only other method in the literature so far which does not use dimensionality reduction is the DLS-MOEA. Since GLMO now also falls into that category, even with the proposed methods included only 2 out of 15 methods make use of this strategy.
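The transformation-based reduction used by WOF can be illustrated with a small sketch: a low-dimensional weight vector is mapped back onto a full-length solution via a pivot solution and variable groups, and the original objectives are evaluated on the result. The function names, the toy problem, and the group-wise scaling rule below are illustrative assumptions, not the exact WOF or LCSA transformation functions.

```python
import numpy as np

def make_transformed_problem(pivot, groups, original_objectives):
    """Build a low-dimensional surrogate problem from a pivot solution.

    A candidate weight vector w (one weight per variable group) is mapped
    back to the original n-dimensional space by scaling the pivot's
    variables group-wise; the original objectives are then evaluated on
    that full-length solution.  (Illustrative sketch only.)
    """
    def transformed_objectives(w):
        x = pivot.copy()
        for weight, idx in zip(w, groups):
            x[idx] = weight * pivot[idx]          # group-wise scaling
        return original_objectives(x)
    return transformed_objectives

# toy bi-objective problem on 10 variables
f = lambda x: (float(np.sum(x**2)), float(np.sum((x - 1.0)**2)))
pivot = np.full(10, 0.5)
groups = [np.arange(0, 5), np.arange(5, 10)]      # two groups of 5 variables

g = make_transformed_problem(pivot, groups, f)
print(g(np.array([1.0, 1.0])))  # w = (1, 1) reproduces the pivot's objectives
```

Any optimiser working on `g` now searches a 2-dimensional space, while every evaluation still happens on the full 10-dimensional problem.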

Diversity Management

Next, we take a look at the ways in which the three methods manage diversity. As described above and in the original publications, the WOF method, applied to a certain pivot solution, usually leads to fast convergence towards some optimal


                           Dimensionality        Diversity Management
                           Reduction Category    Category
Year         Algorithm     1    2    3           1-1  1-2  2    3    4     Many-objective   Parallel

2016 [4] [1] WOF                X                                    X     X                (X)

2016 [5]     GLMO                    X                     X               X

2019 [7]     LCSA               X                          X               X

Table 5.2: Classification of the proposed large-scale methods. WOF is the only method in the new category 4 of diversity management, which is not present in the related literature.

solutions. The drawback of converging too fast to only one region of the Pareto-front is the risk of losing diversity. Therefore, WOF is designed to achieve diversity by using multiple pivot solutions x'_k in each of the weighting optimisation steps, in order to obtain convergence to different parts of the Pareto-front and thereby achieve and maintain diversity. The mechanism that allows WOF to balance between diversity and convergence is the selection of the pivot solutions and the transformation of the problem multiple times with these solutions before merging the populations. This method of achieving diversity is not used in any of the related works, and it differs from the main categories and subcategories introduced in Chapter 4. WOF is not part of the categories 1-1 or 1-2, since it does not use diversity-related variables.

It does not use indicator-based optimisation as in the third category. However, since category 2 consists of those methods which do not explicitly have a mechanism for managing diversity, this category does not fit WOF either. We therefore introduce a fourth main category of diversity management, which we define as achieving diversity through pivot solutions. WOF is currently the only representative of this way of obtaining diversity. The fourth category is added to Table 5.2.
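How diversity through pivot solutions might look in code can be sketched as follows: pick the best solution per objective to cover the edges of the front, then fill up with further distinct solutions until q pivots are found. The selection rule shown here is a simplified assumption for illustration, not WOF's actual reference-direction-based scheme.

```python
import numpy as np

def select_pivots(population, objectives, q):
    """Pick q pivot solutions: the best solution per objective first
    (covering the edges of the front), then random distinct extras.
    Illustrative selection rule only."""
    F = np.array([objectives(x) for x in population])
    chosen = []
    for m in range(F.shape[1]):                  # extremal solution per objective
        chosen.append(int(np.argmin(F[:, m])))
    rng = np.random.default_rng(0)
    while len(set(chosen)) < q:                  # fill up with random extras
        chosen.append(int(rng.integers(len(population))))
    seen, pivots = set(), []
    for i in chosen:                             # deduplicate, keep order
        if i not in seen:
            seen.add(i)
            pivots.append(i)
    return pivots[:q]

pop = [np.array([0.1, 0.9]), np.array([0.9, 0.1]), np.array([0.5, 0.5])]
f = lambda x: (x[0], x[1])
print(select_pivots(pop, f, 3))  # [0, 1, 2]
```

Each selected pivot would then define one transformed problem, and the resulting subpopulations are merged afterwards.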

It is noteworthy that the recent LSMOF algorithm from the literature [69] is strongly based on WOF in that it uses the transformation function with weight vectors to create the new subproblems. As described above, however, it does not make use of any mechanism to select specific solutions as candidates for this transformation. In fact, since the transformation step is done only once at the beginning of LSMOF, every solution in the initial population of the algorithm is used for a transformation. While this is probably not harmful to diversity, LSMOF clearly does not take any steps to use specific solutions for increasing diversity either, especially not in the remaining parts of the algorithm. Instead, since all these transformed problems are optimised at the same time, the Hypervolume indicator is used as described in Section 3.2.

The GLMO approach does not possess a specific method of diversity management in its basic implementation, since it only specifies the use of (simple) variable groups. GLMO basically takes care of diversity by always optimising the whole, high-dimensional problem with its crossover operator. Therefore, like the other algorithms in category 2 of diversity management, it relies on the selection mechanism of the used metaheuristic to achieve diversity. If GLMO is applied within NSGA-II or NSGA-III, it is to be expected that the final performance in terms of diversity is entirely dependent on the diversity capabilities of those algorithms. However, this classification relies on the assumption that the grouping mechanism used in the mutation operator is either a simple method or an interaction-based method as described in Section 4.2. It is, on the other hand, very easy to also use contribution-based groups, or a combination of contribution-based and other methods, to obtain the groups for the mutation. For the sake of the computational budget, these groups might have to be precomputed at the beginning of the algorithm, as they should not be recomputed in every iteration the way the simple methods can be. Applying contribution-based groups in GLMO might raise the computational budget needed for the approach, but it might also benefit the overall performance and the diversity of the obtained solutions. The GLMO can therefore easily be extended in this way to become a member of category 1-1 of diversity management. Nonetheless, in its original version, as used in this thesis, it only uses simple groups and thereby does not manage diversity. In Table 5.2 the approach is therefore listed in category 2.
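A minimal sketch of such a group-based mutation operator, assuming a Gaussian perturbation and hypothetical names (GLMO's actual operator may differ in the mutation distribution and group handling):

```python
import numpy as np

def grouped_mutation(x, groups, rng, sigma=0.1):
    """Mutate only the variables of one randomly chosen group, leaving
    all other groups untouched.  Sketch of a group-based mutation in the
    spirit of GLMO; the Gaussian step and sigma value are assumptions."""
    y = x.copy()
    g = groups[rng.integers(len(groups))]        # pick one variable group
    y[g] = y[g] + rng.normal(0.0, sigma, size=len(g))
    return y

rng = np.random.default_rng(42)
x = np.zeros(8)
groups = [np.arange(0, 4), np.arange(4, 8)]      # e.g. simple linear groups
y = grouped_mutation(x, groups, rng)
changed = np.nonzero(y != x)[0]
print(changed)  # all changed indices lie inside a single group
```

Crossover and selection of the host metaheuristic remain untouched and still operate on the full-length solutions, which is why the classification above places GLMO in category 3 of dimensionality reduction.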

Regarding the LCSA, it achieves diversity basically through the assumption that the inter- and extrapolation of solutions in the transformed problem can produce diverse solution candidates, and that the selection mechanism of the used metaheuristic is able to keep these in the population. In addition, LCSA optimises the original problem alternately with the transformed one, which makes it similar to other methods like WOF, MOEA/DVA or LSMOF in that there are phases of optimising the original problem. LCSA therefore also belongs to diversity-management category 2, which does not include specialised ways of obtaining or retaining diversity (see Table 5.2). However, the linear search mechanism is actually designed, and shown in the original publication, to benefit diversity [7]. This is because it enables the algorithm to produce solutions through extrapolation in a linear hyperplane, which can lead to good exploration and produce more diverse solutions within a certain converged area of the search space.

Even though LCSA technically belongs to category 2, because it relies on the used metaheuristic, it is designed with the goal of increasing diversity in the population. Nevertheless, it does not manage this increased diversity with a mechanism of its own.
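The linear-combination idea behind LCSA can be sketched as follows: the transformed variables are coefficients of a linear combination of the current population members, so coefficients outside [0, 1] extrapolate beyond the population. The function name and the omitted normalisation details are assumptions for illustration.

```python
import numpy as np

def lcsa_decode(coeffs, population):
    """Decode a coefficient vector into a full-length solution as a
    linear combination of the current population members.  Coefficients
    outside [0, 1] extrapolate beyond the population (sketch of the
    idea; normalisation and bound handling are left out)."""
    P = np.stack(population)                     # shape: (pop_size, n_vars)
    return coeffs @ P

pop = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(lcsa_decode(np.array([0.5, 0.5]), pop))    # interpolation
print(lcsa_decode(np.array([1.5, -0.5]), pop))   # extrapolation
```

The dimensionality of the coefficient vector equals the population size, not the number of decision variables, which is what makes the transformed problem low-dimensional for large-scale instances.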

Dealing with Many-objective Problems

Following the analysis of the related work in Table 4.2, we now give a brief analysis of the many-objective capabilities of the proposed methods as well. In their original publications, only LCSA was actually designed for many-objective problems, while WOF and GLMO were mostly used with 2- and 3-objective problems. However, all of them possess certain many-objective capabilities if the employed metaheuristic is able to deal with many-objective problems.

For GLMO, the diversity management is entirely up to the used metaheuristic, and the modified mutation operator can be used directly inside a many-objective algorithm like NSGA-III. The same applies to WOF, but another aspect of WOF can affect its efficiency for many-objective problems as well: the selection, and especially the number, of chosen pivot solutions x'_k. These solutions are used to control the diversity of the algorithm, and it is recommended to use at least as many pivot solutions as there are objective functions. Therefore, if reference directions are used for the pivot selection, as proposed in [6] and described in Section 5.1.7, at least one transformed problem is formed for the extremal solution in each objective function, i.e. the solution with the best value in that objective. Using this recommendation for the parameter q, we can assume that WOF has a kind of built-in scalability for increasing numbers of objectives. On the other hand, WOF cannot be expected to achieve satisfactory performance on many-objective problems when NSGA-II is used as the optimiser, even if the number of pivot solutions is increased, due to the Pareto-dominance-based selection mechanism of NSGA-II. The many-objective capabilities of WOF therefore depend on several aspects, but the approach can be adjusted to many-objective problems by choosing suitable selection mechanisms and metaheuristics.

Regarding the LCSA, since the experiments in the original publication actually showed good performance on many-objective instances, it can be assumed to be a promising way to increase the performance of existing many-objective algorithms. It was shown that the linear search through extrapolation is able to significantly increase the solution quality of NSGA-III as well as RVEA on 4- and 5-objective benchmark problems. This shows that not only does the approach work well for many-objective optimisation, it also further increases the performance of already well-performing algorithms on these problems. Since the goal of the LCSA is not just dimensionality reduction but also the identification of relevant subspaces to increase exploration, the LCSA is, among the three proposed algorithms, the one whose design is most explicitly intended for many-objective optimisation.

To further explore the capabilities of the proposed methods, in the experimental evaluation of this thesis (see Chapter 6) all three approaches are also tested on many-objective problems, and they show good performance when used with many-objective algorithms.

For this reason, Table 5.2 lists all three approaches as suitable for many-objective optimisation.

Parallel Implementations of the Proposed Methods

Next, we focus on possible parallel implementations of the proposed methods. It is clear that the GLMO is not easy to parallelise on its own, or at least not easier than parallelising any existing metaheuristic that uses mutation. Since the algorithm structure does not deviate from the usual flow of the underlying metaheuristic, it depends on the underlying algorithm whether GLMO works well for parallel computation. This is denoted with a in Table 5.2.

For parallelising WOF, the same holds as for GLMO: if a parallel algorithm is used, parts of the optimisation can be done in parallel. However, this is only partly effective, since WOF consists of different optimisation steps, namely the normal optimisation of the original problem and the optimisation of the q transformed problems.

The optimisation of the q different independent problems can be done in parallel with any metaheuristic, which makes WOF easy to parallelise in these phases. An issue, on the other hand, is that these parallel processes only exist for a certain time, and there is a frequent need for communication between cores to distribute the created problems and gather their results. The merging of the populations and the subsequent large-scale optimisation phase need to be carried out by a central instance again before q new problems are created. In addition, WOF stops this alternation at a certain point in time to focus entirely on the original problem. WOF is therefore parallelisable in parts, but by design it is not meant to work very efficiently on multiple cores at the same time.
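The parallelisable phase described above can be sketched as follows, with a hypothetical optimise_subproblem as a stand-in for a full metaheuristic run on one transformed problem. A real implementation would likely use processes or MPI rather than threads; the central merge step is exactly the point where the communication overhead discussed above arises.

```python
from concurrent.futures import ThreadPoolExecutor

def optimise_subproblem(k):
    """Stand-in for optimising the k-th transformed problem; returns a
    small dummy 'subpopulation'.  In a real WOF implementation this
    would be a complete optimisation run on one weighted problem."""
    return [k * 10 + i for i in range(3)]

def parallel_weighting_phase(q):
    # the q transformed problems are independent, so they can be
    # optimised concurrently; the results are then gathered and
    # merged by a central instance, as WOF requires
    with ThreadPoolExecutor(max_workers=q) as pool:
        subpops = list(pool.map(optimise_subproblem, range(q)))
    return [s for sub in subpops for s in sub]   # central merge step

print(parallel_weighting_phase(3))  # [0, 1, 2, 10, 11, 12, 20, 21, 22]
```

After the merge, the sequential large-scale optimisation phase would run on the combined population before the next set of q problems is created.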

The LCSA is probably the hardest to parallelise among the three proposed algorithms, since it has alternating phases similar to WOF that would require communication and data transfer, except that it does not use multiple transformed problems but only a single one. Therefore, even if LCSA is used with a parallel EA, there is an increased need for central coordination.