

4.1.1 Similarities and Common Building Blocks

Reviewing the existing large-scale multi-objective optimisation literature reveals that most algorithms were published within the last three years: 11 out of the 13 methods listed in Section 3.2 appeared in 2016 or later, and 6 of them in 2018 or 2019. The publication history of the related methods is depicted in Fig. 4.1 and shows the increasing attention the research community has paid to this area in recent years. Looking closer at the structures of the state-of-the-art algorithms, we observe that, on the one hand, certain elements are common to most of these algorithms, while on the other hand a diverse set of different techniques exists. We therefore first examine the state-of-the-art algorithms to identify similarities and common elements, so-called building blocks, which are present in the large-scale multi-objective algorithms.

The first common element is that all 13 methods rely on existing “low-scale” algorithms to perform the actual optimisation, i.e. algorithms which work well on traditional multi- or single-objective problems of relatively low dimensionality. This is implemented in several different ways, but in all cases one or more established search strategies from the single- or multi-objective domain are incorporated to optimise parts of the whole problem. They are used either to optimise separate populations for certain areas of the search space (as, for instance, in S3-CMA-ES) or to optimise only certain groups of variables (as in the CC-based approaches like MOEA/DVA or CCLSM). In some cases, single-objective optimisation is used to optimise the problem based on an indicator value (as in LSMOF or DLS-MOEA).
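To illustrate how a “low-scale” optimiser can be embedded to optimise only certain groups of variables, the following sketch shows a minimal cooperative-coevolution-style loop. It is a simplified illustration and not any of the cited algorithms: the sphere objective, the (1+1) hill-climber and the fixed-size groups are all placeholder assumptions.

```python
import random

def sphere(x):
    # Separable single-objective test function; stands in for one
    # objective of a large-scale benchmark problem.
    return sum(v * v for v in x)

def optimise_group(x, group, f, iters=200, step=0.1):
    # "Low-scale" optimiser: a simple (1+1) hill-climber restricted to
    # one variable group; all other variables stay fixed.
    best = list(x)
    for _ in range(iters):
        cand = list(best)
        for i in group:
            cand[i] += random.gauss(0.0, step)
        if f(cand) < f(best):
            best = cand
    return best

def cc_optimise(f, n=100, group_size=10, cycles=5):
    # Cooperative-coevolution skeleton: the n variables are split into
    # groups and each group is optimised in turn by the low-scale optimiser.
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]
    groups = [list(range(i, min(i + group_size, n)))
              for i in range(0, n, group_size)]
    for _ in range(cycles):
        for g in groups:
            x = optimise_group(x, g, f)
    return x

random.seed(1)
result = cc_optimise(sphere)
print(sphere(result))  # much smaller than the initial objective value
```

In the CC-based methods from the literature, the hill-climber would be replaced by a full evolutionary algorithm, and the grouping would typically be based on variable interactions or contributions rather than fixed index blocks.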

Since sophisticated algorithms in the multi-objective area have been developed over the last two decades, it is only natural to tackle large-scale problems by harnessing the power of these existing algorithms, i.e. by reducing the dimensionality of the large-scale problem and then applying them. All of the mentioned methods, with the exception of DLS-MOEA, use some kind of dimensionality-reduction technique in the decision space to enable the respective algorithms to deal with a lower-dimensional problem. In addition, some of the algorithms - including DLS-MOEA - also apply dimensionality reduction in the objective space.
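The principle of reducing decision-space dimensionality through a problem transformation can be sketched as follows. The sketch is loosely inspired by weight-based reformulations such as the one used in WOF, but it is not the published method: the grouping, the single-objective surrogate and the hill-climber are illustrative assumptions. A fixed base solution is scaled group-wise by a small weight vector, and only the weights are optimised.

```python
import random

def transformed(f, x, groups):
    # Build a low-dimensional surrogate problem: one weight per variable
    # group scales all variables of that group in the fixed base solution x.
    def g(w):
        y = list(x)
        for wi, grp in zip(w, groups):
            for i in grp:
                y[i] = wi * x[i]
        return f(y)
    return g

def optimise_weights(g, k, iters=300, step=0.2):
    # Any "low-scale" optimiser can search the k-dimensional weight space;
    # a simple hill-climber stands in here.
    w = [1.0] * k
    for _ in range(iters):
        cand = [wi + random.gauss(0.0, step) for wi in w]
        if g(cand) < g(w):
            w = cand
    return w

def sphere(x):
    return sum(v * v for v in x)

random.seed(2)
n, k = 1000, 4
x = [random.uniform(-1.0, 1.0) for _ in range(n)]
groups = [list(range(i, n, k)) for i in range(k)]  # illustrative grouping
g = transformed(sphere, x, groups)
w = optimise_weights(g, k)
print(g(w) < sphere(x))  # True: the 4-dimensional search improved the 1000-dimensional solution
```

The appeal of such a transformation is that a 1000-dimensional problem is searched through only k = 4 parameters, at the cost of restricting the search to the subspace reachable by group-wise scaling of the base solution.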

4.1. CLASSIFICATION OF LARGE-SCALE ALGORITHMS 67

These findings lead to one of the most striking observations: there is no common algorithm structure that covers all large-scale methods, and therefore no general framework for successful large-scale algorithms seems to present itself. DLS-MOEA is one of the newest algorithms and, based on the experiments in the original article, performs very well on current large-scale problem instances. However, it is also the only method that does not reduce the dimensionality of the search space through any mechanism. On the other hand, some of the most successful algorithms, like LSMOF and the WOF by the author of this thesis, rely heavily on dimensionality reduction and reformulation of the original problem. Similarly, some algorithms show good performance using variable groups, while others perform equally well without ever using any grouping technique. Among all methods, DLS-MOEA is so far the only example showing that dimensionality reduction is not necessarily the definitive answer to all large-scale optimisation problems - a surprising result given that all other algorithms of the last six years have relied on this principle.

Some further similarities among the related methods are striking. The algorithms CCGDE3, MOEA/D2 and CCLSM are all very similar. In fact, CCGDE3 and MOEA/D2 were proposed by the same authors in 2013 and 2016 respectively. They differ only in that the independent populations for the groups were replaced by a single population, so that the search for suitable partners from the other populations for evaluation is no longer necessary. CCLSM, published in 2018, is essentially the same algorithm as MOEA/D2, with the only difference that the random groups were replaced by an interaction-based grouping method.

Similar relationships exist between other algorithms as well. MOEA/D-RDG is equivalent to MOEA/DVA, except that the interaction-based groups are replaced by the dynamic random-based grouping method (DRG) described in Section 3.3.3. Further, certain similarities exist between the three methods PEA, S3-CMA-ES and DPCCMOEA. Based on the simplified flowcharts of all algorithms in the previous chapter, Table 4.1 shows an overview of which building blocks are used in which methods, and how often each component is used in the literature in total. As mentioned before, MOEA/D(s&ns) is excluded from this analysis due to the uncertain quality of the work (see Section 3.2).
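The distinction between random and interaction-based grouping rests on detecting whether two decision variables interact. A simplified variant of the differential-grouping idea behind such interaction-based methods can be sketched as follows; the perturbation size, the threshold and the test function are illustrative assumptions.

```python
def interacts(f, x, i, j, d=1.0, eps=1e-9):
    # Differential-grouping-style check: x_i and x_j interact if the change
    # in f caused by perturbing x_i differs depending on whether x_j was
    # perturbed beforehand. For additively separable pairs both deltas match.
    def perturb(base, idx, delta):
        y = list(base)
        y[idx] += delta
        return y
    d1 = f(perturb(x, i, d)) - f(x)
    xj = perturb(x, j, d)
    d2 = f(perturb(xj, i, d)) - f(xj)
    return abs(d1 - d2) > eps

# A separable pair (x0, x1) and a non-separable pair (x1, x2):
f = lambda x: x[0] ** 2 + x[1] ** 2 + x[1] * x[2]
x0 = [0.5, 0.5, 0.5]
print(interacts(f, x0, 0, 1))  # False: x0 and x1 are additively separable
print(interacts(f, x0, 1, 2))  # True: the x1*x2 term couples them
```

Interaction-based methods run such checks pairwise (at additional evaluation cost) to place interacting variables into the same group, whereas random grouping assigns variables to groups without any such analysis.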

Based on Table 4.1, we see that the most common element in the literature is “Contribution-based grouping”, which occurs in 6 out of 12 methods. Similarly popular are the building blocks “Interaction-based grouping” and “CC-based optimisation of convergence-variables”, which occur 5 times each. 4 out of 12 methods use a form of “Convergence-detection”. Further, the “Creation of independent populations” is used 3 times, which refers to actual independent populations for the original large-scale problem (i.e. not populations containing only certain variables as in the CC-based approaches). A minority at the current time are the indicator-based and transformation-based methods, whose building blocks occur only in ReMO, DLS-MOEA and LSMOF. It is further interesting that random groups are used 4 times.

Building Blocks (columns of Table 4.1, from left to right): Random grouping, Interaction-based grouping, Contribution-based grouping, CC-based optimisation of large-scale problem, CC-based optimisation of convergence-variables, Optimise large-scale problem, Optimise diversity-variables, Indicator-based optimisation, Indicator-based local search, Optimise transformed problem, Optimisation of a single group of variables, Update global archive, Create independent populations, Convergence detection, Problem transformation

CCGDE3 X X

MOEA/DVA X X X X X

LMEA X X X X

MOEA/D-RDG X X X X X

MOEA/D2 X X

DPCCMOEA X X X X X

ReMO X X

S3-CMA-ES X X X X X X

CCLSM X X

PEA X X X X X X

DLS-MOEA X X

LSMOF X X X X

Σ 4 5 6 3 5 3 3 2 1 2 1 1 3 4 2

Table 4.1: Summary of building blocks in the related large-scale algorithms. The bottom row shows in how many methods a building block is used.

It is also interesting that optimisation of the large-scale problem as a whole, i.e. without reducing the dimensionality, occurs in a quarter of all methods.

The remainder of this section analyses these observations in further detail. The resulting classification of the methods into categories is summarised in Table 4.2, and details about the experimental evaluations are shown in Table 4.3. The properties, similarities and differences are categorised in the next subsections based on the following criteria.

• Dimensionality Reduction in Decision Space

• Diversity Management

• Many-Objective Capabilities

• Parallelism

• Experimental Evaluation and Computational Budget
