This chapter presents basic principles of multi-objective optimisation and introduces several concepts that are required in the remainder of this thesis. A brief overview of multi-objective optimisation and its formal definition is given, along with Pareto-optimality and related concepts. The principles of population-based metaheuristics are described, and the functionality of evolutionary algorithms is explained.

The following sections deal with the special properties and challenges of large-scale multi-objective optimisation. The terminology of large-scale and many-objective optimisation is introduced, and the concepts of variable groups and the different roles of variables in terms of interaction, convergence and diversity are explained. Cooperative Coevolution, which is commonly used in many large-scale methods, is introduced.

After that, a brief overview is given of the benchmark suites that exist in the literature and are commonly used in the scientific community for designing and comparing algorithms. Many of them are scalable in the number of objectives and variables. Finally, a brief description of different evaluation metrics from the literature is given, and the metrics used for the later experimental evaluation (HV and IGD) are formally defined.

Chapter 3

Related Work

In recent years, large-scale optimisation has drawn increased attention in the scientific community. While the large-scale single-objective literature has grown in the last decade, the body of research and algorithms developed exclusively for multi-objective large-scale problems remains sparse, with the majority of large-scale multi-objective algorithms published since the year 2016.

This chapter gives an overview of the related literature on large-scale multi-objective optimisation. In the following, Section 3.1 gives an overview of the literature on large-scale algorithms that have been developed in recent years. In Section 3.2, the related large-scale multi-objective algorithms which are the focus of this thesis and are used in subsequent chapters are explained in detail. Section 3.3 presents a selection of related grouping mechanisms which have been developed in the single- and multi-objective areas. Finally, Section 3.4 provides a short summary of the algorithms and grouping mechanisms described in this chapter.

3.1 Overview of the State of the Art

The aim of this section is to provide a brief overview of existing methods to solve large-scale problems, both in single- and in multi-objective optimisation. Even though a variety of large-scale algorithms for single-objective optimisation have been developed in recent years, the focus of this thesis lies on the multi-objective area. Therefore, only a brief summary of single-objective methods is given in this section. Further information about large-scale optimisation algorithms in the existing single-objective literature can, for instance, be found in [80]. Large-scale optimisation has also drawn interest in the area of exact methods, as for instance in [81]; however, the focus of this thesis lies on metaheuristic approaches. In the following, we give a short review of related single-objective works (based on the author’s article in [1]) that have been influential to the multi-objective area, followed by a brief overview of multi-objective large-scale approaches.

Single-objective Large-scale Optimisation

As described in Section 2.5, the most prominent concept that led to the development of large-scale algorithms is Cooperative Coevolution. The idea of dividing the decision variables into smaller groups and optimising these groups independently with an evolutionary algorithm was originally proposed by Potter and De Jong in 1994 [29] for single-objective optimisation. Later work by Potter et al. further studied a method for dynamically evolving the variable groups needed in the CC framework [31]. The concept of CC has since been used in a variety of large-scale single-objective algorithms [31, 32, 33, 34, 35, 36, 37].
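To illustrate the basic CC principle described above, the following minimal sketch shows how variable groups can be optimised one at a time inside a shared context vector. It is not the algorithm from [29]: the inner random sampling merely stands in for an evolutionary subpopulation, and all function and parameter names are illustrative.

```python
import numpy as np

def cc_minimise(f, n_vars, groups, bounds, cycles=50, trials_per_group=20):
    """Minimal Cooperative Coevolution sketch: each variable group is
    optimised in turn while the remaining variables stay fixed in a
    shared context vector (the current best full solution)."""
    low, high = bounds
    context = np.random.uniform(low, high, n_vars)
    best = f(context)

    for _ in range(cycles):
        for group in groups:                          # e.g. [[0, 1, 2], [3, 4, 5]]
            for _ in range(trials_per_group):         # stand-in for an EA subpopulation
                candidate = context.copy()
                candidate[group] = np.random.uniform(low, high, len(group))
                value = f(candidate)
                if value < best:                      # keep improved group values
                    best, context = value, candidate
    return context, best

# usage: minimise a simple separable test function with two variable groups
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    solution, value = cc_minimise(sphere, n_vars=6,
                                  groups=[[0, 1, 2], [3, 4, 5]],
                                  bounds=(-5.0, 5.0))
    print(value)
```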

In 2010, Chen et al. used a separate step prior to the optimisation process to determine the segregation of the variables into groups for non-separable problems [36]. They took into account the interaction of variables with a learning mechanism for finding the optimal distribution of variables to the subcomponents. They reported good results compared to a naive grouping of the variables, although the additional learning step consumed more computational resources. A study on this trade-off and on the impact of Variable Interaction Learning on Cooperative Coevolutionary algorithms was later conducted by the same authors in 2013 [82].

In 2008, Yang et al. [33] proposed a CC method for single-objective problems using two special features. One was a repeated reassignment of the variables into subcomponents in every iteration of the algorithm. The second was a weighting scheme that optimises a so-called “weight” for each subcomponent, i.e. every variable in the same group is multiplied by a common value. These “weights” were then evolved with a metaheuristic for the best, worst and a random member of the population. The (re-)grouping in this work was done at random, i.e. in each iteration of the algorithm, new random groups were created [1]. This approach later served as an inspiration for the development of the Weighted Optimisation Framework [4, 1], which is one of the proposed methods in this PhD thesis (see Section 5.1).
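The weighting idea can be sketched as follows: instead of searching the full n-dimensional space, only one weight per group is tuned and applied multiplicatively to the variables of that group. This is a simplified illustration rather than the method of [33]; the weight bounds and the random search are placeholders for the metaheuristic used in the cited works.

```python
import numpy as np

def apply_group_weights(x, groups, weights):
    """Scale every variable of group g by its weight w_g (one weight per group)."""
    y = x.copy()
    for g, w in zip(groups, weights):
        y[g] = w * x[g]
    return y

def optimise_weights(f, x, groups, trials=200, w_bounds=(0.0, 2.0)):
    """Search over the low-dimensional weight vector: only len(groups) values
    are tuned instead of the full number of decision variables."""
    best_w = np.ones(len(groups))
    best_val = f(apply_group_weights(x, groups, best_w))
    for _ in range(trials):                           # random search as a stand-in
        w = np.random.uniform(w_bounds[0], w_bounds[1], len(groups))
        val = f(apply_group_weights(x, groups, w))
        if val < best_val:
            best_val, best_w = val, w
    return apply_group_weights(x, groups, best_w), best_val
```

In a population-based setting, such a weight-optimisation step would typically be applied to selected individuals, e.g. the best, worst and a random member of the population, as done in [33].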

The same principle of using CC with weights has since been used in other works [34, 83] for single-objective problems, using benchmark functions with up to 1000 decision variables.

A mechanism for choosing appropriate lower and upper bounds of weights was proposed in [34]. In a later study, Li et al. stated that using this weighting approach in CC is less effective than improving the frequent (re-)grouping of variables [37, 1].

In 2014, a mechanism called “Differential Grouping” (DG) was introduced to find improved distributions of the variables in single-objective CC algorithms [84]. The aim of DG was to detect variable interactions, so that the groups are not formed by random assignment, but based on information about which variables should be optimised together in the same group. A more detailed description of DG and its successor, DG2, is given later in Section 3.3.
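The core of such interaction detection can be illustrated with a pairwise check in the spirit of DG: two variables interact if the effect of perturbing one of them on the objective value changes when the other is perturbed as well. The snippet below is only a simplified illustration of this idea; the actual DG and DG2 procedures, including their thresholds and how complete groups are built from pairwise checks, are described in Section 3.3.

```python
import numpy as np

def interacts(f, x, i, j, delta=1.0, eps=1e-3):
    """Pairwise interaction check in the spirit of Differential Grouping:
    x_i and x_j interact if the effect of perturbing x_i on f changes
    when x_j is perturbed as well."""
    x_i = x.copy()
    x_i[i] += delta                      # perturb variable i only
    x_j = x.copy()
    x_j[j] += delta                      # perturb variable j only
    x_ij = x_j.copy()
    x_ij[i] += delta                     # perturb both variables

    delta_1 = f(x_i) - f(x)              # change caused by x_i at the original x_j
    delta_2 = f(x_ij) - f(x_j)           # change caused by x_i at the shifted x_j
    return abs(delta_1 - delta_2) > eps  # separable variables give (almost) equal deltas

# usage: x_0 and x_1 interact through the quadratic term, x_2 is separable
f = lambda x: (x[1] - x[0] ** 2) ** 2 + x[2] ** 2
x = np.zeros(3)
print(interacts(f, x, 0, 1))   # True
print(interacts(f, x, 0, 2))   # False
```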

Other concepts and extensions to this approach for variable grouping were proposed [85, 86, 87].

In 2015, a single-objective competitive PSO algorithm [88] was developed that showed good results for large-scale benchmark problems with up to 5000 decision variables [1].

Further, CC was, for instance, applied to a large-scale version of the capacitated arc routing problem, with a special variable grouping mechanism that made use of the information about the routes found by the algorithm in previous iterations [89].

Multi-objective Large-scale Optimisation

In contrast to single-objective problems, research on multi-objective large-scale optimisation has only gained popularity within the last few years. In the multi-objective case, exploring a high-dimensional search space is even more challenging, since multiple areas of the search space need to be found to cover different parts of the Pareto-optimal front of the problem. A study in 2013, for instance, showed that the performance of existing algorithms deteriorates when the dimension of the search space is increased [90].

A first approach to utilise the concept of CC with multiple objectives was presented by Iorio and Li [32] in 2004. This approach makes use of the NSGA-II algorithm, but was not developed as a dedicated large-scale optimiser, and as such was not tested on any high-dimensional search spaces. The algorithm optimises each decision variable on its own, using a dedicated population that consists only of values for this one variable. The ZDT test problems (refer to Section 2.6) were used with 2 objectives and only up to 30 decision variables. The results showed that their algorithm was able to compete with the performance of NSGA-II in most of their experiments.

To the best of the author’s knowledge, the first dedicated multi-objective large-scale algorithm was proposed in 2013 by Antonio and Coello Coello, called CCGDE3 [3]. Their work used the concept of CC together with the Generalized Differential Evolution 3 algorithm (GDE3 [91]). In their experiments, using some of the 2-objective ZDT benchmark functions with between 200 and 5000 decision variables, the coevolution-enhanced version of GDE3 outperformed the traditional GDE3 and NSGA-II algorithms.

However, to achieve good approximations of the PF, the CCGDE3 method still required a large number of function evaluations, ranging from 150,000 to 220,000 for the 1000-variable instances and up to over 5,700,000 evaluations for the 5000-variable problems. Furthermore, although the study showed that the concept of CC can be applied to multiple objectives, it was only tested on the ZDT functions. It remained unclear whether this concept would work with more complicated benchmark problems like the WFG or the DTLZ benchmarks.

3.2 Related Approaches in Large-scale Multi-objective Optimisation