Figure 6.2: Convergence behaviour of the original and respective WOF algorithms on the 5-objective LSMOP5 problem with 1000 variables. (a) SMPSO and NSGA-II; (b) MOEA/D and NSGA-III. (Plots of IGD value, log scale, over evaluations ×10^5; compared algorithms: WOF-SMPSO, WOF-NSGA-II, NSGA-II, SMPSO, WOFRandomised in (a); WOF-MOEA/D, WOF-NSGA-III, MOEA/D, NSGA-III in (b).)

6.3 Evaluation of the Grouped and Linked Mutation Operator


or worst. As we want to show the effects of the proposed changes to the mutation operators, in the following we look at each of the three algorithms separately. We first concentrate on the final solution quality before examining the convergence behaviour.

Regarding NSGA-II and its derived versions, the results are shown in Table 6.4. The first observation we can draw from these winning rates is that the linked algorithm does not perform better than the original NSGA-II in most instances, and only outperforms it in a little over 5% of the problems. This behaviour makes sense, since both of these versions use a mutation probability of 1/n, and therefore the expected number of mutated genes in each individual is 1 out of the n variables. As a result, linking the amount of change between all mutated variables might in most cases not have any effect, since it only applies when more than one variable is changed in the first place.
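To illustrate this expectation argument, the following back-of-the-envelope simulation (an illustrative sketch, not the implementation used in the experiments; n and the trial count are chosen arbitrarily) estimates how often more than one variable is mutated at all under a 1/n probability, i.e. how often the link between variables can even take effect:

```python
import random

random.seed(1)
n = 1000                      # number of decision variables
p = 1.0 / n                   # standard low mutation probability
trials = 2000

# Count mutation events in which more than one variable is changed;
# only in these cases can linking the amount of change have any effect.
multi = sum(
    1 for _ in range(trials)
    if sum(random.random() < p for _ in range(n)) > 1
)

# The number of mutated variables is Binomial(n, 1/n), so for large n
# P(more than one mutates) is approximately 1 - 2/e, about 0.26: in
# roughly three out of four mutations the link stays inactive.
frequency = multi / trials
```

This matches the observation above: with a 1/n mutation probability, the linking mechanism is simply dormant in the majority of mutation events.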

The next observation regards the grouped NSGA-II version, especially in comparison with the high-mutation-probability version. Here we can see in Table 6.4 that the GroupedNSGA-II performs better than the original NSGA-II in about 30% of the cases, and better than the HighProbabilityNSGA-II in 53.8% of all problems. Since NSGA-II also outperforms the grouped version in around 50% of problems, a clearly superior performance of either of the two cannot be observed when all problem instances are considered. However, when only the large-scale problems are considered, these results differ: the GroupedNSGA-II and NSGA-II win against each other in 42% and 39% of instances respectively, with the remaining instances resulting in draws. If we compare with the version using a high mutation probability, we see that the grouped version performs significantly better on 60.86% of the large-scale problems, while the high-probability NSGA-II can only outperform the grouped NSGA-II in 7.6% of large-scale problems. This is especially interesting, since the expected amount of change to each individual remains the same in both operators (1/4 of the n variables). However, there seems to be an influence from the choice of which variables are mutated. Since we used the ordered grouping mechanism in these experiments, the grouped version usually mutates variables with relatively similar values (relative to their domains), which might in the current benchmark problems lead to groups that are more beneficial. Even without detailed knowledge of why these ordered groups are favourable for each specific benchmark, we can nevertheless conclude that the choice of which variables are mutated has an influence beyond the mere increase in mutation compared to the normal, low mutation rate.
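The ordered grouping mechanism referred to above can be sketched as follows. This is a hedged reconstruction, assuming variables are ranked by their values normalised to their respective domains; the function name and signature are illustrative, not taken from the thesis implementation.

```python
def ordered_grouping(x, lower, upper, n_groups):
    """Group variable *indices* by value: normalise each variable to
    its domain, sort the indices by these normalised values, and split
    the sorted index list into n_groups (almost) equally sized groups."""
    normalised = [(x[i] - lower[i]) / (upper[i] - lower[i]) for i in range(len(x))]
    order = sorted(range(len(x)), key=lambda i: normalised[i])
    size = -(-len(x) // n_groups)               # ceiling division
    return [order[g * size:(g + 1) * size] for g in range(n_groups)]

# Variables with similar normalised values end up in the same group:
groups = ordered_grouping([0.9, 0.1, 0.5, 0.2], [0.0] * 4, [1.0] * 4, 2)
```

A mutation operator using these groups then changes whole groups of similarly valued variables at once, which is the property discussed above.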

The most important observation concerns the version that uses both the groups and the links between the variables. Even though neither effect on its own led to a clearly superior performance over the original NSGA-II, we can observe that the combined "GroupLinkNSGA-II" in Table 6.4 obtains significantly better results than the original NSGA-II in 91.30% of the 92 different large-scale instances and in 73.43% of the many-objective problems. This shows that the solution quality can be significantly improved using the proposed GLMO mutation operator, and that variable groups, and especially the link between the amounts of mutation for the variables, can enhance the effectiveness of traditional algorithms in the large-scale area.
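A minimal sketch of the combined idea behind the GLMO is given below, assuming for illustration a simple uniform step instead of the polynomial mutation typically used in these algorithms; the group structure is taken as given (e.g. from ordered grouping), and all names are illustrative rather than the actual implementation:

```python
import random

def glmo_style_mutation(x, groups, step=0.1, bounds=(0.0, 1.0)):
    """Grouped and linked mutation sketch: choose one variable group
    at random and shift every variable in it by the *same* linked
    random amount, clamped to the variable bounds."""
    lo, hi = bounds
    y = list(x)
    group = random.choice(groups)            # grouped: mutate one whole group
    shared = random.uniform(-1.0, 1.0)       # linked: one magnitude for all
    for i in group:
        y[i] = min(hi, max(lo, y[i] + shared * step * (hi - lo)))
    return y
```

With four groups, the expected amount of change per individual is n/4 variables, as in the high-probability variant, but the changed variables are correlated (same group, same shift) rather than independent.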

When we look at the results of the NSGA-III and SMPSO in Tables 6.5 and 6.6, we observe a similar picture. The GroupLinkNSGA-III algorithm obtains better results than NSGA-III in over 90% of the large-scale problems and over 73% of the many-objective problems. Although NSGA-III is a dedicated many-objective algorithm, and we can naturally expect it to work well on the 64 many-objective instances, we still observe a large increase in performance on these instances when the modified mutation operator is applied. This confirms the observation that these operators might be useful not just for large-scale, but also for many-objective problems.

The same operator applied to SMPSO yields similar results. However, the winning rates compared to the original SMPSO, at 76% for large-scale and 39% for many-objective instances, are a little lower than for the other optimisers. On the other hand, the original SMPSO wins against the GroupLinkSMPSO version in less than 2% of the cases, suggesting that in more than 20% of the cases there is no statistically significant difference between the two algorithms.

Another interesting aspect is the convergence behaviour of the proposed methods. In Figs. 6.3 to 6.5 we show, as examples, the development of the IGD values on the 2-objective WFG5 and UF3 problems with 1000 variables for the algorithm versions based on NSGA-II, SMPSO and NSGA-III respectively. In all three figures it is clearly observable that the GLMO, which uses both groups and links between variables, performs best, not just with a statistically significant difference, but also by a large margin. In contrast, even though such significant differences generally exist between the other algorithm versions as well, we see that in median performance the results of the other three versions always remain very close to their respective original algorithm. It is also visible that all algorithms make their fastest progress at the beginning of the search, while after the first 10% of the evaluations the IGD values decrease only slowly. The big difference between the GLMO and the other versions is that this initial progress is much more pronounced, which may indicate a much better exploration of the high-dimensional search space at the beginning of the search compared to the other algorithms.
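As a reminder, the IGD value reported in these figures averages, over a set of reference points sampled from the true Pareto-front, the distance from each reference point to the closest obtained solution; lower values mean better convergence and coverage. A minimal sketch with Euclidean distance (the point sets below are made up purely for illustration):

```python
import math

def igd(reference_front, solutions):
    """Inverted Generational Distance: mean Euclidean distance from
    each reference point to its nearest obtained solution."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(
        min(dist(r, s) for s in solutions) for r in reference_front
    ) / len(reference_front)

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]      # sampled "true" front
early = [(0.9, 1.2), (1.1, 0.9)]                # population early in a run
late = [(0.05, 1.0), (0.5, 0.55), (1.0, 0.05)]  # population late in a run
# IGD falls as the population approaches the front, which is why the
# convergence plots show decreasing curves over the evaluation budget.
```

Because every reference point is matched to its nearest solution, IGD penalises both poor convergence and gaps in the coverage of the front.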

In conclusion, we can see that the modified mutation operators, although they require very little change to the operator itself and no changes at all to the used metaheuristic, can significantly increase the performance of existing algorithms for large-scale as well as many-objective optimisation. Especially the combination of variable groups and variable linkage can lead to increased performance in over 90% of the 92 large-scale problems and over 70% of the many-objective instances when the NSGA-II or NSGA-III algorithms are used. The convergence analysis further confirms the superior performance of the GLMO not only in solution quality, but also in the speed of convergence towards the Pareto-front.

Figure 6.3: Convergence behaviour of the different NSGA-II versions using the grouped and linked mutation operators on the 2-objective UF3 and WFG5 problems with 1000 variables. (a) UF3; (b) WFG5. (Plots of IGD value, log scale, over evaluations ×10^5; compared algorithms: NSGA-II, HighProbNSGA-II, GroupedNSGA-II, LinkedNSGA-II, GroupLinkNSGA-II.)

6.4 Evaluation of the Linear Combination-based Search Algorithm