
In this section, we compare the performance of different versions of the WOF algorithm framework with the respective original algorithms, and examine how well the proposed method is able to increase the performance of these algorithms in large-scale optimisation. In total, we compare 9 different algorithms: the four original methods NSGA-II, SMPSO, MOEA/D and NSGA-III, the four respective WOF versions of these methods, and the above-described randomised version of WOF. All 184 problem instances are used in this experiment. For the analysis in the winning-score tables, these problems include 56 low-scale instances, 92 large-scale instances, and 64 many-objective instances. The winning scores of this experiment are shown in Table 6.2 and Table 6.3, as well as in Tables B.1 to B.8 in Appendix B for all problems.
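To make the construction of these winning-score tables concrete, the sketch below illustrates how a single entry could be derived from per-run IGD values of two algorithms. The use of a Mann-Whitney U (rank-sum) test, the significance level of 0.05 and all function names are illustrative assumptions and not necessarily the exact procedure used in this thesis.

```python
# Minimal sketch (not the thesis code): algorithm A scores a "win" on an
# instance if its IGD samples are significantly lower (better) than those of B.
from scipy.stats import mannwhitneyu

def winning_rate(igd_a, igd_b, alpha=0.05):
    """igd_a, igd_b: per-instance lists of per-run IGD values for A and B."""
    wins = 0
    for runs_a, runs_b in zip(igd_a, igd_b):
        # one-sided test: are A's IGD values significantly smaller than B's?
        _, p = mannwhitneyu(runs_a, runs_b, alternative="less")
        if p < alpha:
            wins += 1
    return 100.0 * wins / len(igd_a)  # percentage of instances won by A
```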

Regarding WOF-SMPSO and SMPSO, we see in Table 6.2 that the WOF version outperforms SMPSO in 90.76% of all problems (167 out of 184 instances); the remaining instances consist of 16 draws, and only 1 out of 184 times (0.54%) was the original SMPSO able to perform better than WOF-SMPSO. We further observe that this good performance occurs in all problem categories: WOF wins in 44 out of 56 instances (78.57%) in the low-scale problems, 88 out of 92 times (95.65%) in the large-scale area, and 55 out of 64 times (85.93%) in the many-objective area.

Table 6.3: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations of the NSGA-III and MOEA/D algorithms and their WOF-versions as well as the randomised WOF algorithm. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

NSGA-III: vs WOF-NSGA-III 21.73/55.35/5.43/23.43 | vs MOEA/D 45.65/58.92/47.82/20.31 | vs WOF-MOEA/D 26.08/71.42/4.34/14.06 | vs WOF-Randomised 12.50/37.50/1.08/10.93
WOF-NSGA-III: vs NSGA-III 62.50/12.50/88.04/62.50 | vs MOEA/D 66.84/50.00/84.78/54.68 | vs WOF-MOEA/D 45.65/67.85/33.69/31.25 | vs WOF-Randomised 4.89/0.00/5.43/7.81
MOEA/D: vs NSGA-III 45.10/21.42/46.73/68.75 | vs WOF-NSGA-III 22.82/37.50/11.95/37.50 | vs WOF-MOEA/D 25.00/50.00/8.69/25.00 | vs WOF-Randomised 13.04/23.21/6.52/18.75
WOF-MOEA/D: vs NSGA-III 67.39/17.85/90.21/78.12 | vs WOF-NSGA-III 40.21/25.00/50.00/51.56 | vs MOEA/D 61.41/25.00/86.95/57.81 | vs WOF-Randomised 18.47/12.50/15.21/35.93
WOF-Randomised: vs NSGA-III 72.28/19.64/95.65/73.43 | vs WOF-NSGA-III 61.95/44.64/72.82/57.81 | vs MOEA/D 73.36/51.78/91.30/60.93 | vs WOF-MOEA/D 57.60/66.07/57.60/32.81

If we further take a look at the detailed results in Appendix B, we see that the superior performance is spread across all problem families. We can conclude from this that WOF is able to improve the performance of SMPSO on almost all problems, regardless of their properties or their dimensionality in decision or objective space.

The NSGA-II variants show a similar performance in general, although the superiority over the original NSGA-II is slightly lower than in the SMPSO case. WOF-NSGA-II outperforms NSGA-II in 123 of 184 instances (66.84%), while NSGA-II wins in 32 instances (17.39%). Interestingly, the performance on low-scale problems differs from the one on large-scale and many-objective problems. This is visible in Table 6.2 (third row, fourth column), where NSGA-II can only outperform its WOF version in 4 out of 92 large-scale problems (4.34%) and 9 out of 64 many-objective instances (14.06%). Therefore, we can conclude that WOF-NSGA-II shows a superior performance on the large-scale and many-objective problems, but does not improve the results of NSGA-II on classical problems with small numbers of variables to the same extent as observed for SMPSO.

Next, we take a look at the results for NSGA-III in Table 6.3, which is a dedicated many-objective optimiser. Also in this case, we see that WOF is able to enhance the performance strongly. While NSGA-III is still able to claim around 55% of the wins for low-scale problems, WOF-NSGA-III clearly outperforms NSGA-III on most of the large-scale and many-objective instances (88.04% and 62.5% respectively).


Table 6.4: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations of NSGA-II and its grouped and linked versions. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

NSGA-II: vs GroupedNSGA-II 50.54/87.50/39.13/46.87 | vs LinkedNSGA-II 1.63/0.00/3.26/0.00 | vs GroupLinkNSGA-II 18.47/50.00/5.43/20.31 | vs HighProbNSGA-II 58.15/87.50/51.08/54.68
GroupedNSGA-II: vs NSGA-II 30.43/3.57/42.39/21.87 | vs LinkedNSGA-II 29.89/3.57/42.39/20.31 | vs GroupLinkNSGA-II 1.08/1.78/0.00/0.00 | vs HighProbNSGA-II 53.80/46.42/60.86/43.75
LinkedNSGA-II: vs NSGA-II 5.43/8.92/3.26/1.56 | vs GroupedNSGA-II 51.08/87.50/38.04/45.31 | vs GroupLinkNSGA-II 21.19/58.92/5.43/20.31 | vs HighProbNSGA-II 59.23/87.50/52.17/54.68
GroupLinkNSGA-II: vs NSGA-II 69.02/17.85/91.30/73.43 | vs GroupedNSGA-II 89.13/80.35/94.56/89.06 | vs LinkedNSGA-II 69.02/14.28/92.39/73.43 | vs HighProbNSGA-II 94.02/87.50/97.82/95.31
HighProbNSGA-II: vs NSGA-II 25.00/1.78/38.04/17.18 | vs GroupedNSGA-II 5.97/1.78/7.60/1.56 | vs LinkedNSGA-II 25.00/1.78/39.13/15.62 | vs GroupLinkNSGA-II 1.08/3.57/0.00/0.00

Table 6.5: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations of NSGA-III and its grouped and linked versions. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

NSGA-III: vs GroupedNSGA-III 61.41/83.92/52.17/67.18 | vs LinkedNSGA-III 0.00/0.00/0.00/0.00 | vs GroupLinkNSGA-III 21.73/55.35/8.69/21.87 | vs HighProbNSGA-III 66.84/85.71/58.69/71.87
GroupedNSGA-III: vs NSGA-III 28.26/7.14/40.21/18.75 | vs LinkedNSGA-III 27.17/7.14/39.13/18.75 | vs GroupLinkNSGA-III 1.63/5.35/0.00/0.00 | vs HighProbNSGA-III 56.52/37.50/71.73/50.00
LinkedNSGA-III: vs NSGA-III 8.15/16.07/4.34/3.12 | vs GroupedNSGA-III 62.50/85.71/54.34/71.87 | vs GroupLinkNSGA-III 21.73/55.35/8.69/21.87 | vs HighProbNSGA-III 69.02/89.28/61.95/75.00
GroupLinkNSGA-III: vs NSGA-III 71.19/23.21/90.21/73.43 | vs GroupedNSGA-III 95.65/85.71/100.00/100.00 | vs LinkedNSGA-III 69.56/17.85/90.21/71.87 | vs HighProbNSGA-III 96.73/89.28/100.00/100.00
HighProbNSGA-III: vs NSGA-III 20.10/5.35/31.52/14.06 | vs GroupedNSGA-III 9.78/16.07/5.43/9.37 | vs LinkedNSGA-III 20.65/5.35/32.60/14.06 | vs GroupLinkNSGA-III 1.63/5.35/0.00/0.00

Table 6.6: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations of SMPSO and its grouped and linked versions. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

SMPSO: vs GroupedSMPSO 4.89/1.78/8.69/3.12 | vs LinkedSMPSO 1.63/1.78/1.08/1.56 | vs GroupLinkSMPSO 2.17/5.35/1.08/1.56 | vs HighProbSMPSO 7.60/7.14/7.60/3.12
GroupedSMPSO: vs SMPSO 35.86/21.42/47.82/3.12 | vs LinkedSMPSO 34.23/17.85/46.73/3.12 | vs GroupLinkSMPSO 2.71/8.92/0.00/0.00 | vs HighProbSMPSO 25.00/7.14/40.21/3.12
LinkedSMPSO: vs SMPSO 2.17/1.78/2.17/0.00 | vs GroupedSMPSO 5.43/5.35/7.60/3.12 | vs GroupLinkSMPSO 3.26/10.71/0.00/0.00 | vs HighProbSMPSO 8.69/8.92/9.78/4.68
GroupLinkSMPSO: vs SMPSO 63.58/37.50/76.08/39.06 | vs GroupedSMPSO 61.95/33.92/73.91/43.75 | vs LinkedSMPSO 62.50/33.92/75.00/42.18 | vs HighProbSMPSO 61.95/33.92/72.82/40.62
HighProbSMPSO: vs SMPSO 28.80/17.85/35.86/0.00 | vs GroupedSMPSO 1.08/1.78/0.00/0.00 | vs LinkedSMPSO 27.17/10.71/36.95/0.00 | vs GroupLinkSMPSO 2.17/7.14/0.00/0.00

Table 6.7: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. The original NSGA-II, SMPSO and NSGA-III algorithms are shown along with their LCSA-enhanced versions. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

SMPSO: vs xSMPSO 7.06/17.85/2.17/3.12 | vs NSGA-II 44.56/14.28/66.30/53.12 | vs xNSGA-II 24.45/12.50/32.60/34.37 | vs NSGA-III 27.17/7.14/45.65/12.50 | vs xNSGA-III 14.13/3.57/22.82/7.81
xSMPSO: vs SMPSO 65.76/23.21/79.34/73.43 | vs NSGA-II 71.73/21.42/92.39/85.93 | vs xNSGA-II 47.82/19.64/64.13/53.12 | vs NSGA-III 61.95/17.85/80.43/60.93 | vs xNSGA-III 36.41/17.85/50.00/18.75
NSGA-II: vs SMPSO 41.30/64.28/26.08/32.81 | vs xSMPSO 25.00/67.85/7.60/12.50 | vs xNSGA-II 27.71/33.92/27.17/28.12 | vs NSGA-III 14.67/14.28/13.04/4.68 | vs xNSGA-III 14.13/25.00/9.78/1.56
xNSGA-II: vs SMPSO 68.47/71.42/65.21/64.06 | vs xSMPSO 30.97/66.07/14.13/25.00 | vs NSGA-II 52.17/10.71/68.47/62.50 | vs NSGA-III 53.26/17.85/68.47/57.81 | vs xNSGA-III 17.93/19.64/16.30/1.56
NSGA-III: vs SMPSO 65.76/78.57/50.00/81.25 | vs xSMPSO 32.06/73.21/15.21/31.25 | vs NSGA-II 66.30/57.14/70.65/90.62 | vs xNSGA-II 35.86/51.78/30.43/37.50 | vs xNSGA-III 32.06/48.21/27.17/28.12
xNSGA-III: vs SMPSO 78.26/78.57/73.91/85.93 | vs xSMPSO 46.73/71.42/33.69/54.68 | vs NSGA-II 77.17/51.78/86.95/95.31 | vs xNSGA-II 53.80/48.21/56.52/82.81 | vs NSGA-III 52.71/8.92/72.82/60.93


Table 6.8: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. WOF and LSMOF are compared using the NSGA-II and SMPSO algorithms. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

WOF-SMPSO: vs LS-SMPSO 61.41/66.07/60.86/34.37 | vs WOF-NSGA-II 60.86/37.50/77.17/53.12 | vs LS-NSGA-II 51.63/33.92/60.86/37.50 | vs WOF-Randomised 32.06/17.85/43.47/9.37
LS-SMPSO: vs WOF-SMPSO 9.23/1.78/13.04/21.87 | vs WOF-NSGA-II 40.21/26.78/48.91/42.18 | vs LS-NSGA-II 34.78/19.64/41.30/35.93 | vs WOF-Randomised 15.21/14.28/18.47/10.93
WOF-NSGA-II: vs WOF-SMPSO 24.45/41.07/13.04/31.25 | vs LS-SMPSO 49.45/58.92/45.65/45.31 | vs LS-NSGA-II 37.50/33.92/41.30/32.81 | vs WOF-Randomised 10.86/10.71/9.78/7.81
LS-NSGA-II: vs WOF-SMPSO 26.08/42.85/19.56/29.68 | vs LS-SMPSO 42.93/62.50/39.13/39.06 | vs WOF-NSGA-II 29.89/30.35/34.78/23.43 | vs WOF-Randomised 17.39/28.57/15.21/9.37
WOF-Randomised: vs WOF-SMPSO 45.65/62.50/35.86/59.37 | vs LS-SMPSO 67.93/71.42/66.30/68.75 | vs WOF-NSGA-II 73.91/60.71/81.52/84.37 | vs LS-NSGA-II 68.47/57.14/70.65/75.00

In addition, a large number of the differences between the two algorithms are not statistically significant (draws) in the low-scale and many-objective cases. Overall, NSGA-III can only outperform its WOF-enhanced version in 21.73% of all problems, and in large-scale instances only in 5.43% (5 out of 92 instances). This shows that the WOF method is also suitable for many-objective optimisation if an appropriate method like NSGA-III is used. Regarding MOEA/D in comparison with WOF-MOEA/D, the picture looks very similar to the one for NSGA-III, with similar winning scores in all four categories. An interesting observation here is that WOF internally uses the concept of Pareto-dominance to merge the populations created from the different subproblems. Nonetheless, it is able to improve the performance also for algorithms like MOEA/D, which on their own do not rely on this concept.

These results, which reflect the final algorithm performance with respect to the obtained IGD values, show that WOF is able to significantly outperform existing methods in large-scale optimisation. It is further able to significantly increase the performance of existing algorithms in many cases on traditional, low-scale problems. The application to 4 different optimisers shows that the improvement is strongest when applied to the SMPSO algorithm, but in general suggests that the WOF method can be successfully used with arbitrary metaheuristics.

To bring out the best of the four versions of WOF, the randomised version as described above is compared with the respective other versions and the original algorithms.

Table 6.9: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. WOF is compared with the random-group-based MOEA/DVA, LMEA and S3-CMA-ES. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

WOF-SMPSO: vs WOF-NSGA-II 60.86/37.50/77.17/53.12 | vs WOF-Randomised 32.06/17.85/43.47/9.37 | vs randomLMEA 72.28/33.92/90.21/60.93 | vs randomMOEA/DVA 86.41/66.07/95.65/78.12 | vs randomS3-CMA-ES 89.67/85.71/94.56/76.56
WOF-NSGA-II: vs WOF-SMPSO 24.45/41.07/13.04/31.25 | vs WOF-Randomised 10.86/10.71/9.78/7.81 | vs randomLMEA 66.84/30.35/83.69/53.12 | vs randomMOEA/DVA 73.36/64.28/77.17/53.12 | vs randomS3-CMA-ES 89.13/87.50/93.47/76.56
WOF-Randomised: vs WOF-SMPSO 45.65/62.50/35.86/59.37 | vs WOF-NSGA-II 73.91/60.71/81.52/84.37 | vs randomLMEA 78.26/44.64/95.65/67.18 | vs randomMOEA/DVA 92.39/83.92/95.65/90.62 | vs randomS3-CMA-ES 91.84/89.28/95.65/82.81
randomLMEA: vs WOF-SMPSO 25.54/60.71/8.69/34.37 | vs WOF-NSGA-II 29.89/62.50/14.13/42.18 | vs WOF-Randomised 16.84/41.07/3.26/28.12 | vs randomMOEA/DVA 71.73/78.57/72.82/67.18 | vs randomS3-CMA-ES 75.54/94.64/80.43/67.18
randomMOEA/DVA: vs WOF-SMPSO 10.32/25.00/4.34/18.75 | vs WOF-NSGA-II 21.73/25.00/19.56/39.06 | vs WOF-Randomised 3.80/7.14/2.17/3.12 | vs randomLMEA 20.10/16.07/22.82/23.43 | vs randomS3-CMA-ES 59.23/80.35/56.52/48.43
randomS3-CMA-ES: vs WOF-SMPSO 7.60/7.14/5.43/20.31 | vs WOF-NSGA-II 9.23/8.92/6.52/20.31 | vs WOF-Randomised 5.97/5.35/3.26/14.06 | vs randomLMEA 18.47/3.57/15.21/23.43 | vs randomMOEA/DVA 31.52/10.71/34.78/39.06

Table 6.10: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. WOF is compared with ReMO using NSGA-II and MOEA/D. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

WOF-NSGA-II: vs ReNSGA-II 91.30/94.64/90.21/84.37 | vs WOF-MOEA/D 35.86/46.42/29.34/23.43 | vs ReMOEA/D 88.58/94.64/85.86/71.87 | vs WOF-Randomised 10.86/10.71/9.78/7.81
ReNSGA-II: vs WOF-NSGA-II 5.97/3.57/7.60/12.50 | vs WOF-MOEA/D 0.54/1.78/0.00/0.00 | vs ReMOEA/D 69.56/85.71/66.30/35.93 | vs WOF-Randomised 0.00/0.00/0.00/0.00
WOF-MOEA/D: vs WOF-NSGA-II 48.36/28.57/58.69/65.62 | vs ReNSGA-II 94.02/85.71/97.82/95.31 | vs ReMOEA/D 99.45/100.00/100.00/98.43 | vs WOF-Randomised 18.47/12.50/15.21/35.93
ReMOEA/D: vs WOF-NSGA-II 7.60/3.57/9.78/21.87 | vs ReNSGA-II 23.91/8.92/28.26/53.12 | vs WOF-MOEA/D 0.00/0.00/0.00/0.00 | vs WOF-Randomised 1.08/0.00/2.17/3.12
WOF-Randomised: vs WOF-NSGA-II 73.91/60.71/81.52/84.37 | vs ReNSGA-II 98.91/96.42/100.00/100.00 | vs WOF-MOEA/D 57.60/66.07/57.60/32.81 | vs ReMOEA/D 98.36/100.00/96.73/95.31


Table 6.11: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. WOF is compared with DLS-MOEA. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 102 / 42 low-scale / 60 large-scale instances, in this order; no many-objective instances are part of this comparison.)

WOF-SMPSO: vs WOF-NSGA-II 63.72/33.33/85.00 | vs WOF-Randomised 43.13/23.80/56.66 | vs DLS-MOEA 74.50/52.38/90.00
WOF-NSGA-II: vs WOF-SMPSO 21.56/40.47/8.33 | vs WOF-Randomised 11.76/14.28/10.00 | vs DLS-MOEA 61.76/35.71/80.00
WOF-Randomised: vs WOF-SMPSO 41.17/61.90/26.66 | vs WOF-NSGA-II 68.62/52.38/80.00 | vs DLS-MOEA 70.58/42.85/90.00
DLS-MOEA: vs WOF-SMPSO 19.60/35.71/8.33 | vs WOF-NSGA-II 18.62/28.57/11.66 | vs WOF-Randomised 11.76/19.04/6.66

Table 6.12: Winning rates using the IGD indicator for different problem categories using 10,000,000 evaluations. WOF is compared with MOEA/DVA and LMEA. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(All 42 instances in this comparison are large-scale; each cell shows a single winning rate in percent.)

WOF-SMPSO: vs WOF-Randomised 38.09 | vs LMEA 66.66 | vs MOEA/DVA 47.61
WOF-Randomised: vs WOF-SMPSO 50.00 | vs LMEA 76.19 | vs MOEA/DVA 47.61
LMEA: vs WOF-SMPSO 26.19 | vs WOF-Randomised 19.04 | vs MOEA/DVA 16.66
MOEA/DVA: vs WOF-SMPSO 47.61 | vs WOF-Randomised 47.61 | vs LMEA 76.19

Table 6.13: Winning rates using the IGD indicator for different problem categories using 10,000,000 evaluations. WOF is compared with MOEA/DVA, LMEA and S3-CMA-ES. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(All 28 instances in this comparison are large-scale; each cell shows a single winning rate in percent.)

WOF-SMPSO: vs WOF-Randomised 42.85 | vs LMEA 82.14 | vs MOEA/DVA 64.28 | vs S3-CMA-ES 82.14
WOF-Randomised: vs WOF-SMPSO 46.42 | vs LMEA 82.14 | vs MOEA/DVA 64.28 | vs S3-CMA-ES 75.00
LMEA: vs WOF-SMPSO 17.85 | vs WOF-Randomised 14.28 | vs MOEA/DVA 21.42 | vs S3-CMA-ES 53.57
MOEA/DVA: vs WOF-SMPSO 28.57 | vs WOF-Randomised 32.14 | vs LMEA 67.85 | vs S3-CMA-ES 60.71
S3-CMA-ES: vs WOF-SMPSO 17.85 | vs WOF-Randomised 17.85 | vs LMEA 42.85 | vs MOEA/DVA 32.14

The numbers in Tables 6.2 and 6.3 show that the randomised version of WOF is able to improve the performance even further. The 4 algorithms that are used internally may perform differently on different problems. By using all of them with an expected share of 1/4 each, there is a high chance that the respective best method will produce some solutions which enable the rest of the algorithm to advance towards better solutions as well. Since the second half of the randomised WOF uses only the NSGA-III optimiser, a good diversity is also expected for the final solution sets. The winning rates of the 4 original algorithms against the randomised WOF are in general very low, with 2.71% for SMPSO and 3.8%, 12.5% and 13.04% for NSGA-II, NSGA-III and MOEA/D respectively.

Note that these numbers even include the low-dimensional problems. If applied to large-scale problems only, randomised WOF outperforms all algorithms by far, winning for instance 100.0% of all large-scale problems against the original NSGA-II, and over 90% of instances against SMPSO, NSGA-III and MOEA/D. It also outperforms the other WOF versions in many cases, winning 81.52% of the large-scale problems against WOF-NSGA-II and 72.82% against WOF-NSGA-III. With around 35%, WOF-SMPSO is the only algorithm against which the randomised WOF loses more often than it wins. All this shows that a combination of different algorithms inside one framework can be a promising direction for future large-scale algorithms.
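As a brief illustration of the mechanism described above, the following sketch shows how the optimiser selection in the randomised WOF could look: one of the four embedded algorithms is chosen uniformly at random (expected share of 1/4 each) during the first half of the evaluation budget, and only NSGA-III is used afterwards. The function and constant names are illustrative assumptions; the actual implementation details are not reproduced here.

```python
import random

OPTIMISERS = ["NSGA-II", "SMPSO", "MOEA/D", "NSGA-III"]

def pick_optimiser(used_evaluations, max_evaluations=100_000):
    # first half of the budget: uniform random choice, expected share of 1/4 each
    if used_evaluations < max_evaluations / 2:
        return random.choice(OPTIMISERS)
    # second half: only NSGA-III, which is expected to maintain good diversity
    return "NSGA-III"
```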


Table 6.14: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. WOF is compared with modified versions of MOEA/DVA, LMEA and S3-CMA-ES. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(All 28 instances in this comparison are large-scale; each cell shows a single winning rate in percent.)

WOF-SMPSO: vs WOF-Randomised 46.42 | vs groupInfLMEA 96.42 | vs groupInfMOEA/DVA 96.42 | vs groupInfS3-CMA-ES 96.42
WOF-Randomised: vs WOF-SMPSO 28.57 | vs groupInfLMEA 100.00 | vs groupInfMOEA/DVA 96.42 | vs groupInfS3-CMA-ES 100.00
groupInfLMEA: vs WOF-SMPSO 3.57 | vs WOF-Randomised 0.00 | vs groupInfMOEA/DVA 67.85 | vs groupInfS3-CMA-ES 42.85
groupInfMOEA/DVA: vs WOF-SMPSO 0.00 | vs WOF-Randomised 0.00 | vs groupInfLMEA 14.28 | vs groupInfS3-CMA-ES 21.42
groupInfS3-CMA-ES: vs WOF-SMPSO 3.57 | vs WOF-Randomised 0.00 | vs groupInfLMEA 42.85 | vs groupInfMOEA/DVA 67.85

Table 6.15: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. The proposed WOF, GLMO and LCSA are compared using NSGA-II. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

WOF-NSGA-II: vs xNSGA-II 47.82/17.85/66.30/42.18 | vs GroupLinkNSGA-II 39.13/48.21/38.04/37.50
xNSGA-II: vs WOF-NSGA-II 16.30/21.42/14.13/20.31 | vs GroupLinkNSGA-II 33.69/50.00/23.91/46.87
GroupLinkNSGA-II: vs WOF-NSGA-II 41.30/21.42/53.26/35.93 | vs xNSGA-II 51.08/21.42/67.39/42.18

Table 6.16: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. The proposed WOF, GLMO and LCSA are compared using NSGA-III. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

WOF-NSGA-III: vs xNSGA-III 45.10/17.85/61.95/28.12 | vs GroupLinkNSGA-III 37.50/41.07/38.04/37.50
xNSGA-III: vs WOF-NSGA-III 33.15/46.42/27.17/51.56 | vs GroupLinkNSGA-III 36.41/48.21/29.34/48.43
GroupLinkNSGA-III: vs WOF-NSGA-III 44.56/23.21/55.43/48.43 | vs xNSGA-III 51.08/21.42/64.13/40.62

Table 6.17: Winning rates using the IGD indicator for different problem categories using 100,000 evaluations. The proposed WOF, GLMO and LCSA are compared using SMPSO. Each row shows the amount of wins (based on statistical significance) against the respective algorithms in the columns.

(Each cell: percentage of wins of the row algorithm on all 184 / 56 low-scale / 92 large-scale / 64 many-objective instances, in this order.)

WOF-SMPSO: vs xSMPSO 60.32/67.85/61.95/28.12 | vs GroupLinkSMPSO 65.76/58.92/66.30/73.43
xSMPSO: vs WOF-SMPSO 9.78/7.14/9.78/20.31 | vs GroupLinkSMPSO 32.06/21.42/30.43/64.06
GroupLinkSMPSO: vs WOF-SMPSO 13.58/3.57/19.56/6.25 | vs xSMPSO 44.56/32.14/54.34/10.93


[Figure: IGD value (log scale) plotted over the number of function evaluations (0 to 1.0 × 10^5). Panel (a): SMPSO, NSGA-II, their WOF versions and WOF-Randomised. Panel (b): MOEA/D, NSGA-III and their WOF versions.]

Figure 6.1: Convergence behaviour of the original and respective WOF algorithms on the 3-objective UF3 problem with 1000 variables.

Next, we take a brief look at the convergence behaviour to examine whether the good performance is reflected not only in the final solution sets after the complete 100,000 function evaluations, but also during the search process. In Figs. 6.1 and 6.2 we show examples of the performance of the algorithms over the number of function evaluations for the UF3 problem with 2 objectives and 1000 variables as well as LSMOP5 with 5 objectives and 1000 variables. In these figures we observe not only that all four WOF versions perform better than the respective original algorithms, but also a rapid convergence towards low IGD values in the very beginning of the search process. This finding is consistent with the observations in [1, 6], where WOF shows a very fast convergence in all problem instances as well. The observed behaviour reflects the ability of WOF to converge easily to promising areas of the search space with the transformed, lower-dimensional problem. This is especially useful in the beginning, while in later phases of the search the spread along the PF is more important and the transformed problems cannot lead to large jumps in solution quality any more.
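For reference, the IGD values plotted in these figures (and used in all winning-rate tables) measure the average distance from a reference front to the obtained solution set, so lower values are better. The following sketch shows the basic computation; normalisation and other implementation details of the experiments are not reproduced here.

```python
import numpy as np

def igd(solutions: np.ndarray, reference_front: np.ndarray) -> float:
    """Inverted Generational Distance: mean distance from each reference point
    to its nearest obtained solution (rows are points in objective space)."""
    # pairwise Euclidean distances: one row per reference point, one column per solution
    dists = np.linalg.norm(reference_front[:, None, :] - solutions[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```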

In general, this behaviour of the WOF algorithm is beneficial for the overall search, and it is a desirable property that can lead to advantages in practical applications.

WOF is able to provide a reasonable solution quality after a very small number of function evaluations, which might be helpful when resources are limited. From that point, it gradually improves, and the selection of multiple pivot solutions helps to keep diversity despite the fast convergence, preventing the population from collapsing to a small area of the PF.
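To illustrate why the transformed problem can move the population so quickly, the following hedged sketch shows the kind of weight-based transformation WOF relies on: a pivot solution and a variable grouping reduce the search to one weight per group. The product-style transformation, the clipping to the box constraints and all names are assumptions made for this illustration; the transformation functions actually used in this thesis may differ.

```python
import numpy as np

def apply_weights(weights, x_prime, groups, lower, upper):
    """Build a full n-dimensional solution from a pivot solution x_prime and
    one weight per variable group, so only len(weights) values are optimised."""
    x = np.array(x_prime, dtype=float)
    for w, variable_indices in zip(weights, groups):
        x[variable_indices] = w * x[variable_indices]  # scale the whole group at once
    return np.clip(x, lower, upper)                    # respect the box constraints
```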

[Figure: IGD value (log scale) plotted over the number of function evaluations (0 to 1.0 × 10^5). Panel (a): SMPSO, NSGA-II, their WOF versions and WOF-Randomised. Panel (b): MOEA/D, NSGA-III and their WOF versions.]

Figure 6.2: Convergence behaviour of the original and respective WOF algorithms on the 5-objective LSMOP5 problem with 1000 variables.

6.3 Evaluation of the Grouped and Linked Mutation