
around 500,000 evaluations. In contrast, LMEA and MOEA/DVA need several million evaluations in both benchmarks before they obtain the interaction-based groups. In MOEA/DVA, however, this is not immediately visible, since the algorithm saves the solutions created during the interaction analysis and constantly updates its initial population during this process. Especially in Fig. 6.11a it can be seen that the IGD values improve gradually until the optimisation starts after around 9 million evaluations, and rapidly afterwards.
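For context, the IGD values discussed here measure the average distance from a set of reference points on the true Pareto front to the nearest solution obtained by the algorithm, so that lower values indicate a better approximation. The following minimal Python sketch shows this computation, assuming the reference front and the obtained objective vectors are given as NumPy arrays; the function and variable names are illustrative and not taken from the implementations used in this chapter.

```python
import numpy as np

def igd(reference_front: np.ndarray, obtained_front: np.ndarray) -> float:
    """Inverted Generational Distance: mean distance from each reference
    point to its nearest obtained objective vector (lower is better)."""
    # Pairwise Euclidean distances between reference points and solutions.
    diffs = reference_front[:, None, :] - obtained_front[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    # For every reference point, keep only the closest obtained solution.
    return float(dists.min(axis=1).mean())
```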

Another interesting observation is the sudden increase in the IGD of S3-CMA-ES. This algorithm has two stages: the independent optimisation of the populations until convergence, and the diversity optimisation. It is visible that the IGD converges in the beginning; once no further improvement is possible, the algorithm detects that all populations have converged, starts the optimisation of the diversity-related variables, and creates new independent populations for the next iteration. However, this mechanism seems to increase the IGD values in both benchmark functions and leads to an overall worse performance. This insight can be valuable for reconsidering how the diversity optimisation is carried out in S3-CMA-ES, or how the preservation of good solutions in an archive is implemented in future versions of the algorithm.
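To illustrate why this behaviour can hurt the IGD, the following Python skeleton sketches the described two-stage loop in a highly simplified form. It is not the actual S3-CMA-ES implementation; the stage functions passed in as parameters are placeholders. The point of the sketch is that, when new independent populations are created for the next iteration, converged solutions that are not preserved in an external archive are lost, which is one plausible way the observed increase in IGD can arise.

```python
def two_stage_loop(initialise, optimise_convergence, optimise_diversity,
                   budget, archive):
    """Illustrative skeleton of a two-stage optimise-then-restart scheme.

    initialise, optimise_convergence, optimise_diversity: caller-supplied
    placeholder functions; archive: a list collecting solutions across restarts.
    """
    evaluations = 0
    populations = initialise()
    while evaluations < budget:
        # Stage 1: optimise convergence-related variables independently
        # until every sub-population has converged.
        populations, used = optimise_convergence(populations)
        evaluations += used

        # Stage 2: optimise the diversity-related variables.
        populations, used = optimise_diversity(populations)
        evaluations += used

        # Without this archive update, the restart below discards the
        # converged solutions found so far.
        archive.extend(populations)

        # Restart: create new independent populations for the next iteration.
        populations = initialise()
    return archive
```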

6.7 Influence and Efficiency of Grouping Mechanisms

As seen in the previous experiments, the performance of the interaction-based methods can hardly be compared directly with one another, since the varying computational budget then allows the respective algorithm to use more evaluations for the optimisation.

Therefore, in this section we examine the performance of the three interaction-based methods from the previous experiments in further detail. The goal of this experiment is to find out what influence interaction-based groups have on the actual performance of the three most prominent algorithms using this concept. This analysis is especially interesting because “good” variable groups might in practice be obtained through the inclusion of expert knowledge; it is therefore of interest to know whether MOEA/DVA, LMEA and S3-CMA-ES remain competitive in such scenarios, even when only a limited computational budget is available for the actual optimisation phase of the problem.

Experimental Outline

To test for the influence of the interaction-based groups and to assess whether the search mechanisms alone are competitive, we implement special versions of MOEA/DVA, LMEA and S3-CMA-ES as follows. The normal version of each algorithm is modified so that the function evaluations used during the group-finding phase are not counted. Each algorithm therefore performs its contribution-based and interaction-based grouping mechanisms and starts the subsequent optimisation phase as if no evaluations had been used so far.

In addition, no population update is done in the case of MOEA/DVA, so that after finding the interaction-based groups, all algorithms start with a random initial population.
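A minimal sketch of this modification is given below, assuming a generic algorithm interface in which the group-finding phase and the optimisation phase can be invoked separately; the method and attribute names (find_groups, reset_evaluation_counter, optimise, and so on) are hypothetical and do not correspond to the actual implementations.

```python
import numpy as np

def run_group_influence_version(algorithm, problem, optimisation_budget, rng):
    """Run an interaction-based algorithm, but exclude the evaluations spent
    on the group-finding phase from the budget (hypothetical interface)."""
    # Phase 1: contribution- and interaction-based grouping; these
    # evaluations are deliberately not counted.
    groups = algorithm.find_groups(problem)

    # Reset the counter, i.e. pretend no evaluations have been used so far.
    problem.reset_evaluation_counter()

    # Start from a random initial population instead of reusing solutions
    # created during the interaction analysis (relevant for MOEA/DVA).
    population = rng.uniform(problem.lower_bounds, problem.upper_bounds,
                             size=(algorithm.population_size,
                                   problem.n_variables))

    # Phase 2: the actual optimisation, limited to the stated budget.
    return algorithm.optimise(problem, groups, population,
                              max_evaluations=optimisation_budget)
```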

For comparison, the WOF-SMPSO and WOF-Randomised algorithms are used. In the previous sections, these two algorithms have already been compared with the random-group-based versions (see Section 6.6.1 and Table 6.9) and the original versions of LMEA, MOEA/DVA and S3-CMA-ES on a large computational budget (see Section 6.6.2 and Table 6.12).

The algorithms perform the optimisation in these experiments with a budget of 100,000 function evaluations. This means that the WOF versions use the standard parameters as described above with simple variable groups, while the three related methods use interaction-based groups, but all algorithms use the same budget for the search process after the groups are formed.
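Summarising the setup of this experiment, the configuration can be stated compactly as follows; the key names are placeholders chosen for illustration rather than parameters of the actual framework.

```python
# Illustrative summary of the experimental configuration described above.
EXPERIMENT_CONFIG = {
    "optimisation_budget": 100_000,   # evaluations after the groups are formed
    "wof_grouping": "simple variable groups (standard WOF parameters)",
    "related_methods_grouping": "contribution- and interaction-based groups",
    "benchmark_problems": 28,         # large-scale problems, as in Section 6.6.2
    "quality_indicator": "IGD",
}
```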

We point out that this kind of experiment cannot usually be seen as a fair comparison, as the versions with the interaction-based groups actually use millions of additional evaluations to obtain knowledge of the problem. It would therefore not be surprising if the interaction-based algorithms outperformed the WOF versions in this case.

However, this experiment can reveal whether the high computational effort associated with interaction-based groups is actually justified and should be pursued in future large-scale algorithms. We can also obtain insight into whether the actual search procedure of the three algorithms is responsible for their good performance, and see how well these methods perform on a fairly low computational budget.

The remaining settings and parameters of this experiment are as described in Section 6.1 and Section 6.6.2, respectively. Since S3-CMA-ES is involved in the experiments, the same 28 large-scale benchmark problems as in the previous section are used. The results are analysed in detail in the following and, in particular, compared with the performance of the original versions of the three related interaction-based methods in the previous sections.

Results and Analysis

The resulting winning rates of the five algorithms are shown in Table 6.14. Since the algorithms are altered to assess the influence of variable groups, the versions of the three related algorithms in Table 6.14 are called “groupInfMOEA/DVA”, “groupInfLMEA” and “groupInfS3-CMA-ES”.

Similar to the previous experiments, the WOF methods obtain high winning rates compared with the other three algorithms. WOF-SMPSO performs significantly better than LMEA, MOEA/DVA and S3-CMA-ES in 27 out of 28 instances, and shows no statistical difference on the one remaining problem when compared with MOEA/DVA.

Similarly, WOF-Randomised wins in 100% of all cases against LMEA and S3-CMA-ES and in all but one instance against MOEA/DVA. None of the three related algorithms is able to win a single time against the randomised WOF version.
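For clarity, the winning rate of one algorithm against another can be understood as the fraction of benchmark instances on which the first algorithm is significantly better according to a pairwise statistical test over the independent runs. A minimal sketch of such a computation is given below, using the Wilcoxon rank-sum test at a 0.05 significance level and lower-is-better IGD values as an example; the concrete test and settings used in this chapter are those referenced in Section 6.1 and may differ from this illustration.

```python
import numpy as np
from scipy.stats import ranksums

def winning_rate(igd_runs_a, igd_runs_b, alpha=0.05):
    """Fraction of benchmark instances on which algorithm A is significantly
    better (lower median IGD) than algorithm B.

    igd_runs_a, igd_runs_b: lists with one entry per benchmark instance,
    each entry holding the IGD values of the independent runs on that instance.
    """
    wins = 0
    for runs_a, runs_b in zip(igd_runs_a, igd_runs_b):
        _, p_value = ranksums(runs_a, runs_b)
        significant = p_value < alpha
        better_median = np.median(runs_a) < np.median(runs_b)
        if significant and better_median:
            wins += 1
    return wins / len(igd_runs_a)
```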

These winning rates of the WOF algorithms are higher than the ones obtained in the above experiment in Table 6.12. In those experiments, all algorithms used 10,000,000 evaluations, and the three related methods used up a major part of this budget for their interaction analysis. Nonetheless, they still used more than 1,000,000 evaluations afterwards for the actual optimisation, which enabled them to obtain competitive IGD values on some benchmark functions. The only difference in the present experiment is that once the contribution-based and interaction-based groups are obtained, the algorithms can use just 100,000 evaluations for the optimisation. The numbers in Table 6.14 indicate that this amount is not sufficient to make use of the obtained variable groups. We can conclude that even when optimal groups (or groups of suitable quality) have been found, or are given from external sources, the search mechanisms of LMEA, MOEA/DVA and S3-CMA-ES can only obtain good results when used with a sufficiently large computational budget. If only a limited budget is available, methods like WOF make better use of these computational resources.

Next, we take a look at the differences from the random-group-based versions of the algorithms. In Table 6.9 we saw that LMEA in most cases performs better than MOEA/DVA and S3-CMA-ES on the large-scale problems when random groups are used, winning in 72% and 80% of the instances, respectively. Slightly different results are observed in Table 6.14,