
5.1 The Weighted Optimisation Framework

5.1.2 Dimensionality Reduction

To reduce the dimensionality of the problem, the concept of variable groups is used as introduced in Section 2.4. With the help of variable groups, we are able to reduce the dimensionality of the vector w~ as follows. The original n variables of the problem Z are divided into γ groups (G1, ..., Gγ) using a grouping mechanism Γ. These can be evenly sized or obtained by any other simple or intelligent mechanism. For simplicity, and without loss of generality, it is assumed in the following argumentation that evenly sized groups, each of size l (hence γ · l = n), are used. This assumption can later be relaxed again.

Since the new, transformed problem optimises the variables in the vector w~, the task is now to reduce its dimensionality. Instead of assigning one wi to each xi in the transformation function, only one weight wj is used for all variables within the same group Gj with l variables. The vector applied inside the transformation functions is then

\vec{w} = (\underbrace{w_1, w_1, \ldots, w_1}_{l \text{ times}}, \ldots, \underbrace{w_\gamma, w_\gamma, \ldots, w_\gamma}_{l \text{ times}}) \tag{5.5}


Since there are only γ different values to be optimised, the vector w~ effectively reduces to (w1, ..., wγ). Therefore, the above-mentioned example transformation function can then

be reformulated as:

\psi(\vec{w}, \vec{x}) = (\underbrace{w_1 x_1, \ldots, w_1 x_l}_{\text{use } w_1}, \ldots, \underbrace{w_\gamma x_{n-l+1}, \ldots, w_\gamma x_n}_{\text{use } w_\gamma}) \tag{5.6}

As a result, the size of the vector w~ is reduced to γ < n (the same as the number of groups), and the newly formed optimisation problem Z~x0 has only γ decision variables. We can now pick an arbitrary but fixed solution ~x0 and compute an approximation of the Pareto-set by optimising this new optimisation problem instead [1]. These fixed solutions, which need to be picked before using the transformed problem, are called “pivot solutions” or “candidate solutions” in the following.
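The grouped transformation of Eqs. (5.5) and (5.6) can be sketched in a few lines of code. The following is a minimal illustration (not the implementation from [1]), assuming evenly sized groups and multiplication as the transformation function:

```python
import numpy as np

def psi(w, x0, group_size):
    """Grouped multiplicative transformation, cf. Eq. (5.6).

    Each of the gamma weights in w is applied to all `group_size`
    consecutive variables of the pivot solution x0, so only gamma
    values need to be optimised instead of n = gamma * group_size.
    """
    # Expand (w_1, ..., w_gamma) to the length-n vector of Eq. (5.5)
    w_expanded = np.repeat(w, group_size)
    # Apply the expanded weights to the fixed pivot solution
    return w_expanded * x0

# Example: n = 6 variables, gamma = 2 groups, each of size l = 3
x0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # fixed pivot solution
w = np.array([0.5, 2.0])                       # only gamma = 2 decision variables
print(psi(w, x0, group_size=3))                # -> [0.5, 1.0, 1.5, 8.0, 10.0, 12.0]
```

Note that the optimiser only ever sees the γ-dimensional vector w; the pivot x0 stays fixed inside the transformation.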

The drawback of this approach is that the reduction of dimensionality limits the search to certain solutions that can be reached within the transformed problem. This happens due to the non-surjective transformation function, but more importantly due to the reachable subspace defined by the variable groups. By grouping the original variables together and applying the same wj to a number of them, their values cannot be changed independently of each other any more, as they are all changed using the same wj. This substantially limits the reachable solutions in the original search space.

Geometrically, this dimensionality reduction can be seen as cutting a γ-dimensional subspace out of the n-dimensional original search space. The searchable subspace of Ω, which can be reached by optimising Z~x0, is defined mainly by three choices [1]:

1. The choice of the pivot solution ~x0

2. The amount of groups and the decision of which variables are put together in the same group, i.e. the used mechanism Γ

3. The choice of the transformation function ψ(·)

1. Choice of Pivot Solutions

An appropriate choice of ~x0 is important. The chosen solution remains fixed while the weights are optimised, and its variables are used for the creation of new solutions through applying the weight variables. Together with the two other points listed above, it defines the reachable subspace of Ω. In the following, a small example is used to illustrate this effect.

As an example, consider an optimisation problem with two decision variables x1 and x2. In Fig. 5.1, the decision space of this problem is shown together with a hypothetical Pareto-set P. When we optimise the original problem Z, both of the variables can be altered separately, and therefore the whole decision space can be searched. As a result, it is possible to obtain a good approximation of P.

[Figure 5.1: Examples for different choices of the pivot solution ~x0 in the decision space. (a) “Suboptimal” choice of ~x0; (b) “Better” choice of ~x0. Since x1 and x2 belong to the same group, the resulting reachable subspace is shown as a red, straight line. The Pareto-set is shown as the line P in blue colour. The suboptimal pivot in Subfigure (a) does not lead to the discovery of optimal solutions, but can help to advance closer to them. Thus, it can still be useful for the overall search. Graphic taken from [1].]

At this point, we use a variable grouping mechanism and put both variables, x1 and x2, into the same group. As described above, to reduce the dimensionality of the problem, we apply only a single weight w1 to all variables in this group inside the transformation function. Thus, we reduce the dimensionality of the problem from n = 2 to γ = 1. The newly formed optimisation problem Z~x0 results in the optimisation of just one single variable w1. This limits the reachable solutions to a 1-dimensional subspace [1].

If again multiplication is used as the transformation function as in the example above, the new search space of Z~x0 is shown as a red line from a chosen pivot solution ~x0 to the origin in Fig. 5.1.

When we compare Figs. 5.1a and 5.1b, we observe that the ability to reach the optimal solutions in this new search space depends on the choice of ~x0. The same principle applies in higher-dimensional search spaces. Every group of variables defines a subset of solutions that can be reached by the new optimisation process. It is to be expected, however, that even “suboptimal” choices of ~x0 as in Fig. 5.1a can help the overall search process to advance closer to certain parts of the Pareto-set [1].


2. Choice of Grouping Mechanism

The grouping of the variables has, similar to the location of ~x0, an influence on the subspace of Ω in which the search takes place [1]. In the example above, this cannot directly be observed, since there are only two decision variables. However, if we extend this example to a 3-dimensional search space, we find a situation as shown in Fig. 5.2. A 3-dimensional search space is shown, where we use the solution ~x0 as pivot to perform the transformation. We can see here that even when using the same solution ~x0 in all subfigures, the searchable subspace of the original 3-dimensional space differs largely depending on which variables end up together in the same group. In Fig. 5.2a, x1 and x2 are grouped together, while a second group consists only of x3. The resulting search space is 2-dimensional, since there are two groups and therefore two variables w1 and w2. The resulting subspace that can be reached by optimising these two variables is denoted as a red shaded area in the figure. We can now observe that this subspace differs largely in Figs. 5.2b and 5.2c, which show the same situation with different groups, where x1 and x3 or x2 and x3 are put in a common group respectively. The last option is to put all variables in the same group and optimise only one weight for this group, as seen in the previous 2-dimensional example. The corresponding situation in this 3-dimensional case is shown in Fig. 5.2d, where only a 1-dimensional subspace can be reached by optimising the one variable w1, which changes all three original variables x1, x2 and x3 at the same time.
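That the dimension of the reachable subspace equals the number of groups can be verified numerically for the linear (multiplicative) transformation: each group contributes one search direction, namely ~x0 with all variables outside that group set to zero, and the reachable subspace is the span of these directions. A sketch under these assumptions (the helper is hypothetical; variables are indexed from 0):

```python
import numpy as np

def reachable_directions(x0, groups):
    """One search direction per group: x0 restricted to that group's
    variables. For a multiplicative transformation, the reachable
    subspace is the span of these directions."""
    dirs = []
    for group in groups:
        d = np.zeros_like(x0)
        d[list(group)] = x0[list(group)]  # keep only this group's entries
        dirs.append(d)
    return np.array(dirs)

x0 = np.array([1.0, 2.0, 3.0])
# Grouping as in Fig. 5.2a: {x1, x2} and {x3} -> a 2-dimensional subspace
print(np.linalg.matrix_rank(reachable_directions(x0, [[0, 1], [2]])))  # 2
# Grouping as in Fig. 5.2d: {x1, x2, x3} -> a 1-dimensional subspace
print(np.linalg.matrix_rank(reachable_directions(x0, [[0, 1, 2]])))    # 1
```

The rank (and hence the subspace dimension) equals γ whenever the pivot has non-zero entries in every group, which makes the geometric intuition of Fig. 5.2 explicit.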

3. Choice of Transformation Function

The third main influence on the reachable subspace of the newly formed problem is the transformation function. It also defines in which way the solutions’ locations in the newly created search space change in response to a change in the variables wj. Moreover, it can affect the geometry of the lower-dimensional space, e.g. by using linear or non-linear transformation functions.

By defining a subspace in which new solutions can be produced based on the original values of ~x0, the transformation process can also be regarded as a kind of “neighbourhood” function of the pivot solution. However, in mutation operators or local search methods, the term “neighbourhood” is usually used to indicate a locality around the original solution. This is not the case for the solutions created by the transformation process. Given that the transformation is usually used to produce more diverse solutions and explore larger areas of the original search space, such locality is actually not desirable in this situation. Instead, it is assumed that a transformation can enable an algorithm to make large steps in the search space, depending on the values of the wj. Different examples of transformation functions are given below.

[Figure 5.2: Example of the influence of different variable groups on a 3-dimensional example with a linear transformation function. (a) Group 1 = {x1, x2}, Group 2 = {x3}; (b) Group 1 = {x1, x3}, Group 2 = {x2}; (c) Group 1 = {x1}, Group 2 = {x2, x3}; (d) Group 1 = {x1, x2, x3}.]