
2.2 Optimisation

2.2.2 Multi-objective Optimisation

Multi-objective optimisation (MOO) is the process of finding the best possible solution to a problem that has multiple conflicting objectives; in other words, it seeks a solution that maximises or minimises several objectives simultaneously. MOO is challenging because there is often no single best solution. Instead, there is a set of solutions, each of which is optimal for a particular trade-off between the objectives. The challenge in MOO is to find the best possible solutions for the overall problem, not just for a specific set of objectives. MOO algorithms search through the space of all possible solutions, guided by a set of objectives that are used to evaluate the quality of each candidate. The most common type of MOO algorithm is the EA. EAs are optimisation algorithms that use a population of solutions, which is evolved over time through a process of selection, crossover and mutation. EAs are well suited to MOO because they can optimise multiple objectives simultaneously.

EAs can also handle problems with many variables and constraints. MOO algorithms are used to solve a range of problems in fields such as engineering, economics and operations research. There are several MOO algorithms, each with its own strengths and weaknesses. The choice of algorithm depends on the specific problem being solved. MOO is a powerful tool for solving complex problems. However, MOO algorithms can be computationally expensive and they may not always find the best possible solutions [Deb11a, Gol89, Mic96].

Figure 2.2 shows the two primary goals of MOO methodology. The first goal is convergence, the closeness of the obtained solutions to the true Pareto-front; the second is diversity, a measure of how well the solutions are spread along the Pareto-front.

Real-world problems often contain multiple conflicting objectives. The research community uses the term multi-objective problem (MOP) for such problems. Equation (2.1) shows a mathematical formulation.

In MOO one is confronted with several conflicting objectives f_i(⃗x), i = 1, …, m, which are to be optimised (without loss of generality, we assume minimisation):

$$Z: \quad \min \vec{f}(\vec{x}) = (f_1(\vec{x}), f_2(\vec{x}), \dots, f_m(\vec{x}))^T \quad \text{s.t.} \quad \vec{x} \in \Omega \tag{2.1}$$

where ⃗x corresponds to a decision vector in the n-dimensional feasible decision space Ω. The solution of this problem is a set of so-called Pareto-optimal solutions, denoted by P. Pareto-optimality refers to a situation (or solution) in which an objective value cannot be improved without worsening at least one other. The concept introduces a partial ordering on a set of solutions, which is used to rank them.

To compare two solutions, it must be determined whether one of them dominates the other in the Pareto sense. This is resolved by using the Pareto-dominance criterion, defined in Equation (2.2); Figure 2.3a shows an example. For the set of Pareto-optimal solutions it follows that for each ⃗x ∈ P, there is no other ⃗y ∈ Ω which dominates ⃗x (denoted by ⃗y ≺ ⃗x):

$$\vec{y} \prec \vec{x}: \quad f_i(\vec{y}) \le f_i(\vec{x}),\ \forall i = 1, \dots, m \ \ \wedge\ \ \exists j: f_j(\vec{y}) < f_j(\vec{x}) \tag{2.2}$$

Hence, the solutions in P are all Pareto-optimal and mutually indifferent.
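As a concrete illustration, the dominance test of Equation (2.2) can be sketched in a few lines of Python (a minimal, illustrative implementation for minimisation; the function name is ours):

```python
from typing import Sequence

def dominates(y: Sequence[float], x: Sequence[float]) -> bool:
    """Pareto-dominance for minimisation, cf. Equation (2.2):
    y dominates x iff y is no worse than x in every objective
    and strictly better in at least one."""
    no_worse = all(yi <= xi for yi, xi in zip(y, x))
    strictly_better = any(yi < xi for yi, xi in zip(y, x))
    return no_worse and strictly_better

print(dominates((1, 3), (2, 3)))  # True: better in f1, equal in f2
print(dominates((1, 3), (3, 1)))  # False: the two are indifferent
print(dominates((3, 1), (1, 3)))  # False: indifferent the other way, too
```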

Pareto-optimality is defined as follows:

$$\vec{x} \in \Omega \text{ is Pareto-optimal} \iff \nexists\, \vec{y} \in \Omega : \vec{y} \prec \vec{x} \tag{2.3}$$

These solutions are usually represented in the so-called decision space Ω (also called the search space), which represents the decision variables. The optimal solutions in this space construct the Pareto-set (PS). The image of these solutions in the objective space constitutes the Pareto-front (PF). Formally, the PS is defined as the set of all Pareto-optimal solutions:

$$\text{PS} := \{\vec{x} \mid \vec{x} \text{ is Pareto-optimal}\} \tag{2.4}$$

The PF is the set of points obtained by applying the objective function vector to the Pareto-optimal solutions:

$$\text{PF} := \{\vec{f}(\vec{x}) \mid \vec{x} \in \text{PS}\} \tag{2.5}$$

The goal of MOO algorithms is to find several Pareto-optimal solutions which can provide a good representation of the Pareto-front.
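For a finite set of candidate objective vectors, the non-dominated subset, a discrete approximation of the PF, can be filtered out directly. The following Python sketch and the sample points are purely illustrative:

```python
def pareto_front(points):
    """Return the non-dominated objective vectors among `points`
    (minimisation), i.e. a finite approximation of the PF."""
    def dominates(y, x):
        return (all(a <= b for a, b in zip(y, x))
                and any(a < b for a, b in zip(y, x)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # [(1, 5), (2, 3), (4, 1)]
```

Note that (3, 4) is dominated by (2, 3) and (5, 5) by every other point, so neither appears in the front.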

In an MOP, the n-dimensional decision space Ω is mapped to the m-dimensional objective space M. There are m fitness functions that compute the objective values of a solution. The optimisation of an MOP aims to minimise or maximise these functions simultaneously. Such fitness functions are also called objective functions. Chapter 5 presents different encoding schemes of the pathfinding problem, which change the definition of Ω. The decision space can be constrained to implement a feasibility measurement. In this thesis, the particular optimisation in the field of pathfinding problems is discussed. Therefore, we assume Ω to be a subspace of all possible paths from the start to the end points, denoted by S. The space S can be further constrained by a number of inequalities expressed by some function ⃗g(p), where p is a solution path. The decision vector ⃗x is, in the scope of this thesis, a path p; hence p = ⃗x.

$$S = \{\, p = (n_1, \dots, n_k) \mid n_i \in V,\ i = 1, \dots, k \ \wedge\ \exists\, \varphi(e_{i,i+1}) = (n_i, n_{i+1}) \in E,\ i = 1, \dots, k-1 \,\} \tag{2.6}$$

Equation (2.6) shows the mathematical definition of the search space. Therefore, Ω = {p ∈ S | ⃗g(p) ≤ 0} ⊆ S. However, direct constraint handling is outside the scope of this thesis and is not addressed. Constraints in pathfinding problems are often set in the environment; furthermore, in this thesis we consider only minimisation problems.
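The membership condition of Equation (2.6) can be checked mechanically for a candidate path. The tiny graph below is invented purely for illustration:

```python
# Hypothetical graph: node set V and directed edge set E (as node pairs).
V = {"s", "a", "b", "t"}
E = {("s", "a"), ("a", "b"), ("b", "t"), ("s", "b")}

def in_search_space(p):
    """True iff path p = (n_1, ..., n_k) lies in S: every node is in V and
    every consecutive pair of nodes is connected by an edge in E."""
    return (all(n in V for n in p)
            and all((p[i], p[i + 1]) in E for i in range(len(p) - 1)))

print(in_search_space(("s", "a", "b", "t")))  # True
print(in_search_space(("s", "t")))            # False: no edge (s, t)
```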

Aside from the mentioned Pareto-dominance, other dominance criteria can be implemented, as follows. Figure 2.3 illustrates three such criteria.

ε-Dominance The ε-dominance introduces a factor ε ∈ ℝ>0 which enlarges the area that is dominated by a solution. Applying it to a Pareto-front results in a set of ε-optimal alternatives with a limited number of solutions. Figure 2.3b shows a visual example, and it is defined as follows [PY00]:

$$\vec{y} \preceq_\varepsilon \vec{x}: \quad f_i(\vec{y}) - \varepsilon_i \le f_i(\vec{x}),\ \forall i = 1, \dots, m \ \ \wedge\ \ \exists j: f_j(\vec{y}) - \varepsilon_j < f_j(\vec{x}) \tag{2.7}$$

Cone-Dominance In [KWZ84], a cone-shaped domination relation is described. With such a relation, specific features of a Pareto-front can be found. For instance, solution candidates that are inferior to other solutions in one objective, yet non-dominated, can be dominated if cone-dominance is used [IKK01, BCGR11]. In other words, with cone-dominance, a cone (defined by an angle) defines the area that is dominated. Cone-dominance is also known as α-dominance. Figure 2.3c shows a visual example, and it is defined as follows (using angle ϕ):

$$\vec{y} \preceq_\alpha \vec{x}: \quad \omega_i(\vec{y}) \le \omega_i(\vec{x}),\ \forall i = 1, \dots, m \ \ \wedge\ \ \exists j: \omega_j(\vec{y}) < \omega_j(\vec{x}) \tag{2.8}$$

where

$$\omega_i(\vec{x}) = f_i(\vec{x}) + \sum_{j=1,\, j \ne i}^{m} a_{ij} f_j(\vec{x}),\ i = 1, \dots, m, \qquad a_{ij} = \tan\!\left(\frac{\phi - 90°}{2}\right),\ \forall i, j,\ i \ne j \tag{2.9}$$
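Both relations can be sketched in Python. This is an illustrative, minimisation-only sketch: the function names and sample vectors are ours, and the cone transform applies Equation (2.9) with the single coefficient a = tan((ϕ − 90°)/2) for all i ≠ j:

```python
import math

def eps_dominates(y, x, eps):
    """ε-dominance, cf. Equation (2.7): shift y by ε, then test dominance."""
    s = [yi - e for yi, e in zip(y, eps)]
    return (all(si <= xi for si, xi in zip(s, x))
            and any(si < xi for si, xi in zip(s, x)))

def cone_dominates(y, x, phi_deg):
    """Cone-dominance, cf. Equations (2.8)-(2.9): compare the transformed
    objectives ω_i(f) = f_i + a·Σ_{j≠i} f_j with a = tan((ϕ − 90°)/2).
    ϕ = 90° gives a = 0, i.e. plain Pareto-dominance."""
    a = math.tan(math.radians((phi_deg - 90.0) / 2.0))
    def omega(f):
        total = sum(f)
        return [fi + a * (total - fi) for fi in f]
    wy, wx = omega(y), omega(x)
    return (all(i <= j for i, j in zip(wy, wx))
            and any(i < j for i, j in zip(wy, wx)))

# (1, 4) and (0.9, 6) are mutually non-dominated in the Pareto sense,
# but a cone wider than 90° lets (1, 4) dominate (0.9, 6).
print(cone_dominates((1, 4), (0.9, 6), 90))         # False
print(cone_dominates((1, 4), (0.9, 6), 120))        # True
print(eps_dominates((2, 3.5), (1, 4), (1.5, 1.5)))  # True
```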

An advantage of the cone-dominance relation is its ability to find knee-points [BDDO04] in a Pareto-front, which can be of great interest to DMs. The reason is that a neighbouring solution to the knee-point on the front often has an unfavourable trade-off [AD13, DG11].

Typically, problems with m > 1 are called multi-objective problems, whereas problem instances with m > 3 are called many-objective optimisation problems.

Figure 2.3: Different Dominance Relations. (a) Pareto-dominance; (b) ε-dominance (comparing f(⃗y) − ε against f(⃗x)); (c) cone-dominance with cone angle ϕ. Axes: f1(⃗x) vs. f2(⃗x).

In many-objective optimisation, various challenges arise [DS05, GFPC09, ZZN+19]. One of them derives from the fact that as the number of objectives increases, so does the proportion of non-dominated solutions [Deb11b, GFPC09].

This characteristic makes methodologies based solely on Pareto-dominance less suitable for many-objective optimisation. It can happen that a large proportion of the solution set is non-dominated and focusing on those solutions is not beneficial to the search process as there is little room for new solutions [DJ14].

This is known as the loss of selection pressure [ZZN+19]. Another challenge is that measuring the diversity becomes computationally more expensive in high-dimensional spaces. Moreover, recombining solutions to generate new ones can be inefficient, as a few randomly chosen solutions from the population can be distant from each other, resulting in distant offspring solutions. Deb and Jain in [DJ14] stated that it is also difficult to represent the trade-off surface, as more points are needed with more dimensions. Furthermore, the computational costs of performance indicators can be high if there are many objectives. For instance, the computational effort of computing the hypervolume increases exponentially with the number of objectives [FPLI06, WHBH06a, DJ14]. Finally, presenting a solution set with many objectives visually is difficult.
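The growth of the non-dominated proportion is easy to reproduce empirically: sampling random objective vectors and counting the non-dominated ones shows the fraction rising towards 1 as m increases. This is an illustrative experiment; sample size and seed are arbitrary:

```python
import random

def nondominated_fraction(m, n=200, seed=1):
    """Fraction of mutually non-dominated points among n uniform random
    points in [0, 1]^m (minimisation)."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(m)] for _ in range(n)]
    def dominates(y, x):
        return (all(a <= b for a, b in zip(y, x))
                and any(a < b for a, b in zip(y, x)))
    nd = sum(1 for p in pts if not any(dominates(q, p) for q in pts))
    return nd / n

for m in (2, 5, 10):
    print(m, nondominated_fraction(m))  # the fraction grows with m
```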

Various methodologies have been developed to overcome these challenges. For instance, decomposition-based approaches can divide the objective space into equally spaced regions that enable the algorithm to focus on solutions along those vector lines. Such approaches divide the problem into several single- or multi-objective problems that are solved simultaneously by the algorithm. Aside from these kinds of algorithms handling many-objective problems, there are also domination-based approaches that improve either the dominance relation or the sorting mechanism. Moreover, indicator-based algorithms use a particular indicator to measure the quality of solutions during the optimisation, and objective-reduction-based approaches use a subset of objectives during the evaluation [ZZN+19]. Increasing convergence and diversity in the decision space can be beneficial for the performance measured in the objective space, since close solutions in one space can be distant from each other in the other space, a likely case in multi-objective pathfinding problems. For instance, focusing solely on the objective space can result in a large uncovered area in the decision space. Problems such as the multi-objective pathfinding problem are problems where the quality of solutions benefits from these approaches.

Figure 2.4: Pathfinding problem classes. The pathfinding problem splits into multi-agent PF (e.g. VRP, LRP, MoMAPF, ...) and single-agent PF, the latter comprising longest path planning (LPP), coverage path planning (CPP) and the shortest path problem (SPP), which is further divided into single-objective and multi-objective SPP.