
Complexity

In the document (pages 20-23)

1.3 Combinatorial Optimisation

1.3.2 Complexity

An important concept in OR is the complexity of optimisation problems. The complexity depends on the method chosen to solve the problem; the concepts of "algorithm" and "problem" therefore have to be defined first.

A problem P consists of an infinite number of problem specifications p ∈ P with the same structure. In general, the set of all values that defines the concrete specification of a problem is called the input; a concrete specification with numerical values is an instance of the problem. A method that is able to solve every problem specification is an algorithm. The best algorithm would be an efficient one. The efficiency of an algorithm is evaluated by the resources a program uses to execute it. A program is a concrete scheme of calculation steps, which is necessary for the implementation on a computer. In this context the computing time of such a program plays an important role; it depends on many variables and is therefore difficult to determine exactly. For that reason the basic computation operations are counted: arithmetic operations, comparisons and storage operations are assumed to be elementary computation steps. For simplicity, all these steps are assumed to take the same time. But there is no sense in calculating the number of necessary computation steps for a single instance; rather, it is interesting to measure the computation time necessary to solve any problem specification.

When the input of a specification is described as a sequence of symbols, the length of that sequence determines the input size. This value depends on the type of encoding; it is therefore enough to know the dimension of a specification p, denoted |p|. The input size of a TSP specification with n locations, for example, is |p| = n.
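As a minimal sketch (not from the source), the convention that the input size of a TSP instance is its number of locations can be made concrete; the function name and the coordinate representation are hypothetical choices:

```python
# Sketch: the input size |p| of a TSP instance is taken to be the number
# of locations n (all encoding details are abstracted away).

def input_size(locations):
    """Return |p| = n, the dimension of a TSP instance given as a list of locations."""
    return len(locations)

instance = [(0, 0), (3, 4), (6, 8), (1, 2)]  # four hypothetical locations
print(input_size(instance))  # |p| = 4
```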

If r_A(p) is the minimum number of computation operations necessary to execute the program of an algorithm A, the maximum number of operations for a problem specification of size n is given by:

sup_{|p|=n} { r_A(p); p ∈ P }   (1.10)

(The supremum, or least upper bound, of a set S of real numbers is denoted sup S and is defined as the smallest real number that is greater than or equal to every number in S.) It is enough to estimate the order of the upper bound of this expression. Thus some mathematical concepts have to be introduced first in Table 1.1.
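As an illustration of (1.10), not taken from the source, one can choose a concrete algorithm A and count its elementary operations over all instances of size n. Insertion sort with comparison counting is a hypothetical choice; since only the relative order of the elements matters, the supremum becomes a maximum over the permutations of n elements:

```python
from itertools import permutations

def r_A(p):
    """Count the elementary comparisons insertion sort performs on instance p."""
    a = list(p)
    ops = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            ops += 1                      # one elementary comparison
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return ops

def worst_case(n):
    """sup_{|p|=n} { r_A(p) } from (1.10); here the sup is a max over
    all permutations of n elements."""
    return max(r_A(p) for p in permutations(range(n)))

print([worst_case(n) for n in range(2, 6)])  # [1, 3, 6, 10], i.e. n(n-1)/2
```

The worst case is attained by the reversed sequence, which forces n(n-1)/2 comparisons.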

Let g(n) be any non-negative function over the domain N: g : N → R.

1. Another non-negative function f(n) is of the order of g(n) if there are c ∈ R and n0 ∈ N such that

   f(n) ≤ c · g(n)   ∀ n ≥ n0

2. The set of all functions of the order g(n) is called O(g(n)); O is the Landau or complexity function.

3. Instead of f(n) ∈ O(g(n)) one may write f(n) = O(g(n)).

Table 1.1: Definition 1

This definition means that the function f(n) is bounded by g(n) for n sufficiently large. So the function f(n) is of the order g(n) if the following holds:

∃ c ∈ R : lim_{n→∞} f(n)/g(n) = c.   (1.11)
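A small numerical sketch (not from the source) makes the definition in Table 1.1 concrete. The cost function f and the bound g below are hypothetical; the ratio f(n)/g(n) settles near c = 3, and the constants c = 4, n0 = 5 witness f(n) = O(g(n)):

```python
def f(n):
    return 3 * n * n + 5 * n   # hypothetical cost function

def g(n):
    return n * n               # candidate bounding function

# The ratio f(n)/g(n) approaches c = 3 as n grows, so f(n) = O(g(n)),
# e.g. with c = 4 and n0 = 5 in the definition of Table 1.1:
# 3n^2 + 5n <= 4n^2  iff  5n <= n^2  iff  n >= 5.
for n in (10, 100, 1000, 10000):
    print(n, f(n) / g(n))

assert all(f(n) <= 4 * g(n) for n in range(5, 10000))
```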

With these definitions, the measure of computation operations necessary to solve a problem specification with input size n can be defined:

1. Let r_A(p) be the number of computation operations an algorithm A needs to solve p ∈ P. A function R_A(n) with

   sup_{|p|=n} { r_A(p); p ∈ P } ∈ O(R_A(n))   ∀ n

   is called the complexity of algorithm A. It gives an upper estimate for the maximum number of computation steps of algorithm A for a problem specification with input length n.

2. If R_A(n) is bounded by a polynomial, the algorithm is called polynomial; otherwise it is called non-polynomial.

3. For two algorithms A and B with complexities R_A(n) and R_B(n), A is more efficient than B if the following holds:

   R_A(n) ∈ O(R_B(n))  ∧  R_B(n) ∉ O(R_A(n))   ∀ n

Table 1.2: Definition 2
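Part 3 of Definition 2 can be sketched numerically; the example is illustrative, not from the source. Taking hypothetical complexities R_A(n) = log2(n) (e.g. binary search) and R_B(n) = n (e.g. linear search), A is more efficient than B:

```python
import math

# Hypothetical complexities: R_A(n) = log2(n), R_B(n) = n.
def R_A(n): return math.log2(n)
def R_B(n): return float(n)

# R_A in O(R_B): log2(n) <= 1 * n already for all n >= 1 ...
assert all(R_A(n) <= R_B(n) for n in range(1, 10**6, 997))

# ... but R_B not in O(R_A): n / log2(n) grows without bound, so no
# constant c can satisfy n <= c * log2(n) for all sufficiently large n.
for n in (10, 10**3, 10**6):
    print(n, R_B(n) / R_A(n))
```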

More precisely, R_A(n) is called the maximum computation time; worst-case analysis is oriented at this time measure. The disadvantage of this standard method is its lack of representativeness with respect to practical problems. Therefore average-case analysis has gained significance lately. But in order to find the average effort of a problem, the probability distribution of all possible problem specifications has to be known. If only a finite number of exemplary problems is taken, the representativeness of this random sample has to be guaranteed.
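The gap between worst-case and average-case effort can be sketched as follows; the example is illustrative (insertion sort as a hypothetical algorithm A, a uniform distribution over instances assumed, and a fixed random seed so the sample is reproducible):

```python
import random
from statistics import mean

def comparisons(p):
    """Comparisons made by a simple insertion sort on instance p (p is copied)."""
    a = list(p)
    ops = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            ops += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return ops

random.seed(0)           # fixed seed: the random sample stays reproducible
n, samples = 20, 2000    # hypothetical input size and sample size
counts = []
for _ in range(samples):
    p = list(range(n))
    random.shuffle(p)    # uniform distribution over instances assumed
    counts.append(comparisons(p))

worst = n * (n - 1) // 2  # maximum computation time: 190 for n = 20
print("worst case:", worst, " sampled average:", round(mean(counts), 1))
```

The sampled average lies well below the worst case, which illustrates why worst-case bounds can be unrepresentative for practical problems.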

These results can be transferred directly to problems. The complexity of the most efficient known algorithm for solving a problem defines the problem complexity in a weak sense. The difficulty of this definition is easy to see: the validity of a statement on complexity depends on the set of all known algorithms for a particular problem and is thus of temporary character. That is interesting for the practitioner, but in theory this measure is only an upper bound for the complexity of the problem. When there is evidence that no algorithm is more efficient than the known ones, one speaks of the problem complexity in a strong sense.

The class of problems which can be solved in polynomial time has a special position. If there is a deterministic polynomial algorithm for the solution of a problem, it is called polynomially limited. The class of all polynomially limited problems is denoted P. A problem P0 ∉ P is called non-polynomially limited. In computer science a distinction is drawn between deterministic and non-deterministic algorithms. NP is the class of all problems which can be solved with non-deterministic algorithms in polynomial time. An algorithm is non-deterministic when there is no certainty about the next step. Each problem of P is obviously an element of NP, but not vice versa. It is uncertain whether the formalism of NP is necessary, because nobody has been able to prove that a problem is an element of NP but not of P. If there were a proof that P ≠ NP, the search for an efficient exact solution could be dismissed.

If a problem p is such that every problem in NP is polynomially transformable to p, it is NP-hard. If in addition the problem p itself belongs to NP, p is said to be NP-complete. The concept of transformability means the following: suppose there is a problem p1 which can be solved by an algorithm A. If every instance of another problem p2 can be transformed into an instance of p1 in polynomial time, then algorithm A can be used to solve p2. NP-complete problems are the "hardest" of all problems in NP. If a polynomial algorithm for any NP-complete problem were found, a polynomial algorithm for all problems of NP would be available and P = NP would be proved.
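A classic textbook example of polynomial transformability, sketched here for illustration (it is not part of the source): a graph G on n vertices has an independent set of size k if and only if it has a vertex cover of size n - k, so an INDEPENDENT SET instance can be turned into a VERTEX COVER instance in polynomial time:

```python
def independent_set_to_vertex_cover(n, edges, k):
    """Transform an INDEPENDENT SET instance (G, k) into a VERTEX COVER
    instance (G, n - k): a set S is independent in G exactly when the
    remaining vertices V \\ S cover all edges. The transformation only
    rewrites the target size, which is clearly polynomial work."""
    return n, edges, n - k

# Hypothetical instance: a path on 4 vertices, asking for an
# independent set of size 2.
print(independent_set_to_vertex_cover(4, [(0, 1), (1, 2), (2, 3)], 2))
# The answer to the transformed vertex-cover question (cover of size 2)
# equals the answer to the original independent-set question.
```

Any algorithm for VERTEX COVER therefore also solves INDEPENDENT SET, which is exactly the mechanism behind NP-hardness proofs.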

But all attempts to prove P = NP theoretically have failed so far. And because no exact polynomial algorithm has been found for any NP-complete problem, there is strong circumstantial evidence that P ≠ NP. Therefore the use of heuristics has considerable justification.

Besides complexity there is another argument for favouring heuristics [Re95]: the best solution of an optimisation model is not automatically the best solution for the underlying real-world problem. Of course there is never a truly exact model, but heuristics are usually more flexible and capable of coping with more complicated (realistic) objective functions and constraints than exact algorithms.
