
Let $(x^o, x^o_{n+1})$ be specially chosen. Then Step 2 can be realized by finding a g.c. point of

$$\min\{ f(x, x_{n+1}) := \|(x, x_{n+1}) - (x^o, x^o_{n+1})\|^2 \mid (x, x_{n+1}) \in \widetilde M_I \}, \qquad (\widetilde P_I)$$

where

$$\widetilde M_I := \{ (x, x_{n+1}) \in M_I \mid g_{s+3}(x, x_{n+1}) := f_I(x, x_{n+1}) - f_I(\hat x, \hat x_{n+1}) + \varepsilon \le 0 \}.$$

We introduce the following notation:

$$\tilde A_1 := \begin{pmatrix} -Q_2 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \tilde A_2 := \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix}, \qquad \tilde A_3 := \begin{pmatrix} Q_1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad I := \{1, \ldots, n\},$$

$$\tilde a_j := \begin{pmatrix} a_j \\ 0 \end{pmatrix} \ (j = 1, \ldots, m), \qquad \tilde a_{m+i} := \begin{pmatrix} -e_i \\ 0 \end{pmatrix} \ (i \in I), \qquad \tilde a_{s+1} := e_{n+1}, \qquad \tilde a_{s+2} := \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad \tilde a_{s+3} := \begin{pmatrix} c \\ -1 \end{pmatrix},$$

$$\tilde b_j := b_j \ (j \in \{1, \ldots, m\}), \qquad \tilde b_{m+i} := 0 \ (i = 1, \ldots, n), \qquad \tilde b_{s+1} := 0, \qquad \tilde b_{s+2} := -q, \qquad \tilde b_{s+3} := -f_I(\hat x, \hat x_{n+1}) + \varepsilon,$$


where $e_i$ is the $i$-th unit vector in $\mathbb{R}^{n+1}$ and $s$ is as defined above. Then the problem $\widetilde P_I$ is equivalent to

$$\min\{ f(x) = \|x - x^o\|^2 \mid x \in \widetilde M_I \}, \qquad (\widetilde P_I)$$

where

$$\widetilde M_I := \{ x \in \mathbb{R}^{n+1} \mid g_j(x) \le 0,\ j \in J \}, \qquad J := \{1, \ldots, s\} \cup \{s+1, s+2, s+3\},$$
$$g_j(x) := \tilde a_j^T x + \tilde b_j, \quad j \in \{1, \ldots, s\},$$
$$g_{s+1}(x) := x^T \tilde A_1 x + \tilde a_{s+1}^T x,$$
$$g_{s+2}(x) := x^T \tilde A_2 x + \tilde a_{s+2}^T x + \tilde b_{s+2},$$
$$g_{s+3}(x) := x^T \tilde A_3 x + \tilde a_{s+3}^T x + \tilde b_{s+3}.$$

We easily verify that $\widetilde P_I$ is a double quadratic problem. Therefore we consider

$$\min\{ f(x, x_{n+1}, t) \mid g_j(x, x_{n+1}, t) \le 0,\ j \in J = \{1, \ldots, s+3\} \}, \qquad (\widetilde P_I(t))$$

where

$$f(x, x_{n+1}, t) := \|(x, x_{n+1}) - (x^o, x^o_{n+1})\|^2,$$
$$g_j(x, x_{n+1}, t) := g_j(x, x_{n+1}), \quad j \in \{1, \ldots, s\},$$
$$g_{s+1}(x, x_{n+1}, t) := g_{s+1}(x, x_{n+1}),$$
$$g_{s+2}(x, x_{n+1}, t) := g_{s+2}(x, x_{n+1}),$$
$$g_{s+3}(x, x_{n+1}, t) := g_{s+3}(x, x_{n+1}) + (t - 1)\, g_{s+3}(x^o, x^o_{n+1}).$$

We note that $(x^o, x^o_{n+1})$ has to be chosen as in (C2).
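To make the role of the parameter $t$ explicit: at $t = 1$ the original constraint $g_{s+3}$ is recovered, while at $t = 0$ the starting point satisfies the deformed constraint with equality, since $g_{s+3}(x^o, x^o_{n+1}, 0) = 0$. The following Python sketch illustrates this mechanics on a toy constraint; the callable g_s3 and the point z0 are hypothetical stand-ins, not data from the source.

```python
import numpy as np

def g_s3_t(g_s3, z, z0, t):
    """Deformed constraint of P~_I(t): g_{s+3}(z) + (t - 1) * g_{s+3}(z0)."""
    return g_s3(z) + (t - 1.0) * g_s3(z0)

# Toy quadratic constraint, only to show the mechanics of the deformation:
g_s3 = lambda z: float(z @ z - 1.0)   # hypothetical g_{s+3}
z0 = np.array([2.0, 0.0])             # starting point with g_{s+3}(z0) = 3 > 0
print(g_s3_t(g_s3, z0, z0, 0.0))      # 0.0 -> z0 lies on the boundary at t = 0
print(g_s3_t(g_s3, z0, z0, 1.0))      # 3.0 -> original constraint at t = 1
```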

Remark 10.

We see that $\hat x$ solves $P_I$ if and only if $\widetilde M_I = \emptyset$ for $\varepsilon > 0$ sufficiently small. Here, we do not discuss how it can be checked whether $\widetilde M_I$ is empty or not; this is why we assume

$$\widetilde M_I \ne \emptyset. \tag{C8}$$

Now we have to show that the choice of a starting point is possible.

According to Section 1, we define

$$P := \{ (x, x_{n+1}) \in \mathbb{R}^{n+1} \mid g_j(x, x_{n+1}) \le 0 \ (j = 1, \ldots, s) \},$$
$$G_i := \{ (x, x_{n+1}) \in \mathbb{R}^{n+1} \mid g_{s+i}(x, x_{n+1}) \le 0 \}, \quad i = 1, \ldots, 3,$$
$$H_1 := \{ x \in \mathbb{R}^{n+1} \mid D_x g_{s+3}(x) = 0 \}.$$

Theorem 8 (Choice of a starting point).

Let $\varepsilon > 0$ be sufficiently small, let $\hat z = (\hat x, \hat x_{n+1})$ be the solution of $(IQOP)_c$, and let $\operatorname{cl} \operatorname{int} M_I^c = M_I^c$. Then we have

$$\operatorname{int} P \cap \operatorname{int} G_1 \cap \operatorname{int} G_2 \setminus G_3 \ne \emptyset.$$


Proof. Let $z := (x, x_{n+1})$ and suppose that

$$\operatorname{int} P \cap \operatorname{int} G_1 \cap \operatorname{int} G_2 \setminus G_3 = \emptyset.$$

Then we have

$$\forall z \in \mathbb{R}^{n+1}: \quad z \notin \operatorname{int} P \cap \operatorname{int} G_1 \cap \operatorname{int} G_2 \ \ \text{or} \ \ z \in G_3. \tag{35}$$

We distinguish two cases.

Case 1: $\hat z \in \operatorname{int} P \cap \operatorname{int} G_1 \cap \operatorname{int} G_2$. Then we have

$$\hat z \notin G_3, \tag{36}$$

since $g_{s+3}(\hat z) = \varepsilon > 0$. From (35) we obtain either

(a) $\hat z \notin \operatorname{int} P \cap \operatorname{int} G_1 \cap \operatorname{int} G_2$ (contradiction to $\hat z \in M_I^c$), or

(b) $\hat z \in G_3$ (contradiction to (36)).

Case 2: $\hat z \in \partial P \cup \partial G_1 \cup \partial G_2$. Since $\operatorname{cl} \operatorname{int} M_I^c = M_I^c$, there exists a sequence $\{z^k\}$ with

$$\{z^k\} \subset \operatorname{int} M_I^c, \qquad z^k \to \hat z \quad (k \to \infty).$$

Hence, there is a $z^{k_o}$ such that $g_{s+3}(z^{k_o}) = f_I(z^{k_o}) - f_I(\hat z) + \varepsilon > 0$, and thus $z^{k_o} \notin G_3$. Arguing as in Case 1, the proof is complete.

Theorem 9.

It holds:

(a) $H_1 = \emptyset$.

(b) Under the assumptions made in Theorem 8, the choice of the starting point $(x^o, x^o_{n+1})$ for $\widetilde P_I$ is always possible.

According to the results above, we propose the following algorithm, which solves $(IQOP)_c$.

Algorithm 2

Step 0: Transform (QOP) into $(IQOP)_c$.

Step 1: Choose an $\varepsilon > 0$ sufficiently small. Compute a $\hat z \in \operatorname{stat}(IQOP)_c \subset \operatorname{gc}(IQOP)$. Set $k := 0$, $x^o_k := \hat z$.

Step 2: If $\widetilde M_{I_k} = \emptyset$, stop ($x^o_k$ is the solution). Else, go to Step 3.

Step 3: If $\hat z$ satisfies (C2), set $x^o_k = \hat z$. Else, compute a $\tilde z$ close to $\hat z$ which satisfies (C2) and set $x^o_k = \tilde z$. Go to Step 4.

Step 4: Call Algorithm 1: compute a g.c. point $p$ for $\widetilde P_{I_k}(x^o_k, \hat z)$. If Algorithm 1 is successful, set $k := k + 1$, $\hat z := p$, $x^o_k := \hat z$ and go to Step 2. Else, stop.
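The following Python sketch shows the outer loop of this algorithm only as a scaffold: the emptiness test of Step 2, the (C2) check, and Algorithm 1 itself are problem-specific and enter as hypothetical callables (all names are illustrative stand-ins, not from the source).

```python
def algorithm_2(z_hat, m_tilde_empty, satisfies_C2, perturb_to_C2, algorithm_1,
                max_iter=100):
    """Outer loop of Algorithm 2 (sketch; all callables are hypothetical).

    m_tilde_empty(z)   -- True if the set M~_{I_k} built around z is empty (Step 2)
    satisfies_C2(z)    -- checks condition (C2) (Step 3)
    perturb_to_C2(z)   -- returns a point close to z that satisfies (C2) (Step 3)
    algorithm_1(x0, z) -- pathfollowing for P~_{I_k}(x0, z); returns (success, p)
    """
    k = 0
    while k < max_iter:
        if m_tilde_empty(z_hat):                # Step 2: empty -> solution found
            return z_hat
        x_o = z_hat if satisfies_C2(z_hat) else perturb_to_C2(z_hat)   # Step 3
        success, p = algorithm_1(x_o, z_hat)    # Step 4: g.c. point of P~_{I_k}
        if not success:
            return None                         # Algorithm 1 failed: stop
        k, z_hat = k + 1, p                     # k := k + 1, z_hat := p
    return None
```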

5.1.2 The concave quadratic optimization problem

In view of the realization of Step 2 we have:

$$\min\{ f(x) = \|x - x^o\|^2 \mid x \in M_c \}, \qquad (P_c)$$

where

$$M_c := \{ x \in \mathbb{R}^n \mid g_j(x) \le 0,\ j \in J \}, \qquad J := \{1, \ldots, m, m+1, \ldots, m+n+1\},$$
$$g_j(x) := a_j^T x + b_j, \quad j \in \{1, \ldots, m\}, \qquad g_{m+i}(x) := -x_i, \quad i \in \{1, \ldots, n\},$$
$$g_{s+1}(x) := f_c(x) - f_c(\hat x) + \varepsilon, \qquad s := m + n,$$

and $f_c(x) = x^T Q x + c^T x$. $P_c$ is a double quadratic problem.

For the choice of the starting point we assume:

$$\operatorname{cl} \operatorname{int}(P \setminus G_{s+1}) = P \setminus G_{s+1} \qquad \text{and} \qquad H_2 \not\supseteq \operatorname{int}(P \setminus G_{s+1}).$$

Then we have the same results as in Theorem 8.

5.1.3 Illustrative examples

We consider Example 5.1 and try to find an approximate solution using Algorithm 2. After the transformation we have

$$\min\{ x_1^2 - x_3 \mid x \in \mathbb{R}^3,\ g_j(x_1, x_2, x_3) \le 0,\ j = 1, \ldots, 6 \}, \qquad (IQOP)_c$$

where

$$g_1(x_1, x_2, x_3) := x_1 - 10, \qquad g_2(x_1, x_2, x_3) := -x_1, \qquad g_3(x_1, x_2, x_3) := -x_2, \qquad g_4(x_1, x_2, x_3) := x_2 - 10.0,$$
$$g_5(x_1, x_2, x_3) := x_3 - x_2^2, \qquad g_6(x_1, x_2, x_3) := x_3^2 - 10000 \quad \text{(compactification)}.$$

Step 1: Take the stationary point $\hat z = (10.0,\ 10.0,\ 100)$.

Step 2: Find a g.c. point of

$$\min\{ (x_1 - x_1^o)^2 + (x_2 - x_2^o)^2 + (x_3 - x_3^o)^2 \mid g_j(x_1, x_2, x_3) \le 0,\ j = 1, \ldots, 7 \}, \qquad (\widetilde P_I)$$

where

$$g_7(x_1, x_2, x_3) := x_1^2 - x_3 - (\hat z_1^2 - \hat z_3) + \varepsilon.$$
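As a plausibility check, this subproblem can be handed to a general-purpose local NLP solver. The sketch below uses SciPy's SLSQP to project the starting point of the first iteration (values taken from Table 1) onto the feasible set; note that a local solver is not the pathfollowing method of Algorithm 1, it merely reproduces the first row of Table 1 approximately.

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

eps = 0.08                                # epsilon of the first iteration (Table 1)
z_hat = np.array([10.0, 10.0, 100.0])     # stationary point from Step 1
x_o = np.array([9.99, 9.99, 50.0])        # starting point x^o for k = 1 (Table 1)

f_I = lambda x: x[0] ** 2 - x[2]          # objective of (IQOP)_c

def g(x):
    """Constraints g_1, ..., g_7 <= 0 of the subproblem."""
    return np.array([
        x[0] - 10.0, -x[0], -x[1], x[1] - 10.0,   # g_1 .. g_4
        x[2] - x[1] ** 2,                         # g_5
        x[2] ** 2 - 10000.0,                      # g_6 (compactification)
        f_I(x) - f_I(z_hat) + eps,                # g_7
    ])

res = minimize(lambda x: np.sum((x - x_o) ** 2), x_o, method="SLSQP",
               constraints=[NonlinearConstraint(g, -np.inf, 0.0)])
print(res.x)   # approx. (7.08, 9.99, 50.2), cf. Table 1, k = 1
```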

In Figures 16, 17, 18 and 19 we have sketched the solutions of $\widetilde P_{I_k}$ for the iterations $k = 1, 7, 10$ (see also Table 1). The algorithm stops at iteration 11, more precisely at a point of Type 5, where the set of g.c. points locally becomes empty (see Fig. 19).


Table 1: Iteration points

 k           ẑ                       x^o                     Solution of P_{I_k}
 1    0.08   (10.0, 10.0, 100.0)     (9.99, 9.99, 50.0)      (7.08, 9.99, 50.24)
 2    0.08   (7.08, 9.99, 50.2)      (7.08, 9.99, 50.24)     (7.07, 9.99, 50.2)
 3    0.08   (7.07, 9.99, 50.2)      (7.07, 9.99, 50.24)     (7.06, 9.99, 50.2)
 4    0.5    (7.06, 9.99, 50.2)      (7.06, 9.99, 50.24)     (7.02, 9.99, 50.2)
 5    0.8    (7.02, 9.99, 50.2)      (7.02, 9.99, 50.26)     (6.96, 9.99, 50.2)
 6    10.0   (6.96, 9.99, 50.2)      (6.96, 9.99, 50.26)     (6.19, 9.99, 50.26)
 7    50.0   (6.19, 9.99, 50.26)     (6.19, 9.99, 50.26)     (0.25, 9.99, 61.97)
 8    0.8    (0.25, 9.99, 61.97)     (0.25, 9.99, 61.97)     (0.10, 9.99, 62.72)
 9    0.8    (0.10, 9.99, 62.72)     (0.10, 9.99, 62.72)     (0.04, 9.99, 63.51)
10    20.0   (0.04, 9.99, 63.51)     (0.04, 9.99, 63.51)     (0.0001, 9.99, 83.51)
11    0.8    (0.0001, 9.99, 83.51)   (0.0001, 9.99, 83.51)   (0.0, 10.0, 100)

Remark 11.

The considered Example 5.1, computed with PAFO [11], shows that, although we can reach $t = 1$ for $k \le 20$ and compute the solution, we are not able to check whether the computed solution is global, since we cannot decide whether $\widetilde M_I = \emptyset$. At the iteration $k = 10$ we cannot reach $t = 1$, and PAFO stops at a point of Type 5, which is the solution. At this point $\widetilde M_I$ becomes locally empty.

5.2 Multiobjective optimization

$$\min\{ f(x) \mid x \in P \}, \qquad f = (f_1, \ldots, f_l)^T, \qquad (MOP)$$

where

$$f_1(x) = x^T Q x + c_1^T x,$$
$$f_j(x) = c_j^T x + d_j, \quad j = 2, \ldots, l,$$
$$P := \{ x \in \mathbb{R}^n \mid a_j^T x + b_j \le 0 \ (j = 1, \ldots, s) \}.$$

$P$ is a convex polyhedron and the matrix $Q$ is negative semi-definite.

The main information consists in estimating the objective value at the point $x^i$ in comparison with $f_k^- = \inf\{ f_k(x) \mid x \in P \}$.
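For the linear objectives $f_j$, $j = 2, \ldots, l$, the value $f_j^-$ is an ordinary linear program over $P$; only $f_1^-$, the minimum of a concave quadratic function over a polyhedron, requires the global machinery developed above. A minimal sketch for one linear objective follows; the data A, b, c_j, d_j are hypothetical and only illustrate the call.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical polyhedron P = {x | A x + b <= 0} and linear f_j(x) = c_j^T x + d_j.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([-4.0, 0.0, 0.0])            # encodes x1 + x2 <= 4, x1 >= 0, x2 >= 0
c_j, d_j = np.array([1.0, -2.0]), 0.5

# min c_j^T x  s.t.  A x <= -b  (i.e. A x + b <= 0)
res = linprog(c_j, A_ub=A, b_ub=-b, bounds=[(None, None)] * 2)
f_j_minus = res.fun + d_j                 # f_j^- = inf{ f_j(x) | x in P }
print(f_j_minus)                          # -7.5 for this toy data
```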


$$\begin{array}{ccc}
f_1^- & f_1(x^o) & \dfrac{f_1(x^o) - f_1^-}{|f_1^-|} \cdot 100 \\
\vdots & \vdots & \vdots \\
f_k^- & f_k(x^o) & \dfrac{f_k(x^o) - f_k^-}{|f_k^-|} \cdot 100 \\
\vdots & \vdots & \vdots \\
f_l^- & f_l(x^o) & \dfrac{f_l(x^o) - f_l^-}{|f_l^-|} \cdot 100
\end{array}$$

We consider

$$M(\gamma^1) := \{ x \in P \mid f_k(x) \le \gamma_k^1,\ k = 1, \ldots, l \}.$$

Here $\gamma^1$ is the goal of the decision maker. In order to find the goal realizer $\hat x \in M(\gamma^1)$, we propose to find a g.c. point of

$$\min\{ \|x - x^o\|^2 \mid x \in M(\gamma^1) \cap E(p) \}, \tag{37}$$

where $E(p) := \{ x \in \mathbb{R}^n \mid \|x\|^2 \le p \}$ and $p \in \mathbb{R}$ is sufficiently large.

$$M(\gamma^1) \cap E(p) = \left\{ x \in \mathbb{R}^n \;\middle|\; \begin{array}{ll} a_i^T x + b_i \le 0, & i = 1, \ldots, s, \\ f_j(x) - \gamma_j^1 \le 0, & j = 1, \ldots, l, \\ \|x\|^2 - p \le 0 \end{array} \right\}.$$

Since (37) is a double quadratic problem, we can apply the results obtained above in order to find a goal realizer.
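Under the same caveat as before (a local solver in place of the pathfollowing method, and purely hypothetical data), the following sketch assembles $M(\gamma^1) \cap E(p)$ and projects $x^o$ onto it.

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

# Hypothetical data: P = {x | A x + b <= 0}, one concave quadratic and one
# linear objective, goals gamma1, ball radius p, reference point x_o.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([-4.0, 0.0, 0.0])                 # x1 + x2 <= 4, x >= 0
Q = np.array([[-1.0, 0.0], [0.0, 0.0]])        # negative semi-definite
c1, c2, d2 = np.array([0.0, 1.0]), np.array([1.0, -1.0]), 0.0
gamma1 = np.array([0.0, 3.0])                  # goals for f_1, f_2
p, x_o = 100.0, np.array([5.0, 5.0])

f1 = lambda x: x @ Q @ x + c1 @ x              # f_1(x) = x^T Q x + c_1^T x
f2 = lambda x: c2 @ x + d2                     # f_2(x) = c_2^T x + d_2

def g(x):
    """All constraints of M(gamma1) ∩ E(p) in the form g(x) <= 0."""
    return np.concatenate([A @ x + b,
                           [f1(x) - gamma1[0], f2(x) - gamma1[1], x @ x - p]])

res = minimize(lambda x: (x - x_o) @ (x - x_o), x_o, method="SLSQP",
               constraints=[NonlinearConstraint(g, -np.inf, 0.0)])
print(res.x)   # goal realizer: projection of x_o onto M(gamma1) ∩ E(p); here (2, 2)
```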


References

[1] Allgower, E. L. and Georg, K.: Introduction to Numerical Continuation Methods. Springer-Verlag, Berlin, Heidelberg, New York 1990

[2] Cottle, R. W., Pang, J.-S. and Stone, R. E.: The Linear Complementarity Problem. Academic Press, Inc., Boston 1992

[3] Dentcheva, D., Gollmer, R., Guddat, J. and Rückmann, J.-J.: Pathfollowing methods in nonlinear optimization II: exact penalty methods. In: Florenzano, M. et al. (eds.): Approximation and Optimization II. Proceedings of the 2nd International Conference Approximation and Optimization in the Caribbean, Havana, Cuba, 1993. Peter Lang Verlag, Frankfurt am Main 1995, 200-30

[4] Fandom, R.: Kurvenverfolgung in der Nichtlinearen Optimierung. Über die Standardeinbettung. Diplomarbeit, Humboldt-Universität zu Berlin, 1995

[5] Guddat, J., Guerra Vazquez, F. and Jongen, H. Th.: Parametric Optimization: Singularities, Pathfollowing and Jumps. John Wiley, Chichester 1990

[6] Guddat, J., Jongen, H. Th. and Rückmann, J.: On stability and stationary points in nonlinear optimization. J. Aust. Math. Soc., Ser. B 28 (1986), 36-56

[7] Guddat, J., Jongen, H. Th., Kummer, B. and Nožička, F. (eds.): Parametric Optimization and Related Topics III. Peter Lang Verlag, Frankfurt am Main, 1993

[8] Gfrerer, H., Guddat, J. and Wacker, Hj.: A globally convergent algorithm based on imbedding and parametric optimization. Computing 30 (1983), 225-252

[9] Gómez, W., Guddat, J., Jongen, H. Th., Rückmann, J. and Solano, C.: Curvas críticas y saltos en optimización no lineal. Textbook in Spanish (in preparation)

[10] Gfrerer, H., Guddat, J., Wacker, Hj. and Zulehner, W.: Pathfollowing methods for Kuhn-Tucker curves by an active index set strategy. In: Bagchi, A. and Jongen, H. Th. (eds.): Systems and Optimization. Lecture Notes in Control and Information Sciences 66, Springer-Verlag, Berlin, Heidelberg, New York (1985), 111-131

[11] Gollmer, R., Kausmann, U., Nowack, D. and Wendler, K.: Computerprogramm PAFO. Humboldt-Universität zu Berlin, Institut für Mathematik, 1995

[12] Großmann, Ch. and Terno, J.: Numerik der Optimierung. Teubner Studienbücher Mathematik, Stuttgart 1993

[13] Horst, R. and Tuy, H.: Global Optimization: Deterministic Approaches. 2nd revised edition, Springer-Verlag, Berlin, Heidelberg, New York 1993

[14] Jongen, H. Th., Jonker, P. and Twilt, F.: Nonlinear Optimization in $\mathbb{R}^n$. II. Transversality, Flows, Parametric Aspects. Peter Lang Verlag, Frankfurt 1983

[15] Jongen, H. Th., Jonker, P. and Twilt, F.: Critical sets in parametric optimization. Math. Programming 34 (1986), 333-353

[16] Jongen, H. Th., Jonker, P. and Twilt, F.: On one-parameter families of sets defined by (in)equality constraints. Nieuw Archief voor Wiskunde (3) XXX (1982), 307-322

[17] Kojima, M., Megiddo, N., Noma, T. and Yoshise, A.: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Springer-Verlag, Berlin, Heidelberg, New York 1991

[18] Mbunga, P.: Quadratische Optimierung: ein parametrischer Zugang. Diplomarbeit, Humboldt-Universität zu Berlin, 1997

[19] Rückmann, J.: Einparametrische nichtkonvexe Optimierung: Strukturuntersuchungen und eine Verallgemeinerung des Einbettungsprinzips. Dissertation, TH Leipzig, 1988

[20] Rückmann, J.: Stability of Noncompact Feasible Sets in Nonlinear Optimization. In: [7], 467-502

[21] Rückmann, J. and Tammer, K.: On linear-quadratic perturbations in one-parametric nonlinear optimization. Systems Science 18(1) (1992), 37-48

[22] Sternberg, S.: Lectures on Differential Geometry. Prentice Hall, Englewood Cliffs, NJ, 1964

[23] Gómez Bofill, W.: Properties of an interior embedding for solving nonlinear optimization problems. Humboldt-Universität zu Berlin, Preprint 96-31

[24] Wetzel, R.: Untersuchungen zu einem Algorithmus für quadratische Optimierungsaufgaben unter Verwendung linearer Komplementaritätsprobleme. Diploma Thesis, Karl-Marx-Universität, Leipzig (1981)
