Universität Konstanz Sommersemester 2014 Fachbereich Mathematik und Statistik

Prof. Dr. Stefan Volkwein

Roberta Mancini, Carmen Gräßle, Laura Lippmann, Kevin Sieg

Optimierung

http://www.math.uni-konstanz.de/numerik/personen/volkwein/teaching/

Sheet 6

Deadline for hand-in: 2014/07/16 at lecture

Exercise 18 (Scaled gradient method)

Consider the quadratic function f : \mathbb{R}^2 \to \mathbb{R},

f(x, y) = \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} 100 & -1 \\ -1 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + 3,

and use a modified version of the Gradient Method where the update is

x_{k+1} = x_k - t_k M^{-1} \nabla f(x_k)

with t_k the exact stepsize and M one of the following matrices:

M = \mathrm{Id} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad M = \nabla^2 f = \begin{pmatrix} 100 & -1 \\ -1 & 2 \end{pmatrix}, \qquad M = \begin{pmatrix} f_{xx} & 0 \\ 0 & f_{yy} \end{pmatrix} = \begin{pmatrix} 100 & 0 \\ 0 & 2 \end{pmatrix}.

Use the Gradient Method you implemented for the first program sheet as a basis to determine the number of gradient steps required for finding the minimum of f with the different matrices M and initial value x0 = [1.5; 0.6]. Hand in suitable and informative plots and comment on your observations (you don't need to hand in the code!).
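Not part of the sheet, but as a rough illustration: a minimal MATLAB sketch of the scaled update with exact stepsize, assuming f(x) = x'*A*x + b'*x + 3 as above; the stopping tolerance and iteration cap are assumptions, not prescribed by the exercise.

% Sketch only: scaled gradient method with exact stepsize for the
% quadratic f(x) = x'*A*x + b'*x + 3 (assumed form).
A = [100 -1; -1 2];                 % quadratic term from the sheet
b = [1; 1];                         % linear term from the sheet
grad = @(x) 2*A*x + b;              % gradient of the assumed f
Ms = {eye(2), [100 -1; -1 2], [100 0; 0 2]};   % the three choices of M
names = {'M = Id', 'M = Hess f', 'M = diag(f_xx, f_yy)'};

for j = 1:numel(Ms)
    M = Ms{j};
    x = [1.5; 0.6];                 % initial value from the sheet
    for k = 1:10000                 % iteration cap (assumption)
        g = grad(x);
        if norm(g) < 1e-6, break; end          % stopping tolerance (assumption)
        d = -(M \ g);                          % scaled descent direction
        t = -(g' * d) / (2 * (d' * A * d));    % exact stepsize for a quadratic
        x = x + t * d;
    end
    fprintf('%s: %d steps, x = (%.4f, %.4f)\n', names{j}, k, x(1), x(2));
end

Since the exact line search absorbs any rescaling of M, using \nabla^2 f or a positive multiple of it produces the same iterates.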

Exercise 19 (Cauchy-step property) (4 Points)

The Cauchy step is defined as s_a^c = -t_a \nabla f(x_a), where t_a is given by (see the lecture notes)

t_a = \begin{cases} \dfrac{\Delta_a}{\|\nabla f(x_a)\|} & \text{if } \nabla f(x_a)^\top H_a \nabla f(x_a) \le 0, \\[2ex] \min\left( \dfrac{\Delta_a}{\|\nabla f(x_a)\|},\ \dfrac{\|\nabla f(x_a)\|^2}{\nabla f(x_a)^\top H_a \nabla f(x_a)} \right) & \text{if } \nabla f(x_a)^\top H_a \nabla f(x_a) > 0. \end{cases}

Once the Cauchy point x_a^c = x_a + s_a^c is computed, show that there is a sufficient decrease in the quadratic model, i.e., the Cauchy step satisfies

\psi_a(0) - \psi_a(t_a) \ge \frac{1}{2}\, \|\nabla f(x_a)\| \min\left( \Delta_a,\ \frac{\|\nabla f(x_a)\|}{1 + \|H_a\|} \right).

Exercise 20 (Dogleg strategy)

Let us consider the quadratic model of the function f at iteration k in x_k:

m_k(x) = f(x_k) + \nabla f(x_k)^\top (x - x_k) + \frac{1}{2} (x - x_k)^\top B_k (x - x_k),

with B_k positive definite (the Hessian matrix at x_k or an approximation of it). The trust-region subproblem returns x(\Delta) as

x(\Delta) = \arg\min_{\|x - x_k\| < \Delta} m_k(x).

If the trust region is big enough, i.e., as if there were no constraint in the TR-subproblem, the solution x(\Delta) would be the minimizer of m_k(x): one would get the (quasi-)Newton solution

x_{QN} = x_k - B_k^{-1} \nabla f(x_k).

When, instead, \Delta is too small, the quadratic contribution is small and one tends to get a solution of the form given by the Cauchy point formula,

x_{CP} = x_k - \Delta\, \frac{\nabla f(x_k)}{\|\nabla f(x_k)\|} \qquad \text{or} \qquad x_{CP} = x_k - \frac{\|\nabla f(x_k)\|^2}{\nabla f(x_k)^\top B_k \nabla f(x_k)}\, \nabla f(x_k).

Hence, for all other \Delta, x(\Delta) describes a curve between the points x_{CP} and x_{QN}.

The idea of the dogleg method is to find an approximate solution by replacing the curve just described with a path consisting of two straight line segments: one from the current point x_k to the Cauchy point x_{CP} and the other from x_{CP} to the (quasi-)Newton solution x_{QN}. The path is then described as

x(\tau) = \begin{cases} x_k + \tau\,(x_{CP} - x_k) & \tau \in [0, 1], \\ x_{CP} + (\tau - 1)(x_{QN} - x_{CP}) & \tau \in [1, 2]. \end{cases}

Show that m_k(x(\tau)) is a decreasing function of \tau.

Hint: consider m_k on the two line segments separately.
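Not part of the sheet: a MATLAB sketch that evaluates the model along the dogleg path for sample data, which can be used to inspect the claimed monotone decrease before proving it. B_k and the gradient are illustrative choices, and x_k = 0, f(x_k) = 0 are taken without loss of generality.

% Evaluate m_k along the dogleg path for example data (assumptions).
B = [4 1; 1 3];        % positive definite B_k (assumption)
g = [1; 2];            % gradient at x_k (assumption)

m   = @(s) g' * s + 0.5 * s' * B * s;      % model shifted so that m(0) = 0
sCP = -(norm(g)^2 / (g' * B * g)) * g;     % Cauchy point step (second formula)
sQN = -(B \ g);                            % (quasi-)Newton step

tau  = linspace(0, 2, 101);
vals = zeros(size(tau));
for i = 1:numel(tau)
    if tau(i) <= 1
        s = tau(i) * sCP;                       % first leg: x_k to x_CP
    else
        s = sCP + (tau(i) - 1) * (sQN - sCP);   % second leg: x_CP to x_QN
    end
    vals(i) = m(s);
end
plot(tau, vals); xlabel('\tau'); ylabel('m_k(x(\tau))');   % should decrease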
