
Universität Konstanz Sommersemester 2013 Fachbereich Mathematik und Statistik

Prof. Dr. Stefan Volkwein

Roberta Mancini, Stefan Trenz, Carmen Gräßle, Marco Menner, Kevin Sieg

Optimierung

http://www.math.uni-konstanz.de/numerik/personen/volkwein/teaching/

Sheet 5

Deadline for hand-in: 2013/06/24 at lecture

Exercise 13

Let $f, g : \mathbb{R} \to \mathbb{R}$ be four-times differentiable functions, where $g$ is an approximation of $f$:
$$g(x) = f(x) + r(x)$$
with some error function $r : \mathbb{R} \to \mathbb{R}$ satisfying $|r(x)| \le \varepsilon$ for some known $\varepsilon > 0$.

Determine the numerical first derivative $D_{c1}(g, h)$ of $g$ by central differences, where $h$ denotes the grid spacing. Compute the error arising in the numerical approximation of the first derivative. Use this result to show that the error arising in the numerical approximation of the second derivative, again using central differences, is of order $\mathcal{O}(h^2 + \varepsilon/h^2)$.
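The following minimal sketch (not part of the original sheet) illustrates the effect studied here: central differences applied to a perturbed function $g(x) = f(x) + r(x)$ with $|r(x)| \le \varepsilon$. The test function $f = \sin$, the noise level, and the grid spacings are illustrative assumptions; the observed error tracks a bound of the form $h^2/6 \cdot \max|f'''| + \varepsilon/h$.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-8                       # assumed known noise bound on |r(x)|
x = 1.0                          # evaluation point; f'(x) = cos(x)

def g(t):
    """Perturbed evaluation of f = sin with |r(t)| <= eps."""
    return np.sin(t) + eps * rng.uniform(-1.0, 1.0)

def d_c1(func, t, h):
    """Central-difference approximation of the first derivative."""
    return (func(t + h) - func(t - h)) / (2.0 * h)

for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    err = abs(d_c1(g, x, h) - np.cos(x))
    bound = h**2 / 6.0 + eps / h          # |f'''| <= 1 for sin
    print(f"h = {h:.0e}   error = {err:.2e}   bound = {bound:.2e}")
```

The printout shows the error first decaying like $h^2$ and then growing like $\varepsilon/h$ once the noise dominates, which motivates the error analysis asked for above.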

Exercise 14

Let $f \in C^2(\mathbb{R}^n, \mathbb{R})$ and consider a uniform $n$-dimensional mesh with spacing $h$ and $e_i \in \mathbb{R}^n$, $i = 1, \ldots, n$, the canonical $i$-th basis vector. Verify the (2nd-order forward difference) formula
$$\frac{\partial^2 f}{\partial x_i \partial x_j}(x) = \frac{f(x + he_i + he_j) - f(x + he_i) - f(x + he_j) + f(x)}{h^2} + \mathcal{O}(h),$$
$i, j = 1, \ldots, n$, for an approximation of the Hessian matrix.
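As an illustration (not part of the original sheet), the sketch below applies this formula entrywise to build an approximate Hessian; the test function $f(x) = e^{x_1}\sin(x_2)$ and the spacings are assumptions chosen so that the $\mathcal{O}(h)$ behaviour is visible against the exact Hessian.

```python
import numpy as np

def hessian_fd(f, x, h):
    """Entrywise 2nd-order forward-difference Hessian, error O(h)."""
    n = x.size
    E = np.eye(n)
    fx = f(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x + h*E[i] + h*E[j]) - f(x + h*E[i])
                       - f(x + h*E[j]) + fx) / h**2
    return H

# Test function f(x) = exp(x1) * sin(x2) with known exact Hessian.
f = lambda x: np.exp(x[0]) * np.sin(x[1])
x0 = np.array([0.3, 0.7])
exact = np.exp(x0[0]) * np.array([[np.sin(x0[1]),  np.cos(x0[1])],
                                  [np.cos(x0[1]), -np.sin(x0[1])]])

for h in [1e-1, 1e-2, 1e-3]:
    err = np.max(np.abs(hessian_fd(f, x0, h) - exact))
    print(f"h = {h:.0e}   max entrywise error = {err:.2e}")  # decays like O(h)
```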

Exercise 15 (4 Points)

Let $f \in C^1(\mathbb{R}^n, \mathbb{R})$ be a quadratic function of the form
$$f(x) = \tfrac{1}{2}\langle x, Qx\rangle + \langle c, x\rangle + \gamma,$$
with $Q \in \mathbb{R}^{n \times n}$ symmetric and positive definite, $c \in \mathbb{R}^n$ and $\gamma \in \mathbb{R}$. Let $x_0 \in \mathbb{R}^n$ and let $H$ be a symmetric, positive definite matrix.

Define $\tilde f(x) := f(H^{-1/2}x)$ and $\tilde x_0 := H^{1/2}x_0$. Let $(\tilde x_k)_{k \in \mathbb{N}}$ be a sequence generated by the steepest descent method,
$$\tilde x_{k+1} = \tilde x_k + \tilde t_k \tilde d_k \quad \text{with} \quad \tilde d_k = -\nabla \tilde f(\tilde x_k), \tag{2}$$
and $\tilde t_k = t(\tilde d_k)$ the optimal stepsize choice as determined in Exercise 5.

Let $(x_k)_{k \in \mathbb{N}}$ be generated by the gradient-like method with preconditioner $H$,
$$x_{k+1} = x_k + t_k d_k \quad \text{with} \quad d_k = H^{-1}(-\nabla f(x_k)),$$
and $t_k = t(d_k)$ the optimal stepsize choice as determined in Exercise 5.

Show (by induction) that the two optimization methods are equivalent, i.e., for all $k \in \mathbb{N}$ it holds:
$$x_k = H^{-1/2}\tilde x_k.$$
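As a numerical sanity check (not part of the original sheet), the following sketch runs both iterations side by side for a small quadratic and prints $\|x_k - H^{-1/2}\tilde x_k\|$. The data $Q$, $c$, $H$, $x_0$ are illustrative, and the exact line-search stepsize $t(d) = -\langle \nabla f(x), d\rangle / \langle d, Qd\rangle$ for a quadratic is assumed to be the optimal choice referred to in Exercise 5.

```python
import numpy as np

# Illustrative data: Q and H symmetric positive definite, arbitrary c and x0.
Q = np.array([[10.0, 2.0], [2.0, 1.0]])
c = np.array([1.0, -1.0])
H = np.array([[4.0, 1.0], [1.0, 2.0]])
x = np.array([3.0, -2.0])                  # x_0

# H^{1/2} and H^{-1/2} via the eigendecomposition of H.
w, V = np.linalg.eigh(H)
H_half = V @ np.diag(np.sqrt(w)) @ V.T
H_ihalf = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

Qt = H_ihalf @ Q @ H_ihalf                 # Hessian of f~
ct = H_ihalf @ c                           # linear term of f~
xt = H_half @ x                            # x~_0 = H^{1/2} x_0

def exact_step(g, d, A):
    """Exact line-search stepsize along d for a quadratic with Hessian A."""
    return -(g @ d) / (d @ A @ d)

for k in range(1, 6):
    gt = Qt @ xt + ct                      # gradient of f~ at x~_k
    dt = -gt                               # steepest descent direction
    xt = xt + exact_step(gt, dt, Qt) * dt
    g = Q @ x + c                          # gradient of f at x_k
    d = np.linalg.solve(H, -g)             # preconditioned direction
    x = x + exact_step(g, d, Q) * d
    print(k, np.linalg.norm(x - H_ihalf @ xt))   # ~ machine precision
```

Because both iterates stay related by the fixed linear map $H^{-1/2}$, the printed norms remain at rounding level, which is exactly the statement to be proved by induction.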
