Universität Konstanz Sommersemester 2016 Fachbereich Mathematik und Statistik

Prof. Dr. Stefan Volkwein Sabrina Rogg

Optimierung

http://www.math.uni-konstanz.de/numerik/personen/volkwein/teaching/

Sheet 5

Deadline for hand-in: 13.06.2016 at lecture

Exercise 13 (3 Points)

We consider the problem of finding the point on the parabola y = (1/5)(u − 1)^2 which is closest to the point (y, u)^T = (2, 1)^T. This can be formulated as

min f(y, u) = (y − 2)^2 + (u − 1)^2   subject to   5y = (u − 1)^2.   (1)

1. Introduce the Lagrange function L : R^3 → R for (1) and determine all critical points (ỹ, ũ, p̃) ∈ R^3 satisfying

∇L(ỹ, ũ, p̃) = 0.

2. Eliminate the variable u from the cost functional by directly inserting the constraint.

Show that the solutions of this reduced problem cannot be solutions of the original one.

3. Which condition must be added in 2. so that the problems are equivalent?
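The elimination in part 2 can be explored numerically before working it out by hand. The following sketch is not part of the sheet: it assumes the constraint 5y = (u − 1)^2 has been inserted into the cost, yielding a reduced cost in y alone (the name f_red is my own), and locates its unconstrained minimizer by a coarse grid search.

```python
# Sketch (not from the sheet): inserting (u - 1)^2 = 5y into
# f(y, u) = (y - 2)^2 + (u - 1)^2 gives a reduced cost in y alone.
def f_red(y):
    return (y - 2.0) ** 2 + 5.0 * y

# Coarse grid search over [-5, 5] for the unconstrained minimizer.
ys = [i / 1000.0 - 5.0 for i in range(10001)]
y_star = min(ys, key=f_red)

print(y_star)        # unconstrained minimizer of the reduced cost
print(5.0 * y_star)  # the value (u - 1)^2 would have to take
# If 5 * y_star is negative, no real u satisfies the constraint at y_star,
# which is precisely the discrepancy parts 2 and 3 ask about.
```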

Exercise 14 (Damped Newton) (2 Points)

Consider using the classical Newton method for finding the root of g(x) = arctan(x). Let x_k = 5. Compute the next iterate x_{k+1} according to the Newton algorithm

g'(x_k) Δx_k = −g(x_k);   x_{k+1} = x_k + Δx_k.

The damped Newton algorithm is defined as

g'(x_k) Δx_k = −g(x_k);   x_{k+1} = x_k + α_k Δx_k,

with α_k ∈ (0, 1] computed by a line search procedure such that

|g(x_k + α_k Δx_k)| ≤ (1 − δ α_k) |g(x_k)|,   δ ∈ (0, 1).

Let δ = 0.5. We start from α = 1 in the line search procedure and divide it by 2 every time the condition is not satisfied. Compute the point x_{k+1} using the damped Newton algorithm and compare it to the one returned by the classical Newton method. What do you conclude?
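Both steps are short enough to compare directly in code. A minimal sketch (function and variable names are my own): one full classical Newton step, and one damped step whose α is halved until the acceptance test |g(x + αΔx)| ≤ (1 − δα)|g(x)| from the sheet holds.

```python
import math

def classical_step(g, dg, x):
    # Solve g'(x) dx = -g(x), then take the full step.
    dx = -g(x) / dg(x)
    return x + dx

def damped_step(g, dg, x, delta=0.5):
    # Same Newton direction, but halve alpha until the decrease test holds.
    dx = -g(x) / dg(x)
    alpha = 1.0
    while abs(g(x + alpha * dx)) > (1.0 - delta * alpha) * abs(g(x)):
        alpha /= 2.0
    return alpha, x + alpha * dx

g  = math.atan
dg = lambda x: 1.0 / (1.0 + x * x)   # derivative of arctan

xk = 5.0
x_classical = classical_step(g, dg, xk)   # overshoots to about -30.7
alpha, x_damped = damped_step(g, dg, xk)  # alpha = 1/8, x about 0.54

print(x_classical, alpha, x_damped)
# |g| increases after the classical step (the iteration diverges for this
# starting point), while the damped step decreases |g| and moves towards
# the root x = 0.
```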

Exercise 15 (3 Points)

Let f : R^n → R be the quadratic function

f(x) = (1/2) ⟨x, Qx⟩ + ⟨c, x⟩ + γ,

with Q ∈ R^{n×n} symmetric and positive definite, c ∈ R^n and γ ∈ R. Let x^0 ∈ R^n and H be a symmetric positive definite matrix.

Define f̃(x) := f(H^{−1/2} x) and x̃^0 := H^{1/2} x^0. Let (x̃^k)_{k∈N} be a sequence generated by the steepest descent method,

x̃^{k+1} = x̃^k + t̃_k d̃^k   with   d̃^k = −∇f̃(x̃^k)

and t̃_k the optimal stepsize from Exercise 4 (for f̃).

Let (x^k)_{k∈N} be generated by the gradient-like method with preconditioner H,

x^{k+1} = x^k + t_k d^k   with   d^k = H^{−1}(−∇f(x^k))

and t_k the optimal stepsize from Exercise 4.

Show (by induction) that the two optimization methods are equivalent, i.e., for all k ∈ N it holds:

x^k = H^{−1/2} x̃^k.
