
Selection Strategies of Set-Valued Runge-Kutta Methods

Robert Baier
University of Bayreuth, Applied Mathematics
D-95440 Bayreuth, Germany
e-mail: robert.baier@uni-bayreuth.de

Lecture at the NAA 2004 Conference in Rousse (30 June 2004)

Contents

1 Introduction
  1.1 Differential Inclusions and Set-Valued Integral
  1.2 Arithmetic Operations on Sets
  1.3 Modulus of Smoothness
2 Quadrature and Combination Methods
  2.1 Quadrature Methods
  2.2 Quadrature Method for the Approximation of Attainable Sets
  2.3 Combination Methods
3 Set-Valued Runge-Kutta Methods
  3.1 Euler's Method
  3.2 Euler-Cauchy Method (or Heun's Method)
  3.3 Modified Euler Method
  3.4 Runge-Kutta (4)
4 Conclusions

1. Introduction

1.1. Differential Inclusions and Set-Valued Integral

Problem 1.1 Consider the nonlinear differential inclusion (DI)

    x'(t) ∈ F(t, x(t))   (f.a.e. t ∈ I := [t_0, T]),   (1)
    x(t_0) ∈ X_0   (2)

with the nonempty set X_0 ∈ C(R^n) and the set-valued mapping F: I × R^n ⇒ R^n with images in C(R^n). Hereby, C(R^n) denotes the set of nonempty, convex, compact subsets of R^n, and x: I → R^n fulfills x(·) ∈ AC(I), i.e. x(·) is absolutely continuous.

Definition 1.2 The attainable set R(t, t_0, X_0) at a given time t ∈ I for Problem 1.1 is defined as

    R(t, t_0, X_0) = { x(t) | x(·) ∈ AC(I) is a solution of (1)–(2) }.

Aim of the methods presented here: approximation of the attainable set at time T by other sets.

Simplification for the main parts of the talk:

Problem 1.3 The linear differential inclusion (LDI) is stated as follows:

    x'(t) ∈ A(t)x(t) + B(t)U   (f.a.e. t ∈ I = [t_0, T]),   (3)
    x(t_0) ∈ X_0   (4)

with matrix functions A: I → R^{n×n}, B: I → R^{n×m} and sets X_0 ∈ C(R^n), U ∈ C(R^m).

Definition 1.4 The fundamental solution of the corresponding matrix differential equation

    X'(t) = A(t)X(t)   (f.a.e. t ∈ I),   X(τ) = I

to Problem 1.3 is denoted by Φ(·, τ) for τ ∈ I, where I ∈ R^{n×n} is the unit matrix.

Definition 1.5 ([Aumann, 1965]) Consider a set-valued function F: I ⇒ R^n with images in C(R^n) which is measurable and integrably bounded, i.e. there exists k(·) ∈ L_1(I) with F(t) ⊂ k(t)B_1(0) f.a.e. t ∈ I. Then, Aumann's integral is defined as

    ∫_{t_0}^{T} F(t) dt := { ∫_{t_0}^{T} f(t) dt | f(·) is an integrable selection of F(·) }.

1.2. Arithmetic Operations on Sets

Definition 1.6 Let C, D ∈ C(R^n). The Hausdorff distance between C and D is defined as

    d_H(C, D) = max { d(C, D), d(D, C) },

where

    d(C, D) = sup_{c ∈ C} dist(c, D),   dist(c, D) = inf_{d ∈ D} ||c − d||_2   (c ∈ C).
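For finite point sets, the two one-sided distances in Definition 1.6 can be computed directly from the pairwise distance matrix. A small sketch (the point sets are my own illustrative data, not from the slides):

```python
import numpy as np

def hausdorff(C, D):
    """Hausdorff distance between finite point sets C, D (rows = points)."""
    diff = C[:, None, :] - D[None, :, :]
    dist = np.linalg.norm(diff, axis=2)     # dist[i, j] = ||c_i - d_j||_2
    d_CD = dist.min(axis=1).max()           # sup_{c in C} inf_{d in D} ||c - d||_2
    d_DC = dist.min(axis=0).max()           # sup_{d in D} inf_{c in C} ||c - d||_2
    return max(d_CD, d_DC)

C = np.array([[0.0, 0.0], [1.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 1.0]])
d = hausdorff(C, D)
print(d)  # 1.0
```

For these two sets both one-sided distances equal 1, so d_H(C, D) = 1.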

Notation 1.7 The arithmetic operations on sets

    λ · C := { λ · c | c ∈ C }   (scalar multiple),
    C + D := { c + d | c ∈ C, d ∈ D }   (Minkowski sum),
    A · C := { A · c | c ∈ C }   (image under a linear mapping)

are defined as usual for C, D ∈ C(R^n), A ∈ R^{k×n} and λ ∈ R.

Definition 1.8 Let C ∈ C(R^n), l ∈ R^n. The support function resp. the supporting face for C in direction l is defined as

    δ*(l, C) := max_{c ∈ C} ⟨l, c⟩   resp.   Y(l, C) := { c ∈ C | ⟨l, c⟩ = δ*(l, C) }.

Remark that

    C = ∩_{||l||_2 = 1} { x ∈ R^n : ⟨l, x⟩ ≤ δ*(l, C) }.

Lemma 1.9 Let C, D ∈ C(R^n), A ∈ R^{k×n} and λ ≥ 0. Then,

    C ⊆ D  ⟺  δ*(l, C) ≤ δ*(l, D) for all l ∈ S_{n−1} ⊂ R^n, i.e. ||l||_2 = 1,

and the following calculus rules are valid for l ∈ S_{n−1}:

    δ*(l, C + D) = δ*(l, C) + δ*(l, D),   Y(l, C + D) = Y(l, C) + Y(l, D),
    δ*(l, λC) = λ δ*(l, C),               Y(l, λC) = λ Y(l, C),
    δ*(l, AC) = δ*(A^T l, C),             Y(l, AC) = A Y(A^T l, C),

    d_H(C, D) = sup_{||l||_2 = 1} | δ*(l, C) − δ*(l, D) |   (5)

and

    d_H(AU, BU) ≤ ||A − B|| · ||U||   with ||U|| := sup_{u ∈ U} ||u||_2,   (6)
    d_H((A + B)U, AU + BU) ≤ ||A − B|| · ||U||.   (7)
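The Minkowski-sum rule of Lemma 1.9 is easy to verify numerically for polytopes, where the support function is a maximum over the vertices. A hedged sketch with random data of my own:

```python
import numpy as np

def support(l, V):
    """Support function of the polytope conv(V); rows of V are the vertices."""
    return float((V @ l).max())

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 2))
D = rng.standard_normal((6, 2))
# Minkowski sum of polytopes: the convex hull of all pairwise vertex sums
S = (C[:, None, :] + D[None, :, :]).reshape(-1, 2)

errs = []
for _ in range(100):
    l = rng.standard_normal(2)
    l /= np.linalg.norm(l)  # normalize to l in S_{n-1}
    errs.append(abs(support(l, S) - (support(l, C) + support(l, D))))
max_err = max(errs)
print(max_err)  # numerically zero
```

The identity δ*(l, C + D) = δ*(l, C) + δ*(l, D) holds exactly; the only deviation is floating-point round-off.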

Theorem 1.10 ([Aumann, 1965]) Let F: I ⇒ R^n with nonempty, closed images be measurable and integrably bounded. Then, the Aumann integral of F(·) is compact, convex and nonempty with

    δ*(l, ∫_I F(t) dt) = ∫_I δ*(l, F(t)) dt.
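A minimal 1D illustration of this scalarization (my own example, not from the slides): for F(t) = [−t, t] on I = [0, 1] one has δ*(l, F(t)) = t |l|, so integrating the support function pointwise recovers the Aumann integral [−1/2, 1/2].

```python
import numpy as np

def trap(vals, h):
    """Composite trapezoidal rule on an equidistant grid."""
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

ts = np.linspace(0.0, 1.0, 1001)
h = ts[1] - ts[0]
# delta*(l, F(t)) = t * |l| for F(t) = [-t, t]
upper = trap(ts * abs(1.0), h)     # delta*(+1, Aumann integral)
lower = -trap(ts * abs(-1.0), h)   # left endpoint = -delta*(-1, .)
print(lower, upper)  # the Aumann integral is the interval [-1/2, 1/2]
```

The integrand is linear in t, so the trapezoidal rule reproduces the endpoints ±1/2 up to round-off.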

Lemma 1.11 (e.g. [Sonneborn and van Vleck, 1965]) Given Problem 1.3, the attainable set at time T can be rewritten as

    R(T, t_0, X_0) = Φ(T, t_0) X_0 + ∫_{t_0}^{T} Φ(T, t) B(t) U dt.

Scalarization by support functions resp. supporting faces yields for l ∈ S_{n−1}:

    δ*(l, R(T, t_0, X_0)) = δ*(Φ(T, t_0)^T l, X_0) + ∫_I δ*(B(t)^T Φ(T, t)^T l, U) dt,
    Y(l, R(T, t_0, X_0)) = Φ(T, t_0) Y(Φ(T, t_0)^T l, X_0) + ∫_I Φ(T, t) B(t) Y(B(t)^T Φ(T, t)^T l, U) dt.

1.3. Modulus of Smoothness

Definition 1.12 Let f: I → R^n be bounded. The averaged modulus of smoothness of order k ∈ N is defined as

    τ_k(f; h) := || ω_k(f; ·; h) ||_{L_1},
    ω_k(f; x; h) := sup { | Δ_δ^k f(t) | : t, t + kδ ∈ [x − kh/2, x + kh/2] ∩ I }   for x ∈ I,

where Δ_δ^k f(t) is the k-th forward difference of f(·) in t with step-size δ.

Lemma 1.13 (cf. [Sendov and Popov, 1988]) Let f: I → R^n be bounded and p ∈ N. Then,

    τ_p(f; h) = o(1),        if f(·) is Riemann integrable,
                O(h),        if f(·) has bounded variation,
                o(h^{p−1}),  if p ≥ 2 and f^{(p−2)}(·) ∈ AC(I),
                O(h^p),      if p ≥ 2, f^{(p−2)}(·) ∈ AC(I) and f^{(p−1)}(·) has bounded variation.
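A hedged numerical sketch of the bounded-variation case of Lemma 1.13 (my own jump-function example, with the sup over the window approximated on a grid): for f(t) = sign(t − 1/2) on I = [0, 1], the window of ω_1 straddles the jump exactly on a set of measure h, so τ_1(f; h) ≈ 2h = O(h).

```python
import numpy as np

def tau1(f, h, M=4000):
    """Grid approximation of tau_1(f; h) = || omega_1(f; .; h) ||_{L_1} on [0, 1]."""
    xs = np.linspace(0.0, 1.0, M + 1)
    ts = np.linspace(0.0, 1.0, M + 1)
    fv = f(ts)
    omega = np.empty_like(xs)
    for i, x in enumerate(xs):
        mask = (ts >= x - h / 2) & (ts <= x + h / 2)
        vals = fv[mask]
        omega[i] = vals.max() - vals.min()  # sup |f(t') - f(t)| over the window
    dx = xs[1] - xs[0]
    return dx * (omega.sum() - 0.5 * (omega[0] + omega[-1]))  # trapezoidal L1 norm

f = lambda t: np.sign(t - 0.5)
r1 = tau1(f, 0.1)
r2 = tau1(f, 0.05)
print(r1, r2)  # roughly 2h: halving h halves tau_1
```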

2. Quadrature and Combination Methods

2.1. Quadrature Methods

Notation 2.1 Let I := [t_0, T] and f: I → R^n be given. We denote the point-wise quadrature formula by

    Q(f; [t_0, T]) := Σ_{μ=1}^{s} b_μ f(t_0 + c_μ (T − t_0)),

where b_μ ∈ R are the weights and c_μ ∈ [0, 1] determine the nodes (μ = 1, ..., s). Set h = (T − t_0)/N as step-size for N ∈ N and define the iterated quadrature formula as

    Q_N(f; [t_0, T]) := h Σ_{j=0}^{N−1} Q(f; [t_j, t_{j+1}]) = h Σ_{j=0}^{N−1} Σ_{μ=1}^{s} b_μ f(t_j + c_μ h).

Q(f; I) has precision p ∈ N_0 if all polynomials up to degree p are integrated exactly and there exists a polynomial f with degree p + 1 and Q(f; I) ≠ ∫_I f(t) dt.

Definition 2.2 Consider a point-wise quadrature formula of Notation 2.1 and F: I ⇒ R^n with images in C(R^n). The iterated set-valued quadrature method is defined with the usual arithmetic operations as

    Q_N(F; [t_0, T]) := h Σ_{j=0}^{N−1} Σ_{μ=1}^{s} b_μ F(t_j + c_μ h).

Proposition 2.3 (cf. [Polovinkin, 1975], [Balaban, 1982], [Donchev and Farkhi, 1990], [Veliov, 1989a], [Krastanov and Kirov, 1994], [B. and Lempio, 1994b], [B., 1995]) Consider N ∈ N and a point-wise iterated quadrature formula of Notation 2.1 with non-negative weights b_μ ≥ 0 (μ = 1, ..., s) and the remainder term

    R_N(f; I) := ∫_I f(t) dt − Q_N(f; I).

Then, the corresponding set-valued quadrature method fulfills for F: I ⇒ R^n with images in C(R^n):

    d_H(∫_I F(t) dt, Q_N(F; I)) = sup_{||l||_2 = 1} | R_N(δ*(l, F(·)); I) |.

2.2. Quadrature Method for the Approximation of Attainable Sets

Proposition 2.4 (cf. [Donchev and Farkhi, 1990], [B. and Lempio, 1994b], [B., 1995]) Consider N ∈ N and a point-wise iterated quadrature formula of Notation 2.1 with non-negative weights b_μ ≥ 0 (μ = 1, ..., s). Assume that

• the values Φ(T, t_j + c_μ h) are known for j = 0, ..., N−1 and μ = 1, ..., s,
• the quadrature method has precision p − 1, p ∈ N,
• τ_p(δ*(l, Φ(T, ·)B(·)U); h) ≤ C h^p uniformly in l ∈ S_{n−1}.

Then,

    d_H(R(T, t_0, X_0), Q_N(Φ(T, ·)B(·)U; [t_0, T])) = O(h^p).

Proof: By [Sendov and Popov, 1988, Theorem 3.4]:

    | R_N(f) | = | ∫_I f(t) dt − Q_N(f; [t_0, T]) | ≤ (1 + Σ_{μ=1}^{s} b_μ / (T − t_0)) · W_p · τ_p(f; 2ph).

Since Lemma 1.11 holds and δ*(l, Q_N(F; I)) = Q_N(δ*(l, F(·)); I), one can apply the error estimate above to f(·) = δ*(l, F(·)) with F(·) = Φ(T, ·)B(·)U, taking the supremum over l ∈ S_{n−1}.

Example 2.5 Set-valued rectangular rule (special Riemannian sum) for I = [t_0, T]:

    Q(F; I) = (T − t_0) F(t_0),   Q_N(F; I) = h Σ_{j=0}^{N−1} F(t_j),
    Q_N(Φ(T, ·)B(·)U; I) = h Σ_{j=0}^{N−1} Φ(T, t_j) B(t_j) U,

in iterative form:

    Q_{j+1}^N = Φ(t_{j+1}, t_j) Q_j^N + h Φ(t_{j+1}, t_j) B(t_j) U,   Q_0^N = X_0.
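A hedged scalar sketch of this rule (my own test problem, not from the slides): for x'(t) = −x(t) + u(t), u(t) ∈ U = [−1, 1], X_0 = {0} on I = [0, 1] we have Φ(T, t) = e^{−(T−t)} and B ≡ 1, so the support function of Q_N in direction l = 1 is a left Riemann sum.

```python
import numpy as np

# delta*(1, Q_N) = h * sum_j delta*(1, Phi(T, t_j) * U) = h * sum_j exp(-(T - t_j))
T, N = 1.0, 1000
h = T / N
tj = h * np.arange(N)
radius = h * np.sum(np.exp(-(T - tj)))   # support in direction l = +1
exact = 1.0 - np.exp(-1.0)               # radius of the true reachable set
print(radius, exact)  # O(h)-close
```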

Example 2.6 Set-valued trapezoidal rule for I = [t_0, T]:

    Q(F; I) = (T − t_0)/2 · (F(t_0) + F(T)),   Q_N(F; I) = h/2 Σ_{j=0}^{N−1} (F(t_j) + F(t_{j+1})),
    Q_N(Φ(T, ·)B(·)U; I) = h/2 Σ_{j=0}^{N−1} (Φ(T, t_j) B(t_j) U + Φ(T, t_{j+1}) B(t_{j+1}) U),

in iterative form:

    Q_{j+1}^N = Φ(t_{j+1}, t_j) Q_j^N + h/2 (Φ(t_{j+1}, t_j) B(t_j) U + Φ(t_{j+1}, t_{j+1}) B(t_{j+1}) U),   Q_0^N = X_0.

Remark 2.7 Problems with quadrature methods:

• no generalization for nonlinear differential inclusions possible
• values of the fundamental solution Φ(t_{j+1}, t_j) resp. Φ(T, t_j) must be known in advance
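The different convergence orders of the two rules above (Examples 2.5 and 2.6) can be checked on a scalar test problem of my own, x' = −x + u, u ∈ [−1, 1], X_0 = {0} on [0, 1], again via the support function in direction l = 1:

```python
import numpy as np

def radius_rect(N):
    h = 1.0 / N
    tj = h * np.arange(N)
    return h * np.sum(np.exp(-(1.0 - tj)))          # rectangular rule

def radius_trap(N):
    h = 1.0 / N
    tj = h * np.arange(N + 1)
    w = np.full(N + 1, h)
    w[0] = w[-1] = h / 2                            # trapezoidal weights
    return float(w @ np.exp(-(1.0 - tj)))

exact = 1.0 - np.exp(-1.0)
e_rect = [abs(radius_rect(N) - exact) for N in (50, 100)]
e_trap = [abs(radius_trap(N) - exact) for N in (50, 100)]
ratio_rect = e_rect[0] / e_rect[1]
ratio_trap = e_trap[0] / e_trap[1]
print(ratio_rect, ratio_trap)  # approx. 2 (order 1) and approx. 4 (order 2)
```

Halving h halves the rectangular-rule error but quarters the trapezoidal-rule error, matching O(h) vs. O(h^2).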

(14)

Home Page

Title Page

Contents

JJ II

J I

Page14of52

Go Back

Full Screen

Close

Quit

2.3. Combination Methods

Proposition 2.8 (cf. [B. and Lempio, 1994b], [B., 1995]) Consider N ∈ N and a point-wise iterated quadrature formula of Notation 2.1 with non-negative weights b_μ ≥ 0 (μ = 1, ..., s). Assume that

(i) the quadrature method has precision p − 1, p ∈ N,
(ii) τ_p(δ*(l, Φ(T, ·)B(·)U); h) ≤ C h^p uniformly in l ∈ S_{n−1},
(iii) d_H(X_0, X_0^N) = O(h^p),

and uniformly in j = 0, ..., N−1 and μ = 1, ..., s:

(iv) Φ̃(t_{j+1}, t_j) = Φ(t_{j+1}, t_j) + O(h^{p+1}),
(v) d_H(Ũ_μ(t_j + c_μ h), Φ(t_{j+1}, t_j + c_μ h) B(t_j + c_μ h) U) = O(h^p).

Then, the combination method defined as

    X_{j+1}^N = Φ̃(t_{j+1}, t_j) X_j^N + h Σ_{μ=1}^{s} b_μ Ũ_μ(t_j + c_μ h)   (j = 0, ..., N−1)

satisfies the global estimate

    d_H(R(T, t_0, X_0), X_N^N) = O(h^p).

Especially, (v) is satisfied for

    Ũ_μ(t_j + c_μ h) := Φ̃_μ(t_{j+1}, t_j + c_μ h) B(t_j + c_μ h) U,

if Φ̃_μ(t_{j+1}, t_j + c_μ h) = Φ(t_{j+1}, t_j + c_μ h) + O(h^p).

Proof: Define for j = 0, ..., N−1 the iterations

    R_{j+1}^N = Φ(t_{j+1}, t_j) R_j^N + ∫_{t_j}^{t_{j+1}} Φ(t_{j+1}, τ) B(τ) U dτ,
    Q_{j+1}^N = Φ(t_{j+1}, t_j) Q_j^N + h Σ_{μ=1}^{s} b_μ Φ(t_{j+1}, t_j + c_μ h) B(t_j + c_μ h) U,   R_0^N = Q_0^N = X_0.

Then,

    R_N^N = R(T, t_0, X_0),   Q_N^N = Q_N(Φ(T, ·)B(·)U; [t_0, T]).

Show that X_j^N is bounded uniformly in j = 0, ..., N and that

    d_H(R_{j+1}^N, Q_{j+1}^N) ≤ ||Φ(t_{j+1}, t_j)|| · d_H(R_j^N, Q_j^N)
        + d_H(∫_{t_j}^{t_{j+1}} Φ(t_{j+1}, t) B(t) U dt, h Σ_{μ=1}^{s} b_μ Φ(t_{j+1}, t_j + c_μ h) B(t_j + c_μ h) U)
      ≤ (1 + h C̃) d_H(R_j^N, Q_j^N) + O(h^{p+1})

    ⇒ d_H(R_j^N, Q_j^N) ≤ (1 + h C̃)^j d_H(R_0^N, Q_0^N) + j · O(h^{p+1}) ≤ N · O(h^{p+1}) = O(h^p).

Furthermore,

    d_H(Q_{j+1}^N, X_{j+1}^N) ≤ ||Φ(t_{j+1}, t_j)|| · d_H(Q_j^N, X_j^N)
        + ||Φ(t_{j+1}, t_j) − Φ̃(t_{j+1}, t_j)|| · ||X_j^N||
        + h Σ_{μ=1}^{s} b_μ d_H(Φ(t_{j+1}, t_j + c_μ h) B(t_j + c_μ h) U, Ũ_μ(t_j + c_μ h))
      ≤ (1 + h C̃) d_H(Q_j^N, X_j^N) + O(h^{p+1})

    ⇒ d_H(Q_j^N, X_j^N) ≤ (1 + h C̃)^j d_H(Q_0^N, X_0^N) + j · O(h^{p+1})
        ≤ (1 + h C̃)^N d_H(X_0, X_0^N) + N · O(h^{p+1})
        ≤ e^{(T − t_0) C̃} O(h^p) + O(h^p) = O(h^p).

    ⇒ d_H(R_j^N, X_j^N) ≤ d_H(R_j^N, Q_j^N) + d_H(Q_j^N, X_j^N) = O(h^p)

uniformly in j = 0, ..., N.

Example 2.9 Combination method: iterated Riemannian sum / Euler for the matrix differential equation

    X'(t) = A(t)X(t)   (t ∈ [t_j, t_{j+1}]),   X(t_j) = I:

    X_{j+1}^N = Φ̃(t_{j+1}, t_j) X_j^N + h Φ̃_1(t_{j+1}, t_j) B(t_j) U   (j = 0, ..., N−1),
    Φ̃(t_{j+1}, t_j) = Φ̃(t_j, t_j) + h A(t_j) Φ̃(t_j, t_j),   Φ̃_1(t_{j+1}, t_j) = Φ̃(t_{j+1}, t_j).

Hence,

    X_{j+1}^N = (I + h A(t_j)) X_j^N + h (I + h A(t_j)) B(t_j) U   (j = 0, ..., N−1).

Other possibility for the calculation: Euler for the adjoint equation

    Y'(t) = −Y(t) A(t)   (t ∈ [t_0, T]),   Y(T) = I

gives

    X_N^N = Φ̃(T, t_0) X_0^N + h Σ_{j=0}^{N−1} Φ̃_1(T, t_j) B(t_j) U,

where Φ̃(T, t_j) results from N − j (backward) steps of Euler for the adjoint equation and Φ̃_1(T, t_j) = Φ̃(T, t_j).
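On a scalar test problem of my own (x' = −x + u, u ∈ U = [−1, 1], X_0 = {0} on [0, 1], i.e. A ≡ −1, B ≡ 1), the sets stay symmetric intervals [−r, r], so the combination method of Example 2.9 reduces to a radius recursion. A hedged sketch:

```python
import math

def radius_combination(N):
    """Radius of X_j^N under X_{j+1} = (1 + hA) X_j + h (1 + hA) B U, A = -1, B = 1."""
    h = 1.0 / N
    r = 0.0
    for _ in range(N):
        r = abs(1.0 - h) * (r + h)  # interval arithmetic: [-r, r] -> |1 - h| [-(r + h), r + h]
        # (the scalar factor multiplies both the state interval and the control term)
    return r

exact = 1.0 - math.exp(-1.0)  # exact reachable-set radius
err = abs(radius_combination(1000) - exact)
print(err)  # O(h): small for N = 1000
```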

Example 2.10 Usual combination of a set-valued quadrature method and a pointwise DE solver which provides approximations to the values of the fundamental solution at the quadrature nodes:

    set-valued quadrature method | solver for differential equations | step-size of DE solver | overall order
    iter. Riemannian sum         | Euler                             | h                      | O(h)
    iter. trapezoidal rule       | Euler-Cauchy/Heun                 | h                      | O(h^2)
    iter. midpoint rule          | modified Euler                    | h/2                    | O(h^2)
    iter. Simpson's rule         | classical RK(4)                   | h/2                    | O(h^4)
    Romberg's method             | extrapolation of the midpoint rule (with Euler as starting procedure) | h_i = (T − t_0)/2^i | O(Π_{ν=0}^{j} h_{i−ν}^2)

(under suitable smoothness assumptions)

Remark 2.11 Problems with these combination methods:

• no generalization for nonlinear differential inclusions possible
• values of the fundamental solutions Φ(t_{j+1}, t_j), Φ_μ(t_j + c_μ h, t_j) resp. Φ(T, t_j), Φ_μ(T, t_j + c_μ h) must be calculated additionally
• the approximation for Φ_μ(t_j + c_μ h, t_j) resp. Φ_μ(T, t_j + c_μ h) is calculated too accurately (O(h^{p+1}) instead of O(h^p))

3. Set-Valued Runge-Kutta Methods

Runge-Kutta methods can be expressed by the Butcher array (cf. [Butcher, 1987]):

    c_1 | a_{1,1} a_{1,2} ... a_{1,s−1} a_{1,s}
    c_2 | a_{2,1} a_{2,2} ... a_{2,s−1} a_{2,s}
    ... | ...
    c_s | a_{s,1} a_{s,2} ... a_{s,s−1} a_{s,s}
        | b_1     b_2     ... b_{s−1}   b_s

with c_1 := 0. Explicit Runge-Kutta methods satisfy a_{μ,ν} = 0 if μ ≤ ν, and c_1 = 0.

The set-valued Runge-Kutta method for the LDI is defined as follows: Choose a starting set X_0^N ∈ C(R^n) and define for j = 0, ..., N−1 and μ = 1, ..., s:

    η_{j+1}^N = η_j^N + h Σ_{μ=1}^{s} b_μ ξ_j^{(μ)},   (8)
    ξ_j^{(μ)} = A(t_j + c_μ h) (η_j^N + h Σ_{ν=1}^{μ−1} a_{μ,ν} ξ_j^{(ν)}) + B(t_j + c_μ h) u_j^{(μ)},   (9)
    u_j^{(μ)} ∈ U,   (10)
    η_0^N ∈ X_0^N,   (11)
    X_{j+1}^N = { η_{j+1}^N | η_{j+1}^N is defined by (8)–(11) }.   (12)

Remark 3.1 If nonlinear DIs are considered with F(t, x) = ∪_{u ∈ U} {f(t, x, u)}, equation (9) must be replaced by

    ξ_j^{(μ)} = f(t_j + c_μ h, η_j^N + h Σ_{ν=1}^{μ−1} a_{μ,ν} ξ_j^{(ν)}, u_j^{(μ)}).

For some selection strategies, some of the selections u_j^{(μ)} depend on others (e.g., they could all be equal).

If f(t, x, u) = f(t, u), i.e. F(t, x) = F(t), and X_0^N = {0_{R^n}}, we arrive at the underlying quadrature method

    η_{j+1}^N = η_j^N + h Σ_{μ=1}^{s} b_μ f(t_j + c_μ h, u_j^{(μ)}),   u_j^{(μ)} ∈ U,
    X_{j+1}^N = X_j^N + h Σ_{μ=1}^{s} b_μ F(t_j + c_μ h),
    X_N^N = h Σ_{j=0}^{N−1} Σ_{μ=1}^{s} b_μ F(t_j + c_μ h) = Q_N(F; [t_0, T])

of the Runge-Kutta method.

If f(t, x, u) = f(t, x), i.e. F(t, x) = {f(t, x)}, then X_j^N = {η_j^N} coincides with the pointwise Runge-Kutta method.

Remark 3.2 Grouping in equation (8) by matrices multiplied by η_j^N and u_j^{(μ)}, μ = 1, ..., s, we arrive at the form

    X_{j+1}^N = Φ̃(t_{j+1}, t_j) X_j^N + h ∪_{u_j^{(μ)} ∈ U} { Σ_{μ=1}^{s} b_μ Ψ̃_μ(t_{j+1}, t_j + c_μ h) u_j^{(μ)} }

with suitable matrices Φ̃(t_{j+1}, t_j) (involving matrix values of A(·)) and Ψ̃_μ(t_{j+1}, t_j + c_μ h) (involving matrix values of A(·) and B(·)).

Φ̃(t_{j+1}, t_j) is the same matrix as in the pointwise case for f(t, x, u) = A(t)x, hence it approximates Φ(t_{j+1}, t_j) with the same order as in the pointwise case.

Questions:

• What is the order of the set-valued Runge-Kutta method, i.e. d_H(R(T, t_0, X_0), X_N^N) = O(h^p)?
• Does the order coincide with the single-valued case?
• What selection strategy is preferable?
• Should the chosen selection strategy depend on the Runge-Kutta method?
• What smoothness assumptions do we need?

Answers in the literature:

    set-valued RK-method          | iter. quadrature method | global order | disturbance term for ... | local order of disturbance | overall global order
    Euler                         | Riemannian sum          | O(h)         | η_j^N                    | O(h^2)                     | O(h)
                                  |                         |              | u_j^{(1)}                | O(h)                       |
    Euler-Cauchy (constant sel.)  | midpoint rule           | O(h^2)       | η_j^N                    | O(h^3)                     | O(h^2)
                                  |                         |              | u_j^{(1)}                | O(h^2)                     |
    Euler-Cauchy (2 free sel.)    | trapezoidal rule        | O(h^2)       | η_j^N                    | O(h^3)                     | O(h^2)
                                  |                         |              | u_j^{(1)}                | O(h^2)                     |
                                  |                         |              | u_j^{(2)}                | O(h^2)                     |

Euler's method (see Subsection 3.1): cf. [Nikol'skii, 1988], [Dontchev and Farkhi, 1989], [Wolenski, 1990] for nonlinear DIs; for extensions see [Artstein, 1994], [Grammel, 2003].

Euler-Cauchy method (see Subsection 3.2): cf. [Veliov, 1992] as well as [Veliov, 1989b] for strongly convex nonlinear DIs.

Modified Euler method: see Subsection 3.3. Runge-Kutta(4) method: see Subsection 3.4.

3.1. Euler's Method

Remark 3.3 Consider Euler's method, i.e. the Butcher array

    0 | 0
      | 1

The underlying quadrature method is the special Riemannian sum:

    Q_N(F; [t_0, T]) = h Σ_{j=0}^{N−1} F(t_j).

Grouping by η_j^N and the single selection u_j^{(1)} yields

    X_{j+1}^N = (I + h A(t_j)) X_j^N + h B(t_j) U   (j = 0, ..., N−1).

Proposition 3.4 Euler's method is a combination method with the following settings:

    Q_N(F; [t_0, T]) = h Σ_{j=0}^{N−1} F(t_j),   Φ̃(t_{j+1}, t_j) = I + h A(t_j),   Φ̃_1(t_{j+1}, t_j) = I.
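The Euler recursion above can be scalarized with the support-function calculus of Lemma 1.9. A hedged sketch on a test problem of my own, the double integrator x_1' = x_2, x_2' = u, u ∈ U = [−1, 1], X_0 = {0}, T = 1: since δ*(l, X_{j+1}^N) = δ*((I + hA)^T l, X_j^N) + h δ*(B^T l, U), one transports the direction l through (I + hA)^T and sums the control contributions.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def euler_support(l, T=1.0, N=1000):
    """Support function of X_N^N (set-valued Euler) in direction l, X_0 = {0}."""
    h = T / N
    M_T = (np.eye(2) + h * A).T
    val, lj = 0.0, l.astype(float)
    for _ in range(N):                     # step j contributes h * delta*(B^T l_j, U)
        val += h * abs((B.T @ lj).item())  # delta*(m, [-1, 1]) = |m|
        lj = M_T @ lj                      # transport the direction backwards
    return val

r = euler_support(np.array([1.0, 0.0]))
print(r)  # exact support of R(1, 0, {0}) in direction (1, 0) is 1/2
```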

Proposition 3.5 (cf. [Nikol'skii, 1988], [Dontchev and Farkhi, 1989], [Wolenski, 1990], see also [Artstein, 1994], [Grammel, 2003]) If

• A(·) is Lipschitz,
• B(·) is bounded,
• τ_1(δ*(l, Φ(T, ·)B(·)U); h) ≤ C h uniformly in l ∈ S_{n−1}, e.g., if B(·) is Lipschitz,
• d_H(X_0, X_0^N) = O(h),

then Euler's method converges at least with order O(h).

Proof: The quadrature method has precision 0. If B(·) is Lipschitz, then Φ(T, ·)B(·) and hence also δ*(l, Φ(T, ·)B(·)U) (uniformly in l ∈ S_{n−1}) are Lipschitz. The following estimates are valid:

    ||Φ̃(t_{j+1}, t_j) − Φ(t_{j+1}, t_j)|| = ||I + h A(t_j) − Φ(t_{j+1}, t_j)|| = O(h^2),
    ||Φ̃_1(t_{j+1}, t_j) − Φ(t_{j+1}, t_j)|| = ||I − Φ(t_{j+1}, t_j)|| = O(h).

Hence, Proposition 2.8 can be applied, yielding O(h).

For order of convergence 1, it is sufficient that A(·) and B(·) (resp. δ*(l, Φ(T, ·)B(·)U), uniformly in l ∈ S_{n−1}) have bounded variation.

3.2. Euler-Cauchy Method (or Heun's Method)

Remark 3.6 Consider the method of Euler-Cauchy (or Heun's method), i.e. the Butcher array

    0 | 0   0
    1 | 1   0
      | 1/2 1/2

The underlying quadrature method is the iterated trapezoidal rule:

    Q_N(F; [t_0, T]) = h/2 Σ_{j=0}^{N−1} (F(t_j) + F(t_{j+1})).

Grouping by η_j^N and the two selections u_j^{(1)} and u_j^{(2)} yields

    X_{j+1}^N = (I + h/2 (A(t_j) + A(t_{j+1})) + h^2/2 A(t_{j+1}) A(t_j)) X_j^N
        + h/2 ∪_{u_j^{(1)}, u_j^{(2)} ∈ U} ((I + h A(t_{j+1})) B(t_j) u_j^{(1)} + B(t_{j+1}) u_j^{(2)})

for j = 0, ..., N−1.

Proposition 3.7 The method of Euler-Cauchy with two free selections "u_j^{(1)}, u_j^{(2)} ∈ U" is a combination method with the following settings:

    Q_N(F; [t_0, T]) = h/2 Σ_{j=0}^{N−1} (F(t_j) + F(t_{j+1})),
    Φ̃(t_{j+1}, t_j) = I + h/2 (A(t_j) + A(t_{j+1})) + h^2/2 A(t_{j+1}) A(t_j),
    Φ̃_1(t_{j+1}, t_j) := I + h A(t_{j+1}),   Φ̃_2(t_{j+1}, t_{j+1}) := I.

Proposition 3.8 The method of Euler-Cauchy with the constant selection strategy "u_j^{(1)} = u_j^{(2)}" is a combination method with the following settings:

    Q_N(F; [t_0, T]) = h Σ_{j=0}^{N−1} F(t_j + h/2),
    Φ̃(t_{j+1}, t_j) = I + h/2 (A(t_j) + A(t_{j+1})) + h^2/2 A(t_{j+1}) A(t_j),
    Ũ_1(t_j + h/2) := 1/2 (B(t_j) + B(t_{j+1}) + h A(t_{j+1}) B(t_j)) U.


Proposition 3.9 (cf. [Veliov, 1992] as well as [Veliov, 1989b] for strongly convex nonlinear DIs) If

• A'(·) and B(·) are Lipschitz,
• τ_2(δ*(l, Φ(T, ·)B(·)U); h) ≤ C h^2 uniformly in l ∈ S_{n−1}, e.g., if B'(·) is Lipschitz,
• d_H(X_0, X_0^N) = O(h^2),

then the method of Euler-Cauchy with constant or with two free selections converges at least with order O(h^2).

For order of convergence 2, it is sufficient that A'(·) and B'(·) (resp. d/dt δ*(l, Φ(T, ·)B(·)U), uniformly in l ∈ S_{n−1}) have bounded variation.
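The O(h^2) order can be observed numerically on a scalar test problem of my own (x' = −x + u, u ∈ U = [−1, 1], X_0 = {0} on [0, 1], so A ≡ −1, B ≡ 1). By Proposition 3.8, Φ̃ = 1 − h + h^2/2 and Ũ_1 = 1/2 (1 + 1 − h) U = (1 − h/2) U, so the interval radius obeys a scalar recursion; a hedged sketch:

```python
import math

def radius_heun(N):
    """Reachable-set radius of Euler-Cauchy with constant selection, A = -1, B = 1."""
    h = 1.0 / N
    phi = 1.0 - h + h * h / 2.0       # Phi~ from Proposition 3.8 for A = -1
    r = 0.0
    for _ in range(N):
        r = abs(phi) * r + h * (1.0 - h / 2.0)  # h * radius of U~_1 = (1 - h/2) U
    return r

exact = 1.0 - math.exp(-1.0)
e1 = abs(radius_heun(100) - exact)
e2 = abs(radius_heun(200) - exact)
ratio = e1 / e2
print(ratio)  # approx. 4, i.e. order 2
```

Halving the step-size quarters the error, consistent with the O(h^2) statement of Proposition 3.9.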

3.3. Modified Euler Method

Remark 3.10 Consider the modified Euler method, i.e. the Butcher array

    0   | 0   0
    1/2 | 1/2 0
        | 0   1

The underlying quadrature method is the iterated midpoint rule:

    Q_N(F; [t_0, T]) = h Σ_{j=0}^{N−1} F(t_j + h/2).

Grouping by η_j^N and the two selections u_j^{(1)} and u_j^{(2)} yields

    X_{j+1}^N = (I + h A(t_j + h/2) + h^2/2 A(t_j + h/2) A(t_j)) X_j^N
        + h ∪_{u_j^{(1)}, u_j^{(2)} ∈ U} (h/2 A(t_j + h/2) B(t_j) u_j^{(1)} + B(t_j + h/2) u_j^{(2)})

for j = 0, ..., N−1.

Proposition 3.11 The modified Euler method with the constant selection strategy "u_j^{(1)} = u_j^{(2)}" is a combination method with the following settings:

    Q_N(F; [t_0, T]) = h Σ_{j=0}^{N−1} F(t_j + h/2),
    Φ̃(t_{j+1}, t_j) = I + h A(t_j + h/2) + h^2/2 A(t_j + h/2) A(t_j),
    Ũ_1(t_j + h/2) := (B(t_j + h/2) + h/2 A(t_j + h/2) B(t_j)) U.

Since the quadrature method (midpoint rule) uses a constant approximation on [t_j, t_{j+1}], the constant selection in the modified Euler method is appropriate.

Proposition 3.12 If

• A'(·) and B(·) are Lipschitz,
• τ_2(δ*(l, Φ(T, ·)B(·)U); h) ≤ C h^2 uniformly in l ∈ S_{n−1}, e.g., if B'(·) is Lipschitz,
• d_H(X_0, X_0^N) = O(h^2),

then the modified Euler method with constant selection strategy converges at least with order O(h^2).

For order of convergence 2, it is sufficient that A'(·) and B'(·) (resp. d/dt δ*(l, Φ(T, ·)B(·)U), uniformly in l ∈ S_{n−1}) have bounded variation.
