Selection Strategies of Set-Valued Runge-Kutta Methods*

Robert Baier
University of Bayreuth, Applied Mathematics
D-95440 Bayreuth, Germany
e-mail: robert.baier@uni-bayreuth.de

* Lecture at the NAA 2004 Conference in Rousse (30 June 2004)
Contents

1 Introduction
  1.1 Differential Inclusions and Set-Valued Integral
  1.2 Arithmetic Operations on Sets
  1.3 Modulus of Smoothness
2 Quadrature and Combination Methods
  2.1 Quadrature Methods
  2.2 Quadrature Method for the Approximation of Attainable Sets
  2.3 Combination Methods
3 Set-Valued Runge-Kutta Methods
  3.1 Euler's Method
  3.2 Euler-Cauchy Method (or Heun's Method)
  3.3 Modified Euler Method
  3.4 Runge-Kutta (4)
4 Conclusions
1. Introduction

1.1. Differential Inclusions and Set-Valued Integral

Problem 1.1 Consider the nonlinear differential inclusion (DI)
\[ x'(t) \in F(t, x(t)) \quad \text{(f.a.e. } t \in I := [t_0, T]\text{)}, \tag{1} \]
\[ x(t_0) \in X_0 \tag{2} \]
with the nonempty set $X_0 \in \mathcal{C}(\mathbb{R}^n)$ and the set-valued mapping $F: I \times \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ with images in $\mathcal{C}(\mathbb{R}^n)$. Hereby, $\mathcal{C}(\mathbb{R}^n)$ denotes the set of nonempty, convex, compact subsets of $\mathbb{R}^n$, and $x: I \to \mathbb{R}^n$ fulfills $x(\cdot) \in AC(I)$, i.e. $x(\cdot)$ is absolutely continuous.

Definition 1.2 The attainable set $\mathcal{R}(t, t_0, X_0)$ at a given time $t \in I$ for Problem 1.1 is defined as
\[ \mathcal{R}(t, t_0, X_0) = \{ x(t) \mid x(\cdot) \in AC(I) \text{ is a solution of (1)--(2)} \}. \]

Aim of the methods presented here: approximation of the attainable set at time $T$ by other sets.
Simplification for the main parts of the talk:

Problem 1.3 The linear differential inclusion (LDI) is stated as follows:
\[ x'(t) \in A(t)x(t) + B(t)U \quad \text{(f.a.e. } t \in I = [t_0, T]\text{)}, \tag{3} \]
\[ x(t_0) \in X_0 \tag{4} \]
with matrix functions $A: I \to \mathbb{R}^{n \times n}$, $B: I \to \mathbb{R}^{n \times m}$ and sets $X_0 \in \mathcal{C}(\mathbb{R}^n)$, $U \in \mathcal{C}(\mathbb{R}^m)$.

Definition 1.4 The fundamental solution of the corresponding matrix differential equation
\[ X'(t) = A(t)X(t) \quad \text{(f.a.e. } t \in I\text{)}, \qquad X(\tau) = I, \]
to Problem 1.3 is denoted by $\Phi(\cdot, \tau)$ for $\tau \in I$, where $I \in \mathbb{R}^{n \times n}$ is the unit matrix.

Definition 1.5 ([Aumann, 1965]) Consider a set-valued function $F: I \rightrightarrows \mathbb{R}^n$ with images in $\mathcal{C}(\mathbb{R}^n)$ which is measurable and integrably bounded, i.e. there exists $k(\cdot) \in L_1(I)$ with $F(t) \subset k(t)B_1(0)$ f.a.e. $t \in I$. Then, Aumann's integral is defined as
\[ \int_{t_0}^{T} F(t)\,dt := \Big\{ \int_{t_0}^{T} f(t)\,dt \;\Big|\; f(\cdot) \text{ is an integrable selection of } F(\cdot) \Big\}. \]
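As a quick numerical illustration of Definition 1.5 (a sketch of ours, not part of the talk): for a convex-valued map the integral can be computed direction-wise via support functions. The interval-valued map $F(t) = [0, t]$ on $I = [0, 1]$ is an illustrative choice; every measurable selection satisfies $0 \le f(t) \le t$, so the exact Aumann integral is $[0, 1/2]$. All function names below are our own.

```python
import numpy as np

# Interval-valued map F(t) = [0, t] on I = [0, 1]; its Aumann integral is
# the set of integrals of all integrable selections f(t) in [0, t], i.e.
# the interval [0, 1/2]. For convex-valued F the integral commutes with
# support functions, so we integrate delta*(l, F(t)) for l = +1 and l = -1.

def support(l, t):
    # support function of the interval [0, t] in direction l (scalar case)
    return max(l * 0.0, l * t)

def aumann_integral_1d(N=1000, t0=0.0, T=1.0):
    # left Riemann sum of the support functions over N subintervals
    h = (T - t0) / N
    ts = t0 + h * np.arange(N)                       # left endpoints
    upper = h * sum(support(+1.0, t) for t in ts)    # delta*(+1, integral)
    lower = -h * sum(support(-1.0, t) for t in ts)   # -delta*(-1, integral)
    return lower, upper

lo, up = aumann_integral_1d()
```

The Riemann sum converges with first order here, matching the later quadrature discussion for set-valued Riemannian sums.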
1.2. Arithmetic Operations on Sets

Definition 1.6 Let $C, D \in \mathcal{C}(\mathbb{R}^n)$. The Hausdorff distance between $C$ and $D$ is defined as
\[ d_H(C, D) = \max \{ d(C, D),\ d(D, C) \}, \]
where
\[ d(C, D) = \sup_{c \in C} \operatorname{dist}(c, D), \qquad \operatorname{dist}(c, D) = \inf_{d \in D} \| c - d \|_2 \quad (c \in C). \]

Notation 1.7 The arithmetic operations on sets
\[ \lambda \cdot C := \{ \lambda \cdot c \mid c \in C \} \quad \text{(scalar multiple)}, \]
\[ C + D := \{ c + d \mid c \in C,\ d \in D \} \quad \text{(Minkowski sum)}, \]
\[ A \cdot C := \{ A \cdot c \mid c \in C \} \quad \text{(image under a linear mapping)} \]
are defined as usual for $C, D \in \mathcal{C}(\mathbb{R}^n)$, $A \in \mathbb{R}^{k \times n}$ and $\lambda \in \mathbb{R}$.
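Definitions 1.6 and 1.7 can be sketched for finite point sets (an illustration of ours; the talk works with general convex compact sets, and all names below are our own):

```python
import numpy as np

# Hausdorff distance (Definition 1.6) and Minkowski sum (Notation 1.7)
# for finite point sets in R^2.

def dist(c, D):
    # dist(c, D) = inf_{d in D} ||c - d||_2
    return min(np.linalg.norm(c - d) for d in D)

def one_sided(C, D):
    # d(C, D) = sup_{c in C} dist(c, D)
    return max(dist(c, D) for c in C)

def hausdorff(C, D):
    # d_H(C, D) = max{ d(C, D), d(D, C) }
    return max(one_sided(C, D), one_sided(D, C))

def minkowski_sum(C, D):
    # C + D = { c + d | c in C, d in D }
    return [c + d for c in C for d in D]

C = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
D = [np.array([0.0, 0.0]), np.array([0.0, 2.0])]
S = minkowski_sum(C, D)
```

Note the asymmetry of the one-sided distance $d(C, D)$; only the maximum of both directions is a metric.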
Definition 1.8 Let $C \in \mathcal{C}(\mathbb{R}^n)$, $l \in \mathbb{R}^n$. The support function resp. the supporting face for $C$ in direction $l$ is defined as
\[ \delta^*(l, C) := \max_{c \in C} \langle l, c \rangle \quad \text{resp.} \quad Y(l, C) := \{ c \in C \mid \langle l, c \rangle = \delta^*(l, C) \}. \]
Remark that
\[ C = \bigcap_{\|l\|_2 = 1} \{ x \in \mathbb{R}^n : \langle l, x \rangle \le \delta^*(l, C) \}. \]

Lemma 1.9 Let $C, D \in \mathcal{C}(\mathbb{R}^n)$, $A \in \mathbb{R}^{k \times n}$ and $\lambda \ge 0$. Then,
\[ C \subset D \iff \delta^*(l, C) \le \delta^*(l, D) \quad \text{for all } l \in S^{n-1} \subset \mathbb{R}^n, \text{ i.e. } \|l\|_2 = 1, \]
and the following calculus rules are valid for $l \in S^{n-1}$:
\[ \delta^*(l, C + D) = \delta^*(l, C) + \delta^*(l, D), \qquad Y(l, C + D) = Y(l, C) + Y(l, D), \]
\[ \delta^*(l, \lambda C) = \lambda\,\delta^*(l, C), \qquad Y(l, \lambda C) = \lambda\,Y(l, C), \]
\[ \delta^*(l, A C) = \delta^*(A^\top l, C), \qquad Y(l, A C) = A\,Y(A^\top l, C), \]
\[ d_H(C, D) = \sup_{\|l\|_2 = 1} | \delta^*(l, C) - \delta^*(l, D) | \tag{5} \]
and
\[ d_H(A U, B U) \le \|A - B\| \cdot \|U\| \quad \text{with } \|U\| := \sup_{u \in U} \|u\|_2, \tag{6} \]
\[ d_H((A + B)U,\ A U + B U) \le \|A - B\| \cdot \|U\|. \tag{7} \]
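Formula (5) can be checked numerically for polytopes by sampling unit directions (a sketch of ours with illustrative data: the squares $[0,1]^2$ and $[0,2]^2$, whose Hausdorff distance is $\sqrt{2}$, the distance from the corner $(2,2)$ to $(1,1)$):

```python
import numpy as np

# Numerical check of (5): d_H(C, D) = sup_{||l||_2=1} |delta*(l,C) - delta*(l,D)|,
# with the supremum approximated by sampling unit directions in R^2.

def support(l, V):
    # delta*(l, C) = max_{c in C} <l, c>; attained at a vertex for polytopes
    return max(np.dot(l, v) for v in V)

def hausdorff_via_support(V1, V2, num_dirs=3600):
    angles = np.linspace(0.0, 2.0 * np.pi, num_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return max(abs(support(l, V1) - support(l, V2)) for l in dirs)

V1 = [np.array(v, dtype=float) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
V2 = [np.array(v, dtype=float) for v in [(0, 0), (2, 0), (2, 2), (0, 2)]]
dh = hausdorff_via_support(V1, V2)
```

The maximizing direction $(1/\sqrt{2}, 1/\sqrt{2})$ lies on the sampling grid, so the sampled supremum hits the exact value here.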
Theorem 1.10 ([Aumann, 1965]) Let $F: I \rightrightarrows \mathbb{R}^n$ with nonempty, closed images be measurable and integrably bounded. Then, the Aumann integral of $F(\cdot)$ is compact, convex and nonempty with
\[ \delta^*\Big(l, \int_I F(t)\,dt\Big) = \int_I \delta^*(l, F(t))\,dt. \]

Lemma 1.11 (e.g. [Sonneborn and van Vleck, 1965]) Given Problem 1.3, the attainable set at time $T$ can be rewritten as
\[ \mathcal{R}(T, t_0, X_0) = \Phi(T, t_0)X_0 + \int_{t_0}^{T} \Phi(T, t)B(t)U\,dt. \]
Scalarization by support functions resp. supporting faces yields for $l \in S^{n-1}$:
\[ \delta^*(l, \mathcal{R}(T, t_0, X_0)) = \delta^*(\Phi(T, t_0)^\top l, X_0) + \int_I \delta^*(B(t)^\top \Phi(T, t)^\top l, U)\,dt, \]
\[ Y(l, \mathcal{R}(T, t_0, X_0)) = \Phi(T, t_0)\,Y(\Phi(T, t_0)^\top l, X_0) + \int_I \Phi(T, t)B(t)\,Y(B(t)^\top \Phi(T, t)^\top l, U)\,dt. \]
1.3. Modulus of Smoothness

Definition 1.12 Let $f: I \to \mathbb{R}^n$ be bounded. The averaged modulus of smoothness of order $k \in \mathbb{N}$ is defined as
\[ \tau_k(f; h) := \| \omega_k(f; \cdot\,; h) \|_{L_1}, \qquad \omega_k(f; x; h) := \sup \Big\{ |\Delta_\delta^k f(t)| : t,\ t + k\delta \in \big[x - \tfrac{kh}{2},\ x + \tfrac{kh}{2}\big] \cap I \Big\} \quad \text{for } x \in I, \]
where $\Delta_\delta^k f(t)$ is the $k$-th forward difference of $f(\cdot)$ in $t$ with step-size $\delta$.

Lemma 1.13 (cf. [Sendov and Popov, 1988]) Let $f: I \to \mathbb{R}^n$ be bounded and $p \in \mathbb{N}$. Then,
\[ \tau_p(f; h) = \begin{cases} o(1), & \text{if } f(\cdot) \text{ is Riemann integrable}, \\ O(h), & \text{if } f(\cdot) \text{ has bounded variation}, \\ o(h^{p-1}), & \text{if } p \ge 2 \text{ and } f^{(p-2)}(\cdot) \in AC(I), \\ O(h^p), & \text{if } p \ge 2,\ f^{(p-2)}(\cdot) \in AC(I) \text{ and } f^{(p-1)}(\cdot) \text{ has bounded variation}. \end{cases} \]
2. Quadrature and Combination Methods

2.1. Quadrature Methods

Notation 2.1 Let $I := [t_0, T]$ and $f: I \to \mathbb{R}^n$ be given. We denote the point-wise quadrature formula by
\[ Q(f; [t_0, T]) := (T - t_0) \sum_{\mu=1}^{s} b_\mu f(t_0 + c_\mu(T - t_0)), \]
where $b_\mu \in \mathbb{R}$ are the weights and $c_\mu \in [0, 1]$ determine the nodes ($\mu = 1, \dots, s$). Set $h = \frac{T - t_0}{N}$ as step-size for $N \in \mathbb{N}$ and define the iterated quadrature formula as
\[ Q^N(f; [t_0, T]) := \sum_{j=0}^{N-1} Q(f; [t_j, t_{j+1}]) = h \sum_{j=0}^{N-1} \sum_{\mu=1}^{s} b_\mu f(t_j + c_\mu h). \]
$Q(f; I)$ has precision $p \in \mathbb{N}_0$ if all polynomials up to degree $p$ are integrated exactly and there exists a polynomial $f$ of degree $p + 1$ with $Q(f; I) \ne \int_I f(t)\,dt$.

Definition 2.2 Consider a point-wise quadrature formula of Notation 2.1 and $F: I \rightrightarrows \mathbb{R}^n$ with images in $\mathcal{C}(\mathbb{R}^n)$. The iterated set-valued quadrature method is defined with the usual arithmetic operations as
\[ Q^N(F; [t_0, T]) := h \sum_{j=0}^{N-1} \sum_{\mu=1}^{s} b_\mu F(t_j + c_\mu h). \]
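The point-wise iterated formula of Notation 2.1 can be sketched in a few lines (our own function names; the trapezoidal weights $b = (\tfrac12, \tfrac12)$, $c = (0, 1)$ are one standard choice with precision 1):

```python
# Iterated point-wise quadrature Q^N(f) = h * sum_j sum_mu b_mu f(t_j + c_mu h)
# on [t0, T] with N subintervals of length h (Notation 2.1).

def iterated_quadrature(f, t0, T, N, b, c):
    h = (T - t0) / N
    total = 0.0
    for j in range(N):
        tj = t0 + j * h
        total += sum(bm * f(tj + cm * h) for bm, cm in zip(b, c))
    return h * total

b, c = (0.5, 0.5), (0.0, 1.0)   # trapezoidal rule: precision 1

# exact for polynomials up to degree 1, not for degree 2:
val_linear = iterated_quadrature(lambda t: 2.0 * t + 1.0, 0.0, 1.0, 4, b, c)
val_quadratic = iterated_quadrature(lambda t: t * t, 0.0, 1.0, 10, b, c)
```

The two sample values illustrate the precision notion: the linear integrand is reproduced exactly, the quadratic one only up to an $O(h^2)$ error.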
Proposition 2.3 (cf. [Polovinkin, 1975], [Balaban, 1982], [Donchev and Farkhi, 1990], [Veliov, 1989a], [Krastanov and Kirov, 1994], [B. and Lempio, 1994b], [B., 1995])
Consider $N \in \mathbb{N}$ and a point-wise iterated quadrature formula of Notation 2.1 with non-negative weights $b_\mu \ge 0$ ($\mu = 1, \dots, s$) and the remainder term
\[ R^N(f; I) := \int_I f(t)\,dt - Q^N(f; I). \]
Then, the corresponding set-valued quadrature method fulfills for $F: I \rightrightarrows \mathbb{R}^n$ with images in $\mathcal{C}(\mathbb{R}^n)$:
\[ d_H\Big( \int_I F(t)\,dt,\ Q^N(F; I) \Big) = \sup_{\|l\|_2 = 1} | R^N(\delta^*(l, F(\cdot)); I) |. \]
2.2. Quadrature Method for the Approximation of Attainable Sets

Proposition 2.4 (cf. [Donchev and Farkhi, 1990], [B. and Lempio, 1994b], [B., 1995])
Consider $N \in \mathbb{N}$ and a point-wise iterated quadrature formula of Notation 2.1 with non-negative weights $b_\mu \ge 0$ ($\mu = 1, \dots, s$). Assume that
• the values $\Phi(T, t_j + c_\mu h)$ are known for $j = 0, \dots, N-1$ and $\mu = 1, \dots, s$,
• the quadrature method has precision $p - 1$, $p \in \mathbb{N}$,
• $\tau_p(\delta^*(l, \Phi(T, \cdot)B(\cdot)U), h) \le C h^p$ uniformly in $l \in S^{n-1}$.
Then,
\[ d_H\big( \mathcal{R}(T, t_0, X_0),\ Q^N(\Phi(T, \cdot)B(\cdot)U; [t_0, T]) \big) = O(h^p). \]
Proof: By [Sendov and Popov, 1988, Theorem 3.4],
\[ |R^N(f)| = \Big| \int_I f(t)\,dt - Q^N(f; [t_0, T]) \Big| \le \Big( 1 + \frac{\sum_{\mu=1}^{s} b_\mu}{T - t_0} \Big) \cdot W_p \cdot \tau_p\Big(f;\ \frac{2h}{p}\Big). \]
Since Lemma 1.11 holds and $\delta^*(l, Q^N(F; I)) = Q^N(\delta^*(l, F(\cdot)); I)$, one can apply the error estimate above to $f(\cdot) = \delta^*(l, F(\cdot))$ with $F(\cdot) = \Phi(T, \cdot)B(\cdot)U$ and take the supremum over $l \in S^{n-1}$.
Example 2.5 Set-valued rectangular rule (special Riemannian sum) for $I = [t_0, T]$:
\[ Q(F; I) = (T - t_0)F(t_0), \qquad Q^N(F; I) = h \sum_{j=0}^{N-1} F(t_j), \]
\[ Q^N(\Phi(T, \cdot)B(\cdot)U; I) = h \sum_{j=0}^{N-1} \Phi(T, t_j)B(t_j)U. \]
In iterative form:
\[ Q^N_{j+1} = \Phi(t_{j+1}, t_j)Q^N_j + h\,\Phi(t_{j+1}, t_j)B(t_j)U, \qquad Q^N_0 = X_0. \]
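A scalar illustration of the set-valued rectangular rule (our own example data, not from the talk): for $x'(t) \in -x(t) + U$ with $U = [-1, 1]$, $X_0 = \{0\}$ on $[0, 1]$, the fundamental solution is $\Phi(T, t) = e^{-(T-t)}$, and by the scalarization of Lemma 1.11, $\delta^*(1, \mathcal{R}(1, 0, \{0\})) = \int_0^1 e^{-(1-t)}\,dt = 1 - 1/e$.

```python
import math

# Set-valued rectangular rule for the scalar LDI x'(t) in -x(t) + U,
# U = [-1, 1], X0 = {0}, on [0, 1] (illustrative data).
# delta*(1, Q^N) = h * sum_j Phi(T, t_j) * delta*(1, U) with delta*(1, U) = 1.

def support_QN(N, T=1.0):
    h = T / N
    return h * sum(math.exp(-(T - j * h)) for j in range(N))

exact = 1.0 - math.exp(-1.0)
approx = support_QN(1000)
```

The left Riemann sum underestimates here and converges with first order, as Proposition 2.4 with $p = 1$ predicts.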
Example 2.6 Set-valued trapezoidal rule for $I = [t_0, T]$:
\[ Q(F; I) = \frac{T - t_0}{2}\big(F(t_0) + F(T)\big), \qquad Q^N(F; I) = \frac{h}{2} \sum_{j=0}^{N-1} \big(F(t_j) + F(t_{j+1})\big), \]
\[ Q^N(\Phi(T, \cdot)B(\cdot)U; I) = \frac{h}{2} \sum_{j=0}^{N-1} \big(\Phi(T, t_j)B(t_j)U + \Phi(T, t_{j+1})B(t_{j+1})U\big). \]
In iterative form:
\[ Q^N_{j+1} = \Phi(t_{j+1}, t_j)Q^N_j + \frac{h}{2}\big(\Phi(t_{j+1}, t_j)B(t_j)U + \Phi(t_{j+1}, t_{j+1})B(t_{j+1})U\big), \qquad Q^N_0 = X_0. \]

Remark 2.7 Problems with quadrature methods:
• no generalization for nonlinear differential inclusions possible,
• values of the fundamental solutions $\Phi(t_{j+1}, t_j)$ resp. $\Phi(T, t_j)$ must be known in advance.
2.3. Combination Methods

Proposition 2.8 (cf. [B. and Lempio, 1994b], [B., 1995])
Consider $N \in \mathbb{N}$ and a point-wise iterated quadrature formula of Notation 2.1 with non-negative weights $b_\mu \ge 0$ ($\mu = 1, \dots, s$). Assume that
(i) the quadrature method has precision $p - 1$, $p \in \mathbb{N}$,
(ii) $\tau_p(\delta^*(l, \Phi(T, \cdot)B(\cdot)U), h) \le C h^p$ uniformly in $l \in S^{n-1}$,
(iii) $d_H(X_0, X_0^N) = O(h^p)$,
and uniformly in $j = 0, \dots, N-1$ and $\mu = 1, \dots, s$,
(iv) $\widetilde{\Phi}(t_{j+1}, t_j) = \Phi(t_{j+1}, t_j) + O(h^{p+1})$,
(v) $d_H\big(\widetilde{U}_\mu(t_j + c_\mu h),\ \Phi(t_{j+1}, t_j + c_\mu h)B(t_j + c_\mu h)U\big) = O(h^p)$.
Then, the combination method defined as
\[ X_{j+1}^N = \widetilde{\Phi}(t_{j+1}, t_j)X_j^N + h \sum_{\mu=1}^{s} b_\mu \widetilde{U}_\mu(t_j + c_\mu h) \quad (j = 0, \dots, N-1) \]
satisfies the global estimate
\[ d_H\big( \mathcal{R}(T, t_0, X_0),\ X_N^N \big) = O(h^p). \]
Especially, (v) is satisfied for $\widetilde{U}_\mu(t_j + c_\mu h) := \widetilde{\Phi}_\mu(t_{j+1}, t_j + c_\mu h)B(t_j + c_\mu h)U$, if
\[ \widetilde{\Phi}_\mu(t_{j+1}, t_j + c_\mu h) = \Phi(t_{j+1}, t_j + c_\mu h) + O(h^p). \]
Proof: Define for $j = 0, \dots, N-1$ the iterations
\[ R^N_{j+1} = \Phi(t_{j+1}, t_j)R^N_j + \int_{t_j}^{t_{j+1}} \Phi(t_{j+1}, \tau)B(\tau)U\,d\tau, \]
\[ Q^N_{j+1} = \Phi(t_{j+1}, t_j)Q^N_j + h \sum_{\mu=1}^{s} b_\mu \Phi(t_{j+1}, t_j + c_\mu h)B(t_j + c_\mu h)U, \qquad R^N_0 = Q^N_0 = X_0. \]
Then,
\[ R^N_N = \mathcal{R}(T, t_0, X_0), \qquad Q^N_N = Q^N(\Phi(T, \cdot)B(\cdot)U; [t_0, T]). \]
Show that $X_j^N$ is bounded uniformly in $j = 0, \dots, N$ and that
\[ d_H(R^N_{j+1}, Q^N_{j+1}) \le \|\Phi(t_{j+1}, t_j)\| \cdot d_H(R^N_j, Q^N_j) + d_H\Big( \int_{t_j}^{t_{j+1}} \Phi(t_{j+1}, t)B(t)U\,dt,\ h \sum_{\mu=1}^{s} b_\mu \Phi(t_{j+1}, t_j + c_\mu h)B(t_j + c_\mu h)U \Big) \]
\[ \le (1 + h\widetilde{C})\, d_H(R^N_j, Q^N_j) + O(h^{p+1}) \]
\[ \Rightarrow\quad d_H(R^N_j, Q^N_j) \le (1 + h\widetilde{C})^j\, d_H(R^N_0, Q^N_0) + j\,O(h^{p+1}) \le N\,O(h^{p+1}) = O(h^p). \]
Furthermore,
\[ d_H(Q^N_{j+1}, X^N_{j+1}) \le \|\Phi(t_{j+1}, t_j)\| \cdot d_H(Q^N_j, X^N_j) + \|\Phi(t_{j+1}, t_j) - \widetilde{\Phi}(t_{j+1}, t_j)\| \cdot \|X^N_j\| + h \sum_{\mu=1}^{s} b_\mu\, d_H\big(\Phi(t_{j+1}, t_j + c_\mu h)B(t_j + c_\mu h)U,\ \widetilde{U}_\mu(t_j + c_\mu h)\big) \]
\[ \le (1 + h\widetilde{C})\, d_H(Q^N_j, X^N_j) + O(h^{p+1}) \]
\[ \Rightarrow\quad d_H(Q^N_j, X^N_j) \le (1 + h\widetilde{C})^j\, d_H(Q^N_0, X^N_0) + j\,O(h^{p+1}) \le (1 + h\widetilde{C})^N\, d_H(X_0, X^N_0) + N\,O(h^{p+1}) \le e^{(T - t_0)\widetilde{C}}\,O(h^p) + O(h^p) = O(h^p). \]
\[ \Rightarrow\quad d_H(R^N_j, X^N_j) \le d_H(R^N_j, Q^N_j) + d_H(Q^N_j, X^N_j) = O(h^p) \]
uniformly in $j = 0, \dots, N$.
Example 2.9 Combination method: iterated Riemannian sum / Euler for the matrix differential equation
\[ X'(t) = A(t)X(t) \quad (t \in [t_j, t_{j+1}]), \qquad X(t_j) = I: \]
\[ X^N_{j+1} = \widetilde{\Phi}(t_{j+1}, t_j)X^N_j + h\,\widetilde{\Phi}_1(t_{j+1}, t_j)B(t_j)U \quad (j = 0, \dots, N-1), \]
\[ \widetilde{\Phi}(t_{j+1}, t_j) = \widetilde{\Phi}(t_j, t_j) + hA(t_j)\widetilde{\Phi}(t_j, t_j), \qquad \widetilde{\Phi}_1(t_{j+1}, t_j) = \widetilde{\Phi}(t_{j+1}, t_j). \]
Hence,
\[ X^N_{j+1} = (I + hA(t_j))X^N_j + h(I + hA(t_j))B(t_j)U \quad (j = 0, \dots, N-1). \]

Other possibility for the calculation: Euler for the adjoint equation
\[ Y'(t) = -Y(t)A(t) \quad (t \in [t_0, T]), \qquad Y(T) = I \]
gives
\[ X^N_N = \widetilde{\Phi}(T, t_0)X^N_0 + h \sum_{j=0}^{N-1} \widetilde{\Phi}_1(T, t_j)B(t_j)U, \]
where $\widetilde{\Phi}(T, t_j)$ is obtained by $N - j$ (backward) steps of Euler for the adjoint equation and $\widetilde{\Phi}_1(T, t_j) = \widetilde{\Phi}(T, t_j)$.
Example 2.10 Usual combination of a set-valued quadrature method and a pointwise DE solver which provides approximations to the values of the fundamental solution at the quadrature nodes:

set-valued quadrature method | solver for differential equations | step-size of DE solver | overall order
iter. Riemannian sum | Euler | $h$ | $O(h)$
iter. trapezoidal rule | Euler-Cauchy/Heun | $h$ | $O(h^2)$
iter. midpoint rule | modified Euler | $h/2$ | $O(h^2)$
iter. Simpson's rule | classical RK(4) | $h/2$ | $O(h^4)$
Romberg's method | extrapolation of midpoint rule (with Euler as starting procedure) | $h_i = \frac{T - t_0}{2^i}$ | $O\big(\prod_{\nu=0}^{j} h_{i-\nu}^2\big)$

(under suitable smoothness assumptions)

Remark 2.11 Problems with these combination methods:
• no generalization for nonlinear differential inclusions possible,
• values of the fundamental solutions $\Phi(t_{j+1}, t_j)$, $\Phi_\mu(t_j + c_\mu h, t_j)$ resp. $\Phi(T, t_j)$, $\Phi_\mu(T, t_j + c_\mu h)$ must be calculated additionally,
• the approximation for $\Phi_\mu(t_j + c_\mu h, t_j)$ resp. $\Phi_\mu(T, t_j + c_\mu h)$ is calculated too accurately ($O(h^{p+1})$ instead of $O(h^p)$).
3. Set-Valued Runge-Kutta Methods

Runge-Kutta methods can be expressed by the Butcher array (cf. [Butcher, 1987]):
\[ \begin{array}{c|cccccc} c_1 & a_{11} & a_{12} & \dots & a_{1,s-2} & a_{1,s-1} & a_{1,s} \\ c_2 & a_{21} & a_{22} & \dots & a_{2,s-2} & a_{2,s-1} & a_{2,s} \\ \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ c_{s-1} & a_{s-1,1} & a_{s-1,2} & \dots & a_{s-1,s-2} & a_{s-1,s-1} & a_{s-1,s} \\ c_s & a_{s,1} & a_{s,2} & \dots & a_{s,s-2} & a_{s,s-1} & a_{s,s} \\ \hline & b_1 & b_2 & \dots & b_{s-2} & b_{s-1} & b_s \end{array} \qquad \text{with } c_1 := 0. \]
Explicit Runge-Kutta methods satisfy $a_{\mu,\nu} = 0$ if $\mu \le \nu$, and $c_1 = 0$.

The set-valued Runge-Kutta method for the LDI is defined as follows: choose a starting set $X_0^N \in \mathcal{C}(\mathbb{R}^n)$ and define for $j = 0, \dots, N-1$ and $\mu = 1, \dots, s$:
\[ \eta^N_{j+1} = \eta^N_j + h \sum_{\mu=1}^{s} b_\mu \xi_j^{(\mu)}, \tag{8} \]
\[ \xi_j^{(\mu)} = A(t_j + c_\mu h)\Big( \eta^N_j + h \sum_{\nu=1}^{\mu-1} a_{\mu,\nu} \xi_j^{(\nu)} \Big) + B(t_j + c_\mu h)\,u_j^{(\mu)}, \tag{9} \]
\[ u_j^{(\mu)} \in U, \tag{10} \]
\[ \eta^N_0 \in X^N_0, \tag{11} \]
\[ X^N_{j+1} = \{ \eta^N_{j+1} \mid \eta^N_{j+1} \text{ is defined by (8)--(11)} \}. \tag{12} \]
Remark 3.1 If nonlinear DIs are considered with $F(t, x) = \bigcup_{u \in U} \{ f(t, x, u) \}$, equation (9) must be replaced by
\[ \xi_j^{(\mu)} = f\Big(t_j + c_\mu h,\ \eta^N_j + h \sum_{\nu=1}^{\mu-1} a_{\mu,\nu}\xi_j^{(\nu)},\ u_j^{(\mu)}\Big). \]
For some selection strategies, some of the selections $u_j^{(\mu)}$ depend on others (e.g., they could all be equal).

If $f(t, x, u) = f(t, u)$, i.e. $F(t, x) = F(t)$, and $X^N_0 = \{0_{\mathbb{R}^n}\}$, we arrive at the underlying quadrature method
\[ \eta^N_{j+1} = \eta^N_j + h \sum_{\mu=1}^{s} b_\mu f(t_j + c_\mu h, u_j^{(\mu)}), \quad u_j^{(\mu)} \in U, \qquad X^N_{j+1} = X^N_j + h \sum_{\mu=1}^{s} b_\mu F(t_j + c_\mu h), \]
\[ X^N_N = h \sum_{j=0}^{N-1} \sum_{\mu=1}^{s} b_\mu F(t_j + c_\mu h) = Q^N(F; [t_0, T]) \]
of the Runge-Kutta method.

If $f(t, x, u) = f(t, x)$, i.e. $F(t, x) = \{ f(t, x) \}$, then $X^N_j = \{ \eta^N_j \}$ coincides with the pointwise Runge-Kutta method.
Remark 3.2 Grouping in equation (8) by the matrices multiplied by $\eta^N_j$ and $u_j^{(\mu)}$, $\mu = 1, \dots, s$, we arrive at the form
\[ X^N_{j+1} = \widetilde{\Phi}(t_{j+1}, t_j)X^N_j + h \bigcup_{u_j^{(\mu)} \in U} \Big\{ \sum_{\mu=1}^{s} b_\mu \widetilde{\Psi}_\mu(t_{j+1}, t_j + c_\mu h)\,u_j^{(\mu)} \Big\} \]
with suitable matrices $\widetilde{\Phi}(t_{j+1}, t_j)$ (involving matrix values of $A(\cdot)$) and $\widetilde{\Psi}_\mu(t_{j+1}, t_j + c_\mu h)$ (involving matrix values of $A(\cdot)$ and $B(\cdot)$). $\widetilde{\Phi}(t_{j+1}, t_j)$ is the same matrix as in the pointwise case for $f(t, x, u) = A(t)x$, hence it approximates $\Phi(t_{j+1}, t_j)$ with the same order as in the pointwise case.

Questions:
• What is the order of the set-valued Runge-Kutta method, i.e. $d_H(\mathcal{R}(T, t_0, X_0), X^N_N) = O(h^p)$? Does the order coincide with the single-valued case?
• What selection strategy is preferable?
• Should the chosen selection strategy depend on the Runge-Kutta method?
• What smoothness assumptions do we need?
Answers in the literature:

set-valued RK-method | iter. quadrature method | global order | disturbance term for ... | local order of disturbance | overall global order
Euler | Riemannian sum | $O(h)$ | $\eta^N_j$; $u^{(1)}_j$ | $O(h^2)$; $O(h)$ | $O(h)$
Euler-Cauchy (constant sel.) | midpoint rule | $O(h^2)$ | $\eta^N_j$; $u^{(1)}_j$ | $O(h^3)$; $O(h^2)$ | $O(h^2)$
Euler-Cauchy (2 free sel.) | trapezoidal rule | $O(h^2)$ | $\eta^N_j$; $u^{(1)}_j$; $u^{(2)}_j$ | $O(h^3)$; $O(h^2)$; $O(h^2)$ | $O(h^2)$

Euler's method (see Subsection 3.1): cf. [Nikol'skiĭ, 1988], [Dontchev and Farkhi, 1989], [Wolenski, 1990] for nonlinear DIs; for extensions see [Artstein, 1994], [Grammel, 2003].
Euler-Cauchy method (see Subsection 3.2): cf. [Veliov, 1992] as well as [Veliov, 1989b] for strongly convex nonlinear DIs.
Modified Euler method (see Subsection 3.3), Runge-Kutta(4) method (see Subsection 3.4).
3.1. Euler's Method

Remark 3.3 Consider Euler's method, i.e. the Butcher array
\[ \begin{array}{c|c} 0 & 0 \\ \hline & 1 \end{array} \]
Underlying quadrature method = special Riemannian sum:
\[ Q^N(F; [t_0, T]) = h \sum_{j=0}^{N-1} F(t_j). \]
Grouping by $\eta^N_j$ and the single selection $u_j^{(1)}$ yields
\[ X^N_{j+1} = \big(I + hA(t_j)\big)X^N_j + hB(t_j)U \quad (j = 0, \dots, N-1). \]

Proposition 3.4 Euler's method is a combination method with the following settings:
\[ Q^N(F; [t_0, T]) = h \sum_{j=0}^{N-1} F(t_j), \qquad \widetilde{\Phi}(t_{j+1}, t_j) = I + hA(t_j), \qquad \widetilde{\Phi}_1(t_{j+1}, t_j) = I. \]
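The selection viewpoint of (8)-(12) can be made concrete for Euler's method by brute force (a sketch of ours with illustrative data: the scalar LDI $x' \in -x + U$, $U = [-1, 1]$, $X_0 = \{0\}$ on $[0, 1]$). Every sequence of selections $u_0, \dots, u_{N-1}$ produces one endpoint $\eta_N$, and the extreme selections $u_j = \pm 1$ reach the endpoints of the interval $X^N_N$.

```python
import itertools
import math

# Selection-based set-valued Euler for the scalar LDI x' in -x + U,
# U = [-1, 1], X0 = {0} on [0, 1]: eta_{j+1} = (1 + h*a) eta_j + h*u_j
# with a = -1 (illustrative data, not from the talk).

a = -1.0

def endpoint(selections, N, T=1.0):
    h = T / N
    eta = 0.0
    for u in selections:
        eta = (1.0 + h * a) * eta + h * u
    return eta

N = 8
endpoints = [endpoint(sel, N) for sel in itertools.product([-1.0, 1.0], repeat=N)]
upper = max(endpoints)   # reached by the constant selection u_j = +1
```

Enumerating all $2^N$ bang-bang selections is exponential and only feasible for tiny $N$; the support-function form of Remark 3.3 avoids this entirely.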
Proposition 3.5 (cf. [Nikol'skiĭ, 1988], [Dontchev and Farkhi, 1989], [Wolenski, 1990], see also [Artstein, 1994], [Grammel, 2003])
If
• $A(\cdot)$ is Lipschitz,
• $B(\cdot)$ is bounded,
• $\tau_1(\delta^*(l, \Phi(T, \cdot)B(\cdot)U), h) \le C h$ uniformly in $l \in S^{n-1}$, e.g., if $B(\cdot)$ is Lipschitz,
• $d_H(X_0, X^N_0) = O(h)$,
then Euler's method converges at least with order $O(h)$.

Proof: The quadrature method has precision 0. If $B(\cdot)$ is Lipschitz, then $\Phi(T, \cdot)B(\cdot)$ and hence also $\delta^*(l, \Phi(T, \cdot)B(\cdot)U)$ (uniformly in $l \in S^{n-1}$) are Lipschitz. The following estimates are valid:
\[ \|\widetilde{\Phi}(t_{j+1}, t_j) - \Phi(t_{j+1}, t_j)\| = \|(I + hA(t_j)) - \Phi(t_{j+1}, t_j)\| = O(h^2), \]
\[ \|\widetilde{\Phi}_1(t_{j+1}, t_j) - \Phi(t_{j+1}, t_j)\| = \|I - \Phi(t_{j+1}, t_j)\| = O(h). \]
Hence, Proposition 2.8 can be applied, yielding $O(h)$.

For order of convergence 1, it is sufficient that $A(\cdot)$ and $B(\cdot)$ (resp. $\delta^*(l, \Phi(T, \cdot)B(\cdot)U)$, uniformly in $l \in S^{n-1}$) have bounded variation.
3.2. Euler-Cauchy Method (or Heun's Method)

Remark 3.6 Consider the method of Euler-Cauchy (or Heun's method), i.e. the Butcher array
\[ \begin{array}{c|cc} 0 & 0 & 0 \\ 1 & 1 & 0 \\ \hline & \frac{1}{2} & \frac{1}{2} \end{array} \]
Underlying quadrature method = iterated trapezoidal rule:
\[ Q^N(F; [t_0, T]) = \frac{h}{2} \sum_{j=0}^{N-1} \big(F(t_j) + F(t_{j+1})\big). \]
Grouping by $\eta^N_j$ and the two selections $u_j^{(1)}$ and $u_j^{(2)}$ yields
\[ X^N_{j+1} = \Big(I + \frac{h}{2}\big(A(t_j) + A(t_{j+1})\big) + \frac{h^2}{2}A(t_{j+1})A(t_j)\Big)X^N_j + \frac{h}{2} \bigcup_{u_j^{(1)}, u_j^{(2)} \in U} \Big( \big(I + hA(t_{j+1})\big)B(t_j)u_j^{(1)} + B(t_{j+1})u_j^{(2)} \Big) \]
for $j = 0, \dots, N-1$.
Proposition 3.7 The method of Euler-Cauchy with two free selections "$u_j^{(1)}, u_j^{(2)} \in U$" is a combination method with the following settings:
\[ Q^N(F; [t_0, T]) = \frac{h}{2} \sum_{j=0}^{N-1} \big(F(t_j) + F(t_{j+1})\big), \]
\[ \widetilde{\Phi}(t_{j+1}, t_j) = I + \frac{h}{2}\big(A(t_j) + A(t_{j+1})\big) + \frac{h^2}{2}A(t_{j+1})A(t_j), \]
\[ \widetilde{\Phi}_1(t_{j+1}, t_j) := I + hA(t_{j+1}), \qquad \widetilde{\Phi}_2(t_{j+1}, t_{j+1}) := I. \]

Proposition 3.8 The method of Euler-Cauchy with the constant selection strategy "$u_j^{(1)} = u_j^{(2)}$" is a combination method with the following settings:
\[ Q^N(F; [t_0, T]) = h \sum_{j=0}^{N-1} F\big(t_j + \tfrac{h}{2}\big), \qquad \widetilde{\Phi}(t_{j+1}, t_j) = I + \frac{h}{2}\big(A(t_j) + A(t_{j+1})\big) + \frac{h^2}{2}A(t_{j+1})A(t_j), \]
\[ \widetilde{U}_1\big(t_j + \tfrac{h}{2}\big) := \frac{1}{2}\Big(B(t_j) + B(t_{j+1}) + hA(t_{j+1})B(t_j)\Big)U. \]
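As a numerical sanity check of Proposition 3.8 (a sketch of ours, not from the talk), the constant selection strategy applied to the scalar LDI $x' \in -x + U$, $U = [-1, 1]$, $X_0 = \{0\}$ on $[0, 1]$ gives, in direction $l = 1$, $\widetilde{\Phi} = 1 - h + h^2/2$ and $\delta^*(1, \widetilde{U}_1) = 1 - h/2$; the exact value is $\delta^*(1, \mathcal{R}(1, 0, \{0\})) = 1 - 1/e$, and halving $h$ should reduce the error by a factor of about 4.

```python
import math

# Euler-Cauchy with constant selection strategy (Prop. 3.8) for the scalar
# LDI x' in -x + U, U = [-1, 1], X0 = {0} on [0, 1] (illustrative data),
# evaluated in direction l = 1 via the support-function recursion.

def support_euler_cauchy(N, T=1.0):
    h = T / N
    Phi = 1.0 - h + h * h / 2.0     # tilde-Phi for a(t) = -1
    U1 = 1.0 - h / 2.0              # delta*(1, tilde-U_1) for b(t) = 1
    val = 0.0
    for _ in range(N):
        val = Phi * val + h * U1
    return val

exact = 1.0 - math.exp(-1.0)
e1 = abs(support_euler_cauchy(50) - exact)
e2 = abs(support_euler_cauchy(100) - exact)
ratio = e1 / e2                     # ~4 for a second-order method
```

The observed ratio near 4 is the empirical signature of the $O(h^2)$ order claimed in Proposition 3.9.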
Proposition 3.9 (cf. [Veliov, 1992] as well as [Veliov, 1989b] for strongly convex nonlinear DIs)
If
• $A'(\cdot)$ and $B(\cdot)$ are Lipschitz,
• $\tau_2(\delta^*(l, \Phi(T, \cdot)B(\cdot)U), h) \le C h^2$ uniformly in $l \in S^{n-1}$, e.g., if $B'(\cdot)$ is Lipschitz,
• $d_H(X_0, X^N_0) = O(h^2)$,
then the method of Euler-Cauchy with constant or with two free selections converges at least with order $O(h^2)$.

For order of convergence 2, it is sufficient that $A'(\cdot)$ and $B'(\cdot)$ (resp. $\frac{d}{dt}\delta^*(l, \Phi(T, \cdot)B(\cdot)U)$, uniformly in $l \in S^{n-1}$) have bounded variation.
3.3. Modified Euler Method

Remark 3.10 Consider the modified Euler method, i.e. the Butcher array
\[ \begin{array}{c|cc} 0 & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \hline & 0 & 1 \end{array} \]
Underlying quadrature method = iterated midpoint rule:
\[ Q^N(F; [t_0, T]) = h \sum_{j=0}^{N-1} F\big(t_j + \tfrac{h}{2}\big). \]
Grouping by $\eta^N_j$ and the two selections $u_j^{(1)}$ and $u_j^{(2)}$ yields
\[ X^N_{j+1} = \Big(I + hA\big(t_j + \tfrac{h}{2}\big) + \frac{h^2}{2}A\big(t_j + \tfrac{h}{2}\big)A(t_j)\Big)X^N_j + h \bigcup_{u_j^{(1)}, u_j^{(2)} \in U} \Big( \frac{h}{2}A\big(t_j + \tfrac{h}{2}\big)B(t_j)u_j^{(1)} + B\big(t_j + \tfrac{h}{2}\big)u_j^{(2)} \Big) \]
for $j = 0, \dots, N-1$.
Proposition 3.11 The modified Euler method with the constant selection strategy "$u_j^{(1)} = u_j^{(2)}$" is a combination method with the following settings:
\[ Q^N(F; [t_0, T]) = h \sum_{j=0}^{N-1} F\big(t_j + \tfrac{h}{2}\big), \qquad \widetilde{\Phi}(t_{j+1}, t_j) = I + hA\big(t_j + \tfrac{h}{2}\big) + \frac{h^2}{2}A\big(t_j + \tfrac{h}{2}\big)A(t_j), \]
\[ \widetilde{U}_1\big(t_j + \tfrac{h}{2}\big) := \Big(B\big(t_j + \tfrac{h}{2}\big) + \frac{h}{2}A\big(t_j + \tfrac{h}{2}\big)B(t_j)\Big)U. \]

Constant approximation by the quadrature method (midpoint rule) on $[t_j, t_{j+1}]$ ⇒ the constant selection in the modified Euler method is appropriate.

Proposition 3.12 If
• $A'(\cdot)$ and $B(\cdot)$ are Lipschitz,
• $\tau_2(\delta^*(l, \Phi(T, \cdot)B(\cdot)U), h) \le C h^2$ uniformly in $l \in S^{n-1}$, e.g., if $B'(\cdot)$ is Lipschitz,
• $d_H(X_0, X^N_0) = O(h^2)$,
then the modified Euler method with constant selection strategy converges at least with order $O(h^2)$.
For order of convergence 2, it is sufficient that