
A Shooting Method for Fully Implicit Index-2 Differential-Algebraic Equations

René Lamour

Abstract

A shooting method for two-point boundary value problems for fully implicit index-1 and index-2 differential-algebraic equations is presented. A combination of the shooting equations with a method for the calculation of consistent initial values leads to a system of nonlinear algebraic equations with a nonsingular Jacobian. Examples are given.

AMS(MOS) Classification: 65L10

Keywords: DAE, BVP, Index 2, consistent initial value

1 Introduction

In this paper we consider the fully implicit index-2 system

f(x'(t), x(t), t) = 0,   t ∈ [a, b],   (1.1)

with the boundary condition

g(x(a), x(b)) = 0.   (1.2)

Such problems arise as models of electrical networks, chemical reactions or index-reduced systems of mechanical motions. The possibility of the direct solution of given index-2 problems is very useful because

- the index reduction changes the stability behaviour of the DAE,
- it is easier to reduce an index-3 system by only one step than by two steps.

The realization of the shooting method is strongly connected with an integration method that integrates index-2 problems well. This is given if

ker(f_{y'}) = N = const,

e.g. for the BDF method. The presented shooting method links a procedure for the calculation of consistent initial values (this procedure alone is very useful) with the shooting equations, and the Jacobian of the whole method becomes nonsingular.

In Chapter 2 we introduce some projectors which are useful for the description of index-2 DAEs. We define a Green's function for the explicit representation of the solution of a linear index-2 DAE (Chapter 3). The numerical solution of (1.1),(1.2) by a shooting method is presented in Chapter 4, and some remarks on the numerical realization can be found in Chapter 5. Numerical examples complete the paper (Chapter 6).


2 Index determination and projectors

We investigate the nonlinear DAE

f(x'(t), x(t), t) = 0   (2.1)

as an IVP or BVP. For the numerical approximation of (2.1) it is necessary to know which index the DAE has.

Let x_* be a solution of the considered problem (2.1) and

A(t) := f_{y'}(x_*'(t), x_*(t), t),   B(t) := f_x(x_*'(t), x_*(t), t).   (2.2)

We define the chain of matrix functions [Ma87]

A_0 := A,   B_0 := B − A P_0',
A_{i+1} := A_i + B_i Q_i,   B_{i+1} := (B_i − A_{i+1}(P_0 P_1 ⋯ P_{i+1})' P_0 ⋯ P_{i−1}) P_i.   (2.3)

Q_i is defined to be a projector onto N_i := ker(A_i(t)), P_i := I − Q_i for i ≥ 0, and P_0 =: P. Then the following definition is given.

Definition. [Ma91] The ordered pair {A, B} of continuous matrix functions is said to be index-μ-tractable if all matrices A_j(t), j = 0, …, μ−1, within the chain (2.3) are singular with smooth nullspaces, and A_μ(t) remains nonsingular.

The nonlinear DAE (2.1) is said to be index-1-tractable locally around x_* if the pair of the linearization (2.2) is so, too.

The nonlinear DAE (2.1) is said to be index-μ-tractable locally around x_* for μ > 1 if the pair of the linearization (2.2) is so in a neighbourhood of the solution.

We are interested in the index-2-tractable case under the assumption that ker(f_{y'}) = N = const, i.e. P' = 0.

The following situation is given:

A_1 = A_0 + B_0 Q,   B_1 = (B_0 − A_1 (PP_1)') P,   (2.4)

A_2 = A_1 + B_1 Q_1 = (A + BQ + BPQ_1)(I − P_1 (PP_1)' PQ_1).   (2.5)

Lemma 2.1  Denote by Q an arbitrary projector, by P := I − Q, and by Z an arbitrary matrix. Then the matrix

M := I − Q Z P

is nonsingular and its inverse is given by

M^{-1} = I + Q Z P.

Proof. We consider the equation M z = 0. It follows z = Q Z P z ⇒ P z = 0, and we have z = 0. □
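The algebraic identity behind Lemma 2.1 is easy to check numerically. The following sketch (assuming NumPy; the concrete Q and Z are randomly chosen test data, not taken from the paper) verifies that I + QZP inverts I − QZP, which rests only on PQ = 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build an (oblique) projector Q from a random rank-2 factorization:
# Q = U (V U)^{-1} V projects onto span(U) along ker(V), so Q @ Q = Q.
U = rng.standard_normal((n, 2))
V = rng.standard_normal((2, n))
Q = U @ np.linalg.inv(V @ U) @ V
P = np.eye(n) - Q
Z = rng.standard_normal((n, n))            # arbitrary matrix Z

M = np.eye(n) - Q @ Z @ P                  # M := I - QZP
M_inv = np.eye(n) + Q @ Z @ P              # claimed inverse I + QZP

assert np.allclose(Q @ Q, Q)               # Q is a projector
assert np.allclose(M @ M_inv, np.eye(n))   # Lemma 2.1 holds exactly
assert np.allclose(M_inv @ M, np.eye(n))
```

The product (I − QZP)(I + QZP) = I − QZ(PQ)ZP collapses to I for every projector Q and every matrix Z, exactly as in the proof.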


Using Lemma 2.1 it is clear that A_2 is nonsingular if and only if

Ã_2 := A + BQ + BPQ_1   (2.6)

is nonsingular. However, for arbitrary projectors Q and Q_1 we can test the nonsingularity of Ã_2 pointwise, without any knowledge of the derivative of a projector as in A_2.
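This pointwise test is straightforward to carry out numerically. The sketch below (NumPy assumed; the DAE and the projectors are an illustrative example, not one of the paper's test problems) applies it to the linear system x_1' + x_2 = q_1, x_1 = q_2: A_1 = A + BQ turns out singular, while Ã_2 = A + BQ + BPQ_1 is nonsingular, indicating index 2:

```python
import numpy as np

# Linear DAE A x' + B x = q with x1' + x2 = q1, x1 = q2.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Q: projector onto N = ker(A) = span(e2); P = I - Q (constant, so P' = 0).
Q = np.diag([0.0, 1.0])
P = np.eye(2) - Q

A1 = A + B @ Q                              # chain matrix A_1
assert abs(np.linalg.det(A1)) < 1e-12       # A_1 singular -> not index 1

# Q1: orthogonal projector onto N_1 = ker(A_1) = span((1, -1)).
n1 = np.array([[1.0], [-1.0]])
Q1 = n1 @ n1.T / 2.0

A2_tilde = A + B @ Q + B @ P @ Q1           # pointwise test matrix (2.6)
assert abs(np.linalg.det(A2_tilde)) > 0.5   # nonsingular -> index 2
```

No derivative of any projector enters: only A, B and the two projectors at the current point are needed, which is the advantage of Ã_2 over A_2.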

Lemma 2.2  The following relations are valid if P' = 0 and Q_1 Q = 0:

a.)  A_2^{-1} A = P_1 P,
b.)  A_2^{-1} A = P − Q_1,
c.)  A_2^{-1} B Q = Q,
d.)  A_2^{-1} B P Q_1 = Q_1 − P_1 (PQ_1)' P Q_1.   (2.7)

Proof. For P' = 0, (PQ_1)' = −(PP_1)' is fulfilled. We obtain the relations a.) and b.) multiplying (2.5) from the right by P_1 P, and c.) by Q. For d.) we multiply (2.5) by A_2^{-1} and use the relations b.) and c.). □

Sometimes it is very useful to choose a special structure of the projectors. We focus our interest on the so-called canonical projector Q_1 with

Q_1 = Q_1 A_2^{-1} B P   (2.8)

(cf. [Ma91b]). This projector fulfills the condition Q_1 Q = 0 and it is really calculable, because

Q_1 = Q_1 A_2^{-1} B P = Q_1 Ã_2^{-1} B P.   (2.9)

For our further investigations we assume that

P' = 0   (2.10)

and

Q_1 Q = 0,   (2.11)

where Q_1 represents the canonical projector given in (2.9).

3 Representation of the solution for a linear index-2 BVP

In this chapter we present a solution of the linear system

A(t) x'(t) + B(t) x(t) = q(t),   (3.1)

D_a x(a) + D_b x(b) = γ.   (3.2)


First we consider the IVP (3.1) with the initial condition

P(s) P_1(s) (x(s) − α) = 0.   (3.3)

For the projector P_1 := I − Q_1 we prefer now the canonical projector Q_1 given in (2.9).

(3.1),(3.3) is uniquely solvable for all q with {q ∈ C : Q_1 A_2^{-1} q ∈ C^1} (cf. [Ma89]).

For a better understanding of the index-2 case we split the solution x into three parts:

u := PP_1 x,   v := PQ_1 x   and   w := Q x.

(With Q_1 Q = 0, PP_1 and PQ_1 are projectors, too.)

Multiplying (3.1) by PP_1 A_2^{-1}, QP_1 A_2^{-1} and PQ_1 A_2^{-1} we obtain

u' − (PP_1)' u + PP_1 A_2^{-1} B u = PP_1 A_2^{-1} q,
−QQ_1 v' − QQ_1 (PQ_1)' u + QP_1 A_2^{-1} B u + w = QP_1 A_2^{-1} q,
v = PQ_1 A_2^{-1} q.   (3.4)

The fundamental matrix of a DAE is given by the solution of the homogeneous IVP

A X' + B X = 0,   (3.5)

P(s) P_1(s) (X(s, s) − I) = 0.   (3.6)

Using (3.4) we have V := PQ_1 X = 0 and U := PP_1 X = PP_1 Y, where Y solves

Y' = ((PP_1)' − PP_1 A_2^{-1} B) Y,   Y(s, s) = I,

and W := Q X = QQ_1 (PQ_1)' U − QP_1 A_2^{-1} B U. With (3.6) and QW = W, PP_1 U = U we create

X(t, s) = M(t) Y(t, s) P(s) P_1(s)   (3.7)

with M(t) := I + Q(t)[Q_1 (PQ_1)'(t) − P_1(t) A_2^{-1} B(t)] P(t) P_1(t). Using Lemma 2.1 it is easy to verify that

PP_1 M = PP_1 M^{-1} = PP_1.

Recall that

P(t) P_1(t) Y(t, s) P(s) P_1(s) = Y(t, s) P(s) P_1(s)   (3.8)

(cf. [Ma89]).

Remark 1  Using the special projector Q = Q P_1 A_2^{-1} B, the split system (3.4) and also the matrix M of the fundamental matrix X look a little simpler. However, this projector is very difficult to calculate because of the derivatives in A_2.

Now we are looking for a representation of the solution of the IVP (3.1),(3.3). Using the fundamental matrix Y(t, s) we have for the component u

u(t) = Y(t, s) (PP_1 α + ∫_s^t Y(s, τ) PP_1 A_2^{-1} q(τ) dτ).   (3.9)


With (3.8) we transform (3.9) into

u(t) = Y(t, s) P(s) P_1(s) (α + ∫_s^t M(s) Y(s, τ) PP_1(τ) h(τ) dτ)

with h(t) := PP_1 A_2^{-1} q. For the other components we have

v = PQ_1 A_2^{-1} q   and   w = QP_1 A_2^{-1} q + QQ_1 (PQ_1)' u − QP_1 A_2^{-1} B u + QQ_1 v',

and therefore

x = u + v + w = M u + q̄(t)

with q̄(t) := (PQ_1 + QP_1) A_2^{-1} q + QQ_1 (PQ_1 A_2^{-1} q)'.

x(t) = X(t, s) (α + ∫_s^t X(s, τ) h(τ) dτ) + q̄(t)   (3.10)

represents the solution of the IVP (3.1),(3.3). Now we consider the solution x(t) in t = a and t = b.

x(a) = X(a, a) α + q̄(a)   and   x(b) = X(b, a) (α + ∫_a^b X(a, τ) h(τ) dτ) + q̄(b)

with unknown α. The boundary condition (3.2) requires

S α := (D_a X(a, a) + D_b X(b, a)) α = β − D_b X(b, a) ∫_a^b X(a, τ) h(τ) dτ =: γ̃   (3.11)

with β := γ − (D_a q̄(a) + D_b q̄(b)).

Theorem 3.1  [Ma91] Let (3.1), (3.2) be a tractable index-2 equation and let the projectors fulfill (2.10) and (2.11). Then, for arbitrary right-hand sides q with q ∈ C[a, b], PQ_1 A_2^{-1} q ∈ C^1[a, b] and γ ∈ im(D_a, D_b), (3.1), (3.2) have a unique solution if and only if

ker(S) = im(I − PP_1),   (3.12)
im(S) = im(D_a, D_b).   (3.13)

Proof. The unique solution of (3.1), (3.2) is related to the solution of the IVP (3.1), (3.3). Then it becomes clear that only PP_1 α influences the solution. Hence, we require for α

α = PP_1 α.   (3.14)

We are looking for solutions of (3.11) in the set P := {z | z ∈ im(PP_1)}. The right-hand side of (3.11) fulfills γ̃ ∈ im(D_a, D_b). This means that (3.13) is a necessary condition for the solvability of (3.11). The structure of X(t, s) provides

S = S P(a) P_1(a)   or   ker(S) ⊇ im(I − P(a) P_1(a)).   (3.15)

⇒ Let α ∈ P be a solution of (3.11); then also α + ζ ∈ P solves (3.11) with ζ ∈ ker(S). The uniqueness requires that P ∩ ker(S) = {0} ⇒ ker(S) ⊆ im(I − PP_1). With (3.15) formula (3.12) follows.


⇐ Let (3.12) be valid, and let α_1 and α_2 ∈ P denote two solutions of (3.11). Then α_1 − α_2 ∈ ker(S), but ker(S) ∩ P = {0} and α_1 = α_2. □

S^− denotes the generalized reflexive inverse of S with

S^− S S^− = S^−,   S S^− S = S   and   S^− S = P(a) P_1(a).   (3.16)

This representation of S^− is possible if (3.12) is valid. We multiply (3.11) by S^−:

P(a) P_1(a) α = S^− γ̃.

With (3.10) we have

x(t) = X(t, a) S^− β + ∫_a^t X(t, τ) h(τ) dτ − ∫_a^b X(t, a) S^− D_b X(b, τ) h(τ) dτ + q̄(t).

Using (3.16),

S^− D_b X(b, a) = PP_1 − S^− D_a X(a, a)   (3.17)

is valid and, therefore,

X(t, τ) − X(t, a)(PP_1 − S^− D_a X(a, a)) X(a, τ) = X(t, a) S^− D_a X(a, τ).

Now we introduce the Green's function

G(t, s) := {  X(t, a) S^− D_a X(a, s),   s ≤ t,
           { −X(t, a) S^− D_b X(b, s),   s > t,   (3.18)

and the following theorem holds.

Theorem 3.2  Let Theorem 3.1 be valid and let S^− denote a reflexive inverse of S with S^− S = PP_1(a). The solution of the BVP (3.1),(3.2) has the representation

x(t) = X(t, a) S^− β + ∫_a^b G(t, τ) h(τ) dτ + q̄(t).

4 Numerical solution by shooting method

The solution of BVPs by shooting methods requires that we are able to integrate the considered equation. For DAEs this means that we have to make available consistent initial values. We find different ideas for the calculation of consistent initial values:

- Numerical differentiation is used in the code DASSL (see also [LPG91]),
- Formula manipulation is proposed by [Han90],
- Special structure of the DAE is used by [AP91].

We use a general approach taking advantage of the given subspaces by using special projectors. A further disadvantage of shooting methods for DAEs is the singularity of the Jacobian. This problem we overcome as in the index-1 case (cf. [Lam91]) by the combination of the shooting equations with the equations for the calculation of consistent initial values.


4.1 Consistent initial values

We consider the nonlinear DAE

f(x'(t), x(t), t) = 0.   (4.1)

For a better understanding of the index-2 case let us consider the transferable or index-1 case. The assumption ker(f_{y'}) = N(t) allows us to transform (4.1) into

f((Px)'(t) − P' x(t), x(t), t) = 0   (4.2)

(see [GM87]).

Index 1:  We are looking for consistent initial values for the IVP (4.2) with

P(s)(x(s) − α) = 0.   (4.3)

We split x = Px + Qx =: u + v and denote y := (Px)' − P' x (recall that Py = y). Let us define ζ := y + v; then (4.2), considered in t = s, is written as follows:

f(P ζ, u + Q ζ, s) = 0   (4.4)

with known u = P α and searched ζ. The Jacobian of (4.4) is given by

f_{y'} P + f_x Q (= A_1),

which is nonsingular in the index-1 case.
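As an illustration, a system of the form (4.4) can be solved for ζ by Newton's method once P, Q and the known part u = Pα are fixed. The sketch below (NumPy assumed; the DAE f(x', x, t) = (x_1' − x_2, x_1 + x_2 − sin t) and all names are an illustrative index-1 example, not one of the paper's problems) recovers a consistent initial value at t = s:

```python
import numpy as np

def f(y, x, t):
    # Illustrative index-1 DAE: x1' - x2 = 0, x1 + x2 - sin(t) = 0
    return np.array([y[0] - x[1], x[0] + x[1] - np.sin(t)])

P = np.diag([1.0, 0.0])      # projector along N = ker(f_y') = span(e2)
Q = np.eye(2) - P

s, a = 0.7, 2.0              # initial time and prescribed P-component
u = np.array([a, 0.0])       # u = P*alpha

# Solve F(zeta) = f(P zeta, u + Q zeta, s) = 0 by Newton's method;
# the Jacobian f_y' P + f_x Q (= A_1) is constant and nonsingular here.
J = np.array([[1.0, -1.0],
              [0.0,  1.0]])
zeta = np.zeros(2)
for _ in range(5):
    zeta -= np.linalg.solve(J, f(P @ zeta, u + Q @ zeta, s))

x0 = u + Q @ zeta            # consistent initial value x(s)
y0 = P @ zeta                # consistent (Px)'(s)
assert np.allclose(f(y0, x0, s), 0.0)
assert np.isclose(x0[1], np.sin(s) - a)   # algebraic constraint satisfied
```

Since this example is linear, Newton converges in one step; for nonlinear f the same loop applies with the Jacobian evaluated at the current iterate.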

Index 2:  Now we transfer this technique to the index-2 case. Here we assume the stronger condition that

ker(f_{y'}) = N = const   (i.e. P = const, P' = 0).   (4.5)

This assumption does not restrict the class of numerically solvable problems. We know (compare with the example in [GP84]) that only assumption (4.5) ensures numerical success for index-2 problems. With (4.5), (4.2) has the structure

f((Px)', x, t) = 0.   (4.6)

We represent (4.6) in a more detailed way:

f((PP_1 x)' + (PQ_1 x)', x, t) = 0

or

f((PP_1 x)' − (PP_1)' PP_1 x + (PQ_1 x)' − (PQ_1)' PP_1 x, x, t) = 0   (4.7)

with the initial condition

(PP_1)(s)(x(s) − α) = 0.   (4.8)

We split x into the components x = PP_1 x + PQ_1 x + Qx =: u + v + w, and we define

y := (PP_1 x)' − (PP_1)' PP_1 x   and   ζ := y + v + w.   (4.9)


Here PP_1 y = y is valid. Now (4.7) reads

f(PP_1 ζ + v' − (PQ_1)' u, u + (PQ_1 + Q) ζ, s) = 0.   (4.10)

The trouble in formula (4.10) is caused by the unknown term v'. Therefore, we consider (4.10) in a neighbouring point s + h and replace v' by the finite difference v' ≈ (v^h − v)/h. We use the symbol (·)^h = (·)(s + h).

f̃ := f(PP_1 ζ + ({PQ_1 ζ}^h − PQ_1 ζ)/h − (PQ_1)' u, u + (PQ_1 + Q) ζ, s) = 0,   (4.11)

f̃^h := f({PP_1 ζ}^h + ({PQ_1 ζ}^h − PQ_1 ζ)/h − {(PQ_1)' u}^h, u^h + {(PQ_1 + Q) ζ}^h, s + h) = 0   (4.12)

with u^h = u + h u' = u + h(PP_1 ζ + (PP_1)' u). Here {PQ_1 ζ}^h = (PQ_1)(s + h) ζ^h contains the additional unknown ζ^h, so (4.11),(4.12) is a system for the pair (ζ, ζ^h).

Theorem 4.1  Let the projector Q_1 depend on u = PP_1 x and t only, P' = 0 and Q_1 Q = 0. Then the system (4.11),(4.12) has a nonsingular Jacobian in the point (y_* + v_*' − (PQ_1)' u_*, x_*, s) if h is sufficiently small, where (·)_* denotes the corresponding part of x_*.

Proof. The Jacobian is given by

J = ( ∂f̃/∂ζ      ∂f̃/∂ζ^h
      ∂f̃^h/∂ζ    ∂f̃^h/∂ζ^h )

with

∂f̃/∂ζ     = A (PP_1 − (1/h) PQ_1) + B (PQ_1 + Q),
∂f̃/∂ζ^h   = (1/h) A {PQ_1}^h,
∂f̃^h/∂ζ   = −(1/h) A^h PQ_1 − h {A (PQ_1)' − B}^h PP_1,
∂f̃^h/∂ζ^h = {A (PP_1 + (1/h) PQ_1) + B (PQ_1 + Q)}^h   (4.13)

with A := f_{y'} and B := f_x. Using the identities given in Lemma 2.2 and

(P − Q_1) PP_1 = PP_1,   (P − Q_1) PQ_1 = −QQ_1,   P_1 P = P_1 − Q   (4.14)


for P' = 0 we have

J = ( A_2   0
      0     A_2^h ) Γ   (4.15)

with

Γ = ( I + P_1((1 + 1/h) QQ_1 − (PQ_1)') PQ_1                          −(1/h)(P − Q_1) {PQ_1}^h
      (1/h){P − Q_1}^h PQ_1 − h {A_2^{-1}(A (PQ_1)' − B)}^h PP_1       I + {P_1((1 + 1/h) QQ_1 − (PQ_1)') PQ_1}^h ).

To show the nonsingularity of Γ, first we consider the equation

Γ̃ ( z_1
     z_2 ) = 0   (4.16)

with

Γ̃ = ( I + P_1((1 + 1/h) QQ_1 − (PQ_1)') PQ_1    −(1/h)(P − Q_1) {PQ_1}^h
       (1/h){P − Q_1}^h PQ_1                      I + {P_1((1 + 1/h) QQ_1 − (PQ_1)') PQ_1}^h ).

The result of the first equation of (4.16) after multiplying by Q_1 is

Q_1 z_1 = 0   (4.17)

and the second equation multiplied by Q_1^h yields

Q_1^h z_2 = 0.   (4.18)

Using (4.17) and (4.18) in (4.16) gives z_1 = z_2 = 0.

Now we consider the matrix

Γ̃ − Γ = ( 0                                     0
           h {A_2^{-1}(A (PQ_1)' − B)}^h PP_1    0 ).

Γ̃ − Γ depends continuously on h and vanishes for h → 0, i.e. if h is sufficiently small, then Γ is nonsingular, too. □

Remark 2  We consider (4.11), (4.12) for a linear DAE

A(t) x'(t) + B(t) x(t) = q(t)

and obtain

A(s)(PP_1 ζ + ({PQ_1 ζ}^h − PQ_1 ζ)/h − (PQ_1)' u) + B(s)(u + (PQ_1 + Q) ζ) = q(s),   (4.19)

A(s + h)({PP_1 ζ}^h + ({PQ_1 ζ}^h − PQ_1 ζ)/h − {(PQ_1)' u}^h) + B(s + h)(u^h + {(PQ_1 + Q) ζ}^h) = q(s + h).   (4.20)


We multiply (4.19) by A_2^{-1}(s) and (4.20) by A_2^{-1}(s + h) and obtain

(P − Q_1)(PP_1 ζ + ({PQ_1 ζ}^h − PQ_1 ζ)/h − (PQ_1)' u) + A_2^{-1}(s) B(s)(u + (PQ_1 + Q) ζ) = A_2^{-1}(s) q(s),   (4.21)

{P − Q_1}^h({PP_1 ζ}^h + ({PQ_1 ζ}^h − PQ_1 ζ)/h − {(PQ_1)' u}^h) + A_2^{-1}(s + h) B(s + h)(u^h + {(PQ_1 + Q) ζ}^h) = {A_2^{-1} q}^h.   (4.22)

The multiplication of (4.21) and (4.22) by PQ_1 and {PQ_1}^h, respectively, yields

PQ_1 ζ = PQ_1 A_2^{-1} q(s),   {PQ_1 ζ}^h = {PQ_1 A_2^{-1}}^h q(s + h)

or

lim_{h→0} ({PQ_1 ζ}^h − PQ_1 ζ)/h = v' = (PQ_1 A_2^{-1} q)'.   (4.23)

Using (4.23) we consider (4.19) for h → 0 and recover exactly the linear DAE, because

PP_1 ζ + (PQ_1 ζ)' − (PQ_1)' u = (Px)'   and   u + (PQ_1 + Q) ζ = x.

The accuracy of the numerical solution depends essentially on the condition of the matrix Γ, which we investigate by means of Γ̃. Using Lemma 2.1 and PQ_1(P − Q_1) = 0, the inverse of Γ̃ is given by

Γ̃^{-1} = ( I − P_1((1 + 1/h) QQ_1 − (PQ_1)') PQ_1    (1/h)(P − Q_1) {PQ_1}^h
            −(1/h){P − Q_1}^h PQ_1                     I − {P_1((1 + 1/h) QQ_1 − (PQ_1)') PQ_1}^h ).   (4.24)

We introduce the constants

K_1 := ||QQ_1||_{C[a,b]},   K_2 := ||P_1 (PQ_1)' PQ_1||_{C[a,b]}.

Using the Taylor expansion {PQ_1}^h = PQ_1 + h (PQ_1)' + O(h²) we obtain

||Γ̃|| ≤ (1 + K_1 + K_2) + (2/h) K_1 + O(h),
||Γ̃^{-1}|| ≤ (1 + K_1 + K_2) + (2/h) K_1 + O(h).

This proves the

Corollary  The condition of Γ̃ is bounded by

cond(Γ̃) ≤ ((1 + K_1 + K_2) + (2/h) K_1 + O(h))².

Remark 3  The essential part of the estimation shows that cond(Γ̃) ~ O(h^{-2}) in the worst case. This is not surprising because of the numerical differentiation.
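The h^{-2} behaviour is the familiar price of a one-sided difference quotient: shrinking h reduces the discretization error but amplifies rounding errors like eps/h. A minimal sketch (plain Python; the function sin and the step sizes are purely illustrative, not related to the paper's examples):

```python
import math

def fwd_diff_error(h):
    # error of the forward difference quotient for sin' at t = 1
    approx = (math.sin(1.0 + h) - math.sin(1.0)) / h
    return abs(approx - math.cos(1.0))

err_coarse = fwd_diff_error(1e-2)   # discretization error dominates: O(h)
err_opt    = fwd_diff_error(1e-8)   # near the optimal step ~ sqrt(eps)
err_tiny   = fwd_diff_error(1e-12)  # rounding error ~ eps/h dominates

assert err_opt < err_coarse         # decreasing h helps at first ...
assert err_opt < 1e-6               # ... and is very accurate near sqrt(eps)
```

In double precision the total error is typically smallest near h ≈ sqrt(eps); the same trade-off governs the choice of h in (4.11),(4.12).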


4.2 The shooting method

We consider now a boundary value problem

f(x'(t), x(t), t) = 0,   t ∈ [a, b],   (4.25)

g(x(a), x(b)) = 0.   (4.26)

The idea of shooting is well known. We subdivide the interval [a, b] into m subintervals a = t_0 < t_1 < ⋯ < t_m = b and we look for the initial values z_i := x(t_i), i = 0, …, m−1. x(t, s, z) denotes the solution of the IVP (4.25) with

PP_1(s)(x(s) − z) = 0.

The z_i have to fulfill the boundary condition

g(z_0, x(t_m, t_{m−1}, z_{m−1})) = 0   (4.27)

and the matching conditions

(PP_1)_i (z_i − x(t_i, t_{i−1}, z_{i−1})) = 0.   (4.28)

(The symbol (·)_i reads as (·)(t_i).)
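For an ODE without algebraic constraints and m = 1, the conditions (4.27),(4.28) reduce to classical single shooting. A sketch of that bare idea (NumPy assumed; the test problem x'' = x with x(0) = 0, x(1) = 1 is illustrative and uses none of the projector machinery required for DAEs):

```python
import numpy as np

def rk4(z, t0, t1, n=200):
    # integrate z' = F(z) for x'' = x, written as (x, x')' = (x', x)
    F = lambda z: np.array([z[1], z[0]])
    h = (t1 - t0) / n
    for _ in range(n):
        k1 = F(z); k2 = F(z + h/2*k1); k3 = F(z + h/2*k2); k4 = F(z + h*k3)
        z = z + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return z

def shoot(s):
    # residual of the boundary condition g = x(1) - 1 for initial slope s
    return rk4(np.array([0.0, s]), 0.0, 1.0)[0] - 1.0

# secant iteration for the shooting equation shoot(s) = 0
s0, s1 = 0.0, 1.0
for _ in range(20):
    f0, f1 = shoot(s0), shoot(s1)
    if abs(f1) < 1e-12:
        break
    s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)

# exact solution x(t) = sinh(t)/sinh(1), so x'(0) = 1/sinh(1)
assert abs(shoot(s1)) < 1e-10
assert abs(s1 - 1.0/np.sinh(1.0)) < 1e-6
```

For a DAE the same loop needs, in addition, the consistent-initialization equations (4.11),(4.12) at every shooting node, which is exactly the combination developed next in the text.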

The disadvantage of the system (4.27),(4.28) is the singularity of the Jacobian (as in the index-1 case). However, we use the same idea that solves this problem in the index-1 case (cf. [Lam91]). We combine the shooting equations with the equations (4.11),(4.12) for the determination of the initial values. For this aim we split the variable z_i into the parts

z_i = (PP_1)_i z_i + (PQ_1)_i z_i + (Q)_i z_i =: u_i + v_i + w_i,

and with ζ_i := y_i + v_i + w_i (cf. (4.9)) we have z_i = u_i + (PQ_1 + Q)_i ζ_i. The shooting equations are given by

g(u_0 + (PQ_1 + Q)_0 ζ_0, x(t_m, t_{m−1}, u_{m−1})) = 0,   (4.29)

u_i − (PP_1)_i x(t_i, t_{i−1}, u_{i−1}) = 0,   i = 1, …, m−1,   (4.30)

and the equations for the determination of the initial values in t_i read

f̃_i := f((PP_1)_i ζ_i + ({PQ_1 ζ}_i^h − (PQ_1)_i ζ_i)/h − (PQ_1)'_i u_i, u_i + (PQ_1 + Q)_i ζ_i, t_i) = 0,   (4.31)

f̃_i^h := f({PP_1 ζ}_i^h + ({PQ_1 ζ}_i^h − (PQ_1)_i ζ_i)/h − {(PQ_1)' u}_i^h, u_i^h + {(PQ_1 + Q) ζ}_i^h, t_i + h) = 0,   i = 0, …, m−1.

For the variable u we have to ensure that PP_1 u = u. This is valid for t_i, i = 1, …, m−1, by using (4.30). For u_0 we extend (4.29) to

g(u_0 + (PQ_1 + Q)_0 ζ_0, x(t_m, t_{m−1}, u_{m−1})) + K^{-1}(I − PP_1)_0 u_0 = 0,   (4.32)


where K is a nonsingular matrix with

im(g_{x_a}', g_{x_b}') ⊕ im(K^{-1}(I − PP_1)_0) = R^n.

For the calculation of the unknowns u_0, …, u_{m−1} we have to solve the equations (4.30) and (4.32). But in (4.32) also ζ_0 is engaged. This means that we extend our system by the equations (4.31) for the determination of the initial values in the point t_0.

Theorem 4.2  Let the assumptions of Theorems 3.1 and 4.1 be fulfilled and let Q_1 = Q_1(t) depend on t only. Then the system (4.32),(4.30),(4.31) (for i = 0) has a nonsingular Jacobian.

Proof. We order the variables in the following way: w = (u_0, …, u_{m−1}, ζ_0, ζ_0^h). Then the Jacobian J is given by

J = ( G_au                            G_bu  |  G_av   0
      M_0     I                             |
              ⋱       ⋱                    |
                      M_{m−2}   I           |
      ·····································
      F_{u_0}                               |  J_0 )   (4.33)

(blank entries are zero),

where we have used the following abbreviations:

G_au := g_{x_a}' + K^{-1}(I − PP_1)_0,
G_bu := g_{x_b}' X(t_m, t_{m−1}),
G_av := g_{x_a}' (PQ_1 + Q)_0,
M_i := −(PP_1)_{i+1} X(t_{i+1}, t_i),
F_{u_0} := ( −A_0 (PQ_1)'_0 + (B PP_1)_0
             {−A (PQ_1)' + B PP_1}_0^h (I + O(h)) ),

and J_0 denotes the matrix (4.13) in t_0.

We investigate the equation

J̄ w = 0.   (4.34)

J̄ denotes the matrix with the structure of J, but the matrices J_0 are replaced by

J̄_0 := ( A_2   0
          0     A_2^h ) Γ̃

(cf. (4.15)). The second to m-th equation of (4.34) is given by

u_{i+1} = (PP_1)_{i+1} X(t_{i+1}, t_i) u_i,   i = 0, …, m−2.   (4.35)

This leads to

u_{m−1} = (PP_1)_{m−1} X(t_{m−1}, t_0) u_0.

The latter 2n-dimensional equations are given by

J̄_0 ( ζ_0
       ζ_0^h ) = −( −A_0 (PQ_1)'_0 + (B PP_1)_0
                    {−A (PQ_1)' + B PP_1}_0^h (I + O(h)) ) u_0 = −F_{u_0} u_0.   (4.36)


We multiply the first equation of (4.36) by A_2^{-1} and the second one by (A_2^h)^{-1}. Using (4.24) and (2.8) we have

ζ_0 = ((Q_1 − P)(PQ_1)' − A_2^{-1} B PP_1) u_0,   (4.37)
ζ_0^h = {(Q_1 − P)(PQ_1)' − A_2^{-1} B PP_1}^h (I + O(h)) u_0.   (4.38)

Setting (4.37) in the first equation of (4.34) yields

(g_{x_a}' + K^{-1}(I − PP_1)_0) u_0 + g_{x_b}' X(t_m, t_{m−1}) u_{m−1} + g_{x_a}' (PQ_1 + Q)_0 ζ_0
  = (g_{x_a}' + K^{-1}(I − PP_1)_0 + g_{x_b}' X(t_m, t_0) + g_{x_a}' Q[Q_1 (PQ_1)' − P_1 A_2^{-1} B] PP_1) u_0
  = (g_{x_a}' (I + Q[Q_1 (PQ_1)' − P_1 A_2^{-1} B] PP_1) + g_{x_b}' X(t_m, t_0) + K^{-1}(I − PP_1)_0) u_0
  = (g_{x_a}' X(t_0, t_0) + g_{x_b}' X(t_m, t_0) + K^{-1}(I − PP_1)_0) u_0.

Using u_0 = PP_1 u_0 we obtain

(g_{x_a}' X(t_0, t_0) + g_{x_b}' X(t_m, t_0)) u_0 = S u_0 = 0,

and with (3.12) it follows that u_0 ∈ ker(S), i.e. u_0 = (I − PP_1) z, and this gives u_0 = 0. With (4.35) we have u_i = 0 and, using (4.36), ζ_0 = 0 and ζ_0^h = 0. The matrix J is a regular perturbation of J̄, i.e. for h sufficiently small also J is nonsingular. □

Remark 4  To determine the unknowns u_0, …, u_{m−1}, ζ_0, ζ_0^h and ζ_i, ζ_i^h, i = 1, …, m−1, we have to solve the systems {(4.32), (4.30), (4.31) with i = 0} and (4.31) for i = 1, …, m−1. All these systems have a nonsingular Jacobian. Consequently, for the solution of the BVP (1.1), (1.2) the presented shooting method is realizable with a common Newton-like method (taking into consideration the structure of the Jacobian, of course).

However, a part of the unknowns of the nonlinear algebraic system are partially projected vectors (u = PP_1 u). The question is: does the Newton method preserve this condition? Or, in other words: is the correction Δu also a PP_1 projection?

To answer this question let us consider the nonlinear system

S_g := g(u_0 + (PQ_1 + Q)_0 ζ_0, x(t_m, t_{m−1}, u_{m−1})) + K^{-1}(I − PP_1)_0 u_0,   (4.39)

S_{iu} := u_i − (PP_1)_i x(t_i, t_{i−1}, u_{i−1}),   i = 1, …, m−1,   (4.40)

S_0 := { f((PP_1)_0 ζ_0 + ({PQ_1 ζ}_0^h − (PQ_1)_0 ζ_0)/h − (PQ_1)'_0 u_0, u_0 + (PQ_1 + Q)_0 ζ_0, t_0)
       { f({PP_1 ζ}_0^h + ({PQ_1 ζ}_0^h − (PQ_1)_0 ζ_0)/h − {(PQ_1)' u}_0^h, u_0^h + {(PQ_1 + Q) ζ}_0^h, t_0 + h).   (4.41)

The Newton correction Δ := (Δu_0, …, Δu_{m−1}, Δζ_0, Δζ_0^h) is given as the solution of

J Δ = −S(w).   (4.42)
