(1)

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS

CHAPTER 5: INDEFINITE PROBLEMS

Heinrich Voss, voss@tu-harburg.de

Hamburg University of Technology, Institute of Numerical Simulation

(2)

Indefinite problems

Consider

Ax = b (1)

where A is symmetric, but not necessarily positive definite. Although problem (1) is then not equivalent to the minimization problem

φ(x) := (1/2) x^T A x − b^T x = min!,

the CG method can still be applied formally.

(3)

Conjugate gradient method

r_0 := b − A x_0; α_0 := (r_0)^T r_0; d_1 = r_0
for k = 1, 2, ... until d_k == 0 do
  s_k = A d_k
  τ_k = α_{k−1}/((d_k)^T s_k)
  x_k = x_{k−1} + τ_k d_k
  r_k = r_{k−1} − τ_k s_k
  β_k = 1/α_{k−1}
  α_k = (r_k)^T r_k
  β_k = β_k · α_k
  d_{k+1} = r_k + β_k d_k
end for
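A direct NumPy transcription of this loop may be helpful; it is a sketch (the function name, tolerance handling, and the breakdown guard are my own additions), and for symmetric indefinite A it inherits the breakdown discussed on the next slide:

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxiter=None):
    """Plain CG as in the listing above; may break down for indefinite A."""
    maxiter = maxiter or b.size
    x = x0.copy()
    r = b - A @ x                    # r_0
    alpha = r @ r                    # alpha_0 = (r_0)^T r_0
    d = r.copy()                     # d_1 = r_0
    if np.sqrt(alpha) < tol:
        return x
    for _ in range(maxiter):
        s = A @ d                    # s_k = A d_k
        dAd = d @ s
        if abs(dAd) < 1e-14 * (d @ d):   # heuristic guard: (d_k)^T A d_k ~ 0
            raise ArithmeticError("CG breakdown: d^T A d vanishes")
        tau = alpha / dAd            # tau_k
        x += tau * d
        r -= tau * s
        alpha_new = r @ r
        if np.sqrt(alpha_new) < tol:
            break
        beta = alpha_new / alpha     # beta_k = alpha_k / alpha_{k-1}
        alpha = alpha_new
        d = r + beta * d
    return x
```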

(4)

CG method ct.

If A is symmetric, but not positive definite, the CG method can break down with (d_k)^T s_k = (d_k)^T A d_k = 0 for some k < n.

As in the positive definite case the search directions satisfy (d_j)^T A d_i = 0 for i ≠ j, and hence the d_j are linearly independent.

Thus, as long as it does not break down, the CG method determines a basis of the Krylov space K_k(r_0, A), and the iterates x_k ∈ x_0 + K_k(r_0, A) satisfy the Galerkin condition

(d_j)^T (A x_k − b) = 0, j = 1, ..., k.

This method can become unstable if the matrix A_k := ((d_i)^T A d_j)_{i,j=1,...,k} is nearly singular.

(5)

CG method ct.

For k ≤ n let

θ_1^(k) ≤ θ_2^(k) ≤ ··· ≤ θ_k^(k)

be the eigenvalues of the matrix A_k, the so-called Ritz values of A with respect to K_k(r_0, A).

Since

K_j(r_0, A) ⊂ K_{j+1}(r_0, A), j = 1, ..., n − 1,

the minmax characterization of the eigenvalues of a symmetric matrix yields that, for fixed j, the sequence of Ritz values θ_j^(k) decreases monotonically and is bounded below by the j-th smallest eigenvalue of the system matrix A.

(6)

CG method ct.

If the matrix A has exactly one negative eigenvalue (and if (r_0)^T A r_0 > 0), then there exists at most (in general exactly) one critical phase in the CG method, namely when the smallest Ritz value θ_1^(k) changes its sign.

Every other sequence {θ_j^(k)} is bounded below by λ_j > 0.

Correspondingly, there exist at most (in general exactly) m critical phases in the CG method if the matrix A has m negative and p := n − m ≥ m positive eigenvalues.

(7)

CG method ct.

Consider

−∆u − 163.84u = 0 in Ω = (0, 1) × (0, 1), u = 0 on ∂Ω

Discretizing ∆ with central differences with stepsize h = 1/128 yields a linear system

3.99 U_{ij} − U_{i−1,j} − U_{i,j−1} − U_{i,j+1} − U_{i+1,j} = 0, i, j = 1, ..., 127,

of dimension n = 127² = 16129.

The coefficient matrix has 8 negative eigenvalues:

−8.795274784817014e−03, −6.988549802753376e−03, −6.988549802753343e−03,
−5.181824820689691e−03, −3.978550749789028e−03, −3.978550749789008e−03,
−2.171825767725394e−03, −2.171825767725365e−03
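For reproducibility, the discretized operator can be assembled as a Kronecker sum; the following SciPy sketch (variable names are mine) builds the matrix and checks the negative eigenvalues via shift-invert:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

m = 127                                   # interior grid points, h = 1/128
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
I = sp.identity(m)
# h^2-scaled 5-point stencil minus the shift; diagonal = 4 - 163.84/128^2 = 3.99
A = (sp.kron(I, T) + sp.kron(T, I) - 163.84 / 128**2 * sp.identity(m * m)).tocsc()
# the ten eigenvalues closest to zero, via shift-invert
lam = eigsh(A, k=10, sigma=0, return_eigenvectors=False)
print(np.sort(lam))                       # eight of them are negative
```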

(8)

Example

[Figure: convergence history of CG for −Δu − 163.84u = 0, h = 1/128; plotted quantity ‖e‖_A, logarithmic scale]

(9)

Example

[Figure: CG for −Δu − 243.0653001012069u = 0, h = 1/128; plotted quantity ‖r‖_2, logarithmic scale]

(10)

Lanczos method

The Lanczos algorithm determines an orthonormal basis {q_1, ..., q_k} of the Krylov space

K_k(r_0, A) := span{r_0, A r_0, ..., A^{k−1} r_0}, k = 1, ..., n,

such that

T_k := Q_k^T A Q_k, Q_k := (q_1, ..., q_k)

is tridiagonal.

(11)

Lanczos method ct.

1: q_0 = 0; k = 1
2: β_0 = ‖r_0‖
3: while r_{k−1} ≠ 0 do
4:   q_k = r_{k−1}/β_{k−1}
5:   r_k = A q_k
6:   r_k = r_k − β_{k−1} q_{k−1}
7:   α_k = (q_k)^T r_k
8:   r_k = r_k − α_k q_k
9:   β_k = ‖r_k‖
10:  k = k + 1
11: end while

Then, with T_k = tridiag{β_{j−1}, α_j, β_j}, it holds that A Q_k = Q_k T_k + β_k q_{k+1} (e_k)^T.
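A minimal NumPy sketch of this process (names are mine; the reorthogonalization a practical code needs is omitted, as in the listing):

```python
import numpy as np

def lanczos(A, r0, kmax):
    """Lanczos tridiagonalization as in the listing above; returns Q_k, T_k."""
    n = r0.size
    Q = np.zeros((n, kmax))
    alpha = np.zeros(kmax)
    beta = np.zeros(kmax + 1)
    beta[0] = np.linalg.norm(r0)          # beta_0 = ||r_0||
    r = r0.copy()
    q_prev = np.zeros(n)
    for k in range(kmax):
        if beta[k] == 0.0:                # invariant subspace detected
            kmax = k
            break
        q = r / beta[k]                   # q_k = r_{k-1} / beta_{k-1}
        r = A @ q - beta[k] * q_prev
        alpha[k] = q @ r
        r = r - alpha[k] * q
        beta[k + 1] = np.linalg.norm(r)
        Q[:, k] = q
        q_prev = q
    T = (np.diag(alpha[:kmax]) + np.diag(beta[1:kmax], 1)
         + np.diag(beta[1:kmax], -1))
    return Q[:, :kmax], T
```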

(12)

Lanczos method ct.

Consider the CG method for the linear system

Ax = b, A symmetric and positive definite.

Let x_0 = 0 (otherwise consider Ay = b − A x_0 =: b̃).

The approximation x_k after k steps minimizes

φ(x) := (1/2) x^T A x − b^T x in K_k(b, A).

(13)

Lanczos method ct.

Restricting φ to K_k(b, A) yields

φ_k(y) := (1/2) y^T Q_k^T A Q_k y − y^T Q_k^T b, y ∈ R^k,

and the CG iterate x_k satisfies

T_k y_k = Q_k^T b, x_k = Q_k y_k.

Disadvantage (at first glance): all q_j, j = 1, ..., k, are needed to determine x_k = Q_k y_k.

(14)

Lanczos method ct.

T_k is SPD. Hence it admits an LDL^T factorization T_k = L_k D_k L_k^T with

D_k = diag{d_1, ..., d_k}, d_j > 0,

and the unit lower bidiagonal matrix

L_k =
[ 1                 ]
[ μ_2  1            ]
[      μ_3  1       ]
[           ...  ...]
[            μ_k  1 ]

Comparing the elements in T_k = L_k D_k L_k^T yields

d_1 = α_1
for j = 2 : k
  μ_j = β_{j−1}/d_{j−1}
  d_j = α_j − β_{j−1} μ_j
end
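This recurrence is easy to code; a sketch for a symmetric tridiagonal T, assumed positive definite as here (names are mine):

```python
import numpy as np

def ldlt_tridiag(alpha, beta):
    """LDL^T factorization of the symmetric tridiagonal T with diagonal
    alpha[0..k-1] and off-diagonal beta[0..k-2]; returns d and the
    subdiagonal entries mu of the unit lower bidiagonal L.
    Well defined only if T is positive definite (all d_j > 0)."""
    k = len(alpha)
    d = np.zeros(k)
    mu = np.zeros(k)              # mu[0] is unused
    d[0] = alpha[0]
    for j in range(1, k):
        mu[j] = beta[j - 1] / d[j - 1]
        d[j] = alpha[j] - beta[j - 1] * mu[j]
    return d, mu
```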

(15)

Lanczos method ct.

With z_k := L_k^T y_k and C_k := Q_k L_k^{−T} it holds that

L_k D_k z_k = Q_k^T b, x_k = C_k z_k,

and

C_k L_k^T = (c_1, μ_2 c_1 + c_2, ..., μ_k c_{k−1} + c_k) = (q_1, ..., q_k) = Q_k.

Since L_k D_k is a lower triangular matrix and since Q_k^T b = ‖b‖_2 e_1, one obtains the solution of

L_k D_k z_k = ‖b‖_2 e_1

from the solution z_{k−1} of L_{k−1} D_{k−1} z_{k−1} = ‖b‖_2 e_1 by just appending the last component ζ_k.

(16)

CG with Lanczos process

1: r_0 = b − A x_0; q_0 = 0; c_0 = 0; β_0 = ‖r_0‖_2
2: for k = 1, 2, ... until convergence do
3:   q_k = r_{k−1}/β_{k−1}
4:   r_k = A q_k − β_{k−1} q_{k−1}
5:   α_k = (q_k)^T r_k
6:   r_k = r_k − α_k q_k
7:   β_k = ‖r_k‖_2
8:   if k == 1 then
9:     d_1 = α_1; ζ_1 = β_0/d_1; c_1 = q_1
10:  else
11:    μ_k = β_{k−1}/d_{k−1}; d_k = α_k − β_{k−1} μ_k; ζ_k = −μ_k d_{k−1} ζ_{k−1}/d_k
12:    c_k = q_k − μ_k c_{k−1}
13:  end if
14:  x_k = x_{k−1} + ζ_k c_k
15: end for
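A NumPy sketch of this combined loop (names are mine; the stopping test uses an explicit residual for clarity, at the cost of one extra matrix-vector product per step):

```python
import numpy as np

def cg_via_lanczos(A, b, x0, tol=1e-10, maxiter=None):
    """CG approximations from the Lanczos process (listing above).
    Assumes A is SPD so that the LDL^T recurrence is well defined."""
    maxiter = maxiter or b.size
    x = x0.copy()
    r = b - A @ x
    q_old = np.zeros_like(r)
    c = np.zeros_like(r)
    beta_old = np.linalg.norm(r)        # beta_0
    d_old = zeta = 0.0
    for k in range(1, maxiter + 1):
        q = r / beta_old
        r = A @ q - beta_old * q_old
        alpha = q @ r
        r = r - alpha * q
        beta = np.linalg.norm(r)
        if k == 1:
            d = alpha
            zeta = beta_old / d         # zeta_1 = beta_0 / d_1
            c = q.copy()
        else:
            mu = beta_old / d_old
            d = alpha - beta_old * mu
            zeta = -mu * d_old * zeta / d
            c = q - mu * c
        x = x + zeta * c
        if np.linalg.norm(b - A @ x) < tol:   # explicit residual test (sketch)
            break
        q_old, beta_old, d_old = q, beta, d
    return x
```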

(17)

Conclusion

The CG approximations x_k can be obtained by the Lanczos method.

Cost of CG with Lanczos per step:

1 matrix-vector product
2 scalar products
5 _axpy operations

(18)

Indefinite problems

Lanczos' method does not need the definiteness of A; hence the projected system

T_k y_k = Q_k^T A Q_k y_k = ‖b‖_2 e_1 (2)

can be determined in a stable way.

The LDL^T factorization of T_k, however, does not necessarily exist, and it cannot be computed in a stable way.

Paige & Saunders (1975): use the LQ factorization of T_k, i.e. T_k = L̃_k U_k with L̃_k lower triangular and U_k orthogonal.

(19)

Indefinite problems ct.

Determine U_k as a product of Givens reflections:

U_k = G_k [ G_{k−1} 0 ; 0^T 1 ] · ... · [ G_2 0 ; 0 I_{k−2} ] =: G_k [ U_{k−1} 0 ; 0^T 1 ],

where

G_j = [ I_{j−2} 0 0 ; 0^T c_j s_j ; 0^T s_j −c_j ] ∈ R^{j×j}, c_j² + s_j² = 1.
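Note that G_j is a reflection (G_j = G_j^T = G_j^{−1}), not the more common rotation. A small helper (names are mine) computing c and s so that the row vector (γ̃, β) is mapped to (γ, 0):

```python
import numpy as np

def sym_givens(gamma_tilde, beta):
    """Symmetric 2x2 reflection G = [[c, s], [s, -c]] with c^2 + s^2 = 1,
    chosen so that (gamma_tilde, beta) @ G = (gamma, 0)."""
    gamma = np.hypot(gamma_tilde, beta)
    if gamma == 0.0:
        return 1.0, 0.0, 0.0      # conventional choice if both entries vanish
    return gamma_tilde / gamma, beta / gamma, gamma

# Example: (3, 4) gives c = 0.6, s = 0.8, gamma = 5,
# and (3, 4) @ [[0.6, 0.8], [0.8, -0.6]] = (5, 0).
```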

(20)

Indefinite problems ct.

T_k U_k^T = [ T_{k−1} β_{k−1} e_{k−1} ; β_{k−1} (e_{k−1})^T α_k ] [ U_{k−1}^T 0 ; 0^T 1 ] G_k
= [ T_{k−1} U_{k−1}^T β_{k−1} e_{k−1} ; β_{k−1} (e_{k−1})^T U_{k−1}^T α_k ] G_k
= [ L̃_{k−1} β_{k−1} e_{k−1} ; β_{k−1} (e_{k−1})^T U_{k−1}^T α_k ] G_k,

where G_k is chosen such that the entry β_{k−1} at position (k − 1, k) is annihilated.

Moreover,

β_{k−1} (e_{k−1})^T U_{k−1}^T = β_{k−1} (e_{k−1})^T [ U_{k−2}^T 0 ; 0^T 1 ] G_{k−1}
= β_{k−1} (e_{k−1})^T G_{k−1} = (0, ..., 0, β_{k−1} s_{k−1}, −β_{k−1} c_{k−1}).

(21)

Indefinite problems ct.

Multiplying a matrix by G_k from the right combines only its last two columns. Hence, in L̃_k = T_k U_k^T only 3 diagonals are different from zero, i.e. L̃_k has the form

L̃_k =
[ γ_1                          ]
[ δ_1  γ_2                     ]
[ ε_1  δ_2  γ_3                ]
[      ...   ...   ...         ]
[      ε_{k−2}  δ_{k−1}  γ̃_k  ]

(lower triangular, with main diagonal γ_j, first subdiagonal δ_j, second subdiagonal ε_j, and a provisional last diagonal entry γ̃_k).

(22)

Indefinite problems ct.

In the transition from L̃_k to

L̃_{k+1} = T_{k+1} [ U_k^T 0 ; 0^T 1 ] G_{k+1}
= [ L̃_k β_k e_k ; β_k (e_k)^T U_k^T α_{k+1} ] G_{k+1}
= [ L_k 0 ; (0, ..., 0, ε_{k−1}, δ_k) γ̃_{k+1} ],

in the leading principal (k, k) submatrix only the provisional diagonal element γ̃_k is changed, namely into γ_k.

Indefinite problems ct.

With

W̃_k := (w_1, ..., w_{k−1}, w̃_k) := Q_k U_k^T and z̃_k := (ζ_1, ..., ζ_{k−1}, ζ̃_k)^T := U_k y_k,

the linear system

T_k y_k = ‖b‖_2 e_1, x_k = Q_k y_k

can be rewritten as

L̃_k z̃_k = ‖b‖_2 e_1, x_k = W̃_k z̃_k.

As in the computation of the CG approximation with the Lanczos method for the SPD case, this decomposition can be used to solve the projected problem.

(24)

Indefinite problems ct.

For k = 2 the solution z̃_2 is

ζ_1 = ‖b‖_2/γ_1, ζ̃_2 = −δ_1 ζ_1/γ̃_2.

If z̃_{k−1} is already known, the leading k − 2 components of z̃_k coincide with those of z̃_{k−1}, and it holds that

ζ_{k−1} = ζ̃_{k−1} γ̃_{k−1}/γ_{k−1} = c_k ζ̃_{k−1},
ζ̃_k = −(ε_{k−2} ζ_{k−2} + δ_{k−1} ζ_{k−1})/γ̃_k.

(25)

Indefinite problems ct.

From

W̃_k = (w_1, ..., w_{k−1}, w̃_k) = (q_1, ..., q_k) U_k^T
= (q_1, ..., q_k) [ U_{k−1}^T 0 ; 0^T 1 ] G_k
= ((q_1, ..., q_{k−1}) U_{k−1}^T, q_k) G_k
= (w_1, ..., w_{k−2}, w̃_{k−1}, q_k) [ I_{k−2} 0 0 ; 0^T c_k s_k ; 0^T s_k −c_k ]

we obtain w_{k−1} and w̃_k in the k-th step from

(w_{k−1}, w̃_k) = (w̃_{k−1}, q_k) [ c_k s_k ; s_k −c_k ], w̃_1 = q_1.

(26)

Indefinite problems ct.

Although the LQ factorization is determined in a stable way (even for indefinite matrices), the algorithm should not be implemented directly, since intermediate matrices L̃_k may become singular, or |γ̃_k| can become very small when a Ritz value passes the origin.

Since γ_k = sqrt(β_k² + γ̃_k²) ≠ 0 as long as β_k ≠ 0 (i.e. as long as the Lanczos process has not detected an invariant subspace of A), the matrix L_k is nonsingular. Hence, the linear system

L_k z_k = ‖b‖_2 e_1

has a unique solution z_k for every k, which can be determined in a more stable way.

(27)

Indefinite problems ct.

Paige and Saunders propose to update the vectors

x̂_k := W_k z_k = x̂_{k−1} + ζ_k w_k

instead of x_k, from which we can get the CG approximation

x_{k+1} = x̂_k + ζ̃_{k+1} w̃_{k+1}

after the method has converged. This method is called SYMMLQ.

(28)

SYMMLQ

1: c_1 = −1; s_1 = 0; c_0 = 1; s_0 = 0; ζ_0 = −1; ζ_{−1} = 0
2: x̂_0 = x_0; w̃_0 = 0; q_0 = 0; q̃ = b − A x_0; β_0 = ‖q̃‖_2; q_1 = q̃/β_0
3: for k = 1, 2, ... until convergence do
4:   q̃ = A q_k − β_{k−1} q_{k−1}
5:   α_k = q̃^T q_k
6:   q̃ = q̃ − α_k q_k
7:   β_k = ‖q̃‖_2
8:   γ̃_k = −c_k α_k − c_{k−1} s_k β_{k−1}; γ_k = sqrt(γ̃_k² + β_k²)
9:   δ_{k−1} = s_k α_k − c_k c_{k−1} β_{k−1}
10:  ε_{k−2} = s_{k−1} β_{k−1}
11:  c_{k+1} = γ̃_k/γ_k; s_{k+1} = β_k/γ_k
12:  w̃_k = s_k w̃_{k−1} − c_k q_k
13:  ζ_k = −(ε_{k−2} ζ_{k−2} + δ_{k−1} ζ_{k−1})/γ_k
14:  q_{k+1} = q̃/β_k
15:  x̂_k = x̂_{k−1} + ζ_k (c_{k+1} w̃_k + s_{k+1} q_{k+1})
16: end for
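A literal NumPy transcription of this listing may be useful. It is a sketch: names are mine, the stopping test uses an explicit residual rather than the recursive estimate of the Termination slides, and a guard handles β_k = 0:

```python
import numpy as np

def symmlq(A, b, x0, tol=1e-10, maxiter=500):
    """Transcription of the SYMMLQ listing above (sketch)."""
    xhat = x0.copy()
    qt = b - A @ xhat                          # q-tilde
    beta = np.linalg.norm(qt)                  # beta_0
    q, q_old = qt / beta, np.zeros_like(qt)    # q_1, q_0
    wt = np.zeros_like(qt)                     # w-tilde_0
    c, c_old, s, s_old = -1.0, 1.0, 0.0, 0.0   # c_1, c_0, s_1, s_0
    zeta, zeta_old = -1.0, 0.0                 # zeta_0, zeta_{-1}
    for _ in range(maxiter):
        qt = A @ q - beta * q_old
        alpha = qt @ q
        qt = qt - alpha * q
        beta_new = np.linalg.norm(qt)                 # beta_k
        gt = -c * alpha - c_old * s * beta            # gamma-tilde_k
        g = np.hypot(gt, beta_new)                    # gamma_k
        delta = s * alpha - c * c_old * beta          # delta_{k-1}
        eps = s_old * beta                            # epsilon_{k-2}
        c_new, s_new = gt / g, beta_new / g           # c_{k+1}, s_{k+1}
        wt = s * wt - c * q                           # w-tilde_k
        zeta_new = -(eps * zeta_old + delta * zeta) / g
        q_new = qt / beta_new if beta_new > 0 else np.zeros_like(qt)
        xhat = xhat + zeta_new * (c_new * wt + s_new * q_new)
        if np.linalg.norm(b - A @ xhat) < tol or beta_new == 0:
            break
        q_old, q, c_old, c, s_old, s = q, q_new, c, c_new, s, s_new
        zeta_old, zeta = zeta, zeta_new
        beta = beta_new
    return xhat
```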

(29)

Cost of SYMMLQ

1 matrix-vector product
2 scalar products
7 _axpy operations

(30)

Termination

The iteration is terminated if the norm of the residual is sufficiently small. Although x_k is not determined in every step, the corresponding residual r_k = b − A x_k can be monitored during the iteration:

r_k = b − A x_k = ‖b‖_2 q_1 − A Q_k y_k
= ‖b‖_2 q_1 − Q_k T_k y_k − β_k q_{k+1} (e_k)^T y_k = −β_k η_k q_{k+1},

where η_k denotes the k-th component of y_k.

(31)

Termination

From

T_k = T_k^T = U_k^T L̃_k^T

and (*) (T_k y_k = Q_k^T A Q_k y_k = ‖b‖_2 e_1) one obtains

L̃_k^T y_k = ‖b‖_2 U_k e_1,

and comparing the last components of this equation we get

γ̃_k η_k = ‖b‖_2 · s_2 s_3 ··· s_k.

Hence,

r_k = −(1/c_{k+1}) · s_2 ··· s_{k+1} · ‖b‖_2 q_{k+1},

and ‖r_k‖_2 = |s_2 ··· s_{k+1}| / |c_{k+1}| · ‖b‖_2 can be monitored without forming x_k.

(32)

Theorem 5.1

The approximation x̂_k is the unique solution of the minimization problem

‖e‖_2 = min!, x ∈ A K_k(b, A) = span{A b, A² b, ..., A^k b},

where e := A^{−1} b − x denotes the error of x.

The CG iterates x_k do not minimize an error norm in the indefinite case. However, they are usually better approximations to the solution than the vectors x̂_k.

(33)

Example

[Figure: SYMMLQ for −Δu − 163.84u = 0, h = 1/128; curve labeled ‖x̂‖_2, logarithmic scale]

(34)

Example

[Figure: SYMMLQ and CG for −Δu − 163.84u = 0, h = 1/128; red curve: ‖x_CG‖_2, blue curve: ‖x̂‖_2, logarithmic scale]

(35)

Example

[Figure: SYMMLQ for −Δu − 327.68u = 0, h = 1/128; logarithmic scale]

(36)

Example

[Figure: SYMMLQ for −Δu − 243.0653001012069u = 0, h = 1/128; curves ‖r‖_2 and ‖r̂‖_2, logarithmic scale]

(37)

Example

[Figure: SYMMLQ for −Δu − 243.0653001012069u = 0, h = 1/128; curves ‖r‖_2 and ‖r̂‖_2, iterations 5 to 30]

(38)

Orthogonal direction method

Fridman (1963) suggested the orthogonal direction method, OD method for short, to compute the minimizer x̂_k of the error ‖x − x*‖_2 over A K_k(b, A).

The method (which was rediscovered by Fletcher (1976)) constructs a sequence of orthogonal directions in A K_k(b, A) by the Lanczos procedure, and minimizes ‖x − x*‖_2 by line searches along these directions.

(39)

OD method

1: Choose initial approximation x̂_0
2: r_0 = b − A x̂_0
3: d_0 = r_0; d_1 = A r_0
4: δ_1 = 0; k = 0
5: while ‖r_k‖_2 > tol do
6:   k = k + 1
7:   α_k = (d_k)^T d_k
8:   s_k = A d_k
9:   τ_k = (r_{k−1})^T d_{k−1}/α_k
10:  x̂_k = x̂_{k−1} + τ_k d_k
11:  r_k = r_{k−1} − τ_k s_k
12:  γ_k = (d_k)^T s_k/α_k
13:  if k > 1 then
14:    δ_k = α_k/α_{k−1}
15:  end if
16:  d_{k+1} = s_k − γ_k d_k − δ_k d_{k−1}
17: end while

(40)

OD method

The OD method requires 1 matrix-vector product and 7 level-1 operations per step, and five vectors have to be stored. Hence, it is less expensive than the SYMMLQ method.

However, it turned out to be unstable. The error decreases at the beginning of the iteration, but then it diverges rapidly.

Due to the loss of orthogonality of the search directions d_j, the identity

(r_{k−1})^T A^{−1} d_k = (r_{k−1})^T d_{k−1}

(which is the basis of the step size formula in the OD method and which can be proved easily by induction) loses its validity. Thus, for k sufficiently large, the iterates move away from the solution.

(41)

Stabilized OD method

Stoer & Freund (1982) published a stable version of the OD method.

The orthogonal directions d_1, ..., d_k with d_1 = A r_0 span the space A K_k(b, A) if and only if they have the form d_j = A v_j, v_1 = r_0, where the vectors v_j satisfy

span{v_1, ..., v_k} = K_k(b, A) and (v_i)^T A² v_j = 0 for i ≠ j.

A set of vectors v_j with these properties can be obtained by the Lanczos method with respect to the scalar product ⟨x, y⟩_{A²} := y^T A² x.

In terms of these vectors the step size of the OD method is given by

τ_k = (r_{k−1})^T A^{−1} d_k / ‖d_k‖_2² = (r_{k−1})^T v_k / ((v_k)^T A² v_k).

(42)

Stabilized OD method

1: Choose initial approximation x̂_0
2: r_0 = b − A x̂_0
3: v_1 = r_0; v_0 = 0; w_1 = A v_1; w_0 = 0
4: δ_1 = 0; k = 0
5: while ‖r_k‖_2 > tol do
6:   k = k + 1
7:   α_k = (w_k)^T w_k
8:   s_k = A w_k
9:   τ_k = (r_{k−1})^T v_k/α_k
10:  x̂_k = x̂_{k−1} + τ_k w_k
11:  r_k = r_{k−1} − τ_k s_k
12:  γ_k = (w_k)^T s_k/α_k
13:  if k > 1 then
14:    δ_k = α_k/α_{k−1}
15:  end if
16:  v_{k+1} = w_k − γ_k v_k − δ_k v_{k−1}
17:  w_{k+1} = s_k − γ_k w_k − δ_k w_{k−1}
18: end while

(43)

Stabilized OD method

Every step of the stabilized OD method requires 1 matrix-vector product and 9 level-1 operations, and 7 vectors have to be stored.

Compared to SYMMLQ the same number of operations is required, but 2 additional vectors have to be stored. Moreover, there is no easy way to obtain the CG approximation x_k from x̂_k, although x_k is usually a better approximation to the solution than x̂_k.

The following picture contains the convergence history of the OD and the stabilized OD method for the discrete Helmholtz equation considered above.

(44)

[Figure: convergence histories of the OD method and the stabilized OD method]

(45)

MINRES

A further stable method for solving indefinite systems is a modification of the method of conjugate residuals (CR method).

Let A be symmetric and positive definite. Determine x_k ∈ x_0 + K_k(r_0, A) such that the error with respect to the A² norm is minimal.

The conjugate residual method was introduced by Stiefel (1955). It was rediscovered by Luenberger (1970), who already considered indefinite problems and discussed a variant that handles exact breakdowns.

As for the CG method, the approximations x_k can be obtained from a sequence of one-dimensional line searches where the search directions satisfy a 3-term recurrence.

(46)

CR method

1: r_0 = b − A x_0
2: t_0 = A r_0
3: α_0 = (t_0)^T r_0; d_1 = r_0; s_1 = t_0
4: for k = 1, 2, ... until convergence do
5:   τ_k = α_{k−1}/((s_k)^T s_k)
6:   x_k = x_{k−1} + τ_k d_k
7:   r_k = r_{k−1} − τ_k s_k
8:   β_k = 1/α_{k−1}
9:   t_k = A r_k
10:  α_k = (t_k)^T r_k
11:  β_k = α_k β_k
12:  d_{k+1} = r_k + β_k d_k
13:  s_{k+1} = t_k + β_k s_k
14: end for
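A NumPy sketch of the CR loop (names are mine); the auxiliary vector s_k carries A d_k, so one matrix-vector product per step suffices:

```python
import numpy as np

def cr(A, b, x0, tol=1e-10, maxiter=None):
    """Conjugate residual method as in the listing above."""
    maxiter = maxiter or b.size
    x = x0.copy()
    r = b - A @ x
    if np.linalg.norm(r) < tol:
        return x
    t = A @ r
    alpha = t @ r                  # alpha_0 = (A r_0)^T r_0
    d, s = r.copy(), t.copy()      # d_1 = r_0, s_1 = A d_1
    for _ in range(maxiter):
        tau = alpha / (s @ s)      # tau_k
        x += tau * d
        r -= tau * s
        if np.linalg.norm(r) < tol:
            break
        t = A @ r
        alpha_new = t @ r
        beta = alpha_new / alpha   # beta_k = alpha_k / alpha_{k-1}
        alpha = alpha_new
        d = r + beta * d
        s = t + beta * s           # keeps s_{k+1} = A d_{k+1}
    return x
```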

(47)

Cost of CR

1 matrix-vector product
2 scalar products
4 _axpy operations

Storage requirements: 5 vectors.

For x* := A^{−1} b it holds that

‖x − x*‖²_{A²} = ‖A(x − x*)‖²_2 = ‖Ax − b‖²_2 = ‖r‖²_2.

Hence, the CR method minimizes the Euclidean norm of the residual in x_0 + K_k(r_0, A).

Moreover, it can be shown that the residuals r_k := b − A x_k determined by the CR method are A-conjugate (this explains the name of the method).

If A is symmetric but indefinite, the approach of the CR method (to minimize the residual on K_k(r_0, A)) is still reasonable. However, the CR algorithm can break down, since for indefinite A the quantity α_k = (r_k)^T A r_k may vanish.
(48)

Example

[Figure: CR and CG for −Δu = 0, h = 1/128; curves ‖r_CG‖_2 and ‖r_CR‖_2, logarithmic scale]

(49)

Example

[Figure: CR and CG for −Δu = 0, h = 1/256; curves ‖r_CR‖_2 and ‖r_CG‖_2, logarithmic scale]

(50)

Example

[Figure: CR and CG for −Δu = 0, h = 1/512; curves ‖r_CG‖_2 and ‖r_CR‖_2, logarithmic scale]

(51)

MINRES Paige & Saunders (1975)

Use the Lanczos process to tridiagonalize A, and the LQ factorization of T_k to solve the normal equations

Q_k^T A² Q_k y_k = Q_k^T A b, x_k = Q_k y_k,

of the least squares problem

‖A Q_k y − b‖²_2 = min!

(52)

MINRES ct.

Since r_k is orthogonal to q_1, ..., q_k it holds that

Q_k^T A² Q_k = (Q_k T_k + r_k (e_k)^T)^T (Q_k T_k + r_k (e_k)^T) = T_k² + β_k² e_k (e_k)^T,

Q_k^T A b = β_0 Q_k^T A q_1 = β_0 Q_k^T A Q_k e_1 = β_0 Q_k^T (Q_k T_k + r_k (e_k)^T) e_1 = β_0 T_k e_1.

From the LQ factorization T_k = L̃_k U_k, which can be obtained in the same manner as in the SYMMLQ method concurrently with the Lanczos process, one gets the Cholesky factorization

T_k² + β_k² e_k (e_k)^T = L̃_k L̃_k^T + β_k² e_k (e_k)^T = L_k L_k^T

without computing T_k².

(53)

MINRES ct.

1: c_1 = −1; s_1 = 0; c_0 = 1; s_0 = 0; q_0 = 0; v_0 = 0; v_{−1} = 0
2: q̃ = b − A x_0; β_0 = ‖q̃‖_2; q_1 = q̃/β_0; η_1 = β_0
3: for k = 1, 2, ... until convergence do
4:   q̃ = A q_k − β_{k−1} q_{k−1}
5:   α_k = q̃^T q_k
6:   q̃ = q̃ − α_k q_k
7:   β_k = ‖q̃‖_2
8:   γ̃_k = −c_k α_k − c_{k−1} s_k β_{k−1}; γ_k = sqrt(γ̃_k² + β_k²)
9:   δ_{k−1} = s_k α_k − c_k c_{k−1} β_{k−1}; ε_{k−2} = s_{k−1} β_{k−1}
10:  c_{k+1} = γ̃_k/γ_k; s_{k+1} = β_k/γ_k
11:  v_k = (q_k − ε_{k−2} v_{k−2} − δ_{k−1} v_{k−1})/γ_k
12:  x_k = x_{k−1} + c_{k+1} η_k v_k
13:  q_{k+1} = q̃/β_k
14:  η_{k+1} = s_{k+1} η_k
15: end for
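This algorithm is available as scipy.sparse.linalg.minres; a usage sketch on the Helmholtz example from above (note the keyword is `rtol` in recent SciPy releases and `tol` in older ones):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres

m = 127
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
I = sp.identity(m)
A = (sp.kron(I, T) + sp.kron(T, I) - 163.84 / 128**2 * sp.identity(m * m)).tocsr()
b = np.ones(m * m)

x, info = minres(A, b, rtol=1e-8, maxiter=2000)   # info == 0 on convergence
print(info, np.linalg.norm(b - A @ x))
```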

(54)

Cost of MINRES

1 matrix-vector product
2 scalar products
7 _axpy operations

(55)

Example

[Figure: curves ‖r_CR‖_2 and ‖r_MINRES‖_2, logarithmic scale]

(56)

Example

[Figure: curves ‖r_MINRES‖_2 and ‖r_CR‖_2, logarithmic scale]

(57)

Example

[Figure: MINRES for −Δu − 243.0653001012069u = 0, h = 1/128; curve ‖r‖_2, logarithmic scale]

(58)

Error bounds for MINRES

x_k ∈ x_0 + span{r_0, A r_0, ..., A^{k−1} r_0} is determined such that ‖r_k‖_2 is minimal in r_0 + span{A r_0, ..., A^k r_0}, i.e.

r_k = p_k^M(A) r_0 with p_k^M ∈ Π_k, p_k^M(0) = 1,

‖r_k‖_2 = min_{p_k ∈ Π_k, p_k(0)=1} ‖p_k(A) r_0‖_2.

For A = U Λ U^T, Λ = diag{λ_1, ..., λ_n}, it follows that

‖r_k‖_2 ≤ min_{p_k ∈ Π_k, p_k(0)=1} ‖p_k(Λ)‖_2 ‖r_0‖_2,

i.e.

‖r_k‖_2 / ‖r_0‖_2 ≤ min_{p_k ∈ Π_k, p_k(0)=1} max_{j=1,...,n} |p_k(λ_j)|.

(59)

Error bounds for MINRES ct.

If A is positive definite (CR method!) we obtain, in the same way as for the CG method,

‖r_k‖_2 / ‖r_0‖_2 ≤ 2 ((√κ_2(A) − 1)/(√κ_2(A) + 1))^k.

For indefinite problems typically only very few (ℓ ≪ n) eigenvalues λ_1 ≤ ··· ≤ λ_ℓ < 0 are negative, and most of the eigenvalues are positive.

(60)

Error bounds for MINRES ct.

With

p_k(λ) = q_ℓ(λ) · T_{k−ℓ}((2λ − λ_{ℓ+1} − λ_n)/(λ_n − λ_{ℓ+1})) / T_{k−ℓ}((−λ_{ℓ+1} − λ_n)/(λ_n − λ_{ℓ+1})),

q_ℓ(λ) = (−1)^ℓ ∏_{j=1}^ℓ (λ − λ_j) / ∏_{j=1}^ℓ λ_j,

and Chebyshev polynomials T_{k−ℓ}, one gets

‖r_k‖_2 / ‖r_0‖_2 ≤ 2 q_ℓ(λ_n) ((√κ_{n,ℓ+1} − 1)/(√κ_{n,ℓ+1} + 1))^{k−ℓ}, κ_{n,ℓ+1} := λ_n/λ_{ℓ+1}.

(61)

Preconditioning

The last error bound demonstrates that it is reasonable to use positive definite preconditioners even if the problem is indefinite.

To preserve symmetry of the problem we consider

C^{−1} A C^{−T} y = C^{−1} b, x := C^{−T} y.

From

C^{−1} A C^{−T} z = μ z ⟺ C^{−1} A y = μ C^T y, y = C^{−T} z ⟺ C^{−T} C^{−1} A y = (C C^T)^{−1} A y = μ y

we obtain that the Cholesky factor C of the preconditioner M := C C^T should be chosen such that the eigenvalues of M^{−1} A are clustered (only M, not C itself, enters the spectrum).

(62)

MINRES with Preconditioning

1: c_1 = −1; s_1 = 0; c_0 = 1; s_0 = 0; q_0 = 0; v_0 = 0; v_{−1} = 0
2: q = b − A x_0; w = M^{−1} q; β_0 = sqrt(q^T w)
3: q_1 = q/β_0; w_1 = w/β_0; η_1 = β_0
4: for k = 1, 2, ... until convergence do
5:   q = A w_k − β_{k−1} q_{k−1}
6:   α_k = q^T w_k
7:   q = q − α_k q_k
8:   w_{k+1} = M^{−1} q
9:   β_k = sqrt(q^T w_{k+1})
10:  γ̃_k = −c_k α_k − c_{k−1} s_k β_{k−1}; γ_k = sqrt(γ̃_k² + β_k²)
11:  δ_{k−1} = s_k α_k − c_k c_{k−1} β_{k−1}; ε_{k−2} = s_{k−1} β_{k−1}
12:  c_{k+1} = γ̃_k/γ_k; s_{k+1} = β_k/γ_k
13:  v_k = (w_k − ε_{k−2} v_{k−2} − δ_{k−1} v_{k−1})/γ_k
14:  x_k = x_{k−1} + c_{k+1} η_k v_k
15:  q_{k+1} = q/β_k
16:  w_{k+1} = w_{k+1}/β_k; η_{k+1} = s_{k+1} η_k
17: end for
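SciPy's minres accepts such an SPD preconditioner through its M argument, which applies M^{−1}. Since SciPy has no incomplete Cholesky (the cholinc runs in the figures below are MATLAB's), this sketch substitutes a simple Jacobi preconditioner, which is admittedly ineffective here because the diagonal is constant:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres, LinearOperator

m = 127
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
I = sp.identity(m)
A = (sp.kron(I, T) + sp.kron(T, I) - 163.84 / 128**2 * sp.identity(m * m)).tocsr()
b = np.ones(m * m)

# M must be SPD; SciPy expects the operator applying M^{-1}.
dinv = 1.0 / A.diagonal()                 # positive here, so M is SPD
Minv = LinearOperator(A.shape, matvec=lambda v: dinv * v, dtype=A.dtype)
x, info = minres(A, b, M=Minv, rtol=1e-8, maxiter=2000)
print(info, np.linalg.norm(b - A @ x))
```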

(63)

Example

[Figure: MINRES residual histories with and without preconditioning; legend:
minres without preconditioning, 4.40E8 flops
nofill cholinc, 2.04E8+1.44E5 flops
cholinc(a,0.1), 2.02E8+6.40E6 flops
cholinc(a,0.01), 1.40E8+1.31E7 flops
cholinc(a,0.001), 9.94E7+3.58E7 flops]

(64)

Example

[Figure: MINRES for −Δu − 1638.4u = 0, h = 1/128; legend:
minres without preconditioning, 1.23E9 flops
nofill cholinc, 7.91E8+1.44E5 flops
cholinc(a,0.1), 8.42E8+6.40E6 flops
cholinc(a,0.01), 6.89E8+1.31E7 flops
cholinc(a,0.001), 1.25E9+3.58E7 flops]

(65)

Example

[Figure: residual histories with and without preconditioning; legend:
without preconditioning, 5.402E8 flops
nofill cholinc, 2.598E8+1.439E5 flops
cholinc(A,0.1), 2.633E8+6.401E6 flops
cholinc(A,0.01), 1.511E8+1.306E7 flops
cholinc(A,0.001), 1.109E8+3.575E7 flops]

(66)

Example

[Figure: SYMMLQ for −Δu − 1638.4u = 0, h = 1/128; legend:
without preconditioning, 1.381E9 flops
nofill cholinc, 9.089E8+1.439E5 flops
cholinc(A,0.1), 9.472E8+6.401E6 flops
cholinc(A,0.01), 7.060E8+1.306E7 flops]
