(1)

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS

CHAPTER 7: SHORT RECURRENCES

Heinrich Voss, voss@tu-harburg.de

Hamburg University of Technology, Institute of Numerical Simulation

(2)

Short recurrence

In this section we consider methods which determine approximations $x_k \in x_0 + \mathcal{K}_k(r_0, A)$ from short recurrences.

By the theorem of Faber & Manteuffel these methods cannot minimize the residual. Hence, measured by the number of iteration steps necessary to reduce the residual norm by a given factor, they must be outperformed by GMRES.

However, the arithmetic work necessary to obtain a given accuracy can be much smaller than for GMRES.

(3)

BiCG method: Lanczos (1952), Fletcher (1976)

1: $r_0 = b - Ax_0$
2: Choose $\tilde r_0$ with $\alpha_0 = (\tilde r_0)^T r_0 \neq 0$
3: $d_1 = r_0$; $\tilde d_1 = \tilde r_0$
4: for $k = 1, 2, \dots$ until convergence do
5: &nbsp; $s_k = A d_k$
6: &nbsp; $\gamma_k = (\tilde d_k)^T s_k$
7: &nbsp; $\tau_k = \alpha_{k-1}/\gamma_k$
8: &nbsp; $x_k = x_{k-1} + \tau_k d_k$
9: &nbsp; $r_k = r_{k-1} - \tau_k s_k$
10: &nbsp; $\tilde r_k = \tilde r_{k-1} - \tau_k A^T \tilde d_k$
11: &nbsp; $\beta_k = 1/\alpha_{k-1}$
12: &nbsp; $\alpha_k = (\tilde r_k)^T r_k$; $\beta_k = \alpha_k \beta_k$
13: &nbsp; $d_{k+1} = r_k + \beta_k d_k$
14: &nbsp; $\tilde d_{k+1} = \tilde r_k + \beta_k \tilde d_k$
15: end for
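To make the recurrences concrete, here is a minimal NumPy sketch of the algorithm above. The function name, the choice $\tilde r_0 = r_0$, and the stopping test on the updated residual norm are illustrative choices, not part of the slides.

```python
import numpy as np

def bicg(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Minimal BiCG sketch following the algorithm above (Lanczos/Fletcher)."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                 # r0 = b - A x0
    rt = r.copy()                 # shadow residual; here r~0 = r0 is chosen
    alpha = rt @ r                # alpha0 = r~0^T r0 (must be nonzero)
    d, dt = r.copy(), rt.copy()   # d1 = r0, d~1 = r~0
    for _ in range(maxiter):
        s = A @ d                 # s_k = A d_k
        gamma = dt @ s            # gamma_k = d~_k^T s_k
        tau = alpha / gamma       # tau_k = alpha_{k-1} / gamma_k
        x += tau * d              # x_k = x_{k-1} + tau_k d_k
        r -= tau * s              # r_k = r_{k-1} - tau_k s_k
        rt -= tau * (A.T @ dt)    # r~_k needs a product with A^T
        beta = 1.0 / alpha        # beta_k = 1 / alpha_{k-1}
        alpha = rt @ r            # alpha_k = r~_k^T r_k
        beta *= alpha             # beta_k = alpha_k / alpha_{k-1}
        if np.linalg.norm(r) < tol:
            break
        d = r + beta * d          # d_{k+1}
        dt = rt + beta * dt       # d~_{k+1}
    return x
```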

(4)

Cost of BiCG

2 matrix-vector products
2 scalar products
5 _axpy

Storage requirements: 6 vectors

If $A$ is symmetric and positive definite, and if $\tilde r_0 = r_0$, then $\tilde r_k = r_k$ and $\tilde d_k = d_k$ for every $k$, and the BiCG method coincides with the CG method.

Whereas for SPD matrices (in exact arithmetic) the method can stop only when the solution $x_k = x$ of the linear system $Ax = b$ is reached, in the general case the method may break down (e.g. when $\gamma_k = 0$ or $\alpha_k = 0$) before the solution is reached.

(5)

BiCG ct.

The orthogonality of the residuals and the conjugacy of the search directions in the CG method have to be replaced by the following properties of the BiCG method:

Four vector sequences $r_k$, $\tilde r_k$, $d_k$ and $\tilde d_k$ are determined such that $r_k$ and $\tilde r_k$ are biorthogonal, i.e.

$(\tilde r_i)^T r_j = 0$ for $i \neq j$,

and such that $d_k$ and $\tilde d_k$ are biconjugate, i.e.

$(\tilde d_i)^T A d_j = 0$ for $i \neq j$.

(6)

Theorem 7.1

The vectors $r_j, \tilde r_j$, $j = 0, \dots, k$, and $d_j, \tilde d_j$, $j = 1, \dots, k+1$, produced by the biconjugate gradient method are such that

$(\tilde r_i)^T r_j = 0$ for $i \neq j$ (biorthogonality property) (*)
$(\tilde d_i)^T A d_j = 0$ for $i \neq j$ (biconjugacy property) (**)

Proof:

We prove by induction that (*) and (**) hold for 0 ≤ i, j ≤ k and 1 ≤ i, j ≤ k + 1, respectively.

(7)

Proof ct.

For $k = 0$ nothing has to be shown. Assume that the statement is true for some $k \geq 0$. Then it follows for $j = 0, \dots, k$ (assuming $\beta_0 = 0$ if necessary)

$(\tilde r_j)^T r_{k+1} = (\tilde r_j)^T r_k - \tau_{k+1} (\tilde r_j)^T A d_{k+1} = (\tilde r_j)^T r_k - \tau_{k+1} (\tilde d_{j+1} - \beta_j \tilde d_j)^T A d_{k+1}.$

Hence, $(\tilde r_j)^T r_{k+1} = 0$ for $j < k$ by assumption and for $j = k$ by the definition of $\tau_{k+1}$. In a similar way we get $(r_j)^T \tilde r_{k+1} = 0$ for $j = 0, \dots, k$, and therefore (*) is proved.

(8)

Proof ct.

For $j = 1, \dots, k+1$

$(d_{k+2})^T A^T \tilde d_j = (r_{k+1})^T A^T \tilde d_j + \beta_{k+1} (d_{k+1})^T A^T \tilde d_j = (r_{k+1})^T (\tilde r_{j-1} - \tilde r_j)/\tau_j + \beta_{k+1} (d_{k+1})^T A^T \tilde d_j.$

For $j \leq k$ we obtain $(d_{k+2})^T A^T \tilde d_j = 0$ by assumption. For $j = k+1$ one gets from the definition of $\gamma_{k+1}$

$(d_{k+2})^T A^T \tilde d_{k+1} = (r_{k+1})^T (\tilde r_k - \tilde r_{k+1})/\tau_{k+1} + \beta_{k+1}\gamma_{k+1} = -\alpha_{k+1}/\tau_{k+1} + \beta_{k+1}\gamma_{k+1} = -\alpha_{k+1}\gamma_{k+1}/\alpha_k + \beta_{k+1}\gamma_{k+1},$

and the definition of $\beta_{k+1}$ yields $(d_{k+2})^T A^T \tilde d_{k+1} = 0$. Likewise $(\tilde d_{k+2})^T A d_j = 0$ can be established for $j = 1, \dots, k+1$, and therefore (**) is proved. $\square$

(9)

BiCG ct.

A direct consequence of the biorthogonality of $r_j$ and $\tilde r_j$ is that the BiCG method stops after at most $n$ steps. From $d_{k+1} = r_k + \beta_k d_k$ it follows by induction

$d_{k+1} = r_k + \sum_{j=0}^{k-1} \delta_j r_j.$

Hence, $d_{k+1}$ and $\tilde r_j$ are orthogonal for $j > k$, and correspondingly $\tilde d_{k+1}$ and $r_j$ are orthogonal for $j > k$.

(10)

BiCG ct.

As for the CG method it follows by induction

$\mathrm{span}\{d_1, \dots, d_k\} = \mathrm{span}\{r_0, Ar_0, \dots, A^{k-1} r_0\}.$

Hence, the iterates are contained in the shifted Krylov space $x_0 + \mathcal{K}_k(r_0, A)$.

Similarly, it holds that

$\mathrm{span}\{\tilde r_0, \dots, \tilde r_{k-1}\} = \mathrm{span}\{\tilde r_0, A^T \tilde r_0, \dots, (A^T)^{k-1} \tilde r_0\}.$

Hence, the biorthogonality $(\tilde r_j)^T r_k = 0$, $j = 0, \dots, k-1$, is equivalent to

$((A^T)^j \tilde r_0)^T (b - A x_k) = 0, \quad j = 0, \dots, k-1.$

Therefore, the iterate $x_k$ of the BiCG method is the Petrov-Galerkin approximation with respect to the ansatz space $V_k := \mathcal{K}_k(r_0, A)$ and the test space $W_k := \mathcal{K}_k(\tilde r_0, A^T)$.

(11)

Example

[Convergence plot: residual norm vs. iteration, BiCG for $-\Delta u - 256\, u_y = 0$, $h = 1/128$]

without preconditioning: 1.821E8 flops

nofill luinc: 5.034E7+1.12E5 flops

luinc(a,0.1): 2.587E7+4.23E6 flops

luinc(a,0.01): 1.481E7+4.23E6 flops

(12)

Example

[Convergence plot: residual norm vs. iteration, BiCG for $-\Delta u - 256\, u_y - 163.84\, u = 0$, $h = 1/128$]

without preconditioning: 1.799E8 flops

nofill luinc: 5.593E7+1.12E5 flops

luinc(a,0.1): 2.687E7+4.23E6 flops

luinc(a,0.01): 1.587E7+4.23E6 flops

(13)

Example

[Convergence plot: residual norm vs. iteration, BiCG for $-\Delta u - 256\, u_y - 1638.4\, u = 0$, $h = 1/128$]

without preconditioning: 2.947E8 flops

nofill luinc: 6.805E7+1.12E5 flops

luinc(a,0.1): 2.986E7+4.23E6 flops

luinc(a,0.01): 1.699E7+4.23E6 flops

(14)

CGS method

One severe drawback of the BiCG method is that in every step one multiplication with $A^T$ is needed, which is not at hand in many cases (in FEM modeling $Ax$ is accumulated from its element contributions, and $A^T x$ is not available) or which cannot be realized easily (e.g. on parallel computers with distributed memory).

The matrix $A^T$ in the BiCG method appears only in the update formulas for $\tilde r_k$ and $\tilde d_k$, and these vectors are only used in the computation of $\alpha_k = (\tilde r_k)^T r_k$ and $\gamma_k = (\tilde d_k)^T A d_k$.

Sonneveld (1989) proposed a different way to compute these numbers, and he presented the CGS method (Conjugate Gradient Squared), a transpose-free variant of the BiCG method, i.e. one which avoids multiplications by $A^T$.

In this method the polynomial operator of the BiCG method that reduces the initial residual is applied twice. This may yield a method which converges twice as fast as BiCG, but on the other hand it may also amplify the large intermediate residuals of BiCG.

(15)

CG and orthogonal polynomials

We first consider a different interpretation of the CG method. Let $A$ be symmetric and positive definite.

From

$\mathrm{span}\{d_1, \dots, d_k\} = \mathrm{span}\{r_0, Ar_0, \dots, A^{k-1} r_0\}$

it follows that there exists a polynomial $\psi_{k-1} \in \Pi_{k-1}$ with

$d_k = \psi_{k-1}(A) r_0,$

and $r_k = d_{k+1} - \beta_k d_k$ can be written as $r_k = \phi_k(A) r_0$ with a polynomial $\phi_k \in \Pi_k$.

(16)

Orthogonal polynomials

From

$d_{k+1} = r_k + \beta_k d_k = \phi_k(A) r_0 + \beta_k \psi_{k-1}(A) r_0$
$r_{k+1} = r_k - \tau_{k+1} A d_{k+1} = \phi_k(A) r_0 - \tau_{k+1} A \psi_k(A) r_0$

one obtains the recurrences

$\phi_{k+1}(t) = \phi_k(t) - \tau_{k+1}\, t\, \psi_k(t)$
$\psi_k(t) = \phi_k(t) + \beta_k \psi_{k-1}(t).$

The parameters $\beta_k$ and $\tau_k$ can be determined from the polynomials $\phi_k$ and $\psi_k$ and the bilinear form introduced on the next slide.

(17)

Orthogonal polynomials ct.

$\langle \phi, \psi \rangle := (\phi(A) r_0)^T \psi(A) r_0$

is almost a scalar product. It is bilinear, symmetric and nonnegative. Only the positive definiteness is missing, since a polynomial $\phi \neq 0$ might exist such that $\phi(A) r_0 = 0$.

Additionally it holds that

$\langle \phi\psi, \chi \rangle = \langle \phi, \psi\chi \rangle$ for all polynomials $\phi, \psi, \chi$.

(18)

Orthogonal polynomials with CG

1: $\phi_0 \equiv 1$; $\psi_0 \equiv 1$
2: $\alpha_0 = 1$
3: for $k = 1, 2, \dots$ do
4: &nbsp; $\gamma_{k-1} = \langle \psi_{k-1}, \iota\psi_{k-1} \rangle$
5: &nbsp; $\tau_k = \alpha_{k-1}/\gamma_{k-1}$
6: &nbsp; $\phi_k = \phi_{k-1} - \tau_k \iota \psi_{k-1}$
7: &nbsp; $\beta_k = 1/\alpha_{k-1}$
8: &nbsp; $\alpha_k = \langle \phi_k, \phi_k \rangle$
9: &nbsp; $\beta_k = \alpha_k \beta_k$
10: &nbsp; $\psi_k = \phi_k + \beta_k \psi_{k-1}$
11: end for

(19)

CGS method

The orthogonality of the residuals and the conjugacy of the search directions in the CG method yield

$\langle \phi_j, \phi_k \rangle = \alpha_j \delta_{jk}, \quad \langle \psi_j, \iota\psi_k \rangle = \gamma_k \delta_{jk}$

for the constructed polynomials $\phi_j$ and $\psi_j$.

These equations do not depend on the particular scalar product $\langle \cdot, \cdot \rangle$ under consideration, but they hold for every symmetric bilinear form $[\cdot, \cdot]$ with

$[\phi\psi, \chi] = [\phi, \psi\chi]$ for all polynomials $\phi, \psi, \chi$.

Hence, the CG method can be considered as a method for determining orthogonal polynomials with respect to a given bilinear form $[\cdot, \cdot]$.

(20)

CGS ct.

Correspondingly, the BiCG method can be considered as a method for determining polynomials $\phi_k, \tilde\phi_k \in \Pi_k$ such that $r_k = \phi_k(A) r_0$ and $\tilde r_k = \tilde\phi_k(A^T) \tilde r_0$, and the algorithm shows that $\phi_k = \tilde\phi_k$ for every $k$. Therefore,

$r_k = \phi_k(A) r_0, \quad \tilde r_k = \phi_k(A^T) \tilde r_0.$

Comparing BiCG with the CG method it follows that this polynomial is the residual polynomial of the CG method.

Likewise,

$d_k = \psi_{k-1}(A) r_0, \quad \tilde d_k = \psi_{k-1}(A^T) \tilde r_0.$

(21)

CGS ct.

The parameters $\alpha_k$ and $\gamma_k$ can be determined from

$\alpha_k = (\tilde r_k)^T r_k = (\phi_k(A^T)\tilde r_0)^T \phi_k(A) r_0 = (\tilde r_0)^T (\phi_k(A))^2 r_0$

and

$\gamma_k = (\tilde d_k)^T A d_k = (\psi_{k-1}(A^T)\tilde r_0)^T A \psi_{k-1}(A) r_0 = (\tilde r_0)^T A (\psi_{k-1}(A))^2 r_0,$

i.e. without using $A^T$.

With

$[\phi, \psi] := (\phi(A^T)\tilde r_0)^T \psi(A) r_0,$

$\alpha_k$ and $\gamma_k$ can be calculated as $\alpha_k = [1, \phi_k^2]$ and $\gamma_k = [1, \iota\psi_{k-1}^2]$.

(22)

CGS ct.

From $\phi_k = \phi_{k-1} - \tau_k \iota \psi_{k-1}$ and $\psi_k = \phi_k + \beta_k \psi_{k-1}$ we get for $\Phi_k := \phi_k^2$, $\Psi_k := \psi_k^2$, $\Xi_k := \phi_k \psi_{k-1}$, $\Theta_k := \phi_k \psi_k = \Phi_k + \beta_k \Xi_k$:

$\Xi_k = (\phi_{k-1} - \tau_k \iota \psi_{k-1})\psi_{k-1} = \Theta_{k-1} - \tau_k \iota \Psi_{k-1}$
$\Phi_k = \Phi_{k-1} - 2\tau_k \iota \Theta_{k-1} + \tau_k^2 \iota^2 \Psi_{k-1} = \Phi_{k-1} - \tau_k \iota (\Theta_{k-1} + \Xi_k)$
$\Theta_k = \Phi_k + \beta_k \Xi_k$
$\Psi_k = \Phi_k + 2\beta_k \Xi_k + \beta_k^2 \Psi_{k-1} = \Theta_k + \beta_k (\Xi_k + \beta_k \Psi_{k-1}).$

Hence, the following algorithm allows us to compute $\Phi_k$, $\Psi_k$, $\Xi_k$ and $\Theta_k$, and thus $\alpha_k$ and $\gamma_k$, without using $A^T$.

(23)

CGS for orthogonal polynomials

$\Phi_0 \equiv 1$; $\Psi_0 \equiv 1$; $\Theta_0 \equiv 1$; $\alpha_0 = 1$
for $k = 1, 2, \dots$ do
&nbsp; $\gamma_k = [1, \iota\Psi_{k-1}]$
&nbsp; $\tau_k = \alpha_{k-1}/\gamma_k$
&nbsp; $\Xi_k = \Theta_{k-1} - \tau_k \iota \Psi_{k-1}$
&nbsp; $\Phi_k = \Phi_{k-1} - \tau_k \iota (\Theta_{k-1} + \Xi_k)$
&nbsp; $\beta_k = 1/\alpha_{k-1}$
&nbsp; $\alpha_k = [1, \Phi_k]$
&nbsp; $\beta_k = \alpha_k \beta_k$
&nbsp; $\Theta_k = \Phi_k + \beta_k \Xi_k$
&nbsp; $\Psi_k = \Theta_k + \beta_k (\Xi_k + \beta_k \Psi_{k-1})$
end for

(24)

CGS method

Substituting $A$ into the polynomials and defining

$r_k = \phi_k^2(A) r_0 = \Phi_k(A) r_0$
$d_k = \psi_{k-1}^2(A) r_0 = \Psi_{k-1}(A) r_0$
$u_k = \phi_k(A)\psi_k(A) r_0 = \Theta_k(A) r_0$
$q_k = \phi_k(A)\psi_{k-1}(A) r_0 = \Xi_k(A) r_0$

one obtains the CGS method stated on the next slide.

(25)

CGS Sonneveld 1989

1: $r_0 = b - Ax_0$; $d_1 = r_0$; $u_0 = r_0$
2: Choose $\tilde r_0$ with $\alpha_0 = (\tilde r_0)^T r_0 \neq 0$
3: for $k = 1, 2, \dots$ until convergence do
4: &nbsp; $s_k = A d_k$
5: &nbsp; $\gamma_k = (\tilde r_0)^T s_k$
6: &nbsp; $\tau_k = \alpha_{k-1}/\gamma_k$
7: &nbsp; $q_k = u_{k-1} - \tau_k s_k$
8: &nbsp; $w_k = u_{k-1} + q_k$
9: &nbsp; $x_k = x_{k-1} + \tau_k w_k$
10: &nbsp; $r_k = r_{k-1} - \tau_k A w_k$
11: &nbsp; $\beta_k = 1/\alpha_{k-1}$; $\alpha_k = (\tilde r_0)^T r_k$; $\beta_k = \alpha_k \beta_k$
12: &nbsp; $u_k = r_k + \beta_k q_k$
13: &nbsp; $d_{k+1} = u_k + \beta_k (q_k + \beta_k d_k)$
14: end for

Cost of CGS: 2 matrix-vector products per step (none with $A^T$)
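For illustration, a minimal NumPy sketch of this CGS loop; as before, the function name, the choice $\tilde r_0 = r_0$, and the stopping test are choices made for this example only.

```python
import numpy as np

def cgs(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Minimal CGS sketch following Sonneveld's algorithm above."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                 # r0
    rt = r.copy()                 # r~0, here chosen as r0
    alpha = rt @ r                # alpha0 = r~0^T r0
    d = r.copy()                  # d1 = r0
    u = r.copy()                  # u0 = r0
    for _ in range(maxiter):
        s = A @ d                 # s_k = A d_k
        tau = alpha / (rt @ s)    # tau_k = alpha_{k-1} / (r~0^T s_k)
        q = u - tau * s           # q_k
        w = u + q                 # w_k
        x += tau * w              # x_k
        r -= tau * (A @ w)        # r_k (second matrix-vector product)
        beta = 1.0 / alpha
        alpha = rt @ r            # alpha_k = r~0^T r_k
        beta *= alpha             # beta_k = alpha_k / alpha_{k-1}
        if np.linalg.norm(r) < tol:
            break
        u = r + beta * q          # u_k
        d = u + beta * (q + beta * d)   # d_{k+1}
    return x
```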

(26)

Example

[Convergence plot: residual norm vs. iteration, CGS for $-\Delta u - 256\, u_y = 0$, $h = 1/128$]

without preconditioning: 1.289E8 flops

nofill luinc: 3.471E7+1.12E5 flops

luinc(a,0.1): 1.608E7+4.23E6 flops

luinc(a,0.01): 1.041E7+4.23E6 flops

luinc(a,0.001): 5.438E6+4.225E6 flops

(27)

Example

[Convergence plot: residual norm vs. iteration, CGS for $-\Delta u - 256\, u_y - 163.84\, u = 0$, $h = 1/128$]

without preconditioning: 1.304E8 flops

nofill luinc: 3.569E7+1.12E5 flops

luinc(a,0.1): 1.713E7+4.23E6 flops

luinc(a,0.01): 9.307E6+4.225E6 flops

(28)

Example

[Convergence plot: residual norm vs. iteration, CGS for $-\Delta u - 256\, u_y - 1638.4\, u = 0$, $h = 1/128$]

without preconditioning: 2.350E8 flops

nofill luinc: 5.433E7+1.12E5 flops

luinc(a,0.1): 2.026E7+4.23E6 flops

luinc(a,0.01): 1.152E7+4.23E6 flops

(29)

Residual update

Unfortunately, the true and the updated residual often drift apart during the iteration process, and this leads to a misleading impression of the actual error: when the method produces a small updated residual $r_k$, the true residual $b - A x_k$ may not be small at all.

It may also happen that we perform too many iteration steps, since the actual errors tend to stagnate in many cases while the updated residuals still decrease further beyond a certain value.

(31)

Residual update ct.

The vector $A w_k$ is only used to update the residual $r_k$. At the same expense one can determine the residual $r_k = b - A x_k$ directly in CGS. (For other methods the computation of the true residual would require an additional matrix-vector product.)

Van der Vorst (1992) reports that in many examples the direct calculation of the residual has been observed to have a negative effect on the iteration process, i.e. it slows down the convergence of CGS.

(33)

Residual update ct.

Among other modified strategies for determining the residual, Sleijpen & van der Vorst (1996) suggested replacing the updated residual by the true one only if the updated residual norm becomes smaller than all previous residual norms.

The CGS algorithm is modified in the following way: two new variables are initialized as $xx = 0$ and $r_{\min} = \|r_0\|$.

Statement 9 is replaced by $xx = xx + \tau_k w_k$, and the following if-clause is inserted after statement 10:

if $\|r_k\| < r_{\min}$ then
&nbsp; $x = x + xx$; $xx = 0$
&nbsp; $r_k = b - A x$
&nbsp; $r_{\min} = \|r_k\|$
end if
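Expressed as code, the modification might look as follows: a sketch of the strategy just described, grafted onto the `cgs` loop from the earlier example, with variable names of my choosing.

```python
# Before the loop: xx = np.zeros_like(x); r_min = np.linalg.norm(r)

# Inside the CGS loop: accumulate the correction in xx instead of
# updating x directly (replaces statement 9, x += tau * w):
xx += tau * w

# Reliable-update clause (inserted after statement 10, r -= tau * (A @ w)):
if np.linalg.norm(r) < r_min:
    x += xx                   # flush the accumulated correction into x
    xx = np.zeros_like(x)
    r = b - A @ x             # replace the updated residual by the true one
    r_min = np.linalg.norm(r)
```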

(35)

Observation

The BiCG method often shows a very erratic convergence behavior with wildly oscillating residual norms $\|r_k^{\mathrm{BiCG}}\|_2 = \|\phi_k(A) r_0\|_2$, and for $\|r_k^{\mathrm{CGS}}\|_2 = \|(\phi_k(A))^2 r_0\|_2$ this behavior is even worse.

The polynomials $\phi_k$ are characterized by $\phi_k(0) = 1$ and

$(\phi_j(A^T)\tilde r_0)^T \phi_k(A) r_0 = 0, \quad j = 0, \dots, k-1,$

i.e. by the fact that $\phi_k(A) r_0$ is orthogonal to the Krylov space $\mathcal{K}_k(\tilde r_0, A^T)$.

Equivalent is the requirement that $\phi_k(A) r_0$ is orthogonal to $\tilde\phi_j(A^T)\tilde r_0$ for suitably chosen polynomials $\tilde\phi_j$ of degree $j$, $j = 0, \dots, k-1$.

(36)

BiCGStab

Van der Vorst (1992) proposed the choice

$\tilde\phi_k(t) = \prod_{j=1}^{k} (1 - \omega_j t) = (1 - \omega_k t)\,\tilde\phi_{k-1}(t),$

where $\omega_k$ is determined such that the residual norm $\|r_k\|_2$ is minimal.

The resulting algorithm, BiCGStab, shows a much smoother convergence behavior than BiCG or CGS.

(37)

BiCGStab ct.

For $\Phi_j := \tilde\phi_j \phi_j$, $\Psi_j := \tilde\phi_j \psi_j$ it holds

$\Phi_k = \tilde\phi_k \phi_k = (1 - \omega_k \iota)\,\tilde\phi_{k-1}(\phi_{k-1} - \tau_k \iota \psi_{k-1}) = \Phi_{k-1} - \tau_k \iota \Psi_{k-1} - \omega_k \iota (\Phi_{k-1} - \tau_k \iota \Psi_{k-1}),$

$\Psi_k = \tilde\phi_k(\phi_k + \beta_k \psi_{k-1}) = \tilde\phi_k \phi_k + \beta_k (1 - \omega_k \iota)\,\tilde\phi_{k-1}\psi_{k-1} = \Phi_k + \beta_k \Psi_{k-1} - \beta_k \omega_k \iota \Psi_{k-1}.$

To execute these recurrences, we have to represent $\beta_k$ and $\tau_k$ of the BiCG method in terms of scalar products of the polynomials $\Phi_j$ and $\Psi_j$. Let $\tilde\alpha_k := \langle \tilde\phi_k, \phi_k \rangle$.

(38)

BiCGStab ct.

From

$\tilde\phi_k = (-1)^k \prod_{j=1}^{k} \omega_j\, \iota^k + \hat\phi_{k-1}, \quad \hat\phi_{k-1} \in \Pi_{k-1},$

and the orthogonality of $\phi_k$ to $\Pi_{k-1}$ we get

$\tilde\alpha_k = \langle \tilde\phi_k, \phi_k \rangle = (-1)^k \prod_{j=1}^{k} \omega_j\, \langle \iota^k, \phi_k \rangle,$

and from

$\phi_k = (-1)^k \prod_{j=1}^{k} \tau_j\, \iota^k + \bar\phi_{k-1}, \quad \bar\phi_{k-1} \in \Pi_{k-1},$

it follows

$\alpha_k = \langle \phi_k, \phi_k \rangle = (-1)^k \prod_{j=1}^{k} \tau_j\, \langle \iota^k, \phi_k \rangle = \prod_{j=1}^{k} \frac{\tau_j}{\omega_j}\, \tilde\alpha_k,$

(39)

BiCGStab ct.

$\alpha_k = \prod_{j=1}^{k} \frac{\tau_j}{\omega_j}\,\tilde\alpha_k \;\Longrightarrow\; \beta_k = \frac{\alpha_k}{\alpha_{k-1}} = \frac{\tilde\alpha_k}{\tilde\alpha_{k-1}} \cdot \frac{\tau_k}{\omega_k}.$

Similarly, from $\langle \chi, \iota\psi_{k-1} \rangle = 0$ for every $\chi \in \Pi_{k-2}$ one gets

$\tilde\gamma_k := \langle 1, \iota\Psi_{k-1} \rangle = \langle \tilde\phi_{k-1}, \iota\psi_{k-1} \rangle = (-1)^{k-1} \prod_{j=1}^{k-1} \omega_j\, \langle \iota^{k-1}, \iota\psi_{k-1} \rangle,$

$\gamma_k = \langle \psi_{k-1}, \iota\psi_{k-1} \rangle = (-1)^{k-1} \prod_{j=1}^{k-1} \tau_j\, \langle \iota^{k-1}, \iota\psi_{k-1} \rangle = \prod_{j=1}^{k-1} \frac{\tau_j}{\omega_j}\, \tilde\gamma_k,$

and therefore

$\tau_k = \frac{\alpha_{k-1}}{\gamma_k} = \frac{\tilde\alpha_{k-1}}{\tilde\gamma_k}.$

(40)

BiCGStab ct.

Finally,

$r_k = \Phi_k(A) r_0 = (I - \omega_k A)\,\tilde\phi_{k-1}(A)\,(\phi_{k-1}(A) - \tau_k A \psi_{k-1}(A))\, r_0 = (I - \omega_k A)(\Phi_{k-1}(A) - \tau_k A \Psi_{k-1}(A))\, r_0 =: (I - \omega_k A)\, u_k$

yields that $\|r_k\|_2$ is minimal if and only if

$\omega_k = \frac{(u_k)^T A u_k}{\|A u_k\|_2^2}$

(the minimizer of the quadratic $\omega \mapsto \|u_k - \omega A u_k\|_2^2$).

(41)

BiCGStab: Van der Vorst 1992

1: $r_0 = b - Ax_0$; $d_1 = r_0$
2: Choose $\tilde r_0$ with $\tilde\alpha_0 = (\tilde r_0)^T r_0 \neq 0$
3: for $k = 1, 2, \dots$ until convergence do
4: &nbsp; $s_k = A d_k$
5: &nbsp; $\tilde\gamma_k = (\tilde r_0)^T s_k$
6: &nbsp; $\tau_k = \tilde\alpha_{k-1}/\tilde\gamma_k$
7: &nbsp; $u_k = r_{k-1} - \tau_k s_k$
8: &nbsp; $t_k = A u_k$
9: &nbsp; $\omega_k = (t_k)^T u_k / \|t_k\|_2^2$
10: &nbsp; $x_k = x_{k-1} + \tau_k d_k + \omega_k u_k$
11: &nbsp; $r_k = u_k - \omega_k t_k$
12: &nbsp; $\beta_k = 1/\tilde\alpha_{k-1}$
13: &nbsp; $\tilde\alpha_k = (\tilde r_0)^T r_k$
14: &nbsp; $\beta_k = \tilde\alpha_k \beta_k \tau_k/\omega_k$
15: &nbsp; $d_{k+1} = r_k + \beta_k (d_k - \omega_k s_k)$
16: end for
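A minimal NumPy sketch of this BiCGStab loop, again with an illustrative choice $\tilde r_0 = r_0$ and a stopping test that is not part of the slides:

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Minimal BiCGStab sketch following van der Vorst's algorithm above."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                 # r0
    rt = r.copy()                 # r~0, here chosen as r0
    alpha = rt @ r                # alpha~0 = r~0^T r0
    d = r.copy()                  # d1 = r0
    for _ in range(maxiter):
        s = A @ d                 # s_k = A d_k
        tau = alpha / (rt @ s)    # tau_k = alpha~_{k-1} / gamma~_k
        u = r - tau * s           # u_k
        t = A @ u                 # t_k = A u_k
        omega = (t @ u) / (t @ t) # omega_k minimizes ||u_k - omega t_k||_2
        x += tau * d + omega * u  # x_k
        r = u - omega * t         # r_k
        beta = 1.0 / alpha
        alpha = rt @ r            # alpha~_k = r~0^T r_k
        beta *= alpha * tau / omega   # beta_k = (alpha~_k/alpha~_{k-1}) tau_k/omega_k
        if np.linalg.norm(r) < tol:
            break
        d = r + beta * (d - omega * s)  # d_{k+1}
    return x
```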

(42)

Cost of BiCGStab

2 matrix-vector products (none with $A^T$)
4 scalar products
6 _axpy
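These short-recurrence methods are also available as library routines; for instance, SciPy provides `bicg`, `cgs`, and `bicgstab` in `scipy.sparse.linalg`, so comparisons like the ones on the following slides can be reproduced along these lines. The 1D Laplacian below is only a stand-in for the 2D convection-diffusion matrices of the examples.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicg, cgs, bicgstab

# Stand-in test matrix: 1D Laplacian (the slides use 2D convection-diffusion).
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

for solver in (bicg, cgs, bicgstab):
    res_hist = []
    # the callback receives the current iterate, so we log true residuals
    cb = lambda xk: res_hist.append(np.linalg.norm(b - A @ xk))
    x, info = solver(A, b, callback=cb)
    print(solver.__name__, 'converged' if info == 0 else f'info={info}',
          'iterations:', len(res_hist))
```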

(43)

Example

[Convergence plot: residual norm vs. iteration, BiCGStab for $-\Delta u - 256\, u_y = 0$, $h = 1/128$]

without preconditioning: 1.284E8 flops

nofill luinc: 3.650E7+1.12E5 flops

luinc(a,0.1): 1.690E7+4.23E6 flops

luinc(a,0.01): 9.791E6+4.225E6 flops

(44)

Example

[Convergence plot: residual norm vs. iteration, BiCGStab for $-\Delta u - 256\, u_y - 163.84\, u = 0$, $h = 1/128$]

without preconditioning: 1.683E8 flops

nofill luinc: 3.856E7+1.12E5 flops

luinc(a,0.1): 1.800E7+4.23E6 flops

luinc(a,0.01): 9.791E6+4.225E6 flops

luinc(a,0.001): 5.728E6+4.225E6 flops

(45)

Example

[Convergence plot: residual norm vs. iteration, BiCG (red), CGS (blue), BiCGStab (green) for $-\Delta u - 256\, u_y - 1638.4\, u = 0$, $h = 1/128$]

(46)

Example

[Convergence plot: residual norm vs. iteration, BiCGStab for $-\Delta u - 256\, u_y - 1638.4\, u = 0$, $h = 1/128$, without preconditioning]

nofill luinc: 6.841E7+1.12E5 flops

luinc(a,0.1): 2.128E7+4.23E6 flops

luinc(a,0.01): 1.210E7+4.23E6 flops

(47)

Stagnation

BiCGStab steps can be viewed as a combination of a BiCG step and a GMRES(1) step. As soon as the GMRES(1) part (nearly) stagnates, this behavior can often not be remedied by the following BiCG step.

Another inconvenient aspect of BiCGStab is the fact that the polynomials $\psi_k$ by construction have only real roots, whereas optimal reduction polynomials for matrices with nonreal eigenvalues often have nonreal roots.

Gutknecht (1993) proposed BiCGStab2, where in odd-numbered iteration steps the polynomial $\psi$ is expanded by a linear factor, but in the following even-numbered step this linear factor is discarded, and the polynomial $\psi$ from the preceding even-numbered step is expanded by a quadratic factor. This approach improved on BiCGStab in examples with nonreal eigenvalues. A possible weakness of Gutknecht's approach is that in odd-numbered steps the same problem may occur as in BiCGStab, and since the following even-numbered step relies on this result, this may lead to poor convergence.

(48)

Stagnation ct.

Sleijpen & Fokkema (1993) suggested the following (even simpler) approach, called BiCGStab(2), which avoids the intermediate BiCGStab-type steps: having applied two BiCG steps, the polynomial $\psi$ is expanded directly by a quadratic factor, without constructing the linear factors.

The method produces meaningful residuals only in the even-numbered steps; it does not generate useful approximations in the odd-numbered steps.

(49)

Stagnation ct.

More generally, the polynomial $\psi$ can be constructed as a product of polynomials of degree $\ell \geq 2$, resulting in the BiCGStab($\ell$) method, which works as follows:

In the $k$-th step, $k = m\ell + \ell$, $m = 0, 1, \dots$, the polynomial $\psi_k$ is chosen as $\psi_k = \chi_m \psi_{k-\ell}$, where $\chi_m \in \Pi_\ell$ is determined such that $\chi_m(0) = 1$ and

$\|r_k\|_2 = \|\chi(A)\,\psi_{k-\ell}(A)\,\phi_k(A)\, r_0\|_2, \quad \chi \in \Pi_\ell,\ \chi(0) = 1,$

is minimal. The minimum norm solution is obtained by solving the associated normal equations (i.e. a linear system of dimension $\ell$). The vectors $r_{m\ell+i}$, $i = 1, \dots, \ell-1$, are constructed by (transpose-free) BiCG steps, where $\psi_{m\ell+i}$ is chosen as $\psi_{m\ell+i} = \iota^i \psi_{m\ell}$.

A BiCGStab($\ell$) implementation in MATLAB and FORTRAN is available from http://www.math.uu.nl/people/sleijpen

(50)

BiCGStab(2): Sleijpen, Fokkema 1992

1: $r_0 = b - Ax_0$; Choose $\tilde r_0$ with $\tilde\alpha_0 = (\tilde r_0)^T r_0 \neq 0$
2: $\rho_0 = 1$; $u = 0$; $\alpha = 0$; $\omega_2 = 1$
3: for $i = 0, 2, 4, \dots$ until convergence do
4: &nbsp; $\rho_0 = -\omega_2 \rho_0$
5: &nbsp; $\rho_1 = (\tilde r_0)^T r_i$; $\beta = \alpha\rho_1/\rho_0$; $\rho_0 = \rho_1$
6: &nbsp; $u = r_i - \beta u$; $v = A u$
7: &nbsp; $\gamma = v^T \tilde r_0$; $\alpha = \rho_0/\gamma$
8: &nbsp; $r = r_i - \alpha v$; $s = A r$; $x = x_i + \alpha u$
9: &nbsp; $\rho_1 = (\tilde r_0)^T s$; $\beta = \alpha\rho_1/\rho_0$; $\rho_0 = \rho_1$
10: &nbsp; $v = s - \beta v$; $w = A v$
11: &nbsp; $\gamma = w^T \tilde r_0$; $\alpha = \rho_0/\gamma$
12: &nbsp; $u = r - \beta u$; $r = r - \alpha v$; $s = s - \alpha w$; $t = A s$
13: &nbsp; $\omega_1 = r^T s$; $\mu = s^T s$; $\nu = s^T t$; $\tau = t^T t$; $\omega_2 = r^T t$; $\tau = \tau - \nu^2/\mu$
14: &nbsp; $\omega_2 = (\omega_2 - \nu\omega_1/\mu)/\tau$; $\omega_1 = (\omega_1 - \nu\omega_2)/\mu$
15: &nbsp; $x_{i+2} = x + \omega_1 r + \omega_2 s + \alpha u$
16: &nbsp; $r_{i+2} = r - \omega_1 s - \omega_2 t$
17: &nbsp; if $x_{i+2}$ accurate enough then quit
18: &nbsp; $u = u - \omega_1 v - \omega_2 w$
19: end for
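For completeness, a direct NumPy transcription of this loop might look as follows; this is a sketch under the assumption that the reconstructed statements above are the intended ones, with an added convergence test on $\|r\|_2$.

```python
import numpy as np

def bicgstab2(A, b, x0=None, tol=1e-10, maxiter=500):
    """Minimal BiCGStab(2) sketch following the Sleijpen/Fokkema loop above."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rt = r.copy()                        # r~0, here chosen as r0
    rho0, u, alpha, omega2 = 1.0, np.zeros(n), 0.0, 1.0
    for _ in range(maxiter):
        rho0 = -omega2 * rho0
        # even BiCG step (statements 5-8); r, x still hold r_i, x_i here
        rho1 = rt @ r; beta = alpha * rho1 / rho0; rho0 = rho1
        u = r - beta * u
        v = A @ u
        alpha = rho0 / (v @ rt)
        r = r - alpha * v
        s = A @ r
        x = x + alpha * u
        # odd BiCG step (statements 9-12); its x-update is deferred to step 15
        rho1 = rt @ s; beta = alpha * rho1 / rho0; rho0 = rho1
        v = s - beta * v
        w = A @ v
        alpha = rho0 / (w @ rt)
        u = r - beta * u
        r = r - alpha * v
        s = s - alpha * w
        t = A @ s
        # GMRES(2) part (statements 13-16)
        omega1 = r @ s; mu = s @ s; nu = s @ t; tau = t @ t
        omega2 = r @ t; tau = tau - nu**2 / mu
        omega2 = (omega2 - nu * omega1 / mu) / tau
        omega1 = (omega1 - nu * omega2) / mu
        x = x + omega1 * r + omega2 * s + alpha * u
        r = r - omega1 * s - omega2 * t
        if np.linalg.norm(r) < tol:
            break
        u = u - omega1 * v - omega2 * w
    return x
```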

(51)

Comments on BiCGStab(2)

The even BiCG step is performed in steps 5-8, the odd BiCG step in steps 9-12, and steps 13-16 correspond to the GMRES(2) part.

The BiCGStab(2) algorithm requires 14 _axpy, 9 dot products, and 4 matrix-vector products per full cycle, and is only a little more expensive than two steps of BiCGStab, which requires 12 _axpy, 8 dot products, and 4 matrix-vector multiplications.

A further variant of BiCG called GPBi-CG (generalized product BiCG), proposed by Zhang (1997), generates the $\psi$ polynomials by a three-term recurrence relation. For a special choice of a parameter it reduces to BiCGStab($\ell$).

(52)

Example

[Convergence plot: $\|r\|_2$ vs. iteration, BiCGStab(2) for $-\Delta u - 256\, u_y = 0$, $h = 1/128$]

(53)

Example

[Convergence plot: $\|r\|_2$ vs. iteration, BiCGStab(2) for $-\Delta u - 256\, u_y - 163.84\, u = 0$, $h = 1/128$]

(54)

Example

[Convergence plot: $\|r\|_2$ vs. iteration, BiCGStab(2) for $-\Delta u - 256\, u_y - 1638.4\, u = 0$, $h = 1/128$]

(55)

Example

[Convergence plot: $\|r\|_2$ vs. iteration, BiCGStab(4) for $-\Delta u - 256\, u_y = 0$, $h = 1/128$]

(56)

Example

[Convergence plot: $\|r\|_2$ vs. iteration, BiCGStab(4) for $-\Delta u - 256\, u_y - 163.84\, u = 0$, $h = 1/128$]

(57)

Example

[Convergence plot: $\|r\|_2$ vs. iteration, BiCGStab(4) for $-\Delta u - 256\, u_y - 1638.4\, u = 0$, $h = 1/128$]
