(1)

ITERATIVE PROJECTION METHODS FOR

SPARSE LINEAR SYSTEMS AND

EIGENPROBLEMS

CHAPTER 2 : SPLITTING METHODS

Heinrich Voss

voss@tu-harburg.de

Hamburg University of Technology, Institute of Numerical Simulation

(2)

Jacobi method

The simplest iterative scheme for linear systems of equations
$$Ax = b, \qquad A = (a_{ij})_{i,j=1,\dots,n} \in \mathbb{C}^{n\times n}, \quad b \in \mathbb{C}^n,$$
is the Jacobi method, which is defined for system matrices with nonzero diagonal elements.

The iteration step $x^k \to x^{k+1}$ can be motivated by solving the $i$-th equation for $x_i$ and substituting the known $x_j^{(k)}$ on the right-hand side, which yields
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{\substack{j=1\\ j\neq i}}^{n} a_{ij}\, x_j^{(k)}\Big), \qquad i = 1,\dots,n.$$

(3)

Jacobi sweep

for i = 1 : n do
    xnew(i) = b(i)
    for j = 1 : i-1 do
        xnew(i) = xnew(i) - a(i,j) * xold(j)
    end for
    for j = i+1 : n do
        xnew(i) = xnew(i) - a(i,j) * xold(j)
    end for
    xnew(i) = xnew(i) / a(i,i)
end for
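For concreteness, a minimal NumPy sketch of one Jacobi sweep; the function name `jacobi_sweep` and the dense-matrix storage are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def jacobi_sweep(A, b, x_old):
    """One Jacobi sweep for Ax = b (dense sketch, nonzero diagonal assumed)."""
    n = len(b)
    x_new = np.empty(n)
    for i in range(n):
        # the sum over j != i uses only components of the old iterate
        s = A[i, :i] @ x_old[:i] + A[i, i+1:] @ x_old[i+1:]
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new
```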

(4)

Gauß–Seidel method

Notice that in the Jacobi method one does not use the newest information that is available.

When the $i$-th component $x_i^{(k+1)}$ is evaluated, the improved components $x_1^{(k+1)},\dots,x_{i-1}^{(k+1)}$ of $x^{k+1}$ are already known, and it is natural to take advantage of this information.

(5)

Gauß–Seidel sweep

for i = 1 : n do
    x(i) = b(i)
    for j = 1 : i-1 do
        x(i) = x(i) - a(i,j) * x(j)
    end for
    for j = i+1 : n do
        x(i) = x(i) - a(i,j) * x(j)
    end for
    x(i) = x(i) / a(i,i)
end for

Observe that in this case we do not have to use two vectors to store $x^k$ and $x^{k+1}$; the memory of the old approximation can be overwritten with the new one.
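A minimal NumPy sketch of one Gauß–Seidel sweep illustrating this in-place overwrite (the function name is an assumption for illustration):

```python
import numpy as np

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep; x is overwritten in place, so only one vector is stored."""
    n = len(b)
    for i in range(n):
        # x[:i] already holds updated components, x[i+1:] still holds old ones
        s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        x[i] = (b[i] - s) / A[i, i]
    return x
```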

(6)

Example

Consider

$$-\Delta u = 0 \ \text{ in } \Omega = (0,1)\times(0,1), \qquad u = 0 \ \text{ on } \partial\Omega.$$

Discretizing $\Delta$ with central differences with stepsize $h = 1/128$ yields the linear system

$$4U_{ij} - U_{i-1,j} - U_{i,j-1} - U_{i,j+1} - U_{i+1,j} = 0, \qquad i,j = 1,\dots,127,$$
where $U_{ij}$ denotes the approximation to $u(ih, jh)$ and the boundary values are zero.
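One possible way to assemble this system, sketched with SciPy's sparse Kronecker products; the helper name `poisson_2d` is hypothetical, and the lecture does not prescribe any particular implementation.

```python
import numpy as np
import scipy.sparse as sp

def poisson_2d(m):
    """5-point finite-difference matrix for -Laplace u on the unit square with
    Dirichlet boundary and m interior points per direction (m = 127 gives h = 1/128).
    Unknowns U_ij are numbered lexicographically."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))   # 1D second-difference stencil
    I = sp.identity(m)
    return sp.kron(I, T) + sp.kron(T, I)                  # 4 on the diagonal, -1 for neighbours

A = poisson_2d(127)
b = np.zeros(A.shape[0])     # right-hand side is zero for the model problem
```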

(7)

Convergence history

[Figure: Jacobi and Gauss–Seidel for $-\Delta u = 0$, $h = 1/128$; error norm ($10^{-2}$ to $10^{0}$) versus iteration number (1000 to 6000).]

(8)

Splitting methods

The Jacobi and the Gauß–Seidel methods are typical members of a large class of methods for the iterative solution of the linear system of equations
$$Ax = b, \qquad A \in \mathbb{C}^{n\times n},\ b \in \mathbb{C}^n,$$
which are obtained by splitting the system matrix as $A = B + (A - B)$, where $B$ is regular, and rewriting $Ax = b$ as
$$Bx = (B - A)x + b.$$
This system is solved iteratively by the so-called splitting method
$$x^{k+1} := B^{-1}\big((B - A)x^k + b\big).$$
$B$ is called the splitting matrix, and $G := B^{-1}(B - A)$ is called the iteration matrix of the method.

(9)

Examples

Jacobi method

B := diag(A)

Gauß-Seidel method

B := diag(A) − E, where −E denotes the strictly lower triangular part of A
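Both choices fit into one generic routine. The following dense sketch (the function name `splitting_iteration` is illustrative) forms $x^{k+1} = B^{-1}\big((B-A)x^k + b\big)$ literally by solving with $B$ in every step, which is only sensible for small examples:

```python
import numpy as np

def splitting_iteration(A, b, B, x0, steps):
    """Generic splitting method x_{k+1} = B^{-1}((B - A) x_k + b), dense sketch."""
    x = x0.copy()
    for _ in range(steps):
        x = np.linalg.solve(B, (B - A) @ x + b)
    return x

# Jacobi:        B = np.diag(np.diag(A))
# Gauss-Seidel:  B = np.tril(A)     (diagonal plus strictly lower triangle, i.e. D - E)
```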

(10)

Convergence

By the fixed point theorem for contracting mappings, the sequence $\{x^k\}$ generated by
$$x^{k+1} := B^{-1}\big((B - A)x^k + b\big)$$
converges for every initial vector $x^0$ and every right-hand side $b$ to the unique solution $x^*$ of $Ax = b$ if there exists a vector norm $\|\cdot\|$ such that for the corresponding matrix norm
$$\|G\| = \|B^{-1}(B - A)\| =: \alpha < 1. \qquad (*)$$
In this case the error of the $k$-th iterate can be estimated as
$$\|x^k - x^*\| \le \frac{\alpha}{1-\alpha}\,\|x^k - x^{k-1}\| \le \frac{\alpha^k}{1-\alpha}\,\|x^1 - x^0\|.$$

(11)

Convergence ct.

Conversely, it can be shown that convergence of the splitting method for every initial vector $x^0$ and every $b$ implies the existence of a vector norm $\|\cdot\|$ such that (*) holds for the corresponding matrix norm.

The adequate measure of the size of the iteration matrix $G$ is the spectral radius
$$\rho(G) := \max\{|\lambda| : \lambda \in \operatorname{spec}(G)\},$$
where $\operatorname{spec}(G)$ denotes the spectrum of $G$, i.e. the set of all eigenvalues of $G$.

This follows from the following theorem:

Theorem 2.1: The splitting iteration
$$x^{k+1} := B^{-1}\big((B - A)x^k + b\big)$$
converges for every initial vector $x^0 \in \mathbb{C}^n$ and for every vector $b \in \mathbb{C}^n$ to the unique solution of $Ax = b$ if and only if the iteration matrix $G := B^{-1}(B - A)$ satisfies $\rho(G) < 1$.

(12)

Proof for special case

Before we prove Theorem 2.1 for the general case we first consider the special case of a real symmetric iteration matrix G which is more transparent.

In this case there exists an orthonormal set $v^1,\dots,v^n \in \mathbb{C}^n$ of eigenvectors of $G$:
$$Gv^j = \lambda_j v^j, \qquad (v^j)^H v^k = \delta_{jk}, \quad j,k = 1,\dots,n,$$
where $\delta_{jk}$ denotes the Kronecker symbol.

We represent $x^k$ and $\tilde b := B^{-1}b$ as linear combinations of the $v^j$:
$$x^k =: \sum_{j=1}^n \alpha_{kj} v^j, \qquad \tilde b =: \sum_{j=1}^n \beta_j v^j.$$

From the splitting iteration we obtain
$$\sum_{j=1}^n \alpha_{k+1,j} v^j = x^{k+1} = Gx^k + \tilde b = \sum_{j=1}^n (\alpha_{kj}\lambda_j + \beta_j)\, v^j.$$

(13)

Proof for special case ct.

Hence, the coefficients $\alpha_{kj}$ satisfy the recurrence formula
$$\alpha_{k+1,j} = \lambda_j \alpha_{kj} + \beta_j, \qquad j = 1,\dots,n,$$
from which we easily get by induction
$$\alpha_{kj} = \begin{cases} \lambda_j^k \alpha_{0j} + \dfrac{1-\lambda_j^k}{1-\lambda_j}\,\beta_j & \text{for } \lambda_j \neq 1,\\[2mm] \alpha_{0j} + k\,\beta_j & \text{for } \lambda_j = 1, \end{cases} \qquad j = 1,\dots,n.$$

Now it is obvious that the iteration converges for every initial vector $x^0$ and for every vector $\tilde b$ (i.e. for every $\alpha_{0j}$ and every $\beta_j$, $j = 1,\dots,n$) if and only if $|\lambda_j| < 1$ for all $j$, i.e. if and only if $\rho(G) < 1$.

(14)

Lemma

To prove Theorem 2.1 in its full generality we need the following lemma:

Lemma 2.2

Let $C \in \mathbb{C}^{n\times n}$. Then for every $\varepsilon > 0$ there exists a vector norm (which depends on $C$ and on $\varepsilon$) such that the corresponding matrix norm satisfies $\|C\| \le \rho(C) + \varepsilon$.

(15)

Proof of Lemma 2.2

Because it is more transparent than the general case we first consider the case that C is diagonalizable.

Let $T \in \mathbb{C}^{n\times n}$ be a regular matrix such that
$$T^{-1}CT = \operatorname{diag}\{\lambda_1,\dots,\lambda_n\}.$$

Obviously, we then have
$$\|T^{-1}CT\|_\infty = \rho(C).$$

Consider the vector norm $\|x\|_{T^{-1}} := \|T^{-1}x\|_\infty$. Then we obtain for the corresponding matrix norm
$$\|C\|_{T^{-1}} = \max_{x\neq 0}\frac{\|Cx\|_{T^{-1}}}{\|x\|_{T^{-1}}} = \max_{x\neq 0}\frac{\|T^{-1}CT\,T^{-1}x\|_\infty}{\|T^{-1}x\|_\infty} = \max_{y\neq 0}\frac{\|T^{-1}CTy\|_\infty}{\|y\|_\infty} = \|T^{-1}CT\|_\infty = \rho(C),$$
so in the diagonalizable case the assertion even holds with $\varepsilon = 0$.

(16)

Proof of Lemma 2.2 ct.

In the general case we consider the transformation of $C$ to Jordan canonical form, $T^{-1}CT = \operatorname{diag}\{J_1, J_2, \dots, J_m\} =: J$, where
$$J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix} \in \mathbb{C}^{n_i \times n_i}, \qquad \sum_{i=1}^m n_i = n.$$

(17)

Proof of Lemma 2.2 ct.

Let $D_\varepsilon := \operatorname{diag}\{1, \varepsilon, \varepsilon^2, \dots, \varepsilon^{n-1}\}$. Then $D_\varepsilon^{-1} J D_\varepsilon = \operatorname{diag}\{J_1^\varepsilon, \dots, J_m^\varepsilon\}$, where
$$J_i^\varepsilon = \begin{pmatrix} \lambda_i & \varepsilon & & \\ & \lambda_i & \ddots & \\ & & \ddots & \varepsilon \\ & & & \lambda_i \end{pmatrix}.$$
Hence, we obtain
$$\|D_\varepsilon^{-1} T^{-1} C T D_\varepsilon\|_\infty \le \rho(C) + \varepsilon,$$

and the rest follows as in the case of a matrix which can be transformed to diagonal form.

(18)

Proof of Theorem 2.1

Let $\rho(G) < 1$. Then by Lemma 2.2 there exists a norm $\|\cdot\|$ such that $\|G\| < 1$, and the convergence follows from the contracting mapping theorem.

Suppose that $\rho(G) \ge 1$. Let $\tilde\lambda$ be an eigenvalue of $G$ such that $|\tilde\lambda| = \rho(G)$, and let $y$ be a corresponding eigenvector. Then for the choice
$$x^0 = y,\ \tilde b = 0 \ \text{ for } \tilde\lambda \neq 1, \qquad x^0 = y,\ \tilde b = y \ \text{ for } \tilde\lambda = 1,$$
we obtain
$$x^k = \tilde\lambda^k y \ \text{ for } \tilde\lambda \neq 1, \qquad x^k = (k+1)\,y \ \text{ for } \tilde\lambda = 1,$$
and in neither case does the sequence $\{x^k\}$ converge to the solution of the corresponding system. $\square$

(19)

Convergence Rate

By Theorem 2.1 we have to choose the splitting matrix $B$ such that the spectral radius $\rho(G)$ is less than 1 to obtain convergence of the iteration. Actually, we should choose $B$ such that $\rho(G)$ is as small as possible, because $\rho(G)$ is the asymptotic convergence rate of the iteration:

Theorem 2.3

Suppose that $\rho(G) < 1$ and denote by $e^k = x^k - x^*$ the error of the $k$-th iterate of the splitting method. Then for every vector norm
$$\ell := \sup_{x^0 \neq x^*}\ \limsup_{k\to\infty} \sqrt[k]{\frac{\|e^k\|}{\|e^0\|}} = \rho(G).$$

(20)

Proof of Theorem 2.3

Let $\tilde\lambda$ be an eigenvalue of $G$ such that $|\tilde\lambda| = \rho(G)$, and let $e^0$ be an eigenvector of $G$ corresponding to $\tilde\lambda$. Then obviously
$$\frac{\|e^k\|}{\|e^0\|} = \rho(G)^k,$$
and the inequality $\ell \ge \rho(G)$ holds.

To prove the converse inequality we choose $\delta > 0$ arbitrarily. Then by Lemma 2.2 there exists a vector norm $\|\cdot\|_\delta$ such that $\|G\|_\delta \le \rho(G) + \delta$.

(21)

Proof of Theorem 2.3 ct.

The equivalence of all norms on $\mathbb{C}^n$ yields the existence of constants $M \ge m > 0$ such that
$$m\|x\| \le \|x\|_\delta \le M\|x\| \quad \text{for every } x \in \mathbb{C}^n.$$

Hence, for every initial error $e^0$ we obtain
$$\|e^k\| \le \frac{1}{m}\|e^k\|_\delta = \frac{1}{m}\|G^k e^0\|_\delta \le \frac{1}{m}\big(\rho(G)+\delta\big)^k \|e^0\|_\delta \le \frac{M}{m}\big(\rho(G)+\delta\big)^k \|e^0\|,$$
i.e.
$$\sqrt[k]{\frac{\|e^k\|}{\|e^0\|}} \le \big(\rho(G)+\delta\big)\sqrt[k]{\frac{M}{m}}.$$

For $k \to \infty$ we get $\ell \le \rho(G) + \delta$, and because $\delta > 0$ can be chosen arbitrarily small, $\ell \le \rho(G)$. $\square$

(22)

Convergence rate of Jacobi method

For the Jacobi method applied to the model problem $-\Delta u = f$ in the unit square with Dirichlet boundary conditions, the spectral radius of the iteration matrix $G_J$ is
$$\rho(G_J) = \cos\frac{\pi}{m+1} \qquad (\approx 0.9997 \text{ for } m = 128).$$

For the discrete model problem with stepsize $h = 1/(m+1)$ the spectral radius of the iteration matrix of the Jacobi method therefore satisfies $1 - \rho(G_J) = O(h^2)$.

Hence, the convergence is very slow for fine discretizations. The same asymptotic behaviour holds for the Gauß–Seidel method.
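This rate is easy to check numerically on a small grid. The sketch below assumes the same 5-point discretization as before; the grid size $m = 15$ is chosen only so that dense eigenvalue computation stays cheap.

```python
import numpy as np
import scipy.sparse as sp

m = 15                                          # small grid, dense eigenvalues are cheap
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
A = (sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))).toarray()
D = np.diag(np.diag(A))
G_J = np.linalg.solve(D, D - A)                 # Jacobi iteration matrix B^{-1}(B - A)
rho = np.max(np.abs(np.linalg.eigvals(G_J)))
print(rho, np.cos(np.pi / (m + 1)))             # both approximately 0.98079
```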

(23)

Remark

The contracting mapping theorem guarantees that the error is reduced in each step by a factor close to ρ(G). This is true with respect to the norm constructed in Lemma 2.2.

It is usually the 2-norm or the ∞-norm or some closely related norm of the error that is of interest.

For matrices with a complete set of orthonormal eigenvectors (i.e. for normal matrices) the 2-norm and the spectral radius coincide. Thus, if $I - B^{-1}A$ is a normal matrix, then the 2-norm of the error is reduced at least by the factor $\rho(I - B^{-1}A)$ in each step.

For nonnormal matrices it is often the case that
$$\rho(I - B^{-1}A) < 1 < \|I - B^{-1}A\|_2.$$

In this case the 2-norm of the error may grow over some finite number of steps before it is reduced and convergence occurs.

(24)

Example

Consider the homogeneous problem $Ax = 0$ where
$$A = \begin{pmatrix} 1 & -1.15 & & & \\ 0.15 & 1 & -1.15 & & \\ & \ddots & \ddots & \ddots & \\ & & 0.15 & 1 & -1.15 \\ & & & 0.15 & 1 \end{pmatrix} \in \mathbb{R}^{100\times 100}.$$

Choose $B$ as the lower triangle of $A$ (Gauß–Seidel method), and let $x^0$ be a random vector.

The following figure demonstrates that the error increases by about 15 orders of magnitude before starting to decrease.
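The behaviour should be reproducible with a few lines. In the sketch below the tridiagonal entries are taken from the matrix as reconstructed above, and the iteration count of 400 matches the range of the figure; since $b = 0$, the iterate itself is the error.

```python
import numpy as np

np.random.seed(0)
n = 100
# tridiagonal matrix from the example: 1 on the diagonal, 0.15 below, -1.15 above
A = np.eye(n) + 0.15 * np.diag(np.ones(n-1), -1) - 1.15 * np.diag(np.ones(n-1), 1)
B = np.tril(A)                                  # Gauss-Seidel splitting matrix
x = np.random.rand(n)
errors = []
for k in range(400):
    x = np.linalg.solve(B, (B - A) @ x)         # b = 0, so x is the current error
    errors.append(np.linalg.norm(x))
print(max(errors) / errors[0], errors[-1])      # large transient growth, eventual decay
```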

(25)

Example ct.

[Figure: 2-norm of the Gauß–Seidel error versus iteration number (0 to 400); the vertical scale ranges from $10^{-25}$ to $10^{15}$.]

(26)

Remark

Without loss of generality we considered a homogeneous problem $Ax = 0$. The iteration
$$x^{k+1} = B^{-1}(B - A)x^k + B^{-1}b$$
for the inhomogeneous problem $Ax = b$ and the iteration
$$y^{k+1} = B^{-1}(B - A)y^k, \qquad y^0 := x^0 - A^{-1}b,$$
for the homogeneous problem $Ay = 0$ exhibit the same convergence behaviour. By induction it follows that $y^k = x^k - A^{-1}b$: for $k = 0$ this is trivial, and if it holds for some $k$, then
$$y^{k+1} = B^{-1}(B - A)y^k = B^{-1}(B - A)(x^k - A^{-1}b) = B^{-1}(B - A)x^k + B^{-1}b - A^{-1}b = x^{k+1} - A^{-1}b.$$

(27)

Accelerating the Gauß–Seidel method

The convergence properties of the Gauß–Seidel method can be improved substantially if extrapolation is used.

The Gauß–Seidel method yields the $i$-th component of the new iterate as
$$\tilde x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)}\Big),$$
which is modified as
$$x_i^{(k+1)} := x_i^{(k)} + \omega\big(\tilde x_i^{(k+1)} - x_i^{(k)}\big),$$
where $\omega > 0$ is a fixed relaxation factor.

Inserting $\tilde x_i^{(k+1)}$ into the last equation one gets
$$x_i^{(k+1)} = x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij} x_j^{(k)}\Big).$$
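A componentwise NumPy sketch of one such relaxed sweep (performed in place; the function name is an assumption for illustration):

```python
import numpy as np

def sor_sweep(A, b, x, omega):
    """One SOR sweep: Gauss-Seidel trial value blended with the old component."""
    n = len(b)
    for i in range(n):
        gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        x[i] += omega * (gs - x[i])   # x_i^(k+1) = x_i^(k) + omega*(x~_i^(k+1) - x_i^(k))
    return x
```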

(28)

Successive Overrelaxation (SOR)

If we use the decomposition $A = D - E - F$, where $D$ denotes the diagonal of $A$ and $-E$ and $-F$ are the strictly lower and strictly upper triangles of $A$, respectively, then the iteration reads in matrix notation
$$(D - \omega E)x^{k+1} = (1-\omega)Dx^k + \omega F x^k + \omega b,$$
i.e.
$$x^{k+1} := (D - \omega E)^{-1}\big((1-\omega)D + \omega F\big)x^k + \omega(D - \omega E)^{-1}b. \qquad (*)$$

The iterative method defined by (*) is called the successive overrelaxation method, or SOR method for short. Its iteration matrix is
$$G_\omega := (D - \omega E)^{-1}\big((1-\omega)D + \omega F\big) = (I - \omega L)^{-1}\big((1-\omega)I + \omega R\big),$$
where $L := D^{-1}E$ and $R := D^{-1}F$.

(29)

Convergence of SOR

Theorem 2.4:

Let $A \in \mathbb{C}^{n\times n}$ be an arbitrary matrix with regular diagonal $D$. Then for every $\omega \in \mathbb{R}$
$$\rho(G_\omega) \ge |\omega - 1|.$$

Hence, we can expect convergence of the SOR method only for relaxation factors $\omega \in (0,2)$.

Proof: Let $\lambda_1,\dots,\lambda_n$ be the eigenvalues of the iteration matrix $G_\omega$.

Since $(D - \omega E)^{-1}$ and $(1-\omega)D + \omega F$ are triangular matrices,
$$\prod_{j=1}^n \lambda_j = \det G_\omega = \det\big((D - \omega E)^{-1}\big)\cdot\det\big((1-\omega)D + \omega F\big) = \prod_{j=1}^n \frac{1}{a_{jj}}\cdot(1-\omega)^n\prod_{j=1}^n a_{jj} = (1-\omega)^n,$$
and hence $\rho(G_\omega) = \max_j |\lambda_j| \ge |1-\omega|$. $\square$

(30)

Convergence of SOR ct.

Theorem 2.5

Let $A := D - E - E^H$ be Hermitean and positive definite. Then the SOR method converges if and only if $\omega \in (0,2)$.

Proof: By Theorem 2.4 we only have to prove that the SOR method converges for every $\omega \in (0,2)$.

Let $\lambda$ be an eigenvalue of $G_\omega$ and let $x \neq 0$ be a corresponding eigenvector, i.e.
$$\big((1-\omega)D + \omega E^H\big)x = \lambda(D - \omega E)x. \qquad (*)$$

(31)

Convergence of SOR

Obviously, the following two identities hold:
$$2\big((1-\omega)D + \omega E^H\big) = (2-\omega)D + \omega(-D + 2E^H) = (2-\omega)D - \omega A + \omega(E^H - E),$$
$$2\lambda(D - \omega E) = \lambda\big((2-\omega)D + \omega(D - 2E)\big) = \lambda\big((2-\omega)D + \omega A + \omega(E^H - E)\big).$$

Hence, multiplying equation (*), $\big((1-\omega)D + \omega E^H\big)x = \lambda(D - \omega E)x$, by $2$ and then from the left by $x^H$, we obtain
$$\lambda\big((2-\omega)x^H Dx + \omega x^H Ax + \omega x^H(E^H - E)x\big) = (2-\omega)x^H Dx - \omega x^H Ax + \omega x^H(E^H - E)x,$$

(32)

Convergence of SOR

or, using the abbreviations $d := x^H Dx > 0$, $a := x^H Ax > 0$ and $x^H(E^H - E)x =: \mathrm{i}s$, $s \in \mathbb{R}$:
$$\lambda\big((2-\omega)d + \omega a + \mathrm{i}\omega s\big) = (2-\omega)d - \omega a + \mathrm{i}\omega s.$$
Division by $\omega$ yields
$$\lambda(\mu d + a + \mathrm{i}s) = \mu d - a + \mathrm{i}s, \qquad \text{where } \mu := (2-\omega)/\omega.$$

If $\omega \in (0,2)$ then $\mu > 0$, and $\mu d + \mathrm{i}s$ is contained in the right half of the complex plane. Hence, its distance to $a$ is smaller than its distance to $-a$, and therefore we get
$$|\lambda| = \frac{|\mu d + \mathrm{i}s - a|}{|\mu d + \mathrm{i}s + a|} < 1. \qquad \square$$

(33)

Consistently ordered matrices

For the following class of matrices the optimal relaxation parameter can be determined.

Definition:

Let $A = D(I - L - R)$ be the standard decomposition of $A \in \mathbb{C}^{n\times n}$ (with $D$ the diagonal of $A$, $L := D^{-1}E$, $R := D^{-1}F$).

$A$ is 2-consistently ordered if the eigenvalues of the matrix
$$J(\alpha) := \alpha L + \alpha^{-1}R$$
are independent of $\alpha$ for every $\alpha \neq 0$.

(34)

Consistently ordered matrices ct.

Example: Let
$$A = I - \begin{pmatrix} O & \tilde R \\ \tilde L & O \end{pmatrix}.$$

Then it holds that
$$\alpha L + \alpha^{-1}R = \begin{pmatrix} O & \alpha^{-1}\tilde R \\ \alpha\tilde L & O \end{pmatrix} = \begin{pmatrix} I & O \\ O & \alpha I \end{pmatrix}\begin{pmatrix} O & \tilde R \\ \tilde L & O \end{pmatrix}\begin{pmatrix} I & O \\ O & \alpha^{-1}I \end{pmatrix}.$$

Hence, $\alpha L + \alpha^{-1}R$ and $L + R$ are similar matrices, and $A$ is 2-consistently ordered.

(35)

Consistently ordered matrices ct.

Example

Consider the finite difference approximation of the model problem where the unknowns are ordered in the following way.

The grid points are subdivided into two classes, red and black (like on a checkerboard), such that for every row and for every column in Ω neighboring points belong to different classes, and then they are ordered within each class. Then the system matrix obtains the form considered in the last example, and hence, it is 2-consistently ordered.
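A small sketch of this red–black renumbering for an $m \times m$ grid of interior points (the helper name and the lexicographic starting order are illustrative assumptions):

```python
import numpy as np

def red_black_permutation(m):
    """Indices of the m*m interior grid points (lexicographic numbering),
    reordered so that all 'red' points (i+j even) precede all 'black' points."""
    idx = np.arange(m * m).reshape(m, m)
    red   = [idx[i, j] for i in range(m) for j in range(m) if (i + j) % 2 == 0]
    black = [idx[i, j] for i in range(m) for j in range(m) if (i + j) % 2 == 1]
    return np.array(red + black)

# Permuting rows and columns of the lexicographically ordered 5-point matrix,
# A[P][:, P] with P = red_black_permutation(m), yields the 2-cyclic block form
# of the previous example, since every red point couples only to black points.
```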

(36)

Consistently ordered matrices ct.

If $A$ is 2-consistently ordered, then for $\alpha = 1$ and $\alpha = -1$ we obtain that the matrices $L + R$ and $-L - R$ have identical eigenvalues. Hence, the eigenvalues $\mu_j$ appear in pairs which differ only by their sign, i.e.
$$\det(\lambda I - L - R) = \lambda^m \prod_{j=1}^{r}\big(\lambda^2 - \mu_j^2\big), \qquad n = 2r + m. \qquad (1)$$

The following theorem relates the eigenvalues of the iteration matrices of the Jacobi method and of the SOR method for 2-consistently ordered matrices.

(37)

Theorem of Young

Let the matrix $A$ be 2-consistently ordered with $a_{jj} = 1$ for every $j$, and assume that $\omega \neq 0$. Then the following assertions hold:

(i) If $\lambda \neq 0$ is an eigenvalue of the iteration matrix $G_\omega$ of the SOR method and if $\mu$ satisfies the equation
$$(\lambda + \omega - 1)^2 = \lambda\mu^2\omega^2, \qquad (2)$$
then $\mu$ is an eigenvalue of $L + R$.

(ii) If $\mu$ is an eigenvalue of $L + R$ and if $\lambda$ satisfies equation (2), then $\lambda$ is an eigenvalue of $G_\omega$.

(38)

Proof of Young’s Theorem

We first prove the identity
$$\det(\lambda I - sL - rR) = \det\big(\lambda I - \sqrt{rs}\,(L + R)\big). \qquad (3)$$

Both sides of (3) are polynomials of degree $n$ of the form $\lambda^n + \dots$. From
$$sL + rR = \sqrt{rs}\left(\sqrt{\tfrac{s}{r}}\,L + \sqrt{\tfrac{r}{s}}\,R\right), \qquad rs \neq 0,$$
it follows that $sL + rR$ and $\sqrt{rs}\,(L + R)$ have the same eigenvalues, and this is true for $rs = 0$ as well.

Therefore, both polynomials have the same roots, and hence, they are identical.

(39)

Proof of Young’s Theorem ct.

Because $\det(I - \omega L) \neq 0$, the eigenvalues of $G_\omega$ are the roots of
$$\det(I - \omega L)\cdot\det(\lambda I - G_\omega) = \det\big(\lambda(I - \omega L) - (1-\omega)I - \omega R\big) = \det\big((\lambda + \omega - 1)I - \omega\lambda L - \omega R\big) =: \phi(\lambda).$$

From (3) it follows that
$$\phi(\lambda) = \det\big((\lambda + \omega - 1)I - \omega\sqrt{\lambda}\,(L + R)\big),$$

and from (1) one obtains
$$\phi(\lambda) = (\lambda + \omega - 1)^m \prod_{j=1}^{r}\big((\lambda + \omega - 1)^2 - \omega^2\lambda\mu_j^2\big). \qquad (4)$$

(40)

Proof of Young’s Theorem ct.

If $\mu$ is an eigenvalue of $L + R$ and if $\lambda$ satisfies equation (2), then $\phi(\lambda) = 0$, and $\lambda$ is an eigenvalue of $G_\omega$. This proves statement (ii).

If conversely $\lambda \neq 0$ is an eigenvalue of $G_\omega$, then one of the factors in (4) vanishes. We assume that $\mu$ satisfies (2).

If $\mu \neq 0$ then $\lambda + \omega - 1 \neq 0$. From (4) one obtains $(\lambda + \omega - 1)^2 = \lambda\omega^2\mu_j^2$ for some eigenvalue $\mu_j$ of $L + R$, and (2) yields $(\lambda + \omega - 1)^2 = \lambda\omega^2\mu^2$.

Thus, $\mu = \pm\mu_j$, and because the eigenvalues of $L + R$ appear in pairs of opposite sign, $\mu$ is an eigenvalue of $L + R$.

If $\mu = 0$, then it follows from (2) that $\lambda + \omega - 1 = 0$, and from (4), since $\lambda \neq 0$, that either $m \ge 1$ or $\mu_j = 0$ for some $j$; in both cases $\mu = 0$ is an eigenvalue of $L + R$. $\square$

(41)

Corollary

As a direct consequence of Young’s Theorem we obtain for the case ω = 1:

Corollary

If A is 2-consistently ordered then

$$\rho(G_{GS}) = \rho(G_J)^2,$$

where GGS and GJ are the iteration matrices of the Gauss-Seidel method and the Jacobi method, respectively.

Hence, the Gauss–Seidel method needs only half as many iteration steps as the Jacobi method to reach the same accuracy.
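The corollary is easy to check numerically, e.g. for a small tridiagonal matrix (tridiagonal matrices with nonzero diagonal are a classical example of 2-consistently ordered matrices); the sketch below is illustrative only.

```python
import numpy as np

n = 20
A = 2 * np.eye(n) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)  # 1D Laplacian
D = np.diag(np.diag(A))
G_J  = np.linalg.solve(D, D - A)                    # Jacobi iteration matrix
G_GS = np.linalg.solve(np.tril(A), -np.triu(A, 1))  # Gauss-Seidel: (D - E)^{-1} F
rho_J  = np.max(np.abs(np.linalg.eigvals(G_J)))
rho_GS = np.max(np.abs(np.linalg.eigvals(G_GS)))
print(rho_GS, rho_J**2)                             # the two values should agree
```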

(42)

Optimum parameter

Theorem 2.6: Let $A = I - L - R$ be 2-consistently ordered, and suppose that the matrix $L + R$ has only real eigenvalues and that $\hat\rho := \rho(L + R) < 1$. Then for $\omega \in (0,2)$ the spectral radius of the iteration matrix $G_\omega$ attains its minimum value for
$$\omega^* = \frac{2}{1 + \sqrt{1 - \hat\rho^2}},$$
and the minimum spectral radius is given by
$$\rho^* = \omega^* - 1 = \left(\frac{\hat\rho}{1 + \sqrt{1 - \hat\rho^2}}\right)^2.$$

Remark: The spectral radius of $G_\omega$ is given by
$$\rho(G_\omega) = \begin{cases} \omega - 1 & \text{for } \omega \ge \omega^*,\\[1mm] 1 - \omega + \tfrac12\omega^2\hat\rho^2 + \tfrac12\omega\hat\rho\sqrt{\omega^2\hat\rho^2 + 4(1-\omega)} & \text{for } \omega \le \omega^*. \end{cases}$$
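A small sketch evaluating these formulas for the model problem, assuming $\hat\rho = \cos(\pi h)$ as stated earlier for the Jacobi iteration matrix; the printed numbers are approximate.

```python
import numpy as np

def sor_optimal_omega(rho_jacobi):
    """Optimal SOR parameter and resulting spectral radius (formulas of Theorem 2.6)."""
    omega_star = 2.0 / (1.0 + np.sqrt(1.0 - rho_jacobi**2))
    return omega_star, omega_star - 1.0

# model problem with h = 1/128: rho_hat = cos(pi*h)
omega_star, rho_star = sor_optimal_omega(np.cos(np.pi / 128))
print(omega_star, rho_star)      # approximately 1.952 and 0.952
```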

(43)

Proof

From Young's theorem we know that all non-vanishing eigenvalues $\lambda$ of $G_\omega$ satisfy
$$(\lambda + \omega - 1)^2 = \lambda\omega^2\mu^2, \qquad (5)$$
where the eigenvalues $\mu$ appear in pairs of opposite sign and are real by our assumptions. Hence, we have to study (5) only for non-negative values of $\mu$. For fixed $\mu \ge 0$ we first determine $\omega$ such that the maximum of the moduli of the two roots of (5) is minimal.

If equation (5) has no real root, then the roots are complex conjugate, and their absolute value is $|\lambda| = |\omega - 1|$.

(44)

Proof ct.

If equation (5) has a double real root, then we obtain from
$$\lambda^2 + \lambda(2\omega - 2 - \omega^2\mu^2) + (\omega - 1)^2 = 0$$
the condition
$$\tfrac14\big(2\omega - 2 - \omega^2\mu^2\big)^2 = (\omega - 1)^2, \qquad \text{i.e.} \qquad \mu^2\omega^2\big(4(\omega - 1) - \mu^2\omega^2\big) = 0.$$

Hence, one obtains $\mu = 0$ or
$$\omega = \tilde\omega = \frac{2}{1 \pm \sqrt{1 - \mu^2}}, \qquad (6)$$

and the corresponding double root is, as in the case of non-real roots, $\lambda = |\tilde\omega - 1| = \tilde\omega - 1$.

(45)

Proof ct.

Finally, if equation (5) has two distinct real roots, then the straight line
$$g_\omega(\lambda) := \frac{1}{\omega}(\lambda + \omega - 1)$$
and the parabola $p(\lambda) = \pm\sqrt{\lambda}\,\mu$ have exactly two points of intersection.

The bigger of these intersection points is minimal if they coalesce, i.e. if the case of a double root occurs.

Moreover, in (6) the $+$-sign has to be chosen, because in this case the double root is situated in the interval $(0,1)$.

(46)

Proof ct.

If $\omega > 0$ is small, then (5) has two real solutions.

For increasing $\omega$ the maximal root of (5) decreases until $\omega = \tilde\omega$, where (5) has the double root $\lambda = \tilde\omega - 1$.

If $\omega$ is increased further, the roots become complex and $|\lambda| = |\omega - 1|$ starts growing. Hence, for fixed $\mu$ the parameter
$$\tilde\omega = \frac{2}{1 + \sqrt{1 - \mu^2}}$$
minimizes the maximum modulus of the roots of (5).

(47)

Proof ct.

We now demonstrate that for $\omega = \omega^*$ the spectral radius of $G_\omega$ satisfies $\rho^* = \omega^* - 1$.

$\mu := \hat\rho$ is an eigenvalue of $L + R$. Hence, $\lambda = \omega^* - 1 > 0$ is an eigenvalue of $G_{\omega^*}$.

If $\mu \in [0, \hat\rho)$ is an eigenvalue of $L + R$, then the parabola $P: \lambda \mapsto \pm\sqrt{\lambda}\,\mu$ is contained in the parabola corresponding to $\hat\rho$.

Hence, the straight line $\lambda \mapsto g_{\omega^*}(\lambda)$ does not meet the parabola $P$, and the eigenvalues of $G_{\omega^*}$ corresponding to $\mu$ are non-real with absolute value $\omega^* - 1$.

(48)

Proof ct.

Therefore, all eigenvalues of $G_{\omega^*}$ have the same absolute value $\rho^*$.

It remains to show that $\rho(G_\omega) > \rho^*$ for $\omega \neq \omega^*$.

If $\lambda$ is the root of (5) corresponding to $\mu = \hat\rho$ of maximum modulus, then $|\lambda| \ge |\omega - 1| > \omega^* - 1$, and one obtains $\rho(G_\omega) > \rho^*$.

(49)

Consistently ordered matrices

Graph of $\omega \mapsto \rho(G_\omega)$. [Figure: $\omega$ from 0 to 2 on the horizontal axis, $\rho(G_\omega)$ between 0.95 and 1 on the vertical axis.]

(50)

Consistently ordered matrices

Close-up of the graph of $\omega \mapsto \rho(G_\omega)$ near the optimal parameter. [Figure: $\rho(G_\omega)$ between 0.955 and 1.]

(51)

Consistently ordered matrices

Convergence history for model problem

[Figure: SOR for $-\Delta u = 0$, $h = 1/128$; error norm ($10^{-6}$ to $10^{0}$) versus iteration number (100 to 1000) for the Jacobi method, the Gauss–Seidel method, and SOR with lexicographic and red–black ordering.]

For the model problem with red-black ordering and stepsize $h = 1/128$ one obtains the optimal relaxation parameter $\omega^* = 2/(1 + \sin\pi h) \approx 1.95$ from Theorem 2.6.

(52)

Symmetric splitting methods

In the next chapter when analyzing the acceleration by polynomial methods we will assume that the iteration matrix G has real eigenvalues.

For a symmetric system matrix $A$ this can be accomplished by the requirement that the iteration matrix $G$ has the form $G = B^{-1}(B - A)$, where $B$ is symmetric and positive definite, because in this case $G$ is similar to the symmetric matrix
$$B^{1/2}GB^{-1/2} = B^{-1/2}(B - A)B^{-1/2}.$$

Moreover, in Chapter 4 we will need iteration matrices of this type with good convergence properties for the preconditioning of the conjugate gradient method.

(53)

Symmetric splitting methods ct.

Definition

The iteration method $x^{k+1} := Gx^k + \tilde b$ is called symmetric if the iteration matrix can be written as
$$G = B^{-1}(B - A),$$
where the splitting matrix $B$ is symmetric and positive definite whenever $A$ is symmetric and positive definite.

Notice that in general the iteration matrix of a symmetric method is not symmetric.

(54)

Symmetric splitting methods ct.

Obviously, the Gauß-Seidel method and the successive overrelaxation are not symmetric (the eigenvalues of the SOR iteration matrix are not even real). It is possible, however, to symmetrize the Gauß-Seidel and the SOR method by coupling it with the corresponding backward scheme.

This backward iteration is obtained by updating the unknowns in reverse order. Using the partition $A := D - E - F = D(I - L - R)$, the backward Gauß-Seidel method is given by the iteration matrix
$$G_{GS}^b := (D - F)^{-1}E = (I - R)^{-1}L,$$
and the backward SOR method by
$$G_\omega^b := (D - \omega F)^{-1}\big((1-\omega)D + \omega E\big) = (I - \omega R)^{-1}\big((1-\omega)I + \omega L\big).$$

(55)

Backward Gauß-Seidel method

Given $x^k$, the determination of $x^{k+1}$ by the backward SOR sweep (backward Gauß–Seidel for $\omega = 1$) reads:

for i = n : -1 : 1 do
    z = b(i)
    for j = 1 : n do
        z = z - a(i,j) * x(j)
    end for
    x(i) = x(i) + ω * z / a(i,i)
end for

(56)

Symmetric splitting methods ct.

If we combine the forward and the backward scheme we obtain the symmetric Gauß-Seidel method, which is defined by the iteration matrix
$$G_{GS}^s = (D - F)^{-1}E\,(D - E)^{-1}F,$$
and the symmetric SOR method or SSOR method, given by
$$G_\omega^s = (D - \omega F)^{-1}\big((1-\omega)D + \omega E\big)(D - \omega E)^{-1}\big((1-\omega)D + \omega F\big).$$

It can be shown easily that the splitting matrices of the symmetric Gauß-Seidel method and the symmetric SOR method are given by
$$B_{GS}^s = (D - E)D^{-1}(D - F) \qquad \text{and} \qquad B_\omega^s = \frac{1}{\omega(2-\omega)}(D - \omega E)D^{-1}(D - \omega F),$$
respectively.

(57)

Symmetric splitting methods

If $A$ is Hermitean and positive definite, then $F = E^H$ and the diagonal part $D$ is positive definite, and the symmetric Gauß-Seidel method and the SSOR method are indeed symmetric iteration schemes. Their convergence properties with respect to the energy norm are given in the following theorem.

Theorem 2.7

Let $A = D - E - E^H$ be Hermitean and positive definite. Then for every $\omega \in (0,2)$ the SSOR method is convergent, and
$$\rho(G_\omega^s) = \|G_\omega^s\|_A = \|G_\omega\|_A^2.$$
Moreover, the spectrum of $G_\omega^s$ is real and non-negative.

(58)

Proof of Theorem 2.7

The matrix $G_\omega^s$ is similar to
$$A^{1/2}G_\omega^s A^{-1/2} = \big(A^{1/2}G_\omega^b A^{-1/2}\big)\big(A^{1/2}G_\omega A^{-1/2}\big).$$

From $G_\omega = I - \omega(D - \omega E)^{-1}A$ and $G_\omega^b = I - \omega(D - \omega E^H)^{-1}A$ one obtains for the first factor
$$A^{1/2}G_\omega^b A^{-1/2} = I - \omega A^{1/2}(D - \omega E^H)^{-1}A^{1/2} = \big(I - \omega A^{1/2}(D - \omega E)^{-1}A^{1/2}\big)^H = \big(A^{1/2}G_\omega A^{-1/2}\big)^H.$$

Hence, $A^{1/2}G_\omega^s A^{-1/2}$ is Hermitean and positive semidefinite, and therefore $\operatorname{spec}(G_\omega^s) \subset [0,\infty)$.

(59)

Proof of Theorem 2.7 ct.

$\rho(G_\omega^s) = \|G_\omega^s\|_A = \|G_\omega\|_A^2$ follows from
$$\rho(G_\omega^s) = \rho\big(A^{1/2}G_\omega^s A^{-1/2}\big) = \|A^{1/2}G_\omega^s A^{-1/2}\|_2 = \|G_\omega^s\|_A$$
and
$$\|A^{1/2}G_\omega^s A^{-1/2}\|_2 = \big\|\big(A^{1/2}G_\omega A^{-1/2}\big)^H\big(A^{1/2}G_\omega A^{-1/2}\big)\big\|_2 = \|A^{1/2}G_\omega A^{-1/2}\|_2^2 = \|G_\omega\|_A^2. \qquad \square$$

(60)

SSOR

At first glance, two properties of a symmetrized method seem evident. First, it should require twice as much work per step as the underlying simple method; second, it should converge faster, because it is the composition of the base method and a second convergent method.

(61)

SSOR ct.

Due to an observation of Niethammer (1964), which was rediscovered by Conrad & Wallach (1977), a symmetrized method can be performed with nearly the same effort as the base method. We demonstrate this for the SSOR method.

The first half-step (forward sweep) can be written as
$$x^{k+1/2} := x^k + \omega D^{-1}\big(b - Dx^k + Fx^k + Ex^{k+1/2}\big), \qquad (*)$$
and the second half-step (backward sweep) is given by
$$x^{k+1} := x^{k+1/2} + \omega D^{-1}\big(b - Dx^{k+1/2} + Ex^{k+1/2} + Fx^{k+1}\big). \qquad (**)$$

The product $Ex^{k+1/2}$ is evaluated during the first half-step and can be reused in (**). Moreover, $Fx^{k+1}$, which is computed in (**), can be used in the first half-step (*) of the following iteration step. Hence, in every SSOR step we have to perform only one matrix-vector multiplication.
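A dense NumPy sketch of this bookkeeping; the function name `ssor_niethammer` and the explicit caching of the products $Ex$ and $Fx$ are illustrative assumptions. Each half sweep recomputes one of the two cached products while updating $x$, so a full SSOR step costs roughly one matrix-vector multiplication.

```python
import numpy as np

def ssor_niethammer(A, b, x0, omega, sweeps):
    """SSOR with reused triangular products (sketch, assuming A = D - E - F with
    nonzero diagonal).  Ex and Fx hold the strictly lower/upper parts of -A
    applied to the relevant iterate."""
    n = len(b)
    x = x0.astype(float).copy()
    # initial F x^0: the only extra triangular matrix-vector product
    Fx = np.array([-A[i, i+1:] @ x[i+1:] for i in range(n)])
    for _ in range(sweeps):
        Ex = np.zeros(n)
        # forward half sweep: reuses cached Fx, builds Ex of the new iterate
        for i in range(n):
            Ex[i] = -A[i, :i] @ x[:i]            # already-updated components
            x[i] += omega * (b[i] - A[i, i]*x[i] + Fx[i] + Ex[i]) / A[i, i]
        Fx = np.zeros(n)
        # backward half sweep: reuses Ex from the forward sweep, builds new Fx
        for i in range(n-1, -1, -1):
            Fx[i] = -A[i, i+1:] @ x[i+1:]        # already-updated components
            x[i] += omega * (b[i] - A[i, i]*x[i] + Ex[i] + Fx[i]) / A[i, i]
    return x
```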

(62)

SSOR ct.

From Theorem 2.7 one obtains that with respect to the energy norm the SSOR method indeed converges twice as fast as the SOR method. The following example shows that with respect to the spectral radius, the norm independent measure of convergence, in general this is not true.

Example

We consider again the difference approximation of the model problem with lexicographical ordering of the unknowns and $n = 128$. The following table contains the spectral radii of the iteration matrices of the SSOR method for different values of the relaxation parameter. Obviously, the optimal parameter is $\omega^* \approx 1.959$, and the corresponding spectral radius is $\rho(G_{\omega^*}^s) \approx 0.9682$.

(63)

SSOR ct.

 ω     ρ(Gωs)      ω      ρ(Gωs)      ω       ρ(Gωs)
 1.0   0.99878                        1.955   0.96839
 1.1   0.99852     1.91   0.97777     1.956   0.96831
 1.2   0.99819     1.92   0.97574     1.957   0.96824
 1.3   0.99777     1.93   0.97350     1.958   0.96820
 1.4   0.99720     1.94   0.97116     1.959   0.96819
 1.5   0.99640     1.95   0.96908     1.960   0.96820
 1.6   0.99522     1.96   0.96820     1.961   0.96825
 1.7   0.99326     1.97   0.97089     1.962   0.96833
 1.8   0.98947     1.98   0.97966     1.963   0.96846
 1.9   0.97958     1.99   0.98973     1.964   0.96863

For the (forward) SOR method with lexicographic ordering the optimal parameter is $\omega^* \approx 1.95$ with spectral radius $\rho^* = \omega^* - 1 \approx 0.95$ (Theorem 2.6).

SSOR ct.

Graph of the mapping $\omega \mapsto \rho(G_\omega^s)$. [Figure: $\rho(G_\omega^s)$ between 0.97 and 1.]

(65)

SSOR ct.

Close-up of the graph of the mapping $\omega \mapsto \rho(G_\omega^s)$ near the optimal parameter. [Figure: $\omega$ from 1.9 to 1.99, $\rho(G_\omega^s)$ between 0.965 and 1.]

Close to the optimal parameter the graph is considerably flatter than that of the SOR method. Hence, the choice of the parameter is not as critical as in the SOR case.

(66)

2-cyclic matrix

Even more surprising is the analysis of the SSOR method for the following class of matrices:

Definition

The matrix $A \in \mathbb{C}^{n\times n}$ is 2-cyclic if it has the block form
$$A = \begin{pmatrix} D_1 & \tilde F \\ \tilde E & D_2 \end{pmatrix},$$
where $D_1$ and $D_2$ are diagonal matrices.

(67)

2-cyclic matrix ct.

If $A$ is a regular 2-cyclic matrix, then the iteration matrix of the Gauß-Seidel method is given by
$$G_{GS} = (D - E)^{-1}F = \begin{pmatrix} D_1 & O \\ \tilde E & D_2 \end{pmatrix}^{-1}\begin{pmatrix} O & -\tilde F \\ O & O \end{pmatrix} = \begin{pmatrix} D_1^{-1} & O \\ -D_2^{-1}\tilde E D_1^{-1} & D_2^{-1} \end{pmatrix}\begin{pmatrix} O & -\tilde F \\ O & O \end{pmatrix} = \begin{pmatrix} O & -D_1^{-1}\tilde F \\ O & D_2^{-1}\tilde E D_1^{-1}\tilde F \end{pmatrix}. \qquad (+)$$

Correspondingly, we obtain the iteration matrix of the backward Gauß-Seidel method
$$G_{GS}^b = \begin{pmatrix} D_1^{-1}\tilde F D_2^{-1}\tilde E & O \\ -D_2^{-1}\tilde E & O \end{pmatrix},$$

(68)

2-cyclic matrix ct.

and therefore the iteration matrix of the symmetric Gauß-Seidel method is given by
$$G_{GS}^s = G_{GS}^b\,G_{GS} = \begin{pmatrix} O & -D_1^{-1}\tilde F D_2^{-1}\tilde E D_1^{-1}\tilde F \\ O & D_2^{-1}\tilde E D_1^{-1}\tilde F \end{pmatrix}. \qquad (++)$$

From (+) and (++) it follows that the Gauß-Seidel method and its symmetric version have the same speed of convergence because the spectral radii of their iteration matrices coincide.

(69)

2-cyclic matrix ct.

We now consider the SSOR method in the 2-cyclic case. The iteration matrices of the two half-steps of the SOR method can be written as
$$G_\omega^{(1)} = \begin{pmatrix} (1-\omega)I & \omega D_1^{-1}\tilde F \\ O & I \end{pmatrix}, \qquad G_\omega^{(2)} = \begin{pmatrix} I & O \\ \omega D_2^{-1}\tilde E & (1-\omega)I \end{pmatrix}.$$

Hence, the iteration matrix of the SSOR method is given by $G_\omega^s = G_\omega^{(1)}G_\omega^{(2)}G_\omega^{(2)}G_\omega^{(1)}$,

which is similar to $G_\omega^{(2)}G_\omega^{(2)}G_\omega^{(1)}G_\omega^{(1)}$.

(70)

2-cyclic matrix ct.

Moreover, it holds that
$$G_\omega^{(1)}G_\omega^{(1)} = \begin{pmatrix} (1-\omega)^2 I & (2-\omega)\omega D_1^{-1}\tilde F \\ O & I \end{pmatrix} = G_{\omega(2-\omega)}^{(1)}$$

and $G_\omega^{(2)}G_\omega^{(2)} = G_{\omega(2-\omega)}^{(2)}$, and therefore
$$\rho(G_\omega^s) = \rho\big(G_{\omega(2-\omega)}\big).$$

Obviously, $\omega(2-\omega) < 1$ for every $\omega \in (0,2)$, $\omega \neq 1$, and the spectral radius of the iteration matrix of the SOR method is strictly decreasing in the parameter interval $(0,1]$.

Hence, the optimum SSOR method in the 2-cyclic case is the symmetric Gauß-Seidel method, and the speed of convergence is identical to that of the simple Gauß-Seidel method.
