ITERATIVE PROJECTION METHODS FOR
SPARSE LINEAR SYSTEMS AND
EIGENPROBLEMS
CHAPTER 5 : INDEFINITE PROBLEMS
Heinrich Voss voss@tu-harburg.de
Hamburg University of Technology, Institute of Numerical Simulation
Indefinite problems
Consider
Ax = b (1)
where A is symmetric, but not necessarily positive definite. Although problem (1) is then not equivalent to the minimization problem

φ(x) := (1/2) x^T A x − b^T x = min!,

one can still apply the CG method formally.
Conjugate gradient method
1: r_0 := b − A x_0; α_0 := (r_0)^T r_0; d_1 := r_0
2: for k = 1, 2, . . . until d_k = 0 do
3:   s_k = A d_k
4:   τ_k = α_{k−1} / ((d_k)^T s_k)
5:   x_k = x_{k−1} + τ_k d_k
6:   r_k = r_{k−1} − τ_k s_k
7:   β_k = 1/α_{k−1}
8:   α_k = (r_k)^T r_k
9:   β_k = β_k · α_k
10:  d_{k+1} = r_k + β_k d_k
11: end for
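To make the loop concrete, here is a minimal NumPy transcription (ours, not from the slides; A can be any object supporting the @ product, e.g. a scipy.sparse matrix):

import numpy as np

def cg(A, b, x0, tol=1e-10, maxiter=None):
    # Plain CG as in the pseudocode above. For indefinite A the
    # denominator (d_k)^T A d_k may vanish -- see the next slide.
    x = x0.copy()
    r = b - A @ x
    alpha = r @ r                  # alpha_{k-1} = (r_{k-1})^T r_{k-1}
    d = r.copy()
    maxiter = b.shape[0] if maxiter is None else maxiter
    for _ in range(maxiter):
        s = A @ d
        dAd = d @ s
        if dAd == 0.0:             # breakdown for indefinite A
            break
        tau = alpha / dAd
        x += tau * d
        r -= tau * s
        alpha_new = r @ r
        d = r + (alpha_new / alpha) * d
        alpha = alpha_new
        if alpha**0.5 < tol:
            break
    return x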
CG method ct.

If A is symmetric, but not positive definite, the CG method can break down with (d_k)^T s_k = (d_k)^T A d_k = 0 for some k < n.

As in the positive definite case, the search directions satisfy (d_j)^T A d_i = 0 for i ≠ j, and hence the d_j are linearly independent.

Thus, as long as it does not break down, the CG method determines a basis of the Krylov space K_k(r_0, A), and the iterates x_k ∈ x_0 + K_k(r_0, A) satisfy the Galerkin condition

(d_j)^T (A x_k − b) = 0,  j = 1, . . . , k.

This method can become unstable if the matrix A_k := ((d_i)^T A d_j)_{i,j=1,...,k} is nearly singular, i.e. if some (d_k)^T A d_k is close to zero.
CG method ct.
For k ≤ n let

θ_1^{(k)} ≤ θ_2^{(k)} ≤ · · · ≤ θ_k^{(k)}

be the eigenvalues of the matrix A_k, the so called Ritz values of A with respect to K_k(r_0, A).
Since

K_j(r_0, A) ⊂ K_{j+1}(r_0, A),  j = 1, . . . , n − 1,

the minmax characterization of the eigenvalues of a symmetric matrix yields that, for fixed j, the sequence of Ritz values θ_j^{(k)} decreases monotonically in k and is bounded below by the j-th smallest eigenvalue of the system matrix A.
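For reference, the minmax characterization invoked here reads as follows (in LaTeX notation; λ_1 ≤ · · · ≤ λ_n are the eigenvalues of A; stated here for completeness, it is not spelled out on the slides):

\lambda_j = \min_{\dim S = j} \; \max_{x \in S,\ x \neq 0} \frac{x^T A x}{x^T x},
\qquad
\theta_j^{(k)} = \min_{S \subset \mathcal{K}_k(r_0,A),\ \dim S = j} \; \max_{x \in S,\ x \neq 0} \frac{x^T A x}{x^T x}.

Since K_k(r_0, A) ⊂ K_{k+1}(r_0, A), the minimum defining θ_j^{(k)} runs over a smaller family of subspaces than the one defining θ_j^{(k+1)}; hence θ_j^{(k)} ≥ θ_j^{(k+1)} ≥ λ_j.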
CG method ct.
If the matrix A has exactly one negative eigenvalue (and if (r_0)^T A r_0 > 0), then there exists at most (in general exactly) one critical phase in the CG method, namely when the smallest Ritz value θ_1^{(k)} changes its sign.

Every other sequence {θ_j^{(k)}} is bounded below by λ_j > 0.

Correspondingly, there exist at most (in general exactly) m critical phases in the CG method if the matrix A has m negative and p = n − m ≥ m positive eigenvalues.
CG method ct.
Consider

−∆u − 163.84 u = 0 in Ω = (0, 1) × (0, 1),  u = 0 on ∂Ω.

Discretizing ∆ with central differences with stepsize h = 1/128 yields a linear system

3.99 U_{ij} − U_{i−1,j} − U_{i,j−1} − U_{i,j+1} − U_{i+1,j} = 0,  i, j = 1, . . . , 127,

of dimension n = 127^2 = 16129.

The coefficient matrix has 8 negative eigenvalues:

−8.795274784817014e−03,  −6.988549802753376e−03,
−6.988549802753343e−03,  −5.181824820689691e−03,
−3.978550749789028e−03,  −3.978550749789008e−03,
−2.171825767725394e−03,  −2.171825767725365e−03
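The discretization and the eigenvalue check can be reproduced with a few lines of scipy.sparse code (ours, not from the slides; eigsh is ARPACK's symmetric eigensolver):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

m, h = 127, 1.0 / 128                    # interior points per direction
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
I = sp.identity(m)
# five-point Laplacian (times h^2) minus the shift 163.84*h^2 = 0.01,
# so the diagonal entries equal 3.99 as in the stencil above:
A = (sp.kron(I, T) + sp.kron(T, I) - 163.84 * h**2 * sp.identity(m * m)).tocsc()

vals = eigsh(A, k=8, which='SA', return_eigenvectors=False)
print(np.sort(vals))                     # the eight negative eigenvalues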
Example

[Figure: CG for −∆u − 163.84 u = 0, h = 1/128; error norm ‖e‖_A vs. iteration]
Example

[Figure: CG for −∆u − 243.0653001012069 u = 0, h = 1/128; residual norm ‖r‖_2 vs. iteration]

Lanczos method
The Lanczos algorithm determines an orthonormal basis {q_1, . . . , q_k} of the Krylov space

K_k(r_0, A) := span{r_0, A r_0, . . . , A^{k−1} r_0},  k = 1, . . . , n,

such that

T_k := Q_k^T A Q_k,  Q_k := (q_1, . . . , q_k),

is tridiagonal.
Lanczos method ct.
1: q_0 = 0; k = 1
2: β_0 = ‖r_0‖
3: while r_{k−1} ≠ 0 do
4:   q_k = r_{k−1}/β_{k−1}
5:   r_k = A q_k
6:   r_k = r_k − β_{k−1} q_{k−1}
7:   α_k = (q_k)^T r_k
8:   r_k = r_k − α_k q_k
9:   β_k = ‖r_k‖; k = k + 1
10: end while

Then, with T_k = tridiag{β_{j−1}, α_j, β_j}, it holds A Q_k = Q_k T_k + r_k (e_k)^T = Q_k T_k + β_k q_{k+1} (e_k)^T.
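In NumPy the recurrence looks as follows (a sketch, with our names; in finite precision the columns of Q gradually lose orthogonality, a well-known caveat of the plain Lanczos process):

import numpy as np

def lanczos(A, r0, kmax):
    # Returns Q (orthonormal columns) and the tridiagonal data:
    # alpha is the diagonal of T_k, beta[:k-1] its off-diagonal, and
    # A Q_k = Q_k T_k + beta[k-1] q_{k+1} e_k^T.
    n = r0.shape[0]
    Q = np.zeros((n, kmax))
    alpha = np.zeros(kmax)
    beta = np.zeros(kmax)
    q_old = np.zeros(n)
    beta_old = np.linalg.norm(r0)
    r = r0.copy()
    for k in range(kmax):
        if beta_old == 0.0:          # invariant subspace found
            return Q[:, :k], alpha[:k], beta[:k]
        q = r / beta_old
        Q[:, k] = q
        r = A @ q - beta_old * q_old
        alpha[k] = q @ r
        r -= alpha[k] * q
        beta[k] = np.linalg.norm(r)
        q_old, beta_old = q, beta[k]
    return Q, alpha, beta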
Lanczos method ct.
Consider the CG method for the linear system

A x = b,  A symmetric and positive definite.

Let x_0 = 0 (else consider A y = b − A x_0 =: b̃).

The approximation x_k after k steps minimizes

φ(x) := (1/2) x^T A x − b^T x in K_k(b, A).
Lanczos method ct.
Restricting φ to K_k(b, A) yields

φ_k(y) := (1/2) y^T Q_k^T A Q_k y − y^T Q_k^T b,  y ∈ R^k,

and the CG iterate x_k satisfies

T_k y_k = Q_k^T b,  x_k = Q_k y_k.

Disadvantage (at first glance): all q_j, j = 1, . . . , k, are needed to determine x_k.
Lanczos method ct.
T_k is SPD. Hence, it admits an LDL^T factorization

T_k = L_k D_k L_k^T,  D_k = diag{d_1, . . . , d_k},  d_j > 0,

where L_k is unit lower bidiagonal with subdiagonal entries µ_2, . . . , µ_k:

L_k =
[ 1
  µ_2  1
       µ_3  1
            ⋱   ⋱
                µ_k  1 ]

Comparing the elements in T_k = L_k D_k L_k^T yields

d_1 = α_1;
for j = 2 : k
  µ_j = β_{j−1}/d_{j−1};  d_j = α_j − β_{j−1} µ_j;
end
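The same comparison of elements in code (ours), for a symmetric tridiagonal T given by its diagonal alpha and off-diagonal beta, e.g. as returned by the Lanczos sketch above:

import numpy as np

def ldlt_tridiag(alpha, beta):
    # LDL^T of T = tridiag{beta, alpha, beta}: returns the diagonal d
    # of D and the subdiagonal entries mu of the unit lower bidiagonal L.
    # Valid without pivoting only if all d_j != 0 (e.g. T SPD).
    k = len(alpha)
    d = np.zeros(k)
    mu = np.zeros(k)            # mu[0] unused; mu[j] sits in row j+1 (1-based)
    d[0] = alpha[0]
    for j in range(1, k):
        mu[j] = beta[j - 1] / d[j - 1]
        d[j] = alpha[j] - beta[j - 1] * mu[j]
    return d, mu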
Lanczos method ct.
With z_k := L_k^T y_k and C_k := Q_k L_k^{−T} it holds

L_k D_k z_k = Q_k^T b,  x_k = C_k z_k,

and

C_k L_k^T = (c_1, µ_2 c_1 + c_2, . . . , µ_k c_{k−1} + c_k) = (q_1, . . . , q_k) = Q_k.

Since L_k D_k is a lower triangular matrix and since Q_k^T b = ‖b‖_2 e_1, one obtains the solution of

L_k D_k z_k = ‖b‖_2 e_1

from the solution z_{k−1} of L_{k−1} D_{k−1} z_{k−1} = ‖b‖_2 e_1 by just appending the last component.
CG with Lanczos process
1: r_0 = b − A x_0; q_0 = 0; c_0 = 0; β_0 = ‖r_0‖_2
2: for k = 1, 2, . . . until convergence do
3:   q_k = r_{k−1}/β_{k−1}
4:   r_k = A q_k − β_{k−1} q_{k−1}
5:   α_k = (q_k)^T r_k
6:   r_k = r_k − α_k q_k
7:   β_k = ‖r_k‖_2
8:   if k == 1 then
9:     d_1 = α_1; ζ_1 = β_0/d_1; c_1 = q_1
10:  else
11:    µ_k = β_{k−1}/d_{k−1}; d_k = α_k − β_{k−1} µ_k; ζ_k = −µ_k d_{k−1} ζ_{k−1}/d_k
12:    c_k = q_k − µ_k c_{k−1}
13:  end if
14:  x_k = x_{k−1} + ζ_k c_k
15: end for
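A line-by-line NumPy transcription of this algorithm (a sketch with our variable names; the stopping test uses ‖r_k‖_2 = β_k |ζ_k|, which follows from the residual formula on the Termination slides further below):

import numpy as np

def cg_lanczos(A, b, x0, tol=1e-10, maxiter=None):
    # CG via the Lanczos process and the progressively updated
    # LDL^T factorization, following the pseudocode above.
    n = b.shape[0]
    maxiter = n if maxiter is None else maxiter
    x = x0.copy()
    r = b - A @ x                  # Lanczos residual vector
    q_old = np.zeros(n)
    beta_prev = np.linalg.norm(r)  # beta_{k-1}, initially beta_0 (> 0 assumed)
    beta0 = beta_prev
    d = zeta = 0.0
    c = np.zeros(n)
    for k in range(1, maxiter + 1):
        q = r / beta_prev
        r = A @ q - beta_prev * q_old
        alpha = q @ r
        r -= alpha * q
        beta = np.linalg.norm(r)
        if k == 1:
            d = alpha
            zeta = beta0 / d
            c = q.copy()
        else:
            mu = beta_prev / d            # mu_k = beta_{k-1}/d_{k-1}
            d_new = alpha - beta_prev * mu
            zeta = -mu * d * zeta / d_new # zeta_k from zeta_{k-1}
            d = d_new
            c = q - mu * c
        x += zeta * c
        if beta * abs(zeta) < tol:        # ||r_k||_2 = beta_k |zeta_k|
            break
        q_old, beta_prev = q, beta
    return x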
Conclusion
The CG approximations x_k can be obtained by the Lanczos method.
Cost of CG with Lanczos:
1 matrix-vector product, 2 scalar products, 5 _axpys per step.
Indefinite problems
Lanczos' method does not need the definiteness of A; hence the projected system

T_k y_k = Q_k^T A Q_k y_k = ‖b‖_2 e_1   (∗)

can be determined in a stable way.

The LDL^T factorization of T_k, however, does not necessarily exist, and it cannot be computed in a stable way.

Paige & Saunders (1975): use instead the LQ factorization

T_k = L̃_k U_k,  U_k orthogonal, L̃_k lower triangular,

which exists for every symmetric T_k.
Indefinite problems ct.
Determine U_k as a product of Givens reflections:

U_k = G_k · [G_{k−1} 0; 0^T 1] · . . . · [G_2 0; 0 I_{k−2}] =: G_k · [U_{k−1} 0; 0^T 1],

where

G_j = [I_{j−2} 0 0; 0^T c_j s_j; 0^T s_j −c_j] ∈ R^{j×j},  c_j^2 + s_j^2 = 1.
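A small helper (ours, with hypothetical names) that chooses c_j and s_j so that the reflection annihilates the second entry of a row pair (a, b); note the symmetric reflection form [[c, s], [s, −c]] used on the slides, rather than the usual rotation [[c, s], [−s, c]]:

import numpy as np

def givens_reflection(a, b):
    # Return (c, s) with c^2 + s^2 = 1 such that
    # [a, b] @ [[c, s], [s, -c]] = [r, 0], r = hypot(a, b).
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r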
Indefinite problems ct.
T_k U_k^T = [T_{k−1}  β_{k−1} e_{k−1}; β_{k−1} (e_{k−1})^T  α_k] · [U_{k−1}^T 0; 0^T 1] · G_k
= [T_{k−1} U_{k−1}^T  β_{k−1} e_{k−1}; β_{k−1} (e_{k−1})^T U_{k−1}^T  α_k] · G_k
= [L̃_{k−1}  β_{k−1} e_{k−1}; β_{k−1} (e_{k−1})^T U_{k−1}^T  α_k] · G_k,

where G_k is chosen such that the entry β_{k−1} at position (k − 1, k) is annihilated. Moreover,

β_{k−1} (e_{k−1})^T U_{k−1}^T = β_{k−1} (e_{k−1})^T [U_{k−2}^T 0; 0^T 1] G_{k−1} = β_{k−1} (e_{k−1})^T G_{k−1} = (0, . . . , 0, β_{k−1} s_{k−1}, −β_{k−1} c_{k−1}).
Indefinite problems ct.
Multiplying a matrix by G_k from the right combines only its last two columns.

Hence, in L̃_k = T_k U_k^T only 3 diagonals are different from zero, i.e. L̃_k has the form

L̃_k =
[ γ_1
  δ_1  γ_2
  ε_1  δ_2  γ_3
       ε_2  δ_3  γ_4
            ⋱    ⋱    ⋱
                 ε_{k−2}  δ_{k−1}  γ̃_k ]

(lower triangular with diagonal γ_1, . . . , γ_{k−1}, γ̃_k, subdiagonal δ_1, . . . , δ_{k−1}, and second subdiagonal ε_1, . . . , ε_{k−2}).
Indefinite problems ct.
In the transition from L̃_k to

L̃_{k+1} = T_{k+1} [U_k^T 0; 0^T 1] G_{k+1} = [L̃_k  β_k e_k; β_k (e_k)^T U_k^T  α_k] G_{k+1} = [L_k  0; (0, . . . , 0, ε_{k−1}, δ_k)  γ̃_{k+1}],

only the diagonal element γ̃_k of the leading principal (k, k) submatrix L̃_k is changed (into γ_k); all other elements remain unchanged.
Indefinite problems ct.
With

W̃_k := (w_1, . . . , w_{k−1}, w̃_k) := Q_k U_k^T  and  z̃_k := (ζ_1, . . . , ζ_{k−1}, ζ̃_k)^T := U_k y_k,

the linear system

T_k y_k = ‖b‖_2 e_1,  x_k = Q_k y_k

can be rewritten as

L̃_k z̃_k = ‖b‖_2 e_1,  x_k = W̃_k z̃_k.

As in the computation of the CG approximation with the Lanczos method in the SPD case, this decomposition can be used to update the solution of the projected problem step by step.
Indefinite problems ct.
For k = 2 the solution z̃_2 is given by

ζ_1 = ‖b‖_2/γ_1,  ζ̃_2 = −δ_1 ζ_1/γ̃_2.

If z̃_{k−1} is already known, the leading k − 2 components of z̃_k coincide with those of z̃_{k−1}, and it holds that

ζ_{k−1} = ζ̃_{k−1} γ̃_{k−1}/γ_{k−1} = c_k ζ̃_{k−1},
ζ̃_k = −(ε_{k−2} ζ_{k−2} + δ_{k−1} ζ_{k−1})/γ̃_k.
Indefinite problems ct.
From

W̃_k = (w_1, . . . , w_{k−1}, w̃_k) = (q_1, . . . , q_k) U_k^T = (q_1, . . . , q_k) [U_{k−1}^T 0; 0^T 1] G_k
= ((q_1, . . . , q_{k−1}) U_{k−1}^T, q_k) G_k
= (w_1, . . . , w_{k−2}, w̃_{k−1}, q_k) [I_{k−2} 0 0; 0^T c_k s_k; 0^T s_k −c_k]

we obtain w_{k−1} and w̃_k in the k-th step from

(w_{k−1}, w̃_k) = (w̃_{k−1}, q_k) [c_k s_k; s_k −c_k],  w̃_1 = q_1.
Indefinite problems ct.
Although the LQ factorization is determined in a stable way (even for indefinite matrices), the algorithm should not be implemented directly, since intermediate matrices L̃_k may become singular, or |γ̃_k| can become very small when a Ritz value passes the origin.

Since γ_k = √(β_k^2 + γ̃_k^2) ≠ 0 as long as β_k ≠ 0 (i.e. as long as the Lanczos process has not detected an invariant subspace of A), the matrix L_k is nonsingular. Hence, the linear system

L_k z_k = ‖b‖_2 e_1

has a unique solution z_k for every k, which can be determined in a more stable way.
Indefinite problems ct.
Paige and Saunders propose to update the vectors

x̂_k := W_k z_k = x̂_{k−1} + ζ_k w_k

instead of x_k; the CG approximation

x_{k+1} = x̂_k + ζ̃_{k+1} w̃_{k+1}

can then be recovered after the method has converged. This method is called SYMMLQ.
SYMMLQ
1: c_1 = −1; s_1 = 0; c_0 = 1; s_0 = 0; ζ_0 = −1; ζ_{−1} = 0
2: x̂_0 = x_0; w̃_0 = 0; q_0 = 0; q̃ = b − A x_0; β_0 = ‖q̃‖_2; q_1 = q̃/β_0
3: for k = 1, 2, . . . until convergence do
4:   q̃ = A q_k − β_{k−1} q_{k−1}
5:   α_k = q̃^T q_k
6:   q̃ = q̃ − α_k q_k
7:   β_k = ‖q̃‖_2
8:   γ̃_k = −c_k α_k − c_{k−1} s_k β_{k−1};  γ_k = √(γ̃_k^2 + β_k^2)
9:   δ_{k−1} = s_k α_k − c_k c_{k−1} β_{k−1}
10:  ε_{k−2} = s_{k−1} β_{k−1}
11:  c_{k+1} = γ̃_k/γ_k;  s_{k+1} = β_k/γ_k
12:  w̃_k = s_k w̃_{k−1} − c_k q_k
13:  ζ_k = −(ε_{k−2} ζ_{k−2} + δ_{k−1} ζ_{k−1})/γ_k
14:  q_{k+1} = q̃/β_k
15:  x̂_k = x̂_{k−1} + ζ_k (c_{k+1} w̃_k + s_{k+1} q_{k+1})
16: end for
Cost of SYMMLQ
1 matrix-vector product, 2 scalar products, 7 _axpys per step.
Termination
The iteration is terminated if the norm of the residual is sufficiently small. Although x_k is not determined in every step, the corresponding residual r_k = b − A x_k can be monitored during the iteration:

r_k = b − A x_k = ‖b‖_2 q_1 − A Q_k y_k
    = ‖b‖_2 q_1 − Q_k T_k y_k − β_k q_{k+1} (e_k)^T y_k = −β_k η_k q_{k+1},

where η_k denotes the k-th component of y_k.
Termination
From

T_k = T_k^T = U_k^T L̃_k^T

and (∗) (T_k y_k = Q_k^T A Q_k y_k = ‖b‖_2 e_1) one obtains

L̃_k^T y_k = ‖b‖_2 U_k e_1,

and comparing the last components of this equation we get

γ̃_k η_k = ‖b‖_2 · s_2 s_3 · · · s_k.

Hence,

r_k = −(1/c_{k+1}) s_2 · · · s_{k+1} ‖b‖_2 q_{k+1},  and  ‖r_k‖_2 = ‖b‖_2 |s_2 · · · s_{k+1}| / |c_{k+1}|.
Theorem 5.1
The approximation x̂_k is the unique solution of the minimization problem

‖e‖_2 = min!,  x ∈ A K_k(b, A) = span{A b, A^2 b, . . . , A^k b},

where e := A^{−1} b − x denotes the error of x.

The CG iterates x_k do not minimize an error norm in the indefinite case. However, they are usually better approximations to the solution than the vectors x̂_k.
Example

[Figure: SYMMLQ for −∆u − 163.84 u = 0, h = 1/128; ‖x̂‖_2 error history]

Example

[Figure: SYMMLQ and CG for −∆u − 163.84 u = 0, h = 1/128; red: ‖x_CG‖_2, blue: ‖x̂‖_2]

Example

[Figure: SYMMLQ for −∆u − 327.68 u = 0, h = 1/128]

Example

[Figure: SYMMLQ for −∆u − 243.0653001012069 u = 0, h = 1/128; ‖r‖_2 and ‖r̂‖_2, two views]

Orthogonal direction method
Fridman (1963) suggested the orthogonal direction method, OD method for short, to compute the minimizer x̂_k of the error ‖x − x∗‖_2 upon A K_k(b, A).

The method (which was rediscovered by Fletcher (1976)) constructs a sequence of orthogonal directions in A K_k(b, A) by the Lanczos procedure, and minimizes ‖x − x∗‖_2 successively along these directions.
OD method
1: Choose initial approximation x̂_0
2: r_0 = b − A x̂_0
3: d_0 = r_0; d_1 = A r_0
4: δ_1 = 0; k = 0
5: while ‖r_k‖_2 > tol do
6:   k = k + 1
7:   α_k = (d_k)^T d_k
8:   s_k = A d_k
9:   τ_k = (r_{k−1})^T d_{k−1}/α_k
10:  x̂_k = x̂_{k−1} + τ_k d_k
11:  r_k = r_{k−1} − τ_k s_k
12:  γ_k = (d_k)^T s_k/α_k
13:  if k > 1 then
14:    δ_k = α_k/α_{k−1}
15:  end if
16:  d_{k+1} = s_k − γ_k d_k − δ_k d_{k−1}
17: end while
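A direct NumPy transcription (ours), mainly useful for observing the instability discussed on the next slide:

import numpy as np

def od_method(A, b, x0, tol=1e-8, maxiter=200):
    # Fridman's orthogonal direction method, transcribed from the
    # pseudocode above; known to be numerically unstable.
    x = x0.copy()
    r = b - A @ x
    d_old = r.copy()              # d_0
    d = A @ r                     # d_1
    delta = 0.0
    alpha_old = 0.0
    for k in range(1, maxiter + 1):
        alpha = d @ d
        s = A @ d
        tau = (r @ d_old) / alpha # tau_k = (r_{k-1})^T d_{k-1} / alpha_k
        x += tau * d
        r -= tau * s
        gamma = (d @ s) / alpha
        if k > 1:
            delta = alpha / alpha_old
        d_new = s - gamma * d - delta * d_old
        d_old, d, alpha_old = d, d_new, alpha
        if np.linalg.norm(r) <= tol:
            break
    return x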
OD method
The OD method requires 1 matrix-vector product and 7 level-1 operations per step, and five vectors have to be stored. Hence, it is less expensive than the SYMMLQ method.

However, it turns out to be unstable: the error decreases at the beginning of the iteration, but then it diverges rapidly.

Due to the loss of orthogonality of the search directions d_j, the identity

(r_{k−1})^T A^{−1} d_k = (r_{k−1})^T d_{k−1}

(which is the basis of the step size formula in the OD method and which can be proved easily by induction) loses its validity. Thus, for k sufficiently large, the step sizes, and with them the iterates x̂_k, are spoiled.
Stabilized OD method
Stoer & Freund (1982) published a stable version of the OD method.

The orthogonal directions d_1, . . . , d_k with d_1 = A r_0 span the space A K_k(b, A) if and only if they have the form d_j = A v_j, v_1 = r_0, where the vectors v_j satisfy

span{v_1, . . . , v_k} = K_k(b, A) and (v_i)^T A^2 v_j = 0 for i ≠ j.

A set of vectors v_j with these properties can be obtained by the Lanczos method with respect to the scalar product ⟨x, y⟩_{A^2} := y^T A^2 x.

In terms of these vectors the step size of the OD method is given by

τ_k = (r_{k−1})^T A^{−1} d_k / ‖d_k‖_2^2 = (r_{k−1})^T v_k / ((v_k)^T A^2 v_k),

which can be evaluated without applying A^{−1}.
Stabilized OD method
1: Choose initial approximation x̂_0
2: r_0 = b − A x̂_0
3: v_1 = r_0; v_0 = 0; w_1 = A v_1; w_0 = 0
4: δ_1 = 0; k = 0
5: while ‖r_k‖_2 > tol do
6:   k = k + 1
7:   α_k = (w_k)^T w_k
8:   s_k = A w_k
9:   τ_k = (r_{k−1})^T v_k/α_k
10:  x̂_k = x̂_{k−1} + τ_k w_k
11:  r_k = r_{k−1} − τ_k s_k
12:  γ_k = (w_k)^T s_k/α_k
13:  if k > 1 then
14:    δ_k = α_k/α_{k−1}
15:  end if
16:  v_{k+1} = w_k − γ_k v_k − δ_k v_{k−1}
17:  w_{k+1} = s_k − γ_k w_k − δ_k w_{k−1}
18: end while
Stabilized OD method
Every step of the stabilized OD method requires 1 matrix-vector product and 9 level-1 operations, and 7 vectors have to be stored.

Compared to SYMMLQ the same number of operations is required, but 2 additional vectors have to be stored. Moreover, there is no easy way to obtain the CG approximation x_k, which is usually a better approximation to x∗ than x̂_k, from x̂_k.

The following picture contains the convergence history of the OD and the stabilized OD method for the discrete Helmholtz equation.

[Figure: convergence history of the OD and the stabilized OD method]
MINRES
A further stable method for solving indefinite systems is a modification of the method of conjugate residuals (CR method):

Let A be symmetric and positive definite. Determine x_k ∈ x_0 + K_k(r_0, A) such that the error with respect to the A^2-norm is minimal.

The conjugate residual method was introduced by Stiefel (1955). It was rediscovered by Luenberger (1970), who considered indefinite problems already and discussed a variant that handles exact breakdowns.

As for the CG method, the approximations x_k can be obtained from a sequence of one-dimensional line searches where the search directions satisfy a 3-term recurrence.
CR method
1: r_0 = b − A x_0
2: t_0 = A r_0
3: α_0 = (t_0)^T r_0; d_1 = r_0; s_1 = t_0
4: for k = 1, 2, . . . until convergence do
5:   τ_k = α_{k−1}/((s_k)^T s_k)
6:   x_k = x_{k−1} + τ_k d_k
7:   r_k = r_{k−1} − τ_k s_k
8:   β_k = 1/α_{k−1}
9:   t_k = A r_k
10:  α_k = (t_k)^T r_k
11:  β_k = α_k β_k
12:  d_{k+1} = r_k + β_k d_k
13:  s_{k+1} = t_k + β_k s_k
14: end for
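The same loop in NumPy (a sketch; note that s_k = A d_k is maintained by the recurrence in line 13, so only one matrix-vector product per step is required):

import numpy as np

def cr(A, b, x0, tol=1e-10, maxiter=None):
    # Conjugate residual method as in the pseudocode above;
    # minimizes ||b - A x||_2 over x0 + K_k(r0, A) for SPD A.
    maxiter = b.shape[0] if maxiter is None else maxiter
    x = x0.copy()
    r = b - A @ x
    t = A @ r
    alpha = t @ r                 # alpha_{k-1} = (A r, r)
    d, s = r.copy(), t.copy()
    for _ in range(maxiter):
        tau = alpha / (s @ s)
        x += tau * d
        r -= tau * s
        if np.linalg.norm(r) < tol:
            break
        t = A @ r
        alpha_new = t @ r
        beta = alpha_new / alpha  # beta_k = alpha_k / alpha_{k-1}
        alpha = alpha_new
        d = r + beta * d
        s = t + beta * s
    return x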
Cost of CR
1 matrix-vector product, 2 scalar products, 4 _axpys per step.
Storage requirements: 5 vectors.
For x∗ := A^{−1} b it holds

‖x − x∗‖_{A^2}^2 = ‖A(x − x∗)‖_2^2 = ‖A x − b‖_2^2 = ‖r‖_2^2.

Hence, the CR method minimizes the Euclidean norm of the residual in x_0 + K_k(r_0, A).

Moreover, it can be shown that the residuals r_k := b − A x_k determined by the CR method are A-conjugate (this explains the name of the method).

If A is symmetric but indefinite, the approach of the CR method (to minimize the residual on K_k(r_0, A)) is still reasonable. However, the CR algorithm can break down, since for indefinite A the quantity α_k = (r_k)^T A r_k may vanish.
Example

[Figure: CR and CG for −∆u = 0, h = 1/128; ‖r_CG‖_2 and ‖r_CR‖_2]

Example

[Figure: CR and CG for −∆u = 0, h = 1/256; ‖r_CR‖_2 and ‖r_CG‖_2]

Example

[Figure: CR and CG for −∆u = 0, h = 1/512; ‖r_CG‖_2 and ‖r_CR‖_2]

MINRES — Paige & Saunders (1975)
Use the Lanczos process to tridiagonalize A, and the LQ factorization of T_k to solve the normal equations

Q_k^T A^2 Q_k y_k = Q_k^T A b,  x_k = Q_k y_k,

of the least squares problem

‖A Q_k y − b‖_2^2 = min!
MINRES ct.
Since r_k is orthogonal to q_1, . . . , q_k it holds

Q_k^T A^2 Q_k = (Q_k T_k + r_k (e_k)^T)^T (Q_k T_k + r_k (e_k)^T) = T_k^2 + β_k^2 e_k (e_k)^T,

Q_k^T A b = β_0 Q_k^T A q_1 = β_0 Q_k^T A Q_k e_1 = β_0 Q_k^T (Q_k T_k + r_k (e_k)^T) e_1 = β_0 T_k e_1.

From the LQ factorization T_k = L̃_k U_k, which can be obtained in the same manner as in the SYMMLQ method concurrently with the Lanczos process, one gets the Cholesky factorization

T_k^2 + β_k^2 e_k (e_k)^T = L̃_k L̃_k^T + β_k^2 e_k (e_k)^T = L_k L_k^T

without computing T_k^2.
MINRES ct.
1: c_1 = −1; s_1 = 0; c_0 = 1; s_0 = 0; q_0 = 0; v_0 = 0; v_{−1} = 0
2: q̃ = b − A x_0; β_0 = ‖q̃‖_2; q_1 = q̃/β_0; η_1 = β_0
3: for k = 1, 2, . . . until convergence do
4:   q̃ = A q_k − β_{k−1} q_{k−1}
5:   α_k = q̃^T q_k
6:   q̃ = q̃ − α_k q_k
7:   β_k = ‖q̃‖_2
8:   γ̃_k = −c_k α_k − c_{k−1} s_k β_{k−1};  γ_k = √(γ̃_k^2 + β_k^2)
9:   δ_{k−1} = s_k α_k − c_k c_{k−1} β_{k−1};  ε_{k−2} = s_{k−1} β_{k−1}
10:  c_{k+1} = γ̃_k/γ_k;  s_{k+1} = β_k/γ_k
11:  v_k = (q_k − ε_{k−2} v_{k−2} − δ_{k−1} v_{k−1})/γ_k
12:  x_k = x_{k−1} + c_{k+1} η_k v_k
13:  q_{k+1} = q̃/β_k
14:  η_{k+1} = s_{k+1} η_k
15: end for
Cost of MINRES
1 matrix-vector product, 2 scalar products, 7 _axpys per step.
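MINRES is also available in SciPy, so the Helmholtz example from above can be checked directly (a sketch; the right-hand side b is ours, since the slides' model problem has b = 0):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres

m, h = 127, 1.0 / 128
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
I = sp.identity(m)
A = (sp.kron(I, T) + sp.kron(T, I) - 163.84 * h**2 * sp.identity(m * m)).tocsc()
b = np.ones(m * m) * h**2                # arbitrary nonzero right-hand side

x, info = minres(A, b, maxiter=2000)
print(info, np.linalg.norm(b - A @ x))   # info == 0 signals convergence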
Example

[Figure: ‖r_CR‖_2 and ‖r_MINRES‖_2]

Example

[Figure: ‖r_MINRES‖_2 and ‖r_CR‖_2]

Example

[Figure: MINRES for −∆u − 243.0653001012069 u = 0, h = 1/128; ‖r‖_2]

Error bounds for MINRES
x_k ∈ x_0 + span{r_0, A r_0, . . . , A^{k−1} r_0} is determined such that ‖r_k‖_2 is minimal; equivalently, r_k is minimal in r_0 + span{A r_0, . . . , A^k r_0}, i.e.

r_k = p_k^M(A) r_0 with p_k^M ∈ Π_k, p_k^M(0) = 1,

and

‖r_k‖_2 = min_{p_k ∈ Π_k, p_k(0)=1} ‖p_k(A) r_0‖_2.

For A = U Λ U^T, Λ = diag{λ_1, . . . , λ_n}, it follows that

‖r_k‖_2 ≤ min_{p_k ∈ Π_k, p_k(0)=1} ‖p_k(Λ)‖_2 ‖r_0‖_2,

i.e.

‖r_k‖_2 / ‖r_0‖_2 ≤ min_{p_k ∈ Π_k, p_k(0)=1} max_{j=1,...,n} |p_k(λ_j)|.
Error bounds for MINRES ct.
If A is positive definite (CR method!) we obtain, in the same way as for the CG method,

‖r_k‖_2 / ‖r_0‖_2 ≤ 2 ( (√κ_2(A) − 1) / (√κ_2(A) + 1) )^k.

For indefinite problems typically only very few (ℓ ≪ n) eigenvalues λ_1 ≤ · · · ≤ λ_ℓ < 0 are negative, and most of the eigenvalues are positive.
Error bounds for MINRES ct.
If

p_k(λ) = q_ℓ(λ) · T_{k−ℓ}( (2λ − λ_{ℓ+1} − λ_n) / (λ_n − λ_{ℓ+1}) ) / T_{k−ℓ}( (−λ_{ℓ+1} − λ_n) / (λ_n − λ_{ℓ+1}) ),

q_ℓ(λ) = (−1)^ℓ ∏_{j=1}^{ℓ} (λ − λ_j) / ∏_{j=1}^{ℓ} λ_j,

with Chebyshev polynomials T_{k−ℓ}, one gets

‖r_k‖_2 / ‖r_0‖_2 ≤ q_ℓ(λ_n) · 2 ( (√κ_{n−ℓ−1} − 1) / (√κ_{n−ℓ−1} + 1) )^{k−ℓ},  κ_{n−ℓ−1} := λ_n / λ_{ℓ+1}.
Preconditioning
The last error bound demonstrates that it is reasonable to use positive definite preconditioners even if the problem is indefinite.

To preserve symmetry of the problem we consider

C^{−1} A C^{−T} y = C^{−1} b,  x := C^{−T} y.

From

C^{−1} A C^{−T} z = µ z  ⟺  C^{−1} A y = µ C^T y,  y = C^{−T} z  ⟺  C^{−T} C^{−1} A y = (C C^T)^{−1} A y = µ y

we obtain that the Cholesky factor C of the preconditioner M := C C^T should be chosen such that the spectrum of M^{−1} A is more favorable than that of A (eigenvalues clustered and bounded away from the origin).
MINRES with Preconditioning
1: c_1 = −1; s_1 = 0; c_0 = 1; s_0 = 0; q_0 = 0; v_0 = 0; v_{−1} = 0
2: q = b − A x_0; w = M^{−1} q; β_0 = √(q^T w)
3: q_1 = q/β_0; w_1 = w/β_0; η_1 = β_0
4: for k = 1, 2, . . . until convergence do
5:   q = A w_k − β_{k−1} q_{k−1}
6:   α_k = q^T w_k
7:   q = q − α_k q_k
8:   w_{k+1} = M^{−1} q
9:   β_k = √(q^T w_{k+1})
10:  γ̃_k = −c_k α_k − c_{k−1} s_k β_{k−1};  γ_k = √(γ̃_k^2 + β_k^2)
11:  δ_{k−1} = s_k α_k − c_k c_{k−1} β_{k−1};  ε_{k−2} = s_{k−1} β_{k−1}
12:  c_{k+1} = γ̃_k/γ_k;  s_{k+1} = β_k/γ_k
13:  v_k = (w_k − ε_{k−2} v_{k−2} − δ_{k−1} v_{k−1})/γ_k
14:  x_k = x_{k−1} + c_{k+1} η_k v_k
15:  q_{k+1} = q/β_k
16:  w_{k+1} = w_{k+1}/β_k
17:  η_{k+1} = s_{k+1} η_k
18: end for
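In SciPy this corresponds to passing a symmetric positive definite preconditioner M to minres. As a stand-in for the incomplete Cholesky factorizations (MATLAB's cholinc) used in the experiments below, the sketch uses a simple Jacobi (diagonal) preconditioner:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres, LinearOperator

m, h = 127, 1.0 / 128
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
I = sp.identity(m)
A = (sp.kron(I, T) + sp.kron(T, I) - 1638.4 * h**2 * sp.identity(m * m)).tocsc()
b = np.ones(m * m) * h**2

# Jacobi preconditioner M = diag(A); positive definite here,
# since the diagonal entries equal 4 - 1638.4*h^2 = 3.9 > 0:
dinv = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda r: dinv * r, dtype=float)

x, info = minres(A, b, M=M, maxiter=2000)
print(info, np.linalg.norm(b - A @ x))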
Example
[Figure: MINRES convergence histories:
minres without preconditioning, 4.40E8 flops;
nofill cholinc, 2.04E8+1.44E5 flops;
cholinc(a,0.1), 2.02E8+6.40E6 flops;
cholinc(a,0.01), 1.40E8+1.31E7 flops;
cholinc(a,0.001), 9.94E7+3.58E7 flops]
Example
[Figure: MINRES for −∆u − 1638.4 u = 0, h = 1/128; convergence histories:
minres without preconditioning, 1.23E9 flops;
nofill cholinc, 7.91E8+1.44E5 flops;
cholinc(a,0.1), 8.42E8+6.40E6 flops;
cholinc(a,0.01), 6.89E8+1.31E7 flops;
cholinc(a,0.001), 1.25E9+3.58E7 flops]
Example
[Figure: convergence histories:
without preconditioning, 5.402E8 flops;
nofill cholinc, 2.598E8+1.439E5 flops;
cholinc(A,0.1), 2.633E8+6.401E6 flops;
cholinc(A,0.01), 1.511E8+1.306E7 flops;
cholinc(A,0.001), 1.109E8+3.575E7 flops]
Example
[Figure: SYMMLQ for −∆u − 1638.4 u = 0, h = 1/128; convergence histories:
without preconditioning, 1.381E9 flops;
nofill cholinc, 9.089E8+1.439E5 flops;
cholinc(A,0.1), 9.472E8+6.401E6 flops;
cholinc(A,0.01), 7.060E8+1.306E7 flops]