
III.2 Polynomial Approximations to the Matrix Exponential

III.2.1 Chebyshev Method

A near-optimal polynomial approximation to the exponential is given by its truncated Chebyshev expansion. We describe this approach, which in the context of Schrödinger equations was put forward by Tal-Ezer & Kosloff (1984), and give an error analysis based on Bernstein's theorem on polynomial approximations to analytic functions on an interval.

We refer to Rivlin (1990) for background information on Chebyshev polynomials and to Markushevich (1977), Chap. III.3, for the polynomial approximation theory based on Faber polynomials.

Chebyshev Polynomials. For every non-negative integer k, the function defined by

T_k(x) = cos(kθ)  with  θ = arccos x ∈ [0, π],  for x ∈ [−1, 1],   (2.3)


is in fact a polynomial of degree k, named the kth Chebyshev polynomial. This fact is seen from the recurrence relation

T_{k+1}(x) = 2x T_k(x) − T_{k−1}(x),   k ≥ 1,   (2.4)

starting from T_0(x) = 1 and T_1(x) = x, which is obtained from the trigonometric identity cos((k+1)θ) + cos((k−1)θ) = 2 cos θ cos(kθ). The Chebyshev polynomials are orthogonal polynomials with respect to the weight function (1 − x²)^{−1/2} on [−1, 1]:

∫_{−1}^{1} T_k(x) T_l(x) (1 − x²)^{−1/2} dx = 0   for k ≠ l,   (2.5)

again verified by substituting x = cos θ. Writing w = e^{iθ}, so that x = (w + w^{−1})/2, the definition (2.3) becomes

2 T_k(x) = w^k + w^{−k}.   (2.6)

The Joukowski transform

w = Φ(z) = z + √(z² − 1)   (2.7)

is the conformal map between the exterior of the interval [−1, 1] and the exterior of the unit disk, |w| > 1. (The branch of the square root is chosen such that |Φ(z)| > 1 outside [−1, 1].) By (2.6), the polynomial identity 2 T_k(z) = Φ(z)^k + Φ(z)^{−k} holds for every z outside the interval. For k ≥ 1 and x ∈ [−1, 1], the function Φ(z)^{−k}/(z − x) is holomorphic in the exterior of [−1, 1] and decays like |z|^{−k−1} as z → ∞, so the integral of that function over a closed contour Γ encircling the interval [−1, 1] vanishes by Cauchy's theorem. With Cauchy's integral formula we thus obtain from (2.6)

2 T_k(x) = (1/(2πi)) ∮_Γ Φ(z)^k / (z − x) dz,   k ≥ 1,   (2.8)

which establishes an important relationship between the Chebyshev polynomials and the conformal map: the Chebyshev polynomials are the Faber polynomials for the interval [−1, 1]; cf. Markushevich (1977), Sect. III.3.14.
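The recurrence (2.4) translates directly into a short evaluation routine. The following Python sketch checks it against the trigonometric definition (2.3) on a sample grid; the grid size and degrees are arbitrary illustrative choices.

import numpy as np

def chebyshev_T(k, x):
    # Evaluate T_k(x) by the three-term recurrence (2.4),
    # starting from T_0(x) = 1 and T_1(x) = x.
    T_prev, T_curr = np.ones_like(x), np.asarray(x, dtype=float)
    if k == 0:
        return T_prev
    for _ in range(k - 1):
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
    return T_curr

x = np.linspace(-1.0, 1.0, 1001)
for k in range(6):
    deviation = np.max(np.abs(chebyshev_T(k, x) - np.cos(k * np.arccos(x))))
    print(f"k = {k}: max |T_k(x) - cos(k arccos x)| = {deviation:.2e}")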

Chebyshev and Fourier Series. Given a (smooth) complex-valued function f(x) on the interval −1 ≤ x ≤ 1, we expand the 2π-periodic, symmetric function

g(θ) = f(cos θ)

in a Fourier series,

g(θ) = Σ_{k=−∞}^{∞} c_k e^{ikθ}  with  c_k = (1/(2π)) ∫_{−π}^{π} g(θ) e^{−ikθ} dθ,

or in fact, by the symmetry g(−θ) = g(θ), in a cosine series with c_{−k} = c_k,

g(θ) = c_0 + 2 Σ_{k=1}^{∞} c_k cos(kθ)  with  c_k = (1/π) ∫_0^π g(θ) cos(kθ) dθ.

Substituting x = cos θ, so that dθ corresponds to dx/√(1 − x²), we obtain the Chebyshev expansion

f(x) = c_0 + 2 Σ_{k=1}^{∞} c_k T_k(x)  with  c_k = (1/π) ∫_{−1}^{1} f(x) T_k(x) (1 − x²)^{−1/2} dx.
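The coefficient formula can be approximated numerically. The sketch below uses the midpoint rule in θ (equivalently, Gauss-Chebyshev quadrature); the sample function f(x) = e^{3x} and the node count are illustrative choices only.

import numpy as np

def chebyshev_coefficients(f, num_coeffs, num_nodes=512):
    # c_k = (1/pi) * integral_0^pi f(cos(theta)) cos(k*theta) d(theta),
    # approximated by the midpoint rule in theta.
    theta = (np.arange(num_nodes) + 0.5) * np.pi / num_nodes
    g = f(np.cos(theta))
    return np.array([np.mean(g * np.cos(k * theta)) for k in range(num_coeffs)])

f = lambda x: np.exp(3.0 * x)            # sample smooth function (illustrative choice)
print(chebyshev_coefficients(f, 10))     # rapid decay of c_k reflects the smoothness of f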

Chebyshev Approximation of Holomorphic Functions. We study the approximation of a holomorphic function f(x) by the truncated series with m terms,

Π_m f(x) = c_0 + 2 Σ_{k=1}^{m−1} c_k T_k(x),

which is a polynomial of degree m − 1.
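The truncated series Π_m f can be evaluated with numpy's Chebyshev evaluator, using the weights c_0, 2c_1, ..., 2c_{m−1}; the following sketch again uses an illustrative sample function and illustrative degrees.

import numpy as np
from numpy.polynomial.chebyshev import chebval

def truncated_chebyshev(f, m, x, num_nodes=512):
    # Coefficients c_0, ..., c_{m-1} by the midpoint rule, as in the previous sketch.
    theta = (np.arange(num_nodes) + 0.5) * np.pi / num_nodes
    g = f(np.cos(theta))
    c = np.array([np.mean(g * np.cos(k * theta)) for k in range(m)])
    # Pi_m f = c_0 T_0 + 2 c_1 T_1 + ... + 2 c_{m-1} T_{m-1}
    return chebval(x, np.concatenate(([c[0]], 2.0 * c[1:])))

f = lambda t: np.exp(3.0 * t)            # sample function (illustrative choice)
x = np.linspace(-1.0, 1.0, 1001)
for m in (4, 8, 12, 16):
    print(m, np.max(np.abs(truncated_chebyshev(f, m, x) - f(x))))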

The following is a version of a theorem by Bernstein (1912); see Markushevich (1977), Sect. III.3.15. Here Φ(z) = z + √(z² − 1) is again the conformal map (2.7) from the complement of the interval [−1, 1] to the exterior of the unit disk, and Ψ(w) = (w + w^{−1})/2 is the inverse map.

Theorem 2.1 (Chebyshev Approximation). Let r > 1, and suppose that f(z) is holomorphic in the interior of the ellipse |Φ(z)| < r and continuous on the closure. Then, the error of the truncated Chebyshev series is bounded by

|f(x) − Π_m f(x)| ≤ 2 μ(f, r) r^{−m} / (1 − r^{−1})   for −1 ≤ x ≤ 1,

where μ(f, r) = max_{|Φ(z)|=r} |f(z)|.

Proof. By the Cauchy integral formula, with the change of variables z = Ψ(w) taking the circle |w| = r to the ellipse |Φ(z)| = r, we have for x ∈ [−1, 1]

f(x) = (1/(2πi)) ∮_{|w|=r} f(Ψ(w)) Ψ′(w) / (Ψ(w) − x) dw.   (2.10)

We expand in negative powers of w,

Ψ′(w) / (Ψ(w) − x) = Σ_{k=0}^{∞} a_k(x) w^{−k−1}   for |w| > 1,   (2.11)

where the Taylor coefficients at ∞ are given as

a_k(x) = (1/(2πi)) ∮_{|w|=r} w^k Ψ′(w) / (Ψ(w) − x) dw = (1/(2πi)) ∮_Γ Φ(z)^k / (z − x) dz,

with Γ the image of the circle |w| = r under Ψ. By (2.8), these coefficients turn out to be simply a_k(x) = 2 T_k(x) for k ≥ 1, while a_0(x) = 1.

Inserting (2.11) into (2.10) therefore yields, on noting that (1/(2πi)) ∮_{|w|=r} f(Ψ(w)) w^{−k−1} dw = c_k (the contour |w| = r can be deformed to |w| = 1, where the integral becomes the Fourier coefficient of g),

f(x) − Π_m f(x) = (1/(2πi)) ∮_{|w|=r} f(Ψ(w)) Σ_{k=m}^{∞} 2 T_k(x) w^{−k−1} dw,

and the result follows, since |T_k(x)| ≤ 1 on [−1, 1] and |f(Ψ(w))| ≤ μ(f, r) on |w| = r, so that the right-hand side is bounded by 2 μ(f, r) Σ_{k=m}^{∞} r^{−k}. ⊓⊔
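The bound of Theorem 2.1 can be checked numerically. In the sketch below, the sample function f(z) = 1/(2 − z) is holomorphic inside the ellipse |Φ(z)| = r as long as its rightmost point (r + r^{−1})/2 stays below 2, in which case μ(f, r) = 1/(2 − (r + r^{−1})/2); the value of r, the node count and the degrees are illustrative choices.

import numpy as np
from numpy.polynomial.chebyshev import chebval

f = lambda z: 1.0 / (2.0 - z)             # sample function with a pole at z = 2
x = np.linspace(-1.0, 1.0, 2001)
n = 4096
theta = (np.arange(n) + 0.5) * np.pi / n
g = f(np.cos(theta))

r = 3.0                                   # admissible, since (r + 1/r)/2 < 2
mu = 1.0 / (2.0 - 0.5 * (r + 1.0 / r))    # max of |f| on the ellipse |Phi(z)| = r
for m in (5, 10, 15, 20):
    c = np.array([np.mean(g * np.cos(k * theta)) for k in range(m)])
    approx = chebval(x, np.concatenate(([c[0]], 2.0 * c[1:])))
    error = np.max(np.abs(approx - f(x)))
    bound = 2.0 * mu * r**(-m) / (1.0 - 1.0 / r)
    print(f"m = {m:2d}: error = {error:.2e}, bound = {bound:.2e}")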

Chebyshev Approximation of Complex Exponentials. The complex exponential e^{iωx} is an entire function, and we can choose r in Theorem 2.1 dependent on m to balance the growth of μ(e^{iωz}, r) with r against the decay of r^{−m}. This gives the following corollary showing superlinear convergence after a stagnation up to m ≈ |ω|. Since the polynomial must capture the extrema and zeros of cos(ωx) and sin(ωx) for a uniform approximation, it is obvious that at least a degree m proportional to |ω| is needed to obtain an error uniformly smaller than 1. Once this barrier is surmounted, the error decays very rapidly with growing degree m.

Theorem 2.2 (Eventual Superlinear Convergence to e^{iωx}). The error of the Chebyshev approximation p_{m−1}(x) of degree m − 1 to the complex exponential e^{iωx} with real ω is bounded, for −1 ≤ x ≤ 1, by

|e^{iωx} − p_{m−1}(x)| ≤ 4 ( e^{1 − (ω/(2m))²} |ω| / (2m) )^m   for m ≥ |ω|.

Proof. On the ellipse |Φ(z)| = r the imaginary part of z is bounded by (r − r^{−1})/2, so that μ(e^{iωz}, r) ≤ e^{|ω|(r − r^{−1})/2}, and Theorem 2.1 with r ≥ 2 gives the bound 4 e^{|ω|(r − r^{−1})/2} r^{−m}. Choosing r = 2m/|ω| ≥ 2 then yields the stated result, which could be slightly refined. ⊓⊔

The Chebyshev coefficients of e^{iωx} are given explicitly by Bessel functions of the first kind: by formula (9.1.21) in Abramowitz & Stegun (1965),

c_k = (1/π) ∫_0^π e^{iω cos θ} cos(kθ) dθ = i^k J_k(ω).   (2.13)

Fig. 2.1. Chebyshev approximation of e^{iωx}. Maximum error on [−1, 1] versus degree, for ω = 4, 8, 16, 32. Dashed: error bounds of Theorem 2.2.
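The quantities shown in Fig. 2.1 can be recomputed in a few lines, using the coefficients (2.13) via scipy's Bessel function routine; the value of ω and the degrees below are illustrative choices.

import numpy as np
from numpy.polynomial.chebyshev import chebval
from scipy.special import jv

omega = 16.0
x = np.linspace(-1.0, 1.0, 2001)
exact = np.exp(1j * omega * x)
for m in (8, 16, 24, 32, 40):
    k = np.arange(m)
    c = (1j ** k) * jv(k, omega)                         # coefficients (2.13)
    error = np.max(np.abs(chebval(x, np.concatenate(([c[0]], 2.0 * c[1:]))) - exact))
    if m >= omega:                                        # Theorem 2.2 requires m >= |omega|
        bound = 4.0 * (np.exp(1.0 - (omega / (2 * m))**2) * omega / (2 * m)) ** m
        print(f"m = {m:2d}: error = {error:.2e}, bound = {bound:.2e}")
    else:
        print(f"m = {m:2d}: error = {error:.2e}  (m < omega, no bound)")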

From e^{iωx} with −1 ≤ x ≤ 1, uniform polynomial approximation of e^{iξ} for α ≤ ξ ≤ β is obtained by transforming

x = (2/(β − α)) (ξ − (α + β)/2),   ω = (β − α)/2,   so that e^{iξ} = e^{i(α+β)/2} e^{iωx}.

Chebyshev Method for the Matrix Exponential Operator. Let A be a Hermitian matrix all of whose eigenvalues are known to lie in the interval [a, b]. As proposed by Tal-Ezer & Kosloff (1984), we approximate the action of the matrix exponential on a vector v by

e^{iΔtA} v ≈ e^{iΔt(a+b)/2} ( c_0 v + 2 Σ_{k=1}^{m−1} c_k T_k(X) v )   (2.14)

with X = (2/(b − a)) (A − ((a + b)/2) I), ω = Δt (b − a)/2, and the coefficients c_k = i^k J_k(ω) of (2.13). We denote the right-hand side by P_{m−1}(Δt A) v and observe that it is in fact a function of the product Δt A. The actual way to compute (2.14) is by a recursive algorithm proposed by Clenshaw (1962) for the evaluation of truncated Chebyshev expansions of functions:

Clenshaw Algorithm: let X = (2/(b − a)) (A − ((a + b)/2) I), set d_{m+1} = d_m = 0 and

d_k = c_k v + 2 X d_{k+1} − d_{k+2}   for k = m − 1, m − 2, . . . , 0.

The approximation (2.14) is then given as

P_{m−1}(Δt A) v = e^{iΔt(a+b)/2} (d_0 − d_2).

This identity is readily verified using the Chebyshev recurrence relation (2.4) for the terms in the sum, descending from the terms of highest degree. The algorithm requires m matrix-vector multiplications to compute P_{m−1}(Δt A) v and needs to keep only three vectors in memory.
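A compact Python sketch of the Clenshaw evaluation of (2.14) is given below, compared against scipy.linalg.expm on a small random Hermitian test matrix; the matrix size, time step and degrees are illustrative, and the spectral interval [a, b] is taken from the exact eigenvalues purely for simplicity.

import numpy as np
from scipy.special import jv
from scipy.linalg import expm

def cheb_exp_action(A, v, dt, a, b, m):
    # Approximate exp(1j*dt*A) @ v by the Chebyshev polynomial (2.14) of degree m-1,
    # evaluated with the Clenshaw recursion; A Hermitian with eigenvalues in [a, b].
    omega = dt * (b - a) / 2.0
    k = np.arange(m)
    c = (1j ** k) * jv(k, omega)                         # coefficients (2.13)
    X = (2.0 / (b - a)) * (A - 0.5 * (a + b) * np.eye(A.shape[0]))
    d1 = np.zeros(len(v), dtype=complex)                 # plays the role of d_{k+1}
    d2 = np.zeros(len(v), dtype=complex)                 # plays the role of d_{k+2}
    for j in range(m - 1, 0, -1):                        # k = m-1, ..., 1
        d1, d2 = c[j] * v + 2.0 * (X @ d1) - d2, d1
    d0 = c[0] * v + 2.0 * (X @ d1) - d2
    return np.exp(1j * dt * 0.5 * (a + b)) * (d0 - d2)

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 50))
H = 0.5 * (H + H.T)                                      # random Hermitian test matrix
v = np.zeros(50)
v[0] = 1.0
eigs = np.linalg.eigvalsh(H)
a, b, dt = eigs.min(), eigs.max(), 1.0
reference = expm(1j * dt * H) @ v
for m in (10, 20, 30, 40):
    print(m, np.linalg.norm(cheb_exp_action(H, v, dt, a, b, m) - reference))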

Theorem 2.3 (Error of the Chebyshev Method). Let A be a Hermitian matrix with all its eigenvalues in the interval [a, b], and let v be a vector of unit Euclidean norm. Then, the error of the Chebyshev approximation (2.14) is bounded in the Euclidean norm by

‖P_{m−1}(Δt A) v − e^{iΔtA} v‖ ≤ 4 ( e^{1 − (ω/(2m))²} ω/(2m) )^m   for m ≥ ω   with ω = Δt (b − a)/2.

Proof. For a diagonal matrix A, the estimate follows immediately from Theorem 2.2 and the linear transformation between the intervals [Δt a, Δt b] and [−1, 1]. Since every Hermitian matrix A can be unitarily transformed to diagonal form, we obtain the result as stated. ⊓⊔
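The right-hand side of the estimate in Theorem 2.3 is easily tabulated; the sketch below evaluates it for a sample value of ω and shows the rapid decay once m exceeds ω.

import numpy as np

def cheb_error_bound(omega, m):
    # 4 * (exp(1 - (omega/(2m))^2) * omega/(2m))^m, valid for m >= omega
    q = omega / (2.0 * m)
    return 4.0 * (np.exp(1.0 - q**2) * q) ** m

omega = 20.0                                  # sample value (illustrative)
for m in (20, 25, 30, 40, 60, 80):
    print(f"m = {m:2d}: bound = {cheb_error_bound(omega, m):.2e}")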

Step Size Restriction. The condition m ≥ ω can be read as a restriction of the step size for given degree m:

Δt ≤ 2m / (b − a).

This can also be viewed as saying that at least one matrix-vector multiplication is needed on every time interval of length 1/(b − a). In the treatment of the Schrödinger equation, this length shrinks as the spatial discretization is refined: for illustration, consider Fourier collocation in one space dimension, with K Fourier modes. For the matrix A = D_K + F_K V_K F_K^{−1} of (1.32), the eigenvalues lie in the interval [a, b] with

a = min_x V(x),   b = (1/(2μ)) (K²/4) + max_x V(x).

For large K, or small Δx = 2π/K, we have that ω = Δt (b − a)/2 is approximately proportional to Δt K², or

ω ∼ Δt / Δx².

The condition m ≥ 2ω for the onset of error reduction therefore translates into a step-size restriction

Δt ≤ C m Δx²,   (2.15)

and the number of matrix-vector multiplications to cover a fixed time interval is thus inversely proportional to Δx².
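For concreteness, the following sketch tabulates ω and the resulting minimal degree m ≈ 2ω per step for a Fourier collocation grid; the reduced mass, the potential bounds and Δt are sample values chosen only for illustration, and the output makes the Δt ≤ C m Δx² scaling of (2.15) visible.

import numpy as np

mu = 1.0                         # reduced mass (sample value)
V_min, V_max = 0.0, 5.0          # bounds of the potential on the grid (sample values)
dt = 0.01
for K in (64, 128, 256, 512):
    dx = 2 * np.pi / K
    a = V_min
    b = K**2 / (8.0 * mu) + V_max         # (1/(2*mu)) * K^2/4 + max V
    omega = dt * (b - a) / 2.0
    m_min = int(np.ceil(2 * omega))       # onset of error reduction: m >= 2*omega
    print(f"K = {K:4d}, dx = {dx:.4f}, omega = {omega:9.2f}, minimal degree ~ {m_min}")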