
with the $i$ either zero or one. Application of the transformation decouples the stage equations: instead of having to solve one $sN \times sN$ linear system, we now only have to solve $s$ different $N \times N$ systems sequentially.

Examples of implicit Runge-Kutta processes based on Gaussian quadrature formulas, with $s = 1, 2, 3, 4, 5$ stages and corresponding orders $2, 4, 6, 8, 10$, are given by Butcher [But64a]. For instance, the coefficients in the tableau

$$
\begin{array}{c|cc}
\tfrac{1}{2} - \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} & \tfrac{1}{4} - \tfrac{\sqrt{3}}{6} \\
\tfrac{1}{2} + \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} + \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} \\
\hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array}
$$

define an implicit Runge-Kutta method with 2 stages and of 4th order; it was previously given by Hammer and Hollingsworth [HH55]. Other families of implicit Runge-Kutta processes exist, for instance the Radau family of order $2s - 1$ and the Lobatto family of order $2s - 2$.
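
As a concrete illustration, the following Python sketch performs one step of this 2-stage Gauss method, solving the implicit stage equations by fixed-point iteration (sufficient for non-stiff problems; a stiff problem would call for Newton iteration instead). The function name gauss2_step and its tolerance parameters are illustrative choices, not part of the original presentation.

```python
import numpy as np

# Butcher tableau of the 2-stage Gauss method (order 4),
# as given by Hammer and Hollingsworth.
S3 = np.sqrt(3.0)
A = np.array([[1/4,        1/4 - S3/6],
              [1/4 + S3/6, 1/4       ]])
b = np.array([1/2, 1/2])
c = np.array([1/2 - S3/6, 1/2 + S3/6])

def gauss2_step(f, t, y, h, tol=1e-12, max_iter=50):
    """One step of the 2-stage Gauss method (illustrative sketch).

    The implicit stage equations k_i = f(t + c_i h, y + h sum_j a_ij k_j)
    are solved by fixed-point iteration, which is adequate only for
    non-stiff problems.
    """
    k = np.array([f(t, y), f(t, y)])  # initial guess: the Euler slope
    for _ in range(max_iter):
        k_new = np.array([f(t + c[i] * h, y + h * (A[i] @ k))
                          for i in range(2)])
        if np.max(np.abs(k_new - k)) < tol:
            k = k_new
            break
        k = k_new
    return y + h * (b @ k)
```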

2.8. Linear Multistep Methods

The methods previously covered, namely the one-step Runge-Kutta methods, only use information from the current step to calculate the next point of the solution. Multistep methods increase efficiency by using several previous solution points instead. A linear multistep formula (LMF) approximates the next point of the solution by a linear combination of previous solution points and their derivatives. The general form of an LMF according to Dahlquist [Dah56] is

$$\sum_{i=0}^{k} \alpha_i\, y_{n+i} = h \sum_{i=0}^{k} \beta_i\, f(t_{n+i}, y_{n+i}), \qquad (2.13)$$

where $h$ is the step size and $k$ is the number of previous points. The point $(t_{n+k}, y_{n+k})$ is the next to compute, $f(t_{n+k}, y_{n+k})$ its derivative. Normalization is obtained by setting $\alpha_k = 1$. If $\beta_k = 0$ the method is explicit (Adams-Bashforth type), otherwise it is implicit (Adams-Moulton type). In the latter case $y_{n+k}$ can be found via simple iteration (in the non-stiff case) or Newton's method (in the stiff case).
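
To make the role of the coefficients concrete, here is a minimal Python sketch of one step of an explicit LMF in the normalized form of equation (2.13); the function explicit_lmf_step and its argument layout are hypothetical.

```python
def explicit_lmf_step(alpha, beta, ts, ys, fs, h, f):
    """Advance one step of an explicit LMF, equation (2.13).

    alpha, beta -- coefficients alpha_0..alpha_k and beta_0..beta_k,
                   with alpha[k] == 1 (normalization) and beta[k] == 0.
    ts, ys, fs  -- the k previous times t_n..t_{n+k-1}, solution values
                   y_i, and derivatives f_i = f(t_i, y_i).
    Solving equation (2.13) for y_{n+k} gives
        y_{n+k} = h * sum_i beta_i f_i - sum_{i<k} alpha_i y_i.
    """
    k = len(alpha) - 1
    assert beta[k] == 0, "explicit formula required"
    y_next = (h * sum(beta[i] * fs[i] for i in range(k))
              - sum(alpha[i] * ys[i] for i in range(k)))
    t_next = ts[-1] + h
    return t_next, y_next, f(t_next, y_next)
```

For example, the two-step Adams-Bashforth method corresponds to alpha = [0, -1, 1] and beta = [-1/2, 3/2, 0].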

To start an LMF, the $k$ initial points need to be obtained. A Taylor series expansion of $y(t)$ around $t_0$ can be used if the solution is analytic at $t_0$ [Mil26]. Another possibility, also suggested by Milne, uses successive approximations by iterating a formula obtained from the polynomial $p(t)$. One could also use Runge-Kutta methods to obtain starting values, especially [Col66, Mil53, Sar65]. Working with LMFs of variable orders also allows finding starting values successively.
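
As one possible realization of the Runge-Kutta starting procedure, the sketch below generates the $k$ starting points with the classical 4th-order Runge-Kutta method; the helper name rk4_starting_values is our own illustrative choice.

```python
def rk4_starting_values(f, t0, y0, h, k):
    """Generate the k starting points an LMF needs, using classical RK4.

    Returns lists ts, ys of length k (t0, y0 included). Any one-step
    method of sufficient order would work equally well here.
    """
    ts, ys = [t0], [y0]
    for _ in range(k - 1):
        t, y = ts[-1], ys[-1]
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        ys.append(y + h/6 * (k1 + 2*k2 + 2*k3 + k4))
        ts.append(t + h)
    return ts, ys
```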

2.8.1. Adams-Bashforth Methods

In [Bas83] Francis Bashforth describes an attempt to test the theories of capillary action by the forms of fluid drops. The numerical methods for this were developed by John Couch Adams. The idea is to interpolate the previous derivatives $f_{n-i} = f(t_{n-i}, y_{n-i})$, $0 \le i < k$, by a polynomial $p(t)$ and to replace the integrand in the equation

$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} f(t, y(t))\, dt$$

by $p(t)$. Analytic integration of the polynomial then yields a numerical integration method. The methods obtained for different values of $k$ are given in table 2.3. Note that for $k = 1$ the Euler method is obtained.

$$\begin{array}{ll}
k = 1: & y_{n+1} = y_n + h f_n \\
k = 2: & y_{n+1} = y_n + \frac{h}{2}\,(3f_n - f_{n-1}) \\
k = 3: & y_{n+1} = y_n + \frac{h}{12}\,(23f_n - 16f_{n-1} + 5f_{n-2}) \\
k = 4: & y_{n+1} = y_n + \frac{h}{24}\,(55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3})
\end{array}$$

Table 2.3.: Adams-Bashforth methods.
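
A direct transcription of table 2.3 into code might look as follows; the coefficient dictionary AB and the function ab_step are illustrative names.

```python
# Adams-Bashforth coefficients from table 2.3, for k = 1..4:
# y_{n+1} = y_n + h * sum_i AB[k][i] * f_{n-i}
AB = {
    1: [1.0],
    2: [3/2, -1/2],
    3: [23/12, -16/12, 5/12],
    4: [55/24, -59/24, 37/24, -9/24],
}

def ab_step(k, h, y_n, f_hist):
    """One Adams-Bashforth step with k previous derivatives.

    f_hist holds [f_n, f_{n-1}, ..., f_{n-k+1}], newest first.
    """
    return y_n + h * sum(c * fv for c, fv in zip(AB[k], f_hist))
```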

2.8.2. Adams-Moulton Methods

Implicit LMFs can be obtained by including $f_{n+1}$ in the calculation of $y_{n+1}$. The polynomial $p(t)$ now interpolates $f_{n-i} = f(t_{n-i}, y_{n-i})$, $-1 \le i < k$. Methods obtained this way are shown in table 2.4 and have been derived in [Mou26]. Solution of the implicit equation can be performed by fixed-point iteration (in the non-stiff case) or by Newton iteration (in the stiff case). Fixed-point iteration only converges if $|h \beta_k L| < 1$, with $L$ the Lipschitz constant of $f$.

$$\begin{array}{ll}
k = 0: & y_{n+1} = y_n + h f_{n+1} \\
k = 1: & y_{n+1} = y_n + \frac{h}{2}\,(f_{n+1} + f_n) \\
k = 2: & y_{n+1} = y_n + \frac{h}{12}\,(5f_{n+1} + 8f_n - f_{n-1}) \\
k = 3: & y_{n+1} = y_n + \frac{h}{24}\,(9f_{n+1} + 19f_n - 5f_{n-1} + f_{n-2})
\end{array}$$

Table 2.4.: Adams-Moulton methods.
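
The fixed-point solution of an implicit Adams-Moulton formula can be sketched as below for the $k = 1$ (trapezoidal) case of table 2.4 and a scalar ODE; note the convergence condition $|h \beta_k L| < 1$ with $\beta_1 = 1/2$. The function name and tolerances are assumptions of this sketch.

```python
def am1_step(f, t_n, y_n, h, tol=1e-12, max_iter=100):
    """Trapezoidal Adams-Moulton step (k = 1 in table 2.4), scalar ODE.

    The implicit equation is solved by fixed-point iteration, which
    converges only if |h * (1/2) * L| < 1, with L the Lipschitz
    constant of f.
    """
    f_n = f(t_n, y_n)
    t_next = t_n + h
    y = y_n + h * f_n                      # explicit Euler predictor
    for _ in range(max_iter):
        y_new = y_n + h/2 * (f(t_next, y) + f_n)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    raise RuntimeError("fixed-point iteration did not converge")
```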

2.8.3. Predictor-Corrector Methods

Another way to solve the implicit equations is to use the implicit methods in combination with the explicit ones (both using the same $k$). The explicit method predicts the next solution $\hat{y}_{n+1}$, and the implicit method then uses this value to compute $f_{n+1} = f(t_{n+1}, \hat{y}_{n+1})$ and $y_{n+1}$. The amount of correction needed gives an estimate for the error made [Mil26].

Such a predictor-corrector scheme is also named PECE, where P denotes prediction, C denotes correction, and E refers to evaluation of $f$. If the correction step is performed $n$ times, the method is called P(EC)$^n$E.
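
A PECE step for the $k = 2$ Adams pair from tables 2.3 and 2.4 could be sketched as follows; the function pece_step is hypothetical, and a scalar ODE is assumed.

```python
def pece_step(f, t_n, y_n, h, f_n, f_nm1):
    """One PECE step with the k = 2 Adams pair from tables 2.3/2.4.

    P: Adams-Bashforth (k = 2) predicts y_hat.
    E: evaluate f at the predicted value.
    C: Adams-Moulton (k = 2) corrects.
    E: evaluate f at the corrected value for the next step.
    The difference |y - y_hat| serves as a local error estimate.
    """
    t_next = t_n + h
    y_hat = y_n + h/2 * (3*f_n - f_nm1)                  # P
    f_hat = f(t_next, y_hat)                             # E
    y = y_n + h/12 * (5*f_hat + 8*f_n - f_nm1)           # C
    f_next = f(t_next, y)                                # E
    return y, f_next, abs(y - y_hat)
```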

2.8.4. Nordsieck Method

Better performance of LMFs can be achieved by using a variable step size. Compared to the one-step Runge-Kutta methods, this requires regeneration of the $k$ previous values upon each change of step size. One way to achieve this is to discard the old values and restart with $k = 1$ (the Euler method), then work up to higher-order methods. A better alternative is to obtain new values by interpolation/extrapolation.

An even better way to store the values was suggested by Nordsieck [Nor62]. He suggested, instead of storing previous values of $y$ or $f$, to store the $k + 1$ scaled derivatives

$$\frac{h^i}{i!}\, y^{(i)}(t_n), \qquad 0 \le i \le k,$$

for the current step (the Nordsieck vector $N$). This way the stored values are independent of the current step size and thus easily allow working with a variable step size. Changing the step size from $h_{\mathrm{old}}$ to $h_{\mathrm{new}}$ with $\nu = h_{\mathrm{new}}/h_{\mathrm{old}}$ is a simple matter of scaling each row of $N$ by $\nu^i$, i.e. multiplication of $N$ by a matrix $\mathrm{diag}(1, \nu, \nu^2, \ldots, \nu^k)$.
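
In code, this rescaling is a one-line scaling per row; the following sketch (with the illustrative name rescale_nordsieck) assumes the Nordsieck vector is stored as a NumPy array with row $i$ holding $h^i/i!\, y^{(i)}(t_n)$.

```python
import numpy as np

def rescale_nordsieck(N_vec, nu):
    """Adapt a Nordsieck vector to a new step size.

    N_vec[i] holds h^i / i! * y^(i)(t_n); after changing the step
    from h_old to h_new with nu = h_new / h_old, row i is scaled by
    nu**i, i.e. N is multiplied by diag(1, nu, nu**2, ..., nu**k).
    """
    k = len(N_vec) - 1
    return np.array([nu**i * N_vec[i] for i in range(k + 1)])
```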

The value of the next step can then be computed by a Taylor series as

$$y(t + h) = \sum_{i=0}^{k} \frac{h^i}{i!}\, y^{(i)}(t) + O(h^{k+1}),$$

which is exact if $y(t)$ is a polynomial of degree up to $k$. Higher-order derivatives can be calculated in a similar way. For a general formulation for advancing the Nordsieck vector see [Gea67], as well as equation (2.5) in [CLM89]. Gear [Gea67] also showed how to transform between the Adams and the Nordsieck representation.
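
One way to realize this Taylor prediction for the whole Nordsieck vector is multiplication by an upper-triangular Pascal matrix of binomial coefficients, since $\frac{h^i}{i!} y^{(i)}(t+h) = \sum_{j \ge i} \binom{j}{i} \frac{h^j}{j!} y^{(j)}(t) + O(h^{k+1})$. The sketch below is an illustration under that assumption, not Gear's exact formulation (see [Gea67] for that).

```python
import numpy as np
from math import comb

def predict_nordsieck(N_vec):
    """Taylor prediction of the Nordsieck vector over one step.

    The scaled derivative h^i/i! * y^(i)(t + h) equals
    sum_{j>=i} C(j, i) * h^j/j! * y^(j)(t) up to O(h^{k+1}), so
    advancing N is multiplication by the Pascal matrix P with
    P[i, j] = C(j, i) (zero for j < i).
    """
    k = len(N_vec) - 1
    P = np.array([[comb(j, i) for j in range(k + 1)]
                  for i in range(k + 1)])
    return P @ N_vec
```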

2.8.5. Backward Differentiation Formula

Unlike the Adams formulas, which interpolate the derivatives $f_i$, the backward differentiation formulas (BDF) interpolate the previous solution values $y_i$ and $y_{n+1}$ such that the derivative of this polynomial at $t_{n+1}$ agrees with $f_{n+1}$. By construction these methods are implicit in nature and as such are useful for solving stiff problems (if Newton iteration is used).

For a $k$-step BDF a polynomial $q(t)$ is constructed through Newton interpolation of $y_{n+1}, y_n, \ldots, y_{n-k+1}$, and by requiring that $q'(t_{n+1}) = f(t_{n+1}, q(t_{n+1}))$ one obtains

$$\sum_{i=1}^{k} \frac{1}{i}\, \Delta^i y_{n+1} = h f_{n+1}$$

with $\Delta^{i+1} y_{n+1} = \Delta^i y_{n+1} - \Delta^i y_n$ and $\Delta^0 y_n = y_n$. Conversion into the form of equation (2.13) results in the BDFs shown in table 2.5. They have been introduced in [CH52] and have also been described in [Gea71]. Methods with $k \le 2$ are A-stable, those with $k > 2$ are at least $A(\alpha)$-stable. Methods with $k \ge 7$ are of no use as they are not stable.

$$\begin{array}{ll}
k = 1: & y_{n+1} - y_n = h f_{n+1} \\
k = 2: & \frac{3}{2} y_{n+1} - 2 y_n + \frac{1}{2} y_{n-1} = h f_{n+1} \\
k = 3: & \frac{11}{6} y_{n+1} - 3 y_n + \frac{3}{2} y_{n-1} - \frac{1}{3} y_{n-2} = h f_{n+1} \\
k = 4: & \frac{25}{12} y_{n+1} - 4 y_n + 3 y_{n-1} - \frac{4}{3} y_{n-2} + \frac{1}{4} y_{n-3} = h f_{n+1} \\
k = 5: & \frac{137}{60} y_{n+1} - 5 y_n + 5 y_{n-1} - \frac{10}{3} y_{n-2} + \frac{5}{4} y_{n-3} - \frac{1}{5} y_{n-4} = h f_{n+1} \\
k = 6: & \frac{49}{20} y_{n+1} - 6 y_n + \frac{15}{2} y_{n-1} - \frac{20}{3} y_{n-2} + \frac{15}{4} y_{n-3} - \frac{6}{5} y_{n-4} + \frac{1}{6} y_{n-5} = h f_{n+1}
\end{array}$$

Table 2.5.: Backward differentiation formulas.
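
For illustration, the $k = 2$ formula from table 2.5, solved by Newton iteration for a scalar ODE, might look as follows; the function bdf2_step and the user-supplied derivative f_y are assumptions of this sketch.

```python
def bdf2_step(f, f_y, t_n, y_n, y_nm1, h, tol=1e-12, max_iter=20):
    """One k = 2 BDF step (table 2.5) for a scalar ODE, via Newton.

    Solves g(y) = 3/2*y - 2*y_n + 1/2*y_nm1 - h*f(t_{n+1}, y) = 0
    using the user-supplied partial derivative f_y = df/dy, as is
    appropriate for stiff problems.
    """
    t_next = t_n + h
    y = y_n + h * f(t_n, y_n)            # predictor: explicit Euler
    for _ in range(max_iter):
        g = 1.5*y - 2.0*y_n + 0.5*y_nm1 - h*f(t_next, y)
        dg = 1.5 - h*f_y(t_next, y)
        y_new = y - g/dg
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    raise RuntimeError("Newton iteration did not converge")
```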