
has to be taken in choosing the numerical boundary conditions for the partial differential equation. Otherwise, unphysical effects can be introduced into the solution.

The "solves" in each step correspond to a time integration procedure, which has to be chosen of the appropriate order. For source terms, the idea is that typically quite simple time integration procedures can be chosen for these, possibly leading to more efficient overall schemes than when incorporating the source term into the computation of the fluxes. For the Navier-Stokes equations, an important idea is to use an operator splitting where an implicit time integration method is used for the diffusive fluxes and an explicit method for the convective parts, since the CFL condition is typically less severe than the DFL condition. This is sometimes referred to as an IMEX scheme, for implicit-explicit, but care has to be taken, since this term is also used for schemes where an explicit or implicit scheme is used depending on the part of the spatial domain considered.
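To fix ideas, the following is a minimal sketch of such a convective-explicit/diffusive-implicit splitting for the one-dimensional linear advection-diffusion equation u_t + a u_x = nu u_xx on a periodic grid. The discretization choices (first-order upwind, dense linear solve) are for brevity of illustration only and are not taken from the text:

```python
import numpy as np

def imex_step(u, dt, dx, a, nu):
    """One Lie-splitting IMEX step for u_t + a*u_x = nu*u_xx on a periodic grid.

    Convection: explicit first-order upwind (cheap; step limited by the CFL
    condition only).  Diffusion: implicit Euler (removes the stricter
    diffusive step size limit).
    """
    n = len(u)
    # explicit upwind convection (assumes a > 0)
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # implicit Euler diffusion: (I - r*L) u_new = u, L = periodic Laplacian
    r = nu * dt / dx**2
    L = (-2 * np.eye(n) + np.roll(np.eye(n), 1, axis=1)
         + np.roll(np.eye(n), -1, axis=1))
    return np.linalg.solve(np.eye(n) - r * L, u)

# advect and diffuse a smooth periodic profile
n, a, nu = 64, 1.0, 0.05
dx = 1.0 / n
x = dx * np.arange(n)
u = np.sin(2 * np.pi * x)
dt = 0.5 * dx / a          # convective CFL restriction only
for _ in range(100):
    u = imex_step(u, dt, dx, a, nu)
print(abs(u).max())         # amplitude has decayed well below 1 due to diffusion
```

Note that the diffusive number r = nu*dt/dx^2 is larger than the explicit stability limit here, which is exactly the situation where treating the diffusive part implicitly pays off.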

Tang and Teng [193] proved for multidimensional scalar balance laws that if the exact solution operator is used for both subproblems, the described schemes converge to the weak entropy solution and furthermore that the L1 convergence rate of both fractional step methods is not worse than 1/2. This convergence rate is actually optimal if a monotone scheme is used for the homogeneous conservation law in combination with the forward Euler method for the time integration. Langseth, Tveito and Winther [121] proved for scalar one-dimensional balance laws that the L1 convergence rate of the Godunov splitting (again using the exact solution operators) is linear and showed corresponding numerical examples, even for systems of equations. A better convergence rate than linear for nonsmooth solutions is not possible, as Crandall and Majda proved already in 1980 [42].

The L1 error does not tell the whole story. Using the Strang or Godunov splitting combined with a higher order method in space and a second order time integration does improve the solution compared with first order schemes and is therefore appropriate for the computation of unsteady flows. This is for example suggested by LeVeque [124].
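The difference between the splittings can be observed on a small linear model problem u' = (A + B)u with non-commuting matrices, where the exact subflows are available. The following check (an illustration, not from the text) compares how the errors of the Godunov (Lie) and Strang splittings shrink when the time step is halved:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for small ||M||)."""
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

# two non-commuting operators: u' = (A + B) u
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.5, 0.0], [0.0, -0.1]])

def split_solve(u0, T, n, strang):
    """Integrate to time T with n splitting steps using exact subflows."""
    dt = T / n
    eA, eB = expm_series(dt * A), expm_series(dt * B)
    eA2 = expm_series(0.5 * dt * A)
    u = u0
    for _ in range(n):
        u = eA2 @ eB @ eA2 @ u if strang else eB @ eA @ u
    return u

u0 = np.array([1.0, 0.0])
exact = expm_series(A + B) @ u0
errs = {s: [np.linalg.norm(split_solve(u0, 1.0, n, s) - exact) for n in (16, 32)]
        for s in (False, True)}
# halving dt roughly halves the Godunov error (order 1)
# and roughly quarters the Strang error (order 2)
print(errs[False][0] / errs[False][1], errs[True][0] / errs[True][1])
```

For smooth solutions the expected ratios 2 and 4 appear cleanly; for nonsmooth solutions, as discussed above, the observed L1 orders degrade toward one.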

Regarding time adaptivity, embedded Runge-Kutta methods cannot be used and Richardson extrapolation has to be used instead.
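The step-doubling form of Richardson extrapolation can be sketched as follows for the explicit Euler method; the test problem and numbers are illustrative only:

```python
import numpy as np

def euler_step(f, u, t, dt):
    return u + dt * f(t, u)

def richardson_estimate(f, u, t, dt):
    """Local error estimate for explicit Euler (order p = 1) via step doubling.

    Compare one step of size dt with two steps of size dt/2; for a method of
    order p, the difference divided by (2^p - 1) estimates the error of the
    half-step result and can drive step size adaptation.
    """
    big = euler_step(f, u, t, dt)
    half = euler_step(f, u, t, dt / 2)
    small = euler_step(f, half, t + dt / 2, dt / 2)
    p = 1
    err = np.linalg.norm(big - small) / (2**p - 1)
    return small, err

f = lambda t, u: -u                     # test problem u' = -u, u(0) = 1
u, t, dt = np.array([1.0]), 0.0, 0.1
u_new, err = richardson_estimate(f, u, t, dt)
true_err = abs(u_new[0] - np.exp(-(t + dt)))
print(err, true_err)   # estimate and true local error are of comparable size
```

The price compared with an embedded pair is the extra "solves" per step, which is why embedded methods are preferred whenever they are available.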

4.9 Alternatives to the method of lines

So far, we have looked at the method of lines only. In [130], Mani and Mavriplis follow a different approach in the FV case, in that they write down one huge equation system for all the unknowns at several time steps simultaneously and combine this with a time adaptive strategy. An alternative to discretizing in space first and then in time is to discretize in space and time simultaneously. This approach is unusual in the finite volume context, but is followed for example in the ADER method [197]. For DG methods, it is slightly more common, for example in the space-time DG of Klaij et al. [111] or the space-time expansion (STE) DG of Lörcher et al. [128], which allows for a local time stepping via a predictor-corrector scheme to increase efficiency for unsteady flows. There and in other approaches, the time integration over the interval [t_n, t_{n+1}] is embedded in the overall DG formulation.

4.9.1 Local time stepping: Predictor-Corrector-DG

As an example of an alternative method, we will now explain the Predictor-Corrector-DG method and the local time stepping used in more detail. The starting point is the evolution equation (3.30) for cell i, integrated over the time interval [t_n, t_{n+1}]:

$$u_i^{n+1} = u_i^n - \int_{t_n}^{t_{n+1}} \Big( \underbrace{\sum_{k=1}^{d} S_k f_k}_{R_V(p_i)} - \underbrace{\sum_{j=1}^{n_{\mathrm{Faces}}} M_{S_j} g_j}_{R_S(p_i, p_j)} \Big)\, dt.$$

The time integral is approximated using Gaussian quadrature. This raises the question of how to obtain the values at future times. To this end, the integral is split into the volume term R_V, which needs information from cell i only, and the surface term R_S, which requires information from the neighboring cells. Then, the use of cellwise predictor polynomials p_i(t) in time is suggested. Once these are given, the update can be computed via

$$u_i^{n+1} = u_i^n - \int_{t_n}^{t_{n+1}} R_V(p_i) - R_S(p_i, p_j)\, dt. \qquad (4.40)$$

Several methods to obtain these predictor polynomials have been suggested. The first idea was to use a space-time expansion via the Cauchy-Kowalewskaja procedure [128], which leads to a rather costly and cumbersome scheme. However, in [62], the use of continuous extension Runge-Kutta (CERK) methods is suggested instead. This is a class of RK methods that allows one to obtain approximations to the solution not only at t_{n+1}, but at any value t ∈ [t_n, t_{n+1}] [153]. The only difference to the dense output formulas (4.24) and (4.25) mentioned earlier is that the latter are designed for particular RK methods, whereas the CERK methods are full RK schemes in their own right. Coefficients of an explicit four-stage CERK method with its continuous extension can be found in the appendix in tables B.12 and B.13.

The CERK method is then used to integrate the initial value problem

$$\frac{d}{dt} u_i(t) = R_V(u_i), \qquad u_i(t_n) = u_i^n, \qquad (4.41)$$

in every cell i, resulting in the stage derivatives k_j. The values at the Gauss points are obtained via the CERK polynomial

$$p(t) = \sum_{k=0}^{p} q_k t^k,$$

which is of order p, corresponding to the order of the CERK method minus one, and has the coefficients

$$q_k = \frac{1}{\Delta t_n^k} \sum_{j=1}^{s} b_{kj} k_j,$$

where the coefficients b_{kj} can be found in table B.13.

The predictor method needs to be of order k−1 if order k is sought for the complete time integration. If a global time step is employed, the method described so far is already applicable. However, a crucial property of the scheme is that a local time stepping procedure can be used.

A cell i can be advanced in time if the necessary information in the neighboring cells is already available, that is, if the new local time level of the cell does not exceed those of its neighbors:

$$t_i^{n+1} \le \min t_j^{n+1} \quad \forall j \in N(i). \qquad (4.42)$$

This is illustrated in figure 4.5. There, all cells are synchronized at time level t_n; then, in each cell, a predictor polynomial is computed using the CERK method, which is valid for the duration of the local time step Δt_i^n. However, for the completion of a time step in both cell i−1 and cell i+1, boundary data is missing. Cell i, on the other hand, fulfills the evolve condition, and thus the time step there can be completed using the predictor polynomials p_{i−1} and p_{i+1}. Then, the predictor polynomial in cell i for the next local time step is computed.

Now, cell i−1 fulfills the evolve condition, and after the completion of that time step, the second time step in cell i can be computed. Note that while this example looks sequential, on a large grid a number of cells can be advanced in time in parallel, making the scheme attractive on modern architectures.
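The evolve condition (4.42) induces exactly this kind of schedule. A toy scheduler (the cell count and step sizes are illustrative, and the actual corrector update is omitted) might look like:

```python
# Each cell advances whenever its candidate new time does not exceed the
# candidate new times of its neighbors, cf. (4.42).
T = 1.0
dt = [0.5, 0.25, 0.5]            # fixed local step sizes for 3 cells
t = [0.0, 0.0, 0.0]              # current local time levels
neighbors = {0: [1], 1: [0, 2], 2: [1]}
order = []                        # records which cell completes each step
while min(t) < T:
    for i in range(3):
        if t[i] < T and t[i] + dt[i] <= min(t[j] + dt[j] for j in neighbors[i]):
            # here the neighbors' predictor polynomials cover [t[i], t[i]+dt[i]],
            # so the corrector step for cell i could be completed
            t[i] += dt[i]
            order.append(i)
print(order)
```

The middle cell, with the smallest time step, always advances first, and the cell with the smallest candidate time is always admissible, so the loop cannot deadlock. On a large grid, many admissible cells exist simultaneously and can be processed in parallel.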

Finally, to make the scheme conservative, care needs to be taken when advancing a cell whose neighbor has already been partially advanced. Thus, the appropriate time integral is split into parts, which are computed separately:

$$\int_{t_n}^{t_{n+1}} \dots\, dt = \int_{t_n}^{t_1} \dots\, dt + \int_{t_1}^{t_2} \dots\, dt + \dots + \int_{t_{K-2}}^{t_{K-1}} \dots\, dt + \int_{t_{K-1}}^{t_{n+1}} \dots\, dt.$$

For example, we split the interval $[t_1^n, t_1^{n+1}]$ into the intervals $[t_1^n, t_2^{n+1}]$ and $[t_2^{n+1}, t_1^{n+1}]$, which yields

$$\int_{t_1^n}^{t_1^{n+1}} R_S(\tilde{p}_1, \tilde{p}_2)\, dt = \int_{t_1^n}^{t_2^{n+1}} R_S(\tilde{p}_1^n, \tilde{p}_2^n)\, dt + \int_{t_2^{n+1}}^{t_1^{n+1}} R_S(\tilde{p}_1^n, \tilde{p}_2^{n+1})\, dt.$$
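Splitting the time integral at the neighbor's time level does not change its value, which is the heart of the conservation argument: both cells then account for exactly the same flux. The following check illustrates this additivity for a polynomial integrand under two-point Gauss quadrature; the flux function rs is a stand-in for R_S, not from the text:

```python
import numpy as np

def gauss2(f, a, b):
    """Two-point Gauss-Legendre quadrature on [a, b] (exact up to degree 3)."""
    x = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    mid, h = 0.5 * (a + b), 0.5 * (b - a)
    return h * sum(f(mid + h * xi) for xi in x)

# stand-in for the surface term evaluated on (cubic) predictor polynomials
rs = lambda t: 1.0 + 2.0 * t - 3.0 * t**2 + t**3

whole = gauss2(rs, 0.0, 1.0)                         # over the full local step
split = gauss2(rs, 0.0, 0.4) + gauss2(rs, 0.4, 1.0)  # split at the neighbor's time level
print(abs(whole - split))   # agrees to rounding error
```

Since the quadrature is exact for the polynomial integrand, the split and unsplit flux integrals coincide, so the flux leaving one cell equals the flux entering the other.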


Figure 4.5: Sequence of steps 1-4 of a computation with 3 different elements and local time-stepping

Chapter 5

Solving equation systems

The application of an implicit scheme to the Navier-Stokes equations leads to a nonlinear or, in the case of Rosenbrock methods, linear system of equations. To solve systems of this form, different methods are known and used. As mentioned in the introduction, the question of the efficiency of an implicit scheme is decided by the solver for the equation systems. Therefore, this chapter is the longest of this book.

The outline is as follows: We will first describe properties of the systems at hand and general paradigms for iterative solvers. Then we will discuss methods for nonlinear systems, namely fixed point methods, multigrid methods and different variants of Newton’s method.

In particular, we make the point that dual time stepping multigrid as currently applied in industry can be vastly improved and that inexact Newton methods are an important alternative. We will then discuss Krylov subspace methods for the solution of linear systems and finally revisit Newton methods in the form of the easy-to-implement Jacobian-free Newton-Krylov methods.