
5. RBM Applied to a Nonlinear Elliptic System


Linearised Problem

In the next section we present an error estimator for the discretised elliptic problem which describes the charge transport in the positive electrode, cf. Theorem 5.4. The proof is only valid under some additional assumptions which are not motivated by physics: the diffusion coefficient µ1 is constant and the exchange current density satisfies µ3 > 0.

If we want to generalise the problem, we cannot use the techniques of that proof and need another approach. To this end we consider the linearised discretised equation system, cf. Section 4.6. In the next section we develop an error estimator for this problem as well. Numerical tests then show that this yields a convenient estimator for the nonlinear problem, cf. Section 5.4.

For an arbitrary µ ∈ D we consider the discretised equation system (5.2) and assume that F𝒩 ∶ R^Nx → R^Nx is continuously differentiable. Further let φ, φ̄ ∈ R^Nx be two vectors.

By Taylor expansion we get

F𝒩(φ; µ) = F𝒩(φ̄; µ) + DφF𝒩(φ̄; µ)·(φ − φ̄) + o(∥φ − φ̄∥).

We define the linear and the inhomogeneous part of the above equation:

LI,lin(φ̄; µ) ∶= DφF𝒩(φ̄; µ), (5.5)
blin(φ̄; µ) ∶= F𝒩(φ̄; µ) − DφF𝒩(φ̄; µ)·φ̄. (5.6)

With these notations we define the linearised equation as

Flin𝒩(φ; µ) ∶= LI,lin(φ̄; µ)·φ + blin(φ̄; µ) = 0. (5.7)
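As a quick sanity check of (5.5)–(5.7), the following Python sketch builds LI,lin and blin from a toy nonlinear system F and its Jacobian (both illustrative stand-ins, not the electrode model of this chapter) and verifies that the affine function Flin agrees with F up to first order around the linearisation point φ̄.

```python
import numpy as np

# Toy stand-in for the nonlinear system: tridiagonal stencil plus a sinh
# nonlinearity in the last component (illustrative only).
def F(phi):
    n = phi.size
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    out = A @ phi
    out[-1] += np.sinh(phi[-1])
    return out

def DF(phi):
    # Jacobian D_phi F of the toy system
    n = phi.size
    J = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    J[-1, -1] += np.cosh(phi[-1])
    return J

def linearise(phi_bar):
    # (5.5): L_I,lin := D_phi F(phi_bar)
    # (5.6): b_lin   := F(phi_bar) - D_phi F(phi_bar) @ phi_bar
    L = DF(phi_bar)
    return L, F(phi_bar) - L @ phi_bar

phi_bar = np.full(5, 0.1)
L, b = linearise(phi_bar)
phi = phi_bar + 1e-4                               # small perturbation of phi_bar
remainder = np.linalg.norm(F(phi) - (L @ phi + b))  # the o(||phi - phi_bar||) term
```

At φ = φ̄ the linearised function reproduces F exactly; nearby, the remainder is of higher order in ∥φ − φ̄∥.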

5.2. Error Estimators

We present an error estimator for the nonlinear equation (5.4) as well as for the linearised elliptic equation (5.7). Analogously to Section 2.5 we denote by φ𝒩(µ) ∈ X𝒩 the FV solution, which is represented by the coefficient vector φ𝒩(µ) ∈ R^Nx. The RB solution is denoted by φN(µ) ∈ XN and the corresponding coefficient vector by φN(µ) ∈ R^Nx. We define the residual for the discretised problem by

r(µ) ∶= F𝒩(φN(µ); µ), for µ ∈ D, (5.8)

i.e. the FV function evaluated at the RB solution.

Error Estimator for the Semi-Linear Elliptic Case

We reformulate our discretised problem (5.4) using the following notations:

LI(µ) = (1/∆x) µ1 M + ηµ3 diag([zeros(1,Nx−1),1],0), (5.9)

b(µ) = (−µ2, 0, . . . , 0, −ηµ3µ4)ᵀ − ∆x f(µ), (5.10)

g(u; µ) = µ3 (sinh(η(u_{Nx} − µ4)) − η(u_{Nx} − µ4)) e_{Nx},

where e_{Nx} ∈ R^Nx denotes the Nx-th unit vector.

With the linear part LI(µ), the inhomogeneous part b(µ) and the nonlinear part g(u; µ) defined above, our discretised problem (5.4) can now be formulated as

F𝒩(φ𝒩(µ); µ) = LI(µ)·φ𝒩(µ) + b(µ) + g(φ𝒩(µ); µ) (5.11)
= L̃I(µ)·φ𝒩(µ) + b̃(µ) + h(φ𝒩(µ); µ).

In order to prove the error estimator in Theorem 5.4 we first show that LI(µ) is an M-matrix, cf. Definition 2.28, and hence regular.

Corollary 5.3. For µ ∈ D the matrix LI(µ) in (5.9) is an M-matrix.

Proof. Let µ ∈ D. We prove that LI(µ) is a tridiagonal matrix which is irreducibly diagonally dominant. The diagonal entries of LI(µ) are given by

(LI(µ))_{i,i} = (1/∆x) µ1, i = 1,
(LI(µ))_{i,i} = (2/∆x) µ1, i ∈ {2, . . . , Nx−1},
(LI(µ))_{i,i} = (1/∆x) µ1 + ηµ3, i = Nx,

and the entries on the super- and sub-diagonal of LI(µ) are given by

(LI(µ))_{i,i+1} = (LI(µ))_{i+1,i} = −(1/∆x) µ1, i ∈ {1, . . . , Nx−1}.

The other entries of LI(µ) are zero: (LI(µ))_{i,j} = 0 for i, j ∈ {1, . . . , Nx}, j ∉ {i−1, i, i+1}. LI(µ) is weakly diagonally dominant in the rows i ∈ {1, . . . , Nx−1}:

∣(LI(µ))_{i,i}∣ = (2/∆x) µ1 = ∑_{j=1, j≠i}^{Nx} ∣(LI(µ))_{i,j}∣, i ≠ 1, i ≠ Nx,
∣(LI(µ))_{1,1}∣ = (1/∆x) µ1 = ∑_{j=2}^{Nx} ∣(LI(µ))_{1,j}∣,

and strictly diagonally dominant in the last row i = Nx:

∣(LI(µ))_{Nx,Nx}∣ = (1/∆x) µ1 + ηµ3 > (1/∆x) µ1 = ∑_{j=1}^{Nx−1} ∣(LI(µ))_{Nx,j}∣,

because µ1, µ3, η > 0. LI(µ) is irreducible, because its entries on the sub- and super-diagonal are nonzero (Proposition 2.30). Hence LI(µ) is irreducibly diagonally dominant with (LI(µ))_{i,j} ≤ 0 for i ≠ j and (LI(µ))_{i,i} > 0. By Theorem 2.31, LI(µ) is an M-matrix.
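The structure of LI(µ) and the M-matrix check of Corollary 5.3 can be reproduced numerically. The sketch below builds the matrix of (5.9) for illustrative parameter values (hypothetical, not taken from the parameter domain D of the thesis) and verifies the M-matrix sign pattern together with the entrywise nonnegativity of the inverse.

```python
import numpy as np

def build_LI(mu1, mu3, eta, dx, Nx):
    # Tridiagonal part (1/dx)*mu1*M with diagonal pattern (1, 2, ..., 2, 1),
    # plus the eta*mu3 contribution in the last diagonal entry, cf. (5.9).
    M = 2.0 * np.eye(Nx) - np.eye(Nx, k=1) - np.eye(Nx, k=-1)
    M[0, 0] = 1.0
    M[-1, -1] = 1.0
    L = (mu1 / dx) * M
    L[-1, -1] += eta * mu3
    return L

# illustrative values; mu1, mu3, eta > 0 as required in the proof
L = build_LI(mu1=0.5, mu3=0.2, eta=1.3, dx=0.1, Nx=6)

# M-matrix characterisation used here: correct sign pattern and L^{-1} >= 0
sign_ok = np.all(np.diag(L) > 0) and all(
    L[i, j] <= 0 for i in range(6) for j in range(6) if i != j)
inverse_nonneg = np.all(np.linalg.inv(L) >= -1e-12)
```

Without the strictly dominant last row the tridiagonal part alone would be singular (all row sums vanish); the ηµ3 term is exactly what makes LI(µ) regular.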

For the following theorem we need some notation. Referring to Definition 2.19 and Lemma 2.20, the constants of the norm equivalence between the p-norm and the maximum norm are denoted by a(p), b(p) > 0, such that

a(p)∥x∥_p ≤ ∥x∥_∞ ≤ b(p)∥x∥_p, ∀x ∈ R^Nx.

Theorem 5.4. For µ ∈ D let φ𝒩(µ) be the (inexact) FV solution and φN(µ) the RB solution of equation (5.2). By ∥⋅∥ we denote a p-vector norm and its corresponding lub-norm. Let the Newton tolerance for the FV solution be denoted by εNewton > 0, i.e.

∥F𝒩(φ𝒩(µ); µ)∥ < εNewton.

The residual is defined by r(µ) ∶= F𝒩(φN(µ); µ). Then the error e(µ) = φ𝒩(µ) − φN(µ) between the FV and the RB solution satisfies

∥e(µ)∥ ≤ (b(p)/a(p))² ∥LI(µ)^{−1}∥ (∥r(µ)∥ + εNewton). (5.12)

Proof. Let µ ∈ D be arbitrary. Using (5.11) we conclude that

F𝒩(φ𝒩(µ); µ) − F𝒩(φN(µ); µ) = LI(µ)(φ𝒩(µ) − φN(µ)) + b(µ) − b(µ) + g(φ𝒩(µ); µ) − g(φN(µ); µ)
= LI(µ)(φ𝒩(µ) − φN(µ)) + g(φ𝒩(µ); µ) − g(φN(µ); µ)
= L̃I(µ)e(µ) + h(φ𝒩(µ); µ) − h(φN(µ); µ).

Using the mean value theorem, cf. Theorem 2.11, the nonlinear part can be written as a nonnegative diagonal perturbation of LI(µ); together with the M-matrix property from Corollary 5.3 and the norm equivalence constants a(p) and b(p) this yields the estimate (5.12), where we applied the triangle inequality in the last step to bound ∥F𝒩(φ𝒩(µ); µ) − F𝒩(φN(µ); µ)∥ by ∥r(µ)∥ + εNewton.
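As a numerical illustration of the bound (5.12), the following Python sketch sets up a toy semilinear system of the same shape as (5.11): a linear tridiagonal part plus a monotone nonlinearity in the last component, evaluated for p = ∞ so that a(p) = b(p) = 1. All parameter values and the concrete nonlinearity are illustrative assumptions, not the electrode model itself.

```python
import numpy as np

# Toy system of the shape (5.11): F(phi) = LI @ phi + b + g(phi).
# Parameter values are hypothetical stand-ins.
Nx, dx, mu1, mu2, mu3, mu4, eta = 8, 0.1, 0.5, 0.7, 0.2, 0.3, 1.3

M = 2.0*np.eye(Nx) - np.eye(Nx, k=1) - np.eye(Nx, k=-1)
M[0, 0] = M[-1, -1] = 1.0
LI = (mu1/dx)*M
LI[-1, -1] += eta*mu3
b = np.zeros(Nx); b[0] = -mu2; b[-1] = -eta*mu3*mu4

def g(phi):
    # monotone nonlinear part, acting on the last component only
    out = np.zeros_like(phi)
    s = eta*(phi[-1] - mu4)
    out[-1] = mu3*(np.sinh(s) - s)
    return out

def F(phi):
    return LI @ phi + b + g(phi)

def DF(phi):
    J = LI.copy()
    J[-1, -1] += mu3*eta*(np.cosh(eta*(phi[-1] - mu4)) - 1.0)
    return J

phi_fv = np.zeros(Nx)                 # "FV" solution via a tight Newton iteration
for _ in range(30):
    phi_fv -= np.linalg.solve(DF(phi_fv), F(phi_fv))
eps_newton = np.linalg.norm(F(phi_fv), np.inf)

phi_rb = phi_fv + 1e-3*np.cos(np.arange(Nx))   # stand-in for an RB solution
r = F(phi_rb)                                   # residual (5.8)

bound = np.linalg.norm(np.linalg.inv(LI), np.inf) * (
    np.linalg.norm(r, np.inf) + eps_newton)
err = np.linalg.norm(phi_fv - phi_rb, np.inf)
```

In the maximum norm the error is indeed bounded by the right-hand side of (5.12): the monotone nonlinearity only strengthens the diagonal of the M-matrix, which is the mechanism the proof exploits.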

Remark 5.5. For the Euclidean norm we could also use the smallest singular value of the matrix

1_{Nx×Nx} + µ3η (cosh(η((φ𝒩(µ))_{Nx} − µ4)) − 1) (LI(µ))^{−1} diag([zeros(1,Nx−1),1],0)

to estimate the norm of its inverse, cf. Proposition 2.24. Since φ is not bounded we cannot estimate the smallest singular value and have to estimate the error in the Euclidean norm with the equivalence constant of the maximum norm, too. For the parabolic problem we can use this approach, cf. Proposition 6.6. In particular it is

∥e(µ)∥_2 ≤ Nx ∥LI(µ)^{−1}∥_2 (εNewton + ∥r(µ)∥_2).

Error Estimator for the Linearised Problem

In this subsection we consider the linearised problem (5.7). For an arbitrary µ ∈ D we set φ = φ𝒩(µ) and φ̄ = φN(µ). The error is again defined by e(µ) ∶= φ𝒩(µ) − φN(µ). The residual to the linearised problem is defined as

rlin(µ) ∶= Flin𝒩(φN(µ); µ), (5.15)

and since φ̄ = φN(µ), the residual to the linearised problem rlin(µ) and the residual to the nonlinear problem r(µ) coincide.
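The coincidence of the two residuals for φ̄ = φN(µ) can be checked directly: the linear and quadratic terms of the linearisation cancel at the linearisation point. The toy F and Jacobian below are illustrative stand-ins, not the FV system of the thesis.

```python
import numpy as np

# Toy stand-in for the FV system and its Jacobian (illustrative only).
def F(phi):
    return np.array([2.0*phi[0] - phi[1],
                     -phi[0] + 2.0*phi[1] + np.sinh(phi[1])])

def DF(phi):
    return np.array([[2.0, -1.0],
                     [-1.0, 2.0 + np.cosh(phi[1])]])

phi_rb = np.array([0.3, -0.2])   # stand-in for an RB solution phi_N(mu)
phi_bar = phi_rb                 # linearisation point chosen as the RB solution

# r_lin = F_lin(phi_rb) with F_lin(phi) = F(phi_bar) + DF(phi_bar) @ (phi - phi_bar)
r_lin = F(phi_bar) + DF(phi_bar) @ (phi_rb - phi_bar)
r = F(phi_rb)                    # nonlinear residual (5.8)
```

The correction term vanishes at φ = φ̄, so r_lin and r agree exactly, not only approximately.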

Proposition 5.6. Let µ ∈ D be arbitrary and φ̄ ∈ R^Nx. Then the matrix LI,lin(φ̄; µ) in (5.5) is an M-matrix.

Proof. Let µ ∈ D and φ̄ ∈ R^Nx be arbitrary. The tridiagonal matrix LI,lin(φ̄; µ) is given by

LI,lin(φ̄; µ) = (1/∆x) µ1 M + µ3η cosh(η(φ̄_{Nx} − µ4)) diag([zeros(1,Nx−1),1],0).

LI,lin(φ̄; µ) is irreducible, because its entries on the secondary diagonals are nonzero (Proposition 2.30). Further, LI,lin(φ̄; µ) is weakly diagonally dominant in rows 1 to Nx−1 and strictly diagonally dominant in the Nx-th row. Additionally, it is (LI,lin(φ̄; µ))_{i,i} > 0 and (LI,lin(φ̄; µ))_{i,j} ≤ 0 for i, j ∈ {1, . . . , Nx}, i ≠ j. Thus LI,lin(φ̄; µ) is an M-matrix (Theorem 2.31).

In particular the matrix LI,lin(φ̄; µ) is regular. With these preliminaries we get the following

Theorem 5.7. For an arbitrary µ ∈ D, let φ𝒩(µ) denote the FV and φN(µ) the RB solution to equation (5.4), respectively. Further we denote by ∥⋅∥ a p-vector norm and its corresponding lub-norm. Let the linearised function evaluated at the FV solution be estimated by ∥Flin𝒩(φ𝒩(µ); µ)∥ ≤ εNew,lin(µ) for all µ ∈ D. Then the error to the linearised problem can be estimated by

∥e(µ)∥ ≤ ∥LI,lin(φN(µ); µ)^{−1}∥ (∥rlin(µ)∥ + εNew,lin(µ)). (5.16)

Proof. Let µ ∈ D be arbitrary. Using (5.7) we have

Flin𝒩(φ𝒩(µ); µ) − Flin𝒩(φN(µ); µ) = LI,lin(φN(µ); µ)·e(µ)

⇒ ∥e(µ)∥ = ∥(LI,lin(φN(µ); µ))^{−1} (Flin𝒩(φ𝒩(µ); µ) − Flin𝒩(φN(µ); µ))∥.

Since the matrix norm is consistent with the vector norm, we obtain by the triangle inequality

∥e(µ)∥ ≤ ∥(LI,lin(φN(µ); µ))^{−1}∥ (∥Flin𝒩(φ𝒩(µ); µ)∥ + ∥Flin𝒩(φN(µ); µ)∥)
≤ ∥(LI,lin(φN(µ); µ))^{−1}∥ (εNew,lin(µ) + ∥rlin(µ)∥).
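The linearised bound (5.16) can be illustrated numerically. In the sketch below (a toy system with illustrative values, not the electrode model), εNew,lin is taken as the exactly computed ∥Flin(φ𝒩)∥; in practice it would be replaced by the cheaper estimate developed below.

```python
import numpy as np

Nx = 6
A = 2.0*np.eye(Nx) - np.eye(Nx, k=1) - np.eye(Nx, k=-1)

def F(phi):                        # toy stand-in for the FV system
    out = A @ phi - 0.3
    out[-1] += np.sinh(phi[-1])
    return out

def DF(phi):                       # Jacobian; equals L_I,lin at the linearisation point
    J = A.copy()
    J[-1, -1] += np.cosh(phi[-1])
    return J

phi_fv = np.zeros(Nx)              # tight Newton iteration for the "FV" solution
for _ in range(30):
    phi_fv -= np.linalg.solve(DF(phi_fv), F(phi_fv))

phi_rb = phi_fv + 1e-3*np.sin(1.0 + np.arange(Nx))   # stand-in RB solution
L_lin = DF(phi_rb)                                    # L_I,lin(phi_rb; mu)

r_lin = F(phi_rb)                                     # = r(mu), since phi_bar = phi_rb
Flin_fv = F(phi_rb) + L_lin @ (phi_fv - phi_rb)       # linearised function at FV solution
eps_lin = np.linalg.norm(Flin_fv, np.inf)             # plays the role of eps_New,lin(mu)

bound = np.linalg.norm(np.linalg.inv(L_lin), np.inf) * (
    np.linalg.norm(r_lin, np.inf) + eps_lin)
err = np.linalg.norm(phi_fv - phi_rb, np.inf)
```

With the exact εNew,lin the bound follows from the triangle inequality alone, so the error never exceeds it; the quantity eps_lin is of higher order in the error, which is why approximating it cheaply is worthwhile.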

The assumption that we can estimate the linearised function evaluated at the FV solution,

∥Flin𝒩(φ𝒩(µ); µ)∥ ≤ εNew,lin(µ), for µ ∈ D,

is obvious, because by Taylor expansion it is

F𝒩(φ𝒩(µ); µ) = F𝒩(φN(µ); µ) + DφF𝒩(φN(µ); µ)·(φ𝒩(µ) − φN(µ)) + o(∥φ𝒩(µ) − φN(µ)∥),

assuming that F𝒩(⋅; µ) ∶ R^Nx → R^Nx is continuously differentiable for every µ ∈ D. But how do we find an upper bound εNew,lin(µ) which is close to ∥Flin𝒩(φ𝒩(µ); µ)∥ and faster to compute? Of course its computation should be faster than computing ∥Flin𝒩(φ𝒩(µ); µ)∥ itself. We do not obtain a rigorous upper bound but an approximation of this quantity. More precisely, we prove

Proposition 5.8. Let µ ∈ D be arbitrary. Let φ𝒩(µ) ∈ R^Nx be the representative of the FV and φN(µ) ∈ R^Nx the representative of the RB solution to (5.4). We assume that DφF𝒩(φN(µ); µ) is regular.¹ By φNew(µ) ∈ R^Nx we denote the result of one Newton step applied to the FV system, starting from φN(µ), i.e.

φNew(µ) = φN(µ) − DφF𝒩(φN(µ); µ)^{−1} F𝒩(φN(µ); µ). (5.17)

Then

∥Flin𝒩(φ𝒩(µ); µ)∥ ≤ ∥DφF𝒩(φN(µ); µ)∥·∥φ𝒩(µ) − φNew(µ)∥
≈ ∥DφF𝒩(φN(µ); µ)∥·∥φN(µ) − φNew(µ)∥. (5.18)

Proof of Proposition 5.8. Let µ ∈ D be arbitrary. We denote the linearised function of (5.7) as

Flin𝒩(φ𝒩(µ); φN(µ); µ) = F𝒩(φN(µ); µ) + DφF𝒩(φN(µ); µ)·(φ𝒩(µ) − φN(µ)).

¹This property is already necessary to use the Newton method.


Using (5.17) we add a “zero” to this term and get

Flin𝒩(φ𝒩(µ); φN(µ); µ) = F𝒩(φN(µ); µ) + DφF𝒩(φN(µ); µ)·(φ𝒩(µ) − φN(µ))
− [F𝒩(φN(µ); µ) + DφF𝒩(φN(µ); µ)·(φNew(µ) − φN(µ))]
= DφF𝒩(φN(µ); µ)·(φ𝒩(µ) − φNew(µ)),

where the term in square brackets vanishes by (5.17). Consequently, we derive that

∥Flin𝒩(φ𝒩(µ); φN(µ); µ)∥ ≤ ∥DφF𝒩(φN(µ); µ)∥ ∥φ𝒩(µ) − φNew(µ)∥.

It remains to prove

∥φ𝒩(µ) − φNew(µ)∥ ≈ ∥φN(µ) − φNew(µ)∥.

The function

DφF𝒩 ∶ R^Nx × D → R^{Nx×Nx}, (φ; µ) ↦ (1/∆x) µ1 M + µ3η cosh(η(φ_{Nx} − µ4)) diag([zeros(1,Nx−1),1],0),

is continuously differentiable with respect to φ and thus locally Lipschitz continuous with a Lipschitz constant LLip > 0, cf. [For13, Section 12, Theorem 2]. With this we get

Flin𝒩(φ𝒩(µ); φN(µ); µ) = F𝒩(φN(µ); µ) + DφF𝒩(φN(µ); µ)·(φ𝒩(µ) − φN(µ))
= −∫₀¹ [(DφF𝒩(φN(µ) + s(φ𝒩(µ) − φN(µ)); µ) − DφF𝒩(φN(µ); µ)) (φ𝒩(µ) − φN(µ))] ds + F𝒩(φ𝒩(µ); µ).

Thus it follows that

∥Flin𝒩(φ𝒩(µ); φN(µ); µ)∥
≤ ∫₀¹ (∥DφF𝒩(φN(µ) + s(φ𝒩(µ) − φN(µ)); µ) − DφF𝒩(φN(µ); µ)∥ ∥φ𝒩(µ) − φN(µ)∥) ds + εNewton
≤ (1/2) LLip ∥φ𝒩(µ) − φN(µ)∥² + εNewton.

Accordingly, if the RB solution converges to the FV solution, i.e. φN(µ) → φ𝒩(µ), then the upper bound for the linearised FV function evaluated at the FV solution, ∥Flin𝒩(φ𝒩(µ); φN(µ); µ)∥, is asymptotically bounded by εNewton. We define the sequence

x(j+1) = x(j) − DφF𝒩(x(j); µ)^{−1} F𝒩(x(j); µ)

with start vector x(0) = φN(µ), j ∈ N0. Since this is the Newton iteration, the sequence converges locally quadratically. So by Lemma 2.10 (with convergence order p = 2) we know that

lim_{j→∞} ∥x(j+1) − x(j)∥ / ∥x(j) − φ𝒩(µ)∥ = 1.

Hence

∥x(j+1) − x(j)∥ ≈ ∥x(j) − φ𝒩(µ)∥

for j ≥ j0 with j0 ∈ N sufficiently large. We set x(j) = φN(µ) and x(j+1) = φNew(µ). Then we estimate

∥Flin𝒩(φ𝒩(µ); µ)∥ ≤ ∥DφF𝒩(φN(µ); µ)∥·∥φ𝒩(µ) − φNew(µ)∥
≈ ∥DφF𝒩(φN(µ); µ)∥·∥φN(µ) − φNew(µ)∥,

if we perform sufficiently many iterations.

Remark 5.9. To estimate ∥Flin𝒩(φ𝒩(µ); µ)∥ with Proposition 5.8 we stop in the numerical tests after one iteration, i.e. we use the first Newton step. Otherwise we could just as well compute the FV solution for the parameter µ ∈ D itself, and the error estimator would not be needed anymore.

We remark that if we do not take ∥Flin𝒩(φ𝒩(µ); µ)∥ into account in the error estimator for the linearised problem, the estimator is not rigorous. That means it can underestimate the error. We investigate this fact in the numerical tests.
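The one-step strategy of Remark 5.9 can be sketched as follows: one Newton step (5.17) is taken from the RB solution, and the product in (5.18) serves as a cheap surrogate for ∥Flin𝒩(φ𝒩(µ); µ)∥. The toy system below is an illustrative stand-in for the FV system, not the electrode model.

```python
import numpy as np

Nx = 6
A = 2.0*np.eye(Nx) - np.eye(Nx, k=1) - np.eye(Nx, k=-1)

def F(phi):                        # toy stand-in for the FV system
    out = A @ phi - 0.2
    out[-1] += np.sinh(phi[-1])
    return out

def DF(phi):                       # its Jacobian D_phi F
    J = A.copy()
    J[-1, -1] += np.cosh(phi[-1])
    return J

phi_fv = np.zeros(Nx)              # "FV" solution via a tight Newton iteration
for _ in range(30):
    phi_fv -= np.linalg.solve(DF(phi_fv), F(phi_fv))

phi_rb = phi_fv + 1e-3*np.sin(1.0 + np.arange(Nx))    # stand-in RB solution

# One Newton step (5.17), started at the RB solution
phi_new = phi_rb - np.linalg.solve(DF(phi_rb), F(phi_rb))

# Cheap estimate (5.18) for ||F_lin(phi_fv)||, versus the exact quantity
estimate = np.linalg.norm(DF(phi_rb), np.inf) * np.linalg.norm(phi_rb - phi_new, np.inf)
exact = np.linalg.norm(F(phi_rb) + DF(phi_rb) @ (phi_fv - phi_rb), np.inf)
```

Since the single Newton step already lands very close to the FV solution, ∥φ𝒩(µ) − φNew(µ)∥ is of higher order, and the surrogate based on ∥φN(µ) − φNew(µ)∥ comfortably dominates the exact quantity here; as Remark 5.9 and the discussion above note, this is an approximation, not a rigorous bound.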