

5.4. Numerical Tests

In this section the error estimators and the residual are compared to the true error for a certain parameter training set: for every Greedy step we compare the value and the position of the largest (estimated) error, depending on the norm (maximum or Euclidean) and on the estimator used in the Greedy algorithm.

The considered parameter set is

D_ad = (0.3, 0.75) × (−2.5, 2.5) × (0.3, 0.5) × (0.3, 0.5).    (5.21)

Its discretisation as well as the remaining parameters are listed in Table 5.1. So our parameter training set D_train = {0.3, 0.35, . . . , 0.75} × {−2.5, −2.0, . . . , 2.5} × {0.3, 0.4, 0.5} × {0.3, 0.4, 0.5} has 990 elements.
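As a quick sanity check, the cardinality of this training set can be reproduced from the grids above; the following Python sketch is illustrative only and not part of the thesis implementation:

    import itertools
    import numpy as np

    # Parameter grids of D_train, read off (5.21) and Table 5.1.
    mu1 = np.round(np.arange(0.30, 0.751, 0.05), 2)  # 0.30, 0.35, ..., 0.75 -> 10 values
    mu2 = np.round(np.arange(-2.5, 2.51, 0.5), 1)    # -2.5, -2.0, ..., 2.5  -> 11 values
    mu3 = [0.3, 0.4, 0.5]                            # 3 values
    mu4 = [0.3, 0.4, 0.5]                            # 3 values

    D_train = list(itertools.product(mu1, mu2, mu3, mu4))
    print(len(D_train))  # 10 * 11 * 3 * 3 = 990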

We compute the FV solutions for the complete parameter set D_train using the Newton tolerance ε_Newton,abs = 10^-10 in the maximum norm. This tolerance is also used for the computation of the RB solutions.
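For illustration, a minimal Newton loop with this absolute stopping criterion in the maximum norm could look as follows; here `residual` and `jacobian` stand for the discrete operator F^N and its Jacobian and are assumptions, not the thesis code:

    import numpy as np

    def newton(residual, jacobian, x0, tol_abs=1e-10, max_iter=50):
        """Solve residual(x) = 0 by Newton's method; stop when the residual
        drops below tol_abs in the maximum norm (eps_Newton,abs = 1e-10)."""
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(max_iter):
            F = residual(x)
            if np.linalg.norm(F, ord=np.inf) < tol_abs:
                return x
            x -= np.linalg.solve(jacobian(x), F)
        raise RuntimeError("Newton iteration did not converge")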

We use the strong Greedy algorithm to generate our RB model. This means that the true error between the FV and the RB solution itself is used as the selection criterion, cf. Algorithm 4.2.
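A minimal Python sketch of such a strong Greedy loop is given below; the helpers `fv_solve`, `rb_solve` and `extend_basis` are placeholders for the FV solver, the RB solver (returning the RB solution expressed in FV coordinates) and the orthonormalised basis extension of Algorithm 4.2, so the snippet only illustrates the selection logic:

    import numpy as np

    def strong_greedy(D_train, fv_solve, rb_solve, extend_basis,
                      mu_init, tol=1e-6, norm_ord=np.inf):
        """Strong Greedy: in every step pick the parameter with the largest
        true error between the FV and the RB solution."""
        fv = {mu: fv_solve(mu) for mu in D_train}    # all FV snapshots
        basis = extend_basis(None, fv[mu_init])      # initial (orthonormalised) basis
        chosen = [mu_init]
        while True:
            errors = {mu: np.linalg.norm(fv[mu] - rb_solve(mu, basis), ord=norm_ord)
                      for mu in D_train}
            mu_star, err = max(errors.items(), key=lambda item: item[1])
            if err < tol:                            # Greedy tolerance reached
                return basis, chosen
            basis = extend_basis(basis, fv[mu_star])
            chosen.append(mu_star)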

The first parameter is determined by an initial guess; we choose µ^(1) = (0.3, −2.5, 0.3, 0.3). We compare the Greedy times as well as the parameter choices for the different Greedy algorithm types.

                   t [s]
training set       4.26·10^2
RB solution        7.38·10^-3
FV solution        3.13·10^-1
offline complete   7.93·10^2

Table 5.2.: Numerical test: computational time of the whole training set, of a RB solution to one parameter (on average), of a FV solution to one parameter (on average) and of the complete offline time, i.e. including the computation of all different error types.

                                     t [s], max. norm   t [s], Eucl. norm
strong Greedy                        4.50·10^2          4.50·10^2
weak Greedy (error est.)             6.66·10^1          2.01·10^2
weak Greedy (error est. lin.)        6.07·10^1          2.01·10^2
weak Greedy (error est. lin. eps.)   1.03·10^2          2.42·10^2
residual based Greedy                3.87·10^1          3.49·10^2

Table 5.3.: Numerical test: computational time of the offline phase for the different Greedy types in the maximum and Euclidean norm: strong Greedy (error based), weak Greedy (based on the error estimator for the nonlinear problem), weak Greedy (based on the error estimator for the linearised problem, ε_Newton,lin = 0), weak Greedy (based on the error estimator for the linearised problem, considering ε_Newton,lin) and weak Greedy (residual based).

As reference for the parameter choice in the Greedy algorithm we use the strong one. Further we consider the error estimator for the nonlinear problem (eq. (5.12), error est.), the error estimator for the linearised problem with ε_Newton,lin(µ) = 0 for all µ ∈ D_train (eq. (5.16), error est. lin.), the error estimator for the linearised problem considering ε_Newton,lin(µ), µ ∈ D_train (eq. (5.16), (5.18), error est. lin. eps.), and the residual (eq. (5.8)), which agrees with the residual of the linearised problem.

The Greedy algorithm stops after the third Greedy step because the desired Greedy tolerance in the maximum norm of ε_Greedy,∞ = 10^-6 is reached. As basis vectors the (orthonormalised) FV solutions for the parameters µ^(1) = (0.3, −2.5, 0.3, 0.3), µ^(2) = (0.75, 1.0, 0.3, 0.5) and µ^(3) = (0.3, 2.5, 0.5, 0.3) are used. The computational times for the computation of the whole training set, of a RB solution for one parameter (on average), of a FV solution for one parameter (on average) and of the complete offline time, i.e. including the computation of all different error types, can be found in Table 5.2.

Table 5.3 contains the computational times for the different estimators in the present Greedy run with respect to the maximum norm and the Euclidean norm. The computational times include the time needed to compute the required FV solutions. So in the strong Greedy algorithm the computational time for computing FV solutions for the complete training set is included.

If we exploit the almost linear structure of our present problem (5.4), i.e. that the nonlinearity acts only on the boundary, the speed-up factor of the RB solution in comparison to the FV solution is 42: instead of 500 unknowns in the FV model we solve an equation system with only 3 unknowns in the RB model.

       error               residual            error est.
k      pos.   val.         pos.   val.         pos.   val.
1      740    7.12·10^0    770    3.86·10^0    770    6.56·10^4
2      321    8.94·10^-3   321    1.08·10^-2   101    1.72·10^2
3      21     1.24·10^-7   11     1.96·10^-8   11     3.45·10^-4

       error est. lin.     error est. lin. eps.
k      pos.   val.         pos.   val.
1      740    2.25·10^4    730    1.77·10^8
2      51     8.70·10^1    51     4.37·10^4
3      681    7.80·10^-5   820    7.19·10^-1

Table 5.4.: Numerical test: parameter choice depending on the estimator with respect to the maximum norm: error, residual, error estimator for the nonlinear problem, error estimator for the linearised problem with ε_Newton,lin = 0, error estimator for the linearised problem considering ε_Newton,lin.

       error               residual            error est.
k      pos.   val.         pos.   val.         pos.   val.
1      740    1.33·10^2    770    4.85·10^0    770    3.26·10^7
2      321    1.46·10^-1   321    1.52·10^-2   101    8.47·10^4
3      681    2.78·10^-6   11     1.96·10^-8   11     1.70·10^-1

       error est. lin.     error est. lin. eps.
k      pos.   val.         pos.   val.
1      740    2.22·10^4    730    1.76·10^8
2      51     8.56·10^1    51     4.31·10^4
3      681    7.30·10^-5   490    7.09·10^-1

Table 5.5.: Numerical test: parameter choice depending on the estimator with respect to the Euclidean norm: error, residual, error estimator for the nonlinear problem, error estimator for the linearised problem with ε_Newton,lin = 0, error estimator for the linearised problem considering ε_Newton,lin.

Estimating the error in the Euclidean norm is more expensive than in the maximum norm, cf. Table 5.3. The estimator for the linearised problem considering ε_Newton,lin is only slightly more expensive than the estimator for the linearised problem with ε_Newton,lin(µ) = 0.

Table 5.4 lists the parameter choice in the Greedy algorithm depending on the different estimators with respect to the maximum norm, and Table 5.5 with respect to the Euclidean norm. Furthermore, the value of the error (estimator) at this parameter (position) is shown.

The parameter choice for the error in comparison to any error estimator does not agree in every Greedy step, neither for the maximum norm nor for the Euclidean norm. All error estimators overestimate the error, cf. Figure 5.1. With respect to the maximum norm, the error estimator for the nonlinear problem and the error estimator for the linearised problem with ε_Newton,lin = 0 overestimate the error by about two orders of magnitude, and the error estimator where ε_Newton,lin is considered by about four orders of magnitude. Considering the Euclidean norm, the error estimator for the nonlinear problem overestimates the error by about four orders of magnitude, i.e. as much as the error estimator for the linearised problem with ε_Newton,lin ≠ 0. This is because the prefactor of this estimator is 1 for the maximum norm and N_x = 500 for the Euclidean norm, cf. Theorem 5.4. Except for this issue the trend is the same no matter which norm is considered. The residual underestimates the error in the Euclidean norm; in the maximum norm it is closer to the error but still underestimates it most of the time.

For the error estimator for the linearised problem we check how appropriate our estimation of ‖F^N_lin(φ^N(µ); µ)‖ by ε_Newton,lin(µ) is by computing the exact value, cf. Proposition 5.8: after the first Greedy step the maximum over all parameters is smaller than 10^4 in the maximum norm, after the second Greedy step it is smaller than 10^2 and after the third it is smaller than 10^-5. In the Euclidean norm the maximum of ‖F^N_lin(φ^N(µ); µ)‖ over all parameters is in each Greedy step one order of magnitude smaller than in the maximum norm. This means that (as expected) the value of ‖F^N_lin(φ^N(µ); µ)‖ gets smaller the better the reduced solution approximates the true solution. The maximum of the estimator ε_Newton,lin(µ) over all parameters is, in the maximum norm, on the same order of magnitude as ‖F^N_lin(φ^N(µ); µ)‖ in the first and second Greedy step; in the last Greedy step it is one order of magnitude bigger. In the Euclidean norm the maximum of the estimator ε_Newton,lin(µ) lies five to seven orders of magnitude above the exact value ‖F^N_lin(φ^N(µ); µ)‖. This means that in this example our estimation of the value ‖F^N_lin(φ^N(µ); µ)‖ is adequate for the maximum norm but not for the Euclidean norm.

(Figure 5.1, left panel: "Error and Error Estimators in the Maximum Norm"; right panel: "Error and Error Estimators in the Euclidean Norm". Both panels show, over the Greedy step number 1-3, the curves: error, residual, error est., error est. lin. and error est. lin. eps.)

Figure 5.1.: Numerical test: decay of the error and its estimators depending on the number of basis vectors with respect to the maximum (left) and Euclidean norm (right).

In Figure 5.2 the three basis vectors which generate the RB model are plotted.

The presented numerical example is a simple one, but we can summarise that the error estimator for the nonlinear problem gives the same qualitative results as the error estimator for the linearised problem with respect to the maximum norm. In the Euclidean norm the error estimator for the nonlinear problem is around N_x = 5·10^2 times bigger than the error estimator for the linearised problem. This statement will be reinforced in the next chapter, where we will also have a closer look at the usage of the different Greedy algorithm types.

(Figure 5.2, panel "Basis Vectors": the three basis vectors plotted over the spatial coordinate, labelled Greedy Step 1, 2 and 3.)

Figure 5.2.: Numerical test: basis vectors for the RB model.

Before that, we apply our developed RB model to parameter estimation and to sensitivity analysis in the next two sections.

5.5. Parameter Estimation

In this section we use our developed RB model to identify parameters. We have a given terminal voltage U_req ∈ R and we want to estimate a parameter set µ^(est) = (µ^(est,1), µ^(est,2), µ^(est,3), µ^(est,4))^T ∈ R^4, which belongs to our admissible parameter space D_ad (5.21), such that U_ter(µ^(est)) is closest to U_req. The cost function J^N : R^{N_x} × D → R for the discretised problem is defined as

J^N(φ^N(µ); µ) := (1/2) (φ^N_{N_x}(µ) − φ^N_1(µ) − U_req)^2.    (5.22)

We assume that Assumption 2.36 holds and write Ĵ^N(µ) := J^N(φ^N(µ); µ) for µ ∈ D_ad, cf. (2.13). The optimisation problem is then given by

min_{µ ∈ D} Ĵ^N(µ) subject to µ ∈ D_ad.    (5.23)

Since we have a formulation for the FV model, we can easily derive a formulation for the RB model. In the following numerical tests we compare the results obtained by the FV model and the RB model.
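A minimal Python sketch of the discrete cost function (5.22); here `solve` stands for either the FV or the RB solver returning the potential vector φ^N(µ) and is an assumption, not the thesis code:

    import numpy as np

    def terminal_voltage(phi):
        """U_ter(mu) = phi^N_{N_x}(mu) - phi^N_1(mu) for a potential vector phi."""
        return phi[-1] - phi[0]

    def J_hat(mu, solve, U_req):
        """Reduced cost J_hat^N(mu) = 1/2 (U_ter(mu) - U_req)^2, cf. (5.22)."""
        return 0.5 * (terminal_voltage(solve(mu)) - U_req) ** 2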

5.5.1. Numerical Tests

For the parameter estimation we use the Matlab routine fmincon with the SQP algorithm, cf. Subsection 2.6.2, and a user-defined gradient. The gradient of J^N is given by

∇_µ J^N(φ^N(µ); µ) = (φ^N_{N_x}(µ) − φ^N_1(µ) − U_req) · ∇_µ(φ^N_{N_x}(µ) − φ^N_1(µ)).    (5.24)
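A sketch of this gradient in Python; `solve(mu)` and `dUter_dmu(mu)` (the sensitivity ∇_µ(φ^N_{N_x}(µ) − φ^N_1(µ)) obtained from the sensitivity equation, cf. Section 4.9) are assumed helpers, not the thesis implementation:

    import numpy as np

    def grad_J_hat(mu, solve, dUter_dmu, U_req):
        """Gradient of J_hat^N, cf. (5.24): misfit of the terminal voltage
        times the sensitivity of the terminal voltage."""
        phi = solve(mu)
        misfit = (phi[-1] - phi[0]) - U_req
        return misfit * np.asarray(dUter_dmu(mu))  # vector of length 4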


MaxIter   100
TolFun    10^-6
TolCon    10^-6
TolX      10^-10
µ_lb      (0.3, −2.5, 0.3, 0.3)
µ_ub      (0.75, 2.5, 0.5, 0.5)

Table 5.6.: Numerical tests for the parameter estimation: setting.

                 FVM                                RBM
U_req [V]        −2.4825
µ_req            (0.4, 1, 0.4, 0.4)
µ_init           (0.7, 1, 0.4, 0.4)
µ_est            (0.7390, 1.8433, 0.4000, 0.4000)   (0.7390, 1.8433, 0.4000, 0.4000)
t [s]            6.67                               1.93
function eval.   10                                 10
iterations       7                                  7
residual         2.17·10^-20                        2.17·10^-20

Table 5.7.: Test 1 for the parameter estimation: estimation of µ1.

The second factor, ∇_µ(φ^N_{N_x}(µ) − φ^N_1(µ)), of equation (5.24) is determined by solving the sensitivity equation, cf. Section 4.9, to obtain the sensitivity of the terminal voltage.

The options for the fmincon routine in Matlab are listed in Table 5.6: we set a maximum of 100 iteration steps (MaxIter), the tolerance for the (cost) function to 10^-6 (TolFun), the tolerance for the violation of the constraints to 10^-6 (TolCon) and the tolerance for the smallest step size to 10^-10 (TolX). Further, the lower and upper bounds for the parameters are given by µ_lb = (0.3, −2.5, 0.3, 0.3) and µ_ub = (0.75, 2.5, 0.5, 0.5), respectively. Thus we search for the desired parameter only where the RB model is valid.
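For reference, an analogous setup with SciPy's SLSQP (an SQP-type method) is sketched below; the bounds and tolerances mirror Table 5.6, while `J_hat` and `grad_J_hat` are the (assumed) cost and gradient routines from the sketches above, so this is not the Matlab implementation used in the thesis:

    import numpy as np
    from scipy.optimize import minimize

    def estimate_parameter(J_hat, grad_J_hat, mu_init):
        """Bound-constrained estimation mirroring the fmincon/SQP setting of
        Table 5.6; J_hat(mu) and grad_J_hat(mu) are assumed unary callables
        (e.g. the sketches above with the solver and U_req bound beforehand)."""
        bounds = [(0.30, 0.75), (-2.5, 2.5), (0.3, 0.5), (0.3, 0.5)]  # mu_lb, mu_ub
        return minimize(J_hat, np.asarray(mu_init, dtype=float), jac=grad_J_hat,
                        method="SLSQP", bounds=bounds,
                        options={"maxiter": 100, "ftol": 1e-6})  # MaxIter, TolFun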

We run four different numerical tests, cf. Tables 5.7, 5.8, 5.9 and 5.10. In the first two tests we vary the parameter µ1, in the third and fourth µ2. Within each pair we switch the roles of the required and the initial parameter. The other parameters are fixed at µ1^ref = 0.5, µ2^ref = 1, µ3^ref = 0.5 and µ4^ref = 0.5. While varying the parameters we also vary the required voltage U_req in all tests. In the tables we list the required parameter µ_req, the initial value µ_init, the estimated value µ_est, the elapsed time, the number of function evaluations, the number of iterations and the residual, which here is the corresponding cost function evaluated at the computed parameter µ_est. In all four tests the optimisation process stops because a local minimum is found.

We do not list the parameter estimation for the parameters µ3 and µ4 here: we set both parameters once to the value 0.35 and once to 0.45 (the other three parameters were fixed). The terminal voltage was always U_ter = −1.9860 V, as in the fourth test, cf. Table 5.10. Hence the optimisation would stop after the first function evaluation.

For the different numerical tests the RB model yields the same results as the FV model.

In all tests the optimisation with the RB model is around four times faster than the optimisation using the FV model.

                 FVM                                RBM
U_req [V]        −1.4186
µ_req            (0.7, 1, 0.4, 0.4)
µ_init           (0.4, 1, 0.4, 0.4)
µ_est            (0.7488, 1.0693, 0.4000, 0.4000)   (0.7488, 1.0693, 0.4000, 0.4000)
t [s]            6.62                               1.77
function eval.   11                                 11
iterations       6                                  6
residual         8.53·10^-22                        8.53·10^-22

Table 5.8.: Test 2 for the parameter estimation: estimation of µ1.

                 FVM                                RBM
U_req [V]        2.0060
µ_req            (0.5, −1, 0.4, 0.4)
µ_init           (0.5, 1, 0.4, 0.4)
µ_est            (0.6304, −1.2621, 0.4000, 0.4000)  (0.6304, −1.2621, 0.4000, 0.4000)
t [s]            8.36                               2.23
function eval.   12                                 12
iterations       8                                  8
residual         1.51·10^-15                        1.51·10^-15

Table 5.9.: Test 3 for the parameter estimation: estimation of µ2.

                 FVM                                RBM
U_req [V]        −1.9860
µ_req            (0.5, 1, 0.4, 0.4)
µ_init           (0.5, −1, 0.4, 0.4)
µ_est            (0.6304, 1.2596, 0.4000, 0.4000)   (0.6304, 1.2596, 0.4000, 0.4000)
t [s]            8.35                               2.28
function eval.   12                                 12
iterations       8                                  8
residual         1.55·10^-15                        1.55·10^-15

Table 5.10.: Test 4 for the parameter estimation: estimation of µ2.


For a single solution the speed-up is 42, cf. Table 5.2, i.e. setting up the optimisation problem is more time consuming than the computation of the solution itself.

The results of the optimisation lead to the observation that µ3 and µ4 have no influence on the terminal voltage. The parameters µ1 and µ2 seem to have an impact, but they can influence the terminal voltage in the same way. These assumptions will be checked in the next section.

5.6. Sensitivity Analysis

In this section we discuss the results from the preceding section. In particular we want to examine, via a sensitivity analysis, why we cannot identify the parameters µ3 and µ4 in the parameter estimation for the terminal voltage. Therefore we consider the sensitivity equation, cf. equation (2.16), for our special case in the discretised form:

F^N_φ(φ^N(µ); µ) · δ_d φ^N = −F^N_µ(φ^N(µ); µ) · d,    (5.25)

where F^N : R^{N_x} × D → R^{N_x} is the equation which describes the charge transport in the positive electrode, cf. (5.4).
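A minimal Python sketch of how (5.25) can be solved numerically; the derivatives `dF_dphi` (the Jacobian F^N_φ) and `dF_dmu` (F^N_µ) are assumed to be available, e.g. from the Newton solver, and are not the thesis code:

    import numpy as np

    def sensitivity(dF_dphi, dF_dmu, phi, mu, d):
        """Solve the sensitivity equation (5.25) for the direction d:
        F^N_phi * delta_d phi^N = -F^N_mu * d."""
        rhs = -dF_dmu(phi, mu) @ np.asarray(d)
        return np.linalg.solve(dF_dphi(phi, mu), rhs)

    def terminal_voltage_sensitivity(dF_dphi, dF_dmu, phi, mu):
        """Sensitivity of the terminal voltage phi_{N_x} - phi_1 with respect
        to all four parameters (unit directions e_1, ..., e_4)."""
        grads = []
        for i in range(4):
            dphi = sensitivity(dF_dphi, dF_dmu, phi, mu, np.eye(4)[i])
            grads.append(dphi[-1] - dphi[0])
        return np.array(grads)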

So we compute the sensitivity of the electrical potential φ with respect to the four parameters µ1, µ2, µ3 and µ4. From this quantity we then compute the sensitivity of the terminal voltage. First we need the derivatives of F^N, recall its definition in (5.4), with respect to the four parameters µ1, . . . , µ4 and with respect to φ^N; these derivatives are obtained by differentiating (5.4).

We remark that F^N_φ(φ^N(µ); µ) coincides with (in fact, is) the Jacobian of the discretised system.

The sensitivity for µ4 can be computed directly. The sensitivity in µ4-direction is given by δ_{d_4} φ^N(µ) = (1, . . . , 1)^T ∈ R^{N_x}. So the parameter µ4 has no influence on the terminal voltage φ^N_{N_x}(µ) − φ^N_1(µ), since (δ_{d_4} φ^N(µ))_1 = (δ_{d_4} φ^N(µ))_{N_x}, as expected from the results of the previous section.

The remaining sensitivities have to be computed numerically for every parameter of the (discretised) parameter set. For this we have to solve the system (5.25) for the different parameters.

This analysis is time consuming if we solve the full FV system (5.25). Therefore, we make use of the RB model here:

F_{N,φ}(φ_coeff(µ); µ) δ_d φ_coeff = −F_{N,µ}(φ_coeff(µ); µ) d,    (5.26)

where F_N(φ_coeff(µ); µ) := Ξ_φ^T W F^N(Ξ_φ φ_coeff(µ); µ).
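A sketch of the corresponding reduced sensitivity solve in Python; the basis matrix `Xi`, the weight matrix `W` and the FV derivatives `dF_dphi`, `dF_dmu` are assumed inputs, so the snippet only illustrates the projection structure of (5.26):

    import numpy as np

    def rb_sensitivity(Xi, W, dF_dphi, dF_dmu, phi_coeff, mu, d):
        """Solve the reduced sensitivity equation (5.26); Xi has shape (N_x, N)."""
        phi = Xi @ phi_coeff                              # reconstructed FV representation
        A = Xi.T @ W @ dF_dphi(phi, mu) @ Xi              # reduced Jacobian, shape (N, N)
        rhs = -(Xi.T @ W @ dF_dmu(phi, mu)) @ np.asarray(d)
        return np.linalg.solve(A, rhs)                    # delta_d phi_coeff, length N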

As we want to consider the influence of the parameters µ1, µ2 and µ3 on the terminal voltage, we can fix µ4 in the sequel.

5.6.1. Numerical Tests

Again we consider our system of equations for the parameter set D_ad of Section 5.4, cf. Table 5.1, and use the developed RB model to compute the sensitivities.

In Figures 5.3, 5.4 and 5.5 we plot the results for the sensitivity of the terminal voltage, varying two parameters at a time, in all three directions, i.e.

µ ↦ ∇_{µ_i}(φ^N_{N_x}(µ) − φ^N_1(µ)),

for i ∈ {1, 2, 3}. We skip the sensitivity in µ4-direction because µ4 has no influence on the terminal voltage and set µ4 = 0.3. For the other fixed values we set µ1 = 0.3, µ2 = −2.5 and µ3 = 0.3.

In Figure 5.6 we plot the terminal voltage in dependence on our four parameters µ1, µ2, µ3 and µ4 to confirm our results of the parameter estimation. The other three parameters are kept fixed in each case.

If we increase µ1, the sensitivity in µ1-direction decreases in absolute value. If we increase µ2 in absolute value, the sensitivity in µ1-direction increases in absolute value. The sensitivity in µ2-direction stays constant when varying µ2, but increases in absolute value when decreasing µ1. Additionally, both parameters can influence the terminal voltage in the same way. The parameter µ3 seems not to influence the sensitivities in any direction. The sensitivity in µ3-direction is in the range of 10^-10 and therefore seems not to be affected by any parameter.

5.6.2. Conclusions

A direct result of the above sensitivity equation is that the parameter µ4 has no influence on the terminal voltage. With the numerical analysis we conclude that the parameter µ3 has almost no influence on the terminal voltage either.



Figure 5.3.: Plot of the sensitivities: influence of the parameters µ1 and µ2 on the terminal voltage in µ1-direction (left) and µ2-direction (right); µ3 = 0.3 and µ4 = 0.3 are fixed.


Figure 5.4.: Plot of the sensitivities: influence of the parameters µ1 and µ3 on the terminal voltage in µ1-direction (left) and µ3-direction (right); µ2 = −2.5 and µ4 = 0.3 are fixed.


Figure 5.5.: Plot of the sensitivities: influence of the parameters µ2 and µ3 on the terminal voltage in µ2-direction (left) and µ3-direction (right); µ1 = 0.3 and µ4 = 0.3 are fixed.

Thus, as can also be seen in the numerical tests, we could reduce our equation system (5.2) to a system which merely depends on µ1 and µ2 if we are interested in the terminal voltage for this parameter set.

Or, more generally, to a system which only depends on µ1, µ2 and µ3.

Physical meaning: a high conductivity (µ1) does not slow down the transport processes; the conductivity therefore essentially only has an influence, coupled with a high current (in absolute value), if it is small. As a consequence it has an influence on the sensitivity of the terminal voltage if it is small. The higher the current (µ2) becomes, the faster the transport processes in a battery happen. If it is coupled with a small conductivity, its influence on the terminal voltage is large. The exchange current density (µ3) and the remaining part of the over potential (µ4) do not have an influence on the terminal voltage because in the present chapter we consider solely the domain of a single electrode. These two parameters could have an influence if we considered a domain consisting, for example, of an electrode and a separator: both would act like a "filter" between the two domains which decides how much of the charge can pass.



Figure 5.6.: Plot of the terminal voltage in dependence on the parameter µ1 (top left), µ2 (top right), µ3 (bottom left) and µ4 (bottom right); the other three parameters are kept fixed at µ1 = 0.3, µ2 = −2.5, µ3 = 0.3 and µ4 = 0.3.