
Reduced Basis Methods for Model Reduction and Sensitivity Analysis of Complex Partial Differential Equations

with Applications to Lithium-Ion Batteries

Dissertation submitted for the degree of Doctor of Natural Science

Presented by

Johanna Andrea Karolin Wesche

at the

Faculty of Science

Department of Mathematics and Statistics

Date of the Oral Examination: 22nd of March 2016

First Referee: Prof. Dr. Stefan Volkwein

Second Referee: Prof. Dr. Arnulf Latz


Abstract

In the present thesis we apply the reduced basis method (RBM) to a coupled elliptic-parabolic equation system which is highly nonlinear. This equation system describes the charge and mass transport in a lithium-ion battery. For an efficient application of the RBM we develop an a posteriori error estimator which estimates the error between our finite volume and our reduced solution. To evaluate the nonlinearities efficiently we additionally apply the empirical interpolation method. We use the resulting reduced order model for parameter estimation with respect to a required terminal voltage and confirm the results with a sensitivity analysis.


Contents

1. Introduction 1

1.1. Motivation . . . 1

1.2. Earlier Work . . . 2

1.2.1. Lithium-Ion Battery Models . . . 2

1.2.2. Reduced Basis Method . . . 3

1.2.3. Reduced Order Modelling in the Battery Context . . . 6

1.3. Outline . . . 6

2. Preliminaries 9

2.1. Basic Analysis . . . 9

2.2. Basic Linear Algebra . . . 12

2.3. Parameter Estimation . . . 21

2.4. Sensitivity Analysis . . . 23

2.5. Discretisation Methods . . . 24

2.5.1. Backward Euler Method . . . 24

2.5.2. Cell-Centred Finite Volume Method . . . 24

2.6. Solving Strategies . . . 26

2.6.1. Damped Newton Method . . . 26

2.6.2. Sequential Quadratic Programming . . . 28

3. Battery Model 31

3.1. Battery Model . . . 31

3.2. Elliptic-Parabolic PDE System . . . 31

3.3. Discretisation . . . 36

3.4. Validation of the One-Dimensional Model . . . 37

4. The Reduced Basis Method 43

4.1. The Reduced Basis Method for Finite Volume Solvers . . . 43

4.2. Proper Orthogonal Decomposition . . . 47

4.3. Basis Generation: POD-Greedy Algorithm . . . 48

4.4. (Discrete) Empirical Interpolation Method . . . 51

4.4.1. (D)EIM . . . 55

4.4.2. A Posteriori Error Estimator for EIM . . . 56

4.5. Error Estimator for Linear Problems . . . 57

4.6. Linearisation of a Nonlinear Problem . . . 59

4.7. Offline-Online Decomposition . . . 61

4.8. Parameter Estimation . . . 62

4.9. Sensitivity Analysis . . . 64


5. RBM Applied to a Nonlinear Elliptic System 65

5.1. Problem Formulation . . . 65

5.2. Error Estimators . . . 67

5.3. Error Estimator in Higher Dimensions . . . 74

5.4. Numerical Tests . . . 76

5.5. Parameter Estimation . . . 80

5.5.1. Numerical Tests . . . 80

5.6. Sensitivity Analysis . . . 83

5.6.1. Numerical Tests . . . 84

5.6.2. Conclusions . . . 84

6. RBM Applied to a Parabolic System 89

6.1. Problem Formulation . . . 89

6.2. Error Estimators . . . 92

6.2.1. Semilinear Parabolic Equation . . . 92

6.2.2. Linearised Problem . . . 97

6.3. Error Estimator in Higher Dimensions . . . 99

6.4. Numerical Tests . . . 100

6.4.1. Constant α . . . 101

6.4.2. c-dependent α . . . 104

6.5. Parameter Estimation . . . 111

6.5.1. Numerical Tests . . . 112

6.6. Sensitivity Analysis . . . 116

6.6.1. Numerical Tests . . . 117

6.6.2. Conclusion . . . 118

7. RBM Applied to a Coupled System 123

7.1. Problem Formulation . . . 123

7.2. Error Estimator to the Linearised Problem . . . 124

7.2.1. Examination of the Inequality Condition . . . 128

7.2.2. Conclusion . . . 130

7.3. Error Estimator for the Single Equations . . . 131

7.4. Numerical Tests . . . 135

7.4.1. Strong Greedy without EIM . . . 137

7.4.2. Weak Single Greedy without EIM . . . 139

7.4.3. Weak Greedy with EIM . . . 141

7.5. Parameter Estimation . . . 142

7.5.1. Numerical Tests . . . 143

7.6. Sensitivity Analysis . . . 146

7.6.1. Numerical Tests . . . 146

7.6.2. Conclusion . . . 147

8. Conclusion 151

A. Notation and Conventions 157


B. Discretisation 161

B.1. Elliptic Equation in Higher Dimensions . . . 163

B.2. Parabolic Equation in Higher Dimensions . . . 167

C. EIM Implementation in Matlab 173

D. Addition to the Numerical Tests 177

D.1. RBM Applied to Parabolic System - Additional Data for the Numerical Tests . . . 177

D.1.1. Constant α . . . 177

D.1.2. c-dependent α . . . 180

D.2. RBM Applied to Coupled System - Additional Data for the Numerical Tests . . . 193

D.2.1. Strong Greedy without EIM . . . 193

D.2.2. Weak Single Greedy without EIM . . . 197

D.2.3. Weak Greedy with EIM . . . 197

E. Settings 203

E.1. Computer Information . . . 203

E.2. Default Settings . . . 203

E.2.1. Damped Newton Method . . . 203

E.2.2. POD-Greedy Algorithm . . . 203

E.2.3. fmincon . . . 204

Zusammenfassung 205

Bibliography 209

Acknowledgement 215


List of Figures

3.1. Detailed battery model . . . 31

3.2. Simple battery model . . . 32

3.3. Structure of the considered battery domain . . . 32

3.4. Validation of the 1-d model: concentration . . . 39

3.5. Validation of the 1-d model: potential . . . 40

3.6. Validation of the 1-d model: error . . . 40

3.7. Validation of the 1-d model: relative error . . . 41

5.1. Decay of the error and its estimators . . . 79

5.2. Basis vectors . . . 80

5.3. Influence of the parameters µ1 and µ2 on the terminal voltage in µ1-direction and µ2-direction . . . 85

5.4. Influence of the parameters µ1 and µ3 on the terminal voltage in µ1-direction and µ3-direction . . . 85

5.5. Influence of the parameters µ2 and µ3 on the terminal voltage in µ2-direction and µ3-direction . . . 86

5.6. Terminal voltage in dependency on µ1, µ2, µ3 and µ4 . . . 87

6.1. Numerical test, constant α: decay of the error and its estimators . . . 104

6.2. Numerical test, constant α: basis vectors . . . 105

6.3. Numerical test for the c-dependent α: FV solution . . . 106

6.4. Numerical test, c-dependent α: decay of the error and its estimators - maximum norm . . . 119

6.5. Numerical test, c-dependent α: decay of the error and its estimators - Euclidean norm . . . 120

6.6. Parameter estimation: estimated SoC . . . 121

6.7. SoC and sensitivities . . . 122

7.1. Value for $\|(L^{m+1}_{I,1,c,\mathrm{lin}}(\mu))^{-1}L^{m+1}_{I,1,\varphi,\mathrm{lin}}(\mu)\|\,\|(L^{m+1}_{I,2,\varphi,\mathrm{lin}}(\mu))^{-1}L^{m+1}_{I,2,c,\mathrm{lin}}(\mu)\|$ for fixed ∆t . . . 129

7.2. Value for $\|(L^{m+1}_{I,1,c,\mathrm{lin}}(\mu))^{-1}L^{m+1}_{I,1,\varphi,\mathrm{lin}}(\mu)\|\,\|(L^{m+1}_{I,2,\varphi,\mathrm{lin}}(\mu))^{-1}L^{m+1}_{I,2,c,\mathrm{lin}}(\mu)\|$ for fixed ∆x . . . 130

7.3. Numerical test: strong Greedy without EIM: basis vectors . . . 137

7.4. Numerical test: strong Greedy without EIM: decay of the error and its estimators - maximum norm . . . 138

7.5. Numerical test: strong Greedy without EIM: decay of the error and its estimators - Euclidean norm . . . 138

7.6. Numerical test: weak single Greedy without EIM: basis vectors . . . 139

7.7. Numerical test: weak single Greedy without EIM: decay of the error and its estimators - maximum norm . . . 140

7.8. Numerical test: weak single Greedy without EIM: decay of the error and its estimators - Euclidean norm . . . 140

7.9. Numerical test: weak Greedy with EIM: basis vectors . . . 141

7.10. Numerical test: weak Greedy with EIM: decay of the error and its estimators - maximum norm . . . 142

7.11. Numerical test: weak Greedy with EIM: decay of the error and its estimators - Euclidean norm . . . 143

7.12. Parameter estimation: estimated terminal voltage . . . 145

7.13. Terminal voltage and sensitivity . . . 147

7.14. Sensitivities in dependency on µ1 (terminal voltage) . . . 148

7.15. Sensitivities in dependency on µ2 (terminal voltage) . . . 149

7.16. Sensitivities in dependency on µ2 (terminal voltage) . . . 150

B.1. Discretisation of the positive electrode - 2d . . . 163

B.2. Discretisation of the positive electrode - 3d . . . 164

D.1. c-dependent α (strong, more modes): basis vectors . . . 181

D.2. c-dependent α (weak, more modes): basis vectors . . . 182

D.3. c-dependent α (weak, one mode): basis vectors . . . 184

D.4. c-dependent α (weak (εNew,lin ≠ 0), one mode): basis vectors . . . 186

D.5. c-dependent α (weak, one mode, EIM): basis vectors . . . 188

D.6. c-dependent α (weak, one mode, DEIM): basis vectors . . . 191


List of Tables

3.1. Validation of the 1-d model: battery parameters . . . 38

3.2. Validation of the 1-d model: numerical parameters . . . 38

5.1. Data set . . . 76

5.2. Computational time . . . 77

5.3. Computational time: offline phase . . . 77

5.4. Parameter choice: maximum norm . . . 78

5.5. Parameter choice: Euclidean norm . . . 78

5.6. Numerical tests for the parameter estimation: setting . . . 81

5.7. Test 1: estimation of µ1 . . . 81

5.8. Test 2: estimation of µ1 . . . 82

5.9. Test 3: estimation of µ2 . . . 82

5.10. Test 4: estimation of µ2 . . . 82

6.1. Test for the constant α: parameters . . . 101

6.2. Computational time . . . 102

6.3. Computational time: offline phase . . . 102

6.4. Test for the c-dependent α: parameters . . . 105

6.5. Test for the c-dependent α: results . . . 108

6.6. Parameter estimation - test 1 . . . 114

6.7. Parameter estimation - test 2 . . . 114

6.8. Parameter estimation - test 3 . . . 115

6.9. Parameter estimation - test 4 . . . 115

7.1. Battery parameter set . . . 129

7.2. Numerical tests: battery parameters . . . 134

7.3. Summary of the results to the different Greedy algorithms . . . 136

7.4. Parameter estimation: test 1 . . . 144

7.5. Parameter estimation: test 2 . . . 145

D.1. Constant α: parameters . . . 177

D.2. Constant α: parameter choice depending on the estimator - maximum norm . . . 178

D.3. Constant α: parameter choice depending on the estimator - Euclidean norm . . . 179

D.4. c-dependent α (strong, more modes): parameters . . . 180

D.5. c-dependent α (strong, more modes): parameter choice depending on the estimator - maximum norm . . . 180

D.6. c-dependent α (strong, more modes): parameter choice depending on the estimator - Euclidean norm . . . 180

D.7. c-dependent α (weak, more modes): parameters . . . 181

D.8. c-dependent α (weak, more modes): parameter choice depending on the estimator - maximum norm . . . 182

D.9. c-dependent α (weak, more modes): parameter choice depending on the estimator - Euclidean norm . . . 182

D.10. c-dependent α (weak, one mode): parameters . . . 183

D.11. c-dependent α (weak, one mode): parameter choice depending on the estimator - maximum norm . . . 183

D.12. c-dependent α (weak, one mode): parameter choice depending on the estimator - Euclidean norm . . . 184

D.13. c-dependent α (weak (εNew,lin ≠ 0), one mode): parameters . . . 185

D.14. c-dependent α (weak (εNew,lin ≠ 0), one mode): parameter choice depending on the estimator - maximum norm . . . 186

D.15. c-dependent α (weak (εNew,lin ≠ 0), one mode): parameter choice depending on the estimator - Euclidean norm . . . 187

D.16. c-dependent α (weak, one mode, EIM): parameters . . . 188

D.17. c-dependent α (weak, one mode, EIM): parameter choice depending on the estimator - maximum norm . . . 189

D.18. c-dependent α (weak, one mode, EIM): parameter choice depending on the estimator - Euclidean norm . . . 189

D.19. c-dependent α (weak, one mode, DEIM): parameters . . . 190

D.20. c-dependent α (weak, one mode, DEIM): parameter choice depending on the estimator - maximum norm . . . 191

D.21. c-dependent α (weak, one mode, DEIM): parameter choice depending on the estimator - Euclidean norm . . . 192

D.22. Strong Greedy without EIM: parameter choice, maximum norm . . . 194

D.23. Strong Greedy without EIM: parameter choice, Euclidean norm . . . 195

D.24. Strong Greedy without EIM: parameters . . . 196

D.25. Weak single Greedy without EIM: parameter choice, maximum norm, concentration . . . 197

D.26. Weak single Greedy without EIM: parameter choice, maximum norm, potential . . . 198

D.27. Weak single Greedy without EIM: parameter choice, Euclidean norm, concentration . . . 199

D.28. Weak single Greedy without EIM: parameter choice, Euclidean norm, potential . . . 200

D.29. Weak single Greedy without EIM: parameters . . . 201

D.30. Weak Greedy with EIM: parameter choice, maximum norm . . . 201

D.31. Weak Greedy with EIM: parameter choice, Euclidean norm . . . 202

D.32. Weak Greedy with EIM: parameters . . . 202

E.1. System information . . . 203

E.2. Default setting - damped Newton . . . 203

E.3. Default setting - POD-Greedy . . . 203

E.4. Default setting - fmincon . . . 204


List of Algorithms

2.1. Newton method . . . 27

2.2. Damped Newton method . . . 28

2.3. SQP method . . . 29

4.1. POD algorithm . . . 48

4.2. Greedy algorithm . . . 49

4.3. POD-Greedy algorithm . . . 50

4.4. Gram-Schmidt process . . . 51

4.5. Empirical interpolation method (EIM) . . . 55

4.6. Discrete empirical interpolation method (DEIM) . . . 56


1. Introduction

In this chapter a motivation for the present thesis is given. We put the topic of this thesis into context by referring to earlier work on lithium-ion batteries as well as on mathematical model order reduction techniques, especially the reduced basis method.

1.1. Motivation

In recent years research on lithium-ion batteries has become more and more important. There are many fields of application in the electronics industry, e.g. laptops as well as mobile phones. Besides these small battery systems, the importance of the battery cell as an energy supplier and as the direct drive in vehicles has increased rapidly. One advantage of the lithium-ion battery is that it has no memory effect: batteries with a memory effect "remember" the capacity that has been used, and thus the capacity itself decreases; this is common for nickel batteries. For the examination of different materials and different structures of the battery, many numerical solvers are being created to avoid experimental effort. Roughly speaking, the better the solver's results reflect the physical or chemical properties of the lithium-ion battery, the more expensive the corresponding numerical/computational effort is. In extreme cases a sensitivity analysis or a parameter estimation is not possible.

Another issue in the battery context is its role in the automotive industry. Besides the examination of the battery itself, fast solvers are needed for control tasks in an electric vehicle: the state of charge of the battery is queried permanently to determine the remaining distance the vehicle can cover. For the same battery this distance depends on the initial state of charge, on the amount of energy that has already been consumed, on recharging effects (braking) and on the condition of the environment, e.g. the temperature. As a standard approach, so-called equivalent circuit models are used for these control problems. To obtain an equivalent circuit model, the battery of the car is measured (for different charge/discharge profiles) and the results are approximated by a circuit with different components. This method is heuristic but works quite well in practice.

The state of the art is that numerical solvers which reproduce the physical and chemical processes well are available, and they are predictive. Thus new materials and geometries of the battery can be tested without building a real battery, and as a result the concentration and the potential at every (control) node are obtained. Besides those expensive solvers, there also exist cheap, non-predictive numerical solvers for which the quality of the "reproduction" of the battery is not known. Furthermore, as a simulation result only the terminal voltage and the state of charge are obtained.

For numerical solvers, which basically solve discretised equation systems, there exist mathematical model order reduction methods. Since they reduce the computational time, these methods are convenient for expensive solvers. The established methods are proper orthogonal decomposition (POD), balanced truncation, homogenisation and the reduced basis method (RBM), the latter being the focus of this thesis.

The RBM is based on a high-dimensional discretised equation system which depends on many parameters, in the battery context e.g. on chemical parameters (e.g. diffusion coefficients), geometrical parameters (e.g. width of the electrodes) and state parameters (e.g. temperature). We compute a few "true solutions", i.e. solutions of the high-fidelity solver for certain parameters. It is assumed that these true solutions span the complete (reduced) solution space for a certain parameter range. A "reduced solution" is then a linear combination of so-called "basis vectors" of the reduced space. The challenge is how to choose the corresponding basis vectors and how to estimate how precisely the reduced solution approximates the aforementioned true solution.

1.2. Earlier Work

In this section some earlier publications on the modelling of the lithium-ion battery, the RBM and reduced order modelling applied in the battery context are listed to put the present thesis into context.

1.2.1. Lithium-Ion Battery Models

Battery models can be classified into models that describe the physical and/or electrochemical processes and into empirical ones. An example of an empirical model is the equivalent circuit model, which merely describes the discharge rate. A general discussion about how to model a battery can be found in [Shi09].

Models describing the physical and electrochemical processes can again be subdivided into two categories: macroscopic and microscopic models, see [BWFS11], [FSL+10].

Both types of models employ the same conservation laws for charge, mass and energy but differ in how the two material phases of the porous positive and negative electrode of a battery are resolved by a continuum. A microscopic model resolves in detail the geometrical morphology of the two charge-conducting material phases, the electrolyte (lithium-ion conducting phase) and the solid phase (electron conducting phase), in contrast to the macroscopic model. There, it is assumed that the porous electrode can be described by solving the conservation equations in two separate, superpositioned continua [NT62], where the geometrical morphology is not resolved. In terms of modelling, the macroscopic model covers the geometrical morphology by the introduction of so-called effective transport parameters. How the effective transport parameters can be derived from fundamentals is covered by homogenisation theory [CL11], which gives the link between the micro- and the macroscopic model.

In the following we focus on two mathematical models that describe the processes in a lithium-ion battery: the model by Newman and the model of the Fraunhofer ITWM1 by Latz and Zausch. Both can be treated as a macroscopic as well as a microscopic model.

1 Institut für Techno- und Wirtschaftsmathematik


One can get an overview of the Newman model [NT04] by reading the Ph.D. thesis of Newman's student Doyle [Doy95] and the resulting papers [DFN93], [FDN94a], [FDN94b]. A mathematical discussion of the existence and uniqueness of weak solutions can be found in [WXZ06]. This work was extended to a more general context by Seger [Seg13]. Using Lp-theory he also treats the ITWM model under certain conditions. There are many groups examining the mathematical description of the lithium-ion battery and the underlying physical phenomena based on Newman's model, cf. e.g. [DGG+11].

The model of Latz and Zausch is developed in [LZI10]. Therein it is also shown that the Newman model could violate the second law of thermodynamics, which states that the change in entropy must be greater than zero in chemical reactions without external energy supply. This characteristic of the Newman model has not been analysed in more detail yet. A numerical solver for the Latz-Zausch model is implemented by Popov et al. in [PVMI10] for block-structured electrodes. In [LZ10] the model has been extended: in addition to the conservation equations for charge and mass, the energy conservation equation is considered as well.

1.2.2. Reduced Basis Method

The initial point for applying the RBM is a parameterised partial differential equation which is discretised, e.g. by the finite volume method (FVM). This means that the equation system depends on several parameters µ. These parameters can be physical, chemical, geometrical or state parameters. Numerical solutions for a certain parameter µ are called "true solutions". Prior to starting with the RBM the parameter space must be restricted; the reduced order model developed later is valid for this parameter range. This set needs to be discretised2. Using the (POD-)Greedy algorithm [HO08], parameters are chosen for which the true solutions have to be computed. The reduced solutions are then a linear combination of these so-called basis vectors. The algorithm requires an error estimator which estimates the error between the "true" and the "reduced" solution for a parameter. The first basis vector is found by an initial choice: for a chosen parameter out of the discretised parameter set, the true solution is computed. A true solution for a certain parameter is called a "snapshot". The second basis vector is found by estimating the error between the reduced and the true solution for every parameter of the discretised parameter set. The true solution for the parameter where the estimated error reaches its maximum is added to the basis portfolio. This procedure is iterated as long as the estimated error is bigger than a given tolerance. In comparison to reduced order models constructed by other methods, a big advantage of the RBM is that the reduced order model generated in this way is valid in the initially defined parameter range and agrees with the "full order model" up to a pre-defined tolerance. Another advantage is that the RBM can be divided into two phases, the offline and the online phase, cf. e.g. [PR07]. In the computationally expensive offline phase the basis is generated via the Greedy algorithm, and the computational operations that are needed for every reduced solution are prepared. In the online phase only the reduced solutions are computed when they are requested.

2 There are also methods where the parameter set is discretised adaptively, cf. e.g. [HDO11].
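To make the procedure concrete, the following minimal MATLAB sketch illustrates such a (weak) greedy loop. The names P (one training parameter per column), trueSolve and errorEstimate are hypothetical placeholders for the training set, the high-fidelity solver and a problem-specific a posteriori error estimator; they are not part of the solver developed in this thesis.

```matlab
% Minimal weak greedy sketch (placeholders: P, trueSolve, errorEstimate).
tol  = 1e-6;                             % prescribed tolerance
snap = trueSolve(P(:,1));                % initial choice: first snapshot
B    = snap / norm(snap);                % first basis vector
while true
    est = zeros(1, size(P,2));
    for j = 1:size(P,2)                  % estimate the RB error for every
        est(j) = errorEstimate(B, P(:,j));   % training parameter
    end
    [estMax, jmax] = max(est);
    if estMax <= tol, break; end         % basis is accurate enough
    snap = trueSolve(P(:,jmax));         % snapshot of the worst parameter
    snap = snap - B*(B'*snap);           % Gram-Schmidt orthogonalisation
    B    = [B, snap/norm(snap)];         % enrich the reduced basis
end
% In a "strong" greedy, errorEstimate is replaced by the true error between
% the high-fidelity and the reduced solution.
```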


Of course there are some drawbacks of the method, too: the RBM should only be applied if many evaluations for different parameters are needed, e.g. for parameter estimations, sensitivity studies or, in the automotive context, for controls. The offline phase is expensive, so if one only needs a "few" evaluations, it could be cheaper to compute the true solutions directly. If the range of the parameters changes, the offline phase must be performed again. If the parameter set is discretised too coarsely, it cannot be guaranteed that the generated reduced model is as accurate as requested.

A lot of mathematical effort is needed, too: one has to find a problem-specific error estimator which neither underestimates nor excessively overestimates the error between the true and the reduced solution. In some applications and settings it can be reasonable to use a so-called "strong" Greedy algorithm where the error is not estimated but computed. In this case all true solutions for a discretised parameter set are computed. After every Greedy step the reduced solutions at all discretisation points of the parameter set are computed and compared to the corresponding true solutions. This procedure can be used for a device which has only limited storage (hard disc) but enough capacity (RAM) to perform the online computation.

The RB approach has been studied by many authors and has successfully been applied to various complex systems of equations. A complete overview can hardly be given in this subsection. Here we briefly list some selected topics and publications. Most of the reduced basis approaches are done for systems discretised by finite elements.

The concepts can be found in the book by Patera and Rozza [PR07] and in the Ph.D. thesis by Grepl [GMNP07]. The latter publication is a detailed introduction to error estimators for different types of problems: linear and nonlinear elliptic problems as well as linear and nonlinear parabolic problems are studied. Furthermore, the Greedy algorithm is described there; up to that point it had been applied to non-time-dependent problems.

In [HO08] Haasdonk and Ohlberger present an error estimator for a linear parabolic problem which is discretised by the FVM. In this paper the POD-Greedy algorithm is presented for the first time, too. This is an extension of the Greedy algorithm to time-dependent problems where time is not treated as an additional parameter. The snapshot matrix, i.e. the matrix containing the true solution for a parameter at every node and every time step, is reduced in time via POD.

The POD method is well known in the engineering community and, in statistics, also under the labels principal component analysis (PCA), cf. e.g. [Per01], and Karhunen-Loève transform (KLT), cf. e.g. [GDV06]. The idea is to extract the essential physical information of the system by computing the eigenvectors and eigenvalues of a (transformed) snapshot matrix. The snapshot matrix can either be an approximation of a true solution at every node in space and every time step, or it can contain measurement results for different parameters evaluated at different nodes in space. The eigenvectors of the correlation matrix are ordered by the size of their eigenvalues. The faster the decay of the eigenvalues, the fewer basis vectors are needed to approximate the snapshot matrix accurately, cf. [HV03], [KV01], [KV02], [KV07] (for elliptic problems). A helpful introduction to the POD method can be found in the lecture notes by Volkwein [Vol12]. For an introduction in the engineering context we refer to [HLBR12].
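As a small illustration of the POD idea, the modes can be obtained from the singular value decomposition of the snapshot matrix; the snapshot matrix Y below is a hypothetical random example, and the energy threshold is chosen arbitrarily.

```matlab
% POD sketch: the columns of Y are snapshots (time steps or parameters).
Y = rand(200, 40);                       % hypothetical snapshot matrix
[U, S, ~] = svd(Y, 'econ');              % left singular vectors = POD modes
ev = diag(S).^2;                         % eigenvalues of the correlation matrix
energy = cumsum(ev) / sum(ev);           % captured relative "energy"
ell = find(energy >= 0.999, 1);          % smallest basis with 99.9% energy
Phi = U(:, 1:ell);                       % POD basis
relErr = norm(Y - Phi*(Phi'*Y), 'fro') / norm(Y, 'fro');   % approximation error
```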

There are a lot of issues in the RBM. We list some of them in the following:

• The parameter set for the Greedy algorithm can be discretised adaptively by certain methods. For affine parameter-dependent, nonlinear parabolic equations a so-called hp-POD-Greedy algorithm can be used, cf. e.g. [EKPR11].

• In some applications parameterised partial differential equations are very sensitive with respect to certain parameters, so that many basis vectors are required to obtain a sufficiently accurate reduced order model for the determined parameter range. Usually not all basis vectors are necessary for every parameter, so one can work with different basis vectors depending on the parameter for which the reduced solution is needed. This is done by so-called dictionaries [KH13].

• In addition there are strategies to reduce the computational effort in the offline phase, cf. [UVZ12, IV15]. The idea is to find the largest error between the true and the reduced solution by solving an optimisation problem. The main issue is to find a good starting vector for the optimisation problem. This is done via a (random) coarse grid on the parameter domain: at each point of this coarse grid the error estimator is evaluated in every Greedy step. Depending on the specific settings, for a one-dimensional parameter set the speed-up of the offline phase can be at least a factor of 10 when comparing the standard Greedy to the Greedy with optimisation for a two-dimensional elliptic problem, cf. [IV15]. The more dimensions the parameter set has, the higher the speed-up factor of this method, cf. [UVZ12].

• For linear problems the solution space is projected via Galerkin onto the reduced basis space. For nonlinear problems the same is done, but one has to compute expensive evaluations of the nonlinearities in the full space. To avoid this, two methods are introduced so that the nonlinearities are evaluated just at a few relevant so-called magic points: the discrete empirical interpolation method (DEIM) and the empirical interpolation method (EIM). The first one is based on the POD method, cf. [CS10], and the other one uses a Greedy algorithm, cf. [BNMP04]; a small sketch of the DEIM index selection is given after this list. A comparison of both methods in the POD context can be found in [LV13]. Nonlinear operators can be interpolated empirically too, see [DHO12]. An error estimator for the discrete empirical interpolation method is presented in [WSH14].

• Furthermore there are works where the domain is separated into "similar" components and a reduced order model is generated on one of these components. After that, the single components are joined together again, cf. e.g. [EP13].

• The construction of error estimators and corresponding results for equation systems discretised by finite elements have been presented by Grepl, cf. [Gre12]. He considers nonaffine linear and nonlinear parabolic equations. For an efficient evaluation of the nonlinearity the empirical interpolation method is used, for which he develops an a posteriori error estimator. However, the nonlinearity is very specific, e.g. it should be monotonically increasing.

• Within the reduced basis community flexible solvers have been developed which can apply the RBM to linear as well as nonlinear partial differential equation systems discretised by finite elements, finite volumes or finite differences. Well-known examples are RBmatlab (developed at the University of Stuttgart), rbMIT (Massachusetts Institute of Technology) and pyMOR (University of Münster) [MRS15]. In the present thesis we do not employ them because for nonlinear problems there are some "not out of the box" issues to respect.
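For illustration only, the index selection at the core of DEIM can be sketched in a few lines of MATLAB. The basis U below is a hypothetical stand-in for a POD basis of nonlinearity snapshots; this sketch is not the EIM implementation used later in this thesis (cf. Appendix C).

```matlab
% DEIM index selection sketch: U (n x m) is a POD basis of nonlinearity
% snapshots; p collects the interpolation indices ("magic points").
U = orth(rand(100, 8));                  % hypothetical nonlinearity basis
m = size(U, 2);
p = zeros(m, 1);
[~, p(1)] = max(abs(U(:,1)));            % first interpolation index
for l = 2:m
    c = U(p(1:l-1), 1:l-1) \ U(p(1:l-1), l);   % interpolate at chosen points
    r = U(:,l) - U(:,1:l-1) * c;               % interpolation residual
    [~, p(l)] = max(abs(r));                   % next index: largest residual
end
% Online, a nonlinearity f is then approximated by U * (U(p,:) \ f(p)),
% i.e. f only has to be evaluated at the m selected entries.
```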

1.2.3. Reduced Order Modelling in the Battery Context

A reduced order approach for a battery model is introduced by White and his former Ph.D. student Cai, cf. [CR09], [Cai10]. Cai uses the Newman model, which is discretised by finite elements. He generates a reduced order model from the information of a snapshot for a certain parameter set and a 1C discharge rate, which means that the battery is completely discharged within one hour. Using this snapshot, he creates a reduced order model via POD and shows that (for the same parameter set) the reduced and the true results for the discharge curve agree for other discharge rates, too, e.g. a 10C rate, where the battery is discharged within six minutes.

Volkwein and his former Ph.D. student Lass use the same approach but in a more mathematical way: Lass also works on the Newman model. The equation system as well as the proof of existence and uniqueness of a solution can be found in [WXZ06]. Lass applies the POD method to obtain a reduced order model which he uses for a (parameter) optimisation. The parameters in this context are generally restricted, so he checks whether he has obtained the "most important" true solutions by considering the residual. In order to understand the results of the parameter estimation he also performs a sensitivity analysis. In contrast to Cai he does not just vary the current, and his objective is not the state of charge (SoC) but the terminal voltage. Thus the results are hardly comparable. But Lass shows that the POD method is well suited to generating a reduced order model for the Newman model of the battery.

1.3. Outline

The aim of the present thesis is to generate a suitable reduced order model via the RBM for the transport equations of the lithium-ion battery, which is applicable for parameter estimations and sensitivity studies. An important tool for this is an error estimator which estimates the error between the FV and the RB solution.

The thesis is organised as follows.

• After we set the topic of the present thesis into the context of (mathematical) model order reduction and the numerical treatment of battery models in the previous sections, in Chapter 2 we collect some mathematical preliminaries, which are required for the following chapters.

• In Chapter 3 we introduce the battery model we consider. The elliptic-parabolic nonlinear partial differential equation system describes the mass and charge transport in a lithium-ion battery. We discretise this equation system for block electrodes in one dimension via the cell-centred FVM and the backward Euler scheme. We implement the discretised system in Matlab and validate the numerical solver by comparing its results to those generated by a commercial solver with the same input data.


• Chapter 4 is concerned with the model order reduction obtained by applying the RBM to the discretised system. For the basis generation we use the POD-Greedy algorithm and therefore need POD. The nonlinearities are approximated by (D)EIM. For the (POD-)Greedy algorithm we present an error estimator which estimates the error between the RB and the FV solution. For linear problems the error estimator is well known. In anticipation of difficulties with an error estimator for the transport equations of the battery we linearise the corresponding equation system. Furthermore we give an outlook on what to do with our reduced order model: two quantities of interest are the terminal voltage and the state of charge (SoC). We define both quantities and lay the basis for the later parameter estimation and sensitivity analysis.

• In the next two chapters we study the RBM for the elliptic and the parabolic sub-problem separately.

In Chapter 5 we examine the elliptic equation on the positive electrode with constant concentration and adjusted boundary conditions. This equation system should represent the charge transport in the positive electrode. We develop an error estimator for a certain parameter constellation. Unfortunately this parameter constellation can easily be violated in the battery context. Therefore we linearise the original nonlinear equation system and develop an error estimator for the linearised problem. Afterwards we compare these two estimators in a numerical test for parameter settings where both are valid. The RB model is used for parameter estimation and sensitivity analysis with respect to the terminal voltage.

In Chapter 6 we follow the same steps for the parabolic equation on the positive electrode. We adjust the boundary conditions, and this time the potential is fixed. This system should represent the mass transport in the positive electrode. We develop an error estimator which is valid for a certain parameter constellation. Since this parameter constellation is generally violated in the battery context, we again linearise the equation system and construct an estimator which is valid for an arbitrary (battery) parameter set. We compare both estimators numerically. For a parameter set for which only the estimator for the linearised problem is valid, we run different tests in which the battery parameter set is always the same but the settings of the Greedy algorithm vary. Since we consider a highly nonlinear model in this chapter we apply (D)EIM. This time we consider the SoC in our parameter estimation and sensitivity analysis using the RB model.

• In Chapter 7 we use the knowledge of the previous two chapters and directly linearise the coupled system of equations. The error estimator for this problem is valid if a certain condition on the ratio of the step sizes in time and space holds. In general this condition cannot be ensured in a real battery context. Therefore we linearise the elliptic and the parabolic equation separately and develop an error estimator for each equation separately. We use these estimators to generate an RB model for our parameter estimation and sensitivity analysis. There we focus again on the terminal voltage and vary the diffusion coefficients in the electrodes and in the electrolyte.

• In Chapter 8 we summarise the results and give an outlook on possible future work on this topic.


2. Preliminaries

In this chapter we provide some mathematical tools which we apply in the following chapters. For more details than given below we refer mainly to the publications [HPUU09], [Zei92], [Wer11], [NW06], [Han09] and [Deu04].

2.1. Basic Analysis

We start with the definitions of a Banach and a Hilbert space.

Definition 2.1. Let V be a real vector space.

1. A mapping $\|\cdot\| \colon V \to [0,\infty)$ is called a norm on $V$ if

   i) $\|a\| = 0 \Leftrightarrow a = 0$,

   ii) $\|\lambda a\| = |\lambda|\,\|a\|$ for all $a \in V$, $\lambda \in \mathbb{R}$,

   iii) $\|a + b\| \le \|a\| + \|b\|$ for all $a, b \in V$.

2. A normed real vector space $(V, \|\cdot\|)$ is called a real Banach space if it is complete. That means that any Cauchy sequence $\{a_j\}_{j \in \mathbb{N}}$ in $V$ has a limit $a \in V$ with respect to the norm $\|\cdot\|$.

Definition 2.2. Let $H$ be a real vector space.

1. A mapping $\langle \cdot, \cdot \rangle \colon H \times H \to \mathbb{R}$ is an inner product on $H$ if

   i) $\langle a, b \rangle = \langle b, a \rangle$ for all $a, b \in H$,

   ii) $\langle \lambda a, b \rangle = \lambda \langle a, b \rangle$ for all $a, b \in H$, $\lambda \in \mathbb{R}$,

   iii) $\langle a, a \rangle \ge 0$ for all $a \in H$ and ($\langle a, a \rangle = 0 \Leftrightarrow a = 0$).

2. A vector space $H$ with an inner product $\langle \cdot, \cdot \rangle$ and the associated norm $\|a\| := \sqrt{\langle a, a \rangle}$ is called a pre-Hilbert space.

3. A pre-Hilbert space $(H, \langle \cdot, \cdot \rangle)$ is called a Hilbert space if it is complete with respect to the norm $\|a\| := \sqrt{\langle a, a \rangle}$.

Definition 2.3.

1. Let $B$ be a Banach space. A bounded linear operator $b' \colon B \to \mathbb{R}$ is called a bounded linear functional on $B$. The space of the bounded linear operators from $B$ to $\mathbb{R}$ is denoted by $L(B,\mathbb{R})$, equipped with the natural operator norm.

2. The space $B' := L(B,\mathbb{R})$ of linear functionals on $B$ is called the dual space of $B$.

3. The dual pairing $\langle \cdot, \cdot \rangle_{B',B}$ of $B'$ and $B$ is defined as
\[ \langle b', b \rangle_{B',B} := b'(b) \qquad (b \in B,\ b' \in B'). \]

Remark 2.4. One can prove that the dual space $B'$ of the Banach space $B$ is a Banach space, too. For $B' := L(B,\mathbb{R})$ the operator norm is defined as
\[ \|b'\| := \sup_{\|b\|_B = 1} |b'(b)|. \]

For the following definitions let $\Omega \subset \mathbb{R}^d$, $d \in \mathbb{N}$, be a domain.

Definition 2.5. Let $k \in \mathbb{N}_0$ be a non-negative integer. The space of $k$-times continuously differentiable functions is defined as
\[ C^k(\Omega) := \{ f \mid D^\alpha f \text{ is bounded and uniformly continuous on } \Omega\ \forall \alpha \in \mathbb{N}_0^d,\ 0 \le |\alpha| \le k \}, \]
where, for a given multi-index $\alpha = (\alpha_1, \ldots, \alpha_d)$, $\alpha_i \in \mathbb{N}_0$, $1 \le i \le d$,
\[ D^\alpha := \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}, \qquad \text{with } |\alpha| = \sum_{i=1}^d \alpha_i. \]

Furthermore we define the space of the smooth functions
\[ C^\infty(\Omega) := \bigcap_{k \in \mathbb{N}} C^k(\Omega). \]

Definition 2.6. Let $1 \le p \le \infty$. The Lebesgue space $L^p(\Omega)$ is defined as
\[ L^p(\Omega) := \{ [f] \mid \|[f]\|_{L^p(\Omega)} < \infty \}, \]
where $\|\cdot\|_{L^p(\Omega)}$ is defined by
\[ \|[f]\|_{L^p(\Omega)} := \begin{cases} \left( \int_\Omega |[f](x)|^p \, dx \right)^{1/p}, & 1 \le p < \infty, \\ \operatorname{ess\,sup}_{x \in \Omega} |[f](x)|, & p = \infty. \end{cases} \]
Here $[f]$ denotes the equivalence class of $f$: two functions of $[f]$ can differ only on a subset of $\Omega$ with measure zero, so for all $f_1, f_2 \in [f]$, $[f] \in L^p(\Omega)$,
\[ \left( \int_\Omega |f_1(x)|^p \, dx \right)^{1/p} = \left( \int_\Omega |f_2(x)|^p \, dx \right)^{1/p} \]
holds true. Note that the Lebesgue spaces equipped with the norm $\|\cdot\|_{L^p(\Omega)}$ are Banach spaces.

Remark 2.7. In Definition 2.6 the Lebesgue spaces of equivalence classes are denoted by $L^p(\Omega)$. In the standard literature, e.g. [Wer11, Alt12], this space is also denoted by $L^p(\Omega)$, while $\mathcal{L}^p(\Omega)$ is the space that in addition contains the functions differing only on sets of measure zero $\mathcal{Z}$, i.e. $\mathcal{L}^p(\Omega) = L^p(\Omega) \cup \mathcal{Z}$.


Definition 2.8. Let $1 \le p \le \infty$ and $k \in \mathbb{N}_0$. We define the Sobolev space $W^{k,p}(\Omega)$ as
\[ W^{k,p}(\Omega) := \{ f \mid D^\alpha f \in L^p(\Omega)\ \forall \alpha \in \mathbb{N}_0^d : |\alpha| \le k \}. \]
The Sobolev spaces are Banach spaces with the norms
\[ \|f\|_{W^{k,p}(\Omega)} := \begin{cases} \left( \sum_{|\alpha| \le k} \int_\Omega |D^\alpha f|^p \, dx \right)^{1/p}, & 1 \le p < \infty, \\ \max_{|\alpha| \le k} \operatorname{ess\,sup}_{x \in \Omega} |D^\alpha f(x)|, & p = \infty. \end{cases} \]
We denote by $H^k(\Omega) := W^{k,2}(\Omega)$ the Hilbert space with the associated inner product
\[ \langle f, g \rangle_{H^k(\Omega)} := \sum_{|\alpha| \le k} \int_\Omega D^\alpha f \, D^\alpha g \, dx. \]

Definition 2.9. Let $0 < t_f < \infty$. The space of all $\psi \in L^2(0,t_f;H^1(\Omega))$ with weak derivative $\psi_t \in L^2(0,t_f;(H^1(\Omega))')$ is defined as
\[ W(0,t_f;H^1(\Omega)) := \{ \psi \in L^2(0,t_f;H^1(\Omega)) : \psi_t \in L^2(0,t_f;(H^1(\Omega))') \}, \]
with the norm
\[ \|\psi\|_{W(0,t_f;H^1(\Omega))} := \left( \int_0^{t_f} \left( \|\psi(t)\|^2_{H^1(\Omega)} + \|\psi_t(t)\|^2_{(H^1(\Omega))'} \right) dt \right)^{1/2}. \]
The space $L^2(0,t_f;H^1(\Omega))$ is defined as
\[ L^2(0,t_f;H^1(\Omega)) := \left\{ f : (0,t_f) \to H^1(\Omega) \,\Big|\, \int_0^{t_f} \|f(t)\|^2_{H^1(\Omega)} \, dt < \infty \right\}, \]
where
\[ \|f\|^2_{H^1(\Omega)} := \|f\|^2_{L^2(\Omega)} + \|\nabla f\|^2_{L^2(\Omega)}. \]

After these mainly functional analytic definitions we list some general analytical theorems.

Lemma 2.10. Let $\{x_j\}_{j=0}^\infty$ be a convergent sequence in $\mathbb{R}^N$, $N > 1$, with limit $x^*$, $x_j \ne x^*$ for all $j \in \mathbb{N}_0$ and convergence order $p > 1$, i.e.
\[ \|x_{j+1} - x^*\| \le C \|x_j - x^*\|^p, \qquad j \in \mathbb{N}_0, \]
for a constant $C > 0$. Then it is
\[ \lim_{j \to \infty} \frac{\|x_{j+1} - x_j\|}{\|x^* - x_j\|} = 1. \]

Proof. The proof can be found in [DR08, Lemma 5.19].

Theorem 2.11 (Mean value theorem). Let $U \subset \mathbb{R}^d$ be an open subset, $f \colon U \to \mathbb{R}$ differentiable, $a, b \in U$, and let the line segment joining the points $a$ and $b$, denoted by
\[ \overline{ab} := \{ x \in \mathbb{R}^d \mid \exists t \in (0,1) : x = a + t(b-a) \}, \]
be a subset of $U$. Then there exists $c \in \overline{ab}$ such that
\[ f(b) = f(a) + \nabla f(c) \cdot (b - a). \]

Proof. The proof can be found in [DR11, Theorem 11.20].


2.2. Basic Linear Algebra

In this section a few basics from linear algebra are collected.

Definition 2.12. Let $1 \le p \le \infty$. For $x \in \mathbb{R}^N$ the p-norm on $\mathbb{R}^N$ is given by
\[ \|x\|_p = \begin{cases} \left( \sum_{j=1}^N |x_j|^p \right)^{1/p}, & 1 \le p < \infty, \\ \max_{j=1,\ldots,N} |x_j|, & p = \infty. \end{cases} \tag{2.1} \]

Definition 2.13. A matrix norm $\|\cdot\|_M \colon \mathbb{R}^{N \times N} \to \mathbb{R}$ is called sub-multiplicative if
\[ \|AB\|_M \le \|A\|_M \|B\|_M \qquad \forall A, B \in \mathbb{R}^{N \times N}. \]
A matrix norm $\|\cdot\|_M \colon \mathbb{R}^{N \times N} \to \mathbb{R}$ is called consistent to a vector norm $\|\cdot\| \colon \mathbb{R}^N \to \mathbb{R}$ if
\[ \|Ax\| \le \|A\|_M \|x\| \qquad \forall A \in \mathbb{R}^{N \times N},\ x \in \mathbb{R}^N. \]

Definition 2.14. Let $\|\cdot\| \colon \mathbb{R}^N \to \mathbb{R}$ be a vector norm. Then
\[ \|A\| := \sup_{x \ne 0} \frac{\|Ax\|}{\|x\|} = \sup_{\|x\| = 1} \|Ax\| \tag{2.2} \]
is the matrix norm $\|\cdot\| \colon \mathbb{R}^{N \times N} \to \mathbb{R}$ induced by the vector norm $\|\cdot\|$.

Theorem 2.15. Let $\|\cdot\| \colon \mathbb{R}^N \to \mathbb{R}$ be a vector norm. The induced matrix norm $\|\cdot\| \colon \mathbb{R}^{N \times N} \to \mathbb{R}$ in Definition 2.14 is consistent to its vector norm and sub-multiplicative. Let $\|\cdot\|_M \colon \mathbb{R}^{N \times N} \to \mathbb{R}$ be another matrix norm consistent to the vector norm $\|\cdot\|$. Then there holds $\|A\| \le \|A\|_M$ for all $A \in \mathbb{R}^{N \times N}$.

Proof. The proof can be found in [Han09, Theorem 2.5].

Remark 2.16. Because of Theorem 2.15 the matrix norm in (2.2) is called lub-norm (least upper bound).

In the following we assume that $\|\cdot\| \colon \mathbb{R}^N \to \mathbb{R}$ is a p-norm. Its induced matrix norm is denoted by $\|\cdot\|$, too. In the numerical tests (cf. Sections 5.4, 6.4 and 7.4) we consider the Euclidean ($p = 2$) and the maximum norm ($p = \infty$).

Remark 2.17. For $x \in \mathbb{R}^N$ the Euclidean norm is given by
\[ \|x\|_2 = \Big( \sum_{j=1}^N |x_j|^2 \Big)^{1/2}, \]
cf. Definition 2.12 for $p = 2$, and its induced matrix norm, the spectral norm, for $A \in \mathbb{R}^{M \times N}$ by
\[ \|A\|_2^2 = \lambda_{\max}(A^T A), \]
where $\lambda_{\max}(A^T A)$ is the greatest eigenvalue of $A^T A$. Keep in mind that $A^T A$ is positive semi-definite and symmetric, thus all eigenvalues are nonnegative. The maximum norm is given by
\[ \|x\|_\infty = \max_{j=1,\ldots,N} |x_j|, \]
cf. Definition 2.12 for $p = \infty$, and its induced matrix norm, the row-sum norm, by
\[ \|A\|_\infty = \max_{i=1,\ldots,M} \sum_{j=1}^N |A_{ij}|. \]
In the following we linguistically do not distinguish between the spectral norm and the Euclidean norm nor between the row-sum norm and the maximum norm. We denote them by the Euclidean and the maximum norm, respectively.

For our computations we need the norm of an inverse matrix. Considering the maximum norm there are efficient algorithms for tridiagonal matrices, cf. e.g. [Har04, Algorithm 2.2]. For the Euclidean norm it is much simpler:

Proposition 2.18 ([SK11, Subsection 2.2.1]). Let $A$ be a regular matrix. Then there holds
\[ \|A^{-1}\|_2^2 = \frac{1}{\lambda_{\min}(A^T A)}, \]
where $\lambda_{\min}(A^T A)$ is the smallest eigenvalue of $A^T A$, or
\[ \|A^{-1}\|_2 = \frac{1}{\sigma_{\min}}, \]
where $\sigma_{\min}$ is the smallest singular value of $A$.

Definition 2.19. Let $\|\cdot\|_{(1)} \colon \mathbb{R}^N \to \mathbb{R}$ and $\|\cdot\|_{(2)} \colon \mathbb{R}^N \to \mathbb{R}$ be two norms. These norms are called equivalent if there exist $a, b \in \mathbb{R}^+$ such that
\[ a \|x\|_{(2)} \le \|x\|_{(1)} \le b \|x\|_{(2)} \qquad \forall x \in \mathbb{R}^N. \]

Lemma 2.20. Let $\|\cdot\|_{(1)} \colon \mathbb{R}^N \to \mathbb{R}$ and $\|\cdot\|_{(2)} \colon \mathbb{R}^N \to \mathbb{R}$ be two norms. Then they are equivalent.

Proof. The proof can be found e.g. in [Alt12, Lemma 2.8].

Theorem 2.21 (Hölder's inequality). Let $1 \le p \le \infty$ and $q = \frac{p}{p-1}$ for $p \ne 1$, $q = \infty$ for $p = 1$, respectively. Let $x \in \mathbb{R}^N$ and $y \in \mathbb{R}^N$ be two arbitrary vectors. Then it is
\[ \sum_{j=1}^N |x_j y_j| \le \|x\|_p \|y\|_q. \]

Proof. The proof can be found e.g. in [DR11, Theorem 10.2].


Remark 2.22. With Theorem 2.21 we can prove that
\[ \|x\|_p \le N^{\frac{1}{p} - \frac{1}{q}} \|x\|_q, \qquad x \in \mathbb{R}^N, \]
with $1 \le p \le q \le \infty$, $\frac{1}{p} + \frac{1}{q} = 1$. In particular it is
\[ \frac{1}{\sqrt{N}} \|x\|_2 \le \|x\|_\infty \le \|x\|_2 \qquad \forall x \in \mathbb{R}^N. \tag{2.3} \]

Remark 2.23. Let $\|\cdot\|_{(1)} \colon \mathbb{R}^N \to \mathbb{R}$ and $\|\cdot\|_{(2)} \colon \mathbb{R}^N \to \mathbb{R}$ be two norms which are equivalent with the constants $a, b \in \mathbb{R}^+$ such that
\[ a \|x\|_{(2)} \le \|x\|_{(1)} \le b \|x\|_{(2)} \qquad \forall x \in \mathbb{R}^N. \]
Let $A \in \mathbb{R}^{N \times N}$ be a matrix. For the (matrix) lub-norms induced by $\|\cdot\|_{(1)}$ and $\|\cdot\|_{(2)}$ we get the following inequality:
\[ \frac{a}{b} \|A\|_{(2)} \le \|A\|_{(1)} \le \frac{b}{a} \|A\|_{(2)}. \tag{2.4} \]

For the error estimators with respect to the Euclidean norm in Sections 5.2 and 6.2 we need the following

Proposition 2.24. Let $N \ge 2$. Furthermore let $A = \mathbb{1}_{N \times N} + B \in \mathbb{R}^{N \times N}$ be a matrix with $B \in \mathbb{R}^{N \times N}$, $B_{ij} = 0$ for $j \in \{1, \ldots, N-1\}$ and $B_{iN} \ge 0$ for all $i \in \{1, \ldots, N\}$. Then it is
\[ \|Ax\|_2 \le \sqrt{ 1 + B_{N,N} + \frac{1}{2} \sum_{i=1}^N B_{i,N}^2 + \sqrt{ \sum_{i=1}^N B_{i,N}^2 + B_{N,N} \sum_{i=1}^N B_{i,N}^2 + \frac{1}{4} \Big( \sum_{i=1}^N B_{i,N}^2 \Big)^2 } } \; \|x\|_2 \tag{2.5} \]
for all $x \in \mathbb{R}^N$.
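The bound can be checked numerically; the following MATLAB snippet builds a random matrix of the assumed structure and compares both sides of (2.5) for a random vector (again only an illustration).

```matlab
% Numerical check of (2.5): A = I + B, where only the last column of B is
% nonzero and nonnegative.
N = 6;
B = zeros(N);  B(:,N) = rand(N,1);       % B_{iN} >= 0, other entries zero
A = eye(N) + B;
x = randn(N,1);
S = sum(B(:,N).^2);                      % S = sum_i B_{iN}^2
bound = sqrt(1 + B(N,N) + S/2 + sqrt(S + B(N,N)*S + S^2/4));
fprintf('%g <= %g\n', norm(A*x), bound*norm(x));   % inequality (2.5)
```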

For the proof of Proposition 2.24 we need the following

Proposition 2.25. Let $N \in \mathbb{N}$ with $N \ge 2$. Furthermore let $M \in \mathbb{R}^{N \times N}$ be a symmetric matrix of the form
\[ M = \begin{pmatrix} 1 & & & m_1 \\ & \ddots & & \vdots \\ & & 1 & m_{N-1} \\ m_1 & \cdots & m_{N-1} & 1 + c \end{pmatrix}, \tag{2.6} \]
with $m_i \ge 0$, $i \in \{1, \ldots, N-1\}$, and $c \ge 0$. Then the characteristic polynomial of $M$ is given by
\[ \det(M - \lambda \mathbb{1}_{N \times N}) = (1 - \lambda)^{N-2} \left( 1 + \tfrac{1}{2} c + \sqrt{ \tfrac{1}{4} c^2 + \sum_{i=1}^{N-1} m_i^2 } - \lambda \right) \left( 1 + \tfrac{1}{2} c - \sqrt{ \tfrac{1}{4} c^2 + \sum_{i=1}^{N-1} m_i^2 } - \lambda \right). \tag{2.7} \]


Proof. We prove the statement by induction. For $N = 2$ it is
\[ \det(M - \lambda \mathbb{1}_{2 \times 2}) = \det \begin{pmatrix} 1 - \lambda & m_1 \\ m_1 & 1 + c - \lambda \end{pmatrix} = \lambda^2 - 2\lambda \left( 1 + \tfrac{1}{2} c \right) + 1 + c - m_1^2 \overset{!}{=} 0. \]
Thus we obtain
\[ \lambda_{1/2} = 1 + \tfrac{1}{2} c \pm \sqrt{ \tfrac{1}{4} c^2 + m_1^2 }. \]
So the statement is valid for $N = 2$.

Assumption: we assume now that the statement (2.7) is valid for $N$ and prove that it is valid for $N + 1$. The characteristic polynomial of $M \in \mathbb{R}^{(N+1) \times (N+1)}$ is given by
\[ \det(M - \lambda \mathbb{1}_{(N+1) \times (N+1)}) = \det \begin{pmatrix} 1 - \lambda & & & m_1 \\ & \ddots & & \vdots \\ & & 1 - \lambda & m_N \\ m_1 & \cdots & m_N & 1 + c - \lambda \end{pmatrix} \overset{!}{=} 0. \]
We expand the determinant with respect to the first row and obtain
\[ \det(M - \lambda \mathbb{1}_{(N+1) \times (N+1)}) = (1 - \lambda) \det \begin{pmatrix} 1 - \lambda & & & m_2 \\ & \ddots & & \vdots \\ & & 1 - \lambda & m_N \\ m_2 & \cdots & m_N & 1 + c - \lambda \end{pmatrix} + (-1)^{N+2} m_1 \det \begin{pmatrix} 0 & 1 - \lambda & & \\ \vdots & & \ddots & \\ 0 & & & 1 - \lambda \\ m_1 & m_2 & \cdots & m_N \end{pmatrix}. \]
For the first determinant we directly use the assumption that (2.7) is valid for $N$. In the matrix of the second determinant we switch the columns $(N-1)$ times in the following way: the second column with the first, then the third one with the second one and so on, so that the first column is put at the last position. We get a lower triangular matrix and the determinant is the product of the entries on the diagonal. Summing up we obtain
\[ \begin{aligned} \det(M - \lambda \mathbb{1}_{(N+1) \times (N+1)}) &= (1 - \lambda)\,(1 - \lambda)^{N-2} \left( 1 + \tfrac{1}{2} c + \sqrt{ \tfrac{1}{4} c^2 + \sum_{i=2}^{N} m_i^2 } - \lambda \right) \left( 1 + \tfrac{1}{2} c - \sqrt{ \tfrac{1}{4} c^2 + \sum_{i=2}^{N} m_i^2 } - \lambda \right) \\ &\quad + (-1)^{N+2} m_1 (-1)^{N-1} m_1 (1 - \lambda)^{N-1} \\ &= (1 - \lambda)^{N-1} \left( \left( 1 + \tfrac{1}{2} c + \sqrt{ \tfrac{1}{4} c^2 + \sum_{i=2}^{N} m_i^2 } - \lambda \right) \left( 1 + \tfrac{1}{2} c - \sqrt{ \tfrac{1}{4} c^2 + \sum_{i=2}^{N} m_i^2 } - \lambda \right) - m_1^2 \right). \end{aligned} \]
