
Code generation for model predictive control of embedded systems


Model Predictive Control

of Embedded Systems

Dissertation

for the attainment of the academic degree

Doktoringenieur (Dr.-Ing.)

by

Juan Pablo Menendez Zometa

born on 29 May 1980 in Santa Tecla

approved by the Fakultät für Elektrotechnik und Informationstechnik

of the Otto-von-Guericke-Universität Magdeburg

Reviewers:

Prof. Dr.-Ing. Rolf Findeisen

Prof. Dr. Daniel Limon

submitted on 23 August 2016

doctoral colloquium held on 12 May 2017


This thesis is the result of the work done as a research assistant at the Systems Theory and Automatic Control group of Prof. Rolf Findeisen, at the Otto-von-Guericke Univer-sity in Magdeburg, Germany.

This work would have not been possible without the support of innumerable people. Special thanks go to Prof. Findeisen for his supervision and support, and to friends, colleagues and family for their encouragement.


1. Introduction 1

1.1. Linear model predictive control for embedded systems . . . 2

1.2. Contribution . . . 6

1.3. Outline . . . 8

2. Model predictive control for embedded systems 11

2.1. Embedded digital control systems . . . 11

2.1.1. Digital control of linear dynamic systems . . . 11

2.1.2. Cyber-physical systems: real-time embedded systems . . . 15

2.2. On-line optimization . . . 19

2.2.1. Basics of convex optimization . . . 19

2.2.2. Quadratic programs . . . 20

2.2.3. Optimization theory for quadratic programs . . . 21

2.3. Model predictive control for linear systems . . . 23

2.3.1. A basic MPC setup . . . 25

2.3.2. MPC as a QP . . . 26

2.3.3. General moving horizon control formulation . . . 27

2.4. Summary . . . 28

3. Tailored on-line optimization software tools for MPC 29

3.1. Exploiting the properties of the MPC algorithm . . . 29

3.1.1. Known-ahead maximum computation time . . . 29

3.1.2. Partial use of the solution by the controller . . . 30

3.1.3. Similarities between consecutive problems . . . 31

3.1.4. Special structure of the data . . . 33


3.2. Tailored optimization software tools . . . 35

3.2.1. Interior point methods . . . 36

3.2.2. Active set methods . . . 38

3.2.3. Gradient methods . . . 40

3.2.4. Comments on explicit methods . . . 41

3.3. A novel optimization algorithm for embedded MPC . . . 41

3.3.1. Fast gradient method . . . 42

3.3.2. Augmented Lagrangian method . . . 45

3.3.3. The ALM+FGM algorithm for MPC . . . 49

3.3.4. ALM+FGM for embedded MPC . . . 53

3.4. Summary . . . 55

4. General description of multistage problems 57

4.1. General formulation of multistage problems . . . 57

4.1.1. Motivational example . . . 58

4.1.2. Abstract formulation of multistage problems . . . 59

4.2. Reformulation as a condensed optimization problem . . . 64

4.2.1. Reformulation as a general QCQP . . . 64

4.2.2. Reformulation as a QCQP in standard form . . . 67

4.2.3. Special case: condensed QP . . . 70

4.3. High-level multistage specification language . . . 72

4.3.1. Code generation of condensed formulation . . . 74

4.4. Summary . . . 75

5. muAO-MPC: a free code generation tool for embedded MPC 77

5.1. Core features . . . 77

5.2. Automatic generation of C code . . . 79

5.2.1. Forming and solving the condensed QP . . . 80

5.2.2. Solving the QP with the ALM+FGM algorithm . . . 81

5.2.3. Further controller performance improvements . . . 82

5.3. Examples: code generation for a microcontroller . . . 83

5.3.1. Setup description . . . 83

5.3.2. Considered Embedded Hardware . . . 84


5.3.4. Discussion . . . 85

5.4. Summary . . . 88

6. Application examples 91

6.1. Low-end example: A direct current motor . . . 91

6.1.1. System description . . . 91

6.1.2. Generating a fast embedded MPC controller . . . 93

6.1.3. Results . . . 93

6.2. High-performance example: An autonomous vehicle . . . 95

6.2.1. System description . . . 95

6.2.2. MPC Implementation . . . 97

6.2.3. Discussion . . . 99

6.3. Summary . . . 101

7. Conclusions 103

7.1. Outlook . . . 104

A. Forming a condensed parametric quadratic program 119

B. System Matrices 123

B.1. Simulation examples . . . 123


The internet of things is a novel paradigm based on networked physical devices that interact with their environment. This paradigm enables the improvement of existing technologies, and is likely to serve as the basis for yet unforeseen technological advancements. An internet-of-things device works on a computer network, and typically integrates sensors and actuators. At the core of any device is an embedded computer, commonly with very low computational capabilities. Model predictive control is an advanced control method likely to play a role in future internet-of-things applications as a way to improve the performance of networked cyber-physical systems. Therefore, the easy implementation of model predictive control on embedded hardware is of interest.

Model predictive control is based on the repeated solution of an optimization problem, which makes it possible to handle multi-input multi-output systems subject to constraints in a natural way. At every sampling time, inputs to the system are determined that optimize the predicted state evolution up to a certain future time instant. For this, an optimization problem must be solved in real time at each sampling period, which makes the implementation of model predictive control on embedded systems challenging. The difficulty arises from the combination of two factors: the limited computational capabilities of today's embedded hardware and the relatively high computational demands of solving the necessary optimization problem. The efficient and reliable implementation of model predictive control therefore requires a tailored implementation.

A starting point for an efficient implementation is that, for a given model predictive control formulation, the optimization problem belongs to an often narrow problem class (e.g. a quadratic program) with a particular structure. This structure does not change for many applications; from one sampling time to the next, only the problem data differ. The few available tools often exploit the special multistage structure of model predictive control problems. By exploiting the sparsity of the problem, this leads to code whose computational complexity grows linearly with the horizon length. An alternative, especially for short horizon lengths, are condensed formulations.


We present a universal code generation tool that exploits condensed parametric formulations tailored for linear time-invariant discrete-time model predictive control. As a basis, we introduce a simple modeling language that makes it easy to formulate model predictive control problems. Based on this language, we outline how one can automatically generate C code that is tailored towards dense solution approaches, which are especially suitable for low-performance and memory-constrained embedded processors. We furthermore implement an optimization algorithm tailored for model predictive control that explicitly takes into account the limitations of embedded hardware.

The presented modeling language is intuitive and easy to use, and offers great flexibility in the problem formulation. We demonstrate via several examples that the outlined code generation approach allows for fast solutions even on low-cost embedded platforms.


The internet of things is a new paradigm based on networked physical devices that interact with their environment. This paradigm enables the improvement of existing technologies and is likely to serve as the basis for as yet unforeseen technological advances. An internet-of-things device operates on a computer network and typically incorporates sensors and actuators. At the core of every such device is an embedded computer, which usually has low computational power. Model predictive control is an advanced control method that is likely to play a role in future internet-of-things applications, as a way to improve the performance of networked cyber-physical systems. Therefore, an easily realizable implementation of model predictive control on embedded hardware is of interest.

Model predictive control is an advanced control method based on the repeated solution of an optimization problem, with which multivariable systems and systems subject to constraints can be handled in a natural way. At each sampling instant, an input for the system is computed that optimizes the predicted system evolution. Thus, an optimization problem must be solved repeatedly and in real time at each sampling instant. The repeated solution of these problems makes the implementation of model predictive control on embedded systems challenging. The difficulties that arise in the implementation stem from a combination of two factors: the limited computational power of embedded systems and the relatively high computational demands of solving the optimization problems. It follows that an efficient realization of model predictive control requires a tailored, problem-specific implementation. For a given model predictive control problem, the corresponding optimization problem often belongs to a special class (e.g. a quadratic program) with a special structure. In many applications, this structure remains constant from one sampling instant to the next; only the problem data change. The few available software tools exploit the inherent structure of model predictive control, which leads to a computational complexity that grows linearly with the length of the prediction horizon. These implementations often exploit sparse matrices to accelerate the solution of the optimization problem. In some cases, however, in particular for problems with short prediction horizons on embedded systems, condensed formulations are better suited.

In this work, a software tool for code generation is presented that is based on a parametric formulation of linear time-invariant discrete-time model predictive control. To this end, a simple modeling language is first introduced that makes it easy to formulate model predictive control problems. Based on this language, it is outlined how C code can be generated automatically that enables the use of condensed solution approaches. In addition, an optimization algorithm is implemented that takes the limitations of embedded systems into account. The presented modeling language is intuitive and offers great flexibility in the problem formulation. Several examples demonstrate that the presented code generation approach enables the fast solution of model predictive control problems even on low-cost embedded systems.


Despite many conceptual and theoretical advances, the systematic use of advanced control approaches for constrained systems in industrial applications is still limited [1]. Often, simpler unconstrained methods like proportional-integral control might be sufficient to stabilize a system with constraints. In such cases, a more complex algorithm that can explicitly take the constraints into account may be, due to technical and economic reasons, hard to justify even if it delivers superior closed-loop performance [1]. A notable exception is the process industry, where model predictive control (MPC) [2, 3, 4, 5, 6, 7, 8] has been consistently applied for nearly four decades [9]. This is due to several factors, including that the processes are slow and that operation near state constraints offers great economic benefits.

In recent years, MPC has been applied to a broad range of applications, e.g. path-following in robotics [10], building climate control [11], control of water canals [12], and biomedical applications such as the artificial pancreas [13].

In the case of fast embedded systems subject to constraints, MPC has traditionally been considered difficult to apply due to the high computational demands of the MPC algorithm relative to the computational capabilities of embedded computers. Therefore, widespread use of MPC in embedded applications has been limited in the past and has only gained momentum in recent years. The constantly decreasing cost/performance ratio of computers and, most importantly, the recent improvement in MPC-tailored optimization algorithms have made the implementation of MPC on embedded hardware possible [14, 15, 16, 17], albeit still challenging on low-cost devices. This work attempts to further simplify the application of MPC on low-cost embedded systems by introducing a new code generation software tool for MPC that explicitly considers the limitations of low-cost embedded hardware. The two main components of the generated code are an efficient optimization solver and data structures tailored for embedded applications.


1.1. Linear model predictive control for embedded systems

Model predictive control is an advanced control method that takes into account system constraints, and is based on repeated optimization using information from a prediction model. According to [7, Ch. 1], model predictive control is the only generic control method that can routinely deal with constraints. Furthermore, [8] states that MPC is "perhaps, the most general way of posing the process control problem in the time domain". The fact that it is generic implies that a broad range of applications that deal with the control of constrained systems can be considered within the MPC framework. A further appeal of MPC is that it is conceptually intuitive, although its inner operations rely on complex and less intuitive concepts. Many types of MPC formulations exist. What all of them have in common is the explicit use of a mathematical model to make predictions and a receding horizon optimization [18]. MPC has been developed in at least three major streams: the industrial or classical setting, the one based on adaptive control, and the synthesis approach [8]. We focus here on the synthesis approach. Two of the main characteristics of this approach are the use of state-space models and stability guarantees [18].

To this end, we consider the linear discrete-time dynamical system

x+ = Ax + Bu + h(w)

subject to input and state constraints

u ∈ U, x ∈ X,

with x the current state, x+ the successor state, u the input to the system, and h(w) a function affine in the parameter w. This parameter represents, for example, a reference trajectory or a known disturbance (or a combination of both). The input constraints U are often determined by actuator limits. Examples are the minimum and maximum voltage that can be applied to an electric motor, or the maximum aperture of a valve. These are usually hard constraints, i.e. they are enforced by the hardware and, under normal operation, cannot be violated. The state constraints X are usually determined by several factors, such as safety, comfort, and efficiency. These are commonly artificially introduced by design. For example, the acceleration of a vehicle can be artificially limited for comfort reasons, or the temperature or pressure of a chemical process can be limited for economic and safety reasons. These types of constraints are often considered soft. In practice they may be violated, e.g. due to several sources of uncertainty in the control loop, like disturbances and model-plant mismatch. In other words, soft constraints are desired limits that the controller can relax in case they cannot be respected.
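The distinction between hard and soft constraints can be sketched as follows. This is an illustrative example only: the actuator limits, the state limit x_max, and the quadratic slack penalty with weight rho are assumptions, not values from the text.

```python
import numpy as np

# Hypothetical actuator limits (hard constraints): motor voltage in [-12, 12] V.
u_min, u_max = -12.0, 12.0

def saturate(u):
    """Hard input constraints are enforced by the hardware; here modeled by clipping."""
    return np.clip(u, u_min, u_max)

# Soft state constraint: a desired limit |x| <= x_max that may be violated.
# A common treatment is to penalize the violation (slack) in the cost function.
x_max = 1.0
rho = 1e3  # weight on the constraint violation (a design choice)

def soft_constraint_penalty(x):
    """Quadratic penalty on the amount by which |x| exceeds x_max."""
    slack = np.maximum(np.abs(x) - x_max, 0.0)
    return rho * float(slack ** 2)

print(saturate(15.0))                # -> 12.0 (hard limit enforced)
print(soft_constraint_penalty(0.5))  # -> 0.0 (no violation, no penalty)
print(soft_constraint_penalty(1.1))  # penalty grows with the size of the violation
```

The penalty formulation lets the controller trade off constraint violation against performance, which is exactly the "relaxation" behavior described above.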

To control this type of constrained system, we use the following linear MPC problem formulation:

    minimize_u   Σ_{j=0}^{N-1} ℓ(x_j, u_j, w_j) + ℓ_N(x_N, w_N)
    subject to   x_{j+1} = A x_j + B u_j + h(w_j),   j = 0, ..., N-1
                 u_j ∈ U,                            j = 0, ..., N-1
                 x_j ∈ X,                            j = 0, ..., N-1
                 x_N ∈ X_f ⊆ X,
                 x_0 = x.                                           (1.1)

The stage cost ℓ(x_j, u_j, w_j), the terminal cost ℓ_N(x_N, w_N), and the terminal constraint set X_f constitute the three ingredients that help establish stability of the closed-loop system [19]. The horizon length is given by N. The stage cost is commonly used to determine the desired behavior by way of weighting matrices. Both the stage and terminal costs are commonly quadratic functions. The MPC formulation is an optimization problem that needs to be solved at each sampling time. The solution to the problem for the state x at the current sampling time is used to compute the current control input u (see [3, Ch. 2] for further details).

Note that the stage cost ℓ(x_j, u_j, w_j) is repeated for j = 0, ..., N-1. Thus, in this work we refer to MPC as belonging to the broader class of multistage problems. This classification also includes moving horizon estimation problems [3, Ch. 3].

We are particularly interested in MPC problems like (1.1) with a quadratic cost, affine inequality constraints, and affine equality constraints (i.e. the linear dynamics). Such problems lead to quadratic programs (QPs). To find a solution using iterative algorithms, the MPC problem is often brought into a special form. We distinguish between two types of problem formulations: sparse and condensed.


On the one hand, sparse formulations exploit the multistage nature of MPC to achieve a computational complexity that grows linearly with the horizon length, instead of cubically as in the case of non-tailored interior point methods [20]. On the other hand, condensed problem formulations use dense matrices without a directly exploitable structure. The computational complexity of a condensed solver typically grows cubically or quadratically with the horizon length. In general, condensed formulations are preferred in cases where the horizon length is short. In embedded applications, the horizon length is often selected to be as short as possible while still delivering suitable performance. Thus, condensed formulations can be of computational advantage for embedded applications. One specific case that benefits from a condensed formulation is when the number of states is much larger than the number of inputs. For example, stochastic MPC formulations based on polynomial chaos expansion show this property [21].
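The condensing step can be sketched numerically. The following is a minimal illustration with toy model data (a double integrator; the matrices, weights, and horizon are assumptions, not taken from the thesis): the states are eliminated using the dynamics, yielding a dense Hessian whose size depends only on the number of inputs and the horizon length.

```python
import numpy as np

# Toy discrete-time double-integrator model (illustrative values).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
n, m = B.shape
N = 5  # horizon length

# Stacked predictions: x = Phi x0 + Gamma u, with x = (x_1, ..., x_N)
# and u = (u_0, ..., u_{N-1}), obtained by rolling out x_{j+1} = A x_j + B u_j.
Phi = np.vstack([np.linalg.matrix_power(A, j + 1) for j in range(N)])
Gamma = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Gamma[i * n:(i + 1) * n, j * m:(j + 1) * m] = (
            np.linalg.matrix_power(A, i - j) @ B)

# For quadratic stage costs x'Qx + u'Ru, eliminating the states yields a
# condensed QP in u only: minimize 0.5 u'H u + (G x0)'u (plus constraints).
Q, R = np.eye(n), 0.1 * np.eye(m)
Qbar = np.kron(np.eye(N), Q)       # block-diagonal state weights
Rbar = np.kron(np.eye(N), R)       # block-diagonal input weights
H = Rbar + Gamma.T @ Qbar @ Gamma  # dense (N*m) x (N*m) Hessian
G = Gamma.T @ Qbar @ Phi           # maps the parameter x0 to the linear cost term

print(H.shape)  # -> (5, 5): small when there are few inputs and a short horizon
```

Note that H is dense and its dimension is N*m, independent of the state dimension n; this is why condensing pays off when the number of inputs is small relative to the number of states.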

One often-cited drawback of MPC is that its computational requirements are significantly higher in comparison to simpler unconstrained controllers. This is particularly relevant when implementing MPC on an embedded computer, which in general provides only a small fraction of the computational performance of a modern desktop computer. Implementing MPC on an embedded computer is therefore in general challenging. Although very powerful off-the-shelf microprocessors are nowadays cheaply available, in some applications, due to technical or economic requirements, low-cost resource-limited embedded processors need to be used. Technical requirements may include low energy consumption, small footprint, etc. Furthermore, in cost-sensitive applications, the cheapest processor that can do the job is typically selected.

A great amount of research has recently been performed that aims at speeding up MPC implementations. Of particular interest are automatic code generation methods for MPC for embedded systems [22, 23, 24, 25]. The current state of research in this field can be split into two branches: tailored solvers and automatic code generators.

One way to speed up finding a solution is the use of explicit MPC [26]. In this approach, the solutions are computed off-line and stored in a lookup table. This approach has high memory requirements and is in general limited to problems of small size [27]. In the following, we restrict our attention to MPC approaches that require solving an optimization problem on-line. We focus on optimization algorithms that iteratively search for a solution to the optimization problem, that is, algorithms that look for a point that minimizes the value of the cost function and, at the same time, satisfies all constraints.


A large variety of literature exists in the field of optimization [28, 29, 30]. Of particular interest in this work are convex optimization problems [31]. Although convex problems can be solved reliably, an off-the-shelf general-purpose solver might not be fast enough for an MPC application. Furthermore, such a solver might be difficult to use on an embedded platform due to software dependencies and hardware limitations. This has motivated the development of solvers that take into account the characteristics of MPC and the limitations of embedded applications. Among the most commonly exploited characteristics are the MPC structure and warm starting. The former commonly refers to the exploitation of the banded structure of the problem matrices arising due to the predictive nature of MPC. Warm starting refers to the use of the solution of the MPC problem at the previous sampling time as the initial guess for the solution of the problem at the current sampling time.
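A common warm-starting heuristic shifts the previous input sequence by one stage, since consecutive MPC problems overlap over most of the horizon. The sketch below is illustrative; the horizon length, input dimension, and the choice to repeat the last input are assumptions.

```python
import numpy as np

N, m = 5, 1  # horizon length and number of inputs (illustrative values)

def warm_start(u_prev):
    """Shift the previous input sequence by one stage; repeat the last stage.

    The first N-1 stages of the new guess reuse stages 1..N-1 of the previous
    solution; the new final stage is simply copied from its predecessor.
    """
    u = np.roll(u_prev.reshape(N, m), -1, axis=0)
    u[-1] = u[-2]  # one simple choice for initializing the new final stage
    return u.reshape(-1)

u_prev = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(warm_start(u_prev))  # -> [2. 3. 4. 5. 5.]
```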

One of the first reported on-line algorithms that fully exploited the MPC structure for linear systems was presented in [20]; it used a primal-dual interior point method (IPM). This has been the basis for recent code generation tools tailored for MPC, like the ones presented in [22, 23]. Primal-dual IPMs spend the first iterations finding the so-called central path. Once on that path, the algorithm can quickly converge to a solution. Hence, these methods do not typically make much progress on improving the quality of the solution during the very first iterations. Furthermore, primal-dual IPMs are difficult to warm start [27]. This might restrict the application of tailored primal-dual IPMs in embedded applications where acceptable performance can be attained by rough solutions and only very limited time is available to compute them. Alternatively, a tailored primal barrier IPM that can be warm started was discussed in [32]. This method is in general not as fast and accurate as primal-dual methods, but may offer a good trade-off between speed and accuracy for embedded applications. Finally, active-set methods have also been tailored to efficiently solve optimization problems arising from MPC, perhaps the most popular being the online active set strategy [27] and its implementation, called qpOASES [33].

In general, the theoretical computational bounds are much larger than the ones observed in practice for interior point methods [31, 34] and active set methods [29, 35]. In safety-critical applications, error bound certification might be required. First-order methods in general have tighter bounds, and may be preferred for such applications. Some theoretical contributions in this direction have been made in [36, 37, 38].
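A representative first-order method is the projected fast gradient method (Nesterov's accelerated gradient with projection onto the feasible set). The following is a minimal sketch for a box-constrained strongly convex QP; the step-size and momentum rules are the standard constant-scheme choices, and the problem data are toy values, not from the thesis.

```python
import numpy as np

def fgm(H, g, u_min, u_max, iters=50):
    """Projected fast gradient method for: min 0.5 u'Hu + g'u, u_min <= u <= u_max.

    Assumes H is symmetric positive definite (strongly convex problem).
    """
    eigs = np.linalg.eigvalsh(H)
    L, mu = eigs.max(), eigs.min()   # Lipschitz and strong convexity constants
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    u = y = np.zeros_like(g)
    for _ in range(iters):
        grad = H @ y + g
        u_next = np.clip(y - grad / L, u_min, u_max)  # gradient step + projection
        y = u_next + beta * (u_next - u)              # momentum (extrapolation)
        u = u_next
    return u

H = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([-1.0, -1.0])
u = fgm(H, g, -0.25, 0.25)
print(u)  # both upper bounds are active for this toy problem
```

The per-iteration cost is dominated by one matrix-vector product and a cheap projection, which is one reason first-order methods are attractive for resource-limited embedded hardware.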


The second branch comprises automatic code generation tools. These tools commonly consist of a parser for a high-level domain-specific language and a code generator for a tailored optimization solver. Typically, the C programming language [39] is used for the generated code. Commonly, the generated code does not depend on external software libraries, i.e. an appropriate C compiler (e.g. [40]) suffices to generate executable binaries for a target hardware platform.

Several parsers for general-purpose optimization problems exist, e.g. YALMIP [41], AMPL [42], and CVX [43]. However, these are mainly used to feed data to general-purpose optimization routines, i.e. the solver does not take into account many of the characteristics of the problem at hand, which makes them in general less efficient. In contrast, CVXGEN [44] generates embedded solvers for convex quadratic programs. In the CVXGEN context, embedded means that, given a problem with certain characteristics, solvers that exploit these characteristics are generated. CVXGEN reports improvements of several orders of magnitude in solution time compared to CVX [44]. Nevertheless, the C code generated by CVXGEN grows very quickly in size (see Subsection 3.2.1 for further details). While CVXGEN can handle MPC problems [45], the generated code does not exploit the structure inherent in MPC. The combination of QCML and ECOS presented in [24], the former being a parser and the latter a stand-alone general-purpose second-order cone program solver [46], is based on ideas similar to CVXGEN. Neither CVXGEN nor QCML eliminates the equality constraints due to the system dynamics, leading to a sparse formulation.

In [47] two code generation tools are presented. One is FiOrdOs, which is based on first-order methods that do not exploit the structure inherent in MPC [48]. The other is FORCES, a structure-exploiting primal-dual interior point method [22]. Parsers specific to nonlinear MPC have also been developed. The ACADO toolkit [49] is a C++ software tool that provides C code generation capabilities for continuous-time nonlinear MPC problems [50]. Although it is possible to use the ACADO toolkit for discrete-time linear systems as well, the generated code is bulkier than code tailored for linear discrete-time MPC problems.

1.2. Contribution

This work addresses the computational drawback of MPC without compromising its advantages. The main contribution is a free MPC software tool called µAO-MPC that simplifies the deployment of MPC controllers, while at the same time taking into consideration the computational limitations inherent to embedded processors. Because µAO-MPC relies on a flexible problem formulation, it can be used to deploy moving horizon estimators as well. In this sense, we retain the main advantages of the MPC framework, namely its intuitiveness and generality, while at the same time addressing one of its most cited drawbacks, i.e. prohibitive computational requirements for embedded computers.

Broadly speaking, µAO-MPC consists of three parts: a parser for a simple MPC problem specification, a C code generator, and a tailored optimization algorithm. The typical workflow is as follows: the user specifies an MPC problem using a language that closely resembles the way MPC problems are expressed mathematically. This specification is parsed and transformed into an optimization problem, in particular a quadratic program (QP) with dense matrices. Finally, C code is generated that contains the QP together with a first-order optimization algorithm that can efficiently compute an approximate solution to the optimization problem.
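The final code generation step can be illustrated in a highly simplified, hypothetical form (this is not the actual µAO-MPC implementation): fixed problem data, such as the condensed QP Hessian, are emitted as constant C arrays so that the embedded solver needs no external libraries at run time.

```python
import numpy as np

def c_matrix(name, M):
    """Emit a numpy matrix as a row-major constant C array (toy illustration)."""
    rows, cols = M.shape
    body = ",\n".join(
        "    " + ", ".join(f"{x:.16e}" for x in row) for row in M
    )
    return (f"/* {name}: {rows}x{cols} matrix, row-major */\n"
            f"const double {name}[{rows * cols}] = {{\n{body}\n}};\n")

# Hypothetical condensed QP Hessian to embed in the generated source file.
H = np.array([[2.0, 0.5], [0.5, 1.0]])
print(c_matrix("H", H))
```

Emitting the data with full floating-point precision and as `const` arrays keeps the generated code self-contained and lets the compiler place the data in read-only memory, which matters on memory-constrained microcontrollers.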

In this work, we follow an approach similar to CVXGEN in the way MPC problems are expressed (i.e. a simple text-based mathematical representation). We focus, however, on code generation of embedded MPC solvers for computationally limited systems using first-order methods, similar to FiOrdOs. At the moment, we have focused on solving QPs, and our results show that µAO-MPC generates code of much smaller size than CVXGEN. One key difference between our approach and all other existing parsers/code generators for linear systems is that we eliminate the equality constraints to derive a condensed formulation. Furthermore, the solver we use finds control inputs appropriate for certain types of embedded control faster than CVXGEN. These control inputs are based on rough approximate solutions, which are nevertheless good enough for control in many applications. We do not compare our results against FiOrdOs, but we would expect µAO-MPC and FiOrdOs to deliver similar performance in many situations. A full comparison with many optimization algorithms is beyond the scope of this work. The main contribution is therefore not in the field of convex optimization, but rather in the implementation aspects of automatic code generation tools for model predictive control on embedded systems.

In this work we extend the results presented in [25] by introducing a method and a parser that automatically generate easily usable code for the efficient on-line solution based on a condensed formulation. This approach, which is specially tailored for linear time-invariant MPC problems, allows (besides QP formulations) the consideration of linear programs (arising e.g. in min-max robust MPC [51]), quadratically constrained quadratic programs (arising in robust MPC by adding ellipsoidal constraints [52]), and second-order cone programs (arising in stochastic MPC [53]). We show how to systematically go from the MPC description to the condensed formulation using the proposed method. Thus, we extend the capabilities of µAO-MPC in two ways: first, the presented approach allows generating data tailored for linear programs, quadratically constrained programs, and second-order cone programs in dense form. Second, we extend the family of problems that can be considered by introducing a multistage modeling language. This allows us to consider not only a broader range of multistage problems, but also moving horizon estimation problems and special filtering problems. Note that we focus here on the generation of the dense problem data, and not on a particular solver. To the best of our knowledge, this is the first tailored tool for linear time-invariant discrete-time MPC that can automatically parse, condense the data, and generate easily usable code. Loosely related, but tailored towards continuous-time nonlinear systems, is the ACADO Toolkit [49, 50].

1.3. Outline

This work is organized as follows. In Chapter 2 we describe the overall setup, namely digital control of embedded systems and the MPC algorithm. We discuss the properties of embedded systems most relevant to MPC applications, in particular their computational limitations. Furthermore, we present the theoretical foundations of convex optimization, applied in particular to convex quadratic programs.

In Chapter 3 we discuss optimization algorithms tailored for embedded MPC. We briefly discuss some state-of-the-art approaches. The chapter focuses on a novel optimization algorithm tailored for embedded MPC applications. The algorithm is based on an augmented Lagrangian method combined with Nesterov's fast gradient method (the ALM+FGM algorithm).

In Chapter 4 we introduce a general MPC language specification. We show how some types of MPC problems specified using this language can be transformed into condensed parametric quadratically constrained quadratic programs in a structured way.

Chapter 5 presents µAO-MPC, an automatic code generation tool for MPC written in Python. The software generates portable C code for MPC problems specified in the language introduced in Chapter 4. By default, the generated condensed QPs are solved using the tailored ALM+FGM algorithm discussed in Chapter 3. We show with some simulation examples µAO-MPC's suitability for embedded applications, taking into account the properties discussed in Chapter 2.

In Chapter 6 we present two application examples that again demonstrate that µAO-MPC is appropriate for a specific type of embedded applications, namely those that can be warm started, are well conditioned, and where rather rough approximate solutions deliver good closed-loop performance.


2. Model predictive control for embedded systems

Nowadays, most closed-loop control schemes are implemented on digital systems. This is mainly due to the flexibility provided by software implementations and the ubiquity of microprocessors. Furthermore, there is a strong body of control theory and software tools that covers the design of digital control systems. Particularly well developed is the theory of linear unconstrained single-input single-output systems [54, 55]. Usually, the analysis is performed in the frequency domain (i.e. using the z-transform for discrete-time systems). In the case of unconstrained multi-input multi-output (MIMO) systems, well-established modern control methods exist, such as H2 and H∞ control [54, Ch. 40]. Optimal control methods together with state-space representations are commonly used for the control synthesis of MIMO systems [56, 57]. In the case of constrained MIMO systems, model predictive control (MPC) is the most widespread technique [7, 8].

This chapter lays the theoretical foundations needed for a basic understanding of the main characteristics of MPC for embedded systems.

2.1. Embedded digital control systems

This section describes the most relevant characteristics of digitally implemented control systems.

2.1.1. Digital control of linear dynamic systems

We start this section with a few definitions motivated by [55]. We expand later on some concepts that are key for the further development of this topic.

Figure 2.1 shows the general scheme of a digital control system. We first focus on the right part of the figure, which represents the analog (or continuous) part of the control system. These are represented with grey boxes. We consider the plant to be any physical process that needs to be controlled. Most of the physical plants of interest are dynamical systems which are continuous in nature. We want to modify the behavior y(t) of the plant with a control input u(t). The variable w(t) represents disturbances. Finally, we capture the magnitude of physical variables of the plant via sensors. We use v(t) to denote disturbances or noise in the sensor. All these variables change continuously in time.

Figure 2.1.: Block diagram of a basic digital control system, adapted from [55].

Let us now turn our attention to the middle part of Figure 2.1, which represents the bridge between the digital and the analog part of the control system, represented by the grey/white boxes. The analog-to-digital converter (ADC) samples a physical variable and then converts this value into a digital number representation. We assume that these samples are taken at constant time intervals, which are driven by the clock. We call the interval between any two samples the sampling period. The signals generated by the ADC vary only at discrete times, more precisely at every t = kT, with T the sampling period, and k = 0, 1, . . .. Signals of this type are called discrete signals. For simplicity of notation, we write any discrete signal y(kT) in the form yk. A system that has an interface to continuous signals (e.g. y(t)) and discrete signals (e.g. yk) is known as a sampled-data system. Inside the digital computer, each sample is represented as a digital number with a limited number of binary digits. This introduces the effect known as quantization, which refers to representing yk as a value ŷk that has been approximated to a certain precision. Digital computers work, therefore, with digital signals, i.e. signals that are both quantized and discrete.

In the middle part of Figure 2.1 the digital-to-analog converter (DAC) is also depicted, which performs the opposite function of the ADC. The same concepts apply. The main difference is that the DAC usually implements a zero-order hold to keep the analog signal constant during a whole sampling period (the ADC also does this, but only for a fraction of the sampling period).

Finally, on the left side of the figure we have the digital part of our control system, represented by white boxes. The central processing unit (CPU) is in charge of computing a control action ûk based on the current sensor information ŷk via a control algorithm.

Two of the most important factors that affect the performance of sampled data systems are quantization and the sampling period. If the sampling period is sufficiently small, and the quantization effects are negligible, a sampled-data control system might behave in some cases almost identically to a continuous-time control system [54, Ch. 15]. Usually, in cost-sensitive embedded control systems the sample period and quantization effects are not negligible and need to be taken into account in the design of the control system. We further expand the concepts and effects of quantization and the sampling period in the following.

Quantization

Loosely speaking, quantization is an effect where a digital number with limited precision approximates a real number [54, Ch. 15]. Consider a signal z ∈ Qz ⊆ R in the interval [z̲, z̄] that is represented by the quantized value ẑ. The set Qz can represent the values of a continuous signal (e.g. Qz = R) or of a digital signal (i.e. z is itself quantized). The bounds z̲ and z̄ represent, for example, actuator or sensor limits. In this context we define quantization Φ : Qi → Qo as a nonlinear map from the set Qi = {z ∈ Qz ⊆ R | z̲ ≤ z ≤ z̄} to the set Qo = {ẑ ∈ R | ẑ = z̲ + sρ(z̄ − z̲)}, where ρ = 2^(−l), with l ∈ Z the number of bits available to represent the digital number, and s ∈ Z, 0 ≤ s ≤ 2^l − 1, the full range of raw binary values. Note that Qo is defined by an affine function of integer numbers. Other definitions are also possible, e.g. using logarithmic functions.


The quantization Φ is usually defined in either of the following two ways. The first is truncation, which maps a real number in Qi to the largest previous number in Qo. The second is round-off, which is more commonly used in control computers. Round-off simply maps any number in Qi to the nearest number in Qo. We assume in the following that the quantization function uses round-off. We define the maximum quantization error β for Φ as

    β = (1/2) ρ (z̄ − z̲).     (2.1)

Furthermore, for the mapping Φ : z ↦ ẑ we have |z − ẑ| ≤ β. For example, consider the quantization happening in the ADC and the DAC. In the former case, we have Φ : y(kT) ↦ ŷk with Qz = R. In the latter case, the main difference is that the input set Qi is now also defined for a quantized signal, with Qz possibly defined similarly as Qo (i.e. quantized).
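The round-off map Φ and the error bound (2.1) can be sketched in a few lines of Python; the helper names below are illustrative, not thesis code:

```python
def quantize(z, z_lo, z_hi, l):
    """Round-off quantization of z in [z_lo, z_hi] using l bits:
    Phi maps z to the nearest grid point z_lo + s*rho*(z_hi - z_lo)."""
    rho = 2.0 ** (-l)
    span = z_hi - z_lo
    s = round((z - z_lo) / (rho * span))     # raw binary value
    s = max(0, min(2 ** l - 1, s))           # saturate at the grid limits
    return z_lo + s * rho * span

def max_quantization_error(z_lo, z_hi, l):
    """Maximum round-off error beta = (1/2)*rho*(z_hi - z_lo), cf. (2.1).
    Note: near the upper bound the grid saturates at its largest point."""
    return 0.5 * (2.0 ** (-l)) * (z_hi - z_lo)
```

For a 3-bit converter on [0, 1], β = 0.0625, and e.g. z = 0.3 is rounded to the grid point 0.25, well within the bound.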

Quantization has different effects in the CPU than in the ADC or the DAC. In many applications, the round-off errors for floating-point operations performed by the CPU can be neglected. However, floating-point arithmetic can sometimes be replaced by fixed-point arithmetic. The notation Qa.b denotes binary fixed-point numeric representation using a + b + 1 bits, with a integer bits, b fractional bits and 1 sign bit. For example, Q13.18 denotes a 32-bit binary fixed-point numeric representation, with 13 integer bits plus 1 sign bit and 18 fractional bits. Fixed-point arithmetic operations are in general performed more efficiently (in terms of energy and speed) than floating-point operations. The main drawback of fixed-point arithmetic is an increase in round-off errors due to the lower precision compared to floating-point arithmetic. See [58] for a discussion of fixed-point arithmetic used in model predictive control.
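A minimal sketch of the Qa.b idea, here with b = 18 fractional bits as in Q13.18, using plain Python integers as stand-ins for machine words (the helper names are assumptions, not thesis code):

```python
def to_fixed(x, b=18):
    """Encode a real number as a Qa.b integer using round-off."""
    return round(x * (1 << b))

def from_fixed(q, b=18):
    """Decode a Qa.b integer back to a float."""
    return q / (1 << b)

def fixed_mul(q1, q2, b=18):
    """Multiply two Qa.b numbers: the raw product carries 2b fractional
    bits, so it must be shifted back by b to stay in Qa.b."""
    return (q1 * q2) >> b
```

Values such as 0.1 are not exactly representable; the decode error is at most half of the least significant fractional bit, which is the fixed-point analogue of the quantization error β.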

In contrast to the CPU, quantization effects play a major role in the DAC and ADC, because these components use shorter word sizes (commonly between 8 and 16 bits). The maximum quantization error β may be large enough to noticeably degrade performance in cost-sensitive applications, where 8-bit or 10-bit DACs and ADCs are typically used.

Sampling period

The selection of the sampling rate is commonly based on a rule of thumb and a trade-off between cost and performance. The first thing to consider is that the sampling rate ωs must satisfy the Nyquist-Shannon sampling theorem, which states that a signal that is bandlimited by ωb is uniquely characterized by its uniform samples taken with a frequency ωs > 2ωb, see [54, Ch. 16]. In practice, higher ratios are required to have a smooth time response. The following rule of thumb is commonly used [55]:

    20 < ωs/ωb < 40.

In [54, Ch. 16] other possible heuristic rules are also discussed.
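The rule of thumb translates directly into a range of admissible sampling periods T = 2π/ωs; the following small helper is illustrative only, not from the thesis:

```python
import math

def sampling_period_bounds(omega_b):
    """Sampling period range implied by 20 < omega_s/omega_b < 40,
    with T = 2*pi/omega_s and omega_b the signal bandwidth [rad/s]."""
    t_max = 2.0 * math.pi / (20.0 * omega_b)   # slowest acceptable sampling
    t_min = 2.0 * math.pi / (40.0 * omega_b)   # fastest suggested sampling
    return t_min, t_max
```

For a bandwidth of ωb = 10 rad/s this yields sampling periods roughly between 16 and 31 milliseconds.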

The next consideration in determining the sampling rate is cost. In general, higher sampling rates imply better closed-loop performance. If cost is no concern, a powerful processor can be used that allows a very high sampling rate (e.g. ωs/ωb > 40). For cost-sensitive applications, the sampling rate is selected as the lowest that meets all performance requirements: fast disturbance rejection, smooth time behavior, etc.

Although there exist applications that may benefit from a variable sampling period, we focus here on systems with a constant sampling period. Furthermore, we only deal with continuous-time systems discretized via zero-order hold on the inputs, i.e. we consider discrete-time systems of the form xk+1 = f(xk, uk). As a side note, when using very fast sampling rates an incremental model of the form dxk = f(xk, uk, T) [59] might be necessary for the closed-loop control of embedded systems [58].
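As a concrete instance of zero-order-hold discretization, the double integrator admits well-known exact discrete-time matrices. The sketch below (a textbook example, not a system from the thesis) builds them for a sampling period T and performs one update x+ = Ax + Bu:

```python
def zoh_double_integrator(T):
    """Exact zero-order-hold discretization of the double integrator
    xdot = [[0,1],[0,0]] x + [0,1]' u, sampled with period T."""
    A = [[1.0, T], [0.0, 1.0]]
    B = [T * T / 2.0, T]
    return A, B

def step(A, B, x, u):
    """One discrete-time update x+ = A x + B u for a 2-state system."""
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
```

With T = 0.1 and a unit input held over one period, the state moves from the origin to position T²/2 = 0.005 and velocity T = 0.1, exactly as the continuous-time solution predicts at the sampling instant.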

2.1.2. Cyber-physical systems: real-time embedded systems

The term embedded system has different meanings in different contexts. In this work, we define an embedded system as a dedicated microprocessor-based system that controls a physical process. Recently, the term cyber-physical systems has been applied to describe embedded systems that control physical processes in real time [60, 61], possibly via a computer network. This term also has different scopes in different disciplines, and some authors emphasize the communication between embedded systems in the provided definition [62]. We summarize the main characteristics of cyber-physical systems that are relevant for later chapters.

Hardware

There exists a broad spectrum of microprocessors. We focus on devices at the bottom end of the spectrum in terms of computational capabilities, namely microcontrollers.


Even among microcontrollers there is a broad range of options. The CPU, for example, is driven by clock frequencies usually in the range of several tens to a few hundred megahertz. Another distinctive feature is the word size, which is typically 8-bit, 16-bit or 32-bit. For computationally intensive applications, like the MPC algorithm, 32-bit microcontrollers are in general preferred. We briefly describe the most relevant hardware characteristics of microcontrollers: the processor (or CPU) and the memory. In the case of the processor, we base our discussion on a popular family of 32-bit microprocessors: ARM Cortex-M. These state-of-the-art processors are nowadays in widespread use and are well suited for applications with complex data processing requirements like MPC [63, Ch. 1]. For a detailed description of this family of processors see [63].

We start by discussing the arithmetic capabilities of a microcontroller. Let us first consider the low-cost Cortex-M3 microprocessor. This CPU lacks a floating-point unit (FPU). This means that only integer arithmetic can be performed directly in hardware. Integer addition and multiplication can be executed in one clock cycle, whereas division needs several cycles. Furthermore, floating-point operations are emulated in software, with additions and multiplications being several times slower than their integer counterparts. Empirical evidence shows that fixed-point operations (add, multiply) are around 4 times faster than floating-point operations for the Cortex-M3 [25] as well as for the classical ARM7TDMI microprocessor [64]. In contrast, the more expensive Cortex-M4 includes digital signal processing (DSP) capabilities and, optionally, a single-precision FPU. This allows 32-bit floating-point arithmetic operations to be executed directly in hardware.

The next big component of a microcontroller is memory. We distinguish between two types of memory: program memory and data memory. The former is a non-volatile storage typically based on flash technology. Its capacity is commonly between ten kilobytes and a few megabytes, and it is sometimes colloquially called ROM (read-only memory). The latter is a volatile type of storage typically based on SRAM (static random-access memory), and has a capacity of one kilobyte to a few hundred kilobytes.

Real-time systems

We now introduce some concepts related to applications that process data in real time. We take most of our definitions from [60]. A real-time system is one whose correctness is based not only on the correctness of the results of the computations but also on the physical time at which these results are produced.

A deadline is the time by which a result must be produced. Deadlines can be classified in three types: firm, soft and hard. A firm deadline is characterized by the fact that a result is no longer useful after the deadline has passed. If the deadline is soft, a result that missed the deadline can still be used. A hard deadline is one whose violation may result in failure or catastrophic consequences. It follows that any hard deadline is a firm deadline.

Similarly to the classification of deadlines, real-time systems are also separated into soft and hard real-time systems (the latter also called safety-critical systems). The former is any real-time system with no hard deadlines, whereas the latter is a real-time system that must meet at least one hard deadline.

In this work, we consider digital control applications with firm deadlines. Depending on the application, a control system may be soft or hard real-time. In the case of soft real-time control systems, missing a deadline may result in degradation of performance. Another important concept is that of temporal determinism, which means that the response time for any event is known. The computational delay is the time it takes the microprocessor to execute the control algorithm. The variation of the computational delay is called the control jitter. We consider an algorithm to be (temporally) deterministic if the control jitter is much smaller than the computational delay.

Software

Within software we make the distinction between application, operating system and compiler. We provide loose definitions of these terms, adapted from [65, Ch. 1]. An application performs a user-defined task. A digital control algorithm is considered an application. It may be a state observer, a model predictive control algorithm, etc. The operating system is in charge of interfacing the application with the hardware. A real-time operating system (RTOS) is additionally in charge, among other things, of task scheduling. The control application is one of the many tasks that run concurrently on the CPU. The RTOS scheduling algorithm guarantees, under certain conditions, that our control algorithm is executed periodically with a constant sampling period [65, 66]. In embedded applications the C programming language [39], or simply C, and C++ are commonly used. However, C presents certain advantages over C++: for embedded hardware targets, object-oriented C++ may carry a penalty in computational performance and power consumption when compared to procedural C [67]. Furthermore, for embedded targets, C compilers are more common than C++ compilers.

We use the term compiler broadly to denote the process of translating the user source code into binary code executable by the microcontroller. Hence, in this context a compiler also includes a preprocessor, an assembler, and a linker. The source code refers to both the application and the operating system. In embedded systems, an application is commonly written entirely in C or C++, whereas the OS is usually a mix of C/C++ and hardware-specific assembler code.

Memory management

Although the topic of memory management is a complex one (see [65, Ch. 3]), here we reduce it to two simplified cases: dynamic and static memory allocation. A programming language typically offers only certain types of memory management functionality. Dynamic memory allocation is done during program execution. In C and C++, for instance, dynamic allocation is done with a call to the malloc function. In C++ the new operator can also be used to dynamically allocate memory. Furthermore, an RTOS is in charge of allocating memory for each task using the functionality provided by the base language.

The main advantage of dynamic allocation is that it offers flexibility. That is, the amount of memory required by certain types of data (e.g. arrays) does not need to be known at compile time. However, a dynamic memory allocation algorithm does not show temporal determinism [68, 69]. Therefore, its use is restricted in real-time systems. Moreover, the use of dynamic memory allocation is prohibited in critical systems according to the MISRA-C standard [70]. Furthermore, some embedded C/C++ compilers do not implement dynamic memory allocation algorithms.

In contrast, static memory allocation is made at compile time. As its name implies, the amount of memory cannot be changed at runtime. If the size of the data involved changes, the affected code needs to be recompiled. The main advantage is that there are no runtime penalties. Therefore, static memory allocation is commonly required in real-time applications.


2.2. On-line optimization

This section introduces some of the basic concepts of on-line optimization, which is fundamental for the implementation of MPC.

2.2.1. Basics of convex optimization

Before we proceed, let us state some required definitions. The following presentation is based on [31].

We consider an optimization problem of the form

    minimize_{y ∈ R^ny}   f0(y)
    subject to   fi(y) ≤ 0,  for all i ∈ I,
                 fi(y) = 0,  for all i ∈ E,
    (2.2)

where the functions fi are smooth real-valued functions on R^ny, and I and E are each a finite set of indices. The function f0 : R^ny → R is called the cost function or objective function. The vector y ∈ R^ny is the optimization variable. The functions fi : R^ny → R, i ∈ I and fi : R^ny → R, i ∈ E are called inequality and equality constraint functions, respectively. A point y is feasible if it satisfies all constraints. Problem (2.2) is said to be feasible if there exists at least one feasible point. Otherwise it is called infeasible. The set containing all feasible points is called the constraint set or feasible set. The optimal value f0* of problem (2.2) is defined as:

    f0* = inf{f0(y) | fi(y) ≤ 0, ∀ i ∈ I, fi(y) = 0, ∀ i ∈ E}.

A solution y* to problem (2.2), also called an optimal point, is a point that is feasible and for which f0(y*) = f0* holds. A feasible point y is called ε-suboptimal if it satisfies f0(y) ≤ f0* + ε, for ε > 0. We call ε the suboptimality level.

An optimization problem is called convex if the cost function is convex, the inequality constraint functions are convex, and the equality constraint functions are affine (i.e. fi(y) = Di y − di, for all i ∈ E). This implies that the feasible set of a convex optimization problem is convex.

2.2.2. Quadratic programs

Of particular interest in this work are quadratic optimization problems (QPs) of the form:

    minimize_{y ∈ R^ny}   (1/2) yᵀ H y + gᵀ y
    subject to   C y ≤ c,
                 D y = d.
    (2.3)

With respect to (2.2) we have f0(y) = (1/2) yᵀ H y + gᵀ y, fi(y) = Di y − di for all i ∈ E, and fi(y) = Ci y − ci, i ∈ I. If we replace the affine inequality constraints in (2.3) with quadratic constraints we obtain the problem:

    minimize_{y ∈ R^ny}   (1/2) yᵀ H y + gᵀ y
    subject to   (1/2) yᵀ Pi y + qiᵀ y ≤ ri,  for all i ∈ I,
                 D y = d,

with Pi symmetric positive semi-definite. This problem is called a quadratically constrained quadratic program (QCQP). This type of problem arises in various MPC setups, see for example [71, 72]. A related problem is the second-order cone program (SOCP) of the form:

    minimize_{y ∈ R^ny}   (1/2) yᵀ H y + gᵀ y
    subject to   ‖P̄i y + p̄i‖2 ≤ q̄iᵀ y + r̄i,  for all i ∈ I,
                 D y = d.

SOCPs arise in many engineering applications [73], including stochastic MPC problems [53, 74, 75]. We are mainly interested in QPs without equality constraints, which in standard form are represented as:

    minimize_{y ∈ R^ny}   (1/2) yᵀ H y + gᵀ y
    subject to   C y ≤ c.
    (2.4)

In general, a QP is convex if and only if the Hessian matrix H is symmetric positive semi-definite. Furthermore, we focus on strictly convex QPs. A QP is called strictly convex if and only if Hᵀ = H > 0. Strict convexity implies that if a solution y* exists then it is unique. Additionally, strict convexity is an important property exploited by many optimization algorithms, and in particular by the method discussed in Section 3.3.
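For small problems, strict convexity can be checked directly. The following sketch tests Hᵀ = H > 0 for a 2×2 Hessian via Sylvester's criterion (all leading principal minors positive); it is an illustrative helper, not thesis code:

```python
def is_strictly_convex_2x2(H):
    """Check H = H' > 0 for a 2x2 Hessian via Sylvester's criterion:
    H is symmetric positive definite iff H[0][0] > 0 and det(H) > 0."""
    symmetric = (H[0][1] == H[1][0])
    minor1 = H[0][0]                              # first leading minor
    minor2 = H[0][0] * H[1][1] - H[0][1] * H[1][0]  # determinant
    return symmetric and minor1 > 0 and minor2 > 0
```

For example, [[2, 1], [1, 2]] passes the test, whereas [[1, 2], [2, 1]] fails (its determinant is negative), so the corresponding QP would not be strictly convex.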

2.2.3. Optimization theory for quadratic programs

One important concept is that of duality, since it is commonly exploited by optimization algorithms. We start by defining the Lagrangian function of problem (2.2), which is called the primal problem, as

    L(y, λ) = f0(y) + Σ_{i ∈ E∪I} λi fi(y),     (2.5)

where λ denotes a Lagrange multiplier vector. Then, the Lagrange dual function is defined as the minimum value of the Lagrangian over y for some fixed λ,

    g0(λ) = inf_{y ∈ Dy} L(y, λ) = inf_{y ∈ Dy} ( f0(y) + Σ_{i ∈ E∪I} λi fi(y) ).     (2.6)

One important characteristic of the dual function is that it is concave, even if problem (2.2) is not convex [31, Ch. 5]. The Lagrange dual problem for problem (2.2) is defined as

    maximize   g0(λ)
    subject to   λi ≥ 0,  i ∈ I,
    (2.7)

which is itself a convex optimization problem. We say that strong duality holds if the optimal value of the dual problem g0* equals the optimal value of the primal problem f0*. Duality is important since, for a strictly convex quadratic program (2.8) (for which strong duality holds), the solution to the primal problem can be readily found by solving the dual problem. Many optimization algorithms solve the primal problem directly; others solve the dual problem instead to find the solution of the primal, or exploit information from both primal and dual problems to find a solution.
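These relations are easy to verify on a scalar instance of (2.4), min (1/2)hy² + gy s.t. y ≤ c with h > 0, for which the primal solution and the dual function are available in closed form. The helpers below are illustrative, not thesis code:

```python
def primal_solution(h, g, c):
    """Solve min 0.5*h*y^2 + g*y  s.t. y <= c  (scalar, h > 0)."""
    y_unc = -g / h                        # unconstrained minimizer
    return y_unc if y_unc <= c else c     # clip to the feasible set

def dual_function(lam, h, g, c):
    """Lagrange dual g0(lam) = inf_y [0.5*h*y^2 + g*y + lam*(y - c)],
    obtained by minimizing the Lagrangian over y in closed form."""
    return -((g + lam) ** 2) / (2.0 * h) - lam * c

def dual_solution(h, g, c):
    """Maximizer of the concave dual over lam >= 0."""
    return max(0.0, -g - h * c)
```

For h = 2, g = 0, c = −1 the primal optimum is y* = −1 with value 1, the dual optimum is λ* = 2, and g0(λ*) = 1 as well, illustrating strong duality.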

We are interested in quadratic programs of the form (2.4). If the gradient and constraint vector depend on a parameter w ∈ R^nw, we have the parametric problem P(w):

    minimize_{y ∈ R^ny}   (1/2) yᵀ H y + g(w)ᵀ y
    subject to   C y ≤ c(w),
    (2.8)

with c(w) ∈ R^nc. In MPC applications, this parameter is commonly given by (a combination of) the current state vector x, a reference trajectory, or known disturbances. For notational simplicity and where no confusion may arise, the dependency on the parameter w will be dropped.

For our further discussion, we need a few more definitions.

Definition 1 (Feasible set). For the quadratic program P(w) the feasible set is given by

    F(w) = {y ∈ R^ny | C y ≤ c(w)}.     (2.9)

Definition 2 (Active set). Consider problem P(w). A constraint Ci y ≤ ci(w), 1 ≤ i ≤ nc, is called active at y ∈ F(w) ≠ ∅ if Ci y = ci(w) holds, and inactive otherwise. The set of indices

    A(y; w) = {i ∈ {1, . . . , nc} | Ci y = ci(w)}     (2.10)

is called the set of active constraints or simply the active set. If y* is the solution to P(w) we call A(y*; w) the optimal active set.

Definition 3 (LICQ). Given the pair (y, w), the constraints ĉi(y; w) = Ci y − ci(w), and the active set A(y; w), we say that the linear independence constraint qualification (LICQ) holds if the set of active constraint gradients {∇ĉi(y; w), i ∈ A(y; w)} is linearly independent.

Now we are ready to introduce the first-order optimality conditions, also known as the Karush-Kuhn-Tucker (KKT) conditions (taken from [29, Theorem 12.1]), which characterize an optimal point.

Theorem 1 (First-order optimality conditions). Suppose that y* is a local solution to (2.2), that the functions fi in (2.2) are continuously differentiable, and that the LICQ holds at y*. Then there is a Lagrange multiplier vector λ*, with components λi*, i ∈ E ∪ I, such that the following conditions are satisfied at (y*, λ*):

    ∇y L(y*, λ*) = 0,
    fi(y*) ≤ 0,  ∀ i ∈ I,
    fi(y*) = 0,  ∀ i ∈ E,
    λi* ≥ 0,  ∀ i ∈ I,
    λi* fi(y*) = 0,  ∀ i ∈ I ∪ E.
    (2.11)


A proof can for example be found in [29, Ch. 12].

For convex problems like (2.8), the KKT conditions provide sufficient and necessary conditions for a pair (y*, λ*) to be optimal. The point y* is the unique minimizer if the problem is strictly convex. Furthermore, if the LICQ holds, for any given y* the optimal Lagrange multiplier λ* is also unique.
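The conditions (2.11) can be checked numerically for a small QP. The sketch below evaluates the KKT residuals for a two-variable problem with a single inequality constraint row; it is an illustrative helper, not thesis code:

```python
def kkt_residuals(H, g, C, c, y, lam):
    """Evaluate the KKT conditions for min 0.5*y'Hy + g'y s.t. C'y <= c
    (2 variables, one inequality constraint, no equality constraints).
    Returns (stationarity residual, primal feasible, dual feasible,
    complementarity residual)."""
    # stationarity: H y + g + C * lam = 0
    stat = [H[0][0] * y[0] + H[0][1] * y[1] + g[0] + C[0] * lam,
            H[1][0] * y[0] + H[1][1] * y[1] + g[1] + C[1] * lam]
    slack = c - (C[0] * y[0] + C[1] * y[1])   # primal feasibility: slack >= 0
    comp = lam * slack                        # complementarity: lam * slack = 0
    return max(abs(stat[0]), abs(stat[1])), slack >= 0, lam >= 0, abs(comp)
```

For example, for min (1/2)‖y‖² subject to y1 + y2 ≤ −2, the candidate pair y* = (−1, −1), λ* = 1 satisfies all four conditions, confirming optimality.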

For the discussion of the active set optimization algorithm and the explicit MPC method in Chapter 3, the following definitions are needed.

Definition 4 (Set of feasible parameters). The set of feasible parameters of a parametric quadratic program is given by

    W = {w ∈ R^nw | F(w) ≠ ∅}.     (2.12)

Definition 5 (Critical region). Let a strictly convex quadratic program P(w) with w ∈ W be given. Furthermore, let y*(w) denote the solution and A(y*(w); w) the corresponding optimal active set. Then, for every index set A ⊆ {1, . . . , nc}, the critical region of W is given by

    CR_A = {w ∈ W | A(y*(w); w) = A}.     (2.13)

2.3. Model predictive control for linear systems

We are ready to introduce the model predictive control algorithm. We consider the linear MPC setup of the form (1.1). Figure 2.2 depicts the behavior of a system being controlled by a discrete MPC implementation. At time tk = kT, with k ∈ Z and T the sampling period, we estimate the state x̂k from measurements ŷk, typically using a state observer. The MPC algorithm uses x̂k and the linear model of the system to predict the future trajectory of the state xk for the following N steps. With this information, a minimization problem is solved to find the input sequence uk that optimizes a given performance criterion for the given horizon N. In the figure, the continuous function φ(t, x̂k, uk) (shown in dashed red) represents the nominal state evolution starting at x̂k under the influence of the input uk. The predicted state trajectory xk coincides with φ(t, x̂k, uk) at each sampling time tk (represented by black dots over φ(tk, ·)). The discrete-time MPC algorithm has no knowledge of φ(·), though. We have included this continuous-time plot in the figure to simplify the explanation of the concept. Furthermore, uk is held constant at each sampling interval (a zero-order hold), as is common for digital controllers [7].

Figure 2.2.: A visual representation of the MPC algorithm.

Ideally, the state trajectory of the controlled system would follow the trajectory φ(t, x̂k, uk). However, due to several effects which are often not included in the MPC model (disturbances, model-plant mismatch, errors in the state estimation, quantization of the input, etc.), the real system follows a slightly different trajectory under the influence of uk during the interval from tk to tk+1 (shown in solid red).

At the next sampling period tk+1, we repeat the process. We first take a new measurement ŷk+1 and estimate x̂k+1. We then compute the input sequence uk+1 (dashed blue) for the following N steps, apply the first part of uk+1 (solid blue) to the real system during the current sampling interval, and so on for the subsequent sampling times tk, k = 2, 3, . . .

To simplify our discussion, we focus on one particular moving horizon setup, namely regulation to the origin of a constrained linear system. We describe this sample setup in the following.


2.3.1. A basic MPC setup

We consider discrete-time linear time-invariant systems subject to input and state constraints described by

    x+ = A x + B u,     (2.14)

where x ∈ X ⊆ R^n and u ∈ U ⊂ R^m are the system state and input at the current sampling time, respectively. The successor state is denoted by x+. The considered input constraint set U is a convex and compact box set containing the origin. Moreover, the state constraint set X is a closed (not necessarily bounded) convex set with the origin in its interior. These types of constraints are commonly found in practice (see [3, Ch. 1]). The discrete-time system and input matrices are represented by A and B, respectively. We assume the pair (A, B) to be stabilizable. The control objective is to bring the system to a desired equilibrium point while satisfying all constraints. For simplicity and without loss of generality, in this section we only consider regulation to the origin.

MPC is a natural candidate to solve the constrained regulation problem. The MPC controller needs to solve at each sampling time k an optimization problem parameterized by the current state x. This problem is, for the conditions given above, defined as follows:

    minimize_u   (1/2) Σ_{j=0}^{N−1} ( ‖xj‖²_Q + ‖uj‖²_R ) + (1/2) ‖xN‖²_P
    subject to   xj+1 = A xj + B uj,   j = 0, . . . , N − 1,
                 u̲ ≤ uj ≤ ū,   j = 0, . . . , N − 1,
                 e̲ ≤ Ex xj + Eu uj ≤ ē,   j = 0, . . . , N − 1,
                 f̲ ≤ F xN ≤ f̄,
                 x0 = x,
    (2.15)

where N ≥ 2 is an integer denoting the prediction horizon. We consider positive definite quadratic forms ‖w‖²_S = wᵀSw, with ‖w‖²_S > 0 for w ≠ 0 and ‖w‖²_S = 0 only if w = 0, where S is a symmetric positive definite matrix. The matrices Q, R and P are symmetric positive definite and denote the state, input and terminal weighting matrices, respectively. If Q is only symmetric positive semi-definite, an additional requirement for stability is needed, namely that the pair (A, Q) is detectable [3, Ch. 2]. The matrices Q and R are, loosely speaking, a specification of the desired controller behavior. The term ℓ(xj, uj) = (1/2)(‖xj‖²_Q + ‖uj‖²_R) is called the stage cost, whereas ℓN(xN) = (1/2)‖xN‖²_P is called the terminal cost. Furthermore, the box set U is defined by the lower and upper bounds u̲ and ū. Similarly, the state constraints X are defined by the bounds e̲ and ē, and the matrices Ex ∈ R^{q×n} and Eu ∈ R^{q×m}. The terminal state constraint set Xf ⊆ X is a polytope defined by the triplet f̲, f̄ and F ∈ R^{r×n}. This set, together with the cost function, is often used to guarantee closed-loop stability [3, Ch. 2].

We optimize over the input sequence u = {u0, . . . , uN−1} ∈ U = U^N. The state constraints further impose the constraint u ∈ UN(x) = {u ∈ U | x(x, u) ∈ X}. The state sequence x(x, u) ∈ R^{(N+1)n} describes the trajectory followed by the states of system (2.14) starting at x and under the input sequence u, i.e. x(x, u) = {x0 = x, x1 = A x0 + B u0, . . . , xN = A xN−1 + B uN−1}. The set X = {x ∈ R^{(N+1)n} | xk ∈ X, k = 0, . . . , N − 1, xN ∈ Xf} denotes the state trajectories satisfying all constraints. Thus, UN(x) is the set of feasible input sequences of problem (2.15) for a given x.

In an MPC scheme, the first element of the optimal input sequence u* is used as the feedback control action, i.e. during each sampling period u = u0* is applied to the plant. In many applications, it is safe to assume that the input sequence found by optimization algorithms is indeed u*. However, in the case of embedded systems with fast sampling rates, the MPC controller may only have enough time to compute some u ∈ UN(x), a (possibly rough) approximation to u*.

2.3.2. MPC as a QP

Most implementations of optimization algorithms cannot directly solve problems like (2.15). Therefore, the MPC problem needs to be brought into a form that can be directly used together with a tailored solver.

The MPC optimization problem (2.15) can be brought into the following equivalent form (refer to Appendix A):

    minimize_{u ∈ R^{Nm}}   (1/2) uᵀ H u + g(x)ᵀ u
    subject to   C u ≤ c(x).
    (2.16)

Compared to (2.4), we see that the gradient vector g(x) and the constraint vector c(x) are now given by functions of the state x, in particular c : R^n → R^{2((m+q)N+r)}. Problem (2.16) is known as a parametric QP, and is parameterized by the variable x. Note that this problem does not include equality constraints (i.e. E = ∅).

There are several ways to guarantee strict convexity of (2.16). In the case of setpoint stabilization (2.15), selecting the (symmetric) weighting matrices such that Q ≥ 0, P ≥ 0 and R > 0 guarantees that the Hessian matrix is symmetric positive definite.

An MPC problem like (2.16) can be reliably solved by many commercial (e.g. MATLAB, Gurobi) and non-commercial (e.g. CVXOPT [76], OpenOpt [77]) optimization packages [78]. Nevertheless, to efficiently apply MPC on embedded systems, a solver needs to exploit the structure of problems like (2.16) that are derived from (2.15). The next chapter addresses this topic.
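To illustrate the structure of (2.16), the following sketch condenses a scalar system x+ = a·x + b·u with horizon N = 2 into a Hessian H and a gradient map g(x). This is a hand-worked toy instance, not the general construction of Appendix A:

```python
def condense_scalar(a, b, q, r, p):
    """Condensed QP data for the scalar system x+ = a*x + b*u, horizon N = 2,
    stage weights q (state) and r (input), terminal weight p.
    Prediction: [x1, x2]' = Phi*x + Gamma*[u0, u1]'."""
    Phi = [a, a * a]
    Gamma = [[b, 0.0], [a * b, b]]
    Qbar = [q, p]                       # weights on x1 (stage) and x2 (terminal)
    # H = Gamma' * diag(Qbar) * Gamma + r * I
    H = [[sum(Gamma[k][i] * Qbar[k] * Gamma[k][j] for k in range(2))
          + (r if i == j else 0.0) for j in range(2)] for i in range(2)]

    # g(x) = Gamma' * diag(Qbar) * Phi * x  (affine in the parameter x)
    def g(x):
        return [sum(Gamma[k][i] * Qbar[k] * Phi[k] for k in range(2)) * x
                for i in range(2)]

    return H, g
```

For a = b = q = r = p = 1 this yields H = [[3, 1], [1, 2]] and g(x) = [2x, x], which agrees with minimizing the two-step cost by hand.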

2.3.3. General moving horizon control formulation

So far, we have only considered one particular type of MPC problem. However, one of the strengths of the moving horizon formulation is that it is general and can therefore deal with a great variety of problems. Nevertheless, we focus our attention on problems that can be equivalently expressed as a parametric quadratic program P(p) of the form:

    minimize_u   (1/2) uᵀ H u + g(p)ᵀ u
    subject to   C u ≤ c(p),
    (2.17)

with H and C constant matrices, and g(p) and c(p) vectors affine in each of the parameters of the parameter sequence p.

Note that (2.16) is the standard QP representation of the MPC setup (2.15). This particular problem (regulation to the origin subject to constraints) has the state vector x as its only parameter. That is, (2.16) is a particular case of (2.17) in which p consists of a single parameter x, i.e. p = {x}. In general, p may consist of several parameters that depend on the MPC problem at hand. Thus, formulation (2.17) allows us to consider a broad range of problems, including, but not limited to: setpoint stabilization, trajectory tracking [79, 80], systems with dead time [81], and systems with known-ahead disturbances [11, 82]. Notable problems that cannot be expressed in form (2.17) are time-varying systems, nonlinear systems, and formulations with time-varying weighting matrices.
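As an illustration of the affine parameter dependence required by (2.17), the sketch below evaluates g(p) and c(p) for a hypothetical tracking-style parameter sequence p = {x, xref} with scalar parameters; all matrices and values are placeholders chosen for illustration.

```python
# Affine dependence on the parameter sequence p = {x, xref} (scalar case):
# g(p) = g0 + Gx*x + Gr*xref,  c(p) = c0 + Cx*x.  All values illustrative.

def affine(offset, *terms):
    """Evaluate offset + sum_k M_k * p_k for scalar parameters p_k."""
    out = list(offset)
    for M, pk in terms:
        out = [oi + Mi * pk for oi, Mi in zip(out, M)]
    return out

g0, Gx, Gr = [0.0, 0.0], [1.2, 0.4], [-1.0, -0.3]    # placeholder data
c0, Cx = [1.0, 1.0, 1.0, 1.0], [0.5, -0.5, 0.0, 0.0]

def qp_data(x, xref):
    """Online update: only the affine vectors change; H and C stay constant."""
    return affine(g0, (Gx, x), (Gr, xref)), affine(c0, (Cx, x))
```

Because H and C are fixed, all factorizations or step-size constants that depend only on them can be precomputed offline, and the online work reduces to refreshing these two vectors.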


Algorithm 1: Model predictive control algorithm
Require: At each sampling time k = 0, 1, . . . get the parameter sequence p_k
1: Find the solution u* to problem (2.17) for p = p_k
2: Assign the MPC control action u_k = u*_0
3: return u_k

The MPC scheme that was intuitively described in Figure 2.2 is formally presented in Algorithm 1.
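The loop of Algorithm 1 can be sketched in code as follows; the Hessian, the gradient map, and the closed-form unconstrained solve are illustrative stand-ins for a real constrained QP solver.

```python
# Sketch of Algorithm 1 for a 2-dimensional unconstrained case: at each
# sampling instant, minimize 0.5*u'Hu + g(x)'u and apply only u0.
H = [[2.6, 0.8], [0.8, 2.2]]  # constant Hessian (illustrative values)
G = [1.9, 0.7]                # g(x) = G*x (illustrative values)

def solve_qp(gvec):
    """Unconstrained minimizer u* = -H^{-1} g via the explicit 2x2 inverse."""
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return [(-H[1][1] * gvec[0] + H[0][1] * gvec[1]) / det,
            (H[1][0] * gvec[0] - H[0][0] * gvec[1]) / det]

def mpc_step(x):
    """One pass of Algorithm 1: build g(x), solve, return the first input u0."""
    u_star = solve_qp([Gi * x for Gi in G])
    return u_star[0]
```

In a real deployment, mpc_step would be invoked once per sampling period with the measured (or estimated) state, and only the first element of the optimal input sequence is applied before the horizon is shifted.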

2.4. Summary

We presented a general description of the characteristics of the problems we are interested in. We introduced digital control of embedded systems and outlined important concepts that are relevant for MPC applications. We focused first on the limitations of embedded computers. We then introduced the model predictive control algorithm for linear systems, and showed that it is, under certain conditions, equivalent to a convex optimization problem. We presented the theoretical foundations of convex optimization that are needed in the following chapters. In particular, we discussed convex quadratic programs, a type of optimization problem commonly arising in linear MPC. In the next chapter we discuss how problems like (2.17) can be solved efficiently, while at the same time taking the limitations of embedded systems into account.


3. Tailored on-line optimization software tools for MPC

This chapter discusses how optimization algorithms can be tailored for efficient MPC implementations. We start by describing the main properties of the MPC algorithm that need to be taken into account. We then discuss how some of these properties are exploited by state-of-the-art tailored optimization software tools. Finally, we present a novel optimization algorithm that takes the properties of MPC into account and is well suited for embedded systems. This allows the algorithm to achieve good controller performance with the limited computational resources available on embedded hardware platforms.

3.1. Exploiting the properties of the MPC algorithm

In this section we discuss the most relevant properties of the MPC algorithm that need to be taken into account in embedded systems. These properties can be used to develop efficient MPC implementations. We discuss some methods that work well in practice. From a theoretical perspective, however, they present some challenges. We, therefore, emphasize in the following the practical implications of these ideas, while we only briefly mention some theoretical aspects.

3.1.1. Known-ahead maximum computation time

As discussed in Section 2.1, one fundamental assumption that is commonly made in digital control systems is that the sampling period is constant. This imposes a limit on the maximum computation time of the MPC algorithm. Because an MPC scheme runs alongside several other software components on the same processor, the exact computational time available for the MPC can in most cases only be determined
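One common way to obtain a known-ahead bound on the computation time is to run the solver for a fixed number of iterations. The sketch below applies this idea to a projected gradient method for a box-constrained QP; the problem data and step size are illustrative, and the method is a simplified stand-in for the algorithms discussed later in this chapter.

```python
# Fixed-iteration projected gradient for: min 0.5*u'Hu + g'u, lb <= u <= ub.
# Capping the iteration count gives a deterministic per-sample workload,
# at the price of a possibly suboptimal (but warm-startable) iterate.
H = [[4.0, 0.0], [0.0, 2.0]]       # diagonal Hessian (illustrative)
lb, ub = [-0.5, -0.5], [0.5, 0.5]  # box constraints on the input sequence
L = 4.0                            # step-size constant: largest eigenvalue of H

def clip(v, lo, hi):
    return min(max(v, lo), hi)

def solve_box_qp(g, u0, max_iter=20):
    """Projected gradient with a hard iteration cap (bounded execution time)."""
    u = list(u0)
    for _ in range(max_iter):
        grad = [sum(H[i][j] * u[j] for j in range(2)) + g[i] for i in range(2)]
        u = [clip(u[i] - grad[i] / L, lb[i], ub[i]) for i in range(2)]
    return u
```

With a fixed max_iter, the worst-case execution time of one control update can be measured or bounded offline, which is exactly what a real-time schedulability analysis requires.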
