Organization of the Thesis

Before discussing the main topic of the dissertation, several important mathematical and control-theoretic concepts are introduced in Chapter 2. The first part of the chapter is dedicated to the mathematical, system-theoretic and set-theoretic definitions used throughout this work. The latter part discusses preliminaries of the two central topics of the dissertation, namely Model Predictive Control and Linear Parameter-Varying modelling; it is written assuming little, if any, familiarity with these topics in order to make the rest of the thesis easily digestible.

Chapter 3 presents the foundation of the qLMPC framework in the state-space setting. An iterative algorithm is presented which can be used to efficiently solve the nonlinear MPC optimization problem; stability conditions in the form of LMI problems are derived, whose offline solution yields the terminal ingredients (terminal cost and terminal constraint set) to be used in the online MPC law. The problems of offset-free tracking and of nonlinear constraints are discussed as well. The control law is tested experimentally on an Arm-Driven Inverted Pendulum (PenduBot).

A velocity-form nonlinear MPC is presented in Chapter 4. The stability result in this case makes use of terminal equality constraints in the velocity space, making its implementation comparatively simple (as no terminal ingredients need to be computed). The use of velocity-based linearization and state augmentation makes it straightforward to consider nonlinear output equations. The velocity algorithm is tested on a 2-DOF robotic manipulator considering both nonlinear constraints and a nonlinear output.

The qLMPC framework is extended to input-output quasi-LPV models in Chapter 5. The stability analysis of the state-space framework is appropriately modified for this class of models, and the tracking problem is discussed in more detail, as the stability conditions are better suited to tracking in an IO setting. The IO-qLMPC law is validated on a 2-DOF robotic manipulator.

A data-driven predictive control law based on Koopman operators and qLMPC is presented in Chapter 6. A short overview of the Koopman framework is given and an algorithm to compute the Koopman operator online is derived. The Koopman-based lifted linear model is used in conjunction with the qLMPC framework resulting in a data-driven control strategy. This approach is tested experimentally on a 4-DOF Control Moment Gyroscope.

A stability analysis tool for LPV MPC is proposed in Chapter 7. Under certain assumptions about the nonlinear system, the nonlinearity arising from a QP can be characterized by a so-called Parameter Dependent Quadratic Constraint. This characterization is used to derive a dissipation inequality and establish stability of the closed-loop a priori without artificially imposed stabilizing constraints.

Preliminaries

This chapter introduces the reader to many of the mathematical, system-theoretic and control-theoretic concepts that are relevant in the present context. The presentation and developments here are brief but cover most of the prerequisites needed to easily follow the rest of the thesis.

2.1 Mathematical Preliminaries

Before introducing the two central topics of this thesis, namely Model Predictive Control (MPC) and quasi-Linear Parameter-Varying (quasi-LPV) modelling, several useful mathematical tools, definitions and results to be used throughout this work are summarized in this section. The section is intended to serve both as the definition of these concepts and as a quick reference for the reader.

2.1.1 Optimization Problems

The following definitions deal with relevant types of optimization problems; the interested reader is referred to [20] and [21] for a more thorough discussion.

Definition 2.1 (Quadratic Program). A Quadratic Program (QP) is an optimization problem of the form

$$\min_x \; x^\top H x + g^\top x + r \quad \text{subject to} \quad A x \le b, \quad A_{\mathrm{eq}} x = b_{\mathrm{eq}}.$$

This family of optimization problems represents the most widely used structure in the context of Model Predictive Control. It arises naturally when the cost function is quadratic and the system dynamics are linear; however, it is also frequently used in Nonlinear MPC (NMPC), where second-order approximations of the nonlinear optimization problem are employed (e.g. Newton's method).
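As a minimal illustration (not taken from the thesis), such a QP can be set up and solved with an off-the-shelf convex optimization package; the sketch below assumes the cvxpy modelling package, and the problem data $H$, $g$, $A$, $b$, $A_{\mathrm{eq}}$, $b_{\mathrm{eq}}$ are arbitrary placeholders.

```python
import numpy as np
import cvxpy as cp

# Placeholder problem data (illustrative only)
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # positive definite Hessian
g = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])      # inequality constraint A x <= b
b = np.array([1.0])
A_eq = np.array([[1.0, -1.0]])  # equality constraint A_eq x = b_eq
b_eq = np.array([0.0])

x = cp.Variable(2)
# The constant offset r does not affect the minimizer and is omitted.
objective = cp.Minimize(cp.quad_form(x, H) + g @ x)
constraints = [A @ x <= b, A_eq @ x == b_eq]
cp.Problem(objective, constraints).solve()
print(x.value)
```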

Definition 2.2 (Linear Matrix Inequality). A Linear Matrix Inequality (LMI) is a relationship of the form

$$F(x) = F_0 + \sum_{i=1}^{m} F_i x_i \succ 0$$

where $x \in \mathbb{R}^m$ is the variable and $F_i = F_i^\top$ are given constant matrices.

The inequality symbol ($\succ$) is used in this context to denote that the matrix is positive definite, i.e. $M \succ 0 \Leftrightarrow x^\top M x > 0 \;\; \forall x \neq 0$. A generalization of LMIs which, although often encountered, is not as attractive as LMIs (for reasons to be discussed in what follows) is the Bilinear Matrix Inequality.

Definition 2.3 (Bilinear Matrix Inequality). A Bilinear Matrix Inequality (BMI) is a relationship of the form

$$F(x, y) = F_0 + \sum_{i=1}^{m} F_i x_i + \sum_{j=1}^{n} G_j y_j + \sum_{i=1}^{m} \sum_{j=1}^{n} H_{ij} x_i y_j \succ 0$$

where $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$ are the variables and $G_j = G_j^\top$, $H_{ij} = H_{ij}^\top$ are given constant matrices.

LMIs, and to a somewhat lesser extent BMIs, are routinely used to express stability and performance specifications for controller synthesis. Indeed, the variables in the LMI (BMI) are related to the controller, and finding a solution yields a controller that meets the required specifications. A solution in the LMI case is found by solving a semidefinite program.
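A standard example (from the general LMI literature, not specific to this thesis) is the continuous-time Lyapunov inequality: the origin of $\dot{x} = Ax$ is asymptotically stable if there exists $P = P^\top$ such that

$$P \succ 0, \qquad A^\top P + P A \prec 0.$$

Both conditions are affine in the entries of $P$, so stacking the free entries of $P$ into a vector $x$ puts them exactly in the form $F(x) = F_0 + \sum_i F_i x_i \succ 0$ of Definition 2.2 (with $F_0 = 0$).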

Definition 2.4 (Semidefinite Program). A Semidefinite Program (SDP) is an optimization problem of the form

$$\min_x \; g^\top x \quad \text{subject to} \quad F(x) = F_0 + \sum_{i=1}^{m} F_i x_i \succ 0.$$

An important characteristic of SDPs is that they are convex, which guarantees that any local solution is also a global one (provided the problem is feasible). Examining the structure of the problem, it is clear that BMI constraints cannot be included in an SDP, and in that case other (often heuristic) methods have to be used to solve the optimization problem. Given the central role played by LMIs in some of the derivations of this thesis, and the fact that SDPs encompass a much richer class of optimization problems (indeed QPs are a special case), SDPs with LMI constraints are henceforth referred to simply as LMI problems; correspondingly, optimization problems subject to BMI constraints are referred to as BMI problems.
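As a small, self-contained sketch (again assuming the cvxpy modelling package; the system matrix and the objective are illustrative placeholders), the Lyapunov LMI mentioned above can be posed as an LMI problem and handed to an SDP solver:

```python
import numpy as np
import cvxpy as cp

# Illustrative Hurwitz system matrix for x_dot = A x (placeholder data)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]
eps = 1e-6  # small margin to approximate the strict inequalities numerically

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                  # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)]   # A^T P + P A < 0
# Any linear objective works; trace(P) plays the role of g^T x in Definition 2.4.
problem = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
problem.solve()
print(P.value)
```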

When faced with a BMI problem with a certain structure, there are several tools that can be used to convexify the problem, i.e. to turn the BMI into an LMI. The simplest is a so-called linearizing change of variables, which renders the problem linear by introducing a new variable $z_k = x_i y_j$. Another very useful tool is presented in the following lemma; worked examples of both tools are given after it.

Lemma 2.1 (Schur Complement). The matrix inequalities

$$S(x) \succ 0, \qquad Q(x) - R(x)^\top S(x)^{-1} R(x) \succ 0,$$

where $Q(x)$, $R(x)$, $S(x)$ are affine functions of $x$, are equivalent to the LMI

$$\begin{bmatrix} Q(x) & R(x)^\top \\ R(x) & S(x) \end{bmatrix} \succ 0.$$
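Two brief examples (standard in the LMI literature, included here only for illustration) show how these tools are used. For the linearizing change of variables, consider an observer design problem: the error dynamics $\dot{e} = (A - LC)e$ are certified stable by

$$(A - LC)^\top P + P(A - LC) \prec 0, \qquad P \succ 0,$$

which is bilinear in the variables $P$ and $L$ because of the product $PL$; substituting $Y = PL$ yields the LMI

$$A^\top P + P A - C^\top Y^\top - Y C \prec 0$$

in $(P, Y)$, and the observer gain is recovered afterwards as $L = P^{-1} Y$. For the Schur complement, requiring a given point $x$ to lie strictly inside the ellipsoid $\{z : z^\top P^{-1} z < 1\}$ involves the inverse of the decision variable $P$; applying Lemma 2.1 with $Q = 1$, $R = x$ and $S = P$ shows that this is equivalent to the LMI

$$\begin{bmatrix} 1 & x^\top \\ x & P \end{bmatrix} \succ 0.$$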

2.1.2 Stability

This section lists several definitions and theorems useful for establishing stability of equilibria of nonlinear systems. These are standard in the nonlinear control literature, e.g. [67]. Consider the dynamic system

$$\dot{x} = f(x).$$

In what follows it is assumed that the equilibrium of interest (i.e. the one for which stability is to be analyzed) has been appropriately shifted to the origin so that $\bar{x} = 0$.

Definition 2.5 (Lyapunov Stability). The equilibrium point $\bar{x} = 0$ is said to be stable (in the sense of Lyapunov) if $\forall \epsilon > 0$, $\exists \delta(\epsilon) > 0$ such that

$$\|x(0)\| < \delta \implies \|x(t)\| < \epsilon, \quad \forall t \ge 0.$$

The previous definition is a somewhat weak statement of stability, as it only implies that the state remains within a neighborhood of the equilibrium, not necessarily that it converges to it. A stronger statement is to require that the trajectory not only remains within a neighborhood of the equilibrium point, but also ultimately converges to it.

Definition 2.6 (Asymptotic Stability). The equilibrium point $\bar{x} = 0$ is said to be asymptotically stable if it is stable according to Definition 2.5 and

$$\exists \delta > 0: \; \|x(0)\| < \delta \implies \lim_{t \to \infty} x(t) = 0.$$

Lyapunov's method is the most widely used approach to stability analysis for nonlinear systems, as well as for time-varying and uncertain linear systems. The reason for this is that stability can be readily characterized by the existence of an energy-like function that fulfills certain conditions.

Definition 2.7 (Lyapunov function). A function $V(x): \mathbb{R}^n \to \mathbb{R}$ is called a Lyapunov function if $\exists r > 0$ such that:

• $V(0) = 0$, $V(x) > 0$ for $0 < \|x\| < r$

• $\dot{V}(x) = \nabla V \, \frac{dx}{dt} = \nabla V f(x) \le 0$ for $0 < \|x\| < r$

Note that both conditions are local in nature, as they must hold only within a ball of radius $r$. The connection between the existence of a Lyapunov function and stability as defined above is given by the following theorem.

Theorem 2.1 (Lyapunov Stability Theorem). The equilibrium $\bar{x} = 0$ is stable if there exists a Lyapunov function for the system. If, in addition, $\dot{V}(x) < 0$ for $0 < \|x\| < r$, then the equilibrium is (locally) asymptotically stable.
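A standard illustration (not specific to this thesis) is the linear system $\dot{x} = Ax$ with the quadratic candidate $V(x) = x^\top P x$, $P = P^\top \succ 0$. Then

$$\dot{V}(x) = \dot{x}^\top P x + x^\top P \dot{x} = x^\top \left(A^\top P + P A\right) x,$$

so if $P$ satisfies the Lyapunov LMI $A^\top P + P A \prec 0$ from Section 2.1.1, Theorem 2.1 gives (in this case global) asymptotic stability of the origin.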

Remark 2.1. The definitions and results in this section are not the most general; stability in this context is characterized as what is often referred to as uniform stability. The difference is that the general definitions can depend explicitly on time, while their uniform counterparts do not and are therefore more restrictive. Likewise, the Lyapunov function can depend explicitly on time; this case is not discussed in this context.

All the definitions above assume a continuous-time system. They carry over to a discrete-time system of the form $x_{k+1} = \tilde{f}(x_k)$; however, the stability conditions on the Lyapunov function need to be redefined.

Definition 2.8 (Lyapunov function (discrete-time)). A function $V(x): \mathbb{R}^n \to \mathbb{R}$ is called a Lyapunov function if $\exists r > 0$ such that:

• $V(0) = 0$, $V(x) > 0$ for $0 < \|x\| < r$

• $\Delta V(x) = V(\tilde{f}(x)) - V(x) \le 0$ for $0 < \|x\| < r$

The Lyapunov stability theorem holds for the discrete-time case as well, with the only modification being that for asymptotic stability, the Lyapunov difference $\Delta V$ (as opposed to its time derivative) needs to be strictly negative.
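The discrete-time analogue of the example above (again standard material, stated here only for completeness) is $x_{k+1} = \tilde{A} x_k$ with $V(x) = x^\top P x$: the Lyapunov difference is

$$\Delta V(x) = x^\top \left(\tilde{A}^\top P \tilde{A} - P\right) x,$$

so $P \succ 0$ with $\tilde{A}^\top P \tilde{A} - P \prec 0$ certifies asymptotic stability; by Lemma 2.1 this condition can equivalently be written as the LMI $\begin{bmatrix} P & \tilde{A}^\top P \\ P \tilde{A} & P \end{bmatrix} \succ 0$.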

2.1.3 Set Theory

Set theoretic results are frequently used in the context of MPC. For this reason, some important definitions are listed below ([21]).

Definition 2.9 (Convex set). A set $S$ is said to be convex if

$$\lambda x_1 + (1 - \lambda) x_2 \in S \quad \forall x_1, x_2 \in S, \; 0 \le \lambda \le 1.$$

Definition 2.10 (Ellipsoid). An ellipsoidal set centered at $\bar{x}$ is defined by the inequality

$$\mathcal{E} = \left\{ x : (x - \bar{x})^\top W (x - \bar{x}) \le 1 \right\}, \qquad W = W^\top \succ 0.$$

An important characteristic of ellipsoidal sets is that their volume is proportional to $\det(W^{-1/2})$; this fact is used to find maximum-volume ellipsoids that fulfill certain constraints.

Definition 2.11 (Sublevel set). A sublevel set of a function $f(x): \mathbb{R}^n \to \mathbb{R}$ is the set given by

$$L_\alpha = \{x : f(x) \le \alpha\}.$$

All sublevel sets of a convex function are convex; in particular, a sublevel set of a positive definite quadratic form $x^\top P x$ is an ellipsoid.

Definition 2.12 (Convex Hull). The convex hull of a collection of vectors $x_i$ is the set defined as

$$\mathrm{Co}(x_1, x_2, \ldots, x_n) = \left\{ \sum_{i=1}^{n} \lambda_i x_i : \sum_{i=1}^{n} \lambda_i = 1, \; \lambda_i \ge 0 \; \forall i \right\}.$$

Definition 2.13 (Invariant set). A subset $\mathcal{X}$ of the state space is said to be positively invariant with respect to the dynamic system $x(k+1) = f(x(k))$ if $f(x(k)) \in \mathcal{X}$, $\forall x(k) \in \mathcal{X}$.

Intuitively, once a trajectory of the system π‘₯(π‘˜ +1) = 𝑓(π‘₯(π‘˜)) enters an invariant set, it will never leave it.
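Combining the notions above (a standard construction, not specific to this thesis): if $V(x) = x^\top P x$ is a Lyapunov function for $x(k+1) = f(x(k))$ on a sublevel set $L_\alpha = \{x : x^\top P x \le \alpha\}$, then $V(f(x)) \le V(x) \le \alpha$ for all $x \in L_\alpha$, so the ellipsoidal set $L_\alpha$ is positively invariant. Ellipsoidal invariant sets of this type are commonly used as terminal constraint sets in MPC, which is one of the reasons these set-theoretic notions appear throughout the thesis.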
