

3.1.1 State Space Manipulations

This section briefly reviews elementary state space manipulations which are used in model order reduction. For an LTI change of state space coordinates ξ = T x with a nonsingular transformation matrix T ∈ ℝ^(nx×nx), the LPV state space representation (3.2) becomes

ξ̇ = T A(ρ) T⁻¹ ξ + T B(ρ) u
y = C(ρ) T⁻¹ ξ + D(ρ) u. (3.3)

The important property of the transformation is that, due to time invariance, ξ = T x implies ξ̇ = T ẋ. The use of an LPV transformation ξ = T(ρ) x to change the state space coordinates, on the other hand, leads to ξ̇ = T ẋ + Ṫ x. It hence introduces an explicit dependence on the parameter variation rate into the differential equation that governs the dynamics of the transformed system, i. e.,

ξ̇ = (T(ρ) A(ρ) T⁻¹(ρ) + Ṫ(ρ) T⁻¹(ρ)) ξ + T(ρ) B(ρ) u. (3.4)

This additional rate dependence is a realization artifact, attributed to expressing the state space model in parameter-dependent coordinates: Clearly, starting from the rate-dependent model (3.4), the transformation x = T⁻¹(ρ) ξ leads back to the rate-independent realization (3.2). In fact, any affine rate dependence can be removed through a suitable transformation [Wood 1995, p. 144]: If Ā(ρ, ρ̇) = Ā₀(ρ) + Σ_{i=1}^{nρ} Āᵢ(ρ) ρ̇ᵢ and a parameter-dependent transformation T̄(ρ) is performed, the resulting state space matrix becomes

Â(ρ, ρ̇) = T̄(ρ) Ā₀(ρ) T̄⁻¹(ρ) + Σ_{i=1}^{nρ} ρ̇ᵢ (T̄(ρ) Āᵢ(ρ) + ∂T̄(ρ)/∂ρᵢ) T̄⁻¹(ρ).

Dependence on the rate ρ̇ᵢ is therefore eliminated if the transformation T̄(ρ) is selected to satisfy ∂T̄(ρ)/∂ρᵢ = −T̄(ρ) Āᵢ(ρ). Unfortunately, the construction of a transformation that satisfies such a requirement is in practice not easy. The only other possibility is then to treat ρ̇ as a newly introduced independent parameter vector, which effectively doubles the number of scheduling parameters and hence increases model complexity.
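The invariance of the input-output behavior under an LTI change of coordinates can be checked numerically. The following is a minimal sketch with arbitrary illustrative matrices (not taken from the text): it verifies that a similarity transformation preserves the trace and determinant of the state matrix, the Markov parameter C B, and the steady-state gain.

```python
# Sketch with assumed illustrative matrices: an LTI change of coordinates
# xi = T x leaves the input-output behavior of the system unchanged.
import numpy as np

rng = np.random.default_rng(0)
nx, nu, ny = 4, 2, 1
A = rng.standard_normal((nx, nx)) - 3.0 * np.eye(nx)  # shifted for stability
B = rng.standard_normal((nx, nu))
C = rng.standard_normal((ny, nx))
D = rng.standard_normal((ny, nu))
T = rng.standard_normal((nx, nx))  # nonsingular with probability one

# Transformed realization: xi_dot = T A T^-1 xi + T B u, y = C T^-1 xi + D u
Ti = np.linalg.inv(T)
At, Bt, Ct = T @ A @ Ti, T @ B, C @ Ti

# Similarity invariants of the state matrix
assert np.isclose(np.trace(A), np.trace(At))
assert np.isclose(np.linalg.det(A), np.linalg.det(At))
# First Markov parameter C B is invariant: (C T^-1)(T B) = C B
assert np.allclose(Ct @ Bt, C @ B)
# Steady-state gain D - C A^-1 B is invariant as well
G0 = D - C @ np.linalg.solve(A, B)
G0t = D - Ct @ np.linalg.solve(At, Bt)
assert np.allclose(G0, G0t)
```

Because the transformation is time-invariant here, no rate-dependent term appears; repeating the check with a parameter-dependent T(ρ) would require the additional Ṫ T⁻¹ term of Equation (3.4).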

The purpose of changing the basis of the state space, either by LTI or LPV transformations, is to bring the dynamic system into a form that permits a partitioning

ξ̇₁ = A₁₁ ξ₁ + A₁₂ ξ₂ + B₁ u
ξ̇₂ = A₂₁ ξ₁ + A₂₂ ξ₂ + B₂ u
y = C₁ ξ₁ + C₂ ξ₂ + D u. (3.5)

In this form, ξ₁ represents the subset of state variables that is to be retained in the reduced-order model while ξ₂ represents state variables that are to be removed. Given the partitioning (3.5), there are essentially two different ways of removing ξ₂: truncation and residualization.¹

Definition 3.1 (Residualization [Liu & Anderson 1989]). Residualization, also known as singular perturbation approximation, refers to setting ξ̇₂ = 0, solving for ξ₂, and substituting the resulting expression into Equation (3.5). The reduced-order model is

ξ̇₁ = (A₁₁ − A₁₂ A₂₂⁻¹ A₂₁) ξ₁ + (B₁ − A₁₂ A₂₂⁻¹ B₂) u
y = (C₁ − C₂ A₂₂⁻¹ A₂₁) ξ₁ + (D − C₂ A₂₂⁻¹ B₂) u. (3.6) N

Residualization preserves the steady-state gain D − C A⁻¹ B of the original system and hence retains accuracy “at low frequencies”. Residualized state variables ξ₂ can be interpreted to immediately attain their steady-state values, which is a good approximation when the corresponding dynamics are very fast compared to those of the state variables ξ₁ and hence transients are negligible. A classical example for residualization is to remove the state variables that represent pitch rate and angle of attack from an aircraft model that is concerned with the slow altitude and forward velocity oscillation known as phugoid.

Definition 3.2 (Truncation [e. g., Moore 1981]). Truncation refers to simply discarding ξ₂ from the dynamic system (3.5), i. e., the reduced-order model is

ξ̇₁ = A₁₁ ξ₁ + B₁ u
y = C₁ ξ₁ + D u. (3.7) N

Truncation exactly preserves the feedthrough gain D and hence the truncated model equals the full-order model at infinite frequency. Model reduction by truncation is therefore preferred when accuracy of the reduced-order model “at high frequencies” is required. The truncated state variables ξ₂ can be interpreted to remain at the initial value zero for all times, which is a good approximation when the corresponding dynamics are very slow compared to those of the state variables ξ₁ and hence ξ₂ is almost constant during changes in ξ₁. A classical example for truncation is to remove the state variables that represent altitude and forward velocity from an aircraft model that is concerned with short period dynamics, i. e., pitch motion.

¹In fact, both methods can be interpreted as the extreme cases of a generalized singular perturbation approximation that seeks to approximate a dynamic system such that its frequency response gain is exactly preserved at a specified frequency σ₀ [Fernando & Nicholson 1982]. Residualization corresponds to σ₀ = 0 and truncation to σ₀ → ∞. This perspective is intimately related to the approximation of dynamic systems by moment matching [Antoulas 2005, Cha. 11].
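The complementary preservation properties of the two methods can be verified directly. The following sketch uses small assumed matrices (not from the text) for a partitioned system of the form (3.5) and checks that residualization (3.6) reproduces the steady-state gain of the full model exactly, while truncation (3.7) reproduces the feedthrough.

```python
# Sketch with assumed values: truncation vs. residualization on a
# partitioned stable LTI system as in Eq. (3.5).
import numpy as np

A11 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A12 = np.array([[0.3], [0.1]])
A21 = np.array([[0.2, -0.4]])
A22 = np.array([[-50.0]])  # fast subsystem: residualization is accurate
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.5]])
C1 = np.array([[1.0, 1.0]])
C2 = np.array([[0.2]])
D = np.array([[0.0]])

A = np.block([[A11, A12], [A21, A22]])
B = np.vstack([B1, B2])
C = np.hstack([C1, C2])

# Residualization (3.6): set xi2_dot = 0 and eliminate xi2
A22i = np.linalg.inv(A22)
Ar = A11 - A12 @ A22i @ A21
Br = B1 - A12 @ A22i @ B2
Cr = C1 - C2 @ A22i @ A21
Dr = D - C2 @ A22i @ B2

# Truncation (3.7): simply discard xi2
At, Bt, Ct, Dt = A11, B1, C1, D

dc_full = D - C @ np.linalg.solve(A, B)  # steady-state gain of full model
dc_res = Dr - Cr @ np.linalg.solve(Ar, Br)
assert np.allclose(dc_full, dc_res)  # residualization keeps the DC gain
assert np.allclose(Dt, D)            # truncation keeps the feedthrough
```

The DC-gain match of the residualized model is exact by construction, not merely approximate, which is the algebraic content of the remark following Definition 3.1.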

Another perspective on the problem of removing unwanted state variables is provided by the projection framework [e. g., de Villemagne & Skelton 1987, Saad 2000, Cha. 5].

Definition 3.3 (Projection). A projection is a linear operation ℝ^nx → V ⊂ ℝ^nx defined by a matrix

Π = V (Wᵀ V)⁻¹ Wᵀ (3.8)

with V ∈ ℝ^(nx×nz), W ∈ ℝ^(nx×nz), nx > nz, and rank(Wᵀ V) = nz. A matrix Π ∈ ℝ^(nx×nx) represents a projection if and only if it is idempotent, i. e., Π = Π². A projection is completely characterized by its range space, span(Π) = span(V), and its nullspace, ker(Π) = span(Πᵀ)⊥ = span(W)⊥. The range space of a projection

V := span(Π) = span(V) (3.9)

is called basis space. The subspace orthogonal to a projection’s nullspace

W := ker(Π)⊥ = span(W) (3.10)

is called test space. N

The basic facts associated with Definition 3.3 are easy to prove by replacing V and W with their respective thin QR-factorizations.² A vector space is said to be projected by Π along the orthogonal complement of the subspace spanned by the columns of W and onto the subspace spanned by the columns of V.

Given V, W, and a point x ∈ ℝ^nx, the projection Π x provides the unique approximation to x in V with zero error within W. This is illustrated in the following. As the approximation lies in the span of V, it can be written as

xapprox := V z (3.11a)

with some coefficient vector z ∈ ℝ^nz. The basis space V thus literally forms a basis for the approximation. The component of x that is eliminated by the projection, i. e., the approximation error or residual

r := x − xapprox = x − V z (3.11b)

is in the nullspace of Π. It is hence orthogonal to W, which can be expressed as

Wᵀ r = 0. (3.11c)

²A thin QR-factorization is the unique factorization X = QR of a matrix X ∈ ℝ^(n×m) with n > m into a matrix Q ∈ ℝ^(n×m) with orthonormal columns and an upper-triangular matrix R ∈ ℝ^(m×m) with positive diagonal entries [e. g., Golub & Van Loan 2013, Theorem 5.2.3, p. 248].

The test space W thus determines the measure of error and “tests” the approximation.

Substituting Equation (3.11b) into Equation (3.11c) and solving forz results in

z = (Wᵀ V)⁻¹ Wᵀ x. (3.11d)

The inverse in Equation (3.11d) exists according to Definition 3.3 such that the solution is unique. Substituting Equation (3.11d) into Equation (3.11a) finally shows that

xapprox = V (Wᵀ V)⁻¹ Wᵀ x = Π x (3.11e)

and from Equation (3.11b) it further follows that

r = (I − Π) x. (3.11f)
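The derivation (3.11a)-(3.11f) can be exercised numerically. The sketch below uses random illustrative V and W (any full-rank choice works with probability one) and checks idempotence of Π, the orthogonality constraint Wᵀ r = 0, and that span(V) is left invariant.

```python
# Sketch with illustrative matrices: properties of the oblique
# projection Pi = V (W^T V)^-1 W^T from Eq. (3.8).
import numpy as np

rng = np.random.default_rng(1)
nx, nz = 6, 2
V = rng.standard_normal((nx, nz))
W = rng.standard_normal((nx, nz))  # rank(W^T V) = nz with probability one

Pi = V @ np.linalg.inv(W.T @ V) @ W.T

# Pi is idempotent: Pi^2 = Pi
assert np.allclose(Pi @ Pi, Pi)

x = rng.standard_normal(nx)
x_approx = Pi @ x                  # = V z with z = (W^T V)^-1 W^T x
r = x - x_approx                   # residual, Eq. (3.11b)
assert np.allclose(W.T @ r, 0.0)   # orthogonality to W, Eq. (3.11c)
assert np.allclose(Pi @ V, V)      # span(V) is the range of Pi
```

Note that Π is generally not symmetric here; symmetry would make it the orthogonal projection of Definition 3.4.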

If Π = Πᵀ, then Π is called an orthogonal projection since its nullspace is orthogonal to its range space. In this case, basis and test space coincide, i. e., W = V.

Definition 3.4 (Orthogonal Projection). An orthogonal projection is a linear operation defined by a matrix

Π = V (Vᵀ V)⁻¹ Vᵀ (3.12)

with V ∈ ℝ^(nx×nz), nx > nz, and rank(Vᵀ V) = nz. It is completely characterized by its range space span(Π) = ker(Π)⊥ = span(V). N

The approximation error of an orthogonal projection is orthogonal to the approximation, i. e., Vᵀ(x − V z) = 0. The unique solution to this equation is the least squares solution z = (Vᵀ V)⁻¹ Vᵀ x. To make a clear distinction, the general projection of Definition 3.3 is also referred to as an oblique projection.

The following theorem simplifies the treatment in this chapter.

Theorem 3.1 ([de Villemagne & Skelton 1987, Corollary 2.1]). Any projection can be parameterized by V ∈ ℝ^(nx×nz) and a nonsingular square matrix S ∈ ℝ^(nx×nx) as

Π = V (Vᵀ S V)⁻¹ Vᵀ S, where Wᵀ = (Vᵀ S V)⁻¹ Vᵀ S. (3.13)

Proof. de Villemagne & Skelton [1987] provide a proof based on generalized inverses.

Any W constructed in this way is biorthogonal to V, i. e., Wᵀ V = I_nz. Thus, biorthogonality of V and W can always be assumed without loss of generality.³
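The parameterization (3.13) and the resulting biorthogonality are easy to confirm numerically; the sketch below uses arbitrary illustrative V and S (nonsingular with probability one).

```python
# Sketch with illustrative matrices: the parameterization of Theorem 3.1
# yields W^T = (V^T S V)^-1 V^T S with W^T V = I.
import numpy as np

rng = np.random.default_rng(2)
nx, nz = 5, 2
V = rng.standard_normal((nx, nz))
S = rng.standard_normal((nx, nx))  # nonsingular with probability one

Wt = np.linalg.inv(V.T @ S @ V) @ V.T @ S  # W^T as in Eq. (3.13)
Pi = V @ Wt                                # Pi = V (V^T S V)^-1 V^T S

assert np.allclose(Wt @ V, np.eye(nz))     # biorthogonality: W^T V = I
assert np.allclose(Pi @ Pi, Pi)            # Pi is indeed a projection
```

Choosing S = I recovers the orthogonal projection (3.12), which is one way to see that Definition 3.4 is a special case of the parameterization.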

Model order reduction requires the approximation of a dynamic system given by a differential equation, rather than an approximation for a single point in the state space.

³Biorthogonality can also be achieved by reassigning Wᵀ ← (Wᵀ V)⁻¹ Wᵀ in Definition 3.3.

The goal is thus to find an approximate solution xapprox := V z to the differential equation

ẋ = A(ρ) x + B(ρ) u. (3.14)

The residual of this approximation is

r := V ż − (A(ρ) V z + B(ρ) u). (3.15)

Restricting the residual (3.15) again to be orthogonal to the test space W leads to

Wᵀ (V ż − (A(ρ) V z + B(ρ) u)) = 0. (3.16)

The unique solution is

ż = (Wᵀ V)⁻¹ Wᵀ A(ρ) V z + (Wᵀ V)⁻¹ Wᵀ B(ρ) u. (3.17)

The approximation xapprox = V z where z is the solution to Equation (3.17) is known as Petrov-Galerkin approximation [e. g., Antoulas 2005, Sec. 9.1.2]. Augmenting Equation (3.17) with an output equation y = C(ρ) xapprox + D(ρ) u then yields the reduced-order model of Definition 3.5. In view of Theorem 3.1, the following simpler definition is used.

Definition 3.5 (Petrov-Galerkin Approximation). Let an LPV system with state space representation ẋ = A(ρ) x + B(ρ) u, y = C(ρ) x + D(ρ) u, and matrices V ∈ ℝ^(nx×nz), W ∈ ℝ^(nx×nz) with Wᵀ V = I_nz be given. The reduced-order model from Petrov-Galerkin approximation with basis space V = span(V) and test space W = span(W) is

ż = Wᵀ A(ρ) V z + Wᵀ B(ρ) u
y = C(ρ) V z + D(ρ) u. (3.18) N

Comparing the approximation (3.18) with the reduced-order model (3.7), which is obtained through transformation and subsequent truncation, it is clear that both are equivalent with V = T⁻¹ [I_nz 0_{nz×(nx−nz)}]ᵀ and Wᵀ = [I_nz 0_{nz×(nx−nz)}] T. Truncation itself is thus an orthogonal projection Π_truncation = [I_nz 0]ᵀ [I_nz 0]. A coordinate transformation followed by truncation is an oblique projection. Residualization, on the other hand, cannot be expressed as a projection, as is immediately apparent from the modified D-matrix of the system (3.6).
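A Petrov-Galerkin reduced model for a frozen parameter value is simply a matter of forming Wᵀ A V, Wᵀ B, C V. The sketch below (assumed data, not from the text) builds a biorthogonal pair via Theorem 3.1 with S = I, forms the reduced matrices, and confirms the truncation special case V = [I 0]ᵀ, Wᵀ = [I 0].

```python
# Sketch with assumed data: Petrov-Galerkin reduction (Eq. (3.18)) of an
# LTI system at a frozen parameter, with biorthogonal V, W (W^T V = I).
import numpy as np

rng = np.random.default_rng(3)
nx, nz, nu, ny = 6, 2, 1, 1
A = rng.standard_normal((nx, nx)) - 4.0 * np.eye(nx)
B = rng.standard_normal((nx, nu))
C = rng.standard_normal((ny, nx))
D = np.zeros((ny, nu))

V = rng.standard_normal((nx, nz))
S = np.eye(nx)                              # orthogonal-projection case
Wt = np.linalg.inv(V.T @ S @ V) @ V.T @ S   # biorthogonal: Wt @ V = I

# Reduced-order model matrices as in Eq. (3.18)
Ahat, Bhat, Chat, Dhat = Wt @ A @ V, Wt @ B, C @ V, D
assert Ahat.shape == (nz, nz)
assert np.allclose(Wt @ V, np.eye(nz))

# Truncation as a special case: V = [I 0]^T, W^T = [I 0] recovers A11
Vt = np.vstack([np.eye(nz), np.zeros((nx - nz, nz))])
assert np.allclose(Vt.T @ A @ Vt, A[:nz, :nz])
```

The choice of V here is arbitrary for illustration; in practice V and W are computed by a reduction method such as balanced truncation or moment matching.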

For LPV systems, it seems natural to let V and W also depend on the scheduling parameter, just as in the case of parameter-varying state transformations. Repeating the Petrov-Galerkin approximation with parameter-dependent matrices V(ρ(t)) and W(ρ(t)) yields

xapprox := V(ρ) z, ẋapprox = V(ρ) ż + V̇(ρ) z. (3.19)

The residual of this approximation is hence

r = V(ρ) ż + V̇(ρ) z − (A(ρ) V(ρ) z + B(ρ) u). (3.20)

The reduced-order LPV model obtained by enforcing the orthogonality constraint Wᵀ(ρ) r = 0 is

ż = Wᵀ(ρ) (A(ρ) V(ρ) − V̇(ρ)) z + Wᵀ(ρ) B(ρ) u
y = C(ρ) V(ρ) z + D(ρ) u. (3.21)

It explicitly depends on the parameter rate ρ̇ in addition to the original parameter ρ.