
We begin with a review of tensor theory as well as linear control theory in Chapter 2. Regarding the tensor-theoretic ideas, we present the most important properties of the Kronecker product and the closely related operation of vectorization of a matrix.

Moreover, we define the tensor rank and matricizations of vectors, which are important in Chapter 4 and Chapter 5. For linear control systems, we give an introduction to the previously mentioned concepts of reachability, observability, and stability. Although we restrict ourselves to the linear continuous time-invariant case, we point out differences that occur in the discrete-time setting. We conclude the chapter with an explanation of projection-based model reduction for linear systems, including the special cases of interpolation and balancing, respectively.

In Chapter 3, we focus on model reduction of linear control systems. Due to its importance for this thesis, we discuss the problem of $\mathcal{H}_2$-optimal model reduction in detail and state different optimality conditions. Subsequently, we derive new results concerning low rank approximations of large-scale linear matrix equations. In particular, we pick up an idea from Riemannian optimization, introduced in [125], and show how to achieve the same results by means of the concept of rational interpolation. We differentiate between

the symmetric and the unsymmetric case. While in the first case the goal is to minimize the canonical energy norm induced by the Lyapunov operator, for unsymmetric matrices we extend the ideas to a more general setting which reappears in Chapter 4.

Chapter 4 reflects the main contributions of this work. Here, we deal with the problem of MOR for bilinear control systems. After an introduction to the basic theory for this class of systems, we extend the ideas from $\mathcal{H}_2$-optimal model reduction for linear systems to the bilinear case. We derive new abstract interpolatory optimality conditions that we show to be equivalent to existing ones based on generalized linear matrix equations.

We further propose two iterative algorithms that are shown, both theoretically and numerically, to outperform other state-of-the-art techniques with respect to the bilinear $\mathcal{H}_2$-norm. In the second part of Chapter 4, we discuss low rank approximation methods for generalized matrix equations arising in the method of balanced truncation for bilinear control systems. Besides a theoretical explanation for the often observed fast singular value decay of the solution matrix, we investigate the generalization of several successful low rank approximation methods known for the case of linear control systems.

In Chapter 5, we discuss a recently introduced method, see [72], for more general nonlinear control systems. The fundamentals of this approach again have their origin in the idea of rational interpolation by projection. Here, the new contributions are, on the one hand, an efficient construction of a reduced-order model and, on the other hand, the development of a two-sided projection method that theoretically improves the existing technique. We further extensively test the method on several examples arising from the semi-discretization of nonlinear PDEs and compare the results with those obtained by the proper orthogonal decomposition (POD) method, a commonly used method in nonlinear MOR.

We conclude with a summary of the results and an overview of open questions for further research in Chapter 6.

CHAPTER 2

MATHEMATICAL FOUNDATIONS

Contents

2.1 Tensors and matricizations . . . 11
2.2 Linear time-invariant systems . . . 14
2.2.1 The continuous-time case . . . 14
2.2.2 The discrete-time case . . . 23
2.3 Model reduction by projection . . . 24
2.3.1 Interpolation-based model reduction . . . 26
2.3.2 Balancing-based model reduction . . . 27

In this chapter, we collect basic concepts and ideas that we use and assume to be known throughout the rest of this thesis. Most of the tools presented in the first section are well known in the context of matrix and tensor theory and can be found in, e.g., [65, 68, 80, 87]. The mathematical foundations of classical linear control theory are discussed in nearly every textbook, e.g., [78]. For a detailed introduction to the topic of model order reduction, we refer to [3] and the references therein.

2.1 Tensors and matricizations

For what follows, one of the most important operations is the Kronecker product of matrices together with the closely related vec (·)-operator defined as follows.

Definition 2.1.1. ([65, Section 12.1]) Let $X = [x_1, \dots, x_m] \in \mathbb{R}^{n \times m}$ and $Y \in \mathbb{R}^{p \times q}$. Then

$$\operatorname{vec}(X) := \begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix} \in \mathbb{R}^{n \cdot m \times 1}, \qquad X \otimes Y := \begin{bmatrix} x_{11} Y & \dots & x_{1m} Y \\ \vdots & & \vdots \\ x_{n1} Y & \dots & x_{nm} Y \end{bmatrix} \in \mathbb{R}^{n \cdot p \times m \cdot q}.$$
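Both operations are available directly in NumPy; as a minimal sketch (the helper name `vec` is ours, introduced only for illustration):

```python
import numpy as np

def vec(X):
    # Stack the columns of X on top of each other (column-major flattening).
    return X.reshape(-1, 1, order="F")

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # X in R^{3x2} with columns x_1, x_2
Y = np.ones((2, 2))          # Y in R^{2x2}

print(vec(X).shape)          # (6, 1) = (n*m, 1)
print(np.kron(X, Y).shape)   # (6, 4) = (n*p, m*q)
```

Note that `order="F"` is essential: NumPy's default row-major flattening would stack the rows of $X$ instead of its columns.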

From the above definition, one can immediately show the following useful properties, see, e.g., [65, Section 12.1].

Proposition 2.1.1. Let $A \in \mathbb{R}^{n \times m}$, $B \in \mathbb{R}^{p \times q}$, $C \in \mathbb{R}^{m \times r}$, $D \in \mathbb{R}^{q \times s}$ and $E \in \mathbb{R}^{r \times \ell}$. Then it holds

a) $\operatorname{vec}(ACE) = (E^T \otimes A)\operatorname{vec}(C)$,

b) $\operatorname{tr}(AC) = \operatorname{vec}(A^T)^T \operatorname{vec}(C)$,

c) $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$.
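All three identities can be checked numerically; a quick sketch with random matrices (the sizes are chosen so that $AC$ is square, as required by the trace in b)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 3, 4, 3                       # r = n so that AC is square for b)
A = rng.standard_normal((n, m))
C = rng.standard_normal((m, r))
E = rng.standard_normal((r, 5))
B = rng.standard_normal((2, 2))
D = rng.standard_normal((2, 2))

vec = lambda X: X.reshape(-1, 1, order="F")

# a) vec(ACE) = (E^T kron A) vec(C)
assert np.allclose(vec(A @ C @ E), np.kron(E.T, A) @ vec(C))
# b) tr(AC) = vec(A^T)^T vec(C)
assert np.allclose(np.trace(A @ C), (vec(A.T).T @ vec(C)).item())
# c) mixed-product property of the Kronecker product
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```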

We further need some properties of the derivative of the trace operator. If $f(X)$ is a matrix function, let

$$\frac{\partial \operatorname{tr}(f(X))}{\partial X} := \left[ \frac{\partial \operatorname{tr}(f(X))}{\partial X_{ij}} \right]_{ij}$$

denote its derivative with respect to X. From [47], we cite a very useful result on its computation. Note that the first part of the following statement is due to Kleinman and Athans and can be found in [8, 85].

Theorem 2.1.1. Let $f(X)$ be some matrix function. Then

1) (by Kleinman and Athans) if

$$f(X + \Delta X) - f(X) = M(X)\,\Delta X + o(\|\Delta X\|), \quad \|\Delta X\| \to 0,$$

we have

$$\frac{\partial \operatorname{tr}(f(X))}{\partial X} = M^T(X);$$

2) (by Dulov and Andrianova) if

$$f(X + \Delta X) - f(X) = M_1(X)\,\Delta X\,M_2(X) + o(\|\Delta X\|), \quad \|\Delta X\| \to 0,$$

then

$$\frac{\partial \operatorname{tr}(f(X))}{\partial X} = \big(M_2(X)\,M_1(X)\big)^T.$$
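Part 2) can be illustrated with the linear example $f(X) = AXB$: here $f(X + \Delta X) - f(X) = A\,\Delta X\,B$ exactly, so $M_1 = A$, $M_2 = B$, and the theorem predicts that the gradient of $\operatorname{tr}(AXB)$ equals $(BA)^T$. A finite-difference sketch (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

# Theorem, part 2): for f(X) = A X B we have M1 = A, M2 = B,
# hence d tr(f(X)) / dX = (B A)^T.
grad_theory = (B @ A).T

# Entrywise finite-difference approximation of the gradient of tr(A X B).
eps = 1e-6
grad_fd = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        dX = np.zeros_like(X)
        dX[i, j] = eps
        grad_fd[i, j] = (np.trace(A @ (X + dX) @ B)
                         - np.trace(A @ X @ B)) / eps

assert np.allclose(grad_fd, grad_theory, atol=1e-5)
```

Since $\operatorname{tr}(AXB)$ is linear in $X$, the forward difference is exact up to rounding, which is why a loose tolerance already succeeds.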

For later purposes, we introduce a special permutation matrix which simplifies the computation of Kronecker products for certain block matrices.

Proposition 2.1.2. ([20]) Let $A, E \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{n \times n}$, $C \in \mathbb{R}^{n \times m}$, $D \in \mathbb{R}^{m \times n}$. Assume that a permutation matrix $M$ is given as follows

$$M = \begin{bmatrix} I_m \otimes \begin{bmatrix} I_n \\ 0 \end{bmatrix} & I_m \otimes \begin{bmatrix} 0 \\ I_m \end{bmatrix} \end{bmatrix}. \quad (2.1)$$

Then it holds

$$M^T \left( A \otimes \begin{bmatrix} B & C \\ D & E \end{bmatrix} \right) M = \begin{bmatrix} A \otimes B & A \otimes C \\ A \otimes D & A \otimes E \end{bmatrix}.$$
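The identity is easy to verify numerically; the construction of $M$ below follows (2.1), with the two column blocks $I_m \otimes [I_n;\, 0]$ and $I_m \otimes [0;\, I_m]$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 3
A = rng.standard_normal((m, m)); E = rng.standard_normal((m, m))
B = rng.standard_normal((n, n)); C = rng.standard_normal((n, m))
D = rng.standard_normal((m, n))

# Column blocks of the permutation matrix M from (2.1).
S1 = np.vstack([np.eye(n), np.zeros((m, n))])   # (n+m) x n
S2 = np.vstack([np.zeros((n, m)), np.eye(m)])   # (n+m) x m
M = np.hstack([np.kron(np.eye(m), S1), np.kron(np.eye(m), S2)])

P = np.block([[B, C], [D, E]])
lhs = M.T @ np.kron(A, P) @ M
rhs = np.block([[np.kron(A, B), np.kron(A, C)],
                [np.kron(A, D), np.kron(A, E)]])
assert np.allclose(lhs, rhs)
# M is indeed a permutation matrix (orthogonal with 0/1 entries):
assert np.allclose(M.T @ M, np.eye(M.shape[1]))
```

The check works because $(I_m \otimes S_1)^T (A \otimes P)(I_m \otimes S_1) = A \otimes (S_1^T P S_1) = A \otimes B$ by the mixed-product property, and analogously for the other three blocks.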

Furthermore, we constantly make use of the tensor rank of a vectorized matrix.

Definition 2.1.2. ([68, 87]) Let $x = \operatorname{vec}(X) \in \mathbb{R}^{n^2}$. Then the minimal number $k$ s.t.

$$x = \sum_{i=1}^{k} u_i \otimes v_i,$$

where $u_i, v_i \in \mathbb{R}^n$, is called the tensor rank of the vector $x$.

Remark 2.1.1. Due to the properties of the Kronecker product, it is easily seen that the tensor rank of $\operatorname{vec}(X)$ coincides with $\operatorname{rank}(X)$.
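The remark can be illustrated via the SVD: writing $X = \sum_i \sigma_i u_i v_i^T$ and using $\operatorname{vec}(u v^T) = v \otimes u$ gives $\operatorname{vec}(X) = \sum_i \sigma_i\, v_i \otimes u_i$, a Kronecker sum with exactly $\operatorname{rank}(X)$ terms. A small numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 2
# Build a matrix X of rank 2.
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

U, s, Vt = np.linalg.svd(X)
r = int(np.sum(s > 1e-12))            # numerical rank of X
assert r == k

# vec(X) = sum_i s_i * (v_i kron u_i), since vec(u v^T) = v kron u.
x = X.reshape(-1, order="F")
x_sum = sum(s[i] * np.kron(Vt[i], U[:, i]) for i in range(r))
assert np.allclose(x, x_sum)
```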

In recent years, more and more attention has been paid to tensors. Formally, a tensor $\mathcal{H}$ is a vector indexed by a product index set

$$\mathcal{I} = \mathcal{I}_1 \times \dots \times \mathcal{I}_d, \qquad |\mathcal{I}_j| = n_j.$$

Besides the concept of the above-mentioned tensor rank, several tensor decompositions have been discussed in detail in [68, 87, 89]. An important idea in understanding the nature of tensors is to transform them into matrices. For a given tensor $\mathcal{H}$, the corresponding tensor operation is called the $t$-matricization $\mathcal{H}^{(t)}$ and is defined as

$$\mathcal{H}^{(t)} \in \mathbb{R}^{\mathcal{I}_t \times \mathcal{I}_{t'}}, \qquad \mathcal{H}^{(t)}_{(i_\mu)_{\mu \in t},\,(i_\mu)_{\mu \in t'}} := \mathcal{H}_{(i_1, \dots, i_d)}, \qquad t' := \{1, \dots, d\} \setminus t.$$

Since the concept is rather abstract, it might be helpful to consider a simple example.

Due to its importance later on, we restrict ourselves to a 3-tensor. For example, here we can think of the Hessian of a vector-valued function.

Example 2.1.1. For a given 3-tensor $\mathcal{H}_{(i_1,i_2,i_3)}$ with $i_1, i_2, i_3 \in \{1,2\}$, we have the following matricizations:

$$\mathcal{H}^{(1)} = \begin{bmatrix} \mathcal{H}_{(1,1,1)} & \mathcal{H}_{(1,2,1)} & \mathcal{H}_{(1,1,2)} & \mathcal{H}_{(1,2,2)} \\ \mathcal{H}_{(2,1,1)} & \mathcal{H}_{(2,2,1)} & \mathcal{H}_{(2,1,2)} & \mathcal{H}_{(2,2,2)} \end{bmatrix},$$

$$\mathcal{H}^{(2)} = \begin{bmatrix} \mathcal{H}_{(1,1,1)} & \mathcal{H}_{(2,1,1)} & \mathcal{H}_{(1,1,2)} & \mathcal{H}_{(2,1,2)} \\ \mathcal{H}_{(1,2,1)} & \mathcal{H}_{(2,2,1)} & \mathcal{H}_{(1,2,2)} & \mathcal{H}_{(2,2,2)} \end{bmatrix},$$

$$\mathcal{H}^{(3)} = \begin{bmatrix} \mathcal{H}_{(1,1,1)} & \mathcal{H}_{(2,1,1)} & \mathcal{H}_{(1,2,1)} & \mathcal{H}_{(2,2,1)} \\ \mathcal{H}_{(1,1,2)} & \mathcal{H}_{(2,1,2)} & \mathcal{H}_{(1,2,2)} & \mathcal{H}_{(2,2,2)} \end{bmatrix}.$$
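In NumPy, these matricizations can be realized by moving the $t$-th axis to the front and flattening the remaining axes in column-major (Fortran) order, which produces exactly the reverse lexicographic column ordering used above; a sketch (the helper name `matricize` is ours):

```python
import numpy as np

H = np.arange(1, 9, dtype=float).reshape(2, 2, 2)   # a small 3-tensor

def matricize(H, t):
    # Move axis t to the front; flatten the rest in column-major order,
    # so among the remaining indices the leftmost one varies fastest
    # (reverse lexicographic column ordering).
    Ht = np.moveaxis(H, t, 0)
    return Ht.reshape(Ht.shape[0], -1, order="F")

H1, H2, H3 = (matricize(H, t) for t in range(3))
# Row i1 of H1 holds H[i1, :, :] with i2 varying fastest along the columns:
assert np.allclose(H1[0], [H[0,0,0], H[0,1,0], H[0,0,1], H[0,1,1]])
```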

Roughly speaking, for the $t$-matricization, the $t$-th index of the tensor $\mathcal{H}_{(i_1,i_2,i_3)}$ determines the row. The columns are then sorted according to a reverse lexicographic ordering.
