
sampling time is given by T = 0.01 and the reaction value by µ = 15. The minimal stabilizing horizon observed in the MPC closed loop is displayed in Table 3.5. Obviously, the horizon shrinks for large values of |b|. This corresponds to our theoretical considerations for σ. For b ≥ 0 we observe a monotonic decay of the horizon.

This behaviour is reasonable because for b ≥ 0 the overshoot constant does not depend on b. Thus, the decay can again be explained by the behaviour of σ. For small negative values of b the decay rate cannot compensate the higher value of C and we observe a larger horizon. Since the influence of C grows with an increasing regularization parameter, this effect is stronger for large values of λ.

b          -5   -4   -3   -2   -1    0    1    2    3    4
λ = 0.01    2    2    5    6    6    5    4    3    3    2
λ = 0.1     2    5   11   11    9    8    6    5    3    2

Table 3.5: Minimal stabilizing horizon of the reaction-convection-diffusion equation (3.56) depending on the convection parameter b. The parameters used in the numerical MPC simulation are given by µ = 15 and T = 0.01.

Finally, we want to remark that the findings from Section 3.3 concerning stability with the shortest possible horizon are also true for the PDE (3.56): Since we have C → ξ(K) > 1 for λ → 0 we cannot expect to observe stability for N = 2, even for arbitrarily small λ. This is indeed visible in the numerical simulation.

3.6 Method of Nevistic/Primbs

As we have seen in the previous sections, the predicted minimal stabilizing horizons are very conservative. In this section we show how to improve the results concerning suboptimality and stabilizing horizons by taking advantage of special structures of the control problem. In the following we briefly describe the method of Nevistic and Primbs presented in [75]. The method is developed for finite dimensional linear-quadratic control problems without state or control constraints and it uses Riccati difference equations (RDE). However, after an appropriate discretization this technique is also applicable to linear PDEs.

We look at the following finite dimensional linear system
\[
y(n+1) = Ay(n) + Bu(n), \qquad y(0) = y_0, \tag{3.62}
\]
where y(n) ∈ R^n and u(n) ∈ R^m. The quadratic cost functional is given by
\[
V_N(y_0) = \inf_{u(\cdot)} \sum_{k=0}^{N-1} y(k)^\top Q\, y(k) + u(k)^\top R\, u(k). \tag{3.63}
\]


Note that our notation differs from [75]: to be consistent within this thesis we changed the upper limit of the sum from N to N−1. The matrices Q ∈ R^{n×n} and R ∈ R^{m×m} are positive definite, A is invertible and (A, B) is a stabilizable pair.

The presented method is based on knowledge of the optimal value function. For the linear quadratic control problem without constraints this information can be obtained from the Riccati difference equation (RDE)
\[
P_{j+1} = A^\top P_j A - A^\top P_j B \left( B^\top P_j B + R \right)^{-1} B^\top P_j A + Q. \tag{3.64}
\]
If the condition (3.65) from [75], which is formulated in terms of the maximal eigenvalue λ_N of P_N and the quantity β_N introduced below, is satisfied, then the receding horizon policy is stabilizing and V_N is a Lyapunov function for the closed loop system. The proof can be found in [75]. Note that the inverse of the term 1 + … appearing in (3.65) plays the role of the suboptimality degree α_N in our method.
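To make the computation concrete, the following sketch iterates the RDE numerically. It is only an illustration in Python/NumPy (the comparison below was carried out in Matlab); the initial value P_0 = 0, corresponding to a cost functional without terminal weight, and the name riccati_iterates are assumptions of this sketch.

import numpy as np

def riccati_iterates(A, B, Q, R, N):
    # Iterate the RDE (3.64) and return the list [P_0, P_1, ..., P_N].
    # P_0 = 0 is assumed here (no terminal cost in V_N).
    P = np.zeros_like(A)
    iterates = [P]
    for _ in range(N):
        gain = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)   # (B'PB + R)^{-1} B'PA
        P = A.T @ P @ A - A.T @ P @ B @ gain + Q
        iterates.append(P)
    return iterates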

In the next step we apply the presented method to the discretized one-dimensional heat equation with distributed control

\[
y_t(x,t) = y_{xx}(x,t) + \mu y(x,t) + u(x,t) \qquad \text{in } (0,1) \times (0,\infty). \tag{3.66a}
\]
Since the previous results are formulated for finite dimensional discrete time systems, we have to rewrite (3.66) in an appropriate way. In the first step we discretize the spatial variable by central finite differences with M inner grid points and obtain

\[
\dot{y}_h(t) = \tilde{A} y_h(t) + \tilde{B} u_h(t), \tag{3.68}
\]


where Ã and B̃ result from the central finite difference discretization, I_M denotes the identity matrix and h_x = 1/(M + 1) the spatial mesh width.
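A possible assembly of Ã and B̃ is sketched below; homogeneous Dirichlet boundary conditions and the helper name build_heat_matrices are assumptions made purely for illustration.

import numpy as np

def build_heat_matrices(M, mu):
    # Central finite differences for y_xx + mu*y on the M inner grid points,
    # assuming homogeneous Dirichlet boundary conditions.
    hx = 1.0 / (M + 1)                                   # spatial mesh width
    lap = (np.diag(-2.0 * np.ones(M))
           + np.diag(np.ones(M - 1), 1)
           + np.diag(np.ones(M - 1), -1))
    A_tilde = lap / hx**2 + mu * np.eye(M)               # discrete Laplacian plus reaction term
    B_tilde = np.eye(M)                                  # distributed control acts on every grid point
    return A_tilde, B_tilde, hx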

Equation (3.68) is a linear system of ODEs with constant coefficients and the solution exists and is unique. The solution can be formally written as
\[
y_h(t) = e^{t\tilde{A}} y_h^0 + \int_0^t e^{(t-\tau)\tilde{A}} \tilde{B} u_h(\tau)\, d\tau, \tag{3.69}
\]
where e^{·} denotes the matrix exponential. It is remarkable that the solution formula also holds for infinite dimensional systems if Ã is interpreted as the infinitesimal generator of e^{tÃ} in the sense of semigroup theory, cf. [78].

By introducing the sampling time T and taking into account that the control is constant on each sampling interval we obtain
\[
y(n+1) = e^{T\tilde{A}} y(n) + \tilde{A}^{-1}\left(e^{T\tilde{A}} - I_M\right) \tilde{B}\, u(n),
\]
which is exactly of the required form (3.62).
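A minimal sketch of this exact sampled-data discretization, using the matrix exponential (here via SciPy, in analogy to the Matlab routine expm mentioned below); the invertibility of Ã is assumed as stated above, and the function name is hypothetical.

import numpy as np
from scipy.linalg import expm

def discretize_exact(A_tilde, B_tilde, T):
    # Zero-order-hold discretization: y(n+1) = A y(n) + B u(n)
    # with A = e^{T A~} and B = A~^{-1} (e^{T A~} - I) B~  (A~ assumed invertible).
    A = expm(T * A_tilde)
    B = np.linalg.solve(A_tilde, A - np.eye(A_tilde.shape[0])) @ B_tilde
    return A, B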

Furthermore, we have to approximate the L²-norm in the stage cost (3.67). Using the trapezoidal rule we obtain diagonal weight matrices Q and R that are scaled by the mesh width h_x; in particular Q := h_x I_M.
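The corresponding construction can be sketched as follows; the zero boundary values (so that all interior trapezoidal weights equal h_x) and a stage cost of the form ‖y‖² + λ‖u‖² are assumptions of this sketch rather than details taken from (3.67).

import numpy as np

def build_weights(M, lam):
    # Trapezoidal-rule approximation of the L2 stage cost on the inner grid points;
    # zero boundary values and the cost ||y||^2 + lam*||u||^2 are assumed.
    hx = 1.0 / (M + 1)
    Q = hx * np.eye(M)
    R = lam * hx * np.eye(M)
    return Q, R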

In order to compare the results of [75] with our method we implemented the procedure in Matlab. Since the main focus of this section is not on an efficient code but only on the results, we implement the matrices in a straightforward way by using the matrix exponential expm. The required values in (3.65) are the maximal eigenvalue λ_N of P_N and the value β_N, which can be computed as the maximum eigenvalue of P_{N+1} P_N^{-1}, cf. [75]. In order to ensure that the Riccati matrices P_N are correctly computed we also determine the optimal control sequence by
\[
u_N(k) = -\left( B^\top P_{N-(k+1)} B + R \right)^{-1} B^\top P_{N-(k+1)} A\, y(k)
\]
and compare the results with those obtained by the optimal control algorithms presented in Section 4.1.
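The quantities λ_N and β_N and the open-loop control sequence can then be computed as in the following sketch. It reuses riccati_iterates from the sketch above; computing β_N via a generalized eigenvalue problem for the pair (P_{N+1}, P_N) is an implementation choice of this sketch, not prescribed by [75].

import numpy as np
from scipy.linalg import eigvals

def nevistic_primbs_quantities(A, B, Q, R, N):
    # lambda_N = largest eigenvalue of P_N,
    # beta_N   = largest eigenvalue of P_{N+1} P_N^{-1} (generalized eigenproblem).
    P = riccati_iterates(A, B, Q, R, N + 1)
    lam_N = np.max(np.linalg.eigvalsh(P[N]))
    beta_N = np.max(np.real(eigvals(P[N + 1], P[N])))
    return lam_N, beta_N

def open_loop_control(A, B, Q, R, N, y0):
    # u_N(k) = -(B' P_{N-(k+1)} B + R)^{-1} B' P_{N-(k+1)} A y(k),  k = 0, ..., N-1.
    P = riccati_iterates(A, B, Q, R, N)
    y, controls = y0, []
    for k in range(N):
        Pk = P[N - (k + 1)]
        u = -np.linalg.solve(B.T @ Pk @ B + R, B.T @ Pk @ A @ y)
        controls.append(u)
        y = A @ y + B @ u
    return controls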


To get the values for the method presented in Section 3.1 we proceed as already described: insert the K-dependent values σ(K) and C(K) from Theorem 3.4 into the α_N formula (1.35) and determine the value of K which produces the largest α_N.
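This optimization over K can be sketched as follows. The sketch assumes that (1.35) is the usual α_N formula for exponential controllability β(r, n) = Cσⁿ r with γ_i = C(1 − σⁱ)/(1 − σ); the callables sigma_of_K and C_of_K stand in for the expressions from Theorem 3.4 and are placeholders.

import numpy as np

def alpha_N(C, sigma, N):
    # Assumed form of (1.35) for beta(r, n) = C*sigma^n*r, with
    # gamma_i = C*(1 - sigma^i)/(1 - sigma); requires N >= 2.
    gamma = [C * (1.0 - sigma**i) / (1.0 - sigma) for i in range(1, N + 1)]
    prod_g = np.prod([g for g in gamma[1:]])
    prod_gm1 = np.prod([g - 1.0 for g in gamma[1:]])
    return 1.0 - (gamma[-1] - 1.0) * prod_gm1 / (prod_g - prod_gm1)

def best_alpha(sigma_of_K, C_of_K, K_grid, N):
    # Maximize alpha_N over the design parameter K (placeholder callables).
    return max(alpha_N(C_of_K(K), sigma_of_K(K), N) for K in K_grid)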

We consider the linear heat equation (3.66) with parameters T = 0.01, λ = 0.01, M = 100 and µ = 12.

 N    α_N (Nevistic/Primbs)    α_N (Section 3.1)
 7          -0.0291                -0.1483
 8           0.0935                -0.1398
 9           0.2080                -0.1256
10           0.3140                -0.1061
11           0.4108                -0.0816
12           0.4979                -0.052
13           0.5751                -0.019
14           0.6426                 0.017
15           0.7010                 0.057

Table 3.6: Suboptimality degree α_N determined by the method of Nevistic/Primbs [75] and by the method presented in Section 3.1 in dependence of the horizon N.

In Table 3.6 the α_N values for both methods are displayed. Obviously, the values derived from the method presented in Section 3.1 are smaller than those determined by the procedure of Nevistic/Primbs. As a result, the Nevistic/Primbs value becomes positive for a smaller horizon N and, thus, guarantees stability already for a shorter horizon. In this example the minimal stabilizing horizon predicted by the method of Nevistic/Primbs is N = 8 and, thus, much closer to the true horizon N = 4 than the horizon N = 14 predicted by the method from Section 3.1. This outcome is also observable for all considered parameters.

This result is not surprising because the method of this section uses the special structure of the linear quadratic problem to obtain information about the optimal value function. The drawback is that this approach will not work for different control structures or if control or state constraints are incorporated. The strength of the technique presented in Section 3.1 is that it is also applicable to nonlinear and infinite dimensional systems. Furthermore, no knowledge of the optimal control is required. However, the price of this generality is the conservatism of the results.

Remark 3.18

We want to mention that the results concerning the minimal stabilizing horizon can even be improved if we take the exact optimal solution into account. For the one dimensional linear heat equation with distributed control and quadratic cost functional