
3.7 Linear Wave Equation

3.7.2 Numerical results

In this section we give some numerical results in order to illustrate the findings from Theorem 3.20. Since energy arguments play an important role in our reasoning, we choose the energy conserving Newmark scheme for the time discretization, cf. [55].

Further remarks concerning the discretization of the wave equation in the context of optimization can be found in [101] and [36]. The spatial discretization is given by δx = 0.001.
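To make the time discretization concrete, the following is a minimal sketch of one step of the Newmark scheme in its average acceleration form (β = 1/4, γ = 1/2), which conserves the discrete energy for the undamped linear problem M u'' + K u = 0. The function names and the 1D P1 finite element assembly are our own illustration, not the implementation used for the experiments in this section.

```python
import numpy as np

def newmark_step(M, K, u, v, a, dt, beta=0.25, gamma=0.5):
    """One Newmark step (average acceleration: beta=1/4, gamma=1/2),
    energy conserving for the linear undamped system M u'' + K u = 0."""
    # displacement/velocity predictors
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    # solve for the new acceleration
    a_new = np.linalg.solve(M + beta * dt**2 * K, -K @ u_pred)
    # correctors
    u_new = u_pred + beta * dt**2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return u_new, v_new, a_new

def assemble_1d(n, h):
    """P1 finite element mass/stiffness matrices on (0, 1) with
    homogeneous Dirichlet boundary conditions (n interior nodes)."""
    K = (2.0 * np.eye(n)
         + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1)) / h
    M = (4.0 * np.eye(n)
         + np.diag(np.full(n - 1, 1.0), 1)
         + np.diag(np.full(n - 1, 1.0), -1)) * h / 6.0
    return M, K
```

A quick check of the energy conservation is to verify that E = (1/2) v'Mv + (1/2) u'Ku stays constant along the iteration.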

First, we show that the decay rate, derived in Theorem 3.20, holds for Example 3.19.

With the parameters c = L = δ = 1 and T = 0.01 we get σ = e^{−T/3}. In Figure 3.5 the σ-values from the numerical simulation are displayed for the classical energy (blue 'o') and the weighted energy (red 'x'). Obviously, the values of the weighted energy are bounded from above by σ = e^{−0.01/3} (solid black line), while the classical energy attains values of one and is therefore useless for our method.
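The σ-values in Figure 3.5 can be read as ratios of the energy at consecutive MPC sampling instants. The following sketch shows how such ratios would be checked against the theoretical bound; the function and the interpretation E_{k+1} ≤ σ E_k of Theorem 3.20 are our own reading, not taken verbatim from the theorem.

```python
import numpy as np

T = 0.01
sigma_bound = np.exp(-T / 3)   # theoretical decay factor (our reading of Theorem 3.20)

def sigma_values(energies):
    """Ratios E_{k+1}/E_k of the (weighted or classical) energy at
    consecutive MPC sampling instants; for the weighted energy these
    should stay below sigma_bound."""
    E = np.asarray(energies, dtype=float)
    return E[1:] / E[:-1]
```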

For the next example we look at the numerical observations from [55] and [66]. The results indicate that the MPC algorithm performs very well, even for the shortest possible horizon N = 2. This special kind of MPC, where the prediction horizons do not overlap, is also called instantaneous control. Note that the use of the term instantaneous control is not unique. In [53] the authors used this notion for N = 2, but the corresponding optimal control problem is only approximately solved by exactly one gradient step. This technique, in turn, is sometimes also called the one step gradient method, cf. [66].

Figure 3.5: σ-values from the numerical simulation for the classical energy ('o') and the weighted energy ('x'). The solid line (black) displays the theoretical bound of σ derived in Theorem 3.20.

In the following we investigate the stability behaviour of instantaneous control.

From the α-formula (1.35) for N = 2 we obtain the sufficient stability condition α_2 = 1 − (C(1 + σ) − 1)^2 > 0. For Example 3.19 and the constants δ = K = 1 we get the condition T > −3 log((1 − λ)/(1 + λ)). In the simulation we choose λ = 0.001 and T = 0.01 > −3 log(0.999/1.001) ≈ 0.006.
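The numerical check of this condition is a one-liner; the reconstruction of the threshold formula below follows our reading of the garbled expression above and should be compared against (1.35).

```python
import math

lam = 0.001   # regularization parameter lambda
T = 0.01      # MPC sampling time

# sufficient stability condition for N = 2 (as reconstructed above):
#   T > -3 * log((1 - lambda) / (1 + lambda))
threshold = -3.0 * math.log((1.0 - lam) / (1.0 + lam))
stable = T > threshold   # threshold is approx 0.006, so T = 0.01 suffices
```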

We compare the closed loop solution of the instantaneous control with the optimal control open loop solution on the time interval [0,2]. The solution trajectory for the instantaneous control (red ’x’) and the optimal control (solid blue line) at different time snapshots are depicted in Figure 3.6. Obviously, both trajectories are quite close together. This observation is pretty surprising, since the optimal control takes information of the whole interval [0,2] (in this example 200 time steps), while the instantaneous control technique only uses the information of the next time step in


each iteration. Whereas at t = 2 differences between the solutions are visible, at t = 4 the equilibrium is almost attained also for the instantaneous control (as in [55], the optimal control is continued by zero input on (2, 4]).

It is remarkable that the computation time for the optimal control on the whole time interval is more than 21000 times longer than for one single step of the instantaneous control. Thus, the overall time for the instantaneous control algorithm on the interval [0, 2] is more than 105 times smaller than for the optimal open loop control.

Since the controls from both methods are quite close together and the computation of the instantaneous control is possible in real time (even for this fine spatial discretization), this technique seems to be very attractive for the wave equation.

The numerical results in [66] show that the success of the presented method is not due to the introduced weighted energy. However, our analysis is only applicable for this concept, allowing us to prove that a stabilizing feedback is obtained. A generalization of the weighted energy to the n-dimensional wave equation seems rather complicated because of the domain dependence of the weight functions. Only for simple domains can an appropriate weight function be found, e.g., for a two-dimensional rectangular domain. In contrast to this, the instantaneous control algorithm with the classical energy can easily be generalized to the multidimensional case.

An example of a successful application of this method on an L-shaped domain can be found in [55].


Figure 3.6: Comparison of optimal control (solid blue line) and instantaneous control (red ’x’) at different time snapshots.

4 Algorithms

In this chapter we present algorithms for solving MPC problems with PDEs. Furthermore, we investigate methods for solving the arising subproblems. The basic idea of MPC was presented in Section 1.2, and the simplest MPC Algorithm 1.1 forms the basis for the following considerations.
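The basic receding-horizon loop can be sketched as follows; this is a generic illustration in the spirit of Algorithm 1.1, where `solve_ocp` and `apply_control` are hypothetical placeholders for the PDE-dependent subproblem solver and the plant simulation.

```python
def mpc(y0, N, T, solve_ocp, apply_control, steps):
    """Basic MPC loop: at each sampling instant solve a finite-horizon
    optimal control problem from the current state and apply only the
    first portion (length T) of the computed open-loop control."""
    y = y0
    controls = []
    for _ in range(steps):
        u = solve_ocp(y, horizon=N * T)   # open-loop problem on [t, t + N*T]
        y = apply_control(y, u, T)        # advance the plant by one sampling period
        controls.append(u)
    return y, controls
```

For N = 2 with non-overlapping horizons this loop reduces to the instantaneous control technique discussed in Section 3.7.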

The essential effort of MPC is to solve an optimal control problem in each time step. Since this is, especially in the case of PDEs, a time consuming task, it is quite important to find efficient algorithms for such problems. Recently, a lot of literature concerning this topic has been published. A detailed introduction to the field of optimization algorithms for infinite dimensional systems can be found in [52]

and [18]. In Section 4.1 we summarize well known algorithms in PDE constrained optimization.

One possibility to significantly reduce the computing time for the optimal control problem is the use of model reduction techniques. Here, we focus on the reduction method Proper Orthogonal Decomposition (POD) presented in Section 2.3.1. In Section 4.2 we present known algorithms that combine model predictive control with model order reduction and compare them with our new approach.

In the ensuing Section 4.3 we introduce the idea of adaptive horizon MPC from [38] and [77] and show how to implement these algorithms in the context of infinite dimensional control systems. Moreover, we investigate the possibility of combining adaptive horizon MPC with multigrid methods.

4.1 Algorithms in PDE Optimization

In this section we present algorithmic approaches for solving PDE constrained optimization problems. The presentation is based on [52], [49] and [18]. First, we recapitulate the abstract optimization problem (2.1)

min_{(y,u) ∈ Y×U} J(y, u) subject to e(y, u) = 0, u ∈ U (4.1)

with J : Y × U → R, e : Y × U → Z. In our context, J(y, u) denotes the cost functional, e(y, u) = 0 represents the PDE, and the admissible set of control values U is given by pointwise box constraints.

The algorithms presented in this section are based on the so-called Black-Box approach. This means we distinguish between the independent control variable u and the dependent state variable y(u). We already investigated this concept in Section 2.1, where we introduced the unique solution operator y = S(u) to eliminate the


constraint e(y, u) = 0. The reduced cost functional is given by Ĵ(u) := J(S(u), u).

Thus, the state y is eliminated from the optimization problem. However, the control constraints are still present.
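The Black-Box reduction is conceptually simple, as the following sketch illustrates; here `S` (the solution operator of the PDE) and `J` (the cost functional) are stand-ins supplied by the user, not a concrete implementation from this thesis.

```python
def reduced_cost(u, S, J):
    """Black-Box (reduced) formulation: eliminate the state via the
    solution operator y = S(u) and evaluate the cost on (S(u), u),
    i.e. J_hat(u) := J(S(u), u)."""
    y = S(u)           # solve the PDE constraint e(y, u) = 0 for y
    return J(y, u)     # evaluate the original cost functional
```

A toy usage with a scalar "solution operator" S(u) = 2u and a tracking-type cost shows the mechanics without any PDE machinery.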

In contrast to this, the All-at-Once technique considers the control and state variables as independent optimization variables, which are coupled through the PDE e(y, u) = 0. For details we refer to [40].

Furthermore, we distinguish between first order gradient-type methods and higher order (Quasi-)Newton methods. The following algorithms are formulated in a function space setting. The presentation is based on [52] and [18]. A detailed overview of algorithms in finite dimensional optimization can be found in [76] and [35, 34].
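As a prototype of the first order methods summarized below, consider a projected gradient iteration for the reduced problem with pointwise box constraints; this finite dimensional sketch only illustrates the structure of the function space algorithms in [52], [18], and the names are our own.

```python
import numpy as np

def projected_gradient(grad, u0, lo, hi, step, iters):
    """Gradient-type method for min J_hat(u) over lo <= u <= hi:
    take a fixed-step gradient step on the reduced cost and project
    back onto the pointwise box constraints."""
    u = np.clip(np.asarray(u0, dtype=float), lo, hi)
    for _ in range(iters):
        u = np.clip(u - step * grad(u), lo, hi)
    return u
```

For a quadratic reduced cost whose unconstrained minimizer lies outside the box, the iterates land on the active bound, as expected for a projection method.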