

2F for each implemented MPC step. These are used, e.g., in Figures 5.1 and 5.2.

<u>outMu0.txt Contains the state µ (mean) for each implemented MPC step.

<u>outSigmaSq0.txt Contains the state Σ (covariance matrix) for each implemented MPC step.

outParams.txt Stores the relevant configuration parameters in a more readable form.

outParamsMaple.txt Stores the relevant configuration parameters for further use in Maple.

Table 7.2: Files generated by OU-MPC. The prefix <u> is used to distinguish between the optimal control u (uOpt) and the equilibrium control ū (uTarget).
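The output files above are plain text. Assuming one whitespace-separated row per implemented MPC step (the exact file layout is an assumption here, not documented behavior of OU-MPC), they can be loaded for post-processing along these lines:

```python
import numpy as np

def read_mpc_trace(path):
    """Load one row (scalar or vector) per implemented MPC step from an
    OU-MPC output file.  Assumes whitespace-separated values, one row per
    step; adjust if OU-MPC writes a different layout."""
    return np.loadtxt(path, ndmin=2)

# Hypothetical usage:
# mu = read_mpc_trace("outMu0.txt")            # shape: (steps, dim of mu)
# sigma_sq = read_mpc_trace("outSigmaSq0.txt")
```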

7.4 Additional Numerical Examples

In this section we present some additional numerical examples for which, to our knowledge, no closed-form solution is known. The Fokker–Planck optimal control problems were solved in PDE-MPC. We verified the results of the Fokker–Planck approach on the microscopic level by solving the SDE numerically in SDEControl with the (optimal) controls calculated by PDE-MPC. The plots in this section were created in ParaView [11].
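The microscopic verification step can be sketched as follows with the Euler–Maruyama scheme; the drift callback, scalar diffusion, and all parameter values below are illustrative placeholders, not the actual SDEControl interface:

```python
import numpy as np

def euler_maruyama(b, sigma, x0, t_grid, n_paths, seed=None):
    """Simulate dX = b(X, t) dt + sigma dW for many paths at once with
    the Euler-Maruyama scheme.  b maps an (n_paths, d) array and a time
    to an (n_paths, d) drift; sigma is a scalar noise intensity."""
    rng = np.random.default_rng(seed)
    x = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    out = [x.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + b(x, t0) * dt + sigma * dw
        out.append(x.copy())
    return np.stack(out)  # shape: (len(t_grid), n_paths, d)

# Illustrative run: uncontrolled 2D OU-type process with drift -x.
paths = euler_maruyama(lambda x, t: -x, 1.0, [1.0, 1.0],
                       np.linspace(0.0, 1.0, 101), n_paths=1000, seed=0)
```

In the actual verification, the drift would interpolate the (optimal) controls exported by PDE-MPC instead of the toy feedback used here.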

Example 7.1 (Shallow Water). The following two-dimensional stochastic process models the dispersion of a substance in shallow water [56]: Consider (1.1) with

The spatial domain Ω is discretized using a 321×321 grid, which results in 2·102720 control variables in the case of space-dependent control.

To experiment with non-Gaussian PDFs and how well they can be attained with space-dependent controls in various settings, the initial PDF is a (smoothed) Dirac delta located at the center (4,4), and we choose the target PDF

ρ̄ to be the equilibrium PDF of a stochastic Lotka–Volterra two-species prey–predator model [103].
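A (smoothed) Dirac delta of this kind can be realized, for instance, as a narrow Gaussian bump normalized on the grid; the width eps and the domain ]0,8[² below are illustrative assumptions, not the values used in PDE-MPC:

```python
import numpy as np

def smoothed_dirac(grid_x, grid_y, center, eps=0.05):
    """A (smoothed) Dirac delta as a narrow Gaussian bump of width eps,
    normalized to unit mass with respect to the grid's Riemann sum.
    Both the width and the normalization rule are illustrative choices."""
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    r2 = (X - center[0]) ** 2 + (Y - center[1]) ** 2
    rho = np.exp(-r2 / (2.0 * eps ** 2))
    dx = grid_x[1] - grid_x[0]
    dy = grid_y[1] - grid_y[0]
    return rho / (rho.sum() * dx * dy)

# Assumed domain ]0,8[^2 with the concentration at the center (4,4):
x = np.linspace(0.0, 8.0, 321)
rho0 = smoothed_dirac(x, x, center=(4.0, 4.0))
```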

Chapter 7. Numerical Implementation and Simulations

We employ control constraints u1, u2 ∈ [−10, 10] and solve the optimal control problem using MPC with the shortest possible horizon N = 2, a sampling time of Ts = 0.5, and the L2 stage cost

\ell(\rho(k), u(k)) = \frac{1}{2}\,\|\rho(k) - \bar\rho\|_{L^2(\mathbb{R}^2)}^2 + \frac{\gamma}{2}\,\|u(k)\|_{L^2(\mathbb{R}^2;\mathbb{R}^2)}^2,

where γ = 0.001. The solution and the evolution of the stochastic process on a microscopic level (100000 paths) are depicted in Figure 7.2, with the corresponding controls in Figure 7.3.
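The receding-horizon scheme used throughout these examples can be sketched generically. Here `solve` stands in for the PDE-constrained optimizer of PDE-MPC; the scalar toy "solver" in the usage line is purely illustrative:

```python
def mpc(x0, n_steps, N, step, solve):
    """Generic receding-horizon loop: at each sampling instant solve a
    horizon-N open-loop problem, apply only the first control, advance
    the (discretized) dynamics, and repeat.  `solve` is a placeholder
    returning a control sequence of length N-1."""
    x, closed_loop = x0, []
    for _ in range(n_steps):
        u_seq = solve(x, N)   # open-loop optimal controls for the horizon
        u = u_seq[0]          # apply only the first element
        closed_loop.append((x, u))
        x = step(x, u)
    return x, closed_loop

# Toy illustration: scalar dynamics x+ = x + u with a feedback "solver".
xT, trace = mpc(4.0, n_steps=10, N=2,
                step=lambda x, u: x + u,
                solve=lambda x, N: [-0.5 * x] * (N - 1))
```

With N = 2 the open-loop problem contains a single control move, which is exactly the "shortest possible horizon" setting used above.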

In the case of space-independent control (u1(t), u2(t)), as illustrated in Figure 2.1, the stage cost is given by

\ell(\rho(k), u(k)) = \frac{1}{2}\,\|\rho(k) - \bar\rho\|_{L^2(\mathbb{R}^2)}^2 + \frac{\gamma}{2}\,|u(k)|^2.
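Both stage costs are plain L² quadratures and can be approximated on the discretization grid, e.g., by the trapezoidal rule; this grid quadrature is an illustrative stand-in for the discretization actually used in PDE-MPC:

```python
import numpy as np

def stage_cost(rho, rho_bar, u1, u2, x, y, gamma=0.001):
    """L2 stage cost 1/2 ||rho - rho_bar||^2 + gamma/2 ||u||^2 on a
    uniform rectangular grid, using composite trapezoidal weights."""
    dx, dy = x[1] - x[0], y[1] - y[0]
    w_x = np.full(x.size, 1.0); w_x[0] = w_x[-1] = 0.5
    w_y = np.full(y.size, 1.0); w_y[0] = w_y[-1] = 0.5

    def l2sq(f):
        # trapezoidal approximation of the integral of f^2
        return float(np.einsum("i,j,ij->", w_x, w_y, f ** 2)) * dx * dy

    return 0.5 * l2sq(rho - rho_bar) + 0.5 * gamma * (l2sq(u1) + l2sq(u2))
```

For the space-independent cost, the second term reduces to the plain Euclidean norm γ/2·|u(k)|² and needs no quadrature.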

Example 7.2 (Bimodal Target). Consider a 2D Ornstein–Uhlenbeck process, i.e., (1.1). The spatial domain Ω is discretized using a 301×301 grid, which results in 2·90300 control variables.

We consider bimodal target PDFs.

We employ control constraints u1, u2 ∈ [−10, 10] and we solve the optimal control problem using MPC with the shortest possible horizon N = 2, a sampling time of Ts = 0.5, and the L2 stage cost

\ell(\rho(k), u(k)) = \frac{1}{2}\,\|\rho(k) - \bar\rho\|_{L^2(\mathbb{R}^2)}^2 + \frac{\gamma}{2}\,\|u(k)\|_{L^2(\mathbb{R}^2;\mathbb{R}^2)}^2,

where γ = 0.001. The solution and the evolution of the stochastic process on a microscopic level (100000 paths) are depicted in Figure 7.4, with the corresponding controls in Figure 7.5.

Example 7.3 (Moving Bimodal Target). Consider Example 7.2 with the only change being a moving bimodal target PDF ρ̄(x, t), where σ̄ := (0.4, 0.4, 0.6, 0.6).

The solution is depicted in Figure 7.6, with the corresponding controls in Figure 7.7.

Figure 7.8 illustrates the evolution of the stochastic process on a microscopic level (100000 paths).

Example 7.4 (Bimodal Uniform Target). Consider the 2D Ornstein–Uhlenbeck process from Example 7.2, i.e.,

a(x, t) := \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix} \quad\text{and}\quad b(x, t; u) := \begin{pmatrix} u_1(x, t) - \nu x_1 \\ u_2(x, t) - \nu x_2 \end{pmatrix} \quad\text{with } \nu := \frac{3}{4},

and the associated Fokker–Planck equation on Q := Ω × [0,1] with Ω := ]−3,3[². The spatial domain Ω is discretized using a 121×121 grid, which results in 2·14520 control variables.
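These definitions transcribe directly into code; representing the two control components as callables is an illustrative choice, not the PDE-MPC interface:

```python
import numpy as np

NU = 0.75  # nu = 3/4, as in Example 7.4

def a(x, t):
    """Diffusion matrix a(x, t) = diag(1/2, 1/2)."""
    return np.diag([0.5, 0.5])

def b(x, t, u):
    """Controlled OU drift b(x, t; u) = (u1(x,t) - nu*x1, u2(x,t) - nu*x2),
    where u = (u1, u2) is a pair of callables (illustrative signature)."""
    u1, u2 = u
    return np.array([u1(x, t) - NU * x[0], u2(x, t) - NU * x[1]])

# Uncontrolled drift at x = (1, 2):
zero = lambda x, t: 0.0
drift = b(np.array([1.0, 2.0]), 0.0, (zero, zero))
```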

Starting from the initial Gaussian PDF

\mathring\rho(x) := \left( (2\pi)^2 \prod_{i=1}^{2} \mathring\sigma_i^2 \right)^{-1/2} \exp\left( -\sum_{i=1}^{2} \frac{x_i^2}{2\mathring\sigma_i^2} \right)

with \mathring\sigma_1^2 = \mathring\sigma_2^2 = 1/6, we want the PDF ρ(x, t) to attain the uniform target PDF
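For reference, this initial PDF evaluates as follows (a direct transcription of the formula above); checking that it has unit mass on Ω = ]−3,3[² is a useful sanity check, since the domain extends more than seven standard deviations from the origin:

```python
import numpy as np

def initial_pdf(x1, x2, s1sq=1.0 / 6.0, s2sq=1.0 / 6.0):
    """Product of two centered 1D Gaussians with variances s1sq, s2sq:
    ((2*pi)^2 * s1sq * s2sq)^(-1/2) * exp(-x1^2/(2 s1sq) - x2^2/(2 s2sq))."""
    norm = ((2.0 * np.pi) ** 2 * s1sq * s2sq) ** -0.5
    return norm * np.exp(-(x1 ** 2 / (2.0 * s1sq) + x2 ** 2 / (2.0 * s2sq)))
```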

\bar\rho(x) := \begin{cases} 1, & x \in [-1,0]^2 \cup [0,1]^2, \\ 0, & \text{otherwise.} \end{cases}
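The target transcribes directly as an indicator-type function; note that, as stated, no normalization is applied to it here:

```python
import numpy as np

def target_pdf(x1, x2):
    """Uniform target of Example 7.4: value 1 on [-1,0]^2 and [0,1]^2,
    value 0 elsewhere (transcribed as stated, without normalization)."""
    in_lower = (-1.0 <= x1) & (x1 <= 0.0) & (-1.0 <= x2) & (x2 <= 0.0)
    in_upper = (0.0 <= x1) & (x1 <= 1.0) & (0.0 <= x2) & (x2 <= 1.0)
    return np.where(in_lower | in_upper, 1.0, 0.0)
```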

We solve the optimal control problem using MPC with the shortest possible horizon N = 2, a sampling time of Ts = 0.1, and the L2 stage cost

\ell(\rho(k), u(k)) = \frac{\alpha}{2}\,\|\rho(k) - \bar\rho\|_{L^2(\mathbb{R}^2)}^2 + \frac{\gamma}{2}\,\|u(k)\|_{L^2(\mathbb{R}^2;\mathbb{R}^2)}^2,

where α = 2 and γ is either 0.01 or 0.001. The solutions are depicted in Figure 7.9, with the corresponding controls in Figure 7.10. Figure 7.11 illustrates the evolution of the stochastic process on a microscopic level.


(a) Initial PDF (concentration at (4,4)). (b) Desired PDF.

(c) Controlled PDF at t = 0.5. (d) Controlled SDE at t = 0.5.

(e) Controlled PDF at t = 1.5. (f) Controlled SDE at t = 1.5.

(g) Controlled PDF at t = 5. (h) Controlled SDE at t = 5.

Figure 7.2: Desired and controlled PDF (using space-dependent control) for Example 7.1 (Shallow Water) and the evolution of the stochastic process on a microscopic level.

(a) Control u1(x, 0). (b) Control u2(x, 0).

(c) Control u1(x, 0.5). (d) Control u2(x, 0.5).

(e) Control u1(x, 1). (f) Control u2(x, 1).

(g) Control u1(x, 4.5). (h) Control u2(x, 4.5).

Figure 7.3: Controls u1(x, t) and u2(x, t) for Example 7.1 (Shallow Water). Note the different scales at t = 0.


(a) Initial PDF. (b) Desired PDF.

(c) Controlled PDF at t = 0.5. (d) Controlled SDE at t = 0.5.

(e) Controlled PDF at t = 1. (f) Controlled SDE at t = 1.

(g) Controlled PDF at t = 5. (h) Controlled SDE at t = 5.

Figure 7.4: Desired and controlled PDF for Example 7.2 (Bimodal Target) and the evolution of the stochastic process on a microscopic level.

(a) Control u1(x, 0). (b) Control u2(x, 0).

(c) Control u1(x, 0.5). (d) Control u2(x, 0.5).

(e) Control u1(x, 2.5). (f) Control u2(x, 2.5).

(g) Control u1(x, 4.5). (h) Control u2(x, 4.5).

Figure 7.5: Controls u1(x, t) and u2(x, t) for Example 7.2 (Bimodal Target). Note the different scales at t = 0.


(a) Controlled PDF at t = 0.5. (b) Desired PDF at t = 0.5.

(c) Controlled PDF at t = 2.5. (d) Desired PDF at t = 2.5.

(e) Controlled PDF at t = 4. (f) Desired PDF at t = 4.

(g) Controlled PDF at t = 5. (h) Desired PDF at t = 5.

Figure 7.6: Desired and controlled PDF for Example 7.3 (Moving Bimodal Target).

(a) Control u1(x, 0). (b) Control u2(x, 0).

(c) Control u1(x, 2). (d) Control u2(x, 2).

(e) Control u1(x, 3.5). (f) Control u2(x, 3.5).

(g) Control u1(x, 4.5). (h) Control u2(x, 4.5).

Figure 7.7: Controls u1(x, t) and u2(x, t) for Example 7.3 (Moving Bimodal Target).


(a) Controlled SDE at t = 0. (b) Controlled SDE at t = 0.5.

(c) Controlled SDE at t = 1.5. (d) Controlled SDE at t = 2.5.

(e) Controlled SDE at t = 3.5. (f) Controlled SDE at t = 4.

(g) Controlled SDE at t = 4.5. (h) Controlled SDE at t = 5.

Figure 7.8: Controlled SDE for Example 7.3 (Moving Bimodal Target); 100000 paths.

(a) Initial PDF. (b) Desired PDF.

(c) Controlled PDF at t = 0.1 (γ = 0.01). (d) Controlled PDF at t = 0.1 (γ = 0.001).

(e) Controlled PDF at t = 1 (γ = 0.01). (f) Controlled PDF at t = 1 (γ = 0.001).

(g) Controlled PDF at t = 1 (γ = 0.01; grid points). (h) Controlled PDF at t = 1 (γ = 0.001; grid points).

Figure 7.9: Desired and controlled PDF for Example 7.4 (Bimodal Uniform Target) and various regularization parameters γ.


(a) Control u1(x, 0) for γ = 0.01. (b) Control u2(x, 0) for γ = 0.01.

(c) Control u1(x, 4.5) for γ = 0.01. (d) Control u2(x, 4.5) for γ = 0.01.

(e) Control u1(x, 0) for γ = 0.001. (f) Control u2(x, 0) for γ = 0.001.

(g) Control u1(x, 4.5) for γ = 0.001. (h) Control u2(x, 4.5) for γ = 0.001.

Figure 7.10: Controls u1(x, t) and u2(x, t) for Example 7.4 (Bimodal Uniform Target) and various regularization parameters γ. Note the different scales for the different values of γ.

(a) Controlled SDE at t = 0 (γ = 0.01). (b) Controlled SDE at t = 0 (γ = 0.001).

(c) Controlled SDE at t = 0.1 (γ = 0.01). (d) Controlled SDE at t = 0.1 (γ = 0.001).

(e) Controlled SDE at t = 0.2 (γ = 0.01). (f) Controlled SDE at t = 0.2 (γ = 0.001).

(g) Controlled SDE at t = 1 (γ = 0.01). (h) Controlled SDE at t = 1 (γ = 0.001).

Figure 7.11: SDE simulation for Example 7.4 (Bimodal Uniform Target) and various regularization parameters γ; 100000 paths.

8 Future Research

In this concluding chapter we outline various extensions and possibilities for future research.

8.1 Generalization of existing results

One way to deepen our understanding of the extent to which Model Predictive Control works well in the Fokker–Planck optimal control framework (cf. Section 1.1) is to generalize the existing results, both in the stabilizing MPC case (cf. Section 3.2) and in the economic MPC case (cf. Section 3.3).

8.1.1 Minimal Stabilizing Horizon

In the case of stabilizing MPC we have shown that the MPC approach we employ “works” for linear stochastic processes and Gaussian PDFs, cf. Theorem 5.11, provided the horizon is sufficiently large. It would be interesting to generalize this result on the Fokker–Planck PDE level and to incorporate nonlinear stochastic processes or linear processes with a nonlinear control, cf. Section 7.4.

Another challenge is to determine, or at least approximate, the minimal stabilizing horizon. In this context, we have studied the Ornstein–Uhlenbeck process in more detail: in Section 4.2, where the control is space-independent, and in Section 5.3.2, where the space-dependent control is linear in space. A common trait we encountered is that the minimal stabilizing horizon is N = 2, i.e., the shortest possible horizon, although the optimal value function grows, cf. Example 5.14 or Section 4.3 and Figure 4.5. For nonlinear as well as for other linear processes, strategies have to be developed to cope with this. One such strategy, which was pursued in Section 4.2, is to suitably modify the stage cost without affecting the resulting optimal control sequence, such that Theorem 3.4 can still be employed.

8.1.2 Strict Dissipativity

In the case of economic MPC we have investigated the strict dissipativity of optimal control problems subject to a bilinear discrete-time dynamics that approximates the Ornstein–Uhlenbeck process. While various stage costs were considered, it would be very desirable to extend this analysis to more complicated dynamics, ideally to a whole class of stochastic processes, through a combination of a stage cost ℓ and a suitable storage function λ such that the modified cost ℓ̃ is (strongly) convex.
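For orientation, in the notation commonly used in economic MPC (which may differ in detail from the definitions in Chapter 3), the modified cost is the rotated stage cost

```latex
\tilde\ell(x, u) := \ell(x, u) + \lambda(x) - \lambda(f(x, u)),
```

and strict dissipativity at an equilibrium (x^e, u^e) amounts to a bound of the form \tilde\ell(x, u) \ge \alpha(\|x - x^e\|) for some \alpha \in \mathcal{K}_\infty; establishing (strong) convexity of \tilde\ell is one convenient way to verify such a bound.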

Another open question, although minor compared to the previous one, is whether the conditions in Assumption 3.10 hold, which is necessary in order to apply the stability result from Theorem 3.13. While the continuity of the storage function is usually directly provable, the remaining properties likely need to be shown indirectly, via local controllability, cf. Definition 3.11.