
Universität Konstanz

Reduced-Order Multiobjective Optimal Control of Semilinear Parabolic Problems

Laura Iapichino Stefan Trenz Stefan Volkwein

Konstanzer Schriften in Mathematik Nr. 347, December 2015

ISSN 1430-3558

© Fachbereich Mathematik und Statistik Universität Konstanz

Fach D 197, 78457 Konstanz, Germany

Konstanzer Online-Publikations-System (KOPS) URL: http://nbn-resolving.de/urn:nbn:de:bsz:352-0-313874


Reduced-Order Multiobjective Optimal Control of Semilinear Parabolic Problems

Laura Iapichino1, Stefan Trenz2, and Stefan Volkwein3

1 Delft University of Technology, Department of Precision and Microsystems Engineering, Mekelweg 2, 2628 CD Delft, The Netherlands,

l.iapichino@tudelft.nl

2 University of Konstanz, Department of Mathematics and Statistics, Universitätsstraße 10, D-78457 Konstanz, Germany,

stefan.trenz@uni-konstanz.de

3 University of Konstanz, Department of Mathematics and Statistics, Universitätsstraße 10, D-78457 Konstanz, Germany,

stefan.volkwein@uni-konstanz.de

Abstract. In this paper a reduced-order strategy is applied to solve a multiobjective optimal control problem governed by semilinear parabolic partial differential equations. Such problems often arise in practical applications, where the quality of the system behavior has to be measured by more than one criterion. The weighted sum method is exploited for defining scalar-valued nonlinear optimal control problems built by introducing additional optimization parameters. The optimal controls corresponding to specific choices of the optimization parameters are efficiently computed by the reduced-order method. The accuracy is guaranteed by an a-posteriori error estimate.

1 Introduction

In real applications, optimization problems are often described by introducing several objective functions conflicting with each other. This leads to multiobjective or multicriterial optimization problems; see, e.g., [1]. Finding an optimal control that represents a good compromise is the main issue in these problems. For that reason the concept of Pareto optimal or efficient points has been developed. In contrast to scalar-valued optimization problems, the computation of a set of Pareto optimal points is required. Consequently, many scalar-valued constrained optimization problems have to be solved.

When dealing with control functions instead of parameters, a multiobjective optimal control problem (MOCP) needs to be solved. In this paper we apply the weighted sum method [14,1] in order to transform the MOCP into a sequence of scalar optimal control problems and to solve them using well-known optimal control techniques [12]. Preliminary results combining reduced-order modeling and multiobjective PDE-constrained optimization have recently been derived [4,9]. The main focus of the present work lies in the extension of [4] to nonlinear, control-constrained optimal control problems governed by evolution problems.


The paper is organized as follows. In Section 2 the multiobjective optimal control problem is formulated and transformed into a scalar-valued problem that is considered in Section 3. The numerical solution strategy is described in Section 4, and numerical results are presented in Section 5.

2 Problem formulation and Pareto optimality

Let $\Omega \subset \mathbb{R}^d$, $d \in \{1,2,3\}$, be an open and bounded domain with Lipschitz continuous boundary $\Gamma = \partial\Omega$. For given $T > 0$ we set $Q = (0,T) \times \Omega$ and $\Sigma = (0,T) \times \Gamma$. Then we consider the following class of multiobjective optimal control problems governed by semilinear parabolic equations:

\[
\min J(y,u) =
\begin{pmatrix} J_1(y,u) \\ J_2(y,u) \\ J_3(y,u) \end{pmatrix}
=
\begin{pmatrix}
\frac{1}{2} \int_\Omega |y(T,\cdot) - y_\Omega|^2 \,\mathrm{d}x \\[1ex]
\frac{1}{2} \int_0^T \!\! \int_\Omega |y - y_Q|^2 \,\mathrm{d}x\,\mathrm{d}t \\[1ex]
\frac{1}{2} \sum_{i=1}^m |u_i - u_i^d|^2
\end{pmatrix}
\tag{1a}
\]

subject to the semilinear parabolic differential problem

\[
\begin{aligned}
y_t(t,x) - \Delta y(t,x) + y^3(t,x) &= \sum_{i=1}^m u_i b_i(x) + f(t,x) && \text{for } (t,x) \in Q, \\
\frac{\partial y}{\partial n}(t,s) &= 0 && \text{for } (t,s) \in \Sigma, \\
y(0,x) &= y_\circ(x) && \text{for } x \in \Omega
\end{aligned}
\tag{1b}
\]

and to the bilateral control constraints

\[
u \in U_{ad} = \big\{ \tilde u = (\tilde u_1, \ldots, \tilde u_m)^\top \in \mathbb{R}^m \;\big|\; u_i^a \le \tilde u_i \le u_i^b \text{ for } 1 \le i \le m \big\}.
\tag{1c}
\]

In (1a) we assume that $y_\Omega \in L^2(\Omega)$, $y_Q \in L^2(Q)$ and $u^d = (u_1^d, \ldots, u_m^d)^\top \in \mathbb{R}^m$. By '$\top$' we denote the transpose of a vector or matrix. Furthermore, we suppose that $b_1, \ldots, b_m \in L^\infty(\Omega)$ and $y_\circ \in L^\infty(\Omega)$. In (1c) let $u_i^a, u_i^b \in \mathbb{R}$ satisfy $u_i^a \le u_i^b$ for $1 \le i \le m$.

Recall that the state equation (1b) has a unique (weak) solution $y = y(u)$ for every $u \in U_{ad}$; see, e.g., [12]. Hence, we can introduce the reduced objective by $\hat J(u) = J(y(u), u)$ for $u \in U_{ad}$. Instead of (1) we consider now

\[
\min \hat J(u) = \begin{pmatrix} \hat J_1(u) \\ \hat J_2(u) \\ \hat J_3(u) \end{pmatrix}
\quad \text{subject to} \quad u \in U_{ad}.
\tag{2}
\]

Problem (2) involves the minimization of a vector-valued objective. This is done by using the concepts of order relation and Pareto optimality [1]. In $\mathbb{R}^3$ we make use of the following order relation: for all $z^1, z^2 \in \mathbb{R}^3$ we have

\[
z^1 \le z^2 \quad \Leftrightarrow \quad z^2 - z^1 \in \mathbb{R}^3_+ = \big\{ z \in \mathbb{R}^3 \;\big|\; z_i \ge 0 \text{ for } i = 1,2,3 \big\}.
\]


Definition 1 (Pareto optimal). Let $Z = \hat J(U_{ad}) \subset \mathbb{R}^3$ be the image set of $U_{ad}$ under the cost functional $\hat J$. We call a point $\bar z \in Z$ globally (strictly) efficient with respect to the order relation $\le$ if there exists no $z \in Z \setminus \{\bar z\}$ with $z \le \bar z$. If $\bar z$ is efficient and $\bar u \in U_{ad}$ satisfies $\bar z = \hat J(\bar u)$, we call $\bar u$ (strictly) Pareto optimal. Let $\bar u \in U_{ad}$ hold. If there exists a neighborhood $N(\bar u) \subset U_{ad}$ of $\bar u$ so that $\bar z = \hat J(\bar u)$ is (strictly) efficient for the (local) image set $\hat J(N(\bar u)) \subset Z$, the point $\bar u$ is called locally (strictly) Pareto optimal. Moreover, $\bar z$ is said to be locally efficient.
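For a finite sample of objective vectors, efficiency in the sense of Definition 1 can be checked by pairwise comparison with respect to the order relation above. The following sketch (plain NumPy; the function name and interface are ours, not part of the paper) filters the efficient points out of a finite set $Z \subset \mathbb{R}^3$:

```python
import numpy as np

def efficient_points(Z):
    """Return the indices of the (strictly) efficient rows of Z.

    Z is an (N, 3) array of objective vectors; row j dominates row i
    (w.r.t. the componentwise order) if Z[j] <= Z[i] in every component
    and Z[j] != Z[i].  Efficient rows are dominated by no other row.
    """
    Z = np.asarray(Z, dtype=float)
    efficient = np.ones(Z.shape[0], dtype=bool)
    for i in range(Z.shape[0]):
        # is there another point that is <= in all components and < in one?
        dominated_by = np.all(Z <= Z[i], axis=1) & np.any(Z < Z[i], axis=1)
        if dominated_by.any():
            efficient[i] = False
    return np.nonzero(efficient)[0]
```

Applied to a finite set of cost values, such as those of the randomly sampled controls in Section 5, this filter would single out the non-dominated points approximating the Pareto front.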

Now, the multiobjective optimal control problem (2) is understood as follows:

Find Pareto optimal points in $U_{ad}$ for the vector-valued reduced objective $\hat J$. First-order necessary optimality conditions for Pareto optimality are presented in the next theorem, which is proved in [1, Theorem 3.21 and Corollary 3.23]. The proof is based on the result of Kuhn and Tucker [6].

Theorem 2. Suppose that $\bar u \in U_{ad}$ is Pareto optimal. Then there exists a parameter vector $\bar\mu = (\bar\mu_1, \bar\mu_2, \bar\mu_3) \in \mathbb{R}^3$ satisfying the Karush-Kuhn-Tucker conditions

\[
\bar\mu_i \in [0,1], \quad \sum_{i=1}^3 \bar\mu_i = 1 \quad \text{and} \quad \Big( \sum_{i=1}^3 \bar\mu_i \hat J_i'(\bar u) \Big)^{\!\top} (u - \bar u) \ge 0 \quad \text{for all } u \in U_{ad}.
\tag{3}
\]

Motivated by Theorem 2, let us choose $0 < \mu_{lb} < 1$ and set

\[
D_{ad} = \Big\{ \mu = (\mu_1, \mu_2, \mu_3) \in \mathbb{R}^3_+ \;\Big|\; \sum_{i=1}^3 \mu_i = 1, \ \mu_3 \ge \mu_{lb} \Big\} \subset [0,1]^3.
\]

The condition $\mu_3 \ge \mu_{lb}$ is necessary for the well-posedness of the scalar-valued optimal control problem ($\hat P_\mu$) introduced below. For any $\mu \in D_{ad}$ we define the parameter-dependent, scalar-valued objective as $\hat J(u;\mu) = \mu^\top \hat J(u)$ for $u \in U_{ad}$. Then, (3) are the first-order necessary optimality conditions for a local solution $\bar u = \bar u(\mu)$ to the parameter-dependent optimization problem

\[
\min \hat J(u;\mu) \quad \text{subject to} \quad u \in U_{ad}
\tag{$\hat P_\mu$}
\]

for the parameter $\mu = \bar\mu$. In the weighted sum method Pareto optimal points are computed by solving ($\hat P_\mu$) for various $\mu \in D_{ad}$; see [14] and [1, Chapter 3].

Remark 3. To solve ($\hat P_\mu$) we apply a globalized Newton-CG method [7]. ♦
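To make the weighted sum method concrete, the following hedged sketch loops over weight vectors $\mu \in D_{ad}$ and solves each scalar problem ($\hat P_\mu$) with a box-constrained solver from SciPy as a stand-in for the globalized Newton-CG method of Remark 3. The callables `J_components` and `J_gradients` (the reduced costs $\hat J_i$ and their gradients) are assumed to be available; their names and signatures are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import Bounds, minimize

def weighted_sum_front(J_components, J_gradients, u0, ua, ub, mus):
    """Weighted sum loop: one scalar solve of (P_mu) per weight vector.

    J_components(u) -> array-like (J1(u), J2(u), J3(u)) of reduced costs,
    J_gradients(u)  -> array-like of shape (3, m) with their gradients,
    u0              -> starting control in R^m,
    ua, ub          -> lower and upper control bounds from (1c),
    mus             -> iterable of weight vectors mu in D_ad.
    """
    bounds = Bounds(np.asarray(ua, float), np.asarray(ub, float))
    controls, front = [], []
    u_start = np.asarray(u0, float)
    for mu in mus:
        mu = np.asarray(mu, float)
        fun = lambda u, mu=mu: float(mu @ np.asarray(J_components(u)))
        jac = lambda u, mu=mu: mu @ np.asarray(J_gradients(u))
        # box-constrained quasi-Newton solver as a stand-in for the
        # globalized Newton-CG method mentioned in Remark 3
        res = minimize(fun, u_start, jac=jac, bounds=bounds, method="L-BFGS-B")
        controls.append(res.x)
        front.append(np.asarray(J_components(res.x)))
        u_start = res.x  # warm start for the next weight vector
    return controls, np.array(front)
```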

3 The scalar-valued optimal control problem

Suppose that $\bar u = \bar u(\mu) \in U_{ad}$ is an optimal solution to ($\hat P_\mu$) for given $\mu \in D_{ad}$. Let $\bar y = y(\bar u;\mu)$ denote the associated optimal state satisfying (1b) for $u = \bar u$. Following [12], first-order necessary optimality conditions for ($\hat P_\mu$) ensure the existence of a unique adjoint $\bar p = p(\bar u;\mu)$ solving

\[
\begin{aligned}
-p_t(t,x) - \Delta p(t,x) + 3 y^2(t,x)\, p(t,x) &= \mu_2 \big( y_Q(t,x) - y(t,x) \big) && \text{in } Q, \\
\frac{\partial p}{\partial n}(t,s) &= 0 && \text{on } \Sigma, \\
p(T,x) &= \mu_1 \big( y_\Omega(x) - y(T,x) \big) && \text{in } \Omega
\end{aligned}
\tag{4}
\]

with $y = \bar y$. Moreover, for any $\mu \in D_{ad}$ the gradient of the reduced cost functional $\hat J(\cdot\,;\mu)$ at a given $u \in U_{ad}$ is given by

\[
\hat J'(u;\mu) = \Big( \mu_3 \big( u_i - u_i^d \big) - \int_0^T \!\! \int_\Omega p(t,x)\, b_i(x) \,\mathrm{d}x\,\mathrm{d}t \Big)_{1 \le i \le m},
\]

where $p$ solves (4) and $y$ is the solution to (1b).
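In a space-time discretization the gradient formula above reduces to weighted sums over the nodal values of the adjoint. A minimal sketch, assuming the adjoint $p$ and the shape functions $b_i$ are given as nodal arrays on an equidistant temporal grid with a fixed spatial quadrature rule (all array layouts are our assumptions):

```python
import numpy as np

def reduced_gradient(u, u_d, mu3, p, b, w_x, dt):
    """Discrete reduced gradient of J_hat(.;mu), cf. the formula above.

    u, u_d : current and desired controls in R^m,
    mu3    : weight of the control cost J3,
    p      : (Nt, Nx) nodal values of the adjoint state solving (4),
    b      : (m, Nx) nodal values of the shape functions b_i,
    w_x    : (Nx,) spatial quadrature weights (e.g. a lumped mass matrix),
    dt     : equidistant time step on [0, T].
    """
    # int_Omega p(t_j, .) b_i dx for every time level j and control i
    space_int = p @ (b * w_x).T                       # shape (Nt, m)
    # trapezoidal rule in time for int_0^T ... dt
    w_t = np.full(p.shape[0], dt)
    w_t[0] = w_t[-1] = 0.5 * dt
    time_int = w_t @ space_int                        # shape (m,)
    return mu3 * (np.asarray(u) - np.asarray(u_d)) - time_int
```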

Since ($\hat P_\mu$) is a non-convex problem, we make use of the Hessian $\hat J''(u;\mu) \in \mathbb{R}^{m \times m}$ in order to ensure sufficient optimality conditions. Let $\bar u = \bar u(\mu)$ be locally optimal for ($\hat P_\mu$) satisfying the second-order sufficient optimality condition

\[
\hat J''(\bar u;\mu)(u,u) \ge \kappa \, |u|_2^2 \quad \text{for all } u \in \mathbb{R}^m
\]

with a $\mu$-independent lower bound $\kappa > 0$ for the smallest eigenvalue of the Hessian. Then, for any $\tilde\kappa \in (0,\kappa)$ there exists a radius $\tilde\rho = \rho(\tilde\kappa) > 0$ such that

\[
\hat J''(\tilde u;\mu)(u,u) \ge \tilde\kappa \, |u|_2^2 \quad \text{for all } u \in \mathbb{R}^m \text{ and all } \tilde u \text{ with } |\tilde u - \bar u|_2 \le \tilde\rho.
\tag{5}
\]

If $\tilde u \in U_{ad}$ satisfies (5), we can estimate the error between $\bar u$ and $\tilde u$ as [5]

\[
|\bar u - \tilde u|_2 \le \frac{1}{\tilde\kappa} \, |\zeta|_2,
\tag{6}
\]

where $\zeta = \zeta(\tilde u) \in \mathbb{R}^m$ is given by

\[
\zeta_i :=
\begin{cases}
\Big[ \mu_3 \big( \tilde u_i - u_i^d \big) - \int_0^T \!\! \int_\Omega \tilde p(t,x)\, b_i(x) \,\mathrm{d}x\,\mathrm{d}t \Big]_-, & \text{if } \tilde u_i = u_i^a, \\[1.5ex]
-\mu_3 \big( \tilde u_i - u_i^d \big) + \int_0^T \!\! \int_\Omega \tilde p(t,x)\, b_i(x) \,\mathrm{d}x\,\mathrm{d}t, & \text{if } u_i^a < \tilde u_i < u_i^b, \\[1.5ex]
-\Big[ \mu_3 \big( \tilde u_i - u_i^d \big) - \int_0^T \!\! \int_\Omega \tilde p(t,x)\, b_i(x) \,\mathrm{d}x\,\mathrm{d}t \Big]_+, & \text{if } \tilde u_i = u_i^b,
\end{cases}
\tag{7}
\]

where $\tilde p$ solves (4) with $y = \tilde y$ and $\tilde y$ solves (1b) with $u = \tilde u$. In (7) we denote by $[s]_- = -\min(0,s)$ and $[s]_+ = \max(0,s)$ the negative and positive part functions, respectively. Hence, given a (suboptimal) $\tilde u \in U_{ad}$, the error $\bar u - \tilde u$ can be estimated by the right-hand side in (6), provided the lower bound $\tilde\kappa$ for the symmetric matrix $\hat J''(\tilde u;\mu)$ is known. We proceed as follows: from $\hat J(\tilde u;\mu) = \mu^\top \hat J(\tilde u)$ we find

\[
\hat J''(\tilde u;\mu) = \mu_1 \hat J_1''(\tilde u) + \mu_2 \hat J_2''(\tilde u) + \mu_3 \hat J_3''(\tilde u) \quad \text{for } \mu \in D_{ad}.
\]

It follows from the theorem of Courant-Fischer [10, Corollary 4.7] that a lower bound $\lambda_{\min}^{LB}$ for the smallest eigenvalue $\lambda_{\min}$ of $\hat J''(\tilde u;\mu)$ is given by

\[
\lambda_{\min}\big( \hat J''(\tilde u;\mu) \big) \ge \sum_{i=1}^3 \mu_i \, \lambda_{\min}\big( \hat J_i''(\tilde u) \big) =: \lambda_{\min}^{LB}(\tilde u;\mu),
\tag{8}
\]

where $\lambda_{\min}(A)$ denotes the smallest eigenvalue of a symmetric matrix $A$.
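Putting (6), (7) and (8) together, the a-posteriori bound for a suboptimal control $\tilde u$ can be evaluated from the gradient data and the component eigenvalues alone. A hedged sketch (the argument names and the tolerance used to detect active bounds are our choices, not the paper's):

```python
import numpy as np

def aposteriori_bound(u_t, u_d, ua, ub, mu, adjoint_term, lam_min_i, tol=1e-12):
    """Evaluate the a-posteriori error bound (6) using (7) and (8).

    u_t          : suboptimal control (m,),
    u_d, ua, ub  : desired control and bilateral bounds,
    mu           : weight vector (mu1, mu2, mu3),
    adjoint_term : (m,) values of int_0^T int_Omega p~(t,x) b_i(x) dx dt,
                   where p~ is the adjoint associated with u_t,
    lam_min_i    : smallest eigenvalues of J_i''(u_t) for i = 1, 2, 3.
    """
    u_t, u_d = np.asarray(u_t, float), np.asarray(u_d, float)
    g = mu[2] * (u_t - u_d) - np.asarray(adjoint_term)   # gradient of J_hat(.;mu)
    neg_part = -np.minimum(0.0, g)                       # [s]_-
    pos_part = np.maximum(0.0, g)                        # [s]_+
    zeta = np.where(np.abs(u_t - ua) < tol, neg_part,    # case u_i = u_i^a in (7)
           np.where(np.abs(u_t - ub) < tol, -pos_part,   # case u_i = u_i^b
                    -g))                                  # inactive case
    lam_lb = float(np.dot(mu, lam_min_i))                # lower bound (8)
    return np.linalg.norm(zeta, 2) / lam_lb              # right-hand side of (6)
```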

4 Numerical solution strategy

To solve (1) we apply the weighted sum method. Thus, the set of Pareto optimal points is approximated by solutions $\bar u(\mu)$ to ($\hat P_\mu$) for various parameters $\mu \in D_{ad}$. Consequently, many constrained nonlinear optimization problems have to be solved numerically, which is computationally expensive. For this reason model-order reduction (MOR) is applied to significantly reduce the required computational resources. Our MOR approach is based on a Galerkin-type approximation to ($\hat P_\mu$) using MOR basis functions, where for certain weighting parameters $\mu \in D_{ad}$ the MOR basis functions contain information from optimal states $\bar y(\mu)$ and adjoints $\bar p(\mu)$ associated with optimal controls $\bar u(\mu)$. The MOR basis functions are determined in an offline phase. In the online phase the weighted sum method is applied, where numerical solutions to ($\hat P_\mu$) are computed rapidly by a MOR Galerkin discretization [11].

Offline phase I: eigenvalue computation on the control grid. Consider a discrete (regular) control grid $\Xi_{grid} = \{u^k\}_{k=1}^K$ in the set $U_{ad}$ of admissible controls. In an offline phase we compute and store the $\mu$-independent smallest eigenvalues $\lambda_{\min}(\hat J_i''(u^k))$ at every grid node $u^k \in \Xi_{grid}$ for $i = 1$ and $2$. Since $\hat J_3''(u^k)$ is the identity, we have $\lambda_{\min}(\hat J_3''(u^k)) = 1$. Now, (8) yields a numerically efficient computation of the approximate lower bound $\lambda_{app}^{LB}(\tilde u;\mu)$ in the online phase at any (suboptimal) control $\tilde u \in U_{ad}$ by convex combination of the stored smallest eigenvalues $\lambda_{\min}(\hat J_i''(u^k))$ for $k = 1, \ldots, K$:

\[
\lambda_{\min}\big( \hat J_i''(\tilde u) \big) \approx \lambda_i^{app}(\tilde u) := \sum_{k=1}^K \omega_k \, \lambda_{\min}\big( \hat J_i''(u^k) \big).
\tag{9}
\]

In (6) we utilize the resulting bound $\lambda_{app}^{LB}(\tilde u;\mu) = \sum_{i=1}^3 \mu_i \lambda_i^{app}(\tilde u)$ instead of $\tilde\kappa$.

Remark 4. The computation of the $\lambda_{\min}(\hat J_i''(u^k))$ can be realized in parallel with respect to $k \in \{1, \ldots, K\}$. ♦
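The paper does not specify how the convex-combination weights $\omega_k$ in (9) are chosen; one natural realization on a regular tensor grid is multilinear interpolation, whose weights are nonnegative and sum to one. The following sketch wraps the precomputed eigenvalues accordingly; the interface and the use of SciPy's `RegularGridInterpolator` are our assumptions:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_eigenvalue_surrogate(axes, lam1_grid, lam2_grid):
    """Offline phase I data wrapped for the online evaluation of (8)-(9).

    axes      : tuple of m one-dimensional arrays defining the regular
                control grid Xi_grid (a tensor grid inside U_ad),
    lam*_grid : arrays of shape (K_1, ..., K_m) with the precomputed
                smallest eigenvalues of J_1'' and J_2'' at the grid nodes
                (J_3'' is the identity, so its smallest eigenvalue is 1).
    Returns a callable lam_lb_app(u_t, mu) approximating lambda_min^LB.
    """
    interp = [RegularGridInterpolator(axes, lam1_grid, method="linear"),
              RegularGridInterpolator(axes, lam2_grid, method="linear")]

    def lam_lb_app(u_t, mu):
        point = np.atleast_2d(u_t)
        # multilinear interpolation = convex combination of grid values, cf. (9)
        lam = np.array([interp[0](point)[0], interp[1](point)[0], 1.0])
        return float(np.dot(mu, lam))                 # convex combination as in (8)
    return lam_lb_app
```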

Offline phase II: MOR basis computation. Estimate (6) can be suitably used as an ingredient to apply a MOR strategy for the solution of nonlinear multiobjective problems. In order to use a MOR technique for its solution, we propose to use the POD-greedy algorithm based on [3] and [2,8]. As an input the POD-greedy algorithm requires a discrete parameter training set $S_{train} \subset D_{ad}$, as well as the smallest eigenvalues $\lambda_{\min}(\hat J_i''(u^k))$ for $i = 1,2,3$ on the control grid $\Xi_{grid}$ and the corresponding precomputed grid node data $D_{grid}$, both needed for the smallest-eigenvalue approximation in the a-posteriori error estimation.
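The POD-greedy loop itself is only referenced here ([3], [2,8]); a schematic sketch of its structure, with the high-fidelity solve, the reduced solve, the error estimator (6) and the basis update passed in as callables (all interfaces are our assumptions, not the paper's), could look as follows:

```python
import numpy as np

def pod_greedy(S_train, solve_fe, solve_mor, error_estimate, extend_basis,
               tol, max_basis):
    """Schematic POD-greedy loop in the spirit of [3].

    S_train         : list of training weight vectors mu in D_ad,
    solve_fe(mu)    : high-fidelity solve of (P_mu), returns snapshot
                      trajectories of state and adjoint,
    solve_mor(mu, basis)           : reduced solve with the current basis,
    error_estimate(mu, sol, basis) : a-posteriori bound as in (6),
    extend_basis(basis, snapshots) : POD update of the basis (a list of
                                     spatial basis vectors),
    tol, max_basis  : stopping tolerance and maximal basis size.
    """
    # initialize the basis from the first training parameter
    basis = extend_basis(None, solve_fe(S_train[0]))
    while len(basis) < max_basis:
        # estimated error for every training parameter with the current basis
        errors = [error_estimate(mu, solve_mor(mu, basis), basis)
                  for mu in S_train]
        worst = int(np.argmax(errors))
        if errors[worst] <= tol:
            break
        # enrich the basis with high-fidelity snapshots at the worst parameter
        basis = extend_basis(basis, solve_fe(S_train[worst]))
    return basis
```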

Online phase: multiobjective optimal control. As regards the original multiobjective problem, we are interested in solving the parametric optimal control problem for a large number of parameter values, since we want to identify a set of optimal controls that does not a priori penalize any of the cost functionals. In other words, we are interested in identifying the Pareto front of the multiobjective problem, which consists of the cost functional values corresponding to the solutions of a large number of optimal control problems (obtained for several, e.g. randomly chosen, parameter values). In order to identify the Pareto front we therefore need to solve the parametric optimal control problem many times. For this purpose, the proposed model order reduction strategy can efficiently reduce the required computational time.

5 Numerical example

We consider (1) with spatial domain $\Omega = (0,1) \times (0,1) \subset \mathbb{R}^2$, final time $T = 1$, desired states $y_\Omega = 0$ and $y_Q(t,x) = 100\,t \cos(2\pi x_1)\cos(2\pi x_2)$, initial condition $y_\circ(x) = 0$ and inhomogeneity $f(t,x) = 10\,t\,x_1$. Furthermore, for $m = 4$ each shape function $b_i(x)$, $i = 1, \ldots, 4$, has its support in one quarter of the domain $\Omega$, and $u^d = (0.5, -4, -0.5, 4)^\top \in \mathbb{R}^4$. The high-fidelity spatial approximation of the problem, used for the basis computations and the error comparisons, is computed by a finite element (FE) model with piecewise linear (P1) elements (729 degrees of freedom on 1352 elements); the arising nonlinear systems are solved by a Newton method. For the temporal discretization the implicit Euler method is applied with equidistant step size $\Delta t = 0.01$. Figure 1 shows optimal states $\bar y = y(\bar u;\mu)$ for solutions $\bar u(\mu)$ to ($\hat P_\mu$) corresponding to two values of the parameter $\mu$. In the left plot of Figure 2 we show error comparisons obtained by solving ($\hat P_\mu$). In particular, we compare the errors between the optimal controls computed by the model order reduction (MOR) method and those computed by the FE method. We show the minimum, maximum and average values (over a range of 1000 parameter values) as the number of basis functions used in the MOR scheme varies. In the right plot of Figure 2 we present the Pareto front obtained by solving ($\hat P_\mu$) with the MOR technique for different parameter values $\mu$. In order to verify the correctness of the Pareto front, we also include the values of the cost functionals $\hat J_1(u)$, $\hat J_2(u)$ and $\hat J_3(u)$ obtained for 1000 control values chosen randomly as follows: $u_1 \in [-3,3]$, $u_2 \in [-8,-1]$, $u_3 \in [-5,-2]$, $u_4 \in [-1,6]$ (not optimal controls).
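For the high-fidelity solver described above (P1 finite elements in space, implicit Euler in time, Newton's method for the cubic nonlinearity), one possible time-stepping routine is sketched below; the use of a lumped mass matrix for the nonlinear term and all interfaces are our assumptions, not details given in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_euler_newton(M, K, ML_diag, y0, rhs, dt,
                          newton_tol=1e-10, max_newton=20):
    """One possible high-fidelity time stepper for the state equation (1b).

    M, K    : sparse P1 mass and stiffness matrices,
    ML_diag : diagonal of a lumped mass matrix, used for the cubic term,
    y0      : nodal initial values y_circ,
    rhs     : list of nodal values of sum_i u_i b_i + f at t_1, ..., t_Nt,
    dt      : equidistant time step (Delta t = 0.01 in Section 5).
    Returns the nodal trajectory as an array of shape (Nt + 1, Nx).
    """
    y = np.asarray(y0, dtype=float).copy()
    traj = [y.copy()]
    for f in rhs:
        y_old = y.copy()
        load = M @ np.asarray(f, dtype=float)
        for _ in range(max_newton):
            # residual of the implicit Euler step with lumped cubic term
            F = M @ (y - y_old) + dt * (K @ y + ML_diag * y**3 - load)
            if np.linalg.norm(F) < newton_tol:
                break
            # Jacobian: M + dt * (K + 3 * diag(ML) * diag(y^2))
            J = (M + dt * (K + sp.diags(3.0 * ML_diag * y**2))).tocsc()
            y = y + spla.spsolve(J, -F)
        traj.append(y.copy())
    return np.array(traj)
```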


Fig. 1. Optimal states $\bar y = y(\bar u;\mu)$ for solutions $\bar u(\mu)$ to ($\hat P_\mu$) for parameter values $\mu = (0.05, 0.9, 0.05)$ (left) and $\mu = (0.9, 0.05, 0.05)$ (right).

Fig. 2. Error bound and error between the MOR and FE solutions as the number of basis functions varies: minimum, maximum and average values over a set of 100 random parameter values (left); Pareto front and cost functional values corresponding to admissible control points (right).

In Figure 3 we show the optimal controls $\bar u(\mu)$ corresponding to 1000 randomly chosen parameter values and to the parameter values selected for the basis computations during the POD-greedy algorithm. Regarding the computational performance, the online evaluation time required for solving problem ($\hat P_\mu$) for each parameter value using 10 basis functions is about 0.7 seconds, while the evaluation of the FE solution requires about 7.3 seconds. We conclude that the gained speedup allows a much faster evaluation of the optimal solutions and an efficient identification of the Pareto front, for which many repeated solutions have to be computed.

Fig. 3. Optimal controls $\bar u(\mu)$ corresponding to 1000 randomly chosen parameter values (left) and to the parameter values selected for the basis computations during the POD-greedy algorithm (right).

References

1. M. Ehrgott. Multicriteria Optimization. Springer, Berlin, 2005.

2. M. Grepl. Certified reduced basis methods for nonaffine linear time-varying and nonlinear parabolic partial differential equations. Mathematical Models and Methods in Applied Sciences; 22(3), 2012.


3. B. Haasdonk and M. Ohlberger. Reduced basis method for finite volume approximations of parametrized linear evolution equations. Mathematical Modelling and Numerical Analysis; 42; pp. 277–302, 2008.

4. L. Iapichino, S. Ulbrich, and S. Volkwein. Multiobjective PDE-constrained optimization using the reduced-basis method. Submitted (2015), available at http://nbn-resolving.de/urn:nbn:de:bsz:352-2501909

5. E. Kammann, F. Tröltzsch, and S. Volkwein. A-posteriori error estimation for semilinear parabolic optimal control problems with application to model reduction by POD. Mathematical Modelling and Numerical Analysis; 47; pp. 555–581, 2013.

6. H. Kuhn and A. Tucker. Nonlinear programming. In J. Neyman, editor, Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley, pp. 481–492, 1951.

7. J. Nocedal and S.J. Wright. Numerical Optimization. Springer Series in Operations Research, second edition, 2006.

8. A.T. Patera and G. Rozza. Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations. MIT, 2007.

9. S. Peitz and M. Dellnitz. Multiobjective optimization of the flow around a cylinder using model order reduction. In Proceedings in Applied Mathematics and Mechanics (PAMM) - Accepted, 2015.

10. G.W. Stewart and J. Sun. Matrix Perturbation Theory. Computer Science and Scientific Computing. Academic Press, 1990.

11. E. Sachs and S. Volkwein. POD Galerkin approximations in PDE-constrained optimization. GAMM-Mitteilungen; 33; pp. 194–208, 2010.

12. F. Tröltzsch. Optimal Control of Partial Differential Equations: Theory, Methods and Applications. Graduate Studies in Mathematics, Vol. 112, American Mathematical Society, 2010.

13. F. Tröltzsch and S. Volkwein. POD a-posteriori error estimates for linear-quadratic optimal control problems. Computational Optimization and Applications; 44; pp. 83–115, 2009.

14. L. Zadeh. Optimality and non-scalar-valued performance criteria. IEEE Transactions on Automatic Control; 8; pp. 59–60, 1963.
