Abstract. An adaptive finite element method is developed for a class of optimal control problems with elliptic variational inequality constraints and objective functionals defined on the space of continuous functions, necessitated by a point-tracking requirement with respect to the state variable. A suitable first order stationarity concept is derived for the problem class via a penalty technique. The dual-weighted residual approach for goal-oriented adaptive finite elements is applied and relies on the stationarity system. It yields primal residuals weighted by approximate dual quantities and vice versa, as well as complementarity mismatch errors. A report on numerical tests, including the critical case of biactivity, completes this work.

2010 Mathematics Subject Classification: 49K20, 65K15, 90C33

Keywords: adaptive finite element methods, optimal control of variational inequalities in function space, point-tracking, C-stationarity, goal-oriented error estimation.


Mesh adaptivity in optimal control of elliptic variational inequalities with point-tracking of the state

Charles Brett
MASDOC CDT, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK
c.brett@warwick.ac.uk

Charles M. Elliott
Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK
C.M.Elliott@warwick.ac.uk

Michael Hintermüller
Department of Mathematics, Humboldt-Universität zu Berlin, Unter den Linden 6, D-10099 Berlin, Germany
hint@math.hu-berlin.de

Caroline Löbhard
Department of Mathematics, Humboldt-Universität zu Berlin, Unter den Linden 6, D-10099 Berlin, Germany
loebhard@math.hu-berlin.de


The final publication is available at the European Mathematical Society Publishing House: DOI 10.4171/IFB/332

© European Mathematical Society 2015


1 Introduction

In this paper we study a class of optimal control problems with variational inequality and pointwise control constraints, as well as an objective containing the tracking of the state variable at specific points. Such models are of interest, for example, in inverse problems where a parameter has to be identified from measurements at predefined locations in the underlying domain; see [1] for an application in mathematical finance.

Since variational inequalities often model an equilibrium condition, the problems treated here are called mathematical programs with equilibrium constraints (MPECs) in the literature. This problem class has been studied since the 1970s in finite dimensional spaces and later also in function space; see [38, 43] and [6, 41] for extensive accounts of the progress in the respective subjects. Despite the research efforts ever since, the problem class considered in this paper has not yet been analyzed.

The degeneracy of the constraint set which is typical for MPECs makes standard methods for the derivation of necessary optimality conditions inapplicable. As an alternative, penalty methods (see, e.g., [6] and the references therein) and methods relying on generalized differentiability concepts (see, e.g., [40]) have been used. Concerning the state system of the present work, useful analytic properties (such as stability and directional differentiability) of solution operators of variational inequalities are discussed in [15, 34, 45]. These properties were employed successfully to derive alternative forms of stationarity. For such concepts in function space we refer to [28, 42], and to [47] for finite dimensional problems.

Besides their value for the design of mesh independent solution algorithms, function space stationarity conditions can be employed to derive goal oriented a posteriori error indicators with respect to the objective functional in the context of finite element discretizations. In optimization with partial differential equation constraints, this concept was pioneered in [10, 4], and an associated adaptive method for an MPEC is given in [27]. The method relies on local error indicators for the implementation of an automatic adaptation of the discrete space. For different approaches to adaptive finite element methods (AFEMs) we refer to the monographs [49, 4, 44, 3] and the references therein. The typical dual-weighted residual error estimators have been successfully applied to optimal control problems with partial differential equation constraints, e.g., in [37, 50, 24, 23, 11, 25, 26, 46]. Alternatively, functional error estimation for optimal control problems is analyzed in [20], and residual based estimators are studied in [37, 36, 31, 30, 33, 32, 19]. In particular, a residual based estimator for the optimal control of an obstacle problem is suggested in [19].

In the present paper, we analyze the new problem class and derive suitable stationarity conditions with a smooth penalty approach and an averaging technique for the function evaluations in the objective. Some of the analytic complications associated with the underlying problem class are due to the point-tracking, which requires a state space that embeds into the continuous functions and thus leads to reduced regularity of adjoint states. Based on the resulting so-called limiting $\varepsilon$-almost C-stationarity system, we develop a goal-oriented indicator for the error in the objective functional of the optimal control problem which contains dual-weighted residuals and terms covering the mismatch in complementarity of primal variables and the respective multipliers. These are similar to the ones in [27], but with extra terms concerning the supplementary control constraints.

The paper is structured as follows: We provide a collection of basic notation in the rest of this section. In Section 2 we state the considered MPEC and prove a regularity result for the solution of the variational inequality. In addition, the penalization scheme with a new averaging technique for the objective is defined. Section 3 shows the limiting process that takes the first order optimality conditions of the auxiliary problem to the $\varepsilon$-almost C-stationarity conditions for the MPEC. These conditions are employed for the derivation of an abstract error representation formula and the a posteriori error indicators in Section 4. We finish in Section 5 with a brief description of the solution method and document numerical results.

Notation

For a Banach space $Y$ and its dual $Y^*$, the dual pairing of elements $y \in Y$ and $y^* \in Y^*$ is written as $\langle y^*, y\rangle_{Y^*} = y^*(y)$. In a reflexive Banach space $Y$ with the canonical identification $i\colon Y \to Y^{**}$, we write $\langle y, y^*\rangle_Y := \langle i(y), y^*\rangle_{Y^{**}}$. If $X$ is a Hilbert space, we write $(x_1, x_2)_X$ for the scalar product of $x_1, x_2 \in X$. For sequences $(y_k)_{k\in\mathbb{N}} \subset Y$ and $(y_k^*)_{k\in\mathbb{N}} \subset Y^*$, we denote the strong convergence to a limit $y \in Y$ (i.e., $\|y_k - y\|_Y \to 0$ for $k \to \infty$) by $y_k \to y$, whereas weak and weak-* convergence are denoted by $y_k \rightharpoonup y$ (i.e., $\forall z^* \in Y^*$, $\langle z^*, y_k - y\rangle_{Y^*} \to 0$) and $y_k^* \stackrel{*}{\rightharpoonup} y^*$ (i.e., $\forall z \in Y$, $\langle y_k^* - y^*, z\rangle_{Y^*} \to 0$).

The standard Lebesgue and Sobolev spaces on an open, bounded Lipschitz domain $\Omega$ are denoted in the usual way, and we use the norms $\|v\|_{H_0^1(\Omega)} := \|\nabla v\|_{L^2(\Omega)}$ and $\|v\|_{W_0^{1,q}(\Omega)} := \|\nabla v\|_{L^q(\Omega)}$. Furthermore, we write $C^0(\bar\Omega)$ for the continuous functions on the closure of $\Omega$ and identify its dual space $(C^0(\bar\Omega))^*$ with the space of bounded Borel measures $\mathcal{M}_b(\bar\Omega)$ by the Riesz–Alexandroff representation theorem.

For a function $\psi \in L^2(\Omega)$, we define $(\psi)^+ = \max(0, \psi)$ in a pointwise almost everywhere (a.e.) sense. The characteristic function of a subset $M \subset \Omega$ is denoted by $\chi_M\colon \Omega \to \{0,1\}$, while $|M|$ is the Lebesgue measure of $M$, provided $M$ is measurable. We write $\mathbb{R}_{>0} = \{t \in \mathbb{R} \mid t > 0\}$ and, for a subset $M$ of a Banach space $Y$, $M^+ = \{y^* \in Y^* \mid \forall y \in M\colon y^*(y) \ge 0\} \subset Y^*$. The number of elements in a finite set $I$ is denoted by $\mathrm{card}(I)$.

Throughout the text, $C > 0$ is a generic constant which depends only on the problem input data, but not on solutions or on the discretization.


2 Model problem and the penalization scheme

2.1 Statement of the problem

For an open bounded domain $\Omega \subset \mathbb{R}^2$, we consider $a_{ij} \in L^\infty(\Omega)$ ($i,j \in \{1,2\}$) such that for all $\zeta = (\zeta_1, \zeta_2) \in \mathbb{R}^2$ and almost all $x \in \Omega$,
$$\sum_{i,j=1}^{2} a_{ij}(x)\,\zeta_i \zeta_j \ge \alpha |\zeta|^2,$$
where $\alpha > 0$, and we define the uniformly coercive operator $A\colon H_0^1(\Omega) \to H^{-1}(\Omega)$ by
$$A = -\sum_{i,j=1}^{2} \partial_{x_i}\, a_{ij}\, \partial_{x_j} = -\mathrm{div}\big((a_{ij})\nabla\,\cdot\,\big). \tag{2.1}$$

Here and below, the coefficient matrix in $A$ is denoted by $(a_{ij}) \in L^\infty(\Omega)^{2\times 2}$. Let $V = H_0^1(\Omega)$, $A\colon V \to V^*$ and $K = \{z \in V \mid z \ge 0 \text{ a.e. in } \Omega\}$. By the Lions–Stampacchia theorem (cf. [45, Thm. 4:3.1]), the variational inequality problem
$$\text{Find } y \in K \text{ such that } \langle Ay, z - y\rangle_{H^{-1}(\Omega)} \ge \langle f + u, z - y\rangle_{H^{-1}(\Omega)} \quad \forall z \in K, \tag{2.2}$$
has a unique solution for all $u, f \in V^*$. This solution can be equivalently characterized by the following complementarity system with slack variable $\xi$,
$$\xi = Ay - f - u \in V^*, \quad y \ge 0 \text{ in } V, \quad \xi \ge 0 \text{ in } V^*, \quad \langle \xi, y\rangle_{H^{-1}(\Omega)} = 0. \tag{2.3}$$
In a slight misuse of notation, we denote the solution operator of (2.2) by $y(\cdot)$, so this maps $u \in V^*$ to $y = y(u)$. It is well known from [13] (see also [45, Thm. 5:2.5 (i)]) that for $\partial\Omega \in C^1$, $a_{ij} \in L^\infty(\Omega)$ for all $i,j \in \{1,2\}$ and $f + u \in W^{-1,q}(\Omega)$, there exists $\hat q > 2$ such that for all $2 < q < \hat q$ the solution of (2.2) satisfies $y(u) \in W_0^{1,q}(\Omega)$. The following proposition shows that, in fact, less regularity of the boundary $\partial\Omega$ is necessary to prove that the solution operator of (2.2) maps $L^2(\Omega)$ into $W_0^{1,q}(\Omega)$.

Proposition 2.1. Assume that $\Omega$ is of class $C^{0,1}$ (i.e., has a Lipschitz boundary, which includes non-convex polygonal domains), $a_{ij} \in L^\infty(\Omega)$ for all $i,j \in \{1,2\}$ and $f + u \in L^2(\Omega)$. Then there exists $Q > 2$ such that for all $2 < q < Q$ the solution of (2.2) satisfies $y(u) \in W_0^{1,q}(\Omega)$ and the following estimate holds,
$$\|y(u)\|_{W_0^{1,q}(\Omega)} \le C \|f + u\|_{W^{-1,q}(\Omega)}. \tag{2.4}$$

Proof. In the same way as in [35, IV, Theorem 2.3], we define $\vartheta_\tau\colon \mathbb{R} \to [0,1]$ for $\tau > 0$ as
$$\vartheta_\tau(t) = \begin{cases} 1, & t \le 0, \\ 1 - t/\tau, & 0 < t < \tau, \\ 0, & t \ge \tau \end{cases}$$


and approximate the variational inequality by the problem
$$\text{Find } y \in H_0^1(\Omega) \text{ such that } Ay = \max(0, -f - u)\,\vartheta_\tau(y) + f + u. \tag{2.5}$$
Lemma 2.2 in [35, IV] yields a unique solution $y_\tau \in H_0^1(\Omega)$. The Lipschitz domain $\Omega$ is regular in the sense of [22, Def. 2], and thus by [22, Thm. 3] there exists $\hat q > 2$ such that the duality map $-\Delta\colon H_0^1(\Omega) \to H^{-1}(\Omega)$ maps $W_0^{1,\hat q}(\Omega)$ onto $W^{-1,\hat q}(\Omega)$ and $\|\varphi\|_{W_0^{1,q}(\Omega)} \le C \|{-\Delta}\varphi\|_{W^{-1,q}(\Omega)}$ for all $\varphi \in W_0^{1,q}(\Omega)$. So from Meyers' estimate (see [39, Thm. 1]) we obtain $Q \in (2, \hat q\,]$, depending only on $\Omega$, the ellipticity constant and the norm of $(a_{ij})$, such that $A$ maps $W_0^{1,q}(\Omega)$ onto $W^{-1,q}(\Omega)$ for all $2 < q < Q$, and we have the respective norm estimate. The right hand side of (2.5) is in $W^{-1,q}(\Omega)$, so the solution satisfies $y_\tau \in W_0^{1,q}(\Omega)$ and
$$\|y_\tau\|_{W_0^{1,q}(\Omega)} \le C \|A y_\tau\|_{W^{-1,q}(\Omega)} \le C \|\max(0, -f - u)\,\vartheta_\tau(y_\tau) + f + u\|_{W^{-1,q}(\Omega)} \le C \|f + u\|_{W^{-1,q}(\Omega)}.$$
Thus, for a sequence $\tau_k \to 0$, $(y_{\tau_k})_{k\in\mathbb{N}}$ is uniformly bounded in $W_0^{1,q}(\Omega)$ and therefore contains a weakly convergent subsequence with limit $\bar y \in W_0^{1,q}(\Omega)$. Following the arguments in [35], we find that the limit satisfies the variational inequality, i.e., $\bar y = y(u)$. The uniqueness of this solution implies the weak convergence of the full sequence. The weak lower semi-continuity of the norm implies
$$\|y(u)\|_{W_0^{1,q}(\Omega)} \le \liminf_{k\to\infty} \|y_{\tau_k}\|_{W_0^{1,q}(\Omega)} \le C \|f + u\|_{W^{-1,q}(\Omega)}. \qquad\square$$

In the rest of this paper we assume that $\Omega$ is a Lipschitz domain in $\mathbb{R}^2$ and that $q \in (2, Q)$, with $Q > 2$ from Proposition 2.1. In this way, $W_0^{1,q}(\Omega)$ embeds into $C(\bar\Omega)$, and thus an optimal control problem with objective $J\colon C(\bar\Omega) \times L^2(\Omega) \to \mathbb{R}$ can be treated. Since $L^2(\Omega) \cong L^2(\Omega)^* \subset W^{-1,q}(\Omega)$, we can consider the solution operator as a mapping $y\colon L^2(\Omega) \to W_0^{1,q}(\Omega)$.

We analyze optimal control problems with point-tracking of the state variable $y$ in the objective, and with the constraint (2.2) as well as optional $L^2(\Omega)$-box-constraints on the control variable $u$, i.e.,
$$\text{Minimize}\quad J(y, u) = \frac{1}{2}\sum_{w\in I} (y(w) - y_w)^2 + \frac{\nu}{2}\|u\|_{L^2(\Omega)}^2 \tag{2.6a}$$
$$\text{over}\quad (y, u) \in W_0^{1,q}(\Omega) \times L^2(\Omega), \tag{2.6b}$$
$$\text{subject to (s.t.)}\quad y = y(u) \text{ solves (2.2)}, \tag{2.6c}$$
$$u \in U_{\mathrm{ad}} = \{u \in L^2(\Omega) \mid a \le u \le b \text{ a.e. in } \Omega\}, \tag{2.6d}$$
where $w$ denotes a tracking point in a finite set $I \subset \Omega$, $y_w \in \mathbb{R}$ is a desired (or measured) value of the state $y$ at $w$, $\nu > 0$ is the cost of the control, $f \in L^2(\Omega)$, and the bounds $a, b \in L^2(\Omega)$ satisfy $a < b$ almost everywhere in $\Omega$, or $a = -\infty$ and $b = \infty$. The function evaluation of $y \in W_0^{1,q}(\Omega)$ in $w$ is also denoted by $\langle \delta_w, y\rangle_{W^{-1,q'}(\Omega)}$, where $\delta_w \in W^{-1,q'}(\Omega) = (W_0^{1,q}(\Omega))^*$ with $q' = \frac{q}{q-1}$. The objective functional $J\colon W_0^{1,q}(\Omega) \times L^2(\Omega) \to \mathbb{R}$ is continuous and convex, and thus weakly lower semi-continuous. In its reduced form, the control problem reads
$$\text{Minimize}\quad \hat J(u) = \frac{1}{2}\sum_{w\in I} |y(u)(w) - y_w|^2 + \frac{\nu}{2}\|u\|_{L^2(\Omega)}^2 \quad\text{over } u \in U_{\mathrm{ad}}. \tag{2.7}$$

Remark 2.2. The differential operator $A$ is a topological isomorphism from $W_0^{1,q}(\Omega)$ onto $W^{-1,q}(\Omega)$ according to the proof of Proposition 2.1. We denote the adjoint of $A$ by $A^*\colon W_0^{1,q'}(\Omega) \to W^{-1,q'}(\Omega)$, and $a\colon W_0^{1,q}(\Omega) \times W_0^{1,q}(\Omega) \to \mathbb{R}$ is the bilinear form induced by $A$. Note that $\hat a\colon W_0^{1,q}(\Omega) \to \mathbb{R}$, $\hat a(v) := a(v, v)$, is convex and continuous, and thus weakly lower semi-continuous.

Remark 2.3. In the case of increased regularity, $a, b, f \in L^p(\Omega)$ for $p \ge 2$, [15, Thm. II.1] yields that $Ay \in L^p(\Omega)$ and $\|Ay\|_{L^p(\Omega)}$ can be bounded by $\|f + u\|_{L^p(\Omega)}$ (see [15, Rem. II.2]). In particular, the slack variable $\xi$ satisfies $\xi = Ay - u - f \in L^p(\Omega)$. Furthermore, if $u + f \in L^\infty(\Omega)$, then [48, Rem. 2.6] yields Hölder continuity of $y\colon L^2(\Omega) \to W_0^{1,q}(\Omega)$ (or as a mapping to a more regular space, depending on the domain $\Omega$) with exponent $\frac{1}{2}$.

The following lemma serves as a tool to prove solvability of the model problem (2.6).

Lemma 2.4. Let $f \in L^2(\Omega)$. If $u_k \rightharpoonup \tilde u$ in $L^2(\Omega)$, then $(y(u_k))_{k\in\mathbb{N}}$ has a subsequence converging weakly in $W_0^{1,q}(\Omega)$ to $y(\tilde u)$.

Proof. The sequence $(u_k)_{k\in\mathbb{N}}$ is bounded owing to its weak convergence. We use the estimate in Proposition 2.1 to obtain the uniform boundedness of $(y(u_k))_{k\in\mathbb{N}}$ in $W_0^{1,q}(\Omega)$. Next we note that $(y_k)_{k\in\mathbb{N}}$ has a subsequence which converges weakly in $W_0^{1,q}(\Omega)$ to a limit $\tilde y \in K \cap W_0^{1,q}(\Omega)$ due to the weak closedness of $K$. To complete the proof, insert the elements of this subsequence into the variational inequality (2.2) with control $u_k$ to show that the limit satisfies $\tilde y = y(\tilde u)$. $\square$

Lemma 2.4 and a standard infimizing sequence (Weierstraß) argument now prove the following result.

Proposition 2.5. Problem (2.7) has a solution.

For $u \in U_{\mathrm{ad}}$ in (2.7) we define $y = y(u)$ and $\xi = Ay - u - f \in L^2(\Omega)$. The set $\{x \in \Omega \mid y(x) = 0\}$ is called the active set, its complement $\{x \in \Omega \mid y(x) > 0\}$ is called the inactive set, and the set $\{x \in \Omega \mid \xi(x) = 0 \text{ and } y(x) = 0\}$ (understood in an a.e. sense) is called the biactive set.

For the derivation of a stationarity system for (2.6) we next study a penalized version of the problem.

2.2 Penalized variational inequality

The variational inequality (2.2) can be approximated by the semi-linear partial differential equation
$$Ay - \gamma\max(0, -y) = u + f \quad\text{in } H^{-1}(\Omega), \tag{2.8}$$
which is the first order optimality condition of the problem
$$\text{Minimize}\quad \frac{1}{2}\langle Ay, y\rangle_{H^{-1}(\Omega)} - (u + f, y)_{L^2(\Omega)} + \frac{\gamma}{2}\|\max(0, -y)\|_{L^2(\Omega)}^2 \quad\text{over } y \in H_0^1(\Omega). \tag{2.9}$$
Above, $\gamma > 0$ is given and $\max(0, \cdot)$ is understood in the pointwise a.e. sense. Equation (2.8) has a unique solution $y_\gamma^{\max}(u) \in H_0^1(\Omega)$. According to [21, Thm. 3.1] it holds for all $u \in L^2(\Omega)$ that
$$y_\gamma^{\max}(u) \to y(u) \quad\text{in } H_0^1(\Omega) \text{ as } \gamma \to \infty. \tag{2.10}$$
In a smoothed version of (2.8), the $\max(0, \cdot)$-operator is approximated by a $C^1$-function $\max_\epsilon(0, \cdot)$, which depends on a parameter $\epsilon > 0$, such that
$$\max{}_\epsilon(0, \cdot) \to \max(0, \cdot) \quad\text{pointwise a.e., as } \epsilon \to 0.$$
For now we consider the local variant from [29, Eq. (2.4)], i.e.,
$$\max{}_\epsilon(0, t) = \begin{cases} t & \text{if } t \ge \epsilon, \\[0.5ex] \dfrac{t^2}{4\epsilon} + \dfrac{t}{2} + \dfrac{\epsilon}{4} & \text{if } t \in (-\epsilon, \epsilon), \\[0.5ex] 0 & \text{if } t \le -\epsilon. \end{cases} \tag{2.11}$$
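As a quick numerical sanity check (our own illustration, not part of the original paper), the following sketch confirms three properties of (2.11) that are used later: the quadratic piece matches the outer pieces in value and slope at $t = \pm\epsilon$ (so $\max_\epsilon(0,\cdot)$ is $C^1$), and the uniform distance to $\max(0,\cdot)$ is $\epsilon/4$, attained at $t = 0$.

```python
import numpy as np

def max_eps(t, eps):
    """Local C^1 smoothing of max(0, t) from (2.11)."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= eps, t,
           np.where(t <= -eps, 0.0, t**2 / (4*eps) + t/2 + eps/4))

eps = 0.1
quad = lambda t: t**2 / (4*eps) + t/2 + eps/4   # middle piece of (2.11)
dquad = lambda t: t / (2*eps) + 0.5             # its slope

# C^1 matching at the break points t = -eps and t = +eps:
value_gap = max(abs(quad(-eps) - 0.0), abs(quad(eps) - eps))
slope_gap = max(abs(dquad(-eps) - 0.0), abs(dquad(eps) - 1.0))

# Uniform distance to max(0, .): at most eps/4, attained at t = 0:
t = np.linspace(-1.0, 1.0, 2001)
dist = np.max(np.abs(max_eps(t, eps) - np.maximum(0.0, t)))
```

The bound $\sup_t |\max_\epsilon(0,t) - \max(0,t)| = \epsilon/4$ is exactly the constant that reappears in the estimate of Proposition 2.6.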

The smoothed penalized constraint then reads
$$Ay - \gamma\max{}_\epsilon(0, -y) = u + f \quad\text{in } H^{-1}(\Omega). \tag{2.12}$$

Proposition 2.6 (Solution operator of the semi-linear equation (2.12)). For $\gamma, \epsilon > 0$, $\varphi \in L^2(\Omega)$, the linear $H_0^1(\Omega)$-elliptic partial differential operator $A$ as defined in (2.1) and $\max_\epsilon$ from (2.11), the equation
$$Ay - \gamma\max{}_\epsilon(0, -y) = \varphi \tag{2.13}$$
admits a unique solution $\tilde y \in W_0^{1,q}(\Omega)$. Furthermore, there exists a constant $C = C(\Omega) > 0$, independent of $\gamma$ and $\epsilon$, such that the following estimate holds true:
$$\max\big\{\gamma\|\max{}_\epsilon(0, -\tilde y)\|_{L^2(\Omega)},\ \|\tilde y\|_{W_0^{1,q}(\Omega)}\big\} \le C\big(\|\varphi\|_{L^2(\Omega)} + \gamma\epsilon\big).$$

Proof. To analyze the solvability of the semi-linear equation (2.12), we define $\psi\colon H_0^1(\Omega) \to \mathbb{R}$ via
$$\psi(y) = -\int_\Omega \int_0^{y(x)} \max{}_\epsilon(0, -s)\, ds\, dx.$$
This mapping is monotone and bounded from below, and its derivative reads
$$\frac{d}{dy}\psi(y)\cdot\varphi = -\int_\Omega \max{}_\epsilon(0, -y)\,\varphi\, dx.$$
Now equation (2.12) can be interpreted as the optimality system of the problem
$$\text{Minimize}\quad F(y) := \frac{1}{2}\langle Ay, y\rangle_{H^{-1}(\Omega)} - \langle \varphi, y\rangle_{H^{-1}(\Omega)} + \gamma\psi(y) \quad\text{over } y \in H_0^1(\Omega), \tag{2.14}$$
which has a unique solution $\tilde y$, because $F$ is strictly convex and coercive in $H_0^1(\Omega)$. For all $\epsilon \ge 0$, it holds that $\max_\epsilon(0, -\tilde y) \in H^1(\Omega)$. Therefore the function $\tilde y$ can be interpreted as a solution to the linear elliptic equation $Ay = \tilde f$ with the right hand side $\tilde f = \gamma\max_\epsilon(0, -\tilde y) + \varphi \in L^2(\Omega)$. According to the proof of Proposition 2.1 this yields that $\tilde y \in W_0^{1,q}(\Omega)$ and
$$\|\tilde y\|_{W_0^{1,q}(\Omega)} = \|A^{-1}(\gamma\max{}_\epsilon(0, -\tilde y) + \varphi)\|_{W_0^{1,q}(\Omega)} \le C\big(\gamma\|\max{}_\epsilon(0, -\tilde y)\|_{L^2(\Omega)} + \|\varphi\|_{L^2(\Omega)}\big), \tag{2.15}$$
where $A^{-1}$ denotes the solution operator associated with $Av = \tilde f$, $v \in W_0^{1,q}(\Omega)$. Next, we establish the bound on $\gamma\|\max_\epsilon(0, -\tilde y)\|_{L^2(\Omega)}$. The semi-linear partial differential equation (2.13) holds in $W^{-1,q}(\Omega)$ and, owing to $\tilde f = \gamma\max_\epsilon(0, -\tilde y) + \varphi \in L^2(\Omega)$, even in $L^2(\Omega)$.

Since $\max(0, -\tilde y) \in H_0^1(\Omega)$, testing (2.13) with it and applying Green's theorem we get
$$\gamma\|\max(0, -\tilde y)\|_{L^2(\Omega)}^2 \le \gamma\int_\Omega \max(0, -\tilde y)\max{}_\epsilon(0, -\tilde y)\, dx = -\int_\Omega g(x)\,\nabla\tilde y(x)^T (a_{ij}(x))\,\nabla\tilde y(x)\, dx - \int_\Omega \varphi\,\max(0, -\tilde y)\, dx,$$
where we also use that $0 \le \max(0, -\tilde y) \le \max_\epsilon(0, -\tilde y)$ a.e. in $\Omega$. Here, $0 \le g(x) \in \partial\max(0, -\tilde y(x))$ for a.e. $x \in \Omega$, where $\partial$ denotes the subdifferential from convex analysis, cf. [14, Prop. 6.45]. The positive definiteness of $(a_{ij})$ a.e. in $\Omega$ and Hölder's inequality then yield
$$\gamma\|\max(0, -\tilde y)\|_{L^2(\Omega)} \le \|\varphi\|_{L^2(\Omega)}. \tag{2.16}$$
Next we observe that
$$\|\max{}_\epsilon(0, -\tilde y)\|_{L^2(\Omega)} \le \|\max(0, -\tilde y)\|_{L^2(\Omega)} + \|\max{}_\epsilon(0, -\tilde y) - \max(0, -\tilde y)\|_{L^2(\Omega)} \le \frac{1}{\gamma}\|\varphi\|_{L^2(\Omega)} + \frac{\epsilon}{4}|\Omega|^{1/2}.$$
From (2.15) we consequently obtain
$$\|\tilde y\|_{W_0^{1,q}(\Omega)} \le C\big(\|\varphi\|_{L^2(\Omega)} + \gamma\epsilon\big)$$
for some constant $C = C(\Omega) > 0$ which is independent of $\gamma$ and $\epsilon$. $\square$
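The constant $\epsilon/4$ appearing in the last two estimates can be checked directly from (2.11) (a short computation of ours): the gap to $\max(0,\cdot)$ vanishes for $|t| \ge \epsilon$, while on $(-\epsilon, \epsilon)$,

```latex
\max{}_\epsilon(0,t) - \max(0,t) =
\begin{cases}
  \dfrac{(t+\epsilon)^2}{4\epsilon}, & t \in (-\epsilon, 0],\\[1.5ex]
  \dfrac{(t-\epsilon)^2}{4\epsilon}, & t \in [0, \epsilon),
\end{cases}
\qquad\text{so}\qquad
0 \le \max{}_\epsilon(0,t) - \max(0,t) \le \frac{\epsilon}{4},
```

with the maximum $\epsilon/4$ attained at $t = 0$; integrating the square over $\Omega$ gives the bound $\|\max_\epsilon(0,-\tilde y) - \max(0,-\tilde y)\|_{L^2(\Omega)} \le \frac{\epsilon}{4}|\Omega|^{1/2}$ used above.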

As in [29] we invoke the following assumption on $\gamma$ and $\epsilon$, and we denote the solution operator of (2.12) with $\epsilon = \epsilon(\gamma)$ by $y_\gamma\colon L^2(\Omega) \to H_0^1(\Omega)$.

Assumption 2.7. For each $\gamma > 0$, let $\epsilon(\gamma) > 0$ be given such that $\lim_{\gamma\to\infty} \gamma\,\epsilon(\gamma) = 0$.


Theorem 2.8. Let $(u_k)_{k\in\mathbb{N}}$ be a sequence in $U_{\mathrm{ad}}$ converging weakly in $L^2(\Omega)$ to $\tilde u \in U_{\mathrm{ad}}$. Assume furthermore that $(\gamma_k)_{k\in\mathbb{N}} \subset \mathbb{R}_{>0}$ with $\gamma_k \to \infty$ and let $\epsilon_k = \epsilon(\gamma_k)$ satisfy Assumption 2.7. By $y_k = y_{\gamma_k}(u_k)$ and $\tilde y = y(\tilde u)$ we denote the solution of the smoothed penalized equation (2.12) and of the variational inequality (2.2), respectively. For each $k \in \mathbb{N}$ we define $\xi_k := \gamma_k \max_{\epsilon_k}(0, -y_k)$ and set $\tilde\xi := A\tilde y - \tilde u - f$. Then there exists a subsequence of $(\xi_k)_{k\in\mathbb{N}}$ and $(y_k)_{k\in\mathbb{N}}$ (denoted the same) such that for $k \to \infty$,
$$\xi_k \rightharpoonup \tilde\xi \quad\text{in } L^2(\Omega), \qquad y_{\gamma_k}(u_k) \to y(\tilde u) \quad\text{in } W_0^{1,q}(\Omega).$$

Proof. Theorem 2.3 in [29] provides the strong convergence $y_k \to \tilde y$ in $H_0^1(\Omega)$. The uniform boundedness of the weakly convergent sequence $(\|u_k\|_{L^2(\Omega)})_{k\in\mathbb{N}}$, together with Proposition 2.6 and the boundedness of the product $\gamma_k \epsilon_k$, yields a uniform bound on $\|\xi_k\|_{L^2(\Omega)}$, and thus a subsequence converging weakly to the limit $\tilde\xi = A\tilde y - \tilde u - f$ in $L^2(\Omega)$. Owing to the compact embedding of $L^2(\Omega)$ into $W^{-1,q}(\Omega)$, this convergence, and also the convergence of $(u_k)_{k\in\mathbb{N}}$, is strong in $W^{-1,q}(\Omega)$. The continuity of $A^{-1}\colon W^{-1,q}(\Omega) \to W_0^{1,q}(\Omega)$ (see Remark 2.2) then implies strong convergence of the subsequence $(y_k)_{k\in\mathbb{N}}$ in $W_0^{1,q}(\Omega)$. $\square$
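To illustrate the convergence behavior of Theorem 2.8 concretely, the following self-contained sketch (our own one-dimensional analogue, not from the paper; all names and parameter values are ours) solves the smoothed penalized equation (2.12) with $A = -d^2/dx^2$ on $\Omega = (0,1)$ by Newton's method, coupling $\epsilon(\gamma) = \gamma^{-2}$ so that $\gamma\,\epsilon(\gamma) \to 0$ as required by Assumption 2.7. The violation of the constraint $y \ge 0$ by the penalized state decays as $\gamma$ grows.

```python
import numpy as np

def max_eps(t, eps):
    """C^1 smoothing of max(0, t) from (2.11), applied entrywise."""
    return np.where(t >= eps, t,
           np.where(t <= -eps, 0.0, t**2 / (4*eps) + t/2 + eps/4))

def dmax_eps(t, eps):
    """Derivative of max_eps(0, .)."""
    return np.where(t >= eps, 1.0,
           np.where(t <= -eps, 0.0, t / (2*eps) + 0.5))

def solve_penalized(g, h, gamma, eps, y0, tol=1e-9, maxit=50):
    """Newton's method for A y - gamma * max_eps(0, -y) = g, where A is the
    finite-difference Laplacian on (0, 1) with zero boundary conditions."""
    n = len(g)
    A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    y = y0.copy()
    for _ in range(maxit):
        F = A @ y - gamma * max_eps(-y, eps) - g
        if np.linalg.norm(F, np.inf) < tol * np.linalg.norm(g, np.inf):
            break
        J = A + gamma * np.diag(dmax_eps(-y, eps))   # linearization, cf. (3.1)
        y -= np.linalg.solve(J, F)
    return y

n, h = 199, 1.0 / 200
x = np.linspace(h, 1 - h, n)
g = 10 * np.sin(2 * np.pi * x)       # forcing f + u, negative on (1/2, 1)

violations = []
y = np.zeros(n)
for gamma in [1e2, 1e4, 1e6]:        # continuation in the penalty parameter
    eps = gamma ** -2                # gamma * eps -> 0 (Assumption 2.7)
    y = solve_penalized(g, h, gamma, eps, y)
    violations.append(max(0.0, -y.min()))
```

Warm-starting each solve from the previous $\gamma$ (continuation) keeps the Newton iteration robust for large penalty parameters; the linearization used in the Newton step is exactly the derivative that reappears in (3.1) below.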

2.3 Penalized optimal control problem with smoothed objective

We define the smoothed penalized optimal control problem with smoothed objective as follows:

$$\text{Minimize}\quad J_r(y, u) = \sum_{w\in I} \frac{1}{2|B_r(w)|}\, \|y - y_w\|_{L^2(B_r(w))}^2 + \frac{\nu}{2}\|u\|_{L^2(\Omega)}^2 \tag{2.17a}$$
$$\text{over}\quad (y, u) \in H_0^1(\Omega) \times U_{\mathrm{ad}} \tag{2.17b}$$
$$\text{s.t.}\quad Ay - \gamma\max{}_\epsilon(0, -y) = u + f \quad\text{in } H^{-1}(\Omega), \tag{2.17c}$$
where $r > 0$ is sufficiently small and $B_r(w) = \{x \in \Omega \mid |x - w| < r\}$. For each $r > 0$, $J_r$ is weakly lower semi-continuous. The smoothing of the objective in the sense of (2.17a) is used below in order to establish a suitable stationarity system for (2.6). In this section, we show that the averaged smooth-penalty scheme (2.17) is consistent with the optimal control problem (2.6) in the sense of Theorem 2.11 below.

Proposition 2.9. For all $\gamma > 0$, $\epsilon > 0$ and $r > 0$, problem (2.17) has a solution.

Proof. The functional $J_r$ is bounded from below. Therefore the set
$$\{J_r(y, u) \mid (y, u) \in H_0^1(\Omega) \times U_{\mathrm{ad}} \text{ solving (2.17c)}\}$$
has an infimum denoted by $j$, and we can choose an infimizing sequence $(y_k, u_k)_{k\in\mathbb{N}}$ with $\lim_{k\to\infty} J_r(y_k, u_k) = j$. The sequence $(u_k)_{k\in\mathbb{N}}$ is uniformly bounded in $L^2(\Omega)$ and therefore contains a subsequence, which we also denote by $(u_k)$, with weak limit $\tilde u \in U_{\mathrm{ad}}$. Together with $(\|u_k\|_{L^2(\Omega)})_{k\in\mathbb{N}}$, by Proposition 2.6, the sequence $(\|y_k\|_{W_0^{1,q}(\Omega)})_{k\in\mathbb{N}}$ is bounded. By the continuous embedding of $W_0^{1,q}(\Omega)$ into $H_0^1(\Omega)$, $(\|y_k\|_{H_0^1(\Omega)})_{k\in\mathbb{N}}$ is also bounded and thus contains a weakly convergent subsequence with limit $\tilde y \in H_0^1(\Omega)$. The limiting pair $(\tilde y, \tilde u)$ is feasible for problem (2.17). The weak lower semi-continuity of the objective $J_r$ finally implies
$$j = \liminf_{k\to\infty} J_r(y_k, u_k) \ge J_r(\tilde y, \tilde u). \qquad\square$$

Lemma 2.10. Let the sequence $(r_k)_{k\in\mathbb{N}} \subset \mathbb{R}_{>0}$ converge to zero and $(G_k)_{k\in\mathbb{N}} \subset C^0(\bar\Omega)$ converge to $G$ in $C^0(\bar\Omega)$. Then for every $w \in \Omega$,
$$\frac{1}{|B_{r_k}(w)|}\int_{B_{r_k}(w)} G_k(x)\, dx \to G(w).$$

Proof. For every $k \in \mathbb{N}$, we define the mapping
$$F_k\colon C^0(\bar\Omega) \to \mathbb{R}, \quad g \mapsto \frac{1}{|B_{r_k}(w)|}\int_{B_{r_k}(w)} g(x)\, dx,$$
which is linear and bounded and thus an element of the dual space $C^0(\bar\Omega)^*$. Every point $w \in \Omega$ is a Lebesgue point of the continuous function $g \in C^0(\bar\Omega)$, i.e., for $k \to \infty$,
$$\langle F_k, g\rangle_{C^0(\bar\Omega)} = \frac{1}{|B_{r_k}(w)|}\int_{B_{r_k}(w)} g(x)\, dx \to g(w) = \langle \delta_w, g\rangle_{C^0(\bar\Omega)}.$$
From this we conclude $F_k \stackrel{*}{\rightharpoonup} \delta_w$ in $C^0(\bar\Omega)^*$. Together with the strong convergence of $G_k$ in $C^0(\bar\Omega)$, this yields the convergence of the product
$$\langle F_k, G_k\rangle_{C^0(\bar\Omega)} \to \langle \delta_w, G\rangle_{C^0(\bar\Omega)} = G(w). \qquad\square$$

In order to formulate the central theorem on consistency in this section, we denote a solution of the smoothed penalized problem (2.17) with parameters $(\gamma, \epsilon, r) = (\gamma_k, \epsilon_k, r_k)$ by $(y_k, u_k)$.

Theorem 2.11. Assume that $(\gamma_k^{-1})_{k\in\mathbb{N}}, (r_k)_{k\in\mathbb{N}} \subset \mathbb{R}_{>0}$ converge to zero and let $\epsilon_k = \epsilon(\gamma_k)$ satisfy Assumption 2.7. Then there exists a subsequence of $(y_k, u_k)_{k\in\mathbb{N}}$ (denoted the same) and a solution $(\tilde y, \tilde u)$ of the original problem (2.7) such that
$$y_k \to \tilde y \quad\text{in } W_0^{1,q}(\Omega), \qquad u_k \rightharpoonup \tilde u \quad\text{in } L^2(\Omega).$$

Proof. We choose $\bar u \in U_{\mathrm{ad}}$ and denote the solution of the semi-linear equation (2.17c) with $\gamma = \gamma_k$ and $\epsilon = \epsilon_k$ by $y_{\gamma_k}(\bar u)$ for all $k \in \mathbb{N}$. Proposition 2.6, together with Assumption 2.7, yields a bound on $(\|y_{\gamma_k}(\bar u)\|_{W_0^{1,q}(\Omega)})_{k\in\mathbb{N}}$ that does not depend on $k \in \mathbb{N}$, and we can estimate
$$\frac{\nu}{2}\|u_k\|_{L^2(\Omega)}^2 \le J_{r_k}(y_k, u_k) \le J_{r_k}(y_{\gamma_k}(\bar u), \bar u) \le C.$$
The uniform boundedness of $(u_k)_{k\in\mathbb{N}}$ in $L^2(\Omega)$ thus yields a subsequence, still denoted by $(u_k)_{k\in\mathbb{N}}$, with weak limit $\tilde u \in U_{\mathrm{ad}}$. Theorem 2.8 then yields the strong convergence of a (sub-)subsequence $(y_k)_{k\in\mathbb{N}}$ to the solution $\tilde y = y(\tilde u)$ of (2.2) in $W_0^{1,q}(\Omega)$. To prove optimality of the limiting pair $(\tilde y, \tilde u)$, let $(y^*, u^*)$ denote a solution of problem (2.7). Using Lemma 2.10 for the first term in $J_{r_k}(y_{\gamma_k}(u^*), u^*)$, as $y_{\gamma_k}(u^*) \to y(u^*)$ in $C^0(\bar\Omega)$ by Theorem 2.8 and the embedding of $W_0^{1,q}(\Omega)$ into $C^0(\bar\Omega)$, we observe that
$$\liminf_{k\to\infty} J_{r_k}(y_{\gamma_k}(u^*), u^*) = \lim_{k\to\infty} J_{r_k}(y_{\gamma_k}(u^*), u^*) = J(y(u^*), u^*) = J(y^*, u^*).$$
Furthermore, with the weak lower semi-continuity of the second term in $J_{r_k}(y_k, u_k)$, and the same arguments for the first term, we note that
$$\liminf_{k\to\infty} J_{r_k}(y_k, u_k) \ge J(\tilde y, \tilde u).$$
Now we exploit the optimality of $(y_k, u_k)$ with respect to the objective $J_{r_k}$ as well as the feasibility of $(y_{\gamma_k}(u^*), u^*)$ for the smoothed penalized problem to derive $J_{r_k}(y_k, u_k) \le J_{r_k}(y_{\gamma_k}(u^*), u^*)$ for all $k \in \mathbb{N}$. Altogether it holds that
$$J(\tilde y, \tilde u) \le \liminf_{k\to\infty} J_{r_k}(y_k, u_k) \le \liminf_{k\to\infty} J_{r_k}(y_{\gamma_k}(u^*), u^*) = J(y^*, u^*). \qquad\square$$

3 Derivation of stationarity conditions

3.1 Optimality system for the smoothed penalized problem

For the derivation of a first order optimality system for (2.17) we apply [51, Thm. 3.1]. For this purpose we have to check the associated constraint qualification, i.e. regularity of a solution.

Lemma 3.1. Every feasible point $(y_k, u_k) \in H_0^1(\Omega) \times L^2(\Omega)$ of (2.17) with parameters $\gamma_k, \epsilon_k \in \mathbb{R}_{>0}$ is regular in the sense of [51].

Proof. For $(y, u) \in X := H_0^1(\Omega) \times L^2(\Omega)$, define $g(y, u) = Ay - \gamma_k \max_{\epsilon_k}(0, -y) - u - f$ with Fréchet derivative
$$g'(y_k, u_k)(y, u) = Ay + \gamma_k \max{}_{\epsilon_k}'(0, -y_k)\, y - u. \tag{3.1}$$
We formulate (2.17c) as $g(y, u) \in \{0\} \subset H^{-1}(\Omega)$ and, according to [51, Eq. (1.4)], have to show that
$$g'(y_k, u_k)\, C(y_k, u_k) = H^{-1}(\Omega),$$
where $C(y_k, u_k) := \{t((y, u) - (y_k, u_k)) \mid t \ge 0,\ (y, u) \in H_0^1(\Omega) \times U_{\mathrm{ad}}\}$. Note that the definitions of $A$ and $\max_{\epsilon_k}$ in (2.1) and (2.11) yield immediately that $g'(y_k, u_k)\, C(y_k, u_k) \subset H^{-1}(\Omega)$. Furthermore, the operator $g'(y_k, u_k)\colon C(y_k, u_k) \to H^{-1}(\Omega)$ is surjective, because for any given $\varphi \in H^{-1}(\Omega)$ we can choose $u = u_k \in U_{\mathrm{ad}}$ and $y = \tilde y + y_k \in H_0^1(\Omega)$, where $\tilde y$ is the solution of the linear elliptic partial differential equation $Ay + \gamma_k \max_{\epsilon_k}'(0, -y_k)\, y = \varphi$. Then, with $t = 1$, $(\tilde y, 0) \in C(y_k, u_k)$ and $g'(y_k, u_k)(\tilde y, 0) = \varphi$. $\square$


As a consequence of [51, Thm. 3.1] and Lemma 3.1 we obtain stationarity conditions for the smoothed penalized problem (2.17) as stated in the next proposition.

Proposition 3.2. If $(y_k, u_k) \in H_0^1(\Omega) \times U_{\mathrm{ad}}$ with $U_{\mathrm{ad}} \ne L^2(\Omega)$ is optimal for problem (2.17) with parameters $(\gamma_k, \epsilon_k, r_k) \in (\mathbb{R}_{>0})^3$, then there exists $p_k \in H_0^1(\Omega)$ such that the following first order conditions hold:
$$u_k = \frac{1}{\nu} p_k + \Big(a - \frac{1}{\nu} p_k\Big)^+ - \Big(\frac{1}{\nu} p_k - b\Big)^+ \quad\text{in } L^2(\Omega), \tag{3.2a}$$
$$A^* p_k + \gamma_k \max{}_{\epsilon_k}'(0, -y_k)\, p_k = -\sum_{w\in I} \frac{1}{|B_{r_k}(w)|}(y_k - y_w)\,\chi_{B_{r_k}(w)} \quad\text{in } H^{-1}(\Omega). \tag{3.2b}$$
If $U_{\mathrm{ad}} = L^2(\Omega)$, then (3.2a) is replaced by $u_k = \frac{1}{\nu} p_k$.

Remark 3.3. We introduce the variable $\sigma_k = \nu u_k - p_k \in L^2(\Omega)$, which can be decomposed as $\sigma_k = \sigma_k^a - \sigma_k^b$ with $\sigma_k^a = \max(0, \sigma_k)$ and $\sigma_k^b = -\min(0, \sigma_k)$ pointwise almost everywhere. Then the following conditions are equivalent to (3.2a):
$$(\nu u_k, u - u_k)_{L^2(\Omega)} - (p_k, u - u_k)_{L^2(\Omega)} \ge 0 \quad \forall u \in U_{\mathrm{ad}}, \tag{3.3}$$
$$u_k = \Pi_{U_{\mathrm{ad}}}\Big(\frac{1}{\nu} p_k\Big), \tag{3.4}$$
$$u_k \in U_{\mathrm{ad}}, \quad \sigma_k^a \ge 0, \quad \sigma_k^b \ge 0, \quad \sigma_k^a(a - u_k) = \sigma_k^b(b - u_k) = 0. \tag{3.5}$$
Here $\Pi_{U_{\mathrm{ad}}}$ denotes the $L^2(\Omega)$-projection onto the closed convex set $U_{\mathrm{ad}}$.
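The equivalence of (3.2a), (3.4) and (3.5) is a pointwise statement about projection onto an interval, and is easy to verify numerically. The sketch below (our own check with arbitrary synthetic data) confirms that the explicit formula (3.2a) coincides with clipping $\frac{1}{\nu}p_k$ to $[a, b]$, and that the resulting $\sigma_k^a, \sigma_k^b$ satisfy the sign and complementarity conditions (3.5):

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 0.5
p = rng.normal(size=1000)            # stand-in for the adjoint state p_k
a = -0.3 * np.ones_like(p)           # box bounds with a < b
b = 0.7 * np.ones_like(p)

pos = lambda v: np.maximum(v, 0.0)   # (.)^+ in the pointwise sense

# (3.2a): u = p/nu + (a - p/nu)^+ - (p/nu - b)^+
u = p/nu + pos(a - p/nu) - pos(p/nu - b)

# (3.4): u = L^2-projection of p/nu onto [a, b], i.e. pointwise clipping
u_proj = np.clip(p/nu, a, b)

# (3.5): sigma = nu*u - p, split into its nonnegative parts
sigma = nu*u - p
sig_a, sig_b = pos(sigma), pos(-sigma)
comp = max(np.abs(sig_a*(a - u)).max(), np.abs(sig_b*(b - u)).max())
```

The three cases of the formula ($p/\nu < a$, $p/\nu \in [a,b]$, $p/\nu > b$) reproduce exactly the three branches of the clipping operation, which is why (3.2a) and (3.4) agree.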

3.2 Stationarity system for the optimal control problem

Borrowing terminology from [28], we next define the stationarity concept which is relevant in our context.

Definition 3.4 (Limiting $\varepsilon$-almost C-stationarity). We call $(y, u, \xi) \in W_0^{1,q}(\Omega) \times U_{\mathrm{ad}} \times W^{-1,q}(\Omega)$, $U_{\mathrm{ad}} \ne L^2(\Omega)$, limiting $\varepsilon$-almost C-stationary for problem (2.6) if $y = y(u)$ solves the variational inequality (2.2), $\xi$ is defined as $\xi = Ay - u - f$, and if there exist $p \in W_0^{1,q'}(\Omega)$, $\lambda \in \mathcal{M}_b(\bar\Omega)$ and sequences $(p_k)_{k\in\mathbb{N}} \subset H_0^1(\Omega)$ with $p_k \rightharpoonup p$ in $W_0^{1,q'}(\Omega)$, and $(\lambda_k)_{k\in\mathbb{N}} \subset H^{-1}(\Omega)$ with $\lambda_k \stackrel{*}{\rightharpoonup} \lambda$ in $\mathcal{M}_b(\bar\Omega)$, such that the following conditions are satisfied,
$$u - \frac{1}{\nu} p - \Big(a - \frac{1}{\nu} p\Big)^+ + \Big(\frac{1}{\nu} p - b\Big)^+ = 0 \quad\text{in } L^2(\Omega), \tag{3.6a}$$
$$\forall \psi \in W_0^{1,q}(\Omega)\colon\quad \langle \psi, A^* p\rangle_{W_0^{1,q}(\Omega)} - \langle \psi, \lambda\rangle_{W_0^{1,q}(\Omega)} + \sum_{w\in I}(y(w) - y_w)\,\psi(w) = 0, \tag{3.6b}$$
$$\langle \lambda, y\rangle_{W^{-1,q'}(\Omega)} = 0, \tag{3.6c}$$
$$\forall \tau > 0\ \exists E^\tau \subset \{y > 0\} \text{ such that } |\{y > 0\} \setminus E^\tau| < \tau \text{ and } \langle \lambda, \varphi\rangle_{\mathcal{M}_b(\bar\Omega)} = 0 \text{ for all } \varphi \in C^0(\bar\Omega) \text{ with } \varphi|_{\Omega\setminus E^\tau} = 0, \tag{3.6d}$$
$$\limsup_{k\to\infty}\ \langle \lambda_k, p_k\rangle_{H^{-1}(\Omega)} \le 0, \tag{3.6e}$$
$$\langle \xi, p\rangle_{W^{-1,q}(\Omega)} = 0. \tag{3.6f}$$
If $U_{\mathrm{ad}} = L^2(\Omega)$, then (3.6a) is again replaced by $u = \frac{1}{\nu} p \in W_0^{1,q'}(\Omega)$.

Theorem 3.5. For each $k \in \mathbb{N}$, let $\gamma_k$, $\epsilon_k = \epsilon(\gamma_k)$, $r_k > 0$ be penalization and smoothing parameters which satisfy Assumption 2.7, where $\gamma_k \to \infty$ and $r_k \to 0$. Furthermore, let $(y_k, u_k, p_k)$ be stationary for problem (2.17) in the sense that the tuple is feasible and (3.2a), (3.2b) hold, and assume that $(\|u_k\|_{L^2(\Omega)})_{k\in\mathbb{N}}$ is bounded.

Then there exist $(\tilde y, \tilde u, \tilde\xi, \tilde p, \tilde\lambda) \in W_0^{1,q}(\Omega) \times L^2(\Omega) \times L^2(\Omega) \times H_0^1(\Omega) \times \mathcal{M}_b(\bar\Omega)$ and a subsequence (also denoted by index $k$) such that
$$y_k \to \tilde y \quad\text{in } W_0^{1,q}(\Omega), \tag{3.7a}$$
$$u_k \rightharpoonup \tilde u \quad\text{in } L^2(\Omega), \tag{3.7b}$$
$$\xi_k := \gamma_k \max{}_{\epsilon_k}(0, -y_k) \rightharpoonup \tilde\xi \quad\text{in } L^2(\Omega), \tag{3.7c}$$
$$p_k \rightharpoonup \tilde p \quad\text{in } H_0^1(\Omega), \tag{3.7d}$$
$$\lambda_k := -\gamma_k \max{}_{\epsilon_k}'(0, -y_k)\, p_k \stackrel{*}{\rightharpoonup} \tilde\lambda \quad\text{in } \mathcal{M}_b(\bar\Omega) = C^0(\bar\Omega)^*, \tag{3.7e}$$
and $(\tilde y, \tilde u, \tilde\xi)$ is a limiting $\varepsilon$-almost C-stationary point for problem (2.6) with multipliers $\tilde p$, $\tilde\lambda$.

Proof. Convergence and feasibility of $y_k$, $u_k$, $\xi_k$ and $p_k$. Theorem 2.8 yields the assertions for $y_k$ and $\xi_k$. Testing the adjoint equation with $p_k \in H_0^1(\Omega)$, we obtain a uniform bound on $(\|p_k\|_{H_0^1(\Omega)})_{k\in\mathbb{N}}$ as follows,
$$C\|p_k\|_{H_0^1(\Omega)}^2 \le \langle p_k, A^* p_k\rangle_{H^{-1}(\Omega)} = -\int_\Omega \gamma_k \max{}_{\epsilon_k}'(0, -y_k)\, p_k^2\, dx - \sum_{w\in I}\frac{1}{|B_{r_k}(w)|}\int_{B_{r_k}(w)}(y_k - y_w)\, p_k\, dx$$
$$\le C\big(\mathrm{card}(I)\,\|y_k\|_{L^\infty(\Omega)} + \sup\{|y_w| \mid w \in I\}\big)\,\|p_k\|_{L^2(\Omega)}.$$
The weak convergence of a subsequence of $(u_k)_{k\in\mathbb{N}}$ in $L^2(\Omega)$ follows from the boundedness of the operator $(\cdot)^+\colon L^2(\Omega) \to L^2(\Omega)$. Note that if $a, b \in H^1(\Omega)$ and $a|_{\partial\Omega} < 0 < b|_{\partial\Omega}$, then we have convergence along a subsequence in $H_0^1(\Omega)$.

Convergence of $\lambda_k$. For $\delta > 0$ we define the function
$$\rho_\delta\colon \mathbb{R} \to \mathbb{R}, \quad \rho_\delta(p) = \begin{cases} -1 & \text{for } p < -\delta, \\ p/\delta & \text{for } p \in [-\delta, \delta], \\ 1 & \text{for } p > \delta, \end{cases}$$
and note that $\lim_{\delta\to 0^+} \rho_\delta(p) = \mathrm{sign}(p)$ for all $p \in \mathbb{R}$. Furthermore, $\rho_\delta(p_k) \in H_0^1(\Omega) \cap L^\infty(\Omega)$ is a feasible test function for the adjoint equation (3.2b), yielding
$$\langle A^* p_k, \rho_\delta(p_k)\rangle_{H^{-1}(\Omega)} + \int_\Omega \gamma_k \max{}_{\epsilon_k}'(0, -y_k)\, p_k\, \rho_\delta(p_k)\, dx = -\sum_{w\in I}\frac{1}{|B_{r_k}(w)|}\int_{B_{r_k}(w)}(y_k - y_w)\,\rho_\delta(p_k)\, dx$$
$$\le \mathrm{card}(I)\,\|y_k\|_{L^\infty(\Omega)} + \sup\{|y_w| \mid w \in I\} \le C.$$
The first term on the left hand side is non-negative, as
$$\langle A^* p_k, \rho_\delta(p_k)\rangle_{H^{-1}(\Omega)} \ge C\int_\Omega \rho_\delta'(p_k)\,|\nabla p_k|^2\, dx \ge 0.$$
This yields the uniform boundedness of $\int_\Omega \gamma_k \max_{\epsilon_k}'(0, -y_k)\, p_k\, \rho_\delta(p_k)\, dx \ge 0$. Sending $\delta \to 0$, we obtain the bound
$$\|\lambda_k\|_{L^1(\Omega)} = \int_\Omega |\lambda_k|\, dx = \lim_{\delta\to 0}\int_\Omega \gamma_k \max{}_{\epsilon_k}'(0, -y_k)\, p_k\, \rho_\delta(p_k)\, dx \le C. \tag{3.8}$$
The embedding of $L^1(\Omega)$ into $\mathcal{M}_b(\bar\Omega) = C^0(\bar\Omega)^*$ and the Banach–Alaoglu theorem then provide the existence of a subsequence converging weakly-* to $\tilde\lambda \in \mathcal{M}_b(\bar\Omega)$. Note that, because of the embedding of $W_0^{1,q}(\Omega)$ into $C^0(\bar\Omega)$, the subsequence also converges weakly in $W^{-1,q'}(\Omega)$ and $(\lambda_k)_{k\in\mathbb{N}}$ is bounded in this space.

Adjoint equation (3.6b) for $\tilde y$, $\tilde p$, $\tilde\lambda$. Since $p_k \rightharpoonup \tilde p$ in $W_0^{1,q'}(\Omega) \supset H_0^1(\Omega)$, by the properties of $A$ we have $A^* p_k \rightharpoonup A^* \tilde p$ in $W^{-1,q'}(\Omega)$. If $\varphi \in C^0(\bar\Omega)$, then $\sum_{w\in I}(y_k - y_w)\varphi \to \sum_{w\in I}(\tilde y - y_w)\varphi$ in $C^0(\bar\Omega)$ and, by Lemma 2.10,
$$\Big\langle \sum_{w\in I}\frac{1}{|B_{r_k}(w)|}(y_k - y_w)\,\chi_{B_{r_k}(w)},\ \varphi\Big\rangle_{C^0(\bar\Omega)} = \sum_{w\in I}\frac{1}{|B_{r_k}(w)|}\int_{B_{r_k}(w)}(y_k - y_w)\,\varphi\, dx \to \sum_{w\in I}(\tilde y(w) - y_w)\,\varphi(w).$$
The embedding of $C^0(\bar\Omega)^*$ into $W^{-1,q'}(\Omega)$ then gives
$$\sum_{w\in I}\frac{1}{|B_{r_k}(w)|}(y_k - y_w)\,\chi_{B_{r_k}(w)} \rightharpoonup \sum_{w\in I}(\tilde y - y_w)\,\delta_w \quad\text{in } W^{-1,q'}(\Omega).$$
Together with the weak convergence of $\lambda_k$ in $W^{-1,q'}(\Omega)$, this yields equation (3.6b).

Complementarity of $\tilde y$ and $\tilde\lambda$ (3.6c). From the convergence $y_k \to \tilde y$ in $W_0^{1,q}(\Omega)$ we infer that $(-y_k)^+ \to (-\tilde y)^+ = 0$ in $W_0^{1,q}(\Omega)$, so
$$\langle \lambda_k, (-y_k)^+\rangle_{W^{-1,q'}(\Omega)} \to \langle \tilde\lambda, 0\rangle_{W^{-1,q'}(\Omega)} = 0.$$
Furthermore, we observe that
$$\big|\langle \lambda_k, (y_k)^+\rangle_{W^{-1,q'}(\Omega)}\big| = \int_\Omega \gamma_k \max{}_{\epsilon_k}'(0, -y_k)\,|p_k|\,(y_k)^+\, dx = \gamma_k\int_{\{0 \le y_k \le \epsilon_k\}} \max{}_{\epsilon_k}'(0, -y_k)\,|p_k|\, y_k\, dx + \gamma_k\int_{\{y_k > \epsilon_k\}} 0\, dx \le \gamma_k \epsilon_k \|p_k\|_{L^1(\Omega)},$$
which converges to zero due to the boundedness of $(p_k)_{k\in\mathbb{N}}$ in $W_0^{1,q'}(\Omega)$ and Assumption 2.7. So, decomposing $y_k = (y_k)^+ - (-y_k)^+$, we get
$$\langle \lambda_k, y_k\rangle_{W^{-1,q'}(\Omega)} = \langle \lambda_k, (y_k)^+\rangle_{W^{-1,q'}(\Omega)} - \langle \lambda_k, (-y_k)^+\rangle_{W^{-1,q'}(\Omega)} \to 0.$$
We also have the convergence $\langle \lambda_k, y_k\rangle_{W^{-1,q'}(\Omega)} \to \langle \tilde\lambda, \tilde y\rangle_{W^{-1,q'}(\Omega)}$, which yields the assertion (3.6c).
