
3.1.1 The optimal solution is no local minimum of F

One possible idea for solving the bilevel optimization problem numerically is to apply a gradient-based algorithm to the shape functional $F$. However, the bilevel optimization problem is constrained, since one has to respect the strict inequality constraint $y_{\min} < y_J < y_{\max}$ in the candidate inactive set $J$. Nonetheless, one of the most fundamental ideas behind the whole approach of this thesis is that this constraint need not be respected rigorously. To be more precise, the strict inequality is irrelevant as far as first order optimality conditions are concerned, but it does have an impact on optimality from a global point of view; cf. Paragraph 2.2.4.
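In compact, deliberately schematic form, the constrained set optimization problem addressed here can be written as follows; the display merely gathers the quantities named above (function spaces and the exact coupling between a candidate set $B$ and its candidate inactive set $J$ are suppressed) and should not be mistaken for a new formulation:
\[
  \min_{B \,\in\, O} \; F(B)
  \qquad \text{subject to} \qquad
  y_{\min} < y_J < y_{\max} \quad \text{in } J.
\]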

With regard to the construction of algorithms the question arises whether this constraint has to be taken into account or not. It certainly has to be considered when a steepest descent algorithm is applied, since the active set $A$ is no strict (local) minimum of the shape functional $F$. This fact is illustrated from the algorithmic point of view first and proven rigorously afterwards.

Due to Theorem 8 the shape gradient of $F$ can be represented by the non-positive function
\[
  -\frac{1}{2\lambda}\,\bigl(\bar p_J - p_{\max/\min}\bigr)^2 \;\in\; L^1(\beta).
\]

A steepest descent algorithm would choose the normal component of a perturbation vector field as a scalar multiple of the representative.4 But the non-positivity of the gradient results in perturbation fields which can only shrink the current guess of the active set, since the step size is always positive. This behavior is due to the fact that the unique global optimum of the reduced shape functional $F$ is given by $B = \emptyset$, which corresponds to the optimal solution of the state-unconstrained version of the original optimal control problem (2.1). Therefore, such an algorithm can never reach the optimal active set $A$ if the initial guess $B$ is such that $A \not\subset B$, as illustrated in Figure 3.6. In the special case $B \subset A$ a steepest descent algorithm only has a chance to reach $A$ if negative step sizes are allowed, too.

[Figure 3.6 depicts the optimal configuration $A$, the current guess $B$, a possible guess after applying the perturbation field, an impossible configuration after applying the perturbation field, and the region that is unreachable for steepest descent deformations starting at $B$.]

Figure 3.6: Illustration of an impossible step in a steepest descent algorithm.
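The shrink-only behavior can be mimicked by a deliberately simplified one-dimensional toy model: the current guess of the active set is an interval, the "shape gradient" at its two end points is some non-positive number, and a descent step with positive step size moves both end points inward. None of the quantities in the following sketch stem from the actual optimality system; it merely illustrates why a target set that is not contained in the initial guess can never be reached.

```python
# Toy illustration (not the actual optimality system): the active set is an
# interval [a, b]; a non-positive "gradient" at both end points combined with
# a positive step size can only move the end points inward, i.e. shrink the set.

def descent_step(interval, gradient_left, gradient_right, step):
    """One steepest descent step; gradients are non-positive, step > 0."""
    a, b = interval
    # descent direction = -gradient >= 0, interpreted as inward normal speed
    a_new = a + step * (-gradient_left)   # left end point moves to the right
    b_new = b - step * (-gradient_right)  # right end point moves to the left
    return (a_new, max(a_new, b_new))     # the interval can at most shrink to a point

guess = (0.2, 0.8)   # initial guess B
target = (0.1, 0.5)  # "optimal" active set A with A not contained in B

for _ in range(5):
    guess = descent_step(guess, gradient_left=-0.3, gradient_right=-0.1, step=0.1)
    print(guess)
# The left end point only increases, so the target end point 0.1 < 0.2 stays
# unreachable: exactly the situation sketched in Figure 3.6.
```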

Conversely, this means that there are vector fields which lead to perturbations of the optimal active set $A$ such that the value of the shape functional does not increase. Thus, $A$ cannot be a strict local minimum of $F$.

Proposition 7 ($A$ is no strict local minimum of $F$):

Let $A \in O$ be the (optimal) active set for the original optimal control problem (2.1) and its equivalent reduced shape/topology optimization problem (2.45). Furthermore, assume that $A \neq \emptyset$, i.e. the state constraints are essential.

Then $A$ is a critical point, but no strict local minimum, of the reduced objective $F$.5

Proof. 1) According to Theorem 8 the set $A$ is a critical point of the reduced objective $F$ with respect to shape calculus.

2) According to the 14th item of the discussion on page 68 the ball $B_1(0) \subset \Theta_0$ is continuously embedded in $H(\Omega)$ by means of the mapping $f \mapsto \mathrm{Id} + f$. Moreover, the mapping is surjective onto a suitable $\varepsilon$-ball.

Let $\varepsilon > 0$ be chosen adaptively. Furthermore, there exists an approximation $f \in \Theta_0$ of the outward unit normal vector field of the active set $A$; see [69, Lem. 1.5.1.9]. That is, there exist a $\delta > 0$ and an $f \in \Theta_0$ such that
\[
  \forall\, x \in \gamma:\quad f(x) \cdot n_A(x) \geq \delta.
\]

Choose $\eta > 0$ such that $F := \mathrm{Id} - \eta f \in B_\varepsilon \subset H(\Omega)$ and define $B := F(A) \in X(A) \subset O$. Then there holds $B \subset A$, since the transformation $F$ maps the boundary $\gamma$ towards the interior of $A$, i.e. $F(\gamma) \subset \mathring{A}$. Let $(\bar u_I, \bar u_A, \bar y_I, \bar y_A) \in L^2(I) \times L^2(\mathring{A}) \times H^1(I, \Delta) \times H^1(\mathring{A}, \Delta)$ be the optimal solution of the set optimal control problem (2.30). This tuple is the optimal solution of the bilevel problem (2.36), (2.37), too.

4 A steepest descent algorithm based on a Sobolev gradient, which was introduced in the Remark to Theorem 7, would act similarly, since the weak maximum principle still holds true for the corresponding surface PDE with the usual proof, and thus the Sobolev gradient has the same sign as $\nabla F$.

5 The assertion refers to shape calculus only, since the analysis of infinitesimal topology dependency goes beyond the scope of this thesis.

In particular, it is the optimal solution of the inner optimization part (2.37) for the fixed set $A$. Additionally, the tuple is feasible for the inner optimization problem for the fixed set $B$: If one concatenates the states $\bar y_A$ and $\bar y_I$, one obtains a state on the whole domain $\Omega$ (cf. Proposition 4), which itself can be split again into $y_J \in H^1(J, \Delta)$ and $y_B \in H^1(\mathring{B}, \Delta)$. Assembling and renewed dissection can be done with the optimal control as well. Furthermore, there holds $y_B = \bar y_A|_B \equiv y_{\max/\min}$, since $B$ is a subset of $A$. Consequently, all constraints of the inner optimization problem (2.37) are fulfilled.6

Hence, by definition of $F$, cf. (2.38a), there holds
\[
  F(B) = \min_{u_J, u_B, y_J, y_B} J(B; u_J, u_B, y_J, y_B) \;\leq\; J(B; \bar u_I, \bar u_A, \bar y_I, \bar y_A) \;=\; J(A; \bar u_I, \bar u_A, \bar y_I, \bar y_A) \;=\; F(A).
\]

In other words, for each (sufficiently small) $\varepsilon > 0$ there is a set $B \in O$ with $d(B, A) \leq \varepsilon$ (the metric induced in $X(A)$, cf. Lemma 13) and $F(B) \leq F(A)$. Hence, $A$ is no strict local minimum of the reduced objective $F$.

Remark:

The proof shows that there even holds
\[
  \forall\, B_1, B_2 \in O \text{ with } B_1 \subset B_2:\quad F(B_1) \leq F(B_2).
\]

Thus, it is necessary to use algorithms which either rely on the strict inequality constraint or which search for critical points of the shape functional $F$.

It is not possible to satisfy the strict inequality constraint directly, as is done, for instance, by projected gradient methods, since the constraint poses an implicit condition on the feasibility of a set $B$. Here one recognizes the character of a state constraint. A common remedy is to fulfill such constraints iteratively by means of some penalization. However, this approach contradicts the original goal of obtaining an algorithm which requires neither regularization nor penalization of the constraint.

At this point, it is worthwhile to comment on penalization of the strict inequality constraint $y_{\min} < \bar y_J < y_{\max}$. A natural idea is to augment $F$ by a quadratic penalty term
\[
  A(B) := \frac{c}{2} \int_J \max\{0,\, \bar y_J - y_{\max}\}^2 + \max\{0,\, y_{\min} - \bar y_J\}^2 , \tag{3.2}
\]
where $c > 0$; cf. [90, Eq. (3.6)]. Afterwards one studies the behavior when $c$ is sent to infinity. However, this approach only ensures that the inequality constraint $y_{\min} \leq \bar y_J \leq y_{\max}$ is respected in the limit, whereas one requires the strict inequality counterpart. This approach is not recommended, since it entails a loss of unique solvability and of the precise meaning of the (in-)active set; cf. Paragraph 2.2.4, in particular page 29. A remedy would be to sharpen the penalty term to

\[
  A_\varepsilon(B) := \frac{c}{2} \int_J \max\{0,\, \bar y_J - y_{\max} + \varepsilon\}^2 + \max\{0,\, y_{\min} - \bar y_J + \varepsilon\}^2
\]
and to drive $\varepsilon$ to zero. However,
\[
  \bar y_J(x) \to y_{\max/\min}(x), \quad x \to \bar x \in \beta,
\]
since $\bar y_J|_\beta = y_{\max/\min}|_\beta$ due to (2.45f). Consequently, this idea induces a conflict between the evaluation of $F$ and its gradient: the sharpened integrand is strictly positive in a neighborhood of $\beta$ for every $\varepsilon > 0$. It manifests itself in the impossibility of convergence of the algorithm unless $\varepsilon = 0$, since otherwise the penalty term can never vanish and always yields a descent direction. Furthermore, the introduction of two penalty parameters always requires some smart coupling which is very likely to be problem dependent.
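The conflict can be made tangible with a small numerical experiment. The following sketch is purely illustrative: the one-dimensional "inactive set", the grid, and the state profile are invented and have nothing to do with the actual problem data; the code only demonstrates that the sharpened term $A_\varepsilon$ stays positive for every $\varepsilon > 0$ as soon as the state attains the bound at the interface, whereas (3.2) vanishes.

```python
# Illustrative sketch only: a hypothetical 1-D discretization of the penalty
# term (3.2) and its epsilon-sharpened variant.  The state profile y below is
# made up; it merely mimics a state that attains the upper bound y_max exactly
# at the interface point (the right end of the toy "inactive set" J = (0, 1)).
import numpy as np

def quadratic_penalty(y, y_min, y_max, c, eps=0.0, dx=1.0):
    """Discrete analogue of c/2 * int_J max(0, y - y_max + eps)^2 + max(0, y_min - y + eps)^2."""
    upper = np.maximum(0.0, y - y_max + eps) ** 2
    lower = np.maximum(0.0, y_min - y + eps) ** 2
    return 0.5 * c * np.sum(upper + lower) * dx

x = np.linspace(0.0, 1.0, 201)      # grid on the toy inactive set J = (0, 1)
y_min, y_max = -1.0, 1.0
y = y_max - (1.0 - x) ** 2          # feasible in the interior, y(1) = y_max at the interface

dx = x[1] - x[0]
print(quadratic_penalty(y, y_min, y_max, c=1.0, eps=0.0,  dx=dx))  # 0.0: the term (3.2) vanishes
print(quadratic_penalty(y, y_min, y_max, c=1.0, eps=1e-2, dx=dx))  # > 0: positive near the interface
# For every eps > 0 the sharpened term remains positive close to the interface,
# which is the conflict described above: it cannot vanish unless eps = 0.
```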

All in all, it is advisable to develop a method which pays no attention to the constraint $y_{\min} < y_J < y_{\max}$, but searches for critical points of the reduced functional $F$. An obvious choice is a Newton scheme, which is introduced in Section 3.2.

However, despite these objections it seems reasonable to use the penalty approach as a globalization strategy for a Newton algorithm. As long as the current guess of the active set is “far away” from the optimal one, an augmented functional may give the right idea of how to deform the iterate, and when the guess is “near enough” one switches to a Newton scheme; cf. the remarks on the a posteriori Step 3 on page 111.
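A minimal sketch of this switching idea, on a scalar toy objective that merely stands in for the reduced functional, might look as follows. It is not the Newton scheme of Section 3.2; the objective, the penalty, and the switching threshold are all invented for illustration.

```python
# Schematic illustration of the switching logic only: penalty-driven descent
# while the iterate is "far away", plain Newton steps once it is "near enough".

def hybrid_minimize(x, grad, hess, penalty_grad, step=0.1,
                    switch_tol=1e-1, tol=1e-10, max_iter=200):
    """Descend on the augmented objective while |grad| is large, then switch to Newton."""
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:                 # critical point of the toy objective found
            return x
        if abs(g) > switch_tol:          # "far away": step on the penalized objective
            x -= step * (g + penalty_grad(x))
        else:                            # "near enough": local Newton step
            x -= g / hess(x)
    return x

# Toy objective (x - 2)^2 with an arbitrary quadratic penalty acting for x < 0.
x_star = hybrid_minimize(
    x=-5.0,
    grad=lambda x: 2.0 * (x - 2.0),
    hess=lambda x: 2.0,
    penalty_grad=lambda x: 2.0 * min(0.0, x),  # derivative of max(0, -x)^2
)
print(x_star)  # converges to 2, found by the final Newton step
```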

6 Note that the state $y_J$ obtained by this means is not expected to fulfill the strict inequality constraint (2.36c). This is precisely the substance of these considerations: analyze the critical point $A$ of the unconstrained functional $F$.
