
1:  procedure Branch-and-Bound Method
2:    Initialize LB = −∞, UB = ∞, node queue P = {P(Ω)}, LB_P(Ω) = −∞
3:    while |P| > 0 and UB − LB > ε do
4:      LB ← min{LB_P : P ∈ P}
5:      Choose P ∈ P, P ← P \ {P}
6:      Solve the relaxation P̃, save lower bound LB_P ← cᵀx̃ (if infeasible, LB_P ← ∞)
7:      if LB_P ≤ UB then
8:        Branch on P: create new sub-problems P_i, i = 1, ..., k
9:        P ← P ∪ {P_i}
10:       LB_Pi ← LB_P
11:     if a feasible solution x ∈ Ω is found then
12:       if cᵀx < UB then
13:         UB ← cᵀx
14:         x* ← x
15:   if UB < ∞ then
16:     return x*
17:   else
18:     return Infeasible

3.3.1 Branch-and-Bound for Optimization of MILPs and Convex MINLPs

Let us consider a convex MINLP of the form (minlp) as the root, and at the start of the algorithm also a leaf, of the branching tree. During the branching step the branch-and-bound algorithm selects a node in the tree and a discrete variable xi, i ∈ J, to branch on. Here, J ⊂ I denotes the index set of discrete variables which have not yet been branched on in the selected node of the branching tree. Two new sub-problems are created at the node by fixing xi = 0 in one sub-problem and xi = 1 in the other. The sub-problems are added to the branching tree as child nodes of the previously selected node.

Tight convex relaxations at a node, needed to derive lower bounds, are obtained by replacing the constraint x_J ∈ {0,1}^|J| by x_J ∈ [0,1]^|J|. If the constraints g are convex, branching on the discrete variables is sufficient to find the global optimal solution of (minlp). The continuous variables can be handled by local optimization strategies, as introduced in Section 3.1.2, as soon as the integer variables are fixed.

Example 3.3.1. Solving a simple MILP via branch-and-bound. Let us consider the MILP

min_x  x3
s.t.   g1(x) = x1 − x2 − 0.5 ≤ 0
       g2(x) = −x1/2 − x2 + 0.75 ≤ 0
       g3(x) = x1/3 + x2 − x3 ≤ 0
       x1, x2 ∈ {0,1}
       0 ≤ x3 ≤ 10                          (P(Ω))

from Example 3.2.1 again. The relaxed problem, where the integer constraints x1, x2 ∈ {0,1} are replaced by the convex hull x1, x2 ∈ [0,1], has the optimal solution x = [5/6, 1/3, 11/18], which was indicated by the point A in Example 3.2.1. The objective value of the relaxed problem, x3 = 11/18, gives a lower bound LB on the optimal solution of (P(Ω)). Let us furthermore assume that we have an upper bound UB = 4/3, given by the feasible point x = [1, 1, 4/3]. Branching on (P(Ω)) by splitting the domain {0,1} of variable x2 into x2 = 0 and x2 = 1 results in two new sub-problems:

min_x  x3
s.t.   g1(x) = x1 − 0.5 ≤ 0
       g2(x) = −x1/2 + 0.75 ≤ 0
       g3(x) = x1/3 − x3 ≤ 0
       x1 ∈ {0,1}
       x2 = 0
       0 ≤ x3 ≤ 10                          (P(Ω0))

min_x  x3
s.t.   g1(x) = x1 − 1.5 ≤ 0
       g2(x) = −x1/2 − 0.25 ≤ 0
       g3(x) = x1/3 + 1 − x3 ≤ 0
       x1 ∈ {0,1}
       x2 = 1
       0 ≤ x3 ≤ 10                          (P(Ω1))

Problem (P(Ω0)) is infeasible, because constraint g2 cannot be fulfilled for x1 ∈ {0,1}. The node (P(Ω0)) is pruned from the branching tree, i.e., not considered for further branching. Relaxing the integer constraint x1 ∈ {0,1} of problem (P(Ω1)) results in a relaxed problem with optimal solution x = [0, 1, 1], which gives the lower bound LB1 = LB = 1. Because x = [0, 1, 1] is also a feasible point of (P(Ω)), the upper bound is updated (UB = 1). The branch-and-bound method terminates successfully with the optimal solution x = [0, 1, 1].
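The computations of Example 3.3.1 can be reproduced with a short script. The following is an illustrative sketch only, not the implementation used in this work: since g3 is active at any optimum, x3 = x1/3 + x2 is taken as the objective, each two-variable LP relaxation is solved by brute-force vertex enumeration, and branching fixes a fractional binary variable to 0 and to 1.

```python
from itertools import combinations

def solve_lp2(cons, obj):
    """Minimize obj[0]*x1 + obj[1]*x2 subject to a1*x1 + a2*x2 <= b for each
    (a1, a2, b) in cons, by enumerating vertices of constraint pairs.
    Returns (value, (x1, x2)), or (inf, None) if infeasible."""
    best_val, best_x = float("inf"), None
    for (a1, a2, b1), (c1, c2, b2) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue  # parallel constraints give no vertex
        x = ((b1 * c2 - a2 * b2) / det, (a1 * b2 - b1 * c1) / det)
        if all(a * x[0] + c * x[1] <= b + 1e-9 for (a, c, b) in cons):
            val = obj[0] * x[0] + obj[1] * x[1]
            if val < best_val:
                best_val, best_x = val, x
    return best_val, best_x

def branch_and_bound():
    # Objective x3 = x1/3 + x2 (constraint g3 is active at the optimum).
    obj = (1 / 3.0, 1.0)
    base = [(1.0, -1.0, 0.5),      # g1: x1 - x2 - 0.5 <= 0
            (-0.5, -1.0, -0.75)]   # g2: -x1/2 - x2 + 0.75 <= 0
    UB, incumbent = float("inf"), None
    queue = [((0.0, 1.0), (0.0, 1.0))]   # box bounds for (x1, x2)
    while queue:
        (l1, u1), (l2, u2) = queue.pop()
        cons = base + [(-1.0, 0.0, -l1), (1.0, 0.0, u1),
                       (0.0, -1.0, -l2), (0.0, 1.0, u2)]
        LB, x = solve_lp2(cons, obj)
        if x is None or LB > UB - 1e-9:
            continue                      # prune: infeasible or dominated
        frac = [i for i, xi in enumerate(x) if min(xi, 1.0 - xi) > 1e-6]
        if not frac:                      # integral point: update incumbent
            UB, incumbent = LB, x
        else:                             # branch: fix variable to 0 and to 1
            bounds = [(l1, u1), (l2, u2)]
            for v in (0.0, 1.0):
                nb = list(bounds)
                nb[frac[0]] = (v, v)
                queue.append(tuple(nb))
    return UB, incumbent
```

Running `branch_and_bound()` returns the optimal value 1 at (x1, x2) = (0, 1), i.e. x = [0, 1, 1], matching the example.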

3.3.2 Spatial Branch-and-Bound for Global Optimization of NLPs and MINLPs

The branch-and-bound algorithm can be used for global optimization of non-convex NLPs and MINLPs by branching on the continuous variables.

Let us consider problem (nlp) again, with box constraints lb_i ≤ x_i ≤ ub_i on each variable x_i, i ∈ {1, ..., n}. To create new sub-problems from any leaf node in the branching tree, one variable x_i, i ∈ {1, ..., n}, is selected and its domain [lb_i, ub_i] is split into two sets [lb_i, ub_i^new] and [lb_i^new, ub_i] with ub_i^new = lb_i^new. A branch-and-bound algorithm which includes branching on continuous variables is referred to as spatial branch-and-bound. To solve non-convex MINLPs, branching on both discrete and continuous variables is necessary. Convex envelopes are, for example, linear or quadratic functions underestimating the problem constraints on the problem domain.

Example 3.3.2. Spatial branch-and-bound with quadratic underestimators

Let us consider a non-convex, univariate function f(x) on a domain x ∈ Ω = [lb, ub], see Figure 6. The root problem (P(Ω)) is the minimization of this function f on the complete domain. Let an upper bound UB on the optimal value be given, e.g. via local minimization of the objective function by use of a solver as introduced in Section 3.1.2. The domain is split into two subsets Ω0 = [lb0, ub0] and Ω1 = [lb1, ub1], which are the domains of the new sub-problems. Convex relaxations fi(x) ≤ f(x), x ∈ Ωi, for the new problems (P(Ω0)) and (P(Ω1)) are derived, here marked with blue lines. The optimal solutions of (P(Ω0)) and (P(Ω1)) give new lower bounds LB0 and LB1 on the optimal value of the objective in the respective domain. In this case UB < LB1 holds. This implies that on the domain Ω1 = [lb1, ub1] no solution candidate exists with a function value below the current best solution candidate. The node (P(Ω1)) is pruned from the branching tree and the search continues on (P(Ω0)). Note that while this example shows quadratic envelopes, other convex relaxations, e.g. linear envelopes, can also be applied.
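The split-bound-prune loop of the example can be sketched in code. The function f below is a hypothetical choice (not a function from this work); on each interval [a, b] the convex quadratic underestimator q(x) = f(x) − α(b − x)(x − a) with α = 1/2 is used, which lies below f and is convex here because f''(x) = −sin x ≥ −1 = −2α.

```python
import math

def f(x):
    """Hypothetical non-convex univariate objective."""
    return math.sin(x) + 0.1 * x

ALPHA = 0.5  # f''(x) >= -1 everywhere, so alpha = 1/2 convexifies q

def q_min(lb, ub):
    """Minimize the convex underestimator q(x) = f(x) - ALPHA*(ub-x)*(x-lb)
    on [lb, ub] by ternary search; returns (min value, minimizer)."""
    q = lambda x: f(x) - ALPHA * (ub - x) * (x - lb)
    a, b = lb, ub
    for _ in range(100):
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if q(m1) <= q(m2):
            b = m2
        else:
            a = m1
    x = 0.5 * (a + b)
    return q(x), x

def spatial_bb(lb, ub, eps=1e-6):
    """Spatial branch-and-bound: split intervals, bound with q, prune."""
    UB, x_best = f(lb), lb
    stack = [(lb, ub)]
    while stack:
        a, b = stack.pop()
        LB, x_q = q_min(a, b)          # lower bound on f over [a, b]
        if f(x_q) < UB:                # underestimator minimizer as candidate
            UB, x_best = f(x_q), x_q
        if LB >= UB - eps or b - a < eps:
            continue                   # prune: no better solution in [a, b]
        mid = 0.5 * (a + b)
        stack.extend([(a, mid), (mid, b)])
    return UB, x_best
```

On [−4, 4] this locates the global minimum of f near x ≈ −1.671 with value ≈ −1.162, while intervals whose lower bound exceeds the incumbent are discarded without further splitting.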

Figure 6: Example for branching and bounding via spatial branch-and-bound

Finding tight convex envelopes to NLPs or MINLPs is a challenging task in practice. Many approaches exploit certain problem structures to get tight relaxations. One method for finding convex envelopes, called α branch-and-bound, is implemented in the solver ANTIGONE [77, 78]. The idea of α branch-and-bound is to find parameters α ∈ R^n such that

G(x) := g(x) − Σ_{i=1}^{n} α_i (ub_i − x_i)(x_i − lb_i)

is convex. To do so, the algorithm uses second-order derivative information of the constraint g. Other solvers like Couenne [79] and SCIP [80] use linear underestimators to get lower bounds.

3.3.3 Branch-and-Bound Solver SCIP

To solve the MINLPs, we use the software package Solving Constraint Integer Programs (SCIP) [80, 81, 82] (SCIPOpt Suite 6.0.1) together with the Python interface PySCIPOpt [83] (PySCIPOpt 1.2.0). SCIP is a state-of-the-art branch-and-bound solver with extensive presolving. In the numerical analysis carried out by Kronqvist [84], the solver SCIP showed performance comparable to the commercial solver Baron [85] for convex problems. As open-source software, SCIP has the advantage over commercial solvers of being easily extendable by the user. We refer to Vigerske et al. [82] as a reference for the following underestimators used in SCIP.

SCIP uses a spatial branch-and-bound in which linear constraints, also called cuts, are added to the relaxed problem formulation when constraints are violated. The SCIP algorithm reformulates the problem via expression trees to analyze which functions are known to be convex or concave. This information makes it possible to apply tighter bounds on the constraints.

SCIP has information about the convexity of simple functions. The convexity information is propagated along the expression tree, if beneficial by introducing new variables, via relations such as

g1, g2 convex functions =⇒ g1 + g2 is a convex function.

The underestimators are then derived from the following functions:

Corollary 3.3.1. For a convex function g : R^n −→ R^{m_g}, a linear underestimator G is given by

G(x) = g(x̂) + ∇g(x̂)ᵀ (x − x̂)

for any point x̂ ∈ R^n.
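A quick numerical check of this tangent-plane construction, using exp as a hypothetical univariate convex function (its own derivative, so g and ∇g coincide):

```python
import math

def tangent_underestimator(g, dg, x_hat):
    """Linear underestimator of a convex g from Corollary 3.3.1:
    G(x) = g(x_hat) + dg(x_hat)*(x - x_hat).
    Convexity of g guarantees G(x) <= g(x) for all x."""
    def G(x):
        return g(x_hat) + dg(x_hat) * (x - x_hat)
    return G

# Hypothetical example: g(x) = exp(x), linearized at x_hat = 0.5.
G = tangent_underestimator(math.exp, math.exp, 0.5)
```

On a grid over [−3, 3] the tangent stays below exp and touches it at the linearization point.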

Corollary 3.3.2. For a concave function g : R^n −→ R^{m_g} on [lb, ub] ⊂ R^n, a linear underestimator G is given by the solution of a linear optimization problem.
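The linear optimization problem is not reproduced here; in one dimension, the secant through the interval endpoints is one valid linear underestimator of a concave function (it lies below the graph by concavity). The following sketch checks this with sqrt as a hypothetical example:

```python
import math

def secant_underestimator(g, lb, ub):
    """For concave g on [lb, ub], the secant through (lb, g(lb)) and
    (ub, g(ub)) lies below g on the interval, so it is a valid linear
    underestimator (exact at both endpoints)."""
    slope = (g(ub) - g(lb)) / (ub - lb)
    return lambda x: g(lb) + slope * (x - lb)

# Hypothetical example: g(x) = sqrt(x), concave on [0, 4].
G = secant_underestimator(math.sqrt, 0.0, 4.0)
```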

Corollary 3.3.3. For an indefinite function on [lb, ub] ⊂ R^n, i.e., neither convex nor concave, the underestimator G is given by

G(x) = g(x̂) + Σ_{i: xi > x̂i} d̲_i (xi − x̂i) + Σ_{i: xi ≤ x̂i} d̄_i (xi − x̂i),

where x̂ ∈ R^n, and d̲ and d̄ are chosen such that ∇g(x) ∈ [d̲, d̄] for all x ∈ [lb, ub]. For proofs we refer to [82] and [86]. Automatic differentiation and interval arithmetic are used to calculate d̲ and d̄ in SCIP.
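This interval-gradient underestimator can be checked numerically in one dimension; the sketch below uses sin on [0, 2π] as a hypothetical example, where the derivative cos lies in [−1, 1].

```python
import math

def interval_grad_underestimator(g, x_hat, d_lo, d_hi):
    """Underestimator for a function whose derivative on the domain is
    bounded in [d_lo, d_hi]:
        G(x) = g(x_hat) + d_lo*(x - x_hat)  for x >  x_hat,
               g(x_hat) + d_hi*(x - x_hat)  for x <= x_hat.
    By the mean value theorem, G(x) <= g(x) on the domain."""
    def G(x):
        d = d_lo if x > x_hat else d_hi
        return g(x_hat) + d * (x - x_hat)
    return G

# Hypothetical example: g = sin with g'(x) = cos(x) in [-1, 1], x_hat = 2.
G = interval_grad_underestimator(math.sin, 2.0, -1.0, 1.0)
```

Here G(x) = sin(2) − |x − 2|, which stays below sin on the whole interval because sin is 1-Lipschitz.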

3.4 PDE-Constrained Optimization

Many mathematical problems in process engineering include mass balances and energy balances in the form of partial differential equations (PDEs). Formulating these balances as first-order PDEs in one spatial dimension z ∈ [0, Z] and time t ∈ [0, tf] results in conservation laws of the form

∂ψ(z,t)/∂t + ∂a(ψ(z,t), z, t)/∂z + b(ψ(z,t), z, t) = 0,   (12)

where ψ : R^2 −→ R^{n_ψ}. To handle PDEs in the framework of numerical optimization, we first reduce the PDEs to systems of ordinary differential equations (ODEs) via the following two approaches: in the special case that the PDE is quasi-linear, we apply the method of characteristics; otherwise, we semi-discretize the PDE via the finite volume method (FVM). The ODEs can then be handled in an NLP framework by using numerical integrators such as IDAS [87]. We apply direct multiple shooting to increase the numerical stability of solving the ODEs [88].
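As a concrete, hypothetical instance of the semi-discretization step, consider the linear advection case a(ψ, z, t) = c·ψ, b = 0 of the conservation law (12). A minimal sketch (not the IDAS/multiple-shooting setup of this work) is a first-order upwind finite-volume scheme with explicit Euler time stepping on a periodic grid:

```python
def advect_fvm(psi0, c, dz, dt, steps):
    """Semi-discretized linear advection d(psi)/dt + c*d(psi)/dz = 0
    (a(psi) = c*psi, b = 0) on a periodic grid: first-order upwind
    finite-volume fluxes, explicit Euler in time. Stable under the
    CFL condition c*dt/dz <= 1 (c > 0 assumed)."""
    psi = list(psi0)
    n = len(psi)
    nu = c * dt / dz  # Courant number
    for _ in range(steps):
        # cell update: convex combination of own and upwind neighbor value
        psi = [psi[i] - nu * (psi[i] - psi[i - 1]) for i in range(n)]
    return psi
```

The scheme conserves the total mass (the discrete integral of ψ) exactly and, being monotone, introduces no new extrema, which are the properties one wants preserved in the NLP formulation.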

3.4.1 Method of Characteristics

The method of characteristics is used to reduce a linear or quasi-linear PDE to a system of ODEs that describes the solution along characteristic curves.