
2.3 Review on Solution Techniques for Convex MINLPs

There exists a variety of different algorithms for solving convex mixed-integer nonlinear optimization problems of the form (1.6), see, e.g., Floudas [52], Grossmann and Kravanja [61] or Bonami, Kilinc and Linderoth [28] for review papers. This thesis focuses on the development of solution methods for convex MINLP problems based on the successive solution of mixed-integer quadratic subproblems. Our new algorithm, called MIQP-Supported Outer Approximation (MIQPSOA) and proposed in Chapter 3, as well as the solution methods reviewed in this chapter rely on gradient information.

In general, the existence of lower and upper bounds is of major importance for solving any mixed-integer optimization problem. These bounds refer to the value of the objective function of the mixed-integer program. For a lower bound f̲, f(x, y) ≥ f̲ holds at the optimal solution (x, y), while f(x, y) ≤ f̄ holds for an upper bound f̄.
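To make this bounding principle concrete, the following minimal Python sketch (an illustration only, not part of any method discussed here) shows the generic gap-based stopping test shared by the algorithms reviewed in this chapter; the callback step is a hypothetical placeholder for one algorithm-specific iteration.

```python
def bounding_loop(step, tol=1e-6):
    """Generic sketch of the bounding principle. `step` is a hypothetical
    callback performing one algorithm-specific iteration; it must return a
    valid global lower bound and, if available, a feasible point together
    with its objective value (which yields a valid upper bound)."""
    lower, upper, incumbent = float("-inf"), float("inf"), None
    while upper - lower > tol:
        lower, point, value = step()
        if point is not None and value < upper:
            upper, incumbent = value, point   # improved upper bound
    return incumbent, lower, upper
```

All methods below differ only in how `step` produces these two bounds; termination is always certified by a sufficiently small gap between them.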

The existing solution methods can be classified into branch-and-bound algorithms, methods based on the successive solution of mixed-integer linear relaxations and advanced methods integrating continuous nonlinear and mixed-integer techniques.

Branch-and-bound methods are commonly used to solve non-convex MINLPs (1.1) by exploiting analytical problem structures within a global solver, such as BARON, see Sahinidis [92]. NLP-based branch-and-bound methods do not rely on any analytical knowledge of the problem and are often applied to solve the convex MINLP (1.6).

Although we concentrate on the development of competitive solution approaches, we briefly review the basic concepts of general branch-and-bound algorithms and of NLP-based branch-and-bound in Section 2.4. Furthermore, a basic implementation is used as a benchmark for our numerical tests in Chapter 6.

Another class of algorithms relies on the successive solution of mixed-integer linear programming (MILP) relaxations of the convex MINLP (1.6). Among these methods is the extended cutting plane method proposed by Westerlund and Pettersson [108], see also Westerlund and Pörn [109] and Section 2.7 for details. Linear outer approximation and the generalized Benders' decomposition proposed by Geoffrion [57] are other frequently used algorithms. The basic concept of linear outer approximation methods was introduced by Duran and Grossmann [43] and extended by Fletcher and Leyffer [50]. We present the basic theory of the linear outer approximation approach of Fletcher and Leyffer [50] in Section 2.5. Generalized Benders' decomposition is a similar approach that is briefly described in Section 2.6.

The last class of solvers for convex MINLPs contains advanced, state-of-the-art solution methods. Traditionally, continuous nonlinear and mixed-integer optimization are decoupled, since efficient black-box solvers for both problems are available. The advanced methods improve the performance by integrating both techniques. This class is constituted by LP/NLP-based branch-and-bound proposed by Quesada and Grossmann [90], see Section 2.8, and nonlinear branch-and-bound with early branching according to Leyffer [76], which is reviewed in Section 2.9.

Convex MINLP problems are hard to solve, and each available algorithm, such as NLP-based branch-and-bound, linear outer approximation, generalized Benders' decomposition, LP/NLP-based branch-and-bound and the extended cutting plane method, is based on the successive solution of complex subproblems. Since the problem class of mixed-integer nonlinear optimization problems combines mixed-integer linear programming and continuous nonlinear programming, these subproblems are traditionally either MILPs or NLPs.

Based on Grossmann [62] we review the interrelation of these solution methods by analyzing their subproblems. To ease readability we omit linear equality constraints, i.e., J^= = ∅ and J = J^>.

Essentially, three different continuous nonlinear subproblems might arise during the solution of the convex MINLP problem (1.6). One subproblem, which is associated with NLP-based branch-and-bound methods, is the continuous relaxation denoted by NLP_r:

    x ∈ X, y ∈ Y_R:   min  f(x, y)
                      s.t.  g_j(x, y) ≥ 0,   j ∈ J,        (2.47)

where Y_R is defined by (1.8). The optimal objective value of the continuous relaxation provides a lower bound on the optimal solution of MINLP (1.6). NLP-based branch-and-bound methods successively solve problem (2.47). In each iteration the box constraints of the integer variables are refined, i.e., replaced by more stringent lower or upper bounds, which are given by

    y_i ≥ (y_l^k)_i > (y_l)_i,   i ∈ I_k^L,        (2.48)

or

    y_i ≤ (y_u^k)_i < (y_u)_i,   i ∈ I_k^U.        (2.49)

I_k^L and I_k^U denote the corresponding index subsets of the integer variables y_i, i ∈ I, with (y_l)_i < (y_l^k)_i or (y_u^k)_i < (y_u)_i, respectively, in iteration k, i.e.,

    I_k^L := { i ∈ I : ∃ (y_l^k)_i : y_i ≥ (y_l^k)_i > (y_l)_i in iteration k }        (2.50)

and

    I_k^U := { i ∈ I : ∃ (y_u^k)_i : y_i ≤ (y_u^k)_i < (y_u)_i in iteration k }.        (2.51)

The remaining two continuous nonlinear subproblems arise in linear outer approximation, generalized Benders' decomposition and LP/NLP-based branch-and-bound.
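As an illustration of this refinement scheme, the following Python sketch outlines an NLP-based branch-and-bound loop. It is a minimal toy version, not the benchmark implementation used in Chapter 6, and it assumes convex callables f and g operating on one stacked vector z = (x, y), with boxes given as (lower, upper) pairs.

```python
import math
from scipy.optimize import minimize

def nlp_bnb(f, g, z0, box, int_idx, tol=1e-6):
    """Toy NLP-based branch-and-bound for min f(z) s.t. g(z) >= 0.
    `int_idx` marks the integrality-constrained entries of z. Each node
    relaxation (2.47) is solved by a local NLP solver; branching refines
    the box of one fractional integer variable as in (2.48)/(2.49)."""
    upper, incumbent = math.inf, None
    nodes = [list(box)]                               # open boxes (depth-first)
    while nodes:
        bx = nodes.pop()
        res = minimize(f, z0, bounds=bx,
                       constraints=[{"type": "ineq", "fun": g}])
        if not res.success or res.fun >= upper:       # infeasible or pruned
            continue
        frac = [i for i in int_idx
                if abs(res.x[i] - round(res.x[i])) > tol]
        if not frac:                                  # integer feasible point
            upper, incumbent = res.fun, res.x         # new upper bound
            continue
        i = frac[0]                                   # branch on fractional y_i
        lo, hi = bx[i]
        down, up = list(bx), list(bx)
        down[i] = (lo, math.floor(res.x[i]))          # y_i <= floor(...)  (2.49)
        up[i] = (math.ceil(res.x[i]), hi)             # y_i >= ceil(...)   (2.48)
        nodes += [down, up]
    return incumbent, upper
```

Each relaxation value res.fun is a valid lower bound on its node, which justifies the pruning test res.fun >= upper against the current incumbent.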

They are both derived from the original mixed-integer nonlinear program (1.6) by fixing the integer variables to a certain integer value y^k ∈ Y, where k denotes the k-th integer value that is considered. Fixing the integer variables y ≡ y^k yields the continuous problem NLP(y^k) given by

    x ∈ X:   min  f(x, y^k)
             s.t.  g_j(x, y^k) ≥ 0,   j ∈ J.        (2.52)

Its solution x̄_{y^k} ∈ X provides an upper bound for the mixed-integer nonlinear program (1.6). If problem NLP(y^k) does not possess a feasible solution, i.e.,

    ∄ x ∈ X :  g_j(x, y^k) ≥ 0   ∀ j ∈ J,        (2.53)

for some integer value y^k ∈ Y, subproblem (2.52) has to be replaced by the so-called feasibility subproblem. The goal is to find a value of x ∈ X such that the constraints are least violated in some norm for fixed y^k ∈ Y. Choosing the ∞-norm, the nonlinear feasibility problem F(y^k) is given by

    x ∈ X, η_F ∈ R_+ :   min  η_F
                         s.t.  g_j(x, y^k) ≥ −η_F,   j ∈ J,        (2.54)

and we denote its solution by (x̄_{y^k}, η̄_F) ∈ X × R_+. Here η_F ∈ R_+ is an additional variable measuring the maximal constraint violation of g_j(x, y^k), j ∈ J. Note that problem (2.54) minimizes the maximal constraint violation, i.e., the ‖·‖_∞-norm of the constraint violation. There exist many variants of feasibility problem (2.54), e.g., using different norms to measure the constraint violation. In the sequel, all variants are denoted by F(y^k), since a distinction is not necessary.
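The interplay of NLP(y^k) and F(y^k) can be sketched in a few lines of Python. The code below is a hedged illustration using scipy.optimize.minimize as a local NLP solver; f(x, y) and g(x, y) are assumed to be convex callables, with g returning the vector of constraint values.

```python
import numpy as np
from scipy.optimize import minimize

def solve_fixed_integer(f, g, x0, yk):
    """Sketch: solve NLP(y^k) (2.52) for fixed integers y^k; if no feasible
    point is found, fall back to the infinity-norm feasibility problem
    F(y^k) (2.54)."""
    cons = [{"type": "ineq", "fun": lambda x: g(x, yk)}]
    res = minimize(lambda x: f(x, yk), x0, constraints=cons)
    if res.success:
        return res.x, res.fun, True        # upper bound f(x̄_{y^k}, y^k)

    # F(y^k): min eta_F  s.t.  g_j(x, y^k) >= -eta_F,  eta_F >= 0,
    # with the stacked variable z = (x, eta_F).
    z0 = np.append(x0, 1.0)
    feas_cons = [{"type": "ineq", "fun": lambda z: g(z[:-1], yk) + z[-1]},
                 {"type": "ineq", "fun": lambda z: z[-1]}]
    res = minimize(lambda z: z[-1], z0, constraints=feas_cons)
    return res.x[:-1], res.x[-1], False    # point of least violation
```

In either outcome the returned point serves as a linearization point for the master problems discussed next.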

Some algorithms for solving convex MINLPs rely on the successive solution of mixed-integer linear programming problems, which are obtained by relaxing the nonlinear functions of the convex MINLP (1.6). These relaxations are denoted by MILP_r^k and often called master problems. The master problem is successively refined by additional linearizations obtained in previous iterations i ≤ k at some points (x^i, y^i) ∈ X × Y_R. In iteration k, the master problem is given by

    x ∈ X, y ∈ Y, η ∈ R:   min  η
                           s.t.  f(x^i, y^i) + ∇_{x,y} f(x^i, y^i)^T ( x − x^i ; y − y^i ) ≤ η,   ∀ (x^i, y^i) ∈ L^k,        (2.55)
                                 g_j(x^i, y^i) + ∇_{x,y} g_j(x^i, y^i)^T ( x − x^i ; y − y^i ) ≥ 0,   ∀ (x^i, y^i) ∈ L^k,  j ∈ J^i,

with J^i ⊆ J, ∀ i = 1, . . . , k, and L^k := { (x^i, y^i) ∈ X × Y_R, i = 1, . . . , k }. The set J^i in iteration i is a subset of J, which can vary in each iteration i = 1, . . . , k; furthermore, its definition depends on the corresponding algorithm. (x^i, y^i) ∈ X × Y_R, i = 1, . . . , k, are the linearization points obtained in iterations i ≤ k. For a convex MINLP (1.6) the corresponding MILP_r^k (2.55) is defined such that the constraints

    f(x^i, y^i) + ∇_{x,y} f(x^i, y^i)^T ( x − x^i ; y − y^i ) ≤ η,   ∀ (x^i, y^i) ∈ L^k        (2.56)

underestimate the objective function f(x, y). Figure 2.1 illustrates the situation: the objective function f is underestimated by two linearizations (2.56) denoted by lin1(f) and lin2(f). Since the constraints

    g_j(x^i, y^i) + ∇_{x,y} g_j(x^i, y^i)^T ( x − x^i ; y − y^i ) ≥ 0,   j ∈ J^>,  ∀ (x^i, y^i) ∈ L^k        (2.57)

overestimate the feasible region, the master problem (2.55) is a relaxation of the convex MINLP (1.6) for any linearization points (x^i, y^i) ∈ L^k. Figure 2.2 shows that the feasible region of a convex MINLP described by g1(x, y), g2(x, y) and g3(x, y) is overestimated by any linearizations lin1, lin2 of any of the constraints g1(x, y), g2(x, y), g3(x, y) obtained from (2.57).
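To illustrate how such a master problem is assembled, the following Python sketch collects the cuts (2.56) and (2.57) from a list of linearization points. The callables and the coefficient format (a, b) are assumptions made for the sake of the example; a real implementation would hand these rows to an MILP solver.

```python
import numpy as np

def build_cuts(f, grad_f, g_list, grad_g_list, points, active_sets):
    """Sketch: assemble the linearizations (2.56)/(2.57) of MILP_r^k (2.55).
    `points` holds the (x^i, y^i) in L^k, each as one stacked vector z^i;
    `active_sets` holds the index sets J^i. Each cut is returned as (a, b),
    meaning a^T z + b <= eta for objective cuts and a^T z + b <= 0 for
    constraint cuts."""
    obj_cuts, con_cuts = [], []
    for zi, Ji in zip(points, active_sets):
        a = grad_f(zi)
        b = f(zi) - a @ zi
        obj_cuts.append((a, b))          # f(z^i) + grad^T (z - z^i) <= eta
        for j in Ji:
            aj = grad_g_list[j](zi)
            bj = g_list[j](zi) - aj @ zi
            con_cuts.append((-aj, -bj))  # g_j(z^i) + grad^T (z - z^i) >= 0
    return obj_cuts, con_cuts
```

By convexity of f and g_j, every objective cut lies below f and every constraint cut encloses the feasible region, which is exactly the relaxation property of (2.55).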

[Figure: plot of f(x) together with the linearizations lin1(f) and lin2(f)]

Fig. 2.1: Underestimating the Objective Function

[Figure: feasible region described by g1(x, y), g2(x, y), g3(x, y) together with the linearizations lin1(g1), lin1(g2), lin1(g3), lin2(g1), lin2(g2), lin2(g3)]

Fig. 2.2: Overestimating the Feasible Region

As a consequence, the value of η, which is part of the optimal solution of MILP (2.55), provides a lower bound on the global solution of MINLP (1.6). The linearization points (x^i, y^i) ∈ L^k often correspond to a solution of one of the continuous nonlinear subproblems NLP(y^i) or F(y^i). More details concerning the subproblems NLP(y^k), F(y^k) and MILP_r^k in the context of linear outer approximation and generalized Benders' decomposition are given in Section 2.5 and Section 2.6, respectively.
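Putting these pieces together, the overall loop behind linear outer approximation can be sketched as follows. Both solver callbacks are hypothetical black boxes (the fixed-integer NLP/feasibility step and the MILP master), and the loop is only meant to show how the two bounds interact; see Section 2.5 for the actual method of Fletcher and Leyffer [50].

```python
import math

def outer_approximation(solve_nlp_fixed, solve_milp_master, y0, tol=1e-6):
    """Sketch of an outer-approximation driver: alternate NLP(y^k)/F(y^k)
    (upper bounds and new linearization points) with the master problem
    MILP_r^k (lower bounds via eta)."""
    upper, lower = math.inf, -math.inf
    incumbent, yk, points = None, y0, []
    while upper - lower > tol:
        xk, val, feasible = solve_nlp_fixed(yk)   # NLP(y^k) or F(y^k)
        if feasible and val < upper:
            upper, incumbent = val, (xk, yk)      # new upper bound
        points.append((xk, yk))                   # extend L^k
        eta, yk = solve_milp_master(points)       # relaxation (2.55)
        lower = eta                               # eta bounds MINLP from below
    return incumbent, lower, upper
```

Since the master problem (2.55) grows by at least one cut per iteration, its optimal η is nondecreasing, so the gap between upper and lower closes monotonically from below.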