

Algorithmic Details

3.1 Branching for Inner Methods

Effective branching methods are an important element of branch-and-bound algorithms and have been studied extensively for the specific case of branch and cut [52]. For branch and price, there has also been some recent research in this area [88]. Branching methods in the context of relax-and-cut have received far less attention in the literature. As with most research on branch-and-price-and-cut methods, the existing work on branching is largely application-specific. Here, we take a very generic approach to branching, with the goal of keeping the framework both theoretically and computationally straightforward to implement.

In this section, we explain how a generic branching strategy can be implemented in a straight-forward way in the context of inner methods by mapping back to the compact space. We first explain the general concept and illustrate it in the context of branch-and-price-and-cut. We then briefly consider the same setup in the context of branch-and-relax-and-cut. Since Lagrangian methods do not, by default, generate a primal solution that can be mapped back to the compact space, there are some additional considerations in this case.

In order to develop branching methods that are generic with regard to the overall decomposition framework, we recall the mapping (2.18) defined in Section 2.1.2. When integrating methods for generating valid inequalities with inner methods, this projection into the space of the compact formulation is precisely what allows us to add cuts to the extended formulation without any change in the complexity of the system. The same idea can also be used to allow for a simple implementation of branching.

Branching for Price-and-Cut Consider the standard branching method used in the branch-and-cut algorithm. After solving the linear programming relaxation to obtain x̂ ∈ R^n, we choose an integer-constrained variable x_i for which x̂_i ∉ Z and produce two disjoint subproblems by enforcing x_i ≤ ⌊x̂_i⌋ in one subproblem and x_i ≥ ⌈x̂_i⌉ in the other. In standard branch-and-cut, the branching constraints are enforced by simply updating the variable bounds in each subproblem. Now consider the Dantzig-Wolfe reformulation presented in Section 2.1.2 and the price-and-cut algorithm defined in Section 2.2.1. A simple trick is to let all variable bounds be considered explicitly as constraints in the compact formulation by moving them into the set of side constraints [A′′, b′′]. This approach greatly simplifies the process of branching in the context of the Dantzig-Wolfe reformulation. Using the mapping (2.18), the variable bounds can then be written as general constraints of the form

l_i ≤ (Σ_{s∈E} s λ_s)_i ≤ u_i ∀ i ∈ I

in the extended formulation. After solving the master linear program to obtain λ̂, we use the mapping to construct a solution x̂ in the compact space. Then, as in standard branch-and-cut, we choose a variable x̂_i whose value is currently fractional, i.e., (Σ_{s∈E} s λ̂_s)_i = x̂_i ∉ Z, and produce two disjoint subproblems by enforcing (Σ_{s∈E} s λ_s)_i ≤ ⌊x̂_i⌋ in one subproblem and (Σ_{s∈E} s λ_s)_i ≥ ⌈x̂_i⌉ in the other. Since these are branching constraints rather than the standard branching variables, we enforce them by adding them directly to [A′′, b′′].
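The scheme above can be sketched in a few lines. This is only an illustrative sketch (the function names and the most-fractional selection rule are our own choices, not part of the framework): the master solution λ̂ is mapped to x̂ = Σ_{s∈E} s λ̂_s, a fractional integer-constrained variable is selected, and the two branching constraints are returned as coefficient vectors over λ.

```python
import math

def map_to_compact(points, lam):
    """The mapping (2.18): x_hat = sum over s in E of s * lambda_s."""
    n = len(points[0])
    return [sum(s[i] * l for s, l in zip(points, lam)) for i in range(n)]

def branch(points, lam, integer_vars, tol=1e-6):
    """Pick the most fractional integer variable of the mapped solution and
    return the two branching constraints in the lambda-space of the master:
    the coefficient of lambda_s is the i-th coordinate of extreme point s."""
    x_hat = map_to_compact(points, lam)
    i_best, best = None, tol
    for i in integer_vars:
        f = x_hat[i] - math.floor(x_hat[i])
        score = min(f, 1.0 - f)  # distance to the nearest integer
        if score > best:
            i_best, best = i, score
    if i_best is None:
        return None  # mapped solution is integral: no branching needed
    coefs = [s[i_best] for s in points]
    lo, hi = math.floor(x_hat[i_best]), math.ceil(x_hat[i_best])
    return i_best, (coefs, "<=", lo), (coefs, ">=", hi)
```

Applied to the root-node data of Example 1, this selects x0 and produces the bounds 2 and 3.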

To illustrate this, we return to our first example, picking up from the last iteration of the Dantzig-Wolfe method as described in Section 2.1.2.

Figure 3.1: Branching in the Dantzig-Wolfe method (Example 1: SILP)

Example 1: SILP (Continued) In the final iteration of the Dantzig-Wolfe method, we had constructed the inner approximation PI0 = conv{(4,1), (5,5), (2,1), (3,4)} and solved the restricted master to give the solution (λDW)(2,1) = 0.58 and (λDW)(3,4) = 0.42. For this example, let the superscript on each polyhedron represent the node number. Solving the subproblem, we showed that no more improving columns could be found and that the current bound was optimal for the chosen relaxation. For the sake of illustration, let us assume that no violated cuts were found. Therefore, we have completed the calculation of the root node bound and now need to check whether the solution is in fact feasible for the original problem. Using the mapping (2.18), we compose the solution in the compact space as xDW = (2.42, 2.25). Since the solution is fractional, we branch on the most fractional variable x0 by creating two new nodes:

Node 1: PI1 = conv{x ∈ R2 | x satisfies (1.6)–(1.11), (1.17), x0 ≤ 2},
PO1 = {x ∈ R2 | x satisfies (1.12)–(1.16), x0 ≤ 2},
Node 2: PI2 = conv{x ∈ R2 | x satisfies (1.6)–(1.11), (1.17), x0 ≥ 3},
PO2 = {x ∈ R2 | x satisfies (1.12)–(1.16), x0 ≥ 3}.

Notice here that we choose to include the branching constraint in both the master and the subproblem. Adding these constraints to the subproblem can improve convergence at the expense of increasing the time required for each subproblem solve. The choice is purely empirical and the tradeoff can differ from application to application. This step is depicted in Figure 3.1(a).

We now use the mapping between the compact formulation and the extended formulation to produce the equivalent branching cuts in the master as follows:

Node 1: 4 (λDW)(4,1) + 5 (λDW)(5,5) + 2 (λDW)(2,1) + 3 (λDW)(3,4) ≤ 2,
Node 2: 4 (λDW)(4,1) + 5 (λDW)(5,5) + 2 (λDW)(2,1) + 3 (λDW)(3,4) ≥ 3.

We then solve the master problem for node 1 and immediately declare this node infeasible. Moving to node 2, we solve the master problem, which gives (λDW)(4,1) = 0.04, (λDW)(2,1) = 0.04, (λDW)(3,4) = 0.92, and xDW = (3, 3.75). This solution is depicted in Figure 3.1(b). Next, we solve the subproblem in order to generate new columns but find that none exist that can improve the bound. Therefore, we are done with node 2 and have a new lower bound zDW = 3. Since the solution is fractional, we once again branch, this time on x1, creating two new nodes, as depicted in Figure 3.1(b):
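As a quick sanity check on the arithmetic at node 2 (note that the multipliers above are rounded values, so the second coordinate of the mapped point comes out near, but not exactly at, 3.75):

```python
# Extreme points of the inner approximation and the (rounded) node 2
# multipliers reported in the text.
E = [(4, 1), (5, 5), (2, 1), (3, 4)]
lam = [0.04, 0.0, 0.04, 0.92]

# Map back to the compact space via (2.18): x = sum of s * lambda_s.
x = [sum(s[i] * l for s, l in zip(E, lam)) for i in range(2)]

# The node 2 branching cut 4l + 5l + 2l + 3l >= 3 holds with equality,
# so the branching constraint is tight at this solution.
cut_lhs = sum(s[0] * l for s, l in zip(E, lam))
```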

Node 3: PI3 = conv{x ∈ R2 | x satisfies (1.6)–(1.11), (1.17), x0 ≥ 3 and x1 ≤ 3},
PO3 = {x ∈ R2 | x satisfies (1.12)–(1.16), x0 ≥ 3 and x1 ≤ 3},
Node 4: PI4 = conv{x ∈ R2 | x satisfies (1.6)–(1.11), (1.17), x0 ≥ 3 and x1 ≥ 4},
PO4 = {x ∈ R2 | x satisfies (1.12)–(1.16), x0 ≥ 3 and x1 ≥ 4}.

Using the mapping again gives the following branching cuts in the respective nodes:

Node 3: 4 (λDW)(4,1) + 5 (λDW)(5,5) + 2 (λDW)(2,1) + 3 (λDW)(3,4) ≥ 3,
1 (λDW)(4,1) + 5 (λDW)(5,5) + 1 (λDW)(2,1) + 4 (λDW)(3,4) ≤ 3,

Node 4: 4 (λDW)(4,1) + 5 (λDW)(5,5) + 2 (λDW)(2,1) + 3 (λDW)(3,4) ≥ 3,
1 (λDW)(4,1) + 5 (λDW)(5,5) + 1 (λDW)(2,1) + 4 (λDW)(3,4) ≥ 4.

Next, we solve the master problem for node 3, which gives the solution (λDW)(4,1) = 0.16, (λDW)(2,1) = 0.17, (λDW)(3,4) = 0.67, and xDW = (3, 3). This solution is depicted in Figure 3.1(c). Since this solution is integer feasible, it gives a global upper bound of 3. Since the global lower bound is also 3, we are done.
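The integrality test at node 3 can be checked the same way; because the reported multipliers are rounded, the mapped point lands within 0.01 of (3, 3), so a small tolerance is needed in the check:

```python
# (Rounded) node 3 multipliers reported in the text.
E = [(4, 1), (5, 5), (2, 1), (3, 4)]
lam = [0.16, 0.0, 0.17, 0.67]

x = [sum(s[i] * l for s, l in zip(E, lam)) for i in range(2)]

# With exact multipliers x = (3, 3); the rounding in lam leaves each
# coordinate within 0.01 of an integer, so test with a tolerance.
is_integral = all(abs(v - round(v)) <= 0.02 for v in x)
```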

Branching for Relax-and-Cut We now address the additional considerations that arise when using this idea in the context of relax-and-cut. As mentioned in Section 2.1.3, one way to solve the master problem in the Lagrangian method is to use the subgradient method. Standard versions of the subgradient method provide only a dual solution (uLR, αLR), which is sufficient when using the method only for bounding. As shown in Section 2.2.2, we can also apply cutting planes by separating the solutions s ∈ F′ of the Lagrangian subproblem. However, in order to use the branching framework described above, we need to be able to map back to the compact formulation so that we can construct dichotomies based on the bounds in the compact space. If this can be accomplished, then the rest of the machinery follows.

As with branch-and-price, the majority of the literature on branching in the context of the Lagrangian method uses application-specific information. Some authors have suggested the following approach [27]. Let B denote the set of extreme points that have been found in Step 2 of Figure 2.8. After the method has converged and the bound zLD has been found, take the set of extreme points B and form the inner approximation PI as we do in the context of the Dantzig-Wolfe method. That is, PI = conv(B).

Now, construct a linear program analogous to the Dantzig-Wolfe restricted master formulation, using this set of extreme points as the inner approximation, as follows:

min c⊤(Σ_{s∈B} s λ_s)
s.t. A′′(Σ_{s∈B} s λ_s) ≥ b′′,
Σ_{s∈B} λ_s = 1,
λ_s ≥ 0 ∀ s ∈ B.

Solve this linear program, giving an optimal primal solution λ̂. From this, we can go back to our mapping (2.18) to project into the compact space and construct the point x̂. This idea of using the generated set of extreme points to construct a primal solution has close ties to several other important areas of research, including bundle methods [18] and the volume algorithm [7].
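The construction above can be sketched as a small linear program over the multipliers. This is only an illustration: the extreme points are those generated in Example 1, but the objective c and the side constraints [A′′, b′′] below are hypothetical stand-ins, and SciPy's general-purpose linprog stands in for whatever LP solver a real implementation would use.

```python
import numpy as np
from scipy.optimize import linprog

# Extreme points found by the Lagrangian method (the set B, Example 1).
B = np.array([(4, 1), (5, 5), (2, 1), (3, 4)], dtype=float)

# Hypothetical stand-ins for the compact-formulation data: objective c
# and side constraints A'' x >= b'' (here a single constraint x0 + x1 >= 4).
c = np.array([1.0, 2.0])
A_side = np.array([[1.0, 1.0]])
b_side = np.array([4.0])

# Restricted master over lambda:
#   min (B c) . lambda
#   s.t. A''(B^T lambda) >= b'', sum(lambda) = 1, lambda >= 0.
obj = B @ c                 # cost of selecting each extreme point
A_ub = -(A_side @ B.T)      # flip signs: linprog expects <= constraints
b_ub = -b_side
A_eq = np.ones((1, len(B))) # convexity constraint
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              method="highs")

lam = res.x
x_hat = B.T @ lam           # project back to the compact space via (2.18)
```

For this data the optimum places weight 0.5 on each of (4, 1) and (2, 1), giving x̂ = (3, 1).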

Just as with branching for price-and-cut, once the branching constraints have been constructed, we enforce them by adding them directly to [A′′, b′′]. In relax-and-cut, these constraints are immediately relaxed and new dual multipliers are added to the Lagrangian master problem, as described in Steps 2 and 3(b) of the relax-and-cut procedure in Figure 2.16. As before, we have the choice of whether to enforce the branching constraints in the master problem or the subproblem.