Branch and cut

Branch and cut [1] is a method of combinatorial optimization for solving integer linear programs (ILPs), that is, linear programming (LP) problems in which some or all of the unknowns are restricted to integer values. [2] Branch and cut involves running a branch and bound algorithm and using cutting planes to tighten the linear programming relaxations. Note that if cuts are only used to tighten the initial LP relaxation, the algorithm is called cut and branch.

Algorithm

This description assumes the ILP is a maximization problem.

The method solves the linear program without the integer constraint using the regular simplex algorithm. When an optimal solution is obtained, and this solution has a non-integer value for a variable that is supposed to be integer, a cutting plane algorithm may be used to find further linear constraints which are satisfied by all feasible integer points but violated by the current fractional solution. These inequalities may be added to the linear program, such that re-solving it will yield a different solution which is hopefully "less fractional".
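As a small worked illustration (not taken from the cited references), consider the integer program

\[
\max\; x_1 + x_2 \quad \text{subject to} \quad 2x_1 + 2x_2 \le 3,\qquad x_1, x_2 \ge 0,\qquad x_1, x_2 \in \mathbb{Z}.
\]

The LP relaxation has optimal value \(3/2\), attained for example at \((x_1, x_2) = (3/2, 0)\). Every feasible integer point gives an integer value of \(x_1 + x_2\), so from \(x_1 + x_2 \le 3/2\) each such point also satisfies the rounded (Chvátal–Gomory) cut

\[
x_1 + x_2 \le \lfloor 3/2 \rfloor = 1,
\]

which the fractional optimum violates. Adding this cut and re-solving the relaxation yields the integer optimum, of value \(1\).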

At this point, the branch and bound part of the algorithm is started. The problem is split into multiple (usually two) versions. The new linear programs are then solved using the simplex method and the process repeats. During the branch and bound process, non-integral solutions to LP relaxations serve as upper bounds and integral solutions serve as lower bounds. A node can be pruned if an upper bound is lower than an existing lower bound. Further, when solving the LP relaxations, additional cutting planes may be generated, which may be either global cuts, i.e., valid for all feasible integer solutions, or local cuts, meaning that they are satisfied by all solutions fulfilling the side constraints from the currently considered branch and bound subtree.
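For example (an illustrative instance of the pruning rule, not from the references): if the best integer solution found so far has value \(v^* = 8\) and the LP relaxation of some subproblem has optimal value \(7.3\), then no integer solution in that subtree can have value greater than \(7.3 < 8\), so the node is pruned without further branching.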

The algorithm is summarized below.

  1. Add the initial ILP to L, the list of active problems
  2. Set x* = null and v* = −∞
  3. while L is not empty
    1. Select and remove (de-queue) a problem from L
    2. Solve the LP relaxation of the problem.
    3. If the solution is infeasible, go back to 3 (while). Otherwise denote the solution by x with objective value v.
    4. If v ≤ v*, go back to 3.
    5. If x is integer, set v* ← v, x* ← x and go back to 3.
    6. If desired, search for cutting planes that are violated by x. If any are found, add them to the LP relaxation and return to 3.2.
    7. Branch to partition the problem into new problems with restricted feasible regions. Add these problems to L and go back to 3.
  4. return x*

Pseudocode

In C++-like pseudocode, this could be written:

// ILP branch and cut solution pseudocode, assuming objective is to be maximized
ILP_solution branch_and_cut_ILP(IntegerLinearProgram initial_problem) {
    queue active_list;                      // L, above
    active_list.enqueue(initial_problem);   // step 1
    // step 2
    ILP_solution optimal_solution;          // this will hold x* above
    double best_objective = -std::numeric_limits<double>::infinity();  // will hold v* above
    while (!active_list.empty()) {          // step 3 above
        LinearProgram& curr_prob = active_list.dequeue();   // step 3.1
        bool cutting_planes_found = false;  // declared here so the do-while condition can see it
        do {                                // steps 3.2-3.7
            cutting_planes_found = false;
            RelaxedLinearProgram& relaxed_prob = LP_relax(curr_prob);   // step 3.2
            LP_solution curr_relaxed_soln = LP_solve(relaxed_prob);     // this is x above
            if (!curr_relaxed_soln.is_feasible()) {                     // step 3.3
                continue;                   // try another solution; continues at step 3
            }
            double current_objective_value = curr_relaxed_soln.value();  // v above
            if (current_objective_value <= best_objective) {            // step 3.4
                continue;                   // try another solution; continues at step 3
            }
            if (curr_relaxed_soln.is_integer()) {                       // step 3.5
                best_objective = current_objective_value;
                optimal_solution = cast_as_ILP_solution(curr_relaxed_soln);
                continue;                   // continues at step 3
            }
            // current relaxed solution isn't integral
            if (hunting_for_cutting_planes) {                           // step 3.6
                auto violated_cutting_planes = search_for_violated_cutting_planes(curr_relaxed_soln);
                if (!violated_cutting_planes.empty()) {                 // step 3.6
                    cutting_planes_found = true;                        // will continue at step 3.2
                    for (auto&& cutting_plane : violated_cutting_planes) {
                        active_list.enqueue(LP_relax(curr_prob, cutting_plane));
                    }
                    continue;               // continues at step 3.2
                }
            }
            // step 3.7: either violated cutting planes not found, or we weren't looking for them
            auto&& branched_problems = branch_partition(curr_prob);
            for (auto&& branch : branched_problems) {
                active_list.enqueue(branch);
            }
            continue;                       // continues at step 3
        } while (hunting_for_cutting_planes /* parameter of the algorithm; see 3.6 */
                 && cutting_planes_found);  // end step 3.2 do-while loop
    }                                       // end step 3 while loop
    return optimal_solution;                // step 4
}

In the above pseudocode, the functions LP_relax, LP_solve and branch_partition called as subroutines must be provided as applicable to the problem. For example, LP_solve could call the simplex algorithm. Branching strategies for branch_partition are discussed below.
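For concreteness, LP_relax corresponds to dropping the integrality requirement. For an ILP written in the standard form below (a standard formulation, stated here for reference), the relaxation keeps the linear constraints and simply deletes the condition \(x \in \mathbb{Z}^n\):

\[
\max_{x}\; c^{\mathsf T} x \;\; \text{subject to} \;\; Ax \le b,\; x \ge 0,\; x \in \mathbb{Z}^n
\quad\longrightarrow\quad
\max_{x}\; c^{\mathsf T} x \;\; \text{subject to} \;\; Ax \le b,\; x \ge 0.
\]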

Branching strategies

An important step in the branch and cut algorithm is the branching step. At this step, there are a variety of branching heuristics that can be used. The branching strategies described below all involve what is called branching on a variable. [3] Branching on a variable means choosing a variable x_i whose value x_i' is fractional in the optimal solution to the current LP relaxation, and then adding the constraints x_i ≤ ⌊x_i'⌋ and x_i ≥ ⌈x_i'⌉ to create the two subproblems.

Most infeasible branching
This branching strategy chooses the variable whose fractional part is closest to 0.5 (a minimal code sketch of this rule appears after this list).
Pseudo cost branching
The basic idea of this strategy is to keep track, for each variable, of the change in the objective function observed when that variable was previously chosen as the variable to branch on. The strategy then chooses the variable that is predicted to change the objective function the most, based on these past changes. Note that pseudo cost branching is initially uninformative in the search, since few variables have been branched on.
Strong branching
Strong branching involves testing which of the candidate variables gives the best improvement to the objective function before actually branching on any of them. Full strong branching tests all candidate variables and can be computationally expensive. The computational cost can be reduced by only considering a subset of the candidate variables and not solving each of the corresponding LP relaxations to completion.
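As a rough sketch of how such a rule might be implemented (illustrative only; the Subproblem type and the two helper functions below are assumptions of this sketch, not taken from the cited references), the following C++ fragment combines most infeasible variable selection with the variable branching described above:

#include <cmath>
#include <utility>
#include <vector>

// Hypothetical subproblem representation: per-variable lower/upper bounds
// layered on top of a fixed base ILP (an assumption of this sketch).
struct Subproblem {
    std::vector<double> lower;  // lower bound on each variable
    std::vector<double> upper;  // upper bound on each variable
};

// Most infeasible branching: pick the variable whose fractional part is
// closest to 0.5 in the LP-relaxation solution x; -1 means x is integral.
int most_infeasible_variable(const std::vector<double>& x) {
    int best = -1;
    double best_score = 0.0;  // distance of the fractional part from 0 or 1
    for (int i = 0; i < static_cast<int>(x.size()); ++i) {
        double frac = x[i] - std::floor(x[i]);
        double score = std::min(frac, 1.0 - frac);  // maximized at 0.5
        if (score > best_score) {
            best_score = score;
            best = i;
        }
    }
    return best;
}

// Branch on variable i: one child with x_i <= floor(x_i'), one with x_i >= ceil(x_i').
std::pair<Subproblem, Subproblem> branch_on_variable(const Subproblem& node,
                                                     const std::vector<double>& x,
                                                     int i) {
    Subproblem down = node, up = node;
    down.upper[i] = std::floor(x[i]);  // x_i <= floor(x_i')
    up.lower[i]   = std::ceil(x[i]);   // x_i >= ceil(x_i')
    return {down, up};
}

In a complete branch and cut implementation the two child subproblems would then be enqueued on the active list (step 3.7 above), and the simple scoring loop could be replaced by pseudo cost or strong branching.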

There are also many variations of these branching strategies, such as using strong branching early on, when pseudo cost branching is relatively uninformative, and then switching to pseudo cost branching later, when there is enough branching history for pseudo costs to be informative.

References

  1. Padberg, Manfred; Rinaldi, Giovanni (1991). "A Branch-and-Cut Algorithm for the Resolution of Large-Scale Symmetric Traveling Salesman Problems". SIAM Review. 33 (1): 60–100. doi:10.1137/1033004. ISSN 0036-1445.
  2. Mitchell, John E. (2002). "Branch-and-Cut Algorithms for Combinatorial Optimization Problems" (PDF). Handbook of Applied Optimization: 65–77.
  3. Achterberg, Tobias; Koch, Thorsten; Martin, Alexander (2005). "Branching rules revisited". Operations Research Letters. 33 (1): 42–54. doi:10.1016/j.orl.2004.04.002.