In mathematics, the relaxation of a (mixed) integer linear program is the problem that arises by removing the integrality constraint of each variable.
For example, in a 0–1 integer program, all constraints are of the form

$x_i \in \{0, 1\}.$

The relaxation of the original integer program instead uses a collection of linear constraints

$0 \le x_i \le 1.$
The resulting relaxation is a linear program, hence the name. This relaxation technique transforms an NP-hard optimization problem (integer programming) into a related problem that is solvable in polynomial time (linear programming); the solution to the relaxed linear program can be used to gain information about the solution to the original integer program.
Consider the set cover problem, the linear programming relaxation of which was first considered by Lovász in 1975. [1] In this problem, one is given as input a family of sets $F = \{S_0, S_1, \ldots\}$; the task is to find a subfamily, with as few sets as possible, having the same union as $F$.
To formulate this as a 0–1 integer program, form an indicator variable $x_i$ for each set $S_i$, that takes the value 1 when $S_i$ belongs to the chosen subfamily and 0 when it does not. Then a valid cover can be described by an assignment of values to the indicator variables satisfying the constraints

$x_i \in \{0, 1\}$

(that is, only the specified indicator variable values are allowed) and, for each element $e_j$ of the union of $F$,

$\sum_{\{i \,\mid\, e_j \in S_i\}} x_i \ge 1$

(that is, each element is covered). The minimum set cover corresponds to the assignment of indicator variables satisfying these constraints and minimizing the linear objective function

$\min \sum_i x_i.$
The linear programming relaxation of the set cover problem describes a fractional cover in which the input sets are assigned weights such that the total weight of the sets containing each element is at least one and the total weight of all sets is minimized.
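Written out, with $w_i$ denoting the weight assigned to set $S_i$, the fractional cover just described is the optimal solution of the linear program

\[
\begin{aligned}
\text{minimize} \quad & \sum_i w_i \\
\text{subject to} \quad & \sum_{\{i \,\mid\, e_j \in S_i\}} w_i \ge 1 \quad \text{for each element } e_j, \\
& 0 \le w_i \le 1 \quad \text{for each set } S_i.
\end{aligned}
\]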
As a specific example of the set cover problem, consider the instance F = {{a, b}, {b, c}, {a, c}}. There are three optimal set covers, each of which includes two of the three given sets. Thus, the optimal value of the objective function of the corresponding 0–1 integer program is 2, the number of sets in the optimal covers. However, there is a fractional solution in which each set is assigned the weight 1/2, and for which the total value of the objective function is 3/2. Thus, in this example, the linear programming relaxation has a value differing from that of the unrelaxed 0–1 integer program.
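As an illustration, this relaxation can be checked with any LP solver. The following sketch (assuming SciPy's linprog is available; the variable names are ours) recovers the fractional optimum of 3/2:

    from scipy.optimize import linprog

    # Weights w0, w1, w2 for the sets {a,b}, {b,c}, {a,c}; minimize their sum.
    c = [1, 1, 1]
    # Coverage constraints rewritten in A_ub @ w <= b_ub form:
    #   element a: w0 + w2 >= 1, element b: w0 + w1 >= 1, element c: w1 + w2 >= 1.
    A_ub = [[-1,  0, -1],
            [-1, -1,  0],
            [ 0, -1, -1]]
    b_ub = [-1, -1, -1]
    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
    print(result.x, result.fun)  # weights [0.5, 0.5, 0.5], objective value 1.5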
The linear programming relaxation of an integer program may be solved using any standard linear programming technique. If all variables in the optimal relaxed solution happen to have integer values, it is also an optimal solution to the original integer program. However, this is generally not true, except in some special cases (e.g., problems with totally unimodular constraint matrices).
In all cases, though, the solution quality of the linear program is at least as good as that of the integer program, because any integer program solution would also be a valid linear program solution. That is, in a maximization problem, the relaxed program has a value greater than or equal to that of the original program, while in a minimization problem such as the set cover problem the relaxed program has a value smaller than or equal to that of the original program. Thus, the relaxation provides an optimistic bound on the integer program's solution.
In the example instance of the set cover problem described above, in which the relaxation has an optimal solution value of 3/2, we can deduce that the optimal solution value of the unrelaxed integer program is at least as large. Since the set cover problem has solution values that are integers (the numbers of sets chosen in the subfamily), the optimal solution quality must be at least as large as the next larger integer, 2. Thus, in this instance, despite having a different value from the unrelaxed problem, the linear programming relaxation gives us a tight lower bound on the solution quality of the original problem.
Linear programming relaxation is a standard technique for designing approximation algorithms for hard optimization problems. In this application, an important concept is the integrality gap, the maximum ratio between the solution quality of the integer program and of its relaxation. In an instance of a minimization problem, if the real minimum (the minimum of the integer problem) is $M_{\mathrm{int}}$, and the relaxed minimum (the minimum of the linear programming relaxation) is $M_{\mathrm{frac}}$, then the integrality gap of that instance is $M_{\mathrm{int}}/M_{\mathrm{frac}}$. In a maximization problem the fraction is reversed. The integrality gap is always at least 1. In the example above, the instance F = {{a, b}, {b, c}, {a, c}} shows an integrality gap of 4/3, since its integer optimum 2 exceeds its fractional optimum 3/2 by exactly that factor.
Typically, the integrality gap translates into the approximation ratio of an approximation algorithm. This is because an approximation algorithm relies on some rounding strategy that finds, for every relaxed solution of size $x$, an integer solution of size at most $RR \cdot x$ (where $RR$ is the rounding ratio). If there is an instance with integrality gap $IG$, then on that instance every rounding strategy turns the relaxed optimum of size $x$ into an integer solution of size at least $IG \cdot x$, since no smaller integer solution exists; therefore necessarily $RR \ge IG$. The rounding ratio $RR$ is only an upper bound on the approximation ratio, so in theory the actual approximation ratio may be lower than $IG$, but this may be hard to prove. In practice, a large integrality gap usually signals that the approximation ratio obtainable through the linear programming relaxation is poor, and it may be better to look for other approximation schemes for that problem.
For the set cover problem, Lovász proved that the integrality gap for an instance with $n$ elements is $H_n$, the $n$th harmonic number. One can turn the linear programming relaxation for this problem into an approximate solution of the original unrelaxed set cover instance via the technique of randomized rounding. [2] Given a fractional cover, in which each set $S_i$ has weight $w_i$, choose randomly the value of each 0–1 indicator variable $x_i$ to be 1 with probability $w_i (\ln n + 1)$, and 0 otherwise. Then any element $e_j$ has probability less than $1/(en)$ of remaining uncovered, so with constant probability all elements are covered. The cover generated by this technique has total size, with high probability, $(1 + o(1))(\ln n)W$, where $W$ is the total weight of the fractional solution. Thus, this technique leads to a randomized approximation algorithm that finds a set cover within a logarithmic factor of the optimum. As Young showed in 1995, [3] both the random part of this algorithm and the need to construct an explicit solution to the linear programming relaxation may be eliminated using the method of conditional probabilities, leading to a deterministic greedy algorithm for set cover, known already to Lovász, that repeatedly selects the set covering the largest possible number of remaining uncovered elements. This greedy algorithm approximates the set cover to within the same $H_n$ factor that Lovász proved as the integrality gap for set cover. There are strong complexity-theoretic reasons for believing that no polynomial-time approximation algorithm can achieve a significantly better approximation ratio. [4]
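A minimal sketch of this rounding step, under the assumptions above (the function and variable names are illustrative rather than taken from the cited sources):

    import math
    import random

    # Round a fractional cover to a 0-1 cover: pick each set with probability
    # w_i * (ln n + 1), capped at 1, and retry on the (constant-probability)
    # event that some element stays uncovered.
    def randomized_round(sets, universe, weights):
        n = len(universe)
        boost = math.log(n) + 1
        while True:
            chosen = [i for i, w in enumerate(weights)
                      if random.random() < min(1.0, w * boost)]
            covered = set().union(*(sets[i] for i in chosen)) if chosen else set()
            if covered >= universe:
                return chosen

    sets = [{'a', 'b'}, {'b', 'c'}, {'a', 'c'}]
    print(randomized_round(sets, {'a', 'b', 'c'}, [0.5, 0.5, 0.5]))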
Similar randomized rounding techniques, and derandomized approximation algorithms, may be used in conjunction with linear programming relaxation to develop approximation algorithms for many other problems, as described by Raghavan, Thompson, and Young.
As well as its uses in approximation, linear programming plays an important role in branch and bound algorithms for computing the true optimum solution to hard optimization problems.
If some variables in the optimal solution have fractional values, we may start a branch and bound type process, in which we recursively solve subproblems in which some of the fractional variables have their values fixed to either zero or one. In each step of an algorithm of this type, we consider a subproblem of the original 0–1 integer program in which some of the variables have values assigned to them, either 0 or 1, and the remaining variables are still free to take on either value. In subproblem i, let $V_i$ denote the set of remaining variables. The process begins by considering a subproblem in which no variable values have been assigned, and in which $V_0$ is the whole set of variables of the original problem. Then, for each subproblem i, it performs the following steps.

1. Compute the optimal solution to the linear programming relaxation of the current subproblem: for each variable $x_j$ in $V_i$, replace the constraint that $x_j$ be 0 or 1 by the relaxed constraint $0 \le x_j \le 1$, while keeping the values of the variables that have already been assigned.
2. If the relaxed solution is worse than the best integer solution found so far, backtrack from this branch of the recursive search.
3. If the relaxed solution assigns every variable a value that is 0 or 1, test it against the best integer solution found so far and keep whichever of the two is better.
4. Otherwise, choose some variable $x_j$ that is set to a fractional value in the relaxed solution, and form two subproblems, one in which $x_j$ is set to 0 and another in which it is set to 1; in both, the set of remaining variables becomes $V_i \setminus \{x_j\}$. Recursively search both subproblems.
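The following compact sketch shows how these steps fit together for a 0–1 minimization program, assuming SciPy's linprog as the relaxation solver (the function and variable names are illustrative):

    from scipy.optimize import linprog

    # Exact branch and bound for: minimize c @ x, A_ub @ x <= b_ub, x in {0,1}^n.
    def branch_and_bound(c, A_ub, b_ub, n):
        best_val, best_x = float('inf'), None

        def solve(fixed):  # fixed maps a variable index to its assigned 0/1 value
            nonlocal best_val, best_x
            bounds = [(fixed.get(i, 0), fixed.get(i, 1)) for i in range(n)]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # step 1
            if not res.success or res.fun >= best_val:
                return  # infeasible, or the bound cannot beat the incumbent (step 2)
            frac = [i for i, v in enumerate(res.x) if min(v, 1 - v) > 1e-6]
            if not frac:  # the relaxation solved to integer values (step 3)
                best_val, best_x = res.fun, [round(v) for v in res.x]
                return
            i = frac[0]  # branch on a fractional variable (step 4)
            solve({**fixed, i: 0})
            solve({**fixed, i: 1})

        solve({})
        return best_val, best_x

On the set cover instance above (the same c, A_ub, and b_ub, with n = 3), this returns the integer optimum 2.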
Although it is difficult to prove theoretical bounds on the performance of algorithms of this type, they can be very effective in practice.
Two 0–1 integer programs that are equivalent, in that they have the same objective function and the same set of feasible solutions, may have quite different linear programming relaxations: a linear programming relaxation can be viewed geometrically, as a convex polytope that includes all feasible solutions and excludes all other 0–1 vectors, and infinitely many different polytopes have this property. Ideally, one would like to use as a relaxation the convex hull of the feasible solutions; linear programming on this polytope would automatically yield the correct solution to the original integer program. However, in general, this polytope will have exponentially many facets and be difficult to construct. Typical relaxations, such as the relaxation of the set cover problem discussed earlier, form a polytope that strictly contains the convex hull and has vertices other than the 0–1 vectors that solve the unrelaxed problem.
The cutting-plane method for solving 0–1 integer programs, first introduced for the traveling salesman problem by Dantzig, Fulkerson, and Johnson in 1954 [5] and generalized to other integer programs by Gomory in 1958, [6] takes advantage of this multiplicity of possible relaxations by finding a sequence of relaxations that more tightly constrain the solution space until eventually an integer solution is obtained. This method starts from any relaxation of the given program, and finds an optimal solution using a linear programming solver. If the solution assigns integer values to all variables, it is also the optimal solution to the unrelaxed problem. Otherwise, an additional linear constraint (a cutting plane or cut) is found that separates the resulting fractional solution from the convex hull of the integer solutions, and the method repeats on this new more tightly constrained problem.
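To make a single iteration concrete, consider again the instance F = {{a, b}, {b, c}, {a, c}}. The fractional optimum (1/2, 1/2, 1/2) violates the valid inequality $x_0 + x_1 + x_2 \ge 2$, which holds for every integer cover because no single set covers all three elements; adding that inequality as a cut raises the relaxation value from 3/2 to the integer optimum 2. A sketch, assuming SciPy (the cut here is hand-derived for illustration rather than produced by a general separation routine such as Gomory's):

    from scipy.optimize import linprog

    c = [1, 1, 1]
    A_ub = [[-1,  0, -1],   # element-coverage rows, as before
            [-1, -1,  0],
            [ 0, -1, -1]]
    b_ub = [-1, -1, -1]
    relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
    # relaxed.x is fractional; add the cut -(x0 + x1 + x2) <= -2 and re-solve.
    cut = linprog(c, A_ub=A_ub + [[-1, -1, -1]], b_ub=b_ub + [-2],
                  bounds=[(0, 1)] * 3)
    print(relaxed.fun, cut.fun)  # 1.5 before the cut, 2.0 after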
Problem-specific methods are needed to find the cuts used by this method. It is especially desirable to find cutting planes that form facets of the convex hull of the integer solutions, as these planes are the ones that most tightly constrain the solution space; there always exists a cutting plane of this type that separates any fractional solution from the integer solutions. Much research has been performed on methods for finding these facets for different types of combinatorial optimization problems, under the framework of polyhedral combinatorics. [7]
The related branch and cut method combines the cutting plane and branch and bound methods. In any subproblem, it runs the cutting plane method until no more cutting planes can be found, and then branches on one of the remaining fractional variables.
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming.
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints are linear.
In graph theory, a vertex cover of a graph is a set of vertices that includes at least one endpoint of every edge of the graph.
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor; i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.
Fractional coloring is a topic in a young branch of graph theory known as fractional graph theory. It is a generalization of ordinary graph coloring. In a traditional graph coloring, each vertex in a graph is assigned some color, and adjacent vertices (those connected by edges) must be assigned different colors. In a fractional coloring, however, a set of colors is assigned to each vertex of a graph. The requirement about adjacent vertices still holds, so if two vertices are joined by an edge, they must have no colors in common.
Column generation or delayed column generation is an efficient algorithm for solving large linear programs.
The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory.
In mathematical optimization, the cutting-plane method is any of a variety of optimization methods that iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are commonly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable convex optimization problems. The use of cutting planes to solve MILP was introduced by Ralph E. Gomory.
Branch and cut is a method of combinatorial optimization for solving integer linear programs (ILPs), that is, linear programming (LP) problems where some or all of the unknowns are restricted to integer values. Branch and cut involves running a branch and bound algorithm and using cutting planes to tighten the linear programming relaxations. Note that if cuts are only used to tighten the initial LP relaxation, the algorithm is called cut and branch.
In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.
Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
In the mathematical area of graph theory, Kőnig's theorem, proved by Dénes Kőnig (1931), describes an equivalence between the maximum matching problem and the minimum vertex cover problem in bipartite graphs. It was discovered independently, also in 1931, by Jenő Egerváry in the more general case of weighted graphs.
In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides information about the original problem.
In computer science and operations research, randomized rounding is a widely used approach for designing and analyzing approximation algorithms.
In graph theory and computational complexity theory, a Frankl–Rödl graph is a graph defined by connecting pairs of vertices of a hypercube that are at a specified even distance from each other. The graphs of this type are parameterized by the dimension of the hypercube and by the distance between adjacent vertices.
The configuration linear program (configuration-LP) is a linear programming technique used for solving combinatorial optimization problems. It was introduced in the context of the cutting stock problem, and has since been applied to the bin packing and job scheduling problems. In the configuration-LP, there is a variable for each possible configuration, that is, each possible multiset of items that can fit in a single bin. Usually, the number of configurations is exponential in the problem size, but in some cases it is possible to attain approximate solutions using only a polynomial number of configurations.
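As a sketch, for bin packing the configuration-LP can be written as follows (the notation here is assumed for illustration, not taken from a specific source):

\[
\min \sum_{c \in C} x_c \quad \text{subject to} \quad \sum_{c \in C} a_{i,c}\, x_c \ge n_i \ \text{ for each item size } i, \qquad x_c \ge 0,
\]

where $C$ is the set of feasible configurations, $a_{i,c}$ is the number of items of size $i$ in configuration $c$, and $n_i$ is the number of items of size $i$ that must be packed.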
The Karmarkar–Karp (KK) bin packing algorithms are several related approximation algorithms for the bin packing problem. The bin packing problem is the problem of packing items of different sizes into bins of identical capacity, such that the total number of bins is as small as possible. Finding the optimal solution is computationally hard. Karmarkar and Karp devised an algorithm that runs in polynomial time and finds a solution with at most $\mathrm{OPT} + \mathcal{O}(\log^2 \mathrm{OPT})$ bins, where OPT is the number of bins in the optimal solution. They also devised several other algorithms with slightly different approximation guarantees and run-time bounds.