Integer programming

An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integer constraints) are linear.

Integer programming is NP-complete. In particular, the special case of 0–1 integer linear programming, in which unknowns are binary and only the restrictions must be satisfied (a pure feasibility problem, with no objective to optimize), is one of Karp's 21 NP-complete problems. [1]

If some decision variables are not discrete, the problem is known as a mixed-integer programming problem. [2]

Canonical and standard form for ILPs

In integer linear programming, the canonical form is distinct from the standard form. An integer linear program in canonical form is expressed as (note that it is the vector $\mathbf{x}$ which is to be decided): [3]

$$\begin{aligned}
& \text{maximize} && \mathbf{c}^\mathsf{T}\mathbf{x} \\
& \text{subject to} && A\mathbf{x} \le \mathbf{b}, \\
& && \mathbf{x} \ge \mathbf{0}, \\
& && \mathbf{x} \in \mathbb{Z}^n,
\end{aligned}$$

and an ILP in standard form is expressed as

$$\begin{aligned}
& \text{maximize} && \mathbf{c}^\mathsf{T}\mathbf{x} \\
& \text{subject to} && A\mathbf{x} + \mathbf{s} = \mathbf{b}, \\
& && \mathbf{s} \ge \mathbf{0}, \\
& && \mathbf{x} \ge \mathbf{0}, \\
& && \mathbf{x} \in \mathbb{Z}^n,
\end{aligned}$$

where $\mathbf{c}$ and $\mathbf{b}$ are vectors and $A$ is a matrix. As with linear programs, ILPs not in standard form can be converted to standard form by eliminating inequalities, introducing slack variables ($\mathbf{s}$) and replacing variables that are not sign-constrained with the difference of two sign-constrained variables.
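In practice, an ILP in canonical form can be handed directly to a mixed-integer solver. The following is a minimal sketch, assuming SciPy ≥ 1.9 (for scipy.optimize.milp) and an arbitrary small instance chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Canonical form: maximize c^T x subject to A x <= b, x >= 0, x integer.
# milp minimizes, so pass -c. The data below are illustrative only.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [5.0, 9.0]])
b = np.array([6.0, 45.0])

res = milp(
    -c,
    constraints=LinearConstraint(A, ub=b),
    bounds=Bounds(lb=0.0),             # x >= 0
    integrality=np.ones_like(c),       # every variable integral
)
print(res.x, -res.fun)                 # optimal integer point and objective value
```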

Example

[Figure: IP polytope with LP relaxation]

The plot shows the following problem:

$$\begin{aligned}
& \text{maximize} && y \\
& \text{subject to} && -x + y \le 1, \\
& && 3x + 2y \le 12, \\
& && 2x + 3y \le 12, \\
& && x, y \ge 0, \\
& && x, y \in \mathbb{Z}.
\end{aligned}$$

The feasible integer points are shown in red, and the red dashed lines indicate their convex hull, which is the smallest convex polyhedron that contains all of these points. The blue lines together with the coordinate axes define the polyhedron of the LP relaxation, which is given by the inequalities without the integrality constraint. The goal of the optimization is to move the black dashed line as far upward as possible while still touching the polyhedron. The optimal solutions of the integer problem are the points (1, 2) and (2, 2), which both have an objective value of 2. The unique optimum of the relaxation is (1.8, 2.8), with an objective value of 2.8. If the solution of the relaxation is rounded to the nearest integers, it is not feasible for the ILP.
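The gap between the integer optimum and the optimum of the LP relaxation can be reproduced numerically. The sketch below (again assuming SciPy ≥ 1.9 for scipy.optimize.milp) solves the example above both as an ILP and as its LP relaxation:

```python
import numpy as np
from scipy.optimize import milp, linprog, LinearConstraint, Bounds

# Example from the figure: maximize y subject to
#   -x + y <= 1,  3x + 2y <= 12,  2x + 3y <= 12,  x, y >= 0.
# milp/linprog minimize, so we minimize -y.
c = np.array([0.0, -1.0])
A = np.array([[-1.0, 1.0],
              [3.0, 2.0],
              [2.0, 3.0]])
b = np.array([1.0, 12.0, 12.0])

# Integer program: both variables integral.
res_ip = milp(c, constraints=LinearConstraint(A, ub=b),
              bounds=Bounds(lb=0.0), integrality=np.ones(2))
print("ILP optimum:", res_ip.x, "objective:", -res_ip.fun)   # e.g. (1, 2) or (2, 2), value 2

# LP relaxation: drop the integrality requirement.
res_lp = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print("LP relaxation optimum:", res_lp.x, "objective:", -res_lp.fun)  # (1.8, 2.8), value 2.8
```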

Proof of NP-hardness

The following is a reduction from minimum vertex cover to integer programming that will serve as the proof of NP-hardness.

Let $G = (V, E)$ be an undirected graph. Define a linear program as follows:

$$\begin{aligned}
& \text{minimize} && \sum_{v \in V} y_v \\
& \text{subject to} && y_u + y_v \ge 1 && \text{for all } (u, v) \in E, \\
& && y_v \in \{0, 1\} && \text{for all } v \in V.
\end{aligned}$$

Given that the constraints limit $y_v$ to either 0 or 1, any feasible solution to the integer program is a subset of vertices. The first constraint implies that at least one end point of every edge is included in this subset. Therefore, the solution describes a vertex cover. Additionally, given some vertex cover C, $y_v$ can be set to 1 for any $v \in C$ and to 0 for any $v \notin C$, thus giving us a feasible solution to the integer program. Thus we can conclude that if we minimize the sum of $y_v$ we have also found the minimum vertex cover. [4]
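The reduction can be carried out mechanically for a concrete graph. The sketch below (using scipy.optimize.milp, with a small graph assumed purely for illustration) builds one binary variable per vertex and one covering constraint per edge:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# A small example graph on 4 vertices (an assumption for illustration).
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

n = len(vertices)
# One constraint per edge: y_u + y_v >= 1.
A = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):
    A[i, u] = 1.0
    A[i, v] = 1.0

res = milp(
    c=np.ones(n),                               # minimize the sum of y_v
    constraints=LinearConstraint(A, lb=1.0),    # cover every edge
    bounds=Bounds(lb=0.0, ub=1.0),              # 0 <= y_v <= 1
    integrality=np.ones(n),                     # y_v integral, hence binary
)
cover = [v for v in vertices if round(res.x[v]) == 1]
print("minimum vertex cover:", cover, "size:", int(res.fun))
```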

Variants

Mixed-integer linear programming (MILP) involves problems in which only some of the variables $x_i$ are constrained to be integers, while the other variables are allowed to be non-integers.

Zero–one linear programming (or binary integer programming) involves problems in which the variables are restricted to be either 0 or 1. Any bounded integer variable can be expressed as a combination of binary variables. [5] For example, given an integer variable $0 \le x \le U$, the variable can be expressed using $\lfloor \log_2 U \rfloor + 1$ binary variables:

$$x = x_1 + 2x_2 + 4x_3 + \cdots + 2^{\lfloor \log_2 U \rfloor} x_{\lfloor \log_2 U \rfloor + 1}.$$
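A minimal sketch of this encoding, assuming an upper bound U supplied by the modeler:

```python
import math

def binary_expansion(U):
    """Return the powers of two 2^0, 2^1, ..., 2^floor(log2 U) used to encode
    an integer variable 0 <= x <= U with binary variables x_1, ..., x_k."""
    k = math.floor(math.log2(U)) + 1
    return [2 ** i for i in range(k)]

# Example: U = 10 requires 4 binary variables, since x = x_1 + 2x_2 + 4x_3 + 8x_4.
print(binary_expansion(10))   # [1, 2, 4, 8]
# Any value 0..10 is representable; values 11..15 must be excluded by keeping
# the original bound x <= U alongside the linking equation.
```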

Applications

There are two main reasons for using integer variables when modeling problems as a linear program:

  1. The integer variables represent quantities that can only be integer. For example, it is not possible to build 3.7 cars.
  2. The integer variables represent decisions (e.g. whether to include an edge in a graph) and so should only take on the value 0 or 1.

These considerations occur frequently in practice and so integer linear programming can be used in many application areas, some of which are briefly described below.

Production planning

Mixed-integer programming has many applications in industrial production, including job-shop modelling. One important example arises in agricultural production planning and involves determining production yield for several crops that can share resources (e.g. land, labor, capital, seeds, fertilizer, etc.). A possible objective is to maximize the total production, without exceeding the available resources. In some cases, this can be expressed in terms of a linear program, but the variables must be constrained to be integer.
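A minimal crop-planning sketch in the spirit of this description, with made-up yields and resource data (all numbers are assumptions for illustration): integer variables give the number of hectares planted with each crop, and the objective maximizes total production subject to land and labor limits.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical data: two crops, with yield and resource use per hectare.
yield_per_ha = np.array([3.0, 5.0])      # tons produced per hectare of crop 1, crop 2
land_per_ha  = np.array([1.0, 1.0])      # hectares of land used per hectare planted
labor_per_ha = np.array([2.0, 4.0])      # worker-days needed per hectare
land_available, labor_available = 40.0, 120.0

# Maximize total production = minimize its negation.
res = milp(
    c=-yield_per_ha,
    constraints=LinearConstraint(np.vstack([land_per_ha, labor_per_ha]),
                                 ub=[land_available, labor_available]),
    bounds=Bounds(lb=0.0),
    integrality=np.ones(2),              # whole hectares only
)
print("hectares per crop:", res.x, "total production:", -res.fun)
```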

Scheduling

These problems involve service and vehicle scheduling in transportation networks. For example, a problem may involve assigning buses or subways to individual routes so that a timetable can be met, and assigning drivers to them. Here binary decision variables indicate whether a bus or subway is assigned to a route and whether a driver is assigned to a particular train or subway. The zero–one programming technique has been successfully applied to solve project selection problems in which projects are mutually exclusive and/or technologically interdependent; this is a special case of integer programming in which every decision variable is binary, i.e. can assume only the values zero or one.

Territorial partitioning

Territorial partitioning or districting problems consist of partitioning a geographical region into districts in order to plan some operations while considering different criteria or constraints. Some requirements for this problem are: contiguity, compactness, balance or equity, respect of natural boundaries, and socio-economic homogeneity. Some applications for this type of problem include: political districting, school districting, health services districting and waste management districting.

Telecommunications networks

The goal of these problems is to design a network of lines to install so that a predefined set of communication requirements are met and the total cost of the network is minimal. [6] This requires optimizing both the topology of the network and the capacities of the various lines. In many cases, the capacities are constrained to be integer quantities. Usually there are, depending on the technology used, additional restrictions that can be modeled as linear inequalities with integer or binary variables.

Cellular networks

The task of frequency planning in GSM mobile networks involves distributing available frequencies across the antennas so that users can be served and interference is minimized between the antennas. [7] This problem can be formulated as an integer linear program in which binary variables indicate whether a frequency is assigned to an antenna.

Other applications

Algorithms

The naive way to solve an ILP is to simply remove the constraint that x is integer, solve the corresponding LP (called the LP relaxation of the ILP), and then round the entries of the solution to the LP relaxation. But, not only may this solution not be optimal, it may not even be feasible; that is, it may violate some constraint.
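A sketch of this naive approach, assuming a pure-inequality ILP of the form A x ≤ b, x ≥ 0 and using scipy.optimize.linprog: it solves the relaxation, rounds, and then checks whether the rounded point is still feasible, which in general it need not be.

```python
import numpy as np
from scipy.optimize import linprog

def round_lp_relaxation(c, A_ub, b_ub):
    """Solve min c@x s.t. A_ub@x <= b_ub, x >= 0 without integrality,
    then round; return the rounded point and whether it remains feasible."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(c))
    x_rounded = np.round(res.x)
    feasible = bool(np.all(A_ub @ x_rounded <= b_ub + 1e-9) and np.all(x_rounded >= 0))
    return x_rounded, feasible

# The example problem from above: maximize y, i.e. minimize -y.
A = np.array([[-1.0, 1.0], [3.0, 2.0], [2.0, 3.0]])
b = np.array([1.0, 12.0, 12.0])
x, ok = round_lp_relaxation(np.array([0.0, -1.0]), A, b)
print(x, "feasible:", ok)   # rounding (1.8, 2.8) to (2, 3) violates a constraint
```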

Using total unimodularity

While in general the solution to an LP relaxation is not guaranteed to be integral, if the ILP has the form $\max \mathbf{c}^\mathsf{T}\mathbf{x}$ such that $A\mathbf{x} = \mathbf{b}$, $\mathbf{x} \ge \mathbf{0}$, $\mathbf{x} \in \mathbb{Z}^n$, where $A$ and $\mathbf{b}$ have all integer entries and $A$ is totally unimodular, then every basic feasible solution is integral. Consequently, the solution returned by the simplex algorithm is guaranteed to be integral. To show that every basic feasible solution is integral, let $\mathbf{x}_0$ be an arbitrary basic feasible solution. Since $\mathbf{x}_0$ is feasible, we know that $A\mathbf{x}_0 = \mathbf{b}$. Let $\mathbf{x}_B$ be the elements corresponding to the basis columns for the basic solution $\mathbf{x}_0$. By definition of a basis, there is some square submatrix $B$ of $A$ with linearly independent columns such that $B\mathbf{x}_B = \mathbf{b}$.

Since the columns of $B$ are linearly independent and $B$ is square, $B$ is nonsingular, and therefore by assumption, $B$ is unimodular and so $\det(B) = \pm 1$. Also, since $B$ is nonsingular, it is invertible and therefore $\mathbf{x}_B = B^{-1}\mathbf{b}$. By definition, $B^{-1} = \frac{B^{\mathrm{adj}}}{\det(B)} = \pm B^{\mathrm{adj}}$. Here $B^{\mathrm{adj}}$ denotes the adjugate of $B$ and is integral because $B$ is integral. Therefore, $\mathbf{x}_B = B^{-1}\mathbf{b} = \pm B^{\mathrm{adj}}\mathbf{b}$ is integral.

Thus, if the matrix $A$ of an ILP is totally unimodular, rather than use an ILP algorithm, the simplex method can be used to solve the LP relaxation and the solution will be integer.
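As an illustration (not a proof), the constraint matrices of bipartite matching problems are totally unimodular, so their LP relaxations have integral basic optimal solutions. The sketch below, with a small bipartite graph assumed for illustration, solves such a relaxation with scipy.optimize.linprog and obtains 0/1 values without any integrality constraint.

```python
import numpy as np
from scipy.optimize import linprog

# Maximum matching on a small bipartite graph: left vertices {0,1}, right {0,1},
# candidate edges assumed for illustration.
edges = [(0, 0), (0, 1), (1, 1)]
n = len(edges)

# Degree constraints: each left and each right vertex is matched at most once.
A = np.zeros((4, n))
for j, (u, v) in enumerate(edges):
    A[u, j] = 1.0        # rows 0-1: left vertices
    A[2 + v, j] = 1.0    # rows 2-3: right vertices
b = np.ones(4)

# Maximize the number of matched edges (minimize the negative), with x in [0, 1].
res = linprog(-np.ones(n), A_ub=A, b_ub=b, bounds=[(0, 1)] * n)
print(res.x)   # an integral (0/1) optimal solution, matching edges (0,0) and (1,1)
```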

Exact algorithms

When the matrix $A$ is not totally unimodular, there are a variety of algorithms that can be used to solve integer linear programs exactly. One class of algorithms consists of cutting plane methods, which work by solving the LP relaxation and then adding linear constraints that drive the solution towards being integer without excluding any integer feasible points.

Another class of algorithms consists of variants of the branch and bound method. For example, the branch and cut method combines both branch and bound and cutting plane methods. Branch and bound algorithms have a number of advantages over algorithms that only use cutting planes. One advantage is that the algorithms can be terminated early: as long as at least one integral solution has been found, a feasible, although not necessarily optimal, solution can be returned. Further, the solutions of the LP relaxations can be used to provide a worst-case estimate of how far from optimality the returned solution is. Finally, branch and bound methods can be used to return multiple optimal solutions.
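A compact branch-and-bound sketch for maximization ILPs of the form max c@x, A x ≤ b, x ≥ 0 (a simplified illustration, not a production solver): it solves LP relaxations with scipy.optimize.linprog, branches on a fractional variable, and prunes nodes whose LP bound cannot beat the best integral solution found so far.

```python
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b, tol=1e-6):
    """Maximize c@x s.t. A@x <= b, x >= 0, x integer (simplified sketch)."""
    best_val, best_x = -np.inf, None
    # Each node is a list of per-variable bounds; start with x >= 0, no upper bounds.
    stack = [[(0, None)] * len(c)]
    while stack:
        bounds = stack.pop()
        res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds)
        if not res.success or -res.fun <= best_val + tol:
            continue                          # infeasible node, or LP bound too weak: prune
        x = res.x
        frac = np.abs(x - np.round(x))
        if frac.max() < tol:                  # LP optimum already integral: update incumbent
            best_val, best_x = -res.fun, np.round(x)
            continue
        i = int(np.argmax(frac))              # branch on the most fractional variable
        lo, hi = bounds[i]
        left, right = list(bounds), list(bounds)
        left[i] = (lo, np.floor(x[i]))        # subproblem with x_i <= floor(x_i)
        right[i] = (np.ceil(x[i]), hi)        # subproblem with x_i >= ceil(x_i)
        stack += [left, right]
    return best_x, best_val

# The example problem from the figure: the optimum has value 2.
A = np.array([[-1.0, 1.0], [3.0, 2.0], [2.0, 3.0]])
b = np.array([1.0, 12.0, 12.0])
print(branch_and_bound(np.array([0.0, 1.0]), A, b))   # e.g. ([2., 2.], 2.0)
```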

Exact algorithms for a small number of variables

Suppose $A$ is an m-by-n integer matrix and $\mathbf{b}$ is an m-by-1 integer vector. We focus on the feasibility problem, which is to decide whether there exists an n-by-1 vector $\mathbf{x}$ satisfying $A\mathbf{x} \le \mathbf{b}$.

Let V be the maximum absolute value of the coefficients in $A$ and $\mathbf{b}$. If n (the number of variables) is a fixed constant, then the feasibility problem can be solved in time polynomial in m and log V. This is trivial for the case n = 1. The case n = 2 was solved in 1981 by Herbert Scarf. [13] The general case was solved in 1983 by Hendrik Lenstra, combining ideas by László Lovász and Peter van Emde Boas. [14] Doignon's theorem asserts that an integer program is feasible whenever every subset of $2^n$ constraints is feasible; a method combining this result with algorithms for LP-type problems can be used to solve integer programs in time that is linear in $m$ and fixed-parameter tractable (but possibly doubly exponential) in $n$, with no dependence on $V$. [15]

In the special case of 0–1 ILP, Lenstra's algorithm is equivalent to complete enumeration: the number of all possible solutions is fixed ($2^n$), and checking the feasibility of each solution can be done in time poly(m, log V). In the general case, where each variable can be an arbitrary integer, complete enumeration is impossible. Here, Lenstra's algorithm uses ideas from the geometry of numbers. It transforms the original problem into an equivalent one with the following property: either the existence of a solution is obvious, or the value of $x_n$ (the n-th variable) belongs to an interval whose length is bounded by a function of n. In the latter case, the problem is reduced to a bounded number of lower-dimensional problems. The run-time complexity of the algorithm has been improved in several steps.

Heuristic methods

Since integer linear programming is NP-hard, many problem instances are intractable and so heuristic methods must be used instead. For example, tabu search can be used to search for solutions to ILPs. [23] To use tabu search to solve ILPs, moves can be defined as incrementing or decrementing an integer-constrained variable of a feasible solution while keeping all other integer-constrained variables constant. The unrestricted variables are then solved for. Short-term memory can consist of previously tried solutions, while medium-term memory can consist of values for the integer-constrained variables that have resulted in high objective values (assuming the ILP is a maximization problem). Finally, long-term memory can guide the search towards integer values that have not previously been tried.
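A highly simplified sketch of the move structure described above, for a pure-integer maximization problem max c@x with A x ≤ b and box bounds (the data and parameters are assumptions for illustration; real tabu-search implementations add the medium- and long-term memories discussed in the text):

```python
import numpy as np

def tabu_search(c, A, b, x0, lb, ub, iters=200, tabu_len=10):
    """Maximize c@x over integer x with A@x <= b and lb <= x <= ub,
    using +/-1 moves on single variables and a short-term tabu list."""
    def feasible(x):
        return np.all(A @ x <= b) and np.all(x >= lb) and np.all(x <= ub)

    x = np.array(x0, dtype=int)
    best, best_val = x.copy(), c @ x
    tabu = []                                    # short-term memory: recently visited points
    for _ in range(iters):
        neighbors = []
        for i in range(len(x)):
            for step in (+1, -1):                # increment or decrement one variable
                y = x.copy()
                y[i] += step
                if feasible(y) and tuple(y) not in tabu:
                    neighbors.append(y)
        if not neighbors:
            break
        x = max(neighbors, key=lambda y: c @ y)  # best admissible move
        tabu.append(tuple(x))
        tabu = tabu[-tabu_len:]
        if c @ x > best_val:
            best, best_val = x.copy(), c @ x
    return best, best_val

# Example problem from the figure, started from the feasible point (0, 0).
A = np.array([[-1, 1], [3, 2], [2, 3]])
b = np.array([1, 12, 12])
print(tabu_search(np.array([0, 1]), A, b, x0=[0, 0], lb=0, ub=4))
```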

Other heuristic methods that can be applied to ILPs include hill climbing, simulated annealing, reactive search optimization, ant colony optimization, and Hopfield neural networks.

There are also a variety of other problem-specific heuristics, such as the k-opt heuristic for the traveling salesman problem. A disadvantage of heuristic methods is that if they fail to find a solution, it cannot be determined whether this is because there is no feasible solution or whether the algorithm simply was unable to find one. Further, it is usually impossible to quantify how close to optimal a solution returned by these methods is.

Sparse integer programming

It is often the case that the matrix $A$ that defines the integer program is sparse. In particular, this occurs when the matrix has a block structure, which is the case in many applications. The sparsity of the matrix can be measured as follows. The graph of $A$ has vertices corresponding to columns of $A$, and two columns form an edge if $A$ has a row where both columns have nonzero entries. Equivalently, the vertices correspond to variables, and two variables form an edge if they share an inequality. The sparsity measure $d$ of $A$ is the minimum of the tree-depth of the graph of $A$ and the tree-depth of the graph of the transpose of $A$. Let $a$ be the numeric measure of $A$ defined as the maximum absolute value of any entry of $A$. Let $n$ be the number of variables of the integer program. Then it was shown in 2018 [24] that integer programming can be solved in strongly polynomial and fixed-parameter tractable time parameterized by $a$ and $d$. That is, for some computable function $f$ and some constant $k$, integer programming can be solved in time $f(a, d) \cdot n^k$. In particular, the time is independent of the right-hand side $\mathbf{b}$ and objective function $\mathbf{c}$. Moreover, in contrast to the classical result of Lenstra, where the number $n$ of variables is a parameter, here the number of variables is a variable part of the input.
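A small sketch of the "graph of A" used by this sparsity measure, building the graph only (computing tree-depth itself is NP-hard in general and is not attempted here); it assumes the networkx library is available.

```python
import numpy as np
import networkx as nx

def column_graph(A):
    """Graph of A: one vertex per column (variable); two columns are adjacent
    if some row of A has nonzero entries in both columns."""
    m, n = A.shape
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for row in A:
        support = np.flatnonzero(row)
        for i in range(len(support)):
            for j in range(i + 1, len(support)):
                G.add_edge(int(support[i]), int(support[j]))
    return G

# A block-structured matrix: variables {0, 1} and {2, 3} never share a row,
# so the graph splits into two components (low tree-depth).
A = np.array([[1, 2, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 3, 1]])
print(column_graph(A).edges())   # [(0, 1), (2, 3)]
```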

See also


References

  1. Karp, Richard M. (1972). "Reducibility among Combinatorial Problems" (PDF). In R. E. Miller; J. W. Thatcher; J.D. Bohlinger (eds.). Complexity of Computer Computations. New York: Plenum. pp. 85–103. doi:10.1007/978-1-4684-2001-2_9. ISBN   978-1-4684-2003-6.
  2. "Mixed-Integer Linear Programming (MILP): Model Formulation" (PDF). Retrieved 16 April 2018.
  3. Papadimitriou, C. H.; Steiglitz, K. (1998). Combinatorial optimization: algorithms and complexity. Mineola, NY: Dover. ISBN   0486402584.
  4. Erickson, J. (2015). "Integer Programming Reduction" (PDF). Archived from the original (PDF) on 18 May 2015.
  5. Williams, H.P. (2009). Logic and integer programming. International Series in Operations Research & Management Science. Vol. 130. ISBN   978-0-387-92280-5.
  6. Borndörfer, R.; Grötschel, M. (2012). "Designing telecommunication networks by integer programming" (PDF).
  7. Sharma, Deepak (2010). "Frequency Planning".
  8. Morais, Hugo; Kádár, Péter; Faria, Pedro; Vale, Zita A.; Khodr, H. M. (2010-01-01). "Optimal scheduling of a renewable micro-grid in an isolated load area using mixed-integer linear programming". Renewable Energy. 35 (1): 151–156. doi:10.1016/j.renene.2009.02.031. hdl: 10400.22/1585 . ISSN   0960-1481.
  9. Omu, Akomeno; Choudhary, Ruchi; Boies, Adam (2013-10-01). "Distributed energy resource system optimisation using mixed integer linear programming". Energy Policy. 61: 249–266. doi:10.1016/j.enpol.2013.05.009. ISSN   0301-4215. S2CID   29369795.
  10. Schouwenaars, T.; Valenti, M.; Feron, E.; How, J. (2005). "Implementation and Flight Test Results of MILP-based UAV Guidance". 2005 IEEE Aerospace Conference. pp. 1–13. doi:10.1109/AERO.2005.1559600. ISBN   0-7803-8870-4. S2CID   13447718.
  11. Radmanesh, Mohammadreza; Kumar, Manish (2016-03-01). "Flight formation of UAVs in presence of moving obstacles using fast-dynamic mixed integer linear programming". Aerospace Science and Technology. 50: 149–160. doi: 10.1016/j.ast.2015.12.021 . ISSN   1270-9638.
  12. Bast, Hannah; Brosi, Patrick; Storandt, Sabine (2017-10-05). "Efficient Generation of Geographically Accurate Transit Maps". arXiv: 1710.02226 [cs.CG].
  13. Scarf, Herbert E. (1981). "Production Sets with Indivisibilities, Part I: Generalities". Econometrica. 49 (1): 1–32. doi:10.2307/1911124. ISSN   0012-9682. JSTOR   1911124.
  14. Lenstra, H. W. (1983-11-01). "Integer Programming with a Fixed Number of Variables". Mathematics of Operations Research. 8 (4): 538–548. CiteSeerX   10.1.1.431.5444 . doi:10.1287/moor.8.4.538. ISSN   0364-765X.
  15. Amenta, Nina; De Loera, Jesús A.; Soberón, Pablo (2017). "Helly's theorem: new variations and applications". In Harrington, Heather A.; Omar, Mohamed; Wright, Matthew (eds.). Proceedings of the AMS Special Session on Algebraic and Geometric Methods in Applied Discrete Mathematics held in San Antonio, TX, January 11, 2015. Contemporary Mathematics. Vol. 685. Providence, Rhode Island: American Mathematical Society. pp. 55–95. arXiv: 1508.07606 . doi:10.1090/conm/685. ISBN   9781470423216. MR   3625571.
  16. Kannan, Ravi (1987-08-01). "Minkowski's Convex Body Theorem and Integer Programming". Mathematics of Operations Research. 12 (3): 415–440. doi:10.1287/moor.12.3.415. ISSN   0364-765X. S2CID   495512.
  17. Goemans, Michel X.; Rothvoss, Thomas (2020-11-07). "Polynomiality for Bin Packing with a Constant Number of Item Types". Journal of the ACM . 67 (6): 38:1–38:21. doi: 10.1145/3421750 . hdl: 1721.1/92865 . ISSN   0004-5411. S2CID   227154747.
  18. Frank, András; Tardos, Éva (1987-03-01). "An application of simultaneous diophantine approximation in combinatorial optimization". Combinatorica. 7 (1): 49–65. doi:10.1007/BF02579200. ISSN   1439-6912. S2CID   45585308.
  19. Bliem, Bernhard; Bredereck, Robert; Niedermeier, Rolf (2016-07-09). "Complexity of efficient and envy-free resource allocation: few agents, resources, or utility levels". Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. IJCAI'16. New York, New York, USA: AAAI Press: 102–108. ISBN   978-1-57735-770-4.
  20. Bredereck, Robert; Kaczmarczyk, Andrzej; Knop, Dušan; Niedermeier, Rolf (2019-06-17). "High-Multiplicity Fair Allocation: Lenstra Empowered by N-fold Integer Programming". Proceedings of the 2019 ACM Conference on Economics and Computation. EC '19. Phoenix, AZ, USA: Association for Computing Machinery. pp. 505–523. doi:10.1145/3328526.3329649. ISBN   978-1-4503-6792-9. S2CID   195298520.
  21. Dadush, Daniel (2012-06-14). "Integer Programming, Lattice Algorithms, and Deterministic Volume Estimation".
  22. Reis, Victor; Rothvoss, Thomas (2023-03-26). "The Subspace Flatness Conjecture and Faster Integer Programming".
  23. Glover, F. (1989). "Tabu search-Part II". ORSA Journal on Computing. 1 (3): 4–32. doi:10.1287/ijoc.2.1.4. S2CID   207225435.
  24. Koutecký, Martin; Levin, Asaf; Onn, Shmuel (2018). "A parameterized strongly polynomial algorithm for block structured integer programs". In Chatzigiannakis, Ioannis; Kaklamanis, Christos; Marx, Dániel; Sannella, Donald (eds.). 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9–13, 2018, Prague, Czech Republic. LIPIcs. Vol. 107. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. pp. 85:1–85:14. arXiv: 1802.05859 . doi:10.4230/LIPICS.ICALP.2018.85.

Further reading