In the bin covering problem, items of different sizes must be packed into a finite number of bins or containers, each of which must contain at least a certain given total size, in a way that maximizes the number of bins used.
This problem is a dual of the bin packing problem: in bin covering, the bin sizes are bounded from below and the goal is to maximize their number; in bin packing, the bin sizes are bounded from above and the goal is to minimize their number. [1]
The problem is NP-hard, but there are various efficient approximation algorithms:
Csirik, Frenk, Lebbe and Zhang [2] : 16–19 present the following simple algorithm, called bidirectional filling (BDF), which attains a 2/3 approximation. Suppose the bin size is 1 and there are n items. The items are ordered from largest to smallest; each bin is initialized with the largest remaining item and is then filled with the smallest remaining items, one at a time, until its sum reaches at least 1, at which point the next bin is opened. The process ends when the items run out; the last bin may remain unfilled.
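The following Python sketch illustrates this bidirectional-filling procedure as described above; the function name bdf_cover, the deque-based representation, and the default bin size are illustrative choices rather than details from the source.

    from collections import deque

    def bdf_cover(sizes, bin_size=1.0):
        """Bidirectional filling (BDF) sketch: open each bin with the largest
        remaining item, then top it up with the smallest remaining items
        until its sum reaches bin_size. Returns the list of covered bins."""
        items = deque(sorted(sizes, reverse=True))  # largest at the left, smallest at the right
        bins = []
        while items:
            current = [items.popleft()]             # initial item: largest remaining
            total = current[0]
            while total < bin_size and items:
                smallest = items.pop()              # middle/final items: smallest remaining
                current.append(smallest)
                total += smallest
            if total >= bin_size:                   # an unfilled last bin is not counted
                bins.append(current)
        return bins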
For any instance I, denote by OPT(I) the number of bins in the optimal solution, and by BDF(I) the number of full bins in the bidirectional filling algorithm. Then BDF(I) ≥ (2/3)·(OPT(I) − 1), or equivalently, OPT(I) ≤ (3/2)·BDF(I) + 1.
For the proof, the following terminology is used. In each bin filled by BDF, the initial item is the item with which the bin was initialized (the largest item remaining at that moment), the final item is the item whose addition raised the bin's sum to at least 1, and any items added in between are middle items. The sum of each filled bin is at least 1, but if the final item is removed from it, then the remaining sum is smaller than 1. Each of the bins filled first contains an initial item, possibly some middle items, and a final item. Each of the bins filled last contains only an initial item and a final item, since both of them are larger than 1/2 and their sum is already larger than 1.
The proof considers two cases.
The easy case is when all final items are smaller than 1/2. Then the sum of every filled bin is at most 3/2, since its sum was below 1 just before the final item was added; and the sum of the items left outside the filled bins is at most 1. So the sum of all items is at most (3/2)·BDF(I) + 1. On the other hand, in the optimal solution the sum of every bin is at least 1, so the sum of all items is at least OPT(I). Therefore OPT(I) ≤ (3/2)·BDF(I) + 1, as required.
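In symbols, writing S for the total size of all items (a shorthand introduced here for brevity), the easy case can be summarized as

    OPT(I) ≤ S ≤ (3/2)·BDF(I) + 1,

which rearranges to BDF(I) ≥ (2/3)·(OPT(I) − 1).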
The hard case is when some final items are larger than 1/2. In this case, an upper bound on OPT(I) is proved by writing it as a sum of terms, one for each group of optimal bins, and bounding each term separately. For the optimal bins in the first groups, the proof presents a bijection between the items in each such bin and some items packed by BDF that are at least as large; a similar argument handles the optimal bins in the remaining groups. Summing the resulting bounds gives the claimed inequality in this case too.
The 2/3 factor is asymptotically tight for BDF. Consider an instance consisting of one large item, many middle-sized items of size slightly below 1/2, and many small items of size ε, where ε is sufficiently small. BDF initializes the first bin with the large item and fills it with all the small items. Then, the remaining middle-sized items can cover bins only in triplets, whereas in OPT one can fill about 3/2 times as many bins, each of which contains two of the middle-sized items and two small items.
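For a concrete check, the following snippet runs the bdf_cover sketch above on an integer-scaled family of this kind (the specific numbers are illustrative and not the source's exact instance): with bin size 1000, BDF covers 2k + 1 bins, while an optimal covering uses 3k bins, each holding two items of size 499 and two items of size 1.

    # Integer-scaled illustration (not the source's exact instance): bin size 1000,
    # one large item, 6k "middle" items of size 499, and 6k "small" items of size 1.
    k = 5
    items = [1000 - 6 * k] + [499] * (6 * k) + [1] * (6 * k)
    print(len(bdf_cover(items, bin_size=1000)))  # prints 11, i.e. 2k + 1 covered bins
    # An optimal covering instead uses 3k = 15 bins, since 499 + 499 + 1 + 1 = 1000.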
Csirik, Frenk, Lebbe and Zhang [2] : 19–24 present another algorithm, called three-classes filling (TCF), that attains a 3/4 approximation. The algorithm orders the items from large to small and partitions them into three classes, X, Y and Z, from the largest sizes to the smallest. It then works in two phases: in the first phase, bins are initialized with the largest available items (from X and Y) and filled up with the smallest available items (from Z); the second phase handles the items that remain once one of the classes runs out.
In the above example, showing the tightness of BDF, the large item belongs to X, the middle-sized items belong to Y, and the small items belong to Z.
TCF attains the optimal outcome, since it initializes all bins with pairs of items from Y, and fills them with pairs of items from Z.
For any instance I, denote by OPT(I) the number of bins in the optimal solution, and by TCF(I) the number of full bins in the three-classes filling algorithm. Then TCF(I) ≥ (3/4)·OPT(I) − O(1); that is, TCF is an asymptotic 3/4 approximation.
The 3/4 factor is asymptotically tight for TCF. Consider an instance consisting of two large items, many middle-sized items of size slightly below 1/3, and many small items of size ε, where ε is sufficiently small. TCF initializes the first bin with the two largest items and fills it with all the small items. Then, the remaining middle-sized items can cover bins only in groups of four, whereas in OPT one can fill about 4/3 times as many bins, each of which contains three middle-sized items and three small items.
Csirik, Johnson and Kenyon [3] present an asymptotic PTAS. It is an algorithm that, for every ε > 0, fills at least (1 − 5ε)·OPT(I) bins minus a small additive constant (the exact guarantee depends on whether the sum of all items exceeds a threshold that depends on ε). It runs in time polynomial in n, with an exponent on the order of 1/ε². The algorithm solves a variant of the configuration linear program, with roughly n^(1/ε²) variables. This algorithm is only theoretically interesting, since in order to get an approximation better than 3/4, one must take ε < 1/20, and then the number of variables is more than n^400.
They also present algorithms for the online version of the problem. In the online setting, it is not possible to get an asymptotic worst-case approximation factor better than 1/2. However, there are algorithms that perform well in the average case.
Jansen and Solis-Oba [4] present an asymptotic FPTAS. It is an algorithm that, for every ε > 0, fills at least (1 − ε)·OPT(I) bins minus a small additive constant, provided the sum of all items exceeds a certain threshold depending on ε (if the sum of items is smaller than that, then the optimum is at most that threshold anyway). Its running time is polynomial in n and 1/ε; the dependence on 1/ε involves M(d), the runtime complexity of the best available algorithm for inverting a d×d matrix (currently around O(d^2.37)). This algorithm becomes better than the 3/4 approximation already when ε < 1/4, and in this case the constants involved are reasonable.
An important special case of bin covering is that the item sizes form a divisible sequence (also called factored). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. If the item sizes are divisible, then some of the heuristic algorithms for bin covering find an optimal solution. [5] : Sec.5
In the fair item allocation problem, there are several people, each of whom may attribute a different value to each item. The goal is to give as many people as possible a "bin" full of items whose value, for its recipient, is at least a certain constant. Many techniques from bin covering are used in this problem too.