First-fit-decreasing bin packing

First-fit-decreasing (FFD) is an algorithm for bin packing. Its input is a list of items of different sizes. Its output is a packing - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem, so we use an approximately-optimal heuristic.

Description

The FFD algorithm works as follows.

  1. Order the items from largest to smallest.
  2. Open a new empty bin.
  3. For each item in order, find the first open bin into which it fits, if any. If such a bin is found, put the item in it; otherwise, open a new empty bin and put the item there.

In short: FFD orders the items by descending size, and then calls first-fit bin packing.

An equivalent description of the FFD algorithm is as follows.

  1. Order the items from largest to smallest.
  2. While there are remaining items: open a new empty bin, scan the remaining items from largest to smallest, and put into the bin every item that still fits.

In the standard description, we loop over the items once, but keep many open bins. In the equivalent description, we loop over the items many times, but keep only a single open bin each time.
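
The following is a short Python sketch of the standard description (the function name and the representation of bins as lists of sizes are illustrative choices, not from any particular library):

```python
def first_fit_decreasing(sizes, capacity):
    """Pack items into bins of the given capacity using first-fit-decreasing."""
    bins = []                                 # each bin is a list of item sizes
    for s in sorted(sizes, reverse=True):     # items by descending size
        for b in bins:                        # first-fit: first open bin with room
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:                                 # no open bin fits the item
            bins.append([s])
    return bins

print(first_fit_decreasing([44, 24, 24, 22, 21, 17, 8, 8, 6, 6], 60))
# [[44, 8, 8], [24, 24, 6, 6], [22, 21, 17]]
```

Sorting costs O(n log n); the inner scan as written recomputes bin loads and is quadratic, but maintaining each bin's remaining capacity in a search tree ordered by bin index lets first-fit run in O(n log n) overall.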

Performance analysis

The performance of FFD was analyzed in several steps. Below, FFD(S,C) denotes the number of bins used by FFD for input set S and bin-capacity C, and OPT(S,C) denotes the minimum possible number of bins.

  1. In 1973, D. S. Johnson proved in his doctoral thesis that FFD(S,C) ≤ 11/9 OPT(S,C) + 4. [1]
  2. In 1985, B. S. Baker gave a somewhat simpler proof and showed that the additive constant is at most 3. [2]
  3. In 1991, Yue Minyi proved that FFD(S,C) ≤ 11/9 OPT(S,C) + 1. [3]
  4. In 1997, Li Rongheng and Yue Minyi improved the additive constant to 7/9. [4]
  5. In 2007, György Dósa proved the tight bound FFD(S,C) ≤ 11/9 OPT(S,C) + 6/9 and presented an example showing that it cannot be improved. [5]

Worst-case example

The lower bound example given by Dósa [5] is the following. Consider the two bin configurations:

  B1 = {1/2+ε, 1/4+ε, 1/4−2ε};
  B2 = {1/4+2ε, 1/4+2ε, 1/4−2ε, 1/4−2ε}.

If there are 4 copies of B1 and 2 copies of B2 in the optimal solution, FFD will compute the following bins:

  4 bins with configuration {1/2+ε, 1/4+2ε},
  1 bin with configuration {1/4+ε, 1/4+ε, 1/4+ε},
  1 bin with configuration {1/4+ε, 1/4−2ε, 1/4−2ε, 1/4−2ε},
  1 bin with configuration {1/4−2ε, 1/4−2ε, 1/4−2ε, 1/4−2ε},
  1 bin with the remaining item {1/4−2ε}.

That is, 8 bins total, while the optimum has only 6 bins. Therefore, the upper bound is tight, because 11/9 · 6 + 6/9 = 8.

This example can be extended to all values of OPT(S,C): [5] in the optimal configuration there are 9k+6 bins: 6k+4 of type B1 and 3k+2 of type B2. But FFD needs at least 11k+8 bins, which is 11/9 · (9k+6) + 6/9.
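
Assuming the configurations above, the following sketch checks the 8-versus-6 gap numerically with ε = 1/100, using exact Fraction arithmetic (the first_fit_decreasing function repeats the sketch from the Description section so the snippet is self-contained):

```python
from fractions import Fraction as F

def first_fit_decreasing(sizes, capacity):
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

eps = F(1, 100)
B1 = [F(1, 2) + eps, F(1, 4) + eps, F(1, 4) - 2 * eps]                              # sums to 1
B2 = [F(1, 4) + 2 * eps, F(1, 4) + 2 * eps, F(1, 4) - 2 * eps, F(1, 4) - 2 * eps]   # sums to 1
items = 4 * B1 + 2 * B2            # the optimum packs these into 6 full bins

print(len(first_fit_decreasing(items, F(1))))   # 8 = 11/9 * 6 + 6/9
```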

Performance with divisible item sizes

An important special case of bin-packing is that in which the item sizes form a divisible sequence (also called "factored"). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. In this case, FFD always finds the optimal packing. [6]: Thm.2
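
As a quick illustration of this special case (not a proof of the theorem), the sketch below runs FFD on power-of-two item sizes with bin capacity 8; here the number of bins equals the volume lower bound ⌈(total size)/capacity⌉, which certifies that the packing is optimal for this particular instance. The first_fit_decreasing function repeats the sketch from the Description section.

```python
import math

def first_fit_decreasing(sizes, capacity):
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

items = [4, 4, 4, 2, 2, 2, 1, 1, 1, 1]         # power-of-two sizes, capacity 8
bins = first_fit_decreasing(items, 8)
print(bins)                                    # [[4, 4], [4, 2, 2], [2, 1, 1, 1, 1]]
print(len(bins), math.ceil(sum(items) / 8))    # 3 3 -> FFD meets the volume lower bound
```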

Monotonicity properties

Contrary to intuition, FFD(S,C) is not a monotonic function of C. [7]: Fig.4. Similarly, FFD(S,C) is not a monotonic function of the sizes of items in S: it is possible that an item shrinks in size, but the number of bins increases.

However, the FFD algorithm does satisfy an "asymptotic monotonicity" property. [7]: Lem.2.1

Examples

For example, suppose the input is:

44, 24, 24, 22, 21, 17, 8, 8, 6, 6.

With capacity 60, FFD packs 3 bins: {44, 8, 8}, {24, 24, 6, 6}, {22, 21, 17}.

But with capacity 61, FFD packs 4 bins: {44, 17}, {24, 24, 8}, {22, 21, 8, 6}, {6}.

This is because, with capacity 61, the 17 fits into the first bin, and thus blocks the way to the following 8, 8.

As another example, [8]: Ex.5.1 suppose the inputs are: 51, 28, 28, 28, 27, 25, 12, 12, 10, 10, 10, 10, 10, 10, 10, 10. With capacity 75, FFD packs 4 bins: {51, 12, 12}, {28, 28, 10}, {28, 27, 10, 10}, {25, 10, 10, 10, 10, 10}.

But with capacity 76, it needs 5 bins: {51, 25}, {28, 28, 12}, {28, 27, 12}, {10, 10, 10, 10, 10, 10, 10}, {10}.

Consider the above example with capacity 60. If the 17 becomes 16, then the resulting packing is: {44, 16}, {24, 24, 8}, {22, 21, 8, 6}, {6} - that is, 4 bins instead of 3, even though one item became smaller.
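
These non-monotonicity effects are easy to reproduce; the snippet below (again repeating the first_fit_decreasing sketch from the Description section) prints the packings for capacities 60 and 61 and for the instance with the shrunk item:

```python
def first_fit_decreasing(sizes, capacity):
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

items = [44, 24, 24, 22, 21, 17, 8, 8, 6, 6]
print(first_fit_decreasing(items, 60))   # 3 bins: [[44, 8, 8], [24, 24, 6, 6], [22, 21, 17]]
print(first_fit_decreasing(items, 61))   # 4 bins: [[44, 17], [24, 24, 8], [22, 21, 8, 6], [6]]

shrunk = [44, 24, 24, 22, 21, 16, 8, 8, 6, 6]   # the 17 becomes 16
print(first_fit_decreasing(shrunk, 60))  # 4 bins: [[44, 16], [24, 24, 8], [22, 21, 8, 6], [6]]
```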

Modified first-fit-decreasing

Modified first fit decreasing (MFFD) [9] improves on FFD for items larger than half a bin by classifying items by size into four size classes: large, medium, small, and tiny, corresponding to items with size > 1/2 bin, > 1/3 bin, > 1/6 bin, and smaller items, respectively. It then proceeds through five phases (a code sketch follows the list):

  1. Allot a bin for each large item, ordered largest to smallest.
  2. Proceed forward through the bins. On each: If the smallest remaining medium item does not fit, skip this bin. Otherwise, place the largest remaining medium item that fits.
  3. Proceed backward through those bins that do not contain a medium item. On each: If the two smallest remaining small items do not fit, skip this bin. Otherwise, place the smallest remaining small item and the largest remaining small item that fits.
  4. Proceed forward through all bins. If the smallest remaining item of any size class does not fit, skip this bin. Otherwise, place the largest item that fits and stay on this bin.
  5. Use FFD to pack the remaining items into new bins.
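
The phases above translate fairly directly into code. Below is a Python sketch of MFFD under two stated assumptions: sizes are plain numbers, and in phase 3 a bin is skipped when fewer than two small items remain. It is an illustration of the phase structure, not a reference implementation.

```python
from typing import List

def mffd(sizes: List[float], capacity: float) -> List[List[float]]:
    """Modified first-fit-decreasing, following the five phases described above."""
    large  = sorted((s for s in sizes if s > capacity / 2), reverse=True)
    medium = sorted((s for s in sizes if capacity / 3 < s <= capacity / 2), reverse=True)
    small  = sorted((s for s in sizes if capacity / 6 < s <= capacity / 3), reverse=True)
    tiny   = sorted((s for s in sizes if s <= capacity / 6), reverse=True)

    def room(b: List[float]) -> float:
        return capacity - sum(b)

    # Phase 1: one bin per large item, largest first.
    bins = [[s] for s in large]
    got_medium = [False] * len(bins)

    # Phase 2: forward pass; if the smallest remaining medium item fits,
    # place the largest medium item that fits.
    for i, b in enumerate(bins):
        if medium and medium[-1] <= room(b):
            j = next(k for k, s in enumerate(medium) if s <= room(b))
            b.append(medium.pop(j))
            got_medium[i] = True

    # Phase 3: backward pass over bins without a medium item; if the two
    # smallest small items fit together, place the smallest small item and
    # then the largest small item that still fits.
    for i in range(len(bins) - 1, -1, -1):
        b = bins[i]
        if got_medium[i] or len(small) < 2 or small[-1] + small[-2] > room(b):
            continue
        b.append(small.pop())                       # smallest remaining small item
        j = next(k for k, s in enumerate(small) if s <= room(b))
        b.append(small.pop(j))                      # largest small item that fits

    # Phase 4: forward pass; while the smallest remaining item fits,
    # place the largest remaining item that fits.
    remaining = sorted(medium + small + tiny, reverse=True)
    for b in bins:
        while remaining and remaining[-1] <= room(b):
            j = next(k for k, s in enumerate(remaining) if s <= room(b))
            b.append(remaining.pop(j))

    # Phase 5: pack whatever is left with FFD into new bins.
    new_bins: List[List[float]] = []
    for s in remaining:                             # already in decreasing order
        for b in new_bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            new_bins.append([s])
    return bins + new_bins
```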

This algorithm was first studied by Johnson and Garey [9] in 1985, who proved that MFFD(S,C) ≤ 71/60 OPT(S,C) + 31/6. This bound was improved in 1995 by Yue and Zhang [10], who proved that MFFD(S,C) ≤ 71/60 OPT(S,C) + 1.

Other variants

Best-fit-decreasing (BFD) is very similar to FFD, except that after the list is sorted, it is processed by best-fit bin packing. Its asymptotic approximation ratio is the same as that of FFD, namely 11/9.
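
A minimal sketch of the difference, assuming the same list-of-bins representation as in the earlier sketches: the only change from first-fit-decreasing is that each item goes into the feasible bin with the least leftover room rather than the first feasible bin.

```python
def best_fit_decreasing(sizes, capacity):
    bins = []                                      # each bin is a list of item sizes
    for s in sorted(sizes, reverse=True):          # items by descending size
        feasible = [b for b in bins if sum(b) + s <= capacity]
        if feasible:
            # best-fit: the bin that would be left with the smallest gap
            min(feasible, key=lambda b: capacity - sum(b) - s).append(s)
        else:
            bins.append([s])
    return bins

print(best_fit_decreasing([44, 24, 24, 22, 21, 17, 8, 8, 6, 6], 60))
# [[44, 8, 6], [24, 24, 8], [22, 21, 17], [6]] - 4 bins on this instance, one more than FFD
```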

References

  1. Johnson, David S (1973). "Near-optimal bin packing algorithms" (PDF). Massachusetts Institute of Technology.
  2. Baker, Brenda S. (1985). "A New Proof for the First-Fit Decreasing Bin-Packing Algorithm". J. Algorithms. 6 (1): 49–70. doi:10.1016/0196-6774(85)90018-5.
  3. Yue, Minyi (October 1991). "A simple proof of the inequality FFD(L) ≤ 11/9 OPT(L) + 1, ∀L for the FFD bin-packing algorithm". Acta Mathematicae Applicatae Sinica. 7 (4): 321–331. doi:10.1007/BF02009683. S2CID 189915733.
  4. Li, Rongheng; Yue, Minyi (August 1997). "The proof of FFD(L) ≤ 11/9 OPT(L) + 7/9". Chinese Science Bulletin. 42 (15): 1262–1265. Bibcode:1997ChSBu..42.1262L. doi:10.1007/BF02882754. S2CID 93280100.
  5. Dósa, György (2007). "The Tight Bound of First Fit Decreasing Bin-Packing Algorithm is FFD(I) ≤ 11/9OPT(I) + 6/9". In Chen Bo; Mike Paterson; Zhang Guochuan (eds.). Combinatorics, Algorithms, Probabilistic and Experimental Methodologies. First International Symposium, ESCAPE 2007, Hangzhou, China, April 7–9, 2007. Lecture Notes in Computer Science. Vol. 4614. pp. 1–11. doi:10.1007/978-3-540-74450-4_1. ISBN 978-3-540-74449-8.
  6. Coffman, E. G.; Garey, M. R.; Johnson, D. S. (1987-12-01). "Bin packing with divisible item sizes". Journal of Complexity. 3 (4): 406–428. doi:10.1016/0885-064X(87)90009-4. ISSN 0885-064X.
  7. Coffman, E. G. Jr.; Garey, M. R.; Johnson, D. S. (1978-02-01). "An Application of Bin-Packing to Multiprocessor Scheduling". SIAM Journal on Computing. 7 (1): 1–17. doi:10.1137/0207001. ISSN 0097-5397.
  8. Huang, Xin; Lu, Pinyan (2021-07-18). "An Algorithmic Framework for Approximating Maximin Share Allocation of Chores". Proceedings of the 22nd ACM Conference on Economics and Computation. EC '21. New York, NY, USA: Association for Computing Machinery. pp. 630–631. arXiv:1907.04505. doi:10.1145/3465456.3467555. ISBN 978-1-4503-8554-1. S2CID 195874333.
  9. Johnson, David S; Garey, Michael R (October 1985). "A 71/60 theorem for bin packing". Journal of Complexity. 1 (1): 65–106. doi:10.1016/0885-064X(85)90022-6.
  10. Yue, Minyi; Zhang, Lei (July 1995). "A simple proof of the inequality MFFD(L) ≤ 71/60 OPT(L) + 1, ∀L for the MFFD bin-packing algorithm". Acta Mathematicae Applicatae Sinica. 11 (3): 318–330. doi:10.1007/BF02011198. S2CID 118263129.