The bin packing problem[1][2][3][4] is an optimization problem, in which items of different sizes must be packed into a finite number of bins or containers, each of a fixed given capacity, in a way that minimizes the number of bins used. The problem has many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media, splitting a network prefix into multiple subnets,[5] and technology mapping in FPGA semiconductor chip design.
Computationally, the problem is NP-hard, and the corresponding decision problem, deciding if items can fit into a specified number of bins, is NP-complete. Despite its worst-case hardness, optimal solutions to very large instances of the problem can be produced with sophisticated algorithms. In addition, many approximation algorithms exist. For example, the first fit algorithm provides a fast but often non-optimal solution, involving placing each item into the first bin in which it will fit. It requires Θ(n log n) time, where n is the number of items to be packed. The algorithm can be made much more effective by first sorting the list of items into decreasing order (sometimes known as the first-fit decreasing algorithm), although this still does not guarantee an optimal solution and for longer lists may increase the running time of the algorithm. It is known, however, that there always exists at least one ordering of items that allows first-fit to produce an optimal solution.[6]
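A minimal Python sketch of first-fit and first-fit-decreasing may make this concrete (the list-of-free-space representation is an illustrative choice; note that the naive inner scan runs in O(n^2) time, and the Θ(n log n) bound requires a balanced search tree over the bin loads):

def first_fit(items, capacity=1.0):
    # bins[i] holds the remaining free space of bin i
    bins = []
    for size in items:
        for i, free in enumerate(bins):
            if size <= free:            # place item in the first bin it fits
                bins[i] -= size
                break
        else:                           # no open bin fits: open a new bin
            bins.append(capacity - size)
    return len(bins)

def first_fit_decreasing(items, capacity=1.0):
    # sort by descending size, then run first-fit
    return first_fit(sorted(items, reverse=True), capacity)

For example, first_fit([0.4, 0.4, 0.6, 0.6]) opens 3 bins, while first_fit_decreasing on the same list needs only 2.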
There are many variations of this problem, such as 2D packing, linear packing, packing by weight, packing by cost, and so on. The bin packing problem can also be seen as a special case of the cutting stock problem. When the number of bins is restricted to 1 and each item is characterized by both a volume and a value, the problem of maximizing the value of items that can fit in the bin is known as the knapsack problem.
A variant of bin packing that occurs in practice is when items can share space when packed into a bin. Specifically, a set of items could occupy less space when packed together than the sum of their individual sizes. This variant is known as VM packing[7] since when virtual machines (VMs) are packed in a server, their total memory requirement could decrease due to pages shared by the VMs that need only be stored once. If items can share space in arbitrary ways, the bin packing problem is hard to even approximate. However, if space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated.
Another variant of bin packing of interest in practice is the so-called online bin packing. Here, items of different volumes arrive sequentially, and the decision maker must decide whether to select and pack the currently observed item, or else to let it pass. Each decision is made without recall. In contrast, offline bin packing allows rearranging the items in the hope of achieving a better packing once additional items arrive. This of course requires additional storage for holding the items to be rearranged.
Formal statement
In Computers and Intractability[8]:226 Garey and Johnson list the bin packing problem under the reference [SR1]. They define its decision variant as follows.
Instance: Finite set $I$ of items, a size $s(i) \in \mathbb{Z}^+$ for each $i \in I$, a positive integer bin capacity $B$, and a positive integer $K$.
Question: Is there a partition of $I$ into disjoint sets $I_1, \dots, I_K$ such that the sum of the sizes of the items in each $I_j$ is $B$ or less?
Note that in the literature, an equivalent notation is often used, where $B = 1$ and $s(i) \in \mathbb{Q} \cap (0, 1]$ for each $i \in I$. Furthermore, research is mostly interested in the optimization variant, which asks for the smallest possible value of $K$. A solution is optimal if it has minimal $K$. The $K$-value of an optimal solution for a set of items $I$ is denoted by $\mathrm{OPT}(I)$, or just $\mathrm{OPT}$ if the set of items is clear from the context.
Furthermore, there can be no approximation algorithm with absolute approximation ratio smaller than $\tfrac{3}{2}$ unless $\mathsf{P} = \mathsf{NP}$. This can be proven by a reduction from the partition problem:[10] given an instance of Partition where the sum of all input numbers is $2T$, construct an instance of bin packing in which the bin size is $T$. If there exists an equal partition of the inputs, then the optimal packing needs 2 bins; therefore, every algorithm with an approximation ratio smaller than $\tfrac{3}{2}$ must return fewer than 3 bins, and hence 2 bins. In contrast, if there is no equal partition of the inputs, then the optimal packing needs at least 3 bins.
On the other hand, bin packing is solvable in pseudo-polynomial time for any fixed number of bins K, and solvable in polynomial time for any fixed bin capacity B.[8]
Approximation algorithms for bin packing
To measure the performance of an approximation algorithm, two approximation ratios are considered in the literature. For a given list of items $L$, the number $A(L)$ denotes the number of bins used when algorithm $A$ is applied to list $L$, while $\mathrm{OPT}(L)$ denotes the optimum number for this list. The absolute worst-case performance ratio $R_A$ for an algorithm $A$ is defined as

$R_A \equiv \sup \{ A(L)/\mathrm{OPT}(L) : L \text{ is a list of items} \}.$
On the other hand, the asymptotic worst-case ratio $R_A^{\infty}$ is defined as

$R_A^{\infty} \equiv \inf \{ R \geq 1 : A(L)/\mathrm{OPT}(L) \leq R \text{ for all lists } L \text{ with } \mathrm{OPT}(L) \geq N, \text{ for some } N > 0 \}.$
Equivalently, $R_A^{\infty}$ is the smallest number such that there exists some constant $K$, such that for all lists $L$:[4]

$A(L) \leq R_A^{\infty} \cdot \mathrm{OPT}(L) + K.$
Additionally, one can restrict the lists to those in which all items have a size of at most $\alpha$. For such lists, the bounded size performance ratios are denoted $R_A(\text{size} \leq \alpha)$ and $R_A^{\infty}(\text{size} \leq \alpha)$.
Approximation algorithms for bin packing can be classified into two categories:
Online heuristics, which consider the items in a given order and place them one by one inside the bins. These heuristics are also applicable to the offline version of this problem.
Offline heuristics, which modify the given list of items, e.g. by sorting the items by size. These algorithms are no longer applicable to the online variant of this problem. However, they have an improved approximation guarantee while maintaining the advantage of their small time complexity. A sub-category of offline heuristics is asymptotic approximation schemes. These algorithms have an approximation guarantee of the form $(1 + \varepsilon)\mathrm{OPT}(L) + C$ for some constant $C$ that may depend on $1/\varepsilon$. For an arbitrarily large $\mathrm{OPT}(L)$ these algorithms get arbitrarily close to $\mathrm{OPT}(L)$. However, this comes at the cost of a (drastically) increased time complexity compared to the heuristic approaches.
Online heuristics
In the online version of the bin packing problem, the items arrive one after another, and the (irreversible) decision where to place an item has to be made before knowing the next item, or even whether there will be another one. A diverse set of offline and online heuristics for bin packing was studied by David S. Johnson in his Ph.D. thesis.[11]
Single-class algorithms
There are many simple algorithms that use the following general scheme:
For each item in the input list:
If the item fits into one of the currently open bins, then put it in one of these bins;
Otherwise, open a new bin and put the new item in it.
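This scheme can be sketched in Python with the bin-choice criterion left as a parameter (the helper names below are illustrative, not from the literature):

def pack(items, choose_bin, capacity=1.0):
    # Generic scheme: bins[i] is the current load of bin i.
    bins = []
    for size in items:
        i = choose_bin(bins, size, capacity)
        if i is None:
            bins.append(size)        # step 2: open a new bin
        else:
            bins[i] += size          # step 1: put the item in a chosen open bin
    return bins

def best_fit(bins, size, capacity):
    # Criterion: the feasible bin with maximum load (tightest fit).
    feasible = [i for i, load in enumerate(bins) if load + size <= capacity]
    return max(feasible, key=lambda i: bins[i]) if feasible else None

def worst_fit(bins, size, capacity):
    # Criterion: the feasible bin with minimum load.
    feasible = [i for i, load in enumerate(bins) if load + size <= capacity]
    return min(feasible, key=lambda i: bins[i]) if feasible else None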
The algorithms differ in the criterion by which they choose the open bin for the new item in step 1 (see the linked pages for more information):
Next-Fit (NF) always keeps a single open bin. When the new item does not fit into it, it closes the current bin and opens a new one. Its advantage is that it is a bounded-space algorithm, since it only needs to keep a single open bin in memory. Its disadvantage is that its asymptotic approximation ratio is 2. In particular, $NF(L) \leq 2 \cdot \mathrm{OPT}(L)$, and for each $N \in \mathbb{N}$ there exists a list $L$ such that $\mathrm{OPT}(L) = N$ and $NF(L) = 2N - 2$.[11] Its asymptotic approximation ratio can be somewhat improved based on the item sizes: $R_{NF}^{\infty}(\text{size} \leq \alpha) \leq 2$ for all $\alpha \geq 1/2$ and $R_{NF}^{\infty}(\text{size} \leq \alpha) \leq 1/(1 - \alpha)$ for all $\alpha \leq 1/2$. For each algorithm $A$ that is an AnyFit-algorithm it holds that $R_{A}^{\infty}(\text{size} \leq \alpha) \leq R_{NF}^{\infty}(\text{size} \leq \alpha)$.
Next-k-Fit (NkF) is a variant of Next-Fit, but instead of keeping only one bin open, the algorithm keeps the last $k$ bins open and chooses the first bin in which the item fits. Therefore, it is called a k-bounded space algorithm.[12] For $k \geq 2$, NkF delivers results that are improved compared to the results of NF; however, increasing $k$ to constant values larger than 2 improves the algorithm no further in its worst-case behavior. If algorithm $A$ is an AlmostAnyFit-algorithm and $m = \lfloor 1/\alpha \rfloor \geq 2$ then $R_{A}^{\infty}(\text{size} \leq \alpha) = R_{N2F}^{\infty}(\text{size} \leq \alpha) = 1 + 1/m$.[11]
First-Fit (FF) keeps all bins open, in the order in which they were opened. It attempts to place each new item into the first bin in which it fits. Its approximation ratio is $FF(L) \leq \lfloor 1.7 \cdot \mathrm{OPT}(L) \rfloor$, and there is a family of input lists $L$ for which $FF(L)$ matches this bound.[13]
Best-Fit (BF), too, keeps all bins open, but attempts to place each new item into the bin with the maximum load in which it fits. Its approximation ratio is identical to that of FF, that is: $BF(L) \leq \lfloor 1.7 \cdot \mathrm{OPT}(L) \rfloor$, and there is a family of input lists $L$ for which $BF(L)$ matches this bound.[14]
Worst-Fit (WF) attempts to place each new item into the bin with the minimum load. It can behave as badly as Next-Fit, and will do so on the worst-case list for Next-Fit. Furthermore, it holds that $R_{WF}^{\infty}(\text{size} \leq \alpha) = R_{NF}^{\infty}(\text{size} \leq \alpha)$. Since WF is an AnyFit-algorithm, there exists an AnyFit-algorithm $A$ such that $R_{A}^{\infty}(\alpha) < R_{WF}^{\infty}(\alpha)$.[11]
Almost Worst-Fit (AWF) attempts to place each new item inside the second most empty open bin (or the emptiest bin if there are two such bins). If it does not fit, it tries the most empty one. It has an asymptotic worst-case ratio of 17/10.[11]
In order to generalize these results, Johnson introduced two classes of online heuristics called any-fit algorithm and almost-any-fit algorithm:[4]:470
In an AnyFit (AF) algorithm, if the current nonempty bins are $B_1, \dots, B_j$, then the current item will not be packed into $B_{j+1}$ unless it does not fit in any of $B_1, \dots, B_j$. The FF, WF, BF and AWF algorithms satisfy this condition. Johnson proved that, for any AnyFit algorithm $A$ and any $\alpha$:

$R_{FF}^{\infty}(\alpha) \leq R_{A}^{\infty}(\alpha) \leq R_{WF}^{\infty}(\alpha)$.
In an AlmostAnyFit (AAF) algorithm, if the current nonempty bins are $B_1, \dots, B_j$, and of these bins, $B_k$ is the unique bin with the smallest load, then the current item will not be packed into $B_k$ unless it does not fit into any of the bins to its left. The FF, BF and AWF algorithms satisfy this condition, but WF does not. Johnson proved that, for any AAF algorithm $A$ and any $\alpha$:

$R_{A}^{\infty}(\alpha) = R_{FF}^{\infty}(\alpha)$.

In particular: $R_{FF}^{\infty} = 1.7$.
Refined algorithms
Better approximation ratios are possible with heuristics that are not AnyFit. These heuristics usually keep several classes of open bins, devoted to items of different size ranges (see the linked pages for more information):
Refined-first-fit bin packing (RFF) partitions the item sizes into four ranges: $(\tfrac{1}{2}, 1]$, $(\tfrac{2}{5}, \tfrac{1}{2}]$, $(\tfrac{1}{3}, \tfrac{2}{5}]$, and $(0, \tfrac{1}{3}]$. Similarly, the bins are categorized into four classes. The next item is first assigned to its corresponding class; inside that class, it is assigned to a bin using first-fit. Note that this algorithm is not an Any-Fit algorithm, since it may open a new bin despite the fact that the current item fits inside an open bin. This algorithm was first presented by Andrew Chi-Chih Yao,[15] who proved that it has an approximation guarantee of $RFF(L) \leq (5/3) \cdot \mathrm{OPT}(L) + 5$ and presented a family of lists $L$ with $RFF(L) = (5/3) \cdot \mathrm{OPT}(L) + \tfrac{1}{3}$ for $\mathrm{OPT}(L) = 6k + 1$.
Harmonic-k partitions the interval of sizes $(0, 1]$ based on a harmonic progression into $k$ pieces $I_j := (\tfrac{1}{j+1}, \tfrac{1}{j}]$ for $1 \leq j < k$ and $I_k := (0, \tfrac{1}{k}]$, such that $\bigcup_{j=1}^{k} I_j = (0, 1]$. This algorithm was first described by Lee and Lee.[16] It has a time complexity of $\mathcal{O}(|L| \log |L|)$, and at each step there are at most $k$ open bins that can potentially be used to place items, i.e., it is a k-bounded space algorithm. For $k \to \infty$, its approximation ratio satisfies $R_{Hk}^{\infty} \approx 1.6910$, and it is asymptotically tight. (A sketch of the size classification follows this list.)
Refined-harmonic combines ideas from Harmonic-k with ideas from Refined-First-Fit. It places items larger than $\tfrac{1}{3}$ similarly to Refined-First-Fit, while smaller items are placed using Harmonic-k. The intuition for this strategy is to reduce the huge waste in bins containing pieces that are just larger than $\tfrac{1}{2}$. This algorithm was first described by Lee and Lee.[16] They proved that for $k = 20$ it holds that $R_{RH}^{\infty} \leq 373/228 \approx 1.636$.
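The Harmonic-k size classification mentioned above can be sketched as follows (a hypothetical helper, not Lee and Lee's original pseudocode; the algorithm then packs class-j items, whose sizes lie in (1/(j+1), 1/j], j to a bin, and packs the smallest class with next-fit):

def harmonic_class(size, k):
    # Return j such that size lies in (1/(j+1), 1/j], or k if size <= 1/k.
    for j in range(1, k):
        if size > 1.0 / (j + 1):
            return j
    return k

Since each of the k classes keeps at most one bin open at a time, the algorithm is k-bounded-space.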
General lower bounds for online algorithms
Yao[15] proved in 1980 that there can be no online algorithm with an asymptotic competitive ratio smaller than $\tfrac{3}{2}$. Brown[17] and Liang[18] improved this bound to 1.53635. Afterward, this bound was improved to 1.54014 by van Vliet.[19] In 2012, this lower bound was again improved by Békési and Galambos[20] to $\tfrac{248}{161} \approx 1.54037$.
Offline algorithms
In the offline version of bin packing, the algorithm can see all the items before starting to place them into bins. This allows it to attain improved approximation ratios.
Multiplicative approximation
The simplest technique used by offline approximation schemes is the following:
Order the input list by descending size;
Run an online algorithm on the ordered list.
Johnson[11] proved that any AnyFit algorithm $A$ that runs on a list ordered by descending size has an asymptotic approximation ratio of

$1.22 \leq R_{A}^{\infty} \leq 1.25$.
Some methods in this family are (see the linked pages for more information):
First-fit-decreasing (FFD) orders the items by descending size, then calls First-Fit. Its approximation ratio is $FFD(I) \leq \tfrac{11}{9}\mathrm{OPT}(I) + \tfrac{6}{9}$, and this is tight.[23]
Next-fit-decreasing (NFD) orders the items by descending size, then calls Next-Fit. Its approximation ratio is slightly less than 1.7 in the worst case.[24] It has also been analyzed probabilistically.[25] Next-Fit packs a list and its reverse into the same number of bins; therefore, Next-Fit-Increasing has the same performance as Next-Fit-Decreasing.[26]
Modified first-fit-decreasing (MFFD)[27] improves on FFD for items larger than half a bin by classifying items by size into four classes: large, medium, small, and tiny, corresponding to items with size > 1/2 bin, > 1/3 bin, > 1/6 bin, and smaller items, respectively. Its approximation guarantee is $MFFD(I) \leq \tfrac{71}{60}\mathrm{OPT}(I) + 1$.[28]
Fernandez de la Vega and Lueker[29] presented a PTAS for bin packing. For every $\varepsilon > 0$, their algorithm finds a solution with size at most $(1 + \varepsilon)\mathrm{OPT} + 1$ and runs in time $\mathcal{O}(n \log n) + \mathcal{O}_{\varepsilon}(1)$, where $\mathcal{O}_{\varepsilon}(1)$ denotes a function only dependent on $1/\varepsilon$. For this algorithm, they invented the method of adaptive input rounding: the input numbers are grouped and rounded up to the value of the maximum in each group. This yields an instance with a small number of different sizes, which can be solved exactly using the configuration linear program.[30]
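The rounding step can be sketched as follows (a simplified illustration only: the full scheme also chooses the number of groups as a function of ε and handles the largest group and the small items separately):

def linear_grouping(sizes, num_groups):
    # Sort descending, cut into num_groups contiguous groups of (nearly)
    # equal cardinality, and round each size up to its group's maximum.
    # The rounded instance has at most num_groups distinct sizes.
    sizes = sorted(sizes, reverse=True)
    group_len = -(-len(sizes) // num_groups)   # ceiling division
    rounded = []
    for start in range(0, len(sizes), group_len):
        group = sizes[start:start + group_len]
        rounded += [group[0]] * len(group)     # group[0] is the group maximum
    return rounded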
Additive approximation
The Karmarkar-Karp bin packing algorithm finds a solution with size at most $\mathrm{OPT} + \mathcal{O}(\log^2 \mathrm{OPT})$, and runs in time polynomial in $n$ (the polynomial has a high degree, at least 8).
Rothvoss[31] presented an algorithm that generates a solution with at most $\mathrm{OPT} + \mathcal{O}(\log \mathrm{OPT} \cdot \log \log \mathrm{OPT})$ bins.
Hoberg and Rothvoss[32] improved this algorithm to generate a solution with at most $\mathrm{OPT} + \mathcal{O}(\log \mathrm{OPT})$ bins. The algorithm is randomized, and its running time is polynomial in $n$.
Exact algorithms
Martello and Toth[34] developed an exact algorithm for the 1-dimensional bin-packing problem, called MTP. A faster alternative is the Bin Completion algorithm proposed by Korf in 2002[35] and later improved.[36]
A further improvement was presented by Schreiber and Korf in 2013.[37] The new Improved Bin Completion algorithm is shown to be up to five orders of magnitude faster than Bin Completion on non-trivial problems with 100 items, and outperforms the BCP (branch-and-cut-and-price) algorithm by Belov and Scheithauer on problems that have fewer than 20 bins as the optimal solution. Which algorithm performs best depends on problem properties like the number of items, the optimal number of bins, unused space in the optimal solution and value precision.
Small number of different sizes
A special case of bin packing arises when there is a small number d of different item sizes; there can be many items of each size. This case is also called high-multiplicity bin packing, and it admits more efficient algorithms than the general problem.
Bin-packing with fragmentation
Bin-packing with fragmentation or fragmentable object bin-packing is a variant of the bin packing problem in which it is allowed to break items into parts and put each part separately into a different bin. Breaking items into parts may allow for improving the overall performance, for example, by minimizing the total number of bins. Moreover, the computational problem of finding an optimal schedule may become easier, as some of the optimization variables become continuous. On the other hand, breaking items apart might be costly. The problem was first introduced by Mandal, Chakrabary and Ghose.[38]
Variants
The problem has two main variants.
In the first variant, called bin-packing with size-increasing fragmentation (BP-SIF), each item may be fragmented; overhead units are added to the size of every fragment.
In the second variant, called bin-packing with size-preserving fragmentation (BP-SPF), each item has a size and a cost; fragmenting an item increases its cost but does not change its size.
Computational complexity
Mandal, Chakrabary and Ghose[38] proved that BP-SPF is NP-hard.
Menakerman and Rom[39] showed that BP-SIF and BP-SPF are both strongly NP-hard. Despite the hardness, they present several algorithms and investigate their performance. Their algorithms build on classic bin-packing algorithms, such as next-fit and first-fit decreasing.
Bertazzi, Golden and Wang[40] introduced a variant of BP-SIF with a split rule: an item may be split in only one way, according to its size. This is useful, for example, in the vehicle routing problem. In their paper, they provide the worst-case performance bound of the variant.
Shachnai, Tamir and Yehezkeli[41] developed approximation schemes for BP-SIF and BP-SPF; a dual PTAS (a PTAS for the dual version of the problem), an asymptotic PTAS called APTAS, and a dual asymptotic FPTAS called AFPTAS for both versions.
Ekici[42] introduced a variant of BP-SPF in which some items are in conflict, and it is forbidden to pack fragments of conflicted items into the same bin. They proved that this variant, too, is NP-hard.
Cassazza and Ceselli[43] introduced a variant with no cost and no overhead, in which the number of bins is fixed and the number of fragmentations should be minimized. They present mathematical-programming algorithms for both exact and approximate solutions.
Related problems
The problem of fractional knapsack with penalties was introduced by Malaguti, Monaci, Paronuzzi and Pferschy.[44] They developed an FPTAS and a dynamic program for the problem, and presented an extensive computational study comparing the performance of their models. See also: Fractional job scheduling.
Performance with divisible item sizes
An important special case of bin packing is that the item sizes form a divisible sequence (also called factored). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. If the item sizes are divisible, then some of the heuristic algorithms for bin packing find an optimal solution.[45]
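For instance, the divisibility condition can be checked as follows, under the definition that each distinct size must divide the next larger one:

def is_divisible_sequence(sizes):
    # True if every distinct size divides the next larger distinct size.
    distinct = sorted(set(sizes))
    return all(b % a == 0 for a, b in zip(distinct, distinct[1:]))

is_divisible_sequence([1, 2, 2, 4, 8])   # True: powers of 2
is_divisible_sequence([2, 3, 4])         # False: 2 does not divide 3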
Cardinality constraints on the bins
There is a variant of bin packing in which there are cardinality constraints on the bins: each bin can contain at most k items, for some fixed integer k.
Krause, Shen and Schwetman[46] introduce this problem as a variant of optimal job scheduling: a computer has $k$ processors. There are $n$ jobs that take unit time (1), but have different memory requirements. Each time-unit is considered a single bin. The goal is to use as few bins (= time units) as possible, while ensuring that in each bin, at most $k$ jobs run. They present several heuristic algorithms that find a solution with at most $2 \cdot \mathrm{OPT}$ bins.
Kellerer and Pferschy[47] present an algorithm with run-time $\mathcal{O}(n^2 \log n)$ that finds a solution with at most $\lceil \tfrac{3}{2} \mathrm{OPT} \rceil$ bins. Their algorithm performs a binary search for OPT: for every searched value $m$, it tries to pack the items into $\tfrac{3m}{2}$ bins.
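A straightforward adaptation of first-fit to the cardinality constraint tracks an item count per bin (an illustrative sketch, not the algorithm of either paper):

def first_fit_cardinality(items, k, capacity=1.0):
    # bins[i] is a (load, count) pair; a bin accepts an item only if
    # both the capacity and the cardinality constraint are respected.
    bins = []
    for size in items:
        for i, (load, count) in enumerate(bins):
            if load + size <= capacity and count < k:
                bins[i] = (load + size, count + 1)
                break
        else:
            bins.append((size, 1))
    return len(bins)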
Non-additive functions
There are various ways to extend the bin-packing model to more general cost and load functions:
Anily, Bramel and Simchi-Levi[48] study a setting where the cost of a bin is a concave function of the number of items in the bin. The objective is to minimize the total cost rather than the number of bins. They show that next-fit-increasing bin packing attains an absolute worst-case approximation ratio of at most 7/4, and an asymptotic worst-case ratio of 1.691 for any concave and monotone cost function.
Cohen, Keller, Mirrokni and Zadimoghaddam[49] study a setting where the size of the items is not known in advance, but it is a random variable. This is particularly common in cloud computing environments. While there is an upper bound on the amount of resources a certain user needs, most users use much less than the capacity. Therefore, the cloud manager may gain a lot by slight overcommitment. This induces a variant of bin packing with chance constraints: the probability that the sum of sizes in each bin is at most B should be at least p, where p is a fixed constant (standard bin packing corresponds to p=1). They show that, under mild assumptions, this problem is equivalent to a submodular bin packing problem, in which the "load" in each bin is not equal to the sum of items, but to a certain submodular function of it.
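For intuition, under a simplifying assumption of independent Gaussian item sizes (an assumption made here for illustration only; the cited paper instead works with the submodular reformulation), the chance constraint for a single bin reduces to a deterministic check:

import math

def chance_feasible(means, stds, B, z_p):
    # P(sum of sizes <= B) >= p holds iff the mean load plus z_p standard
    # deviations of the load fits in B, where z_p is the standard normal
    # quantile of p (e.g. z_p ~= 1.645 for p = 0.95).
    mean = sum(means)
    std = math.sqrt(sum(s * s for s in stds))
    return mean + z_p * std <= B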
Related problems
In the bin packing problem, the size of the bins is fixed and their number can be enlarged (but should be as small as possible).
In contrast, in the multiway number partitioning problem, the number of bins is fixed and their size can be enlarged. The objective is to find a partition in which the bin sizes are as nearly equal as possible (in the variant called the multiprocessor scheduling problem or minimum makespan problem, the goal is specifically to minimize the size of the largest bin).
In the inverse bin packing problem,[50] both the number of bins and their sizes are fixed, but the item sizes can be changed. The objective is to achieve the minimum perturbation to the item size vector so that all the items can be packed into the prescribed number of bins.
In the maximum resource bin packing problem,[51] the goal is to maximize the number of bins used, such that, for some ordering of the bins, no item in a later bin fits in an earlier bin. In a dual problem, the number of bins is fixed, and the goal is to minimize the total number or the total size of items placed into the bins, such that no remaining item fits into an unfilled bin.
In the bin covering problem, the bin size is bounded from below: the goal is to maximize the number of bins used such that the total size in each bin is at least a given threshold.
In the fair indivisible chore allocation problem (a variant of fair item allocation), the items represent chores, and there are several people, each of whom attributes a different difficulty-value to each chore. The goal is to allocate to each person a set of chores with an upper bound on its total difficulty-value (thus, each person corresponds to a bin). Many techniques from bin packing are used in this problem too.[52]
In the guillotine cutting problem, both the items and the "bins" are two-dimensional rectangles rather than one-dimensional numbers, and the items have to be cut from the bin using end-to-end cuts.
In the selfish bin packing problem, each item is a player who wants to minimize its cost.[53]
There is also a variant of bin packing in which the cost that should be minimized is not the number of bins, but rather a certain concave function of the number of items in each bin.[48]
Other variants are two-dimensional bin packing,[54] three-dimensional bin packing,[55] and bin packing with delivery.[56]
Resources
BPPLIB - a library of surveys, codes, benchmarks, generators, solvers, and bibliography.
↑ Gonzalez, Teofilo F. (23 May 2018). Handbook of Approximation Algorithms and Metaheuristics. Volume 2: Contemporary and Emerging Applications. Taylor & Francis. ISBN 9781498770156.
↑ Liang, Frank M. (1980). "A lower bound for on-line bin packing". Information Processing Letters. 10 (2): 76–79. doi:10.1016/S0020-0190(80)90077-0.
↑ van Vliet, André (1992). "An improved lower bound for on-line bin packing algorithms". Information Processing Letters. 43 (5): 277–284. doi:10.1016/0020-0190(92)90223-I.
↑ Dósa, György (2007). "The Tight Bound of First Fit Decreasing Bin-Packing Algorithm Is FFD(I) ≤ 11/9·OPT(I) + 6/9". Combinatorics, Algorithms, Probabilistic and Experimental Methodologies. ESCAPE. doi:10.1007/978-3-540-74450-4_1.
↑ Hoberg, Rebecca; Rothvoss, Thomas (2017). "A Logarithmic Additive Integrality Gap for Bin Packing". Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics. pp. 2616–2625. arXiv:1503.08796. doi:10.1137/1.9781611974782.172. ISBN 978-1-61197-478-2. S2CID 1647463.
↑ Menakerman, Nir; Rom, Raphael (2001). "Bin Packing with Item Fragmentation". Algorithms and Data Structures, 7th International Workshop, WADS 2001, Providence, RI, USA, August 8–10, 2001, Proceedings.
↑ Lodi, A.; Martello, S.; Monaci, M.; Vigo, D. (2010). "Two-Dimensional Bin Packing Problems". In V. Th. Paschos (Ed.), Paradigms of Combinatorial Optimization. Wiley/ISTE. pp. 107–129.