Harmonic bin packing

Harmonic bin-packing is a family of online algorithms for bin packing. The input to such an algorithm is a list of items of different sizes. The output is a packing - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem.

The harmonic bin-packing algorithms rely on partitioning the items into categories based on their sizes, following a harmonic progression. There are several variants of this idea.

Harmonic-k

The Harmonic-k algorithm partitions the interval of item sizes (0, 1] harmonically into k pieces I_j := (1/(j+1), 1/j] for 1 ≤ j < k and I_k := (0, 1/k], so that the pieces together cover (0, 1]. An item is called an I_j-item if its size lies in I_j.

The algorithm divides the set of empty bins into k infinite classes B_j for 1 ≤ j ≤ k, one bin type for each item type. A bin of type B_j is only used to pack items of type I_j. Each bin of type B_j for 1 ≤ j < k can contain exactly j I_j-items (each such item has size greater than 1/(j+1), so at most j of them fit into a bin). The algorithm now acts as follows:

- If the next item is an I_j-item for some j < k, it is placed into the currently open B_j-bin; if that bin already contains j items, or no open B_j-bin exists, the open bin is closed and a new B_j-bin is opened for the item.
- If the next item is an I_k-item, it is placed into the open B_k-bin by Next-Fit: if it does not fit, that bin is closed and a new B_k-bin is opened.

This algorithm was first described by Lee and Lee. [1] It has a time complexity of O(n log k), where n is the number of input items. At each step, there are at most k open bins that can potentially be used to place items, i.e., it is a k-bounded space algorithm.
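To make the bookkeeping concrete, the following Python sketch shows one way to implement the Harmonic-k rule. It is illustrative only: the function name harmonic_k and the list-of-lists bin representation are choices made here, not taken from Lee and Lee's paper.

    from math import floor

    def harmonic_k(sizes, k):
        """Pack item sizes in (0, 1] into unit-capacity bins with the Harmonic-k rule,
        keeping one open bin per size class (so at most k open bins at any time)."""
        closed = []                                # finished bins (lists of item sizes)
        open_bins = [[] for _ in range(k + 1)]     # open_bins[j] is the open bin of class j; index 0 unused

        for s in sizes:
            # Classify: I_j = (1/(j+1), 1/j] for j < k, and I_k = (0, 1/k].
            j = min(floor(1 / s), k)
            bin_j = open_bins[j]
            if j < k:
                # A class-j bin holds exactly j items.
                if len(bin_j) == j:
                    closed.append(bin_j)
                    bin_j = open_bins[j] = []
                bin_j.append(s)
            else:
                # Small items: Next-Fit within class k.
                if sum(bin_j) + s > 1:
                    closed.append(bin_j)
                    bin_j = open_bins[j] = []
                bin_j.append(s)

        return closed + [b for b in open_bins[1:] if b]

For instance, with k = 4 the items [0.6, 0.4, 0.3, 0.3, 0.2] are placed into four bins, [0.6], [0.4], [0.3, 0.3] and [0.2], since each size class receives its own bins.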

Lee and Lee also studied the asymptotic approximation ratio of Harmonic-k. They defined the sequence σ_1 := 1, σ_{i+1} := σ_i(σ_i + 1) for i ≥ 1 (that is, 1, 2, 6, 42, 1806, ...), and bounded the ratio of Harmonic-k in terms of this sequence. For k → ∞ the ratio tends to Π_∞ := Σ_{i ≥ 1} 1/σ_i = 1 + 1/2 + 1/6 + 1/42 + ... ≈ 1.6910. Additionally, they presented a family of worst-case examples showing that this bound is asymptotically tight.
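The limiting value can be checked numerically. The short sketch below (illustrative code, not from the paper) prints the sequence σ_i and the partial sums of Σ 1/σ_i, which settle at about 1.69103 after a handful of terms:

    def harmonic_limit(terms=6):
        """Print sigma_1..sigma_terms and the partial sums of sum(1/sigma_i)."""
        sigma, total = 1, 0.0
        for i in range(1, terms + 1):
            total += 1.0 / sigma
            print(f"sigma_{i} = {sigma}, partial sum = {total:.6f}")
            sigma = sigma * (sigma + 1)

    harmonic_limit()
    # sigma values: 1, 2, 6, 42, 1806, 3263442, ...
    # partial sums approach 1.691030, the limiting asymptotic ratio of Harmonic-k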

Refined-Harmonic (RH)

The Refined-Harmonic algorithm combines ideas from the Harmonic-k algorithm with ideas from Refined-First-Fit. It places items larger than 1/3 similarly to Refined-First-Fit, while the smaller items are placed using Harmonic-k. The intuition behind this strategy is to reduce the large waste in bins containing pieces that are just larger than 1/2.

The algorithm classifies the items with regard to the following intervals: I_1 := (59/96, 1], I_a := (1/2, 59/96], I_2 := (37/96, 1/2], I_b := (1/3, 37/96], I_j := (1/(j+1), 1/j] for j in {3, ..., k-1}, and I_k := (0, 1/k]. The algorithm places the I_j-items as in Harmonic-k, while it follows a different strategy for the items in I_a and I_b. There are four possibilities to pack I_a-items and I_b-items into bins: a bin with a single I_a-item, a bin with a single I_b-item, a bin with one I_a-item and one I_b-item (which always fit together, since 59/96 + 37/96 = 1), and a bin with two I_b-items.
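As an illustration, a small classification helper might look as follows. This is hypothetical code, assuming the breakpoints 59/96 and 37/96 listed above; the function name classify_rh is made up for this sketch.

    def classify_rh(size, k):
        """Return the Refined-Harmonic class of an item size in (0, 1]."""
        if size > 59/96:
            return "I_1"
        if size > 1/2:
            return "I_a"
        if size > 37/96:
            return "I_2"
        if size > 1/3:
            return "I_b"
        for j in range(3, k):          # I_j = (1/(j+1), 1/j] for j = 3, ..., k-1
            if size > 1/(j + 1):
                return f"I_{j}"
        return f"I_{k}"                # I_k = (0, 1/k]

For example, classify_rh(0.55, 20) returns "I_a", while classify_rh(0.38, 20) returns "I_b".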

An I_b'-bin denotes a bin that is designated to contain a second I_b-item. The algorithm uses the counters N_a, N_b, N_ab, N_bb, and N_b' to count the numbers of the corresponding bins in the solution: bins containing only one I_a-item, bins containing only one I_b-item (and not designated as I_b'-bins), bins containing one I_a-item and one I_b-item, bins containing two I_b-items, and the currently designated I_b'-bin (N_b' is either 0 or 1), respectively. Furthermore, N_c = N_b + N_ab.

Algorithm Refined-Harmonic-k for a list L = (i_1, ..., i_n):

    1. N_a = N_b = N_ab = N_bb = N_b' = N_c = 0
    2. For each item i_j: if i_j is an I_p-piece for some p in {1, ..., k}
           then use algorithm Harmonic-k to pack it
    3.     else if i_j is an I_a-item
               then if N_b != 0
                   then pack i_j into any I_b-bin; N_b--; N_ab++
                   else place i_j in a new (empty) bin; N_a++
    4.         else if i_j is an I_b-item
                   then if N_b' = 1
                       then place i_j into the I_b'-bin; N_b' = 0; N_bb++
    5.                 else if N_bb <= 3N_c
                           then place i_j in a new bin and designate it as an I_b'-bin; N_b' = 1
                           else if N_a != 0
                               then place i_j into any I_a-bin; N_a--; N_ab++; N_c++
                               else place i_j in a new bin; N_b++; N_c++

This algorithm was first described by Lee and Lee. [1] They proved that for k = 20 the asymptotic approximation ratio of Refined-Harmonic is at most 373/228 ≈ 1.636.
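The counter logic for I_a- and I_b-items translates almost directly into code. The sketch below is an illustrative rendering of the pseudocode above, not the authors' implementation: it assumes the items arrive already classified as I_a or I_b (for example by a helper such as classify_rh), represents bins as lists, and omits the Harmonic-k fallback for the remaining classes.

    def refined_harmonic_ab(items):
        """Pack a stream of I_a- and I_b-items, given as ('a', size) or ('b', size),
        following the counter logic of Refined-Harmonic; returns the list of bins."""
        bins = []                 # every bin ever opened, each a list of (label, size) pairs
        a_bins, b_bins = [], []   # open bins holding a single I_a- / I_b-item (N_a, N_b)
        b_prime = None            # the currently designated I_b'-bin, if any (N_b')
        N_ab = N_bb = N_c = 0     # mirror the counters in the pseudocode

        for label, size in items:
            if label == 'a':
                if b_bins:                        # N_b != 0: pair with a waiting I_b-item
                    b_bins.pop().append((label, size)); N_ab += 1
                else:                             # open a new I_a-bin
                    new_bin = [(label, size)]
                    bins.append(new_bin); a_bins.append(new_bin)
            else:  # label == 'b'
                if b_prime is not None:           # N_b' = 1: complete the I_b'-bin
                    b_prime.append((label, size)); b_prime = None; N_bb += 1
                elif N_bb <= 3 * N_c:             # start a new designated I_b'-bin
                    b_prime = [(label, size)]
                    bins.append(b_prime)
                elif a_bins:                      # N_a != 0: pair with a waiting I_a-item
                    a_bins.pop().append((label, size)); N_ab += 1; N_c += 1
                else:                             # open a new I_b-bin
                    new_bin = [(label, size)]
                    bins.append(new_bin); b_bins.append(new_bin)
                    N_c += 1
        return bins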

Other variants

Modified Harmonic (MH) has asymptotic ratio at most 538/333 ≈ 1.61562. [2]

Modified Harmonic 2 (MH2) has asymptotic ratio approximately 1.61217. [2]

Harmonic + 1 (H+1) has asymptotic ratio at least 1.59217. [3]

Harmonic++ (H++) has asymptotic ratio at most 1.58889 [3] and at least 1.58333. [3]

References

  1. Lee, C. C.; Lee, D. T. (July 1985). "A simple on-line bin-packing algorithm". Journal of the ACM. 32 (3): 562–572. doi:10.1145/3828.3833. S2CID 15441740.
  2. Ramanan, Prakash; Brown, Donna J.; Lee, C. C.; Lee, D. T. (September 1989). "On-line bin packing in linear time". Journal of Algorithms. 10 (3): 305–326. doi:10.1016/0196-6774(89)90031-X. hdl:2142/74206.
  3. Seiden, Steven S. (2002). "On the online bin packing problem". Journal of the ACM. 49 (5): 640–671. doi:10.1145/585265.585269. S2CID 14164016.