Longest-processing-time-first scheduling

Longest-processing-time-first (LPT) is a greedy algorithm for job scheduling. The input to the algorithm is a set of jobs, each of which has a specific processing-time. There is also a number m specifying the number of machines that can process the jobs. The LPT algorithm works as follows:

  1. Order the jobs by descending order of their processing-time, such that the job with the longest processing time is first.
  2. Schedule each job in this sequence into a machine in which the current load (= total processing-time of scheduled jobs) is smallest.

Step 2 of the algorithm is essentially the list-scheduling (LS) algorithm. The difference is that LS loops over the jobs in an arbitrary order, while LPT pre-orders them by descending processing time.
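The two steps above can be sketched in a few lines of Python (a minimal illustration; the function name and list-based representation are ours, not part of the original analyses):

```python
def lpt(jobs, m):
    """Greedy LPT: sort jobs by descending processing time, then assign
    each job to the machine with the smallest current load."""
    machines = [[] for _ in range(m)]   # jobs assigned to each machine
    loads = [0] * m                     # current total processing time per machine
    for job in sorted(jobs, reverse=True):
        i = loads.index(min(loads))     # machine with the smallest current load
        machines[i].append(job)
        loads[i] += job
    return machines, loads
```

On the instance S = {4, 5, 6, 7, 8} with m = 2 (used in the Examples section), this produces the bundles {8, 5, 4} and {7, 6}, with loads 17 and 13.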

LPT was first analyzed by Ronald Graham in the 1960s in the context of the identical-machines scheduling problem. [1] Later, it was applied to many other variants of the problem.

LPT can also be described in a more abstract way, as an algorithm for multiway number partitioning. The input is a set S of numbers, and a positive integer m; the output is a partition of S into m subsets. LPT orders the input from largest to smallest, and puts each input in turn into the part with the smallest sum so far.

Examples

If the input set is S = {4, 5, 6, 7, 8} and m = 2, then the resulting partition is {8, 5, 4}, {7, 6}. If m = 3, then the resulting 3-way partition is {8}, {7, 4}, {6, 5}.

Properties

LPT might not find the optimal partition. For example, in the above instance with m = 2, the optimal partition is {8,7}, {6,5,4}, where both sums are equal to 15, while LPT attains a largest sum of 17. However, its suboptimality is bounded both in the worst case and in the average case; see the performance guarantees below.

The running time of LPT is dominated by the sorting, which takes O(n log n) time, where n is the number of inputs.
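Since step 2 only ever needs the machine with the smallest load, the assignment loop can also be run with a priority queue over machine loads, keeping the overall O(n log n) bound. A sketch (names are illustrative):

```python
import heapq

def lpt_heap(jobs, m):
    """LPT using a min-heap over machine loads: O(n log n) for the sort
    plus O(n log m) for the assignments."""
    heap = [(0, i) for i in range(m)]   # (load, machine index)
    heapq.heapify(heap)
    assignment = [None] * len(jobs)
    # indices of jobs in descending order of processing time
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j])
    for j in order:
        load, i = heapq.heappop(heap)   # least-loaded machine
        assignment[j] = i
        heapq.heappush(heap, (load + jobs[j], i))
    return assignment
```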

LPT is monotone in the sense that, if one of the input numbers increases, the objective function (the largest sum or the smallest sum of a subset in the output) weakly increases. [2] This is in contrast to the Multifit algorithm.

Performance guarantees: identical machines

When used for identical-machines scheduling, LPT attains the following approximation ratios.

Worst-case maximum sum

In the worst case, the largest sum in the greedy partition is at most 4/3 times the optimal (minimum) largest sum. [3] [note 1]

A more detailed analysis yields a factor of 4/3 - 1/(3m) times the optimal (minimum) largest sum [1] [4] (for example, when m = 2 this ratio is 7/6). [note 2]

The factor 4/3 - 1/(3m) is tight. Suppose there are 2m+1 inputs (where m is even): two inputs of each size m, m+1, ..., 2m-1, and one additional input of size m. Then the greedy algorithm returns:

  • 2m-1, m
  • 2m-1, m
  • 2m-2, m+1
  • 2m-2, m+1
  • ...
  • 3m/2, 3m/2-1
  • 3m/2, 3m/2-1, m

with a maximum of 4m-1, but the optimal partition is:

  • 2m-1, m+1
  • 2m-1, m+1
  • 2m-2, m+2
  • 2m-2, m+2
  • ...
  • 3m/2, 3m/2
  • m, m, m

with a maximum of 3m.
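This tight family can be checked numerically. The sketch below (helper names are ours) builds an instance with two jobs of each size m, ..., 2m-1 plus one extra job of size m, and verifies that LPT's makespan is 4m-1, while the total sum 3m² leaves room for a perfectly balanced optimum of 3m:

```python
def lpt_makespan(jobs, m):
    # greedy LPT: largest job first, always to the least-loaded machine
    loads = [0] * m
    for job in sorted(jobs, reverse=True):
        loads[loads.index(min(loads))] += job
    return max(loads)

def tight_instance(m):
    # two jobs of each size m, m+1, ..., 2m-1, plus one extra job of size m
    return [s for s in range(m, 2 * m) for _ in (0, 1)] + [m]

m = 4
jobs = tight_instance(m)
assert lpt_makespan(jobs, m) == 4 * m - 1   # greedy makespan
assert sum(jobs) == 3 * m * m               # so the optimal makespan can be 3m
```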

Input consideration

An even more detailed analysis takes into account the number of inputs in the max-sum part.

  1. In each part of the greedy partition, the j-th highest number is at most OPT/j, where OPT is the optimal (minimum) largest sum. [5]
  2. Suppose that, in the greedy part P with the max-sum, there are L inputs. Then, the approximation ratio of the greedy algorithm is at most 1 + 1/L - 1/(Lm). [4] It is tight for L≥3 (for L=3, we get the general factor 4/3 - 1/(3m)). Proof. [5] Denote the numbers in P by x1,...,xL. Before xL was inserted into P, the sum of P was smallest. Therefore, the average sum per part is at least x1+...+x(L-1) + xL/m. The optimal max-sum must be at least the average sum. In contrast, the greedy max-sum is x1+...+x(L-1)+xL. So the difference is at most (1-1/m)*xL, which by claim 1 is at most (1-1/m)*OPT/L. So the ratio is at most 1 + 1/L - 1/(Lm).

Worst-case minimum sum

In the worst case, the smallest sum in the returned partition is at least 3/4 times the optimal (maximum) smallest sum. [6]

Proof

The proof is by contradiction. We consider a minimal counterexample, that is, a counterexample with a smallest m and fewest input numbers. Denote the greedy partition by P1,...,Pm, and the optimal partition by Q1,...,Qm. Some properties of a minimal counterexample are:

  • The min-sum in the optimal partition is 4, and the min-sum in the greedy partition is less than 3 (this is just normalization - it is without loss of generality).
  • The max-sum in the greedy partition is more than 4 (since the total sum in both partitions is the same, and it is at least 4m).
  • If sum(Pi)≥3 for some greedy bin Pi, then Pi is not dominated by any optimal bin Qj. Proof: if Pi is dominated by Qj, then we can construct a smaller counterexample by decreasing m to m-1 and removing the items in Pi. The min-sum in the greedy partition remains less than 3. In the optimal partition, the items in Pi can be replaced by their dominating items in Qj, so the min-sum remains at least 4.
  • If sum(Pi)≥3 for some greedy bin Pi, then Pi contains at least two numbers. Proof: if Pi contains only one number x, then it is dominated by the optimal bin Qj which contains x.
  • All inputs are smaller than 3. Proof: suppose some input x is at least 3, and the greedy algorithm puts it in some Pi. Then, since there is always a bundle with sum less than 3, the greedy algorithm will not put any other input in Pi. So Pi has sum at least 3 but contains only one number, contradicting the previous lemma.
  • Every greedy bin Pi contains at most one input weakly-larger than 3/2. Proof: Let Pi be the first greedy bin which is assigned two such inputs. Since inputs are assigned in descending order, Pi is the first greedy bin assigned two inputs at all. This means that it must contain the smallest two inputs from among the largest m+1 inputs. Moreover, since the sum of these two items is at least 3/2+3/2=3, Pi is not assigned any other input. On the other hand, by the pigeonhole principle, there must be some optimal bin Qj that contains some two inputs from among the largest m+1 inputs; so Pi is dominated by Qj, contradicting an earlier lemma.
  • During the run of the greedy algorithm, the sum in every bin Pi becomes at least 8/3 before the sum of any bin exceeds 4. Proof: Let y be the first input added to some bin Pi which made its sum larger than 4. Before y was added, Pi had the smallest sum, which by assumption was smaller than 8/3; this means that y>4/3. Let T denote the set of all inputs from the first one down to y; all these inputs are larger than 4/3 too. Since Pi was smaller than 8/3, it contained exactly one item x from T. So now Pi contains exactly two items {x,y}, and keeps exactly these two items until the algorithm ends. Let k be the number of items from the first one down to x. We now derive a contradiction by counting the items in T in two ways.
    • First, consider the m optimal bins. If any such bin contains an item at least as large as x, then it cannot contain any other item of T, since otherwise it dominates {x,y}. Moreover, any such bin cannot contain three items from T, since the sum of any two of them is larger than 8/3, which is larger than x; and the third one is at least y, so the three together dominate {x,y}. Therefore, the number of items in T is at most 1*k + 2*(m-k) = 2m-k.
    • Now, consider the m greedy bins. When y is added to the bundle containing x, it is the bundle with the smallest sum. Therefore, all elements of T that are smaller than x must be in a greedy bin with at least one other item of T. The same is true for x and y. Therefore, the number of items in T is at least (k-1)+2*(m-k+1) = 2m-k+1 - contradiction.
  • We can assume, without loss of generality, that all inputs are either smaller than 1/3, or at least 1. Proof: Suppose some input x is in (1/3,1). We replace x with 1. This obviously does not decrease the optimal min-sum. We show that it does not change the greedy min-sum. We know that some greedy bundle Pi has a final sum larger than 4. Before the last input was added into Pi, its sum was smaller than 3; so Pi became larger than 4 when some input larger than 1 was added to it. By the previous lemma, at that point the sum of all other greedy bundles was at least 8/3. The algorithm arrives at x afterwards. Once the algorithm adds x to some bin Pj, the sum of Pj becomes at least 8/3+1/3=3, so no more items are added into Pj. So Pj contains only one input with size in [1/3,1). Once x is replaced with 1, it is still inserted into Pj, and its sum is still above 3. So the greedy min-sum does not change.
  • We can now partition the inputs into small (less than 1/3) and large (at least 1). The set of small items in Pi is denoted by Si. Note that, when the algorithm starts processing small items, the sum in all bundles is at least 8/3.

The proof that a minimal counterexample does not exist uses a weighting scheme. Each input x is assigned a weight w(x) according to its size and greedy bundle Pi:

  • If x is a large item:
    • If x is the single large item in Pi, then w(x)=8/3.
    • If Pi contains exactly two items {x,y} and both of them are large, and x>y, and sum(Pi)≥3, then w(x)=8/3.
    • Otherwise, w(x)=4/3.
  • If x is a small item:
    • if sum(Pi)≥3, then w(x) = 4x/(3 sum(Si)); so w(Si) = 4/3.
    • if sum(Pi)<3, then w(x) = 2x/(3 sum(Si)); so w(Si) = 2/3.

This weighting scheme has the following properties:

  • If x≥2, then w(x)=8/3. Proof: x is large. Suppose it is in Pi. If Pi contains another large item y, then x+y≥3 so there is no other item in Pi. Moreover, x>y since there is at most one item larger than 3/2. So w(x)=8/3.
  • If x<1/3, then w(x) > 2x. Proof: x is small. Suppose it is in Pi.
    • If sum(Pi)≥3 then, since sum(Pi) was smaller than 3 before x was added to it, it is now smaller than 10/3. But when the algorithm started processing small items, sum(Pi) was at least 8/3. This means that sum(Si) < 2/3, so w(x) = 4x/(3 sum(Si)) > 2x.
    • If sum(Pi)<3 then sum(Si) < 3-8/3=1/3, so w(x) = 2x/(3 sum(Si)) > 2x.
  • The weight of every greedy bin Pi is at most 4, and the weight of at least one greedy bin is at most 10/3. Proof:
    • If all inputs in Pi are large, then it contains either a single input with weight 8/3, two inputs with weights 8/3+4/3, or three inputs with weights 4/3+4/3+4/3.
    • If some inputs in Pi are small, then their total weight is at most 4/3. There are at most two large inputs, and their weights are either 8/3 or 4/3+4/3.
    • Finally, the weight of the greedy bin with sum smaller than 3 is at most 8/3 (if it has only large inputs) or 10/3 (if it has some small inputs).
  • The weight of every optimal bin Qj is at least 4. Proof:
    • If Qj contains only small items, then each of them satisfies w(x) > 2x, so w(Qj) > 2 sum(Qj) ≥ 8.
    • If Qj contains exactly one large item x, then it must contain some small items whose sum is at least 4-x and weight at least 8-2x. Then, either x<2 and the weight of small items is at least 8-4=4, or x in (2,3) and w(x)=8/3 and the weight of small items is at least 8-6=2. In both cases the total weight is at least 4.
  • If Qj contains exactly two large items x>y: if x≥2, then w(x)=8/3 and the total weight is at least 8/3+4/3=4. If x+y≤10/3, then the sum of small items in Qj must be at least 2/3, so the total weight is at least 4/3+4/3+2*(2/3)=4. Otherwise, x>5/3. So x was the first input in some greedy bin Pk. Let z be the second input added into Pk. If x+z≥3, then there are no more inputs in Pk, so w(x)=8/3 and we are done. Otherwise, x+z<3. Let v be the smallest input in some greedy bin whose sum exceeds 4. Since x<8/3, z must have been processed before v, so z≥v. Consider now any small item t in Qj, and suppose it is in some greedy bin Pi.
      • If sum(Pi)<3, then the fact that v was not put in Pi implies that v > 4-sum(large-items-in-Pi) > 1+sum(small-items-in-Pi). Therefore, 1+sum(Si)+x < v+x ≤ z+x < 3 and sum(Si) < 2-x. This means that 2*sum(Si) < 4-2x ≤ 4-x-y ≤ sum(small-items-in-Qj). So w(t) = 2t/(3sum(Si)) > 4t/(3sum(small-items-in-Qj)).
      • If sum(Pi)≥3, and sum(Si)≤1, then w(t)=4/3 and we are done. Since sum(Pi) was less than 3 before t was added into it, sum(Pi)<3+sum(Si)/2. The fact that v was not put in Pi implies that v > 4-sum(large-items-in-Pi) > 1+sum(small-items-in-Pi)/2. Similarly to the previous paragraph, w(t) > 4t/(3sum(small-items-in-Qj)).
      • Therefore, the total weight of all small items in Qj is at least 4/3, so the total weight of Qj is at least 4/3+4/3+4/3=4.
    • If Qj contains exactly three or more large items, then its total weight is at least 4/3+4/3+4/3=4.
  • The last two claims are contradictory: the former implies that the total weight of all inputs is at most 4(m-1)+10/3 = 4m-2/3, while the latter implies that the total weight of all inputs is at least 4m. Therefore, a minimal counterexample does not exist.

Upper bound on the ratio

A more sophisticated analysis shows that the smallest sum in the greedy partition is always at least (3m-1)/(4m-2) times the optimal (maximum) smallest sum (for example, when m=2 the ratio is 5/6). [7] [8]

Tightness and example

The above ratio is tight. [6]

Suppose there are 3m-1 inputs (where m is even). The first 2m inputs are: 2m-1, 2m-1, 2m-2, 2m-2, ..., m, m. The last m-1 inputs are all m. Then the greedy algorithm returns:

  • 2m-1, m, m
  • 2m-1, m, m
  • 2m-2, m+1, m
  • 2m-2, m+1, m
  • ...
  • 3m/2, 3m/2-1, m
  • 3m/2, 3m/2-1

with a minimum of 3m-1. But the optimal partition is:

  • 2m-1, 2m-1
  • 2m-2, m, m
  • 2m-2, m, m
  • 2m-3, m+1, m
  • 2m-3, m+1, m
  • ...
  • 3m/2, 3m/2-2, m
  • 3m/2, 3m/2-2, m
  • 3m/2-1, 3m/2-1, m

with a minimum of 4m-2.
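This construction, too, can be verified numerically (a sketch; helper names are ours):

```python
def lpt_min_load(jobs, m):
    # greedy LPT, returning the smallest machine load
    loads = [0] * m
    for job in sorted(jobs, reverse=True):
        loads[loads.index(min(loads))] += job
    return min(loads)

m = 4  # the construction assumes m is even
# two jobs of each size m, ..., 2m-1, plus m-1 extra jobs of size m
jobs = [s for s in range(m, 2 * m) for _ in (0, 1)] + [m] * (m - 1)
assert lpt_min_load(jobs, m) == 3 * m - 1   # the optimal minimum load is 4m-2
```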

Restricted LPT

There is a variant of LPT, called Restricted-LPT or RLPT, [9] in which the inputs are partitioned into subsets of size m called ranks (rank 1 contains the largest m inputs, rank 2 the next-largest m inputs, etc.). The inputs in each rank must be assigned to m different bins: rank 1 first, then rank 2, etc. The minimum sum in RLPT is at most the minimum sum in LPT. The approximation ratio of RLPT for maximizing the minimum sum is at most m.
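A sketch of RLPT under one plausible reading of this description (within each rank, the largest remaining job goes to the currently least-loaded machine; the last rank is padded with zero-size jobs; names are illustrative):

```python
def rlpt_min_load(jobs, m):
    """Restricted LPT: split the sorted jobs into ranks of size m; within each
    rank, assign the largest job to the least-loaded machine, and so on,
    so that each rank's jobs land on m different machines."""
    jobs = sorted(jobs, reverse=True)
    jobs += [0] * (-len(jobs) % m)      # pad the last rank with zero-size jobs
    loads = [0] * m
    for r in range(0, len(jobs), m):
        rank = jobs[r:r + m]
        # machines ordered from least loaded to most loaded
        order = sorted(range(m), key=lambda i: loads[i])
        for i, job in zip(order, rank):
            loads[i] += job             # largest job of the rank -> least-loaded
    return min(loads)
```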

Average-case maximum sum

In the average case, if the input numbers are distributed uniformly in [0,1], then the largest sum in an LPT schedule is asymptotically optimal: the ratio between it and the optimal largest sum converges to 1, and the expected difference between them tends to zero as the number of inputs grows. [10] [11] [12]

General objectives

Let Ci (for i between 1 and m) be the sum of subset i in a given partition. Instead of minimizing the objective function max(Ci), one can minimize the objective function max(f(Ci)), where f is any fixed function. Similarly, one can minimize the objective function sum(f(Ci)). Alon, Azar, Woeginger and Yadid [13] prove that, if f satisfies the following two conditions:

  1. A strong continuity condition called Condition F*: for every ε>0 there exists δ>0 such that, if |y-x|<δx, then |f(y)-f(x)|<εf(x).
  2. Convexity.

then the LPT rule has a finite approximation ratio for minimizing sum(f(Ci)).

Performance with divisible item sizes

An important special case is that the item sizes form a divisible sequence (also called factored). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. If the item sizes are divisible, and in addition, the largest item size divides the bin size, then LPT always finds a schedule that minimizes the maximum sum [14] :Thm.4 and maximizes the minimum sum. [14] :Thm.5

Adaptations to other settings

Besides the simple case of identical-machines scheduling, LPT has been adapted to more general settings.

Uniform machines

In uniform-machines scheduling, different machines may have different speeds. The LPT rule assigns each job to the machine on which its completion time will be earliest (that is, LPT may assign a job to a machine with a larger current load, if this machine is so fast that it would finish that job earlier than all other machines). [15]
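The rule described above can be sketched as follows (a minimal illustration; the function name and the representation of speeds are ours):

```python
def lpt_uniform(jobs, speeds):
    """LPT for uniform machines: each job goes to the machine where it would
    finish earliest, i.e. the machine minimizing (load + job) / speed."""
    m = len(speeds)
    loads = [0.0] * m                   # total processing requirement per machine
    for job in sorted(jobs, reverse=True):
        i = min(range(m), key=lambda k: (loads[k] + job) / speeds[k])
        loads[i] += job
    # completion time of each machine = its load divided by its speed
    return [loads[k] / speeds[k] for k in range(m)]
```

Note that a fast machine may receive a job even when its current load is larger, exactly as described above.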

Cardinality constraints

In the balanced partition problem, there are constraints on the number of jobs that can be assigned to each machine. A simple constraint is that each machine can process at most c jobs. The LPT rule assigns each job to the machine with the smallest load from among those with fewer than c jobs. This rule is called modified LPT or MLPT.
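The MLPT rule for the at-most-c constraint can be sketched as follows (function and variable names are ours):

```python
def mlpt(jobs, m, c):
    """Modified LPT with cardinality constraints: each machine may process at
    most c jobs. Each job goes to the least-loaded machine that still has room."""
    assert len(jobs) <= m * c           # otherwise no feasible schedule exists
    loads, counts = [0] * m, [0] * m
    for job in sorted(jobs, reverse=True):
        candidates = [i for i in range(m) if counts[i] < c]
        i = min(candidates, key=lambda k: loads[k])
        loads[i] += job
        counts[i] += 1
    return loads
```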

Another constraint is that the number of jobs on all machines should be rounded either up or down. In an adaptation of LPT called restricted LPT or RLPT, inputs are assigned in pairs - one to each machine (for m=2 machines). [10] The resulting partition is balanced by design.

Kernel constraints - non-simultaneous availability

In the kernel partitioning problem, there are some m pre-specified jobs called kernels, and each kernel must be scheduled to a unique machine. An equivalent problem is scheduling when machines become available at different times: each machine i becomes available at some time ti ≥ 0 (the time ti can be thought of as the length of the kernel job).

A simple heuristic algorithm, called SLPT, [23] assigns each kernel to a different subset, and then runs the LPT algorithm.
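SLPT can be sketched as follows (a minimal illustration; seeding each machine's load with its kernel length is equivalent to the non-simultaneous-availability view above):

```python
def slpt(kernels, jobs, m):
    """SLPT heuristic: assign kernel i to machine i, then run LPT on the
    remaining jobs."""
    assert len(kernels) == m
    loads = list(kernels)               # each kernel opens its own machine
    for job in sorted(jobs, reverse=True):
        loads[loads.index(min(loads))] += job
    return loads
```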

Online settings

Often, the inputs come online, and their sizes become known only when they arrive. In this case, it is not possible to sort them in advance. List scheduling is a similar algorithm that takes a list in any order, not necessarily sorted. Its approximation ratio for minimizing the largest sum is 2 - 1/m.

A more sophisticated adaptation of LPT to an online setting attains an approximation ratio of 3/2. [27]

Notes

  1. Proof. Normalize the input items such that OPT=1. This implies that the sum of all items is at most m. Partition the items into large (more than 2/3), medium (between 1/3 and 2/3), and small (less than 1/3). Let their numbers be nL, nM and nS. In each optimal partition, each part contains at most one large item, so nL ≤ m. Moreover, each optimal part cannot contain both a large and a medium item, or three medium items; so nM ≤ 2(m-nL). The operation of the greedy algorithm can be partitioned into three phases: 1. Allocating the large items - each of which is put in a different bin. Since nL ≤ m, when this phase completes, each bin contains at most one item, so the max-sum is at most 1. 2. Allocating the medium items. The first m-nL ones are put in empty bins, and the next m-nL ones (if any) are added into the same bins. Since nM ≤ 2(m-nL), when this phase completes, each bin contains either one large item - with sum at most 1, or at most two medium items - with sum at most 4/3. 3. Allocating the small items. Each item is added into the bin with the smallest sum. The smallest sum is at most the average sum, which is at most 1. Hence, once a small item is added, the new sum becomes at most 4/3.
  2. Proof. The previous proof can be refined in two ways. First, one can prove that, once all large and medium items are allocated, the sum in each bin is at most 1. If there are at most m-nL medium items, then each large and medium item is placed in a separate bin, so the sum is clearly at most 1. Otherwise, denote the first m-nL medium items by top-medium items, and the others (at most m-nL) by bottom-medium items. Assume first that item #m is larger than 1/2. This means that the items #1,...,#m are all larger than 1/2, so each must be in a different optimal part. Each of the bottom-medium items (items #m+1,...,#nM) must fit into an optimal part with exactly one of the items #1,...,#m. Let us call two items matchable if their sum is at most 1, so that they can fit into the same optimal part. By Hall's theorem, each subset of k bottom-medium items must be matchable to at least k of the items #1,...,#m. In particular, item #m+1 must be matchable to item #m; items #m+1 and #m+2 must be matchable to item #m-1; and in general, item #m+k must be matchable to item #m-k+1. LPT indeed puts item #m+k in the same bin as item #m-k+1, so the sum of each bin is at most 1. Second, one can prove that, when allocating the small inputs, the sum of every new bin is at most 4/3-1/(3m). There are two cases: 1. If the current smallest sum is at most 1-1/(3m), then the new sum - after adding one small input - is at most 1-1/(3m)+1/3 = 4/3-1/(3m). 2. Otherwise, all sums are larger than 1-1/(3m), so the sum of the m-1 largest bins is larger than m-1-1/3+1/(3m) = m-(4/3-1/(3m)). Since the total sum of all inputs is at most m, the new sum must be less than 4/3-1/(3m).


References

  1. Graham, R. L. (March 1969). "Bounds on Multiprocessing Timing Anomalies". SIAM Journal on Applied Mathematics. 17 (2): 416–429. CiteSeerX 10.1.1.90.8131. doi:10.1137/0117039. JSTOR 2099572.
  2. Segal-Halevi, Erel (2021-10-17), On Monotonicity of Number-Partitioning Algorithms, arXiv: 2110.08886 , retrieved 2024-02-22
  3. Xiao, Xin (2017-04-01). "A Direct Proof of the 4/3 Bound of LPT Scheduling Rule". Proceedings of the 2017 5th International Conference on Frontiers of Manufacturing Science and Measuring Technology (FMSMT 2017). Atlantis Press. pp. 486–489. doi: 10.2991/fmsmt-17.2017.102 . ISBN   978-94-6252-331-9.
  4. Coffman, E. G.; Sethi, Ravi (1976-03-29). "A generalized bound on LPT sequencing". Proceedings of the 1976 ACM SIGMETRICS Conference on Computer Performance Modeling Measurement and Evaluation - SIGMETRICS '76. New York, NY, USA: Association for Computing Machinery. pp. 306–310. doi:10.1145/800200.806205. ISBN 978-1-4503-7497-2. S2CID 16980306.
  5. Chen, Bo (1993-10-01). "A note on LPT scheduling". Operations Research Letters. 14 (3): 139–142. doi:10.1016/0167-6377(93)90024-B. ISSN 0167-6377.
  6. Deuermeyer, Bryan L.; Friesen, Donald K.; Langston, Michael A. (June 1982). "Scheduling to Maximize the Minimum Processor Finish Time in a Multiprocessor System". SIAM Journal on Algebraic and Discrete Methods. 3 (2): 190–196. doi:10.1137/0603019.
  7. Csirik, János; Kellerer, Hans; Woeginger, Gerhard (June 1992). "The exact LPT-bound for maximizing the minimum completion time". Operations Research Letters. 11 (5): 281–287. doi:10.1016/0167-6377(92)90004-M.
  8. Wu, Bang Ye (December 2005). "An analysis of the LPT algorithm for the max–min and the min–ratio partition problems". Theoretical Computer Science. 349 (3): 407–419. doi: 10.1016/j.tcs.2005.08.032 .
  9. Walter, Rico (2013-01-01). "Comparing the minimum completion times of two longest-first scheduling-heuristics". Central European Journal of Operations Research. 21 (1): 125–139. doi:10.1007/s10100-011-0217-4. ISSN   1613-9178. S2CID   254144745.
  10. Coffman, E. G.; Frederickson, G. N.; Lueker, G. S. (1984-05-01). "A Note on Expected Makespans for Largest-First Sequences of Independent Tasks on Two Processors". Mathematics of Operations Research. 9 (2): 260–266. doi:10.1287/moor.9.2.260. ISSN 0364-765X.
  11. Frenk, J.B.G.; Kan, A.H.G.Rinnooy (June 1986). "The rate of convergence to optimality of the LPT rule". Discrete Applied Mathematics. 14 (2): 187–197. doi:10.1016/0166-218X(86)90060-0. hdl: 1765/11698 .
  12. Frenk, J. B. G.; Rinnooy Kan, A. H. G. (1987-05-01). "The Asymptotic Optimality of the LPT Rule". Mathematics of Operations Research. 12 (2): 241–254. doi:10.1287/moor.12.2.241. ISSN   0364-765X. S2CID   31770203.
  13. Alon, Noga; Azar, Yossi; Woeginger, Gerhard J.; Yadid, Tal (1998). "Approximation schemes for scheduling on parallel machines". Journal of Scheduling. 1 (1): 55–66. doi:10.1002/(SICI)1099-1425(199806)1:1<55::AID-JOS2>3.0.CO;2-J. ISSN   1099-1425.
  14. Coffman, E. G.; Garey, M. R.; Johnson, D. S. (1987-12-01). "Bin packing with divisible item sizes". Journal of Complexity. 3 (4): 406–428. doi:10.1016/0885-064X(87)90009-4. ISSN 0885-064X.
  15. Gonzalez, Teofilo; Ibarra, Oscar H.; Sahni, Sartaj (1977-03-01). "Bounds for LPT Schedules on Uniform Processors". SIAM Journal on Computing. 6 (1): 155–166. doi:10.1137/0206013. ISSN 0097-5397.
  16. Mireault, Paul; Orlin, James B.; Vohra, Rakesh V. (1997-02-01). "A Parametric Worst Case Analysis of the LPT Heuristic for Two Uniform Machines". Operations Research. 45 (1): 116–125. doi:10.1287/opre.45.1.116. ISSN   0030-364X.
  17. Koulamas, Christos; Kyparisis, George J. (2009-07-01). "A modified LPT algorithm for the two uniform parallel machine makespan minimization problem". European Journal of Operational Research. 196 (1): 61–68. doi:10.1016/j.ejor.2008.02.008. ISSN   0377-2217.
  18. Kellerer, Hans; Woeginger, Gerhard (1993-09-07). "A tight bound for 3-partitioning". Discrete Applied Mathematics. 45 (3): 249–259. doi: 10.1016/0166-218X(93)90013-E . ISSN   0166-218X.
  19. Babel, Luitpold; Kellerer, Hans; Kotov, Vladimir (1998-02-01). "The m-partitioning problem". Mathematical Methods of Operations Research. 47 (1): 59–82. doi:10.1007/BF01193837. ISSN   1432-5217. S2CID   5594197.
  20. Dell'Amico, Mauro; Martello, Silvano (2001). "Bounds for the cardinality constrained P∥Cmax problem". Journal of Scheduling. 4 (3): 123–138. doi:10.1002/jos.68. ISSN   1099-1425.
  21. Chen, Shi Ping; He, Yong; Lin, Guohui (2002-03-01). "3-Partitioning Problems for Maximizing the Minimum Load". Journal of Combinatorial Optimization. 6 (1): 67–80. doi:10.1023/A:1013370208101. ISSN   1573-2886. S2CID   9053629.
  22. Tsai, Li-Hui (1992-02-01). "Asymptotic Analysis of an Algorithm for Balanced Parallel Processor Scheduling". SIAM Journal on Computing. 21 (1): 59–64. doi:10.1137/0221007. ISSN   0097-5397.
  23. Chen, S. -P.; He, Y.; Yao, E. -Y. (1996-09-01). "Three-partitioning containing kernels: Complexity and heuristic". Computing. 57 (3): 255–271. doi:10.1007/BF02247409. ISSN   1436-5057. S2CID   21935917.
  24. Lee, Chung-Yee (1991-01-07). "Parallel machines scheduling with nonsimultaneous machine available time". Discrete Applied Mathematics. 30 (1): 53–61. doi: 10.1016/0166-218X(91)90013-M . ISSN   0166-218X.
  25. Lin, Guo-Hui; Yao, En-Yu; He, Yong (1998-03-01). "Parallel machine scheduling to maximize the minimum load with nonsimultaneous machine available times". Operations Research Letters. 22 (2): 75–81. doi:10.1016/S0167-6377(97)00053-9. ISSN   0167-6377.
  26. Shen, Lixin; Wang, Dan; Wang, Xiao-Yuan (2013-04-01). "Parallel-machine scheduling with non-simultaneous machine available time". Applied Mathematical Modelling. 37 (7): 5227–5232. doi:10.1016/j.apm.2012.09.053. ISSN   0307-904X.
  27. Chen, Bo; Vestjens, Arjen P. A. (1 November 1997). "Scheduling on identical machines: How good is LPT in an on-line setting?" (PDF). Operations Research Letters. 21 (4): 165–169. doi:10.1016/S0167-6377(97)00040-0.