Samplesort

Samplesort is a divide-and-conquer sorting algorithm often used in parallel processing systems. [1] Conventional divide-and-conquer sorting algorithms partition the array into sub-intervals, or buckets, which are then sorted individually and concatenated together. However, if the array is non-uniformly distributed, the performance of these sorting algorithms can be significantly throttled. Samplesort addresses this issue by selecting a sample of size s from the n-element sequence and determining the range of the buckets by sorting the sample and choosing p − 1 < s elements from the result. These elements (called splitters) then divide the array into p approximately equal-sized buckets. [2] Samplesort was described in the 1970 paper "Samplesort: A Sampling Approach to Minimal Storage Tree Sorting" by W. D. Frazer and A. C. McKellar. [3]

Algorithm

Samplesort is a generalization of quicksort. Where quicksort partitions its input into two parts at each step, based on a single value called the pivot, samplesort instead takes a larger sample from its input and divides its data into buckets accordingly. Like quicksort, it then recursively sorts the buckets.

To devise a samplesort implementation, one needs to decide on the number of buckets p. When this is done, the actual algorithm operates in three phases: [4]

  1. Sample p−1 elements from the input (the splitters). Sort these; each pair of adjacent splitters then defines a bucket.
  2. Loop over the data, placing each element in the appropriate bucket. (In a multiprocessor system, this may mean sending the element to a processor.)
  3. Sort each of the buckets.

The full sorted output is the concatenation of the buckets.

A common strategy is to set p equal to the number of processors available. The data is then distributed among the processors, which perform the sorting of buckets using some other, sequential, sorting algorithm.

Pseudocode

The following listing shows the above-mentioned three-step algorithm as pseudocode and shows how the algorithm works in principle. [5] In the following, A is the unsorted data, k is the oversampling factor (discussed later), and p is the number of buckets, so that p − 1 splitters are selected.

function sampleSort(A[1..n], k, p)
    // if average bucket size is below a threshold switch to e.g. quicksort
    if n / k < threshold then smallSort(A)
    /* Step 1 */
    select S = [S1, ..., S(p−1)k] randomly from A    // select samples
    sort S                                           // sort sample
    [s0, s1, ..., sp−1, sp] <- [−∞, Sk, S2k, ..., S(p−1)k, ∞]  // select splitters
    /* Step 2 */
    for each a in A
        find j such that sj−1 < a <= sj
        place a in bucket bj
    /* Step 3 and concatenation */
    return concatenate(sampleSort(b1), ..., sampleSort(bp))

The pseudocode differs from the original Frazer and McKellar algorithm. [3] In the pseudocode, samplesort is called recursively. Frazer and McKellar called samplesort just once and used quicksort in all following iterations.
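As a concrete illustration, the pseudocode above can be translated into a short sequential Python sketch. The parameter defaults, the fallback to Python's built-in sort, and the guard against a degenerate (all-equal) sample are illustrative choices, not part of the original algorithm:

```python
import bisect
import random

def sample_sort(a, k=4, p=4, threshold=32):
    # Step 0: small inputs fall back to a simple sort
    # (stand-in for the quicksort mentioned in the text).
    if len(a) // k < threshold:
        return sorted(a)

    # Step 1: draw (p-1)*k random samples, sort them,
    # and keep every k-th sample as a splitter.
    samples = sorted(random.choices(a, k=(p - 1) * k))
    splitters = samples[k - 1 :: k]
    if splitters[0] == splitters[-1]:
        return sorted(a)  # degenerate sample, e.g. all keys equal

    # Step 2: place each element a into bucket j with s[j-1] < a <= s[j];
    # bisect_left finds exactly that j.
    buckets = [[] for _ in range(p)]
    for x in a:
        buckets[bisect.bisect_left(splitters, x)].append(x)

    # Step 3: sort the buckets recursively and concatenate.
    out = []
    for b in buckets:
        out.extend(sample_sort(b, k, p, threshold))
    return out
```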

Complexity

The complexity, given in Big O notation, for a parallelized implementation with p processors:

Find the splitters.

O(s log s) for sorting the sample of size s and picking the p − 1 splitters

Send to buckets.

O(n/p) for reading all nodes
O(p log p) for broadcasting the splitters
O((n/p) log p) for the binary search for all keys
O(n/p) to send keys to their buckets

Sort buckets.

O(c(n/p)), where c(n) is the complexity of the underlying sequential sorting method. [1] Often c(n) ∈ O(n log n).

The number of comparisons performed by this algorithm approaches the information-theoretic optimum for large input sequences. In experiments conducted by Frazer and McKellar, the algorithm needed 15% fewer comparisons than quicksort.

Sampling the data

The data may be sampled through different methods. Some methods include:

  1. Pick evenly spaced samples.
  2. Pick randomly selected samples.
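The two strategies can be sketched as follows; the helper names are illustrative:

```python
import random

def evenly_spaced_samples(a, s):
    """Pick s samples at (roughly) regular intervals of the input."""
    step = max(1, len(a) // s)
    return a[::step][:s]

def random_samples(a, s):
    """Pick s samples uniformly at random, without replacement."""
    return random.sample(a, s)
```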

Oversampling

The oversampling ratio k determines how many times more data elements to pull as samples before determining the splitters. The goal is to get a good representation of the distribution of the data. If the data values are widely distributed, in that there are not many duplicate values, then a small oversampling ratio is sufficient. In other cases where there are many duplicates in the distribution, a larger oversampling ratio will be necessary. In the ideal case, after step 2, each bucket contains n/p elements. In this case, no bucket takes longer to sort than the others, because all buckets are of equal size.

After pulling k times more samples than necessary, the samples are sorted. Thereafter, the splitters used as bucket boundaries are the samples at positions k, 2k, ..., (p − 1)k of the sample sequence (together with −∞ and ∞ as left and right boundaries for the leftmost and rightmost buckets respectively). This provides a better heuristic for good splitters than just selecting p − 1 splitters randomly.
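A minimal sketch of this splitter selection; the function name and the choice of sampling with replacement are assumptions for illustration:

```python
import random

def choose_splitters(a, p, k):
    """Oversample: draw (p-1)*k random samples, sort them, and take the
    samples at positions k, 2k, ..., (p-1)k as the p-1 splitters."""
    samples = sorted(random.choices(a, k=(p - 1) * k))
    return samples[k - 1 :: k]  # 0-based indices k-1, 2k-1, ..., (p-1)k-1
```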

Bucket size estimate

With the resulting sample size, the expected bucket size and especially the probability of a bucket exceeding a certain size can be estimated. The following will show that, for an oversampling factor of k ∈ Θ(log n) (with the imbalance parameter ε > 0 fixed), the probability that no bucket has more than (1 + ε)·n/p elements is larger than 1 − 1/n.

To show this, let ⟨e1, ..., en⟩ be the input as a sorted sequence. For a processor to get more than (1 + ε)·n/p elements, there has to exist a subsequence of the input of length (1 + ε)·n/p of which fewer than k samples are picked (otherwise a splitter would fall inside the subsequence and bound the bucket). These cases constitute the probability P(fail). Fixing one such subsequence, the number of samples falling into it can be represented as the random variable:

X = X1 + X2 + ... + Xpk, where Xi = 1 if the i-th sample lies in the subsequence and Xi = 0 otherwise

For the expected value of X holds:

E[X] = pk · ((1 + ε)·n/p) / n = (1 + ε)·k

This will be used to estimate P(fail):

P(fail) = P(X < k) = P(X < (1 − ε/(1 + ε)) · E[X])

Using the Chernoff bound now, it can be shown:

P(X < (1 − δ)·E[X]) ≤ e^(−δ²·E[X]/2), which with δ = ε/(1 + ε) gives P(fail) ≤ e^(−ε²·k/(2(1 + ε))) ≤ 1/n² for k ≥ 4(1 + ε)·ln(n)/ε²

A union bound over the at most n possible start positions of such a subsequence then gives a total failure probability of at most n · 1/n² = 1/n.

Many identical keys

In the case of many identical keys, the algorithm goes through many recursion levels in which sequences are sorted even though they consist entirely of identical keys. This can be counteracted by introducing equality buckets. Elements equal to a splitter are sorted into their respective equality bucket, which can be implemented with only one additional conditional branch. Equality buckets are not further sorted. This works, since keys occurring more than n/p times are likely to become splitters.
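A sequential sketch of classification with equality buckets (the helper name is hypothetical; efficient implementations fold this into the branchless classification instead of using a separate pass):

```python
import bisect

def classify_with_equality_buckets(a, splitters):
    """Place each element into a regular bucket, or into the equality
    bucket of a splitter if it equals that splitter. Equality buckets
    need no further sorting."""
    p = len(splitters) + 1
    regular = [[] for _ in range(p)]
    equal = [[] for _ in splitters]
    for x in a:
        j = bisect.bisect_left(splitters, x)
        if j < len(splitters) and splitters[j] == x:
            equal[j].append(x)   # one extra comparison/branch per element
        else:
            regular[j].append(x)
    return regular, equal
```

Concatenating sorted(regular[0]), equal[0], sorted(regular[1]), equal[1], ... yields the sorted output; only the regular buckets are recursed on.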

Uses in parallel systems

Example of parallel samplesort on p = 3 processors and an oversampling factor of k = 3.

Samplesort is often used in parallel systems, including distributed systems such as bulk synchronous parallel machines. [6] [4] [7] Due to the variable number of splitters (in contrast to only one pivot in quicksort), samplesort is very well suited and intuitive for parallelization and scaling. Furthermore, samplesort is also more cache-efficient than implementations of e.g. quicksort.

Parallelization is implemented by splitting the sorting among the processors or nodes, where the number of buckets is equal to the number of processors p. Samplesort is efficient in parallel systems because each processor receives a bucket of approximately the same size n/p. Since the buckets are sorted concurrently, the processors complete the sorting at approximately the same time, so no processor has to wait for others.

On distributed systems, the splitters are chosen by taking k elements on each processor, sorting the resulting kp elements with a distributed sorting algorithm, taking every k-th element, and broadcasting the result to all p processors. This costs O(k log(kp)) for sorting the kp elements on p processors, as well as O(p log p) for distributing the chosen p − 1 splitters to the p processors.

With the resulting splitters, each processor places its own input data into local buckets. With binary search, this takes O((n/p) log p). Thereafter, the local buckets are redistributed to the processors: processor i receives the i-th local bucket of every other processor and sorts these locally. The distribution takes O(m) time, where m is the size of the biggest bucket. The local sorting takes O(m log m) with a comparison-based sequential sorting algorithm.
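The local classification against the agreed splitters can be sketched as follows; the message to processor i would then be buckets[i] (names are illustrative):

```python
import bisect

def local_buckets(local_data, splitters):
    """Classify this processor's n/p elements against the p-1 global
    splitters via binary search: O((n/p) log p) comparisons."""
    p = len(splitters) + 1
    buckets = [[] for _ in range(p)]
    for x in local_data:
        # bucket j holds elements x with splitters[j-1] < x <= splitters[j]
        buckets[bisect.bisect_left(splitters, x)].append(x)
    return buckets
```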

Experiments performed in the early 1990s on Connection Machine supercomputers showed samplesort to be particularly good at sorting large datasets on these machines, because it incurs little interprocessor communication overhead. [8] On latter-day GPUs, the algorithm may be less effective than its alternatives. [9]

Efficient Implementation of Samplesort

Animated example of Super Scalar Samplesort. In each step, numbers that are compared are marked blue and numbers that are otherwise read or written are marked red.

As described above, the samplesort algorithm splits the elements according to the selected splitters. An efficient implementation strategy is proposed in the paper "Super Scalar Sample Sort". [5] The implementation proposed in the paper uses two arrays of size n (the original array containing the input data and a temporary one) for an efficient implementation. Hence, this version of the implementation is not an in-place algorithm.

In each recursion step, the data gets copied to the other array in a partitioned fashion. If the data is in the temporary array in the last recursion step, then the data is copied back to the original array.

Determining buckets

In a comparison-based sorting algorithm, the comparison operation is the most performance-critical part. In samplesort, this corresponds to determining the bucket for each element, which takes O(log p) time per element with a binary search over the p − 1 splitters.

Super Scalar Sample Sort uses a balanced search tree which is implicitly stored in an array t. The root is stored at position 1, the left successor of ti is stored at t2i, and the right successor is stored at t2i+1. Given the search tree t, the algorithm calculates the bucket number j of element a as follows (assuming (a > tj) evaluates to 1 if it is true and 0 otherwise):

j := 1
repeat log2(p) times
    j := 2j + (a > tj)
j := j − p + 1

Since the number of buckets p is known at compile time, this loop can be unrolled by the compiler. The comparison operation is implemented with predicated instructions. Thus, there occur no branch mispredictions, which would slow down the comparison operation significantly.
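The implicit search tree and the unrollable descent loop can be sketched as follows. Python stands in for the predicated machine code; p is assumed to be a power of two, and the helper names are illustrative:

```python
import math

def build_tree(splitters):
    """Store the p-1 sorted splitters as an implicit balanced BST:
    root at t[1], children of t[i] at t[2i] and t[2i+1]."""
    p = len(splitters) + 1          # assumed to be a power of two
    t = [None] * p
    def fill(node, lo, hi):
        if node >= p:
            return
        mid = (lo + hi) // 2
        t[node] = splitters[mid]
        fill(2 * node, lo, mid - 1)
        fill(2 * node + 1, mid + 1, hi)
    fill(1, 0, p - 2)
    return t

def bucket_of(x, t, p):
    """log2(p) branch-free iterations of j := 2j + (x > t_j)."""
    j = 1
    for _ in range(int(math.log2(p))):
        j = 2 * j + (x > t[j])      # the comparison yields 0 or 1
    return j - p                    # leaves p..2p-1 map to buckets 0..p-1
```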

Partitioning

For an efficient partitioning of the elements, the algorithm needs to know the sizes of the buckets in advance, so that each element can be placed into the output array at the right position. A naive algorithm could count the number of elements of each bucket, then insert the elements into the other array at the right place. This way, one has to determine the bucket for each element twice (once for counting the number of elements in a bucket, and once for inserting them).

To avoid this doubling of comparisons, Super Scalar Sample Sort uses an additional array o (called the oracle) which assigns each index of the elements to a bucket. First, the algorithm determines the contents of o by determining the bucket for each element and counting the bucket sizes, and then places the elements into the buckets determined by o. The array o also incurs cost in storage space, but as it only needs to store ⌈log2 p⌉ bits per element, this cost is small compared to the space of the input array.
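The oracle-based two-pass partitioning can be sketched as follows (array names are illustrative; the actual implementation scatters into the second array of the two-array scheme):

```python
import bisect

def partition_with_oracle(a, splitters):
    """Pass 1 classifies each element exactly once, recording its bucket
    in the oracle and counting bucket sizes; pass 2 scatters the elements
    using exclusive prefix sums of the sizes as write positions."""
    p = len(splitters) + 1
    oracle = [bisect.bisect_left(splitters, x) for x in a]  # pass 1
    sizes = [0] * p
    for j in oracle:
        sizes[j] += 1
    pos = [0] * p                    # exclusive prefix sums of sizes
    for j in range(1, p):
        pos[j] = pos[j - 1] + sizes[j - 1]
    out = [None] * len(a)            # pass 2: no re-classification needed
    for x, j in zip(a, oracle):
        out[pos[j]] = x
        pos[j] += 1
    return out, sizes
```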

In-place samplesort

A key disadvantage of the efficient samplesort implementation shown above is that it is not in-place and requires a second temporary array of the same size as the input sequence during sorting. Efficient implementations of e.g. quicksort are in-place and thus more space-efficient. However, samplesort can be implemented in-place as well. [10]

The in-place algorithm is separated into four phases:

  1. Sampling which is equivalent to the sampling in the above mentioned efficient implementation.
  2. Local classification on each processor, which groups the input into blocks such that all elements in each block belong to the same bucket, but buckets are not necessarily contiguous in memory.
  3. Block permutation brings the blocks into the globally correct order.
  4. Cleanup moves some elements on the edges of the buckets.

One obvious disadvantage of this algorithm is that it reads and writes every element twice, once in the classification phase and once in the block permutation phase. Nevertheless, the algorithm performs up to three times faster than other state-of-the-art in-place competitors and up to 1.5 times faster than other state-of-the-art sequential competitors. As sampling has already been discussed above, the three later phases are detailed in the following.

Local classification

In a first step, the input array is split up into p stripes of blocks of equal size, one for each processor. Each processor additionally allocates buffers that are of equal size to the blocks, one for each bucket. Thereafter, each processor scans its stripe and moves the elements into the buffer of the corresponding bucket. If a buffer is full, the buffer is written into the processor's stripe, beginning at the front. There is always at least one buffer size of empty memory, because for a buffer to be written (i.e. for the buffer to be full), at least a whole buffer size of elements more than the elements written back must have been scanned. Thus, every full block contains elements of the same bucket. While scanning, the size of each bucket is kept track of.
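A single-processor simulation of this phase can be sketched as follows; scanning a copy of the stripe sidesteps the overwrite invariant discussed above, and the names and return shape are illustrative:

```python
import bisect

def local_classification(stripe, splitters, block_size):
    """Move elements into per-bucket buffers; flush a buffer back into
    the stripe as one full block whenever it fills up, so every written
    block holds elements of a single bucket."""
    p = len(splitters) + 1
    buffers = [[] for _ in range(p)]
    sizes = [0] * p
    write = 0                        # full blocks are written back at the front
    for x in list(stripe):           # scan a copy while overwriting the stripe
        j = bisect.bisect_left(splitters, x)
        buffers[j].append(x)
        sizes[j] += 1
        if len(buffers[j]) == block_size:
            stripe[write : write + block_size] = buffers[j]
            write += block_size
            buffers[j] = []
    # full blocks, partially filled leftover buffers, and bucket sizes
    return stripe[:write], buffers, sizes
```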

Block permutation

Firstly, a prefix sum operation is performed that calculates the boundaries of the buckets. However, since only full blocks are moved in this phase, the boundaries are rounded up to a multiple of the block size and a single overflow buffer is allocated. Before starting the block permutation, some empty blocks might have to be moved to the end of their buckets. Thereafter, for each bucket i, a write pointer wi is set to the start of the bucket subarray and a read pointer ri is set to the last non-empty block in the bucket subarray.

To limit work contention, each processor is assigned a different primary bucket and two swap buffers that can each hold a block. In each step, if both swap buffers are empty, the processor decrements the read pointer ri of its primary bucket i, reads the block at ri and places it in one of its swap buffers. After determining the destination bucket j of the block by classifying the first element of the block, it increases the write pointer wj, reads the block at wj into the other swap buffer and writes the first block into its destination bucket. If wj > rj, the block read at wj was already processed and the swap buffers are empty again. Otherwise the block remaining in the swap buffers has to be inserted into its destination bucket.

If all blocks in the subarray of the primary bucket of a processor are in the correct bucket, the next bucket is chosen as the primary bucket. If a processor chose all buckets as primary bucket once, the processor is finished.

Cleanup

Since only whole blocks were moved in the block permutation phase, some elements might still be incorrectly placed around the bucket boundaries. Since there has to be enough space in the array for each element, those incorrectly placed elements can be moved to empty spaces from left to right, lastly considering the overflow buffer.


References

  1. "Samplesort using the Standard Template Adaptive Parallel Library" (PDF) (Technical report). Texas A&M University.
  2. Grama, Ananth; Karypis, George; Kumar, Vipin (2003). "9.5 Bucket and Sample Sort". Introduction to Parallel Computing (2nd ed.). ISBN 0-201-64865-2. Archived from the original on 2016-12-13. Retrieved 2014-10-28.
  3. Frazer, W. D.; McKellar, A. C. (1970-07-01). "Samplesort: A Sampling Approach to Minimal Storage Tree Sorting". Journal of the ACM. 17 (3): 496–507. doi:10.1145/321592.321600. S2CID 16958223.
  4. Hill, Jonathan M. D.; McColl, Bill; Stefanescu, Dan C.; Goudreau, Mark W.; Lang, Kevin; Rao, Satish B.; Suel, Torsten; Tsantilas, Thanasis; Bisseling, Rob H. (1998). "BSPlib: The BSP Programming Library". Parallel Computing. 24 (14): 1947–1980. CiteSeerX 10.1.1.48.1862. doi:10.1016/S0167-8191(98)00093-3.
  5. Sanders, Peter; Winkel, Sebastian (2004-09-14). "Super Scalar Sample Sort". Algorithms – ESA 2004. Lecture Notes in Computer Science. Vol. 3221. pp. 784–796. CiteSeerX 10.1.1.68.9881. doi:10.1007/978-3-540-30140-0_69. ISBN 978-3-540-23025-0.
  6. Gerbessiotis, Alexandros V.; Valiant, Leslie G. (1992). "Direct Bulk-Synchronous Parallel Algorithms". J. Parallel and Distributed Computing. 22: 22–251. CiteSeerX 10.1.1.51.9332.
  7. Hightower, William L.; Prins, Jan F.; Reif, John H. (1992). Implementations of randomized sorting on large parallel machines (PDF). ACM Symp. on Parallel Algorithms and Architectures.
  8. Blelloch, Guy E.; Leiserson, Charles E.; Maggs, Bruce M.; Plaxton, C. Gregory; Smith, Stephen J.; Zagha, Marco (1991). A Comparison of Sorting Algorithms for the Connection Machine CM-2. ACM Symp. on Parallel Algorithms and Architectures. CiteSeerX 10.1.1.131.1835.
  9. Satish, Nadathur; Harris, Mark; Garland, Michael. Designing Efficient Sorting Algorithms for Manycore GPUs. Proc. IEEE Int'l Parallel and Distributed Processing Symp. CiteSeerX 10.1.1.190.9846.
  10. Axtmann, Michael; Witt, Sascha; Ferizovic, Daniel; Sanders, Peter (2017). "In-Place Parallel Super Scalar Samplesort (IPSSSSo)". 25th Annual European Symposium on Algorithms (ESA 2017). 87 (Leibniz International Proceedings in Informatics (LIPIcs)): 9:1–9:14. doi:10.4230/LIPIcs.ESA.2017.9.
