Shellsort

Shellsort
Shellsort with gaps 23, 10, 4, 1 in action (animation)
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) (worst known worst-case gap sequence); O(n log²n) (best known worst-case gap sequence) [1]
Best-case performance: O(n log n) (most gap sequences); O(n log²n) (best known worst-case gap sequence) [2]
Average performance: depends on gap sequence
Worst-case space complexity: O(n) total, O(1) auxiliary
Optimal: No
Swapping pairs of items in successive steps of Shellsort with gaps 5, 3, 1 (illustration)

Shellsort, also known as Shell sort or Shell's method, is an in-place comparison sort. It can be seen as either a generalization of sorting by exchange (bubble sort) or sorting by insertion (insertion sort). [3] The method starts by sorting pairs of elements far apart from each other, then progressively reduces the gap between elements to be compared. By starting with far-apart elements, it can move some out-of-place elements into position faster than a simple nearest-neighbor exchange. Donald Shell published the first version of this sort in 1959. [4] [5] The running time of Shellsort depends heavily on the gap sequence it uses. For many practical variants, determining their time complexity remains an open problem.


Description

Shellsort is an optimization of insertion sort that allows the exchange of items that are far apart. The idea is to arrange the list of elements so that, starting anywhere, taking every hth element produces a sorted list. Such a list is said to be h-sorted. It can also be thought of as h interleaved lists, each individually sorted. [6] Beginning with large values of h allows elements to move long distances in the original list, reducing large amounts of disorder quickly, and leaving less work for smaller h-sort steps to do. [7] If the list is then k-sorted for some smaller integer k, then the list remains h-sorted. A final sort with h = 1 ensures the list is fully sorted at the end, [6] but a judiciously chosen decreasing sequence of h values leaves very little work for this final pass to do.
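
The h-sorted property is easy to state in code: every element must be no smaller than the element h positions before it. A minimal check in C (the helper name is illustrative and plays no role in the algorithm itself):

    #include <stdbool.h>
    #include <stddef.h>

    /* True if a[0..n-1] is h-sorted: each subsequence of elements
       h apart is in nondecreasing order. */
    static bool is_h_sorted(const int a[], size_t n, size_t h)
    {
        for (size_t i = h; i < n; i++)
            if (a[i - h] > a[i])
                return false;
        return true;
    }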

In simple terms, this means that if we have an array of 1024 numbers, our first gap (h) could be 512. We then run through the list, comparing each element in the first half to the corresponding element in the second half. Our second gap (k) is 256, which breaks the array into four sections (starting at 0, 256, 512, 768), and we make sure the first items in each section are sorted relative to each other, then the second items in each section, and so on. In practice the gap sequence could be anything, but the last gap is always 1 to finish the sort (effectively finishing with an ordinary insertion sort).

An example run of Shellsort with gaps 5, 3 and 1 is shown below.

                     a1  a2  a3  a4  a5  a6  a7  a8  a9  a10 a11 a12
    Input data       62  83  18  53  07  17  95  86  47  69  25  28
    After 5-sorting  17  28  18  47  07  25  83  86  53  69  62  95
    After 3-sorting  17  07  18  47  28  25  69  62  53  83  86  95
    After 1-sorting  07  17  18  25  28  47  53  62  69  83  86  95

The first pass, 5-sorting, performs insertion sort on five separate subarrays (a1, a6, a11), (a2, a7, a12), (a3, a8), (a4, a9), (a5, a10). For instance, it changes the subarray (a1, a6, a11) from (62, 17, 25) to (17, 25, 62). The next pass, 3-sorting, performs insertion sort on the three subarrays (a1, a4, a7, a10), (a2, a5, a8, a11), (a3, a6, a9, a12). The last pass, 1-sorting, is an ordinary insertion sort of the entire array (a1,..., a12).

As the example illustrates, the subarrays that Shellsort operates on are initially short; later they are longer but almost ordered. In both cases insertion sort works efficiently.

Unlike insertion sort, Shellsort is not a stable sort since gapped insertions transport equal elements past one another and thus lose their original order. It is an adaptive sorting algorithm in that it executes faster when the input is partially sorted.

Pseudocode

Using Marcin Ciura's gap sequence, with an inner insertion sort.

    # Sort an array a[0...n-1].
    gaps = [701, 301, 132, 57, 23, 10, 4, 1]  # Ciura gap sequence

    # Start with the largest gap and work down to a gap of 1,
    # similar to insertion sort but instead of 1, gap is used in each step
    foreach (gap in gaps)
    {
        # Do a gapped insertion sort for this gap size.
        # Each loop leaves a[0..gap-1] in gapped order
        for (i = gap; i < n; i += 1)
        {
            # save a[i] in temp and make a hole at position i
            temp = a[i]
            # shift earlier gap-sorted elements up until the correct location for a[i] is found
            for (j = i; (j >= gap) && (a[j-gap] > temp); j -= gap)
            {
                a[j] = a[j-gap]
            }
            # put temp (the original a[i]) in its correct location
            a[j] = temp
        }
    }
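
The pseudocode translates directly into C. A minimal sketch for int arrays (the function name and the fixed gap table are illustrative choices, not a tuned library implementation):

    #include <stddef.h>

    /* Shellsort using Ciura's gap sequence, following the pseudocode above. */
    static void shellsort(int a[], size_t n)
    {
        static const size_t gaps[] = { 701, 301, 132, 57, 23, 10, 4, 1 };

        for (size_t g = 0; g < sizeof gaps / sizeof gaps[0]; g++) {
            size_t gap = gaps[g];
            /* Gapped insertion sort: leaves the array gap-sorted. */
            for (size_t i = gap; i < n; i++) {
                int temp = a[i];          /* make a hole at position i */
                size_t j = i;
                while (j >= gap && a[j - gap] > temp) {
                    a[j] = a[j - gap];    /* shift gap-sorted elements up */
                    j -= gap;
                }
                a[j] = temp;              /* drop temp into its location */
            }
        }
    }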

Gap sequences

The question of deciding which gap sequence to use is difficult. Every gap sequence that contains 1 yields a correct sort (as this makes the final pass an ordinary insertion sort); however, the properties of the resulting versions of Shellsort may be very different. Too few gaps slow down the passes, and too many gaps produce an overhead.

The table below compares most proposed gap sequences published so far. Some of them have decreasing elements that depend on the size of the sorted array (N). Others are increasing infinite sequences, whose elements less than N should be used in reverse order.

    OEIS    | General term (k ≥ 1) | Concrete gaps | Worst-case time complexity | Author and year of publication
            | ⌊N/2^k⌋ | ⌊N/2⌋, ⌊N/4⌋, ..., 1 | Θ(N²) [e.g. when N = 2^p] | Shell, 1959 [4]
            | 2⌊N/2^(k+1)⌋ + 1 | 2⌊N/4⌋ + 1, ..., 3, 1 | Θ(N^(3/2)) | Frank & Lazarus, 1960 [8]
    A000225 | 2^k − 1 | 1, 3, 7, 15, 31, 63, ... | Θ(N^(3/2)) | Hibbard, 1963 [9]
    A083318 | 2^k + 1, prefixed with 1 | 1, 3, 5, 9, 17, 33, 65, ... | Θ(N^(3/2)) | Papernov & Stasevich, 1965 [10]
    A003586 | successive numbers of the form 2^p·3^q (3-smooth numbers) | 1, 2, 3, 4, 6, 8, 9, 12, ... | Θ(N log²N) | Pratt, 1971 [1]
    A003462 | (3^k − 1)/2, not greater than ⌈N/3⌉ | 1, 4, 13, 40, 121, ... | Θ(N^(3/2)) | Knuth, 1973, [3] based on Pratt, 1971 [1]
    A036569 | (defined by a complicated recurrence) | 1, 3, 7, 21, 48, 112, 336, 861, ... | O(N^(1 + √(8 ln(5/2) / ln N))) | Incerpi & Sedgewick, 1985, [11] Knuth [3]
    A036562 | 4^k + 3·2^(k−1) + 1, prefixed with 1 | 1, 8, 23, 77, 281, ... | O(N^(4/3)) | Sedgewick, 1982 [6]
    A033622 | 9(2^k − 2^(k/2)) + 1 if k is even; 8·2^k − 6·2^((k+1)/2) + 1 if k is odd (k ≥ 0) | 1, 5, 19, 41, 109, 209, ... | O(N^(4/3)) | Sedgewick, 1986 [12]
            | h_k = max(⌊5h_(k−1)/11⌋, 1), h_0 = N | ⌊5N/11⌋, ⌊25N/121⌋, ..., 1 | Unknown | Gonnet & Baeza-Yates, 1991 [13]
    A108870 | ⌈(9^k − 4^k)/(5·4^(k−1))⌉ (or equivalently, h_k = ⌈h′_k⌉ with h′_k = 2.25h′_(k−1) + 1, h′_1 = 1) | 1, 4, 9, 20, 46, 103, 233, 525, ... | Unknown | Tokuda, 1992 [14] (misquote per OEIS)
    A102549 | unknown (experimentally derived) | 1, 4, 10, 23, 57, 132, 301, 701 | Unknown | Ciura, 2001 [15]
    A366726 | ⌈(γ^k − 1)/(γ − 1)⌉ with γ ≈ 2.243609 | 1, 4, 9, 20, 45, 102, 230, 516, ... | Unknown | Lee, 2021 [16]
            | (see paper) | (see paper) | Unknown | Skean, Ehrenborg, Jaromczyk, 2023 [17]

When the binary representation of N contains many consecutive zeroes, Shellsort using Shell's original gap sequence makes Θ(N²) comparisons in the worst case. For instance, this case occurs for N equal to a power of two when elements greater and smaller than the median occupy odd and even positions respectively, since they are compared only in the last pass.
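
A small construction of such an input for N = 8 (an illustrative demo program, not from the cited analyses): with Shell's gaps 4, 2, 1, the larger half of the values sits in odd positions (even 0-based indices) and the smaller half in even positions, so the 4-sorting and 2-sorting passes compare only elements on the same side of the median and exchange nothing, leaving all Θ(N²) inversions for the final pass.

    #include <stdio.h>

    int main(void)
    {
        enum { N = 8 };                /* any power of two works */
        int a[N];
        /* larger half (5..8) at even 0-based indices, smaller half (1..4) at odd */
        for (int i = 0; i < N; i++)
            a[i] = (i % 2 == 0) ? N / 2 + i / 2 + 1 : i / 2 + 1;
        /* a = {5, 1, 6, 2, 7, 3, 8, 4}: already 4-sorted and 2-sorted,
           so those passes do no exchanges; the 1-sorting pass must then
           remove all 10 inversions one neighbor at a time. */
        for (int i = 0; i < N; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }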

Although it has higher complexity than the O(N log N) that is optimal for comparison sorts, Pratt's version lends itself to sorting networks and has the same asymptotic gate complexity as Batcher's bitonic sorter.

Gonnet and Baeza-Yates observed that Shellsort makes the fewest comparisons on average when the ratios of successive gaps are roughly equal to 2.2. [13] This is why their sequence with ratio 2.2 and Tokuda's sequence with ratio 2.25 prove efficient. However, it is not known why this is so. Sedgewick recommends using gaps which have low greatest common divisors or are pairwise coprime. [18] Gaps which are odd numbers seem to work well in practice: 25% reductions have been observed by avoiding even-numbered gaps. Gaps which avoid multiples of 3 and 5 seem to produce small benefits of < 10%.

With respect to the average number of comparisons, Ciura's sequence [15] has the best known performance; gaps greater than 701 were not determined, but the sequence can be further extended according to the recursive formula h_k = ⌊2.25 h_(k−1)⌋.

Tokuda's sequence, defined by the simple formula h_k = ⌈h′_k⌉, where h′_k = 2.25h′_(k−1) + 1 and h′_1 = 1, can be recommended for practical applications.
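
Because each gap is a fixed function of k, the gaps below a given array length can be generated on the fly and applied from largest to smallest. A minimal C sketch (the helper name tokuda_gaps is illustrative):

    #include <math.h>
    #include <stddef.h>

    /* Fill gaps[] with Tokuda's gaps h_k = ceil(h'_k), where
       h'_k = 2.25 * h'_(k-1) + 1 and h'_1 = 1, keeping only values
       below n, then reverse so the largest gap comes first.
       Illustrative helper; the caller must provide enough room
       (about log(n)/log(2.25) entries). */
    static size_t tokuda_gaps(size_t n, size_t gaps[])
    {
        size_t count = 0;
        for (double h = 1.0; (size_t)ceil(h) < n; h = 2.25 * h + 1.0)
            gaps[count++] = (size_t)ceil(h);       /* 1, 4, 9, 20, 46, ... */
        for (size_t i = 0; i < count / 2; i++) {   /* reverse in place */
            size_t t = gaps[i];
            gaps[i] = gaps[count - 1 - i];
            gaps[count - 1 - i] = t;
        }
        return count;
    }

For n = 1000, for example, this yields 525, 233, 103, 46, 20, 9, 4, 1.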

If the maximum input size is small, as may occur if Shellsort is used on small subarrays by another recursive sorting algorithm such as quicksort or merge sort, then it is possible to tabulate an optimal sequence for each input size. [19] [20]

Computational complexity

The following property holds: after h2-sorting of any h1-sorted array, the array remains h1-sorted. [21] Every h1-sorted and h2-sorted array is also (a1h1+a2h2)-sorted, for any nonnegative integers a1 and a2. The worst-case complexity of Shellsort is therefore connected with the Frobenius problem: for given integers h1,..., hn with gcd = 1, the Frobenius number g(h1,..., hn) is the greatest integer that cannot be represented as a1h1+ ... +anhn with nonnegative integers a1,..., an. Using known formulae for Frobenius numbers, we can determine the worst-case complexity of Shellsort for several classes of gap sequences. [22] Proven results are shown in the above table.
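
For two coprime gaps the Frobenius number has a classical closed form, which illustrates the connection (a standard number-theoretic identity, not taken from the cited papers):

    g(h_1, h_2) = h_1 h_2 - h_1 - h_2

For example, in an array that is both 4-sorted and 9-sorted,

    g(4, 9) = 4 \cdot 9 - 4 - 9 = 23,

so a[i] ≤ a[j] is guaranteed whenever j − i > 23, since every larger difference is representable as 4a_1 + 9a_2 with nonnegative integers a_1, a_2.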

Mark Allen Weiss proved that Shellsort runs in O(N log N) time when the input array is in reverse order. [23]

With respect to the average number of operations, none of the proven results concerns a practical gap sequence. For gaps that are powers of two, Espelid computed this average exactly. [24] Knuth determined the average complexity of sorting an N-element array with two gaps (h, 1) to be 2N²/h + √(πN³h). [3] It follows that a two-pass Shellsort with h = Θ(N^(1/3)) makes on average O(N^(5/3)) comparisons/inversions/running time. Yao found the average complexity of a three-pass Shellsort. [25] His result was refined by Janson and Knuth, [26] who determined the average number of comparisons/inversions/running time made during a Shellsort with three gaps (ch, cg, 1), where h and g are coprime, separately for each of the three passes; the third-pass term involves a complicated function ψ(h, g) of the two gap parameters. In particular, when h = Θ(N^(7/15)) and g = Θ(N^(1/5)), the average time of sorting is O(N^(23/15)).
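
Minimizing the two-gap cost over h is a one-line calculus exercise that recovers the stated exponent (a sketch, assuming the average-cost formula above):

    T(h) = \frac{2N^2}{h} + \sqrt{\pi N^3 h}, \qquad
    T'(h) = -\frac{2N^2}{h^2} + \frac{1}{2}\sqrt{\frac{\pi N^3}{h}} = 0
    \;\Rightarrow\; h^{3/2} = \frac{4}{\sqrt{\pi}}\, N^{1/2}
    \;\Rightarrow\; h = \Theta(N^{1/3}), \quad T(h) = \Theta(N^{5/3}).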

Based on experiments, it is conjectured that Shellsort with Hibbard's gap sequence runs in O(N5/4) average time, [3] and that Gonnet and Baeza-Yates's sequence requires on average 0.41N ln N (ln ln N + 1/6) element moves. [13] Approximations of the average number of operations formerly put forward for other sequences fail when sorted arrays contain millions of elements.

The graph below shows the average number of element comparisons used by various gap sequences, divided by the theoretical lower bound, i.e. log₂(N!). Ciura's sequence 1, 4, 10, 23, 57, 132, 301, 701 (labelled Ci01) has been extended according to the formula h_k = ⌊2.25 h_(k−1)⌋.

[Graph: average number of comparisons for various gap sequences, normalized by log₂(N!)]

Applying the theory of Kolmogorov complexity, Jiang, Li, and Vitányi [27] proved the following lower bound for the order of the average number of operations/running time in a p-pass Shellsort: Ω(pN^(1+1/p)) when p ≤ log₂N, and Ω(pN) when p > log₂N. Therefore, Shellsort has prospects of running in an average time that asymptotically grows like N log N only when using gap sequences whose number of gaps grows in proportion to the logarithm of the array size. It is, however, unknown whether Shellsort can reach this asymptotic order of average-case complexity, which is optimal for comparison sorts. The lower bound was improved by Vitányi [28] for every number of passes p to Ω(N Σ_(k=1..p) h_(k−1)/h_k), where h₀ = N. This result implies, for example, the Jiang–Li–Vitányi lower bound for all p-pass increment sequences and improves that lower bound for particular increment sequences. In fact all bounds (lower and upper) currently known for the average case are precisely matched by this lower bound. For example, it gives the new result that the Janson–Knuth upper bound is matched by the resulting lower bound for the increment sequence used, showing that three-pass Shellsort for this increment sequence uses Θ(N^(23/15)) comparisons/inversions/running time. The formula also allows searching for increment sequences with lower bounds that were previously unknown; for example, the four-pass increment sequence h₁ = N^(11/16), h₂ = N^(7/16), h₃ = N^(3/16), h₄ = 1 has a lower bound greater than Ω(pN^(1+1/p)) = Ω(N^(5/4)): the bound becomes Ω(N(N^(5/16) + N^(4/16) + N^(4/16) + N^(3/16))) = Ω(N^(21/16)).

The worst-case complexity of any version of Shellsort is of higher order: Plaxton, Poonen, and Suel showed that it grows at least as rapidly as Ω(N(log N / log log N)²). [29] [30] Robert Cypher proved a stronger lower bound of Ω(N log²N / log log N) for the size of Shellsort sorting networks, which applies when the gap sequence decreases monotonically. [31]

Applications

Shellsort performs more operations and has a higher cache miss ratio than quicksort. However, since it can be implemented using little code and does not use the call stack, some implementations of the qsort function in the C standard library targeted at embedded systems use it instead of quicksort. Shellsort is, for example, used in the uClibc library. [32] For similar reasons, in the past, Shellsort was used in the Linux kernel. [33]
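
The qsort interface only requires threading a comparison callback and the element size through the gapped insertion sort. A minimal sketch of such an embedded-friendly variant, using Knuth's gap recurrence h = 3h + 1 (an illustration, not the actual uClibc or kernel code):

    #include <stddef.h>

    /* Shellsort with the qsort() calling convention. */
    static void shell_qsort(void *base, size_t nmemb, size_t size,
                            int (*cmp)(const void *, const void *))
    {
        char *a = base;
        size_t gap = 1;
        while (gap < nmemb / 3)
            gap = 3 * gap + 1;                 /* 1, 4, 13, 40, ... */
        for (; gap > 0; gap /= 3) {
            for (size_t i = gap; i < nmemb; i++) {
                /* move element i down its gap-chain by pairwise swaps */
                for (size_t j = i; j >= gap &&
                     cmp(a + (j - gap) * size, a + j * size) > 0; j -= gap) {
                    for (size_t b = 0; b < size; b++) {
                        char t = a[(j - gap) * size + b];
                        a[(j - gap) * size + b] = a[j * size + b];
                        a[j * size + b] = t;
                    }
                }
            }
        }
    }

Like the library qsort it replaces, this needs no recursion and only O(1) auxiliary space.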

Shellsort can also serve as a sub-algorithm of introspective sort, to sort short subarrays and to prevent a slowdown when the recursion depth exceeds a given limit. This principle is employed, for instance, in the bzip2 compressor. [34]


References

  1. Pratt, Vaughan Ronald (1979). Shellsort and Sorting Networks (Outstanding Dissertations in the Computer Sciences) (PDF). Garland. ISBN 978-0-8240-4406-0. Archived (PDF) from the original on 7 September 2021.
  2. "Shellsort & Comparisons". Archived from the original on 20 December 2019. Retrieved 14 November 2015.
  3. Knuth, Donald E. (1997). "Shell's method". The Art of Computer Programming. Volume 3: Sorting and Searching (2nd ed.). Reading, Massachusetts: Addison-Wesley. pp. 83–95. ISBN 978-0-201-89685-5.
  4. Shell, D. L. (1959). "A High-Speed Sorting Procedure" (PDF). Communications of the ACM. 2 (7): 30–32. doi:10.1145/368370.368387. S2CID 28572656. Archived from the original (PDF) on 30 August 2017. Retrieved 18 October 2011.
  5. Some older textbooks and references call this the "Shell–Metzner" sort after Marlene Metzner Norton, but according to Metzner, "I had nothing to do with the sort, and my name should never have been attached to it." See "Shell sort". National Institute of Standards and Technology. Retrieved 17 July 2007.
  6. Sedgewick, Robert (1998). Algorithms in C. Vol. 1 (3rd ed.). Addison-Wesley. pp. 273–281. ISBN 978-0-201-31452-6.
  7. Kernighan, Brian W.; Ritchie, Dennis M. (1996). The C Programming Language (2nd ed.). Prentice Hall. p. 62. ISBN 978-7-302-02412-5.
  8. Frank, R. M.; Lazarus, R. B. (1960). "A High-Speed Sorting Procedure". Communications of the ACM. 3 (1): 20–22. doi:10.1145/366947.366957. S2CID 34066017.
  9. Hibbard, Thomas N. (1963). "An Empirical Study of Minimal Storage Sorting". Communications of the ACM. 6 (5): 206–213. doi:10.1145/366552.366557. S2CID 12146844.
  10. Papernov, A. A.; Stasevich, G. V. (1965). "A Method of Information Sorting in Computer Memories" (PDF). Problems of Information Transmission. 1 (3): 63–75.
  11. Incerpi, Janet; Sedgewick, Robert (1985). "Improved Upper Bounds on Shellsort" (PDF). Journal of Computer and System Sciences. 31 (2): 210–224. doi:10.1016/0022-0000(85)90042-x.
  12. Sedgewick, Robert (1986). "A New Upper Bound for Shellsort". Journal of Algorithms. 7 (2): 159–173. doi:10.1016/0196-6774(86)90001-5.
  13. Gonnet, Gaston H.; Baeza-Yates, Ricardo (1991). "Shellsort". Handbook of Algorithms and Data Structures: In Pascal and C (2nd ed.). Reading, Massachusetts: Addison-Wesley. pp. 161–163. ISBN 978-0-201-41607-7. "Extensive experiments indicate that the sequence defined by α = 0.45454 < 5/11 performs significantly better than other sequences. The easiest way to compute 0.45454n is by (5 * n − 1)/11 using integer arithmetic."
  14. Tokuda, Naoyuki (1992). "An Improved Shellsort". In van Leeuwen, Jan (ed.). Proceedings of the IFIP 12th World Computer Congress on Algorithms, Software, Architecture. Amsterdam: North-Holland Publishing Co. pp. 449–457. ISBN 978-0-444-89747-3.
  15. Ciura, Marcin (2001). "Best Increments for the Average Case of Shellsort" (PDF). In Freiwalds, Rusins (ed.). Proceedings of the 13th International Symposium on Fundamentals of Computation Theory. London: Springer-Verlag. pp. 106–117. ISBN 978-3-540-42487-1. Archived from the original (PDF) on 23 September 2018.
  16. Lee, Ying Wai (21 December 2021). "Empirically Improved Tokuda Gap Sequence in Shellsort". arXiv:2112.11112 [cs.DS].
  17. Skean, Oscar; Ehrenborg, Richard; Jaromczyk, Jerzy W. (1 January 2023). "Optimization Perspectives on Shellsort". arXiv:2301.00316 [cs.DS].
  18. Sedgewick, Robert (1998). "Shellsort". Algorithms in C++, Parts 1–4: Fundamentals, Data Structure, Sorting, Searching. Reading, Massachusetts: Addison-Wesley. pp. 285–292. ISBN 978-0-201-35088-3.
  19. Forshell, Olof (22 May 2018). "How to choose the lengths of my sub sequences for a shell sort?". Stack Overflow. Additional commentary at "Fastest gap sequence for shell sort?" (23 May 2018).
  20. Lee, Ying Wai (21 December 2021). "Optimal Gap Sequences in Shellsort for n ≤ 16 Elements". arXiv:2112.11127 [math.CO].
  21. Gale, David; Karp, Richard M. (April 1972). "A Phenomenon in the Theory of Sorting" (PDF). Journal of Computer and System Sciences. 6 (2): 103–115. doi:10.1016/S0022-0000(72)80016-3.
  22. Selmer, Ernst S. (March 1989). "On Shellsort and the Frobenius Problem" (PDF). BIT Numerical Mathematics. 29 (1): 37–40. doi:10.1007/BF01932703. hdl:1956/19572. S2CID 32467267.
  23. Weiss, Mark Allen (1989). "A good case for Shellsort". Congressus Numerantium. 73: 59–62.
  24. Espelid, Terje O. (December 1973). "Analysis of a Shellsort Algorithm". BIT Numerical Mathematics. 13 (4): 394–400. doi:10.1007/BF01933401. S2CID 119443598. The quoted result is equation (8) on p. 399.
  25. Yao, Andrew Chi-Chih (1980). "An Analysis of (h, k, 1)-Shellsort" (PDF). Journal of Algorithms. 1 (1): 14–50. doi:10.1016/0196-6774(80)90003-6. S2CID 3054966. STAN-CS-79-726. Archived from the original (PDF) on 4 March 2019.
  26. Janson, Svante; Knuth, Donald E. (1997). "Shellsort with Three Increments" (PDF). Random Structures and Algorithms. 10 (1–2): 125–142. arXiv:cs/9608105. CiteSeerX 10.1.1.54.9911. doi:10.1002/(SICI)1098-2418(199701/03)10:1/2<125::AID-RSA6>3.0.CO;2-X.
  27. Jiang, Tao; Li, Ming; Vitányi, Paul (September 2000). "A Lower Bound on the Average-Case Complexity of Shellsort" (PDF). Journal of the ACM. 47 (5): 905–911. arXiv:cs/9906008. CiteSeerX 10.1.1.6.6508. doi:10.1145/355483.355488. S2CID 3265123.
  28. Vitányi, Paul (March 2018). "On the average-case complexity of Shellsort" (PDF). Random Structures and Algorithms. 52 (2): 354–363. arXiv:1501.06461. doi:10.1002/rsa.20737. S2CID 6833808.
  29. Plaxton, C. Greg; Poonen, Bjorn; Suel, Torsten (24–27 October 1992). "Improved lower bounds for Shellsort" (PDF). Proceedings, 33rd Annual Symposium on Foundations of Computer Science. Vol. 33. Pittsburgh, United States. pp. 226–235. CiteSeerX 10.1.1.43.1393. doi:10.1109/SFCS.1992.267769. ISBN 978-0-8186-2900-6. S2CID 15095863.
  30. Plaxton, C. Greg; Suel, Torsten (May 1997). "Lower Bounds for Shellsort" (PDF). Journal of Algorithms. 23 (2): 221–240. CiteSeerX 10.1.1.460.2429. doi:10.1006/jagm.1996.0825.
  31. Cypher, Robert (1993). "A Lower Bound on the Size of Shellsort Sorting Networks". SIAM Journal on Computing. 22: 62–71. doi:10.1137/0222006.
  32. Novoa, Manuel III. "libc/stdlib/stdlib.c". Retrieved 29 October 2014.
  33. "kernel/groups.c". GitHub. Retrieved 5 May 2012.
  34. Julian Seward. "bzip2/blocksort.c". Retrieved 30 March 2011.
