Gnome sort

Visualisation of Gnome sort (animated GIF)

Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²)
Best-case performance: O(n)
Average performance: O(n²)
Worst-case space complexity: O(1) auxiliary

Gnome sort (nicknamed stupid sort) is a variation of the insertion sort algorithm that does not use nested loops. Gnome sort was originally proposed in 2000 by Iranian computer scientist Hamid Sarbazi-Azad (professor of Computer Science and Engineering at Sharif University of Technology). [1] The sort was first called stupid sort [2] (not to be confused with bogosort), and was later described by Dick Grune and named gnome sort. [3]

Description

Gnome sort performs at least as many comparisons as insertion sort and has the same asymptotic run time characteristics. Gnome sort works by building a sorted list one element at a time, getting each item to the proper place in a series of swaps. The average running time is O(n²) but tends towards O(n) if the list is initially almost sorted. [4] [note 1]

Dick Grune described the sorting method with the following story: [3]

Gnome Sort is based on the technique used by the standard Dutch Garden Gnome (Du.: tuinkabouter).
Here is how a garden gnome sorts a line of flower pots.
Basically, he looks at the flower pot next to him and the previous one; if they are in the right order he steps one pot forward, otherwise, he swaps them and steps one pot backward.
Boundary conditions: if there is no previous pot, he steps forwards; if there is no pot next to him, he is done.

"Gnome Sort - The Simplest Sort Algorithm". Dickgrune.com

Pseudocode

Here is pseudocode for the gnome sort using a zero-based array:

procedure gnomeSort(a[]):
    pos := 0
    while pos < length(a):
        if (pos == 0 or a[pos] >= a[pos-1]):
            pos := pos + 1
        else:
            swap a[pos] and a[pos-1]
            pos := pos - 1
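
For illustration, the pseudocode translates almost line for line into Python. This is a minimal sketch; the function name gnome_sort is chosen here and is not part of the original pseudocode.

    def gnome_sort(a):
        """Sort the list a in place using gnome sort."""
        pos = 0
        while pos < len(a):
            if pos == 0 or a[pos] >= a[pos - 1]:
                # In order (or at the left end): step forward.
                pos += 1
            else:
                # Out of order: swap and step back.
                a[pos], a[pos - 1] = a[pos - 1], a[pos]
                pos -= 1
        return a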

Example

Given an unsorted array a = [5, 3, 2, 4], gnome sort takes the following steps during the while loop. The current position is given by the value of the variable pos.

Current array | pos | Condition in effect | Action to take
[5, 3, 2, 4]  | 0   | pos == 0            | increment pos
[5, 3, 2, 4]  | 1   | a[pos] < a[pos-1]   | swap, decrement pos
[3, 5, 2, 4]  | 0   | pos == 0            | increment pos
[3, 5, 2, 4]  | 1   | a[pos] ≥ a[pos-1]   | increment pos
[3, 5, 2, 4]  | 2   | a[pos] < a[pos-1]   | swap, decrement pos
[3, 2, 5, 4]  | 1   | a[pos] < a[pos-1]   | swap, decrement pos
[2, 3, 5, 4]  | 0   | pos == 0            | increment pos
[2, 3, 5, 4]  | 1   | a[pos] ≥ a[pos-1]   | increment pos
[2, 3, 5, 4]  | 2   | a[pos] ≥ a[pos-1]   | increment pos
[2, 3, 5, 4]  | 3   | a[pos] < a[pos-1]   | swap, decrement pos
[2, 3, 4, 5]  | 2   | a[pos] ≥ a[pos-1]   | increment pos
[2, 3, 4, 5]  | 3   | a[pos] ≥ a[pos-1]   | increment pos
[2, 3, 4, 5]  | 4   | pos == length(a)    | finished
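
The sequence in the table can be reproduced with a small tracing variant of the pseudocode (an illustrative sketch; the function name gnome_sort_trace and the output format are chosen here, not taken from the article):

    def gnome_sort_trace(a):
        """Gnome sort that prints the array and position at each step."""
        pos = 0
        while pos < len(a):
            print(a, "pos =", pos)
            if pos == 0 or a[pos] >= a[pos - 1]:
                pos += 1
            else:
                a[pos], a[pos - 1] = a[pos - 1], a[pos]
                pos -= 1
        print(a, "pos =", pos, "(finished)")

    gnome_sort_trace([5, 3, 2, 4])  # prints the same sequence of array states as the table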

Notes

  1. Almost sorted means that each item in the list is not far from its proper position (not farther than some small constant distance).

Related Research Articles

Heap (data structure) – Computer science data structure

In computer science, a heap is a tree-based data structure that satisfies the heap property: In a max heap, for any given node C, if P is a parent node of C, then the key of P is greater than or equal to the key of C. In a min heap, the key of P is less than or equal to the key of C. The node at the "top" of the heap is called the root node.

Insertion sort – Sorting algorithm

Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time by comparisons. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages, such as a simple implementation, efficiency on small or nearly sorted lists, stability, and the ability to sort in place.
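
For comparison with the gnome sort pseudocode above, here is a minimal insertion sort sketch in Python (the function name insertion_sort is chosen for this illustration):

    def insertion_sort(a):
        """Sort the list a in place using insertion sort."""
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            # Shift larger elements one slot to the right to make room for key.
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a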

Sorting algorithm – Algorithm that arranges lists in order

In computer science, a sorting algorithm is an algorithm that puts elements of a list into an order. The most frequently used orders are numerical order and lexicographical order, and either ascending or descending. Efficient sorting is important for optimizing the efficiency of other algorithms that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output.

In computer science, selection sort is an in-place comparison sorting algorithm. It has an O(n²) time complexity, which makes it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.

Binary heap – Variant of heap data structure

A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for heapsort.

In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small positive integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that possess distinct key values, and applying prefix sum on those counts to determine the positions of each key value in the output sequence. Its running time is linear in the number of items and the difference between the maximum key value and the minimum key value, so it is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items. It is often used as a subroutine in radix sort, another sorting algorithm, which can handle larger keys more efficiently.
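
A minimal Python sketch of the counting-plus-prefix-sum idea described above (assuming integer keys; the function name counting_sort is chosen for this illustration):

    def counting_sort(keys):
        """Return a sorted copy of keys, assumed to be small integers."""
        if not keys:
            return []
        low, high = min(keys), max(keys)
        counts = [0] * (high - low + 1)
        for k in keys:                  # count occurrences of each key value
            counts[k - low] += 1
        total = 0
        for i, c in enumerate(counts):  # prefix sums give each value's start position
            counts[i] = total
            total += c
        out = [None] * len(keys)
        for k in keys:                  # place each key at its computed position
            out[counts[k - low]] = k
            counts[k - low] += 1
        return out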

In computer science, bogosort is a sorting algorithm based on the generate and test paradigm. The function successively generates permutations of its input until it finds one that is sorted. It is not considered useful for sorting, but may be used for educational purposes, to contrast it with more efficient algorithms.
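
A minimal Python sketch of the randomized shuffle-and-test variant (educational only; the function name bogosort is chosen for this illustration):

    import random

    def bogosort(a):
        """Repeatedly shuffle a until it happens to be sorted."""
        while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
            random.shuffle(a)
        return a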

Sorting – Action of arranging objects into order

Sorting refers to ordering data in an increasing or decreasing manner according to some linear relationship among the data items. The term covers two related operations:

  1. ordering: arranging items in a sequence ordered by some criterion;
  2. categorizing: grouping items with similar properties.

Comb sort – Interval based sorting algorithm

Comb sort is a relatively simple sorting algorithm originally designed by Włodzimierz Dobosiewicz and Artur Borowy in 1980, later rediscovered by Stephen Lacey and Richard Box in 1991. Comb sort improves on bubble sort in the same way that Shellsort improves on insertion sort, in that they both allow elements that start far away from their intended position to move more than one space per swap.

Cocktail shaker sort – Sorting algorithm

Cocktail shaker sort, also known as bidirectional bubble sort, cocktail sort, shaker sort, ripple sort, shuffle sort, or shuttle sort, is an extension of bubble sort. The algorithm extends bubble sort by operating in two directions. While it improves on bubble sort by more quickly moving items to the beginning of the list, it provides only marginal performance improvements.

Introsort or introspective sort is a hybrid sorting algorithm that provides both fast average performance and (asymptotically) optimal worst-case performance. It begins with quicksort, switches to heapsort when the recursion depth exceeds a level based on (the logarithm of) the number of elements being sorted, and switches to insertion sort when the number of elements is below some threshold. This combines the good parts of the three algorithms, with practical performance comparable to quicksort on typical data sets and worst-case O(n log n) runtime due to the heapsort. Since the three algorithms it uses are comparison sorts, it is also a comparison sort.

Library sort or gapped insertion sort is a sorting algorithm that uses an insertion sort, but with gaps in the array to accelerate subsequent insertions. The name comes from an analogy:

Suppose a librarian were to store their books alphabetically on a long shelf, starting with the As at the left end, and continuing to the right along the shelf with no spaces between the books until the end of the Zs. If the librarian acquired a new book that belongs to the B section, once they find the correct space in the B section, they will have to move every book over, from the middle of the Bs all the way down to the Zs in order to make room for the new book. This is an insertion sort. However, if they were to leave a space after every letter, as long as there was still space after B, they would only have to move a few books to make room for the new one. This is the basic principle of the Library Sort.

Quicksort – Divide and conquer sorting algorithm

Quicksort is an efficient, general-purpose sorting algorithm. Quicksort was developed by British computer scientist Tony Hoare in 1959 and published in 1961. It is still a commonly used algorithm for sorting. Overall, it is slightly faster than merge sort and heapsort for randomized data, particularly on larger distributions.

Dick Grune is a Dutch computer scientist and university lecturer best known for inventing and developing the first version of the Concurrent Versions System (CVS). Grune was involved in the construction of Algol 68 compilers in the 1970s and the Amsterdam Compiler Kit in the 1980s.

Fisher–Yates shuffle – Algorithm for generating a random permutation of a finite set

The Fisher–Yates shuffle is an algorithm for shuffling a finite sequence. The algorithm takes a list of all the elements of the sequence, and continually determines the next element in the shuffled sequence by randomly drawing an element from the list until no elements remain. The algorithm produces an unbiased permutation: every permutation is equally likely. The modern version of the algorithm takes time proportional to the number of items being shuffled and shuffles them in place.
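
A minimal Python sketch of the modern in-place form of the shuffle (the function name fisher_yates_shuffle is chosen for this illustration):

    import random

    def fisher_yates_shuffle(a):
        """Shuffle the list a in place, each permutation being equally likely."""
        for i in range(len(a) - 1, 0, -1):
            j = random.randint(0, i)   # pick an index from the not-yet-fixed prefix
            a[i], a[j] = a[j], a[i]
        return a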

In computer science, a search data structure is any data structure that allows the efficient retrieval of specific items from a set of items, such as a specific record from a database.

Bubble sort – Simple comparison sorting algorithm

Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the input list element by element, comparing the current element with the one after it, swapping their values if needed. These passes through the list are repeated until no swaps have to be performed during a pass, meaning that the list has become fully sorted. The algorithm, which is a comparison sort, is named for the way the larger elements "bubble" up to the top of the list.
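
A minimal Python sketch of the repeated-pass-until-no-swaps behaviour described above (the function name bubble_sort is chosen for this illustration):

    def bubble_sort(a):
        """Sort the list a in place using bubble sort."""
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for i in range(1, n):
                if a[i - 1] > a[i]:
                    a[i - 1], a[i] = a[i], a[i - 1]
                    swapped = True
            n -= 1  # the largest remaining element has bubbled to the end
        return a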

Cycle sort – Comparison sorting algorithm

Cycle sort is an in-place, unstable sorting algorithm, a comparison sort that is theoretically optimal in terms of the total number of writes to the original array, unlike any other in-place sorting algorithm. It is based on the idea that the permutation to be sorted can be factored into cycles, which can individually be rotated to give a sorted result.

Proxmap sort

ProxmapSort, or Proxmap sort, is a sorting algorithm that works by partitioning an array of data items, or keys, into a number of "subarrays". The name is short for computing a "proximity map," which indicates for each key K the beginning of a subarray where K will reside in the final sorted order. Keys are placed into each subarray using insertion sort. If keys are "well distributed" among the subarrays, sorting occurs in linear time. The computational complexity estimates involve the number of subarrays and the proximity mapping function, the "map key," used. It is a form of bucket and radix sort.

Block sort – Efficient sorting algorithm that combines insert and merge operations

Block sort, or block merge sort, is a sorting algorithm combining at least two merge operations with an insertion sort to arrive at O(n log n) (see Big O notation) in-place stable sorting time. It gets its name from the observation that merging two sorted lists, A and B, is equivalent to breaking A into evenly sized blocks, inserting each A block into B under special rules, and merging AB pairs.

References

  1. Sarbazi-Azad, Hamid. "Hamid Sarbazi-Azad profile page". Archived from the original on 2018-10-16. Retrieved 2018-10-16.
  2. Sarbazi-Azad, Hamid (2 October 2000). "Stupid Sort: A new sorting algorithm" (PDF). Newsletter (599). Computing Science Department, Univ. of Glasgow: 4. Archived (PDF) from the original on 7 March 2012. Retrieved 25 November 2014.
  3. "Gnome Sort - The Simplest Sort Algorithm". Dickgrune.com. 2000-10-02. Archived from the original on 2017-08-31. Retrieved 2017-07-20.
  4. Paul E. Black. "gnome sort". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. Archived from the original on 2011-08-11. Retrieved 2011-08-20.