Slowsort

Slowsort is a sorting algorithm. It is of humorous nature and not useful in practice. It is a reluctant algorithm based on the principle of multiply and surrender (a parody formed by taking the opposites of divide and conquer). It was published in 1984 by Andrei Broder and Jorge Stolfi in their paper Pessimal Algorithms and Simplexity Analysis [1] (a parody of optimal algorithms and complexity analysis).

Algorithm

Slowsort is a recursive algorithm. It sorts an array in place, in four steps:

1. Sort the first half of the array, recursively. (1.1)
2. Sort the second half of the array, recursively. (1.2)
3. Find the maximum of the whole array by comparing the last elements of the two sorted halves, and place it at the end. (1.3)
4. Sort the entire array except for the maximum now at the end, recursively. (2)

This is an implementation in pseudocode:

procedure slowsort(A[], start_idx, end_idx)    // Sort array range A[start ... end] in-place.
    if start_idx ≥ end_idx then
        return
    middle_idx := floor((start_idx + end_idx) / 2)
    slowsort(A, start_idx, middle_idx)         // (1.1)
    slowsort(A, middle_idx + 1, end_idx)       // (1.2)
    if A[end_idx] < A[middle_idx] then
        swap(A, end_idx, middle_idx)           // (1.3)
    slowsort(A, start_idx, end_idx - 1)        // (2)

An unoptimized implementation in Haskell (purely functional) may look as follows:

slowsort :: (Ord a) => [a] -> [a]
slowsort xs
  | length xs <= 1 = xs
  | otherwise      = slowsort xs' ++ [max llast rlast]   -- (2)
  where m     = length xs `div` 2
        l     = slowsort $ take m xs                     -- (1.1)
        r     = slowsort $ drop m xs                     -- (1.2)
        llast = last l
        rlast = last r
        xs'   = init l ++ min llast rlast : init r
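Loading the function into GHCi and sorting a small list is a quick sanity check (an example added here, not from the article's sources):

ghci> slowsort [5, 1, 4, 2, 3]
[1,2,3,4,5]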

Complexity

The runtime T(n) for Slowsort is given by the recurrence T(n) = 2T(n/2) + T(n-1) + 1.
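To get a feel for this growth, the recurrence can be evaluated directly. The Haskell sketch below is an illustration added here (it is not from the paper; it treats both recursive halves as having size n `div` 2 and takes T(n) = 1 for n <= 1):

import qualified Data.Map as Map

-- Evaluate T(n) = 2*T(n `div` 2) + T(n-1) + 1 with memoisation, since the
-- doubly recursive definition is otherwise as slow as slowsort itself.
t :: Int -> Integer
t n = fst (go n Map.empty)
  where
    go k memo
      | k <= 1                      = (1, memo)
      | Just v <- Map.lookup k memo = (v, memo)
      | otherwise                   =
          let (a, m1) = go (k `div` 2) memo
              (b, m2) = go (k - 1) m1
              v       = 2 * a + b + 1
          in (v, Map.insert k v m2)

-- t 8 == 88 and t 16 == 784: the counts grow much faster than for
-- any n*log(n) sort.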

A lower asymptotic bound for T(n) in Landau notation is Ω(n^(log₂(n)/(2+ε))) for any fixed ε > 0.
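An informal derivation (a sketch added here, not taken from the paper) shows why the growth is superpolynomial. Dropping the constant term gives T(n) ≥ T(n-1) + T(n/2), and unrolling the T(n-1) term n/2 times yields

\[
T(n) \;\ge\; \tfrac{n}{2}\, T\!\left(\tfrac{n}{4}\right)
\quad\Longrightarrow\quad
T(n) \;\ge\; \prod_{j=0}^{\frac{1}{2}\log_2 n - 1} \frac{n}{2 \cdot 4^{j}}
\;=\; n^{\frac{\log_2 n}{4} - O(1)},
\]

which is already superpolynomial; a sharper version of the same argument gives the stated bound.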

Slowsort is therefore not in polynomial time. Even the best case is worse than Bubble sort.

Related Research Articles

Computable number: Real number that can be computed within arbitrary precision

In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, effective numbers, the computable reals, or recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time.

Merge sort: Divide and conquer sorting algorithm

In computer science, merge sort is an efficient, general-purpose, and comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the relative order of equal elements is the same in the input and output. Merge sort is a divide-and-conquer algorithm that was invented by John von Neumann in 1945. A detailed description and analysis of bottom-up merge sort appeared in a report by Goldstine and von Neumann as early as 1948.
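For contrast with the slowsort listing above, here is a textbook top-down merge sort in the same Haskell style (a standard sketch, not code from the article being described):

mergesort :: Ord a => [a] -> [a]
mergesort xs
  | length xs <= 1 = xs
  | otherwise      = merge (mergesort (take m xs)) (mergesort (drop m xs))
  where m = length xs `div` 2

-- Merge two sorted lists; preferring the left element on ties keeps the
-- sort stable.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

The structure is close to slowsort's; the difference is that merge sort's combine step does useful work, while slowsort's throws most of the recursive effort away.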

The subset sum problem (SSP) is a decision problem in computer science. In its most general formulation, there is a multiset S of integers and a target-sum T, and the question is to decide whether any subset of the integers sums to precisely T. The problem is known to be NP-hard, and some restricted variants of it are NP-complete as well.
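A brute-force Haskell decision procedure makes the formulation concrete (an exponential-time sketch added for illustration, usable only for tiny inputs):

-- Does any subset of xs sum to the target? Each element is either
-- skipped or taken.
subsetSum :: [Integer] -> Integer -> Bool
subsetSum []     target = target == 0
subsetSum (x:xs) target = subsetSum xs target || subsetSum xs (target - x)

-- subsetSum [3, 9, 8, 4, 5, 7] 15 == True  (for example, 3 + 4 + 8)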

Standard ML (SML) is a general-purpose, modular, functional programming language with compile-time type checking and type inference. It is popular for writing compilers, for programming language research, and for developing theorem provers.

Levinson recursion or Levinson–Durbin recursion is a procedure in linear algebra to recursively calculate the solution to an equation involving a Toeplitz matrix. The algorithm runs in Θ(n²) time, which is a strong improvement over Gauss–Jordan elimination, which runs in Θ(n³).

Stooge sort: Inefficient recursive sorting algorithm

Stooge sort is a recursive sorting algorithm. It is notable for its exceptionally bad time complexity of O(n^(log 3 / log 1.5)) = O(n^2.7095...). The running time of the algorithm is thus extremely slow compared to reasonable sorting algorithms, and slower than bubble sort, a canonical example of a fairly inefficient sort. It is, however, more efficient than Slowsort. The name comes from The Three Stooges.
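A list-based Haskell sketch of stooge sort, in the style of the slowsort listing above (the usual presentation is index-based and in-place, so this version is illustrative only):

stoogesort :: Ord a => [a] -> [a]
stoogesort xs
  | n < 2     = xs
  | n < 3     = swapped
  | otherwise = firstTwoThirds (lastTwoThirds (firstTwoThirds swapped))
  where
    n = length xs
    t = n `div` 3
    -- Swap the first and last elements if they are out of order.
    swapped = case xs of
      (a:rest) | last rest < a -> last rest : init rest ++ [a]
      _                        -> xs
    -- Recursively sort the first (respectively last) two thirds.
    firstTwoThirds ys = let (hd, tl) = splitAt (n - t) ys in stoogesort hd ++ tl
    lastTwoThirds  ys = let (hd, tl) = splitAt t ys in hd ++ stoogesort tl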

In mathematics and computer algebra, automatic differentiation, also called algorithmic differentiation or computational differentiation, is a set of techniques for evaluating the partial derivatives of a function specified by a computer program.
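A minimal forward-mode sketch in Haskell conveys the idea: every number carries its derivative, and the arithmetic operators propagate both components through the program (the Dual type and deriv helper are illustrative, not any library's API):

-- A dual number is a value paired with its derivative.
data Dual = Dual Double Double

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Derivative of f at x: seed the derivative component with 1.
deriv :: (Dual -> Dual) -> Double -> Double
deriv f x = let Dual _ d = f (Dual x 1) in d

-- deriv (\y -> y * y + 3 * y) 2 == 7.0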

An envy-free cake-cutting is a kind of fair cake-cutting. It is a division of a heterogeneous resource ("cake") that satisfies the envy-free criterion, namely, that every partner feels that their allocated share is at least as good as any other share, according to their own subjective valuation.

Affine arithmetic (AA) is a model for self-validated numerical analysis. In AA, the quantities of interest are represented as affine combinations of certain primitive variables, which stand for sources of uncertainty in the data or approximations made during the computation.

Quicksort: Divide and conquer sorting algorithm

Quicksort is an efficient, general-purpose sorting algorithm. Quicksort was developed by British computer scientist Tony Hoare in 1959 and published in 1961. It is still a commonly used algorithm for sorting. Overall, it is slightly faster than merge sort and heapsort for randomized data, particularly on larger distributions.
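The well-known Haskell rendering of the scheme, using the head as pivot (a clarity-first sketch; it forgoes the in-place partitioning that makes real Quicksort implementations fast):

quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort smaller ++ [p] ++ quicksort larger
  where smaller = [x | x <- xs, x < p]   -- elements below the pivot
        larger  = [x | x <- xs, x >= p]  -- elements at or above the pivot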

Recursion (computer science): Use of functions that call themselves

In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.

The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
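A two-line Haskell definition illustrates this (an example added here, not part of the quoted passage): one finite equation denotes the infinite list of all natural numbers, any finite prefix of which can be computed.

-- The naturals, defined by a single finite recursive equation.
nats :: [Integer]
nats = 0 : map (+ 1) nats

-- take 5 nats == [0, 1, 2, 3, 4]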

The fast multipole method (FMM) is a numerical technique that was developed to speed up the calculation of long-ranged forces in the n-body problem. It does this by expanding the system Green's function using a multipole expansion, which allows one to group sources that lie close together and treat them as if they are a single source.

Simplexity is a neologism which proposes a possible complementary relationship between complexity and simplicity.

The plane wave expansion method (PWE) refers to a computational technique in electromagnetics that solves Maxwell's equations by formulating an eigenvalue problem out of them. This method is popular in the photonic crystal community as a way of solving for the band structure of specific photonic crystal geometries. PWE is traceable to analytical formulations, and is useful for calculating modal solutions of Maxwell's equations over an inhomogeneous or periodic geometry. It is specifically tuned to solve problems in time-harmonic form with non-dispersive media.

A schema is a template in computer science used in the field of genetic algorithms that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, forming a basis for a product topology on strings. In other words, schemata can be used to generate a topology on a space of strings.
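For instance, matching a schema over the binary alphabet takes only a few lines of Haskell (a toy illustration; '*' is the customary wildcard symbol):

-- A schema is a string over '0', '1' and '*', where '*' matches either bit.
matches :: String -> String -> Bool
matches schema s = length schema == length s && and (zipWith ok schema s)
  where ok '*' _ = True
        ok c   b = c == b

-- matches "1*0" "110" == True
-- matches "1*0" "011" == False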

Samplesort is a divide and conquer sorting algorithm often used in parallel processing systems. Conventional divide and conquer sorting algorithms partition the array into sub-intervals or buckets; the buckets are then sorted individually and concatenated together. However, if the array is non-uniformly distributed, the performance of these sorting algorithms can be significantly throttled. Samplesort addresses this issue by selecting a sample of size s from the n-element sequence and determining the range of the buckets by sorting the sample and choosing p − 1 < s elements from the result. These elements (the splitters) then divide the array into p approximately equal-sized buckets. Samplesort is described in the 1970 paper "Samplesort: A Sampling Approach to Minimal Storage Tree Sorting" by W. D. Frazer and A. C. McKellar.
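A sequential Haskell sketch of the bucket construction (illustrative only: it takes the first s elements as the sample rather than sampling at random, and it sorts the buckets one after another instead of in parallel):

import Data.List (partition, sort)

-- Sort a sample, pick about p-1 evenly spaced splitters from it,
-- distribute the input into buckets, and sort each bucket.
samplesort :: Ord a => Int -> Int -> [a] -> [a]
samplesort p s xs
  | length xs <= s = sort xs
  | otherwise      = concatMap sort (distribute splitters xs)
  where
    sample    = sort (take s xs)
    splitters = take (p - 1) (takeEvery (max 1 (s `div` p)) sample)

-- Put each element into the bucket of the first splitter it does not exceed.
distribute :: Ord a => [a] -> [a] -> [[a]]
distribute []     xs = [xs]
distribute (b:bs) xs = let (lo, hi) = partition (<= b) xs
                       in lo : distribute bs hi

-- Every k-th element of a list, used to space the splitters.
takeEvery :: Int -> [a] -> [a]
takeEvery k ys = case drop (k - 1) ys of
  (z:zs) -> z : takeEvery k zs
  []     -> []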

In computer science and data mining, MinHash is a technique for quickly estimating how similar two sets are. The scheme was invented by Andrei Broder, and initially used in the AltaVista search engine to detect duplicate web pages and eliminate them from search results. It has also been applied in large-scale clustering problems, such as clustering documents by the similarity of their sets of words.
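The estimator works because two sets with Jaccard similarity J agree on any single MinHash position with probability J, so the agreement rate over k positions estimates J. A toy Haskell version over sets of Ints (the affine hash family below is deliberately simplistic, and non-empty inputs are assumed):

-- k hash functions of the form x -> (a*x + 7) mod p; keep the minimum
-- hash of each set under every function.
signature :: Int -> [Int] -> [Int]
signature k xs = [ minimum [ (a * x + 7) `mod` 1000003 | x <- xs ]
                 | a <- take k [3, 5 ..] ]

-- Fraction of positions on which the signatures agree: the MinHash
-- estimate of the Jaccard similarity.
similarity :: Int -> [Int] -> [Int] -> Double
similarity k xs ys = fromIntegral agree / fromIntegral k
  where agree = length (filter id (zipWith (==) (signature k xs) (signature k ys)))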

In the design and analysis of algorithms for combinatorial optimization, parametric search is a technique invented by Nimrod Megiddo for transforming a decision algorithm into an optimization algorithm. It is frequently used for solving optimization problems in computational geometry.

In the bin covering problem, items of different sizes must be packed into a finite number of bins or containers, each of which must contain at least a certain given total size, in a way that maximizes the number of bins used.

In numerical analysis, the ITP method, short for Interpolate Truncate and Project, is the first root-finding algorithm that achieves the superlinear convergence of the secant method while retaining the optimal worst-case performance of the bisection method. It is also the first method with guaranteed average performance strictly better than the bisection method under any continuous distribution. In practice it performs better than traditional interpolation-based and hybrid strategies, since it not only converges super-linearly over well-behaved functions but also guarantees fast performance under ill-behaved functions where interpolations fail.

References

  1. Andrei Broder; Jorge Stolfi (1984). "Pessimal Algorithms and Simplexity Analysis" (PDF). ACM SIGACT News. 16 (3): 49–53. CiteSeerX 10.1.1.116.9158. doi:10.1145/990534.990536. S2CID 6566140.