Super-recursive algorithm

In computability theory, super-recursive algorithms are posited as a generalization of hypercomputation: hypothetical algorithms that are more powerful, that is, compute more than Turing machines.

The term was introduced by Mark Burgin, whose book Super-recursive algorithms develops their theory and presents several mathematical models.

Burgin argues that super-recursive algorithms can be used to disprove the Church–Turing thesis. This point of view has been criticized within the mathematical community and is not widely accepted.

Definition

Burgin (2005: 13) uses the term recursive algorithms for algorithms that can be implemented on Turing machines, and uses the word algorithm in a more general sense. Then a super-recursive class of algorithms is "a class of algorithms in which it is possible to compute functions not computable by any Turing machine" (Burgin 2005: 107).

Super-recursive algorithms are also related to algorithmic schemes, another novel concept of Burgin's; algorithmic schemes are more general than super-recursive algorithms. Burgin argues (2005: 115) that it is necessary to make a clear distinction between super-recursive algorithms and those algorithmic schemes that are not algorithms. Under this distinction, some types of hypercomputation are obtained by super-recursive algorithms.
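
The flavor of Burgin's central super-recursive model, the inductive Turing machine, is computation in the limit: the machine keeps revising a provisional answer, and the result is the answer that eventually stabilizes, even though no ordinary algorithm can certify when stabilization has happened. The following minimal Python sketch illustrates the idea for the halting predicate; the step-function representation of programs is an assumption of this toy example, not Burgin's formalism.

```python
# Toy programs: a step function returns the next state, or None when halted.
def counter_program(state):
    """Halts after counting to 5."""
    return state + 1 if state < 5 else None

def loop_program(state):
    """Never halts."""
    return state

def guess_halts(program, stage):
    """Stage-n guess: run `program` for `stage` steps and report
    whether it has halted yet.  Each individual guess is computable."""
    state = 0
    for _ in range(stage):
        nxt = program(state)
        if nxt is None:
            return True   # observed halting: this guess is final
        state = nxt
    return False          # provisional guess: "does not halt"

# The sequence of guesses stabilizes to the true answer:
print([guess_halts(counter_program, n) for n in range(8)])
# [False, False, False, False, False, False, True, True]
# For loop_program every guess is False -- correct, but no Turing
# machine can certify that the guesses have already stabilized.
```

Every individual guess is computable; what is super-recursive is taking the stabilized value of the whole sequence as the output.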

Relation to the Church–Turing thesis

The Church–Turing thesis in recursion theory relies on a particular definition of the term algorithm. Using his own definitions, which are more general than the one commonly used in recursion theory, Burgin argues that super-recursive algorithms disprove the Church–Turing thesis. He further claims to prove that super-recursive algorithms could hypothetically provide even greater efficiency gains than quantum algorithms.

Burgin's interpretation of super-recursive algorithms has encountered opposition in the mathematical community. One critic is logician Martin Davis, who argues that Burgin's claims have been well understood "for decades". Davis states,

"The present criticism is not about the mathematical discussion of these matters but only about the misleading claims regarding physical systems of the present and future."(Davis 2006: 128)

Davis disputes Burgin's claims that sets at level Δ⁰₂ of the arithmetical hierarchy can be called computable, saying

"It is generally understood that for a computational result to be useful one must be able to at least recognize that it is indeed the result sought." (Davis 2006: 128)

Related Research Articles

In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes and deduce valid inferences.
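
As a concrete instance of such a finite sequence of instructions, here is Euclid's greatest-common-divisor algorithm, a standard textbook example, written as a short Python sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero; the last nonzero value is the gcd."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```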

A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computation are mathematical equation solving and the execution of computer algorithms.

In computability theory, the Church–Turing thesis is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability, among them Alonzo Church's lambda calculus, the general recursive functions of Gödel and Herbrand, and Turing's machines; all were later proved to compute exactly the same class of functions.
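
That equivalence can be glimpsed in miniature: Church numerals encode numbers as pure functions in the style of the lambda calculus, yet yield the same arithmetic as ordinary machine-style code. A toy Python sketch (an illustration, not a proof of the equivalence):

```python
# Church numerals: n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```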

In theoretical computer science and mathematics, the theory of computation is the branch that deals with which problems can be solved on a model of computation using an algorithm, how efficiently they can be solved, and to what degree (for example, approximately rather than exactly). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".

A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm.
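
A complete simulator for the model fits in a few lines. The sketch below is a minimal illustration (the rule-table encoding is an assumption of this example); it runs a three-rule machine that increments a binary number:

```python
def run_turing_machine(rules, tape, head, state):
    """Execute (state, symbol) -> (write, move, next_state) rules until
    the machine enters the halting state 'H'."""
    tape = dict(enumerate(tape))            # sparse tape, blank = '_'
    while state != 'H':
        write, move, state = rules[(state, tape.get(head, '_'))]
        tape[head] = write
        head += {'L': -1, 'R': 1}[move]
    cells = range(min(tape), max(tape) + 1)
    return ''.join(tape.get(i, '_') for i in cells).strip('_')

# Binary increment: start on the rightmost digit, propagate the carry left.
rules = {
    ('carry', '1'): ('0', 'L', 'carry'),    # 1 + carry = 0, keep carrying
    ('carry', '0'): ('1', 'L', 'H'),        # 0 + carry = 1, done
    ('carry', '_'): ('1', 'L', 'H'),        # ran off the left end: new digit
}
print(run_turing_machine(rules, '1011', head=3, state='carry'))  # 1100
```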

In computability theory, a system of data-manipulation rules is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine. This means that this system is able to recognize or decide other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a data-manipulation rule set. Virtually all programming languages today are Turing-complete.
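
How little machinery universality requires can be seen from Brainfuck, an eight-instruction language known to be Turing-complete given an unbounded tape. A compact interpreter for it, sketched in Python:

```python
def brainfuck(code: str, stdin: str = '') -> str:
    """Interpret Brainfuck: 8 instructions over a tape of byte cells."""
    tape, ptr, pc, out, inp = {}, 0, 0, [], iter(stdin)
    jump, stack = {}, []                    # matching-bracket table
    for i, c in enumerate(code):
        if c == '[': stack.append(i)
        elif c == ']': j = stack.pop(); jump[i], jump[j] = j, i
    while pc < len(code):
        c = code[pc]
        if   c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape.get(ptr, 0) + 1) % 256
        elif c == '-': tape[ptr] = (tape.get(ptr, 0) - 1) % 256
        elif c == '.': out.append(chr(tape.get(ptr, 0)))
        elif c == ',': tape[ptr] = ord(next(inp, chr(0)))
        elif c == '[' and tape.get(ptr, 0) == 0: pc = jump[pc]
        elif c == ']' and tape.get(ptr, 0) != 0: pc = jump[pc]
        pc += 1
    return ''.join(out)

# 8 * 8 + 1 = 65 = ASCII 'A'
print(brainfuck('++++++++[>++++++++<-]>+.'))   # 'A'
```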

Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.

Hypercomputation or super-Turing computation is a set of hypothetical models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic.

In computability theory, a set S of natural numbers is called computably enumerable (c.e.), recursively enumerable (r.e.), semidecidable, partially decidable, listable, provable or Turing-recognizable if there is an algorithm that halts precisely on the members of S: membership can be confirmed in finite time, but non-membership need never be detected.
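
The one-sided behaviour is easy to exhibit. In the Python sketch below, S is the range of a total computable function f (here squaring, so S happens to be decidable too, but the unbounded-search pattern is the general semidecision recipe):

```python
import itertools

def f(n: int) -> int:
    """Any total computable function; its range is computably enumerable."""
    return n * n

def semidecide(x: int) -> bool:
    """Halt (returning True) iff x is in the range of f.
    On members the search succeeds; on non-members it runs forever --
    exactly the one-sided behaviour that defines c.e. sets."""
    for n in itertools.count():
        if f(n) == x:
            return True

print(semidecide(49))   # halts: 49 = f(7)
# semidecide(50)        # would loop forever: 50 is not a square
```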

Solomonoff's theory of inductive inference proves that, under its common-sense assumptions (axioms), the best possible scientific model is the shortest algorithm that generates the empirical data under consideration. Besides the choice of data, its other assumptions are that the programming language is fixed before the data are seen (to avoid the post-hoc fallacy) and that the environment being observed is generated by some unknown algorithm. This is also called a theory of induction. Because it is grounded in algorithmic information theory, it encompasses statistical as well as dynamical criteria for model selection. It was introduced by Ray Solomonoff and is based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to every computable theory.
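
A drastically simplified sketch of the mechanism can be given in Python. The "programs" here are just bit patterns repeated forever, a toy stand-in for programs on a universal machine (an assumption of this example): each hypothesis receives prior weight 2^-length, hypotheses inconsistent with the data are discarded, and prediction averages the survivors.

```python
from itertools import product

def hypotheses(max_len=8):
    """Toy program space: a 'program' is a bit pattern repeated forever."""
    for length in range(1, max_len + 1):
        for bits in product('01', repeat=length):
            yield ''.join(bits)

def predict_next(data: str) -> float:
    """Posterior probability that the next bit is '1', mixing all
    patterns consistent with `data`, weighted by the 2^-length prior."""
    num = den = 0.0
    for prog in hypotheses():
        prior = 2.0 ** -len(prog)
        stream = prog * (len(data) // len(prog) + 2)
        if stream.startswith(data):                 # consistent with data?
            den += prior
            if stream[len(data)] == '1':
                num += prior
    return num / den

print(round(predict_next('010101'), 3))
# 0.043: the mass concentrates on '0101...', which predicts '0' next
```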

Computability is the ability to solve a problem in an effective manner. It is a key topic of the field of computability theory within mathematical logic and the theory of computation within computer science. The computability of a problem is closely linked to the existence of an algorithm to solve the problem.

Computable functions are the basic objects of study in computability theory. They are the formalized analogue of the intuitive notion of an algorithm, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e., given an input from the function's domain it returns the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation, but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the general recursive functions.
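
The second of those models builds arithmetic from a handful of primitives by recursion. A brief Python sketch of the first steps (an illustration of the style of definition, using Python recursion in place of the formal recursion schema):

```python
def succ(n):
    """The successor primitive."""
    return n + 1

def add(m, n):
    """Addition by primitive recursion on n:
    add(m, 0) = m;  add(m, n+1) = succ(add(m, n))."""
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    """Multiplication by primitive recursion, built on add."""
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(add(3, 4), mul(3, 4))  # 7 12
```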

In computability theory, a Turing reduction from a decision problem A to a decision problem B is an oracle machine that decides problem A given an oracle for B. It can be understood as an algorithm that could be used to solve A if it had available to it a subroutine for solving B. The concept can be analogously applied to function problems.
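
The definition translates directly into code: an ordinary algorithm that is allowed to call a subroutine for the oracle problem. In the Python sketch below, the names and the toy oracle are assumptions of this example; for the real halting problem the oracle itself cannot be implemented.

```python
from typing import Callable

Oracle = Callable[[str, str], bool]   # halts(program_text, input) -> bool

def halts_on_first_k(program: str, k: int, halting_oracle: Oracle) -> bool:
    """Turing reduction: decides 'does `program` halt on inputs 0..k-1?'
    given an oracle for the ordinary halting problem.  The reduction is
    an honest algorithm; only the oracle is hypothetical."""
    return all(halting_oracle(program, str(i)) for i in range(k))

def never_halts(program: str, inp: str, halting_oracle: Oracle) -> bool:
    """The complement of the halting problem, reduced to the halting
    problem by a single oracle query plus negation."""
    return not halting_oracle(program, inp)

# With a toy oracle for a trivially decidable 'language' the reduction
# runs as-is:
toy_oracle = lambda prog, inp: prog == 'halting-program'
print(halts_on_first_k('halting-program', 3, toy_oracle))  # True
```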

In computer science, unbounded nondeterminism or unbounded indeterminacy is a property of concurrency by which the delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Unbounded nondeterminism became an important issue in the development of the denotational semantics of concurrency, and later became part of research into the theoretical concept of hypercomputation.

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects, such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."
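
Incompressibility can be made concrete with a computable stand-in: the output length of a real compressor is an upper bound on a string's description length, so structured data compresses while random data does not. A rough Python sketch (compressor output only crudely approximates Kolmogorov complexity, which is itself uncomputable):

```python
import os, zlib

def compressed_size(data: bytes) -> int:
    """Upper bound on description length via a real compressor."""
    return len(zlib.compress(data, 9))

structured = b'01' * 500          # highly regular: tiny description
random_ish = os.urandom(1000)     # incompressible with high probability

print(compressed_size(structured))  # small (tens of bytes)
print(compressed_size(random_ish))  # close to 1000, often slightly more
```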

Algorithm characterizations are attempts to formalize the word algorithm, which has no generally accepted formal definition; researchers are actively working on the problem, and various "characterizations" of the notion have been proposed.

The history of the Church–Turing thesis ("thesis") involves the history of the development of the study of the nature of functions whose values are effectively calculable; or, in more modern terms, functions whose values are algorithmically computable. It is an important topic in modern mathematical theory and computer science, particularly associated with the work of Alonzo Church and Alan Turing.

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable.
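
The undecidability proof is a diagonal construction that can be written out in a few lines: given any claimed halting decider, build a program that does the opposite of whatever the decider predicts about it. A Python sketch (the decider passed in is hypothetical):

```python
def make_counterexample(halts):
    """Given any candidate decider halts(f) -> bool for zero-argument
    functions, construct a program g that the decider gets wrong."""
    def g():
        if halts(g):          # decider says g halts...
            while True:       # ...so g loops forever,
                pass
        return                # ...otherwise g halts immediately.
    return g

# Whatever `halts` answers on g, it is wrong.  E.g. an always-True decider:
g = make_counterexample(lambda f: True)
# The decider claims g halts, yet g() would loop forever.  An always-False
# decider fails symmetrically, and so does every other candidate.
```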

References

Burgin, Mark (2005). Super-Recursive Algorithms. Monographs in Computer Science. Springer.

Davis, Martin (2006). "The Church–Turing Thesis: Consensus and Opposition". Logical Approaches to Computational Barriers (CiE 2006), Lecture Notes in Computer Science 3988. Springer. pp. 125–132.