Real computation

[Figure] Circuit diagram of an analog computing element that integrates a given function. Real computation theory investigates the properties of such devices under the idealizing assumption of infinite precision.

In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. They are given this name because they operate on the set of real numbers. Within this theory, it is possible to prove interesting statements such as "The complement of the Mandelbrot set is only partially decidable."
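
The "partially decidable" claim can be made concrete: with exact real arithmetic, iterating z ← z² + c and halting once |z| exceeds 2 semi-decides membership in the complement of the Mandelbrot set, because an orbit that leaves the disc of radius 2 is guaranteed to diverge. The sketch below is only an approximation of the idealized machine: Python's floating-point complex numbers and the iteration cap stand in for infinite precision and unbounded running time.

```python
# Semi-decision procedure for the complement of the Mandelbrot set:
# iterate z <- z^2 + c and report "outside" once |z| > 2. On an
# infinite-precision machine this halts exactly when c is outside
# the set; here floats and a step budget approximate that behaviour.

def escapes(c: complex, max_iter: int = 10_000) -> bool:
    """Return True if c is detected to lie outside the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:    # escape radius: divergence is now certain
            return True
    return False          # no decision within the iteration budget

print(escapes(1 + 0j))  # True: the orbit of c = 1 escapes quickly
print(escapes(0j))      # False: the orbit of c = 0 stays at 0
```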


These hypothetical computing machines can be viewed as idealised analog computers which operate on real numbers, whereas digital computers are limited to computable numbers. They may be further subdivided into differential and algebraic models (digital computers, in this context, should be thought of as topological, at least insofar as their operation on computable reals is concerned [1]). Depending on the model chosen, real computers may be able to solve problems that are unsolvable on digital computers, or vice versa. For example, Hava Siegelmann's neural nets can have noncomputable real weights, making them able to compute nonrecursive languages; in the other direction, Claude Shannon's idealized analog computer can only solve algebraic differential equations, while a digital computer can solve some transcendental equations as well. This comparison is not entirely fair, however, since in Shannon's idealized analog computer computations are done immediately, i.e. in real time; his model can be adapted to cope with this problem. [2]

A canonical model of computation over the reals is the Blum–Shub–Smale (BSS) machine.

If real computation were physically realizable, one could use it to solve NP-complete problems, and even #P-complete problems, in polynomial time. However, unlimited-precision real numbers in the physical universe are prohibited by the holographic principle and the Bekenstein bound. [3]
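
As a toy illustration of why unlimited precision is so powerful (an illustrative sketch, not the argument from Aaronson's cited paper): a single real number can carry exponentially many bits in its binary expansion, and unit-cost real arithmetic can read any one of them back with only a few operations. Python's Fraction type stands in for the idealized exact reals.

```python
# A single "real" can encode a whole truth table; unit-cost arithmetic
# plus comparison can extract any bit of it. Fraction gives exact
# arithmetic, standing in for the idealized infinite-precision reals.

from fractions import Fraction

def pack(bits):
    """Encode a bit string in the binary expansion of one number."""
    x = Fraction(0)
    for i, b in enumerate(bits):
        x += Fraction(b, 2 ** (i + 1))
    return x

def extract(x, i):
    """Read bit i back out using only arithmetic and comparison."""
    shifted = x * 2 ** (i + 1)
    return int(shifted) % 2    # floor, then parity

table = [1, 0, 1, 1, 0, 0, 1, 0]
x = pack(table)
print([extract(x, i) for i in range(8)])  # [1, 0, 1, 1, 0, 0, 1, 0]
```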


Related Research Articles

Analog computer

An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage and on relating these classes to each other. A computational problem is a task solved by a computer through the mechanical application of mathematical steps, such as an algorithm.

Discrete mathematics

Discrete mathematics is the study of mathematical structures that can be considered "discrete" rather than "continuous". Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics".

Theory of computation

In theoretical computer science and mathematics, the theory of computation is the branch that deals with which problems can be solved on a model of computation using an algorithm, how efficiently they can be solved, and to what degree (for example, with approximate rather than exact solutions). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".

In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem into the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second.
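
A minimal concrete instance (a standard textbook reduction, not specific to real computation): INDEPENDENT SET reduces to CLIQUE by complementing the graph's edge set, after which a single call to a CLIQUE subroutine answers the original question. The brute-force has_clique below is a hypothetical stand-in for that subroutine.

```python
# Polynomial-time reduction: (G, k) has an independent set of size k
# iff the complement graph has a clique of size k. The transformation
# (complementing the edges) takes polynomial time; the solver for the
# second problem is called exactly once.

from itertools import combinations

def reduce_is_to_clique(n, edges):
    """Map an INDEPENDENT SET instance to a CLIQUE instance."""
    all_pairs = set(combinations(range(n), 2))
    return all_pairs - {tuple(sorted(e)) for e in edges}

def has_clique(n, edges, k):
    """Brute-force stand-in for the hypothetical CLIQUE subroutine."""
    return any(all(tuple(sorted(p)) in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))

# G is the path 0-1-2; {0, 2} is independent, so the complement of G
# must contain a 2-clique.
print(has_clique(3, reduce_is_to_clique(3, [(0, 1), (1, 2)]), 2))  # True
```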

Hypercomputation or super-Turing computation is a set of models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

Theoretical computer science

Theoretical computer science (TCS) is a subfield of general computer science and mathematics that focuses on mathematical aspects of computer science, such as the theory of computation, formal language theory, the lambda calculus, and type theory.

Time complexity

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
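
The operation-counting viewpoint can be made literal by instrumenting two searches over a sorted list; the counts grow roughly as n and log₂ n, matching the O(n) and O(log n) time complexities. A sketch:

```python
# Counting elementary operations directly, as the definition suggests:
# instrumented linear and binary search over a sorted list of length n.

def linear_search(xs, target):
    comparisons = 0
    for x in xs:
        comparisons += 1
        if x == target:
            break
    return comparisons

def binary_search(xs, target):
    comparisons, lo, hi = 0, 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if xs[mid] == target:
            break
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

xs = list(range(1_000_000))
print(linear_search(xs, 999_999))   # 1,000,000 comparisons: O(n)
print(binary_search(xs, 999_999))   # about 20 comparisons: O(log n)
```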

Complexity class

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

In computational complexity theory, R is the class of decision problems solvable by a Turing machine that halts on every input; equivalently, it is the set of all recursive languages.

In computation theory, the Blum–Shub–Smale machine, or BSS machine, is a model of computation introduced by Lenore Blum, Michael Shub and Stephen Smale, intended to describe computations over the real numbers. Essentially, a BSS machine is a Random Access Machine with registers that can store arbitrary real numbers and that can compute rational functions over reals in a single time step. It is closely related to the Real RAM model.
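
A minimal sketch of the BSS model's flavor, assuming rational inputs so that Python's Fraction can stand in for the exact-real registers: each step evaluates a rational function of the registers at unit cost, and control flow branches on exact comparisons.

```python
# BSS-style computation: registers hold exact reals, each step applies
# a rational function to them, and control branches on exact tests
# (=, <). Fraction only covers rational inputs, so this is a toy.

from fractions import Fraction

def quadratic_has_real_root(a, b, c) -> bool:
    """Decide whether ax^2 + bx + c has a real root.
    Computing the discriminant is one rational-function step;
    comparing it with 0 is one branch step, with no rounding error."""
    disc = b * b - 4 * a * c   # rational function of the registers
    return disc >= 0           # exact comparison

print(quadratic_has_real_root(Fraction(1), Fraction(0), Fraction(-2)))  # True
print(quadratic_has_real_root(Fraction(1), Fraction(0), Fraction(2)))   # False
```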

In computability theory, super-recursive algorithms are a generalization of ordinary algorithms that are more powerful, that is, compute more than Turing machines. The term was introduced by Mark Burgin, whose book "Super-recursive algorithms" develops their theory and presents several mathematical models. Turing machines and other mathematical models of conventional algorithms allow researchers to find properties of recursive algorithms and their computations. In a similar way, mathematical models of super-recursive algorithms, such as inductive Turing machines, allow researchers to find properties of super-recursive algorithms and their computations.

Toda's theorem is a result in computational complexity theory proved by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy", which was given the 1998 Gödel Prize. The theorem states that the entire polynomial hierarchy PH is contained in P^PP, the class of problems solvable in polynomial time with an oracle for PP.

In computing, especially computational geometry, a real RAM is a mathematical model of a computer that can compute with exact real numbers instead of the binary fixed point or floating point numbers used by most actual computers. The real RAM was formulated by Michael Ian Shamos in his 1978 Ph.D. dissertation.
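
The contrast with hardware floating point is easy to demonstrate. In the real RAM, arithmetic and comparisons are exact; on actual machines they round. A minimal sketch, with Fraction standing in for the model's exact arithmetic:

```python
# Real RAM versus floating point: the model assumes exact arithmetic
# with unit-cost operations, while hardware floats round.

from fractions import Fraction

print(0.1 + 0.2 == 0.3)                                      # False (rounding)
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True (exact)
```

Geometric predicates such as "are these three points collinear?" branch on exactly such comparisons, which is why computational geometry algorithms are usually analyzed in the real RAM model rather than over floats.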

In mathematical logic, computational complexity theory, and computer science, the existential theory of the reals is the set of all true sentences of the form ∃X₁ ⋯ ∃Xₙ φ(X₁, …, Xₙ), where φ is a quantifier-free formula whose atoms are equalities and inequalities between polynomials with real coefficients. For example, the sentence ∃X (X² = 2 ∧ X > 0) belongs to the theory, witnessed by X = √2.

Michael Shub

Michael Ira Shub is an American mathematician who has done research into dynamical systems and the complexity of real number algorithms.

The general purpose analog computer (GPAC) is a mathematical model of analog computers first introduced in 1941 by Claude Shannon. This model consists of circuits where several basic units are interconnected in order to compute some function. The GPAC can be implemented in practice through the use of mechanical devices or analog electronics. Although analog computers have fallen almost into oblivion due to the emergence of the digital computer, the GPAC has recently been studied as a way to provide evidence for the physical Church–Turing thesis. This is because the GPAC is also known to model a large class of dynamical systems defined with ordinary differential equations, which appear frequently in the context of physics. In particular, it was shown in 2007 that the GPAC is equivalent, in computability terms, to Turing machines, thereby proving the physical Church–Turing thesis for the class of systems modelled by the GPAC. This was recently strengthened to polynomial-time equivalence.
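
A hedged digital simulation of the GPAC idea: wiring two ideal integrators in a feedback loop realizes the polynomial ODE system y₁' = y₂, y₂' = −y₁, whose solution (sin t, cos t) the GPAC computes in real time. Forward Euler below approximates the ideal integrators.

```python
# Two GPAC integrator units in a feedback loop generate sine and
# cosine as the solution of y1' = y2, y2' = -y1. Forward Euler with a
# small step stands in for the ideal continuous-time integrators.

import math

def gpac_sine(t_end: float, dt: float = 1e-5) -> float:
    y1, y2 = 0.0, 1.0          # initial conditions: sin(0), cos(0)
    t = 0.0
    while t < t_end:
        y1, y2 = y1 + dt * y2, y2 - dt * y1   # the two integrators
        t += dt
    return y1

print(gpac_sine(math.pi / 2))  # ~1.0 (forward Euler drifts slightly)
print(math.sin(math.pi / 2))   # 1.0
```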

Complexity and Real Computation is a book on the computational complexity theory of real computation. It studies algorithms whose inputs and outputs are real numbers, using the Blum–Shub–Smale machine as its model of computation. For instance, this theory is capable of addressing a question posed by Roger Penrose in The Emperor's New Mind (1989): "is the Mandelbrot set computable?"

Felipe Cucker

Juan Felipe Cucker Farkas is an Uruguayan mathematician and theoretical computer scientist who has done research into the complexity theory of the Blum–Shub–Smale computational model and the complexity of numerical algorithms in linear programming and numerical algebraic geometry.

References

  1. Weihrauch, Klaus (1995). A Simple Introduction to Computable Analysis.
  2. Bournez, O.; Campagnolo, M. L.; Graça, D. S.; Hainry, E. (June 2007). "Polynomial differential equations compute all real computable functions on computable compact intervals". Journal of Complexity. 23 (3): 317–335. doi:10.1016/j.jco.2006.12.005. hdl:10400.1/1011.
  3. Aaronson, Scott (March 2005). "NP-complete Problems and Physical Reality". ACM SIGACT News. 36 (1): 30–52.
