In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. The field is so named because these machines operate on the set of real numbers rather than on discrete values. Within this theory, it is possible to prove interesting statements such as "The complement of the Mandelbrot set is only partially decidable."
These hypothetical computing machines can be viewed as idealized analog computers which operate on real numbers, whereas digital computers are limited to computable numbers. They may be further subdivided into differential and algebraic models (digital computers, in this context, should be thought of as topological, at least insofar as their operation on computable reals is concerned [1]). Depending on the model chosen, this may enable real computers to solve problems that are unsolvable on digital computers, or vice versa. For example, Hava Siegelmann's neural nets can have noncomputable real weights, making them able to compute nonrecursive languages. Conversely, Claude Shannon's idealized analog computer can only solve algebraic differential equations, while a digital computer can solve some transcendental equations as well. However, this comparison is not entirely fair, since in Shannon's idealized analog computer computations are done instantaneously, i.e., in real time; Shannon's model can be adapted to cope with this problem. [2]
A canonical model of computation over the reals is the Blum–Shub–Smale (BSS) machine.
If real computation were physically realizable, one could use it to solve NP-complete problems, and even #P-complete problems, in polynomial time. However, unlimited-precision real numbers in the physical universe are prohibited by the holographic principle and the Bekenstein bound. [3]
An analog computer or analogue computer is a type of computer that uses physical phenomena such as electrical, mechanical, or hydraulic quantities, behaving according to the mathematical principles in question, to model the problem being solved. In contrast, digital computers represent varying quantities symbolically, using discrete values of both time and amplitude.
In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm.
In computer science, the computational complexity, or simply complexity, of an algorithm is the amount of resources required to run it. Particular focus is given to computation time and memory storage requirements. The complexity of a problem is the complexity of the best algorithm that solves it.
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm.
Discrete mathematics is the study of mathematical structures that can be considered "discrete" rather than "continuous". Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics".
In theoretical computer science and mathematics, the theory of computation is the branch that deals with which problems can be solved on a model of computation using an algorithm, how efficiently they can be solved, and to what degree (e.g., approximate versus exact solutions). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".
In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming, or reducing, it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem into the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second.
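To make the definition concrete, here is a minimal Python sketch (an illustration, not part of the article): it reduces the independent-set decision problem to the clique decision problem by complementing the graph, where clique_solver stands for the hypothetical subroutine solving the second problem (filled in with brute force only so the sketch runs end to end).

    from itertools import combinations

    def clique_solver(graph, k):
        # Hypothetical subroutine for the "second problem": does `graph`
        # (a dict mapping vertex -> set of neighbours) contain a clique of size k?
        vertices = list(graph)
        return any(
            all(v in graph[u] for u, v in combinations(subset, 2))
            for subset in combinations(vertices, k)
        )

    def has_independent_set(graph, k):
        # Polynomial-time reduction: G has an independent set of size k
        # iff the complement of G has a clique of size k.
        vertices = set(graph)
        complement = {u: (vertices - {u}) - graph[u] for u in graph}
        return clique_solver(complement, k)   # one call to the subroutine

    # A path a-b-c has the independent set {a, c} of size 2.
    path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    print(has_independent_set(path, 2))   # True

The transformation (building the complement graph) takes time polynomial in the size of the input, and the subroutine is called once, so the reduction is polynomial-time.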
Hypercomputation or super-Turing computation is a set of hypothetical models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic.
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
In computational complexity theory, R is the class of decision problems solvable by a Turing machine; it coincides with the set of all recursive languages.
Smale's problems is a list of eighteen unsolved problems in mathematics proposed by Steve Smale in 1998 and republished in 1999. Smale composed this list in reply to a request from Vladimir Arnold, then vice-president of the International Mathematical Union, who asked several mathematicians to propose a list of problems for the 21st century. Arnold's inspiration came from the list of Hilbert's problems that had been published at the beginning of the 20th century.
In computation theory, the Blum–Shub–Smale machine, or BSS machine, is a model of computation introduced by Lenore Blum, Michael Shub and Stephen Smale, intended to describe computations over the real numbers. Essentially, a BSS machine is a Random Access Machine with registers that can store arbitrary real numbers and that can compute rational functions over reals in a single time step. It is closely related to the Real RAM model.
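A minimal sketch of the flavour of the model (my own illustration, not the formal definition): a program is a list of instructions acting on real-valued registers, where each step either applies a rational function to the registers or branches on the sign of a register. Python floats are finite-precision, so below they merely stand in for the machine's idealized real-number registers.

    # Toy BSS-style machine: each instruction either applies a rational
    # function to the registers or branches on a comparison with zero.
    def run_bss(program, registers):
        pc = 0
        while True:
            op = program[pc]
            if op[0] == "compute":          # ("compute", f): registers := f(registers)
                registers = op[1](registers)
                pc += 1
            elif op[0] == "branch":         # ("branch", i, target): jump if register i >= 0
                pc = op[2] if registers[op[1]] >= 0 else pc + 1
            elif op[0] == "halt":
                return registers

    # Example: one computation step replaces r0 by r0**2 - 2, then halt.
    program = [
        ("compute", lambda r: [r[0] * r[0] - 2.0] + r[1:]),
        ("halt",),
    ]
    print(run_bss(program, [1.5]))   # [0.25]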
Toda's theorem is a result in computational complexity theory proven by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy", for which he was awarded the 1998 Gödel Prize. The theorem states that the entire polynomial hierarchy PH is contained in P^PP (and hence in P^#P).
In computing, especially computational geometry, a real RAM is a mathematical model of a computer that can compute with exact real numbers instead of the binary fixed point or floating point numbers used by most actual computers. The real RAM was formulated by Michael Ian Shamos in his 1978 Ph.D. dissertation.
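The gap between this idealization and actual floating-point hardware can be shown in a couple of lines of Python (an illustrative aside, not taken from the article): an identity that holds for exact reals fails under binary floating point, whereas exact rational arithmetic, a limited stand-in for real RAM arithmetic, preserves it.

    from fractions import Fraction

    # 0.1, 0.2 and 0.3 have no exact binary floating-point representation,
    # so an identity that holds over the reals fails:
    print(0.1 + 0.2 == 0.3)                                       # False
    print(0.1 + 0.2)                                              # 0.30000000000000004

    # Exact rational arithmetic preserves the identity:
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True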
Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical complexity classes.
In mathematical logic, computational complexity theory, and computer science, the existential theory of the reals is the set of all true sentences of the form ∃X₁ ⋯ ∃Xₙ F(X₁, …, Xₙ), where the variables are interpreted as having real number values, and where F(X₁, …, Xₙ) is a quantifier-free formula involving equalities and inequalities of real polynomials. A sentence of this form is true if it is possible to find values for all of the variables that, when substituted into the formula F, make it become true.
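For instance (an illustrative example, not from the article), the sentence ∃X₁ ∃X₂ (X₁² + X₂² < 1 ∧ X₁·X₂ > 1/5) belongs to the existential theory of the reals: substituting X₁ = X₂ = 1/2 makes both inequalities true (1/4 + 1/4 < 1 and 1/4 > 1/5). By contrast, the sentence ∃X₁ (X₁² < 0) does not belong to it, since no real value satisfies its formula.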
Michael Ira Shub is an American mathematician who has done research into dynamical systems and the complexity of real number algorithms.
The general purpose analog computer (GPAC) is a mathematical model of analog computers first introduced in 1941 by Claude Shannon. This model consists of circuits in which several basic units are interconnected in order to compute some function. The GPAC can be implemented in practice through the use of mechanical devices or analog electronics. Although analog computers have fallen almost into oblivion due to the emergence of the digital computer, the GPAC has recently been studied as a way to provide evidence for the physical Church–Turing thesis. This is because the GPAC is also known to model a large class of dynamical systems defined by ordinary differential equations, which appear frequently in the context of physics. In particular, it was shown in 2007 that the GPAC is equivalent, in computability terms, to Turing machines, thereby proving the physical Church–Turing thesis for the class of systems modelled by the GPAC. This result was recently strengthened to polynomial-time equivalence.
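As a sketch of the circuit idea (my own illustration, with assumptions beyond the article): connecting two GPAC integrator units in a feedback loop realizes the ODE system y₁' = y₂, y₂' = −y₁ with y₁(0) = 0 and y₂(0) = 1, whose solution is y₁ = sin(t), y₂ = cos(t). The Python below simulates that loop numerically with a crude Euler step, just to show how interconnected basic units compute a function.

    import math

    # Two integrators in a feedback loop: y1' = y2, y2' = -y1.
    def simulate_gpac(t_end, dt=1e-5):
        y1, y2 = 0.0, 1.0                        # initial conditions
        t = 0.0
        while t < t_end:
            y1, y2 = y1 + dt * y2, y2 - dt * y1  # each integrator accumulates its input
            t += dt
        return y1, y2

    y1, y2 = simulate_gpac(math.pi / 2)
    print(y1, math.sin(math.pi / 2))   # both close to 1.0
    print(y2, math.cos(math.pi / 2))   # both close to 0.0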
Complexity and Real Computation is a book on the computational complexity theory of real computation. It studies algorithms whose inputs and outputs are real numbers, using the Blum–Shub–Smale machine as its model of computation. For instance, this theory is capable of addressing a question posed in 1991 by Roger Penrose in The Emperor's New Mind: "is the Mandelbrot set computable?"