General purpose analog computer

The general purpose analog computer (GPAC) is a mathematical model of analog computers first introduced in 1941 by Claude Shannon. [1] The model consists of circuits in which several basic units are interconnected to compute some function. The GPAC can be implemented in practice through mechanical devices or analog electronics. Although analog computers have fallen into near oblivion since the emergence of the digital computer, the GPAC has recently been studied as a way to provide evidence for the physical Church–Turing thesis. [2] This is because the GPAC is also known to model a large class of dynamical systems defined by ordinary differential equations, which appear frequently in physics. [3] In particular, it was shown in 2007 that (a deterministic variant of) the GPAC is equivalent, in computability terms, to Turing machines, thereby proving the physical Church–Turing thesis for the class of systems modelled by the GPAC. [4] This result was recently strengthened to polynomial-time equivalence. [5]

Definition and history

The general purpose analog computer was originally introduced by Claude Shannon [1] as a result of his work on Vannevar Bush's differential analyzer, an early analog computer. [6] Shannon defined the GPAC as an analog circuit consisting of five types of units: adders (which add their inputs), multipliers (which multiply their inputs), integrators, constant units (which always output the value 1), and constant multipliers (which multiply their input by a fixed constant k). More recently, and for simplicity, the GPAC has instead been defined using an equivalent set of four unit types: adders, multipliers, integrators, and real constant units (which always output the value k, for some fixed real number k).
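
A minimal sketch of how these units compose: two integrators and a constant multiplier (k = -1) wired in a feedback loop generate the sine and cosine functions. The Python below is illustrative only; forward-Euler stepping stands in for the continuous integration a real analyzer performs, and all names and step sizes are choices of this example, not part of the model.

```python
# GPAC-style circuit: y1' = y2, y2' = -y1, y1(0) = 0, y2(0) = 1,
# so y1(t) = sin(t) and y2(t) = cos(t).
import math

def simulate(t_end, dt=1e-5):
    y1, y2 = 0.0, 1.0              # integrator initial conditions
    t = 0.0
    while t < t_end:
        dy1 = y2 * dt              # integrator 1 integrates y2
        dy2 = (-1.0 * y1) * dt     # constant multiplier (-1) feeds integrator 2
        y1, y2 = y1 + dy1, y2 + dy2
        t += dt
    return y1, y2

s, c = simulate(math.pi / 2)
print(f"{s:.4f} ~ sin(pi/2) = 1, {c:.4f} ~ cos(pi/2) = 0")
```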

In his original paper, Shannon presented a result stating that the functions computable by the GPAC are exactly the differentially algebraic functions, i.e., those that satisfy an algebraic differential equation.
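
Concretely, a function y(t) is differentially algebraic if it satisfies an equation of the following form for some nonzero polynomial P; the examples are standard ones, not drawn from Shannon's paper:

```latex
P\!\left(t,\, y,\, y',\, \dots,\, y^{(n)}\right) = 0,
\qquad\text{e.g.}\qquad
y' - y = 0 \;\,(y = e^{t}),
\qquad
y'' + y = 0 \;\,(y = \sin t).
```

By Hölder's theorem, the Gamma function satisfies no such equation, so it is a standard example of a function that is not GPAC-generable in this sense.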

Related Research Articles

Analog computer

An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude.

In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that solve it.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm.

Discrete mathematics

Discrete mathematics is the study of mathematical structures that can be considered "discrete" rather than "continuous". Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics".

In computational complexity theory, the class NC is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time O(log^c n) using O(n^k) parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size.
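
As a concrete instance, prefix sums are in NC: pointer doubling computes them in ceil(log2 n) rounds, each of which could run on O(n) processors. The Python sketch below simulates those rounds sequentially; the function name and representation are choices of this example.

```python
# Hillis-Steele scan: each round lets every position absorb the partial
# sum 'shift' places to its left, so ceil(log2 n) rounds suffice.
def parallel_prefix_sum(xs):
    xs, n, shift = list(xs), len(xs), 1
    while shift < n:                               # O(log n) rounds
        xs = [xs[i] + (xs[i - shift] if i >= shift else 0)
              for i in range(n)]                   # one "parallel" round
        shift *= 2
    return xs

print(parallel_prefix_sum([1, 2, 3, 4]))           # [1, 3, 6, 10]
```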

Turing machine

A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm.
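
The model is small enough to simulate directly. In the sketch below, a dictionary serves as the table of rules, and a three-rule machine inverts a binary string; both the simulator and the example table are hypothetical constructions for illustration, not any standard implementation.

```python
# Each rule maps (state, read symbol) -> (next state, write symbol, move).
def run_tm(table, tape, state="q0", blank="_"):
    cells, head = dict(enumerate(tape)), 0
    while state != "halt":
        state, write, move = table[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

flip = {                                # invert bits, halt at first blank
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "1011"))             # -> 0100_
```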

In computability theory, a system of data-manipulation rules is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine. This means that the system is able to recognize or decide other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a rule set. Virtually all programming languages today are Turing-complete.

In computer science, a universal Turing machine (UTM) is a Turing machine that can simulate an arbitrary Turing machine on arbitrary input. The universal machine essentially achieves this by reading both the description of the machine to be simulated as well as the input to that machine from its own tape. Alan Turing introduced the idea of such a machine in 1936–1937. This principle is considered to be the origin of the idea of a stored-program computer used by John von Neumann in 1946 for the "Electronic Computing Instrument" that now bears von Neumann's name: the von Neumann architecture.

Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing-computable. Super-Turing computing, introduced in the early 1990s by Hava Siegelmann, refers to neurologically inspired, biologically and physically realizable computing; it became the mathematical foundation of lifelong machine learning. Hypercomputation, introduced as a field of science in the late 1990s, is said to build on the super-Turing framework, but it also includes constructs that are philosophical in nature. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

Theoretical computer science

Theoretical computer science (TCS) is a subfield of computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory.

Real computation

In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. They are given this name because they operate on the set of real numbers. Within this theory, it is possible to prove interesting statements such as "The complement of the Mandelbrot set is only partially decidable."

Time complexity

In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
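
A toy instance of this kind of operation counting, with illustrative names: linear search performs at most n comparisons on an n-element list, so its time complexity is O(n).

```python
def linear_search(xs, target):
    comparisons = 0
    for i, x in enumerate(xs):
        comparisons += 1                 # one elementary operation per element
        if x == target:
            return i, comparisons
    return -1, comparisons

print(linear_search([5, 3, 9, 7], 9))    # (2, 3): found after 3 comparisons
```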

Complexity class

In computational complexity theory, a complexity class is a set of computational problems of related resource-based complexity. The two most commonly analyzed resources are time and memory.

Computable functions are the basic objects of study in computability theory. They are the formalized analogue of the intuitive notion of an algorithm, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e., given an input from the function's domain it can return the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation, but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the general recursive functions.

Boolean circuit

In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length.
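
For instance, the language PARITY is decided by a family {C_n}, where C_n combines its n input bits with XOR gates. In the sketch below, Python functions stand in for circuits; the representation is a choice of this example.

```python
from functools import reduce
from operator import xor

def parity_circuit(n):
    # The family member for input length n: XOR of all n input bits.
    # A chain has linear depth; a balanced tree would give O(log n) depth.
    def circuit(bits):
        assert len(bits) == n
        return reduce(xor, bits, 0)
    return circuit

c4 = parity_circuit(4)        # the circuit for inputs of length 4
print(c4([1, 0, 1, 1]))       # -> 1 (odd number of ones)
```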

Circuit complexity

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits.

Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical complexity classes.

A universal differential equation (UDE) is a non-trivial differential algebraic equation with the property that its solutions can approximate any continuous function on any interval of the real line to any desired level of accuracy.
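
One frequently cited example is due to Rubel (1981). The equation below is reproduced from memory and should be checked against the original paper:

```latex
% Rubel's universal differential equation (coefficients as recalled;
% verify against Rubel, 1981):
3 y'^{4} y'' y''''^{2}
- 4 y'^{4} y'''^{2} y''''
+ 6 y'^{3} y''^{2} y''' y''''
+ 24 y'^{2} y''^{4} y''''
- 12 y'^{3} y'' y'''^{3}
- 29 y'^{2} y''^{3} y'''^{2}
+ 12 y''^{7} = 0
```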

Complexity and Real Computation is a book on the computational complexity theory of real computation. It studies algorithms whose inputs and outputs are real numbers, using the Blum–Shub–Smale machine as its model of computation. For instance, this theory is capable of addressing a question posed in 1991 by Roger Penrose in The Emperor's New Mind: "is the Mandelbrot set computable?"

References

  1. Shannon, Claude E. (1941). "Mathematical Theory of the Differential Analyzer". Journal of Mathematics and Physics. 20 (1–4): 337–354. doi:10.1002/sapm1941201337.
  2. Bournez, O.; Campagnolo, M. L. (2008). "A Survey on Continuous Time Computations". In Cooper, S. B.; Löwe, B.; Sorbi, A. (eds.). New Computational Paradigms. Changing Conceptions of What is Computable. Springer. pp. 383–423.
  3. Graça, D. S.; Costa, J. F. (2003). "Analog computers and recursive functions over the reals". Journal of Complexity. 19 (5): 644–664.
  4. Bournez, O.; Campagnolo, M. L.; Graça, D. S.; Hainry, E. (2007). "Polynomial differential equations compute all real computable functions on computable compact intervals". Journal of Complexity. 23: 317–335.
  5. Bournez, Olivier; Graça, Daniel S.; Pouly, Amaury (2016). Polynomial Time Corresponds to Solutions of Polynomial Ordinary Differential Equations of Polynomial Length: The General Purpose Analog Computer and Computable Analysis Are Two Efficiently Equivalent Models of Computations. Leibniz International Proceedings in Informatics (LIPIcs). Vol. 55. Schloss Dagstuhl. pp. 109:1–109:15. doi:10.4230/LIPIcs.ICALP.2016.109. ISBN 9783959770132. S2CID 1942575.
  6. Robert Price (1982). "Claude E. Shannon, an oral history". IEEE Global History Network. IEEE. Retrieved July 14, 2011.