Turing completeness

In computability theory, a system of data-manipulation rules (such as a model of computation, a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine (devised by English mathematician and computer scientist Alan Turing). This means that the system is able to recognize or decide other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a rule set. Virtually all programming languages today are Turing-complete.

A related concept is that of Turing equivalence: two computers P and Q are called equivalent if P can simulate Q and Q can simulate P. The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine, and therefore that if any real-world computer can simulate a Turing machine, it is Turing-equivalent to a Turing machine. A universal Turing machine can be used to simulate any Turing machine and, by extension, the purely computational aspects of any possible real-world computer.

To show that something is Turing-complete, it is enough to demonstrate that it can be used to simulate some Turing-complete system. No physical system can have infinite memory, but if the limitation of finite memory is ignored, most programming languages are otherwise Turing-complete.
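
For illustration, the following is a minimal sketch in Python of this kind of simulation: a small interpreter that runs any single-tape Turing machine supplied as a transition table. The toy machine, the blank symbol "_", and the step bound are choices made for this sketch only.

    from collections import defaultdict

    def run(transitions, tape, state="start", accept="halt", steps=10_000):
        # Tape cells default to the blank symbol "_"; the head starts at cell 0.
        cells = defaultdict(lambda: "_", enumerate(tape))
        head = 0
        for _ in range(steps):              # step bound only so the demo stops
            if state == accept:
                break
            state, symbol, move = transitions[(state, cells[head])]
            cells[head] = symbol
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Toy machine: walk right, flipping 0 <-> 1, and halt on the first blank.
    FLIP = {("start", "0"): ("start", "1", "R"),
            ("start", "1"): ("start", "0", "R"),
            ("start", "_"): ("halt", "_", "R")}

    print(run(FLIP, "0110"))  # prints 1001_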

Non-mathematical usage

In colloquial usage, the terms "Turing-complete" and "Turing-equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language. In real life, this leads to the practical concepts of computing virtualization and emulation.

Real computers constructed so far can be functionally analyzed like a single-tape Turing machine (which uses a "tape" for memory); thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, the abstraction of a universal computer is defined as a device with a Turing-complete instruction set, infinite memory, and infinite available time.

Formal definitions

In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language):

Turing completeness
A computational system that can compute every Turing-computable function is called Turing-complete (or Turing-powerful). Alternatively, such a system is one that can simulate a universal Turing machine.
Turing equivalence
A Turing-complete system is called Turing-equivalent if every function it can compute is also Turing-computable; i.e., it computes precisely the same class of functions as do Turing machines. Alternatively, a Turing-equivalent system is one that can simulate, and be simulated by, a universal Turing machine. (All known physically implementable Turing-complete systems are Turing-equivalent, which adds support to the Church–Turing thesis.)
(Computational) universality
A system is called universal with respect to a class of systems if it can compute every function computable by systems in that class (or can simulate each of those systems). Typically, the term 'universality' is tacitly used with respect to a Turing-complete class of systems. The term "weakly universal" is sometimes used to distinguish a system (e.g. a cellular automaton) whose universality is achieved only by modifying the standard definition of Turing machine so as to include input streams with infinitely many 1s.

History

Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics: a universal Turing machine can, in principle, perform any calculation that any other programmable computer can. This says nothing about the effort needed to write the program, the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.

Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed. Babbage appreciated that the machine was capable of great feats of calculation, including primitive logical reasoning, but he did not appreciate that no other machine could do better. From the 1830s until the 1940s, mechanical calculating machines such as adders and multipliers were built and improved, but they could not perform a conditional branch and therefore were not Turing-complete.

In the late 19th century, Leopold Kronecker formulated notions of computability, defining primitive recursive functions. These functions can be calculated by rote computation, but they are not enough to make a universal computer, because the instructions that compute them do not allow for an infinite loop. In the early 20th century, David Hilbert led a program to axiomatize all of mathematics with precise axioms and precise logical rules of deduction that could be performed by a machine. Soon it became clear that a small set of deduction rules is enough to produce the consequences of any set of axioms. These rules were proved by Kurt Gödel in 1930 to be enough to produce every theorem.

The actual notion of computation was isolated soon after, starting with Gödel's incompleteness theorem. This theorem showed that axiom systems were limited when reasoning about the computation that deduces their theorems. Church and Turing independently demonstrated that Hilbert's Entscheidungsproblem (decision problem) was unsolvable,[1] thus identifying the computational core of the incompleteness theorem. This work, along with Gödel's work on general recursive functions, established that there are sets of simple instructions, which, when put together, are able to produce any computation. The work of Gödel showed that the notion of computation is essentially unique.

In 1941 Konrad Zuse completed the Z3 computer. Zuse was not familiar with Turing's work on computability at the time. In particular, the Z3 lacked dedicated facilities for a conditional jump, thereby precluding it from being Turing-complete. However, in 1998, Rojas showed that the Z3 is capable of simulating conditional jumps, and is therefore Turing-complete in theory. To do this, its tape program would have to be long enough to execute every possible path through both sides of every branch.[2]

The first computer capable of conditional branching in practice, and therefore Turing-complete in practice, was the ENIAC in 1946. Zuse's Z4 computer was operational in 1945, but it did not support conditional branching until 1950.[3]

Computability theory

Computability theory uses models of computation to analyze problems and determine whether they are computable and under what circumstances. The first result of computability theory is that there exist problems for which it is impossible to predict what a (Turing-complete) system will do over an arbitrarily long time.

The classic example is the halting problem: create an algorithm that takes as input a program in some Turing-complete language and some data to be fed to that program, and determines whether the program, operating on the input, will eventually stop or will continue forever. It is trivial to create an algorithm that can do this for some inputs, but impossible to do this in general. For any characteristic of the program's eventual output, it is likewise impossible to determine in general whether this characteristic will hold.
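
The standard argument for this impossibility can be sketched in a few lines of Python. The checker would_halt below is hypothetical (no such total function can be written); the point of the sketch is that assuming it exists leads to a contradiction.

    def would_halt(f, x):
        """Hypothetical total checker: True iff the call f(x) eventually stops."""
        raise NotImplementedError("no such checker can exist")

    def troublemaker(f):
        if would_halt(f, f):   # if f(f) would halt ...
            while True:        # ... loop forever instead;
                pass
        return "halted"        # otherwise halt immediately.

    # If would_halt(troublemaker, troublemaker) returned True, then the call
    # troublemaker(troublemaker) would loop forever; if it returned False, the
    # call would halt. Either answer contradicts itself.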

This impossibility poses problems when analyzing real-world computer programs. For example, one cannot write a tool that entirely protects programmers from writing infinite loops or protects users from supplying input that would cause infinite loops.

One can instead limit a program to executing only for a fixed period of time (timeout) or limit the power of flow-control instructions (for example, providing only loops that iterate over the items of an existing array). However, another theorem shows that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping abilities, i.e., languages that guarantee every program will eventually halt. So any such language is not Turing-complete. For example, a language in which programs are guaranteed to complete and halt cannot compute the computable function produced by Cantor's diagonal argument on all computable functions in that language.
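
The diagonal argument mentioned above can be sketched concretely in Python. A short, hand-written list of total functions stands in for a computable enumeration of all programs of a terminating language; the function built from the diagonal differs from every listed program at some input, yet it is clearly computable.

    # Stand-in for an enumeration of a terminating language's programs.
    programs = [lambda n: 0,        # the constant-zero program
                lambda n: n,        # the identity program
                lambda n: n * n]    # the squaring program

    def diagonal(n):
        # Differs from programs[n] at input n, so it appears nowhere in the list.
        return programs[n](n) + 1

    print([diagonal(i) for i in range(len(programs))])  # [1, 2, 5]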

Turing oracles

A computer with access to an infinite tape of data may be more powerful than a Turing machine: for instance, the tape might contain the solution to the halting problem or some other Turing-undecidable problem. Such an infinite tape of data is called a Turing oracle. Even a Turing oracle with random data is not computable (with probability 1), since there are only countably many computations but uncountably many oracles. So a computer with a random Turing oracle can compute things that a Turing machine cannot.

Digital physics

All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically.[4]

Examples

The computational systems (algebras, calculi) that are discussed as Turing-complete systems are those intended for studying theoretical computer science. They are intended to be as simple as possible, so that the limits of computation are easier to understand. Here are a few:

Most programming languages (that is, their abstract models, perhaps omitting particular constructs that assume finite memory), conventional and unconventional, are Turing-complete.

Some rewrite systems are Turing-complete.

Turing completeness is an abstract statement of ability, rather than a prescription of specific language features used to implement that ability. The features used to achieve Turing completeness can be quite different; Fortran systems would use loop constructs or possibly even goto statements to achieve repetition, while Haskell and Prolog, lacking looping almost entirely, would use recursion. Most programming languages describe computations on von Neumann architectures, which have memory (RAM and registers) and a control unit. These two elements make this architecture Turing-complete. Even pure functional languages are Turing-complete.[7][8]
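
As a small illustration in Python, the same repetition can be expressed either with a loop construct, as a Fortran-style program would, or with recursion alone, as a Haskell or Prolog program would.

    def factorial_loop(n):
        # Repetition via an explicit loop construct.
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    def factorial_recursive(n):
        # The same repetition expressed purely through recursion.
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    print(factorial_loop(6), factorial_recursive(6))  # 720 720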

Turing completeness in declarative SQL is implemented through recursive common table expressions. Unsurprisingly, procedural extensions to SQL (PL/SQL, etc.) are also Turing-complete. This illustrates one reason why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied, and the sooner its lack of completeness becomes perceived as a drawback, encouraging its extension until it is Turing-complete.
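
The following sketch shows unbounded iteration expressed in purely declarative SQL via a recursive common table expression, run here through Python's built-in sqlite3 module (SQLite supports WITH RECURSIVE); the Fibonacci query is only an illustrative example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    rows = conn.execute("""
        WITH RECURSIVE fib(n, a, b) AS (
            SELECT 1, 0, 1
            UNION ALL
            SELECT n + 1, b, a + b FROM fib WHERE n < 10
        )
        SELECT n, a FROM fib
    """).fetchall()
    print(rows)  # the first ten Fibonacci numbers, computed by the CTE alone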

The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems is based on their ability to represent most typical computer programs while detecting more errors.
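
In the spirit of the untyped lambda calculus, recursion can be built from anonymous functions alone. The Python sketch below uses the Z combinator (a fixed-point combinator suitable for strict evaluation) to define factorial without any named recursion or loop construct; the self-application it relies on cannot be expressed in the simply typed lambda calculus.

    # Z combinator: a strict-evaluation fixed-point combinator.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Factorial defined without naming itself: the combinator supplies the recursion.
    factorial = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

    print(factorial(5))  # 120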

Rule 110 and Conway's Game of Life, both cellular automata, are Turing-complete.
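
The update rule of Rule 110 is simple enough to sketch in a few lines of Python. Note that Rule 110's Turing completeness relies on an unbounded row and a particular background pattern; the fixed-width row and zero boundary cells below are simplifications made for this sketch.

    RULE = 110  # binary 01101110: one output bit per three-cell neighbourhood

    def step(cells):
        out = []
        for i in range(len(cells)):
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < len(cells) - 1 else 0
            neighbourhood = (left << 2) | (cells[i] << 1) | right
            out.append((RULE >> neighbourhood) & 1)
        return out

    row = [0] * 31 + [1]  # a single live cell at the right edge
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = step(row)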

Unintentional Turing completeness

Some software and video games are Turing-complete by accident, i.e. not by design.

Software:

Games:

Social media:

Computational languages:

Biology:

Non-Turing-complete languages

Many computational languages exist that are not Turing-complete. One such example is the set of regular languages, which are generated by regular expressions and which are recognized by finite automata. A more powerful but still not Turing-complete extension of finite automata is the category of pushdown automata and context-free grammars, which are commonly used to generate parse trees in an initial stage of program compiling. Further examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions.
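
For a concrete sense of how limited such a formalism is, the Python sketch below implements a deterministic finite automaton for one regular language: binary strings containing an even number of 1s. The two-state machine is an example chosen for this sketch; a finite automaton like it cannot recognize non-regular languages, let alone simulate a Turing machine.

    # Transition table of a two-state DFA over the alphabet {0, 1}.
    TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd", "0"): "odd", ("odd", "1"): "even"}

    def accepts(word):
        state = "even"                      # start state
        for symbol in word:
            state = TRANSITIONS[(state, symbol)]
        return state == "even"              # "even" is the only accepting state

    print(accepts("1101"))  # False (three 1s)
    print(accepts("1001"))  # True  (two 1s)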

In total functional programming languages, such as Charity and Epigram, all functions are total and must terminate. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types. The LOOP language is designed so that it computes only the functions that are primitive recursive. All of these compute proper subsets of the total computable functions, since the full set of total computable functions is not computably enumerable. Also, since all functions in these languages are total, algorithms for recursively enumerable sets cannot be written in these languages, in contrast with Turing machines.
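
The flavour of the LOOP language can be sketched in Python: the only repetition construct runs its body a number of times fixed before the loop starts, so every program terminates, and only primitive recursive functions (such as addition and multiplication below) can be expressed. The helper names here are choices made for this sketch, not LOOP syntax.

    def loop(times, body, state):
        # The bound is evaluated once, up front; the body cannot change it.
        for _ in range(times):
            state = body(state)
        return state

    def add(x, y):
        return loop(y, lambda acc: acc + 1, x)      # x plus y increments

    def mul(x, y):
        return loop(y, lambda acc: add(acc, x), 0)  # y repeated additions of x

    print(add(3, 4), mul(3, 4))  # 7 12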

Although (untyped) lambda calculus is Turing-complete, simply typed lambda calculus is not.

See also

Related Research Articles

Kolmogorov complexity (measure of algorithmic complexity)

In algorithmic information theory, the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963 and is a generalization of classical information theory.

In computability theory, the Church–Turing thesis is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:

In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved or to what degree. The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".

Turing machine (computation model defining an abstract machine)

A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm.

In mathematics, logic and computer science, a formal language is called recursively enumerable if it is a recursively enumerable subset in the set of all possible words over the alphabet of the language, i.e., if there exists a Turing machine which will enumerate all valid strings of the language.

In computer science, a universal Turing machine (UTM) is a Turing machine capable of computing any computable sequence, as described by Alan Turing in his seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem". Common sense might say that a universal machine is impossible, but Turing proves that it is possible. He suggested that we may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions q1, q2, ..., qI, which will be called "m-configurations". He then described the operation of such a machine and argued:

It is my contention that these operations include all those which are used in the computation of a number.

A New Kind of Science (book by Stephen Wolfram)

A New Kind of Science is a book by Stephen Wolfram, published by his company Wolfram Research under the imprint Wolfram Media in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science.

Automata theory (study of abstract machines and automata)

Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science with close connections to mathematical logic. The word automata comes from the Greek word αὐτόματος, which means "self-acting, self-willed, self-moving". An automaton is an abstract self-propelled computing device which follows a predetermined sequence of operations automatically. An automaton with a finite number of states is called a finite automaton (FA) or finite-state machine (FSM). A finite-state machine consists of states and transitions: as the automaton reads an input symbol, it makes a transition to another state according to its transition function, which takes the previous state and the current input symbol as its arguments.

Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.

Hypercomputation or super-Turing computation is a set of hypothetical models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic.

Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.

Computability is the ability to solve a problem in an effective manner. It is a key topic of the field of computability theory within mathematical logic and the theory of computation within computer science. The computability of a problem is closely linked to the existence of an algorithm to solve the problem.

In mathematical logic and theoretical computer science, a register machine is a generic class of abstract machines used in a manner similar to a Turing machine. All models of register machines are Turing equivalent.

Computable functions are the basic objects of study in computability theory. Computable functions are the formalized analogue of the intuitive notion of algorithms, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e. given an input of the function domain it can return the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the general recursive functions.

In computability theory, a decider is a Turing machine that halts for every input. A decider is also called a total Turing machine as it represents a total function.

History of computer science

The history of computer science began long before the modern discipline of computer science, usually appearing in forms like mathematics or physics. Developments in previous centuries alluded to the discipline that we now know as computer science. This progression, from mechanical inventions and mathematical theories towards modern computer concepts and machines, led to the development of a major academic field, massive technological advancement across the Western world, and the basis of a massive worldwide trade and culture.

A counter machine is an abstract machine used in formal logic and theoretical computer science to model computation. It is the most primitive of the four types of register machines. A counter machine comprises a set of one or more unbounded registers, each of which can hold a single non-negative integer, and a list of arithmetic and control instructions for the machine to follow. The counter machine is typically used in the process of designing parallel algorithms in relation to the mutual exclusion principle. When used in this manner, the counter machine models the discrete time-steps of a computational system in relation to memory accesses. By modeling computations in relation to the memory accesses for each respective computational step, parallel algorithms may be designed in such a manner as to avoid interlocking, the simultaneous writing operation by two threads to the same memory address.
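
A minimal counter machine interpreter can be sketched in Python. The instruction set below (INC, DEC, and JZ, a jump taken when a register is zero) and the toy program that moves one register's contents into another are choices made for this sketch; several equivalent instruction sets appear in the literature.

    def run(program, registers):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "INC":                     # increment a register
                registers[args[0]] += 1
            elif op == "DEC":                   # decrement (no-op at zero here)
                registers[args[0]] = max(0, registers[args[0]] - 1)
            elif op == "JZ" and registers[args[0]] == 0:
                pc = args[1]                    # jump if the register is zero
                continue
            pc += 1
        return registers

    MOVE = [("JZ", 0, 4),   # 0: if r0 == 0, jump past the end (halt)
            ("DEC", 0),     # 1: r0 -= 1
            ("INC", 1),     # 2: r1 += 1
            ("JZ", 2, 0)]   # 3: r2 is always zero, so this jumps back to 0

    print(run(MOVE, [3, 0, 0]))  # [0, 3, 0]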

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs.

References

  1. Hodges, Andrew (1992) [1983], Alan Turing: The Enigma, London: Burnett Books, p. 111, ISBN   0-04-510060-8
  2. Rojas, Raul (1998). "How to make Zuse's Z3 a universal computer". Annals of the History of Computing. 20 (3): 51–54. doi:10.1109/85.707574.
  3. Rojas, Raúl (1 February 2014). "Konrad Zuse und der bedingte Sprung" [Konrad Zuse and the conditional jump]. Informatik-Spektrum (in German). 37 (1): 50–53. doi:10.1007/s00287-013-0717-9. ISSN   0170-6012. S2CID   1086397.
  4. Schmidhuber, Jürgen (1997), Freksa, Christian; Jantzen, Matthias; Valk, Rüdiger (eds.), "A computer scientist's view of life, the universe, and everything", Foundations of Computer Science: Potential — Theory — Cognition, Lecture Notes in Computer Science, vol. 1337, Berlin, Heidelberg: Springer, pp. 201–208, arXiv: quant-ph/9904050 , doi:10.1007/bfb0052088, ISBN   978-3-540-69640-7, S2CID   17830241 , retrieved 23 May 2022
  5. Dfetter; Breinbaas (8 August 2011). "Cyclic Tag System". PostgreSQL wiki. Retrieved 10 September 2014.
  6. Lyons, Bob (30 March 2001). "Universal Turing Machine in XSLT". B2B Integration Solutions from Unidex. Archived from the original on 17 July 2011. Retrieved 5 July 2010.
  7. Boyer, Robert S.; Moore, J. Strother (May 1983). A Mechanical Proof of the Turing Completeness of Pure Lisp (PDF) (Technical report). Institute for Computing Science, University of Texas at Austin. 37. Archived (PDF) from the original on 22 September 2017.
  8. Rauber, Thomas; Rünger, Gudula (2013). Parallel programming: for multicore and cluster systems (2nd ed.). Springer. ISBN   9783642378010.
  9. "Announcing LAMBDA: Turn Excel formulas into custom functions". TECHCOMMUNITY.MICROSOFT.COM. 3 December 2020. Retrieved 8 December 2020.
  10. Cedotal, Andrew (16 April 2010). "Man Uses World's Most Difficult Computer Game to Create … A Working Turing Machine". The Mary Sue. Archived from the original on 27 June 2015. Retrieved 2 June 2015.
  11. Plunkett, Luke (16 July 2019). "Cities: Skylines Map Becomes A Poop-Powered Computer". Kotaku . Retrieved 16 July 2019.
  12. Caldwell, Brendan (20 November 2017). "Opus Magnum player makes an alchemical computer". Rock Paper Shotgun . Retrieved 23 September 2019.
  13. Crider, Michael. "This 8-bit processor built in Minecraft can run its own games". PCWorld. Retrieved 21 July 2022.
  14. Churchill, Alex; Biderman, Stella; Herrick, Austin (2020). Magic: The Gathering Is Turing Complete (PDF). 10th International Conference on Fun with Algorithms.
  15. Ouellette, Jennifer (23 June 2019). "It's possible to build a Turing machine within Magic: The Gathering". Ars Technica . Retrieved 12 March 2023.
  16. Kaye, Richard (31 May 2007). "Infinite versions of minesweeper are Turing complete" (PDF). Archived from the original (PDF) on 3 August 2016. Retrieved 8 July 2016.
  17. "Habbo's Twitter thread on the implementation of a Turing machine inside the game". 9 November 2020. Retrieved 11 November 2020.
  18. Meyers, Scott (Scott Douglas) (2005). Effective C++ : 55 specific ways to improve your programs and designs (3rd ed.). Upper Saddle River, NJ: Addison-Wesley. ISBN   0321334876. OCLC   60425273.
  19. "A 27th IOCCC Winner".
      Carlini, Nicolas; Barresi, Antonio; Payer, Mathias; Wagner, David; Gross, Thomas R. (August 2015). "Control-flow bending: on the effectiveness of control-flow integrity". Proceedings of the 24th USENIX Conference on Security Symposium. pp. 161–176. ISBN 9781931971232.
  20. Dabler, Ryan (23 September 2021). "TypeScript and Turing Completeness". ITNEXT. LINKIT. Retrieved 12 November 2022.
  21. Dolan, Stephen. "mov is Turing-complete" (PDF). stedolan.net. Archived from the original (PDF) on 14 February 2021. Retrieved 9 May 2019.
  22. Williams, Al (21 March 2021). "One Instruction To Rule Them All: C Compiler Emits Only MOV". Hackaday . Retrieved 23 October 2023.
  23. Domas, Christopher. "Break Me00: The MoVfuscator: Turning mov into a soul crushing RE nightmare". Retrieved 5 November 2022.
  24. Shah, Shalin; Wee, Jasmine; Song, Tianqi; Ceze, Luis; Strauss, Karin; Chen, Yuan-Jyue; Reif, John (4 May 2020). "Using Strand Displacing Polymerase To Program Chemical Reaction Networks". Journal of the American Chemical Society. 142 (21): 9587–9593. doi:10.1021/jacs.0c02240. ISSN   0002-7863. PMID   32364723. S2CID   218504535.
  25. Chen, Yuan-Jyue; Dalchau, Neil; Srinivas, Niranjan; Phillips, Andrew; Cardelli, Luca; Soloveichik, David; Seelig, Georg (October 2013). "Programmable chemical controllers made from DNA". Nature Nanotechnology. 8 (10): 755–762. Bibcode:2013NatNa...8..755C. doi:10.1038/nnano.2013.189. ISSN   1748-3395. PMC   4150546 . PMID   24077029.
  26. Srinivas, Niranjan; Parkin, James; Seelig, Georg; Winfree, Erik; Soloveichik, David (15 December 2017). "Enzyme-free nucleic acid dynamical systems". Science. 358 (6369): eaal2052. doi: 10.1126/science.aal2052 . ISSN   0036-8075. PMID   29242317.
  27. Soloveichik, David; Seelig, Georg; Winfree, Erik (23 March 2010). "DNA as a universal substrate for chemical kinetics". Proceedings of the National Academy of Sciences. 107 (12): 5393–5398. Bibcode:2010PNAS..107.5393S. doi: 10.1073/pnas.0909380107 . ISSN   0027-8424. PMC   2851759 . PMID   20203007.
  28. Shapiro, Ehud (7 December 1999). "A Mechanical Turing Machine: Blueprint for a Biomolecular Computer". Interface Focus. 2 (4). Weizmann Institute of Science: 497–503. doi:10.1098/rsfs.2011.0118. PMC   3363030 . PMID   22649583. Archived from the original on 3 January 2009. Retrieved 13 August 2009.

Further reading