Undecidable problem

In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether an arbitrary program eventually halts when run. [1]

Background

A decision problem is a question which, for every input in some infinite set of inputs, requires a "yes" or "no" answer. [2] Those inputs can be numbers (for example, the decision problem "is the input a prime number?") or values of some other kind, such as strings of a formal language.

The formal representation of a decision problem is a subset of the natural numbers. For decision problems on natural numbers, the set consists of those numbers that the decision problem answers "yes" to. For example, the decision problem "is the input even?" is formalized as the set of even numbers. A decision problem whose input consists of strings or more complex values is formalized as the set of numbers that, via a specific Gödel numbering, correspond to inputs that satisfy the decision problem's criteria.
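As a concrete illustration (not part of the formal definition above), the decision problem "is the input even?" can be sketched in Python as the characteristic function of its formalized set:

```python
# Minimal sketch: a decision problem on natural numbers corresponds to the
# set of inputs whose answer is "yes"; its characteristic function decides it.

def is_even(n: int) -> bool:
    """Answers the decision problem "is the input even?"."""
    return n % 2 == 0

# The formalized set is {n in N : is_even(n)}; its first few members:
print([n for n in range(10) if is_even(n)])  # [0, 2, 4, 6, 8]
```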

A decision problem A is called decidable or effectively solvable if the formalized set of A is a recursive set. Otherwise, A is called undecidable. A is called partially decidable, semi-decidable, solvable, or provable if its formalized set is a recursively enumerable set. [nb 1]
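The distinction can be sketched informally in code. The first function below is total and so witnesses decidability; the second only searches for a witness and halts exactly when one exists, which is the behaviour of a semi-decision procedure. The verifier `has_witness` is a hypothetical parameter used only for illustration:

```python
import itertools

def decide_even(n: int) -> bool:
    # Total: halts on every input with the correct answer, so the problem
    # "is the input even?" is decidable (its formalized set is recursive).
    return n % 2 == 0

def semi_decide(n: int, has_witness) -> bool:
    # Halts with True whenever some witness exists, but runs forever when
    # none does: the defining behaviour of a recursively enumerable set.
    for candidate in itertools.count():
        if has_witness(n, candidate):
            return True
    # unreachable when no witness exists; the loop simply never terminates
```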

Example: the halting problem in computability theory

In computability theory, the halting problem is a decision problem which can be stated as follows:

Given the description of an arbitrary program and a finite input, decide whether the program finishes running or will run forever.

Alan Turing proved in 1936 that a general algorithm running on a Turing machine that solves the halting problem for all possible program-input pairs necessarily cannot exist. Hence, the halting problem is undecidable for Turing machines.
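The core of the diagonal argument can be sketched as follows. The function `halts` stands for a claimed total halting decider; no such function exists, and the sketch shows why any candidate must answer at least one question incorrectly:

```python
def make_paradox(halts):
    """Given a claimed halting decider halts(program, argument), build the
    program that defeats it."""
    def paradox(program):
        if halts(program, program):
            while True:   # the decider says "halts", so loop forever
                pass
        else:
            return        # the decider says "loops", so halt at once
    return paradox

# For any candidate decider, let paradox = make_paradox(halts) and consider
# paradox(paradox): it halts exactly when halts(paradox, paradox) returns
# False, so the candidate is wrong about that very input.
```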

Relationship with Gödel's incompleteness theorem

The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the truth value of a statement, but only concerns the issue of whether it is possible to find it through a mathematical proof.

The weaker form of the theorem can be proved from the undecidability of the halting problem as follows. [3] Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for all true statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide whether the algorithm with representation a halts on input i. This statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete, it follows that either there is an n such that N(n) = H(a, i) or there is an n such that N(n) = ¬ H(a, i). So if we iterate over all n until we find either H(a, i) or its negation, we will always halt, and, by soundness, the answer we obtain will be correct. This yields an algorithm that decides the halting problem. Since no such algorithm can exist, the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.
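Written as an (impossible) algorithm, the reduction looks like the following sketch. The names `enumerate_theorems`, `halting_statement`, and `negation` are placeholders for the hypothetical N(n), the sentence H(a, i), and formal negation; they are assumptions used only to express the argument:

```python
import itertools

def would_decide_halting(a, i, enumerate_theorems, halting_statement, negation):
    # Build the sentence H(a, i): "program a halts on input i".
    h = halting_statement(a, i)
    for n in itertools.count():
        s = enumerate_theorems(n)      # the n-th provable (hence true) sentence
        if s == h:
            return True                # the axioms prove that a halts on i
        if s == negation(h):
            return False               # the axioms prove that a does not halt
    # Completeness guarantees one of the two cases is reached, so this loop
    # would always terminate -- contradicting the undecidability of halting.
```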

Examples of undecidable problems

Undecidable problems can be related to different topics, such as logic, abstract machines or topology. Since there are uncountably many undecidable problems, [nb 2] any list, even one of infinite length, is necessarily incomplete.

Examples of undecidable statements

There are two distinct senses of the word "undecidable" in contemporary use. The first of these is the sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set. The connection between these two is that if a decision problem is undecidable (in the recursion theoretical sense) then there is no consistent, effective formal system which proves for every question A in the problem either "the answer to A is yes" or "the answer to A is no".

Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. The usage of "independent" is also ambiguous, however. It can mean just "not provable", leaving open whether an independent statement might be refuted.

Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point among various philosophical schools.

One of the first problems suspected to be undecidable, in the second sense of the term, was the word problem for groups, first posed by Max Dehn in 1911, which asks if there is a finitely presented group for which no algorithm exists to determine whether two words are equivalent. This was shown to be the case in 1952.

The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proven from ZFC.

In 1970, Russian mathematician Yuri Matiyasevich showed that Hilbert's Tenth Problem, posed in 1900 as a challenge to the next century of mathematicians, cannot be solved. Hilbert's challenge sought an algorithm which finds all solutions of a Diophantine equation. A Diophantine equation generalizes the equation of Fermat's Last Theorem: we seek the integer roots of a polynomial in any number of variables with integer coefficients. Since we have only one equation but n variables, infinitely many solutions exist (and are easy to find) in the complex plane; however, the problem becomes impossible if solutions are constrained to integer values only. Matiyasevich showed this problem to be unsolvable by mapping a Diophantine equation to a recursively enumerable set and invoking Gödel's Incompleteness Theorem. [4]
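The asymmetry exposed by Matiyasevich's theorem can be illustrated with a naive search. The sketch below, using an arbitrary illustrative polynomial, halts whenever an integer solution exists, but when there is none it runs forever; the theorem says that no algorithm can replace this open-ended search with a guaranteed yes-or-no answer:

```python
from itertools import count, product

def search_integer_root(polynomial, arity):
    # Semi-decision procedure: enumerate integer tuples in ever-growing boxes
    # and halt as soon as a root of the polynomial is found.
    for bound in count():
        candidates = range(-bound, bound + 1)
        for xs in product(candidates, repeat=arity):
            if polynomial(*xs) == 0:
                return xs              # found an integer solution: halt
    # if the equation has no integer solution, this loop never terminates

# Example: x^2 + y^2 - 25 = 0 has integer solutions; the search finds one.
print(search_integer_root(lambda x, y: x * x + y * y - 25, 2))  # (-5, 0)
```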

In 1936, Alan Turing proved that the halting problem—the question of whether or not a Turing machine halts on a given program—is undecidable, in the second sense of the term. This result was later generalized by Rice's theorem.

In 1973, Saharon Shelah showed the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory. [5]

In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of Ramsey's theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms but can be proven to be true in the larger system of second-order arithmetic.

Kruskal's tree theorem, which has applications in computer science, is also undecidable from the Peano axioms but provable in set theory. In fact, Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable on the basis of a philosophy of mathematics called predicativism.

Goodstein's theorem is a statement about the Ramsey theory of the natural numbers that Kirby and Paris showed is undecidable in Peano arithmetic.

Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's theorem states that for any theory that can represent enough arithmetic, there is an upper bound c such that no specific number can be proven in that theory to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.

In 2007, researchers Kurtz and Simon, building on earlier work by J.H. Conway in the 1970s, proved that a natural generalization of the Collatz problem is undecidable. [6]
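For orientation, the original Collatz iteration is the simple procedure sketched below; the conjecture is that it terminates for every positive starting value, and the Kurtz–Simon result shows that a natural generalization of this kind of termination question is undecidable:

```python
def collatz_steps(n: int) -> int:
    """Count iterations of n -> n/2 (if even) or 3n+1 (if odd) until reaching 1.
    Whether this loop terminates for every positive n is the Collatz conjecture."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 famously takes 111 steps to reach 1
```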

In 2019, Ben-David and colleagues constructed an example of a learning model (named EMX), and showed a family of functions whose learnability in EMX is undecidable in standard set theory. [7] [8]

Notes

  1. This means that there exists an algorithm that halts eventually when the answer is yes but may run forever if the answer is no.
  2. There are uncountably many subsets of ℕ, only countably many of which can be decided by algorithms. However, only countably many decision problems can be stated in any given language.

Related Research Articles

In mathematics and computer science, the Entscheidungsproblem is a challenge posed by David Hilbert and Wilhelm Ackermann in 1928. The problem asks for an algorithm that considers, as input, a statement and answers "yes" or "no" according to whether the statement is universally valid, i.e., valid in every structure.

Mathematical logic is the study of formal logic within mathematics. Major subareas include model theory, proof theory, set theory, and recursion theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic such as their expressive or deductive power. However, it can also include uses of logic to characterize correct mathematical reasoning or to establish foundations of mathematics.

In computability theory, Rice's theorem states that all non-trivial semantic properties of programs are undecidable. A semantic property is one about the program's behavior, unlike a syntactic property. A property is non-trivial if it is neither true for every program, nor false for every program.

Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.

In mathematics, a Diophantine equation is an equation of the form P(x1, ..., xj, y1, ..., yk) = 0 (usually abbreviated P(x, y) = 0) where P(x, y) is a polynomial with integer coefficients, where x1, ..., xj indicate parameters and y1, ..., yk indicate unknowns.

Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm that, for any given Diophantine equation, can decide whether the equation has a solution with all unknowns taking integer values.

Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.

Foundations of mathematics is the study of the philosophical and logical and/or algorithmic basis of mathematics, or, in a broader sense, the mathematical investigation of what underlies the philosophical theories concerning the nature of mathematics. In this latter sense, the distinction between foundations of mathematics and philosophy of mathematics turns out to be vague. Foundations of mathematics can be conceived as the study of the basic mathematical concepts and how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics also called metamathematical concepts, with an eye to the philosophical aspects and the unity of mathematics. The search for foundations of mathematics is a central question of the philosophy of mathematics; the abstract nature of mathematical objects presents special philosophical challenges.

Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics owes itself to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a rigorous mathematical technique for investigating a great variety of foundation problems for mathematics and logic". An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics.

In the mathematical field of set theory, a large cardinal property is a certain kind of property of transfinite cardinal numbers. Cardinals with such properties are, as the name suggests, generally very "large". The proposition that such cardinals exist cannot be proved in the most common axiomatization of set theory, namely ZFC, and such propositions can be viewed as ways of measuring how "much", beyond ZFC, one needs to assume to be able to prove certain desired results. In other words, they can be seen, in Dana Scott's phrase, as quantifying the fact "that if you want more you have to assume more".

Tarski's undefinability theorem, stated and proved by Alfred Tarski in 1933, is an important limitative result in mathematical logic, the foundations of mathematics, and in formal semantics. Informally, the theorem states that "arithmetical truth cannot be defined in arithmetic".

In computability theory, a decider is a Turing machine that halts for every input. A decider is also called a total Turing machine as it represents a total function.

In mathematics, Robinson arithmetic is a finitely axiomatized fragment of first-order Peano arithmetic (PA), first set out by Raphael M. Robinson in 1950. It is usually denoted Q. Q is almost PA without the axiom schema of mathematical induction. Q is weaker than PA but it has the same language, and both theories are incomplete. Q is important and interesting because it is a finitely axiomatized fragment of PA that is recursively incompletable and essentially undecidable.

In mathematics, a proof of impossibility is a proof that demonstrates that a particular problem cannot be solved as described in the claim, or that a particular set of problems cannot be solved in general. Such a case is also known as a negative proof, proof of an impossibility theorem, or negative result. Proofs of impossibility often are the resolutions to decades or centuries of work attempting to find a solution, eventually proving that there is no solution. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general, rather than to just show a particular example. Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic.

This article gives a sketch of a proof of Gödel's first incompleteness theorem. This theorem applies to any formal theory that satisfies certain technical hypotheses, which are discussed as needed during the sketch. We will assume for the remainder of the article that a fixed theory satisfying these hypotheses has been selected.

The history of the Church–Turing thesis ("thesis") involves the history of the development of the study of the nature of functions whose values are effectively calculable; or, in more modern terms, functions whose values are algorithmically computable. It is an important topic in modern mathematical theory and computer science, particularly associated with the work of Alonzo Church and Alan Turing.

A timeline of mathematical logic; see also history of logic.

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs.

References

  1. "Formal Computational Models and Computability". www.cs.rochester.edu. Retrieved 2022-06-12.
  2. "decision problem". Oxford Reference. Retrieved 2022-06-12.
  3. Aaronson, Scott (21 July 2011). "Rosser's Theorem via Turing machines". Shtetl-Optimized. Retrieved 2 November 2022.
  4. Matiyasevich, Yuri (1970). "Диофантовость перечислимых множеств" [Enumerable sets are Diophantine]. Doklady Akademii Nauk SSSR (in Russian). 191: 279–282.
  5. Shelah, Saharon (1974). "Infinite Abelian groups, Whitehead problem and some constructions". Israel Journal of Mathematics. 18 (3): 243–256. doi:10.1007/BF02757281. MR 0357114. S2CID 123351674.
  6. Kurtz, Stuart A.; Simon, Janos (2007). "The Undecidability of the Generalized Collatz Problem". In Proceedings of the 4th International Conference on Theory and Applications of Models of Computation (TAMC 2007), Shanghai, China, May 2007. ISBN 3-540-72503-2. doi:10.1007/978-3-540-72504-6_49.
  7. Ben-David, Shai; Hrubeš, Pavel; Moran, Shay; Shpilka, Amir; Yehudayoff, Amir (2019-01-07). "Learnability can be undecidable". Nature Machine Intelligence. 1 (1): 44–48. doi:10.1038/s42256-018-0002-3. ISSN 2522-5839. S2CID 257109887.
  8. Reyzin, Lev (2019). "Unprovability comes to machine learning". Nature. 565 (7738): 166–167. Bibcode:2019Natur.565..166R. doi:10.1038/d41586-019-00012-4. ISSN 0028-0836. PMID 30617250.