Selman's theorem

In computability theory, Selman's theorem is a theorem relating enumeration reducibility with enumerability relative to oracles. It is named after Alan Selman, who proved it as part of his PhD thesis in 1971.

Statement

Informally, a set A is enumeration-reducible to a set B if there is a Turing machine which receives an enumeration of B (it has a special instruction to get the next element, or none if it has not yet been provided), and produces an enumeration of A. See enumeration reducibility for a precise account.
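As a toy illustration (not the general definition), suppose A = {2x : x ∈ B}. An enumeration reduction from A to B can then be sketched as a generator that consumes elements of B as they arrive and emits the corresponding elements of A:

```python
def enumerate_A(enum_B):
    # Toy enumeration reduction witnessing A <=_e B for A = {2x : x in B}:
    # consume an enumeration of B one element at a time,
    # emit an enumeration of A.
    for b in enum_B:
        yield 2 * b
```

The reduction never needs to know all of B at once; it produces elements of A from whatever portion of B has been enumerated so far.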

A set A is computably enumerable with oracle B (or simply "in B") when there is a Turing machine with oracle B which enumerates the members of A; this is the relativized version of computable enumerability.

Selman's theorem:[1][2][3][4][5] A set A is enumeration-reducible to a set B if and only if A is computably enumerable with an oracle X whenever B is computably enumerable with the same oracle X.

Discussion

Informally, the hypothesis means that whenever there is a program enumerating B using some source of information (the oracle), there is also a program enumerating A using the same source of information. A priori, the program enumerating A could be running the program enumerating B as a subprogram in order to produce the elements of A from those of B; but it could also use the source of information directly, perhaps in a quite different way from the program enumerating B, and it might not be deducible from the program enumerating B at all. The theorem asserts that, in fact, there exists a single program which produces an enumeration of A solely from an enumeration of B, without direct access to the source of information used to enumerate B.

From a slightly different point of view, the theorem is an automatic uniformity result. Let P be the set of total functions f : ℕ → ℕ ∪ {⊥} such that the range of f with ⊥ removed equals A, and let Q be similarly defined for B. A possible reformulation of the theorem is that if P is Mučnik-reducible to Q, then it is also Medvedev-reducible to Q.[6]: 5  Informally: if every enumeration of B can be used to compute an enumeration of A, then there is a single (uniform) oracle Turing machine which computes some enumeration of A whenever it is given an enumeration of B as the oracle.

Proof

If A is enumeration-reducible to B and B is computably enumerable with oracle X, then A is computably enumerable with oracle X (it suffices to compose a machine that enumerates A given an enumeration of B with a machine that enumerates B with an oracle X).
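The easy direction is just composition of machines. Under the (simplifying) assumption that enumerators are modeled as generator-producing functions, the chaining can be sketched as:

```python
def compose_enumerators(reduce_A_from_B, enumerate_B_with):
    # reduce_A_from_B: turns an enumeration of B into an enumeration of A.
    # enumerate_B_with: turns an oracle X into an enumeration of B.
    # The composite turns the same oracle X into an enumeration of A.
    def enumerate_A_with(oracle_X):
        return reduce_A_from_B(enumerate_B_with(oracle_X))
    return enumerate_A_with
```

The composite machine queries the oracle only through the B-enumerator, which is exactly why A is computably enumerable with the same oracle X.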

Conversely, assume that A is not enumeration-reducible to B. We shall build X such that B is computably enumerable with oracle X, but A is not.

Let ⟨·, ·⟩ denote some computable pairing function. We build X as a set of elements ⟨x, y⟩ with x ∈ B, such that for each x ∈ B, there is at least one pair ⟨x, y⟩ in X. This ensures that B is computably enumerable with oracle X (through a semi-algorithm that takes an input x and searches for a y such that ⟨x, y⟩ ∈ X).
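The semi-algorithm is a straightforward unbounded search. A minimal sketch, using the Cantor pairing function as one computable choice of ⟨·, ·⟩ and modeling the oracle as a membership predicate:

```python
def pair(x, y):
    # Cantor pairing function: one computable choice of <x, y>.
    return (x + y) * (x + y + 1) // 2 + x

def semi_decide_in_B(x, X_oracle):
    # Search for a witness y with <x, y> in X.
    # Halts (returning True) iff such a pair exists, i.e. iff x is in B.
    y = 0
    while True:
        if X_oracle(pair(x, y)):
            return True
        y += 1
```

As with any semi-algorithm, the search runs forever on inputs x not in B; that is exactly what computable enumerability allows.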

The construction of X is done in stages, following the priority method. It is convenient to view the eventual value of X as an infinite bit string (whose i-th bit is 1 if and only if i ∈ X), constructed by incrementally appending to a finite bit string. Initially, X is the empty string.

We describe the n-th step of the construction. It extends X in two ways.

First, we ensure that X has a 1 bit at some index ⟨x, y⟩, where x is the n-th element of B. If there is none yet, we choose y large enough that the index ⟨x, y⟩ lies outside the current string X, and we add a 1 bit at this index (padding with 0 bits before it). Doing this ensures that in the eventual value of X, there is some pair ⟨x, y⟩ in X for each x ∈ B.

Second, let us call an "admissible extension" an extension of the current X whose 1 bits all occur at indices of the form ⟨x, y⟩ with x ∈ B. Denote by M the n-th oracle Turing machine. We write M(Z) for M run with a specific oracle Z (if Z is a finite bit string, out-of-bounds requests return 0).
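The convention that a finite bit string answers out-of-bounds oracle queries with 0 can be captured by a tiny helper (a sketch; the name is ours):

```python
def oracle_from_string(bits):
    # View a finite approximation of X, given as a bit string, as an oracle;
    # queries beyond the end of the string return 0 (False),
    # matching the convention used in the construction.
    return lambda i: i < len(bits) and bits[i] == "1"
```

This is what makes it meaningful to run M against a finite stage of the construction: the machine behaves as if all not-yet-decided bits were 0.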

We distinguish three cases.

1. There is an admissible extension Y such that M(Y) enumerates some x that is not in A. Fix such an x. We further extend Y by padding it with 0s until all oracle queries made by M(Y) before it enumerates x fall within bounds, and we set X to this extended Y. This ensures that however X is later extended, M(X) does not enumerate A, as it enumerates the element x, which is not in A.

2. There is some value x in A which is not enumerated by M(Y) for any admissible extension Y. In this case, we do not change X; it is already ensured that eventually M(X) will not enumerate A, because it cannot enumerate x: indeed, if it did, it would do so after a finite number of oracle invocations, which would all fall within some admissible extension Y.

3. We show that the remaining case is absurd. Here, we know that every value enumerated by M(Y), for Y an admissible extension, is in A, and conversely, every element of A is enumerated by M(Y) for at least one admissible extension Y. In other words, A is exactly the set of values enumerated by M(Y) as Y ranges over admissible extensions. We can build a machine which receives an enumeration of B, uses it to enumerate admissible extensions Y, runs the machines M(Y) in parallel, and enumerates the values they yield. This machine is an enumeration reduction from A to B, which is absurd, since we assumed no such reduction exists.
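Running countably many machines "in parallel" means dovetailing: at stage n, start the n-th computation and advance each of the first n computations by one step. Modeling each computation M(Y) as a generator, a sketch of such a dovetailing combinator is:

```python
def dovetail(generators):
    # Interleave a (possibly infinite) stream of generators: at each stage,
    # bring in the next generator and advance every live generator one step,
    # yielding whatever values they produce.
    pool = []
    for g in generators:
        pool.append(g)
        for gen in list(pool):
            try:
                yield next(gen)
            except StopIteration:
                pool.remove(gen)
    while pool:  # drain remaining generators if the outer stream is finite
        for gen in list(pool):
            try:
                yield next(gen)
            except StopIteration:
                pool.remove(gen)
```

Every value produced by any of the interleaved computations is eventually yielded, which is what the machine in case 3 needs in order to enumerate all of A.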


References

  1. Selman, Alan (1971). "Arithmetical Reducibilities I". Mathematical Logic Quarterly. 17: 335–350. doi:10.1002/malq.19710170139.
  2. Cooper, Barry (2004). Computability Theory. p. 179. ISBN 978-1584882374.
  3. Copestake, Kate (1997). "On Nondeterminism, Enumeration Reducibility and Polynomial Bounds". Mathematical Logic Quarterly. 43 (3): 287–310. doi:10.1002/malq.19970430302.
  4. Nattingga, Dávid (2019). "α degrees as an automorphism base for the α-enumeration degrees". arXiv:1812.11308.
  5. Jacobsen-Grocott, Josiah (2024). Structural and topological aspects of the enumeration and hyperenumeration degrees (PDF) (PhD thesis). Retrieved 2024-06-01.
  6. Miller, Joseph S. (2004). "Degrees of Unsolvability of Continuous Functions" (PDF). Journal of Symbolic Logic. 69 (2): 555–584. doi:10.2178/jsl/1082418543. Retrieved 2024-06-03.