Low (complexity)


In computational complexity theory, a language B (or a complexity class B) is said to be low for a complexity class A (with some reasonable relativized version of A) if A^B = A; that is, A with an oracle for B is equal to A. [1] Such a statement implies that an abstract machine which solves problems in A achieves no additional power if it is given the ability to solve problems in B at unit cost. In particular, this means that if B is low for A then B is contained in A. Informally, lowness means that problems in B are not only solvable by machines which can solve problems in A, but are “easy to solve”: an A machine can simulate many oracle queries to B without exceeding its resource bounds.
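In symbols, the definition and its immediate consequence read as follows; this is a restatement of the paragraph above, not an additional result.

```latex
% B is low for A: adjoining an oracle for B adds no power.
A^{B} = A
% Containment follows: the A-machine that simply forwards its
% input to the B-oracle and returns the answer decides B, so
B \text{ low for } A \;\implies\; B \in A^{B} = A .
```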


Results and relationships that establish one class as low for another are often called lowness results. The set of languages low for a complexity class A is denoted Low(A).

Classes that are low for themselves

Several natural complexity classes are known to be low for themselves. Such a class is sometimes called self-low. [2] Scott Aaronson calls such a class a physical complexity class. [3] Note that being self-low is a stronger condition than being closed under complement. Informally, a class being low for itself means that a problem can use other problems in the class as unit-cost subroutines without exceeding the power of the complexity class.
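The "unit-cost subroutine" idea is easiest to see for P, the simplest self-low class (P^P = P). The sketch below inlines an oracle call: if the oracle language is itself decidable in polynomial time, each unit-cost query can be replaced by a direct call to its decider, and a polynomial composed with a polynomial is still a polynomial. The deciders `decide_b` and `decide_a_with_oracle` are hypothetical placeholders, not drawn from any source.

```python
# Minimal sketch: why P is low for itself (P^P = P).

def decide_b(query: str) -> bool:
    """Hypothetical decider for an oracle language B in P,
    running in time polynomial in |query|."""
    return query.count("1") % 2 == 0  # stand-in polynomial-time test

def decide_a_with_oracle(x: str, oracle) -> bool:
    """Hypothetical polynomial-time A-machine that makes
    polynomially many unit-cost oracle queries."""
    answers = []
    for i in range(len(x)):                 # poly(|x|) queries,
        answers.append(oracle(x[: i + 1]))  # each of poly(|x|) length
    return any(answers)

# Replacing the unit-cost oracle by decide_b keeps the total running
# time poly(|x|) * poly(poly(|x|)) = poly(|x|), so the oracle machine
# can be simulated by a plain polynomial-time machine.
result = decide_a_with_oracle("10110", decide_b)
```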

The following classes are known to be self-low: [3]

  - P: an oracle query to a language in P can be answered by running its polynomial-time decider directly, and a polynomial composed with a polynomial is again a polynomial.
  - ZPP: expected polynomial running times compose in the same way, and the simulated oracle answers are always correct.
  - BPP: amplification keeps the error of the simulated oracle answers negligible.
  - PH: an oracle for a language in Σ_j^P lifts a Σ_k^P machine only to Σ_{j+k}^P, which is still within PH.
  - BQP: quantum polynomial-time subroutines can be composed without leaving quantum polynomial time. [4] [5]
  - PSPACE: each oracle answer can be recomputed in polynomial space, with the space reused from query to query.

Every class which is low for itself is closed under complement, provided that it is powerful enough to negate the boolean result: an A machine can decide the complement of a language L in A by querying an oracle for L on its input and negating the answer, so co-A ⊆ A^A = A. This implies that NP is not low for itself unless NP = co-NP, which is considered unlikely because it would imply that the polynomial hierarchy collapses to the first level, whereas the hierarchy is widely believed to be infinite. The converse does not hold: a class that is closed under complement need not be low for itself. EXP, for example, is closed under complement but is not low for itself.
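Written out, the two halves of this paragraph are as follows; these are restatements of standard facts, not new claims.

```latex
% Self-lowness plus the ability to negate gives closure under
% complement: for any language L in A,
\overline{L} \in A^{L} \subseteq A^{A} = A .
% EXP is closed under complement yet not self-low: an EXP machine
% can pose queries of length 2^{n^{O(1)}} to an EXP-complete
% oracle, so by padding and the time hierarchy theorem
\mathrm{EXP} \subsetneq \mathrm{EXP}^{\mathrm{EXP}} .
```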

Classes that are low for other complexity classes

Some of the more complex and famous results regarding lowness of classes include: [1]

  - BPP is low for PP. [9]
  - BQP is low for PP. [6]
  - The graph isomorphism problem is in SPP, [7] and SPP is low for the counting classes PP, ⊕P, and C=P, so graph isomorphism is low for each of them; graph isomorphism is also low for ZPP^NP. [8]
  - NP ∩ co-NP is low for NP.

Applications

Lowness is particularly valuable in relativization arguments, where it can be used to establish that the power of a class does not change in the "relativized universe" where a particular oracle is available for free; the relativized class can then be reasoned about exactly as the unrelativized one. For example, in the relativized universe of BQP, PP is still closed under union and intersection. Lowness results are also useful when seeking to expand the power of a machine with oracles, because they determine exactly when the machine's power remains the same.
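For the PP example, the transfer works as follows, using the lowness result cited below [6]:

```latex
% BQP is low for PP, so adjoining a BQP oracle changes nothing:
\mathrm{PP}^{\mathrm{BQP}} = \mathrm{PP} ,
% and every closure property of PP, such as closure under union
% and intersection, therefore holds relative to a BQP oracle.
```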

See also

Related Research Articles

In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.
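The 1/3 error bound is arbitrary in the sense that it can be driven down by repetition: run the algorithm several times and take a majority vote. The sketch below illustrates this; `noisy_decider` is a hypothetical stand-in for an arbitrary BPP algorithm, not a real API.

```python
import random

def noisy_decider(x: str) -> bool:
    """Hypothetical BPP-style decider: correct with probability >= 2/3."""
    correct = x.startswith("1")          # stand-in for the true answer
    return correct if random.random() < 2 / 3 else not correct

def amplified(x: str, trials: int = 101) -> bool:
    """Majority vote over independent runs; by a Chernoff bound the
    error probability drops exponentially in the number of trials."""
    votes = sum(noisy_decider(x) for _ in range(trials))
    return votes > trials // 2

print(amplified("1010"))  # almost always agrees with the true answer
```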

In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.

The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer and solvable by mechanical application of mathematical steps, such as an algorithm.

In computational complexity theory, the complexity class #P (pronounced "sharp P" or, sometimes, "number P" or "hash P") is the set of the counting problems associated with the decision problems in the set NP. More formally, #P is the class of function problems of the form "compute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, it is not a class of decision problems but a class of function problems. The most difficult, representative problems of this class are #P-complete.
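A canonical #P problem is #SAT: counting the satisfying assignments of a Boolean formula, the counting analogue of the NP decision problem SAT. A brute-force counter follows (exponential time, shown only to make the definition concrete; the function name and clause encoding are illustrative conventions, not from the article).

```python
from itertools import product

def count_sat(clauses, n_vars):
    """#SAT by brute force: count assignments satisfying a CNF formula.
    Each clause is a list of literals; literal +i / -i means
    variable i is true / false (variables are 1-indexed)."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2): satisfied exactly when x2 is true,
# so there are 2 satisfying assignments.
print(count_sat([[1, 2], [-1, 2]], n_vars=2))  # -> 2
```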

ZPP (complexity)

In complexity theory, ZPP is the complexity class of problems for which a probabilistic Turing machine exists with these properties:

  1. It always returns the correct YES or NO answer.
  2. The running time is polynomial in expectation for every input.
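A toy Las Vegas routine in the ZPP spirit: it never errs, and only its running time is random. The task (finding a 1 in a bit string at least half of whose entries are 1) is an illustrative example, not drawn from the article.

```python
import random

def find_one(bits):
    """Las Vegas search: assuming at least half the entries are 1,
    each random probe succeeds with probability >= 1/2, so the
    expected number of probes is at most 2. The answer, whenever
    one is returned, is always correct."""
    assert any(bits), "needs at least one 1"
    while True:
        i = random.randrange(len(bits))
        if bits[i] == 1:
            return i

print(find_one([0, 1, 0, 1, 1, 0, 1, 0]))
```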

In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem to the second, and the number of times the subroutine is called is polynomial, then the first problem is polynomial-time reducible to the second.
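A classic concrete instance of this pattern: deciding whether a graph has an independent set of size k by transforming the instance (complementing the graph) and calling a subroutine for the clique problem once, since a graph has an independent set of size k exactly when its complement has a clique of size k. The function names are illustrative; `has_clique` here is a brute-force placeholder for any clique solver.

```python
from itertools import combinations

def has_clique(edges, n, k):
    """Placeholder clique decider (brute force here; any solver works)."""
    return any(all((u, v) in edges or (v, u) in edges
                   for u, v in combinations(group, 2))
               for group in combinations(range(n), k))

def has_independent_set(edges, n, k):
    """Polynomial-time reduction: complement the edge set, then make
    a single call to the clique subroutine."""
    complement = {(u, v) for u, v in combinations(range(n), 2)
                  if (u, v) not in edges and (v, u) not in edges}
    return has_clique(complement, n, k)

# The path 0-1-2 has the independent set {0, 2}, so this prints True.
print(has_independent_set({(0, 1), (1, 2)}, n=3, k=2))
```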

Time complexity

In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
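To make "counting elementary operations" concrete, the snippet below instruments a linear search with an explicit comparison counter; the count grows proportionally to the input length, i.e. the algorithm runs in O(n) time under the fixed-cost assumption described above. The counter is illustrative only.

```python
def linear_search(items, target):
    """Return (index or -1, number of comparisons performed)."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1              # one elementary comparison
        if item == target:
            return i, comparisons
    return -1, comparisons

# Worst case: the target is absent, so exactly len(items) comparisons
# are made -- time complexity O(n).
print(linear_search(list(range(1000)), -1))  # (-1, 1000)
```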

Complexity class

In computational complexity theory, a complexity class is a set of computational problems of related resource-based complexity. The two most commonly analyzed resources are time and memory.

In quantum computing, a quantum algorithm is an algorithm which runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.

In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

In computational complexity theory, the complexity class PH is the union of all complexity classes in the polynomial hierarchy: PH = ⋃_k Σ_k^P, where Σ_k^P is the k-th level of the hierarchy.

PP (complexity)

In complexity theory, PP is the class of decision problems solvable by a probabilistic Turing machine in polynomial time, with an error probability of less than 1/2 for all instances. The abbreviation PP refers to probabilistic polynomial time. The complexity class was defined by Gill in 1977.

In computational complexity theory, an Arthur–Merlin protocol, introduced by Babai (1985), is an interactive proof system in which the verifier's coin tosses are constrained to be public. Goldwasser & Sipser (1986) proved that all (formal) languages with interactive proofs of arbitrary length with private coins also have interactive proofs with public coins.

In computational complexity theory, L is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.
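The "constant number of pointers" intuition can be made concrete with a standard logspace-style example: checking whether the input is a palindrome using only two indices into a read-only input, each index taking O(log n) bits. The example is illustrative, not from the article.

```python
def is_palindrome(s: str) -> bool:
    """Logspace-style check: the input is read-only; the working
    memory is just two indices, each O(log n) bits."""
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True

print(is_palindrome("racecar"))  # True
```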

The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic.
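A brute-force isomorphism test makes the definition concrete: try every bijection between the vertex sets and check that it maps edges exactly onto edges. This sketch takes factorial time; whether the problem can be solved in polynomial time is a famous open question.

```python
from itertools import permutations

def are_isomorphic(edges_g, edges_h, n):
    """Brute force: graphs on vertices 0..n-1, edges as unordered pairs."""
    eg = {frozenset(e) for e in edges_g}
    eh = {frozenset(e) for e in edges_h}
    if len(eg) != len(eh):
        return False
    return any({frozenset({p[u], p[v]}) for u, v in eg} == eh
               for p in permutations(range(n)))

# The paths 0-1-2 and 1-0-2 are isomorphic (relabel the middle vertex).
print(are_isomorphic([(0, 1), (1, 2)], [(1, 0), (0, 2)], n=3))  # True
```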

In computational complexity theory, the complexity class ⊕P is the class of decision problems solvable by a nondeterministic Turing machine in polynomial time, where the acceptance condition is that the number of accepting computation paths is odd. An example of a ⊕P problem is "does a given graph have an odd number of perfect matchings?" The class was defined by Papadimitriou and Zachos in 1983.
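A brute-force computation of the example problem, deciding whether a graph has an odd number of perfect matchings, written only to make the acceptance condition concrete (each perfect matching plays the role of one accepting path); the function names are illustrative.

```python
def count_perfect_matchings(edges, n):
    """Count perfect matchings by recursing on the lowest unmatched vertex."""
    adj = {frozenset(e) for e in edges}
    def count(rem):
        if not rem:
            return 1
        v = min(rem)
        return sum(count(rem - {v, u})
                   for u in rem - {v} if frozenset({v, u}) in adj)
    return count(frozenset(range(n)))

def odd_matchings(edges, n) -> bool:
    """The parity question a ⊕P machine answers: is the count odd?"""
    return count_perfect_matchings(edges, n) % 2 == 1

# The 4-cycle 0-1-2-3-0 has exactly two perfect matchings -> even.
print(odd_matchings([(0, 1), (1, 2), (2, 3), (3, 0)], n=4))  # False
```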

NP-completeness

In computational complexity theory, a problem is NP-complete when:

  1. it is a problem for which the correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions.
  2. the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct. In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified. Both conditions are illustrated by the SAT sketch below.
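The two conditions can be made concrete with SAT, the original NP-complete problem: a proposed solution (a truth assignment) can be verified in polynomial time, while brute-force search tries all 2^n assignments. A minimal sketch, with `verify` and `brute_force_sat` as illustrative names:

```python
from itertools import product

def verify(clauses, assignment):
    """Polynomial-time verifier: does the assignment satisfy the CNF?
    Literal +i / -i means variable i true / false (1-indexed)."""
    return all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses)

def brute_force_sat(clauses, n_vars):
    """Exponential-time search: try all 2^n assignments, verifying each."""
    for bits in product([False, True], repeat=n_vars):
        if verify(clauses, bits):
            return bits
    return None

print(brute_force_sat([[1, 2], [-1, 2]], n_vars=2))  # (False, True)
```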

Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical complexity classes.

References

  1. Köbler, Johannes; Torán, Jacobo (2015). "Lowness results: the next generation". Bulletin of the EATCS. 117.
  2. Rothe, J. (2006). Complexity Theory and Cryptology: An Introduction to Cryptocomplexity. Texts in Theoretical Computer Science. An EATCS Series. Springer Berlin Heidelberg. ISBN 978-3-540-28520-5. Retrieved 2017-05-15.
  3. "Lens of Computation on the Sciences". 25 November 2014. Archived from the original on 6 May 2021. Retrieved 17 October 2021.
  4. Bernstein and Vazirani, Quantum complexity theory, SIAM Journal on Computing, 26(5):1411-1473, 1997. Archived 2011-05-25 at the Wayback Machine.
  5. "Archived copy" (PDF). Archived (PDF) from the original on 2021-05-06. Retrieved 2021-10-17.
  6. L. Fortnow and J. D. Rogers. Complexity limitations on quantum computation. In Proceedings of IEEE Complexity '98, p. 202-209. 1998. arXiv:cs.CC/9811023.
  7. V. Arvind and P. Kurur. Graph isomorphism is in SPP. ECCC TR02-037. 2002.
  8. Vikraman Arvind and Johannes Köbler. Graph Isomorphism Is Low for ZPP(NP) and Other Lowness Results. Proceedings of the 17th Annual Symposium on Theoretical Aspects of Computer Science, ISBN 3-540-67141-2, p. 431-442. 2000.
  9. L. Li. On the Counting Functions. PhD thesis, University of Chicago. 1993.