♯P

In computational complexity theory, the complexity class #P (pronounced "sharp P" or, sometimes, "number P" or "hash P") is the set of counting problems associated with the decision problems in the set NP. More formally, #P is the class of function problems of the form "compute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, it is not a class of decision problems but a class of function problems. The most difficult, representative problems of this class are #P-complete.

Relation to decision problems

An NP decision problem is often of the form "Are there any solutions that satisfy certain constraints?" For example:

  - Are there any subsets of a list of integers that add up to zero? (subset sum problem)
  - Are there any Hamiltonian cycles in a given graph with cost less than 100? (traveling salesman problem)
  - Are there any variable assignments that satisfy a given CNF formula? (Boolean satisfiability problem)
  - Are there any univariate real roots of a given real polynomial?

The corresponding #P function problems ask "how many" rather than "are there any". For example:

  - How many subsets of a list of integers add up to zero?
  - How many Hamiltonian cycles in a given graph have cost less than 100?
  - How many variable assignments satisfy a given CNF formula?
  - How many univariate real roots does a given real polynomial have?

Clearly, a #P problem must be at least as hard as the corresponding NP problem. If it's easy to count answers, then it must be easy to tell whether there are any answers—just count them and see whether the count is greater than zero. Some of these problems, such as root finding, are easy enough to be in FP, while others are #P-complete.
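To make this concrete, the following Python sketch (illustrative only, not from the article, and exponential-time by design) counts the satisfying assignments of a CNF formula and derives the NP decision answer from the #P count:

```python
from itertools import product

def count_sat(clauses, num_vars):
    """Brute-force #SAT: count the truth assignments satisfying a CNF
    formula. Clauses use the DIMACS-like convention: literal k means
    variable k is true, -k means variable k is false."""
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

def sat(clauses, num_vars):
    """The NP decision problem: satisfiable iff the #P count is nonzero."""
    return count_sat(clauses, num_vars) > 0

# (x1 OR x2) AND (NOT x1 OR x2) is satisfied exactly when x2 is true.
clauses = [[1, 2], [-1, 2]]
print(count_sat(clauses, 2))  # 2
print(sat(clauses, 2))        # True
```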

One consequence of Toda's theorem is that a polynomial-time machine with a #P oracle ($P^{\#P}$) can solve all problems in PH, the entire polynomial hierarchy. In fact, the polynomial-time machine only needs to make one #P query to solve any problem in PH. This is an indication of the extreme difficulty of solving #P-complete problems exactly.

Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For more information on this, see #P-complete.

The closest decision problem class to #P is PP, which asks whether a majority (more than half) of the computation paths accept. This finds the most significant bit in the #P problem answer. The decision problem class ⊕P (pronounced "Parity-P") instead asks for the least significant bit of the #P answer.
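A toy sketch of the two "bit extractions" (the path counts below are made up for illustration; this is not part of the original article):

```python
def majority_accepts(accepting_paths, total_paths):
    """PP: do more than half of the computation paths accept?
    Informally, this reads off the most significant bit of the count."""
    return accepting_paths > total_paths // 2

def parity_accepts(accepting_paths):
    """Parity-P: is the number of accepting paths odd?
    This is the least significant bit of the count."""
    return accepting_paths % 2 == 1

# A hypothetical machine with 2**3 = 8 computation paths, 5 accepting:
print(majority_accepts(5, 8))  # True
print(parity_accepts(5))       # True
```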

Formal definitions

#P is formally defined as follows:

#P is the set of all functions $f \colon \{0,1\}^* \to \mathbb{N}$ such that there is a polynomial-time nondeterministic Turing machine $M$ such that for all $x \in \{0,1\}^*$, $f(x)$ equals the number of accepting branches in $M$'s computation graph on $x$. [1]

#P can also be equivalently defined in terms of a verifier. A decision problem is in NP if there exists a polynomial-time checkable certificate for a given problem instance—that is, NP asks whether there exists a proof of membership for the input that can be checked for correctness in polynomial time. The class #P asks how many certificates exist for a problem instance that can be checked for correctness in polynomial time. [1] In this context, #P is defined as follows:

#P is the set of functions $f \colon \{0,1\}^* \to \mathbb{N}$ such that there exists a polynomial $p \colon \mathbb{N} \to \mathbb{N}$ and a polynomial-time deterministic Turing machine $V$, called the verifier, such that for every $x \in \{0,1\}^*$, $f(x) = \bigl|\{\, y \in \{0,1\}^{p(|x|)} : V(x, y) = 1 \,\}\bigr|$. [2] (In other words, $f(x)$ equals the size of the set containing all of the polynomial-size certificates.)
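A literal rendering of this definition as Python (a sketch only; the divisor-counting verifier below is a made-up toy example, not from the article):

```python
from itertools import product

def count_certificates(x, verifier, cert_len):
    """f(x) = |{ y in {0,1}^p(|x|) : V(x, y) = 1 }|. The verifier runs in
    polynomial time; the exponential loop over all candidate certificates
    is what makes exact counting hard."""
    return sum(1 for bits in product("01", repeat=cert_len)
               if verifier(x, "".join(bits)))

# Hypothetical toy verifier: certificate y, read as a binary number,
# must be a nonzero divisor of the input integer x.
def divides(x, y):
    d = int(y, 2)
    return d != 0 and x % d == 0

# Divisors of 12 representable in 4 bits: 1, 2, 3, 4, 6, 12.
print(count_certificates(12, divides, 4))  # 6
```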

History

The complexity class #P was first defined by Leslie Valiant in a 1979 article on the computation of the permanent of a square matrix, in which he proved that computing the permanent is #P-complete. [3]
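For reference, a direct (factorial-time) sketch of the permanent, the function Valiant studied; for the 0-1 adjacency matrix of a bipartite graph it counts the graph's perfect matchings:

```python
from itertools import permutations
from math import prod

def permanent(matrix):
    """perm(A) = sum over all permutations sigma of prod_i A[i][sigma(i)].
    The Leibniz formula for the determinant, but with every sign +1;
    Valiant showed computing it is #P-complete, even for 0-1 matrices."""
    n = len(matrix)
    return sum(prod(matrix[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```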

Larry Stockmeyer proved that for every #P problem $f$ there exists a randomized algorithm using an oracle for SAT which, given an instance $x$ of $f$ and $\varepsilon > 0$, returns with high probability a number $a$ such that $(1 - \varepsilon) f(x) \leq a \leq (1 + \varepsilon) f(x)$. [4] The runtime of the algorithm is polynomial in $|x|$ and $1/\varepsilon$. The algorithm is based on the leftover hash lemma.
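The hashing idea can be sketched as follows (a toy version, not Stockmeyer's actual algorithm: brute force stands in for the SAT oracle, and no accuracy guarantee is claimed). Each random XOR constraint halves the solution set in expectation, so the number of constraints that can be added before the instance becomes unsatisfiable estimates the base-2 logarithm of the count:

```python
import random
from itertools import product

def still_satisfiable(predicate, num_vars, xors):
    # Stands in for the SAT oracle: does any assignment satisfy the
    # predicate together with all the random parity constraints?
    return any(
        predicate(bits) and all(sum(bits[i] for i in s) % 2 == b
                                for s, b in xors)
        for bits in product([0, 1], repeat=num_vars)
    )

def estimate_count(predicate, num_vars, trials=15):
    estimates = []
    for _ in range(trials):
        xors, m = [], 0
        while True:
            # Random parity constraint over a random subset of variables.
            s = [i for i in range(num_vars) if random.random() < 0.5]
            xors.append((s, random.randint(0, 1)))
            if not still_satisfiable(predicate, num_vars, xors):
                break
            m += 1
        estimates.append(2 ** m)
    return sorted(estimates)[trials // 2]  # median of the trials

# Toy predicate with exactly 8 satisfying assignments out of 2**6:
pred = lambda bits: bits[0] == 0 and bits[1] == 0 and bits[2] == 0
print(estimate_count(pred, 6))  # typically close to 8
```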

References

  1. Barak, Boaz (Spring 2006). "Complexity of counting" (PDF). Computer Science 522: Computational Complexity. Princeton University.
  2. Arora, Sanjeev; Barak, Boaz (2009). Computational Complexity: A Modern Approach. Cambridge University Press. p. 344. ISBN 978-0-521-42426-4.
  3. Valiant, Leslie G. (1979). "The Complexity of Computing the Permanent". Theoretical Computer Science. Elsevier. 8 (2): 189–201. doi:10.1016/0304-3975(79)90044-6.
  4. Stockmeyer, Larry (November 1985). "On Approximation Algorithms for #P" (PDF). SIAM Journal on Computing. 14 (4): 849. doi:10.1137/0214060. Archived from the original (PDF) on 2009-10-28.