P/poly


In computational complexity theory, P/poly is a complexity class representing problems that can be solved by small circuits. More precisely, it is the set of formal languages that have polynomial-size circuit families. It can also be defined equivalently in terms of Turing machines with advice: extra information supplied to the machine along with its input that may depend on the length of the input but not on the input itself. In this formulation, P/poly is the class of decision problems that can be solved by a polynomial-time Turing machine with advice strings of length polynomial in the input size. [1] [2] These two equivalent definitions make P/poly central to circuit complexity and non-uniform complexity.


For example, the popular Miller–Rabin primality test can be formulated as a P/poly algorithm: the "advice" is a list of candidate witness values to test. It is possible to precompute a list of values such that every composite n-bit number is certain to have a witness a in the list. [3] For example, to correctly determine the primality of 32-bit numbers, it is enough to test the bases a = 2, 7, and 61. [4] [5] The existence of short lists of candidate witnesses follows from the fact that for each composite n, at least three out of four candidate values successfully detect that n is composite. From this, a simple counting argument similar to the one in the proof that BPP ⊆ P/poly (below) shows that there exists a suitable list of candidate values for every input size, and more strongly that most long-enough lists of candidate values will work correctly, although finding a list that is guaranteed to work may be expensive. [3]
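As an illustration, here is a minimal sketch (in Python) of the strong-probable-prime test driven by a fixed "advice" list of bases; the list [2, 7, 61] for 32-bit inputs follows the cited sources, while the helper names and the small-prime shortcut are this sketch's own choices rather than anything taken from the references.

```python
# A minimal sketch of the strong-probable-prime test driven by a fixed
# "advice" list of bases (helper names are this sketch's own).

def strong_probable_prime(n: int, witnesses) -> bool:
    """Return True if n passes the strong-probable-prime test for every base
    in `witnesses` (always True for primes; a composite can slip through only
    if the witness list is too weak for that n)."""
    small_primes = (2, 3, 5, 7, 11, 13)
    if n < 2:
        return False
    if n in small_primes:
        return True
    if any(n % p == 0 for p in small_primes):
        return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        if a % n == 0:          # base is a multiple of n: uninformative, skip
            continue
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # base a certifies that n is composite
    return True

# "Advice" for 32-bit inputs: per the cited sources, these three bases
# classify every number below 2**32 correctly.
ADVICE_32BIT = [2, 7, 61]
print([n for n in range(2, 70) if strong_probable_prime(n, ADVICE_32BIT)])
# -> the primes below 70
```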

P/poly, unlike other polynomial-time classes such as P or BPP, is not generally considered a practical class for computing. Indeed, it contains every undecidable unary language, none of which can be solved in general by real computers. On the other hand, if the input length is bounded by a relatively small number and the advice strings are short, it can be used to model practical algorithms with a separate expensive preprocessing phase and a fast processing phase, as in the Miller–Rabin example.

Formal definition

The complexity class P/poly can be defined in terms of SIZE as follows:

  P/poly = ⋃_{c ∈ ℕ} SIZE(n^c),

where SIZE(n^c) is the set of decision problems that can be solved by families of Boolean circuits in which the circuit for inputs of length n has no more than n^c gates.

Alternatively, P/poly can be defined using Turing machines that "take advice". Such a machine has, for each n, an advice string α_n, which it is allowed to use in its computation whenever the input has size n. To help visualize this equivalence, imagine that the advice for each n is a description of a Boolean circuit with n inputs, and that the Turing machine evaluates this circuit on its length-n input.
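To make the circuit picture concrete, here is a minimal sketch, with a made-up gate-list advice format (not one taken from the cited sources), of a machine that treats its advice as a Boolean circuit description and evaluates it on the input bits.

```python
# Minimal sketch: the "advice" for input length n is a (hypothetical) gate
# list describing a Boolean circuit over n input wires; deciding membership
# then amounts to evaluating that circuit on the input bits.

def evaluate(advice_circuit, input_bits):
    """Each gate is (op, i, j): apply op to the values on wires i and j.
    Wires 0..n-1 carry the input bits; gate k defines wire n+k; the value on
    the last wire is the circuit's output."""
    wires = list(input_bits)
    for op, i, j in advice_circuit:
        a, b = wires[i], wires[j]
        if op == "AND":
            wires.append(a & b)
        elif op == "OR":
            wires.append(a | b)
        elif op == "XOR":
            wires.append(a ^ b)
        else:
            raise ValueError(f"unknown gate {op!r}")
    return wires[-1]

# Advice for n = 3 computing PARITY(x0, x1, x2) with two XOR gates.
advice_for_length_3 = [("XOR", 0, 1), ("XOR", 3, 2)]
print(evaluate(advice_for_length_3, [1, 0, 1]))   # 0, since 1 XOR 0 XOR 1 = 0
```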

Let T, a : ℕ → ℕ be functions. The class of languages decidable by time-T(n) Turing machines with a(n) bits of advice, denoted DTIME(T(n))/a(n), contains every language L such that there exist a sequence of strings (α_n)_{n ∈ ℕ} with α_n ∈ {0,1}^{a(n)} and a Turing machine M satisfying

  M(x, α_n) = 1  ⟺  x ∈ L

for every x ∈ {0,1}^n, where on input (x, α_n) the machine M runs for at most O(T(n)) steps. [6]
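With this notation, P/poly is what results from letting both the running time and the advice length be arbitrary polynomials; a standard way to write this (the display below assumes the textbook conventions of [6]) is:

```latex
\mathsf{P/poly} \;=\; \bigcup_{c,\,d \,\in\, \mathbb{N}} \mathsf{DTIME}\!\left(n^{c}\right)/n^{d}
```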

Importance of P/poly

P/poly is an important class for several reasons. For theoretical computer science, several important properties depend on P/poly:

If PSPACE ⊆ P/poly, then PSPACE = MA. Proof: Consider a language L from PSPACE. It is known that there exists an interactive proof system for L in which the actions of the prover can be carried out by a PSPACE machine. By assumption, the prover can be replaced by a polynomial-size circuit. Therefore, L has an MA protocol: Merlin sends the circuit as the proof, and Arthur can simulate the IP protocol himself without any additional help.
MAEXP, the exponential-time analogue of MA, is not contained in P/poly. Proof: If MAEXP ⊆ P/poly, then PSPACE = MA (see above). By padding, EXPSPACE = MAEXP; therefore EXPSPACE ⊆ P/poly, but this can be proven false with diagonalization.

One of the most interesting reasons that P/poly is important is the property that if NP is not a subset of P/poly, then P ≠ NP. This observation was the center of many attempts to prove P ≠ NP. It is known that for a random oracle A, NP^A is not a subset of P^A/poly with probability 1. [1]

P/poly is also used in the field of cryptography. Security is often defined 'against' P/poly adversaries. Besides including most practical models of computation like BPP, this also admits the possibility that adversaries can do heavy precomputation for inputs up to a certain length, as in the construction of rainbow tables.

Although not all languages in P/poly are sparse languages, there is a polynomial-time Turing reduction from any language in P/poly to a sparse language. [11]

Bounded-error probabilistic polynomial time is contained in P/poly

Adleman's theorem states that BPP ⊆ P/poly, where BPP is the set of problems solvable with randomized algorithms with two-sided error in polynomial time. A weaker result was initially proven by Leonard Adleman, namely that RP ⊆ P/poly; [12] this result was generalized to BPP ⊆ P/poly by Bennett and Gill. [13] Variants of the theorem show that BPL is contained in L/poly and that AM is contained in NP/poly.

Proof

Let L be a language in BPP, and let M(x, r) be a polynomial-time algorithm that decides L with error probability at most 1/3 (where x is the input string and r is a string of random bits).

Construct a new machine M'(x, R) that runs M 48n times and takes the majority vote of the results (where n is the input length and R is a sequence of 48n independent random strings r). Thus, M' is also polynomial-time, and has an error probability of at most 1/e^n by the Chernoff bound (see BPP). If we can fix R, then we obtain an algorithm that is deterministic.
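For readers who want to see where the 1/e^n figure comes from, here is one standard way to instantiate a multiplicative Chernoff bound for this setting; the particular form of the bound used below is an assumption of this sketch (other standard forms give a bound of the same shape).

```latex
% X = number of wrong answers among the k = 48n independent runs of M,
% each wrong with probability at most 1/3, so E[X] <= \mu := k/3 = 16n.
% The majority vote of M' is wrong only if X >= k/2 = (3/2)\mu.
\Pr[X \ge (1+\delta)\mu] \;\le\; e^{-\delta^{2}\mu/3}
\qquad\text{with } \delta = \tfrac12,\ \mu = 16n
\quad\Longrightarrow\quad
\Pr[M' \text{ errs}] \;\le\; e^{-(1/2)^{2}\cdot 16n/3} \;=\; e^{-4n/3} \;\le\; e^{-n}.
```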

Say that R is "bad" for an input x if M'(x, R) gives an incorrect answer on x. By the above, for every particular x of length n we have:

  Pr_R[R is bad for x] ≤ 1/e^n.

The input length is n, so there are 2^n possible inputs. Thus, by the union bound, the probability that a random R is bad for at least one input x is

  Pr_R[∃x: R is bad for x] ≤ Σ_x Pr_R[R is bad for x] ≤ 2^n/e^n < 1.

In words, the probability that R is bad for some x is less than 1, therefore there must be an R that is good for all x. Take such an R to be the advice string in our P/poly algorithm.
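The existence argument is easy to check experimentally for tiny input lengths. The toy sketch below uses a deliberately artificial randomized decider (its error pattern is invented purely for illustration and is not from the sources above); it samples a single advice string R and verifies that it is good for every input of length n at once, which by the bound above happens with probability close to 1.

```python
# Toy illustration of the final step: a single random "advice" string R
# typically works for *all* 2^n inputs simultaneously.
import random

n = 6                      # toy input length
k = 48 * n                 # repetitions used by the amplified machine M'

def parity(x: int) -> bool:
    """The toy language L: integers whose binary encoding has even parity."""
    return bin(x).count("1") % 2 == 0

def M(x: int, r: int) -> bool:
    """Artificial randomized decider with error exactly 1/3: for each input x,
    one third of the seeds r are 'bad', and which ones differ with x."""
    bad = (r + x) % 3 == 0
    return (not parity(x)) if bad else parity(x)

def M_prime(x: int, R: list[int]) -> bool:
    """M' = majority vote of M over the k seeds supplied by the advice R."""
    return sum(M(x, r) for r in R) > k / 2

R = [random.randrange(3 * 10**6) for _ in range(k)]   # one candidate advice
good_for_all = all(M_prime(x, R) == parity(x) for x in range(2 ** n))
print(f"R is good for all {2**n} inputs of length {n}: {good_for_all}")
```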

References

  1. Lutz, Jack H.; Schmidt, William J. (1993), "Circuit size relative to pseudorandom oracles", Theoretical Computer Science, 107 (1): 95–119, doi:10.1016/0304-3975(93)90256-S, MR 1201167
  2. Lecture notes on computational complexity by Peter Bro Miltersen (PDF), archived from the original (PDF) on 2012-02-23, retrieved 2009-12-25
  3. Goldreich, Oded; Wigderson, Avi (2002), "Derandomization that is rarely wrong from short advice that is typically good", in Rolim, José D. P.; Vadhan, Salil P. (eds.), Randomization and Approximation Techniques, 6th International Workshop, RANDOM 2002, Cambridge, MA, USA, September 13-15, 2002, Proceedings, Lecture Notes in Computer Science, vol. 2483, Springer, pp. 209–223, doi:10.1007/3-540-45726-7_17, ECCC TR02-39
  4. Caldwell, Chris, "2.3: Strong probable-primality and a practical test", Finding primes & proving primality
  5. Jaeschke, Gerhard (1993), "On strong pseudoprimes to several bases", Mathematics of Computation, 61 (204): 915–926, doi: 10.2307/2153262 , MR   1192971 ; see p. 926.
  6. Arora, Sanjeev; Barak, Boaz (2009), Computational complexity. A modern approach, Cambridge University Press, ISBN   978-0-521-42426-4, Zbl   1193.68112
  7. Arvind, Vikraman; Köbler, Johannes; Schöning, Uwe; Schuler, Rainer (1995), "If NP has polynomial-size circuits, then MA = AM", Theoretical Computer Science , 137 (2): 279–282, doi: 10.1016/0304-3975(95)91133-B , MR   1311226
  8. Babai, László; Fortnow, Lance; Lund, Carsten (1991), "Nondeterministic exponential time has two-prover interactive protocols", Computational Complexity, 1 (1): 3–40, doi:10.1007/BF01200056, MR   1113533, archived from the original on 2012-03-31, retrieved 2011-10-02
  9. Impagliazzo, Russell; Kabanets, Valentine; Wigderson, Avi (2002), "In search of an easy witness: exponential time vs. probabilistic polynomial time" (PDF), Journal of Computer and System Sciences , 65 (4): 672–694, doi:10.1016/S0022-0000(02)00024-7, MR   1964649
  10. A Note on the Karp-Lipton Collapse for the Exponential Hierarchy
  11. Jin-Yi Cai. Lecture 11: P/poly, Sparse Sets, and Mahaney's Theorem. CS 810: Introduction to Complexity Theory. The University of Wisconsin–Madison. September 18, 2003.
  12. Adleman, L. M. (1978), "Two theorems on random polynomial time", Proceedings of the Nineteenth Annual IEEE Symposium on Foundations of Computer Science, pp. 75–83, doi:10.1109/SFCS.1978.37
  13. Bennett, Charles H.; Gill, John (1981), "Relative to a Random Oracle A, P^A ≠ NP^A ≠ co-NP^A with Probability 1", SIAM Journal on Computing, 10 (1): 96–113