Probabilistically checkable proof


In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (or certificate), as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way.


Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The class PCP[r(n),q(n)] refers to the set of decision problems that have probabilistically checkable proofs that can be verified in polynomial time using at most r(n) random bits and by reading at most q(n) bits of the proof. [1] Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. The PCP theorem, a major result in computational complexity theory, states that PCP[O(log n),O(1)] = NP.
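To make the parameters concrete, the sketch below implements the classical Blum–Luby–Rubinfeld (BLR) linearity test in Python. This is a toy example of a few-query randomized check rather than a full PCP verifier for an NP-complete language: the "proof" is the complete truth table of a Boolean function on n-bit inputs, and each round reads only q = 3 bits of it using r = 2n random bits. The function and variable names are illustrative, not from any standard library.

```python
import random

def blr_linearity_test(proof, n, rounds=1):
    """Probabilistic spot-check of a proof given as the truth table of
    f: {0,1}^n -> {0,1}, indexed by the integers 0 .. 2**n - 1.

    Each round uses 2n random bits and reads only 3 bits of the proof,
    accepting iff f(x) XOR f(y) == f(x XOR y).  The table of a linear
    function (an XOR of a subset of the input bits) is always accepted;
    a table far from every linear function is rejected in a round with
    constant probability.
    """
    for _ in range(rounds):
        x = random.randrange(2 ** n)            # n random bits
        y = random.randrange(2 ** n)            # n more random bits
        if proof[x] ^ proof[y] != proof[x ^ y]:  # 3 queries to the proof
            return False                         # reject: inconsistency found
    return True                                  # accept

# Example: the truth table of f(b2, b1, b0) = b0 XOR b2 always passes.
n = 3
linear_table = [(i ^ (i >> 2)) & 1 for i in range(2 ** n)]
print(blr_linearity_test(linear_table, n, rounds=10))   # True
```

Repeating the round drives the error probability down, at the cost of proportionally more randomness and more queries.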

Definition

Given a decision problem L (or a language L over an alphabet Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier. Given a claimed instance x of length n, which might not belong to L, the prover produces a proof π asserting that x ∈ L (the proof is a string over Σ). The verifier is a randomized oracle Turing machine V that queries the proof π and decides whether to accept the statement that x ∈ L. The system has the following two properties:

Completeness: if x ∈ L, then there is a proof π that the verifier accepts with probability at least c(n).
Soundness: if x ∉ L, then for every proof π the verifier accepts with probability at most s(n).
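In symbols, with the probability taken over the verifier's coin tosses and V^π(x) denoting the verifier's decision given oracle access to π, these two conditions are commonly written as follows (this merely restates the list above):

```latex
\begin{aligned}
\text{Completeness: } & x \in L \;\Longrightarrow\; \exists\, \pi \ \ \Pr\bigl[V^{\pi}(x) = \text{accept}\bigr] \ge c(n), \\
\text{Soundness: }    & x \notin L \;\Longrightarrow\; \forall\, \pi \ \ \Pr\bigl[V^{\pi}(x) = \text{accept}\bigr] \le s(n).
\end{aligned}
```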

Two measures of the verifier's computational complexity are used: the randomness complexity r(n), the maximum number of random bits that V uses over all x of length n, and the query complexity q(n), the maximum number of queries that V makes to π over all x of length n.
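A minimal sketch of what these two quantities count, assuming a verifier written against a proof oracle and a coin source; the class and method names below are hypothetical, chosen only to make the bookkeeping explicit:

```python
import random

class CountingOracle:
    """Wraps a proof string and counts how many positions the verifier reads."""
    def __init__(self, proof):
        self.proof = proof
        self.queries = 0

    def read(self, i):
        self.queries += 1           # one oracle query
        return self.proof[i]

class CountingCoins:
    """Hands out random bits and counts how many the verifier consumes."""
    def __init__(self):
        self.bits_used = 0

    def bit(self):
        self.bits_used += 1         # one random bit
        return random.getrandbits(1)

# The query complexity q(n) is the maximum of oracle.queries, and the
# randomness complexity r(n) the maximum of coins.bits_used, over all
# inputs of length n and all outcomes of the coin tosses.
```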

In the above definition, the length of proof is not mentioned since usually it includes the alphabet set and all the witnesses. For the prover, we do not care how it arrives at the solution to the problem; we care only about the proof it gives of the solution's membership in the language.

The verifier is said to be non-adaptive if it makes all its queries before it receives any of the answers to previous queries.

The complexity class PCP_{c(n), s(n)}[r(n), q(n)] is the class of all decision problems having probabilistically checkable proof systems over a binary alphabet with completeness c(n) and soundness s(n), where the verifier is non-adaptive, runs in polynomial time, and has randomness complexity r(n) and query complexity q(n).

The shorthand notation PCP[r(n), q(n)] is sometimes used for PCP_{1, 1/2}[r(n), q(n)]. The complexity class PCP is defined as PCP_{1, 1/2}[O(log n), O(1)].
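A standard observation about this convention (stated here as an aside, not drawn from the article's sources): with perfect completeness, the constant 1/2 is not essential, because running the verifier k times with independent coins and accepting only if every run accepts reduces the soundness error geometrically while multiplying the randomness and query complexity by k:

```latex
\mathsf{PCP}_{1,\,1/2}\bigl[r(n),\, q(n)\bigr] \;\subseteq\; \mathsf{PCP}_{1,\,2^{-k}}\bigl[k\, r(n),\; k\, q(n)\bigr].
```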

History and significance

The theory of probabilistically checkable proofs studies the power of probabilistically checkable proof systems under various restrictions of the parameters (completeness, soundness, randomness complexity, query complexity, and alphabet size). It has applications to computational complexity (in particular hardness of approximation) and cryptography.

The definition of a probabilistically checkable proof was explicitly introduced by Arora and Safra in 1992, [2] although their properties had been studied earlier. In 1990, Babai, Fortnow, and Lund proved that PCP[poly(n), poly(n)] = NEXP, providing the first nontrivial equivalence between standard proofs (NEXP) and probabilistically checkable proofs. [3] The PCP theorem, proved in 1992, states that PCP[O(log n), O(1)] = NP. [2] [4]

The theory of hardness of approximation requires a detailed understanding of the role of completeness, soundness, alphabet size, and query complexity in probabilistically checkable proofs.

Properties

From a computational complexity point of view, for extreme settings of the parameters, the definition of probabilistically checkable proofs is easily seen to be equivalent to standard complexity classes. For example, we have the following for different settings of PCP[r(n), q(n)]:

PCP[0, 0] = P (with no randomness and no access to a proof, the verifier simply decides in polynomial time)
PCP[O(log n), 0] = P (the polynomially many coin sequences can be enumerated deterministically)
PCP[0, O(log n)] = P (with no randomness, the polynomially many possible answers to the O(log n) fixed queries can be enumerated deterministically)
PCP[poly(n), 0] = coRP (with no proof, the verifier is a polynomial-time randomized machine with one-sided error)
PCP[0, poly(n)] = NP (with no randomness, the proof is just an ordinary NP certificate)

The PCP theorem and MIP = NEXP can be characterized as follows:

PCP theorem: NP = PCP[O(log n), O(1)]
MIP = NEXP: NEXP = PCP[poly(n), poly(n)]

It is also known that PCP[r(n), q(n)] ⊆ NTIME(poly(n, 2^{O(r(n))} q(n))). In particular, PCP[log n, poly(n)] = NP. On the other hand, if NP ⊆ PCP[o(log n), o(log n)] then P = NP. [2]
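The NTIME containment follows by replacing randomness with enumeration: nondeterministically guess the proof, then deterministically run the verifier on every possible coin sequence and compute its exact acceptance probability. A schematic Python sketch, where run_verifier(x, proof, coins) is a hypothetical stand-in for the polynomial-time verifier:

```python
from itertools import product

def simulate_all_coins(x, proof, r, run_verifier):
    """Deterministically simulates a PCP verifier on every one of the 2**r
    coin sequences and returns the exact fraction of accepting runs.

    A nondeterministic machine that guesses `proof` and then calls this
    routine preserves the completeness/soundness thresholds, which is why
    PCP[r(n), q(n)] is contained in NTIME(poly(n, 2^{O(r(n))} q(n))).
    """
    accepted = 0
    for coins in product((0, 1), repeat=r):   # all 2**r coin sequences
        if run_verifier(x, proof, coins):     # one polynomial-time simulation
            accepted += 1
    return accepted / 2 ** r                  # exact acceptance probability
```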

Linear PCP

A linear PCP is a PCP in which the proof is a vector of elements of a finite field, and the PCP oracle is only allowed to perform linear operations on the proof. Namely, the response from the oracle to a verifier query is a linear function of the proof. Linear PCPs have important applications in proof systems that can be compiled into SNARKs.
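A minimal sketch of such an oracle, under the common convention that a query is itself a vector over the field and the answer is its inner product with the proof vector; the modulus and class name below are illustrative assumptions, not a fixed standard:

```python
P = 2 ** 61 - 1   # an illustrative prime modulus for the field GF(P)

class LinearPCPOracle:
    """Holds a proof vector pi over GF(P) and answers each query q with the
    linear function <q, pi> mod P, without revealing pi directly."""
    def __init__(self, pi):
        self.pi = [v % P for v in pi]

    def query(self, q):
        assert len(q) == len(self.pi)
        return sum(qi * vi for qi, vi in zip(q, self.pi)) % P

# Example: a single query returns one linear combination of proof entries.
oracle = LinearPCPOracle([3, 1, 4, 1, 5])
print(oracle.query([1, 0, 0, 0, 2]))   # 3 + 2*5 = 13
```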

Related Research Articles

In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer, and is solvable by mechanical application of mathematical steps, such as an algorithm.

NP (complexity)

In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.

In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time O((log n)^c) using O(n^k) parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size.

Interactive proof system

In computational complexity theory, an interactive proof system is an abstract machine that models computation as the exchange of messages between two parties: a prover and a verifier. The parties interact by exchanging messages in order to ascertain whether a given string belongs to a language or not. The prover possesses unlimited computational resources but cannot be trusted, while the verifier has bounded computation power but is assumed to be always honest. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct.

In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. As a consequence, a probabilistic Turing machine can—unlike a deterministic Turing Machine—have stochastic results; that is, on a given input and instruction state machine, it may have different run times, or it may not halt at all; furthermore, it may accept an input in one execution and reject the same input in another execution.

Complexity class

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

In computational complexity theory, P, also known as PTIME or DTIME(n^{O(1)}), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.

In computational complexity theory, an Arthur–Merlin protocol, introduced by Babai (1985), is an interactive proof system in which the verifier's coin tosses are constrained to be public. Goldwasser & Sipser (1986) proved that all (formal) languages with interactive proofs of arbitrary length with private coins also have interactive proofs with public coins.

In computational complexity theory, the complexity class NEXPTIME is the set of decision problems that can be solved by a non-deterministic Turing machine using time 2^{n^{O(1)}}.

In computational complexity theory, an advice string is an extra input to a Turing machine that is allowed to depend on the length n of the input, but not on the input itself. A decision problem is in the complexity class P/f(n) if there is a polynomial time Turing machine M with the following property: for any n, there is an advice string A of length f(n) such that, for any input x of length n, the machine M correctly decides the problem on the input x, given x and A.

In computational complexity theory, NL is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space.

In computational complexity theory, L is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.

In computational complexity theory, P/poly is a complexity class representing problems that can be solved by small circuits. More precisely, it is the set of formal languages that have polynomial-size circuit families. It can also be defined equivalently in terms of Turing machines with advice, extra information supplied to the Turing machine along with its input, that may depend on the input length but not on the input itself. In this formulation, P/poly is the class of decision problems that can be solved by a polynomial-time Turing machine with advice strings of length polynomial in the input size. These two different definitions make P/poly central to circuit complexity and non-uniform complexity.

In complexity theory, the Karp–Lipton theorem states that if the Boolean satisfiability problem (SAT) can be solved by Boolean circuits with a polynomial number of logic gates, then the polynomial hierarchy collapses to its second level.

In computational complexity theory, the PCP theorem states that every decision problem in the NP complexity class has probabilistically checkable proofs of constant query complexity and logarithmic randomness complexity.

MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT), a decision problem considered in complexity theory. It is defined as: given a 3-CNF formula, find an assignment that maximizes the number of satisfied clauses.

Richard Jay Lipton is an American computer scientist who is Associate Dean of Research, Professor, and the Frederick G. Storey Chair in Computing in the College of Computing at the Georgia Institute of Technology. He has worked in computer science theory, cryptography, and DNA computing.

In computational complexity theory, a certificate is a string that certifies the answer to a computation, or certifies the membership of some string in a language. A certificate is often thought of as a solution path within a verification process, which is used to check whether a problem gives the answer "Yes" or "No".

References

  1. Arora, Sanjeev; Barak, Boaz (2007), Computational Complexity: A Modern Approach, Cambridge University Press, p. 241, ISBN 978-0-521-42426-4
  2. Arora, Sanjeev; Safra, Shmuel (1998), "Probabilistic checking of proofs: A new characterization of NP", Journal of the ACM, 45 (1): 70–122, doi:10.1145/273865.273901, S2CID 751563
  3. Babai, László; Fortnow, Lance; Lund, Carsten (1990), "Nondeterministic exponential time has two-prover interactive protocols", Proceedings of the 31st Annual Symposium on Foundations of Computer Science (FOCS 1990), pp. 16–25, CiteSeerX 10.1.1.130.9311, doi:10.1109/FSCS.1990.89520, ISBN 978-0-8186-2082-9, S2CID 38429596
  4. Arora, Sanjeev; Lund, Carsten; Motwani, Rajeev; Sudan, Madhu; Szegedy, Mario (1998), "Proof verification and the hardness of approximation problems", Journal of the ACM, 45 (3): 501–555, doi:10.1145/278298.278306, S2CID 8561542