P-complete

In computational complexity theory, a decision problem is P-complete (complete for the complexity class P) if it is in P and every problem in P can be reduced to it by an appropriate reduction.

The notion of P-complete decision problems is useful in the analysis of which problems are difficult to parallelize effectively and which problems are difficult to solve in limited space, specifically when stronger notions of reducibility than polytime-reducibility are considered.

The specific type of reduction used varies and may affect the exact set of problems. Generically, reductions stricter than polynomial-time reductions are used, since all languages in P (except the empty language and the language of all strings) are P-complete under polynomial-time reductions: the reduction can simply solve the instance itself in polynomial time and output a fixed yes-instance or no-instance of the target problem. If we use NC reductions, that is, reductions which can operate in polylogarithmic time on a parallel computer with a polynomial number of processors, then all P-complete problems lie outside NC and so cannot be effectively parallelized, under the unproven assumption that NC ≠ P. If we use the stronger log-space reduction, this remains true, but additionally we learn that all P-complete problems lie outside L under the weaker unproven assumption that L ≠ P. In this latter case the set of P-complete problems may be smaller.

Motivation

The class P, typically taken to consist of all the "tractable" problems for a sequential computer, contains the class NC, which consists of those problems which can be efficiently solved on a parallel computer. This is because parallel computers can be simulated on a sequential machine. It is not known whether NC = P. In other words, it is not known whether there are any tractable problems that are inherently sequential. Just as it is widely suspected that P does not equal NP, so it is widely suspected that NC does not equal P.

Similarly, the class L contains all problems that can be solved by a sequential computer in logarithmic space. Such machines run in polynomial time because they can have a polynomial number of configurations. It is suspected that L ⊊ P; that is, that some problems that can be solved in polynomial time also require more than logarithmic space.

Just as NP-complete problems are used to analyze the P = NP question, the P-complete problems, viewed as the "probably not parallelizable" or "probably inherently sequential" problems, serve to study the NC = P question. Finding an efficient way to parallelize the solution to some P-complete problem would show that NC = P. P-complete problems can also be thought of as the "problems requiring superlogarithmic space"; a log-space solution to a P-complete problem (using the definition based on log-space reductions) would imply L = P.

The logic behind this is analogous to the logic that a polynomial-time solution to an NP-complete problem would prove P = NP: if we have an NC reduction from every problem in P to a problem A, and an NC solution for A, then NC = P. Similarly, if we have a log-space reduction from every problem in P to a problem A, and a log-space solution for A, then L = P.
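
As a rough sketch of the first implication, with ≤NC denoting the NC-reduction defined in the Reductions section below, the argument can be written as follows; the middle step uses the fact that composing two NC circuit families again gives an NC circuit family.

    % Sketch: an NC algorithm for a P-complete problem A collapses P into NC.
    \[
      \Bigl( A \in \mathrm{NC} \ \wedge\ \forall B \in \mathrm{P}:\ B \le_{\mathrm{NC}} A \Bigr)
      \;\Longrightarrow\;
      \forall B \in \mathrm{P}:\ B \in \mathrm{NC}
      \;\Longrightarrow\;
      \mathrm{P} \subseteq \mathrm{NC} \subseteq \mathrm{P}
      \;\Longrightarrow\;
      \mathrm{NC} = \mathrm{P}.
    \]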

Reductions

There are several many-one reductions used when defining P-completeness, of varying strengths.[1]: Section 3.3

At the lowest level is NC1-reduction, then L-reduction, then NC2-reduction, NC3-reduction, and so on. Their union is NC-reduction. They are ordered since NC1 ⊆ L ⊆ NC2 ⊆ NC3 ⊆ ⋯ ⊆ NC.

For NCk-reduction and NC-reduction, uniformity is imposed, because the intention of P-completeness theory is to prove upper bounds. Non-uniformity is useful for proving lower bounds, but for upper bounds non-uniform circuit families are unsatisfactory, since they are too powerful for this purpose. The standard uniformity condition is L-uniformity, meaning that the circuit family should be constructible by a Turing machine that, given 1^n as input, outputs a description of the n-th circuit using O(log n) working tape.[1]

Given two languages A, B ⊆ {0,1}*, define A ≤NCk B iff there exists an L-uniform NCk Boolean circuit family that together computes a function f : {0,1}* → {0,1}* such that x ∈ A iff f(x) ∈ B.

Define A ≤NC B iff A ≤NCk B for some k.

Define A ≤L B iff there exists a function f that is implicitly logspace computable such that x ∈ A iff f(x) ∈ B.

P-complete

Define a language A to be P-complete relative to NCk-reduction iff A is in P and B ≤NCk A for every language B in P. Similarly for the other cases.

Usually for P-completeness, NC-reduction is meant by default, though many results in the literature concerning P-completeness still hold even under the most restrictive of these reductions, NC1-reduction.

P-completeness is usually used as follows. First, a problem is shown to be P-complete relative to NCk-reduction. Next, assuming that the L-uniform NCk complexity class is strictly smaller than P, one immediately concludes that all P-complete and P-hard problems (under the same type of reduction) cannot be solved by L-uniform NCk circuit families. In other words, such problems cannot be parallelized, in a certain sense of "parallelization".

P-complete problems

The most basic P-complete problem under logspace many-one reductions is the following: given a Turing machine M, an input x for that machine, and a number T (written in unary), does M halt on input x within the first T steps? For any language A in P, decided in time at most p(|x|) for some polynomial p, the reduction maps an input x to the encoding of a Turing machine M that halts (accepting) exactly on the members of A (a polynomial-time decider for A can be modified to loop forever instead of rejecting), the encoding of x itself, and the number of steps T = p(|x|). The machine M halts on x within T steps if and only if x is in A. Clearly, if we can parallelize a general simulation of a sequential computer (i.e. the Turing-machine simulation of a Turing machine), then we will be able to parallelize any program that runs on that computer. If this problem is in NC, then so is every other problem in P. If the number of steps is written in binary instead, the problem is EXPTIME-complete.

This problem illustrates a common trick in the theory of P-completeness. We aren't really interested in whether a problem can be solved quickly on a parallel machine. We're just interested in whether a parallel machine solves it much more quickly than a sequential machine. Therefore, we have to reword the problem so that the sequential version is in P. That is why this problem requires T to be written in unary. If T is written as a binary number (a string of n ones and zeros, where n = log T), then the obvious sequential algorithm can take time 2^n. On the other hand, if T is written as a unary number (a string of n ones, where n = T), then it only takes time n. By writing T in unary rather than binary, we have reduced the obvious sequential algorithm from exponential time to linear time. That puts the sequential problem in P, and then it is in NC if and only if it is parallelizable.
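
As an informal illustration (not the formal construction), the reduction and the bounded simulation might be sketched as follows in Python, with the Turing machine modeled abstractly as a step function on configurations; the names reduce_to_bounded_halting, bounded_halting, machine_step, and poly_bound are illustrative assumptions rather than standard terminology.

    # A minimal sketch, assuming the machine is modeled as a Python step
    # function on dictionary-valued configurations.

    def reduce_to_bounded_halting(x, machine_description, poly_bound):
        """Map an input x of a language A in P to an instance (M, x, 1^T)
        of the bounded-halting problem, where T = p(|x|) bounds the running
        time of the machine M for A."""
        T = poly_bound(len(x))
        return (machine_description, x, "1" * T)  # the step budget T, in unary

    def bounded_halting(machine_step, initial_config, unary_budget):
        """The generic P-complete problem: does the machine halt within T
        steps, where T is the length of the unary budget string?"""
        config = initial_config
        for _ in range(len(unary_budget)):        # at most T simulation steps
            config = machine_step(config)
            if config.get("halted"):
                return True
        return False                              # did not halt within T steps

Note that the simulation loop runs for at most T iterations, so its running time is linear in the length of the unary budget string, mirroring the point about unary versus binary encodings made above.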

Many other problems have been proved to be P-complete, and therefore are widely believed to be inherently sequential. These include the following problems, which are P-complete under at least logspace reductions, either as given or in a decision-problem form:

  - The circuit value problem (CVP): given a Boolean circuit and its inputs, compute its output; its monotone and planar restrictions are also P-complete.
  - Linear programming: maximizing a linear function subject to linear inequality constraints.
  - Horn-satisfiability: deciding whether a conjunction of Horn clauses is satisfiable.
  - Lexicographically first maximal independent set: given a graph with ordered vertices, compute the maximal independent set produced by greedily considering the vertices in order.
  - Lexicographically first depth-first search ordering: given a graph with ordered adjacency lists and two vertices u and v, is u visited before v in the depth-first search induced by that ordering?

Most of the languages above are P-complete under even stronger notions of reduction, such as uniform many-one reductions, DLOGTIME reductions, or polylogarithmic projections.

In order to prove that a given problem in P is P-complete, one typically tries to reduce a known P-complete problem to the given one.

In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Ogihara, showed that if there exists a sparse language that is P-complete, then L = P. [3]

P-complete problems may be solvable with different time complexities. For instance, the circuit value problem can be solved in linear time by a topological sort. Of course, because the reductions to a P-complete problem may have different time complexities, this fact does not imply that all the problems in P can also be solved in linear time.
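
A linear-time sequential evaluation of a Boolean circuit might look like the following Python sketch, assuming the gates are already given in topological order (computing a topological order is itself linear time); the circuit_value function and the gate encoding used here are illustrative assumptions, not a standard format.

    # A minimal sketch of the circuit value problem solved sequentially by
    # evaluating gates in topological order.

    def circuit_value(gates, output):
        """gates maps a gate name to one of ("CONST", bit), ("NOT", g),
        ("AND", g1, g2), ("OR", g1, g2), listed in topological order
        (inputs appear before the gates that use them)."""
        value = {}
        for name, gate in gates.items():
            op = gate[0]
            if op == "CONST":
                value[name] = gate[1]
            elif op == "NOT":
                value[name] = not value[gate[1]]
            elif op == "AND":
                value[name] = value[gate[1]] and value[gate[2]]
            elif op == "OR":
                value[name] = value[gate[1]] or value[gate[2]]
        return value[output]

    # Example: (x AND y) OR (NOT x) with x = True, y = False evaluates to False.
    example = {
        "x": ("CONST", True),
        "y": ("CONST", False),
        "g1": ("AND", "x", "y"),
        "g2": ("NOT", "x"),
        "out": ("OR", "g1", "g2"),
    }
    print(circuit_value(example, "out"))  # False

Each gate is visited once and looked up in constant time, so the whole evaluation is linear in the number of gates.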

Notes

  1. Greenlaw, Raymond; Hoover, H. James; Ruzzo, Walter L. (1995). Limits to Parallel Computation: P-Completeness Theory. New York: Oxford University Press. ISBN 978-0-19-508591-4.
  2. Cook, Stephen A. (1985). "A taxonomy of problems with fast parallel algorithms". Information and Control. 64 (1): 2–22. doi:10.1016/S0019-9958(85)80041-3. ISSN 0019-9958.
  3. Cai, Jin-Yi; Sivakumar, D. (1999). "Sparse hard sets for P: resolution of a conjecture of Hartmanis". Journal of Computer and System Sciences. 58 (2): 280–296. doi:10.1006/jcss.1998.1615.
