Complement (complexity)


In computational complexity theory, the complement of a decision problem is the decision problem resulting from reversing the yes and no answers. [1] Equivalently, if a decision problem is defined as a set of finite strings, then its complement problem is the complement of that set within some fixed domain. [2]

For example, one important problem is whether a number is a prime number. Its complement is to determine whether a number is a composite number (a number which is not prime). Here the domain of the complement is the set of all integers exceeding one. [3]
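As a concrete sketch in Python (the function names here are illustrative, not from any source): a decision problem can be modeled as a yes/no predicate over the domain, and the complement problem simply reverses each answer on that same domain.

```python
def is_prime(n: int) -> bool:
    """Decide the problem PRIMES on the domain of integers greater than one."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_composite(n: int) -> bool:
    """The complement problem: reverse the answer over the same domain (n > 1)."""
    return not is_prime(n)

# On the shared domain {2, 3, 4, ...} every instance gets opposite answers.
assert [is_prime(n) for n in range(2, 10)] == [not is_composite(n) for n in range(2, 10)]
```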

There is a Turing reduction from every problem to its complement problem. [4] The complement operation is an involution, meaning it "undoes itself": the complement of the complement is the original problem.
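A minimal sketch of both facts, under the simplifying assumption that problems are Boolean predicates on integers: the reduction answers the original problem by querying an oracle for the complement once and negating the reply, and applying the complement operation twice yields the original problem.

```python
from typing import Callable

DecisionProblem = Callable[[int], bool]

def complement(problem: DecisionProblem) -> DecisionProblem:
    """The complement operation: reverse every yes/no answer."""
    return lambda x: not problem(x)

def solve_via_complement_oracle(oracle: DecisionProblem, x: int) -> bool:
    """A Turing reduction to the complement: one oracle query, then negate."""
    return not oracle(x)

is_even: DecisionProblem = lambda n: n % 2 == 0
is_odd = complement(is_even)

# Involution: complementing twice recovers the original problem.
assert all(complement(complement(is_even))(n) == is_even(n) for n in range(10))
# The reduction decides the original problem using only the complement oracle.
assert all(solve_via_complement_oracle(is_odd, n) == is_even(n) for n in range(10))
```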

One can generalize this to the complement of a complexity class, called the complement class, which is the set of complements of every problem in the class. [5] If a class is called C, its complement class is conventionally labelled co-C. Note that this is not the set-theoretic complement of the complexity class itself, which would contain far more problems.
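The distinction can be made concrete in a toy model (purely illustrative) where the "problems" are subsets of a three-element domain: the complement class replaces each problem by its complement, while the set-theoretic complement of the class collects every problem not in the class.

```python
from itertools import combinations

DOMAIN = frozenset({0, 1, 2})

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

ALL_PROBLEMS = set(powerset(DOMAIN))    # every decision problem over DOMAIN

# A toy "complexity class": some set of problems over DOMAIN.
C = {frozenset(), frozenset({0})}

# co-C: the set of complements of the problems in C ...
co_C = {DOMAIN - L for L in C}

# ... which is not the set-theoretic complement of C among all problems.
not_C = ALL_PROBLEMS - C

print(len(co_C))    # 2  (same size as C)
print(len(not_C))   # 6  (every problem except the two in C)
```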

A class is said to be closed under complement if the complement of any problem in the class is still in the class. [6] Because there are Turing reductions from every problem to its complement, any class which is closed under Turing reductions is closed under complement. Any class which is closed under complement is equal to its complement class. However, a many-one reduction must preserve the answer it maps to and so cannot negate it; under many-one reductions, many important classes, especially NP, are believed to be distinct from their complement classes (although this has not been proven). [7]

The closure of any complexity class under Turing reductions is a superset of that class which is closed under complement. The closure under complement is the smallest such class. If a class is intersected with its complement, we obtain a (possibly empty) subset which is closed under complement.

Every deterministic complexity class (DSPACE(f(n)), DTIME(f(n)) for all f(n)) is closed under complement, [8] because one can simply add a last step to the algorithm which reverses the answer. This doesn't work for nondeterministic complexity classes, because if there exist both computation paths which accept and paths which reject, and all the paths reverse their answers, there will still be paths which accept and paths which reject; consequently, the machine accepts in both cases.
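The failure for nondeterministic machines can be sketched by modeling a machine's run on a fixed input as the list of its computation paths' outcomes, where the machine accepts if any path accepts (a simplified model, not a full Turing-machine simulation):

```python
def ntm_accepts(path_outcomes: list[bool]) -> bool:
    """Nondeterministic acceptance: accept iff some computation path accepts."""
    return any(path_outcomes)

def flipped_ntm_accepts(path_outcomes: list[bool]) -> bool:
    """Naive 'complement': reverse the answer on every individual path."""
    return any(not p for p in path_outcomes)

mixed = [True, False, True]   # some paths accept, some reject

# Both the machine and its per-path-flipped version accept, so flipping
# every path did not compute the complement ...
assert ntm_accepts(mixed) and flipped_ntm_accepts(mixed)

# ... whereas a deterministic machine has a single path, and reversing
# that one answer does decide the complement problem.
single = [True]
assert ntm_accepts(single) != flipped_ntm_accepts(single)
```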

Similarly, probabilistic classes such as BPP, ZPP, BQP or PP, which are defined symmetrically with regard to their yes and no instances, are closed under complement. In contrast, classes such as RP and co-RP have one-sided error, and so are not (currently known to be) closed under complement.
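Schematically (an illustrative sketch, not a formal definition): summarize a machine's worst-case behavior by a pair of error bounds, one for yes-instances and one for no-instances. Complementing the machine swaps the pair, so a symmetric bound such as BPP's is preserved, while RP's one-sided profile turns into co-RP's.

```python
def complement_errors(false_reject: float, false_accept: float):
    """Flipping the machine's output swaps the two error bounds."""
    return false_accept, false_reject

# BPP-style two-sided error: at most 1/3 on both kinds of instance.
# The swapped profile satisfies the same symmetric bound, so BPP = co-BPP.
assert complement_errors(1/3, 1/3) == (1/3, 1/3)

# RP-style one-sided error: yes-instances may be missed with probability
# up to 1/2, but no-instances are never accepted. Swapping yields the
# co-RP profile, which is not of RP's form.
print(complement_errors(1/2, 0.0))   # (0.0, 0.5): one-sided, on the other side
```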

Some of the most surprising complexity results to date showed that the complexity classes NL and SL are in fact closed under complement, whereas before it was widely believed they were not (see Immerman–Szelepcsényi theorem). The latter result has become less surprising now that we know SL equals L, which is a deterministic class.

Every class which is low for itself is closed under complement.

Related Research Articles

In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.

The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.

In computational complexity theory, co-NP is a complexity class. A decision problem X is a member of co-NP if and only if its complement X̄ is in the complexity class NP. The class can be defined as follows: a decision problem is in co-NP precisely when its no-instances have polynomial-length "certificates" and there is a polynomial-time algorithm that can be used to verify any purported certificate.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm.


In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.

In computational complexity theory, the complexity class #P is the set of the counting problems associated with the decision problems in the set NP. More formally, #P is the class of function problems of the form "compute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, it is not a class of decision problems but a class of function problems. The most difficult, representative problems of this class are #P-complete.


In computational complexity theory, NP-hardness is the defining property of a class of problems that are informally "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem.

In computational complexity theory, a decision problem is PSPACE-complete if it can be solved using an amount of memory that is polynomial in the input length and if every other problem that can be solved in polynomial space can be transformed to it in polynomial time. The problems that are PSPACE-complete can be thought of as the hardest problems in PSPACE, the class of decision problems solvable in polynomial space, because a solution to any one such problem could easily be used to solve any other problem in PSPACE.

In computational complexity theory, randomized polynomial time (RP) is the complexity class of problems for which a probabilistic Turing machine exists with these properties: it always runs in polynomial time in the input size; if the correct answer is no, it always returns no; and if the correct answer is yes, it returns yes with probability at least 1/2 (otherwise, it returns no).

In complexity theory, UP is the complexity class of decision problems solvable in polynomial time on an unambiguous Turing machine with at most one accepting path for each input. UP contains P and is contained in NP.


In computational complexity theory, a complexity class is a set of computational problems of related resource-based complexity. The two most commonly analyzed resources are time and memory.

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. The first systematic work on parameterized complexity was done by Downey & Fellows (1999).

In computational complexity theory, non-deterministic space or NSPACE is the computational resource describing the memory space for a non-deterministic Turing machine. It is the non-deterministic counterpart of DSPACE.

In computational complexity theory, the complexity class FP is the set of function problems that can be solved by a deterministic Turing machine in polynomial time. It is the function problem version of the decision problem class P. Roughly speaking, it is the class of functions that can be efficiently computed on classical computers without randomization.

In computational complexity theory, NL is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space.

In computational complexity theory, L is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.

In computational complexity theory, SL is the complexity class of problems log-space reducible to USTCON, which is the problem of determining whether there exists a path between two vertices in an undirected graph, otherwise described as the problem of determining whether two vertices are in the same connected component. This problem is also called the undirected reachability problem. It does not matter whether many-one reducibility or Turing reducibility is used. Although originally described in terms of symmetric Turing machines, that equivalent formulation is very complex, and the reducibility definition is what is used in practice.

In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem.

In computer science, st-connectivity or STCON is a decision problem asking, for vertices s and t in a directed graph, if t is reachable from s.

References

  1. Itô, Kiyosi (1993), Encyclopedic Dictionary of Mathematics, Volume 1, MIT Press, p. 269, ISBN 9780262590204.
  2. Schrijver, Alexander (1998), Theory of Linear and Integer Programming, Wiley Series in Discrete Mathematics & Optimization, John Wiley & Sons, p. 19, ISBN 9780471982326.
  3. Homer, Steven; Selman, Alan L. (2011), Computability and Complexity Theory, Texts in Computer Science, Springer, ISBN 9781461406815.
  4. Singh, Arindama (2009), Elements of Computation Theory, Texts in Computer Science, Springer, Exercise 9.10, p. 287, ISBN 9781848824973.
  5. Bovet, Daniele; Crescenzi, Pierluigi (1994), Introduction to the Theory of Complexity, Prentice Hall International Series in Computer Science, Prentice Hall, pp. 133–134, ISBN 9780139153808.
  6. Vollmer, Heribert (1999), Introduction to Circuit Complexity: A Uniform Approach, Texts in Theoretical Computer Science. An EATCS Series, Springer, p. 113, ISBN 9783540643104.
  7. Pruim, R.; Wegener, Ingo (2005), Complexity Theory: Exploring the Limits of Efficient Algorithms, Springer, p. 66, ISBN 9783540274773.
  8. Ausiello, Giorgio (1999), Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties, Springer, p. 189, ISBN 9783540654315.