NC (complexity)

In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time O((log n)^c) using O(n^k) parallel processors. Stephen Cook [1] [2] coined the name "Nick's class" after Nick Pippenger, who had done extensive research [3] on circuits with polylogarithmic depth and polynomial size. [4]

Just as the class P can be thought of as the tractable problems (Cobham's thesis), so NC can be thought of as the problems that can be efficiently solved on a parallel computer. [5] NC is a subset of P because polylogarithmic parallel computations can be simulated by polynomial-time sequential ones. It is unknown whether NC = P, but most researchers suspect this to be false, meaning that there are probably some tractable problems that are "inherently sequential" and cannot be significantly sped up by using parallelism. Just as NP-complete problems can be thought of as "probably intractable", so P-complete problems, under NC reductions, can be thought of as "probably not parallelizable" or "probably inherently sequential".

The parallel computer in the definition can be assumed to be a parallel, random-access machine (PRAM). That is, a parallel computer with a central pool of memory, where any processor can access any bit of memory in constant time. The definition of NC is not affected by the choice of how the PRAM handles simultaneous access to a single bit by more than one processor: it can be CRCW, CREW, or EREW. See PRAM for descriptions of those models.

Equivalently, NC can be defined as those decision problems decidable by a uniform family of Boolean circuits (one circuit for each input length; for NC, we suppose the circuit for inputs of length n can be computed in space logarithmic in n) with polylogarithmic depth and a polynomial number of gates with a maximum fan-in of 2.

RNC is a class extending NC with access to randomness.

Problems in NC

As with P, by a slight abuse of language, one might classify function problems and search problems as being in NC. NC is known to include many problems, including

  • integer addition, multiplication and division;
  • matrix multiplication, determinant, inverse and rank;
  • the greatest common divisor of two polynomials, by a reduction to linear algebra;
  • finding a maximal matching.

Often algorithms for those problems had to be separately invented and could not be naïvely adapted from well-known algorithms – Gaussian elimination and the Euclidean algorithm rely on operations performed in sequence. One might contrast a ripple-carry adder with a carry-lookahead adder.
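
As an illustration of that contrast, here is a minimal sketch (in Python, with helper names of our own, not from the article). The prefix scan is written sequentially for brevity, but its combine step is associative, which is exactly what lets a circuit evaluate it as a balanced tree of logarithmic depth:

```python
# Ripple carry: O(n) sequential depth. Parallel-prefix ("carry-lookahead"):
# O(log n) rounds on a parallel machine. Bit lists are little-endian.

def ripple_add(a, b):
    """Sequential addition: each carry depends on the previous one."""
    out, carry = [], 0
    for x, y in zip(a, b):
        out.append(x ^ y ^ carry)
        carry = (x & y) | (carry & (x ^ y))
    return out + [carry]

def prefix_add(a, b):
    """Addition via a prefix scan of (generate, propagate) pairs."""
    gp = [(x & y, x ^ y) for x, y in zip(a, b)]   # per-bit (g, p)
    def combine(lo, hi):                          # associative operator
        g1, p1 = lo
        g2, p2 = hi
        return (g2 | (p2 & g1), p1 & p2)
    # Sequential scan here for brevity; an NC circuit evaluates the same
    # scan as a balanced tree of 'combine' nodes of depth O(log n).
    carries, acc = [0], (0, 1)                    # (0, 1) is the identity
    for x in gp:
        acc = combine(acc, x)
        carries.append(acc[0])
    return [p ^ c for (_, p), c in zip(gp, carries)] + [carries[-1]]

assert ripple_add([1, 1, 0, 1], [1, 0, 1, 1]) == prefix_add([1, 1, 0, 1], [1, 0, 1, 1])
```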

Example

An example of a problem in NC1 is the parity check on a bit string. [6] The problem consists in counting the number of 1s in a string made of 1s and 0s. A simple solution consists in summing all the string's bits. Since addition is associative, x1 + x2 + x3 + x4 = (x1 + x2) + (x3 + x4). Recursively applying this property, it is possible to build a binary tree of depth O(log n) in which every sum between two bits xi and xj is expressible by means of basic logical operators, e.g. through the boolean expression (xi ∧ ¬xj) ∨ (¬xi ∧ xj).
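
A minimal sketch of this tree evaluation (in Python, with names of our own): each pass of the loop corresponds to one layer of the circuit, and all XORs within a pass are independent, so a parallel machine needs only O(log n) rounds:

```python
def parity(bits):
    """Parity (sum mod 2) of a non-empty bit list via a balanced XOR tree."""
    layer = list(bits)
    while len(layer) > 1:            # each pass is one parallel round
        if len(layer) % 2:
            layer.append(0)          # pad so the bits pair up evenly
        # all XORs below are independent, so they run simultaneously
        layer = [layer[i] ^ layer[i + 1] for i in range(0, len(layer), 2)]
    return layer[0]

assert parity([1, 0, 1, 1, 0]) == 1   # three 1s: odd parity
```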

The NC hierarchy

NC^i is the class of decision problems decidable by uniform boolean circuits with a polynomial number of gates of at most two inputs and depth O((log n)^i), or the class of decision problems solvable in time O((log n)^i) on a parallel computer with a polynomial number of processors. Clearly, we have

NC^1 ⊆ NC^2 ⊆ ⋯ ⊆ NC^i ⊆ ⋯ ⊆ NC

which forms the NC-hierarchy.

We can relate the NC classes to the space classes L and NL [7] and the circuit classes AC: [8]

NC^1 ⊆ L ⊆ NL ⊆ AC^1

The NC classes are related to the AC classes, which are defined similarly, but with gates having unbounded fan-in. For each i, we have [5] [8]

NC^i ⊆ AC^i ⊆ NC^(i+1).

As an immediate consequence of this, we have that NC = AC. [9] It is known that both inclusions are strict for i = 0. [5]

Similarly, NC is equivalent to the problems solvable on an alternating Turing machine restricted to at most two options at each step with O(log n) space and (log n)^O(1) alternations. [10]

Open problem: Is NC proper?

One major open question in complexity theory is whether or not every containment in the NC hierarchy is proper. It was observed by Papadimitriou that, if NC^i = NC^(i+1) for some i, then NC^i = NC^j for all j ≥ i, and as a result, NC^i = NC. This observation is known as NC-hierarchy collapse because even a single equality in the chain of containments

NC^1 ⊆ NC^2 ⊆ ⋯

implies that the entire NC hierarchy "collapses" down to some level i. Thus, there are 2 possibilities:

  1. NC^1 ⊂ ⋯ ⊂ NC^i ⊂ ⋯ ⊂ NC, i.e. every containment in the hierarchy is proper;
  2. NC^1 ⊂ ⋯ ⊂ NC^i = NC^(i+1) = ⋯ = NC for some i, i.e. the hierarchy collapses to level i.

It is widely believed that (1) is the case, although no proof as to the truth of either statement has yet been discovered.

NC0

The special class NC0 operates only on a constant number of input bits. It is therefore described as the class of functions computable by uniform boolean circuits with constant depth and bounded fan-in: since such a circuit has constant depth and fan-in at most 2, each of its output bits depends on only a constant number of input bits.

Barrington's theorem

A branching program with n variables of width k and length m consists of a sequence of m instructions. Each of the instructions is a tuple (i, p, q) where i is the index of the variable to check (1 ≤ i ≤ n), and p and q are functions from {1, 2, ..., k} to {1, 2, ..., k}. Numbers 1, 2, ..., k are called states of the branching program. The program initially starts in state 1, and each instruction (i, p, q) changes the state from x to p(x) or q(x), depending on whether the ith variable is 0 or 1. The function mapping an input to a final state of the program is called the yield of the program (more precisely, the yield on an input is the function mapping any initial state to the corresponding final state). The program accepts a set A of variable assignments when there is some set F of functions such that an assignment is in A precisely when its yield is in F.
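
For concreteness, a minimal sketch (in Python, with names of our own; dicts stand for the functions p and q) of running such a program:

```python
def run_branching_program(program, x):
    """Final state on input bits x, starting from state 1. Each
    instruction (i, p, q) applies p if the i-th variable (1-indexed)
    is 0, and q if it is 1."""
    state = 1
    for i, p, q in program:
        state = q[state] if x[i - 1] else p[state]
    return state

# Width-2 example computing the parity of (x1, x2): state 2 <=> odd.
ident = {1: 1, 2: 2}
swap = {1: 2, 2: 1}
parity_bp = [(1, ident, swap), (2, ident, swap)]
assert run_branching_program(parity_bp, [1, 1]) == 1   # even parity
assert run_branching_program(parity_bp, [1, 0]) == 2   # odd parity
```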

A family of branching programs consists of a branching program with n variables for each n. It accepts a language when the n variable program accepts the language restricted to length n inputs.

It is easy to show that every language L on {0,1} can be recognized by a family of branching programs of width 5 and exponential length, or by a family of exponential width and linear length.

Every regular language on {0,1} can be recognized by a family of branching programs of constant width and linear number of instructions (since a DFA can be converted to a branching program). BWBP denotes the class of languages recognizable by a family of branching programs of bounded width and polynomial length. [11]
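
The conversion is direct, as this sketch shows (names are our own, reusing run_branching_program from the sketch above): position j of the program applies the DFA's 0-transition or 1-transition depending on the jth input bit.

```python
def dfa_to_branching_program(delta0, delta1, n):
    """Width-k program of length n from a DFA with states {1..k},
    start state 1, and transition maps delta0/delta1 for bits 0/1."""
    return [(j, delta0, delta1) for j in range(1, n + 1)]

# DFA for "even number of 1s": state 1 = even, state 2 = odd.
keep = {1: 1, 2: 2}          # reading a 0 keeps the state
flip = {1: 2, 2: 1}          # reading a 1 flips it
bp = dfa_to_branching_program(keep, flip, 4)
assert run_branching_program(bp, [1, 0, 1, 0]) == 1   # even count of 1s
```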

Barrington's theorem [12] says that BWBP is exactly nonuniform NC1. The proof uses the nonsolvability of the symmetric group S5. [11]

The theorem is rather surprising. For instance, it implies that the majority function can be computed by a family of branching programs of constant width and polynomial size, while intuition might suggest that to achieve polynomial size, one needs a linear number of states.

Proof of Barrington's theorem

A branching program of constant width and polynomial size can be easily converted (via divide-and-conquer) to a circuit in NC1: the yield of a program is the composition of the yields of its two halves, and composing two functions on a constant-size state set takes only constant circuit depth, so a program of length m becomes a circuit of depth O(log m).

Conversely, suppose a circuit in NC1 is given. Without loss of generality, assume it uses only AND and NOT gates.

Lemma 1  If there exists a branching program that sometimes works as a permutation P and sometimes as a permutation Q, then by right-multiplying the permutations in the first instruction by α, and left-multiplying the permutations in the last instruction by β, we can make a branching program of the same length that behaves as βPα or βQα, respectively.

Call a branching program α-computing a circuit C if it works as identity when C's output is 0, and as α when C's output is 1.

As a consequence of Lemma 1 and the fact that all cycles of length 5 are conjugate, for any two 5-cycles α, β, if there exists a branching program α-computing a circuit C, then there exists a branching program β-computing the circuit C, of the same length.

Lemma 2  There exist 5-cycles γ, δ such that their commutator ε=γδγ−1δ−1 is a 5-cycle. For example, γ = (1 2 3 4 5), δ = (1 3 5 4 2) giving ε = (1 3 2 5 4).
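
This can be checked mechanically; a small sketch (in Python, composing permutations left to right, i.e. γ applied first) verifying the example:

```python
def compose(f, g):
    """Permutation that applies f first, then g."""
    return {x: g[f[x]] for x in f}

def inverse(f):
    return {v: k for k, v in f.items()}

def from_cycle(cycle, n=5):
    """Permutation of {1..n} given by one cycle, e.g. [1, 2, 3, 4, 5]."""
    perm = {x: x for x in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        perm[a] = b
    return perm

gamma = from_cycle([1, 2, 3, 4, 5])
delta = from_cycle([1, 3, 5, 4, 2])
epsilon = compose(compose(compose(gamma, delta), inverse(gamma)), inverse(delta))
assert epsilon == from_cycle([1, 3, 2, 5, 4])   # the commutator is a 5-cycle
```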

We will now prove Barrington's theorem by induction:

Suppose we have a circuit C which takes inputs x1,...,xn and assume that for all subcircuits D of C and 5-cycles α, there exists a branching program α-computing D. We will show that for all 5-cycles α, there exists a branching program α-computing C.

  • If the circuit C simply outputs some input bit xi, the branching program we need has just one instruction: checking xi's value (0 or 1), and outputting the identity or α (respectively).
  • If the circuit C outputs ¬A for some circuit A, create a branching program α−1-computing A and then multiply the permutations in its last instruction by α. By Lemma 1, we get a branching program of the same length that outputs the identity when C's output is 0 and α when it is 1, i.e. a program α-computing ¬A = C.
  • If the circuit C outputs AB for circuits A and B, join the branching programs that γ-compute A, δ-compute B, γ−1-compute A, and δ−1-compute B for a choice of 5-cycles γ and δ such that their commutator ε=γδγ−1δ−1 is also a 5-cycle. (The existence of such elements was established in Lemma 2.) If one or both of the circuits outputs 0, the resulting program will be the identity due to cancellation; if both circuits output 1, the resulting program will output the commutator ε. In other words, we get a program ε-computing AB. Because ε and α are two 5-cycles, they are conjugate, and hence there exists a program α-computing AB by Lemma 1.

By assuming the subcircuits have branching programs that are α-computing for all 5-cycles α ∈ S5, we have shown C also has this property, as required.

The size of the branching program is at most 4^d, where d is the depth of the circuit. If the circuit has logarithmic depth, the branching program has polynomial length.
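
To make the induction concrete, here is a compact sketch of the construction (representation and names are our own, reusing compose, inverse, from_cycle and run_branching_program from the earlier sketches; an illustration, not code from the sources). Circuits are nested tuples ('var', i), ('not', A) or ('and', A, B):

```python
GAMMA = from_cycle([1, 2, 3, 4, 5])   # the 5-cycles of Lemma 2
DELTA = from_cycle([1, 3, 5, 4, 2])
IDENT = {x: x for x in range(1, 6)}

def conjugator(alpha, beta):
    """Theta mapping the cycle of alpha onto the cycle of beta, so that
    sandwiching a beta-computing program between theta (applied first)
    and theta's inverse (applied last) makes it alpha-computing."""
    theta, a, b = {}, 1, 1
    for _ in range(5):
        theta[a] = b
        a, b = alpha[a], beta[b]
    return theta

def barrington(circuit, alpha):
    """Return a branching program alpha-computing the given circuit."""
    if circuit[0] == 'var':            # one instruction: identity or alpha
        return [(circuit[1], IDENT, alpha)]
    if circuit[0] == 'not':            # Lemma 1: apply alpha after the end
        prog = barrington(circuit[1], inverse(alpha))
        i, p, q = prog[-1]
        return prog[:-1] + [(i, compose(p, alpha), compose(q, alpha))]
    _, a, b = circuit                  # AND: commutator of the sub-programs
    prog = (barrington(a, GAMMA) + barrington(b, DELTA) +
            barrington(a, inverse(GAMMA)) + barrington(b, inverse(DELTA)))
    eps = compose(compose(compose(GAMMA, DELTA), inverse(GAMMA)), inverse(DELTA))
    theta = conjugator(alpha, eps)     # conjugate eps back to alpha
    i, p, q = prog[0]
    prog = [(i, compose(theta, p), compose(theta, q))] + prog[1:]
    i, p, q = prog[-1]
    return prog[:-1] + [(i, compose(p, inverse(theta)), compose(q, inverse(theta)))]

# x1 AND x2, alpha-computed with alpha = GAMMA: the final state is
# GAMMA[1] = 2 exactly on accepting inputs (length 4 = 4^depth).
prog = barrington(('and', ('var', 1), ('var', 2)), GAMMA)
assert run_branching_program(prog, [1, 1]) == GAMMA[1]
assert all(run_branching_program(prog, x) == 1
           for x in ([0, 0], [0, 1], [1, 0]))
```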

Notes

  1. Cook, S.A. (1981). "Towards a complexity theory of synchronous parallel computation". L'Enseignement Mathématique. 27: 99–124.
  2. Cook, Stephen A. (1985). "A taxonomy of problems with fast parallel algorithms". Information and Control. 64 (1): 2–22. doi:10.1016/S0019-9958(85)80041-3. ISSN 0019-9958.
  3. Pippenger, Nicholas (1979). "On simultaneous resource bounds". 20th Annual Symposium on Foundations of Computer Science (SFCS 1979): 307–311. doi:10.1109/SFCS.1979.29. ISSN 0272-5428. S2CID 7029313.
  4. Arora & Barak (2009) p. 120
  5. Arora & Barak (2009) p. 118
  6. David Mix Barrington; Alexis Maciel (2000-07-18). "Lecture 2: The Complexity of Some Problems" (PDF). IAS/PCMI Summer Session 2000 – Clay Mathematics Undergraduate Program – Basic Course on Computational Complexity. Clarkson University. Retrieved 2021-11-11.
  7. Papadimitriou (1994) Theorem 16.1
  8. Clote & Kranakis (2002) p. 437
  9. Clote & Kranakis (2002) p. 12
  10. S. Bellantoni and I. Oitavem (2004). "Separating NC along the delta axis". Theoretical Computer Science. 318 (1–2): 57–78. doi:10.1016/j.tcs.2003.10.021.
  11. Clote & Kranakis (2002) p. 50
  12. Barrington, David A. (1989). "Bounded-Width Polynomial-Size Branching Programs Recognize Exactly Those Languages in NC1" (PDF). J. Comput. Syst. Sci. 38 (1): 150–164. doi:10.1016/0022-0000(89)90037-8. ISSN 0022-0000. Zbl 0667.68059.


References

  • Arora, Sanjeev; Barak, Boaz (2009). Computational Complexity: A Modern Approach. Cambridge University Press. ISBN 978-0-521-42426-4.
  • Clote, Peter; Kranakis, Evangelos (2002). Boolean Functions and Computation Models. Texts in Theoretical Computer Science. An EATCS Series. Springer. ISBN 3-540-59436-1.
  • Papadimitriou, Christos (1994). Computational Complexity. Addison-Wesley. ISBN 0-201-53082-1.