NL (complexity)

In computational complexity theory, NL (Nondeterministic Logarithmic-space) is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space.

NL is a generalization of L, the class for logspace problems on a deterministic Turing machine. Since any deterministic Turing machine is also a nondeterministic Turing machine, we have that L is contained in NL.

NL can be formally defined in terms of the computational resource nondeterministic space (or NSPACE) as NL = NSPACE(log n).

Important results in complexity theory allow us to relate this complexity class with other classes, telling us about the relative power of the resources involved. Results in the field of algorithms, on the other hand, tell us which problems can be solved with this resource. Like much of complexity theory, many important questions about NL are still open (see Unsolved problems in computer science).

Occasionally NL is referred to as RL due to its probabilistic definition below; however, this name is more frequently used to refer to randomized logarithmic space, which is not known to equal NL.

NL-complete problems

Several problems are known to be NL-complete under log-space reductions, including ST-connectivity and 2-satisfiability. ST-connectivity asks, for nodes S and T in a directed graph, whether T is reachable from S. 2-satisfiability asks, given a propositional formula of which each clause is the disjunction of two literals, if there is a variable assignment that makes the formula true. An example instance, where ¬ indicates not, might be:

(x1 ∨ ¬x3) ∧ (¬x2 ∨ x3) ∧ (¬x1 ∨ ¬x2)
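
As an illustration, here is a minimal Python sketch of ST-connectivity (the function name reachable and the dictionary encoding of the graph are our own illustrative choices). Note the contrast in resources: this deterministic breadth-first search uses linear space for its visited set, whereas an NL machine guesses the path one edge at a time, storing only the current node and a step counter in logarithmic space.

    from collections import deque

    def reachable(graph, s, t):
        """Decide ST-connectivity: is t reachable from s in a directed graph?

        graph maps each node to a list of its successors. This breadth-first
        search runs in polynomial time but uses linear space for the seen set;
        a nondeterministic logspace machine instead guesses the path one edge
        at a time, storing only the current node and a step counter.
        """
        seen = {s}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in graph.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    # Example: T is reachable from S via U.
    print(reachable({"S": ["U"], "U": ["T"]}, "S", "T"))  # True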

Containments

It is known that NL is contained in P, since there is a polynomial-time algorithm for 2-satisfiability, but it is not known whether NL = P or whether L = NL. It is known that NL = co-NL, where co-NL is the class of languages whose complements are in NL. This result (the Immerman–Szelepcsényi theorem) was independently discovered by Neil Immerman and Róbert Szelepcsényi in 1987; they received the 1995 Gödel Prize for this work.
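
A polynomial-time algorithm for 2-satisfiability can be sketched as follows (a minimal Python sketch of the standard implication-graph method; the name two_sat and the encoding are ours, and this is one well-known approach rather than the only one): build the implication graph in which each clause (a ∨ b) contributes the edges ¬a → b and ¬b → a, compute its strongly connected components, and declare the formula unsatisfiable exactly when some variable shares a component with its own negation.

    def two_sat(n, clauses):
        """Decide satisfiability of a 2-CNF formula over variables 1..n.

        clauses is a list of literal pairs; literal k means variable k and
        -k means its negation. Each clause (a or b) yields the implications
        -a -> b and -b -> a. The formula is satisfiable iff no variable ends
        up in the same strongly connected component as its negation.
        """
        def idx(lit):  # map a literal to a vertex index in 0..2n-1
            return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

        adj = [[] for _ in range(2 * n)]   # implication graph
        radj = [[] for _ in range(2 * n)]  # reversed implication graph
        for a, b in clauses:
            for u, v in ((idx(-a), idx(b)), (idx(-b), idx(a))):
                adj[u].append(v)
                radj[v].append(u)

        # Kosaraju's algorithm: order vertices by finish time, then label
        # strongly connected components on the reversed graph.
        order, seen = [], [False] * (2 * n)
        def dfs1(u):
            seen[u] = True
            for v in adj[u]:
                if not seen[v]:
                    dfs1(v)
            order.append(u)
        for u in range(2 * n):
            if not seen[u]:
                dfs1(u)

        comp = [-1] * (2 * n)
        def dfs2(u, c):
            comp[u] = c
            for v in radj[u]:
                if comp[v] == -1:
                    dfs2(v, c)
        c = 0
        for u in reversed(order):
            if comp[u] == -1:
                dfs2(u, c)
                c += 1

        return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))

    # The example instance above: (x1 or not x3) and (not x2 or x3)
    # and (not x1 or not x2). It is satisfiable, e.g. x1=T, x2=F, x3=T.
    print(two_sat(3, [(1, -3), (-2, 3), (-1, -2)]))  # True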

In circuit complexity, NL can be placed within the NC hierarchy. In Papadimitriou 1994, Theorem 16.1, we have:

NC¹ ⊆ L ⊆ NL ⊆ NC².

More precisely, NL is contained in AC¹. It is known that NL is equal to ZPL, the class of problems solvable by randomized algorithms in logarithmic space and unbounded time, with no error. It is not, however, known or believed to be equal to RLP or ZPLP, the polynomial-time restrictions of RL and ZPL, which some authors refer to as RL and ZPL.

We can relate NL to deterministic space using Savitch's theorem, which tells us that any nondeterministic algorithm can be simulated by a deterministic machine in at most quadratically more space. From Savitch's theorem, we have directly that:

NL ⊆ SPACE(log² n)

This was the strongest deterministic-space inclusion known in 1994 (Papadimitriou 1994 Problem 16.4.10, "Symmetric space"). Since larger space classes are not affected by quadratic increases, the nondeterministic and deterministic classes are known to be equal, so that for example we have PSPACE = NPSPACE.
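
The idea behind Savitch's simulation can be illustrated with a short sketch (assuming vertices numbered 0 to n−1; the function names are ours): to decide whether v is reachable from u in at most 2^k steps, try every midpoint w and recurse on both halves. The recursion is O(log n) deep and each frame stores O(log n) bits, giving O(log² n) space, at the price of quasi-polynomial time.

    def savitch_reachable(adj, s, t, n):
        """Reachability on n vertices in the style of Savitch's theorem.

        adj(u, v) reports whether the edge u -> v exists. reach(u, v, k)
        decides whether v is reachable from u in at most 2**k steps by
        trying every possible midpoint w. The recursion depth is O(log n)
        and each frame holds O(log n) bits, for O(log^2 n) space overall,
        at the cost of quasi-polynomial running time.
        """
        def reach(u, v, k):
            if u == v:
                return True
            if k == 0:
                return adj(u, v)
            return any(reach(u, w, k - 1) and reach(w, v, k - 1)
                       for w in range(n))

        # Any simple path has length < n <= 2**k for k = ceil(log2 n).
        k = max(1, (n - 1).bit_length())
        return reach(s, t, k)

    edges = {(0, 1), (1, 2)}
    print(savitch_reachable(lambda u, v: (u, v) in edges, 0, 2, 3))  # True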

Alternative definitions

Probabilistic definition

Suppose C is the complexity class of decision problems solvable in logarithmic space with probabilistic Turing machines that never accept incorrectly but are allowed to reject incorrectly less than 1/3 of the time; this is called one-sided error. The constant 1/3 is arbitrary; any x with 0 ≤ x < 1/2 would suffice.

It turns out that C = NL. Notice that C, unlike its deterministic counterpart L, is not limited to polynomial time, because although it has a polynomial number of configurations it can use randomness to escape an infinite loop. If we do limit it to polynomial time, we get the class RL, which is contained in but not known or believed to equal NL.

There is a simple algorithm that establishes that C = NL. Clearly C is contained in NL, since a nondeterministic machine can treat each coin flip as a nondeterministic choice: if the string is not in the language, the C machine never accepts, so no computation path accepts; if the string is in the language, the C machine accepts with probability at least 2/3, so some computation path accepts.

To show that NL is contained in C, we simply take an NL algorithm and choose a random computation path of length n, and execute this 2^n times. Because no computation path exceeds length n, and because there are 2^n computation paths in all, we have a good chance of hitting the accepting one (bounded below by a constant).

The only problem is that we don't have room in log space for a binary counter that goes up to 2^n. To get around this we replace it with a randomized counter, which simply flips n coins and stops and rejects if they all land on heads. Since this event has probability 2^-n, we expect to take 2^n steps on average before stopping. It only needs to keep a running total of the number of heads in a row it sees, which it can count in log space.
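
A toy sketch of this randomized counter (the helper names are our own, illustrative choices): each round runs one randomly guessed computation path, then flips up to n coins and halts with rejection only if all n land heads, so the expected number of rounds is 2^n while the machine stores only the current run of heads.

    import random

    def repeat_with_randomized_counter(n, try_one_random_path):
        """Repeat a random trial roughly 2**n times without a 2**n counter.

        Each round runs one randomly guessed computation path (the callback
        returns True on acceptance). Then up to n coins are flipped; the
        machine halts and rejects only if all n land heads, an event of
        probability 2**-n, so the expected number of rounds is 2**n. The
        only state kept is the run of consecutive heads: O(log n) bits.
        """
        while True:
            if try_one_random_path():
                return True  # an accepting path certifies membership
            heads_in_a_row = 0
            while heads_in_a_row < n and random.random() < 0.5:
                heads_in_a_row += 1
            if heads_in_a_row == n:
                return False  # n heads in a row: stop and reject

    # Toy trial accepting with probability 1/100 per round; with n = 10
    # we expect about 2**10 = 1024 rounds, so acceptance is very likely.
    print(repeat_with_randomized_counter(10, lambda: random.random() < 0.01))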

Because of the Immerman–Szelepcsényi theorem, according to which NL is closed under complements, the one-sided error in these probabilistic computations can be replaced by zero-sided error. That is, these problems can be solved by probabilistic Turing machines that use logarithmic space and never make errors. The corresponding complexity class that also requires the machine to use only polynomial time is called ZPLP.

Thus, when we look at space alone, it seems that randomization and nondeterminism are equally powerful.

Certificate definition

NL can equivalently be characterised by certificates, analogous to classes such as NP. Consider a deterministic logarithmic-space bounded Turing machine that has an additional read-only read-once input tape. A language is in NL if and only if such a Turing machine accepts any word in the language for an appropriate choice of certificate in its additional input tape, and rejects any word not in the language regardless of the certificate.[1]
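
As a concrete illustration (a toy example of ours, not taken from the cited definition): for ST-connectivity, the certificate can list the vertices of a claimed path from s to t. The verifier below reads the certificate once, left to right, and keeps only the previous vertex and a step counter, which fits in logarithmic space.

    def verify_path_certificate(adj, s, t, n, certificate):
        """Read-once verifier for ST-connectivity.

        certificate is an iterator over vertices, consumed a single time,
        left to right. The verifier stores only the previous vertex and a
        step counter, both O(log n) bits, and accepts iff the certificate
        spells out a path from s to t of length at most n.
        """
        prev, steps = s, 0
        for v in certificate:  # each certificate symbol is read exactly once
            steps += 1
            if steps > n or not adj(prev, v):
                return False
            prev = v
            if v == t:
                return True
        return prev == t

    edges = {(0, 1), (1, 2)}
    adj = lambda u, v: (u, v) in edges
    print(verify_path_certificate(adj, 0, 2, 3, iter([1, 2])))  # True
    print(verify_path_certificate(adj, 0, 2, 3, iter([2])))     # False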

Cem Say and Abuzer Yakaryılmaz have proven that the deterministic logarithmic-space Turing machine in the statement above can be replaced by a bounded-error probabilistic constant-space Turing machine that is allowed to use only a constant number of random bits.[2]

Descriptive complexity

There is a simple logical characterization of NL: it contains precisely those languages expressible in first-order logic with an added transitive closure operator.
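
For example, directed reachability, an NL-complete problem, can be written in this logic by applying the transitive-closure operator TC to the edge relation E (a sketch in standard FO(TC) notation; the predicate name Reach is ours):

    % s-t reachability: the transitive closure of the edge relation E,
    % with TC binding the variables u and v, applied to the pair (s, t).
    \mathrm{Reach}(s,t) \;\equiv\; \bigl[\mathrm{TC}_{u,v}\, E(u,v)\bigr](s,t)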

Closure properties

The class NL is closed under the operations of complementation and union (and therefore, by De Morgan's laws, intersection), as well as concatenation and Kleene star.

Notes

  1. Arora, Sanjeev; Barak, Boaz (2009). "Definition 4.19". Computational Complexity: A Modern Approach. Cambridge University Press. ISBN 978-0-521-42426-4.
  2. A. C. Cem Say, Abuzer Yakaryılmaz, "Finite state verifiers with constant randomness", Logical Methods in Computer Science, Vol. 10(3:6), 2014, pp. 1–17.
