L (complexity)

In computational complexity theory, L (also known as LSPACE or DLOGSPACE) is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. [1] [2] Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, [3] whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input [1] and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.
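As an illustration of this memory discipline, the following sketch (added here for illustration, not taken from the cited sources) decides the language of palindromes while keeping only two indices into the read-only input; each index fits in O(log n) bits, and no other writable state is needed.

    # Illustrative sketch of a logspace-style algorithm for the palindrome
    # language. The input string stands in for the read-only input tape;
    # the only writable state is two indices of O(log n) bits each.
    def is_palindrome(read_only_input: str) -> bool:
        i = 0                            # left pointer into the input
        j = len(read_only_input) - 1     # right pointer into the input
        while i < j:
            if read_only_input[i] != read_only_input[j]:
                return False             # mismatch: reject
            i += 1
            j -= 1
        return True                      # every pair matched: accept

    print(is_palindrome("abcba"))  # True
    print(is_palindrome("abcda"))  # False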

Complete problems and logical characterization

Every non-trivial problem in L is complete under log-space reductions (such a reduction is powerful enough to decide the instance itself and output a fixed yes- or no-instance accordingly), [4] so weaker reductions are required to identify meaningful notions of L-completeness, the most common being first-order reductions.

A 2004 result by Omer Reingold shows that USTCON, the problem of whether a path exists between two given vertices in an undirected graph, is in L; since USTCON is SL-complete, it follows that L = SL. [5]
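To make the decision problem concrete, the sketch below solves USTCON by ordinary breadth-first search. It only illustrates what is being decided: the visited set uses linear space, so this is not a log-space algorithm, and Reingold's actual log-space construction is considerably more involved.

    # Illustrative only (NOT log-space): breadth-first search decides USTCON
    # but stores a visited set of linear size.
    from collections import deque

    def ustcon(n, edges, s, t):
        """Is there a path between vertices s and t in an undirected graph?"""
        adjacency = {v: [] for v in range(n)}
        for u, v in edges:
            adjacency[u].append(v)
            adjacency[v].append(u)
        visited, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for w in adjacency[u]:
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
        return False

    print(ustcon(4, [(0, 1), (1, 2)], 0, 2))  # True
    print(ustcon(4, [(0, 1), (1, 2)], 0, 3))  # False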

One consequence of this is a simple logical characterization of L: it contains precisely those languages expressible in first-order logic with an added commutative transitive closure operator (in graph-theoretical terms, this turns every connected component into a clique). This result has applications to database query languages: the data complexity of a query is defined as the complexity of answering a fixed query when the data size is the variable input. For this measure, queries against relational databases with complete information (having no notion of nulls), as expressed for instance in relational algebra, are in L.

L is a subclass of NL, which is the class of languages decidable in logarithmic space on a nondeterministic Turing machine. A problem in NL may be transformed into a problem of reachability in a directed graph representing states and state transitions of the nondeterministic machine, and the logarithmic space bound implies that this graph has a polynomial number of vertices and edges, from which it follows that NL is contained in the complexity class P of problems solvable in deterministic polynomial time. [6] Thus L ⊆ NL ⊆ P. The inclusion of L into P can also be proved more directly: a decider using O(log n) space cannot use more than 2^O(log n) = n^O(1) time, because this is the total number of possible configurations.
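The bound 2^O(log n) = n^O(1) can be made tangible by counting configurations directly. The sketch below uses assumed, illustrative parameters (the number of states, the tape alphabet size, and the constant in the space bound are placeholders): a configuration is determined by the control state, the input-head position, the work-head position, and the contents of the O(log n)-cell work tape, so the total count is polynomial in n.

    # Rough illustration with assumed parameters: an O(log n)-space machine
    # has only polynomially many configurations.
    import math

    def configuration_bound(n, num_states=10, tape_alphabet=2, c=1):
        work_cells = max(1, math.ceil(c * math.log2(n)))   # c * log n work-tape cells
        return (num_states                      # finite control state
                * n                             # input-head position
                * work_cells                    # work-head position
                * tape_alphabet ** work_cells)  # work-tape contents: 2^(c log n) = n^c

    for n in (16, 256, 4096):
        print(n, configuration_bound(n))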

L further relates to the class NC in the following way: NC^1 ⊆ L ⊆ NL ⊆ NC^2. In words, given a parallel computer C with a polynomial number O(n^k) of processors for some constant k, any problem that can be solved on C in O(log n) time is in L, and any problem in L can be solved in O(log^2 n) time on C.

Important open problems include whether L = P, [2] and whether L = NL. [7] It is not even known whether L = NP. [8]

The related class of function problems is FL. FL is often used to define log-space reductions.

Additional properties

L is low for itself, because it can simulate log-space oracle queries (roughly speaking, "function calls which use log space") in log space, reusing the same space for each query.

Other uses

The main idea of logspace is that one can store a number of polynomial magnitude in logspace and use it as a pointer to a position in the input.

The logspace class is therefore useful to model computation where the input is too big to fit in the RAM of a computer. Long DNA sequences and large databases are good examples of problems where only a constant portion of the input is in RAM at any given time and where pointers are used to compute the next part of the input to inspect, thus using only logarithmic memory.
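A minimal sketch of this streaming view, with a hypothetical generator standing in for a DNA sequence too large to load into memory: the only state retained is a single counter, which needs O(log n) bits.

    # Hedged sketch: scan a huge input as a stream, keeping only an
    # O(log n)-bit counter in memory.
    def count_symbol(stream, symbol):
        count = 0                 # the only writable state
        for ch in stream:
            if ch == symbol:
                count += 1
        return count

    # A generator stands in for a long sequence read from disk.
    dna = (base for base in "ACGTACGTGGT")
    print(count_symbol(dna, "G"))  # 4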

See also

Notes

  1. Sipser (1997), p. 295, Definition 8.12
  2. Garey & Johnson (1979), p. 177
  3. On a read/write input tape, a linear amount of memory could be obtained by packing symbols (as in the proof of the linear speedup theorem), thus evading the logspace constraint.
  4. See Garey & Johnson (1979), p. 179, Theorem 7.13 (claim 2)
  5. Reingold, Omer (2005). "Undirected ST-connectivity in log-space". STOC'05: Proceedings of the 37th Annual ACM Symposium on Theory of Computing. ACM, New York. pp. 376–385. doi:10.1145/1060590.1060647. MR 2181639. ECCC TR04-094.
  6. Sipser (1997), Corollary 8.21, p. 299.
  7. Sipser (1997), p. 297; Garey & Johnson (1979), p. 180
  8. "Complexity theory - is it possible that L = NP".

Related Research Articles

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm.

NP (complexity) – Complexity class used to classify decision problems

In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.

In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem to the second, and the number of times the subroutine is called is polynomial, then the first problem is polynomial-time reducible to the second.

In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. As a consequence, a probabilistic Turing machine can—unlike a deterministic Turing Machine—have stochastic results; that is, on a given input and instruction state machine, it may have different run times, or it may not halt at all; furthermore, it may accept an input in one execution and reject the same input in another execution.

The space complexity of an algorithm or a data structure is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.

Time complexity – Estimate of time taken for running an algorithm

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.

Complexity class – Set of problems in computational complexity theory

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function s(n) ≥ log n, NSPACE(s(n)) ⊆ DSPACE(s(n)^2).

In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.

In computational complexity theory, the complexity class FP is the set of function problems that can be solved by a deterministic Turing machine in polynomial time. It is the function problem version of the decision problem class P. Roughly speaking, it is the class of functions that can be efficiently computed on classical computers without randomization.

In computational complexity theory, an advice string is an extra input to a Turing machine that is allowed to depend on the length n of the input, but not on the input itself. A decision problem is in the complexity class P/f(n) if there is a polynomial time Turing machine M with the following property: for any n, there is an advice string A of length f(n) such that, for any input x of length n, the machine M correctly decides the problem on the input x, given x and A.

In computational complexity theory, NL is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space.

In computational complexity theory, the complexity class FL is the set of function problems which can be solved by a deterministic Turing machine in a logarithmic amount of memory space. As in the definition of L, the machine reads its input from a read-only tape and writes its output to a write-only tape; the logarithmic space restriction applies only to the read/write working tape.

In computational complexity theory, SL is the complexity class of problems log-space reducible to USTCON, which is the problem of determining whether there exists a path between two vertices in an undirected graph, otherwise described as the problem of determining whether two vertices are in the same connected component. This problem is also called the undirected reachability problem. It does not matter whether many-one reducibility or Turing reducibility is used. Although originally described in terms of symmetric Turing machines, that equivalent formulation is very complex, and the reducibility definition is what is used in practice.

Randomized Logarithmic-space (RL), sometimes called RLP, is the complexity class of computational complexity theory problems solvable in logarithmic space and polynomial time with probabilistic Turing machines with one-sided error. It is named in analogy with RP, which is similar but has no logarithmic space restriction.

In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem.

In computer science, st-connectivity or STCON is a decision problem asking, for vertices s and t in a directed graph, if t is reachable from s.

In computational complexity theory, NL-complete is a complexity class containing the languages that are complete for NL, the class of decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space. The NL-complete languages are the most "difficult" or "expressive" problems in NL. If a deterministic algorithm exists for solving any one of the NL-complete problems in logarithmic memory space, then NL = L.

NP-completeness – Complexity class

In computational complexity theory, a problem is NP-complete when:

  1. It is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no".
  2. When the answer is "yes", this can be demonstrated through the existence of a short solution.
  3. The correctness of each solution can be verified quickly and a brute-force search algorithm can find a solution by trying all possible solutions.
  4. The problem can be used to simulate every other problem for which we can verify quickly that a solution is correct. In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified.

References