NSPACE

In computational complexity theory, non-deterministic space or NSPACE is the computational resource describing the memory space for a non-deterministic Turing machine. It is the non-deterministic counterpart of DSPACE.

Complexity classes

The measure NSPACE is used to define complexity classes of decision problems solvable by a non-deterministic Turing machine within bounded space. The complexity class NSPACE(f(n)) is the set of decision problems that can be solved by a non-deterministic Turing machine, M, using space O(f(n)), where n is the length of the input.[1]
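
In set notation, the definition above can be restated as the following class of languages (a direct restatement, added here for readability):

    \[
      \mathsf{NSPACE}\big(f(n)\big) =
      \big\{\, L \;\big|\; L \text{ is decided by some non-deterministic Turing machine using } O(f(n)) \text{ space} \,\big\}
    \]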

Several important complexity classes can be defined in terms of NSPACE. These include:

NL = NSPACE(log n)
NPSPACE = the union of NSPACE(n^k) over all constants k
NEXPSPACE = the union of NSPACE(2^(n^k)) over all constants k

The Immerman–Szelepcsényi theorem states that NSPACE(s(n)) is closed under complement for every function s(n) ≥ log n.
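
In symbols, the theorem says that non-deterministic space classes coincide with their complements (again a restatement of the sentence above):

    \[
      \mathsf{NSPACE}\big(s(n)\big) = \mathsf{co\text{-}NSPACE}\big(s(n)\big)
      \qquad \text{for every } s(n) \ge \log n .
    \]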

A further generalization is ASPACE, defined with alternating Turing machines.

Relation with other complexity classes

DSPACE

NSPACE is the non-deterministic counterpart of DSPACE, the class of problems solvable within bounded memory space on a deterministic Turing machine. First by definition, then by Savitch's theorem, we have that (for f(n) ≥ log n):

DSPACE(f(n)) ⊆ NSPACE(f(n)) ⊆ DSPACE((f(n))²)

Time

Non-deterministic space bounds also yield deterministic time bounds, by the following theorem:

If a language L is decided in space S(n) (where S(n) ≥ log n) by a non-deterministic TM, then there exists a constant C such that L is decided in time O(C^(S(n))) by a deterministic one.[2]
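
The idea behind this theorem is that a machine confined to space S(n) has at most C^(S(n)) distinct configurations, so a deterministic machine can decide acceptance by exhaustively searching the machine's configuration graph. The sketch below illustrates that search in Python; the configuration encoding, the `successors` interface, and the toy example are illustrative assumptions rather than material from the source.

    from collections import deque

    def ntm_accepts(start, accepting, successors):
        """Deterministic breadth-first search of an NTM's configuration graph.

        A machine using space S(n) has at most C**S(n) distinct
        configurations for some constant C depending on the machine,
        so this search runs in time O(C**S(n)), matching the theorem.

        start      -- initial configuration (any hashable value)
        accepting  -- predicate: is this configuration accepting?
        successors -- function mapping a configuration to the
                      configurations reachable from it in one step
        """
        seen = {start}
        queue = deque([start])
        while queue:
            config = queue.popleft()
            if accepting(config):
                return True          # some non-deterministic branch accepts
            for nxt in successors(config):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False                 # no branch accepts

    # Toy usage: configurations are the integers 0..7, configuration 5
    # accepts, and each configuration branches two ways.
    succ = lambda c: [(c + 1) % 8, (c + 3) % 8]
    print(ntm_accepts(0, lambda c: c == 5, succ))  # True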

Limitations

Space complexity measured in terms of DSPACE is useful because it represents the total amount of memory an actual computer would need to solve a given computational problem with a given algorithm: deterministic Turing machines are a realistic model of physical computers. NSPACE, by contrast, bounds the space used by non-deterministic Turing machines, which do not correspond to physical machines. For this reason, NSPACE is of limited direct use in real-world applications.

Related Research Articles

Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm.

In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine.

In computational complexity theory, PSPACE is the set of all decision problems that can be solved by a Turing machine using a polynomial amount of space.

In computational complexity theory, the complexity class EXPTIME is the set of all decision problems that are solvable by a deterministic Turing machine in exponential time, i.e., in O(2^(p(n))) time, where p(n) is a polynomial function of n. EXPTIME is one class in an exponential hierarchy of complexity classes with increasingly more complex oracles or quantifier alternations. For example, the class 2-EXPTIME is defined similarly to EXPTIME but with a doubly exponential time bound. This can be generalized to higher and higher time bounds. EXPTIME can also be reformulated as the space class APSPACE, the set of all problems that can be solved by an alternating Turing machine in polynomial space.

In computational complexity theory, EXPSPACE is the set of all decision problems solvable by a deterministic Turing machine in exponential space, i.e., in O(2^(p(n))) space, where p(n) is a polynomial function of n. Some authors restrict p(n) to be a linear function, but most authors instead call the resulting class ESPACE. If we use a nondeterministic machine instead, we get the class NEXPSPACE, which is equal to EXPSPACE by Savitch's theorem.
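
The equality NEXPSPACE = EXPSPACE is a direct application of Savitch's theorem, since squaring an exponential space bound only doubles the polynomial in the exponent; spelled out for clarity:

    \[
      \mathsf{NSPACE}\big(2^{p(n)}\big)
      \subseteq \mathsf{DSPACE}\big(\big(2^{p(n)}\big)^{2}\big)
      = \mathsf{DSPACE}\big(2^{2p(n)}\big)
      \subseteq \mathsf{EXPSPACE},
    \]

and the reverse inclusion holds by definition, since every deterministic machine is a special case of a non-deterministic one.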

In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved in n² time but not in n time.

The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory an algorithm needs to run and produce its output.

In computational complexity theory, a complexity class is a set of computational problems of related resource-based complexity. The two most common resources considered are time and memory.

In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function f(n) ≥ log n,

NSPACE(f(n)) ⊆ DSPACE((f(n))²)

In computational complexity theory, DSPACE or SPACE is the computational resource describing the memory space available to a deterministic Turing machine. It represents the total amount of memory space that a "normal" physical computer would need to solve a given computational problem with a given algorithm.

In computational complexity theory, DTIME is the computational resource of computation time for a deterministic Turing machine. It represents the amount of time that a "normal" physical computer would take to solve a certain computational problem using a certain algorithm. It is one of the most well-studied complexity resources, because it corresponds so closely to an important real-world resource.

In computational complexity theory, the complexity class NTIME(f(n)) is the set of decision problems that can be solved by a non-deterministic Turing machine which runs in time O(f(n)). Here O is the big O notation, f is some function, and n is the size of the input.

In computational complexity theory, P, also known as PTIME or DTIME(n^(O(1))), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems.

In computational complexity theory, the complexity class NEXPTIME is the set of decision problems that can be solved by a non-deterministic Turing machine using time 2^(n^(O(1))).

In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP. The concept of an ATM was set forth by Chandra and Stockmeyer and independently by Kozen in 1976, with a joint journal publication in 1981.

In computational complexity theory, NL is the complexity class containing decision problems which can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space.

In computational complexity theory, SL is the complexity class of problems log-space reducible to USTCON, which is the problem of determining whether there exists a path between two vertices in an undirected graph, otherwise described as the problem of determining whether two vertices are in the same connected component. This problem is also called the undirected reachability problem. It does not matter whether many-one reducibility or Turing reducibility is used. Although originally described in terms of symmetric Turing machines, that equivalent formulation is very complex, and the reducibility definition is what is used in practice.

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits.

In computational complexity theory, structural complexity theory, or simply structural complexity, is the study of complexity classes themselves, rather than of the computational complexity of individual problems and algorithms. It involves research into both the internal structure of various complexity classes and the relations between different complexity classes.

References

  1. Sipser, Michael (2006). Introduction to the Theory of Computation (2nd ed.). Course Technology. pp. 303–304. ISBN 978-0-534-95097-2.
  2. Goddard, Wayne (2008). Introducing the Theory of Computation. Jones and Bartlett Publishers. p. 183. ISBN 978-0-7637-4125-9.