DSPACE

In computational complexity theory, DSPACE or SPACE is the computational resource of memory space for a deterministic Turing machine. It represents the total amount of memory space that a "normal" physical computer would need to solve a given computational problem with a given algorithm.

Complexity classes

The measure DSPACE is used to define complexity classes: sets of all the decision problems that can be solved using a certain amount of memory space. For each function f(n), there is a complexity class SPACE(f(n)), the set of decision problems that can be solved by a deterministic Turing machine using space O(f(n)). There is no restriction on the amount of computation time that can be used, though there may be restrictions on some other complexity measures (like alternation).
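
In symbols, the definition can be written in set-builder form as follows (a standard rendering, not fixed by this article):

  \[
    \mathrm{DSPACE}\bigl(f(n)\bigr) \;=\;
      \bigl\{\, L \;\big|\; L \text{ is a decision problem solvable by a deterministic Turing machine using } O\bigl(f(n)\bigr) \text{ space} \,\bigr\}
  \]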

Several important complexity classes are defined in terms of DSPACE. These include:

  - REG = DSPACE(O(1)), the class of regular languages (in fact, REG = DSPACE(o(log log n)); see the theorem below);
  - L = DSPACE(O(log n)), the problems solvable in logarithmic space;
  - PSPACE = ⋃_{k} DSPACE(n^k), the problems solvable in polynomial space;
  - EXPSPACE = ⋃_{k} DSPACE(2^{n^k}), the problems solvable in exponential space.

In fact, sublogarithmic space adds no power over constant space: if s(n) = o(log log n), then DSPACE(s(n)) = DSPACE(O(1)), the class of regular languages. [1]

Proof: Suppose that there exists a non-regular language L ∈ DSPACE(s(n)), for s(n) = o(log log n). Let M be a Turing machine deciding L in space s(n). By our assumption, L ∉ DSPACE(O(1)); thus, for every constant k, there exists an input of M requiring more space than k.

Let x be an input of smallest size, denoted by n, that requires more space than k, and let C be the set of all configurations of M on input x. Because M ∈ DSPACE(s(n)), we have |C| ≤ 2^{c·s(n)}, where c is a constant depending on M.

Let S denote the set of all possible crossing sequences of M on x. Note that the length of a crossing sequence of M on x is at most |C|: if it were any longer, some configuration would repeat, and M would go into an infinite loop. There are also at most |C| possibilities for every element of a crossing sequence, so the number of different crossing sequences of M on x is

  |S| ≤ |C|^{|C|} ≤ (2^{c·s(n)})^{2^{c·s(n)}} = 2^{c·s(n)·2^{c·s(n)}} ≤ 2^{2^{2c·s(n)}} = 2^{2^{o(log log n)}} = o(n).

According to the pigeonhole principle, since x has n + 1 boundaries between cells but only |S| = o(n) distinct crossing sequences are available, there exist indexes i < j such that C_i = C_j, where C_i and C_j are the crossing sequences at boundary i and j, respectively.

Let x′ be the string obtained from x by removing all cells from i + 1 to j. The machine M still behaves in exactly the same way on input x′ as on input x, so it needs more space than k to compute x′ as well. However, |x′| < |x|, contradicting the choice of x as a smallest such input. Hence, there does not exist such a language L as assumed. □

The above theorem implies the necessity of the space-constructible function assumption in the space hierarchy theorem.

Machine models

DSPACE is traditionally measured on a deterministic Turing machine. Several important space complexity classes are sublinear, that is, their space bounds are smaller than the size of the input. Thus, "charging" the algorithm for the size of the input, or for the size of the output, would not truly capture the memory space used. This is solved by defining the multi-tape Turing machine with input and output, which is a standard multi-tape Turing machine, except that the input tape may never be written to, and the output tape may never be read from. This allows smaller space classes, such as L (logarithmic space), to be defined in terms of the amount of space used by the work tapes alone (excluding the special input and output tapes).
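
To make the accounting concrete, here is a minimal Python sketch (the helper name decide_balanced is invented for illustration, not a standard API). It decides the language of binary strings with equally many 0s and 1s in one pass over the read-only input, and reports only the work storage, a single counter of O(log n) bits; the input itself is never charged. This is the sense in which such a language sits in L.

    # Toy illustration of the "input tape is read-only, only work storage
    # is charged" convention. We decide {w in {0,1}* : #0(w) = #1(w)}
    # while measuring only the *work* memory (one counter).

    def decide_balanced(w: str) -> tuple[bool, int]:
        """Return (accept?, approximate work space in bits)."""
        diff = 0                      # work storage: a single counter
        max_abs = 0
        for c in w:                   # one pass over the read-only input
            diff += 1 if c == '1' else -1
            max_abs = max(max_abs, abs(diff))
        # The counter never exceeds |w| in absolute value, so it fits in
        # O(log |w|) bits -- which is why this language is in L.
        work_bits = max(1, max_abs).bit_length()
        return diff == 0, work_bits

    if __name__ == "__main__":
        for w in ["0101", "0011", "000", ""]:
            print(w or "(empty)", decide_balanced(w))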

Since several symbols can be packed into one by taking a suitable power of the alphabet, for all c ≥ 1 and f such that f(n) ≥ 1, the class of languages recognizable in c·f(n) space is the same as the class of languages recognizable in f(n) space. This justifies the use of big O notation in the definition.
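
A sketch of this packing trick in Python (illustrative helper names; the blank symbol is written as '_' purely for display): blocks of c ordinary symbols become single symbols over the power alphabet, so the tape content occupies ⌈n/c⌉ cells instead of n.

    # Tape-compression sketch: pack c symbols over {0,1} into one symbol
    # of the power alphabet {0,1,_}^c, cutting the cell count by a factor c.

    def pack(tape: str, c: int) -> list[str]:
        padded = tape + "_" * (-len(tape) % c)      # pad with blanks
        return [padded[i:i + c] for i in range(0, len(padded), c)]

    def unpack(cells: list[str]) -> str:
        return "".join(cells).rstrip("_")

    tape = "0110100"
    cells = pack(tape, 3)          # 7 cells become 3 super-cells
    assert unpack(cells) == tape
    print(len(tape), "cells ->", len(cells), "super-cells:", cells)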

Hierarchy theorem

The space hierarchy theorem shows that, for every space-constructible function f(n), there exists a language L which is decidable in space O(f(n)) but not in space o(f(n)).

Relation with other complexity classes

DSPACE is the deterministic counterpart of NSPACE, the class of memory space on a non-deterministic Turing machine. By Savitch's theorem,[3] for any function f(n) ≥ log n we have

  DSPACE(f(n)) ⊆ NSPACE(f(n)) ⊆ DSPACE((f(n))²).
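
The second containment rests on a recursive reachability idea, sketched here in Python (a didactic toy on an explicit graph, not an actual Turing-machine simulator): whether u can reach v in at most 2^k steps is decided by trying every midpoint w, and since the recursion depth is only k, the reusable storage is O(k · log n) = O(log² n) for an n-vertex graph.

    # Sketch of the reachability procedure behind Savitch's theorem:
    # st-connectivity decided with O(log^2 n) reusable space by recursively
    # guessing a midpoint. Python's call stack plays the role of the work
    # tape: depth O(log n), with O(log n) bits per frame.

    def reach(adj, u, v, k):
        """Is there a path from u to v of length at most 2**k?"""
        if k == 0:
            return u == v or adj[u][v]
        return any(reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
                   for w in range(len(adj)))

    if __name__ == "__main__":
        import math
        adj = [[False] * 4 for _ in range(4)]
        adj[0][1] = adj[1][2] = adj[2][3] = True   # path 0 -> 1 -> 2 -> 3
        k = math.ceil(math.log2(len(adj)))         # paths of length <= n suffice
        print(reach(adj, 0, 3, k))                 # True
        print(reach(adj, 3, 0, k))                 # False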

NTIME is related to DSPACE in the following way. For any time-constructible function t(n), we have

  NTIME(t(n)) ⊆ DSPACE(t(n)),

since a deterministic machine can try the nondeterministic choice sequences (each of length O(t(n))) one after another, reusing the same O(t(n)) work space for every trial.

References

  1. Szepietowski, Andrzej (1994), Turing Machines with Sublogarithmic Space, Lecture Notes in Computer Science 843, Springer, p. 28
  2. Alberts, Maris (1985), Space complexity of alternating Turing machines
  3. Arora, Sanjeev; Barak, Boaz (2009), Computational Complexity: A Modern Approach, Cambridge University Press, p. 86