Space complexity

The space complexity of an algorithm or a data structure is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory an algorithm requires to run to completion. [1] This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
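
To make the distinction concrete, here is a minimal Python sketch (added for illustration; the function names are hypothetical, not from the source). Both functions receive the same n-element list, so their input space is O(n), but they differ in auxiliary space: the in-place version keeps only two indices, O(1), while the copying version allocates a second n-element list, O(n).

    # Added sketch: input space vs. auxiliary space (illustrative function names).
    def reverse_in_place(items: list) -> None:
        """O(1) auxiliary space: only two index variables beyond the input."""
        left, right = 0, len(items) - 1
        while left < right:
            items[left], items[right] = items[right], items[left]
            left += 1
            right -= 1

    def reversed_copy(items: list) -> list:
        """O(n) auxiliary space: builds a second n-element list."""
        return [items[i] for i in range(len(items) - 1, -1, -1)]

    # Counting the input as well, both functions use O(n) total space;
    # only their auxiliary space differs.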

Similar to time complexity, space complexity is often expressed asymptotically in big O notation, such as O(n), O(n log n), O(n^α), O(2^n), etc., where n is a characteristic of the input influencing space complexity.

Space complexity classes

Analogously to time complexity classes DTIME(f(n)) and NTIME(f(n)), the complexity classes DSPACE(f(n)) and NSPACE(f(n)) are the sets of languages that are decidable by deterministic (respectively, non-deterministic) Turing machines that use O(f(n)) space. The complexity classes PSPACE and NPSPACE allow f to be any polynomial, analogously to P and NP. That is,

PSPACE = ⋃_{c ∈ ℤ⁺} DSPACE(n^c)

and

NPSPACE = ⋃_{c ∈ ℤ⁺} NSPACE(n^c)

Relationships between classes

The space hierarchy theorem states that, for all space-constructible functions f(n), there exists a problem that can be solved by a machine with f(n) memory space, but cannot be solved by a machine with asymptotically less than f(n) space.
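
Stated symbolically (one common formulation, added here rather than taken from the source), the deterministic version of the theorem reads:

    \[
    \mathsf{DSPACE}\bigl(o(f(n))\bigr) \subsetneq \mathsf{DSPACE}\bigl(f(n)\bigr)
    \quad \text{for every space-constructible } f(n) \ge \log n .
    \]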

The following containments between complexity classes hold. [2]

DTIME(f(n)) ⊆ DSPACE(f(n)) ⊆ NSPACE(f(n)) ⊆ DTIME(2^{O(f(n))})

Furthermore, Savitch's theorem gives the reverse containment that if f ∈ Ω(log n),

NSPACE(f(n)) ⊆ DSPACE((f(n))²).

As a direct corollary, PSPACE = NPSPACE. This result is surprising because it suggests that non-determinism can reduce the space necessary to solve a problem only by a small amount. In contrast, the exponential time hypothesis conjectures that for time complexity, there can be an exponential gap between deterministic and non-deterministic complexity.

The Immerman–Szelepcsényi theorem states that, again for f ∈ Ω(log n), NSPACE(f(n)) is closed under complementation. This shows another qualitative difference between time and space complexity classes, as nondeterministic time complexity classes are not believed to be closed under complementation; for instance, it is conjectured that NP ≠ co-NP. [3] [4]

LOGSPACE

L or LOGSPACE is the set of problems that can be solved by a deterministic Turing machine using only O(log n) memory space with regard to input size. Even a single counter that can index the entire n-bit input requires log n space, so LOGSPACE algorithms can maintain only a constant number of counters or other variables of similar bit complexity.
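
As an added illustration (not part of the original article), the following Python sketch shows the kind of computation that fits this counter-based picture: deciding whether a binary string contains equally many 0s and 1s. The input is scanned read-only, and the algorithm keeps just two counters, each of which needs only O(log n) bits for an n-bit input.

    # Added sketch: a LOGSPACE-style computation using a constant number of counters.
    def equal_zeros_and_ones(bits: str) -> bool:
        zeros = 0  # needs O(log n) bits
        ones = 0   # needs O(log n) bits
        for b in bits:  # single left-to-right scan of the read-only input
            if b == '0':
                zeros += 1
            elif b == '1':
                ones += 1
        return zeros == ones

    # Example: equal_zeros_and_ones("0110") is True; equal_zeros_and_ones("011") is False.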

LOGSPACE and other sub-linear space complexities are useful when processing large data that cannot fit into a computer's RAM. They are related to streaming algorithms, but only restrict how much memory can be used, while streaming algorithms have further constraints on how the input is fed into the algorithm. This class also sees use in the field of pseudorandomness and derandomization, where researchers consider the open problem of whether L = RL. [5] [6]
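
For contrast with the streaming setting, here is an illustrative sketch (added here, not from the source) of the Boyer–Moore majority vote algorithm: it makes a single pass over the stream and stores only one candidate element and one counter, so its working memory is O(log n) bits plus the space for a single stream element, while also respecting the one-pass access pattern typical of streaming algorithms.

    # Added sketch: a one-pass streaming computation (Boyer-Moore majority vote).
    from typing import Iterable, Optional, TypeVar

    T = TypeVar("T")

    def majority_candidate(stream: Iterable[T]) -> Optional[T]:
        """If the stream has a strict majority element, it is returned.
        Otherwise the returned candidate must be verified in a second pass."""
        candidate: Optional[T] = None
        count = 0
        for item in stream:
            if count == 0:
                candidate, count = item, 1
            elif item == candidate:
                count += 1
            else:
                count -= 1
        return candidate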

The corresponding nondeterministic space complexity class is NL.

Auxiliary space complexity

The term auxiliary space refers to space other than that consumed by the input. Auxiliary space complexity could be formally defined in terms of a Turing machine with a separate input tape which cannot be written to, only read, and a conventional working tape which can be written to. The auxiliary space complexity is then defined (and analyzed) via the working tape. For example, consider the depth-first search of a balanced binary tree with n nodes: its auxiliary space complexity is Θ(log n).
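
A small Python sketch of that example (added for illustration; the node type and function names are hypothetical): a recursive depth-first search whose only auxiliary memory is the recursion stack. The stack depth equals the tree height, which is Θ(log n) for a balanced binary tree with n nodes, even though the tree itself (the input) occupies O(n) space.

    # Added sketch: depth-first search of a balanced binary tree.
    # Auxiliary space = recursion stack depth = tree height = Theta(log n) if balanced.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def depth_first_search(root: Optional[Node], target: int) -> bool:
        """Returns True if target occurs in the tree; uses O(height) auxiliary space."""
        if root is None:
            return False
        if root.value == target:
            return True
        return (depth_first_search(root.left, target)
                or depth_first_search(root.right, target))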

See also

References

  1. Kuo, Way; Zuo, Ming J. (2003), Optimal Reliability Modeling: Principles and Applications, John Wiley & Sons, p. 62, ISBN 9780471275459
  2. Arora, Sanjeev; Barak, Boaz (2007), Computational Complexity: A Modern Approach (PDF) (draft ed.), p. 76, ISBN 9780511804090
  3. Immerman, Neil (1988), "Nondeterministic space is closed under complementation" (PDF), SIAM Journal on Computing, 17 (5): 935–938, doi:10.1137/0217058, MR 0961049
  4. Szelepcsényi, Róbert (1987), "The method of forcing for nondeterministic automata", Bulletin of the EATCS, 33: 96–100
  5. Nisan, Noam (1992), "RL ⊆ SC", Proceedings of the 24th ACM Symposium on Theory of Computing (STOC '92), Victoria, British Columbia, Canada, pp. 619–623, doi:10.1145/129712.129772, S2CID 11651375
  6. Reingold, Omer; Trevisan, Luca; Vadhan, Salil (2006), "Pseudorandom walks on regular digraphs and the RL vs. L problem" (PDF), STOC'06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, New York: ACM, pp. 457–466, doi:10.1145/1132516.1132583, MR 2277171, S2CID 17360260