DTIME

In computational complexity theory, DTIME (or TIME) is the computational resource of computation time for a deterministic Turing machine. It represents the amount of time (or number of computation steps) that a "normal" physical computer would take to solve a certain computational problem using a certain algorithm. It is one of the most well-studied complexity resources, because it corresponds so closely to an important real-world resource (the amount of time it takes a computer to solve a problem).

The resource DTIME is used to define complexity classes, sets of all of the decision problems which can be solved using a certain amount of computation time. If a problem of input size n can be solved in $O(f(n))$, we have a complexity class $\mathsf{DTIME}(f(n))$ (or $\mathsf{TIME}(f(n))$). There is no restriction on the amount of memory space used, but there may be restrictions on some other complexity resources (like alternation).
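As a rough illustration of membership in a DTIME class, the following sketch counts elementary steps while deciding whether a binary string contains equally many 0s and 1s. This is ordinary Python rather than a Turing-machine model, and the one-step-per-symbol charging convention is our own simplification; still, the single left-to-right scan shows why such a problem lies in $\mathsf{DTIME}(n)$.

    def balanced(word: str) -> tuple[bool, int]:
        # Decide the "equally many 0s and 1s" problem, counting steps.
        steps = 0
        diff = 0
        for symbol in word:   # one pass over the input
            steps += 1        # charge one elementary step per symbol read
            diff += 1 if symbol == "1" else -1
        return diff == 0, steps

    accepted, steps = balanced("0110")
    print(accepted, steps)    # True 4 -- the step count equals the input size n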

Complexity classes in DTIME

Many important complexity classes are defined in terms of DTIME, containing all of the problems that can be solved in a certain amount of deterministic time. Any proper complexity function can be used to define a complexity class, but only certain classes are useful to study. In general, we desire our complexity classes to be robust against changes in the computational model, and to be closed under composition of subroutines.

DTIME satisfies the time hierarchy theorem, meaning that asymptotically larger amounts of time always create strictly larger sets of problems.
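Concretely, one standard form of the theorem (for multitape machines) states: if $g(n)$ is time-constructible and $f(n)\log f(n) = o(g(n))$, then

$\mathsf{DTIME}(f(n)) \subsetneq \mathsf{DTIME}(g(n))$

For example, since $n \log n = o(n^2)$, this separates time $n$ from time $n^2$.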

The well-known complexity class P comprises all of the problems which can be solved in a polynomial amount of DTIME. It can be defined formally as:

$\mathsf{P} = \bigcup_{k \in \mathbb{N}} \mathsf{DTIME}(n^k)$

P is the smallest robust class which includes linear-time problems (AMS 2004, Lecture 2.2, p. 20). P is one of the largest complexity classes considered "computationally feasible".

A much larger class using deterministic time is EXPTIME, which contains all of the problems solvable using a deterministic machine in exponential time. Formally, we have

$\mathsf{EXPTIME} = \bigcup_{k \in \mathbb{N}} \mathsf{DTIME}\left(2^{n^k}\right)$

Larger complexity classes can be defined similarly. Because of the time hierarchy theorem, these classes form a strict hierarchy; we know that $\mathsf{P} \subsetneq \mathsf{EXPTIME} \subsetneq \mathsf{2\text{-}EXPTIME}$, and so on up.

Machine model

For robust classes, such as P, the exact machine model used to define DTIME can vary without affecting the power of the resource. The computational complexity literature often defines DTIME in terms of multitape Turing machines, particularly when discussing very small time classes. A multitape deterministic Turing machine can never provide more than a quadratic time speedup over a single-tape machine. [1]
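Written as a containment (the tape-count subscripts are ad hoc notation for this article, not standard), the simulation underlying this bound says that a multitape machine running in time $f(n) \geq n$ can be simulated by a single-tape machine in time $O(f(n)^2)$:

$\mathsf{DTIME}_{\text{multitape}}(f(n)) \subseteq \mathsf{DTIME}_{\text{single-tape}}\left(f(n)^2\right)$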

Due to the linear speedup theorem for Turing machines, multiplicative constants in the time bound do not affect the extent of DTIME classes; a constant multiplicative speedup can always be obtained by increasing the number of states in the finite state control and the size of the tape alphabet. In the statement of Papadimitriou, [2] for a language L:

Let $L \in \mathsf{DTIME}(f(n))$. Then, for any $\epsilon > 0$, $L \in \mathsf{DTIME}(f'(n))$, where $f'(n) = \epsilon f(n) + n + 2$.
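For example, taking $f(n) = n^2$ and $\epsilon = 1/100$ gives $f'(n) = n^2/100 + n + 2$: any language decidable in time $n^2$ is also decidable in time $n^2/100 + n + 2$, at the price of a larger tape alphabet and more states in the finite control.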

Generalizations

Using a model other than a deterministic Turing machine, there are various generalizations and restrictions of DTIME. For example, if we use a nondeterministic Turing machine, we have the resource NTIME. The relationship between the expressive powers of DTIME and other computational resources is very poorly understood. One of the few known results [3] is

$\mathsf{NTIME}(n) \neq \mathsf{DTIME}(n)$

for multitape machines. This was extended to

$\mathsf{NTIME}\left(n\sqrt{\log^* n}\right) \neq \mathsf{DTIME}\left(n\sqrt{\log^* n}\right)$

by Santhanam. [4]
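In the other direction, the easy containment always holds: a deterministic Turing machine is a special case of a nondeterministic one, so $\mathsf{DTIME}(f(n)) \subseteq \mathsf{NTIME}(f(n))$ for every time bound $f(n)$. The difficulty lies in proving that the containment is strict.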

If we use an alternating Turing machine, we have the resource ATIME.
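Alternation is a powerful resource: a classical result of Chandra, Kozen, and Stockmeyer is that alternating polynomial time coincides with deterministic polynomial space, i.e., $\mathsf{AP} = \mathsf{PSPACE}$.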

References

  1. Papadimitriou, Christos H. (1994). Computational Complexity. Addison-Wesley. Thm. 2.1.
  2. Papadimitriou 1994, Thm. 2.2.
  3. Wolfgang Paul, Nick Pippenger, Endre Szemerédi, William Trotter. On determinism versus non-determinism and related problems. 24th Annual Symposium on Foundations of Computer Science, 1983. doi:10.1109/SFCS.1983.39.
  4. Rahul Santhanam. On separators, segregators and time versus space. 16th Annual IEEE Conference on Computational Complexity, 2001.