In computational complexity theory, the **time hierarchy theorems** are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with *n*^{2} time but not *n* time.

- Background
- Proof overview
- Deterministic time hierarchy theorem
- Statement
- Proof
- Extension
- Non-deterministic time hierarchy theorem
- Consequences
- Sharper hierarchy theorems
- See also
- References
- Further reading

The time hierarchy theorem for deterministic multi-tape Turing machines was first proven by Richard E. Stearns and Juris Hartmanis in 1965.^{ [1] } It was improved a year later when F. C. Hennie and Richard E. Stearns improved the efficiency of the Universal Turing machine.^{ [2] } Consequent to the theorem, for every deterministic time-bounded complexity class, there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse. More precisely, the time hierarchy theorem for deterministic Turing machines states that for all time-constructible functions *f*(*n*),

- DTIME(o(*f*(*n*))) ⊊ DTIME(*f*(*n*) log *f*(*n*)),

where DTIME(*f*(*n*)) denotes the complexity class of decision problems solvable in time O(*f*(*n*)). Note that the left-hand class involves little o notation, referring to the set of decision problems solvable in asymptotically **less** than *f*(*n*) time.

The time hierarchy theorem for nondeterministic Turing machines was originally proven by Stephen Cook in 1972.^{ [3] } It was improved to its current form via a complex proof by Joel Seiferas, Michael Fischer, and Albert Meyer in 1978.^{ [4] } Finally in 1983, Stanislav Žák achieved the same result with the simple proof taught today.^{ [5] } The time hierarchy theorem for nondeterministic Turing machines states that if *g*(*n*) is a time-constructible function, and *f*(*n*+1) = o(*g*(*n*)), then

- **NTIME**(*f*(*n*)) ⊊ **NTIME**(*g*(*n*)).

The analogous theorems for space are the space hierarchy theorems. A similar theorem is not known for time-bounded probabilistic complexity classes, unless the class also has one bit of advice.^{ [6] }

Both theorems use the notion of a time-constructible function. A function *f* : ℕ → ℕ is time-constructible if there exists a deterministic Turing machine such that for every *n* ∈ ℕ, if the machine is started with an input of *n* ones, it will halt after precisely *f*(*n*) steps. All polynomials with non-negative integer coefficients are time-constructible, as are exponential functions such as 2^{n}.
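The counting behaviour in this definition can be illustrated with a small sketch. This is not a real Turing machine, just a step-counted loop, and the bound *f*(*n*) = *n*^{2} and the name `clock` are illustrative assumptions:

```python
def clock(input_ones: str) -> int:
    """Sketch of a time-constructible "clock" for f(n) = n**2.

    Started on an input of n ones, it performs exactly n*n counted
    steps and then halts, returning the step count. A genuine
    time-constructibility argument runs on a Turing machine; this
    loop only mirrors the step counting.
    """
    n = len(input_ones)
    steps = 0
    for _ in range(n * n):
        steps += 1  # one unit of work per counted step
    return steps

print(clock("1" * 5))  # halts after exactly 25 steps
```

A clock of this kind is what a simulator uses to cut off a simulated machine after exactly *f*(*n*) steps, which is why the hierarchy theorems require *f* to be time-constructible.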

We need to prove that some time class **TIME**(*g*(*n*)) is strictly larger than some time class **TIME**(*f*(*n*)). We do this by constructing a machine which cannot be in **TIME**(*f*(*n*)), by diagonalization. We then show that the machine is in **TIME**(*g*(*n*)), using a simulator machine.

**Time Hierarchy Theorem.** If *f*(*n*) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time o(*f*(*n*)) but can be solved in worst-case deterministic time O(*f*(*n*) log *f*(*n*)). Thus

- DTIME(o(*f*(*n*))) ⊊ DTIME(*f*(*n*) log *f*(*n*)).

**Note 1.** *f*(*n*) is at least *n*, since smaller functions are never time-constructible.

**Example.** There are problems solvable in time *n* log^{2} *n* but not time *n*. This follows by setting *f*(*n*) = *n* log *n*, since *n* is in o(*n* log *n*).

We include here a proof of a weaker result, namely that **DTIME**(*f*(*n*)) is a strict subset of **DTIME**(*f*(2*n* + 1)^{3}), as it is simpler but illustrates the proof idea. See the bottom of this section for information on how to extend the proof to *f*(*n*)log*f*(*n*).

To prove this, we first define the language of the encodings of machines and their inputs which cause them to halt within *f*:

- *H _{f}* = { ([*M*], *x*) | *M* accepts *x* within *f*(|*x*|) steps }.

Notice that *H _{f}* is itself a decision problem: it is the set of pairs of machines and inputs to those machines (*M*, *x*) such that the machine *M* accepts *x* within *f*(|*x*|) steps.

Here, *M* is a deterministic Turing machine, and *x* is its input (the initial contents of its tape). [*M*] denotes an input that encodes the Turing machine *M*. Let *m* be the size of the tuple ([*M*], *x*).

We know that we can decide membership of *H _{f}* by way of a deterministic Turing machine *R*, which first computes *f*(|*x*|) (using time-constructibility) and then simulates *M* on *x* step by step against that count, accepting exactly when *M* accepts within *f*(|*x*|) steps. A straightforward simulation of this kind runs in time O(*f*(*m*)^{3}).

The rest of the proof will show that

- *H _{f}* ∉ **TIME**(*f*(⌊*m*/2⌋)),

so that if we substitute 2*n* + 1 for *m*, we get the desired result. Let us assume that *H _{f}* is in this time complexity class, and we will reach a contradiction.

If *H _{f}* is in this time complexity class, then there exists a machine *K* which, given a machine description [*M*] and an input *x*, decides whether ([*M*], *x*) is in *H _{f}* within *f*(⌊*m*/2⌋) steps.

We use this *K* to construct another machine, *N*, which takes a machine description [*M*] and runs *K* on the tuple ([*M*], [*M*]), i.e., *M* is simulated on its own code by *K*, and then *N* accepts if *K* rejects, and rejects if *K* accepts. If *n* is the length of the input to *N*, then *m* (the length of the input to *K*) is twice *n* plus some delimiter symbol, so *m* = 2*n* + 1. *N*'s running time is thus *f*(⌊*m*/2⌋) = *f*(⌊(2*n* + 1)/2⌋) = *f*(*n*).

Now if we feed [*N*] as input into *N* itself (which makes *n* the length of [*N*]) and ask the question whether *N* accepts its own description as input, we get:

- If *N* **accepts** [*N*] (which we know it does in at most *f*(*n*) operations since *K* halts on ([*N*], [*N*]) in *f*(*n*) steps), this means that *K* **rejects** ([*N*], [*N*]), so ([*N*], [*N*]) is not in *H _{f}*, and so by the definition of *H _{f}*, this implies that *N* does not accept [*N*] in *f*(*n*) steps. Contradiction.

- If *N* **rejects** [*N*] (which we know it does in at most *f*(*n*) operations), this means that *K* **accepts** ([*N*], [*N*]), so ([*N*], [*N*]) **is** in *H _{f}*, and thus *N* **does** accept [*N*] in *f*(*n*) steps. Contradiction.

We thus conclude that the machine *K* does not exist, and so *H _{f}* ∉ **TIME**(*f*(⌊*m*/2⌋)).
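The diagonal construction can be sketched in Python, with toy "machines" modelled as generator functions (each `yield` is one counted step) and an assumed time bound *f*(*n*) = *n*. All names here (`simulate`, `K`, `N`, the registry) are illustrative, not the proof's notation. Note that this `K` works by direct simulation and is therefore *slower* than *f*(*n*), which is exactly why running the code produces no paradox, whereas a hypothetical fast *K* could not exist:

```python
MACHINES = {}  # registry: a machine's name stands in for its encoding [M]

def machine(fn):
    MACHINES[fn.__name__] = fn
    return fn

def f(n):
    return n  # the assumed time bound f(n) = n

def simulate(name, x, budget):
    """Run machine `name` on input x for at most `budget` steps;
    True only if it accepts within the budget."""
    gen = MACHINES[name](x)
    try:
        for _ in range(budget):
            next(gen)
    except StopIteration as halt:
        return bool(halt.value)  # halted in time: its accept/reject answer
    return False                 # out of time: pair is not in H_f

def K(name, x):
    """Decides H_f = {([M], x) : M accepts x within f(|x|) steps}."""
    return simulate(name, x, f(len(x)))

@machine
def N(x):
    yield                # one counted step
    return not K(x, x)   # flip K's verdict about x run on its own encoding

def run_to_completion(name, x):
    """Run a machine with no step limit and return its answer."""
    gen = MACHINES[name](x)
    while True:
        try:
            next(gen)
        except StopIteration as halt:
            return bool(halt.value)

# N, run to completion on its own encoding, accepts ...
print(run_to_completion("N", "N"))   # True
# ... but not within f(|"N"|) = 1 steps, so ("N", "N") is not in H_f:
print(K("N", "N"))                   # False
```

If *K* itself ran within the *f*(⌊*m*/2⌋) budget, as assumed for contradiction in the proof, then *N*'s answer on [*N*] would also fall within the budget and would have to disagree with *K*'s claim about it, which is impossible; the slow simulator above escapes only by exceeding the budget.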

The reader may have realised that the proof gives the weaker result because we have chosen a simple Turing machine simulation, for which we only know that membership of *H _{f}* can be decided in time O(*f*(*m*)^{3}).

It is known^{ [7] } that a more efficient simulation exists which establishes that

- DTIME(o(*f*(*n*))) ⊊ DTIME(*f*(*n*) log *f*(*n*)).

If *g*(*n*) is a time-constructible function, and *f*(*n*+1) = o(*g*(*n*)), then there exists a decision problem which cannot be solved in non-deterministic time *f*(*n*) but can be solved in non-deterministic time *g*(*n*). In other words, the complexity class **NTIME**(*f*(*n*)) is a strict subset of **NTIME**(*g*(*n*)).

The time hierarchy theorems guarantee that the deterministic and non-deterministic versions of the exponential hierarchy are genuine hierarchies: in other words, **P** ⊊ **EXPTIME** ⊊ **2-EXP** ⊊ ... and **NP** ⊊ **NEXPTIME** ⊊ **2-NEXP** ⊊ ....

For example, **P** ⊊ **EXPTIME**, since **P** ⊆ DTIME(2^{n}) ⊊ DTIME(2^{2n}) ⊆ **EXPTIME**. Indeed, DTIME(2^{n}) ⊊ DTIME(2^{2n}) from the time hierarchy theorem.

The theorem also guarantees that there are problems in **P** requiring arbitrarily large exponents to solve; in other words, **P** does not collapse to **DTIME**(*n*^{k}) for any fixed *k*. For example, there are problems solvable in *n*^{5000} time but not *n*^{4999} time. This is one argument against Cobham's thesis, the convention that **P** is a practical class of algorithms. If such a collapse did occur, we could deduce that **P** ≠ **PSPACE**, since it is a well-known theorem that **DTIME**(*f*(*n*)) is strictly contained in **DSPACE**(*f*(*n*)).

However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of computational complexity theory: whether **P** and **NP**, **NP** and **PSPACE**, **PSPACE** and **EXPTIME**, or **EXPTIME** and **NEXPTIME** are equal or not.

The gap of approximately log *f*(*n*) between the lower and upper time bound in the hierarchy theorem can be traced to the efficiency of the device used in the proof, namely a universal program that maintains a step-count. This can be done more efficiently on certain computational models. The sharpest results, presented below, have been proved for:

- The unit-cost random-access machine^{ [8] }
- A programming language model whose programs operate on a binary tree that is always accessed via its root. This model, introduced by Neil D. Jones,^{ [9] } is stronger than a deterministic Turing machine but weaker than a random-access machine.

For these models, the theorem has the following form:

If *f*(*n*) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time *f*(*n*) but can be solved in worst-case time *af*(*n*) for some constant *a* (dependent on *f*).

Thus, a constant-factor increase in the time bound allows for solving more problems, in contrast with the situation for Turing machines (see Linear speedup theorem). Moreover, Ben-Amram proved^{ [10] } that, in the above models, for *f* of polynomial growth rate (but more than linear), it is the case that for all *ε* > 0, there exists a decision problem which cannot be solved in worst-case deterministic time *f*(*n*) but can be solved in worst-case time (1 + *ε*)*f*(*n*).


- ↑ Hartmanis, J.; Stearns, R. E. (May 1965). "On the computational complexity of algorithms". *Transactions of the American Mathematical Society*. **117**: 285–306. doi:10.2307/1994208. ISSN 0002-9947. JSTOR 1994208. MR 0170805.
- ↑ Hennie, F. C.; Stearns, R. E. (October 1966). "Two-Tape Simulation of Multitape Turing Machines". *J. ACM*. **13** (4): 533–546. doi:10.1145/321356.321362. ISSN 0004-5411. S2CID 2347143.
- ↑ Cook, Stephen A. (1972). "A hierarchy for nondeterministic time complexity". *Proceedings of the fourth annual ACM symposium on Theory of computing*. STOC '72. pp. 187–192. doi:10.1145/800152.804913.
- ↑ Seiferas, Joel I.; Fischer, Michael J.; Meyer, Albert R. (January 1978). "Separating Nondeterministic Time Complexity Classes". *J. ACM*. **25** (1): 146–167. doi:10.1145/322047.322061. ISSN 0004-5411. S2CID 13561149.
- ↑ Žák, Stanislav (October 1983). "A Turing machine time hierarchy". *Theoretical Computer Science*. **26** (3): 327–333. doi:10.1016/0304-3975(83)90015-4.
- ↑ Fortnow, L.; Santhanam, R. (2004). "Hierarchy Theorems for Probabilistic Polynomial Time". *45th Annual IEEE Symposium on Foundations of Computer Science*. p. 316. doi:10.1109/FOCS.2004.33. ISBN 0-7695-2228-9. S2CID 5555450.
- ↑ Sipser, Michael (2012). *Introduction to the Theory of Computation* (3rd ed.). Cengage Learning. ISBN 978-1-133-18779-0.
- ↑ Sudborough, Ivan H.; Zalcberg, A. (1976). "On Families of Languages Defined by Time-Bounded Random Access Machines". *SIAM Journal on Computing*. **5** (2): 217–230. doi:10.1137/0205018.
- ↑ Jones, Neil D. (1993). "Constant factors *do* matter". *25th Symposium on the Theory of Computing*: 602–611. doi:10.1145/167088.167244. S2CID 7527905.
- ↑ Ben-Amram, Amir M. (2003). "Tighter constant-factor time hierarchies". *Information Processing Letters*. **87** (1): 39–44. doi:10.1016/S0020-0190(03)00253-9.

- Sipser, Michael (1997). *Introduction to the Theory of Computation*. PWS Publishing. ISBN 0-534-94728-X. Pages 310–313 of section 9.1: Hierarchy theorems.
- Papadimitriou, Christos (1993). *Computational Complexity* (1st ed.). Addison-Wesley. ISBN 0-201-53082-1. Section 7.2: The Hierarchy Theorem, pp. 143–146.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
