ELEMENTARY


In computational complexity theory, the complexity class ELEMENTARY consists of the decision problems that can be solved in time bounded by an elementary recursive function. The most quickly-growing elementary functions are obtained by iterating an exponential function such as 2^n for a bounded number of iterations,

    2^n, 2^(2^n), 2^(2^(2^n)), …

Thus, ELEMENTARY is the union of the classes

    ELEMENTARY = ⋃_{k ≥ 1} k-EXP = DTIME(2^n) ∪ DTIME(2^(2^n)) ∪ DTIME(2^(2^(2^n))) ∪ ⋯

It is sometimes described as iterated exponential time,[1] though this term more commonly refers to time bounded by the tetration function.[2]
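For concreteness, the towers 2^n, 2^(2^n), 2^(2^(2^n)), … can be generated by iterating exponentiation a fixed number of times. The short Python sketch below (the helper name iterated_exp is ours, purely for illustration) evaluates these bounds for a small argument to show how quickly each level of the hierarchy grows.

    def iterated_exp(k: int, n: int) -> int:
        """Return the k-fold iterated exponential 2^(2^(...^(2^n))) with k twos."""
        result = n
        for _ in range(k):
            result = 2 ** result
        return result

    # k = 1, 2, 3 give the time bounds defining EXPTIME, 2-EXPTIME and 3-EXPTIME on inputs of size n.
    for k in (1, 2, 3):
        print(k, iterated_exp(k, 4))   # 16, then 65536, then 2**65536 (a 19729-digit number)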


This complexity class can be characterized by a certain class of "iterated stack automata", pushdown automata that can store the entire state of a lower-order iterated stack automaton in each cell of their stack. These automata can compute every language in ELEMENTARY, and cannot compute languages beyond this complexity class.[3] The time hierarchy theorem implies that ELEMENTARY has no complete problems.

Every elementary recursive function can be computed in a time bound of this form, and therefore every decision problem whose calculation uses only elementary recursive functions belongs to the complexity class ELEMENTARY.
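As a small illustration (a sketch assuming the usual characterization of elementary recursive functions via composition, bounded sums and bounded products; the helper names below are ours), the factorial function is elementary, and its values, and hence the time needed to compute and write them down, stay below a fixed tower of exponentials.

    def bounded_product(f, n):
        """Bounded product prod_{i=1}^{n} f(i), one of the closure operations
        allowed when building elementary recursive functions."""
        result = 1
        for i in range(1, n + 1):
            result *= f(i)
        return result

    def factorial(n: int) -> int:
        # n! = prod_{i=1}^{n} i is elementary: the identity function is elementary
        # and elementary functions are closed under bounded products.
        return bounded_product(lambda i: i, n)

    # The value is bounded by a single exponential: n! <= n^n <= 2^(n^2) for n >= 1.
    for n in range(1, 8):
        assert factorial(n) <= 2 ** (n * n)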

Related Research Articles

BQP – Computational complexity class of problems

In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer and is solvable by mechanical application of mathematical steps, such as an algorithm.

In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time O((log n)^c) using O(n^k) parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size.
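As a hedged illustration of the pattern behind many NC algorithms (this is our own sequential simulation in Python, not a real parallel machine), summing n numbers can be organized into about log2(n) synchronous rounds, each round combining pairs of values in parallel, so the depth is polylogarithmic while the total work remains polynomial.

    import math

    def parallel_style_sum(values):
        vals = list(values)
        rounds = 0
        while len(vals) > 1:
            # One synchronous round: "processor" i combines elements 2i and 2i+1.
            vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                    for i in range(0, len(vals), 2)]
            rounds += 1
        return vals[0], rounds

    total, rounds = parallel_style_sum(range(1000))
    print(total, rounds, math.ceil(math.log2(1000)))   # 499500 10 10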

In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n^2 time but not n time, where n is the input length.
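One common formulation of the deterministic version (paraphrased here, not quoted from the cited references) is: if g is time-constructible and f(n) log f(n) = o(g(n)), then DTIME(f(n)) is a proper subset of DTIME(g(n)). Taking f(n) = n and g(n) = n^2 gives the example above, and applying the theorem level by level to the bounds 2^n, 2^(2^n), … separates the classes whose union is ELEMENTARY.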

The space complexity of an algorithm or a data structure is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
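A minimal sketch of the input-space versus auxiliary-space distinction (the Python function names are ours): both functions below reverse a list of n items, so the input occupies O(n) space either way, but the first needs O(n) auxiliary space while the second needs only O(1).

    def reverse_copy(xs):
        # O(n) auxiliary space: builds a second list of size n.
        return xs[::-1]

    def reverse_in_place(xs):
        # O(1) auxiliary space: only two index variables beyond the input itself.
        i, j = 0, len(xs) - 1
        while i < j:
            xs[i], xs[j] = xs[j], xs[i]
            i, j = i + 1, j - 1
        return xs

    print(reverse_copy([1, 2, 3]), reverse_in_place([1, 2, 3]))   # [3, 2, 1] [3, 2, 1]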

Time complexity – Estimate of time taken for running an algorithm

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
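A small sketch of this operation-counting view (the Python function and counter names are ours): the nested loops below compare every pair of elements once, performing n(n-1)/2 elementary comparisons, so the running time is taken to be O(n^2) up to a constant factor.

    def count_pair_comparisons(xs):
        ops = 0
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                ops += 1          # one elementary comparison of xs[i] with xs[j]
                _ = xs[i] < xs[j]
        return ops

    n = 100
    print(count_pair_comparisons(list(range(n))), n * (n - 1) // 2)   # 4950 4950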

Complexity class – Set of problems in computational complexity theory

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

In computational complexity theory, DTIME is the computational resource of computation time for a deterministic Turing machine. It represents the amount of time that a "normal" physical computer would take to solve a certain computational problem using a certain algorithm. It is one of the most well-studied complexity resources, because it corresponds so closely to an important real-world resource.

In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

In computational complexity theory, the polynomial hierarchy is a hierarchy of complexity classes that generalize the classes NP and co-NP. Each class in the hierarchy is contained within PSPACE. The hierarchy can be defined using oracle machines or alternating Turing machines. It is a resource-bounded counterpart to the arithmetical hierarchy and analytical hierarchy from mathematical logic. The union of the classes in the hierarchy is denoted PH.

In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems.

In recursion theory, an elementary recursive function, also called an elementary function, or a Kalmár elementary function, is a restricted form of a primitive recursive function, allowing bounded applications of exponentiation.

In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP. The concept of an ATM was set forth by Chandra and Stockmeyer and independently by Kozen in 1976, with a joint journal publication in 1981.

TC^0 is a complexity class used in circuit complexity. It is the first class in the hierarchy of TC classes.

Circuit complexity – Model of computational complexity

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits C_1, C_2, ….

In computational complexity theory, the complexity class 2-EXPTIME (sometimes called 2-EXP) is the set of all decision problems solvable by a deterministic Turing machine in O(2^(2^(p(n)))) time, where p(n) is a polynomial function of n.

The Grzegorczyk hierarchy, named after the Polish logician Andrzej Grzegorczyk, is a hierarchy of functions used in computability theory. Every function in the Grzegorczyk hierarchy is a primitive recursive function, and every primitive recursive function appears in the hierarchy at some level. The hierarchy deals with the rate at which the values of the functions grow; intuitively, functions in lower levels of the hierarchy grow slower than functions in the higher levels.

In proof theory, ordinal analysis assigns ordinals to mathematical theories as a measure of their strength. If theories have the same proof-theoretic ordinal they are often equiconsistent, and if one theory has a larger proof-theoretic ordinal than another it can often prove the consistency of the second theory.

In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique in areas of machine learning where it is computationally infeasible to train over the entire dataset, requiring out-of-core algorithms. It is also used in situations where the algorithm must dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time, e.g., stock price prediction. Online learning algorithms may be prone to catastrophic interference, a problem that can be addressed by incremental learning approaches.
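A minimal sketch of this sequential-update pattern (the Python names and learning rate below are ours, not from any particular library): a one-dimensional predictor is revised after each new (x, y) pair instead of being refit on the whole dataset.

    def online_updates(stream, lr=0.1):
        w = 0.0                       # current predictor: y is approximated by w * x
        for x, y in stream:
            error = w * x - y
            w -= lr * error * x       # gradient step on the squared error of this single example
            yield w

    data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]
    for step, w in enumerate(online_updates(data), 1):
        print(step, round(w, 3))      # w drifts toward roughly 2.0 as examples arrive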

In proof theory, a branch of mathematical logic, elementary function arithmetic (EFA), also called elementary arithmetic and exponential function arithmetic, is the system of arithmetic with the usual elementary properties of 0, 1, +, ×, x^y, together with induction for formulas with bounded quantifiers.

References

  1. "ELEMENTARY", Complexity Zoo, retrieved 2024-11-03
  2. Friedman, Harvey (1999), "Some decision problems of enormous complexity" (PDF), 14th Annual IEEE Symposium on Logic in Computer Science, Trento, Italy, July 2-5, 1999, IEEE Computer Society, pp. 2–12, doi:10.1109/LICS.1999.782577, ISBN 0-7695-0158-3, MR 1942515
  3. Engelfriet, Joost (1991), "Iterated stack automata and complexity classes", Information and Computation, 95 (1): 21–75, doi:10.1016/0890-5401(91)90015-T, MR 1133778