Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP.
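In symbols, and using the common abbreviation ∃SO for existential second-order logic (a standard notation, not one fixed by the sources cited below), the theorem can be written as

\[ \mathrm{NP} = \exists\mathrm{SO}, \]

meaning that a property of finite structures is decidable in nondeterministic polynomial time if and only if it is definable by a sentence of the form \( \exists R_1 \cdots \exists R_k\,\varphi \), where the \( R_i \) are relation variables and \( \varphi \) is a first-order formula.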
It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper.[1] The arity required by the second-order formula was improved (in one direction) by James Lynch in 1981,[2] and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines.[citation needed]
In addition to Fagin's 1974 paper,[1] the 1999 textbook by Immerman provides a detailed proof of the theorem.[3] It is straightforward to show that every existential second-order formula can be recognized in NP, by nondeterministically choosing the value of all existentially quantified variables, so the main part of the proof is to show that every language in NP can be described by an existential second-order formula. To do so, one can use second-order existential quantifiers to arbitrarily choose a computation tableau. In more detail, for every timestep of an execution trace of a non-deterministic Turing machine, this tableau encodes the state of the Turing machine, its position on the tape, the contents of every tape cell, and which nondeterministic choice the machine makes at that step. A first-order formula can constrain this encoded information so that it describes a valid execution trace, one in which the tape contents, Turing machine state, and head position at each timestep follow from the previous timestep.
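As a concrete illustration of the easy direction (the example is a standard one, chosen here for exposition rather than taken from the cited proofs), graph 3-colorability is defined over graphs with edge relation E by the existential second-order sentence

\[
\exists R\,\exists G\,\exists B\;\Big[\;\forall x\,\big(R(x)\lor G(x)\lor B(x)\big)\;\land\;\forall x\,\forall y\,\Big(E(x,y)\rightarrow\neg\big(R(x)\land R(y)\big)\land\neg\big(G(x)\land G(y)\big)\land\neg\big(B(x)\land B(y)\big)\Big)\Big].
\]

A nondeterministic machine can guess the three unary relations R, G, and B and then check the first-order part in polynomial time, which is exactly the argument for membership in NP sketched above.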
A key lemma used in the proof is that it is possible to encode a linear order of length n^k (such as the linear orders of timesteps and tape contents at any timestep) as a 2k-ary relation R on a universe A of size n. One way to achieve this is to choose a linear ordering L of A and then define R to be the lexicographical ordering of k-tuples from A with respect to L.
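Spelled out (this explicit formula is one standard way to realize the construction, not a quotation from the proof), the relation can be defined by

\[
R(a_1,\dots,a_k,b_1,\dots,b_k)\iff \exists i\le k\;\big(a_1=b_1\land\cdots\land a_{i-1}=b_{i-1}\land a_i <_L b_i\big),
\]

which compares k-tuples lexicographically with respect to L and therefore yields a linear order on the n^k tuples of A^k using only a 2k-ary relation over the n-element universe.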
The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.
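The verifier-based definition can be written out as a formula (a standard formulation, using ad hoc names A for the language, V for the verifier, and p for the polynomial): a language A ⊆ {0,1}* is in NP if and only if there are a polynomial p and a deterministic polynomial-time machine V such that

\[
x \in A \iff \exists w\in\{0,1\}^{\le p(|x|)}\;\; V(x,w)=1 .
\]

Here w is the certificate, or proof, whose existence the nondeterministic machine can guess and V can verify in polynomial time.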
In theoretical computer science, a nondeterministic Turing machine (NTM) is a theoretical model of computation whose governing rules specify more than one possible action in some situations. That is, an NTM's next state is not completely determined by its current state and the tape symbol it sees, unlike a deterministic Turing machine.
In computational complexity theory, the complexity class EXPTIME (sometimes called EXP or DEXPTIME) is the set of all decision problems that are solvable by a deterministic Turing machine in exponential time, i.e., in O(2^{p(n)}) time, where p(n) is a polynomial function of n.
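Equivalently, in terms of deterministic time classes (standard notation):

\[
\mathrm{EXPTIME}=\bigcup_{k\in\mathbb{N}}\mathrm{DTIME}\big(2^{n^{k}}\big).
\]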
In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n^2 time but not n time.
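The deterministic version can be stated precisely as follows (a standard formulation): if f and g are time-constructible functions with f(n) log f(n) = o(g(n)), then

\[
\mathrm{DTIME}\big(f(n)\big)\subsetneq\mathrm{DTIME}\big(g(n)\big),
\]

which in particular separates DTIME(n) from DTIME(n^2), the example mentioned above.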
The space complexity of an algorithm or a data structure is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in Gurevich, Stockmeyer & Vishkin (1984). The first systematic work on parameterized complexity was done by Downey & Fellows (1999).
In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function f ∈ Ω(log n), NSPACE(f(n)) ⊆ DSPACE(f(n)^2).
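Taking f to be a polynomial gives the frequently cited corollary (a standard consequence, not specific to the sources discussed here)

\[
\mathrm{NPSPACE}=\mathrm{PSPACE},
\]

since the square of a polynomial space bound is still polynomial and the reverse inclusion is immediate.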
In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.
In computational complexity theory, the complexity class NEXPTIME is the set of decision problems that can be solved by a non-deterministic Turing machine using time 2^{n^{O(1)}}.
In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP. The concept of an ATM was set forth by Chandra and Stockmeyer and independently by Kozen in 1976, with a joint journal publication in 1981.
In computational complexity theory, NL is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space.
In computational complexity theory, L is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.
Descriptive complexity is a branch of computational complexity theory and of finite model theory that characterizes complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic. This connection between complexity and the logic of finite structures allows results to be transferred easily from one area to the other, facilitating new proof methods and providing additional evidence that the main complexity classes are somehow "natural" and not tied to the specific abstract machines used to define them.
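A few of the best-known correspondences, stated here in commonly used notation and with the caveat that some of them assume structures equipped with a built-in linear order, are

\[
\exists\mathrm{SO}=\mathrm{NP},\qquad \mathrm{SO}=\mathrm{PH},\qquad \mathrm{FO(LFP)}=\mathrm{P}\ \text{(on ordered structures)},
\]

where the first is Fagin's theorem and the last is the Immerman–Vardi theorem, FO(LFP) denoting first-order logic extended with a least-fixed-point operator.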
Finite model theory is a subarea of model theory. Model theory is the branch of logic which deals with the relation between a formal language (syntax) and its interpretations (semantics). Finite model theory is a restriction of model theory to interpretations on finite structures, which have a finite universe.
In computer science, a linear bounded automaton is a restricted form of Turing machine.
In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every variable is quantified, using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false. If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT.
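A small example (chosen here for illustration, not drawn from the sources above) is the quantified Boolean formula

\[
\forall x\,\exists y\,\big((x\lor y)\land(\lnot x\lor\lnot y)\big),
\]

which is true, since for either truth value of x one can choose y to be the opposite value; the formula therefore belongs to TQBF, whereas the variant with ∀y in place of ∃y is false and does not.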
In mathematical logic, the spectrum of a sentence is the set of natural numbers occurring as the size of a finite model in which a given sentence is true. By a result in descriptive complexity, a set of natural numbers is a spectrum if and only if it can be recognized in non-deterministic exponential time.
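A classical example (standard in the literature rather than taken from the lines above): if φ is a first-order sentence expressing the field axioms, then its spectrum is

\[
\mathrm{spec}(\varphi)=\{\,p^{k}: p\ \text{prime},\ k\ge 1\,\},
\]

the set of prime powers, since a finite field of size q exists exactly when q is a prime power.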
Descriptive Complexity is a book in mathematical logic and computational complexity theory by Neil Immerman. It concerns descriptive complexity theory, an area in which the expressibility of mathematical properties using different types of logic is shown to be equivalent to their computability in different types of resource-bounded models of computation. It was published in 1999 by Springer-Verlag in their book series Graduate Texts in Computer Science.