A formula of the predicate calculus is in prenex[1] normal form (PNF) if it is written as a string of quantifiers and bound variables, called the prefix, followed by a quantifier-free part, called the matrix.[2] Together with the normal forms in propositional logic (e.g. disjunctive normal form or conjunctive normal form), it provides a canonical normal form useful in automated theorem proving.
Every formula in classical logic is logically equivalent to a formula in prenex normal form. For example, if $\varphi(y)$, $\psi(z)$, and $\rho(x)$ are quantifier-free formulas with the free variables shown, then

$\forall x \, \exists y \, \forall z \, (\varphi(y) \lor (\psi(z) \rightarrow \rho(x)))$

is in prenex normal form with matrix $\varphi(y) \lor (\psi(z) \rightarrow \rho(x))$, while

$\forall x \, ((\exists y \, \varphi(y)) \lor ((\exists z \, \psi(z)) \rightarrow \rho(x)))$

is logically equivalent but not in prenex normal form.
Every first-order formula is logically equivalent (in classical logic) to some formula in prenex normal form.[3] There are several conversion rules that can be recursively applied to convert a formula to prenex normal form. The rules depend on which logical connectives appear in the formula.
The rules for conjunction and disjunction say that

$(\forall x \, \varphi) \land \psi$ is equivalent to $\forall x \, (\varphi \land \psi)$,
$(\forall x \, \varphi) \lor \psi$ is equivalent to $\forall x \, (\varphi \lor \psi)$,

and

$(\exists x \, \varphi) \land \psi$ is equivalent to $\exists x \, (\varphi \land \psi)$,
$(\exists x \, \varphi) \lor \psi$ is equivalent to $\exists x \, (\varphi \lor \psi)$.

The equivalences are valid when $x$ does not appear as a free variable of $\psi$; if $x$ does appear free in $\psi$, one can rename the bound $x$ in $\exists x \, \varphi$ and obtain the equivalent $\exists x' \, \varphi[x/x']$ (and similarly for $\forall x \, \varphi$).
For example, in the language of rings,

$(\exists x \, (x^2 = 1)) \land (0 = y)$ is equivalent to $\exists x \, (x^2 = 1 \land 0 = y)$,

but

$(\exists x \, (x^2 = 1)) \land (0 = x)$ is not equivalent to $\exists x \, (x^2 = 1 \land 0 = x)$,

because the formula on the left is true in any ring when the free variable x is equal to 0, while the formula on the right has no free variables and is false in any nontrivial ring. So $(\exists x \, (x^2 = 1)) \land (0 = x)$ will first be rewritten as $(\exists x' \, (x'^2 = 1)) \land (0 = x)$ and then put in prenex normal form $\exists x' \, (x'^2 = 1 \land 0 = x)$.
The rules for negation say that

$\lnot \exists x \, \varphi$ is equivalent to $\forall x \, \lnot \varphi$

and

$\lnot \forall x \, \varphi$ is equivalent to $\exists x \, \lnot \varphi$.
There are four rules for implication: two that remove quantifiers from the antecedent and two that remove quantifiers from the consequent. These rules can be derived by rewriting the implication $\varphi \rightarrow \psi$ as $\lnot \varphi \lor \psi$ and applying the rules for disjunction and negation above. As with the rules for disjunction, these rules require that the variable quantified in one subformula does not appear free in the other subformula.
The rules for removing quantifiers from the antecedent are (note the change of quantifiers):

$(\forall x \, \varphi) \rightarrow \psi$ is equivalent to $\exists x \, (\varphi \rightarrow \psi)$,
$(\exists x \, \varphi) \rightarrow \psi$ is equivalent to $\forall x \, (\varphi \rightarrow \psi)$.

The rules for removing quantifiers from the consequent are:

$\varphi \rightarrow (\exists x \, \psi)$ is equivalent to $\exists x \, (\varphi \rightarrow \psi)$,
$\varphi \rightarrow (\forall x \, \psi)$ is equivalent to $\forall x \, (\varphi \rightarrow \psi)$.
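For instance, the first antecedent rule can be obtained from the rules above by a chain of classical equivalences (assuming, as stated, that $x$ does not occur free in $\psi$):

\begin{align*}
(\forall x \, \varphi) \rightarrow \psi
&\equiv \lnot (\forall x \, \varphi) \lor \psi && \text{(implication as negation and disjunction)} \\
&\equiv (\exists x \, \lnot \varphi) \lor \psi && \text{(rule for negation)} \\
&\equiv \exists x \, (\lnot \varphi \lor \psi) && \text{(rule for disjunction)} \\
&\equiv \exists x \, (\varphi \rightarrow \psi). && \text{(implication restored)}
\end{align*}

This is why the quantifier changes when it is moved out of an antecedent.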
For example, when the range of quantification is the non-negative natural numbers (viz. $n \in \{0, 1, 2, \ldots\}$), the statement

$[\forall n \, (x < n)] \rightarrow (x < 0)$

is logically equivalent to the statement

$\exists n \, [(x < n) \rightarrow (x < 0)]$.
The former statement says that if x is less than any natural number, then x is less than zero. The latter statement says that there exists some natural number n such that if x is less than n, then x is less than zero. Both statements are true. The former statement is true because if x is less than any natural number, it must be less than the smallest natural number (zero). The latter statement is true because n=0 makes the implication a tautology.
Note that the placement of brackets determines the scope of the quantification, which is very important for the meaning of the formula. Consider the following two statements:

$\forall n \, [(x < n) \rightarrow (x < 0)]$

and its logically equivalent statement

$[\exists n \, (x < n)] \rightarrow (x < 0)$.
The former statement says that for any natural number n, if x is less than n then x is less than zero. The latter statement says that if there exists some natural number n such that x is less than n, then x is less than zero. Both statements are false. The former statement doesn't hold for x = 1, because taking n = 2 gives x < n but not x < 0. The latter statement doesn't hold for x = 1, because the natural number n = 2 satisfies x < n, but x = 1 is not less than zero.
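The truth values just described can be checked mechanically. The following is a minimal sketch in Python (an illustration, not part of the article's sources); it approximates the infinite range of $n$ by a finite bound, which is faithful here because the relevant witnesses and counterexamples are small:

```python
# Brute-force check of the two pairs of statements above (illustrative sketch).
# The quantifier over n really ranges over all naturals; bounding it by N is an
# approximation that happens to be faithful here, since the relevant witnesses
# and counterexamples (such as n = 0 and n = 2) are small.
N = 10

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for x in range(N - 1):  # keep x below the bound so that some n > x is available
    # [forall n (x < n)] -> (x < 0)   vs   exists n [(x < n) -> (x < 0)]
    a1 = implies(all(x < n for n in range(N)), x < 0)
    a2 = any(implies(x < n, x < 0) for n in range(N))
    # forall n [(x < n) -> (x < 0)]   vs   [exists n (x < n)] -> (x < 0)
    b1 = all(implies(x < n, x < 0) for n in range(N))
    b2 = implies(any(x < n for n in range(N)), x < 0)
    assert a1 == a2 and b1 == b2   # each pair agrees in truth value
    assert a1 and not b1           # first pair true, second pair false
```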
Suppose that $\varphi$, $\psi$, and $\rho$ are quantifier-free formulas and no two of these formulas share any free variable. Consider the formula

$(\varphi \lor \exists x \, \psi) \rightarrow \forall z \, \rho$.

By recursively applying the rules starting at the innermost subformulas, the following sequence of logically equivalent formulas can be obtained:

$(\varphi \lor \exists x \, \psi) \rightarrow \forall z \, \rho$
$(\exists x \, (\varphi \lor \psi)) \rightarrow \forall z \, \rho$
$\forall x \, ((\varphi \lor \psi) \rightarrow \forall z \, \rho)$
$\forall x \, (\forall z \, ((\varphi \lor \psi) \rightarrow \rho))$
$\forall x \, \forall z \, ((\varphi \lor \psi) \rightarrow \rho)$.
This is not the only prenex form equivalent to the original formula. For example, by dealing with the consequent before the antecedent in the example above, the prenex form

$\forall z \, \forall x \, ((\varphi \lor \psi) \rightarrow \rho)$

can be obtained:

$(\varphi \lor \exists x \, \psi) \rightarrow \forall z \, \rho$
$\forall z \, ((\varphi \lor \exists x \, \psi) \rightarrow \rho)$
$\forall z \, ((\exists x \, (\varphi \lor \psi)) \rightarrow \rho)$
$\forall z \, (\forall x \, ((\varphi \lor \psi) \rightarrow \rho))$
$\forall z \, \forall x \, ((\varphi \lor \psi) \rightarrow \rho)$.
The ordering of two universal quantifiers with the same scope does not change the meaning (truth value) of the statement.
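The recursive procedure illustrated above can be summarized in a short program. The following is a minimal sketch (not from any standard library; the tuple-based formula representation, the names such as to_prenex, and the fresh-variable scheme x1, x2, ... are assumptions made for the example). It applies the classical rules bottom-up, rewriting implications via negation and disjunction, and always renames bound variables to fresh ones, which is a blunt but safe way to satisfy the side conditions on free variables discussed above:

```python
from itertools import count

# Formulas as nested tuples (an illustrative representation, not a standard one):
#   ("atom", name, args)           atomic formula
#   ("not", f)                     negation
#   ("and"|"or"|"implies", f, g)   binary connectives
#   ("forall"|"exists", x, f)      quantifiers
QUANTS = {"forall", "exists"}
FLIP = {"forall": "exists", "exists": "forall"}
_fresh = (f"x{i}" for i in count(1))  # fresh names used when renaming bound variables

def substitute(f, old, new):
    """Rename free occurrences of the variable `old` to `new`."""
    op = f[0]
    if op == "atom":
        return ("atom", f[1], tuple(new if t == old else t for t in f[2]))
    if op == "not":
        return ("not", substitute(f[1], old, new))
    if op in ("and", "or", "implies"):
        return (op, substitute(f[1], old, new), substitute(f[2], old, new))
    # quantifier: if it re-binds `old`, the occurrences below it are not free
    return f if f[1] == old else (op, f[1], substitute(f[2], old, new))

def _negate(f):
    """Negate a prenex formula, flipping each quantifier in its prefix."""
    if f[0] in QUANTS:
        return (FLIP[f[0]], f[1], _negate(f[2]))
    return ("not", f)

def _combine(op, left, right):
    """Pull the quantifier prefixes of two prenex formulas outward,
    renaming each bound variable to a fresh one to avoid capture."""
    if left[0] in QUANTS:
        v = next(_fresh)
        return (left[0], v, _combine(op, substitute(left[2], left[1], v), right))
    if right[0] in QUANTS:
        v = next(_fresh)
        return (right[0], v, _combine(op, left, substitute(right[2], right[1], v)))
    return (op, left, right)

def to_prenex(f):
    """Return a prenex formula classically equivalent to `f`."""
    op = f[0]
    if op == "atom":
        return f
    if op == "not":
        return _negate(to_prenex(f[1]))
    if op == "implies":  # rewrite A -> B as (not A) or B, as in the text above
        return to_prenex(("or", ("not", f[1]), f[2]))
    if op in ("and", "or"):
        return _combine(op, to_prenex(f[1]), to_prenex(f[2]))
    return (op, f[1], to_prenex(f[2]))  # quantifier: recurse into the body

# The example from the text: (phi or exists x psi) -> forall z rho
phi, psi, rho = ("atom", "phi", ()), ("atom", "psi", ()), ("atom", "rho", ())
print(to_prenex(("implies", ("or", phi, ("exists", "x", psi)), ("forall", "z", rho))))
# Output has the shape: forall x2 forall x3 ((not (phi or psi)) or rho),
# i.e. forall x forall z ((phi or psi) -> rho) with the implication expressed
# via negation and disjunction and the bound variables renamed.
```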
The rules for converting a formula to prenex form make heavy use of classical logic. In intuitionistic logic, it is not true that every formula is logically equivalent to a prenex formula. The negation connective is one obstacle, but not the only one. The implication operator is also treated differently in intuitionistic logic than in classical logic; in intuitionistic logic, it is not definable using disjunction and negation.
The BHK interpretation illustrates why some formulas have no intuitionistically-equivalent prenex form. In this interpretation, a proof of

$\forall x \, (\varphi(x) \rightarrow \exists y \, \psi(y)) \qquad (1)$

is a function which, given a concrete x and a proof of $\varphi(x)$, produces a concrete y and a proof of $\psi(y)$. In this case it is allowable for the value of y to be computed from the given value of x. A proof of

$\exists y \, \forall x \, (\varphi(x) \rightarrow \psi(y)), \qquad (2)$

on the other hand, produces a single concrete value of y and a function that converts, for any x, a proof of $\varphi(x)$ into a proof of $\psi(y)$. If each x satisfying $\varphi(x)$ can be used to construct a y satisfying $\psi(y)$, but no such y can be constructed without knowledge of such an x, then formula (1) will not be equivalent to formula (2).
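In programming terms, the difference can be suggested with Python type annotations. This is only a rough sketch; the names Proof1, Proof2, ProofPhi, and ProofPsi are placeholders invented for the illustration:

```python
from typing import Callable, Tuple, TypeVar

X = TypeVar("X")                 # the domain of x
Y = TypeVar("Y")                 # the domain of y
ProofPhi = TypeVar("ProofPhi")   # stands for "a proof of phi(x)"
ProofPsi = TypeVar("ProofPsi")   # stands for "a proof of psi(y)"

# (1) forall x (phi(x) -> exists y psi(y)):
# given x and a proof of phi(x), return some y together with a proof of psi(y);
# the returned y may depend on x.
Proof1 = Callable[[X, ProofPhi], Tuple[Y, ProofPsi]]

# (2) exists y forall x (phi(x) -> psi(y)):
# a single y, fixed once and for all, paired with a function turning
# (for any x) a proof of phi(x) into a proof of psi(y).
Proof2 = Tuple[Y, Callable[[X, ProofPhi], ProofPsi]]
```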
The rules for converting a formula to prenex form that do fail in intuitionistic logic are:

(1) $\forall x \, (\varphi \lor \psi)$ implies $(\forall x \, \varphi) \lor \psi$,
(2) $\forall x \, (\varphi \lor \psi)$ implies $\varphi \lor (\forall x \, \psi)$,
(3) $(\forall x \, \varphi) \rightarrow \psi$ implies $\exists x \, (\varphi \rightarrow \psi)$,
(4) $\varphi \rightarrow (\exists x \, \psi)$ implies $\exists x \, (\varphi \rightarrow \psi)$,
(5) $\lnot \forall x \, \varphi$ implies $\exists x \, \lnot \varphi$,

(x does not appear as a free variable of $\psi$ in (1) and (3); x does not appear as a free variable of $\varphi$ in (2) and (4)).
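Rule (5), for instance, fails for exactly the reason suggested by the BHK interpretation: a proof of the conclusion must exhibit a particular witness x, while a proof of the premise need not provide one. Schematically,

$\lnot \forall x \, \varphi \;\not\Rightarrow\; \exists x \, \lnot \varphi \quad \text{(intuitionistically)},$

although the converse implication remains intuitionistically valid.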
Some proof calculi will only deal with a theory whose formulae are written in prenex normal form. The concept is essential for developing the arithmetical hierarchy and the analytical hierarchy.
Gödel's proof of his completeness theorem for first-order logic presupposes that all formulae have been recast in prenex normal form.
Tarski's axioms for geometry form a logical system whose sentences can all be written in universal–existential form, a special case of the prenex normal form in which every universal quantifier precedes every existential quantifier, so that all sentences can be rewritten in the form $\forall u_1 \cdots \forall u_n \, \exists v_1 \cdots \exists v_m \, \varphi$, where $\varphi$ is a formula that does not contain any quantifier. This fact allowed Tarski to prove that Euclidean geometry is decidable.