In mathematical logic, a witness is a specific value t to be substituted for the variable x of an existential statement of the form ∃x φ(x) such that φ(t) is true.
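For instance, for the statement ∃x (x > 100 ∧ x is prime), the number 101 is a witness, since 101 is a prime greater than 100. The following Python sketch is purely illustrative (the predicate, search bound, and function names are not from any cited source); it makes the idea concrete by searching for such a witness:

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for a small illustration."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def find_witness(phi, candidates):
    """Return some t with phi(t) true, i.e. a witness for 'there exists x such that phi(x)'."""
    for t in candidates:
        if phi(t):
            return t
    return None  # no witness among the candidates searched

# A witness for "there exists x such that x > 100 and x is prime": 101.
print(find_witness(lambda x: x > 100 and is_prime(x), range(1000)))
```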
For example, a theory T of arithmetic is said to be inconsistent if there exists a proof in T of the formula "0 = 1". The formula I(T), which says that T is inconsistent, is thus an existential formula. A witness for the inconsistency of T is a particular proof of "0 = 1" in T.
Boolos, Burgess, and Jeffrey (2002:81) define the notion of a witness with the following example, in which S is an n-place relation on natural numbers, R is an (n+1)-place recursive relation, and ↔ indicates logical equivalence (if and only if):

S(x₁, ..., xₙ) ↔ ∃y R(x₁, ..., xₙ, y)

Here a y such that R(x₁, ..., xₙ, y) holds is a witness to the relation S holding of x₁, ..., xₙ.
In this particular example, the authors define S to be (positively) recursively semidecidable, or simply semirecursive.
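The shape of this definition can be illustrated computationally. The sketch below is a hedged illustration, not taken from Boolos, Burgess, and Jeffrey: R is an arbitrary decidable relation chosen for the example, and the procedure semidecides S(x) by searching for a witness y, halting exactly when one exists.

```python
def R(x, y):
    """An illustrative recursive (decidable) 2-place relation:
    'y is a nontrivial divisor of x'."""
    return 1 < y < x and x % y == 0

def semidecide_S(x):
    """Semidecide S(x) <-> exists y. R(x, y) by searching for a witness y.
    Halts (returning a witness) iff S(x) holds; otherwise it loops forever."""
    y = 0
    while True:
        if R(x, y):
            return y   # a witness to S(x)
        y += 1

# S(91) holds, since 91 = 7 * 13, so the search halts with witness 7.
print(semidecide_S(91))
```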
In predicate calculus, a Henkin witness for a sentence ∃x φ(x) in a theory T is a term c such that T proves φ(c) (Hinman 2005:196). The use of such witnesses is a key technique in the proof of Gödel's completeness theorem presented by Leon Henkin in 1949.
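In outline (a standard textbook rendering of the idea, not a quotation from Hinman): Henkin's construction expands the language with fresh constants and adds, for each formula φ(x), an axiom guaranteeing that the designated constant c_φ serves as a witness whenever any witness exists:

```latex
\exists x\, \varphi(x) \rightarrow \varphi(c_\varphi)
```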
The notion of witness leads to the more general idea of game semantics. In the case of the sentence ∃x φ(x), the winning strategy for the verifier is to pick a witness for φ. For more complex formulas involving universal quantifiers, the existence of a winning strategy for the verifier depends on the existence of appropriate Skolem functions. For example, if S denotes ∀x ∃y φ(x, y), then an equisatisfiable statement for S is ∃f ∀x φ(x, f(x)). The Skolem function f (if it exists) actually codifies a winning strategy for the verifier of S by returning a witness for the existential sub-formula for every choice of x the falsifier might make.
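A minimal Python sketch of this reading, under illustrative assumptions not found in the text above (φ(x, y) is taken to be y > x over the integers, and f(x) = x + 1 is one possible Skolem function): the verifier answers each move x of the falsifier with the witness f(x).

```python
def phi(x, y):
    """Illustrative matrix of S = 'for all x there exists y with phi(x, y)': here, y > x."""
    return y > x

def skolem_f(x):
    """A Skolem function for the existential quantifier: a witness y for each x."""
    return x + 1

# The verifier's winning strategy: whatever x the falsifier plays,
# respond with skolem_f(x); phi(x, skolem_f(x)) is always true.
for falsifier_move in [-5, 0, 7, 123]:
    assert phi(falsifier_move, skolem_f(falsifier_move))
print("verifier wins on all sampled moves")
```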
First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic.
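In symbols, for a first-order theory T and a sentence φ in its language, the correspondence is often summarized as the equivalence of semantic consequence and derivability:

```latex
T \models \varphi \iff T \vdash \varphi
```

The right-to-left direction is the soundness theorem; completeness proper is the left-to-right direction.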
The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem.
In mathematical logic, model theory is the study of the relationship between formal theories and their models. The aspects investigated include the number and size of models of a theory, the relationship of different models to each other, and their interaction with the formal language itself. In particular, model theorists also investigate the sets that can be defined in a model of a theory, and the relationship of such definable sets to each other. As a separate discipline, model theory goes back to Alfred Tarski, who first used the term "Theory of Models" in publication in 1954. Since the 1970s, the subject has been shaped decisively by Saharon Shelah's stability theory.
In mathematical logic, the Peano axioms, also known as the Dedekind–Peano axioms or the Peano postulates, are axioms for the natural numbers presented by the 19th-century Italian mathematician Giuseppe Peano. These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of whether number theory is consistent and complete.
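For reference, the core axioms governing zero and the successor function S, together with the induction principle, can be displayed as follows (a standard second-order rendering; first-order Peano arithmetic replaces the last axiom by an induction schema over formulas):

```latex
0 \in \mathbb{N}, \qquad
\forall n\, \big( S(n) \in \mathbb{N} \big), \qquad
\forall n\, \big( S(n) \neq 0 \big), \qquad
\forall m\, \forall n\, \big( S(m) = S(n) \rightarrow m = n \big),
```

```latex
\forall X \subseteq \mathbb{N}\; \Big( \big( 0 \in X \land \forall n\,( n \in X \rightarrow S(n) \in X ) \big) \rightarrow X = \mathbb{N} \Big)
```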
In classical deductive logic, a consistent theory is one that does not lead to a logical contradiction. The lack of contradiction can be defined in either semantic or syntactic terms. The semantic definition states that a theory is consistent if it has a model, i.e., there exists an interpretation under which all formulas in the theory are true. This is the sense used in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead. The syntactic definition states that a theory T is consistent if there is no formula φ such that both φ and its negation ¬φ are elements of the set of consequences of T. Let A be a set of closed sentences and ⟨A⟩ the set of closed sentences provable from A under some formal deductive system. The set of axioms A is consistent when there is no formula φ such that φ ∈ ⟨A⟩ and ¬φ ∈ ⟨A⟩.
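As a toy illustration of the semantic definition (entirely illustrative and restricted to a finite propositional setting, where consistency is decidable by brute force): a finite set of propositional formulas is consistent in the semantic sense exactly when some truth assignment, i.e. a model, makes all of them true.

```python
from itertools import product

def has_model(formulas, variables):
    """Semantic consistency for a finite set of propositional formulas:
    return a satisfying assignment (a 'model') if one exists, else None."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(f(assignment) for f in formulas):
            return assignment
    return None

# {p -> q, p} is consistent: the assignment p = True, q = True is a model.
formulas = [lambda a: (not a["p"]) or a["q"],  # p -> q
            lambda a: a["p"]]
print(has_model(formulas, ["p", "q"]))

# {p, not p} has no model, i.e. it is inconsistent.
print(has_model([lambda a: a["p"], lambda a: not a["p"]], ["p"]))
```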
In set theory, Zermelo–Fraenkel set theory, named after mathematicians Ernst Zermelo and Abraham Fraenkel, is an axiomatic system that was proposed in the early twentieth century in order to formulate a theory of sets free of paradoxes such as Russell's paradox. Today, Zermelo–Fraenkel set theory, with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. Zermelo–Fraenkel set theory with the axiom of choice included is abbreviated ZFC, where C stands for "choice", and ZF refers to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded.
Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.
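Concretely, the two classical schemas that intuitionistic systems do not adopt are (in standard formulations):

```latex
\varphi \lor \lnot\varphi \quad \text{(law of the excluded middle)}
\qquad\qquad
\lnot\lnot\varphi \rightarrow \varphi \quad \text{(double negation elimination)}
```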
In logic and mathematics, second-order logic is an extension of first-order logic, which itself is an extension of propositional logic. Second-order logic is in turn extended by higher-order logic and type theory.
In mathematical logic, the diagonal lemma establishes the existence of self-referential sentences in certain formal theories of the natural numbers—specifically those theories that are strong enough to represent all computable functions. The sentences whose existence is secured by the diagonal lemma can then, in turn, be used to prove fundamental limitative results such as Gödel's incompleteness theorems and Tarski's undefinability theorem.
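In a standard formulation (with ⌜ψ⌝ denoting the Gödel number of ψ): for every formula φ(x) with one free variable there is a sentence ψ such that the theory proves that ψ is equivalent to φ applied to ψ's own code,

```latex
T \vdash \psi \leftrightarrow \varphi\!\left(\ulcorner \psi \urcorner\right).
```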
In mathematical logic, a formula of first-order logic is in Skolem normal form if it is in prenex normal form with only universal first-order quantifiers.
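For example (a standard illustration), Skolemizing ∀x ∃y φ(x, y) replaces the existentially quantified y with a new function symbol f applied to the universally quantified variables it may depend on, producing an equisatisfiable formula in Skolem normal form:

```latex
\forall x\, \exists y\, \varphi(x, y)
\quad\rightsquigarrow\quad
\forall x\, \varphi(x, f(x))
```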
In predicate logic, generalization is a valid inference rule. It states that if ⊢ P(x) has been derived, then ⊢ ∀x P(x) can be derived.
In mathematical logic, the disjunction and existence properties are the "hallmarks" of constructive theories such as Heyting arithmetic and constructive set theories (Rathjen 2005).
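Stated concretely (standard formulations): a theory T has the disjunction property if a provable disjunction always has a provable disjunct, and the existence property if every provable existential sentence has an explicitly provable witness:

```latex
T \vdash A \lor B \;\Longrightarrow\; T \vdash A \ \text{ or } \ T \vdash B,
\qquad
T \vdash \exists x\, \varphi(x) \;\Longrightarrow\; T \vdash \varphi(t) \ \text{ for some closed term } t.
```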
Independence-friendly logic (IF logic) is an extension of classical first-order logic (FOL) by means of slashed quantifiers of the form (∃x/V) and (∀x/V), where V is a finite set of variables. The intended reading of (∃x/V) is "there is an x which is functionally independent from the variables in V". IF logic allows one to express more general patterns of dependence between variables than those which are implicit in first-order logic. This greater level of generality leads to an actual increase in expressive power; the set of IF sentences can characterize the same classes of structures as existential second-order logic.
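As a small illustration of the intended semantics (a toy example, not a demonstration of the extra expressive power): in the IF sentence below, the value of y must be chosen independently of x, so the sentence is true exactly in structures whose domain has a single element.

```latex
\forall x\, (\exists y / \{x\})\; x = y
```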
In mathematical logic, Heyting arithmetic is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it.
In mathematical logic, second-order arithmetic is a collection of axiomatic systems that formalize the natural numbers and their subsets. It is an alternative to axiomatic set theory as a foundation for much, but not all, of mathematics.
In logic a branching quantifier, also called a Henkin quantifier, finite partially ordered quantifier or even nonlinear quantifier, is a partial ordering ⟨Qx₁, ..., Qxₙ⟩ of quantifiers for Q ∈ {∀, ∃}.
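The simplest example is the Henkin quantifier, in which y₁ may depend only on x₁ and y₂ only on x₂; its meaning is expressible with Skolem functions in existential second-order logic (a standard rendering):

```latex
\begin{pmatrix} \forall x_1\, \exists y_1 \\ \forall x_2\, \exists y_2 \end{pmatrix}
\varphi(x_1, x_2, y_1, y_2)
\;\equiv\;
\exists f\, \exists g\, \forall x_1\, \forall x_2\, \varphi\big(x_1, x_2, f(x_1), g(x_2)\big)
```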
In mathematical logic, more specifically in the proof theory of first-order theories, extensions by definitions formalize the introduction of new symbols by means of a definition. For example, it is common in naive set theory to introduce a symbol ∅ for the set that has no member. In the formal setting of first-order theories, this can be done by adding to the theory a new constant ∅ and the new axiom ∀x (x ∉ ∅), meaning "for all x, x is not a member of ∅". It can then be proved that doing so adds essentially nothing to the old theory, as should be expected from a definition. More precisely, the new theory is a conservative extension of the old one.
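The general pattern for a defined constant (a standard account, sketched here rather than quoted from a specific source): one first proves in the old theory T that the defining condition has a unique solution, and only then adds a new constant c together with its defining axiom; in the empty-set example, uniqueness follows from the axiom of extensionality.

```latex
T \vdash \exists! y\, \varphi(y)
\quad\text{justifies extending } T \text{ with a new constant } c \text{ and the axiom } \varphi(c).
```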
In constructive mathematics, Church's thesis is the principle stating that all total functions are computable functions.
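One common formal rendering, with {e} denoting the e-th partial computable function in a standard enumeration (details of the formalization vary between presentations, so this should be read as a sketch):

```latex
\forall f\, \Big( f : \mathbb{N} \to \mathbb{N} \;\rightarrow\; \exists e\, \forall n\; \{e\}(n) = f(n) \Big)
```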
In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every variable is quantified, using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false. If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT.
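A brief Python sketch of deciding membership in TQBF by brute force (the representation is illustrative: a prefix of quantifier–variable pairs and a matrix given as a Boolean function of an assignment); the recursion mirrors the quantifier structure directly and takes time exponential in the number of variables.

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Evaluate a fully quantified Boolean formula.
    prefix: list of ('forall' | 'exists', variable_name) pairs.
    matrix: function mapping a dict of variable -> bool to bool."""
    if assignment is None:
        assignment = {}
    if not prefix:
        return matrix(assignment)
    quantifier, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, var: value})
                for value in (False, True))
    return any(branches) if quantifier == 'exists' else all(branches)

# forall x exists y (x XOR y) is true: y can always be chosen to differ from x.
print(eval_qbf([('forall', 'x'), ('exists', 'y')],
               lambda a: a['x'] != a['y']))          # True -> in TQBF

# exists x forall y (x AND y) is false: no single x works for both values of y.
print(eval_qbf([('exists', 'x'), ('forall', 'y')],
               lambda a: a['x'] and a['y']))         # False -> not in TQBF
```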