Robinson's joint consistency theorem is an important theorem of mathematical logic. It is related to Craig interpolation and Beth definability.
The classical formulation of Robinson's joint consistency theorem is as follows:
Let T₁ and T₂ be first-order theories. If T₁ and T₂ are consistent and the intersection T₁ ∩ T₂ is complete (in the common language of T₁ and T₂), then the union T₁ ∪ T₂ is consistent. A theory T is called complete if it decides every formula, meaning that for every sentence φ the theory contains the sentence or its negation but not both (that is, either T ⊢ φ or T ⊢ ¬φ).
Since the completeness assumption is quite hard to fulfill, there is a variant of the theorem:
Let T₁ and T₂ be first-order theories. If T₁ and T₂ are consistent and if there is no formula φ in the common language of T₁ and T₂ such that T₁ ⊢ φ and T₂ ⊢ ¬φ, then the union T₁ ∪ T₂ is consistent.
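This variant can be derived from Craig's interpolation theorem (discussed below); a proof sketch of the contrapositive, under the stated assumptions:

```latex
% Sketch: suppose T_1 \cup T_2 is inconsistent. By compactness there are
% conjunctions \alpha of sentences from T_1 and \beta of sentences from T_2 with
\[ \vdash \alpha \rightarrow \neg\beta . \]
% Craig interpolation yields a sentence \rho in the common language with
\[ \vdash \alpha \rightarrow \rho \qquad\text{and}\qquad \vdash \rho \rightarrow \neg\beta , \]
% so T_1 proves \rho and T_2 proves \neg\rho, contradicting the hypothesis.
```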
First-order logic, also called predicate logic, predicate calculus, or quantificational logic, is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions of the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
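For instance, the quoted expression can be formalized with an existential quantifier and two predicate symbols (the names Socrates and Man are illustrative choices):

```latex
\[ \exists x \,(\mathrm{Socrates}(x) \land \mathrm{Man}(x)) \]
```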
Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic.
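Combined with the soundness theorem, it yields, for any first-order theory T and any sentence φ of its language, the coincidence of semantic consequence (⊨) and provability (⊢):

```latex
\[ T \models \varphi \quad\Longleftrightarrow\quad T \vdash \varphi \]
```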
The propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation. Some sources include other connectives as well.
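As an illustration, a minimal Python sketch that tabulates the truth functions of the five connectives just named (the function and column names are illustrative choices):

```python
from itertools import product

# Truth functions for the five basic connectives.
def conj(p, q):  return p and q        # conjunction
def disj(p, q):  return p or q         # disjunction
def impl(p, q):  return (not p) or q   # material implication
def bicon(p, q): return p == q         # biconditional

print(f"{'p':6}{'q':6}{'p∧q':6}{'p∨q':6}{'p→q':6}{'p↔q':6}{'¬p':6}")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:6}{q!s:6}{conj(p, q)!s:6}{disj(p, q)!s:6}"
          f"{impl(p, q)!s:6}{bicon(p, q)!s:6}{(not p)!s:6}")
```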
In classical deductive logic, a consistent theory is one that does not lead to a logical contradiction. A theory T is consistent if there is no formula φ such that both φ and its negation ¬φ are elements of the set of consequences of T. Let A be a set of closed sentences (informally, "axioms") and ⟨A⟩ the set of closed sentences provable from A under some formal deductive system. The set of axioms A is consistent when there is no formula φ such that φ ∈ ⟨A⟩ and ¬φ ∈ ⟨A⟩. A trivial theory (one that proves every sentence) is clearly inconsistent. Conversely, in an explosive formal system every inconsistent theory is trivial. Consistency of a theory is a syntactic notion, whose semantic counterpart is satisfiability. A theory is satisfiable if it has a model, i.e., there exists an interpretation under which all axioms in the theory are true. This is what consistent meant in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead.
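For finite propositional theories the semantic notion is decidable by exhausting the valuations. A minimal Python sketch, encoding formulas as Boolean functions of a valuation (an illustrative encoding, not standard notation):

```python
from itertools import product

# Brute-force satisfiability check for a finite propositional theory:
# the theory has a model iff some valuation of its atoms makes every
# axiom true.
def find_model(axioms, atoms):
    for values in product([True, False], repeat=len(atoms)):
        valuation = dict(zip(atoms, values))
        if all(axiom(valuation) for axiom in axioms):
            return valuation        # a model: the theory is satisfiable
    return None                     # no model: the theory is unsatisfiable

consistent   = [lambda v: v["p"], lambda v: (not v["p"]) or v["q"]]  # {p, p -> q}
inconsistent = [lambda v: v["p"], lambda v: not v["p"]]              # {p, not p}

print(find_model(consistent, ["p", "q"]))   # {'p': True, 'q': True}
print(find_model(inconsistent, ["p"]))      # None
```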
In traditional logic, a contradiction occurs when a proposition conflicts either with itself or with established fact. It is often used as a tool to detect disingenuous beliefs and bias. Illustrating a general tendency in applied logic, Aristotle's law of noncontradiction states that "It is impossible that the same thing can at the same time both belong and not belong to the same object and in the same respect."
In the mathematical discipline of set theory, forcing is a technique for proving consistency and independence results. Intuitively, forcing can be thought of as a technique to expand the set-theoretical universe V to a larger universe V[G] by introducing a new "generic" object G.
In mathematical logic, the compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This theorem is an important tool in model theory, as it provides a useful method for constructing models of any set of sentences that is finitely consistent.
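A standard application sketch: let Th(ℕ) be the set of all first-order sentences true in the natural numbers, and add a fresh constant symbol c together with the axioms below. Every finite subset is satisfied in ℕ by interpreting c as a sufficiently large number, so by compactness the whole set has a model, and any such model must contain a nonstandard element exceeding every standard natural number.

```latex
\[ \mathrm{Th}(\mathbb{N}) \cup \{\, c > 0,\; c > 1,\; c > 2,\; \ldots \,\} \]
```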
In mathematical logic, the Löwenheim–Skolem theorem is a theorem on the existence and cardinality of models, named after Leopold Löwenheim and Thoralf Skolem.
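One common statement, given here for the countable-language case rather than in full generality: if a theory T in a countable first-order language has an infinite model, then for every infinite cardinal κ it has a model of cardinality κ. In symbols:

```latex
\[
  \exists M \models T,\ |M| \ge \aleph_0
  \quad\Longrightarrow\quad
  \forall \kappa \ge \aleph_0\ \ \exists N \models T,\ |N| = \kappa .
\]
```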
In propositional logic, the double negation of a statement is the assertion that "it is not the case that the statement is not true". In classical logic, every statement is logically equivalent to its double negation, but this is not true in intuitionistic logic, where only the implication from a statement to its double negation is derivable; the classical equivalence can be expressed by the formula A ≡ ~(~A), where the sign ≡ expresses logical equivalence and the sign ~ expresses negation.
In mathematical logic, Löb's theorem states that in Peano arithmetic (PA) (or any formal system containing PA), for any formula P, if it is provable in PA that "if P is provable in PA then P is true", then P is provable in PA. If Prov(P) means that the formula P is provable, we may express this more formally as: if PA ⊢ Prov(P) → P, then PA ⊢ P.
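In the notation of provability logic, which writes □ for "is provable", the theorem corresponds to the schema (this modal rendering is standard, though not part of the statement above):

```latex
\[ \Box(\Box P \rightarrow P) \rightarrow \Box P \]
```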
In logic, a rule of inference is admissible in a formal system if the set of theorems of the system does not change when that rule is added to the existing rules of the system. In other words, every formula that can be derived using that rule is already derivable without that rule, so, in a sense, it is redundant. The concept of an admissible rule was introduced by Paul Lorenzen (1955).
In mathematical logic, Craig's interpolation theorem is a result about the relationship between different logical theories. Roughly stated, the theorem says that if a formula φ implies a formula ψ, and the two have at least one atomic variable symbol in common, then there is a formula ρ, called an interpolant, such that every non-logical symbol in ρ occurs both in φ and ψ, φ implies ρ, and ρ implies ψ. The theorem was first proved for first-order logic by William Craig in 1957. Variants of the theorem hold for other logics, such as propositional logic. A stronger form of Craig's interpolation theorem for first-order logic was proved by Roger Lyndon in 1959; the overall result is sometimes called the Craig–Lyndon theorem.
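A toy propositional instance (the letters are illustrative): with φ = P ∧ Q and ψ = Q ∨ R, the formula ρ = Q is an interpolant, since it contains only the shared symbol Q and both implications below are tautologies:

```latex
\[ (P \land Q) \rightarrow Q , \qquad Q \rightarrow (Q \lor R) . \]
```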
In mathematical logic, Heyting arithmetic is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it.
In mathematical logic, a theory is a set of sentences in a formal language. In most scenarios a deductive system is first understood from context, after which an element of a deductively closed theory is then called a theorem of the theory. In many deductive systems there is usually a subset Σ ⊆ T that is called "the set of axioms" of the theory T, in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.
In mathematical logic, an ω-consistent theory is a theory that is not only (syntactically) consistent, but also avoids proving certain infinite combinations of sentences that are intuitively contradictory. The name is due to Kurt Gödel, who introduced the concept in the course of proving the incompleteness theorem.
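Concretely, a theory T containing arithmetic is ω-inconsistent when for some formula φ(x) it proves every numeral instance while also proving an existential counterexample; ω-consistency is the absence of such a φ. In symbols (writing n̄ for the numeral of n; a standard rendering, assumed here):

```latex
\[
  T \vdash \varphi(\overline{n}) \ \text{ for every natural number } n,
  \qquad\text{and yet}\qquad
  T \vdash \exists x\, \neg\varphi(x).
\]
```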
In logic, especially mathematical logic, an axiomatic system, sometimes called a "Hilbert-style" deductive system, is a type of system of formal deduction developed by Gottlob Frege, Jan Łukasiewicz, Russell and Whitehead, and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.
In mathematical logic, a theory is complete if it is consistent and for every closed formula in the theory's language, either that formula or its negation is provable. That is, for every sentence φ, the theory contains φ or its negation, but not both. Recursively axiomatizable first-order theories that are consistent and rich enough to allow general mathematical reasoning to be formulated cannot be complete, as demonstrated by Gödel's first incompleteness theorem.
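For a finite propositional theory over a fixed set of atoms, completeness admits a brute-force test: a consistent theory decides every formula in those atoms exactly when it has a unique model. A minimal Python sketch under that finite encoding, in the same style as the satisfiability check above (illustrative only):

```python
from itertools import product

# Count the models of a finite propositional theory over the given atoms;
# a consistent theory is complete (for formulas in these atoms) iff the
# count is exactly one.
def count_models(axioms, atoms):
    return sum(
        1
        for values in product([True, False], repeat=len(atoms))
        if all(axiom(dict(zip(atoms, values))) for axiom in axioms)
    )

complete_theory   = [lambda v: v["p"], lambda v: v["q"]]   # {p, q}
incomplete_theory = [lambda v: v["p"]]                     # {p} says nothing about q

print(count_models(complete_theory, ["p", "q"]) == 1)      # True  (complete)
print(count_models(incomplete_theory, ["p", "q"]) == 1)    # False (two models)
```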
In constructive mathematics, Church's thesis is the principle stating that all total functions are computable functions.
In mathematical logic, the Hilbert–Bernays provability conditions, named after David Hilbert and Paul Bernays, are a set of requirements for formalized provability predicates in formal theories of arithmetic.
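In their modern form, often cited as the Hilbert–Bernays–Löb derivability conditions, they require of a provability predicate Prov, writing ⌜φ⌝ for the Gödel number of φ (this is the standard streamlined version rather than Hilbert and Bernays' original list):

```latex
\begin{align*}
\textbf{D1:}\quad & \text{if } T \vdash \varphi \text{, then } T \vdash \mathrm{Prov}(\ulcorner \varphi \urcorner)\\
\textbf{D2:}\quad & T \vdash \mathrm{Prov}(\ulcorner \varphi \rightarrow \psi \urcorner)
    \rightarrow \bigl(\mathrm{Prov}(\ulcorner \varphi \urcorner)
    \rightarrow \mathrm{Prov}(\ulcorner \psi \urcorner)\bigr)\\
\textbf{D3:}\quad & T \vdash \mathrm{Prov}(\ulcorner \varphi \urcorner)
    \rightarrow \mathrm{Prov}(\ulcorner \mathrm{Prov}(\ulcorner \varphi \urcorner) \urcorner)
\end{align*}
```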
In mathematical logic and metalogic, a formal system is called complete with respect to a particular property if every formula having the property can be derived using that system, i.e. is one of its theorems; otherwise the system is said to be incomplete. The term "complete" is also used without qualification, with differing meanings depending on the context, mostly referring to the property of semantical validity. Intuitively, a system is called complete in this particular sense if it can derive every formula that is true.