In mathematical logic, realizability is a collection of methods in proof theory used to study constructive proofs and extract additional information from them.[1] Formulas from a formal theory are "realized" by objects, known as "realizers", in a way that knowledge of the realizer gives knowledge about the truth of the formula. There are many variations of realizability; exactly which class of formulas is studied and which objects are realizers differ from one variation to another.
Realizability can be seen as a formalization of the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionistic logic. In realizability the notion of "proof" (which is left undefined in the BHK interpretation) is replaced with a formal notion of "realizer". Most variants of realizability begin with a theorem that any statement that is provable in the formal system being studied is realizable. The realizer, however, usually gives more information about the formula than a formal proof would directly provide.
Beyond giving insight into intuitionistic provability, realizability can be applied to prove the disjunction and existence properties for intuitionistic theories and to extract programs from proofs, as in proof mining. It is also related to topos theory via realizability topoi.
Kleene's original version of realizability uses natural numbers as realizers for formulas in Heyting arithmetic (HA). A few pieces of notation are required: first, an ordered pair (n, m) is treated as a single number using a fixed primitive recursive pairing function; second, for each natural number n, φ_n is the computable function with index n. The following clauses are used to define a relation "n realizes A" between natural numbers n and formulas A in the language of Heyting arithmetic, known as Kleene's 1945-realizability relation:[2]
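In the standard formulation these clauses read roughly as follows:
- A number n realizes an atomic formula s = t exactly when s = t is true; the value of n plays no role.
- A pair (n, m) realizes A ∧ B when n realizes A and m realizes B.
- A pair (n, m) realizes A ∨ B when either n = 0 and m realizes A, or n ≠ 0 and m realizes B.
- A number n realizes A → B when, for every m realizing A, the value φ_n(m) is defined and realizes B.
- A pair (n, m) realizes ∃x A(x) when m realizes A(n), with the numeral for n substituted for x.
- A number n realizes ∀x A(x) when, for every m, the value φ_n(m) is defined and realizes A(m).
Negation ¬A is treated as the implication A → (0 = 1), so ¬A has a realizer exactly when no number realizes A.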
With this definition, the following theorem is obtained:[3] if a formula A is provable in Heyting arithmetic, then A is realized by some natural number; moreover, a realizer can be found effectively from a proof of A.
On the other hand, there are classical theorems (even propositional formula schemas) that are realizable but not provable in HA, a fact first established by Rose.[4] So realizability does not exactly mirror intuitionistic reasoning.
Further analysis of the method can be used to prove that HA has the "disjunction and existence properties":[5] if HA proves a sentence A ∨ B, then HA proves A or HA proves B; and if HA proves a sentence ∃x A(x), then HA proves A(n) for some numeral n.
Further properties of this kind can be obtained by considering Harrop formulas.
Kreisel introduced modified realizability, which uses typed lambda calculus as the language of realizers. Modified realizability is one way to show that Markov's principle is not derivable in intuitionistic logic. By contrast, it can be used to justify constructively the principle of independence of premise: (A → ∃x P(x)) → ∃x (A → P(x)), where x does not occur free in A (for premises A of a suitable form, such as ∃-free formulas).
Relative realizability[6] is an intuitionistic analysis of computable or computably enumerable elements of data structures that are not themselves necessarily computable, such as computable operations on all real numbers, when the reals can only be approximated on digital computer systems.
Classical realizability was introduced by Krivine[7] and extends realizability to classical logic. It furthermore realizes the axioms of Zermelo–Fraenkel set theory. Understood as a generalization of Cohen's forcing, it has been used to provide new models of set theory.[8]
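In standard presentations of classical realizability, realizers are terms of a lambda calculus extended with a control operator cc (call-with-current-continuation), which serves as a realizer of Peirce's law ((A → B) → A) → A and thereby accounts for the classical reasoning principles that go beyond intuitionistic logic.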
Linear realizability extends realizability techniques to linear logic. The term was coined by Seiller[9] to encompass several constructions, such as geometry of interaction models,[10] ludics,[11] and interaction graphs models.[12]
Realizability is one of the methods used in proof mining to extract concrete "programs" from seemingly non-constructive mathematical proofs. Program extraction using realizability is implemented in some proof assistants such as Coq.
In the philosophy of mathematics, intuitionism, or neointuitionism, is an approach where mathematics is considered to be purely the result of the constructive mental activity of humans rather than the discovery of fundamental principles claimed to exist in an objective reality. That is, logic and mathematics are not considered analytic activities wherein deep properties of objective reality are revealed and applied, but are instead considered the application of internally consistent methods used to realize more complex mental constructs, regardless of their possible independent existence in an objective reality.
Mathematical logic is the study of formal logic within mathematics. Major subareas include model theory, proof theory, set theory, and recursion theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic such as their expressive or deductive power. However, it can also include uses of logic to characterize correct mathematical reasoning or to establish foundations of mathematics.
In the philosophy of mathematics, constructivism asserts that it is necessary to find a specific example of a mathematical object in order to prove that an example exists. Contrastingly, in classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. Such a proof by contradiction might be called non-constructive, and a constructivist might reject it. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation.
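A standard illustration of the contrast is the classical proof that there exist irrational numbers a and b with a^b rational: either √2^√2 is rational, in which case a = b = √2 works, or it is irrational, in which case a = √2^√2 and b = √2 give a^b = 2. The argument establishes existence by cases on a statement it does not decide, and so exhibits no specific pair, which is exactly what the constructive reading of the existential quantifier demands.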
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth, which in classical logic has only two possible values.
Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.
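Concretely, the law of the excluded middle is the schema A ∨ ¬A and double negation elimination is the schema ¬¬A → A; neither is derivable in intuitionistic logic in general, although the double negation ¬¬(A ∨ ¬A) of any instance of excluded middle is intuitionistically provable.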
In mathematics, constructive analysis is mathematical analysis done according to some principles of constructive mathematics.
In programming language theory and proof theory, the Curry–Howard correspondence is the direct relationship between computer programs and mathematical proofs. It is also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation.
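As a small illustration, here is a minimal sketch in Haskell (the names constProof, modusPonens and projLeft are chosen for exposition, not taken from any library): closed terms inhabiting a type serve as "proofs" of the corresponding proposition, and modus ponens corresponds to function application.

```haskell
-- Propositions-as-types: a closed term of a type is a "proof" of the
-- corresponding proposition.

-- Proof of A -> (B -> A): keep the evidence for A, ignore the evidence for B.
constProof :: a -> b -> a
constProof x _ = x

-- Modus ponens (from A -> B and A, infer B) is just function application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Proof of (A and B) -> A, reading the pair type (a, b) as conjunction.
projLeft :: (a, b) -> a
projLeft (x, _) = x

main :: IO ()
main = print (modusPonens (+ 1) (41 :: Int))  -- prints 42
```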
In mathematics, a Heyting algebra (also known as pseudo-Boolean algebra) is a bounded lattice (with join and meet operations written ∨ and ∧ and with least element 0 and greatest element 1) equipped with a binary operation a → b of implication such that (c ∧ a) ≤ b is equivalent to c ≤ (a → b). From a logical standpoint, A → B is by this definition the weakest proposition for which modus ponens, the inference rule A → B, A ⊢ B, is sound. Like Boolean algebras, Heyting algebras form a variety axiomatizable with finitely many equations. Heyting algebras were introduced by Arend Heyting (1930) to formalize intuitionistic logic.
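A standard example is the lattice of open sets of a topological space X: taking ∨ to be union, ∧ to be intersection, 0 = ∅ and 1 = X, the implication U → V is the interior of (X ∖ U) ∪ V, and one checks that W ∧ U ≤ V holds precisely when W ≤ (U → V). The open sets of the real line form a Heyting algebra that is not a Boolean algebra, since ¬U = (U → 0) need not be the set-theoretic complement of U.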
In mathematical logic, a superintuitionistic logic is a propositional logic extending intuitionistic logic. Classical logic is the strongest consistent superintuitionistic logic; thus, consistent superintuitionistic logics are called intermediate logics.
In mathematical logic, the disjunction and existence properties are the "hallmarks" of constructive theories such as Heyting arithmetic and constructive set theories (Rathjen 2005).
In mathematical logic, the Brouwer–Heyting–Kolmogorov interpretation, or BHK interpretation, of intuitionistic logic was proposed by L. E. J. Brouwer and Arend Heyting, and independently by Andrey Kolmogorov. It is also sometimes called the realizability interpretation, because of the connection with the realizability theory of Stephen Kleene. It is the standard explanation of intuitionistic logic.
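In its standard formulation, the BHK interpretation explains the connectives as follows: a proof of A ∧ B is a pair consisting of a proof of A and a proof of B; a proof of A ∨ B is a proof of one of the disjuncts together with an indication of which disjunct it proves; a proof of A → B is a construction transforming any proof of A into a proof of B; a proof of ∃x A(x) is a witness a together with a proof of A(a); a proof of ∀x A(x) is a construction producing a proof of A(a) for every a; there is no proof of ⊥, and ¬A is understood as A → ⊥. Realizability can be seen as replacing the informal "constructions" in these clauses with concrete computational objects, such as the indices of computable functions in Kleene's definition above.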
In mathematical logic, Heyting arithmetic is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it.
Logics for computability are formulations of logic that capture some aspect of computability as a basic notion. This usually involves a mix of special logical connectives as well as a semantics that explains how the logic is to be interpreted in a computational way.
Axiomatic constructive set theory is an approach to mathematical constructivism following the program of axiomatic set theory. The same first-order language with "=" and "∈" of classical set theory is usually used, so this is not to be confused with a constructive types approach. On the other hand, some constructive theories are indeed motivated by their interpretability in type theories.
In proof theory, a discipline within mathematical logic, double-negation translation, sometimes called negative translation, is a general approach for embedding classical logic into intuitionistic logic. Typically it is done by translating formulas to formulas that are classically equivalent but intuitionistically inequivalent. Particular instances of double-negation translations include Glivenko's translation for propositional logic, and the Gödel–Gentzen translation and Kuroda's translation for first-order logic.
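For instance, in the Gödel–Gentzen translation an atomic formula P is translated as ¬¬P, a disjunction A ∨ B as ¬(¬A* ∧ ¬B*), and an existential statement ∃x A as ¬∀x ¬A*, where A* and B* denote the translations of A and B; conjunction, implication, negation, and universal quantification are translated componentwise. A formula of first-order logic is then provable classically exactly when its translation is provable intuitionistically.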
Markov's principle, named after Andrey Markov Jr, is a conditional existence statement for which there are many equivalent formulations, as discussed below. The principle is logically valid classically, but not in intuitionistic constructive mathematics. However, many particular instances of it are nevertheless provable in a constructive context as well.
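In its arithmetical form the principle can be stated as follows: for a predicate P that is decidable, so that ∀n (P(n) ∨ ¬P(n)) holds, if ¬¬∃n P(n) then ∃n P(n). Computationally, it asserts that an unbounded search that cannot fail to terminate does in fact terminate.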
In constructive mathematics, Church's thesis is the principle stating that all total functions are computable functions.
In proof theory, the Dialectica interpretation is a proof interpretation of intuitionistic logic into a finite type extension of primitive recursive arithmetic, the so-called System T. It was developed by Kurt Gödel to provide a consistency proof of arithmetic. The name of the interpretation comes from the journal Dialectica, where Gödel's paper was published in a 1958 special issue dedicated to Paul Bernays on his 70th birthday.
In intuitionistic logic, the Harrop formulae, named after Ronald Harrop, are the class of formulae inductively defined as follows: atomic formulae (including falsity) are Harrop; a conjunction A ∧ B is Harrop whenever A and B are; a negation ¬F is Harrop for any formula F; an implication F → A is Harrop whenever A is (with F arbitrary); and a universal quantification ∀x A is Harrop whenever A is.
In mathematics, the effective topos introduced by Martin Hyland captures the mathematical idea of effectivity within the category theoretical framework.