Identity type


In type theory, the identity type represents the concept of equality. It is also known as propositional equality to differentiate it from "judgemental equality". Equality in type theory is a complex topic and has been the subject of ongoing research, for example in the field of homotopy type theory.[1]


Comparison with Judgemental Equality

The identity type is one of the two notions of equality in type theory.[2] The more fundamental notion is "judgemental equality", which is a judgement rather than a type, and therefore cannot be assumed, negated, or proved within the theory itself.
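
As an illustration, the two notions behave differently in a proof assistant. The following is a minimal sketch in Lean 4 (the lemma Nat.zero_add comes from Lean's standard library and is not part of this article's example):

    -- `2 + 2` and `4` reduce to the same normal form, so they are judgementally
    -- equal and the reflexivity constructor `rfl` type-checks directly.
    example : 2 + 2 = 4 := rfl

    -- For a variable `n`, `0 + n` does not reduce to `n` (Lean's `Nat.add`
    -- recurses on its second argument), so an inhabitant of the identity type
    -- must be constructed, here via a library lemma proved by induction.
    example (n : Nat) : 0 + n = n := Nat.zero_add n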

Beyond Judgemental Equality

The identity type can express facts that judgemental equality cannot. For example, it can be used to show "for all x of type nat, x + 1 = 1 + x", which is impossible to establish with judgemental equality alone. This is accomplished by using the eliminator (or "recursor") of the natural numbers, known as "R".

The "R" function lets us define a new function on the natural numbers. That new function "P" is defined to be "(λ x:nat . x+1 = 1+x)". The other arguments act like the parts of an induction proof. The argument "PZ : P 0" becomes the base case "0+1 = 1+0", which is the term "refl nat 1". The argument "PS : P n → P (S n)" becomes the inductive case. Essentially, this says that when "x+1 = 1+x" has "x" replaced with a canonical value, the expression will be the same as "refl nat (x+1)".

Versions of the Identity Type

The identity type is complex and remains a subject of research in type theory. While every version agrees on the constructor "refl", their properties and eliminator functions differ dramatically.

For "extensional" versions, any identity type can be converted into a judgemental equality. A computational version is known as "Axiom K" due to Thomas Streicher. [3] These are not very popular lately.

Complexity of Identity Type

Martin Hofmann and Thomas Streicher refuted the idea that type theory requires all terms of the identity type to be the same, by constructing the groupoid model.[4]

Popular branches of research into the identity type include homotopy type theory[5] and its cubical variant, cubical type theory.

Related Research Articles

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.

In mathematical logic, model theory is the study of the relationship between formal theories, and their models. The aspects investigated include the number and size of models of a theory, the relationship of different models to each other, and their interaction with the formal language itself. In particular, model theorists also investigate the sets that can be defined in a model of a theory, and the relationship of such definable sets to each other. As a separate discipline, model theory goes back to Alfred Tarski, who first used the term "Theory of Models" in publication in 1954. Since the 1970s, the subject has been shaped decisively by Saharon Shelah's stability theory.

In the philosophy of mathematics, constructivism asserts that it is necessary to find a specific example of a mathematical object in order to prove that an example exists. Contrastingly, in classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. Such a proof by contradiction might be called non-constructive, and a constructivist might reject it. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation.

In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system. Type theory is the academic study of type systems.

In computer science, denotational semantics is an approach of formalizing the meanings of programming languages by constructing mathematical objects that describe the meanings of expressions from the languages. Other approaches providing formal semantics of programming languages include axiomatic semantics and operational semantics.

In mathematics, equality is a relationship between two quantities or, more generally, two mathematical expressions, asserting that the quantities have the same value, or that the expressions represent the same mathematical object. The equality between A and B is written A = B, and pronounced "A equals B". The symbol "=" is called an "equals sign". Two objects that are not equal are said to be distinct.

Homotopy: Continuous deformation between two continuous functions

In topology, a branch of mathematics, two continuous functions from one topological space to another are called homotopic if one can be "continuously deformed" into the other, such a deformation being called a homotopy between the two functions. A notable use of homotopy is the definition of homotopy groups and cohomotopy groups, important invariants in algebraic topology.

In programming language theory and proof theory, the Curry–Howard correspondence is the direct relationship between computer programs and mathematical proofs.

Intuitionistic type theory is a type theory and an alternative foundation of mathematics. Intuitionistic type theory was created by Per Martin-Löf, a Swedish mathematician and philosopher, who first published it in 1972. There are multiple versions of the type theory: Martin-Löf proposed both intensional and extensional variants of the theory, and early impredicative versions, shown to be inconsistent by Girard's paradox, gave way to predicative versions. However, all versions keep the core design of constructive logic using dependent types.

System F is a typed lambda calculus that introduces, to simply typed lambda calculus, a mechanism of universal quantification over types. System F formalizes parametric polymorphism in programming languages, thus forming a theoretical basis for languages such as Haskell and ML. It was discovered independently by logician Jean-Yves Girard (1972) and computer scientist John C. Reynolds.

In logic, a true/false decision problem is decidable if there exists an effective method for deriving the correct answer. Zeroth-order logic is decidable, whereas first-order and higher-order logic are not. Logical systems are decidable if membership in their set of logically valid formulas can be effectively determined. A theory in a fixed logical system is decidable if there is an effective method for determining whether arbitrary formulas are included in the theory. Many important problems are undecidable, that is, it has been proven that no effective method for determining membership can exist for them.

In computer science and logic, a dependent type is a type whose definition depends on a value. It is an overlapping feature of type theory and type systems. In intuitionistic type theory, dependent types are used to encode logic's quantifiers like "for all" and "there exists". In functional programming languages like Agda, ATS, Coq, F*, Epigram, and Idris, dependent types help reduce bugs by enabling the programmer to assign types that further restrain the set of possible implementations.
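
As a brief illustration, consider a sketch in Lean 4 with a hypothetical length-indexed vector type (the names Vec and replicate are illustrative and not taken from the libraries named above); the type of a term mentions a value:

    -- `Vec α n` is a list of `α` whose type records its length `n`.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- The result type depends on the value of the first argument, so the
    -- type checker rejects implementations returning a vector of the wrong length.
    def replicate {α : Type} : (n : Nat) → α → Vec α n
      | 0,     _ => Vec.nil
      | n + 1, a => Vec.cons a (replicate n a)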

Agda (programming language): Functional programming language

Agda is a dependently typed functional programming language originally developed by Ulf Norell at Chalmers University of Technology with implementation described in his PhD thesis. The original Agda system was developed at Chalmers by Catarina Coquand in 1999. The current version, originally known as Agda 2, is a full rewrite, which should be considered a new language that shares a name and tradition.

In type theory, a system has inductive types if it has facilities for creating a new type from constants and functions that create terms of that type. The feature serves a role similar to data structures in a programming language and allows a type theory to add concepts like numbers, relations, and trees. As the name suggests, inductive types can be self-referential, but usually only in a way that permits structural recursion.
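
For example, here is a minimal sketch in Lean 4 (the names MyNat and double are illustrative only): the natural numbers arise from one constant constructor and one self-referential constructor, and functions on them are written by structural recursion:

    -- A constant constructor and a self-referential constructor generate the type.
    inductive MyNat : Type where
      | zero : MyNat
      | succ : MyNat → MyNat

    -- Structural recursion: each recursive call is on a structurally smaller term.
    def double : MyNat → MyNat
      | MyNat.zero   => MyNat.zero
      | MyNat.succ n => MyNat.succ (MyNat.succ (double n))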

Homotopy type theory: Type theory in logic and mathematics

In mathematical logic and computer science, homotopy type theory refers to various lines of development of intuitionistic type theory, based on the interpretation of types as objects to which the intuition of (abstract) homotopy theory applies.

The type theory was initially created to avoid paradoxes in a variety of formal logics and rewrite systems. Later, type theory referred to a class of formal systems, some of which can serve as alternatives to naive set theory as a foundation for all mathematics.

Thomas Streicher is an Austrian mathematician who is a Professor of Mathematics at Technische Universität Darmstadt. He received his PhD in 1988 from the University of Passau with advisor Manfred Broy.

Univalent foundations are an approach to the foundations of mathematics in which mathematical structures are built out of objects called types. Types in univalent foundations do not correspond exactly to anything in set-theoretic foundations, but they may be thought of as spaces, with equal types corresponding to homotopy equivalent spaces and with equal elements of a type corresponding to points of a space connected by a path. Univalent foundations are inspired both by the old Platonic ideas of Hermann Grassmann and Georg Cantor and by "categorical" mathematics in the style of Alexander Grothendieck. Univalent foundations depart from the use of classical predicate logic as the underlying formal deduction system, replacing it, at the moment, with a version of Martin-Löf type theory. The development of univalent foundations is closely related to the development of homotopy type theory.

In mathematics, homotopy theory is a systematic study of situations in which maps can come with homotopies between them. It originated as a topic in algebraic topology but nowadays is learned as an independent discipline. Besides algebraic topology, the theory has also been used in other areas of mathematics such as algebraic geometry (e.g., A1 homotopy theory) and category theory (specifically the study of higher categories).

Lean is a proof assistant and programming language. It is based on the calculus of constructions with inductive types. It is an open-source project hosted on GitHub. It was principally developed by Leonardo de Moura while employed by Microsoft Research and now Amazon Web Services, and has had significant contributions from other coauthors and collaborators during its history.

References

  1. "Identity Type". nLab. Retrieved 19 January 2022.
  2. Martin-Löf, Per (June 1980). Intuitionistic Type Theory (PDF).
  3. Streicher, Thomas (1993). Investigations into intensional type theory (PDF).
  4. Hofmann, Martin; Streicher, Thomas (July 1994). "The groupoid model refutes uniqueness of identity proofs". Proceedings Ninth Annual IEEE Symposium on Logic in Computer Science. pp. 208–212. doi:10.1109/LICS.1994.316071. ISBN 0-8186-6310-3. S2CID 19496198.
  5. Univalent Foundations Program (12 March 2013). Homotopy Type Theory. Institute for Advanced Study.