Many-sorted logic can formally reflect our intention not to handle the universe as a homogeneous collection of objects, but to partition it in a way similar to types in typeful programming. Both the functional and the assertive "parts of speech" in the language of the logic reflect this typeful partitioning of the universe, even at the syntactic level: substitution and argument passing can only be done accordingly, respecting the "sorts".
There are various ways to formalize this intention; a many-sorted logic is any package of information that fulfils it. In most cases, the following are given: a set of sorts, S, and an appropriate generalization of the notion of signature able to handle the additional information that comes with the sorts.
The domain of discourse of any structure of that signature is then fragmented into disjoint subsets, one for every sort.
When reasoning about biological organisms, it is useful to distinguish two sorts: plant and animal. While a function mother: animal → animal makes sense, a similar function mother: plant → plant usually does not. Many-sorted logic allows one to have terms like mother(lassie), but to discard terms like mother(my_favorite_oak) as syntactically ill-formed.
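The analogy with types in programming can be made concrete. The following minimal Scala sketch models the two sorts as types; all identifiers (Animal, Plant, mother, lassie, myFavoriteOak) are chosen purely for illustration, and the type checker plays the role of the sort discipline.

```scala
// Minimal sketch: the sorts animal and plant modeled as Scala types.
// All names are illustrative assumptions, not part of any fixed library.
object TwoSortedExample {
  final case class Animal(name: String)
  final case class Plant(name: String)

  // mother is declared only for the sort animal
  def mother(a: Animal): Animal = Animal("mother of " + a.name)

  val lassie: Animal       = Animal("Lassie")
  val myFavoriteOak: Plant = Plant("my favorite oak")

  val wellSorted: Animal = mother(lassie) // accepted: a well-sorted term
  // mother(myFavoriteOak)                // rejected by the compiler, i.e. ill-sorted
}
```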
The algebraization of many-sorted logic is explained in an article by Caleiro and Gonçalves, [1] which generalizes abstract algebraic logic to the many-sorted case, but can also be used as introductory material.
While many-sorted logic requires two distinct sorts to have disjoint universe sets, order-sorted logic allows one sort s1 to be declared a subsort of another sort s2, usually by writing s1 ⊆ s2 or similar syntax. In the above biology example, it is desirable to declare dog ⊆ animal, oak ⊆ plant, and so on; cf. picture.
Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead (Liskov substitution principle). For example, assuming a function declaration mother: animal → animal and a constant declaration lassie: dog, the term mother(lassie) is perfectly valid and has the sort animal. In order to supply the information that the mother of a dog is in turn a dog, another declaration mother: dog → dog may be issued; this is called function overloading, similar to overloading in programming languages.
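Continuing the programming analogy, a subsort declaration behaves like subclassing, and the additional declaration mother: dog → dog behaves like overloading. A hedged Scala sketch (all names illustrative): the compiler picks the most specific applicable overload, so mother(lassie) gets the "sort" Dog.

```scala
// Sketch: subsort declarations as subtyping, plus an overloaded
// declaration mother: dog -> dog. All names are illustrative assumptions.
object OrderSortedExample {
  class Animal(val name: String)
  class Dog(name: String) extends Animal(name) // dog ⊆ animal

  def mother(a: Animal): Animal = new Animal("mother of " + a.name)
  def mother(d: Dog): Dog       = new Dog("mother of " + d.name) // overloading

  val lassie: Dog = new Dog("Lassie")

  val m1: Dog    = mother(lassie)            // most specific overload applies: sort dog
  val m2: Animal = mother(new Animal("rex")) // sort animal
}
```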
Order-sorted logic can be translated into unsorted logic, using a unary predicate p_i(x) for each sort s_i, and an axiom ∀x (p_i(x) → p_j(x)) for each subsort declaration s_i ⊆ s_j. The reverse approach was successful in automated theorem proving: in 1985, Christoph Walther could solve a then-benchmark problem by translating it into order-sorted logic, thereby shrinking it by an order of magnitude, as many unary predicates turned into sorts. [2]
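As a small worked instance of this translation (a sketch, with predicate names chosen for illustration), the declarations dog ⊆ animal and mother: dog → dog from the example above become:

```latex
% Relativization of order-sorted declarations into unsorted first-order logic (sketch):
\[ \forall x \,\bigl(\mathit{dog}(x) \rightarrow \mathit{animal}(x)\bigr) \]
% axiom for the subsort declaration (dog is a subsort of animal)
\[ \forall x \,\bigl(\mathit{dog}(x) \rightarrow \mathit{dog}(\mathit{mother}(x))\bigr) \]
% encodes the overloaded declaration (mother maps dogs to dogs)
```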
In order to incorporate order-sorted logic into a clause-based automated theorem prover, a corresponding order-sorted unification algorithm is necessary, which requires for any two declared sorts s1 and s2 that their intersection s1 ∩ s2 be declared, too: if x1 and x2 are variables of sort s1 and s2, respectively, the equation x1 = x2 has the solution {x1 = x, x2 = x}, where x: s1 ∩ s2.
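The following Scala sketch illustrates only the sort computation involved in that unification step, assuming a small explicitly declared hierarchy together with its intersection sorts (all sort names are illustrative):

```scala
// Sketch of the sort computation in order-sorted unification: unifying
// variables x1: s1 and x2: s2 introduces a fresh variable x: s1 ∩ s2,
// which must itself be a declared sort. All names are illustrative assumptions.
object SortIntersectionSketch {
  type Sort = String

  // declared subsort edges (sort -> its direct supersorts)
  val directSupersorts: Map[Sort, Set[Sort]] = Map(
    "dog"       -> Set("carnivore", "mammal"),
    "carnivore" -> Set("animal"),
    "mammal"    -> Set("animal")
  )

  // all supersorts of s, including s itself (reflexive-transitive closure)
  def supersorts(s: Sort): Set[Sort] =
    directSupersorts.getOrElse(s, Set.empty[Sort]).flatMap(supersorts) + s

  // intersection sorts that have been declared explicitly
  val declaredIntersections: Map[Set[Sort], Sort] = Map(
    Set("carnivore", "mammal") -> "dog"
  )

  // sort of the fresh variable x solving x1 = x2 with x1: s1, x2: s2
  def intersectionSort(s1: Sort, s2: Sort): Option[Sort] =
    if (supersorts(s1).contains(s2)) Some(s1)      // s1 is a subsort of s2
    else if (supersorts(s2).contains(s1)) Some(s2) // s2 is a subsort of s1
    else declaredIntersections.get(Set(s1, s2))    // otherwise the intersection must be declared

  // unifying a carnivore-variable with a mammal-variable yields a dog-variable
  val example: Option[Sort] = intersectionSort("carnivore", "mammal") // Some("dog")
}
```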
Smolka generalized order-sorted logic to allow for parametric polymorphism. [3] [4] In his framework, subsort declarations are propagated to complex type expressions. As a programming example, a parametric sort list(X) may be declared (with X being a type parameter, as in a C++ template), and from a subsort declaration int ⊆ float the relation list(int) ⊆ list(float) is automatically inferred, meaning that each list of integers is also a list of floats.
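In Scala, this propagation corresponds to declaring the type parameter covariant. Since Scala's built-in Int and Float are not related by subtyping, the sketch below assumes an illustrative Integer ⊆ Real hierarchy instead:

```scala
// Sketch of Smolka-style propagation of subsorting into a parametric sort
// list(X), modeled by a covariant type constructor. Names are illustrative.
object ParametricSortSketch {
  class Real(val value: Double)
  class Integer(v: Double) extends Real(v) // integer ⊆ real (assumed hierarchy)

  // the variance annotation +A propagates the subsort relation:
  // MyList[Integer] ⊆ MyList[Real]
  final case class MyList[+A](elems: Seq[A])

  val ints: MyList[Integer] = MyList(Seq(new Integer(1), new Integer(2)))
  val reals: MyList[Real]   = ints // accepted: every list of integers is also a list of reals
}
```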
Schmidt-Schauß generalized order-sorted logic to allow for term declarations. [5] As an example, assuming subsort declarations even ⊆ int and int ⊆ real, a term declaration like ∀ i: int. (i + i): even allows one to declare a property of integer addition that could not be expressed by ordinary overloading.
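To see why overloading does not suffice here (a sketch using the sorts assumed above): an overloaded declaration assigns a result sort to the symbol + based only on the sorts of its arguments, whereas the term declaration constrains one particular term shape.

```latex
% Subsort declarations and a term declaration in the style of Schmidt-Schauss (sketch):
\[ \mathit{even} \subseteq \mathit{int}, \qquad \mathit{int} \subseteq \mathit{real} \]
\[ \forall i : \mathit{int}.\;\; (i + i) : \mathit{even} \]
% An overloaded declaration such as  + : int x int -> int  fixes the result
% sort from the argument sorts alone; it cannot express that the specific
% term i + i (the same variable used twice) always has sort even, since e.g.
% 1 + 2 has the same argument sorts but denotes an odd number.
```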
Early papers on many-sorted logic include: