Bunched logic [1] is a variety of substructural logic proposed by Peter O'Hearn and David Pym. Bunched logic provides primitives for reasoning about resource composition, which aid in the compositional analysis of computer and other systems. It has category-theoretic and truth-functional semantics, which can be understood in terms of an abstract concept of resource, and a proof theory in which the contexts Γ in an entailment judgement Γ ⊢ A are tree-like structures (bunches) rather than lists or (multi)sets as in most proof calculi. Bunched logic has an associated type theory, and its first application was in providing a way to control the aliasing and other forms of interference in imperative programs. [2] The logic has seen further applications in program verification, where it is the basis of the assertion language of separation logic, [3] and in systems modelling, where it provides a way to decompose the resources used by components of a system. [4] [5] [6]
The deduction theorem of classical logic relates conjunction and implication:

A ∧ B ⊢ C   if and only if   A ⊢ B → C
Bunched logic has two versions of the deduction theorem:

A ∧ B ⊢ C iff A ⊢ B → C   and   A ∗ B ⊢ C iff A ⊢ B −∗ C
Here ∗ and −∗ are forms of conjunction and implication that take resources into account (explained below). In addition to these connectives bunched logic has a formula, sometimes written I or emp, which is the unit of ∗. In the original version of bunched logic ∧ and → were the connectives from intuitionistic logic, while a boolean variant takes ∧ and → (and ¬) from traditional boolean logic. Thus, bunched logic is compatible with constructive principles, but is in no way dependent on them.
The easiest way to understand these formulae is in terms of the logic's truth-functional semantics. In this semantics a formula is true or false with respect to given resources. A ∗ B asserts that the resource at hand can be decomposed into resources that satisfy A and B. A −∗ B says that if we compose the resource at hand with an additional resource that satisfies A, then the combined resource satisfies B. ∧ and → have their familiar meanings.
The foundation for this reading of formulae was provided by a forcing semantics advanced by Pym, where the forcing relation r ⊨ A means 'A holds of resource r'. The semantics is analogous to Kripke's semantics of intuitionistic or modal logic, but where the elements of the model are regarded as resources that can be composed and decomposed, rather than as possible worlds that are accessible from one another. For example, the forcing semantics for the conjunction ∗ is of the form

r ⊨ A ∗ B   if and only if   there are s and t such that r ⊒ s • t, s ⊨ A and t ⊨ B

where • is a way of combining resources and ⊒ is a relation of approximation.
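As an illustration (a toy sketch, not drawn from the cited papers), the forcing clauses can be executed directly in Python for the simple model in which a resource is a number of coins, • is addition and ⊒ is ≥. The bound MAX below is an artefact of the finite search, not part of the semantics, and the formula encoding is our own:

```python
# Toy forcing semantics over the resource monoid (N, +, 0),
# where r ⊒ s means r >= s: a resource is a number of coins.
# Formulas: ("atom", S) with S an up-closed set of resources,
# ("and", A, B), ("imp", A, B), ("star", A, B), ("wand", A, B).

MAX = 20  # finite bound for the quantifiers in "imp" and "wand"

def sat(r, f):
    """Return True iff resource r (an int in 0..MAX) forces formula f."""
    tag = f[0]
    if tag == "atom":
        return r in f[1]
    if tag == "and":
        return sat(r, f[1]) and sat(r, f[2])
    if tag == "imp":   # Kripke clause: every larger resource forcing A forces B
        return all(sat(s, f[2]) for s in range(r, MAX + 1) if sat(s, f[1]))
    if tag == "star":  # r approximates a composition s + t of an A-part and a B-part
        return any(sat(s, f[1]) and sat(t, f[2])
                   for s in range(r + 1) for t in range(r - s + 1))
    if tag == "wand":  # composing with any A-resource yields a B-resource
        return all(sat(r + s, f[2]) for s in range(MAX + 1 - r) if sat(s, f[1]))
    raise ValueError(f"unknown connective {tag}")

# "coin" holds of any resource containing at least one coin (up-closed).
coin = ("atom", set(range(1, MAX + 1)))
assert sat(2, ("star", coin, coin))        # two coins split into one + one
assert not sat(1, ("star", coin, coin))    # one coin cannot be so split
assert sat(0, ("wand", coin, coin))        # adding a coin to nothing gives a coin
```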
This semantics of bunched logic draws on prior work in relevance logic (especially the operational semantics of Routley–Meyer), but differs from it by not requiring the property r ⊑ r • r and by accepting the semantics of the standard intuitionistic or classical versions of ∧ and ∨. The property r ⊑ r • r is justified when thinking about relevance but denied by considerations of resource; having two copies of a resource is not the same as having one, and in some models (e.g. heap models) r • r might not even be defined. The standard semantics of → (or of negation) is often rejected by relevantists in their bid to escape the 'paradoxes of material implication', which are not a problem from the perspective of modelling resources and so are not rejected by bunched logic. The semantics is also related to the 'phase semantics' of linear logic, but again is differentiated by accepting the standard (even boolean) semantics of ∧ and ∨, which linear logic rejects in a bid to be constructive. These considerations are discussed in detail in an article on resource semantics by Pym, O'Hearn and Yang. [7]
The double version of the deduction theorem of bunched logic has a corresponding category-theoretic structure. Proofs in intuitionistic logic can be interpreted in cartesian closed categories, that is, categories with finite products satisfying the adjunction correspondence, natural in A and C, relating hom sets:

Hom(A ∧ B, C) ≅ Hom(A, B → C)
Bunched logic can be interpreted in categories possessing two such structures: a cartesian closed structure and, in addition, a symmetric monoidal closed structure on the same category (such categories are sometimes called doubly closed categories), giving the two adjunctions

Hom(A ∧ B, C) ≅ Hom(A, B → C)   and   Hom(A ∗ B, C) ≅ Hom(A, B −∗ C)
A host of categorial models can be given using Day's tensor product construction. [8] Additionally, the implicational fragment of bunched logic has been given a game semantics. [9]
The algebraic semantics of bunched logic is a special case of its categorical semantics, but is simple to state and can be more approachable.
The boolean version of bunched logic has models as follows. An algebraic model is a boolean algebra that also carries a commutative monoid structure (∗, I), together with a second implication −∗ related to ∗ by the residuation property:

a ∗ b ≤ c   if and only if   a ≤ b −∗ c
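To see the residuation property in action, the following brute-force Python check (an illustration under our own naming, not from the cited literature) builds the powerset algebra over a one-cell heap model, where ∗ is lifted disjoint union and −∗ is its residual, and verifies the property for every triple of elements:

```python
from itertools import combinations

# One memory cell ("cell") that may hold 0 or 1: the possible heaps are
# the empty heap and the two singleton heaps, as sets of (location, value).
HEAPS = [frozenset()] + [frozenset({("cell", v)}) for v in (0, 1)]

def powerset(xs):
    return [frozenset(s) for r in range(len(xs) + 1)
            for s in combinations(xs, r)]

ELEMENTS = powerset(HEAPS)  # the boolean algebra: all sets of heaps

def disjoint(h1, h2):
    return not ({l for l, _ in h1} & {l for l, _ in h2})

def star(a, b):
    """Lifted composition: all unions of disjoint heaps drawn from a and b."""
    return frozenset(h1 | h2 for h1 in a for h2 in b if disjoint(h1, h2))

def wand(b, c):
    """The residual: heaps whose composition with every b-heap lands in c."""
    return frozenset(h for h in HEAPS
                     if all(h | h2 in c for h2 in b if disjoint(h, h2)))

# Residuation: a * b <= c  iff  a <= b -* c, for every triple of elements.
assert all((star(a, b) <= c) == (a <= wand(b, c))
           for a in ELEMENTS for b in ELEMENTS for c in ELEMENTS)
```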
The proof calculus of bunched logic differs from usual sequent calculi in having a tree-like context of hypotheses instead of a flat list-like structure. In its sequent-based proof theories, the context in an entailment judgement Γ ⊢ A is a finite rooted tree (a bunch) whose leaves are propositions and whose internal nodes are labelled with modes of composition corresponding to the two conjunctions. The two combining operators, comma and semicolon, are used (for instance) in the introduction rules for the two implications: from Γ, A ⊢ B infer Γ ⊢ A −∗ B, and from Γ; A ⊢ B infer Γ ⊢ A → B.
The difference between the two modes of composition comes from the structural rules that apply to them: weakening and contraction are allowed for semicolon but not for comma.
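A standard illustration of the difference (not specific to the cited sources): from A ⊢ A and A ⊢ A, ∧-introduction yields A; A ⊢ A ∧ A, and contraction for semicolon then gives A ⊢ A ∧ A. The parallel attempt for ∗ stops at A, A ⊢ A ∗ A: since comma admits no contraction, A ⊢ A ∗ A is not derivable.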
The structural rules and other operations on bunches are often applied deep within a tree-context, and not only at the top level: it is thus in a sense a calculus of deep inference.
Corresponding to bunched logic is a type theory having two kinds of function type. Following the Curry–Howard correspondence, introduction rules for implications correspond to introduction rules for function types: from Γ, x : A ⊢ M : B infer Γ ⊢ αx. M : A −∗ B, and from Γ; x : A ⊢ M : B infer Γ ⊢ λx. M : A → B.
Here, there are two distinct binders, α and λ, one for each kind of function type.
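For example (a standard observation rather than one specific to the cited papers), the term λx. λy. x receives the additive type A → (B → A), because weakening for semicolon allows the unused hypothesis y : B to be discarded; the multiplicative analogue αx. αy. x : A −∗ (B −∗ A) is not typable, because comma admits no weakening.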
The proof theory of bunched logic has an historical debt to the use of bunches in relevance logic. [10] But the bunched structure can in a sense be derived from the categorical and algebraic semantics: to formulate an introduction rule for −∗ we should mimic ∗ on the left of sequents, and to introduce → we should mimic ∧. This consideration leads to the use of two combining operators.
James Brotherston has done further significant work on a unified proof theory for bunched logic and variants, [11] employing Belnap's notion of display logic. [12]
Galmiche, Méry, and Pym have provided a comprehensive treatment of bunched logic, including completeness and other meta-theory, based on labelled tableaux. [13]
In perhaps the first use of substructural type theory to control resources, John C. Reynolds showed how to use an affine type theory to control aliasing and other forms of interference in Algol-like programming languages. [14] O'Hearn used bunched type theory to extend Reynolds' system by allowing interference and non-interference to be more flexibly mixed. [2] This resolved open problems concerning recursion and jumps in Reynolds' system.
Separation logic is an extension of Hoare logic that facilitates reasoning about mutable data structures that use pointers. Following Hoare logic, the formulae of separation logic are of the form {P} C {Q}, but the preconditions and postconditions are formulae interpreted in a model of bunched logic. In the original version of the logic the model is as follows: a resource is a heap, a finite partial function from locations to values; the composition h • h′ of two heaps is their union when their domains are disjoint, and is undefined otherwise; and the unit of composition is the empty heap.
It is the undefinedness of the composition on overlapping heaps that models the separation idea. This is a model of the boolean variant of bunched logic.
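The heap model is easy to experiment with directly. The sketch below is an illustration only; pointsto and star are our own hypothetical names, not the syntax of any separation logic tool. It implements heap composition and checks a separating conjunction on concrete heaps:

```python
from itertools import chain, combinations

def combine(h1, h2):
    """Compose two heaps (dicts from location to value): union when the
    domains are disjoint, None (undefined) otherwise."""
    if h1.keys() & h2.keys():
        return None
    return {**h1, **h2}

def pointsto(loc, val):
    """The assertion loc |-> val: the heap is exactly one cell."""
    return lambda h: h == {loc: val}

def star(p, q):
    """P * Q: the heap splits into two disjoint parts, one satisfying P
    and the other satisfying Q."""
    def holds(h):
        items = list(h.items())
        splits = chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1))
        return any(p(dict(part)) and
                   q({k: v for k, v in items if (k, v) not in part})
                   for part in splits)
    return holds

# A two-cell heap satisfies 10 |-> 42 * 20 |-> 7, but a single cell
# cannot be split to satisfy two separate points-to assertions.
assert combine({10: 42}, {20: 7}) == {10: 42, 20: 7}
assert combine({10: 42}, {10: 0}) is None
assert star(pointsto(10, 42), pointsto(20, 7))({10: 42, 20: 7})
assert not star(pointsto(10, 42), pointsto(10, 42))({10: 42})
```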
Separation logic was used originally to prove properties of sequential programs, but then was extended to concurrency using a proof rule that divides the storage accessed by parallel threads: from {P₁} C₁ {Q₁} and {P₂} C₂ {Q₂} infer {P₁ ∗ P₂} C₁ ∥ C₂ {Q₁ ∗ Q₂}. [15]
Later, the greater generality of the resource semantics was utilized: an abstract version of separation logic works for Hoare triples where the preconditions and postconditions are formulae interpreted over an arbitrary partial commutative monoid instead of a particular heap model. [16] By suitable choice of commutative monoid, it was surprisingly found that the proof rules of abstract versions of concurrent separation logic could be used to reason about interfering concurrent processes, for example by encoding rely-guarantee and trace-based reasoning. [17] [18]
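The partial commutative monoid structure assumed by the abstract logic is small enough to spell out. The following sketch uses hypothetical class names of our own, for illustration only, and gives the interface together with two standard instances, disjoint heaps and fractional permissions:

```python
class PCM:
    """A partial commutative monoid: a carrier with a unit and a partial,
    commutative, associative composition (None means undefined)."""
    unit = None
    def combine(self, a, b):
        raise NotImplementedError

class DisjointHeaps(PCM):
    """Heaps, as sets of (location, value) pairs, compose only when
    their domains do not overlap."""
    unit = frozenset()  # the empty heap
    def combine(self, a, b):
        if {l for l, _ in a} & {l for l, _ in b}:
            return None
        return a | b

class Permissions(PCM):
    """Fractional permissions on one cell: shares in (0, 1] compose by
    addition but may not exceed full ownership (1)."""
    unit = 0.0
    def combine(self, a, b):
        total = a + b
        return total if total <= 1.0 else None

# Two half permissions recombine into full ownership; over-composition
# is undefined, which is what lets * divide ownership soundly.
perms = Permissions()
assert perms.combine(0.5, 0.5) == 1.0
assert perms.combine(0.75, 0.5) is None
```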
Separation logic is the basis of a number of tools for automatic and semi-automatic reasoning about programs, and is used in the Infer program analyzer currently deployed at Facebook. [19]
Bunched logic has been used in connection with the (synchronous) resource-process calculus SCRP [4] [5] [6] in order to give a (modal) logic that characterizes, in the sense of Hennessy–Milner, the compositional structure of concurrent systems.
SCRP is notable for interpreting ∗ in terms of both parallel composition of systems and composition of their associated resources. The semantic clause of SCRP's process logic that corresponds to separation logic's rule for concurrency asserts that a formula φ₁ ∗ φ₂ is true in resource-process state r, E just in case there are decompositions r = r₁ • r₂ of the resource and E ~ E₁ × E₂ of the process, where ~ denotes bisimulation, such that φ₁ is true in the resource-process state r₁, E₁ and φ₂ is true in the resource-process state r₂, E₂; that is, r, E ⊨ φ₁ ∗ φ₂ iff r₁, E₁ ⊨ φ₁ and r₂, E₂ ⊨ φ₂.
The system SCRP [4] [5] [6] is based directly on bunched logic's resource semantics; that is, on ordered monoids of resource elements. While direct and intuitively appealing, this choice leads to a specific technical problem: the Hennessy–Milner completeness theorem holds only for fragments of the modal logic that exclude the multiplicative implication and multiplicative modalities. This problem is solved by basing resource-process calculus on a resource semantics in which resource elements are combined using two combinators, one corresponding to concurrent composition and one corresponding to choice. [20]
Cardelli, Caires, Gordon and others have investigated a series of logics of process calculi, in which a conjunction A | B is interpreted in terms of parallel composition. [citation needed] Unlike the work of Pym et al. on SCRP, these logics do not distinguish between parallel composition of systems and composition of the resources accessed by the systems.
Their logics are based on instances of the resource semantics that give rise to models of the boolean variant of bunched logic; although they thus yield instances of boolean bunched logic, they appear to have been arrived at independently, and in any case carry significant additional structure in the way of modalities and binders. Related logics have also been proposed for modelling XML data.