Linear logic

Linear logic is a substructural logic proposed by French logician Jean-Yves Girard as a refinement of classical and intuitionistic logic, joining the dualities of the former with many of the constructive properties of the latter. [1] Although the logic has also been studied for its own sake, more broadly, ideas from linear logic have been influential in fields such as programming languages, game semantics, and quantum physics (because linear logic can be seen as the logic of quantum information theory), [2] as well as linguistics, [3] particularly because of its emphasis on resource-boundedness, duality, and interaction.

Linear logic lends itself to many different presentations, explanations, and intuitions. Proof-theoretically, it derives from an analysis of classical sequent calculus in which uses of (the structural rules) contraction and weakening are carefully controlled. Operationally, this means that logical deduction is no longer merely about an ever-expanding collection of persistent "truths", but also a way of manipulating resources that cannot always be duplicated or thrown away at will. In terms of simple denotational models, linear logic may be seen as refining the interpretation of intuitionistic logic by replacing cartesian (closed) categories by symmetric monoidal (closed) categories, or the interpretation of classical logic by replacing Boolean algebras by C*-algebras.

Connectives, duality, and polarity

Syntax

The language of classical linear logic (CLL) is defined inductively by the BNF notation

A ::= p ∣ p⊥
    ∣ A ⊗ A ∣ A ⊕ A
    ∣ A & A ∣ A ⅋ A
    ∣ 1 ∣ 0 ∣ ⊤ ∣ ⊥
    ∣ !A ∣ ?A

Here p and p⊥ range over logical atoms. For reasons to be explained below, the connectives ⊗, ⅋, 1, and ⊥ are called multiplicatives, the connectives &, ⊕, ⊤, and 0 are called additives, and the connectives ! and ? are called exponentials. We can further employ the following terminology:

Symbol   Name
⊗        multiplicative conjunction ("times" or "tensor")
⊕        additive disjunction ("plus")
&        additive conjunction ("with")
⅋        multiplicative disjunction ("par")
!        of course ("bang")
?        why not ("quest")


Binary connectives ⊗, ⊕, & and ⅋ are associative and commutative; 1 is the unit for ⊗, 0 is the unit for ⊕, ⊥ is the unit for ⅋ and ⊤ is the unit for &.

Every proposition A in CLL has a dual A⊥, defined as follows:

(p)⊥ = p⊥               (p⊥)⊥ = p
(A ⊗ B)⊥ = A⊥ ⅋ B⊥      (A ⅋ B)⊥ = A⊥ ⊗ B⊥
(A ⊕ B)⊥ = A⊥ & B⊥      (A & B)⊥ = A⊥ ⊕ B⊥
(1)⊥ = ⊥                (⊥)⊥ = 1
(0)⊥ = ⊤                (⊤)⊥ = 0
(!A)⊥ = ?(A⊥)           (?A)⊥ = !(A⊥)
Classification of connectives
        add     mul     exp
pos     ⊕ 0     ⊗ 1     !
neg     & ⊤     ⅋ ⊥     ?

Observe that (−)⊥ is an involution, i.e., A⊥⊥ = A for all propositions. A⊥ is also called the linear negation of A.
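The duality equations transcribe directly into a short program. The following sketch is illustrative only: the tuple encoding and the constructor names (tensor, par, plus, with, bang, quest) are our own, not standard notation.

```python
# Formulas as nested tuples: ("atom", p) for p, ("neg", p) for p⊥,
# binary connectives ("tensor", A, B), ("par", A, B), ("plus", A, B),
# ("with", A, B), units ("one",), ("bot",), ("zero",), ("top",),
# and exponentials ("bang", A), ("quest", A).

DUAL_UNIT = {"one": "bot", "bot": "one", "zero": "top", "top": "zero"}
DUAL_BIN = {"tensor": "par", "par": "tensor", "plus": "with", "with": "plus"}
DUAL_EXP = {"bang": "quest", "quest": "bang"}

def dual(f):
    """Compute the linear negation f⊥ by the De Morgan equations above."""
    tag = f[0]
    if tag == "atom":
        return ("neg", f[1])
    if tag == "neg":
        return ("atom", f[1])
    if tag in DUAL_UNIT:
        return (DUAL_UNIT[tag],)
    if tag in DUAL_BIN:
        return (DUAL_BIN[tag], dual(f[1]), dual(f[2]))
    return (DUAL_EXP[tag], dual(f[1]))

# (p ⊗ !q⊥)⊥ = p⊥ ⅋ ?q, and duality is an involution.
A = ("tensor", ("atom", "p"), ("bang", ("neg", "q")))
assert dual(A) == ("par", ("neg", "p"), ("quest", ("atom", "q")))
assert dual(dual(A)) == A
```

The two assertions check one instance of the De Morgan equations and the involution A⊥⊥ = A.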

The columns of the table suggest another way of classifying the connectives of linear logic, termed polarity: the connectives ⊗, ⊕, 1, 0, and ! are called positive, while their duals ⅋, &, ⊥, ⊤, and ? are called negative; cf. the classification table above.

Linear implication is not included in the grammar of connectives, but is definable in CLL using linear negation and multiplicative disjunction, by A ⊸ B := A⊥ ⅋ B. The connective ⊸ is sometimes pronounced "lollipop", owing to its shape.

Sequent calculus presentation

One way of defining linear logic is as a sequent calculus. We use the letters Γ and Δ to range over lists of propositions A1, ..., An, also called contexts. A sequent places a context to the left and the right of the turnstile, written Γ ⊢ Δ. Intuitively, the sequent asserts that the conjunction of Γ entails the disjunction of Δ (though we mean the "multiplicative" conjunction and disjunction, as explained below). Girard describes classical linear logic using only one-sided sequents (where the left-hand context is empty), and we follow that more economical presentation here. This is possible because any premises to the left of a turnstile can always be moved to the other side and dualised.
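For instance, under this convention a two-sided sequent is re-expressed one-sidedly by dualising everything on the left:

```latex
\[
A_1, \ldots, A_n \vdash B_1, \ldots, B_m
\quad\text{corresponds to}\quad
\vdash A_1^{\perp}, \ldots, A_n^{\perp}, B_1, \ldots, B_m
\]
```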

We now give inference rules describing how to build proofs of sequents. [4]

First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of exchange:

⊢ Γ, A1, A2, Δ
--------------- (exchange)
⊢ Γ, A2, A1, Δ

Note that we do not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and the number of copies present.

Next we add initial sequents and cuts:

 
--------- (init)
⊢ A⊥, A

⊢ Γ, A    ⊢ A⊥, Δ
------------------ (cut)
⊢ Γ, Δ

The cut rule can be seen as a way of composing proofs, and initial sequents serve as the units for composition. In a certain sense these rules are redundant: as we introduce additional rules for building proofs below, we will maintain the property that arbitrary initial sequents can be derived from atomic initial sequents, and that whenever a sequent is provable it can be given a cut-free proof. Ultimately, this canonical form property (which can be divided into the completeness of atomic initial sequents and the cut-elimination theorem, inducing a notion of analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware lambda-calculus.

Now, we explain the connectives by giving logical rules. Typically in sequent calculus one gives both "right-rules" and "left-rules" for each connective, essentially describing two modes of reasoning about propositions involving that connective (e.g., verification and falsification). In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⅋) effectively play the role of left-rules for its dual (⊗). So, we should expect a certain "harmony" between the rule(s) for a connective and the rule(s) for its dual.

Multiplicatives

The rules for multiplicative conjunction (⊗) and disjunction (⅋):

⊢ Γ, A    ⊢ Δ, B
----------------- (⊗)
⊢ Γ, Δ, A ⊗ B

⊢ Γ, A, B
---------- (⅋)
⊢ Γ, A ⅋ B

and for their units:

 
----- (1)
⊢ 1

⊢ Γ
------- (⊥)
⊢ Γ, ⊥

Observe that the rules for multiplicative conjunction and disjunction are admissible for plain conjunction and disjunction under a classical interpretation (i.e., they are admissible rules in LK).
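As a worked example under these rules (in the LaTeX below, \parr from the cmll package denotes ⅋), here is a cut-free proof of ⊢ A⊥ ⅋ B⊥, A ⊗ B from atomic initial sequents; note how the ⊗ rule splits the context between its premises:

```latex
\[
\dfrac{\dfrac{}{\vdash A^{\perp}, A}\;(\mathit{init})
       \qquad
       \dfrac{}{\vdash B^{\perp}, B}\;(\mathit{init})}
      {\vdash A^{\perp}, B^{\perp},\, A \otimes B}\;(\otimes)
\qquad
\dfrac{\vdash A^{\perp}, B^{\perp},\, A \otimes B}
      {\vdash A^{\perp} \parr B^{\perp},\, A \otimes B}\;(\parr)
\]
```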

Additives

The rules for additive conjunction (&) and disjunction (⊕):

⊢ Γ, A    ⊢ Γ, B
----------------- (&)
⊢ Γ, A & B

⊢ Γ, A
----------- (⊕1)
⊢ Γ, A ⊕ B

⊢ Γ, B
----------- (⊕2)
⊢ Γ, A ⊕ B

and for their units:

 
-------- (⊤)
⊢ Γ, ⊤

(no rule for 0)

Observe that the rules for additive conjunction and disjunction are again admissible under a classical interpretation. But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises.
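A short derivation illustrating this, written in LaTeX: the & rule below copies the context A⊥ ⊕ B⊥ whole into both premises.

```latex
\[
\dfrac{\dfrac{\vdash A^{\perp}, A}{\vdash A^{\perp} \oplus B^{\perp},\, A}\;(\oplus_1)
       \qquad
       \dfrac{\vdash B^{\perp}, B}{\vdash A^{\perp} \oplus B^{\perp},\, B}\;(\oplus_2)}
      {\vdash A^{\perp} \oplus B^{\perp},\; A \,\&\, B}\;(\&)
\]
```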

Exponentials

The exponentials are used to give controlled access to weakening and contraction. Specifically, we add structural rules of weakening and contraction for ?'d propositions: [5]

⊢ Γ
--------- (weakening)
⊢ Γ, ?A

⊢ Γ, ?A, ?A
------------ (contraction)
⊢ Γ, ?A

and use the following logical rules, in which ?Γ stands for a list of propositions each prefixed with ?:

⊢ ?Γ, A
---------- (promotion)
⊢ ?Γ, !A

⊢ Γ, A
--------- (dereliction)
⊢ Γ, ?A

One might observe that the rules for the exponentials follow a different pattern from the rules for the other connectives, resembling the inference rules governing modalities in sequent calculus formalisations of the normal modal logic S4, and that there is no longer such a clear symmetry between the duals ! and ?. This situation is remedied in alternative presentations of CLL (e.g., the LU presentation).
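As an example combining these rules, here is a proof (in LaTeX; \parr from the cmll package denotes ⅋) of ⊢ ?A⊥ ⅋ A, i.e. of !A ⊸ A, since !A ⊸ A = (!A)⊥ ⅋ A = ?A⊥ ⅋ A. It shows that a !-marked resource can, in particular, be used exactly once:

```latex
\[
\dfrac{\dfrac{\dfrac{}{\vdash A^{\perp}, A}\;(\mathit{init})}
             {\vdash\, ?A^{\perp},\, A}\;(\text{dereliction})}
      {\vdash\, ?A^{\perp} \parr A}\;(\parr)
\]
```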

Remarkable formulas

In addition to the De Morgan dualities described above, some important equivalences in linear logic include:

Distributivity
A ⊗ (B ⊕ C) ≣ (A ⊗ B) ⊕ (A ⊗ C)
(A ⊕ B) ⊗ C ≣ (A ⊗ C) ⊕ (B ⊗ C)
A ⅋ (B & C) ≣ (A ⅋ B) & (A ⅋ C)
(A & B) ⅋ C ≣ (A ⅋ C) & (B ⅋ C)

By the definition of A ⊸ B as A⊥ ⅋ B, the last two distributivity laws also give:

A ⊸ (B & C) ≣ (A ⊸ B) & (A ⊸ C)
(A ⊕ B) ⊸ C ≣ (A ⊸ C) & (B ⊸ C)

(Here A ≣ B means (A ⊸ B) & (B ⊸ A).)

Exponential isomorphism
!(A & B) ≣ !A ⊗ !B
?(A ⊕ B) ≣ ?A ⅋ ?B
Linear distributions

A map that is not an isomorphism, yet plays a crucial role in linear logic, is:

(A ⊗ (B ⅋ C)) ⊸ ((A ⊗ B) ⅋ C)

Linear distributions are fundamental in the proof theory of linear logic. The consequences of this map were first investigated by Cockett and Seely, [6] who called it a "weak distribution". In subsequent work it was renamed "linear distribution" to reflect its fundamental connection to linear logic.

Other implications

The following distributivity formulas are not, in general, equivalences, only implications:

!A ⊗ !B ⊸ !(A ⊗ B)
!A ⊕ !B ⊸ !(A ⊕ B)
?(A ⅋ B) ⊸ ?A ⅋ ?B
?(A & B) ⊸ ?A & ?B
(A & B) ⊗ C ⊸ (A ⊗ C) & (B ⊗ C)
(A & B) ⊕ C ⊸ (A ⊕ C) & (B ⊕ C)
(A ⅋ C) ⊕ (B ⅋ C) ⊸ (A ⊕ B) ⅋ C
(A & C) ⊕ (B & C) ⊸ (A ⊕ B) & C

Encoding classical/intuitionistic logic in linear logic

Both intuitionistic and classical implication can be recovered from linear implication by inserting exponentials: intuitionistic implication is encoded as !A ⊸ B, while classical implication can be encoded as !?A ⊸ ?B or !A ⊸ ?!B (or a variety of alternative possible translations). [7] The idea is that exponentials allow a formula to be used as many times as needed, which is always possible in classical and intuitionistic logic.

Formally, there exists a translation of formulas of intuitionistic logic to formulas of linear logic in a way that guarantees that the original formula is provable in intuitionistic logic if and only if the translated formula is provable in linear logic. Using the Gödel–Gentzen negative translation, we can thus embed classical first-order logic into linear first-order logic.
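One standard such translation is Girard's "call-by-name" embedding (−)° of intuitionistic logic, sketched below in LaTeX; atoms translate to themselves:

```latex
\begin{aligned}
(A \to B)^{\circ}    &= \,!A^{\circ} \multimap B^{\circ} \\
(A \wedge B)^{\circ} &= A^{\circ} \,\&\, B^{\circ} \\
(A \vee B)^{\circ}   &= \,!A^{\circ} \oplus \,!B^{\circ} \\
\bot^{\circ}         &= 0
\end{aligned}
```

The exponential !A° in the implication clause is exactly what licenses using the hypothesis A any number of times.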

The resource interpretation

Lafont (1993) first showed how intuitionistic linear logic can be explained as a logic of resources, thus providing the logical language with access to formalisms that can be used for reasoning about resources within the logic itself, rather than, as in classical logic, by means of non-logical predicates and relations. Tony Hoare's (1985) classic example of the vending machine can be used to illustrate this idea.

Suppose we represent having a candy bar by the atomic proposition candy, and having a dollar by $1. To state the fact that a dollar will buy you one candy bar, we might write the implication $1 ⇒ candy. But in ordinary (classical or intuitionistic) logic, from A and A ⇒ B one can conclude A ∧ B. So, ordinary logic leads us to believe that we can buy the candy bar and keep our dollar! Of course, we can avoid this problem by using more sophisticated encodings, although typically such encodings suffer from the frame problem. However, the rejection of weakening and contraction allows linear logic to avoid this kind of spurious reasoning even with the "naive" rule. Rather than $1 ⇒ candy, we express the property of the vending machine as a linear implication $1 ⊸ candy. From $1 and this fact, we can conclude candy, but not $1 ⊗ candy. In general, we can use the linear logic proposition A ⊸ B to express the validity of transforming resource A into resource B.

Running with the example of the vending machine, consider the "resource interpretations" of the other multiplicative and additive connectives. (The exponentials provide the means to combine this resource interpretation with the usual notion of persistent logical truth.)

Multiplicative conjunction (A ⊗ B) denotes simultaneous occurrence of resources, to be used as the consumer directs. For example, if you buy a stick of gum and a bottle of soft drink, then you are requesting gum ⊗ drink. The constant 1 denotes the absence of any resource, and so functions as the unit of ⊗.

Additive conjunction (A & B) represents alternative occurrence of resources, the choice of which the consumer controls. If in the vending machine there is a packet of chips, a candy bar, and a can of soft drink, each costing one dollar, then for that price you can buy exactly one of these products. Thus we write $1 ⊸ (candy & chips & drink). We do not write $1 ⊸ (candy ⊗ chips ⊗ drink), which would imply that one dollar suffices for buying all three products together. However, from $1 ⊸ (candy & chips & drink), we can correctly deduce $3 ⊸ (candy ⊗ chips ⊗ drink), where $3 := $1 ⊗ $1 ⊗ $1. The unit ⊤ of additive conjunction can be seen as a wastebasket for unneeded resources. For example, we can write $3 ⊸ (candy ⊗ ⊤) to express that with three dollars you can get a candy bar and some other stuff, without being more specific (for example, chips and a drink, or $2, or $1 and chips, etc.).

Additive disjunction (A ⊕ B) represents alternative occurrence of resources, the choice of which the machine controls. For example, suppose the vending machine permits gambling: insert a dollar and the machine may dispense a candy bar, a packet of chips, or a soft drink. We can express this situation as $1 ⊸ (candy ⊕ chips ⊕ drink). The constant 0 represents a product that cannot be made, and thus serves as the unit of ⊕ (a machine that might produce A or 0 is as good as a machine that always produces A, because it will never succeed in producing a 0). So, unlike above, we cannot deduce $3 ⊸ (candy ⊗ chips ⊗ drink) from this.
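The consumption reading of ⊸ can be mimicked with multisets of resource tokens. The following Python sketch (the function name and token encoding are ours, purely illustrative) rejects any transition whose inputs are not present, so a dollar spent is a dollar gone:

```python
from collections import Counter

def apply_rule(resources, consumed, produced):
    """Fire the 'transition' consumed ⊸ produced on a multiset of tokens.

    Unlike classical entailment, the consumed tokens are removed:
    this function can neither duplicate them (no contraction) nor
    silently discard them (no weakening).
    """
    state = Counter(resources)
    for token in consumed:
        if state[token] <= 0:
            raise ValueError(f"missing resource: {token}")
        state[token] -= 1
    state.update(produced)
    return +state  # unary + on a Counter drops zero counts

# The vending machine as $1 ⊸ candy: the dollar is consumed.
state = apply_rule({"$1": 1}, ["$1"], ["candy"])
assert state == Counter({"candy": 1})   # candy, but not $1 ⊗ candy
```

This is essentially the Petri-net reading of the tensor fragment: tokens are resources, and each firing of a rule is one linear implication step.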

Other proof systems

Proof nets

Introduced by Jean-Yves Girard, proof nets were created to avoid "bureaucracy": the inessential details that make two derivations formally different even though they are identical from a logical point of view.

For instance, these two proofs are "morally" identical:

⊢ A, B, C, D
--------------
⊢ A ⅋ B, C, D
----------------
⊢ A ⅋ B, C ⅋ D

⊢ A, B, C, D
--------------
⊢ A, B, C ⅋ D
----------------
⊢ A ⅋ B, C ⅋ D

The goal of proof nets is to identify such derivations by giving them a single graphical representation.

Semantics

Algebraic semantics

Decidability/complexity of entailment

The entailment relation in full CLL is undecidable. [8] When considering fragments of CLL, the decision problem has varying complexity. For the multiplicative fragment (MLL), provability is NP-complete, even when restricted to Horn clauses [9] or to constant-only formulas. [10] For the multiplicative-additive fragment (MALL), provability is PSPACE-complete. [8] The decidability of the multiplicative-exponential fragment (MELL) is a long-standing open problem. [13] By contrast, full linear logic extended with unrestricted weakening (affine logic) is decidable. [14]

Variants

Many variations of linear logic arise by further tinkering with the structural rules. Affine logic forbids contraction but allows weakening; equivalently, it is linear logic with weakening. Strict (relevant) logics forbid weakening but allow contraction. Non-commutative, or ordered, logics additionally remove or restrict the rule of exchange, so that hypotheses must be used in order.

Different intuitionistic variants of linear logic have been considered. When based on a single-conclusion sequent calculus presentation, as in ILL (Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are absent, and linear implication is treated as a primitive connective. In FILL (Full Intuitionistic Linear Logic) the connectives ⅋, ⊥, and ? are present, linear implication is a primitive connective, and, as in intuitionistic logic, all connectives (except linear negation) are independent. There are also first- and higher-order extensions of linear logic, whose formal development is largely standard (see first-order logic and higher-order logic).


References

  1. Girard, Jean-Yves (1987). "Linear logic" (PDF). Theoretical Computer Science . 50 (1): 1–102. doi: 10.1016/0304-3975(87)90045-4 . hdl:10338.dmlcz/120513.
  2. Baez, John; Stay, Mike (2008). Bob Coecke (ed.). "Physics, Topology, Logic and Computation: A Rosetta Stone" (PDF). New Structures of Physics.
  3. de Paiva, V.; van Genabith, J.; Ritter, E. (1999). "Dagstuhl Seminar 99341 on Linear Logic and Applications" (PDF). Dagstuhl Seminar Report 248. Schloss Dagstuhl – Leibniz-Zentrum für Informatik: 1–21. doi:10.4230/DagSemRep.248.
  4. Girard (1987), p. 22, Def. 1.15.
  5. Girard (1987), pp. 25–26, Def. 1.21.
  6. Cockett, J. Robin; Seely, Robert A. G. (1997). "Weakly distributive categories". Journal of Pure and Applied Algebra. 114 (2): 133–173.
  7. Di Cosmo, Roberto. The Linear Logic Primer . Course notes; chapter 2.
  8. For this result and discussion of some of the fragments below, see: Lincoln, Patrick; Mitchell, John; Scedrov, Andre; Shankar, Natarajan (1992). "Decision Problems for Propositional Linear Logic". Annals of Pure and Applied Logic. 56 (1–3): 239–311. doi:10.1016/0168-0072(92)90075-B.
  9. Kanovich, Max I. (1992). "Horn programming in linear logic is NP-complete". Proceedings of the Seventh Annual IEEE Symposium on Logic in Computer Science (LICS '92). pp. 200–210. doi:10.1109/LICS.1992.185533. ISBN 0-8186-2735-2.
  10. Lincoln, Patrick; Winkler, Timothy (1994). "Constant-only multiplicative linear logic is NP-complete". Theoretical Computer Science. 135: 155–169. doi: 10.1016/0304-3975(94)00108-1 .
  11. Gunter, C. A.; Gehlot, V. (1989). Nets as Tensor Theories (PDF) (Technical report). University of Pennsylvania. MS-CIS-89-68.
  12. Bimbó, Katalin (2015-09-13). "The decidability of the intensional fragment of classical linear logic". Theoretical Computer Science. 597: 1–17. doi: 10.1016/j.tcs.2015.06.019 . ISSN   0304-3975.
  13. Straßburger, Lutz (2019-05-10). "On the decision problem for MELL". Theoretical Computer Science. 768: 91–98. doi: 10.1016/j.tcs.2019.02.022 . ISSN   0304-3975.
  14. Kopylov, A. P. (1995). "Decidability of linear affine logic". Proceedings of the Tenth Annual IEEE Symposium on Logic in Computer Science (LICS '95). pp. 496–504. doi:10.1109/LICS.1995.523283. ISBN 0-8186-7050-9.
