Conjunction/disjunction duality

In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction, [1] [2] [3] also called the duality principle. [4] [5] [6] It is the most widely known example of duality in logic. [1] The duality consists in these metalogical theorems:

  1. Conjunction and disjunction are definable in terms of one another by means of negation, so that each of the connective sets {¬, &} and {¬, ∨} is functionally complete.
  2. De Morgan's laws: the negation of a conjunction is equivalent to the disjunction of the negations, and the negation of a disjunction is equivalent to the conjunction of the negations.
  3. Every sentence is semantically equivalent to the negation of its dual: φ ⟚ ¬(φ*).

Mutual definability

The connectives may be defined in terms of each other as follows:

  (φ & ψ) ⟚ ¬(¬φ ∨ ¬ψ)   (1)
  (φ ∨ ψ) ⟚ ¬(¬φ & ¬ψ)   (2)

Both definitions rely on the classical law of double negation:

  φ ⟚ ¬¬φ   (3)
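As a quick illustration (not part of the original article), equivalences (1)–(3) can be checked by brute force over all classical truth-value assignments:

```python
from itertools import product

# Exhaustively verify the mutual-definability equivalences over every
# assignment of truth values to the two sentences involved.
for p, q in product([False, True], repeat=2):
    assert (p and q) == (not (not p or not q))  # (1): & defined from ∨ and ¬
    assert (p or q) == (not (not p and not q))  # (2): ∨ defined from & and ¬
    assert p == (not not p)                     # (3): double negation
```

Because the connectives are truth-functional, checking all four assignments suffices as a proof of the equivalences.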

Functional completeness

Since the Disjunctive Normal Form Theorem shows that the set of connectives {¬, &, ∨} is functionally complete, these results show that the sets of connectives {¬, &} and {¬, ∨} are themselves functionally complete as well.
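Functional completeness can be illustrated with a small check (an illustration of mine, not from the source): material implication, a connective outside the set {¬, ∨}, is expressible as ¬p ∨ q.

```python
# Standard truth table for material implication p → q.
table = {(False, False): True, (False, True): True,
         (True,  False): False, (True,  True): True}

# ¬p ∨ q reproduces that table exactly, using only ¬ and ∨.
for (p, q), expected in table.items():
    assert ((not p) or q) == expected
```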

De Morgan's laws

De Morgan's laws also follow from the definitions of these connectives in terms of each other, whichever direction of definition is taken. [1]

  ¬(φ & ψ) ⟚ (¬φ ∨ ¬ψ)   (4)
  ¬(φ ∨ ψ) ⟚ (¬φ & ¬ψ)   (5)
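De Morgan's laws have a well-known set-theoretic counterpart, which can be checked directly in Python (an illustration, not from the source): the complement of an intersection is the union of the complements, and dually for unions.

```python
# A small universe and two subsets; set difference from U plays the
# role of complement, & of conjunction, | of disjunction.
U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}

assert U - (A & B) == (U - A) | (U - B)   # counterpart of (4)
assert U - (A | B) == (U - A) & (U - B)   # counterpart of (5)
```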

Duality properties

The dual of a sentence is the result of replacing every occurrence of ∨ with & and every occurrence of & with ∨, while also negating every atomic sentence. For example, the dual of ((A & B) ∨ C) is ((¬A ∨ ¬B) & ¬C). The dual of a formula φ is notated as φ*. The Duality Principle states that in classical propositional logic, any sentence is semantically equivalent to the negation of its dual. [4] [7]

Duality Principle: For all φ, we have that φ ⟚ ¬(φ*). [4] [7]
Proof: By induction on complexity. For the base case, we consider an arbitrary atomic sentence A. Since its dual is ¬A, the negation of its dual will be ¬¬A, which is indeed equivalent to A. For the induction step, we consider an arbitrary φ and assume that the result holds for all sentences of lower complexity. Three cases:
  1. If φ is of the form ¬ψ for some ψ, then its dual will be ¬(ψ*) and the negation of its dual will therefore be ¬¬(ψ*). Now, since ψ is less complex than φ, the induction hypothesis gives us that ψ ⟚ ¬(ψ*). By substitution, this gives us that φ ⟚ ¬¬(ψ*), which is to say that φ is equivalent to the negation of its dual.
  2. If φ is of the form (ψ ∨ χ) for some ψ and χ, then its dual will be (ψ* & χ*), and the negation of its dual will therefore be ¬(ψ* & χ*). Now, since ψ and χ are less complex than φ, the induction hypothesis gives us that ψ ⟚ ¬(ψ*) and χ ⟚ ¬(χ*). By substitution, this gives us that φ ⟚ ¬(ψ*) ∨ ¬(χ*), which in turn gives us that φ ⟚ ¬(ψ* & χ*) by De Morgan's laws. And that is once again just to say that φ is equivalent to the negation of its dual.
  3. If φ is of the form (ψ & χ), the result follows by analogous reasoning.
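The dual operation and the Duality Principle can also be checked mechanically by exhaustive truth tables. The following Python sketch (the tuple representation and helper names are my own, not from the source) computes duals recursively and verifies φ ⟚ ¬(φ*) for the sample formula ((A & B) ∨ C):

```python
from itertools import product

# Formulas as nested tuples: 'A' (atomic), ('not', f), ('and', f, g), ('or', f, g).
def dual(f):
    """Swap & with ∨ and negate every atomic sentence."""
    if isinstance(f, str):                       # atomic sentence
        return ('not', f)
    op, *args = f
    if op == 'not':
        return ('not', dual(args[0]))
    swapped = {'and': 'or', 'or': 'and'}[op]
    return (swapped, dual(args[0]), dual(args[1]))

def evaluate(f, v):
    """Evaluate a formula under a valuation v mapping atoms to booleans."""
    if isinstance(f, str):
        return v[f]
    op, *args = f
    if op == 'not':
        return not evaluate(args[0], v)
    if op == 'and':
        return evaluate(args[0], v) and evaluate(args[1], v)
    return evaluate(args[0], v) or evaluate(args[1], v)

# Duality Principle check: φ agrees with ¬(φ*) under every valuation.
phi = ('or', ('and', 'A', 'B'), 'C')
neg_dual = ('not', dual(phi))
for a, b, c in product([False, True], repeat=3):
    v = {'A': a, 'B': b, 'C': c}
    assert evaluate(phi, v) == evaluate(neg_dual, v)
```

For this sample, `dual(phi)` is the tuple form of ((¬A ∨ ¬B) & ¬C), matching the worked example above.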

Further duality theorems

Assume φ ⊨ ψ. Then ¬(φ*) ⊨ ¬(ψ*), by substitution of ¬(φ*) for φ and ¬(ψ*) for ψ, licensed by the property that φ ⟚ ¬(φ*), which was just proved above. Hence, ¬¬(ψ*) ⊨ ¬¬(φ*), by contraposition; so finally, ψ* ⊨ φ*, by double negation. [7] And since (φ*)* ⟚ φ, it is also true that φ ⊨ ψ if, and only if, ψ* ⊨ φ*. [7] And it follows, as a corollary, that if φ is a tautology, then φ* is a contradiction. [7]

Conjunctive and disjunctive normal forms

For a formula φ in disjunctive normal form, the formula φ* will be in conjunctive normal form, and given the result of § Duality properties that every formula is equivalent to the negation of its dual, it will be semantically equivalent to ¬φ. [8] [9] This provides a procedure for converting between conjunctive normal form and disjunctive normal form. [10] Since the Disjunctive Normal Form Theorem shows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form, by taking the dual of a disjunctive normal form of its negation. [9]
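As a small illustration (the sample formula is my own, not from the source), the DNF formula (A & B) ∨ C has dual (¬A ∨ ¬B) & ¬C, which is in CNF and semantically equivalent to the negation of the original:

```python
from itertools import product

# φ in DNF: (A & B) ∨ C.  Its dual, (¬A ∨ ¬B) & ¬C, is in CNF and,
# by the negation-of-dual result, is equivalent to ¬φ.
phi      = lambda a, b, c: (a and b) or c
phi_dual = lambda a, b, c: ((not a) or (not b)) and (not c)

for a, b, c in product([False, True], repeat=3):
    assert phi_dual(a, b, c) == (not phi(a, b, c))
```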

References

  1. "Duality in Logic and Language | Internet Encyclopedia of Philosophy". Retrieved 2024-06-10.
  2. "1.1 Logical Operations". www.whitman.edu. Retrieved 2024-06-10.
  3. Look, Brandon C. (2014-09-25). The Bloomsbury Companion to Leibniz. Bloomsbury Publishing. p. 127. ISBN 978-1-4725-2485-0.
  4. Howson, Colin (1997). Logic with trees: an introduction to symbolic logic. London; New York: Routledge. pp. 41, 44–45. ISBN 978-0-415-13342-5.
  5. "Boolean algebra, Part 1 | Review ICS 241". courses.ics.hawaii.edu. Retrieved 2024-06-10.
  6. Kurki-Suonio, R. (2005-07-20). A Practical Theory of Reactive Systems: Incremental Modeling of Dynamic Behaviors. Springer Science & Business Media. pp. 80–81. ISBN 978-3-540-27348-6.
  7. Bostock, David (1997). Intermediate logic. Oxford; New York: Clarendon Press; Oxford University Press. pp. 62–65. ISBN 978-0-19-875141-0.
  8. Robinson, Alan J. A.; Voronkov, Andrei (2001-06-21). Handbook of Automated Reasoning. Gulf Professional Publishing. p. 306. ISBN 978-0-444-82949-8.
  9. Polkowski, Lech T. (2023-10-03). Logic: Reference Book for Computer Scientists: The 2nd Revised, Modified, and Enlarged Edition of "Logics for Computer and Data Sciences, and Artificial Intelligence". Springer Nature. p. 70. ISBN 978-3-031-42034-4.
  10. Bagdasar, Ovidiu (2013-10-28). Concise Computer Mathematics: Tutorials on Theory and Problems. Springer Science & Business Media. p. 36. ISBN 978-3-319-01751-8.
  11. Makridis, Odysseus (2022). Symbolic logic. Palgrave philosophy today. Cham, Switzerland: Palgrave Macmillan. p. 133. ISBN 978-3-030-67395-6.
  12. Lyons, John (1977-06-02). Semantics: Volume 1. Cambridge University Press. p. 145. ISBN 978-0-521-29165-1.