Conjunction/disjunction duality

In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction, [1] [2] [3] also called the duality principle. [4] [5] [6] It is, undoubtedly, the most widely known example of duality in logic. [1] The duality consists in these metalogical theorems:

  - In classical propositional logic, conjunction and disjunction are definable in terms of each other by way of negation, so that only one of the two needs to be taken as primitive.
  - For every sentence φ built from propositional variables with the connectives ∧, ∨, and ¬, the negation ¬φ is semantically equivalent to the dual of φ, i.e. the sentence obtained from φ by interchanging ∧ and ∨ and replacing every propositional variable with its negation.

This article proves these results in the § Mutual definability and § Negation is semantically equivalent to dual sections, respectively.

Mutual definability

Because of their semantics, i.e. the way they are commonly interpreted in classical propositional logic, conjunction and disjunction can be defined in terms of each other with the aid of negation, so that, consequently, only one of them needs to be taken as primitive. [4] [1] For example, if conjunction (∧) and negation (¬) are taken as primitive, then disjunction (∨) can be defined as follows: [1] [8]

φ ∨ ψ := ¬(¬φ ∧ ¬ψ)        (1)

Alternatively, if disjunction is taken as primitive, then conjunction can be defined as follows: [1] [9] [8]

φ ∧ ψ := ¬(¬φ ∨ ¬ψ)        (2)

Also, each of these equivalences can be derived from the other one; for example, if (1) is taken as primitive, then (2) is obtained as follows: [1]

φ ∧ ψ ⟚ ¬¬(φ ∧ ψ) ⟚ ¬¬(¬¬φ ∧ ¬¬ψ) ⟚ ¬(¬φ ∨ ¬ψ)        (3)

(using double negation and, in the last step, definition (1) applied to ¬φ and ¬ψ)
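
As an illustrative check, not drawn from the cited sources, the following Python sketch verifies equivalences (1) and (2) by enumerating all truth-value assignments; the helper names or_from_and and and_from_or are chosen for this example only.

```python
from itertools import product

# Define disjunction from conjunction and negation, as in (1),
# and conjunction from disjunction and negation, as in (2).
def or_from_and(a, b):
    return not (not a and not b)   # (1): a or b  :=  not(not a and not b)

def and_from_or(a, b):
    return not (not a or not b)    # (2): a and b :=  not(not a or not b)

# Check both definitions against the built-in connectives
# on every assignment of truth values to the two variables.
for a, b in product([False, True], repeat=2):
    assert or_from_and(a, b) == (a or b)
    assert and_from_or(a, b) == (a and b)

print("Equivalences (1) and (2) hold on all assignments.")
```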

Functional completeness

Since the Disjunctive Normal Form Theorem shows that the set of connectives {∧, ∨, ¬} is functionally complete, these results show that the sets of connectives {∧, ¬} and {∨, ¬} are themselves functionally complete as well.
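
To illustrate functional completeness concretely (a sketch, not taken from the cited sources), the following Python fragment expresses two further truth functions, the material conditional and exclusive disjunction, using only conjunction and negation, and checks them by brute force; the function names are illustrative.

```python
from itertools import product

# Material conditional and exclusive disjunction written with {and, not} only.
def implies_from_and_not(a, b):
    # a -> b  is equivalent to  not(a and not b)
    return not (a and not b)

def xor_from_and_not(a, b):
    # a xor b  is equivalent to  not(not(a and not b) and not(b and not a))
    return not (not (a and not b) and not (b and not a))

for a, b in product([False, True], repeat=2):
    assert implies_from_and_not(a, b) == ((not a) or b)
    assert xor_from_and_not(a, b) == (a != b)

print("Both truth functions are definable from {and, not}.")
```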

De Morgan's laws

De Morgan's laws also follow from the definitions of these connectives in terms of each other, whichever direction is taken to do it. [1] If conjunction is taken as primitive, then (4) follows immediately from (1), while (5) follows from (1) via (3): [1]

¬(φ ∧ ψ) ⟚ ¬φ ∨ ¬ψ        (4)
¬(φ ∨ ψ) ⟚ ¬φ ∧ ¬ψ        (5)
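
As with the definitions above, De Morgan's laws (4) and (5) can be checked mechanically; the short Python sketch below (illustrative only) does so over all assignments.

```python
from itertools import product

# Exhaustively verify De Morgan's laws (4) and (5).
for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # (4)
    assert (not (p or q)) == ((not p) and (not q))   # (5)

print("De Morgan's laws (4) and (5) hold on all assignments.")
```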

Negation is semantically equivalent to dual

Theorem: [4] [7] Let φ be any sentence in L. (That is, the language L with the propositional variables p₁, p₂, p₃, … and the connectives ∧, ∨, and ¬.) Let φ* be obtained from φ by replacing every occurrence of ∧ in φ by ∨, every occurrence of ∨ by ∧, and every occurrence of a propositional variable pₙ by ¬pₙ. Then φ* ⟚ ¬φ. (φ* is called the dual of φ.)

Proof: [4] [7] A sentence φ of L, where L is as in the theorem, will be said to have the property D if φ* ⟚ ¬φ. We shall prove by induction on immediate predecessors that all sentences of L have D. (An immediate predecessor of a well-formed formula is any of the formulas that are connected by its dominant connective; it follows that sentence letters have no immediate predecessors.) So we have to establish that the following two conditions are satisfied: (1) each pₙ has D; and (2) for any non-atomic sentence φ, from the inductive hypothesis that the immediate predecessors of φ have D, it follows that φ does also.

  1. Each pₙ clearly has no occurrence of ∧ or ∨, and so pₙ* is just ¬pₙ. So showing that pₙ has D merely requires showing that ¬pₙ ⟚ ¬pₙ, which we know to be the case.
  2. The induction step is an argument by cases. If φ is not an atomic sentence, then φ must have one of the following three forms: (i) ψ ∧ χ, (ii) ψ ∨ χ, or (iii) ¬ψ, where ψ and χ are sentences of L. If φ is of the form (i) or (ii), it has as immediate predecessors ψ and χ, while if it is of the form (iii), it has the one immediate predecessor ψ. We shall check that the induction step holds in each of the cases.
    1. Suppose that ψ and χ each have D, i.e. that ψ* ⟚ ¬ψ and χ* ⟚ ¬χ. This supposition, recall, is the inductive hypothesis. From this we infer that ψ* ∨ χ* ⟚ ¬ψ ∨ ¬χ. By De Morgan's laws, ¬ψ ∨ ¬χ ⟚ ¬(ψ ∧ χ). But (ψ ∧ χ)* is ψ* ∨ χ*, and so (ψ ∧ χ)* ⟚ ¬ψ ∨ ¬χ. So it has been shown that the inductive hypothesis implies that (ψ ∧ χ)* ⟚ ¬(ψ ∧ χ), i.e. ψ ∧ χ has D, as required.
    2. We have the same inductive hypothesis as in (i). So again ψ* ⟚ ¬ψ and χ* ⟚ ¬χ. Hence ψ* ∧ χ* ⟚ ¬ψ ∧ ¬χ. By De Morgan again, ¬ψ ∧ ¬χ ⟚ ¬(ψ ∨ χ). But (ψ ∨ χ)* is ψ* ∧ χ*. So (ψ ∨ χ)* ⟚ ¬(ψ ∨ χ) in this case too.
    3. Here the inductive hypothesis is simply that ψ* ⟚ ¬ψ. Hence ¬(ψ*) ⟚ ¬¬ψ. But (¬ψ)* is ¬(ψ*). Hence (¬ψ)* ⟚ ¬(¬ψ). Q.E.D.
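
The theorem can also be checked programmatically on particular sentences. The sketch below is an illustration under an assumed tuple-based representation, not part of the cited proof: it represents sentences as nested tuples, computes the dual as defined above, and verifies that the dual is semantically equivalent to the negation on a few examples.

```python
from itertools import product

# A sentence is a variable name (str), or a tuple:
# ('not', A), ('and', A, B), ('or', A, B).

def dual(f):
    """Swap 'and'/'or' and replace each variable p by ('not', p)."""
    if isinstance(f, str):
        return ('not', f)
    if f[0] == 'not':
        return ('not', dual(f[1]))
    op = 'or' if f[0] == 'and' else 'and'
    return (op, dual(f[1]), dual(f[2]))

def value(f, v):
    """Evaluate sentence f under the assignment v (a dict of variable values)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not value(f[1], v)
    if f[0] == 'and':
        return value(f[1], v) and value(f[2], v)
    return value(f[1], v) or value(f[2], v)

def variables(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(g) for g in f[1:]))

def equivalent(f, g):
    """True if f and g agree under every assignment to their variables."""
    names = sorted(variables(f) | variables(g))
    return all(value(f, dict(zip(names, vals))) == value(g, dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

examples = [
    ('and', 'p', 'q'),
    ('or', ('and', 'p', 'q'), ('not', 'r')),
    ('not', ('or', 'p', ('and', 'q', 'r'))),
]
for f in examples:
    assert equivalent(dual(f), ('not', f))   # the dual is equivalent to the negation

print("On these examples, the dual of each sentence is equivalent to its negation.")
```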

Further duality theorems

Assume φ ⊨ ψ. Then ¬ψ ⊨ ¬φ, by contraposition; so ψ* ⊨ φ*, by the property that φ* ⟚ ¬φ, which was just proved above. [7] And since (φ*)* ⟚ ¬(φ*) ⟚ ¬¬φ ⟚ φ, the same argument can be applied to ψ* and φ*, so it is also true that φ ⊨ ψ if, and only if, ψ* ⊨ φ*. [7] And it follows, as a corollary, that if φ ⟚ ψ, then φ* ⟚ ψ*. [7]
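
For a concrete instance of the first of these theorems (an illustrative check, not from the cited sources): with φ = p ∧ q and ψ = p ∨ q, we have φ ⊨ ψ, and the duals φ* = ¬p ∨ ¬q and ψ* = ¬p ∧ ¬q satisfy ψ* ⊨ φ*, as the following short Python check confirms.

```python
from itertools import product

phi      = lambda p, q: p and q              # φ  = p ∧ q
psi      = lambda p, q: p or q               # ψ  = p ∨ q
phi_dual = lambda p, q: (not p) or (not q)   # φ* = ¬p ∨ ¬q
psi_dual = lambda p, q: (not p) and (not q)  # ψ* = ¬p ∧ ¬q

def entails(f, g):
    """f ⊨ g: every assignment making f true also makes g true."""
    return all((not f(p, q)) or g(p, q) for p, q in product([False, True], repeat=2))

assert entails(phi, psi)            # φ ⊨ ψ
assert entails(psi_dual, phi_dual)  # ψ* ⊨ φ*
print("φ ⊨ ψ and, dually, ψ* ⊨ φ*.")
```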

Conjunctive and disjunctive normal forms

For a formula φ in disjunctive normal form, the formula φ* will be in conjunctive normal form, and, by the result of § Negation is semantically equivalent to dual, it will be semantically equivalent to ¬φ. [10] [11] This provides a procedure for converting between conjunctive normal form and disjunctive normal form. [12] Since the Disjunctive Normal Form Theorem shows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form by means of this conversion to the dual. [11]
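
The conversion can be made concrete with a small sketch (the representation and names below are illustrative assumptions, not from the cited sources): a formula in disjunctive normal form is represented as a list of terms, each term a list of literals; dualizing it, i.e. swapping the roles of the connectives and negating every literal, yields a list of clauses in conjunctive normal form that is equivalent to the negation of the original formula.

```python
from itertools import product

# A literal is a pair (variable_name, polarity); a DNF formula is a list of
# terms (conjunctions of literals), and a CNF formula is a list of clauses
# (disjunctions of literals).

def dualize(dnf):
    """Swap the roles of ∧ and ∨ and negate every literal.

    The result, read as CNF, is semantically equivalent to the negation
    of the DNF input."""
    return [[(name, not pol) for (name, pol) in term] for term in dnf]

def eval_dnf(dnf, v):
    return any(all(v[name] == pol for name, pol in term) for term in dnf)

def eval_cnf(cnf, v):
    return all(any(v[name] == pol for name, pol in clause) for clause in cnf)

# φ = (p ∧ q) ∨ (¬p ∧ r), in DNF.
phi_dnf = [[('p', True), ('q', True)], [('p', False), ('r', True)]]
phi_dual_cnf = dualize(phi_dnf)       # (¬p ∨ ¬q) ∧ (p ∨ ¬r), in CNF

names = ['p', 'q', 'r']
for vals in product([False, True], repeat=3):
    v = dict(zip(names, vals))
    assert eval_cnf(phi_dual_cnf, v) == (not eval_dnf(phi_dnf, v))

print("The dualized DNF, read as CNF, is equivalent to the negation of the original.")
```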

Related Research Articles

<span class="mw-page-title-main">Logical disjunction</span> Logical connective OR

In logic, disjunction, also known as logical disjunction or logical or or logical addition or inclusive disjunction, is a logical connective typically notated as and read aloud as "or". For instance, the English language sentence "it is sunny or it is warm" can be represented in logic using the disjunctive formula , assuming that abbreviates "it is sunny" and abbreviates "it is warm".

First-order logic—also called predicate logic, predicate calculus, quantificational logic—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.

<span class="mw-page-title-main">Original proof of Gödel's completeness theorem</span>

The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem.

The propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation. Some sources include other connectives, as in the table below.

<span class="mw-page-title-main">De Morgan's laws</span> Pair of logical equivalences

In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.

In boolean logic, a disjunctive normal form (DNF) is a canonical normal form of a logical formula consisting of a disjunction of conjunctions; it can also be described as an OR of ANDs, a sum of products, or — in philosophical logic — a cluster concept. As a normal form, it is useful in automated theorem proving.

In Boolean logic, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is a product of sums or an AND of ORs. As a canonical normal form, it is useful in automated theorem proving and circuit theory.

In classical deductive logic, a consistent theory is one that does not lead to a logical contradiction. The lack of contradiction can be defined in either semantic or syntactic terms. The semantic definition states that a theory is consistent if it has a model, i.e., there exists an interpretation under which all formulas in the theory are true. This is the sense used in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead. The syntactic definition states a theory is consistent if there is no formula such that both and its negation are elements of the set of consequences of . Let be a set of closed sentences and the set of closed sentences provable from under some formal deductive system. The set of axioms is consistent when there is no formula such that and .

Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.

A formula of the predicate calculus is in prenex normal form (PNF) if it is written as a string of quantifiers and bound variables, called the prefix, followed by a quantifier-free part, called the matrix. Together with the normal forms in propositional logic, it provides a canonical normal form useful in automated theorem proving.

Computation tree logic (CTL) is a branching-time logic, meaning that its model of time is a tree-like structure in which the future is not determined; there are different paths in the future, any one of which might be an actual path that is realized. It is used in formal verification of software or hardware artifacts, typically by software applications known as model checkers, which determine if a given artifact possesses safety or liveness properties. For example, CTL can specify that when some initial condition is satisfied, then all possible executions of a program avoid some undesirable condition. In this example, the safety property could be verified by a model checker that explores all possible transitions out of program states satisfying the initial condition and ensures that all such executions satisfy the property. Computation tree logic belongs to a class of temporal logics that includes linear temporal logic (LTL). Although there are properties expressible only in CTL and properties expressible only in LTL, all properties expressible in either logic can also be expressed in CTL*.

Independence-friendly logic is an extension of classical first-order logic (FOL) by means of slashed quantifiers of the form and , where is a finite set of variables. The intended reading of is "there is a which is functionally independent from the variables in ". IF logic allows one to express more general patterns of dependence between variables than those which are implicit in first-order logic. This greater level of generality leads to an actual increase in expressive power; the set of IF sentences can characterize the same classes of structures as existential second-order logic.

In mathematical logic, Craig's interpolation theorem is a result about the relationship between different logical theories. Roughly stated, the theorem says that if a formula φ implies a formula ψ, and the two have at least one atomic variable symbol in common, then there is a formula ρ, called an interpolant, such that every non-logical symbol in ρ occurs both in φ and ψ, φ implies ρ, and ρ implies ψ. The theorem was first proved for first-order logic by William Craig in 1957. Variants of the theorem hold for other logics, such as propositional logic. A stronger form of Craig's interpolation theorem for first-order logic was proved by Roger Lyndon in 1959; the overall result is sometimes called the Craig–Lyndon theorem.

In mathematical logic, Heyting arithmetic is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it.

In constructive mathematics, pseudo-order is a name given to certain binary relations appropriate for modeling continuous orderings.

In modal logic, standard translation is a logic translation that transforms formulas of modal logic into formulas of first-order logic which capture the meaning of the modal formulas. Standard translation is defined inductively on the structure of the formula. In short, atomic formulas are mapped onto unary predicates and the objects in the first-order language are the accessible worlds. The logical connectives from propositional logic remain untouched and the modal operators are transformed into first-order formulas according to their semantics.

Metric temporal logic (MTL) is a special case of temporal logic. It is an extension of temporal logic in which temporal operators are replaced by time-constrained versions like until, next, since and previous operators. It is a linear-time logic that assumes both the interleaving and fictitious-clock abstractions. It is defined over a point-based weakly-monotonic integer-time semantics.

Inquisitive semantics is a framework in logic and natural language semantics. In inquisitive semantics, the semantic content of a sentence captures both the information that the sentence conveys and the issue that it raises. The framework provides a foundation for the linguistic analysis of statements and questions. It was originally developed by Ivano Ciardelli, Jeroen Groenendijk, Salvador Mascarenhas, and Floris Roelofsen.

A non-normal modal logic is a variant of modal logic that deviates from the basic principles of normal modal logics.

References

  1. "Duality in Logic and Language". Internet Encyclopedia of Philosophy. Retrieved 2024-06-10.
  2. "1.1 Logical Operations". www.whitman.edu. Retrieved 2024-06-10.
  3. Look, Brandon C. (2014-09-25). The Bloomsbury Companion to Leibniz. Bloomsbury Publishing. p. 127. ISBN 978-1-4725-2485-0.
  4. Howson, Colin (1997). Logic with Trees: An Introduction to Symbolic Logic. London; New York: Routledge. pp. 41, 44–45. ISBN 978-0-415-13342-5.
  5. "Boolean algebra, Part 1 | Review ICS 241". courses.ics.hawaii.edu. Retrieved 2024-06-10.
  6. Kurki-Suonio, R. (2005-07-20). A Practical Theory of Reactive Systems: Incremental Modeling of Dynamic Behaviors. Springer Science & Business Media. pp. 80–81. ISBN 978-3-540-27348-6.
  7. Bostock, David (1997). Intermediate Logic. Oxford; New York: Clarendon Press; Oxford University Press. pp. 62–65. ISBN 978-0-19-875141-0.
  8. Makridis, Odysseus (2022). Symbolic Logic. Palgrave Philosophy Today. Cham, Switzerland: Palgrave Macmillan. p. 133. ISBN 978-3-030-67395-6.
  9. Lyons, John (1977-06-02). Semantics: Volume 1. Cambridge University Press. p. 145. ISBN 978-0-521-29165-1.
  10. Robinson, Alan J. A.; Voronkov, Andrei (2001-06-21). Handbook of Automated Reasoning. Gulf Professional Publishing. p. 306. ISBN 978-0-444-82949-8.
  11. Polkowski, Lech T. (2023-10-03). Logic: Reference Book for Computer Scientists: The 2nd Revised, Modified, and Enlarged Edition of "Logics for Computer and Data Sciences, and Artificial Intelligence". Springer Nature. p. 70. ISBN 978-3-031-42034-4.
  12. Bagdasar, Ovidiu (2013-10-28). Concise Computer Mathematics: Tutorials on Theory and Problems. Springer Science & Business Media. p. 36. ISBN 978-3-319-01751-8.