Curry's paradox

Curry's paradox is a paradox in which an arbitrary claim F is proved from the mere existence of a sentence C that says of itself "If C, then F". The paradox requires only a few apparently innocuous logical deduction rules. Since F is arbitrary, any logic having these rules allows one to prove everything. The paradox may be expressed in natural language and in various logics, including certain forms of set theory, lambda calculus, and combinatory logic.

The paradox is named after the logician Haskell Curry, who wrote about it in 1942. [1] It has also been called Löb's paradox after Martin Hugo Löb, [2] due to its relationship to Löb's theorem.

In natural language

Claims of the form "if A, then B" are called conditional claims. Curry's paradox uses a particular kind of self-referential conditional sentence, as demonstrated in this example:

If this sentence is true, then Germany borders China.

Even though Germany does not border China, the example sentence certainly is a natural-language sentence, and so the truth of that sentence can be analyzed. The paradox follows from this analysis. The analysis consists of two steps. First, common natural-language proof techniques can be used to prove that the example sentence is true [steps 1.-4. below]. Second, the truth of the sentence can be used to prove that Germany borders China [steps 5.-6.]:

  1. The sentence reads "If this sentence is true, then Germany borders China".  [repeat the definition, to keep the step numbering compatible with the formal proof below]
  2. If the sentence is true, then it is true.  [obvious, i.e., a tautology]
  3. If the sentence is true, then: if the sentence is true, then Germany borders China.  [replace "it is true" by the sentence's definition]
  4. If the sentence is true, then Germany borders China.  [contract repeated condition]
  5. But 4. is what the sentence says, so it is indeed true.
  6. The sentence is true [by 5.], and [by 4.]: if it is true, then Germany borders China.
    So, Germany borders China.  [modus ponens]

Because Germany does not border China, this suggests that there has been an error in one of the proof steps. The claim "Germany borders China" could be replaced by any other claim, and the sentence would still be provable. Thus every sentence appears to be provable. Because the proof uses only well-accepted methods of deduction, and because none of these methods appears to be incorrect, this situation is paradoxical. [3]

Informal proof

The standard method for proving conditional sentences (sentences of the form "if A, then B") is called "conditional proof". In this method, in order to prove "if A, then B", first A is assumed and then with that assumption B is shown to be true.

To produce Curry's paradox, as described in the two steps above, apply this method to the sentence "if this sentence is true, then Germany borders China". Here A, "this sentence is true", refers to the overall sentence, while B is "Germany borders China". So, assuming A is the same as assuming "If A, then B". Therefore, in assuming A, we have assumed both A and "If A, then B". Therefore, B is true, by modus ponens, and we have proven "If this sentence is true, then 'Germany borders China' is true" in the usual way, by assuming the hypothesis and deriving the conclusion.

Now, because we have proved "If this sentence is true, then 'Germany borders China' is true", then we can again apply modus ponens, because we know that the claim "this sentence is true" is correct. In this way, we can deduce that Germany borders China.

In formal logics

Sentential logic

The example in the previous section used unformalized, natural-language reasoning. Curry's paradox also occurs in some varieties of formal logic. In this context, it shows that if we assume there is a formal sentence (X → Y), where X itself is equivalent to (X → Y), then we can prove Y with a formal proof. One example of such a formal proof is as follows. For an explanation of the logic notation used in this section, refer to the list of logic symbols.

  1. X := (X → Y)
    assumption, the starting point, equivalent to "If this sentence is true, then Y"
  2. X → X
    law of identity
  3. X → (X → Y)
    substitute right side of 2, since X is equivalent to X → Y by 1
  4. X → Y
    from 3 by contraction
  5. X
    substitute 4, by 1
  6. Y
    from 5 and 4 by modus ponens

An alternative proof is via Peirce's law. If X = X → Y, then (X → Y) → X. Together with Peirce's law ((X → Y) → X) → X and modus ponens, this implies X and subsequently Y (as in the proof above).

The above derivation shows that, if Y is an unprovable statement in a formal system, there is no statement X in that system such that X is equivalent to the implication (X → Y). In other words, step 1 of the previous proof fails. By contrast, the previous section shows that in natural (unformalized) language, for every natural language statement Y there is a natural language statement Z such that Z is equivalent to (Z → Y) in natural language. Namely, Z is "If this sentence is true then Y".
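
Under the Curry–Howard correspondence, step 1 corresponds to a type X that is interconvertible with the function type X → Y. Languages whose type systems admit such a (negatively) recursive type therefore reproduce the derivation: read as a logic, every type becomes inhabited. The following is a minimal sketch in Haskell; the names are illustrative, not from any library, and the resulting "proof" of an arbitrary type is a program that never terminates:

```haskell
-- Step 1: a type X that is interconvertible with (X -> y),
-- mirroring the assumption X := (X -> Y).
newtype X y = MkX (X y -> y)

-- Steps 2-4: unwrap the implication and apply it to the value itself
-- (the argument x is used twice, which is contraction).
selfApply :: X y -> y
selfApply x@(MkX f) = f x

-- Steps 5-6: X is itself inhabited, and modus ponens yields y.
-- Evaluating this term diverges, so the "theorem" carries no content.
paradox :: y
paradox = selfApply (MkX selfApply)
```

Proof assistants such as Coq or Agda reject the definition of X because X occurs negatively in its own constructor, which is one standard way of blocking Curry's paradox in a typed setting.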

Naive set theory

Even if the underlying mathematical logic does not admit any self-referential sentences, certain forms of naive set theory are still vulnerable to Curry's paradox. In set theories that allow unrestricted comprehension, we can prove any logical statement Y by examining the set

X := {x | (x ∈ x) → Y}

One then shows easily that the statement X ∈ X is equivalent to (X ∈ X) → Y. From this, Y may be deduced, similarly to the proofs shown above. ("X ∈ X" plays the role of "this sentence".)
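
Spelled out, the deduction parallels the sentential-logic proof above step for step (a sketch, writing the biconditional given by unrestricted comprehension explicitly):

$$
\begin{aligned}
&1.\quad X \in X \leftrightarrow \big((X \in X) \to Y\big) && \text{comprehension, with } X := \{x \mid (x \in x) \to Y\}\\
&2.\quad (X \in X) \to (X \in X) && \text{law of identity}\\
&3.\quad (X \in X) \to \big((X \in X) \to Y\big) && \text{rewrite the consequent using 1}\\
&4.\quad (X \in X) \to Y && \text{contraction}\\
&5.\quad X \in X && \text{from 4 using 1}\\
&6.\quad Y && \text{modus ponens, 5 and 4}
\end{aligned}
$$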

Therefore, in a consistent set theory, the set {x | (x ∈ x) → Y} does not exist for false Y. This can be seen as a variant on Russell's paradox, but it is not identical. Some proposals for set theory have attempted to deal with Russell's paradox not by restricting the rule of comprehension, but by restricting the rules of logic so that the theory tolerates the contradictory nature of the set of all sets that are not members of themselves. The existence of proofs like the one above shows that such a task is not so simple, because at least one of the deduction rules used in the proof above must be omitted or restricted.

Lambda calculus with restricted minimal logic

Curry's paradox may be expressed in untyped lambda calculus, enriched by restricted minimal logic. To cope with the lambda calculus's syntactic restrictions, m shall denote the implication function taking two parameters; that is, the lambda term ((m A) B) shall be equivalent to the usual infix notation A → B.

An arbitrary formula Z can be proved by defining a lambda function N := λp.((m p) Z), and X := (Y N), where Y denotes Curry's fixed-point combinator. Then X = (N X) = ((m X) Z) by definition of Y and N, hence the above sentential logic proof can be duplicated in the calculus: [4] [5] [6]

  ((m X) X)  [law of identity, i.e. X → X]
  ((m X) ((m X) Z))  [replace the consequent X, since X = ((m X) Z)]
  ((m X) Z)  [contraction]
  X  [since X = ((m X) Z)]
  Z  [modus ponens]

In simply typed lambda calculus, fixed-point combinators cannot be typed and hence are not admitted.
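
The role of the typing restriction can be seen by trying to write Curry's fixed-point combinator in Haskell. A direct transcription of λf.(λx.f (x x)) (λx.f (x x)) is rejected because the self-application x x cannot be given a simple type; it becomes typeable only after introducing a recursive wrapper type that mediates the self-application. A minimal sketch (names are illustrative):

```haskell
-- A recursive type: a value of (Self a) is a function that consumes
-- a (Self a), which is exactly the shape needed to type self-application.
newtype Self a = Fold { unfold :: Self a -> a }

-- Curry's Y combinator, with the self-application (x x) written
-- as (unfold x x) so that it type-checks.
fixY :: (a -> a) -> a
fixY f = g (Fold g)
  where
    g x = f (unfold x x)

-- Example use: factorial obtained as a fixed point (relies on laziness).
factorial :: Integer -> Integer
factorial = fixY (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

In the untyped lambda calculus no such wrapper is needed, which is why the paradoxical term X := (Y N) can be formed there directly.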

Combinatory logic

Curry's paradox may also be expressed in combinatory logic, which has equivalent expressive power to lambda calculus. Any lambda expression may be translated into combinatory logic, so a translation of the implementation of Curry's paradox in lambda calculus would suffice.

The above term X translates to (r r) in combinatory logic, where

r := S (S (K m) (S I I)) (K Z);

hence

(r r) = ((m (r r)) Z) [7]
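
The claimed equation can be checked by a short reduction (a sketch, using the standard combinator rules S a b c = (a c) (b c), K a b = a, and I a = a):

$$
\begin{aligned}
r\,r &= S\,\big(S\,(K\,m)\,(S\,I\,I)\big)\,(K\,Z)\,r\\
     &= \big(S\,(K\,m)\,(S\,I\,I)\,r\big)\,(K\,Z\,r)\\
     &= \big((K\,m\,r)\,(S\,I\,I\,r)\big)\,Z\\
     &= \big(m\,((I\,r)\,(I\,r))\big)\,Z\\
     &= \big(m\,(r\,r)\big)\,Z,
\end{aligned}
$$

which is the combinatory counterpart of X = ((m X) Z).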

Discussion

Curry's paradox can be formulated in any language supporting basic logic operations that also allows a self-recursive function to be constructed as an expression. Two mechanisms that support the construction of the paradox are self-reference (the ability to refer to "this sentence" from within a sentence) and unrestricted comprehension in naive set theory. Natural languages nearly always contain many features that could be used to construct the paradox, as do many other languages. Usually the addition of metaprogramming capabilities to a language will add the features needed. Mathematical logic generally does not allow explicit reference to its own sentences. However, the heart of Gödel's incompleteness theorems is the observation that a different form of self-reference can be added; see Gödel number.

The rules used in the construction of the proof are the rule of assumption for conditional proof, the rule of contraction, and modus ponens. These are included in most common logical systems, such as first-order logic.

Consequences for some formal logic

In the 1930s, Curry's paradox and the related Kleene–Rosser paradox, from which Curry's paradox was developed, [8] [1] played a major role in showing that various formal logic systems allowing self-recursive expressions are inconsistent.

The axiom of unrestricted comprehension is not supported by modern set theory and Curry's paradox is thus avoided.

References

  1. Curry, Haskell B. (Sep 1942). "The Inconsistency of Certain Formal Logics". The Journal of Symbolic Logic. 7 (3): 115–117. doi:10.2307/2269292. JSTOR 2269292. S2CID 121991184.
  2. Barwise, Jon; Etchemendy, John (1987). The Liar: An Essay on Truth and Circularity. New York: Oxford University Press. p. 23. ISBN 0195059441. Retrieved 24 January 2013.
  3. A parallel example is explained in the Stanford Encyclopedia of Philosophy. See Shapiro, Lionel; Beall, Jc (2018). "Curry's Paradox". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  4. The naming here follows the sentential logic proof, except that "Z" is used instead of "Y" to avoid confusion with Curry's fixed-point combinator Y.
  5. Gérard Huet (May 1986). Formal Structures for Computation and Deduction. International Summer School on Logic of Programming and Calculi of Discrete Design. Marktoberdorf. Archived from the original on 2014-07-14. Here: p. 125.
  6. Haskell B. Curry; Robert Feys (1958). Combinatory Logic I. Studies in Logic and the Foundations of Mathematics. Vol. 22. Amsterdam: North Holland. [page needed]
  7. Curry, Haskell B. (Jun 1942). "The Combinatory Foundations of Mathematical Logic". Journal of Symbolic Logic. 7 (2): 49–64. doi:10.2307/2266302. JSTOR 2266302. S2CID 36344702.