Proof by contradiction


In logic, proof by contradiction is a form of proof that establishes the truth or the validity of a proposition by showing that assuming the proposition to be false leads to a contradiction. Although it is quite freely used in mathematical proofs, not every school of mathematical thought accepts this kind of nonconstructive proof as universally valid. [1]


More broadly, proof by contradiction is any form of argument that establishes a statement by arriving at a contradiction, even when the initial assumption is not the negation of the statement to be proved. In this general sense, proof by contradiction is also known as indirect proof, proof by assuming the opposite, [2] and reductio ad impossibile. [3]

A mathematical proof employing proof by contradiction usually proceeds as follows (the pattern is sketched formally after the list):

  1. The proposition to be proved is P.
  2. We assume P to be false, i.e., we assume ¬P.
  3. It is then shown that ¬P implies falsehood. This is typically accomplished by deriving two mutually contradictory assertions, Q and ¬Q, and appealing to the law of noncontradiction.
  4. Since assuming P to be false leads to a contradiction, it is concluded that P is in fact true.
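
As a rough illustration, this pattern can be sketched in a proof assistant. The following Lean 4 fragment is only a schematic rendering: the propositions P and Q and the hypotheses h₁ and h₂ are hypothetical stand-ins for the concrete derivation of step 3.

```lean
-- To prove P (step 1), assume ¬P (step 2), derive both Q and ¬Q (step 3),
-- and conclude P (step 4) by classical reasoning.
example (P Q : Prop) (h₁ : ¬P → Q) (h₂ : ¬P → ¬Q) : P :=
  Classical.byContradiction (fun hnP => (h₂ hnP) (h₁ hnP))
```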

An important special case is the existence proof by contradiction: in order to demonstrate that an object with a given property exists, we derive a contradiction from the assumption that all objects satisfy the negation of the property.

Formalization

The principle may be formally expressed as the propositional formula ¬¬P ⇒ P, equivalently (¬P ⇒ ⊥) ⇒ P, which reads: "If assuming P to be false implies falsehood, then P is true."
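
A minimal Lean 4 sketch makes the equivalence of the two formulations explicit, assuming classical logic; `Classical.byContradiction` is the classical principle being formalized.

```lean
-- Since ¬P unfolds to P → False, the formulations ¬¬P ⇒ P and (¬P ⇒ ⊥) ⇒ P
-- are interchangeable; both are instances of classical by-contradiction.
example (P : Prop) (h : ¬P → False) : P := Classical.byContradiction h
example (P : Prop) (h : ¬¬P)        : P := Classical.byContradiction h
```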

In natural deduction the principle takes the form of the rule of inference

    ⊢ ¬¬P
    ─────
     ⊢ P

which reads: "If ¬¬P is proved, then P may be concluded."

In sequent calculus the principle is expressed by the sequent

    Γ, ¬¬P ⊢ Δ, P

which reads: "Hypotheses Γ and ¬¬P entail the conclusion P or Δ."

Justification

In classical logic the principle may be justified by the examination of the truth table of the proposition ¬¬P ⇒ P, which demonstrates it to be a tautology:

P     ¬P    ¬¬P   ¬¬P ⇒ P
T     F     T     T
F     T     F     T
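
The same check can be replayed mechanically at the level of Booleans, encoding the implication b ⇒ c as !b || c; the two examples below, a minimal sketch, correspond to the two rows of the table.

```lean
-- Row P = T: ¬¬P ⇒ P evaluates to true.
example : (!(!(!true)) || true) = true := rfl
-- Row P = F: ¬¬P ⇒ P evaluates to true.
example : (!(!(!false)) || false) = true := rfl
```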

Another way to justify the principle is to derive it from the law of the excluded middle, as follows. We assume ¬¬P and seek to prove P. By the law of excluded middle P either holds or it does not:

  1. if P holds, then of course P holds.
  2. if ¬P holds, then we derive falsehood by applying the law of noncontradiction to ¬P and ¬¬P, after which the principle of explosion allows us to conclude P.

In either case, we have established P. It turns out that, conversely, proof by contradiction can be used to derive the law of excluded middle.
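
The two-case argument can be written out in Lean 4 as a small sketch, using only the law of excluded middle (`Classical.em`) and the principle of explosion (via `absurd`):

```lean
example (P : Prop) (hnn : ¬¬P) : P :=
  (Classical.em P).elim
    (fun hp  => hp)              -- case 1: P holds, so P holds
    (fun hnp => absurd hnp hnn)  -- case 2: ¬P contradicts ¬¬P; explosion yields P
```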

In the classical sequent calculus LK, proof by contradiction is derivable from the inference rules for negation:

    P ⊢ P            (axiom)
    ⊢ ¬P, P          (¬-right)
    ¬¬P ⊢ P          (¬-left)
    Γ, ¬¬P ⊢ Δ, P    (weakening)

Relationship with other proof techniques

Refutation by contradiction

Proof by contradiction is similar to refutation by contradiction, [4] [5] also known as proof of negation, which states that ¬P is proved as follows:

  1. The proposition to be proved is ¬P.
  2. Assume P.
  3. Derive falsehood.
  4. Conclude ¬P.

In contrast, proof by contradiction proceeds as follows:

  1. The proposition to be proved is P.
  2. Assume ¬P.
  3. Derive falsehood.
  4. Conclude P.

Formally these are not the same, as refutation by contradiction applies only when the proposition to be proved is negated, whereas proof by contradiction may be applied to any proposition whatsoever. [6] In classical logic, where P and ¬¬P may be freely interchanged, the distinction is largely obscured. Thus in mathematical practice, both principles are referred to as "proof by contradiction".
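
The formal difference can be seen in a Lean 4 sketch: concluding ¬P from a derivation of falsehood from P needs no classical axiom, since ¬P is by definition P → False, whereas concluding P from a derivation of falsehood from ¬P appeals to a classical principle.

```lean
-- Refutation by contradiction: intuitionistically valid, no axiom needed.
example (P : Prop) (h : P → False) : ¬P := h
-- Proof by contradiction: requires a classical principle.
example (P : Prop) (h : ¬P → False) : P := Classical.byContradiction h
```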

Law of the excluded middle

Proof by contradiction is equivalent to the law of the excluded middle, first formulated by Aristotle, which states that either an assertion or its negation is true, P ∨ ¬P.

Law of non-contradiction

The law of noncontradiction was first stated as a metaphysical principle by Aristotle. It posits that a proposition and its negation cannot both be true, or equivalently, that a proposition cannot be both true and false. Formally the law of non-contradiction is written as ¬(P ∧ ¬P) and read as "it is not the case that a proposition is both true and false". The law of non-contradiction neither follows from nor implies the principle of proof by contradiction.

The laws of excluded middle and non-contradiction together mean that exactly one of P and ¬P is true.

Proof by contradiction in intuitionistic logic

In intuitionistic logic proof by contradiction is not generally valid, although some particular instances can be derived. In contrast, proof of negation and the principle of noncontradiction are both intuitionistically valid.

The Brouwer–Heyting–Kolmogorov interpretation of proof by contradiction gives the following intuitionistic validity condition: if there is no method for establishing that a proposition is false, then there is a method for establishing that the proposition is true.

If we take "method" to mean algorithm, then the condition is not acceptable, as it would allow us to solve the Halting problem. To see how, consider the statement H(M) stating "Turing machine M halts or does not halt". Its negation ¬H(M) states that "M neither halts nor does not halt", which is false by the law of noncontradiction (which is intuitionistically valid). If proof by contradiction were intuitionistically valid, we would obtain an algorithm for deciding whether an arbitrary Turing machine M halts, thereby violating the (intuitionistically valid) proof of non-solvability of the Halting problem.

A proposition P which satisfies ¬¬P ⇒ P is known as a ¬¬-stable proposition. Thus in intuitionistic logic proof by contradiction is not universally valid, but can only be applied to ¬¬-stable propositions. An instance of such a proposition is a decidable one, i.e., one satisfying P ∨ ¬P. Indeed, the above proof that the law of excluded middle implies proof by contradiction can be repurposed to show that a decidable proposition is ¬¬-stable. A typical example of a decidable proposition is a statement that can be checked by direct computation, such as "n is prime" or "a divides b".
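
A short Lean 4 sketch of the last point: given the case split P ∨ ¬P (decidability) as a hypothesis, ¬¬-stability follows without any classical axiom.

```lean
-- A decidable proposition is ¬¬-stable: the hypothesis hdec replaces excluded middle,
-- and only intuitionistically valid steps (case analysis and explosion) are used.
example (P : Prop) (hdec : P ∨ ¬P) (hnn : ¬¬P) : P :=
  hdec.elim id (fun hnp => absurd hnp hnn)
```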

Examples of proofs by contradiction

Euclid's Elements

An early occurrence of proof by contradiction can be found in Euclid's Elements, Book 1, Proposition 6: [7]

If in a triangle two angles equal one another, then the sides opposite the equal angles also equal one another.

The proof proceeds by assuming that the opposite sides are not equal, and derives a contradiction.

Hilbert's Nullstellensatz

An influential proof by contradiction was given by David Hilbert. His Nullstellensatz states:

If f1, ..., fk are polynomials in n indeterminates with complex coefficients which have no common complex zeros, then there are polynomials g1, ..., gk such that f1g1 + ... + fkgk = 1.

Hilbert proved the statement by assuming that there are no such polynomials and derived a contradiction. [8]

Infinitude of primes

Euclid's theorem states that there are infinitely many primes. In Euclid's Elements the theorem is stated in Book IX, Proposition 20: [9]

Prime numbers are more than any assigned multitude of prime numbers.

Depending on how we formally write the above statement, the usual proof takes either the form of a proof by contradiction or a refutation by contradiction. We present the former here; see below for how the proof is done as a refutation by contradiction.

If we formally express Euclid's theorem as saying that for every natural number there is a prime bigger than it, then we employ proof by contradiction, as follows.

Given any number n, we seek to prove that there is a prime larger than n. Suppose to the contrary that no such prime exists (an application of proof by contradiction). Then all primes are smaller than or equal to n, and we may form the list p1, ..., pk of them all. Let P = p1 · p2 · ... · pk be the product of all primes and Q = P + 1. Because Q is larger than all prime numbers it is not prime, hence it must be divisible by one of them, say pi. Now both P and Q are divisible by pi, hence so is their difference Q − P = 1, but this cannot be because 1 is not divisible by any primes. Hence we have a contradiction and so there is a prime number bigger than n.

Examples of refutations by contradiction

The following examples are commonly referred to as proofs by contradiction, but formally employ refutation by contradiction (and therefore are intuitionistically valid). [10]

Infinitude of primes

Let us take a second look at Euclid's theorem – Book IX, Proposition 20: [11]

Prime numbers are more than any assigned multitude of prime numbers.

We may read the statement as saying that for every finite list of primes, there is another prime not on that list, which is arguably closer to and in the same spirit as Euclid's original formulation. In this case Euclid's proof applies refutation by contradiction at one step, as follows.

Given any finite list of prime numbers p1, p2, ..., pn, it will be shown that at least one additional prime number not in this list exists. Let P = p1 · p2 · ... · pn be the product of all the listed primes and p a prime factor of P + 1, possibly P + 1 itself. We claim that p is not in the given list of primes. Suppose to the contrary that it were (an application of refutation by contradiction). Then p would divide both P and P + 1, therefore also their difference, which is 1. This gives a contradiction, since no prime number divides 1.

Irrationality of the square root of 2

The classic proof that the square root of 2 is irrational is a refutation by contradiction. [12] Indeed, we set out to prove the negation ¬∃ a, b ∈ ℕ. a/b = √2 by assuming that there exist natural numbers a and b whose ratio is the square root of two, and derive a contradiction.

Proof by infinite descent

Proof by infinite descent is a method of proof whereby a smallest object with a desired property is shown not to exist as follows: assume that a smallest object with the desired property exists, and then construct from it a strictly smaller object with the same property, contradicting minimality.

Such a proof is again a refutation by contradiction. A typical example is the proof of the proposition "there is no smallest positive rational number": assume there is a smallest positive rational number q and derive a contradiction by observing that q/2 is even smaller than q and still positive.

Russell's paradox

Russell's paradox, stated set-theoretically as "there is no set whose elements are precisely those sets that do not contain themselves", is a negated statement whose usual proof is a refutation by contradiction.

Notation

Proofs by contradiction sometimes end with the word "Contradiction!". Isaac Barrow and Baermann used the notation Q.E.A., for "quod est absurdum" ("which is absurd"), along the lines of Q.E.D., but this notation is rarely used today. [13] A graphical symbol sometimes used for contradictions is a downwards zigzag arrow "lightning" symbol (U+21AF: ↯), for example in Davey and Priestley. [14] Other symbols in use include a pair of opposing arrows, struck-out arrows, a stylized form of hash (such as U+2A33: ⨳), and the "reference mark" (U+203B: ※). [15] [16]

Hardy's view

G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying "It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game." [17]

Automated theorem proving

In automated theorem proving the method of resolution is based on proof by contradiction. That is, in order to show that a given statement is entailed by given hypotheses, the automated prover assumes the hypotheses and the negation of the statement, and attempts to derive a contradiction. [18]
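
The underlying refutation pattern can be sketched in Lean 4; here `refute` is a hypothetical stand-in for the clause-level resolution steps that an automated prover would perform.

```lean
-- To show that hypotheses H entail a goal G, derive falsehood from H together with ¬G,
-- then conclude G classically.
example (H G : Prop) (refute : H → ¬G → False) (h : H) : G :=
  Classical.byContradiction (fun hnG => refute h hnG)
```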

Related Research Articles

In classical logic, disjunctive syllogism is a valid argument form which is a syllogism having a disjunctive statement for one of its premises.

In logic, the law of excluded middle or the principle of excluded middle states that for every proposition, either this proposition or its negation is true. It is one of the three laws of thought, along with the law of noncontradiction, and the law of identity; however, no system of logic is built on just these laws, and none of these laws provides inference rules, such as modus ponens or De Morgan's laws. The law is also known as the law (or principle) of the excluded third, in Latin principium tertii exclusi. Another Latin designation for this law is tertium non datur or "no third [possibility] is given". In classical logic, the law is a tautology.

The propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation. Some sources include other connectives as well.

In propositional logic, modus tollens (MT), also known as modus tollendo tollens and denying the consequent, is a deductive argument form and a rule of inference. Modus tollens is a mixed hypothetical syllogism that takes the form of "If P, then Q. Not Q. Therefore, not P." It is an application of the general truth that if a statement is true, then so is its contrapositive. The form shows that inference from "P implies Q" to "the negation of Q implies the negation of P" is a valid argument.

<span class="mw-page-title-main">De Morgan's laws</span> Pair of logical equivalences

In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.

In boolean logic, a disjunctive normal form (DNF) is a canonical normal form of a logical formula consisting of a disjunction of conjunctions; it can also be described as an OR of ANDs, a sum of products, or — in philosophical logic — a cluster concept. As a normal form, it is useful in automated theorem proving.

In classical deductive logic, a consistent theory is one that does not lead to a logical contradiction. The lack of contradiction can be defined in either semantic or syntactic terms. The semantic definition states that a theory is consistent if it has a model, i.e., there exists an interpretation under which all formulas in the theory are true. This is the sense used in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead. The syntactic definition states that a theory T is consistent if there is no formula φ such that both φ and its negation ¬φ are elements of the set of consequences of T. Let A be a set of closed sentences (informally "axioms") and ⟨A⟩ the set of closed sentences provable from A under some formal deductive system. The set of axioms A is consistent when there is no formula φ such that φ ∈ ⟨A⟩ and ¬φ ∈ ⟨A⟩.

<span class="mw-page-title-main">Contradiction</span> Logical incompatibility between two or more propositions

In traditional logic, a contradiction occurs when a proposition conflicts either with itself or established fact. It is often used as a tool to detect disingenuous beliefs and bias. Illustrating a general tendency in applied logic, Aristotle's law of noncontradiction states that "It is impossible that the same thing can at the same time both belong and not belong to the same object and in the same respect."

Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.

In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than to David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.

In mathematics, a Heyting algebra (also known as pseudo-Boolean algebra) is a bounded lattice (with join and meet operations written ∨ and ∧ and with least element 0 and greatest element 1) equipped with a binary operation a → b of implication such that (c ∧ a) ≤ b is equivalent to c ≤ (a → b). From a logical standpoint, A → B is by this definition the weakest proposition for which modus ponens, the inference rule A → B, A ⊢ B, is sound. Like Boolean algebras, Heyting algebras form a variety axiomatizable with finitely many equations. Heyting algebras were introduced by Arend Heyting (1930) to formalize intuitionistic logic.

In propositional logic, double negation is the theorem that states that "If a statement is true, then it is not the case that the statement is not true". This is expressed by saying that a proposition A is logically equivalent to not (not-A), or by the formula A ≡ ~(~A) where the sign ≡ expresses logical equivalence and the sign ~ expresses negation.

Paraconsistent logic is an attempt at a logical system to deal with contradictions in a discriminating way. Alternatively, paraconsistent logic is the subfield of logic that is concerned with studying and developing "inconsistency-tolerant" systems of logic, which reject the principle of explosion.

In classical logic, intuitionistic logic and similar logical systems, the principle of explosion, or the principle of Pseudo-Scotus, is the law according to which any statement can be proven from a contradiction. That is, from a contradiction, any proposition can be inferred; this is known as deductive explosion.

In mathematical logic, Heyting arithmetic is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it.

In mathematical logic, a literal is an atomic formula or its negation. The definition mostly appears in proof theory, e.g. in conjunctive normal form and the method of resolution.

In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.

In logic and mathematics, contraposition refers to the inference of going from a conditional statement into its logically equivalent contrapositive, and an associated proof method known as § Proof by contrapositive. The contrapositive of a statement has its antecedent and consequent inverted and flipped.

Minimal logic, or minimal calculus, is a symbolic logic system originally developed by Ingebrigt Johansson. It is an intuitionistic and paraconsistent logic that rejects both the law of the excluded middle and the principle of explosion, and therefore accepts neither of the corresponding derivations as valid.

References

  1. Bishop, Errett (1967). Foundations of Constructive Analysis. New York: Academic Press. ISBN 4-87187-714-0.
  2. "Proof By Contradiction". www2.edc.org. Retrieved 12 June 2023.
  3. "Reductio ad absurdum | logic". Encyclopedia Britannica. Retrieved 25 October 2019.
  4. "Proof by contradiction". nLab. Retrieved 7 October 2022.
  5. Richard Hammack, Book of Proof , 3rd edition, 2022, ISBN   978-0-9894721-2-8; see "Chapter 9: Disproof".
  6. Bauer, Andrej (29 March 2010). "Proof of negation and proof by contradiction". Mathematics and Computation. Retrieved 26 October 2021.
  7. "Euclid's Elements, Book 6, Proposition 1" . Retrieved 2 October 2022.
  8. Hilbert, David (1893). "Ueber die vollen Invariantensysteme" . Mathematische Annalen . 42 (3): 313–373. doi:10.1007/BF01444162.
  9. "Euclid's Elements, Book 9, Proposition 20" . Retrieved 2 October 2022.
  10. Bauer, Andrej (2017). "Five stages of accepting constructive mathematics". Bull. Amer. Math. Soc. 54 (2017), 481-498. Retrieved 2 October 2022.
  11. "Euclid's Elements, Book 9, Proposition 20" . Retrieved 2 October 2022.
  12. Alfeld, Peter (16 August 1996). "Why is the square root of 2 irrational?". Understanding Mathematics, a study guide. Department of Mathematics, University of Utah. Retrieved 6 February 2013.
  13. "Math Forum Discussions".
  14. B. Davey and H.A. Priestley, Introduction to Lattices and Order , Cambridge University Press, 2002; see "Notation Index", p. 286.
  15. Gary Hardegree, Introduction to Modal Logic, Chapter 2, pg. II–2. https://web.archive.org/web/20110607061046/http://people.umass.edu/gmhwww/511/pdf/c02.pdf
  16. The Comprehensive LaTeX Symbol List, pg. 20. http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf
  17. G. H. Hardy, A Mathematician's Apology; Cambridge University Press, 1992. ISBN   9780521427067. PDF p.19 Archived 2021-02-16 at the Wayback Machine .
  18. "Linear Resolution", From Logic to Logic Programming, The MIT Press, 1994, retrieved 21 December 2023