Problem of multiple generality


The problem of multiple generality names a failure in traditional logic to describe certain intuitively valid inferences. For example, it is intuitively clear that if:

Some cat is feared by every mouse

then it follows logically that:

All mice are afraid of at least one cat.

The syntax of traditional logic (TL) permits exactly four sentence types: "All As are Bs", "No As are Bs", "Some As are Bs" and "Some As are not Bs". Each type is a quantified sentence containing exactly one quantifier. Since the sentences above each contain two quantifiers ('some' and 'every' in the first sentence and 'all' and 'at least one' in the second sentence), they cannot be adequately represented in TL. The best TL can do is to incorporate the second quantifier from each sentence into the second term, thus rendering the artificial-sounding terms 'feared-by-every-mouse' and 'afraid-of-at-least-one-cat'. This in effect "buries" these quantifiers, which are essential to the inference's validity, within the hyphenated terms. Hence the sentence "Some cat is feared by every mouse" is allotted the same logical form as the sentence "Some cat is hungry". And so the logical form in TL is:

Some As are Bs
All Cs are Ds

which is clearly invalid.
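To see the invalidity concretely, here is one instantiation of the schema; the particular terms are illustrative choices and are not drawn from the article's example:

\[
\text{Some } A\text{s are } B\text{s} \;\therefore\; \text{All } C\text{s are } D\text{s},
\qquad
A = \text{cats},\; B = \text{black things},\; C = \text{mice},\; D = \text{cats}.
\]

The premise "Some cats are black" is true while the conclusion "All mice are cats" is false, so no argument of this bare schematic form can be valid.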

The first logical calculus capable of dealing with such inferences was Gottlob Frege's Begriffsschrift (1879), the ancestor of modern predicate logic, which dealt with quantifiers by means of variable bindings. Modestly, Frege did not argue that his logic was more expressive than extant logical calculi, but commentators on Frege's logic regard this as one of his key achievements.

Using modern predicate calculus, we quickly discover that the statement is ambiguous.

Some cat is feared by every mouse

could mean (Some cat is feared) by every mouse (paraphrasable as Every mouse fears some cat), i.e.

For every mouse m, there exists a cat c, such that c is feared by m,

in which case the conclusion is trivial.

But it could also mean Some cat is (feared by every mouse) (paraphrasable as There's a cat feared by all mice), i.e.

There exists one cat c, such that for every mouse m, c is feared by m.
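In symbols, the two readings can be sketched as follows (the predicate letters Mouse, Cat and Fears are an illustrative choice of formalization, not fixed by the text above):

\[
\forall m\,\bigl(\mathrm{Mouse}(m) \rightarrow \exists c\,(\mathrm{Cat}(c) \land \mathrm{Fears}(m,c))\bigr)
\qquad\text{(every mouse fears some cat)}
\]
\[
\exists c\,\bigl(\mathrm{Cat}(c) \land \forall m\,(\mathrm{Mouse}(m) \rightarrow \mathrm{Fears}(m,c))\bigr)
\qquad\text{(one cat is feared by every mouse)}
\]

The second formula entails the first, but not conversely; the order of the quantifiers in the formula, rather than the surface word order, fixes which claim is made.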

This example illustrates the importance of specifying the scope of such quantifiers as for all and there exists.

Related Research Articles

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
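The quoted expression can be sketched in symbols as follows (treating "is Socrates" and "is a man" as illustrative predicate symbols):

\[
\exists x\,\bigl(\mathrm{Socrates}(x) \land \mathrm{Man}(x)\bigr)
\]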

Original proof of Gödel's completeness theorem

The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem.

In mathematical logic, a universal quantification is a type of quantifier, a logical constant which is interpreted as "given any" or "for all". It expresses that a predicate can be satisfied by every member of a domain of discourse. In other words, it is the predication of a property or relation to every member of the domain. It asserts that a predicate within the scope of a universal quantifier is true of every value of a predicate variable.
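For instance (with Mouse and Animal as illustrative predicate symbols), universal quantification over a restricted class is conventionally expressed with a conditional:

\[
\forall m\,\bigl(\mathrm{Mouse}(m) \rightarrow \mathrm{Animal}(m)\bigr)
\qquad\text{"every mouse is an animal"}
\]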

In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". It is usually denoted by the logical operator symbol ∃, which, when used together with a predicate variable, is called an existential quantifier. Existential quantification is distinct from universal quantification, which asserts that the property or relation holds for all members of the domain. Some sources use the term existentialization to refer to existential quantification.
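A matching sketch for the existential case (Cat and Black are illustrative predicate symbols); note that the existential quantifier is conventionally paired with a conjunction rather than a conditional:

\[
\exists c\,\bigl(\mathrm{Cat}(c) \land \mathrm{Black}(c)\bigr)
\qquad\text{"some cat is black"}
\]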

In logic, negation, also called the logical complement, is an operation that takes a proposition P to another proposition "not P", written ¬P or ∼P. It is interpreted intuitively as being true when P is false, and false when P is true. Negation is thus a unary (single-argument) logical connective. It may be applied as an operation on notions, propositions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity. In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation, the negation of a proposition P is the proposition whose proofs are the refutations of P.
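A minimal sketch of the classical truth function for negation:

\[
\begin{array}{c|c}
P & \neg P \\ \hline
\text{T} & \text{F} \\
\text{F} & \text{T}
\end{array}
\]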

In formal logic and related branches of mathematics, a functional predicate, or function symbol, is a logical symbol that may be applied to an object term to produce another object term. Functional predicates are also sometimes called mappings, but that term has additional meanings in mathematics. In a model, a function symbol will be modelled by a function.
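A small sketch, assuming a unary function symbol fatherOf and a constant socrates (both names are illustrative): applying the symbol to an object term yields another object term, and applications can be nested.

\[
\mathrm{fatherOf}(\mathrm{socrates}), \qquad \mathrm{fatherOf}\bigl(\mathrm{fatherOf}(\mathrm{socrates})\bigr)
\]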

In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.
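A sketch of such a conditional assertion, using the illustrative predicate symbols from earlier: the formula on the right of the turnstile is claimed only under the assumptions listed on the left.

\[
\mathrm{Mouse}(m),\; \mathrm{Mouse}(m) \rightarrow \mathrm{Animal}(m) \;\vdash\; \mathrm{Animal}(m)
\]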

In philosophy, term logic, also known as traditional logic, syllogistic logic or Aristotelian logic, is a loose name for an approach to logic that began with Aristotle and was developed further in antiquity mostly by his followers, the Peripatetics, but largely fell into decline by the third century CE. Term logic revived in medieval times, first in Islamic logic by Alpharabius in the tenth century, and later in Christian Europe in the twelfth century with the advent of new logic, and remained dominant until the advent of modern predicate logic in the late nineteenth century. This entry is an introduction to the term logic needed to understand philosophy texts written before it was expanded as a formal logic system by predicate logic. Readers lacking a grasp of the basic terminology and ideas of term logic can have difficulty understanding such texts, because their authors typically assumed an acquaintance with term logic.

In logic and mathematics, second-order logic is an extension of first-order logic, which itself is an extension of propositional logic. Second-order logic is in turn extended by higher-order logic and type theory.
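By way of illustration (a sketch, not part of the article's text): second-order logic permits quantification over predicate variables as well as individual variables, so formulas like the following, which bind the predicate variable P, become expressible:

\[
\forall P\,\forall x\,\bigl(P(x) \lor \neg P(x)\bigr)
\]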

Conceptual graph

A conceptual graph (CG) is a formalism for knowledge representation. In the first published paper on CGs, John F. Sowa used them to represent the conceptual schemas used in database systems. The first book on CGs applied them to a wide range of topics in artificial intelligence, computer science, and cognitive science.

A formula of the predicate calculus is in prenex normal form (PNF) if it is written as a string of quantifiers and bound variables, called the prefix, followed by a quantifier-free part, called the matrix.
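For example, the strong reading of this article's cat sentence (using the illustrative predicate symbols from earlier) can be rewritten in prenex normal form by the standard prenexing equivalences:

\[
\exists c\,\bigl(\mathrm{Cat}(c) \land \forall m\,(\mathrm{Mouse}(m) \rightarrow \mathrm{Fears}(m,c))\bigr)
\;\equiv\;
\exists c\,\forall m\,\bigl(\mathrm{Cat}(c) \land (\mathrm{Mouse}(m) \rightarrow \mathrm{Fears}(m,c))\bigr)
\]

On the right-hand side the prefix is ∃c∀m and the quantifier-free matrix is the bracketed part.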

In formal semantics and philosophy of language, a definite description is a denoting phrase in the form of "the X" where X is a noun-phrase or a singular common noun. The definite description is proper if X applies to a unique individual or object. For example: "the first person in space" and "the 42nd President of the United States of America" are proper. The definite descriptions "the person in space" and "the Senator from Ohio" are improper because the noun phrase X applies to more than one thing, and the definite descriptions "the first man on Mars" and "the Senator from some Country" are improper because X applies to nothing. Improper descriptions raise some difficult questions about the law of excluded middle, denotation, modality, and mental content.
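One standard way to make "proper" precise, due to Russell and offered here only as a sketch, renders "the X is F" as an existence-and-uniqueness claim:

\[
\exists x\,\bigl(X(x) \land \forall y\,(X(y) \rightarrow y = x) \land F(x)\bigr)
\]

On this analysis, when nothing, or more than one thing, satisfies X, the formula is simply false.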

In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as a constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants.

Begriffsschrift: book about logic

Begriffsschrift is a book on logic by Gottlob Frege, published in 1879, and the formal system set out in that book.

Logical form: form for logical arguments, obtained by abstracting from the subject matter of its content terms

In logic, the logical form of a statement is a precisely specified semantic version of that statement in a formal system. Informally, the logical form attempts to formalize a possibly ambiguous statement into a statement with a precise, unambiguous logical interpretation with respect to a formal system. In an ideal formal language, the meaning of a logical form can be determined unambiguously from syntax alone. Logical forms are semantic, not syntactic, constructs; therefore, there may be more than one string that represents the same logical form in a given language.
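For instance, the two strings below differ syntactically only in their choice of bound variable, yet represent the same logical form:

\[
\forall x\,\bigl(P(x) \rightarrow Q(x)\bigr)
\qquad\text{and}\qquad
\forall y\,\bigl(P(y) \rightarrow Q(y)\bigr)
\]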

In logic, the monadic predicate calculus is the fragment of first-order logic in which all relation symbols in the signature are monadic, and there are no function symbols. All atomic formulas are thus of the form P(x), where P is a relation symbol and x is a variable.
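For contrast with this article's example (using the same illustrative predicate names as above): monadic atomic formulas take a single argument, whereas the cat-and-mouse inference needs the two-place relation Fears, and so lies outside the monadic fragment.

\[
\underbrace{\mathrm{Cat}(c),\ \mathrm{Mouse}(m)}_{\text{monadic}}
\qquad
\underbrace{\mathrm{Fears}(m,c)}_{\text{dyadic}}
\]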

In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.

Donkey sentences are sentences that contain a pronoun with clear meaning but whose syntactical role in the sentence poses challenges to grammarians. Such sentences defy straightforward attempts to generate their formal language equivalents. The difficulty is with understanding how English speakers parse such sentences.

Czesław Lejewski (1913–2001) was a Polish philosopher and logician, and a member of the Lwów–Warsaw School of Logic. He studied under Jan Łukasiewicz and Karl Popper at the London School of Economics, and under W. V. O. Quine.

In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier ∀ in the first-order formula ∀x P(x) expresses that everything in the domain satisfies the property denoted by P. On the other hand, the existential quantifier ∃ in the formula ∃x P(x) expresses that there is something in the domain which satisfies that property. A formula where a quantifier takes widest scope is called a quantified formula. A quantified formula must contain a bound variable and a subformula specifying a property of the referent of that variable.