Branching quantifier

In logic, a branching quantifier, [1] also called a Henkin quantifier, finite partially ordered quantifier or even nonlinear quantifier, is a partial ordering [2] $\langle Q x_1 \dots Q x_n \rangle$ of quantifiers for $Q \in \{\forall, \exists\}$. It is a special case of a generalized quantifier. In classical logic, quantifier prefixes are linearly ordered, so that the value of a variable $y_m$ bound by a quantifier $Q_m$ depends on the values of the variables $y_1, \dots, y_{m-1}$ bound by the quantifiers $Q y_1, \dots, Q y_{m-1}$ preceding $Q_m$. In a logic with (finite) partially ordered quantification this is not in general the case.
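For illustration (a standard Skolemization fact, stated here only to make the dependence explicit), in the linear prefix below the witness for $y_1$ may depend only on $x_1$, while the witness for $y_2$ may depend on both $x_1$ and $x_2$:

\[
\forall x_1\,\exists y_1\,\forall x_2\,\exists y_2\,\varphi(x_1, x_2, y_1, y_2)
\;\equiv\;
\exists f\,\exists g\,\forall x_1\,\forall x_2\,\varphi(x_1, x_2, f(x_1), g(x_1, x_2)).
\]

Partially ordered quantification makes it possible to require instead that each witness depend on only some of the universally quantified variables.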

Branching quantification first appeared in a 1959 conference paper of Leon Henkin. [3] Systems of partially ordered quantification are intermediate in strength between first-order logic and second-order logic. They are used as a basis for Jaakko Hintikka and Gabriel Sandu's independence-friendly logic.

Definition and properties

The simplest Henkin quantifier $Q_H$ is

\[
(Q_H x_1, x_2, y_1, y_2)\,\varphi(x_1, x_2, y_1, y_2) \;\equiv\; \begin{pmatrix}\forall x_1\,\exists y_1\\ \forall x_2\,\exists y_2\end{pmatrix}\varphi(x_1, x_2, y_1, y_2).
\]

It (in fact every formula with a Henkin prefix, not just the simplest one) is equivalent to its second-order Skolemization, i.e.

\[
\exists f\,\exists g\,\forall x_1\,\forall x_2\,\varphi(x_1, x_2, f(x_1), g(x_2)).
\]
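Since a function of $x_2$ alone is a special case of a function of $x_1$ and $x_2$, comparing this Skolemization with that of the linear prefix gives a quick consistency check (a routine observation, recorded here only for orientation):

\[
\begin{pmatrix}\forall x_1\,\exists y_1\\ \forall x_2\,\exists y_2\end{pmatrix}\varphi(x_1, x_2, y_1, y_2)
\;\models\;
\forall x_1\,\exists y_1\,\forall x_2\,\exists y_2\,\varphi(x_1, x_2, y_1, y_2),
\]

while the converse implication fails in general; the branching prefix is the strictly stronger condition.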

It is also powerful enough to define the quantifier $\exists^{\geq\omega}$ (i.e. "there are infinitely many"), defined as

\[
(\exists^{\geq\omega} x)\,\varphi(x) \;\equiv\; (\exists t)\,(Q_H x_1, x_2, y_1, y_2)\,\big[\varphi(t) \wedge (x_1 = x_2 \leftrightarrow y_1 = y_2) \wedge (\varphi(x_1) \rightarrow (\varphi(y_1) \wedge y_1 \neq t))\big].
\]
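Unwinding the Henkin prefix with the Skolemization above (a sketch; the single function symbol $f$ is used here only for exposition), the clause $x_1 = x_2 \leftrightarrow y_1 = y_2$ forces the two Skolem functions to coincide and to be injective, so the definition amounts to

\[
(\exists t)\,(\exists f)\,\big[\varphi(t) \,\wedge\, \text{$f$ injective} \,\wedge\, \forall x\,\big(\varphi(x) \rightarrow (\varphi(f(x)) \wedge f(x) \neq t)\big)\big],
\]

i.e. the $\varphi$'s admit an injection into themselves that misses $t$; this is Dedekind-infiniteness, which over the structures considered here coincides with "there are infinitely many $\varphi$'s".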

Several things follow from this, including the nonaxiomatizability of first-order logic with $Q_H$ (first observed by Ehrenfeucht), and its equivalence to the $\Sigma^1_1$-fragment of second-order logic (existential second-order logic); the latter result was published independently in 1970 by Herbert Enderton [4] and W. Walkoe. [5]

The Rescher quantifier $Q_L$ ("the number of $\varphi$'s is less than or equal to the number of $\psi$'s") and the Härtig (equicardinality) quantifier $Q_I$ are also definable by $Q_H$: [2]

\[
(Q_L x)(\varphi x, \psi x) \;\equiv\; \operatorname{Card}(\{x \colon \varphi x\}) \leq \operatorname{Card}(\{x \colon \psi x\}) \;\equiv\; (Q_H x_1, x_2, y_1, y_2)\,\big[(x_1 = x_2 \leftrightarrow y_1 = y_2) \wedge (\varphi x_1 \rightarrow \psi y_1)\big],
\]

\[
(Q_I x)(\varphi x, \psi x) \;\equiv\; (Q_L x)(\varphi x, \psi x) \wedge (Q_L x)(\psi x, \varphi x).
\]
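The Rescher definition rests on the same mechanism: the clause $x_1 = x_2 \leftrightarrow y_1 = y_2$ again yields a single injective Skolem function, which the second clause sends from the $\varphi$'s into the $\psi$'s (a sketch, with $f$ standing for that function):

\[
(\exists f)\,\big[\text{$f$ injective} \,\wedge\, \forall x\,\big(\varphi(x) \rightarrow \psi(f(x))\big)\big],
\]

which holds exactly when $\operatorname{Card}(\{x : \varphi x\}) \leq \operatorname{Card}(\{x : \psi x\})$.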

The Henkin quantifier can itself be expressed as a type (4) Lindström quantifier. [2]

Relation to natural languages

Hintikka in a 1973 paper [6] advanced the hypothesis that some sentences in natural languages are best understood in terms of branching quantifiers; for example, "some relative of each villager and some relative of each townsman hate each other" is supposed to be interpreted, according to Hintikka, as the following, where $V(x)$ means that $x$ is a villager, $T(x)$ that $x$ is a townsman, $R(x, y)$ that $y$ is a relative of $x$, and $H(x, y)$ that $x$ and $y$ hate each other: [7] [8]

\[
\begin{pmatrix}\forall x_1\,\exists y_1\\ \forall x_2\,\exists y_2\end{pmatrix}\big[(V(x_1) \wedge T(x_2)) \rightarrow (R(x_1, y_1) \wedge R(x_2, y_2) \wedge H(y_1, y_2))\big],
\]

which is known to have no first-order logic equivalent. [7]
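In Skolemized form (shown only as an unwinding of the branching prefix; the function symbols $f$ and $g$ are not part of the sentence itself), this reading says that the relative chosen for a villager depends only on that villager and the relative chosen for a townsman only on that townsman:

\[
\exists f\,\exists g\,\forall x_1\,\forall x_2\,\big[(V(x_1) \wedge T(x_2)) \rightarrow (R(x_1, f(x_1)) \wedge R(x_2, g(x_2)) \wedge H(f(x_1), g(x_2)))\big].
\]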

The idea of branching is not necessarily restricted to using the classical quantifiers as leaves. In a 1979 paper, [9] Jon Barwise proposed variations of Hintikka sentences (as the sentence above is sometimes called) in which the inner quantifiers are themselves generalized quantifiers, for example: "Most villagers and most townsmen hate each other." [7] Observing that $\Sigma^1_1$ is not closed under negation, Barwise also proposed a practical test to determine whether natural-language sentences really involve branching quantifiers, namely to test whether their natural-language negation involves universal quantification over a set variable (a $\Pi^1_1$ sentence). [10]

Hintikka's proposal was met with skepticism by a number of logicians because some first-order sentences, like the one below, appear to capture the natural-language Hintikka sentence well enough:

\[
\forall x_1\,\exists y_1\,\forall x_2\,\exists y_2\,\varphi(x_1, x_2, y_1, y_2) \;\wedge\; \forall x_2\,\exists y_2\,\forall x_1\,\exists y_1\,\varphi(x_1, x_2, y_1, y_2),
\]

where $\varphi(x_1, x_2, y_1, y_2)$ denotes

\[
(V(x_1) \wedge T(x_2)) \rightarrow (R(x_1, y_1) \wedge R(x_2, y_2) \wedge H(y_1, y_2)).
\]
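Skolemizing each conjunct separately (shown only as an illustration; the function symbols are not part of the sentence above) makes the contrast with the branching reading visible:

\[
\exists f\,\exists g\,\forall x_1\,\forall x_2\,\varphi(x_1, x_2, f(x_1), g(x_1, x_2))
\;\wedge\;
\exists f'\,\exists g'\,\forall x_1\,\forall x_2\,\varphi(x_1, x_2, f'(x_1, x_2), g'(x_2)).
\]

In each conjunct one of the two witnesses may depend on both universally quantified variables, and the two conjuncts may use different witnesses, whereas the Henkin prefix demands a single pair of witnesses, each depending on only one of the universal variables.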

Although much purely theoretical debate followed, it was not until 2009 that empirical tests with students trained in logic found that they are more likely to assign models matching the "bidirectional" first-order sentence than the branching-quantifier sentence to several natural-language constructs derived from the Hintikka sentence. For instance, students were shown undirected bipartite graphs with squares and circles as vertices and were asked to say whether sentences like "more than 3 circles and more than 3 squares are connected by lines" correctly described the diagrams. [7]

See also

Dependence logic
Independence-friendly logic
Lindström quantifier

References

  1. Stanley Peters; Dag Westerståhl (2006). Quantifiers in language and logic. Clarendon Press. pp. 66–72. ISBN   978-0-19-929125-0.
  2. Antonio Badia (2009). Quantifiers in Action: Generalized Quantification in Query, Logical and Natural Languages. Springer. pp. 74–76. ISBN 978-0-387-09563-9.
  3. Henkin, L. "Some Remarks on Infinitely Long Formulas". Infinitistic Methods: Proceedings of the Symposium on Foundations of Mathematics, Warsaw, 2–9 September 1959, Państwowe Wydawnictwo Naukowe and Pergamon Press, Warsaw, 1961, pp. 167–183. OCLC 2277863
  4. Jaakko Hintikka and Gabriel Sandu, "Game-theoretical semantics", in Handbook of logic and language, ed. J. van Benthem and A. ter Meulen, Elsevier 2011 (2nd ed.), citing Enderton, H.B., 1970. Finite partially-ordered quantifiers. Z. Math. Logik Grundlag. Math. 16, 393–397. doi:10.1002/malq.19700160802.
  5. Blass, A.; Gurevich, Y. (1986). "Henkin quantifiers and complete problems" (PDF). Annals of Pure and Applied Logic. 32: 1–16. doi:10.1016/0168-0072(86)90040-0. hdl:2027.42/26312. Citing W. Walkoe, Finite partially-ordered quantification, Journal of Symbolic Logic 35 (1970) 535–555. JSTOR 2271440
  6. Hintikka, J. (1973). "Quantifiers vs. Quantification Theory". Dialectica. 27 (3–4): 329–358. doi:10.1111/j.1746-8361.1973.tb00624.x.
  7. Gierasimczuk, N.; Szymanik, J. (2009). "Branching Quantification v. Two-way Quantification" (PDF). Journal of Semantics. 26 (4): 367. doi:10.1093/jos/ffp008.
  8. Sher, G. (1990). "Ways of branching quantifiers" (PDF). Linguistics and Philosophy. 13 (4): 393–422. doi:10.1007/BF00630749. S2CID 61362436.
  9. Barwise, J. (1979). "On branching quantifiers in English". Journal of Philosophical Logic. 8: 47–80. doi:10.1007/BF00258419. S2CID   31950692.
  10. Hand, Michael (1998). "Reviewed work: On Branching Quantifiers in English, Jon Barwise; Branching Generalized Quantifiers and Natural Language. Generalized Quantifiers, Linguistic and Logical Approaches, Dag Westerståhl, Peter Gärdenfors; Ways of Branching Quantifiers, Gila Sher". The Journal of Symbolic Logic. 63 (4): 1611–1614. doi:10.2307/2586678. JSTOR   2586678. S2CID   117833401.