Cumulativity (linguistics)

In linguistic semantics, an expression X is said to have cumulative reference if and only if the following holds: If X is true of both a and b, then it is also true of the combination of a and b. Example: If two separate entities can be said to be "water", then combining them into one entity will yield more "water". If two separate entities can be said to be "a house", their combination cannot be said to be "a house". Hence, "water" has cumulative reference, while the expression "a house" does not. The plural form "houses", however, does have cumulative reference. If two (groups of) entities are both "houses", then their combination will still be "houses".

Cumulativity has proven relevant to the linguistic treatment of the mass/count distinction and to the characterization of grammatical telicity.

Formally, a cumulative predicate CUM can be defined as follows, where capital X is a variable over sets, U is the universe of discourse, p is a mereological part structure on U, and ⊕_p is the mereological sum operation:

∀X ⊆ U [CUM(X) ↔ ∃x,y [X(x) ∧ X(y) ∧ ¬(x = y)] ∧ ∀x,y [X(x) ∧ X(y) → X(x ⊕_p y)]]

That is, X is cumulative if it applies to at least two distinct entities and its extension is closed under mereological sum.
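As an illustration, the following minimal Python sketch (not part of the original article; names such as is_cumulative and the toy extensions are purely illustrative) models individuals as sets of atomic portions, treats mereological sum as set union, and checks the cumulativity condition for a "water"-like and an "a house"-like predicate:

    from itertools import combinations

    # Individuals are frozensets of atomic portions; the mereological sum of two
    # individuals is the union of their parts; a predicate is the set of
    # individuals it is true of.

    def mereological_sum(a, b):
        return a | b

    def is_cumulative(extension):
        """True iff whenever the predicate holds of a and b, it holds of their sum."""
        return all(mereological_sum(a, b) in extension
                   for a, b in combinations(extension, 2))

    w1, w2, w3 = frozenset({"w1"}), frozenset({"w2"}), frozenset({"w3"})
    water_atoms = [w1, w2, w3]

    # "water" is true of every non-empty combination of water portions.
    water = {frozenset().union(*combo)
             for r in range(1, len(water_atoms) + 1)
             for combo in combinations(water_atoms, r)}

    # "a house" is true only of single houses.
    a_house = {frozenset({"h1"}), frozenset({"h2"})}

    print(is_cumulative(water))    # True  -> "water" has cumulative reference
    print(is_cumulative(a_house))  # False -> "a house" does not

The check only tests the closure-under-sum half of the definition; on this toy domain both extensions already apply to at least two distinct entities.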

In later work, Krifka has generalized the notion to n-ary predicates, based on the phenomenon of cumulative quantification. For example, the two following sentences appear to be equivalent:

John ate an apple and Mary ate a pear.
John and Mary ate an apple and a pear.

This shows that the relation "eat" is cumulative. In general, an n-ary predicate R is cumulative if and only if the following holds:

∀x₁, …, xₙ, y₁, …, yₙ [R(x₁, …, xₙ) ∧ R(y₁, …, yₙ) → R(x₁ ⊕_p y₁, …, xₙ ⊕_p yₙ)]
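Under the same toy modelling assumptions as above (illustrative names, not taken from Krifka's papers), the n-ary condition can be checked for a small "eat" relation built from the example sentences:

    from itertools import product

    def mereological_sum(a, b):
        return a | b

    def is_cumulative_relation(relation):
        """True iff R(x1,...,xn) and R(y1,...,yn) imply R(x1+y1, ..., xn+yn)."""
        return all(
            tuple(mereological_sum(x, y) for x, y in zip(xs, ys)) in relation
            for xs, ys in product(relation, repeat=2)
        )

    john, mary = frozenset({"john"}), frozenset({"mary"})
    apple, pear = frozenset({"apple"}), frozenset({"pear"})

    # "John ate an apple", "Mary ate a pear", and their sum,
    # "John and Mary ate an apple and a pear" (read cumulatively).
    eat = {
        (john, apple),
        (mary, pear),
        (john | mary, apple | pear),
    }
    print(is_cumulative_relation(eat))  # True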

Related Research Articles

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.

Original proof of Gödel's completeness theorem

The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem.

In linguistics, a mass noun, uncountable noun, or non-count noun is a noun with the syntactic property that any quantity of it is treated as an undifferentiated unit, rather than as something with discrete elements. Non-count nouns are distinguished from count nouns.

In Boolean logic, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is a product of sums or an AND of ORs. As a canonical normal form, it is useful in automated theorem proving and circuit theory.
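As a brief aside, conversion to CNF can be tried out with SymPy's boolean algebra utilities (a minimal sketch, assuming SymPy is available; the particular formula is only an example):

    from sympy import symbols
    from sympy.logic.boolalg import to_cnf, is_cnf

    A, B, C = symbols("A B C")

    formula = ~(A | B) | C      # "not (A or B), or C"
    cnf = to_cnf(formula)
    print(cnf)                  # e.g. (C | ~A) & (C | ~B): an AND of ORs
    print(is_cnf(cnf))          # True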

In logic, negation, also called the logical complement, is an operation that takes a proposition P to another proposition "not P", written ¬P, ~P, or !P. It is interpreted intuitively as being true when P is false, and false when P is true. Negation is thus a unary (single-argument) logical connective. It may be applied as an operation on notions, propositions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity. In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation, the negation of a proposition P is the proposition whose proofs are the refutations of P.

In mathematics and logic, the term "uniqueness" refers to the property of being the one and only object satisfying a certain condition. This sort of quantification is known as uniqueness quantification or unique existential quantification, and is often denoted with the symbols "∃!" or "∃=1". For example, the formal statement ∃!n ∈ ℕ (n − 2 = 4) may be read as "there is exactly one natural number n such that n − 2 = 4".

In logic and mathematics second-order logic is an extension of first-order logic, which itself is an extension of propositional logic. Second-order logic is in turn extended by higher-order logic and type theory.

In mathematics and logic, plural quantification is the theory that an individual variable x may take on plural, as well as singular, values. As well as substituting individual objects such as Alice, the number 1, the tallest building in London etc. for x, we may substitute both Alice and Bob, or all the numbers between 0 and 10, or all the buildings in London over 20 stories.

In formal semantics and philosophy of language, a definite description is a denoting phrase in the form of "the X" where X is a noun-phrase or a singular common noun. The definite description is proper if X applies to a unique individual or object. For example, "the first person in space" and "the 42nd President of the United States of America" are proper. The definite descriptions "the person in space" and "the Senator from Ohio" are improper because the noun phrase X applies to more than one thing, and the definite descriptions "the first man on Mars" and "the Senator from some Country" are improper because X applies to nothing. Improper descriptions raise some difficult questions about the law of excluded middle, denotation, modality, and mental content.
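A minimal Python sketch of the properness criterion (illustrative only; the helper name and toy extensions are not from the article): a description "the X" is proper just in case X applies to exactly one individual.

    def is_proper_description(extension):
        """A definite description "the X" is proper iff X applies to exactly one thing."""
        return len(extension) == 1

    print(is_proper_description({"Yuri Gagarin"}))             # True:  "the first person in space"
    print(is_proper_description({"senator_1", "senator_2"}))   # False: "the Senator from Ohio"
    print(is_proper_description(set()))                        # False: "the first man on Mars"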

In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants.

Predicate transformer semantics were introduced by Edsger Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". They define the semantics of an imperative programming paradigm by assigning to each statement in this language a corresponding predicate transformer: a total function between two predicates on the state space of the statement. In this sense, predicate transformer semantics are a kind of denotational semantics. Actually, in guarded commands, Dijkstra uses only one kind of predicate transformer: the well-known weakest preconditions.
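For a concrete feel for one such transformer, here is a minimal sketch (assuming SymPy; the helper name wp_assign is illustrative) of the weakest precondition of an assignment, wp(x := e, P), obtained by substituting e for x in the postcondition P:

    from sympy import symbols, simplify

    x = symbols("x")

    def wp_assign(var, expr, postcondition):
        """Weakest precondition of the assignment `var := expr` w.r.t. `postcondition`."""
        return postcondition.subs(var, expr)

    post = x > 0                      # postcondition: x > 0
    pre = wp_assign(x, x + 1, post)   # wp(x := x + 1, x > 0)
    print(pre)                        # x + 1 > 0
    print(simplify(pre))              # x > -1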

In formal semantics, a predicate is quantized if, whenever it is true of an entity, it is not true of any proper subpart of that entity. For example, if something is an "apple", then no proper subpart of that thing is an "apple". If something is "water", then many of its subparts will also be "water". Hence, the predicate "apple" is quantized, while "water" is not.
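Quantization can be checked with the same kind of toy mereology used above (a minimal, illustrative sketch): a predicate is quantized iff no individual in its extension has a proper part that is also in its extension.

    def is_quantized(extension):
        # a < b holds when a is a proper subset, i.e. a proper part, of b
        return not any(a < b for a in extension for b in extension)

    # "apple" is true only of whole apples, never of their parts.
    apple = {frozenset({"apple1"}), frozenset({"apple2"})}

    # "water" is true of portions and of their sub-portions.
    water = {frozenset({"w1"}), frozenset({"w2"}), frozenset({"w1", "w2"})}

    print(is_quantized(apple))  # True  -> "apple" is quantized
    print(is_quantized(water))  # False -> "water" is not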

Independence-friendly logic is an extension of classical first-order logic (FOL) by means of slashed quantifiers of the form (∃v/V) and (∀v/V), where V is a finite set of variables. The intended reading of (∃v/V) is "there is a v which is functionally independent from the variables in V". IF logic allows one to express more general patterns of dependence between variables than those which are implicit in first-order logic. This greater level of generality leads to an actual increase in expressive power; the set of IF sentences can characterize the same classes of structures as existential second-order logic. For example, it can express branching quantifier sentences, such as a formula which expresses infinity in the empty signature; this cannot be done in FOL, because first-order logic cannot, in general, express patterns of dependency in which an existentially quantified variable depends on only some of the variables quantified before it. IF logic is more general than branching quantifiers, for example in that it can express dependencies that are not transitive, such as a quantifier prefix which expresses that y depends on x, and z depends on y, but z does not depend on x.

In logic a branching quantifier, also called a Henkin quantifier, finite partially ordered quantifier or even nonlinear quantifier, is a partial ordering of quantifiers, each of which is either universal (∀) or existential (∃).

In mathematical logic, an atomic formula is a formula with no deeper propositional structure, that is, a formula that contains no logical connectives or equivalently a formula that has no strict subformulas. Atoms are thus the simplest well-formed formulas of the logic. Compound formulas are formed by combining the atomic formulas using the logical connectives.

In the study of formal theories in mathematical logic, bounded quantifiers are often included in a formal language in addition to the standard quantifiers "∀" and "∃". Bounded quantifiers differ from "∀" and "∃" in that bounded quantifiers restrict the range of the quantified variable. The study of bounded quantifiers is motivated by the fact that determining whether a sentence with only bounded quantifiers is true is often not as difficult as determining whether an arbitrary sentence is true.

In formal semantics, a generalized quantifier (GQ) is an expression that denotes a set of sets. This is the standard semantics assigned to quantified noun phrases. For example, the generalized quantifier every boy denotes the set of sets of which every boy is a member: {X | ∀x (boy(x) → x ∈ X)}.
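On a finite universe this denotation can be computed directly (a minimal sketch; the universe, the set of boys, and all names are illustrative):

    from itertools import chain, combinations

    universe = {"al", "bill", "carl", "dora"}
    boys = {"al", "bill"}

    def powerset(s):
        s = list(s)
        return [frozenset(c) for c in
                chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

    # {X | every boy is a member of X}
    every_boy = {X for X in powerset(universe) if boys <= X}

    # "Every boy runs" is true iff the set of runners is in the denotation.
    print(frozenset({"al", "bill", "dora"}) in every_boy)  # True
    print(frozenset({"al", "dora"}) in every_boy)          # False: bill does not run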

T-norm fuzzy logics are a family of non-classical logics, informally delimited by having a semantics that takes the real unit interval [0, 1] for the system of truth values and functions called t-norms for permissible interpretations of conjunction. They are mainly used in applied fuzzy logic and fuzzy set theory as a theoretical basis for approximate reasoning.

Dependence logic is a logical formalism, created by Jouko Väänänen, which adds dependence atoms to the language of first-order logic. A dependence atom is an expression of the form =(t₁, …, tₙ), where t₁, …, tₙ are terms, and corresponds to the statement that the value of tₙ is functionally dependent on the values of t₁, …, tₙ₋₁.

In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier ∀ in the first-order formula ∀x P(x) expresses that everything in the domain satisfies the property denoted by P. On the other hand, the existential quantifier ∃ in the formula ∃x P(x) expresses that there is something in the domain which satisfies that property. A formula where a quantifier takes widest scope is called a quantified formula. A quantified formula must contain a bound variable and a subformula specifying a property of the referent of that variable.
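Over a finite domain, the two quantifiers can be illustrated with Python's all() and any() (a minimal sketch; the domain and the property are arbitrary examples):

    domain = range(1, 11)

    def P(n):
        return n % 2 == 0   # the open formula P(x): "x is even"

    forall_P = all(P(x) for x in domain)   # ∀x P(x): everything in the domain is even
    exists_P = any(P(x) for x in domain)   # ∃x P(x): something in the domain is even

    print(forall_P)  # False
    print(exists_P)  # True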

References

Krifka, Manfred (1989). "Nominal reference, temporal constitution and quantification in event semantics". In Renate Bartsch, Johan van Benthem and Peter van Emde Boas (eds.), Semantics and Contextual Expressions, 75–115. Dordrecht: Foris.

Krifka, Manfred (1999). "At least some determiners aren't determiners". In K. Turner (ed.), The Semantics/Pragmatics Interface from Different Points of View, 257–291. North-Holland: Elsevier Science.

Scha, Remko (1981). "Distributive, collective, and cumulative quantification". In T. Janssen and M. Stokhof (eds.), Formal Methods in the Study of Language, 483–512. Amsterdam: Mathematical Centre Tracts.