In formal semantics, homogeneity is the phenomenon where plural expressions that seem to mean "all" negate to "none" rather than "not all". For example, the English sentence "Robin read the books" requires Robin to have read all of the books, while "Robin didn't read the books" requires her to have read none of them. Neither sentence is true if she read exactly half of the books. Homogeneity effects have been observed in a variety of languages including Japanese, Russian, and Hungarian. Semanticists have proposed a variety of explanations for homogeneity, often involving a combination of presupposition, plural quantification, and trivalent logics. Because analogous effects have been observed with conditionals and other modal expressions, some semanticists have proposed that these phenomena involve pluralities of possible worlds.
Homogeneous interpretations arise when a plural expression seems to mean "all" when asserted but "none" when negated. For example, the English sentence in (1a) is typically interpreted to mean that Robin read all the books, while (1b) is interpreted to mean that she read none of them. This is a puzzle since (1b) would merely mean that some books went unread if "the books" expressed universal quantification, as it appears to do in the positive sentence. [1] [2]
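The resulting truth-value gap can be sketched with a trivalent truth-condition schema (an illustrative formulation rather than a specific proposal from the cited literature), where r stands for Robin and # marks the undefined case:

$$[\![\text{Robin read the books}]\!] \;=\; \begin{cases} 1 & \text{if } \forall x\,(\mathrm{book}(x) \rightarrow \mathrm{read}(r,x)) \\ 0 & \text{if } \forall x\,(\mathrm{book}(x) \rightarrow \neg\,\mathrm{read}(r,x)) \\ \# & \text{otherwise} \end{cases}$$

On such a schema, negation maps 1 to 0 and 0 to 1 while preserving #, so (1b) comes out true only if Robin read none of the books, and both sentences are undefined if she read only some of them.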
Homogeneous readings are also possible with other expressions including conjunctions and bare plurals. For instance, (2a) means that Robin read both books while (2b) means that she read neither; example (3a) means that in general Robin likes books while (3b) means that in general she does not. [1]
Homogeneity effects have been studied in a variety of languages including English, Russian, Japanese and Hungarian. For instance, the Hungarian example in (4) behaves analogously to the English one in (1b). [3]
Homogeneity can be suspended in certain circumstances. For instance, the definite plurals in (1) lose their homogeneous interpretation when an overt universal quantifier is inserted, as shown in (5). [1]
Additionally, the conjunctions in (2) lose their homogeneous interpretation when the connective receives focus. [3]
Homogeneity is important to semantic theory in part because it results in apparent truth value gaps. For example, neither of the sentences in (1) is assertable if Robin read exactly half of the relevant books. As a result, some linguists have attempted to provide unified analyses with other gappy phenomena such as presupposition, scalar implicature, free choice inferences, and vagueness. [1] Homogeneity effects have been argued to appear with semantic types other than individuals. For instance, negated conditionals and modals have been argued to show similar effects, potentially suggesting that they refer to pluralities of possible worlds. [1] [4]
First-order logic, also called predicate logic, predicate calculus, or quantificational logic, is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses variables quantified over non-logical objects and allows sentences that contain variables. Rather than propositions such as "all men are mortal", first-order logic can express statements of the form "for all x, if x is a man, then x is mortal", where "for all x" is a quantifier, x is a variable, and "... is a man" and "... is mortal" are predicates. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
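For instance, the statement described above can be written in standard first-order notation (Man and Mortal are illustrative predicate symbols):

$$\forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x))$$

Here "for all x" appears as the quantifier ∀x, x is the variable, and Man and Mortal are the predicates; propositional logic, by contrast, could only represent "all men are mortal" as a single unanalyzed proposition.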
The following outline is provided as an overview of and topical guide to linguistics.
In linguistics, the partitive is a word, phrase, or case that indicates partialness. Nominal partitives are syntactic constructions, such as "some of the children", and may be classified semantically as either set partitives or entity partitives based on the quantifier and the type of embedded noun used. Partitives should not be confused with quantitives, which often look similar in form, but behave differently syntactically and have a distinct meaning.
In formal semantics and philosophy of language, a definite description is a denoting phrase in the form of "the X", where X is a noun phrase or a singular common noun. The definite description is proper if X applies to a unique individual or object. For example, "the first person in space" and "the 42nd President of the United States of America" are proper. The definite descriptions "the person in space" and "the Senator from Ohio" are improper because the noun phrase X applies to more than one thing, and the definite descriptions "the first man on Mars" and "the Senator from Washington D.C." are improper because X applies to nothing. Improper descriptions raise some difficult questions about the law of excluded middle, denotation, modality, and mental content.
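One standard way to make these conditions explicit is Russell's analysis, on which a sentence of the form "the F is G" asserts existence, uniqueness, and predication (notation illustrative):

$$\exists x\,\bigl(F(x) \wedge \forall y\,(F(y) \rightarrow y = x) \wedge G(x)\bigr)$$

On this analysis, an improper description makes the existence or uniqueness conjunct false rather than leaving the sentence without a truth value, which is one way of responding to the questions it raises about the law of excluded middle.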
Montague grammar is an approach to natural language semantics, named after the American logician Richard Montague. Montague grammar is based on mathematical logic, especially higher-order predicate logic and lambda calculus, and makes use of the notions of intensional logic via Kripke models. Montague pioneered this approach in the 1960s and early 1970s.
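As an illustration of the style of analysis (a simplified sketch, not a quotation from Montague's own fragments), a quantified noun phrase such as "every man" can be assigned a higher-order lambda term that takes a predicate meaning as its argument:

$$[\![\text{every man}]\!] = \lambda P.\,\forall x\,(\mathrm{man}(x) \rightarrow P(x))$$

Applying this function to the denotation of a verb phrase such as "sleeps" yields ∀x(man(x) → sleep(x)), showing how sentence meanings are built up by function application.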
Intensional logic is an approach to predicate logic that extends first-order logic, whose quantifiers range over the individuals of a universe (extensions), with additional quantifiers that range over terms that may have such individuals as their values (intensions). The distinction between intensional and extensional entities is parallel to the distinction between sense and reference.
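The distinction can be sketched as follows (notation illustrative): the extension of an expression α is its value at a given world w, while its intension is the function from worlds to such values:

$$\mathrm{extension}_{w}(\alpha) = [\![\alpha]\!]^{w} \qquad\qquad \mathrm{intension}(\alpha) = \lambda w.\,[\![\alpha]\!]^{w}$$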
In linguistics and philosophy, modality refers to the ways language can express various relationships to reality or truth. For instance, a modal expression may convey that something is likely, desirable, or permissible. Quintessential modal expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" or "probable". However, modal components have been identified in the meanings of countless natural language expressions, including counterfactuals, propositional attitudes, evidentials, habituals, and generics.
Homogeneity and heterogeneity are concepts relating to the uniformity of a substance, process, or image. A homogeneous feature is uniform in composition or character; one that is heterogeneous is distinctly nonuniform in at least one of these qualities.
In generative grammar and related approaches, the logical form (LF) of a linguistic expression is the variant of its syntactic structure which undergoes semantic interpretation. It is distinguished from phonetic form, the structure which corresponds to a sentence's pronunciation. These separate representations are postulated in order to explain the ways in which an expression's meaning can be partially independent of its pronunciation, e.g. scope ambiguities.
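For example, a sentence such as "Every student read a book" (an illustrative example, not drawn from the passage above) has a single pronunciation but two candidate logical forms, depending on whether "every student" or "a book" takes wider scope:

$$\forall x\,(\mathrm{student}(x) \rightarrow \exists y\,(\mathrm{book}(y) \wedge \mathrm{read}(x,y))) \qquad \text{versus} \qquad \exists y\,(\mathrm{book}(y) \wedge \forall x\,(\mathrm{student}(x) \rightarrow \mathrm{read}(x,y)))$$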
Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics, and to a lesser extent of computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and through the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes to specialized analyses of reasoning, such as reasoning about probability and arguments involving causality. One of the aims of logic is to distinguish correct from incorrect inferences. Logicians study the criteria for the evaluation of arguments.
Attempto Controlled English (ACE) is a controlled natural language, i.e. a subset of standard English with a restricted syntax and restricted semantics described by a small set of construction and interpretation rules. It has been under development at the University of Zurich since 1995. In 2013, ACE version 6.7 was announced.
Araki is a nearly extinct language spoken on the small island of Araki, south of Espiritu Santo Island in Vanuatu. Araki is gradually being replaced by Tangoa, a language from a neighbouring island.
In semantics, a donkey sentence is a sentence containing a pronoun which is semantically bound but syntactically free. Such sentences are a classic puzzle in formal semantics and philosophy of language because they are fully grammatical and yet defy straightforward attempts to generate their formal-language equivalents. To explain how speakers are able to understand them, semanticists have proposed a variety of formalisms, including systems of dynamic semantics such as Discourse representation theory. Their name comes from the example sentence "Every farmer who owns a donkey beats it", in which "it" acts as a donkey pronoun because it is semantically but not syntactically bound by the indefinite noun phrase "a donkey". The phenomenon is known as donkey anaphora.
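The difficulty can be made concrete with first-order paraphrases (standard textbook renderings rather than any single theory's analysis). Translating "a donkey" as an existential quantifier inside the relative clause leaves the pronoun "it", rendered as the variable y, outside that quantifier's scope and therefore unbound:

$$\forall x\,\bigl((\mathrm{farmer}(x) \wedge \exists y\,(\mathrm{donkey}(y) \wedge \mathrm{owns}(x,y))) \rightarrow \mathrm{beats}(x,y)\bigr)$$

The reading speakers actually assign corresponds instead to universal quantification over both variables:

$$\forall x\,\forall y\,\bigl((\mathrm{farmer}(x) \wedge \mathrm{donkey}(y) \wedge \mathrm{owns}(x,y)) \rightarrow \mathrm{beats}(x,y)\bigr)$$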
In pragmatics, scalar implicature, or quantity implicature, is an implicature that attributes an implicit meaning beyond the explicit or literal meaning of an utterance, and which suggests that the utterer had a reason for not using a more informative or stronger term on the same scale. The choice of the weaker characterization suggests that, as far as the speaker knows, none of the stronger characterizations in the scale holds. This is commonly seen in the use of 'some' to suggest the meaning 'not all', even though 'some' is logically consistent with 'all'. If Bill says 'I have some of my money in cash', this utterance suggests to a hearer that Bill does not have all his money in cash.
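Taking M and C as illustrative predicate symbols for "is money of Bill's" and "is in cash", the asserted content and the implicated strengthening can be sketched as:

$$\text{asserted: } \exists x\,(M(x) \wedge C(x)) \qquad\qquad \text{implicated: } \neg\,\forall x\,(M(x) \rightarrow C(x))$$

The assertion is logically compatible with the universal claim; the implicature arises from the speaker's choice not to make the stronger claim.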
Dynamic semantics is a framework in logic and natural language semantics that treats the meaning of a sentence as its potential to update a context. In static semantics, knowing the meaning of a sentence amounts to knowing when it is true; in dynamic semantics, knowing the meaning of a sentence means knowing "the change it brings about in the information state of anyone who accepts the news conveyed by it." In dynamic semantics, sentences are mapped to functions called context change potentials, which take an input context and return an output context. Dynamic semantics was originally developed by Irene Heim and Hans Kamp in 1981 to model anaphora, but has since been applied widely to phenomena including presupposition, plurals, questions, discourse relations, and modality.
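For a simple, non-anaphoric sentence, the context change potential can be sketched as intersective update on a context c modelled as a set of possible worlds (a common simplification; the phenomena listed above call for richer notions of context):

$$c[\varphi] = \{\, w \in c : \varphi \text{ is true in } w \,\}$$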
Formal semantics is the study of grammatical meaning in natural languages using formal concepts from logic, mathematics and theoretical computer science. It is an interdisciplinary field, sometimes regarded as a subfield of both linguistics and philosophy of language. It provides accounts of what linguistic expressions mean and how their meanings are composed from the meanings of their parts. The enterprise of formal semantics can be thought of as that of reverse-engineering the semantic components of natural languages' grammars.
Craige Roberts is an American linguist, known for her work on pragmatics and formal semantics.
In formal semantics, the scope of a semantic operator is the semantic object to which it applies. For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. Scope can be thought of as the semantic order of operations.
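Using drink as an illustrative predicate symbol and p for Paulina, the example can be rendered so that only the first conjunct falls within the scope of negation:

$$\neg\,\mathrm{drink}(p, \mathrm{beer}) \;\wedge\; \mathrm{drink}(p, \mathrm{wine})$$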
In formal semantics and pragmatics, modal subordination is the phenomenon whereby a modal expression is interpreted relative to another modal expression to which it is not syntactically subordinate. For instance, a discourse in which a conditional about Joan forgetting to fill the birdfeeder is followed by a separate sentence stating that the birds will get hungry does not assert that the birds will in fact be hungry; rather, it conveys that hungry birds would be a consequence of Joan forgetting to fill the birdfeeder. This interpretation was unexpected in early theories of the syntax-semantics interface, since the content concerning the birds' hunger occurs in a separate sentence from the if-clause.
In formal semantics, Strawson entailment is a variant of the concept of entailment which is insensitive to presupposition failures. Formally, a sentence P Strawson-entails a sentence Q iff Q is always true when P is true and Q's presuppositions are satisfied. For example, "Maria loves every cat" Strawson-entails "Maria loves her cat" because Maria could not love every cat without loving her own, assuming that she has one. This would not be an ordinary entailment, since the first sentence could be true while the second is undefined on account of a presupposition failure; loving every cat would not guarantee that she owns a cat.
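In possible-worlds terms, the definition can be sketched as follows (notation illustrative), where dom(⟦Q⟧) is the set of worlds at which Q's presuppositions are satisfied:

$$P \models_{\mathrm{S}} Q \;\iff\; \forall w\,\bigl((w \models P \;\wedge\; w \in \mathrm{dom}([\![Q]\!])) \rightarrow w \models Q\bigr)$$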