In linguistics, focus (abbreviated FOC) is a grammatical category that conveys which part of the sentence contributes new, non-derivable, or contrastive information. In the English sentence "Mary only insulted BILL", focus is expressed prosodically by a pitch accent on "Bill" which identifies him as the only person whom Mary insulted. By contrast, in the sentence "Mary only INSULTED Bill", the verb "insult" is focused and thus expresses that Mary performed no other actions towards Bill. Focus is a cross-linguistic phenomenon and a major topic in linguistics. Research on focus spans numerous subfields including phonetics, syntax, semantics, pragmatics, and sociolinguistics.
Information structure has been described at length by a number of linguists as a grammatical phenomenon. [1] [2] [3] Lexicogrammatical structures that code prominence, or focus, of some information over other information have a particularly significant history dating back to the 19th century. [4] Recent attempts to explain focus phenomena in terms of discourse function, including those by Knud Lambrecht and Talmy Givón, often connect focus with the packaging of new, old, and contrasting information. Lambrecht in particular distinguishes three main types of focus constructions: predicate-focus structure, argument-focus structure, and sentence-focus structure. Focus has also been linked to other more general cognitive processes, including attention orientation. [5] [6]
In such approaches, contrastive focus is understood as the coding of information that is contrary to the presuppositions of the interlocutor. [7] [8] [9] The topic–comment model distinguishes between the topic (theme) and what is being said about that topic (the comment, rheme, or focus). [9] [10] [11]
Standard formalist approaches to grammar argue that phonology and semantics cannot exchange information directly (see Fig. 1). Therefore, syntactic mechanisms such as features and transformations encode prosodic information regarding focus, which is then passed on to the semantics and phonology.
Focus may be highlighted either prosodically or syntactically or both, depending on the language. In syntax this can be done by assigning focus markers, as shown in (1), or by preposing, as shown in (2):
In (1), focus is marked syntactically with the subscripted ‘f’ which is realized phonologically by a nuclear pitch accent. Clefting induces an obligatory intonation break. Therefore, in (2), focus is marked via word order and a nuclear pitch accent.
In English, focus also relates to phonology and has ramifications for how and where suprasegmental information such as rhythm, stress, and intonation is encoded in the grammar, and in particular for the intonational tunes that mark focus. [12] Speakers can use pitch accents on syllables to indicate which word(s) are in focus. New words are often accented while given words are not. The accented word(s) form the focus domain. However, not all of the words in a focus domain need be accented. (See [13] [14] [15] for rules on accent placement and focus-marking.) The focus domain can be either broad, as shown in (3), or narrow, as shown in (4) and (5):
The question/answer paradigm shown in (3)–(5) has been utilized by a variety of theorists [12] [16] to illustrate the range of contexts in which a sentence containing focus can be used felicitously. Specifically, the question/answer paradigm has been used as a diagnostic for what counts as new information. For example, the focus pattern in (3) would be infelicitous if the question were 'Did you see a grey dog or a black dog?'.
In (3) and (4), the pitch accent is marked in bold. In (3), the pitch accent is placed on dog but the entire noun phrase a grey dog is under focus. In (4), the pitch accent is also placed on dog but only the noun dog is under focus. In (5), pitch accent is placed on grey and only the adjective grey is under focus.
Historically, generative proposals made focus a feature bound to a single word within a sentence. Chomsky and Halle [17] formulated a Nuclear Stress Rule that proposed there to be a relation between the main stress of a sentence and a single constituent. Since this constituent is prominent sententially in a way that can contrast with lexical stress, this was originally referred to as "nuclear" stress. The purpose of this rule was to capture the intuition that within each sentence, there is one word in particular that is accented more prominently due to its importance – this is said to form the nucleus of that sentence.
Focus was later suggested to be a structural position at the beginning of the sentence (or on the left periphery) in Romance languages such as Italian, as the lexical head of a Focus Phrase (or FP, following the X-bar theory of phrase structure). Jackendoff, [18] Selkirk, [13] [14] Rooth, [19] [20] Krifka, [21] and Schwarzschild [15] argue that focus consists of a feature that is assigned to a node in the syntactic representation of a sentence. Because focus is now widely seen as corresponding to heavy stress, or nuclear pitch accent, this feature is often associated with the phonologically prominent element(s) of a sentence.
Sound structure (phonological and phonetic) studies of focus are not as numerous, as relational language phenomena tend to be of greater interest to syntacticians and semanticists. But this may be changing: a recent study found that focused words and phrases have a higher pitch range than other words in the same sentence, that words following the focus in both American English and Mandarin Chinese are lower than normal in pitch, and that words before the focus are unaffected. The precise usages of focus in natural language are still uncertain. A continuum of possibilities could be defined between precisely enunciated and staccato styles of speech, based on variations in pragmatics or timing.
Currently, there are two central themes in research on focus in generative linguistics. First, given what words or expressions are prominent, what is the meaning of some sentence? Rooth, [19] Jacobs, [22] Krifka, [21] and von Stechow [23] claim that there are lexical items and construction-specific rules that refer directly to the notion of focus. Dryer, [24] Kadmon, [25] Marti, [26] Roberts, [16] Schwarzschild, [27] Vallduvi, [28] and Williams [29] argue for accounts in which general principles of discourse explain focus sensitivity. [12] Second, given the meaning and syntax of some sentence, what words or expressions are prominent?
Focus directly affects the semantics, or meaning, of a sentence. Different ways of pronouncing the sentence affect the meaning, or what the speaker intends to convey. Focus distinguishes one interpretation of a sentence from other interpretations of the same sentence that do not differ in word order, but may differ in the way in which the words are taken to relate to each other. To see the effects of focus on meaning, consider the following examples:
In (6), accent is placed on Sue. There are two readings of (6) – broad focus shown in (7) and narrow focus shown in (8):
The meaning of (7) can be summarized as the only thing John did was introduce Bill to Sue. The meaning of (8) can be summarized as the only person to whom John introduced Bill is Sue.
In both (7) and (8), focus is associated with the focus-sensitive expression only. This is known as association with focus. The class of focus-sensitive expressions with which focus can be associated includes exclusives (only, just), non-scalar additives (merely, too), scalar additives (also, even), particularizers (in particular, for example), intensifiers, quantificational adverbs, quantificational determiners, sentential connectives, emotives, counterfactuals, superlatives, negation, and generics. [12] It is claimed that focus operators must c-command their focus.
In the alternative semantics approach to focus pioneered by Mats Rooth, each constituent has both an ordinary denotation and a focus denotation, which are composed by parallel computations. The ordinary denotation of a sentence is simply whatever denotation it would have in a non-alternative-based system, while its focus denotation can be thought of as the set containing all ordinary denotations one could get by substituting the focused constituent for another expression of the same semantic type. For a sentence such as (9), the ordinary denotation will be the proposition which is true iff Mary likes Sue. Its focus denotation will be the set of propositions such that, for some contextually relevant individual 'x', the proposition is true iff Mary likes 'x'. [30] [19] [20]
In formal terms, the ordinary denotation of (9) will be as shown below:
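One standard way of writing this in Rooth's double-bracket notation (a reconstruction, with m and s abbreviating Mary and Sue) is:

```latex
[\![\text{Mary likes } [\text{Sue}]_F]\!]^{o} \;=\; \mathrm{like}(m, s)
```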
Focus denotations are computed using the alternative sets provided by alternative semantics. In this system, most unfocused items denote the singleton set containing their ordinary denotations.
Focused constituents denote the set of all (contextually relevant) semantic objects of the same type.
In alternative semantics, the primary composition rule is Pointwise Functional Application. This rule can be thought of as analogous to the cross product.
Applying this rule to example (9) would give the following focus denotation if the only contextually relevant individuals are Sue, Bill, Lisa, and Mary
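The computation can be sketched in a few lines of Python. This is a toy model, not an implementation from the literature: the tuple encoding of propositions and the four-individual domain are invented for illustration. Unfocused items contribute singleton alternative sets, the focused object contributes the full set of relevant individuals, and Pointwise Functional Application combines every member of each set, analogously to a cross product.

```python
# Toy sketch of Pointwise Functional Application in alternative semantics.
# Names and the tuple encoding of propositions are illustrative only.

DOMAIN = ["sue", "bill", "lisa", "mary"]  # contextually relevant individuals

subj_alts = {"mary"}                        # unfocused subject: singleton set
verb_alts = [lambda s, o: ("like", s, o)]   # unfocused verb: singleton set
obj_alts = set(DOMAIN)                      # focused object [Sue]_F: all alternatives

# Pointwise application: combine every member of each alternative set,
# analogously to taking a cross product over the component sets.
focus_denotation = {
    v(s, o) for v in verb_alts for s in subj_alts for o in obj_alts
}
# focus_denotation now contains ("like", "mary", x) for each relevant x.
```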
The focus denotation can be "caught" by focus-sensitive expressions like "only" as well as other covert items such as the squiggle operator. [19] [20] [30]
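How a focus-sensitive operator consumes the focus denotation can also be sketched. In this toy rendering (the model of facts and all names are invented for illustration, and the treatment of "only" is simplified to extensional truth-checking), "Mary only likes [Sue]_F" holds iff the ordinary denotation is true and no distinct member of the focus denotation is true:

```python
# Toy model of "only" as a focus-sensitive operator (illustrative only).

DOMAIN = ["sue", "bill", "lisa", "mary"]
FACTS = {("like", "mary", "sue")}  # a toy model of what is true

def holds(prop):
    """Check a proposition against the toy model."""
    return prop in FACTS

ordinary = ("like", "mary", "sue")                    # ordinary denotation
alternatives = {("like", "mary", x) for x in DOMAIN}  # focus denotation

# "only": the ordinary denotation holds, and no distinct alternative does.
only_true = holds(ordinary) and all(
    not holds(p) for p in alternatives if p != ordinary
)
```

Adding a fact such as ("like", "mary", "bill") to the model would make the "only" sentence false, since a distinct alternative would then hold.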
Following Jacobs [22] and Williams, [29] Krifka [21] argues differently. Krifka claims focus partitions the semantics into a background part and focus part, represented by the pair:
Its logical form, represented in lambda calculus, is:
This pair is referred to as a structured meaning. Structured meanings allow for a compositional semantic approach to sentences that involve single or multiple foci. This approach follows Frege's (1897) Principle of Compositionality: the meaning of a complex expression is determined by the meanings of its parts, and the way in which those parts are combined into structured meanings. Krifka's structured meaning theory represents focus in a transparent and compositional fashion; it encompasses sentences with more than one focus as well as sentences with a single focus. Krifka claims the advantages of structured meanings are twofold: 1) We can access the meaning of an item in focus directly, and 2) Rooth's [19] [20] alternative semantics can be derived from a structured meaning approach but not vice versa. To see Krifka's approach illustrated, consider the following examples of single focus shown in (10) and multiple foci shown in (11):
Generally, the meaning of (10) can be summarized as John introduced Bill to Sue and no one else, and the meaning of (11) can be summarized as the only pair of persons such that John introduced the first to the second is Bill and Sue.
Specifically, the structured meaning of (10) is:
The background part of the structured meaning is introd(j, b, x), and the focus part is s.
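Putting the two parts together (a reconstruction in Krifka's pair notation, with j, b, and s abbreviating John, Bill, and Sue), the structured meaning is:

```latex
\langle\, \lambda x.\, \mathrm{introd}(j, b, x),\ s \,\rangle
```

Applying the background part to the focus part recovers the ordinary meaning, introd(j, b, s).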
Through a (modified) form of functional application (or beta reduction), the focus part of (10) and (11) is projected up through the syntax to the sentential level. Importantly, each intermediate level has a distinct meaning.
It has been claimed that new information in the discourse is accented while given information is not. Generally, the properties of new and given are referred to as a word's discourse status. Definitions of new and given vary. Halliday [31] defines given as "anaphorically" recoverable, while new is defined to be "textually and situationally non-derivable information". To illustrate this point, consider the following discourse in (12) and (13):
In (13) we note that the verb make is not given by the sentence in (12). It is discourse-new. Therefore, it is available for accentuation. However, toast in (13) is given in (12). Therefore, it is not available for accentuation. As previously mentioned, pitch accenting can relate to focus. Accented words are often said to be in focus, or F-marked, which is often represented with F-markers. The relationship between accent placement and F-marking is mediated by the discourse status of particular syntactic nodes. [33] The percolation of F-markings in a syntactic tree is sensitive to argument structure and head-phrase relations. [15]
Selkirk [13] [14] develops an explicit account of how F-marking propagates up syntactic trees. Accenting indicates F-marking. F-marking projects up a given syntactic tree such that both lexical items, i.e. terminal nodes, and phrasal levels, i.e. nonterminal nodes, can be F-marked. Specifically, a set of rules determines how and where F-marking occurs in the syntax. These rules are shown in (14) and (15):
To see how (14) and (15) apply, consider the following example:
Because there is no rule in (14) or (15) that licenses F-marking to the direct object from any other node, the direct object parrot must be accented as indicated in bold. Rule (15b) allows F-marking to project from the direct object to the head verb adopted. Rule (15a) allows F-marking to project from the head verb to the VP adopted a parrot. Selkirk [13] [14] assumes the subject Judy is accented if F-marked as indicated in bold. [33]
Schwarzschild [15] points out weaknesses in the ability of Selkirk's [13] [14] theory to predict accent placement based on facts about the discourse. Selkirk's theory says nothing about how accentuation arises in sentences with entirely old information, and she does not fully articulate the notion of discourse status and its relation to accent marking. Schwarzschild differs from Selkirk in that he develops a more robust model of discourse status. Discourse status is determined via the entailments of the context. This is achieved through the definition in (16):
The operation in (16b) can apply to any constituent. ∃-type-shifting "is a way of transforming syntactic constituents into full propositions so that it is possible to check whether they are entailed by the context". [33] For example, the result of ∃-type-shifting the VP in (17) is (18):
Note that (18) is a full proposition. The existential F-closure in (16b) refers to the operation of replacing the highest F-marked node with an existentially closed variable. The operation is shown in (19) and (20):
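As a hypothetical illustration (the VP praised [John]_F is not the article's own example), ∃-type-shifting a VP yields a full proposition, and existential F-closure then replaces the F-marked node with an existentially bound variable:

```latex
\exists x\,[\mathrm{praised}(x, \mathrm{john})] \qquad \text{(∃-type-shifted VP)}
```

```latex
\exists Y\,\exists x\,[\mathrm{praised}(x, Y)] \qquad \text{(existential F-closure)}
```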
Given the discourse context in (21a) it is possible to determine the discourse status of any syntactic node in (21b):
If the VP in (21a) is the salient antecedent for the VP in (21b), then the VP in (21b) counts as given. The ∃-type-shifted VP in (21a) is shown in (22). The existential F-closure of the VP in (21b) is shown in (23):
(22) entails (23). Therefore, the VP of (21b) counts as given. Schwarzschild [15] assumes an optimality theoretic grammar. [34] Accent placement is determined by a set of violable, hierarchically ranked constraints as shown in (24):
The ranking Schwarzschild [15] proposes is seen in (25):
As seen, GIVENness relates F-marking to discourse status. Foc relates F-marking to accent placement: it simply requires that an F-marked phrase contain an accented constituent. AvoidF states that less F-marking is preferable to more F-marking. HeadArg encodes the head–argument asymmetry into the grammar directly. [33]
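The logic of evaluation under ranked, violable constraints can be sketched computationally. In this minimal model (the candidate accent patterns and their violation counts are invented for illustration; only the constraint names follow Schwarzschild), the winning candidate is the one whose violation profile is best on the highest-ranked constraint, with ties broken by lower-ranked constraints:

```python
# Minimal sketch of optimality-theoretic evaluation with ranked, violable
# constraints. Constraint names follow Schwarzschild; the candidates and
# violation counts below are invented for illustration.

RANKING = ["GIVENness", "Foc", "AvoidF", "HeadArg"]

# Each candidate accent/F-marking pattern maps constraints to violations.
candidates = {
    "accent-on-noun": {"GIVENness": 0, "Foc": 0, "AvoidF": 1, "HeadArg": 0},
    "accent-on-verb": {"GIVENness": 0, "Foc": 0, "AvoidF": 2, "HeadArg": 0},
    "no-accent":      {"GIVENness": 1, "Foc": 1, "AvoidF": 0, "HeadArg": 0},
}

def optimal(cands, ranking):
    """Return the candidate with the lexicographically best violation
    profile: fewest violations on the highest-ranked constraint, ties
    broken by successively lower-ranked constraints."""
    return min(cands, key=lambda c: [cands[c][r] for r in ranking])

winner = optimal(candidates, RANKING)
```

Here "no-accent" loses despite incurring no AvoidF violations, because it violates the higher-ranked GIVENness; the ranking, not the total number of violations, decides the outcome.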
Recent empirical work by German et al. [33] suggests that both Selkirk's [13] [14] and Schwarzschild's [15] theories of accentuation and F-marking make incorrect predictions. Consider the following context:
It has been noted that prepositions are intrinsically weak and do not readily take accent. [32] [33] However, both Selkirk and Schwarzschild predict that in the narrow focus context, an accent will occur at most on the preposition in (27) as shown in (28):
However, the production experiment reported in German et al. [33] showed that subjects are more likely to accent verbs or nouns as opposed to prepositions in the narrow focused context, thus ruling out accent patterns shown in (28). German et al. argue for a stochastic constraint-based grammar similar to Anttila [35] and Boersma [36] that more fluidly accounts for how speakers accent words in discourse.