In generative grammar and related frameworks, a node in a parse tree c-commands its sister node and all of its sister's descendants. In these frameworks, c-command plays a central role in defining and constraining operations such as syntactic movement, binding, and scope. Tanya Reinhart introduced c-command in 1976 as a key component of her theory of anaphora. The term is short for "constituent command".
Common terms to represent the relationships between nodes are below (refer to the tree on the right):
The standard definition of c-command is based partly on the relationship of dominance: node N1 dominates node N2 if N1 is above N2 in the tree and one can trace a path from N1 to N2 moving only downwards in the tree (never upwards); that is, if N1 is a parent, grandparent, etc. of N2. For a node N1 to c-command another node N2, the parent of N1 must dominate N2.
Based upon this definition of dominance, node N1 c-commands node N2 if and only if neither N1 nor N2 dominates the other and the parent of N1 dominates N2.
For example, according to the standard definition, in the tree at the right,
If node A c-commands node B, and B also c-commands A, it can be said that A symmetrically c-commands B. If A c-commands B but B does not c-command A, then A asymmetrically c-commands B. The notion of asymmetric c-command plays a major role in Richard S. Kayne's theory of Antisymmetry.
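The definitions above can be made concrete with a short sketch. The following is a minimal illustration, not an implementation from the literature; the `Node` class, the tree "S → NP VP; VP → V NP", and all labels are assumptions chosen for the example.

```python
class Node:
    """A node in a toy parse tree (labels are illustrative only)."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

def dominates(a, b):
    """a dominates b iff a is a proper ancestor of b (a downward path exists)."""
    node = b.parent
    while node is not None:
        if node is a:
            return True
        node = node.parent
    return False

def c_commands(a, b):
    """a c-commands b iff neither dominates the other and a's parent
    dominates b, i.e. b is a's sister or a descendant of a's sister."""
    if a is b or dominates(a, b) or dominates(b, a):
        return False
    return a.parent is not None and dominates(a.parent, b)

def asymmetrically_c_commands(a, b):
    """a c-commands b, but b does not c-command a."""
    return c_commands(a, b) and not c_commands(b, a)

# S -> NP VP ; VP -> V NP  (e.g. "John likes her")
np_subj, v, np_obj = Node("NP"), Node("V"), Node("NP")
vp = Node("VP", [v, np_obj])
s = Node("S", [np_subj, vp])

print(c_commands(np_subj, np_obj))               # → True: subject c-commands into VP
print(c_commands(np_obj, np_subj))               # → False: VP does not dominate the subject
print(asymmetrically_c_commands(np_subj, np_obj))  # → True
```

Note that the sisters V and NP (object) c-command each other symmetrically, while the subject NP asymmetrically c-commands everything inside VP, matching the definitions above.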
A simplification of the standard definition of c-command is as follows:
A node A c-commands a node B iff
As such, we get sentences like:
Here, [node A] John c-commands [node B]. As a result, [node A] also c-commands [node C] and [node D], which means that John c-commands both [likes] and [her].
In a parse tree (syntax tree), nodes A and B are replaced with DP constituents, where the DP John c-commands the DP he. In a more complex sentence, such as (2), the pronoun could interact with its antecedent and be interpreted in two ways.
In this example, two interpretations could be made:
In the first interpretation, John c-commands he and also co-refers with he. Co-reference is indicated by the same subscript (i) under both of the DP nodes. The second interpretation shows that John c-commands he but does not co-refer with the DP he. Since co-reference is not possible, there are different subscripts under the DP John (i) and the DP he (m).
Example sentences like these show the basic relationship between a pronoun and its antecedent expression. However, in definite anaphora, where a pronoun takes a definite description as its antecedent, we see that a pronoun cannot co-refer with a name within its domain.
Here, [he] c-commands [John], but [he]i cannot co-refer with [John]i*, and the only available interpretation is that someone else thinks that John is smart.
In response to the limits of c-command, Reinhart proposes a constraint on definite anaphora:
The notion of c-command can be found in frameworks such as Binding Theory, which describes the syntactic relationship between pronouns and their antecedents. [4] The binding theory framework was first introduced by Chomsky in 1973 in relation to the treatment of various anaphoric phenomena, and has been revised repeatedly since. Chomsky's analysis places a constraint on the relationship between a pronoun and a variable antecedent: a variable cannot be the antecedent of a pronoun to its left.
The first major revision to binding theory is found in Chomsky (1980), with the following standard definitions:
Compared to definite anaphora, quantificational expressions work differently and are more restrictive. As proposed by Reinhart in 1973, a quantificational expression must c-command any pronoun that it binds. [6]
In this example, the quantifier [every man] c-commands the pronoun [he], and a bound-variable reading is possible because the pronoun 'he' is bound by the universal quantifier 'every man'. The sentence in (3) shows two possible readings as a result of the binding of the pronoun by the universal quantifier. The reading in (3a) states that every man thinks that he himself is intelligent. Meanwhile, the reading in (3b) states that every man thinks that someone else (he) is intelligent. In general, for a pronoun to be bound by a quantifier and a bound-variable reading to be possible, (i) the quantifier must c-command the pronoun and (ii) both the quantifier and the pronoun must occur in the same sentence. [8]
Relative to the history of the concept of c-command, one can identify two stages: (i) analyses focused on applying c-command to solve specific problems relating to coreference and non-coreference; (ii) analyses focused on c-command as a structural relation bearing on a wide range of natural language phenomena that include but are not limited to tracking coreference and non-coreference.
The development of 'c-command' grew out of the notion of coreference; this constitutes the first stage of the concept. [9] In early work on coreference, Jackendoff (1972) [10] states: If for any NP1 and NP2 in a sentence, there is no entry in the table NP1 + coref NP2, enter in the table NP1 - coref NP2 (OBLIGATORY)
In other words, this rule states that any noun phrases that have not been associated with a coreference rule are assumed to be noncoreferential. The tree to the right illustrates this through the cyclical leftward movement of the pronoun and/or noun.
This rule was then revised by Lasnik (1976), [11] who states:
NP1 cannot be interpreted as coreferential with NP2 iff NP1 precedes and commands NP2 and NP2 is not a pronoun. If NP1 precedes and commands NP2, and NP2 is not a pronoun, then NP1 and NP2 are noncoreferential.
According to this rule, NP2 (denoted as NPy in the tree on the left) must be a pronoun for the sentence to be grammatical, regardless of whether NP1 (denoted as NPx in the tree) is a pronoun. This can be shown through the examples below.
a) Lucy greets the customers she serves.
b) *She greets the customers Lucy serves.
c) *Lucy greets the customers Lucy serves.
d) She greets the customers she serves.
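Lasnik's noncoreference rule lends itself to a mechanical check. The sketch below uses an assumed flat encoding of each NP (its linear position, the path of clauses containing it, and whether it is a pronoun) instead of a full tree; "commands" is approximated as: the first S node dominating NP1 also dominates NP2, i.e. NP1's clause path is a prefix of NP2's. All names and encodings are assumptions for the example.

```python
def commands(clause_path1, clause_path2):
    """The first S dominating NP1 also dominates NP2, encoded as:
    NP1's clause path is a prefix of NP2's clause path."""
    return clause_path2[:len(clause_path1)] == clause_path1

def must_be_disjoint(np1, np2):
    """Lasnik (1976): NP1 and NP2 are noncoreferential if NP1 precedes
    and commands NP2 and NP2 is not a pronoun."""
    return (np1["pos"] < np2["pos"]
            and commands(np1["clause"], np2["clause"])
            and not np2["pronoun"])

# a) "Lucy greets the customers she serves."  (she inside a relative clause S1)
lucy = {"pos": 0, "clause": ("S0",), "pronoun": True is False or False,}
lucy = {"pos": 0, "clause": ("S0",), "pronoun": False}
she = {"pos": 3, "clause": ("S0", "S1"), "pronoun": True}
print(must_be_disjoint(lucy, she))     # → False: coreference is not blocked

# b) "*She greets the customers Lucy serves."
she_b = {"pos": 0, "clause": ("S0",), "pronoun": True}
lucy_b = {"pos": 3, "clause": ("S0", "S1"), "pronoun": False}
print(must_be_disjoint(she_b, lucy_b)) # → True: NP2 is a name, so coreference is blocked
```

Sentence (c) is ruled out the same way as (b): the second Lucy is not a pronoun, so the rule forces disjoint reference. Sentence (d) is fine because NP2 is a pronoun.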
In this edition of coreference, Lasnik sets some restrictions on the permissible locations of NP1 and NP2, which hint at potential dominance.
This leads to the second stage of the concept of c-command, in which dominance in particular is thoroughly explored. The term c-command was introduced by Tanya Reinhart in her 1976 dissertation and is a shortened form of constituent command. Reinhart thanks Nick Clements for suggesting both the term and its abbreviation. [12] Reinhart (1976) states that...
Node A c-commands node B iff the branching node ⍺1 most immediately dominating A either dominates B or is immediately dominated by a node ⍺2 which dominates B, and ⍺2 is of the same category type as ⍺1.
In other words, “⍺ c-commands β iff every branching node dominating ⍺ dominates β”
Chomsky adds a second layer to the previous edition of the c-command rule by introducing the requirement of maximal projections. He states...
⍺ c-commands β iff every maximal projection dominating ⍺ dominates β
This became known as "m-command."
The tree to the right compares the two definitions in this stage. Reinhart's "c-command" focuses on the branching nodes whereas Chomsky's "m-command" focuses on the maximal projections. [13]
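The difference between the two definitions can also be sketched on a minimal X-bar configuration [XP YP [X' X ZP]]. This is an illustrative toy, not an implementation from the literature: maximal projections are identified simply by a label ending in "P", a convention assumed for this example.

```python
class Node:
    """A node in a toy parse tree (labels are illustrative only)."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

def dominates(a, b):
    """a dominates b iff a is a proper ancestor of b."""
    node = b.parent
    while node is not None:
        if node is a:
            return True
        node = node.parent
    return False

def first_ancestor(n, test):
    """Lowest ancestor of n satisfying test, or None."""
    node = n.parent
    while node is not None and not test(node):
        node = node.parent
    return node

def c_commands(a, b):
    """Reinhart-style: the first branching node dominating A dominates B."""
    anc = first_ancestor(a, lambda n: len(n.children) >= 2)
    return (a is not b and not dominates(a, b) and not dominates(b, a)
            and anc is not None and dominates(anc, b))

def m_commands(a, b):
    """Chomsky-style: the first maximal projection (label ending in 'P')
    dominating A dominates B."""
    anc = first_ancestor(a, lambda n: n.label.endswith("P"))
    return (a is not b and not dominates(a, b) and not dominates(b, a)
            and anc is not None and dominates(anc, b))

# [XP YP [X' X ZP]]: YP is the specifier, ZP the complement of head X.
spec, x, compl = Node("YP"), Node("X"), Node("ZP")
xbar = Node("X'", [x, compl])
xp = Node("XP", [spec, xbar])

print(c_commands(x, spec))   # → False: the branching node X' does not dominate YP
print(m_commands(x, spec))   # → True: the maximal projection XP dominates YP
```

The head X c-commands (and m-commands) its complement ZP either way; the two definitions diverge only for the specifier, which the head m-commands but does not c-command.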
The current and widely used definition of c-command that Reinhart developed was not new to syntax. Similar configurational notions had been circulating for more than a decade. In 1964, Klima defined a configurational relationship between nodes that he labeled "in construction with". In addition, Langacker proposed the similar notion of "command" in 1969. Reinhart's definition is also closely related to Chomsky's 'superiority relation.' [14]
Over the years, the validity and importance of c-command for the theory of syntax have been widely debated. [15] Linguists such as Benjamin Bruening have presented empirical data suggesting that c-command is flawed and fails to predict whether pronouns are used properly. [16]
In most cases, c-command correlates with precedence (linear order): if node A c-commands node B, it is usually the case that node A also precedes node B. Furthermore, basic S(V)O (subject-verb-object) word order in English correlates positively with a hierarchy of syntactic functions: subjects precede (and c-command) objects, and subjects typically precede objects in declarative sentences in English and related languages. Bruening (2014) argues that theories of syntax that build on c-command have misconstrued the importance of precedence and/or the hierarchy of grammatical functions (i.e. the grammatical function of subject versus object). The grammatical rules governing pronouns and the variable binding of pronouns that co-occur with quantified noun phrases and wh-phrases were originally grouped together and interpreted as being the same, but Bruening points out that there is a notable difference between the two and provides his own theory on the matter. Bruening suggests that the current function of c-command is inaccurate and concludes that what c-command is intended to address is more accurately analyzed in terms of precedence and grammatical functions. Furthermore, the c-command concept was developed primarily on the basis of syntactic phenomena of English, a language with relatively strict word order. When confronted with the much freer word order of many other languages, the insights provided by c-command are less compelling, since linear order becomes less important.
As previously suggested, the phenomena that c-command is intended to address may be more plausibly examined in terms of linear order and a hierarchy of syntactic functions. Concerning the latter, some theories of syntax take a hierarchy of syntactic functions to be primitive. This is true of Head-Driven Phrase Structure Grammar (HPSG), [17] Lexical Functional Grammar (LFG), [18] and dependency grammars (DGs). [19] The hierarchy of syntactic functions that these frameworks posit is usually something like the following: SUBJECT > FIRST OBJECT > SECOND OBJECT > OBLIQUE OBJECT. Numerous mechanisms of syntax are then addressed in terms of this hierarchy.
Like Bruening, Barker (2012) offers his own critique of c-command, stating that it is not relevant for quantificational binding in English. While not giving a complete characterization of the conditions under which a quantifier can bind a pronoun, Barker proposes a scope requirement. [20]
The sentence in (5) indicates that [each woman] scopes over [someone] and this supports the claim that [each woman] can take scope over a pronoun such as in (4).
The sentence in (7) indicates that [each woman] cannot scope over [someone], showing that the quantifier does not take scope over the pronoun. As such, there is no interpretation in which each woman in sentence (6) refers to she, and coreference is not possible, which is indicated with a different subscript for she.
Bruening, along with other linguists such as Chung-chieh Shan and Chris Barker, has challenged Reinhart's claims by suggesting that variable binding and co-reference do not relate to each other. [22] Barker (2012) aims to demonstrate how variable binding can function through the use of continuations without c-command. This is achieved by avoiding c-command and instead focusing on the notion of precedence in order to present a system that is capable of binding variables and accounting for phenomena such as crossover violations. Barker shows that precedence, in the form of an evaluation order, can be used in place of c-command. [20]
Another important work of criticism is Wuijts (2016), a response to Barker's stance on c-command that asks of Barker's work: how are "alternatives to c-command for the binding of pronouns justified and are these alternatives adequate?" Wuijts examines Barker's work in depth and concludes that pronouns are semantically interpreted as functions of their own context. [23]
Wuijts further claims that a binder can take the outcome as an argument and bind the pronoun, all through a system that uses continuations without the notion of c-command. Both Bruening's and Barker's alternatives to c-command for the binding of pronouns are judged 'adequate alternatives' which accurately show how co-reference and variable binding can operate without c-command. Wuijts brings forward two primary points that justify using a form of precedence:
Both Barker and Wuijts state that the goal is not to eliminate c-command entirely but to recognize that better alternatives exist. In other words, c-command can still be used to effectively differentiate between strong and weak crossover, but it may not be as successful in other areas, such as the asymmetry previously mentioned. Wuijts concludes that an alternative without c-command may be preferred and suggests that the current alternatives to c-command point to precedence, the binary relation between nodes in a tree structure, as being of great importance.
Keek Cho investigates Chomsky's binding theory and proposes that lexical items in argument structures that stem from the same predicate require an m-command-based binding relation, whereas lexical items in argument structures that stem from different predicates require a c-command-based binding relation. [25]
Cho (2019) challenges Chomsky's binding theory (1995) by showing that its definition of c-command in binding principles B and C fails to work across the argument structures of different predicates. Cho states that binding principles use m-command-based c-command for intra-argument structures and command-based c-command for inter-argument structures. [26] With this statement, Cho implies that the notion of c-command used in binding principles is actually m-command and that both c-command and m-command have their own limitations.
Looking at Binding Relations in Intra-Argument Structures
By analyzing the following sentences, Cho is able to support the argument that the notion of c-command used in binding principles is actually m-command:
By analyzing sentence (1a), it is apparent that the governing category for himself, the anaphor, is the entire sentence The tall boy will hurt himself. The antecedent, boy, c-commands himself. This is done in a way that allows the categorial maximal projection of the former to c-command the categorial maximal projection of the latter. Cho argues that the notion of c-command in sentences (1a), (1b), and (1c) is in fact m-command and that the m-command-based binding principles deal with binding relations of lexical items and/or arguments that are in the same argument structure of a predicate.
In sentence (1a), boy and himself are lexical items that serve as external and internal arguments of hurt, a two-place predicate. The two lexical items boy and himself are also in the same argument structure of the same predicate.
In sentence (1b), lady and her are lexical items that serve as external and internal arguments for showed, a three-place predicate. The two lexical items lady and her are also in the same argument structure of the same predicate.
In sentence (1c), believes is a two-place main-clause predicate that takes woman, the subject, as its external argument and that we hate Jina, the embedded clause, as its internal argument.
Looking at Binding Relations in Inter-Argument Structures
Cho argues that binding relations in the intra-argument structures utilize m-command-based c-command which is limited to the binding relations of arguments and/or lexical items belonging to argument structures of the same predicate. Cho makes use of the following sentences to demonstrate how command-based c-command operates for inter-argument structure binding relations:
Cho not only uses sentences (2a)-(2g) to explain command-based c-command and its role in inter-argument structure binding relations but also claims that command-based c-command can account for unexplained binding relations between different argument structures joined by a conjunctive phrase as well as explain why sentence (7d) is grammatical and (7e) is ungrammatical. [27]
The notion of c-command reflects the relation of a pronoun to its antecedent expression. In general, pronouns such as it are used to refer to previous concepts that are prominent and highly predictable, and require an antecedent representation to refer back to. For a proper interpretation to occur, the antecedent representation must be made accessible within the comprehender's mind and then aligned with the appropriate pronoun, so that the pronoun has something to refer to. Studies suggest that there is a connection between pronoun prominence and the referent in a comprehender's cognitive state. [28] Research has shown that prominent antecedent representations are more active compared to less prominent ones. [29]
In sentence (i), there is an active representation of the antecedent my brush in the comprehender's mind and it coreferences with the following pronoun it. Pronouns tend to refer back to the salient object within the sentence, such as my brush in sentence (i).
Furthermore, the more active an antecedent representation is, the more readily it is available for interpretation when a pronoun emerges, which is useful for operations such as pronoun resolution. [30]
In sentence (ii), my brush is less prominent as there are other objects within the sentence that are more prominent, such as my black bag. The antecedent my black bag is more active in the representation in the comprehender's mind, as it is more prominent, and coreference for the pronoun it with the antecedent my brush is harder.
Based on findings from memory retrieval studies, Foraker suggests that prominent antecedents show an advantage in retrieval when a following pronoun is introduced. [31] Furthermore, when sentences are syntactically clefted, antecedent representations, such as pronouns, become more distinctive in working memory and are more easily integrated into subsequent discourse operations. In other words, antecedent pronouns placed at the beginning of sentences are easier to remember, as they are held within focal attention. [32] Thus, the sentences are easily interpreted and understood. The researchers also found that gendered pronouns, such as he/she, increase prominence compared to unambiguous pronouns, such as it. In addition, noun phrases also become more prominent in representation when syntactically clefted. [33] It has also been suggested that there is a relationship between antecedent retrieval and sensitivity to c-command constraints on quantificational binding, and that c-command provides relational information which helps to retrieve antecedents and to distinguish quantificational phrases that allow bound-variable pronoun readings from those that do not. [34]
Recent research by Khetrapal and Thornton (2017) questioned whether children with Autism Spectrum Disorders (ASD) are capable of computing the hierarchical structural relationship of c-command. Khetrapal and Thornton raised the possibility that children with ASD may be relying on a form of linear strategy for reference assignment. [35] The study aimed to investigate the status of c-command in children with ASD by testing participants on their interpretation of sentences involving c-command and a linear strategy for reference assignment. The researchers found that children with high-functioning autism (HFA) did not show any difficulties with computing the hierarchical relationship of c-command. The results suggest that children with HFA do not have a syntactic deficit; however, Khetrapal and Thornton stress that further cross-linguistic investigation is essential.
Lexical semantics, as a subfield of linguistic semantics, is the study of word meanings. It includes the study of how words structure their meaning, how they act in grammar and compositionality, and the relationships between the distinct senses and uses of a word.
Government and binding is a theory of syntax and a phrase structure grammar in the tradition of transformational grammar developed principally by Noam Chomsky in the 1980s. This theory is a radical revision of his earlier theories and was later revised in The Minimalist Program (1995) and several subsequent papers, the latest being Three Factors in Language Design (2005). Although there is a large literature on government and binding theory which is not written by Chomsky, Chomsky's papers have been foundational in setting the research agenda.
In linguistics, the minimalist program is a major line of inquiry that has been developing inside generative grammar since the early 1990s, starting with a 1993 paper by Noam Chomsky.
In linguistics, anaphora is the use of an expression whose interpretation depends upon another expression in context. In a narrower sense, anaphora is the use of an expression that depends specifically upon an antecedent expression and thus is contrasted with cataphora, which is the use of an expression that depends upon a postcedent expression. The anaphoric (referring) term is called an anaphor. For example, in the sentence Sally arrived, but nobody saw her, the pronoun her is an anaphor, referring back to the antecedent Sally. In the sentence Before her arrival, nobody saw Sally, the pronoun her refers forward to the postcedent Sally, so her is now a cataphor. Usually, an anaphoric expression is a pro-form or some other kind of deictic expression. Both anaphora and cataphora are species of endophora, referring to something mentioned elsewhere in a dialog or text.
In grammar and theoretical linguistics, government or rection refers to the relationship between a word and its dependents. One can discern between at least three concepts of government: the traditional notion of case government, the highly specialized definition of government in some generative models of syntax, and a much broader notion in dependency grammars.
In linguistics, binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents. For instance in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations, e.g. the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding has been a major area of research in syntax and semantics since the 1970s and, as the name implies, is a core component of government and binding theory.
In linguistics, coreference, sometimes written co-reference, occurs when two or more expressions refer to the same person or thing; they have the same referent. For example, in Bill said Alice would arrive soon, and she did, the words Alice and she refer to the same person.
In generative grammar, non-configurational languages are languages characterized by a flat phrase structure, which allows syntactically discontinuous expressions, and a relatively free word order.
In generative grammar and related approaches, the logical form (LF) of a linguistic expression is the variant of its syntactic structure which undergoes semantic interpretation. It is distinguished from phonetic form, the structure which corresponds to a sentence's pronunciation. These separate representations are postulated in order to explain the ways in which an expression's meaning can be partially independent of its pronunciation, e.g. scope ambiguities.
Exceptional case-marking (ECM), in linguistics, is a phenomenon in which the subject of an embedded infinitival verb seems to appear in a superordinate clause and, if it is a pronoun, is unexpectedly marked with object case morphology. The unexpected object case morphology is deemed "exceptional". The term ECM itself was coined in the Government and Binding grammar framework although the phenomenon is closely related to the accusativus cum infinitivo constructions of Latin. ECM-constructions are also studied within the context of raising. The verbs that license ECM are known as raising-to-object verbs. Many languages lack ECM-predicates, and even in English, the number of ECM-verbs is small. The structural analysis of ECM-constructions varies in part according to whether one pursues a relatively flat structure or a more layered one.
A reciprocal pronoun is a pronoun that indicates a reciprocal relationship. A reciprocal pronoun can be used for one of the participants of a reciprocal construction, i.e. a clause in which two participants are in a mutual relationship. The reciprocal pronouns of English are one another and each other, and they form the category of anaphors along with reflexive pronouns.
In generative linguistics, PRO is a pronominal determiner phrase (DP) without phonological content. As such, it is part of the set of empty categories. The null pronoun PRO is postulated in the subject position of non-finite clauses. One property of PRO is that, when it occurs in a non-finite complement clause, it can be bound by the main clause subject or the main clause object. The presence of PRO in non-finite clauses lacking overt subjects allows a principled solution for problems relating to binding theory.
The linguistics wars were extended disputes among American theoretical linguists that occurred mostly during the 1960s and 1970s, stemming from a disagreement between Noam Chomsky and several of his associates and students. The debates started in 1967 when linguists Paul Postal, John R. Ross, George Lakoff, and James D. McCawley—self-dubbed the "Four Horsemen of the Apocalypse"—proposed an alternative approach in which the relation between semantics and syntax is viewed differently, which treated deep structures as meaning rather than syntactic objects. While Chomsky and other generative grammarians argued that meaning is driven by an underlying syntax, generative semanticists posited that syntax is shaped by an underlying meaning. This intellectual divergence led to two competing frameworks in generative semantics and interpretive semantics.
In linguistics, locality refers to the proximity of elements in a linguistic structure. Constraints on locality limit the span over which rules can apply to a particular structure. Theories of transformational grammar use syntactic locality constraints to explain restrictions on argument selection, syntactic binding, and syntactic movement.
In semantics, a donkey sentence is a sentence containing a pronoun which is semantically bound but syntactically free. They are a classic puzzle in formal semantics and philosophy of language because they are fully grammatical and yet defy straightforward attempts to generate their formal language equivalents. In order to explain how speakers are able to understand them, semanticists have proposed a variety of formalisms including systems of dynamic semantics such as Discourse representation theory. Their name comes from the example sentence "Every farmer who owns a donkey beats it", in which "it" acts as a donkey pronoun because it is semantically but not syntactically bound by the indefinite noun phrase "a donkey". The phenomenon is known as donkey anaphora.
A bound variable pronoun is a pronoun that has a quantified determiner phrase (DP) – such as every, some, or who – as its antecedent.
Logophoricity is a phenomenon of binding relation that may employ a morphologically different set of anaphoric forms, in the context where the referent is an entity whose speech, thoughts, or feelings are being reported. This entity may or may not be distant from the discourse, but the referent must reside in a clause external to the one in which the logophor resides. The specially-formed anaphors that are morphologically distinct from the typical pronouns of a language are known as logophoric pronouns, originally coined by the linguist Claude Hagège. The linguistic importance of logophoricity is its capability to do away with ambiguity as to who is being referred to. A crucial element of logophoricity is the logophoric context, defined as the environment where use of logophoric pronouns is possible. Several syntactic and semantic accounts have been suggested. While some languages may not be purely logophoric, logophoric context may still be found in those languages; in those cases, it is common to find that in the place where logophoric pronouns would typically occur, non-clause-bounded reflexive pronouns appear instead.
In linguistics, crossover effects are restrictions on possible binding or coreference that hold between certain phrases and pronouns. Coreference that is normal and natural when a pronoun follows its antecedent becomes impossible, or at best just marginally possible, when "crossover" is deemed to have occurred, e.g. ?Who1 do his1 friends admire __1? The term itself refers to the traditional transformational analysis of sentences containing leftward movement, whereby it appears as though the fronted constituent crosses over the expression with which it is coindexed on its way to the front of the clause. Crossover effects are divided into strong crossover (SCO) and weak crossover (WCO). The phenomenon occurs in English and related languages, and it may be present in all natural languages that allow fronting.
The lexicalist hypothesis is a hypothesis proposed by Noam Chomsky in which he claims that syntactic transformations can operate only on syntactic constituents. It says that the system of grammar that assembles words is separate and different from the system of grammar that assembles phrases out of words.
In linguistics, the syntax–semantics interface is the interaction between syntax and semantics. Its study encompasses phenomena that pertain to both syntax and semantics, with the goal of explaining correlations between form and meaning. Specific topics include scope, binding, and lexical semantic properties such as verbal aspect and nominal individuation, semantic macroroles, and unaccusativity.