In linguistics, control is a construction in which the understood subject of a given predicate is determined by some expression in context. Stereotypical instances of control involve verbs: a superordinate verb "controls" the arguments of a subordinate, nonfinite verb. Control was studied intensively in the government and binding framework of the 1980s, and much of the terminology from that era is still used today. [1] In the days of Transformational Grammar, control phenomena were discussed in terms of Equi-NP deletion. [2] Control is often analyzed in terms of a null pronoun called PRO. Control is also related to raising, although there are important differences between the two.
Standard instances of (obligatory) control are present in the following sentences:
Each of these sentences contains two verbal predicates. Each time the control verb is on the left, and the verb whose arguments are controlled is on the right. The control verb determines which expression is interpreted as the subject of the verb on the right. The first three sentences are examples of subject control, since the subject of the control verb is also the understood subject of the subordinate verb. The second three examples are instances of object control, because the object of the control verb is understood as the subject of the subordinate verb. The argument of the matrix predicate that functions as the subject of the embedded predicate is the controller. The controllers are in bold in the examples.
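The division of labor just described, in which the control verb lexically determines its controller, can be sketched as a toy model. The lexicon and helper function below are illustrative assumptions for exposition, not part of any real grammar formalism or NLP library:

```python
# Toy sketch: a control verb lexically determines which of its
# arguments is understood as the subject of the embedded predicate.
# The lexicon below is a hypothetical illustration.

CONTROL_LEXICON = {
    "promise": "subject",  # subject control
    "try": "subject",
    "refuse": "subject",
    "ask": "object",       # object control
    "tell": "object",
    "force": "object",
}

def find_controller(matrix_verb, subject, obj=None):
    """Return the argument understood as the subject of the embedded verb."""
    role = CONTROL_LEXICON[matrix_verb]
    return subject if role == "subject" else obj

# "Susan promised Bill to help" -> Susan does the helping (subject control)
# "Susan asked Bill to help"    -> Bill does the helping (object control)
```

The point of the sketch is only that the choice of controller is a lexical property of the matrix verb, not something computable from the surface string alone.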
Control verbs have semantic content; they semantically select their arguments, that is, their appearance strongly influences the nature of the arguments they take. [3] In this regard, they are very different from auxiliary verbs, which lack semantic content and do not semantically select arguments. Compare the following pairs of sentences:
The a-sentences contain auxiliary verbs that do not select the subject argument. This means that the embedded verbs go, do, and lie and cheat are responsible for semantically selecting the subject argument. The point is that while control verbs may have the same outward appearance as auxiliary verbs, the two verb types are quite different.
Control verbs (such as promise, stop, try, ask, tell, force, yearn, refuse, attempt) obligatorily induce a control construction. That is, when control verbs appear, they inherently determine which of their arguments controls the embedded predicate. Control is hence obligatorily present with these verbs. In contrast, the arguments of many verbs can be controlled even when a superordinate control verb is absent, e.g.
In one sense, control is obligatory in these sentences because the arguments of the present participles singing, understanding, and holding are clearly controlled by the matrix subjects. In another sense, however, control is non-obligatory (or optional) because no control predicate is present that necessitates that control occur. [4] General contextual factors determine which expression is understood as the controller. The controller is the subject in these sentences because the subject establishes point of view.
Some researchers have begun to use the term "obligatory control" simply to mean that there is a grammatical dependency between the controlled subject and its controller, even if that dependency is not strictly required. "Non-obligatory control", on the other hand, may be used simply to mean that no grammatical dependency is involved. [5] Both "obligatory control" and "non-obligatory control" can be present in a single sentence. The following example can mean either that the pool had been in the hot sun all day (so it was nice and warm), in which case there is a syntactic dependency between "the pool" and "being", or that the speaker was in the hot sun all day (so the pool was nice and cool), in which case there is no grammatical dependency between "being" and the understood controller (the speaker). [6] In such non-obligatory control sentences, the understood controller apparently needs to be either a perspective holder in the discourse or an established topic. [7]
The pool was the perfect temperature after being in the hot sun all day.
Arbitrary control occurs when the controller is understood to be anybody in general, e.g. [8]
The understood subject of the gerunds in these sentences is indeterminate; any generic person will do. In such cases, control is said to be "arbitrary". Any time the understood subject of a given predicate is not present in the linguistic or situational context, a generic subject (e.g. 'one') is understood.
Theoretical linguistics posits the existence of the null pronoun PRO as the theoretical basis for the analysis of control structures. PRO behaves in a sentence much as an ordinary pronoun does, but it is inaudible. [9] PRO is added to the predicate, where it occupies the position one would typically associate with an overt subject (if one were present). The following trees illustrate PRO in both constituency-based structures of phrase structure grammars and dependency-based structures of dependency grammars: [10]
The constituency-based trees are the a-trees on the left, and the dependency-based trees are the b-trees on the right. Certainly, aspects of these trees (especially of the constituency trees) can be disputed. In the current context, the trees are intended merely to illustrate how control and PRO are conceived of. The indices are a common means of identifying PRO with its antecedent, the controlling argument of the control predicate, and the orange arrows further indicate the control relation. In a sense, the controller assigns its index to PRO, which identifies the argument that is understood as the subject of the subordinate predicate.
A (constituency-based) X-bar theoretic tree that is consistent with the standard GB-type analysis is given next: [11]
The details of this tree are, again, not so important. What is important is that by positing the existence of the null subject PRO, the theoretical analysis of control constructions gains a useful tool that can help uncover important traits of control constructions.
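The coindexation mechanism depicted in the trees can be mimicked with a small data structure. This is a deliberately simplified sketch under the assumptions above (an indexed NP node and an index-copying operation), not an implementation of any particular theory:

```python
# Minimal sketch of PRO coindexation: the controller bears a
# referential index, and PRO in the embedded clause is coindexed
# with (receives the index of) that controller.

class NP:
    def __init__(self, form, index=None):
        self.form = form    # e.g. "Susan", or "PRO" (inaudible)
        self.index = index  # referential index, e.g. 1

def coindex(controller, pro):
    """The controller assigns its index to PRO."""
    pro.index = controller.index
    return pro

susan = NP("Susan", index=1)
pro = NP("PRO")        # null subject of the embedded predicate
coindex(susan, pro)
# The result models "Susan_1 tried [PRO_1 to leave]": PRO now shares
# Susan's index, so Susan is understood as the one leaving.
```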
Control must be distinguished from raising, though the two can be outwardly similar. [12] Control predicates semantically select their arguments, as stated above. Raising predicates, in contrast, do not semantically select (at least) one of their dependents. The contrast is evident with the so-called raising-to-object verbs (=ECM-verbs) such as believe, expect, want, and prove. Compare the following a- and b-sentences:
The control predicates ask and force semantically select their object arguments, whereas the raising-to-object verbs do not. Instead, the object of the raising verb appears to have "risen" from the subject position of the embedded predicate, in this case from the embedded predicates to read and to have said. In other words, the embedded predicate semantically selects the argument of the matrix predicate. This means that while a raising-to-object verb takes an object dependent, that dependent is not a semantic argument of the raising verb. The distinction becomes apparent when one considers that a control predicate like ask requires its object to be an animate entity, whereas a raising-to-object predicate like expect places no semantic limitations on its object dependent.
The different predicate types can be identified using expletive there. [13] Expletive there can appear as the "object" of a raising-to-object predicate, but not of a control verb, e.g.
The control predicates cannot take expletive there because there does not fulfill the semantic requirements of the control predicates. Since the raising-to-object predicates do not select their objects, they can easily take expletive there.
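The expletive-there diagnostic can be phrased as a simple check. The animacy requirement and the verb classification below are illustrative assumptions drawn from the discussion above, not an exhaustive account of English selectional restrictions:

```python
# Sketch of the expletive-"there" diagnostic: control predicates impose
# semantic requirements (here, animacy) on their object, so the
# non-referential expletive "there" is rejected; raising-to-object
# predicates impose no such requirement and accept it.

SELECTS_ANIMATE_OBJECT = {
    "ask": True,       # control
    "force": True,     # control
    "expect": False,   # raising-to-object
    "believe": False,  # raising-to-object
}

def allows_expletive_there(verb):
    """Expletive 'there' is inanimate, so only non-selecting verbs allow it."""
    return not SELECTS_ANIMATE_OBJECT[verb]

# "We expect there to be a problem."  -> acceptable
# "We asked there to be a problem."   -> unacceptable
```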
Control and raising also differ in how they behave with idiomatic expressions. [14] Idiomatic expressions retain their meaning in a raising construction, but they lose it when they are arguments of a control verb. See the examples below featuring the idiom "The cat is out of the bag", which means that previously hidden facts have now been revealed.
The explanation for this fact is that raising predicates do not semantically select their arguments, so those arguments are not interpreted compositionally as the subject or object of the raising predicate; the idiomatic reading can therefore survive. Arguments of a control predicate, on the other hand, must fulfill the predicate's semantic requirements and are interpreted compositionally as arguments of that predicate, which destroys the idiomatic reading.
This test works for object control and ECM too.
A syntactic category is a syntactic unit that theories of syntax assume. Word classes, largely corresponding to traditional parts of speech, are syntactic categories. In phrase structure grammars, the phrasal categories are also syntactic categories. Dependency grammars, however, do not acknowledge phrasal categories.
In language, a clause is a constituent that comprises a semantic predicand and a semantic predicate. A typical clause consists of a subject and a syntactic predicate, the latter typically a verb phrase composed of a verb with any objects and other modifiers. However, the subject is sometimes left unexpressed if it is retrievable from context, especially in null-subject languages but also in other languages, including English instances of the imperative mood.
In linguistics, an object is any of several types of arguments. In subject-prominent, nominative-accusative languages such as English, a transitive verb typically distinguishes between its subject and any of its objects, which can include but are not limited to direct objects, indirect objects, and arguments of adpositions; the latter are more accurately termed oblique arguments, thus including other arguments not covered by core grammatical roles, such as those governed by case morphology or relational nouns. In ergative-absolutive languages, for example most Australian Aboriginal languages, the term "subject" is ambiguous, and thus the term "agent" is often used instead to contrast with "object", such that basic word order is often spoken of in terms such as Agent-Object-Verb (AOV) instead of Subject-Object-Verb (SOV). Topic-prominent languages, such as Mandarin, focus their grammars less on the subject-object or agent-object dichotomies but rather on the pragmatic dichotomy of topic and comment.
A subject is one of the two main parts of a sentence.
Theta roles are the names of the participant roles associated with a predicate: the predicate may be a verb, an adjective, a preposition, or a noun. If an object is in motion or in a steady state as the speaker perceives it, or if it is the topic of discussion, it is called a theme. The participant is usually said to be an argument of the predicate. In generative grammar, a theta role or θ-role is the formal device for representing syntactic argument structure—the number and type of noun phrases—required syntactically by a particular verb. For example, the verb put requires three arguments.
The term phrase structure grammar was originally introduced by Noam Chomsky as the term for grammar studied previously by Emil Post and Axel Thue. Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars.
A dummy pronoun, also known as an expletive pronoun, is a deictic pronoun that fulfills a syntactical requirement without providing a contextually explicit meaning of its referent. As such, it is an example of exophora.
In syntactic analysis, a constituent is a word or a group of words that function as a single unit within a hierarchical structure. The constituent structure of sentences is identified using tests for constituents. These tests apply to a portion of a sentence, and the results provide evidence about the constituent structure of the sentence. Many constituents are phrases. A phrase is a sequence of one or more words built around a head lexical item and working as a unit within a sentence. A word sequence is shown to be a phrase/constituent if it exhibits one or more of the behaviors discussed below. The analysis of constituent structure is associated mainly with phrase structure grammars, although dependency grammars also allow sentence structure to be broken down into constituent parts.
In grammar, a complement is a word, phrase, or clause that is necessary to complete the meaning of a given expression. Complements are often also arguments.
The term predicate is used in two ways in linguistics and its subfields. The first defines a predicate as everything in a standard declarative sentence except the subject, and the other defines it as only the main content verb or associated predicative expression of a clause. Thus, by the first definition, the predicate of the sentence Frank likes cake is likes cake, while by the second definition, it is only the content verb likes, and Frank and cake are the arguments of this predicate. The conflict between these two definitions can lead to confusion.
Topicalization is a mechanism of syntax that establishes an expression as the sentence or clause topic by having it appear at the front of the sentence or clause. This involves phrasal movement to sentence-initial position. Topicalization often results in a discontinuity and is thus one of a number of established discontinuity types, the other three being wh-fronting, scrambling, and extraposition. Topicalization is also used as a constituency test; an expression that can be topicalized is deemed a constituent. The topicalization of arguments in English is rare, whereas circumstantial adjuncts are often topicalized. Most languages allow topicalization, and in some languages, topicalization occurs much more frequently and/or in a much less marked manner than in English. Topicalization in English has also received attention in the pragmatics literature.
In linguistics, grammatical relations are functional relationships between constituents in a clause. The standard examples of grammatical functions from traditional grammar are subject, direct object, and indirect object. In recent times, the syntactic functions, typified by the traditional categories of subject and object, have assumed an important role in linguistic theorizing, within a variety of approaches ranging from generative grammar to functional and cognitive theories. Many modern theories of grammar are likely to acknowledge numerous further types of grammatical relations. The role of grammatical relations in theories of grammar is greatest in dependency grammars, which tend to posit dozens of distinct grammatical relations. Every head-dependent dependency bears a grammatical function.
In linguistics, raising constructions involve the movement of an argument from an embedded or subordinate clause to a matrix or main clause. A raising predicate/verb appears with a syntactic argument that is not its semantic argument but rather the semantic argument of an embedded predicate. For example, in they seem to be trying, they is the syntactic subject of seem but is understood semantically as an argument of to be trying. English has raising constructions, unlike some other languages.
In linguistics, an argument is an expression that helps complete the meaning of a predicate, the latter referring in this context to a main verb and its auxiliaries. In this regard, the complement is a closely related concept. Most predicates take one, two, or three arguments. A predicate and its arguments form a predicate-argument structure. The discussion of predicates and arguments is associated most with (content) verbs and noun phrases (NPs), although other syntactic categories can also be construed as predicates and as arguments. Arguments must be distinguished from adjuncts. While a predicate needs its arguments to complete its meaning, the adjuncts that appear with a predicate are optional; they are not necessary to complete the meaning of the predicate. Most theories of syntax and semantics acknowledge arguments and adjuncts, although the terminology varies, and the distinction is generally believed to exist in all languages. Dependency grammars sometimes call arguments actants, following Lucien Tesnière (1959).
In linguistics, a small clause consists of a subject and its predicate, but lacks an overt expression of tense. Small clauses have the semantic subject-predicate characteristics of a clause, and have some, but not all, properties of a constituent. Structural analyses of small clauses vary according to whether a flat or layered analysis is pursued. The small clause is related to the phenomena of raising-to-object, exceptional case-marking, accusativus cum infinitivo, and object control.
Antecedent-contained deletion (ACD), also called antecedent-contained ellipsis, is a phenomenon whereby an elided verb phrase appears to be contained within its own antecedent. For instance, in the sentence "I read every book that you did", the verb phrase in the main clause appears to license ellipsis inside the relative clause which modifies its object. ACD is a classic puzzle for theories of the syntax-semantics interface, since it threatens to introduce an infinite regress. It is commonly taken as motivation for syntactic transformations such as quantifier raising, though some approaches explain it using semantic composition rules or by adopting more flexible notions of what it means to be a syntactic unit.
Exceptional case-marking (ECM), in linguistics, is a phenomenon in which the subject of an embedded infinitival verb seems to appear in a superordinate clause and, if it is a pronoun, is unexpectedly marked with object case morphology. The unexpected object case morphology is deemed "exceptional". The term ECM itself was coined in the Government and Binding grammar framework although the phenomenon is closely related to the accusativus cum infinitivo constructions of Latin. ECM-constructions are also studied within the context of raising. The verbs that license ECM are known as raising-to-object verbs. Many languages lack ECM-predicates, and even in English, the number of ECM-verbs is small. The structural analysis of ECM-constructions varies in part according to whether one pursues a relatively flat structure or a more layered one.
In linguistics, subcategorization denotes the ability/necessity for lexical items to require/allow the presence and types of the syntactic arguments with which they co-occur. For example, the word "walk" as in "X walks home" requires the noun-phrase X to be animate.
In linguistics, negative inversion is one of many types of subject–auxiliary inversion in English. A negation, a word that implies negation, or a phrase containing one of these words precedes the finite auxiliary verb, necessitating that the subject and finite verb undergo inversion. Negative inversion is a phenomenon of English syntax. Other Germanic languages have a more general V2 word order, which allows inversion to occur much more often than in English, so they may not acknowledge negative inversion as a specific phenomenon. While negative inversion is a common occurrence in English, a solid understanding of just what elicits the inversion has not yet been established. It is, namely, not entirely clear why certain fronted expressions containing a negation elicit negative inversion but others do not.
In linguistics, selection denotes the ability of predicates to determine the semantic content of their arguments. Predicates select their arguments, meaning they limit the semantic content of those arguments. One sometimes draws a distinction between types of selection, acknowledging both s(emantic)-selection and c(ategory)-selection. Selection in general stands in contrast to subcategorization: predicates both select and subcategorize for their complement arguments, whereas they only select their subject arguments. Selection is a semantic concept, whereas subcategorization is a syntactic one. Selection is closely related to valency, a term used in grammar traditions other than Chomskyan generative grammar for a similar phenomenon.