Syntactic movement is the means by which some theories of syntax address discontinuities. Movement was first postulated by structuralist linguists who expressed it in terms of discontinuous constituents or displacement. [1] Some constituents appear to have been displaced from the position in which they receive important features of interpretation. [2] The concept of movement is controversial and is associated with so-called transformational or derivational theories of syntax (such as transformational grammar, government and binding theory, minimalist program). Representational theories (such as head-driven phrase structure grammar, lexical functional grammar, construction grammar, and most dependency grammars), in contrast, reject the notion of movement and often instead address discontinuities with other mechanisms including graph reentrancies, feature passing, and type shifters.
Movement is the traditional means of explaining discontinuities such as wh-fronting, topicalization, extraposition, scrambling, inversion, and shifting: [3]
The a-sentences show canonical word order, and the b-sentences illustrate the discontinuities that movement seeks to explain. Bold script marks the expression that is moved, and underscores mark the positions from which movement is assumed to have occurred. In the first a-sentence, the constituent the first story serves as the object of the verb likes and appears in its canonical position immediately following that verb. In the first b-sentence, the constituent which story likewise serves as the object of the verb, but appears at the beginning of the sentence rather than in its canonical position following the verb. Movement-based analyses explain this fact by positing that the constituent is base-generated in its canonical position but is moved to the beginning of the sentence, in this case because of a question-forming operation.
The examples above use an underscore to mark the position from which movement is assumed to have occurred. In formal theories of movement, these underscores correspond to actual syntactic objects, either traces or copies, depending on one's particular theory. [4]
Subscripts help indicate the constituent that is assumed to have left a trace in its former position, the position marked by t. [5] The other means of indicating movement is in terms of copies. Movement is actually taken to be a process of copying the same constituent in different positions and deleting the phonological features in all but one case. [6] Italics are used in the following example to indicate a copy that lacks phonological representation:
There are various nuances associated with each of the means of indicating movement (blanks, traces, copies), but for the most part, each convention has the same goal of indicating the presence of a discontinuity.
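The copy convention can be illustrated with a small toy model (the names and data structures here are purely illustrative, not part of any linguistic formalism): movement copies a constituent to a new position and deletes the phonological features of the lower copy, which is then spelled out as a blank.

```python
from dataclasses import dataclass

@dataclass
class Copy:
    """One member of a movement chain under the copy theory."""
    form: str         # the constituent's phonological form
    pronounced: bool  # only one copy in the chain keeps its phonology

def move(words, src, dst):
    """Model leftward movement as copying: the item at index src is
    copied to index dst (dst <= src), and the lower copy loses its
    phonological features."""
    form = words[src]
    result = list(words)
    result[src] = Copy(form, pronounced=False)
    result.insert(dst, Copy(form, pronounced=True))
    return result

def pronounce(words):
    """Spell out the string, rendering unpronounced copies as blanks."""
    return " ".join(
        (w.form if w.pronounced else "_") if isinstance(w, Copy) else w
        for w in words
    )

# Wh-fronting of the object in a toy clause:
print(pronounce(move(["she", "likes", "which story"], src=2, dst=0)))
# -> "which story she likes _"
```

Deleting the phonology of the higher copy instead would yield the canonical order; the model only encodes which chain member is spelled out, which is the sole point of difference between the blank, trace, and copy notations.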
Within generative grammar, various types of movement have been distinguished. An important distinction is the one between head movement and phrasal movement, with the latter type being further subdivided into A-movement and A-bar movement. Copy movement is another more general type of movement.
Argument movement (A-movement) displaces a phrase into a position in which a fixed grammatical function is assigned, such as in movement of the object to the subject position in passives: [7]
Non-argument movement (A-bar movement or A'-movement), in contrast, displaces a phrase into a position where a fixed grammatical function is not assigned, such as the movement of a subject or object NP to a pre-verbal position in interrogatives:
The A- vs. A-bar distinction reflects the theoretical status of syntax with respect to the lexicon. The distinction elevates the role of syntax by locating the theory of voice (active vs. passive) almost entirely in syntax (as opposed to in the lexicon). A theory of syntax that locates the active-passive distinction in the lexicon (so that the passive is not derived from the active via transformations) rejects the distinction entirely.
A different partition among types of movement is phrasal vs. head movement. [8] Phrasal movement occurs when the head of a phrase moves together with all its dependents in such a manner that the entire phrase moves. Most of the examples above involve phrasal movement. Head movement, in contrast, occurs when just the head of a phrase moves, and the head leaves behind its dependents. Subject-auxiliary inversion is a canonical instance of head movement:
On the assumption that the auxiliaries has and will are the heads of phrases, such as IPs (inflection phrases), the b-sentences are the result of head movement: the auxiliary verbs has and will move leftward without taking with them the rest of the phrase that they head.
The distinction between phrasal movement and head movement relies crucially on the assumption that movement is occurring leftward. An analysis of subject-auxiliary inversion that acknowledges rightward movement can dispense with head movement entirely:
The analysis shown in those sentences views the subject pronouns someone and she as moving rightward, instead of the auxiliary verbs moving leftward. Since the pronouns lack dependents (they alone qualify as complete phrases), there would be no reason to assume head movement.
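The phrasal vs. head movement distinction can be sketched with a toy dependency model (the function and the example analysis are hypothetical illustrations, not a claim about any particular theory): phrasal movement displaces a head together with all of its direct and indirect dependents, whereas head movement displaces the head alone.

```python
def phrase_of(head, head_of):
    """The complete phrase headed by `head`: the head itself plus
    all of its direct and indirect dependents."""
    phrase = {head}
    changed = True
    while changed:
        changed = False
        for word, h in head_of.items():
            if h in phrase and word not in phrase:
                phrase.add(word)
                changed = True
    return phrase

# Toy dependency analysis of "she has read which story":
# each word maps to its head; "has" is the root.
head_of = {"she": "has", "read": "has", "which": "story", "story": "read"}

# Phrasal movement fronts the whole wh-phrase "which story":
print(phrase_of("story", head_of))   # {"which", "story"}

# Head movement (e.g. subject-auxiliary inversion) displaces only
# the head "has" itself, leaving its dependents behind.
```

Under this sketch, the moved material in phrasal movement is `phrase_of(head)`, while in head movement it is just `{head}`; nothing else distinguishes the two operations.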
Since it was first proposed, the theory of syntactic movement has given rise to a field of research, known as locality theory, [9] that aims to provide the filters that block certain types of movement. Locality theory seeks to discern the islands and barriers to movement, that is, to identify the categories and constellations that block movement from occurring. In other words, it strives to explain the failure of certain attempts at movement:
All of the b-sentences are disallowed because of locality constraints on movement. Adjuncts and subjects are islands that block movement, and left branches in NPs are barriers that prevent pre-noun modifiers from being extracted out of NPs.
Syntactic movement is controversial, especially in light of movement paradoxes. Theories of syntax that posit feature passing reject syntactic movement outright, that is, they reject the notion that a given "moved" constituent ever appears in its "base" position below the surface: the positions marked by blanks, traces, or copies. Instead, they assume that there is but one level of syntax, and all constituents appear only in their surface positions, with no underlying level or derivation. To address discontinuities, they posit that the features of a displaced constituent are passed up and/or down the syntactic hierarchy between that constituent and its governor. [10] The following tree illustrates the feature passing analysis of a wh-discontinuity in a dependency grammar. [11]
The words in red mark the catena (chain of words) that connects the displaced wh-constituent what to its governor eat, the word that licenses its appearance. [12] The assumption is that features (=information) associated with what (e.g. noun, direct object) are passed up and down along the catena marked in red. In that manner, the ability of eat to subcategorize for a direct object NP is acknowledged. By examining the nature of catenae like the one in red, the locality constraints on discontinuities can be identified.
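The feature-passing mechanism can be sketched as a toy algorithm over a dependency tree (the function names and the particular analysis of "What did you eat?" are illustrative assumptions): compute the catena connecting the displaced word to its governor, then share the displaced word's features with every word on that path.

```python
def path_to_root(word, head_of):
    """Chain of words from `word` up to the root of the tree."""
    chain = [word]
    while word in head_of:
        word = head_of[word]
        chain.append(word)
    return chain

def catena(a, b, head_of):
    """The words on the path connecting a and b in the tree."""
    up_a, up_b = path_to_root(a, head_of), path_to_root(b, head_of)
    shared = set(up_a) & set(up_b)
    cat = []
    for w in up_a:            # climb from a to the lowest shared word
        cat.append(w)
        if w in shared:
            break
    for w in up_b:            # climb from b, stopping below that word
        if w in shared:
            break
        cat.append(w)
    return cat

def pass_features(displaced, governor, head_of, features):
    """Share the displaced word's features with every word on the
    catena that connects it to its governor."""
    for w in catena(displaced, governor, head_of):
        features.setdefault(w, set()).update(features[displaced])
    return features

# Toy analysis of "What did you eat?": "did" is the root, and the
# fronted "what" attaches to "did" rather than to its governor "eat".
head_of = {"what": "did", "you": "did", "eat": "did"}
features = {"what": {"noun", "direct-object"}}
pass_features("what", "eat", head_of, features)
# "eat" now carries the direct-object feature of the displaced "what",
# acknowledging its subcategorization for an object NP.
```

The catena here is what - did - eat; locality constraints then amount to restrictions on which catenae may carry features in this way.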
In government and binding theory and some of its descendant theories, movement leaves behind an empty category called a trace.
In such theories, traces are considered real parts of syntactic structure, detectable in secondary effects they have on the syntax. For instance, one empirical argument for their existence comes from the English phenomenon of wanna contraction, in which want to contracts into wanna. This phenomenon has been argued to be impossible when a trace would intervene between "want" and "to", as in the b-sentence below. [13]
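The descriptive generalization behind the wanna-contraction argument can be sketched as a toy string rule (a hypothetical function, not a claim about the actual phonological mechanism): contraction applies only when want and to are string-adjacent, so an intervening trace automatically blocks it.

```python
def contract_wanna(words):
    """Contract 'want to' to 'wanna' only when the two words are
    string-adjacent; an intervening trace 't' blocks contraction."""
    out, i = [], 0
    while i < len(words):
        if words[i] == "want" and i + 1 < len(words) and words[i + 1] == "to":
            out.append("wanna")
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out

# Object extraction: the trace follows the verb, so no trace
# intervenes and contraction goes through.
print(contract_wanna(["who", "do", "you", "want", "to", "see", "t"]))
# -> ['who', 'do', 'you', 'wanna', 'see', 't']

# Subject extraction: the trace sits between 'want' and 'to',
# so contraction is blocked and the string is unchanged.
print(contract_wanna(["who", "do", "you", "want", "t", "to", "win"]))
```

If traces were not part of the string, both sentences would look identical to such a rule, which is the shape of the argument for their reality.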
Evidence of this sort has not led to a full consensus in favor of traces, since other kinds of contraction permit an intervening putative trace. [14]
Proponents of the trace theory have responded to these counterarguments in various ways. For instance, Bresnan (1971) argued that contractions of "to" are enclitic while contractions of tensed auxiliaries are proclitic, meaning that only the former would be affected by a preceding trace. [15]
In linguistics, syntax is the study of how words and morphemes combine to form larger units such as phrases and sentences. Central concerns of syntax include word order, grammatical relations, hierarchical sentence structure (constituency), agreement, the nature of crosslinguistic variation, and the relationship between form and meaning (semantics). There are numerous approaches to syntax that differ in their central assumptions and goals.
In grammar, a phrase—called expression in some contexts—is a group of words, or a single word, acting as a grammatical unit. For instance, the English expression "the very happy squirrel" is a noun phrase which contains the adjective phrase "very happy". Phrases can consist of a single word or a complete sentence. In theoretical linguistics, phrases are often analyzed as units of syntactic structure such as constituents. There is a difference between the common use of the term phrase and its technical use in linguistics. In common usage, a phrase is usually a group of words with some special idiomatic meaning or other significance, such as "all rights reserved", "economical with the truth", "kick the bucket", and the like. It may be a euphemism, a saying or proverb, a fixed expression, a figure of speech, etc. In linguistics, these are known as phrasemes.
In language, a clause is a constituent that comprises a semantic predicand and a semantic predicate. A typical clause consists of a subject and a syntactic predicate, the latter typically a verb phrase composed of a verb with any objects and other modifiers. However, the subject is sometimes unvoiced if it is retrievable from context, especially in null-subject languages but also in other languages, including English instances of the imperative mood.
In linguistics, X-bar theory is a model of phrase-structure grammar and a theory of syntactic category formation that was first proposed by Noam Chomsky in 1970 reformulating the ideas of Zellig Harris (1951), and further developed by Ray Jackendoff, along the lines of the theory of generative grammar put forth in the 1950s by Chomsky. It attempts to capture the structure of phrasal categories with a single uniform structure called the X-bar schema, basing itself on the assumption that any phrase in natural language is an XP that is headed by a given syntactic category X. It played a significant role in resolving problems with phrase structure rules, a representative one being the proliferation of grammatical rules, which runs counter to the thesis of generative grammar.
A movement paradox is a phenomenon of grammar that challenges the transformational approach to syntax. The importance of movement paradoxes is emphasized by those theories of syntax that reject movement, i.e. the notion that discontinuities in syntax are explained by the movement of constituents.
In syntax, verb-second (V2) word order is a sentence structure in which the finite verb of a sentence or a clause is placed in the clause's second position, so that the verb is preceded by a single word or group of words.
Dependency grammar (DG) is a class of modern grammatical theories that are all based on the dependency relation and that can be traced back primarily to the work of Lucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The (finite) verb is taken to be the structural center of clause structure. All other syntactic units (words) are either directly or indirectly connected to the verb in terms of the directed links, which are called dependencies. Dependency grammar differs from phrase structure grammar in that while it can identify phrases it tends to overlook phrasal nodes. A dependency structure is determined by the relation between a word and its dependents. Dependency structures are flatter than phrase structures in part because they lack a finite verb phrase constituent, and they are thus well suited for the analysis of languages with free word order, such as Czech or Warlpiri.
In grammar and theoretical linguistics, government or rection refers to the relationship between a word and its dependents. One can discern between at least three concepts of government: the traditional notion of case government, the highly specialized definition of government in some generative models of syntax, and a much broader notion in dependency grammars.
In syntactic analysis, a constituent is a word or a group of words that function as a single unit within a hierarchical structure. The constituent structure of sentences is identified using tests for constituents. These tests apply to a portion of a sentence, and the results provide evidence about the constituent structure of the sentence. Many constituents are phrases. A phrase is a sequence of one or more words built around a head lexical item and working as a unit within a sentence. A word sequence is shown to be a phrase/constituent if it exhibits one or more of the behaviors discussed below. The analysis of constituent structure is associated mainly with phrase structure grammars, although dependency grammars also allow sentence structure to be broken down into constituent parts.
In linguistics, wh-movement is the formation of syntactic dependencies involving interrogative words. An example in English is the dependency formed between what and the object position of doing in "What are you doing?" Interrogative forms are sometimes known within English linguistics as wh-words, such as what, when, where, who, and why, but also include other interrogative words, such as how. This dependency has been used as a diagnostic tool in syntactic studies as it can be observed to interact with other grammatical constraints.
In linguistics, pied-piping is a phenomenon of syntax whereby a given focused expression brings along an encompassing phrase with it when it is moved.
Topicalization is a mechanism of syntax that establishes an expression as the sentence or clause topic by having it appear at the front of the sentence or clause. This involves the phrasal movement of an expression, such as a noun phrase or prepositional phrase, to sentence-initial position. Topicalization often results in a discontinuity and is thus one of a number of established discontinuity types, the other three being wh-fronting, scrambling, and extraposition. Topicalization is also used as a constituency test; an expression that can be topicalized is deemed a constituent. The topicalization of arguments in English is rare, whereas circumstantial adjuncts are often topicalized. Most languages allow topicalization, and in some languages, topicalization occurs much more frequently and/or in a much less marked manner than in English. Topicalization in English has also received attention in the pragmatics literature.
In linguistics, raising constructions involve the movement of an argument from an embedded or subordinate clause to a matrix or main clause. A raising predicate/verb appears with a syntactic argument that is not its semantic argument but rather the semantic argument of an embedded predicate. For example, in they seem to be trying, they is the syntactic subject of seem, even though it is semantically an argument of the embedded predicate try. English has raising constructions, unlike some other languages.
In linguistics, an argument is an expression that helps complete the meaning of a predicate, the latter referring in this context to a main verb and its auxiliaries. In this regard, the complement is a closely related concept. Most predicates take one, two, or three arguments. A predicate and its arguments form a predicate-argument structure. The discussion of predicates and arguments is associated most with (content) verbs and noun phrases (NPs), although other syntactic categories can also be construed as predicates and as arguments. Arguments must be distinguished from adjuncts. While a predicate needs its arguments to complete its meaning, the adjuncts that appear with a predicate are optional; they are not necessary to complete the meaning of the predicate. Most theories of syntax and semantics acknowledge arguments and adjuncts, although the terminology varies, and the distinction is generally believed to exist in all languages. Dependency grammars sometimes call arguments actants, following Lucien Tesnière (1959).
Subject–auxiliary inversion is a frequently occurring type of inversion in the English language whereby a finite auxiliary verb – taken here to include finite forms of the copula be – appears to "invert" with the subject. The word order is therefore Aux-S (auxiliary–subject), which is the opposite of the canonical SV (subject–verb) order of declarative clauses in English. The most frequent use of subject–auxiliary inversion in English is in the formation of questions, although it also has other uses, including the formation of condition clauses, and in the syntax of sentences beginning with negative expressions.
In linguistics, inversion is any of several grammatical constructions where two expressions switch their canonical order of appearance, that is, they invert. There are several types of subject-verb inversion in English: locative inversion, directive inversion, copular inversion, and quotative inversion. The most frequent type of inversion in English is subject–auxiliary inversion, in which an auxiliary verb changes places with its subject; it often occurs in questions, such as Are you coming?, in which the subject you switches places with the auxiliary are. In many other languages, especially those with a freer word order than English, inversion can take place with a variety of verbs and with other syntactic categories as well.
In linguistics, negative inversion is one of many types of subject–auxiliary inversion in English. A negation, a word that implies negation, or a phrase containing one of these words precedes the finite auxiliary verb, necessitating that the subject and finite verb undergo inversion. Negative inversion is a phenomenon of English syntax. Other Germanic languages have a more general V2 word order, which allows inversion to occur much more often than in English, so they may not acknowledge negative inversion as a specific phenomenon. While negative inversion is a common occurrence in English, a solid understanding of just what elicits the inversion has not yet been established. It is not entirely clear why certain fronted expressions containing a negation elicit negative inversion while others do not.
In linguistics, a discontinuity occurs when a given word or phrase is separated from another word or phrase that it modifies in such a manner that a direct connection cannot be established between the two without incurring crossing lines in the tree structure. The terminology that is employed to denote discontinuities varies depending on the theory of syntax at hand. The terms discontinuous constituent, displacement, long distance dependency, unbounded dependency, and projectivity violation are largely synonymous with the term discontinuity. There are various types of discontinuities, the most prominent and widely studied of these being topicalization, wh-fronting, scrambling, and extraposition.
Subject–verb inversion in English is a type of inversion marked by a predicate verb that precedes a corresponding subject, e.g., "Beside the bed stood a lamp". Subject–verb inversion is distinct from subject–auxiliary inversion because the verb involved is not an auxiliary verb.
The lexicalist hypothesis is a hypothesis proposed by Noam Chomsky which claims that syntactic transformations can operate only on syntactic constituents. The hypothesis states that the system of grammar that assembles words is separate and different from the system of grammar that assembles phrases out of words.