Topicalization

Topicalization is a mechanism of syntax that establishes an expression as the sentence or clause topic by having it appear at the front of the sentence or clause (as opposed to in a canonical position further to the right). This involves phrasal movement, i.e. the fronting of an entire phrase, such as a determiner phrase, prepositional phrase, or verb phrase, to sentence-initial position. [1] Topicalization often results in a discontinuity and is thus one of four established discontinuity types, the others being wh-fronting, scrambling, and extraposition. Topicalization is also used as a constituency test; an expression that can be topicalized is deemed a constituent. [2] The topicalization of arguments is rare in English, whereas circumstantial adjuncts are often topicalized. Most languages allow topicalization, and in some languages, topicalization occurs much more frequently and/or in a much less marked manner than in English. Topicalization in English has also received attention in the pragmatics literature. [3]

Examples

Typical cases of topicalization are illustrated with the following examples:

a. The boys roll rocks for entertainment.
b. For entertainment, the boys roll rocks. - Topicalization of the adjunct for entertainment
a. Everyone refused to answer because the pressure was too great.
b. Because the pressure was too great, everyone refused to answer. - Topicalization of the adjunct because the pressure was too great
a. I won't eat that pizza.
b. That pizza, I won't eat. - Topicalization of the object argument that pizza
a. I am terrified of those dogs.
b. Those dogs, I am terrified of. - Topicalization of the object argument those dogs

Assuming that the a-sentences represent canonical word order, the b-sentences contain instances of topicalization. In each b-sentence, the fronted expression at the start of the sentence is established as topic. The first two examples, which topicalize adjuncts, are typical, but the last two, with topicalized object arguments, are comparatively rare. The appearance of the demonstrative determiners that and those is important, since without them the topicalization of an argument seems less acceptable: A pizza I won't eat.

Topicalization can occur across long distances:

a. I thought you said that Tom believes the explanation needs such examples.
b. Such examples I thought you said that Tom believes the explanation needs. - Topicalization of the object argument such examples over a long distance

Further examples

Topicalization is similar to wh-movement insofar as the constituents that can be wh-fronted can also be topicalized:

a. Bill is living in that one house on the hill.
b. Which house is Bill living in? - Wh-fronting of NP resulting in preposition stranding
c. That one house on the hill Bill is living in. - Topicalization of NP resulting in preposition stranding
a. Shelly has indeed uncovered part of our plan.
b. What has Shelly indeed uncovered part of? - Wh-fronting out of object NP resulting in preposition stranding
c. Our plan Shelly has indeed uncovered part of. - Topicalization out of object NP resulting in preposition stranding

Also, topicalization is similar to wh-fronting insofar as the islands and barriers to wh-fronting are also islands and barriers to topicalization:

a. The description of his aunt was really funny.
b. *Whose aunt was the description of really funny? - Wh-fronting impossible out of a subject in English
c. *His aunt the description of was really funny. - Topicalization impossible out of a subject in English
a. He relaxes after he's played Starcraft.
b. *What does he relax after he's played? - Wh-fronting impossible out of adjunct clause
c. *Starcraft he relaxes after he's played. - Topicalization impossible out of adjunct clause
a. She approves of the suggestion to make pasta.
b. *What does she approve of the suggestion to make? - Wh-fronting impossible out of complex NP
c. *Pasta she approves of the suggestion to make. - Topicalization impossible out of complex NP

These examples illustrate the similar behavior of topicalization and wh-fronting. Further data, not reproduced here, would show, however, that topicalization is unlike the other two major discontinuity types, scrambling and extraposition.

Theoretical analyses

The theoretical analysis of topicalization can vary greatly depending in part on the theory of sentence structure that one adopts. If one assumes the layered structures associated with many phrase structure grammars, all instances of topicalization involve a discontinuity. If, in contrast, less layered structures are assumed, as for example in dependency grammar, then many instances of topicalization involve no discontinuity, but rather just inversion. [4] This point is illustrated here first using flatter structures that lack a finite VP constituent (which means the entire sentence has the status of a large VP). Both constituency- and dependency-based analyses are given. The example itself is a piece of Yoda wisdom (as he speaks to Anakin); its word order is certainly of questionable acceptability, but it is perfectly understandable:

[Image: E-top-01.jpg]

The upper two trees show the analysis using flat constituency-based structures that lack a finite VP constituent, and the lower two trees are dependency-based, [5] since dependency grammar inherently rejects the existence of finite VP constituents. [6] The noteworthy aspect of these examples is that topicalization does not result in a discontinuity: there are no crossing lines in the trees. What this means is that such cases can be analyzed purely in terms of inversion. The topicalized expression simply "inverts" to the other side of its head. [7]
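The notion of a discontinuity as crossing lines in a tree corresponds to what dependency parsing calls (non-)projectivity, and it can be checked mechanically. The following sketch is a hypothetical helper (not drawn from the literature cited here); it encodes a dependency tree as an array of head indices, as is standard in dependency parsing, and tests whether any two arcs cross. The dependency analysis of the Yoda example is an assumption for illustration:

```python
def is_projective(heads):
    """Return True if the dependency tree has no crossing arcs.

    heads[i] is the 0-based index of word i's head; the root has head -1.
    A tree without crossing arcs is projective, i.e. free of discontinuities.
    """
    arcs = [(min(i, h), max(i, h)) for i, h in enumerate(heads) if h >= 0]
    for a, b in arcs:
        for c, d in arcs:
            # Two arcs cross iff one arc starts strictly inside the other's
            # span and ends strictly outside it (checked over ordered pairs).
            if a < c < b < d:
                return False
    return True

# "Careful you must be": Careful <- be, you <- must, be <- must, must = root.
# The topicalized "Careful" merely inverts to the other side of its head chain.
print(is_projective([3, 2, -1, 2]))   # True: inversion only, no discontinuity

# A hypothetical word order in which two arcs interleave:
print(is_projective([2, 3, -1, 2]))   # False: crossing arcs, a discontinuity
```

On flat structures of this kind, the first result makes the point of the trees above: the Yoda-style fronting produces no crossing arcs, so no discontinuity arises.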

Instead of the flat trees just examined, most constituency grammars posit more layered structures that include a finite VP constituent. These more layered structures are likely to address topicalization in terms of movement or copying, as illustrated with the following two trees: [8]

[Image: E-top-02.jpg]

Tree a. shows the canonical word order again, and tree b. illustrates what is known as the movement or copying analysis. The topicalized expression is first generated in its canonical position but is then copied to the front of the sentence, the original then being deleted.
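Over a flat list of words, the copy-and-delete step just described can be sketched as follows. This is a toy illustration of the mechanism, not a claim about any particular grammar formalism; the function name is hypothetical:

```python
def topicalize(tokens, start, end):
    """Movement as 'copy and delete': the span tokens[start:end] is
    copied to the front of the clause and the original is deleted."""
    span = tokens[start:end]              # copy the expression to be fronted
    rest = tokens[:start] + tokens[end:]  # delete it from its canonical position
    return span + rest

print(topicalize(["I", "won't", "eat", "that", "pizza"], 3, 5))
# ['that', 'pizza', 'I', "won't", 'eat']
```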

The movement analysis of discontinuities is one possible way to address those instances of topicalization that cannot be explained in terms of inversion. An alternative explanation is feature passing. On this analysis, the topicalized expression is not moved or copied to the clause-initial position but rather is "base" generated there; feature passing takes the place of movement. [9] A link of a sort is established between the topicalized expression and its governor. The link is the path along which information about the topicalized expression is passed to the governor of that expression. A piece of Yoda wisdom is again used for illustration, the full sentence being Careful you must be when sensing the future, Anakin:

[Image: E-top-03.jpg]

The nodes in red mark the path of feature passing. Features (=information) about the topicalized expression are passed rightward through (and down) the tree structure to the governor of that expression. This path is present in both analyses, i.e. in the constituency-based a-analysis on the left and in the dependency-based b-analysis on the right. Since topicalization can occur over long distances, feature passing must also occur over long distances. The final example shows a dependency-based analysis of a sentence where the feature passing path is quite long:

[Image: E-top-04.jpg]

Information about the topicalized expression such nonsense is passed along the path marked in red down to its governor, spouting. The words corresponding to the nodes in red form a catena (Latin for 'chain', plural catenae). [10] A theory of topicalization is then built up in part by examining the nature of these catenae for feature passing.
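Since a catena is a set of nodes that is connected in the dominance structure of a dependency tree regardless of linear order, catena membership can be verified with a small graph search. The sketch below uses the same head-array encoding as the projectivity literature; the helper name and the toy tree are assumptions for illustration:

```python
def is_catena(heads, nodes):
    """Return True if `nodes` forms a connected subgraph of the dependency
    tree via head-dependent links, ignoring linear order (a catena).

    heads[i] is the 0-based index of word i's head; the root has head -1.
    """
    nodes = set(nodes)
    if not nodes:
        return False
    children = {}
    for i, h in enumerate(heads):
        children.setdefault(h, []).append(i)
    # Depth-first search restricted to the candidate node set.
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        neighbors = list(children.get(x, []))
        if heads[x] >= 0:
            neighbors.append(heads[x])
        for y in neighbors:
            if y in nodes and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen == nodes

# Toy tree: node 0 is the root with dependents 1 and 2; node 3 depends on 2.
print(is_catena([-1, 0, 0, 2], {0, 2, 3}))  # True: a connected chain of nodes
print(is_catena([-1, 0, 0, 2], {1, 3}))     # False: 1 and 3 are not linked
```

A feature-passing path such as the one marked in red above is, on this view, exactly a node set for which such a check succeeds.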

Notes

  1. Sportiche, Dominique; Koopman, Hilda; Stabler, Edward (2014). An Introduction to Syntactic Analysis and Theory. Wiley Blackwell. pp. 68–70, 189–191. ISBN 978-1-4051-0017-5.
  2. For examples of topicalization used as a constituency test, see for instance Allerton (1979:114), Borsley (1991:24), Napoli (1993:422), Burton-Roberts (1997:17), Poole (2002:32), Radford (2004:72), Haegeman (2006:790).
  3. Concerning topicalization as discussed in the pragmatics literature, see for example Prince (1998).
  4. Two prominent sources on dependency grammar are Tesnière (1959) and Ágel (2003/6).
  5. See Mel'čuk (2003: 221) and Starosta (2003: 278) for dependency grammar analyses of topicalization similar to the ones shown here.
  6. Concerning the rejection of a finite VP constituent in dependency grammar, see Tesnière (1959:16ff.).
  7. See Groß and Osborne (2009:64-66) for such an analysis.
  8. See for instance Grewendorf (1988:66ff.), Ouhalla (1998: 136f.), Radford (2004: 123ff).
  9. See for instance the account of functional uncertainty in Lexical Functional Grammar (Bresnan 2001:64-69).
  10. See Osborne et al. (2013) concerning catenae.
