Syntactic category

A syntactic category is a syntactic unit that theories of syntax assume. [1] Word classes, largely corresponding to traditional parts of speech (e.g. noun, verb, preposition, etc.), are syntactic categories. In phrase structure grammars, the phrasal categories (e.g. noun phrase, verb phrase, prepositional phrase, etc.) are also syntactic categories. Dependency grammars, however, do not acknowledge phrasal categories (at least not in the traditional sense). [2]

Word classes considered as syntactic categories may be called lexical categories, as distinct from phrasal categories. The terminology is somewhat inconsistent between the theoretical models of different linguists. [2] However, many grammars also draw a distinction between lexical categories (which tend to consist of content words, or phrases headed by them) and functional categories (which tend to consist of function words or abstract functional elements, or phrases headed by them). The term lexical category therefore has two distinct meanings. Moreover, syntactic categories should not be confused with grammatical categories (also known as grammatical features), which are properties such as tense, gender, etc.

Defining criteria

At least three criteria are used in defining the syntactic category of a unit:

  1. The type of meaning it expresses
  2. The type of affixes it takes
  3. The structure in which it occurs

For instance, many nouns in English denote concrete entities, they are pluralized with the suffix -s, and they occur as subjects and objects in clauses. Many verbs denote actions or states, they are conjugated with agreement suffixes (e.g. -s of the third person singular in English), and in English they tend to show up in medial positions of the clauses in which they appear.

The third criterion is also known as distribution. The distribution of a given syntactic unit determines the syntactic category to which it belongs. The distributional behavior of syntactic units is identified by substitution: [3] units of the same category can be substituted for one another.
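
To make the substitution test concrete, the following is an informal sketch (not drawn from the cited literature): it assumes the Python NLTK library and an invented toy grammar, and shows that two words with the same distribution (the nouns dog and idea) can fill the same slot in a frame, while a verb cannot.

```python
# Illustrative sketch of the substitution test. Assumes the nltk package;
# the toy grammar and the test frame are invented for this example.
import nltk

grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> D N
    VP -> V NP
    D  -> 'the' | 'a'
    N  -> 'dog' | 'idea'
    V  -> 'chased' | 'found'
""")
parser = nltk.ChartParser(grammar)

def is_licensed(words):
    """True if the toy grammar assigns the word sequence at least one parse."""
    return any(True for _ in parser.parse(words))

# Frame with one open slot: "the ___ chased a dog"
frame = ["the", None, "chased", "a", "dog"]
for candidate in ["dog", "idea", "chased"]:
    words = [candidate if w is None else w for w in frame]
    print(candidate, is_licensed(words))
# dog -> True and idea -> True (same distribution, both N);
# chased -> False (a verb cannot fill the noun slot).
```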

Additionally, there are also informal criteria one can use in order to determine syntactic categories. For example, one informal means of determining if an item is lexical, as opposed to functional, is to see if it is left behind in "telegraphic speech" (that is, the way a telegram would be written; e.g., Pants fire. Bring water, need help.) [4]

Lexical categories vs. phrasal categories

The traditional parts of speech are lexical categories, in one meaning of that term. [5] Traditional grammars tend to acknowledge approximately eight to twelve lexical categories, e.g.

Lexical categories
adjective (A), adposition (preposition, postposition, circumposition) (P), adverb (Adv), coordinate conjunction (C), determiner (D), interjection (I), noun (N), particle (Par), pronoun (Pr), subordinate conjunction (Sub), verb (V), etc.

The lexical categories that a given grammar assumes will likely vary from this list. Certainly numerous subcategories can be acknowledged. For instance, one can view pronouns as a subtype of noun, and verbs can be divided into finite verbs and non-finite verbs (e.g. gerund, infinitive, participle, etc.). The central lexical categories give rise to corresponding phrasal categories: [6]

Phrasal categories
Adjective phrase (AP), adverb phrase (AdvP), adposition phrase (PP), noun phrase (NP), verb phrase (VP), etc.

In terms of phrase structure rules, phrasal categories can occur to the left of the arrow while lexical categories cannot, e.g. NP → D N. Traditionally, a phrasal category should consist of two or more words, although conventions vary in this area. X-bar theory, for instance, often sees individual words corresponding to phrasal categories. Phrasal categories are illustrated with the following trees:

[Tree diagrams illustrating phrasal categories above lexical categories]

The lexical and phrasal categories are identified according to the node labels, phrasal categories receiving the "P" designation.
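
As a rough, illustrative sketch (not taken from the sources cited here), such a tree can be written out as a bracketed structure and displayed with the Python NLTK library; the sentence is invented.

```python
# Illustrative only: a constituency tree whose phrasal node labels (S, NP, VP)
# dominate lexical node labels (D, N, V). Assumes the nltk package is installed.
import nltk

tree = nltk.Tree.fromstring(
    "(S (NP (D the) (N picture)) (VP (V touched) (NP (D the) (N wall))))"
)
tree.pretty_print()
# Each NP node dominates a D node and an N node, matching the rule NP -> D N;
# the preterminal nodes D, N and V just above the words are the lexical categories.
```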

Lexical categories only

Dependency grammars do not acknowledge phrasal categories in the way that phrase structure grammars do. [2] What this means is that the distinction between lexical and phrasal categories disappears, the result being that only the lexical categories are acknowledged. [7] The tree representations are simpler because the number of nodes and categories is reduced, e.g.

[Tree diagrams showing lexical categories only]

The distinction between lexical and phrasal categories is absent here. The number of nodes is reduced by removing all nodes marked with "P". Note, however, that phrases can still be acknowledged insofar as any subtree that contains two or more words will qualify as a phrase.
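
As a further hedged illustration (again not from the cited sources), a dependency analysis of the same invented sentence can be produced with the Python spaCy library, assuming spaCy and its small English model en_core_web_sm are installed. Only word-level categories appear, and a phrase is recoverable as the subtree rooted in a word.

```python
# Illustrative only: in a dependency analysis each node is a word carrying a
# lexical category; there are no separate NP or VP nodes. Assumes spaCy and
# the en_core_web_sm model are installed; exact tags may vary by model version.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The picture touched the wall")

for token in doc:
    print(f"{token.text:8} {token.pos_:5} <-{token.dep_}- {token.head.text}")

# A phrase can still be read off as the subtree headed by a word:
for token in doc:
    if token.dep_ == "nsubj":
        print("subject phrase:", [t.text for t in token.subtree])
```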

Lexical categories vs. functional categories

Many grammars draw a distinction between lexical categories and functional categories. [8] This distinction is orthogonal to the distinction between lexical categories and phrasal categories. In this context, the term lexical category applies only to those parts of speech and their phrasal counterparts that form open classes and have full semantic content. The parts of speech that form closed classes and have mainly just functional content are called functional categories:

Lexical categories
Adjective (A) and adjective phrase (AP), adverb (Adv) and adverb phrase (AdvP), noun (N) and noun phrase (NP), verb (V) and verb phrase (VP), preposition (P) and prepositional phrase (PP)
Functional categories
Coordinate conjunction (C), determiner (D), negation (Neg), particle (Par), preposition (P) and prepositional phrase (PP), subordinate conjunction (Sub), etc.

There is disagreement in certain areas, for instance concerning the status of prepositions. The distinction between lexical and functional categories plays a major role in Chomskyan grammars (Transformational Grammar, Government and Binding Theory, the Minimalist Program), where the functional categories are prominent. Many phrasal categories are assumed that do not correspond directly to a specific part of speech, e.g. inflection phrase (IP), tense phrase (TP), agreement phrase (AgrP), focus phrase (FP), etc. (see also Phrase → Functional categories). In order to acknowledge such functional categories, one has to assume that the relevant constellation is a primitive of the theory and exists separately from the words that appear. As a consequence, many grammar frameworks do not acknowledge such functional categories, e.g. Head-Driven Phrase Structure Grammar, Dependency Grammar, etc.

Note: The abbreviations for these categories vary across systems; see Part-of-speech tagging § Tag sets.
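
As a loose illustration of this split (a sketch only, using a simplified grouping that places adpositions on the functional side despite the disagreement noted above), the Universal POS tags assigned by a tagger can be partitioned into open-class (lexical) and closed-class (functional) items; the example assumes the Python spaCy library and its small English model.

```python
# Illustrative sketch: partition tokens into lexical (open-class) and
# functional (closed-class) items by their Universal POS tag. The grouping is
# a simplification; assumes spaCy and en_core_web_sm are installed.
import spacy

OPEN_CLASS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV"}   # treated as lexical

nlp = spacy.load("en_core_web_sm")
doc = nlp("The dog has slept on the old mat")

for token in doc:
    kind = "lexical" if token.pos_ in OPEN_CLASS else "functional"
    print(f"{token.text:6} {token.pos_:5} {kind}")

# Keeping only the lexical items approximates the "telegraphic" register:
print(" ".join(t.text for t in doc if t.pos_ in OPEN_CLASS))
# e.g. "dog slept old mat"
```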

Labels in the Minimalist Program

Early research suggested shifting away from the use of labels, since they were considered non-optimal for the analysis of syntactic structure and should therefore be eliminated. [9] Collins (2002) argued that, although labels such as Noun, Pronoun, Adjective and the like were unavoidable and undoubtedly useful for categorizing syntactic items, providing labels for the projections of those items was not useful and was, in fact, detrimental to structural analysis, since there were disagreements and discussions about how exactly to label these projections. The labeling of projections such as noun phrases (NP), verb phrases (VP), and others has since been a topic of discussion amongst syntacticians, who have been working on labeling algorithms to address the very problem raised by Collins.

In line with both phrase structure rules and X-bar theory, syntactic labeling plays an important role within Chomsky's Minimalist Program (MP). Chomsky developed the MP as a theoretical framework for generative grammar intended to apply universally across languages. In contrast to phrase structure rules and X-bar theory, much of the research and theorizing on labels is fairly recent and still ongoing.

Notes

  1. For the general reasoning behind syntactic categories, see Bach (1974:70-71) and Haegeman (1994:36).
  2. Luraghi, Silvia; Parodi, Claudia (2008). Key Terms in Syntax and Syntactic Theory. Continuum International Publishing Group. pp. 15–17.
  3. See Culicover (1982:8ff.).
  4. Carnie, Andrew (2013). Syntax: A Generative Introduction. MA, USA: Wiley-Blackwell. p. 52. ISBN 9781118321874.
  5. See for instance Emonds (1976:14), Culicover (1982:12), Brown and Miller (1991:24, 105), Cowper (1992:20, 173), Napoli (1993:169, 52), Haegeman (1994:38), Culicover (1997:19), Brinton (2000:169).
  6. See for instance Emonds (1976:12), Culicover (1982:13), Brown and Miller (1991:107), Cowper (1992:20), Napoli (1993:165), Haegeman (1994:38).
  7. "A Grammar of English". Public ASU. June 2000.
  8. For examples of grammars that draw a distinction between lexical and functional categories, see for instance Fowler (1971:36, 40), Emonds (1976:13), Cowper (1992:173ff.), Culicover (1997:142), Haegeman and Guéron (1999:58), Falk (2001:34ff.), Carnie (2007:45f.).
  9. Collins, Chris (2002). "Eliminating Labels". In Derivation and Explanation in the Minimalist Program: 33–49.
