Non-configurational language

In generative grammar, non-configurational languages are languages characterized by a flat phrase structure, which allows syntactically discontinuous expressions, and a relatively free word order. [1]

History of the concept of "non-configurationality"

The concept of non-configurationality was developed by grammarians working within Noam Chomsky's generative framework. Some of these linguists observed that the syntactic universals proposed by Chomsky, which required a rigid phrase structure, were challenged by languages whose syntax was much less rigid than that of the languages on which Chomsky had based his studies. [1] The concept was introduced by Ken Hale, who described the syntax of Warlpiri as non-configurational. However, the first to publish a description of non-configurationality was Chomsky himself, in his 1981 lectures on Government and Binding, in which he referred to an unpublished paper by Hale. [2] Chomsky made it a goal of the Government and Binding framework to accommodate languages such as Japanese and Warlpiri that apparently did not conform to his proposed language universal of Move α. Hale later published his own description of non-configurationality in Warlpiri. [3] [4]

Distinction

Non-configurational languages contrast with configurational languages, where the subject of a sentence is outside the finite verb phrase (VP) (directly under S below) but the object is inside it. Since there is no VP constituent in non-configurational languages, there is no structural difference between subject and object. The distinction between configurational and non-configurational can exist in phrase structure grammars only. In a dependency-based grammar the distinction is meaningless, because dependency-based structures do not acknowledge a finite VP constituent.

The following trees illustrate the distinction:

[Figure: phrase structure trees contrasting a configurational language, with a VP constituent, and a non-configurational language, with a flat structure (Configurational-non-configurat.png)]

Non-configurational languages have a seemingly 'flat' constituent structure, [5] as illustrated above. The presence of the VP constituent in the configurational tree on the left allows one to define the syntactic relations (subject vs. object) in terms of configuration: the subject is the argument that appears outside the VP, while the object appears inside it. The flat structure on the right, where there is no VP, requires one to view these aspects of syntax differently. More generally, Hale proposed that non-configurational languages have the following characteristics:

  1. free (or more accurately, pragmatically determined) word order
  2. extensive use of null anaphora (pro-drop phenomena)
  3. syntactically discontinuous expressions

However, it is not clear that those properties all cluster together. Languages that have been described as non-configurational include Mohawk, [6] Warlpiri, [7] Nahuatl, [6] O'odham (Papago), [8] Jingulu, [9] and Jiwarli. [9]

Discourse-configurationality

Using non-configurationality as a model, Maria Vilkuna coined, and Katalin É. Kiss developed, the concept of discourse-configurationality to describe languages in which constituent order is primarily determined by pragmatic factors. [10] [11] Non-configurationality and discourse-configurationality are mutually independent. [11]

The Oxford Handbook of Information Structure defines "discourse-configurational" as referring to "languages in which a particular phrase structure configuration is systematically and exclusively associated with some Information Structure category falling under the notions of Topic and Focus." Those associated with Topic status are more specifically called "topic-configurational," while those associated with Focus status are called "focus-configurational." [11]

Hungarian is discourse-configurational. [10] [11]

Non-configurational languages

Warlpiri

Warlpiri is a language of the large Pama-Nyungan language family and is spoken in Central Australia by more than 3000 people. It has four main dialects, spoken across the region: Yuendumu Warlpiri, Willowra Warlpiri, Lajamanu Warlpiri, and Wakirti Warlpiri. [12] It displays the three main characteristics of non-configurationality: free word order, extensive use of null anaphora, and discontinuous expressions.

[Figure: tree showing the basic sentence structure of Warlpiri sentences (Warlpiri Tree.png)]
[Figure: tree showing the basic sentence structure of English sentences (English Tree.png)]

According to Hale, the relatively unconstrained manner in which words are ordered within the sentence is due to the way in which the projection principle acts in non-configurational languages. Hale's Configurationality Parameter (CP) holds that in non-configurational languages the projection principle holds only of lexical structure (LS). [4] This is in contrast to configurational languages, where, according to the CP, the projection principle holds of both phrase structure (PS) and lexical structure. [4] According to Hale, it is the lack of a relation between the lexical structure and the phrase structure of sentences in Warlpiri that permits the three characteristics of non-configurationality to be present: [4]

  1. Free word order within Warlpiri is due to three properties of the language: word position within the sentence can be assigned freely, pronominal clitics within the auxiliary verbs provide information about their functions, and argument-taking predicates include their case marking within their lexical entries. [12]
  2. Concerning the possibility of null anaphora, the subject of an infinitive is allowed to be marked as anaphoric. [4] However, the subject argument of lexical structure cannot be marked as anaphoric, because it cannot be bound and therefore would violate Principle A of Binding Theory. [4]
  3. Discontinuous expressions are permitted in Warlpiri because non-adjacent nominals are able to correspond to a single verbal (AUX) argument. This means that a DP and an NP can refer to the same verbal argument while not being adjacent to each other in the sentence. [13]

The major (lexical) categories of Warlpiri include N, V, and PV (preverb), and the minor (functional) categories include AUX, together with the particles, conjunctions, and clitics, which all fall under the category Particle. [12] The general Warlpiri sentence phrase structure is as follows:

S --> (AUX) α α* (with α = N, V, or particle) [12]

Pronominals are freely ordered with respect to the other words in the sentence, and behave as other nominals do. [9] This is in contrast to the sentence structure of a configurational language, such as English, with a basic sentence phrase structure following:

S --> NP VP.

Warlpiri verbs are always argument-taking predicates and Warlpiri nominals are always arguments or argument-taking predicates. [12] This is shown in the tree structure to the right of ngaju-rna mijipurru ("I am short") in Warlpiri, with the nominals ngaju ("I") and mijipurru ("short") acting as either an argument-taking predicate or argument, depending on the category of the AUX -rna ("am"). In this sentence, the AUX is first person singular, indicating that ngaju must act as an argument and that mijipurru must act as an argument-taking predicate in order for the sentence to be grammatical in Warlpiri. [12] In English, the DP "I" is the argument and the adjective "short" is the argument-taking predicate. The trees to the right show the differences between configurational and non-configurational languages, with an example tree from Warlpiri compared with an example tree from English.
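The contrast between the two phrase structure rules can be sketched as a toy recognizer. This is an illustration only: the category labels, the clause-initial placement of AUX, and the flat treatment of the Warlpiri rule are simplifying assumptions, not claims from the sources cited above.

```python
# Toy recognizer contrasting the flat Warlpiri-style rule
#   S -> (AUX) alpha alpha*   (alpha = N, V, or Particle)
# with the rigid English-style rule
#   S -> NP VP

WARLPIRI_WORD_CATEGORIES = {"N", "V", "Particle"}

def warlpiri_ok(cats):
    """An optional initial AUX followed by one or more words drawn
    freely from the major categories, in any order."""
    if cats and cats[0] == "AUX":
        cats = cats[1:]
    return len(cats) >= 1 and all(c in WARLPIRI_WORD_CATEGORIES for c in cats)

def english_ok(cats):
    """Exactly one NP followed by one VP, in that order."""
    return cats == ["NP", "VP"]

# Any ordering of the same Warlpiri words is accepted:
print(warlpiri_ok(["AUX", "N", "N", "V"]))  # True
print(warlpiri_ok(["AUX", "V", "N", "N"]))  # True
# English word order is fixed:
print(english_ok(["NP", "VP"]))             # True
print(english_ok(["VP", "NP"]))             # False
```

The point of the sketch is that the Warlpiri-style rule constrains only the inventory of words (and the position of AUX), not their relative order, whereas the English-style rule fixes the order of its two constituents.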

Case-based analysis: Jelinek

Hale (1980, 1981, 1982, 1983) aimed to define a configurationality parameter from which the cluster of properties of non-configurational languages would follow. Eloise Jelinek challenged Hale, providing a re-analysis of Warlpiri and certain other non-configurational languages and proposing a different parameter. [13] In particular, Jelinek provides an analysis of why nominals are frequently 'absent' in Warlpiri (null anaphora). In Government and Binding Theory, the projection principle prevents missing nominals; instead, there are empty heads that bear the relevant thematic roles, so that the nominal is recoverable. Hale stipulates that nominals in non-configurational languages are simply optional, as a result of the nature of the relationship between phrase structure and lexical structure in such languages. [4] Jelinek, however, proposes a configurationality parameter that is in agreement with the projection principle, with specific reference to Warlpiri data. She proposes that the AUX not only marks grammatical relations but is also a constituent containing case-marked, fully referential clitic pronouns that serve as the verbal arguments. [13] Since nominals are never verb arguments, they can be omitted without violating the projection principle.

Jelinek then explains the free word order and apparent discontinuous expressions of non-configurational languages. Since nominals do not occupy argument positions, more than one nominal may be adjoined to a single argument, yielding discontinuous expressions. Additionally, because nominals act as adjuncts, they are not required to have a fixed word order. On this view, the function of nominals in non-configurational languages is, like that of adjoined clauses, to add information about a verbal argument or the predicate.

Mark Baker's application of non-configurationality to polysynthetic languages

Linguist Mark Baker considers polysynthesis, making specific use of Mohawk, in order to provide a conception of Universal Grammar that accurately accounts for both polysynthetic and non-polysynthetic languages. [6] He asserts that polysynthetic languages must conform to a syntactic rule he calls the "polysynthesis parameter", and that as a result they will show a special set of syntactic properties. Following this parameter, one property of polysynthetic languages is non-rigid phrase structure, making these languages non-configurational. To support his claim he considers three features of non-configurationality: the position of NPs, the licensing of NPs, and discontinuous constituents. [6]

Position of NPs

In non-configurational languages any NP can be omitted and can appear in any order relative to the verb or other NPs. Baker proposes that polysynthetic languages follow this structure because their NPs appear to have the properties of adjuncts. As an English parallel, adverbs are modifiers and can appear on either side of the VP; Baker applies this familiar concept to a new domain, showing that in Mohawk (a polysynthetic language), as in English, the VP has an obligatory position, but NPs can be adjuncts with respect to this element.

Licensing of NPs

As discussed above, Baker proposes that in polysynthetic languages NPs do not occupy argument positions; he therefore suggests that another parameter forces NPs into adjoined positions. He suggests this licensing occurs as a result of the Adjunct Licensing Condition and, following from it, the Chain Condition. The Adjunct Licensing Condition states that an argument-type phrase XP generated in an adjoined position is licensed if and only if it forms a chain with a unique null pronominal in an argument position. The Chain Condition states that X and Y may form a chain only under certain conditions: X c-commands Y, X and Y are coindexed, there is no barrier containing Y but not X, and X and Y are nondistinct in morphosyntactic features.
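The c-command clause of the Chain Condition can be illustrated with a small sketch. The tree encoding (a child-to-parent map) and the node names are invented for illustration, and the definition is simplified: a node's parent stands in for its lowest branching ancestor.

```python
# A hedged sketch of c-command on a toy phrase structure tree.

def dominates(parent_of, x, y):
    """True if node x properly dominates node y."""
    while y in parent_of:
        y = parent_of[y]
        if y == x:
            return True
    return False

def c_commands(parent_of, x, y):
    """X c-commands Y iff X and Y are distinct, neither dominates the
    other, and X's parent (simplification: its lowest branching
    ancestor) dominates Y."""
    if x == y or dominates(parent_of, x, y) or dominates(parent_of, y, x):
        return False
    return x in parent_of and dominates(parent_of, parent_of[x], y)

# Toy tree:        S
#                 /  \
#               NP    VP
#                     / \
#                    V   NP2
parent_of = {"NP": "S", "VP": "S", "V": "VP", "NP2": "VP"}

print(c_commands(parent_of, "NP", "NP2"))  # True: NP's sister VP contains NP2
print(c_commands(parent_of, "NP2", "NP"))  # False: VP does not dominate NP
```

So the subject NP c-commands the object NP2, but not vice versa, which is the kind of asymmetry the Chain Condition exploits when linking an adjoined phrase to a null pronominal.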

Discontinuous constituents

Baker also considers Hale's proposed third element of non-configurationality: the existence of discontinuous expressions. The range of discontinuous expressions in a polysynthetic language is determined primarily by lexical factors. [6] This suggests that a language that allows a wider range of discontinuous expressions may have more ways of licensing NP expressions.

In considering polysynthesis through the framework of non-configurationality, Mark Baker is able to provide a basis for the unique syntax seen in polysynthetic languages. Baker's approach to polysynthesis has created some debate among linguists, as it relies heavily on generative grammar, which causes some languages that would traditionally be considered polysynthetic to be excluded.

Controversy amongst phrase structure grammars

The analysis of non-configurational languages has been controversial among phrase structure grammars. [14] On the one hand, much work on these languages in Principles and Parameters has attempted to show that they are in fact configurational. On the other hand, it has been argued in Lexical Functional Grammar that these attempts are flawed, and that truly non-configurational languages exist. [15] From the perspective of syntactic theory, the existence of non-configurational languages bears on the question of whether grammatical functions like subject and object are independent of structure. If they are not, no language can be truly non-configurational.

Controversy with dependency grammars

The distinction between configurational and non-configurational languages can exist for phrase structure grammars only. Dependency grammars (DGs), since they lack a finite VP constituent altogether, do not acknowledge the distinction. In other words, all languages are non-configurational for DGs, even English, which all phrase structure grammars take for granted as having a finite VP constituent. The point is illustrated with the following examples:

No structure [will have a finite VP constituent]. (finite VP in brackets)
No structure will [have a finite VP constituent]. (non-finite VP in brackets)

Phrase structure grammars almost unanimously assume that the finite VP marked in the first sentence is a constituent. DGs, in contrast, do not see finite VPs as constituents. Both phrase structure grammars and DGs do, however, see non-finite VPs as constituents. The dependency structure of the example sentence is as follows:

[Figure: dependency tree of the example sentence (Non-configurational tree.jpg)]

Since the finite VP "will have a finite VP constituent" does not qualify as a complete subtree, it is not a constituent. Based on the criterion of configurationality, this means that this dependency structure (like all dependency structures) is non-configurational. The distinction between configurational and non-configurational has hence disappeared entirely, all languages being non-configurational in the relevant sense. Note, however, that while the finite VP is not a constituent in the tree, the non-finite VP "have a finite VP constituent" is a constituent (because it qualifies as a complete subtree).

Dependency grammars point to the results of standard constituency tests as evidence that finite VP does not exist as a constituent. [16] While these tests deliver clear evidence for the existence of a non-finite VP constituent in English (and other languages), they do not do the same for finite VP.
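The "complete subtree" criterion that underlies this argument can be sketched as follows. This is a minimal illustration: the head assignments for the example sentence are one plausible dependency analysis, not taken from the cited sources.

```python
# Complete-subtree (constituent) check on a dependency structure.
# Word indices:  0    1          2     3     4    5        6    7
sentence = ["No", "structure", "will", "have", "a", "finite", "VP", "constituent"]

# head_of maps each word index to the index of its head; the finite
# verb "will" (index 2) is the root and has no head.
head_of = {0: 1, 1: 2, 3: 2, 4: 7, 5: 7, 6: 7, 7: 3}

def subtree(i):
    """All word indices dominated by word i, including i itself."""
    nodes = {i}
    for j, h in head_of.items():
        if h == i:
            nodes |= subtree(j)
    return nodes

def is_constituent(indices):
    """A word set is a constituent iff it is the complete subtree of
    some word in the set."""
    return any(subtree(i) == set(indices) for i in indices)

# The non-finite VP "have a finite VP constituent" is a complete subtree:
print(is_constituent({3, 4, 5, 6, 7}))     # True
# The finite VP "will have a finite VP constituent" is not, because the
# root "will" also dominates the subject "No structure":
print(is_constituent({2, 3, 4, 5, 6, 7}))  # False
```

On this encoding, the string headed by the finite verb always spans the whole clause, which is why no dependency structure contains a finite VP constituent.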

Notes

  1. Golumbia, David (2004). "The interpretation of nonconfigurationality". Language & Communication. 24 (1): 1–22. doi:10.1016/S0271-5309(02)00058-7.
  2. Chomsky, N. (1981). Lectures on Government and Binding: The Pisa Lectures. Dordrecht: Foris.
  3. Hale, K. (1989). "On nonconfigurational structures". In Marácz, L.; Muysken, P. (eds.). Configurationality: The Typology of Asymmetries. Dordrecht: Foris. pp. 293–300.
  4. Hale, K. (1983). "Warlpiri and the grammar of non-configurational languages". Natural Language & Linguistic Theory. 1: 5–47.
  5. Crystal, David (2008). A Dictionary of Linguistics and Phonetics. Blackwell Pub. p. 329. ISBN 978-1-4051-5296-9.
  6. Baker, Mark C. (1996). The Polysynthesis Parameter. Oxford Studies in Comparative Syntax. New York: Oxford University Press. ISBN 0-19-509308-9. OCLC 31045692.
  7. Hale 1984, 1989.
  8. Smith, Marcus (2004). A Pre-group Grammar for a Non-configurational Language. UCLA ms., revised 3/12/2004. URL http://www.bol.ucla.edu/ smithma/papers.html
  9. Pensalfini, Rob (May 2004). "Towards a Typology of Configurationality". Natural Language & Linguistic Theory. 22 (2): 395–396. doi:10.1023/B:NALA.0000015794.02583.00. S2CID 170091602.
  10. "Google Scholar". scholar.google.co.uk. Retrieved 2018-03-09.
  11. Surányi, Balázs (2015). "Discourse-configurationality". In Féry, Caroline; Ishihara, Shinichiro (eds.). The Oxford Handbook of Information Structure. doi:10.1093/oxfordhb/9780199642670.013.37.
  12. Simpson, Jane (1991). Warlpiri Morpho-Syntax: A Lexicalist Approach. The Netherlands: Kluwer Academic Publishers. ISBN 0-7923-1292-9.
  13. Jelinek, Eloise (1984). "Empty Categories, Case and Configurationality". Natural Language & Linguistic Theory. 2: 39–76.
  14. See for instance Hale 1984 and Marácz and Muysken 1989.
  15. Austin and Bresnan 1996.
  16. See Osborne et al. 2011: 323–324.
