Nanosyntax

Nanosyntax is an approach to syntax in which the terminal nodes of syntactic parse trees may be reduced to units smaller than a morpheme. Each unit may stand as an irreducible element and is not required to form a further "subtree." [1] Because terminals are reduced to the smallest possible units, they are smaller than morphemes. Morphemes and words therefore cannot be treated as single terminals; instead, each is composed of several terminals. As a result, nanosyntax can serve as a solution to phenomena that are inadequately explained by other theories of syntax. [2]

Some recent work in theoretical linguistics suggests that the "atoms" of syntax are much smaller than words or morphemes. It follows that the responsibility of syntax is not limited to ordering "preconstructed" words. Instead, within the framework of nanosyntax, [3] words are derived entities built in syntax, rather than primitive elements supplied by a lexicon.

History

Theoretical context

Nanosyntax arose within the context of other syntactic theories, primarily Cartography and Distributed Morphology. [4] Cartographic theories of syntax were highly influential in the thinking behind nanosyntax, and the two share many commonalities. Cartography seeks to provide a syntactic theory that fits within Universal Grammar by charting the building blocks and structures of syntax present in all languages. Because Cartography is grounded in empirical evidence, increasingly small and detailed syntactic units and structures have been developed to accommodate new linguistic data. Cartography also syntacticizes various domains of grammar, particularly semantics, to varying degrees in different frameworks. For example, elements of meaning which serve grammatical functions, such as features conveying number, tense, or case, are treated as part of syntax. This trend towards including other grammatical domains within syntax is also reflected in nanosyntax. [4] Other elements of Cartography present in nanosyntax include a universal merge order of syntactic categories and exclusively right-branching trees with leftward movement. [5] However, cartographic syntax conceptualizes the lexicon as a pre-syntactic repository, which contrasts with the nanosyntactic view of the lexicon/syntax relationship. [6] [4]

The architecture of grammar in nanosyntactic theory.

Distributed Morphology provides an alternative to Lexicalist approaches to the interaction of the lexicon and syntax, in which words are created independently in the lexicon and then organized by syntax. In Distributed Morphology, the lexicon does not function independently and is instead distributed across many linguistic processes. [7] Both Distributed Morphology and nanosyntax are late-insertion models, meaning that syntax is viewed as a pre-lexical/phonological process, with syntactic categories as abstract concepts. [4] Additionally, both theories see syntax as responsible for both sentence- and word-level structure. [4] [8] Despite their many similarities, nanosyntax and Distributed Morphology differ in a few key areas, particularly in how they theorize the architecture of interacting grammatical domains. Distributed Morphology makes use of a presyntactic list of abstract roots and functional morphemes, with vocabulary insertion following syntactic processes. In contrast, nanosyntax has syntax, morphology, and semantics working simultaneously as a single domain which interacts throughout the syntactic process to apply lexical elements (the lexicon is a single domain in nanosyntax, while it is spread over multiple domains in Distributed Morphology). [4] See the section Tools below for more information.

Nanosyntactic theory is in direct conflict with theories that adopt views of the lexicon as an independent domain which generates lexical entries apart from any other grammatical domain. An example of such a theory is the Lexical Integrity Hypothesis, which states that syntax has no access to the internal structure of lexical items. [9]

Reasoning

A subtree for the idiom "tie the knot," meaning "marry."

By adopting an architecture of grammar which does not separate syntactic, morphological, and semantic processes, and by allowing terminals to represent sub-morphemic information, nanosyntax is equipped to address various failings and areas of uncertainty in previous theories. One phenomenon that motivates these tools is the idiom, in which a single lexical item is expressed by multiple words whose meaning cannot be determined compositionally. Because terminals in nanosyntax represent sub-morphemic information, a single morpheme is able to span several terminals, thus creating a subtree. This accommodates the structure of idioms, which are best represented as a subtree corresponding to one morpheme. [4]

Further evidence for the need for a nanosyntactic analysis includes irregular plural noun forms and irregular verb inflection (described in more detail in the Nanosyntactic operations section) and morphemes which encode multiple grammatical functions (described in more detail in the Tools section).

Nanosyntactic operations

Nanosyntax is a theory that seeks to fill gaps left by other theories in explaining phenomena in language. The most notable phenomenon that nanosyntax tackles is irregular inflection. [2] For example, "goose" is irregular in that its plural form is not "gooses" but "geese". This poses a problem for simple syntax: without additional rules and allowances, "geese" should be found to be a suboptimal candidate for the plural of "goose" in comparison to "gooses".

Possible solutions

There are three ways in which syntacticians may attempt to resolve this. The first is a word-based treatment. In the above examples, "duck", "ducks", "goose", and "geese" are all counted as separate heads under the category of nouns. [10] Whether a word is singular or plural is then marked in the lexical entry, and there is no Number head through which affixes can be added to modify the root word. This approach requires significant work on the part of the speaker to retrieve the correct word. It is also considered lacking in the face of morphological evidence such as the Wug test, in which children are able to correctly inflect a previously unheard nonsense noun from its singular to its plural.

Distributed Morphology attempts to tackle the question through the process of fusion, in which a noun head and its Number head may fuse together under certain parameters to derive an irregular plural. [11] In the above example, the plural of "duck" would simply select its plural allomorph "ducks", and the plural of "goose" would select its plural allomorph "geese", created through the fusion of "goose" and "-s". In this way, Distributed Morphology is head-based. However, this theory still does not provide a reason why "geese" is preferable and a more optimal candidate for the plural of "goose" than "gooses".

Nanosyntax approaches this dilemma by suggesting that rather than each word being a head, each word is a phrase and can therefore be represented as a subtree. Within the tree, heads can be assigned to override other heads in specific contexts. For example, if one head says that "-s" is added to a noun to turn a singular noun into a plural noun, but another head overrides it in the case of an irregular plural noun such as "goose", the operation of the superseding head is selected. Since this relies on a formula rather than rote memorisation of lexical items, it bypasses the challenges of a word-based treatment, and because of the arrangement of heads and their precedence, it also resolves the optimality concerns left open by Distributed Morphology.

Nanosyntax functions based on two principles: phrasal lexicalisation and the Elsewhere Principle.

Phrasal lexicalisation occurs here, where one item can lexicalise another if it fits its specific parameters. In this example, "geese" can lexicalise "NP goose + NumP plural."

Phrasal lexicalisation

Phrasal lexicalisation is the principle that lexical items can lexicalise (spell out) phrasal nodes, not only terminal nodes. [12] [13] When this principle is applied, we can say that for regular plural nouns there is no special lexicalisation (its absence is denoted in the example below using X), so standard pluralisation rules apply. The following example uses "duck": because there is no additional lexicalisation of the plural noun, an -s is added to pluralise the noun:

Irregular verbs can be parallel to phonological idioms (exemplified by "geese") and use phrasal lexicalisation as well.

X ↔ [PlP [NP DUCK] Pl]
duck ↔ [NP DUCK]
s ↔ Pl

This principle also allows a word like "geese" to lexicalise [PlP [NP GOOSE] Pl]. When such an additional lexicalisation is present, the standard addition of -s to pluralise the noun does not apply; the lexicalisation rule takes over in the following manner:

geese ↔ [PlP [NP GOOSE] Pl]

Elsewhere Principle

The Elsewhere Principle seeks to provide a solution to the question of which lexicalisation applies to the noun in question. In simple terms, the lexicalisation that is more specific will always take precedence over a more general lexicalisation.

As illustrated, if a syntactic structure S can be lexicalised either by A ↔ [XP X [YP Y [ZP Z ]]] or by B ↔ [YP Y [ZP Z ]], B will win out over A because B lexicalises a more specific structure whereas A lexicalises more generally. This solves the problem that Distributed Morphology faces when determining the optimal plural form of irregular nouns.
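The interaction of phrasal lexicalisation and the Elsewhere Principle can be pictured with a minimal, purely illustrative sketch (in Python). The tuple encoding of subtrees, the LEXICON table, and the lexicalise function are hypothetical conveniences introduced here, not part of any published nanosyntactic formalism; the sketch only assumes that an entry matching a whole subtree (the more specific option) beats assembling the parts separately (the more general option):

```python
# Minimal sketch (illustrative only): structures are nested tuples, e.g. the
# subtree [PlP [NP GOOSE] Pl] is written ("PlP", ("NP", "GOOSE"), "Pl").

# Hypothetical lexical entries: each pairs a structure with the form that lexicalises it.
LEXICON = {
    ("PlP", ("NP", "GOOSE"), "Pl"): "geese",  # phrasal entry spanning NP and Pl together
    ("NP", "DUCK"): "duck",
    ("NP", "GOOSE"): "goose",
    "Pl": "-s",                               # regular plural marker
}

def lexicalise(structure):
    """Spell out a structure. An entry matching the whole subtree (more specific)
    wins over spelling out the parts separately (more general), standing in for
    the Elsewhere Principle."""
    if structure in LEXICON:
        return LEXICON[structure]
    if isinstance(structure, tuple) and structure[0] == "PlP":
        _, noun_phrase, plural_head = structure
        return lexicalise(noun_phrase) + lexicalise(plural_head)
    raise ValueError(f"no lexicalisation for {structure!r}")

print(lexicalise(("PlP", ("NP", "DUCK"), "Pl")))   # "duck-s": no phrasal entry, so NP and Pl spell out separately
print(lexicalise(("PlP", ("NP", "GOOSE"), "Pl")))  # "geese": the phrasal entry takes precedence
```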

Observable consequences

Caha proposed that there is a hierarchy of case, from broadest to narrowest: Dative, Genitive, Accusative, Nominative. [14] Caha also suggested that each of these cases can be broken down into its most basic structures, each of which is a syntactic terminal, as follows:

Dative = [WP W [XP X [YP Y [ZP Z ]]]]
Genitive = [XP X [YP Y [ZP Z ]]]
Accusative = [YP Y [ZP Z ]]
Nominative = [ZP Z ]

This is further outlined below, in the section Morphological containment.

As each case is formed from nested structures, it is possible for portions of the structure to be lexicalised by separate items. Therefore, several syncretism patterns are possible, namely AAAA, AAAB, AABB, ABBB, AABC, ABBC and ABCC. Some arrangements do not appear as possibilities because of the constraints imposed by the Elsewhere Principle. Notably, once there has been a switch to a separate lexicalisation, an earlier lexicalisation cannot return: there are no occurrences where, once A turns to B or B to C, A or B respectively reappears. The Elsewhere Principle says that narrower lexicalisations win over broader lexicalisations, and once a narrower lexicalisation has been selected, the broader lexicalisation will not reappear.
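This "no return" restriction can be stated as a simple check over a sequence of forms ordered across the cases. The following short sketch (Python, illustrative only; the function name and the string encoding of patterns are conveniences introduced here) flags exactly the configurations in which a form reappears after a different form has intervened:

```python
def violates_no_return(pattern):
    """Return True if a form reappears after a different form has intervened
    (an ABA-type configuration), which the Elsewhere Principle rules out."""
    replaced = set()    # forms that have already been switched away from
    previous = None
    for form in pattern:
        if form in replaced:
            return True
        if previous is not None and form != previous:
            replaced.add(previous)
        previous = form
    return False

# The attested-style patterns listed above all pass the check ...
for pattern in ["AAAA", "AAAB", "AABB", "ABBB", "AABC", "ABBC", "ABCC"]:
    assert not violates_no_return(pattern)

# ... while patterns in which an earlier lexicalisation returns are excluded.
for pattern in ["ABAA", "AABA", "ABCA", "ABCB"]:
    assert violates_no_return(pattern)
```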

Nanosyntax of specific categories

Nanosyntactic analyses have been developed for specific lexical categories, including nouns and prepositions.

Nouns

Nanosyntax has been found to be a useful analysis for explaining properties of nouns, more specifically patterns in their affixes. Across languages, syntacticians have used principles of nanosyntax such as Spellout (or Phrasal Lexicalisation), the Elsewhere Principle (described above), and Cyclic Override (i.e., multiple spellout processes) to show that nouns are built from structures smaller than individual morphemes. These structures underlie the syntax, and it is the structure which dictates the morphemes rather than morphemes dictating structure. [15] [16] Descriptive analyses like those below, in which morphemes are broken down into smaller units and given their own sub-morphemic structure, are possible using a nanosyntactic approach in which morphemes are able to lexicalize multiple terminals of the syntactic tree. [16]

English plural morpheme

Syntax tree showing how spellout and the Elsewhere Principle allow irregular plurals in English.

The structure of irregular plural nouns in English, and why they surface as they do, can be explained using a nanosyntactic approach. For example, the plural of mouse is not *mouses (* indicates ungrammaticality) but mice. The structure of a plural noun can be broken down into the following morphemes: [Noun Plural]. [15] The lexical item mice is able to lexicalize both the noun and plural morphemes in one single form because it is both a larger and more specific structure than *mouse-s. This makes it more favourable, because it spells out an entire syntactic sub-tree rather than an individual daughter node.

Nguni noun class prefixes

Research has been done on Nguni languages to develop a structure for the noun class prefixes. Each of the prefixes has a layered structure whereby different layers interact with other morphemes in the language. As a result, different syntactic parts of the morpheme will surface in different environments. The complete Class 2 prefix aba- surfaces in the noun aba-fundi ('the/some students'). However in the vocative case, the initial vowel is not present: Molweni, (*a)ba-fundi ('Hi students'). The initial vowel is syntactically independent from the rest of the morpheme, able to appear in some environments and not others. Nanosyntax provides an explanation by positing that the initial vowel has a separate node from the rest of the morpheme in a subtree. [16]

Prepositions

Some uncertainty remains in deriving prepositions in nanosyntax. Prepositions must precede the item with which they combine and must not be moved by Spellout (i.e., the replacement of a portion of the syntactic tree with a lexical item). [4] [17] Three proposed approaches that aim to account for this are spanning, head movement, and the existence of an additional workspace for complex heads.

Spanning refers to the operation in which the categorial features of a lexical item associate it with various heads (i.e., a head and its related heads, determined by mutually selected maximal projections). The approach requires that all associated heads be contiguous, but the item need not form a constituent. Under this approach, prepositions may be antecedently lexicalized even if they do not hold constituent status. [18] [4]

Head movement has been suggested to be responsible for the order of morphemes within a word. [14] The nanosyntactic approach results in an ordering of affixes that poses a problem for head movement, which has caused some debate. [19] This motivates the argument, put forward by some researchers, that phrasal movements are responsible for specific orderings of morphemes, meaning that head movement is dispensable and need only be understood as a special case of phrasal movement called "roll-up". [14] [20] Nonetheless, other researchers retain movement restrictions, applying movement only to constituents containing a head. Such attempts to keep movement operations simple make it possible for those following a nanosyntactic approach to use the same operations as conventional minimalist syntax (in which terminal nodes are lexical items). [19]

The suggestion of an additional workspace for complex heads conceptualizes that the prefixal element is created in a separate space. Thereafter, it combines antecedently with the remaining structure in the primary workspace, allowing the assembled item to maintain its internal ordering of features. [4]

Tools

Nanosyntax uses a handful of tools in order to map out fine-grained elements of the language being analyzed. Beyond Spellout Principles, there are three main tools for this system based on the writings of Baunaz, Haegeman, De Clercq, and Lander in Exploring Nanosyntax. [20]

Semantics

The universal structure of compositionality is used to map sentences semantically: it concerns which features a word is composed of and which structures a given word is semantically "constructed on". Semantic considerations also constrain the structural size of a sentence, based on the semantic categories of elements such as verbs. This is an important guide to which elements of syntax need to be aligned with semantic markers. [20]

Syncretism

Syncretism has played a central role in the development of nanosyntax. [21] Syncretism arises when two distinct morphosyntactic functions are expressed by a single surface form, such as two grammatical functions contained within one lexical form. [22] An example is the French "à", which can indicate either a location or a goal; this is therefore a Location-Goal syncretism. [21] Observations of syncretism of this kind come from cross-linguistic work investigating the patterns of readings of words such as goal "to", route "via", and location "at", carried out by linguists such as Svenonius. [23]

Case syncretism has been determined to be possible only between adjacent cases, based on the *ABA theorem. [24] It can therefore be used to identify adjacent elements in the ordering of cases, such as the nominative and accusative in languages such as English. [20] By using syncretism in nanosyntax, a universal order of cases can be identified by determining which cases sit beside one another. This finding allows linguists to understand which features are present, as well as their order.

Morphological containment

The nesting of cases assumed in nanosyntax.

Morphological containment relates to the hierarchy underlying the linear order of syntactic structures. Syncretism may reveal a linear order, but it cannot determine in which direction that order runs; this is where morphological containment is required. In this context it is used to establish the hierarchy of cases. Syncretism can determine that the linear order of cases is COM > INS > DAT > GEN > ACC > NOM or NOM > ACC > GEN > DAT > INS > COM, but morphological containment decides whether the order is nominative-initial or comitative-initial. [20] The case features can be understood as sets nested within one another, where features build on top of each other: the first feature is a singleton, the next case contains the first feature plus a second, and so on. These sets can be referred to by the case labels above or, to simplify the nesting of the features, labelled K1, K2, and so on, as proposed by Pavel Caha. [22] Arguments for the nominative being the simplest and first case can be linked to its simplicity in structure and features. [22] Examples found in natural language suggest an order beginning with NOM and ending with COM, such as West Tocharian, where the ACC plural ending -m is found nested in the GEN/DAT ending -mts. [20] This is a surface manifestation of the ordering of cases through nesting in nanosyntax.
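The cumulative nesting of case features can be pictured as sets that each contain all of the features of the case below plus one more. The sketch below (Python, illustrative only; the dictionary and the K1–K6 labels simply follow Caha's labelling convention as described above) makes the containment relation explicit:

```python
# Illustrative sketch of morphological containment: each case adds one feature
# (K1, K2, ...) on top of the features of the previous case.
CASE_ORDER = ["NOM", "ACC", "GEN", "DAT", "INS", "COM"]

CASE_FEATURES = {}
features = frozenset()
for index, case in enumerate(CASE_ORDER, start=1):
    features = features | {f"K{index}"}   # one additional feature per case
    CASE_FEATURES[case] = features

# Nominative is the simplest case, a singleton; comitative contains all six features.
assert CASE_FEATURES["NOM"] == {"K1"}
assert CASE_FEATURES["ACC"] == {"K1", "K2"}

# Containment: every lower case is a proper subset of every higher one,
# mirroring, e.g., the West Tocharian ACC marker surfacing inside the GEN/DAT marker.
assert CASE_FEATURES["ACC"] < CASE_FEATURES["GEN"] < CASE_FEATURES["DAT"]

print({case: sorted(feats) for case, feats in CASE_FEATURES.items()})
```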

References

  1. "Pavel's mythbuster nanoseminar". Nanosyntax. Archived from the original on May 10, 2013.
  2. 1 2 Taraldsen, Knut Tarald (11 December 2019). "An introduction to Nanosyntax". Linguistics Vanguard. 5 (1). doi:10.1515/lingvan-2018-0045. hdl: 10037/17963 . S2CID   209378983.
  3. Starke, Michal (2014). "Towards elegant parameters: Language variation reduces to the size of lexically-stored trees". Linguistic Variation in the Minimalist Framework. pp. 140–152. doi:10.1093/acprof:oso/9780198702894.003.0007. ISBN   9780198702894.
  4. 1 2 3 4 5 6 7 8 9 10 Baunaz, Lena & Lander, Eric. "Nanosyntax: The Basics" in Exploring nanosyntax. Edited by Lena Baunaz, Liliane Haegeman, Karen De Clercq, and Eric Lander. Oxford University Press, 2018.
  5. Cinque, Guglielmo. 2005. “Deriving Greenberg’s Universal 20 and Its Exceptions.”Linguistic Inquiry 36 (3): pp. 315–332.
  6. Rizzi, Luigi. 2013. “Syntactic Cartography and the Syntacticisation of Scope-Discourse Semantics.” In Mind, Values and Metaphysics—Philosophical Papers Dedicated to Kevin Mulligan, edited by Anne Reboul, pp. 517–533. Dordrecht, e Netherlands: Springer.
  7. Marantz, Alec. 1997a. 'No escape from syntax: Don't try morphological analysis in the privacy of your own Lexicon.' Proceedings of the 21st Annual Penn Linguistics Colloquium:  Penn Working Papers in Linguistics 4: 2, ed. Alexis Dimitriadis et.al. 201-225.
  8. Embick, David and Rolf Noyer. 2007. “Distributed Morphology and the Syntax- Morphology Interface.” In e Oxford Handbook of Linguistic Interfaces, edited by Gillian Ramchand and Charles Reiss, pp. 289–324. Oxford: Oxford University Press.
  9. Himmelreich, Anke (2019). “Morphology: Lexical Integrity” lecture from 23 October 2019. Universität Leipzig: Institut für Linguistik. https://home.uni-leipzig.de/~assmann/teaching/WS1920mo/02_lexicalIntegrity.pdf
  10. Cinque, Guglielmo; Rizzi, Luigi (2010). "The cartography of syntactic structures". Oxford Handbook of Linguistic Analysis.
  11. Morris, Halle; Marantz, Alec (1993). "Distributed morphology and the pieces of inflection. Hale, K. & SJ Keyser (eds.), The View from Building 20". Morphology: Critical Concepts in Linguistics: 111–176.
  12. Starke, Michal (2010). "Nanosyntax: A short primer to a new approach to language". Nordlyd. 36 (1): pp. doi: 10.7557/12.213 .
  13. Starke, Mical (2011). "Towards elegant parameters: Language variation reduces to the size of lexically stored trees". Ms. Tromsø University.
  14. 1 2 3 Caha, Pavel (2009). The nanosyntax of case (PhD thesis). University of Tromsø.
  15. 1 2 Starke, Michal (2009). "Nanosyntax : A short primer to a new approach to language". Nordlyd. 36 (1): 1–6.
  16. 1 2 3 Taraldsen, Knut Tarald (2010). "The nanosyntax of Nguni noun class prefixes and concords". Lingua. 120 (6): 1522–1548. doi:10.1016/j.lingua.2009.10.004.
  17. Pantcheva, Marina Blagoeva (2011-10-21). Decomposing Path : The Nanosyntax of Directional Expressions (Doctoral thesis). Universitetet i Tromsø. p. 109.
  18. Dékány, Éva (2009-01-01). "The nanosyntax of Hungarian postpositions". Nordlyd: Tromsø University Working Papers on Language & Linguistics. 36 (1): 41–76. doi: 10.7557/12.219 . hdl: 10037/3185 .
  19. 1 2 Pretorius, Erin; Oosthuizen, Johan (2012). "Nanosyntax: A fresh approach to syntactic analysis". Southern African Linguistics and Applied Language Studies. 30 (4): 433–447. doi:10.2989/16073614.2012.750819. ISSN   1607-3614. S2CID   123998697.
  20. 1 2 3 4 5 6 Baunaz, Lena; Haegeman, Liliane; De Clercq, Karen; Lander, Eric, eds. (2018-06-21). "Exploring Nanosyntax". Oxford Scholarship Online. doi:10.1093/oso/9780190876746.001.0001. ISBN   9780190876746.
  21. 1 2 Pantcheva, Marina. Decomposing Path: The Nanosyntax of Directional Expressions. University of Tromso. OCLC   786337774.
  22. 1 2 3 Pavel Caha (July 2009). The Nanosyntax of Case (PDF) (PhD dissertation).
  23. Svenonius, Peter. 2010. “Spatial Prepositions in English.” In Mapping Spatial PPs: The Cartography of Syntactic Structures, Vol. 6, edited by Guglielmo Cinque and Luigi Rizzi, pp. 127–160. New York: Oxford University Press.
  24. Bobaljik, Jonathan David (2012). Universals in Comparative Morphology: Suppletion, Superlatives, and the Structure of Words (Current Studies in Linguistics). MIT Press. OCLC   939970944.