Linguistic categories

Linguistic categories include lexical categories (parts of speech such as noun and verb), syntactic categories (which can also include phrasal categories), and grammatical categories (grammatical features such as tense, number, or gender).

The definition of linguistic categories is a major concern of linguistic theory, and thus the definition and naming of categories varies across different theoretical frameworks and grammatical traditions for different languages. The operationalization of linguistic categories in lexicography, computational linguistics, natural language processing, corpus linguistics, and terminology management typically requires resource-, problem- or application-specific definitions of linguistic categories. In cognitive linguistics, it has been argued that linguistic categories have a prototype structure like that of the categories of common words in a language. [1]

Linguistic category inventories

To facilitate the interoperability between lexical resources, linguistic annotations and annotation tools and for the systematic handling of linguistic categories across different theoretical frameworks, a number of inventories of linguistic categories have been developed and are being used, with examples as given below. The practical objective of such inventories is to perform quantitative evaluation (for language-specific inventories), to train NLP tools, or to facilitate cross-linguistic evaluation, querying or annotation of language data. At a theoretical level, the existence of universal categories in human language has been postulated, e.g., in Universal grammar, but also heavily criticized.

Part-of-Speech tagsets

Schools commonly teach that there are nine parts of speech in English: noun, verb, article, adjective, preposition, pronoun, adverb, conjunction, and interjection. However, there are clearly many more categories and sub-categories. For nouns, the plural, possessive, and singular forms can be distinguished. In many languages, words are also marked for their case (role as subject, object, etc.), grammatical gender, and so on, while verbs are marked for tense, aspect, and other things. In some tagging systems, different inflections of the same root word will get different parts of speech, resulting in a large number of tags: for example, the Brown Corpus uses NN for singular common nouns, NNS for plural common nouns, and NP for singular proper nouns. Other tagging systems use a smaller number of tags and ignore fine differences or model them as features somewhat independent of part of speech. [2]
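The two modelling options mentioned above, fine-grained atomic tags versus a coarse part of speech with separate features, can be sketched in a few lines. This is a minimal illustration, not an implementation of any particular tagger; the atomic tags follow the Brown Corpus convention cited above, while the feature names are simplified assumptions:

```python
# Option 1: fine-grained atomic tags, as in the Brown Corpus.
atomic = {"dog": "NN", "dogs": "NNS", "Paris": "NP"}

# Option 2: a coarse part of speech plus independent features
# (feature names here are illustrative, not from any standard).
featural = {
    "dog":   {"pos": "NOUN", "number": "singular", "proper": False},
    "dogs":  {"pos": "NOUN", "number": "plural",   "proper": False},
    "Paris": {"pos": "NOUN", "number": "singular", "proper": True},
}

def to_atomic(feats):
    """Map a featural noun analysis back to a Brown-style atomic tag."""
    tag = "NP" if feats["proper"] else "NN"
    if feats["number"] == "plural":
        tag += "S"
    return tag

# The two representations encode the same distinctions.
assert all(to_atomic(featural[w]) == atomic[w] for w in atomic)
```

The featural representation makes it easy to query all nouns regardless of number, which is exactly the kind of coarse grouping the smaller tagsets favour.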

In part-of-speech tagging by computer, it is typical to distinguish from 50 to 150 separate parts of speech for English. POS tagging work has been done in a variety of languages, and the set of POS tags used varies greatly with language. Tags usually are designed to include overt morphological distinctions, although this leads to inconsistencies such as case-marking for pronouns but not nouns in English, and much larger cross-language differences. The tag sets for heavily inflected languages such as Greek and Latin can be very large; tagging words in agglutinative languages such as Inuit languages may be virtually impossible. Work on stochastic methods for tagging Koine Greek (DeRose 1990) has used over 1,000 parts of speech and found that about as many words were ambiguous in that language as in English. A morphosyntactic descriptor in the case of morphologically rich languages is commonly expressed using very short mnemonics, such as Ncmsan for Category=Noun, Type = common, Gender = masculine, Number = singular, Case = accusative, Animate = no.
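Positional mnemonics like Ncmsan can be decoded mechanically: each character position corresponds to one attribute. The sketch below illustrates this for the example above; the attribute order and value codes are simplified assumptions modelled on MULTEXT-East-style noun descriptors, not a complete specification:

```python
# Position -> attribute, and per-attribute value codes (simplified,
# modelled on MULTEXT-East-style noun descriptors; illustrative only).
NOUN_ATTRS = ["Category", "Type", "Gender", "Number", "Case", "Animate"]
VALUES = {
    "Category": {"N": "Noun"},
    "Type":     {"c": "common", "p": "proper"},
    "Gender":   {"m": "masculine", "f": "feminine", "n": "neuter"},
    "Number":   {"s": "singular", "p": "plural"},
    "Case":     {"n": "nominative", "g": "genitive",
                 "d": "dative", "a": "accusative"},
    "Animate":  {"y": "yes", "n": "no"},
}

def decode(msd):
    """Expand a positional morphosyntactic descriptor into attribute-value pairs."""
    return {attr: VALUES[attr][code] for attr, code in zip(NOUN_ATTRS, msd)}

assert decode("Ncmsan") == {
    "Category": "Noun", "Type": "common", "Gender": "masculine",
    "Number": "singular", "Case": "accusative", "Animate": "no",
}
```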

The most popular "tag set" for POS tagging for American English is probably the Penn tag set, developed in the Penn Treebank project.

Multilingual annotation schemes

For Western European languages, cross-linguistically applicable annotation schemes for parts of speech, morphosyntax and syntax have been developed with the EAGLES Guidelines. The "Expert Advisory Group on Language Engineering Standards" (EAGLES) was an initiative of the European Commission that ran within the DG XIII Linguistic Research and Engineering programme from 1994 to 1998, coordinated by Consorzio Pisa Ricerche, Pisa, Italy. The EAGLES guidelines provide guidance for markup to be used with text corpora, particularly for identifying features relevant in computational linguistics and lexicography. Numerous companies, research centres, universities and professional bodies across the European Union collaborated to produce the EAGLES Guidelines, which set out recommendations for de facto standards and rules of best practice in language engineering. [3]

The EAGLES guidelines have also inspired subsequent work in other regions, e.g., Eastern Europe. [4]

A generation later, a similar effort was initiated by the research community under the umbrella of Universal Dependencies. Petrov et al. [5] [6] proposed a "universal", but highly reductionist, tag set with 12 categories (for example, no subtypes of nouns, verbs or punctuation, and no distinction of "to" as an infinitive marker vs. preposition, hardly a "universal" coincidence). Subsequently, this was complemented with cross-lingual specifications for dependency syntax (Stanford Dependencies) [7] and morphosyntax (the Interset interlingua, [8] partially building on the Multext-East/EAGLES tradition) in the context of Universal Dependencies (UD), an international cooperative project to create treebanks of the world's languages with cross-linguistically applicable ("universal") annotations for parts of speech, dependency syntax, and (optionally) morphosyntactic (morphological) features. Core applications are automated text processing in the field of natural language processing (NLP) and research into natural language syntax and grammar, especially within linguistic typology. The annotation scheme has its roots in three related projects: Stanford Dependencies, the Google universal part-of-speech tags, and the Interset interlingua for morphosyntactic tagsets. The UD annotation scheme uses a representation in the form of dependency trees as opposed to phrase structure trees. As of February 2019, there were just over 100 treebanks of more than 70 languages available in the UD inventory. [9] The project's primary aim is to achieve cross-linguistic consistency of annotation. However, language-specific extensions are permitted for morphological features (individual languages or resources can introduce additional features). In a more restricted form, dependency relations can be extended with a secondary label that accompanies the UD label, e.g., aux:pass for an auxiliary (UD aux) used to mark passive voice. [10]
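UD treebanks are distributed in the tab-separated CoNLL-U format, where the universal dependency relation and any secondary label (such as aux:pass) share a single column. A minimal extractor for the relevant columns might look like this; it is a sketch only, since real CoNLL-U files also carry comment lines, multiword tokens, and further columns:

```python
def parse_conllu_line(line):
    """Extract token id, form, UPOS tag and the (possibly subtyped)
    dependency relation from one 10-column CoNLL-U token line."""
    cols = line.split("\t")
    tok_id, form, upos, deprel = cols[0], cols[1], cols[3], cols[7]
    # A secondary label like "aux:pass" refines the universal relation "aux".
    universal, _, subtype = deprel.partition(":")
    return {"id": tok_id, "form": form, "upos": upos,
            "deprel": universal, "subtype": subtype or None}

# "was" as a passive auxiliary: universal relation aux, subtype pass.
line = "2\twas\tbe\tAUX\tVBD\tMood=Ind|Tense=Past\t3\taux:pass\t_\t_"
tok = parse_conllu_line(line)
assert tok["deprel"] == "aux" and tok["subtype"] == "pass"
```

Splitting the label this way lets tools that only know the universal relations fall back to `aux` while UD-aware tools retain the passive-voice subtype.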

The Universal Dependencies have inspired similar efforts for the areas of inflectional morphology, [11] frame semantics [12] and coreference. [13] For phrase structure syntax, a comparable effort does not seem to exist, but the specifications of the Penn Treebank have been applied to (and extended for) a broad range of languages, [14] e.g., Icelandic, [15] Old English, [16] Middle English, [17] Middle Low German, [18] Early Modern High German, [19] Yiddish, [20] Portuguese, [21] Japanese, [22] Arabic [23] and Chinese. [24]

Conventions for interlinear glosses

In linguistics, an interlinear gloss is a gloss (series of brief explanations, such as definitions or pronunciations) placed between lines (inter- + linear), such as between a line of original text and its translation into another language. When glossed, each line of the original text acquires one or more lines of transcription known as an interlinear text or interlinear glossed text (IGT)—interlinear for short. Such glosses help the reader follow the relationship between the source text and its translation, and the structure of the original language. There is no standard inventory for glosses, but common labels are collected in the Leipzig Glossing Rules. [25] Wikipedia also provides a List of glossing abbreviations that draws on this and other sources.
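Under the Leipzig Glossing Rules, hyphen-separated morphemes in the object-language line must align one-to-one with hyphen-separated labels in the gloss line. That alignment can be checked mechanically, as in this sketch (the German example follows Leipzig style; the function covers only the basic hyphen convention, not clitic or infix notation):

```python
def align_gloss(text_line, gloss_line):
    """Pair each word's morphemes with its gloss labels, enforcing the
    one-to-one morpheme alignment required by the Leipzig rules."""
    pairs = []
    for word, gloss in zip(text_line.split(), gloss_line.split()):
        morphs, labels = word.split("-"), gloss.split("-")
        if len(morphs) != len(labels):
            raise ValueError(f"misaligned gloss: {word} / {gloss}")
        pairs.append(list(zip(morphs, labels)))
    return pairs

# German "Häus-er" ('houses') glossed as "house-PL".
aligned = align_gloss("Häus-er", "house-PL")
assert aligned == [[("Häus", "house"), ("er", "PL")]]
```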

General Ontology for Linguistic Description (GOLD)

GOLD ("General Ontology for Linguistic Description") is an ontology for descriptive linguistics. It gives a formalized account of the most basic categories and relations used in the scientific description of human language, e.g., as a formalization of interlinear glosses. GOLD was first introduced by Farrar and Langendoen (2003). [26] Originally, it was envisioned as a solution to the problem of resolving disparate markup schemes for linguistic data, in particular data from endangered languages. However, GOLD is much more general and can be applied to all languages. In this function, GOLD overlaps with the ISO 12620 Data Category Registry (ISOcat); it is, however, more stringently structured.

GOLD was maintained by the LINGUIST List and others from 2007 to 2010. [27] The RELISH project created a mirror of the 2010 edition of GOLD as a Data Category Selection within ISOcat. As of 2018, GOLD data remains an important terminology hub in the context of the Linguistic Linked Open Data cloud, but as it is not actively maintained anymore, its function is increasingly replaced by OLiA (for linguistic annotation, building on GOLD and ISOcat) and lexinfo.net (for dictionary metadata, building on ISOcat).

ISO 12620 (ISO TC37 Data Category Registry, ISOcat)

ISO 12620 is a standard from ISO/TC 37 that defines a Data Category Registry, a registry for linguistic terms used in various fields of translation, computational linguistics and natural language processing, and for defining mappings both between different terms and between the same terms used in different systems. [28] [29] [30]

An earlier implementation of this standard, ISOcat, provides persistent identifiers and URIs for linguistic categories, including the inventory of the GOLD ontology (see above). The goal of the registry is that new systems can reuse existing terminology, or at least be easily mapped to existing terminology, to aid interoperability. [31] The standard is used by other standards such as Lexical Markup Framework (ISO 24613:2008), and a number of terminologies have been added to the registry, including the EAGLES guidelines, the National Corpus of Polish, and the TermBase eXchange format from the Localization Industry Standards Association.
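The reuse-by-mapping idea behind such a registry can be sketched with plain dictionaries. All identifiers below are hypothetical placeholders (ISOcat used opaque "DC-" persistent identifiers; the tag names are drawn from the Penn Treebank and STTS tagsets for illustration); the point is only that two resource-specific tags become interoperable by resolving to one shared category:

```python
# Resource-specific tags mapped onto shared registry categories.
# The "DC-..." values are hypothetical placeholders, not real ISOcat PIDs.
REGISTRY_MAP = {
    ("PennTreebank", "NN"):  "DC-noun",
    ("PennTreebank", "NNS"): "DC-noun",
    ("STTS", "NN"):          "DC-noun",
    ("STTS", "ADJA"):        "DC-adjective",
}

def interoperable(scheme_a, tag_a, scheme_b, tag_b):
    """Two tags are interoperable if both resolve to the same registry category."""
    a = REGISTRY_MAP.get((scheme_a, tag_a))
    b = REGISTRY_MAP.get((scheme_b, tag_b))
    return a is not None and a == b

assert interoperable("PennTreebank", "NN", "STTS", "NN")
assert not interoperable("PennTreebank", "NN", "STTS", "ADJA")
```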

However, the current edition, ISO 12620:2019, [32] no longer provides a registry of terms for language technology and terminology; it is now restricted to terminology resources, hence the revised title "Management of terminology resources — Data category specifications". Accordingly, ISOcat is no longer actively developed. [33] As of May 2020, the successor systems, the CLARIN Concept Registry [34] and DatCatInfo, [35] were only emerging.

For linguistic categories relevant to lexical resources, the lexinfo vocabulary represents an established community standard, [36] in particular in connection with the OntoLex vocabulary and machine-readable dictionaries in the context of Linguistic Linked Open Data technologies. Just as the OntoLex vocabulary builds on the Lexical Markup Framework (LMF), lexinfo builds on (the LMF section of) ISOcat. [37] Unlike ISOcat, however, lexinfo is actively maintained and, as of May 2020, being extended in a community effort. [38]

Ontologies of Linguistic Annotation (OLiA)

Similar in spirit to GOLD, the Ontologies of Linguistic Annotation (OLiA) provide a reference inventory of linguistic categories for syntactic, morphological and semantic phenomena relevant for linguistic annotation and linguistic corpora in the form of an ontology. In addition, they also provide machine-readable annotation schemes for more than 100 languages, linked with the OLiA reference model. [39] The OLiA ontologies represent a major hub of annotation terminology in the (Linguistic) Linked Open Data cloud, with applications for search, retrieval and machine learning over heterogeneously annotated language resources. [37]

In addition to annotation schemes, the OLiA Reference Model is also linked with the EAGLES Guidelines, [40] GOLD, [40] ISOcat, [41] the CLARIN Concept Registry, [42] Universal Dependencies, [43] lexinfo, [43] etc., thus enabling interoperability between these vocabularies. OLiA is being developed as a community project on GitHub. [44]
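The mediation OLiA performs, linking annotation-scheme tags to reference-model concepts that are in turn linked to external vocabularies, can be illustrated with plain dictionaries. This is a conceptual sketch only: the actual ontologies are OWL/RDF, and all identifiers below (prefixes, concept names) are hypothetical:

```python
# Scheme-specific tags linked to reference-model concepts (hypothetical names).
SCHEME_TO_REF = {"PTB:JJ": "olia:Adjective", "STTS:ADJA": "olia:Adjective"}
# Reference-model concepts linked onward to external vocabularies (hypothetical).
REF_TO_EXTERNAL = {"olia:Adjective": {"gold": "gold:Adjectival", "ud": "ud:ADJ"}}

def translate(tag, target):
    """Translate a scheme-specific tag into a target vocabulary
    via the shared reference model."""
    concept = SCHEME_TO_REF[tag]
    return REF_TO_EXTERNAL[concept][target]

# Two different tagsets meet in the same target category.
assert translate("PTB:JJ", "ud") == translate("STTS:ADJA", "ud") == "ud:ADJ"
```

The reference model acts as a hub: with n schemes and m external vocabularies, only n + m links are needed rather than n × m pairwise mappings.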

Related Research Articles

Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others.

Corpus linguistics is an empirical method for the study of language by way of a text corpus. Corpora are balanced, often stratified collections of authentic, "real world" texts of speech or writing that aim to represent a given linguistic variety. Today, corpora are generally machine-readable data collections.

In linguistics and natural language processing, a corpus or text corpus is a dataset consisting of natively digital and older, digitized language resources, either annotated or unannotated.

In corpus linguistics, part-of-speech tagging, also called grammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc.

In linguistics and pedagogy, an interlinear gloss is a gloss placed between lines, such as between a line of original text and its translation into another language. When glossed, each line of the original text acquires one or more corresponding lines of transcription known as an interlinear text or interlinear glossed text (IGT) – an interlinear for short. Such glosses help the reader follow the relationship between the source text and its translation, and the structure of the original language. In its simplest form, an interlinear gloss is a literal, word-for-word translation of the source text.

The American National Corpus (ANC) is a text corpus of American English containing 22 million words of written and spoken data produced since 1990. Currently, the ANC includes a range of genres, including emerging genres such as email, tweets, and web data that are not included in earlier corpora such as the British National Corpus. It is annotated for part of speech and lemma, shallow parse, and named entities.

Treebank

In linguistics, a treebank is a parsed text corpus that annotates syntactic or semantic sentence structure. The construction of parsed corpora in the early 1990s revolutionized computational linguistics, which benefitted from large-scale empirical data.

Terminology extraction is a subtask of information extraction. The goal of terminology extraction is to automatically extract relevant terms from a given corpus.

Eva Hajičová

Eva Hajičová [ˈɛva ˈɦajɪt͡ʃovaː] is a Czech linguist, specializing in topic–focus articulation and corpus linguistics. In 2006, she was awarded the Association for Computational Linguistics (ACL) Lifetime Achievement Award. She was named a fellow of the ACL in 2011.

ISO/TC 37

ISO/TC 37 is a technical committee within the International Organization for Standardization (ISO) that prepares standards and other documents concerning methodology and principles for terminology and language resources.

The International Corpus of English (ICE) is a set of text corpora representing varieties of English from around the world. Over twenty countries or groups of countries where English is the first language or an official second language are included.

Language resource management – Lexical markup framework, produced by ISO/TC 37, is the ISO standard for natural language processing (NLP) and machine-readable dictionary (MRD) lexicons. The scope is standardization of principles and methods relating to language resources in the contexts of multilingual communication.

Quranic Arabic Corpus

The Quranic Arabic Corpus is an annotated linguistic resource consisting of 77,430 words of Quranic Arabic. The project aims to provide morphological and syntactic annotations for researchers wanting to study the language of the Quran.

The knowledge acquisition bottleneck is perhaps the major impediment to solving the word-sense disambiguation (WSD) problem. Unsupervised learning methods rely on knowledge about word senses, which is barely formulated in dictionaries and lexical databases. Supervised learning methods depend heavily on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as it is done in the Senseval exercises.

The following outline is provided as an overview of and topical guide to natural-language processing:

The Bulgarian Sense-annotated Corpus (BulSemCor) is a structured corpus of Bulgarian texts in which each lexical item is assigned a sense tag. BulSemCor was created by the Department of Computational Linguistics at the Institute for Bulgarian Language of the Bulgarian Academy of Sciences.

Manually Annotated Sub-Corpus (MASC) is a balanced subset of 500K words of written texts and transcribed speech drawn primarily from the Open American National Corpus (OANC). The OANC is a 15 million word corpus of American English produced since 1990, all of which is in the public domain or otherwise free of usage and redistribution restrictions.

In natural language processing, linguistics, and neighboring fields, Linguistic Linked Open Data (LLOD) describes a method and an interdisciplinary community concerned with creating, sharing, and (re-)using language resources in accordance with Linked Data principles. The Linguistic Linked Open Data Cloud was conceived and is being maintained by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation, but has been a point of focal activity for several W3C community groups, research projects, and infrastructure efforts since then.

Universal Dependencies, frequently abbreviated as UD, is an international cooperative project to create treebanks of the world's languages. These treebanks are openly accessible and available. Core applications are automated text processing in the field of natural language processing (NLP) and research into natural language syntax and grammar, especially within linguistic typology. The project's primary aim is to achieve cross-linguistic consistency of annotation, while still permitting language-specific extensions when necessary. The annotation scheme has its roots in three related projects: Stanford Dependencies, Google universal part-of-speech tags, and the Interset interlingua for morphosyntactic tagsets. The UD annotation scheme uses a representation in the form of dependency trees as opposed to phrase structure trees. At the present time, there are just over 200 treebanks of more than 100 languages available in the UD inventory.

In linguistics and language technology, a language resource is a "[composition] of linguistic material used in the construction, improvement and/or evaluation of language processing applications, (...) in language and language-mediated research studies and applications."

References

  1. John R Taylor (1995) Linguistic Categorization: Prototypes in Linguistic Theory, 2nd ed., ch.2 p.21
  2. Universal POS tags
  3. The essentials of EAGLES
  4. Dimitrova, L., Ide, N., Petkevic, V., Erjavec, T., Kaalep, H. J., & Tufis, D. (1998, August). Multext-east: Parallel and comparable corpora and lexicons for six central and eastern european languages. In Proceedings of the 17th international conference on Computational linguistics-Volume 1 (pp. 315-319). Association for Computational Linguistics.
  5. Petrov, Slav; Das, Dipanjan; McDonald, Ryan (11 Apr 2011). "A Universal Part-of-Speech Tagset". arXiv: 1104.2086 [cs.CL].
  6. Petrov, Slav (11 Apr 2011). "A Universal Part-of-Speech Tagset". arXiv: 1104.2086 [cs.CL].
  7. "Stanford Dependencies". nlp.stanford.edu. The Stanford Natural Language Processing Group. Retrieved 8 May 2020.
  8. "Interset". cuni.cz. Institute of Formal and Applied Linguistics (Czech Republic). Retrieved 8 May 2020.
  9. "Universal Dependencies". universaldependencies.org. Retrieved 2020-05-14.
  10. "aux:pass". universaldependencies.org. Retrieved 2020-05-14.
  11. UniMorph. "UniMorph: Universal Morphological Annotation". UniMorph. Retrieved 2020-05-14.
  12. System-T/UniversalPropositions, System-T, 2020-05-14, retrieved 2020-05-14
  13. Prange, J., Schneider, N., & Abend, O. (2019, August). Semantically Constrained Multilayer Annotation: The Case of Coreference. In Proceedings of the First International Workshop on Designing Meaning Representations (pp. 164-176).
  14. "Penn Parsed Corpora of Historical English: Other Corpora". www.ling.upenn.edu. Retrieved 2020-05-14.
  15. "Icelandic Parsed Historical Corpus (IcePaHC)". www.linguist.is. Retrieved 2020-05-14.
  16. Taylor, Ann; Warner, Anthony; Pintzuk, Susan; Beths, Frank (September 2003). The York-Toronto-Helsinki Parsed Corpus of Old English Prose (YCOE). Department of Language and Linguistic Science, University of York.
  17. "Penn-Helsinki Parsed Corpus of Middle English 2". www.ling.upenn.edu. Retrieved 2020-05-14.
  18. "Corpus of Historical Low German". www.chlg.ac.uk. Retrieved 2020-05-14.
  19. Light, C., & Wallenberg, J. (2011). On the use of passives across Germanic. Presented at 13th Meeting of the Diachronic Generative Syntax (DIGS) Conference DIGS 13, University of Pennsylvania. June 5, 2011
  20. Beatrice Santorini (1993). The rate of phrase structure change in the history of Yiddish. Language Variation and Change 5, 257-283.
  21. "Tycho Brahe Project". www.tycho.iel.unicamp.br. Retrieved 2020-05-14.
  22. "NPCMJ – Ninjal Parsed Corpus of Modern Japanese". Retrieved 2020-05-14.
  23. "Arabic Treebank: Part 3 (full corpus) v 2.0 (MPG + Syntactic Analysis) - Linguistic Data Consortium". catalog.ldc.upenn.edu. Retrieved 2020-05-14.
  24. "Penn Chinese Treebank Project". verbs.colorado.edu. Retrieved 2020-05-14.
  25. Comrie, B., Haspelmath, M., & Bickel, B. (2008). The Leipzig Glossing Rules: Conventions for interlinear morpheme-by-morpheme glosses. Department of Linguistics of the Max Planck Institute for Evolutionary Anthropology & the Department of Linguistics of the University of Leipzig. Retrieved January, 28, 2010.
  26. Scott Farrar and D. Terence Langendoen (2003) "A linguistic ontology for the Semantic Web". GLOT International 7 (3), pp. 97-100.
  27. GOLD versions
  28. "ISO 12620:1999 - Computer applications in terminology -- Data categories". iso.org. 2011. Retrieved 9 November 2011.
  29. "ISO 12620:2009 - Terminology and other language and content resources -- Specification of data categories and management of a Data Category Registry for language resources". iso.org. 2011. Retrieved 9 November 2011.
  30. "ISO 12620:2019 Management of terminology resources — Data category specifications". ISO. Retrieved 20 January 2020.
  31. Bononno, Robert (2011). "Terminology for Translators -- an Implementation of ISO 12620". Meta. 45 (4): 646-669. CiteSeerX 10.1.1.136.4771. doi:10.7202/002101ar.
  32. "ISO 12620:2019 Management of terminology resources — Data category specifications". ISO. Retrieved 20 January 2020.
  33. "The Data Category Repository (DCR) has changed address". www.iso.org. Retrieved 2020-05-08.
  34. "CLARIN Concept Registry | CLARIN ERIC". www.clarin.eu. Retrieved 2020-05-08.
  35. "DatCatInfo". www.datcatinfo.net. Retrieved 2020-05-08.
  36. "LexInfo". www.lexinfo.net. Retrieved 2020-05-14.
  37. Cimiano, P., Chiarcos, C., McCrae, J. P., & Gracia, J. (2020). Linguistic Linked Data (pp. 137-160). Springer, Cham.
  38. ontolex/lexinfo, OntoLex Community Group, 2020-03-07, retrieved 2020-05-14
  39. "OLiA ontologies". purl.org/olia. Retrieved 2020-05-14.
  40. Chiarcos, C. (2008). An ontology of linguistic annotations. In LDV Forum (Vol. 23, No. 1, pp. 1-16).
  41. Chiarcos, C. (2010, May). Grounding an ontology of linguistic annotations in the Data Category Registry. In LREC 2010 Workshop on Language Resource and Language Technology Standards (LT&LTS), Valetta, Malta (pp. 37-40).
  42. Rehm, G., Galanis, D., Labropoulou, P., Piperidis, S., Welß, M., Usbeck, R., et al. (2020). Towards an Interoperable Ecosystem of AI and LT Platforms: A Roadmap for the Implementation of Different Levels of Interoperability. arXiv preprint arXiv:2004.08355.
  43. Christian Chiarcos, Maxim Ionov and Christian Fäth (2020), Annotation interoperability in the post-ISOcat era, LREC 2020
  44. acoli-repo/olia, ACoLi, 2020-03-10, retrieved 2020-05-14