Terminology extraction

Terminology extraction (also known as term extraction, glossary extraction, term recognition, or terminology mining) is a subtask of information extraction. The goal of terminology extraction is to automatically extract relevant terms from a given corpus. [1]

In the semantic web era, a growing number of communities and networked enterprises started to access and interoperate through the internet. Modeling these communities and their information needs is important for several web applications, such as topic-driven web crawlers, [2] web services, [3] and recommender systems. [4] The development of terminology extraction is also essential to the language industry.

One of the first steps to model a knowledge domain is to collect a vocabulary of domain-relevant terms, constituting the linguistic surface manifestation of domain concepts. Several methods to automatically extract technical terms from domain-specific document warehouses have been described in the literature. [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17]

Typically, approaches to automatic term extraction make use of linguistic processors (part-of-speech tagging, phrase chunking) to extract terminological candidates, i.e. syntactically plausible terminological noun phrases. Noun phrases include compounds (e.g. "credit card"), adjective noun phrases (e.g. "local tourist information office"), and prepositional noun phrases (e.g. "board of directors"). In English, the first two (compounds and adjective noun phrases) are the most frequent. [18] Terminological entries are then filtered from the candidate list using statistical and machine learning methods. Once filtered, because of their low ambiguity and high specificity, these terms are particularly useful for conceptualizing a knowledge domain or for supporting the creation of a domain ontology or a terminology base. Furthermore, terminology extraction is a very useful starting point for semantic similarity, knowledge management, human translation and machine translation, etc.
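
As an illustration of this pipeline, the sketch below extracts candidate noun phrases with part-of-speech tagging and chunking, then applies a plain frequency cutoff as a stand-in for the statistical filtering step. It assumes NLTK with its standard tokenizer and tagger models installed; names such as extract_candidates and filter_terms are illustrative and not taken from any particular system.

    from collections import Counter
    import nltk

    # Candidate pattern: optional adjectives followed by one or more nouns,
    # covering compounds ("credit card") and adjective noun phrases
    # ("local tourist information office").
    chunker = nltk.RegexpParser("TERM: {<JJ>*<NN.*>+}")

    def extract_candidates(text):
        counts = Counter()
        for sentence in nltk.sent_tokenize(text):
            tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
            tree = chunker.parse(tagged)
            for subtree in tree.subtrees(filter=lambda t: t.label() == "TERM"):
                term = " ".join(word.lower() for word, _ in subtree.leaves())
                counts[term] += 1
        return counts

    def filter_terms(counts, min_freq=2):
        # A plain frequency cutoff stands in for termhood/unithood measures
        # such as C-value/NC-value or comparison against a reference corpus.
        return [term for term, freq in counts.most_common() if freq >= min_freq]

In a real system the cutoff would be replaced by one of the statistical or machine learning scoring methods cited above, and the surviving terms would typically be reviewed before entering a terminology base.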

Bilingual terminology extraction

The methods for terminology extraction can be applied to parallel corpora. Combined with, for example, co-occurrence statistics, candidates for term translations can be obtained. [19] Bilingual terminology can also be extracted from comparable corpora [20] (corpora containing texts of the same text type and domain that are not translations of each other).
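
The following sketch illustrates the co-occurrence idea on a sentence-aligned parallel corpus, assuming term candidates have already been extracted separately for each language and scoring their co-occurrence with the Dice coefficient. The input format and the helper name translation_candidates are assumptions for illustration; this does not reproduce any cited system such as TExSIS.

    from collections import Counter
    from itertools import product

    def translation_candidates(aligned_pairs, source_terms, target_terms, min_dice=0.3):
        """aligned_pairs: iterable of (source_sentence, target_sentence) string pairs."""
        source_freq, target_freq, joint = Counter(), Counter(), Counter()
        for source_sentence, target_sentence in aligned_pairs:
            source_hits = {t for t in source_terms if t in source_sentence}
            target_hits = {t for t in target_terms if t in target_sentence}
            source_freq.update(source_hits)
            target_freq.update(target_hits)
            joint.update(product(source_hits, target_hits))
        # Dice coefficient: a source and a target term that mostly occur in the
        # same aligned sentence pairs are proposed as translation candidates.
        scored = []
        for (s, t), c in joint.items():
            dice = 2 * c / (source_freq[s] + target_freq[t])
            if dice >= min_dice:
                scored.append(((s, t), dice))
        return sorted(scored, key=lambda item: -item[1])

Comparable corpora offer no aligned sentence pairs, so different techniques, typically based on context vectors, are used there instead.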

Related Research Articles

Natural language processing (field of linguistics and computer science)

Natural language processing (NLP) is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to process and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents, as well as categorize and organize the documents themselves.

WordNet (computational lexicon of English)

WordNet is a lexical database of English that links words into semantic relations, including synonyms, hyponyms, and meronyms. The synonyms are grouped into synsets with short definitions and usage examples. It can thus be seen as a combination and extension of a dictionary and thesaurus. While it is accessible to human users via a web browser, its primary use is in automatic text analysis and artificial intelligence applications. It was first created for English; the English WordNet database and software tools have been released under a BSD-style license and are freely available for download from the WordNet website. There are now WordNets in more than 200 languages.

In linguistics and natural language processing, a corpus or text corpus is a dataset consisting of language resources, whether natively digital or digitized from older material, and either annotated or unannotated.

Word-sense disambiguation (WSD) is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious and automatic, but it can come to conscious attention when ambiguity impairs the clarity of communication, given the pervasive polysemy in natural language. In computational linguistics, it is an open problem that affects other tasks, such as discourse analysis, improving the relevance of search engines, anaphora resolution, coherence, and inference.

Glossary (alphabetical list of terms relevant to a certain field of study or action)

A glossary, also known as a vocabulary or clavis, is an alphabetical list of terms in a particular domain of knowledge with the definitions for those terms. Traditionally, a glossary appears at the end of a book and includes terms within that book that are either newly introduced, uncommon, or specialized. While glossaries are most commonly associated with non-fiction books, novels sometimes include a glossary for unfamiliar terms.

Text mining, text data mining (TDM) or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by devising patterns and trends through means such as statistical pattern learning. According to Hotho et al. (2005), three different perspectives of text mining can be distinguished: information extraction, data mining, and a knowledge discovery in databases (KDD) process. Text mining usually involves structuring the input text, deriving patterns within the structured data, and finally evaluating and interpreting the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling.

Information extraction (IE) is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents and other electronically represented sources. In most of the cases this activity concerns processing human language texts by means of natural language processing (NLP). Recent activities in multimedia document processing like automatic annotation and content extraction out of images/audio/video/documents could be seen as information extraction.

Automatic summarization is the process of shortening a set of data computationally, to create a subset that represents the most important or relevant information within the original content. Artificial intelligence algorithms are commonly developed and employed to achieve this, specialized for different types of data.

Semantic similarity is a metric defined over a set of documents or terms, where the idea of distance between items is based on the likeness of their meaning or semantic content, as opposed to lexicographical similarity. Such metrics are mathematical tools used to estimate the strength of the semantic relationship between units of language, concepts or instances, through a numerical description obtained by comparing information supporting their meaning or describing their nature. The term semantic similarity is often confused with semantic relatedness. Semantic relatedness includes any relation between two terms, while semantic similarity only includes "is a" relations. For example, "car" is similar to "bus", but is also related to "road" and "driving".
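
As a small illustration of the distinction, a taxonomy-based score such as WordNet path similarity can be computed with NLTK (assuming the WordNet data has been downloaded); the synset choices below are illustrative.

    from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

    car = wn.synset("car.n.01")
    bus = wn.synset("bus.n.01")
    road = wn.synset("road.n.01")

    # Path similarity follows "is a" (hypernym) links in the taxonomy, so the
    # car/bus pair, which shares a nearby ancestor, would be expected to score
    # higher than car/road, which are related only through use.
    print(car.path_similarity(bus))
    print(car.path_similarity(road))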

In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval.

Linguistic categories include lexical categories (parts of speech such as noun and verb), syntactic categories, and grammatical categories such as tense, number, and gender.

The National Centre for Text Mining (NaCTeM) is a publicly funded text mining (TM) centre. It was established to provide support, advice and information on TM technologies and to disseminate information from the larger TM community, while also providing services and tools in response to the requirements of the United Kingdom academic community.

Ontology learning (automatic creation of ontologies)

Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process.

Knowledge extraction is the creation of knowledge from structured and unstructured sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge or the generation of a schema based on the source data.

The outline of natural-language processing provides an overview of and topical guide to the field.

Marti Hearst (American computer scientist)

Marti Hearst is a professor in the School of Information at the University of California, Berkeley. She did early work in corpus-based computational linguistics, including some of the first work in automating sentiment analysis and word sense disambiguation. She invented an algorithm that became known as "Hearst patterns", which applies lexico-syntactic patterns to recognize hyponymy ("is a") relations with high accuracy in large text collections, including an early application of it to WordNet; this algorithm is widely used in commercial text mining applications, including ontology learning. Hearst also did early work on the automatic segmentation of text into topical discourse boundaries, inventing a now well-known approach called TextTiling.
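
A toy illustration of one such lexico-syntactic pattern ("X such as Y1, Y2 and Y3") over plain strings is sketched below; real systems match such patterns over tagged and chunked noun phrases rather than raw regular expressions.

    import re

    # The classic "such as" Hearst pattern: each listed item is proposed as a
    # hyponym of the preceding noun phrase. (A fuller implementation would also
    # reduce the hypernym phrase to its head noun, e.g. "country".)
    pattern = re.compile(r"(\w+(?: \w+)*) such as ((?:\w+(?:, )?)+(?: and \w+)?)")

    sentence = "European countries such as France, Italy and Spain signed the treaty."
    for match in pattern.finditer(sentence):
        hypernym = match.group(1)
        for hyponym in re.split(r", | and ", match.group(2)):
            print(f"{hyponym} is-a {hypernym}")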

Automatic taxonomy construction (ATC) is the use of software programs to generate taxonomical classifications from a body of texts called a corpus. ATC is a branch of natural language processing, which in turn is a branch of artificial intelligence.

Sketch Engine (corpus manager and text analysis software)

Sketch Engine is a corpus manager and text analysis software developed by Lexical Computing CZ s.r.o. since 2003. Its purpose is to enable people studying language behaviour to search large text collections according to complex and linguistically motivated queries. Sketch Engine gained its name after one of the key features, word sketches: one-page, automatic, corpus-derived summaries of a word's grammatical and collocational behaviour. Currently, it supports and provides corpora in 90+ languages.

Keyword extraction is tasked with the automatic identification of terms that best describe the subject of a document.

Paola Velardi (professor of computer science)

Paola Velardi is a full professor of computer science at Sapienza University in Rome, Italy. Her research encompasses natural language processing, machine learning, business intelligence, and the semantic web, with a particular focus on web information extraction. Velardi is one of the hundred female scientists included in the database "100esperte.it". This online, open database champions the recognition of top-rated female scientists in Science, Technology, Engineering and Mathematics (STEM) areas.

References

  1. Alrehamy, Hassan H; Walker, Coral (2018). "SemCluster: Unsupervised Automatic Keyphrase Extraction Using Affinity Propagation". Advances in Computational Intelligence Systems. Advances in Intelligent Systems and Computing. Vol. 650. pp. 222–235. doi:10.1007/978-3-319-66939-7_19. ISBN 978-3-319-66938-0.
  2. Menczer F., Pant G. and Srinivasan P. Topic-Driven Crawlers: machine learning issues.
  3. Fan J. and Kambhampati S. A Snapshot of Public Web Services, in ACM SIGMOD Record, Volume 34, Issue 1 (March 2005).
  4. Yan Zheng Wei, Luc Moreau, Nicholas R. Jennings. A market-based approach to recommender systems, in ACM Transactions on Information Systems (TOIS), 23(3), 2005.
  5. Bourigault D. and Jacquemin C. Term Extraction+Term Clustering: an integrated platform for computer-aided terminology (Archived 2006-06-19 at the Wayback Machine), in Proc. of EACL, 1999.
  6. Collier, N.; Nobata, C.; Tsujii, J. (2002). "Automatic acquisition and classification of terminology using a tagged corpus in the molecular biology domain". Terminology. 7 (2): 239–257. doi:10.1075/term.7.2.07col.
  7. K. Frantzi, S. Ananiadou and H. Mima. (2000). Automatic recognition of multi-word terms: the C-value/NC-value method. In: C. Nikolau and C. Stephanidis (Eds.) International Journal on Digital Libraries, Vol. 3, No. 2., pp. 115-130.
  8. K. Frantzi, S. Ananiadou and J. Tsujii. (1998) The C-value/NC-value Method of Automatic Recognition of Multi-word Terms, In: ECDL '98 Proceedings of the Second European Conference on Research and Advanced Technology for Digital Libraries, pp. 585-604. ISBN 3-540-65101-2.
  9. L. Kozakov; Y. Park; T. Fin; Y. Drissi; Y. Doganata & T. Cofino. (2004). "Glossary extraction and utilization in the information search and delivery system for IBM Technical Support" (PDF). IBM Systems Journal. 43 (3): 546–563. doi:10.1147/sj.433.0546.
  10. Navigli R. and Velardi, P. Learning Domain Ontologies from Document Warehouses and Dedicated Web Sites. Computational Linguistics. 30 (2), MIT Press, 2004, pp. 151-179
  11. Oliver, A. and Vàzquez, M. TBXTools: A Free, Fast and Flexible Tool for Automatic Terminology Extraction. Proceedings of Recent Advances in Natural Language Processing (RANLP 2015), 2015, pp. 473–479
  12. Y. Park, R. J. Byrd, B. Boguraev. "Automatic glossary extraction: beyond terminology identification", International Conference On Computational Linguistics, Proceedings of the 19th international conference on Computational linguistics - Taipei, Taiwan, 2002.
  13. Sclano, F. and Velardi, P. TermExtractor: a Web Application to Learn the Shared Terminology of Emergent Web Communities. To appear in Proc. of the 3rd International Conference on Interoperability for Enterprise Software and Applications (I-ESA 2007). Funchal (Madeira Island), Portugal, March 28–30, 2007.
  14. P. Velardi, R. Navigli, P. D'Amadio. Mining the Web to Create Specialized Glossaries, IEEE Intelligent Systems, 23(5), IEEE Press, 2008, pp. 18-25.
  15. Wermter J. and Hahn U. Finding New terminology in Very large Corpora, in Proc. of K-CAP'05, October 2–5, 2005, Banff, Alberta, Canada
  16. Wong, W., Liu, W. & Bennamoun, M. (2007) Determining Termhood for Learning Domain Ontologies using Domain Prevalence and Tendency. In: 6th Australasian Conference on Data Mining (AusDM); Gold Coast. ISBN 978-1-920682-51-4.
  17. Wong, W., Liu, W. & Bennamoun, M. (2007) Determining Termhood for Learning Domain Ontologies in a Probabilistic Framework. In: 6th Australasian Conference on Data Mining (AusDM); Gold Coast. ISBN 978-1-920682-51-4.
  18. Alrehamy, Hassan H; Walker, Coral (2018). "SemCluster: Unsupervised Automatic Keyphrase Extraction Using Affinity Propagation". Advances in Computational Intelligence Systems. Advances in Intelligent Systems and Computing. Vol. 650. pp. 222–235. doi:10.1007/978-3-319-66939-7_19. ISBN 978-3-319-66938-0.
  19. Macken, Lieve; Lefever, Els; Hoste, Veronique (2013). "TExSIS: Bilingual terminology extraction from parallel corpora using chunk-based alignment". Terminology. 19 (1): 1–30. doi:10.1075/term.19.1.01mac. hdl:1854/LU-2128573.
  20. Sharoff, Serge; Rapp, Reinhard; Zweigenbaum, Pierre; Fung, Pascale (2013), Building and Using Comparable Corpora (PDF), Berlin: Springer-Verlag