In corpus linguistics a key word is a word which occurs in a text more often than we would expect it to occur by chance alone. [1] Key words are calculated by carrying out a statistical test (e.g., log-likelihood or chi-squared) which compares the word frequencies in a text against their expected frequencies derived from a much larger corpus, which acts as a reference for general language use. Keyness is then the quality a word or phrase has of being "key" in its context. Combinations of nouns with parts of speech that human readers would be unlikely to notice, such as prepositions, time adverbs, and pronouns, can be a relevant part of keyness. Even individual pronouns can constitute keywords. [2]
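The comparison of observed against expected frequencies can be sketched with Dunning's log-likelihood (G2) statistic, a standard keyness measure. The corpora below are invented toy examples; in practice the reference corpus would be far larger than the target text.

```python
from collections import Counter
import math

def keyness_ll(word, target_counts, ref_counts):
    """Log-likelihood (G2) keyness of `word` in a target corpus
    relative to a reference corpus."""
    a = target_counts[word]            # observed frequency in target
    b = ref_counts[word]               # observed frequency in reference
    c = sum(target_counts.values())    # target corpus size
    d = sum(ref_counts.values())       # reference corpus size
    # expected frequencies under the null hypothesis of no difference
    e1 = c * (a + b) / (c + d)
    e2 = d * (a + b) / (c + d)
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

# toy target text and toy reference corpus (invented for illustration)
target = Counter("the whale the whale harpoon sea the whale".split())
reference = Counter("the cat sat on the mat and the dog slept".split())
```

Higher scores indicate stronger keyness: here "whale" scores well above "the", which occurs at roughly the same rate in both corpora.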
Compare this with collocation, the quality linking two words or phrases usually assumed to be within a given span of each other. Keyness is a textual feature, not a language feature (so a word has keyness in a certain textual context but may well not have keyness in other contexts, whereas a node and collocate are often found together in texts of the same genre, so collocation is to a considerable extent a language phenomenon). The set of keywords found in a given text share keyness; they are co-key. Words typically found in the same texts as a key word are called associates.
In politics, sociology and critical discourse analysis, the key reference for keywords was Raymond Williams (1976), [3] but Williams was resolutely Marxist, and Critical Discourse Analysis has tended to perpetuate this political meaning of the term: keywords are part of ideologies and studying them is part of social criticism. Cultural studies has tended to develop along similar lines. This stands in stark contrast to present day linguistics which is wary of political analysis, and has tended to aspire to non-political objectivity. The development of technology, new techniques and methodology relating to massive corpora have all consolidated this trend.
There are, however, numerous political dimensions that come into play when keywords are studied in relation to cultures, societies and their histories. The Lublin Ethnolinguistics School studies Polish and European keywords in this fashion. Anna Wierzbicka (1997), [4] probably the best known cultural linguist writing in English today, studies languages as parts of cultures evolving in society and history. And it becomes impossible to ignore politics when keywords migrate from one culture to another. Underhill and Gianninoto [5] demonstrate the way political terms like "citizen" and "individual" are integrated into the Chinese worldview over the course of the 19th and 20th centuries. They argue that this is part of a complex readjustment of conceptual clusters related to "the people". Keywords like "citizen" generate various translations in Chinese, and are part of an ongoing adaptation to global concepts of individual rights and responsibilities. Understanding keywords in this light becomes crucial for understanding how the politics of China evolves as Communism emerges and as the free market and citizens' rights develop. Underhill and Gianninoto argue that this is part of the complex ways ideological worldviews interact with the language as an ongoing means of perceiving and understanding the world.
Barbara Cassin studies keywords in a more traditional manner, striving to define the words specific to individual cultures, in order to demonstrate that many of our keywords are partially "untranslatable" into their "equivalents". The Greeks may need four words to cover all the meanings English-speakers have in mind when speaking of "love". Similarly, the French find that "liberté" suffices, while English-speakers attribute different associations to "liberty" and "freedom": "freedom of speech" or "freedom of movement", but "the Statue of Liberty". [6]
Keywords are identified by software that compares a word-list of the text with a word-list based on a larger reference corpus. Software such as WordSmith lists keywords and phrases and allows plotting their occurrences as they appear in texts.
Corpus linguistics is the study of a language as that language is expressed in its text corpus, its body of "real world" text. Corpus linguistics proposes that a reliable analysis of a language is more feasible with corpora collected in the field—the natural context ("realia") of that language—with minimal experimental interference. The large collections of text allow linguists to run quantitative analyses on linguistic concepts, otherwise harder to quantify.
In linguistics and natural language processing, a corpus or text corpus is a dataset consisting of natively digital and older, digitized, language resources, either annotated or unannotated.
Ethnolinguistics is an area of anthropological linguistics that studies the relationship between a language and the cultural behavior of the people who speak that language.
In corpus linguistics, a collocation is a series of words or terms that co-occur more often than would be expected by chance. In phraseology, a collocation is a type of compositional phraseme, meaning that it can be understood from the words that make it up. This contrasts with an idiom, where the meaning of the whole cannot be inferred from its parts, and may be completely unrelated.
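Co-occurrence within a span can be sketched with a simple windowed count and pointwise mutual information (PMI), one common association measure. The toy text and the span of ±2 tokens below are illustrative assumptions; real collocation tools offer several association measures.

```python
from collections import Counter
import math

def collocates(tokens, node, span=2):
    """Count words co-occurring with `node` within +/- `span` tokens."""
    hits = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            hits.update(tokens[max(0, i - span):i] + tokens[i + 1:i + 1 + span])
    return hits

def pmi(tokens, node, collocate, span=2):
    """Pointwise mutual information of the (node, collocate) pair,
    roughly estimated against overall token frequencies."""
    pair = collocates(tokens, node, span)[collocate]
    if pair == 0:
        return float("-inf")
    freq = Counter(tokens)
    return math.log2(pair * len(tokens) / (freq[node] * freq[collocate]))

# invented example: "strong" collocates with "tea", not with "engine"
text = "strong tea and strong coffee but powerful engine not powerful tea".split()
```

Pairs that co-occur more often than their individual frequencies predict receive higher PMI scores.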
Stop words are the words in a stop list which are filtered out before or after processing of natural language data (text) because they are deemed insignificant. There is no single universal list of stop words used by all natural language processing tools, nor any agreed upon rules for identifying stop words, and indeed not all tools even use such a list. Therefore, any group of words can be chosen as the stop words for a given purpose. The "general trend in [information retrieval] systems over time has been from standard use of quite large stop lists to very small stop lists to no stop list whatsoever".
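A minimal sketch of stop-word filtering follows; the stop list here is an arbitrary example, since, as noted above, no universal list exists and any group of words can serve.

```python
# An assumed minimal stop list for illustration; real tools differ
# in which words, if any, they filter out.
STOP_WORDS = {"the", "a", "an", "of", "in", "and", "to", "is"}

def remove_stop_words(tokens):
    """Drop tokens that appear in the stop list (case-insensitively)."""
    return [t for t in tokens if t.lower() not in STOP_WORDS]

tokens = "The quality of a word in a text".split()
```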
Semantic prosody, also discourse prosody, describes the way in which certain seemingly neutral words can be perceived with positive or negative associations through frequent occurrence with particular collocations. The term was coined by analogy with linguistic prosody and popularised by Bill Louw.
A concordance is an alphabetical list of the principal words used in a book or body of work, listing every instance of each word with its immediate context. Historically, concordances have been compiled only for works of special importance, such as the Vedas, Bible, Qur'an or the works of Shakespeare, James Joyce or classical Latin and Greek authors, because of the time, difficulty, and expense involved in creating a concordance in the pre-computer era.
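Computed concordances are typically displayed in keyword-in-context (KWIC) format, with each occurrence of the word centred in its immediate context. A minimal sketch, using an invented sentence and an arbitrarily chosen context width:

```python
def concordance(tokens, node, width=3):
    """A simple keyword-in-context (KWIC) listing: every occurrence of
    `node` with up to `width` tokens of context on each side."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left} [{node}] {right}")
    return lines

tokens = "we saw the whale and the whale saw us".split()
```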
In linguistics, the term lexis designates the complete set of all possible words in a language, or a particular subset of words that are grouped by some specific linguistic criteria. For example, the general term English lexis refers to all words of the English language, while more specific term English religious lexis refers to a particular subset within English lexis, encompassing only words that are semantically related to the religious sphere of life.
In linguistics, semantic analysis is the process of relating syntactic structures, from the levels of words, phrases, clauses, sentences and paragraphs to the level of the writing as a whole, to their language-independent meanings. It also involves removing features specific to particular linguistic and cultural contexts, to the extent that such a project is possible. The elements of idiom and figurative speech, being cultural, are often also converted into relatively invariant meanings in semantic analysis. Semantics, although related to pragmatics, is distinct in that semantics deals with word or sentence choice in any given context, while pragmatics considers the unique or particular meaning derived from context or tone. In other words, semantics is about universally coded meaning, while pragmatics concerns the meaning encoded in words as it is then interpreted by an audience.
Collostructional analysis is a family of methods developed by Stefan Th. Gries and Anatol Stefanowitsch. Collostructional analysis aims at measuring the degree of attraction or repulsion that words exhibit to constructions, where the notion of construction has so far been that of Goldberg's construction grammar.
The British National Corpus (BNC) is a 100-million-word text corpus of samples of written and spoken English from a wide range of sources. The corpus covers British English of the late 20th century from a wide variety of genres, with the intention that it be a representative sample of the spoken and written British English of that time. It is widely used in corpus linguistics.
In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval.
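One common statistical approach represents each word by the words it co-occurs with and compares those vectors; words appearing in similar contexts are treated as similar in meaning. A minimal distributional-similarity sketch on an invented text (real systems use much larger corpora and weighting schemes):

```python
from collections import Counter
import math

def cooccurrence_vector(tokens, word, span=2):
    """Represent `word` by the counts of words within +/- `span` of it."""
    vec = Counter()
    for i, t in enumerate(tokens):
        if t == word:
            vec.update(tokens[max(0, i - span):i] + tokens[i + 1:i + 1 + span])
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# "tea" and "coffee" share contexts; "book" does not
tokens = "drink hot tea . drink hot coffee . read a book".split()
```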
The Longman Dictionary of Contemporary English (LDOCE), first published by Longman in 1978, is an advanced learner's dictionary, providing definitions using a restricted vocabulary, helping non-native English speakers understand meanings easily. It is available in four configurations.
Corpus-assisted discourse studies is related historically and methodologically to the discipline of corpus linguistics. The principal endeavor of corpus-assisted discourse studies is the investigation, and comparison of features of particular discourse types, integrating into the analysis the techniques and tools developed within corpus linguistics. These include the compilation of specialised corpora and analyses of word and word-cluster frequency lists, comparative keyword lists and, above all, concordances.
A word list is a list of a language's lexicon within some given text corpus, serving the purpose of vocabulary acquisition. A lexicon sorted by frequency "provides a rational basis for making sure that learners get the best return for their vocabulary learning effort", but is mainly intended for course writers, not directly for learners. Frequency lists are also made for lexicographical purposes, serving as a sort of checklist to ensure that common words are not left out. Some major pitfalls are the corpus content, the corpus register, and the definition of "word". While word counting is a thousand years old, with large-scale analyses still done by hand into the mid-20th century, electronic processing of large corpora such as movie subtitles has accelerated the research field.
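Building a frequency-sorted word list is straightforward once "word" is defined; the sketch below assumes the simplest definition (lowercased whitespace-separated tokens), which, as noted above, is itself one of the pitfalls.

```python
from collections import Counter

def frequency_list(text):
    """Frequency-sorted word list. The definition of "word" used here
    (lowercased whitespace-separated token) is a design choice that
    directly affects the counts."""
    return Counter(text.lower().split()).most_common()

sample = "The whale surfaced and the whale dived near the sea"
```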
Michael Hoey was a British linguist and Baines Professor of English Language. He lectured in applied linguistics in over 40 countries.
Norbert Schmitt is an American applied linguist and Emeritus Professor of Applied Linguistics at the University of Nottingham in the United Kingdom. He is known for his work on second-language vocabulary acquisition and second-language vocabulary teaching. He has published numerous books and papers on vocabulary acquisition.
The following outline is provided as an overview of and topical guide to natural-language processing:
Sketch Engine is a corpus manager and text analysis software developed by Lexical Computing CZ s.r.o. since 2003. Its purpose is to enable people studying language behaviour to search large text collections according to complex and linguistically motivated queries. Sketch Engine gained its name after one of the key features, word sketches: one-page, automatic, corpus-derived summaries of a word's grammatical and collocational behaviour. Currently, it supports and provides corpora in 90+ languages.
CorCenCC or the National Corpus of Contemporary Welsh is a language resource for Welsh speakers, Welsh learners, Welsh language researchers, and anyone who is interested in the Welsh language. CorCenCC is a freely accessible collection of multiple language samples, gathered from real-life communication, and presented in the searchable online CorCenCC text corpus. The corpus is accompanied by an online teaching and learning toolkit – Y Tiwtiadur – which draws directly on the data from the corpus to provide resources for Welsh language learning at all ages and levels.