Multiword expression

A multiword expression (MWE), also called a phraseme,[citation needed] is a lexeme-like unit made up of a sequence of two or more lexemes that has properties not predictable from the properties of the individual lexemes or their normal mode of combination. MWEs differ from lexemes in that the latter are required by many sources to have a meaning that cannot be derived from the meanings of their separate components. While an MWE must have some property that cannot be derived from the same property of its components, the property in question does not need to be meaning.

More succinctly, MWEs have been described as "idiosyncratic interpretations that cross word boundaries (or spaces)". [1]

A multiword expression can be a compound, a fragment of a sentence, or a whole sentence. The group of lexemes that make up an MWE can be continuous or discontinuous. It is not always possible to assign a part of speech to an MWE.

An MWE may be more or less frozen.

Example #1 in English: to kick the bucket, which means to die rather than to hit a bucket with one's foot. In this example, which is an endocentric compound, the part of speech can be determined: it is a verb. The MWE is frozen, in the sense that no variation is possible.

Example #2 in English: to throw <somebody> to the lions. The pattern <somebody> restricts the usage. The expression is half-frozen: a certain degree of variation is possible, but not everything is allowed. It is not possible, for instance, to say to throw somebody to the three lions. As in the previous example, the part of speech is a verb.

Example #3 in French: la moutarde <me, te, lui, nous, vous, leur> monte au nez (literally "the mustard rises to <my, your, his/her, our, your, their> nose", meaning that somebody is losing their temper). This MWE is more constrained than the previous example, because the variable slot is restricted to a closed set of clitic pronouns; tense variation is still allowed for the verb. No part of speech can be assigned to the whole expression, because it is a complete sentence.
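
The different degrees of frozenness can be made concrete with a small matching sketch. The snippet below is a minimal illustration in Python, not an established tool: the pattern set, the regular expressions, and the function name are invented for this example, and a real system would rely on lemmatisation and a much larger MWE lexicon.

```python
import re

# Toy MWE lexicon (invented for this illustration).  Fixed words are frozen;
# a slot is either open (any short noun phrase) or limited to a closed set.
MWE_PATTERNS = {
    "kick the bucket":                   r"\bkick(?:s|ed|ing)? the bucket\b",
    "throw <somebody> to the lions":     r"\bthr(?:ew|ow(?:s|n|ing)?) \w+(?: \w+){0,2} to the lions\b",
    "la moutarde <clitic> monte au nez": r"\bla moutarde (?:me|te|lui|nous|vous|leur) mont\w+ au nez\b",
}

def find_mwes(sentence: str) -> list[str]:
    """Return the names of the toy patterns that match the sentence."""
    return [name for name, pattern in MWE_PATTERNS.items()
            if re.search(pattern, sentence, flags=re.IGNORECASE)]

print(find_mwes("He kicked the bucket last year."))          # ['kick the bucket']
print(find_mwes("They threw the new intern to the lions."))  # ['throw <somebody> to the lions']
print(find_mwes("La moutarde lui monte au nez."))            # ['la moutarde <clitic> monte au nez']
```

The first pattern allows only tense variation, the second leaves its slot open, and the third constrains its slot to a closed set of clitic pronouns, mirroring examples #1 to #3 above.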

Machine translation (MT)

According to Sag et al. (2002), multiword expressions are, apart from disambiguation, one of the two key problems for natural language processing (NLP) and especially for machine translation (MT).

The number of MWEs in a speaker's lexicon is estimated to be of the same order of magnitude as the number of single words. Specialized domain vocabulary overwhelmingly consists of MWEs; hence, the proportion of MWEs in a system's lexicon will rise as it adds vocabulary for new domains, because each domain adds more MWEs than simplex words.

Problems

The greatest problem for translating MWEs might be the problem of idiomaticity, as many MWEs have an idiomatic sense to a greater or lesser degree.

For example, it is hard for a system to predict that an expression like kick the bucket has a meaning that is totally unrelated to the meanings of kick, the, and bucket, even though it appears to conform to the grammar of English verb phrases (VPs). Idioms cannot be translated literally, because in many cases an equivalent idiom does not exist in the target language, so attention has to be paid to syntactic and semantic (non-)equivalence.

Also, not every MWE in the source language corresponds to an MWE in the target language. For example, the German MWE ins Auge fassen can only be translated by the English single-word term envisage.

Approaches

The most promising approach to the challenge of translating MWEs is example-based MT, because each MWE can then be listed as an example together with its translation equivalent in the target language.
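
As a rough sketch of this idea, the snippet below stores source-language MWEs as whole units together with their target-language equivalents, which may themselves be MWEs or single words (as with ins Auge fassen → envisage). The dictionary, its second entry, and the function name are invented for this illustration.

```python
# Toy example-based translation memory for German-to-English MWEs.
# Each source expression is listed as a whole with its target equivalent,
# which may be an MWE or, as with "ins Auge fassen", a single word.
MWE_TRANSLATIONS_DE_EN = {
    "ins auge fassen": "envisage",            # example from the text above
    "den löffel abgeben": "kick the bucket",  # further illustrative entry
}

def translate_mwe(source_phrase: str) -> str | None:
    """Return the English equivalent of a listed German MWE, or None."""
    return MWE_TRANSLATIONS_DE_EN.get(source_phrase.lower())

print(translate_mwe("ins Auge fassen"))    # 'envisage'
print(translate_mwe("ins Wasser fallen"))  # None: not in the toy memory
```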

For rule-based MT it would be too difficult to define rules for translating MWEs, given the large number of different kinds of MWEs.

Nevertheless, an example-based MT system still has to apply different rules for the translation of continuous and discontinuous MWEs, since a discontinuous MWE is harder to identify in a sentence where other words are inserted between its components.
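
To make the continuous/discontinuous distinction concrete, the following sketch matches an MWE once as an unbroken token sequence and once as an in-order subsequence that tolerates a limited number of intervening words. The gap limit, the greedy matching strategy, and the function names are simplifications chosen for this illustration.

```python
def match_contiguous(tokens: list[str], mwe: list[str]) -> bool:
    """Match the MWE only as an unbroken run of tokens (continuous MWE)."""
    return any(tokens[i:i + len(mwe)] == mwe
               for i in range(len(tokens) - len(mwe) + 1))

def match_discontinuous(tokens: list[str], mwe: list[str], max_gap: int = 3) -> bool:
    """Greedily match the MWE as an in-order subsequence, allowing up to
    max_gap intervening tokens between consecutive components."""
    pos = 0       # index of the next MWE component to find
    last = None   # position where the previous component was found
    for i, token in enumerate(tokens):
        if token == mwe[pos] and (last is None or i - last - 1 <= max_gap):
            last, pos = i, pos + 1
            if pos == len(mwe):
                return True
    return False

gapped = "she threw the new intern to the lions".split()
print(match_contiguous(gapped, ["threw", "to", "the", "lions"]))     # False: components are separated
print(match_discontinuous(gapped, ["threw", "to", "the", "lions"]))  # True: the gap is bridged
```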

Related Research Articles

A lexeme is a unit of lexical meaning that underlies a set of words that are related through inflection. It is a basic abstract unit of meaning, a unit of morphological analysis in linguistics that roughly corresponds to a set of forms taken by a single root word. For example, in English, run, runs, ran and running are forms of the same lexeme, which can be represented as RUN.

Lexicology is the branch of linguistics that analyzes the lexicon of a specific language. A word is the smallest meaningful unit of a language that can stand on its own, and is made up of small components called morphemes and even smaller elements known as phonemes, or distinguishing sounds. Lexicology examines every feature of a word – including formation, spelling, origin, usage, and definition.

<span class="mw-page-title-main">Morphology (linguistics)</span> Study of words, their formation, and their relationships in a word

In linguistics, morphology is the study of words, how they are formed, and their relationship to other words in the same language. It analyzes the structure of words and parts of words such as stems, root words, prefixes, and suffixes. Morphology also looks at parts of speech, intonation and stress, and the ways context can change a word's pronunciation and meaning. Morphology differs from morphological typology, which is the classification of languages based on their use of words, and lexicology, which is the study of words and how they make up a language's vocabulary.

Natural language processing (NLP) is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.

In grammar, a noun is a word that represents a concrete or abstract thing, such as living creatures, places, actions, qualities, states of existence, and ideas. A noun may serve as an object or subject within a phrase, clause, or sentence.

In grammar, a phrase (called expression in some contexts) is a group of words or a single word acting as a grammatical unit. For instance, the English expression "the very happy squirrel" is a noun phrase which contains the adjective phrase "very happy". Phrases can consist of a single word or a complete sentence. In theoretical linguistics, phrases are often analyzed as units of syntactic structure such as a constituent. There is a difference between the common use of the term phrase and its technical use in linguistics. In common usage, a phrase is usually a group of words with some special idiomatic meaning or other significance, such as "all rights reserved", "economical with the truth", "kick the bucket", and the like. It may be a euphemism, a saying or proverb, a fixed expression, a figure of speech, etc. In linguistics, these are known as phrasemes.

An idiom is a phrase or expression that usually presents a figurative, non-literal meaning attached to the phrase. Some phrases which become figurative idioms, however, do retain the phrase's literal meaning. Categorized as formulaic language, an idiom's figurative meaning is different from the literal meaning. Idioms occur frequently in all languages; in English alone there are an estimated twenty-five thousand idiomatic expressions.

<span class="mw-page-title-main">Collocation</span> Frequent occurrence of words next to each other

In corpus linguistics, a collocation is a series of words or terms that co-occur more often than would be expected by chance. In phraseology, a collocation is a type of compositional phraseme, meaning that it can be understood from the words that make it up. This contrasts with an idiom, where the meaning of the whole cannot be inferred from its parts, and may be completely unrelated.
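
The phrase "more often than would be expected by chance" is usually made precise with an association measure such as pointwise mutual information (PMI). The sketch below computes bigram PMI over a toy text; the sentence and the bigram-only counting are simplifications for illustration.

```python
import math
from collections import Counter

# Toy text; a real collocation study would use a corpus of millions of words.
tokens = ("a hearty laugh followed the joke and another hearty laugh "
          "ended the evening while the laugh of the child was quiet").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(w1: str, w2: str) -> float:
    """Pointwise mutual information: log2( P(w1, w2) / (P(w1) * P(w2)) )."""
    p_joint = bigrams[(w1, w2)] / n_bi
    return math.log2(p_joint / ((unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)))

# "hearty laugh" co-occurs more often than its parts' frequencies predict,
# so its PMI is higher than that of the chance-level pair "the laugh".
print(pmi("hearty", "laugh") > pmi("the", "laugh"))  # True
```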

<span class="mw-page-title-main">Word</span> Basic element of language

A word is a basic element of language that carries meaning, can be used on its own, and is uninterruptible. Despite the fact that language speakers often have an intuitive grasp of what a word is, there is no consensus among linguists on its definition and numerous attempts to find specific criteria of the concept remain controversial. Different standards have been proposed, depending on the theoretical background and descriptive context; these do not converge on a single definition. Some specific definitions of the term "word" are employed to convey its different meanings at different levels of description, for example based on phonological, grammatical or orthographic basis. Others suggest that the concept is simply a convention used in everyday situations.

In linguistics, a grammatical category or grammatical feature is a property of items within the grammar of a language. Within each category there are two or more possible values, which are normally mutually exclusive. Frequently encountered grammatical categories include tense, number, and gender.

In lexicography, a lexical item is a single word, a part of a word, or a chain of words (catena) that forms the basic elements of a language's lexicon (≈ vocabulary). Examples are cat, traffic light, take care of, by the way, and it's raining cats and dogs. Lexical items can be generally understood to convey a single meaning, much as a lexeme, but are not limited to single words. Lexical items are like semes in that they are "natural units" translating between languages, or in learning a new language. In this last sense, it is sometimes said that language consists of grammaticalized lexis, and not lexicalized grammar. The entire store of lexical items in a language is called its lexis.

Studies that estimate and rank the most common words in English examine texts written in English. Perhaps the most comprehensive such analysis is one that was conducted against the Oxford English Corpus (OEC), a massive text corpus that is written in the English language.

A phraseme, also called a set phrase, fixed expression, or multiword expression, is a multi-word or multi-morphemic utterance whose components include at least one that is selectionally constrained or restricted by linguistic convention such that it is not freely chosen. In the most extreme cases, there are expressions such as X kicks the bucket ≈ ‘person X dies of natural causes, the speaker being flippant about X’s demise’ where the unit is selected as a whole to express a meaning that bears little or no relation to the meanings of its parts. All of the words in this expression are chosen restrictedly, as part of a chunk. At the other extreme, there are collocations such as stark naked, hearty laugh, or infinite patience where one of the words is chosen freely based on the meaning the speaker wishes to express while the choice of the other (intensifying) word is constrained by the conventions of the English language. Both kinds of expression are phrasemes, and can be contrasted with "free phrases", expressions where all of the members are chosen freely, based exclusively on their meaning and the message that the speaker wishes to communicate.

Meaning–text theory (MTT) is a theoretical linguistic framework, first put forward in Moscow by Aleksandr Žolkovskij and Igor Mel’čuk, for the construction of models of natural language. The theory provides a large and elaborate basis for linguistic description and, due to its formal character, lends itself particularly well to computer applications, including machine translation, phraseology, and lexicography.

Rule-based machine translation (RBMT) denotes machine translation systems based on linguistic information about the source and target languages, basically retrieved from dictionaries and grammars covering the main semantic, morphological, and syntactic regularities of each language. Given input sentences, an RBMT system transforms them into output sentences on the basis of morphological, syntactic, and semantic analysis of both the source and the target languages involved in a concrete translation task.

An explanatory combinatorial dictionary (ECD) is a type of monolingual dictionary designed to be part of a meaning-text linguistic model of a natural language. It is intended to be a complete record of the lexicon of a given language. As such, it identifies and describes, in separate entries, each of the language's lexemes and phrasemes. Among other things, each entry contains (1) a definition that incorporates a lexeme's semantic actants; (2) complete information on lexical co-occurrence; (3) an extensive set of examples. The ECD is a production dictionary: that is, it aims to provide all the information needed for a foreign learner or automaton to produce perfectly formed utterances of the language. Since the lexemes and phrasemes of a natural language number in the hundreds of thousands, a complete ECD, in paper form, would occupy the space of a large encyclopaedia. Such a work has yet to be achieved; while ECDs of Russian and French have been published, each describes less than one percent of the vocabulary of the respective languages.

The following outline is provided as an overview of and topical guide to natural-language processing.

The Integrational theory of language is the general theory of language that has been developed within the general linguistic approach of integrational linguistics.

Idiom, also called idiomaticness or idiomaticity, is the syntactical, grammatical, or structural form peculiar to a language. Idiom is the realized structure of a language, as opposed to possible but unrealized structures that could have developed to serve the same semantic functions but did not.

Ann Alicia Copestake is professor of computational linguistics and head of the Department of Computer Science and Technology at the University of Cambridge and a fellow of Wolfson College, Cambridge.

References

  1. Sag, Ivan A.; Baldwin, Timothy; Bond, Francis; Copestake, Ann; Flickinger, Dan (2002). "Multiword Expressions: A Pain in the Neck for NLP". Computational Linguistics and Intelligent Text Processing. doi:10.1007/3-540-45715-1_1.