Brown clustering

Brown clustering is a hard hierarchical agglomerative clustering method based on distributional information, proposed by Peter Brown, Vincent Della Pietra, Peter de Souza, Jenifer Lai, and Robert Mercer.[1] The method, which is based on bigram language models,[2] is typically applied to text, grouping words into clusters that are assumed to be semantically related by virtue of their having been embedded in similar contexts.

Introduction

In natural language processing, Brown clustering[3] or IBM clustering[4] is a form of hierarchical clustering of words based on the contexts in which they occur, proposed by Peter Brown, Vincent Della Pietra, Peter de Souza, Jenifer Lai, and Robert Mercer of IBM in the context of language modeling.[1] The intuition behind the method is that a class-based language model (also called a cluster n-gram model[4]), i.e. one where the probabilities of words are based on the classes (clusters) of the preceding words, can address the data sparsity problem inherent in language modeling. The method has been used successfully to improve parsing, domain adaptation, and named entity recognition.[5]

Jurafsky and Martin give the example of a flight reservation system that needs to estimate the likelihood of the bigram "to Shanghai", without having seen this in a training set. [4] The system can obtain a good estimate if it can cluster "Shanghai" with other city names, then make its estimate based on the likelihood of phrases such as "to London", "to Beijing" and "to Denver".
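As a hedged illustration (the cluster label CITY below is hypothetical, and the decomposition follows the class-based bigram model described under Technical definition), the unseen bigram is scored through the word's cluster:

    \Pr(\text{Shanghai} \mid \text{to}) \approx \Pr(\text{CITY} \mid c(\text{to})) \cdot \Pr(\text{Shanghai} \mid \text{CITY})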

Technical definition

Brown clustering groups items (i.e., word types) into classes, using a binary merging criterion based on the log-probability of a text under a class-based language model, i.e. a probability model that takes the clustering into account. Thus, average mutual information (AMI) is the optimization function, and merges are chosen such that they incur the least loss in global mutual information.

As a result, the output can be thought of not only as a binary tree[6] but perhaps more helpfully as a sequence of merges, terminating with one big class of all words. This model has the same general form as a hidden Markov model, reduced to bigram probabilities in Brown's solution to the problem. The mutual information between the classes of adjacent words is defined as

    I(C) = \sum_{c_1, c_2} p(c_1, c_2) \log \frac{p(c_1, c_2)}{p(c_1)\, p(c_2)}

where p(c_1, c_2) is the probability that a word in class c_1 is immediately followed by a word in class c_2, and p(c_1) and p(c_2) are the corresponding marginal class probabilities.

Finding the clustering that maximizes the likelihood of the data is computationally expensive. The approach proposed by Brown et al. is a greedy heuristic.
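A minimal sketch of this greedy procedure in Python follows. It is an illustration only: it recomputes AMI from scratch for every candidate merge, whereas the algorithm of Brown et al. restricts merging to a fixed window of classes and updates AMI incrementally; the function names brown_clusters and ami are ours, not from the paper.

    from collections import Counter
    from itertools import combinations
    from math import log

    def ami(bigrams, total, assign):
        # Average mutual information between the classes of adjacent words.
        joint, left, right = Counter(), Counter(), Counter()
        for (w1, w2), n in bigrams.items():
            c1, c2 = assign[w1], assign[w2]
            joint[c1, c2] += n
            left[c1] += n
            right[c2] += n
        return sum((n / total) * log(n * total / (left[c1] * right[c2]))
                   for (c1, c2), n in joint.items())

    def brown_clusters(tokens, n_classes):
        # Start with one class per word type; repeatedly merge the pair of
        # classes whose merge loses the least AMI, recording the merge sequence.
        bigrams = Counter(zip(tokens, tokens[1:]))
        total = sum(bigrams.values())
        assign = {w: i for i, w in enumerate(sorted(set(tokens)))}  # word -> class id
        merges = []
        while len(set(assign.values())) > n_classes:
            def merged(a, b):
                # Class assignment after merging class b into class a.
                return {w: (a if c == b else c) for w, c in assign.items()}
            a, b = max(combinations(set(assign.values()), 2),
                       key=lambda pair: ami(bigrams, total, merged(*pair)))
            assign = merged(a, b)
            merges.append((a, b))
        return assign, merges

    # Toy usage:
    # assign, merges = brown_clusters("the cat sat on the mat while the dog sat".split(), 3)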

The work also suggests use of Brown clusterings as a simplistic bigram class-based language model. Given cluster membership indicators c_i for the tokens w_i in a text, the probability of the word instance w_i given the preceding word w_{i-1} is given by:[4]

    \Pr(w_i \mid w_{i-1}) = \Pr(c_i \mid c_{i-1}) \, \Pr(w_i \mid c_i)

This has been criticised[citation needed] as being of limited utility, as it only ever predicts the most common word in any class, and so is restricted to |c| word types; this is reflected in the low relative reduction in perplexity found when using this model and Brown clustering.

When applied to Twitter data, for example, Brown clustering assigns a binary tree path to each word in unlabelled tweets during clustering.[7] The prefixes of these paths can then be used as new features for a tagger.[7]
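A hedged sketch of how such path-prefix features might be derived (the feature names and prefix lengths below are illustrative assumptions, not taken from the cited work):

    def brown_prefix_features(path, lengths=(2, 4, 6, 8)):
        # Turn a word's Brown-cluster bit-string path (e.g. '0110101') into
        # prefix features of several lengths, to be fed to a tagger.
        return {f"brown_prefix_{k}": path[:k] for k in lengths if len(path) >= k}

    # Hypothetical cluster path for one word:
    print(brown_prefix_features("01101011"))
    # {'brown_prefix_2': '01', 'brown_prefix_4': '0110',
    #  'brown_prefix_6': '011010', 'brown_prefix_8': '01101011'}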

Variations

Brown clustering has also been explored using trigrams. [8]

Brown clustering as proposed generates a fixed number of output classes. It is important to choose the correct number of classes, which is task-dependent. [9] The cluster memberships of words resulting from Brown clustering can be used as features in a variety of machine-learned natural language processing tasks. [3]

A generalization of the algorithm was published at the AAAI conference in 2016, including a succinct formal definition of the 1992 version as well as of the general form.[10] Core to this is the concept that the classes considered for merging do not necessarily represent the final number of classes output, and that altering the number of classes considered for merging directly affects the speed and quality of the final result.

There are no known theoretical guarantees on the greedy heuristic proposed by Brown et al. (as of February 2018). However, the clustering problem can be framed as estimating the parameters of the underlying class-based language model: it is possible to develop a consistent estimator for this model under mild assumptions. [11]

References

  1. Brown, Peter F.; de Souza, Peter V.; Mercer, Robert L.; Della Pietra, Vincent J.; Lai, Jenifer C. (1992). "Class-based n-gram models of natural language" (PDF). Computational Linguistics. 18 (4): 467–479. CiteSeerX 10.1.1.94.9004.
  2. Gómez, Manuel Montes y; Escalante, Hugo Jair; Segura, Alberto; Murillo, Juan de Dios (2016). Advances in Artificial Intelligence - IBERAMIA 2016: 15th Ibero-American Conference on AI, San José, Costa Rica, November 23-25, 2016, Proceedings. Cham, Switzerland: Springer. p. 177. ISBN 978-3-319-47954-5.
  3. Turian, Joseph; Ratinov, Lev; Bengio, Yoshua (2010). Word representations: a simple and general method for semi-supervised learning (PDF). Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. pp. 1533–9. CiteSeerX 10.1.1.714.8528.
  4. Jurafsky, Daniel; Martin, James H. (2009). Speech and Language Processing. Pearson Education International. pp. 145–6. ISBN 9780131873216.
  5. Rehm, Georg; Declerck, Thierry (2018). Language Technologies for the Challenges of the Digital Age: 27th International Conference, GSCL 2017, Berlin, Germany, September 13-14, 2017, Proceedings. Cham, Switzerland: Springer. p. 66. ISBN 978-3-319-73705-8.
  6. Sun, Maosong; Zhang, Min; Lin, Dekang; Wang, Haifeng (2013). Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data: 12th China National Conference, CCL 2013 and First International Symposium, NLP-NABD 2013, Suzhou, China, October 10-12, 2013, Proceedings. Heidelberg: Springer. p. 54. ISBN 978-3-642-41490-9.
  7. Gurevych, Iryna; Biemann, Chris; Zesch, Torsten (2013). Language Processing and Knowledge in the Web: 25th International Conference, GSCL 2013, Darmstadt, Germany, September 25-27, 2013, Proceedings. Heidelberg: Springer. p. 167. ISBN 978-3-642-40721-5.
  8. Martin, Sven; Liermann, Jorg; Ney, Hermann (1999). "Algorithms for bigram and trigram word clustering". Speech Communication. 24 (1): 19–37. CiteSeerX 10.1.1.53.2354. doi:10.1016/S0167-6393(97)00062-9.
  9. Derczynski, Leon; Chester, Sean; Bogh, Kenneth S. (2015). Tune your Brown clustering, please (PDF). Proceedings of the conference on Recent Advances in Natural Language Processing. CiteSeerX 10.1.1.713.5576.
  10. Derczynski, Leon; Chester, Sean (2016). Generalised Brown Clustering and Roll-Up Feature Generation. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. pp. 1533–9. CiteSeerX 10.1.1.714.8528.
  11. Stratos, Karl; Kim, Do-kyum; Collins, Michael; Hsu, Daniel (2014). A Spectral Algorithm for Learning Class-Based n-gram Models of Natural Language (PDF). Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence. pp. 762–771. CiteSeerX 10.1.1.643.6343.