K-means clustering

k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. k-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances. For instance, better Euclidean solutions can be found using k-medians and k-medoids.

The problem is computationally difficult (NP-hard); however, efficient heuristic algorithms converge quickly to a local optimum. These are usually similar to the expectation-maximization algorithm for mixtures of Gaussian distributions via an iterative refinement approach employed by both k-means and Gaussian mixture modeling. They both use cluster centers to model the data; however, k-means clustering tends to find clusters of comparable spatial extent, while the Gaussian mixture model allows clusters to have different shapes.

The unsupervised k-means algorithm has a loose relationship to the k-nearest neighbor classifier, a popular supervised machine learning technique for classification that is often confused with k-means due to the name. Applying the 1-nearest neighbor classifier to the cluster centers obtained by k-means classifies new data into the existing clusters. This is known as nearest centroid classifier or Rocchio algorithm.

Description

Given a set of observations (x1, x2, ..., xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S1, S2, ..., Sk} so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find:

    \arg\min_{\mathbf{S}} \sum_{i=1}^{k} \sum_{\mathbf{x} \in S_i} \left\| \mathbf{x} - \boldsymbol{\mu}_i \right\|^2 = \arg\min_{\mathbf{S}} \sum_{i=1}^{k} |S_i| \operatorname{Var}(S_i)

where μi is the mean (also called centroid) of the points in Si, i.e.

    \boldsymbol{\mu}_i = \frac{1}{|S_i|} \sum_{\mathbf{x} \in S_i} \mathbf{x},

|Si| is the size of Si, and ‖·‖ is the usual L2 norm. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:

    \arg\min_{\mathbf{S}} \sum_{i=1}^{k} \frac{1}{|S_i|} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2

The equivalence can be deduced from the identity |S_i| \sum_{\mathbf{x} \in S_i} \left\| \mathbf{x} - \boldsymbol{\mu}_i \right\|^2 = \tfrac{1}{2} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2. Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (between-cluster sum of squares, BCSS). [1] This deterministic relationship is also related to the law of total variance in probability theory.
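As a concrete illustration, the following sketch (illustrative helper name, assuming NumPy, with points an n x d array and labels a length-n array of cluster indices in {0, ..., k-1}) computes the WCSS of a given partition; k-means searches over partitions for the one minimizing this quantity.

    import numpy as np

    def within_cluster_sum_of_squares(points, labels, k):
        """Sum, over clusters, of squared distances of points to their cluster mean."""
        wcss = 0.0
        for i in range(k):
            cluster = points[labels == i]
            if len(cluster) == 0:
                continue  # an empty cluster contributes nothing
            centroid = cluster.mean(axis=0)
            wcss += ((cluster - centroid) ** 2).sum()
        return wcss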

History

The term "k-means" was first used by James MacQueen in 1967, [2] though the idea goes back to Hugo Steinhaus in 1956. [3] The standard algorithm was first proposed by Stuart Lloyd of Bell Labs in 1957 as a technique for pulse-code modulation, although it was not published as a journal article until 1982. [4] In 1965, Edward W. Forgy published essentially the same method, which is why it is sometimes referred to as the Lloyd–Forgy algorithm. [5]

Algorithms

Standard algorithm (naive k-means)

Convergence of k-means (animation)

The most common algorithm uses an iterative refinement technique. Due to its ubiquity, it is often called "the k-means algorithm"; it is also referred to as Lloyd's algorithm, particularly in the computer science community. It is sometimes also referred to as "naïve k-means", because there exist much faster alternatives. [6]

Given an initial set of k means m1(1), ..., mk(1) (see below), the algorithm proceeds by alternating between two steps: [7]

  1. Assignment step: Assign each observation to the cluster with the nearest mean: that with the least squared Euclidean distance. [8] (Mathematically, this means partitioning the observations according to the Voronoi diagram generated by the means.)

        S_i^{(t)} = \left\{ x_p : \left\| x_p - m_i^{(t)} \right\|^2 \le \left\| x_p - m_j^{(t)} \right\|^2 \ \forall j, 1 \le j \le k \right\},

    where each x_p is assigned to exactly one S^{(t)}, even if it could be assigned to two or more of them.
  2. Update step: Recalculate the means (centroids) for the observations assigned to each cluster:

        m_i^{(t+1)} = \frac{1}{\left| S_i^{(t)} \right|} \sum_{x_j \in S_i^{(t)}} x_j

The algorithm has converged when the assignments no longer change. The algorithm is not guaranteed to find the optimum. [9]
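The two steps can be sketched directly in code. Below is a minimal, illustrative implementation of the standard algorithm (not taken from any particular library), assuming NumPy, a data matrix X of shape (n, d), and Forgy-style initialization by sampling k observations:

    import numpy as np

    def lloyd_kmeans(X, k, max_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        # Initialization: pick k observations as the initial means (Forgy).
        means = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(max_iter):
            # Assignment step: each point joins the cluster of the nearest mean
            # (least squared Euclidean distance).
            sq_dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
            labels = sq_dists.argmin(axis=1)
            # Update step: recompute each mean from its assigned points,
            # keeping the old mean if a cluster happens to be empty.
            new_means = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else means[j]
                for j in range(k)
            ])
            if np.allclose(new_means, means):
                break  # means (hence assignments) no longer change: local optimum reached
            means = new_means
        return means, labels

Because only a local optimum is reached, such a routine is typically run several times from different initializations and the partition with the lowest WCSS is kept.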

The algorithm is often presented as assigning objects to the nearest cluster by distance. Using a distance function other than (squared) Euclidean distance may prevent the algorithm from converging. Various modifications of k-means, such as spherical k-means and k-medoids, have been proposed to allow the use of other distance measures.

Initialization methods

Commonly used initialization methods are Forgy and Random Partition. [10] The Forgy method randomly chooses k observations from the dataset and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. According to Hamerly et al., [10] the Random Partition method is generally preferable for algorithms such as the k-harmonic means and fuzzy k-means. For expectation maximization and standard k-means algorithms, the Forgy method of initialization is preferable. A comprehensive study by Celebi et al., [11] however, found that popular initialization methods such as Forgy, Random Partition, and Maximin often perform poorly, whereas Bradley and Fayyad's approach [12] performs "consistently" in "the best group" and k-means++ performs "generally well".
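For concreteness, the two initializations can be sketched as follows (illustrative code, assuming X is an n x d NumPy array and rng a NumPy random Generator; the Random Partition sketch assumes every cluster receives at least one point, which is very likely when n is much larger than k):

    import numpy as np

    def forgy_init(X, k, rng):
        # Choose k observations uniformly at random as the initial means.
        return X[rng.choice(len(X), size=k, replace=False)]

    def random_partition_init(X, k, rng):
        # Assign every observation to a random cluster, then take the centroid
        # of each cluster as the initial mean (tends to lie near the data center).
        labels = rng.integers(0, k, size=len(X))
        return np.array([X[labels == j].mean(axis=0) for j in range(k)])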

The algorithm does not guarantee convergence to the global optimum. The result may depend on the initial clusters. As the algorithm is usually fast, it is common to run it multiple times with different starting conditions. However, worst-case performance can be slow: in particular, certain point sets, even in two dimensions, take exponentially many iterations, that is 2^{Ω(n)}, to converge. [13] These point sets do not seem to arise in practice: this is corroborated by the fact that the smoothed running time of k-means is polynomial. [14]

The "assignment" step is referred to as the "expectation step", while the "update step" is a maximization step, making this algorithm a variant of the generalized expectation-maximization algorithm.

Complexity

Finding the optimal solution to the k-means clustering problem for observations in d dimensions is:

  - NP-hard in general Euclidean space (of d dimensions) even for two clusters, [15] [16]
  - NP-hard for a general number of clusters k even in the plane, [17]
  - exactly solvable in time O(n^{dk+1}), where n is the number of entities to be clustered, if k and d (the dimension) are fixed. [18]

Thus, a variety of heuristic algorithms such as Lloyd's algorithm given above are generally used.

The running time of Lloyd's algorithm (and most variants) is O(nkdi), [9] [19] where:

  - n is the number of d-dimensional vectors (to be clustered),
  - k is the number of clusters,
  - i is the number of iterations needed until convergence.

On data that does have a clustering structure, the number of iterations until convergence is often small, and results only improve slightly after the first dozen iterations. Lloyd's algorithm is therefore often considered to be of "linear" complexity in practice, although it is in the worst case superpolynomial when performed until convergence. [20]

Lloyd's algorithm is the standard approach for this problem. However, it spends a lot of processing time computing the distances between each of the k cluster centers and the n data points. Since points usually stay in the same clusters after a few iterations, much of this work is unnecessary, making the naïve implementation very inefficient. Some implementations use caching and the triangle inequality in order to create bounds and accelerate Lloyd's algorithm. [9] [22] [23] [24] [25]
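One simple bound of this kind follows from the triangle inequality: if a point x is currently assigned to center c and another center c' satisfies d(c, c') ≥ 2 d(x, c), then c' cannot be closer to x, so d(x, c') need not be computed. The sketch below (illustrative names, not a full implementation of the cited accelerated methods) applies this test inside the assignment step, assuming dist_to_assigned holds each point's exact distance to its currently assigned mean:

    import numpy as np

    def assignment_with_pruning(X, means, labels, dist_to_assigned):
        # Pairwise distances between centers, computed once per assignment pass.
        center_dist = np.sqrt(((means[:, None, :] - means[None, :, :]) ** 2).sum(-1))
        for p in range(len(X)):
            c = labels[p]
            u = dist_to_assigned[p]  # exact distance to the current center
            for j in range(len(means)):
                if j == c:
                    continue
                # Triangle inequality: if d(c, j) >= 2 d(x, c), center j cannot win.
                if center_dist[c, j] >= 2.0 * u:
                    continue
                d = np.sqrt(((X[p] - means[j]) ** 2).sum())
                if d < u:
                    c, u = j, d  # reassign and tighten the bound
            labels[p], dist_to_assigned[p] = c, u
        return labels, dist_to_assigned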

Variations

Hartigan–Wong method

Hartigan and Wong's method [9] provides a variation of the k-means algorithm which progresses towards a local minimum of the minimum sum-of-squares problem with different solution updates. The method is a local search that iteratively attempts to relocate a sample into a different cluster as long as this process improves the objective function. When no sample can be relocated into a different cluster with an improvement of the objective, the method stops (in a local minimum). In a similar way as the classical k-means, the approach remains a heuristic since it does not guarantee that the final solution is a global optimum.

Let φ(Sj) be the individual cost of Sj, defined by

    \varphi(S_j) = \sum_{x \in S_j} \left\| x - \mu_j \right\|^2,

with μj the center of the cluster.

Assignment step
Hartigan and Wong's method starts by partitioning the points into random clusters {Sj}, j ∈ {1, ..., k}.
Update step
Next it determines the n and m for which the following function reaches a maximum:

    \Delta(m, n, x) = \varphi(S_n) + \varphi(S_m) - \varphi(S_n \setminus \{x\}) - \varphi(S_m \cup \{x\})

For the x, n, m that reach this maximum, x moves from the cluster Sn to the cluster Sm.
Termination
The algorithm terminates once Δ(m, n, x) is less than zero for all x, n, m.

Different move acceptance strategies can be used. In a first-improvement strategy, any improving relocation can be applied, whereas in a best-improvement strategy, all possible relocations are iteratively tested and only the best is applied at each iteration. The former approach favors speed, whereas the latter approach generally favors solution quality at the expense of additional computational time. The function Δ used to calculate the result of a relocation can also be efficiently evaluated by using the equality [35]

    \Delta(x, n, m) = \frac{|S_n|}{|S_n| - 1} \left\| \mu_n - x \right\|^2 - \frac{|S_m|}{|S_m| + 1} \left\| \mu_m - x \right\|^2 .
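Assuming the equality above, the effect of relocating a single point can be evaluated in constant time from the two cluster means and sizes, as in this hypothetical helper (x, mu_n and mu_m are NumPy vectors; cluster n is the point's current cluster):

    import numpy as np

    def relocation_gain(x, mu_n, size_n, mu_m, size_m):
        # Decrease of the objective when moving x from cluster n to cluster m;
        # a positive value means the relocation improves the solution.
        removed = size_n / (size_n - 1) * ((mu_n - x) ** 2).sum()
        added = size_m / (size_m + 1) * ((mu_m - x) ** 2).sum()
        return removed - added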

Global optimization and meta-heuristics

The classical k-means algorithm and its variations are known to only converge to local minima of the minimum-sum-of-squares clustering problem defined as

    \arg\min_{\mathbf{S}} \sum_{i=1}^{k} \sum_{\mathbf{x} \in S_i} \left\| \mathbf{x} - \boldsymbol{\mu}_i \right\|^2 .

Many studies have attempted to improve the convergence behavior of the algorithm and maximize the chances of attaining the global optimum (or at least, local minima of better quality). Initialization and restart techniques discussed in the previous sections are one alternative to find better solutions. More recently, global optimization algorithms based on branch-and-bound and semidefinite programming have produced provably optimal solutions for datasets with up to 4,177 entities and 20,531 features. [36] As expected, due to the NP-hardness of the underlying optimization problem, the computational time of optimal algorithms for k-means quickly increases beyond this size. Optimal solutions for small- and medium-scale instances still remain valuable as a benchmark tool, to evaluate the quality of other heuristics. To find high-quality local minima within a controlled computational time but without optimality guarantees, other works have explored metaheuristics and other global optimization techniques, e.g., based on incremental approaches and convex optimization, [37] random swaps [38] (i.e., iterated local search), variable neighborhood search [39] and genetic algorithms. [40] [41] It is indeed known that finding better local minima of the minimum sum-of-squares clustering problem can make the difference between failure and success in recovering cluster structures in feature spaces of high dimension. [41]

Discussion

A typical example of the k-means convergence to a local minimum. In this example, the result of k-means clustering (the right figure) contradicts the obvious cluster structure of the data set. The small circles are the data points, the four ray stars are the centroids (means). The initial configuration is on the left figure. The algorithm converges after five iterations presented on the figures, from the left to the right. The illustration was prepared with the Mirkes Java applet. [42]
k-means clustering result for the Iris flower data set and actual species visualized using ELKI. Cluster means are marked using larger, semi-transparent symbols.
k-means clustering vs. EM clustering on an artificial dataset ("mouse"). The tendency of k-means to produce equal-sized clusters leads to bad results here, while EM benefits from the Gaussian distributions with different radius present in the data set.

Three key features of k-means that make it efficient are often regarded as its biggest drawbacks:

  - Euclidean distance is used as a metric, and variance is used as a measure of cluster scatter.
  - The number of clusters k is an input parameter: an inappropriate choice of k may yield poor results. That is why, when performing k-means, it is important to run diagnostic checks for determining the number of clusters in the data set.
  - Convergence to a local minimum may produce counterintuitive ("wrong") results (see the example above).

A key limitation of k-means is its cluster model. The concept is based on spherical clusters that are separable so that the mean converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When for example applying k-means with a value of k = 3 onto the well-known Iris flower data set, the result often fails to separate the three Iris species contained in the data set. With k = 2, the two visible clusters (one containing two species) will be discovered, whereas with k = 3 one of the two visible clusters will be split into two even parts. In fact, k = 2 is more appropriate for this data set, despite the data set containing 3 classes. As with any other clustering algorithm, the k-means result makes assumptions that the data satisfy certain criteria. It works well on some data sets, and fails on others.
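This behaviour can be reproduced in a few lines; the sketch below (assuming scikit-learn is available) clusters the Iris measurements with k = 2 and k = 3 and compares each result with the actual species using the adjusted Rand index. Exact scores vary with the random initialization.

    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.metrics import adjusted_rand_score

    X, species = load_iris(return_X_y=True)
    for k in (2, 3):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(k, adjusted_rand_score(species, labels))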

The result of k-means can be seen as the Voronoi cells of the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits, as can be seen in the "mouse" example. The Gaussian models used by the expectation-maximization algorithm (arguably a generalization of k-means) are more flexible by having both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than k-means, as well as correlated clusters (not in this example). On the other hand, EM requires the optimization of a larger number of free parameters and poses some methodological issues due to vanishing clusters or badly-conditioned covariance matrices. k-means is closely related to nonparametric Bayesian modeling. [43]

Applications

k-means clustering is rather easy to apply to even large data sets, particularly when using heuristics such as Lloyd's algorithm. It has been successfully used in market segmentation, computer vision, and astronomy among many other domains. It often is used as a preprocessing step for other algorithms, for example to find a starting configuration.

Vector quantization

Example image with only red and green channel (for illustration purposes)
Vector quantization of the colors present in the image above into Voronoi cells using k-means

k-means originates from signal processing, and still finds use in this domain. For example, in computer graphics, color quantization is the task of reducing the color palette of an image to a fixed number of colors k. The k-means algorithm can easily be used for this task and produces competitive results. A use case for this approach is image segmentation. Other uses of vector quantization include non-random sampling, as k-means can easily be used to choose k different but prototypical objects from a large data set for further analysis.
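A hedged sketch of such a color quantization (assuming scikit-learn and an RGB image already loaded as an H x W x 3 uint8 array; quantize_colors is an illustrative name): each pixel is treated as a point in RGB space, k-means finds k representative colors, and every pixel is replaced by its nearest representative.

    import numpy as np
    from sklearn.cluster import KMeans

    def quantize_colors(image, k=16):
        pixels = image.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
        palette = km.cluster_centers_.astype(np.uint8)   # the k representative colors
        return palette[km.labels_].reshape(image.shape)  # map each pixel to its color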

Cluster analysis

In cluster analysis, the k-means algorithm can be used to partition the input data set into k partitions (clusters).

However, the pure k-means algorithm is not very flexible, and as such is of limited use (except for when vector quantization as above is actually the desired use case). In particular, the parameter k is known to be hard to choose (as discussed above) when not given by external constraints. Another limitation is that it cannot be used with arbitrary distance functions or on non-numerical data. For these use cases, many other algorithms are superior.

Feature learning

k-means clustering has been used as a feature learning (or dictionary learning) step, in either (semi-)supervised learning or unsupervised learning. [44] The basic approach is first to train a k-means clustering representation, using the input training data (which need not be labelled). Then, to project any input datum into the new feature space, an "encoding" function, such as the thresholded matrix-product of the datum with the centroid locations, computes the distance from the datum to each centroid, or simply an indicator function for the nearest centroid, [44] [45] or some smooth transformation of the distance. [46] Alternatively, transforming the sample-cluster distance through a Gaussian RBF, obtains the hidden layer of a radial basis function network. [47]
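As an illustration, one such smooth encoding (in the style of the "triangle" activation of Coates and Ng; the function name and array shapes are assumptions) maps each datum to its vector of centroid distances, keeping only those that beat the datum's mean distance and zeroing the rest:

    import numpy as np

    def kmeans_features(X, centroids):
        # Distance from every datum (rows of X) to every learned centroid.
        d = np.sqrt(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1))
        mu = d.mean(axis=1, keepdims=True)
        # Closer-than-average centroids yield positive activations; others are zero.
        return np.maximum(0.0, mu - d)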

This use of k-means has been successfully combined with simple, linear classifiers for semi-supervised learning in NLP (specifically for named entity recognition) [48] and in computer vision. On an object recognition task, it was found to exhibit comparable performance with more sophisticated feature learning approaches such as autoencoders and restricted Boltzmann machines. [46] However, it generally requires more data, for equivalent performance, because each data point only contributes to one "feature". [44]

Relation to other algorithms

Gaussian mixture model

The slow "standard algorithm" for k-means clustering, and its associated expectation-maximization algorithm, is a special case of a Gaussian mixture model, specifically, the limiting case when fixing all covariances to be diagonal, equal and have infinitesimal small variance. [49] :850 Instead of small variances, a hard cluster assignment can also be used to show another equivalence of k-means clustering to a special case of "hard" Gaussian mixture modelling. [50] :354,11.4.2.5 This does not mean that it is efficient to use Gaussian mixture modelling to compute k-means, but just that there is a theoretical relationship, and that Gaussian mixture modelling can be interpreted as a generalization of k-means; on the contrary, it has been suggested to use k-means clustering to find starting points for Gaussian mixture modelling on difficult data. [49] :849

k-SVD

Another generalization of the k-means algorithm is the k-SVD algorithm, which estimates data points as a sparse linear combination of "codebook vectors". k-means corresponds to the special case of using a single codebook vector, with a weight of 1. [51]

Principal component analysis

The relaxed solution of k-means clustering, specified by the cluster indicators, is given by principal component analysis (PCA). [52] [53] The intuition is that k-means describes spherically shaped (ball-like) clusters. If the data has 2 clusters, the line connecting the two centroids is the best 1-dimensional projection direction, which is also the first PCA direction. Cutting the line at the center of mass separates the clusters (this is the continuous relaxation of the discrete cluster indicator). If the data has three clusters, the 2-dimensional plane spanned by the three cluster centroids is the best 2-D projection. This plane is also defined by the first two PCA dimensions. Well-separated clusters are effectively modelled by ball-shaped clusters and thus discovered by k-means. Non-ball-shaped clusters are hard to separate when they are close. For example, two half-moon shaped clusters intertwined in space do not separate well when projected onto the PCA subspace. k-means should not be expected to do well on this data. [54] It is straightforward to produce counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions. [55]

Mean shift clustering

Basic mean shift clustering algorithms maintain a set of data points the same size as the input data set. Initially, this set is copied from the input set. Then this set is iteratively replaced by the mean of those points in the set that are within a given distance of that point. By contrast, k-means restricts this updated set to k points, usually far fewer than the number of points in the input data set, and replaces each point in this set by the mean of all points in the input set that are closer to that point than any other (e.g. within the Voronoi partition of each updating point). A mean shift algorithm that is similar to k-means, called likelihood mean shift, replaces the set of points undergoing replacement by the mean of all points in the input set that are within a given distance of the changing set. [56] One of the advantages of mean shift over k-means is that the number of clusters is not pre-specified, because mean shift is likely to find only a few clusters if only a small number exist. However, mean shift can be much slower than k-means, and still requires selection of a bandwidth parameter. Mean shift has soft variants.

Independent component analysis

Under sparsity assumptions and when input data is pre-processed with the whitening transformation, k-means produces the solution to the linear independent component analysis (ICA) task. This aids in explaining the successful application of k-means to feature learning. [57]

Bilateral filtering

k-means implicitly assumes that the ordering of the input data set does not matter. The bilateral filter is similar to k-means and mean shift in that it maintains a set of data points that are iteratively replaced by means. However, the bilateral filter restricts the calculation of the (kernel weighted) mean to include only points that are close in the ordering of the input data. [56] This makes it applicable to problems such as image denoising, where the spatial arrangement of pixels in an image is of critical importance.

Similar problems

The set of squared error minimizing cluster functions also includes the k-medoids algorithm, an approach which forces the center point of each cluster to be one of the actual points, i.e., it uses medoids in place of centroids.

Software implementations

Different implementations of the algorithm exhibit performance differences, with the fastest on a test data set finishing in 10 seconds, the slowest taking 25,988 seconds (~7 hours). [1] The differences can be attributed to implementation quality, language and compiler differences, different termination criteria and precision levels, and the use of indexes for acceleration.

Free Software/Open Source

The following implementations are available under Free/Open Source Software licenses, with publicly available source code.

Proprietary

The following implementations are available under proprietary license terms, and may not have publicly available source code.

See also

Related Research Articles

<span class="mw-page-title-main">Principal component analysis</span> Method of data analysis

Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.

<span class="mw-page-title-main">Expectation–maximization algorithm</span> Iterative method for finding maximum likelihood estimates in statistical models

In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of Gaussians, or to solve the multiple linear regression problem.

In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that at most one subcomponent is Gaussian and that the subcomponents are statistically independent from each other. ICA is a special case of blind source separation. A common example application is the "cocktail party problem" of listening in on one person's speech in a noisy room.

<span class="mw-page-title-main">Cluster analysis</span> Grouping a set of objects by similarity

Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning.

Belief propagation, also known as sum–product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node, conditional on any observed nodes. Belief propagation is commonly used in artificial intelligence and information theory, and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability.

Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model. As typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes:

  1. To provide an analytical approximation to the posterior probability of the unobserved variables, in order to do statistical inference over these variables.
  2. To derive a lower bound for the marginal likelihood of the observed data. This is typically used for performing model selection, the general idea being that a higher marginal likelihood for a given model indicates a better fit of the data by that model and hence a greater probability that the model in question was the one that generated the data.

Medoids are representative objects of a data set or a cluster within a data set whose sum of dissimilarities to all the objects in the cluster is minimal. Medoids are similar in concept to means or centroids, but medoids are always restricted to be members of the data set. Medoids are most commonly used on data when a mean or centroid cannot be defined, such as graphs. They are also used in contexts where the centroid is not representative of the dataset, such as in images, 3-D trajectories and gene expression. They are also of interest when a representative is wanted using some distance other than squared Euclidean distance.

Fuzzy clustering is a form of clustering in which each data point can belong to more than one cluster.

In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map: in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines is infinite dimensional but only requires a finite dimensional matrix from user-input according to the Representer theorem. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.

The k-medoids problem is a clustering problem similar to k-means. The name was coined by Leonard Kaufman and Peter J. Rousseeuw with their PAM algorithm. Both the k-means and k-medoids algorithms are partitional and attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the k-means algorithm, k-medoids chooses actual data points as centers, and thereby allows for greater interpretability of the cluster centers than in k-means, where the center of a cluster is not necessarily one of the input data points. Furthermore, k-medoids can be used with arbitrary dissimilarity measures, whereas k-means generally requires Euclidean distance for efficient solutions. Because k-medoids minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances, it is more robust to noise and outliers than k-means.

Mean shift is a non-parametric feature-space mathematical analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm. Application domains include cluster analysis in computer vision and image processing.

In statistics, k-medians clustering is a cluster analysis algorithm. It is a variation of k-means clustering where instead of calculating the mean for each cluster to determine its centroid, one instead calculates the median. This has the effect of minimizing error over all clusters with respect to the 1-norm distance metric, as opposed to the squared 2-norm distance metric.

BIRCH is an unsupervised data mining algorithm used to perform hierarchical clustering over particularly large data-sets. With modifications it can also be used to accelerate k-means clustering and Gaussian mixture modeling with the expectation–maximization algorithm. An advantage of BIRCH is its ability to incrementally and dynamically cluster incoming, multi-dimensional metric data points in an attempt to produce the best quality clustering for a given set of resources. In most cases, BIRCH only requires a single scan of the database.

Silhouette refers to a method of interpretation and validation of consistency within clusters of data. The technique provides a succinct graphical representation of how well each object has been classified. It was proposed by Belgian statistician Peter Rousseeuw in 1987.

CURE is an efficient data clustering algorithm for large databases. Compared with K-means clustering it is more robust to outliers and able to identify clusters having non-spherical shapes and size variances.

In computer science, data stream clustering is defined as the clustering of data that arrive continuously such as telephone records, multimedia data, financial transactions etc. Data stream clustering is usually studied as a streaming algorithm and the objective is, given a sequence of points, to construct a good clustering of the stream, using a small amount of memory and time.

In data mining and machine learning, the k q-flats algorithm is an iterative method which aims to partition m observations into k clusters where each cluster is close to a q-flat, where q is a given integer.

In statistics and data mining, affinity propagation (AP) is a clustering algorithm based on the concept of "message passing" between data points. Unlike clustering algorithms such as k-means or k-medoids, affinity propagation does not require the number of clusters to be determined or estimated before running the algorithm. Similar to k-medoids, affinity propagation finds "exemplars," members of the input set that are representative of clusters.

Kernel methods are a well-established tool to analyze the relationship between input data and the corresponding output of a function. Kernels encapsulate the properties of functions in a computationally efficient way and allow algorithms to easily swap functions of varying complexity.

The Calinski–Harabasz index (CHI), also known as the Variance Ratio Criterion (VRC), is a metric for evaluating clustering algorithms, introduced by Caliński Tadeusz and Jerzy Harabasz in 1974. It is an internal evaluation metric, where the assessment of the clustering quality is based solely on the dataset and the clustering results, and not on external, ground-truth labels.

References

  1. Kriegel, Hans-Peter; Schubert, Erich; Zimek, Arthur (2016). "The (black) art of runtime evaluation: Are we comparing algorithms or implementations?". Knowledge and Information Systems. 52 (2): 341–378. doi:10.1007/s10115-016-1004-2. ISSN 0219-1377. S2CID 40772241.
  2. MacQueen, J. B. (1967). Some Methods for classification and Analysis of Multivariate Observations. Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability. Vol. 1. University of California Press. pp. 281–297. MR   0214227. Zbl   0214.46201 . Retrieved 2009-04-07.
  3. Steinhaus, Hugo (1957). "Sur la division des corps matériels en parties". Bull. Acad. Polon. Sci. (in French). 4 (12): 801–804. MR   0090073. Zbl   0079.16403.
  4. Lloyd, Stuart P. (1957). "Least square quantization in PCM". Bell Telephone Laboratories Paper. Published in journal much later: Lloyd, Stuart P. (1982). "Least squares quantization in PCM" (PDF). IEEE Transactions on Information Theory . 28 (2): 129–137. CiteSeerX   10.1.1.131.1338 . doi:10.1109/TIT.1982.1056489. S2CID   10833328 . Retrieved 2009-04-15.
  5. Forgy, Edward W. (1965). "Cluster analysis of multivariate data: efficiency versus interpretability of classifications". Biometrics. 21 (3): 768–769. JSTOR   2528559.
  6. Pelleg, Dan; Moore, Andrew (1999). "Accelerating exact k -means algorithms with geometric reasoning". Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining. San Diego, California, United States: ACM Press. pp. 277–281. doi:10.1145/312129.312248. ISBN   9781581131437. S2CID   13907420.
  7. MacKay, David (2003). "Chapter 20. An Example Inference Task: Clustering" (PDF). Information Theory, Inference and Learning Algorithms. Cambridge University Press. pp. 284–292. ISBN   978-0-521-64298-9. MR   2012999.
  8. Since the square root is a monotone function, this also is the minimum Euclidean distance assignment.
  9. Hartigan, J. A.; Wong, M. A. (1979). "Algorithm AS 136: A k-Means Clustering Algorithm". Journal of the Royal Statistical Society, Series C. 28 (1): 100–108. JSTOR 2346830.
  10. Hamerly, Greg; Elkan, Charles (2002). "Alternatives to the k-means algorithm that find better clusterings" (PDF). Proceedings of the eleventh international conference on Information and knowledge management (CIKM).
  11. Celebi, M. E.; Kingravi, H. A.; Vela, P. A. (2013). "A comparative study of efficient initialization methods for the k-means clustering algorithm". Expert Systems with Applications . 40 (1): 200–210. arXiv: 1209.1960 . doi:10.1016/j.eswa.2012.07.021. S2CID   6954668.
  12. Bradley, Paul S.; Fayyad, Usama M. (1998). "Refining Initial Points for k-Means Clustering". Proceedings of the Fifteenth International Conference on Machine Learning.
  13. Vattani, A. (2011). "k-means requires exponentially many iterations even in the plane" (PDF). Discrete and Computational Geometry . 45 (4): 596–616. doi: 10.1007/s00454-011-9340-1 . S2CID   42683406.
  14. Arthur, David; Manthey, B.; Roeglin, H. (2009). "k-means has polynomial smoothed complexity". Proceedings of the 50th Symposium on Foundations of Computer Science (FOCS). arXiv:0904.1113.
  15. Aloise, D.; Deshpande, A.; Hansen, P.; Popat, P. (2009). "NP-hardness of Euclidean sum-of-squares clustering". Machine Learning . 75 (2): 245–249. doi: 10.1007/s10994-009-5103-0 .
  16. Dasgupta, S.; Freund, Y. (July 2009). "Random Projection Trees for Vector Quantization". IEEE Transactions on Information Theory. 55 (7): 3229–42. arXiv: 0805.1390 . doi:10.1109/TIT.2009.2021326. S2CID   666114.
  17. Mahajan, Meena; Nimbhorkar, Prajakta; Varadarajan, Kasturi (2009). "The Planar k-Means Problem is NP-Hard". WALCOM: Algorithms and Computation. Lecture Notes in Computer Science. Vol. 5431. pp. 274–285. doi:10.1007/978-3-642-00202-1_24. ISBN   978-3-642-00201-4.
  18. Inaba, M.; Katoh, N.; Imai, H. (1994). Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering. Proceedings of 10th ACM Symposium on Computational Geometry. pp. 332–9. doi: 10.1145/177424.178042 .
  19. Manning, Christopher D.; Raghavan, Prabhakar; Schütze, Hinrich (2008). Introduction to information retrieval. Cambridge University Press. ISBN   978-0521865715. OCLC   190786122.
  20. Arthur, David; Vassilvitskii, Sergei (2006-01-01). "How slow is the k-means method?". Proceedings of the twenty-second annual symposium on Computational geometry. SCG '06. ACM. pp. 144–153. doi:10.1145/1137856.1137880. ISBN 978-1595933409. S2CID 3084311.
  21. Bhowmick, Abhishek (2009). "A theoretical analysis of Lloyd's algorithm for k-means clustering" (PDF). Archived from the original (PDF) on 2015-12-08.
  22. Phillips, Steven J. (2002). "Acceleration of K-Means and Related Clustering Algorithms". In Mount, David M.; Stein, Clifford (eds.). Acceleration of k-Means and Related Clustering Algorithms. Lecture Notes in Computer Science. Vol. 2409. Springer. pp. 166–177. doi:10.1007/3-540-45643-0_13. ISBN 978-3-540-43977-6.
  23. Elkan, Charles (2003). "Using the triangle inequality to accelerate k-means" (PDF). Proceedings of the Twentieth International Conference on Machine Learning (ICML).
  24. Hamerly, Greg (2010). "Making k-means even faster". Proceedings of the 2010 SIAM International Conference on Data Mining. pp. 130–140. doi:10.1137/1.9781611972801.12. ISBN 978-0-89871-703-7.
  25. Hamerly, Greg; Drake, Jonathan (2015). "Accelerating Lloyd's Algorithm for k-Means Clustering". Partitional Clustering Algorithms. pp. 41–78. doi:10.1007/978-3-319-09259-1_2. ISBN 978-3-319-09258-4.
  26. Kanungo, Tapas; Mount, David M.; Netanyahu, Nathan S.; Piatko, Christine D.; Silverman, Ruth; Wu, Angela Y. (2002). "An efficient k-means clustering algorithm: Analysis and implementation" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 24 (7): 881–892. doi:10.1109/TPAMI.2002.1017616. S2CID   12003435 . Retrieved 2009-04-24.
  27. Drake, Jonathan (2012). "Accelerated k-means with adaptive distance bounds" (PDF). The 5th NIPS Workshop on Optimization for Machine Learning, OPT2012.
  28. Dhillon, I. S.; Modha, D. M. (2001). "Concept decompositions for large sparse text data using clustering". Machine Learning. 42 (1): 143–175. doi: 10.1023/a:1007612920971 .
  29. Steinbach, M.; Karypis, G.; Kumar, V. (2000). "A comparison of document clustering techniques". KDD Workshop on Text Mining. 400 (1): 525–526.
  30. Pelleg, D.; Moore, A. W. (June 2000). "X-means: Extending k-means with Efficient Estimation of the Number of Clusters". In ICML, Vol. 1.
  31. Hamerly, Greg; Elkan, Charles (2004). "Learning the k in k-means" (PDF). Advances in Neural Information Processing Systems . 16: 281.
  32. Amorim, R. C.; Mirkin, B. (2012). "Minkowski Metric, Feature Weighting and Anomalous Cluster Initialisation in k-Means Clustering". Pattern Recognition. 45 (3): 1061–1075. doi:10.1016/j.patcog.2011.08.012.
  33. Amorim, R. C.; Hennig, C. (2015). "Recovering the number of clusters in data sets with noise features using feature rescaling factors". Information Sciences. 324: 126–145. arXiv: 1602.06989 . doi:10.1016/j.ins.2015.06.039. S2CID   315803.
  34. Sculley, David (2010). "Web-scale k-means clustering". Proceedings of the 19th international conference on World Wide Web. ACM. pp. 1177–1178. Retrieved 2016-12-21.
  35. Telgarsky, Matus. "Hartigan's Method: k-means Clustering without Voronoi" (PDF).
  36. Piccialli, Veronica; Sudoso, Antonio M.; Wiegele, Angelika (2022-03-28). "SOS-SDP: An Exact Solver for Minimum Sum-of-Squares Clustering". INFORMS Journal on Computing. 34 (4): 2144–2162. arXiv: 2104.11542 . doi:10.1287/ijoc.2022.1166. ISSN   1091-9856. S2CID   233388043.
  37. Bagirov, A. M.; Taheri, S.; Ugon, J. (2016). "Nonsmooth DC programming approach to the minimum sum-of-squares clustering problems". Pattern Recognition. 53: 12–24. Bibcode:2016PatRe..53...12B. doi:10.1016/j.patcog.2015.11.011.
  38. Fränti, Pasi (2018). "Efficiency of random swap clustering". Journal of Big Data. 5 (1): 1–21. doi: 10.1186/s40537-018-0122-y .
  39. Hansen, P.; Mladenovic, N. (2001). "J-Means: A new local search heuristic for minimum sum of squares clustering". Pattern Recognition. 34 (2): 405–413. Bibcode:2001PatRe..34..405H. doi:10.1016/S0031-3203(99)00216-2.
  40. Krishna, K.; Murty, M. N. (1999). "Genetic k-means algorithm". IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics. 29 (3): 433–439. doi:10.1109/3477.764879. PMID   18252317.
  41. Gribel, Daniel; Vidal, Thibaut (2019). "HG-means: A scalable hybrid metaheuristic for minimum sum-of-squares clustering". Pattern Recognition. 88: 569–583. arXiv:1804.09813. doi:10.1016/j.patcog.2018.12.022. S2CID 13746584.
  42. Mirkes, E. M. "K-means and k-medoids applet" . Retrieved 2 January 2016.
  43. Kulis, Brian; Jordan, Michael I. (2012-06-26). "Revisiting k-means: new algorithms via Bayesian nonparametrics" (PDF). ICML. pp. 1131–1138. ISBN   9781450312851.
  44. Coates, Adam; Ng, Andrew Y. (2012). "Learning feature representations with k-means" (PDF). In Montavon, G.; Orr, G. B.; Müller, K.-R. (eds.). Neural Networks: Tricks of the Trade. Springer.
  45. Csurka, Gabriella; Dance, Christopher C.; Fan, Lixin; Willamowski, Jutta; Bray, Cédric (2004). Visual categorization with bags of keypoints (PDF). ECCV Workshop on Statistical Learning in Computer Vision.
  46. Coates, Adam; Lee, Honglak; Ng, Andrew Y. (2011). An analysis of single-layer networks in unsupervised feature learning (PDF). International Conference on Artificial Intelligence and Statistics (AISTATS). Archived from the original (PDF) on 2013-05-10.
  47. Schwenker, Friedhelm; Kestler, Hans A.; Palm, Günther (2001). "Three learning phases for radial-basis-function networks". Neural Networks. 14 (4–5): 439–458. CiteSeerX   10.1.1.109.312 . doi:10.1016/s0893-6080(01)00027-2. PMID   11411631.
  48. Lin, Dekang; Wu, Xiaoyun (2009). Phrase clustering for discriminative learning (PDF). Annual Meeting of the ACL and IJCNLP. pp. 1030–1038.
  49. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 16.1. Gaussian Mixture Models and k-Means Clustering". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York (NY): Cambridge University Press. ISBN 978-0-521-88068-8.
  50. Kevin P. Murphy (2012). Machine learning : a probabilistic perspective. Cambridge, Mass.: MIT Press. ISBN   978-0-262-30524-2. OCLC   810414751.
  51. Aharon, Michal; Elad, Michael; Bruckstein, Alfred (2006). "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation" (PDF). IEEE Transactions on Signal Processing. 54 (11): 4311. Bibcode:2006ITSP...54.4311A. doi:10.1109/TSP.2006.881199. S2CID   7477309.
  52. Zha, Hongyuan; Ding, Chris; Gu, Ming; He, Xiaofeng; Simon, Horst D. (December 2001). "Spectral Relaxation for k-means Clustering" (PDF). Neural Information Processing Systems Vol.14 (NIPS 2001): 1057–1064.
  53. Ding, Chris; He, Xiaofeng (July 2004). "K-means Clustering via Principal Component Analysis" (PDF). Proceedings of International Conference on Machine Learning (ICML 2004): 225–232.
  54. Drineas, Petros; Frieze, Alan M.; Kannan, Ravi; Vempala, Santosh; Vinay, Vishwanathan (2004). "Clustering large graphs via the singular value decomposition" (PDF). Machine Learning. 56 (1–3): 9–33. doi: 10.1023/b:mach.0000033113.59016.96 . S2CID   5892850 . Retrieved 2012-08-02.
  55. Cohen, Michael B.; Elder, Sam; Musco, Cameron; Musco, Christopher; Persu, Madalina (2014). "Dimensionality reduction for k-means clustering and low rank approximation (Appendix B)". arXiv: 1410.6801 [cs.DS].
  56. Little, Max A.; Jones, Nick S. (2011). "Generalized Methods and Solvers for Piecewise Constant Signals: Part I" (PDF). Proceedings of the Royal Society A. 467 (2135): 3088–3114. Bibcode:2011RSPSA.467.3088L. doi:10.1098/rspa.2010.0671. PMC 3191861. PMID 22003312.
  57. Vinnikov, Alon; Shalev-Shwartz, Shai (2014). "K-means Recovers ICA Filters when Independent Components are Sparse" (PDF). Proceedings of the International Conference on Machine Learning (ICML 2014).