Hierarchical clustering

In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis that seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two categories:

  1. Agglomerative: a "bottom-up" approach in which each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
  2. Divisive: a "top-down" approach in which all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering [1] are usually presented in a dendrogram.

Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances. On the other hand, except for the special case of single-linkage distance, none of the algorithms (except exhaustive search in $O(2^n)$) can be guaranteed to find the optimum solution.
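
The observation that only a distance matrix is needed can be made concrete with a minimal sketch (the 4x4 matrix and the use of SciPy are illustrative assumptions, not part of the article):

    # A minimal sketch: agglomerative clustering from a precomputed distance matrix,
    # without ever touching the raw observations. The matrix values are hypothetical.
    import numpy as np
    from scipy.spatial.distance import squareform
    from scipy.cluster.hierarchy import linkage

    # Symmetric pairwise distances between four observations a, b, c, d.
    D = np.array([
        [0.0, 0.5, 2.0, 2.2],
        [0.5, 0.0, 1.8, 2.1],
        [2.0, 1.8, 0.0, 0.4],
        [2.2, 2.1, 0.4, 0.0],
    ])

    # SciPy expects the condensed (upper-triangular) form of the matrix.
    Z = linkage(squareform(D), method="average")
    print(Z)  # each row: the two clusters merged and the distance at which they merge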

Complexity

The standard algorithm for hierarchical agglomerative clustering (HAC) has a time complexity of $O(n^3)$ and requires $\Omega(n^2)$ memory, which makes it too slow for even medium data sets. However, for some special cases, optimal efficient agglomerative methods (of complexity $O(n^2)$) are known: SLINK [2] for single-linkage and CLINK [3] for complete-linkage clustering. With a heap, the runtime of the general case can be reduced to $O(n^2 \log n)$, an improvement on the aforementioned bound of $O(n^3)$, at the cost of further increasing the memory requirements. In many cases, the memory overheads of this approach are too large to make it practically usable. Methods exist which use quadtrees that demonstrate $O(n^2)$ total running time with $O(n)$ space. [4]

Divisive clustering with an exhaustive search is $O(2^n)$, but it is common to use faster heuristics to choose splits, such as k-means.
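
As a small illustration of such a heuristic, the sketch below performs one divisive step by splitting a cluster with 2-means; the data and the choice of SciPy's kmeans2 are my own assumptions, not taken from the article:

    # One divisive step: split the current cluster into two children using k-means (k = 2)
    # instead of searching over all possible bipartitions.
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (10, 2)),   # one hypothetical blob
                   rng.normal(3.0, 0.3, (10, 2))])  # a second, well-separated blob

    centroids, labels = kmeans2(X, 2, minit="++")
    left, right = X[labels == 0], X[labels == 1]
    print(len(left), len(right))  # sizes of the two child clusters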

Cluster Linkage

In order to decide which clusters should be combined (for agglomerative), or where a cluster should be split (for divisive), a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by use of an appropriate distance d, such as the Euclidean distance, between single observations of the data set, and a linkage criterion, which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets. The choice of metric as well as linkage can have a major impact on the result of the clustering, where the lower level metric determines which objects are most similar, whereas the linkage criterion influences the shape of the clusters. For example, complete-linkage tends to produce more spherical clusters than single-linkage.

The linkage criterion determines the distance between sets of observations as a function of the pairwise distances between observations.

Some commonly used linkage criteria between two sets of observations A and B and a distance d are: [5] [6]

Maximum or complete-linkage clustering: $\max \{\, d(a,b) : a \in A,\ b \in B \,\}$
Minimum or single-linkage clustering: $\min \{\, d(a,b) : a \in A,\ b \in B \,\}$
Unweighted average linkage clustering (or UPGMA): $\frac{1}{|A| \cdot |B|} \sum_{a \in A} \sum_{b \in B} d(a,b)$
Weighted average linkage clustering (or WPGMA): $d(i \cup j, k) = \frac{d(i,k) + d(j,k)}{2}$
Centroid linkage clustering, or UPGMC: $\lVert \mu_A - \mu_B \rVert$, where $\mu_A$ and $\mu_B$ are the centroids of A resp. B.
Median linkage clustering, or WPGMC: $\lVert m_A - m_B \rVert$, where $m_{A \cup B} = \frac{m_A + m_B}{2}$
Versatile linkage clustering [7]: $\left( \frac{1}{|A| \cdot |B|} \sum_{a \in A} \sum_{b \in B} d(a,b)^p \right)^{1/p}$, $p \neq 0$
Ward linkage, [8] Minimum Increase of Sum of Squares (MISSQ) [9]: $\frac{|A| \cdot |B|}{|A| + |B|} \, \lVert \mu_A - \mu_B \rVert^2$
Minimum Error Sum of Squares (MNSSQ) [9]: $\sum_{x \in A \cup B} \lVert x - \mu_{A \cup B} \rVert^2$
Minimum Increase in Variance (MIVAR) [9]: $\frac{1}{|A \cup B|} \sum_{x \in A \cup B} \lVert x - \mu_{A \cup B} \rVert^2 - \frac{1}{|A|} \sum_{x \in A} \lVert x - \mu_A \rVert^2 - \frac{1}{|B|} \sum_{x \in B} \lVert x - \mu_B \rVert^2$
Minimum Variance (MNVAR) [9]: $\frac{1}{|A \cup B|} \sum_{x \in A \cup B} \lVert x - \mu_{A \cup B} \rVert^2$
Mini-Max linkage [10]: $\min_{x \in A \cup B} \max_{y \in A \cup B} d(x,y)$
Hausdorff linkage [11]: $\max \left\{ \max_{a \in A} \min_{b \in B} d(a,b),\ \max_{b \in B} \min_{a \in A} d(a,b) \right\}$ (the Hausdorff distance between A and B)
Minimum Sum Medoid linkage [12]: $\min_{m} \sum_{y \in A \cup B} d(m,y)$ such that m is the medoid of the resulting cluster
Minimum Sum Increase Medoid linkage [12]: $\min_{m} \sum_{y \in A \cup B} d(m,y) - \sum_{a \in A} d(m_A, a) - \sum_{b \in B} d(m_B, b)$
Medoid linkage [13] [14]: $d(m_A, m_B)$, where $m_A$, $m_B$ are the medoids of the previous clusters
Minimum energy clustering: $\frac{2}{|A| \cdot |B|} \sum_{a \in A} \sum_{b \in B} \lVert a - b \rVert_2 - \frac{1}{|A|^2} \sum_{a, a' \in A} \lVert a - a' \rVert_2 - \frac{1}{|B|^2} \sum_{b, b' \in B} \lVert b - b' \rVert_2$
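
The first few of these criteria can be evaluated directly from the pairwise distances; the short sketch below does so for single, complete, and unweighted average linkage (the two point sets are hypothetical, and the use of SciPy's cdist is my own choice):

    # Evaluate three linkage criteria between two sets of observations A and B
    # directly from the matrix of pairwise distances d(a, b).
    import numpy as np
    from scipy.spatial.distance import cdist

    A = np.array([[0.0, 0.0], [0.0, 1.0]])
    B = np.array([[3.0, 0.0], [3.5, 1.0], [4.0, 0.0]])

    pairwise = cdist(A, B)          # all d(a, b) with a in A, b in B
    single   = pairwise.min()       # minimum or single-linkage
    complete = pairwise.max()       # maximum or complete-linkage
    upgma    = pairwise.mean()      # unweighted average linkage (UPGMA)
    print(single, complete, upgma)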

Some of these can only be computed recursively (WPGMA, WPGMC); for many others a recursive computation with the Lance-Williams equations is more efficient, while for others (Mini-Max, Hausdorff, Medoid) the distances have to be computed with the slower full formula. Other linkage criteria have been proposed as well.
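
A minimal sketch of such a recursive update is shown below (an illustrative implementation with my own naming, not code from the article): given the distances of two clusters i and j to a third cluster k, the Lance-Williams formula yields the distance from the merged cluster to k without revisiting the raw data.

    # Lance-Williams update:
    # d(i u j, k) = a_i*d(i,k) + a_j*d(j,k) + b*d(i,j) + g*|d(i,k) - d(j,k)|
    def lance_williams(d_ik, d_jk, d_ij, n_i, n_j, method="average"):
        if method == "single":        # minimum linkage
            ai = aj = 0.5; b = 0.0; g = -0.5
        elif method == "complete":    # maximum linkage
            ai = aj = 0.5; b = 0.0; g = 0.5
        elif method == "average":     # UPGMA
            ai = n_i / (n_i + n_j); aj = n_j / (n_i + n_j); b = 0.0; g = 0.0
        else:
            raise ValueError(method)
        return ai * d_ik + aj * d_jk + b * d_ij + g * abs(d_ik - d_jk)

    # For single linkage the update reproduces min(d_ik, d_jk); for complete, max(d_ik, d_jk).
    print(lance_williams(2.0, 3.0, 1.0, 1, 1, method="single"))    # 2.0
    print(lance_williams(2.0, 3.0, 1.0, 1, 1, method="complete"))  # 3.0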

Agglomerative clustering example

Figure: Raw data

For example, suppose this data is to be clustered, and the Euclidean distance is the distance metric.

The hierarchical clustering dendrogram would be:

Figure: Traditional dendrogram representation

Cutting the tree at a given height will give a partitioning clustering at a selected precision. In this example, cutting after the second row (from the top) of the dendrogram will yield clusters {a} {b c} {d e} {f}. Cutting after the third row will yield clusters {a} {b c} {d e f}, which is a coarser clustering, with a smaller number of larger clusters.
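
The effect of cutting at different heights can be reproduced with a short sketch (the coordinates below are hypothetical stand-ins for the points a to f, and the use of SciPy is my own choice):

    # Cut a single-linkage dendrogram at two different heights to obtain
    # a finer and a coarser flat partition of six points.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    points = np.array([[0.0, 0], [4.0, 0], [4.4, 0], [9.0, 0], [9.4, 0], [13.0, 0]])
    Z = linkage(points, method="single")

    print(fcluster(Z, t=1.0, criterion="distance"))  # fine cut, e.g. {a} {b c} {d e} {f}
    print(fcluster(Z, t=4.2, criterion="distance"))  # coarser cut with fewer, larger clusters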

This method builds the hierarchy from the individual elements by progressively merging clusters. In our example, we have six elements {a} {b} {c} {d} {e} and {f}. The first step is to determine which elements to merge in a cluster. Usually, we want to take the two closest elements, according to the chosen distance.

Optionally, one can also construct a distance matrix at this stage, where the number in the i-th row j-th column is the distance between the i-th and j-th elements. Then, as clustering progresses, rows and columns are merged as the clusters are merged and the distances updated. This is a common way to implement this type of clustering, and has the benefit of caching distances between clusters. A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below).
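
The sketch below illustrates this bookkeeping for the single-linkage case: a naive agglomerative loop that repeatedly merges the closest pair of clusters and folds the corresponding rows and columns of the cached distance matrix together (an illustrative implementation, not the article's algorithm):

    import numpy as np

    def naive_single_linkage(D):
        """Agglomerate clusters from a square distance matrix D, caching cluster distances."""
        D = D.astype(float).copy()
        np.fill_diagonal(D, np.inf)
        clusters = [[i] for i in range(len(D))]
        merges = []
        while len(clusters) > 1:
            i, j = np.unravel_index(np.argmin(D), D.shape)
            i, j = min(i, j), max(i, j)
            merges.append((clusters[i], clusters[j], float(D[i, j])))
            # Fold row/column j into i, keeping the minimum distance (single linkage).
            D[i, :] = np.minimum(D[i, :], D[j, :])
            D[:, i] = D[i, :]
            D[i, i] = np.inf
            D = np.delete(np.delete(D, j, axis=0), j, axis=1)
            clusters[i] = clusters[i] + clusters[j]
            del clusters[j]
        return merges

    D = np.array([[0, 2, 9], [2, 0, 5], [9, 5, 0]])
    print(naive_single_linkage(D))  # [([0], [1], 2.0), ([0, 1], [2], 5.0)]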

Suppose we have merged the two closest elements b and c; we now have the following clusters {a}, {b, c}, {d}, {e} and {f}, and want to merge them further. To do that, we need to take the distance between {a} and {b c}, and therefore define the distance between two clusters. Usually the distance between two clusters A and B is given by one of the linkage criteria described above, such as single-linkage, complete-linkage, or average linkage.

In case of tied minimum distances, a pair is randomly chosen, which can generate several structurally different dendrograms. Alternatively, all tied pairs may be joined at the same time, generating a unique dendrogram. [19]

One can always decide to stop clustering when there is a sufficiently small number of clusters (number criterion). Some linkages may also guarantee that agglomeration occurs at a greater distance between clusters than the previous agglomeration, and then one can stop clustering when the clusters are too far apart to be merged (distance criterion). However, this is not the case for, e.g., centroid linkage, where the so-called reversals [20] (inversions, departures from ultrametricity) may occur.
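
Both stopping rules are easy to express in code; the following sketch (the data and SciPy usage are my own illustrative choices) requests a fixed number of clusters and, alternatively, a maximum merge distance:

    # Number criterion vs. distance criterion for extracting flat clusters.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 2))
    Z = linkage(X, method="complete")

    print(fcluster(Z, t=3, criterion="maxclust"))     # stop when 3 clusters remain
    print(fcluster(Z, t=2.5, criterion="distance"))   # stop merging beyond distance 2.5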

Divisive clustering

The basic principle of divisive clustering was published as the DIANA (DIvisive ANAlysis clustering) algorithm. [21] Initially, all data is in the same cluster, and the largest cluster is split until every object is separate. Because there exist $O(2^n)$ ways of splitting each cluster, heuristics are needed. DIANA chooses the object with the maximum average dissimilarity and then moves to this new cluster all objects that are more similar to the new cluster than to the remainder.

Informally, DIANA is not so much a process of "dividing" as it is of "hollowing out": in each iteration, an existing cluster (e.g. the initial cluster of the entire dataset) is chosen to form a new cluster inside of it. Objects progressively move to this nested cluster and hollow out the existing cluster. Eventually, all that is left inside a cluster are the nested clusters that grew there, without any loose objects of its own.

Formally, DIANA operates in the following steps:

  1. Let $C_0 = \{1, \dots, n\}$ be the set of all object indices and $\mathcal{C} = \{C_0\}$ the set of all formed clusters so far.
  2. Iterate the following until $|\mathcal{C}| = n$:
    1. Find the current cluster with 2 or more objects that has the largest diameter: $C_* = \arg\max_{C \in \mathcal{C}} \max_{i_1, i_2 \in C} \delta(i_1, i_2)$
    2. Find the object in this cluster with the most dissimilarity to the rest of the cluster: $i^* = \arg\max_{i \in C_*} \frac{1}{|C_*| - 1} \sum_{j \in C_* \setminus \{i\}} \delta(i, j)$
    3. Pop $i^*$ from its old cluster $C_*$ and put it into a new splinter group $C_\mathrm{new} = \{i^*\}$.
    4. As long as $C_*$ isn't empty, keep migrating objects from $C_*$ to add them to $C_\mathrm{new}$. To choose which objects to migrate, don't just consider dissimilarity to $C_*$, but also adjust for dissimilarity to the splinter group: let $D(i) = \frac{1}{|C_*| - 1} \sum_{j \in C_* \setminus \{i\}} \delta(i, j) - \frac{1}{|C_\mathrm{new}|} \sum_{j \in C_\mathrm{new}} \delta(i, j)$, then either stop iterating when $\max_{i \in C_*} D(i) < 0$, or migrate $i^* = \arg\max_{i \in C_*} D(i)$.
    5. Add $C_\mathrm{new}$ to $\mathcal{C}$.

Intuitively, $D(i)$ above measures how strongly an object wants to leave its current cluster, but it is attenuated when the object wouldn't fit in the splinter group either. Such objects will likely start their own splinter group eventually.

The dendrogram of DIANA can be constructed by letting the splinter group $C_\mathrm{new}$ be a child of the hollowed-out cluster $C_*$ each time. This constructs a tree with $C_0$ as its root and $n$ unique single-object clusters as its leaves.
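
A compact sketch of the splitting loop described above is given below; it is my own illustrative implementation of these steps (not the original DIANA program), working directly on a dissimilarity matrix:

    import numpy as np

    def diana(D):
        """Split clusters from a full symmetric dissimilarity matrix D; return the splits."""
        n = len(D)
        clusters = [list(range(n))]
        splits = []
        while len(clusters) < n:
            # 1. Pick the cluster with at least 2 objects and the largest diameter.
            big = max((c for c in clusters if len(c) > 1),
                      key=lambda c: D[np.ix_(c, c)].max())
            rest = list(big)
            avg = lambda i, group: D[i, [j for j in group if j != i]].mean()
            # 2. Seed the splinter group with the object of largest average dissimilarity.
            seed = max(rest, key=lambda i: avg(i, rest))
            splinter = [seed]
            rest.remove(seed)
            # 3. Migrate objects that are closer to the splinter group than to the rest.
            while len(rest) > 1:
                gains = {i: avg(i, rest) - D[i, splinter].mean() for i in rest}
                best = max(gains, key=gains.get)
                if gains[best] <= 0:
                    break
                splinter.append(best)
                rest.remove(best)
            clusters.remove(big)
            clusters += [splinter, rest]
            splits.append((splinter, rest))
        return splits

    # Toy dissimilarities: two well-separated pairs {0, 1} and {2, 3}.
    D = np.array([[0, 1, 9, 9],
                  [1, 0, 9, 9],
                  [9, 9, 0, 1],
                  [9, 9, 1, 0]], dtype=float)
    print(diana(D))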

Software

Open source implementations

Figure: Hierarchical clustering dendrogram of the Iris dataset (using R).
Figure: Hierarchical clustering and interactive dendrogram visualization in the Orange data mining suite.
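
For instance, a dendrogram similar to those shown above can be produced with a few lines of open-source code (the dataset here is random and the choice of SciPy and matplotlib is my own, for illustration):

    # Plot a Ward-linkage dendrogram for a small random dataset.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(42)
    X = rng.normal(size=(30, 4))     # stand-in for a small numeric dataset

    Z = linkage(X, method="ward")
    dendrogram(Z)
    plt.title("Hierarchical clustering dendrogram (Ward linkage)")
    plt.show()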

Commercial implementations

See also

Related Research Articles

Pattern recognition is the task of assigning a class to an observation based on patterns extracted from data. While similar, pattern recognition (PR) should not be confused with pattern machines (PM), which may possess PR capabilities but whose primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power.

UPGMA is a simple agglomerative (bottom-up) hierarchical clustering method. It also has a weighted variant, WPGMA, and they are generally attributed to Sokal and Michener.

<span class="mw-page-title-main">Multidimensional scaling</span> Set of related ordination techniques used in information visualization

Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset. MDS is used to translate "information about the pairwise 'distances' among a set of objects or individuals" into a configuration of points mapped into an abstract Cartesian space.

<span class="mw-page-title-main">Cluster analysis</span> Grouping a set of objects by similarity

Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning.

<span class="mw-page-title-main">Dendrogram</span> Diagram with a treelike structure

A dendrogram is a diagram representing a tree. This diagrammatic representation is frequently used in different contexts:

In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in a data set. The output depends on whether k-NN is used for classification or regression:

<span class="mw-page-title-main">Jaccard index</span> Measure of similarity and diversity between sets

The Jaccard index, also known as the Jaccard similarity coefficient, is a statistic used for gauging the similarity and diversity of sample sets.

Medoids are representative objects of a data set or a cluster within a data set whose sum of dissimilarities to all the objects in the cluster is minimal. Medoids are similar in concept to means or centroids, but medoids are always restricted to be members of the data set. Medoids are most commonly used on data when a mean or centroid cannot be defined, such as graphs. They are also used in contexts where the centroid is not representative of the dataset, such as images, 3-D trajectories and gene expression. They are also of interest when one wants to find a representative using some distance other than squared Euclidean distance.

The k-medoids problem is a clustering problem similar to k-means. The name was coined by Leonard Kaufman and Peter J. Rousseeuw with their PAM algorithm. Both the k-means and k-medoids algorithms are partitional and attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the k-means algorithm, k-medoids chooses actual data points as centers, and thereby allows for greater interpretability of the cluster centers than in k-means, where the center of a cluster is not necessarily one of the input data points. Furthermore, k-medoids can be used with arbitrary dissimilarity measures, whereas k-means generally requires Euclidean distance for efficient solutions. Because k-medoids minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances, it is more robust to noise and outliers than k-means.

In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability. Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.

In statistics, single-linkage clustering is one of several methods of hierarchical clustering. It is based on grouping clusters in bottom-up fashion, at each step combining two clusters that contain the closest pair of elements not yet belonging to the same cluster as each other.

Consensus clustering is a method of aggregating results from multiple clustering algorithms. Also called cluster ensembles or aggregation of clustering, it refers to the situation in which a number of different (input) clusterings have been obtained for a particular dataset and it is desired to find a single (consensus) clustering which is a better fit in some sense than the existing clusterings. Consensus clustering is thus the problem of reconciling clustering information about the same data set coming from different sources or from different runs of the same algorithm. When cast as an optimization problem, consensus clustering is known as median partition, and has been shown to be NP-complete, even when the number of input clusterings is three. Consensus clustering for unsupervised learning is analogous to ensemble learning in supervised learning.

BIRCH is an unsupervised data mining algorithm used to perform hierarchical clustering over particularly large data-sets. With modifications it can also be used to accelerate k-means clustering and Gaussian mixture modeling with the expectation–maximization algorithm. An advantage of BIRCH is its ability to incrementally and dynamically cluster incoming, multi-dimensional metric data points in an attempt to produce the best quality clustering for a given set of resources. In most cases, BIRCH only requires a single scan of the database.

Silhouette refers to a method of interpretation and validation of consistency within clusters of data. The technique provides a succinct graphical representation of how well each object has been classified. It was proposed by Belgian statistician Peter Rousseeuw in 1987.

Ordering points to identify the clustering structure (OPTICS) is an algorithm for finding density-based clusters in spatial data. It was presented by Mihael Ankerst, Markus M. Breunig, Hans-Peter Kriegel and Jörg Sander. Its basic idea is similar to DBSCAN, but it addresses one of DBSCAN's major weaknesses: the problem of detecting meaningful clusters in data of varying density. To do so, the points of the database are (linearly) ordered such that spatially closest points become neighbors in the ordering. Additionally, a special distance is stored for each point that represents the density that must be accepted for a cluster so that both points belong to the same cluster. This is represented as a dendrogram.

Complete-linkage clustering is one of several methods of agglomerative hierarchical clustering. At the beginning of the process, each element is in a cluster of its own. The clusters are then sequentially combined into larger clusters until all elements end up being in the same cluster. The method is also known as farthest neighbour clustering. The result of the clustering can be visualized as a dendrogram, which shows the sequence of cluster fusion and the distance at which each fusion took place.

<span class="mw-page-title-main">Top tree</span>

A top tree is a data structure based on a binary tree for unrooted dynamic trees that is used mainly for various path-related operations. It allows simple divide-and-conquer algorithms. It has since been augmented to maintain dynamically various properties of a tree such as diameter, center and median.

In statistics, Ward's method is a criterion applied in hierarchical cluster analysis. Ward's minimum variance method is a special case of the objective function approach originally presented by Joe H. Ward, Jr. Ward suggested a general agglomerative hierarchical clustering procedure, where the criterion for choosing the pair of clusters to merge at each step is based on the optimal value of an objective function. This objective function could be "any function that reflects the investigator's purpose." Many of the standard clustering procedures are contained in this very general class. To illustrate the procedure, Ward used the example where the objective function is the error sum of squares, and this example is known as Ward's method or more precisely Ward's minimum variance method.

In the theory of cluster analysis, the nearest-neighbor chain algorithm is an algorithm that can speed up several methods for agglomerative hierarchical clustering. These are methods that take a collection of points as input, and create a hierarchy of clusters of points by repeatedly merging pairs of smaller clusters to form larger clusters. The clustering methods that the nearest-neighbor chain algorithm can be used for include Ward's method, complete-linkage clustering, and single-linkage clustering; these all work by repeatedly merging the closest two clusters but use different definitions of the distance between clusters. The cluster distances for which the nearest-neighbor chain algorithm works are called reducible and are characterized by a simple inequality among certain cluster distances.

WPGMA is a simple agglomerative (bottom-up) hierarchical clustering method, generally attributed to Sokal and Michener.

References

  1. Nielsen, Frank (2016). "8. Hierarchical Clustering". Introduction to HPC with MPI for Data Science. Springer. pp. 195–211. ISBN   978-3-319-21903-5.
  2. Eppstein, David (2001-12-31). "Fast hierarchical clustering and other applications of dynamic closest pairs". ACM Journal of Experimental Algorithmics. 5: 1–es. arXiv: cs/9912014 . doi:10.1145/351827.351829. ISSN   1084-6654.
  3. "The CLUSTER Procedure: Clustering Methods". SAS/STAT 9.2 Users Guide. SAS Institute . Retrieved 2009-04-26.
  4. Székely, G. J.; Rizzo, M. L. (2005). "Hierarchical clustering via Joint Between-Within Distances: Extending Ward's Minimum Variance Method". Journal of Classification. 22 (2): 151–183. doi:10.1007/s00357-005-0012-9. S2CID   206960007.
  5. Fernández, Alberto; Gómez, Sergio (2020). "Versatile linkage: a family of space-conserving strategies for agglomerative hierarchical clustering". Journal of Classification. 37 (3): 584–597. arXiv: 1906.09222 . doi:10.1007/s00357-019-09339-z. S2CID   195317052.
  6. Ward, Joe H. (1963). "Hierarchical Grouping to Optimize an Objective Function". Journal of the American Statistical Association. 58 (301): 236–244. doi:10.2307/2282967. JSTOR 2282967. MR 0148188.
  7. Podani, János (1989), Mucina, L.; Dale, M. B. (eds.), "New combinatorial clustering methods", Numerical syntaxonomy, Dordrecht: Springer Netherlands, pp. 61–77, doi:10.1007/978-94-009-2432-1_5, ISBN 978-94-009-2432-1, retrieved 2022-11-04
  8. Ao, S. I.; Yip, K.; Ng, M.; Cheung, D.; Fong, P.-Y.; Melhado, I.; Sham, P. C. (2004-12-07). "CLUSTAG: hierarchical clustering and graph methods for selecting tag SNPs". Bioinformatics. 21 (8): 1735–1736. doi: 10.1093/bioinformatics/bti201 . ISSN   1367-4803. PMID   15585525.
  9. Basalto, Nicolas; Bellotti, Roberto; De Carlo, Francesco; Facchi, Paolo; Pantaleo, Ester; Pascazio, Saverio (2007-06-15). "Hausdorff clustering of financial time series". Physica A: Statistical Mechanics and Its Applications. 379 (2): 635–644. arXiv: physics/0504014 . Bibcode:2007PhyA..379..635B. doi:10.1016/j.physa.2007.01.011. ISSN   0378-4371. S2CID   27093582.
  10. Schubert, Erich (2021). HACAM: Hierarchical Agglomerative Clustering Around Medoids – and its Limitations (PDF). LWDA’21: Lernen, Wissen, Daten, Analysen, September 01–03, 2021, Munich, Germany. pp. 191–204 via CEUR-WS.
  11. Miyamoto, Sadaaki; Kaizu, Yousuke; Endo, Yasunori (2016). Hierarchical and Non-Hierarchical Medoid Clustering Using Asymmetric Similarity Measures. 2016 Joint 8th International Conference on Soft Computing and Intelligent Systems (SCIS) and 17th International Symposium on Advanced Intelligent Systems (ISIS). pp. 400–403. doi:10.1109/SCIS-ISIS.2016.0091.
  12. Herr, Dominik; Han, Qi; Lohmann, Steffen; Ertl, Thomas (2016). Visual Clutter Reduction through Hierarchy-based Projection of High-dimensional Labeled Data (PDF). Graphics Interface. Graphics Interface. doi:10.20380/gi2016.14 . Retrieved 2022-11-04.
  13. Zhang, Wei; Wang, Xiaogang; Zhao, Deli; Tang, Xiaoou (2012). "Graph Degree Linkage: Agglomerative Clustering on a Directed Graph". In Fitzgibbon, Andrew; Lazebnik, Svetlana; Perona, Pietro; Sato, Yoichi; Schmid, Cordelia (eds.). Computer Vision – ECCV 2012. Lecture Notes in Computer Science. Vol. 7572. Springer Berlin Heidelberg. pp. 428–441. arXiv: 1208.5092 . Bibcode:2012arXiv1208.5092Z. doi:10.1007/978-3-642-33718-5_31. ISBN   9783642337185. S2CID   14751. See also: https://github.com/waynezhanghk/gacluster
  14. Zhang, W.; Zhao, D.; Wang, X. (2013). "Agglomerative clustering via maximum incremental path integral". Pattern Recognition. 46 (11): 3056–65. Bibcode:2013PatRe..46.3056Z. CiteSeerX   10.1.1.719.5355 . doi:10.1016/j.patcog.2013.04.013.
  15. Zhao, D.; Tang, X. (2008). "Cyclizing clusters via zeta function of a graph". NIPS'08: Proceedings of the 21st International Conference on Neural Information Processing Systems. Curran. pp. 1953–60. CiteSeerX   10.1.1.945.1649 . ISBN   9781605609492.
  16. Ma, Y.; Derksen, H.; Hong, W.; Wright, J. (2007). "Segmentation of Multivariate Mixed Data via Lossy Data Coding and Compression". IEEE Transactions on Pattern Analysis and Machine Intelligence. 29 (9): 1546–62. doi:10.1109/TPAMI.2007.1085. hdl: 2142/99597 . PMID   17627043. S2CID   4591894.
  17. Fernández, Alberto; Gómez, Sergio (2008). "Solving Non-uniqueness in Agglomerative Hierarchical Clustering Using Multidendrograms". Journal of Classification. 25 (1): 43–65. arXiv: cs/0608049 . doi:10.1007/s00357-008-9004-x. S2CID   434036.
  18. Legendre, P.; Legendre, L.F.J. (2012). "Cluster Analysis §8.6 Reversals". Numerical Ecology. Developments in Environmental Modelling. Vol. 24 (3rd ed.). Elsevier. pp. 376–7. ISBN   978-0-444-53868-0.
  19. Kaufman, L.; Rousseeuw, P.J. (2009) [1990]. "6. Divisive Analysis (Program DIANA)". Finding Groups in Data: An Introduction to Cluster Analysis. Wiley. pp. 253–279. ISBN   978-0-470-31748-8.
  20. "Hierarchical Clustering · Clustering.jl". juliastats.org. Retrieved 2022-02-28.
  21. "hclust function - RDocumentation". www.rdocumentation.org. Retrieved 2022-06-07.
  22. Galili, Tal; Benjamini, Yoav; Simpson, Gavin; Jefferis, Gregory (2021-10-28), dendextend: Extending 'dendrogram' Functionality in R , retrieved 2022-06-07
  23. Paradis, Emmanuel; et al. "ape: Analyses of Phylogenetics and Evolution" . Retrieved 2022-12-28.
  24. Fernández, Alberto; Gómez, Sergio (2021-09-12). "mdendro: Extended Agglomerative Hierarchical Clustering" . Retrieved 2022-12-28.

Further reading