Mean shift

Mean shift is a non-parametric, feature-space analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm. [1] Application domains include cluster analysis in computer vision and image processing. [2]

History

The mean shift procedure is usually credited to work by Fukunaga and Hostetler in 1975. [3] It is, however, reminiscent of earlier work by Schnell in 1964. [4]

Overview

Mean shift is a procedure for locating the maxima (the modes) of a density function given discrete data sampled from that function. [1] This is an iterative method, and we start with an initial estimate $x$. Let a kernel function $K(x_i - x)$ be given. This function determines the weight of nearby points for re-estimation of the mean. Typically a Gaussian kernel on the distance to the current estimate is used, $K(x_i - x) = e^{-c\|x_i - x\|^2}$. The weighted mean of the density in the window determined by $K$ is

$$m(x) = \frac{\sum_{x_i \in N(x)} K(x_i - x)\, x_i}{\sum_{x_i \in N(x)} K(x_i - x)},$$

where $N(x)$ is the neighborhood of $x$, a set of points for which $K(x_i - x) \neq 0$.

The difference $m(x) - x$ is called the mean shift in Fukunaga and Hostetler. [3] The mean-shift algorithm now sets $x \leftarrow m(x)$, and repeats the estimation until $m(x)$ converges.
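
The update can be written directly in a few lines of NumPy. The following is a minimal sketch rather than a reference implementation: the Gaussian kernel, the bandwidth, the convergence tolerance, and the synthetic data are all illustrative choices.

```python
import numpy as np

def mean_shift_mode(x, points, bandwidth=1.0, tol=1e-6, max_iter=500):
    """Iterate x <- m(x) with a Gaussian kernel until the shift is negligible."""
    x = np.asarray(x, dtype=float)
    points = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        # Gaussian weights K(x_i - x) based on distance to the current estimate
        sq_dist = np.sum((points - x) ** 2, axis=1)
        weights = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
        # Weighted mean m(x) of the points in the window determined by K
        m = weights @ points / weights.sum()
        if np.linalg.norm(m - x) < tol:  # mean shift m(x) - x has converged
            return m
        x = m
    return x

# Example: samples around two modes; the iteration converges to the nearer mode
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(mean_shift_mode([0.5, 0.5], data, bandwidth=0.5))  # close to (0, 0)
```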

Although the mean shift algorithm has been widely used in many applications, a rigorous proof of the convergence of the algorithm using a general kernel in a high-dimensional space is still not known. [5] Aliyari Ghassabeh showed the convergence of the mean shift algorithm in one dimension with a differentiable, convex, and strictly decreasing profile function. [6] However, the one-dimensional case has limited real-world applications. The convergence of the algorithm in higher dimensions with a finite number of (isolated) stationary points has also been proved, [5] [7] but sufficient conditions for a general kernel function to have finitely many isolated stationary points have not been provided.

Gaussian mean shift is an expectation–maximization (EM) algorithm. [8]

Details

Let data be a finite set $S$ embedded in the $n$-dimensional Euclidean space, $X$. Let $K$ be a flat kernel that is the characteristic function of the $\lambda$-ball in $X$,

$$K(x) = \begin{cases} 1 & \text{if } \|x\| \le \lambda \\ 0 & \text{if } \|x\| > \lambda \end{cases}$$

In each iteration of the algorithm, $s \leftarrow m(s)$ is performed for all $s \in S$ simultaneously. The first question, then, is how to estimate the density function given a sparse set of samples. One of the simplest approaches is to just smooth the data, e.g., by convolving it with a fixed kernel of width $h$,

$$f(x) = \sum_i K(x - x_i) = \sum_i k\!\left(\frac{\|x - x_i\|^2}{h^2}\right),$$

where $x_i$ are the input samples and $k(r)$ is the kernel function (or Parzen window). $h$ is the only parameter in the algorithm and is called the bandwidth. This approach is known as kernel density estimation or the Parzen window technique. Once we have computed $f(x)$ from the equation above, we can find its local maxima using gradient ascent or some other optimization technique. The problem with this "brute force" approach is that, for higher dimensions, it becomes computationally prohibitive to evaluate $f(x)$ over the complete search space. Instead, mean shift uses a variant of what is known in the optimization literature as multiple restart gradient descent. Starting at some guess for a local maximum, $y_k$, which can be a random input data point $x_1$, mean shift computes the gradient of the density estimate $f(x)$ at $y_k$ and takes an uphill step in that direction. [9]
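
To make the density estimate concrete, the sketch below evaluates $f(x)$ at a single query point with a Gaussian profile. The sample points and bandwidth are arbitrary, and the normalization constant is omitted, since it does not affect the locations of the maxima.

```python
import numpy as np

def kde(x, samples, h=1.0):
    """Parzen-window estimate f(x) = sum_i k(||x - x_i||^2 / h^2)
    with the Gaussian profile k(r) = exp(-r / 2); normalization omitted."""
    sq_dist = np.sum((np.asarray(samples, float) - np.asarray(x, float)) ** 2, axis=1)
    return float(np.sum(np.exp(-sq_dist / (2.0 * h ** 2))))

samples = np.array([[0.0, 0.0], [0.1, 0.2], [3.0, 3.1]])
print(kde([0.0, 0.1], samples, h=0.5))  # larger near the cluster at the origin
```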

Types of kernels

Kernel definition: Let $X$ be the $n$-dimensional Euclidean space, $\mathbb{R}^n$. The norm of $x$ is a non-negative number, $\|x\|^2 = x^{\top} x \ge 0$. A function $K: X \rightarrow \mathbb{R}$ is said to be a kernel if there exists a profile, $k: [0, \infty] \rightarrow \mathbb{R}$, such that

$$K(x) = k(\|x\|^2)$$

and

  * $k$ is non-negative,
  * $k$ is non-increasing: $k(a) \ge k(b)$ if $a < b$, and
  * $k$ is piecewise continuous and $\int_0^\infty k(r)\, dr < \infty$.
The two most frequently used kernel profiles for mean shift are:

Flat kernel

$$k(x) = \begin{cases} 1 & \text{if } x \le \lambda \\ 0 & \text{if } x > \lambda \end{cases}$$

Gaussian kernel

$$k(x) = e^{-\frac{\|x\|^2}{2 \sigma^2}},$$

where the standard deviation parameter $\sigma$ works as the bandwidth parameter, $h$.
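
Written as functions of the squared norm $r = \|x\|^2$, the two profiles look as follows. This is a minimal sketch in which the threshold $\lambda$ and the width $\sigma$ are user-chosen parameters.

```python
import numpy as np

def flat_profile(r, lam=1.0):
    """Flat profile: 1 if r <= lambda, 0 otherwise."""
    return np.where(r <= lam, 1.0, 0.0)

def gaussian_profile(r, sigma=1.0):
    """Gaussian profile: exp(-r / (2 sigma^2)); sigma plays the role of the bandwidth h."""
    return np.exp(-r / (2.0 * sigma ** 2))

r = float(np.sum(np.array([0.3, 0.4]) ** 2))   # squared norm of an offset vector
print(flat_profile(r, lam=0.5), gaussian_profile(r, sigma=0.5))
```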

Applications

Clustering

Consider a set of points in two-dimensional space. Assume a circular window centered at $C$ and having radius $r$ as the kernel. Mean shift is a hill-climbing algorithm which involves shifting this kernel iteratively to a higher-density region until convergence. Every shift is defined by a mean shift vector. The mean shift vector always points toward the direction of the maximum increase in the density. At every iteration the kernel is shifted to the centroid, or the mean, of the points within it. The method of calculating this mean depends on the choice of the kernel. If a Gaussian kernel is chosen instead of a flat kernel, then every point is first assigned a weight which decays exponentially as its distance from the kernel's center increases. At convergence, there will be no direction in which a shift can accommodate more points inside the kernel.
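
For example, scikit-learn's MeanShift estimator clusters data in exactly this way: every sample is shifted until it converges to a mode, and samples that converge to the same mode form one cluster. A short sketch, in which the bandwidth and the synthetic data are illustrative:

```python
import numpy as np
from sklearn.cluster import MeanShift

# Two well-separated blobs of 2-D points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.4, (100, 2)), rng.normal(5, 0.4, (100, 2))])

ms = MeanShift(bandwidth=1.0)      # bandwidth = kernel window size h
labels = ms.fit_predict(X)         # cluster index for every sample
print(ms.cluster_centers_)         # the modes found by the procedure
print(np.unique(labels))           # e.g. [0 1]
```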

Tracking

The mean shift algorithm can be used for visual tracking. The simplest such algorithm would create a confidence map in the new image based on the color histogram of the object in the previous image, and use mean shift to find the peak of the confidence map near the object's old position. The confidence map is a probability density function on the new image, assigning each pixel a probability equal to the probability of that pixel's color occurring in the object in the previous image. A few algorithms, such as kernel-based object tracking [10], ensemble tracking [11], and CAMshift [12] [13], expand on this idea.
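
OpenCV implements this scheme with histogram back-projection followed by cv2.meanShift. The sketch below is schematic: the video file name, the initial window, and the use of the hue channel alone are assumptions, and a practical tracker would add model updates and failure handling.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")       # placeholder input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80             # assumed initial object window

# Color model of the object: hue histogram of the region of interest
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Confidence map: probability of each pixel's hue under the object's histogram
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Mean shift moves the window toward the nearby peak of the confidence map
    _, window = cv2.meanShift(back_proj, window, criteria)
cap.release()
```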

Smoothing

Let $x_i$ and $z_i$, $i = 1, \ldots, n$, be the $d$-dimensional input and filtered image pixels in the joint spatial-range domain. For each pixel:

  1. Initialize $j = 1$ and $y_{i,1} = x_i$.
  2. Compute $y_{i,j+1}$ by the mean shift update $y_{i,j+1} = m(y_{i,j})$ and iterate until convergence, $y = y_{i,c}$.
  3. Assign $z_i = (x_i^s, y_{i,c}^r)$, where the superscripts $s$ and $r$ denote the spatial and range components of a vector, respectively. The assignment specifies that the filtered pixel at the spatial location $x_i^s$ takes the range (color) component of the point of convergence, $y_{i,c}^r$.
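
Mean shift filtering of this kind is available, for example, as OpenCV's pyrMeanShiftFiltering. A brief sketch, in which the spatial radius sp, the color (range) radius sr, and the file names are placeholders:

```python
import cv2

img = cv2.imread("image.png")        # placeholder input image (BGR)
# sp: spatial window radius, sr: color (range) window radius
smoothed = cv2.pyrMeanShiftFiltering(img, sp=15, sr=30)
cv2.imwrite("smoothed.png", smoothed)
```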

Strengths

  1. Mean shift is an application-independent tool suitable for real data analysis.
  2. It does not assume any predefined shape for the data clusters.
  3. It is capable of handling arbitrary feature spaces.
  4. The procedure relies on the choice of a single parameter: the bandwidth.
  5. The bandwidth/window size 'h' has a physical meaning, unlike the k in k-means.

Weaknesses

  1. The selection of a window size is not trivial.
  2. An inappropriate window size can cause modes to be merged or can generate additional "shallow" modes.
  3. It often requires an adaptive window size; in practice the bandwidth is frequently estimated from the data, as in the sketch below.
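
One common heuristic, used for instance by scikit-learn's estimate_bandwidth, derives the bandwidth from nearest-neighbor distances in the data. A short sketch, with the quantile and the synthetic data as illustrative choices:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])

# Bandwidth derived from the distances to each sample's nearest neighbors
bw = estimate_bandwidth(X, quantile=0.2)
labels = MeanShift(bandwidth=bw).fit_predict(X)
print(bw, np.unique(labels))
```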

Availability

Variants of the algorithm can be found in machine learning and image processing packages, including:

  * ELKI: Java data-mining toolkit with many clustering algorithms, including mean shift.
  * mlpack: C++ machine learning library with a mean shift implementation.
  * OpenCV: mean shift for object tracking (meanShift, CamShift) and for image filtering (pyrMeanShiftFiltering).
  * scikit-learn: Python implementation (sklearn.cluster.MeanShift).

See also

  * Vector quantization
  * Principal component analysis
  * Gaussian function
  * Gaussian process
  * Nonlinear dimensionality reduction
  * Cluster analysis
  * Mixture model
  * Gabor filter
  * Random matrix
  * k-means clustering
  * Kernel density estimation
  * Kernel method
  * Kernel principal component analysis
  * Ridge detection
  * Structure tensor
  * Kernel (statistics)
  * Harris affine region detector
  * Diffusion map
  * Point-set registration
  * Kernel embedding of distributions

References

  1. Cheng, Yizong (August 1995). "Mean Shift, Mode Seeking, and Clustering". IEEE Transactions on Pattern Analysis and Machine Intelligence. 17 (8): 790–799. CiteSeerX 10.1.1.510.1222. doi:10.1109/34.400568.
  2. Comaniciu, Dorin; Peter Meer (May 2002). "Mean Shift: A Robust Approach Toward Feature Space Analysis". IEEE Transactions on Pattern Analysis and Machine Intelligence. 24 (5): 603–619. CiteSeerX   10.1.1.160.3832 . doi:10.1109/34.1000236. S2CID   691081.
  3. Fukunaga, Keinosuke; Larry D. Hostetler (January 1975). "The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition". IEEE Transactions on Information Theory. 21 (1): 32–40. doi:10.1109/TIT.1975.1055330.
  4. Schnell, P. (1964). "Eine Methode zur Auffindung von Gruppen". Biometrische Zeitschrift (in German). 6 (1): 47–48. doi:10.1002/bimj.19640060105.
  5. Aliyari Ghassabeh, Youness (2015-03-01). "A sufficient condition for the convergence of the mean shift algorithm with Gaussian kernel". Journal of Multivariate Analysis. 135: 1–10. doi:10.1016/j.jmva.2014.11.009.
  6. Aliyari Ghassabeh, Youness (2013-09-01). "On the convergence of the mean shift algorithm in the one-dimensional space". Pattern Recognition Letters. 34 (12): 1423–1427. arXiv: 1407.2961 . Bibcode:2013PaReL..34.1423A. doi:10.1016/j.patrec.2013.05.004. S2CID   10233475.
  7. Li, Xiangru; Hu, Zhanyi; Wu, Fuchao (2007-06-01). "A note on the convergence of the mean shift". Pattern Recognition. 40 (6): 1756–1762. Bibcode:2007PatRe..40.1756L. doi:10.1016/j.patcog.2006.10.016.
  8. Carreira-Perpinan, Miguel A. (May 2007). "Gaussian Mean-Shift Is an EM Algorithm". IEEE Transactions on Pattern Analysis and Machine Intelligence. 29 (5): 767–776. doi:10.1109/tpami.2007.1057. ISSN   0162-8828. PMID   17356198. S2CID   6694308.
  9. Szeliski, Richard (2011). Computer Vision: Algorithms and Applications. Springer.
  10. Comaniciu, Dorin; Visvanathan Ramesh; Peter Meer (May 2003). "Kernel-based Object Tracking". IEEE Transactions on Pattern Analysis and Machine Intelligence. 25 (5): 564–575. CiteSeerX   10.1.1.8.7474 . doi:10.1109/tpami.2003.1195991. S2CID   823678.
  11. Avidan, Shai (2005). "Ensemble Tracking". 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 2. San Diego, California: IEEE. pp. 494–501. doi:10.1109/CVPR.2005.144. ISBN   978-0-7695-2372-9. PMID   17170479. S2CID   1638397.
  12. Gary Bradski (1998) Computer Vision Face Tracking For Use in a Perceptual User Interface Archived 2012-04-17 at the Wayback Machine , Intel Technology Journal, No. Q2.
  13. Emami, Ebrahim (2013). "Online failure detection and correction for CAMShift tracking algorithm". 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP). Vol. 2. IEEE. pp. 180–183. doi:10.1109/IranianMVIP.2013.6779974. ISBN   978-1-4673-6184-2. S2CID   15864761.