Multivariate kernel density estimation


Kernel density estimation is a nonparametric technique for density estimation, i.e., the estimation of probability density functions, which is one of the fundamental questions in statistics. It can be viewed as a generalisation of histogram density estimation with improved statistical properties. Apart from histograms, other types of density estimators include parametric, spline, wavelet and Fourier series estimators. Kernel density estimators were first introduced in the scientific literature for univariate data in the 1950s and 1960s [1] [2] and have subsequently been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to multivariate statistics. Based on research carried out in the 1990s and 2000s, multivariate kernel density estimation has reached a level of maturity comparable to that of its univariate counterparts. [3]


Motivation

We take a synthetic bivariate data set of 50 points to illustrate the construction of histograms. This requires the choice of an anchor point (the lower left corner of the histogram grid). For the histogram on the left, we choose (−1.5, −1.5); for the one on the right, we shift the anchor point by 0.125 in both directions to (−1.625, −1.625). Both histograms have a binwidth of 0.5, so any differences are due to the change in the anchor point only. The colour-coding indicates the number of data points which fall into a bin: 0 = white, 1 = pale yellow, 2 = bright yellow, 3 = orange, 4 = red. The left histogram appears to indicate that the upper half has a higher density than the lower half, whereas the reverse is the case for the right-hand histogram, confirming that histograms are highly sensitive to the placement of the anchor point. [4]

Comparison of 2D histograms. Left: histogram with anchor point at (−1.5, −1.5). Right: histogram with anchor point at (−1.625, −1.625). Both histograms have a binwidth of 0.5, so differences in the appearance of the two histograms are due to the placement of the anchor point.

One possible solution to this anchor point placement problem is to remove the histogram binning grid completely. In the left figure below, a kernel (represented by the grey lines) is centred at each of the 50 data points above. The result of summing these kernels is given in the right figure, which is a kernel density estimate. The most striking difference between kernel density estimates and histograms is that the former are easier to interpret since they do not contain artifacts induced by a binning grid. The coloured contours correspond to the smallest region which contains the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%, thus indicating that a single central region contains the highest density.

Construction of a 2D kernel density estimate. Left: individual kernels. Right: kernel density estimate.

The goal of density estimation is to take a finite sample of data and to make inferences about the underlying probability density function everywhere, including where no data are observed. In kernel density estimation, the contribution of each data point is smoothed out from a single point into a region of space surrounding it. Aggregating the individually smoothed contributions gives an overall picture of the structure of the data and its density function. In the details to follow, we show that this approach leads to a reasonable estimate of the underlying density function.

Definition

The previous figure is a graphical representation of a kernel density estimate, which we now define in an exact manner. Let x1, x2, ..., xn be a sample of d-variate random vectors drawn from a common distribution described by the density function ƒ. The kernel density estimate is defined to be

\hat{f}_\mathbf{H}(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^n K_\mathbf{H}(\mathbf{x} - \mathbf{x}_i)

where

- \mathbf{x} = (x_1, x_2, \dots, x_d)^T, \mathbf{x}_i = (x_{i1}, x_{i2}, \dots, x_{id})^T, i = 1, 2, \dots, n are d-vectors;
- \mathbf{H} is the bandwidth (or smoothing) d \times d matrix, which is symmetric and positive definite;
- K is the kernel function, which is a symmetric multivariate density;
- K_\mathbf{H}(\mathbf{x}) = |\mathbf{H}|^{-1/2} K(\mathbf{H}^{-1/2} \mathbf{x}).

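To make the definition concrete, the following R sketch (illustrative only; the helper kde_manual is not from any package) evaluates a bivariate kernel density estimate directly as an average of normal kernels, using the mvtnorm package for the multivariate normal density.

# Direct evaluation of the multivariate KDE definition with a normal kernel.
# For the normal kernel, K_H(x - x_i) is the N(0, H) density evaluated at x - x_i.
library(mvtnorm)

kde_manual <- function(eval_points, data, H) {
  # eval_points: m x d matrix of evaluation points
  # data:        n x d matrix of observations x_1, ..., x_n
  # H:           d x d symmetric positive definite bandwidth matrix
  apply(eval_points, 1, function(x) {
    mean(dmvnorm(sweep(data, 2, x), sigma = H))
  })
}

# Usage on synthetic bivariate data with an arbitrary diagonal bandwidth
set.seed(1)
data <- matrix(rnorm(100), ncol = 2)
grid <- as.matrix(expand.grid(seq(-3, 3, 0.5), seq(-3, 3, 0.5)))
fhat <- kde_manual(grid, data, H = diag(c(0.3, 0.3)))
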
The choice of the kernel function K is not crucial to the accuracy of kernel density estimators, so we use the standard multivariate normal kernel throughout: K_\mathbf{H}(\mathbf{x}) = (2\pi)^{-d/2} |\mathbf{H}|^{-1/2} e^{-\frac{1}{2} \mathbf{x}^T \mathbf{H}^{-1} \mathbf{x}}, where H plays the role of the covariance matrix. On the other hand, the choice of the bandwidth matrix H is the single most important factor affecting its accuracy since it controls the amount and orientation of smoothing induced. [5]: 36–39  That the bandwidth matrix also induces an orientation is a basic difference between multivariate kernel density estimation and its univariate analogue, since orientation is not defined for 1D kernels. This leads to the choice of the parametrisation of this bandwidth matrix. The three main parametrisation classes (in increasing order of complexity) are S, the class of positive scalars times the identity matrix; D, diagonal matrices with positive entries on the main diagonal; and F, symmetric positive definite matrices. The S class kernels have the same amount of smoothing applied in all coordinate directions, D kernels allow different amounts of smoothing in each of the coordinates, and F kernels allow arbitrary amounts and orientation of the smoothing. Historically S and D kernels are the most widespread due to computational reasons, but research indicates that important gains in accuracy can be obtained using the more general F class kernels. [6] [7]

Comparison of the three main bandwidth matrix parametrisation classes. Left: S, positive scalar times the identity matrix. Centre: D, diagonal matrix with positive entries on the main diagonal. Right: F, symmetric positive definite matrix.

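To make the three parametrisation classes concrete, the following R fragment (with arbitrary illustrative values) constructs one bandwidth matrix of each class for d = 2 and checks positive definiteness. In the ks package used later in this article, selectors such as Hpi and Hscv return unconstrained F class matrices, while their diagonal variants Hpi.diag and Hscv.diag restrict the selection to the D class.

# One example bandwidth matrix from each parametrisation class (d = 2).
h   <- 0.5
H_S <- h^2 * diag(2)                    # S: positive scalar times the identity
H_D <- diag(c(0.25, 0.64))              # D: diagonal with positive entries
H_F <- matrix(c(0.25, 0.15,
                0.15, 0.64), nrow = 2)  # F: symmetric positive definite

# An F class matrix must have strictly positive eigenvalues:
all(eigen(H_F, symmetric = TRUE)$values > 0)   # TRUE for this example
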
Optimal bandwidth matrix selection

The most commonly used optimality criterion for selecting a bandwidth matrix is the MISE or mean integrated squared error

\operatorname{MISE}(\mathbf{H}) = \operatorname{E} \int [\hat{f}_\mathbf{H}(\mathbf{x}) - f(\mathbf{x})]^2 \, d\mathbf{x}.

This in general does not possess a closed-form expression, so it is usual to use its asymptotic approximation (AMISE) as a proxy

\operatorname{AMISE}(\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 (\operatorname{vec}^T \mathbf{H}) \boldsymbol{\Psi}_4 (\operatorname{vec} \mathbf{H})

where

- R(K) = \int K(\mathbf{x})^2 \, d\mathbf{x}, with R(K) = (4\pi)^{-d/2} for the normal kernel;
- \int \mathbf{x} \mathbf{x}^T K(\mathbf{x}) \, d\mathbf{x} = m_2(K) \mathbf{I}_d, with Id being the d × d identity matrix and m2 = 1 for the normal kernel;
- \operatorname{D}^2 f is the d × d Hessian matrix of second order partial derivatives of f;
- \boldsymbol{\Psi}_4 = \int (\operatorname{vec} \operatorname{D}^2 f(\mathbf{x})) (\operatorname{vec}^T \operatorname{D}^2 f(\mathbf{x})) \, d\mathbf{x} is a d^2 \times d^2 matrix of integrated fourth order partial derivatives of f;
- vec is the vector operator which stacks the columns of a matrix into a single vector, e.g. \operatorname{vec}\begin{bmatrix} a & c \\ b & d \end{bmatrix} = (a, b, c, d)^T.

The quality of the AMISE approximation to the MISE [5]: 97  is given by

\operatorname{MISE}(\mathbf{H}) = \operatorname{AMISE}(\mathbf{H}) + o(n^{-1} |\mathbf{H}|^{-1/2} + \operatorname{tr} \mathbf{H}^2)

where o indicates the usual small o notation. Heuristically this statement implies that the AMISE is a 'good' approximation of the MISE as the sample size n → ∞.

It can be shown that any reasonable bandwidth selector H has H = O(n−2/(d+4)) where the big O notation is applied elementwise. Substituting this into the MISE formula yields that the optimal MISE is O(n−4/(d+4)). [5]: 99–100  Thus as n → ∞, the MISE → 0, i.e. the kernel density estimate converges in mean square and thus also in probability to the true density f. These modes of convergence are confirmation of the statement in the motivation section that kernel methods lead to reasonable density estimators. An ideal optimal bandwidth selector is

\mathbf{H}_{\operatorname{AMISE}} = \operatorname{argmin}_{\mathbf{H} \in F} \operatorname{AMISE}(\mathbf{H}).

Since this ideal selector contains the unknown density function ƒ, it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: smoothed cross validation and plug-in selectors.

Plug-in

The plug-in (PI) estimate of the AMISE is formed by replacing Ψ4 by its estimator \hat{\boldsymbol{\Psi}}_4

\operatorname{PI}(\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 (\operatorname{vec}^T \mathbf{H}) \hat{\boldsymbol{\Psi}}_4(\mathbf{G}) (\operatorname{vec} \mathbf{H})

where \hat{\boldsymbol{\Psi}}_4(\mathbf{G}) = n^{-2} \sum_{i=1}^n \sum_{j=1}^n [(\operatorname{vec} \operatorname{D}^2)(\operatorname{vec}^T \operatorname{D}^2)] K_\mathbf{G}(\mathbf{X}_i - \mathbf{X}_j). Thus \hat{\mathbf{H}}_{\operatorname{PI}} = \operatorname{argmin}_{\mathbf{H} \in F} \operatorname{PI}(\mathbf{H}) is the plug-in selector. [8] [9] These references also contain algorithms on optimal estimation of the pilot bandwidth matrix G and establish that \hat{\mathbf{H}}_{\operatorname{PI}} converges in probability to \mathbf{H}_{\operatorname{AMISE}}.

Smoothed cross validation

Smoothed cross validation (SCV) is a subset of a larger class of cross validation techniques. The SCV estimator differs from the plug-in estimator in the second term

\operatorname{SCV}(\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) + n^{-2} \sum_{i=1}^n \sum_{j=1}^n (K_{2\mathbf{H} + 2\mathbf{G}} - 2 K_{\mathbf{H} + 2\mathbf{G}} + K_{2\mathbf{G}})(\mathbf{X}_i - \mathbf{X}_j)

Thus \hat{\mathbf{H}}_{\operatorname{SCV}} = \operatorname{argmin}_{\mathbf{H} \in F} \operatorname{SCV}(\mathbf{H}) is the SCV selector. [9] [10] These references also contain algorithms on optimal estimation of the pilot bandwidth matrix G and establish that \hat{\mathbf{H}}_{\operatorname{SCV}} converges in probability to \mathbf{H}_{\operatorname{AMISE}}.

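Both selectors are implemented in the R package ks used later in this article; the following minimal sketch (synthetic data and seed chosen arbitrarily) simply computes and compares the two data-based bandwidth matrices.

# Compare the plug-in and smoothed cross validation selectors on synthetic data.
library(ks)

set.seed(8192)
x <- matrix(rnorm(2 * 200), ncol = 2)   # 200 bivariate standard normal points

H_pi  <- Hpi(x = x)    # plug-in selector, unconstrained bandwidth matrix
H_scv <- Hscv(x = x)   # smoothed cross validation selector

H_pi
H_scv                  # both are symmetric positive definite 2 x 2 matrices
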
Rule of thumb

Silverman's rule of thumb suggests using \sqrt{\mathbf{H}_{ii}} = \left(\frac{4}{d+2}\right)^{\frac{1}{d+4}} n^{\frac{-1}{d+4}} \sigma_i, where \sigma_i is the standard deviation of the ith variable, d is the number of dimensions, and \mathbf{H}_{ij} = 0 for i \neq j. Scott's rule is \sqrt{\mathbf{H}_{ii}} = n^{\frac{-1}{d+4}} \sigma_i.

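A direct R transcription of these two rules might look as follows; the helper names H_silverman and H_scott are purely illustrative and not taken from any package.

# Rule-of-thumb diagonal bandwidth matrices (illustrative helpers, d >= 2).
H_silverman <- function(x) {
  n <- nrow(x); d <- ncol(x)
  h <- (4 / (d + 2))^(1 / (d + 4)) * n^(-1 / (d + 4)) * apply(x, 2, sd)
  diag(h^2)   # sqrt of each diagonal entry equals h_i; off-diagonals are zero
}

H_scott <- function(x) {
  n <- nrow(x); d <- ncol(x)
  h <- n^(-1 / (d + 4)) * apply(x, 2, sd)
  diag(h^2)
}

# Example: H_silverman(matrix(rnorm(200), ncol = 2))
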
Asymptotic analysis

In the optimal bandwidth selection section, we introduced the MISE. Its construction relies on the expected value and the variance of the density estimator [5]: 97 

\operatorname{E} \hat{f}(\mathbf{x}; \mathbf{H}) = K_\mathbf{H} * f(\mathbf{x}) = f(\mathbf{x}) + \tfrac{1}{2} m_2(K) \operatorname{tr}(\mathbf{H} \operatorname{D}^2 f(\mathbf{x})) + o(\operatorname{tr} \mathbf{H})

where * is the convolution operator between two functions, and

\operatorname{Var} \hat{f}(\mathbf{x}; \mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) f(\mathbf{x}) + o(n^{-1} |\mathbf{H}|^{-1/2}).

For these two expressions to be well-defined, we require that all elements of H tend to 0 and that n−1 |H|−1/2 tends to 0 as n tends to infinity. Assuming these two conditions, we see that the expected value tends to the true density f i.e. the kernel density estimator is asymptotically unbiased; and that the variance tends to zero. Using the standard mean squared value decomposition

\operatorname{MSE} \hat{f}(\mathbf{x}; \mathbf{H}) = \operatorname{Var} \hat{f}(\mathbf{x}; \mathbf{H}) + [\operatorname{E} \hat{f}(\mathbf{x}; \mathbf{H}) - f(\mathbf{x})]^2

we have that the MSE tends to 0, implying that the kernel density estimator is (mean square) consistent and hence converges in probability to the true density f. The rate of convergence of the MSE to 0 is necessarily the same as the MISE rate noted previously, O(n−4/(d+4)), hence the convergence rate of the density estimator to f is Op(n−2/(d+4)) where Op denotes order in probability. This establishes pointwise convergence. The functional convergence is established similarly by considering the behaviour of the MISE, and noting that under sufficient regularity, integration does not affect the convergence rates.

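As a quick Monte Carlo check of this consistency (a sketch with arbitrary settings, using the ks package introduced below), one can estimate the pointwise MSE at the origin for a known bivariate normal density and observe it shrink as n grows.

# Monte Carlo estimate of the pointwise MSE of the KDE at the origin.
library(ks)
library(mvtnorm)

mse_at_origin <- function(n, reps = 50) {
  true <- dmvnorm(c(0, 0), sigma = diag(2))   # true density value at the origin
  est <- replicate(reps, {
    x <- rmvnorm(n, sigma = diag(2))
    kde(x = x, H = Hpi(x = x), eval.points = matrix(0, 1, 2))$estimate
  })
  mean((est - true)^2)
}

sapply(c(100, 400, 1600), mse_at_origin)      # values should decrease with n
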
For the data-based bandwidth selectors considered, the target is the AMISE bandwidth matrix. We say that a data-based selector \hat{\mathbf{H}} converges to the AMISE selector at relative rate Op(n−α), α > 0 if

\operatorname{vec}(\hat{\mathbf{H}} - \mathbf{H}_{\operatorname{AMISE}}) = O_p(n^{-\alpha}) \operatorname{vec} \mathbf{H}_{\operatorname{AMISE}}.

It has been established that the plug-in and smoothed cross validation selectors (given a single pilot bandwidth G) both converge at a relative rate of Op(n−2/(d+6)) [9] [11] i.e., both these data-based selectors are consistent estimators.

Density estimation with a full bandwidth matrix

Old Faithful Geyser data kernel density estimate with plug-in bandwidth matrix.

The ks package [12] in R implements the plug-in and smoothed cross validation selectors (amongst others). The example uses the Old Faithful dataset (included in the base distribution of R), which contains 272 records with two measurements each: the duration time of an eruption (minutes) and the waiting time until the next eruption (minutes) of the Old Faithful Geyser in Yellowstone National Park, USA.

The code fragment below computes the kernel density estimate with the plug-in bandwidth matrix. Again, the coloured contours correspond to the smallest region which contains the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%. To compute the SCV selector, Hpi is replaced with Hscv. This is not displayed here since it is mostly similar to the plug-in estimate for this example.

library(ks)
data(faithful)
# plug-in bandwidth matrix
H <- Hpi(x = faithful)
# kernel density estimate with the plug-in bandwidth
fhat <- kde(x = faithful, H = H)
# filled contour plot of the estimate with the data points overlaid
plot(fhat, display = "filled.contour2")
points(faithful, cex = 0.5, pch = 16)

Density estimation with a diagonal bandwidth matrix

Kernel density estimate with diagonal bandwidth for synthetic normal mixture data.

We consider estimating the density of the Gaussian mixture (4π)−1 exp(−½(x1² + x2²)) + (4π)−1 exp(−½((x1 − 3.5)² + x2²)), from 500 randomly generated points. We employ the Matlab routine kde2d for 2-dimensional data. The routine is an automatic bandwidth selection method specifically designed for a second order Gaussian kernel. [13] The figure shows the joint density estimate that results from using the automatically selected bandwidth.

Matlab script for the example

Type the following commands in Matlab after downloading and saving the function kde2d.m in the current directory.

clear all
% generate synthetic data
data = [randn(500, 2); randn(500, 1) + 3.5, randn(500, 1)];
% call the routine, which has been saved in the current directory
[bandwidth, density, X, Y] = kde2d(data);
% plot the data and the density estimate
contour3(X, Y, density, 50), hold on
plot(data(:, 1), data(:, 2), 'r.', 'MarkerSize', 5)

Alternative optimality criteria

The MISE is the expected integrated L2 distance between the density estimate and the true density function f. It is the most widely used criterion, mostly due to its tractability, and most software implements MISE-based bandwidth selectors. There are alternative optimality criteria, which attempt to cover cases where MISE is not an appropriate measure. [3]: 34–37, 78  The equivalent L1 measure, Mean Integrated Absolute Error, is

\operatorname{MIAE}(\mathbf{H}) = \operatorname{E} \int |\hat{f}_\mathbf{H}(\mathbf{x}) - f(\mathbf{x})| \, d\mathbf{x}.

Its mathematical analysis is considerably more difficult than that of the MISE. In practice, the gain appears not to be significant. [14] The L∞ norm is the Mean Uniform Absolute Error

\operatorname{MUAE}(\mathbf{H}) = \operatorname{E} \sup_{\mathbf{x}} |\hat{f}_\mathbf{H}(\mathbf{x}) - f(\mathbf{x})|

which has been investigated only briefly. [15] Likelihood error criteria include those based on the Mean Kullback–Leibler divergence

\operatorname{MKL}(\mathbf{H}) = \operatorname{E} \int f(\mathbf{x}) \log \frac{f(\mathbf{x})}{\hat{f}_\mathbf{H}(\mathbf{x})} \, d\mathbf{x}

and the Mean Hellinger distance

\operatorname{MH}(\mathbf{H}) = \operatorname{E} \int \left( \hat{f}_\mathbf{H}(\mathbf{x})^{1/2} - f(\mathbf{x})^{1/2} \right)^2 d\mathbf{x}.

The KL criterion can be estimated using a cross-validation method, although KL cross-validation selectors can be sub-optimal even if they remain consistent for bounded density functions. [16] MH selectors have been briefly examined in the literature. [17]

All these optimality criteria are distance based measures, and do not always correspond to more intuitive notions of closeness, so more visual criteria have been developed in response to this concern. [18]
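
As a simple numerical illustration of the L2 and L1 criteria, the following R sketch approximates the integrated squared error and the integrated absolute error of a kernel density estimate of a known bivariate normal density on a grid; the grid-based integration, grid limits and sample size are assumptions of the sketch rather than part of any reference.

# Grid approximation of the integrated squared (L2) and absolute (L1) errors
# of a kernel density estimate of a known bivariate standard normal density.
library(ks)
library(mvtnorm)

set.seed(1)
x <- rmvnorm(500, sigma = diag(2))          # sample from the known density

g <- seq(-4, 4, length.out = 101)
grid <- as.matrix(expand.grid(g, g))
cell <- diff(g)[1]^2                        # area of one grid cell

est  <- kde(x = x, H = Hpi(x = x), eval.points = grid)$estimate
true <- dmvnorm(grid, sigma = diag(2))

ISE <- sum((est - true)^2) * cell           # approximates the L2 criterion
IAE <- sum(abs(est - true)) * cell          # approximates the L1 criterion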

Objective and data-driven kernel selection

Demonstration of the filter function I_A(t). The square of the empirical characteristic function |φ̂|² from N = 10,000 samples of the 'transition distribution' discussed in Section 3.2 (and shown in Fig. 4), for |φ̂|² ≥ 4(N−1)N−2. There are two color schemes present in this figure. The predominantly dark, multicolored 'X-shaped' region in the center corresponds to values of |φ̂|² for the lowest contiguous hypervolume (the area containing the origin); the colorbar at right applies to colors in this region. The lightly colored, monotone areas away from the first contiguous hypervolume correspond to additional contiguous hypervolumes (areas) with |φ̂|² ≥ 4(N−1)N−2. The colors of these areas are arbitrary and only serve to visually differentiate nearby contiguous areas from one another.

Recent research has shown that the kernel and its bandwidth can both be optimally and objectively chosen from the input data itself without making any assumptions about the form of the distribution. [19] The resulting kernel density estimate converges rapidly to the true probability distribution as samples are added: at a rate close to that expected for parametric estimators. [19] [20] [21] This kernel estimator works for univariate and multivariate samples alike. The optimal kernel is defined in Fourier space, as the optimal damping function ψ̂(t) (the Fourier transform of the kernel), in terms of the Fourier transform of the data φ̂(t), the empirical characteristic function (see Kernel density estimation):

\hat{\psi}(\vec{t}) = \frac{N}{2(N-1)} \left[ 1 + \sqrt{1 - \frac{4(N-1)}{N^2 |\hat{\varphi}(\vec{t})|^2}} \right] I_{\vec{A}}(\vec{t}) [21]

where N is the number of data points, d is the number of dimensions (variables), and I_A(t) is a filter that is equal to 1 for 'accepted frequencies' and 0 otherwise. There are various ways to define this filter function, and a simple one that works for univariate or multivariate samples is called the 'lowest contiguous hypervolume filter'; I_A(t) is chosen such that the only accepted frequencies are a contiguous subset of frequencies surrounding the origin for which |φ̂(t)|² ≥ 4(N−1)N−2 (see [21] for a discussion of this and other filter functions).

Note that direct calculation of the empirical characteristic function (ECF) is slow, since it essentially involves a direct Fourier transform of the data samples. However, it has been found that the ECF can be approximated accurately using a non-uniform fast Fourier transform (nuFFT) method, [20] [21] which increases the calculation speed by several orders of magnitude (depending on the dimensionality of the problem). The combination of this objective KDE method and the nuFFT-based ECF approximation has been referred to as fastKDE in the literature. [21]
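
To make the ECF concrete, the following R sketch computes it directly on a small frequency grid for a bivariate sample; this brute-force loop over frequencies is precisely the slow calculation that the nuFFT approximation avoids, and the grid, sample and variable names are purely illustrative.

# Direct computation of the empirical characteristic function (ECF)
# phi_hat(t) = (1/N) * sum_j exp(i * t . x_j) on a frequency grid.
set.seed(1)
N <- 1000
x <- matrix(rnorm(2 * N), ncol = 2)          # bivariate sample

tg    <- seq(-5, 5, length.out = 41)         # frequencies per axis
freqs <- as.matrix(expand.grid(tg, tg))      # all 2D frequency vectors t

ecf <- apply(freqs, 1, function(t) mean(exp(1i * drop(x %*% t))))

# Squared modulus used by the lowest contiguous hypervolume filter:
ecf_sq    <- Mod(ecf)^2
threshold <- 4 * (N - 1) / N^2               # accept frequencies with ecf_sq >= threshold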

A non-trivial mixture of normal distributions: (a) the underlying PDF, (b) a fastKDE estimate on 1,000,000 samples, and (c) a fastKDE estimate on 10,000 samples.


References

  1. Rosenblatt, M. (1956). "Remarks on some nonparametric estimates of a density function". Annals of Mathematical Statistics. 27 (3): 832–837. doi: 10.1214/aoms/1177728190 .
  2. Parzen, E. (1962). "On estimation of a probability density function and mode". Annals of Mathematical Statistics. 33 (3): 1065–1076. doi: 10.1214/aoms/1177704472 .
  3. Simonoff, J.S. (1996). Smoothing Methods in Statistics. Springer. ISBN 978-0-387-94716-7.
  4. Silverman, B.W. (1986). Density Estimation for Statistics and Data Analysis . Chapman & Hall/CRC. pp.  7–11. ISBN   978-0-412-24620-3.
  5. Wand, M.P.; Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall/CRC. ISBN 978-0-412-55270-0.
  6. Wand, M.P.; Jones, M.C. (1993). "Comparison of smoothing parameterizations in bivariate kernel density estimation". Journal of the American Statistical Association. 88 (422): 520–528. doi:10.1080/01621459.1993.10476303. JSTOR   2290332.
  7. Duong, T.; Hazelton, M.L. (2003). "Plug-in bandwidth matrices for bivariate kernel density estimation". Journal of Nonparametric Statistics. 15: 17–30. doi:10.1080/10485250306039.
  8. Wand, M.P.; Jones, M.C. (1994). "Multivariate plug-in bandwidth selection". Computational Statistics. 9: 97–177.
  9. Duong, T.; Hazelton, M.L. (2005). "Cross validation bandwidth matrices for multivariate kernel density estimation". Scandinavian Journal of Statistics. 32 (3): 485–506. doi:10.1111/j.1467-9469.2005.00445.x.
  10. Hall, P.; Marron, J.; Park, B. (1992). "Smoothed cross-validation". Probability Theory and Related Fields. 92: 1–20. doi: 10.1007/BF01205233 .
  11. Duong, T.; Hazelton, M.L. (2005). "Convergence rates for unconstrained bandwidth matrix selectors in multivariate kernel density estimation". Journal of Multivariate Analysis. 93 (2): 417–433. doi: 10.1016/j.jmva.2004.04.004 .
  12. Duong, T. (2007). "ks: Kernel density estimation and kernel discriminant analysis in R". Journal of Statistical Software. 21 (7). doi: 10.18637/jss.v021.i07 .
  13. Botev, Z.I.; Grotowski, J.F.; Kroese, D.P. (2010). "Kernel density estimation via diffusion". Annals of Statistics . 38 (5): 2916–2957. arXiv: 1011.2602 . doi:10.1214/10-AOS799.
  14. Hall, P.; Wand, M.P. (1988). "Minimizing L1 distance in nonparametric density estimation". Journal of Multivariate Analysis. 26: 59–88. doi: 10.1016/0047-259X(88)90073-5 .
  15. Cao, R.; Cuevas, A.; Manteiga, W.G. (1994). "A comparative study of several smoothing methods in density estimation". Computational Statistics and Data Analysis. 17 (2): 153–176. doi:10.1016/0167-9473(92)00066-Z.
  16. Hall, P. (1989). "On Kullback-Leibler loss and density estimation". Annals of Statistics. 15 (4): 589–605. doi: 10.1214/aos/1176350606 .
  17. Ahmad, I.A.; Mugdadi, A.R. (2006). "Weighted Hellinger distance as an error criterion for bandwidth selection in kernel estimation". Journal of Nonparametric Statistics. 18 (2): 215–226. doi:10.1080/10485250600712008.
  18. Marron, J.S.; Tsybakov, A. (1996). "Visual error criteria for qualitative smoothing". Journal of the American Statistical Association. 90 (430): 499–507. doi:10.2307/2291060. JSTOR   2291060.
  19. Bernacchia, Alberto; Pigolotti, Simone (2011-06-01). "Self-consistent method for density estimation". Journal of the Royal Statistical Society, Series B. 73 (3): 407–422. arXiv:0908.3856. doi:10.1111/j.1467-9868.2011.00772.x. ISSN 1467-9868.
  20. O’Brien, Travis A.; Collins, William D.; Rauscher, Sara A.; Ringler, Todd D. (2014-11-01). "Reducing the computational cost of the ECF using a nuFFT: A fast and objective probability density estimation method". Computational Statistics & Data Analysis. 79: 222–234. doi:10.1016/j.csda.2014.06.002.
  21. O’Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; Collins, William D.; O’Brien, John P. (2016). "A fast and objective multidimensional kernel density estimation method: fastKDE" (PDF). Computational Statistics & Data Analysis. 101: 148–160. doi:10.1016/j.csda.2016.02.014.