Low-rank matrix approximations are essential tools in the application of kernel methods to large-scale learning problems. [1]
Kernel methods (for instance, support vector machines or Gaussian processes [2] ) project data points into a high-dimensional or infinite-dimensional feature space and find the optimal splitting hyperplane. In kernel methods the data is represented in a kernel matrix (or Gram matrix), and many algorithms can solve machine learning problems using this matrix. The main problem of kernel methods is their high computational cost associated with kernel matrices. The cost is at least quadratic in the number of training data points, and since most kernel methods involve matrix inversion or eigenvalue decomposition, the cost becomes cubic in the number of training data points. Large training sets therefore cause large storage and computational costs. While low-rank decomposition methods (such as the Cholesky decomposition) reduce this cost, they still require computing the full kernel matrix. One of the approaches to deal with this problem is low-rank matrix approximation. The most popular examples are the Nyström approximation and randomized feature maps. Both have been successfully applied to efficient kernel learning.
Kernel methods become computationally unfeasible when the number of points is so large that the kernel matrix cannot be stored in memory.
If $n$ is the number of training examples, the storage and computational cost required to find the solution of the problem using a general kernel method are $O(n^2)$ and $O(n^3)$ respectively. The Nyström approximation can allow a significant speed-up of the computations. [2] [3] This speed-up is achieved by using, instead of the kernel matrix, its approximation $\tilde K$ of rank $q$. An advantage of the method is that it is not necessary to compute or store the whole kernel matrix, but only a submatrix of size $n \times q$.
It reduces the storage and complexity requirements to $O(nq)$ and $O(nq^2)$ respectively.
The method is named "Nyström approximation" because it can be interpreted as a case of the Nyström method from integral equation theory. [3]
Consider a positive-definite kernel function $k$. Given some data points $x_1, \dots, x_n$, we can form the kernel matrix $K \in \mathbb{R}^{n \times n}$ such that $K_{ij} = k(x_i, x_j)$.
Now, let $q \le n$ be an integer; then we can divide the kernel matrix as $K = \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}$, where $K_{11}$ is its top-left $q \times q$ corner. Also, set $C = \begin{bmatrix} K_{11} \\ K_{21} \end{bmatrix}$ to be its first $q$ columns.
The Nyström approximation of $K$ in this case is $\tilde K = C K_{11}^{+} C^{T}$, where $K_{11}^{+}$ is the Moore–Penrose pseudoinverse of $K_{11}$, which must exist since $K_{11}$ is positive semidefinite.
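A minimal sketch of this construction in Python with NumPy, assuming a Gaussian RBF kernel; the function names (`rbf_kernel`, `nystroem_approximation`) are illustrative choices, not from the source:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def nystroem_approximation(X, q, gamma=1.0):
    """Rank-q Nystroem approximation K_tilde = C K11^+ C^T.

    Only C (the first q columns of K, an n x q matrix) and its top
    q x q block K11 are ever computed from the data."""
    C = rbf_kernel(X, X[:q], gamma)        # n x q: first q columns of K
    K11 = C[:q, :]                         # q x q: top-left corner of K
    return C @ np.linalg.pinv(K11) @ C.T   # Moore-Penrose pseudoinverse

# Usage: compare against the exact kernel matrix on a small problem.
# (The full n x n matrices are formed here only for the comparison.)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
K = rbf_kernel(X, X)
K_tilde = nystroem_approximation(X, q=50)
print(np.linalg.norm(K - K_tilde) / np.linalg.norm(K))
```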
By Mercer's theorem, we can decompose the kernel matrix as a Gram matrix: $K = X^{T} X$, where $X \in \mathbb{R}^{m \times n}$. Let $X_1$ be the left $q$ columns of $X$ and $X_2$ the remaining $n - q$ columns, so that $K_{11} = X_1^{T} X_1$, $K_{12} = X_1^{T} X_2$ and $K_{21} = X_2^{T} X_1$.
Theorem — We have
1. The top-left $q \times q$ block of $\tilde K$ is $K_{11}$ by the proposition about the pseudoinverse: $\tilde K_{11} = K_{11} K_{11}^{+} K_{11} = K_{11}$.
Take the SVD $X_1 = U \Sigma V^{T}$, where $U$ and $V$ are full squares and $\Sigma$ has shape $m \times q$. Now evaluate by brute force the top-right corner, which gives $\tilde K_{12} = K_{11} K_{11}^{+} K_{12} = V \, \Sigma^{T} \Sigma (\Sigma^{T} \Sigma)^{+} \Sigma^{T} \, U^{T} X_2.$ Count the diagonal entries, and see that the sigma-cluster $\Sigma^{T} \Sigma (\Sigma^{T} \Sigma)^{+} \Sigma^{T}$ evaluates to $\Sigma^{T}$, so the top-right corner equals $V \Sigma^{T} U^{T} X_2 = X_1^{T} X_2 = K_{12}$.
2. From the first two requirements, we know that . From the previous calculation, we know that , so we can write . The third requirement then implies .
3. Evaluate $\tilde K = C K_{11}^{+} C^{T}$ directly in matrix blocks, and get $\tilde K = \begin{bmatrix} X_1^{T} X_1 \\ X_2^{T} X_1 \end{bmatrix} (X_1^{T} X_1)^{+} \begin{bmatrix} X_1^{T} X_1 & X_1^{T} X_2 \end{bmatrix} = X^{T} X_1 (X_1^{T} X_1)^{+} X_1^{T} X.$
Take the SVD $X_1 = U \Sigma V^{T}$, where $U$ and $V$ are full squares and $\Sigma$ has shape $m \times q$. Now evaluate $X_1 (X_1^{T} X_1)^{+} X_1^{T} = U \Sigma (\Sigma^{T} \Sigma)^{+} \Sigma^{T} U^{T} = U D U^{T},$ where $D = \Sigma (\Sigma^{T} \Sigma)^{+} \Sigma^{T}$ is a matrix of shape $m \times m$. It is diagonal; its first $\operatorname{rank}(X_1)$ entries are 1, and the rest are 0.
Thus, $X_1 (X_1^{T} X_1)^{+} X_1^{T} = U D U^{T}$ is the orthogonal projection onto the span of the first $\operatorname{rank}(X_1)$ columns of $U$, which is exactly the column space of $X_1$.
(4) is a corollary of (3), and (5) is a corollary of (2).
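The block-agreement and projection properties established in steps (1) and (3) can be checked numerically. The following is a small sketch (Python/NumPy, synthetic data, with variable names chosen here for illustration) in the Gram-matrix setting $K = X^{T} X$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, q = 30, 100, 20
X = rng.normal(size=(m, n))            # Gram-matrix factor: K = X^T X
K = X.T @ X

C, K11 = K[:, :q], K[:q, :q]           # first q columns and top-left block
K_tilde = C @ np.linalg.pinv(K11) @ C.T

# Step (1): the first q rows and columns of K are reproduced exactly.
assert np.allclose(K_tilde[:q, :], K[:q, :])
assert np.allclose(K_tilde[:, :q], K[:, :q])

# Step (3): K_tilde = X^T P X, where P is the orthogonal projection
# onto the column space of X1 (the first q columns of X).
X1 = X[:, :q]
P = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T
assert np.allclose(K_tilde, X.T @ P @ X)
```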
In a vector and kernel notation, the problem of regularized least squares can be rewritten as: $\min_{c \in \mathbb{R}^n} \frac{1}{n} \| Y - K c \|^2 + \lambda \, c^{T} K c,$ where $Y \in \mathbb{R}^n$ is the vector of training labels and $c \in \mathbb{R}^n$ the coefficient vector. Computing the gradient and setting it to 0, the minimum can be obtained: $-\frac{1}{n} K (Y - K c) + \lambda K c = 0 \;\Rightarrow\; c = (K + \lambda n I)^{-1} Y.$ The inverse matrix $(K + \lambda n I)^{-1}$ can be computed using the Woodbury matrix identity, with the rank-$q$ approximation $\tilde K = C K_{11}^{-1} C^{T}$ (assuming $K_{11}$ is invertible) in place of $K$: $(\tilde K + \lambda n I)^{-1} = \frac{1}{\lambda n} \left( I - C (\lambda n K_{11} + C^{T} C)^{-1} C^{T} \right).$ It has the desired storage and complexity requirements.
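A sketch of how this Woodbury step might be implemented (Python/NumPy; `C` and `K11` are the matrices defined above, `lam` stands for $\lambda$, $K_{11}$ is assumed invertible, and the function name is an illustrative choice):

```python
import numpy as np

def nystroem_rls_coefficients(C, K11, Y, lam):
    """Solve (K_tilde + lam*n*I) c = Y with K_tilde = C K11^{-1} C^T,
    without ever forming an n x n matrix, via the Woodbury identity:

        (K_tilde + lam*n*I)^{-1}
            = (1 / (lam*n)) * (I - C (lam*n*K11 + C^T C)^{-1} C^T)
    """
    n = C.shape[0]
    S = lam * n * K11 + C.T @ C                # q x q system to factorize
    c = (Y - C @ np.linalg.solve(S, C.T @ Y)) / (lam * n)
    return c                                   # O(nq) storage, O(nq^2) time
```

Only the $q \times q$ matrix `S` is factorized, which is where the $O(nq^2)$ cost comes from.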
Let $x_1, \dots, x_n$ be samples of data and $z : \mathbb{R}^d \to \mathbb{R}^D$ a randomized feature map (mapping a single vector to a vector of higher dimensionality) so that the inner product between a pair of transformed points approximates their kernel evaluation: $\langle z(x), z(y) \rangle \approx \langle \varphi(x), \varphi(y) \rangle = k(x, y),$
where $\varphi$ is the mapping embedded in the RBF kernel.
Since $z$ is low-dimensional, the input can easily be transformed with it; after that, different linear learning methods can be applied to approximate the answer of the corresponding nonlinear kernel. There are different randomized feature maps for computing such approximations to the RBF kernels, for instance random Fourier features and random binning features.
The random Fourier features map produces a Monte Carlo approximation to the feature map; the approximation is therefore randomized rather than deterministic. These random features consist of sinusoids $\cos(w^{T} x + b)$ randomly drawn from the Fourier transform of the kernel to be approximated, where $w \in \mathbb{R}^d$ and $b \in \mathbb{R}$ are random variables. The line defined by $w$ is randomly chosen, then the data points are projected onto it by the mapping $x \mapsto w^{T} x$. The resulting scalar is passed through a sinusoid. The product of the transformed points approximates a shift-invariant kernel. Since the map is smooth, random Fourier features work well on interpolation tasks.
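A minimal random Fourier features sketch for the unit-bandwidth Gaussian RBF kernel $k(x, y) = \exp(-\|x - y\|^2 / 2)$ (Python/NumPy; the feature dimension $D$ and the function name are illustrative choices, not from the source):

```python
import numpy as np

def random_fourier_features(X, D, rng):
    """z(x) = sqrt(2/D) * cos(W x + b), with the rows of W drawn from the
    Fourier transform of the RBF kernel (standard normal for unit bandwidth)
    and b uniform on [0, 2*pi], so that <z(x), z(y)> ~ exp(-||x - y||^2 / 2)."""
    d = X.shape[1]
    W = rng.normal(size=(D, d))                # random projection directions
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)  # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Usage: the Gram matrix of the features approximates the RBF kernel matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, D=2000, rng=rng)
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
print(np.abs(Z @ Z.T - K_exact).max())
```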
A random binning features map partitions the input space using randomly shifted grids at randomly chosen resolutions and assigns to an input point $x$ a binary bit string $z(x)$ that corresponds to the bins in which it falls. The grids are constructed so that the probability that two points $x$ and $y$ are assigned to the same bin is proportional to $k(x, y)$. The inner product $\langle z(x), z(y) \rangle$ between a pair of transformed points is proportional to the number of times the two points are binned together, and is therefore an unbiased estimate of $k(x, y)$. Since this mapping is not smooth and uses the proximity between input points, random binning features work well for approximating kernels that depend only on the distance between data points.
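A sketch of this construction for the Laplacian kernel $k(x, y) = \exp(-\|x - y\|_1)$ (Python/NumPy; following Rahimi and Recht, the grid pitch in each dimension is drawn from a Gamma(2, 1) distribution; instead of materializing the binary bit strings, the sketch computes their inner product directly as the fraction of grids in which two points share a bin):

```python
import numpy as np

def random_binning_kernel(X, Y, P, rng):
    """Estimate k(x, y) = exp(-||x - y||_1) as the fraction of P random
    grids in which x and y fall into the same bin.  Each grid draws a
    pitch delta ~ Gamma(2, 1) and a shift u ~ Uniform[0, delta] per
    dimension; the bin index is floor((x - u) / delta)."""
    same = np.zeros((X.shape[0], Y.shape[0]))
    for _ in range(P):
        delta = rng.gamma(shape=2.0, scale=1.0, size=X.shape[1])
        u = rng.uniform(0.0, delta)
        bx = np.floor((X - u) / delta)      # per-dimension bin indices
        by = np.floor((Y - u) / delta)
        # two points share a bin iff their indices agree in every dimension
        same += np.all(bx[:, None, :] == by[None, :, :], axis=-1)
    return same / P

# Usage: compare with the exact Laplacian kernel on a few points.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
K_est = random_binning_kernel(X, X, P=5000, rng=rng)
K_exact = np.exp(-np.abs(X[:, None, :] - X[None, :, :]).sum(axis=-1))
print(np.abs(K_est - K_exact).max())
```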
The approaches for large-scale kernel learning (the Nyström method and random features) differ in the fact that the Nyström method uses data-dependent basis functions, while in the random features approach the basis functions are sampled from a distribution independent of the training data. This difference leads to an improved analysis for kernel learning approaches based on the Nyström method: when there is a large gap in the eigenspectrum of the kernel matrix, approaches based on the Nyström method can achieve better results than the random-features-based approach. [6]