Nonlinear dimensionality reduction

Top-left: a 3D dataset of 1000 points in a spiraling band (a.k.a. the Swiss roll) with a rectangular hole in the middle. Top-right: the original 2D manifold used to generate the 3D dataset. Bottom left and right: 2D recoveries of the manifold respectively using the LLE and Hessian LLE algorithms as implemented by the Modular Data Processing toolkit.

Nonlinear dimensionality reduction, also known as manifold learning, is any of various related techniques that aim to project high-dimensional data, potentially lying on non-linear manifolds which cannot be adequately captured by linear decomposition methods, onto lower-dimensional latent manifolds. The goal is either to visualize the data in the low-dimensional space, or to learn the mapping itself (either from the high-dimensional space to the low-dimensional embedding or vice versa). [1] [2] The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.


Applications of NLDR

High-dimensional data can be hard for machines to work with, requiring significant time and space for analysis. It also presents a challenge for humans, since it is hard to visualize or understand data in more than three dimensions. Reducing the dimensionality of a data set, while keeping its essential features relatively intact, can make algorithms more efficient and allow analysts to visualize trends and patterns.


Plot of the two-dimensional points that results from using an NLDR algorithm. In this case, Manifold Sculpting is used to reduce the data into just two dimensions (rotation and scale).

The reduced-dimensional representations of data are often referred to as "intrinsic variables". This description implies that these are the values from which the data was produced. For example, consider a dataset that contains images of a letter 'A', which has been scaled and rotated by varying amounts. Each image has 32×32 pixels. Each image can be represented as a vector of 1024 pixel values. Each row is a sample on a two-dimensional manifold in 1024-dimensional space (a Hamming space). The intrinsic dimensionality is two, because two variables (rotation and scale) were varied in order to produce the data. Information about the shape or look of a letter 'A' is not part of the intrinsic variables because it is the same in every instance. Nonlinear dimensionality reduction will discard the correlated information (the letter 'A') and recover only the varying information (rotation and scale). The image to the right shows sample images from this dataset (to save space, not all input images are shown), and a plot of the two-dimensional points that results from using an NLDR algorithm (in this case, Manifold Sculpting was used) to reduce the data into just two dimensions.

When PCA (a linear dimensionality reduction algorithm) is used to reduce this same dataset into two dimensions, the resulting values are not so well organized.

By comparison, if principal component analysis, which is a linear dimensionality reduction algorithm, is used to reduce this same dataset into two dimensions, the resulting values are not so well organized. This demonstrates that the high-dimensional vectors (each representing a letter 'A') that sample this manifold vary in a non-linear manner.

It should be apparent, therefore, that NLDR has several applications in the field of computer vision. For example, consider a robot that uses a camera to navigate in a closed static environment. The images obtained by that camera can be considered to be samples on a manifold in high-dimensional space, and the intrinsic variables of that manifold will represent the robot's position and orientation.

Invariant manifolds are of general interest for model order reduction in dynamical systems. In particular, if there is an attracting invariant manifold in the phase space, nearby trajectories will converge onto it and stay on it indefinitely, rendering it a candidate for dimensionality reduction of the dynamical system. While such manifolds are not guaranteed to exist in general, the theory of spectral submanifolds (SSM) gives conditions for the existence of unique attracting invariant objects in a broad class of dynamical systems. [3] Active research in NLDR seeks to unfold the observation manifolds associated with dynamical systems to develop modeling techniques. [4]

Some of the more prominent nonlinear dimensionality reduction techniques are listed below.

Important concepts

Sammon's mapping

Sammon's mapping is one of the first and most popular NLDR techniques.

Approximation of a principal curve by a one-dimensional SOM (a broken line with red squares, 20 nodes). The first principal component is shown as a straight blue line. Data points are the small grey circles. For PCA, the fraction of variance unexplained in this example is 23.23%; for SOM it is 6.86%.

Self-organizing map

The self-organizing map (SOM, also called Kohonen map) and its probabilistic variant generative topographic mapping (GTM) use a point representation in the embedded space to form a latent variable model based on a non-linear mapping from the embedded space to the high-dimensional space. [6] These techniques are related to work on density networks, which also are based around the same probabilistic model.

Kernel principal component analysis

Perhaps the most widely used algorithm for nonlinear dimensionality reduction is kernel PCA. [7] PCA begins by computing the covariance matrix of the $m \times n$ data matrix $\mathbf{X}$,

$$C = \frac{1}{m}\sum_{i=1}^{m} \mathbf{x}_i \mathbf{x}_i^{\mathsf{T}}.$$

It then projects the data onto the first k eigenvectors of that matrix. By comparison, KPCA begins by computing the covariance matrix of the data after being transformed into a higher-dimensional space,

$$C = \frac{1}{m}\sum_{i=1}^{m} \Phi(\mathbf{x}_i) \Phi(\mathbf{x}_i)^{\mathsf{T}}.$$

It then projects the transformed data onto the first k eigenvectors of that matrix, just like PCA. It uses the kernel trick to factor away much of the computation, such that the entire process can be performed without actually computing $\Phi(\mathbf{x})$. Of course $\Phi$ must be chosen such that it has a known corresponding kernel. Unfortunately, it is not trivial to find a good kernel for a given problem, so KPCA does not yield good results with some problems when using standard kernels. For example, it is known to perform poorly with these kernels on the Swiss roll manifold. However, one can view certain other methods that perform well in such settings (e.g., Laplacian eigenmaps, LLE) as special cases of kernel PCA by constructing a data-dependent kernel matrix. [8]

KPCA has an internal model, so it can be used to map points onto its embedding that were not available at training time.
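
The following is a minimal sketch of the comparison discussed above, assuming scikit-learn is available; the RBF kernel and its gamma value are illustrative choices rather than recommendations (and, as noted, a standard kernel is not expected to unroll the Swiss roll well).

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA, KernelPCA

X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Linear PCA: project onto the top-2 eigenvectors of the covariance matrix.
X_pca = PCA(n_components=2).fit_transform(X)

# Kernel PCA: the same projection, but in the feature space induced by the
# (here RBF) kernel, computed implicitly via the kernel trick.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.05).fit_transform(X)
```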

Principal curves and manifolds

Application of principal curves: nonlinear quality of life index. Points represent data of 171 UN countries in a 4-dimensional space formed by the values of 4 indicators: gross product per capita, life expectancy, infant mortality, and tuberculosis incidence. Different forms and colors correspond to different geographical locations. The bold red line represents the principal curve, approximating the dataset. This principal curve was produced by the method of elastic maps.

Principal curves and manifolds give the natural geometric framework for nonlinear dimensionality reduction and extend the geometric interpretation of PCA by explicitly constructing an embedded manifold, and by encoding the data using standard geometric projection onto the manifold. This approach was originally proposed by Trevor Hastie in his 1984 thesis, [11] which he formally introduced in 1989. [12] The idea has been explored further by many authors. [13] How to define the "simplicity" of the manifold is problem-dependent; it is commonly measured by the intrinsic dimensionality and/or the smoothness of the manifold. Usually, the principal manifold is defined as a solution to an optimization problem whose objective function includes a quality of data approximation and some penalty terms for the bending of the manifold. Popular initial approximations are generated by linear PCA and Kohonen's SOM.

Laplacian eigenmaps

Laplacian eigenmaps uses spectral techniques to perform dimensionality reduction. [14] This technique relies on the basic assumption that the data lies in a low-dimensional manifold in a high-dimensional space. [15] This algorithm cannot embed out-of-sample points, but techniques based on Reproducing kernel Hilbert space regularization exist for adding this capability. [16] Such techniques can be applied to other nonlinear dimensionality reduction algorithms as well.

Traditional techniques like principal component analysis do not consider the intrinsic geometry of the data. Laplacian eigenmaps builds a graph from neighborhood information of the data set. Each data point serves as a node on the graph and connectivity between nodes is governed by the proximity of neighboring points (using e.g. the k-nearest neighbor algorithm). The graph thus generated can be considered as a discrete approximation of the low-dimensional manifold in the high-dimensional space. Minimization of a cost function based on the graph ensures that points close to each other on the manifold are mapped close to each other in the low-dimensional space, preserving local distances. The eigenfunctions of the Laplace–Beltrami operator on the manifold serve as the embedding dimensions, since under mild conditions this operator has a countable spectrum that is a basis for square integrable functions on the manifold (compare to Fourier series on the unit circle manifold). Attempts to place Laplacian eigenmaps on solid theoretical ground have met with some success, as under certain nonrestrictive assumptions, the graph Laplacian matrix has been shown to converge to the Laplace–Beltrami operator as the number of points goes to infinity. [15]
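
A bare-bones sketch of the construction described above (k-nearest-neighbour graph, graph Laplacian, bottom eigenvectors), assuming NumPy, SciPy, and scikit-learn are available; the 0/1 edge weights and the neighbour count are simplifying assumptions of this example, since heat-kernel weights are also common.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmap(X, n_components=2, k=10):
    # Symmetrised k-nearest-neighbour adjacency with simple 0/1 weights.
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W                              # unnormalised graph Laplacian
    # Generalised eigenproblem L y = lambda D y; the first eigenvector is the
    # trivial constant vector, so it is skipped.
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]

X, _ = make_swiss_roll(n_samples=1000, random_state=0)
Y = laplacian_eigenmap(X, n_components=2, k=10)
```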

Isomap

Isomap [17] is a combination of the Floyd–Warshall algorithm with classic Multidimensional Scaling (MDS). Classic MDS takes a matrix of pair-wise distances between all points and computes a position for each point. Isomap assumes that the pair-wise distances are only known between neighboring points, and uses the Floyd–Warshall algorithm to compute the pair-wise distances between all other points. This effectively estimates the full matrix of pair-wise geodesic distances between all of the points. Isomap then uses classic MDS to compute the reduced-dimensional positions of all the points. Landmark-Isomap is a variant of this algorithm that uses landmarks to increase speed, at the cost of some accuracy.
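
Illustrative usage of Isomap through scikit-learn's implementation (assumed available); the neighbourhood size below is an arbitrary example value. Internally, shortest-path distances over the neighbourhood graph approximate the geodesics, and classical MDS then produces the low-dimensional coordinates.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)
# Geodesic distances are estimated over a 10-nearest-neighbour graph,
# then classical MDS recovers two-dimensional coordinates.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
```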

In manifold learning, the input data is assumed to be sampled from a low dimensional manifold that is embedded inside of a higher-dimensional vector space. The main intuition behind MVU is to exploit the local linearity of manifolds and create a mapping that preserves local neighbourhoods at every point of the underlying manifold.

Locally-linear embedding

Locally-linear Embedding (LLE) was presented at approximately the same time as Isomap. [18] It has several advantages over Isomap, including faster optimization when implemented to take advantage of sparse matrix algorithms, and better results with many problems. LLE also begins by finding a set of the nearest neighbors of each point. It then computes a set of weights for each point that best describes the point as a linear combination of its neighbors. Finally, it uses an eigenvector-based optimization technique to find the low-dimensional embedding of points, such that each point is still described with the same linear combination of its neighbors. LLE tends to handle non-uniform sample densities poorly because there is no fixed unit to prevent the weights from drifting as various regions differ in sample densities. LLE has no internal model.

The original data points are $X_1, \dots, X_N \in \mathbb{R}^D$, and the goal of LLE is to embed each point $X_i$ into some low-dimensional point $Y_i \in \mathbb{R}^d$, where $d \ll D$.

LLE has two steps. In the first step, it computes, for each point $X_i$, the best approximation of $X_i$ based on barycentric coordinates of its neighbors $X_j$. The original point is approximately reconstructed by a linear combination, given by the weight matrix $W_{ij}$, of its neighbors. The reconstruction error is

$$E(W) = \sum_i \Bigl| X_i - \sum_j W_{ij} X_j \Bigr|^2.$$

The weight $W_{ij}$ is the amount that the point $X_j$ contributes to the reconstruction of the point $X_i$. The cost function is minimized under two constraints: each point $X_i$ is reconstructed only from its neighbors, so $W_{ij} = 0$ whenever $X_j$ is not a neighbor of $X_i$, and the rows of the weight matrix sum to one, $\sum_j W_{ij} = 1$.

These two constraints ensure that the weights $W_{ij}$ are unaffected by rotation and translation of the data.

In the second step, a neighborhood-preserving map is created based on the weights. Each point $X_i$ is mapped onto a point $Y_i$ in the low-dimensional space by minimizing the embedding cost

$$\Phi(Y) = \sum_i \Bigl| Y_i - \sum_j W_{ij} Y_j \Bigr|^2.$$

Unlike in the previous cost function, the weights $W_{ij}$ are kept fixed and the minimization is done over the points $Y_i$ to optimize the coordinates. This minimization problem can be solved by solving a sparse $N \times N$ eigenvalue problem ($N$ being the number of data points), whose bottom $d$ nonzero eigenvectors provide an orthogonal set of coordinates.

The only hyperparameter in the algorithm is what counts as a "neighbor" of a point. Generally the data points are reconstructed from their $K$ nearest neighbors, as measured by Euclidean distance. In this case, the algorithm has only one integer-valued hyperparameter $K$, which can be chosen by cross-validation.
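
A minimal sketch of the two-step procedure above using scikit-learn's LocallyLinearEmbedding (assumed available); the value of $K$ is illustrative only.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)
# Step 1 (reconstruction weights) and step 2 (bottom eigenvectors) are
# handled internally; K = 12 neighbours is an arbitrary example value.
Y = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                           method="standard").fit_transform(X)
```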

Hessian locally-linear embedding (Hessian LLE)

Like LLE, Hessian LLE is also based on sparse matrix techniques. [19] It tends to yield results of a much higher quality than LLE. Unfortunately, it is computationally expensive, so it is not well-suited for heavily sampled manifolds. It has no internal model.

Modified Locally-Linear Embedding (MLLE)

Modified LLE (MLLE) [20] is another LLE variant which uses multiple weights in each neighborhood to address the local weight matrix conditioning problem which leads to distortions in LLE maps. Loosely speaking, the multiple weights are the local orthogonal projection of the original weights produced by LLE. The creators of this regularised variant are also the authors of Local Tangent Space Alignment (LTSA), which is implicit in the MLLE formulation when one realises that the global optimisation of the orthogonal projections of each weight vector in essence aligns the local tangent spaces of every data point. The theoretical and empirical implications from the correct application of this algorithm are far-reaching. [21]

Local tangent space alignment

LTSA [22] is based on the intuition that when a manifold is correctly unfolded, all of the tangent hyperplanes to the manifold will become aligned. It begins by computing the k-nearest neighbors of every point. It computes the tangent space at every point by computing the first d principal components in each local neighborhood. It then optimizes to find an embedding that aligns the tangent spaces.
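
The Hessian LLE, MLLE, and LTSA variants described above are all exposed by scikit-learn's LocallyLinearEmbedding through its method argument; the sketch below assumes that library, and the neighbour count is illustrative only (Hessian LLE and LTSA require somewhat larger neighbourhoods than the target dimension).

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)
# "hessian" = Hessian LLE, "modified" = MLLE, "ltsa" = local tangent space alignment.
for method in ("hessian", "modified", "ltsa"):
    emb = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method=method)
    Y = emb.fit_transform(X)
```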

Maximum variance unfolding

Maximum Variance Unfolding, Isomap and Locally Linear Embedding share a common intuition relying on the notion that if a manifold is properly unfolded, then variance over the points is maximized. Its initial step, like Isomap and Locally Linear Embedding, is finding the k-nearest neighbors of every point. It then seeks to solve the problem of maximizing the distance between all non-neighboring points, constrained such that the distances between neighboring points are preserved. The primary contribution of this algorithm is a technique for casting this problem as a semidefinite programming problem. Unfortunately, semidefinite programming solvers have a high computational cost. Like Locally Linear Embedding, it has no internal model.
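
A compact sketch of the semidefinite-programming formulation described above, assuming cvxpy and scikit-learn are available: it maximizes the total variance (the trace of the Gram matrix) subject to centering and to preserving squared distances between neighbours. It is meant to mirror the formulation, not to be an efficient implementation.

```python
import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

def mvu(X, n_components=2, k=6):
    n = X.shape[0]
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                       # symmetrise the neighbour graph
    G = cp.Variable((n, n), PSD=True)            # Gram matrix of the embedding
    constraints = [cp.sum(G) == 0]               # centre the embedding
    for i, j in zip(*np.nonzero(np.triu(W))):    # preserve neighbour distances
        d2 = float(np.sum((X[i] - X[j]) ** 2))
        constraints.append(G[i, i] - 2 * G[i, j] + G[j, j] == d2)
    cp.Problem(cp.Maximize(cp.trace(G)), constraints).solve()
    vals, vecs = np.linalg.eigh(G.value)         # top eigenvectors give coordinates
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```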

Autoencoders

An autoencoder is a feed-forward neural network which is trained to approximate the identity function. That is, it is trained to map from a vector of values to the same vector. When used for dimensionality reduction purposes, one of the hidden layers in the network is limited to contain only a small number of network units. Thus, the network must learn to encode the vector into a small number of dimensions and then decode it back into the original space. Thus, the first half of the network is a model which maps from high to low-dimensional space, and the second half maps from low to high-dimensional space. Although the idea of autoencoders is quite old, [23] training of deep autoencoders has only recently become possible through the use of restricted Boltzmann machines and stacked denoising autoencoders. Related to autoencoders is the NeuroScale algorithm, which uses stress functions inspired by multidimensional scaling and Sammon mappings (see above) to learn a non-linear mapping from the high-dimensional to the embedded space. The mappings in NeuroScale are based on radial basis function networks.
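
A minimal PyTorch sketch of the encoder/decoder split described above (PyTorch is assumed available); the layer sizes, the two-dimensional bottleneck, and the random stand-in data are illustrative assumptions of this example.

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, dim_in=1024, dim_latent=2):
        super().__init__()
        # First half: high-dimensional space -> low-dimensional bottleneck.
        self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(),
                                     nn.Linear(128, dim_latent))
        # Second half: bottleneck -> back to the original space.
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                   # trained to approximate the identity
x = torch.rand(64, 1024)                 # stand-in batch of data
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)          # reconstruction error
    loss.backward()
    optimizer.step()
codes = model.encoder(x)                 # low-dimensional representation
```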

Gaussian process latent variable models

Gaussian process latent variable models (GPLVM) [24] are probabilistic dimensionality reduction methods that use Gaussian processes (GPs) to find a lower-dimensional non-linear embedding of high-dimensional data. They are an extension of the probabilistic formulation of PCA. The model is defined probabilistically, the latent variables are then marginalized, and the parameters are obtained by maximizing the likelihood. Like kernel PCA, they use a kernel function to form a non-linear mapping (in the form of a Gaussian process). However, in the GPLVM the mapping is from the embedded (latent) space to the data space (like density networks and GTM), whereas in kernel PCA it is in the opposite direction. It was originally proposed for visualization of high-dimensional data but has been extended to construct a shared manifold model between two observation spaces. GPLVM and its many variants have been proposed especially for human motion modeling, e.g., back-constrained GPLVM, the GP dynamic model (GPDM), balanced GPDM (B-GPDM), and topologically constrained GPDM. To capture the coupling effect of the pose and gait manifolds in gait analysis, a multi-layer joint gait-pose manifold model was proposed. [25]

t-distributed stochastic neighbor embedding

t-distributed stochastic neighbor embedding (t-SNE) [26] is widely used. It is one of a family of stochastic neighbor embedding methods. The algorithm computes the probability that pairs of datapoints in the high-dimensional space are related, and then chooses low-dimensional embeddings which produce a similar distribution.
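
Typical usage of t-SNE through scikit-learn (assumed available); the perplexity value is illustrative, and the digits dataset merely stands in for any high-dimensional data.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)      # 64-dimensional digit images
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
```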

Other algorithms

Relational perspective map

Relational perspective map is a multidimensional scaling algorithm. The algorithm finds a configuration of data points on a manifold by simulating a multi-particle dynamic system on a closed manifold, where data points are mapped to particles and distances (or dissimilarities) between data points represent a repulsive force. As the manifold gradually grows in size, the multi-particle system cools down and converges to a configuration that reflects the distance information of the data points.

Relational perspective map was inspired by a physical model in which positively charged particles move freely on the surface of a ball. Guided by the Coulomb force between particles, the minimal energy configuration of the particles will reflect the strength of repulsive forces between the particles.

The relational perspective map was introduced in [27]. The algorithm first used the flat torus as the image manifold; it has since been extended (in the software VisuMap) to use other types of closed manifolds, such as the sphere, the projective space, and the Klein bottle, as image manifolds.

Contagion maps

Contagion maps use multiple contagions on a network to map the nodes as a point cloud. [28] In the case of the Global cascades model the speed of the spread can be adjusted with the threshold parameter $t \in [0, 1]$. For $t = 0$ the contagion map is equivalent to the Isomap algorithm.

Curvilinear component analysis

Curvilinear component analysis (CCA) looks for the configuration of points in the output space that preserves original distances as much as possible while focusing on small distances in the output space (conversely to Sammon's mapping, which focuses on small distances in the original space). [29]

It should be noted that CCA, as an iterative learning algorithm, actually starts by focusing on large distances (like the Sammon algorithm), then gradually changes focus to small distances. The small-distance information will overwrite the large-distance information if compromises between the two have to be made.

The stress function of CCA is related to a sum of right Bregman divergences. [30]

Curvilinear distance analysis

CDA [29] trains a self-organizing neural network to fit the manifold and seeks to preserve geodesic distances in its embedding. It is based on Curvilinear Component Analysis (which extended Sammon's mapping), but uses geodesic distances instead.

Diffeomorphic dimensionality reduction

Diffeomorphic Dimensionality Reduction or Diffeomap [31] learns a smooth diffeomorphic mapping which transports the data onto a lower-dimensional linear subspace. The method solves for a smooth, time-indexed vector field such that flows along the field that start at the data points end at a lower-dimensional linear subspace, thereby attempting to preserve pairwise differences under both the forward and inverse mapping.

Manifold alignment

Manifold alignment takes advantage of the assumption that disparate data sets produced by similar generating processes will share a similar underlying manifold representation. By learning projections from each original space to the shared manifold, correspondences are recovered and knowledge from one domain can be transferred to another. Most manifold alignment techniques consider only two data sets, but the concept extends to arbitrarily many initial data sets. [32]

Diffusion maps

Diffusion maps leverages the relationship between heat diffusion and a random walk (Markov chain); an analogy is drawn between the diffusion operator on a manifold and a Markov transition matrix operating on functions defined on the graph whose nodes were sampled from the manifold. [33] In particular, let a data set be represented by $\mathbf{X} = \{x_1, x_2, \ldots, x_n\} \subset \mathbb{R}^D$. The underlying assumption of the diffusion map is that the high-dimensional data lies on a low-dimensional manifold of dimension $d$. Let $\mathbf{X}$ represent the data set and $\mu$ represent the distribution of the data points on $\mathbf{X}$. Further, define a kernel $k(x, y)$ which represents some notion of affinity of the points in $\mathbf{X}$. The kernel has the following properties [34]

k is symmetric: $k(x, y) = k(y, x)$

k is positivity preserving: $k(x, y) \geq 0$

Thus one can think of the individual data points as the nodes of a graph and the kernel k as defining some sort of affinity on that graph. The graph is symmetric by construction since the kernel is symmetric. It is easy to see here that from the tuple (X,k) one can construct a reversible Markov Chain. This technique is common to a variety of fields and is known as the graph Laplacian.

For example, the graph $K = (X, E)$ can be constructed using a Gaussian kernel:

$$K_{ij} = \begin{cases} e^{-\lVert x_i - x_j \rVert_2^2 / \sigma^2} & \text{if } x_i \sim x_j \\ 0 & \text{otherwise.} \end{cases}$$

In the above equation, $x_i \sim x_j$ denotes that $x_j$ is a nearest neighbor of $x_i$. Properly, geodesic distance should be used to actually measure distances on the manifold. Since the exact structure of the manifold is not available, the geodesic distance between nearest neighbors is approximated by the Euclidean distance. The choice of $\sigma$ modulates our notion of proximity in the sense that if $\lVert x_i - x_j \rVert_2 \gg \sigma$ then $K_{ij} \approx 0$, and if $\lVert x_i - x_j \rVert_2 \ll \sigma$ then $K_{ij} \approx 1$. The former means that very little diffusion has taken place while the latter implies that the diffusion process is nearly complete. Different strategies to choose $\sigma$ can be found in [35].

In order to faithfully represent a Markov matrix, $K$ must be normalized by the corresponding degree matrix $D$:

$$M = D^{-1} K, \qquad \text{where } D_{ii} = \sum_j K_{ij}.$$

$M$ now represents a Markov chain. $M_{ij}$ is the probability of transitioning from $x_i$ to $x_j$ in one time step. Similarly, the probability of transitioning from $x_i$ to $x_j$ in $t$ time steps is given by $(M^t)_{ij}$. Here $M^t$ is the matrix $M$ multiplied by itself $t$ times.

The Markov matrix constitutes some notion of local geometry of the data set X. The major difference between diffusion maps and principal component analysis is that only local features of the data are considered in diffusion maps as opposed to taking correlations of the entire data set.

$M$ defines a random walk on the data set, which means that the kernel captures some local geometry of the data set. The Markov chain defines fast and slow directions of propagation through the kernel values. As the walk propagates forward in time, the local geometry information aggregates in the same way as local transitions (defined by differential equations) of the dynamical system. [34] The metaphor of diffusion arises from the definition of a family of diffusion distances $\{ D_t \}_{t \in \mathbb{N}}$,

$$D_t(x, y)^2 = \sum_u \frac{\bigl( p_t(x, u) - p_t(y, u) \bigr)^2}{\phi_0(u)},$$

where $p_t(x, u) = (M^t)_{xu}$ is the probability of walking from $x$ to $u$ in $t$ steps and $\phi_0$ is the stationary distribution of the chain.

For fixed $t$, $D_t$ defines a distance between any two points of the data set based on path connectivity: the value of $D_t(x, y)$ will be smaller the more paths connect $x$ to $y$, and vice versa. Because the quantity $D_t(x, y)$ involves a sum over all paths of length $t$, it is much more robust to noise in the data than geodesic distance. $D_t$ takes into account all the relations between points $x$ and $y$ while calculating the distance, and serves as a better notion of proximity than just Euclidean distance or even geodesic distance.
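
A bare-bones NumPy sketch of the construction above (Gaussian kernel, degree normalization, eigenvectors of the Markov matrix); the bandwidth sigma, the diffusion time t, and the use of a dense kernel are simplifying assumptions of this example.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll

def diffusion_map(X, n_components=2, sigma=2.0, t=1):
    # Dense Gaussian kernel k(x, y); sparse, neighbourhood-restricted kernels
    # are preferred for large data sets.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / sigma**2)
    M = K / K.sum(axis=1, keepdims=True)                 # Markov matrix M = D^{-1} K
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[1:n_components + 1]   # skip the trivial eigenvalue 1
    # Diffusion coordinates: right eigenvectors scaled by eigenvalues^t.
    return (vals.real[order] ** t) * vecs.real[:, order]

X, _ = make_swiss_roll(n_samples=500, random_state=0)
Y = diffusion_map(X, n_components=2, sigma=2.0, t=1)
```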

Local multidimensional scaling

Local Multidimensional Scaling performs multidimensional scaling in local regions, and then uses convex optimization to fit all the pieces together. [36]

Nonlinear PCA

Nonlinear PCA (NLPCA) uses backpropagation to train a multi-layer perceptron (MLP) to fit to a manifold. [37] Unlike typical MLP training, which only updates the weights, NLPCA updates both the weights and the inputs. That is, both the weights and inputs are treated as latent values. After training, the latent inputs are a low-dimensional representation of the observed vectors, and the MLP maps from that low-dimensional representation to the high-dimensional observation space.

Data-driven high-dimensional scaling

Data-driven high-dimensional scaling (DD-HDS) [38] is closely related to Sammon's mapping and curvilinear component analysis except that (1) it simultaneously penalizes false neighborhoods and tears by focusing on small distances in both the original and output space, and (2) it accounts for the concentration of measure phenomenon by adapting the weighting function to the distance distribution.

Manifold sculpting

Manifold Sculpting [39] uses graduated optimization to find an embedding. Like other algorithms, it computes the k-nearest neighbors and tries to seek an embedding that preserves relationships in local neighborhoods. It slowly scales variance out of higher dimensions, while simultaneously adjusting points in lower dimensions to preserve those relationships. If the rate of scaling is small, it can find very precise embeddings. It has shown higher empirical accuracy than other algorithms on several problems. It can also be used to refine the results from other manifold learning algorithms. It struggles to unfold some manifolds, however, unless a very slow scaling rate is used. It has no model.

RankVisu

RankVisu [40] is designed to preserve the rank of neighborhood rather than distance. RankVisu is especially useful on difficult tasks (when the preservation of distance cannot be achieved satisfactorily). Indeed, the rank of neighborhood is less informative than distance (ranks can be deduced from distances but distances cannot be deduced from ranks), and its preservation is thus easier.

Topologically constrained isometric embedding

Topologically constrained isometric embedding (TCIE) [41] is an algorithm based on approximating geodesic distances after filtering geodesics inconsistent with the Euclidean metric. Aimed at correcting the distortions caused when Isomap is used to map intrinsically non-convex data, TCIE uses weighted least-squares MDS in order to obtain a more accurate mapping. The TCIE algorithm first detects possible boundary points in the data, and during computation of the geodesic length marks inconsistent geodesics, to be given a small weight in the weighted stress majorization that follows.

Uniform manifold approximation and projection

Uniform manifold approximation and projection (UMAP) is a nonlinear dimensionality reduction technique. [42] It is similar to t-SNE. [43]
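
Typical usage of the umap-learn package (assumed installed); the parameter values below are common illustrative settings rather than recommendations.

```python
import umap
from sklearn.datasets import load_digits

X, _ = load_digits(return_X_y=True)
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1,
                      n_components=2).fit_transform(X)
```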

Methods based on proximity matrices

A method based on proximity matrices is one where the data is presented to the algorithm in the form of a similarity matrix or a distance matrix. These methods all fall under the broader class of metric multidimensional scaling. The variations tend to be differences in how the proximity data are computed; for example, Isomap, locally linear embedding, maximum variance unfolding, and Sammon mapping (which is not in fact a mapping) are examples of metric multidimensional scaling methods.
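
A short sketch of the proximity-matrix setting, assuming scikit-learn and SciPy are available: metric MDS is run directly on a precomputed dissimilarity matrix, which could equally come from geodesic estimates (as in Isomap) or any other proximity computation. The random data below is only a stand-in.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # stand-in high-dimensional data
D = squareform(pdist(X))                 # any precomputed dissimilarity matrix
Y = MDS(n_components=2, dissimilarity="precomputed",
        random_state=0).fit_transform(D)
```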


Related Research Articles

In mathematics, and more specifically in linear algebra, a linear map is a mapping between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism.

In machine learning, support vector machines are supervised max-margin models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories, SVMs are one of the most studied models, being based on statistical learning frameworks of VC theory proposed by Vapnik and Chervonenkis (1974).

Principal component analysis

Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.

Isometry

In mathematics, an isometry is a distance-preserving transformation between metric spaces, usually assumed to be bijective. The word isometry is derived from the Ancient Greek: ἴσος isos meaning "equal", and μέτρον metron meaning "measure". If the transformation is from a metric space to itself, it is a kind of geometric transformation known as a motion.

Projection (linear algebra)

In linear algebra and functional analysis, a projection is a linear transformation $P$ from a vector space to itself such that $P^2 = P$. That is, whenever $P$ is applied twice to any vector, it gives the same result as if it were applied once. It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.

Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension. Working in high-dimensional spaces can be undesirable for many reasons; raw data are often sparse as a consequence of the curse of dimensionality, and analyzing the data is usually computationally intractable. Dimensionality reduction is common in fields that deal with large numbers of observations and/or large numbers of variables, such as signal processing, speech recognition, neuroinformatics, and bioinformatics.

Maximum Variance Unfolding (MVU), also known as Semidefinite Embedding (SDE), is an algorithm in computer science that uses semidefinite programming to perform non-linear dimensionality reduction of high-dimensional vectorial input data.

In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map: in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines is infinite dimensional but only requires a finite dimensional matrix from user-input according to the Representer theorem. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.

In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space.

Spectral clustering

In multivariate statistics, spectral clustering techniques make use of the spectrum (eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset.

Isomap

Isomap is a nonlinear dimensionality reduction method. It is one of several widely used low-dimensional embedding methods. Isomap is used for computing a quasi-isometric, low-dimensional embedding of a set of high-dimensional data points. The algorithm provides a simple method for estimating the intrinsic geometry of a data manifold based on a rough estimate of each data point’s neighbors on the manifold. Isomap is highly efficient and generally applicable to a broad range of data sources and dimensionalities.

In mathematics, the Johnson–Lindenstrauss lemma is a result named after William B. Johnson and Joram Lindenstrauss concerning low-distortion embeddings of points from high-dimensional into low-dimensional Euclidean space. The lemma states that a set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that distances between the points are nearly preserved. In the classical proof of the lemma, the embedding is a random orthogonal projection.

Linear-nonlinear-Poisson cascade model

The linear-nonlinear-Poisson (LNP) cascade model is a simplified functional model of neural spike responses. It has been successfully used to describe the response characteristics of neurons in early sensory pathways, especially the visual system. The LNP model is generally implicit when using reverse correlation or the spike-triggered average to characterize neural responses with white-noise stimuli.

Manifold alignment is a class of machine learning algorithms that produce projections between sets of data, given that the original data sets lie on a common manifold. The concept was first introduced as such by Ham, Lee, and Saul in 2003, adding a manifold constraint to the general problem of correlating sets of high-dimensional vectors.

Diffusion map

Diffusion maps is a dimensionality reduction or feature extraction algorithm introduced by Coifman and Lafon which computes a family of embeddings of a data set into Euclidean space whose coordinates can be computed from the eigenvectors and eigenvalues of a diffusion operator on the data. The Euclidean distance between points in the embedded space is equal to the "diffusion distance" between probability distributions centered at those points. Different from linear dimensionality reduction methods such as principal component analysis (PCA), diffusion maps are part of the family of nonlinear dimensionality reduction methods which focus on discovering the underlying manifold that the data has been sampled from. By integrating local similarities at different scales, diffusion maps give a global description of the data-set. Compared with other methods, the diffusion map algorithm is robust to noise perturbation and computationally inexpensive.

t-distributed stochastic neighbor embedding

t-distributed stochastic neighbor embedding (t-SNE) is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two or three-dimensional map. It is based on Stochastic Neighbor Embedding originally developed by Geoffrey Hinton and Sam Roweis, where Laurens van der Maaten and Hinton proposed the t-distributed variant. It is a nonlinear dimensionality reduction technique for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability.

In machine learning, the kernel embedding of distributions comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS). A generalization of the individual data-point feature mapping done in classical kernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such as inner products, distances, projections, linear transformations, and spectral analysis. This learning framework is very general and can be applied to distributions over any space on which a sensible kernel function may be defined. For example, various kernels have been proposed for learning from data which are: vectors in $\mathbb{R}^d$, discrete classes/categories, strings, graphs/networks, images, time series, manifolds, dynamical systems, and other structured objects. The theory behind kernel embeddings of distributions has been primarily developed by Alex Smola, Le Song, Arthur Gretton, and Bernhard Schölkopf. A review of recent works on kernel embedding of distributions can be found in.

In mathematics and statistics, random projection is a technique used to reduce the dimensionality of a set of points which lie in Euclidean space. According to theoretical results, random projection preserves distances well, but empirical results are sparse. They have been applied to many natural language tasks under the name random indexing.

Manifold regularization

In machine learning, Manifold regularization is a technique for using the shape of a dataset to constrain the functions that should be learned on that dataset. In many machine learning problems, the data to be learned do not cover the entire input space. For example, a facial recognition system may not need to classify any possible image, but only the subset of images that contain faces. The technique of manifold learning assumes that the relevant subset of data comes from a manifold, a mathematical structure with useful properties. The technique also assumes that the function to be learned is smooth: data with different labels are not likely to be close together, and so the labeling function should not change quickly in areas where there are likely to be many data points. Because of this assumption, a manifold regularization algorithm can use unlabeled data to inform where the learned function is allowed to change quickly and where it is not, using an extension of the technique of Tikhonov regularization. Manifold regularization algorithms can extend supervised learning algorithms in semi-supervised learning and transductive learning settings, where unlabeled data are available. The technique has been used for applications including medical imaging, geographical imaging, and object recognition.

Empirical dynamic modeling (EDM) is a framework for analysis and prediction of nonlinear dynamical systems. Applications include population dynamics, ecosystem service, medicine, neuroscience, dynamical systems, geophysics, and human-computer interaction. EDM was originally developed by Robert May and George Sugihara. It can be considered a methodology for data modeling, predictive analytics, dynamical system analysis, machine learning and time series analysis.

References

  1. Lawrence, Neil D (2012). "A unifying probabilistic perspective for spectral dimensionality reduction: insights and new models". Journal of Machine Learning Research . 13 (May): 1609–38. arXiv: 1010.4830 . Bibcode:2010arXiv1010.4830L.
  2. Lee, John A.; Verleysen, Michel (2007). Nonlinear Dimensionality Reduction. Springer. ISBN   978-0-387-39350-6.
  3. Haller, George; Ponsioen, Sten (2016). "Nonlinear normal modes and spectral submanifolds: Existence, uniqueness and use in model reduction". Nonlinear Dynamics. 86 (3): 1493–1534. arXiv: 1602.00560 . doi:10.1007/s11071-016-2974-z. S2CID   44074026.
  4. Gashler, M.; Martinez, T. (2011). Temporal Nonlinear Dimensionality Reduction (PDF). Proceedings of the International Joint Conference on Neural Networks IJCNN'11. pp. 1959–66.
  5. The illustration is prepared using free software: Mirkes, E.M. (2011). "Principal Component Analysis and Self-Organizing Maps: applet". University of Leicester.
  6. Yin, Hujun (2007). "3. Learning Nonlinear Principal Manifolds by Self-Organising Maps". In Gorban, A.N.; Kégl, B.; Wunsch, D.C.; Zinovyev, A. (eds.). Principal Manifolds for Data Visualization and Dimension Reduction. Lecture Notes in Computer Science and Engineering. Vol. 58. Springer. pp. 68–95. ISBN   978-3-540-73749-0.
  7. Schölkopf, B.; Smola, A.; Müller, K.-R. (1998). "Nonlinear Component Analysis as a Kernel Eigenvalue Problem". Neural Computation. 10 (5). MIT Press: 1299–1319. doi:10.1162/089976698300017467. S2CID   6674407.
  8. Ham, Jihun; Lee, Daniel D.; Mika, Sebastian; Schölkopf, Bernhard. "A kernel view of the dimensionality reduction of manifolds". Proceedings of the 21st International Conference on Machine Learning, Banff, Canada, 2004. doi:10.1145/1015330.1015417.
  9. Gorban, A. N.; Zinovyev, A. (2010). "Principal manifolds and graphs in practice: from molecular biology to dynamical systems". International Journal of Neural Systems . 20 (3): 219–232. arXiv: 1001.1122 . doi:10.1142/S0129065710002383. PMID   20556849. S2CID   2170982.
  10. A. Zinovyev, ViDaExpert - Multidimensional Data Visualization Tool Institut Curie, Paris.
  11. Hastie, T. (November 1984). Principal Curves and Surfaces (PDF) (PhD). Stanford Linear Accelerator Center, Stanford University. Archived (PDF) from the original on August 2, 2019.
  12. Hastie, T.; Stuetzle, W. (June 1989). "Principal Curves" (PDF). Journal of the American Statistical Association . 84 (406): 502–6. doi:10.1080/01621459.1989.10478797.
  13. Gorban, A. N.; Kégl, B.; Wunsch, D. C.; Zinovyev, A., eds. (2007). Principal Manifolds for Data Visualisation and Dimension Reduction. Lecture Notes in Computer Science and Engineering (LNCSE). Vol. 58. Springer. ISBN   978-3-540-73749-0.
  14. Belkin, Mikhail; Niyogi, Partha (2001). "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering" (PDF). Advances in Neural Information Processing Systems. 14. MIT Press: 586–691. ISBN   0-262-27173-7. OCLC   52710683.
  15. Belkin, Mikhail (August 2003). Problems of Learning on Manifolds (PhD). Department of Mathematics, The University of Chicago. Matlab code for Laplacian Eigenmaps can be found in algorithms at Ohio-state.edu
  16. Bengio, Yoshua; Paiement, Jean-Francois; Vincent, Pascal; Delalleau, Olivier; Le Roux, Nicolas; Ouimet, Marie (2004). "Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering" (PDF). Advances in Neural Information Processing Systems. Vol. 16. MIT Press. ISBN   0-262-20152-6.
  17. Tenenbaum, J B.; de Silva, V.; Langford, J.C. (2000). "A Global Geometric Framework for Nonlinear Dimensionality Reduction" (PDF). Science. 290 (5500): 2319–23. Bibcode:2000Sci...290.2319T. doi:10.1126/science.290.5500.2319. PMID   11125149. S2CID   221338160.
  18. Roweis, S. T.; Saul, L. K. (2000). "Nonlinear Dimensionality Reduction by Locally Linear Embedding". Science. 290 (5500): 2323–6. Bibcode:2000Sci...290.2323R. doi:10.1126/science.290.5500.2323. PMID   11125150. S2CID   5987139.
  19. Donoho, D.; Grimes, C. (2003). "Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data". Proc Natl Acad Sci U S A. 100 (10): 5591–6. Bibcode:2003PNAS..100.5591D. doi: 10.1073/pnas.1031596100 . PMC   156245 . PMID   16576753.
  20. Zhang, Z.; Wang, J. (2006). "MLLE: Modified Locally Linear Embedding Using Multiple Weights". NIPS'06: Proceedings of the 19th International Conference on Neural Information Processing Systems: 1593–1600.
  21. Sidhu, Gagan (2019). "Locally Linear Embedding and fMRI feature selection in psychiatric classification". IEEE Journal of Translational Engineering in Health and Medicine. 7: 1–11. arXiv: 1908.06319 . doi:10.1109/JTEHM.2019.2936348. PMC   6726465 . PMID   31497410. S2CID   201832756.
  22. Zhang, Zhenyue; Hongyuan Zha (2005). "Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent Space Alignment". SIAM Journal on Scientific Computing. 26 (1): 313–338. CiteSeerX   10.1.1.211.9957 . doi:10.1137/s1064827502419154.
  23. DeMers, D.; Cottrell, G.W. (1993). "Non-linear dimensionality reduction". Advances in neural information processing systems. Vol. 5. pp. 580–7. ISBN   1558600159. OCLC   928936290.
  24. Lawrence, N. (2005). "Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models". Journal of Machine Learning Research. 6: 1783–1816.
  25. Ding, M.; Fan, G. (2015). "Multilayer Joint Gait-Pose Manifolds for Human Gait Motion Modeling". IEEE Transactions on Cybernetics. 45 (11): 2413–24. doi:10.1109/TCYB.2014.2373393. PMID   25532201. S2CID   15591304.
  26. van der Maaten, L.J.P.; Hinton, G.E. (2008). "Visualizing High-Dimensional Data Using t-SNE" (PDF). Journal of Machine Learning Research. 9: 2579–2605.
  27. Li, James X. (2004). "Visualizing high-dimensional data with relational perspective map" (PDF). Information Visualization. 3: 49–59. doi:10.1057/palgrave.ivs.9500051. S2CID   7566939.
  28. Taylor, D.; Klimm, F.; Harrington, H. A.; Kramár, M.; Mischaikow, K.; Porter, M. A.; Mucha, P. J. (2015). "Topological data analysis of contagion maps for examining spreading processes on networks". Nature Communications. 6: 7723. arXiv: 1408.1168 . Bibcode:2015NatCo...6.7723T. doi:10.1038/ncomms8723. PMC   4566922 . PMID   26194875.
  29. Demartines, P.; Hérault, J. (1997). "Curvilinear Component Analysis: A Self-Organizing Neural Network for Nonlinear Mapping of Data Sets" (PDF). IEEE Transactions on Neural Networks. 8 (1): 148–154. doi:10.1109/72.554199. PMID 18255618.
  30. Sun, Jigang; Crowe, Malcolm; Fyfe, Colin (2010). "Curvilinear component analysis and Bregman divergences" (PDF). European Symposium on Artificial Neural Networks (Esann). d-side publications. pp. 81–86.
  31. Walder, Christian; Schölkopf, Bernhard (2009). "Diffeomorphic Dimensionality Reduction" (PDF). Advances in Neural Information Processing Systems. Vol. 22. MIT Press. pp. 1713–20.
  32. Wang, Chang; Mahadevan, Sridhar (July 2008). Manifold Alignment using Procrustes Analysis (PDF). The 25th International Conference on Machine Learning. pp. 1120–7.
  33. Lafon, Stephane (May 2004). Diffusion Maps and Geometric Harmonics (PhD). Yale University.
  34. Coifman, Ronald R.; Lafon, Stephane (July 2006). "Diffusion Maps" (PDF). Applied and Computational Harmonic Analysis. 21 (1): 5–30. doi:10.1016/j.acha.2006.04.006. S2CID 17160669.
  35. Bah, B. (2008). Diffusion Maps: Applications and Analysis (Masters). University of Oxford.
  36. Venna, J.; Kaski, S. (2006). "Local multidimensional scaling". Neural Networks. 19 (6–7): 889–899. doi:10.1016/j.neunet.2006.05.014. PMID   16787737.
  37. Scholz, M.; Kaplan, F.; Guy, C. L.; Kopka, J.; Selbig, J. (2005). "Non-linear PCA: a missing data approach". Bioinformatics. 21 (20). Oxford University Press: 3887–95. doi: 10.1093/bioinformatics/bti634 . hdl: 11858/00-001M-0000-0014-2B1F-2 . PMID   16109748.
  38. S. Lespinats, M. Verleysen, A. Giron, B. Fertil, DD-HDS: a tool for visualization and exploration of high-dimensional data, IEEE Transactions on Neural Networks 18 (5) (2007) 1265–1279.
  39. Gashler, M. and Ventura, D. and Martinez, T., Iterative Non-linear Dimensionality Reduction with Manifold Sculpting , In Platt, J.C. and Koller, D. and Singer, Y. and Roweis, S., editor, Advances in Neural Information Processing Systems 20, pp. 513–520, MIT Press, Cambridge, MA, 2008
  40. Lespinats S., Fertil B., Villemain P. and Herault J., Rankvisu: Mapping from the neighbourhood network, Neurocomputing, vol. 72 (13–15), pp. 2964–2978, 2009.
  41. Rosman, G.; Bronstein, M.M.; Bronstein, A.M.; Kimmel, R. (2010). "Nonlinear Dimensionality Reduction by Topologically Constrained Isometric Embedding" (PDF). International Journal of Computer Vision. 89 (1): 56–68. doi:10.1007/s11263-010-0322-1. S2CID   1365750.
  42. McInnes, Leland; Healy, John; Melville, James (2018-12-07). "Uniform manifold approximation and projection for dimension reduction". arXiv: 1802.03426 .
  43. "UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction — umap 0.3 documentation". umap-learn.readthedocs.io. Retrieved 2019-05-04.

Further reading