An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data, and a decoding function that recreates the input data from the encoded representation. The autoencoder learns an efficient representation (encoding) for a set of data, typically for dimensionality reduction, to generate lower-dimensional embeddings for subsequent use by other machine learning algorithms. [1]
Variants exist which aim to make the learned representations assume useful properties. [2] Examples are regularized autoencoders (sparse, denoising and contractive autoencoders), which are effective in learning representations for subsequent classification tasks, [3] and variational autoencoders, which can be used as generative models. [4] Autoencoders are applied to many problems, including facial recognition, [5] feature detection, [6] anomaly detection, and learning the meaning of words. [7] [8] In terms of data synthesis, autoencoders can also be used to randomly generate new data that is similar to the input (training) data. [6]
An autoencoder is defined by the following components:
Two sets: the space of decoded messages $\mathcal{X}$; the space of encoded messages $\mathcal{Z}$. Typically $\mathcal{X}$ and $\mathcal{Z}$ are Euclidean spaces, that is, $\mathcal{X} = \mathbb{R}^m$, $\mathcal{Z} = \mathbb{R}^n$ with $m > n$.
Two parametrized families of functions: the encoder family $E_\phi : \mathcal{X} \to \mathcal{Z}$, parametrized by $\phi$; the decoder family $D_\theta : \mathcal{Z} \to \mathcal{X}$, parametrized by $\theta$.
For any $x \in \mathcal{X}$, we usually write $z = E_\phi(x)$, and refer to it as the code, the latent variable, latent representation, latent vector, etc. Conversely, for any $z \in \mathcal{Z}$, we usually write $x' = D_\theta(z)$, and refer to it as the (decoded) message.
Usually, both the encoder and the decoder are defined as multilayer perceptrons (MLPs). For example, a one-layer-MLP encoder $E_\phi$ is:
$E_\phi(x) = \sigma(Wx + b)$
where $\sigma$ is an element-wise activation function, $W$ is a "weight" matrix, and $b$ is a "bias" vector.
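A minimal sketch of this construction in PyTorch (the framework choice, layer sizes, and sigmoid activations are illustrative assumptions, not part of the definition above):

from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, msg_dim=784, code_dim=32):
        super().__init__()
        # one-layer-MLP encoder: sigma(W x + b)
        self.encoder = nn.Sequential(nn.Linear(msg_dim, code_dim), nn.Sigmoid())
        # one-layer-MLP decoder: sigma'(W' z + b')
        self.decoder = nn.Sequential(nn.Linear(code_dim, msg_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)      # the code / latent representation
        return self.decoder(z)   # the decoded message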
An autoencoder, by itself, is simply a tuple of two functions. To judge its quality, we need a task. A task is defined by a reference probability distribution $\mu_{\mathrm{ref}}$ over $\mathcal{X}$, and a "reconstruction quality" function $d : \mathcal{X} \times \mathcal{X} \to [0, \infty]$, such that $d(x, x')$ measures how much $x'$ differs from $x$.
With those, we can define the loss function for the autoencoder as
$L(\theta, \phi) := \mathbb{E}_{x \sim \mu_{\mathrm{ref}}}\left[ d\big(x, D_\theta(E_\phi(x))\big) \right].$
The optimal autoencoder for the given task is then $\arg\min_{\theta, \phi} L(\theta, \phi)$. The search for the optimal autoencoder can be accomplished by any mathematical optimization technique, but usually by gradient descent. This search process is referred to as "training the autoencoder".
In most situations, the reference distribution is just the empirical distribution given by a dataset $\{x_1, \dots, x_N\} \subset \mathcal{X}$, so that
$\mu_{\mathrm{ref}} = \frac{1}{N} \sum_{i=1}^N \delta_{x_i}$
where $\delta_{x_i}$ is the Dirac measure, the quality function is just L2 loss: $d(x, x') = \|x - x'\|_2^2$, and $\|\cdot\|_2$ is the Euclidean norm. Then the problem of searching for the optimal autoencoder is just a least-squares optimization:
$\min_{\theta, \phi} L(\theta, \phi), \quad \text{where } L(\theta, \phi) = \frac{1}{N} \sum_{i=1}^N \big\| x_i - D_\theta(E_\phi(x_i)) \big\|_2^2$
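Continuing the sketch above, a hedged example of this least-squares training loop (the Adam optimizer, learning rate, and the stand-in batches are assumptions for illustration; a real dataset would replace them):

import torch

model = Autoencoder()                                 # the sketch defined above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data_loader = [torch.rand(64, 784) for _ in range(10)]  # stand-in random batches
for x in data_loader:
    x_hat = model(x)                                  # D_theta(E_phi(x_i))
    loss = ((x - x_hat) ** 2).sum(dim=1).mean()       # empirical L2 reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()                                        # one gradient-descent step on (theta, phi)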
An autoencoder has two main parts: an encoder that maps the message to a code, and a decoder that reconstructs the message from the code. An optimal autoencoder would perform as close to perfect reconstruction as possible, with "close to perfect" defined by the reconstruction quality function $d$.
The simplest way to perform the copying task perfectly would be to duplicate the signal. To suppress this behavior, the code space $\mathcal{Z}$ usually has fewer dimensions than the message space $\mathcal{X}$.
Such an autoencoder is called undercomplete. It can be interpreted as compressing the message, or reducing its dimensionality. [9] [10]
At the limit of an ideal undercomplete autoencoder, every possible code $z$ in the code space is used to encode a message $x$ that really appears in the distribution $\mu_{\mathrm{ref}}$, and the decoder is also perfect: $D_\theta(E_\phi(x)) = x$. This ideal autoencoder can then be used to generate messages indistinguishable from real messages, by feeding its decoder an arbitrary code $z$ and obtaining $D_\theta(z)$, which is a message that really appears in the distribution $\mu_{\mathrm{ref}}$.
If the code space $\mathcal{Z}$ has dimension larger than (overcomplete) or equal to that of the message space $\mathcal{X}$, or if the hidden units are given enough capacity, an autoencoder can learn the identity function and become useless. However, experimental results have found that overcomplete autoencoders might still learn useful features. [11]
In the ideal setting, the code dimension and the model capacity could be set on the basis of the complexity of the data distribution to be modeled. A standard way to do so is to add modifications to the basic autoencoder, to be detailed below. [2]
Variational autoencoders (VAEs) belong to the family of variational Bayesian methods. Despite the architectural similarity to basic autoencoders, VAEs are designed with different goals and have a different mathematical formulation. The latent space is, in this case, composed of a mixture of distributions instead of fixed vectors.
Given an input dataset $x$ characterized by an unknown probability function $P(x)$ and a multivariate latent encoding vector $z$, the objective is to model the data as a distribution $p_\theta(x)$, with $\theta$ defined as the set of the network parameters, so that $p_\theta(x) = \int_{z} p_\theta(x, z)\, dz$.
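To illustrate the contrast with fixed code vectors, a minimal PyTorch sketch (names and sizes are hypothetical) of an encoder that parameterizes a Gaussian distribution over the latent space and draws a sample from it:

import torch
from torch import nn

class GaussianEncoder(nn.Module):
    # Illustrative sketch: a VAE encoder outputs the parameters of a Gaussian
    # over the latent space (mean and log-variance) rather than a fixed code.
    def __init__(self, msg_dim=784, code_dim=32):
        super().__init__()
        self.mu = nn.Linear(msg_dim, code_dim)
        self.log_var = nn.Linear(msg_dim, code_dim)

    def forward(self, x):
        mu, log_var = self.mu(x), self.log_var(x)
        eps = torch.randn_like(mu)                   # reparameterization trick
        z = mu + torch.exp(0.5 * log_var) * eps      # a sample from N(mu, sigma^2)
        return z, mu, log_var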
Inspired by the sparse coding hypothesis in neuroscience, sparse autoencoders (SAE) are variants of autoencoders such that the codes $E_\phi(x)$ for messages tend to be sparse codes, that is, $E_\phi(x)$ is close to zero in most entries. Sparse autoencoders may include more (rather than fewer) hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time. [12] Encouraging sparsity improves performance on classification tasks. [13]
There are two main ways to enforce sparsity. One way is to simply clamp all but the highest-k activations of the latent code to zero. This is the k-sparse autoencoder. [13]
The k-sparse autoencoder inserts the following "k-sparse function" in the latent layer of a standard autoencoder:
$f_k(x_1, \dots, x_n) = (x_1 b_1, \dots, x_n b_n), \quad \text{where } b_i = 1 \text{ if } |x_i| \text{ ranks in the top } k, \text{ and } b_i = 0 \text{ otherwise.}$
Backpropagating through $f_k$ is simple: set the gradient to 0 for the $b_i = 0$ entries, and keep the gradient for the $b_i = 1$ entries. This is essentially a generalized ReLU function. [13]
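A possible PyTorch implementation of the k-sparse function, as a sketch (whether ranking uses raw or absolute activations differs between implementations):

import torch

def k_sparse(z, k):
    # Keep the k largest latent activations per code vector and zero the rest.
    # Multiplying by a 0/1 mask reproduces the gradient rule in the text:
    # gradient 0 for clamped entries, unchanged gradient for kept entries.
    topk = torch.topk(z.abs(), k, dim=-1)
    mask = torch.zeros_like(z).scatter_(-1, topk.indices, 1.0)
    return z * mask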
The other way is a relaxed version of the k-sparse autoencoder. Instead of forcing sparsity, we add a sparsity regularization loss, then optimize for
$\min_{\theta, \phi} L(\theta, \phi) + \lambda L_{\mathrm{sparse}}(\theta, \phi)$
where $\lambda > 0$ measures how much sparsity we want to enforce. [14]
Let the autoencoder architecture have $K$ layers. To define a sparsity regularization loss, we need a "desired" sparsity $\hat\rho_k$ for each layer, a weight $w_k$ for how much to enforce each sparsity, and a function $s : [0, 1] \times [0, 1] \to [0, \infty]$ to measure how much two sparsities differ.
For each input $x$, let the actual sparsity of activation in each layer $k$ be
$\rho_k(x) = \frac{1}{n_k} \sum_{i=1}^{n_k} a_{k,i}(x)$
where $a_{k,i}(x)$ is the activation in the $i$-th neuron of the $k$-th layer upon input $x$.
The sparsity loss upon input $x$ for one layer is $s(\hat\rho_k, \rho_k(x))$, and the sparsity regularization loss for the entire autoencoder is the expected weighted sum of sparsity losses:
$L_{\mathrm{sparse}}(\theta, \phi) = \mathbb{E}_{x \sim \mu_{\mathrm{ref}}}\left[ \sum_{k=1}^{K} w_k \, s\big(\hat\rho_k, \rho_k(x)\big) \right]$
Typically, the function $s$ is either the Kullback-Leibler (KL) divergence, as [13] [14] [15] [16]
$s(\hat\rho, \rho) = KL(\hat\rho \,\|\, \rho) = \hat\rho \ln\frac{\hat\rho}{\rho} + (1 - \hat\rho) \ln\frac{1 - \hat\rho}{1 - \rho}$
or the L1 loss, as $s(\hat\rho, \rho) = |\hat\rho - \rho|$, or the L2 loss, as $s(\hat\rho, \rho) = |\hat\rho - \rho|^2$.
Alternatively, the sparsity regularization loss may be defined without reference to any "desired sparsity", but simply force as much sparsity as possible. In this case, one can define the sparsity regularization loss as
$L_{\mathrm{sparse}}(\theta, \phi) = \mathbb{E}_{x \sim \mu_{\mathrm{ref}}}\left[ \sum_{k=1}^{K} w_k \, \|h_k\| \right]$
where $h_k$ is the activation vector in the $k$-th layer of the autoencoder. The norm $\|\cdot\|$ is usually the L1 norm (giving the L1 sparse autoencoder) or the L2 norm (giving the L2 sparse autoencoder).
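A sketch of the KL-divergence sparsity penalty for a single layer (PyTorch; the desired sparsity of 0.05 and the numerical clamping are assumptions):

import torch

def kl_sparsity_loss(activations, rho_desired=0.05):
    # activations: hidden-layer activations in [0, 1] for one batch, shape (batch, units)
    rho_actual = activations.mean(dim=0).clamp(1e-6, 1 - 1e-6)  # mean activation per unit
    kl = (rho_desired * torch.log(rho_desired / rho_actual)
          + (1 - rho_desired) * torch.log((1 - rho_desired) / (1 - rho_actual)))
    return kl.sum()

# total loss for the relaxed sparse autoencoder, with a hypothetical weight lambda_:
# loss = reconstruction_loss + lambda_ * kl_sparsity_loss(hidden_activations)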
Denoising autoencoders (DAE) try to achieve a good representation by changing the reconstruction criterion. [2] [3]
A DAE, originally called a "robust autoassociative network", [17] is trained by intentionally corrupting the inputs of a standard autoencoder during training. A noise process is defined by a probability distribution $\mu_T$ over functions $T : \mathcal{X} \to \mathcal{X}$. That is, the function $T$ takes a message $x \in \mathcal{X}$ and corrupts it to a noisy version $T(x)$. The function $T$ is selected randomly, with probability distribution $\mu_T$.
Given a task $(\mu_{\mathrm{ref}}, d)$, the problem of training a DAE is the optimization problem
$\min_{\theta, \phi} L(\theta, \phi) = \mathbb{E}_{x \sim \mu_{\mathrm{ref}},\, T \sim \mu_T}\left[ d\big(x, (D_\theta \circ E_\phi \circ T)(x)\big) \right].$
That is, the optimal DAE should take any noisy message $T(x)$ and attempt to recover the original message $x$ without noise, thus the name "denoising".
Usually, the noise process is applied only during training and testing, not during downstream use.
The use of DAE depends on two assumptions: that there exist representations of the messages that are relatively stable and robust to the type of noise we are likely to encounter, and that those representations capture structures in the input distribution that are useful for our purposes.
Example noise processes include additive isotropic Gaussian noise, masking noise (a fraction of the input chosen at random and set to 0), and salt-and-pepper noise (a fraction of the input chosen at random and set to its minimum or maximum value).
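A hedged sketch of one DAE training step with masking noise, reusing the earlier PyTorch model and optimizer (the masking probability is a hypothetical choice):

import torch

def dae_training_step(model, opt, x, p_mask=0.3):
    # Corrupt the input with masking noise, then reconstruct the clean input
    # from the corrupted one.
    mask = (torch.rand_like(x) > p_mask).float()
    x_hat = model(x * mask)                           # D_theta(E_phi(T(x)))
    loss = ((x - x_hat) ** 2).sum(dim=1).mean()       # d(x, reconstruction)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss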
A contractive autoencoder (CAE) adds the contractive regularization loss to the standard autoencoder loss:
$\min_{\theta, \phi} L(\theta, \phi) + \lambda L_{\mathrm{cont}}(\theta, \phi)$
where $\lambda > 0$ measures how much contractiveness we want to enforce. The contractive regularization loss itself is defined as the expected Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input:
$L_{\mathrm{cont}}(\theta, \phi) = \mathbb{E}_{x \sim \mu_{\mathrm{ref}}} \left\| \nabla_x E_\phi(x) \right\|_F^2$
To understand what $L_{\mathrm{cont}}$ measures, note the fact
$\| E_\phi(x + \delta x) - E_\phi(x) \|_2 \approx \| \nabla_x E_\phi(x)\, \delta x \|_2 \le \| \nabla_x E_\phi(x) \|_F \, \| \delta x \|_2$
for any message $x \in \mathcal{X}$ and any small variation $\delta x$ of it. Thus, if $\| \nabla_x E_\phi(x) \|_F^2$ is small, it means that a small neighborhood of the message maps to a small neighborhood of its code. This is a desired property, as it means small variation in the message leads to small, perhaps even zero, variation in its code, like how two pictures may look the same even if they are not exactly the same.
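An autograd-based sketch of the contractive penalty (PyTorch; computing per-sample Jacobians this way is simple but slow, and faster analytic forms exist for specific encoders):

import torch

def contractive_loss(encoder, x):
    # Mean squared Frobenius norm of the encoder Jacobian dE_phi(x)/dx,
    # computed per sample with autograd.
    total = 0.0
    for xi in x:                                  # xi: one message from the batch
        J = torch.autograd.functional.jacobian(encoder, xi.unsqueeze(0), create_graph=True)
        total = total + (J ** 2).sum()
    return total / x.shape[0]

# combined objective with a hypothetical weight lambda_:
# loss = reconstruction_loss + lambda_ * contractive_loss(model.encoder, x)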
The DAE can be understood as an infinitesimal limit of CAE: in the limit of small Gaussian input noise, DAEs make the reconstruction function resist small but finite-sized input perturbations, while CAEs make the extracted features resist infinitesimal input perturbations.
A minimum description length autoencoder (MDL-AE) is an advanced variation of the traditional autoencoder, which leverages principles from information theory, specifically the Minimum Description Length (MDL) principle. The MDL principle posits that the best model for a dataset is the one that provides the shortest combined encoding of the model and the data. In the context of autoencoders, this principle is applied to ensure that the learned representation is not only compact but also interpretable and efficient for reconstruction.
The MDL-AE seeks to minimize the total description length of the data, which includes the size of the latent representation (code length) and the error in reconstructing the original data. The objective can be expressed as $\min\, L(z) + R(x, \hat{x})$, where $L(z)$ represents the length of the compressed latent representation $z$ and $R(x, \hat{x})$ denotes the reconstruction error. [18]
The concrete autoencoder is designed for discrete feature selection. [19] A concrete autoencoder forces the latent space to consist only of a user-specified number of features. The concrete autoencoder uses a continuous relaxation of the categorical distribution to allow gradients to pass through the feature selector layer, which makes it possible to use standard backpropagation to learn an optimal subset of input features that minimize reconstruction loss.
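A sketch of the continuous (Gumbel-softmax) relaxation used by the feature-selector layer; the temperature, the shapes, and the name concrete_select are illustrative assumptions rather than the paper's exact formulation:

import torch

def concrete_select(logits, temperature=0.5):
    # Continuous relaxation of a categorical "pick one input feature" choice;
    # gradients can flow back to the selection logits through the softmax.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return torch.softmax((logits + gumbel) / temperature, dim=-1)  # ~one-hot as temperature -> 0

# selecting k features from x of shape (batch, n_features), with hypothetical
# logits of shape (k, n_features):
# selected = x @ concrete_select(logits).T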
Autoencoders are often trained with a single-layer encoder and a single-layer decoder, but using many-layered (deep) encoders and decoders offers many advantages. [2]
Geoffrey Hinton developed the deep belief network technique for training many-layered deep autoencoders. His method involves treating each neighboring set of two layers as a restricted Boltzmann machine so that pretraining approximates a good solution, then using backpropagation to fine-tune the results. [10]
Researchers have debated whether joint training (i.e. training the whole architecture together with a single global reconstruction objective to optimize) would be better for deep auto-encoders. [20] A 2015 study showed that joint training learns better data models along with more representative features for classification as compared to the layerwise method. [20] However, their experiments showed that the success of joint training depends heavily on the regularization strategies adopted. [20] [21]
(Oja, 1982) [22] noted that PCA is equivalent to a neural network with one hidden layer with identity activation function. In the language of autoencoding, the input-to-hidden module is the encoder, and the hidden-to-output module is the decoder. Subsequently, (Baldi and Hornik, 1989) [23] and (Kramer, 1991) [9] generalized PCA to autoencoders, which they termed "nonlinear PCA".
Immediately after the resurgence of neural networks in the 1980s, it was suggested in 1986 [24] that a neural network be put in "auto-association mode". This was then implemented in (Harrison, 1987) [25] and (Elman, Zipser, 1988) [26] for speech and in (Cottrell, Munro, Zipser, 1987) [27] for images. [28] In (Hinton, Salakhutdinov, 2006), [29] deep belief networks were developed. These train a pair of restricted Boltzmann machines as an encoder-decoder pair, then train another pair on the latent representation of the first pair, and so on. [30]
The first applications of autoencoders date to the early 1990s. [2] [31] [18] Their most traditional application was dimensionality reduction or feature learning, but the concept became widely used for learning generative models of data. [32] [33] Some of the most powerful AIs in the 2010s involved autoencoder modules as components of larger AI systems, such as the VAE in Stable Diffusion and the discrete VAE in Transformer-based image generators like DALL-E 1.
During the early days, when the terminology was uncertain, the autoencoder has also been called identity mapping, [34] [9] auto-associating, [35] self-supervised backpropagation, [9] or Diabolo network. [36] [11]
The two main applications of autoencoders are dimensionality reduction and information retrieval (or associative memory), [2] but modern variations have been applied to other tasks.
Dimensionality reduction was one of the first deep learning applications. [2]
In his 2006 study, [10] Hinton pretrained a multi-layer autoencoder with a stack of RBMs and then used their weights to initialize a deep autoencoder with gradually smaller hidden layers until hitting a bottleneck of 30 neurons. The resulting 30 dimensions of the code yielded a smaller reconstruction error compared to the first 30 components of a principal component analysis (PCA), and learned a representation that was qualitatively easier to interpret, clearly separating data clusters. [2] [10]
Representing data in a lower-dimensional space can improve performance on tasks such as classification. [2] Indeed, the hallmark of dimensionality reduction is to place semantically related examples near each other. [38]
If linear activations are used, or only a single sigmoid hidden layer, then the optimal solution to an autoencoder is strongly related to principal component analysis (PCA). [28] [39] The weights of an autoencoder with a single hidden layer of size $p$ (where $p$ is less than the size of the input) span the same vector subspace as the one spanned by the first $p$ principal components, and the output of the autoencoder is an orthogonal projection onto this subspace. The autoencoder weights are not equal to the principal components, and are generally not orthogonal, yet the principal components may be recovered from them using the singular value decomposition. [40]
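A small NumPy sketch of the subspace recovery mentioned above (the weight matrix here is a random stand-in for a trained linear autoencoder's decoder weights; this recovers an orthonormal basis of the principal subspace, not the individual components):

import numpy as np

# Stand-in for a trained linear autoencoder's decoder weights, shape (input_dim, p).
# Its columns span the same subspace as the first p principal components but are
# generally not orthogonal; an orthonormal basis comes from the SVD.
W_dec = np.random.randn(64, 8)
U, S, Vt = np.linalg.svd(W_dec, full_matrices=False)
principal_subspace = U          # shape (input_dim, p), orthonormal columns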
However, the potential of autoencoders resides in their non-linearity, allowing the model to learn more powerful generalizations compared to PCA, and to reconstruct the input with significantly lower information loss. [10]
Information retrieval benefits particularly from dimensionality reduction in that search can become more efficient in certain kinds of low dimensional spaces. Autoencoders were indeed applied to semantic hashing, proposed by Salakhutdinov and Hinton in 2007. [38] By training the algorithm to produce a low-dimensional binary code, all database entries could be stored in a hash table mapping binary code vectors to entries. This table would then support information retrieval by returning all entries with the same binary code as the query, or slightly less similar entries by flipping some bits from the query encoding.
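An illustrative sketch of such a semantic-hashing index (the latent codes here are random stand-ins, and the binarization threshold is an assumption):

import numpy as np
from collections import defaultdict

def binary_code(z, threshold=0.5):
    # Binarize a latent code into a hashable bit tuple.
    return tuple((np.asarray(z) > threshold).astype(int))

codes = np.random.rand(1000, 16)        # stand-in latent codes for the database entries
query_code = codes[0]                   # stand-in query code

index = defaultdict(list)
for doc_id, z in enumerate(codes):
    index[binary_code(z)].append(doc_id)

# retrieval: all database entries whose binary code matches the query's code
results = index[binary_code(query_code)]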
The encoder-decoder architecture, often used in natural language processing and neural networks, can be applied in the field of search engine optimization (SEO) in various ways.
In essence, the encoder-decoder architecture or autoencoders can be leveraged in SEO to optimize web page content, improve its indexing, and enhance its appeal to both search engines and users.
Another application for autoencoders is anomaly detection. [17] [41] [42] [43] [44] [45] By learning to replicate the most salient features in the training data under some of the constraints described previously, the model is encouraged to learn how to precisely reproduce the most frequently observed characteristics. When facing anomalies, the model should worsen its reconstruction performance. In most cases, only normal instances are used to train the autoencoder; in others, the frequency of anomalies is so small compared to the whole observation set that its contribution to the learned representation can be ignored. After training, the autoencoder will accurately reconstruct "normal" data, while failing to do so with unfamiliar anomalous data. [43] Reconstruction error (the error between the original data and its low-dimensional reconstruction) is used as an anomaly score to detect anomalies. [43]
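A minimal sketch of reconstruction-error-based anomaly scoring, reusing the earlier PyTorch model (the threshold and its calibration on held-out normal data are assumptions):

import torch

def anomaly_scores(model, x):
    # Per-sample reconstruction error; a high score suggests an anomaly.
    with torch.no_grad():
        x_hat = model(x)
    return ((x - x_hat) ** 2).sum(dim=1)

# a simple decision rule with a hypothetical threshold:
# is_anomaly = anomaly_scores(model, x_test) > threshold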
Recent literature has however shown that certain autoencoding models can, counterintuitively, be very good at reconstructing anomalous examples and consequently not able to reliably perform anomaly detection. [46] [47]
The characteristics of autoencoders are useful in image processing.
One example can be found in lossy image compression, where autoencoders outperformed other approaches and proved competitive against JPEG 2000. [48] [49]
Another useful application of autoencoders in image preprocessing is image denoising. [50] [51] [52]
Autoencoders found use in more demanding contexts such as medical imaging where they have been used for image denoising [53] as well as super-resolution. [54] [55] In image-assisted diagnosis, experiments have applied autoencoders for breast cancer detection [56] and for modelling the relation between the cognitive decline of Alzheimer's disease and the latent features of an autoencoder trained with MRI. [57]
In 2019 molecules generated with variational autoencoders were validated experimentally in mice. [58] [59]
Recently, a stacked autoencoder framework produced promising results in predicting popularity of social media posts, [60] which is helpful for online advertising strategies.
Autoencoders have been applied to machine translation, which is usually referred to as neural machine translation (NMT). [61] [62] Unlike traditional autoencoders, the output does not match the input: it is in another language. In NMT, texts are treated as sequences to be encoded into the learning procedure, while on the decoder side sequences in the target language(s) are generated. Language-specific autoencoders incorporate further linguistic features into the learning procedure, such as Chinese decomposition features. [63] Machine translation is now rarely done with autoencoders, owing to the availability of more effective transformer networks.