Functional data analysis (FDA) is a branch of statistics that analyses data providing information about curves, surfaces or anything else varying over a continuum. In its most general form, under an FDA framework, each sample element of functional data is considered to be a random function. The physical continuum over which these functions are defined is often time, but may also be spatial location, wavelength, probability, etc. Intrinsically, functional data are infinite dimensional. The high intrinsic dimensionality of these data brings challenges for theory as well as computation, where these challenges vary with how the functional data were sampled. However, the high or infinite dimensional structure of the data is a rich source of information and there are many interesting challenges for research and data analysis.
Functional data analysis has roots going back to work by Grenander and Karhunen in the 1940s and 1950s. [1] [2] [3] [4] They considered the decomposition of square-integrable continuous-time stochastic processes into eigencomponents, now known as the Karhunen–Loève decomposition. A rigorous analysis of functional principal component analysis was carried out in the 1970s by Kleffe, Dauxois and Pousse, including results about the asymptotic distribution of the eigenvalues. [5] [6] More recently, in the 1990s and 2000s, the field has focused more on applications and on understanding the effects of dense and sparse observation schemes. The term "Functional Data Analysis" was coined by James O. Ramsay. [7]
Random functions can be viewed as random elements taking values in a Hilbert space, or as a stochastic process. The former is mathematically convenient, whereas the latter is somewhat more suitable from an applied perspective. These two approaches coincide if the random functions are continuous and a condition called mean-squared continuity is satisfied. [8]
In the Hilbert space viewpoint, one considers an $H$-valued random element $X$, where $H$ is a separable Hilbert space such as the space of square-integrable functions $L^2[0,1]$. Under the integrability condition that $\mathbb{E}\|X\| < \infty$, one can define the mean of $X$ as the unique element $\mu \in H$ satisfying

$$\mathbb{E}\langle X, h\rangle = \langle \mu, h\rangle, \qquad h \in H.$$

This formulation is the Pettis integral, but the mean can also be defined as the Bochner integral $\mu = \mathbb{E}X$. Under the integrability condition that $\mathbb{E}\|X\|^2$ is finite, the covariance operator of $X$ is a linear operator $\mathcal{C}: H \to H$ that is uniquely defined by the relation

$$\mathcal{C}h = \mathbb{E}\left[\langle X - \mu, h\rangle (X - \mu)\right], \qquad h \in H,$$

or, in tensor form, $\mathcal{C} = \mathbb{E}\left[(X - \mu) \otimes (X - \mu)\right]$. The spectral theorem allows one to decompose $X$ as the Karhunen–Loève decomposition

$$X = \mu + \sum_{i=1}^\infty \langle X - \mu, \varphi_i\rangle \varphi_i,$$

where $\varphi_i$ are eigenvectors of $\mathcal{C}$, corresponding to the nonnegative eigenvalues of $\mathcal{C}$, in non-increasing order. Truncating this infinite series to a finite order underpins functional principal component analysis.
The Hilbertian point of view is mathematically convenient, but abstract; the above considerations do not necessarily even view $X$ as a function at all, since common choices of $H$ like $L^2[0,1]$ and Sobolev spaces consist of equivalence classes, not functions. The stochastic process perspective views $X$ as a collection of random variables

$$\{X(t)\}_{t \in [0,1]}$$

indexed by the unit interval (or more generally an interval $\mathcal{T}$). The mean and covariance functions are defined in a pointwise manner as

$$\mu(t) = \mathbb{E}[X(t)], \qquad \Sigma(s,t) = \operatorname{Cov}(X(s), X(t)), \qquad s, t \in [0,1]$$

(if $\mathbb{E}[X(t)^2] < \infty$ for all $t \in [0,1]$).
Under mean-square continuity, $\mu$ and $\Sigma$ are continuous functions, and then the covariance function $\Sigma$ defines a covariance operator $\mathcal{C}: L^2[0,1] \to L^2[0,1]$ given by

$$(\mathcal{C}f)(t) = \int_0^1 \Sigma(s,t)\, f(s)\, ds. \qquad (1)$$

The spectral theorem applies to $\mathcal{C}$, yielding eigenpairs $(\lambda_j, \varphi_j)$, so that in tensor product notation $\mathcal{C}$ writes

$$\mathcal{C} = \sum_{j=1}^\infty \lambda_j\, \varphi_j \otimes \varphi_j.$$

Moreover, since $\mathcal{C}f$ is continuous for all $f \in L^2[0,1]$, all the eigenfunctions $\varphi_j$ are continuous. Mercer's theorem then states that

$$\sup_{s,t \in [0,1]} \left| \Sigma(s,t) - \sum_{j=1}^K \lambda_j \varphi_j(s) \varphi_j(t) \right| \to 0 \qquad \text{as } K \to \infty.$$

Finally, under the extra assumption that $X$ has continuous sample paths, namely that with probability one the random function $X$ is continuous, the Karhunen–Loève expansion above holds for $X$ and the Hilbert space machinery can subsequently be applied. Continuity of sample paths can be shown using the Kolmogorov continuity theorem.
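The eigenstructure above can be checked numerically in simple cases. The following sketch (an illustration, not from the source; the grid size and truncation level are arbitrary choices) discretizes the covariance operator (1) of standard Brownian motion, whose eigenpairs are known in closed form, and verifies the Mercer series approximation.

```python
# A minimal numerical check of the Karhunen-Loeve / Mercer machinery for
# standard Brownian motion on [0, 1], whose eigenpairs are known in closed
# form: lambda_j = ((j - 1/2) * pi)^(-2), phi_j(t) = sqrt(2) * sin((j - 1/2) * pi * t).
import numpy as np

m = 500                                   # grid size for discretizing [0, 1]
t = (np.arange(m) + 0.5) / m              # midpoint grid
Sigma = np.minimum.outer(t, t)            # covariance of Brownian motion: min(s, t)

# Discretize the covariance operator (1): integration becomes a weighted matrix product.
w = 1.0 / m                               # quadrature weight of each grid point
evals, evecs = np.linalg.eigh(Sigma * w)  # eigendecomposition of the discretized operator
evals, evecs = evals[::-1], evecs[:, ::-1]  # sort eigenvalues in decreasing order
phi = evecs / np.sqrt(w)                  # rescale so that int phi_j(t)^2 dt = 1

for j in range(3):
    lam_true = 1.0 / ((j + 0.5) * np.pi) ** 2
    print(f"lambda_{j + 1}: numeric {evals[j]:.5f}  closed form {lam_true:.5f}")

# Mercer: the truncated series sum_j lambda_j phi_j(s) phi_j(t) approximates min(s, t).
K = 50
Sigma_K = (phi[:, :K] * evals[:K]) @ phi[:, :K].T
print("max abs error of 50-term Mercer series:", np.abs(Sigma_K - Sigma).max())
```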
Functional data are considered as realizations of a stochastic process $X(t)$, $t \in [0,1]$, that is an $L^2$ process on a bounded and closed interval $[0,1]$ with mean function $\mu(t) = \mathbb{E}[X(t)]$ and covariance function $\Sigma(s,t) = \operatorname{Cov}(X(s), X(t))$. The realization of the process for the $i$-th subject is $X_i(\cdot)$, and the sample is assumed to consist of $n$ independent subjects. The sampling schedule may vary across subjects, denoted as $T_{i1}, \dots, T_{iN_i}$ for the $i$-th subject. The corresponding $i$-th observation is denoted as $\mathbf{Y}_i = (Y_{i1}, \dots, Y_{iN_i})$, where $Y_{ij} = X_i(T_{ij}) + \varepsilon_{ij}$. In addition, the measurements are assumed to carry random noise $\varepsilon_{ij}$ with $\mathbb{E}(\varepsilon_{ij}) = 0$ and $\operatorname{Var}(\varepsilon_{ij}) = \sigma^2$, independent across $i$ and $j$.
1. Fully observed functions without noise at an arbitrarily dense grid: measurements $Y_{it} = X_i(t)$ are available for all $t \in [0,1]$. Often unrealistic but mathematically convenient. Real-life example: Tecator spectral data. [7]
2. Densely sampled functions with noisy measurements (dense design): measurements $Y_{ij} = X_i(T_{ij}) + \varepsilon_{ij}$, where the $T_{ij}$ are recorded on a regular grid $T_{i1}, \dots, T_{iN_i}$, and $N_i \to \infty$ applies to typical functional data. Real-life example: Berkeley Growth Study data and stock data.
3. Sparsely sampled functions with noisy measurements (longitudinal data): measurements $Y_{ij} = X_i(T_{ij}) + \varepsilon_{ij}$, where the $T_{ij}$ are random times and their number $N_i$ per subject is random and finite. Real-life example: CD4 count data for AIDS patients. [9]
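To make the distinction between designs concrete, the following sketch (an illustration, not from the source; the process, noise level, and grid sizes are arbitrary choices) simulates both a dense and a sparse design from the same underlying Karhunen–Loève expansion.

```python
# A small simulation sketch contrasting the dense and sparse sampling designs
# described above; all settings here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, K, sigma = 30, 5, 0.1                       # subjects, KL components, noise sd

def X(t, A):
    """Latent trajectory via a truncated Karhunen-Loeve expansion (mu = 0)."""
    j = np.arange(1, K + 1)
    lam = 1.0 / ((j - 0.5) * np.pi) ** 2       # Brownian-motion eigenvalues
    phi = np.sqrt(2) * np.sin(np.outer(t, (j - 0.5) * np.pi))
    return phi @ (np.sqrt(lam) * A)

A = rng.standard_normal((n, K))                # FPC scores, one row per subject

# Dense design: every subject observed on the same regular grid, with noise.
t_dense = np.linspace(0, 1, 51)
Y_dense = np.array([X(t_dense, A[i]) for i in range(n)])
Y_dense += sigma * rng.standard_normal(Y_dense.shape)

# Sparse design: a random, small number of random observation times per subject.
T_sparse = [np.sort(rng.uniform(0, 1, rng.integers(3, 8))) for _ in range(n)]
Y_sparse = [X(Ti, A[i]) + sigma * rng.standard_normal(len(Ti))
            for i, Ti in enumerate(T_sparse)]
print(Y_dense.shape, [len(Ti) for Ti in T_sparse[:5]])
```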
Functional principal component analysis (FPCA) is the most prevalent tool in FDA, partly because FPCA facilitates dimension reduction of the inherently infinite-dimensional functional data to a finite-dimensional random vector of scores. More specifically, dimension reduction is achieved by expanding the underlying observed random trajectories $X_i(t)$ in a functional basis consisting of the eigenfunctions of the covariance operator of $X$. Consider the covariance operator $\mathcal{C}$ as in (1), which is a compact operator on the Hilbert space $L^2[0,1]$.

By Mercer's theorem, the kernel function of $\mathcal{C}$, i.e., the covariance function $\Sigma(s,t)$, has spectral decomposition $\Sigma(s,t) = \sum_{k=1}^\infty \lambda_k \varphi_k(s) \varphi_k(t)$, where the series convergence is absolute and uniform, $\lambda_k$ are real-valued nonnegative eigenvalues in descending order, and $\varphi_k$ are the corresponding orthonormal eigenfunctions. By the Karhunen–Loève theorem, the FPCA expansion of an underlying random trajectory is $X_i(t) = \mu(t) + \sum_{k=1}^\infty A_{ik} \varphi_k(t)$, where $A_{ik} = \int_0^1 \big(X_i(t) - \mu(t)\big) \varphi_k(t)\, dt$ are the functional principal components (FPCs), sometimes referred to as scores. The Karhunen–Loève expansion facilitates dimension reduction in the sense that the partial sum converges uniformly, i.e., $\sup_{t \in [0,1]} \mathbb{E}\big[ X_i(t) - \mu(t) - \sum_{k=1}^K A_{ik} \varphi_k(t) \big]^2 \to 0$ as $K \to \infty$, and thus the partial sum with a large enough $K$ yields a good approximation to the infinite sum. Thereby, the information in $X_i$ is reduced from infinite-dimensional to a $K$-dimensional vector $A_i = (A_{i1}, \dots, A_{iK})$ with the approximated process

$$X_i^{(K)}(t) = \mu(t) + \sum_{k=1}^K A_{ik} \varphi_k(t). \qquad (2)$$
Other popular bases include spline, Fourier series and wavelet bases. Important applications of FPCA include the modes of variation and functional principal component regression.
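In practice, the eigenpairs are estimated from a discretized sample covariance. Below is a self-contained sketch (illustrative settings throughout, not a prescribed method) that estimates the mean and eigenfunctions from densely observed noisy curves and recovers the scores and the truncated expansion (2).

```python
# A sketch of empirical FPCA for densely observed curves: estimate the mean,
# eigendecompose the sample covariance as in (1)-(2), and recover the scores
# A_ik by numerical integration. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 101
t = np.linspace(0, 1, m)
w = t[1] - t[0]                                   # uniform-grid quadrature weight

# Generate curves from a 3-component expansion plus measurement noise.
phi_true = np.sqrt(2) * np.sin(np.outer(t, np.array([0.5, 1.5, 2.5]) * np.pi))
A_true = rng.standard_normal((n, 3)) * np.array([1.0, 0.5, 0.25])
Y = A_true @ phi_true.T + 0.05 * rng.standard_normal((n, m))

mu_hat = Y.mean(axis=0)                           # estimated mean function
Yc = Y - mu_hat                                   # centered curves
Sigma_hat = (Yc.T @ Yc) / n                       # sample covariance on the grid

evals, evecs = np.linalg.eigh(Sigma_hat * w)      # discretized covariance operator
evals, evecs = evals[::-1], evecs[:, ::-1]
phi_hat = evecs / np.sqrt(w)                      # eigenfunctions with unit L2 norm

Kc = 3
scores = Yc @ phi_hat[:, :Kc] * w                 # A_ik = int (X_i - mu) phi_k
Y_approx = mu_hat + scores @ phi_hat[:, :Kc].T    # truncated KL reconstruction (2)
print("leading eigenvalues:", np.round(evals[:4], 4))
print("mean reconstruction RMSE:", np.sqrt(((Y - Y_approx) ** 2).mean()))
```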
Functional linear models can be viewed as an extension of the traditional multivariate linear models that associate vector responses with vector covariates. The traditional linear model with scalar response $Y \in \mathbb{R}$ and vector covariate $X \in \mathbb{R}^p$ can be expressed as

$$Y = \beta_0 + \langle X, \beta \rangle + \varepsilon, \qquad (3)$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product in Euclidean space, $\beta_0 \in \mathbb{R}$ and $\beta \in \mathbb{R}^p$ denote the regression coefficients, and $\varepsilon$ is a zero-mean finite-variance random error (noise). Functional linear models can be divided into two types based on the responses.
Replacing the vector covariate $X$ and the coefficient vector $\beta$ in model (3) by a centered functional covariate $X^c(t) = X(t) - \mu(t)$ and a coefficient function $\beta = \beta(t)$ for $t \in [0,1]$, and replacing the inner product in Euclidean space by that in the Hilbert space $L^2$, one arrives at the functional linear model

$$Y = \beta_0 + \int_0^1 X^c(t) \beta(t)\, dt + \varepsilon. \qquad (4)$$

The simple functional linear model (4) can be extended to multiple functional covariates, $\{X_j\}_{j=1}^p$, also including additional vector covariates $Z = (Z_1, \dots, Z_q)$, where $Z \in \mathbb{R}^q$, by

$$Y = \beta_0 + \sum_{k=1}^q Z_k \theta_k + \sum_{j=1}^p \int_0^1 X_j^c(t) \beta_j(t)\, dt + \varepsilon, \qquad (5)$$

where $\theta_k$ is the regression coefficient for $Z_k$, the domain of $X_j$ is $[0,1]$, $X_j^c$ is the centered functional covariate given by $X_j^c(t) = X_j(t) - \mu_j(t)$, and $\beta_j$ is the regression coefficient function for $X_j^c(t)$, for $k = 1, \dots, q$ and $j = 1, \dots, p$. Models (4) and (5) have been studied extensively. [10] [11] [12]
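A common estimation strategy for model (4) is to project the functional covariate onto the leading estimated eigenfunctions and regress the response on the resulting scores. The sketch below uses simulated data; the truncation level K and the data-generating mechanism are arbitrary illustrative choices.

```python
# A sketch of fitting the scalar-response functional linear model (4) by
# regressing Y on the leading FPCA scores of X; illustrative, not prescriptive.
import numpy as np

rng = np.random.default_rng(2)
n, m = 300, 101
t = np.linspace(0, 1, m); w = t[1] - t[0]
phi = np.sqrt(2) * np.sin(np.outer(t, np.array([0.5, 1.5, 2.5]) * np.pi))
A = rng.standard_normal((n, 3)) * np.array([1.0, 0.5, 0.25])
X = A @ phi.T                                          # centered functional covariate
beta_true = 2 * phi[:, 0] - 1 * phi[:, 1]              # true coefficient function
Y = X @ beta_true * w + 0.1 * rng.standard_normal(n)   # model (4) with beta_0 = 0

# Estimate eigenfunctions from the data, project, and regress.
_, evecs = np.linalg.eigh((X.T @ X / n) * w)
phi_hat = evecs[:, ::-1] / np.sqrt(w)
K = 3
S = X @ phi_hat[:, :K] * w                        # estimated FPC scores
b = np.linalg.lstsq(S, Y, rcond=None)[0]          # score-space regression coefficients
beta_hat = phi_hat[:, :K] @ b                     # back to a coefficient function
print("L2 error of beta_hat:", np.sqrt(np.sum((beta_hat - beta_true) ** 2) * w))
```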
Consider a functional response $Y(s)$ on $[0,1]$ and multiple functional covariates $X_j(t)$, $t \in [0,1]$, $j = 1, \dots, p$. Two major models have been considered in this setup. [13] [7] One of these two models, generally referred to as the functional linear model (FLM), can be written as

$$Y(s) = \beta_0(s) + \sum_{j=1}^p \int_0^1 X_j^c(t) \beta_j(s,t)\, dt + \varepsilon(s), \qquad (6)$$

where $\beta_0(s)$ is the functional intercept, for $j = 1, \dots, p$, $X_j^c(t) = X_j(t) - \mu_j(t)$ is a centered functional covariate on $[0,1]$, $\beta_j(s,t)$ are the corresponding functional slopes with the same domain, and $\varepsilon(s)$ is usually a random process with mean zero and finite variance. [13] In this case, at any given time $s$, the value of $Y$, i.e., $Y(s)$, depends on the entire trajectories of the $X_j$. Model (6) has been studied extensively. [14] [15] [16] [17] [18]

In particular, taking $X_j^c(\cdot)$ as a constant function yields a special case of model (6),

$$Y(s) = \beta_0(s) + \sum_{j=1}^p x_j \beta_j(s) + \varepsilon(s),$$

which is a functional linear model with functional responses and scalar covariates $x_j$.

The second of the two major models is given by

$$Y(s) = \beta_0(s) + \sum_{j=1}^p X_j(s) \beta_j(s) + \varepsilon(s), \qquad (7)$$

where $X_1, \dots, X_p$ are functional covariates on $[0,1]$, $\beta_0, \beta_1, \dots, \beta_p$ are coefficient functions defined on the same interval, and $\varepsilon(s)$ is usually assumed to be a random process with mean zero and finite variance. [13] This model assumes that the value of $Y(s)$ depends on the current value of $X_j(s)$ only, and not on its history or future values. Hence, it is a "concurrent regression model", which is also referred to as a "varying-coefficient" model. Further, various estimation methods have been proposed. [19] [20] [21] [22] [23] [24]
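The concurrent model (7) can be estimated pointwise: at each time $s$, ordinary least squares is run across subjects, yielding coefficient curves. A minimal sketch with one functional covariate and simulated data (all settings are illustrative assumptions; in practice the coefficient curves are usually also smoothed):

```python
# A sketch of estimating the concurrent model (7) by pointwise least squares:
# at each grid point s, regress {Y_i(s)} on {X_i(s)} across subjects.
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 101
s = np.linspace(0, 1, m)
X = rng.standard_normal((n, 1)) * np.sin(2 * np.pi * s) \
    + 0.3 * rng.standard_normal((n, m))
beta0 = np.cos(np.pi * s)                       # true functional intercept
beta1 = 1 + s ** 2                              # true varying coefficient
Y = beta0 + beta1 * X + 0.1 * rng.standard_normal((n, m))

beta0_hat = np.empty(m); beta1_hat = np.empty(m)
for j in range(m):                              # separate OLS at each time point
    D = np.column_stack([np.ones(n), X[:, j]])  # design matrix at time s_j
    coef = np.linalg.lstsq(D, Y[:, j], rcond=None)[0]
    beta0_hat[j], beta1_hat[j] = coef
print("max abs errors:",
      np.abs(beta0_hat - beta0).max(), np.abs(beta1_hat - beta1).max())
```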
Direct nonlinear extensions of the classical functional linear models (FLMs) still involve a linear predictor, but combine it with a nonlinear link function, analogous to the idea of the generalized linear model relative to the conventional linear model. Developments towards fully nonparametric regression models for functional data encounter problems such as the curse of dimensionality. In order to bypass the "curse" and the metric selection problem, one is motivated to consider nonlinear functional regression models that are subject to some structural constraints but do not overly constrain flexibility. One desires models that retain polynomial rates of convergence while being more flexible than, say, functional linear models. Such models are particularly useful when diagnostics for the functional linear model indicate lack of fit, which is often encountered in real-life situations. In particular, functional polynomial models, functional single and multiple index models, and functional additive models are three special cases of functional nonlinear regression models.
Functional polynomial regression models may be viewed as a natural extension of the functional linear models (FLMs) with scalar responses, analogous to extending the linear regression model to the polynomial regression model. For a scalar response $Y$ and a functional covariate $X(t)$ with domain $[0,1]$ and the corresponding centered predictor process $X^c$, the simplest and most prominent member of the family of functional polynomial regression models is the quadratic functional regression [25] given as

$$Y = \alpha + \int_0^1 \beta(t) X^c(t)\, dt + \int_0^1 \int_0^1 \gamma(s,t) X^c(s) X^c(t)\, ds\, dt + \varepsilon,$$

where $X^c(t) = X(t) - \mu(t)$ is the centered functional covariate, $\alpha$ is a scalar coefficient, and $\beta(t)$ and $\gamma(s,t)$ are coefficient functions with domains $[0,1]$ and $[0,1]^2$, respectively. In addition to the parameter function $\beta$ that the above functional quadratic regression model shares with the FLM, it also features a parameter surface $\gamma$. By analogy with FLMs with scalar responses, estimation of functional polynomial models can be obtained through expanding both the centered covariate $X^c$ and the coefficient functions $\beta$ and $\gamma$ in an orthonormal basis. [25] [26]
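After the basis expansion, the quadratic model reduces to ordinary least squares on the scores and their pairwise products. A sketch under simulated data (projecting onto the true basis for brevity; in practice one would use estimated eigenfunctions as in the FPCA sketch above):

```python
# A sketch of estimating the quadratic functional regression by expanding X^c
# in its first K basis functions: the model then becomes an ordinary linear
# regression on the scores and their pairwise products. Settings are illustrative.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(4)
n, m, K = 400, 101, 3
t = np.linspace(0, 1, m); w = t[1] - t[0]
phi = np.sqrt(2) * np.sin(np.outer(t, np.array([0.5, 1.5, 2.5]) * np.pi))
A = rng.standard_normal((n, K)) * np.array([1.0, 0.5, 0.25])   # true scores
Xc = A @ phi.T                                                  # centered covariate

# True response: a linear part in the first score plus one quadratic interaction.
Y = 1.0 + 2.0 * A[:, 0] + 1.5 * A[:, 0] * A[:, 1] + 0.1 * rng.standard_normal(n)

# Score estimates and the quadratic feature expansion.
S = Xc @ phi * w                                # project onto the (here: true) basis
pairs = list(combinations_with_replacement(range(K), 2))
Q = np.column_stack([S[:, j] * S[:, k] for j, k in pairs])
D = np.column_stack([np.ones(n), S, Q])         # intercept | linear | quadratic terms
coef = np.linalg.lstsq(D, Y, rcond=None)[0]
print("alpha, beta-scores, gamma-scores:", np.round(coef, 2))
```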
A functional multiple index model is given below, with symbols having their usual meanings as formerly described:

$$Y = g\left( \int_0^1 X^c(t) \beta_1(t)\, dt, \dots, \int_0^1 X^c(t) \beta_p(t)\, dt \right) + \varepsilon.$$

Here $g$ represents an (unknown) general smooth function defined on a $p$-dimensional domain. The case $p = 1$ yields a functional single index model, while multiple index models correspond to the case $p > 1$. However, for $p > 1$, this model is problematic due to the curse of dimensionality. With $p > 1$ and relatively small sample sizes, the estimator given by this model often has large variance. [27] [28]
For a given orthonormal basis $\{\phi_k\}_{k=1}^\infty$ on $L^2[0,1]$, we can expand $X^c(t) = \sum_{k=1}^\infty x_k \phi_k(t)$ on the domain $[0,1]$.

A functional linear model with scalar responses (see (3)) can thus be written as

$$\mathbb{E}(Y \mid X) = \beta_0 + \sum_{k=1}^\infty \beta_k x_k.$$

One form of FAMs is obtained by replacing the linear function of $x_k$ in the above expression (i.e., $\beta_k x_k$) by a general smooth function $f_k$, analogous to the extension of multiple linear regression models to additive models, and is expressed as

$$\mathbb{E}(Y \mid X) = \mathbb{E}(Y) + \sum_{k=1}^\infty f_k(x_k),$$

where $f_k$ satisfies $\mathbb{E}(f_k(x_k)) = 0$ for $k \in \mathbb{N}$. [13] [7] This constraint on the general smooth functions $f_k$ ensures identifiability, in the sense that the estimates of these additive component functions do not interfere with that of the intercept term $\mathbb{E}(Y)$. Another form of FAM is the continuously additive model, [29] expressed as

$$\mathbb{E}(Y \mid X) = \mathbb{E}(Y) + \int_0^1 g(t, X(t))\, dt$$

for a bivariate smooth additive surface $g: [0,1] \times \mathbb{R} \to \mathbb{R}$ which is required to satisfy $\mathbb{E}[g(t, X(t))] = 0$ for all $t \in [0,1]$, in order to ensure identifiability.
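A crude sketch of fitting the score-based form of a FAM follows. Each additive component $f_k$ is modeled here with a centered low-degree polynomial basis, an expedient stand-in for the spline or kernel smoothers used in practice; the centering enforces the identifiability constraint $\mathbb{E}(f_k(x_k)) = 0$ in-sample, so the intercept estimates $\mathbb{E}(Y)$.

```python
# A sketch of a functional additive model fit on FPC scores, using centered
# low-degree polynomial features per component as a crude smoother (an
# assumption made for brevity; splines or local smoothers are more common).
import numpy as np

rng = np.random.default_rng(5)
n, K = 500, 2
S = rng.standard_normal((n, K)) * np.array([1.0, 0.5])   # FPC scores (assumed given)
Y = np.sin(S[:, 0]) + S[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

deg = 3
# Additive design: centered polynomial features of each score, plus an intercept.
feats = []
for k in range(K):
    P = np.column_stack([S[:, k] ** d for d in range(1, deg + 1)])
    feats.append(P - P.mean(axis=0))            # centering enforces E f_k = 0
D = np.column_stack([np.ones(n)] + feats)
coef = np.linalg.lstsq(D, Y, rcond=None)[0]
print("intercept (approx. E[Y]):", round(coef[0], 3))
```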
An obvious and direct extension of FLMs with scalar responses (see (3)) is to add a link function, leading to a generalized functional linear model (GFLM) [30] in analogy to the generalized linear model (GLM). The three components of the GFLM are:

1. a linear predictor $\eta = \beta_0 + \int_0^1 X^c(t) \beta(t)\, dt$;
2. a variance function $\operatorname{Var}(Y \mid X) = V(\mu)$, where $\mu = \mathbb{E}(Y \mid X)$ is the conditional mean;
3. a link function $g$ connecting the conditional mean and the linear predictor through $\mu = g(\eta)$.
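As a concrete instance, functional logistic regression takes a binary response with a logit link. The sketch below (simulated data; scikit-learn's LogisticRegression is used for convenience, and the scores are computed against the true basis for brevity) fits the GFLM on FPC scores:

```python
# A sketch of a generalized functional linear model with a logit link
# (functional logistic regression): project the functional covariate onto
# leading basis functions and fit a GLM on the scores. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, m, K = 500, 101, 3
t = np.linspace(0, 1, m); w = t[1] - t[0]
phi = np.sqrt(2) * np.sin(np.outer(t, np.array([0.5, 1.5, 2.5]) * np.pi))
A = rng.standard_normal((n, K)) * np.array([1.0, 0.5, 0.25])
Xc = A @ phi.T                                      # centered functional covariate

eta = 1.0 + 2.0 * A[:, 0] - 1.0 * A[:, 1]           # linear predictor beta_0 + <X^c, beta>
Y = rng.binomial(1, 1 / (1 + np.exp(-eta)))         # binary response via the logit link

S = Xc @ phi * w                                    # FPC scores (true basis, for brevity)
clf = LogisticRegression().fit(S, Y)
beta_hat = phi @ clf.coef_.ravel()                  # estimated coefficient function beta(t)
print("in-sample accuracy:", round(clf.score(S, Y), 3))
print("intercept:", round(float(clf.intercept_[0]), 2),
      "score coefficients:", np.round(clf.coef_.ravel(), 2))
```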
For vector-valued multivariate data, k-means partitioning methods and hierarchical clustering are two main approaches. These classical clustering concepts for vector-valued multivariate data have been extended to functional data. For clustering of functional data, k-means clustering methods are more popular than hierarchical clustering methods. For k-means clustering on functional data, mean functions are usually regarded as the cluster centers. Covariance structures have also been taken into consideration. [31] Besides k-means type clustering, functional clustering [32] based on mixture models is also widely used in clustering vector-valued multivariate data and has been extended to functional data clustering. [33] [34] [35] [36] [37] Furthermore, Bayesian hierarchical clustering also plays an important role in the development of model-based functional clustering. [38] [39] [40] [41]
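A minimal sketch of the score-based variant of functional k-means clustering (simulated two-group data; the number of components and clusters are illustrative choices):

```python
# A sketch of k-means clustering of functional data on their FPCA scores,
# a common dimension-reduction-then-cluster strategy; illustrative settings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n, m = 100, 101
t = np.linspace(0, 1, m); w = t[1] - t[0]
group = rng.integers(0, 2, n)                          # true cluster labels
mean_fns = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
Y = mean_fns[group] + 0.4 * rng.standard_normal((n, m))

Yc = Y - Y.mean(axis=0)
evals, evecs = np.linalg.eigh((Yc.T @ Yc / n) * w)
phi_hat = evecs[:, ::-1][:, :2] / np.sqrt(w)           # two leading eigenfunctions
S = Yc @ phi_hat * w                                   # 2-dimensional score vectors

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(S)
agree = max(np.mean(labels == group), np.mean(labels != group))  # up to label swap
print("clustering agreement with truth:", round(agree, 3))
```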
Functional classification assigns a group membership to a new data object based either on functional regression or on functional discriminant analysis. Functional data classification methods based on functional regression models use class labels as responses and the observed functional data and other covariates as predictors. For regression-based functional classification models, functional generalized linear models, or more specifically functional binary regression, such as functional logistic regression for binary responses, are commonly used classification approaches. More generally, the generalized functional linear regression model based on the FPCA approach is used. [42] Functional linear discriminant analysis (FLDA) has also been considered as a classification method for functional data. [43] [44] [45] [46] [47] Functional data classification involving density ratios has also been proposed. [48] A study of the asymptotic behavior of such classifiers in the large sample limit shows that, under certain conditions, the misclassification rate converges to zero, a phenomenon that has been referred to as "perfect classification". [49]
In addition to amplitude variation, [50] time variation may also be assumed to be present in functional data. Time variation occurs when the subject-specific timing of certain events of interest varies among subjects. One classical example is the Berkeley Growth Study data, [51] where the amplitude variation is the growth rate and the time variation explains the difference in children's biological age at which the pubertal and pre-pubertal growth spurts occur. In the presence of time variation, the cross-sectional mean function may not be an efficient estimate, as peaks and troughs are located randomly and thus meaningful signals may be distorted or hidden.
Time warping, also known as curve registration, [52] curve alignment or time synchronization, aims to identify and separate amplitude variation and time variation. If both time and amplitude variation are present, then the observed functional data $Y_i$ can be modeled as $Y_i(t) = X_i[h_i^{-1}(t)]$, where $X_i$ is a latent amplitude function and $h_i$ is a latent time warping function that corresponds to a cumulative distribution function. The time warping functions $h_i$ are assumed to be invertible and to satisfy $\mathbb{E}[h_i(t)] = t$.
The simplest case of a family of warping functions to specify phase variation is a linear transformation, that is $h(t) = \delta + \gamma t$, which warps the time of an underlying template function by a subject-specific shift and scale. A more general class of warping functions comprises diffeomorphisms of the domain to itself; that is, loosely speaking, a class of invertible functions that map the compact domain to itself such that both the function and its inverse are smooth. The set of linear transformations is contained in the set of diffeomorphisms. [53] One challenge in time warping is the identifiability of amplitude and phase variation; specific assumptions are required to break this non-identifiability.
Earlier approaches include dynamic time warping (DTW) used for applications such as speech recognition. [54] Another traditional method for time warping is landmark registration, [55] [56] which aligns special features such as peak locations to an average location. Other relevant warping methods include pairwise warping, [57] registration using distance [53] and elastic warping. [58]
The template function is determined through an iterative process, starting from the cross-sectional mean, performing registration, and recalculating the cross-sectional mean for the warped curves, with convergence expected after a few iterations. DTW minimizes a cost function through dynamic programming. Problems of non-smooth or non-differentiable warps and of greedy computation in DTW can be resolved by adding a regularization term to the cost function.
Landmark registration (or feature alignment) assumes well-expressed features are present in all sample curves and uses the location of such features as a gold-standard. Special features such as peak or trough locations in functions or derivatives are aligned to their average locations on the template function. [53] Then the warping function is introduced through a smooth transformation from the average location to the subject-specific locations. A problem of landmark registration is that the features may be missing or hard to identify due to the noise in the data.
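A toy sketch of landmark registration follows; a single peak per curve serves as the landmark, and the piecewise-linear warp and Gaussian-bump curves are illustrative assumptions rather than a standard recipe.

```python
# A sketch of landmark registration: each curve has one peak whose location
# varies across subjects; the warp maps the average peak location to each
# subject-specific one by piecewise-linear interpolation. Illustrative only.
import numpy as np

rng = np.random.default_rng(8)
n, m = 20, 201
t = np.linspace(0, 1, m)
peaks = 0.5 + 0.15 * (rng.random(n) - 0.5)                # subject-specific peaks
Y = np.exp(-((t[None, :] - peaks[:, None]) / 0.1) ** 2)   # one-peak sample curves

landmarks = t[Y.argmax(axis=1)]                     # estimated peak per curve
target = landmarks.mean()                           # average location (template landmark)

Y_reg = np.empty_like(Y)
for i in range(n):
    # Warping function h_i: [0,1] -> [0,1], piecewise linear, h_i(target) = landmark_i.
    h = np.interp(t, [0.0, target, 1.0], [0.0, landmarks[i], 1.0])
    Y_reg[i] = np.interp(h, t, Y[i])                # aligned curve Y_i(h_i(t))
print("peak spread before:", landmarks.std().round(4),
      "after:", t[Y_reg.argmax(axis=1)].std().round(4))
```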
So far we have considered scalar-valued stochastic processes $X(t)$ defined on a one-dimensional time domain.

The domain of $X$ can be in $\mathbb{R}^p$; for example, the data could be a sample of random surfaces. [59] [60]

The range set of the stochastic process may be extended from $\mathbb{R}$ to $\mathbb{R}^p$ [61] [62] [63] and further to nonlinear manifolds, [64] Hilbert spaces [65] and eventually to metric spaces. [59]
There are Python packages for working with functional data: they cover the representation of functional data, exploratory analysis and preprocessing, as well as other tasks such as inference, classification, regression and clustering of functional data.
Some packages can handle functional data under both dense and longitudinal designs.