Sufficient dimension reduction

In statistics, sufficient dimension reduction (SDR) is a paradigm for analyzing data that combines the ideas of dimension reduction with the concept of sufficiency.

Dimension reduction has long been a primary goal of regression analysis. Given a response variable $y$ and a $p$-dimensional predictor vector $\mathbf{x}$, regression analysis aims to study the distribution of $y \mid \mathbf{x}$, the conditional distribution of $y$ given $\mathbf{x}$. A dimension reduction is a function $R(\mathbf{x})$ that maps $\mathbf{x}$ to a subset of $\mathbb{R}^k$, $k < p$, thereby reducing the dimension of $\mathbf{x}$.[1] For example, $R(\mathbf{x})$ may be one or more linear combinations of $\mathbf{x}$.

A dimension reduction $R(\mathbf{x})$ is said to be sufficient if the distribution of $y \mid R(\mathbf{x})$ is the same as that of $y \mid \mathbf{x}$. In other words, no information about the regression is lost in reducing the dimension of $\mathbf{x}$ if the reduction is sufficient.[1]

Graphical motivation

In a regression setting, it is often useful to summarize the distribution of $y \mid \mathbf{x}$ graphically. For instance, one may consider a scatterplot of $y$ versus one or more of the predictors or a linear combination of the predictors. A scatterplot that contains all available regression information is called a sufficient summary plot.

When $\mathbf{x}$ is high-dimensional, particularly when $p \geq 3$, it becomes increasingly challenging to construct and visually interpret sufficient summary plots without reducing the data. Even three-dimensional scatterplots must be viewed via a computer program, and the third dimension can only be visualized by rotating the coordinate axes. However, if there exists a sufficient dimension reduction $R(\mathbf{x})$ with small enough dimension, a sufficient summary plot of $y$ versus $R(\mathbf{x})$ may be constructed and visually interpreted with relative ease.
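For instance, consider the following minimal simulated sketch (the single-index model, the direction b, and all parameter values are illustrative choices made here, not taken from the cited references). Ten predictors influence the response only through one linear combination, so a two-dimensional plot of $y$ against that combination is a sufficient summary plot:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)

    # Ten predictors, but y depends on x only through the single index b'x
    n, p = 400, 10
    b = rng.normal(size=p)
    X = rng.normal(size=(n, p))
    y = np.sin(X @ b / 2.0) + 0.1 * rng.normal(size=n)

    # Sufficient summary plot: all regression information in two dimensions
    plt.scatter(X @ b, y, s=10)
    plt.xlabel("b' x  (the sufficient reduction)")
    plt.ylabel("y")
    plt.show()

A scatterplot of $y$ against any single predictor, by contrast, would obscure the index structure behind the variation contributed by the other nine coordinates.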

Hence sufficient dimension reduction allows for graphical intuition about the distribution of $y \mid \mathbf{x}$, which might not have otherwise been available for high-dimensional data.

Most graphical methodology focuses primarily on dimension reduction involving linear combinations of $\mathbf{x}$. The rest of this article deals only with such reductions.

Dimension reduction subspace

Suppose $R(\mathbf{x}) = A^\top \mathbf{x}$ is a sufficient dimension reduction, where $A$ is a $p \times k$ matrix with rank $k \leq p$. Then the regression information for $y \mid \mathbf{x}$ can be inferred by studying the distribution of $y \mid A^\top \mathbf{x}$, and the plot of $y$ versus $A^\top \mathbf{x}$ is a sufficient summary plot.

Without loss of generality, only the space spanned by the columns of $A$ need be considered. Let $\eta$ be a basis for the column space of $A$, and let the space spanned by $\eta$ be denoted by $\mathcal{S}(\eta)$. It follows from the definition of a sufficient dimension reduction that

$$F_{y \mid \mathbf{x}} = F_{y \mid \eta^\top \mathbf{x}},$$

where $F$ denotes the appropriate distribution function. Another way to express this property is

$$y \perp\!\!\!\perp \mathbf{x} \mid \eta^\top \mathbf{x},$$

or $y$ is conditionally independent of $\mathbf{x}$, given $\eta^\top \mathbf{x}$. Then the subspace $\mathcal{S}(\eta)$ is defined to be a dimension reduction subspace (DRS).[2]

Structural dimensionality

For a regression $y \mid \mathbf{x}$, the structural dimension, $d$, is the smallest number of distinct linear combinations of $\mathbf{x}$ necessary to preserve the conditional distribution of $y \mid \mathbf{x}$. In other words, the smallest dimension reduction that is still sufficient maps $\mathbf{x}$ to a subset of $\mathbb{R}^d$. The corresponding DRS will be $d$-dimensional.[2]

Minimum dimension reduction subspace

A subspace $\mathcal{S}$ is said to be a minimum DRS for $y \mid \mathbf{x}$ if it is a DRS and its dimension is less than or equal to that of all other DRSs for $y \mid \mathbf{x}$. A minimum DRS is not necessarily unique, but its dimension is equal to the structural dimension $d$ of $y \mid \mathbf{x}$, by definition.[2]

If $\mathcal{S}$ has basis $\eta$ and is a minimum DRS, then a plot of $y$ versus $\eta^\top \mathbf{x}$ is a minimal sufficient summary plot, and it is $(d + 1)$-dimensional.

Central subspace

If a subspace $\mathcal{S}$ is a DRS for $y \mid \mathbf{x}$, and if $\mathcal{S} \subseteq \mathcal{S}_{\mathrm{drs}}$ for all other DRSs $\mathcal{S}_{\mathrm{drs}}$, then it is a central dimension reduction subspace, or simply a central subspace, and it is denoted by $\mathcal{S}_{y \mid \mathbf{x}}$. In other words, a central subspace for $y \mid \mathbf{x}$ exists if and only if the intersection of all dimension reduction subspaces is also a dimension reduction subspace, and that intersection is the central subspace $\mathcal{S}_{y \mid \mathbf{x}}$.[2]

The central subspace $\mathcal{S}_{y \mid \mathbf{x}}$ does not necessarily exist because the intersection of all DRSs is not necessarily a DRS. However, if $\mathcal{S}_{y \mid \mathbf{x}}$ does exist, then it is also the unique minimum dimension reduction subspace.[2]

Existence of the central subspace

While the existence of the central subspace is not guaranteed in every regression situation, there are some rather broad conditions under which its existence follows directly. For example, consider the following proposition from Cook (1998):

Let $\mathcal{S}_1$ and $\mathcal{S}_2$ be dimension reduction subspaces for $y \mid \mathbf{x}$. If $\mathbf{x}$ has density $f(a) > 0$ for all $a \in \Omega_{\mathbf{x}}$ and $f(a) = 0$ everywhere else, where $\Omega_{\mathbf{x}}$ is convex, then the intersection $\mathcal{S}_1 \cap \mathcal{S}_2$ is also a dimension reduction subspace.

It follows from this proposition that the central subspace $\mathcal{S}_{y \mid \mathbf{x}}$ exists for such $\mathbf{x}$.[2]

Methods for dimension reduction

There are many existing methods for dimension reduction, both graphical and numeric. For example, sliced inverse regression (SIR) and sliced average variance estimation (SAVE) were introduced in the 1990s and continue to be widely used.[3] Although SIR was originally designed to estimate an effective dimension reducing subspace, it is now understood that it estimates only the central subspace, which is generally different.
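The basic SIR estimator is straightforward to sketch. The following minimal implementation is illustrative only (it assumes NumPy; the function name sir_directions and the equal-count slicing scheme are choices made here, not prescribed by the reference): standardize the predictors, partition the observations into slices ordered by the response, and take the leading eigenvectors of the weighted covariance matrix of the within-slice means.

    import numpy as np

    def sir_directions(X, y, n_slices=10, n_directions=1):
        """Minimal sliced inverse regression (SIR) sketch."""
        n, p = X.shape

        # Standardize the predictors: z = Sigma^(-1/2) (x - mean)
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
        Z = (X - mean) @ inv_sqrt

        # Slice the observations into roughly equal-sized groups by y
        order = np.argsort(y)
        slices = np.array_split(order, n_slices)

        # Weighted covariance of the within-slice means of z
        M = np.zeros((p, p))
        for idx in slices:
            z_bar = Z[idx].mean(axis=0)
            M += (len(idx) / n) * np.outer(z_bar, z_bar)

        # Leading eigenvectors of M, mapped back to the original x scale
        m_vals, m_vecs = np.linalg.eigh(M)
        return inv_sqrt @ m_vecs[:, ::-1][:, :n_directions]

Under a linearity condition on the predictors, the span of the returned directions estimates a subspace of the central subspace.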

More recent methods for dimension reduction include likelihood-based sufficient dimension reduction,[4] estimating the central subspace based on the inverse third moment (or kth moment),[5] estimating the central solution space,[6] graphical regression,[2] the envelope model, and the principal support vector machine.[7] For more details on these and other methods, consult the statistical literature.

Principal components analysis (PCA) and similar methods for dimension reduction are not based on the sufficiency principle.

Example: linear regression

Consider the regression model

$$y = \beta^\top \mathbf{x} + \varepsilon, \qquad \varepsilon \perp\!\!\!\perp \mathbf{x}.$$

Note that the distribution of $y \mid \mathbf{x}$ is the same as the distribution of $y \mid \beta^\top \mathbf{x}$. Hence, the span of $\beta$ is a dimension reduction subspace. Also, the span of $\beta$ is 1-dimensional (unless $\beta = 0$), so the structural dimension of this regression is $d = 1$.

The OLS estimate $\hat{\beta}$ of $\beta$ is consistent, and so the span of $\hat{\beta}$ is a consistent estimator of $\mathcal{S}_{y \mid \mathbf{x}}$. The plot of $y$ versus $\hat{\beta}^\top \mathbf{x}$ is a sufficient summary plot for this regression.
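A minimal simulated sketch of this example (assuming NumPy and matplotlib; the coefficient values and sample size are arbitrary choices made here) fits OLS and draws the corresponding sufficient summary plot of $y$ against $\hat{\beta}^\top \mathbf{x}$:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    # Simulate y = beta' x + eps with eps independent of x
    n, p = 500, 5
    beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(scale=0.5, size=n)

    # OLS estimate; its span consistently estimates the central subspace
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Two-dimensional sufficient summary plot: y versus beta_hat' x
    plt.scatter(X @ beta_hat, y, s=10)
    plt.xlabel("estimated reduction  beta_hat' x")
    plt.ylabel("y")
    plt.show()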

Notes

  1. Cook, R.D. and Adragni, K.P. (2009) "Sufficient Dimension Reduction and Prediction in Regression", Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367(1906): 4385–4405
  2. Cook, R.D. (1998) Regression Graphics: Ideas for Studying Regressions Through Graphics, Wiley. ISBN 0471193658
  3. Li, K-C. (1991) "Sliced Inverse Regression for Dimension Reduction", Journal of the American Statistical Association, 86(414): 316–327
  4. Cook, R.D. and Forzani, L. (2009) "Likelihood-Based Sufficient Dimension Reduction", Journal of the American Statistical Association, 104(485): 197–208
  5. Yin, X. and Cook, R.D. (2003) "Estimating Central Subspaces via Inverse Third Moments", Biometrika, 90(1): 113–125
  6. Li, B. and Dong, Y.D. (2009) "Dimension Reduction for Nonelliptically Distributed Predictors", Annals of Statistics, 37(3): 1272–1298
  7. Li, Bing; Artemiou, Andreas; Li, Lexin (2011) "Principal Support Vector Machines for Linear and Nonlinear Sufficient Dimension Reduction", The Annals of Statistics, 39(6): 3182–3210. arXiv:1203.2790, doi:10.1214/11-AOS932
