Partial least squares regression


Partial least squares regression (PLS regression) is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space. Because both the X and Y data are projected to new spaces, the methods in the PLS family are known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when Y is categorical.


PLS is used to find the fundamental relations between two matrices (X and Y), i.e. a latent variable approach to modeling the covariance structures in these two spaces. A PLS model will try to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among X values. By contrast, standard regression will fail in these cases (unless it is regularized).
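For illustration, the following minimal sketch (using scikit-learn's PLSRegression on made-up synthetic data; the variable names and sizes are arbitrary) fits a two-component PLS model in exactly this regime, with far more collinear predictors than observations, where ordinary least squares would be ill-posed:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, m = 30, 100                         # fewer observations than predictors
latent = rng.normal(size=(n, 2))       # two hidden factors drive both X and y
X = latent @ rng.normal(size=(2, m)) + 0.1 * rng.normal(size=(n, m))
y = latent @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=n)

pls = PLSRegression(n_components=2)    # project onto two latent components
pls.fit(X, y)
y_hat = pls.predict(X).ravel()
print(np.corrcoef(y, y_hat)[0, 1])     # in-sample fit obtained from just two components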

Partial least squares was introduced by the Swedish statistician Herman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS (and more correct according to Svante Wold [1] ) is projection to latent structures, but the term partial least squares is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology.

Underlying model

The general underlying model of multivariate PLS with l latent components is

X = T Pᵀ + E
Y = U Qᵀ + F

where X is an n × m matrix of predictors, Y is an n × p matrix of responses; T and U are n × l matrices that are, respectively, projections of X (the X score, component or factor matrix) and projections of Y (the Y scores); P and Q are, respectively, m × l and p × l orthogonal loading matrices; and the matrices E and F are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of X and Y are made so as to maximise the covariance between T and U.
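This bilinear structure can be inspected numerically. The following minimal sketch (again using scikit-learn's PLSRegression, whose fitted attributes x_scores_ and x_loadings_ play the roles of T and P; scale=False so that only centering is applied internally) reconstructs X from its scores and loadings and measures the size of the residual E:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))                    # n = 50 observations, m = 8 predictors
Y = rng.normal(size=(50, 2))                    # p = 2 responses

pls = PLSRegression(n_components=3, scale=False).fit(X, Y)
T, P = pls.x_scores_, pls.x_loadings_           # T is n x l, P is m x l
Xc = X - X.mean(axis=0)                         # the model is fitted to centred data
E = Xc - T @ P.T                                # residual in X = T Pᵀ + E
print(np.linalg.norm(E) / np.linalg.norm(Xc))   # relative size of the residual after 3 components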

Algorithms

A number of variants of PLS exist for estimating the factor and loading matrices T, U, P and Q. Most of them construct estimates of the linear regression between X and Y as Ŷ = X B + B_0. Some PLS algorithms are only appropriate for the case where Y is a column vector, while others deal with the general case of a matrix Y. Algorithms also differ on whether they estimate the factor matrix T as an orthogonal (that is, orthonormal) matrix or not. [2] [3] [4] [5] [6] [7] The final prediction will be the same for all these varieties of PLS, but the components will differ.

PLS1

PLS1 is a widely used algorithm appropriate for the vector y case. It estimates T as an orthonormal matrix. In pseudocode it is expressed below (capital letters are matrices, lower-case letters are vectors if they are superscripted and scalars if they are subscripted):

function PLS1(X, y, l)
    X^(0) ← X
    w^(0) ← Xᵀy / ‖Xᵀy‖, an initial estimate of w
    for k = 0 to l − 1
        t^(k) ← X^(k) w^(k)
        t_k ← t^(k)ᵀ t^(k)                (note this is a scalar)
        t^(k) ← t^(k) / t_k
        p^(k) ← X^(k)ᵀ t^(k)
        q_k ← yᵀ t^(k)                    (note this is a scalar)
        if q_k = 0
            l ← k, break the for loop
        if k < (l − 1)
            X^(k+1) ← X^(k) − t_k t^(k) p^(k)ᵀ
            w^(k+1) ← X^(k+1)ᵀ y
    end for
    define W to be the matrix with columns w^(0), w^(1), ..., w^(l−1);
        form the matrix P and the vector q in the same way from the p^(k) and the q_k
    B ← W (PᵀW)⁻¹ q
    B_0 ← q_0 − p^(0)ᵀ B
    return B, B_0

This form of the algorithm does not require centering of the input X and y, as this is performed implicitly by the algorithm. This algorithm features 'deflation' of the matrix X (subtraction of t_k t^(k) p^(k)ᵀ), but deflation of the vector y is not performed, as it is not necessary (it can be proved that deflating y yields the same results as not deflating [8] ). The user-supplied variable l is the limit on the number of latent factors in the regression; if it equals the rank of the matrix X, the algorithm will yield the least squares regression estimates for B and B_0.
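For concreteness, a direct NumPy transcription of the pseudocode might look as follows. This is only an illustrative sketch (the variable names mirror the pseudocode and no input checking is done), not a reference implementation:

import numpy as np

def pls1(X, y, l):
    # PLS1 (NIPALS) for a single response vector y, following the pseudocode above
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, m = X.shape
    W = np.zeros((m, l))              # weight vectors w^(k), one column per latent factor
    P = np.zeros((m, l))              # loading vectors p^(k)
    q = np.zeros(l)                   # scalars q_k
    Xk = X.copy()
    w = X.T @ y
    w = w / np.linalg.norm(w)         # initial estimate of w
    for k in range(l):
        W[:, k] = w
        t = Xk @ w                    # score vector t^(k)
        tk = t @ t                    # scalar t_k
        t = t / tk
        P[:, k] = Xk.T @ t
        q[k] = y @ t                  # scalar q_k
        if q[k] == 0:                 # no covariance with y left: stop early
            l = k
            W, P, q = W[:, :l], P[:, :l], q[:l]
            break
        if k < l - 1:
            Xk = Xk - tk * np.outer(t, P[:, k])   # 'deflation' of X
            w = Xk.T @ y
    B = W @ np.linalg.solve(P.T @ W, q)           # B = W (PᵀW)⁻¹ q
    B0 = q[0] - P[:, 0] @ B
    return B, B0

Predictions for new rows X_new are then B0 + X_new @ B, and, as noted above, taking l equal to the rank of X reproduces the ordinary least squares estimates.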

Extensions

In 2002, a new method called orthogonal projections to latent structures (OPLS) was published. In OPLS, continuous variable data are separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictive power, of the PLS models. [9] Similarly, OPLS-DA (discriminant analysis) may be applied when working with discrete variables, as in classification and biomarker studies.

Another extension of PLS regression, named L-PLS for its L-shaped matrices, connects three related data blocks to improve predictability. [10] In brief, a new Z matrix, with the same number of columns as the X matrix, is added to the PLS regression analysis and may be suitable for including additional background information on the interdependence of the predictor variables.

In 2015, partial least squares was related to a procedure called the three-pass regression filter (3PRF). [11] When the number of observations and the number of variables are both large, the 3PRF (and hence PLS) is asymptotically normal for the "best" forecast implied by a linear latent factor model. In stock market data, PLS has been shown to provide accurate out-of-sample forecasts of returns and cash-flow growth. [12]

A PLS version based on singular value decomposition (SVD) provides a memory-efficient implementation that can be used to address high-dimensional problems, such as relating millions of genetic markers to thousands of imaging features in imaging genetics, on consumer-grade hardware. [13]

PLS correlation (PLSC) is another methodology related to PLS regression [14] that has been used in neuroimaging [14] [15] [16] and, more recently, in sport science [17] to quantify the strength of the relationship between data sets. Typically, PLSC divides the data into two blocks (sub-groups), each containing one or more variables, and then uses singular value decomposition (SVD) to establish the strength of any relationship (i.e. the amount of shared information) that might exist between the two component sub-groups. [18] It does this by using SVD to determine the inertia (i.e. the sum of the singular values) of the covariance matrix of the sub-groups under consideration. [18] [14]
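As a sketch of the computation just described, the following assumes two data blocks X and Y with matched rows, z-scores each block column-wise (a common, though not universal, convention), and takes the SVD of the resulting cross-block covariance matrix; the inertia is the sum of the singular values. Significance is usually assessed afterwards by permutation or bootstrap procedures, which are omitted here:

import numpy as np

def plsc_inertia(X, Y):
    # z-score each block so the cross-product below is a correlation-scale covariance matrix
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    Yz = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)
    R = Xz.T @ Yz / (len(X) - 1)                  # cross-block covariance matrix
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return s.sum(), U, s, Vt                      # inertia plus the singular vectors (saliences)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                      # block 1, e.g. performance variables
Y = rng.normal(size=(40, 3))                      # block 2, e.g. physiological variables
inertia, U, s, Vt = plsc_inertia(X, Y)
print(inertia)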

References

  1. Wold, S.; Sjöström, M.; Eriksson, L. (2001). "PLS-regression: a basic tool of chemometrics". Chemometrics and Intelligent Laboratory Systems. 58 (2): 109–130. doi:10.1016/S0169-7439(01)00155-1.
  2. Lindgren, F.; Geladi, P.; Wold, S. (1993). "The kernel algorithm for PLS". J. Chemometrics. 7: 45–59. doi:10.1002/cem.1180070104. S2CID 122950427.
  3. de Jong, S.; ter Braak, C.J.F. (1994). "Comments on the PLS kernel algorithm". J. Chemometrics. 8 (2): 169–174. doi:10.1002/cem.1180080208.
  4. Dayal, B.S.; MacGregor, J.F. (1997). "Improved PLS algorithms". J. Chemometrics. 11 (1): 73–85. doi:10.1002/(SICI)1099-128X(199701)11:1<73::AID-CEM435>3.0.CO;2-#.
  5. de Jong, S. (1993). "SIMPLS: an alternative approach to partial least squares regression". Chemometrics and Intelligent Laboratory Systems. 18 (3): 251–263. doi:10.1016/0169-7439(93)85002-X.
  6. Rannar, S.; Lindgren, F.; Geladi, P.; Wold, S. (1994). "A PLS Kernel Algorithm for Data Sets with Many Variables and Fewer Objects. Part 1: Theory and Algorithm". J. Chemometrics. 8 (2): 111–125. doi:10.1002/cem.1180080204. S2CID 121613293.
  7. Abdi, H. (2010). "Partial least squares regression and projection on latent structure regression (PLS-Regression)". Wiley Interdisciplinary Reviews: Computational Statistics. 2: 97–106. doi:10.1002/wics.51.
  8. Höskuldsson, Agnar (1988). "PLS Regression Methods". Journal of Chemometrics. 2 (3): 219. doi:10.1002/cem.1180020306. S2CID 120052390.
  9. Trygg, J.; Wold, S. (2002). "Orthogonal Projections to Latent Structures". Journal of Chemometrics. 16 (3): 119–128. doi:10.1002/cem.695. S2CID 122699039.
  10. Sæbøa, S.; Almøya, T.; Flatbergb, A.; Aastveita, A.H.; Martens, H. (2008). "LPLS-regression: a method for prediction and classification under the influence of background information on predictor variables". Chemometrics and Intelligent Laboratory Systems. 91 (2): 121–132. doi:10.1016/j.chemolab.2007.10.006.
  11. Kelly, Bryan; Pruitt, Seth (2015-06-01). "The three-pass regression filter: A new approach to forecasting using many predictors". Journal of Econometrics. High Dimensional Problems in Econometrics. 186 (2): 294–316. doi:10.1016/j.jeconom.2015.02.011.
  12. Kelly, Bryan; Pruitt, Seth (2013-10-01). "Market Expectations in the Cross-Section of Present Values". The Journal of Finance. 68 (5): 1721–1756. CiteSeerX 10.1.1.498.5973. doi:10.1111/jofi.12060. ISSN 1540-6261.
  13. Lorenzi, Marco; Altmann, Andre; Gutman, Boris; Wray, Selina; Arber, Charles; Hibar, Derrek P.; Jahanshad, Neda; Schott, Jonathan M.; Alexander, Daniel C. (2018-03-20). "Susceptibility of brain atrophy to TRIB3 in Alzheimer's disease, evidence from functional prioritization in imaging genetics". Proceedings of the National Academy of Sciences. 115 (12): 3162–3167. doi:10.1073/pnas.1706100115. ISSN 0027-8424. PMC 5866534. PMID 29511103.
  14. Krishnan, Anjali; Williams, Lynne J.; McIntosh, Anthony Randal; Abdi, Hervé (May 2011). "Partial Least Squares (PLS) methods for neuroimaging: A tutorial and review". NeuroImage. 56 (2): 455–475. doi:10.1016/j.neuroimage.2010.07.034. PMID 20656037. S2CID 8796113.
  15. McIntosh, Anthony R.; Mišić, Bratislav (2013-01-03). "Multivariate Statistical Analyses for Neuroimaging Data". Annual Review of Psychology. 64 (1): 499–525. doi:10.1146/annurev-psych-113011-143804. ISSN 0066-4308. PMID 22804773.
  16. Beggs, Clive B.; Magnano, Christopher; Belov, Pavel; Krawiecki, Jacqueline; Ramasamy, Deepa P.; Hagemeier, Jesper; Zivadinov, Robert (2016-05-02). de Castro, Fernando (ed.). "Internal Jugular Vein Cross-Sectional Area and Cerebrospinal Fluid Pulsatility in the Aqueduct of Sylvius: A Comparative Study between Healthy Subjects and Multiple Sclerosis Patients". PLOS ONE. 11 (5): e0153960. Bibcode:2016PLoSO..1153960B. doi:10.1371/journal.pone.0153960. ISSN 1932-6203. PMC 4852898. PMID 27135831.
  17. Weaving, Dan; Jones, Ben; Ireton, Matt; Whitehead, Sarah; Till, Kevin; Beggs, Clive B. (2019-02-14). Connaboy, Chris (ed.). "Overcoming the problem of multicollinearity in sports performance data: A novel application of partial least squares correlation analysis". PLOS ONE. 14 (2): e0211776. Bibcode:2019PLoSO..1411776W. doi:10.1371/journal.pone.0211776. ISSN 1932-6203. PMC 6375576. PMID 30763328.
  18. Abdi, Hervé; Williams, Lynne J. (2013), Reisfeld, Brad; Mayeno, Arthur N. (eds.), "Partial Least Squares Methods: Partial Least Squares Correlation and Partial Least Square Regression", Computational Toxicology, Humana Press, 930, pp. 549–579, doi:10.1007/978-1-62703-059-5_23, ISBN 9781627030588, PMID 23086857.