Partial least squares regression


Partial least squares regression (PLS regression) is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space. Because both the X and Y data are projected to new spaces, the PLS family of methods is known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when Y is categorical.


PLS is used to find the fundamental relations between 2 matrices (X and Y), i.e. a latent variable approach to modeling the covariance structures in these two spaces. A PLS model will try to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among X values. By contrast, standard regression will fail in these cases (unless it is regularized).
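
For illustration, the following minimal sketch (assuming NumPy and scikit-learn are available; PLSRegression implements a NIPALS-style PLS regression) fits a two-component model to data with far more predictors than observations, a setting where ordinary regression breaks down:

```python
# Minimal sketch: PLS regression with more predictors than observations.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, m, p = 20, 100, 2                      # fewer observations (n) than predictors (m)
X = rng.normal(size=(n, m))               # predictor matrix
Y = X[:, :3] @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))  # responses

pls = PLSRegression(n_components=2)       # project X and Y onto 2 latent components
pls.fit(X, Y)

T, U = pls.x_scores_, pls.y_scores_       # latent scores in the X and Y spaces
Y_hat = pls.predict(X)                    # regression carried out through the latent space
print(T.shape, U.shape, Y_hat.shape)      # (20, 2) (20, 2) (20, 2)
```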

Partial least squares was introduced by the Swedish statistician Herman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS is projection to latent structures, [1] [2] but the term partial least squares is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology.

Core Idea

Figure: Core idea of PLS. The loading vectors $\vec{p}_1, \vec{q}_1$ in the input and output space are drawn in red (not normalized for better visibility). When $x_1$ increases (independently of $x_2$), $y_1$ and $y_2$ increase.

Given paired random samples $(\vec{x}_i, \vec{y}_i)$, $i = 1, \ldots, n$, the first step of partial least squares regression searches for normalized directions $\vec{p}_1$ and $\vec{q}_1$ such that the covariance between the projections of the $\vec{x}_i$ onto $\vec{p}_1$ and the projections of the $\vec{y}_i$ onto $\vec{q}_1$ is maximized [3]. Below, the algorithm is denoted in matrix notation.
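
As a sketch of this first step (the variable names w1, c1, t1, u1 are illustrative, not from the source), the maximal-covariance pair of directions can be read off the leading singular vectors of the cross-product matrix of the centered data:

```python
# Sketch: first PLS directions from the leading singular vectors of X^T Y.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6))
Y = rng.normal(size=(50, 2))

Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)      # center both blocks
U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)

w1, c1 = U[:, 0], Vt[0, :]                           # normalized directions in X- and Y-space
t1, u1 = Xc @ w1, Yc @ c1                            # first pair of score vectors
print(np.cov(t1, u1)[0, 1], s[0] / (len(X) - 1))     # the two numbers agree: maximal covariance
```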

Underlying model

The general underlying model of multivariate PLS with $\ell$ components is

$X = T P^{\mathrm{T}} + E$

$Y = U Q^{\mathrm{T}} + F$

where X is an $n \times m$ matrix of predictors, Y is an $n \times p$ matrix of responses; T and U are $n \times \ell$ matrices that are, respectively, projections of X (the X score, component or factor matrix) and projections of Y (the Y scores); P and Q are, respectively, $m \times \ell$ and $p \times \ell$ loading matrices; and matrices E and F are the error terms.

The decompositions of X and Y are made so as to maximise the covariance between T and U.

Note that this covariance is defined pair by pair: the covariance of column i of T (length n) with column i of U (length n) is maximized. Additionally, the covariance of column i of T with column j of U (with $j \neq i$) is zero.

In PLSR, the loadings are thus chosen so that the scores form an orthogonal basis. This is a major difference from PCA, where orthogonality is imposed on the loadings (and not on the scores).
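
This can be checked numerically; the following sketch (again assuming scikit-learn) shows that the X-score matrix T of a fitted PLS model has a diagonal $T^{\mathrm{T}}T$ up to numerical precision, whereas for PCA it is the loading vectors that form an orthonormal set:

```python
# Sketch: PLS scores are mutually orthogonal; PCA loadings are orthonormal.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 8))
Y = rng.normal(size=(40, 3))

T = PLSRegression(n_components=3).fit(X, Y).x_scores_
print(np.round(T.T @ T, 6))                    # off-diagonal entries ~ 0

V = PCA(n_components=3).fit(X).components_     # PCA loadings (one per row)
print(np.round(V @ V.T, 6))                    # identity matrix
```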

Algorithms

A number of variants of PLS exist for estimating the factor and loading matrices T, U, P and Q. Most of them construct estimates of the linear regression between X and Y as $Y = X\tilde{B} + \tilde{B}_0$. Some PLS algorithms are only appropriate for the case where Y is a column vector, while others deal with the general case of a matrix Y. Algorithms also differ on whether they estimate the factor matrix T as an orthogonal (that is, orthonormal) matrix or not. [4] [5] [6] [7] [8] [9] The final prediction will be the same for all these varieties of PLS, but the components will differ.

PLS is composed of iteratively repeating the following steps k times (for k components); a minimal sketch of the loop follows the list:

  1. finding the directions of maximal covariance in input and output space
  2. performing least squares regression on the input score
  3. deflating the input and/or target
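
The sketch below illustrates these three steps in NumPy, under the assumption of centered data and using an SVD of the cross-product matrix to obtain the directions (the function and variable names are illustrative, not from the source):

```python
# Sketch of the generic PLS loop: direction search, regression on the X score, deflation.
import numpy as np

def pls_components(X, Y, k):
    X, Y = X - X.mean(axis=0), Y - Y.mean(axis=0)    # centered working copies
    T, U = [], []
    for _ in range(k):
        # 1. directions of maximal covariance between the two blocks
        W, s, Ct = np.linalg.svd(X.T @ Y, full_matrices=False)
        w, c = W[:, 0], Ct[0, :]
        t, u = X @ w, Y @ c                          # input and output scores
        # 2. least-squares regression of each block on the input score t
        p = X.T @ t / (t @ t)                        # X loadings
        q = Y.T @ t / (t @ t)                        # Y loadings
        # 3. deflation: remove the part explained by t from X (and here also from Y)
        X = X - np.outer(t, p)
        Y = Y - np.outer(t, q)
        T.append(t)
        U.append(u)
    return np.column_stack(T), np.column_stack(U)
```

Deflating Y as well as X is one common choice; as the list above notes, variants may deflate only the input block or only the target.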

PLS1

PLS1 is a widely used algorithm appropriate for the vector-Y case. It estimates T as an orthonormal matrix. (Caution: the t vectors in the pseudocode below may not be normalized appropriately.) In pseudocode it is expressed below (capital letters are matrices, lower-case letters are vectors if they are superscripted and scalars if they are subscripted).

function PLS1(X, y, l)
    X^(0) ← X
    w^(0) ← X^T y / ||X^T y||, an initial estimate of w
    for k = 0 to l − 1
        t^(k) ← X^(k) w^(k)
        t_k ← (t^(k))^T t^(k)            (note this is a scalar)
        t^(k) ← t^(k) / t_k
        p^(k) ← (X^(k))^T t^(k)
        q_k ← y^T t^(k)                  (note this is a scalar)
        if q_k = 0
            l ← k, break the for loop
        if k < (l − 1)
            X^(k+1) ← X^(k) − t_k t^(k) (p^(k))^T
            w^(k+1) ← (X^(k+1))^T y
    end for
    define W to be the matrix with columns w^(0), w^(1), ..., w^(l−1); form the P matrix and the q vector in the same way from the p^(k) and q_k
    B ← W (P^T W)^−1 q
    B_0 ← q_0 − (p^(0))^T B
    return B, B_0

This form of the algorithm does not require centering of the input X and Y, as this is performed implicitly by the algorithm. This algorithm features 'deflation' of the matrix X (subtraction of $t_k t^{(k)} (p^{(k)})^{\mathrm{T}}$), but deflation of the vector y is not performed, as it is not necessary (it can be proved that deflating y yields the same results as not deflating [10]). The user-supplied variable l is the limit on the number of latent factors in the regression; if it equals the rank of the matrix X, the algorithm will yield the least squares regression estimates for B and $B_0$.
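
The pseudocode translates directly into NumPy. The following is a minimal sketch (not a reference implementation), with the exact-zero test on q_k replaced by a small tolerance:

```python
# NumPy sketch of PLS1 (vector y), following the pseudocode above.
import numpy as np

def pls1(X, y, l):
    W, P, q = [], [], []
    w = X.T @ y
    w /= np.linalg.norm(w)                 # initial estimate of w
    for k in range(l):
        t = X @ w                          # X score for this component
        tk = t @ t                         # scalar normalization factor
        t = t / tk
        p = X.T @ t                        # X loading
        qk = y @ t                         # scalar y loading
        if abs(qk) < 1e-12:                # q_k = 0: stop, keep components 0..k-1
            break
        W.append(w); P.append(p); q.append(qk)
        if k < l - 1:
            X = X - tk * np.outer(t, p)    # deflate X only; y is not deflated
            w = X.T @ y
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)    # regression coefficients
    B0 = q[0] - P[:, 0] @ B                # intercept term, as in the pseudocode
    return B, B0
```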

Figure: Geometric interpretation of the deflation step in the input space.

Extensions

OPLS

In 2002 a new method was published called orthogonal projections to latent structures (OPLS). In OPLS, continuous variable data is separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models. [11] Similarly, OPLS-DA (Discriminant Analysis) may be applied when working with discrete variables, as in classification and biomarker studies.

The general underlying model of OPLS is

or in O2-PLS [12]

L-PLS

Another extension of PLS regression, named L-PLS for its L-shaped matrices, connects 3 related data blocks to improve predictability. [13] In brief, a new Z matrix, with the same number of columns as the X matrix, is added to the PLS regression analysis and may be suitable for including additional background information on the interdependence of the predictor variables.

3PRF

In 2015 partial least squares was related to a procedure called the three-pass regression filter (3PRF). [14] Supposing the numbers of observations and variables are large, the 3PRF (and hence PLS) is asymptotically normal for the "best" forecast implied by a linear latent factor model. In stock market data, PLS has been shown to provide accurate out-of-sample forecasts of returns and cash-flow growth. [15]

Partial least squares SVD

A PLS version based on singular value decomposition (SVD) provides a memory efficient implementation that can be used to address high-dimensional problems, such as relating millions of genetic markers to thousands of imaging features in imaging genetics, on consumer-grade hardware. [16]

PLS correlation

PLS correlation (PLSC) is another methodology related to PLS regression, [17] which has been used in neuroimaging [17] [18] [19] and sport science, [20] to quantify the strength of the relationship between data sets. Typically, PLSC divides the data into two blocks (sub-groups) each containing one or more variables, and then uses singular value decomposition (SVD) to establish the strength of any relationship (i.e. the amount of shared information) that might exist between the two component sub-groups. [21] It does this by using SVD to determine the inertia (i.e. the sum of the singular values) of the covariance matrix of the sub-groups under consideration. [21] [17]
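
A minimal sketch of that SVD step (the block contents and variable names are illustrative; each block is z-scored before the decomposition):

```python
# Sketch: PLS correlation via SVD of the cross-block covariance matrix.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))               # block 1 (e.g. brain measures)
Y = rng.normal(size=(30, 4))               # block 2 (e.g. behavioural measures)

# z-score each block column-wise
Zx = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
Zy = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)

R = Zx.T @ Zy / (len(X) - 1)               # cross-block covariance (here correlation) matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)

inertia = s.sum()                          # inertia: sum of the singular values
weights_x, weights_y = U, Vt.T             # weight vectors (saliences) for each block
print(inertia)
```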


Related Research Articles

<span class="mw-page-title-main">Multivariate normal distribution</span> Generalization of the one-dimensional normal distribution to higher dimensions

In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.

<span class="mw-page-title-main">Principal component analysis</span> Method of data analysis

Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.

<span class="mw-page-title-main">Least squares</span> Approximation method in statistics

The method of least squares is a parameter estimation method in regression analysis based on minimizing the sum of the squares of the residuals made in the results of each individual equation.

Covariance in probability theory and statistics is a measure of the joint variability of two random variables.

<span class="mw-page-title-main">Covariance matrix</span> Measure of covariance of components of a random vector

In probability theory and statistics, a covariance matrix is a square matrix giving the covariance between each pair of elements of a given random vector.

Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. For example, it is possible that variations in six observed variables mainly reflect the variations in two unobserved (underlying) variables. Factor analysis searches for such joint variations in response to unobserved latent variables. The observed variables are modelled as linear combinations of the potential factors plus "error" terms, hence factor analysis can be thought of as a special case of errors-in-variables models.

Chemometrics is the science of extracting information from chemical systems by data-driven means. Chemometrics is inherently interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, applied mathematics, and computer science, in order to address problems in chemistry, biochemistry, medicine, biology and chemical engineering. In this way, it mirrors other interdisciplinary fields, such as psychometrics and econometrics.

The Mahalanobis distance is a measure of the distance between a point and a distribution, introduced by P. C. Mahalanobis in 1936. Mahalanobis's definition was prompted by the problem of identifying the similarities of skulls based on measurements in 1927.

<span class="mw-page-title-main">Total least squares</span>

In applied statistics, total least squares is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generalization of Deming regression and also of orthogonal regression, and can be applied to both linear and non-linear models.

<span class="mw-page-title-main">Nonlinear regression</span> Regression analysis

In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations.

Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable in the input dataset and the output of the (linear) function of the independent variable.

Vector autoregression (VAR) is a statistical model used to capture the relationship between multiple quantities as they change over time. VAR is a type of stochastic process model. VAR models generalize the single-variable (univariate) autoregressive model by allowing for multivariate time series. VAR models are often used in economics and the natural sciences.

In statistics, generalized least squares (GLS) is a method used to estimate the unknown parameters in a linear regression model. It is used when there is a non-zero amount of correlation between the residuals in the regression model. GLS is employed to improve statistical efficiency and reduce the risk of drawing erroneous inferences, as compared to conventional least squares and weighted least squares methods. It was first described by Alexander Aitken in 1935.

In statistics, principal component regression (PCR) is a regression analysis technique that is based on principal component analysis (PCA). More specifically, PCR is used for estimating the unknown regression coefficients in a standard linear regression model.

The partial least squares path modeling or partial least squares structural equation modeling is a method for structural equation modeling that allows estimation of complex cause-effect relationships in path models with latent variables.

<span class="mw-page-title-main">SmartPLS</span> Software

SmartPLS is software with a graphical user interface for variance-based structural equation modeling (SEM) using the partial least squares (PLS) path modeling method. Users can estimate models with their data by using basic PLS-SEM, weighted PLS-SEM (WPLS), consistent PLS-SEM (PLSc-SEM), and sumscores regression algorithms. The software computes standard results assessment criteria and supports additional statistical analyses. Since SmartPLS is programmed in Java, it can be executed and run on different computer operating systems such as Windows and Mac.

In statistics, linear regression is a statistical model which estimates the linear relationship between a scalar response and one or more explanatory variables. The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable. If the explanatory variables are measured with error then errors-in-variables models are required, also known as measurement error models.

In statistics, the class of vector generalized linear models (VGLMs) was proposed to enlarge the scope of models catered for by generalized linear models (GLMs). In particular, VGLMs allow for response variables outside the classical exponential family and for more than one parameter. Each parameter can be transformed by a link function. The VGLM framework is also large enough to naturally accommodate multiple responses; these are several independent responses each coming from a particular statistical distribution with possibly different parameter values.

In statistics, confirmatory composite analysis (CCA) is a sub-type of structural equation modeling (SEM). Although, historically, CCA emerged from a re-orientation and re-start of partial least squares path modeling (PLS-PM), it has become an independent approach and the two should not be confused. In many ways it is similar to, but also quite distinct from confirmatory factor analysis (CFA). It shares with CFA the process of model specification, model identification, model estimation, and model assessment. However, in contrast to CFA which always assumes the existence of latent variables, in CCA all variables can be observable, with their interrelationships expressed in terms of composites, i.e., linear compounds of subsets of the variables. The composites are treated as the fundamental objects and path diagrams can be used to illustrate their relationships. This makes CCA particularly useful for disciplines examining theoretical concepts that are designed to attain certain goals, so-called artifacts, and their interplay with theoretical concepts of behavioral sciences.

References

  1. Wold, S; Sjöström, M.; Eriksson, L. (2001). "PLS-regression: a basic tool of chemometrics". Chemometrics and Intelligent Laboratory Systems. 58 (2): 109–130. doi:10.1016/S0169-7439(01)00155-1. S2CID   11920190.
  2. Abdi, Hervé (2010). "Partial least squares regression and projection on latent structure regression (PLS Regression)". WIREs Computational Statistics. 2: 97–106. doi:10.1002/wics.51. S2CID   122685021.
  3. See lecture https://www.youtube.com/watch?v=Px2otK2nZ1c&t=46s
  4. Lindgren, F; Geladi, P; Wold, S (1993). "The kernel algorithm for PLS". J. Chemometrics. 7: 45–59. doi:10.1002/cem.1180070104. S2CID   122950427.
  5. de Jong, S.; ter Braak, C.J.F. (1994). "Comments on the PLS kernel algorithm". J. Chemometrics. 8 (2): 169–174. doi:10.1002/cem.1180080208. S2CID   221549296.
  6. Dayal, B.S.; MacGregor, J.F. (1997). "Improved PLS algorithms". J. Chemometrics. 11 (1): 73–85. doi:10.1002/(SICI)1099-128X(199701)11:1<73::AID-CEM435>3.0.CO;2-#. S2CID   120753851.
  7. de Jong, S. (1993). "SIMPLS: an alternative approach to partial least squares regression". Chemometrics and Intelligent Laboratory Systems. 18 (3): 251–263. doi:10.1016/0169-7439(93)85002-X.
  8. Rannar, S.; Lindgren, F.; Geladi, P.; Wold, S. (1994). "A PLS Kernel Algorithm for Data Sets with Many Variables and Fewer Objects. Part 1: Theory and Algorithm". J. Chemometrics. 8 (2): 111–125. doi:10.1002/cem.1180080204. S2CID   121613293.
  9. Abdi, H. (2010). "Partial least squares regression and projection on latent structure regression (PLS-Regression)". Wiley Interdisciplinary Reviews: Computational Statistics. 2: 97–106. doi:10.1002/wics.51. S2CID   122685021.
  10. Höskuldsson, Agnar (1988). "PLS Regression Methods". Journal of Chemometrics. 2 (3): 219. doi:10.1002/cem.1180020306. S2CID   120052390.
  11. Trygg, J; Wold, S (2002). "Orthogonal Projections to Latent Structures". Journal of Chemometrics. 16 (3): 119–128. doi:10.1002/cem.695. S2CID   122699039.
  12. Eriksson, S. Wold, and J. Trygg. "O2PLS® for improved analysis and visualization of complex data." https://www.dynacentrix.com/telecharg/SimcaP/O2PLS.pdf
  13. Sæbøa, S.; Almøya, T.; Flatbergb, A.; Aastveita, A.H.; Martens, H. (2008). "LPLS-regression: a method for prediction and classification under the influence of background information on predictor variables". Chemometrics and Intelligent Laboratory Systems. 91 (2): 121–132. doi:10.1016/j.chemolab.2007.10.006.
  14. Kelly, Bryan; Pruitt, Seth (2015-06-01). "The three-pass regression filter: A new approach to forecasting using many predictors". Journal of Econometrics. High Dimensional Problems in Econometrics. 186 (2): 294–316. doi:10.1016/j.jeconom.2015.02.011.
  15. Kelly, Bryan; Pruitt, Seth (2013-10-01). "Market Expectations in the Cross-Section of Present Values". The Journal of Finance. 68 (5): 1721–1756. CiteSeerX   10.1.1.498.5973 . doi:10.1111/jofi.12060. ISSN   1540-6261.
  16. Lorenzi, Marco; Altmann, Andre; Gutman, Boris; Wray, Selina; Arber, Charles; Hibar, Derrek P.; Jahanshad, Neda; Schott, Jonathan M.; Alexander, Daniel C. (2018-03-20). "Susceptibility of brain atrophy to TRIB3 in Alzheimer's disease, evidence from functional prioritization in imaging genetics". Proceedings of the National Academy of Sciences. 115 (12): 3162–3167. doi: 10.1073/pnas.1706100115 . ISSN   0027-8424. PMC   5866534 . PMID   29511103.
  17. Krishnan, Anjali; Williams, Lynne J.; McIntosh, Anthony Randal; Abdi, Hervé (May 2011). "Partial Least Squares (PLS) methods for neuroimaging: A tutorial and review". NeuroImage. 56 (2): 455–475. doi:10.1016/j.neuroimage.2010.07.034. PMID   20656037. S2CID   8796113.
  18. McIntosh, Anthony R.; Mišić, Bratislav (2013-01-03). "Multivariate Statistical Analyses for Neuroimaging Data". Annual Review of Psychology. 64 (1): 499–525. doi:10.1146/annurev-psych-113011-143804. ISSN   0066-4308. PMID   22804773.
  19. Beggs, Clive B.; Magnano, Christopher; Belov, Pavel; Krawiecki, Jacqueline; Ramasamy, Deepa P.; Hagemeier, Jesper; Zivadinov, Robert (2016-05-02). de Castro, Fernando (ed.). "Internal Jugular Vein Cross-Sectional Area and Cerebrospinal Fluid Pulsatility in the Aqueduct of Sylvius: A Comparative Study between Healthy Subjects and Multiple Sclerosis Patients". PLOS ONE. 11 (5): e0153960. Bibcode:2016PLoSO..1153960B. doi: 10.1371/journal.pone.0153960 . ISSN   1932-6203. PMC   4852898 . PMID   27135831.
  20. Weaving, Dan; Jones, Ben; Ireton, Matt; Whitehead, Sarah; Till, Kevin; Beggs, Clive B. (2019-02-14). Connaboy, Chris (ed.). "Overcoming the problem of multicollinearity in sports performance data: A novel application of partial least squares correlation analysis". PLOS ONE. 14 (2): e0211776. Bibcode:2019PLoSO..1411776W. doi: 10.1371/journal.pone.0211776 . ISSN   1932-6203. PMC   6375576 . PMID   30763328.
  21. Abdi, Hervé; Williams, Lynne J. (2013), Reisfeld, Brad; Mayeno, Arthur N. (eds.), "Partial Least Squares Methods: Partial Least Squares Correlation and Partial Least Square Regression", Computational Toxicology, Humana Press, vol. 930, pp. 549–579, doi:10.1007/978-1-62703-059-5_23, ISBN   9781627030588, PMID   23086857
