Lasso (statistics)

In statistics and machine learning, lasso (least absolute shrinkage and selection operator; also Lasso or LASSO) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the resulting statistical model. The lasso method assumes that the coefficients of the linear model are sparse, meaning that only a few of them are non-zero. It was originally introduced in geophysics, [1] and later by Robert Tibshirani, [2] who coined the term.

Lasso was originally formulated for linear regression models. This simple case reveals a substantial amount about the estimator. These include its relationship to ridge regression and best subset selection and the connections between lasso coefficient estimates and so-called soft thresholding. It also reveals that (like standard linear regression) the coefficient estimates do not need to be unique if covariates are collinear.

Though originally defined for linear regression, lasso regularization is easily extended to other statistical models including generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators. [2] [3] Lasso's ability to perform subset selection relies on the form of the constraint and has a variety of interpretations including in terms of geometry, Bayesian statistics and convex analysis.

The LASSO is closely related to basis pursuit denoising.

History

Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models. It selects a reduced set of the known covariates for use in a model. [2] [1]

Lasso was developed independently in the geophysics literature in 1986, based on prior work that used the $\ell^1$ penalty for both fitting and penalization of the coefficients. Statistician Robert Tibshirani independently rediscovered and popularized it in 1996, based on Breiman's nonnegative garrote. [1] [4]

Prior to lasso, the most widely used method for choosing covariates was stepwise selection. That approach only improves prediction accuracy in certain cases, such as when only a few covariates have a strong relationship with the outcome. However, in other cases, it can increase prediction error.

At the time, ridge regression was the most popular technique for improving prediction accuracy. Ridge regression reduces prediction error by shrinking the sum of the squares of the regression coefficients to be less than a fixed value in order to reduce overfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable.

Lasso achieves both of these goals by forcing the sum of the absolute value of the regression coefficients to be less than a fixed value, which forces certain coefficients to zero, excluding them from impacting prediction. This idea is similar to ridge regression, which also shrinks the size of the coefficients; however, ridge regression does not set coefficients to zero (and, thus, does not perform variable selection).

Basic form

Least squares

Consider a sample consisting of N cases, each of which consists of p covariates and a single outcome. Let $y_i$ be the outcome and $x_i := (x_{i1}, x_{i2}, \ldots, x_{ip})^T$ be the covariate vector for the ith case. Then the objective of lasso is to solve [2]

$\min_{\beta_0, \beta} \left\{ \frac{1}{N} \sum_{i=1}^N (y_i - \beta_0 - x_i^T \beta)^2 \right\}$ subject to $\sum_{j=1}^p |\beta_j| \leq t.$

Here $\beta_0$ is the constant coefficient, $\beta := (\beta_1, \beta_2, \ldots, \beta_p)$ is the coefficient vector, and $t$ is a prespecified free parameter that determines the degree of regularization.

Letting $X$ be the covariate matrix, so that $X_{ij} = (x_i)_j$ and $x_i^T$ is the ith row of $X$, the expression can be written more compactly as

$\min_{\beta_0, \beta} \left\{ \frac{1}{N} \left\| y - \beta_0 - X \beta \right\|_2^2 \right\}$ subject to $\| \beta \|_1 \leq t,$

where $\| u \|_p = \left( \sum_{i=1}^N | u_i |^p \right)^{1/p}$ is the standard $\ell^p$ norm.

Denoting the scalar mean of the data points $x_i$ by $\bar{x}$ and the mean of the response variables $y_i$ by $\bar{y}$, the resulting estimate for $\beta_0$ is $\hat{\beta}_0 = \bar{y} - \bar{x}^T \beta$, so that

$y_i - \hat{\beta}_0 - x_i^T \beta = y_i - (\bar{y} - \bar{x}^T \beta) - x_i^T \beta = (y_i - \bar{y}) - (x_i - \bar{x})^T \beta,$

and therefore it is standard to work with variables that have been made zero-mean. Additionally, the covariates are typically standardized so that the solution does not depend on the measurement scale.

It can be helpful to rewrite

$\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{N} \| y - X \beta \|_2^2 \right\}$ subject to $\| \beta \|_1 \leq t$

in the so-called Lagrangian form

$\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{N} \| y - X \beta \|_2^2 + \lambda \| \beta \|_1 \right\},$

where the exact relationship between $t$ and $\lambda$ is data dependent.

Orthonormal covariates

Some basic properties of the lasso estimator can now be considered.

Assuming first that the covariates are orthonormal so that $x_i^T x_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta, or, equivalently, $X^T X = I$, then using subgradient methods it can be shown that [2]

$\hat{\beta}_j = S_{N\lambda}\left( \hat{\beta}^\text{OLS}_j \right) = \hat{\beta}^\text{OLS}_j \max\left( 0, 1 - \frac{N\lambda}{\left| \hat{\beta}^\text{OLS}_j \right|} \right),$ where $\hat{\beta}^\text{OLS} = (X^T X)^{-1} X^T y = X^T y.$

$S_\alpha$ is referred to as the soft thresholding operator, since it translates values towards zero (making them exactly zero if they are small enough) instead of setting smaller values to zero and leaving larger ones untouched, as the hard thresholding operator, often denoted $H_\alpha$, would.
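The two thresholding operators are simple to state in code. The following is a minimal NumPy sketch (function names are illustrative, not from any particular library):

```python
import numpy as np

def soft_threshold(z, a):
    """Soft thresholding S_a(z): translate toward zero by a, zeroing small values exactly."""
    return np.sign(z) * np.maximum(np.abs(z) - a, 0.0)

def hard_threshold(z, a):
    """Hard thresholding H_a(z): zero out values smaller than a, leave the rest untouched."""
    return np.where(np.abs(z) >= a, z, 0.0)

z = np.array([-3.0, -0.4, 0.0, 0.4, 3.0])
soft = soft_threshold(z, 1.0)   # large entries shrink by 1, small ones become exactly 0
hard = hard_threshold(z, 1.0)   # large entries survive unchanged, small ones become 0
```

Under the orthonormality assumption above, applying the soft thresholding operator to the OLS coefficients with threshold $N\lambda$ yields the lasso estimates.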

In ridge regression the objective is to minimize

$\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{N} \| y - X \beta \|_2^2 + \lambda \| \beta \|_2^2 \right\}.$

Using $X^T X = I$ and the ridge regression formula $\hat{\beta} = (X^T X + N \lambda I)^{-1} X^T y$, [5] it yields:

$\hat{\beta}_j = (1 + N\lambda)^{-1} \hat{\beta}^\text{OLS}_j.$

Ridge regression shrinks all coefficients by a uniform factor of $(1 + N\lambda)^{-1}$ and does not set any coefficients to zero. [6]

It can also be compared to regression with best subset selection, in which the goal is to minimize

$\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{N} \| y - X \beta \|_2^2 + \lambda \| \beta \|_0 \right\},$

where $\| \cdot \|_0$ is the "$\ell^0$ norm", which is defined as $\| z \|_0 = m$ if exactly m components of z are nonzero. In this case, it can be shown that

$\hat{\beta}_j = H_{\sqrt{N\lambda}}\left( \hat{\beta}^\text{OLS}_j \right) = \hat{\beta}^\text{OLS}_j \, \mathrm{I}\left( \left| \hat{\beta}^\text{OLS}_j \right| \geq \sqrt{N\lambda} \right),$

where $H_\alpha$ is the so-called hard thresholding function and $\mathrm{I}$ is an indicator function (it is 1 if its argument is true and 0 otherwise).

Therefore, the lasso estimates share features of both ridge and best subset selection regression: like ridge regression, lasso shrinks the magnitude of all the coefficients, and like best subset selection, it sets some of them to zero. Additionally, while ridge regression scales all of the coefficients by a constant factor, lasso instead translates the coefficients towards zero by a constant value and sets them to zero if they reach it.
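Under the orthonormal-design closed forms above, the three estimators can be compared side by side. A small NumPy sketch (the OLS values and the threshold are made up for illustration):

```python
import numpy as np

b_ols = np.array([0.3, 1.5, -2.0, 0.05])   # hypothetical OLS estimates
thr = 0.5                                   # plays the role of N*lambda

ridge  = b_ols / (1.0 + thr)                                    # uniform shrinkage, never zero
lasso  = np.sign(b_ols) * np.maximum(np.abs(b_ols) - thr, 0.0)  # translate toward zero, zero if reached
subset = np.where(np.abs(b_ols) >= np.sqrt(thr), b_ols, 0.0)    # keep-or-kill, no shrinkage
```

The lasso column exhibits both behaviors at once: the surviving coefficients are shrunk (as in ridge) while the small ones are set exactly to zero (as in best subset selection).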

Correlated covariates

In one special case two covariates, say j and k, are identical for each observation, so that $x_{ij} = x_{ik}$ for all i, where $j \neq k$. Then the values of $\beta_j$ and $\beta_k$ that minimize the lasso objective function are not uniquely determined. In fact, if there is some solution $\hat{\beta}$ in which $\hat{\beta}_j \hat{\beta}_k \geq 0$, then for any $s \in [0, 1]$, replacing $\hat{\beta}_j$ by $s(\hat{\beta}_j + \hat{\beta}_k)$ and $\hat{\beta}_k$ by $(1 - s)(\hat{\beta}_j + \hat{\beta}_k)$, while keeping all the other $\hat{\beta}_i$ fixed, gives a new solution, so the lasso objective function has a continuum of valid minimizers. [7] Several variants of the lasso, including the elastic net regularization, have been designed to address this shortcoming.
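This non-uniqueness is easy to verify numerically: with a duplicated column, shifting weight between the twin coefficients changes neither the fit nor the $\ell^1$ penalty. A small sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20)
X = np.column_stack([x, x])      # covariates j and k are identical
y = 2.0 * x

def lasso_objective(beta, lam=0.1):
    resid = y - X @ beta
    return resid @ resid / len(y) + lam * np.abs(beta).sum()

# All of these fit y exactly and have the same l1 norm, hence the same objective
b1 = np.array([2.0, 0.0])
b2 = np.array([0.5, 1.5])
b3 = np.array([1.0, 1.0])
```

Every nonnegative split of the total weight 2.0 between the two columns attains the same objective value, so the minimizer is a whole line segment rather than a point.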

General form

Lasso regularization can be extended to other objective functions such as those for generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators. [2] [3] Given the objective function

$\frac{1}{N} \sum_{i=1}^N f(x_i, y_i, \alpha, \beta),$

the lasso regularized version of the estimator is the solution to

$\min_{\alpha, \beta} \frac{1}{N} \sum_{i=1}^N f(x_i, y_i, \alpha, \beta)$ subject to $\| \beta \|_1 \leq t,$

where only $\beta$ is penalized while $\alpha$ is free to take any allowed value, just as $\beta_0$ was not penalized in the basic case.

Interpretations

Geometric interpretation

Forms of the constraint regions for lasso and ridge regression.

Lasso can set coefficients to zero, while the superficially similar ridge regression cannot. This is due to the difference in the shape of their constraint boundaries. Both lasso and ridge regression can be interpreted as minimizing the same objective function

$\min_{\beta_0, \beta} \left\{ \frac{1}{N} \| y - \beta_0 - X \beta \|_2^2 \right\}$

but with respect to different constraints: $\| \beta \|_1 \leq t$ for lasso and $\| \beta \|_2^2 \leq t$ for ridge. The figure shows that the constraint region defined by the $\ell^1$ norm is a square rotated so that its corners lie on the axes (in general a cross-polytope), while the region defined by the $\ell^2$ norm is a circle (in general an n-sphere), which is rotationally invariant and therefore has no corners. As seen in the figure, a convex object that lies tangent to the boundary, such as the line shown, is likely to encounter a corner (or a higher-dimensional equivalent) of the cross-polytope, for which some components of $\beta$ are identically zero, while in the case of an n-sphere, the points on the boundary for which some of the components of $\beta$ are zero are not distinguished from the others, and the convex object is no more likely to contact a point at which some components of $\beta$ are zero than one for which none of them are.

Making λ easier to interpret with an accuracy-simplicity tradeoff

The lasso can be rescaled so that it becomes easy to anticipate and influence the degree of shrinkage associated with a given value of $\lambda$. [8] It is assumed that $X$ is standardized with z-scores and that $y$ is centered (zero mean). Let $\beta_0$ represent the hypothesized regression coefficients and let $b_\text{OLS}$ refer to the data-optimized ordinary least squares solutions. We can then define the Lagrangian as a tradeoff between the in-sample accuracy of the data-optimized solutions and the simplicity of sticking to the hypothesized values. [9] This results in an objective in which the first fraction represents relative accuracy, the second fraction relative simplicity, and $\lambda$ balances between the two.

Solution paths for the $\ell_1$ norm and $\ell_2$ norm when $b_\text{OLS} = 2$ and $\beta_0 = 0$.

Given a single regressor, relative simplicity can be defined by specifying $q$ as the maximum amount of deviation from $\beta_0$ when $\lambda = 0$. Assuming that $b_\text{OLS} \geq 0$, the solution path can then be defined in terms of $R^2$: if $\lambda = 0$, the ordinary least squares (OLS) solution is used, and the hypothesized value $\beta_0$ is selected if $\lambda$ is bigger than $R^2$. Furthermore, if $\lambda \in [0, 1]$, then $\lambda$ represents the proportional influence of $\beta_0$. In other words, $\lambda \cdot 100\%$ measures in percentage terms the minimal amount of influence of the hypothesized value relative to the data-optimized OLS solution.

If an $\ell_2$-norm is used to penalize deviations from zero given a single regressor, the solution path behaves similarly: like the $\ell_1$ path, it moves in the direction of the hypothesized point when $\lambda$ is close to zero; but unlike the $\ell_1$ path, the influence of the penalty diminishes as the estimate approaches the hypothesized value (see figure).

Given multiple regressors, the moment that a parameter is activated (i.e. allowed to deviate from $\beta_0$) is also determined by a regressor's contribution to accuracy. An $R^2$ of 75% means that in-sample accuracy improves by 75% if the unrestricted OLS solutions are used instead of the hypothesized $\beta_0$ values. The individual contribution of deviating from each hypothesis can be computed with a $p \times p$ matrix whose diagonal elements sum to $R^2$. The diagonal values may be smaller than 0 or, less often, larger than 1. If regressors are uncorrelated, then the $j$th diagonal element simply corresponds to the $R^2$ value between regressor $j$ and $y$.

A rescaled version of the adaptive lasso can be obtained by an appropriate choice of the simplicity measure. [10] If regressors are uncorrelated, the moment that the $j$th parameter is activated is given by the $j$th diagonal element of the matrix above. Assuming for convenience that $\beta_0$ is a vector of zeros, this means that, if regressors are uncorrelated, $\lambda$ again specifies the minimal influence of $\beta_0$. Even when regressors are correlated, the first time that a regression parameter is activated occurs when $\lambda$ is equal to the highest diagonal element of the matrix.

These results can be compared to a rescaled version of the lasso obtained by defining the simplicity measure as the average absolute deviation of $b_\text{OLS}$ from $\beta_0$. Assuming that regressors are uncorrelated, the moment of activation of the $j$th regressor is then determined by its share of this average deviation.

If $\beta_0$ is a vector of zeros and a subset of relevant parameters is equally responsible for a perfect fit of $R^2 = 1$, then this subset is activated at a common $\lambda$ value, and the presence of irrelevant regressors lowers that value. In other words, the inclusion of irrelevant regressors delays the moment that relevant regressors are activated by this rescaled lasso. The adaptive lasso and the lasso are special cases of a '1ASTc' estimator. The latter only groups parameters together if the absolute correlation among regressors is larger than a user-specified value. [8]

Bayesian interpretation

Laplace distributions are sharply peaked at their mean with more probability density concentrated there compared to a normal distribution.

Just as ridge regression can be interpreted as linear regression for which the coefficients have been assigned normal prior distributions, lasso can be interpreted as linear regression for which the coefficients have Laplace prior distributions. [11] The Laplace distribution is sharply peaked at zero (its first derivative is discontinuous at zero) and it concentrates its probability mass closer to zero than does the normal distribution. This provides an alternative explanation of why lasso tends to set some coefficients to zero, while ridge regression does not. [2]
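This correspondence can be checked numerically in one dimension: with Gaussian noise of variance $\sigma^2$ and a Laplace(0, b) prior on the coefficient, the negative log-posterior is proportional to a lasso objective with $\lambda = \sigma^2 / b$, so both are minimized at the same point. A sketch with synthetic data ($\sigma^2$ and $b$ are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.8 * x + rng.normal(size=50)

sigma2, b_scale = 1.0, 2.0      # noise variance and Laplace scale (assumed values)

def neg_log_posterior(beta):
    # -log p(beta | y) up to a constant: SSE / (2 * sigma^2) + |beta| / b
    return ((y - beta * x) ** 2).sum() / (2 * sigma2) + abs(beta) / b_scale

def lasso_loss(beta, lam):
    return 0.5 * ((y - beta * x) ** 2).sum() + lam * abs(beta)

grid = np.linspace(-2.0, 2.0, 4001)
lam = sigma2 / b_scale          # the prior scale fixes the regularization strength
map_est = grid[np.argmin([neg_log_posterior(b) for b in grid])]
lasso_est = grid[np.argmin([lasso_loss(b, lam) for b in grid])]
```

The two grid minimizers coincide because the two objectives differ only by a positive constant factor.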

Convex relaxation interpretation

Lasso can also be viewed as a convex relaxation of the best subset selection regression problem, which is to find the subset of covariates that results in the smallest value of the objective function

$\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{N} \| y - X \beta \|_2^2 \right\}$ subject to $\| \beta \|_0 \leq t,$

for some fixed $t$, where n is the total number of covariates. The "$\ell^0$ norm", $\| \cdot \|_0$ (the number of nonzero entries of a vector), is the limiting case of "$\ell^p$ norms", of the form $\| z \|_p = \left( \sum_{i=1}^n | z_i |^p \right)^{1/p}$ (where the quotation marks signify that these are not really norms for $p < 1$, since $\| \cdot \|_p$ is not convex for $p < 1$, so the triangle inequality does not hold). Therefore, since p = 1 is the smallest value for which the "$\ell^p$ norm" is convex (and therefore actually a norm), lasso is, in some sense, the best convex approximation to the best subset selection problem, since the region defined by $\| \beta \|_1 \leq t$ is the convex hull of the region defined by $\| \beta \|_p \leq t$ for $p < 1$.
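The failure of the triangle inequality for $p < 1$ is easy to exhibit numerically. A short sketch:

```python
import numpy as np

def lp(z, p):
    """The 'ℓp norm' (a quasinorm when p < 1): (sum_i |z_i|^p)^(1/p)."""
    z = np.abs(np.asarray(z, dtype=float))
    return float((z ** p).sum() ** (1.0 / p))

a, b = [1.0, 0.0], [0.0, 1.0]
p = 0.5
# For p = 1/2 the triangle inequality fails: ||a + b|| = 4 > ||a|| + ||b|| = 2
lhs = lp(np.add(a, b), p)
rhs = lp(a, p) + lp(b, p)
```

For $p = 1$ the same vectors satisfy the triangle inequality with equality, consistent with $\ell^1$ being the smallest convex member of the family.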

Generalizations

Lasso variants have been created in order to remedy limitations of the original technique and to make the method more useful for particular problems. Almost all of these focus on respecting or exploiting dependencies among the covariates.

Elastic net regularization adds an additional ridge regression-like penalty that improves performance when the number of predictors is larger than the sample size, allows the method to select strongly correlated variables together, and improves overall prediction accuracy. [7]

Group lasso allows groups of related covariates to be selected as a single unit, which can be useful in settings where it does not make sense to include some covariates without others. [12] Further extensions of group lasso perform variable selection within individual groups (sparse group lasso) and allow overlap between groups (overlap group lasso). [13] [14]

Fused lasso can account for the spatial or temporal characteristics of a problem, resulting in estimates that better match system structure. [15] Lasso-regularized models can be fit using techniques including subgradient methods, least-angle regression (LARS), and proximal gradient methods. Determining the optimal value for the regularization parameter is an important part of ensuring that the model performs well; it is typically chosen using cross-validation.

Elastic net

In 2005, Zou and Hastie introduced the elastic net. [7] When p > n (the number of covariates is greater than the sample size) lasso can select only n covariates (even when more are associated with the outcome) and it tends to select one covariate from any set of highly correlated covariates. Additionally, even when n > p, ridge regression tends to perform better given strongly correlated covariates.

The elastic net extends lasso by adding an additional $\ell^2$ penalty term, giving

$\min_{\beta \in \mathbb{R}^p} \left\{ \| y - X \beta \|_2^2 + \lambda_1 \| \beta \|_1 + \lambda_2 \| \beta \|_2^2 \right\},$

which is equivalent to solving

$\min_{\beta_0, \beta} \| y - \beta_0 - X \beta \|_2^2$ subject to $(1 - \alpha) \| \beta \|_1 + \alpha \| \beta \|_2^2 \leq t$, where $\alpha = \frac{\lambda_2}{\lambda_1 + \lambda_2}.$

This problem can be written in a simple lasso form

$\min_{\beta^* \in \mathbb{R}^p} \left\{ \| y^* - X^* \beta^* \|_2^2 + \lambda^* \| \beta^* \|_1 \right\},$

letting

$X^* = (1 + \lambda_2)^{-1/2} \binom{X}{\sqrt{\lambda_2} I}, \qquad y^* = \binom{y}{0}, \qquad \lambda^* = \frac{\lambda_1}{\sqrt{1 + \lambda_2}}.$

Then $\hat{\beta} = \frac{\hat{\beta}^*}{\sqrt{1 + \lambda_2}}$, which, when the covariates are orthogonal to each other, gives

$\hat{\beta}_j = \frac{\hat{\beta}^\text{OLS}_j}{1 + \lambda_2} \max\left( 0, 1 - \frac{\lambda_1 / 2}{\left| \hat{\beta}^\text{OLS}_j \right|} \right).$

So the result of the elastic net penalty is a combination of the effects of the lasso and ridge penalties.
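In one dimension this combination can be made explicit: minimizing a squared-error term plus both penalties amounts to soft thresholding (the lasso effect) followed by a uniform shrink (the ridge effect). A sketch verified against brute-force grid search; the closed form below follows from the subgradient condition for this particular scaling of the objective, and the constants are illustrative:

```python
import numpy as np

def enet_1d(z, l1, l2):
    """Minimizer of 0.5*(b - z)**2 + l1*|b| + 0.5*l2*b**2:
    soft-threshold z by l1, then shrink uniformly by 1/(1 + l2)."""
    return np.sign(z) * max(abs(z) - l1, 0.0) / (1.0 + l2)

# Brute-force check of the closed form
z, l1, l2 = 2.3, 0.7, 0.5
grid = np.linspace(-5.0, 5.0, 200001)
obj = 0.5 * (grid - z) ** 2 + l1 * np.abs(grid) + 0.5 * l2 * grid ** 2
brute = grid[np.argmin(obj)]
```

Setting `l2 = 0` recovers plain soft thresholding, and setting `l1 = 0` recovers pure ridge-style shrinkage.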

Returning to the general case, the fact that the penalty function is now strictly convex means that if $x_i = x_j$, then $\hat{\beta}_i = \hat{\beta}_j$, which is a change from lasso. [7] In general, if $\hat{\beta}_i \hat{\beta}_j > 0$, then

$\frac{\left| \hat{\beta}_i - \hat{\beta}_j \right|}{\| y \|_1} \leq \frac{\sqrt{2 (1 - \rho_{ij})}}{\lambda_2},$

where $\rho_{ij}$ is the sample correlation coefficient, because the $x$'s are normalized.

Therefore, highly correlated covariates tend to have similar regression coefficients, with the degree of similarity depending on both and , which is different from lasso. This phenomenon, in which strongly correlated covariates have similar regression coefficients, is referred to as the grouping effect. Grouping is desirable since, in applications such as tying genes to a disease, finding all the associated covariates is preferable, rather than selecting one from each set of correlated covariates, as lasso often does. [7] In addition, selecting only one from each group typically results in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso).

Group lasso

In 2006, Yuan and Lin introduced the group lasso to allow predefined groups of covariates to be jointly selected into or out of a model. [12] This is useful in many settings, perhaps most obviously when a categorical variable is coded as a collection of binary covariates. In this case, group lasso can ensure that all the variables encoding the categorical covariate are included or excluded together. Another setting in which grouping is natural is in biological studies. Since genes and proteins often lie in known pathways, which pathways are related to an outcome may be more significant than whether individual genes are. The objective function for the group lasso is a natural generalization of the standard lasso objective

$\min_{\beta_0, \beta} \left\{ \left\| y - \beta_0 - \sum_{j=1}^J X_j \beta_j \right\|_2^2 + \lambda \sum_{j=1}^J \| \beta_j \|_{K_j} \right\}, \qquad \| z \|_{K_j} = \left( z^T K_j z \right)^{1/2},$

where the design matrix $X$ and covariate vector $\beta$ have been replaced by a collection of design matrices $X_j$ and covariate vectors $\beta_j$, one for each of the J groups. Additionally, the penalty term is now a sum over $\ell^2$ norms defined by the positive definite matrices $K_j$. If each covariate is in its own group and $K_j = 1$, then this reduces to the standard lasso, while if there is only a single group and $K_1 = I$, it reduces to ridge regression. Since the penalty reduces to an $\ell^2$ norm on the subspaces defined by each group, it cannot select out only some of the covariates from a group, just as ridge regression cannot. However, because the penalty is the sum over the different subspace norms, as in the standard lasso, the constraint has some non-differentiable points, which correspond to some subspaces being identically zero. Therefore, it can set the coefficient vectors corresponding to some subspaces to zero, while only shrinking others. However, it is possible to extend the group lasso to the so-called sparse group lasso, which can select individual covariates within a group, by adding an additional $\ell^1$ penalty to each group subspace. [13] Another extension, group lasso with overlap, allows covariates to be shared across groups, e.g., if a gene were to occur in two pathways. [14]

The "gglasso" package in R allows for a fast and efficient implementation of group lasso. [16]

Fused lasso

In some cases, the phenomenon under study may have important spatial or temporal structure that must be considered during analysis, such as time series or image-based data. In 2005, Tibshirani and colleagues introduced the fused lasso to extend the use of lasso to this type of data. [15] The fused lasso objective function is

$\min_\beta \left\{ \frac{1}{N} \sum_{i=1}^N (y_i - x_i^T \beta)^2 \right\}$ subject to $\sum_{j=1}^p |\beta_j| \leq t_1$ and $\sum_{j=2}^p |\beta_j - \beta_{j-1}| \leq t_2.$

The first constraint is the lasso constraint, while the second directly penalizes large changes with respect to the temporal or spatial structure, which forces the coefficients to vary smoothly to reflect the system's underlying logic. Clustered lasso [17] is a generalization of fused lasso that identifies and groups relevant covariates based on their effects (coefficients). The basic idea is to penalize the differences between the coefficients so that nonzero ones cluster. This can be modeled using the following regularization:

$\lambda \sum_{1 \leq i < j \leq p} | \beta_i - \beta_j |.$
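The effect of the fusion term can be seen by evaluating the two penalty pieces directly. A small sketch (weights chosen arbitrarily):

```python
import numpy as np

def fused_penalty(beta, lam1, lam2):
    """Lasso term plus a penalty on successive differences (the fusion term)."""
    beta = np.asarray(beta, dtype=float)
    return lam1 * np.abs(beta).sum() + lam2 * np.abs(np.diff(beta)).sum()

piecewise = [0.0, 0.0, 2.0, 2.0, 2.0, 0.0]   # one block of equal coefficients
jagged    = [0.0, 2.0, 0.0, 2.0, 0.0, 2.0]   # same l1 norm, many changes
```

Both vectors have the same $\ell^1$ norm, so the plain lasso term cannot distinguish them, but the difference term makes the piecewise-constant profile strictly cheaper, which is why fused lasso favors smoothly varying coefficients.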

In contrast, variables can be clustered into highly correlated groups, and then a single representative covariate can be extracted from each cluster. [18]

Algorithms exist that solve the fused lasso problem, and some generalizations of it, exactly in a finite number of operations. [19]

Quasi-norms and bridge regression

An example of a PQSQ (piece-wise quadratic function of subquadratic growth) potential function $u(x)$; here the majorant function is $f(x) = x$, and the potential is defined with trimming after $r_3$.

An example of how efficient PQSQ-regularized regression works just as $\ell^1$-norm lasso.

Lasso, elastic net, group and fused lasso construct the penalty functions from the $\ell^1$ and $\ell^2$ norms (with weights, if necessary). Bridge regression utilises general $\ell^p$ norms ($p \geq 1$) and quasinorms ($0 < p < 1$). [21] For example, for p=1/2 the analogue of the lasso objective in the Lagrangian form is to solve

$\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{N} \| y - X \beta \|_2^2 + \lambda \sum_{j=1}^p \sqrt{ | \beta_j | } \right\},$

where $\sum_{j=1}^p \sqrt{ | \beta_j | } = \| \beta \|_{1/2}^{1/2}.$

It is claimed that the fractional quasi-norms $\ell^p$ ($0 < p < 1$) provide more meaningful results in data analysis both theoretically and empirically. [22] The non-convexity of these quasi-norms complicates the optimization problem. To solve this problem, an expectation-minimization procedure has been developed [23] and implemented [20] for minimization of functions of the form

$\frac{1}{N} \| y - X \beta \|_2^2 + \lambda \sum_{j=1}^p F( | \beta_j | ),$

where $F$ is an arbitrary concave monotonically increasing function (for example, $F(u) = u$ gives the lasso penalty and $F(u) = \sqrt{u}$ gives the $\ell^{1/2}$ penalty).

The efficient algorithm for minimization is based on piece-wise quadratic approximation of subquadratic growth (PQSQ). [23]

Adaptive lasso

The adaptive lasso was introduced by Zou in 2006 for linear regression [10] and by Zhang and Lu in 2007 for proportional hazards regression. [24]

Prior lasso

The prior lasso was introduced for generalized linear models by Jiang et al. in 2016 to incorporate prior information, such as the importance of certain covariates. [25] In prior lasso, such information is summarized into pseudo responses (called prior responses) and then an additional criterion function is added to the usual objective function with a lasso penalty. Without loss of generality, in linear regression, the new objective function is equivalent to the usual lasso objective function with the responses being replaced by a weighted average of the observed responses and the prior responses (called the adjusted response values by the prior information).

In prior lasso, the weight given to the prior responses is called the balancing parameter, in that it balances the relative importance of the data and the prior information. In the extreme case where the balancing parameter is zero, prior lasso reduces to lasso. If the balancing parameter tends to infinity, prior lasso relies solely on the prior information to fit the model. Furthermore, the balancing parameter has another appealing interpretation: from a Bayesian viewpoint, it controls the variance of the coefficient vector in its prior distribution.

Prior lasso is more efficient in parameter estimation and prediction (with smaller estimation and prediction errors) when the prior information is of high quality, and it is robust to low-quality prior information given a good choice of the balancing parameter.

Computing lasso solutions

The loss function of the lasso is not differentiable, but a wide variety of techniques from convex analysis and optimization theory have been developed to compute the solution path of the lasso. These include coordinate descent, [26] subgradient methods, least-angle regression (LARS), and proximal gradient methods. [27] Subgradient methods are the natural generalization of traditional methods such as gradient descent and stochastic gradient descent to the case in which the objective function is not differentiable at all points. LARS is a method that is closely tied to lasso models, and in many cases allows them to be fit efficiently, though it may not perform well in all circumstances; LARS generates complete solution paths. [27] Proximal methods have become popular because of their flexibility and performance and are an area of active research. The choice of method depends on the particular lasso variant, the data, and the available resources; however, proximal methods generally perform well.
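As a concrete illustration of a proximal gradient method, the following is a minimal ISTA sketch for the Lagrangian lasso objective (not an optimized production solver; the step size and iteration count are simple default choices, and the data are synthetic):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """ISTA for min_b 0.5/N * ||y - X b||^2 + lam * ||b||_1.
    Each iteration: a gradient step on the smooth part, then soft
    thresholding, which is the proximal operator of lam * ||.||_1."""
    N, p = X.shape
    beta = np.zeros(p)
    step = N / np.linalg.norm(X, 2) ** 2   # inverse Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ beta) / N
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_beta = np.array([3.0, 0.0, -2.0, 0.0, 0.0])
y = X @ true_beta + 0.1 * rng.normal(size=100)
beta_hat = lasso_ista(X, y, lam=0.2)
```

At the solution, the soft-thresholding step has driven the coefficients of the irrelevant covariates to (or very near) zero while keeping the two active ones close to their true values.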

The "glmnet" package in R, where "glm" refers to "generalized linear models" and "net" to the "net" in "elastic net", provides an extremely efficient way to implement LASSO and some of its variants. [28] [29] [30]

The "celer" package in Python provides a highly efficient solver for the lasso problem, often outperforming traditional solvers such as scikit-learn by up to 100 times in certain scenarios, particularly with high-dimensional datasets. The package leverages dual extrapolation techniques to achieve its performance gains. [31] [32] It is available on GitHub.

Choice of regularization parameter

Choosing the regularization parameter ($\lambda$) is a fundamental part of lasso. A good value is essential to the performance of lasso since it controls the strength of shrinkage and variable selection, which, in moderation, can improve both prediction accuracy and interpretability. However, if the regularization becomes too strong, important variables may be omitted and coefficients may be shrunk excessively, which can harm both predictive capacity and inference. Cross-validation is often used to find the regularization parameter.
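A minimal sketch of the cross-validation loop, using a univariate lasso fit by direct grid minimization (the data and the λ grid are synthetic; real applications would use an efficient solver and a finer grid):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=120)
y = 1.5 * x + rng.normal(size=120)

def fit_lasso_1d(xs, ys, lam):
    """Univariate lasso fit by direct search over a coefficient grid."""
    grid = np.linspace(-3.0, 3.0, 1201)
    obj = [((ys - b * xs) ** 2).mean() + lam * abs(b) for b in grid]
    return grid[int(np.argmin(obj))]

def cv_mse(lam, k=5):
    """k-fold cross-validated mean squared error for a given lambda."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        b = fit_lasso_1d(x[train], y[train], lam)
        errs.append(((y[fold] - b * x[fold]) ** 2).mean())
    return float(np.mean(errs))

lams = [0.0, 0.01, 0.1, 1.0, 10.0]
best_lam = min(lams, key=cv_mse)
```

Very large λ values shrink the coefficient to zero and incur large held-out error, so cross-validation selects a moderate amount of regularization.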

Information criteria such as the Bayesian information criterion (BIC) and the Akaike information criterion (AIC) might be preferable to cross-validation, because they are faster to compute and their performance is less volatile in small samples. [33] An information criterion selects the estimator's regularization parameter by maximizing a model's in-sample accuracy while penalizing its effective number of parameters/degrees of freedom. Zou et al. proposed to measure the effective degrees of freedom by counting the number of parameters that deviate from zero. [34] The degrees-of-freedom approach was considered flawed by Kaufman and Rosset [35] and Janson et al., [36] because a model's degrees of freedom might increase even when it is penalized harder by the regularization parameter. As an alternative, the relative simplicity measure defined above can be used to count the effective number of parameters. [33] For the lasso, this measure increases monotonically from zero to $p$, the number of covariates, as the regularization parameter decreases from infinity to zero.

Selected applications

LASSO has been applied in economics and finance, where it was found to improve prediction and to select sometimes-neglected variables, for example in the corporate bankruptcy prediction literature [37] and in predicting high-growth firms. [38]

See also

Related Research Articles

In statistics, the Gauss–Markov theorem states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances and expectation value of zero. The errors do not need to be normal, nor do they need to be independent and identically distributed. The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator, ridge regression, or simply any degenerate estimator.

<span class="mw-page-title-main">Logistic regression</span> Statistical model for a binary dependent variable

In statistics, the logistic model is a statistical model that models the log-odds of an event as a linear combination of one or more independent variables. In regression analysis, logistic regression estimates the parameters of a logistic model. In binary logistic regression there is a single binary dependent variable, coded by an indicator variable, where the two values are labeled "0" and "1", while the independent variables can each be a binary variable or a continuous variable. The corresponding probability of the value labeled "1" can vary between 0 and 1, hence the labeling; the function that converts log-odds to probability is the logistic function, hence the name. The unit of measurement for the log-odds scale is called a logit, from logistic unit, hence the alternative names. See § Background and § Definition for formal mathematics, and § Example for a worked example.

Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering. Also known as Tikhonov regularization, named for Andrey Tikhonov, it is a method of regularization of ill-posed problems. It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias.

<span class="mw-page-title-main">Ordinary least squares</span> Method for estimating the unknown parameters in a linear regression model

In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable in the input dataset and the output of the (linear) function of the independent variable. Some sources consider OLS to be linear regression.

<span class="mw-page-title-main">Regularization (mathematics)</span> Technique to make a model more generalizable and transferable

In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that converts the answer of a problem to a simpler one. It is often used in solving ill-posed problems or to prevent overfitting.

In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables.

In statistics, a generalized additive model (GAM) is a generalized linear model in which the linear response variable depends linearly on unknown smooth functions of some predictor variables, and interest focuses on inference about these smooth functions.

Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes, before some event occurs, to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. The hazard rate at time is the probability per short time dt that an event will occur between and given that up to time no event has occurred yet. For example, taking a drug may halve one's hazard rate for a stroke occurring, or, changing the material from which a manufactured component is constructed, may double its hazard rate for failure. Other types of survival models such as accelerated failure time models do not exhibit proportional hazards. The accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated.

Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients and ultimately allowing the out-of-sample prediction of the regressandconditional on observed values of the regressors. The simplest and most widely used version of this model is the normal linear model, in which given is distributed Gaussian. In this model, and under a particular choice of prior probabilities for the parameters—so-called conjugate priors—the posterior can be found analytically. With more arbitrarily chosen priors, the posteriors generally have to be approximated.

A kernel smoother is a statistical technique to estimate a real-valued function as the weighted average of neighboring observed data. The weight is defined by the kernel, such that closer points are given higher weights. The estimated function is smooth, and the level of smoothness is set by a single parameter. Kernel smoothing is a type of weighted moving average.
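The weighted-average idea can be sketched with the Nadaraya–Watson estimator and a Gaussian kernel; the bandwidth h below is the single smoothness parameter mentioned above, and the sine-curve data are an assumption for the example:

```python
import numpy as np

# Nadaraya-Watson kernel smoother with a Gaussian kernel.
def kernel_smooth(x_query, x_obs, y_obs, h=0.3):
    # Weights decay with distance, so closer observations count more.
    w = np.exp(-0.5 * ((x_query[:, None] - x_obs[None, :]) / h) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

grid = np.linspace(0.5, 2 * np.pi - 0.5, 50)
y_hat = kernel_smooth(grid, x, y)
print(np.max(np.abs(y_hat - np.sin(grid))))  # small error away from the boundaries
```

Shrinking h tracks the data more closely (less smooth); growing h averages over wider neighborhoods (smoother).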

In statistics, principal component regression (PCR) is a regression analysis technique that is based on principal component analysis (PCA). PCR is a form of reduced rank regression. More specifically, PCR is used for estimating the unknown regression coefficients in a standard linear regression model.
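A minimal sketch of PCR follows: PCA on the predictors, ordinary regression on the leading component scores, and a map back to coefficients in the original predictor space. The toy setup deliberately puts the signal in the high-variance directions, an assumption PCR relies on:

```python
import numpy as np

# Toy design whose first two (high-variance) directions carry the signal.
rng = np.random.default_rng(3)
n, p, k = 300, 5, 2
X = rng.normal(size=(n, p)) * np.array([3.0, 2.0, 0.5, 0.5, 0.5])
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
V_k = Vt[:k].T                                     # top-k principal directions
scores = Xc @ V_k                                  # component scores
gamma = np.linalg.lstsq(scores, y - y.mean(), rcond=None)[0]
beta_pcr = V_k @ gamma                             # back in predictor space

print(beta_pcr)  # close to beta_true here, since the signal lies in the top components
```

Keeping k < p is what makes this a reduced rank regression; if the discarded low-variance directions carried signal, PCR would miss it.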

In statistical theory, the field of high-dimensional statistics studies data whose dimension is larger than typically considered in classical multivariate analysis. The area arose owing to the emergence of many modern data sets in which the dimension of the data vectors may be comparable to, or even larger than, the sample size, so that justification for the use of traditional techniques, often based on asymptotic arguments with the dimension held fixed as the sample size increased, was lacking.

In statistics and, in particular, in the fitting of linear or logistic regression models, the elastic net is a regularized regression method that linearly combines the L1 and L2 penalties of the lasso and ridge methods. Elastic net regularization is often more accurate than either method alone with regard to reconstruction.
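The combined penalty can be sketched with naive coordinate descent. This is a simplified illustration under assumed toy data and standardized columns, not the optimized algorithm used by production solvers such as glmnet:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Naive coordinate descent for the elastic net objective
#   (1/2n)||y - Xb||^2 + l1*||b||_1 + (l2/2)*||b||^2,
# assuming the columns of X are standardized so that x_j'x_j / n ~ 1.
def elastic_net(X, y, l1, l2, n_iter=200):
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            rho = X[:, j] @ r_j / n
            # L1 part soft-thresholds; L2 part shrinks by 1/(1 + l2).
            b[j] = soft_threshold(rho, l1) / (1.0 + l2)
    return b

rng = np.random.default_rng(7)
n, p = 500, 6
X = rng.normal(size=(n, p))
X = X / X.std(axis=0)
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

b = elastic_net(X, y, l1=0.2, l2=0.1)
print(b)  # strong coefficients kept (shrunken), irrelevant ones driven to zero
```

Setting l2 = 0 recovers the lasso update and l1 = 0 recovers ridge, which is exactly the linear combination the text describes.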

Proximal gradient methods for learning is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable. One such example is the L1 penalty used by the lasso.
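For the lasso, the non-differentiable L1 penalty is handled by its proximal operator, the soft-thresholding map. The following sketch of the basic proximal gradient scheme (ISTA) uses assumed toy data; step size and iteration count are illustrative choices:

```python
import numpy as np

# Proximal gradient descent (ISTA) for the lasso problem
#   min_b (1/2)||y - Xb||^2 + lam * ||b||_1:
# a gradient step on the smooth least-squares term, then the proximal
# operator of the L1 penalty (soft-thresholding).
def ista(X, y, lam, n_iter=500):
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L      # gradient step on the smooth part
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
    return b

rng = np.random.default_rng(8)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(scale=0.1, size=n)

b = ista(X, y, lam=5.0)
print(b[:3])  # near the true nonzero coefficients; the rest shrink to exactly zero
```

The soft-thresholding step is what produces exact zeros, which is how proximal methods realize the lasso's variable selection.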

In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over a vector x to find a stable solution to the regression problem. When the system is described by a matrix X rather than a vector, the problem can be written analogously, with the vector norm enforcing a regularization penalty on x extended to a matrix norm on X.
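For the Frobenius-norm (Tikhonov) case the matrix problem still has a closed form, mirroring the vector case column by column. The data below are an assumed toy example:

```python
import numpy as np

# Matrix analogue of Tikhonov regularization:
#   min_X ||A X - Y||_F^2 + lam * ||X||_F^2
# with closed-form solution X = (A'A + lam I)^{-1} A' Y.
rng = np.random.default_rng(4)
A = rng.normal(size=(50, 8))
X_true = rng.normal(size=(8, 3))   # the matrix to be learned
Y = A @ X_true + rng.normal(scale=0.05, size=(50, 3))

lam = 0.1
X_hat = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ Y)

print(np.max(np.abs(X_hat - X_true)))  # small for mild noise and small lam
```

Structured penalties (for example, a nuclear norm encouraging low rank) replace the Frobenius norm here but generally lose the closed form.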

The de-sparsified lasso is used to construct confidence intervals and statistical tests for single or low-dimensional components of a large parameter vector in high-dimensional models.

In statistics, linear regression is a model that estimates the linear relationship between a scalar response and one or more explanatory variables. A model with exactly one explanatory variable is a simple linear regression; a model with two or more explanatory variables is a multiple linear regression. This term is distinct from multivariate linear regression, which predicts multiple correlated dependent variables rather than a single dependent variable.
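A minimal sketch of multiple linear regression (two explanatory variables, one scalar response) fitted by ordinary least squares, using assumed toy data:

```python
import numpy as np

# Toy data: y depends linearly on two explanatory variables plus noise.
rng = np.random.default_rng(5)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(scale=0.1, size=n)

# Design matrix with an intercept column; least-squares fit.
A = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # approximately [1, 2, -3]
```

Dropping x2 would make this a simple linear regression; stacking several response columns in y would make it multivariate.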

Regularized least squares (RLS) is a family of methods for solving the least-squares problem while using regularization to further constrain the resulting solution.

Structured sparsity regularization is a class of methods, and an area of research in statistical learning theory, that extend and generalize sparsity regularization learning methods. Both sparsity and structured sparsity regularization methods seek to exploit the assumption that the output variable to be learned can be described by a reduced number of variables in the input space. Sparsity regularization methods focus on selecting the input variables that best describe the output. Structured sparsity regularization methods generalize and extend sparsity regularization methods by allowing for optimal selection over structures like groups or networks of input variables.
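The simplest structured case is the group lasso, where each predefined group of coefficients is kept or zeroed out as a unit. Its shrinkage rule is block soft-thresholding, sketched below on assumed toy values:

```python
import numpy as np

# Block (group-wise) soft-thresholding: the shrinkage operator behind the
# group lasso. Each group's coefficient sub-vector is scaled toward zero
# by its Euclidean norm, and zeroed entirely if that norm is below t.
def group_soft_threshold(b, groups, t):
    out = np.zeros_like(b)
    for idx in groups:
        norm = np.linalg.norm(b[idx])
        if norm > t:
            out[idx] = b[idx] * (1.0 - t / norm)
    return out

b = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(b, groups, 1.0)
print(shrunk)  # first group shrunk toward zero, second group zeroed as a block
```

With every group of size one this reduces to the ordinary soft-thresholding of the lasso, which is the sense in which structured sparsity generalizes it.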

Batch normalization is a method used to make training of artificial neural networks faster and more stable through normalization of the layers' inputs by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015.
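The re-centering and re-scaling step can be sketched as a forward pass over one mini-batch; the batch shape and the learned scale/shift values below are assumptions for the example (training-time statistics only, with no running averages for inference):

```python
import numpy as np

# Batch-normalization forward pass: normalize each feature over the batch,
# then apply the learned scale (gamma) and shift (beta).
def batch_norm(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # re-centered, re-scaled inputs
    return gamma * x_hat + beta

rng = np.random.default_rng(6)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 4))  # a mini-batch of activations
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))

print(out.mean(axis=0), out.std(axis=0))  # roughly 0 and 1 per feature
```

Because gamma and beta are learned, the network can undo the normalization where that helps; the benefit is in the better-conditioned optimization.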

References

  1. Santosa, Fadil; Symes, William W. (1986). "Linear inversion of band-limited reflection seismograms". SIAM Journal on Scientific and Statistical Computing. 7 (4). SIAM: 1307–1330. doi:10.1137/0907087.
  2. Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the lasso". Journal of the Royal Statistical Society. Series B (Methodological). 58 (1). Wiley: 267–288. doi:10.1111/j.2517-6161.1996.tb02080.x. JSTOR 2346178.
  3. Tibshirani, Robert (1997). "The lasso Method for Variable Selection in the Cox Model". Statistics in Medicine. 16 (4): 385–395. CiteSeerX 10.1.1.411.8024. doi:10.1002/(SICI)1097-0258(19970228)16:4<385::AID-SIM380>3.0.CO;2-3. PMID 9044528.
  4. Breiman, Leo (1995). "Better Subset Regression Using the Nonnegative Garrote". Technometrics. 37 (4): 373–384. doi:10.1080/00401706.1995.10484371.
  5. McDonald, Gary (2009). "Ridge regression". Wiley Interdisciplinary Reviews: Computational Statistics. 1: 93–100. doi:10.1002/wics.14. S2CID 64699223.
  6. Melkumova, L. E.; Shatskikh, S. Ya. (2017). "Comparing Ridge and LASSO estimators for data analysis". Procedia Engineering. 3rd International Conference "Information Technology and Nanotechnology" (ITNT-2017), 25–27 April 2017, Samara, Russia. 201: 746–755. doi:10.1016/j.proeng.2017.09.615. ISSN 1877-7058.
  7. Zou, Hui; Hastie, Trevor (2005). "Regularization and Variable Selection via the Elastic Net". Journal of the Royal Statistical Society. Series B (Statistical Methodology). 67 (2). Wiley: 301–320. doi:10.1111/j.1467-9868.2005.00503.x. JSTOR 3647580. S2CID 122419596.
  8. Hoornweg, Victor (2018). "Chapter 8". Science: Under Submission. Hoornweg Press. ISBN 978-90-829188-0-9.
  9. Motamedi, Fahimeh; Sanchez, Horacio; Mehri, Alireza; Ghasemi, Fahimeh (October 2021). "Accelerating Big Data Analysis through LASSO-Random Forest Algorithm in QSAR Studies". Bioinformatics. 37 (19): 469–475. doi:10.1093/bioinformatics/btab659. ISSN 1367-4803. PMID 34979024.
  10. Zou, Hui (2006). "The Adaptive Lasso and Its Oracle Properties". Journal of the American Statistical Association. 101 (476): 1418–1429.
  11. Huang, Yunfei; et al. (2022). "Sparse inference and active learning of stochastic differential equations from data". Scientific Reports. 12 (1): 21691. arXiv:2203.11010. doi:10.1038/s41598-022-25638-9. PMC 9755218. PMID 36522347.
  12. Yuan, Ming; Lin, Yi (2006). "Model Selection and Estimation in Regression with Grouped Variables". Journal of the Royal Statistical Society. Series B (Statistical Methodology). 68 (1). Wiley: 49–67. doi:10.1111/j.1467-9868.2005.00532.x. JSTOR 3647556. S2CID 6162124.
  13. Puig, Arnau Tibau; Wiesel, Ami; Hero, Alfred O. III. "A Multidimensional Shrinkage-Thresholding Operator". Proceedings of the 15th Workshop on Statistical Signal Processing (SSP'09). IEEE: 113–116.
  14. Jacob, Laurent; Obozinski, Guillaume; Vert, Jean-Philippe (2009). "Group Lasso with Overlap and Graph Lasso". Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada.
  15. Tibshirani, Robert; Saunders, Michael; Rosset, Saharon; Zhu, Ji; Knight, Keith (2005). "Sparsity and Smoothness via the Fused Lasso". Journal of the Royal Statistical Society. Series B (Statistical Methodology). 67 (1): 91–108. ISSN 1369-7412.
  16. Yang, Yi; Zou, Hui (November 2015). "A fast unified algorithm for solving group-lasso penalize learning problems". Statistics and Computing. 25 (6): 1129–1141. doi:10.1007/s11222-014-9498-5. ISSN 0960-3174. S2CID 255072855.
  17. She, Yiyuan (2010). "Sparse regression with exact clustering". Electronic Journal of Statistics. 4: 1055–1096. doi:10.1214/10-EJS578.
  18. Reid, Stephen (2015). "Sparse regression and marginal testing using cluster prototypes". Biostatistics. 17 (2): 364–376. arXiv:1503.00334. doi:10.1093/biostatistics/kxv049. PMC 5006118. PMID 26614384.
  19. Bento, Jose (2018). "On the Complexity of the Weighted Fused Lasso". IEEE Signal Processing Letters. 25 (10): 1595–1599. arXiv:1801.04987. doi:10.1109/LSP.2018.2867800. S2CID 5008891.
  20. Mirkes, E. M. PQSQ-regularized-regression repository, GitHub.
  21. Fu, Wenjiang J. (1998). "The Bridge versus the Lasso". Journal of Computational and Graphical Statistics. 7 (3). Taylor & Francis: 397–416.
  22. Aggarwal, C. C.; Hinneburg, A.; Keim, D. A. (2001). "On the Surprising Behavior of Distance Metrics in High Dimensional Space". In Van den Bussche, J.; Vianu, V. (eds.), Database Theory — ICDT 2001. Lecture Notes in Computer Science, Vol. 1973. Springer, Berlin, Heidelberg: 420–434.
  23. Gorban, A. N.; Mirkes, E. M.; Zinovyev, A. (2016). "Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning". Neural Networks. 84: 28–38.
  24. Zhang, H. H.; Lu, W. (2007). "Adaptive Lasso for Cox's proportional hazards model". Biometrika. 94 (3): 691–703. doi:10.1093/biomet/asm037. ISSN 0006-3444.
  25. Jiang, Yuan (2016). "Variable selection with prior information for generalized linear models via the prior lasso method". Journal of the American Statistical Association. 111 (513): 355–376. doi:10.1080/01621459.2015.1008363. PMC 4874534. PMID 27217599.
  26. Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert (2010). "Regularization Paths for Generalized Linear Models via Coordinate Descent". Journal of Statistical Software. 33 (1): 1–22. https://www.jstatsoft.org/article/view/v033i01/v33i01.pdf.
  27. Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert (2004). "Least Angle Regression". The Annals of Statistics. 32 (2): 407–451. ISSN 0090-5364.
  28. Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert (2010). "Regularization Paths for Generalized Linear Models via Coordinate Descent". Journal of Statistical Software. 33 (1): 1–22. doi:10.18637/jss.v033.i01. ISSN 1548-7660. PMC 2929880. PMID 20808728.
  29. Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob (2011). "Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent". Journal of Statistical Software. 39 (5): 1–13. doi:10.18637/jss.v039.i05. ISSN 1548-7660. PMC 4824408. PMID 27065756.
  30. Tay, J. Kenneth; Narasimhan, Balasubramanian; Hastie, Trevor (2023). "Elastic Net Regularization Paths for All Generalized Linear Models". Journal of Statistical Software. 106 (1). doi:10.18637/jss.v106.i01. ISSN 1548-7660. PMC 10153598. PMID 37138589.
  31. Massias, Mathurin; Gramfort, Alexandre; Salmon, Joseph (2018). "Celer: a Fast Solver for the Lasso with Dual Extrapolation". Proceedings of the 35th International Conference on Machine Learning. 80: 3321–3330. arXiv:1802.07481.
  32. Massias, Mathurin; Vaiter, Samuel; Gramfort, Alexandre; Salmon, Joseph (2020). "Dual Extrapolation for Sparse GLMs". Journal of Machine Learning Research. 21 (234): 1–33.
  33. Hoornweg, Victor (2018). "Chapter 9". Science: Under Submission. Hoornweg Press. ISBN 978-90-829188-0-9.
  34. Zou, Hui; Hastie, Trevor; Tibshirani, Robert (2007). "On the 'Degrees of Freedom' of the Lasso". The Annals of Statistics. 35 (5): 2173–2192. doi:10.1214/009053607000000127.
  35. Kaufman, S.; Rosset, S. (2014). "When does more regularization imply fewer degrees of freedom? Sufficient conditions and counterexamples". Biometrika. 101 (4): 771–784. doi:10.1093/biomet/asu034. ISSN 0006-3444.
  36. Janson, Lucas; Fithian, William; Hastie, Trevor J. (2015). "Effective degrees of freedom: a flawed metaphor". Biometrika. 102 (2): 479–485. doi:10.1093/biomet/asv019. ISSN 0006-3444. PMC 4787623. PMID 26977114.
  37. Shaonan, Tian; Yu, Yan; Guo, Hui (2015). "Variable selection and corporate bankruptcy forecasts". Journal of Banking & Finance. 52 (1): 89–100. doi:10.1016/j.jbankfin.2014.12.003.
  38. Coad, Alex; Srhoj, Stjepan (2020). "Catching Gazelles with a Lasso: Big data techniques for the prediction of high-growth firms". Small Business Economics. 55 (1): 541–565. doi:10.1007/s11187-019-00203-3. S2CID 255011751.