Generalized additive model


In statistics, a generalized additive model (GAM) is a generalized linear model in which the linear predictor depends linearly on unknown smooth functions of some predictor variables, and interest focuses on inference about these smooth functions.


GAMs were originally developed by Trevor Hastie and Robert Tibshirani [1] to blend properties of generalized linear models with additive models. They can be interpreted as the discriminative generalization of the naive Bayes generative model. [2]

The model relates a univariate response variable, Y, to some predictor variables, x_i. An exponential family distribution is specified for Y (for example normal, binomial or Poisson distributions) along with a link function g (for example the identity or log functions) relating the expected value of Y to the predictor variables via a structure such as

g(E(Y)) = \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_m(x_m).

The functions f_i may be functions with a specified parametric form (for example a polynomial, or an un-penalized regression spline of a variable) or may be specified non-parametrically, or semi-parametrically, simply as 'smooth functions', to be estimated by non-parametric means. So a typical GAM might use a scatterplot smoothing function, such as a locally weighted mean, for f_1(x_1), and then use a factor model for f_2(x_2), as sketched below. This flexibility to allow non-parametric fits with relaxed assumptions on the actual relationship between response and predictor provides the potential for better fits to data than purely parametric models, but arguably with some loss of interpretability.
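
For illustration, such a mixed specification can be written in R using package mgcv (introduced in the Software section below); the data frame and variable names here are purely illustrative, not part of the original article:

library(mgcv)
# x1 continuous, x2 a factor: s(x1) is estimated as a smooth function,
# while x2 enters as a parametric (factor) term.
fit <- gam(y ~ s(x1) + x2, data = mydata)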

Theoretical background

It had been known since the 1950s (via the Kolmogorov–Arnold representation theorem) that any multivariate continuous function could be represented as sums and compositions of univariate functions:

f(x_1, \ldots, x_m) = \sum_{q=0}^{2m} \Phi_q \left( \sum_{p=1}^{m} \phi_{q,p}(x_p) \right).

Unfortunately, though the Kolmogorov–Arnold representation theorem asserts the existence of a function of this form, it gives no mechanism whereby one could be constructed. Certain constructive proofs exist, but they tend to require highly complicated (i.e. fractal) functions, and thus are not suitable for modeling approaches. Therefore, the generalized additive model [1] drops the outer sum, and demands instead that the function belong to a simpler class,

f(x_1, \ldots, x_m) = \Phi \left( \sum_{i=1}^{m} f_i(x_i) \right),

where \Phi is a smooth monotonic function. Writing g for the inverse of \Phi, this is traditionally written as

g(f(x_1, \ldots, x_m)) = \sum_i f_i(x_i).

When this function is approximating the expectation of some observed quantity, it could be written as

g(E(Y)) = \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_m(x_m),

which is the standard formulation of a generalized additive model. It was then shown [1] that the backfitting algorithm will always converge for these functions.

Generality

The GAM model class is quite broad, given that smooth function is a rather broad category. For example, a covariate x_j may be multivariate and the corresponding f_j a smooth function of several variables, or f_j might be the function mapping the level of a factor to the value of a random effect. Another example is a varying coefficient (geographic regression) term such as z_j f_j(x_j), where z_j and x_j are both covariates. Or, if x_j(t) is itself an observation of a function, we might include a term such as \int f_j(t) x_j(t) dt (sometimes known as a signal regression term). f_j could also be a simple parametric function as might be used in any generalized linear model. The model class has been generalized in several directions, notably beyond exponential family response distributions, beyond modelling of only the mean and beyond univariate data. [3] [4] [5]

GAM fitting methods

The original GAM fitting method estimated the smooth components of the model using non-parametric smoothers (for example smoothing splines or local linear regression smoothers) via the backfitting algorithm. [1] Backfitting works by iterative smoothing of partial residuals and provides a very general modular estimation method capable of using a wide variety of smoothing methods to estimate the terms. A disadvantage of backfitting is that it is difficult to integrate with the estimation of the degree of smoothness of the model terms, so that in practice the user must set these, or select between a modest set of pre-defined smoothing levels.
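
As a purely illustrative sketch of the idea (not code from the original reference), the following base R function fits a two-term Gaussian additive model by backfitting, using smooth.spline as the component smoother; all names are illustrative and no claim is made that this matches any particular implementation.

# Illustrative backfitting sketch for y = alpha + f1(x1) + f2(x2) + error.
backfit <- function(y, x1, x2, maxit = 50, tol = 1e-8) {
  alpha <- mean(y)
  f1 <- f2 <- rep(0, length(y))
  for (it in 1:maxit) {
    f1.old <- f1; f2.old <- f2
    # Smooth the partial residuals for each term in turn, then centre the estimate.
    f1 <- predict(smooth.spline(x1, y - alpha - f2), x1)$y; f1 <- f1 - mean(f1)
    f2 <- predict(smooth.spline(x2, y - alpha - f1), x2)$y; f2 <- f2 - mean(f2)
    if (mean((f1 - f1.old)^2 + (f2 - f2.old)^2) < tol) break  # stop when the estimates stabilise
  }
  list(alpha = alpha, f1 = f1, f2 = f2)
}
# Example use on simulated data:
set.seed(1); x1 <- runif(300); x2 <- runif(300)
y <- 2 + sin(2 * pi * x1) + 0.5 * x2^2 + rnorm(300, sd = 0.3)
fit <- backfit(y, x1, x2)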

If the f_j are represented using smoothing splines [6] then the degree of smoothness can be estimated as part of model fitting using generalized cross validation, or by restricted maximum likelihood (REML, sometimes known as 'GML'), which exploits the duality between spline smoothers and Gaussian random effects. [7] This full spline approach carries an O(n^3) computational cost, where n is the number of observations for the response variable, rendering it somewhat impractical for moderately large datasets. More recent methods have addressed this computational cost either by up-front reduction of the size of the basis used for smoothing (rank reduction) [8] [9] [10] [11] [12] or by finding sparse representations of the smooths using Markov random fields, which are amenable to the use of sparse matrix methods for computation. [13] These more computationally efficient methods use GCV (or AIC or similar), REML, or a fully Bayesian approach for inference about the degree of smoothness of the model components. Estimating the degree of smoothness via REML can be viewed as an empirical Bayes method.

An alternative approach with particular advantages in high dimensional settings is to use boosting, although this typically requires bootstrapping for uncertainty quantification. [14] [15] GAMs fit using bagging and boosting have been found to generally outperform GAMs fit using spline methods. [16]

The rank reduced framework

Many modern implementations of GAMs and their extensions are built around the reduced rank smoothing approach, because it allows well founded estimation of the smoothness of the component smooths at comparatively modest computational cost, and also facilitates implementation of a number of model extensions in a way that is more difficult with other methods. At its simplest the idea is to replace the unknown smooth functions in the model with basis expansions

f_j(x_j) = \sum_{k=1}^{K_j} \beta_{jk} b_{jk}(x_j),

where the b_{jk}(x_j) are known basis functions, usually chosen for good approximation theoretic properties (for example B splines or reduced rank thin plate splines), and the \beta_{jk} are coefficients to be estimated as part of model fitting. The basis dimension K_j is chosen to be sufficiently large that we expect it to overfit the data to hand (thereby avoiding bias from model over-simplification), but small enough to retain computational efficiency. If K = \sum_j K_j then the computational cost of model estimation this way will be O(nK^2).
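
The following minimal sketch (illustrative, not tied to any particular GAM package) shows a single smooth represented by a B-spline basis expansion and estimated, unpenalized, as an ordinary linear model; with a generous basis dimension such an unpenalized fit will typically be too wiggly, which motivates the penalization described next.

library(splines)
set.seed(2)
x <- runif(200); y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
B <- bs(x, df = 10)   # evaluates K = 10 B-spline basis functions b_k(x) at the data
fit <- lm(y ~ B)      # basis coefficients beta_k estimated by unpenalized least squares
f.hat <- fitted(fit)  # the resulting (typically overfitted) estimate of the smooth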

Notice that the f_j are only identifiable to within an intercept term (we could add any constant to f_1 while subtracting it from f_2 without changing the model predictions at all), so identifiability constraints have to be imposed on the smooth terms to remove this ambiguity. Sharpest inference about the f_j is generally obtained by using the sum-to-zero constraints

\sum_i f_j(x_{ji}) = 0 \quad \forall j,

i.e. by insisting that the sum of each f_j evaluated at its observed covariate values should be zero. Such linear constraints can most easily be imposed by reparametrization at the basis setup stage, [11] so below it is assumed that this has been done.

Having replaced all the f_j in the model with such basis expansions we have turned the GAM into a generalized linear model (GLM), with a model matrix that simply contains the basis functions evaluated at the observed x_j values. However, because the basis dimensions, K_j, have been chosen to be somewhat larger than is believed to be necessary for the data, the model is over-parameterized and will overfit the data if estimated as a regular GLM. The solution to this problem is to penalize departure from smoothness in the model fitting process, controlling the weight given to the smoothing penalties using smoothing parameters. For example, consider the situation in which all the smooths are univariate functions. Writing all the parameters in one vector, \beta, suppose that D(\beta) is the deviance (twice the difference between saturated log likelihood and the model log likelihood) for the model. Minimizing the deviance by the usual iteratively re-weighted least squares would result in overfit, so we seek to minimize

D(\beta) + \sum_j \lambda_j \int f_j''(x)^2 \, dx,

where the integrated square second derivative penalties serve to penalize wiggliness (lack of smoothness) of the f_j during fitting, and the smoothing parameters \lambda_j control the tradeoff between model goodness of fit and model smoothness. In the example, \lambda_j \to \infty would ensure that the estimate of f_j(x_j) would be a straight line in x_j.

Given the basis expansion for each f_j, the wiggliness penalties can be expressed as quadratic forms in the model coefficients. [11] That is, we can write

\int f_j''(x)^2 \, dx = \beta_j^T \bar{S}_j \beta_j = \beta^T S_j \beta,

where \bar{S}_j is a matrix of known coefficients computable from the penalty and basis, \beta_j is the vector of coefficients for f_j, and S_j is just \bar{S}_j padded with zeros so that the second equality holds and we can write the penalty in terms of the full coefficient vector \beta. Many other smoothing penalties can be written in the same way, and given the smoothing parameters the model fitting problem now becomes

\hat{\beta} = \underset{\beta}{\operatorname{argmin}} \; D(\beta) + \sum_j \lambda_j \beta^T S_j \beta,

which can be found using a penalized version of the usual iteratively reweighted least squares (IRLS) algorithm for GLMs: the algorithm is unchanged except that the sum of quadratic penalties is added to the working least squares objective at each iteration of the algorithm.

Penalization has several effects on inference, relative to a regular GLM. For one thing the estimates are subject to some smoothing bias, which is the price that must be paid for limiting estimator variance by penalization. However, if smoothing parameters are selected appropriately the (squared) smoothing bias introduced by penalization should be less than the reduction in variance that it produces, so that the net effect is a reduction in mean square estimation error, relative to not penalizing. A related effect of penalization is that the notion of degrees of freedom of a model has to be modified to account for the penalties' action in reducing the coefficients' freedom to vary. For example, if W is the diagonal matrix of IRLS weights at convergence, and X is the GAM model matrix, then the model effective degrees of freedom is given by trace(F), where

F = (X^T W X + \sum_j \lambda_j S_j)^{-1} X^T W X

is the effective degrees of freedom matrix. [11] In fact, summing just the diagonal elements of F corresponding to the coefficients of f_j gives the effective degrees of freedom for the estimate of f_j.
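
A hedged sketch of these ideas for the Gaussian identity-link case (so that W is the identity and penalized IRLS reduces to a single penalized least squares solve); a simple second-difference (P-spline style) penalty stands in for the derivative penalty, and all names are illustrative rather than taken from any package.

library(splines)
set.seed(3)
n <- 200; x <- runif(n); y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
X <- cbind(1, bs(x, df = 20))                  # model matrix: intercept plus basis functions
D <- diff(diag(ncol(X) - 1), differences = 2)  # second-difference matrix on the basis coefficients
S <- rbind(0, cbind(0, t(D) %*% D))            # penalty matrix, padded with zeros for the intercept
lambda <- 5                                    # smoothing parameter (fixed here, not estimated)
A <- t(X) %*% X + lambda * S
beta.hat <- solve(A, t(X) %*% y)               # penalized least squares estimate of the coefficients
Fmat <- solve(A, t(X) %*% X)                   # effective degrees of freedom matrix (W = I here)
edf <- sum(diag(Fmat))                         # trace gives the model effective degrees of freedom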

Bayesian smoothing priors

Smoothing bias complicates interval estimation for these models, and the simplest approach turns out to involve a Bayesian treatment of the smoothing penalties. [17] [18] [19] [20] Understanding this Bayesian view of smoothing also helps to understand the REML and full Bayes approaches to smoothing parameter estimation. At some level smoothing penalties are imposed because we believe smooth functions to be more probable than wiggly ones, and if that is true then we might as well formalize this notion by placing a prior on model wiggliness. A very simple prior might be

\pi(\beta) \propto \exp\{ -\beta^T S_\lambda \beta / (2\phi) \}, \quad \text{where } S_\lambda = \sum_j \lambda_j S_j

(and \phi is the GLM scale parameter, introduced only for later convenience), but we can immediately recognize this as a multivariate normal prior with mean zero and precision matrix S_\lambda / \phi. Since the penalty allows some functions through unpenalized (straight lines, given the example penalties), S_\lambda is rank deficient, and the prior is actually improper, with a covariance matrix given by the Moore–Penrose pseudoinverse of S_\lambda (the impropriety corresponds to ascribing infinite variance to the unpenalized components of a smooth). [19]

Now if this prior is combined with the GLM likelihood, we find that the posterior mode for \beta is exactly the \hat{\beta} found above by penalized IRLS. [19] [11] Furthermore, we have the large sample result that

\beta | y \sim N(\hat{\beta}, (X^T W X + S_\lambda)^{-1} \phi),

which can be used to produce confidence/credible intervals for the smooth components, f_j. The Gaussian smoothness priors are also the basis for fully Bayesian inference with GAMs, [9] as well as methods estimating GAMs as mixed models [12] [21] that are essentially empirical Bayes methods.
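
In R package mgcv (see the Software section below), for example, this posterior covariance is what underlies the standard errors returned by predict() and the credible bands drawn by plot(); a hedged sketch of typical usage, based on the package's built-in data simulator:

library(mgcv)
dat <- gamSim(1, n = 400)                        # simulated test data supplied with mgcv
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)
plot(b, pages = 1)                               # smooth estimates with Bayesian credible bands
pr <- predict(b, type = "terms", se.fit = TRUE)  # termwise estimates and standard errors
ci.upr <- pr$fit[, "s(x2)"] + 2 * pr$se.fit[, "s(x2)"]  # rough 95% interval for one smooth
ci.lwr <- pr$fit[, "s(x2)"] - 2 * pr$se.fit[, "s(x2)"]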

Smoothing parameter estimation

So far we have treated estimation and inference given the smoothing parameters, \lambda, but these also need to be estimated. One approach is to take a fully Bayesian approach, defining priors on the (log) smoothing parameters, and using stochastic simulation or high order approximation methods to obtain information about the posterior of the model coefficients. [9] [13] An alternative is to select the smoothing parameters to optimize a prediction error criterion such as generalized cross validation (GCV) or the Akaike information criterion (AIC). [22] Finally, we may choose to maximize the marginal likelihood (REML) obtained by integrating the model coefficients, \beta, out of the joint density of y and \beta,

V(\lambda) = \int f(y|\beta) \, \pi(\beta) \, d\beta.

Since f(y|\beta) is just the likelihood of \beta, we can view this as choosing \lambda to maximize the average likelihood of random draws from the prior. The preceding integral is usually analytically intractable but can be approximated to quite high accuracy using Laplace's method. [21]
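
In mgcv, for instance, the smoothing parameter selection criterion is chosen via the method argument of gam; a brief hedged illustration, assuming a response y and covariates x0 and x1 are available in the workspace:

library(mgcv)
b.reml <- gam(y ~ s(x0) + s(x1), method = "REML")    # Laplace approximate REML
b.gcv  <- gam(y ~ s(x0) + s(x1), method = "GCV.Cp")  # GCV (or Mallows' Cp when the scale is known)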

Smoothing parameter inference is the most computationally taxing part of model estimation/inference. For example, optimizing a GCV or marginal likelihood criterion typically requires numerical optimization via a Newton or quasi-Newton method, with each trial value for the (log) smoothing parameter vector requiring a penalized IRLS iteration to evaluate the corresponding \hat{\beta} alongside the other ingredients of the GCV score or Laplace approximate marginal likelihood (LAML). Furthermore, obtaining the derivatives of the GCV or LAML required for optimization involves implicit differentiation to obtain the derivatives of \hat{\beta} w.r.t. the log smoothing parameters, and this requires some care if efficiency and numerical stability are to be maintained. [21]

Software

Backfit GAMs were originally provided by the gam function in S, [23] now ported to the R language as the gam package. The SAS proc GAM also provides backfit GAMs. The recommended package in R for GAMs is mgcv, which stands for mixed GAM computational vehicle [11] and is based on the reduced rank approach with automatic smoothing parameter selection. The SAS proc GAMPL is an alternative implementation. In Python, there is the InterpretML package, which implements a bagging and boosting approach. [24] There are many alternative packages. Examples include the R packages mboost, [14] which implements a boosting approach; gss, which provides the full spline smoothing methods; [25] VGAM, which provides vector GAMs; [4] and gamlss, which provides generalized additive models for location, scale and shape. BayesX and its R interface provide GAMs and extensions via MCMC and penalized likelihood methods. [26] The INLA software implements a fully Bayesian approach based on Markov random field representations exploiting sparse matrix methods. [13]

As an example of how models can be estimated in practice with software, consider R package mgcv. Suppose that our R workspace contains vectors y, x and z and we want to estimate the model

y_i = \beta_0 + f_1(x_i) + f_2(z_i) + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2).

Within R we could issue the commands

library(mgcv)  # load the package
b = gam(y ~ s(x) + s(z))

In common with most R modelling functions gam expects a model formula to be supplied, specifying the model structure to fit. The response variable is given to the left of the ~ while the specification of the linear predictor is given to the right. gam sets up bases and penalties for the smooth terms, estimates the model including its smoothing parameters and, in standard R fashion, returns a fitted model object, which can then be interrogated using various helper functions, such as summary, plot, predict, and AIC.
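
Continuing this example, typical interrogation of the fitted object might look as follows (the newdata values are of course illustrative):

summary(b)           # approximate significance of smooth terms, deviance explained, etc.
plot(b, pages = 1)   # estimated smooth functions with credible bands
AIC(b)               # conditional AIC of the fit (see Model selection below)
predict(b, newdata = data.frame(x = 0.5, z = 0.2))  # prediction at new covariate values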

This simple example has used several default settings which it is important to be aware of. For example, a Gaussian distribution and identity link have been assumed, and the smoothing parameter selection criterion was GCV. Also the smooth terms were represented using 'penalized thin plate regression splines', and the basis dimension for each was set to 10 (implying a maximum of 9 degrees of freedom after identifiability constraints have been imposed). A second example illustrates how we can control these things. Suppose that we want to estimate the model

\log\{E(y_i)\} = \beta_0 + \beta_1 x_i + f_1(t_i) + f_2(v_i, w_i), \quad y_i \sim \text{Poisson},

using REML smoothing parameter selection, and we expect f_1 to be a relatively complicated function which we would like to model with a penalized cubic regression spline. For f_2 we also have to decide whether v and w are naturally on the same scale, so that an isotropic smoother such as a thin plate spline is appropriate (specified via 's(v,w)'), or whether they are really on different scales, so that we need separate smoothing penalties and smoothing parameters for v and w as provided by a tensor product smoother. Supposing we opted for the latter in this case, the following R code would estimate the model

b1 = gam(y ~ x + s(t,bs="cr",k=100) + te(v,w),family=poisson,method="REML")

which uses a basis size of 100 for the smooth of t. The specification of distribution and link function uses the 'family' objects that are standard when fitting GLMs in R or S. Note that Gaussian random effects can also be added to the linear predictor.
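
For example, if a grouping factor g were available in the workspace, a simple Gaussian random intercept could be added (in mgcv) as a smooth using the random effect basis; this is an illustrative sketch rather than part of the original example:

b2 = gam(y ~ x + s(t, bs = "cr", k = 100) + te(v, w) + s(g, bs = "re"),
         family = poisson, method = "REML")  # s(g, bs="re") adds an i.i.d. Gaussian random intercept for g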

These examples are only intended to give a very basic flavour of the way that GAM software is used; for more detail refer to the software documentation for the various packages and the references below. [11] [25] [4] [23] [14] [26]

Model checking

As with any statistical model it is important to check the model assumptions of a GAM. Residual plots should be examined in the same way as for any GLM. That is, deviance residuals (or other standardized residuals) should be examined for patterns that might suggest a substantial violation of the independence or mean-variance assumptions of the model. This will usually involve plotting the standardized residuals against fitted values and covariates to look for mean-variance problems or missing pattern, and may also involve examining correlograms (ACFs) and/or variograms of the residuals to check for violation of independence. If the model mean-variance relationship is correct then scaled residuals should have roughly constant variance. Note that since GLMs and GAMs can be estimated using quasi-likelihood, it follows that details of the distribution of the residuals beyond the mean-variance relationship are of relatively minor importance.

One issue that is more common with GAMs than with other GLMs is a danger of falsely concluding that data are zero inflated. The difficulty arises when data contain many zeroes that can be modelled by a Poisson or binomial with a very low expected value: the flexibility of the GAM structure will often allow representation of a very low mean over some region of covariate space, but the distribution of standardized residuals will fail to look anything like the approximate normality that introductory GLM classes teach us to expect, even if the model is perfectly correct. [27]

The one extra check that GAMs introduce is the need to check that the degrees of freedom chosen are appropriate. This is particularly acute when using methods that do not automatically estimate the smoothness of model components. When using methods with automatic smoothing parameter selection it is still necessary to check that the choice of basis dimension was not restrictively small, although if the effective degrees of freedom of a term estimate is comfortably below its basis dimension then this is unlikely. In any case, checking is based on examining pattern in the residuals with respect to x_j. This can be done using partial residuals overlaid on the plot of \hat{f}_j(x_j), or using permutation of the residuals to construct tests for residual pattern (as in the 'gam.check' function in R package mgcv).
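
With mgcv, for instance, both kinds of check can be run on the model b fitted in the Software section above (a hedged illustration of typical usage):

gam.check(b)               # residual plots plus a check that each basis dimension k was large enough
plot(b, residuals = TRUE)  # partial residuals overlaid on the estimated smooths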

Model selection

When smoothing parameters are estimated as part of model fitting then much of what would traditionally count as model selection has been absorbed into the fitting process: smoothing parameter estimation has already selected between a rich family of models of different functional complexity. However, smoothing parameter estimation does not typically remove a smooth term from the model altogether, because most penalties leave some functions un-penalized (e.g. straight lines are unpenalized by the spline derivative penalty given above). So the question of whether a term should be in the model at all remains. One simple approach to this issue is to add an extra penalty to each smooth term in the GAM, which penalizes the components of the smooth that would otherwise be unpenalized (and only those); see the sketch below. Each extra penalty has its own smoothing parameter and estimation then proceeds as before, but now with the possibility that terms will be completely penalized to zero. [28] In high dimensional settings it may make more sense to attempt this task using the lasso or elastic net regularization. Boosting also performs term selection automatically as part of fitting. [14]
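
In mgcv this extra penalty on the otherwise unpenalized components is requested with the select argument (following Marra and Wood [28]); a hedged sketch continuing the earlier example:

b3 = gam(y ~ s(x) + s(z), select = TRUE, method = "REML")  # terms can now be shrunk completely to zero
summary(b3)  # terms penalized out of the model show effective degrees of freedom near zero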

An alternative is to use traditional stepwise regression methods for model selection. This is also the default method when smoothing parameters are not estimated as part of fitting, in which case each smooth term is usually allowed to take one of a small set of pre-defined smoothness levels within the model, and these are selected between in a stepwise fashion. Stepwise methods operate by iteratively comparing models with or without particular model terms (or possibly with different levels of term complexity), and require measures of model fit or term significance in order to decide which model to select at each stage. For example, we might use p-values for testing each term for equality to zero to decide on candidate terms for removal from a model, and we might compare Akaike information criterion (AIC) values for alternative models.

P-value computation for smooths is not straightforward, because of the effects of penalization, but approximations are available. [1] [11] AIC can be computed in two ways for GAMs. The marginal AIC is based on the marginal likelihood (see above) with the model coefficients integrated out. In this case the AIC penalty is based on the number of smoothing parameters (and any variance parameters) in the model. However, because of the well known fact that REML is not comparable between models with different fixed effects structures, we cannot usually use such an AIC to compare models with different smooth terms (since their un-penalized components act like fixed effects). Basing AIC on the marginal likelihood in which only the penalized effects are integrated out is possible (the number of un-penalized coefficients now gets added to the parameter count for the AIC penalty), but this version of the marginal likelihood suffers from the tendency to oversmooth that provided the original motivation for developing REML. Given these problems, GAMs are often compared using the conditional AIC, in which the model likelihood (not marginal likelihood) is used in the AIC, and the parameter count is taken as the effective degrees of freedom of the model. [1] [22]
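
In practice, for mgcv fits the AIC function reports an AIC of this conditional type based on the model's effective degrees of freedom, so that candidate models can be compared directly (a hedged usage sketch, reusing the objects b and b3 from above):

AIC(b, b3)  # conditional AIC comparison of the two fits; smaller is better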

Naive versions of the conditional AIC have been shown to be much too likely to select larger models in some circumstances, a difficulty attributable to neglect of smoothing parameter uncertainty when computing the effective degrees of freedom; [29] however, correcting the effective degrees of freedom for this problem restores reasonable performance. [3]

Caveats

Overfitting can be a problem with GAMs, [22] especially if there is un-modelled residual auto-correlation or un-modelled overdispersion. Cross-validation can be used to detect and/or reduce overfitting problems with GAMs (or other statistical methods), [30] and software often allows the level of penalization to be increased to force smoother fits. Estimating very large numbers of smoothing parameters is also likely to be statistically challenging, and there are known tendencies for prediction error criteria (GCV, AIC etc.) to occasionally undersmooth substantially, particularly at moderate sample sizes, with REML being somewhat less problematic in this regard. [31]
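
Software often exposes this control directly; in mgcv, for example, the gamma argument of gam inflates the effective degrees of freedom cost in the GCV/AIC criterion, forcing smoother fits (an illustrative sketch reusing the earlier example):

b4 = gam(y ~ s(x) + s(z), gamma = 1.5)  # each effective degree of freedom counted as 1.5 in the criterion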

Where appropriate, simpler models such as GLMs may be preferable to GAMs unless GAMs improve predictive ability substantially (in validation sets) for the application in question.

See also


References

  1. Hastie, T. J.; Tibshirani, R. J. (1990). Generalized Additive Models. Chapman & Hall/CRC. ISBN 978-0-412-34390-2.
  2. Rubinstein, Y. Dan; Hastie, Trevor (1997-08-14). "Discriminative vs informative learning". Proceedings of the Third International Conference on Knowledge Discovery and Data Mining. KDD'97. Newport Beach, CA: AAAI Press: 49–53.
  3. Wood, S. N.; Pya, N.; Saefken, B. (2016). "Smoothing parameter and model selection for general smooth models (with discussion)". Journal of the American Statistical Association. 111 (516): 1548–1575. arXiv:1511.03864. doi:10.1080/01621459.2016.1180986. S2CID 54802107.
  4. Yee, Thomas (2015). Vector Generalized Linear and Additive Models. Springer. ISBN 978-1-4939-2817-0.
  5. Rigby, R.A.; Stasinopoulos, D.M. (2005). "Generalized additive models for location, scale and shape (with discussion)". Journal of the Royal Statistical Society, Series C. 54 (3): 507–554. doi: 10.1111/j.1467-9876.2005.00510.x .
  6. Wahba, Grace (1990). Spline Models for Observational Data. SIAM.
  7. Gu, C.; Wahba, G. (1991). "Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method" (PDF). SIAM Journal on Scientific and Statistical Computing. 12 (2): 383–398. doi:10.1137/0912021.
  8. Wood, S. N. (2000). "Modelling and smoothing parameter estimation with multiple quadratic penalties" (PDF). Journal of the Royal Statistical Society . Series B. 62 (2): 413–428. doi:10.1111/1467-9868.00240. S2CID   15500664.
  9. Fahrmeir, L.; Lang, S. (2001). "Bayesian Inference for Generalized Additive Mixed Models based on Markov Random Field Priors". Journal of the Royal Statistical Society, Series C. 50 (2): 201–220. CiteSeerX 10.1.1.304.8706. doi:10.1111/1467-9876.00229. S2CID 18074478.
  10. Kim, Y.J.; Gu, C. (2004). "Smoothing spline Gaussian regression: more scalable computation via efficient approximation". Journal of the Royal Statistical Society, Series B. 66 (2): 337–356. doi: 10.1046/j.1369-7412.2003.05316.x . S2CID   41334749.
  11. Wood, S. N. (2017). Generalized Additive Models: An Introduction with R (2nd ed.). Chapman & Hall/CRC. ISBN 978-1-58488-474-3.
  12. Ruppert, D.; Wand, M.P.; Carroll, R.J. (2003). Semiparametric Regression. Cambridge University Press.
  13. Rue, H.; Martino, Sara; Chopin, Nicolas (2009). "Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations (with discussion)". Journal of the Royal Statistical Society, Series B. 71 (2): 319–392. doi:10.1111/j.1467-9868.2008.00700.x.
  14. Schmid, M.; Hothorn, T. (2008). "Boosting additive models using component-wise P-splines". Computational Statistics and Data Analysis. 53 (2): 298–311. doi:10.1016/j.csda.2008.09.009.
  15. Mayr, A.; Fenske, N.; Hofner, B.; Kneib, T.; Schmid, M. (2012). "Generalized additive models for location, scale and shape for high dimensional data - a flexible approach based on boosting". Journal of the Royal Statistical Society, Series C. 61 (3): 403–427. doi:10.1111/j.1467-9876.2011.01033.x. S2CID   123646605.
  16. Lou, Yin; Caruana, Rich; Gehrke, Johannes (2012). "Intelligible models for classification and regression". Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '12. p. 150. doi:10.1145/2339530.2339556. ISBN   9781450314626. S2CID   7715182.
  17. Wahba, G. (1983). "Bayesian Confidence Intervals for the Cross Validated Smoothing Spline" (PDF). Journal of the Royal Statistical Society, Series B. 45: 133–150.
  18. Nychka, D. (1988). "Bayesian confidence intervals for smoothing splines". Journal of the American Statistical Association. 83 (404): 1134–1143. doi:10.1080/01621459.1988.10478711.
  19. Silverman, B.W. (1985). "Some Aspects of the Spline Smoothing Approach to Non-Parametric Regression Curve Fitting (with discussion)" (PDF). Journal of the Royal Statistical Society, Series B. 47: 1–53.
  20. Marra, G.; Wood, S.N. (2012). "Coverage properties of confidence intervals for generalized additive model components" (PDF). Scandinavian Journal of Statistics. 39: 53–74. doi:10.1111/j.1467-9469.2011.00760.x. S2CID   49393564.
  21. Wood, S.N. (2011). "Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models" (PDF). Journal of the Royal Statistical Society, Series B. 73: 3–36. doi:10.1111/j.1467-9868.2010.00749.x. S2CID 123001831.
  22. Wood, Simon N. (2008). "Fast stable direct fitting and smoothness selection for generalized additive models". Journal of the Royal Statistical Society, Series B. 70 (3): 495–518. arXiv:0709.3906. doi:10.1111/j.1467-9868.2007.00646.x. S2CID 17511583.
  23. Chambers, J.M.; Hastie, T. (1993). Statistical Models in S. Chapman and Hall.
  24. Nori, Harsha; Jenkins, Samuel; Koch, Paul; Caruana, Rich (2019). "InterpretML: A Unified Framework for Machine Learning Interpretability". arXiv: 1909.09223 [cs.LG].
  25. Gu, Chong (2013). Smoothing Spline ANOVA Models (2nd ed.). Springer.
  26. Umlauf, Nikolaus; Adler, Daniel; Kneib, Thomas; Lang, Stefan; Zeileis, Achim. "Structured Additive Regression Models: An R Interface to BayesX" (PDF). Journal of Statistical Software. 63 (21): 1–46.
  27. Augustin, N.H.; Sauleau, E-A; Wood, S.N. (2012). "On quantile quantile plots for generalized linear models" (PDF). Computational Statistics and Data Analysis. 56 (8): 2404–2409. doi:10.1016/j.csda.2012.01.026. S2CID   2960406.
  28. Marra, G.; Wood, S.N. (2011). "Practical Variable Selection for Generalized Additive Models". Computational Statistics and Data Analysis. 55 (7): 2372–2387. doi:10.1016/j.csda.2011.02.004.
  29. Greven, Sonja; Kneib, Thomas (2010). "On the behaviour of marginal and conditional AIC in linear mixed models". Biometrika. 97 (4): 773–789. doi:10.1093/biomet/asq042.
  30. Brian Junker (March 22, 2010). "Additive models and cross-validation" (PDF).
  31. Reiss, P.T.; Ogden, T.R. (2009). "Smoothing parameter selection for a class of semiparametric linear models". Journal of the Royal Statistical Society, Series B. 71 (2): 505–523. doi: 10.1111/j.1467-9868.2008.00695.x . S2CID   51945597.