
The **general linear model** or **general multivariate regression model** is a compact way of simultaneously writing several multiple linear regression models. In that sense it is not a separate statistical linear model. The various multiple linear regression models may be compactly written as^{ [1] }

**Y** = **XB** + **U**

where **Y** is a matrix of multivariate measurements (each column being a set of measurements on one of the dependent variables), **X** is a matrix of observations on independent variables, which might be a design matrix (each column being a set of observations on one of the independent variables), **B** is a matrix of parameters that are usually to be estimated, and **U** is a matrix containing errors (noise). The errors are usually assumed to be uncorrelated across measurements and to follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax the assumptions about **Y** and **U**.

The general linear model incorporates a number of different statistical models: ANOVA, ANCOVA, MANOVA, MANCOVA, ordinary linear regression, *t*-test and *F*-test. The general linear model is a generalization of multiple linear regression to the case of more than one dependent variable. If **Y**, **B**, and **U** were column vectors, the matrix equation above would represent multiple linear regression.
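
As a concrete sketch of the matrix form **Y** = **XB** + **U**, the coefficient matrix **B** can be estimated by ordinary least squares on synthetic data. All names and dimensions below are illustrative, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 100, 3, 2                 # observations, predictors, dependent variables

# Design matrix X (intercept column plus p predictors), true B, and Y = XB + U
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
B_true = rng.normal(size=(p + 1, m))
U = 0.1 * rng.normal(size=(n, m))   # error matrix (noise)
Y = X @ B_true + U

# Least-squares estimate of B: solves the normal equations (X'X) B = X'Y
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

With `m = 1` the response `Y` collapses to a column vector and the same call reduces to ordinary multiple linear regression, mirroring the remark above.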

Hypothesis tests with the general linear model can be made in two ways: multivariate or as several independent univariate tests. In multivariate tests the columns of **Y** are tested together, whereas in univariate tests the columns of **Y** are tested independently, i.e., as multiple univariate tests with the same design matrix.

Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is

*Y*_{i} = *β*_{0} + *β*_{1}*X*_{i1} + *β*_{2}*X*_{i2} + ... + *β*_{p}*X*_{ip} + *ε*_{i}

for each observation *i* = 1, ... , *n*.

In the formula above we consider *n* observations of one dependent variable and *p* independent variables. Thus, *Y*_{i} is the *i*^{th} observation of the dependent variable, *X*_{ij} is *i*^{th} observation of the *j*^{th} independent variable, *j* = 1, 2, ..., *p*. The values *β*_{j} represent parameters to be estimated, and *ε*_{i} is the *i*^{th} independent identically distributed normal error.
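
A minimal numeric illustration of this formula, generating data from a made-up model with two predictors and recovering the *β* values by least squares (coefficients and names are ours, chosen for the sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
eps = 0.2 * rng.normal(size=n)           # i.i.d. normal errors
Y = 1.0 + 2.0 * X1 - 1.0 * X2 + eps      # beta_0 = 1, beta_1 = 2, beta_2 = -1

# Stack an intercept column with the predictors and solve for the betas
design = np.column_stack([np.ones(n), X1, X2])
beta_hat, *_ = np.linalg.lstsq(design, Y, rcond=None)
```

The fitted `beta_hat` recovers the generating coefficients up to sampling noise.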

In the more general multivariate linear regression, there is one equation of the above form for each of *m* > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other:

*Y*_{ij} = *β*_{0j} + *β*_{1j}*X*_{i1} + *β*_{2j}*X*_{i2} + ... + *β*_{pj}*X*_{ip} + *ε*_{ij}

for all observations indexed as *i* = 1, ... , *n* and for all dependent variables indexed as *j* = 1, ... , *m*.

Note that, since each dependent variable has its own set of regression parameters to be fitted, from a computational point of view the general multivariate regression is simply a sequence of standard multiple linear regressions using the same explanatory variables.
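
This equivalence is easy to demonstrate numerically: fitting all columns of **Y** jointly gives the same coefficients as running one standard multiple regression per column (a sketch with arbitrary synthetic data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m = 50, 2, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Y = rng.normal(size=(n, m))

# Joint multivariate fit: one lstsq call with an n-by-m response matrix
B_joint, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Column-by-column fit: m separate multiple linear regressions on the same X
B_cols = np.column_stack(
    [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(m)]
)

print(np.allclose(B_joint, B_cols))   # True: identical coefficient estimates
```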

The general linear model and the generalized linear model (GLM) ^{ [2] }^{ [3] } are two commonly used families of statistical methods to relate some number of continuous and/or categorical predictors to a single outcome variable.

The main difference between the two approaches is that the general linear model strictly assumes that the residuals will follow a conditionally normal distribution,^{ [4] } while the GLM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals.^{ [2] } Of note, the general linear model is a special case of the GLM in which the distribution of the residuals follows a conditionally normal distribution.

The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLM family. Commonly used models in the GLM family include binary logistic regression ^{ [5] } for binary or dichotomous outcomes, Poisson regression ^{ [6] } for count outcomes, and linear regression for continuous, normally distributed outcomes. This means that GLM may be spoken of as a general family of statistical models or as specific models for specific outcome types.
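
To make the GLM family concrete, here is a sketch of binary logistic regression fitted by iteratively reweighted least squares (IRLS), the classic GLM fitting algorithm. This is an illustrative implementation on synthetic data, not a production routine; the function name and coefficients are ours:

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25):
    """Fit binary logistic regression (a GLM with logit link) by IRLS. Sketch only."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                      # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))     # mean via the inverse logit link
        W = mu * (1.0 - mu)                 # variance-function weights
        z = eta + (y - mu) / W              # working response
        WX = X * W[:, None]
        # Weighted least-squares update: solve (X'WX) beta = X'Wz
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

rng = np.random.default_rng(2)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.5])
prob = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.random(n) < prob).astype(float)    # dichotomous outcome

beta_hat = fit_logistic_irls(X, y)
```

Swapping the link and variance function (e.g. log link with Poisson variance) turns the same loop into Poisson regression for count outcomes, which is the sense in which the GLM is one family of models.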

|  | General linear model | Generalized linear model |
|---|---|---|
| Typical estimation method | Least squares, best linear unbiased prediction | Maximum likelihood or Bayesian |
| Examples | ANOVA, ANCOVA, linear regression | linear regression, logistic regression, Poisson regression, gamma regression,^{ [7] } general linear model |
| Extensions and related methods | MANOVA, MANCOVA, linear mixed model | generalized linear mixed model (GLMM), generalized estimating equations (GEE) |
| R package and function | lm() in stats package (base R) | glm() in stats package (base R) |
| Matlab function | mvregress() | glmfit() |
| SAS procedures | PROC GLM, PROC REG | PROC GENMOD, PROC LOGISTIC (for binary & ordered or unordered categorical outcomes) |
| Stata command | regress | glm |
| SPSS command | regression, glm | genlin, logistic |
| Wolfram Language & Mathematica function | LinearModelFit[]^{ [8] } | GeneralizedLinearModelFit[]^{ [9] } |
| EViews command | ls^{ [10] } | glm^{ [11] } |

An application of the general linear model appears in the analysis of multiple brain scans in scientific experiments, where `Y` contains data from brain scanners and `X` contains experimental design variables and confounds. It is usually tested in a univariate way (usually referred to as *mass-univariate* in this setting) and is often called statistical parametric mapping.^{ [12] }
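
A mass-univariate analysis in this spirit fits the same design matrix independently to every voxel's time series and computes one t-statistic per voxel. The sketch below uses synthetic data and simplified notation, not the actual SPM implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_scans, n_voxels = 120, 500

# Hypothetical design: intercept plus one experimental regressor
design = np.column_stack([np.ones(n_scans), rng.normal(size=n_scans)])
Y = rng.normal(size=(n_scans, n_voxels))
Y[:, :50] += 0.8 * design[:, 1:2]           # first 50 voxels respond to the task

# One least-squares fit per voxel, done in a single vectorized call
B, *_ = np.linalg.lstsq(design, Y, rcond=None)
resid = Y - design @ B
dof = n_scans - design.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof     # per-voxel error variance
XtX_inv = np.linalg.inv(design.T @ design)
se = np.sqrt(sigma2 * XtX_inv[1, 1])        # per-voxel SE of the task effect
t = B[1] / se                               # one t-statistic per voxel
```

Thresholding `t` across voxels (with a multiple-comparisons correction) is what produces a statistical parametric map.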

1. K. V. Mardia, J. T. Kent and J. M. Bibby (1979). *Multivariate Analysis*. Academic Press. ISBN 0-12-471252-5.
2. McCullagh, P.; Nelder, J. A. (1989). "An outline of generalized linear models". *Generalized Linear Models*. Springer US. pp. 21–47. doi:10.1007/978-1-4899-3242-6_2. ISBN 9780412317606.
3. Fox, J. (2015). *Applied Regression Analysis and Generalized Linear Models*. Sage Publications.
4. Cohen, J.; Cohen, P.; West, S. G.; Aiken, L. S. (2003). *Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences*.
5. Hosmer Jr, D. W.; Lemeshow, S.; Sturdivant, R. X. (2013). *Applied Logistic Regression* (Vol. 398). John Wiley & Sons.
6. Gardner, W.; Mulvey, E. P.; Shaw, E. C. (1995). "Regression analyses of counts and rates: Poisson, overdispersed Poisson, and negative binomial models". *Psychological Bulletin*. **118**(3): 392–404. doi:10.1037/0033-2909.118.3.392.
7. McCullagh, Peter; Nelder, John (1989). *Generalized Linear Models, Second Edition*. Boca Raton: Chapman and Hall/CRC. ISBN 978-0-412-31760-6.
8. LinearModelFit, Wolfram Language Documentation Center.
9. GeneralizedLinearModelFit, Wolfram Language Documentation Center.
10. ls, EViews Help.
11. glm, EViews Help.
12. K. J. Friston; A. P. Holmes; K. J. Worsley; J.-B. Poline; C. D. Frith; R. S. J. Frackowiak (1995). "Statistical Parametric Maps in functional imaging: A general linear approach". *Human Brain Mapping*. **2**(4): 189–210. doi:10.1002/hbm.460020402.



This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.