Poisson regression

In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. [1] Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables.

Negative binomial regression is a popular generalization of Poisson regression because it loosens the highly restrictive assumption that the variance is equal to the mean made by the Poisson model. The traditional negative binomial regression model is based on the Poisson-gamma mixture distribution. This model is popular because it models the Poisson heterogeneity with a gamma distribution.

Poisson regression models are generalized linear models with the logarithm as the (canonical) link function, and the Poisson distribution function as the assumed probability distribution of the response.

Regression models

If $\mathbf{x} \in \mathbb{R}^{n}$ is a vector of independent variables, then the model takes the form

$$\log(\operatorname{E}(Y \mid \mathbf{x})) = \alpha + \boldsymbol{\beta}'\mathbf{x},$$

where $\alpha \in \mathbb{R}$ and $\boldsymbol{\beta} \in \mathbb{R}^{n}$. Sometimes this is written more compactly as

$$\log(\operatorname{E}(Y \mid \mathbf{x})) = \boldsymbol{\theta}'\mathbf{x},$$

where $\mathbf{x}$ is now an (n + 1)-dimensional vector consisting of the n independent variables concatenated to the number one. Here $\boldsymbol{\theta}$ is simply $\alpha$ concatenated to $\boldsymbol{\beta}$.

Thus, when given a Poisson regression model $\boldsymbol{\theta}$ and an input vector $\mathbf{x}$, the predicted mean of the associated Poisson distribution is given by

$$\operatorname{E}(Y \mid \mathbf{x}) = e^{\boldsymbol{\theta}'\mathbf{x}}.$$

If $Y_i$ are independent observations with corresponding values $\mathbf{x}_i$ of the predictor variables, then $\boldsymbol{\theta}$ can be estimated by maximum likelihood. The maximum-likelihood estimates lack a closed-form expression and must be found by numerical methods. The probability surface for maximum-likelihood Poisson regression is always concave, making Newton–Raphson or other gradient-based methods appropriate estimation techniques.
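In practice these estimates are usually obtained with standard GLM software rather than coded by hand. A minimal sketch in R, using simulated data (the variable names and true parameter values are purely illustrative):

# Simulate a predictor and Poisson counts with true intercept 0.5 and slope 0.3
set.seed(1)
x <- rnorm(200)
y <- rpois(200, lambda = exp(0.5 + 0.3 * x))

# Fit the Poisson regression by maximum likelihood (IRLS) using the log link
fit <- glm(y ~ x, family = poisson(link = "log"))
summary(fit)$coefficients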

Interpretation of coefficients

Suppose we have a model with a single predictor, that is, $n = 1$:

$$\log(\operatorname{E}(Y \mid x)) = \alpha + \beta x.$$

Suppose we compute the predicted values at point $(Y_1, x_1)$ and $(Y_2, x_2)$:

$$\log(\operatorname{E}(Y_1 \mid x_1)) = \alpha + \beta x_1, \qquad \log(\operatorname{E}(Y_2 \mid x_2)) = \alpha + \beta x_2.$$

By subtracting the first from the second:

$$\log(\operatorname{E}(Y_2 \mid x_2)) - \log(\operatorname{E}(Y_1 \mid x_1)) = \beta (x_2 - x_1).$$

Suppose now that $x_2 = x_1 + 1$. We obtain:

$$\log(\operatorname{E}(Y_2 \mid x_2)) - \log(\operatorname{E}(Y_1 \mid x_1)) = \beta.$$

So the coefficient $\beta$ of the model is to be interpreted as the increase in the logarithm of the expected count of the outcome variable when the independent variable increases by 1.

By applying the rules of logarithms:

$$\frac{\operatorname{E}(Y_2 \mid x_2)}{\operatorname{E}(Y_1 \mid x_1)} = e^{\beta}, \qquad \operatorname{E}(Y_2 \mid x_2) = e^{\beta}\,\operatorname{E}(Y_1 \mid x_1).$$

That is, when the independent variable increases by 1, the expected count of the outcome variable is multiplied by the exponentiated coefficient $e^{\beta}$.

The exponentiated coefficient $e^{\beta}$ is also called the incidence rate ratio.
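Continuing the illustrative R sketch above (fit is the hypothetical model object from that example), incidence rate ratios and approximate confidence intervals are obtained by exponentiation:

# Incidence rate ratios
exp(coef(fit))
# Approximate 95% Wald confidence intervals on the rate-ratio scale
exp(confint.default(fit))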

Average partial effect

Often, the object of interest is the average partial effect or average marginal effect $\partial \operatorname{E}(y \mid \mathbf{x})/\partial x$, which is interpreted as the change in the outcome $y$ for a one-unit change in the independent variable $x$. The partial effect in the Poisson model for a continuous $x$ can be shown to be: [2]

$$\frac{\partial \operatorname{E}(y \mid \mathbf{x})}{\partial x} = \exp(\boldsymbol{\theta}'\mathbf{x})\,\boldsymbol{\theta}.$$

This can be estimated using the coefficient estimates $\hat{\boldsymbol{\theta}}$ from the Poisson model together with the observed values of $\mathbf{x}_i$, by averaging $\exp(\hat{\boldsymbol{\theta}}'\mathbf{x}_i)\,\hat{\boldsymbol{\theta}}$ over the sample.
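As a sketch of this calculation in R, again using the illustrative fit object from above with its single predictor x, the average partial effect is the mean of the fitted means multiplied by the coefficient:

# Average partial effect of x: mean of exp(theta'x_i) times the coefficient of x
ape <- mean(fitted(fit)) * coef(fit)["x"]
ape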

Maximum likelihood-based parameter estimation

Given a set of parameters $\boldsymbol{\theta}$ and an input vector $\mathbf{x}$, the mean of the predicted Poisson distribution, as stated above, is given by

$$\lambda := \operatorname{E}(Y \mid \mathbf{x}) = e^{\boldsymbol{\theta}'\mathbf{x}},$$

and thus the Poisson distribution's probability mass function is given by

$$p(y \mid \mathbf{x};\, \boldsymbol{\theta}) = \frac{\lambda^{y}}{y!}\, e^{-\lambda} = \frac{e^{y\,\boldsymbol{\theta}'\mathbf{x}}\, e^{-e^{\boldsymbol{\theta}'\mathbf{x}}}}{y!}.$$

Now suppose we are given a data set consisting of m vectors $\mathbf{x}_i \in \mathbb{R}^{n+1}$, $i = 1, \ldots, m$, along with a set of m values $y_1, \ldots, y_m \in \mathbb{N}$. Then, for a given set of parameters $\boldsymbol{\theta}$, the probability of attaining this particular set of data is given by

$$p(y_1, \ldots, y_m \mid \mathbf{x}_1, \ldots, \mathbf{x}_m;\, \boldsymbol{\theta}) = \prod_{i=1}^{m} \frac{e^{y_i\,\boldsymbol{\theta}'\mathbf{x}_i}\, e^{-e^{\boldsymbol{\theta}'\mathbf{x}_i}}}{y_i!}.$$

By the method of maximum likelihood, we wish to find the set of parameters $\boldsymbol{\theta}$ that makes this probability as large as possible. To do this, the equation is first rewritten as a likelihood function in terms of $\boldsymbol{\theta}$:

$$L(\boldsymbol{\theta} \mid X, Y) = \prod_{i=1}^{m} \frac{e^{y_i\,\boldsymbol{\theta}'\mathbf{x}_i}\, e^{-e^{\boldsymbol{\theta}'\mathbf{x}_i}}}{y_i!}.$$

Note that the expression on the right-hand side has not actually changed. A formula in this form is typically difficult to work with; instead, one uses the log-likelihood:

$$\ell(\boldsymbol{\theta} \mid X, Y) = \log L(\boldsymbol{\theta} \mid X, Y) = \sum_{i=1}^{m} \left( y_i\,\boldsymbol{\theta}'\mathbf{x}_i - e^{\boldsymbol{\theta}'\mathbf{x}_i} - \log(y_i!) \right).$$

Notice that the parameters $\boldsymbol{\theta}$ appear only in the first two terms of each summand. Therefore, given that we are interested only in finding the best value for $\boldsymbol{\theta}$, we may drop the $y_i!$ and simply write

$$\ell(\boldsymbol{\theta} \mid X, Y) = \sum_{i=1}^{m} \left( y_i\,\boldsymbol{\theta}'\mathbf{x}_i - e^{\boldsymbol{\theta}'\mathbf{x}_i} \right).$$

To find a maximum, we need to solve the equation $\partial \ell(\boldsymbol{\theta} \mid X, Y)/\partial \boldsymbol{\theta} = 0$, which has no closed-form solution. However, the negative log-likelihood, $-\ell(\boldsymbol{\theta} \mid X, Y)$, is a convex function, and so standard convex optimization techniques such as gradient descent can be applied to find the optimal value of $\boldsymbol{\theta}$.
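The same estimates can be recovered by minimizing this negative log-likelihood with a general-purpose optimizer. A minimal R sketch, reusing the simulated x and y from the earlier example and building a design matrix with a leading column of ones to match the compact notation above:

# Negative log-likelihood, dropping the log(y!) constant as derived above
negloglik <- function(theta, X, y) sum(exp(X %*% theta) - y * (X %*% theta))

X <- cbind(1, x)                                    # intercept column plus predictor
opt <- optim(par = c(0, 0), fn = negloglik, X = X, y = y, method = "BFGS")
opt$par                                             # should be close to coef(fit)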

Poisson regression in practice

Poisson regression may be appropriate when the dependent variable is a count of events, such as the arrival of telephone calls at a call centre. [3] The events must be independent in the sense that the arrival of one call will not make another more or less likely, but the probability per unit time of events is understood to be related to covariates such as time of day.

"Exposure" and offset

Poisson regression may also be appropriate for rate data, where the rate is a count of events divided by some measure of that unit's exposure (a particular unit of observation). [4] For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the number of species per unit area. Demographers may model death rates in geographic areas as the count of deaths divided by person-years. More generally, event rates can be calculated as events per unit time, which allows the observation window to vary for each unit. In these examples, exposure is respectively unit area, person-years and unit time. In Poisson regression this is handled as an offset. If the rate is count/exposure, multiplying both sides of the equation by exposure moves it to the right side of the equation. When both sides of the equation are then logged, the final model contains log(exposure) as a term that is added to the regression coefficients. This logged variable, log(exposure), is called the offset variable and enters on the right-hand side of the equation with a parameter estimate (for log(exposure)) constrained to 1.

$$\log(\operatorname{E}(Y \mid \mathbf{x})) = \log(\text{exposure}) + \boldsymbol{\theta}'\mathbf{x},$$

which implies

$$\log(\operatorname{E}(Y \mid \mathbf{x})) - \log(\text{exposure}) = \log\!\left(\frac{\operatorname{E}(Y \mid \mathbf{x})}{\text{exposure}}\right) = \boldsymbol{\theta}'\mathbf{x}.$$

Offset in the case of a GLM in R can be achieved using the offset() function:

glm(y ~ offset(log(exposure)) + x, family = poisson(link = "log"))
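Equivalently, the offset argument of glm() can be used (a sketch, assuming a data frame dat with columns y, x and exposure):

glm(y ~ x, offset = log(exposure), family = poisson(link = "log"), data = dat)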

Overdispersion and zero inflation

A characteristic of the Poisson distribution is that its mean is equal to its variance. In certain circumstances, it will be found that the observed variance is greater than the mean; this is known as overdispersion and indicates that the model is not appropriate. A common reason is the omission of relevant explanatory variables, or dependent observations. Under some circumstances, the problem of overdispersion can be solved by using quasi-likelihood estimation or a negative binomial distribution instead. [5] [6]

Ver Hoef and Boveng described the difference between quasi-Poisson (also called overdispersion with quasi-likelihood) and negative binomial (equivalent to gamma-Poisson) as follows: If E(Y) = μ, the quasi-Poisson model assumes var(Y) = θμ while the gamma-Poisson assumes var(Y) = μ(1 + κμ), where θ is the quasi-Poisson overdispersion parameter, and κ is the shape parameter of the negative binomial distribution. For both models, parameters are estimated using iteratively reweighted least squares. For quasi-Poisson, the weights are μ/θ. For negative binomial, the weights are μ/(1 + κμ). With large μ and substantial extra-Poisson variation, the negative binomial weights are capped at 1/κ. Ver Hoef and Boveng discussed an example where they selected between the two by plotting mean squared residuals vs. the mean. [7]
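In R, a rough check for overdispersion and the two common remedies can be sketched as follows, continuing with the illustrative y, x and fit from above (the MASS package is assumed to be installed for the negative binomial fit):

# Ratio of the Pearson chi-squared statistic to its residual degrees of freedom;
# values well above 1 suggest overdispersion
sum(residuals(fit, type = "pearson")^2) / df.residual(fit)

# Quasi-Poisson: same point estimates, standard errors scaled by the estimated dispersion
glm(y ~ x, family = quasipoisson(link = "log"))

# Negative binomial (gamma-Poisson) regression
MASS::glm.nb(y ~ x)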

Another common problem with Poisson regression is excess zeros: if there are two processes at work, one determining whether there are zero events or any events, and a Poisson process determining how many events there are, there will be more zeros than a Poisson regression would predict. An example would be the distribution of cigarettes smoked in an hour by members of a group where some individuals are non-smokers.

Other generalized linear models such as the negative binomial model or zero-inflated model may function better in these cases.
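Zero-inflated models are available in contributed R packages; for example, a minimal sketch using zeroinfl() from the pscl package (assumed installed), where dat is a hypothetical data frame, x predicts the counts and z is an illustrative covariate for the excess zeros:

# Zero-inflated Poisson: count model y ~ x, zero-inflation model ~ z
pscl::zeroinfl(y ~ x | z, data = dat, dist = "poisson")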

Conversely, underdispersion, where the observed variance is smaller than the mean, may also pose an issue for parameter estimation. [8]

Use in survival analysis

Poisson regression creates proportional hazards models, one class of survival analysis: see proportional hazards models for descriptions of Cox models.

Extensions

Regularized Poisson regression

When estimating the parameters for Poisson regression, one typically tries to find values for $\boldsymbol{\theta}$ that maximize the likelihood of an expression of the form

$$\prod_{i=1}^{m} p(y_i;\, e^{\boldsymbol{\theta}'\mathbf{x}_i}),$$

where m is the number of examples in the data set, and $p(y;\, \lambda)$ is the probability mass function of the Poisson distribution with the mean set to $\lambda$. Regularization can be added to this optimization problem by instead maximizing [9]

$$\log \prod_{i=1}^{m} p(y_i;\, e^{\boldsymbol{\theta}'\mathbf{x}_i}) - \lambda \lVert \boldsymbol{\theta} \rVert_2^{2},$$

for some positive constant $\lambda$. This technique, similar to ridge regression, can reduce overfitting.
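Ridge-type penalties for Poisson regression are implemented, for example, in the glmnet package. A minimal sketch, assuming glmnet is installed and reusing the simulated x and y from the earlier examples (a quadratic term is added only because glmnet expects at least two predictor columns):

library(glmnet)
Xmat <- cbind(x, x2 = x^2)                                   # predictor matrix without an intercept column
ridge_fit <- glmnet(Xmat, y, family = "poisson", alpha = 0)  # alpha = 0 gives the ridge (L2) penalty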

References

  1. Nelder, J. A. (1974). "Log Linear Models for Contingency Tables: A Generalization of Classical Least Squares". Journal of the Royal Statistical Society, Series C (Applied Statistics). 23 (3): 323–329. doi:10.2307/2347125. JSTOR 2347125.
  2. Wooldridge, Jeffrey (2010). Econometric Analysis of Cross Section and Panel Data (2nd ed.). Cambridge, Massachusetts: The MIT Press. p. 726.
  3. Greene, William H. (2003). Econometric Analysis (5th ed.). Prentice-Hall. pp. 740–752. ISBN 978-0130661890.
  4. Frome, Edward L. (1983). "The Analysis of Rates Using Poisson Regression Models". Biometrics. 39 (3): 665–674. doi:10.2307/2531094. JSTOR 2531094.
  5. Paternoster, R.; Brame, R. (1997). "Multiple routes to delinquency? A test of developmental and general theories of crime". Criminology. 35: 49–84. doi:10.1111/j.1745-9125.1997.tb00870.x. eISSN 1745-9125. ISSN 0011-1384.
  6. Berk, R.; MacDonald, J. (2008). "Overdispersion and Poisson regression". Journal of Quantitative Criminology. 24 (3): 269–284. doi:10.1007/s10940-008-9048-4. S2CID 121273486.
  7. Ver Hoef, Jay M.; Boveng, Peter L. (2007). "Quasi-Poisson vs. Negative Binomial Regression: How should we model overdispersed count data?". Ecology. 88 (11): 2766–2772. Bibcode:2007Ecol...88.2766V. doi:10.1890/07-0043.1. PMID 18051645. Retrieved 2016-09-01.
  8. Schwarzenegger, Rafael; Quigley, John; Walls, Lesley (23 November 2021). "Is eliciting dependency worth the effort? A study for the multivariate Poisson-Gamma probability model". Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability. 237 (5): 5. doi:10.1177/1748006X211059417.
  9. Perperoglou, Aris (2011). "Fitting survival data with penalized Poisson regression". Statistical Methods & Applications. 20 (4). Springer Nature: 451–462. doi:10.1007/s10260-011-0172-1. ISSN 1618-2510. S2CID 10883925.

Further reading