Wald test

In statistics, the Wald test (named after Abraham Wald) assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate. [1] [2] Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite sample distributions of Wald tests are generally unknown, [3] the test statistic has an asymptotic χ2-distribution under the null hypothesis, a fact that can be used to determine statistical significance. [4]


Together with the Lagrange multiplier test and the likelihood-ratio test, the Wald test is one of three classical approaches to hypothesis testing. An advantage of the Wald test over the other two is that it only requires the estimation of the unrestricted model, which lowers the computational burden as compared to the likelihood-ratio test. However, a major disadvantage is that (in finite samples) it is not invariant to changes in the representation of the null hypothesis; in other words, algebraically equivalent expressions of a non-linear parameter restriction can lead to different values of the test statistic. [5] [6] That is because the Wald statistic is derived from a Taylor expansion, [7] and different ways of writing equivalent nonlinear expressions lead to nontrivial differences in the corresponding Taylor coefficients. [8] Another aberration, known as the Hauck–Donner effect, [9] can occur in binomial models when the estimated (unconstrained) parameter is close to the boundary of the parameter space—for instance a fitted probability being extremely close to zero or one—which results in the Wald statistic no longer increasing monotonically in the distance between the unconstrained and constrained parameter. [10] [11]

Mathematical details

Under the Wald test, the estimate θ̂ that was found as the maximizing argument of the unconstrained likelihood function is compared with a hypothesized value θ₀. In particular, the squared difference (θ̂ − θ₀)² is weighted by the curvature of the log-likelihood function.

Test on a single parameter

If the hypothesis involves only a single parameter restriction, then the Wald statistic takes the following form:

W = (θ̂ − θ₀)² / var(θ̂)

which under the null hypothesis follows an asymptotic χ2-distribution with one degree of freedom. The square root of the single-restriction Wald statistic can be understood as a (pseudo) t-ratio

√W = (θ̂ − θ₀) / se(θ̂)

that is, however, not actually t-distributed except for the special case of linear regression with normally distributed errors. [12] In general, it follows an asymptotic z distribution, [13] where se(θ̂) is the standard error of the maximum likelihood estimate (MLE), the square root of the variance. There are several ways to consistently estimate the variance matrix, which in finite samples leads to alternative estimates of standard errors and associated test statistics and p-values. [14]
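As a minimal sketch, the single-restriction statistic can be computed directly from an estimate and its standard error; the numbers below are illustrative, not taken from any dataset:

```python
import math

def wald_statistic(theta_hat, theta0, se):
    """Squared (pseudo) t-ratio; asymptotically chi-squared with 1 df under H0."""
    return ((theta_hat - theta0) / se) ** 2

# Illustrative values: estimate 0.62, hypothesized value 0.5, standard error 0.05
w = wald_statistic(0.62, 0.5, 0.05)
z = math.sqrt(w)  # the (pseudo) t-ratio, asymptotically standard normal
```

Comparing `w` against a χ² quantile with one degree of freedom, or `z` against a standard normal quantile, gives the same decision for a two-sided test.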

Test(s) on multiple parameters

The Wald test can be used to test a single hypothesis on multiple parameters, as well as to test jointly multiple hypotheses on single/multiple parameters. Let θ̂_n be our sample estimator of P parameters (i.e., θ̂_n is a P × 1 vector), which is supposed to follow asymptotically a normal distribution with covariance matrix V, √n(θ̂_n − θ) → N(0, V). The test of Q hypotheses on the P parameters is expressed with a Q × P matrix R:

H₀: Rθ = r
H₁: Rθ ≠ r

The test statistic is:

W = (Rθ̂_n − r)ᵀ [R(V̂_n/n)Rᵀ]⁻¹ (Rθ̂_n − r)

which under H₀ is asymptotically χ²-distributed with Q degrees of freedom, where V̂_n is an estimator of the covariance matrix. [15]


Suppose √n(θ̂_n − θ) → N(0, V) in distribution. Then, by Slutsky's theorem and by the properties of the normal distribution, multiplying by R has distribution:

√n(Rθ̂_n − r) = √n R(θ̂_n − θ) → N(0, RVRᵀ)

Recalling that a quadratic form of a normal distribution has a chi-squared distribution:

n(Rθ̂_n − r)ᵀ [RVRᵀ]⁻¹ (Rθ̂_n − r) → χ²_Q

Rearranging n finally gives:

(Rθ̂_n − r)ᵀ [R(V/n)Rᵀ]⁻¹ (Rθ̂_n − r) → χ²_Q

What if the covariance matrix is not known a priori and needs to be estimated from the data? If we have a consistent estimator V̂_n of V, then by the independence of the covariance estimator and the equation above, we have:

(Rθ̂_n − r)ᵀ [R(V̂_n/n)Rᵀ]⁻¹ (Rθ̂_n − r) → χ²_Q
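The derivation above can be sketched numerically. Everything below (the estimates, the restriction matrix, the covariance, and the sample size) is an illustrative assumption, not a worked example from the text:

```python
import numpy as np

def wald_joint(theta_hat, R, r, V_hat, n):
    """Joint Wald statistic W = (R th - r)' [R (V/n) R']^{-1} (R th - r);
    asymptotically chi-squared with Q = rows(R) degrees of freedom under H0."""
    d = R @ theta_hat - r             # Q-vector of restriction residuals
    S = R @ (V_hat / n) @ R.T         # estimated covariance of R theta_hat
    return float(d @ np.linalg.solve(S, d))

# Illustrative joint test of Q = 2 restrictions on P = 3 parameters
theta_hat = np.array([1.1, 0.4, -0.2])
R = np.array([[1.0, -1.0, 0.0],      # H0: theta1 - theta2 = 0.7
              [0.0,  0.0, 1.0]])     #     theta3 = 0
r = np.array([0.7, 0.0])
V_hat = np.diag([2.0, 1.0, 0.5])     # assumed consistent covariance estimate
W = wald_joint(theta_hat, R, r, V_hat, n=100)
```

Solving the linear system rather than explicitly inverting `S` is the standard numerically stable way to evaluate the quadratic form.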

Nonlinear hypothesis

In the standard form, the Wald test is used to test linear hypotheses that can be represented by a single matrix R. If one wishes to test a non-linear hypothesis of the form:

H₀: c(θ) = 0
H₁: c(θ) ≠ 0

the test statistic becomes:

W = c(θ̂_n)ᵀ [c′(θ̂_n)(V̂_n/n)c′(θ̂_n)ᵀ]⁻¹ c(θ̂_n)

where c′(θ̂_n) is the derivative of c evaluated at the sample estimator. This result is obtained using the delta method, which uses a first-order approximation of the variance.
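A minimal sketch of the delta-method statistic for an assumed restriction c(θ) = θ₁θ₂ − 1 = 0, with illustrative estimates and covariance:

```python
import numpy as np

# Illustrative estimates and covariance (assumptions, not from the text)
theta_hat = np.array([2.0, 0.6])
V_hat = np.array([[0.04, 0.00],
                  [0.00, 0.01]])     # estimated covariance of theta_hat

c = np.array([theta_hat[0] * theta_hat[1] - 1.0])  # c(theta) = theta1*theta2 - 1
C = np.array([[theta_hat[1], theta_hat[0]]])       # Jacobian dc/dtheta = [theta2, theta1]
S = C @ V_hat @ C.T                                # delta-method variance of c(theta_hat)
W = float(c @ np.linalg.solve(S, c))               # asymptotically chi2 with 1 df under H0
```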

Non-invariance to re-parameterisations

The fact that one uses an approximation of the variance has the drawback that the Wald statistic is not invariant to a non-linear transformation/reparametrisation of the hypothesis: it can give different answers to the same question, depending on how the question is phrased. [16] [5] For example, asking whether R = 1 is the same as asking whether log R = 0; but the Wald statistic for R = 1 is not the same as the Wald statistic for log R = 0 (because there is in general no neat relationship between the standard errors of R and log R, so it needs to be approximated). [17]
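This can be made concrete with a small numeric illustration; the estimate and standard error below are assumptions chosen for the sketch. Writing the restriction as θ = 1 or as log θ = 0 yields different statistics, because the delta method assigns log θ̂ the approximate standard error se/θ̂:

```python
import math

theta_hat, se = 1.5, 0.4  # illustrative estimate and standard error

w_level = ((theta_hat - 1.0) / se) ** 2                  # Wald statistic for theta = 1
w_log = (math.log(theta_hat) / (se / theta_hat)) ** 2    # Wald statistic for log(theta) = 0
```

Here `w_level` and `w_log` test algebraically equivalent hypotheses yet take different values, so they can fall on opposite sides of a critical value in finite samples.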

Alternatives to the Wald test

There exist several alternatives to the Wald test, namely the likelihood-ratio test and the Lagrange multiplier test (also known as the score test). Robert F. Engle showed that these three tests (the Wald test, the likelihood-ratio test, and the Lagrange multiplier test) are asymptotically equivalent. [18] Although they are asymptotically equivalent, in finite samples they could disagree enough to lead to different conclusions.

There are several reasons to prefer the likelihood-ratio test or the Lagrange multiplier test to the Wald test, among them the Wald test's non-invariance to reparameterisation discussed above. [19] [20] [21]

Related Research Articles

In statistics, the likelihood function measures the goodness of fit of a statistical model to a sample of data for given values of the unknown parameters. It is formed from the joint probability distribution of the sample, but viewed and used as a function of the parameters only, thus treating the random variables as fixed at the observed values.

In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models based on the ratio of their likelihoods, specifically one found by maximization over the entire parameter space and another found after imposing some constraint. If the constraint is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero.

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.

In statistics, the score is the gradient of the log-likelihood function with respect to the parameter vector. Evaluated at a particular point of the parameter vector, the score indicates the steepness of the log-likelihood function and thereby the sensitivity to infinitesimal changes to the parameter values. If the log-likelihood function is continuous over the parameter space, the score will vanish at a local maximum or minimum; this fact is used in maximum likelihood estimation to find the parameter values that maximize the likelihood function.

In mathematical statistics, the Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information. In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior. The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher. The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.

Consistent estimator

In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one.

In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function—known as the score—evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite sample distributions of score tests are generally unknown, they have an asymptotic χ2-distribution under the null hypothesis as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.

In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable in the given dataset and those predicted by the linear function of the independent variable.

In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the data's distribution function may not be known, and therefore maximum likelihood estimation is not applicable.

Empirical distribution function

In statistics, an empirical distribution function is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.

In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator.

In statistics, the Breusch–Pagan test, developed in 1979 by Trevor Breusch and Adrian Pagan, is used to test for heteroskedasticity in a linear regression model. It was independently suggested with some extension by R. Dennis Cook and Sanford Weisberg in 1983. Derived from the Lagrange multiplier test principle, it tests whether the variance of the errors from a regression is dependent on the values of the independent variables. In that case, heteroskedasticity is present.

In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimators. The definition of M-estimators was motivated by robust statistics, which contributed new types of M-estimators. The statistical procedure of evaluating an M-estimator on a data set is called M-estimation. Forty-eight samples of robust M-estimators can be found in a recent review study.

Maximum spacing estimation

In statistics, maximum spacing estimation, or maximum product of spacing estimation (MPS), is a method for estimating the parameters of a univariate statistical model. The method requires maximization of the geometric mean of spacings in the data, which are the differences between the values of the cumulative distribution function at neighbouring data points.

In statistics and econometrics, extremum estimators are a wide class of estimators for parametric models that are calculated through maximization of a certain objective function, which depends on the data. The general theory of extremum estimators was developed by Amemiya (1985).

In statistics, asymptotic theory, or large sample theory, is a framework for assessing properties of estimators and statistical tests. Within this framework, it is often assumed that the sample size n may grow indefinitely; the properties of estimators and tests are then evaluated under the limit of n → ∞. In practice, a limit evaluation is considered to be approximately valid for large finite sample sizes too.

In statistics, local asymptotic normality is a property of a sequence of statistical models, which allows this sequence to be asymptotically approximated by a normal location model, after a rescaling of the parameter. An important example when the local asymptotic normality holds is in the case of iid sampling from a regular parametric model.

In statistics, Hodges' estimator, named for Joseph Hodges, is a famous counterexample of an estimator which is "superefficient", i.e. it attains smaller asymptotic variance than regular efficient estimators. The existence of such a counterexample is the reason for the introduction of the notion of regular estimators.

In statistics Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test.



  1. Fahrmeir, Ludwig; Kneib, Thomas; Lang, Stefan; Marx, Brian (2013). Regression : Models, Methods and Applications. Berlin: Springer. p. 663. ISBN   978-3-642-34332-2.
  2. Ward, Michael D.; Ahlquist, John S. (2018). Maximum Likelihood for Social Science : Strategies for Analysis. Cambridge University Press. p. 36. ISBN   978-1-316-63682-4.
  3. Martin, Vance; Hurn, Stan; Harris, David (2013). Econometric Modelling with Time Series: Specification, Estimation and Testing. Cambridge University Press. p. 138. ISBN   978-0-521-13981-6.
  4. Davidson, Russell; MacKinnon, James G. (1993). "The Method of Maximum Likelihood : Fundamental Concepts and Notation". Estimation and Inference in Econometrics. New York: Oxford University Press. p. 89. ISBN   0-19-506011-3.
  5. Gregory, Allan W.; Veall, Michael R. (1985). "Formulating Wald Tests of Nonlinear Restrictions". Econometrica. 53 (6): 1465–1468. JSTOR 1913221.
  6. Phillips, P. C. B.; Park, Joon Y. (1988). "On the Formulation of Wald Tests of Nonlinear Restrictions". Econometrica . 56 (5): 1065–1083. JSTOR   1911359.
  7. Hayashi, Fumio (2000). Econometrics. Princeton: Princeton University Press. pp. 489–491. ISBN 1-4008-2383-8.
  8. Lafontaine, Francine; White, Kenneth J. (1986). "Obtaining Any Wald Statistic You Want". Economics Letters . 21 (1): 35–40. doi:10.1016/0165-1765(86)90117-5.
  9. Hauck, Walter W., Jr.; Donner, Allan (1977). "Wald's Test as Applied to Hypotheses in Logit Analysis". Journal of the American Statistical Association . 72 (360a): 851–853. doi:10.1080/01621459.1977.10479969.
  10. King, Maxwell L.; Goh, Kim-Leng (2002). "Improvements to the Wald Test". Handbook of Applied Econometrics and Statistical Inference. New York: Marcel Dekker. pp. 251–276. ISBN   0-8247-0652-8.
  11. Yee, Thomas William (2020). "On the Hauck–Donner Effect in Wald Tests: Detection, Tipping Points, and Parameter Space Characterization". arXiv:2001.08431.
  12. Cameron, A. Colin; Trivedi, Pravin K. (2005). Microeconometrics : Methods and Applications. New York: Cambridge University Press. p. 137. ISBN   0-521-84805-9.
  13. Davidson, Russell; MacKinnon, James G. (1993). "The Method of Maximum Likelihood : Fundamental Concepts and Notation". Estimation and Inference in Econometrics. New York: Oxford University Press. p. 89. ISBN   0-19-506011-3.
  14. Martin, Vance; Hurn, Stan; Harris, David (2013). Econometric Modelling with Time Series : Specification, Estimation and Testing. New York: Cambridge University Press. p. 129. ISBN   978-0-521-13981-6.
  15. Harrell, Frank E., Jr. (2001). "Section 9.3.1". Regression modeling strategies. New York: Springer-Verlag. ISBN   0387952322.
  16. Fears, Thomas R.; Benichou, Jacques; Gail, Mitchell H. (1996). "A reminder of the fallibility of the Wald statistic". The American Statistician . 50 (3): 226–227. doi:10.1080/00031305.1996.10474384.
  17. Critchley, Frank; Marriott, Paul; Salmon, Mark (1996). "On the Differential Geometry of the Wald Test with Nonlinear Restrictions". Econometrica . 64 (5): 1213–1222. JSTOR   2171963.
  18. Engle, Robert F. (1983). "Wald, Likelihood Ratio, and Lagrange Multiplier Tests in Econometrics". In Intriligator, M. D.; Griliches, Z. (eds.). Handbook of Econometrics. II. Elsevier. pp. 796–801. ISBN   978-0-444-86185-6.
  19. Harrell, Frank E., Jr. (2001). "Section 9.3.3". Regression modeling strategies. New York: Springer-Verlag. ISBN   0387952322.
  20. Collett, David (1994). Modelling Survival Data in Medical Research. London: Chapman & Hall. ISBN   0412448807.
  21. Pawitan, Yudi (2001). In All Likelihood. New York: Oxford University Press. ISBN   0198507658.
  22. Agresti, Alan (2002). Categorical Data Analysis (2nd ed.). Wiley. p. 232. ISBN 0471360937.
