Studentized range

In statistics, the studentized range, denoted q, is the difference between the largest and smallest data values in a sample, divided by the sample standard deviation. It is named after William Sealy Gosset (who wrote under the pseudonym "Student") and was introduced by him in 1927. [1] The concept was later discussed by Newman (1939), [2] Keuls (1952), [3] and John Tukey in some unpublished notes. Its statistical distribution is the studentized range distribution, which is used for multiple comparison procedures, such as the single-step Tukey's range test, the Newman–Keuls method, and Duncan's step-down procedure, and for establishing confidence intervals that remain valid after data snooping has occurred. [4]

Description

The value of the studentized range, most often represented by the variable q, can be defined based on a random sample x1, ..., xn from the N(0, 1) distribution and another random variable s that is independent of all the xi, such that νs² has a χ² distribution with ν degrees of freedom. Then

q = \frac{\max\{x_1, \dots, x_n\} - \min\{x_1, \dots, x_n\}}{s}

has the studentized range distribution for n groups and ν degrees of freedom. In applications, the xi are typically the means of samples each of size m, s² is the pooled variance, and the degrees of freedom are ν = n(m − 1).
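
As an illustrative sketch (the author's own, not from the source), this definition can be checked by Monte Carlo simulation in Python, drawing the xi from N(0, 1) and s from an independent scaled chi-squared distribution:

    # Monte Carlo sketch of the studentized range definition (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, nu, reps = 3, 10, 100_000

    x = rng.standard_normal((reps, n))         # n iid N(0, 1) values per replicate
    s = np.sqrt(rng.chisquare(nu, reps) / nu)  # independent s with nu*s^2 ~ chi^2_nu
    q = (x.max(axis=1) - x.min(axis=1)) / s    # studentized range

    # The empirical 95th percentile should be near the tabulated value
    # q(0.05; 3, 10), which is approximately 3.88.
    print(np.quantile(q, 0.95))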

The critical value of q is based on three factors (a computational sketch follows this list):

  1. α (the probability of rejecting a true null hypothesis)
  2. n (the number of observations or groups)
  3. ν (the degrees of freedom used to estimate the sample variance)
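
As a hedged computational example, recent SciPy versions (1.7 and later) expose this distribution as scipy.stats.studentized_range, so the critical value can be computed directly instead of being read from a table:

    # Critical value of the studentized range via SciPy
    # (scipy.stats.studentized_range, available in SciPy >= 1.7).
    from scipy.stats import studentized_range

    alpha = 0.05  # probability of rejecting a true null hypothesis
    n = 3         # number of groups being compared
    nu = 10       # degrees of freedom of the variance estimate

    # q_crit satisfies P(q <= q_crit) = 1 - alpha under the null hypothesis
    q_crit = studentized_range.ppf(1 - alpha, n, nu)
    print(q_crit)  # approximately 3.88, matching standard q tables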

Distribution

If X1, ..., Xn are independent identically distributed random variables that are normally distributed, the probability distribution of their studentized range is what is usually called the studentized range distribution. Note that the definition of q does not depend on the expected value or the standard deviation of the distribution from which the sample is drawn, and therefore its probability distribution is the same regardless of those parameters.
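
A small sketch of this invariance (an illustration, not from the source): shifting or rescaling a sample changes its range and its sample standard deviation by the same factor, so the studentized range is unchanged.

    # Illustrative sketch: the studentized range of a sample is invariant
    # to shifting and rescaling, since the range and the sample standard
    # deviation change by the same factor.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(loc=5.0, scale=2.0, size=20)

    def studentized_range(sample):
        # (max - min) divided by the sample standard deviation
        return (sample.max() - sample.min()) / sample.std(ddof=1)

    print(studentized_range(x))              # some value q
    print(studentized_range(3.0 * x + 7.0))  # exactly the same value q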

Studentization

Generally, the term studentized means that the variable's scale was adjusted by dividing by an estimate of a population standard deviation (see also studentized residual). The fact that the standard deviation is a sample standard deviation rather than the population standard deviation, and thus varies from one random sample to the next, is essential to the definition and the distribution of the studentized data. The variability of the sample standard deviation adds uncertainty to the values calculated, which complicates the problem of finding the probability distribution of any studentized statistic.
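
A hedged simulation (the setup and numbers are illustrative assumptions) makes the effect concrete: standardizing sample means with the known σ yields a normal statistic, while studentizing with the sample s yields heavier tails, namely Student's t.

    # Sketch: replacing the true sigma by the sample s changes the null
    # distribution from N(0, 1) to Student's t with m - 1 degrees of freedom.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    m, reps = 5, 100_000
    samples = rng.standard_normal((reps, m))

    z = samples.mean(axis=1) * np.sqrt(m)  # uses the known sigma = 1
    t = samples.mean(axis=1) * np.sqrt(m) / samples.std(axis=1, ddof=1)  # uses s

    # Tail probabilities beyond 2: the studentized statistic has heavier tails.
    print((np.abs(z) > 2).mean(), 2 * stats.norm.sf(2))         # ~0.046
    print((np.abs(t) > 2).mean(), 2 * stats.t.sf(2, df=m - 1))  # ~0.116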

Related Research Articles

<span class="mw-page-title-main">Median</span> Middle quantile of a data set or probability distribution

In statistics and probability theory, the median is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as "the middle" value. The basic feature of the median in describing data compared to the mean is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of a "typical" value. Median income, for example, may be a better way to suggest what a "typical" income is, because income distribution can be very skewed. The median is of central importance in robust statistics, as it is the most resistant statistic, having a breakdown point of 50%: so long as no more than half the data are contaminated, the median is not an arbitrarily large or small result.

<span class="mw-page-title-main">Normal distribution</span> Probability distribution

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}

where μ is the mean of the distribution and σ is its standard deviation.

<span class="mw-page-title-main">Standard deviation</span> In statistics, a measure of variation

In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range.

<span class="mw-page-title-main">Variance</span> Statistical measure of how far values spread from their average

In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by σ², s², Var(X), V(X), or 𝕍(X).

Student's t-distribution

In probability and statistics, Student's t-distribution is any member of a family of continuous probability distributions that arise when estimating the mean of a normally distributed population in situations where the sample size is small and the population's standard deviation is unknown. It was developed by English statistician William Sealy Gosset under the pseudonym "Student".

<span class="mw-page-title-main">Chi-squared distribution</span> Probability distribution and special case of gamma distribution

In probability theory and statistics, the chi-squared distribution with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. The chi-squared distribution is a special case of the gamma distribution and is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in construction of confidence intervals. This distribution is sometimes called the central chi-squared distribution, a special case of the more general noncentral chi-squared distribution.

<span class="mw-page-title-main">Beta distribution</span> Probability distribution

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] parameterized by two positive shape parameters, denoted by alpha (α) and beta (β), that appear as exponents of the random variable and control the shape of the distribution. The generalization to multiple variables is called a Dirichlet distribution.

In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "true value". The error of an observation is the deviation of the observed value from the true value of a quantity of interest. The residual is the difference between the observed value and the estimated value of the quantity of interest. The distinction is most important in regression analysis, where the concepts are sometimes called the regression errors and regression residuals and where they lead to the concept of studentized residuals. In econometrics, "errors" are also called disturbances.

In statistical inference, specifically predictive inference, a prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.

A t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. When the scaling term is estimated based on the data, the test statistic (under certain conditions) follows a Student's t-distribution. The t-test's most common application is to test whether the means of two populations are different.

In statistics, a studentized residual is the quotient resulting from the division of a residual by an estimate of its standard deviation. It is a form of a Student's t-statistic, with the estimate of error varying between points.

Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols.

This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design.

In statistics, the Behrens–Fisher problem, named after Walter Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.

<span class="mw-page-title-main">Studentized range distribution</span>

In probability and statistics, the studentized range distribution is the continuous probability distribution of the studentized range of an i.i.d. sample from a normally distributed population.

Tukey's range test, also known as Tukey's test, Tukey method, Tukey's honest significance test, or Tukey's HSD test, is a single-step multiple comparison procedure and statistical test. It can be used to find means that are significantly different from each other.
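
As an illustration only (assuming a recent SciPy, which provides scipy.stats.tukey_hsd; the sample values are hypothetical), the test can be run on three groups as follows:

    # Hedged example: Tukey's HSD via scipy.stats.tukey_hsd (SciPy >= 1.8).
    from scipy.stats import tukey_hsd

    # Hypothetical measurements for three groups.
    group_a = [24.5, 23.5, 26.4, 27.1, 29.9]
    group_b = [28.4, 34.2, 29.5, 32.2, 30.1]
    group_c = [26.1, 28.3, 24.3, 26.2, 27.8]

    res = tukey_hsd(group_a, group_b, group_c)
    print(res.pvalue)  # matrix of pairwise p-values for all group pairs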

<span class="mw-page-title-main">Relationships among probability distributions</span> Topic in probability theory and statistics

In probability theory and statistics, there are several relationships among probability distributions. These relations can be categorized into a few broad groups.

The Newman–Keuls or Student–Newman–Keuls (SNK) method is a stepwise multiple comparisons procedure used to identify sample means that are significantly different from each other. It was named after Student (1927), D. Newman, and M. Keuls. This procedure is often used as a post-hoc test whenever a significant difference between three or more sample means has been revealed by an analysis of variance (ANOVA). The Newman–Keuls method is similar to Tukey's range test, as both procedures use studentized range statistics. Unlike Tukey's range test, the Newman–Keuls method uses different critical values for different pairs of mean comparisons. Thus, the procedure is more likely to reveal significant differences between group means, but also more likely to commit type I errors by incorrectly rejecting a null hypothesis when it is true. In other words, the Newman–Keuls procedure is more powerful but less conservative than Tukey's range test.
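
A brief sketch of why the stepwise critical values differ (using the scipy.stats.studentized_range distribution as an assumed tool): the critical q grows with the number of means p spanned by a comparison, so comparisons of adjacent means face a smaller hurdle than comparisons of the extremes.

    # Sketch: Newman-Keuls-style critical values depend on the stretch p of
    # means spanned by each comparison (scipy.stats.studentized_range).
    from scipy.stats import studentized_range

    alpha, nu = 0.05, 20  # significance level and error degrees of freedom
    for p in (2, 3, 4, 5):  # number of means spanned by the comparison
        print(p, studentized_range.ppf(1 - alpha, p, nu))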

In statistics and probability theory, the nonparametric skew is a statistic occasionally used with random variables that take real values. It is a measure of the skewness of a random variable's distribution—that is, the distribution's tendency to "lean" to one side or the other of the mean. Its calculation does not require any knowledge of the form of the underlying distribution—hence the name nonparametric. It has some desirable properties: it is zero for any symmetric distribution; it is unaffected by a scale shift; and it reveals either left- or right-skewness equally well. In some statistical samples it has been shown to be less powerful than the usual measures of skewness in detecting departures of the population from normality.

References

  1. Student (1927). "Errors of routine analysis". Biometrika. 19 (1/2): 151–164. doi:10.2307/2332181. JSTOR 2332181.
  2. Newman D. (1939). "The Distribution of Range in Samples from a Normal Population Expressed in Terms of an Independent Estimate of Standard Deviation". Biometrika. 31 (1–2): 20–30. doi:10.1093/biomet/31.1-2.20.
  3. Keuls M. (1952). "The Use of the "Studentized Range" in Connection with an Analysis of Variance". Euphytica. 1 (2): 112–122. doi:10.1007/bf01908269. S2CID 19365087.
  4. John A. Rafter (2002). "Multiple Comparison Methods for Means". SIAM Review. 44 (2): 259–278. Bibcode:2002SIAMR..44..259R. CiteSeerX 10.1.1.132.2976. doi:10.1137/s0036144501357233.
