Dunnett's test

In statistics, Dunnett's test is a multiple comparison procedure [1] developed by Canadian statistician Charles Dunnett [2] to compare each of a number of treatments with a single control. [3] [4] Multiple comparisons to a control are also referred to as many-to-one comparisons.

History

Dunnett's test was developed in 1955; [5] an updated table of critical values was published in 1964. [6]

Multiple comparisons problem

The multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values. The major issue in any discussion of multiple-comparison procedures is the question of the probability of Type I errors. Most differences among alternative techniques result from different approaches to the question of how to control these errors. The problem is in part technical; but it is really much more a subjective question of how you want to define the error rate and how large you are willing to let the maximum possible error rate be. [7] Dunnett's test is a well-known and widely used multiple comparison procedure for simultaneously comparing, by interval estimation or hypothesis testing, all active treatments with a control when sampling from a distribution where the normality assumption is reasonable. Dunnett's test is designed to hold the family-wise error rate at or below a specified level α when performing multiple comparisons of treatment groups with a control. [7]
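
As a rough illustration of why the family-wise error rate must be controlled, the following Python sketch simulates three treatment-versus-control comparisons under the null hypothesis, using unadjusted two-sided z-tests at the 5% level. The simulation design, group sizes, and seed are illustrative assumptions, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim, n, k = 20_000, 30, 3          # simulations, group size, treatments
z_crit = 1.96                        # two-sided 5% point (known-variance z-test)

# Under the null, every group is drawn from the same N(0, 1) distribution.
control_means = rng.standard_normal((n_sim, n)).mean(axis=1)
treat_means = rng.standard_normal((n_sim, k, n)).mean(axis=2)

se = np.sqrt(2.0 / n)                # standard error of a difference of two means
z = (treat_means - control_means[:, None]) / se

# Family-wise error: at least one of the k comparisons rejects by chance.
per_test = np.mean(np.abs(z) > z_crit)
fwer = np.mean(np.any(np.abs(z) > z_crit, axis=1))
print(f"per-comparison rate = {per_test:.3f}, family-wise rate = {fwer:.3f}")
```

Each individual comparison rejects about 5% of the time, but the chance that at least one of the three comparisons falsely rejects is considerably larger, which is exactly the rate Dunnett's procedure keeps at the nominal level.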

Uses of Dunnett’s test

The original work on the multiple comparisons problem was done by Tukey and Scheffé. Their methods were general ones, which considered all kinds of pairwise comparisons. [7] Tukey's and Scheffé's methods allow any number of comparisons among a set of sample means. Dunnett's test, on the other hand, only compares one group with the others, addressing a special case of the multiple comparisons problem: pairwise comparisons of multiple treatment groups with a single control group. In the general case, where we compare each of the pairs, we make k(k − 1)/2 comparisons (where k is the number of groups), but in the treatments-versus-control case we make only k − 1 comparisons. If, in the case of treatment and control groups, we were to use the more general Tukey's and Scheffé's methods, they could result in unnecessarily wide confidence intervals. Dunnett's test takes into consideration the special structure of comparing treatment against control, yielding narrower confidence intervals. [5]
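
The difference in the number of comparisons can be spelled out directly; this short Python sketch (purely illustrative) counts both kinds for a few group sizes:

```python
def all_pairs(k):
    """Number of all pairwise comparisons among k groups: k choose 2."""
    return k * (k - 1) // 2

def vs_control(k):
    """Number of treatments-vs-control comparisons among k groups."""
    return k - 1

for k in (3, 5, 10):
    print(f"k={k}: all pairs={all_pairs(k)}, vs control={vs_control(k)}")
```

For ten groups, the all-pairs approach makes 45 comparisons while the many-to-one approach makes only 9, which is why a method tailored to the latter can afford narrower intervals at the same joint confidence level.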
It is very common to use Dunnett's test in medical experiments, for example comparing blood count measurements on three groups of animals, one of which served as a control while the other two were treated with two different drugs. Another common use of this method is among agronomists: agronomists may want to study the effect of certain chemicals added to the soil on crop yield, so they will leave some plots untreated (control plots) and compare them to the plots where chemicals were added to the soil (treatment plots).

Formal description of Dunnett's test

Dunnett's test is performed by computing a Student's t-statistic for each experimental, or treatment, group where the statistic compares the treatment group to a single control group. [8] [9] Since each comparison has the same control in common, the procedure incorporates the dependencies between these comparisons. In particular, the t-statistics are all derived from the same estimate of the error variance which is obtained by pooling the sums of squares for error across all (treatment and control) groups. The formal test statistic for Dunnett's test is either the largest in absolute value of these t-statistics (if a two-tailed test is required), or the most negative or most positive of the t-statistics (if a one-tailed test is required).
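
The computation described above can be sketched in Python. This is a minimal illustration, not a full implementation (the critical-value lookup is omitted); the numbers are taken from the fabric-strength example later in this article, with the standard process as control:

```python
import numpy as np

# Data from the fabric-strength example (standard process = control).
control = np.array([55.0, 47.0, 48.0])
treatments = [np.array([55.0, 64.0, 64.0]),   # process 1
              np.array([55.0, 49.0, 52.0]),   # process 2
              np.array([50.0, 44.0, 41.0])]   # process 3

groups = [control] + treatments
# Pool the sums of squares for error across all (treatment and control) groups.
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
df = sum(len(g) - 1 for g in groups)          # (p + 1)(N - 1) = 8 here
s2 = ss_error / df                            # common error-variance estimate

# One t-statistic per treatment, all sharing the same variance estimate.
t_stats = [(t.mean() - control.mean())
           / np.sqrt(s2 * (1 / len(t) + 1 / len(control)))
           for t in treatments]

# Two-tailed test statistic: the largest t in absolute value.
t_max = max(abs(t) for t in t_stats)
print([round(t, 2) for t in t_stats], round(t_max, 2))
```

Because every statistic shares the same control mean and the same pooled variance estimate, the statistics are correlated, and it is this dependence that Dunnett's critical values account for.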

In Dunnett's test we can use a common table of critical values, but more flexible options are nowadays readily available in many statistics packages. The critical values for any given percentage point depend on: whether a one- or two-tailed test is performed; the number of groups being compared; and the overall number of trials.

Assumptions

The analysis considers the case where the results of the experiment are numerical, and the experiment is performed to compare p treatments with a control group. The results can be summarized as a set of p + 1 calculated means of the sets of observations, X̄_0, X̄_1, ..., X̄_p, where X̄_1, ..., X̄_p refer to the treatment sets and X̄_0 refers to the control set of observations, together with an independent estimate s of the common standard deviation σ of all sets of observations. All of the sets of observations are assumed to be independently and normally distributed with a common variance σ² and means μ_0, μ_1, ..., μ_p. There is also an assumption that there is an available estimate s² for σ².

Calculation

Dunnett's test's calculation is a procedure that is based on calculating confidence statements about the true or the expected values of the differences μ_i − μ_0, that is, the differences between the treatment groups' means and the control group's mean. This procedure ensures that the probability of all statements being simultaneously correct is equal to a specified value, P. When calculating a one-sided upper (or lower) confidence interval for the true value of the difference between the mean of a treatment group and that of the control group, P constitutes the probability that this actual value will be less than the upper (or greater than the lower) limit of that interval. When calculating a two-sided confidence interval, P constitutes the probability that the true value will be between the upper and the lower limits.

First, we will denote the available N observations by X_ij, where i = 0, 1, ..., p indexes the groups (with i = 0 denoting the control) and j = 1, ..., N_i indexes the observations within group i, and estimate the common variance by, for example:

s² = Σ_i Σ_j (X_ij − X̄_i)² / n,

where X̄_i is the mean of group i, N_i is the number of observations in group i, and n = Σ_i N_i − (p + 1) is the number of degrees of freedom. As mentioned before, we would like to obtain separate confidence limits for each of the differences μ_i − μ_0 such that the probability that all p confidence intervals will contain the corresponding μ_i − μ_0 is equal to P.

We will consider the general case where there are p treatment groups and one control group. For the observed differences we will write:

D_i = X̄_i − X̄_0, i = 1, ..., p.

We will also write:

t_i = (D_i − (μ_i − μ_0)) / (s √(1/N_i + 1/N_0)),

which follows the Student's t distribution with n degrees of freedom. The lower confidence limits with joint confidence coefficient P for the p treatment effects μ_i − μ_0 will be given by:

μ_i − μ_0 > D_i − d_i s √(1/N_i + 1/N_0), i = 1, ..., p,

and the constants d_1, ..., d_p are chosen so that the probability that t_1 < d_1, ..., t_p < d_p simultaneously is equal to P. Similarly, the upper limits will be given by:

μ_i − μ_0 < D_i + d_i s √(1/N_i + 1/N_0), i = 1, ..., p.

For bounding in both directions, the following interval might be taken:

D_i − |d_i| s √(1/N_i + 1/N_0) < μ_i − μ_0 < D_i + |d_i| s √(1/N_i + 1/N_0), i = 1, ..., p,

where the constants |d_1|, ..., |d_p| are chosen so that the probability that |t_1| < |d_1|, ..., |t_p| < |d_p| simultaneously is equal to P. The solutions for these particular values of d_i for the two-sided test and for the one-sided test are given in the tables. [5] An updated table of critical values was published in 1964. [6]
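
When tables or software are unavailable, the joint critical value can be approximated by simulation. The sketch below is a rough Monte Carlo illustration under assumed balanced groups of equal size (not Dunnett's exact integral): it estimates the two-sided constant for p = 3 treatments, N = 3 observations per group, and n = 8 degrees of freedom, which should land near the tabulated value of 2.88:

```python
import numpy as np

rng = np.random.default_rng(42)
p, N, reps = 3, 3, 100_000          # treatments, per-group size, simulations

# Simulate the null experiment: p + 1 groups of N standard-normal observations.
data = rng.standard_normal((reps, p + 1, N))
means = data.mean(axis=2)

# Pooled variance estimate with (p + 1)(N - 1) = 8 degrees of freedom.
s2 = ((data - means[..., None]) ** 2).sum(axis=(1, 2)) / ((p + 1) * (N - 1))

# One t-statistic per treatment; group 0 plays the role of the control.
se = np.sqrt(s2 * (2.0 / N))
t = (means[:, 1:] - means[:, :1]) / se[:, None]

# Two-sided joint critical value: 95th percentile of max |t_i|.
d = np.quantile(np.abs(t).max(axis=1), 0.95)
print(f"simulated two-sided 95% critical value = {d:.2f}")
```

The same scheme gives one-sided constants by taking the 95th percentile of max t_i instead of max |t_i|.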

Examples

Breaking strength of fabric

The following example was adapted from one given by Villars. [10] The data represent measurements of the breaking strength of fabric treated by three different chemical processes compared with a standard method of manufacture. [5]

Breaking strength (lbs.)

            standard   process 1   process 2   process 3
                55         55          55          50
                47         64          49          44
                48         64          52          41
Means           50         61          52          45
Variance        19         27           9          21

Here, p = 3 and N = 3. The average variance is s² = 19, which is an estimate of the common variance of the four sets with (p + 1)(N − 1) = 8 degrees of freedom. This can be calculated as follows:

s² = (19 + 27 + 9 + 21) / 4 = 19.

The standard deviation is s = √19 = 4.36 and the estimated standard error of the difference between two means is s √(1/3 + 1/3) = 4.36 × 0.816 = 3.56.

The quantity which must be added to and/or subtracted from the observed differences between the means to give their confidence limits has been called by Tukey an "allowance" and is given by A = t s √(2/N), where t is drawn from the multivariate t-distribution, or can be obtained from Dunnett's Table 1 if one-sided limits are desired or from Dunnett's Table 2 if two-sided limits are wanted. For p = 3 and d.f. = 8, t = 2.42 for one-sided limits and t = 2.88 for two-sided limits at P = 95%. Analogous values of t can be determined from the tables if P = 99% confidence is required. For one-sided limits, the allowance is A = (2.42)(3.56) ≈ 9 and the experimenter can conclude that:

μ_1 − μ_0 > (61 − 50) − 9 = 2,

μ_2 − μ_0 > (52 − 50) − 9 = −7,

and μ_3 − μ_0 > (45 − 50) − 9 = −14.

The joint statement consisting of the above three conclusions has a confidence coefficient of 95%, i.e., in the long run, 95% of such joint statements will actually be correct. Upper limits for the three differences could be obtained in an analogous manner. For two-sided limits, the allowance is A = (2.88)(3.56) ≈ 11 and the experimenter can conclude that:

(61 − 50) − 11 = 0 < μ_1 − μ_0 < (61 − 50) + 11 = 22,

−9 < μ_2 − μ_0 < 13,

and −16 < μ_3 − μ_0 < 6.

The joint confidence coefficient for these three statements is greater than 95%. (Due to an approximation made in computing Tables 2a and 2b, the tabulated values of t are somewhat larger than necessary, so that the actual P's attained are slightly greater than 95% and 99%. No such approximation was made in computing Tables 1a and 1b.)
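
The example's confidence limits can be reproduced numerically. The sketch below uses the article's tabulated critical values (2.42 and 2.88) as given constants; note that the unrounded allowances differ slightly from the rounded values of 9 and 11 used in the text:

```python
import math

means = {"standard": 50, "process 1": 61, "process 2": 52, "process 3": 45}
s, N = math.sqrt(19), 3                 # pooled s with 8 d.f., group size
se = s * math.sqrt(2 / N)               # ~3.56, std. error of a difference

t_one, t_two = 2.42, 2.88               # Dunnett's table values, p = 3, d.f. = 8

for name in ("process 1", "process 2", "process 3"):
    diff = means[name] - means["standard"]
    lower = diff - t_one * se           # one-sided 95% lower limit
    two_lo, two_hi = diff - t_two * se, diff + t_two * se
    print(f"{name}: diff = {diff}, lower limit > {lower:.1f}, "
          f"two-sided interval = ({two_lo:.1f}, {two_hi:.1f})")
```

Only process 1 has a one-sided lower limit above zero, matching the conclusion that it is the only process demonstrably stronger than the standard at the joint 95% level.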

References

  1. Upton, G. & Cook, I. (2006). A Dictionary of Statistics, 2nd ed., Oxford University Press, Oxford, United Kingdom.
  2. Rumsey, Deborah (2009). Statistics II for Dummies. Wiley. p. 186. Retrieved 2012-08-22.
  3. Everitt, B. S. & Skrondal, A. (2010). The Cambridge Dictionary of Statistics, 4th ed., Cambridge University Press, Cambridge, United Kingdom.
  4. "Statistical Software | University of Kentucky Information Technology". Uky.edu. Archived from the original on 2012-07-31. Retrieved 2012-08-22.
  5. Dunnett, C. W. (1955). "A multiple comparison procedure for comparing several treatments with a control". Journal of the American Statistical Association. 50: 1096–1121. doi:10.1080/01621459.1955.10501294.
  6. Dunnett, C. W. (1964). "New tables for multiple comparisons with a control". Biometrics. 20: 482–491.
  7. Howell, David C. Statistical Methods for Psychology, 8th ed.
  8. "Dunnett's test", HyperStat Online: An Introductory Statistics Textbook and Online Tutorial for Help in Statistics Courses.
  9. "Mechanics of Different Tests – Biostatistics BI 345", Saint Anselm College. Archived 2010-06-01 at the Wayback Machine.
  10. Villars, Donald Statler (1951). Statistical Design and Analysis of Experiments for Development Research. Dubuque, Iowa: Wm. C. Brown Co.