Multinomial distribution


In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.


When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is bigger than 1, it is the binomial distribution. When k is bigger than 2 and n is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so n determines the suffix, and k the prefix).

The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a k-sided die n times.

Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p1, ..., pk, and n independent trials. Since the k outcomes are mutually exclusive and one must occur, we have $p_i \ge 0$ for i = 1, ..., k and $\sum_{i=1}^k p_i = 1$. Then if the random variables Xi indicate the number of times outcome number i is observed over the n trials, the vector X = (X1, ..., Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk). While the trials are independent, their outcomes Xi are dependent because they must sum to n.

Multinomial

Parameters: $n > 0$ — number of trials (integer); $k > 0$ — number of mutually exclusive events (integer); $p = (p_1, \ldots, p_k)$ — event probabilities, where $p_i \ge 0$ and $\sum_{i=1}^k p_i = 1$
Support: $\left\{ (x_1, \ldots, x_k) \mid x_i \in \{0, \ldots, n\},\ \sum_{i=1}^k x_i = n \right\}$
PMF: $\dfrac{n!}{x_1! \cdots x_k!}\, p_1^{x_1} \cdots p_k^{x_k}$
Mean: $\operatorname{E}(X_i) = n p_i$
Variance: $\operatorname{Var}(X_i) = n p_i (1 - p_i)$; $\operatorname{Cov}(X_i, X_j) = -n p_i p_j$ for $i \ne j$
MGF: $\left( \sum_{i=1}^k p_i e^{t_i} \right)^n$
CF: $\left( \sum_{j=1}^k p_j e^{i t_j} \right)^n$, where $i^2 = -1$
PGF: $\left( \sum_{i=1}^k p_i z_i \right)^n$ for $(z_1, \ldots, z_k) \in \mathbb{C}^k$

Definitions

Probability mass function

Suppose one does an experiment of extracting n balls of k different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color i (i = 1, ..., k) as Xi, and denote as pi the probability that a given extraction will be in color i. The probability mass function of this multinomial distribution is:

$$ f(x_1, \ldots, x_k; n, p_1, \ldots, p_k) = \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k) = \begin{cases} \dfrac{n!}{x_1! \cdots x_k!}\, p_1^{x_1} \cdots p_k^{x_k}, & \text{when } \sum_{i=1}^k x_i = n, \\ 0, & \text{otherwise,} \end{cases} $$

for non-negative integers $x_1, \ldots, x_k$.
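As a quick check of this formula, the mass function can be evaluated numerically. The following is a minimal sketch (assuming SciPy is available; the helper name multinomial_pmf is illustrative) comparing a direct computation with scipy.stats.multinomial:

# Sketch: evaluate n!/(x_1! ... x_k!) * p_1^x_1 ... p_k^x_k directly and via SciPy.
from math import factorial, prod
from scipy.stats import multinomial

def multinomial_pmf(x, p):
    n = sum(x)
    coef = factorial(n)
    for xi in x:
        coef //= factorial(xi)          # multinomial coefficient n!/(x_1!...x_k!)
    return coef * prod(pi ** xi for pi, xi in zip(p, x))

x = [1, 2, 3]        # counts for k = 3 categories
p = [0.2, 0.3, 0.5]  # category probabilities, summing to 1

print(multinomial_pmf(x, p))         # 0.135
print(multinomial.pmf(x, n=6, p=p))  # same value from scipy.stats.multinomial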

The probability mass function can be expressed using the gamma function as:

$$ f(x_1, \ldots, x_k; p_1, \ldots, p_k) = \frac{\Gamma\!\left(\sum_{i=1}^k x_i + 1\right)}{\prod_{i=1}^k \Gamma(x_i + 1)} \prod_{i=1}^k p_i^{x_i}. $$

This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.

Example

Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?
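Plugging these counts and probabilities into the probability mass function gives

$$ \Pr(A = 1, B = 2, C = 3) = \frac{6!}{1!\, 2!\, 3!}\, (0.2)^1 (0.3)^2 (0.5)^3 = 60 \times 0.00225 = 0.135. $$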

Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size [1] .

Properties

Normalization

The multinomial distribution is normalized according to:

$$ \sum_{\substack{x_1, \ldots, x_k \,\ge\, 0 \\ x_1 + \cdots + x_k \,=\, n}} \frac{n!}{x_1! \cdots x_k!}\, p_1^{x_1} \cdots p_k^{x_k} = 1, $$

where the sum is over all combinations of non-negative integer counts $x_1, \ldots, x_k$ such that $\sum_{i=1}^k x_i = n$.

Expected value and variance

The expected number of times outcome $i$ is observed over $n$ trials is

$$ \operatorname{E}(X_i) = n p_i. $$

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

$$ \operatorname{Var}(X_i) = n p_i (1 - p_i). $$

The off-diagonal entries are the covariances:

$$ \operatorname{Cov}(X_i, X_j) = -n p_i p_j $$

for distinct $i$ and $j$.

All covariances are negative because for fixed n, an increase in one component of a multinomial vector requires a decrease in another component.

When these expressions are combined into a matrix with $(i, j)$ element $\operatorname{Cov}(X_i, X_j)$, the result is a $k \times k$ positive semi-definite covariance matrix of rank $k - 1$. In the special case where $k = n$ and where the $p_i$ are all equal, the covariance matrix is the centering matrix.

The entries of the corresponding correlation matrix are

$$ \rho(X_i, X_i) = 1, \qquad \rho(X_i, X_j) = \frac{\operatorname{Cov}(X_i, X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}} = \frac{-n p_i p_j}{\sqrt{n p_i (1 - p_i)\, n p_j (1 - p_j)}} = -\sqrt{\frac{p_i p_j}{(1 - p_i)(1 - p_j)}}. $$

Note that the number of trials $n$ drops out of this expression.

Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.

The support of the multinomial distribution is the set

$$ \left\{ (x_1, \ldots, x_k) \in \mathbb{N}^k \mid x_1 + \cdots + x_k = n \right\}. $$

Its number of elements is

$$ \binom{n + k - 1}{k - 1}. $$
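As a small sanity check (a sketch with illustrative values, not part of the original text), the support can be enumerated for small $n$ and $k$ and its size compared with the binomial-coefficient count:

# Sketch: enumerate the support for n trials and k categories and compare its size
# with the stars-and-bars count C(n + k - 1, k - 1).
from itertools import product
from math import comb

n, k = 6, 3
support = [x for x in product(range(n + 1), repeat=k) if sum(x) == n]

print(len(support))            # 28
print(comb(n + k - 1, k - 1))  # 28, matching the formula above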

Matrix notation

In matrix notation,

$$ \operatorname{E}(X) = n p, $$

and

$$ \operatorname{Var}(X) = n \left( \operatorname{diag}(p) - p p^{\mathrm{T}} \right), $$

with $p^{\mathrm{T}}$ = the row vector transpose of the column vector $p$.
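The covariance formula is easy to verify by simulation. The following is a minimal sketch (assuming NumPy; variable names are illustrative) comparing the theoretical matrix $n(\operatorname{diag}(p) - p p^{\mathrm{T}})$ with the empirical covariance of simulated draws:

# Sketch: theoretical covariance n*(diag(p) - p p^T) versus an empirical estimate.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, np.array([0.2, 0.3, 0.5])

theoretical = n * (np.diag(p) - np.outer(p, p))

samples = rng.multinomial(n, p, size=200_000)  # one row of counts per experiment
empirical = np.cov(samples, rowvar=False)

print(np.round(theoretical, 2))
print(np.round(empirical, 2))   # close to the theoretical matrix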

Visualization

As slices of generalized Pascal's triangle

Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension—i.e. a simplex with a grid.[ citation needed ]

As polynomial coefficients

Similarly, just like one can interpret the binomial distribution as the (normalized) polynomial coefficients of $(p + q)^n$ when expanded, one can interpret the multinomial distribution as the coefficients of $(p_1 + p_2 + \cdots + p_k)^n$ when expanded, noting that the coefficients must sum up to 1 because $p_1 + \cdots + p_k = 1$.

Large deviation theory

Asymptotics

By Stirling's formula, in the limit of $n \to \infty$, we have

$$ \ln \Pr(X_1 = x_1, \ldots, X_k = x_k) = -n\, D_{\mathrm{KL}}(\hat p \,\|\, p) - \frac{k - 1}{2} \ln(2 \pi n) - \frac{1}{2} \sum_{i=1}^k \ln \hat p_i + o(1), $$

where the relative frequencies $\hat p_i = x_i / n$ in the data can be interpreted as probabilities from the empirical distribution $\hat p$, and $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence.
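The approximation can be checked numerically. The sketch below (assuming SciPy; names illustrative) compares the exact log-probability with the Stirling/KL approximation:

# Sketch: exact multinomial log-PMF versus the Stirling/KL approximation above.
import numpy as np
from scipy.stats import multinomial
from scipy.special import rel_entr   # elementwise x * log(x / y)

n = 1000
p = np.array([0.2, 0.3, 0.5])
x = np.array([230, 290, 480])        # observed counts summing to n
p_hat = x / n
k = len(p)

exact = multinomial.logpmf(x, n=n, p=p)
kl = rel_entr(p_hat, p).sum()        # D_KL(p_hat || p)
approx = -n * kl - 0.5 * (k - 1) * np.log(2 * np.pi * n) - 0.5 * np.log(p_hat).sum()

print(exact, approx)                 # the two values agree closely for large n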

This formula can be interpreted as follows.

Consider $\Delta_k$, the space of all possible distributions over the categories $\{1, 2, \ldots, k\}$. It is a simplex. After $n$ independent samples from the categorical distribution $p$ (which is how we construct the multinomial distribution), we obtain an empirical distribution $\hat p$.

By the asymptotic formula, the probability that the empirical distribution $\hat p$ deviates from the actual distribution $p$ decays exponentially, at a rate $n\, D_{\mathrm{KL}}(\hat p \,\|\, p)$. The more experiments there are and the more $\hat p$ differs from $p$, the less likely it is to see such an empirical distribution.

If $A$ is a closed subset of $\Delta_k$, then by dividing $A$ up into pieces, and reasoning about the growth rate of $\Pr(\hat p \in A)$ on each piece, we obtain Sanov's theorem, which states that

$$ \lim_{n \to \infty} \frac{1}{n} \ln \Pr(\hat p \in A) = -\inf_{q \in A} D_{\mathrm{KL}}(q \,\|\, p). $$

Concentration at large n

Due to the exponential decay, at large $n$, almost all the probability mass is concentrated in a small neighborhood of $p$. In this small neighborhood, we can take the first nonzero term in the Taylor expansion of $D_{\mathrm{KL}}$, to obtain

$$ \ln \Pr(X_1 = x_1, \ldots, X_k = x_k) \approx -\frac{n}{2} \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i}. $$

This resembles the Gaussian distribution, which suggests the following theorem:

Theorem. In the $n \to \infty$ limit, $n \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i}$ converges in distribution to the chi-squared distribution $\chi^2(k - 1)$.

If we sample from the multinomial distribution $\mathrm{Mult}(n, p)$ and plot the heatmap of the samples within the 2-dimensional simplex (here shown as a black triangle), we notice that as $n \to \infty$, the distribution converges to a Gaussian around the point $p$, with the contours converging in shape to ellipses, with radii shrinking as $1/\sqrt{n}$. Meanwhile, the separation between the discrete points shrinks as $1/n$, and so the discrete multinomial distribution converges to a continuous Gaussian distribution.
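A simulation sketch of the theorem (assuming NumPy and SciPy; names illustrative): the statistic $n \sum_i (\hat p_i - p_i)^2 / p_i$ computed from multinomial samples is compared with quantiles of $\chi^2(k - 1)$.

# Sketch: the statistic n * sum((p_hat - p)^2 / p) should be approximately
# chi-squared with k - 1 degrees of freedom for large n.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 5000
p = np.array([0.2, 0.3, 0.5])
k = len(p)

counts = rng.multinomial(n, p, size=100_000)
p_hat = counts / n
stat = n * np.sum((p_hat - p) ** 2 / p, axis=1)

for q in (0.5, 0.9, 0.99):           # empirical quantiles versus chi2(k-1) quantiles
    print(q, np.quantile(stat, q), chi2.ppf(q, df=k - 1))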
[Proof]

The space of all distributions over the categories $\{1, 2, \ldots, k\}$ is a simplex: $\Delta_k = \{ (y_1, \ldots, y_k) : y_i \ge 0,\ \sum_i y_i = 1 \}$, and the set of all possible empirical distributions after $n$ experiments is a subset of the simplex: $\Delta_{k, n} = \{ (x_1/n, \ldots, x_k/n) : x_i \in \mathbb{N},\ \sum_i x_i = n \}$. That is, it is the intersection between $\Delta_k$ and the lattice $(\mathbb{Z}^k)/n$.

As $n$ increases, most of the probability mass is concentrated in a subset of $\Delta_{k, n}$ near $p$, and the probability distribution near $p$ becomes well-approximated by

$$ \exp\left( -\frac{n}{2} \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i} \right) $$

(up to normalization). From this, we see that the subset upon which the mass is concentrated has radius on the order of $1/\sqrt{n}$, but the points in the subset are separated by distance on the order of $1/n$, so at large $n$, the points merge into a continuum. To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of $\Delta_{k, n}$ in $\Delta_k$. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability density $\rho(\hat p) = C e^{-\frac{n}{2} \sum_i (\hat p_i - p_i)^2 / p_i}$, where $C$ is a normalizing constant.

Finally, since the simplex $\Delta_k$ is not all of $\mathbb{R}^k$, but lies within a $(k - 1)$-dimensional plane, we obtain the desired result.

Conditional concentration at large n

The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification for Pearson's chi-squared test.

Theorem. Given frequencies $x_i$ observed in a dataset with $n$ points, we impose $\ell + 1$ independent linear constraints (notice that the first constraint is simply the requirement that the empirical distributions sum to one), such that the empirical frequencies $\hat p_i$ satisfy all these constraints simultaneously. Let $q$ denote the $I$-projection of the prior distribution $p$ on the sub-region of the simplex allowed by the linear constraints. In the $n \to \infty$ limit, sampled counts $n \hat p_i$ from the multinomial distribution conditional on the linear constraints are governed by $2 n\, D_{\mathrm{KL}}(\hat p \,\|\, q)$, which converges in distribution to the chi-squared distribution $\chi^2(k - 1 - \ell)$.

[Proof]

An analogous proof applies in this Diophantine problem of coupled linear equations in the count variables $n \hat p_i$, [2] but this time the feasible set is the intersection of $(\mathbb{Z}^k)/n$ with $\Delta_k$ and $\ell$ hyperplanes, all linearly independent, so the probability density is restricted to a $(k - 1 - \ell)$-dimensional plane. In particular, expanding the KL divergence around its minimum $q$ (the $I$-projection of $p$ on the constrained sub-region) ensures, by the Pythagorean theorem for $I$-divergence, that any constant and linear term in the counts vanishes from the conditional probability to multinomially sample those counts.

Notice that by definition, every one of $\hat p_1, \ldots, \hat p_k$ must be a rational number, whereas each $p_i$ may be chosen from any real number in $[0, 1]$ and need not satisfy the Diophantine system of equations. Only asymptotically, as $n \to \infty$, can the $\hat p_i$'s be regarded as probabilities over $[0, 1]$.

Away from empirically observed constraints (such as moments or prevalences) the theorem can be generalized:

Theorem.

  • Given functions , such that they are continuously differentiable in a neighborhood of , and the vectors are linearly independent;
  • given sequences , such that asymptotically for each ;
  • then for the multinomial distribution conditional on constraints , we have the quantity converging in distribution to at the limit.

In the case that all are equal, the Theorem reduces to the concentration of entropies around the Maximum Entropy. [3] [4]

In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-k" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range $1, \ldots, k$; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.

Statistical inference

Equivalence tests for multinomial distributions

The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.

Let $q$ denote a theoretical multinomial distribution and let $p$ be a true underlying distribution. The distributions $p$ and $q$ are considered equivalent if $d(p, q) < \varepsilon$ for a distance $d$ and a tolerance parameter $\varepsilon > 0$. The equivalence test problem is $H_0 = \{ d(p, q) \ge \varepsilon \}$ versus $H_1 = \{ d(p, q) < \varepsilon \}$. The true underlying distribution $p$ is unknown. Instead, the counting frequencies $p_n$ are observed, where $n$ is a sample size. An equivalence test uses $p_n$ to reject $H_0$. If $H_0$ can be rejected then the equivalence between $p$ and $q$ is shown at a given significance level. The equivalence test for Euclidean distance can be found in the textbook of Wellek (2010). [5] The equivalence test for the total variation distance is developed in Ostrovski (2017). [6] The exact equivalence test for the specific cumulative distance is proposed in Frey (2009). [7]

The distance between the true underlying distribution $p$ and a family of the multinomial distributions $\mathcal{M}$ is defined by $d(p, \mathcal{M}) = \min_{h \in \mathcal{M}} d(p, h)$. Then the equivalence test problem is given by $H_0 = \{ d(p, \mathcal{M}) \ge \varepsilon \}$ and $H_1 = \{ d(p, \mathcal{M}) < \varepsilon \}$. The distance $d(p, \mathcal{M})$ is usually computed using numerical optimization. The tests for this case were developed recently in Ostrovski (2018). [8]

Confidence intervals for the difference of two proportions

In the setting of a multinomial distribution, constructing confidence intervals for the difference between the proportions of observations from two events, $p_i - p_j$, requires the incorporation of the negative covariance between the sample estimators $\hat p_i$ and $\hat p_j$.

Some of the literature on the subject focused on the use-case of matched-pairs binary data, which requires careful attention when translating the formulas to the general case of $p_i - p_j$ for any multinomial distribution. Formulas in the current section will be generalized, while formulas in the next section will focus on the matched-pairs binary data use-case.

Wald's standard error (SE) of the difference of proportions can be estimated using: [9] :378 [10]

$$ \widehat{\operatorname{SE}}(\hat p_i - \hat p_j) = \sqrt{\frac{\hat p_i + \hat p_j - (\hat p_i - \hat p_j)^2}{n}}. $$

For a $100(1 - \alpha)\%$ approximate confidence interval, the margin of error incorporates the appropriate quantile from the standard normal distribution, as follows:

$$ (\hat p_i - \hat p_j) \pm z_{\alpha/2}\, \widehat{\operatorname{SE}}(\hat p_i - \hat p_j). $$

[Proof]

As the sample size ($n$) increases, the sample proportions will approximately follow a multivariate normal distribution, thanks to the multidimensional central limit theorem (and it could also be shown using the Cramér–Wold theorem). Therefore, their difference will also be approximately normal. Also, these estimators are weakly consistent and plugging them into the SE estimator makes it also weakly consistent. Hence, thanks to Slutsky's theorem, the pivotal quantity $\frac{(\hat p_i - \hat p_j) - (p_i - p_j)}{\widehat{\operatorname{SE}}(\hat p_i - \hat p_j)}$ approximately follows the standard normal distribution. And from that, the above approximate confidence interval is directly derived.

The SE can be constructed using the calculus of the variance of the difference of two random variables:

$$ \operatorname{Var}(\hat p_i - \hat p_j) = \operatorname{Var}(\hat p_i) + \operatorname{Var}(\hat p_j) - 2 \operatorname{Cov}(\hat p_i, \hat p_j) = \frac{p_i (1 - p_i)}{n} + \frac{p_j (1 - p_j)}{n} + 2 \frac{p_i p_j}{n} = \frac{p_i + p_j - (p_i - p_j)^2}{n}. $$
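The interval is straightforward to compute from observed counts. Below is a minimal sketch (names illustrative; SciPy assumed for the normal quantile), not a reference implementation:

# Sketch: Wald confidence interval for p_i - p_j from multinomial counts,
# using SE = sqrt((p_i + p_j - (p_i - p_j)^2) / n).
import numpy as np
from scipy.stats import norm

def wald_ci_difference(counts, i, j, alpha=0.05):
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    pi_hat, pj_hat = counts[i] / n, counts[j] / n
    diff = pi_hat - pj_hat
    se = np.sqrt((pi_hat + pj_hat - diff ** 2) / n)
    z = norm.ppf(1 - alpha / 2)
    return diff - z * se, diff + z * se

# CI for the difference between the first two category proportions (k = 3).
print(wald_ci_difference([25, 35, 40], i=0, j=1))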

A modification which includes a continuity correction adds $\frac{1}{n}$ to the margin of error as follows: [11] :102–3

$$ (\hat p_i - \hat p_j) \pm \left( z_{\alpha/2}\, \widehat{\operatorname{SE}}(\hat p_i - \hat p_j) + \frac{1}{n} \right). $$

Another alternative is to rely on a Bayesian estimator using the Jeffreys prior, which leads to using a Dirichlet distribution, with all parameters being equal to 0.5, as a prior. The posterior will be the calculations from above, but after adding 1/2 to each of the k elements, leading to an overall increase of the sample size by $k/2$. This was originally developed for a multinomial distribution with four events, and is known as Wald+2, for analyzing matched pairs data (see the next section for more details). [12]

This leads to the following SE, based on the adjusted estimates $\tilde p_i = \frac{x_i + 1/2}{n + k/2}$:

$$ \widehat{\operatorname{SE}}(\tilde p_i - \tilde p_j) = \sqrt{\frac{\tilde p_i + \tilde p_j - (\tilde p_i - \tilde p_j)^2}{n + k/2}}, $$


This can then be plugged into the original Wald formula as follows:

$$ (\tilde p_i - \tilde p_j) \pm z_{\alpha/2} \sqrt{\frac{\tilde p_i + \tilde p_j - (\tilde p_i - \tilde p_j)^2}{n + k/2}}. $$

Occurrence and applications

Confidence intervals for the difference in matched-pairs binary data (using multinomial with k=4)

For the case of matched-pairs binary data, a common task is to build the confidence interval of the difference of the proportion of the matched events. For example, we might have a test for some disease, and we may want to check the results of it for some population at two points in time (1 and 2), to check if there was a change in the proportion of the positives for the disease during that time.

Such scenarios can be represented using a two-by-two contingency table with the number of elements that had each of the combinations of events. We can use small $f$ for sampling frequencies: $f_{11}, f_{10}, f_{01}, f_{00}$, and capital $F$ for population frequencies: $F_{11}, F_{10}, F_{01}, F_{00}$. These four combinations could be modeled as coming from a multinomial distribution (with four potential outcomes). The sizes of the sample and population can be $n$ and $N$ respectively. And in such a case, there is an interest in building a confidence interval for the difference of proportions from the marginals of the following (sampled) contingency table:

                   | Test 2 positive    | Test 2 negative    | Row total
Test 1 positive    | $f_{11}$           | $f_{10}$           | $f_{11} + f_{10}$
Test 1 negative    | $f_{01}$           | $f_{00}$           | $f_{01} + f_{00}$
Column total       | $f_{11} + f_{01}$  | $f_{10} + f_{00}$  | $n$

In this case, checking the difference in marginal proportions means we are interested in using the following definitions: $\hat p_{1.} = \frac{f_{11} + f_{10}}{n}$, $\hat p_{.1} = \frac{f_{11} + f_{01}}{n}$. And the difference we want to build confidence intervals for is:

$$ \hat p_{1.} - \hat p_{.1} = \frac{(f_{11} + f_{10}) - (f_{11} + f_{01})}{n} = \frac{f_{10} - f_{01}}{n}. $$

Hence, building a confidence interval for the difference of the marginal positive proportions ($\hat p_{1.} - \hat p_{.1}$) is the same as building a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table ($\frac{f_{10} - f_{01}}{n}$).

Calculating a p-value for such a difference is known as McNemar's test. A confidence interval around it can be constructed using the methods described above for confidence intervals for the difference of two proportions.

The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions. For example, the Wald confidence intervals provided above can be written as: [11] :102–3

$$ \frac{f_{10} - f_{01}}{n} \pm z_{\alpha/2} \sqrt{\frac{f_{10} + f_{01}}{n^2} - \frac{(f_{10} - f_{01})^2}{n^3}}. $$

Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application. [11]

One such modification includes Agresti and Min's Wald+2 (similar to some of their other works [13] ) in which each cell frequency had an extra $\frac{1}{2}$ added to it. [12] This leads to the Wald+2 confidence intervals. In a Bayesian interpretation, this is like building the estimators taking as prior a Dirichlet distribution with all parameters being equal to 0.5 (which is, in fact, the Jeffreys prior). The +2 in the name Wald+2 can now be taken to mean that in the context of a two-by-two contingency table, which is a multinomial distribution with four possible events, since we add 1/2 an observation to each of them, this translates to an overall addition of 2 observations (due to the prior).

This leads to the following modified SE for the case of matched pairs data:

$$ \widehat{\operatorname{SE}} = \sqrt{\frac{f_{10} + f_{01} + 1}{(n + 2)^2} - \frac{(f_{10} - f_{01})^2}{(n + 2)^3}}, $$

which can then be plugged into the original Wald formula as follows:

$$ \frac{f_{10} - f_{01}}{n + 2} \pm z_{\alpha/2} \sqrt{\frac{f_{10} + f_{01} + 1}{(n + 2)^2} - \frac{(f_{10} - f_{01})^2}{(n + 2)^3}}. $$

Other modifications include Bonett and Price’s Adjusted Wald, and Newcombe’s Score.

Computational methods

Random variate generation

First, reorder the parameters such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable X from a uniform (0, 1) distribution. The resulting outcome is the component

$$ j = \min \left\{ j' \in \{1, \ldots, k\} : \left( \sum_{i=1}^{j'} p_i \right) - X \ge 0 \right\}. $$

Then {X_j = 1, X_{j'} = 0 for j' ≠ j} is one observation from the multinomial distribution with $p$ and $n = 1$. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with $n$ equal to the number of such repetitions.
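A short sketch of this inverse-CDF approach (assuming NumPy; the function name is illustrative):

# Sketch: sample one multinomial observation of n trials by repeated inverse-CDF draws.
import numpy as np

def sample_multinomial(n, p, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    p = np.asarray(p, dtype=float)
    cdf = np.cumsum(p)                # cumulative probabilities p_1, p_1+p_2, ...
    counts = np.zeros(len(p), dtype=int)
    for _ in range(n):
        u = rng.uniform()             # auxiliary variable from Uniform(0, 1)
        j = np.searchsorted(cdf, u)   # smallest j with cumulative probability >= u
        counts[j] += 1
    return counts

print(sample_multinomial(6, [0.2, 0.3, 0.5]))  # e.g. [1 2 3]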

Sampling using repeated conditional binomial samples

Given the parameters $p_1, \ldots, p_k$ and a total $n$ for the sample such that $\sum_{i=1}^k X_i = n$, it is possible to sample sequentially for the number in an arbitrary state $X_i$, by partitioning the state space into $i$ and not-$i$, conditioned on any prior samples already taken, repeatedly.

Algorithm: Sequential conditional binomial sampling

S = n
rho = 1
for i in [1, ..., k-1]:
    if rho != 0:
        X[i] ~ Binom(S, p[i] / rho)
    else:
        X[i] = 0
    S = S - X[i]
    rho = rho - p[i]
X[k] = S

Heuristically, each application of the binomial sample reduces the available number to sample from and the conditional probabilities are likewise updated to ensure logical consistency. [14]
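A runnable sketch of this procedure (assuming NumPy; names illustrative):

# Sketch: sample multinomial counts via sequential conditional binomial draws.
import numpy as np

def conditional_binomial_sample(n, p, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    p = np.asarray(p, dtype=float)
    k = len(p)
    counts = np.zeros(k, dtype=int)
    remaining, rho = n, 1.0
    for i in range(k - 1):
        if rho > 0:
            counts[i] = rng.binomial(remaining, min(p[i] / rho, 1.0))
        remaining -= counts[i]
        rho -= p[i]
    counts[k - 1] = remaining         # the last state receives whatever is left
    return counts

print(conditional_binomial_sample(6, [0.2, 0.3, 0.5]))  # e.g. [2 1 3]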

Software implementations

See also


References

  1. "probability - multinomial distribution sampling". Cross Validated. Retrieved 2022-07-28.
  2. Loukas, Orestis; Chung, Ho Ryun (2023). "Total Empiricism: Learning from Data". arXiv: 2311.08315 [math.ST].
  3. Loukas, Orestis; Chung, Ho Ryun (April 2022). "Categorical Distributions of Maximum Entropy under Marginal Constraints". arXiv: 2204.03406 [hep-th].
  4. Loukas, Orestis; Chung, Ho Ryun (June 2022). "Entropy-based Characterization of Modeling Constraints". arXiv: 2206.14105 [stat.ME].
  5. Wellek, Stefan (2010). Testing statistical hypotheses of equivalence and noninferiority. Chapman and Hall/CRC. ISBN   978-1439808184.
  6. Ostrovski, Vladimir (May 2017). "Testing equivalence of multinomial distributions". Statistics & Probability Letters. 124: 77–82. doi:10.1016/j.spl.2017.01.004. S2CID   126293429. Official web link (subscription required). Alternate, free web link.
  7. Frey, Jesse (March 2009). "An exact multinomial test for equivalence". The Canadian Journal of Statistics. 37: 47–59. doi:10.1002/cjs.10000. S2CID   122486567. Official web link (subscription required).
  8. Ostrovski, Vladimir (March 2018). "Testing equivalence to families of multinomial distributions with application to the independence model". Statistics & Probability Letters. 139: 61–66. doi:10.1016/j.spl.2018.03.014. S2CID   126261081. Official web link (subscription required). Alternate, free web link.
  9. Fleiss, Joseph L.; Levin, Bruce; Paik, Myunghee Cho (2003). Statistical Methods for Rates and Proportions (3rd ed.). Hoboken, N.J: J. Wiley. p. 760. ISBN   9780471526292.
  10. Newcombe, R. G. (1998). "Interval Estimation for the Difference Between Independent Proportions: Comparison of Eleven Methods". Statistics in Medicine. 17 (8): 873–890. doi:10.1002/(SICI)1097-0258(19980430)17:8<873::AID-SIM779>3.0.CO;2-I. PMID   9595617.
  11. "Confidence Intervals for the Difference Between Two Correlated Proportions" (PDF). NCSS. Retrieved 2022-03-22.
  12. Agresti, Alan; Min, Yongyi (2005). "Simple improved confidence intervals for comparing matched proportions" (PDF). Statistics in Medicine. 24 (5): 729–740. doi:10.1002/sim.1781. PMID 15696504.
  13. Agresti, A.; Caffo, B. (2000). "Simple and effective confidence intervals for proportions and difference of proportions result from adding two successes and two failures". The American Statistician. 54 (4): 280–288. doi:10.1080/00031305.2000.10474560.
  14. "11.5: The Multinomial Distribution". Statistics LibreTexts. 2020-05-05. Retrieved 2023-09-13.
  15. "MultinomialCI - Confidence Intervals for Multinomial Proportions". CRAN. 11 May 2021. Retrieved 2024-03-23.

Further reading