# Beta distribution

Summary of the Beta(α, β) distribution (plots of the probability density function and cumulative distribution function omitted):

Notation: Beta(α, β)
Parameters: α > 0 shape (real); β > 0 shape (real)
Support: ${\displaystyle x\in [0,1]\!}$ or ${\displaystyle x\in (0,1)\!}$
PDF: ${\displaystyle {\frac {x^{\alpha -1}(1-x)^{\beta -1}}{\mathrm {B} (\alpha ,\beta )}}\!}$, where ${\displaystyle \mathrm {B} (\alpha ,\beta )={\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}}$ and ${\displaystyle \Gamma }$ is the gamma function
CDF: ${\displaystyle I_{x}(\alpha ,\beta )\!}$ (the regularized incomplete beta function)
Mean: ${\displaystyle \operatorname {E} [X]={\frac {\alpha }{\alpha +\beta }}\!}$; ${\displaystyle \operatorname {E} [\ln X]=\psi (\alpha )-\psi (\alpha +\beta )\!}$; ${\displaystyle \operatorname {E} [X\,\ln X]={\frac {\alpha }{\alpha +\beta }}\,\left[\psi (\alpha +1)-\psi (\alpha +\beta +1)\right]\!}$ (see digamma function and section: Geometric mean)
Median: ${\displaystyle I_{\frac {1}{2}}^{[-1]}(\alpha ,\beta )}$ (in general); ${\displaystyle \approx {\frac {\alpha -{\tfrac {1}{3}}}{\alpha +\beta -{\tfrac {2}{3}}}}}$ for α, β > 1
Mode: ${\displaystyle {\frac {\alpha -1}{\alpha +\beta -2}}\!}$ for α, β > 1; any value in ${\displaystyle (0,1)}$ for α = β = 1; {0, 1} (bimodal) for α, β < 1; 0 for α ≤ 1, β > 1; 1 for α > 1, β ≤ 1
Variance: ${\displaystyle \operatorname {var} [X]={\frac {\alpha \beta }{(\alpha +\beta )^{2}(\alpha +\beta +1)}}\!}$; ${\displaystyle \operatorname {var} [\ln X]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )\!}$ (see trigamma function and section: Geometric variance)
Skewness: ${\displaystyle {\frac {2\,(\beta -\alpha ){\sqrt {\alpha +\beta +1}}}{(\alpha +\beta +2){\sqrt {\alpha \beta }}}}}$
Ex. kurtosis: ${\displaystyle {\frac {6[(\alpha -\beta )^{2}(\alpha +\beta +1)-\alpha \beta (\alpha +\beta +2)]}{\alpha \beta (\alpha +\beta +2)(\alpha +\beta +3)}}}$
Entropy: ${\displaystyle \ln \mathrm {B} (\alpha ,\beta )-(\alpha -1)\psi (\alpha )-(\beta -1)\psi (\beta )+(\alpha +\beta -2)\psi (\alpha +\beta )}$
MGF: ${\displaystyle 1+\sum _{k=1}^{\infty }\left(\prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}\right){\frac {t^{k}}{k!}}}$
CF: ${\displaystyle {}_{1}F_{1}(\alpha ;\alpha +\beta ;i\,t)\!}$ (see confluent hypergeometric function)
Fisher information: ${\displaystyle {\begin{bmatrix}\operatorname {var} [\ln X]&\operatorname {cov} [\ln X,\ln(1-X)]\\\operatorname {cov} [\ln X,\ln(1-X)]&\operatorname {var} [\ln(1-X)]\end{bmatrix}}}$ (see section: Fisher information matrix)

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] parametrized by two positive shape parameters, denoted by α and β, that appear as exponents of the random variable and control the shape of the distribution. It is a special case of the Dirichlet distribution.

The beta distribution has been applied to model the behavior of random variables limited to intervals of finite length in a wide variety of disciplines.

In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial, negative binomial and geometric distributions. For example, the beta distribution can be used in Bayesian analysis to describe initial knowledge concerning probability of success such as the probability that a space vehicle will successfully complete a specified mission. The beta distribution is a suitable model for the random behavior of percentages and proportions.

The usual formulation of the beta distribution is also known as the beta distribution of the first kind, whereas beta distribution of the second kind is an alternative name for the beta prime distribution.

## Characterization

### Probability density function

The probability density function (pdf) of the beta distribution, for 0 ≤ x ≤ 1, and shape parameters α, β > 0, is a power function of the variable x and of its reflection (1 − x) as follows:

{\displaystyle {\begin{aligned}f(x;\alpha ,\beta )&=\mathrm {constant} \cdot x^{\alpha -1}(1-x)^{\beta -1}\\[3pt]&={\frac {x^{\alpha -1}(1-x)^{\beta -1}}{\displaystyle \int _{0}^{1}u^{\alpha -1}(1-u)^{\beta -1}\,du}}\\[6pt]&={\frac {\Gamma (\alpha +\beta )}{\Gamma (\alpha )\Gamma (\beta )}}\,x^{\alpha -1}(1-x)^{\beta -1}\\[6pt]&={\frac {1}{\mathrm {B} (\alpha ,\beta )}}x^{\alpha -1}(1-x)^{\beta -1}\end{aligned}}}

where Γ(z) is the gamma function. The beta function, ${\displaystyle \mathrm {B} }$, is a normalization constant to ensure that the total probability is 1. In the above equations x is a realization (an observed value that actually occurred) of a random process X.

This definition includes both ends x = 0 and x = 1, which is consistent with definitions for other continuous distributions supported on a bounded interval which are special cases of the beta distribution, for example the arcsine distribution, and consistent with several authors, like N. L. Johnson and S. Kotz. [1] [2] [3] [4] However, the inclusion of x = 0 and x = 1 does not work for α, β < 1; accordingly, several other authors, including W. Feller, [5] [6] [7] choose to exclude the ends x = 0 and x = 1 (so that the two ends are not actually part of the domain of the density function), and consider instead 0 < x < 1.

Several authors, including N. L. Johnson and S. Kotz, [1] use the symbols p and q (instead of α and β) for the shape parameters of the beta distribution, reminiscent of the symbols traditionally used for the parameters of the Bernoulli distribution, because the beta distribution approaches the Bernoulli distribution in the limit when both shape parameters α and β approach the value of zero.

In the following, a random variable X beta-distributed with parameters α and β will be denoted by: [8] [9]

${\displaystyle X\sim \operatorname {Beta} (\alpha ,\beta )}$

Other notations for beta-distributed random variables used in the statistical literature are ${\displaystyle X\sim {\mathcal {B}}e(\alpha ,\beta )}$ [10] and ${\displaystyle X\sim \beta _{\alpha ,\beta }}$. [5]
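As a brief numerical illustration of the density defined above (a minimal sketch in Python, assuming NumPy and SciPy are available; the helper name beta_pdf and the values α = 2, β = 5 are chosen only for illustration), the normalization by the beta function can be checked directly:

```python
from scipy.integrate import quad
from scipy.special import beta as beta_fn   # complete beta function B(α, β)
from scipy.stats import beta as beta_dist

def beta_pdf(x, a, b):
    """Density x^(α−1) (1 − x)^(β−1) / B(α, β) on (0, 1)."""
    return x**(a - 1) * (1 - x)**(b - 1) / beta_fn(a, b)

a, b = 2.0, 5.0                                   # arbitrary example shape parameters
total, _ = quad(beta_pdf, 0, 1, args=(a, b))
print(total)                                      # ≈ 1.0: B(α, β) is the normalizing constant

x = 0.3
print(beta_pdf(x, a, b), beta_dist.pdf(x, a, b))  # agrees with SciPy's built-in density
```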

### Cumulative distribution function

The cumulative distribution function is

${\displaystyle F(x;\alpha ,\beta )={\frac {\mathrm {B} {}(x;\alpha ,\beta )}{\mathrm {B} {}(\alpha ,\beta )}}=I_{x}(\alpha ,\beta )}$

where ${\displaystyle \mathrm {B} (x;\alpha ,\beta )}$ is the incomplete beta function and ${\displaystyle I_{x}(\alpha ,\beta )}$ is the regularized incomplete beta function.
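A short numerical check of this identity (a sketch assuming SciPy is available; the values α = 2, β = 3, x = 0.4 are arbitrary):

```python
from scipy.special import betainc   # regularized incomplete beta function I_x(α, β)
from scipy.stats import beta

a, b, x = 2.0, 3.0, 0.4
print(betainc(a, b, x))    # I_x(α, β)
print(beta.cdf(x, a, b))   # CDF of Beta(α, β); the two values coincide
```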

## Properties

### Measures of central tendency

#### Mode

The mode of a Beta distributed random variable X with α, β > 1 is the most likely value of the distribution (corresponding to the peak in the PDF), and is given by the following expression: [1]

${\displaystyle {\frac {\alpha -1}{\alpha +\beta -2}}.}$

When both parameters are less than one (α, β < 1), this is the anti-mode: the lowest point of the probability density curve. [3]

Letting α = β, the expression for the mode simplifies to 1/2, showing that for α = β > 1 the mode (resp. anti-mode when α, β < 1) is at the center of the distribution: it is symmetric in those cases. See Shapes section in this article for a full list of mode cases, for arbitrary values of α and β. For several of these cases, the maximum value of the density function occurs at one or both ends. In some cases the (maximum) value of the density function occurring at the end is finite. For example, in the case of α = 2, β = 1 (or α = 1, β = 2), the density function becomes a right-triangle distribution which is finite at both ends. In several other cases there is a singularity at one end, where the value of the density function approaches infinity. For example, in the case α = β = 1/2, the Beta distribution simplifies to become the arcsine distribution. There is debate among mathematicians about some of these cases and whether the ends (x = 0 and x = 1) can be called modes or not, specifically: [6] [8]

• Whether the ends are part of the domain of the density function
• Whether a singularity can ever be called a mode
• Whether cases with two maxima should be called bimodal

#### Median

The median of the beta distribution is the unique real number ${\displaystyle x=I_{\frac {1}{2}}^{[-1]}(\alpha ,\beta )}$ for which the regularized incomplete beta function ${\displaystyle I_{x}(\alpha ,\beta )={\tfrac {1}{2}}}$. There is no general closed-form expression for the median of the beta distribution for arbitrary values of α and β. Closed-form expressions for particular values of the parameters α and β follow:[ citation needed ]

• For symmetric cases α = β, median = 1/2.
• For α = 1 and β > 0, median ${\displaystyle =1-2^{-{\frac {1}{\beta }}}}$ (this case is the mirror-image of the power function [0,1] distribution)
• For α > 0 and β = 1, median = ${\displaystyle 2^{-{\frac {1}{\alpha }}}}$ (this case is the power function [0,1] distribution [6] )
• For α = 3 and β = 2, median = 0.6142724318676105..., the real solution to the quartic equation ${\displaystyle 1-8x^{3}+6x^{4}=0}$, which lies in [0,1].
• For α = 2 and β = 3, median = 0.38572756813238945... = 1 − median(Beta(3, 2))

The following are the limits with one parameter finite (non-zero) and the other approaching these limits:[ citation needed ]

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}{\text{median}}=\lim _{\alpha \to \infty }{\text{median}}=1,\\\lim _{\alpha \to 0}{\text{median}}=\lim _{\beta \to \infty }{\text{median}}=0.\end{aligned}}}

A reasonable approximation of the value of the median of the beta distribution, for both α and β greater or equal to one, is given by the formula [11]

${\displaystyle {\text{median}}\approx {\frac {\alpha -{\tfrac {1}{3}}}{\alpha +\beta -{\tfrac {2}{3}}}}{\text{ for }}\alpha ,\beta \geq 1.}$

When α, β ≥ 1, the relative error (the absolute error divided by the median) in this approximation is less than 4%, and for both α ≥ 2 and β ≥ 2 it is less than 1%. The absolute error divided by the difference between the mean and the mode is similarly small.
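The quality of the approximation can be checked numerically; the sketch below (assuming SciPy is available; the parameter pairs are arbitrary test values) compares the exact median, obtained by numerically inverting the regularized incomplete beta function, with the closed-form approximation above:

```python
from scipy.stats import beta

def median_comparison(a, b):
    exact = beta.ppf(0.5, a, b)            # numerical inverse of I_x(α, β) at 1/2
    approx = (a - 1/3) / (a + b - 2/3)     # approximation valid for α, β ≥ 1
    return exact, approx, abs(exact - approx) / exact   # relative error

for a, b in [(1, 1), (2, 3), (3, 2), (5, 10)]:
    print((a, b), median_comparison(a, b))
```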

#### Mean

The expected value (mean) (μ) of a Beta distribution random variable X with two parameters α and β is a function of only the ratio β/α of these parameters: [1]

{\displaystyle {\begin{aligned}\mu =\operatorname {E} [X]&=\int _{0}^{1}xf(x;\alpha ,\beta )\,dx\\&=\int _{0}^{1}x\,{\frac {x^{\alpha -1}(1-x)^{\beta -1}}{\mathrm {B} (\alpha ,\beta )}}\,dx\\&={\frac {\alpha }{\alpha +\beta }}\\&={\frac {1}{1+{\frac {\beta }{\alpha }}}}\end{aligned}}}

Letting α = β in the above expression one obtains μ = 1/2, showing that for α = β the mean is at the center of the distribution: it is symmetric. Also, the following limits can be obtained from the above expression:

{\displaystyle {\begin{aligned}\lim _{{\frac {\beta }{\alpha }}\to 0}\mu =1\\\lim _{{\frac {\beta }{\alpha }}\to \infty }\mu =0\end{aligned}}}

Therefore, for β/α → 0, or for α/β → ∞, the mean is located at the right end, x = 1. For these limit ratios, the beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the right end, x = 1, with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the right end, x = 1.

Similarly, for β/α → ∞, or for α/β → 0, the mean is located at the left end, x = 0. The beta distribution becomes a 1-point Degenerate distribution with a Dirac delta function spike at the left end, x = 0, with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the left end, x = 0. Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}\mu =\lim _{\alpha \to \infty }\mu =1\\\lim _{\alpha \to 0}\mu =\lim _{\beta \to \infty }\mu =0\end{aligned}}}

While for typical unimodal distributions (with centrally located modes, inflexion points at both sides of the mode, and longer tails) (with Beta(α, β) such that α, β > 2) it is known that the sample mean (as an estimate of location) is not as robust as the sample median, the opposite is the case for uniform or "U-shaped" bimodal distributions (with Beta(α, β) such that α, β ≤ 1), with the modes located at the ends of the distribution. As Mosteller and Tukey remark ( [12] p. 207) "the average of the two extreme observations uses all the sample information. This illustrates how, for short-tailed distributions, the extreme observations should get more weight." By contrast, it follows that the median of "U-shaped" bimodal distributions with modes at the edge of the distribution (with Beta(α, β) such that α, β ≤ 1) is not robust, as the sample median drops the extreme sample observations from consideration. A practical application of this occurs for example for random walks, since the probability for the time of the last visit to the origin in a random walk is distributed as the arcsine distribution Beta(1/2, 1/2): [5] [13] the mean of a number of realizations of a random walk is a much more robust estimator than the median (which is an inappropriate sample measure estimate in this case).

#### Geometric mean

The logarithm of the geometric mean GX of a distribution with random variable X is the arithmetic mean of ln(X), or, equivalently, its expected value:

${\displaystyle \ln G_{X}=\operatorname {E} [\ln X]}$

For a beta distribution, the expected value integral gives:

{\displaystyle {\begin{aligned}\operatorname {E} [\ln X]&=\int _{0}^{1}\ln x\,f(x;\alpha ,\beta )\,dx\\[4pt]&=\int _{0}^{1}\ln x\,{\frac {x^{\alpha -1}(1-x)^{\beta -1}}{\mathrm {B} (\alpha ,\beta )}}\,dx\\[4pt]&={\frac {1}{\mathrm {B} (\alpha ,\beta )}}\,\int _{0}^{1}{\frac {\partial x^{\alpha -1}(1-x)^{\beta -1}}{\partial \alpha }}\,dx\\[4pt]&={\frac {1}{\mathrm {B} (\alpha ,\beta )}}{\frac {\partial }{\partial \alpha }}\int _{0}^{1}x^{\alpha -1}(1-x)^{\beta -1}\,dx\\[4pt]&={\frac {1}{\mathrm {B} (\alpha ,\beta )}}{\frac {\partial \mathrm {B} (\alpha ,\beta )}{\partial \alpha }}\\[4pt]&={\frac {\partial \ln \mathrm {B} (\alpha ,\beta )}{\partial \alpha }}\\[4pt]&={\frac {\partial \ln \Gamma (\alpha )}{\partial \alpha }}-{\frac {\partial \ln \Gamma (\alpha +\beta )}{\partial \alpha }}\\[4pt]&=\psi (\alpha )-\psi (\alpha +\beta )\end{aligned}}}

where ψ is the digamma function.

Therefore, the geometric mean of a beta distribution with shape parameters α and β is the exponential of the digamma functions of α and β as follows:

${\displaystyle G_{X}=e^{\operatorname {E} [\ln X]}=e^{\psi (\alpha )-\psi (\alpha +\beta )}}$

While for a beta distribution with equal shape parameters α = β, it follows that skewness = 0 and mode = mean = median = 1/2, the geometric mean is less than 1/2: 0 < GX < 1/2. The reason for this is that the logarithmic transformation strongly weights the values of X close to zero, as ln(X) strongly tends towards negative infinity as X approaches zero, while ln(X) flattens towards zero as X → 1.

Along a line α = β, the following limits apply:

{\displaystyle {\begin{aligned}&\lim _{\alpha =\beta \to 0}G_{X}=0\\&\lim _{\alpha =\beta \to \infty }G_{X}={\tfrac {1}{2}}\end{aligned}}}

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}G_{X}=\lim _{\alpha \to \infty }G_{X}=1\\\lim _{\alpha \to 0}G_{X}=\lim _{\beta \to \infty }G_{X}=0\end{aligned}}}

The accompanying plot shows the difference between the mean and the geometric mean for shape parameters α and β from zero to 2. Besides the fact that the difference between them approaches zero as α and β approach infinity and that the difference becomes large for values of α and β approaching zero, one can observe an evident asymmetry of the geometric mean with respect to the shape parameters α and β. The difference between the geometric mean and the mean is larger for small values of α in relation to β than when exchanging the magnitudes of β and α.

N. L. Johnson and S. Kotz [1] suggest the logarithmic approximation to the digamma function ψ(α) ≈ ln(α − 1/2), which results in the following approximation to the geometric mean:

${\displaystyle G_{X}\approx {\frac {\alpha \,-{\frac {1}{2}}}{\alpha +\beta -{\frac {1}{2}}}}{\text{ if }}\alpha ,\beta >1.}$

Numerical values for the relative error in this approximation follow: [(α = β = 1): 9.39%]; [(α = β = 2): 1.29%]; [(α = 2, β = 3): 1.51%]; [(α = 3, β = 2): 0.44%]; [(α = β = 3): 0.51%]; [(α = β = 4): 0.26%]; [(α = 3, β = 4): 0.55%]; [(α = 4, β = 3): 0.24%].

Similarly, one can calculate the value of shape parameters required for the geometric mean to equal 1/2. Given the value of the parameter β, what would be the value of the other parameter, α, required for the geometric mean to equal 1/2? The answer is that (for β > 1), the value of α required tends towards β + 1/2 as β → ∞. For example, all these couples have the same geometric mean of 1/2: [β = 1, α = 1.4427], [β = 2, α = 2.46958], [β = 3, α = 3.47943], [β = 4, α = 4.48449], [β = 5, α = 5.48756], [β = 10, α = 10.4938], [β = 100, α = 100.499].
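These pairs can be reproduced by root finding on ψ(α) − ψ(α + β) = ln(1/2); the sketch below (assuming SciPy is available; the bracketing interval is an arbitrary wide choice) solves for α given β:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

def alpha_for_geometric_mean_half(b):
    """Solve psi(a) - psi(a + b) = ln(1/2) for a, i.e. G_X = 1/2."""
    f = lambda a: digamma(a) - digamma(a + b) - np.log(0.5)
    return brentq(f, 1e-6, 1e6)   # f < 0 near 0 and > 0 for large a, so a root is bracketed

for b in [1, 2, 3, 4, 5, 10, 100]:
    print(b, alpha_for_geometric_mean_half(b))   # approaches b + 1/2 as b grows
```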

The fundamental property of the geometric mean, which can be proven to be false for any other mean, is

${\displaystyle G\left({\frac {X_{i}}{Y_{i}}}\right)={\frac {G(X_{i})}{G(Y_{i})}}}$

This makes the geometric mean the only correct mean when averaging normalized results, that is, results that are presented as ratios to reference values. [14] This is relevant because the beta distribution is a suitable model for the random behavior of percentages and it is particularly suitable to the statistical modelling of proportions. The geometric mean plays a central role in maximum likelihood estimation; see section "Parameter estimation, maximum likelihood." Actually, when performing maximum likelihood estimation, besides the geometric mean GX based on the random variable X, also another geometric mean appears naturally: the geometric mean based on the linear transformation (1 − X), the mirror-image of X, denoted by G(1−X):

${\displaystyle G_{(1-X)}=e^{\operatorname {E} [\ln(1-X)]}=e^{\psi (\beta )-\psi (\alpha +\beta )}}$

Along a line α = β, the following limits apply:

{\displaystyle {\begin{aligned}&\lim _{\alpha =\beta \to 0}G_{(1-X)}=0\\&\lim _{\alpha =\beta \to \infty }G_{(1-X)}={\tfrac {1}{2}}\end{aligned}}}

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}G_{(1-X)}=\lim _{\alpha \to \infty }G_{(1-X)}=0\\\lim _{\alpha \to 0}G_{(1-X)}=\lim _{\beta \to \infty }G_{(1-X)}=1\end{aligned}}}

It has the following approximate value:

${\displaystyle G_{(1-X)}\approx {\frac {\beta -{\frac {1}{2}}}{\alpha +\beta -{\frac {1}{2}}}}{\text{ if }}\alpha ,\beta >1.}$

Although both GX and G(1−X) are asymmetric, in the case that both shape parameters are equal α = β, the geometric means are equal: GX = G(1−X). This equality follows from the following symmetry displayed between both geometric means:

${\displaystyle G_{X}(\mathrm {B} (\alpha ,\beta ))=G_{(1-X)}(\mathrm {B} (\beta ,\alpha )).}$

#### Harmonic mean

The inverse of the harmonic mean (HX) of a distribution with random variable X is the arithmetic mean of 1/X, or, equivalently, its expected value. Therefore, the harmonic mean (HX) of a beta distribution with shape parameters α and β is:

{\displaystyle {\begin{aligned}H_{X}&={\frac {1}{\operatorname {E} \left[{\frac {1}{X}}\right]}}\\&={\frac {1}{\int _{0}^{1}{\frac {f(x;\alpha ,\beta )}{x}}\,dx}}\\&={\frac {1}{\int _{0}^{1}{\frac {x^{\alpha -1}(1-x)^{\beta -1}}{x\mathrm {B} (\alpha ,\beta )}}\,dx}}\\&={\frac {\alpha -1}{\alpha +\beta -1}}{\text{ if }}\alpha >1{\text{ and }}\beta >0\\\end{aligned}}}

The harmonic mean (HX) of a Beta distribution with α < 1 is undefined, because its defining expression is not bounded in [0, 1] for shape parameter α less than unity.

Letting α = β in the above expression one obtains

${\displaystyle H_{X}={\frac {\alpha -1}{2\alpha -1}},}$

showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to 0}H_{X}{\text{ is undefined}}\\&\lim _{\alpha \to 1}H_{X}=\lim _{\beta \to \infty }H_{X}=0\\&\lim _{\beta \to 0}H_{X}=\lim _{\alpha \to \infty }H_{X}=1\end{aligned}}}

The harmonic mean plays a role in maximum likelihood estimation for the four parameter case, in addition to the geometric mean. Actually, when performing maximum likelihood estimation for the four parameter case, besides the harmonic mean HX based on the random variable X, also another harmonic mean appears naturally: the harmonic mean based on the linear transformation (1 − X), the mirror-image of X, denoted by H1−X:

${\displaystyle H_{1-X}={\frac {1}{\operatorname {E} \left[{\frac {1}{1-X}}\right]}}={\frac {\beta -1}{\alpha +\beta -1}}{\text{ if }}\beta >1,{\text{ and }}\alpha >0.}$

The harmonic mean (H(1−X)) of a Beta distribution with β < 1 is undefined, because its defining expression is not bounded in [0, 1] for shape parameter β less than unity.

Letting α = β in the above expression one obtains

${\displaystyle H_{(1-X)}={\frac {\beta -1}{2\beta -1}},}$

showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}&\lim _{\beta \to 0}H_{1-X}{\text{ is undefined}}\\&\lim _{\beta \to 1}H_{1-X}=\lim _{\alpha \to \infty }H_{1-X}=0\\&\lim _{\alpha \to 0}H_{1-X}=\lim _{\beta \to \infty }H_{1-X}=1\end{aligned}}}

Although both HX and H1−X are asymmetric, in the case that both shape parameters are equal α = β, the harmonic means are equal: HX = H1−X. This equality follows from the following symmetry displayed between both harmonic means:

${\displaystyle H_{X}(\mathrm {B} (\alpha ,\beta ))=H_{1-X}(\mathrm {B} (\beta ,\alpha )){\text{ if }}\alpha ,\beta >1.}$
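A quick numerical check of the closed form H_X = (α − 1)/(α + β − 1) (a sketch assuming SciPy is available; α = 3, β = 2 is an arbitrary choice with α > 1):

```python
from scipy.integrate import quad
from scipy.stats import beta

a, b = 3.0, 2.0                                            # requires α > 1 for E[1/X] to be finite
e_inv_x, _ = quad(lambda x: beta.pdf(x, a, b) / x, 0, 1)   # E[1/X] by numerical integration
print(1 / e_inv_x)                                         # harmonic mean from the definition
print((a - 1) / (a + b - 1))                               # closed form; the two values agree
```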

### Measures of statistical dispersion

#### Variance

The variance (the second moment centered on the mean) of a Beta distribution random variable X with parameters α and β is: [1] [15]

${\displaystyle \operatorname {var} (X)=\operatorname {E} [(X-\mu )^{2}]={\frac {\alpha \beta }{(\alpha +\beta )^{2}(\alpha +\beta +1)}}}$

Letting α = β in the above expression one obtains

${\displaystyle \operatorname {var} (X)={\frac {1}{4(2\beta +1)}},}$

showing that for α = β the variance decreases monotonically as α = β increases. Letting α = β → 0 in this expression, one finds the maximum variance var(X) = 1/4, [1] which is attained only in this limit.

The beta distribution may also be parametrized in terms of its mean μ (0 < μ < 1) and sample size ν = α + β (ν > 0) (see section below titled "Mean and sample size"):

{\displaystyle {\begin{aligned}\alpha &=\mu \nu ,{\text{ where }}\nu =(\alpha +\beta )>0\\\beta &=(1-\mu )\nu ,{\text{ where }}\nu =(\alpha +\beta )>0.\end{aligned}}}

Using this parametrization, one can express the variance in terms of the mean μ and the sample size ν as follows:

${\displaystyle \operatorname {var} (X)={\frac {\mu (1-\mu )}{1+\nu }}}$

Since ν = (α + β) > 0, it must follow that var(X) < μ(1 − μ).

For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore:

${\displaystyle \operatorname {var} (X)={\frac {1}{4(1+\nu )}}{\text{ if }}\mu ={\tfrac {1}{2}}}$

Also, the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

{\displaystyle {\begin{aligned}&\lim _{\beta \to 0}\operatorname {var} (X)=\lim _{\alpha \to 0}\operatorname {var} (X)=\lim _{\beta \to \infty }\operatorname {var} (X)=\lim _{\alpha \to \infty }\operatorname {var} (X)=\lim _{\nu \to \infty }\operatorname {var} (X)=\lim _{\mu \to 0}\operatorname {var} (X)=\lim _{\mu \to 1}\operatorname {var} (X)=0\\&\lim _{\nu \to 0}\operatorname {var} (X)=\mu (1-\mu )\end{aligned}}}
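The mean/sample-size parametrization can be exercised directly (a sketch assuming SciPy is available; μ = 0.3 and ν = 8 are arbitrary values):

```python
from scipy.stats import beta

mu, nu = 0.3, 8.0                             # arbitrary mean and "sample size"
a, b = mu * nu, (1 - mu) * nu                 # α = μν, β = (1 − μ)ν

print(beta.var(a, b))                         # variance from SciPy
print(mu * (1 - mu) / (1 + nu))               # μ(1 − μ)/(1 + ν)
print(a * b / ((a + b)**2 * (a + b + 1)))     # original α, β form; all three agree
```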

#### Geometric variance and covariance

The logarithm of the geometric variance, ln(varGX), of a distribution with random variable X is the second moment of the logarithm of X centered on the geometric mean of X, ln(GX):

{\displaystyle {\begin{aligned}\ln \operatorname {var} _{GX}&=\operatorname {E} \left[(\ln X-\ln G_{X})^{2}\right]\\&=\operatorname {E} \left[(\ln X-\operatorname {E} [\ln X])^{2}\right]\\&=\operatorname {E} \left[(\ln X)^{2}\right]-(\operatorname {E} [\ln X])^{2}\\&=\operatorname {var} [\ln X]\end{aligned}}}

and therefore, the geometric variance is:

${\displaystyle \operatorname {var} _{GX}=e^{\operatorname {var} [\ln X]}}$

In the Fisher information matrix, and the curvature of the log likelihood function, the logarithm of the geometric variance of the reflected variable 1  X and the logarithm of the geometric covariance between X and 1  X appear:

{\displaystyle {\begin{aligned}\ln \operatorname {var_{G(1-X)}} &=\operatorname {E} [(\ln(1-X)-\ln G_{1-X})^{2}]\\&=\operatorname {E} [(\ln(1-X)-\operatorname {E} [\ln(1-X)])^{2}]\\&=\operatorname {E} [(\ln(1-X))^{2}]-(\operatorname {E} [\ln(1-X)])^{2}\\&=\operatorname {var} [\ln(1-X)]\\&\\\operatorname {var_{G(1-X)}} &=e^{\operatorname {var} [\ln(1-X)]}\\&\\\ln \operatorname {cov_{G{X,1-X}}} &=\operatorname {E} [(\ln X-\ln G_{X})(\ln(1-X)-\ln G_{1-X})]\\&=\operatorname {E} [(\ln X-\operatorname {E} [\ln X])(\ln(1-X)-\operatorname {E} [\ln(1-X)])]\\&=\operatorname {E} \left[\ln X\ln(1-X)\right]-\operatorname {E} [\ln X]\operatorname {E} [\ln(1-X)]\\&=\operatorname {cov} [\ln X,\ln(1-X)]\\&\\\operatorname {cov} _{G{X,(1-X)}}&=e^{\operatorname {cov} [\ln X,\ln(1-X)]}\end{aligned}}}

For a beta distribution, higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two Gamma distributions and differentiating through the integral. They can be expressed in terms of higher order poly-gamma functions. See the section titled "Other moments, Moments of transformed random variables, Moments of logarithmically transformed random variables". The variance of the logarithmic variables and covariance of ln X and ln(1−X) are:

${\displaystyle \operatorname {var} [\ln X]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \operatorname {var} [\ln(1-X)]=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \operatorname {cov} [\ln X,\ln(1-X)]=-\psi _{1}(\alpha +\beta )}$

where the trigamma function, denoted ψ1(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function:

${\displaystyle \psi _{1}(\alpha )={\frac {d^{2}\ln \Gamma (\alpha )}{d\alpha ^{2}}}={\frac {d\,\psi (\alpha )}{d\alpha }}.}$

Therefore,

${\displaystyle \ln \operatorname {var} _{GX}=\operatorname {var} [\ln X]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \ln \operatorname {var} _{G(1-X)}=\operatorname {var} [\ln(1-X)]=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \ln \operatorname {cov} _{GX,1-X}=\operatorname {cov} [\ln X,\ln(1-X)]=-\psi _{1}(\alpha +\beta )}$
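These quantities are straightforward to evaluate with the trigamma function; a minimal sketch (assuming SciPy is available, where polygamma(1, ·) is ψ1; the parameter values are arbitrary):

```python
from scipy.special import polygamma

def log_geometric_moments(a, b):
    trigamma = lambda z: polygamma(1, z)            # ψ1, the trigamma function
    ln_var_gx = trigamma(a) - trigamma(a + b)       # var[ln X]
    ln_var_g1mx = trigamma(b) - trigamma(a + b)     # var[ln(1 − X)]
    ln_cov = -trigamma(a + b)                       # cov[ln X, ln(1 − X)], always negative
    return ln_var_gx, ln_var_g1mx, ln_cov

print(log_geometric_moments(0.5, 0.5))   # large magnitudes for α, β < 1
print(log_geometric_moments(5.0, 5.0))   # values close to zero for larger α, β
```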

The accompanying plots show the log geometric variances and log geometric covariance versus the shape parameters α and β. The plots show that the log geometric variances and log geometric covariance are close to zero for shape parameters α and β greater than 2, and that the log geometric variances rapidly rise in value for shape parameter values α and β less than unity. The log geometric variances are positive for all values of the shape parameters. The log geometric covariance is negative for all values of the shape parameters, and it reaches large negative values for α and β less than unity.

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to 0}\ln \operatorname {var} _{GX}=\lim _{\beta \to 0}\ln \operatorname {var} _{G(1-X)}=\infty \\&\lim _{\beta \to 0}\ln \operatorname {var} _{GX}=\lim _{\alpha \to \infty }\ln \operatorname {var} _{GX}=\lim _{\alpha \to 0}\ln \operatorname {var} _{G(1-X)}=\lim _{\beta \to \infty }\ln \operatorname {var} _{G(1-X)}=\lim _{\alpha \to \infty }\ln \operatorname {cov} _{GX,(1-X)}=\lim _{\beta \to \infty }\ln \operatorname {cov} _{GX,(1-X)}=0\\&\lim _{\beta \to \infty }\ln \operatorname {var} _{GX}=\psi _{1}(\alpha )\\&\lim _{\alpha \to \infty }\ln \operatorname {var} _{G(1-X)}=\psi _{1}(\beta )\\&\lim _{\alpha \to 0}\ln \operatorname {cov} _{GX,(1-X)}=-\psi _{1}(\beta )\\&\lim _{\beta \to 0}\ln \operatorname {cov} _{GX,(1-X)}=-\psi _{1}(\alpha )\end{aligned}}}

Limits with two parameters varying:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to \infty }(\lim _{\beta \to \infty }\ln \operatorname {var} _{GX})=\lim _{\beta \to \infty }(\lim _{\alpha \to \infty }\ln \operatorname {var} _{G(1-X)})=\lim _{\alpha \to \infty }(\lim _{\beta \to 0}\ln \operatorname {cov} _{GX,(1-X)})=\lim _{\beta \to \infty }(\lim _{\alpha \to 0}\ln \operatorname {cov} _{GX,(1-X)})=0\\&\lim _{\alpha \to \infty }(\lim _{\beta \to 0}\ln \operatorname {var} _{GX})=\lim _{\beta \to \infty }(\lim _{\alpha \to 0}\ln \operatorname {var} _{G(1-X)})=\infty \\&\lim _{\alpha \to 0}(\lim _{\beta \to 0}\ln \operatorname {cov} _{GX,(1-X)})=\lim _{\beta \to 0}(\lim _{\alpha \to 0}\ln \operatorname {cov} _{GX,(1-X)})=-\infty \end{aligned}}}

Although both ln(varGX) and ln(varG(1−X)) are asymmetric, when the shape parameters are equal, α = β, one has: ln(varGX) = ln(varG(1−X)). This equality follows from the following symmetry displayed between both log geometric variances:

${\displaystyle \ln \operatorname {var} _{GX}(\mathrm {B} (\alpha ,\beta ))=\ln \operatorname {var} _{G(1-X)}(\mathrm {B} (\beta ,\alpha )).}$

The log geometric covariance is symmetric:

${\displaystyle \ln \operatorname {cov} _{GX,(1-X)}(\mathrm {B} (\alpha ,\beta ))=\ln \operatorname {cov} _{GX,(1-X)}(\mathrm {B} (\beta ,\alpha ))}$

#### Mean absolute deviation around the mean

The mean absolute deviation around the mean for the beta distribution with shape parameters α and β is: [6]

${\displaystyle \operatorname {E} [|X-E[X]|]={\frac {2\alpha ^{\alpha }\beta ^{\beta }}{\mathrm {B} (\alpha ,\beta )(\alpha +\beta )^{\alpha +\beta +1}}}}$

The mean absolute deviation around the mean is a more robust estimator of statistical dispersion than the standard deviation for beta distributions with tails and inflection points at each side of the mode, Beta(α, β) distributions with α, β > 2, as it depends on the linear (absolute) deviations rather than the square deviations from the mean. Therefore, the effect of very large deviations from the mean is not as heavily weighted.

Using Stirling's approximation to the Gamma function, N. L. Johnson and S. Kotz [1] derived the following approximation for values of the shape parameters greater than unity (the relative error for this approximation is only −3.5% for α = β = 1, and it decreases to zero as α → ∞, β → ∞):

{\displaystyle {\begin{aligned}{\frac {\text{mean abs. dev. from mean}}{\text{standard deviation}}}&={\frac {\operatorname {E} [|X-E[X]|]}{\sqrt {\operatorname {var} (X)}}}\\&\approx {\sqrt {\frac {2}{\pi }}}\left(1+{\frac {7}{12(\alpha +\beta )}}{}-{\frac {1}{12\alpha }}-{\frac {1}{12\beta }}\right),{\text{ if }}\alpha ,\beta >1.\end{aligned}}}

At the limit α → ∞, β → ∞, the ratio of the mean absolute deviation to the standard deviation (for the beta distribution) becomes equal to the ratio of the same measures for the normal distribution: ${\displaystyle {\sqrt {\frac {2}{\pi }}}}$. For α = β = 1 this ratio equals ${\displaystyle {\frac {\sqrt {3}}{2}}}$, so that from α = β = 1 to α, β → ∞ the ratio decreases by 8.5%. For α = β = 0 the standard deviation is exactly equal to the mean absolute deviation around the mean. Therefore, this ratio decreases by 15% from α = β = 0 to α = β = 1, and by 25% from α = β = 0 to α, β → ∞ . However, for skewed beta distributions such that α → 0 or β → 0, the ratio of the standard deviation to the mean absolute deviation approaches infinity (although each of them, individually, approaches zero) because the mean absolute deviation approaches zero faster than the standard deviation.
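The exact ratio and the approximation above can be compared numerically (a sketch assuming NumPy and SciPy are available; α = 4, β = 6 is an arbitrary pair with α, β > 1):

```python
import numpy as np
from scipy.special import beta as beta_fn
from scipy.stats import beta

def mad_over_std(a, b):
    # exact mean absolute deviation about the mean: 2 α^α β^β / (B(α, β) (α + β)^(α + β + 1))
    mad = 2 * a**a * b**b / (beta_fn(a, b) * (a + b)**(a + b + 1))
    return mad / np.sqrt(beta.var(a, b))

a, b = 4.0, 6.0
print(mad_over_std(a, b))
print(np.sqrt(2 / np.pi) * (1 + 7 / (12 * (a + b)) - 1 / (12 * a) - 1 / (12 * b)))
```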

Using the parametrization in terms of mean μ and sample size ν = α + β > 0:

α = μν, β = (1−μ)ν

one can express the mean absolute deviation around the mean in terms of the mean μ and the sample size ν as follows:

${\displaystyle \operatorname {E} [|X-E[X]|]={\frac {2\mu ^{\mu \nu }(1-\mu )^{(1-\mu )\nu }}{\nu \mathrm {B} (\mu \nu ,(1-\mu )\nu )}}}$

For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore:

{\displaystyle {\begin{aligned}\operatorname {E} [|X-E[X]|]={\frac {2^{1-\nu }}{\nu \mathrm {B} ({\tfrac {\nu }{2}},{\tfrac {\nu }{2}})}}&={\frac {2^{1-\nu }\Gamma (\nu )}{\nu (\Gamma ({\tfrac {\nu }{2}}))^{2}}}\\\lim _{\nu \to 0}\left(\lim _{\mu \to {\frac {1}{2}}}\operatorname {E} [|X-E[X]|]\right)&={\tfrac {1}{2}}\\\lim _{\nu \to \infty }\left(\lim _{\mu \to {\frac {1}{2}}}\operatorname {E} [|X-E[X]|]\right)&=0\end{aligned}}}

Also, the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}\operatorname {E} [|X-E[X]|]&=\lim _{\alpha \to 0}\operatorname {E} [|X-E[X]|]=0\\\lim _{\beta \to \infty }\operatorname {E} [|X-E[X]|]&=\lim _{\alpha \to \infty }\operatorname {E} [|X-E[X]|]=0\\\lim _{\mu \to 0}\operatorname {E} [|X-E[X]|]&=\lim _{\mu \to 1}\operatorname {E} [|X-E[X]|]=0\\\lim _{\nu \to 0}\operatorname {E} [|X-E[X]|]&=2\mu (1-\mu )\\\lim _{\nu \to \infty }\operatorname {E} [|X-E[X]|]&=0\end{aligned}}}

#### Mean absolute difference

The mean absolute difference for the Beta distribution is:

${\displaystyle \mathrm {MD} =\int _{0}^{1}\int _{0}^{1}f(x;\alpha ,\beta )\,f(y;\alpha ,\beta )\,|x-y|\,dx\,dy=\left({\frac {4}{\alpha +\beta }}\right){\frac {B(\alpha +\beta ,\alpha +\beta )}{B(\alpha ,\alpha )B(\beta ,\beta )}}}$

The Gini coefficient for the Beta distribution is half of the relative mean absolute difference:

${\displaystyle \mathrm {G} =\left({\frac {2}{\alpha }}\right){\frac {B(\alpha +\beta ,\alpha +\beta )}{B(\alpha ,\alpha )B(\beta ,\beta )}}}$
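Both formulas can be checked by simulation (a minimal Monte Carlo sketch assuming NumPy and SciPy are available; the parameters, seed, and sample size are arbitrary):

```python
import numpy as np
from scipy.special import beta as beta_fn

rng = np.random.default_rng(0)
a, b, n = 2.0, 5.0, 1_000_000
x, y = rng.beta(a, b, n), rng.beta(a, b, n)       # two independent Beta(α, β) samples

md_mc = np.mean(np.abs(x - y))                    # Monte Carlo estimate of E|X − Y|
md_exact = (4 / (a + b)) * beta_fn(a + b, a + b) / (beta_fn(a, a) * beta_fn(b, b))
gini_exact = (2 / a) * beta_fn(a + b, a + b) / (beta_fn(a, a) * beta_fn(b, b))

print(md_mc, md_exact)                            # approximately equal
print(gini_exact, md_exact / (2 * a / (a + b)))   # Gini = MD / (2 · mean)
```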

### Skewness

The skewness (the third moment centered on the mean, normalized by the 3/2 power of the variance) of the beta distribution is [1]

${\displaystyle \gamma _{1}={\frac {\operatorname {E} [(X-\mu )^{3}]}{(\operatorname {var} (X))^{3/2}}}={\frac {2(\beta -\alpha ){\sqrt {\alpha +\beta +1}}}{(\alpha +\beta +2){\sqrt {\alpha \beta }}}}.}$

Letting α = β in the above expression one obtains γ1 = 0, showing once again that for α = β the distribution is symmetric and hence the skewness is zero. The skewness is positive (right-tailed) for α < β and negative (left-tailed) for α > β.
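The closed form can be compared against SciPy's built-in skewness (a sketch assuming NumPy and SciPy are available; the parameter pairs are arbitrary):

```python
import numpy as np
from scipy.stats import beta

def skewness(a, b):
    return 2 * (b - a) * np.sqrt(a + b + 1) / ((a + b + 2) * np.sqrt(a * b))

for a, b in [(2.0, 2.0), (2.0, 5.0), (5.0, 2.0)]:
    # zero for α = β, positive for α < β, negative for α > β
    print((a, b), skewness(a, b), beta.stats(a, b, moments='s'))
```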

Using the parametrization in terms of mean μ and sample size ν = α + β:

{\displaystyle {\begin{aligned}\alpha &{}=\mu \nu ,{\text{ where }}\nu =(\alpha +\beta )>0\\\beta &{}=(1-\mu )\nu ,{\text{ where }}\nu =(\alpha +\beta )>0.\end{aligned}}}

one can express the skewness in terms of the mean μ and the sample size ν as follows:

${\displaystyle \gamma _{1}={\frac {\operatorname {E} [(X-\mu )^{3}]}{(\operatorname {var} (X))^{3/2}}}={\frac {2(1-2\mu ){\sqrt {1+\nu }}}{(2+\nu ){\sqrt {\mu (1-\mu )}}}}.}$

The skewness can also be expressed just in terms of the variance var and the mean μ as follows:

${\displaystyle \gamma _{1}={\frac {\operatorname {E} [(X-\mu )^{3}]}{(\operatorname {var} (X))^{3/2}}}={\frac {2(1-2\mu ){\sqrt {\text{ var }}}}{\mu (1-\mu )+\operatorname {var} }}{\text{ if }}\operatorname {var} <\mu (1-\mu )}$

The accompanying plot of skewness as a function of variance and mean shows that maximum variance (1/4) is coupled with zero skewness and the symmetry condition (μ = 1/2), and that maximum skewness (positive or negative infinity) occurs when the mean is located at one end or the other, so that the "mass" of the probability distribution is concentrated at the ends (minimum variance).

The following expression for the square of the skewness, in terms of the sample size ν = α + β and the variance var, is useful for the method of moments estimation of four parameters:

${\displaystyle (\gamma _{1})^{2}={\frac {(\operatorname {E} [(X-\mu )^{3}])^{2}}{(\operatorname {var} (X))^{3}}}={\frac {4}{(2+\nu )^{2}}}{\bigg (}{\frac {1}{\text{var}}}-4(1+\nu ){\bigg )}}$

This expression correctly gives a skewness of zero for α = β, since in that case (see section titled "Variance"): ${\displaystyle \operatorname {var} ={\frac {1}{4(1+\nu )}}}$.

For the symmetric case (α = β), skewness = 0 over the whole range, and the following limits apply:

${\displaystyle \lim _{\alpha =\beta \to 0}\gamma _{1}=\lim _{\alpha =\beta \to \infty }\gamma _{1}=\lim _{\nu \to 0}\gamma _{1}=\lim _{\nu \to \infty }\gamma _{1}=\lim _{\mu \to {\frac {1}{2}}}\gamma _{1}=0}$

For the asymmetric cases (α ≠ β) the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to 0}\gamma _{1}=\lim _{\mu \to 0}\gamma _{1}=\infty \\&\lim _{\beta \to 0}\gamma _{1}=\lim _{\mu \to 1}\gamma _{1}=-\infty \\&\lim _{\alpha \to \infty }\gamma _{1}=-{\frac {2}{\sqrt {\beta }}},\quad \lim _{\beta \to 0}(\lim _{\alpha \to \infty }\gamma _{1})=-\infty ,\quad \lim _{\beta \to \infty }(\lim _{\alpha \to \infty }\gamma _{1})=0\\&\lim _{\beta \to \infty }\gamma _{1}={\frac {2}{\sqrt {\alpha }}},\quad \lim _{\alpha \to 0}(\lim _{\beta \to \infty }\gamma _{1})=\infty ,\quad \lim _{\alpha \to \infty }(\lim _{\beta \to \infty }\gamma _{1})=0\\&\lim _{\nu \to 0}\gamma _{1}={\frac {1-2\mu }{\sqrt {\mu (1-\mu )}}},\quad \lim _{\mu \to 0}(\lim _{\nu \to 0}\gamma _{1})=\infty ,\quad \lim _{\mu \to 1}(\lim _{\nu \to 0}\gamma _{1})=-\infty \end{aligned}}}

### Kurtosis

The beta distribution has been applied in acoustic analysis to assess damage to gears, as the kurtosis of the beta distribution has been reported to be a good indicator of the condition of a gear. [16] Kurtosis has also been used to distinguish the seismic signal generated by a person's footsteps from other signals. As persons or other targets moving on the ground generate continuous signals in the form of seismic waves, one can separate different targets based on the seismic waves they generate. Kurtosis is sensitive to impulsive signals, so it's much more sensitive to the signal generated by human footsteps than other signals generated by vehicles, winds, noise, etc. [17] Unfortunately, the notation for kurtosis has not been standardized. Kenney and Keeping [18] use the symbol γ2 for the excess kurtosis, but Abramowitz and Stegun [19] use different terminology. To prevent confusion [20] between kurtosis (the fourth moment centered on the mean, normalized by the square of the variance) and excess kurtosis, when using symbols, they will be spelled out as follows: [6] [7]

{\displaystyle {\begin{aligned}{\text{excess kurtosis}}&={\text{kurtosis}}-3\\&={\frac {\operatorname {E} [(X-\mu )^{4}]}{(\operatorname {var} (X))^{2}}}-3\\&={\frac {6[\alpha ^{3}-\alpha ^{2}(2\beta -1)+\beta ^{2}(\beta +1)-2\alpha \beta (\beta +2)]}{\alpha \beta (\alpha +\beta +2)(\alpha +\beta +3)}}\\&={\frac {6[(\alpha -\beta )^{2}(\alpha +\beta +1)-\alpha \beta (\alpha +\beta +2)]}{\alpha \beta (\alpha +\beta +2)(\alpha +\beta +3)}}.\end{aligned}}}

Letting α = β in the above expression one obtains

${\displaystyle {\text{excess kurtosis}}=-{\frac {6}{3+2\alpha }}{\text{ if }}\alpha =\beta }$.

Therefore, for symmetric beta distributions, the excess kurtosis is negative, increasing from a minimum value of −2 at the limit as {α = β} → 0, and approaching a maximum value of zero as {α = β} → ∞. The value of −2 is the minimum value of excess kurtosis that any distribution (not just beta distributions, but any distribution of any possible kind) can ever achieve. This minimum value is reached when all the probability density is entirely concentrated at each end x = 0 and x = 1, with nothing in between: a 2-point Bernoulli distribution with equal probability 1/2 at each end (a coin toss: see section below "Kurtosis bounded by the square of the skewness" for further discussion). The description of kurtosis as a measure of the "potential outliers" (or "potential rare, extreme values") of the probability distribution is correct for all distributions including the beta distribution. The more that rare, extreme values can occur in the beta distribution, the higher its kurtosis; otherwise, the kurtosis is lower. For α ≠ β, skewed beta distributions, the excess kurtosis can reach unlimited positive values (particularly for α → 0 for finite β, or for β → 0 for finite α) because the side away from the mode will produce occasional extreme values. Minimum kurtosis takes place when the mass density is concentrated equally at each end (and therefore the mean is at the center), and there is no probability mass density in between the ends.
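A small numerical check of the general expression and of the symmetric-case value −6/(3 + 2α) (a sketch assuming SciPy is available; the α values are arbitrary):

```python
from scipy.stats import beta

def excess_kurtosis(a, b):
    num = 6 * ((a - b)**2 * (a + b + 1) - a * b * (a + b + 2))
    return num / (a * b * (a + b + 2) * (a + b + 3))

for a in [0.5, 1.0, 2.0, 10.0]:
    # general formula, symmetric-case value, and SciPy's excess kurtosis all agree
    print(a, excess_kurtosis(a, a), -6 / (3 + 2 * a), beta.stats(a, a, moments='k'))
```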

Using the parametrization in terms of mean μ and sample size ν = α + β:

{\displaystyle {\begin{aligned}\alpha &{}=\mu \nu ,{\text{ where }}\nu =(\alpha +\beta )>0\\\beta &{}=(1-\mu )\nu ,{\text{ where }}\nu =(\alpha +\beta )>0.\end{aligned}}}

one can express the excess kurtosis in terms of the mean μ and the sample size ν as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6}{3+\nu }}{\bigg (}{\frac {(1-2\mu )^{2}(1+\nu )}{\mu (1-\mu )(2+\nu )}}-1{\bigg )}}$

The excess kurtosis can also be expressed in terms of just the following two parameters: the variance var, and the sample size ν as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6}{(3+\nu )(2+\nu )}}\left({\frac {1}{\text{ var }}}-6-5\nu \right){\text{ if }}{\text{ var }}<\mu (1-\mu )}$

and, in terms of the variance var and the mean μ as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6{\text{ var }}(1-{\text{ var }}-5\mu (1-\mu ))}{({\text{var }}+\mu (1-\mu ))(2{\text{ var }}+\mu (1-\mu ))}}{\text{ if }}{\text{ var }}<\mu (1-\mu )}$

The plot of excess kurtosis as a function of the variance and the mean shows that the minimum value of the excess kurtosis (−2, which is the minimum possible value for excess kurtosis for any distribution) is intimately coupled with the maximum value of variance (1/4) and the symmetry condition: the mean occurring at the midpoint (μ = 1/2). This occurs for the symmetric case of α = β = 0, with zero skewness. At the limit, this is the 2 point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end x = 0 and x = 1 and zero probability everywhere else. (A coin toss: one face of the coin being x = 0 and the other face being x = 1.) Variance is maximum because the distribution is bimodal with nothing in between the two modes (spikes) at each end. Excess kurtosis is minimum: the probability density "mass" is zero at the mean and it is concentrated at the two peaks at each end. Excess kurtosis reaches the minimum possible value (for any distribution) when the probability density function has two spikes at each end: it is bi-"peaky" with nothing in between them.

On the other hand, the plot shows that for extreme skewed cases, where the mean is located near one or the other end (μ = 0 or μ = 1), the variance is close to zero, and the excess kurtosis rapidly approaches infinity when the mean of the distribution approaches either end.

Alternatively, the excess kurtosis can also be expressed in terms of just the following two parameters: the square of the skewness, and the sample size ν as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6}{3+\nu }}{\bigg (}{\frac {(2+\nu )}{4}}({\text{skewness}})^{2}-1{\bigg )}{\text{ if (skewness)}}^{2}-2<{\text{excess kurtosis}}<{\frac {3}{2}}({\text{skewness}})^{2}}$

From this last expression, one can obtain the same limits published practically a century ago by Karl Pearson in his paper, [21] for the beta distribution (see section below titled "Kurtosis bounded by the square of the skewness"). Setting α + β= ν = 0 in the above expression, one obtains Pearson's lower boundary (values for the skewness and excess kurtosis below the boundary (excess kurtosis + 2 − skewness2 = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region"). The limit of α + β = ν → ∞ determines Pearson's upper boundary.

{\displaystyle {\begin{aligned}&\lim _{\nu \to 0}{\text{excess kurtosis}}=({\text{skewness}})^{2}-2\\&\lim _{\nu \to \infty }{\text{excess kurtosis}}={\tfrac {3}{2}}({\text{skewness}})^{2}\end{aligned}}}

therefore:

${\displaystyle ({\text{skewness}})^{2}-2<{\text{excess kurtosis}}<{\tfrac {3}{2}}({\text{skewness}})^{2}}$

Values of ν = α + β such that ν ranges from zero to infinity, 0 < ν < ∞, span the whole region of the beta distribution in the plane of excess kurtosis versus squared skewness.

For the symmetric case (α = β), the following limits apply:

{\displaystyle {\begin{aligned}&\lim _{\alpha =\beta \to 0}{\text{excess kurtosis}}=-2\\&\lim _{\alpha =\beta \to \infty }{\text{excess kurtosis}}=0\\&\lim _{\mu \to {\frac {1}{2}}}{\text{excess kurtosis}}=-{\frac {6}{3+\nu }}\end{aligned}}}

For the unsymmetric cases (α ≠ β) the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to 0}{\text{excess kurtosis}}=\lim _{\beta \to 0}{\text{excess kurtosis}}=\lim _{\mu \to 0}{\text{excess kurtosis}}=\lim _{\mu \to 1}{\text{excess kurtosis}}=\infty \\&\lim _{\alpha \to \infty }{\text{excess kurtosis}}={\frac {6}{\beta }},{\text{ }}\lim _{\beta \to 0}(\lim _{\alpha \to \infty }{\text{excess kurtosis}})=\infty ,{\text{ }}\lim _{\beta \to \infty }(\lim _{\alpha \to \infty }{\text{excess kurtosis}})=0\\&\lim _{\beta \to \infty }{\text{excess kurtosis}}={\frac {6}{\alpha }},{\text{ }}\lim _{\alpha \to 0}(\lim _{\beta \to \infty }{\text{excess kurtosis}})=\infty ,{\text{ }}\lim _{\alpha \to \infty }(\lim _{\beta \to \infty }{\text{excess kurtosis}})=0\\&\lim _{\nu \to 0}{\text{excess kurtosis}}=-6+{\frac {1}{\mu (1-\mu )}},{\text{ }}\lim _{\mu \to 0}(\lim _{\nu \to 0}{\text{excess kurtosis}})=\infty ,{\text{ }}\lim _{\mu \to 1}(\lim _{\nu \to 0}{\text{excess kurtosis}})=\infty \end{aligned}}}

### Characteristic function

The characteristic function is the Fourier transform of the probability density function. The characteristic function of the beta distribution is Kummer's confluent hypergeometric function (of the first kind): [1] [19] [22]

{\displaystyle {\begin{aligned}\varphi _{X}(\alpha ;\beta ;t)&=\operatorname {E} \left[e^{itX}\right]\\&=\int _{0}^{1}e^{itx}f(x;\alpha ,\beta )dx\\&={}_{1}F_{1}(\alpha ;\alpha +\beta ;it)\!\\&=\sum _{n=0}^{\infty }{\frac {\alpha ^{(n)}(it)^{n}}{(\alpha +\beta )^{(n)}n!}}\\&=1+\sum _{k=1}^{\infty }\left(\prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}\right){\frac {(it)^{k}}{k!}}\end{aligned}}}

where

${\displaystyle x^{(n)}=x(x+1)(x+2)\cdots (x+n-1)}$

is the rising factorial, also called the "Pochhammer symbol". The value of the characteristic function for t = 0 is one:

${\displaystyle \varphi _{X}(\alpha ;\beta ;0)={}_{1}F_{1}(\alpha ;\alpha +\beta ;0)=1}$.

Also, the real and imaginary parts of the characteristic function enjoy the following symmetries with respect to the origin of variable t:

${\displaystyle {\textrm {Re}}\left[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)\right]={\textrm {Re}}\left[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)\right]}$
${\displaystyle {\textrm {Im}}\left[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)\right]=-{\textrm {Im}}\left[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)\right]}$

The symmetric case α = β simplifies the characteristic function of the beta distribution to a Bessel function, since in the special case α + β = 2α the confluent hypergeometric function (of the first kind) reduces to a Bessel function (the modified Bessel function of the first kind ${\displaystyle I_{\alpha -{\frac {1}{2}}}}$ ) using Kummer's second transformation as follows:

{\displaystyle {\begin{aligned}{}_{1}F_{1}(\alpha ;2\alpha ;it)&=e^{\frac {it}{2}}{}_{0}F_{1}\left(;\alpha +{\tfrac {1}{2}};{\frac {(it)^{2}}{16}}\right)\\&=e^{\frac {it}{2}}\left({\frac {it}{4}}\right)^{{\frac {1}{2}}-\alpha }\Gamma \left(\alpha +{\tfrac {1}{2}}\right)I_{\alpha -{\frac {1}{2}}}\left({\frac {it}{2}}\right).\end{aligned}}}

In the accompanying plots, the real part (Re) of the characteristic function of the beta distribution is displayed for symmetric (α = β) and skewed (α ≠ β) cases.
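The characteristic function can be checked by integrating E[e^{itX}] numerically and summing the hypergeometric series directly (a sketch assuming NumPy and SciPy are available; the parameter values, the value of t, and the truncation at 60 terms are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta

def cf_numeric(a, b, t):
    """E[exp(itX)], integrating the real and imaginary parts separately."""
    re, _ = quad(lambda x: np.cos(t * x) * beta.pdf(x, a, b), 0, 1)
    im, _ = quad(lambda x: np.sin(t * x) * beta.pdf(x, a, b), 0, 1)
    return complex(re, im)

def cf_series(a, b, t, terms=60):
    """Truncated series for 1F1(α; α+β; it) with rising-factorial coefficients."""
    total, coef = 1.0 + 0j, 1.0 + 0j
    for n in range(terms):
        coef *= (a + n) / (a + b + n) * (1j * t) / (n + 1)
        total += coef
    return total

a, b, t = 2.0, 3.0, 1.5
print(cf_numeric(a, b, t))
print(cf_series(a, b, t))   # the two agree; at t = 0 the value is exactly 1
```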

### Other moments

#### Moment generating function

It also follows [1] [6] that the moment generating function is

{\displaystyle {\begin{aligned}M_{X}(\alpha ;\beta ;t)&=\operatorname {E} \left[e^{tX}\right]\\[4pt]&=\int _{0}^{1}e^{tx}f(x;\alpha ,\beta )\,dx\\[4pt]&={}_{1}F_{1}(\alpha ;\alpha +\beta ;t)\\[4pt]&=\sum _{n=0}^{\infty }{\frac {\alpha ^{(n)}}{(\alpha +\beta )^{(n)}}}{\frac {t^{n}}{n!}}\\[4pt]&=1+\sum _{k=1}^{\infty }\left(\prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}\right){\frac {t^{k}}{k!}}\end{aligned}}}

In particular MX(α; β; 0) = 1.

#### Higher moments

Using the moment generating function, the k-th raw moment is given by [1] the factor

${\displaystyle \prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}}$

multiplying the (exponential series) term ${\displaystyle \left({\frac {t^{k}}{k!}}\right)}$ in the series of the moment generating function

${\displaystyle \operatorname {E} [X^{k}]={\frac {\alpha ^{(k)}}{(\alpha +\beta )^{(k)}}}=\prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}}$

where ${\displaystyle x^{(k)}}$ is the Pochhammer symbol representing the rising factorial. It can also be written in a recursive form as

${\displaystyle \operatorname {E} [X^{k}]={\frac {\alpha +k-1}{\alpha +\beta +k-1}}\operatorname {E} [X^{k-1}].}$

Since the moment generating function ${\displaystyle M_{X}(\alpha ;\beta ;\cdot )}$ has a positive radius of convergence, the beta distribution is determined by its moments. [23]
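The product formula and the recursion can be checked against SciPy's raw moments (a sketch assuming NumPy and SciPy are available; α = 2, β = 3 and the first four moments are arbitrary choices):

```python
import numpy as np
from scipy.stats import beta

def raw_moment(a, b, k):
    """E[X^k] as the product of (α + r)/(α + β + r) for r = 0, ..., k − 1."""
    return np.prod([(a + r) / (a + b + r) for r in range(k)])

a, b = 2.0, 3.0
moments = [1.0]
for k in range(1, 5):
    # recursive form: E[X^k] = (α + k − 1)/(α + β + k − 1) · E[X^(k−1)]
    moments.append((a + k - 1) / (a + b + k - 1) * moments[-1])

print([raw_moment(a, b, k) for k in range(1, 5)])
print(moments[1:])
print([beta.moment(k, a, b) for k in range(1, 5)])   # SciPy's raw (non-central) moments
```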

#### Moments of transformed random variables

##### Moments of linearly transformed, product and inverted random variables

One can also show the following expectations for a transformed random variable, [1] where the random variable X is Beta-distributed with parameters α and β: X ~ Beta(α, β). The expected value of the variable 1 − X is the mirror-symmetry of the expected value based on X:

{\displaystyle {\begin{aligned}&\operatorname {E} [1-X]={\frac {\beta }{\alpha +\beta }}\\&\operatorname {E} [X(1-X)]=\operatorname {E} [(1-X)X]={\frac {\alpha \beta }{(\alpha +\beta )(\alpha +\beta +1)}}\end{aligned}}}

Due to the mirror-symmetry of the probability density function of the beta distribution, the variances based on variables X and 1 − X are identical, and the covariance of X and (1 − X) is the negative of the variance:

${\displaystyle \operatorname {var} [(1-X)]=\operatorname {var} [X]=-\operatorname {cov} [X,(1-X)]={\frac {\alpha \beta }{(\alpha +\beta )^{2}(\alpha +\beta +1)}}}$

These are the expected values for inverted variables, (these are related to the harmonic means, see section titled "Harmonic mean"):

{\displaystyle {\begin{aligned}&\operatorname {E} \left[{\frac {1}{X}}\right]={\frac {\alpha +\beta -1}{\alpha -1}}{\text{ if }}\alpha >1\\&\operatorname {E} \left[{\frac {1}{1-X}}\right]={\frac {\alpha +\beta -1}{\beta -1}}{\text{ if }}\beta >1\end{aligned}}}

Dividing the variable X by its mirror image, giving X/(1 − X), results in the expected value of the "inverted beta distribution" or beta prime distribution (also known as the beta distribution of the second kind or Pearson's Type VI): [1]

{\displaystyle {\begin{aligned}&\operatorname {E} \left[{\frac {X}{1-X}}\right]={\frac {\alpha }{\beta -1}}{\text{ if }}\beta >1\\&\operatorname {E} \left[{\frac {1-X}{X}}\right]={\frac {\beta }{\alpha -1}}{\text{ if }}\alpha >1\end{aligned}}}

Variances of these transformed variables can be obtained by integration, as the expected values of the second moments centered on the corresponding variables:

${\displaystyle \operatorname {var} \left[{\frac {1}{X}}\right]=\operatorname {E} \left[\left({\frac {1}{X}}-\operatorname {E} \left[{\frac {1}{X}}\right]\right)^{2}\right]=}$
${\displaystyle \operatorname {var} \left[{\frac {1-X}{X}}\right]=\operatorname {E} \left[\left({\frac {1-X}{X}}-\operatorname {E} \left[{\frac {1-X}{X}}\right]\right)^{2}\right]={\frac {\beta (\alpha +\beta -1)}{(\alpha -2)(\alpha -1)^{2}}}{\text{ if }}\alpha >2}$

The variance of the variable X divided by its mirror image, X/(1 − X), is the variance of the "inverted beta distribution" or beta prime distribution (also known as the beta distribution of the second kind or Pearson's Type VI): [1]

${\displaystyle \operatorname {var} \left[{\frac {1}{1-X}}\right]=\operatorname {E} \left[\left({\frac {1}{1-X}}-\operatorname {E} \left[{\frac {1}{1-X}}\right]\right)^{2}\right]=\operatorname {var} \left[{\frac {X}{1-X}}\right]=}$
${\displaystyle \operatorname {E} \left[\left({\frac {X}{1-X}}-\operatorname {E} \left[{\frac {X}{1-X}}\right]\right)^{2}\right]={\frac {\alpha (\alpha +\beta -1)}{(\beta -2)(\beta -1)^{2}}}{\text{ if }}\beta >2}$

The covariances are:

${\displaystyle \operatorname {cov} \left[{\frac {1}{X}},{\frac {1}{1-X}}\right]=\operatorname {cov} \left[{\frac {1-X}{X}},{\frac {X}{1-X}}\right]=\operatorname {cov} \left[{\frac {1}{X}},{\frac {X}{1-X}}\right]=\operatorname {cov} \left[{\frac {1-X}{X}},{\frac {1}{1-X}}\right]={\frac {\alpha +\beta -1}{(\alpha -1)(\beta -1)}}{\text{ if }}\alpha ,\beta >1}$

These expectations and variances appear in the four-parameter Fisher information matrix (see the section titled "Fisher information", subsection "Four parameters").
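The transformed-variable expectations above are easy to spot-check numerically. The following Python sketch is illustrative only; the shape parameters are arbitrary, chosen so that every expression used is finite.

```python
# Illustrative sketch: Monte Carlo check of the closed-form expectations and
# variances of the inverted and ratio-transformed variables.
import numpy as np

rng = np.random.default_rng(0)
a, b = 4.0, 3.0                      # arbitrary example parameters (a > 1, b > 2)
x = rng.beta(a, b, size=2_000_000)

print(np.mean(1 / x),       (a + b - 1) / (a - 1))                      # E[1/X]
print(np.mean(x / (1 - x)), a / (b - 1))                                # E[X/(1-X)]
print(np.var(x / (1 - x)),  a * (a + b - 1) / ((b - 2) * (b - 1)**2))   # var[X/(1-X)]
```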

##### Moments of logarithmically transformed random variables

Expected values for logarithmic transformations (useful for maximum likelihood estimates, see section titled "Parameter estimation, Maximum likelihood" below) are discussed in this section. The following logarithmic linear transformations are related to the geometric means GX and G(1−X) (see section titled "Geometric mean"):

{\displaystyle {\begin{aligned}\operatorname {E} [\ln(X)]&=\psi (\alpha )-\psi (\alpha +\beta )=-\operatorname {E} \left[\ln \left({\frac {1}{X}}\right)\right],\\\operatorname {E} [\ln(1-X)]&=\psi (\beta )-\psi (\alpha +\beta )=-\operatorname {E} \left[\ln \left({\frac {1}{1-X}}\right)\right].\end{aligned}}}

where the digamma function ψ(α) is defined as the logarithmic derivative of the gamma function: [19]

${\displaystyle \psi (\alpha )={\frac {d\ln \Gamma (\alpha )}{d\alpha }}}$

Logit transformations are interesting, [24] as they usually transform various shapes (including J-shapes) into (usually skewed) bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable:

{\displaystyle {\begin{aligned}\operatorname {E} \left[\ln \left({\frac {X}{1-X}}\right)\right]&=\psi (\alpha )-\psi (\beta )=\operatorname {E} [\ln(X)]+\operatorname {E} \left[\ln \left({\frac {1}{1-X}}\right)\right],\\\operatorname {E} \left[\ln \left({\frac {1-X}{X}}\right)\right]&=\psi (\beta )-\psi (\alpha )=-\operatorname {E} \left[\ln \left({\frac {X}{1-X}}\right)\right].\end{aligned}}}

Johnson [25] considered the distribution of the logit-transformed variable ln(X/(1−X)), including its moment generating function and approximations for large values of the shape parameters. This transformation extends the finite support [0, 1] of the original variable X to infinite support in both directions of the real line (−∞, +∞).

Higher-order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two gamma-distributed variables and differentiating through the integral. They can be expressed in terms of higher-order polygamma functions as follows:

{\displaystyle {\begin{aligned}\operatorname {E} \left[\ln ^{2}(X)\right]&=(\psi (\alpha )-\psi (\alpha +\beta ))^{2}+\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta ),\\\operatorname {E} \left[\ln ^{2}(1-X)\right]&=(\psi (\beta )-\psi (\alpha +\beta ))^{2}+\psi _{1}(\beta )-\psi _{1}(\alpha +\beta ),\\\operatorname {E} \left[\ln(X)\ln(1-X)\right]&=(\psi (\alpha )-\psi (\alpha +\beta ))(\psi (\beta )-\psi (\alpha +\beta ))-\psi _{1}(\alpha +\beta ).\end{aligned}}}

Therefore, the variance of the logarithmic variables and the covariance of ln(X) and ln(1−X) are:

{\displaystyle {\begin{aligned}\operatorname {cov} [\ln(X),\ln(1-X)]&=\operatorname {E} \left[\ln(X)\ln(1-X)\right]-\operatorname {E} [\ln(X)]\operatorname {E} [\ln(1-X)]=-\psi _{1}(\alpha +\beta )\\&\\\operatorname {var} [\ln X]&=\operatorname {E} [\ln ^{2}(X)]-(\operatorname {E} [\ln(X)])^{2}\\&=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )\\&=\psi _{1}(\alpha )+\operatorname {cov} [\ln(X),\ln(1-X)]\\&\\\operatorname {var} [\ln(1-X)]&=\operatorname {E} [\ln ^{2}(1-X)]-(\operatorname {E} [\ln(1-X)])^{2}\\&=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )\\&=\psi _{1}(\beta )+\operatorname {cov} [\ln(X),\ln(1-X)]\end{aligned}}}

where the trigamma function, denoted ψ1(α), is the second of the polygamma functions and is defined as the derivative of the digamma function:

${\displaystyle \psi _{1}(\alpha )={\frac {d^{2}\ln \Gamma (\alpha )}{d\alpha ^{2}}}={\frac {d\psi (\alpha )}{d\alpha }}}$.

The variances and covariance of the logarithmically transformed variables X and (1−X) are different, in general, because the logarithmic transformation destroys the mirror-symmetry of the original variables X and (1−X), as the logarithm approaches negative infinity for the variable approaching zero.

These logarithmic variances and covariance are the elements of the Fisher information matrix for the beta distribution. They are also a measure of the curvature of the log likelihood function (see section on Maximum likelihood estimation).

The variances of the log inverse variables are identical to the variances of the log variables:

{\displaystyle {\begin{aligned}\operatorname {var} \left[\ln \left({\frac {1}{X}}\right)\right]&=\operatorname {var} [\ln(X)]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta ),\\\operatorname {var} \left[\ln \left({\frac {1}{1-X}}\right)\right]&=\operatorname {var} [\ln(1-X)]=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta ),\\\operatorname {cov} \left[\ln \left({\frac {1}{X}}\right),\ln \left({\frac {1}{1-X}}\right)\right]&=\operatorname {cov} [\ln(X),\ln(1-X)]=-\psi _{1}(\alpha +\beta ).\end{aligned}}}

It also follows that the variances of the logit transformed variables are:

${\displaystyle \operatorname {var} \left[\ln \left({\frac {X}{1-X}}\right)\right]=\operatorname {var} \left[\ln \left({\frac {1-X}{X}}\right)\right]=-\operatorname {cov} \left[\ln \left({\frac {X}{1-X}}\right),\ln \left({\frac {1-X}{X}}\right)\right]=\psi _{1}(\alpha )+\psi _{1}(\beta )}$
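As a numerical check (not part of the original text), the following Python sketch compares Monte Carlo estimates of the log and logit moments with the digamma/trigamma expressions above; the shape parameters are arbitrary.

```python
# Illustrative sketch: digamma/trigamma expressions for the log and logit
# moments versus Monte Carlo estimates.
import numpy as np
from scipy.special import digamma, polygamma

a, b = 2.5, 4.0                      # arbitrary example parameters
rng = np.random.default_rng(1)
x = rng.beta(a, b, size=1_000_000)

print(np.mean(np.log(x)),          digamma(a) - digamma(a + b))            # E[ln X]
print(np.var(np.log(x)),           polygamma(1, a) - polygamma(1, a + b))  # var[ln X]
print(np.var(np.log(x / (1 - x))), polygamma(1, a) + polygamma(1, b))      # var[logit X]
```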

### Quantities of information (entropy)

Given a beta-distributed random variable, X ~ Beta(α, β), the differential entropy of X (measured in nats) is [26] the expected value of the negative of the logarithm of the probability density function:

{\displaystyle {\begin{aligned}h(X)&=\operatorname {E} [-\ln(f(x;\alpha ,\beta ))]\\[4pt]&=\int _{0}^{1}-f(x;\alpha ,\beta )\ln(f(x;\alpha ,\beta ))\,dx\\[4pt]&=\ln(\mathrm {B} (\alpha ,\beta ))-(\alpha -1)\psi (\alpha )-(\beta -1)\psi (\beta )+(\alpha +\beta -2)\psi (\alpha +\beta )\end{aligned}}}

where f(x; α, β) is the probability density function of the beta distribution:

${\displaystyle f(x;\alpha ,\beta )={\frac {1}{\mathrm {B} (\alpha ,\beta )}}x^{\alpha -1}(1-x)^{\beta -1}}$

The digamma function ψ appears in the formula for the differential entropy as a consequence of Euler's integral formula for the harmonic numbers which follows from the integral:

${\displaystyle \int _{0}^{1}{\frac {1-x^{\alpha -1}}{1-x}}\,dx=\psi (\alpha )-\psi (1)}$
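The closed-form differential entropy can be checked against SciPy's built-in entropy for the beta distribution. The following sketch is illustrative only; the parameter pairs are arbitrary examples.

```python
# Illustrative sketch: closed-form differential entropy of Beta(a, b) in nats,
# compared with SciPy's built-in entropy.
from scipy.special import betaln, digamma
from scipy.stats import beta as beta_dist

def beta_differential_entropy(a, b):
    return (betaln(a, b) - (a - 1) * digamma(a) - (b - 1) * digamma(b)
            + (a + b - 2) * digamma(a + b))

for a, b in [(1, 1), (3, 3), (0.5, 3)]:   # arbitrary example parameter pairs
    print((a, b), beta_differential_entropy(a, b), beta_dist(a, b).entropy())
```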

The differential entropy of the beta distribution is negative for all values of α and β greater than zero, except at α = β = 1 (for which values the beta distribution is the same as the uniform distribution), where the differential entropy reaches its maximum value of zero. It is to be expected that the maximum entropy should take place when the beta distribution becomes equal to the uniform distribution, since uncertainty is maximal when all possible events are equiprobable.

For α or β approaching zero, the differential entropy approaches its minimum value of negative infinity. For (either or both) α or β approaching zero, there is a maximum amount of order: all the probability density is concentrated at the ends, and there is zero probability density at points located between the ends. Similarly for (either or both) α or β approaching infinity, the differential entropy approaches its minimum value of negative infinity, and a maximum amount of order. If either α or β approaches infinity (and the other is finite) all the probability density is concentrated at an end, and the probability density is zero everywhere else. If both shape parameters are equal (the symmetric case), α = β, and they approach infinity simultaneously, the probability density becomes a spike (Dirac delta function) concentrated at the middle x = 1/2, and hence there is 100% probability at the middle x = 1/2 and zero probability everywhere else.

The (continuous case) differential entropy was introduced by Shannon in his original paper (where he named it the "entropy of a continuous distribution"), as the concluding part [27] of the same paper where he defined the discrete entropy. It is known since then that the differential entropy may differ from the infinitesimal limit of the discrete entropy by an infinite offset, therefore the differential entropy can be negative (as it is for the beta distribution). What really matters is the relative value of entropy.

Given two beta-distributed random variables, X1 ~ Beta(α, β) and X2 ~ Beta(α′, β′), the cross entropy is (measured in nats) [28]

{\displaystyle {\begin{aligned}H(X_{1},X_{2})&=\int _{0}^{1}-f(x;\alpha ,\beta )\ln(f(x;\alpha ',\beta '))\,dx\\[4pt]&=\ln \left(\mathrm {B} (\alpha ',\beta ')\right)-(\alpha '-1)\psi (\alpha )-(\beta '-1)\psi (\beta )+(\alpha '+\beta '-2)\psi (\alpha +\beta ).\end{aligned}}}

The cross entropy has been used as an error metric to measure the distance between two hypotheses. [29] [30] Its absolute value is minimal when the two distributions are identical. It is the information measure most closely related to the log maximum likelihood [28] (see section on "Parameter estimation. Maximum likelihood estimation").

The relative entropy, or Kullback–Leibler divergence DKL(X1, X2), is a measure of the inefficiency of assuming that the distribution is X2 ~ Beta(α′, β′) when the distribution is really X1 ~ Beta(α, β). It is defined as follows (measured in nats):

{\displaystyle {\begin{aligned}D_{\mathrm {KL} }(X_{1},X_{2})&=\int _{0}^{1}f(x;\alpha ,\beta )\ln \left({\frac {f(x;\alpha ,\beta )}{f(x;\alpha ',\beta ')}}\right)\,dx\\[4pt]&=\left(\int _{0}^{1}f(x;\alpha ,\beta )\ln(f(x;\alpha ,\beta ))\,dx\right)-\left(\int _{0}^{1}f(x;\alpha ,\beta )\ln(f(x;\alpha ',\beta '))\,dx\right)\\[4pt]&=-h(X_{1})+H(X_{1},X_{2})\\[4pt]&=\ln \left({\frac {\mathrm {B} (\alpha ',\beta ')}{\mathrm {B} (\alpha ,\beta )}}\right)+(\alpha -\alpha ')\psi (\alpha )+(\beta -\beta ')\psi (\beta )+(\alpha '-\alpha +\beta '-\beta )\psi (\alpha +\beta ).\end{aligned}}}

The relative entropy, or Kullback–Leibler divergence, is always non-negative. A few numerical examples follow:

• X1 ~ Beta(1, 1) and X2 ~ Beta(3, 3); DKL(X1, X2) = 0.598803; DKL(X2, X1) = 0.267864; h(X1) = 0; h(X2) = −0.267864
• X1 ~ Beta(3, 0.5) and X2 ~ Beta(0.5, 3); DKL(X1, X2) = 7.21574; DKL(X2, X1) = 7.21574; h(X1) = −1.10805; h(X2) = −1.10805.
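A small Python sketch (illustrative, assuming SciPy is available) implementing the closed-form expression above should reproduce these numerical examples:

```python
# Illustrative sketch: closed-form Kullback-Leibler divergence between two
# beta distributions, D_KL( Beta(a1,b1) || Beta(a2,b2) ), in nats.
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

print(kl_beta(1, 1, 3, 3), kl_beta(3, 3, 1, 1))          # first example above
print(kl_beta(3, 0.5, 0.5, 3), kl_beta(0.5, 3, 3, 0.5))  # symmetric second example
```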

The Kullback–Leibler divergence is not symmetric DKL(X1, X2) ≠ DKL(X2, X1) for the case in which the individual beta distributions Beta(1, 1) and Beta(3, 3) are symmetric, but have different entropies h(X1) ≠ h(X2). The value of the Kullback divergence depends on the direction traveled: whether going from a higher (differential) entropy to a lower (differential) entropy or the other way around. In the numerical example above, the Kullback divergence measures the inefficiency of assuming that the distribution is (bell-shaped) Beta(3, 3), rather than (uniform) Beta(1, 1). The "h" entropy of Beta(1, 1) is higher than the "h" entropy of Beta(3, 3) because the uniform distribution Beta(1, 1) has a maximum amount of disorder. The Kullback divergence is more than two times higher (0.598803 instead of 0.267864) when measured in the direction of decreasing entropy: the direction that assumes that the (uniform) Beta(1, 1) distribution is (bell-shaped) Beta(3, 3) rather than the other way around. In this restricted sense, the Kullback divergence is consistent with the second law of thermodynamics.

The Kullback–Leibler divergence is symmetric DKL(X1, X2) = DKL(X2, X1) for the skewed cases Beta(3, 0.5) and Beta(0.5, 3) that have equal differential entropy h(X1) = h(X2).

The symmetry condition:

${\displaystyle D_{\mathrm {KL} }(X_{1},X_{2})=D_{\mathrm {KL} }(X_{2},X_{1}),{\text{ if }}h(X_{1})=h(X_{2}),{\text{ for (skewed) }}\alpha \neq \beta }$

follows from the above definitions and the mirror-symmetry f(x; α, β) = f(1−x; β, α) enjoyed by the beta distribution.

### Relationships between statistical measures

#### Mean, mode and median relationship

If 1 < α < β then mode ≤ median ≤ mean. [11] Expressing the mode (only for α, β > 1) and the mean in terms of α and β:

${\displaystyle {\frac {\alpha -1}{\alpha +\beta -2}}\leq {\text{median}}\leq {\frac {\alpha }{\alpha +\beta }},}$

If 1 < β < α then the order of the inequalities is reversed. For α, β > 1 the absolute distance between the mean and the median is less than 5% of the distance between the maximum and minimum values of x. On the other hand, the absolute distance between the mean and the mode can reach 50% of the distance between the maximum and minimum values of x for the (pathological) case of α = 1 and β = 1 (for which values the beta distribution approaches the uniform distribution and the differential entropy approaches its maximum value, and hence maximum "disorder").

For example, for α = 1.0001 and β = 1.00000001:

• mode = 0.9999; PDF(mode) = 1.00010
• mean = 0.500025; PDF(mean) = 1.00003
• median = 0.500035; PDF(median) = 1.00003
• mean − mode = −0.499875
• mean − median = −9.65538 × 10⁻⁶

(where PDF stands for the value of the probability density function)

#### Mean, geometric mean and harmonic mean relationship

It is known from the inequality of arithmetic and geometric means that the geometric mean is lower than the mean. Similarly, the harmonic mean is lower than the geometric mean. The accompanying plot shows that for α = β, both the mean and the median are exactly equal to 1/2, regardless of the value of α = β, and the mode is also equal to 1/2 for α = β > 1, however the geometric and harmonic means are lower than 1/2 and they only approach this value asymptotically as α = β → ∞.

#### Kurtosis bounded by the square of the skewness

As remarked by Feller, [5] in the Pearson system the beta probability density appears as type I (any difference between the beta distribution and Pearson's type I distribution is only superficial and it makes no difference for the following discussion regarding the relationship between kurtosis and skewness). Karl Pearson showed, in Plate 1 of his paper [21] published in 1916, a graph with the kurtosis as the vertical axis (ordinate) and the square of the skewness as the horizontal axis (abscissa), in which a number of distributions were displayed. [31] The region occupied by the beta distribution is bounded by the following two lines in the (skewness2,kurtosis) plane, or the (skewness2,excess kurtosis) plane:

${\displaystyle ({\text{skewness}})^{2}+1<{\text{kurtosis}}<{\frac {3}{2}}({\text{skewness}})^{2}+3}$

or, equivalently,

${\displaystyle ({\text{skewness}})^{2}-2<{\text{excess kurtosis}}<{\frac {3}{2}}({\text{skewness}})^{2}}$

(At a time when there were no powerful digital computers), Karl Pearson accurately computed further boundaries, [4] [21] for example, separating the "U-shaped" from the "J-shaped" distributions. The lower boundary line (excess kurtosis + 2 − skewness2 = 0) is produced by skewed "U-shaped" beta distributions with both values of shape parameters α and β close to zero. The upper boundary line (excess kurtosis − (3/2) skewness2 = 0) is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. Karl Pearson showed [21] that this upper boundary line (excess kurtosis − (3/2) skewness2 = 0) is also the intersection with Pearson's distribution III, which has unlimited support in one direction (towards positive infinity), and can be bell-shaped or J-shaped. His son, Egon Pearson, showed [31] that the region (in the kurtosis/squared-skewness plane) occupied by the beta distribution (equivalently, Pearson's distribution I) as it approaches this boundary (excess kurtosis − (3/2) skewness2 = 0) is shared with the noncentral chi-squared distribution. Karl Pearson [32] (Pearson 1895, pp. 357, 360, 373–376) also showed that the gamma distribution is a Pearson type III distribution. Hence this boundary line for Pearson's type III distribution is known as the gamma line. (This can be shown from the fact that the excess kurtosis of the gamma distribution is 6/k and the square of the skewness is 4/k, hence (excess kurtosis − (3/2) skewness2 = 0) is identically satisfied by the gamma distribution regardless of the value of the parameter "k"). Pearson later noted that the chi-squared distribution is a special case of Pearson's type III and also shares this boundary line (as it is apparent from the fact that for the chi-squared distribution the excess kurtosis is 12/k and the square of the skewness is 8/k, hence (excess kurtosis − (3/2) skewness2 = 0) is identically satisfied regardless of the value of the parameter "k"). This is to be expected, since the chi-squared distribution X ~ χ2(k) is a special case of the gamma distribution, with parametrization X ~ Γ(k/2, 1/2) where k is a positive integer that specifies the "number of degrees of freedom" of the chi-squared distribution.

An example of a beta distribution near the upper boundary (excess kurtosis − (3/2) skewness2 = 0) is given by α = 0.1, β = 1000, for which the ratio (excess kurtosis)/(skewness2) = 1.49835 approaches the upper limit of 1.5 from below. An example of a beta distribution near the lower boundary (excess kurtosis + 2 − skewness2 = 0) is given by α= 0.0001, β = 0.1, for which values the expression (excess kurtosis + 2)/(skewness2) = 1.01621 approaches the lower limit of 1 from above. In the infinitesimal limit for both α and β approaching zero symmetrically, the excess kurtosis reaches its minimum value at −2. This minimum value occurs at the point at which the lower boundary line intersects the vertical axis (ordinate). (However, in Pearson's original chart, the ordinate is kurtosis, instead of excess kurtosis, and it increases downwards rather than upwards).
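These bounds are easy to verify numerically from the closed-form skewness and excess kurtosis (given in the summary table at the top of the article). The following Python sketch is illustrative; the parameter values are the two near-boundary examples above plus an arbitrary interior case.

```python
# Illustrative sketch: check that the excess kurtosis lies strictly between
# (skewness^2 - 2) and (3/2) skewness^2, using the closed-form expressions.
import numpy as np

def beta_skewness(a, b):
    return 2 * (b - a) * np.sqrt(a + b + 1) / ((a + b + 2) * np.sqrt(a * b))

def beta_excess_kurtosis(a, b):
    num = 6 * ((a - b)**2 * (a + b + 1) - a * b * (a + b + 2))
    return num / (a * b * (a + b + 2) * (a + b + 3))

for a, b in [(0.1, 1000), (0.0001, 0.1), (2, 5)]:   # near-boundary and interior cases
    s2 = beta_skewness(a, b)**2
    k = beta_excess_kurtosis(a, b)
    print((a, b), s2 - 2 < k < 1.5 * s2, k / s2)    # ratio approaches 3/2 near the upper line
```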

Values for the skewness and excess kurtosis below the lower boundary (excess kurtosis + 2 − skewness2 = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region." The boundary for this "impossible region" is determined by (symmetric or skewed) bimodal "U"-shaped distributions for which parameters α and β approach zero and hence all the probability density is concentrated at the ends: x = 0, 1 with practically nothing in between them. Since for α ≈ β ≈ 0 the probability density is concentrated at the two ends x = 0 and x = 1, this "impossible boundary" is determined by a 2-point distribution: the probability can only take 2 values (Bernoulli distribution), one value with probability p and the other with probability q = 1−p. For cases approaching this limit boundary with symmetry α = β, skewness ≈ 0, excess kurtosis ≈ −2 (this is the lowest excess kurtosis possible for any distribution), and the probabilities are p ≈ q ≈ 1/2. For cases approaching this limit boundary with skewness, excess kurtosis ≈ −2 + skewness2, and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities ${\displaystyle p={\tfrac {\beta }{\alpha +\beta }}}$ at the left end x = 0 and ${\displaystyle q=1-p={\tfrac {\alpha }{\alpha +\beta }}}$ at the right end x = 1.

### Symmetry

All of the following statements are conditional on α, β > 0.

• Probability density function reflection symmetry
${\displaystyle f(x;\alpha ,\beta )=f(1-x;\beta ,\alpha )}$
• Cumulative distribution function reflection symmetry
${\displaystyle F(x;\alpha ,\beta )=I_{x}(\alpha ,\beta )=1-F(1-x;\beta ,\alpha )=1-I_{1-x}(\beta ,\alpha )}$
• Mode reflection symmetry
${\displaystyle \operatorname {mode} (\mathrm {B} (\alpha ,\beta ))=1-\operatorname {mode} (\mathrm {B} (\beta ,\alpha )),{\text{ if }}\mathrm {B} (\beta ,\alpha )\neq \mathrm {B} (1,1)}$
• Median reflection symmetry
${\displaystyle \operatorname {median} (\mathrm {B} (\alpha ,\beta ))=1-\operatorname {median} (\mathrm {B} (\beta ,\alpha ))}$
• Mean reflection symmetry
${\displaystyle \mu (\mathrm {B} (\alpha ,\beta ))=1-\mu (\mathrm {B} (\beta ,\alpha ))}$
• Geometric means: each is individually asymmetric, but the following symmetry holds between the geometric mean based on X and the geometric mean based on its reflection (1−X)
${\displaystyle G_{X}(\mathrm {B} (\alpha ,\beta ))=G_{(1-X)}(\mathrm {B} (\beta ,\alpha ))}$
• Harmonic means: each is individually asymmetric, but the following symmetry holds between the harmonic mean based on X and the harmonic mean based on its reflection (1−X)
${\displaystyle H_{X}(\mathrm {B} (\alpha ,\beta ))=H_{(1-X)}(\mathrm {B} (\beta ,\alpha )){\text{ if }}\alpha ,\beta >1}$ .
• Variance symmetry
${\displaystyle \operatorname {var} (\mathrm {B} (\alpha ,\beta ))=\operatorname {var} (\mathrm {B} (\beta ,\alpha ))}$
• Geometric variances: each is individually asymmetric, but the following symmetry holds between the log geometric variance based on X and the log geometric variance based on its reflection (1−X)
${\displaystyle \ln(\operatorname {var_{GX}} (\mathrm {B} (\alpha ,\beta )))=\ln(\operatorname {var_{G(1-X)}} (\mathrm {B} (\beta ,\alpha )))}$
• Geometric covariance symmetry
${\displaystyle \ln \operatorname {cov_{GX,(1-X)}} (\mathrm {B} (\alpha ,\beta ))=\ln \operatorname {cov_{GX,(1-X)}} (\mathrm {B} (\beta ,\alpha ))}$
• Mean absolute deviation symmetry
${\displaystyle \operatorname {E} [|X-E[X]|](\mathrm {B} (\alpha ,\beta ))=\operatorname {E} [|X-E[X]|](\mathrm {B} (\beta ,\alpha ))}$
• Skewness skew-symmetry
${\displaystyle \operatorname {skewness} (\mathrm {B} (\alpha ,\beta ))=-\operatorname {skewness} (\mathrm {B} (\beta ,\alpha ))}$
• Excess kurtosis symmetry
${\displaystyle {\text{excess kurtosis}}(\mathrm {B} (\alpha ,\beta ))={\text{excess kurtosis}}(\mathrm {B} (\beta ,\alpha ))}$
• Characteristic function symmetry of Real part (with respect to the origin of variable "t")
${\displaystyle {\text{Re}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)]={\text{Re}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)]}$
• Characteristic function skew-symmetry of Imaginary part (with respect to the origin of variable "t")
${\displaystyle {\text{Im}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)]=-{\text{Im}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)]}$
• Characteristic function symmetry of Absolute value (with respect to the origin of variable "t")
${\displaystyle {\text{Abs}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)]={\text{Abs}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)]}$
• Differential entropy symmetry
${\displaystyle h(\mathrm {B} (\alpha ,\beta ))=h(\mathrm {B} (\beta ,\alpha ))}$
• Relative entropy (Kullback–Leibler divergence) symmetry
${\displaystyle D_{\mathrm {KL} }(X_{1},X_{2})=D_{\mathrm {KL} }(X_{2},X_{1}),{\text{ if }}h(X_{1})=h(X_{2}){\text{, for (skewed) }}\alpha \neq \beta }$
• Fisher information matrix symmetry
${\displaystyle {\mathcal {I}}_{i,j}={\mathcal {I}}_{j,i}}$

### Geometry of the probability density function

#### Inflection points

For certain values of the shape parameters α and β, the probability density function has inflection points, at which the curvature changes sign. The position of these inflection points can be useful as a measure of the dispersion or spread of the distribution.

Defining the following quantity:

${\displaystyle \kappa ={\frac {\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}{\alpha +\beta -2}}}$

Points of inflection occur, [1] [3] [6] [7] depending on the value of the shape parameters α and β, as follows:

• (α > 2, β > 2) The distribution is bell-shaped (symmetric for α = β and skewed otherwise), with two inflection points, equidistant from the mode:
${\displaystyle x={\text{mode}}\pm \kappa ={\frac {\alpha -1\pm {\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$
• (α = 2, β > 2) The distribution is unimodal, positively skewed, right-tailed, with one inflection point, located to the right of the mode:
${\displaystyle x={\text{mode}}+\kappa ={\frac {2}{\beta }}}$
• (α > 2, β = 2) The distribution is unimodal, negatively skewed, left-tailed, with one inflection point, located to the left of the mode:
${\displaystyle x={\text{mode}}-\kappa =1-{\frac {2}{\alpha }}}$
• (1 < α < 2, β > 2, α+β>2) The distribution is unimodal, positively skewed, right-tailed, with one inflection point, located to the right of the mode:
${\displaystyle x={\text{mode}}+\kappa ={\frac {\alpha -1+{\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$
• (0 < α < 1, 1 < β < 2) The distribution has a mode at the left end x = 0 and it is positively skewed, right-tailed. There is one inflection point, located to the right of the mode:
${\displaystyle x={\frac {\alpha -1+{\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$
• (α > 2, 1 < β < 2) The distribution is unimodal negatively skewed, left-tailed, with one inflection point, located to the left of the mode:
${\displaystyle x={\text{mode}}-\kappa ={\frac {\alpha -1-{\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$
• (1 < α < 2, 0 < β < 1) The distribution has a mode at the right end x=1 and it is negatively skewed, left-tailed. There is one inflection point, located to the left of the mode:
${\displaystyle x={\frac {\alpha -1-{\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$

There are no inflection points in the remaining (symmetric and skewed) regions: U-shaped (α, β < 1), upside-down-U-shaped (1 < α < 2, 1 < β < 2), reverse-J-shaped (α < 1, β > 2), or J-shaped (α > 2, β < 1).
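As an illustrative check (not from the source), the following Python sketch evaluates mode ± κ for one bell-shaped case (α, β > 2, values arbitrary) and confirms by finite differences that the second derivative of the density changes sign at each point.

```python
# Illustrative sketch: inflection points at mode +/- kappa for alpha, beta > 2,
# confirmed by a sign change of the numerical second derivative of the pdf.
import numpy as np
from scipy.stats import beta as beta_dist

a, b = 4.0, 6.0                                        # arbitrary bell-shaped case
mode = (a - 1) / (a + b - 2)
kappa = np.sqrt((a - 1) * (b - 1) / (a + b - 3)) / (a + b - 2)
pdf = beta_dist(a, b).pdf

def second_derivative(x, h=1e-4):
    return (pdf(x + h) - 2 * pdf(x) + pdf(x - h)) / h**2

for x0 in (mode - kappa, mode + kappa):
    # curvature should have opposite signs just below and just above x0
    print(x0, second_derivative(x0 - 1e-3) * second_derivative(x0 + 1e-3) < 0)
```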

The accompanying plots show the inflection point locations (shown vertically, ranging from 0 to 1) versus α and β (the horizontal axes ranging from 0 to 5). There are large cuts in the surfaces intersecting the lines α = 1, β = 1, α = 2, and β = 2 because at these values the beta distribution changes from two modes, to one mode, to no mode.

#### Shapes

The beta density function can take a wide variety of different shapes depending on the values of the two parameters α and β. The ability of the beta distribution to take this great diversity of shapes (using only two parameters) is partly responsible for its wide application in modeling actual measurements:

##### Symmetric (α = β)
• the density function is symmetric about 1/2 (blue & teal plots).
• median = mean = 1/2.
• skewness = 0.
• α = β < 1
• U-shaped (blue plot).
• bimodal: left mode = 0, right mode = 1, anti-mode = 1/2
• 1/12 < var(X) < 1/4 [1]
• −2 < excess kurtosis(X) < −6/5
• α = β = 1/2 is the arcsine distribution
• var(X) = 1/8
• excess kurtosis(X) = −3/2
• CF = Rinc (t) [33]
• α = β → 0 is a 2-point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end x = 0 and x = 1 and zero probability everywhere else. A coin toss: one face of the coin being x = 0 and the other face being x = 1.
• ${\displaystyle \lim _{\alpha =\beta \to 0}\operatorname {var} (X)={\tfrac {1}{4}}}$
• ${\displaystyle \lim _{\alpha =\beta \to 0}\operatorname {excess\ kurtosis} (X)=-2}$ a lower value than this is impossible for any distribution to reach.
• The differential entropy approaches a minimum value of −∞
• α = β = 1 is the uniform [0, 1] distribution
• α = β > 1
• symmetric unimodal
• mode = 1/2.
• 0 < var(X) < 1/12 [1]
• −6/5 < excess kurtosis(X) < 0
• α = β = 3/2 is a semi-elliptic [0, 1] distribution, see: Wigner semicircle distribution [34]
• var(X) = 1/16.
• excess kurtosis(X) = −1
• CF = 2 Jinc (t)
• α = β = 2 is the parabolic [0, 1] distribution
• var(X) = 1/20
• excess kurtosis(X) = −6/7
• CF = 3 Tinc (t) [35]
• α = β > 2 is bell-shaped, with inflection points located to either side of the mode
• 0 < var(X) < 1/20
• −6/7 < excess kurtosis(X) < 0
• α = β → ∞ is a 1-point Degenerate distribution with a Dirac delta function spike at the midpoint x = 1/2 with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the single point x = 1/2.
• ${\displaystyle \lim _{\alpha =\beta \to \infty }\operatorname {var} (X)=0}$
• ${\displaystyle \lim _{\alpha =\beta \to \infty }\operatorname {excess\ kurtosis} (X)=0}$
• The differential entropy approaches a minimum value of −∞
##### Skewed (α ≠ β)

The density function is skewed. An interchange of parameter values yields the mirror image (the reverse) of the initial curve. Some more specific cases follow:

• α < 1, β < 1
• U-shaped
• Positive skew for α < β, negative skew for α > β.
• bimodal: left mode = 0, right mode = 1, anti-mode = ${\displaystyle {\tfrac {\alpha -1}{\alpha +\beta -2}}}$
• 0 < median < 1.
• 0 < var(X) < 1/4
• α > 1, β > 1
• unimodal (magenta & cyan plots),
• Positive skew for α < β, negative skew for α > β.
• ${\displaystyle {\text{mode }}={\tfrac {\alpha -1}{\alpha +\beta -2}}}$
• 0 < median < 1
• 0 < var(X) < 1/12
• α < 1, β ≥ 1
• reverse J-shaped with a right tail,
• positively skewed,
• strictly decreasing, convex
• mode = 0
• 0 < median < 1/2.
• ${\displaystyle 0<\operatorname {var} (X)<{\tfrac {-11+5{\sqrt {5}}}{2}},}$ (maximum variance occurs for ${\displaystyle \alpha ={\tfrac {-1+{\sqrt {5}}}{2}},\beta =1}$, or α = Φ the golden ratio conjugate)
• α ≥ 1, β < 1
• J-shaped with a left tail,
• negatively skewed,
• strictly increasing, convex
• mode = 1
• 1/2 < median < 1
• ${\displaystyle 0<\operatorname {var} (X)<{\tfrac {-11+5{\sqrt {5}}}{2}},}$ (maximum variance occurs for ${\displaystyle \alpha =1,\beta ={\tfrac {-1+{\sqrt {5}}}{2}}}$, or β = Φ the golden ratio conjugate)
• α = 1, β > 1
• positively skewed,
• strictly decreasing (red plot),
• a reversed (mirror-image) power function [0,1] distribution
• mode = 0
• α = 1, 1 < β < 2
• concave
• ${\displaystyle 1-{\tfrac {1}{\sqrt {2}}}<{\text{median}}<{\tfrac {1}{2}}}$
• 1/18 < var(X) < 1/12.
• α = 1, β = 2
• a straight line with slope −2, the right-triangular distribution with right angle at the left end, at x = 0
• ${\displaystyle {\text{median}}=1-{\tfrac {1}{\sqrt {2}}}}$
• var(X) = 1/18
• α = 1, β > 2
• reverse J-shaped with a right tail,
• convex
• ${\displaystyle 0<{\text{median}}<1-{\tfrac {1}{\sqrt {2}}}}$
• 0 < var(X) < 1/18
• α > 1, β = 1
• negatively skewed,
• strictly increasing (green plot),
• the power function [0, 1] distribution [6]
• mode =1
• 1 < α < 2, β = 1
• concave
• ${\displaystyle {\tfrac {1}{2}}<{\text{median}}<{\tfrac {1}{\sqrt {2}}}}$
• 1/18 < var(X) < 1/12
• α = 2, β = 1
• a straight line with slope +2, the right-triangular distribution with right angle at the right end, at x = 1
• ${\displaystyle {\text{median}}={\tfrac {1}{\sqrt {2}}}}$
• var(X) = 1/18
• α > 2, β = 1
• J-shaped with a left tail, convex
• ${\displaystyle {\tfrac {1}{\sqrt {2}}}<{\text{median}}<1}$
• 0 < var(X) < 1/18

## Parameter estimation

### Method of moments

#### Two unknown parameters

The two unknown parameters ${\displaystyle ({\hat {\alpha }},{\hat {\beta }})}$ of a beta distribution supported on the [0, 1] interval can be estimated, using the method of moments, from the first two moments (sample mean and sample variance) as follows. Let:

${\displaystyle {\text{sample mean(X)}}={\bar {x}}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}}$

be the sample mean estimate and

${\displaystyle {\text{sample variance(X)}}={\bar {v}}={\frac {1}{N-1}}\sum _{i=1}^{N}(X_{i}-{\bar {x}})^{2}}$

be the sample variance estimate. The method-of-moments estimates of the parameters are

${\displaystyle {\hat {\alpha }}={\bar {x}}\left({\frac {{\bar {x}}(1-{\bar {x}})}{\bar {v}}}-1\right),}$ if ${\displaystyle {\bar {v}}<{\bar {x}}(1-{\bar {x}}),}$
${\displaystyle {\hat {\beta }}=(1-{\bar {x}})\left({\frac {{\bar {x}}(1-{\bar {x}})}{\bar {v}}}-1\right),}$ if ${\displaystyle {\bar {v}}<{\bar {x}}(1-{\bar {x}}).}$

When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace ${\displaystyle {\bar {x}}}$ with ${\displaystyle {\frac {{\bar {y}}-a}{c-a}},}$ and ${\displaystyle {\bar {v}}}$ with ${\displaystyle {\frac {\bar {v_{Y}}}{(c-a)^{2}}}}$ in the above pair of equations for the shape parameters (see the "Alternative parametrizations, four parameters" section below), [36] where:

${\displaystyle {\text{sample mean(Y)}}={\bar {y}}={\frac {1}{N}}\sum _{i=1}^{N}Y_{i}}$
${\displaystyle {\text{sample variance(Y)}}={\bar {v_{Y}}}={\frac {1}{N-1}}\sum _{i=1}^{N}(Y_{i}-{\bar {y}})^{2}}$
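A minimal Python sketch of the two-parameter method-of-moments fit on [0, 1] follows (illustrative only; the sample and the true parameters used to generate it are arbitrary):

```python
# Illustrative sketch: method-of-moments estimates of (alpha, beta) for a
# sample supported on [0, 1].
import numpy as np

def beta_method_of_moments(x):
    xbar = np.mean(x)
    vbar = np.var(x, ddof=1)                 # unbiased sample variance
    if vbar >= xbar * (1 - xbar):
        raise ValueError("sample variance too large for a beta fit")
    common = xbar * (1 - xbar) / vbar - 1
    return xbar * common, (1 - xbar) * common

rng = np.random.default_rng(42)
sample = rng.beta(2.0, 5.0, size=10_000)     # arbitrary true parameters
print(beta_method_of_moments(sample))        # estimates should be near (2, 5)
```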

#### Four unknown parameters

All four parameters ${\displaystyle {\hat {\alpha }},{\hat {\beta }},{\hat {a}},{\hat {c}}}$ of a beta distribution supported on the [a, c] interval (see the section "Alternative parametrizations, Four parameters") can be estimated using the method of moments developed by Karl Pearson, by equating sample and population values of the first four central moments (mean, variance, skewness and excess kurtosis). [1] [37] [38] The excess kurtosis was expressed in terms of the square of the skewness and the sample size ν = α + β (see previous section "Kurtosis") as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6}{3+\nu }}\left({\frac {(2+\nu )}{4}}({\text{skewness}})^{2}-1\right){\text{ if (skewness)}}^{2}-2<{\text{excess kurtosis}}<{\tfrac {3}{2}}({\text{skewness}})^{2}}$

One can use this equation to solve for the sample size ν = α + β in terms of the square of the skewness and the excess kurtosis as follows: [37]

${\displaystyle {\hat {\nu }}={\hat {\alpha }}+{\hat {\beta }}=3{\frac {({\text{sample excess kurtosis}})-({\text{sample skewness}})^{2}+2}{{\frac {3}{2}}({\text{sample skewness}})^{2}-{\text{(sample excess kurtosis)}}}}}$
${\displaystyle {\text{ if (sample skewness)}}^{2}-2<{\text{sample excess kurtosis}}<{\tfrac {3}{2}}({\text{sample skewness}})^{2}}$

This is the ratio (multiplied by a factor of 3) between the previously derived limit boundaries for the beta distribution in a space (as originally used by Karl Pearson [21] ) defined with coordinates of the square of the skewness on one axis and the excess kurtosis on the other axis (see the previous section titled "Kurtosis bounded by the square of the skewness").

The case of zero skewness can be solved immediately: for zero skewness, α = β and hence ν = 2α = 2β, therefore α = β = ν/2

${\displaystyle {\hat {\alpha }}={\hat {\beta }}={\frac {\hat {\nu }}{2}}={\frac {{\frac {3}{2}}({\text{sample excess kurtosis}})+3}{-{\text{(sample excess kurtosis)}}}}}$
${\displaystyle {\text{ if sample skewness}}=0{\text{ and }}-2<{\text{sample excess kurtosis}}<0}$

(Excess kurtosis is negative for the beta distribution with zero skewness, ranging from −2 to 0, so that ${\displaystyle {\hat {\nu }}}$ (and therefore the estimated shape parameters) is positive, ranging from zero when the shape parameters approach zero and the excess kurtosis approaches −2, to infinity when the shape parameters approach infinity and the excess kurtosis approaches zero.)

For non-zero sample skewness one needs to solve a system of two coupled equations. Since the skewness and the excess kurtosis are independent of the parameters ${\displaystyle {\hat {a}},{\hat {c}}}$, the parameters ${\displaystyle {\hat {\alpha }},{\hat {\beta }}}$ can be uniquely determined from the sample skewness and the sample excess kurtosis, by solving the coupled equations with two known variables (sample skewness and sample excess kurtosis) and two unknowns (the shape parameters):

${\displaystyle ({\text{sample skewness}})^{2}={\frac {4({\hat {\beta }}-{\hat {\alpha }})^{2}(1+{\hat {\alpha }}+{\hat {\beta }})}{{\hat {\alpha }}{\hat {\beta }}(2+{\hat {\alpha }}+{\hat {\beta }})^{2}}}}$
${\displaystyle {\text{sample excess kurtosis}}={\frac {6}{3+{\hat {\alpha }}+{\hat {\beta }}}}\left({\frac {(2+{\hat {\alpha }}+{\hat {\beta }})}{4}}({\text{sample skewness}})^{2}-1\right)}$
${\displaystyle {\text{ if (sample skewness)}}^{2}-2<{\text{sample excess kurtosis}}<{\tfrac {3}{2}}({\text{sample skewness}})^{2}}$

resulting in the following solution: [37]

${\displaystyle {\hat {\alpha }},{\hat {\beta }}={\frac {\hat {\nu }}{2}}\left(1\pm {\frac {1}{\sqrt {1+{\frac {16({\hat {\nu }}+1)}{({\hat {\nu }}+2)^{2}({\text{sample skewness}})^{2}}}}}}\right)}$
${\displaystyle {\text{ if sample skewness}}\neq 0{\text{ and }}({\text{sample skewness}})^{2}-2<{\text{sample excess kurtosis}}<{\tfrac {3}{2}}({\text{sample skewness}})^{2}}$

where one should take the solutions as follows: ${\displaystyle {\hat {\alpha }}>{\hat {\beta }}}$ for (negative) sample skewness < 0, and ${\displaystyle {\hat {\alpha }}<{\hat {\beta }}}$ for (positive) sample skewness > 0.

The accompanying plot shows these two solutions as surfaces in a space with horizontal axes of (sample excess kurtosis) and (sample squared skewness) and the shape parameters as the vertical axis. The surfaces are constrained by the condition that the sample excess kurtosis must be bounded by the sample squared skewness as stipulated in the above equation. The two surfaces meet at the right edge defined by zero skewness. Along this right edge, both parameters are equal and the distribution is symmetric U-shaped for α = β < 1, uniform for α = β = 1, upside-down-U-shaped for 1 < α = β < 2 and bell-shaped for α = β > 2.

The surfaces also meet at the front (lower) edge defined by "the impossible boundary" line (excess kurtosis + 2 - skewness2 = 0). Along this front (lower) boundary both shape parameters approach zero, and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities ${\displaystyle p={\tfrac {\beta }{\alpha +\beta }}}$ at the left end x = 0 and ${\displaystyle q=1-p={\tfrac {\alpha }{\alpha +\beta }}}$ at the right end x = 1. The two surfaces become further apart towards the rear edge, and at this rear edge the shape parameters on the two surfaces are quite different from each other.

As remarked, for example, by Bowman and Shenton, [39] sampling in the neighborhood of the line (sample excess kurtosis - (3/2)(sample skewness)2 = 0) (the just-J-shaped portion of the rear edge where blue meets beige), "is dangerously near to chaos", because at that line the denominator of the expression above for the estimate ν = α + β becomes zero and hence ν approaches infinity as that line is approached. Bowman and Shenton [39] write that "the higher moment parameters (kurtosis and skewness) are extremely fragile (near that line). However the mean and standard deviation are fairly reliable." Therefore, the problem arises for four-parameter estimation with very skewed distributions such that the excess kurtosis approaches (3/2) times the square of the skewness. This boundary line is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. See the section titled "Kurtosis bounded by the square of the skewness" for a numerical example and further comments about this rear-edge boundary line (sample excess kurtosis - (3/2)(sample skewness)2 = 0). As remarked by Karl Pearson himself, [40] this issue may not be of much practical importance, as this trouble arises only for very skewed J-shaped (or mirror-image J-shaped) distributions with very different values of shape parameters that are unlikely to occur much in practice. The usual skewed bell-shaped distributions that occur in practice do not have this parameter estimation problem.

The remaining two parameters ${\displaystyle {\hat {a}},{\hat {c}}}$ can be determined using the sample mean and the sample variance using a variety of equations. [1] [37] One alternative is to calculate the support interval range ${\displaystyle ({\hat {c}}-{\hat {a}})}$ based on the sample variance and the sample kurtosis. For this purpose one can solve, in terms of the range ${\displaystyle ({\hat {c}}-{\hat {a}})}$, the equation expressing the excess kurtosis in terms of the sample variance, and the sample size ν (see section titled "Kurtosis" and "Alternative parametrizations, four parameters"):

${\displaystyle {\text{sample excess kurtosis}}={\frac {6}{(3+{\hat {\nu }})(2+{\hat {\nu }})}}{\bigg (}{\frac {({\hat {c}}-{\hat {a}})^{2}}{\text{(sample variance)}}}-6-5{\hat {\nu }}{\bigg )}}$

to obtain:

${\displaystyle ({\hat {c}}-{\hat {a}})={\sqrt {\text{(sample variance)}}}{\sqrt {6+5{\hat {\nu }}+{\frac {(2+{\hat {\nu }})(3+{\hat {\nu }})}{6}}{\text{(sample excess kurtosis)}}}}}$

Another alternative is to calculate the support interval range ${\displaystyle ({\hat {c}}-{\hat {a}})}$ based on the sample variance and the sample skewness. [37] For this purpose one can solve, in terms of the range ${\displaystyle ({\hat {c}}-{\hat {a}})}$, the equation expressing the squared skewness in terms of the sample variance, and the sample size ν (see section titled "Skewness" and "Alternative parametrizations, four parameters"):

${\displaystyle ({\text{sample skewness}})^{2}={\frac {4}{(2+{\hat {\nu }})^{2}}}{\bigg (}{\frac {({\hat {c}}-{\hat {a}})^{2}}{\text{(sample variance)}}}-4(1+{\hat {\nu }}){\bigg )}}$

to obtain: [37]

${\displaystyle ({\hat {c}}-{\hat {a}})={\frac {\sqrt {\text{(sample variance)}}}{2}}{\sqrt {(2+{\hat {\nu }})^{2}({\text{sample skewness}})^{2}+16(1+{\hat {\nu }})}}}$

The remaining parameter can be determined from the sample mean and the previously obtained parameters: ${\displaystyle ({\hat {c}}-{\hat {a}}),{\hat {\alpha }},{\hat {\nu }}={\hat {\alpha }}+{\hat {\beta }}}$:

${\displaystyle {\hat {a}}=({\text{sample mean}})-\left({\frac {\hat {\alpha }}{\hat {\nu }}}\right)({\hat {c}}-{\hat {a}})}$

and finally, of course, ${\displaystyle {\hat {c}}=({\hat {c}}-{\hat {a}})+{\hat {a}}}$.

In the above formulas one may take, for example, as estimates of the sample moments:

{\displaystyle {\begin{aligned}{\text{sample mean}}&={\overline {y}}={\frac {1}{N}}\sum _{i=1}^{N}Y_{i}\\{\text{sample variance}}&={\overline {v}}_{Y}={\frac {1}{N-1}}\sum _{i=1}^{N}(Y_{i}-{\overline {y}})^{2}\\{\text{sample skewness}}&=G_{1}={\frac {N}{(N-1)(N-2)}}{\frac {\sum _{i=1}^{N}(Y_{i}-{\overline {y}})^{3}}{{\overline {v}}_{Y}^{\frac {3}{2}}}}\\{\text{sample excess kurtosis}}&=G_{2}={\frac {N(N+1)}{(N-1)(N-2)(N-3)}}{\frac {\sum _{i=1}^{N}(Y_{i}-{\overline {y}})^{4}}{{\overline {v}}_{Y}^{2}}}-{\frac {3(N-1)^{2}}{(N-2)(N-3)}}\end{aligned}}}

The estimators G1 for sample skewness and G2 for sample kurtosis are used by DAP/SAS, PSPP/SPSS, and Excel. However, they are not used by BMDP and (according to [41] ) they were not used by MINITAB in 1998. Actually, Joanes and Gill in their 1998 study [41] concluded that the skewness and kurtosis estimators used in BMDP and in MINITAB (at that time) had smaller variance and mean-squared error in normal samples, but the skewness and kurtosis estimators used in DAP/SAS, PSPP/SPSS, namely G1 and G2, had smaller mean-squared error in samples from a very skewed distribution. It is for this reason that we have spelled out "sample skewness", etc., in the above formulas, to make it explicit that the user should choose the best estimator according to the problem at hand, as the best estimator for skewness and kurtosis depends on the amount of skewness (as shown by Joanes and Gill [41] ).
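The four-parameter procedure above can be collected into a short sketch. The following Python code is illustrative only: it uses the G1 and G2 estimators via scipy.stats, assumes the sample satisfies the skewness/kurtosis admissibility condition, and the generating parameters are arbitrary.

```python
# Illustrative sketch: four-parameter method-of-moments fit following the
# equations above (nu from skewness/kurtosis, then alpha, beta, range and a).
import numpy as np
from scipy.stats import skew, kurtosis

def beta4_method_of_moments(y):
    mean, var = np.mean(y), np.var(y, ddof=1)
    g1 = skew(y, bias=False)                 # sample skewness (G1)
    g2 = kurtosis(y, bias=False)             # sample excess kurtosis (G2)
    nu = 3 * (g2 - g1**2 + 2) / (1.5 * g1**2 - g2)
    if g1 == 0:
        a_hat = b_hat = nu / 2
    else:
        root = 1 / np.sqrt(1 + 16 * (nu + 1) / ((nu + 2)**2 * g1**2))
        # alpha_hat < beta_hat for positive skewness, and vice versa
        a_hat = nu / 2 * (1 - np.sign(g1) * root)
        b_hat = nu / 2 * (1 + np.sign(g1) * root)
    support_range = 0.5 * np.sqrt(var) * np.sqrt((2 + nu)**2 * g1**2 + 16 * (1 + nu))
    a_loc = mean - (a_hat / nu) * support_range
    return a_hat, b_hat, a_loc, a_loc + support_range

rng = np.random.default_rng(7)
y = 2.0 + 3.0 * rng.beta(2.0, 5.0, size=200_000)   # arbitrary truth: support [2, 5]
print(beta4_method_of_moments(y))                  # estimates should be near (2, 5, 2, 5)
```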

### Maximum likelihood

#### Two unknown parameters

As is also the case for maximum likelihood estimates for the gamma distribution, the maximum likelihood estimates for the beta distribution do not have a general closed form solution for arbitrary values of the shape parameters. If X1, ..., XN are independent random variables each having a beta distribution, the joint log likelihood function for N iid observations is:

{\displaystyle {\begin{aligned}\ln \,{\mathcal {L}}(\alpha ,\beta \mid X)&=\sum _{i=1}^{N}\ln \left({\mathcal {L}}_{i}(\alpha ,\beta \mid X_{i})\right)\\&=\sum _{i=1}^{N}\ln \left(f(X_{i};\alpha ,\beta )\right)\\&=\sum _{i=1}^{N}\ln \left({\frac {X_{i}^{\alpha -1}(1-X_{i})^{\beta -1}}{\mathrm {B} (\alpha ,\beta )}}\right)\\&=(\alpha -1)\sum _{i=1}^{N}\ln(X_{i})+(\beta -1)\sum _{i=1}^{N}\ln(1-X_{i})-N\ln \mathrm {B} (\alpha ,\beta )\end{aligned}}}

Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to that shape parameter and setting the expression equal to zero, yielding the maximum likelihood estimator of the shape parameters:

${\displaystyle {\frac {\partial \ln {\mathcal {L}}(\alpha ,\beta \mid X)}{\partial \alpha }}=\sum _{i=1}^{N}\ln X_{i}-N{\frac {\partial \ln \mathrm {B} (\alpha ,\beta )}{\partial \alpha }}=0}$
${\displaystyle {\frac {\partial \ln {\mathcal {L}}(\alpha ,\beta \mid X)}{\partial \beta }}=\sum _{i=1}^{N}\ln(1-X_{i})-N{\frac {\partial \ln \mathrm {B} (\alpha ,\beta )}{\partial \beta }}=0}$

where:

${\displaystyle {\frac {\partial \ln \mathrm {B} (\alpha ,\beta )}{\partial \alpha }}=-{\frac {\partial \ln \Gamma (\alpha +\beta )}{\partial \alpha }}+{\frac {\partial \ln \Gamma (\alpha )}{\partial \alpha }}+{\frac {\partial \ln \Gamma (\beta )}{\partial \alpha }}=-\psi (\alpha +\beta )+\psi (\alpha )+0}$
${\displaystyle {\frac {\partial \ln \mathrm {B} (\alpha ,\beta )}{\partial \beta }}=-{\frac {\partial \ln \Gamma (\alpha +\beta )}{\partial \beta }}+{\frac {\partial \ln \Gamma (\alpha )}{\partial \beta }}+{\frac {\partial \ln \Gamma (\beta )}{\partial \beta }}=-\psi (\alpha +\beta )+0+\psi (\beta )}$

since the digamma function, denoted ψ(α), is defined as the logarithmic derivative of the gamma function: [19]

${\displaystyle \psi (\alpha )={\frac {\partial \ln \Gamma (\alpha )}{\partial \alpha }}}$

To ensure that the values with zero tangent slope are indeed a maximum (instead of a saddle point or a minimum), one must also satisfy the condition that the curvature is negative. This amounts to requiring that the second partial derivatives with respect to the shape parameters be negative:

${\displaystyle {\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{\partial \alpha ^{2}}}=-N{\frac {\partial ^{2}\ln \mathrm {B} (\alpha ,\beta )}{\partial \alpha ^{2}}}<0}$
${\displaystyle {\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{\partial \beta ^{2}}}=-N{\frac {\partial ^{2}\ln \mathrm {B} (\alpha ,\beta )}{\partial \beta ^{2}}}<0}$

Using the previous equations, this is equivalent to:

${\displaystyle {\frac {\partial ^{2}\ln \mathrm {B} (\alpha ,\beta )}{\partial \alpha ^{2}}}=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )>0}$
${\displaystyle {\frac {\partial ^{2}\ln \mathrm {B} (\alpha ,\beta )}{\partial \beta ^{2}}}=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )>0}$

where the trigamma function, denoted ψ1(α), is the second of the polygamma functions and is defined as the derivative of the digamma function:

${\displaystyle \psi _{1}(\alpha )={\frac {\partial ^{2}\ln \Gamma (\alpha )}{\partial \alpha ^{2}}}=\,{\frac {\partial \,\psi (\alpha )}{\partial \alpha }}.}$

These conditions are equivalent to stating that the variances of the logarithmically transformed variables are positive, since:

${\displaystyle \operatorname {var} [\ln(X)]=\operatorname {E} [\ln ^{2}(X)]-(\operatorname {E} [\ln(X)])^{2}=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \operatorname {var} [\ln(1-X)]=\operatorname {E} [\ln ^{2}(1-X)]-(\operatorname {E} [\ln(1-X)])^{2}=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )}$

Therefore, the condition of negative curvature at a maximum is equivalent to the statements:

${\displaystyle \operatorname {var} [\ln(X)]>0}$
${\displaystyle \operatorname {var} [\ln(1-X)]>0}$

Alternatively, the condition of negative curvature at a maximum is also equivalent to stating that the following logarithmic derivatives of the geometric means GX and G(1−X) are positive, since:

${\displaystyle \psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )={\frac {\partial \ln G_{X}}{\partial \alpha }}>0}$
${\displaystyle \psi _{1}(\beta )-\psi _{1}(\alpha +\beta )={\frac {\partial \ln G_{(1-X)}}{\partial \beta }}>0}$

While these slopes are indeed positive, the other slopes are negative:

${\displaystyle {\frac {\partial \,\ln G_{X}}{\partial \beta }},{\frac {\partial \ln G_{(1-X)}}{\partial \alpha }}<0.}$

The slopes of the mean and the median with respect to α and β display similar sign behavior.

From the condition that at a maximum, the partial derivative with respect to the shape parameter equals zero, we obtain the following system of coupled maximum likelihood estimate equations (for the average log-likelihoods) that needs to be inverted to obtain the (unknown) shape parameter estimates ${\displaystyle {\hat {\alpha }},{\hat {\beta }}}$ in terms of the (known) average of logarithms of the samples X1, ..., XN: [1]

{\displaystyle {\begin{aligned}{\hat {\operatorname {E} }}[\ln(X)]&=\psi ({\hat {\alpha }})-\psi ({\hat {\alpha }}+{\hat {\beta }})={\frac {1}{N}}\sum _{i=1}^{N}\ln X_{i}=\ln {\hat {G}}_{X}\\{\hat {\operatorname {E} }}[\ln(1-X)]&=\psi ({\hat {\beta }})-\psi ({\hat {\alpha }}+{\hat {\beta }})={\frac {1}{N}}\sum _{i=1}^{N}\ln(1-X_{i})=\ln {\hat {G}}_{(1-X)}\end{aligned}}}

where we recognize ${\displaystyle \log {\hat {G}}_{X}}$ as the logarithm of the sample geometric mean and ${\displaystyle \log {\hat {G}}_{(1-X)}}$ as the logarithm of the sample geometric mean based on (1 − X), the mirror-image of X. For ${\displaystyle {\hat {\alpha }}={\hat {\beta }}}$, it follows that ${\displaystyle {\hat {G}}_{X}={\hat {G}}_{(1-X)}}$.

{\displaystyle {\begin{aligned}{\hat {G}}_{X}&=\prod _{i=1}^{N}(X_{i})^{1/N}\\{\hat {G}}_{(1-X)}&=\prod _{i=1}^{N}(1-X_{i})^{1/N}\end{aligned}}}

These coupled equations containing digamma functions of the shape parameter estimates ${\displaystyle {\hat {\alpha }},{\hat {\beta }}}$ must be solved by numerical methods as done, for example, by Beckman et al. [42] Gnanadesikan et al. give numerical solutions for a few cases. [43] N.L.Johnson and S.Kotz [1] suggest that for "not too small" shape parameter estimates ${\displaystyle {\hat {\alpha }},{\hat {\beta }}}$, the logarithmic approximation to the digamma function ${\displaystyle \psi ({\hat {\alpha }})\approx \ln({\hat {\alpha }}-{\tfrac {1}{2}})}$ may be used to obtain initial values for an iterative solution, since the equations resulting from this approximation can be solved exactly:

${\displaystyle \ln {\frac {{\hat {\alpha }}-{\frac {1}{2}}}{{\hat {\alpha }}+{\hat {\beta }}-{\frac {1}{2}}}}\approx \ln {\hat {G}}_{X}}$
${\displaystyle \ln {\frac {{\hat {\beta }}-{\frac {1}{2}}}{{\hat {\alpha }}+{\hat {\beta }}-{\frac {1}{2}}}}\approx \ln {\hat {G}}_{(1-X)}}$

which leads to the following solution for the initial values (of the estimate shape parameters in terms of the sample geometric means) for an iterative solution:

${\displaystyle {\hat {\alpha }}\approx {\tfrac {1}{2}}+{\frac {{\hat {G}}_{X}}{2(1-{\hat {G}}_{X}-{\hat {G}}_{(1-X)})}}{\text{ if }}{\hat {\alpha }}>1}$
${\displaystyle {\hat {\beta }}\approx {\tfrac {1}{2}}+{\frac {{\hat {G}}_{(1-X)}}{2(1-{\hat {G}}_{X}-{\hat {G}}_{(1-X)})}}{\text{ if }}{\hat {\beta }}>1}$

Alternatively, the estimates provided by the method of moments can instead be used as initial values for an iterative solution of the maximum likelihood coupled equations in terms of the digamma functions.
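As an illustration of the procedure (not a prescribed implementation), the following Python sketch solves the coupled digamma equations numerically with scipy.optimize.fsolve, using the logarithmic (Johnson and Kotz) approximation above for the initial values; the sample and the true parameters are arbitrary.

```python
# Illustrative sketch: maximum likelihood for (alpha, beta) by numerically
# solving the coupled digamma equations, with the logarithmic approximation
# above providing the initial values.
import numpy as np
from scipy.special import digamma
from scipy.optimize import fsolve

def beta_mle(x):
    log_gx = np.mean(np.log(x))          # ln of the sample geometric mean of X
    log_g1x = np.mean(np.log1p(-x))      # ln of the sample geometric mean of 1 - X

    def equations(params):
        a, b = params
        return (digamma(a) - digamma(a + b) - log_gx,
                digamma(b) - digamma(a + b) - log_g1x)

    gx, g1x = np.exp(log_gx), np.exp(log_g1x)
    a0 = 0.5 + gx / (2 * (1 - gx - g1x))     # initial values from the
    b0 = 0.5 + g1x / (2 * (1 - gx - g1x))    # ln(alpha - 1/2) approximation
    return fsolve(equations, (a0, b0))

rng = np.random.default_rng(3)
sample = rng.beta(2.0, 5.0, size=50_000)     # arbitrary true parameters
print(beta_mle(sample))                      # estimates should be near (2, 5)
```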

When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace ln(Xi) in the first equation with

${\displaystyle \ln {\frac {Y_{i}-a}{c-a}},}$

and replace ln(1−Xi) in the second equation with

${\displaystyle \ln {\frac {c-Y_{i}}{c-a}}}$

(see "Alternative parametrizations, four parameters" section below).

If one of the shape parameters is known, the problem is considerably simplified. The following logit transformation can be used to solve for the unknown shape parameter (for skewed cases such that ${\displaystyle {\hat {\alpha }}\neq {\hat {\beta }}}$; otherwise, if symmetric, both (equal) parameters are known when one is known):

${\displaystyle {\hat {\operatorname {E} }}\left[\ln \left({\frac {X}{1-X}}\right)\right]=\psi ({\hat {\alpha }})-\psi ({\hat {\beta }})={\frac {1}{N}}\sum _{i=1}^{N}\ln {\frac {X_{i}}{1-X_{i}}}=\ln {\hat {G}}_{X}-\ln \left({\hat {G}}_{(1-X)}\right)}$

This logit transformation is the logarithm of the transformation that divides the variable X by its mirror image, X/(1 − X), resulting in the "inverted beta distribution" or beta prime distribution (also known as the beta distribution of the second kind or Pearson's Type VI) with support [0, +∞). As previously discussed in the section "Moments of logarithmically transformed random variables," the logit transformation ${\displaystyle \ln {\frac {X}{1-X}}}$, studied by Johnson, [25] extends the finite support [0, 1] of the original variable X to infinite support in both directions of the real line (−∞, +∞).

If, for example, ${\displaystyle {\hat {\beta }}}$ is known, the unknown parameter ${\displaystyle {\hat {\alpha }}}$ can be obtained in terms of the inverse [44] digamma function of the right hand side of this equation:

${\displaystyle \psi ({\hat {\alpha }})={\frac {1}{N}}\sum _{i=1}^{N}\ln {\frac {X_{i}}{1-X_{i}}}+\psi ({\hat {\beta }})}$
${\displaystyle {\hat {\alpha }}=(\operatorname {Inversedigamma} )[\ln {\hat {G}}_{X}-\ln {\hat {G}}_{(1-X)}+\psi ({\hat {\beta }})]}$

In particular, if one of the shape parameters has a value of unity, for example for ${\displaystyle {\hat {\beta }}=1}$ (the power function distribution with bounded support [0,1]), using the identity ψ(x + 1) = ψ(x) + 1/x in the equation ${\displaystyle \psi ({\hat {\alpha }})-\psi ({\hat {\alpha }}+{\hat {\beta }})=\ln {\hat {G}}_{X}}$, the maximum likelihood estimator for the unknown parameter ${\displaystyle {\hat {\alpha }}}$ is, [1] exactly:

${\displaystyle {\hat {\alpha }}=-{\frac {1}{{\frac {1}{N}}\sum _{i=1}^{N}\ln X_{i}}}=-{\frac {1}{\ln {\hat {G}}_{X}}}}$

The beta has support [0, 1], therefore ${\displaystyle {\hat {G}}_{X}<1}$, and hence ${\displaystyle (-\ln {\hat {G}}_{X})>0}$, and therefore ${\displaystyle {\hat {\alpha }}>0.}$

In conclusion, the maximum likelihood estimates of the shape parameters of a beta distribution are (in general) a complicated function of the sample geometric mean, and of the sample geometric mean based on (1−X), the mirror-image of X. One may ask, if the variance (in addition to the mean) is necessary to estimate two shape parameters with the method of moments, why is the (logarithmic or geometric) variance not necessary to estimate two shape parameters with the maximum likelihood method, for which only the geometric means suffice? The answer is that the mean does not provide as much information as the geometric mean. For a beta distribution with equal shape parameters α = β, the mean is exactly 1/2, regardless of the value of the shape parameters, and therefore regardless of the value of the statistical dispersion (the variance). On the other hand, the geometric mean of a beta distribution with equal shape parameters α = β depends on the value of the shape parameters, and therefore it contains more information. Also, the geometric mean of a beta distribution does not satisfy the symmetry conditions satisfied by the mean; therefore, by employing both the geometric mean based on X and the geometric mean based on (1 − X), the maximum likelihood method is able to provide the best estimates for both parameters α = β, without the need of employing the variance.

One can express the joint log likelihood per N iid observations in terms of the sufficient statistics (the sample geometric means) as follows:

${\displaystyle {\frac {\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{N}}=(\alpha -1)\ln {\hat {G}}_{X}+(\beta -1)\ln {\hat {G}}_{(1-X)}-\ln \mathrm {B} (\alpha ,\beta ).}$
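
This expression translates directly into a short routine (a sketch; the function name and the use of scipy.special.betaln are choices of this illustration, not part of the original derivation):

```python
import numpy as np
from scipy.special import betaln

def loglik_per_obs(alpha, beta, x):
    """Joint log likelihood per observation, written in terms of the sample geometric means."""
    x = np.asarray(x)
    log_gx   = np.mean(np.log(x))        # ln G_X
    log_g1mx = np.mean(np.log(1.0 - x))  # ln G_(1-X)
    return (alpha - 1.0) * log_gx + (beta - 1.0) * log_g1mx - betaln(alpha, beta)
```

Evaluating this function on a grid of (α, β) values, for fixed sample geometric means, produces the kind of likelihood surface discussed next.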

We can plot the joint log likelihood per N observations for fixed values of the sample geometric means to see the behavior of the likelihood function as a function of the shape parameters α and β. In such a plot, the shape parameter estimators ${\displaystyle {\hat {\alpha }},{\hat {\beta }}}$ correspond to the maxima of the likelihood function. See the accompanying graph that shows that all the likelihood functions intersect at α = β = 1, which corresponds to the values of the shape parameters that give the maximum entropy (the maximum entropy occurs for shape parameters equal to unity: the uniform distribution). It is evident from the plot that the likelihood function gives sharp peaks for values of the shape parameter estimators close to zero, but that for values of the shape parameter estimators greater than one, the likelihood function becomes quite flat, with less defined peaks. Evidently, the maximum likelihood parameter estimation method for the beta distribution becomes less acceptable for larger values of the shape parameter estimators, as the uncertainty in the peak definition increases with the value of the shape parameter estimators. One can arrive at the same conclusion by noticing that the expression for the curvature of the likelihood function is in terms of the geometric variances:

${\displaystyle {\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{\partial \alpha ^{2}}}=-\operatorname {var} [\ln X]}$
${\displaystyle {\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{\partial \beta ^{2}}}=-\operatorname {var} [\ln(1-X)]}$

These variances (and therefore the curvatures) are much larger for small values of the shape parameters α and β. However, for shape parameter values α, β > 1, the variances (and therefore the curvatures) flatten out. Equivalently, this result follows from the Cramér–Rao bound, since the Fisher information matrix components for the beta distribution are these logarithmic variances. The Cramér–Rao bound states that the variance of any unbiased estimator ${\displaystyle {\hat {\alpha }}}$ of α is bounded by the reciprocal of the Fisher information:

${\displaystyle \mathrm {var} ({\hat {\alpha }})\geq {\frac {1}{\operatorname {var} [\ln X]}}\geq {\frac {1}{\psi _{1}({\hat {\alpha }})-\psi _{1}({\hat {\alpha }}+{\hat {\beta }})}}}$
${\displaystyle \mathrm {var} ({\hat {\beta }})\geq {\frac {1}{\operatorname {var} [\ln(1-X)]}}\geq {\frac {1}{\psi _{1}({\hat {\beta }})-\psi _{1}({\hat {\alpha }}+{\hat {\beta }})}}}$

so the variance of the estimators increases with increasing α and β, as the logarithmic variances decrease.
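
In code (a sketch assuming SciPy's polygamma; the per-observation form of the bound shown here mirrors the inequalities above):

```python
from scipy.special import polygamma

def crlb_alpha(alpha, beta):
    """Reciprocal of the (alpha, alpha) Fisher information component, psi_1(a) - psi_1(a+b)."""
    return 1.0 / (polygamma(1, alpha) - polygamma(1, alpha + beta))

# The bound grows with the shape parameters, i.e. the log likelihood peak flattens out:
print(crlb_alpha(0.5, 0.5))  # ~ 0.3
print(crlb_alpha(5.0, 5.0))  # ~ 8.6
```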

One can also express the joint log likelihood per N iid observations in terms of the digamma function expressions for the logarithms of the sample geometric means as follows:

${\displaystyle {\frac {\ln \,{\mathcal {L}}(\alpha ,\beta \mid X)}{N}}=(\alpha -1)(\psi ({\hat {\alpha }})-\psi ({\hat {\alpha }}+{\hat {\beta }}))+(\beta -1)(\psi ({\hat {\beta }})-\psi ({\hat {\alpha }}+{\hat {\beta }}))-\ln \mathrm {B} (\alpha ,\beta )}$

This expression is identical to the negative of the cross-entropy (see section on "Quantities of information (entropy)"). Therefore, finding the maximum of the joint log likelihood of the shape parameters, per N iid observations, is identical to finding the minimum of the cross-entropy for the beta distribution, as a function of the shape parameters.

${\displaystyle {\frac {\ln \,{\mathcal {L}}(\alpha ,\beta \mid X)}{N}}=-H=-h-D_{\mathrm {KL} }=-\ln \mathrm {B} (\alpha ,\beta )+(\alpha -1)\psi ({\hat {\alpha }})+(\beta -1)\psi ({\hat {\beta }})-(\alpha +\beta -2)\psi ({\hat {\alpha }}+{\hat {\beta }})}$

with the cross-entropy defined as follows:

${\displaystyle H=\int _{0}^{1}-f(X;{\hat {\alpha }},{\hat {\beta }})\ln(f(X;\alpha ,\beta ))\,{\rm {d}}X}$
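
A quick numerical check of this equivalence (a sketch; the particular parameter values and the use of SciPy quadrature are assumptions made for illustration only):

```python
from scipy.special import digamma, betaln
from scipy.stats import beta as beta_dist
from scipy.integrate import quad

a_hat, b_hat = 2.0, 3.0  # parameters playing the role of the estimates (illustrative values)
a, b = 2.5, 2.5          # parameters at which the likelihood is evaluated

# Expected log likelihood per observation, via the digamma expressions
expected_ll = ((a - 1) * (digamma(a_hat) - digamma(a_hat + b_hat))
               + (b - 1) * (digamma(b_hat) - digamma(a_hat + b_hat))
               - betaln(a, b))

# Minus the cross-entropy, by numerical quadrature
neg_H, _ = quad(lambda x: beta_dist.pdf(x, a_hat, b_hat) * beta_dist.logpdf(x, a, b), 0, 1)

print(expected_ll, neg_H)  # the two values agree to quadrature accuracy
```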

#### Four unknown parameters

The procedure is similar to the one followed in the two unknown parameter case. If Y1, ..., YN are independent random variables each having a beta distribution with four parameters, the joint log likelihood function for N iid observations is:

{\displaystyle {\begin{aligned}\ln \,{\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)&=\sum _{i=1}^{N}\ln \,{\mathcal {L}}_{i}(\alpha ,\beta ,a,c\mid Y_{i})\\&=\sum _{i=1}^{N}\ln \,f(Y_{i};\alpha ,\beta ,a,c)\\&=\sum _{i=1}^{N}\ln \,{\frac {(Y_{i}-a)^{\alpha -1}(c-Y_{i})^{\beta -1}}{(c-a)^{\alpha +\beta -1}\mathrm {B} (\alpha ,\beta )}}\\&=(\alpha -1)\sum _{i=1}^{N}\ln(Y_{i}-a)+(\beta -1)\sum _{i=1}^{N}\ln(c-Y_{i})-N\ln \mathrm {B} (\alpha ,\beta )-N(\alpha +\beta -1)\ln(c-a)\end{aligned}}}

Finding the maximum with respect to a parameter involves taking the partial derivative with respect to that parameter and setting the expression equal to zero, yielding the maximum likelihood estimators of the parameters:

${\displaystyle {\frac {\partial \ln {\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)}{\partial \alpha }}=\sum _{i=1}^{N}\ln(Y_{i}-a)-N(-\psi (\alpha +\beta )+\psi (\alpha ))-N\ln(c-a)=0}$
${\displaystyle {\frac {\partial \ln {\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)}{\partial \beta }}=\sum _{i=1}^{N}\ln(c-Y_{i})-N(-\psi (\alpha +\beta )+\psi (\beta ))-N\ln(c-a)=0}$
${\displaystyle {\frac {\partial \ln {\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)}{\partial a}}=-(\alpha -1)\sum _{i=1}^{N}{\frac {1}{Y_{i}-a}}\,+N(\alpha +\beta -1){\frac {1}{c-a}}=0}$
${\displaystyle {\frac {\partial \ln {\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)}{\partial c}}=(\beta -1)\sum _{i=1}^{N}{\frac {1}{c-Y_{i}}}\,-N(\alpha +\beta -1){\frac {1}{c-a}}=0}$

These equations can be rearranged as the following system of four coupled equations (the first two are equations for geometric means and the second two are equations for harmonic means) in terms of the maximum likelihood estimates for the four parameters ${\displaystyle {\hat {\alpha }},{\hat {\beta }},{\hat {a}},{\hat {c}}}$:

${\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}\ln {\frac {Y_{i}-{\hat {a}}}{{\hat {c}}-{\hat {a}}}}=\psi ({\hat {\alpha }})-\psi ({\hat {\alpha }}+{\hat {\beta }})=\ln {\hat {G}}_{X}}$
${\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}\ln {\frac {{\hat {c}}-Y_{i}}{{\hat {c}}-{\hat {a}}}}=\psi ({\hat {\beta }})-\psi ({\hat {\alpha }}+{\hat {\beta }})=\ln {\hat {G}}_{1-X}}$
${\displaystyle {\frac {1}{{\frac {1}{N}}\sum _{i=1}^{N}{\frac {{\hat {c}}-{\hat {a}}}{Y_{i}-{\hat {a}}}}}}={\frac {{\hat {\alpha }}-1}{{\hat {\alpha }}+{\hat {\beta }}-1}}={\hat {H}}_{X}}$
${\displaystyle {\frac {1}{{\frac {1}{N}}\sum _{i=1}^{N}{\frac {{\hat {c}}-{\hat {a}}}{{\hat {c}}-Y_{i}}}}}={\frac {{\hat {\beta }}-1}{{\hat {\alpha }}+{\hat {\beta }}-1}}={\hat {H}}_{1-X}}$

with sample geometric means:

${\displaystyle {\hat {G}}_{X}=\prod _{i=1}^{N}\left({\frac {Y_{i}-{\hat {a}}}{{\hat {c}}-{\hat {a}}}}\right)^{\frac {1}{N}}}$
${\displaystyle {\hat {G}}_{(1-X)}=\prod _{i=1}^{N}\left({\frac {{\hat {c}}-Y_{i}}{{\hat {c}}-{\hat {a}}}}\right)^{\frac {1}{N}}}$
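
In practice these four coupled equations are solved numerically; a minimal sketch follows (not from the source: it assumes SciPy's least_squares, simulated data in place of real observations, and a crude initial guess standing in for the method-of-moments values discussed below; the bounds enforce α̂, β̂ > 1 and a < min Yᵢ < max Yᵢ < c, consistent with the restrictions noted below):

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import least_squares

def ml_equations(params, y):
    """Residuals of the four coupled ML equations for (alpha, beta, a, c)."""
    alpha, beta, a, c = params
    z = (y - a) / (c - a)  # observations rescaled to (0, 1)
    return [np.mean(np.log(z))             - (digamma(alpha) - digamma(alpha + beta)),
            np.mean(np.log(1.0 - z))       - (digamma(beta)  - digamma(alpha + beta)),
            1.0 / np.mean(1.0 / z)         - (alpha - 1.0) / (alpha + beta - 1.0),
            1.0 / np.mean(1.0 / (1.0 - z)) - (beta  - 1.0) / (alpha + beta - 1.0)]

rng = np.random.default_rng(1)
y = 1.0 + 4.0 * rng.beta(3.0, 4.0, size=5_000)  # simulated Beta(3, 4, a=1, c=5) data

x0 = [2.0, 2.0, y.min() - 0.1, y.max() + 0.1]   # crude starting point (method of moments is better)
res = least_squares(ml_equations, x0, args=(y,),
                    bounds=([1.0 + 1e-6, 1.0 + 1e-6, -np.inf, y.max() + 1e-6],
                            [np.inf, np.inf, y.min() - 1e-6, np.inf]))
alpha_hat, beta_hat, a_hat, c_hat = res.x
```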

The parameters ${\displaystyle {\hat {a}},{\hat {c}}}$ are embedded inside the geometric mean expressions in a nonlinear way (to the power 1/N). This precludes, in general, a closed form solution, even for an initial value approximation for iteration purposes. One alternative is to use as initial values for iteration the values obtained from the method of moments solution for the four parameter case. Furthermore, the expressions for the harmonic means are well-defined only for ${\displaystyle {\hat {\alpha }},{\hat {\beta }}>1}$, which precludes a maximum likelihood solution for shape parameters less than unity in the four-parameter case. Fisher's information matrix for the four parameter case is positive-definite only for α, β > 2 (for further discussion, see section on Fisher information matrix, four parameter case), for bell-shaped (symmetric or unsymmetric) beta distributions, with inflection points located to either side of the mode. The following Fisher information components (that represent the expectations of the curvature of the log likelihood function) have singularities at the following values:

${\displaystyle \alpha =2:\quad \operatorname {E} \left[-{\frac {1}{N}}{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)}{\partial a^{2}}}\right]={\mathcal {I}}_{a,a}}$
${\displaystyle \beta =2:\quad \operatorname {E} \left[-{\frac {1}{N}}{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)}{\partial c^{2}}}\right]={\mathcal {I}}_{c,c}}$
${\displaystyle \alpha =1:\quad \operatorname {E} \left[-{\frac {1}{N}}{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)}{\partial \alpha \partial a}}\right]={\mathcal {I}}_{\alpha ,a}}$
${\displaystyle \beta =1:\quad \operatorname {E} \left[-{\frac {1}{N}}{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta ,a,c\mid Y)}{\partial \beta \partial c}}\right]={\mathcal {I}}_{\beta ,c}}$

(for further discussion see section on Fisher information matrix). Thus, it is not possible to strictly carry out the maximum likelihood estimation for some well-known distributions belonging to the four-parameter beta distribution family, like the uniform distribution (Beta(1, 1, a, c)) and the arcsine distribution (Beta(1/2, 1/2, a, c)). N. L. Johnson and S. Kotz [1] ignore the equations for the harmonic means and instead suggest "If a and c are unknown, and maximum likelihood estimators of a, c, α and β are required, the above procedure (for the two unknown parameter case, with X transformed as X = (Y − a)/(c − a)) can be repeated using a succession of trial values of a and c, until the pair (a, c) for which maximum likelihood (given a and c) is as great as possible, is attained" (where, for the purpose of clarity, their notation for the parameters has been translated into the present notation).

### Fisher information matrix

Let a random variable X have a probability density f(x;α). The partial derivative with respect to the (unknown, and to be estimated) parameter α of the log likelihood function is called the score. The second moment of the score is called the Fisher information:

${\displaystyle {\mathcal {I}}(\alpha )=\operatorname {E} \left[\left({\frac {\partial }{\partial \alpha }}\ln {\mathcal {L}}(\alpha \mid X)\right)^{2}\right],}$

The expectation of the score is zero, therefore the Fisher information is also the second moment centered on the mean of the score: the variance of the score.

If the log likelihood function is twice differentiable with respect to the parameter α, and under certain regularity conditions, [45] then the Fisher information may also be written as follows (which is often a more convenient form for calculation purposes):

${\displaystyle {\mathcal {I}}(\alpha )=-\operatorname {E} \left[{\frac {\partial ^{2}}{\partial \alpha ^{2}}}\ln({\mathcal {L}}(\alpha \mid X))\right].}$

Thus, the Fisher information is the negative of the expectation of the second derivative with respect to the parameter α of the log likelihood function. Therefore, Fisher information is a measure of the curvature of the log likelihood function of α. A low-curvature (and therefore high radius of curvature), flatter log likelihood function curve has low Fisher information, while a log likelihood function curve with large curvature (and therefore low radius of curvature) has high Fisher information. When the Fisher information matrix is computed at the estimates of the parameters ("the observed Fisher information matrix"), it is equivalent to replacing the true log likelihood surface by a Taylor series approximation, taken as far as the quadratic terms. [46] The word information, in the context of Fisher information, refers to information about the parameters: information relevant to estimation, sufficiency, and the properties of the variances of estimators. The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any unbiased estimator of a parameter α:

${\displaystyle \operatorname {var} [{\hat {\alpha }}]\geq {\frac {1}{{\mathcal {I}}(\alpha )}}.}$

The precision to which one can estimate the parameter α is limited by the Fisher information of the log likelihood function. The Fisher information is a measure of the minimum error involved in estimating a parameter of a distribution, and it can be viewed as a measure of the resolving power of an experiment needed to discriminate between two alternative hypotheses about a parameter. [47]
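
Both equivalent forms of the Fisher information can be illustrated numerically for the α-component of the beta distribution (a Monte Carlo sketch; the parameter values, seed, and sample size are assumptions of this illustration):

```python
import numpy as np
from scipy.special import digamma, polygamma

alpha, beta = 2.0, 5.0
rng = np.random.default_rng(2)
x = rng.beta(alpha, beta, size=200_000)

# Score with respect to alpha for one observation: ln x - (psi(alpha) - psi(alpha + beta))
score = np.log(x) - (digamma(alpha) - digamma(alpha + beta))

fisher_mc   = np.var(score)                                     # variance of the score
fisher_true = polygamma(1, alpha) - polygamma(1, alpha + beta)  # trigamma expression (see below)
print(fisher_mc, fisher_true)                                   # agree up to Monte Carlo error
```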

When there are N parameters

${\displaystyle {\begin{bmatrix}\theta _{1}\\\theta _{2}\\\dots \\\theta _{N}\end{bmatrix}},}$

then the Fisher information takes the form of an N×N positive semidefinite symmetric matrix, the Fisher Information Matrix, with typical element:

${\displaystyle {({\mathcal {I}}(\theta ))}_{i,j}=\operatorname {E} \left[\left({\frac {\partial }{\partial \theta _{i}}}\ln {\mathcal {L}}\right)\left({\frac {\partial }{\partial \theta _{j}}}\ln {\mathcal {L}}\right)\right].}$

Under certain regularity conditions, [45] the Fisher Information Matrix may also be written in the following form, which is often more convenient for computation:

${\displaystyle {({\mathcal {I}}(\theta ))}_{i,j}=-\operatorname {E} \left[{\frac {\partial ^{2}}{\partial \theta _{i}\,\partial \theta _{j}}}\ln({\mathcal {L}})\right]\,.}$

With X1, ..., XN iid random variables, an N-dimensional "box" can be constructed with sides X1, ..., XN. Costa and Cover [48] show that the (Shannon) differential entropy h(X) is related to the volume of the typical set (having the sample entropy close to the true entropy), while the Fisher information is related to the surface of this typical set.

#### Two parameters

For X1, ..., XN independent random variables each having a beta distribution parametrized with shape parameters α and β, the joint log likelihood function for N iid observations is:

${\displaystyle \ln({\mathcal {L}}(\alpha ,\beta \mid X))=(\alpha -1)\sum _{i=1}^{N}\ln X_{i}+(\beta -1)\sum _{i=1}^{N}\ln(1-X_{i})-N\ln \mathrm {B} (\alpha ,\beta )}$

therefore the joint log likelihood function per N iid observations is:

${\displaystyle {\frac {1}{N}}\ln({\mathcal {L}}(\alpha ,\beta \mid X))=(\alpha -1){\frac {1}{N}}\sum _{i=1}^{N}\ln X_{i}+(\beta -1){\frac {1}{N}}\sum _{i=1}^{N}\ln(1-X_{i})-\,\ln \mathrm {B} (\alpha ,\beta )}$

For the two-parameter case, the Fisher information has 4 components: 2 diagonal and 2 off-diagonal. Since the Fisher information matrix is symmetric, only one of the off-diagonal components is independent. Therefore, the Fisher information matrix has 3 independent components (2 diagonal and 1 off-diagonal).

Aryal and Nadarajah [49] calculated Fisher's information matrix for the four-parameter case, from which the two parameter case can be obtained as follows:

${\displaystyle -{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{N\partial \alpha ^{2}}}=\operatorname {var} [\ln(X)]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )={\mathcal {I}}_{\alpha ,\alpha }=\operatorname {E} \left[-{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{N\partial \alpha ^{2}}}\right]=\ln \operatorname {var} _{GX}}$
${\displaystyle -{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{N\,\partial \beta ^{2}}}=\operatorname {var} [\ln(1-X)]=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )={\mathcal {I}}_{\beta ,\beta }=\operatorname {E} \left[-{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{N\partial \beta ^{2}}}\right]=\ln \operatorname {var} _{G(1-X)}}$
${\displaystyle -{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{N\,\partial \alpha \,\partial \beta }}=\operatorname {cov} [\ln X,\ln(1-X)]=-\psi _{1}(\alpha +\beta )={\mathcal {I}}_{\alpha ,\beta }=\operatorname {E} \left[-{\frac {\partial ^{2}\ln {\mathcal {L}}(\alpha ,\beta \mid X)}{N\,\partial \alpha \,\partial \beta }}\right]=\ln \operatorname {cov} _{G{X,(1-X)}}}$

Since the Fisher information matrix is symmetric

${\displaystyle {\mathcal {I}}_{\alpha ,\beta }={\mathcal {I}}_{\beta ,\alpha }=\ln \operatorname {cov} _{G{X,(1-X)}}}$

The Fisher information components are equal to the log geometric variances and the log geometric covariance. Therefore, they can be expressed as trigamma functions, denoted ψ1(α), the second of the polygamma functions, defined as the derivative of the digamma function:

${\displaystyle \psi _{1}(\alpha )={\frac {d^{2}\ln \Gamma (\alpha )}{d\alpha ^{2}}}={\frac {d\,\psi (\alpha )}{d\alpha }}.}$

These derivatives are also derived in the section titled "Parameter estimation", "Maximum likelihood", "Two unknown parameters," and plots of the log likelihood function are also shown in that section. The section titled "Geometric variance and covariance" contains plots and further discussion of the Fisher information matrix components: the log geometric variances and log geometric covariance as a function of the shape parameters α and β. The section titled "Other moments", "Moments of transformed random variables", "Moments of logarithmically transformed random variables" contains formulas for moments of logarithmically transformed random variables. Images for the Fisher information components ${\displaystyle {\mathcal {I}}_{\alpha ,\alpha },{\mathcal {I}}_{\beta ,\beta }}$ and ${\displaystyle {\mathcal {I}}_{\alpha ,\beta }}$ are shown in the section titled "Geometric variance".
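
The components above assemble into a 2×2 matrix; a minimal sketch follows (the function name and the use of SciPy's polygamma are assumptions of this illustration):

```python
import numpy as np
from scipy.special import polygamma

def beta_fisher_info(alpha, beta):
    """Per-observation 2x2 Fisher information matrix of Beta(alpha, beta)."""
    tri_a  = polygamma(1, alpha)         # psi_1(alpha)
    tri_b  = polygamma(1, beta)          # psi_1(beta)
    tri_ab = polygamma(1, alpha + beta)  # psi_1(alpha + beta)
    return np.array([[tri_a - tri_ab, -tri_ab],
                     [-tri_ab,        tri_b - tri_ab]])
```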

The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components of the Fisher information matrix, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution is:

{\displaystyle {\begin{aligned}\det({\mathcal {I}}(\alpha ,\beta ))&={\mathcal {I}}_{\alpha ,\alpha }{\mathcal {I}}_{\beta ,\beta }-{\mathcal {I}}_{\alpha ,\beta }{\mathcal {I}}_{\alpha ,\beta }\\[4pt]&=(\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta ))(\psi _{1}(\beta )-\psi _{1}(\alpha +\beta ))-(-\psi _{1}(\alpha +\beta ))(-\psi _{1}(\alpha +\beta ))\\[4pt]&=\psi _{1}(\alpha )\psi _{1}(\beta )-(\psi _{1}(\alpha )+\psi _{1}(\beta ))\psi _{1}(\alpha +\beta )\\[4pt]\lim _{\alpha \to 0}\det({\mathcal {I}}(\alpha ,\beta ))&=\lim _{\beta \to 0}\det({\mathcal {I}}(\alpha ,\beta ))=\infty \\[4pt]\lim _{\alpha \to \infty }\det({\mathcal {I}}(\alpha ,\beta ))&=\lim _{\beta \to \infty }\det({\mathcal {I}}(\alpha ,\beta ))=0\end{aligned}}}

From Sylvester's criterion (checking that all leading principal minors are positive), it follows that the Fisher information matrix for the two-parameter case is positive-definite (under the standard condition that the shape parameters are positive, α > 0 and β > 0).
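
For example (a sketch; the grid values are illustrative and the Jeffreys prior normalisation is omitted), the determinant, and hence the unnormalised Jeffreys prior ${\displaystyle {\sqrt {\det {\mathcal {I}}(\alpha ,\beta )}}}$, can be evaluated directly, and it is indeed positive over the values checked:

```python
from scipy.special import polygamma

def det_fisher_info(alpha, beta):
    """det I(alpha, beta) = psi_1(a) psi_1(b) - (psi_1(a) + psi_1(b)) psi_1(a + b)."""
    tri_a, tri_b, tri_ab = polygamma(1, alpha), polygamma(1, beta), polygamma(1, alpha + beta)
    return tri_a * tri_b - (tri_a + tri_b) * tri_ab

for a, b in [(0.1, 0.1), (1.0, 1.0), (10.0, 10.0)]:
    print(a, b, det_fisher_info(a, b))  # large for small shape parameters, tending to 0 for large ones
```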

#### Four parameters

If Y1, ..., YN are independent random variables each having a beta distribution with four parameters: the exponents α and β, and also a (the minimum of the distribution range), and c (the maximum of the distribution range) (section titled "Alternative parametrizations", "Four parameters"), with probability density function:

${\displaystyle f(y;\alpha ,\beta ,a,c)={\frac {f(x;\alpha ,\beta )}{c-a}}={\frac {\left({\frac {y-a}{c-a}}\right)^{\alpha -1}\left({\frac {c-y}{c-a}}\right)^{\beta -1}}{(c-a)B(\alpha ,\beta )}}={\frac {(y-a)^{\alpha -1}(c-y)^{\beta -1}}{(c-a)^{\alpha +\beta -1}B(\alpha ,\beta )}}.}$

The joint log likelihood function per N iid observations is:

${\displaystyle {\frac {1}{N}}\ln({\mathcal {L}}(\alpha ,\beta ,a,c\mid Y))={\frac {\alpha -1}{N}}\sum _{i=1}^{N}\ln(Y_{i}-a)+{\frac {\beta -1}{N}}\sum _{i=1}^{N}\ln(c-Y_{i})-\ln \mathrm {B} (\alpha ,\beta )-(\alpha +\beta -1)\ln(c-a)}$
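
As with the two-parameter case, this expression maps directly onto a small routine (a sketch; the function name and the use of scipy.special.betaln are assumptions of this illustration):

```python
import numpy as np
from scipy.special import betaln

def loglik_per_obs_4p(alpha, beta, a, c, y):
    """Joint log likelihood per observation for the four-parameter beta distribution."""
    y = np.asarray(y)
    return ((alpha - 1.0) * np.mean(np.log(y - a))
            + (beta - 1.0) * np.mean(np.log(c - y))
            - betaln(alpha, beta)
            - (alpha + beta - 1.0) * np.log(c - a))
```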

For the four-parameter case, the Fisher information has 4 × 4 = 16 components. It has 12 off-diagonal components (the 16 total minus the 4 diagonal ones). Since the Fisher information matrix is symmetric, half of these off-diagonal components (12/2 = 6) are independent. Therefore, the Fisher information matrix has 6 independent off-diagonal + 4 diagonal = 10 independent components. Aryal and Nadarajah [49] calculated Fisher's information matrix for the four-parameter case as follows: