Triangular distribution

Triangular

Probability density function
[plot: probability density function of the triangular distribution]

Cumulative distribution function
[plot: cumulative distribution function of the triangular distribution]

Parameters: $a \in (-\infty, \infty)$ (lower limit), $b > a$ (upper limit), $a \le c \le b$ (mode)
Support: $a \le x \le b$
PDF: $f(x) = \begin{cases} 0 & \text{for } x < a, \\ \frac{2(x-a)}{(b-a)(c-a)} & \text{for } a \le x < c, \\ \frac{2}{b-a} & \text{for } x = c, \\ \frac{2(b-x)}{(b-a)(b-c)} & \text{for } c < x \le b, \\ 0 & \text{for } x > b \end{cases}$
CDF: $F(x) = \begin{cases} 0 & \text{for } x \le a, \\ \frac{(x-a)^2}{(b-a)(c-a)} & \text{for } a < x \le c, \\ 1 - \frac{(b-x)^2}{(b-a)(b-c)} & \text{for } c < x < b, \\ 1 & \text{for } x \ge b \end{cases}$
Mean: $\frac{a+b+c}{3}$
Median: $a + \sqrt{\frac{(b-a)(c-a)}{2}}$ for $c \ge \frac{a+b}{2}$; $\; b - \sqrt{\frac{(b-a)(b-c)}{2}}$ for $c \le \frac{a+b}{2}$
Mode: $c$
Variance: $\frac{a^2+b^2+c^2-ab-ac-bc}{18}$
Skewness: $\frac{\sqrt{2}\,(a+b-2c)(2a-b-c)(a-2b+c)}{5\,(a^2+b^2+c^2-ab-ac-bc)^{3/2}}$
Ex. kurtosis: $-\frac{3}{5}$
Entropy: $\frac{1}{2} + \ln\!\left(\frac{b-a}{2}\right)$
MGF: $2\,\frac{(b-c)e^{at} - (b-a)e^{ct} + (c-a)e^{bt}}{(b-a)(c-a)(b-c)\,t^2}$
CF: $-2\,\frac{(b-c)e^{iat} - (b-a)e^{ict} + (c-a)e^{ibt}}{(b-a)(c-a)(b-c)\,t^2}$

In probability theory and statistics, the triangular distribution is a continuous probability distribution with lower limit a, upper limit b and mode c, where a < b and a ≤ c ≤ b.

Special cases

Mode at a bound

The distribution simplifies when c = a or c = b. For example, if a = 0, b = 1 and c = 1, then the PDF and CDF become:

$$f(x) = 2x, \qquad F(x) = x^2, \qquad \text{for } 0 \le x \le 1.$$

Distribution of the absolute difference of two standard uniform variables

This distribution for a = 0, b = 1 and c = 0 is the distribution of X = |X1 − X2|, where X1, X2 are two independent random variables with standard uniform distribution. This follows since P(|X1 − X2| > x) = (1 − x)² for 0 ≤ x ≤ 1, so F(x) = 1 − (1 − x)², matching the general CDF with these parameters.

Symmetric triangular distribution

The symmetric case arises when c = (a + b) / 2. In this case, an alternate form of the density is:

$$f(x) = \frac{2}{b-a}\left(1 - \left|\frac{a + b - 2x}{b - a}\right|\right), \qquad a \le x \le b.$$

Distribution of the mean of two standard uniform variables

This distribution, for a = 0, b = 1 and c = 0.5 (the mode, i.e. the peak, is exactly in the middle of the interval), corresponds to the distribution of the mean of two standard uniform variables, i.e. the distribution of X = (X1 + X2) / 2, where X1, X2 are two independent random variables with standard uniform distribution on [0, 1]. [1]
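Specializing the general PDF to these parameters gives the tent-shaped density:

$$f(x) = \begin{cases} 4x & 0 \le x \le \tfrac{1}{2}, \\ 4(1-x) & \tfrac{1}{2} \le x \le 1. \end{cases}$$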

Generating triangular-distributed random variates

Given a random variate U drawn from the uniform distribution on the interval (0, 1), the variate

$$X = \begin{cases} a + \sqrt{U(b-a)(c-a)} & \text{for } 0 < U < F(c), \\ b - \sqrt{(1-U)(b-a)(b-c)} & \text{for } F(c) \le U < 1, \end{cases}$$ [2]

where $F(c) = (c-a)/(b-a)$, has a triangular distribution with parameters $a$, $b$ and $c$. This can be obtained by inverting the cumulative distribution function.
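A minimal sketch of this inverse-CDF recipe (Python with NumPy assumed; the function name and seed are illustrative, not from the source):

```python
import numpy as np

def triangular_variate(a, b, c, rng):
    """Draw one triangular(a, c, b) variate by inverting the CDF."""
    u = rng.uniform(0.0, 1.0)          # U ~ Uniform(0, 1)
    fc = (c - a) / (b - a)             # F(c): CDF evaluated at the mode
    if u < fc:
        return a + np.sqrt(u * (b - a) * (c - a))         # left branch
    return b - np.sqrt((1.0 - u) * (b - a) * (b - c))     # right branch

# Usage: 10,000 draws with a = 0, c = 2, b = 5.
rng = np.random.default_rng(42)
samples = [triangular_variate(0.0, 5.0, 2.0, rng) for _ in range(10_000)]
print(np.mean(samples))  # should be near (a + b + c)/3 = 7/3
```

NumPy also provides this generator directly as rng.triangular(left, mode, right).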

Use of the distribution

The triangular distribution is typically used as a subjective description of a population for which there is only limited sample data, especially in cases where the relationship between variables is known but data are scarce (possibly because of the high cost of collection). It is based on knowledge of the minimum and maximum and an "inspired guess" [3] as to the modal value. For these reasons, the triangular distribution has been called a "lack of knowledge" distribution.

Business simulations

The triangular distribution is therefore often used in business decision making, particularly in simulations. Generally, when not much is known about the distribution of an outcome (say, only its smallest and largest values), it is possible to use the uniform distribution. But if the most likely outcome is also known, the outcome can be simulated by a triangular distribution; see, for example, its use in corporate finance. A sketch of such a simulation follows.
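The sketch below illustrates the idea with a Monte Carlo run in which only the minimum, most likely, and maximum values of each input are "known" (all figures, variable names, and the fixed price of 20.0 are hypothetical, invented for this example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical inputs described only by min, most-likely, max.
unit_cost = rng.triangular(left=8.0, mode=10.0, right=15.0, size=n)
units_sold = rng.triangular(left=900, mode=1_000, right=1_200, size=n)

profit = units_sold * (20.0 - unit_cost)   # assumed fixed sale price of 20.0
print(f"mean profit ≈ {profit.mean():,.0f}")
print("5th–95th percentile:", np.percentile(profit, [5, 95]))
```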

Project management

The triangular distribution, along with the PERT distribution, is also widely used in project management (as an input into PERT and hence critical path method (CPM)) to model events which take place within an interval defined by a minimum and maximum value.
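For instance, with illustrative three-point estimates of a = 2, c = 4 and b = 12 days (numbers invented for this example), the two distributions weight the mode differently:

$$\text{mean}_{\text{triangular}} = \frac{a + c + b}{3} = \frac{2 + 4 + 12}{3} = 6, \qquad \text{mean}_{\text{PERT}} = \frac{a + 4c + b}{6} = \frac{2 + 16 + 12}{6} = 5.$$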

Audio dithering

The symmetric triangular distribution is commonly used in audio dithering, where it is called TPDF (triangular probability density function).
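A minimal sketch of TPDF dithering under assumed parameters (the 16-bit target, ±1 LSB dither width, and function name are illustrative choices): summing two independent uniform noise sources of half a quantization step each yields triangularly distributed dither, which is added before rounding.

```python
import numpy as np

rng = np.random.default_rng(1)

def tpdf_quantize(x, bits=16):
    """Requantize a float signal in [-1.0, 1.0) to `bits` bits with TPDF dither."""
    q = 2.0 ** (1 - bits)  # size of one quantization step (1 LSB)
    # Sum of two independent uniforms on [-q/2, q/2) is triangular on (-q, q).
    dither = rng.uniform(-q / 2, q / 2, x.shape) + rng.uniform(-q / 2, q / 2, x.shape)
    return np.round((x + dither) / q) * q

# Usage: dither a quiet 440 Hz sine sampled at 48 kHz.
t = np.arange(48_000) / 48_000
out = tpdf_quantize(0.25 * np.sin(2 * np.pi * 440 * t))
```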

See also

Expected value
Random variable
Variance
Exponential distribution
Chebyshev's inequality
Bernoulli distribution
Beta distribution
Cantor distribution
Rejection sampling
Continuous uniform distribution
Empirical distribution function
Beta-binomial distribution
Contraharmonic mean
Truncated normal distribution
Irwin–Hall distribution
Product distribution
Bates distribution
Concentration inequality
Beta rectangular distribution
Complex random variable

References

  1. Samuel Kotz and Johan René van Dorp. Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications. https://books.google.de/books?id=JO7ICgAAQBAJ&lpg=PA1&dq=chapter%201%20dig%20out%20suitable%20substitutes%20of%20the%20beta%20distribution%20one%20of%20our%20goals&pg=PA3#v=onepage&q&f=false
  2. Archived chapter PDF: https://web.archive.org/web/20140407075018/http://www.asianscientist.com/books/wp-content/uploads/2013/06/5720_chap1.pdf
  3. "Archived copy" (PDF). Archived from the original on 2006-09-23. Retrieved 2006-09-23.