
*Plots: probability density function and cumulative distribution function.*

| | |
|---|---|
| Parameters | $a$ (lower limit), $b$ (upper limit), $c$ (mode), with $a < b$ and $a \le c \le b$ |
| Support | $a \le x \le b$ |
| PDF | $\begin{cases} \dfrac{2(x-a)}{(b-a)(c-a)} & a \le x < c \\ \dfrac{2}{b-a} & x = c \\ \dfrac{2(b-x)}{(b-a)(b-c)} & c < x \le b \\ 0 & \text{otherwise} \end{cases}$ |
| CDF | $\begin{cases} 0 & x \le a \\ \dfrac{(x-a)^2}{(b-a)(c-a)} & a < x \le c \\ 1 - \dfrac{(b-x)^2}{(b-a)(b-c)} & c < x < b \\ 1 & x \ge b \end{cases}$ |
| Mean | $\dfrac{a+b+c}{3}$ |
| Median | $a + \sqrt{\dfrac{(b-a)(c-a)}{2}}$ for $c \ge \dfrac{a+b}{2}$; $\;b - \sqrt{\dfrac{(b-a)(b-c)}{2}}$ for $c \le \dfrac{a+b}{2}$ |
| Mode | $c$ |
| Variance | $\dfrac{a^2+b^2+c^2-ab-ac-bc}{18}$ |
| Skewness | $\dfrac{\sqrt{2}\,(a+b-2c)(2a-b-c)(a-2b+c)}{5\,(a^2+b^2+c^2-ab-ac-bc)^{3/2}}$ |
| Ex. kurtosis | $-\dfrac{3}{5}$ |
| Entropy | $\dfrac{1}{2} + \ln\dfrac{b-a}{2}$ |
| MGF | $\dfrac{2\big((b-c)e^{at} - (b-a)e^{ct} + (c-a)e^{bt}\big)}{(b-a)(c-a)(b-c)\,t^2}$ |
| CF | $\dfrac{-2\big((b-c)e^{iat} - (b-a)e^{ict} + (c-a)e^{ibt}\big)}{(b-a)(c-a)(b-c)\,t^2}$ |

In probability theory and statistics, the **triangular distribution** is a continuous probability distribution with lower limit *a*, upper limit *b* and mode *c*, where *a* < *b* and *a* ≤ *c* ≤ *b*.

**Probability theory** is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event.

**Statistics** is a branch of mathematics dealing with data collection, organization, analysis, interpretation and presentation. In applying statistics to, for example, a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. See glossary of probability and statistics.

In probability theory and statistics, a **probability distribution** is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment. In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. For instance, if the random variable X is used to denote the outcome of a coin toss, then the probability distribution of X would take the value 0.5 for *X* = heads, and 0.5 for *X* = tails. Examples of random phenomena can include the results of an experiment or survey.

The distribution simplifies when *c* = *a* or *c* = *b*. For example, if *a* = 0, *b* = 1 and *c* = 1, then the PDF and CDF become:

$$f(x) = 2x, \qquad F(x) = x^2, \qquad \text{for } 0 \le x \le 1.$$

In probability theory, a **probability density function** (**PDF**), or **density** of a continuous random variable, is a function whose value at any given sample in the sample space can be interpreted as providing a *relative likelihood* that the value of the random variable would equal that sample. In other words, while the *absolute likelihood* for a continuous random variable to take on any particular value is 0, the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample.

In probability theory and statistics, the **cumulative distribution function** (**CDF**) of a real-valued random variable *X*, or just **distribution function** of *X*, evaluated at *x*, is the probability that *X* will take a value less than or equal to *x*.

This distribution for *a* = 0, *b* = 1 and *c* = 0 is the distribution of *X* = |*X*_{1} − *X*_{2}|, where *X*_{1}, *X*_{2} are two independent random variables with standard uniform distribution.

In probability theory and statistics, the **continuous uniform distribution** or **rectangular distribution** is a family of symmetric probability distributions such that for each member of the family, all intervals of the same length on the distribution's support are equally probable. The support is defined by the two parameters, *a* and *b*, which are its minimum and maximum values. The distribution is often abbreviated *U*(*a*,*b*). It is the maximum entropy probability distribution for a random variable *X* under no constraint other than that it is contained in the distribution's support.

The symmetric case arises when *c* = (*a* + *b*) / 2.

This distribution for *a* = 0, *b* = 1 and *c* = 0.5—the mode (i.e., the peak) is exactly in the middle of the interval—corresponds to the distribution of the mean of two standard uniform variables, i.e., the distribution of *X* = (*X*_{1} + *X*_{2}) / 2, where *X*_{1}, *X*_{2} are two independent random variables with standard uniform distribution in [0, 1].^{ [1] }
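Both of the distributional facts above (the difference |*X*₁ − *X*₂| and the mean (*X*₁ + *X*₂)/2 of two standard uniforms) are easy to check numerically. Below is a minimal Monte Carlo sketch in Python using NumPy; the sample size and seed are arbitrary choices for illustration, and the expected moments come from the mean and variance formulas in the table above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)

# |X1 - X2| should follow Triangular(a=0, c=0, b=1):
# mean = (a+b+c)/3 = 1/3, variance = (a^2+b^2+c^2-ab-ac-bc)/18 = 1/18
diff = np.abs(x1 - x2)
print(diff.mean(), diff.var())   # approximately 0.3333 and 0.0556

# (X1 + X2)/2 should follow Triangular(a=0, c=0.5, b=1):
# mean = 1/2, variance = 1/24
avg = (x1 + x2) / 2
print(avg.mean(), avg.var())     # approximately 0.5 and 0.0417
```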

Given a random variate *U* drawn from the uniform distribution on the interval (0, 1), the variate

$$X = \begin{cases} a + \sqrt{U(b-a)(c-a)} & \text{for } 0 < U < F(c) \\ b - \sqrt{(1-U)(b-a)(b-c)} & \text{for } F(c) \le U < 1 \end{cases}$$ ^{ [2] }

where $F(c) = (c-a)/(b-a)$, has a triangular distribution with parameters *a*, *b* and *c*. This can be obtained by inverting the cumulative distribution function.
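As a sketch of how this inversion can be coded, the function below applies the two branches above (the name `triangular_from_uniform` is an illustrative choice, not a standard routine). Python's standard library offers the equivalent `random.triangular(low, high, mode)`.

```python
import math
import random

def triangular_from_uniform(u, a, b, c):
    """Map a standard-uniform variate u to Triangular(a, c, b)
    by inverting the CDF, per the two branches given above."""
    fc = (c - a) / (b - a)          # CDF value at the mode
    if u < fc:
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1 - u) * (b - a) * (b - c))

# Example: one draw with minimum 2, maximum 10, mode 5.
print(triangular_from_uniform(random.random(), a=2, b=10, c=5))
```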

The triangular distribution is typically used as a subjective description of a population for which there is only limited sample data, and especially in cases where the relationship between variables is known but data is scarce (possibly because of the high cost of collection). It is based on a knowledge of the minimum and maximum and an "inspired guess"^{ [3] } as to the modal value. For these reasons, the triangle distribution has been called a "lack of knowledge" distribution.

The triangular distribution is therefore often used in business decision making, particularly in simulations. Generally, when not much is known about the distribution of an outcome (say, only its smallest and largest values), it is possible to use the uniform distribution. But if the most likely outcome is also known, then the outcome can be simulated by a triangular distribution. See, for example, the discussion under corporate finance.
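As an illustration of such a simulation, the sketch below draws cost outcomes from a triangular distribution using Python's standard-library `random.triangular`; the minimum, mode and maximum figures are invented for the example, not taken from the article.

```python
import random
from statistics import mean

# Illustrative three-point estimate for a project cost (figures invented):
low, most_likely, high = 80.0, 100.0, 150.0

draws = [random.triangular(low, high, most_likely) for _ in range(100_000)]

print(mean(draws))                                # near (80 + 100 + 150)/3 = 110
print(sum(d > 130 for d in draws) / len(draws))   # estimated P(cost > 130)
```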

The triangular distribution, along with the PERT distribution, is also widely used in project management (as an input into PERT and hence critical path method (CPM)) to model events which take place within an interval defined by a minimum and maximum value.

The symmetric triangular distribution is commonly used in audio dithering, where it is called TPDF (triangular probability density function).
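A common way to produce TPDF dither is to sum two independent uniform noise samples, each spanning one quantization step, before rounding. The sketch below shows the idea; the function name and the 16-bit-style step size are illustrative assumptions.

```python
import random

def quantize_with_tpdf_dither(sample, step):
    """Quantize `sample` to multiples of `step` after adding TPDF dither:
    the sum of two independent uniforms on [-step/2, step/2], which has a
    symmetric triangular distribution spanning two quantization steps."""
    dither = random.uniform(-step / 2, step / 2) + random.uniform(-step / 2, step / 2)
    return round((sample + dither) / step) * step

# Illustrative: a 16-bit-style quantization step.
step = 1 / 32768
print(quantize_with_tpdf_dither(0.00002, step))
```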

- Trapezoidal distribution
- Thomas Simpson
- Three-point estimation
- Five-number summary
- Seven-number summary
- Triangular function
- Central limit theorem — The triangle distribution often occurs as a result of adding two uniform random variables together. In other words, the triangle distribution is often (not always) the result of the very first iteration of the central limit theorem summing process (i.e. *n* = 2). In this sense, the triangle distribution can occasionally occur naturally. If this process of summing together more random variables continues (i.e. *n* ≥ 3), then the distribution will become increasingly bell-shaped.
- Irwin–Hall distribution — Using an Irwin–Hall distribution is an easy way to generate a triangle distribution (see the sketch after this list).
- Bates distribution — Similar to the Irwin–Hall distribution, but with the values rescaled back into the 0 to 1 range. Useful for computation of a triangle distribution which can subsequently be rescaled and shifted to create other triangle distributions outside of the 0 to 1 range.
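Following up on the Irwin–Hall and Bates entries above, here is a minimal sketch (function names are illustrative) that generates a symmetric triangle on [0, 2] as the sum of two standard uniforms, then rescales it onto an arbitrary interval [*a*, *b*].

```python
import random

def triangle_via_irwin_hall():
    """Irwin-Hall with n = 2: the sum of two standard uniforms follows a
    symmetric triangular distribution on [0, 2] with mode 1."""
    return random.random() + random.random()

def rescaled_triangle(a, b):
    """Shift and rescale the [0, 2] triangle onto [a, b] (mode at midpoint)."""
    return a + (b - a) * triangle_via_irwin_hall() / 2

samples = [rescaled_triangle(-1.0, 1.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to 0, the midpoint mode
```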

In probability theory and statistics, the **binomial distribution** with parameters *n* and *p* is the discrete probability distribution of the number of successes in a sequence of *n* independent experiments, each asking a yes–no question, and each with its own boolean-valued outcome: a random variable containing a single bit of information: success/yes/true/one or failure/no/false/zero. A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., *n* = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

In probability theory, the **expected value** of a random variable, intuitively, is the long-run average value of repetitions of the **same experiment** it represents. For example, the expected value in rolling a six-sided die is 3.5, because the average of all the numbers that come up is 3.5 as the number of rolls approaches infinity. In other words, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the **expectation**, **mathematical expectation**, **EV**, **average**, **mean value**, **mean**, or **first moment**.

In probability and statistics, a **random variable**, **random quantity**, **aleatory variable**, or **stochastic variable** is a variable whose possible values are outcomes of a random phenomenon. More specifically, a random variable is defined as a function that maps the outcomes of an unpredictable process to numerical quantities, typically real numbers. It is a variable, in the sense that it depends on the outcome of an underlying process providing the input to this function, and it is random in the sense that the underlying process is assumed to be random.

In probability theory and statistics, **variance** is the expectation of the squared deviation of a random variable from its mean. Informally, it measures how far a set of (random) numbers are spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by σ², *s*², or Var(*X*).

In probability theory and statistics, the **exponential distribution** is the probability distribution that describes the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson point processes it is found in various other contexts.

In probability theory and statistics, the **multivariate normal distribution**, **multivariate Gaussian distribution**, or **joint normal distribution** is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be *k*-variate normally distributed if every linear combination of its *k* components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.

In probability theory, a **log-normal distribution** is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then *Y* = ln(*X*) has a normal distribution. Likewise, if Y has a normal distribution, then the exponential function of Y, *X* = exp(*Y*), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. The distribution is occasionally referred to as the **Galton distribution** or **Galton's distribution**, after Francis Galton. The log-normal distribution also has been associated with other names, such as McAlister, Gibrat and Cobb–Douglas.

In probability theory, **Chebyshev's inequality** guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. Specifically, no more than 1/*k*^{2} of the distribution's values can be more than *k* standard deviations away from the mean. In statistics, the rule is often called Chebyshev's theorem, concerning the range of standard deviations around the mean. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers.

In probability theory and statistics, the **Bernoulli distribution**, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability *p* and the value 0 with probability *q* = 1 − *p*; that is, the probability distribution of any single experiment that asks a yes–no question. The question results in a boolean-valued outcome, a single bit of information whose value is success/yes/true/one with probability *p* and failure/no/false/zero with probability *q*. It can be used to represent a coin toss where 1 and 0 would represent "heads" and "tails", respectively, and *p* would be the probability of the coin landing on heads or tails, respectively. In particular, unfair coins would have *p* ≠ 1/2.

In probability theory and statistics, the **beta distribution** is a family of continuous probability distributions defined on the interval [0, 1] parametrized by two positive shape parameters, denoted by *α* and *β*, that appear as exponents of the random variable and control the shape of the distribution. It is a special case of the Dirichlet distribution.

In probability theory, although simple examples illustrate that linear uncorrelatedness of two random variables does not in general imply their independence, it is sometimes mistakenly thought that it does imply that when the two random variables are normally distributed. This article demonstrates that assumption of normal distributions does not have that consequence, although the multivariate normal distribution, including the bivariate normal distribution, does.

In mathematics, a **contraharmonic mean** is a function complementary to the harmonic mean. The contraharmonic mean is a special case of the Lehmer mean, $L_p$, where *p* = 2.

In statistics, the **probability integral transform** or **transformation** relates to the result that data values that are modelled as being random variables from any given continuous distribution can be converted to random variables having a standard uniform distribution. This holds exactly provided that the distribution being used is the true distribution of the random variables; if the distribution is one fitted to the data, the result will hold approximately in large samples.
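Applied to the present article's subject: if *X* follows a triangular distribution, then *F*(*X*) is standard uniform. A minimal sketch follows, using the CDF branches from the table above; the helper name `triangular_cdf` is an illustrative choice.

```python
import random

def triangular_cdf(x, a, b, c):
    """CDF of Triangular(a, c, b), following the piecewise form given above."""
    if x <= a:
        return 0.0
    if x <= c:
        return (x - a) ** 2 / ((b - a) * (c - a))
    if x < b:
        return 1.0 - (b - x) ** 2 / ((b - a) * (b - c))
    return 1.0

# Probability integral transform: F(X) should be standard uniform.
a, b, c = 2.0, 10.0, 5.0
u_values = [triangular_cdf(random.triangular(a, b, c), a, b, c)
            for _ in range(100_000)]
print(sum(u_values) / len(u_values))  # close to 0.5, the uniform mean
```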

In probability and statistics, the **truncated normal distribution** is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above. The truncated normal distribution has wide applications in statistics and econometrics. For example, it is used to model the probabilities of the binary outcomes in the probit model and to model censored data in the Tobit model.

In probability and statistics, the **Irwin–Hall distribution**, named after Joseph Oscar Irwin and Philip Hall, is a probability distribution for a random variable defined as the sum of a number of independent random variables, each having a uniform distribution. For this reason it is also known as the **uniform sum distribution**.

A **product distribution** is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables *X* and *Y*, the distribution of the random variable *Z* that is formed as the product *Z* = *XY* is a product distribution.

In probability and statistics, the **Bates distribution**, named after Grace Bates, is a probability distribution of the mean of a number of statistically independent uniformly distributed random variables on the unit interval. This distribution is sometimes confused with the Irwin–Hall distribution, which is the distribution of the **sum** of *n* independent random variables uniformly distributed from 0 to 1.

In probability theory, **concentration inequalities** provide bounds on how a random variable deviates from some value. The laws of large numbers of classical probability theory state that sums of independent random variables are, under very mild conditions, close to their expectation with a large probability. Such sums are the most basic examples of random variables concentrated around their mean. Recent results show that such behavior is shared by other functions of independent random variables.

In probability theory and statistics, the **beta rectangular distribution** is a probability distribution that is a finite mixture distribution of the beta distribution and the continuous uniform distribution. The support of the distribution is defined by the parameters *a* and *b*, which are the minimum and maximum values respectively. The distribution provides an alternative to the beta distribution such that it allows more density to be placed at the extremes of the bounded interval of support. Thus it is a bounded distribution that allows for outliers to have a greater chance of occurring than does the beta distribution.

- ↑ Kotz, Samuel; van Dorp, Johan René. *Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications*. https://books.google.de/books?id=JO7ICgAAQBAJ&lpg=PA1&dq=chapter%201%20dig%20out%20suitable%20substitutes%20of%20the%20beta%20distribution%20one%20of%20our%20goals&pg=PA3#v=onepage&q&f=false
- ↑ https://web.archive.org/web/20140407075018/http://www.asianscientist.com/books/wp-content/uploads/2013/06/5720_chap1.pdf
- ↑ http://www.decisionsciences.org/DecisionLine/Vol31/31_3/31_3clas.pdf

- Weisstein, Eric W. "Triangular Distribution". *MathWorld*.
- Triangle Distribution, decisionsciences.org
- Triangular Distribution, brighton-webs.co.uk

This page is based on the corresponding Wikipedia article.

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
