Equiprobability

Equiprobability is a property of a collection of events that each have the same probability of occurring. In statistics and probability theory it is applied in the discrete uniform distribution and the equidistribution theorem for rational numbers. If there are n events under consideration, the probability of each occurring is 1/n.
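For example, with a fair six-sided die there are n = 6 outcomes, each with probability 1/6. A brief Python sketch (the simulation size is chosen arbitrarily for illustration) checks this empirically:

    # Sketch: empirical frequencies of a fair die approach the equiprobable value 1/n.
    import random
    from collections import Counter

    n_events = 6          # number of equiprobable outcomes
    n_trials = 100_000    # assumed number of simulated rolls

    counts = Counter(random.randint(1, n_events) for _ in range(n_trials))
    for face in range(1, n_events + 1):
        print(face, counts[face] / n_trials)   # each ratio should be near 1/6 ≈ 0.167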

In philosophy it corresponds to a concept that allows one to assign equal probabilities to outcomes when they are judged to be equipossible or to be "equally likely" in some sense. The best-known formulation of the rule is Laplace's principle of indifference (or principle of insufficient reason), which states that, when "we have no other information than" that exactly N mutually exclusive events can occur, we are justified in assigning each the probability 1/N. This subjective assignment of probabilities is especially justified for situations such as rolling dice and lotteries, since these experiments carry a symmetry structure, and one's state of knowledge must clearly be invariant under this symmetry.

A similar argument could lead to the seemingly absurd conclusion that the sun is as likely to rise as to not rise tomorrow morning. However, the conclusion that the sun is equally likely to rise as it is to not rise is only absurd when additional information is known, such as the laws of gravity and the sun's history. Similar applications of the concept are effectively instances of circular reasoning, with "equally likely" events being assigned equal probabilities, which means in turn that they are equally likely. Despite this, the notion remains useful in probabilistic and statistical modeling.

In Bayesian probability, one needs to establish prior probabilities for the various hypotheses before applying Bayes' theorem. One procedure is to assume that these prior probabilities have some symmetry which is typical of the experiment, and then assign a prior which is proportional to the Haar measure for the symmetry group: this generalization of equiprobability is known as the principle of transformation groups and leads to misuse of equiprobability as a model for incertitude.
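For a simple finite case, an equiprobable prior over two hypotheses can be updated with Bayes' theorem; in the sketch below the hypotheses and their likelihoods are assumed purely for illustration:

    # Sketch: a die is hypothesized to be either fair or biased toward 6.
    # An equiprobable prior (1/2 each) is updated after observing a roll of 6.
    likelihood = {
        "fair":   1 / 6,   # P(roll = 6 | fair die)
        "biased": 1 / 2,   # P(roll = 6 | biased die), assumed for illustration
    }
    prior = {h: 1 / 2 for h in likelihood}    # equiprobable prior

    evidence = sum(prior[h] * likelihood[h] for h in likelihood)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in likelihood}
    print(posterior)   # {'fair': 0.25, 'biased': 0.75}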

Related Research Articles

Entropy (information theory): Expected amount of information needed to specify the output of a stochastic data source

In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable X, with possible outcomes x₁, …, xₙ, which occur with probabilities P(x₁), …, P(xₙ), the entropy of X is formally defined as:

H(X) = −∑ P(xᵢ) log P(xᵢ), where the sum runs over the n possible outcomes.
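For n equiprobable outcomes the entropy reduces to log n, which is also its maximum possible value; a short Python check (the skewed distribution is made up for comparison):

    # Sketch: entropy of a discrete distribution; n equiprobable outcomes give log2(n).
    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    uniform = [1 / 6] * 6                          # fair die
    skewed = [0.5, 0.3, 0.1, 0.05, 0.03, 0.02]     # assumed non-uniform comparison

    print(entropy(uniform))   # log2(6) ≈ 2.585, the maximum for 6 outcomes
    print(entropy(skewed))    # strictly smaller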

Probability: Branch of mathematics concerning chance and uncertainty

Probability is the branch of mathematics concerning numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty. The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2.

Sample space

In probability theory, the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible ordered outcomes, or sample points, are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U. The elements of a sample space may be numbers, words, letters, or symbols. They can also be finite, countably infinite, or uncountably infinite.

Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes, which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion. Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.

Random variable: Variable representing a random phenomenon

A random variable, also called a random quantity, aleatory variable, or stochastic variable, is a variable whose values depend on the outcomes of a random phenomenon. It is formally defined as a measurable function from the sample space of a probability space to a measurable space.

In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are summed up, their properly normalized sum tends toward a normal distribution even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern general form, this fundamental result in probability theory was precisely stated as late as 1920, thereby serving as a bridge between classical and modern probability theory.
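A quick simulation (with arbitrarily chosen sample sizes) illustrates the effect using equiprobable die rolls, whose individual distribution is far from normal:

    # Sketch: the standardized sum of many fair die rolls is approximately standard normal.
    import random
    import statistics

    def standardized_sum(k):
        rolls = [random.randint(1, 6) for _ in range(k)]
        mean, var = 3.5 * k, (35 / 12) * k     # mean and variance of the sum of k rolls
        return (sum(rolls) - mean) / var ** 0.5

    samples = [standardized_sum(100) for _ in range(10_000)]
    print(statistics.mean(samples))    # close to 0
    print(statistics.stdev(samples))   # close to 1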

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".

The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge about a system is the one with largest entropy, in the context of precisely stated prior data.
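With no prior data beyond the fact that there are n possible outcomes, the principle recovers the equiprobable distribution; a standard Lagrange-multiplier sketch of that special case:

    % Sketch: maximizing entropy subject only to normalization yields p_i = 1/n.
    \[
    \max_{p_1,\dots,p_n} \; -\sum_{i=1}^{n} p_i \ln p_i
    \quad \text{subject to} \quad \sum_{i=1}^{n} p_i = 1 .
    \]
    \[
    \mathcal{L} = -\sum_i p_i \ln p_i + \lambda \Bigl( \sum_i p_i - 1 \Bigr),
    \qquad
    \frac{\partial \mathcal{L}}{\partial p_i} = -\ln p_i - 1 + \lambda = 0
    \;\Rightarrow\; p_i = e^{\lambda - 1},
    \]
    the same for every i, so the normalization constraint forces p_i = 1/n.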

The principle of indifference is a rule for assigning epistemic probabilities. The principle of indifference states that in the absence of any relevant evidence, agents should distribute their credence equally among all the possible outcomes under consideration.

Probability amplitude: Complex number whose squared absolute value is a probability

In quantum mechanics, a probability amplitude is a complex number used in describing the behaviour of systems. The modulus squared of this quantity represents a probability density.
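For example, an equal superposition over n basis outcomes has amplitudes of modulus 1/√n and therefore equiprobable measurement results; a small numerical check with assumed amplitudes:

    # Sketch: equal complex amplitudes of modulus 1/sqrt(n) give equiprobable outcomes.
    import math

    n = 4
    amplitudes = [complex(1 / math.sqrt(n), 0)] * n    # assumed equal-superposition state
    probabilities = [abs(a) ** 2 for a in amplitudes]  # Born rule: |amplitude| squared

    print(probabilities)        # [0.25, 0.25, 0.25, 0.25] (up to rounding)
    print(sum(probabilities))   # 1.0 (up to rounding)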

In Bayesian statistical inference, a prior probability distribution, often simply called the prior, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account. For example, the prior could be the probability distribution representing the relative proportions of voters who will vote for a particular politician in a future election. The unknown quantity may be a parameter of the model or a latent variable rather than an observable variable.

Doomsday argument: Doomsday scenario on human births

The Doomsday argument (DA) is a probabilistic argument that claims to predict the number of future members of the human species given an estimate of the total number of humans born so far.

Sunrise problem: Problem asking the probability that the sun will rise tomorrow

The sunrise problem can be expressed as follows: "What is the probability that the sun will rise tomorrow?" The sunrise problem illustrates the difficulty of using probability theory when evaluating the plausibility of statements or beliefs.
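Laplace's classical treatment uses the rule of succession: under a uniform prior, s successes in n trials give probability (s + 1)/(n + 2) for a further success. A tiny sketch with a made-up count of observed sunrises:

    # Sketch: Laplace's rule of succession with a hypothetical number of observed days.
    def rule_of_succession(successes, trials):
        # Posterior predictive probability of success under a uniform (equiprobable) prior.
        return (successes + 1) / (trials + 2)

    print(rule_of_succession(10_000, 10_000))   # ≈ 0.9999, not exactly 1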

The classical definition or interpretation of probability is identified with the works of Jacob Bernoulli and Pierre-Simon Laplace. As stated in Laplace's Théorie analytique des probabilités, the probability of an event is the ratio of the number of cases favourable to it to the total number of possible cases, when nothing leads us to expect that any one of these cases should occur more than any other.

In probability theory and statistics, the discrete uniform distribution is a symmetric probability distribution wherein a finite number of values are equally likely to be observed; every one of n values has equal probability 1/n. Another way of saying "discrete uniform distribution" would be "a known, finite number of outcomes equally likely to happen".

Equipossibility is a philosophical concept in possibility theory that is a precursor to the notion of equiprobability in probability theory. It is used to distinguish what can occur in a probability experiment. For example, when considering rolling a six-sided die, why do we typically view the possible outcomes as {1,2,3,4,5,6} rather than, say, {6, not 6}? The former set contains equally possible alternatives, while the latter does not because there are five times as many alternatives inherent in 'not 6' as in 6. This is true even if the die is biased so that 6 and 'not 6' are equally likely to occur.

In statistics, additive smoothing, also called Laplace smoothing or Lidstone smoothing, is a technique used to smooth categorical data. Given a set of observation counts x = (x₁, …, x_d) from a d-dimensional multinomial distribution with N trials, a "smoothed" version of the counts gives the estimator:

θ̂ᵢ = (xᵢ + α) / (N + αd), for i = 1, …, d,

where the pseudocount α > 0 is a smoothing parameter.
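A direct transcription of the estimator in Python, with hypothetical counts and the common choice α = 1 (Laplace smoothing):

    # Sketch: additive (Laplace/Lidstone) smoothing of categorical counts.
    def additive_smoothing(counts, alpha=1.0):
        d = len(counts)     # number of categories
        n = sum(counts)     # number of trials
        return [(x + alpha) / (n + alpha * d) for x in counts]

    counts = [3, 0, 7]                    # hypothetical observation counts
    print(additive_smoothing(counts))     # approximately [0.308, 0.077, 0.615]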

The principle of transformation groups is a rule for assigning epistemic probabilities in a statistical inference problem. It was first suggested by Edwin T. Jaynes and can be seen as a generalisation of the principle of indifference.

In probability theory, an outcome is a possible result of an experiment or trial. Each possible outcome of a particular experiment is unique, and different outcomes are mutually exclusive. All of the possible outcomes of an experiment form the elements of a sample space.

Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world.
