Event (probability theory)


In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. [1] A single outcome may be an element of many different events, [2] and different events in an experiment are usually not equally likely, since they may include very different groups of outcomes. [3] An event consisting of only a single outcome is called an elementary event or an atomic event; that is, it is a singleton set. An event that has more than one possible outcome is called a compound event. An event S is said to occur if S contains the outcome x of the experiment (or trial) (that is, if x ∈ S). [4] The probability (with respect to some probability measure) that an event S occurs is the probability that S contains the outcome x of an experiment (that is, it is the probability that x ∈ S). An event S defines a complementary event, namely the complementary set Ω \ S (the event S not occurring), and together these define a Bernoulli trial: did the event occur or not?
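
As a minimal illustration of these definitions, the sketch below models a single die roll in Python, representing events as sets of outcomes. The sample space, the event names, and the helper function are illustrative assumptions, not part of any standard library.

    # Minimal sketch: events as subsets of a finite sample space (one die roll).
    # The sample space, events, and helper function are illustrative assumptions.

    sample_space = {1, 2, 3, 4, 5, 6}     # all outcomes of rolling one die

    elementary = {3}                      # elementary (atomic) event: a singleton
    compound = {2, 4, 6}                  # compound event: "the roll is even"
    complement = sample_space - compound  # complementary event: "the roll is odd"

    def occurs(event, outcome):
        """An event occurs when it contains the outcome of the trial."""
        return outcome in event

    outcome = 4                           # suppose the trial produced a 4
    print(occurs(compound, outcome))      # True  -> the event occurred
    print(occurs(complement, outcome))    # False -> the complementary event did not

Checking an event against a single observed outcome in this way is exactly the Bernoulli trial described above: the event either occurred or it did not.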


Typically, when the sample space is finite, any subset of the sample space is an event (that is, all elements of the power set of the sample space are defined as events). [5] However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events (see Events in probability spaces, below).
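
For a finite sample space, this full collection of events can be enumerated directly. The short sketch below, using a three-outcome sample space chosen purely for illustration, lists every element of the power set.

    # Sketch: enumerating all events of a small finite sample space,
    # i.e. every element of its power set. The sample space is an
    # illustrative assumption.
    from itertools import combinations

    sample_space = {"a", "b", "c"}

    events = [
        frozenset(subset)
        for r in range(len(sample_space) + 1)
        for subset in combinations(sample_space, r)
    ]

    print(len(events))   # 8 = 2**3 events, from the empty set to the whole space
    for event in events:
        print(set(event))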

A simple example

If we assemble a deck of 52 playing cards with no jokers, and draw a single card from the deck, then the sample space is a 52-element set, as each card is a possible outcome. An event, however, is any subset of the sample space, including any singleton set (an elementary event), the empty set (an impossible event, with probability zero) and the sample space itself (a certain event, with probability one). Other events are proper subsets of the sample space that contain multiple elements. So, for example, potential events include:

"Red and black at the same time without being a joker" (0 elements),
"The 5 of Hearts" (1 element),
"A King" (4 elements),
"A Face card" (12 elements),
"A Spade" (13 elements),
"A Face card or a red suit" (32 elements),
"A card" (52 elements).

An Euler diagram of an event: B is the sample space and A is an event. By the ratio of their areas, the probability of A is approximately 0.4.

Since all events are sets, they are usually written as sets (for example, {1, 2, 3}), and represented graphically using Venn diagrams. In the situation where each outcome in the sample space Ω is equally likely, the probability of an event A is

P(A) = |A| / |Ω|

that is, the number of outcomes in A divided by the total number of outcomes in the sample space. This rule can readily be applied to each of the example events above.
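
Applying that counting rule to the card-deck example is straightforward; the sketch below builds the 52-card sample space and computes |A| / |Ω| for a few of the events listed above. The deck encoding and event definitions are illustrative assumptions.

    # Sketch: P(A) = |A| / |Omega| for equally likely outcomes,
    # applied to drawing one card from a standard 52-card deck.
    # The deck encoding and event definitions are illustrative assumptions.
    from fractions import Fraction

    ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
    suits = ["hearts", "diamonds", "clubs", "spades"]
    deck = {(rank, suit) for rank in ranks for suit in suits}   # the sample space

    def probability(event, sample_space):
        """Probability of an event when every outcome is equally likely."""
        return Fraction(len(event), len(sample_space))

    a_king = {card for card in deck if card[0] == "K"}
    a_spade = {card for card in deck if card[1] == "spades"}
    a_face_card = {card for card in deck if card[0] in {"J", "Q", "K"}}

    print(probability(a_king, deck))       # 1/13  (4 of 52 cards)
    print(probability(a_spade, deck))      # 1/4   (13 of 52 cards)
    print(probability(a_face_card, deck))  # 3/13  (12 of 52 cards)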

Events in probability spaces

Defining all subsets of the sample space as events works well when there are only finitely many outcomes, but gives rise to problems when the sample space is infinite. For many standard probability distributions, such as the normal distribution, the sample space is the set of real numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers 'badly behaved' sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a more limited family of subsets. For the standard tools of probability theory, such as joint and conditional probabilities, to work, it is necessary to use a σ-algebra, that is, a family closed under complementation and countable unions of its members. The most natural choice of σ-algebra is the Borel σ-algebra, generated from unions and intersections of intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice.
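
On a finite sample space these closure requirements can be checked by brute force, since countable unions reduce to finite unions. The sketch below is an illustrative check of that kind; the sample space and the candidate families of subsets are assumptions chosen for the example.

    # Sketch: checking the sigma-algebra closure properties on a finite
    # sample space, where countable unions reduce to finite unions.
    # The sample space and candidate families are illustrative assumptions.

    def is_sigma_algebra(family, sample_space):
        family = {frozenset(s) for s in family}
        omega = frozenset(sample_space)
        if omega not in family:
            return False
        for a in family:
            if omega - a not in family:        # closed under complement
                return False
            for b in family:
                if a | b not in family:        # closed under (finite) union
                    return False
        return True

    omega = {1, 2, 3, 4}
    closed = [set(), {1, 2}, {3, 4}, {1, 2, 3, 4}]          # a valid sigma-algebra
    print(is_sigma_algebra(closed, omega))                   # True

    not_closed = [set(), {1}, {2}, {2, 3, 4}, {1, 3, 4}, {1, 2, 3, 4}]
    print(is_sigma_algebra(not_closed, omega))               # False: {1} | {2} is missing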

In the general measure-theoretic description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the σ-algebra is not an event, and does not have a probability. With a reasonable specification of the probability space, however, all events of interest are elements of the σ-algebra.

A note on notation

Even though events are subsets of some sample space Ω, they are often written as predicates or indicators involving random variables. For example, if X is a real-valued random variable defined on the sample space Ω, the event

{ω ∈ Ω : u < X(ω) ≤ v}

can be written more conveniently as, simply, u < X ≤ v. This is especially common in formulas for a probability, such as P(u < X ≤ v). The set u < X ≤ v is an example of an inverse image under the mapping X, because ω ∈ X⁻¹((u, v]) if and only if u < X(ω) ≤ v.
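
The sketch below makes this correspondence concrete: a random variable is a function on the sample space, and the predicate u < X ≤ v names the event obtained as the inverse image of the interval (u, v]. The sample space, the random variable, and the threshold values are illustrative assumptions.

    # Sketch: the event "u < X <= v" as the inverse image of the interval (u, v]
    # under a random variable X defined on a finite sample space.
    # The sample space, X, u, and v are illustrative assumptions.

    sample_space = {1, 2, 3, 4, 5, 6}          # outcomes of one die roll

    def X(omega):
        """A real-valued random variable: the square of the roll."""
        return omega ** 2

    u, v = 3, 20

    # The event written as a predicate on X ...
    event = {omega for omega in sample_space if u < X(omega) <= v}

    # ... is exactly the inverse image X^{-1}((u, v]).
    print(event)                               # {2, 3, 4}
    print(len(event) / len(sample_space))      # its probability, 0.5, for a fair die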

Notes

  1. Leon-Garcia, Alberto (2008). Probability, statistics and random processes for electrical engineering. Upper Saddle River, NJ: Pearson. ISBN 9780131471221.
  2. Pfeiffer, Paul E. (1978). Concepts of probability theory. Dover Publications. p. 18. ISBN 978-0-486-63677-1.
  3. Foerster, Paul A. (2006). Algebra and trigonometry: Functions and Applications, Teacher's edition (Classics ed.). Upper Saddle River, NJ: Prentice Hall. p. 634. ISBN 0-13-165711-9.
  4. Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). Dekking, Michel (ed.). A modern introduction to probability and statistics: understanding why and how. Springer Texts in Statistics. London: Springer. p. 14. ISBN 978-1-85233-896-1.
  5. Širjaev, Alʹbert N. (2016). Probability-1. Graduate Texts in Mathematics. Translated by Boas, Ralph Philip; Chibisov, Dmitry (3rd ed.). New York: Springer. ISBN 978-0-387-72205-4.
