Algebra of random variables

The algebra of random variables in statistics provides rules for the symbolic manipulation of random variables, while avoiding delving too deeply into the mathematically sophisticated ideas of probability theory. Its symbolism allows the treatment of sums, products, ratios and general functions of random variables, as well as dealing with operations such as finding the probability distributions and the expectations (or expected values), variances and covariances of such combinations.

In principle, the elementary algebra of random variables is equivalent to that of conventional non-random (or deterministic) variables. However, the changes that algebraic operations produce in the probability distribution of a random variable are not straightforward. Therefore, the behavior of the different operators of the probability distribution, such as expected values, variances, covariances, and moments, may differ from what elementary symbolic algebra would suggest. It is possible to identify some key rules for each of those operators, resulting in different types of algebra for random variables, apart from the elementary symbolic algebra: expectation algebra, variance algebra, covariance algebra, moment algebra, etc.

Elementary symbolic algebra of random variables

Considering two random variables X and Y, the following algebraic operations are possible: addition (X + Y), subtraction (X − Y), multiplication (XY), division (X/Y, provided Y ≠ 0), and exponentiation (X^Y).

In all cases, the variable resulting from each operation is also a random variable. All commutative and associative properties of conventional algebraic operations are also valid for random variables. If any of the random variables is replaced by a deterministic variable or by a constant value, all the previous properties remain valid.
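As a rough numerical illustration of these closure and commutativity properties, the following sketch uses large NumPy samples as stand-ins for random variables (the particular distributions, sample size and seed are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two random variables, represented here by large samples of realizations.
X = rng.normal(loc=2.0, scale=1.0, size=n)
Y = rng.exponential(scale=3.0, size=n)

# Sums, differences, products and ratios of random variables are again
# random variables (here: new arrays of realizations).
S = X + Y
P = X * Y
R = X / Y          # Y > 0 almost surely, so the ratio is well defined

# Commutative and associative properties carry over from ordinary algebra,
# realization by realization.
assert np.allclose(X + Y, Y + X)
assert np.allclose(X * Y, Y * X)
assert np.allclose((X + Y) + P, X + (Y + P))
```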

Expectation algebra for random variables

The expected value of the random variable resulting from an algebraic operation between two random variables can be calculated using the following set of rules:

  1. Addition: E[X + Y] = E[X] + E[Y]
  2. Subtraction: E[X − Y] = E[X] − E[Y]
  3. Multiplication: E[XY] = E[X]·E[Y] + Cov(X, Y). In particular, if X and Y are independent, then E[XY] = E[X]·E[Y].
  4. Division: E[X/Y] = E[X·(1/Y)] = E[X]·E[1/Y] + Cov(X, 1/Y). In particular, if X and Y are independent, then E[X/Y] = E[X]·E[1/Y].
  5. Exponentiation: E[X^Y] = E[e^(Y·ln X)] (for X > 0), which in general has to be evaluated from the joint distribution of X and Y.

If any of the random variables is replaced by a deterministic variable or by a constant value c, the previous properties remain valid considering that Pr(X = c) = 1 and, therefore, E[X] = c.
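The addition, multiplication and constant rules above can be checked by simulation; the sketch below uses NumPy samples and deliberately correlated variables (the distributions and coefficients are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two dependent random variables (Y is partly driven by X).
X = rng.normal(1.0, 2.0, size=n)
Y = 0.5 * X + rng.normal(0.0, 1.0, size=n)

# Addition rule: E[X + Y] = E[X] + E[Y], regardless of dependence.
print(np.mean(X + Y), "vs", np.mean(X) + np.mean(Y))

# Multiplication rule: E[XY] = E[X] E[Y] + Cov(X, Y).
cov_xy = np.cov(X, Y, ddof=0)[0, 1]
print(np.mean(X * Y), "vs", np.mean(X) * np.mean(Y) + cov_xy)

# Constant rule: replacing Y by a constant c gives E[cX] = c E[X].
c = 3.0
print(np.mean(c * X), "vs", c * np.mean(X))
```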

If Z is defined as a general non-linear algebraic function g of a random variable X, then:

E[Z] = E[g(X)] ≠ g(E[X])

Some examples of this property include: E[X^2] ≠ (E[X])^2, E[1/X] ≠ 1/E[X], E[e^X] ≠ e^(E[X]), and E[ln(X)] ≠ ln(E[X]).

The exact value of the expectation of the non-linear function will depend on the particular probability distribution of the random variable X.
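For instance, the following sketch (with two hypothetical normal variables sharing the same mean) shows that g(E[X]) stays fixed while E[g(X)] changes with the distribution, for g(x) = x^2:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Two random variables with the same mean (1.0) but different distributions.
X1 = rng.normal(1.0, 1.0, size=n)
X2 = rng.normal(1.0, 2.0, size=n)

# For the non-linear function g(x) = x**2, g(E[X]) is the same in both
# cases, but E[g(X)] is not: it depends on the distribution of X.
for X in (X1, X2):
    print("g(E[X]) =", np.mean(X) ** 2, "  E[g(X)] =", np.mean(X ** 2))
```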

Variance algebra for random variables

The variance of the random variable resulting from an algebraic operation between random variables can be calculated using the following set of rules:

  1. Addition: Var(X + Y) = Var(X) + Var(Y) + 2·Cov(X, Y). In particular, if X and Y are independent, then Var(X + Y) = Var(X) + Var(Y).
  2. Subtraction: Var(X − Y) = Var(X) + Var(Y) − 2·Cov(X, Y). In particular, if X and Y are independent, then Var(X − Y) = Var(X) + Var(Y); that is, for independent random variables the variance is the same for addition and subtraction.
  3. Multiplication: Var(XY) = E[X^2·Y^2] − (E[XY])^2. In particular, if X and Y are independent, then Var(XY) = Var(X)·Var(Y) + Var(X)·(E[Y])^2 + Var(Y)·(E[X])^2.
  4. Division: applying the multiplication rule to X and 1/Y, if X and Y are independent, then Var(X/Y) = Var(X)·Var(1/Y) + Var(X)·(E[1/Y])^2 + Var(1/Y)·(E[X])^2.

where Cov(X, Y) represents the covariance operator between random variables X and Y.

The variance of a random variable can also be expressed directly in terms of the covariance or in terms of the expected value:

Var(X) = Cov(X, X) = E[X^2] − (E[X])^2

If any of the random variables is replaced by a deterministic variable or by a constant value c, the previous properties remain valid considering that Pr(X = c) = 1 and, therefore, E[X] = c, Var(X) = 0 and Cov(X, Y) = 0. Special cases are the addition and multiplication of a random variable with a deterministic variable or a constant, where:

Var(c + X) = Var(X)

Var(cX) = c^2·Var(X)
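A short simulation of the addition rule and of the constant special cases (the distributions and the constant c are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

X = rng.normal(0.0, 1.5, size=n)
Y = 0.3 * X + rng.normal(0.0, 1.0, size=n)   # correlated with X

# Addition rule: Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y).
cov_xy = np.cov(X, Y, ddof=0)[0, 1]
print(np.var(X + Y), "vs", np.var(X) + np.var(Y) + 2 * cov_xy)

# Constant special cases: Var(c + X) = Var(X) and Var(cX) = c**2 Var(X).
c = 4.0
print(np.var(c + X), "vs", np.var(X))
print(np.var(c * X), "vs", c ** 2 * np.var(X))
```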

If Z is defined as a general non-linear algebraic function g of a random variable X, then Var(Z) = Var(g(X)) cannot, in general, be obtained simply by applying g to Var(X); it must be computed from the distribution of X, for example as:

Var(g(X)) = E[g(X)^2] − (E[g(X)])^2

The exact value of the variance of the non-linear function will depend on the particular probability distribution of the random variable X.
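For example, the sketch below compares Var(g(X)) for g(x) = x^2 under two distributions that share the same mean and variance; the results differ because the higher moments differ (the specific distributions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Two random variables with the same mean (1.0) and variance (1.0),
# but different distributions.
X1 = rng.normal(1.0, 1.0, size=n)
X2 = rng.exponential(1.0, size=n)

# The variance of the non-linear function g(x) = x**2 differs,
# because it depends on higher moments of the distribution.
for X in (X1, X2):
    g = X ** 2
    print("Var[g(X)] =", np.mean(g ** 2) - np.mean(g) ** 2)
```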

Covariance algebra for random variables

The covariance, Cov(Z, X), between the random variable Z resulting from an algebraic operation and the random variable X can be calculated using the following set of rules:

  1. Addition: Cov(X + Y, X) = Var(X) + Cov(X, Y). In particular, if X and Y are independent, then Cov(X + Y, X) = Var(X).
  2. Subtraction: Cov(X − Y, X) = Var(X) − Cov(X, Y). In particular, if X and Y are independent, then Cov(X − Y, X) = Var(X).
  3. Multiplication: Cov(XY, X) = E[X^2·Y] − E[XY]·E[X]. In particular, if X and Y are independent, then Cov(XY, X) = Var(X)·E[Y].
  4. Division: Cov(X/Y, X) = E[X^2/Y] − E[X/Y]·E[X]. In particular, if X and Y are independent, then Cov(X/Y, X) = Var(X)·E[1/Y].

The covariance of two random variables can also be expressed directly in terms of the expected value:

Cov(X, Y) = E[XY] − E[X]·E[Y]

If any of the random variables is replaced by a deterministic variable or by a constant value c, the previous properties remain valid considering that Pr(X = c) = 1 and, therefore, E[X] = c, Var(X) = 0 and Cov(X, Y) = 0.

If Z is defined as a general non-linear algebraic function g of a random variable X, then:

Cov(Z, X) = Cov(g(X), X) = E[g(X)·X] − E[g(X)]·E[X]

The exact value of the covariance of the non-linear function will depend on the particular probability distribution of the random variable X.
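The addition and multiplication rules for the covariance can likewise be checked numerically (the coupling between X and Y below is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

X = rng.normal(2.0, 1.0, size=n)
Y = -0.4 * X + rng.normal(0.0, 1.0, size=n)   # correlated with X

def cov(a, b):
    """Sample covariance, Cov(A, B) = E[AB] - E[A] E[B]."""
    return np.mean(a * b) - np.mean(a) * np.mean(b)

# Addition rule: Cov(X + Y, X) = Var(X) + Cov(X, Y).
print(cov(X + Y, X), "vs", np.var(X) + cov(X, Y))

# Multiplication rule (general form): Cov(XY, X) = E[X**2 Y] - E[XY] E[X].
print(cov(X * Y, X), "vs", np.mean(X ** 2 * Y) - np.mean(X * Y) * np.mean(X))
```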

Approximations by Taylor series expansions of moments

If the moments of a certain random variable X are known (or can be determined by integration if the probability density function is known), then it is possible to approximate the expected value of any general non-linear function g(X) as a Taylor series expansion of the moments, as follows:

g(X) = Σ_{n=0}^{∞} (1/n!)·g^(n)(μ)·(X − μ)^n, where μ = E[X] is the mean value of X and g^(n)(μ) is the n-th derivative of g evaluated at μ.

E[g(X)] = Σ_{n=0}^{∞} (1/n!)·g^(n)(μ)·E[(X − μ)^n] = Σ_{n=0}^{∞} (1/n!)·g^(n)(μ)·μ_n, where μ_n = E[(X − μ)^n] is the n-th moment of X about its mean. Note that, by their definition, μ_0 = 1 and μ_1 = 0. The first-order term always vanishes but was kept to obtain a closed-form expression.

Then,

E[g(X)] ≈ Σ_{n=0}^{n_max} (1/n!)·g^(n)(μ)·μ_n, where the Taylor expansion is truncated after the n_max-th moment.
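As an illustration of the truncated expansion, the sketch below approximates E[g(X)] for g(x) = exp(x) and a gamma-distributed X (an arbitrary non-normal choice), estimating the central moments from a sample and comparing with a direct Monte Carlo estimate:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(6)
n = 1_000_000

# A non-normal random variable whose central moments are estimated from a sample.
X = rng.gamma(shape=4.0, scale=0.25, size=n)
mu = np.mean(X)

def central_moment(k):
    """Estimate mu_k = E[(X - mu)**k] from the sample."""
    return np.mean((X - mu) ** k)

# Truncated Taylor approximation of E[g(X)] for g(x) = exp(x):
# every derivative g^(n)(mu) equals exp(mu).
n_max = 6
approx = sum(np.exp(mu) / factorial(k) * central_moment(k) for k in range(n_max + 1))

print("Taylor approximation:", approx)
print("Monte Carlo estimate:", np.mean(np.exp(X)))
```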

Particularly for functions of normal random variables, it is possible to obtain a Taylor expansion in terms of the standard normal distribution: [1]

g(X) = Σ_{n=0}^{∞} (σ^n/n!)·g^(n)(μ)·Z^n, where X = μ + σZ is a normal random variable with mean μ and standard deviation σ, and Z is the standard normal distribution. Thus,

E[g(X)] ≈ Σ_{n=0}^{n_max} (σ^n/n!)·g^(n)(μ)·E[Z^n], where the moments of the standard normal distribution are given by:

E[Z^n] = (n − 1)!! = n!/(2^(n/2)·(n/2)!) if n is even, and E[Z^n] = 0 if n is odd.

Similarly for normal random variables, it is also possible to approximate the variance of the non-linear function as a Taylor series expansion as:

Var(g(X)) ≈ Σ_{n=1}^{n_max} ((σ^n/n!)·g^(n)(μ))^2·Var(Z^n) + 2·Σ_{n<m} (σ^n/n!)·(σ^m/m!)·g^(n)(μ)·g^(m)(μ)·Cov(Z^n, Z^m),

where Var(Z^n) = E[Z^(2n)] − (E[Z^n])^2,

and Cov(Z^n, Z^m) = E[Z^(n+m)] − E[Z^n]·E[Z^m].
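The normal-variable expansions for the mean and the variance can be checked against the exact lognormal results when g(x) = exp(x); the parameters and truncation order below are arbitrary illustrative choices:

```python
import numpy as np
from math import factorial, exp

def z_moment(n):
    """E[Z**n] for standard normal Z: (n-1)!! for even n, 0 for odd n."""
    if n % 2 == 1:
        return 0.0
    return float(np.prod(np.arange(n - 1, 0, -2))) if n > 0 else 1.0

mu, sigma = 0.5, 0.4          # X ~ N(mu, sigma**2), g(x) = exp(x)
n_max = 8

# Coefficients a_n = (sigma**n / n!) g^(n)(mu); for g = exp, g^(n)(mu) = exp(mu).
a = [sigma ** k / factorial(k) * exp(mu) for k in range(n_max + 1)]

# Approximate mean: E[g(X)] ~ sum_n a_n E[Z**n].
mean_approx = sum(a[k] * z_moment(k) for k in range(n_max + 1))

# Approximate variance: Var[g(X)] ~ sum_{n,m>=1} a_n a_m Cov(Z**n, Z**m),
# with Cov(Z**n, Z**m) = E[Z**(n+m)] - E[Z**n] E[Z**m].
var_approx = sum(
    a[n] * a[m] * (z_moment(n + m) - z_moment(n) * z_moment(m))
    for n in range(1, n_max + 1)
    for m in range(1, n_max + 1)
)

# Exact lognormal results for comparison.
mean_exact = exp(mu + sigma ** 2 / 2)
var_exact = (exp(sigma ** 2) - 1) * exp(2 * mu + sigma ** 2)

print(mean_approx, "vs exact", mean_exact)
print(var_approx, "vs exact", var_exact)
```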

Algebra of complex random variables

In the algebraic axiomatization of probability theory, the primary concept is not that of probability of an event, but rather that of a random variable. Probability distributions are determined by assigning an expectation to each random variable. The measurable space and the probability measure arise from the random variables and expectations by means of well-known representation theorems of analysis. One of the important features of the algebraic approach is that apparently infinite-dimensional probability distributions are not harder to formalize than finite-dimensional ones.

Random variables are assumed to have the following properties:

  1. complex constants are possible realizations of a random variable;
  2. the sum of two random variables is a random variable;
  3. the product of two random variables is a random variable;
  4. addition and multiplication of random variables are both commutative; and
  5. there is a notion of conjugation of random variables, satisfying (XY)* = Y*X* and X** = X for all random variables X,Y and coinciding with complex conjugation if X is a constant.

This means that random variables form complex commutative *-algebras. If X = X* then the random variable X is called "real".

An expectation E on an algebra A of random variables is a normalized, positive linear functional. What this means is that

  1. E[k] = k where k is a constant;
  2. E[X*X] ≥ 0 for all random variables X;
  3. E[X + Y] = E[X] + E[Y] for all random variables X and Y; and
  4. E[kX] = kE[X] if k is a constant.
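A toy realization of such an expectation functional, representing complex random variables by arrays of realizations and the conjugation * by ordinary complex conjugation (all names and distributions below are illustrative assumptions, not a formal construction):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Complex random variables represented by samples of realizations.
X = rng.normal(size=n) + 1j * rng.normal(size=n)
Y = rng.normal(size=n) + 1j * rng.normal(size=n)

E = lambda Z: np.mean(Z)      # the expectation functional
k = 2.0 - 1.0j                # a complex constant

# 1. E[k] = k for a constant.
print(E(np.full(n, k)))

# 2. Positivity: E[X* X] >= 0 (real and non-negative).
print(E(np.conj(X) * X))

# 3. Additivity and 4. homogeneity (linearity).
print(E(X + Y), "vs", E(X) + E(Y))
print(E(k * X), "vs", k * E(X))

# Conjugation rules of the *-algebra: (XY)* = Y* X* and X** = X.
assert np.allclose(np.conj(X * Y), np.conj(Y) * np.conj(X))
assert np.allclose(np.conj(np.conj(X)), X)
```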

One may generalize this setup, allowing the algebra to be noncommutative. This leads to other areas of noncommutative probability such as quantum probability, random matrix theory, and free probability.


References

  1. Hernandez, Hugo (2016). "Modelling the effect of fluctuation in nonlinear systems using variance algebra - Application to light scattering of ideal gases". ForsChem Research Reports. 2016–1. doi:10.13140/rg.2.2.36501.52969.
