In probability theory, the Wick product, named for Italian physicist Gian-Carlo Wick, is a particular way of defining an adjusted product of a set of random variables. In the lowest-order product, the adjustment corresponds to subtracting off the mean value, leaving a result whose mean is zero. For higher-order products, the adjustment involves subtracting off lower-order (ordinary) products of the random variables, in a symmetric way, again leaving a result whose mean is zero. The Wick product is a polynomial function of the random variables, their expected values, and the expected values of their products.
The definition of the Wick product immediately leads to the Wick power of a single random variable, and this allows analogues of other functions of random variables to be defined by replacing the ordinary powers in a power-series expansion with Wick powers. The Wick powers of commonly seen random variables can be expressed in terms of special functions such as Bernoulli polynomials or Hermite polynomials.
Assume that $X_1, \dots, X_k$ are random variables with finite moments. The Wick product

$$\langle X_1, \dots, X_k \rangle$$

is a sort of product defined recursively as follows:

$$\langle \rangle = 1$$

(i.e. the empty product, the product of no random variables at all, is 1). For $k \ge 1$, we impose the requirement

$$\frac{\partial \langle X_1, \dots, X_k \rangle}{\partial X_i} = \langle X_1, \dots, \widehat{X_i}, \dots, X_k \rangle,$$

where $\widehat{X_i}$ means that $X_i$ is absent, together with the constraint that the average is zero:

$$\operatorname{E} \langle X_1, \dots, X_k \rangle = 0.$$
Equivalently, the Wick product can be defined by writing the monomial $X_1 \cdots X_k$ as a "Wick polynomial":

$$X_1 \cdots X_k = \sum_{S \subseteq \{1, \dots, k\}} \operatorname{E}\Bigl(\prod_{i \notin S} X_i\Bigr)\, \langle X_i : i \in S \rangle,$$

where $\langle X_i : i \in S \rangle$ denotes the Wick product $\langle X_{i_1}, \dots, X_{i_m} \rangle$ if $S = \{ i_1, \dots, i_m \}$. This is easily seen to satisfy the inductive definition.
It follows that

$$\langle X \rangle = X - \operatorname{E} X,$$

$$\langle X, Y \rangle = XY - (\operatorname{E} Y)\, X - (\operatorname{E} X)\, Y + 2 (\operatorname{E} X)(\operatorname{E} Y) - \operatorname{E}(XY).$$
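To make the recursion concrete, the following is a minimal sketch (not part of the original article) that builds Wick products symbolically from the "Wick polynomial" identity above, solved for the full-set term. It assumes SymPy is available; the moment function `E`, which maps a tuple of factors to the expected value of their ordinary product, is a hypothetical helper supplied by the caller.

```python
# Wick products from the Wick-polynomial identity:
#   X1*...*Xk = sum over S of E(product of absent factors) * <Xi : i in S>,
# solved for the S = {1, ..., k} term to obtain a recursion.
from itertools import combinations
import sympy as sp

def wick(vars_, E):
    """Wick product <X1, ..., Xk> as a SymPy expression.

    vars_ -- tuple of SymPy symbols, the random variables X1, ..., Xk
    E     -- caller-supplied moment function: tuple of factors -> expected
             value of their ordinary product
    """
    k = len(vars_)
    if k == 0:
        return sp.Integer(1)            # empty Wick product: <> = 1
    result = sp.Mul(*vars_)             # start from the ordinary monomial
    for m in range(k):                  # all proper subsets S with |S| = m < k
        for S in combinations(range(k), m):
            absent = tuple(v for i, v in enumerate(vars_) if i not in S)
            kept = tuple(vars_[i] for i in S)
            result -= E(absent) * wick(kept, E)
    return sp.expand(result)

# Example: zero-mean, uncorrelated X and Y (all needed moments vanish).
X, Y = sp.symbols('X Y')
E_zero = lambda t: 0
print(wick((X,), E_zero))               # -> X   (i.e. X - E X with E X = 0)
print(wick((X, Y), E_zero))             # -> X*Y
```

The recursion terminates because each recursive call drops at least one factor; with nonzero moments supplied, it reproduces the formulas for $\langle X \rangle$ and $\langle X, Y \rangle$ above.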
In the notation conventional among physicists, the Wick product is often denoted thus:

$${:}\,X_1, \dots, X_k\,{:}$$

and the angle-bracket notation

$$\langle X \rangle$$

is used to denote the expected value of the random variable $X$.
The $n$th Wick power of a random variable $X$ is the Wick product

$$X^{\langle n \rangle} = \langle X, \dots, X \rangle$$

with $n$ factors.
The sequence of polynomials $P_n$ such that

$$P_n(X) = \langle X, \dots, X \rangle = X^{\langle n \rangle}$$

form an Appell sequence, i.e. they satisfy the identity

$$P_n'(x) = n P_{n-1}(x),$$

for $n = 0, 1, 2, \dots$, and $P_0(x)$ is a nonzero constant. (This follows from the derivative requirement above: differentiating the $n$-fold Wick product removes one of its $n$ identical factors at a time.)
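As a low-order check, taking $Y = X$ in the formula for $\langle X, Y \rangle$ above gives the first two Wick powers explicitly for any $X$ with finite moments:

$$P_1(x) = x - \operatorname{E} X, \qquad P_2(x) = x^2 - 2(\operatorname{E} X)\, x + 2(\operatorname{E} X)^2 - \operatorname{E}(X^2),$$

so that $P_2'(x) = 2\bigl(x - \operatorname{E} X\bigr) = 2 P_1(x)$ and $\operatorname{E} P_2(X) = 0$, as the Appell identity and the zero-mean constraint require.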
For example, it can be shown that if $X$ is uniformly distributed on the interval $[0, 1]$, then

$$X^{\langle n \rangle} = B_n(X),$$

where $B_n$ is the $n$th-degree Bernoulli polynomial. Similarly, if $X$ is normally distributed with mean 0 and variance 1, then

$$X^{\langle n \rangle} = H_n(X),$$

where $H_n$ is the $n$th (probabilists') Hermite polynomial.
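For instance, feeding the `wick` sketch above the first few moments of a standard normal distribution (supplied by hand as an assumed moment table) reproduces the probabilists' Hermite polynomial $H_3(x) = x^3 - 3x$:

```python
# Third Wick power of a standard normal, via the `wick` sketch above.
# The moments E X = 0, E X^2 = 1, E X^3 = 0 are supplied by hand (assumption).
import sympy as sp

X = sp.symbols('X')
E_normal = lambda t: {1: 0, 2: 1, 3: 0}[len(t)]   # moment of order len(t)
print(wick((X, X, X), E_normal))                  # -> X**3 - 3*X, i.e. H_3(X)
```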
See also: inner product space, Riesz representation theorem, Cauchy–Schwarz inequality, covariance, covariance matrix, moment-generating function, permanent, differential operator, cumulant, path integral formulation, cross-correlation, automatic differentiation, Lasker–Noether theorem, polarization identity, resultant, Molien's formula, partial correlation, radial basis function kernel, Bergman's Diamond Lemma, orthogonality.