Positive-definite function

In mathematics, a positive-definite function is, depending on the context, either of two types of function.

Definition 1

Let $\mathbb{R}$ be the set of real numbers and $\mathbb{C}$ be the set of complex numbers.

A function $f : \mathbb{R} \to \mathbb{C}$ is called positive semi-definite if for every positive integer $n$ and all real numbers $x_1, \ldots, x_n$ the $n \times n$ matrix

$$A = \left(a_{jk}\right)_{j,k=1}^{n}, \qquad a_{jk} = f(x_j - x_k)$$

is a positive semi-definite matrix.

By definition, a positive semi-definite matrix, such as $A$, is Hermitian; therefore $f(-x)$ is the complex conjugate of $f(x)$.

In particular, it is necessary (but not sufficient) that

$$f(0) \geq 0, \qquad |f(x)| \leq f(0)$$

(these inequalities follow from the condition for $n = 1, 2$).

A function is negative semi-definite if the inequality is reversed. A function is (positive or negative) definite if the weak inequality is replaced with a strict one ($> 0$ or $< 0$, respectively).
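The matrix condition above can be checked numerically. The sketch below (an illustrative NumPy example; the helper names are not from any source) builds the matrix $a_{jk} = f(x_j - x_k)$ at sample points and tests whether its eigenvalues are all nonnegative, using $f(x) = e^{-x^2}$, which is positive definite because it is the Fourier transform of a Gaussian:

```python
import numpy as np

def psd_matrix(f, xs):
    """Return the n x n matrix A with A[j, k] = f(x_j - x_k)."""
    xs = np.asarray(xs, dtype=float)
    return f(xs[:, None] - xs[None, :])

def is_positive_semidefinite(A, tol=1e-10):
    """A Hermitian matrix is PSD iff all its eigenvalues are >= 0."""
    return bool(np.min(np.linalg.eigvalsh(A)) >= -tol)

# f(x) = exp(-x^2) is positive definite:
xs = np.linspace(-3.0, 3.0, 25)
A = psd_matrix(lambda x: np.exp(-x**2), xs)
print(is_positive_semidefinite(A))  # True
```

By contrast, a function such as $f(x) = 1 - x^2$ fails the test already for two well-separated points, since the off-diagonal entries then dominate the diagonal.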

Examples

If $(V, \langle \cdot, \cdot \rangle)$ is a real inner product space, then $g_y(x) = e^{i \langle y, x \rangle}$, $x \in V$, is positive definite for every $y \in V$: for all $u \in \mathbb{C}^n$ and all $x_1, \ldots, x_n \in V$ we have

$$\sum_{j,k=1}^{n} \overline{u_k} u_j \, g_y(x_j - x_k) = \sum_{j,k=1}^{n} \overline{u_k e^{i \langle y, x_k \rangle}} \, u_j e^{i \langle y, x_j \rangle} = \left| \sum_{j=1}^{n} u_j e^{i \langle y, x_j \rangle} \right|^2 \geq 0.$$

As nonnegative linear combinations of positive definite functions are again positive definite, the cosine function is positive definite as a nonnegative linear combination of the above functions (taking $V = \mathbb{R}$ and $y = \pm 1$):

$$\cos(x) = \frac{1}{2} \left( e^{ix} + e^{-ix} \right) = \frac{1}{2} \left( g_1(x) + g_{-1}(x) \right).$$
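Both the decomposition and the resulting positive semi-definiteness of the cosine can be verified numerically (an illustrative sketch; grid sizes are arbitrary choices):

```python
import numpy as np

xs = np.linspace(-4.0, 4.0, 30)
D = xs[:, None] - xs[None, :]   # D[j, k] = x_j - x_k

# cos x = (e^{ix} + e^{-ix}) / 2, a nonnegative combination of the
# positive-definite functions x -> e^{ixy} with y = +1 and y = -1.
A = np.cos(D)
B = 0.5 * (np.exp(1j * D) + np.exp(-1j * D))
print(np.allclose(A, B))                        # True
print(np.min(np.linalg.eigvalsh(A)) >= -1e-10)  # True: cos is PSD
```

In fact $\cos(x_j - x_k) = \cos x_j \cos x_k + \sin x_j \sin x_k$, so the matrix is a sum of two rank-one positive semi-definite matrices.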

One can easily create a positive definite function on any vector space $V$ from a positive definite function on $\mathbb{R}$: choose a linear function $\phi : V \to \mathbb{R}$ and define $f(x) = g_y(\phi(x))$. Then

$$\sum_{j,k=1}^{n} \overline{u_k} u_j \, f(x_j - x_k) = \sum_{j,k=1}^{n} \overline{u_k} u_j \, g_y(\tilde{x}_j - \tilde{x}_k) \geq 0,$$

where $\tilde{x}_k = \phi(x_k)$, since $\phi$ is linear and therefore $\phi(x_j - x_k) = \tilde{x}_j - \tilde{x}_k$. [1]
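This lifting construction can be illustrated numerically: below, the one-dimensional positive definite function $e^{it}$ is composed with a linear functional $\phi(x) = w \cdot x$ on $\mathbb{R}^3$ (the weight vector $w$ and the sample points are arbitrary choices, not from any source):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)                 # defines the linear functional
phi = lambda x: x @ w                  # phi(x) = w . x

pts = rng.normal(size=(8, 3))          # sample points x_1, ..., x_8 in R^3
T = phi(pts)[:, None] - phi(pts)[None, :]   # phi(x_j) - phi(x_k)
A = np.exp(1j * T)                     # A[j, k] = f(x_j - x_k), f = e^{i phi(.)}
print(np.min(np.linalg.eigvalsh(A)) >= -1e-10)  # True: f is PSD
```

The matrix here is $v v^{*}$ with $v_j = e^{i \phi(x_j)}$, hence rank one and positive semi-definite.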

Bochner's theorem

Positive-definiteness arises naturally in the theory of the Fourier transform; it can be seen directly that for f to be positive-definite it is sufficient that f be the Fourier transform of a function g on the real line with g(y) ≥ 0.

The converse result is Bochner's theorem, which states that any continuous positive-definite function on the real line is the Fourier transform of a (positive) measure. [2]
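The "easy" direction can be checked by quadrature: if $g \geq 0$ is integrable, then $f(x) = \int g(y)\, e^{ixy}\, dy$ is positive semi-definite, because each quadrature node contributes a rank-one positive semi-definite term. A sketch (grid sizes and the choice of $g$ are arbitrary):

```python
import numpy as np

ys = np.linspace(-10.0, 10.0, 2001)
dy = ys[1] - ys[0]
g = np.exp(-ys**2)                      # any nonnegative integrable g works

xs = np.linspace(-2.0, 2.0, 15)
D = xs[:, None] - xs[None, :]           # D[j, k] = x_j - x_k
# F[j, k] approximates f(x_j - x_k) = ∫ g(y) e^{i (x_j - x_k) y} dy;
# node m contributes g(y_m) dy * v v* with v_j = e^{i x_j y_m}.
F = (np.exp(1j * D[..., None] * ys) * g).sum(axis=-1) * dy
print(np.min(np.linalg.eigvalsh(F)) >= -1e-8)   # True
```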

Applications

In statistics, and especially Bayesian statistics, the theorem is usually applied to real functions. Typically, n scalar measurements of some scalar value at points in $\mathbb{R}^d$ are taken, and points that are mutually close are required to have highly correlated measurements. In practice, one must be careful to ensure that the resulting covariance matrix (an n×n matrix) is always positive-definite. One strategy is to define a correlation matrix A, which is then multiplied by a scalar to give a covariance matrix: the result must be positive-definite. Bochner's theorem states that if the correlation between two points depends only upon the distance between them (via a function f), then f must be positive-definite to ensure that the covariance matrix is positive-definite. See Kriging.
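As an illustration of this strategy (parameter names and values below are arbitrary choices), the squared-exponential correlation $f(d) = e^{-d^2 / (2\ell^2)}$ depends only on distance and is positive-definite, since it is the Fourier transform of a Gaussian; the resulting covariance matrix is therefore positive semi-definite for any layout of measurement points:

```python
import numpy as np

l, sigma2 = 0.5, 2.0                              # length scale, variance
pts = np.random.default_rng(1).uniform(size=12)   # measurement locations
d = np.abs(pts[:, None] - pts[None, :])           # pairwise distances
A = sigma2 * np.exp(-d**2 / (2 * l**2))           # covariance matrix
print(np.min(np.linalg.eigvalsh(A)) >= -1e-10)    # True
```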

In this context, Fourier terminology is not normally used and instead it is stated that f(x) is the characteristic function of a symmetric probability density function (PDF).

Generalization

One can define positive-definite functions on any locally compact abelian topological group; Bochner's theorem extends to this context. Positive-definite functions on groups occur naturally in the representation theory of groups on Hilbert spaces (i.e. the theory of unitary representations).

Definition 2

Alternatively, a function $f : \mathbb{R}^n \to \mathbb{R}$ is called positive-definite on a neighborhood $D$ of the origin if $f(0) = 0$ and $f(x) > 0$ for every non-zero $x \in D$. [3] [4]

Note that this definition conflicts with definition 1, given above.

In physics, the requirement that $f(0) = 0$ is sometimes dropped (see, e.g., Corney and Olsen [5]).
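A standard example of a function that is positive-definite in this second sense is $V(x) = \|x\|^2$, as used for Lyapunov functions (a trivial check, for illustration only):

```python
import numpy as np

# V(x) = ||x||^2 satisfies V(0) = 0 and V(x) > 0 for x != 0,
# so it is positive-definite (Definition 2) on all of R^n.
V = lambda x: float(np.dot(x, x))
print(V(np.zeros(3)) == 0.0)                # True
print(V(np.array([0.1, -0.2, 0.3])) > 0.0)  # True
```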


References

Notes

  1. Cheney, Elliot Ward (2009). A Course in Approximation Theory. American Mathematical Society. pp. 77–78. ISBN 9780821847985. Retrieved 3 February 2022.
  2. Bochner, Salomon (1959). Lectures on Fourier Integrals. Princeton University Press.
  3. Verhulst, Ferdinand (1996). Nonlinear Differential Equations and Dynamical Systems (2nd ed.). Springer. ISBN 3-540-60934-2.
  4. Hahn, Wolfgang (1967). Stability of Motion. Springer.
  5. Corney, J. F.; Olsen, M. K. (19 February 2015). "Non-Gaussian pure states and positive Wigner functions". Physical Review A. 91 (2): 023824. arXiv:1412.4868. Bibcode:2015PhRvA..91b3824C. doi:10.1103/PhysRevA.91.023824. ISSN 1050-2947. S2CID 119293595.