Definite quadratic form


In mathematics, a definite quadratic form is a quadratic form over some real vector space V that has the same sign (always positive or always negative) for every non-zero vector of V. According to that sign, the quadratic form is called positive-definite or negative-definite.


A semidefinite (or semi-definite) quadratic form is defined in much the same way, except that "always positive" and "always negative" are replaced by "never negative" and "never positive", respectively. In other words, it may take on zero values for some non-zero vectors of V.

An indefinite quadratic form takes on both positive and negative values and is called an isotropic quadratic form.

More generally, these definitions apply to any vector space over an ordered field. [1]

Associated symmetric bilinear form

Quadratic forms correspond one-to-one to symmetric bilinear forms over the same space. [2] A symmetric bilinear form is also described as definite, semidefinite, etc. according to its associated quadratic form. A quadratic form Q and its associated symmetric bilinear form B are related by the following equations:

Q(x) = B(x, x)

B(x, y) = B(y, x) = ½[Q(x + y) − Q(x) − Q(y)]

The latter formula arises from expanding Q(x + y) = B(x + y, x + y) = Q(x) + 2B(x, y) + Q(y).
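As a quick numerical illustration, the relation between a quadratic form and its symmetric bilinear form can be checked directly; this is a sketch in which the matrix defining B is our own illustrative choice:

```python
import numpy as np

# An arbitrary symmetric matrix defining B(x, y) = x^T A y (illustrative choice)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def B(x, y):
    return x @ A @ y

def Q(x):
    return B(x, x)

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

# B(x, y) recovered from Q alone via the polarization identity
recovered = 0.5 * (Q(x + y) - Q(x) - Q(y))
assert np.isclose(recovered, B(x, y))
```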

Examples

As an example, let V = ℝ², and consider the quadratic form

Q(x) = c1x1² + c2x2²

where x = (x1, x2) and c1 and c2 are constants. If c1 > 0 and c2 > 0, the quadratic form Q is positive-definite, so Q evaluates to a positive number whenever (x1, x2) ≠ (0, 0). If one of the constants is positive and the other is 0, then Q is positive semidefinite and always evaluates to either 0 or a positive number. If c1 > 0 and c2 < 0, or vice versa, then Q is indefinite and sometimes evaluates to a positive number and sometimes to a negative number. If c1 < 0 and c2 < 0, the quadratic form is negative-definite and always evaluates to a negative number whenever (x1, x2) ≠ (0, 0). And if one of the constants is negative and the other is 0, then Q is negative semidefinite and always evaluates to either 0 or a negative number.
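The sign analysis above can be spot-checked numerically; in the following sketch, the coefficient and sample values are illustrative choices of our own:

```python
# Spot checks for the diagonal form Q(x) = c1*x1**2 + c2*x2**2
def Q(c1, c2, x1, x2):
    return c1 * x1**2 + c2 * x2**2

nonzero_samples = [(1.0, -2.0), (3.0, 0.5), (-1.0, 1.0), (0.0, 2.0)]

# c1 > 0 and c2 > 0: positive-definite, positive on every nonzero vector
assert all(Q(1, 4, x1, x2) > 0 for x1, x2 in nonzero_samples)

# c1 > 0 and c2 = 0: positive semidefinite; never negative, but Q(0, 2) = 0
assert all(Q(1, 0, x1, x2) >= 0 for x1, x2 in nonzero_samples)
assert Q(1, 0, 0.0, 2.0) == 0

# c1 > 0 and c2 < 0: indefinite, taking both signs
assert Q(1, -1, 1.0, 0.0) > 0 and Q(1, -1, 0.0, 1.0) < 0

# c1 < 0 and c2 < 0: negative-definite
assert all(Q(-1, -4, x1, x2) < 0 for x1, x2 in nonzero_samples)
```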

In general a quadratic form in two variables will also involve a cross-product term in x1·x2:

Q(x) = c1x1² + c2x2² + 2c3x1·x2

This quadratic form is positive-definite if c1 > 0 and c1c2 − c3² > 0, negative-definite if c1 < 0 and c1c2 − c3² > 0, and indefinite if c1c2 − c3² < 0. It is positive or negative semidefinite if c1c2 − c3² = 0, with the sign of the semidefiniteness coinciding with the sign of c1 (or of c2, if c1 = 0).
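These conditions on c1 and the determinant c1c2 − c3² translate directly into a classification routine; the following is a sketch, and the function name is our own:

```python
# Classify Q(x) = c1*x1**2 + c2*x2**2 + 2*c3*x1*x2 from the signs of
# c1 and the determinant c1*c2 - c3**2 (hypothetical helper name).
def classify(c1, c2, c3):
    det = c1 * c2 - c3**2
    if det > 0:
        return "positive-definite" if c1 > 0 else "negative-definite"
    if det < 0:
        return "indefinite"
    # det == 0: semidefinite; sign follows c1 (or c2 when c1 == 0)
    s = c1 if c1 != 0 else c2
    if s > 0:
        return "positive semidefinite"
    if s < 0:
        return "negative semidefinite"
    return "zero form"

assert classify(2, 3, 1) == "positive-definite"
assert classify(-2, -3, 1) == "negative-definite"
assert classify(1, -1, 0) == "indefinite"
assert classify(1, 1, 1) == "positive semidefinite"   # Q = (x1 + x2)**2
```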

This bivariate quadratic form appears in the context of conic sections centered on the origin. If the general quadratic form above is equated to 0, the resulting equation is that of an ellipse if the quadratic form is positive or negative-definite, a hyperbola if it is indefinite, and a parabola if c1c2 − c3² = 0.

The square of the Euclidean norm in n-dimensional space, the most commonly used measure of distance, is

Q(x) = x1² + x2² + ⋯ + xn²

In two dimensions this means that the distance between two points is the square root of the sum of the squared distances along the x axis and the y axis.
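For instance, a minimal sketch using Python's standard library with an illustrative point:

```python
import math

# Squared Euclidean norm Q(x) = x1**2 + x2**2 for the point x = (3, 4)
x = (3.0, 4.0)
assert sum(xi**2 for xi in x) == 25.0

# The distance from the origin is the square root of Q
assert math.dist((0.0, 0.0), x) == 5.0
```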

Matrix form

A quadratic form can be written in terms of matrices as

Q(x) = xᵀAx

where x is any n×1 Cartesian vector in which at least one element is not 0; A is an n × n symmetric matrix; and superscript ᵀ denotes a matrix transpose. If A is diagonal this is equivalent to a non-matrix form containing solely terms involving squared variables; but if A has any non-zero off-diagonal elements, the non-matrix form will also contain some terms involving products of two different variables.

Positive or negative-definiteness or semi-definiteness, or indefiniteness, of this quadratic form is equivalent to the same property of A, which can be checked by considering all eigenvalues of A or by checking the signs of all of its principal minors.
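The eigenvalue test can be sketched as follows; the helper name and the numerical tolerance are our own illustrative choices:

```python
import numpy as np

# Classify a symmetric matrix (and hence its quadratic form) by the
# signs of its eigenvalues (hypothetical helper; tolerance is ours).
def definiteness(A, tol=1e-12):
    eigvals = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "positive-definite"
    if np.all(eigvals < -tol):
        return "negative-definite"
    if np.all(eigvals >= -tol):
        return "positive semidefinite"
    if np.all(eigvals <= tol):
        return "negative semidefinite"
    return "indefinite"

assert definiteness(np.array([[2.0, 1.0], [1.0, 2.0]])) == "positive-definite"
assert definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])) == "indefinite"
assert definiteness(np.array([[1.0, 1.0], [1.0, 1.0]])) == "positive semidefinite"
```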

Optimization

Definite quadratic forms lend themselves readily to optimization problems. Suppose the matrix quadratic form is augmented with linear terms, as

Q(x) = xᵀAx + bᵀx

where b is an n×1 vector of constants. The first-order conditions for a maximum or minimum are found by setting the matrix derivative to the zero vector:

2Ax + b = 0

giving

x = −½A⁻¹b

assuming A is nonsingular. If the quadratic form, and hence A, is positive-definite, the second-order conditions for a minimum are met at this point. If the quadratic form is negative-definite, the second-order conditions for a maximum are met.
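A small numerical sketch of this optimization, with A and b as illustrative choices of our own:

```python
import numpy as np

# Minimize f(x) = x^T A x + b^T x for a positive-definite A.
# Stationary point: 2 A x + b = 0, so x* = -(1/2) A^{-1} b.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([-4.0, 2.0])

x_star = -0.5 * np.linalg.solve(A, b)

def f(x):
    return x @ A @ x + b @ x

# The gradient vanishes at x*, and nearby points give larger values,
# confirming a minimum since A is positive-definite
assert np.allclose(2 * A @ x_star + b, 0)
assert f(x_star) <= f(x_star + np.array([0.1, 0.0]))
assert f(x_star) <= f(x_star + np.array([0.0, -0.1]))
```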

An important example of such an optimization arises in multiple regression, in which a vector of estimated parameters is sought which minimizes the sum of squared deviations from a perfect fit within the dataset.


Notes

  1. Milnor & Husemoller 1973, p. 61.
  2. This is true only over a field of characteristic other than 2, but here we consider only ordered fields, which necessarily have characteristic 0.


References

  - Milnor, John; Husemoller, Dale (1973). Symmetric Bilinear Forms. Springer-Verlag.