In mathematics, a symmetric bilinear form on a vector space is a bilinear map from two copies of the vector space to the field of scalars such that the order of the two vectors does not affect the value of the map. In other words, it is a bilinear function B that maps every pair (u, v) of elements of the vector space V to the underlying field such that B(u, v) = B(v, u) for every u and v in V. They are also referred to more briefly as just symmetric forms when "bilinear" is understood.
Symmetric bilinear forms on finite-dimensional vector spaces precisely correspond to symmetric matrices given a basis for V. Among bilinear forms, the symmetric ones are important because they are the ones for which the vector space admits a particularly simple kind of basis known as an orthogonal basis (at least when the characteristic of the field is not 2).
Given a symmetric bilinear form B, the function q(x) = B(x, x) is the associated quadratic form on the vector space. Moreover, if the characteristic of the field is not 2, B is the unique symmetric bilinear form associated with q.
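To make the relationship between B and q concrete, the following minimal Python sketch (assuming NumPy; the 2×2 symmetric matrix is an illustrative choice, not taken from the text) checks the polarization identity B(x, y) = (q(x + y) − q(x) − q(y))/2, which is how B is recovered from q when the characteristic is not 2.

```python
import numpy as np

# Illustrative symmetric matrix representing B on R^2 (chosen for this example).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def B(x, y):
    return x @ A @ y          # B(x, y) = x^T A y

def q(x):
    return B(x, x)            # associated quadratic form

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

# Polarization identity: recovers B(x, y) from q alone.
recovered = (q(x + y) - q(x) - q(y)) / 2
assert np.isclose(recovered, B(x, y))
```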
Let V be a vector space of dimension n over a field K. A map B : V × V → K is a symmetric bilinear form on the space if:
B(u, v) = B(v, u) for all u, v in V (symmetry)
B(u + v, w) = B(u, w) + B(v, w) for all u, v, w in V (additivity in the first argument)
B(λv, w) = λB(v, w) for all λ in K and all v, w in V (homogeneity in the first argument)
The last two axioms only establish linearity in the first argument, but the first axiom (symmetry) then immediately implies linearity in the second argument as well.
Let V = Rⁿ, the n-dimensional real vector space. Then the standard dot product is a symmetric bilinear form, B(x, y) = x ⋅ y. The matrix corresponding to this bilinear form (see below) on the standard basis is the identity matrix.
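As a quick illustration (a sketch assuming NumPy, with n = 3 chosen arbitrarily), one can tabulate B(eᵢ, eⱼ) over the standard basis and confirm that the resulting matrix is the identity:

```python
import numpy as np

n = 3
E = np.eye(n)                                  # columns are the standard basis vectors

def B(x, y):
    return np.dot(x, y)                        # the standard dot product on R^n

# Matrix of the form on the standard basis: A_ij = B(e_i, e_j).
A = np.array([[B(E[:, i], E[:, j]) for j in range(n)] for i in range(n)])
assert np.allclose(A, np.eye(n))               # it is the identity matrix
```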
Let V be any vector space (including possibly infinite-dimensional), and assume T is a linear function from V to the field. Then the function defined by B(x, y) = T(x)T(y) is a symmetric bilinear form.
Let V be the vector space of continuous single-variable real functions. For f and g in V one can define B(f, g) = ∫₀¹ f(t)g(t) dt. By the properties of definite integrals, this defines a symmetric bilinear form on V. This is an example of a symmetric bilinear form which is not associated to any symmetric matrix (since the vector space is infinite-dimensional).
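A rough numerical sketch of this integral form and its symmetry (assuming NumPy; the crude Riemann-sum quadrature below is only for illustration):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]

def B(f, g):
    # Approximate the definite integral of f(t) g(t) over [0, 1].
    return np.sum(f(t) * g(t)) * dt

f, g = np.sin, np.exp
assert np.isclose(B(f, g), B(g, f))            # symmetry of the form
```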
Let C = {e₁, …, eₙ} be a basis for V. Define the n × n matrix A by Aᵢⱼ = B(eᵢ, eⱼ). The matrix A is a symmetric matrix exactly due to symmetry of the bilinear form. If the n×1 matrix x represents a vector v with respect to this basis, and analogously, y represents w, then B(v, w) is given by xᵀAy.
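A short sketch (assuming NumPy; the particular form on R² below is an illustrative choice) that builds A from a basis and checks B(v, w) = xᵀAy:

```python
import numpy as np

def B(u, v):
    # An illustrative symmetric bilinear form on R^2.
    return u[0]*v[0] + 2*u[0]*v[1] + 2*u[1]*v[0] + 5*u[1]*v[1]

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
n = len(basis)

# A_ij = B(e_i, e_j); symmetric because B is symmetric.
A = np.array([[B(basis[i], basis[j]) for j in range(n)] for i in range(n)])

x = np.array([3.0, -1.0])                      # coordinates of v
y = np.array([2.0,  4.0])                      # coordinates of w
v = sum(xi * e for xi, e in zip(x, basis))
w = sum(yi * e for yi, e in zip(y, basis))
assert np.isclose(B(v, w), x @ A @ y)          # B(v, w) = x^T A y
```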
Suppose C′ = {e′₁, …, e′ₙ} is another basis for V, with [e′₁ ⋯ e′ₙ] = [e₁ ⋯ eₙ]S for some invertible n×n matrix S. Now the new matrix representation for the symmetric bilinear form is given by A′ = SᵀAS.
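A sketch (assuming NumPy; A and S below are illustrative) of the congruence transformation A′ = SᵀAS and of the fact that the value of the form does not depend on the basis:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 5.0]])                     # matrix of B in the old basis
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])                     # columns: new basis vectors in old coordinates

A_new = S.T @ A @ S                            # congruence, not similarity

# Same value of B computed in either coordinate system.
x_new, y_new = np.array([1.0, -1.0]), np.array([2.0, 3.0])
x_old, y_old = S @ x_new, S @ y_new
assert np.isclose(x_old @ A @ y_old, x_new @ A_new @ y_new)
```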
Two vectors v and w are defined to be orthogonal with respect to the bilinear form B if B(v, w) = 0, which, for a symmetric bilinear form, is equivalent to B(w, v) = 0.
The radical of a bilinear form B is the set of vectors orthogonal to every vector in V. That this is a subspace of V follows from the linearity of B in each of its arguments. When working with a matrix representation A with respect to a certain basis, v, represented by x, is in the radical if and only if Ax = 0.
The matrix A is singular if and only if the radical is nontrivial.
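A sketch (assuming NumPy; the singular matrix below is an illustrative choice) that computes the radical as the null space of A via the SVD:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])                # singular symmetric matrix

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
radical_basis = Vt[rank:]                      # rows span the null space of A

assert radical_basis.shape[0] == 1             # nontrivial radical, spanned by (1, -1, 0)
for x in radical_basis:
    assert np.allclose(A @ x, 0)               # A x = 0 for every radical vector
```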
If W is a subspace of V, then its orthogonal complement W⊥ is the set of all vectors in V that are orthogonal to every vector in W; it is a subspace of V. When B is non-degenerate, the radical of B is trivial and the dimension of W⊥ is dim(W⊥) = dim(V) − dim(W).
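A sketch (assuming NumPy; the non-degenerate form and the subspace W below are illustrative) of computing W⊥ and checking the dimension formula:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, -1.0]])               # a non-degenerate symmetric form on R^3

W = np.array([[1.0, 0.0, 1.0]])                # rows: a basis of W

M = W @ A                                      # x lies in W-perp  <=>  M x = 0
U, s, Vt = np.linalg.svd(M)
rank = int(np.sum(s > 1e-10))
W_perp = Vt[rank:]                             # rows span W-perp

assert W_perp.shape[0] == A.shape[0] - W.shape[0]   # dim(W-perp) = dim(V) - dim(W)
```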
A basis C = {e₁, …, eₙ} is orthogonal with respect to B if and only if B(eᵢ, eⱼ) = 0 for all i ≠ j.
When the characteristic of the field is not two, V always has an orthogonal basis. This can be proven by induction: if B is not identically zero, the assumption on the characteristic guarantees a vector v with B(v, v) ≠ 0, and V splits as the span of v plus its orthogonal complement, to which the inductive hypothesis applies.
A basis C is orthogonal if and only if the matrix representation A is a diagonal matrix.
In a more general form, Sylvester's law of inertia says that, when working over an ordered field, the numbers of diagonal elements in the diagonalized form of a matrix that are positive, negative and zero respectively are independent of the chosen orthogonal basis. These three numbers form the signature of the bilinear form.
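A sketch (assuming NumPy; the matrix below is illustrative) that reads off the signature by counting the signs of the eigenvalues. This works because, by the spectral theorem, a real symmetric matrix is congruent to the diagonal matrix of its eigenvalues, and by Sylvester's law the sign counts do not depend on the chosen orthogonal basis:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

w = np.linalg.eigvalsh(A)
tol = 1e-10
signature = (int(np.sum(w >  tol)),            # positive entries
             int(np.sum(w < -tol)),            # negative entries
             int(np.sum(np.abs(w) <= tol)))    # zero entries (dimension of the radical)
print(signature)                               # (1, 1, 1) for this example
```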
When working in a space over the reals, one can go a bit further. Let C = {e₁, …, eₙ} be an orthogonal basis.
We define a new basis C′ = {e′₁, …, e′ₙ} by
e′ᵢ = eᵢ if B(eᵢ, eᵢ) = 0,
e′ᵢ = eᵢ / √(B(eᵢ, eᵢ)) if B(eᵢ, eᵢ) > 0,
e′ᵢ = eᵢ / √(−B(eᵢ, eᵢ)) if B(eᵢ, eᵢ) < 0.
Now, the new matrix representation A will be a diagonal matrix with only 0, 1 and −1 on the diagonal. Zeroes will appear if and only if the radical is nontrivial.
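A sketch (assuming NumPy and starting from an illustrative diagonal matrix, i.e. the matrix of B in some orthogonal basis) of this rescaling over the reals:

```python
import numpy as np

D = np.diag([4.0, -9.0, 0.0])                  # B in an orthogonal basis
d = np.diag(D)

scales = np.ones_like(d)
nz = np.abs(d) > 1e-10
scales[nz] = 1.0 / np.sqrt(np.abs(d[nz]))      # divide e_i by sqrt(|B(e_i, e_i)|)
S = np.diag(scales)

D_new = S.T @ D @ S
print(np.diag(D_new))                          # [ 1. -1.  0.]
```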
When working in a space over the complex numbers, one can go further as well, and it is even easier. Let C = {e₁, …, eₙ} be an orthogonal basis.
We define a new basis C′ = {e′₁, …, e′ₙ} by
e′ᵢ = eᵢ if B(eᵢ, eᵢ) = 0,
e′ᵢ = eᵢ / √(B(eᵢ, eᵢ)) if B(eᵢ, eᵢ) ≠ 0,
where √(B(eᵢ, eᵢ)) denotes either complex square root of B(eᵢ, eᵢ).
Now the new matrix representation A will be a diagonal matrix with only 0 and 1 on the diagonal. Zeroes will appear if and only if the radical is nontrivial.
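A sketch (assuming NumPy, with an illustrative diagonal matrix over C) of the complex rescaling; note the plain transpose, since B is bilinear rather than sesquilinear:

```python
import numpy as np

D = np.diag([4.0, -9.0, 0.0]).astype(complex)  # B in an orthogonal basis, over C
d = np.diag(D)

scales = np.ones_like(d)
nz = np.abs(d) > 1e-10
scales[nz] = 1.0 / np.sqrt(d[nz])              # any complex square root works
S = np.diag(scales)

D_new = S.T @ D @ S                             # transpose, not conjugate transpose
print(np.diag(D_new).round(10))                 # [1.+0.j 1.+0.j 0.+0.j]
```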
Let B be a symmetric bilinear form with a trivial radical on the space V over the field K with characteristic not 2. One can now define a map from D(V), the set of all subspaces of V, to itself:
α : D(V) → D(V) : W ↦ W⊥.
This map is an orthogonal polarity on the projective space PG(V). Conversely, one can prove all orthogonal polarities are induced in this way, and that two symmetric bilinear forms with trivial radical induce the same polarity if and only if they are equal up to scalar multiplication.
In mathematics, an inner product space is a real vector space or a complex vector space with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets such as in ⟨a, b⟩. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or scalar product of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.
In mathematics, a symmetric matrix M with real entries is positive-definite if the real number xᵀMx is positive for every nonzero real column vector x, where xᵀ is the transpose of x. More generally, a Hermitian matrix M is positive-definite if the real number z*Mz is positive for every nonzero complex column vector z, where z* denotes the conjugate transpose of z.
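A sketch (assuming NumPy; the 2×2 matrix is an illustrative choice) of checking positive-definiteness, either directly on a vector or via the eigenvalues of the real symmetric matrix:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

x = np.array([3.0, -4.0])
print(x @ M @ x > 0)                           # x^T M x > 0 for this nonzero x

# For real symmetric M, positive-definiteness is equivalent to all eigenvalues > 0.
print(np.all(np.linalg.eigvalsh(M) > 0))       # True
```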
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers, and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space.
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of a matrix A, producing another matrix, often denoted by Aᵀ.
In mathematics, the orthogonal group in dimension n, denoted O(n), is the group of distance-preserving transformations of a Euclidean space of dimension n that preserve a fixed point, where the group operation is given by composing transformations. The orthogonal group is sometimes called the general orthogonal group, by analogy with the general linear group. Equivalently, it is the group of n×n orthogonal matrices, where the group operation is given by matrix multiplication (an orthogonal matrix is a real matrix whose inverse equals its transpose). The orthogonal group is an algebraic group and a Lie group. It is compact.
In mathematics, a symplectic matrix is a 2n × 2n matrix M with real entries that satisfies the condition MᵀΩM = Ω, where Ω is a fixed 2n × 2n nonsingular, skew-symmetric matrix.
In mathematics, particularly in linear algebra, a skew-symmetric matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition Aᵀ = −A.
In mathematics, a Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose—that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j: in symbols, A = Aᴴ, where Aᴴ denotes the conjugate transpose of A.
In mathematics, a quadratic form is a polynomial with terms all of degree two. For example, 4x² + 2xy − 3y² is a quadratic form in the variables x and y.
In mathematics, a symplectic vector space is a vector space V over a field F equipped with a symplectic bilinear form.
In mathematics, a bilinear form is a bilinear map V × V → K on a vector space V over a field K. In other words, a bilinear form is a function B : V × V → K that is linear in each argument separately:
B(u + v, w) = B(u, w) + B(v, w) and B(λu, v) = λB(u, v) (linearity in the first argument),
B(u, v + w) = B(u, v) + B(u, w) and B(u, λv) = λB(u, v) (linearity in the second argument).
In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or anti-Hermitian if its conjugate transpose is the negative of the original matrix. That is, the matrix A is skew-Hermitian if it satisfies the relation Aᴴ = −A, where Aᴴ denotes the conjugate transpose of A.
In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself such that P² = P. That is, whenever P is applied twice to any vector, it gives the same result as if it were applied once. It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
In linear algebra, a rotation matrix is a transformation matrix that is used to perform a rotation in Euclidean space. For example, using the convention below, the matrix
R = ( cos θ  −sin θ )
    ( sin θ   cos θ )
rotates points in the xy plane counterclockwise through an angle θ about the origin of a two-dimensional Cartesian coordinate system.
In mathematics, an ordered basis of a vector space of finite dimension n allows representing uniquely any element of the vector space by a coordinate vector, which is a sequence of n scalars called coordinates. If two different bases are considered, the coordinate vector that represents a vector v on one basis is, in general, different from the coordinate vector that represents v on the other basis. A change of basis consists of converting every assertion expressed in terms of coordinates relative to one basis into an assertion expressed in terms of coordinates relative to the other basis.
In mathematics, in the area of numerical analysis, Galerkin methods, named after the Russian mathematician Boris Galerkin, convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions.
In mathematics, the classical groups are defined as the special linear groups over the reals R, the complex numbers C and the quaternions H together with special automorphism groups of symmetric or skew-symmetric bilinear forms and Hermitian or skew-Hermitian sesquilinear forms defined on real, complex and quaternionic finite-dimensional vector spaces. Of these, the complex classical Lie groups are four infinite families of Lie groups that together with the exceptional groups exhaust the classification of simple Lie groups. The compact classical groups are compact real forms of the complex classical groups. The finite analogues of the classical groups are the classical groups of Lie type. The term "classical group" was coined by Hermann Weyl, it being the title of his 1939 monograph The Classical Groups.
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
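A sketch (assuming NumPy; the matrix is illustrative) of the spectral decomposition of a real symmetric matrix, A = Q diag(w) Qᵀ with Q orthogonal:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, Q = np.linalg.eigh(A)                       # eigenvalues w, orthonormal eigenvectors as columns of Q
assert np.allclose(Q @ np.diag(w) @ Q.T, A)    # reconstructs A
assert np.allclose(Q.T @ Q, np.eye(2))         # Q is orthogonal
```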
In the mathematical field of linear algebra, an arrowhead matrix is a square matrix containing zeros in all entries except for the first row, first column, and main diagonal; these entries can be any number. In other words, the matrix has the form
A = ( * * * * )
    ( * * 0 0 )
    ( * 0 * 0 )
    ( * 0 0 * ).