# Bilinear form


In mathematics, a bilinear form on a vector space V is a bilinear map V × V → K, where K is the field of scalars. In other words, a bilinear form is a function B : V × V → K that is linear in each argument separately:

• B(u + v, w) = B(u, w) + B(v, w)    and   B(λu, v) = λB(u, v)
• B(u, v + w) = B(u, v) + B(u, w)    and   B(u, λv) = λB(u, v)

The dot product on ${\displaystyle \mathbb {R} ^{n}}$ is an example of a bilinear form. [1]
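As a concrete check, the two bilinearity axioms above can be verified numerically for the dot product. The following is a minimal plain-Python sketch; the helper names `dot`, `add`, and `scale` are our own, not from any cited source:

```python
# Verify the bilinearity axioms for the dot product on R^3.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(lam, u):
    return [lam * a for a in u]

u, v, w, lam = [1, 2, 3], [4, 5, 6], [7, 8, 9], 3

# Linearity in the first argument
assert dot(add(u, v), w) == dot(u, w) + dot(v, w)
assert dot(scale(lam, u), v) == lam * dot(u, v)
# Linearity in the second argument
assert dot(u, add(v, w)) == dot(u, v) + dot(u, w)
assert dot(u, scale(lam, v)) == lam * dot(u, v)
```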

The definition of a bilinear form can be extended to include modules over a ring, with linear maps replaced by module homomorphisms.

When K is the field of complex numbers C, one is often more interested in sesquilinear forms, which are similar to bilinear forms but are conjugate linear in one argument.

## Coordinate representation

Let V ≅ Kn be an n-dimensional vector space with basis {e1, ..., en}.

The n × n matrix A, defined by Aij = B(ei, ej), is called the matrix of the bilinear form on the basis {e1, ..., en}.

If the n × 1 matrix x represents a vector v with respect to this basis, and analogously, y represents another vector w, then:

${\displaystyle B(\mathbf {v} ,\mathbf {w} )=\mathbf {x} ^{\textsf {T}}A\mathbf {y} =\sum _{i,j=1}^{n}x_{i}a_{ij}y_{j}.}$
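In code, this coordinate formula is a double sum over the matrix entries. A plain-Python sketch (the `bilinear` helper and the sample matrix are our own illustration):

```python
# Evaluate B(v, w) = x^T A y = sum_{i,j} x_i a_ij y_j from the matrix A of B.
def bilinear(A, x, y):
    n = len(A)
    return sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(n))

A = [[1, 2],
     [0, 3]]            # matrix of B in the chosen basis
x, y = [1, 1], [2, -1]  # coordinate vectors of v and w

# 1*1*2 + 1*2*(-1) + 1*0*2 + 1*3*(-1) = -3
assert bilinear(A, x, y) == -3
```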

A bilinear form has different matrices on different bases. However, the matrices of a bilinear form on different bases are all congruent. More precisely, if {f1, ..., fn} is another basis of V, then

${\displaystyle \mathbf {f} _{j}=\sum _{i=1}^{n}S_{i,j}\mathbf {e} _{i},}$

where the ${\displaystyle S_{i,j}}$ form an invertible matrix S. Then, the matrix of the bilinear form on the new basis is STAS.
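The congruence A ↦ STAS can be checked numerically. In this plain-Python sketch (helpers and sample matrices are our own), the columns of S hold the coordinates of the new basis vectors in the old basis:

```python
# Change of basis: the matrix of B in the basis f_j = sum_i S_ij e_i is S^T A S.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def bilinear(A, x, y):
    n = len(A)
    return sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(n))

A = [[1, 2], [0, 3]]   # matrix of B in the basis {e1, e2}
S = [[1, 1], [0, 1]]   # invertible change-of-basis matrix
A_new = matmul(transpose(S), matmul(A, S))

# Entry (1, 2) of the new matrix is B(f1, f2), computed via old coordinates:
f1 = [S[0][0], S[1][0]]   # coordinates of f1 in {e1, e2}
f2 = [S[0][1], S[1][1]]
assert A_new[0][1] == bilinear(A, f1, f2)
```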

## Maps to the dual space

Every bilinear form B on V defines a pair of linear maps from V to its dual space V∗. Define B1, B2 : V → V∗ by

B1(v)(w) = B(v, w)
B2(v)(w) = B(w, v)

This is often denoted as

B1(v) = B(v, ⋅)
B2(v) = B(⋅, v)

where the dot ( ⋅ ) indicates the slot into which the argument for the resulting linear functional is to be placed (see Currying).

For a finite-dimensional vector space V, if either of B1 or B2 is an isomorphism, then both are, and the bilinear form B is said to be nondegenerate. More concretely, for a finite-dimensional vector space, non-degenerate means that every non-zero element pairs non-trivially with some other element:

${\displaystyle B(x,y)=0\,}$ for all ${\displaystyle y\in V}$ implies that x = 0 and
${\displaystyle B(x,y)=0\,}$ for all ${\displaystyle x\in V}$ implies that y = 0.

The corresponding notion for a module over a commutative ring is that a bilinear form is unimodular if V → V∗ is an isomorphism. Given a finitely generated module over a commutative ring, the pairing may be injective (hence "nondegenerate" in the above sense) but not unimodular. For example, over the integers, the pairing B(x, y) = 2xy is nondegenerate but not unimodular, as the induced map from V = Z to V∗ = Z is multiplication by 2.

If V is finite-dimensional then one can identify V with its double dual V∗∗. One can then show that B2 is the transpose of the linear map B1 (if V is infinite-dimensional then B2 is the transpose of B1 restricted to the image of V in V∗∗). Given B one can define the transpose of B to be the bilinear form given by

tB(v, w) = B(w, v).

The left radical and right radical of the form B are the kernels of B1 and B2 respectively; [2] they are the vectors orthogonal to the whole space on the left and on the right. [3]

If V is finite-dimensional then the rank of B1 is equal to the rank of B2. If this number is equal to dim(V) then B1 and B2 are linear isomorphisms from V to V∗. In this case B is nondegenerate. By the rank–nullity theorem, this is equivalent to the condition that the left and equivalently right radicals be trivial. For finite-dimensional spaces, this is often taken as the definition of nondegeneracy:

Definition: B is nondegenerate if B(v, w) = 0 for all w implies v = 0.

Given any linear map A : V → V∗ one can obtain a bilinear form B on V via

B(v, w) = A(v)(w).

This form will be nondegenerate if and only if A is an isomorphism.

If V is finite-dimensional then, relative to some basis for V, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero. Likewise, a nondegenerate form is one for which the determinant of the associated matrix is non-zero (the matrix is non-singular). These statements are independent of the chosen basis. For a module over a commutative ring, a unimodular form is one for which the determinant of the associated matrix is a unit (for example 1), hence the term; note that a form whose matrix determinant is non-zero but not a unit will be nondegenerate but not unimodular, for example B(x, y) = 2xy over the integers.
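A minimal numerical illustration of the determinant criterion, in plain Python (the 2 × 2 matrices are our own examples):

```python
# Nondegeneracy via the determinant: B is nondegenerate iff det(A) != 0.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A_nondeg = [[0, 1], [-1, 0]]   # the standard symplectic form on K^2
A_deg    = [[1, 1], [1, 1]]    # B(v, w) = (v1 + v2)(w1 + w2), degenerate

assert det2(A_nondeg) != 0 and det2(A_deg) == 0

# A radical vector of the degenerate form: v = (1, -1) satisfies Av = 0,
# so B(w, v) = 0 for every w.
v = [1, -1]
assert all(sum(A_deg[i][j] * v[j] for j in range(2)) == 0 for i in range(2))
```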

## Symmetric, skew-symmetric and alternating forms

We define a bilinear form to be

• symmetric if B(v, w) = B(w, v) for all v, w in V;
• alternating if B(v, v) = 0 for all v in V;
• skew-symmetric if B(v, w) = −B(w, v) for all v, w in V.
Proposition: Every alternating form is skew-symmetric.
Proof: Expanding 0 = B(v + w, v + w) = B(v, v) + B(v, w) + B(w, v) + B(w, w) and using B(v, v) = B(w, w) = 0 gives B(v, w) + B(w, v) = 0.

If the characteristic of K is not 2 then the converse is also true: every skew-symmetric form is alternating. If, however, char(K) = 2 then a skew-symmetric form is the same as a symmetric form and there exist symmetric/skew-symmetric forms that are not alternating.

A bilinear form is symmetric (respectively skew-symmetric) if and only if its coordinate matrix (relative to any basis) is symmetric (respectively skew-symmetric). A bilinear form is alternating if and only if its coordinate matrix is skew-symmetric and the diagonal entries are all zero (which follows from skew-symmetry when char(K) ≠ 2).

A bilinear form is symmetric if and only if the maps B1, B2 : V → V∗ are equal, and skew-symmetric if and only if they are negatives of one another. If char(K) ≠ 2 then one can decompose a bilinear form into a symmetric and a skew-symmetric part as follows

${\displaystyle B^{+}={\tfrac {1}{2}}(B+{}^{\text{t}}B)\qquad B^{-}={\tfrac {1}{2}}(B-{}^{\text{t}}B),}$

where tB is the transpose of B (defined above).
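The decomposition can be carried out on the coordinate matrix, since B ↦ tB corresponds to A ↦ AT. A plain-Python sketch (the sample matrix is our own):

```python
# Split the matrix A of a bilinear form into its symmetric part (A + A^T)/2
# and skew-symmetric part (A - A^T)/2; this requires char K != 2 (here K = Q).
def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1.0, 4.0],
     [2.0, 3.0]]
At = transpose(A)
n = len(A)
B_plus  = [[(A[i][j] + At[i][j]) / 2 for j in range(n)] for i in range(n)]
B_minus = [[(A[i][j] - At[i][j]) / 2 for j in range(n)] for i in range(n)]

assert B_plus == transpose(B_plus)                                   # symmetric
assert B_minus == [[-e for e in row] for row in transpose(B_minus)]  # skew
assert all(B_plus[i][j] + B_minus[i][j] == A[i][j]
           for i in range(n) for j in range(n))                      # B+ + B- = B
```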

For any bilinear form B : V × V → K, there exists an associated quadratic form Q : V → K defined by Q(v) = B(v, v).

When char(K) ≠ 2, the quadratic form Q is determined by the symmetric part of the bilinear form B and is independent of the antisymmetric part. In this case there is a one-to-one correspondence between the symmetric part of the bilinear form and the quadratic form, and it makes sense to speak of the symmetric bilinear form associated with a quadratic form.
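This correspondence is realized by the polarization identity B⁺(v, w) = ½(Q(v + w) − Q(v) − Q(w)). The following plain-Python sketch (matrix and vectors are our own examples) recovers the symmetric part of a non-symmetric form from its quadratic form alone:

```python
# Polarization (char K != 2): the symmetric part of B is determined by
# the quadratic form Q(v) = B(v, v) via B+(v, w) = (Q(v+w) - Q(v) - Q(w)) / 2.
def bilinear(A, x, y):
    n = len(A)
    return sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(n))

A = [[1, 4], [2, 3]]             # not symmetric: B has a skew part
Q = lambda v: bilinear(A, v, v)  # associated quadratic form

v, w = [1, 2], [3, -1]
vw = [a + b for a, b in zip(v, w)]
B_sym = (Q(vw) - Q(v) - Q(w)) / 2

# Matches (B(v, w) + B(w, v)) / 2, and is blind to the skew part of B.
assert B_sym == (bilinear(A, v, w) + bilinear(A, w, v)) / 2
```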

When char(K) = 2 and dim V > 1, this correspondence between quadratic forms and symmetric bilinear forms breaks down.

## Reflexivity and orthogonality

Definition: A bilinear form B : V × V → K is called reflexive if B(v, w) = 0 implies B(w, v) = 0 for all v, w in V.
Definition: Let B : V × V → K be a reflexive bilinear form. v, w in V are orthogonal with respect to B if B(v, w) = 0.

A bilinear form B is reflexive if and only if it is either symmetric or alternating. [4] In the absence of reflexivity we have to distinguish left and right orthogonality. In a reflexive space the left and right radicals agree and are termed the kernel or the radical of the bilinear form: the subspace of all vectors orthogonal to every other vector. A vector v, with matrix representation x, is in the radical of a bilinear form with matrix representation A if and only if Ax = 0 (equivalently, xTA = 0). The radical is always a subspace of V. It is trivial if and only if the matrix A is nonsingular, and thus if and only if the bilinear form is nondegenerate.

Suppose W is a subspace. Define the orthogonal complement [5]

${\displaystyle W^{\perp }=\{\mathbf {v} \mid B(\mathbf {v} ,\mathbf {w} )=0{\text{ for all }}\mathbf {w} \in W\}\ .}$

For a non-degenerate form on a finite-dimensional space, the map V/W⊥ → W∗ is bijective, and the dimension of W⊥ is dim(V) − dim(W).
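A small plain-Python example (the split form and vectors are our own choice) shows that for an indefinite nondegenerate form, W⊥ can intersect W even though the dimensions still add up:

```python
# Orthogonal complement: v is in W-perp iff B(v, w) = 0 for all w in W.
def bilinear(A, x, y):
    n = len(A)
    return sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(n))

A = [[1, 0], [0, -1]]   # the split form R(1, 1) on the plane (nondegenerate)
w0 = [1, 1]             # a null vector: B(w0, w0) = 1 - 1 = 0

assert bilinear(A, w0, w0) == 0
# w0 is orthogonal to its own span, so for W = span{w0} we get W-perp = W:
# dim(W) + dim(W-perp) = 2 = dim(V), yet W and W-perp are not complementary.
assert bilinear(A, [2, 2], w0) == 0
```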

## Different spaces

Much of the theory is available for a bilinear mapping from two vector spaces over the same base field to that field

B : V × W → K.

Here we still have induced linear mappings from V to W∗, and from W to V∗. It may happen that these mappings are isomorphisms; assuming finite dimensions, if one is an isomorphism, the other must be. When this occurs, B is said to be a perfect pairing.

In finite dimensions, this is equivalent to the pairing being nondegenerate (the spaces necessarily having the same dimensions). For modules (instead of vector spaces), just as how a nondegenerate form is weaker than a unimodular form, a nondegenerate pairing is a weaker notion than a perfect pairing. A pairing can be nondegenerate without being a perfect pairing, for instance Z × Z → Z via (x, y) ↦ 2xy is nondegenerate, but induces multiplication by 2 on the map Z → Z∗.

Terminology varies in coverage of bilinear forms. For example, F. Reese Harvey discusses "eight types of inner product". [6] To define them he uses diagonal matrices Aij having only +1 or −1 for non-zero elements. Some of the "inner products" are symplectic forms and some are sesquilinear forms or Hermitian forms. Rather than a general field K, the instances with real numbers R, complex numbers C, and quaternions H are spelled out. The bilinear form

${\displaystyle \sum _{k=1}^{p}x_{k}y_{k}-\sum _{k=p+1}^{n}x_{k}y_{k}}$

is called the real symmetric case and labeled R(p, q), where p + q = n. Then he articulates the connection to traditional terminology: [7]

> Some of the real symmetric cases are very important. The positive definite case R(n, 0) is called Euclidean space, while the case of a single minus, R(n−1, 1), is called Lorentzian space. If n = 4, then Lorentzian space is also called Minkowski space or Minkowski spacetime. The special case R(p, p) will be referred to as the split-case.

## Relation to tensor products

By the universal property of the tensor product, there is a canonical correspondence between bilinear forms on V and linear maps V ⊗ V → K. If B is a bilinear form on V the corresponding linear map is given by

v ⊗ w ↦ B(v, w)

In the other direction, if F : V ⊗ V → K is a linear map the corresponding bilinear form is given by composing F with the bilinear map V × V → V ⊗ V that sends (v, w) to v ⊗ w.

The set of all linear maps V ⊗ V → K is the dual space of V ⊗ V, so bilinear forms may be thought of as elements of (V ⊗ V)∗ which (when V is finite-dimensional) is canonically isomorphic to V∗ ⊗ V∗.

Likewise, symmetric bilinear forms may be thought of as elements of Sym2(V∗) (the second symmetric power of V∗), and alternating bilinear forms as elements of Λ2V∗ (the second exterior power of V∗).

## On normed vector spaces

Definition: A bilinear form on a normed vector space (V, ‖·‖) is bounded if there is a constant C such that for all u, v ∈ V,

${\displaystyle B(\mathbf {u} ,\mathbf {v} )\leq C\left\|\mathbf {u} \right\|\left\|\mathbf {v} \right\|.}$

Definition: A bilinear form on a normed vector space (V, ‖·‖) is elliptic, or coercive, if there is a constant c > 0 such that for all u ∈ V,

${\displaystyle B(\mathbf {u} ,\mathbf {u} )\geq c\left\|\mathbf {u} \right\|^{2}.}$
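As an illustration (our own 2 × 2 example), the symmetric form with matrix [[2, 1], [1, 2]] on R² is coercive with c = 1, since B(u, u) = u₁² + u₂² + (u₁ + u₂)² ≥ ‖u‖²; its eigenvalues 1 and 3 give the coercivity and diagonal bounds. A plain-Python spot check:

```python
# Coercivity spot check: B(u, u) >= c * ||u||^2 with c = 1 for A = [[2,1],[1,2]].
def bilinear(A, x, y):
    n = len(A)
    return sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(n))

A = [[2.0, 1.0], [1.0, 2.0]]
norm_sq = lambda u: sum(a * a for a in u)

for u in ([1.0, 0.0], [1.0, -1.0], [3.0, 2.0], [-0.5, 4.0]):
    assert bilinear(A, u, u) >= 1 * norm_sq(u)   # coercive, c = 1
    assert bilinear(A, u, u) <= 3 * norm_sq(u)   # bounded on the diagonal, C = 3
```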

## Generalization to modules

Given a ring R, a right R-module M, and its dual module M∗, a mapping B : M∗ × M → R is called a bilinear form if

B(u + v, x) = B(u, x) + B(v, x)
B(u, x + y) = B(u, x) + B(u, y)
B(αu, xβ) = αB(u, x)β

for all u, v ∈ M∗, all x, y ∈ M and all α, β ∈ R.

The mapping ⟨·,·⟩ : M∗ × M → R : (u, x) ↦ ⟨u, x⟩ = u(x) is known as the natural pairing, also called the canonical bilinear form on M∗ × M. [8]

A linear map S : M∗ → M∗ : u ↦ S(u) induces the bilinear form B : M∗ × M → R : (u, x) ↦ ⟨S(u), x⟩, and a linear map T : M → M : x ↦ T(x) induces the bilinear form B : M∗ × M → R : (u, x) ↦ ⟨u, T(x)⟩.

Conversely, a bilinear form B : M∗ × M → R induces the R-linear maps S : M∗ → M∗ : u ↦ (x ↦ B(u, x)) and T : M → M∗∗ : x ↦ (u ↦ B(u, x)). Here, M∗∗ denotes the double dual of M.

## Citations

1. "Chapter 3. Bilinear forms — Lecture notes for MA1212" (PDF). 2021-01-16.
2. Jacobson 2009, p. 346.
3. Zhelobenko 2006, p. 11.
4. Adkins & Weintraub 1992, p. 359.
5. Harvey 1990, p. 22.
6. Harvey 1990, p. 23.
7. Bourbaki 1970, p. 233.
