Orthonormal basis

In mathematics, particularly linear algebra, an orthonormal basis for an inner product space $V$ with finite dimension is a basis for $V$ whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other. [1] [2] [3] For example, the standard basis for a Euclidean space $\mathbb{R}^n$ is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for $\mathbb{R}^n$ arises in this fashion.
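
As a quick numerical illustration (a minimal NumPy sketch, not from the article; the matrix $Q$ and the dimension $n = 4$ are arbitrary choices):

```python
import numpy as np

# The image of the standard basis under an orthogonal transformation
# is again an orthonormal basis of R^n.
rng = np.random.default_rng(0)
n = 4

# Build a random orthogonal matrix Q by QR-factoring a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# The columns of Q are the images Q e_1, ..., Q e_n of the standard basis.
# Orthonormality means Q^T Q = I (all pairwise dot products are delta_ij).
assert np.allclose(Q.T @ Q, np.eye(n))
```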

For a general inner product space $V$, an orthonormal basis can be used to define normalized orthogonal coordinates on $V$. Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of a finite-dimensional inner product space to the study of $\mathbb{R}^n$ under the dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary basis using the Gram–Schmidt process.
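
The Gram–Schmidt process is easy to sketch in code. The following is a minimal NumPy implementation for vectors in $\mathbb{R}^n$ under the dot product; the function name gram_schmidt and the tolerance 1e-12 are illustrative choices:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors
    (classical Gram-Schmidt with respect to the dot product)."""
    basis = []
    for v in vectors:
        # Subtract the projections onto the already-built orthonormal vectors.
        w = v - sum(np.dot(v, e) * e for e in basis)
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("input vectors are linearly dependent")
        basis.append(w / norm)
    return basis

# Example: orthonormalize a basis of R^3.
e1, e2, e3 = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                           np.array([1.0, 0.0, 1.0]),
                           np.array([0.0, 1.0, 1.0])])
assert np.allclose([e1 @ e1, e2 @ e2, e1 @ e2], [1.0, 1.0, 0.0])
```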

In functional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional) inner product spaces. [4] Given a pre-Hilbert space $H$, an orthonormal basis for $H$ is an orthonormal set of vectors with the property that every vector in $H$ can be written as an infinite linear combination of the vectors in the basis. In this case, the orthonormal basis is sometimes called a Hilbert basis for $H$. Note that an orthonormal basis in this sense is not generally a Hamel basis, since infinite linear combinations are required. [5] Specifically, the linear span of the basis must be dense in $H$, although not necessarily the entire space.

If we go on to Hilbert spaces, a non-orthonormal set of vectors having the same linear span as an orthonormal basis may not be a basis at all. For instance, any square-integrable function on the interval $[-1, 1]$ can be expressed (almost everywhere) as an infinite sum of Legendre polynomials (which, suitably normalized, form an orthonormal basis), but not necessarily as an infinite sum of the monomials $x^n$.
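
As a numerical sketch of such an expansion (assuming NumPy's numpy.polynomial.legendre module; the target function $f(x) = |x|$ and the truncation degree are arbitrary illustrative choices):

```python
import numpy as np
from numpy.polynomial import legendre

# Expand f(x) = |x| on [-1, 1] in Legendre polynomials P_n.
# The coefficients are c_n = (2n + 1)/2 * integral of f(x) P_n(x) over [-1, 1],
# computed here with Gauss-Legendre quadrature.
f = np.abs
deg = 40
x, w = legendre.leggauss(64)   # quadrature nodes and weights

coeffs = np.array([
    (2 * n + 1) / 2 * np.sum(w * f(x) * legendre.legval(x, [0] * n + [1]))
    for n in range(deg + 1)
])

# The truncated Legendre series approximates f (here measured pointwise).
t = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(legendre.legval(t, coeffs) - f(t)))
print(f"max pointwise error of the degree-{deg} truncation: {err:.2e}")
```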

A different generalisation is to pseudo-inner product spaces, finite-dimensional vector spaces $M$ equipped with a non-degenerate symmetric bilinear form $\langle \cdot, \cdot \rangle$ known as the metric tensor. In an orthonormal basis for such a space, the metric takes the form $\mathrm{diag}(+1, \dots, +1, -1, \dots, -1)$ with $p$ positive ones and $q$ negative ones. A standard example is Minkowski space, with signature $(p, q) = (1, 3)$ or $(3, 1)$ depending on convention.
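
A minimal numerical sketch, assuming the Minkowski sign convention $\mathrm{diag}(+1, -1, -1, -1)$ (the opposite convention is equally common):

```python
import numpy as np

# In Minkowski space the metric tensor in an orthonormal basis is
# diag(+1, -1, -1, -1): one timelike and three spacelike directions.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def pseudo_inner(u, v):
    return u @ eta @ v

# The standard basis vectors e_i satisfy <e_i, e_j> = +/- delta_ij.
E = np.eye(4)
gram = np.array([[pseudo_inner(E[i], E[j]) for j in range(4)]
                 for i in range(4)])
assert np.allclose(gram, eta)
```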

Examples

- The standard basis $\{e_1, \dots, e_n\}$ of $\mathbb{R}^n$, where $e_i$ has a $1$ in the $i$-th coordinate and $0$ elsewhere, is an orthonormal basis for the dot product.
- The pair $\tfrac{1}{\sqrt{2}}(1, 1)$ and $\tfrac{1}{\sqrt{2}}(1, -1)$ is an orthonormal basis of $\mathbb{R}^2$.
- The functions $f_n(x) = e^{inx}/\sqrt{2\pi}$, $n \in \mathbb{Z}$, form an orthonormal basis of the complex space $L^2([0, 2\pi])$.

Basic formula

If $B$ is an orthogonal basis of $H$, then every element $x \in H$ may be written as

$$x = \sum_{b \in B} \frac{\langle x, b \rangle}{\|b\|^2} b.$$

When $B$ is orthonormal, this simplifies to

$$x = \sum_{b \in B} \langle x, b \rangle b$$

and the square of the norm of $x$ can be given by

$$\|x\|^2 = \sum_{b \in B} |\langle x, b \rangle|^2.$$

Even if $B$ is uncountable, only countably many terms in this sum will be non-zero, and the expression is therefore well-defined. This sum is also called the Fourier expansion of $x$, and the formula is usually known as Parseval's identity.
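
These formulas are easy to verify numerically in the finite-dimensional case. A minimal NumPy sketch, taking the columns of a random orthogonal matrix as the orthonormal basis:

```python
import numpy as np

# Check the expansion x = sum <x, b> b and Parseval's identity
# for an orthonormal basis of R^n (columns of an orthogonal matrix).
rng = np.random.default_rng(1)
n = 5
B, _ = np.linalg.qr(rng.standard_normal((n, n)))  # columns = orthonormal basis
x = rng.standard_normal(n)

coeffs = B.T @ x                             # Fourier coefficients <x, b_i>
assert np.allclose(B @ coeffs, x)            # x = sum_i <x, b_i> b_i
assert np.isclose(np.sum(coeffs**2), x @ x)  # Parseval's identity
```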

If $B$ is an orthonormal basis of $H$, then $H$ is isomorphic to $\ell^2(B)$ in the following sense: there exists a bijective linear map $\Phi : H \to \ell^2(B)$ such that

$$\langle \Phi(x), \Phi(y) \rangle = \langle x, y \rangle \quad \text{for all } x, y \in H.$$

Incomplete orthogonal sets

Given a Hilbert space $H$ and a set $S$ of mutually orthogonal vectors in $H$, we can take the smallest closed linear subspace $V$ of $H$ containing $S$. Then $S$ will be an orthogonal basis of $V$, which may of course be smaller than $H$ itself, $S$ being an incomplete orthogonal set, or be $H$ when it is a complete orthogonal set.
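
A small finite-dimensional sketch of this situation (NumPy; the set S and the vector x are arbitrary illustrative choices):

```python
import numpy as np

# In R^3, take the incomplete orthonormal set S = {e_1, e_2}.  The projection
# of x onto V = span(S) is sum_{b in S} <x, b> b; x itself is recovered only
# when the set is complete.
S = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

def project(x, basis):
    return sum((x @ b) * b for b in basis)

x = np.array([2.0, -1.0, 3.0])
assert np.allclose(project(x, S), [2.0, -1.0, 0.0])  # loses the e_3 component
```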

Existence

Using Zorn's lemma and the Gram–Schmidt process (or more simply, well-ordering and transfinite recursion), one can show that every Hilbert space admits an orthonormal basis; [6] furthermore, any two orthonormal bases of the same space have the same cardinality (this can be proven in a manner akin to the proof of the usual dimension theorem for vector spaces, with separate cases depending on whether the larger basis candidate is countable or not). A Hilbert space is separable if and only if it admits a countable orthonormal basis. (This last statement can be proven without using the axiom of choice.)

Choice of basis as a choice of isomorphism

For concreteness we discuss orthonormal bases for a real, $n$-dimensional vector space $V$ with a positive definite symmetric bilinear form $\phi = \langle \cdot, \cdot \rangle$.

One way to view an orthonormal basis with respect to $\phi$ is as a set of vectors $\mathcal{B} = \{e_i\}$, which allow us to write $v = v^i e_i$ for every $v \in V$, with components $v^i \in \mathbb{R}$ (summation over repeated indices implied). With respect to this basis, the components of $\phi$ are particularly simple: $\phi(e_i, e_j) = \delta_{ij}$ (where $\delta_{ij}$ is the Kronecker delta).

We can now view the basis as a map $\psi_{\mathcal{B}} : V \to \mathbb{R}^n$ which is an isomorphism of inner product spaces: to make this more explicit we can write

$$\psi_{\mathcal{B}} : (V, \phi) \to (\mathbb{R}^n, \delta_{ij}).$$

Explicitly we can write $(\psi_{\mathcal{B}}(v))^i = e^i(v) = \phi(e_i, v)$, where $e^i$ is the dual basis element to $e_i$.

The inverse is the component map

$$C_{\mathcal{B}} : \mathbb{R}^n \to V, \qquad (v^i) \mapsto \sum_{i=1}^{n} v^i e_i.$$

These definitions make it manifest that there is a bijection

$$\{\text{orthonormal bases for } (V, \phi)\} \leftrightarrow \{\text{isomorphisms of inner product spaces } \mathbb{R}^n \to V\}.$$
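
A minimal numerical sketch of this correspondence, modeling $(V, \phi)$ as $\mathbb{R}^3$ with the dot product and encoding a basis as the columns of an orthogonal matrix (the names psi and C below mirror $\psi_{\mathcal{B}}$ and $C_{\mathcal{B}}$):

```python
import numpy as np

# An orthonormal basis of (V, phi) gives an isomorphism with R^n.
rng = np.random.default_rng(2)
B, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # columns = orthonormal basis

psi = lambda v: B.T @ v   # psi_B(v)^i = phi(e_i, v): coordinates of v
C   = lambda c: B @ c     # C_B: components -> vector, inverse of psi_B

v, w = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(C(psi(v)), v)           # C_B inverts psi_B
assert np.isclose(psi(v) @ psi(w), v @ w)  # inner product preserved
```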

The space of isomorphisms admits actions of orthogonal groups on either the $V$ side or the $\mathbb{R}^n$ side. For concreteness we fix the isomorphisms to point in the direction $\mathbb{R}^n \to V$, and consider the space of such maps, $\mathrm{Iso}(\mathbb{R}^n \to V)$.

This space admits a left action by the group of isometries of $V$, that is, maps $R \in \mathrm{GL}(V)$ with $\phi(\cdot, \cdot) = \phi(R\,\cdot, R\,\cdot)$, with the action given by composition: $R * C = R \circ C$.

This space also admits a right action by the group of isometries of $\mathbb{R}^n$, that is, matrices $R \in \mathrm{O}(n) \subset \mathrm{Mat}_{n \times n}(\mathbb{R})$, with the action again given by composition: $C * R = C \circ R$.
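
Both actions are easy to exhibit numerically under the same finite-dimensional model as above (a sketch; all matrices here are random orthogonal matrices built via QR):

```python
import numpy as np

# Two actions on maps C : R^n -> V, with V modeled as R^3 under the
# dot product and C given by a matrix with orthonormal columns.
rng = np.random.default_rng(3)
C,   _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an isometry R^n -> V
R_V, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an isometry of V
R_n, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an element of O(n)

left  = R_V @ C   # left action:  R * C = R o C
right = C @ R_n   # right action: C * R = C o R

# Both composites are again isometries: their columns stay orthonormal.
for M in (left, right):
    assert np.allclose(M.T @ M, np.eye(3))
```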

As a principal homogeneous space

The set of orthonormal bases for $\mathbb{R}^n$ with the standard inner product is a principal homogeneous space or G-torsor for the orthogonal group $G = \mathrm{O}(n)$, and is called the Stiefel manifold $V_n(\mathbb{R}^n)$ of orthonormal $n$-frames. [7]

In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given the space of orthonormal bases, there is no natural choice of orthonormal basis, but once one is given, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthonormal basis to any other orthonormal basis.
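
A short sketch of the torsor property in coordinates: given two orthonormal bases as the columns of orthogonal matrices B1 and B2 (illustrative names), the unique orthogonal map taking the first to the second is $Q = B_2 B_1^{\mathsf{T}}$:

```python
import numpy as np

# The orthogonal map taking basis B1 to basis B2 is Q = B2 B1^T
# (unique, since B1^{-1} = B1^T for orthogonal matrices).
rng = np.random.default_rng(4)
n = 4
B1, _ = np.linalg.qr(rng.standard_normal((n, n)))
B2, _ = np.linalg.qr(rng.standard_normal((n, n)))

Q = B2 @ B1.T
assert np.allclose(Q @ B1, B2)          # Q sends each b1_i to b2_i
assert np.allclose(Q.T @ Q, np.eye(n))  # Q is orthogonal
```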

The other Stiefel manifolds $V_k(\mathbb{R}^n)$ for $k < n$ of incomplete orthonormal bases (orthonormal $k$-frames) are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any $k$-frame can be taken to any other $k$-frame by an orthogonal map, but this map is not uniquely determined.

References

  1. Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
  2. Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
  3. Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
  4. Rudin, Walter (1987). Real and Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1.
  5. Roman, Steven (2008). Advanced Linear Algebra (3rd ed.). Springer. p. 218, ch. 9.
  6. Rynne, Bryan; Youngson, M. A. Linear Functional Analysis. Springer. p. 79.
  7. "CU Faculty". engfac.cooper.edu. Retrieved 2021-04-15.