Orthogonality (mathematics)

In mathematics, orthogonality is the generalization of the geometric notion of perpendicularity to the linear algebra of bilinear forms.

Two elements u and v of a vector space with bilinear form B are orthogonal when B(u, v) = 0. Depending on the bilinear form, the vector space may contain non-zero self-orthogonal vectors. In the case of function spaces, families of orthogonal functions are used to form an orthogonal basis.

The concept has been used in the context of orthogonal functions, orthogonal polynomials, and combinatorics.

Orthogonality and rotation of coordinate systems compared between left: Euclidean space through circular angle ϕ, right: in Minkowski spacetime through hyperbolic angle ϕ (red lines labelled c denote the worldlines of a light signal; a vector is orthogonal to itself if it lies on this line).

Definitions

A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal. Such a set is called an orthogonal set.

In certain cases, the word normal is used to mean orthogonal, particularly in the geometric sense as in the normal to a surface. For example, the y-axis is normal to the curve y = x² at the origin. However, normal may also refer to the magnitude of a vector. In particular, a set is called orthonormal (orthogonal plus normal) if it is an orthogonal set of unit vectors. As a result, use of the term normal to mean "orthogonal" is often avoided. The word "normal" also has a different meaning in probability and statistics.

A vector space with a bilinear form generalizes the case of an inner product. When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality. In the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given ϕ.
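As an illustrative sketch (not from the article), the Minkowski-style bilinear form B((x₁, t₁), (x₂, t₂)) = x₁x₂ − t₁t₂ on the plane exhibits both phenomena above: a non-zero vector on a light line is orthogonal to itself, and the rotated axes x′, t′ are hyperbolic-orthogonal:

```python
import math

# Minkowski-style bilinear form on the plane:
# B((x1, t1), (x2, t2)) = x1*x2 - t1*t2.
def B(u, v):
    return u[0] * v[0] - u[1] * v[1]

# A light-like vector (on the line x = t) is orthogonal to itself,
# even though it is non-zero.
light = (1, 1)
assert B(light, light) == 0

# Axes x' = (cosh a, sinh a) and t' = (sinh a, cosh a), obtained by a
# hyperbolic rotation through angle a, are hyperbolic-orthogonal.
a = 0.5
x_axis = (math.cosh(a), math.sinh(a))
t_axis = (math.sinh(a), math.cosh(a))
assert abs(B(x_axis, t_axis)) < 1e-12
```

The cancellation B(x′, t′) = cosh a · sinh a − sinh a · cosh a = 0 holds for every angle a, matching the diagram.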

Euclidean vector spaces

In Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90° (π/2 radians), or one of the vectors is zero. [4] Hence orthogonality of vectors is an extension of the concept of perpendicular vectors to spaces of any dimension.
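A minimal Python sketch of this criterion (illustrative only; the vectors are arbitrary examples), showing that the same dot-product test works in any dimension:

```python
# Two Euclidean vectors are orthogonal iff their dot product is zero.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = (1, 2, -1)
v = (3, -1, 1)           # dot(u, v) = 3 - 2 - 1 = 0
assert dot(u, v) == 0    # orthogonal in R^3

# The same test extends unchanged to higher dimensions, e.g. R^4:
assert dot((1, 0, 2, 0), (0, 5, 0, -3)) == 0
```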

The orthogonal complement of a subspace is the space of all vectors that are orthogonal to every vector in the subspace. In a three-dimensional Euclidean vector space, the orthogonal complement of a line through the origin is the plane through the origin perpendicular to it, and vice versa. [5]

Note that the geometric concept of two planes being perpendicular does not correspond to the orthogonal complement, since in three dimensions a pair of vectors, one from each of a pair of perpendicular planes, might meet at any angle.

In four-dimensional Euclidean space, the orthogonal complement of a line is a hyperplane and vice versa, and that of a plane is a plane. [5]
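The complement of a line through the origin can be tested directly: a vector belongs to it exactly when it is orthogonal to the line's direction vector. A minimal sketch (the helper name and example vectors are illustrative assumptions):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def in_complement_of_line(v, direction, tol=1e-12):
    """True if v lies in the orthogonal complement of span(direction)."""
    return abs(dot(v, direction)) <= tol

# In R^3, the complement of the line spanned by (0, 0, 1) is the xy-plane.
assert in_complement_of_line((3, -2, 0), (0, 0, 1))
assert not in_complement_of_line((0, 0, 5), (0, 0, 1))

# In R^4, the same test carves out a 3-dimensional hyperplane.
assert in_complement_of_line((1, 2, 3, 0), (0, 0, 0, 1))
```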

Orthogonal functions

By using integral calculus, it is common to use the following to define the inner product of two functions f and g with respect to a nonnegative weight function w over an interval [a, b]:

⟨f, g⟩_w = ∫_a^b f(x) g(x) w(x) dx.

In simple cases, w(x) = 1.

We say that functions f and g are orthogonal if their inner product (equivalently, the value of this integral) is zero:

⟨f, g⟩_w = 0.

Orthogonality of two functions with respect to one inner product does not imply orthogonality with respect to another inner product.

We write the norm with respect to this inner product as

‖f‖_w = √⟨f, f⟩_w.

The members of a set of functions {f_i : i = 1, 2, 3, …} are orthogonal with respect to w on the interval [a, b] if

⟨f_i, f_j⟩_w = 0 whenever i ≠ j.

The members of such a set of functions are orthonormal with respect to w on the interval [a, b] if

⟨f_i, f_j⟩_w = δ_ij,

where

δ_ij = 1 if i = j, and δ_ij = 0 if i ≠ j,

is the Kronecker delta.

In other words, every pair of them (excluding pairing of a function with itself) is orthogonal, and the norm of each is 1. See in particular the orthogonal polynomials.
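These integral conditions can be checked numerically. A minimal sketch (the midpoint-sum quadrature and the choice of sin(x), sin(2x) on [0, 2π] with w(x) = 1 are illustrative assumptions, not from the article):

```python
import math

def inner(f, g, a, b, n=10_000):
    """Approximate <f, g> with weight w(x) = 1 by a midpoint Riemann sum."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

# sin(x) and sin(2x) are orthogonal on [0, 2*pi].
ip = inner(math.sin, lambda x: math.sin(2 * x), 0, 2 * math.pi)
assert abs(ip) < 1e-6

# The norm of sin on [0, 2*pi] is sqrt(pi), so sin(x)/sqrt(pi) has norm 1,
# i.e. normalizing each member of an orthogonal set gives an orthonormal set.
norm = math.sqrt(inner(math.sin, math.sin, 0, 2 * math.pi))
assert abs(norm - math.sqrt(math.pi)) < 1e-6
```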

Examples

Orthogonal polynomials

Various polynomial sequences named for mathematicians of the past are sequences of orthogonal polynomials. In particular, the Hermite, Laguerre, Legendre, Chebyshev, and Jacobi polynomials are classical examples.

Combinatorics

In combinatorics, two Latin squares are said to be orthogonal if their superimposition yields all possible combinations of entries. [6]
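This superimposition condition is easy to test by machine. A minimal sketch (the function name and the particular order-3 squares are illustrative choices):

```python
def are_orthogonal(A, B):
    """Two n x n Latin squares are orthogonal iff superimposing them
    yields all n^2 ordered pairs of entries, each exactly once."""
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

A = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]
B = [[0, 1, 2],
     [2, 0, 1],
     [1, 2, 0]]
assert are_orthogonal(A, B)       # all 9 ordered pairs occur
assert not are_orthogonal(A, A)   # only the 3 pairs (i, i) occur
```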

Completely orthogonal

Two flat planes A and B of a Euclidean four-dimensional space are called completely orthogonal if and only if every line in A is orthogonal to every line in B. [7] In that case the planes A and B intersect at a single point O, so that if a line in A intersects a line in B, they intersect at O. A and B are perpendicular and Clifford parallel.

In 4-dimensional space we can construct 4 perpendicular axes and 6 perpendicular planes through a point. Without loss of generality, we may take these to be the axes and orthogonal central planes of a Cartesian coordinate system with axes w, x, y, z. In 4 dimensions we have the same 3 orthogonal planes xy, xz, yz that we have in 3 dimensions, and also 3 others: wx, wy, wz. Each of the 6 orthogonal planes shares an axis with 4 of the others, and is completely orthogonal to just one of the others: the only one with which it does not share an axis. Thus there are 3 pairs of completely orthogonal planes: xy and wz intersect only at the origin; xz and wy intersect only at the origin; yz and wx intersect only at the origin.

More generally, two flat subspaces S₁ and S₂ of dimensions M and N of a Euclidean space S of at least M + N dimensions are called completely orthogonal if every line in S₁ is orthogonal to every line in S₂. If dim(S) = M + N, then S₁ and S₂ intersect at a single point O. If dim(S) > M + N, then S₁ and S₂ may or may not intersect; if they intersect, then a line in S₁ and a line in S₂ may or may not intersect, and if they do, they intersect at O. [8]
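Complete orthogonality of coordinate planes can be verified directly from the dot product: every vector in one plane is orthogonal to every vector in the other. A minimal sketch using the xy- and zw-coordinate planes of R⁴ (the sampling approach is an illustrative assumption):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# In R^4, any vector in the xy-plane has the form (a, b, 0, 0) and any
# vector in the zw-plane has the form (0, 0, c, d); their dot product is
# identically zero, so the two planes are completely orthogonal.
random.seed(0)
for _ in range(100):
    u = (random.uniform(-1, 1), random.uniform(-1, 1), 0.0, 0.0)
    v = (0.0, 0.0, random.uniform(-1, 1), random.uniform(-1, 1))
    assert dot(u, v) == 0.0

# By contrast, the xy- and xz-planes share the x-axis, so they are
# merely perpendicular, not completely orthogonal:
shared = (1, 0, 0, 0)   # lies in both planes
assert dot(shared, shared) != 0
```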

See also

References

  1. J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 58. ISBN 0-7167-0344-0.
  2. "Wolfram MathWorld".
  3. Bourbaki, "ch. II §2.4", Algebra I, p. 234.
  4. Trefethen, Lloyd N. & Bau, David (1997). Numerical Linear Algebra. SIAM. p. 13. ISBN 978-0-89871-361-9.
  5. R. Penrose (2007). The Road to Reality. Vintage Books. pp. 417–419. ISBN 978-0-679-77631-4.
  6. Hedayat, A.; et al. (1999). Orthogonal Arrays: Theory and Applications. Springer. p. 168. ISBN 978-0-387-98766-8.
  7. Coxeter, H.S.M. (1973) [1948]. Regular Polytopes (3rd ed.). New York: Dover. p. 124.
  8. P.H. Schoute: Mehrdimensionale Geometrie. Leipzig: G.J. Göschensche Verlagshandlung. Volume 1 (Sammlung Schubert XXXV): Die linearen Räume, 1902.