Orthogonality (mathematics)

In mathematics, orthogonality is the generalization of the geometric notion of perpendicularity to the linear algebra of bilinear forms.

Two elements u and v of a vector space with bilinear form B are orthogonal when B(u, v) = 0. Depending on the bilinear form, the vector space may contain null vectors, i.e. non-zero self-orthogonal vectors, in which case perpendicularity is replaced with hyperbolic orthogonality.

In the case of function spaces, families of functions are used to form an orthogonal basis, such as in the contexts of orthogonal polynomials, orthogonal functions, and combinatorics.

Orthogonality and rotation of coordinate systems compared between left: Euclidean space through circular angle ϕ, right: in Minkowski spacetime through hyperbolic angle ϕ (red lines labelled c denote the worldlines of a light signal, a vector is orthogonal to itself if it lies on this line).

Definitions

A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal. Such a set is called an orthogonal set.

In certain cases, the word normal is used to mean orthogonal, particularly in the geometric sense as in the normal to a surface. For example, the y-axis is normal to the curve y = x² at the origin. However, normal may also refer to the magnitude of a vector. In particular, a set is called orthonormal (orthogonal plus normal) if it is an orthogonal set of unit vectors. As a result, use of the term normal to mean "orthogonal" is often avoided. The word "normal" also has a different meaning in probability and statistics.

A vector space with a bilinear form generalizes the case of an inner product. When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality. In the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given hyperbolic angle ϕ.
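As a concrete illustration (a minimal sketch, not taken from the article), the Minkowski bilinear form B(u, v) = x₁x₂ − t₁t₂ on the (x, t) plane makes the boosted axes x′ and t′ hyperbolic-orthogonal, and makes a light-like vector orthogonal to itself:

```python
# Hyperbolic orthogonality under the Minkowski bilinear form B(u, v) = x1*x2 - t1*t2.
# A boost through hyperbolic angle phi maps the x- and t-axes to x'- and t'-axes
# that remain orthogonal with respect to B, even though they are not perpendicular
# in the ordinary Euclidean sense.
import math

def minkowski(u, v):
    """Bilinear form of signature (+, -) on the (x, t) plane."""
    return u[0] * v[0] - u[1] * v[1]

phi = 0.7  # an arbitrary hyperbolic angle
x_prime = (math.cosh(phi), math.sinh(phi))  # direction of the boosted x'-axis
t_prime = (math.sinh(phi), math.cosh(phi))  # direction of the boosted t'-axis

print(minkowski(x_prime, t_prime))          # 0.0 -> hyperbolic-orthogonal
print(minkowski((1.0, 1.0), (1.0, 1.0)))    # 0.0 -> a null vector is orthogonal to itself
```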

Euclidean vector spaces

In Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90° (π/2 radians), or one of the vectors is zero. [4] Hence orthogonality of vectors is an extension of the concept of perpendicular vectors to spaces of any dimension.
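For example (an illustrative sketch, not part of the source text), the dot-product test can be checked directly:

```python
# Two Euclidean vectors are orthogonal exactly when their dot product is zero.
import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([3.0, -1.0, 1.0])

dot = np.dot(u, v)             # 1*3 + 2*(-1) + (-1)*1 = 0
angle = np.degrees(np.arccos(dot / (np.linalg.norm(u) * np.linalg.norm(v))))

print(dot)    # 0.0
print(angle)  # 90.0
```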

The orthogonal complement of a subspace is the space of all vectors that are orthogonal to every vector in the subspace. In a three-dimensional Euclidean vector space, the orthogonal complement of a line through the origin is the plane through the origin perpendicular to it, and vice versa. [5]

Note that the geometric concept of two planes being perpendicular does not correspond to the orthogonal complement, since in three dimensions a pair of vectors, one from each of a pair of perpendicular planes, might meet at any angle.

In four-dimensional Euclidean space, the orthogonal complement of a line is a hyperplane and vice versa, and that of a plane is a plane. [5]
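A minimal numerical sketch (assuming NumPy and SciPy are available; not from the source): the orthogonal complement of the span of a set of vectors can be computed as the null space of the matrix whose rows are those vectors.

```python
# Orthogonal complement of a subspace spanned by given vectors,
# computed as the null space of the matrix whose rows are those vectors.
import numpy as np
from scipy.linalg import null_space

# A line through the origin in R^3, spanned by a single vector.
line = np.array([[1.0, 1.0, 0.0]])

complement = null_space(line)   # columns form an orthonormal basis of the complement
print(complement.shape)                      # (3, 2): the complement of a line in R^3 is a plane
print(np.allclose(line @ complement, 0.0))   # True: every basis vector is orthogonal to the line
```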

Orthogonal functions

By using integral calculus, it is common to use the following to define the inner product of two functions f and g with respect to a nonnegative weight function w over an interval [a, b]:

\langle f, g \rangle_w = \int_a^b f(x)\, g(x)\, w(x)\, dx

In simple cases, w(x) = 1.

We say that functions f and g are orthogonal if their inner product (equivalently, the value of this integral) is zero:

\langle f, g \rangle_w = 0

Orthogonality of two functions with respect to one inner product does not imply orthogonality with respect to another inner product.

We write the norm with respect to this inner product as

\| f \|_w = \sqrt{\langle f, f \rangle_w}

The members of a set of functions \{ f_i : i = 1, 2, 3, \ldots \} are orthogonal with respect to w on the interval [a, b] if

\langle f_i, f_j \rangle_w = 0 \quad (i \neq j)

The members of such a set of functions are orthonormal with respect to w on the interval [a, b] if

\langle f_i, f_j \rangle_w = \delta_{ij}

where

\delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}

is the Kronecker delta.

In other words, every pair of them (excluding pairing of a function with itself) is orthogonal, and the norm of each is 1. See in particular the orthogonal polynomials.
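As a worked illustration (a sketch assuming SciPy's numerical quadrature, not part of the article), the functions sin(nx) for n = 1, 2, 3 are orthogonal on [−π, π] with weight w(x) = 1, and become orthonormal after dividing by √π:

```python
# Numerically verify <f_i, f_j>_w = integral of f_i(x) f_j(x) w(x) dx over [a, b]
# for the family f_n(x) = sin(n x) / sqrt(pi) on [-pi, pi] with weight w(x) = 1.
import math
from scipy.integrate import quad

def inner(f, g, w=lambda x: 1.0, a=-math.pi, b=math.pi):
    value, _ = quad(lambda x: f(x) * g(x) * w(x), a, b)
    return value

f = [lambda x, n=n: math.sin(n * x) / math.sqrt(math.pi) for n in (1, 2, 3)]

for i in range(3):
    for j in range(3):
        print(i + 1, j + 1, round(inner(f[i], f[j]), 6))  # approximately the Kronecker delta
```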

Examples

Orthogonal polynomials

Various polynomial sequences named for mathematicians of the past are sequences of orthogonal polynomials. In particular, the Hermite polynomials are orthogonal with respect to the weight e^(−x²) on the whole real line, the Legendre polynomials with respect to the weight 1 on [−1, 1], the Laguerre polynomials with respect to e^(−x) on [0, ∞), and the Chebyshev polynomials with respect to 1/√(1 − x²) on [−1, 1].
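For instance (an illustrative sketch using NumPy's polynomial module and SciPy quadrature, not part of the source), the Legendre polynomials can be checked for pairwise orthogonality on [−1, 1]:

```python
# The Legendre polynomials P_m, P_n satisfy
#   integral_{-1}^{1} P_m(x) P_n(x) dx = 2 / (2n + 1) * delta_{mn}.
import numpy as np
from numpy.polynomial import legendre as L
from scipy.integrate import quad

def P(n):
    """Return the n-th Legendre polynomial as a callable."""
    coeffs = [0] * n + [1]            # select the n-th basis polynomial
    return lambda x: L.legval(x, coeffs)

for m in range(4):
    for n in range(4):
        value, _ = quad(lambda x: P(m)(x) * P(n)(x), -1.0, 1.0)
        expected = 2.0 / (2 * n + 1) if m == n else 0.0
        assert abs(value - expected) < 1e-10
print("Legendre polynomials are pairwise orthogonal on [-1, 1]")
```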

Combinatorics

In combinatorics, two Latin squares are said to be orthogonal if their superimposition yields all possible combinations of entries. [6]
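A small sketch (not from the source) of the superimposition test for two 3×3 Latin squares:

```python
# Two n x n Latin squares are orthogonal if superimposing them yields
# all n^2 possible ordered pairs of entries exactly once.
def are_orthogonal(A, B):
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

A = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]
B = [[0, 1, 2],
     [2, 0, 1],
     [1, 2, 0]]

print(are_orthogonal(A, B))  # True: all 9 ordered pairs occur
print(are_orthogonal(A, A))  # False: only 3 distinct pairs occur
```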

Completely orthogonal

Two flat planes A and B of a Euclidean four-dimensional space are called completely orthogonal if and only if every line in A is orthogonal to every line in B. [7] In that case the planes A and B intersect at a single point O, so that if a line in A intersects with a line in B, they intersect at O. A and B are perpendicular and Clifford parallel.

In 4-dimensional space we can construct 4 perpendicular axes and 6 perpendicular planes through a point. Without loss of generality, we may take these to be the axes and orthogonal central planes of a Cartesian coordinate system with axes w, x, y, z. In 4 dimensions we have the same 3 orthogonal planes (xy, xz, yz) that we have in 3 dimensions, and also 3 others (wx, wy, wz). Each of the 6 orthogonal planes shares an axis with 4 of the others, and is completely orthogonal to just one of the others: the only one with which it does not share an axis. Thus there are 3 pairs of completely orthogonal planes: xy and wz intersect only at the origin; xz and wy intersect only at the origin; yz and wx intersect only at the origin.

More generally, two flat subspaces S₁ and S₂ of dimensions M and N of a Euclidean space of at least M + N dimensions are called completely orthogonal if every line in S₁ is orthogonal to every line in S₂. If the dimension of the space is exactly M + N, then S₁ and S₂ intersect at a single point O. If the dimension of the space is greater than M + N, then S₁ and S₂ may or may not intersect. If S₁ and S₂ intersect at a point O, then a line in S₁ and a line in S₂ may or may not intersect; if they intersect, then they intersect at O. [8]
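A brief numerical sketch (not part of the source): the xy-plane and the wz-plane of R⁴ are completely orthogonal, since every vector of one has zero dot product with every vector of the other.

```python
# Completely orthogonal central planes in R^4: every vector of the xy-plane is
# orthogonal to every vector of the wz-plane, so the planes meet only at the origin.
import numpy as np

rng = np.random.default_rng(0)

def random_xy():    # a random vector lying in the xy central plane (w = z = 0)
    a, b = rng.standard_normal(2)
    return np.array([0.0, a, b, 0.0])

def random_wz():    # a random vector lying in the wz central plane (x = y = 0)
    c, d = rng.standard_normal(2)
    return np.array([c, 0.0, 0.0, d])

print(all(abs(np.dot(random_xy(), random_wz())) < 1e-12 for _ in range(1000)))  # True
```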

See also

Related Research Articles

In mathematics, a geometric algebra is an algebra that can represent and manipulate geometrical objects such as vectors. Geometric algebra is built out of two fundamental operations, addition and the geometric product. Multiplication of vectors results in higher-dimensional objects called multivectors. Compared to other formalisms for manipulating geometric objects, geometric algebra is noteworthy for supporting vector division and addition of objects of different dimensions.

Inner product space – Generalization of the dot product; used to define Hilbert spaces

In mathematics, an inner product space is a real vector space or a complex vector space with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets such as in ⟨a, b⟩. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or scalar product of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.

Linear algebra – Branch of mathematics

Linear algebra is the branch of mathematics concerning linear equations such as a₁x₁ + ⋯ + aₙxₙ = b, linear maps such as (x₁, …, xₙ) ↦ a₁x₁ + ⋯ + aₙxₙ, and their representations in vector spaces and through matrices.

Vector space – Algebraic structure in linear algebra

In mathematics and physics, a vector space is a set whose elements, often called vectors, can be added together and multiplied ("scaled") by numbers called scalars. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. Real vector spaces and complex vector spaces are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers. Scalars can also be, more generally, elements of any field.

The Cauchy–Schwarz inequality is an upper bound on the inner product between two vectors in an inner product space in terms of the product of the vector norms. It is considered one of the most important and widely used inequalities in mathematics.
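A quick numeric illustration (not from the source) of the inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖:

```python
# The Cauchy-Schwarz inequality: |<u, v>| <= ||u|| * ||v|| for any two vectors,
# with equality exactly when the vectors are linearly dependent.
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)

print(abs(np.dot(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v))   # True
print(np.isclose(abs(np.dot(u, 3 * u)),
                 np.linalg.norm(u) * np.linalg.norm(3 * u)))        # True: equality for parallel vectors
```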

Gram–Schmidt process – Orthonormalization of a set of vectors

In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process or Gram–Schmidt algorithm is a procedure for turning a set of linearly independent vectors into a set of mutually orthogonal (or orthonormal) vectors that spans the same subspace.
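A minimal sketch of the classical Gram–Schmidt iteration (an illustration, not code from the article):

```python
# Classical Gram-Schmidt: orthonormalize a list of linearly independent vectors.
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, q) * q for q in basis)  # remove components along earlier vectors
        basis.append(w / np.linalg.norm(w))           # normalize to unit length
    return np.array(basis)

Q = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0]),
                  np.array([0.0, 1.0, 1.0])])
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: the rows are orthonormal
```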

In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized. This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.

In mathematics, a quadric or quadric surface (quadric hypersurface in higher dimensions), is a generalization of conic sections (ellipses, parabolas, and hyperbolas). It is a hypersurface (of dimension D) in a (D + 1)-dimensional space, and it is defined as the zero set of an irreducible polynomial of degree two in D + 1 variables; for example, D = 1 in the case of conic sections. When the defining polynomial is not absolutely irreducible, the zero set is generally not considered a quadric, although it is often called a degenerate quadric or a reducible quadric.

Normal (geometry) – Line or vector perpendicular to a curve or a surface

In geometry, a normal is an object that is perpendicular to a given object. For example, the normal line to a plane curve at a given point is the line perpendicular to the tangent line to the curve at the point.

In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal unit vectors. A unit vector means that the vector has a length of 1, which is also known as normalized. Orthogonal means that the vectors are all perpendicular to each other. A set of vectors form an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis.

In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other. For example, the standard basis for a Euclidean space is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection is also orthonormal, and every orthonormal basis for a Euclidean space arises in this fashion. An orthonormal basis can be derived from an orthogonal basis via normalization. The choice of an origin and an orthonormal basis forms a coordinate frame known as an orthonormal frame.

In mathematics, the Hodge star operator or Hodge star is a linear map defined on the exterior algebra of a finite-dimensional oriented vector space endowed with a nondegenerate symmetric bilinear form. Applying the operator to an element of the algebra produces the Hodge dual of the element. This map was introduced by W. V. D. Hodge.

Projection (linear algebra) – Idempotent linear transformation from a vector space to itself

In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself such that P ∘ P = P. That is, whenever P is applied twice to any vector, it gives the same result as if it were applied once. It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
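For illustration (a sketch, not from the source), an orthogonal projection onto a line satisfies P² = P:

```python
# Orthogonal projection onto the line spanned by u: P = u u^T / (u^T u).
# P is idempotent (P @ P == P) and symmetric, and I - P projects onto the
# orthogonal complement of the line.
import numpy as np

u = np.array([[1.0], [2.0], [2.0]])
P = (u @ u.T) / (u.T @ u)

print(np.allclose(P @ P, P))                   # True: applying twice equals applying once
print(np.allclose(P @ u, u))                   # True: vectors on the line are unchanged
print(np.allclose((np.eye(3) - P) @ u, 0.0))   # True: the complement projector annihilates u
```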

Frenet–Serret formulas – Formulas in differential geometry

In differential geometry, the Frenet–Serret formulas describe the kinematic properties of a particle moving along a differentiable curve in three-dimensional Euclidean space or the geometric properties of the curve itself irrespective of any motion. More specifically, the formulas describe the derivatives of the so-called tangent, normal, and binormal unit vectors in terms of each other. The formulas are named after the two French mathematicians who independently discovered them: Jean Frédéric Frenet, in his thesis of 1847, and Joseph Alfred Serret, in 1851. Vector notation and linear algebra currently used to write these formulas were not yet available at the time of their discovery.

In the mathematical fields of linear algebra and functional analysis, the orthogonal complement of a subspace W of a vector space V equipped with a bilinear form is the set of all vectors in V that are orthogonal to every vector in W. Informally, it is called the perp, short for perpendicular complement. It is a subspace of V.

In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. Frames are used in error detection and correction and the design and analysis of filter banks and more generally in applied mathematics, computer science, and engineering.

In the mathematical discipline of functional analysis, the concept of a compact operator on Hilbert space is an extension of the concept of a matrix acting on a finite-dimensional vector space; in Hilbert space, compact operators are precisely the closure of finite-rank operators in the topology induced by the operator norm. As such, results from matrix theory can sometimes be extended to compact operators using similar arguments. By contrast, the study of general operators on infinite-dimensional spaces often requires a genuinely different approach.

Hilbert space – Type of topological vector space

In mathematics, Hilbert spaces allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that induces a distance function for which the space is a complete metric space. A Hilbert space is a special case of a Banach space.

The concept of angles between lines, between two planes or between a line and a plane can be generalized to arbitrary dimensions. This generalization was first discussed by Camille Jordan. For any pair of flats in a Euclidean space of arbitrary dimension one can define a set of mutual angles which are invariant under isometric transformation of the Euclidean space. If the flats do not intersect, their shortest distance is one more invariant. These angles are called canonical or principal. The concept of angles can be generalized to pairs of flats in a finite-dimensional inner product space over the complex numbers.

This is a glossary of terminology in the mathematical field of functional analysis.

References

1. J. A. Wheeler; C. Misner; K. S. Thorne (1973). Gravitation. W. H. Freeman & Co. p. 58. ISBN 0-7167-0344-0.
2. "Wolfram MathWorld".
3. Bourbaki, "ch. II §2.4", Algebra I, p. 234.
4. Trefethen, Lloyd N. & Bau, David (1997). Numerical Linear Algebra. SIAM. p. 13. ISBN 978-0-89871-361-9.
5. R. Penrose (2007). The Road to Reality. Vintage Books. pp. 417–419. ISBN 978-0-679-77631-4.
6. Hedayat, A.; et al. (1999). Orthogonal Arrays: Theory and Applications. Springer. p. 168. ISBN 978-0-387-98766-8.
7. Coxeter, H. S. M. (1973) [1948]. Regular Polytopes (3rd ed.). New York: Dover. p. 124.
8. P. H. Schoute: Mehrdimensionale Geometrie. Leipzig: G. J. Göschensche Verlagshandlung. Volume 1 (Sammlung Schubert XXXV): Die linearen Räume, 1902.[page needed]