Operator (mathematics)

In mathematics, an operator is generally a mapping or function that acts on elements of a space to produce elements of another space (possibly, and sometimes required to be, the same space). There is no general definition of an operator, but the term is often used in place of function when the domain is a set of functions or other structured objects. Also, the domain of an operator is often difficult to characterize explicitly (for example, in the case of an integral operator), and may be extended so as to act on related objects (an operator that acts on functions may also act on differential equations whose solutions are functions that satisfy the equation). See Operator (physics) for other examples.

The most basic operators are linear maps, which act on vector spaces. Linear operators refer to linear maps whose domain and range are the same space, for example from $\mathbb{R}^n$ to $\mathbb{R}^n$. [1] [2] [lower-alpha 1] Such operators often preserve properties, such as continuity. For example, differentiation and indefinite integration are linear operators; operators that are built from them are called differential operators, integral operators or integro-differential operators.

Operator is also used for denoting the symbol of a mathematical operation. This is related to the meaning of "operator" in computer programming; see Operator (computer programming).

Linear operators

The most common kind of operators encountered are linear operators. Let U and V be vector spaces over some field K. A mapping $A\colon U \to V$ is linear if

$$A(\alpha x + \beta y) = \alpha A x + \beta A y$$

for all x and y in U, and for all α, β in K.

This means that a linear operator preserves vector space operations, in the sense that it does not matter whether you apply the linear operator before or after the operations of addition and scalar multiplication. In more technical words, linear operators are morphisms between vector spaces.
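
As a standard illustration (not drawn from the cited sources), the differentiation operator $D = \tfrac{\mathrm{d}}{\mathrm{d}x}$ satisfies this identity on differentiable functions:

$$D(\alpha f + \beta g) = \frac{\mathrm{d}}{\mathrm{d}x}\bigl(\alpha f(x) + \beta g(x)\bigr) = \alpha \frac{\mathrm{d}f}{\mathrm{d}x} + \beta \frac{\mathrm{d}g}{\mathrm{d}x} = \alpha\, Df + \beta\, Dg,$$

so differentiation is a linear operator.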

In the finite-dimensional case linear operators can be represented by matrices in the following way. Let K be a field, and let U and V be finite-dimensional vector spaces over K. Let us select a basis $u_1, \ldots, u_n$ in U and $v_1, \ldots, v_m$ in V. Then let $x = x^i u_i$ be an arbitrary vector in U (assuming the Einstein summation convention), and let $A\colon U \to V$ be a linear operator. Then

$$Ax = x^i A u_i = x^i (A u_i)^j v_j .$$

Then $a_i^j := (A u_i)^j$, with $i = 1, \ldots, n$ and $j = 1, \ldots, m$, is the matrix form of the operator A in the fixed bases $(u_i)$ and $(v_j)$. The tensor $a_i^j$ does not depend on the choice of x, and $Ax = y$ if and only if $a_i^j x^i = y^j$. Thus in fixed bases n-by-m matrices are in bijective correspondence to linear operators from U to V.
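
Concretely, applying the operator amounts to matrix-vector multiplication, and linearity can be checked numerically. The following sketch (an illustration only; the matrix and vectors are arbitrary) represents an operator from $\mathbb{R}^3$ to $\mathbb{R}^2$ in the standard bases:

```python
import numpy as np

# A linear operator A : R^3 -> R^2, represented in the standard bases by a matrix.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

x = np.array([1.0, 0.5, -2.0])
y = np.array([0.0, 4.0, 1.0])
alpha, beta = 2.0, -3.0

# Applying the operator is matrix-vector multiplication.
lhs = A @ (alpha * x + beta * y)          # A(alpha*x + beta*y)
rhs = alpha * (A @ x) + beta * (A @ y)    # alpha*(Ax) + beta*(Ay)

print(np.allclose(lhs, rhs))              # True: linearity holds
```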

The important concepts directly related to operators between finite-dimensional vector spaces are those of rank, determinant, inverse operator, and eigenspace.

Linear operators also play a great role in the infinite-dimensional case. The concepts of rank and determinant cannot be extended to infinite-dimensional matrices. This is why very different techniques are employed when studying linear operators (and operators in general) in the infinite-dimensional case. The study of linear operators in the infinite-dimensional case is known as functional analysis (so called because various classes of functions form interesting examples of infinite-dimensional vector spaces).

Sequences of real numbers, or more generally sequences of vectors in any vector space, themselves form an infinite-dimensional vector space. The most important cases are sequences of real or complex numbers, and these spaces, together with linear subspaces, are known as sequence spaces. Operators on these spaces are known as sequence transformations.

Bounded linear operators on a Banach space form a Banach algebra with respect to the standard operator norm. The theory of Banach algebras develops a very general concept of spectra that elegantly generalizes the theory of eigenspaces.

Bounded operators

Let U and V be two vector spaces over the same ordered field (for example, $\mathbb{R}$), and let them be equipped with norms. Then a linear operator $A$ from U to V is called bounded if there exists $c > 0$ such that

$$\|Ax\|_V \leq c \, \|x\|_U$$
for every x in U. Bounded operators form a vector space. On this vector space we can introduce a norm that is compatible with the norms of U and V:

$$\|A\| = \inf\{\, c : \|Ax\|_V \leq c \, \|x\|_U \ \text{for all}\ x \in U \,\}.$$

In the case of operators from U to itself it can be shown that

$$\|AB\| \leq \|A\| \cdot \|B\| .$$ [lower-alpha 2]

Any complete unital normed algebra with this property is called a Banach algebra. It is possible to generalize spectral theory to such algebras. C*-algebras, which are Banach algebras with some additional structure, play an important role in quantum mechanics.
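
For matrices acting on a finite-dimensional Euclidean space, the operator norm induced by the Euclidean vector norm is the largest singular value, and the submultiplicative inequality above can be checked directly. A minimal numpy sketch (illustrative only; the matrices are random):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def op_norm(M):
    # Operator (spectral) norm induced by the Euclidean vector norm:
    # ||M|| = sup_{x != 0} ||Mx|| / ||x||, equal to the largest singular value of M.
    return np.linalg.norm(M, 2)

print(op_norm(A @ B) <= op_norm(A) * op_norm(B))   # True: submultiplicativity
```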

Examples

Analysis (calculus)

From the point of view of functional analysis, calculus is the study of two linear operators: the differential operator $\frac{\mathrm{d}}{\mathrm{d}t}$, and the Volterra operator $f \mapsto \int_0^t f(\tau)\,\mathrm{d}\tau$.
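
Both operators can be checked for linearity symbolically. A small sympy sketch (illustrative only; the test functions are arbitrary):

```python
import sympy as sp

t, tau, a, b = sp.symbols('t tau a b')
f = sp.sin(tau)
g = sp.exp(tau)

def D(h):
    # Differential operator d/dt applied to h(tau).
    return sp.diff(h.subs(tau, t), t)

def V(h):
    # Volterra operator: integrate h(tau) from 0 to t.
    return sp.integrate(h, (tau, 0, t))

# Linearity: O(a*f + b*g) == a*O(f) + b*O(g) for both operators.
print(sp.simplify(D(a*f + b*g) - (a*D(f) + b*D(g))))   # 0
print(sp.simplify(V(a*f + b*g) - (a*V(f) + b*V(g))))   # 0
```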

Fundamental analysis operators on scalar and vector fields

Three operators are key to vector calculus: grad (gradient), which assigns to a scalar field the vector field pointing in the direction of its greatest rate of increase; div (divergence), which measures a vector field's net outflow from (or convergence towards) each point; and curl, which measures a vector field's tendency to rotate about each point.

As vector calculus is extended to physics, engineering and tensor spaces, the grad, div and curl operators are also often associated with tensor calculus as well as vector calculus. [3]
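
The three operators can be applied symbolically in Cartesian coordinates. A minimal sympy sketch (illustrative only; the fields are arbitrary):

```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')                    # Cartesian coordinates x, y, z with unit vectors i, j, k
phi = N.x**2 * N.y                     # a scalar field
F = N.x*N.i + N.y*N.j + N.z*N.k        # a vector field

print(gradient(phi))    # grad: scalar field -> vector field, here 2*x*y i + x**2 j
print(divergence(F))    # div: vector field -> scalar field, here 3
print(curl(F))          # curl: vector field -> vector field, here the zero vector
```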

Geometry

In geometry, additional structures on vector spaces are sometimes studied. Operators that map such vector spaces to themselves bijectively are very useful in these studies; they naturally form groups under composition.

For example, bijective operators preserving the structure of a vector space are precisely the invertible linear operators. They form the general linear group under composition. However, they do not form a vector space under operator addition; for example, both the identity and its negative −id are invertible (bijective), but their sum, 0, is not.

Operators preserving the Euclidean metric on such a space form the isometry group, and those that fix the origin form a subgroup known as the orthogonal group. Operators in the orthogonal group that also preserve the orientation of vector tuples form the special orthogonal group, or the group of rotations.
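
A quick numerical sketch (illustrative only) checks that a planar rotation lies in the special orthogonal group and preserves the Euclidean metric:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation operator on R^2

# Orthogonal: R^T R = I, so lengths and angles are preserved.
print(np.allclose(R.T @ R, np.eye(2)))            # True
# Special orthogonal: determinant +1, so orientation is preserved as well.
print(np.isclose(np.linalg.det(R), 1.0))          # True

x = np.array([3.0, -4.0])
print(np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x)))   # True: length preserved
```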

Probability theory

Operators are also involved in probability theory: expectation, variance, and covariance are used to name both numerical statistics and the operators that produce them. Indeed, every covariance is basically a dot product; every variance is the dot product of a vector with itself, and thus is a quadratic norm; every standard deviation is a norm (the square root of the quadratic norm); the cosine associated with this dot product is the Pearson correlation coefficient; and the expected value is basically an integral operator (used to measure weighted shapes in the space).
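
On finite samples these identifications become concrete: centering the data turns covariance into a dot product and the Pearson coefficient into a cosine. A small numpy sketch (illustrative only; the sample values are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
y = np.array([0.5, 1.0, 3.0, 9.0])

xc, yc = x - x.mean(), y - y.mean()    # center the samples

cov = np.dot(xc, yc) / len(x)          # covariance as a (scaled) dot product
var_x = np.dot(xc, xc) / len(x)        # variance: dot product of a vector with itself
std_x = np.sqrt(var_x)                 # standard deviation: the induced norm

# Pearson correlation = cosine of the angle between the centered vectors.
r = np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))
print(np.isclose(r, np.corrcoef(x, y)[0, 1]))   # True
```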

Fourier series and Fourier transform

The Fourier transform is useful in applied mathematics, particularly physics and signal processing. It is another integral operator; it is useful mainly because it converts a function on one (temporal) domain to a function on another (frequency) domain, in a way that is effectively invertible. No information is lost, as there is an inverse transform operator. In the simple case of periodic functions, this result is based on the theorem that any continuous periodic function can be represented as the sum of a series of sine waves and cosine waves:

$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \bigl( a_n \cos(n \omega t) + b_n \sin(n \omega t) \bigr).$$

The tuple $(a_0, a_1, b_1, a_2, b_2, \ldots)$ is in fact an element of an infinite-dimensional vector space $\ell^2$, and thus Fourier series is a linear operator.
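
The coefficients can be recovered numerically for a $2\pi$-periodic function (taking $\omega = 1$). A short numpy sketch (illustrative only; the test function is arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f = 1.0 + 3.0 * np.cos(2.0 * t) - 0.5 * np.sin(t)   # a simple 2*pi-periodic function
dt = t[1] - t[0]

def a(n):
    # Cosine coefficient: a_n = (1/pi) * integral of f(t) cos(n t) dt over one period.
    return np.sum(f * np.cos(n * t)) * dt / np.pi

def b(n):
    # Sine coefficient: b_n = (1/pi) * integral of f(t) sin(n t) dt over one period.
    return np.sum(f * np.sin(n * t)) * dt / np.pi

print(round(a(0), 3), round(a(2), 3), round(b(1), 3))   # 2.0 3.0 -0.5  (a_0/2 = 1 is the mean)
```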

When dealing with a general function $\mathbb{R} \to \mathbb{C}$, the transform takes on an integral form:

$$(\mathcal{F}f)(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} f(t)\, e^{-i \omega t} \, \mathrm{d}t .$$

Laplace transform

The Laplace transform is another integral operator and is involved in simplifying the process of solving differential equations.

Given f = f(t), it is defined by:

$$F(s) = (\mathcal{L}f)(s) = \int_0^{\infty} e^{-st} f(t) \, \mathrm{d}t .$$
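
A small sympy sketch (illustrative only) computes the transform of $e^{-2t}$, which should come out as $1/(s+2)$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Laplace transform: integral of exp(-s*t) * f(t) dt from 0 to infinity.
F = sp.laplace_transform(sp.exp(-2 * t), t, s, noconds=True)
print(F)   # 1/(s + 2)
```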

Footnotes

  1. (1) A linear transformation from V to V is called a linear operator on V.
    The set of all linear operators on V is denoted $\mathcal{L}(V)$. A linear operator on a real vector space is called a real operator and a linear operator on a complex vector space is called a complex operator. ... We should also mention that some authors use the term linear operator for any linear transformation from V to W. ...
    Definition: The following terms are also employed:
    (2) endomorphism for linear operator ...
    (6) automorphism for bijective linear operator.
    — Roman (2008) [2]
  2. In this expression, the raised dot merely represents multiplication in whatever scalar field is used with V.

See also

Banach space
Compact operator
Complemented subspace
Differential (mathematics)
Dimension (vector space)
Discontinuous linear map
Dual space
Fredholm alternative
Functional analysis
Glossary of functional analysis
Hahn–Banach theorem
Hilbert space
Inner product space
Isometry
Linear form
Linear map
Linear span
Sequence space
Trace (linear algebra)
Vector space

References

  1. Rudin, Walter (1976). "Chapter 9: Functions of several variables". Principles of Mathematical Analysis (3rd ed.). McGraw-Hill. p. 207. ISBN 0-07-054235-X. Linear transformations of X into X are often called linear operators on X.
  2. Roman, Steven (2008). "Chapter 2: Linear Transformations". Advanced Linear Algebra (3rd ed.). Springer. p. 59. ISBN 978-0-387-72828-5.
  3. Schey, H.M. (2005). Div, Grad, Curl, and All That. New York, NY: W.W. Norton. ISBN 0-393-92516-1.