Operator norm

In mathematics, the operator norm measures the "size" of certain linear operators by assigning each a real number called its operator norm. Formally, it is a norm defined on the space of bounded linear operators between two given normed vector spaces.

Introduction and definition

Given two normed vector spaces $V$ and $W$ (over the same base field, either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$), a linear map $A : V \to W$ is continuous if and only if there exists a real number $c$ such that [1]

$$\|Av\| \leq c \|v\| \quad \text{for all } v \in V.$$

The norm on the left is the one in $W$ and the norm on the right is the one in $V.$ Intuitively, the continuous operator $A$ never increases the length of any vector by more than a factor of $c.$ Thus the image of a bounded set under a continuous operator is also bounded. Because of this property, the continuous linear operators are also known as bounded operators. In order to "measure the size" of $A,$ it then seems natural to take the infimum of the numbers $c$ such that the above inequality holds for all $v \in V.$ This number represents the maximum scalar factor by which $A$ "lengthens" vectors. In other words, the "size" of $A$ is measured by how much it "lengthens" vectors in the "biggest" case. So we define the operator norm of $A$ as

$$\|A\|_{op} = \inf\{ c \geq 0 : \|Av\| \leq c\|v\| \text{ for all } v \in V \}.$$

The infimum is attained, as the set of all such $c$ is closed, nonempty, and bounded from below. [2]

It is important to bear in mind that this operator norm depends on the choice of norms for the normed vector spaces $V$ and $W.$

Examples

Every real $m$-by-$n$ matrix corresponds to a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m.$ Each pair of the plethora of (vector) norms applicable to real vector spaces induces an operator norm for all $m$-by-$n$ matrices of real numbers; these induced norms form a subset of matrix norms.

If we specifically choose the Euclidean norm on both $\mathbb{R}^n$ and $\mathbb{R}^m,$ then the matrix norm given to a matrix $A$ is the square root of the largest eigenvalue of the matrix $A^* A$ (where $A^*$ denotes the conjugate transpose of $A$). [3] This is equivalent to assigning the largest singular value of $A.$
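As a quick numerical illustration (a NumPy sketch of my own; the matrix is arbitrary), the norm induced by the Euclidean norms coincides with the largest singular value:

```python
import numpy as np

A = np.array([[1.0,  2.0],
              [0.0,  3.0],
              [4.0, -1.0]])   # arbitrary real 3x2 matrix

sigma_max = np.linalg.svd(A, compute_uv=False)[0]       # largest singular value

print(sigma_max)
print(np.linalg.norm(A, 2))                             # induced (Euclidean) operator norm
print(np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A))))     # sqrt of largest eigenvalue of A^T A
```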

Passing to a typical infinite-dimensional example, consider the sequence space $\ell^2,$ which is an Lp space, defined by

$$\ell^2 = \Big\{ (a_n)_{n \geq 1} : a_n \in \mathbb{C},\ \sum_n |a_n|^2 < \infty \Big\}.$$

This can be viewed as an infinite-dimensional analogue of the Euclidean space $\mathbb{C}^n.$ Now consider a bounded sequence $s_\bullet = (s_n)_{n=1}^\infty.$ The sequence $s_\bullet$ is an element of the space $\ell^\infty,$ with a norm given by

$$\|s_\bullet\|_\infty = \sup_n |s_n|.$$

Define an operator $T_s$ by pointwise multiplication:

$$(a_n)_{n=1}^\infty \;\mapsto\; (s_n \cdot a_n)_{n=1}^\infty.$$

The operator $T_s$ is bounded with operator norm

$$\|T_s\|_{op} = \|s_\bullet\|_\infty.$$

This discussion extends directly to the case where $\ell^2$ is replaced by a general $L^p$ space with $p > 1$ and $\ell^\infty$ replaced by $L^\infty.$
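Truncating the sequences to finitely many terms, the multiplication operator $T_s$ above becomes a diagonal matrix, and its operator norm is the largest $|s_n|$; a minimal NumPy sketch (the finite sequence is an arbitrary choice of mine):

```python
import numpy as np

s = np.array([0.5, -2.0, 1.5, 0.25])   # a truncated bounded sequence (arbitrary)
T_s = np.diag(s)                       # pointwise multiplication as a diagonal matrix

print(np.linalg.norm(T_s, 2))          # operator norm: 2.0
print(np.max(np.abs(s)))               # sup_n |s_n|:   2.0
```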

Equivalent definitions

Let $A : V \to W$ be a linear operator between normed spaces. The first four definitions are always equivalent, and if in addition $V \neq \{0\}$ then they are all equivalent:

$$
\begin{aligned}
\|A\|_{op} &= \inf\{ c \geq 0 : \|Av\| \leq c\|v\| \text{ for all } v \in V \} \\
&= \sup\{ \|Av\| : \|v\| \leq 1 \text{ and } v \in V \} \\
&= \sup\{ \|Av\| : \|v\| < 1 \text{ and } v \in V \} \\
&= \sup\{ \|Av\| : \|v\| \in \{0, 1\} \text{ and } v \in V \} \\
&= \sup\{ \|Av\| : \|v\| = 1 \text{ and } v \in V \} \\
&= \sup\left\{ \frac{\|Av\|}{\|v\|} : v \neq 0 \text{ and } v \in V \right\}.
\end{aligned}
$$

If $V = \{0\}$ then the sets in the last two rows will be empty, and consequently their supremums over the set $[-\infty, \infty]$ will equal $-\infty$ instead of the correct value of $0.$ If the supremum is taken over the set $[0, \infty]$ instead, then the supremum of the empty set is $0$ and the formulas hold for any $V.$ If $A : V \to W$ is bounded then [4]

$$\|A\|_{op} = \sup\left\{ |w^*(Av)| : \|v\| \leq 1,\ \|w^*\| \leq 1 \text{ where } v \in V,\ w^* \in W^* \right\}$$

and [4]

$$\|A\|_{op} = \left\| {}^t A \right\|_{op}$$

where ${}^t A : W^* \to V^*$ is the transpose of $A : V \to W,$ which is the linear operator defined by $w^* \mapsto w^* \circ A.$
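In the finite-dimensional Euclidean setting, where the transpose is the usual matrix transpose and the dual norm is again the Euclidean norm, both identities above can be checked numerically; a NumPy sketch of my own with an arbitrary matrix:

```python
import numpy as np

A = np.array([[1.0, -2.0, 0.5],
              [3.0,  0.0, 4.0]])          # arbitrary 2x3 real matrix

# ||A||_op = ||tA||_op for the Euclidean (spectral) norm.
print(np.linalg.norm(A, 2), np.linalg.norm(A.T, 2))

# The dual-pairing supremum is attained at the top singular vectors.
U, S, Vt = np.linalg.svd(A)
print(abs(U[:, 0] @ A @ Vt[0]), S[0])     # both equal ||A||_op
```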

Properties

The operator norm is indeed a norm on the space of all bounded operators between $V$ and $W.$ This means

$$\|A\|_{op} \geq 0 \text{ and } \|A\|_{op} = 0 \text{ if and only if } A = 0,$$
$$\|aA\|_{op} = |a| \, \|A\|_{op} \text{ for every scalar } a,$$
$$\|A + B\|_{op} \leq \|A\|_{op} + \|B\|_{op}.$$

The following inequality is an immediate consequence of the definition:

$$\|Av\| \leq \|A\|_{op} \|v\| \quad \text{for every } v \in V.$$

The operator norm is also compatible with the composition, or multiplication, of operators: if $V,$ $U$ and $W$ are three normed spaces over the same base field, and $A : V \to U$ and $B : U \to W$ are two bounded operators, then it is a sub-multiplicative norm, that is:

$$\|BA\|_{op} \leq \|B\|_{op} \|A\|_{op}.$$
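A quick numerical check of sub-multiplicativity for the spectral norm (a sketch; the random matrices and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # A : R^3 -> R^4
B = rng.standard_normal((5, 4))   # B : R^4 -> R^5

lhs = np.linalg.norm(B @ A, 2)
rhs = np.linalg.norm(B, 2) * np.linalg.norm(A, 2)
print(lhs <= rhs)                 # True: ||BA|| <= ||B|| ||A||
```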

For bounded operators on $V,$ this implies that operator multiplication is jointly continuous.

It follows from the definition that if a sequence of operators converges in operator norm, it converges uniformly on bounded sets.

Table of common operator norms

Some common operator norms are easy to calculate, and others are NP-hard. Except for the NP-hard norms, all these norms can be calculated in $N^2$ operations (for an $N \times N$ matrix), with the exception of the $\ell_2$–$\ell_2$ norm (which requires $N^3$ operations for the exact answer, or fewer if one approximates it with the power method or Lanczos iterations).
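A minimal sketch of the power-method approximation mentioned above, applied to $A^{\mathsf T} A$ so that the iterate's image norm approximates the $\ell_2 \to \ell_2$ norm of $A$; the function name, seed, tolerance, and iteration cap are my own choices:

```python
import numpy as np

def spectral_norm_power_method(A, iters=500, tol=1e-12):
    """Estimate the l2 -> l2 operator norm of A by power iteration on A* A."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    prev = -1.0
    sigma = 0.0
    for _ in range(iters):
        w = A.conj().T @ (A @ v)        # one application of A* A
        nw = np.linalg.norm(w)
        if nw == 0.0:                   # A v = 0 for this start vector; return 0 in this sketch
            return 0.0
        v = w / nw
        sigma = np.linalg.norm(A @ v)   # ||A v|| with ||v|| = 1
        if abs(sigma - prev) <= tol * max(sigma, 1.0):
            break
        prev = sigma
    return sigma

A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 4.0]])
print(spectral_norm_power_method(A))    # close to the exact spectral norm
print(np.linalg.norm(A, 2))
```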

Computability of Operator Norms [5]

Domain \ Co-domain   ℓ1                            ℓ2                            ℓ∞
ℓ1                   Maximum ℓ1 norm of a column   Maximum ℓ2 norm of a column   Maximum ℓ∞ norm of a column
ℓ2                   NP-hard                       Maximum singular value        Maximum ℓ∞ norm of a row
ℓ∞                   NP-hard                       NP-hard                       Maximum ℓ1 norm of a row

The norm of the adjoint or transpose can be computed as follows. We have that for any $p, q,$ $\|A\|_{q \to p} = \|A^*\|_{p' \to q'},$ where $p', q'$ are Hölder conjugate to $p, q,$ that is, $1/p + 1/p' = 1$ and $1/q + 1/q' = 1.$
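For the easy entries of the table, these induced norms reduce to column and row sums, and the duality just stated can be checked directly in the case $p = q = 1$ (so $p' = q' = \infty$); a NumPy sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[ 1.0, -2.0,  0.0],
              [ 3.0,  0.5, -1.0]])   # arbitrary example matrix

# l1 -> l1 norm: maximum l1 norm of a column.
print(np.max(np.sum(np.abs(A), axis=0)), np.linalg.norm(A, 1))

# l_inf -> l_inf norm: maximum l1 norm of a row.
print(np.max(np.sum(np.abs(A), axis=1)), np.linalg.norm(A, np.inf))

# Hölder duality with p = q = 1 (p' = q' = inf): ||A||_{1->1} = ||A^T||_{inf->inf}.
print(np.linalg.norm(A, 1) == np.linalg.norm(A.T, np.inf))
```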

Operators on a Hilbert space

Suppose $H$ is a real or complex Hilbert space. If $A : H \to H$ is a bounded linear operator, then we have

$$\|A\|_{op} = \|A^*\|_{op}$$

and

$$\|A^* A\|_{op} = \|A\|_{op}^2,$$

where $A^*$ denotes the adjoint operator of $A$ (which in Euclidean spaces with the standard inner product corresponds to the conjugate transpose of the matrix $A$).
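For matrices (operators on $\mathbb{C}^n$ with the standard inner product), both identities can be verified numerically; a minimal NumPy sketch with an arbitrary complex matrix of my own:

```python
import numpy as np

A = np.array([[1 + 2j, 0.5,   -1j    ],
              [0.0,    3.0,    2 + 1j],
              [1.0,   -1.0,    0.0   ]])    # arbitrary operator on C^3

op = lambda M: np.linalg.norm(M, 2)         # spectral norm = l2 operator norm
A_star = A.conj().T                         # adjoint = conjugate transpose

print(np.isclose(op(A), op(A_star)))        # ||A||   = ||A*||
print(np.isclose(op(A_star @ A), op(A)**2)) # ||A* A|| = ||A||^2
```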

In general, the spectral radius of $A$ is bounded above by the operator norm of $A$:

$$\rho(A) \leq \|A\|_{op}.$$

To see why equality may not always hold, consider the Jordan canonical form of a matrix in the finite-dimensional case. Because there are non-zero entries on the superdiagonal, equality may be violated. The quasinilpotent operators are one class of such examples. A nonzero quasinilpotent operator $A$ has spectrum $\{0\},$ so $\rho(A) = 0$ while $\|A\|_{op} > 0.$
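A concrete finite-dimensional instance (added here for illustration) is the nilpotent Jordan block

$$N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},$$

whose spectrum is $\{0\},$ so $\rho(N) = 0,$ while $\|N e_2\| = \|e_1\| = 1$ gives $\|N\|_{op} = 1 > \rho(N).$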

However, when a matrix $N$ is normal, its Jordan canonical form is diagonal (up to unitary equivalence); this is the spectral theorem. In that case it is easy to see that

$$\rho(N) = \|N\|_{op}.$$

This formula can sometimes be used to compute the operator norm of a given bounded operator $A$: define the Hermitian operator $B = A^* A,$ determine its spectral radius, and take the square root to obtain the operator norm of $A$:

$$\|A\|_{op} = \sqrt{\rho(A^* A)}.$$
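As a NumPy sketch of this recipe (the matrix is an arbitrary stand-in for a bounded operator):

```python
import numpy as np

A = np.array([[1 + 1j, 2.0],
              [0.0,   -3j ]])              # arbitrary complex matrix

B = A.conj().T @ A                         # the Hermitian operator A* A
rho = np.max(np.linalg.eigvalsh(B))        # spectral radius of A* A (B is positive semidefinite)
print(np.sqrt(rho))                        # operator norm of A
print(np.linalg.norm(A, 2))                # agrees with the spectral norm
```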

The space of bounded operators on $H,$ with the topology induced by the operator norm, is not separable. For example, consider the Lp space $L^2[0, 1],$ which is a Hilbert space. For $0 < t \leq 1,$ let $\Omega_t$ be the characteristic function of $[0, t],$ and let $P_t$ be the multiplication operator given by $\Omega_t,$ that is,

$$P_t(f) = f \cdot \Omega_t.$$

Then each $P_t$ is a bounded operator with operator norm 1 and

$$\|P_t - P_s\|_{op} = 1 \quad \text{for all } t \neq s$$

(for $s < t,$ any unit-norm $f$ supported in $(s, t]$ satisfies $(P_t - P_s) f = f$).

But $\{P_t : 0 < t \leq 1\}$ is an uncountable set. This implies the space of bounded operators on $L^2([0, 1])$ is not separable in the operator norm. One can compare this with the fact that the sequence space $\ell^\infty$ is not separable.

The associative algebra of all bounded operators on a Hilbert space, together with the operator norm and the adjoint operation, yields a C*-algebra.

Notes

  1. Kreyszig, Erwin (1978). Introductory Functional Analysis with Applications. John Wiley & Sons. p. 97. ISBN 9971-51-381-1.
  2. See, e.g., Lemma 6.2 of Aliprantis & Border (2007).
  3. Weisstein, Eric W. "Operator Norm". mathworld.wolfram.com. Retrieved 2020-03-14.
  4. Rudin 1991, pp. 92–115.
  5. Section 4.3.1 of Joel Tropp's PhD thesis.

Bibliography

Aliprantis, Charalambos D.; Border, Kim C. (2007). Infinite Dimensional Analysis: A Hitchhiker's Guide. Springer.
Rudin, Walter (1991). Functional Analysis (2nd ed.). McGraw-Hill.
