In the field of mathematics, norms are defined for elements within a vector space. Specifically, when the vector space comprises matrices, such norms are referred to as matrix norms. Matrix norms differ from vector norms in that they must also interact with matrix multiplication.
Given a field $K$ of either real or complex numbers, let $K^{m\times n}$ be the $K$-vector space of matrices with $m$ rows and $n$ columns and entries in the field $K$. A matrix norm is a norm on $K^{m\times n}$.

Norms are often expressed with double vertical bars (like so: $\|A\|$). Thus, the matrix norm is a function $\|\cdot\| \colon K^{m\times n} \to \mathbb{R}$ that must satisfy the following properties: [1] [2]

For all scalars $\alpha \in K$ and matrices $A, B \in K^{m\times n}$,
- $\|A\| \ge 0$ (positive-valued)
- $\|A\| = 0 \iff A = 0_{m,n}$ (definite)
- $\|\alpha A\| = |\alpha| \, \|A\|$ (absolutely homogeneous)
- $\|A + B\| \le \|A\| + \|B\|$ (sub-additive, i.e. satisfying the triangle inequality)
The only feature distinguishing matrices from rearranged vectors is multiplication. Matrix norms are particularly useful if they are also sub-multiplicative: [1] [2] [3]
$$\|AB\| \le \|A\| \, \|B\|.$$
Every norm on $K^{n\times n}$ can be rescaled to be sub-multiplicative; in some books, the terminology matrix norm is reserved for sub-multiplicative norms. [4]
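As a quick numerical illustration (a minimal sketch; NumPy and the randomly drawn test matrices are choices made for this example, not part of the definition), sub-multiplicativity can be checked for the spectral norm, while the elementwise max norm fails it:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Spectral norm (ord=2) is sub-multiplicative: ||AB|| <= ||A|| ||B||.
lhs = np.linalg.norm(A @ B, 2)
rhs = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
assert lhs <= rhs + 1e-12

# The elementwise max norm is not sub-multiplicative: for the all-ones
# 2x2 matrix J, ||JJ||_max = 2 but ||J||_max * ||J||_max = 1.
J = np.ones((2, 2))
print(np.abs(J @ J).max(), np.abs(J).max() * np.abs(J).max())  # 2.0 1.0
```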
Suppose a vector norm $\|\cdot\|_\alpha$ on $K^n$ and a vector norm $\|\cdot\|_\beta$ on $K^m$ are given. Any $m \times n$ matrix $A$ induces a linear operator from $K^n$ to $K^m$ with respect to the standard basis, and one defines the corresponding induced norm or operator norm or subordinate norm on the space of all $m \times n$ matrices as follows:
$$\|A\|_{\alpha,\beta} = \sup \{ \|Ax\|_\beta : x \in K^n,\ \|x\|_\alpha = 1 \} = \sup \left\{ \frac{\|Ax\|_\beta}{\|x\|_\alpha} : x \in K^n,\ x \ne 0 \right\},$$
where $\sup$ denotes the supremum. This norm measures how much the mapping induced by $A$ can stretch vectors. Depending on the vector norms $\|\cdot\|_\alpha$, $\|\cdot\|_\beta$ used, notation other than $\|\cdot\|_{\alpha,\beta}$ can be used for the operator norm.
If the p-norm for vectors ($1 \le p \le \infty$) is used for both spaces $K^n$ and $K^m$, then the corresponding operator norm is: [2]
$$\|A\|_p = \sup_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_p}.$$
These induced norms are different from the "entry-wise" p-norms and the Schatten p-norms for matrices treated below, which are also usually denoted by $\|A\|_p$.
Geometrically speaking, one can imagine a p-norm unit ball $V_{p,n} = \{ x \in K^n : \|x\|_p \le 1 \}$ in $K^n$, then apply the linear map $A$ to the ball. It would end up becoming a distorted convex shape $A V_{p,n}$, and $\|A\|_p$ measures the longest "radius" of the distorted convex shape. In other words, we must take a p-norm unit ball $V_{p,m}$ in $K^m$, then multiply it by at least $\|A\|_p$, in order for it to be large enough to contain $A V_{p,n}$.
When $p = 1, \infty$, we have simple formulas:
$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|,$$
which is simply the maximum absolute column sum of the matrix;
$$\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|,$$
which is simply the maximum absolute row sum of the matrix. For example, for
$$A = \begin{bmatrix} -3 & 5 & 7 \\ 2 & 6 & 4 \\ 0 & 2 & 8 \end{bmatrix},$$
we have that
$$\|A\|_1 = \max(|{-3}| + 2 + 0,\ 5 + 6 + 2,\ 7 + 4 + 8) = \max(5, 13, 19) = 19,$$
$$\|A\|_\infty = \max(|{-3}| + 5 + 7,\ 2 + 6 + 4,\ 0 + 2 + 8) = \max(15, 12, 10) = 15.$$
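The following NumPy sketch verifies these column-sum and row-sum formulas on the example matrix above (`np.linalg.norm` with `ord=1` and `ord=np.inf` computes exactly these induced norms):

```python
import numpy as np

A = np.array([[-3, 5, 7],
              [ 2, 6, 4],
              [ 0, 2, 8]])

col_sums = np.abs(A).sum(axis=0)  # [5, 13, 19] -> max is ||A||_1
row_sums = np.abs(A).sum(axis=1)  # [15, 12, 10] -> max is ||A||_inf

print(col_sums.max(), np.linalg.norm(A, 1))       # 19 19.0
print(row_sums.max(), np.linalg.norm(A, np.inf))  # 15 15.0
```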
When $p = 2$ (the Euclidean norm or $\ell_2$-norm for vectors), the induced matrix norm is the spectral norm. (The two values do not coincide in infinite dimensions; see Spectral radius for further discussion.) The spectral radius should not be confused with the spectral norm. The spectral norm of a matrix $A$ is the largest singular value of $A$, i.e., the square root of the largest eigenvalue of the matrix $A^* A$, where $A^*$ denotes the conjugate transpose of $A$: [5]
$$\|A\|_2 = \sqrt{\lambda_{\max}(A^* A)} = \sigma_{\max}(A),$$
where $\sigma_{\max}(A)$ represents the largest singular value of matrix $A$.
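As a sketch (the real test matrix below is an arbitrary random choice; for complex matrices one would use the conjugate transpose), the characterizations of the spectral norm agree numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))

sigma_max = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value
lam_max = np.linalg.eigvalsh(A.T @ A)[-1]          # largest eigenvalue of A^T A
print(sigma_max, np.sqrt(lam_max), np.linalg.norm(A, 2))  # all three coincide
```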
There are further properties:
- $\|A\|_2 = \sup \{ x^* A y : x \in K^m,\ y \in K^n,\ \|x\|_2 = \|y\|_2 = 1 \}$, which can be proved using the Cauchy–Schwarz inequality.
- $\|A^* A\|_2 = \|A A^*\|_2 = \|A\|_2^2$, which can be proved by the singular value decomposition of $A$.
- $\|A\|_2 = \sigma_{\max}(A) \le \|A\|_F = \sqrt{\textstyle\sum_i \sigma_i^2(A)}$, with equality if and only if $A$ is a rank-one matrix or the zero matrix.
We can generalize the above definition. Suppose we have vector norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ for spaces $K^n$ and $K^m$ respectively; the corresponding operator norm is
$$\|A\|_{\alpha,\beta} = \sup_{x \ne 0} \frac{\|Ax\|_\beta}{\|x\|_\alpha}.$$
In particular, the $\|A\|_p$ defined previously is the special case of $\|A\|_{p,p}$.
In the special cases of $\alpha = 2$ and $\beta = \infty$, the induced matrix norm can be computed by
$$\|A\|_{2,\infty} = \max_{1 \le i \le m} \|A_{i,:}\|_2,$$
where $A_{i,:}$ is the i-th row of matrix $A$.

In the special cases of $\alpha = 1$ and $\beta = 2$, the induced matrix norm can be computed by
$$\|A\|_{1,2} = \max_{1 \le j \le n} \|A_{:,j}\|_2,$$
where $A_{:,j}$ is the j-th column of matrix $A$.

Hence, $\|A\|_{2,\infty}$ and $\|A\|_{1,2}$ are the maximum row and column 2-norms of the matrix, respectively.
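These two mixed norms reduce to simple row- and column-wise computations, as the following sketch (with an arbitrary random test matrix) shows:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))

norm_2_inf = np.linalg.norm(A, axis=1).max()  # max Euclidean norm of a row
norm_1_2 = np.linalg.norm(A, axis=0).max()    # max Euclidean norm of a column
print(norm_2_inf, norm_1_2)
```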
Any operator norm is consistent with the vector norms that induce it, giving
$$\|Ax\|_\beta \le \|A\|_{\alpha,\beta} \, \|x\|_\alpha.$$
Suppose $\|\cdot\|_{\alpha,\beta}$; $\|\cdot\|_{\beta,\gamma}$; and $\|\cdot\|_{\alpha,\gamma}$ are operator norms induced by the respective pairs of vector norms $(\|\cdot\|_\alpha, \|\cdot\|_\beta)$; $(\|\cdot\|_\beta, \|\cdot\|_\gamma)$; and $(\|\cdot\|_\alpha, \|\cdot\|_\gamma)$. Then,
$$\|BA\|_{\alpha,\gamma} \le \|B\|_{\beta,\gamma} \, \|A\|_{\alpha,\beta};$$
this follows from
$$\|BAx\|_\gamma \le \|B\|_{\beta,\gamma} \, \|Ax\|_\beta \le \|B\|_{\beta,\gamma} \, \|A\|_{\alpha,\beta} \, \|x\|_\alpha$$
and
$$\sup_{\|x\|_\alpha = 1} \|BAx\|_\gamma = \|BA\|_{\alpha,\gamma}.$$
Suppose $\|\cdot\|_{\alpha,\alpha}$ is an operator norm on the space of square matrices $K^{n \times n}$ induced by vector norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\alpha$. Then, the operator norm is a sub-multiplicative matrix norm:
$$\|AB\|_{\alpha,\alpha} \le \|A\|_{\alpha,\alpha} \, \|B\|_{\alpha,\alpha}.$$
Moreover, any such norm satisfies the inequality

$$\left( \|A^r\| \right)^{1/r} \ge \rho(A) \qquad (1)$$

for all positive integers $r$, where $\rho(A)$ is the spectral radius of $A$. For symmetric or Hermitian $A$, we have equality in (1) for the 2-norm, since in this case the 2-norm is precisely the spectral radius of $A$. For an arbitrary matrix, we may not have equality for any norm; a counterexample would be
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$
which has vanishing spectral radius. In any case, for any matrix norm, we have the spectral radius formula:
$$\lim_{r \to \infty} \|A^r\|^{1/r} = \rho(A).$$
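The convergence in the spectral radius formula can be observed numerically; in this sketch (random test matrix, and the spectral norm as an arbitrary choice of matrix norm), $\|A^r\|^{1/r}$ approaches $\rho(A)$ from above as $r$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

rho = np.abs(np.linalg.eigvals(A)).max()  # spectral radius
for r in (1, 5, 20, 100):
    val = np.linalg.norm(np.linalg.matrix_power(A, r), 2) ** (1.0 / r)
    print(r, val)  # decreases toward rho as r grows
print("rho(A) =", rho)
```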
If the vector norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ are given in terms of energy norms based on symmetric positive definite matrices $P$ and $Q$ respectively (i.e. $\|x\|_P = \sqrt{x^* P x}$ and $\|y\|_Q = \sqrt{y^* Q y}$), the resulting operator norm is given as
$$\|A\|_{P,Q} = \sup_{x \ne 0} \frac{\|Ax\|_Q}{\|x\|_P}.$$

Using the symmetric matrix square roots of $P$ and $Q$ respectively, the operator norm can be expressed as the spectral norm of a modified matrix:
$$\|A\|_{P,Q} = \left\| Q^{1/2} A P^{-1/2} \right\|_2.$$
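A sketch of this reduction (the SPD matrices `P` and `Q` below are hypothetical examples, and the Monte Carlo comparison is only a sanity check, not a proof):

```python
import numpy as np

def spd_sqrt(M):
    # Symmetric square root of a symmetric positive definite matrix.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
P = np.diag([1.0, 2.0, 3.0])  # hypothetical SPD matrices defining the
Q = 2.0 * np.eye(3)           # energy norms ||x||_P and ||y||_Q

op_norm = np.linalg.norm(spd_sqrt(Q) @ A @ np.linalg.inv(spd_sqrt(P)), 2)

# Sanity check: no sampled ratio ||Ax||_Q / ||x||_P exceeds op_norm.
xs = rng.standard_normal((3, 1000))
num = np.einsum('ij,ik,kj->j', A @ xs, Q, A @ xs)  # (Ax)^T Q (Ax) per column
den = np.einsum('ij,ik,kj->j', xs, P, xs)          # x^T P x per column
assert np.sqrt(num / den).max() <= op_norm + 1e-9
```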
A matrix norm $\|\cdot\|$ on $K^{m \times n}$ is called consistent with a vector norm $\|\cdot\|_\alpha$ on $K^n$ and a vector norm $\|\cdot\|_\beta$ on $K^m$, if
$$\|Ax\|_\beta \le \|A\| \, \|x\|_\alpha$$
for all $A \in K^{m \times n}$ and all $x \in K^n$. In the special case of $m = n$ and $\alpha = \beta$, $\|\cdot\|$ is also called compatible with $\|\cdot\|_\alpha$.
All induced norms are consistent by definition. Also, any sub-multiplicative matrix norm on $K^{n \times n}$ induces a compatible vector norm on $K^n$ by defining $\|v\| := \|\,(v, v, \ldots, v)\,\|$, where $(v, v, \ldots, v)$ denotes the $n \times n$ matrix whose columns all equal $v$.
These norms treat an $m \times n$ matrix as a vector of size $m \cdot n$, and use one of the familiar vector norms. For example, using the p-norm for vectors, $p \ge 1$, we get:
$$\|A\|_{p,p} = \|\mathrm{vec}(A)\|_p = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^p \right)^{1/p}.$$
This is a different norm from the induced p-norm (see above) and the Schatten p-norm (see below), but the notation is the same.
The special case p = 2 is the Frobenius norm, and p = ∞ yields the maximum norm.
Let $(a_1, \ldots, a_n)$ be the columns of matrix $A$. From the original definition, the matrix presents $n$ data points in $m$-dimensional space. The $L_{2,1}$ norm [6] is the sum of the Euclidean norms of the columns of the matrix:
$$\|A\|_{2,1} = \sum_{j=1}^{n} \|a_j\|_2 = \sum_{j=1}^{n} \left( \sum_{i=1}^{m} |a_{ij}|^2 \right)^{1/2}.$$

The $L_{2,1}$ norm as an error function is more robust, since the error for each data point (a column) is not squared. It is used in robust data analysis and sparse coding.

For $p, q \ge 1$, the $L_{2,1}$ norm can be generalized to the $L_{p,q}$ norm as follows:
$$\|A\|_{p,q} = \left( \sum_{j=1}^{n} \left( \sum_{i=1}^{m} |a_{ij}|^p \right)^{q/p} \right)^{1/q}.$$
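A direct implementation of the $L_{p,q}$ family (a minimal sketch; the helper name `lpq_norm` is ad hoc, cross-checked against the Frobenius norm in the $p = q = 2$ case):

```python
import numpy as np

def lpq_norm(A, p, q):
    # q-norm of the vector of column p-norms.
    col_p = np.sum(np.abs(A) ** p, axis=0) ** (1.0 / p)
    return np.sum(col_p ** q) ** (1.0 / q)

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(lpq_norm(A, 2, 1))  # L_{2,1}: sum of column Euclidean norms
print(lpq_norm(A, 2, 2), np.linalg.norm(A, 'fro'))  # L_{2,2} = Frobenius
```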
When $p = q = 2$ for the $L_{p,q}$ norm, it is called the Frobenius norm or the Hilbert–Schmidt norm, though the latter term is used more frequently in the context of operators on (possibly infinite-dimensional) Hilbert space. This norm can be defined in various ways:
$$\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2} = \sqrt{\operatorname{trace}\left( A^* A \right)} = \sqrt{\sum_{i=1}^{\min\{m,n\}} \sigma_i^2(A)},$$
where the trace is the sum of diagonal entries, and $\sigma_i(A)$ are the singular values of $A$. The second equality is proven by explicit computation of $\operatorname{trace}(A^* A)$. The third equality is proven by singular value decomposition of $A$, and the fact that the trace is invariant under circular shifts.
The Frobenius norm is an extension of the Euclidean norm to $K^{m \times n}$ and comes from the Frobenius inner product on the space of all matrices.
The Frobenius norm is sub-multiplicative and is very useful for numerical linear algebra. The sub-multiplicativity of the Frobenius norm can be proved using the Cauchy–Schwarz inequality.
The Frobenius norm is often easier to compute than induced norms, and has the useful property of being invariant under rotations (and unitary operations in general). That is, $\|A\|_F = \|UA\|_F = \|AV\|_F$ for any unitary matrices $U$ and $V$. This property follows from the cyclic nature of the trace ($\operatorname{trace}(XYZ) = \operatorname{trace}(ZXY)$):
$$\|UA\|_F^2 = \operatorname{trace}\left( (UA)^* UA \right) = \operatorname{trace}\left( A^* U^* U A \right) = \operatorname{trace}\left( A^* A \right) = \|A\|_F^2,$$
and analogously:
$$\|AV\|_F^2 = \operatorname{trace}\left( (AV)^* AV \right) = \operatorname{trace}\left( V^* A^* A V \right) = \operatorname{trace}\left( A^* A V V^* \right) = \operatorname{trace}\left( A^* A \right) = \|A\|_F^2,$$
where we have used the unitary nature of $V$ (that is, $V^* V = V V^* = \mathbf{I}$).
It also satisfies
$$\|A^* A\|_F = \|A A^*\|_F \le \|A\|_F^2$$
and
$$\|A + B\|_F^2 = \|A\|_F^2 + \|B\|_F^2 + 2 \operatorname{Re}\left( \langle A, B \rangle_F \right),$$
where $\langle A, B \rangle_F$ is the Frobenius inner product, and Re is the real part of a complex number (irrelevant for real matrices).
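The unitary invariance above is easy to observe numerically; in this sketch, a random orthogonal matrix (a real unitary matrix, obtained here from a QR factorization) leaves the Frobenius norm unchanged:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthogonal matrix

print(np.linalg.norm(A, 'fro'),
      np.linalg.norm(U @ A, 'fro'),
      np.linalg.norm(A @ U, 'fro'))  # all agree up to rounding
```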
The max norm is the elementwise norm in the limit as $p = q$ goes to infinity:
$$\|A\|_{\max} = \max_{i,j} |a_{ij}|.$$

This norm is not sub-multiplicative, but modifying the right-hand side to $\sqrt{mn} \, \max_{i,j} |a_{ij}|$ makes it so.
Note that in some literature (such as Communication complexity), an alternative definition of max-norm, also called the $\gamma_2$-norm, refers to the factorization norm:
$$\gamma_2(A) = \min_{U, V \,:\, A = U V^{\mathsf T}} \|U\|_{2,\infty} \, \|V\|_{2,\infty} = \min_{U, V \,:\, A = U V^{\mathsf T}} \, \max_{i,j} \|U_{i,:}\|_2 \, \|V_{j,:}\|_2.$$
The Schatten p-norms arise when applying the p-norm to the vector of singular values of a matrix. [2] If the singular values of the $m \times n$ matrix $A$ are denoted by $\sigma_i$, then the Schatten p-norm is defined by
$$\|A\|_p = \left( \sum_{i=1}^{\min\{m,n\}} \sigma_i^p(A) \right)^{1/p}.$$
These norms again share the notation with the induced and entry-wise p-norms, but they are different.
All Schatten norms are sub-multiplicative. They are also unitarily invariant, which means that $\|A\| = \|UAV\|$ for all matrices $A$ and all unitary matrices $U$ and $V$.
The most familiar cases are $p = 1, 2, \infty$. The case $p = 2$ yields the Frobenius norm, introduced before. The case $p = \infty$ yields the spectral norm, which is the operator norm induced by the vector 2-norm (see above). Finally, $p = 1$ yields the nuclear norm (also known as the trace norm, or the Ky Fan 'n'-norm [7]), defined as:
$$\|A\|_* = \operatorname{trace}\left( \sqrt{A^* A} \right) = \sum_{i=1}^{\min\{m,n\}} \sigma_i(A),$$
where $\sqrt{A^* A}$ denotes a positive semidefinite matrix $B$ such that $BB = A^* A$. More precisely, since $A^* A$ is a positive semidefinite matrix, its square root is well defined. The nuclear norm $\|A\|_*$ is a convex envelope of the rank function $\operatorname{rank}(A)$, so it is often used in mathematical optimization to search for low-rank matrices.
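Since Schatten norms are p-norms of the singular value vector, they take only a few lines of code given an SVD (a sketch; `schatten_norm` is an ad hoc helper, cross-checked against NumPy's built-in `'nuc'`, `'fro'`, and `2` matrix norms):

```python
import numpy as np

def schatten_norm(A, p):
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    return np.sum(s ** p) ** (1.0 / p)

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 4))

print(schatten_norm(A, 1), np.linalg.norm(A, 'nuc'))  # nuclear norm
print(schatten_norm(A, 2), np.linalg.norm(A, 'fro'))  # Frobenius norm
print(np.linalg.svd(A, compute_uv=False)[0], np.linalg.norm(A, 2))  # spectral
```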
Combining von Neumann's trace inequality with Hölder's inequality for Euclidean space yields a version of Hölder's inequality for Schatten norms for $\tfrac{1}{p} + \tfrac{1}{q} = 1$:
$$\left| \operatorname{trace}(A^* B) \right| \le \|A\|_p \, \|B\|_q.$$

In particular, this implies the Schatten norm inequality
$$\|A\|_F^2 \le \|A\|_p \, \|A\|_q.$$
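A numerical sanity check of both inequalities (random test matrices; the pair `p = 3`, `q = 3/2` is an arbitrary choice of Hölder conjugates):

```python
import numpy as np

def schatten(A, p):
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

p, q = 3.0, 1.5  # Hoelder conjugates: 1/p + 1/q = 1
assert abs(np.trace(A.T @ B)) <= schatten(A, p) * schatten(B, q) + 1e-12
assert np.linalg.norm(A, 'fro') ** 2 <= schatten(A, p) * schatten(A, q) + 1e-12
```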
A matrix norm $\|\cdot\|$ is called monotone if it is monotonic with respect to the Loewner order. Thus, a matrix norm is increasing if
$$A \preccurlyeq B \implies \|A\| \le \|B\|.$$
The Frobenius norm and spectral norm are examples of monotone norms. [8]
Another source of inspiration for matrix norms arises from considering a matrix as the adjacency matrix of a weighted, directed graph. [9] The so-called "cut norm" measures how close the associated graph is to being bipartite:
$$\|A\|_\square = \max_{S \subseteq [m],\, T \subseteq [n]} \left| \sum_{i \in S,\, j \in T} A_{ij} \right|,$$
where $A \in K^{m \times n}$. [9] [10] [11] Equivalent definitions (up to a constant factor) impose the conditions $2|S| > n$ and $2|T| > m$; $S = T$; or $S \cap T = \emptyset$. [10]
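For tiny matrices the cut norm can be evaluated by brute force over all row and column subsets (a sketch with exponential cost, meant only to make the definition concrete):

```python
import numpy as np
from itertools import chain, combinations

def cut_norm(A):
    # Maximize |sum of A over S x T| over all nonempty row subsets S
    # and column subsets T; feasible only for very small matrices.
    m, n = A.shape
    def subsets(k):
        return chain.from_iterable(combinations(range(k), r)
                                   for r in range(1, k + 1))
    return max(abs(A[np.ix_(S, T)].sum())
               for S in subsets(m) for T in subsets(n))

A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
print(cut_norm(A))  # 1.0 for this 2x2 example
```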
The cut-norm is equivalent to the induced operator norm ‖·‖∞→1, which is itself equivalent to another norm, called the Grothendieck norm. [11]
To define the Grothendieck norm, first note that a linear operator $K^1 \to K^1$ is just a scalar, and thus extends to a linear operator on any $K^k \to K^k$. Moreover, given any choice of basis for $K^n$ and $K^m$, any linear operator $K^n \to K^m$ extends to a linear operator $(K^k)^n \to (K^k)^m$, by letting each matrix element act on elements of $K^k$ via scalar multiplication. The Grothendieck norm is the norm of that extended operator; in symbols: [11]
$$\|A\|_{G,k} = \sup \left\{ \sum_{i,j} A_{ij} \langle x_j, y_i \rangle : x_j, y_i \in K^k,\ \|x_j\|_2 = \|y_i\|_2 = 1 \right\}.$$
The Grothendieck norm depends on the choice of basis (usually taken to be the standard basis) and $k$.
For any two matrix norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$, we have that:
$$r \, \|A\|_\alpha \le \|A\|_\beta \le s \, \|A\|_\alpha$$
for some positive numbers $r$ and $s$, for all matrices $A \in K^{m \times n}$. In other words, all norms on $K^{m \times n}$ are equivalent; they induce the same topology on $K^{m \times n}$. This is true because the vector space $K^{m \times n}$ has the finite dimension $m \times n$.
Moreover, for every matrix norm $\|\cdot\|$ on $\mathbb{R}^{n \times n}$ there exists a unique positive real number $k$ such that $\ell \, \|\cdot\|$ is a sub-multiplicative matrix norm for every $\ell \ge k$; to wit,
$$k = \sup \{ \|AB\| : \|A\| \le 1,\ \|B\| \le 1 \}.$$
A sub-multiplicative matrix norm $\|\cdot\|_\alpha$ is said to be minimal, if there exists no other sub-multiplicative matrix norm $\|\cdot\|_\beta$ satisfying $\|\cdot\|_\beta < \|\cdot\|_\alpha$.
Let $\|\cdot\|_p$ once again refer to the norm induced by the vector p-norm (as above in the Induced norm section).
For a matrix $A \in \mathbb{R}^{m \times n}$ of rank $r$, the following inequalities hold: [12] [13]
- $\|A\|_2 \le \|A\|_F \le \sqrt{r} \, \|A\|_2$
- $\|A\|_F \le \|A\|_* \le \sqrt{r} \, \|A\|_F$
- $\|A\|_{\max} \le \|A\|_2 \le \sqrt{mn} \, \|A\|_{\max}$
- $\tfrac{1}{\sqrt{n}} \, \|A\|_\infty \le \|A\|_2 \le \sqrt{m} \, \|A\|_\infty$
- $\tfrac{1}{\sqrt{m}} \, \|A\|_1 \le \|A\|_2 \le \sqrt{n} \, \|A\|_1$
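These bounds can be spot-checked numerically, as in the following sketch (an arbitrary random matrix; the assertions exercise only the first three inequalities in the list above):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 6))
r = np.linalg.matrix_rank(A)

two = np.linalg.norm(A, 2)      # spectral norm
fro = np.linalg.norm(A, 'fro')  # Frobenius norm
nuc = np.linalg.norm(A, 'nuc')  # nuclear norm
mx  = np.abs(A).max()           # max norm

assert two <= fro + 1e-12 and fro <= np.sqrt(r) * two + 1e-12
assert fro <= nuc + 1e-12 and nuc <= np.sqrt(r) * fro + 1e-12
assert mx <= two + 1e-12 and two <= np.sqrt(A.size) * mx + 1e-12
```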