In mathematics, the joint spectral radius is a generalization of the classical notion of the spectral radius of a matrix to sets of matrices. In recent years this notion has found applications in a large number of engineering fields and is still a topic of active research.
The joint spectral radius of a set of matrices is the maximal asymptotic growth rate of products of matrices taken in that set. For a finite (or, more generally, compact) set of matrices $\mathcal M$, the joint spectral radius is defined as
$$\rho(\mathcal M) = \lim_{k \to \infty} \max \left\{ \|A_1 A_2 \cdots A_k\|^{1/k} : A_i \in \mathcal M \right\},$$
where $\|\cdot\|$ is any matrix norm.
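As a minimal illustration of this definition, the following numpy-based sketch brute-forces the standard bounds $\max \rho(P)^{1/k} \le \rho(\mathcal M) \le \max \|P\|^{1/k}$ over all length-$k$ products $P$ (the example matrices and the helper name are chosen for illustration; the enumeration is exponential in $k$ and only feasible for tiny sets):

```python
import itertools
import numpy as np

def jsr_bounds(matrices, k):
    """Brute-force bounds from all products of length k:
    max rho(P)^(1/k) <= JSR <= max ||P||^(1/k), where rho is the
    spectral radius and ||.|| the (sub-multiplicative) spectral norm.
    Exponential in k: only feasible for tiny sets and small k."""
    lower = upper = 0.0
    for combo in itertools.product(matrices, repeat=k):
        P = combo[0]
        for M in combo[1:]:
            P = P @ M
        lower = max(lower, max(abs(np.linalg.eigvals(P))) ** (1.0 / k))
        upper = max(upper, np.linalg.norm(P, 2) ** (1.0 / k))
    return lower, upper

# Example pair whose exact joint spectral radius is the golden ratio:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
lo, up = jsr_bounds([A, B], 8)   # lo <= JSR <= up
```

Both bounds converge to the joint spectral radius as $k$ grows, but slowly; the methods discussed below are far more efficient.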
It can be proved that the limit exists and that the quantity does not depend on the chosen matrix norm (this is true for any norm, but it is particularly easy to see when the norm is sub-multiplicative). The joint spectral radius was introduced in 1960 by Gian-Carlo Rota and Gilbert Strang, [1] two mathematicians from MIT, but it started attracting attention with the work of Ingrid Daubechies and Jeffrey Lagarias. [2] They showed that the joint spectral radius can be used to describe smoothness properties of certain wavelet functions. [3] A wide range of applications have been proposed since then. It is known that the joint spectral radius is NP-hard to compute or to approximate, even when the set consists of only two matrices whose nonzero entries are constrained to be equal. [4] Moreover, the question "is $\rho \le 1$?" is an undecidable problem. [5] Nevertheless, much progress has been made on its understanding in recent years: in practice the joint spectral radius can often be computed to satisfactory precision, and it can moreover bring interesting insight into engineering and mathematical problems.
In spite of the negative theoretical results on the computability of the joint spectral radius, methods have been proposed that perform well in practice. Algorithms are even known that can reach an arbitrary accuracy in an a priori computable amount of time. These algorithms can be seen as trying to approximate the unit ball of a particular vector norm, called the extremal norm. [6] One generally distinguishes between two families of such algorithms: the first family, called polytope norm methods, constructs the extremal norm by computing long trajectories of points. [7] [8] An advantage of these methods is that, in favorable cases, they can find the exact value of the joint spectral radius and provide a certificate that this is the exact value.
The second family of methods approximates the extremal norm with modern optimization techniques, such as ellipsoid norm approximation, [9] semidefinite programming, [10] [11] sum of squares, [12] and conic programming. [13] The advantage of these methods is that they are easy to implement, and in practice they generally provide the best bounds on the joint spectral radius.
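A minimal numpy-only sketch of the ellipsoid-norm idea: given a positive definite $P$, the norm $\|x\|_P = \sqrt{x^\top P x}$ induces the upper bound $\max_i \|A_i\|_P$ on the joint spectral radius. The search for the best $P$, normally done by semidefinite programming, is omitted here; the matrix and the hand-picked $P$ below are illustrative assumptions:

```python
import numpy as np

def ellipsoid_bound(matrices, P):
    """Upper bound on the joint spectral radius induced by the ellipsoid
    norm ||x||_P = sqrt(x' P x) for a positive definite P: the bound is
    max_i ||A_i||_P, with ||A||_P = ||L' A inv(L')||_2 where P = L L'."""
    L = np.linalg.cholesky(P)
    Linv_T = np.linalg.inv(L).T
    return max(np.linalg.norm(L.T @ A @ Linv_T, 2) for A in matrices)

# Illustrative single-matrix example: eigenvalues are +-1, so the
# joint spectral radius of {A} is rho(A) = 1.
A = np.array([[0.0, 2.0], [0.5, 0.0]])
b_identity = ellipsoid_bound([A], np.eye(2))       # spectral-norm bound: 2
b_opt = ellipsoid_bound([A], np.diag([1.0, 4.0]))  # a better P gives the exact value 1
```

With $P = I$ this reduces to the plain spectral-norm bound; optimizing over $P$ is what the cited methods actually do, and the best ellipsoidal norm is known to approximate the joint spectral radius within a factor $\sqrt{n}$ in dimension $n$.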
Related to the computability of the joint spectral radius is the following conjecture, known as the finiteness conjecture: [14]
For any finite set of matrices $\mathcal M$, there is a product $A_1 A_2 \cdots A_t$ of matrices in this set such that
$$\rho(\mathcal M) = \rho(A_1 A_2 \cdots A_t)^{1/t}.$$
In the above equation, $\rho(A_1 A_2 \cdots A_t)$ refers to the classical spectral radius of the matrix $A_1 A_2 \cdots A_t$.
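For a concrete pair where the finiteness property does hold, take the two binary matrices below: their joint spectral radius equals the golden ratio and is attained by the length-2 product AB. A brute-force scan (a numerical illustration, not a proof) confirms that no product of length up to 6 does better:

```python
import itertools
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
phi = (1 + 5 ** 0.5) / 2   # the joint spectral radius of this pair

# Best averaged spectral radius rho(P)^(1/k) over all products of
# length up to 6; it is attained by the product AB (rho(AB) = phi^2).
best = 0.0
for k in range(1, 7):
    for combo in itertools.product([A, B], repeat=k):
        P = combo[0]
        for M in combo[1:]:
            P = P @ M
        best = max(best, max(abs(np.linalg.eigvals(P))) ** (1.0 / k))
```

Here `best` equals $\varphi$, matching the norm-based upper bound $\max(\|A\|_2, \|B\|_2) = \varphi$, which is what certifies the exact value for this pair.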
This conjecture, proposed in 1995, was proven to be false in 2003. [15] The counterexample provided in that reference uses advanced measure-theoretic ideas. Subsequently, many other counterexamples have been provided, including an elementary counterexample that uses simple combinatorial properties of matrices [16] and a counterexample based on dynamical systems properties. [17] More recently, an explicit counterexample was proposed in [18]. Many questions related to this conjecture are still open, such as the question of whether it holds for pairs of binary matrices. [19] [20]
The joint spectral radius was introduced for its interpretation as a stability condition for discrete-time switched dynamical systems. Indeed, the system defined by the equations
$$x_{t+1} = A_{\sigma(t)} x_t, \qquad A_{\sigma(t)} \in \mathcal M,$$
where $\sigma$ is an arbitrary switching sequence, is stable (meaning that $x_t \to 0$ for every initial condition and every switching sequence) if and only if the joint spectral radius of the set $\mathcal M$ of matrices satisfies $\rho(\mathcal M) < 1$.
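A small numpy simulation illustrating this stability criterion: the pair below has joint spectral radius equal to the golden ratio $\varphi$, so dividing both matrices by a constant larger than $\varphi$ yields $\rho < 1$, and the state decays under any switching sequence (the random switching signal here is purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# This pair has joint spectral radius phi (the golden ratio);
# dividing by phi + 0.1 makes the joint spectral radius < 1.
phi = (1 + 5 ** 0.5) / 2
A = np.array([[1.0, 1.0], [0.0, 1.0]]) / (phi + 0.1)
B = np.array([[1.0, 0.0], [1.0, 1.0]]) / (phi + 0.1)

x = np.array([1.0, 1.0])
for t in range(200):
    x = (A if rng.random() < 0.5 else B) @ x  # arbitrary switching signal

final_norm = np.linalg.norm(x)  # decays toward 0 because rho < 1
```

Had the matrices been divided by a constant smaller than $\varphi$ instead, some switching sequences (e.g. alternating A, B) would make the state grow without bound.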
The joint spectral radius became popular when Ingrid Daubechies and Jeffrey Lagarias showed that it rules the continuity of certain wavelet functions. Since then, it has found many applications, ranging from number theory and information theory to consensus among autonomous agents and combinatorics on words.
The joint spectral radius is the generalization of the spectral radius of a matrix to a set of several matrices. However, many more quantities can be defined when considering a set of matrices: the joint spectral subradius characterizes the minimal rate of growth of products in the semigroup generated by the set; the p-radius characterizes the rate of growth of the average of the norms of the products in the semigroup; the Lyapunov exponent of the set of matrices characterizes the rate of growth of the geometric average.
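As an illustration of the first of these related quantities, the joint spectral subradius can be bounded from above by brute force, since repeating any single product realizes that product's averaged growth rate (a sketch; the enumeration is exponential in k, so this is only feasible for tiny sets):

```python
import itertools
import numpy as np

def subradius_upper_bounds(matrices, k):
    """Brute-force upper bounds on the joint spectral subradius:
    min ||P||^(1/k) and min rho(P)^(1/k) over length-k products P
    both upper-bound it, since repeating the minimizing product
    realizes that growth rate."""
    prods = [np.linalg.multi_dot(c)
             for c in itertools.product(matrices, repeat=k)]
    norm_bound = min(np.linalg.norm(P, 2) for P in prods) ** (1.0 / k)
    rho_bound = min(max(abs(np.linalg.eigvals(P))) for P in prods) ** (1.0 / k)
    return norm_bound, rho_bound

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
nb, rb = subradius_upper_bounds([A, B], 2)  # rb = 1.0 here, from rho(A @ A) = 1
```

Note the contrast with the joint spectral radius: maxima are replaced by minima, and unlike for the joint spectral radius, the spectral-radius-based bound is an upper rather than a lower bound.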