
In mathematics, the spectral radius of a square matrix or a bounded linear operator is the largest absolute value of its eigenvalues (i.e. supremum among the absolute values of the elements in its spectrum). It is sometimes denoted by ρ(·).

## Matrices

Let λ1, ..., λn be the (real or complex) eigenvalues of a matrix ${\displaystyle A\in \mathbb {C} ^{n\times n}}$. Then its spectral radius ρ(A) is defined as:

${\displaystyle \rho (A)=\max \left\{|\lambda _{1}|,\dotsc ,|\lambda _{n}|\right\}.}$
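Numerically, the definition amounts to taking the maximum modulus of the computed eigenvalues. The following sketch (an illustrative NumPy snippet; the matrix is an arbitrary example, not from the article) shows this directly:

```python
import numpy as np

def spectral_radius(A):
    """Largest absolute value of the eigenvalues of a square matrix."""
    return max(abs(np.linalg.eigvals(A)))

# The eigenvalues of this matrix are 3 and 1, so the spectral radius is 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(spectral_radius(A))  # 3.0
```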

The spectral radius is a sort of infimum of all norms of a matrix. Indeed, on the one hand, ${\displaystyle \rho (A)\leqslant \|A\|}$ for every natural matrix norm ${\displaystyle \|\cdot \|}$; and on the other hand, Gelfand's formula states that ${\displaystyle \rho (A)=\lim _{k\to \infty }\|A^{k}\|^{1/k}}$. Both these results are shown below.

However, the spectral radius does not necessarily satisfy ${\displaystyle \|A\mathbf {v} \|\leqslant \rho (A)\|\mathbf {v} \|}$ for arbitrary vectors ${\displaystyle \mathbf {v} \in \mathbb {C} ^{n}}$. To see why, let ${\displaystyle r>1}$ be arbitrary and consider the matrix

${\displaystyle C_{r}={\begin{pmatrix}0&r^{-1}\\r&0\end{pmatrix}}}$.

The characteristic polynomial of ${\displaystyle C_{r}}$ is ${\displaystyle \lambda ^{2}-1}$, so its eigenvalues are ${\displaystyle \{-1,1\}}$ and thus ${\displaystyle \rho (C_{r})=1}$. However, ${\displaystyle C_{r}\mathbf {e} _{1}=r\mathbf {e} _{2}}$. As a result, for any ${\displaystyle \ell ^{p}}$ norm,

${\displaystyle \|C_{r}\mathbf {e} _{1}\|=r>1=\rho (C_{r})\|\mathbf {e} _{1}\|.}$

As an illustration of Gelfand's formula, note that ${\displaystyle \|C_{r}^{k}\|^{1/k}\to 1}$ as ${\displaystyle k\to \infty }$, since ${\displaystyle C_{r}^{k}=I}$ if ${\displaystyle k}$ is even and ${\displaystyle C_{r}^{k}=C_{r}}$ if ${\displaystyle k}$ is odd.
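The behaviour of ${\displaystyle C_{r}}$ can be checked numerically; the sketch below (an illustrative NumPy snippet with the arbitrary choice r = 3) confirms that ρ(C_r) = 1 while ‖C_r e₁‖ = r, and that C_r² = I, which drives the Gelfand limit to 1:

```python
import numpy as np

r = 3.0
C = np.array([[0.0, 1.0 / r],
              [r,   0.0]])

rho = max(abs(np.linalg.eigvals(C)))   # eigenvalues are +1 and -1
print(rho)                             # 1.0

e1 = np.array([1.0, 0.0])
print(np.linalg.norm(C @ e1))          # r = 3.0 > rho * ||e1|| = 1.0

# Gelfand: ||C^k||^(1/k) -> 1, since C^2 = I
for k in [1, 2, 9, 10]:
    print(k, np.linalg.norm(np.linalg.matrix_power(C, k), 2) ** (1 / k))
```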

A special case in which ${\displaystyle \|A\mathbf {v} \|\leqslant \rho (A)\|\mathbf {v} \|}$ for all ${\displaystyle \mathbf {v} \in \mathbb {C} ^{n}}$ is when ${\displaystyle A}$ is a Hermitian matrix and ${\displaystyle \|\cdot \|}$ is the Euclidean norm. This is because any Hermitian matrix is diagonalizable by a unitary matrix as ${\displaystyle A=U^{*}DU}$, unitary matrices preserve vector length, and the diagonal entries of ${\displaystyle D}$ are bounded in absolute value by ρ(A):

${\displaystyle \|A\mathbf {v} \|=\|U^{*}DU\mathbf {v} \|=\|DU\mathbf {v} \|\leqslant \rho (A)\|U\mathbf {v} \|=\rho (A)\|\mathbf {v} \|.}$
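This inequality can be spot-checked numerically; the sketch below (an illustrative NumPy snippet, not part of the article) builds a random Hermitian matrix and verifies the Euclidean-norm bound for a random vector:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                      # Hermitian by construction
rho = max(abs(np.linalg.eigvalsh(A)))   # eigvalsh: eigenvalues of a Hermitian matrix

# ||A v|| <= rho(A) ||v|| in the Euclidean norm, for any v
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
print(np.linalg.norm(A @ v) <= rho * np.linalg.norm(v))  # True
```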

## Bounded linear operators

For a bounded linear operator A on a Banach space, the eigenvalues are replaced with the spectrum of the operator: the set of values ${\displaystyle \lambda }$ for which ${\displaystyle A-\lambda I}$ fails to be invertible as a bounded operator; we denote the spectrum by

${\displaystyle \sigma (A)=\left\{\lambda \in \mathbb {C} \ :\ A-\lambda I\ {\text{is not invertible}}\right\}.}$

The spectral radius is then defined as the supremum of the magnitudes of elements in the spectrum:

${\displaystyle \rho (A)=\sup _{\lambda \in \sigma (A)}|\lambda |}$

If we denote the operator norm by ${\displaystyle \|\cdot \|}$, then we have the spectral radius formula or Gelfand's formula:

${\displaystyle \rho (A)=\lim _{k\to \infty }\|A^{k}\|^{\frac {1}{k}}.}$

A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator.

## Graphs

The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix.

This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real number C such that the degree of every vertex of the graph is smaller than C). In this case, for the graph G define:

${\displaystyle \ell ^{2}(G)=\left\{f:V(G)\to \mathbf {R} \ :\ \sum \nolimits _{v\in V(G)}\left|f(v)\right|^{2}<\infty \right\}.}$

Let γ be the adjacency operator of G:

${\displaystyle {\begin{cases}\gamma :\ell ^{2}(G)\to \ell ^{2}(G)\\(\gamma f)(v)=\sum _{(u,v)\in E(G)}f(u)\end{cases}}}$

The spectral radius of G is defined to be the spectral radius of the bounded linear operator γ.
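For a finite graph the definition reduces to the spectral radius of the adjacency matrix. The following sketch (an illustrative NumPy snippet; the 4-cycle is an arbitrary example) computes it for the cycle graph C₄:

```python
import numpy as np

# Adjacency matrix of the cycle graph C_4 (each vertex has degree 2)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

rho = max(abs(np.linalg.eigvals(A)))
print(rho)  # 2.0: for a d-regular graph the spectral radius equals d
```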

## Upper bound

### Upper bounds for spectral radius of a matrix

The following proposition shows a simple yet useful upper bound for the spectral radius of a matrix:

Proposition. Let ${\displaystyle A\in \mathbb {C} ^{n\times n}}$ with spectral radius ρ(A), and let ||⋅|| be a consistent matrix norm. Then for each integer ${\displaystyle k\geqslant 1}$:

${\displaystyle \rho (A)\leq \|A^{k}\|^{\frac {1}{k}}.}$

Proof

Let (v, λ) be an eigenvector-eigenvalue pair for a matrix A. By the sub-multiplicative property of the matrix norm, we get:

${\displaystyle |\lambda |^{k}\|\mathbf {v} \|=\|\lambda ^{k}\mathbf {v} \|=\|A^{k}\mathbf {v} \|\leq \|A^{k}\|\cdot \|\mathbf {v} \|}$

and since v ≠ 0 we have

${\displaystyle |\lambda |^{k}\leq \|A^{k}\|}$

and therefore

${\displaystyle \rho (A)\leq \|A^{k}\|^{\frac {1}{k}}.}$
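The proposition is easy to check numerically; the sketch below (an illustrative NumPy snippet with an arbitrary example matrix) verifies the bound for the Frobenius norm, which is consistent:

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [0.5, 0.0]])
rho = max(abs(np.linalg.eigvals(A)))  # eigenvalues are +1 and -1, so rho = 1

# rho(A) <= ||A^k||^(1/k) for every k >= 1 and every consistent norm
for k in range(1, 6):
    bound = np.linalg.norm(np.linalg.matrix_power(A, k), 'fro') ** (1 / k)
    assert rho <= bound + 1e-12
    print(k, bound)
```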

### Upper bounds for spectral radius of a graph

There are many upper bounds for the spectral radius of a graph in terms of its number n of vertices and its number m of edges. For instance, if

${\displaystyle {\frac {(k-2)(k-3)}{2}}\leq m-n\leq {\frac {k(k-3)}{2}}}$

where ${\displaystyle 3\leq k\leq n}$ is an integer, then [1]

${\displaystyle \rho (G)\leq {\sqrt {2m-n-k+{\frac {5}{2}}+{\sqrt {2m-2n+{\frac {9}{4}}}}}}}$
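As a numerical illustration (a NumPy sketch of my own, not from the cited paper), consider the complete graph K₄: here n = 4, m = 6, and k = 4 satisfies the hypothesis, and the bound is attained:

```python
import numpy as np

# Complete graph K_4: n = 4 vertices, m = 6 edges, rho = n - 1 = 3
n, m = 4, 6
A = np.ones((n, n)) - np.eye(n)
rho = max(abs(np.linalg.eigvals(A)))

k = 4  # satisfies (k-2)(k-3)/2 = 1 <= m - n = 2 <= k(k-3)/2 = 2
bound = np.sqrt(2 * m - n - k + 5 / 2 + np.sqrt(2 * m - 2 * n + 9 / 4))
print(rho, bound)  # 3.0 3.0 -- the bound is attained for this graph
```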

## Power sequence

### Theorem

The spectral radius is closely related to the convergence behaviour of the power sequence of a matrix; namely, the following theorem holds:

Theorem. Let ${\displaystyle A\in \mathbb {C} ^{n\times n}}$ with spectral radius ρ(A). Then ρ(A) < 1 if and only if
${\displaystyle \lim _{k\to \infty }A^{k}=0.}$
On the other hand, if ρ(A) > 1, then ${\displaystyle \lim _{k\to \infty }\|A^{k}\|=\infty }$. The statement holds for any choice of matrix norm on ${\displaystyle \mathbb {C} ^{n\times n}}$.
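The theorem can be observed numerically; the sketch below (an illustrative NumPy snippet with an arbitrary example matrix) shows the powers decaying to zero when ρ(A) < 1, even though the norm of A itself exceeds 1:

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.5]])           # rho(A) = 0.5 < 1, despite ||A||_inf = 1.5
for k in [1, 10, 50]:
    print(k, np.linalg.norm(np.linalg.matrix_power(A, k), np.inf))
# the norms decay to 0 even though the k = 1 norm exceeds 1
```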

### Proof of theorem

Assume that the limit in question is zero; we will show that ρ(A) < 1. Let (v, λ) be an eigenvector-eigenvalue pair for A. Since ${\displaystyle A^{k}\mathbf {v} =\lambda ^{k}\mathbf {v} }$, we have:

${\displaystyle {\begin{aligned}0&=\left(\lim _{k\to \infty }A^{k}\right)\mathbf {v} \\&=\lim _{k\to \infty }\left(A^{k}\mathbf {v} \right)\\&=\lim _{k\to \infty }\lambda ^{k}\mathbf {v} \\&=\mathbf {v} \lim _{k\to \infty }\lambda ^{k}\end{aligned}}}$

and, since by hypothesis v ≠ 0, we must have

${\displaystyle \lim _{k\to \infty }\lambda ^{k}=0}$

which implies |λ| < 1. Since this must be true for any eigenvalue λ, we can conclude ρ(A) < 1.

Now assume the spectral radius of A is less than 1. From the Jordan normal form theorem, we know that for all ${\displaystyle A\in \mathbb {C} ^{n\times n}}$, there exist ${\displaystyle V,J\in \mathbb {C} ^{n\times n}}$ with V non-singular and J block diagonal such that:

${\displaystyle A=VJV^{-1}}$

with

${\displaystyle J={\begin{bmatrix}J_{m_{1}}(\lambda _{1})&0&0&\cdots &0\\0&J_{m_{2}}(\lambda _{2})&0&\cdots &0\\\vdots &\cdots &\ddots &\cdots &\vdots \\0&\cdots &0&J_{m_{s-1}}(\lambda _{s-1})&0\\0&\cdots &\cdots &0&J_{m_{s}}(\lambda _{s})\end{bmatrix}}}$

where

${\displaystyle J_{m_{i}}(\lambda _{i})={\begin{bmatrix}\lambda _{i}&1&0&\cdots &0\\0&\lambda _{i}&1&\cdots &0\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &\lambda _{i}&1\\0&0&\cdots &0&\lambda _{i}\end{bmatrix}}\in \mathbf {C} ^{m_{i}\times m_{i}},1\leq i\leq s.}$

It is easy to see that

${\displaystyle A^{k}=VJ^{k}V^{-1}}$

and, since J is block-diagonal,

${\displaystyle J^{k}={\begin{bmatrix}J_{m_{1}}^{k}(\lambda _{1})&0&0&\cdots &0\\0&J_{m_{2}}^{k}(\lambda _{2})&0&\cdots &0\\\vdots &\cdots &\ddots &\cdots &\vdots \\0&\cdots &0&J_{m_{s-1}}^{k}(\lambda _{s-1})&0\\0&\cdots &\cdots &0&J_{m_{s}}^{k}(\lambda _{s})\end{bmatrix}}}$

Now, a standard result on the k-power of an ${\displaystyle m_{i}\times m_{i}}$ Jordan block states that, for ${\displaystyle k\geq m_{i}-1}$:

${\displaystyle J_{m_{i}}^{k}(\lambda _{i})={\begin{bmatrix}\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}&{k \choose 2}\lambda _{i}^{k-2}&\cdots &{k \choose m_{i}-1}\lambda _{i}^{k-m_{i}+1}\\0&\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}&\cdots &{k \choose m_{i}-2}\lambda _{i}^{k-m_{i}+2}\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}\\0&0&\cdots &0&\lambda _{i}^{k}\end{bmatrix}}}$
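This binomial formula for the power of a Jordan block can be verified numerically; the following is an illustrative NumPy sketch (the values of λ, m, and k are arbitrary choices satisfying k ≥ m − 1):

```python
import numpy as np
from math import comb

lam, m, k = 0.9, 3, 7
J = lam * np.eye(m) + np.diag(np.ones(m - 1), 1)   # 3x3 Jordan block

# entry (i, j) of J^k is C(k, j-i) * lam^(k-(j-i)) for j >= i, else 0
expected = np.array([[comb(k, j - i) * lam ** (k - (j - i)) if j >= i else 0.0
                      for j in range(m)] for i in range(m)])
print(np.allclose(np.linalg.matrix_power(J, k), expected))  # True
```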

Thus, if ${\displaystyle \rho (A)<1}$, then ${\displaystyle |\lambda _{i}|<1}$ for all i. Hence for all i we have:

${\displaystyle \lim _{k\to \infty }J_{m_{i}}^{k}=0}$

which implies

${\displaystyle \lim _{k\to \infty }J^{k}=0.}$

Therefore,

${\displaystyle \lim _{k\to \infty }A^{k}=\lim _{k\to \infty }VJ^{k}V^{-1}=V\left(\lim _{k\to \infty }J^{k}\right)V^{-1}=0}$

On the other hand, if ${\displaystyle \rho (A)>1}$, at least one diagonal entry of J has modulus greater than 1, so the corresponding entry of ${\displaystyle J^{k}}$ does not remain bounded as k increases, proving the second part of the statement.

## Gelfand's formula

### Theorem

The next theorem gives the spectral radius as a limit of matrix norms.

Theorem (Gelfand's Formula; 1941). For any matrix norm ||⋅||, we have
${\displaystyle \rho (A)=\lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}.}$ [2]

### Proof

For any ε > 0, first we construct the following two matrices:

${\displaystyle A_{\pm }={\frac {1}{\rho (A)\pm \varepsilon }}A.}$

Then:

${\displaystyle \rho \left(A_{\pm }\right)={\frac {\rho (A)}{\rho (A)\pm \varepsilon }},\qquad \rho (A_{+})<1<\rho (A_{-}).}$

First we apply the previous theorem to A+:

${\displaystyle \lim _{k\to \infty }A_{+}^{k}=0.}$

That means, by the definition of the limit of a sequence, there exists ${\displaystyle N_{+}\in \mathbb {N} }$ such that for all k ≥ N+,

${\displaystyle \left\|A_{+}^{k}\right\|<1}$

so

${\displaystyle \left\|A^{k}\right\|^{\frac {1}{k}}<\rho (A)+\varepsilon .}$

Applying the previous theorem to A− implies that ${\displaystyle \|A_{-}^{k}\|}$ is not bounded, so there exists ${\displaystyle N_{-}\in \mathbb {N} }$ such that for all k ≥ N−,

${\displaystyle \left\|A_{-}^{k}\right\|>1}$

so

${\displaystyle \left\|A^{k}\right\|^{\frac {1}{k}}>\rho (A)-\varepsilon .}$

Let N = max{N+, N−}; then we have:

${\displaystyle \forall \varepsilon >0\quad \exists N\in \mathbf {N} \quad \forall k\geq N\quad \rho (A)-\varepsilon <\left\|A^{k}\right\|^{\frac {1}{k}}<\rho (A)+\varepsilon }$

which, by definition, is

${\displaystyle \lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}=\rho (A).}$

## Gelfand corollaries

Gelfand's formula leads directly to a bound on the spectral radius of a product of finitely many matrices: assuming that they all commute, we obtain

${\displaystyle \rho (A_{1}\cdots A_{n})\leq \rho (A_{1})\cdots \rho (A_{n}).}$
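The commuting-product bound can be spot-checked numerically; the sketch below (an illustrative NumPy snippet of my own) builds two simultaneously diagonalizable, hence commuting, matrices and verifies the inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))
# Simultaneously diagonalizable matrices commute
A1 = P @ np.diag([2.0, 0.5, 1.0]) @ np.linalg.inv(P)
A2 = P @ np.diag([3.0, 4.0, 0.1]) @ np.linalg.inv(P)

rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(np.allclose(A1 @ A2, A2 @ A1))              # True: they commute
print(rho(A1 @ A2) <= rho(A1) * rho(A2) + 1e-9)   # True: 6 <= 2 * 4
```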

Actually, in case the norm is consistent, the proof shows more than the thesis: using the proposition above, we can replace the lower bound ρ(A) − ε in the limit definition with the spectral radius itself and write more precisely:

${\displaystyle \forall \varepsilon >0,\exists N\in \mathbf {N} ,\forall k\geq N\quad \rho (A)\leq \|A^{k}\|^{\frac {1}{k}}<\rho (A)+\varepsilon }$

which, by definition, is

${\displaystyle \lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}=\rho (A)^{+},}$

where the + means that the limit is approached from above.

## Example

Consider the matrix

${\displaystyle A={\begin{bmatrix}9&-1&2\\-2&8&4\\1&1&8\end{bmatrix}}}$

whose eigenvalues are 5, 10, 10; by definition, ρ(A) = 10. In the following table, the values of ${\displaystyle \|A^{k}\|^{\frac {1}{k}}}$ for the four most used norms are listed against several increasing values of k (note that, due to the particular form of this matrix, ${\displaystyle \|.\|_{1}=\|.\|_{\infty }}$):

| k | ${\displaystyle \|.\|_{1}=\|.\|_{\infty }}$ | ${\displaystyle \|.\|_{F}}$ | ${\displaystyle \|.\|_{2}}$ |
|---|---|---|---|
| 1 | 14 | 15.362291496 | 10.681145748 |
| 2 | 12.649110641 | 12.328294348 | 10.595665162 |
| 3 | 11.934831919 | 11.532450664 | 10.500980846 |
| 4 | 11.501633169 | 11.151002986 | 10.418165779 |
| 5 | 11.216043151 | 10.921242235 | 10.351918183 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 10 | 10.604944422 | 10.455910430 | 10.183690042 |
| 11 | 10.548677680 | 10.413702213 | 10.166990229 |
| 12 | 10.501921835 | 10.378620930 | 10.153031596 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 20 | 10.298254399 | 10.225504447 | 10.091577411 |
| 30 | 10.197860892 | 10.149776921 | 10.060958900 |
| 40 | 10.148031640 | 10.112123681 | 10.045684426 |
| 50 | 10.118251035 | 10.089598820 | 10.036530875 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 100 | 10.058951752 | 10.044699508 | 10.018248786 |
| 200 | 10.029432562 | 10.022324834 | 10.009120234 |
| 300 | 10.019612095 | 10.014877690 | 10.006079232 |
| 400 | 10.014705469 | 10.011156194 | 10.004559078 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 1000 | 10.005879594 | 10.004460985 | 10.001823382 |
| 2000 | 10.002939365 | 10.002230244 | 10.000911649 |
| 3000 | 10.001959481 | 10.001486774 | 10.000607757 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 10000 | 10.000587804 | 10.000446009 | 10.000182323 |
| 20000 | 10.000293898 | 10.000223002 | 10.000091161 |
| 30000 | 10.000195931 | 10.000148667 | 10.000060774 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 100000 | 10.000058779 | 10.000044600 | 10.000018232 |
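Values like those in the table can be reproduced with a few lines of code; the sketch below (an illustrative NumPy snippet; kept to moderate k to avoid floating-point overflow) computes ${\displaystyle \|A^{k}\|^{1/k}}$ for the same matrix:

```python
import numpy as np

A = np.array([[9., -1., 2.],
              [-2., 8., 4.],
              [1., 1., 8.]])
rho = max(abs(np.linalg.eigvals(A)))   # 10.0

def gelfand(k, ord):
    # k-th root of the chosen norm of A^k
    return np.linalg.norm(np.linalg.matrix_power(A, k), ord) ** (1.0 / k)

for k in [1, 5, 50]:
    # columns: 1-norm, Frobenius norm, spectral norm
    print(k, gelfand(k, 1), gelfand(k, 'fro'), gelfand(k, 2))
```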

## Notes and references

1. Guo, Ji-Ming; Wang, Zhi-Wen; Li, Xin (2019). "Sharp upper bounds of the spectral radius of a graph". Discrete Mathematics. 342 (9): 2559–2563. doi:10.1016/j.disc.2019.05.017.
2. The formula holds for any Banach algebra; see Lemma IX.1.8 in Dunford & Schwartz 1963 and Lax 2002 , pp. 195–197

## Bibliography

• Dunford, Nelson; Schwartz, Jacob (1963), Linear operators II. Spectral Theory: Self Adjoint Operators in Hilbert Space, Interscience Publishers, Inc.
• Lax, Peter D. (2002), Functional Analysis, Wiley-Interscience, ISBN   0-471-55604-1
