Kreiss matrix theorem

In matrix analysis, the Kreiss matrix theorem relates the so-called Kreiss constant of a matrix to the norms of the powers of that matrix. It was originally introduced by Heinz-Otto Kreiss to analyze the stability of finite difference methods for partial differential equations.[1][2]

Kreiss constant of a matrix

Given a matrix A, the Kreiss constant 𝒦(A) of A (with respect to the closed unit circle) is defined as[3]

    \mathcal{K}(A) = \sup_{|z| > 1} \, (|z| - 1) \left\| (zI - A)^{-1} \right\|,

while the Kreiss constant 𝒦lhp(A) with respect to the left half-plane is given by[3]

    \mathcal{K}_{\mathrm{lhp}}(A) = \sup_{\operatorname{Re}(z) > 0} \, \operatorname{Re}(z) \left\| (zI - A)^{-1} \right\|.
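
As a numerical illustration that is not part of the cited sources, both suprema can be approximated from below by sampling the resolvent norm on a finite grid of points outside the unit circle or in the right half-plane. The Python sketch below assumes NumPy and uses the made-up helper names kreiss_unit_disk and kreiss_lhp; the grids are arbitrary choices and only give crude under-estimates of the true constants.

    import numpy as np

    def kreiss_unit_disk(A, radii=None, angles=None):
        """Crude grid estimate of K(A) = sup_{|z|>1} (|z|-1) * ||(zI - A)^{-1}||.

        A finite grid can only under-estimate the supremum.
        """
        n = A.shape[0]
        radii = np.linspace(1.001, 10.0, 200) if radii is None else radii
        angles = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False) if angles is None else angles
        best = 0.0
        for r in radii:
            for theta in angles:
                z = r * np.exp(1j * theta)
                resolvent_norm = np.linalg.norm(np.linalg.inv(z * np.eye(n) - A), 2)
                best = max(best, (r - 1.0) * resolvent_norm)
        return best

    def kreiss_lhp(A, reals=None, imags=None):
        """Crude grid estimate of K_lhp(A) = sup_{Re z > 0} Re(z) * ||(zI - A)^{-1}||."""
        n = A.shape[0]
        reals = np.linspace(1e-3, 10.0, 200) if reals is None else reals
        imags = np.linspace(-20.0, 20.0, 101) if imags is None else imags
        best = 0.0
        for x in reals:
            for y in imags:
                z = x + 1j * y
                resolvent_norm = np.linalg.norm(np.linalg.inv(z * np.eye(n) - A), 2)
                best = max(best, x * resolvent_norm)
        return best

Dedicated algorithms exist for computing Kreiss constants to high accuracy (see Mitchell [5] and Apkarian–Noll [6]); the sampling above is only meant to make the definitions concrete.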

Properties

For any matrix A, the Kreiss constants satisfy 𝒦(A) ≥ 1 and 𝒦lhp(A) ≥ 1, since letting |z| → ∞ (respectively, Re(z) → ∞ along the real axis) in the definitions gives (|z| − 1)‖(zI − A)^{−1}‖ → 1 (respectively, Re(z)‖(zI − A)^{−1}‖ → 1). Moreover, 𝒦(A) is finite only if the spectral radius of A is at most 1, and 𝒦lhp(A) is finite only if every eigenvalue of A has non-positive real part; otherwise the resolvent norm in the supremum blows up near an eigenvalue lying in the sampling region.

Statement of Kreiss matrix theorem

Let A be a square matrix of order n and let e be Euler's number. The modern and sharp version of the Kreiss matrix theorem states that the inequality below is tight:[3][7]

    \mathcal{K}(A) \le \sup_{k \ge 0} \left\| A^{k} \right\| \le e \, n \, \mathcal{K}(A),

and it follows from an application of Spijker's lemma.[8]

There also exists an analogous result in terms of the Kreiss constant with respect to the left half-plane and the matrix exponential:[3][9]

    \mathcal{K}_{\mathrm{lhp}}(A) \le \sup_{t \ge 0} \left\| e^{tA} \right\| \le e \, n \, \mathcal{K}_{\mathrm{lhp}}(A).
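
As a quick sanity check of these inequalities (an illustrative sketch, not taken from the cited sources), one can compare grid estimates of the Kreiss constants against truncated suprema of ‖A^k‖ and ‖e^{tA}‖ for small stable matrices. The example matrices and truncation horizons below are arbitrary, NumPy and SciPy are assumed, and kreiss_unit_disk / kreiss_lhp are the crude estimators sketched earlier.

    import numpy as np
    from scipy.linalg import expm

    # Discrete-time example: spectral radius below 1, but strongly non-normal.
    A_d = np.array([[0.8, 5.0],
                    [0.0, 0.7]])
    # Continuous-time example: eigenvalues in the open left half-plane.
    A_c = np.array([[-0.2, 5.0],
                    [0.0, -0.3]])
    n = 2

    # Truncated suprema; the tails decay, so a finite horizon suffices here.
    power_sup = max(np.linalg.norm(np.linalg.matrix_power(A_d, k), 2) for k in range(300))
    exp_sup = max(np.linalg.norm(expm(t * A_c), 2) for t in np.linspace(0.0, 100.0, 2000))

    K = kreiss_unit_disk(A_d)   # under-estimate of K(A_d)
    K_lhp = kreiss_lhp(A_c)     # under-estimate of K_lhp(A_c)

    # Both chains of inequalities should hold (up to the grid error in K and K_lhp).
    print(f"{K:.3f} <= {power_sup:.3f} <= {np.e * n * K:.3f}")
    print(f"{K_lhp:.3f} <= {exp_sup:.3f} <= {np.e * n * K_lhp:.3f}")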

Consequences and applications

The value \sup_{k \ge 0} \| A^{k} \| (respectively, \sup_{t \ge 0} \| e^{tA} \|) can be interpreted as the maximum transient growth of the discrete-time system x_{k+1} = A x_k (respectively, the continuous-time system x'(t) = A x(t)).

Thus, the Kreiss matrix theorem gives both upper and lower bounds on the transient behavior of the system with dynamics given by the matrix A: a large (and finite) Kreiss constant indicates that the system will go through a pronounced transient phase before decaying to zero.[5][6]
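
To make this interpretation concrete, the short sketch below (the matrices are arbitrary illustrative choices, not taken from the cited references) compares the powers of a normal and a non-normal matrix with the same spectrum: both sequences ‖A^k‖ eventually decay to zero, but the non-normal one first climbs well above 1, which is exactly the transient hump that a large Kreiss constant signals.

    import numpy as np

    # Two stable matrices with the same eigenvalues {0.9, 0.5}.
    A_normal = np.diag([0.9, 0.5])
    A_nonnormal = np.array([[0.9, 20.0],
                            [0.0, 0.5]])

    for name, A in (("normal", A_normal), ("non-normal", A_nonnormal)):
        norms = [np.linalg.norm(np.linalg.matrix_power(A, k), 2) for k in range(121)]
        print(f"{name:11s}  max_k ||A^k|| = {max(norms):7.2f}   ||A^120|| = {norms[-1]:.2e}")

    # The normal matrix never exceeds ||A^0|| = 1, while the non-normal one
    # grows to roughly 30 before its powers decay towards zero.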

References

  1. Kreiss, Heinz-Otto (1962). "Über die Stabilitätsdefinition für Differenzengleichungen die partielle Differentialgleichungen approximieren". BIT. 2 (3): 153–181. doi:10.1007/bf01957330. ISSN 0006-3835. S2CID 118346536.
  2. Strikwerda, John; Wade, Bruce (1997). "A survey of the Kreiss matrix theorem for power bounded families of matrices and its extensions". Banach Center Publications. 38 (1): 339–360. doi:10.4064/-38-1-339-360. ISSN 0137-6934.
  3. Raouafi, Samir (2018). "A generalization of the Kreiss Matrix Theorem". Linear Algebra and Its Applications. 549: 86–99. doi:10.1016/j.laa.2018.03.011. S2CID 126237400.
  4. Stroh, Jacob Nathaniel (2006). Non-normality in scalar delay differential equations (PDF) (Thesis).
  5. Mitchell, Tim (2020). "Computing the Kreiss Constant of a Matrix". SIAM Journal on Matrix Analysis and Applications. 41 (4): 1944–1975. arXiv:1907.06537. doi:10.1137/19m1275127. ISSN 0895-4798. S2CID 196622538.
  6. Apkarian, Pierre; Noll, Dominikus (2020). "Optimizing the Kreiss Constant". SIAM Journal on Control and Optimization. 58 (6): 3342–3362. arXiv:1910.12572. doi:10.1137/19m1296215. ISSN 0363-0129. S2CID 204904802.
  7. Trefethen, Lloyd N.; Embree, Mark (2005). Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton University Press. p. 177.
  8. Wegert, Elias; Trefethen, Lloyd N. (1994). "From the Buffon Needle Problem to the Kreiss Matrix Theorem". The American Mathematical Monthly. 101 (2): 132. doi:10.2307/2324361. hdl:1813/7113. JSTOR 2324361.
  9. Trefethen, Lloyd N.; Embree, Mark (2005). Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton University Press. p. 183.