Arrowhead matrix

In the mathematical field of linear algebra, an arrowhead matrix is a square matrix containing zeros in all entries except for the first row, first column, and main diagonal; these entries can be any number. [1] [2] In other words, the matrix has the form

$$A = \begin{bmatrix} * & * & * & * \\ * & * & 0 & 0 \\ * & 0 & * & 0 \\ * & 0 & 0 & * \end{bmatrix},$$

where $*$ denotes an arbitrary entry.

Any symmetric permutation of the arrowhead matrix, $P^{T} A P$, where P is a permutation matrix, is a (permuted) arrowhead matrix. Real symmetric arrowhead matrices are used in some algorithms for finding eigenvalues and eigenvectors. [3]
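
As a concrete illustration (not taken from the cited sources), the following Python/NumPy sketch assembles an arrowhead matrix from its diagonal, arrow vector, and tip, and checks that a symmetric permutation $P^{T} A P$ preserves the arrowhead pattern; the helper name arrowhead is our own, and it builds the "arrow pointing down-right" form used in the next section:

```python
import numpy as np

def arrowhead(d, z, alpha):
    """Assemble the (permuted) arrowhead matrix [[diag(d), z], [z.T, alpha]]
    from its diagonal d, arrow vector z, and tip alpha."""
    d = np.asarray(d, dtype=float)
    z = np.asarray(z, dtype=float)
    n = d.size + 1
    A = np.zeros((n, n))
    A[:-1, :-1] = np.diag(d)
    A[:-1, -1] = z
    A[-1, :-1] = z
    A[-1, -1] = alpha
    return A

A = arrowhead([1.0, 2.0, 3.0], [0.5, -0.5, 2.0], 4.0)

# A symmetric permutation P^T A P is again an arrowhead matrix: the arrow moves
# to the row/column that the last index is mapped to.
perm = np.random.default_rng(0).permutation(A.shape[0])
P = np.eye(A.shape[0])[:, perm]
B = P.T @ A @ P                           # equals A[perm][:, perm]
j = np.flatnonzero(perm == A.shape[0] - 1)[0]
mask = np.zeros(B.shape, dtype=bool)
mask[j, :] = mask[:, j] = True            # row j and column j may be nonzero,
np.fill_diagonal(mask, True)              # ... as may the main diagonal
print(np.all(B[~mask] == 0))              # True: the pattern is preserved
```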

Real symmetric arrowhead matrices

Let A be a real symmetric (permuted) arrowhead matrix of the form

$$A = \begin{bmatrix} D & z \\ z^{T} & \alpha \end{bmatrix},$$

where $D = \operatorname{diag}(d_1, d_2, \ldots, d_{n-1})$ is a diagonal matrix of order $n-1$,

$$z = \begin{bmatrix} \zeta_1 & \zeta_2 & \cdots & \zeta_{n-1} \end{bmatrix}^{T}$$

is a vector, and $\alpha$ is a scalar. Let

$$A = V \Lambda V^{T}$$

be the eigenvalue decomposition of A, where

$$\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$$

is a diagonal matrix whose diagonal elements are the eigenvalues of A, and

$$V = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$

is an orthonormal matrix whose columns are the corresponding eigenvectors. The following holds: since D is a principal submatrix of A, the Cauchy interlacing theorem gives, for diagonal entries ordered as $d_1 \geq d_2 \geq \cdots \geq d_{n-1}$,

$$\lambda_1 \geq d_1 \geq \lambda_2 \geq d_2 \geq \cdots \geq d_{n-1} \geq \lambda_n .$$
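
The interlacing property is easy to check numerically. The following short sketch is our illustration (using NumPy rather than the software of the references) and reuses the arrowhead() helper from the first sketch:

```python
import numpy as np

d = np.array([1.0, 3.0, 5.0])          # diagonal of D
z = np.array([0.5, 2.0, 1.0])          # arrow vector
A = arrowhead(d, z, 2.0)               # alpha = 2.0

lam, V = np.linalg.eigh(A)             # eigenvalues in increasing order
assert np.allclose(V @ np.diag(lam) @ V.T, A)   # A = V Lambda V^T

# Interlacing, written for increasing order:
# lam[0] <= d[0] <= lam[1] <= d[1] <= ... <= d[-1] <= lam[-1]
assert all(lam[i] <= d[i] <= lam[i + 1] for i in range(d.size))
print(lam)
```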

Symmetric arrowhead matrices arise in descriptions of radiationless transitions in isolated molecules and oscillators vibrationally coupled with a Fermi liquid. [4]

Eigenvalues and eigenvectors

A symmetric arrowhead matrix is irreducible if $\zeta_i \neq 0$ for all $i$ and $d_i \neq d_j$ for all $i \neq j$. The eigenvalues of an irreducible real symmetric arrowhead matrix are the zeros of the secular equation

$$f(\lambda) = \alpha - \lambda - \sum_{i=1}^{n-1} \frac{\zeta_i^{2}}{d_i - \lambda} = 0,$$

which can be computed, for example, by the bisection method. The corresponding eigenvectors are equal to

$$v_i = \frac{x_i}{\lVert x_i \rVert_{2}}, \qquad x_i = \begin{bmatrix} \dfrac{\zeta_1}{d_1 - \lambda_i} & \cdots & \dfrac{\zeta_{n-1}}{d_{n-1} - \lambda_i} & -1 \end{bmatrix}^{T}, \qquad i = 1, \ldots, n.$$

Direct application of the above formula may yield eigenvectors which are not numerically sufficiently orthogonal. [1] A forward stable algorithm which computes each eigenvalue and each component of the corresponding eigenvector to almost full accuracy is described in [2]. A Julia implementation of the software is available. [5]
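
To make the formulas concrete, the sketch below finds the roots of the secular equation by plain bisection and forms the eigenvectors from the explicit formula. It is our illustration in Python/NumPy (reusing the arrowhead() helper from the first sketch), not the forward stable algorithm of [2] or the Julia package [5]; in particular, for clustered eigenvalues the eigenvectors it returns may exhibit exactly the loss of orthogonality mentioned above.

```python
import numpy as np

def secular(lam, d, z, alpha):
    """f(lambda) = alpha - lambda - sum_i zeta_i^2 / (d_i - lambda)."""
    return alpha - lam - np.sum(z**2 / (d - lam))

def arrowhead_eig(d, z, alpha, maxit=200):
    """Eigenpairs of the irreducible arrowhead matrix [[diag(d), z], [z.T, alpha]].

    Assumes d is strictly increasing and every z[i] is nonzero (irreducibility),
    so exactly one eigenvalue lies in each interval delimited by the d_i.
    """
    d = np.asarray(d, dtype=float)
    z = np.asarray(z, dtype=float)
    n = d.size + 1

    # Gershgorin bounds enclose the whole spectrum.
    lo_b = min(np.min(d - np.abs(z)), alpha - np.abs(z).sum()) - 1.0
    hi_b = max(np.max(d + np.abs(z)), alpha + np.abs(z).sum()) + 1.0
    edges = np.concatenate(([lo_b], d, [hi_b]))

    lam = np.empty(n)
    for k in range(n):
        # f decreases from positive to negative values across each interval.
        eps = 1e-12 * (1.0 + abs(edges[k]) + abs(edges[k + 1]))
        lo, hi = edges[k] + eps, edges[k + 1] - eps
        for _ in range(maxit):
            mid = 0.5 * (lo + hi)
            if secular(mid, d, z, alpha) > 0.0:
                lo = mid
            else:
                hi = mid
        lam[k] = 0.5 * (lo + hi)

    # Eigenvectors from the explicit formula; orthogonality may suffer for
    # eigenvalues that are very close to each other.
    V = np.empty((n, n))
    for k in range(n):
        x = np.append(z / (d - lam[k]), -1.0)
        V[:, k] = x / np.linalg.norm(x)
    return lam, V

# Small check against LAPACK (arrowhead() is the constructor from the first sketch).
d = np.array([1.0, 2.0, 4.0])
z = np.array([0.3, 1.0, 0.7])
A = arrowhead(d, z, 3.0)
lam, V = arrowhead_eig(d, z, 3.0)
print(np.allclose(lam, np.linalg.eigh(A)[0]))   # same eigenvalues
print(np.allclose(A @ V, V * lam))              # A v_i = lambda_i v_i
```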

Inverses

Let A be an irreducible real symmetric arrowhead matrix. If $d_i = 0$ for some $i$, the inverse is a permuted irreducible real symmetric arrowhead matrix:

$$A^{-1} = \begin{bmatrix} D_1^{-1} & w_1 & 0 & 0 \\ w_1^{T} & b & w_2^{T} & \dfrac{1}{\zeta_i} \\ 0 & w_2 & D_2^{-1} & 0 \\ 0 & \dfrac{1}{\zeta_i} & 0 & 0 \end{bmatrix},$$

where $D_1 = \operatorname{diag}(d_1, \ldots, d_{i-1})$, $D_2 = \operatorname{diag}(d_{i+1}, \ldots, d_{n-1})$, $z_1 = \begin{bmatrix} \zeta_1 & \cdots & \zeta_{i-1} \end{bmatrix}^{T}$, $z_2 = \begin{bmatrix} \zeta_{i+1} & \cdots & \zeta_{n-1} \end{bmatrix}^{T}$, and

$$w_1 = -\frac{1}{\zeta_i} D_1^{-1} z_1, \qquad w_2 = -\frac{1}{\zeta_i} D_2^{-1} z_2, \qquad b = \frac{1}{\zeta_i^{2}} \left( -\alpha + z_1^{T} D_1^{-1} z_1 + z_2^{T} D_2^{-1} z_2 \right).$$

Here the nonzero row and column of $A^{-1}$ (the new arrow) are in position $i$.

If $d_i \neq 0$ for all $i$, the inverse is a rank-one modification of a diagonal matrix (diagonal-plus-rank-one matrix or DPR1):

$$A^{-1} = \begin{bmatrix} D^{-1} & 0 \\ 0 & 0 \end{bmatrix} + \rho\, u u^{T},$$

where

$$u = \begin{bmatrix} D^{-1} z \\ -1 \end{bmatrix}, \qquad \rho = \frac{1}{\alpha - z^{T} D^{-1} z}.$$
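
The two cases above are easy to verify numerically. The following sketch is our illustration (the helper name arrowhead_inverse is not from the cited references, and arrowhead() is the constructor from the first sketch); it implements both formulas and compares them with numpy.linalg.inv:

```python
import numpy as np

def arrowhead_inverse(d, z, alpha):
    """Inverse of the irreducible arrowhead matrix [[diag(d), z], [z.T, alpha]].

    If some d[i] == 0 the inverse is a permuted arrowhead matrix; otherwise it
    is a diagonal-plus-rank-one (DPR1) matrix. Illustrative sketch only.
    """
    d = np.asarray(d, dtype=float)
    z = np.asarray(z, dtype=float)
    n = d.size + 1
    B = np.zeros((n, n))
    zero = np.flatnonzero(d == 0.0)
    if zero.size == 1:                        # permuted-arrowhead case
        i = zero[0]
        mask = np.arange(d.size) != i
        dj, zj = d[mask], z[mask]             # D_1, D_2 and z_1, z_2 stacked together
        w = -(zj / dj) / z[i]                 # w_1 and w_2
        b = (-alpha + np.sum(zj**2 / dj)) / z[i]**2
        idx = np.flatnonzero(mask)
        B[idx, idx] = 1.0 / dj                # D_1^{-1} and D_2^{-1}
        B[idx, i] = B[i, idx] = w
        B[i, i] = b
        B[i, -1] = B[-1, i] = 1.0 / z[i]
    else:                                     # DPR1 case: diag(D^{-1}, 0) + rho * u u^T
        u = np.append(z / d, -1.0)
        rho = 1.0 / (alpha - np.sum(z**2 / d))
        B[:-1, :-1] = np.diag(1.0 / d)
        B += rho * np.outer(u, u)
    return B

# Exercise both branches and compare with a general-purpose inverse.
z = np.array([1.0, 4.0, 0.5])
for d in (np.array([2.0, 0.0, 5.0]), np.array([2.0, 3.0, 5.0])):
    A = arrowhead(d, z, 1.5)
    print(np.allclose(arrowhead_inverse(d, z, 1.5), np.linalg.inv(A)))   # True, True
```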

References

  1. O'Leary, D. P.; Stewart, G. W. (1990). "Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices". Journal of Computational Physics. 90 (2): 497–505. Bibcode:1990JCoPh..90..497O. doi:10.1016/0021-9991(90)90177-3.
  2. Jakovcevic Stor, Nevena; Slapnicar, Ivan; Barlow, Jesse L. (2015). "Accurate eigenvalue decomposition of real symmetric arrowhead matrices and applications". Linear Algebra and Its Applications. 464: 62–89. arXiv:1302.7203. doi:10.1016/j.laa.2013.10.007. S2CID 119640612.
  3. Gu, Ming; Eisenstat, Stanley C. (1995). "A Divide-and-Conquer Algorithm for the Symmetric Tridiagonal Eigenproblem". SIAM Journal on Matrix Analysis and Applications. 16: 172–191. doi:10.1137/S0895479892241287.
  4. O'Leary, D. P.; Stewart, G. W. (October 1990). "Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices". Journal of Computational Physics. 90 (2): 497–505. Bibcode:1990JCoPh..90..497O. doi:10.1016/0021-9991(90)90177-3.
  5. "Arrowhead.jl"