Hollow matrix

In mathematics, a hollow matrix may refer to one of several related classes of matrix: a sparse matrix; a matrix with a large block of zeroes; or a matrix with diagonal entries all zero.

Definitions

Sparse

A hollow matrix may be one with "few" non-zero entries: that is, a sparse matrix.[1]

Block of zeroes

A hollow matrix may be a square n × n matrix with an r × s block of zeroes where r + s > n.[2]
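
For instance, the 3 × 3 matrix below (an illustrative example, not one taken from the cited source) has a 2 × 2 block of zeroes in its upper-left corner, and 2 + 2 > 3, so it is hollow in this sense:

\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 2 \\ 3 & 4 & 5 \end{pmatrix}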

Diagonal entries all zero

A hollow matrix may be a square matrix whose diagonal elements are all equal to zero.[3] That is, an n × n matrix A = (a_ij) is hollow if a_ij = 0 whenever i = j (i.e., a_ii = 0 for all i). The most obvious example is the real skew-symmetric matrix. Other examples are the adjacency matrix of a finite simple graph, and a distance matrix or Euclidean distance matrix.

In other words, any square matrix that takes the form

\begin{pmatrix} 0 & \ast & \cdots & \ast \\ \ast & 0 & \cdots & \ast \\ \vdots & \vdots & \ddots & \vdots \\ \ast & \ast & \cdots & 0 \end{pmatrix}

is a hollow matrix, where the symbol \ast denotes an arbitrary entry.

For example,

\begin{pmatrix} 0 & 2 & 6 \\ 1 & 0 & 4 \\ 7 & 18 & 0 \end{pmatrix}

is a hollow matrix.
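
In computational work the zero-diagonal condition is easy to test directly. The sketch below is a minimal illustration using NumPy; the helper is_hollow is a hypothetical name introduced here, not a standard library function. It checks the adjacency matrix of a simple path graph, which is hollow because a simple graph has no self-loops:

    import numpy as np

    def is_hollow(M):
        # A square matrix is hollow (in this sense) iff its diagonal is all zero.
        return np.all(np.diag(M) == 0)

    # Adjacency matrix of the path graph 1 - 2 - 3: no self-loops,
    # so every diagonal entry is zero.
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])

    print(is_hollow(A))  # True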

Properties

  • The trace of a hollow matrix is zero.
  • If A represents a linear map with respect to a fixed basis, then it maps each basis vector e into the complement of the span of e; that is, writing the basis as e_1, …, e_n, Ae_i ∈ span{e_j : j ≠ i} for every i.
  • The Gershgorin circle theorem shows that the modulus of every eigenvalue of a hollow matrix is at most the largest of the row sums of the moduli of its off-diagonal entries (see the numerical sketch after this list).
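
The following sketch (NumPy, with an arbitrary illustrative hollow matrix) checks the trace and Gershgorin properties numerically:

    import numpy as np

    # An arbitrary hollow matrix: all diagonal entries are zero.
    A = np.array([[0.0, 2.0, 6.0],
                  [1.0, 0.0, 4.0],
                  [7.0, 18.0, 0.0]])

    print(np.trace(A))  # 0.0 -- the trace of a hollow matrix is zero

    # Gershgorin: every eigenvalue lies in a disc centred at a diagonal
    # entry (here 0) with radius equal to the row sum of off-diagonal
    # moduli, so each eigenvalue modulus is at most the largest row sum.
    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    eigenvalues = np.linalg.eigvals(A)
    assert np.all(np.abs(eigenvalues) <= radii.max() + 1e-9)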

References

  1. Pierre Massé (1962). Optimal Investment Decisions: Rules for Action and Criteria for Choice. Prentice-Hall. p. 142.
  2. Paul Cohn (2006). Free Ideal Rings and Localization in General Rings. Cambridge University Press. p. 430. ISBN 0-521-85337-0.
  3. James E. Gentle (2007). Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer-Verlag. p. 42. ISBN 978-0-387-70872-0.