Modal matrix

In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors. [1]

Specifically, the modal matrix M for the matrix A is the n × n matrix formed with the eigenvectors of A as columns in M. It is utilized in the similarity transformation

D = M⁻¹AM,

where D is an n × n diagonal matrix with the eigenvalues of A on the main diagonal of D and zeros elsewhere. The matrix D is called the spectral matrix for A. The eigenvalues must appear left to right, top to bottom, in the same order as their corresponding eigenvectors are arranged left to right in M. [2]
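
As an illustrative sketch (not taken from the cited sources; the 2 × 2 matrix below is a hypothetical choice), the following Python/NumPy fragment forms a modal matrix from computed eigenvectors and checks the similarity transformation above:

    import numpy as np

    # Hypothetical 2 x 2 example chosen only for illustration.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # np.linalg.eig returns the eigenvectors as columns, so the matrix of
    # eigenvectors is already a modal matrix M for A.
    eigenvalues, M = np.linalg.eig(A)

    # Spectral matrix D: eigenvalues on the diagonal, in the same order as
    # their eigenvectors appear (left to right) in the columns of M.
    D = np.diag(eigenvalues)

    # Verify the similarity transformation D = M^(-1) A M.
    print(np.allclose(np.linalg.inv(M) @ A @ M, D))   # True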

Example

The matrix A

has eigenvalues and corresponding eigenvectors

A diagonal matrix D, similar to A, is

One possible choice for an invertible matrix M such that D = M⁻¹AM is

[3]

Note that since eigenvectors themselves are not unique, and since the columns of both M and D may be interchanged, it follows that both M and D are not unique. [4]
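
A small sketch of this non-uniqueness, again with a hypothetical matrix rather than the one from the example above: interchanging columns of M remains valid provided the diagonal entries of D are interchanged in the same way.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    eigenvalues, M = np.linalg.eig(A)

    # Swap the two eigenvector columns and swap the eigenvalues to match.
    perm = [1, 0]
    M2 = M[:, perm]
    D2 = np.diag(eigenvalues[perm])

    # M2 is another valid modal matrix for A, with the diagonal of D
    # permuted in the same way.
    print(np.allclose(np.linalg.inv(M2) @ A @ M2, D2))   # True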

Generalized modal matrix

Let A be an n × n matrix. A generalized modal matrix M for A is an n × n matrix whose columns, considered as vectors, form a canonical basis for A and appear in M according to the following rules:

  1. All Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of M.
  2. All vectors of one chain appear together in adjacent columns of M.
  3. Each chain appears in M in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, and so on). [5]

One can show that

AM = MJ        (1)

where J is a matrix in Jordan normal form. By premultiplying by M⁻¹, we obtain

J = M⁻¹AM        (2)

Note that when computing these matrices, equation (1) is the easier of the two to verify, since it does not require inverting a matrix. [6]
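
As a rough numerical sketch of equation (1) (my own construction, not from the references): starting from a known Jordan form makes it easy to exhibit a generalized modal matrix and confirm AM = MJ without inverting M in the verification step.

    import numpy as np

    # Hypothetical construction: start from a known Jordan form J and an
    # invertible matrix M, and define A = M J M^(-1). By construction, M is
    # then a generalized modal matrix for A.
    J = np.array([[4.0, 1.0],
                  [0.0, 4.0]])        # a single 2 x 2 Jordan block
    M = np.array([[1.0, 2.0],
                  [1.0, 3.0]])        # any invertible matrix
    A = M @ J @ np.linalg.inv(M)

    # Equation (1): AM = MJ. The check itself needs no matrix inversion.
    print(np.allclose(A @ M, M @ J))  # True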

Example

This example illustrates a generalized modal matrix with four Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order. [7] The matrix A

has a single eigenvalue with algebraic multiplicity 7. A canonical basis for A will consist of one linearly independent generalized eigenvector of rank 3 (see generalized eigenvector), two of rank 2, and four of rank 1; or equivalently, one chain of three vectors, one chain of two vectors, and two chains of one vector each.

An "almost diagonal" matrix J in Jordan normal form, similar to A, is obtained as follows:

where M is a generalized modal matrix for A, the columns of M are a canonical basis for A, and AM = MJ. [8] Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both M and J may be interchanged, it follows that both M and J are not unique. [9]
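
For completeness, a computer algebra system can produce a generalized modal matrix directly. The sketch below uses a small hypothetical defective matrix (the entries of the example above are not reproduced here) and SymPy's jordan_form, which returns a pair (M, J) with A = MJM⁻¹; note that its column ordering need not follow the chain-ordering convention described earlier.

    from sympy import Matrix

    # Hypothetical defective matrix chosen for illustration: one eigenvalue
    # (2) of algebraic multiplicity 2 but only one linearly independent
    # eigenvector, so a single Jordan chain of length 2.
    A = Matrix([[1, 1],
                [-1, 3]])

    # jordan_form returns a transformation matrix M and the Jordan form J
    # with A = M * J * M**(-1); M serves as a generalized modal matrix.
    M, J = A.jordan_form()

    print(J)               # Matrix([[2, 1], [0, 2]])
    print(A * M == M * J)  # True, i.e. equation (1) holds exactly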

Notes

  1. Bronson (1970, pp. 179–183)
  2. Bronson (1970, p. 181)
  3. Beauregard & Fraleigh (1973, pp. 271, 272)
  4. Bronson (1970, p. 181)
  5. Bronson (1970, p. 205)
  6. Bronson (1970, pp. 206–207)
  7. Nering (1970, pp. 122, 123)
  8. Bronson (1970, pp. 208, 209)
  9. Bronson (1970, p. 206)

References