Orthogonal diagonalization

In linear algebra, an orthogonal diagonalization of a symmetric matrix is a diagonalization by means of an orthogonal change of coordinates.[1]
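
In matrix terms (a sketch of the notation used below; the symbols A, P, and D are not introduced in the article's lead), orthogonally diagonalizing a symmetric matrix A means finding an orthogonal matrix P and a diagonal matrix D with

    A = P D P^T, \quad P^T P = I,

so that the orthogonal change of coordinates X = PY turns A into D.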

Linear algebra branch of mathematics

Linear algebra is the branch of mathematics concerning linear equations such as a₁x₁ + ⋯ + aₙxₙ = b, linear maps, and their representations in vector spaces and through matrices.

Matrix (mathematics) Two-dimensional array of numbers with specific operations

In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, a matrix with two rows and three columns has dimensions 2 × 3.

In linear algebra, a square matrix A is called diagonalizable or nondefective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists an ordered basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map. A square matrix that is not diagonalizable is called defective.
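
As a quick illustration (a minimal sketch, not part of the original article, assuming the SymPy library), diagonalizability can be tested directly from the definition by asking whether a full basis of eigenvectors exists:

    from sympy import Matrix

    A = Matrix([[1, 1],
                [0, 1]])              # a 2x2 Jordan block
    print(A.is_diagonalizable())      # False: only one independent eigenvector

    B = Matrix([[2, 0],
                [0, 3]])              # already diagonal
    print(B.is_diagonalizable())      # True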

The following is an orthogonal diagonalization algorithm that diagonalizes a quadratic form q(x) on Rⁿ by means of an orthogonal change of coordinates X = PY.[2]

Step 1: find the symmetric matrix A which represents q and find its characteristic polynomial Δ(t).

Step 2: find the eigenvalues of A, which are the roots of Δ(t).

Step 3: for each eigenvalue λ of A found in step 2, find an orthogonal basis of its eigenspace.

Step 4: normalize all the vectors found in step 3; together they form an orthonormal basis of Rⁿ.

Step 5: let P be the matrix whose columns are the normalized vectors of step 4.

In mathematics, a quadratic form is a polynomial with terms all of degree two. For example, 4x² + 2xy − 3y² is a quadratic form in the variables x and y.

Symmetric matrix Matrix equal to its transpose

In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, a matrix A is symmetric if A = Aᵀ.

In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of the endomorphism over any basis; it does not depend on the choice of basis. The characteristic equation is the equation obtained by setting the characteristic polynomial equal to zero.

Then X = PY is the required orthogonal change of coordinates, and the diagonal entries of PᵀAP will be the eigenvalues λ₁, …, λₙ which correspond to the columns of P.
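
The algorithm can also be exercised numerically. Below is a minimal sketch (an illustration, not taken from the cited sources) assuming NumPy, whose routine numpy.linalg.eigh effectively carries out steps 2 through 5 for a real symmetric matrix, returning the eigenvalues together with an orthonormal basis of eigenvectors:

    import numpy as np

    def orthogonal_diagonalization(A):
        # For real symmetric A, return P orthogonal and D diagonal
        # with P.T @ A @ P = D (the change of coordinates X = P Y).
        eigenvalues, P = np.linalg.eigh(A)
        return P, np.diag(eigenvalues)

    # Symmetric matrix of the quadratic form q(x, y) = 2x^2 + 2xy + 2y^2
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    P, D = orthogonal_diagonalization(A)
    assert np.allclose(P.T @ A @ P, D)      # diagonalized: D = diag(1, 3)
    assert np.allclose(P.T @ P, np.eye(2))  # P is orthogonal

In the notation of the steps, the columns of P are the normalized eigenvectors and the diagonal of D lists the corresponding eigenvalues.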

Related Research Articles

In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized. This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
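
As an illustration of the "reduced to much simpler computations" point (a sketch assuming NumPy; the symmetric matrix is an arbitrary example), a function of a symmetric matrix reduces to the same function applied to its eigenvalues:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])            # symmetric, so the spectral theorem applies
    w, Q = np.linalg.eigh(A)              # A = Q diag(w) Q^T with Q orthogonal
    exp_A = Q @ np.diag(np.exp(w)) @ Q.T  # exp(A) computed through the eigenvalues
    print(exp_A)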

Square matrix Matrix with the same number of rows and columns

In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.

In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero. The term usually refers to square matrices. An example of a 2-by-2 diagonal matrix is diag(3, 2), the matrix with 3 and 2 on its diagonal and zeros elsewhere; diag(6, 5, 4) is a 3-by-3 example. An identity matrix of any size, or any multiple of it, is a diagonal matrix.

In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.

Jordan normal form Form of a matrix indicating its eigenvalues and their algebraic multiplicities

In linear algebra, a Jordan normal form of a linear operator on a finite-dimensional vector space is an upper triangular matrix of a particular form called a Jordan matrix, representing the operator with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal, and identical diagonal entries to the left and below them.
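
A small sketch (assuming SymPy, and not tied to any particular source) that computes a Jordan normal form:

    from sympy import Matrix

    A = Matrix([[2, 1],
                [0, 2]])        # defective: not diagonalizable
    P, J = A.jordan_form()      # A = P * J * P**-1
    print(J)                    # Matrix([[2, 1], [0, 2]]): one 2x2 Jordan block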

In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.

In numerical linear algebra, the QR algorithm is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
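
The basic idea fits in a few lines. The following is a bare-bones sketch assuming NumPy (real implementations first reduce the matrix to Hessenberg form and use shifts for speed and reliability):

    import numpy as np

    def qr_iteration(A, steps=100):
        # Unshifted QR iteration: factor, multiply back in reverse order, repeat.
        A = A.astype(float)
        for _ in range(steps):
            Q, R = np.linalg.qr(A)
            A = R @ Q
        return np.diag(A)   # for this symmetric example, approximate eigenvalues

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    print(qr_iteration(A))        # approx [3.618..., 1.381...]
    print(np.linalg.eigvalsh(A))  # reference values: [1.381..., 3.618...]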

Sylvester's law of inertia is a theorem in matrix algebra about certain properties of the coefficient matrix of a real quadratic form that remain invariant under a change of basis. Namely, if A is the symmetric matrix that defines the quadratic form, and S is any invertible matrix such that D = SASᵀ is diagonal, then the number of negative elements in the diagonal of D is always the same for all such S; and the same goes for the number of positive elements.
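
A quick numerical check of the law (a sketch assuming NumPy; the example matrix and the random change of basis are arbitrary, and a random S is invertible with probability 1). Here the inertia is read off from the eigenvalue signs of the congruent matrix, an equivalent formulation:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.diag([2.0, 1.0, -3.0])       # two positive and one negative entry
    S = rng.normal(size=(3, 3))         # an (almost surely invertible) change of basis
    B = S @ A @ S.T                     # congruent to A, generally not diagonal
    eig = np.linalg.eigvalsh(B)         # eigenvalue signs give the inertia of B
    print((eig > 0).sum(), (eig < 0).sum())  # 2 1, matching A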

In the mathematical field of graph theory, the Laplacian matrix, sometimes called admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. The Laplacian matrix can be used to find many useful properties of a graph. Together with Kirchhoff's theorem, it can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the second smallest eigenvalue of its Laplacian by Cheeger's inequality. It can also be used to construct low dimensional embeddings, which can be useful for a variety of machine learning applications.
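
For instance (a sketch assuming NumPy; the 4-cycle graph is an arbitrary choice), the Laplacian is the degree matrix minus the adjacency matrix, and by Kirchhoff's theorem any cofactor of it counts spanning trees:

    import numpy as np

    # Adjacency matrix of a cycle on 4 vertices
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A           # Laplacian: degrees minus adjacency
    spanning_trees = round(np.linalg.det(L[1:, 1:]))  # any cofactor of L
    print(spanning_trees)                    # 4: a 4-cycle has 4 spanning trees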

Phase plane

In applied mathematics, in particular the context of nonlinear system analysis, a phase plane is a visual display of certain characteristics of certain kinds of differential equations; a coordinate plane whose axes are the values of the two state variables, say (x, y) or (q, p). It is a two-dimensional case of the general n-dimensional phase space.

In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that changes by only a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation T(v) = λv, where λ is a scalar in F known as the eigenvalue associated with v.

In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods. Preconditioning is typically related to reducing the condition number of the problem. The preconditioned problem is then usually solved by an iterative method.
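
As a small illustration (a sketch assuming NumPy; the matrix and the Jacobi choice of preconditioner are arbitrary examples), scaling by the inverse of the diagonal can sharply reduce the condition number:

    import numpy as np

    A = np.diag([1.0, 10.0, 100.0, 1000.0]) + 0.1  # ill-conditioned SPD matrix
    M_inv = np.diag(1.0 / np.diag(A))              # Jacobi preconditioner
    print(np.linalg.cond(A))          # roughly 9e2
    print(np.linalg.cond(M_inv @ A))  # far smaller, near 1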

In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n × n matrix is defective if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems.

Numerical linear algebra is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to mathematical questions. It is a subfield of numerical analysis, and a type of linear algebra. Because computers use floating-point arithmetic, they cannot exactly represent irrational data, and many algorithms increase that imprecision when implemented by a computer. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize computer error while retaining efficiency and precision.

In the mathematical fields of geometry and linear algebra, a principal axis is a certain line in a Euclidean space associated with an ellipsoid or hyperboloid, generalizing the major and minor axes of an ellipse or hyperbola. The principal axis theorem states that the principal axes are perpendicular, and gives a constructive procedure for finding them.
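
As a concrete instance (a sketch assuming NumPy; the conic is an arbitrary example), the principal axes of the ellipse 5x² + 4xy + 5y² = 1 are the orthonormal eigenvectors of its matrix:

    import numpy as np

    A = np.array([[5.0, 2.0],
                  [2.0, 5.0]])   # matrix of the quadratic form 5x^2 + 4xy + 5y^2
    w, P = np.linalg.eigh(A)     # columns of P: perpendicular principal axes
    print(w)                     # [3. 7.]: in rotated coordinates, 3u^2 + 7v^2 = 1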

In linear algebra, eigendecomposition or sometimes spectral decomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way.

In the mathematical field of linear algebra, an arrowhead matrix is a square matrix containing zeros in all entries except for the first row, first column, and main diagonal; these entries can be any number. In other words, the nonzero entries of the matrix form an arrowhead-shaped pattern.

References

  1. Poole, D. (2010). Linear Algebra: A Modern Introduction. Cengage Learning. p. 411. ISBN 978-0-538-73545-2. Retrieved 12 November 2018.
  2. Lipschutz, Seymour. 3000 Solved Problems in Linear Algebra.