Pascal matrix


In mathematics, particularly matrix theory and combinatorics, a Pascal matrix is a matrix (possibly infinite) containing the binomial coefficients as its elements. It is thus an encoding of Pascal's triangle in matrix form. There are three natural ways to achieve this: as a lower-triangular matrix, an upper-triangular matrix, or a symmetric matrix. For example, the 5×5 matrices are:

$$L_5 = \begin{pmatrix} 1&0&0&0&0 \\ 1&1&0&0&0 \\ 1&2&1&0&0 \\ 1&3&3&1&0 \\ 1&4&6&4&1 \end{pmatrix}, \quad U_5 = \begin{pmatrix} 1&1&1&1&1 \\ 0&1&2&3&4 \\ 0&0&1&3&6 \\ 0&0&0&1&4 \\ 0&0&0&0&1 \end{pmatrix}, \quad S_5 = \begin{pmatrix} 1&1&1&1&1 \\ 1&2&3&4&5 \\ 1&3&6&10&15 \\ 1&4&10&20&35 \\ 1&5&15&35&70 \end{pmatrix}$$


There are other ways in which Pascal's triangle can be put into matrix form, but these are not easily extended to infinity.[1]

Definition

The non-zero elements of a Pascal matrix are given by the binomial coefficients:

$$L_{ij} = \binom{i}{j} = \frac{i!}{j!\,(i-j)!}, \qquad j \le i,$$

$$U_{ij} = \binom{j}{i} = \frac{j!}{i!\,(j-i)!}, \qquad i \le j,$$

$$S_{ij} = \binom{i+j}{i} = \frac{(i+j)!}{i!\,j!},$$

such that the indices i, j start at 0, and ! denotes the factorial.
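These formulas translate directly into code. The following sketch builds all three n×n matrices from the binomial coefficients; the helper name `pascal` and its `kind` argument are illustrative choices of ours, not a standard API:

```python
from math import comb
import numpy as np

def pascal(n, kind="symmetric"):
    """Build an n-by-n Pascal matrix: "lower" (L_n), "upper" (U_n), or "symmetric" (S_n)."""
    if kind == "lower":
        return np.array([[comb(i, j) for j in range(n)] for i in range(n)])
    if kind == "upper":
        return np.array([[comb(j, i) for j in range(n)] for i in range(n)])
    return np.array([[comb(i + j, i) for j in range(n)] for i in range(n)])

print(pascal(5, "lower"))
print(pascal(5, "symmetric"))
```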

Properties

The matrices have the pleasing relationship $S_n = L_nU_n$. From this it is easily seen that all three matrices have determinant 1, as the determinant of a triangular matrix is simply the product of its diagonal elements, which are all 1 for both $L_n$ and $U_n$. In other words, matrices $S_n$, $L_n$, and $U_n$ are unimodular, with $L_n$ and $U_n$ having trace n.
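A quick self-contained numerical check of these properties (NumPy; $U_n$ is taken as the transpose of $L_n$, which follows from the definitions):

```python
from math import comb
import numpy as np

n = 5
L = np.array([[comb(i, j) for j in range(n)] for i in range(n)])
U = L.T
S = np.array([[comb(i + j, i) for j in range(n)] for i in range(n)])

assert np.array_equal(L @ U, S)          # S_n = L_n U_n
assert round(np.linalg.det(S)) == 1      # all three matrices are unimodular
assert L.trace() == U.trace() == n       # L_n and U_n have trace n
```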

The trace of $S_n$ is given by

$$\operatorname{tr}(S_n) = \sum_{k=0}^{n-1} \binom{2k}{k},$$

since the diagonal entries are $S_{kk} = \binom{2k}{k}$, with the first few terms given by the sequence 1, 3, 9, 29, 99, 351, 1275, ... (sequence A006134 in the OEIS).
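The sequence can be reproduced directly as partial sums of the central binomial coefficients:

```python
from math import comb

# tr(S_n) for n = 1..7: partial sums of the central binomial coefficients C(2k, k).
print([sum(comb(2 * k, k) for k in range(n)) for n in range(1, 8)])
# [1, 3, 9, 29, 99, 351, 1275]  (OEIS A006134)
```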

Construction

A Pascal matrix can actually be constructed by taking the matrix exponential of a special subdiagonal or superdiagonal matrix. The example below constructs a 7×7 Pascal matrix, but the method works for any desired n×n Pascal matrices. The dots in the following matrices represent zero elements:

$$L_7 = \exp(PL_7) = \exp\!\begin{pmatrix} \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot \\ 1&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot \\ \cdot&2&\cdot&\cdot&\cdot&\cdot&\cdot \\ \cdot&\cdot&3&\cdot&\cdot&\cdot&\cdot \\ \cdot&\cdot&\cdot&4&\cdot&\cdot&\cdot \\ \cdot&\cdot&\cdot&\cdot&5&\cdot&\cdot \\ \cdot&\cdot&\cdot&\cdot&\cdot&6&\cdot \end{pmatrix}, \qquad U_7 = \exp(PU_7) = \exp\!\begin{pmatrix} \cdot&1&\cdot&\cdot&\cdot&\cdot&\cdot \\ \cdot&\cdot&2&\cdot&\cdot&\cdot&\cdot \\ \cdot&\cdot&\cdot&3&\cdot&\cdot&\cdot \\ \cdot&\cdot&\cdot&\cdot&4&\cdot&\cdot \\ \cdot&\cdot&\cdot&\cdot&\cdot&5&\cdot \\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&6 \\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot \end{pmatrix},$$

and $S_7 = \exp(PL_7)\exp(PU_7) = L_7U_7$, where $PL_7$ and $PU_7$ denote the subdiagonal and superdiagonal logarithm matrices above.
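A minimal sketch of this construction using SciPy's matrix exponential (`scipy.linalg.expm`); the final rounding repairs floating-point error in what is an exactly integer result:

```python
import numpy as np
from scipy.linalg import expm

n = 7
PL = np.diag(np.arange(1, n, dtype=float), k=-1)  # 1..6 on the subdiagonal
PU = np.diag(np.arange(1, n, dtype=float), k=+1)  # 1..6 on the superdiagonal

L = np.round(expm(PL)).astype(int)                # lower-triangular L_7
U = np.round(expm(PU)).astype(int)                # upper-triangular U_7
S = np.round(expm(PL) @ expm(PU)).astype(int)     # symmetric S_7 = L_7 U_7
print(L)
print(S)
```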

It is important to note that one cannot simply assume $\exp(A)\exp(B) = \exp(A + B)$ for n×n matrices A and B; this equality holds only when $AB = BA$ (i.e., when the matrices A and B commute). In the construction of symmetric Pascal matrices like that above, the sub- and superdiagonal matrices do not commute, so the (perhaps) tempting simplification of adding the matrices first cannot be made.
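A quick numerical check of this caveat, again just a sketch:

```python
import numpy as np
from scipy.linalg import expm

PL = np.diag(np.arange(1, 7, dtype=float), k=-1)
PU = np.diag(np.arange(1, 7, dtype=float), k=+1)

correct = expm(PL) @ expm(PU)      # S_7 = exp(PL_7) exp(PU_7)
naive = expm(PL + PU)              # NOT S_7: PL_7 and PU_7 do not commute
assert not np.allclose(correct, naive)
assert not np.allclose(PL @ PU, PU @ PL)
```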

A useful property of the sub- and superdiagonal matrices used for the construction is that both are nilpotent; that is, when raised to a sufficiently great integer power, they degenerate into the zero matrix. (See shift matrix for further details.) As the n×n generalised shift matrices we are using become zero when raised to the power n, when calculating the matrix exponential we need only consider the first n terms of the infinite series (the powers 0 through n − 1) to obtain an exact result.
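Because the series terminates, the exponential can be evaluated exactly in rational arithmetic, with no floating point at all. A sketch using `Fraction` to keep the 1/k! factors exact:

```python
from fractions import Fraction

def expm_nilpotent(M):
    """Exact exp(M) for a nilpotent n-by-n matrix M, via the finite series
    I + M + M^2/2! + ... + M^(n-1)/(n-1)!  (M^n = 0 for an n-by-n shift matrix)."""
    n = len(M)
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    result = [[Fraction(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, n):
        term = [[x / k for x in row] for row in matmul(term, M)]       # M^k / k!
        result = [[r + t for r, t in zip(rr, tt)] for rr, tt in zip(result, term)]
    return result

PL7 = [[j + 1 if i == j + 1 else 0 for j in range(7)] for i in range(7)]
for row in expm_nilpotent(PL7):
    print([int(x) for x in row])   # the rows of L_7, exactly
```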

Variants

Interesting variants can be obtained by obvious modification of the matrix logarithm $PL_7$ and then application of the matrix exponential.

The first example below uses the squares of the values of the log-matrix and constructs a 7×7 "Laguerre" matrix (that is, a matrix of coefficients of the Laguerre polynomials).

In practice, the Laguerre matrix is used with some other scaling and/or the scheme of alternating signs. (Literature about generalizations to higher powers has not yet been found.)
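A sketch of this variant: square the subdiagonal entries 1..6 of $PL_7$ before exponentiating (the rounding assumes the exactly integer result):

```python
import numpy as np
from scipy.linalg import expm

# Laguerre variant: subdiagonal entries v^2 for v = 1..6.
LAG = np.round(expm(np.diag(np.arange(1, 7, dtype=float) ** 2, k=-1))).astype(int)
print(LAG)  # row k holds k! times the unsigned coefficients of the k-th Laguerre polynomial
```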

The second example below uses the products v(v + 1) of the values of the log-matrix and constructs a 7×7 "Lah" matrix (that is, a matrix of coefficients of the Lah numbers).

Using v(v − 1) instead shifts the diagonal to the bottom-right.
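A corresponding sketch, with entries v(v + 1) for v = 1..6 on the subdiagonal:

```python
import numpy as np
from scipy.linalg import expm

# Lah variant: subdiagonal entries v*(v+1) for v = 1..6.
v = np.arange(1, 7, dtype=float)
LAH = np.round(expm(np.diag(v * (v + 1), k=-1))).astype(int)
print(LAH)  # the lower triangle holds the (unsigned) Lah numbers
```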

The third example below uses the square of the original $PL_7$ matrix divided by 2, in other words the binomial coefficients $\binom{k}{2}$ on the second subdiagonal, and constructs a matrix which occurs in the context of the derivatives and integrals of the Gaussian error function.

If this matrix is inverted (using, for instance, the negative matrix logarithm), then the result has alternating signs and gives the coefficients of the derivatives (and, by extension, the integrals) of the Gaussian error function. (Literature about generalizations to greater powers has not yet been found.)
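A sketch of this third variant: exponentiate $PL_7^2/2$, whose only non-zero entries are $\binom{k}{2}$ on the second subdiagonal, and obtain the inverse by negating the logarithm as noted above:

```python
import numpy as np
from scipy.linalg import expm

# Third variant: exponentiate PL_7^2 / 2, i.e. binomial(k, 2) on the second subdiagonal.
PL = np.diag(np.arange(1, 7, dtype=float), k=-1)
G = np.round(expm(PL @ PL / 2)).astype(int)
Ginv = np.round(expm(-(PL @ PL) / 2)).astype(int)  # the inverse, with alternating signs
print(G)
print(Ginv)
```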

Another variant can be obtained by extending the original matrix to negative values.



References

1. Birregah, Babiga; Doh, Prosper K.; Adjallah, Kondo H. (2010-07-01). "A systematic approach to matrix forms of the Pascal triangle: The twelve triangular matrix forms and relations". European Journal of Combinatorics. 31 (5): 1205–1216. doi:10.1016/j.ejc.2009.10.009. ISSN 0195-6698.