Broken diagonal

In recreational mathematics and the theory of magic squares, a broken diagonal is a set of n cells forming two parallel diagonal lines in an n × n square. Alternatively, these two lines can be thought of as wrapping around the boundaries of the square to form a single sequence.

In pandiagonal magic squares

A magic square in which the broken diagonals have the same sum as the rows, columns, and diagonals is called a pandiagonal magic square.[1][2]

Examples of broken diagonals from the number square shown below are as follows: 3,12,14,5; 10,1,7,16; 10,13,7,4; 15,8,2,9; 15,12,2,5; and 6,13,11,4.

[Image: an order-4 pandiagonal magic square]

The fact that this square is a pandiagonal magic square can be verified by checking that all of its broken diagonals add up to the same constant:

3+12+14+5 = 34
10+1+7+16 = 34
10+13+7+4 = 34
15+8+2+9 = 34
15+12+2+5 = 34
6+13+11+4 = 34

One way to visualize a broken diagonal is to imagine a "ghost image" of the panmagic square adjacent to the original:

[Image: the order-4 pandiagonal magic square shown side by side with a copy of itself]

The set of numbers {3, 12, 14, 5} of a broken diagonal, wrapped around the original square, can be seen starting with the first square of the ghost image and moving down to the left.
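The wrap-around can also be checked by a short script. The Python sketch below verifies that all eight wrapped diagonals (the two ordinary diagonals and the six broken ones) of an order-4 pandiagonal magic square sum to the magic constant 34; the particular 4 × 4 arrangement used is one square consistent with the broken diagonals listed above and may differ from the layout in the figure.

```python
# Check the diagonals of an order-4 pandiagonal magic square.
# The layout below is one square consistent with the broken diagonals
# listed above; the figure in the article may use a different arrangement.
square = [
    [1, 15,  4, 14],
    [12, 6,  9,  7],
    [13, 3, 16,  2],
    [8, 10,  5, 11],
]
n = len(square)
magic = sum(square[0])  # 34, the magic constant for order 4

for start in range(n):
    # Diagonal running down-right from column `start`; the modulus wraps it
    # around the right edge, producing a broken diagonal when start != 0.
    down_right = sum(square[r][(start + r) % n] for r in range(n))
    # Diagonal running down-left from column `start`, wrapping at the left edge.
    down_left = sum(square[r][(start - r) % n] for r in range(n))
    assert down_right == magic and down_left == magic

print("All", 2 * n, "diagonals (2 ordinary + 6 broken) sum to", magic)
```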

In linear algebra

Broken diagonals are used in a formula to find the determinant of 3 by 3 matrices.

For a 3 × 3 matrix A, its determinant is

$$\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.$$ [3]

Here, $a_{12}a_{23}a_{31}$ and $a_{13}a_{21}a_{32}$ are (products of the elements of) the broken diagonals of the matrix.

Broken diagonals are used in the calculation of the determinants of all matrices of size 3×3 or larger. This can be shown by using the matrix's minors to calculate the determinant.
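As an illustration, the 3 × 3 formula above (the rule of Sarrus) can be written directly in terms of wrapped diagonals. The Python sketch below sums the products along the three down-right diagonals (one ordinary, two broken) and subtracts the products along the three down-left diagonals, then cross-checks the result against a cofactor expansion; the function names and the sample matrix are illustrative, not from the source.

```python
# Sketch: compute a 3x3 determinant from its ordinary and broken diagonals
# (rule of Sarrus), and cross-check against a cofactor expansion.
def det3_sarrus(a):
    n = 3
    # Products along the main diagonal and the two broken diagonals (down-right).
    plus = sum(a[0][j] * a[1][(j + 1) % n] * a[2][(j + 2) % n] for j in range(n))
    # Products along the anti-diagonal and the two broken anti-diagonals (down-left).
    minus = sum(a[0][j] * a[1][(j - 1) % n] * a[2][(j - 2) % n] for j in range(n))
    return plus - minus

def det3_cofactor(a):
    # Laplace (cofactor) expansion along the first row, for comparison.
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]  # arbitrary example matrix
assert det3_sarrus(A) == det3_cofactor(A)
print(det3_sarrus(A))  # -85
```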


References

  1. Pickover, Clifford A. (2011), The Zen of Magic Squares, Circles, and Stars: An Exhibition of Surprising Structures across the Dimensions, Princeton University Press, p. 7, ISBN 9781400841516.
  2. Licks, H. E. (1921), Recreations in Mathematics, D. Van Nostrand Company, p. 42.
  3. Weisstein, Eric W., "Determinant", MathWorld – A Wolfram Web Resource, https://mathworld.wolfram.com/Determinant.html