The Strachey method for magic squares is an algorithm for generating magic squares of singly even order 4k + 2. An example of a magic square of order 6 constructed with the Strachey method:
Example:
35 | 1 | 6 | 26 | 19 | 24 |
3 | 32 | 7 | 21 | 23 | 25 |
31 | 9 | 2 | 22 | 27 | 20 |
8 | 28 | 33 | 17 | 10 | 15 |
30 | 5 | 34 | 12 | 14 | 16 |
4 | 36 | 29 | 13 | 18 | 11 |
Strachey's method for constructing a singly even magic square of order n = 4k + 2 is as follows.
1. Divide the grid into four quarters, each holding n²/4 cells, and name them crosswise thus:
A | C |
D | B |
2. Using the Siamese method (De la Loubère method), complete the individual magic squares of odd order 2k + 1 in the sub-squares A, B, C, D: first fill sub-square A with the numbers 1 to n²/4, then sub-square B with the numbers n²/4 + 1 to 2n²/4, then sub-square C with the numbers 2n²/4 + 1 to 3n²/4, and finally sub-square D with the numbers 3n²/4 + 1 to n². As a running example, consider a 10×10 magic square divided into four quarters: quarter A contains a magic square of the numbers 1 to 25, B one of the numbers 26 to 50, C one of the numbers 51 to 75, and D one of the numbers 76 to 100.
17 | 24 | 1 | 8 | 15 | 67 | 74 | 51 | 58 | 65 |
23 | 5 | 7 | 14 | 16 | 73 | 55 | 57 | 64 | 66 |
4 | 6 | 13 | 20 | 22 | 54 | 56 | 63 | 70 | 72 |
10 | 12 | 19 | 21 | 3 | 60 | 62 | 69 | 71 | 53 |
11 | 18 | 25 | 2 | 9 | 61 | 68 | 75 | 52 | 59 |
92 | 99 | 76 | 83 | 90 | 42 | 49 | 26 | 33 | 40 |
98 | 80 | 82 | 89 | 91 | 48 | 30 | 32 | 39 | 41 |
79 | 81 | 88 | 95 | 97 | 29 | 31 | 38 | 45 | 47 |
85 | 87 | 94 | 96 | 78 | 35 | 37 | 44 | 46 | 28 |
86 | 93 | 100 | 77 | 84 | 36 | 43 | 50 | 27 | 34 |
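The odd-order fill used in step 2 can be sketched in Python (a minimal illustration of the Siamese method; the function name `siamese` is ours, not a standard API):

```python
# Sketch of the Siamese (De la Loubere) method used to fill each quarter.
def siamese(n, start=1):
    """Return an n x n magic square (n odd) holding start .. start + n*n - 1."""
    square = [[0] * n for _ in range(n)]
    row, col = 0, n // 2                       # begin in the middle of the top row
    for value in range(start, start + n * n):
        square[row][col] = value
        r, c = (row - 1) % n, (col + 1) % n    # move up-right, wrapping around
        if square[r][c]:                       # occupied: drop one cell down instead
            row = (row + 1) % n
        else:
            row, col = r, c
    return square
```

With this sketch, `siamese(5, 1)` yields quarter A of the running example, `siamese(5, 26)` quarter B, and so on.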
3. Exchange the leftmost k columns in sub-square A with the corresponding columns of sub-square D (k = 2 in the running example).
92 | 99 | 1 | 8 | 15 | 67 | 74 | 51 | 58 | 65 |
98 | 80 | 7 | 14 | 16 | 73 | 55 | 57 | 64 | 66 |
79 | 81 | 13 | 20 | 22 | 54 | 56 | 63 | 70 | 72 |
85 | 87 | 19 | 21 | 3 | 60 | 62 | 69 | 71 | 53 |
86 | 93 | 25 | 2 | 9 | 61 | 68 | 75 | 52 | 59 |
17 | 24 | 76 | 83 | 90 | 42 | 49 | 26 | 33 | 40 |
23 | 5 | 82 | 89 | 91 | 48 | 30 | 32 | 39 | 41 |
4 | 6 | 88 | 95 | 97 | 29 | 31 | 38 | 45 | 47 |
10 | 12 | 94 | 96 | 78 | 35 | 37 | 44 | 46 | 28 |
11 | 18 | 100 | 77 | 84 | 36 | 43 | 50 | 27 | 34 |
4. Exchange the rightmost k − 1 columns in sub-square C with the corresponding columns of sub-square B (a single column in the running example).
92 | 99 | 1 | 8 | 15 | 67 | 74 | 51 | 58 | 40 |
98 | 80 | 7 | 14 | 16 | 73 | 55 | 57 | 64 | 41 |
79 | 81 | 13 | 20 | 22 | 54 | 56 | 63 | 70 | 47 |
85 | 87 | 19 | 21 | 3 | 60 | 62 | 69 | 71 | 28 |
86 | 93 | 25 | 2 | 9 | 61 | 68 | 75 | 52 | 34 |
17 | 24 | 76 | 83 | 90 | 42 | 49 | 26 | 33 | 65 |
23 | 5 | 82 | 89 | 91 | 48 | 30 | 32 | 39 | 66 |
4 | 6 | 88 | 95 | 97 | 29 | 31 | 38 | 45 | 72 |
10 | 12 | 94 | 96 | 78 | 35 | 37 | 44 | 46 | 53 |
11 | 18 | 100 | 77 | 84 | 36 | 43 | 50 | 27 | 59 |
5. Exchange the middle cell of the leftmost column of sub-square A, and the central cell of sub-square A, with the corresponding cells of sub-square D.
92 | 99 | 1 | 8 | 15 | 67 | 74 | 51 | 58 | 40 |
98 | 80 | 7 | 14 | 16 | 73 | 55 | 57 | 64 | 41 |
4 | 81 | 88 | 20 | 22 | 54 | 56 | 63 | 70 | 47 |
85 | 87 | 19 | 21 | 3 | 60 | 62 | 69 | 71 | 28 |
86 | 93 | 25 | 2 | 9 | 61 | 68 | 75 | 52 | 34 |
17 | 24 | 76 | 83 | 90 | 42 | 49 | 26 | 33 | 65 |
23 | 5 | 82 | 89 | 91 | 48 | 30 | 32 | 39 | 66 |
79 | 6 | 13 | 95 | 97 | 29 | 31 | 38 | 45 | 72 |
10 | 12 | 94 | 96 | 78 | 35 | 37 | 44 | 46 | 53 |
11 | 18 | 100 | 77 | 84 | 36 | 43 | 50 | 27 | 59 |
The result is a magic square of order n = 4k + 2.[1]
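The five steps above can be sketched as a short Python program (a minimal illustration; the names `siamese` and `strachey` are ours, and the odd-order helper implements the Siamese method described in step 2):

```python
# Helper: Siamese method for a magic square of odd order n holding
# start .. start + n*n - 1 (used to fill each quarter in step 2).
def siamese(n, start=1):
    sq = [[0] * n for _ in range(n)]
    row, col = 0, n // 2
    for value in range(start, start + n * n):
        sq[row][col] = value
        r, c = (row - 1) % n, (col + 1) % n    # up-right, wrapping
        if sq[r][c]:                           # occupied: drop down instead
            row = (row + 1) % n
        else:
            row, col = r, c
    return sq

def strachey(n):
    """Magic square of singly even order n = 4k + 2 via Strachey's steps."""
    assert n % 4 == 2 and n > 2
    m = n // 2                                 # odd order 2k + 1 of each quarter
    k = (m - 1) // 2
    A, B, C, D = [siamese(m, i * m * m + 1) for i in range(4)]
    sq = [[0] * n for _ in range(n)]
    for r in range(m):                         # steps 1-2: crosswise assembly
        for c in range(m):
            sq[r][c], sq[r + m][c + m] = A[r][c], B[r][c]
            sq[r][c + m], sq[r + m][c] = C[r][c], D[r][c]
    for r in range(m):                         # step 3: leftmost k columns, A <-> D
        for c in range(k):
            sq[r][c], sq[r + m][c] = sq[r + m][c], sq[r][c]
    for r in range(m):                         # step 4: rightmost k - 1 columns, C <-> B
        for c in range(n - k + 1, n):
            sq[r][c], sq[r + m][c] = sq[r + m][c], sq[r][c]
    mid = m // 2                               # step 5: two single cells, A <-> D
    for c in (0, mid):
        sq[mid][c], sq[mid + m][c] = sq[mid + m][c], sq[mid][c]
    return sq
```

Under these assumptions, `strachey(6)` reproduces the order-6 square at the top of the article, and `strachey(10)` reproduces the final 10×10 table of the running example.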
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:
A Pythagorean triple consists of three positive integers a, b, and c, such that a² + b² = c². Such a triple is commonly written (a, b, c); a well-known example is (3, 4, 5). If (a, b, c) is a Pythagorean triple, then so is (ka, kb, kc) for any positive integer k. A triangle whose side lengths form a Pythagorean triple is a right triangle and called a Pythagorean triangle.
In mathematics, especially historical and recreational mathematics, a square array of numbers, usually positive integers, is called a magic square if the sums of the numbers in each row, each column, and both main diagonals are the same. The "order" of the magic square is the number of integers along one side (n), and the constant sum is called the "magic constant". If the array includes just the positive integers 1, 2, ..., n², the magic square is said to be "normal". Some authors take "magic square" to mean "normal magic square".
A multiplication algorithm is an algorithm to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known and there has been much research into the topic.
In mathematics, the general linear group of degree n is the set of n×n invertible matrices, together with the operation of ordinary matrix multiplication. This forms a group, because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible, with the identity matrix as the identity element of the group. The group is so named because the columns of an invertible matrix are linearly independent, hence the vectors/points they define are in general linear position, and matrices in the general linear group take points in general linear position to points in general linear position.
In mathematics, specifically in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
In cryptography, the tabula recta is a square table of alphabets, each row of which is made by shifting the previous one to the left. The term was invented by the German author and monk Johannes Trithemius in 1508, and used in his Trithemius cipher.
In mathematics, a magic hypercube is the k-dimensional generalization of magic squares and magic cubes, that is, an n × n × n × ... × n array of integers such that the sums of the numbers on each pillar (along any axis) as well as on the main space diagonals are all the same. The common sum is called the magic constant of the hypercube, and is sometimes denoted Mₖ(n). If a magic hypercube consists of the numbers 1, 2, ..., nᵏ, then it has magic number Mₖ(n) = n(nᵏ + 1)/2.
In combinatorial mathematics, two Latin squares of the same size (order) are said to be orthogonal if, when superimposed, the ordered pairs of entries in corresponding positions are all distinct. A set of Latin squares, all of the same order, all pairs of which are orthogonal, is called a set of mutually orthogonal Latin squares. This concept of orthogonality in combinatorics is strongly related to the concept of blocking in statistics, which ensures that independent variables are truly independent with no hidden confounding correlations. "Orthogonal" is thus synonymous with "independent" in that knowing one variable's value gives no further information about another variable's likely value.
A pandiagonal magic square or panmagic square is a magic square with the additional property that the broken diagonals, i.e. the diagonals that wrap round at the edges of the square, also add up to the magic constant.
The magic constant or magic sum of a magic square is the sum of numbers in any row, column, or diagonal of the magic square. For example, the magic square shown below has a magic constant of 15. For a normal magic square of order n – that is, a magic square which contains the numbers 1, 2, ..., n² – the magic constant is M = n(n² + 1)/2.
Conway's LUX method for magic squares is an algorithm by John Horton Conway for creating magic squares of order 4n+2, where n is a natural number.
An antimagic square of order n is an arrangement of the numbers 1 to n² in a square, such that the sums of the n rows, the n columns and the two diagonals form a sequence of 2n + 2 consecutive integers. The smallest antimagic squares have order 4. Antimagic squares contrast with magic squares, where each row, column, and diagonal sum must have the same value.
Location arithmetic is an additive (non-positional) binary numeral system, which John Napier explored as a computation technique in his treatise Rabdology (1617), both symbolically and on a chessboard-like grid.
Combinatorial design theory is the part of combinatorial mathematics that deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. These concepts are not made precise so that a wide range of objects can be thought of as being under the same umbrella. At times this might involve the numerical sizes of set intersections as in block designs, while at other times it could involve the spatial arrangement of entries in an array as in sudoku grids.
An associative magic square is a magic square for which each pair of numbers symmetrically opposite to the center sum up to the same value. For an n × n square, filled with the numbers from 1 to n², this common sum must equal n² + 1. These squares are also called associated magic squares, regular magic squares, regmagic squares, or symmetric magic squares.
Zhegalkin polynomials, also known as algebraic normal form, are a representation of functions in Boolean algebra. Introduced by the Russian mathematician Ivan Ivanovich Zhegalkin in 1927, they are the polynomial ring over the integers modulo 2. The resulting degeneracies of modular arithmetic result in Zhegalkin polynomials being simpler than ordinary polynomials, requiring neither coefficients nor exponents. Coefficients are redundant because 1 is the only nonzero coefficient. Exponents are redundant because in arithmetic mod 2, x² = x. Hence a polynomial such as 3x²y⁵z is congruent to, and can therefore be rewritten as, xyz.
The Siamese method, or De la Loubère method, is a simple method to construct magic squares of any odd order n. The method was brought to France in 1688 by the French mathematician and diplomat Simon de la Loubère, as he was returning from his 1687 embassy to the kingdom of Siam. The Siamese method makes the creation of magic squares straightforward.
A geometric magic square, often abbreviated to geomagic square, is a generalization of magic squares invented by Lee Sallows in 2001. A traditional magic square is a square array of numbers whose sum taken in any row, any column, or in either diagonal is the same target number. A geomagic square, on the other hand, is a square array of geometrical shapes in which those appearing in each row, column, or diagonal can be fitted together to create an identical shape called the target shape. As with numerical types, it is required that the entries in a geomagic square be distinct. Similarly, the eight trivial variants of any square resulting from its rotation and/or reflection are all counted as the same square. By the dimension of a geomagic square is meant the dimension of the pieces it uses. Hitherto interest has focused mainly on 2D squares using planar pieces, but pieces of any dimension are permitted.
Communication-avoiding algorithms minimize the movement of data within a memory hierarchy to improve running time and energy consumption. They minimize the total of two costs: arithmetic and communication. Communication, in this context, refers to moving data, either between levels of memory or between multiple processors over a network; it is much more expensive than arithmetic.