Magic graph


A magic graph is a graph whose edges are labelled by the first q positive integers, where q is the number of edges, so that the sum of the labels on the edges incident with any vertex is the same, independent of the choice of vertex; equivalently, it is a graph that admits such a labelling. The name "magic" is sometimes used when the labels may be any positive integers; a graph and a labelling that use the first q positive integers are then called supermagic.
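As a concrete illustration, here is a minimal Python sketch (the function name is_magic and the dictionary representation of the labelling are purely illustrative, not from any reference) that checks whether a given edge labelling satisfies this definition:

```python
from collections import defaultdict

def is_magic(edges, label):
    """Return True if `label` (a dict mapping each edge to an integer) is a
    magic labelling of the graph given by `edges`: the labels must be exactly
    1..q, where q is the number of edges, and the labels on the edges
    incident with each vertex must have the same sum."""
    q = len(edges)
    if sorted(label[e] for e in edges) != list(range(1, q + 1)):
        return False
    vertex_sum = defaultdict(int)
    for u, v in edges:
        vertex_sum[u] += label[(u, v)]
        vertex_sum[v] += label[(u, v)]
    return len(set(vertex_sum.values())) == 1

# The 4-cycle cannot be magic: adjacent vertices share an edge, so equal
# vertex sums would force opposite edges to carry the same label.
cycle = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(is_magic(cycle, {("a", "b"): 1, ("b", "c"): 2,
                       ("c", "d"): 3, ("d", "a"): 4}))  # False
```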

A graph is vertex-magic if its vertices can be labelled so that the sum of the labels of the two endpoints is the same for every edge. It is total magic if its edges and vertices can be labelled so that the vertex label plus the sum of the labels on the edges incident with that vertex is a constant.
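The total magic condition can be tested in the same way. The sketch below assumes, as a convention not stated above, that the vertex and edge labels together are the first v + e positive integers; under that assumption the triangle satisfies the condition with constant 12:

```python
def is_total_magic(vertices, edges, vlabel, elabel):
    """Check the condition stated above: the vertex and edge labels (assumed
    here to be the integers 1..(v+e)) are such that each vertex label plus
    the labels of its incident edges gives the same constant."""
    labels = [vlabel[v] for v in vertices] + [elabel[e] for e in edges]
    if sorted(labels) != list(range(1, len(labels) + 1)):
        return False
    sums = {vlabel[v] + sum(elabel[e] for e in edges if v in e)
            for v in vertices}
    return len(sums) == 1

# The triangle admits such a labelling: vertices 1, 2, 3 and edges 4, 5, 6,
# with each vertex "opposite" the edge whose label exceeds it by 3.
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c"), ("a", "c")]
print(is_total_magic(V, E, {"a": 1, "b": 2, "c": 3},
                     {("b", "c"): 4, ("a", "c"): 5, ("a", "b"): 6}))  # True, constant 12
```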

There are a great many variations on the concept of magic labelling of a graph. There is much variation in terminology as well. The definitions here are perhaps the most common.

Comprehensive references for magic labellings and magic graphs are Gallian (1998), Wallis (2001), and Marr and Wallis (2013).

Magic squares

Euler diagram of requirements of some types of 4×4 magic squares. Cells of the same colour sum to the magic constant. * In 4×4 most-perfect magic squares, any 2 cells that are 2 cells diagonally apart (including wraparound) sum to half the magic constant, hence any 2 such pairs also sum to the magic constant.

A semimagic square is an n×n square with the numbers 1 to n² in its cells, in which the sum of each row and column is the same. A semimagic square is equivalent to a magic labelling of the complete bipartite graph Kn,n. The two vertex sets of Kn,n correspond to the rows and the columns of the square, respectively, and the label on the edge joining the vertex for row i to the vertex for column j is the entry in row i, column j of the semimagic square.
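Reusing the is_magic checker sketched earlier, the correspondence can be made explicit; the helper name labelling_from_square is illustrative only, and the example uses the classical 3×3 magic square, which in particular is semimagic:

```python
def labelling_from_square(square):
    """Turn an n-by-n semimagic square into an edge labelling of K_{n,n}:
    the edge joining row vertex i to column vertex j gets the entry in
    row i, column j."""
    n = len(square)
    return {(f"r{i}", f"s{j}"): square[i][j]
            for i in range(n) for j in range(n)}

# The classical 3x3 magic square (rows and columns sum to 15) gives a magic
# labelling of K_{3,3} in which every vertex sum is 15.
lo_shu = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]
label = labelling_from_square(lo_shu)
print(is_magic(list(label), label))  # True
```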

The definition of semimagic squares differs from the definition of magic squares in the treatment of the diagonals of the square. Magic squares are required to have diagonals with the same sum as the row and column sums, but for semimagic squares this is not required. Thus, every magic square is semimagic, but not vice versa.

Related Research Articles

<span class="mw-page-title-main">Cube</span> Solid object with six equal square faces

In geometry, a cube is a three-dimensional solid object bounded by six square faces, facets, or sides, with three meeting at each vertex. Viewed from a corner, it is a hexagon and its net is usually depicted as a cross.

<span class="mw-page-title-main">Hypercube</span> Convex polytope, the n-dimensional analogue of a square and a cube

In geometry, a hypercube is an n-dimensional analogue of a square and a cube. It is a closed, compact, convex figure whose 1-skeleton consists of groups of opposite parallel line segments aligned in each of the space's dimensions, perpendicular to each other and of the same length. A unit hypercube's longest diagonal in n dimensions is equal to √n.

In combinatorics, Ramsey's theorem, in one of its graph-theoretic forms, states that one will find monochromatic cliques in any edge labelling (with colours) of a sufficiently large complete graph. To demonstrate the theorem for two colours (say, blue and red), let r and s be any two positive integers. Ramsey's theorem states that there exists a least positive integer R(r, s) for which every blue-red edge colouring of the complete graph on R(r, s) vertices contains a blue clique on r vertices or a red clique on s vertices. (Here R(r, s) signifies an integer that depends on both r and s.)
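As a small illustration, the finite bound R(3, 3) ≤ 6 can be checked exhaustively; the Python sketch below (illustrative only, and practical only for very small cases) examines every blue-red colouring of the edges of K6 for a monochromatic triangle:

```python
from itertools import combinations, product

edges = list(combinations(range(6), 2))       # the 15 edges of K_6
triangles = list(combinations(range(6), 3))   # the 20 triangles of K_6

def has_mono_triangle(colouring):
    colour = dict(zip(edges, colouring))
    return any(colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
               for a, b, c in triangles)

# Every one of the 2^15 two-colourings of K_6 contains a monochromatic
# triangle, confirming R(3, 3) <= 6.
print(all(has_mono_triangle(c) for c in product("RB", repeat=15)))  # True
```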

In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.
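A minimal sketch of the construction for a small undirected graph (the function name is illustrative):

```python
def adjacency_matrix(n, edges):
    """Adjacency matrix of an undirected graph on vertices 0..n-1:
    entry (i, j) is 1 when {i, j} is an edge, 0 otherwise."""
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = 1
        A[v][u] = 1   # symmetric, since the graph is undirected
    return A

# The 4-cycle 0-1-2-3-0:
for row in adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)]):
    print(row)
```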

<span class="mw-page-title-main">Component (graph theory)</span> Maximal subgraph whose vertices can reach each other

In graph theory, a component of an undirected graph is a connected subgraph that is not part of any larger connected subgraph. The components of any graph partition its vertices into disjoint sets, and are the induced subgraphs of those sets. A graph that is itself connected has exactly one component, consisting of the whole graph. Components are sometimes called connected components.

104 is the natural number following 103 and preceding 105.

In mathematics, an incidence matrix is a logical matrix that shows the relationship between two classes of objects, usually called an incidence relation. If the first class is X and the second is Y, the matrix has one row for each element of X and one column for each element of Y. The entry in row x and column y is 1 if x and y are related and 0 if they are not. There are several variations on this definition.
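A short sketch of the construction (the helper name and the vertex-edge example are illustrative):

```python
def incidence_matrix(X, Y, related):
    """0-1 incidence matrix: one row per element of X, one column per
    element of Y, with a 1 exactly where the given relation holds."""
    return [[1 if related(x, y) else 0 for y in Y] for x in X]

# Vertex-edge incidence of the triangle: rows are vertices, columns are edges.
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c"), ("a", "c")]
for row in incidence_matrix(V, E, lambda v, e: v in e):
    print(row)
```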

<span class="mw-page-title-main">Graph (abstract data type)</span> Abstract data type in computer science

In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics.

An antimagic square of order n is an arrangement of the numbers 1 to n² in a square, such that the sums of the n rows, the n columns and the two diagonals form a sequence of 2n + 2 consecutive integers. The smallest antimagic squares have order 4. Antimagic squares contrast with magic squares, where each row, column, and diagonal sum must have the same value.
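A sketch of the defining check (the function name is illustrative); for contrast it is applied to the classical 3×3 magic square, whose eight sums are all equal rather than consecutive:

```python
def is_antimagic_square(square):
    """The n row sums, n column sums and the two diagonal sums must form
    2n + 2 consecutive integers (in some order)."""
    n = len(square)
    sums = [sum(row) for row in square]
    sums += [sum(square[i][j] for i in range(n)) for j in range(n)]
    sums.append(sum(square[i][i] for i in range(n)))
    sums.append(sum(square[i][n - 1 - i] for i in range(n)))
    s = sorted(sums)
    return s == list(range(s[0], s[0] + 2 * n + 2))

# The classical 3x3 magic square is the opposite extreme: all eight sums are 15.
print(is_antimagic_square([[2, 7, 6], [9, 5, 1], [4, 3, 8]]))  # False
```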

In the mathematical field of graph theory, Kirchhoff's theorem or Kirchhoff's matrix tree theorem, named after Gustav Kirchhoff, is a theorem about the number of spanning trees in a graph, showing that this number can be computed in polynomial time from the determinant of a submatrix of the Laplacian matrix of the graph; specifically, the number is equal to any cofactor of the Laplacian matrix. Kirchhoff's theorem is a generalization of Cayley's formula, which provides the number of spanning trees in a complete graph.

In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.
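A short sketch tying this entry and the previous one together (all function names are illustrative): it builds the Laplacian of K4, takes one cofactor, and recovers Cayley's count 4^(4−2) = 16 of spanning trees:

```python
from itertools import combinations

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on vertices 0..n-1."""
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    return L

def det(M):
    """Determinant by cofactor expansion along the first row (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def spanning_trees(n, edges):
    """Matrix-tree theorem: the number of spanning trees equals any cofactor
    of the Laplacian; here the last row and column are deleted."""
    L = laplacian(n, edges)
    return det([row[:-1] for row in L[:-1]])

# K_4 has 4^(4-2) = 16 spanning trees, in agreement with Cayley's formula.
print(spanning_trees(4, list(combinations(range(4), 2))))  # 16
```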

In the mathematical discipline of graph theory, a graph labelling is the assignment of labels, traditionally represented by integers, to edges and/or vertices of a graph.

<span class="mw-page-title-main">Rook's graph</span> Graph of chess rook moves

In graph theory, a rook's graph is an undirected graph that represents all legal moves of the rook chess piece on a chessboard. Each vertex of a rook's graph represents a square on a chessboard, and there is an edge between any two squares sharing a row (rank) or column (file), the squares that a rook can move between. These graphs can be constructed for chessboards of any rectangular shape. Although rook's graphs have only minor significance in chess lore, they are more important in the abstract mathematics of graphs through their alternative constructions: rook's graphs are the Cartesian product of two complete graphs, and are the line graphs of complete bipartite graphs. The square rook's graphs constitute the two-dimensional Hamming graphs.
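A small sketch of the direct construction (the function name is illustrative); on a 3×3 board every square attacks 2(3 − 1) = 4 others, giving 9 · 4 / 2 = 18 edges:

```python
from itertools import combinations

def rooks_graph(m, n):
    """Rook's graph of an m-by-n board: vertices are squares, edges join
    squares that share a row or a column."""
    squares = [(i, j) for i in range(m) for j in range(n)]
    return [(a, b) for a, b in combinations(squares, 2)
            if a[0] == b[0] or a[1] == b[1]]

print(len(rooks_graph(3, 3)))  # 18
```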

<span class="mw-page-title-main">Handshaking lemma</span> Every graph has evenly many odd vertices

In graph theory, a branch of mathematics, the handshaking lemma is the statement that, in every finite undirected graph, the number of vertices that touch an odd number of edges is even. For example, if there is a party of people who shake hands, the number of people who shake an odd number of other people's hands is even. The handshaking lemma is a consequence of the degree sum formula, also sometimes called the handshaking lemma, according to which the sum of the degrees equals twice the number of edges in the graph. Both results were proven by Leonhard Euler (1736) in his famous paper on the Seven Bridges of Königsberg that began the study of graph theory.
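A small sketch of the count (the function name is illustrative), using the Königsberg bridge multigraph from Euler's paper, in which all four land masses have odd degree:

```python
from collections import Counter

def odd_degree_count(edges):
    """Number of vertices with odd degree; the handshaking lemma says this is even."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return sum(1 for d in degree.values() if d % 2 == 1)

# The Seven Bridges of Königsberg: all four land masses have odd degree,
# and 4 is indeed even.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(odd_degree_count(bridges))  # 4
```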

Polyhedral combinatorics is a branch of mathematics, within combinatorics and discrete geometry, that studies the problems of counting and describing the faces of convex polyhedra and higher-dimensional convex polytopes.

<span class="mw-page-title-main">Associahedron</span> Convex polytope of parenthesizations

In mathematics, an associahedron Kn is an (n − 2)-dimensional convex polytope in which each vertex corresponds to a way of correctly inserting opening and closing parentheses in a string of n letters, and the edges correspond to a single application of the associativity rule. Equivalently, the vertices of an associahedron correspond to the triangulations of a regular polygon with n + 1 sides and the edges correspond to edge flips in which a single diagonal is removed from a triangulation and replaced by a different diagonal. Associahedra are also called Stasheff polytopes after the work of Jim Stasheff, who rediscovered them in the early 1960s after earlier work on them by Dov Tamari.

<span class="mw-page-title-main">Directed graph</span> Graph with oriented edges

In mathematics, and more specifically in graph theory, a directed graph is a graph that is made up of a set of vertices connected by directed edges, often called arcs.

In graph theory, a friendly-index set is a finite set of integers associated with a given undirected graph and generated by a type of graph labeling called a friendly labeling.

In mathematics, the minimum rank is a graph parameter for a graph G. It was motivated by the Colin de Verdière graph invariant.

In the mathematical discipline of graph theory, a rainbow matching in an edge-colored graph is a matching in which all the edges have distinct colors.

References