In graph theory and theoretical computer science, the colour refinement algorithm, also known as naive vertex classification or the 1-dimensional version of the Weisfeiler-Leman algorithm, is a routine used for testing whether two graphs are isomorphic.[1] While it solves graph isomorphism on almost all graphs, there are graphs on which it fails, such as regular graphs: colour refinement cannot distinguish any two regular graphs that have the same degree and the same number of vertices.
The algorithm takes as input a graph G with n vertices. It proceeds in iterations, and each iteration produces a new colouring of the vertices. Formally, a "colouring" is a function from the vertices of this graph into some set (of "colours"). We define a sequence of vertex colourings λ_0, λ_1, λ_2, … as follows: the initial colouring λ_0 assigns the same colour to every vertex, and in each iteration the next colouring is obtained by setting, for every vertex v,

λ_{i+1}(v) = (λ_i(v), {{λ_i(w) : w ∈ N(v)}}),

where N(v) is the set of neighbours of v and {{…}} denotes a multiset.
In other words, the new colour λ_{i+1}(v) of a vertex v is the pair formed from its previous colour and the multiset of the previous colours of its neighbours. This algorithm keeps refining the current colouring. At some point it stabilises, i.e., λ_{i+1} induces the same partition of the vertices into colour classes as λ_i. This final colouring is called the stable colouring.
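The following Python sketch illustrates the iteration just described; the function name colour_refinement and the adjacency-list representation are illustrative choices, not part of any standard library. Each round replaces the colour of every vertex by the pair (previous colour, multiset of the neighbours' previous colours), renames the resulting signatures to small integers, and stops once the partition into colour classes no longer changes.

```python
def colour_refinement(adj):
    """Minimal sketch of colour refinement.

    `adj` maps each vertex to a list of its neighbours.
    Returns the stable colouring as a dict from vertex to an integer colour.
    """
    # Initial colouring lambda_0: every vertex receives the same colour.
    colour = {v: 0 for v in adj}
    while True:
        # lambda_{i+1}(v) = (lambda_i(v), multiset of lambda_i over the neighbours of v);
        # the multiset is represented as a sorted tuple.
        signature = {v: (colour[v], tuple(sorted(colour[w] for w in adj[v])))
                     for v in adj}
        # Rename the signatures to small integer colour ids, deterministically.
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: ids[signature[v]] for v in adj}
        # The colouring has stabilised when it induces the same partition as before,
        # i.e. when the number of colour classes did not increase.
        if len(ids) == len(set(colour.values())):
            return new_colour
        colour = new_colour
```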
Colour refinement can be used as a subroutine for an important computational problem: graph isomorphism. In this problem we are given two graphs G and H as input, and our task is to determine whether they are isomorphic. Informally, this means that the two graphs are the same up to relabelling of vertices.
To test whether G and H are isomorphic, we could try the following: run colour refinement on both graphs. If the stable colourings produced are different, we know that the two graphs are not isomorphic. However, the same stable colouring may be produced even though the two graphs are not isomorphic; see below.
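One convenient way to make the two stable colourings directly comparable is to run colour refinement on the disjoint union of G and H, so that both graphs share a single set of colour ids, and then compare the multisets of colours occurring on each side. The sketch below, reusing the colour_refinement function above, does exactly this; the function name refinement_distinguishes and the tagging of vertices by their graph of origin are illustrative.

```python
from collections import Counter

def refinement_distinguishes(adj_g, adj_h):
    """Return True if colour refinement distinguishes the two graphs
    (proving them non-isomorphic); False means the test is inconclusive."""
    # Build the disjoint union, tagging every vertex with the graph it comes from.
    union = {("G", v): [("G", w) for w in ws] for v, ws in adj_g.items()}
    union.update({("H", v): [("H", w) for w in ws] for v, ws in adj_h.items()})
    stable = colour_refinement(union)
    # Compare the multisets of stable colours appearing in G and in H.
    colours_g = Counter(c for (side, _), c in stable.items() if side == "G")
    colours_h = Counter(c for (side, _), c in stable.items() if side == "H")
    return colours_g != colours_h
```

If the two multisets differ, the graphs cannot be isomorphic; if they agree, the test is inconclusive.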
It is easy to see that if colour refinement is given an n-vertex graph as input, a stable colouring is produced after at most n − 1 iterations. Conversely, there exist graphs where this bound is realised.[2] This leads to an O((n + m) log n) implementation, where n is the number of vertices and m the number of edges.[3] This complexity has been proven to be optimal under reasonable assumptions.[4]
We say that two graphs G and H are distinguished by colour refinement if the algorithm yields a different output on G than on H. There are simple examples of graphs that are not distinguished by colour refinement. For example, it does not distinguish a cycle of length 6 from a pair of triangles (example V.1 in [5]). Despite this, the algorithm is very powerful in that a random graph will be identified by it asymptotically almost surely.[6] Even stronger, it has been shown that as n increases, the proportion of n-vertex graphs that are not identified by colour refinement decreases exponentially.[7]
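With the illustrative functions sketched above, the 6-cycle and a pair of disjoint triangles indeed receive identical stable colourings, since both graphs are 2-regular and every vertex therefore keeps the same colour throughout:

```python
# The 6-cycle C6 and two disjoint triangles, as adjacency lists.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

# Both graphs are 2-regular, so colour refinement cannot tell them apart.
print(refinement_distinguishes(c6, two_triangles))  # prints False
```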
The expressivity of colour refinement also has a logical characterisation: two graphs can be distinguished by colour refinement if and only if they can be distinguished by the two-variable fragment of first-order logic with counting.[8]