In computer science, an e-graph is a data structure that stores an equivalence relation over terms of some language.
Let $\Sigma$ be a set of uninterpreted functions, where $\Sigma_n \subseteq \Sigma$ is the subset of $\Sigma$ consisting of functions of arity $n$. Let $\mathrm{id}$ be a countable set of opaque identifiers that may be compared for equality, called e-class IDs. The application of $f \in \Sigma_n$ to e-class IDs $i_1, \ldots, i_n \in \mathrm{id}$ is denoted $f(i_1, \ldots, i_n)$ and called an e-node.
The e-graph then represents equivalence classes of e-nodes, using the following data structures: [1]
- a union-find structure $U$ over the e-class IDs, which partitions the e-class IDs into equivalence classes and supports the usual $\mathrm{add}$, $\mathrm{union}$, and $\mathrm{find}$ operations;
- an e-class map $M$ that associates each e-class ID with its e-class, a set of e-nodes; and
- a hashcons $H$ that maps e-nodes to e-class IDs.
In addition to the above structure, a valid e-graph conforms to several data structure invariants. [2] Two e-nodes are equivalent if they are in the same e-class. The congruence invariant states that an e-graph must ensure that equivalence is closed under congruence, where two e-nodes $f(i_1, \ldots, i_n)$ and $f(j_1, \ldots, j_n)$ are congruent when each $i_k$ is equivalent to $j_k$, that is, $\mathrm{find}(U, i_k) = \mathrm{find}(U, j_k)$ for all $k$. The hashcons invariant states that the hashcons maps canonical e-nodes to their e-class ID.
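The following is a minimal, illustrative Python sketch of these data structures. The names ENode, EGraph, classes, and hashcons are ad hoc choices for this sketch and do not refer to any particular implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ENode:
    """An application of a function symbol to child e-class IDs."""
    op: str                  # the function symbol f
    children: tuple          # the e-class IDs i1, ..., in

class EGraph:
    def __init__(self):
        self.parent = {}     # union-find parent pointers over e-class IDs (U)
        self.classes = {}    # canonical e-class ID -> set of e-nodes (the e-class map M)
        self.hashcons = {}   # canonical e-node -> e-class ID (the hashcons H)

    def find(self, i):
        """Return the canonical e-class ID of i (path compression omitted for brevity)."""
        while self.parent[i] != i:
            i = self.parent[i]
        return i

    def canonicalize(self, node):
        """Replace every child e-class ID of an e-node by its canonical representative."""
        return ENode(node.op, tuple(self.find(c) for c in node.children))
```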
E-graphs expose wrappers around the $\mathrm{add}$, $\mathrm{union}$, and $\mathrm{find}$ operations from the union-find that preserve the e-graph invariants. A further operation, e-matching, is described below.
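Continuing the illustrative sketch above, the wrappers and the step that restores the invariants after unions might look as follows. The rebuild method is a simplified fixpoint in the spirit of deferred invariant restoration as popularized by the egg library, but the code is only a sketch:

```python
    # Further methods of the EGraph class sketched above.

    def add(self, node):
        """Add an e-node, reusing an existing e-class if a congruent e-node is already known."""
        node = self.canonicalize(node)
        if node in self.hashcons:
            return self.find(self.hashcons[node])
        new_id = len(self.parent)            # fresh e-class ID
        self.parent[new_id] = new_id
        self.classes[new_id] = {node}
        self.hashcons[node] = new_id
        return new_id

    def union(self, a, b):
        """Merge the e-classes of a and b; rebuild() must run before the invariants hold again."""
        a, b = self.find(a), self.find(b)
        if a == b:
            return a
        self.parent[b] = a
        self.classes[a] |= self.classes.pop(b)
        return a

    def rebuild(self):
        """Re-canonicalize all e-nodes and merge newly congruent ones until a fixpoint,
        restoring the congruence and hashcons invariants."""
        changed = True
        while changed:
            changed = False
            new_classes, new_hashcons = {}, {}
            for cid, nodes in self.classes.items():
                root = self.find(cid)
                for node in nodes:
                    canon = self.canonicalize(node)
                    if canon in new_hashcons and self.find(new_hashcons[canon]) != root:
                        # Two congruent e-nodes sit in different e-classes: merge the classes.
                        other = self.find(new_hashcons[canon])
                        self.parent[max(root, other)] = min(root, other)
                        root = min(root, other)
                        changed = True
                    new_hashcons[canon] = root
                    new_classes.setdefault(root, set()).add(canon)
            self.classes, self.hashcons = new_classes, new_hashcons
```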
An e-graph can also be formulated as a bipartite graph whose two vertex sets are the e-classes and the e-nodes: there is a directed edge from each e-class to each of its members, and from each e-node to each of its children. [3]
Let $V$ be a set of variables and let $\mathrm{Term}(\Sigma, V)$ be the smallest set that includes the 0-arity function symbols (also called constants), includes the variables, and is closed under application of the function symbols. In other words, $\mathrm{Term}(\Sigma, V)$ is the smallest set such that $\Sigma_0 \subseteq \mathrm{Term}(\Sigma, V)$, $V \subseteq \mathrm{Term}(\Sigma, V)$, and when $t_1, \ldots, t_n \in \mathrm{Term}(\Sigma, V)$ and $f \in \Sigma_n$, then $f(t_1, \ldots, t_n) \in \mathrm{Term}(\Sigma, V)$. A term containing variables is called a pattern; a term without variables is called ground.
An e-graph represents a ground term $t$ if one of its e-classes represents $t$. An e-class $C$ represents $t$ if some e-node $n \in C$ does. An e-node $f(i_1, \ldots, i_n)$ represents a term $f(t_1, \ldots, t_n)$ if they have the same function symbol and each e-class with ID $i_k$ represents the term $t_k$ ($1 \le k \le n$).
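Continuing the same sketch, this recursive definition translates directly into a membership check. Here ground terms are encoded as nested (symbol, list-of-subterms) pairs, an encoding chosen purely for illustration:

```python
    def represents(self, cid, term):
        """Return True if the e-class with ID cid represents the ground term."""
        op, args = term
        cid = self.find(cid)
        for node in self.classes.get(cid, ()):
            if (node.op == op and len(node.children) == len(args)
                    and all(self.represents(c, t) for c, t in zip(node.children, args))):
                return True
        return False
```

The e-graph as a whole then represents a term such as ("f", [("a", [])]) exactly when this check succeeds for some e-class ID.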
E-matching is an operation that takes a pattern $p$ and an e-graph, and yields all pairs $(\sigma, c)$ where $\sigma$ is a substitution mapping the variables in $p$ to e-class IDs and $c$ is an e-class ID such that the term $p[\sigma]$ is represented by the e-class with ID $c$. There are several known algorithms for e-matching. [4] [5] The relational e-matching algorithm is based on worst-case optimal joins and is worst-case optimal. [6]
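A naive top-down e-matching procedure, again continuing the illustrative sketch, can be written as below. Patterns use the same nested encoding, with bare strings standing for pattern variables; practical implementations use far more efficient algorithms such as the relational approach cited above:

```python
    def ematch(self, pattern, cid, subst=None):
        """Yield every substitution under which the e-class with ID cid represents the pattern."""
        subst = {} if subst is None else subst
        cid = self.find(cid)
        if isinstance(pattern, str):                   # a pattern variable
            if pattern in subst:
                if self.find(subst[pattern]) == cid:   # already bound: bindings must agree
                    yield subst
            else:
                yield {**subst, pattern: cid}          # bind the variable to this e-class
            return
        op, args = pattern
        for node in self.classes.get(cid, ()):
            if node.op != op or len(node.children) != len(args):
                continue
            # Match the children left to right, threading the substitution through.
            partial = [subst]
            for child, arg in zip(node.children, args):
                partial = [s2 for s in partial for s2 in self.ematch(arg, child, s)]
            yield from partial
```

E-matching a pattern against the whole e-graph then amounts to collecting the pairs $(\sigma, c)$ produced by running this procedure for every canonical e-class ID $c$.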
Given an e-class and a cost function that maps each function symbol in $\Sigma$ to a natural number, the extraction problem is to find a ground term with minimal total cost that is represented by the given e-class. This problem is NP-hard. [7] There is also no constant-factor approximation algorithm for this problem, which can be shown by reduction from the set cover problem. However, for graphs with bounded treewidth, there is a linear-time, fixed-parameter tractable algorithm. [8]
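For a simple additive tree-cost model, where the cost of a term is the sum of the costs of all its symbol occurrences, a bottom-up fixpoint yields a minimal-cost term. This is a common extraction heuristic and does not address the harder variants discussed above; the sketch below continues the same illustrative class:

```python
    def extract(self, root, cost):
        """Extract a minimal-cost ground term represented by the e-class with ID root,
        where cost maps each function symbol to a natural number (additive tree-cost model)."""
        best = {}                                      # canonical e-class ID -> (cost, e-node)
        changed = True
        while changed:
            changed = False
            for cid, nodes in self.classes.items():
                cid = self.find(cid)
                for node in nodes:
                    if not all(self.find(c) in best for c in node.children):
                        continue                       # some child has no extractable term yet
                    # Default cost 1 for symbols missing from the cost map.
                    c = cost.get(node.op, 1) + sum(best[self.find(ch)][0] for ch in node.children)
                    if cid not in best or c < best[cid][0]:
                        best[cid] = (c, node)
                        changed = True

        def build(cid):
            _, node = best[self.find(cid)]
            return (node.op, [build(ch) for ch in node.children])

        return build(root)
```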
Equality saturation is a technique for building optimizing compilers using e-graphs. [10] It operates by applying a set of rewrites using e-matching until the e-graph is saturated or some other halting condition is reached, such as a timeout, an e-graph size limit, or a fixed number of iterations. After rewriting, an optimal term is extracted from the e-graph according to some cost function, usually related to AST size or performance considerations.
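Schematically, and again only as an illustration continuing the Python sketch above, the equality saturation loop can be written with rewrites given as (lhs, rhs) pattern pairs; the helper instantiate is an ad hoc name introduced here:

```python
def instantiate(egraph, pattern, subst):
    """Add the pattern instantiated by the substitution to the e-graph; return its e-class ID."""
    if isinstance(pattern, str):                       # a pattern variable
        return subst[pattern]
    op, args = pattern
    return egraph.add(ENode(op, tuple(instantiate(egraph, a, subst) for a in args)))

def equality_saturation(egraph, rewrites, max_iters=10):
    """Apply (lhs, rhs) rewrites via e-matching until saturation or an iteration limit is hit."""
    for _ in range(max_iters):
        # Collect all matches first so the search runs over a stable e-graph.
        matches = []
        for lhs, rhs in rewrites:
            for cid in list(egraph.classes):
                for subst in egraph.ematch(lhs, cid):
                    matches.append((rhs, subst, cid))
        saturated = True
        for rhs, subst, cid in matches:
            new_id = instantiate(egraph, rhs, subst)
            if egraph.find(new_id) != egraph.find(cid):
                egraph.union(new_id, cid)
                saturated = False
        egraph.rebuild()           # restore the congruence and hashcons invariants
        if saturated:
            break
    return egraph
```

A minimal-cost term is then obtained from the resulting e-graph with an extraction procedure such as the one sketched earlier.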
E-graphs are used in automated theorem proving. They are a crucial part of modern SMT solvers such as Z3 [11] and CVC4, where they are used to decide the empty theory by computing the congruence closure of a set of equalities, and e-matching is used to instantiate quantifiers. [12] In DPLL(T)-based solvers that use conflict-driven clause learning (also known as non-chronological backtracking), e-graphs are extended to produce proof certificates. [13] E-graphs are also used in the Simplify theorem prover of ESC/Java. [14]
Equality saturation is used in specialized optimizing compilers, [15] e.g. for deep learning [16] and linear algebra. [17] Equality saturation has also been used for translation validation applied to the LLVM toolchain. [18]
E-graphs have been applied to several problems in program analysis, including fuzzing, [19] abstract interpretation, [20] and library learning. [21]