The expander mixing lemma intuitively states that the edges of certain $d$-regular graphs are evenly distributed throughout the graph. In particular, the number of edges between two vertex subsets $S$ and $T$ is always close to the expected number of edges between them in a random $d$-regular graph, namely $\frac{d}{n}|S||T|$.
Define an $(n, d, \lambda)$-graph to be a $d$-regular graph $G$ on $n$ vertices such that all of the eigenvalues of its adjacency matrix $A_G$ except one have absolute value at most $\lambda$. The $d$-regularity of the graph guarantees that the largest absolute value of an eigenvalue is $d$. In fact, the all-1's vector $\mathbf{1}$ is an eigenvector of $A_G$ with eigenvalue $d$, and the eigenvalues of the adjacency matrix will never exceed the maximum degree of $G$ in absolute value.
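For a concrete example (a minimal numerical sketch assuming Python with NumPy and NetworkX, which the text itself does not reference): the Petersen graph is a $(10, 3, 2)$-graph, since its adjacency spectrum is $3$, $1$ (multiplicity 5), and $-2$ (multiplicity 4).

```python
import numpy as np
import networkx as nx

# The Petersen graph: 3-regular on 10 vertices.
G = nx.petersen_graph()
A = nx.to_numpy_array(G)

# eigvalsh: eigenvalues of a symmetric matrix, in ascending order.
eigs = np.linalg.eigvalsh(A)
print(np.round(eigs, 6))       # [-2. -2. -2. -2.  1.  1.  1.  1.  1.  3.]

d = eigs[-1]                   # largest eigenvalue = degree = 3
lam = max(-eigs[0], eigs[-2])  # largest absolute value among the rest
print(d, lam)                  # 3.0 2.0, so this is a (10, 3, 2)-graph
```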
If we fix $d$ and $\lambda$, then $(n, d, \lambda)$-graphs form a family of expander graphs with a constant spectral gap.
Let $G = (V, E)$ be an $(n, d, \lambda)$-graph. For any two subsets $S, T \subseteq V$, let $e(S, T) = |\{(x, y) \in S \times T : xy \in E(G)\}|$ be the number of edges between $S$ and $T$ (counting edges contained in the intersection of $S$ and $T$ twice). Then

$$\left| e(S, T) - \frac{d|S||T|}{n} \right| \le \lambda\sqrt{|S||T|}.$$
We can in fact show that

$$\left| e(S, T) - \frac{d|S||T|}{n} \right| \le \lambda\sqrt{|S||T|\left(1 - \frac{|S|}{n}\right)\left(1 - \frac{|T|}{n}\right)}$$

using similar techniques. [1]
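Both inequalities are easy to test numerically. The sketch below (assuming Python with NumPy and NetworkX; the Petersen graph, a $(10, 3, 2)$-graph, is used purely as an illustration) checks them on random subset pairs.

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()          # a (10, 3, 2)-graph
n, d, lam = 10, 3, 2
A = nx.to_numpy_array(G)

rng = np.random.default_rng(0)
for _ in range(1000):
    S = (rng.random(n) < 0.5).astype(float)  # indicator vector of a random S
    T = (rng.random(n) < 0.5).astype(float)  # indicator vector of a random T
    s, t = S.sum(), T.sum()
    e_ST = S @ A @ T             # counts edges inside S ∩ T twice, as in the lemma
    gap = abs(e_ST - d * s * t / n)
    assert gap <= lam * np.sqrt(s * t) + 1e-9
    assert gap <= lam * np.sqrt(s * t * (1 - s / n) * (1 - t / n)) + 1e-9
```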
For biregular graphs, we have the following variation, where we take $\lambda$ to be the second largest eigenvalue. [2]
Let $G = (L, R, E)$ be a bipartite graph such that every vertex in $L$ is adjacent to $d_L$ vertices of $R$ and every vertex in $R$ is adjacent to $d_R$ vertices of $L$. Let $S \subseteq L$ and $T \subseteq R$ with $|S| = \alpha|L|$ and $|T| = \beta|R|$. Let $e(G) = |E(G)|$. Then

$$\left| \frac{e(S, T)}{e(G)} - \alpha\beta \right| \le \frac{\lambda_2}{\sqrt{d_L d_R}}\sqrt{\alpha\beta(1 - \alpha)(1 - \beta)} \le \frac{\lambda_2}{\sqrt{d_L d_R}}\sqrt{\alpha\beta}.$$

Note that $\sqrt{d_L d_R}$ is the largest eigenvalue of $G$.
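As a quick sanity check (a worked special case added here, not taken from the source): in the complete bipartite graph $K_{m,m'}$, every vertex of $L$ has degree $d_L = m'$ and every vertex of $R$ has degree $d_R = m$, and the adjacency spectrum is $\pm\sqrt{mm'}$ together with $0$ with multiplicity $m + m' - 2$. Hence $\lambda_2 = 0$ and the bound forces

$$\frac{e(S, T)}{e(G)} = \alpha\beta,$$

which indeed holds exactly: $e(S, T) = |S||T|$ and $e(G) = mm'$, so $e(S, T)/e(G) = \frac{|S|}{m}\cdot\frac{|T|}{m'} = \alpha\beta$.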
Let $A$ be the adjacency matrix of $G$ and let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ be the eigenvalues of $A$ (these eigenvalues are real because $A$ is symmetric). We know that $\lambda_1 = d$, with corresponding eigenvector $v_1 = \frac{1}{\sqrt{n}}\mathbf{1}$, the normalization of the all-1's vector. Define $\lambda = \max(|\lambda_2|, |\lambda_n|)$ and note that $\max(|\lambda_2|, \ldots, |\lambda_n|) \le \lambda$. Because $A$ is symmetric, we can pick eigenvectors $v_2, \ldots, v_n$ of $A$ corresponding to eigenvalues $\lambda_2, \ldots, \lambda_n$ so that $\{v_1, \ldots, v_n\}$ forms an orthonormal basis of $\mathbf{R}^n$.
Let $J$ be the $n \times n$ matrix of all 1's. Note that $v_1$ is an eigenvector of $J$ with eigenvalue $n$ and each other $v_i$, being perpendicular to $v_1$, is an eigenvector of $J$ with eigenvalue 0. For a vertex subset $U \subseteq V$, let $\mathbf{1}_U$ be the column vector with $v$-th coordinate equal to 1 if $v \in U$ and 0 otherwise. Then,

$$\left| e(S, T) - \frac{d}{n}|S||T| \right| = \left| \mathbf{1}_S^{\mathsf{T}}\left(A - \frac{d}{n}J\right)\mathbf{1}_T \right|.$$
Let $M = A - \frac{d}{n}J$. Because $A$ and $J$ share eigenvectors, the eigenvalues of $M$ are $0, \lambda_2, \ldots, \lambda_n$. By the Cauchy-Schwarz inequality, we have that $|\mathbf{1}_S^{\mathsf{T}} M \mathbf{1}_T| \le |\mathbf{1}_S|\,|M\mathbf{1}_T| = \sqrt{|S|}\,|M\mathbf{1}_T|$. Furthermore, because $M$ is self-adjoint, we can write

$$|M\mathbf{1}_T|^2 = \mathbf{1}_T^{\mathsf{T}} M^2 \mathbf{1}_T = \sum_{i=2}^{n} \lambda_i^2 \left( v_i^{\mathsf{T}} \mathbf{1}_T \right)^2 \le \lambda^2 |\mathbf{1}_T|^2 = \lambda^2 |T|.$$
This implies that $|M\mathbf{1}_T| \le \lambda\sqrt{|T|}$ and $\left| e(S, T) - \frac{d}{n}|S||T| \right| \le \lambda\sqrt{|S||T|}$.
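The key facts in this argument can be checked numerically. The sketch below (same hypothetical Python/NetworkX setup as in the earlier snippets) verifies on the Petersen graph that $M = A - \frac{d}{n}J$ annihilates the all-1's vector, that its spectrum is $\{0, \lambda_2, \ldots, \lambda_n\}$, and that $|M\mathbf{1}_T| \le \lambda\sqrt{|T|}$ for random subsets $T$.

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()              # a (10, 3, 2)-graph
n, d, lam = 10, 3, 2
A = nx.to_numpy_array(G)
M = A - (d / n) * np.ones((n, n))    # M = A - (d/n)J

print(np.allclose(M @ np.ones(n), 0))      # True: M annihilates the all-1's vector
print(np.round(np.linalg.eigvalsh(M), 6))  # [-2. -2. -2. -2.  0.  1.  1.  1.  1.  1.]

rng = np.random.default_rng(1)
for _ in range(1000):
    T = (rng.random(n) < 0.5).astype(float)  # indicator vector of a random T
    assert np.linalg.norm(M @ T) <= lam * np.sqrt(T.sum()) + 1e-9
```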
To show the tighter bound above, we instead consider the vectors $\mathbf{1}_S - \frac{|S|}{n}\mathbf{1}$ and $\mathbf{1}_T - \frac{|T|}{n}\mathbf{1}$, which are both perpendicular to $v_1$. We can expand

$$\mathbf{1}_S^{\mathsf{T}} M \mathbf{1}_T = \frac{|S||T|}{n^2}\,\mathbf{1}^{\mathsf{T}} M \mathbf{1} + \left(\mathbf{1}_S - \frac{|S|}{n}\mathbf{1}\right)^{\mathsf{T}} M \left(\mathbf{1}_T - \frac{|T|}{n}\mathbf{1}\right)$$

because the other two terms of the expansion are zero: $M\mathbf{1} = A\mathbf{1} - \frac{d}{n}J\mathbf{1} = d\mathbf{1} - d\mathbf{1} = \mathbf{0}$, and likewise $\mathbf{1}^{\mathsf{T}} M = \mathbf{0}^{\mathsf{T}}$ by symmetry. The first term is equal to $\frac{|S||T|}{n^2}\,\mathbf{1}^{\mathsf{T}} M \mathbf{1} = 0$ for the same reason, so we find that

$$\mathbf{1}_S^{\mathsf{T}} M \mathbf{1}_T = \left(\mathbf{1}_S - \frac{|S|}{n}\mathbf{1}\right)^{\mathsf{T}} M \left(\mathbf{1}_T - \frac{|T|}{n}\mathbf{1}\right).$$
We can bound the right hand side by

$$\lambda\left|\mathbf{1}_S - \frac{|S|}{n}\mathbf{1}\right|\left|\mathbf{1}_T - \frac{|T|}{n}\mathbf{1}\right| = \lambda\sqrt{|S|\left(1 - \frac{|S|}{n}\right)|T|\left(1 - \frac{|T|}{n}\right)}$$

using the same methods as in the earlier proof.
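For completeness (a short computation spelled out here; it is implicit in the appeal to "the same methods"), the norm of the centered indicator vector evaluates to

$$\left|\mathbf{1}_S - \frac{|S|}{n}\mathbf{1}\right|^2 = \mathbf{1}_S^{\mathsf{T}}\mathbf{1}_S - \frac{2|S|}{n}\mathbf{1}^{\mathsf{T}}\mathbf{1}_S + \frac{|S|^2}{n^2}\mathbf{1}^{\mathsf{T}}\mathbf{1} = |S| - \frac{2|S|^2}{n} + \frac{|S|^2}{n} = |S|\left(1 - \frac{|S|}{n}\right),$$

and similarly for $T$, which is exactly the factor appearing under the square root in the tighter bound.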
The expander mixing lemma can be used to upper bound the size of an independent set within a graph. In particular, the size of an independent set in an $(n, d, \lambda)$-graph is at most $\frac{\lambda n}{d}$. This is proved by letting $T = S$ in the statement above and using the fact that $e(S, S) = 0$.
An additional consequence is that, if $G$ is an $(n, d, \lambda)$-graph, then its chromatic number $\chi(G)$ is at least $\frac{d}{\lambda}$. This is because, in a valid graph coloring, the set of vertices of a given color is an independent set. By the above fact, each independent set has size at most $\frac{\lambda n}{d}$, so at least $\frac{d}{\lambda}$ such sets are needed to cover all of the vertices.
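To make this concrete (a brute-force sketch under the same Python/NetworkX assumptions as the earlier snippets; the Petersen example is an illustration, not part of the source): for the Petersen graph, a $(10, 3, 2)$-graph, the lemma gives independent sets of size at most $\lambda n/d = 20/3 \approx 6.67$ and chromatic number at least $d/\lambda = 1.5$, while the true values are $4$ and $3$, so both bounds hold, if loosely, on this small example.

```python
import itertools
import networkx as nx

G = nx.petersen_graph()     # a (10, 3, 2)-graph
n, d, lam = 10, 3, 2

# Brute-force the independence number over all vertex subsets.
alpha = max(
    r
    for r in range(n + 1)
    for S in itertools.combinations(G.nodes, r)
    if all(not G.has_edge(u, v) for u, v in itertools.combinations(S, 2))
)
print(alpha, lam * n / d)   # 4 6.666...: independent sets respect the spectral bound
print(d / lam)              # 1.5 <= chi(Petersen) = 3
```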
A second application of the expander mixing lemma is to provide an upper bound on the maximum possible size of an independent set within a polarity graph. Given a finite projective plane $\pi$ with a polarity $\perp$, the polarity graph is a graph where the vertices are the points of $\pi$, and vertices $x$ and $y$ are connected if and only if $x \in y^{\perp}$. In particular, if $\pi$ has order $q$, then the expander mixing lemma can show that an independent set in the polarity graph can have size at most $q^{3/2} - q + 2\sqrt{q} - 1$, a bound proved by Hobart and Williford.
Bilu and Linial showed [3] that a converse holds as well: if a $d$-regular graph $G = (V, E)$ on $n$ vertices satisfies that for any two subsets $S, T \subseteq V$ with $S \cap T = \emptyset$ we have

$$\left| e(S, T) - \frac{d|S||T|}{n} \right| \le \lambda\sqrt{|S||T|},$$

then its second-largest (in absolute value) eigenvalue is bounded by $O(\lambda(1 + \log(d/\lambda)))$.
Friedman and Wigderson proved the following generalization of the mixing lemma to hypergraphs.
Let $H$ be a $k$-uniform hypergraph, i.e. a hypergraph in which every "edge" is a tuple of $k$ vertices. For any choice of subsets $V_1, \ldots, V_k$ of vertices,

$$\left| |e(V_1, \ldots, V_k)| - \frac{k!\,|E(H)|}{n^k}|V_1|\cdots|V_k| \right| \le \lambda_2(H)\sqrt{|V_1|\cdots|V_k|}.$$