In mathematics, the spectral gap is the difference between the moduli of the two largest eigenvalues of a matrix or operator; alternatively, it is sometimes taken as the smallest non-zero eigenvalue. Various theorems relate this difference to other properties of the system. The spectral gap takes its name from the spectrum of the matrix, that is, the list of its eigenvalues. It provides insight into diffusion within a graph: when the spectral gap is identified with the smallest non-zero eigenvalue of the graph Laplacian, it corresponds to the mode of the network state that shows the slowest exponential decay over time.
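As an illustration, here is a minimal sketch (assuming NumPy; the 4-cycle graph and the "smallest non-zero Laplacian eigenvalue" convention are illustrative choices, not canonical ones) of computing a spectral gap:

```python
import numpy as np

# Adjacency matrix of a 4-cycle (illustrative example graph).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Graph Laplacian L = D - A, with D the diagonal degree matrix.
L = np.diag(A.sum(axis=1)) - A

# Eigenvalues of a symmetric matrix, in ascending order.
eigenvalues = np.linalg.eigvalsh(L)

# For a connected graph the smallest Laplacian eigenvalue is 0,
# so the spectral gap (in this sense) is the next eigenvalue.
spectral_gap = eigenvalues[1]
print(spectral_gap)  # 2.0 for the 4-cycle
```

The larger this gap, the faster the slowest diffusion mode decays.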
In graph theory, an expander graph is a sparse graph that has strong connectivity properties, quantified using vertex, edge or spectral expansion. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes.
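Concretely, for a d-regular graph with adjacency eigenvalues $d = \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$, spectral expansion is often measured (one common convention among several) by the size of the gap

$$ d - \max\{|\lambda_2|, |\lambda_n|\}, $$

with a larger gap indicating a stronger expander.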
In mathematics, a complex square matrix A is normal if it commutes with its conjugate transpose A*: that is, $AA^* = A^*A$.
In mathematics, a Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose; that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j: $a_{ij} = \overline{a_{ji}}$.
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis. The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero.
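For example, for a 2×2 matrix $A$ the characteristic polynomial displays both invariants directly:

$$ \det(tI - A) = t^2 - \operatorname{tr}(A)\,t + \det(A). $$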
In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.
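For instance, the triangle graph $K_3$ on vertices $\{1, 2, 3\}$ has the adjacency matrix

$$ A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, $$

where $A_{ij} = 1$ records an edge between vertices $i$ and $j$.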
In mathematics, spectral graph theory is the study of the properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix.
In graph theory, a strongly regular graph (SRG) is a regular graph G = (V, E) with v vertices and degree k such that, for some given integers λ and μ, every two adjacent vertices have λ common neighbours and every two non-adjacent vertices have μ common neighbours.
In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.
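In the most common convention for a simple graph, the Laplacian is built from the degree matrix $D$ and the adjacency matrix $A$:

$$ L = D - A. $$

For the triangle graph $K_3$ above, $L$ has 2 on the diagonal and $-1$ off the diagonal.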
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron and Georg Frobenius, asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude, and that this eigenvalue is real. The corresponding eigenvector can be chosen to have strictly positive components; the theorem also asserts a similar statement for certain classes of nonnegative matrices. It has important applications to probability theory; to the theory of dynamical systems; to economics; to demography; to social networks; to Internet search engines (PageRank); and even to the ranking of American football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.
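A minimal power-iteration sketch (assuming NumPy; the matrix is an arbitrary positive example, and the iteration count is illustrative) for approximating the Perron eigenvalue and its positive eigenvector:

```python
import numpy as np

# A real square matrix with strictly positive entries.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Power iteration: repeated multiplication converges to the
# eigenvector of the eigenvalue of largest magnitude, which the
# Perron-Frobenius theorem guarantees is real and simple, with
# a strictly positive eigenvector.
v = np.ones(M.shape[0])
for _ in range(100):
    v = M @ v
    v /= np.linalg.norm(v)

perron_eigenvalue = v @ M @ v  # Rayleigh quotient of the unit vector v
print(perron_eigenvalue, v)   # eigenvalue ~ 3.618, all components positive
```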
In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged by a given linear transformation. More precisely, an eigenvector $v$ of a linear transformation $T$ is scaled by a constant factor $\lambda$ when the linear transformation is applied to it: $Tv = \lambda v$. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor $\lambda$.
In mathematics, the Cheeger bound is a bound of the second largest eigenvalue of the transition matrix of a finite-state, discrete-time, reversible stationary Markov chain. It can be seen as a special case of Cheeger inequalities in expander graphs.
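In one standard formulation (stated here with the usual constants, which vary slightly by source), if $\lambda_2$ is the second largest eigenvalue of the transition matrix and $\Phi$ is the conductance of the chain, then

$$ 1 - 2\Phi \;\le\; \lambda_2 \;\le\; 1 - \frac{\Phi^2}{2}. $$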
In mathematics, the Cheeger constant of a graph is a numerical measure of whether or not a graph has a "bottleneck". The Cheeger constant as a measure of "bottleneckedness" is of great interest in many areas: for example, in constructing well-connected networks of computers and in the analysis of card shuffling. The graph-theoretical notion originated after the Cheeger isoperimetric constant of a compact Riemannian manifold.
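One common definition (conventions differ; some normalize by edge volume rather than vertex count) is

$$ h(G) = \min_{\substack{S \subset V \\ 0 < |S| \le |V|/2}} \frac{|\partial S|}{|S|}, $$

where $\partial S$ is the set of edges with exactly one endpoint in $S$: a small $h(G)$ certifies a bottleneck.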
In mathematics, a graph partition is the reduction of a graph to a smaller graph by partitioning its set of nodes into mutually exclusive groups. Edges of the original graph that cross between the groups will produce edges in the partitioned graph. If the number of resulting edges is small compared to the original graph, then the partitioned graph may be better suited for analysis and problem-solving than the original. Finding a partition that simplifies graph analysis is a hard problem, but one that has applications to scientific computing, VLSI circuit design, and task scheduling in multiprocessor computers, among others. Recently, the graph partition problem has gained importance due to its applications to clustering and to the detection of cliques in social, pathological and biological networks. For a survey on recent trends in computational methods and applications see Buluc et al. (2013). Two common examples of graph partitioning are the minimum cut and maximum cut problems.
In multivariate statistics, spectral clustering techniques make use of the spectrum (eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset.
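A minimal two-cluster sketch (assuming NumPy; the similarity matrix and the sign-based split are illustrative simplifications of the general k-way method) using the sign of the Fiedler vector of the graph Laplacian:

```python
import numpy as np

# Similarity matrix for 4 points: two tight pairs, weakly linked.
S = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 0.9],
              [0.0, 0.1, 0.9, 0.0]])

# Graph Laplacian of the similarity graph.
L = np.diag(S.sum(axis=1)) - S

# Eigenvectors of the symmetric Laplacian, ascending eigenvalues.
_, vecs = np.linalg.eigh(L)

# The second eigenvector (the Fiedler vector) embeds the points
# in one dimension; its sign gives a two-way spectral clustering.
labels = (vecs[:, 1] > 0).astype(int)
print(labels)  # points {0,1} land in one cluster, {2,3} in the other
```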
In theoretical computer science, graph theory, and mathematics, the conductance is a parameter of a Markov chain that is closely tied to its mixing time, that is, how rapidly the chain converges to its stationary distribution, should it exist. Equivalently, the conductance can be viewed as a parameter of a directed graph, in which case it can be used to analyze how quickly random walks in the graph converge.
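For a chain on state space $\Omega$ with stationary distribution $\pi$ and transition matrix $P$, one common form of the definition is

$$ \Phi = \min_{\substack{S \subset \Omega \\ 0 < \pi(S) \le 1/2}} \frac{\sum_{x \in S,\, y \notin S} \pi(x) P(x, y)}{\pi(S)}, $$

so that a small conductance signals a set the chain escapes only slowly.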
The image segmentation problem is concerned with partitioning an image into multiple regions according to some homogeneity criterion. This article is primarily concerned with graph theoretic approaches to image segmentation applying graph partitioning via minimum cut or maximum cut. Segmentation-based object categorization can be viewed as a specific case of spectral clustering applied to image segmentation.
In Riemannian geometry, the Cheeger isoperimetric constant of a compact Riemannian manifold M is a positive real number h(M) defined in terms of the minimal area of a hypersurface that divides M into two disjoint pieces. In 1971, Jeff Cheeger proved an inequality that related the first nontrivial eigenvalue of the Laplace–Beltrami operator on M to h(M). In 1982, Peter Buser proved a reverse version of this inequality, and the two inequalities put together are sometimes called the Cheeger–Buser inequality. These inequalities were highly influential not only in Riemannian geometry and global analysis, but also in the theory of Markov chains and in graph theory, where they have inspired the analogous Cheeger constant of a graph and the notion of conductance.
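Cheeger's direction of the inequality takes the form

$$ \lambda_1(M) \ge \frac{h(M)^2}{4}, $$

while Buser's reverse bound additionally involves a lower bound on the Ricci curvature of $M$.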
Spectral geometry is a field in mathematics which concerns relationships between geometric structures of manifolds and spectra of canonically defined differential operators. The case of the Laplace–Beltrami operator on a closed Riemannian manifold has been most intensively studied, although other Laplace operators in differential geometry have also been examined. The field concerns itself with two kinds of questions: direct problems and inverse problems.
Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem $Ax = \lambda Bx$, for a given pair $(A, B)$ of complex Hermitian or real symmetric matrices, where $B$ is also assumed positive-definite.
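A minimal usage sketch (assuming SciPy's scipy.sparse.linalg.lobpcg; the test matrix, block size, and tolerances are illustrative):

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)
n = 100

# Symmetric positive-definite test matrix A (illustrative).
X = rng.standard_normal((n, n))
A = X @ X.T + n * np.eye(n)

# Random initial block of 3 approximate eigenvectors.
guess = rng.standard_normal((n, 3))

# Largest 3 eigenvalues of the standard problem A x = lambda x
# (B defaults to the identity).
eigenvalues, eigenvectors = lobpcg(A, guess, largest=True,
                                   tol=1e-8, maxiter=200)
print(eigenvalues)
```

Because the method is matrix-free, A may also be supplied as a LinearOperator that only implements matrix-vector products.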
Superstrong approximation is a generalisation of strong approximation in algebraic groups G, designed to provide spectral gap results. The spectrum in question is that of the Laplacian matrix associated to a family of quotients of a discrete group Γ, and the gap is that between the first and second eigenvalues; more precisely, it is a lower bound for the difference of those eigenvalues. Here Γ is a subgroup of the rational points of G, but need not be a lattice: it may be a so-called thin group.