Laplacian matrix


In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.


The Laplacian matrix relates to many useful properties of a graph. Together with Kirchhoff's theorem, it can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the Fiedler vector — the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian — as established by Cheeger's inequality. The spectral decomposition of the Laplacian matrix allows constructing low dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing. Graph-based signal processing is based on the graph Fourier transform, which extends the traditional discrete Fourier transform by replacing the standard basis of complex sinusoids with the eigenvectors of the Laplacian matrix of the graph corresponding to the signal.

The Laplacian matrix is easiest to define for a simple graph but is more common in applications for an edge-weighted graph, i.e., with weights on its edges, which are the entries of the graph adjacency matrix. Spectral graph theory relates properties of a graph to a spectrum, i.e., eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. Imbalanced weights may undesirably affect the matrix spectrum, leading to the need for normalization, a column/row scaling of the matrix entries, which results in normalized adjacency and Laplacian matrices.

Definitions for simple graphs

Laplacian matrix

Given a simple graph $G$ with $n$ vertices $v_1, \ldots, v_n$, its Laplacian matrix $L_{n \times n}$ is defined element-wise as [1]

$$L_{i,j} := \begin{cases} \deg(v_i) & \text{if } i = j \\ -1 & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise,} \end{cases}$$

or equivalently by the matrix

$$L = D - A,$$

where D is the degree matrix and A is the adjacency matrix of the graph. Since $G$ is a simple graph, $A$ only contains 1s or 0s and its diagonal elements are all 0s.
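As a minimal sketch of the matrix form, assuming NumPy and a hypothetical 4-vertex example graph (not one of the article's own examples), $L = D - A$ can be computed directly from an adjacency matrix:

```python
import numpy as np

# Adjacency matrix of a simple undirected 4-cycle (hypothetical example)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

D = np.diag(A.sum(axis=1))  # degree matrix: row sums of A on the diagonal
L = D - A                   # Laplacian matrix

print(L.sum(axis=0))  # row- and column-sums of L are all zero -> [0 0 0 0]
```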

Here is a simple example of a labelled, undirected graph and its Laplacian matrix.

Labelled graph: vertices 1–6 with edges {1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}. Its degree matrix D, adjacency matrix A, and Laplacian matrix L are

$$D = \begin{pmatrix} 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix}, \quad L = \begin{pmatrix} 2 & -1 & 0 & 0 & -1 & 0 \\ -1 & 3 & -1 & 0 & -1 & 0 \\ 0 & -1 & 2 & -1 & 0 & 0 \\ 0 & 0 & -1 & 3 & -1 & -1 \\ -1 & -1 & 0 & -1 & 3 & 0 \\ 0 & 0 & 0 & -1 & 0 & 1 \end{pmatrix}.$$

We observe for the undirected graph that both the adjacency matrix and the Laplacian matrix are symmetric, and that row- and column-sums of the Laplacian matrix are all zeros (which directly implies that the Laplacian matrix is singular).

For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example:

(Example: a labelled directed graph on three vertices, with its adjacency matrix, out-degree matrix, out-degree Laplacian, in-degree matrix, and in-degree Laplacian.)

In the directed graph, both the adjacency matrix and the Laplacian matrix are asymmetric. In its Laplacian matrix, column-sums or row-sums are zero, depending on whether the indegree or outdegree has been used.

Laplacian matrix for an undirected graph via the oriented incidence matrix

The oriented incidence matrix B with element $B_{ve}$ for the vertex v and the edge e (connecting vertices $v_i$ and $v_j$, with $i \neq j$) is defined by

$$B_{ve} = \begin{cases} 1 & \text{if } v = v_i \\ -1 & \text{if } v = v_j \\ 0 & \text{otherwise.} \end{cases}$$

Even though the edges in this definition are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian matrix L defined as

$$L = B B^\textsf{T},$$

where $B^\textsf{T}$ is the matrix transpose of B.

(Example: a labelled undirected graph, its oriented incidence matrix, and the resulting Laplacian matrix.)
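A short sketch, assuming NumPy and a hypothetical 3-vertex path graph with an arbitrarily chosen edge orientation, verifies that $B B^\textsf{T}$ reproduces $D - A$ regardless of the orientation:

```python
import numpy as np

# Path graph on vertices 0-1-2; each edge oriented arbitrarily as (tail, head)
edges = [(0, 1), (1, 2)]
n = 3

B = np.zeros((n, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e] = 1   # edge e leaves vertex i
    B[j, e] = -1  # edge e enters vertex j

L = B @ B.T  # equals D - A for any choice of edge orientations
print(L)     # [[ 1. -1.  0.], [-1.  2. -1.], [ 0. -1.  1.]]
```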

An alternative product, $B^\textsf{T} B$, defines the so-called edge-based Laplacian, as opposed to the original commonly used vertex-based Laplacian matrix L.

Symmetric Laplacian for a directed graph

The Laplacian matrix of a directed graph is by definition generally non-symmetric, while, e.g., traditional spectral clustering is primarily developed for undirected graphs with symmetric adjacency and Laplacian matrices. A trivial approach to applying techniques that require symmetry is to turn the original directed graph into an undirected graph and build the Laplacian matrix for the latter.

In the matrix notation, the adjacency matrix of the undirected graph could, e.g., be defined as a Boolean sum of the adjacency matrix $A$ of the original directed graph and its matrix transpose $A^\textsf{T}$, where the zero and one entries of $A$ are treated as logical, rather than numerical, values, as in the following example:

(Example: adjacency matrix, symmetrized adjacency matrix, and symmetric Laplacian matrix.)
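A minimal sketch of this Boolean symmetrization, assuming NumPy and a hypothetical directed 3-cycle:

```python
import numpy as np

# Adjacency matrix of a directed 3-cycle (hypothetical example)
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

# Boolean (logical) sum of A and its transpose: entries stay 0/1
A_sym = np.logical_or(A, A.T).astype(int)
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym
print(L_sym)  # symmetric Laplacian with zero row- and column-sums
```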

Laplacian matrix normalization

A vertex with a large degree, also called a heavy node, results in a large diagonal entry in the Laplacian matrix that dominates the matrix properties. Normalization aims to make the influence of such vertices more equal to that of other vertices, by dividing the entries of the Laplacian matrix by the vertex degrees. To avoid division by zero, isolated vertices with zero degrees are excluded from the normalization.

Symmetrically normalized Laplacian

The symmetrically normalized Laplacian matrix is defined as: [1]

$$L^\text{sym} := \left(D^+\right)^{1/2} L \left(D^+\right)^{1/2} = I - \left(D^+\right)^{1/2} A \left(D^+\right)^{1/2},$$

where $D^+$ is the Moore–Penrose inverse.

The elements of $L^\text{sym}$ are thus given by

$$L^\text{sym}_{i,j} := \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\dfrac{1}{\sqrt{\deg(v_i)\deg(v_j)}} & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise.} \end{cases}$$

The symmetrically normalized Laplacian matrix is symmetric if and only if the adjacency matrix is symmetric.

(Example: adjacency matrix, degree matrix, and symmetrically normalized Laplacian.)
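A sketch of the computation, assuming NumPy and a hypothetical graph with one isolated vertex; the Moore–Penrose inverse of the diagonal degree matrix amounts to inverting only the nonzero degrees:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0]])  # vertex 3 is isolated (hypothetical example)

deg = A.sum(axis=1)
d_inv_sqrt = np.zeros(len(deg), dtype=float)
nz = deg > 0
d_inv_sqrt[nz] = 1.0 / np.sqrt(deg[nz])  # pseudoinverse of D^(1/2)

L = np.diag(deg) - A
L_sym = np.diag(d_inv_sqrt) @ L @ np.diag(d_inv_sqrt)
print(np.diag(L_sym))  # 1 for non-isolated vertices, 0 for the isolated one
```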

For a non-symmetric adjacency matrix of a directed graph, either the indegree or the outdegree can be used for normalization:

(Example: adjacency matrix, out-degree matrix, out-degree normalized Laplacian, in-degree matrix, and in-degree normalized Laplacian.)

Left (random-walk) and right normalized Laplacians

The left (random-walk) normalized Laplacian matrix is defined as:

$$L^\text{rw} := D^+ L = I - D^+ A,$$

where $D^+$ is the Moore–Penrose inverse. The elements of $L^\text{rw}$ are given by

$$L^\text{rw}_{i,j} := \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\dfrac{1}{\deg(v_i)} & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise.} \end{cases}$$

Similarly, the right normalized Laplacian matrix is defined as

$$L D^+ = I - A D^+.$$

The left or right normalized Laplacian matrix is not symmetric if the adjacency matrix is symmetric, except for the trivial case of all isolated vertices. For example,

(Example: adjacency matrix, degree matrix, left normalized Laplacian, and right normalized Laplacian.)

The example also demonstrates that if $G$ has no isolated vertices, then $D^+ A$ is right stochastic and hence is the matrix of a random walk, so that the left normalized Laplacian $D^+ L = I - D^+ A$ has each row summing to zero. Thus we sometimes alternatively call $D^+ L$ the random-walk normalized Laplacian. In the less commonly used right normalized Laplacian $L D^+ = I - A D^+$ each column sums to zero since $A D^+$ is left stochastic.
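The stochasticity claims are easy to check numerically; a minimal sketch, assuming NumPy and a hypothetical graph with no isolated vertices:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)  # hypothetical simple graph

D_inv = np.diag(1.0 / A.sum(axis=1))  # D^+ (all degrees nonzero here)
P = D_inv @ A                         # right-stochastic random-walk matrix
L_rw = np.eye(3) - P                  # left normalized Laplacian

print(P.sum(axis=1))     # -> [1. 1. 1.]
print(L_rw.sum(axis=1))  # -> [0. 0. 0.]
```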

For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization:

(Example: adjacency matrix, out-degree matrix, out-degree left normalized Laplacian, in-degree matrix, and in-degree right normalized Laplacian.)

The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic $D^+_\text{out} A$, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic $A D^+_\text{in}$.

Definitions for graphs with weighted edges

Graphs with weighted edges, common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones. In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative, with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization.

Laplacian matrix

The Laplacian matrix is defined by

$$L = D - A,$$

where D is the degree matrix and A is the adjacency matrix of the graph.

For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example:

(Example: adjacency matrix, in-degree matrix, in-degree Laplacian, out-degree matrix, and out-degree Laplacian.)

Graph self-loops, manifesting themselves by non-zero entries on the main diagonal of the adjacency matrix, are allowed but do not affect the graph Laplacian values.
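The cancellation of self-loops falls out of the algebra: the loop weight appears once in the degree (row sum) and once on the diagonal of the adjacency matrix, so it drops out of $D - A$. A sketch assuming NumPy and a hypothetical weighted graph:

```python
import numpy as np

# Weighted adjacency matrix with a self-loop of weight 5 at vertex 0
A = np.array([[5.0, 2.0, 0.0],
              [2.0, 0.0, 1.5],
              [0.0, 1.5, 0.0]])

L = np.diag(A.sum(axis=1)) - A
print(L[0, 0])  # -> 2.0: only the off-diagonal weights of row 0 remain
```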

Symmetric Laplacian via the incidence matrix

(Figure: a 2-dimensional spring system.)

For graphs with weighted edges one can define a weighted incidence matrix B and use it to construct the corresponding symmetric Laplacian as $L = B B^\textsf{T}$. An alternative, cleaner approach, described here, is to separate the weights from the connectivity: continue using the incidence matrix as for regular graphs and introduce a matrix W just holding the values of the weights. A spring system is an example of this model used in mechanics to describe a system of springs of given stiffnesses and unit length, where the values of the stiffnesses play the role of the weights of the graph edges.

We thus reuse the definition of the weightless incidence matrix B with element $B_{ve}$ for the vertex v and the edge e (connecting vertices $v_i$ and $v_j$, with $i > j$) defined by

$$B_{ve} = \begin{cases} 1 & \text{if } v = v_i \\ -1 & \text{if } v = v_j \\ 0 & \text{otherwise.} \end{cases}$$

We now also define a diagonal matrix W containing the edge weights. Even though the edges in the definition of B are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian matrix L defined as

$$L = B W B^\textsf{T},$$

where $B^\textsf{T}$ is the matrix transpose of B.
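A sketch of this separated construction, assuming NumPy, a hypothetical 3-vertex path graph, and arbitrary positive edge weights:

```python
import numpy as np

# Path graph 0-1-2 with edge weights 2.0 and 3.0 (hypothetical example)
edges = [(0, 1), (1, 2)]
weights = [2.0, 3.0]
n = 3

B = np.zeros((n, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e], B[j, e] = 1, -1  # arbitrary orientation of edge e

W = np.diag(weights)  # diagonal matrix of edge weights (spring stiffnesses)
L = B @ W @ B.T
print(L)  # [[ 2. -2.  0.], [-2.  5. -3.], [ 0. -3.  3.]]
```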

The construction is illustrated in the following example, where every edge $e_i$ is assigned the weight value i, with $w_i = i$:

(Example: a labelled undirected graph, its incidence matrix, the diagonal matrix of edge weights, and the resulting Laplacian matrix.)

Symmetric Laplacian for a directed graph

Just like for simple graphs, the Laplacian matrix of a directed weighted graph is by definition generally non-symmetric. The symmetry can be enforced by turning the original directed graph into an undirected graph first, before constructing the Laplacian. The adjacency matrix of the undirected graph could, e.g., be defined as a sum of the adjacency matrix $A$ of the original directed graph and its matrix transpose $A^\textsf{T}$, as in the following example:

(Example: adjacency matrix, symmetrized adjacency matrix, and symmetric Laplacian matrix.)

where the zero and one entries of $A$ are treated as numerical, rather than logical, values (in contrast to the case of simple graphs), explaining the difference in the results: for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical, values; e.g., the logical sum is 1 ∨ 1 = 1, while the numeric sum is 1 + 1 = 2.

Alternatively, the symmetric Laplacian matrix can be calculated from the two Laplacians using the indegree and outdegree, as in the following example:

(Example: adjacency matrix, out-degree matrix, out-degree Laplacian, in-degree matrix, and in-degree Laplacian.)

The sum of the transposed out-degree Laplacian and the in-degree Laplacian equals the symmetric Laplacian matrix.
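This identity can be checked numerically; a sketch assuming NumPy and a hypothetical weighted directed graph:

```python
import numpy as np

A = np.array([[0, 2, 0],
              [0, 0, 3],
              [1, 0, 0]], dtype=float)  # directed weighted adjacency matrix

L_out = np.diag(A.sum(axis=1)) - A  # out-degree Laplacian
L_in = np.diag(A.sum(axis=0)) - A   # in-degree Laplacian

A_sym = A + A.T                     # symmetrized adjacency (numeric sum)
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym
print(np.allclose(L_out.T + L_in, L_sym))  # -> True
```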

Laplacian matrix normalization

The goal of normalization is, like for simple graphs, to make the diagonal entries of the Laplacian matrix all unit, also scaling off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree because of a small number of connected edges with large weights, just as well as because of a large number of connected edges with unit weights.

Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted when calculating the normalization factors.

Symmetrically normalized Laplacian

The symmetrically normalized Laplacian is defined as

$$L^\text{sym} := \left(D^+\right)^{1/2} L \left(D^+\right)^{1/2} = I - \left(D^+\right)^{1/2} A \left(D^+\right)^{1/2},$$

where L is the unnormalized Laplacian, A is the adjacency matrix, D is the degree matrix, and $D^+$ is the Moore–Penrose inverse. Since the degree matrix D is diagonal, its reciprocal square root $\left(D^+\right)^{1/2}$ is just the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal entries of D. If all the edge weights are nonnegative, then all the degree values are automatically also nonnegative, and so every degree value has a unique positive square root. To avoid division by zero, vertices with zero degrees are excluded from the normalization, as in the following example:

(Example: adjacency matrix, in-degree matrix, in-degree normalized Laplacian, out-degree matrix, and out-degree normalized Laplacian.)

The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix A is symmetric and the diagonal entries of D are nonnegative, in which case we can use the term the symmetric normalized Laplacian.

The symmetric normalized Laplacian matrix can also be written as

$$L^\text{sym} = \left(D^+\right)^{1/2} L \left(D^+\right)^{1/2} = \tilde{B} \tilde{B}^\textsf{T}$$

using the weightless incidence matrix B and the diagonal matrix W containing the edge weights, and defining the new weighted incidence matrix $\tilde{B} = \left(D^+\right)^{1/2} B W^{1/2}$ whose rows are indexed by the vertices and whose columns are indexed by the edges of G, such that each column corresponding to an edge e = {u, v} has an entry $\frac{\sqrt{w_e}}{\sqrt{d_u}}$ in the row corresponding to u, an entry $-\frac{\sqrt{w_e}}{\sqrt{d_v}}$ in the row corresponding to v, and has 0 entries elsewhere.
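A numerical check of this factorization, assuming NumPy, a hypothetical 3-vertex weighted path graph, and no isolated vertices:

```python
import numpy as np

edges, weights, n = [(0, 1), (1, 2)], [2.0, 3.0], 3
B = np.zeros((n, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e], B[j, e] = 1, -1
W = np.diag(weights)

L = B @ W @ B.T                                  # weighted Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(L)))  # no isolated vertices here
B_tilde = D_inv_sqrt @ B @ np.sqrt(W)            # weighted incidence matrix

print(np.allclose(B_tilde @ B_tilde.T, D_inv_sqrt @ L @ D_inv_sqrt))  # True
```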

Random walk normalized Laplacian

The random walk normalized Laplacian is defined as

$$L^\text{rw} := D^+ L = I - D^+ A,$$

where D is the degree matrix. Since the degree matrix D is diagonal, its inverse $D^+$ is simply defined as a diagonal matrix, having diagonal entries which are the reciprocals of the corresponding diagonal entries of D. For the isolated vertices (those with degree 0), a common choice is to set the corresponding element to 0. The matrix elements of $L^\text{rw}$ are given by

$$L^\text{rw}_{i,j} := \begin{cases} 1 & \text{if } i = j \text{ and } \deg(v_i) \neq 0 \\ -\dfrac{w_{ij}}{\deg(v_i)} & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise.} \end{cases}$$

The name of the random-walk normalized Laplacian comes from the fact that this matrix is $L^\text{rw} = I - P$, where $P = D^+ A$ is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let $e_i$ denote the i-th standard basis vector. Then $x = e_i^\textsf{T} P$ is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex $i$. More generally, if the vector $x$ is a probability distribution of the location of a random walker on the vertices of the graph, then $x' = x P^t$ is the probability distribution of the walker after $t$ steps.
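A sketch of the random-walk interpretation, assuming NumPy and a hypothetical triangle graph with unit weights:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)  # triangle graph

P = np.diag(1.0 / A.sum(axis=1)) @ A  # transition matrix of the random walk

x = np.array([1.0, 0.0, 0.0])  # walker starts at vertex 0
for _ in range(5):
    x = x @ P  # distribution after each step
print(x)  # converges towards the degree-proportional stationary distribution
```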

The random walk normalized Laplacian can also be called the left normalized Laplacian since the normalization is performed by multiplying the Laplacian by the normalization matrix $D^+$ on the left. It has each row summing to zero since $P = D^+ A$ is right stochastic, assuming all the weights are non-negative.

In the less commonly used right normalized Laplacian $L D^+ = I - A D^+$ each column sums to zero since $A D^+$ is left stochastic.

For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization:

(Example: adjacency matrix, out-degree matrix, out-degree left normalized Laplacian, in-degree matrix, and in-degree right normalized Laplacian.)

The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic $D^+_\text{out} A$, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic $A D^+_\text{in}$.

Negative weights

Negative weights present several challenges for normalization:

  • The presence of negative weights may naturally result in zero row- and/or column-sums for non-isolated vertices. A vertex with a large row-sum of positive weights and an equally large (in absolute value) row-sum of negative weights, together summing up to zero, could be considered a heavy node, with both large values scaled, while the diagonal entry remains zero, as for an isolated vertex.
  • Negative weights may also give negative row- and/or column-sums, so that the corresponding diagonal entry in the non-normalized Laplacian matrix would be negative and a positive square root needed for the symmetric normalization would not exist.
  • Arguments can be made to take the absolute value of the row- and/or column-sums for the purpose of normalization, thus treating a possible value -1 as a legitimate unit entry of the main diagonal of the normalized Laplacian matrix.

Properties

For an (undirected) graph G and its Laplacian matrix L with eigenvalues $\lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}$:

Because $x^\textsf{T} L x = x^\textsf{T} B B^\textsf{T} x$ can be written as the inner product of the vector $B^\textsf{T} x$ with itself, this shows that $x^\textsf{T} L x \geq 0$ and so the eigenvalues of L are all non-negative.

$$L^\text{rw} = D^+ L = D^{-1/2} \left( D^{-1/2} L D^{-1/2} \right) D^{1/2} = D^{-1/2} L^\text{sym} D^{1/2},$$

i.e., $L^\text{rw}$ is similar to the normalized Laplacian $L^\text{sym}$ (assuming the graph has no isolated vertices, so that D is invertible). For this reason, even if $L^\text{rw}$ is in general not symmetric, it has real eigenvalues, exactly the same as the eigenvalues of the normalized symmetric Laplacian $L^\text{sym}$.

Interpretation as the discrete Laplace operator approximating the continuous Laplacian

The graph Laplacian matrix can be further viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian operator obtained by the finite difference method (see Discrete Poisson equation). [2] In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition, i.e., free boundary. Such an interpretation allows one, e.g., to generalize the Laplacian matrix to the case of graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of an infinite size.
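For a path graph the correspondence is explicit: interior rows of the Laplacian contain the classical three-point stencil of the negative second derivative. A sketch assuming NumPy:

```python
import numpy as np

# Laplacian of a path graph on 5 vertices (grid size 1)
n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
print(L)
# Interior rows carry the stencil [-1, 2, -1], the standard second-order
# finite-difference approximation of the negative 1-D Laplacian -d^2/dx^2;
# the first and last rows reflect the free (Neumann) boundary condition.
```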

Generalizations and extensions of the Laplacian matrix

Generalized Laplacian

The generalized Laplacian $Q$ is defined as: [3]

$$\begin{cases} Q_{i,j} < 0 & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\ Q_{i,j} = 0 & \text{if } i \neq j \text{ and } v_i \text{ is not adjacent to } v_j \\ \text{any number} & \text{if } i = j. \end{cases}$$

Notice the ordinary Laplacian is a generalized Laplacian.

Admittance matrix of an AC circuit

The Laplacian of a graph was first introduced to model electrical networks. In an alternating current (AC) electrical network, real-valued resistances are replaced by complex-valued impedances. The weight of edge (i, j) is, by convention, minus the reciprocal of the impedance directly between i and j. In models of such networks, the entries of the adjacency matrix are complex, but the Kirchhoff matrix remains symmetric, rather than being Hermitian. Such a matrix is usually called an "admittance matrix", denoted , rather than a "Laplacian". This is one of the rare applications that give rise to complex symmetric matrices.

(Example: adjacency matrix, weighted degree matrix, and admittance matrix.)

Magnetic Laplacian

There are other situations in which entries of the adjacency matrix are complex-valued, and the Laplacian does become a Hermitian matrix. The Magnetic Laplacian for a directed graph with real weights $w_{ij}$ is constructed as the Hadamard product of the real symmetric matrix of the symmetrized Laplacian and the Hermitian phase matrix with the complex entries

$$\gamma_q(i, j) = e^{i 2 \pi q \left( w_{ij} - w_{ji} \right)},$$

which encode the edge direction into the phase in the complex plane. In the context of quantum physics, the magnetic Laplacian can be interpreted as the operator that describes the phenomenology of a free charged particle on a graph, which is subject to the action of a magnetic field, and the parameter $q$ is called the electric charge. [4] In the following example:

(Example: adjacency matrix, symmetrized Laplacian, phase matrix, and magnetic Laplacian.)

Deformed Laplacian

The deformed Laplacian is commonly defined as

$$\Delta(s) = I - sA + s^2 (D - I),$$

where I is the identity matrix, A is the adjacency matrix, D is the degree matrix, and s is a (complex-valued) number. [5]
The standard Laplacian is just $\Delta(1)$, and $\Delta(-1) = D + A$ is the signless Laplacian.
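The two special cases can be verified directly; a sketch assuming NumPy and a hypothetical triangle graph:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)  # triangle graph
D = np.diag(A.sum(axis=1))
I = np.eye(3)

def deformed_laplacian(s):
    return I - s * A + s**2 * (D - I)

print(np.allclose(deformed_laplacian(1.0), D - A))   # standard Laplacian
print(np.allclose(deformed_laplacian(-1.0), D + A))  # signless Laplacian
```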

Signless Laplacian

The signless Laplacian is defined as

$$Q = D + A,$$

where $D$ is the degree matrix and $A$ is the adjacency matrix. [6] Like the signed Laplacian $L$, the signless Laplacian $Q$ is also positive semi-definite, as it can be factored as

$$Q = R R^\textsf{T},$$

where $R$ is the unoriented vertex–edge incidence matrix. $Q$ has a 0-eigenvector if and only if the graph has a bipartite connected component (isolated vertices being bipartite connected components). This can be shown as

$$x^\textsf{T} Q x = x^\textsf{T} R R^\textsf{T} x = \left| R^\textsf{T} x \right|^2,$$

which has a solution with $x \neq 0$ if and only if the graph has a bipartite connected component.
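A numerical illustration of the bipartiteness criterion, assuming NumPy, comparing a bipartite 4-cycle with a non-bipartite triangle:

```python
import numpy as np

def signless_laplacian(A):
    return np.diag(A.sum(axis=1)) + A

C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)  # bipartite 4-cycle
K3 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]], dtype=float)     # non-bipartite triangle

print(np.linalg.eigvalsh(signless_laplacian(C4)).min())  # ~0.0
print(np.linalg.eigvalsh(signless_laplacian(K3)).min())  # ~1.0
```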

Directed multigraphs

An analogue of the Laplacian matrix can be defined for directed multigraphs. [7] In this case the Laplacian matrix L is defined as

$$L = D - A,$$

where D is a diagonal matrix with $D_{i,i}$ equal to the outdegree of vertex i and A is a matrix with $A_{i,j}$ equal to the number of edges from i to j (including loops).

Open source software implementations

  • SciPy [8]
  • NetworkX [9]
  • Julia [10]

Application software

  • scikit-learn Spectral Clustering [11]
  • PyGSP: Graph Signal Processing in Python [12]
  • megaman: Manifold Learning for Millions of Points [13]
  • smoothG [14]
  • LaplacianOpt.jl [16]
  • LigMG (Large Irregular Graph MultiGrid) [17]
  • Laplacians.jl [18]

See also


References

  1. Chung, Fan (1997) [1992]. Spectral Graph Theory. American Mathematical Society. ISBN 978-0821803158.
  2. Smola, Alexander J.; Kondor, Risi (2003). "Kernels and regularization on graphs". Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24–27, 2003, Proceedings. Lecture Notes in Computer Science. Vol. 2777. Springer. pp. 144–158. CiteSeerX 10.1.1.3.7020. doi:10.1007/978-3-540-45167-9_12. ISBN 978-3-540-40720-1.
  3. Godsil, C.; Royle, G. (2001). Algebraic Graph Theory. Graduate Texts in Mathematics. Springer-Verlag.
  4. Furutani, Satoshi; Shibahara, Toshiki; Akiyama, Mitsuaki; Hato, Kunio; Aida, Masaki (2020). "Graph Signal Processing for Directed Graphs Based on the Hermitian Laplacian" (PDF). ECML PKDD 2019: Machine Learning and Knowledge Discovery in Databases. pp. 447–463. doi:10.1007/978-3-030-46150-8_27.
  5. Morbidi, F. (2013). "The Deformed Consensus Protocol" (PDF). Automatica. 49 (10): 3049–3055. doi:10.1016/j.automatica.2013.07.006. S2CID 205767404.
  6. Cvetković, Dragoš; Simić, Slobodan K. (2010). "Towards a Spectral Theory of Graphs Based on the Signless Laplacian, III". Applicable Analysis and Discrete Mathematics. 4 (1): 156–166. doi:10.2298/AADM1000001C. ISSN 1452-8630. JSTOR 43671298.
  7. Chaiken, S.; Kleitman, D. (1978). "Matrix Tree Theorems". Journal of Combinatorial Theory, Series A. 24 (3): 377–381. doi:10.1016/0097-3165(78)90067-5. ISSN 0097-3165.
  8. "SciPy". GitHub. 4 October 2023.
  9. "NetworkX". GitHub. 4 October 2023.
  10. "Julia". GitHub. 4 October 2023.
  11. "2.3. Clustering".
  12. "PyGSP: Graph Signal Processing in Python". GitHub. 23 March 2022.
  13. "Megaman: Manifold Learning for Millions of Points". GitHub. 14 March 2022.
  14. "SmoothG". GitHub. 17 September 2020.
  15. "Announcing Our Paper at KDD 2020".
  16. "Harshangrjn/LaplacianOpt.jl". GitHub. 2 February 2022.
  17. "LigMG (Large Irregular Graph MultiGrid), a distributed memory graph Laplacian solver for large irregular graphs". GitHub. 5 January 2022.
  18. "Laplacians.jl". GitHub. 11 March 2022.