In mathematics, the graph Fourier transform is a mathematical transform that expands a signal defined on the vertices of a graph in terms of the eigenvectors of the graph's Laplacian matrix. Analogously to the classical Fourier transform, the eigenvalues represent frequencies and the eigenvectors form what is known as a graph Fourier basis.
The graph Fourier transform is important in spectral graph theory. It is widely applied in the recent study of graph-structured learning algorithms, such as graph convolutional networks.
Given an undirected weighted graph $G = (V, E)$, where $V$ is the set of nodes with $|V| = N$ ($N$ being the number of nodes) and $E$ is the set of edges, a graph signal $f : V \to \mathbb{R}$ is a function defined on the vertices of the graph $G$. The signal $f$ maps every vertex $v_i$, $i = 1, \dots, N$, to a real number $f(i)$. Any graph signal can be projected on the eigenvectors of the Laplacian matrix $L$. [1] Let $\lambda_l$ and $u_l$ be the $l$-th eigenvalue and eigenvector of the Laplacian matrix $L$ (the eigenvalues are sorted in increasing order, i.e., $0 = \lambda_0 \leq \lambda_1 \leq \dots \leq \lambda_{N-1}$ [2]). The graph Fourier transform (GFT) $\hat{f}$ of a graph signal $f$ on the vertices of $G$ is the expansion of $f$ in terms of the eigenfunctions of $L$. [3] It is defined as: [1] [4]

$$\hat{f}(\lambda_l) = \langle f, u_l \rangle = \sum_{i=1}^{N} f(i)\, u_l^{*}(i),$$

where $u_l^{*}(i)$ denotes the complex conjugate of the $i$-th component of $u_l$.
Since $L$ is a real symmetric matrix, its eigenvectors $\{u_l\}_{l=0,\dots,N-1}$ can be chosen to form an orthonormal basis. Hence an inverse graph Fourier transform (IGFT) exists, and it is written as: [4]

$$f(i) = \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, u_l(i).$$
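As a concrete illustration, the forward and inverse transforms can be computed numerically with a dense eigendecomposition. The following is a minimal NumPy sketch; the 4-node path graph and the signal values are made-up examples, not anything prescribed by the references.

```python
import numpy as np

# Adjacency matrix of a small example graph (a 4-node path); the graph and
# the signal values below are arbitrary choices for illustration.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

L = np.diag(W.sum(axis=1)) - W        # combinatorial Laplacian L = D - W

# eigh returns eigenvalues in increasing order and orthonormal eigenvectors
# as the columns of U, matching the sorting convention above.
lam, U = np.linalg.eigh(L)

f = np.array([1.0, 2.0, 0.5, -1.0])   # a graph signal, one value per vertex

f_hat = U.T @ f                        # GFT: f_hat[l] = <f, u_l>
f_rec = U @ f_hat                      # IGFT: expansion in the eigenbasis

assert np.allclose(f, f_rec)           # the round trip recovers the signal
```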
Analogously to the classical Fourier transform, the graph Fourier transform provides a way to represent a signal in two different domains: the vertex domain and the graph spectral domain. Note that the definition of the graph Fourier transform and its inverse depends on the choice of Laplacian eigenvectors, which are not necessarily unique. [3] The eigenvectors of the normalized Laplacian matrix are also a possible basis for defining the forward and inverse graph Fourier transform.
The Parseval relation holds for the graph Fourier transform, [5] that is, for any $f, g \in \mathbb{R}^{N}$,

$$\langle f, g \rangle = \langle \hat{f}, \hat{g} \rangle.$$

This gives us Parseval's identity: [3]

$$\sum_{i=1}^{N} |f(i)|^{2} = \sum_{l=0}^{N-1} |\hat{f}(\lambda_l)|^{2}.$$
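Both relations follow from the orthonormality of the eigenvector matrix and are easy to verify numerically. A short sketch, again on an arbitrary example graph and with arbitrary signals:

```python
import numpy as np

# Triangle graph, chosen arbitrarily for a quick numerical check.
W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

f = np.array([0.3, -1.2, 2.0])
g = np.array([1.0, 0.5, -0.7])
f_hat, g_hat = U.T @ f, U.T @ g

assert np.isclose(f @ g, f_hat @ g_hat)              # Parseval relation
assert np.isclose(np.sum(f**2), np.sum(f_hat**2))    # Parseval's identity
```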
The definition of convolution between two functions $f$ and $g$ cannot be directly applied to graph signals, because signal translation is not defined in the context of graphs. [4] However, by replacing the complex exponentials of the classical Fourier transform with the graph Laplacian eigenvectors, the convolution of two graph signals can be defined as: [3]

$$(f * g)(i) = \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \hat{g}(\lambda_l)\, u_l(i).$$

The generalized convolution operator satisfies the following properties: [3]

- convolution in the vertex domain is multiplication in the graph spectral domain: $\widehat{f * g} = \hat{f}\,\hat{g}$;
- commutativity: $f * g = g * f$;
- distributivity: $f * (g + h) = f * g + f * h$;
- associativity: $(f * g) * h = f * (g * h)$.
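In code, the definition amounts to multiplying GFT coefficients pointwise and transforming back. The sketch below (the 4-cycle graph, the signals, and the helper name gconv are all illustrative choices) checks two of the properties directly:

```python
import numpy as np

# 4-cycle graph; graph and signals are arbitrary illustrative choices.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

def gconv(f, g):
    """Generalized convolution: multiply GFT coefficients, then invert."""
    return U @ ((U.T @ f) * (U.T @ g))

f = np.array([1.0, 0.0, -1.0, 2.0])
g = np.array([0.5, 0.5, 0.0, 1.0])

assert np.allclose(gconv(f, g), gconv(g, f))                   # commutativity
assert np.allclose(U.T @ gconv(f, g), (U.T @ f) * (U.T @ g))   # spectral product
```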
As previously stated, the classical translation operator cannot be generalized to the graph setting. One way to define a generalized translation operator is through generalized convolution with a delta function centered at vertex $n$: [2]

$$(T_n f)(i) = \sqrt{N}\,(f * \delta_n)(i) = \sqrt{N} \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, u_l^{*}(n)\, u_l(i),$$

where

$$\delta_n(i) = \begin{cases} 1, & i = n, \\ 0, & \text{otherwise.} \end{cases}$$

The normalization constant $\sqrt{N}$ ensures that the translation operator preserves the signal mean, [4] i.e.,

$$\sum_{i=1}^{N} (T_n f)(i) = \sum_{i=1}^{N} f(i).$$
The generalized translation operator satisfies the following properties: [3] for any $f, g \in \mathbb{R}^{N}$ and $n, m \in \{1, 2, \dots, N\}$,

- $T_n (f * g) = (T_n f) * g = f * (T_n g)$;
- $T_n T_m f = T_m T_n f$;
- $\sum_{i=1}^{N} (T_n f)(i) = \sqrt{N}\,\hat{f}(\lambda_0) = \sum_{i=1}^{N} f(i)$.
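A minimal sketch of the translation operator follows; the small connected graph, the signal, and the helper name translate are illustrative choices. Note that, as with the transform itself, the operator depends on the chosen eigenvectors, so the sketch fixes the sign of $u_0$ to be the positive constant vector before checking mean preservation.

```python
import numpy as np

# A small connected graph; all numbers are illustrative.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
N = W.shape[0]
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

# Fix the sign convention: for a connected graph u_0 is constant, and the
# mean-preservation property assumes the positive constant eigenvector.
if U[0, 0] < 0:
    U[:, 0] *= -1

def translate(f, n):
    """Generalized translation T_n f = sqrt(N) * (f convolved with delta_n)."""
    delta = np.zeros(N)
    delta[n] = 1.0
    return np.sqrt(N) * (U @ ((U.T @ f) * (U.T @ delta)))

f = np.array([2.0, -1.0, 0.5, 1.5])
Tf = translate(f, 2)

assert np.isclose(Tf.sum(), f.sum())   # the sqrt(N) factor preserves the mean
```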
Representing signals in the frequency domain is a common approach to data compression. As graph signals can be sparse in their graph spectral domain, the graph Fourier transform can also be used for image compression. [6] [7]
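The idea can be sketched in a few lines: transform, keep only the largest-magnitude coefficients, and reconstruct. The random graph, the synthetic smooth signal, and the choice k = 8 below are all assumptions made for illustration, not a compression scheme from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32

# Random illustrative graph.
W = (rng.random((N, N)) < 0.15).astype(float)
W = np.triu(W, 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

# A smooth signal built from a few low-frequency eigenvectors, so its
# spectral representation is sparse by construction.
f = U[:, :4] @ rng.standard_normal(4)

f_hat = U.T @ f
k = 8
small = np.argsort(np.abs(f_hat))[:-k]   # all but the k largest coefficients
f_hat[small] = 0.0                       # discard them
f_rec = U @ f_hat                        # reconstruct from k coefficients only

print("relative error:", np.linalg.norm(f - f_rec) / np.linalg.norm(f))
```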
Similar to classical noise reduction of signals based on the Fourier transform, graph filters based on the graph Fourier transform can be designed for graph signal denoising. [8]
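A graph filter acts by rescaling each GFT coefficient with a spectral response $\hat{h}(\lambda_l)$. The sketch below uses a simple low-pass response $\hat{h}(\lambda) = 1/(1 + \gamma\lambda)$, which is one common choice; the random graph, the synthetic signal, the noise level, and $\gamma$ are all made-up values, not parameters from reference [8].

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
W = (rng.random((N, N)) < 0.25).astype(float)
W = np.triu(W, 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

clean = U[:, :3] @ np.array([3.0, -1.0, 0.5])      # smooth underlying signal
noisy = clean + 0.3 * rng.standard_normal(N)

gamma = 1.0
h = 1.0 / (1.0 + gamma * lam)       # low-pass spectral response
denoised = U @ (h * (U.T @ noisy))  # attenuate high-frequency coefficients

print("noisy error:   ", np.linalg.norm(noisy - clean))
print("denoised error:", np.linalg.norm(denoised - clean))
```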
As the graph Fourier transform enables the definition of convolution on graphs, it makes it possible to adapt conventional convolutional neural networks (CNNs) to work on graphs. Graph-structured semi-supervised learning algorithms such as graph convolutional networks (GCNs) are able to propagate the labels of a graph signal throughout the graph from a small subset of labeled nodes, theoretically operating as a first-order approximation of spectral graph convolutions without explicitly computing the graph Laplacian and its eigendecomposition. [9]
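To make the first-order approximation concrete, here is a minimal sketch of a single propagation step in the spirit of reference [9]; the toy graph, the feature matrix, the random weights, and the helper name gcn_layer are assumptions for illustration only.

```python
import numpy as np

def gcn_layer(A, H, W_theta):
    """One GCN propagation step in the spirit of reference [9]:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])             # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # renormalized adjacency
    return np.maximum(A_norm @ H @ W_theta, 0.0) # ReLU activation

rng = np.random.default_rng(2)
A = np.array([[0, 1, 0, 0],                      # toy 4-node path graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 3))                  # 3 input features per node
W_theta = rng.standard_normal((3, 2))            # weights, here random
print(gcn_layer(A, H, W_theta))
```

Note that the layer only multiplies by the renormalized adjacency matrix, so no eigendecomposition is ever computed, which is what makes the method scale to large graphs.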
GSPBOX [10] [11] is a toolbox for signal processing on graphs, including the graph Fourier transform. It is available for both MATLAB and Python.
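Assuming the Python incarnation of the toolbox (the pygsp package), a forward/inverse round trip might look like the sketch below; the exact class and method names should be verified against the toolbox documentation.

```python
import numpy as np
from pygsp import graphs

G = graphs.Sensor(64, seed=42)   # a random sensor-network graph
G.compute_fourier_basis()        # eigendecomposition of the graph Laplacian

s = np.random.default_rng(3).standard_normal(G.N)
s_hat = G.gft(s)                 # graph Fourier transform
s_rec = G.igft(s_hat)            # inverse graph Fourier transform

assert np.allclose(s, s_rec)
```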