Topological deep learning (TDL) [1] [2] [3] [4] [5] [6] is a research field that extends deep learning to handle complex, non-Euclidean data structures. Traditional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel at processing data on regular grids and sequences. However, scientific and real-world data often live on more intricate domains, including point clouds, meshes, time series, scalar fields, graphs, or general topological spaces like simplicial complexes and CW complexes. [7] TDL addresses this by incorporating topological concepts to process data with higher-order relationships, such as interactions among multiple entities and complex hierarchies. This approach leverages structures like simplicial complexes and hypergraphs to capture global dependencies and qualitative spatial properties, offering a more nuanced representation of the data. TDL also encompasses methods from computational and algebraic topology that permit studying properties of neural networks and their training process, such as their predictive performance or generalization properties. [8] [9] [10] [11] [12] [13] [14] The mathematical foundations of TDL are algebraic topology, differential topology, and geometric topology. TDL therefore also generalizes to data on differentiable manifolds, knots, links, tangles, curves, and related objects.
The term "topological deep learning", including multichannel TDL and multitask TDL, was first introduced in 2017. [15] Traditional techniques from deep learning often operate under the assumption that a dataset resides in a highly structured space (like images, where convolutional neural networks exhibit outstanding performance over alternative methods) or in a Euclidean space. The prevalence of new types of data, in particular graphs, meshes, and molecules, drove the development of new techniques, culminating in the field of geometric deep learning, which originally proposed a signal-processing perspective for treating such data types. [16] While originally confined to graphs, where connectivity is defined based on nodes and edges, follow-up work extended these concepts to a larger variety of data types, including simplicial complexes [17] [3] and CW complexes, [8] [18] with recent work proposing a unified perspective of message passing on general combinatorial complexes. [1]
An independent perspective on different types of data originated from topological data analysis, which proposed a new framework for describing structural information of data, i.e., their "shape," that is inherently aware of multiple scales in data, ranging from local information to global information. [19] While at first restricted to smaller datasets, subsequent work developed new descriptors that efficiently summarized topological information of datasets to make them available for traditional machine-learning techniques, such as support vector machines or random forests. [20] Such descriptors ranged from new techniques for feature engineering to new ways of providing suitable coordinates for topological descriptors, [21] [22] [23] and the creation of more efficient dissimilarity measures. [24] [25] [26] [27]
Contemporary research in this field is largely concerned with either integrating information about the underlying data topology into existing deep-learning models or obtaining novel ways of training on topological domains.
Focusing on topology in the sense of point-set topology, an active branch of TDL is concerned with learning on topological spaces, that is, on different topological domains.
One of the core concepts in topological deep learning is the domain upon which the data is defined and supported. In the case of Euclidean data, such as images, this domain is a grid, upon which the pixel values of the image are supported. In a more general setting this domain might be a topological domain. Next, we introduce the most common topological domains that are encountered in a deep learning setting. These domains include, but are not limited to, graphs, simplicial complexes, cell complexes, combinatorial complexes, and hypergraphs.
Given a finite set S of abstract entities, a neighborhood function on S is an assignment that attaches to every point in S a subset of S or a relation. Such a function can be induced by equipping S with an auxiliary structure. Edges provide one way of defining relations among the entities of S. More specifically, edges in a graph allow one to define the notion of neighborhood using, for instance, the one-hop neighborhood. Edges, however, are limited in their modeling capacity, as they can only model binary relations among entities of S: every edge typically connects exactly two entities. In many applications, it is desirable to permit relations that incorporate more than two entities. The idea of using relations that involve more than two entities is central to topological domains. Such higher-order relations allow for a broader range of neighborhood functions to be defined on S to capture multi-way interactions among entities of S.
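To make the distinction concrete, the following minimal Python sketch (illustrative only, not drawn from the cited sources) contrasts a neighborhood function induced by binary edges with one induced by hyperedges, which can relate arbitrarily many entities at once.

```python
# Minimal sketch: contrasting a binary, edge-induced neighborhood function
# with a multi-way, hyperedge-induced one. All names and data are hypothetical.

def one_hop_neighborhood(edges, v):
    """Neighbors of v induced by binary edges: N(v) = {u : {u, v} is an edge}."""
    return {u for e in edges for u in e if v in e and u != v}

def hyperedge_neighborhood(hyperedges, v):
    """Neighbors of v induced by hyperedges, which may relate many entities."""
    return {u for h in hyperedges for u in h if v in h and u != v}

edges = [{0, 1}, {1, 2}, {2, 3}]        # binary relations only
hyperedges = [{0, 1, 2}, {2, 3, 4}]     # multi-way relations

print(one_hop_neighborhood(edges, 1))         # {0, 2}
print(hyperedge_neighborhood(hyperedges, 2))  # {0, 1, 3, 4}
```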
Next we review the main properties, advantages, and disadvantages of some commonly studied topological domains in the context of deep learning, including (abstract) simplicial complexes, regular cell complexes, hypergraphs, and combinatorial complexes.
Each of these topological domains has its own characteristics, advantages, and limitations.
The properties of simplicial complexes, cell complexes, and hypergraphs give rise to two main features of relations on higher-order domains, namely hierarchies of relations and set-type relations. [1]
A rank function on a higher-order domain X is an order-preserving function rk: X → Z≥0, which attaches a non-negative integer to each relation x in X while preserving set inclusion in X. Cell and simplicial complexes are common examples of higher-order domains equipped with rank functions and therefore with hierarchies of relations. [1]
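As a concrete illustration, consider the canonical rank function on an abstract simplicial complex, which assigns to each simplex its dimension, rk(x) = |x| − 1. The short Python sketch below (illustrative, using a hypothetical toy complex) checks that this function preserves set inclusion.

```python
# Illustrative sketch: the canonical rank function on an abstract simplicial
# complex, rk(x) = |x| - 1, is order-preserving: x ⊆ y implies rk(x) <= rk(y).

complex_ = [
    frozenset({0}), frozenset({1}), frozenset({2}),              # rank-0 cells
    frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2}),     # rank-1 cells
    frozenset({0, 1, 2}),                                        # a rank-2 cell
]

def rk(cell):
    return len(cell) - 1

# Set inclusion is preserved by the rank function:
for x in complex_:
    for y in complex_:
        if x < y:  # proper subset
            assert rk(x) <= rk(y)
```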
Relations in a higher-order domain are called set-type relations if the existence of a relation is not implied by another relation in the domain. Hypergraphs constitute examples of higher-order domains equipped with set-type relations. Given the modeling limitations of simplicial complexes, cell complexes, and hypergraphs, the combinatorial complex was introduced as a higher-order domain that features both hierarchies of relations and set-type relations. [1]
The learning tasks in TDL can be broadly classified into three categories: cell classification, complex classification, and cell prediction. [1]
In practice, to perform the aforementioned tasks, deep learning models designed for specific topological spaces must be constructed and implemented. These models, known as topological neural networks, are tailored to operate effectively within these spaces.
Central to TDL are topological neural networks (TNNs), [15] specialized architectures designed to operate on data structured in topological domains. [2] [1] Unlike traditional neural networks tailored for grid-like structures, TNNs are adept at handling more intricate data representations, such as graphs, simplicial complexes, and cell complexes. By harnessing the inherent topology of the data, TNNs can capture both local and global relationships, enabling nuanced analysis and interpretation.
In a general topological domain, higher-order message passing involves exchanging messages among entities and cells using a set of neighborhood functions.
Definition: Higher-Order Message Passing on a General Topological Domain
Let $\mathcal{X}$ be a topological domain. We define a set of neighborhood functions $\mathcal{N} = \{\mathcal{N}_1, \ldots, \mathcal{N}_n\}$ on $\mathcal{X}$. Consider a cell $x$ and let $y \in \mathcal{N}_k(x)$ for some $\mathcal{N}_k \in \mathcal{N}$. A message $m_{x,y}$ between cells $x$ and $y$ is a computation dependent on these two cells or the data supported on them. Denote $\mathcal{N}(x)$ as the multi-set $\{\!\!\{ \mathcal{N}_1(x), \ldots, \mathcal{N}_n(x) \}\!\!\}$, and let $\mathbf{h}_x^{(l)}$ represent some data supported on cell $x$ at layer $l$. Higher-order message passing on $\mathcal{X}$, [1] [8] induced by $\mathcal{N}$, is defined by the following four update rules:

1. $m_{x,y} = \alpha_{\mathcal{N}_k}\left(\mathbf{h}_x^{(l)}, \mathbf{h}_y^{(l)}\right)$, for some $\mathcal{N}_k \in \mathcal{N}$ (Equation 1)
2. $m_x^k = \bigoplus_{y \in \mathcal{N}_k(x)} m_{x,y}$, where $\bigoplus$ is the intra-neighborhood aggregation function (Equation 2)
3. $m_x = \bigotimes_{\mathcal{N}_k \in \mathcal{N}} m_x^k$, where $\bigotimes$ is the inter-neighborhood aggregation function (Equation 3)
4. $\mathbf{h}_x^{(l+1)} = \beta\left(\mathbf{h}_x^{(l)}, m_x\right)$, where $\beta$ is the update function (Equation 4)
Some remarks on the definition above are as follows.

First, Equation 1 describes how messages are computed between cells $x$ and $y$. The message $m_{x,y}$ is influenced by both the data $\mathbf{h}_x^{(l)}$ and $\mathbf{h}_y^{(l)}$ associated with cells $x$ and $y$, respectively. Additionally, it can incorporate characteristics specific to the cells themselves, such as orientation in the case of cell complexes. This allows for a richer representation of spatial relationships compared to traditional graph-based message-passing frameworks.

Second, Equation 2 defines how messages from neighboring cells are aggregated within each neighborhood. The function $\bigoplus$ aggregates these messages, allowing information to be exchanged effectively between adjacent cells within the same neighborhood.

Third, Equation 3 outlines the process of combining messages from different neighborhoods. The function $\bigotimes$ aggregates messages across various neighborhoods, facilitating communication between cells that may not be directly connected but share common neighborhood relationships.

Fourth, Equation 4 specifies how the aggregated messages influence the state of a cell in the next layer. Here, the function $\beta$ updates the state of cell $x$ based on its current state $\mathbf{h}_x^{(l)}$ and the aggregated message $m_x$.
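The following Python sketch illustrates one possible instantiation of these four update rules, assuming sums for both $\bigoplus$ and $\bigotimes$, a linear message function per neighborhood that depends only on the source cell, and a ReLU residual update; the neighborhood matrices and weights are hypothetical placeholders, not part of any cited model.

```python
import numpy as np

# Minimal sketch of one layer of higher-order message passing (Equations 1-4).
# Neighborhood functions are given as 0/1 matrices N_k, where N_k[i, j] = 1
# means cell j lies in the k-th neighborhood of cell i.

rng = np.random.default_rng(0)
num_cells, dim = 5, 4
H = rng.normal(size=(num_cells, dim))           # h_x^{(l)} for all cells x

# Two hypothetical neighborhood functions (e.g., adjacency and coboundary).
N1 = rng.integers(0, 2, size=(num_cells, num_cells))
N2 = rng.integers(0, 2, size=(num_cells, num_cells))
W1 = rng.normal(size=(dim, dim))                # message weights per neighborhood
W2 = rng.normal(size=(dim, dim))

def layer(H, neighborhoods_and_weights):
    m = np.zeros_like(H)
    for N_k, W_k in neighborhoods_and_weights:
        # Eq. 1 and 2: here alpha depends only on the source cell,
        # m_{x,y} = h_y W_k, summed over all y in N_k(x).
        m_k = N_k @ (H @ W_k)
        # Eq. 3: inter-neighborhood aggregation (here: a plain sum).
        m = m + m_k
    # Eq. 4: update h_x^{(l+1)} = beta(h_x^{(l)}, m_x), here a ReLU residual.
    return np.maximum(H + m, 0.0)

H_next = layer(H, [(N1, W1), (N2, W2)])
print(H_next.shape)  # (5, 4)
```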
While the majority of TNNs follow the message-passing paradigm from graph learning, several models have been suggested that do not follow this approach. For instance, Maggs et al. [28] leverage geometric information from embedded simplicial complexes, i.e., simplicial complexes with high-dimensional features attached to their vertices. This offers interpretability and geometric consistency without relying on message passing. Furthermore, in [29] a contrastive loss-based method was suggested for learning simplicial representations.
Motivated by the modular nature of deep neural networks, initial work in TDL drew inspiration from topological data analysis and aimed to make the resulting descriptors amenable to integration into deep-learning models. This led to work defining new layers for deep neural networks. Pioneering work by Hofer et al., [30] for instance, introduced a layer that permitted topological descriptors like persistence diagrams or persistence barcodes to be integrated into a deep neural network. This was achieved by means of end-to-end-trainable projection functions, permitting topological features to be used, for instance, to solve shape classification tasks. Follow-up work expanded on the theoretical properties of such descriptors and integrated them into the field of representation learning. [31] Other such topological layers include layers based on extended persistent homology descriptors, [32] persistence landscapes, [33] or coordinate functions. [34] In parallel, persistent homology also found applications in graph-learning tasks. Noteworthy examples include new algorithms for learning task-specific filtration functions for graph classification or node classification tasks. [35] [36] [37]
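As a rough illustration of the idea behind such trainable topological layers (a simplified sketch, not the exact layer of Hofer et al. [30]), a persistence diagram, which is a variable-size set of (birth, death) points, can be mapped to a fixed-size vector through learnable projection functions, for example Gaussian evaluations at learnable centers:

```python
import numpy as np

# Illustrative sketch of a trainable projection for persistence diagrams.
# The centers would be learned end-to-end in a real model; here they are fixed.
def diagram_to_vector(diagram, centers, sigma=0.1):
    """diagram: (n, 2) array of (birth, death) points; centers: (m, 2) array."""
    diffs = diagram[:, None, :] - centers[None, :, :]   # (n, m, 2)
    sq_dists = (diffs ** 2).sum(axis=-1)                # (n, m)
    return np.exp(-sq_dists / sigma ** 2).sum(axis=0)   # sum over diagram points

diagram = np.array([[0.1, 0.9], [0.2, 0.3]])            # two (birth, death) pairs
centers = np.array([[0.1, 0.9], [0.5, 0.5]])            # hypothetical learnable centers
print(diagram_to_vector(diagram, centers))              # fixed-size feature vector
```

Because the projection is differentiable in the centers, gradients can flow from a downstream classification loss back into the topological featurization.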
Most existing TDL techniques are rooted in homology. However, alternative mathematical approaches, such as topological Laplacians and topological Dirac operators, also provide valuable insights into the topological properties of data. Topological Laplacians, including Hodge Laplacians on differentiable manifolds, [38] combinatorial Laplacians for point clouds, [39] [40] and Khovanov Laplacians for knots and links, [41] serve as powerful tools for extracting topological features from their respective data formats. Despite being defined in distinct contexts, these Laplacians share a common algebraic foundation. They are constructed from the (co-)boundary operator and its adjoint, with their kernels isomorphic to the corresponding homology groups. Consequently, the number of zero eigenvalues equals the Betti numbers of the associated (co-)homology groups. Moreover, the nonzero eigenvalues provide richer insights into the data structure, particularly when analyzed through the perspective of spectral theory.
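As an illustration of this shared algebraic foundation, the sketch below (a minimal example with a hand-coded boundary matrix, not taken from the cited works) builds the combinatorial Hodge Laplacians $L_k = B_k^\top B_k + B_{k+1} B_{k+1}^\top$ for a hollow triangle and reads off the Betti numbers from the multiplicity of the zero eigenvalue.

```python
import numpy as np

# Sketch: combinatorial (Hodge) Laplacians for a hollow triangle
# (3 vertices, 3 edges, no 2-cell). The multiplicity of the zero
# eigenvalue of L_k equals the k-th Betti number.

# Boundary matrix B_1: rows = vertices {0,1,2}, columns = oriented edges
# (0,1), (0,2), (1,2); entry -1 at the tail, +1 at the head.
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)

L0 = B1 @ B1.T   # graph Laplacian (the boundary map B_0 is trivial)
L1 = B1.T @ B1   # no triangles, so B_2 contributes nothing

def betti(L, tol=1e-9):
    return int(np.sum(np.abs(np.linalg.eigvalsh(L)) < tol))

print(betti(L0))  # 1: one connected component
print(betti(L1))  # 1: one 1-dimensional hole (the empty triangle)
```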
Hodge Laplacians, combinatorial Laplacians, and Khovanov Laplacians draw upon the mathematical fields of differential geometry, graph theory, and geometric topology, respectively, to extend the classical theory of homology beyond the domain of algebraic topology. Each functions as a bridge, linking algebraic topology to its associated mathematical discipline.
Persistent Hodge Laplacians were first introduced in 2019 to analyze data on differentiable manifolds with boundary. [42] Additionally, persistent combinatorial Laplacians, also known as persistent Laplacians, were developed for point cloud data. [43] These approaches extend classical persistent homology and have stimulated research interest, fueling advancements in both theory [44] [45] and applications. [46] [47] [48] Persistent Laplacians outperform persistent homology in extensive protein engineering tasks [46] and in the prediction of mutation-induced protein–protein binding affinity changes. [47] Persistent topological Laplacians have been constructed on various mathematical objects, including simplicial complexes, directed flag complexes, path complexes, cellular sheaves, hypergraphs, hyperdigraphs, and differentiable manifolds. [49]
Persistent Dirac operators have been constructed on various topological spaces, including simplicial complexes, path complexes, and hypergraphs. [50] [51] [52] These new approaches extend the scope of TDL to manifold topological learning and curve data learning. [49]
TDL is rapidly finding new applications across different domains, including data compression, [53] enhancing the expressivity and predictive performance of graph neural networks, [17] [18] [35] action recognition, [54] and trajectory prediction. [55]
Topology inherently simplifies data, which implies the irreversible loss of certain information. Therefore, competitive performance from TDL mostly involves intrinsically complex data, such as those arising in biological sciences. [15] [56] Perhaps some of the most compelling examples of applications in which TDL consistently demonstrates its advantages over other competing methods are the victories of TDL in the D3R Grand Challenges, [57] [58] the discovery of SARS-CoV-2 evolution mechanisms, [59] [60] and the successful forecasting of SARS-CoV-2 variants BA.2, [61] BA.4 and BA.5, [47] about two months in advance.
In mathematics, the term homology, originally introduced in algebraic topology, has three primary, closely-related usages. The most direct usage of the term is to take the homology of a chain complex, resulting in a sequence of abelian groups called homology groups. This operation, in turn, allows one to associate various named homologies or homology theories to various other types of mathematical objects. Lastly, since there are many homology theories for topological spaces that produce the same answer, one also often speaks of the homology of a topological space. There is also a related notion of the cohomology of a cochain complex, giving rise to various cohomology theories, in addition to the notion of the cohomology of a topological space.
In mathematics, and more specifically in algebraic topology and polyhedral combinatorics, the Euler characteristic is a topological invariant, a number that describes a topological space's shape or structure regardless of the way it is bent. It is commonly denoted by $\chi$ (the Greek letter chi).
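For a finite cell complex, $\chi$ equals the alternating sum of cell counts in each dimension, $\chi = k_0 - k_1 + k_2 - \cdots$; for a polyhedral surface this reduces to the familiar $V - E + F$. A short illustration in Python:

```python
# Sketch: the Euler characteristic as the alternating sum of cell counts.
# For the surface of a cube: 8 vertices, 12 edges, 6 faces.
def euler_characteristic(cell_counts):
    return sum((-1) ** k * n for k, n in enumerate(cell_counts))

print(euler_characteristic([8, 12, 6]))  # 2, the Euler characteristic of a sphere
```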
In mathematics, a simplicial complex is a structured set composed of points, line segments, triangles, and their n-dimensional counterparts, called simplices, such that all the faces and intersections of the elements are also included in the set. Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory. The purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex. To distinguish a simplicial complex from an abstract simplicial complex, the former is often called a geometric simplicial complex.
In mathematics, the barycentric subdivision is a standard way to subdivide a given simplex into smaller ones. Its extension on simplicial complexes is a canonical method to refine them. Therefore, the barycentric subdivision is an important tool in algebraic topology.
In algebraic topology, a homology sphere is an n-manifold X having the homology groups of an n-sphere, for some integer $n \geq 1$. That is, $H_0(X; \mathbb{Z}) \cong \mathbb{Z} \cong H_n(X; \mathbb{Z})$ and $H_i(X; \mathbb{Z}) = 0$ for all other $i$.
In combinatorics, an abstract simplicial complex (ASC), often called an abstract complex or just a complex, is a family of sets that is closed under taking subsets, i.e., every subset of a set in the family is also in the family. It is a purely combinatorial description of the geometric notion of a simplicial complex. For example, in a 2-dimensional simplicial complex, the sets in the family are the triangles, their edges, and their vertices.
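A minimal Python sketch (illustrative only) that checks this downward-closure property for a family of sets:

```python
from itertools import combinations

# Sketch: checking the defining property of an abstract simplicial complex,
# closure under taking (non-empty) subsets.
def is_abstract_simplicial_complex(family):
    family = {frozenset(s) for s in family}
    return all(
        frozenset(t) in family
        for s in family
        for r in range(1, len(s))
        for t in combinations(s, r)
    )

print(is_abstract_simplicial_complex(
    [{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}]))  # True
print(is_abstract_simplicial_complex([{0, 1, 2}]))         # False: faces missing
```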
In algebraic topology, simplicial homology is the sequence of homology groups of a simplicial complex. It formalizes the idea of the number of holes of a given dimension in the complex. This generalizes the number of connected components.
In mathematics, triangulation describes the replacement of topological spaces with simplicial complexes by the choice of an appropriate homeomorphism. A space that admits such a homeomorphism is called a triangulable space. Triangulations can also be used to define a piecewise linear structure for a space, if one exists. Triangulation has various applications both in and outside of mathematics, for instance in algebraic topology, in complex analysis, and in modeling.
In topology, the Vietoris–Rips complex, also called the Vietoris complex or Rips complex, is a way of forming a topological space from distances in a set of points. It is an abstract simplicial complex that can be defined from any metric space M and distance δ by forming a simplex for every finite set of points that has diameter at most δ. That is, it is a family of finite subsets of M, in which we think of a subset of k points as forming a (k − 1)-dimensional simplex (an edge for two points, a triangle for three points, a tetrahedron for four points, etc.); if a finite set S has the property that the distance between every pair of points in S is at most δ, then we include S as a simplex in the complex.
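The construction is simple enough to state directly in code. The following Python sketch (a naive, illustrative implementation that enumerates subsets, so it is only practical for small point sets) builds the Vietoris–Rips complex up to a chosen maximum dimension:

```python
from itertools import combinations
import math

# Sketch: Vietoris-Rips complex of a point set, up to a maximum dimension.
# A subset is included as a simplex iff all pairwise distances are <= delta.
def vietoris_rips(points, delta, max_dim=2):
    simplices = []
    for k in range(1, max_dim + 2):          # k points -> a (k-1)-simplex
        for subset in combinations(range(len(points)), k):
            if all(math.dist(points[i], points[j]) <= delta
                   for i, j in combinations(subset, 2)):
                simplices.append(subset)
    return simplices

points = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]
print(vietoris_rips(points, delta=1.1))
# vertices, all three edges, and the filled triangle (diameter <= 1.1)
```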
Discrete Morse theory is a combinatorial adaptation of Morse theory developed by Robin Forman. The theory has various practical applications in diverse fields of applied mathematics and computer science, such as configuration spaces, homology computation, denoising, mesh compression, and topological data analysis.
In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is insensitive to the particular metric chosen and provides dimensionality reduction and robustness to noise. Beyond this, it inherits functoriality, a fundamental concept of modern mathematics, from its topological nature, which allows it to adapt to new mathematical tools.
Clique complexes, independence complexes, flag complexes, Whitney complexes and conformal hypergraphs are closely related mathematical objects in graph theory and geometric topology that each describe the cliques of an undirected graph.
Persistent homology is a method for computing topological features of a space at different spatial resolutions. More persistent features are detected over a wide range of spatial scales and are deemed more likely to represent true features of the underlying space rather than artifacts of sampling, noise, or particular choice of parameters.
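In practice, persistence diagrams are computed with specialized software. The sketch below assumes the third-party ripser.py package is installed and recovers the single prominent one-dimensional feature of a noisy circle:

```python
import numpy as np
from ripser import ripser  # assumes the third-party ripser.py package

# Sketch: persistent homology of a noisy circle. Long-lived intervals in the
# degree-1 diagram indicate a genuine loop; short-lived ones are noise.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += rng.normal(scale=0.05, size=points.shape)

diagrams = ripser(points, maxdim=1)['dgms']
h1 = diagrams[1]
persistence = h1[:, 1] - h1[:, 0]
print(h1[np.argmax(persistence)])  # one prominent (birth, death) pair: the circle
```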
In mathematics, the graph Fourier transform is a mathematical transform which eigendecomposes the Laplacian matrix of a graph into eigenvalues and eigenvectors. Analogously to the classical Fourier transform, the eigenvalues represent frequencies and eigenvectors form what is known as a graph Fourier basis.
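A minimal NumPy sketch of the forward and inverse graph Fourier transform on a 4-cycle (illustrative only):

```python
import numpy as np

# Sketch: graph Fourier transform via eigendecomposition of the graph Laplacian.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix of a 4-cycle
L = np.diag(A.sum(axis=1)) - A              # combinatorial graph Laplacian

eigenvalues, U = np.linalg.eigh(L)          # columns of U: graph Fourier basis
signal = np.array([1.0, 2.0, 3.0, 4.0])     # a signal on the four nodes

signal_hat = U.T @ signal                   # forward graph Fourier transform
reconstructed = U @ signal_hat              # inverse transform
print(np.allclose(reconstructed, signal))   # True
```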
In algebraic topology, homological connectivity is a property describing a topological space based on its homology groups.
Graph neural networks (GNNs) are specialized artificial neural networks designed for tasks whose inputs are graphs.
The degree-Rips bifiltration is a simplicial filtration used in topological data analysis for analyzing the shape of point cloud data. It is a multiparameter extension of the Vietoris–Rips filtration that possesses greater stability to data outliers than single-parameter filtrations, and which is more amenable to practical computation than other multiparameter constructions. Introduced in 2015 by Lesnick and Wright, the degree-Rips bifiltration is a parameter-free and density-sensitive vehicle for performing persistent homology computations on point cloud data.
The offset filtration is a growing sequence of metric balls used to detect the size and scale of topological features of a data set. The offset filtration commonly arises in persistent homology and the field of topological data analysis. Utilizing a union of balls to approximate the shape of geometric objects was first suggested by Frosini in 1992 in the context of submanifolds of Euclidean space. The construction was independently explored by Robins in 1998, and expanded to considering the collection of offsets indexed over a series of increasing scale parameters, in order to observe the stability of topological features with respect to attractors. Homological persistence as introduced in these papers by Frosini and Robins was subsequently formalized by Edelsbrunner et al. in their seminal 2002 paper Topological Persistence and Simplification. Since then, the offset filtration has become a primary example in the study of computational topology and data analysis.
In topological data analysis, a subdivision bifiltration is a collection of filtered simplicial complexes, typically built upon a set of data points in a metric space, that captures shape and density information about the underlying data set. The subdivision bifiltration relies on a natural filtration of the barycentric subdivision of a simplicial complex by flags of minimum dimension, which encodes density information about the metric space upon which the complex is built. The subdivision bifiltration was first introduced by Donald Sheehy in 2011 as part of his doctoral thesis as a discrete model of the multicover bifiltration, a continuous construction whose underlying framework dates back to the 1970s. In particular, Sheehy applied the construction to both the Vietoris–Rips and Čech filtrations, two common objects in the field of topological data analysis. Whereas single-parameter filtrations are not robust with respect to outliers in the data, the subdivision-Rips and -Čech bifiltrations satisfy several desirable stability properties.
In persistent homology, a persistent Betti number is a multiscale analog of a Betti number that tracks the number of topological features that persist over multiple scale parameters in a filtration. Whereas the classical Betti number equals the rank of the homology group, the persistent Betti number is the rank of the persistent homology group. The concept of a persistent Betti number was introduced by Herbert Edelsbrunner, David Letscher, and Afra Zomorodian in the 2002 paper Topological Persistence and Simplification, one of the seminal papers in the field of persistent homology and topological data analysis. Applications of the persistent Betti number appear in a variety of fields including data analysis, machine learning, and physics.