Topological deep learning

Topological deep learning (TDL) [1] [2] [3] [4] [5] [6] is a research field that extends deep learning to handle complex, non-Euclidean data structures. Traditional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel at processing data on regular grids and sequences. However, scientific and real-world data often live on more intricate domains, including point clouds, meshes, time series, scalar fields, graphs, or general topological spaces such as simplicial complexes and CW complexes. [7] TDL addresses this by incorporating topological concepts to process data with higher-order relationships, such as interactions among multiple entities and complex hierarchies. This approach leverages structures like simplicial complexes and hypergraphs to capture global dependencies and qualitative spatial properties, offering a more nuanced representation of data. TDL also encompasses methods from computational and algebraic topology for studying properties of neural networks and their training process, such as their predictive performance or generalization behavior. [8] [9] [10] [11] [12] [13] [14]

History and motivation

Traditional techniques from deep learning often operate under the assumption that a dataset resides in a highly structured space (such as images, where convolutional neural networks exhibit outstanding performance over alternative methods) or in a Euclidean space. The prevalence of new types of data, in particular graphs, meshes, and molecules, resulted in the development of new techniques, culminating in the field of geometric deep learning, which originally proposed a signal-processing perspective for treating such data types. [15] While originally confined to graphs, where connectivity is defined based on nodes and edges, follow-up work extended concepts to a larger variety of data types, including simplicial complexes [16] [3] and CW complexes, [8] [17] with recent work proposing a unified perspective of message passing on general combinatorial complexes. [1]

An independent perspective on different types of data originated from topological data analysis, which proposed a new framework for describing structural information of data, i.e., their "shape", that is inherently aware of multiple scales in data, ranging from local to global information. [18] While at first restricted to smaller datasets, subsequent work developed new descriptors that efficiently summarized topological information of datasets, making them available for traditional machine-learning techniques such as support vector machines or random forests. Such descriptors ranged from new techniques for feature engineering and new ways of providing suitable coordinates for topological descriptors, [19] [20] [21] to the creation of more efficient dissimilarity measures. [22] [23] [24] [25]

Contemporary research in this field is largely concerned with either integrating information about the underlying data topology into existing deep-learning models or obtaining novel ways of training on topological domains.

Learning on topological spaces

Learning tasks on topological domains can be broadly classified into three categories: cell classification, cell prediction, and complex classification.

Focusing on topology in the sense of point set topology, an active branch of TDL is concerned with learning on topological spaces, that is, on different topological domains.

An introduction to topological domains

One of the core concepts in topological deep learning is the domain upon which data is defined and supported. In the case of Euclidean data, such as images, this domain is a grid, upon which the pixel values of the image are supported. In a more general setting this domain might be a topological domain. Next, we introduce the most common topological domains encountered in a deep learning setting. These domains include, but are not limited to, graphs, simplicial complexes, cell complexes, combinatorial complexes, and hypergraphs.

Given a finite set S of abstract entities, a neighborhood function on S is an assignment that attaches to every point in S a subset of S or a relation. Such a function can be induced by equipping S with an auxiliary structure. Edges provide one way of defining relations among the entities of S. More specifically, edges in a graph allow one to define the notion of neighborhood using, for instance, the one-hop neighborhood notion. Edges, however, are limited in their modeling capacity, as they can only be used to model binary relations among entities of S, since every edge typically connects two entities. In many applications, it is desirable to permit relations that incorporate more than two entities. The idea of using relations that involve more than two entities is central to topological domains. Such higher-order relations allow for a broader range of neighborhood functions to be defined on S to capture multi-way interactions among entities of S.
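To make the distinction concrete, the following sketch (pure Python; the set names and helper functions are illustrative, not taken from any library) contrasts the one-hop neighborhood induced by graph edges with a neighborhood induced by higher-order relations that may contain more than two entities.

```python
# A minimal sketch of neighborhood functions on a finite set S: the one-hop
# neighborhood induced by graph edges (binary relations), and a neighborhood
# induced by higher-order relations that may involve any number of entities.

S = {"a", "b", "c", "d"}

# Binary relations: each edge connects exactly two entities.
edges = [{"a", "b"}, {"b", "c"}, {"c", "d"}]

# Higher-order relations: a single relation may involve more than two entities.
relations = [{"a", "b", "c"}, {"c", "d"}]

def one_hop_neighborhood(x, edges):
    """Entities sharing an edge with x (graph-style neighborhood)."""
    return {y for e in edges if x in e for y in e if y != x}

def relation_neighborhood(x, relations):
    """Entities sharing at least one (possibly higher-order) relation with x."""
    return {y for r in relations if x in r for y in r if y != x}

print(one_hop_neighborhood("b", edges))        # {'a', 'c'}
print(relation_neighborhood("a", relations))   # {'b', 'c'}, captured by a single 3-way relation
```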

Next we review the main properties, advantages, and disadvantages of some commonly studied topological domains in the context of deep learning, including (abstract) simplicial complexes, regular cell complexes, hypergraphs, and combinatorial complexes.

(a) A set S consists of basic entities (vertices) without any relations among them. (b) A graph encodes binary relations between pairs of vertices of S. (c) A simplicial complex encodes higher-order relations among vertices, but under strict rules about which relations must accompany each other. (d) Like a simplicial complex, a cell complex encodes higher-order relations, but its cells can be attached more flexibly. (f) A hypergraph encodes arbitrary set-type relations among entities of S, without any hierarchical organization of these relations. (e) A combinatorial complex (CC) combines features of cell complexes (hierarchically ordered relations) and hypergraphs (arbitrary set-type relations), covering both kinds of structure.

Comparisons among topological domains

Each of the enumerated topological domains has its own characteristics, advantages, and limitations:

  • Simplicial complexes
    • Simplest form of higher-order domains.
    • Extensions of graph-based models.
    • Admit hierarchical structures, making them suitable for various applications.
    • Hodge theory can be naturally defined on simplicial complexes.
    • Require relations to be subsets of larger relations, imposing constraints on the structure.
  • Cell Complexes
    • Generalize simplicial complexes.
    • Provide more flexibility in defining higher-order relations.
    • Each cell in a cell complex is homeomorphic to an open ball, attached together via attaching maps.
    • Boundary cells of each cell in a cell complex are also cells in the complex.
    • Represented combinatorially via incidence matrices (see the sketch following this list).
  • Hypergraphs
    • Allow arbitrary set-type relations among entities.
    • Relations are not imposed by other relations, providing more flexibility.
    • Do not explicitly encode the dimension of cells or relations.
    • Useful when relations in the data do not adhere to constraints imposed by other models like simplicial and cell complexes.
  • Combinatorial complexes [1]
    • Generalize and bridge the gaps between simplicial complexes, cell complexes, and hypergraphs.
    • Allow for hierarchical structures and set-type relations.
    • Combine features of other complexes while providing more flexibility in modeling relations.
    • Can be represented combinatorially, similar to cell complexes.
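As an illustration of the incidence-matrix representation mentioned in the list above, the following sketch (Python/NumPy; a minimal, unsigned construction under the assumption that the complex is a single filled triangle) builds the vertex-to-edge and edge-to-triangle incidence matrices.

```python
import numpy as np

# A minimal sketch: incidence matrices of the abstract simplicial complex
# consisting of one triangle {0, 1, 2} together with all of its edges and vertices.
vertices = [(0,), (1,), (2,)]
edges = [(0, 1), (0, 2), (1, 2)]
triangles = [(0, 1, 2)]

def incidence_matrix(lower, upper):
    """B[i, j] = 1 if the i-th lower-dimensional cell is a face of the j-th upper cell."""
    B = np.zeros((len(lower), len(upper)), dtype=int)
    for j, u in enumerate(upper):
        for i, l in enumerate(lower):
            if set(l).issubset(u):
                B[i, j] = 1
    return B

B1 = incidence_matrix(vertices, edges)      # shape (3, 3): vertex-to-edge incidence
B2 = incidence_matrix(edges, triangles)     # shape (3, 1): edge-to-triangle incidence
print(B1)
print(B2)
```

Signed (oriented) variants of these matrices yield boundary operators, from which discrete Hodge Laplacians on simplicial and cell complexes can be defined.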

Hierarchical structure and set-type relations

The properties of simplicial complexes, cell complexes, and hypergraphs give rise to two main features of relations on higher-order domains, namely hierarchies of relations and set-type relations. [1]

Rank function

A rank function on a higher-order domain X is an order-preserving function $\mathrm{rk} \colon X \to \mathbb{Z}_{\ge 0}$ that attaches a non-negative integer value $\mathrm{rk}(x)$ to each relation x in X while preserving set inclusion in X: if $x \subseteq y$, then $\mathrm{rk}(x) \le \mathrm{rk}(y)$. Cell and simplicial complexes are common examples of higher-order domains equipped with rank functions and therefore with hierarchies of relations. [1]
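For an abstract simplicial complex, a natural rank function is the dimension of a cell, rk(x) = |x| − 1. A minimal sketch (pure Python, illustrative names) that verifies order preservation on a small complex:

```python
from itertools import combinations

# All faces of a filled triangle {0, 1, 2}, i.e. a small abstract simplicial complex.
X = [frozenset(c) for k in (1, 2, 3) for c in combinations(range(3), k)]

def rk(x):
    """Rank of a simplex = its dimension (cardinality minus one)."""
    return len(x) - 1

# Order preservation: x being a subset of y implies rk(x) <= rk(y).
assert all(rk(x) <= rk(y) for x in X for y in X if x <= y)
print(sorted((tuple(sorted(x)), rk(x)) for x in X))
```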

Set-type relations

Relations in a higher-order domain are called set-type relations if the existence of a relation is not implied by another relation in the domain. Hypergraphs constitute examples of higher-order domains equipped with set-type relations. Given the modeling limitations of simplicial complexes, cell complexes, and hypergraphs, the combinatorial complex was introduced as a higher-order domain that features both hierarchies of relations and set-type relations. [1]
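In contrast to a simplicial complex, a combinatorial complex does not require the subsets of a relation to be present, yet it retains an explicit rank function. The following sketch (pure Python; illustrative and not tied to any particular library) encodes such a structure as a dictionary from cells to ranks and checks the defining constraint:

```python
# A minimal sketch of a combinatorial complex: set-type relations (as in a hypergraph)
# together with an explicit, order-preserving rank function (as in a cell complex).
cells = {
    frozenset({0}): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
    frozenset({0, 1}): 1,            # an edge
    frozenset({1, 2, 3}): 1,         # a set-type relation: its sub-pairs need not be cells
    frozenset({0, 1, 2, 3}): 2,      # a higher-ranked cell containing the cells above
}

# The defining constraint: set inclusion must not decrease rank.
assert all(r1 <= r2 for c1, r1 in cells.items()
                    for c2, r2 in cells.items() if c1 < c2)
print("valid combinatorial complex with", len(cells), "cells")
```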

The learning tasks in TDL can be broadly classified into three categories: [1]

  • Cell classification: Predict targets for each cell in a complex. Examples include triangular mesh segmentation, where the task is to predict the class of each face or edge in a given mesh.
  • Complex classification: Predict targets for an entire complex. For example, predict the class of each input mesh.
  • Cell prediction: Predict properties of cell-cell interactions in a complex, and in some cases, predict whether a cell exists in the complex. An example is the prediction of linkages among entities in hyperedges of a hypergraph.
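As a rough illustration of how label granularity differs across these three task types, the sketch below (NumPy; toy shapes only, not tied to any dataset) contrasts per-cell targets, a per-complex target, and pairwise cell-interaction targets:

```python
import numpy as np

n_edges = 5          # cells of rank 1 in a toy complex
n_classes = 3

# Cell classification: one target per cell (e.g. a label for every edge or face of a mesh).
cell_targets = np.random.randint(0, n_classes, size=n_edges)

# Complex classification: a single target for the whole complex (e.g. the class of a mesh).
complex_target = np.random.randint(0, n_classes)

# Cell prediction: targets attached to pairs of cells (e.g. whether a link or shared
# hyperedge exists between cell i and cell j).
link_targets = np.random.randint(0, 2, size=(n_edges, n_edges))

print(cell_targets.shape, complex_target, link_targets.shape)
```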

In practice, to perform the aforementioned tasks, deep learning models designed for specific topological spaces must be constructed and implemented. These models, known as topological neural networks, are tailored to operate effectively within these spaces.

Topological neural networks

Central to TDL are topological neural networks (TNNs), specialized architectures designed to operate on data structured in topological domains. [2] [1] Unlike traditional neural networks tailored for grid-like structures, TNNs are adept at handling more intricate data representations, such as graphs, simplicial complexes, and cell complexes. By harnessing the inherent topology of the data, TNNs can capture both local and global relationships, enabling nuanced analysis and interpretation.

Message passing topological neural networks

In a general topological domain, higher-order message passing involves exchanging messages among entities and cells using a set of neighborhood functions.

Definition: Higher-Order Message Passing on a General Topological Domain

Higher-order message passing is a deep learning paradigm defined on a topological domain; it relies on passing messages among entities of the underlying domain in order to perform a learning task.

Let $\mathcal{X}$ be a topological domain. We define a set of neighborhood functions $\mathcal{N} = \{\mathcal{N}_1, \ldots, \mathcal{N}_n\}$ on $\mathcal{X}$. Consider a cell $x$ and let $y \in \mathcal{N}_k(x)$ for some $\mathcal{N}_k \in \mathcal{N}$. A message $m_{x,y}$ between cells $x$ and $y$ is a computation dependent on these two cells or the data supported on them. Denote by $\mathcal{N}(x)$ the multi-set $\{\!\{ \mathcal{N}_1(x), \ldots, \mathcal{N}_n(x) \}\!\}$, and let $h_x^{(l)}$ represent some data supported on cell $x$ at layer $l$. Higher-order message passing on $\mathcal{X}$, [1] [8] induced by $\mathcal{N}$, is defined by the following four update rules:

  1. $m_{x,y} = \alpha_{\mathcal{N}_k}\bigl(h_x^{(l)}, h_y^{(l)}\bigr)$
  2. $m_x^k = \bigoplus_{y \in \mathcal{N}_k(x)} m_{x,y}$, where $\bigoplus$ is the intra-neighborhood aggregation function.
  3. $m_x = \bigotimes_{k=1}^{n} m_x^k$, where $\bigotimes$ is the inter-neighborhood aggregation function.
  4. $h_x^{(l+1)} = \beta\bigl(h_x^{(l)}, m_x\bigr)$, where $\alpha_{\mathcal{N}_k}$ and $\beta$ are differentiable functions.

Some remarks on the definition above are as follows.

First, Equation 1 describes how messages are computed between cells $x$ and $y$. The message $m_{x,y}$ is influenced by both the data $h_x^{(l)}$ and $h_y^{(l)}$ associated with cells $x$ and $y$, respectively. Additionally, it incorporates characteristics specific to the cells themselves, such as orientation in the case of cell complexes. This allows for a richer representation of spatial relationships compared to traditional graph-based message passing frameworks.

Second, Equation 2 defines how messages from neighboring cells are aggregated within each neighborhood. The function $\bigoplus$ aggregates these messages, allowing information to be exchanged effectively between adjacent cells within the same neighborhood.

Third, Equation 3 outlines the process of combining messages from different neighborhoods. The function $\bigotimes$ aggregates messages across various neighborhoods, facilitating communication between cells that may not be directly connected but share common neighborhood relationships.

Fourth, Equation 4 specifies how the aggregated messages influence the state of a cell in the next layer. Here, the function $\beta$ updates the state of cell $x$ based on its current state $h_x^{(l)}$ and the aggregated message $m_x$ obtained from neighboring cells.
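To make the four update rules concrete, the following sketch (NumPy; illustrative names and a deliberately simple linear message function, not any specific published architecture) implements one higher-order message-passing layer on a toy complex whose neighborhood functions are given as binary matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells, d = 4, 8                       # toy complex: 4 cells, 8-dimensional features
H = rng.normal(size=(n_cells, d))       # h_x^{(l)} for every cell x

# Two neighborhood functions, each as a binary matrix: N[x, y] = 1 iff y is in N_k(x).
neighborhoods = [rng.integers(0, 2, size=(n_cells, n_cells)) for _ in range(2)]

W_msg = [rng.normal(size=(d, d)) for _ in neighborhoods]   # per-neighborhood message weights
W_upd = rng.normal(size=(2 * d, d))                        # update weights

def layer(H):
    per_neighborhood = []
    for N, W in zip(neighborhoods, W_msg):
        M = H @ W                       # rule 1: message depends (here linearly) on h_y
        per_neighborhood.append(N @ M)  # rule 2: intra-neighborhood sum over y in N_k(x)
    m = np.sum(per_neighborhood, axis=0)                     # rule 3: inter-neighborhood sum
    return np.tanh(np.concatenate([H, m], axis=1) @ W_upd)   # rule 4: update h_x^{(l+1)}

H_next = layer(H)
print(H_next.shape)                     # (4, 8)
```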

Non-message passing topological neural networks

While the majority of TNNs follow the message-passing paradigm from graph learning, several models have been suggested that do not follow this approach. For instance, Maggs et al. [26] leverage geometric information from embedded simplicial complexes, i.e., simplicial complexes with high-dimensional features attached to their vertices. This offers interpretability and geometric consistency without relying on message passing. Furthermore, in [27] a contrastive loss-based method was suggested for learning simplicial representations.

Learning on topological descriptors

Motivated by the modular nature of deep neural networks, initial work in TDL drew inspiration from topological data analysis and aimed to make the resulting descriptors amenable to integration into deep-learning models. This led to work defining new layers for deep neural networks. Pioneering work by Hofer et al., [28] for instance, introduced a layer that permitted topological descriptors like persistence diagrams or persistence barcodes to be integrated into a deep neural network. This was achieved by means of end-to-end-trainable projection functions, permitting topological features to be used to solve shape classification tasks, for instance. Follow-up work expanded on the theoretical properties of such descriptors and integrated them into the field of representation learning. [29] Other such topological layers include layers based on extended persistent homology descriptors, [30] persistence landscapes, [31] or coordinate functions. [32] In parallel, persistent homology also found applications in graph-learning tasks. Noteworthy examples include new algorithms for learning task-specific filtration functions for graph classification or node classification tasks. [33] [34] [35]
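For intuition, the sketch below (NumPy; an illustrative construction in the spirit of, but not identical to, the layer of Hofer et al. [28]) maps a persistence diagram to a fixed-size feature vector by summing the responses of learnable Gaussian structure elements placed in the birth-death plane:

```python
import numpy as np

def diagram_to_vector(diagram, centers, sigma=0.5):
    """Map a persistence diagram (array of (birth, death) pairs) to a fixed-size vector.

    Each output coordinate is the summed response of one Gaussian bump placed at a
    (learnable) center in the birth-death plane; the map is smooth in `centers`, so an
    automatic-differentiation version could be trained end to end inside a larger network.
    """
    diagram = np.asarray(diagram, dtype=float)          # shape (n_points, 2)
    diffs = diagram[:, None, :] - centers[None, :, :]   # (n_points, n_centers, 2)
    sq_dist = np.sum(diffs ** 2, axis=-1)               # (n_points, n_centers)
    return np.exp(-sq_dist / (2.0 * sigma ** 2)).sum(axis=0)   # (n_centers,)

# Toy persistence diagram and four structure-element centers.
diagram = [(0.1, 0.9), (0.2, 0.4), (0.5, 0.7)]
centers = np.array([[0.1, 0.9], [0.3, 0.5], [0.6, 0.8], [0.0, 1.0]])
print(diagram_to_vector(diagram, centers))              # 4-dimensional topological feature
```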

Applications

TDL is rapidly finding new applications across different domains, including data compression, [36] enhancing the expressivity and predictive performance of graph neural networks, [16] [17] [33] action recognition, [37] and trajectory prediction. [38]

References

  1. Hajij, M.; Zamzmi, G.; Papamarkou, T.; Miolane, N.; Guzmán-Sáenz, A.; Ramamurthy, K. N.; Schaub, M. T. (2022), Topological deep learning: Going beyond graph data, arXiv: 2206.00606
  2. Papillon, M.; Sanborn, S.; Hajij, M.; Miolane, N. (2023). "Architectures of topological deep learning: A survey on topological neural networks". arXiv: 2304.10031 [cs.LG].
  3. Ebli, S.; Defferrard, M.; Spreemann, G. (2020), Simplicial neural networks, arXiv: 2010.03633
  4. Battiloro, C.; Testa, L.; Giusti, L.; Sardellitti, S.; Di Lorenzo, P.; Barbarossa, S. (2023), Generalized simplicial attention neural networks, arXiv: 2309.02138
  5. Yang, M.; Isufi, E. (2023), Convolutional learning on simplicial complexes, arXiv: 2301.11163
  6. Chen, Y.; Gel, Y. R.; Poor, H. V. (2022), "BScNets: Block simplicial complex neural networks", Proceedings of the AAAI Conference on Artificial Intelligence, 36 (6): 6333–6341, arXiv: 2112.06826 , doi:10.1609/aaai.v36i6.20583
  7. Uray, Martin; Giunti, Barbara; Kerber, Michael; Huber, Stefan (2024-10-01). "Topological Data Analysis in smart manufacturing: State of the art and future directions". Journal of Manufacturing Systems. 76: 75–91. arXiv: 2310.09319 . doi:10.1016/j.jmsy.2024.07.006. ISSN   0278-6125.
  8. Hajij, M.; Istvan, K.; Zamzmi, G. (2020), Cell complex neural networks, arXiv: 2010.00743
  9. Bianchini, Monica; Scarselli, Franco (2014). "On the Complexity of Neural Network Classifiers: A Comparison Between Shallow and Deep Architectures". IEEE Transactions on Neural Networks and Learning Systems. 25 (8): 1553–1565. doi:10.1109/TNNLS.2013.2293637. ISSN   2162-237X.
  10. Naitzat, Gregory; Zhitnikov, Andrey; Lim, Lek-Heng (2020). "Topology of Deep Neural Networks" (PDF). Journal of Machine Learning Research. 21 (1): 184:7503–184:7542. ISSN   1532-4435.
  11. Birdal, Tolga; Lou, Aaron; Guibas, Leonidas J; Simsekli, Umut (2021). "Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks". Advances in Neural Information Processing Systems. 34. Curran Associates, Inc.: 6776–6789.
  12. Ballester, Rubén; Clemente, Xavier Arnal; Casacuberta, Carles; Madadi, Meysam; Corneanu, Ciprian A.; Escalera, Sergio (2024). "Predicting the generalization gap in neural networks using topological data analysis". Neurocomputing. 596: 127787. arXiv: 2203.12330 . doi:10.1016/j.neucom.2024.127787.
  13. Rieck, Bastian; Togninalli, Matteo; Bock, Christian; Moor, Michael; Horn, Max; Gumbsch, Thomas; Borgwardt, Karsten (2018-09-27). "Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology". International Conference on Learning Representations.
  14. Dupuis, Benjamin; Deligiannidis, George; Simsekli, Umut (2023-07-03). "Generalization Bounds using Data-Dependent Fractal Dimensions". Proceedings of the 40th International Conference on Machine Learning. PMLR: 8922–8968.
  15. Bronstein, Michael M.; Bruna, Joan; LeCun, Yann; Szlam, Arthur; Vandergheynst, Pierre (2017). "Geometric Deep Learning: Going beyond Euclidean data". IEEE Signal Processing Magazine. 34 (4): 18–42. arXiv: 1611.08097 . doi:10.1109/MSP.2017.2693418. ISSN   1053-5888.
  16. Bodnar, Cristian; Frasca, Fabrizio; Wang, Yuguang; Otter, Nina; Montufar, Guido F.; Lió, Pietro; Bronstein, Michael (2021-07-01). "Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks". Proceedings of the 38th International Conference on Machine Learning. PMLR: 1026–1037.
  17. Bodnar, Cristian; Frasca, Fabrizio; Otter, Nina; Wang, Yuguang; Liò, Pietro; Montufar, Guido F; Bronstein, Michael (2021). "Weisfeiler and Lehman Go Cellular: CW Networks". Advances in Neural Information Processing Systems. 34. Curran Associates, Inc.: 2625–2640.
  18. Carlsson, Gunnar (2009-01-29). "Topology and data". Bulletin of the American Mathematical Society. 46 (2): 255–308. doi: 10.1090/S0273-0979-09-01249-X . ISSN   0273-0979.
  19. Adcock, Aaron; Carlsson, Erik; Carlsson, Gunnar (2016). "The ring of algebraic functions on persistence bar codes". Homology, Homotopy and Applications. 18 (1): 381–402. arXiv: 1304.0530 . doi:10.4310/HHA.2016.v18.n1.a21.
  20. Adams, Henry; Emerson, Tegan; Kirby, Michael; Neville, Rachel; Peterson, Chris; Shipman, Patrick; Chepushtanova, Sofya; Hanson, Eric; Motta, Francis; Ziegelmeier, Lori (2017). "Persistence Images: A Stable Vector Representation of Persistent Homology". Journal of Machine Learning Research. 18 (8): 1–35. ISSN   1533-7928.
  21. Bubenik, Peter (2015). "Statistical Topological Data Analysis using Persistence Landscapes". Journal of Machine Learning Research. 16 (3): 77–102. ISSN   1533-7928.
  22. Kwitt, Roland; Huber, Stefan; Niethammer, Marc; Lin, Weili; Bauer, Ulrich (2015). "Statistical Topological Data Analysis - A Kernel Perspective". Advances in Neural Information Processing Systems. 28. Curran Associates, Inc.
  23. Carrière, Mathieu; Cuturi, Marco; Oudot, Steve (2017-07-17). "Sliced Wasserstein Kernel for Persistence Diagrams". Proceedings of the 34th International Conference on Machine Learning. PMLR: 664–673.
  24. Kusano, Genki; Fukumizu, Kenji; Hiraoka, Yasuaki (2018). "Kernel Method for Persistence Diagrams via Kernel Embedding and Weight Factor". Journal of Machine Learning Research. 18 (189): 1–41. ISSN   1533-7928.
  25. Le, Tam; Yamada, Makoto (2018). "Persistence Fisher Kernel: A Riemannian Manifold Kernel for Persistence Diagrams". Advances in Neural Information Processing Systems. 31. Curran Associates, Inc.
  26. Maggs, Kelly; Hacker, Celia; Rieck, Bastian (2023-10-13). "Simplicial Representation Learning with Neural k-Forms". International Conference on Learning Representations.
  27. Ramamurthy, K. N.; Guzmán-Sáenz, A.; Hajij, M. (2023), Topo-mlp: A simplicial network without message passing, pp. 1–5
  28. Hofer, Christoph; Kwitt, Roland; Niethammer, Marc; Uhl, Andreas (2017). "Deep Learning with Topological Signatures". Advances in Neural Information Processing Systems. 30. Curran Associates, Inc.
  29. Hofer, Christoph D.; Kwitt, Roland; Niethammer, Marc (2019). "Learning Representations of Persistence Barcodes". Journal of Machine Learning Research. 20 (126): 1–45. ISSN   1533-7928.
  30. Carriere, Mathieu; Chazal, Frederic; Ike, Yuichi; Lacombe, Theo; Royer, Martin; Umeda, Yuhei (2020-06-03). "PersLay: A Neural Network Layer for Persistence Diagrams and New Graph Topological Signatures". Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. PMLR: 2786–2796.
  31. Kim, Kwangho; Kim, Jisu; Zaheer, Manzil; Kim, Joon; Chazal, Frederic; Wasserman, Larry (2020). "PLLay: Efficient Topological Layer based on Persistent Landscapes". Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 15965–15977.
  32. Gabrielsson, Rickard Brüel; Nelson, Bradley J.; Dwaraknath, Anjan; Skraba, Primoz (2020-06-03). "A Topology Layer for Machine Learning". Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. PMLR: 1553–1563.
  33. Horn, Max; Brouwer, Edward De; Moor, Michael; Moreau, Yves; Rieck, Bastian; Borgwardt, Karsten (2021-10-06). "Topological Graph Neural Networks". International Conference on Learning Representations.
  34. Hofer, Christoph; Graf, Florian; Rieck, Bastian; Niethammer, Marc; Kwitt, Roland (2020-11-21). "Graph Filtration Learning". Proceedings of the 37th International Conference on Machine Learning. PMLR: 4314–4323.
  35. Immonen, Johanna; Souza, Amauri; Garg, Vikas (2023-12-15). "Going beyond persistent homology using persistent homology". Advances in Neural Information Processing Systems. 36: 63150–63173.
  36. Battiloro, C.; Di Lorenzo, P.; Ribeiro, A. (September 2023), Parametric dictionary learning for topological signal representation, IEEE, pp. 1958–1962
  37. Wang, C.; Ma, N.; Wu, Z.; Zhang, J.; Yao, Y. (August 2022), Survey of Hypergraph Neural Networks and Its Application to Action Recognition, Springer Nature Switzerland, pp. 387–398
  38. Roddenberry, T. M.; Glaze, N.; Segarra, S. (July 2021), Principled simplicial neural networks for trajectory prediction, PMLR, pp. 9020–9029, arXiv: 2102.10058