In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is insensitive to the particular metric chosen and provides dimensionality reduction and robustness to noise. Beyond this, it inherits functoriality, a fundamental concept of modern mathematics, from its topological nature, which allows it to adapt to new mathematical tools.[ citation needed ]
The initial motivation is to study the shape of data. TDA has combined algebraic topology and other tools from pure mathematics to allow mathematically rigorous study of "shape". The main tool is persistent homology, an adaptation of homology to point cloud data. Persistent homology has been applied to many types of data across many fields. Moreover, its mathematical foundation is also of theoretical importance. The unique features of TDA make it a promising bridge between topology and geometry.[ citation needed ]
TDA is premised on the idea that the shape of data sets contains relevant information. Real high-dimensional data is typically sparse, and tends to have relevant low dimensional features. One task of TDA is to provide a precise characterization of this fact. For example, the trajectory of a simple predator-prey system governed by the Lotka–Volterra equations [1] forms a closed circle in state space. TDA provides tools to detect and quantify such recurrent motion. [2]
Many algorithms for data analysis, including those used in TDA, require setting various parameters. Without prior domain knowledge, the correct collection of parameters for a data set is difficult to choose. The main insight of persistent homology is to use the information obtained from all parameter values by encoding this huge amount of information into an understandable and easy-to-represent form. With TDA, there is a mathematical interpretation when the information is a homology group. In general, the assumption is that features that persist for a wide range of parameters are "true" features. Features persisting for only a narrow range of parameters are presumed to be noise, although the theoretical justification for this is unclear. [3]
Precursors to the full concept of persistent homology appeared gradually over time. [4] In 1990, Patrizio Frosini introduced a pseudo-distance between submanifolds, and later the size function, which on one-dimensional curves is equivalent to the 0th persistent homology. [5] [6] Nearly a decade later, Vanessa Robins studied the images of homomorphisms induced by inclusion. [7] Finally, shortly thereafter, Edelsbrunner et al. introduced the concept of persistent homology together with an efficient algorithm and its visualization as a persistence diagram. [8] Carlsson et al. reformulated the initial definition and gave an equivalent visualization method called persistence barcodes, [9] interpreting persistence in the language of commutative algebra. [10]
In algebraic topology, persistent homology emerged through the work of Sergey Barannikov on Morse theory. In 1994, Barannikov canonically partitioned the set of critical values of a smooth Morse function into "birth–death" pairs, classified filtered complexes, and described their invariants, equivalent to the persistence diagram and persistence barcodes, together with an efficient algorithm for their calculation, under the name of canonical forms. [11] [12]
Some widely used concepts are introduced below. Note that some definitions may vary from author to author.
A point cloud is often defined as a finite set of points in some Euclidean space, but may be taken to be any finite metric space.
The Čech complex of a point cloud is the nerve of the cover of balls of a fixed radius around each point in the cloud.
A persistence module $U$ indexed by $\mathbb{R}$ is a vector space $U_t$ for each $t \in \mathbb{R}$, and a linear map $u_t^s : U_s \to U_t$ whenever $s \le t$, such that $u_t^t = \mathrm{id}$ for all $t$ and $u_t^s u_s^r = u_t^r$ whenever $r \le s \le t$. [13] An equivalent definition is a functor from $\mathbb{R}$, considered as a partially ordered set, to the category of vector spaces.
The persistent homology group $PH$ of a point cloud is the persistence module defined as $PH_k(X) := \{ H_k(X_r) \}_{r \ge 0}$, where $X_r$ is the Čech complex of radius $r$ of the point cloud $X$ and $H_k$ is the homology group.
A persistence barcode is a multiset of intervals in $\mathbb{R}$, and a persistence diagram is a multiset of points in the half-plane $\{(b, d) \in \mathbb{R}^2 \mid b \le d\}$.
The Wasserstein distance between two persistence diagrams $X$ and $Y$ is defined as $$W_p(X, Y) := \inf_{\varphi : X \to Y} \left[ \sum_{x \in X} \lVert x - \varphi(x) \rVert_\infty^p \right]^{1/p},$$ where $1 \le p < \infty$ and $\varphi$ ranges over bijections between $X$ and $Y$. Please refer to figure 3.1 in Munch [14] for illustration.
The bottleneck distance between $X$ and $Y$ is $$W_\infty(X, Y) := \inf_{\varphi : X \to Y} \sup_{x \in X} \lVert x - \varphi(x) \rVert_\infty.$$ This is a special case of the Wasserstein distance, letting $p = \infty$.
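For small diagrams, the bottleneck distance can be computed by brute force: each point may be matched to a point of the other diagram or to its nearest diagonal point, and one minimizes the largest $L_\infty$ displacement over all such matchings. The sketch below implements this definition directly (the function name and structure are our own; the search is exponential in the diagram size and purely illustrative):

```python
from itertools import permutations

def bottleneck_distance(dgm_x, dgm_y):
    """Brute-force bottleneck distance between two small persistence
    diagrams, each given as a list of (birth, death) pairs.  Points may
    also be matched to the diagonal, so each diagram is augmented with
    the diagonal projections of the other diagram's points."""
    proj = lambda p: ((p[0] + p[1]) / 2.0,) * 2   # nearest diagonal point
    xs = list(dgm_x) + [proj(p) for p in dgm_y]
    ys = list(dgm_y) + [proj(p) for p in dgm_x]
    linf = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    # minimize, over all bijections, the largest L-infinity displacement
    return min(max(linf(p, q) for p, q in zip(xs, perm))
               for perm in permutations(ys))
```

For example, between $\{(0, 10)\}$ and $\{(0, 10), (4, 5)\}$ the optimal matching sends $(4, 5)$ to its diagonal projection $(4.5, 4.5)$, giving distance $0.5$.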
The first classification theorem for persistent homology appeared in 1994 [11] via Barannikov's canonical forms. The classification theorem interpreting persistence in the language of commutative algebra appeared in 2005: [10] for a finitely generated persistence module $C$ with coefficients in a field $F$, $$H(C; F) \simeq \bigoplus_i x^{t_i} \cdot F[x] \oplus \left( \bigoplus_j x^{r_j} \cdot \left( F[x] / (x^{s_j} \cdot F[x]) \right) \right).$$ Intuitively, the free parts correspond to the homology generators that appear at filtration level $t_i$ and never disappear, while the torsion parts correspond to those that appear at filtration level $r_j$ and last for $s_j$ steps of the filtration (or equivalently, disappear at filtration level $r_j + s_j$). [11]
Persistent homology is visualized through a barcode or persistence diagram. The barcode has its roots in abstract mathematics: the category of finite filtered complexes over a field is semi-simple, so any filtered complex is isomorphic to its canonical form, a direct sum of one- and two-dimensional simple filtered complexes.
Stability is desirable because it provides robustness against noise. If $X$ is any space which is homeomorphic to a simplicial complex, and $f, g : X \to \mathbb{R}$ are continuous tame [15] functions, then the persistence vector spaces $\{ H_k(f^{-1}([0, r])) \}$ and $\{ H_k(g^{-1}([0, r])) \}$ are finitely presented, and $W_\infty(D(f), D(g)) \le \lVert f - g \rVert_\infty$, where $W_\infty$ refers to the bottleneck distance [16] and $D$ is the map taking a continuous tame function to the persistence diagram of its $k$-th homology.
The basic workflow in TDA is: [17]
point cloud → nested complexes → persistence module → barcode or diagram
The first algorithm over all fields for persistent homology in the setting of algebraic topology was described by Barannikov [11] through reduction to canonical form by upper-triangular matrices. The algorithm for persistent homology over $\mathbb{Z}_2$ was given by Edelsbrunner et al. [8] Zomorodian and Carlsson gave the practical algorithm to compute persistent homology over all fields. [10] Edelsbrunner and Harer's book gives general guidance on computational topology. [19]
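The core of these algorithms is a column reduction of the boundary matrix of the filtered complex. The following is a minimal sketch over $\mathbb{Z}_2$ (function and variable names are our own, and the code favors clarity over efficiency):

```python
def reduce_boundary_matrix(columns):
    """Standard persistence reduction over Z/2.  `columns[j]` is the set
    of row indices in the boundary of simplex j; simplices are assumed
    to be listed in filtration order.  Returns the (birth, death) index
    pairs and the set of essential (never-dying) simplices."""
    low_to_col = {}                 # lowest row index -> owning column
    pairs, essential = [], set(range(len(columns)))
    for j, col in enumerate(columns):
        col = set(col)
        while col and max(col) in low_to_col:
            col ^= columns[low_to_col[max(col)]]   # add column mod 2
        columns[j] = col            # store the reduced column
        if col:
            i = max(col)
            low_to_col[i] = j
            pairs.append((i, j))    # class born at i dies entering j
            essential.discard(i)
            essential.discard(j)
    return pairs, essential

# Filtration of a filled triangle: vertices 0-2, then edges 3-5, then the 2-cell 6.
boundaries = [set(), set(), set(), {0, 1}, {0, 2}, {1, 2}, {3, 4, 5}]
pairs, essential = reduce_boundary_matrix(boundaries)
# pairs -> [(1, 3), (2, 4), (5, 6)]; essential -> {0}
```

In the example, the two short $H_0$ bars (1, 3) and (2, 4) record vertices merging into one component, the bar (5, 6) records the loop created by edge 5 being filled by triangle 6, and vertex 0 carries the essential $H_0$ class.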
One issue that arises in computation is the choice of complex. The Čech complex and Vietoris–Rips complex are most natural at first glance; however, their size grows rapidly with the number of data points. The Vietoris–Rips complex is preferred over the Čech complex because its definition is simpler and the Čech complex requires extra effort to define in a general finite metric space. Efficient ways to lower the computational cost of homology have been studied. For example, the α-complex and witness complex are used to reduce the dimension and size of complexes. [20]
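The contrast between the two constructions is easy to see in code: a Vietoris–Rips complex needs only pairwise distances, whereas a Čech complex would additionally require testing whether the balls around the vertices have a common point (a smallest-enclosing-ball computation). A sketch of the Rips construction up to dimension 2 (names are our own, and the enumeration is exhaustive, so this is only for small point clouds):

```python
from itertools import combinations
from math import dist

def rips_complex(points, r, max_dim=2):
    """Vietoris-Rips complex at scale r: a simplex is included whenever
    all pairwise distances among its vertices are at most r.  Returns a
    list of simplices (tuples of point indices) up to dimension max_dim."""
    n = len(points)
    close = lambda i, j: dist(points[i], points[j]) <= r
    simplices = [(i,) for i in range(n)]
    for k in range(2, max_dim + 2):        # k vertices span a (k-1)-simplex
        simplices += [s for s in combinations(range(n), k)
                      if all(close(i, j) for i, j in combinations(s, 2))]
    return simplices
```

On the four corners of a unit square with $r = 1$, this yields the four vertices and the four sides but no triangles, since every candidate triangle contains a diagonal pair at distance $\sqrt{2} > 1$.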
Recently, Discrete Morse theory has shown promise for computational homology because it can reduce a given simplicial complex to a much smaller cellular complex which is homotopic to the original one. [21] This reduction can in fact be performed as the complex is constructed by using matroid theory, leading to further performance increases. [22] Another recent algorithm saves time by ignoring the homology classes with low persistence. [23]
Various software packages are available, such as javaPlex, Dionysus, Perseus, PHAT, DIPHA, GUDHI, Ripser, and TDAstats. A comparison between these tools is done by Otter et al. [24] Giotto-tda is a Python package dedicated to integrating TDA in the machine learning workflow by means of a scikit-learn API. An R package TDA is capable of calculating recently invented concepts like landscape and the kernel distance estimator. [25] The Topology ToolKit is specialized for continuous data defined on manifolds of low dimension (1, 2 or 3), as typically found in scientific visualization. Cubicle is optimized for large (gigabyte-scale) grayscale image data in dimension 1, 2 or 3 using cubical complexes and discrete Morse theory. Another R package, TDAstats, uses the Ripser library to calculate persistent homology. [26]
High-dimensional data is impossible to visualize directly. Many methods have been invented to extract a low-dimensional structure from the data set, such as principal component analysis and multidimensional scaling. [27] However, it is important to note that the problem itself is ill-posed, since many different topological features can be found in the same data set. Thus, the study of visualization of high-dimensional spaces is of central importance to TDA, although it does not necessarily involve the use of persistent homology. However, recent attempts have been made to use persistent homology in data visualization. [28]
Carlsson et al. have proposed a general method called MAPPER. [29] It inherits the idea of Serre that a covering preserves homotopy. [30] A generalized formulation of MAPPER is as follows:
Let $X$ and $Z$ be topological spaces and let $f : X \to Z$ be a continuous map. Let $\mathbb{U} = \{ U_\alpha \}$ be a finite open covering of $Z$. The output of MAPPER is the nerve of the pullback cover $M(\mathbb{U}, f) := N(f^{-1}(\mathbb{U}))$, where each preimage is split into its connected components. [28] This is a very general concept, of which the Reeb graph [31] and merge trees are special cases.
This is not quite the original definition. [29] Carlsson et al. choose $Z$ to be $\mathbb{R}$ or $\mathbb{R}^2$, and cover it with open sets such that at most two intersect. [3] This restriction means that the output is in the form of a complex network. Because the topology of a finite point cloud is trivial, clustering methods (such as single linkage) are used to produce the analogue of connected sets in the preimage when MAPPER is applied to actual data.
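The construction just described can be sketched in a few lines. The helper below uses a naive single-linkage rule (points joined by a chain of steps of length at most `eps` form one cluster) in place of a production clustering method; all names and parameters are our own:

```python
from math import dist

def single_linkage(points, idxs, eps):
    """Cluster the points named by idxs: two points land in the same
    cluster when a chain of steps of length <= eps connects them."""
    clusters = []
    for i in idxs:
        hits = [c for c in clusters
                if any(dist(points[i], points[j]) <= eps for j in c)]
        merged = {i}.union(*hits) if hits else {i}
        clusters = [c for c in clusters if c not in hits] + [merged]
    return clusters

def mapper(points, filter_values, intervals, eps):
    """Minimal MAPPER sketch: pull back an interval cover of the filter's
    range, cluster each preimage, and connect clusters sharing points."""
    nodes = []
    for (lo, hi) in intervals:
        idxs = [i for i, v in enumerate(filter_values) if lo <= v <= hi]
        nodes += single_linkage(points, idxs, eps)
    edges = [(a, b) for a in range(len(nodes))
             for b in range(a + 1, len(nodes)) if nodes[a] & nodes[b]]
    return nodes, edges
```

For points sampled along a segment, with the identity filter and three overlapping intervals, the output is a path graph: one cluster per interval, with an edge between consecutive clusters because they share points in the overlaps.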
Mathematically speaking, MAPPER is a variation of the Reeb graph: it can be seen as a discrete approximation of the Reeb graph of $f$, to which the output converges as the cover $\mathbb{U}$ is refined. [32] The added flexibility also has disadvantages. One problem is instability: some changes in the choice of the cover can lead to a major change in the output of the algorithm. [33] Work has been done to overcome this problem. [28]
Three successful applications of MAPPER can be found in Carlsson et al. [34] A comment on the applications in this paper by J. Curry is that "a common feature of interest in applications is the presence of flares or tendrils". [35]
A free implementation of MAPPER, written by Daniel Müllner and Aravindakshan Babu, is available online. MAPPER also forms the basis of Ayasdi's AI platform.
Multidimensional persistence is important to TDA. The concept arises in both theory and practice. The first investigation of multidimensional persistence came early in the development of TDA. [36] Carlsson and Zomorodian introduced the theory of multidimensional persistence in [37] and, in collaboration with Singh, [38] introduced the use of tools from symbolic algebra (Gröbner basis methods) to compute MPH modules. Their definition presents multidimensional persistence with n parameters as a graded module over a polynomial ring in n variables. Tools from commutative and homological algebra are applied to the study of multidimensional persistence in the work of Harrington–Otter–Schenck–Tillmann. [39] The first application to appear in the literature is a method for shape comparison, similar to the invention of TDA. [40]
The definition of an n-dimensional persistence module in $\mathbb{R}^n$ is [35] a vector space $V_s$ assigned to each point $s \in \mathbb{R}^n$, together with a linear map $\rho_s^t : V_s \to V_t$ whenever $s \le t$ (componentwise), such that $\rho_s^s = \mathrm{id}$ for all $s$ and $\rho_t^u \rho_s^t = \rho_s^u$ whenever $s \le t \le u$.
It is worth noting that there is some controversy over the definition of multidimensional persistence. [35]
One of the advantages of one-dimensional persistence is its representability by a diagram or barcode. However, discrete complete invariants of multidimensional persistence modules do not exist. [41] The main reason is that, by Gabriel's theorem in the theory of quiver representations, [42] the structure of the collection of indecomposables is extremely complicated, although a finitely generated n-dimensional persistence module can be uniquely decomposed into a direct sum of indecomposables by the Krull–Schmidt theorem. [43]
Nonetheless, many results have been established. Carlsson and Zomorodian introduced the rank invariant, defined as $\rho_M(u, v) := \operatorname{rank}(x^{v-u} : M_u \to M_v)$, in which $M$ is a finitely generated n-graded module. In one dimension, it is equivalent to the barcode. In the literature, the rank invariant is often referred to as the persistent Betti numbers (PBNs). [19] In many theoretical works, authors have used a more restricted definition, an analogue of sublevel-set persistence. Specifically, the persistence Betti numbers of a function $f : X \to \mathbb{R}$ are given by the function $\beta_f : \Delta^+ \to \mathbb{N}$, taking each $(u, v) \in \Delta^+$ to $\beta_f(u, v) := \operatorname{rank}\big( H(X_u) \to H(X_v) \big)$, where $\Delta^+ := \{ (u, v) \in \mathbb{R}^2 \mid u < v \}$ and $X_u := f^{-1}((-\infty, u])$.
Some basic properties include monotonicity and diagonal jump. [44] Persistent Betti numbers are finite if $X$ is a compact and locally contractible subspace of $\mathbb{R}^n$. [45]
Using a foliation method, the k-dimensional PBNs can be decomposed into a family of one-dimensional PBNs by dimensionality reduction. [46] This method has also led to a proof that multidimensional PBNs are stable. [47] The discontinuities of PBNs occur only at points $(u, v)$ where either $u$ is a discontinuity point of $\beta_f(\cdot, v)$ or $v$ is a discontinuity point of $\beta_f(u, \cdot)$, under the assumption that $f$ is continuous and $X$ is a compact, triangulable topological space. [48]
The persistence space, a generalization of the persistence diagram, is defined as the multiset of all points with multiplicity larger than 0, together with the diagonal. [49] It provides a stable and complete representation of PBNs. Ongoing work by Carlsson et al. aims to give a geometric interpretation of persistent homology, which might provide insights on how to combine machine learning theory with topological data analysis. [50]
The first practical algorithm to compute multidimensional persistence was proposed early in the development of the field. [51] Since then, many other algorithms have been proposed, based on such concepts as discrete Morse theory [52] and finite-sample estimation. [53]
The standard paradigm in TDA is often referred to as sublevel persistence. Apart from multidimensional persistence, much work has been done to extend this special case.
The nonzero maps in a persistence module are restricted by the preorder relation of the indexing category. However, mathematicians have found that unanimity of direction is not essential to many results. "The philosophical point is that the decomposition theory of graph representations is somewhat independent of the orientation of the graph edges." [54] Zigzag persistence is important to the theoretical side. The examples given in Carlsson's review paper to illustrate the importance of functoriality all share some of its features. [3]
There have been some attempts to relax the strict restrictions on the function. [55] Please refer to the Categorification and cosheaves and Impact on mathematics sections for more information.
It is natural to extend persistent homology to other basic concepts in algebraic topology, such as cohomology and relative homology/cohomology. [56] An interesting application is the computation of circular coordinates for a data set via the first persistent cohomology group. [57]
Standard persistent homology studies real-valued functions. Circle-valued maps may also be useful: "persistence theory for circle-valued maps promises to play the role for some vector fields as does the standard persistence theory for scalar fields", as commented in Dan Burghelea et al. [58] The main difference is that Jordan cells (very similar in format to the Jordan blocks in linear algebra), which vanish in the real-valued case, can be nontrivial for circle-valued functions; combined with barcodes, they give the invariants of a tame map, under moderate conditions. [58]
Two techniques they use are Morse–Novikov theory [59] and graph representation theory. [60] More recent results can be found in D. Burghelea et al. [61] For example, the tameness requirement can be replaced by the much weaker condition of continuity.
The proof of the structure theorem relies on the base domain being a field, so few attempts have been made on persistent homology with torsion coefficients. Frosini defined a pseudometric on this specific module and proved its stability. [62] One of its novelties is that it does not depend on a classification theory to define the metric. [63]
One advantage of category theory is its ability to lift concrete results to a higher level, showing relationships between seemingly unconnected objects. Bubenik et al. [64] offer a short introduction to category theory tailored for TDA.
Category theory is the language of modern algebra, and has been widely used in the study of algebraic geometry and topology. It has been noted that "the key observation of [10] is that the persistence diagram produced by [8] depends only on the algebraic structure carried by this diagram." [65] The use of category theory in TDA has proved to be fruitful. [64] [65]
Following the notation of Bubenik et al., [65] the indexing category $P$ is any preordered set (not necessarily $\mathbb{R}$ or $\mathbb{N}$), the target category $D$ is any category (instead of the commonly used $\mathrm{Vect}_F$), and functors $P \to D$ are called generalized persistence modules in $D$ over $P$.
One advantage of using category theory in TDA is a clearer understanding of concepts and the discovery of new relationships between proofs. Take two examples for illustration. The understanding of the correspondence between interleaving and matching is of huge importance, since matching was the method used from the beginning (modified from Morse theory). A summary of works can be found in Vin de Silva et al. [66] Many theorems can be proved much more easily in a more intuitive setting. [63] Another example is the relationship between the construction of different complexes from point clouds. It has long been noticed that Čech and Vietoris–Rips complexes are related. Specifically, writing $\check{C}_r$ for the Čech complex built from balls of radius $r$ and $\operatorname{VR}_r$ for the Vietoris–Rips complex on simplices of diameter at most $2r$, one has $\check{C}_r \subseteq \operatorname{VR}_r \subseteq \check{C}_{2r}$ in any metric space. [67] The essential relationship between the Čech and Rips complexes can be seen much more clearly in categorical language. [66]
The language of category theory also helps cast results in terms recognizable to the broader mathematical community. The bottleneck distance is widely used in TDA because of the results on stability with respect to it. [13] [16] In fact, the interleaving distance is the terminal object in a poset category of stable metrics on multidimensional persistence modules over a prime field. [63] [68]
Sheaves, a central concept in modern algebraic geometry, are intrinsically related to category theory. Roughly speaking, sheaves are the mathematical tool for understanding how local information determines global information. Justin Curry regards level set persistence as the study of fibers of continuous functions. The objects that he studies are very similar to those produced by MAPPER, but with sheaf theory as the theoretical foundation. [35] Although no breakthrough in the theory of TDA has yet used sheaf theory, it is promising, since there are many beautiful theorems in algebraic geometry relating to sheaf theory. For example, a natural theoretical question is whether different filtration methods result in the same output. [69]
Stability is of central importance to data analysis, since real data carry noise. By use of category theory, Bubenik et al. have distinguished between soft and hard stability theorems, and proved that soft cases are formal. [65] Specifically, the general workflow of TDA is
data → topological persistence module → algebraic persistence module → discrete invariant
The soft stability theorem asserts that the passage from data to the algebraic persistence module is Lipschitz continuous, and the hard stability theorem asserts that the passage from the algebraic persistence module to the discrete invariant is Lipschitz continuous.
The bottleneck distance is widely used in TDA. The isometry theorem asserts that the interleaving distance $d_I$ is equal to the bottleneck distance. [63] Bubenik et al. have abstracted the definition to that between functors $F, G : P \to D$ when $P$ is equipped with a sublinear projection or superlinear family, in which case $d_I$ still remains a pseudometric. [65] Considering the desirable properties of the interleaving distance, [70] we give the general definition (rather than the one first introduced): [13] let $\Gamma$ and $K$ be translations on the indexing set (monotone functions satisfying $x \le \Gamma(x)$ for all $x$). A $(\Gamma, K)$-interleaving between $F$ and $G$ consists of natural transformations $\varphi : F \Rightarrow G\Gamma$ and $\psi : G \Rightarrow FK$, such that the composites $(\psi\Gamma)\varphi$ and $(\varphi K)\psi$ are the natural maps induced by $K\Gamma$ and $\Gamma K$, respectively.
Bubenik et al. establish two main stability results, [65] which together summarize many results on the stability of different models of persistence.
For the stability theorem of multidimensional persistence, please refer to the subsection of persistence.
The structure theorem is of central importance to TDA; as commented by G. Carlsson, "what makes homology useful as a discriminator between topological spaces is the fact that there is a classification theorem for finitely generated abelian groups". [3] (see the fundamental theorem of finitely generated abelian groups).
The main argument used in the proof of the original structure theorem is the standard structure theorem for finitely generated modules over a principal ideal domain. [10] However, this argument fails if the indexing set is $\mathbb{R}$. [3]
In general, not every persistence module can be decomposed into intervals. [71] Many attempts have been made at relaxing the restrictions of the original structure theorem. The case of pointwise finite-dimensional persistence modules indexed by a locally finite subset of $\mathbb{R}$ is solved based on the work of Webb. [72] The most notable result was obtained by Crawley-Boevey, who solved the case of $\mathbb{R}$: Crawley-Boevey's theorem states that any pointwise finite-dimensional persistence module is a direct sum of interval modules. [73]
To understand this theorem, some concepts need introducing. An interval $J$ in $\mathbb{R}$ is a subset with the property that if $r, t \in J$ and there is an $s$ with $r \le s \le t$, then $s \in J$ as well. An interval module $k_J$ assigns the vector space $k$ to each element $s \in J$, and the zero vector space to elements outside $J$. All maps $\rho_s^t$ are the zero map, unless $s, t \in J$ and $s \le t$, in which case $\rho_s^t$ is the identity map. [35] Interval modules are indecomposable. [74]
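As a toy illustration of this definition, an interval module over a numeric index set can be written down directly. The sketch below (names are our own) returns the dimension function and the structure maps as scalars, since every vector space involved has dimension 0 or 1:

```python
def interval_module(J):
    """Interval module k_J for a closed interval J = (lo, hi):
    dimension 1 on J and 0 elsewhere; the structure map V_s -> V_t
    (for s <= t) is the identity when both s and t lie in J, and the
    zero map otherwise.  Maps are returned as 0/1 scalars."""
    lo, hi = J
    dim = lambda s: 1 if lo <= s <= hi else 0
    rho = lambda s, t: 1 if (lo <= s <= hi and lo <= t <= hi and s <= t) else 0
    return dim, rho
```

For $J = [1, 3]$, the module has dimension 1 at indices between 1 and 3 and dimension 0 elsewhere, and $\rho_s^t$ is nonzero exactly when both endpoints lie in $J$.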
Although Crawley-Boevey's result is a very powerful theorem, it still does not extend to the q-tame case. [71] A persistence module is q-tame if the rank of $\rho_s^t$ is finite for all $s < t$. There are examples of q-tame persistence modules that fail to be pointwise finite. [75] However, a similar structure theorem still holds if the features that exist at only one index value are removed; [74] this holds because the infinite-dimensional parts at each index value do not persist, due to the finite-rank condition. [76] Formally, the observable category $\mathrm{Ob}$ is defined as $\mathrm{Pers} / \mathrm{Eph}$, in which $\mathrm{Eph}$ denotes the full subcategory of $\mathrm{Pers}$ whose objects are the ephemeral modules (those with $\rho_s^t = 0$ whenever $s < t$). [74]
Note that the extended results listed here do not apply to zigzag persistence, since the analogue of a zigzag persistence module over $\mathbb{R}$ is not immediately obvious.
Real data is always finite, and so its study requires us to take stochasticity into account. Statistical analysis gives us the ability to separate true features of the data from artifacts introduced by random noise. Persistent homology has no inherent mechanism to distinguish between low-probability features and high-probability features.
One way to apply statistics to topological data analysis is to study the statistical properties of topological features of point clouds. The study of random simplicial complexes offers some insight into statistical topology. K. Turner et al. [77] offer a summary of work in this vein.
A second way is to study probability distributions on the persistence space. The persistence space is $B_\infty := \coprod_n B_n / \sim$, where $B_n$ is the space of all barcodes containing exactly $n$ intervals and the equivalences are $\{[x_1, y_1], \dots, [x_n, y_n]\} \sim \{[x_1, y_1], \dots, [x_{n-1}, y_{n-1}]\}$ if $x_n = y_n$. [78] This space is fairly complicated; for example, it is not complete under the bottleneck metric. The first attempt to study it was made by Y. Mileyko et al. [79] The space of persistence diagrams $D_p$ in their paper is defined as $$D_p := \left\{ d \;\middle|\; \sum_{x \in d} 2 \inf_{y \in \Delta} \lVert x - y \rVert^p < \infty \right\},$$ where $\Delta$ is the diagonal line in $\mathbb{R}^2$. A nice property is that $D_p$ is complete and separable in the Wasserstein metric $W_p$. Expectation, variance, and conditional probability can be defined in the Fréchet sense. This allows many statistical tools to be ported to TDA. Works on null hypothesis significance tests, [80] confidence intervals, [81] and robust estimates [82] are notable steps.
A third way is to consider the cohomology of probabilistic spaces or statistical systems directly. These are called information structures, and basically consist of a triple: sample space, random variables, and probability laws. [83] [84] Random variables are considered as partitions of the n atomic probabilities (seen as a probability (n−1)-simplex) on the lattice of partitions. The random variables, or modules of measurable functions, provide the cochain complexes, while the coboundary is considered as the general homological algebra first discovered by Hochschild, with a left action implementing the action of conditioning. The first cocycle condition corresponds to the chain rule of entropy, allowing one to derive, uniquely up to a multiplicative constant, Shannon entropy as the first cohomology class. The consideration of a deformed left action generalizes the framework to Tsallis entropies. Information cohomology is an example of a ringed topos. Multivariate k-mutual informations appear in coboundary expressions, and their vanishing, related to the cocycle condition, gives equivalent conditions for statistical independence. [85] Minima of mutual information, also called synergy, give rise to interesting independence configurations analogous to homotopy links. Because of its combinatorial complexity, only the simplicial subcase of the cohomology and of the information structure has been investigated on data. Applied to data, these cohomological tools quantify statistical dependences and independences, including Markov chains and conditional independence, in the multivariate case. [86] Notably, mutual informations generalize the correlation coefficient and covariance to nonlinear statistical dependences.
These approaches were developed independently and are only indirectly related to persistence methods, but they may be roughly understood in the simplicial case using the Hu Kuo Ting theorem, which establishes a one-to-one correspondence between mutual-information functions and finite measurable functions of a set with the intersection operator, allowing the construction of a Čech complex skeleton. Information cohomology offers some direct interpretation and application in terms of neuroscience (neural assembly theory and qualitative cognition [87] ), statistical physics, and deep neural networks, for which the structure and learning algorithm are imposed by the complex of random variables and the information chain rule. [88]
Persistence landscapes, introduced by Peter Bubenik, are a different way to represent barcodes, more amenable to statistical analysis. [89] The persistence landscape of a persistence module $M$ is defined as a function $\lambda : \mathbb{N} \times \mathbb{R} \to \overline{\mathbb{R}}$, $\lambda(k, t) := \sup\{ m \ge 0 \mid \beta^{t-m, t+m} \ge k \}$, where $\overline{\mathbb{R}}$ denotes the extended real line and $\beta^{a, b} := \dim \operatorname{im}(M(a \le b))$. The space of persistence landscapes is very nice: it inherits all the good properties of the barcode representation (stability, easy representation, etc.), but statistical quantities can be readily defined, and some problems in Y. Mileyko et al.'s work, such as the non-uniqueness of expectations, [79] can be overcome. Effective algorithms for computation with persistence landscapes are available. [90] Another approach is to use revised persistence, namely image, kernel and cokernel persistence. [91]
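For an interval-decomposable module given by its barcode, $\lambda(k, t)$ reduces to the $k$-th largest of the "tent" values $\min(t - b,\, d - t)$, clipped at zero, one tent per bar $(b, d)$. A minimal sketch (the function name is our own):

```python
def landscape(barcode, k, t):
    """k-th persistence landscape function of a barcode, evaluated at t:
    the k-th largest of the tent values min(t - b, d - t), clipped at 0,
    over the bars (b, d)."""
    tents = sorted((max(0.0, min(t - b, d - t)) for b, d in barcode),
                   reverse=True)
    return tents[k - 1] if k <= len(tents) else 0.0
```

For the barcode $\{(0, 4), (1, 3)\}$, the first landscape at $t = 2$ equals 2 (the half-length of the longer bar) and the second equals 1.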
More than one way exists to classify the applications of TDA. Perhaps the most natural way is by field. A very incomplete list of successful applications includes [92] data skeletonization, [93] shape study, [94] graph reconstruction, [95] [96] [97] [98] [99] image analysis, [100] [101] materials science, [102] [103] progression analysis of disease, [104] [105] sensor networks, [67] signal analysis, [106] the cosmic web, [107] complex networks, [108] [109] [110] [111] fractal geometry, [112] viral evolution, [113] propagation of contagions on networks, [114] bacteria classification using molecular spectroscopy, [115] super-resolution microscopy, [116] hyperspectral imaging in physical chemistry, [117] remote sensing, [118] feature selection, [119] and early warning signs of financial crashes. [120]
Another way is by distinguishing the techniques by G. Carlsson, [78]
one being the study of homological invariants of data on individual data sets, and the other is the use of homological invariants in the study of databases where the data points themselves have geometric structure.
Recent applications of TDA share several notable features, one of the most prominent being its combination with machine learning, discussed next.
One of the main fields of data analysis today is machine learning. Some examples of machine learning in TDA can be found in Adcock et al. [125] The links between TDA and machine learning became more pronounced over time, culminating in the fields of topological machine learning and topological deep learning. In order to apply tools from machine learning, the information obtained from TDA should be represented in vector form. An ongoing and promising attempt is the persistence landscape discussed above. Another attempt uses the concept of persistence images. [126] However, one problem of this method is the loss of stability, since the hard stability theorem depends on the barcode representation.
Topological data analysis and persistent homology have had impacts on Morse theory. [127] Morse theory has played a very important role in the theory of TDA, including on computation. Some work in persistent homology has extended results about Morse functions to tame functions or even to continuous functions.[ citation needed ] A forgotten result of R. Deheuvels, long predating the invention of persistent homology, extends Morse theory to all continuous functions. [128]
One recent result is that the category of Reeb graphs is equivalent to a particular class of cosheaf. [129] This is motivated by theoretical work in TDA, since the Reeb graph is related to Morse theory and MAPPER is derived from it. The proof of this theorem relies on the interleaving distance.
Persistent homology is closely related to spectral sequences. [130] [131] In particular the algorithm bringing a filtered complex to its canonical form [11] permits much faster calculation of spectral sequences than the standard procedure of calculating groups page by page. Zigzag persistence may turn out to be of theoretical importance to spectral sequences.
The Database of Original & Non-Theoretical Uses of Topology (DONUT) is a database of scholarly articles featuring practical applications of topological data analysis to various areas of science. DONUT was started in 2017 by Barbara Giunti, Janis Lazovskis, and Bastian Rieck; [132] as of October 2023, it contains 447 articles. [133] DONUT was featured in the November 2023 issue of the Notices of the American Mathematical Society. [134]
In mathematics, the term homology, originally introduced in algebraic topology, has three primary, closely-related usages. The most direct usage of the term is to take the homology of a chain complex, resulting in a sequence of abelian groups called homology groups. This operation, in turn, allows one to associate various named homologies or homology theories to various other types of mathematical objects. Lastly, since there are many homology theories for topological spaces that produce the same answer, one also often speaks of the homology of a topological space. There is also a related notion of the cohomology of a cochain complex, giving rise to various cohomology theories, in addition to the notion of the cohomology of a topological space.
In mathematics, a sheaf is a tool for systematically tracking data attached to the open sets of a topological space and defined locally with regard to them. For example, for each open set, the data could be the ring of continuous functions defined on that open set. Such data are well behaved in that they can be restricted to smaller open sets, and also the data assigned to an open set are equivalent to all collections of compatible data assigned to collections of smaller open sets covering the original open set.
In mathematics, group cohomology is a set of mathematical tools used to study groups using cohomology theory, a technique from algebraic topology. Analogous to group representations, group cohomology looks at the action of a group G on an associated G-module M to elucidate the properties of the group. By treating the G-module as a kind of topological space, with n-tuples of elements of G representing n-simplices, topological properties of the space may be computed, such as its cohomology groups H^n(G, M). The cohomology groups in turn provide insight into the structure of the group G and G-module M themselves. Group cohomology plays a role in the investigation of fixed points of a group action in a module or space, and of the quotient module or space with respect to a group action. Group cohomology is used in the fields of abstract algebra, homological algebra, algebraic topology and algebraic number theory, as well as in applications to group theory proper. As in algebraic topology, there is a dual theory called group homology. The techniques of group cohomology can also be extended to the case where, instead of a G-module, G acts on a nonabelian G-group; in effect, a generalization of a module to non-abelian coefficients.
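In the standard inhomogeneous cochain notation (a sketch, not tied to any particular source cited in this article), the cohomology groups arise from functions on tuples of group elements:

```latex
% Inhomogeneous cochains: C^n(G, M) = \{\, f : G^n \to M \,\}, with coboundary
\begin{aligned}
(d^n f)(g_1, \dots, g_{n+1})
  ={}& g_1 \cdot f(g_2, \dots, g_{n+1}) \\
     &+ \sum_{i=1}^{n} (-1)^i
        f(g_1, \dots, g_{i-1},\, g_i g_{i+1},\, g_{i+2}, \dots, g_{n+1}) \\
     &+ (-1)^{n+1} f(g_1, \dots, g_n),
\end{aligned}
\qquad
H^n(G, M) \;=\; \ker d^n / \operatorname{im} d^{n-1}.
```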
In mathematics, and specifically in topology, a CW complex is a topological space that is built by gluing together topological balls of different dimensions in specific ways. It generalizes both manifolds and simplicial complexes and has particular significance for algebraic topology. It was initially introduced by J. H. C. Whitehead to meet the needs of homotopy theory. CW complexes have better categorical properties than simplicial complexes, but still retain a combinatorial nature that allows for computation.
In mathematics, triangulation describes the replacement of topological spaces by piecewise linear spaces, i.e. the choice of a homeomorphism to a suitable simplicial complex. Spaces that are homeomorphic to a simplicial complex are called triangulable. Triangulation has various uses in different branches of mathematics, for instance in algebraic topology, in complex analysis, and in modeling.
In mathematics, Hochschild homology (and cohomology) is a homology theory for associative algebras over rings. There is also a theory for Hochschild homology of certain functors. Hochschild cohomology was introduced by Gerhard Hochschild (1945) for algebras over a field, and extended to algebras over more general rings by Henri Cartan and Samuel Eilenberg (1956).
Size functions are shape descriptors, in a geometrical/topological sense. They are functions from the half-plane to the natural numbers, counting certain connected components of a topological space. They are used in pattern recognition and topology.
Persistent homology is a method for computing topological features of a space at different spatial resolutions. Features that persist over a wide range of spatial scales are deemed more likely to represent true features of the underlying space, rather than artifacts of sampling, noise, or a particular choice of parameters.
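For intuition, zero-dimensional persistent homology (the persistence of connected components) can be computed with a Kruskal-style union-find pass over edges sorted by length. The sketch below is illustrative only; the function name and conventions (every component born at scale 0, deaths assigned by the elder rule) are ad hoc, not taken from any particular library.

```python
import itertools

def zero_dim_persistence(points):
    """(birth, death) intervals of H0 for the Vietoris-Rips filtration.

    Every component is born at scale 0.  Processing edges by increasing
    length, each edge that joins two distinct components kills one of
    them at that length; the final component persists to infinity.
    Ad hoc illustrative sketch, not a library routine.
    """
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(
        (dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    intervals = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj                  # merge: one component dies at d
            intervals.append((0.0, d))
    intervals.append((0.0, float("inf")))    # one component never dies
    return intervals

# Two well-separated clusters on a line: three short bars from
# within-cluster merges, one longer bar, and one infinite bar.
pairs = zero_dim_persistence([(0, 0), (1, 0), (2, 0), (10, 0), (11, 0)])
```

Here the bar dying at 8.0 records the merge of the two clusters, while the three bars dying at 1.0 reflect within-cluster merges; the gap between short and long bars is exactly the persistence-based separation of signal from noise described above.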
In mathematics, particularly in algebraic topology, a taut pair is a topological pair for which the direct limit of the cohomology modules of its open neighborhoods, directed downward by inclusion, is isomorphic to the cohomology module of the pair itself.
The degree-Rips bifiltration is a simplicial filtration used in topological data analysis for analyzing the shape of point cloud data. It is a multiparameter extension of the Vietoris–Rips filtration that possesses greater stability to data outliers than single-parameter filtrations, and which is more amenable to practical computation than other multiparameter constructions. Introduced in 2015 by Lesnick and Wright, the degree-Rips bifiltration is a parameter-free and density-sensitive vehicle for performing persistent homology computations on point cloud data.
The offset filtration is a growing sequence of metric balls used to detect the size and scale of topological features of a data set. The offset filtration commonly arises in persistent homology and the field of topological data analysis. Utilizing a union of balls to approximate the shape of geometric objects was first suggested by Frosini in 1992 in the context of submanifolds of Euclidean space. The construction was independently explored by Robins in 1998, and expanded to considering the collection of offsets indexed over a series of increasing scale parameters, in order to observe the stability of topological features with respect to attractors. Homological persistence as introduced in these papers by Frosini and Robins was subsequently formalized by Edelsbrunner et al. in their seminal 2002 paper Topological Persistence and Simplification. Since then, the offset filtration has become a primary example in the study of computational topology and data analysis.
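In symbols, for a point set X in Euclidean space the offset at scale r is the union of closed balls, and the offsets are nested as r grows, which is what makes them a filtration (a sketch in standard notation):

```latex
% Offset of X \subseteq \mathbb{R}^d at scale r:
O_r(X) \;=\; \bigcup_{x \in X} B(x, r)
       \;=\; \{\, y \in \mathbb{R}^d : d(y, X) \le r \,\},
\qquad
O_r(X) \subseteq O_{r'}(X) \ \text{ for } r \le r'.
```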
The multicover bifiltration is a two-parameter sequence of nested topological spaces derived from the covering of a finite set in a metric space by growing metric balls. It is a multidimensional extension of the offset filtration that captures density information about the underlying data set by filtering the points of the offsets at each index according to how many balls cover each point. The multicover bifiltration has been an object of study within multidimensional persistent homology and topological data analysis.
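The coverage count at the heart of the construction is elementary to compute. The sketch below uses a hypothetical helper, not a library function: it counts how many radius-r balls around the given centers contain a query point, so that fixing a coverage threshold k and growing r traces out one slice of the bifiltration.

```python
def covers(centers, y, r):
    """How many radius-r balls around the given centers contain y.

    Illustrative helper: the multicover bifiltration at parameters
    (r, k) keeps exactly the points covered at least k times.
    """
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return sum(dist(c, y) <= r for c in centers)

centers = [(0, 0), (2, 0)]
deep = covers(centers, (1, 0), 1.5)   # midpoint covered by both balls
```

Increasing r (or decreasing k) can only add points, which gives the inclusions of nested spaces that the bifiltration is built from.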
A persistence module is a mathematical structure in persistent homology and topological data analysis that formally captures the persistence of topological features of an object across a range of scale parameters. A persistence module often consists of a collection of homology groups corresponding to a filtration of topological spaces, and a collection of linear maps induced by the inclusions of the filtration. The concept of a persistence module was first introduced in 2005 as an application of graded modules over polynomial rings, thus importing well-developed algebraic ideas from classical commutative algebra theory to the setting of persistent homology. Since then, persistence modules have been one of the primary algebraic structures studied in the field of applied topology.
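In symbols, a one-parameter persistence module can be sketched as a family of vector spaces with compatible transition maps (standard notation, not specific to any one paper):

```latex
% A persistence module over (\mathbb{R}, \le):
M \;=\; \bigl( \{ M_t \}_{t \in \mathbb{R}},\;
               \{ \varphi_s^t : M_s \to M_t \}_{s \le t} \bigr),
\qquad
\varphi_t^t = \mathrm{id}_{M_t},
\qquad
\varphi_t^u \circ \varphi_s^t = \varphi_s^u \ \ (s \le t \le u).
```

When the M_t are homology groups of a filtration, the maps are those induced by the inclusions, exactly as described above.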
In topological data analysis, a subdivision bifiltration is a collection of filtered simplicial complexes, typically built upon a set of data points in a metric space, that captures shape and density information about the underlying data set. The subdivision bifiltration relies on a natural filtration of the barycentric subdivision of a simplicial complex by flags of minimum dimension, which encodes density information about the metric space upon which the complex is built. The subdivision bifiltration was first introduced by Donald Sheehy in 2011 as part of his doctoral thesis as a discrete model of the multicover bifiltration, a continuous construction whose underlying framework dates back to the 1970s. In particular, Sheehy applied the construction to both the Vietoris–Rips and Čech filtrations, two common objects in the field of topological data analysis. Whereas single-parameter filtrations are not robust with respect to outliers in the data, the subdivision-Rips and subdivision-Čech bifiltrations satisfy several desirable stability properties.
In topological data analysis, the Vietoris–Rips filtration is the collection of nested Vietoris–Rips complexes on a metric space created by taking the sequence of Vietoris–Rips complexes over an increasing scale parameter. Often, the Vietoris–Rips filtration is used to create a discrete, simplicial model on point cloud data embedded in an ambient metric space. The Vietoris–Rips filtration is a multiscale extension of the Vietoris–Rips complex that enables researchers to detect and track the persistence of topological features, over a range of parameters, by way of computing the persistent homology of the entire filtration. It is named after Leopold Vietoris and Eliyahu Rips.
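As a minimal brute-force sketch (ad hoc function names, not a library API), the complex at scale t contains a simplex exactly when all pairwise distances among its vertices are at most t, and the complexes are nested as t grows:

```python
import itertools

def rips_complex(points, t, max_dim=2):
    """Vietoris-Rips complex at scale t, as a set of vertex tuples.

    Illustrative sketch: a simplex is included exactly when every
    pairwise distance among its vertices is at most t.
    """
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    simplices = set()
    for k in range(1, max_dim + 2):   # k vertices form a (k-1)-simplex
        for combo in itertools.combinations(range(len(points)), k):
            if all(dist(points[i], points[j]) <= t
                   for i, j in itertools.combinations(combo, 2)):
                simplices.add(combo)
    return simplices

pts = [(0, 0), (1, 0), (0, 1)]
small = rips_complex(pts, 1.0)   # three vertices and two short edges
large = rips_complex(pts, 2.0)   # all edges plus the filled triangle
nested = small <= large          # the defining property of a filtration
```

Tracking how features appear and disappear across such nested complexes is precisely what computing the persistent homology of the filtration formalizes.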
In topological data analysis, the interleaving distance is a measure of similarity between persistence modules, a common object of study in topological data analysis and persistent homology. The interleaving distance was first introduced by Frédéric Chazal et al. in 2009. Since then, it and its generalizations have been a central consideration in the study of applied algebraic topology and topological data analysis.
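The definition can be sketched as follows in standard notation, with φ and ψ denoting the structure maps of the two modules:

```latex
% An \varepsilon-interleaving between persistence modules M and N is a
% pair of families of linear maps
%   f_t : M_t \to N_{t+\varepsilon}, \qquad g_t : N_t \to M_{t+\varepsilon},
% commuting with the structure maps and satisfying
g_{t+\varepsilon} \circ f_t = \varphi_t^{\,t+2\varepsilon},
\qquad
f_{t+\varepsilon} \circ g_t = \psi_t^{\,t+2\varepsilon}.
% The interleaving distance is then the tightest such shift:
d_I(M, N) \;=\; \inf \{\, \varepsilon \ge 0 :
    M \text{ and } N \text{ are } \varepsilon\text{-interleaved} \,\}.
```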
In persistent homology, a persistent Betti number is a multiscale analog of a Betti number that tracks the number of topological features that persist over multiple scale parameters in a filtration. Whereas the classical Betti number equals the rank of the homology group, the persistent Betti number is the rank of the persistent homology group. The concept of a persistent Betti number was introduced by Herbert Edelsbrunner, David Letscher, and Afra Zomorodian in the 2002 paper Topological Persistence and Simplification, one of the seminal papers in the field of persistent homology and topological data analysis. Applications of the persistent Betti number appear in a variety of fields including data analysis, machine learning, and physics.
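In the notation commonly used for a filtration K_1 ⊆ K_2 ⊆ ⋯ ⊆ K_m, the persistent groups and Betti numbers can be sketched as:

```latex
% The inclusion K_i \hookrightarrow K_j (i \le j) induces a map on
% homology in each degree k, and one sets
H_k^{i,j} \;=\; \operatorname{im}\bigl( H_k(K_i) \to H_k(K_j) \bigr),
\qquad
\beta_k^{i,j} \;=\; \operatorname{rank} H_k^{i,j}.
```

Thus β counts the degree-k features already present at stage i that still survive at stage j.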
In persistent homology, a persistent homology group is a multiscale analog of a homology group that captures information about the evolution of topological features across a filtration of spaces. While the ordinary homology group represents nontrivial homology classes of an individual topological space, the persistent homology group tracks only those classes that remain nontrivial across multiple parameters in the underlying filtration. Analogous to the ordinary Betti number, the ranks of the persistent homology groups are known as the persistent Betti numbers. Persistent homology groups were first introduced by Herbert Edelsbrunner, David Letscher, and Afra Zomorodian in their 2002 paper Topological Persistence and Simplification, one of the foundational papers in the fields of persistent homology and topological data analysis, based largely on the persistence barcodes and the persistence algorithm first described by Serguei Barannikov in 1994. Since then, the study of persistent homology groups has led to applications in data science, machine learning, materials science, biology, and economics.
In topological data analysis, a persistence barcode, sometimes shortened to barcode, is an algebraic invariant associated with a filtered chain complex or a persistence module that characterizes the stability of topological features throughout a growing family of spaces. Formally, a persistence barcode consists of a multiset of intervals in the extended real line, where the length of each interval corresponds to the lifetime of a topological feature in a filtration, usually built on a point cloud, a graph, a function, or, more generally, a simplicial complex or a chain complex. Generally, longer intervals in a barcode correspond to more robust features, whereas shorter intervals are more likely to be noise in the data. A persistence barcode is a complete invariant that captures all the topological information in a filtration. In algebraic topology, persistence barcodes were first introduced by Sergey Barannikov in 1994 as "canonical form" invariants consisting of a multiset of line segments with ends on two parallel lines, and later, in geometry processing, by Gunnar Carlsson et al. in 2004.
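Given a barcode as a multiset of intervals, the Betti number at any scale is simply the number of intervals alive at that scale. The sketch below uses an ad hoc function name and a made-up barcode purely for illustration:

```python
def betti_at(barcode, t):
    """Betti number at scale t: how many intervals are alive at t."""
    return sum(birth <= t < death for birth, death in barcode)

# A made-up H0 barcode for illustration: four components born at 0;
# two die quickly (likely noise), one lives longer, one never dies.
barcode = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.5), (0.0, float("inf"))]
alive_early = betti_at(barcode, 0.5)
```

Reading the Betti number off the barcode this way is what makes the barcode a convenient summary: the long bars survive queries over a wide range of scales, the short bars do not.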
Topological Deep Learning (TDL) is a research field that extends deep learning to handle complex, non-Euclidean data structures. Traditional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel in processing data on regular grids and sequences. However, scientific and real-world data often exhibit more intricate data domains encountered in scientific computations, including point clouds, meshes, time series, scalar fields, graphs, or general topological spaces like simplicial complexes and CW complexes. TDL addresses this by incorporating topological concepts to process data with higher-order relationships, such as interactions among multiple entities and complex hierarchies. This approach leverages structures like simplicial complexes and hypergraphs to capture global dependencies and qualitative spatial properties, offering a more nuanced representation of data. TDL also encompasses methods from computational and algebraic topology that permit studying properties of neural networks and their training process, such as their predictive performance or generalization properties.