| Suffix automaton | |
|---|---|
| Type | Substring index |
| Invented | 1983 |
| Invented by | Anselm Blumer; Janet Blumer; Andrzej Ehrenfeucht; David Haussler; Ross McConnell |
In computer science, a suffix automaton is an efficient data structure for representing the substring index of a given string which allows the storage, processing, and retrieval of compressed information about all its substrings. The suffix automaton of a string is the smallest directed acyclic graph with a dedicated initial vertex and a set of "final" vertices, such that paths from the initial vertex to final vertices represent the suffixes of the string.
In terms of automata theory, a suffix automaton is the minimal partial deterministic finite automaton that recognizes the set of suffixes of a given string $S$. The state graph of a suffix automaton is called a directed acyclic word graph (DAWG), a term that is also sometimes used for any deterministic acyclic finite state automaton.
Suffix automata were introduced in 1983 by a group of scientists from the University of Denver and the University of Colorado Boulder. They suggested a linear-time online algorithm for its construction and showed that the suffix automaton of a string $S$ of length at least two characters has at most $2|S| - 1$ states and (for $|S| \geq 3$) at most $3|S| - 4$ transitions. Further works have shown a close connection between suffix automata and suffix trees, and have outlined several generalizations of suffix automata, such as the compacted suffix automaton obtained by compression of nodes with a single outgoing arc.
Suffix automata provide efficient solutions to problems such as substring search and computation of the longest common substring of two or more strings.
The concept of the suffix automaton was introduced in 1983 [1] by a group of scientists from the University of Denver and the University of Colorado Boulder consisting of Anselm Blumer, Janet Blumer, Andrzej Ehrenfeucht, David Haussler and Ross McConnell, although similar concepts had earlier been studied alongside suffix trees in the works of Peter Weiner, [2] Vaughan Pratt [3] and Anatol Slissenko. [4] In their initial work, Blumer et al. showed that a suffix automaton built for a string $S$ of length greater than two has at most $2|S| - 1$ states and at most $3|S| - 4$ transitions, and suggested a linear algorithm for automaton construction. [5]
In 1983, Mu-Tian Chen and Joel Seiferas independently showed that Weiner's 1973 suffix-tree construction algorithm, [2] while building a suffix tree of the string $S$, constructs a suffix automaton of the reversed string $\overline{S}$ as an auxiliary structure. [6] In 1987, Blumer et al. applied the compressing technique used in suffix trees to a suffix automaton and invented the compacted suffix automaton, which is also called the compacted directed acyclic word graph (CDAWG). [7] In 1997, Maxime Crochemore and Renaud Vérin developed a linear algorithm for direct CDAWG construction. [1] In 2001, Shunsuke Inenaga et al. developed an algorithm for construction of the CDAWG for a set of words given by a trie. [8]
Usually when speaking about suffix automata and related concepts, some notions from formal language theory and automata theory are used, in particular: [9]

- an "alphabet" is a finite set $\Sigma$ of "characters";
- a "word" is a finite sequence of characters $\omega = \omega_1 \omega_2 \dots \omega_n$; its "length" is denoted by $|\omega|$;
- the "concatenation" of words $\alpha$ and $\beta$, denoted $\alpha \cdot \beta$ or simply $\alpha\beta$, is the word obtained by writing $\beta$ after $\alpha$;
- a word $\beta$ is a "prefix", "suffix", or "substring" of $\omega$ if $\omega = \beta\gamma$, $\omega = \gamma\beta$, or $\omega = \gamma_1 \beta \gamma_2$ respectively, for some words $\gamma, \gamma_1, \gamma_2$.
Formally, a deterministic finite automaton is determined by the 5-tuple $(\Sigma, Q, q_0, F, \delta)$, where: [10]

- $\Sigma$ is the "alphabet" the automaton works with,
- $Q$ is the set of automaton "states",
- $q_0 \in Q$ is the "initial" state of the automaton,
- $F \subseteq Q$ is the set of "final" states,
- $\delta : Q \times \Sigma \to Q$ is the (possibly partial) "transition" function.
Most commonly, a deterministic finite automaton is represented as a directed graph ("diagram") such that: [10]

- vertices of the graph correspond to the states of the automaton,
- the vertex corresponding to the initial state $q_0$ is specially marked,
- vertices corresponding to final states are marked as well (for example, with a double outline),
- there is an arc from the vertex of $q_1$ to the vertex of $q_2$ marked with the character $x$ whenever $\delta(q_1, x) = q_2$.
In terms of its diagram, the automaton recognizes the word $\omega$ only if there is a path from the initial vertex to some final vertex such that the concatenation of the characters along this path forms $\omega$. The set of words recognized by an automaton forms a language that is said to be recognized by the automaton. In these terms, the language recognized by the suffix automaton of $S$ is the language of its (possibly empty) suffixes. [9]
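As an illustration of the diagram view, the following minimal sketch (Python; the transition table and state numbering are illustrative, not from the source) simulates a partial deterministic finite automaton, here the three-state automaton recognizing the suffixes of the word $ab$:

```python
# A toy partial DFA given by its diagram: the minimal automaton
# recognizing the suffixes of "ab" (the words "ab", "b" and the empty word).
transitions = {
    (0, "a"): 1,  # state 0 is the initial vertex
    (0, "b"): 2,
    (1, "b"): 2,
}
final_states = {0, 2}  # state 0 is final so that the empty suffix is accepted

def recognizes(word: str) -> bool:
    """Accept iff the word spells a path from the initial vertex to a final one."""
    state = 0
    for ch in word:
        if (state, ch) not in transitions:  # partial automaton: a missing arc rejects
            return False
        state = transitions[(state, ch)]
    return state in final_states

assert recognizes("ab") and recognizes("b") and recognizes("")
assert not recognizes("a") and not recognizes("ba")
```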
"Right context" of the word with respect to language is a set that is a set of words such that their concatenation with forms a word from . Right contexts induce a natural equivalence relation on the set of all words. If language is recognized by some deterministic finite automaton, there exists unique up to isomorphism automaton that recognizes the same language and has the minimum possible number of states. Such an automaton is called a minimal automaton for the given language . Myhill–Nerode theorem allows it to define it explicitly in terms of right contexts: [11] [12]
Theorem — The minimal automaton recognizing a language $L$ over the alphabet $\Sigma$ may be explicitly defined in the following way:

- the alphabet $\Sigma$ stays the same,
- states correspond to the right contexts $[\omega]_R$ of all possible words $\omega$,
- the initial state corresponds to the right context of the empty word, $[\varepsilon]_R$,
- final states correspond to the right contexts $[\omega]_R$ of words $\omega \in L$,
- transitions are given by $[\omega]_R \xrightarrow{x} [\omega x]_R$ for every character $x \in \Sigma$.
In these terms, a "suffix automaton" is the minimal deterministic finite automaton recognizing the language of suffixes of a word $S = s_1 s_2 \dots s_n$. The right context of a word $\omega$ with respect to this language consists of words $\alpha$ such that $\omega\alpha$ is a suffix of $S$. This allows one to formulate the following lemma, defining a bijection between the right context of the word $\omega$ and the set of right positions of its occurrences in $S$: [13] [14]
Theorem — Let $endpos(\omega) = \{r : \omega = s_{l+1} s_{l+2} \dots s_r\}$ be the set of right positions of occurrences of $\omega$ in $S$.

There is the following bijection between $endpos(\omega)$ and $[\omega]_R$: $\alpha \in [\omega]_R$ if and only if $|S| - |\alpha| \in endpos(\omega)$.
For example, for the word $S = abacaba$ and its subword $\omega = ab$, it holds $endpos(ab) = \{2, 6\}$ and $[ab]_R = \{acaba, a\}$. Informally, $[ab]_R$ is formed by the words that follow occurrences of $ab$ to the end of $S$, and $endpos(ab)$ is formed by the right positions of those occurrences. In this example, the element $2 \in endpos(ab)$ corresponds with the word $acaba \in [ab]_R$ while the word $a \in [ab]_R$ corresponds with the element $6 \in endpos(ab)$.
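The bijection from the lemma is easy to verify by brute force. A minimal sketch (illustrative Python, not from the source) checking the example above:

```python
# Brute-force check of the bijection between [w]_R and endpos(w) for S = "abacaba".
S, w = "abacaba", "ab"

# endpos: 1-based right end positions of the occurrences of w in S.
endpos = {i + len(w) for i in range(len(S)) if S.startswith(w, i)}

# Right context: the words alpha such that w + alpha is a suffix of S.
suffixes = {S[i:] for i in range(len(S) + 1)}
right_context = {s[len(w):] for s in suffixes if s.startswith(w)}

assert endpos == {2, 6}
assert right_context == {"acaba", "a"}
# Each alpha in [w]_R corresponds to the right position |S| - |alpha|.
assert {len(S) - len(a) for a in right_context} == endpos
```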
This implies several structural properties of suffix automaton states. In particular, if $[\alpha]_R \cap [\beta]_R \neq \varnothing$ and $|\alpha| \leq |\beta|$, then $\alpha$ is a suffix of $\beta$ and $[\beta]_R \subseteq [\alpha]_R$; if, moreover, $[\alpha]_R = [\beta]_R$, then $\alpha$ occurs in $S$ only as a suffix of $\beta$. [14]
Consequently, any state of the suffix automaton recognizes some continuous chain of nested suffixes of the longest word recognized by this state. [14]
"Left extension" of the string is the longest string that has the same right context as . Length of the longest string recognized by is denoted by . It holds: [15]
Theorem — The left extension of $\gamma$ may be represented as $\overleftarrow{\gamma} = \beta\gamma$, where $\beta$ is the longest word such that any occurrence of $\gamma$ in $S$ is preceded by $\beta$.
"Suffix link" of the state is the pointer to the state that contains the largest suffix of that is not recognized by .
In these terms, it can be said that $q$ recognizes exactly all suffixes of $\overleftarrow{\gamma}$ that are longer than $len(link(q))$ and not longer than $len(q)$. [15]
A "prefix tree" (or "trie") is a rooted directed tree in which arcs are marked by characters in such a way no vertex of such tree has two out-going arcs marked with the same character. Some vertices in trie are marked as final. Trie is said to recognize a set of words defined by paths from its root to final vertices. In this way prefix trees are a special kind of deterministic finite automata if you perceive its root as an initial vertex. [16] The "suffix trie" of the word is a prefix tree recognizing a set of its suffixes. "A suffix tree" is a tree obtained from a suffix trie via the compaction procedure, during which consequent edges are merged if the degree of the vertex between them is equal to two. [15]
By its definition, a suffix automaton can be obtained via minimization of the suffix trie. It may be shown that a compacted suffix automaton is obtained both by minimization of the suffix tree (if one assumes each string on an edge of the suffix tree is a solid character from the alphabet) and by compaction of the suffix automaton. [17] Besides this connection between the suffix tree and the suffix automaton of the same string, there is also a connection between the suffix automaton of the string $S$ and the suffix tree of the reversed string $\overline{S}$. [18]
Similarly to right contexts, one may introduce "left contexts" $[\omega]_L = \{\beta : \beta\omega \in L\}$ and "right extensions" $\overrightarrow{\omega}$ corresponding to the longest string having the same left context as $\omega$, together with the induced equivalence relation $[\alpha]_L = [\beta]_L$. If one considers right extensions with respect to the language of "prefixes" of the string $S$, it may be obtained: [15]
Theorem — The suffix tree of the string $S$ may be defined explicitly in the following way:

- vertices of the tree correspond to the right extensions $\overrightarrow{\omega}$ of all substrings $\omega$ of $S$,
- edges correspond to triplets $(\overrightarrow{\omega}, x\alpha, \overrightarrow{\omega x})$ such that $\overrightarrow{\omega x} = \overrightarrow{\omega} x \alpha$.

Here the triplet $(v_1, \omega, v_2)$ means there is an edge from $v_1$ to $v_2$ with the string $\omega$ written on it. An analogous statement holds for left extensions, which implies that the suffix link tree of the string $S$ and the suffix tree of the string $\overline{S}$ are isomorphic: [18]
[Figure: Suffix structures of the words "abbcbc" and "cbcbba".]
Similarly to the case of left extensions, the following lemma holds for right extensions: [15]
Theorem — The right extension of the string $\gamma$ may be represented as $\overrightarrow{\gamma} = \gamma\alpha$, where $\alpha$ is the longest word such that every occurrence of $\gamma$ in $S$ is succeeded by $\alpha$.
A suffix automaton of the string $S$ of length $n > 1$ has at most $2n - 1$ states and, for $n > 2$, at most $3n - 4$ transitions. These bounds are reached on the strings $abb \dots bb$ and $abb \dots bc$ correspondingly. [13] This may be formulated in a stricter way as $\delta \leq q + n - 2$, where $\delta$ and $q$ are the numbers of transitions and states in the automaton correspondingly. [14]
[Figure: Maximal suffix automata.]
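The state bound can be cross-checked for small lengths by brute force: by the Myhill–Nerode characterization above, the number of states equals the number of distinct right contexts of substrings. A minimal sketch (illustrative Python, not from the source):

```python
# Count the states of the suffix automaton of S as the number of distinct
# right contexts of its substrings (Myhill-Nerode characterization).
def num_states(S: str) -> int:
    substrings = {S[i:j] for i in range(len(S) + 1) for j in range(i, len(S) + 1)}
    suffixes = [S[i:] for i in range(len(S) + 1)]
    def right_context(w: str) -> frozenset:
        return frozenset(s[len(w):] for s in suffixes if s.startswith(w))
    return len({right_context(w) for w in substrings})

# The bound 2n - 1 on the number of states is attained on strings abb...bb:
for n in range(3, 9):
    assert num_states("a" + "b" * (n - 1)) == 2 * n - 1
```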
Initially the automaton only consists of a single state corresponding to the empty word, then characters of the string are added one by one and the automaton is rebuilt on each step incrementally. [19]
After a new character is appended to the string, some equivalence classes are altered. Let $[\alpha]_{R_\omega}$ be the right context of $\alpha$ with respect to the language of suffixes of $\omega$. Then the transition from $[\alpha]_{R_\omega}$ to $[\alpha]_{R_{\omega x}}$ after $x$ is appended to $\omega$ is defined by the lemma: [14]
Theorem — Let $\omega$ and $\alpha$ be some words over $\Sigma$ and $x$ be some character from this alphabet. Then there is the following correspondence between $[\alpha]_{R_\omega}$ and $[\alpha]_{R_{\omega x}}$:

- $[\alpha]_{R_{\omega x}} = [\alpha]_{R_\omega} x \cup \{\varepsilon\}$ if $\alpha$ is a suffix of $\omega x$;
- $[\alpha]_{R_{\omega x}} = [\alpha]_{R_\omega} x$ otherwise.
After adding $x$ to the current word $\omega$, the right context of $\alpha$ may change significantly only if $\alpha$ is a suffix of $\omega x$. This implies that the equivalence relation $\equiv_{R_{\omega x}}$ is a refinement of $\equiv_{R_\omega}$: in other words, if $[\alpha]_{R_{\omega x}} = [\beta]_{R_{\omega x}}$, then $[\alpha]_{R_\omega} = [\beta]_{R_\omega}$. After the addition of a new character, at most two equivalence classes of $\equiv_{R_\omega}$ will be split, and each of them may split into at most two new classes. First, the equivalence class corresponding to the empty right context is always split into two equivalence classes, one of them corresponding to $\omega x$ itself and having $\{\varepsilon\}$ as a right context. This new equivalence class contains exactly $\omega x$ and all its suffixes that did not occur in $\omega$, as the right context of such words was empty before and contains only the empty word now. [14]
Given the correspondence between states of the suffix automaton and vertices of the suffix tree, it is possible to find out the second state that may possibly split after a new character is appended. The transition from $\omega$ to $\omega x$ corresponds to the transition from $\overline{\omega}$ to $\overline{\omega x}$ in the reversed string. In terms of suffix trees, it corresponds to the insertion of the new longest suffix $\overline{\omega x}$ into the suffix tree of $\overline{\omega}$. At most two new vertices may be formed after this insertion: one of them corresponding to $\overline{\omega x}$, while the other one corresponds to its direct ancestor if there was a branching. Returning to suffix automata, it means the first new state recognizes $\omega x$ and the second one (if there is a second new state) is its suffix link. It may be stated as a lemma: [14]
Theorem — Let $\omega \in \Sigma^*$ and $x \in \Sigma$ be some word and character over $\Sigma$. Also let $\alpha$ be the longest suffix of $\omega x$ which occurs in $\omega$, and let $\beta = \overleftarrow{\alpha}$. Then for any substrings $u, v$ of $\omega$ it holds:

- if $u \equiv_{R_\omega} v$ and $u \not\equiv_{R_\omega} \alpha$, then $u \equiv_{R_{\omega x}} v$;
- if $u \equiv_{R_\omega} \alpha$ and $|u| \leq |\alpha|$, then $u \equiv_{R_{\omega x}} \alpha$;
- if $u \equiv_{R_\omega} \alpha$ and $|u| > |\alpha|$, then $u \equiv_{R_{\omega x}} \beta$.
This implies that if $\beta = \alpha$ (for example, when $x$ did not occur in $\omega$ at all and $\alpha = \varepsilon$), then only the equivalence class corresponding to the empty right context is split. [14]
Besides suffix links, it is also needed to define the final states of the automaton. It follows from the structural properties that all suffixes of a word $\alpha$ recognized by a state $q$ are recognized by some state on the suffix link path of $q$. Namely, suffixes with length greater than $len(link(q))$ lie in $q$, suffixes with length greater than $len(link(link(q)))$ but not greater than $len(link(q))$ lie in $link(q)$, and so on. Thus, if the state recognizing $\omega$ is denoted by $last$, then all final states (that is, states recognizing suffixes of $\omega$) form the sequence $last, link(last), link(link(last)), \dots$ [19]
After the character $x$ is appended to $\omega$, the possible new states of the suffix automaton are $[\omega x]_{R_{\omega x}}$ and $[\alpha]_{R_{\omega x}}$. The suffix link from $[\omega x]_{R_{\omega x}}$ goes to $[\alpha]_{R_{\omega x}}$ and from $[\alpha]_{R_{\omega x}}$ it goes to $link([\alpha]_{R_\omega})$. Words from $[\omega x]_{R_{\omega x}}$ occur in $\omega x$ only as its suffixes, therefore there should be no transitions at all from $[\omega x]_{R_{\omega x}}$, while transitions to it should go from suffixes of $\omega$ having length at least $|\alpha|$ and be marked with the character $x$. The state $[\alpha]_{R_{\omega x}}$ is formed by a subset of $[\alpha]_{R_\omega}$, thus transitions from $[\alpha]_{R_{\omega x}}$ should be the same as from $[\alpha]_{R_\omega}$. Meanwhile, transitions leading to $[\alpha]_{R_{\omega x}}$ should go from those suffixes of $\omega$ shorter than $\alpha$ whose transitions by $x$ led to $[\alpha]_{R_\omega}$ before, as such transitions correspond to the seceded part of this state. States corresponding to these suffixes may be determined via traversal of the suffix link path of the state recognizing $\omega$. [19]
The theoretical results above lead to the following algorithm that takes a character $x$ and rebuilds the suffix automaton of $\omega$ into the suffix automaton of $\omega x$. [19]
The whole procedure is described by the following pseudo-code: [19]
```
function add_letter(x):
    define p = last
    assign last = new_state()
    assign len(last) = len(p) + 1
    while δ(p, x) is undefined:
        assign δ(p, x) = last, p = link(p)
    define q = δ(p, x)
    if q = last:
        assign link(last) = q0
    else if len(q) = len(p) + 1:
        assign link(last) = q
    else:
        define cl = new_state()
        assign len(cl) = len(p) + 1
        assign δ(cl) = δ(q), link(cl) = link(q)
        assign link(last) = link(q) = cl
        while δ(p, x) = q:
            assign δ(p, x) = cl, p = link(p)
```
Here $q_0$ is the initial state of the automaton and $new\_state()$ is a function creating a new state for it. It is assumed that $last$, $len$, $link$ and $\delta$ are stored as global variables, and that $link(q_0) = q_0$, so that the first cycle eventually terminates in $q_0$. [19]
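For concreteness, a short Python sketch of the same procedure follows (the class and method names are illustrative, not from the source). It replaces the $link(q_0) = q_0$ convention with an explicit sentinel value $-1$ for the suffix link of the initial state:

```python
class SuffixAutomaton:
    """Online suffix automaton; states are integers and state 0 is initial."""

    def __init__(self):
        self.len_ = [0]    # len: length of the longest word of each state
        self.link = [-1]   # suffix links; -1 is a sentinel for the initial state
        self.delta = [{}]  # transition function: one dict per state
        self.last = 0      # the state recognizing the whole word read so far

    def _new_state(self) -> int:
        self.len_.append(0)
        self.link.append(-1)
        self.delta.append({})
        return len(self.len_) - 1

    def add_letter(self, x: str) -> None:
        p = self.last
        cur = self.last = self._new_state()
        self.len_[cur] = self.len_[p] + 1
        # First cycle: add transitions by x into the new state along the suffix path.
        while p != -1 and x not in self.delta[p]:
            self.delta[p][x] = cur
            p = self.link[p]
        if p == -1:                           # x never occurred in the word before
            self.link[cur] = 0
            return
        q = self.delta[p][x]
        if self.len_[q] == self.len_[p] + 1:  # the class of alpha need not split
            self.link[cur] = q
            return
        cl = self._new_state()                # clone q: the class of alpha splits
        self.len_[cl] = self.len_[p] + 1
        self.link[cl] = self.link[q]
        self.delta[cl] = dict(self.delta[q])
        self.link[q] = self.link[cur] = cl
        # Second cycle: redirect transitions of the seceded part into the clone.
        while p != -1 and self.delta[p].get(x) == q:
            self.delta[p][x] = cl
            p = self.link[p]

sa = SuffixAutomaton()
for c in "abcbc":
    sa.add_letter(c)
# Final states are last, link(last), link(link(last)), ... up to the initial state.
```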
The complexity of the algorithm may vary depending on the underlying structure used to store the transitions of the automaton. It may be implemented in $O(n \log |\Sigma|)$ with $O(n)$ memory overhead, or in $O(n)$ with $O(n |\Sigma|)$ memory overhead, if one assumes that memory allocation is done in $O(1)$. To obtain such complexity, one has to use the methods of amortized analysis. The value of $len(p)$ strictly decreases with each iteration of the first cycle, while it may only increase by as much as one after the first iteration of the cycle on the next add_letter call. The overall value of $len(p)$ never exceeds $n$, and it is only increased by one between iterations of appending new letters, which suggests that the total number of iterations, and hence the total complexity, is at most linear as well. The linearity of the second cycle is shown in a similar way. [19]
The suffix automaton is closely related to other suffix structures and substring indices. Given the suffix automaton of a specific string, one may construct its suffix tree via compacting and recursive traversal in linear time. [20] Similar transforms are possible in both directions to switch between the suffix automaton of $S$ and the suffix tree of the reversed string $\overline{S}$. [18] Other than this, several generalizations were developed to construct an automaton for a set of strings given by a trie, [8] a compacted suffix automaton (CDAWG), [7] to maintain the structure of the automaton on a sliding window, [21] and to construct it in a bidirectional way, supporting the insertion of characters to both the beginning and the end of the string. [22]
As was already mentioned above, a compacted suffix automaton is obtained via both compaction of a regular suffix automaton (by removing states which are non-final and have exactly one outgoing arc) and minimization of a suffix tree. Similarly to the regular suffix automaton, states of the compacted suffix automaton may be defined in an explicit manner. The "two-way extension" $\overleftrightarrow{\gamma}$ of a word $\gamma$ is the longest word $\omega = \beta\gamma\alpha$ such that every occurrence of $\gamma$ in $S$ is preceded by $\beta$ and succeeded by $\alpha$. In terms of left and right extensions, it means that the two-way extension is the left extension of the right extension or, which is equivalent, the right extension of the left extension, that is $\overleftrightarrow{\gamma} = \overleftarrow{\overrightarrow{\gamma}} = \overrightarrow{\overleftarrow{\gamma}}$. In terms of two-way extensions, the compacted automaton is defined as follows: [15]
Theorem — The compacted suffix automaton of the word $S$ is defined by a pair $(V, E)$, where:

- $V$ is the set of two-way extensions $\overleftrightarrow{\omega}$ of all substrings $\omega$ of $S$,
- $E$ is the set of edges given by triplets $(\overleftrightarrow{\omega}, x\alpha, \overleftrightarrow{\omega x})$ such that $\overrightarrow{\omega x} = \overrightarrow{\omega} x \alpha$.
Two-way extensions induce an equivalence relation, $\overleftrightarrow{\alpha} = \overleftrightarrow{\beta}$, which defines the set of words recognized by the same state of the compacted automaton. This equivalence relation is the transitive closure of the relation defined by $(\overrightarrow{\alpha} = \overrightarrow{\beta}) \vee (\overleftarrow{\alpha} = \overleftarrow{\beta})$, which highlights the fact that a compacted automaton may be obtained by both gluing suffix tree vertices equivalent via the relation $\overleftarrow{\alpha} = \overleftarrow{\beta}$ (minimization of the suffix tree) and gluing suffix automaton states equivalent via the relation $\overrightarrow{\alpha} = \overrightarrow{\beta}$ (compaction of the suffix automaton). [23] If words $\alpha$ and $\beta$ have the same right extensions, and words $\beta$ and $\gamma$ have the same left extensions, then cumulatively all strings $\alpha$, $\beta$ and $\gamma$ have the same two-way extensions. At the same time, it may happen that neither the left nor the right extensions of $\alpha$ and $\gamma$ coincide. As an example one may take $S = ab$, $\alpha = a$ and $\gamma = b$ (linked through $\beta = ab$), for which left and right extensions are as follows: $\overleftrightarrow{a} = \overleftrightarrow{b} = ab$, but $\overleftarrow{a} = a \neq ab = \overleftarrow{b}$ and $\overrightarrow{a} = ab \neq b = \overrightarrow{b}$. That being said, while equivalence relations of one-way extensions were formed by some continuous chain of nested prefixes or suffixes, bidirectional extension equivalence relations are more complex, and the only thing one may conclude for sure is that strings with the same two-way extension are substrings of the longest string having that two-way extension; it may even happen that they do not have any non-empty substring in common. The total number of equivalence classes for this relation does not exceed $n + 1$, which implies that the compacted suffix automaton of a string having length $n$ has at most $n + 1$ states. The number of transitions in such an automaton is at most $2n - 2$. [15]
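The extensions in this example can be verified by brute force directly from the definitions of left and right contexts. A minimal sketch (illustrative Python, not from the source):

```python
# Brute-force left and right extensions for S = "ab", confirming the example above.
S = "ab"
substrings = {S[i:j] for i in range(len(S)) for j in range(i + 1, len(S) + 1)}

def left_context(w: str) -> frozenset:   # prefixes of S preceding occurrences of w
    return frozenset(S[:i] for i in range(len(S)) if S.startswith(w, i))

def right_context(w: str) -> frozenset:  # suffixes of S following occurrences of w
    return frozenset(S[i + len(w):] for i in range(len(S)) if S.startswith(w, i))

def left_ext(w: str) -> str:   # longest substring with the same right context as w
    return max((v for v in substrings if right_context(v) == right_context(w)), key=len)

def right_ext(w: str) -> str:  # longest substring with the same left context as w
    return max((v for v in substrings if left_context(v) == left_context(w)), key=len)

# The two-way extensions of "a" and "b" coincide...
assert left_ext(right_ext("a")) == right_ext(left_ext("b")) == "ab"
# ...although neither their right nor their left extensions do.
assert (right_ext("a"), right_ext("b")) == ("ab", "b")
assert (left_ext("a"), left_ext("b")) == ("a", "ab")
```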
Consider a set of words $T = \{\omega_1, \omega_2, \dots, \omega_k\}$. It is possible to construct a generalization of the suffix automaton that would recognize the language formed by the suffixes of all words from the set. Constraints for the number of states and transitions in such an automaton would stay the same as for a single-word automaton if one puts $n = |\omega_1| + |\omega_2| + \dots + |\omega_k|$. [23] The algorithm is similar to the construction of the single-word automaton, except that instead of the $last$ state, the function add_letter would work with the state corresponding to the current word $\omega$, assuming the transition from the set of words $\{\omega_1, \dots, \omega_{i-1}, \omega\}$ to the set $\{\omega_1, \dots, \omega_{i-1}, \omega x\}$, where $\omega$ runs over the prefixes of $\omega_i$. [24] [25]
This idea is further generalized to the case when $T$ is not given explicitly but instead is given by a prefix tree with $Q$ vertices. Mohri et al. showed such an automaton would have at most $2Q - 2$ states and may be constructed in time linear in its size. At the same time, the number of transitions in such an automaton may reach $O(Q |\Sigma|)$: there are sets of words for which the total length of the words and the number of vertices in the corresponding suffix trie are both $\Theta(Q)$, while the corresponding suffix automaton is formed of $\Theta(Q)$ states and $\Theta(Q |\Sigma|)$ transitions. The algorithm suggested by Mohri mainly repeats the generic algorithm for building the automaton of several strings, but instead of growing words one by one, it traverses the trie in breadth-first search order and appends new characters as it meets them in the traversal, which guarantees amortized linear complexity. [26]
Some compression algorithms, such as LZ77 and RLE, may benefit from storing a suffix automaton or similar structure not for the whole string but for only its last $k$ characters while the string is updated. This is because the data being compressed is usually large, and using $O(n)$ memory is undesirable. In 1985, Janet Blumer developed an algorithm to maintain a suffix automaton on a sliding window of size $k$ in $O(nk)$ worst-case time and $O(n \log k)$ on average, assuming characters are distributed independently and uniformly. She also showed that this complexity cannot be improved: if one considers words construed as a concatenation of several words, the number of states for the window of size $k$ would frequently change with jumps of order $k$, which renders even a theoretical improvement to the $O(n)$ complexity of regular suffix automata impossible. [27]
The same should be true for the suffix tree, because its vertices correspond to states of the suffix automaton of the reversed string, but this problem may be resolved by not explicitly storing every vertex corresponding to a suffix of the whole string, thus only storing vertices with at least two outgoing edges. A variation of McCreight's suffix tree construction algorithm for this task was suggested in 1989 by Edward Fiala and Daniel Greene; [28] several years later a similar result was obtained with a variation of Ukkonen's algorithm by Jesper Larsson. [29] [30] The existence of such an algorithm for the compacted suffix automaton, which absorbs some properties of both suffix trees and suffix automata, was an open question for a long time, until it was discovered by Martin Senft and Tomasz Dvorak in 2008 that it is impossible if the alphabet's size is at least two. [31]
One way to overcome this obstacle is to allow the window width to vary a bit while staying $O(k)$. It may be achieved by an approximate algorithm suggested by Inenaga et al. in 2004. The window for which the suffix automaton is built in this algorithm is not guaranteed to be of length exactly $k$, but it is guaranteed to be at least $k$ and at most a constant multiple of $k$, while providing linear overall complexity of the algorithm. [32]
The suffix automaton of the string $S$ may be used to solve such problems as: [33] [34]

- counting the number of distinct substrings of $S$ in $O(|S|)$ on-line,
- finding the longest substring of $S$ occurring at least twice in $O(|S|)$,
- finding the longest common substring of $S$ and $T$ in $O(|T|)$,
- counting the number of occurrences of $T$ in $S$ in $O(|T|)$,
- finding all occurrences of $T$ in $S$ in $O(|T| + k)$, where $k$ is the number of occurrences.
It is assumed here that $T$ is given on the input after the suffix automaton of $S$ is constructed. [33] Two of these applications are sketched below.
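The sketch builds on the SuffixAutomaton class from the construction section above (illustrative Python, not from the source):

```python
def count_distinct_substrings(sa: SuffixAutomaton) -> int:
    # Every non-empty substring is recognized by exactly one state, and a state v
    # recognizes exactly len(v) - len(link(v)) nested suffixes of its longest word.
    return sum(sa.len_[v] - sa.len_[sa.link[v]] for v in range(1, len(sa.len_)))

def occurs(sa: SuffixAutomaton, t: str) -> bool:
    # T occurs in S iff T labels a path from the initial state: O(|T|) time.
    state = 0
    for ch in t:
        if ch not in sa.delta[state]:
            return False
        state = sa.delta[state][ch]
    return True

sa = SuffixAutomaton()
for c in "abacaba":
    sa.add_letter(c)
assert occurs(sa, "acab") and not occurs(sa, "abab")
assert count_distinct_substrings(sa) == 21  # "abacaba" has 21 distinct substrings
```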
Suffix automata are also used in data compression, [35] music retrieval [36] [37] and matching on genome sequences. [38]