In symbolic dynamics and related branches of mathematics, a shift space or subshift is a set of infinite words that represent the evolution of a discrete system. In fact, shift spaces and symbolic dynamical systems are often considered synonyms. The most widely studied shift spaces are the subshifts of finite type and the sofic shifts.
In the classical framework [1] a shift space is any subset Λ of A^ℕ, where A is a finite set, which is closed for the Tychonoff topology and invariant by translations. More generally, one can define a shift space as a closed and translation-invariant subset of A^G, where A is any non-empty set and G is any monoid. [2] [3]
Let G be a monoid, and given g, h ∈ G, denote the operation of g with h by the product gh. Let 1 denote the identity of G. Consider a non-empty set A (an alphabet) with the discrete topology, and define A^G as the set of all patterns over A indexed by G. For x = (x_g)_{g∈G} ∈ A^G and a subset N ⊂ G, we denote the restriction of x to the indices of N as x_N := (x_n)_{n∈N}.
On A^G, we consider the prodiscrete topology, which makes A^G a Hausdorff and totally disconnected topological space. In the case of A being finite, it follows that A^G is compact. However, if A is not finite, then A^G is not even locally compact.
This topology is metrizable if and only if G is countable and, in any case, the base of this topology consists of a collection of open/closed sets (called cylinders), defined as follows: given a finite set of indices N ⊂ G, and for each n ∈ N a symbol a_n ∈ A, the cylinder given by N and (a_n)_{n∈N} is the set

[(a_n)_{n∈N}] := {x ∈ A^G : x_n = a_n for all n ∈ N}.

When N = {g}, we denote the cylinder fixing the symbol a at the entry indexed by g simply as [a]_g.
In other words, a cylinder [(a_n)_{n∈N}] is the set of all infinite patterns of A^G which contain the finite pattern (a_n)_{n∈N}.
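As an illustration, cylinder membership can be sketched in Python for the case G = ℕ (an illustrative sketch, not from the source: a point of A^ℕ is modeled as a function from indices to symbols, and the finite index set N with its symbols as a dictionary):

```python
# Sketch: membership in a cylinder of A^N for the monoid G = N.
# A point x is modeled as a function from indices to symbols;
# `fixed` is a dict {n: a_n} over a finite index set N.

def in_cylinder(x, fixed):
    """True iff x_n = a_n for every index n in the finite set N."""
    return all(x(n) == a for n, a in fixed.items())

# Example: the periodic point 0 1 0 1 0 1 ...
x = lambda n: n % 2
print(in_cylinder(x, {0: 0, 3: 1}))   # True: x_0 = 0 and x_3 = 1
print(in_cylinder(x, {2: 1}))         # False: x_2 = 0
```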
Given g ∈ G, the g-shift map on A^G is denoted by σ^g and defined as

σ^g(x) := (x_{hg})_{h∈G}, for each x = (x_h)_{h∈G} ∈ A^G.
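For G = ℕ with addition, the g-shift map simply discards the first g entries of a sequence. A minimal Python sketch (illustrative names, not from the source; points are again modeled as functions from indices to symbols):

```python
# Sketch: the g-shift map sigma^g for G = (N, +):
# sigma^g(x) has entry x_{h+g} at index h.

def shift(x, g):
    return lambda h: x(h + g)

x = lambda n: n % 2          # the point 0 1 0 1 0 1 ...
y = shift(x, 1)              # y_h = x_{h+1}
print([y(h) for h in range(6)])   # [1, 0, 1, 0, 1, 0]
```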
A shift space over the alphabet A is a set Λ ⊆ A^G that is closed under the topology of A^G and invariant under translations, i.e., σ^g(Λ) ⊆ Λ for all g ∈ G. [note 1] We consider in the shift space Λ the topology induced from A^G, which has as basic open sets the cylinders Λ ∩ [(a_n)_{n∈N}].
For each finite N ⊂ G, let A^N denote the set of all patterns over A indexed by N. An equivalent way to define a shift space is to take a set of forbidden patterns F, each belonging to A^N for some finite N ⊂ G, and define a shift space as the set

X_F := {x ∈ A^G : (σ^g(x))_N ∉ F for all g ∈ G and all finite N ⊂ G}.

Intuitively, a shift space X_F is the set of all infinite patterns that do not contain any forbidden finite pattern of F.
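The finite-window analogue of this condition is easy to test in code: a finite word is compatible with X_F exactly when no forbidden word occurs as a factor of it. A Python sketch (illustrative only, for G = ℕ and forbidden words on blocks of consecutive indices; infinite patterns can only be inspected through finite windows):

```python
# Sketch: checking that a finite word avoids every forbidden factor,
# the finite-window analogue of membership in X_F.

def avoids(word, forbidden):
    """True iff no element of `forbidden` occurs as a factor of `word`."""
    return not any(
        word[i:i + len(f)] == f
        for f in forbidden
        for i in range(len(word) - len(f) + 1)
    )

print(avoids("010010", ["11"]))   # True  (no occurrence of 11)
print(avoids("0110",   ["11"]))   # False (11 occurs at position 1)
```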
Given a shift space Λ ⊆ A^G and a finite set of indices N ⊂ G, let W_∅(Λ) := {ε}, where ε stands for the empty word, and for N ≠ ∅ let W_N(Λ) be the set of all finite configurations of A^N that appear in some element of Λ, i.e.,

W_N(Λ) := {w ∈ A^N : ∃ x ∈ Λ such that x_N = w}.
Note that, since Λ is a shift space, if M ⊂ G is a translation of N ⊂ G, i.e., M = Ng for some g ∈ G, then w ∈ W_N(Λ) if and only if there exists u ∈ W_M(Λ) such that w_n = u_{ng} for each n ∈ N. In other words, W_N(Λ) and W_M(Λ) contain the same configurations modulo translation. We will call the set

W(Λ) := ∪_{N⊂G, N finite} W_N(Λ)

the language of Λ. In the general context stated here, the language of a shift space does not have the same meaning as in formal language theory; but in the classical framework, which considers the alphabet A to be finite and G to be ℕ or ℤ with the usual addition, the language of a shift space is a formal language.
The classical framework for shift spaces consists of considering the alphabet A as finite, and G as the set of non-negative integers (ℕ) with the usual addition, or the set of all integers (ℤ) with the usual addition. In both cases, the identity element corresponds to the number 0. Furthermore, when G = ℕ, since all of ℕ can be generated from the number 1, it is sufficient to consider a unique shift map, σ, given by σ(x)_n = x_{n+1} for all n ∈ ℕ. On the other hand, for the case of G = ℤ, since all of ℤ can be generated from the numbers {−1, 1}, it is sufficient to consider two shift maps, given for all n ∈ ℤ by σ(x)_n = x_{n+1} and σ^{−1}(x)_n = x_{n−1}.
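The fact that the two generators ±1 recover every translation of ℤ can be checked on examples. A small Python sketch (assumed representation, not from the source: a point of A^ℤ as a function from integers to symbols):

```python
# Sketch: the two shift maps generating all translations for G = (Z, +).

def sigma(x):        # sigma(x)_n = x_{n+1}
    return lambda n: x(n + 1)

def sigma_inv(x):    # sigma^{-1}(x)_n = x_{n-1}
    return lambda n: x(n - 1)

x = lambda n: abs(n) % 3
y = sigma_inv(sigma(x))      # composing the two generators recovers x
print(all(y(n) == x(n) for n in range(-5, 6)))   # True
```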
Furthermore, whenever G is ℕ or ℤ with the usual addition (independently of the cardinality of A), due to its algebraic structure, it is sufficient to consider only cylinders of the form

[a_0 a_1 … a_k]_g := {x ∈ A^G : x_g = a_0, x_{g+1} = a_1, …, x_{g+k} = a_k}.
Moreover, the language of a shift space Λ will be given by

W(Λ) := ∪_{k∈ℕ} W_k(Λ),

where W_0(Λ) := {ε} and ε stands for the empty word, and

W_k(Λ) := {w ∈ A^k : ∃ x ∈ Λ such that x_0 x_1 … x_{k−1} = w}.
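For a shift of finite type over a finite alphabet, the sets W_k can be computed by enumerating the words of length k that avoid the forbidden factors; for the golden mean shift (forbidden word 11) this enumeration does give the language, and the counts grow like the Fibonacci numbers. A Python sketch (assumption: locally avoiding the forbidden words suffices, which holds for this example):

```python
# Sketch: enumerating length-k words over a finite alphabet that avoid a
# finite set of forbidden factors, approximating W_k of the shift space.
from itertools import product

def words_avoiding(alphabet, forbidden, k):
    words = ("".join(w) for w in product(alphabet, repeat=k))
    return [w for w in words if not any(f in w for f in forbidden)]

# Golden mean shift: forbid the factor "11".
counts = [len(words_avoiding("01", ["11"], k)) for k in range(1, 5)]
print(counts)   # [2, 3, 5, 8]: Fibonacci growth
```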
In the same way, for the particular case of G = ℤ, it follows that to define a shift space X_F we do not need to specify the index of G at which the words of F are forbidden; that is, we can just consider a set of finite words F ⊆ ∪_{k≥1} A^k and then

X_F = {x ∈ A^ℤ : x_i x_{i+1} … x_j ∉ F for all i ≤ j}.
However, if G = ℕ and we define a shift space as above, without specifying the index at which the words are forbidden, then we will only capture shift spaces which are invariant under the shift map, that is, those such that σ(Λ) = Λ. In fact, to define a shift space Λ ⊆ A^ℕ such that σ(Λ) ⊊ Λ, it is necessary to specify from which index on the words of F are forbidden.
In particular, in the classical framework of A being finite and G being ℕ or ℤ with the usual addition, a set of forbidden words F is finite if and only if the set of indices on which its words are supported is finite, which leads to the classical definition of a shift of finite type as those shift spaces Λ such that Λ = X_F for some finite set F of forbidden words.
Among several types of shift spaces, the most widely studied are the shifts of finite type and the sofic shifts.
In the case when the alphabet A is finite, a shift space Λ is a shift of finite type if we can take a finite set of forbidden patterns F such that Λ = X_F, and Λ is a sofic shift if it is the image of a shift of finite type under a sliding block code [1] (that is, a map Φ that is continuous and commutes with all g-shift maps, i.e., Φ ∘ σ^g = σ^g ∘ Φ for all g ∈ G). If A is finite and G is ℕ or ℤ with the usual addition, then Λ is a sofic shift if and only if W(Λ) is a regular language.
The name "sofic" was coined by Weiss (1973), based on the Hebrew word סופי meaning "finite", to refer to the fact that this is a generalization of a finiteness property. [4]
When A is infinite, it is possible to define shifts of finite type as those shift spaces Λ for which one can take a set of forbidden words F such that the set of indices on which the words of F are supported is finite and Λ = X_F. [3] In this context of an infinite alphabet, a sofic shift is defined as the image of a shift of finite type under a particular class of sliding block codes. [3] Both the finiteness of that index set and the additional conditions on the sliding block codes are trivially satisfied whenever A is finite.
Shift spaces are the topological spaces on which symbolic dynamical systems are usually defined.
Given a shift space Λ and a g-shift map σ^g, it follows that the pair (Λ, σ^g) is a topological dynamical system.
Two shift spaces Λ ⊆ A^G and Γ ⊆ B^G are said to be topologically conjugate (or simply conjugate) if for each g-shift map the topological dynamical systems (Λ, σ^g) and (Γ, σ^g) are topologically conjugate, that is, if there exists a homeomorphism Φ : Λ → Γ such that Φ ∘ σ^g = σ^g ∘ Φ. Such maps are known as generalized sliding block codes, or just as sliding block codes whenever Φ is uniformly continuous. [3]
Although any continuous map Φ from A^G to itself defines a topological dynamical system (A^G, Φ), in symbolic dynamics it is usual to consider only continuous maps Φ which commute with all g-shift maps, i.e., maps which are generalized sliding block codes. The dynamical system (A^G, Φ) is then known as a generalized cellular automaton (or just as a cellular automaton whenever Φ is uniformly continuous).
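A standard concrete instance is the XOR rule on the full shift {0, 1}^ℕ: the local rule sending x_n to x_n ⊕ x_{n+1} gives a continuous map that commutes with the shift, hence a cellular automaton. A Python sketch verifying the commutation on a sample point (illustrative; points are modeled as functions from indices to symbols):

```python
# Sketch: a sliding block code on the full shift {0,1}^N given by the
# local rule Phi(x)_n = x_n XOR x_{n+1}; it commutes with the shift map.

def phi(x):
    return lambda n: x(n) ^ x(n + 1)

def sigma(x):
    return lambda n: x(n + 1)

x = lambda n: (n // 2) % 2            # the point 0 0 1 1 0 0 1 1 ...
lhs, rhs = phi(sigma(x)), sigma(phi(x))
print(all(lhs(n) == rhs(n) for n in range(8)))   # True: Phi o sigma = sigma o Phi
```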
The first trivial example of a shift space (of finite type) is the full shift A^G.
Let A = {a, b}. The set of all infinite words over A containing at most one b is a sofic shift which is not of finite type. The set of all infinite words over A whose b's form blocks of prime length is not sofic (this can be shown by using the pumping lemma).
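The "at most one b" example is sofic because its factor language is recognized by a finite automaton, i.e., it is a regular language. A Python sketch of such an automaton (illustrative; a finite word is a factor of an allowed infinite word exactly when it contains at most one b):

```python
# Sketch: a finite automaton recognizing the factor language of the
# "at most one b" sofic shift over A = {a, b}.

def at_most_one_b(word):
    """Accept a finite word iff it contains at most one b."""
    state = 0                      # number of b's read so far
    for symbol in word:
        if symbol == "b":
            state += 1
            if state > 1:
                return False       # dead state: two b's seen
    return True

print(at_most_one_b("aabaa"))   # True
print(at_most_one_b("abba"))    # False
```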
The space of infinite strings over two letters, {0, 1}^ℕ, is called the Bernoulli process. It is homeomorphic to the Cantor set.
The bi-infinite space of strings over two letters, {0, 1}^ℤ, is commonly associated with the baker's map: equipped with the shift map and a product measure, it is isomorphic, as a measure-preserving system, to the baker's map.