Computation in the limit

In computability theory, a function is called limit computable if it is the limit of a uniformly computable sequence of functions. The terms computable in the limit, limit recursive, and recursively approximable are also used. One can think of limit computable functions as those for which there is a computable guessing procedure that is eventually correct about their true value. A set is limit computable just when its characteristic function is limit computable.

If the sequence is uniformly computable relative to D, then the function is limit computable in D.

Formal definition

A total function $r(x)$ is limit computable if there is a total computable function $\hat{r}(x,s)$ such that

$$r(x) = \lim_{s \to \infty} \hat{r}(x,s).$$

The total function $r(x)$ is limit computable in D if there is a total function $\hat{r}(x,s)$ computable in D also satisfying

$$r(x) = \lim_{s \to \infty} \hat{r}(x,s).$$

A set of natural numbers is defined to be computable in the limit if and only if its characteristic function is computable in the limit. In contrast, a set is computable if and only if it is computable in the limit by a function $\phi(t,i)$ and there is a second computable function that takes input $i$ and returns a value of $t$ large enough that $\phi(t,i)$ has stabilized.
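As an informal illustration of the definition (not part of the standard presentation), the following Python sketch models programs as generators in which each yield counts as one computation step, so "halts within t steps" is decidable; the names halts_within and phi and the sample programs p0, p1 are illustrative choices. The stage-t guess phi(t, i) stabilizes, as t grows, at the answer to "does the i-th listed program halt?".

    # Minimal illustrative sketch: programs are modelled as generators,
    # one `yield` per computation step.

    def halts_within(prog, t):
        """Return True iff the generator produced by prog() finishes within t steps."""
        gen = prog()
        try:
            for _ in range(t):
                next(gen)
        except StopIteration:
            return True          # the program stopped within the step budget
        return False             # still running after t steps

    def phi(t, i, programs):
        """Stage-t guess at the characteristic function of {i : programs[i] halts}."""
        return 1 if halts_within(programs[i], t) else 0

    # Two toy "programs": p0 halts after 5 steps, p1 runs forever.
    def p0():
        for _ in range(5):
            yield

    def p1():
        while True:
            yield

    programs = [p0, p1]
    print([phi(t, 0, programs) for t in (1, 6, 10)])  # [0, 1, 1] -- stabilises at 1
    print([phi(t, 1, programs) for t in (1, 6, 10)])  # [0, 0, 0] -- stays 0

Note that any function a program like this actually computes is of course computable; the sketch only shows the shape of a stage-wise guessing procedure whose limit exists for every input.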

Limit lemma

The limit lemma states that a set of natural numbers is limit computable if and only if the set is computable from $0'$ (the Turing jump of the empty set). The relativized limit lemma states that a set is limit computable in D if and only if it is computable from $D'$. Moreover, the limit lemma (and its relativization) hold uniformly. Thus one can go from an index for the function $\hat{r}(x,s)$ to an index for $r(x)$ relative to $0'$. One can also go from an index for $r(x)$ relative to $0'$ to an index for some $\hat{r}(x,s)$ that has limit $r(x)$.
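In symbols, writing $\le_T$ for Turing reducibility, the two statements are restated (this adds nothing beyond the text above) as

$$A \text{ is limit computable} \iff A \le_T 0', \qquad A \text{ is limit computable in } D \iff A \le_T D'.$$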

Proof

As $0'$ is a computably enumerable set, it must be computable in the limit itself, as the computable function can be defined

$$\hat{r}(x,s) = \begin{cases} 1 & \text{if } x \text{ has been enumerated into } 0' \text{ by stage } s \\ 0 & \text{otherwise} \end{cases}$$

whose limit as $s$ goes to infinity is the characteristic function of $0'$.

It therefore suffices to show that limit computability is preserved by Turing reduction, as this will show that all sets computable from $0'$ are limit computable. Fix sets $X, Y$ which are identified with their characteristic functions and a computable function $X_s$ with limit $X$. Suppose that $Y = \varphi^X$ for some Turing reduction $\varphi$ and define a computable function $Y_s$ as follows:

$$Y_s(x) = \begin{cases} \varphi^{X_s}(x) & \text{if this computation converges in at most } s \text{ steps} \\ 0 & \text{otherwise} \end{cases}$$

Now suppose that the computation $\varphi^X(x)$ converges in $s$ steps and only looks at the first $s$ bits of $X$. Now pick $N \ge s$ such that for all $t > N$, $X_t(n) = X(n)$ for all $n < s$. If $t > N$ then the computation $\varphi^{X_t}(x)$ converges in at most $s \le t$ steps to $\varphi^X(x)$. Hence $Y_t(x)$ has a limit of $\varphi^X(x) = Y(x)$, so $Y$ is limit computable.
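The construction of $Y_s$ above can be sketched schematically in code. This is an illustration rather than part of the proof: it assumes a reduction phi that takes an oracle, an input, and a step budget and returns None when it has not converged within that budget; the names make_Y_approx, X_approx, and step_budget are ours.

    # Schematic sketch of the proof's construction of Y_s (illustrative only).
    # Assumption: `phi(oracle, x, step_budget)` runs the Turing reduction with the
    # given oracle for at most `step_budget` steps and returns a bit, or None if
    # it has not converged within the budget.

    def make_Y_approx(phi, X_approx):
        """Given a stage-wise approximation X_approx(s, n) with limit X, return
        the stage-wise approximation Y_approx(s, x) = Y_s(x) defined in the proof."""
        def Y_approx(s, x):
            oracle = lambda n: X_approx(s, n)        # the stage-s guess at X
            result = phi(oracle, x, step_budget=s)   # run phi^{X_s}(x) for <= s steps
            return result if result is not None else 0
        return Y_approx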

As the $\Delta^0_2$ sets are just the sets computable from $0'$ by Post's theorem, the limit lemma also entails that the limit computable sets are the $\Delta^0_2$ sets.

Limit computable real numbers

A real number x is computable in the limit if there is a computable sequence of rational numbers (or, equivalently, of computable real numbers) which converges to x. In contrast, a real number is computable if and only if there is a computable sequence of rational numbers which converges to it and which has a computable modulus of convergence.
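For a concrete sketch (reusing the stage-wise guess phi and the program list from the earlier illustration, all of which are our assumptions), the partial sums below form a computable sequence of rationals whose limit is a real whose binary digits record which of the listed programs halt.

    # Illustrative sketch: a computable sequence of rationals converging to a
    # real whose binary digits record which programs in a fixed list halt.
    # `guess(s, i)` is any {0,1}-valued stage-wise approximation, e.g. the
    # function phi from the earlier sketch with a fixed program list.
    from fractions import Fraction

    def rational_stage(s, guess, k):
        """Stage-s rational approximation sum_{i<k} guess(s, i) / 2^(i+1)."""
        return sum(Fraction(guess(s, i), 2 ** (i + 1)) for i in range(k))

    # With phi and programs as before,
    #   q_s = rational_stage(s, lambda s, i: phi(s, i, programs), len(programs))
    # converges (indeed is eventually constant) as s grows, but such a sequence
    # need not, in general, come with a computable modulus of convergence.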

When a real number is viewed as a sequence of bits, the following equivalent definition holds. An infinite sequence $x(0), x(1), \ldots$ of binary digits is computable in the limit if and only if there is a total computable function $\phi(t,i)$ taking values in the set $\{0,1\}$ such that for each $i$ the limit $\lim_{t \to \infty} \phi(t,i)$ exists and equals $x(i)$. Thus for each $i$, as $t$ increases the value of $\phi(t,i)$ eventually becomes constant and equals $x(i)$. As with the case of computable real numbers, it is not possible to effectively move between the two representations of limit computable reals.
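In symbols, for a real $x$ in the unit interval (an assumption made here only for concreteness) with binary digits $x(0), x(1), \ldots$, the two viewpoints are related by

$$x = \sum_{i \ge 0} x(i)\,2^{-(i+1)}, \qquad x(i) = \lim_{t \to \infty} \phi(t,i).$$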

Examples

By the limit lemma, the halting problem, which is computably enumerable and hence computable from $0'$, is limit computable but not computable. Likewise, the real number whose binary expansion encodes the halting problem is computable in the limit but not computable, whereas the real number whose binary expansion encodes the truth set of first-order arithmetic is not computable in the limit.

Set-theoretic extension

There is a modified version of the limit lemma for α-recursion theory via functions in the $\alpha$-arithmetical hierarchy, which is a hierarchy defined relative to some admissible ordinal $\alpha$.[1]

For a given admissible ordinal $\alpha$, define the $\alpha$-arithmetical hierarchy:

Let $f$ be a partial function from $\alpha$ to $\alpha$. The following are equivalent:

Here $f(x) \simeq g(x)$ denotes that either $f(x)$ and $g(x)$ are both undefined, or they are both defined and equal.

References

  1. S. G. Simpson, "Degree Theory on Admissible Ordinals", pp. 170–171, in J. Fenstad, P. Hinman (eds.), Generalized Recursion Theory: Proceedings of the 1972 Oslo Symposium (1974). ISBN 0-7204-2276-0.
  2. J. Schmidhuber, "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit", International Journal of Foundations of Computer Science, 2002. doi:10.1142/S0129054102001291.
  3. R. Soare, Recursively Enumerable Sets and Degrees, Springer-Verlag, 1987.
  4. V. Brattka, "A Galois connection between Turing jumps and limits", Logical Methods in Computer Science, 2018. doi:10.23638/LMCS-14(3:13)2018.