In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, [1] effective numbers, [2] computable reals, [3] or recursive reals. [4] The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time. [5]
Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.
In the following, Marvin Minsky defines the numbers to be computed in a manner similar to that used by Alan Turing in 1936; [6] i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1: [7]
A computable number [is] one for which there is a Turing machine which, given n on its initial tape, terminates with the nth digit of that number [encoded on its tape].
The key notions in the definition are (1) that some n is specified at the start, and (2) that for any n the computation takes only a finite number of steps, after which the machine produces the desired output and terminates.
An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) that by use of a Turing machine, a finite definition – in the form of the machine's state table – is being used to define what is a potentially infinite string of decimal digits.
This is, however, not the modern definition, which only requires that the result be accurate to within any given precision. The informal definition above is subject to a rounding problem called the table-maker's dilemma, whereas the modern definition is not.
A real number $a$ is computable if it can be approximated by some computable function $f:\mathbb{N}\to\mathbb{Z}$ in the following manner: given any positive integer $n$, the function produces an integer $f(n)$ such that:

$$\frac{f(n)-1}{n} \leq a \leq \frac{f(n)+1}{n}.$$
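As an illustration, here is a minimal Python sketch of this definition for $\sqrt{2}$, using only exact integer arithmetic; the function name `sqrt2_approx` and the use of `math.isqrt` are our choices for the sketch, not part of any standard presentation.

```python
from fractions import Fraction
from math import isqrt

def sqrt2_approx(n: int) -> int:
    """Return an integer f(n) with (f(n)-1)/n <= sqrt(2) <= (f(n)+1)/n.

    isqrt(2*n*n) equals floor(n*sqrt(2)), so f(n)/n is within 1/n of sqrt(2).
    """
    return isqrt(2 * n * n)

# Each bound is an interval of width 2/n around sqrt(2) ~ 1.41421356...
for n in (10, 1000, 10**6):
    f = sqrt2_approx(n)
    print(Fraction(f - 1, n), "<= sqrt(2) <=", Fraction(f + 1, n))
```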
A complex number is called computable if its real and imaginary parts are computable.
There are two similar definitions that are equivalent:

- There exists a computable function which, given any positive rational error bound $\varepsilon$, produces a rational number $r$ such that $|r-a|\leq\varepsilon$.
- There is a computable sequence of rational numbers $q_i$ converging to $a$ such that $|q_i - q_{i+1}| < 2^{-i}$ for each $i$.
There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function $D$ which, when provided with a rational number $r$ as input, returns $D(r)=\mathrm{true}$ or $D(r)=\mathrm{false}$, satisfying the following conditions:

$$\exists r \; D(r)=\mathrm{true}$$
$$\exists r \; D(r)=\mathrm{false}$$
$$(D(r)=\mathrm{true}) \wedge (D(s)=\mathrm{false}) \Rightarrow r<s$$
$$D(r)=\mathrm{true} \Rightarrow \exists s>r \; D(s)=\mathrm{true}.$$
An example is given by a program D that defines the cube root of 3. Assuming $q>0$, this is defined by:

$$p^3<3q^3 \Rightarrow D(p/q)=\mathrm{true}$$
$$p^3>3q^3 \Rightarrow D(p/q)=\mathrm{false}.$$
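This cut can be written directly as a Python predicate on exact rationals; the name `D` follows the text, while the use of `fractions.Fraction` is an assumption of this sketch.

```python
from fractions import Fraction

def D(r: Fraction) -> bool:
    """Computable Dedekind cut for the cube root of 3.

    Returns True exactly when r lies below the cut; the case r**3 == 3
    never occurs, since 3 is not the cube of any rational number.
    """
    return r ** 3 < 3

print(D(Fraction(7, 5)))   # True:  (7/5)^3 = 343/125 < 3
print(D(Fraction(3, 2)))   # False: (3/2)^3 = 27/8   > 3
```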
A real number is computable if and only if there is a computable Dedekind cut D corresponding to it. The function D is unique for each computable number (although of course two different programs may provide the same function).
Assigning a Gödel number to each Turing machine definition produces a subset $S$ of the natural numbers corresponding to the computable numbers and identifies a surjection from $S$ to the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set $S$ of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of $S$ that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′. Consequently, there is no surjective computable function from the natural numbers to the set $S$ of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them.
While the set of real numbers is uncountable, the set of computable numbers is classically countable, and thus almost all real numbers are not computable. Here, for any given computable number $x$, the well ordering principle provides that there is a minimal element in $S$ which corresponds to $x$, and therefore there exists a subset $S_1\subseteq S$ consisting of the minimal elements, on which the map to the computable numbers is a bijection. The inverse of this bijection is an injection of the computable numbers into the natural numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered.
The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers $a$ and $b$ are computable then the following real numbers are also computable: $a+b$, $a-b$, $ab$, and $a/b$ if $b$ is nonzero. These operations are actually uniformly computable; for example, there is a Turing machine which on input $(A,B,\varepsilon)$ produces output $r$, where $A$ is the description of a Turing machine approximating $a$, $B$ is the description of a Turing machine approximating $b$, and $r$ is an $\varepsilon$-approximation of $a+b$.
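The following Python sketch illustrates this uniformity under one common convention: a computable real is represented by a function that maps a rational tolerance $\varepsilon$ to a rational within $\varepsilon$ of the number. All names here (`sqrt_approx`, `add`) are illustrative, not from the literature.

```python
from fractions import Fraction
from math import isqrt

def sqrt_approx(k: int):
    """Representation of sqrt(k): a function from a rational tolerance
    eps to a rational within eps of sqrt(k)."""
    def approx(eps: Fraction) -> Fraction:
        n = 1
        while Fraction(1, n) > eps:
            n *= 2
        # isqrt(k*n*n)/n is within 1/n <= eps of sqrt(k)
        return Fraction(isqrt(k * n * n), n)
    return approx

def add(a, b):
    """Uniform addition: to approximate a+b within eps, query each
    summand within eps/2 and add the two rational answers."""
    return lambda eps: a(eps / 2) + b(eps / 2)

s = add(sqrt_approx(2), sqrt_approx(3))
print(s(Fraction(1, 10**6)))  # sqrt(2) + sqrt(3) to within 10^-6
```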
The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954. [8]
Computable reals however do not form a computable field, because the definition of a computable field requires effective equality.
The order relation on the computable numbers is not computable. Let $A$ be the description of a Turing machine approximating the number $a$. Then there is no Turing machine which on input $A$ outputs "YES" if $a>0$ and "NO" if $a\leq 0$. To see why, suppose the machine described by $A$ keeps outputting 0 as approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces $a$ to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable.
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines $A$ and $B$ approximating numbers $a$ and $b$, where $a\neq b$, and outputs whether $a<b$ or $b<a$. It is sufficient to use $\varepsilon$-approximations where $\varepsilon<\frac{|b-a|}{2}$, so by taking increasingly small $\varepsilon$ (approaching 0), one eventually can decide whether $a<b$ or $b<a$.
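Here is a sketch of this semi-comparison procedure in the same ε-approximation representation as above (names are ours); note that the loop terminates only under the promise $a\neq b$.

```python
from fractions import Fraction

def compare_unequal(a, b):
    """Decide whether a < b for two reals given by eps-approximation
    functions, under the promise a != b.  Tighten eps until the two
    rational intervals become disjoint; termination relies on a != b
    (on equal inputs the loop would run forever)."""
    eps = Fraction(1, 2)
    while True:
        ra, rb = a(eps), b(eps)
        if ra + eps < rb - eps:
            return True    # a < b
        if rb + eps < ra - eps:
            return False   # b < a
        eps /= 2           # intervals still overlap: refine further

def const(q: Fraction):
    """A rational q in the same representation: ignore the tolerance."""
    return lambda eps: q

print(compare_unequal(const(Fraction(1, 3)), const(Fraction(1, 2))))  # True
```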
The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number. [9] A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949. [10] Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis.
Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, noncomputable real numbers, including:

- any number that encodes the solution of the halting problem (or any other undecidable problem) according to a chosen encoding scheme, and
- Chaitin's constant $\Omega$, a real number that is Turing equivalent to the halting problem.
Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine. A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable.
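One direction of this equivalence is easy to make concrete: given a decidable set, ε-approximations to the real with the corresponding binary expansion can be computed by truncating the tail of the expansion. The following Python sketch (names are ours) assumes the set is presented by a total characteristic function.

```python
from fractions import Fraction

def real_from_set(chi, eps: Fraction) -> Fraction:
    """Approximate, to within eps, the real whose binary expansion is
    given by the total characteristic function chi: the sum of 2**-i
    over all i >= 1 with chi(i) == 1.  The tail beyond position k
    contributes at most 2**-k, so truncating there suffices."""
    k, tail = 1, Fraction(1, 2)
    while tail > eps:
        k += 1
        tail /= 2
    return sum(Fraction(chi(i), 2 ** i) for i in range(1, k + 1))

# Example: the set of even numbers gives 0.010101..._2 = 1/3.
print(real_from_set(lambda i: 1 if i % 2 == 0 else 0, Fraction(1, 1024)))
```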
The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers.
Turing's original paper defined computable numbers as follows:
A real number is computable if its digit sequence can be produced by some algorithm or Turing machine. The algorithm takes an integer $n\geq 1$ as input and produces the $n$-th digit of the real number's decimal expansion as output.
(The decimal expansion of $a$ refers only to the digits following the decimal point.)
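For a rational number the digit sequence is genuinely computable, so Turing's definition can be illustrated directly; the helper `nth_digit` below is a name of our own choosing.

```python
def nth_digit(p: int, q: int, n: int) -> int:
    """The n-th digit (n >= 1) after the decimal point of p/q, for
    0 <= p < q: shift by 10**n, divide, and reduce mod 10, exactly
    as long division would."""
    return (p * 10 ** n) // q % 10

# 1/7 = 0.142857142857...
print([nth_digit(1, 7, n) for n in range(1, 13)])
```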
Turing was aware that this definition is equivalent to the $\varepsilon$-approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the $\varepsilon$ sense: if $n>\log_{10}(1/\varepsilon)$, then the first $n$ digits of the decimal expansion for $a$ provide an $\varepsilon$-approximation of $a$. For the converse, we pick an $\varepsilon$-computable real number $a$ and generate increasingly precise approximations until the $n$th digit after the decimal point is certain. This always generates a decimal expansion equal to $a$, but it may improperly end in an infinite sequence of 9's, in which case it must have a finite (and thus computable) proper decimal expansion.
Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of $2^{\omega}$ (total 0,1-valued functions) instead of real numbers in $[0,1]$. The members of $2^{\omega}$ can be identified with binary decimal expansions, but since the decimal expansions $.d_1d_2\ldots d_n0111\ldots$ and $.d_1d_2\ldots d_n1000\ldots$ denote the same real number, the interval $[0,1]$ can only be bijectively (and homeomorphically under the subset topology) identified with the subset of $2^{\omega}$ not ending in all 1's.
Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion with those defined in the $\varepsilon$-approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces $\varepsilon$-approximations for the computable number $a$, and produces as output a Turing machine which enumerates the digits of $a$ in the sense of Turing's definition. [11] Similarly, the arithmetic operations on the computable reals are not effective on their decimal representations: as when adding decimal numbers, in order to produce one digit it may be necessary to look arbitrarily far to the right to determine whether there is a carry to the current location. This lack of uniformity is one reason why the contemporary definition of computable numbers uses $\varepsilon$-approximations rather than decimal expansions.
However, from a computability-theoretic or measure-theoretic perspective, the two structures $2^{\omega}$ and $[0,1]$ are essentially identical. Thus, computability theorists often refer to members of $2^{\omega}$ as reals. While $2^{\omega}$ is totally disconnected, for questions about $\Pi^0_1$ classes or randomness it is easier to work in $2^{\omega}$.
Elements of $\omega^{\omega}$ are sometimes called reals as well and, though containing a homeomorphic image of $\mathbb{R}$, $\omega^{\omega}$ isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance the $x\in\mathbb{R}$ satisfying $\forall n\,\varphi(x,n)$, with $\varphi(x,n)$ quantifier free, must be computable, while the unique $x\in\omega^{\omega}$ satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy.
The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as e, π, and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics. [12]
To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis.
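A small Python sketch of why a computable modulus of convergence suffices (all names here are ours): an $\varepsilon$-approximation to the limit is simply the term at index `modulus(eps)`.

```python
from fractions import Fraction

def limit(x, modulus):
    """Given a computable sequence x(i) of rationals and a computable
    modulus of convergence -- modulus(eps) is an index N such that
    |x(i) - lim| <= eps for every i >= N -- the limit is computable:
    its eps-approximation is simply x(modulus(eps))."""
    return lambda eps: x(modulus(eps))

# Example: partial sums 1 - 2**-i converging to 1, with the explicit
# modulus "smallest i such that 2**-i <= eps".
def x(i: int) -> Fraction:
    return 1 - Fraction(1, 2 ** i)

def modulus(eps: Fraction) -> int:
    i = 0
    while Fraction(1, 2 ** i) > eps:
        i += 1
    return i

one = limit(x, modulus)
print(one(Fraction(1, 1000)))  # 1023/1024, within 1/1000 of 1
```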
Computer packages representing real numbers as programs computing approximations have been proposed as early as 1985, under the name "exact arithmetic". [13] Modern examples include the CoRN library (Coq), [14] and the RealLib package (C++). [15] A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the iRRAM package. [16]