In computability theory, a primitive recursive function is, roughly speaking, a function that can be computed by a computer program whose loops are all "for" loops (that is, an upper bound of the number of iterations of every loop is fixed before entering the loop). Primitive recursive functions form a strict subset of those general recursive functions that are also total functions.
The importance of primitive recursive functions lies in the fact that most computable functions that are studied in number theory (and more generally in mathematics) are primitive recursive. For example, addition and division, the factorial and exponential function, and the function which returns the nth prime are all primitive recursive. [1] In fact, for showing that a computable function is primitive recursive, it suffices to show that its time complexity is bounded above by a primitive recursive function of the input size. [2] It is hence not particularly easy to devise a computable function that is not primitive recursive; some examples are shown in section § Limitations below.
The set of primitive recursive functions is known as PR in computational complexity theory.
A primitive recursive function takes a fixed number of arguments, each a natural number (nonnegative integer: {0, 1, 2, ...}), and returns a natural number. If it takes n arguments it is called n-ary.
The basic primitive recursive functions are given by these axioms:
1. Constant functions C_n^k: for each natural number n and every k, the k-ary constant function defined by C_n^k(x_1, ..., x_k) = n is primitive recursive.
2. Successor function: the 1-ary successor function S, which returns the successor of its argument, S(x) = x + 1, is primitive recursive.
3. Projection functions P_i^k: for all k ≥ 1 and each i with 1 ≤ i ≤ k, the k-ary projection function defined by P_i^k(x_1, ..., x_k) = x_i is primitive recursive.
More complex primitive recursive functions can be obtained by applying the operations given by these axioms:
1. Composition operator ∘: given a k-ary function h and k many m-ary functions g_1, ..., g_k, the composition of h with g_1, ..., g_k, i.e. the m-ary function f = h ∘ (g_1, ..., g_k) defined by f(x_1, ..., x_m) = h(g_1(x_1, ..., x_m), ..., g_k(x_1, ..., x_m)), is primitive recursive.
2. Primitive recursion operator ρ: given a k-ary function g and a (k+2)-ary function h, the (k+1)-ary function f = ρ(g, h) defined by f(0, x_1, ..., x_k) = g(x_1, ..., x_k) and f(S(y), x_1, ..., x_k) = h(y, f(y, x_1, ..., x_k), x_1, ..., x_k) is primitive recursive.
The primitive recursive functions are the basic functions and those obtained from the basic functions by applying these operations a finite number of times.
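To make these definitions concrete, the following is a minimal Python sketch of the basic functions and the two operators; the names S, C, P, compose and rho are illustrative choices for this article's notation, and rho simply evaluates the recursion with a bounded loop.

    def S(x):
        # successor function: S(x) = x + 1
        return x + 1

    def C(n, k):
        # k-ary constant function C_n^k; k is kept only to mirror the notation
        return lambda *args: n

    def P(i, k):
        # k-ary projection function P_i^k, returning the i-th argument (1-indexed)
        return lambda *args: args[i - 1]

    def compose(f, *gs):
        # composition f ∘ (g_1, ..., g_k)
        return lambda *args: f(*(g(*args) for g in gs))

    def rho(g, h):
        # primitive recursion: f(0, x...) = g(x...), f(y + 1, x...) = h(y, f(y, x...), x...)
        def f(y, *args):
            acc = g(*args)
            for i in range(y):
                acc = h(i, acc, *args)
            return acc
        return f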
A definition of the 2-ary function Add, to compute the sum of its arguments, can be obtained using the primitive recursion operator ρ. To this end, the well-known equations
    0 + y = y
    S(x) + y = S(x + y)
are "rephrased in primitive recursive function terminology": In the definition of Add = ρ(g, h), the first equation suggests choosing g = P_1^1 to obtain Add(0, y) = y; the second equation suggests choosing h = S ∘ P_2^3 to obtain Add(S(x), y) = S(Add(x, y)). Therefore, the addition function can be defined as Add = ρ(P_1^1, S ∘ P_2^3). As a computation example,
    Add(2, 3) = S(Add(1, 3)) = S(S(Add(0, 3))) = S(S(3)) = 5.
Given Add, the 1-ary function Add ∘ (P_1^1, P_1^1) doubles its argument, (Add ∘ (P_1^1, P_1^1))(x) = Add(x, x) = x + x.
In a similar way as addition, multiplication can be defined by Mul = ρ(C_0^1, Add ∘ (P_2^3, P_3^3)). This reproduces the well-known multiplication equations
    Mul(0, y) = C_0^1(y) = 0
and
    Mul(S(x), y) = (Add ∘ (P_2^3, P_3^3))(x, Mul(x, y), y) = Add(Mul(x, y), y) = Mul(x, y) + y.
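In the same sketch, and using the Add defined above:

    Mul = rho(C(0, 1), compose(Add, P(2, 3), P(3, 3)))

    print(Mul(3, 4))  # 12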
The predecessor function Pred acts as the "opposite" of the successor function and is recursively defined by the rules Pred(0) = 0 and Pred(S(n)) = n. A primitive recursive definition is Pred = ρ(C_0^0, P_1^2). As a computation example,
    Pred(8) = P_1^2(7, Pred(7)) = 7.
The limited subtraction function (also called "monus", and denoted "∸") is definable from the predecessor function. It satisfies the equations
    x ∸ 0 = x    and    x ∸ S(y) = Pred(x ∸ y).
Since the recursion runs over the second argument, we begin with a primitive recursive definition of the reversed subtraction, RSub(y, x) = x ∸ y. Its recursion then runs over the first argument, so its primitive recursive definition can be obtained, similar to addition, as RSub = ρ(P_1^1, Pred ∘ P_2^3). To get rid of the reversed argument order, then define Sub = RSub ∘ (P_2^2, P_1^2). As a computation example,
    Sub(8, 1) = RSub(1, 8) = Pred(RSub(0, 8)) = Pred(8) = 7.
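Continuing the sketch, the predecessor and limited-subtraction functions look as follows; Pred, RSub and Sub are illustrative names.

    Pred = rho(C(0, 0), P(1, 2))                 # Pred(0) = 0, Pred(y + 1) = y
    RSub = rho(P(1, 1), compose(Pred, P(2, 3)))  # RSub(y, x) = x ∸ y
    Sub = compose(RSub, P(2, 2), P(1, 2))        # Sub(x, y) = x ∸ y

    print(Pred(8), Sub(8, 3), Sub(3, 8))  # 7 5 0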
In some settings it is natural to consider primitive recursive functions that take as inputs tuples that mix numbers with truth values (that is, t for true and f for false),[citation needed] or that produce truth values as outputs. [4] This can be accomplished by identifying the truth values with numbers in any fixed manner. For example, it is common to identify the truth value t with the number 1 and the truth value f with the number 0. Once this identification has been made, the characteristic function of a set A, which always returns 1 or 0, can be viewed as a predicate that tells whether a number is in the set A. Such an identification of predicates with numeric functions will be assumed for the remainder of this article.
As an example for a primitive recursive predicate, the 1-ary function IsZero shall be defined such that IsZero(x) = 1 if x = 0, and IsZero(x) = 0 otherwise. This can be achieved by defining IsZero = ρ(C_1^0, C_0^2). Then, IsZero(0) = 1 and e.g. IsZero(8) = 0.
Using the property x ∸ y = 0 ⟺ x ≤ y, the 2-ary function Leq can be defined by Leq = IsZero ∘ Sub. Then Leq(x, y) = 1 if x ≤ y, and Leq(x, y) = 0 otherwise. As a computation example,
    Leq(8, 3) = IsZero(Sub(8, 3)) = IsZero(5) = 0.
Once a definition of Leq is obtained, the converse predicate can be defined as Geq = Leq ∘ (P_2^2, P_1^2). Then, Geq(x, y) is true (more precisely: has value 1) if, and only if, x ≥ y.
The 3-ary if-then-else operator known from programming languages can be defined by If = ρ(P_2^2, P_3^4). Then, for arbitrary x,
    If(S(x), y, z) = P_3^4(x, If(x, y, z), y, z) = y
and
    If(0, y, z) = P_2^2(y, z) = z.
That is, If(x, y, z) returns the then-part, y, if the if-part, x, is true, and the else-part, z, otherwise.
Based on the If function, it is easy to define logical junctors. For example, defining And = If ∘ (P_1^2, P_2^2, C_0^2), one obtains And(x, y) = If(x, y, 0), that is, And(x, y) is true if, and only if, both x and y are true (logical conjunction of x and y).
Similarly, Or = If ∘ (P_1^2, C_1^2, P_2^2) and Not = If ∘ (P_1^1, C_0^1, C_1^1) lead to appropriate definitions of disjunction and negation: Or(x, y) = If(x, 1, y) and Not(x) = If(x, 0, 1).
Using the above functions Leq, Geq and And, the definition Eq = And ∘ (Leq, Geq) implements the equality predicate. In fact, Eq(x, y) is true if, and only if, x equals y.
Similarly, the definition Lt = Not ∘ Geq implements the predicate "less-than", and Gt = Not ∘ Leq implements "greater-than".
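The whole family of predicates fits into the same Python sketch; all names are illustrative and rely on the definitions built up above.

    IsZero = rho(C(1, 0), C(0, 2))                 # 1 on input 0, otherwise 0
    Leq = compose(IsZero, Sub)                     # Leq(x, y) = IsZero(x ∸ y)
    Geq = compose(Leq, P(2, 2), P(1, 2))
    If = rho(P(2, 2), P(3, 4))                     # If(x, y, z): y if x > 0, else z
    And = compose(If, P(1, 2), P(2, 2), C(0, 2))   # And(x, y) = If(x, y, 0)
    Or = compose(If, P(1, 2), C(1, 2), P(2, 2))    # Or(x, y) = If(x, 1, y)
    Not = compose(If, P(1, 1), C(0, 1), C(1, 1))   # Not(x) = If(x, 0, 1)
    Eq = compose(And, Leq, Geq)
    Lt = compose(Not, Geq)
    Gt = compose(Not, Leq)

    print(Eq(4, 4), Eq(4, 5), Lt(2, 5), Gt(2, 5))  # 1 0 1 0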
Exponentiation and primality testing are primitive recursive. Given primitive recursive functions e, f, g, and h, a function that returns the value of g when e ≤ f and the value of h otherwise is primitive recursive.
By using Gödel numberings, the primitive recursive functions can be extended to operate on other objects such as integers and rational numbers. If integers are encoded by Gödel numbers in a standard way, the arithmetic operations including addition, subtraction, and multiplication are all primitive recursive. Similarly, if the rationals are represented by Gödel numbers then the field operations are all primitive recursive.
The following examples and definitions are from Kleene (1952, pp. 223–231). Many appear with proofs. Most also appear with similar names, either as proofs or as examples, in Boolos, Burgess & Jeffrey (2002, pp. 63–70); they add the logarithm lo(x, y) or lg(x, y) depending on the exact derivation.
In the following the mark " ' ", e.g. a', is the primitive mark meaning "the successor of", usually thought of as " +1", e.g. a +1 =def a'. The functions 16–20 and #G are of particular interest with respect to converting primitive recursive predicates to, and extracting them from, their "arithmetical" form expressed as Gödel numbers.
The broader class of partial recursive functions is defined by introducing an unbounded search operator. The use of this operator may result in a partial function, that is, a relation with at most one value for each argument but not necessarily a value for every argument (see domain). An equivalent definition states that a partial recursive function is one that can be computed by a Turing machine. A total recursive function is a partial recursive function that is defined for every input.
Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. The Ackermann function A(m,n) is a well-known example of a total recursive function (in fact, provable total), that is not primitive recursive. There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. This characterization states that a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function. [5]
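For illustration, here is a direct Python sketch of the Ackermann function; it is total but grows too quickly to be primitive recursive.

    def ackermann(m, n):
        # A(0, n) = n + 1; A(m, 0) = A(m - 1, 1); A(m, n) = A(m - 1, A(m, n - 1))
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9; already ackermann(4, 2) has 19729 decimal digits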
An important property of the primitive recursive functions is that they are a recursively enumerable subset of the set of all total recursive functions (which is not itself recursively enumerable). This means that there is a single computable function f(m,n) that enumerates the primitive recursive functions, namely: for every primitive recursive function g, there is an m such that g(n) = f(m,n) for all n; and for every m, the function h(n) = f(m,n) is primitive recursive.
f can be explicitly constructed by iteratively repeating all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use a diagonalization argument to show that f is not itself primitive recursive: if it were, so would be h(n) = f(n,n)+1. But if this equals some primitive recursive function, there is an m such that h(n) = f(m,n) for all n, and then h(m) = f(m,m), leading to a contradiction.
However, the set of primitive recursive functions is not the largest recursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true.
Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function—this can be seen with a variant of Cantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows:
The unary primitive recursive functions can be computably enumerated; let f_i denote the function given by the i-th definition in such an enumeration, so that f_i can be effectively determined from i. Now define the "evaluator function" ev with two arguments by ev(i, j) = f_i(j). Clearly ev is total and computable, since one can effectively determine the definition of f_i, and f_i, being a primitive recursive function, is itself total and computable, so f_i(j) is always defined and effectively computable. However, a diagonal argument will show that the function ev of two arguments is not primitive recursive.
Suppose ev were primitive recursive. Then the unary function g defined by g(i) = S(ev(i, i)) = ev(i, i) + 1 would also be primitive recursive, as it is defined by composition from the successor function and ev. But then g occurs in the enumeration, so there is some number n such that g = f_n. Now g(n) = ev(n, n) + 1 and g(n) = f_n(n) = ev(n, n), which gives a contradiction.

This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the article Machine that always halts. Note, however, that the partial computable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings.
Other examples of total recursive but not primitive recursive functions are known, such as the Sudan function and the Goodstein function.
Instead of the constant functions C_n^k, alternative definitions use just one 0-ary zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator.
Robinson [6] considered various restrictions of the recursion rule. One is the so-called iteration rule, where the function h does not have access to the parameters x_i (in this case, we may assume without loss of generality that the function g is just the identity, as the general case can be obtained by substitution):
    f(0, x_1, ..., x_k) = g(x_1, ..., x_k)
    f(S(y), x_1, ..., x_k) = h(y, f(y, x_1, ..., x_k))
He proved that the class of all primitive recursive functions can still be obtained in this way.
Another restriction considered by Robinson [6] is pure recursion, where h does not have access to the induction variable y:
    f(0, x_1, ..., x_k) = g(x_1, ..., x_k)
    f(S(y), x_1, ..., x_k) = h(f(y, x_1, ..., x_k), x_1, ..., x_k)
Gladstone [7] proved that this rule is enough to generate all primitive recursive functions. Gladstone [8] improved this so that even the combination of these two restrictions, i.e., the pure iteration rule below, is enough:
    f(0, x_1, ..., x_k) = g(x_1, ..., x_k)
    f(S(y), x_1, ..., x_k) = h(f(y, x_1, ..., x_k))
Further improvements are possible: Severin [9] proved that even the pure iteration rule without parameters, namely
    f(0) = g
    f(S(y)) = h(f(y))
with g a fixed number and h a previously constructed unary function,
suffices to generate all unary primitive recursive functions if we extend the set of initial functions with truncated subtraction x ∸ y. We get all primitive recursive functions if we additionally include + as an initial function.
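The following is a small Python sketch of the pure iteration rule read as an operator, under the assumption spelled out above that the step function h sees only the previous value; the names pure_iteration and add_by_iteration are illustrative.

    def pure_iteration(g, h):
        # f(0, x...) = g(x...);  f(y + 1, x...) = h(f(y, x...))
        def f(y, *args):
            acc = g(*args)
            for _ in range(y):
                acc = h(acc)
            return acc
        return f

    # Example: iterating the successor recovers addition, f(y, x) = x + y.
    add_by_iteration = pure_iteration(lambda x: x, lambda v: v + 1)
    print(add_by_iteration(3, 4))  # 7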
Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing. Course-of-values recursion defines primitive recursive functions. Some forms of mutual recursion also define primitive recursive functions.
The functions that can be programmed in the LOOP programming language are exactly the primitive recursive functions. This gives a different characterization of the power of these functions. The main limitation of the LOOP language, compared to a Turing-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run.
An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basic for loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such as while loops or IF-THEN plus GOTO, are admitted in a primitive recursive language.
The LOOP language, introduced in a 1967 paper by Albert R. Meyer and Dennis M. Ritchie, [10] is such a language. Its computing power coincides with the primitive recursive functions. A variant of the LOOP language is Douglas Hofstadter's BlooP in Gödel, Escher, Bach. Adding unbounded loops (WHILE, GOTO) makes the language general recursive and Turing-complete, as are all real-world computer programming languages.
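As an informal illustration (Python stands in for LOOP here, and loop_add and loop_mul are illustrative names), every loop below runs a number of times that is fixed before the loop body starts, which is exactly the restriction described above.

    def loop_add(x, y):
        r = x
        for _ in range(y):   # bound y is fixed before the loop is entered
            r = r + 1
        return r

    def loop_mul(x, y):
        r = 0
        for _ in range(y):   # bound y is fixed before the loop is entered
            r = loop_add(r, x)
        return r

    print(loop_add(2, 3), loop_mul(3, 4))  # 5 12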
The definition of primitive recursive functions implies that their computation halts on every input (after a finite number of steps). On the other hand, the halting problem is undecidable for general recursive functions.
The primitive recursive functions are closely related to mathematical finitism, and are used in several contexts in mathematical logic where a particularly constructive system is desired. Primitive recursive arithmetic (PRA), a formal axiom system for the natural numbers and the primitive recursive functions on them, is often used for this purpose.
PRA is much weaker than Peano arithmetic, which is not a finitistic system. Nevertheless, many results in number theory and in proof theory can be proved in PRA. For example, Gödel's incompleteness theorem can be formalized into PRA, giving the following theorem:
    If T is a theory of arithmetic satisfying certain hypotheses, with Gödel sentence G_T, then PRA proves the implication Con(T) → G_T.
Similarly, many of the syntactic results in proof theory can be proved in PRA, which implies that there are primitive recursive functions that carry out the corresponding syntactic transformations of proofs.
In proof theory and set theory, there is an interest in finitistic consistency proofs, that is, consistency proofs that themselves are finitistically acceptable. Such a proof establishes that the consistency of a theory T implies the consistency of a theory S by producing a primitive recursive function that can transform any proof of an inconsistency from S into a proof of an inconsistency from T. One sufficient condition for a consistency proof to be finitistic is the ability to formalize it in PRA. For example, many consistency results in set theory that are obtained by forcing can be recast as syntactic proofs that can be formalized in PRA.
Recursive definitions had been used more or less formally in mathematics before, but the construction of primitive recursion is traced back to Richard Dedekind's theorem 126 of his Was sind und was sollen die Zahlen? (1888). This work was the first to give a proof that a certain recursive construction defines a unique function. [11] [12] [13]
Primitive recursive arithmetic was first proposed by Thoralf Skolem [14] in 1923.
The current terminology was coined by Rózsa Péter (1934) after Ackermann had proved in 1928 that the function which today is named after him was not primitive recursive, an event which prompted the need to rename what until then were simply called recursive functions. [12] [13]