Recursive definition

Four stages in the construction of a Koch snowflake. As with many other fractals, the stages are obtained via a recursive definition.

In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set.
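
For instance, the Fibonacci numbers are fixed by two base cases and one recursive rule. A minimal Python sketch (an illustration added here, not part of the original text):

def fibonacci(n: int) -> int:
    # Base cases: F(0) = 0 and F(1) = 1.
    if n < 2:
        return n
    # Recursive rule: F(n) = F(n - 1) + F(n - 2).
    return fibonacci(n - 1) + fibonacci(n - 2)

print([fibonacci(n) for n in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]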

A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function n! is defined by the rules

  0! = 1,
  (n + 1)! = (n + 1) · n!.

This definition is valid for each natural number n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function n!, starting from n = 0 and proceeding onwards with n = 1, 2, 3 etc.
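
A minimal Python sketch (our illustration, not part of the original text) transcribes the two rules directly; factorial(n) recurs on the smaller input n - 1 until it reaches the base case 0:

def factorial(n: int) -> int:
    if n == 0:
        return 1                      # base case: 0! = 1
    return n * factorial(n - 1)       # recursive rule: n! = n * (n - 1)!

print(factorial(5))   # 120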

The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction. [1]

An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set of natural numbers is:

  1. 1 is in ℕ.
  2. If an element n is in ℕ, then n + 1 is in ℕ.
  3. ℕ is the smallest set satisfying (1) and (2).

There are many sets that satisfy (1) and (2) – for example, the set {1, 1.649, 2, 2.649, 3, 3.649, …} satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members.

Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0 (or 1), and the property holds of n + 1 whenever it holds of n, then the property holds of all natural numbers (Aczel 1977:742).

Form of recursive definitions

Most recursive definitions have two foundations: a base case (basis) and an inductive clause.

The difference between a circular definition and a recursive definition is that a recursive definition must always have base cases, cases that satisfy the definition without being defined in terms of the definition itself, and that all other instances in the inductive clauses must be "smaller" in some sense (i.e., closer to those base cases that terminate the recursion) — a rule also known as "recur only with a simpler case". [2]

In contrast, a circular definition may have no base case, and even may define the value of a function in terms of that value itself — rather than on other values of the function. Such a situation would lead to an infinite regress.
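
The contrast can be made concrete with a small Python sketch (ours, not from the source): the first function recurs only with a simpler case and terminates at a base case, while the second is effectively a circular definition with no base case and never terminates.

def sum_to(n: int) -> int:
    if n == 0:
        return 0                  # base case: defined without reference to sum_to itself
    return n + sum_to(n - 1)      # recur only with a simpler case (n - 1 is closer to 0)

def circular(n: int) -> int:
    return circular(n)            # defines the value in terms of itself: infinite regress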

That recursive definitions are valid – meaning that a recursive definition identifies a unique function – is a theorem of set theory known as the recursion theorem, the proof of which is non-trivial. [3] Where the domain of the function is the natural numbers, sufficient conditions for the definition to be valid are that the value of f(0) (i.e., the base case) is given, and that for n > 0, an algorithm is given for determining f(n) in terms of n, f(0), f(1), …, f(n − 1) (i.e., the inductive clause).

More generally, recursive definitions of functions can be made whenever the domain is a well-ordered set, using the principle of transfinite recursion. The formal criteria for what constitutes a valid recursive definition are more complex for the general case. An outline of the general proof and the criteria can be found in James Munkres' Topology. However, a specific case (domain is restricted to the positive integers instead of any well-ordered set) of the general recursive definition will be given below. [4]

Principle of recursive definition

Let A be a set and let a0 be an element of A. If ρ is a function which assigns, to each function f mapping a nonempty section of the positive integers into A, an element of A, then there exists a unique function h : Z+ → A such that

  h(1) = a0,
  h(i) = ρ(h|{1, 2, ..., i − 1}) for i > 1.
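
A hypothetical Python sketch of the principle (the names a0, rho, and h are ours, chosen to match the statement above): rho is given the finite list of values h(1), ..., h(i - 1), that is, h restricted to a nonempty section of the positive integers, and returns the next element of A.

def h(n, a0, rho):
    values = [a0]                    # h(1) = a0
    for i in range(2, n + 1):
        values.append(rho(values))   # h(i) = rho(h restricted to {1, ..., i - 1})
    return values[n - 1]

# Example: with a0 = 1 and rho(f) = (len(f) + 1) * f[-1], h(n) is n factorial.
print(h(5, 1, lambda f: (len(f) + 1) * f[-1]))   # 120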

Examples of recursive definitions

Elementary functions

Addition is defined recursively, based on counting, as

  0 + a = a,
  (1 + n) + a = 1 + (n + a).

Multiplication is defined recursively as

  0 · a = 0,
  (1 + n) · a = a + (n · a).

Exponentiation is defined recursively as

  a^0 = 1,
  a^(1 + n) = a · a^n.

Binomial coefficients C(n, k) can be defined recursively, via Pascal's rule, as

  C(n, 0) = 1,
  C(0, k) = 0 for k > 0,
  C(n + 1, k + 1) = C(n, k) + C(n, k + 1).
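
The following Python sketch (our transcription, not from the article) implements these recursions literally, treating a non-negative integer n as either zero or the successor of n - 1:

def add(n: int, a: int) -> int:
    return a if n == 0 else 1 + add(n - 1, a)         # 0 + a = a; (1 + n) + a = 1 + (n + a)

def mul(n: int, a: int) -> int:
    return 0 if n == 0 else add(a, mul(n - 1, a))     # 0 * a = 0; (1 + n) * a = a + (n * a)

def power(a: int, n: int) -> int:
    return 1 if n == 0 else mul(a, power(a, n - 1))   # a^0 = 1; a^(1 + n) = a * a^n

def binom(n: int, k: int) -> int:
    if k == 0:
        return 1                                      # C(n, 0) = 1
    if n == 0:
        return 0                                      # C(0, k) = 0 for k > 0
    return binom(n - 1, k - 1) + binom(n - 1, k)      # Pascal's rule

print(add(2, 3), mul(2, 3), power(2, 3), binom(5, 2))   # 5 6 8 10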

Prime numbers

The set of prime numbers can be defined as the unique set of positive integers satisfying

  1. the number 1 is not prime and the number 2 is prime;
  2. any integer greater than 2 is prime if and only if it is not divisible by any prime smaller than itself.

The primality of the integer 2 is the base case; checking the primality of any larger integer X by this definition requires knowing the primality of every integer between 2 and X, which is well defined by this definition. That last point can be proved by induction on X, for which it is essential that the second clause says "if and only if"; if it had just said "if", the primality of, for instance, the number 4 would not be clear, and the further application of the second clause would be impossible.
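
A Python sketch of this definition (added here for illustration): is_prime(x) consults the primality of every smaller integer, exactly as the second clause requires.

def is_prime(x: int) -> bool:
    if x < 2:
        return False                                      # 1 is not prime
    if x == 2:
        return True                                       # base case: 2 is prime
    # x > 2 is prime if and only if no smaller prime divides it
    return all(x % p != 0 for p in range(2, x) if is_prime(p))

print([n for n in range(1, 30) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]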

Non-negative even numbers

The non-negative even numbers can be defined as the set E such that

  1. 0 is in E (base clause);
  2. for any element x in E, x + 2 is in E (inductive clause);
  3. nothing is in E unless it is obtained from the base and inductive clauses (extremal clause).
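
A minimal Python sketch (our illustration): membership in the non-negative even numbers read off directly from the clauses, with the inductive clause applied backwards.

def is_even(n: int) -> bool:
    if n == 0:
        return True           # base clause: 0 is even
    if n < 0:
        return False          # not obtainable from the clauses (extremal clause)
    return is_even(n - 2)     # n is in the set exactly when n - 2 is (inductive clause)

print([n for n in range(10) if is_even(n)])   # [0, 2, 4, 6, 8]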

Well formed formula

The notion of a well-formed formula (wff) in propositional logic is defined recursively as the smallest set satisfying the three rules:

  1. p is a wff if p is a propositional variable.
  2. ¬ p is a wff if p is a wff.
  3. (p • q) is a wff if p and q are wffs and • is one of the logical connectives ∨, ∧, →, or ↔.

The definition can be used to determine whether any particular string of symbols is a wff. For example, (p ∧ ¬q) is a wff: p and q are wffs by rule 1, ¬q is then a wff by rule 2, and so (p ∧ ¬q) is a wff by rule 3. A string such as p¬q∧, which cannot be built up by the three rules, is not a wff.
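
A hypothetical Python sketch of such a check (the tuple representation and connective names are ours): formulas are nested tuples built from variable names, ('not', f), and (op, f, g).

CONNECTIVES = {'or', 'and', 'implies', 'iff'}

def is_wff(formula) -> bool:
    if isinstance(formula, str):
        return formula.isalpha()                           # rule 1: a propositional variable
    if isinstance(formula, tuple) and len(formula) == 2 and formula[0] == 'not':
        return is_wff(formula[1])                          # rule 2: negation of a wff
    if isinstance(formula, tuple) and len(formula) == 3 and formula[0] in CONNECTIVES:
        return is_wff(formula[1]) and is_wff(formula[2])   # rule 3: binary connective
    return False

print(is_wff(('and', 'p', ('not', 'q'))))   # True: corresponds to (p ∧ ¬q)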

Recursive definitions as logic programs

Logic programs can be understood as sets of recursive definitions. [5] [6] For example, the recursive definition of even number can be written as the logic program:

even(0).
even(s(s(X))) :- even(X).

Here :- represents if, and s(X) represents the successor of X, namely X+1, as in Peano arithmetic.

The logic programming language Prolog uses backward reasoning to solve goals and answer queries. For example, given the query ?-even(s(s(0))) it produces the answer true. Given the query ?-even(s(0)) it produces the answer false.

The program can be used not only to check whether a query is true, but also to generate answers that are true. For example:

?- even(X).
X = 0
X = s(s(0))
X = s(s(s(s(0))))
X = s(s(s(s(s(s(0))))))
.....

Logic programs significantly extend recursive definitions by including the use of negative conditions, implemented by negation as failure, as in the definition:

even(0).
even(s(X)) :- not(even(X)).

Notes

  1. Henkin, Leon (1960). "On Mathematical Induction". The American Mathematical Monthly. 67 (4): 323–338. doi:10.2307/2308975. ISSN 0002-9890. JSTOR 2308975.
  2. "All About Recursion". www.cis.upenn.edu. Retrieved 2019-10-24.
  3. For a proof of the recursion theorem, see "On Mathematical Induction" (1960) by Leon Henkin.
  4. Munkres, James (1975). Topology, a first course (1st ed.). New Jersey: Prentice-Hall. p. 68, exercises 10 and 12. ISBN 0-13-925495-1.
  5. Denecker, M.; Ternovska, E. (2008). "A logic of nonmonotone inductive definitions". ACM Transactions on Computational Logic. 9 (2): 14:1–14:52.
  6. Warren, D. S.; Denecker, M. (2023). "A better logical semantics for Prolog". In Prolog: The Next 50 Years (pp. 82–92). Cham: Springer Nature Switzerland.
