In mathematics, the hyperoperation sequence [nb 1] is an infinite sequence of arithmetic operations (called hyperoperations in this context) [1] [11] [13] that starts with a unary operation (the successor function with n = 0). The sequence continues with the binary operations of addition (n = 1), multiplication (n = 2), and exponentiation (n = 3).
After that, the sequence proceeds with further binary operations extending beyond exponentiation, using right-associativity. For the operations beyond exponentiation, the nth member of this sequence is named by Reuben Goodstein after the Greek prefix of n suffixed with -ation (such as tetration (n = 4), pentation (n = 5), hexation (n = 6), etc.) [5] and can be written as a ↑^(n−2) b, using n − 2 arrows in Knuth's up-arrow notation. Each hyperoperation may be understood recursively in terms of the previous one by:

Hn(a, b) = Hn−1(a, Hn(a, b − 1)), for n ≥ 1 and b ≥ 1
It may also be defined according to the recursion rule part of the definition, as in Knuth's up-arrow version of the Ackermann function:

a ↑^(m) b = a ↑^(m−1) ( a ↑^(m) (b − 1) )
This can be used to easily show numbers much larger than those which scientific notation can handle, such as Skewes's number and googolplexplex, but there are some numbers which even it cannot easily show, such as Graham's number and TREE(3). [14]
This recursion rule is common to many variants of hyperoperations.
The hyperoperation sequence is the sequence of binary operations Hn on the natural numbers, defined recursively as follows:

Hn(a, b) =
  b + 1, if n = 0
  a, if n = 1 and b = 0
  0, if n = 2 and b = 0
  1, if n ≥ 3 and b = 0
  Hn−1(a, Hn(a, b − 1)), otherwise.
(Note that for n = 0, the binary operation essentially reduces to a unary operation (successor function) by ignoring the first argument.)
For n = 0, 1, 2, 3, this definition reproduces the basic arithmetic operations of successor (which is a unary operation), addition, multiplication, and exponentiation, respectively, as

H0(a, b) = 1 + b,
H1(a, b) = a + b,
H2(a, b) = a · b,
H3(a, b) = a^b.
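The base cases and the recursion rule above translate directly into a short recursive program. The following sketch (the function name H is our own, chosen to match the Hn(a, b) notation) follows the definition literally:

```python
def H(n, a, b):
    """Hyperoperation H_n(a, b) for non-negative integers (a sketch)."""
    if n == 0:
        return b + 1                     # successor; the first argument is ignored
    if b == 0:
        return a if n == 1 else 0 if n == 2 else 1   # a+0 = a, a*0 = 0, a^0 = a^^0 = ... = 1
    return H(n - 1, a, H(n, a, b - 1))   # Hn(a, b) = Hn-1(a, Hn(a, b - 1))

# successor, addition, multiplication, exponentiation, tetration:
assert H(0, 0, 6) == 7
assert H(1, 2, 3) == 5
assert H(2, 2, 3) == 6
assert H(3, 2, 3) == 8
assert H(4, 2, 3) == 16      # 2^^3 = 2^(2^2)
```

Only tiny inputs are feasible: the values explode so fast (H(5, 2, 4) is already a power tower of 65536 twos) that this naive recursion is a definition check, not a practical calculator.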
The operations for n ≥ 3 can be written in Knuth's up-arrow notation as Hn(a, b) = a ↑^(n−2) b.
So what will be the next operation after exponentiation? We defined multiplication so that a × 3 = a + a + a, and defined exponentiation so that a^3 = a × a × a, so it seems logical to define the next operation, tetration, so that tetration(a, 3) = a^(a^a), with a tower of three 'a'. Analogously, the pentation of (a, 3) will be tetration(a, tetration(a, a)), with three "a" in it.
Knuth's notation could be extended to negative indices ≥ −2 in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:

Hn(a, b) = a ↑^(n−2) b, for n ≥ 0.
The hyperoperations can thus be seen as an answer to the question "what's next" in the sequence: successor, addition, multiplication, exponentiation, and so on. Noting that

a + b = a + 1 + 1 + ⋯ + 1 (with b copies of 1),
a × b = a + a + ⋯ + a (with b copies of a),
a^b = a × a × ⋯ × a (with b copies of a),
the relationship between basic arithmetic operations is illustrated, allowing the higher operations to be defined naturally as above. The parameters of the hyperoperation hierarchy are sometimes referred to by their analogous exponentiation term; [15] so a is the base, b is the exponent (or hyperexponent), [12] and n is the rank (or grade), [6] and moreover, Hn(a, b) is read as "the bth n-ation of a", e.g. H4(7, 9) is read as "the 9th tetration of 7", and H123(456, 789) is read as "the 789th 123-ation of 456".
In common terms, the hyperoperations are ways of compounding numbers that increase in growth based on the iteration of the previous hyperoperation. The concepts of successor, addition, multiplication and exponentiation are all hyperoperations; the successor operation (producing x + 1 from x) is the most primitive; the addition operator specifies the number of times 1 is to be added to itself to produce a final value; multiplication specifies the number of times a number is to be added to itself; and exponentiation refers to the number of times a number is to be multiplied by itself.
Define iteration of a function f of two variables as

f^0(a, x) = x,
f^b(a, x) = f(a, f^(b−1)(a, x)), for b ≥ 1.
The hyperoperation sequence can be defined in terms of iteration, as follows. For all integers n ≥ 0 define

Hn(a, b) =
  b + 1, if n = 0
  a + b, if n = 1
  (Hn−1)^b (a, 0), if n = 2
  (Hn−1)^b (a, 1), if n ≥ 3.
As iteration is associative, the last line can be replaced by

Hn(a, b) = (Hn−1)^(b−1) (a, a), for n ≥ 3 and b ≥ 1.
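Assuming, as in the basic definition, that the iteration starts from the seed 0 for multiplication and 1 for the operations above it, the iteration-based definition can be sketched as:

```python
def iterate(f, a, x, b):
    """f^b(a, x): apply y -> f(a, y) to x, b times."""
    for _ in range(b):
        x = f(a, x)
    return x

def H(n, a, b):
    """Hyperoperations defined via iteration (a sketch)."""
    if n == 0:
        return b + 1
    if n == 1:
        return a + b
    seed = 0 if n == 2 else 1              # a*0 = 0; a^0 = a^^0 = ... = 1
    return iterate(lambda a, x: H(n - 1, a, x), a, seed, b)

assert H(2, 2, 3) == 6       # (+2) applied 3 times to 0
assert H(3, 2, 10) == 1024   # (*2) applied 10 times to 1
assert H(4, 3, 2) == 27      # 3^^2 = 3^3
```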
The definitions of the hyperoperation sequence can naturally be transposed to term rewriting systems (TRS).
The basic definition of the hyperoperation sequence corresponds with the reduction rules

(r1) H(0, a, b) → b + 1
(r2) H(1, a, 0) → a
(r3) H(2, a, 0) → 0
(r4) H(n, a, 0) → 1, for n ≥ 3
(r5) H(n, a, b) → H(n − 1, a, H(n, a, b − 1)), for n ≥ 1 and b ≥ 1.
To compute Hn(a, b) one can use a stack, which initially contains the elements n, a, b.
Then, repeatedly until no longer possible, three elements are popped and replaced according to the rules [nb 2]
Schematically, starting from :
WHILE stackLength <> 1 { POP 3 elements; PUSH 1 or 5 elements according to the rules r1, r2, r3, r4, r5; }
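The scheme above can be sketched as follows; the rule labels follow the text, while the function name and the stack layout (top of the stack at the end of a Python list) are our own choices:

```python
def H_stack(n, a, b):
    """Evaluate H_n(a, b) with an explicit stack instead of recursion."""
    stack = [n, a, b]                    # top of the stack = end of the list
    while len(stack) != 1:
        b = stack.pop(); a = stack.pop(); n = stack.pop()
        if n == 0:
            stack.append(b + 1)          # r1: successor
        elif n == 1 and b == 0:
            stack.append(a)              # r2: a + 0 = a
        elif n == 2 and b == 0:
            stack.append(0)              # r3: a * 0 = 0
        elif b == 0:
            stack.append(1)              # r4: n >= 3, H(n, a, 0) = 1
        else:
            # r5: H(n,a,b) -> H(n-1, a, H(n,a,b-1)); the inner triple is
            # pushed last so that it is reduced first
            stack += [n - 1, a, n, a, b - 1]
    return stack[0]

assert H_stack(3, 2, 3) == 8
assert H_stack(4, 2, 3) == 16
```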
Example
Compute . [16]
The reduction sequence is [nb 2] [17]
When implemented using a stack, the successive stack configurations represent the corresponding equations.
The definition using iteration leads to a different set of reduction rules
As iteration is associative, instead of rule r11 one can define
As in the previous section, the computation of Hn(a, b) can be implemented using a stack.
Initially the stack contains the four elements .
Then, until termination, four elements are popped and replaced according to the rules [nb 2]
Schematically, starting from :
WHILE stackLength <> 1 { POP 4 elements; PUSH 1 or 7 elements according to the rules r6, r7, r8, r9, r10, r11; }
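One concrete realization of the four-element scheme is sketched below. The frame layout ⟨m, a, b, x⟩, meaning "apply Hm−1(a, ·) to x, b times", is our own bookkeeping choice and may differ in detail from the rules r6–r11 of the text, but it reproduces the POP-4 / PUSH-1-or-7 pattern:

```python
def seed(m):
    return 0 if m == 2 else 1            # a*0 = 0; a^0 = a^^0 = ... = 1

def H_iter_stack(n, a, b):
    """Stack machine for the iteration-based definition (a sketch).
    A frame <m, a, b, x> stands for 'apply H_{m-1}(a, .) to x, b times'."""
    if n == 0:
        return b + 1
    if n == 1:
        return a + b
    stack = [n, a, b, seed(n)]
    while len(stack) != 1:
        x = stack.pop(); b = stack.pop(); a = stack.pop(); m = stack.pop()
        if b == 0:
            stack.append(x)              # nothing left to iterate
        elif m == 2:
            stack.append(x + a * b)      # iterating (+a) b times on x
        else:
            # peel off one application of H_{m-1}: the inner frame
            # <m-1, a, x, seed(m-1)> computes H_{m-1}(a, x)
            stack += [m, a, b - 1, m - 1, a, x, seed(m - 1)]
    return stack[0]

assert H_iter_stack(3, 2, 10) == 1024
assert H_iter_stack(4, 2, 3) == 16
assert H_iter_stack(5, 2, 3) == 65536
```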
Example
Compute .
On input the successive stack configurations are
The corresponding equalities are
When reduction rule r11 is replaced by rule r12, the stack is transformed according to
The successive stack configurations will then be
The corresponding equalities are
Remarks
Below is a list of the first seven (0th to 6th) hyperoperations (0⁰ is defined as 1).
n | Operation, Hn(a, b) | Definition | Names | Domain |
---|---|---|---|---|
0 | 1 + b | 1 + b | Increment, successor, zeration, hyper0 | Arbitrary |
1 | a + b | a + (1 + 1 + ⋯ + 1), with b copies of 1 | Addition, hyper1 | Arbitrary |
2 | a · b | a + a + ⋯ + a, with b copies of a | Multiplication, hyper2 | Arbitrary |
3 | a^b or a ↑ b | a · a · ⋯ · a, with b copies of a | Exponentiation, hyper3 | b real, with some multivalued extensions to complex numbers |
4 | a ↑↑ b | a ↑ (a ↑ (⋯ ↑ a)), with b copies of a | Tetration, hyper4 | a ≥ 0 or an integer, b an integer ≥ −1 [nb 5] (with some proposed extensions) |
5 | a ↑↑↑ b | a ↑↑ (a ↑↑ (⋯ ↑↑ a)), with b copies of a | Pentation, hyper5 | a, b integers ≥ −1 [nb 5] |
6 | a ↑↑↑↑ b | a ↑↑↑ (a ↑↑↑ (⋯ ↑↑↑ a)), with b copies of a | Hexation, hyper6 | a, b integers ≥ −1 [nb 5] |
Hn(0, b) = b + 1 (n = 0); b (n = 1); 0 (n = 2); 1 if b = 0, else 0 (n = 3); 1 if b is even, 0 if b is odd (n ≥ 4)
Hn(1, b) = b + 1 (n = 0); 1 + b (n = 1); b (n = 2); 1 (n ≥ 3)
Hn(a, 0) = 1 (n = 0); a (n = 1); 0 (n = 2); 1 (n ≥ 3)
Hn(a, 1) = 2 (n = 0); a + 1 (n = 1); a (n ≥ 2)
Hn(a, a) = a + 1 (n = 0); Hn+1(a, 2) (n ≥ 1)
Hn(a, −1) = 0 (n = 0); a − 1 (n = 1); −a (n = 2); 1/a (n = 3); 0 (n ≥ 4) [nb 5]
Hn(2, 2) = 3 (n = 0); 4 (n ≥ 1)
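The integer-valued identities above can be spot-checked mechanically. The sketch below re-uses the basic recursive definition (the function name H is our own):

```python
def H(n, a, b):
    """Basic recursive definition of H_n(a, b)."""
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else 0 if n == 2 else 1
    return H(n - 1, a, H(n, a, b - 1))

for a in range(2, 5):
    for n in range(2, 6):
        assert H(n, a, 1) == a               # Hn(a, 1) = a for n >= 2
    for n in range(3, 6):
        assert H(n, a, 0) == 1               # Hn(a, 0) = 1 for n >= 3
    for n in range(1, 4):
        assert H(n, a, a) == H(n + 1, a, 2)  # Hn(a, a) = Hn+1(a, 2)

assert all(H(n, 2, 2) == 4 for n in range(1, 6))                    # Hn(2, 2) = 4
assert all(H(n, 1, b) == 1 for n in range(3, 7) for b in range(5))  # Hn(1, b) = 1
```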
One of the earliest discussions of hyperoperations was that of Albert Bennett in 1914, who developed some of the theory of commutative hyperoperations (see below). [6] About 12 years later, Wilhelm Ackermann defined the function φ(a, b, n), which somewhat resembles the hyperoperation sequence. [20]
In his 1947 paper, [5] Reuben Goodstein introduced the specific sequence of operations that are now called hyperoperations, and also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation (because they correspond to the indices 4, 5, etc.). As a three-argument function, e.g., G(n, a, b) = Hn(a, b), the hyperoperation sequence as a whole is seen to be a version of the original Ackermann function — recursive but not primitive recursive — as modified by Goodstein to incorporate the primitive successor function together with the other three basic operations of arithmetic (addition, multiplication, exponentiation), and to make a more seamless extension of these beyond exponentiation.
The original three-argument Ackermann function φ uses the same recursion rule as does Goodstein's version of it (i.e., the hyperoperation sequence), but differs from it in two ways. First, φ(a, b, n) defines a sequence of operations starting from addition (n = 0) rather than the successor function, then multiplication (n = 1), exponentiation (n = 2), etc. Secondly, the initial conditions for φ result in φ(a, b, 3) = a [4] (b + 1), thus differing from the hyperoperations beyond exponentiation. [7] [21] [22] The significance of the b + 1 in the previous expression is that φ(a, b, 3) = a^(a^(⋯^a)), where b counts the number of operators (exponentiations), rather than counting the number of operands ("a"s) as does the b in a [4] b, and so on for the higher-level operations. (See the Ackermann function article for details.)
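The two differences can be checked directly. The sketch below implements Ackermann's original φ, under the initial conditions described above, alongside the hyperoperation sequence:

```python
def H(n, a, b):
    """Goodstein's hyperoperation sequence (basic recursive definition)."""
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else 0 if n == 2 else 1
    return H(n - 1, a, H(n, a, b - 1))

def phi(a, b, n):
    """Ackermann's original three-argument function (a sketch)."""
    if n == 0:
        return a + b                                  # addition sits at n = 0
    if b == 0:
        return 0 if n == 1 else 1 if n == 2 else a    # phi(a, 0, n) = a for n > 2
    return phi(a, phi(a, b - 1, n), n - 1)            # same recursion rule as H

# Up to exponentiation the two sequences agree, with indices shifted by one:
for a in (2, 3):
    for b in (0, 1, 2, 3):
        for n in (0, 1, 2):
            assert phi(a, b, n) == H(n + 1, a, b)

# Beyond that they differ: phi(a, b, 3) equals a [4] (b + 1), not a [4] b.
assert phi(2, 2, 3) == H(4, 2, 3) == 16
```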
This is a list of notations that have been used for hyperoperations.
Name | Notation, equivalent to Hn(a, b) | Comment |
---|---|---|
Knuth's up-arrow notation | a ↑^(n−2) b | Used by Knuth [23] (for n ≥ 3), and found in several reference books. [24] [25] |
Hilbert's notation | φn(a, b) | Used by David Hilbert. [26] |
Goodstein's notation | G(n, a, b) | Used by Reuben Goodstein. [5] |
Original Ackermann function | φ(a, b, n − 1) | Used by Wilhelm Ackermann (for n ≥ 1) [20] |
Ackermann–Péter function | A(n, b − 3) + 3 | This corresponds to hyperoperations for base 2 (a = 2) |
Nambiar's notation | a ⊗^n b | Used by Nambiar (for n ≥ 1) [27] |
Superscript notation | a^(n) b | Used by Robert Munafo. [21] |
Subscript notation (for lower hyperoperations) | a_(n) b | Used for lower hyperoperations by Robert Munafo. [21] |
Operator notation (for "extended operations") | a O(n−1) b | Used for lower hyperoperations by John Doner and Alfred Tarski (for n ≥ 1). [28] |
Square bracket notation | a [n] b | Used in many online forums; convenient for ASCII. |
Conway chained arrow notation | a → b → (n − 2) | Used by John Horton Conway (for n ≥ 3) |
In 1928, Wilhelm Ackermann defined a 3-argument function φ(a, b, n) which gradually evolved into a 2-argument function known as the Ackermann function. The original Ackermann function was less similar to modern hyperoperations, because his initial conditions start with φ(a, 0, n) = a for all n > 2. Also he assigned addition to n = 0, multiplication to n = 1 and exponentiation to n = 2, so the initial conditions produce very different operations for tetration and beyond.
n | Operation | Comment |
---|---|---|
0 | φ(a, b, 0) = a + b | |
1 | φ(a, b, 1) = a · b | |
2 | φ(a, b, 2) = a^b | |
3 | φ(a, b, 3) = a [4] (b + 1) | An offset form of tetration. The iteration of this operation is different than the iteration of tetration. |
4 | φ(a, b, 4) | Not to be confused with pentation. |
Another initial condition that has been used is A(0, b) = b + 1 (where the base is constant a = 2), due to Rózsa Péter, which does not form a hyperoperation hierarchy.
In 1984, C. W. Clenshaw and F. W. J. Olver began the discussion of using hyperoperations to prevent computer floating-point overflows. [29] Since then, many other authors [30] [31] [32] have renewed interest in the application of hyperoperations to floating-point representation (since Hn(a, b) are all defined for b = −1). While discussing tetration, Clenshaw et al. assumed an initial condition which makes yet another hyperoperation hierarchy. Just like in the previous variant, the fourth operation is very similar to tetration, but offset by one.
n | Operation | Comment |
---|---|---|
0 | ||
1 | ||
2 | ||
3 | ||
4 | An offset form of tetration. The iteration of this operation is much different than the iteration of tetration. | |
5 | Not to be confused with pentation. |
An alternative for these hyperoperations is obtained by evaluation from left to right. [9] Since

a + (b + 1) = (a + b) + 1,
a · (b + 1) = (a · b) + a,
a^(b + 1) = (a^b) · a,
define (with ° or subscript)

a ∘(n+1) (b + 1) = (a ∘(n+1) b) ∘(n) a, for n ≥ 1,

with

a ∘(1) b = a + b,
a ∘(n) 1 = a, for n ≥ 2.
This was extended to transfinite ordinal numbers by Doner and Tarski. [33]
It follows from Definition 1(i), Corollary 2(ii), and Theorem 9 that, for a ≥ 2 and b ≥ 1, [ original research? ]
But this suffers a kind of collapse, failing to form the "power tower" traditionally expected of hyperoperators: [34] [nb 6]

a ∘(4) b = a^(a^(b − 1)), for b ≥ 1.
If α≥ 2 and γ≥ 2, [28] [Corollary 33(i)] [nb 6]
n | Operation | Comment |
---|---|---|
0 | b + 1 | Increment, successor, zeration |
1 | a + b | |
2 | a · b | |
3 | a^b | |
4 | a^(a^(b − 1)) | Not to be confused with tetration. |
5 | a ∘(5) b | Not to be confused with pentation. Similar to tetration. |
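The collapse is easy to observe numerically. Below is a sketch of the left-associated ("lower") hyperoperations for integers n ≥ 1 and b ≥ 1 (the function name is our own):

```python
def lower(n, a, b):
    """a o_(n) b, evaluated left to right; defined here for n >= 1, b >= 1."""
    if n == 1:
        return a + b
    if b == 1:
        return a
    # a o_(n) b = (a o_(n) (b-1)) o_(n-1) a
    return lower(n - 1, lower(n, a, b - 1), a)

assert lower(2, 3, 4) == 12          # multiplication
assert lower(3, 2, 5) == 32          # exponentiation
# level 4 collapses to a^(a^(b-1)) instead of a genuine power tower:
assert lower(4, 2, 4) == 2**(2**3) == 256
assert lower(4, 3, 3) == 3**(3**2) == 19683
```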
Commutative hyperoperations were considered by Albert Bennett as early as 1914, [6] which is possibly the earliest remark about any hyperoperation sequence. Commutative hyperoperations are defined by the recursion rule

Fn+1(a, b) = exp(Fn(ln(a), ln(b))),
which is symmetric in a and b, meaning all hyperoperations are commutative. This sequence does not contain exponentiation, and so does not form a hyperoperation hierarchy.
n | Operation | Comment |
---|---|---|
0 | ln(e^a + e^b) | Smooth maximum |
1 | a + b | |
2 | a · b | This is due to the properties of the logarithm. |
3 | a^(ln b) = b^(ln a) | |
4 | exp(exp(ln(ln a) · ln(ln b))) | Not to be confused with tetration. |
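A sketch of Bennett's recursion (valid for positive real arguments; for n ≥ 3 the nested logarithms additionally require arguments greater than 1; the base case at n = 1 is written directly to avoid taking logarithms of negative intermediate values):

```python
import math

def F(n, a, b):
    """Bennett's commutative hyperoperations (a sketch)."""
    if n == 0:
        return math.log(math.exp(a) + math.exp(b))   # "smooth maximum"
    if n == 1:
        return a + b                                 # = exp(F_0(ln a, ln b))
    return math.exp(F(n - 1, math.log(a), math.log(b)))

assert math.isclose(F(1, 2, 3), 5)                   # addition
assert math.isclose(F(2, 2, 3), 6)                   # multiplication
assert math.isclose(F(3, 2, 5), F(3, 5, 2))          # commutative, but not a^b
```

Note that F(3, a, b) = exp(ln a · ln b) is not exponentiation, which is why this sequence does not form a hyperoperation hierarchy.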
R. L. Goodstein [5] used the sequence of hyperoperators to create systems of numeration for the nonnegative integers. The so-called complete hereditary representation of integer n, at level k and base b, is an expression for n using only the first k hyperoperators and using as digits only 0, 1, ..., b − 1, together with the base b itself.
Unnecessary parentheses can be avoided by giving higher-level operators higher precedence in the order of evaluation; thus a [3] b [2] c [1] d stands for ((a [3] b) [2] c) [1] d, and so on.
In this type of base-b hereditary representation, the base itself appears in the expressions, as well as "digits" from the set {0, 1, ..., b − 1}. This compares to ordinary base-2 representation when the latter is written out in terms of the base b; e.g., in ordinary base-2 notation, 6 = (110)2 = 2 [3] 2 [2] 1 [1] 2 [3] 1 [2] 1 [1] 2 [3] 0 [2] 0, whereas the level-3 base-2 hereditary representation is 6 = 2 [3] (2 [3] 1 [2] 1 [1] 0) [2] 1 [1] (2 [3] 1 [2] 1 [1] 0). The hereditary representations can be abbreviated by omitting any instances of [1] 0, [2] 1, [3] 1, [4] 1, etc.; for example, the above level-3 base-2 representation of 6 abbreviates to 2 [3] 2 [1] 2.
Examples: The unique base-2 representations of the number 266, at levels 1, 2, 3, 4, and 5 are as follows:
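A level-3 (exponentiation-level) hereditary representation can be generated mechanically. The sketch below emits ordinary Python operators (** for [3], * for [2], + for [1]) instead of the bracket notation, and the function name is our own:

```python
def hered(n, b):
    """Level-3 hereditary base-b representation of n, as a Python
    arithmetic expression in which every number other than the
    base b is a digit 0 .. b-1 (a sketch)."""
    if n < b:
        return str(n)
    terms, e = [], 0
    while n:
        n, d = divmod(n, b)
        if d:
            # digit * base ** (hereditarily represented exponent)
            terms.append(f"{d}*{b}**({hered(e, b)})")
        e += 1
    return "+".join(reversed(terms))

# Round-trip check: evaluating the expression recovers the number.
assert eval(hered(266, 2)) == 266
assert eval(hered(266, 3)) == 266
```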