Hyperoperation

In mathematics, the hyperoperation sequence [nb 1] is an infinite sequence of arithmetic operations (called hyperoperations in this context) [1] [11] [13] that starts with a unary operation (the successor function with n = 0). The sequence continues with the binary operations of addition (n = 1), multiplication (n = 2), and exponentiation (n = 3).

After that, the sequence proceeds with further binary operations extending beyond exponentiation, using right-associativity. For the operations beyond exponentiation, the nth member of this sequence is named by Reuben Goodstein after the Greek prefix of n suffixed with -ation (such as tetration (n = 4), pentation (n = 5), hexation (n = 6), etc.) [5] and can be written as a ↑^(n−2) b, using n − 2 arrows in Knuth's up-arrow notation. Each hyperoperation may be understood recursively in terms of the previous one by:

a[n]b = a[n − 1](a[n](b − 1)), for b ≥ 1.

It may also be defined according to the recursion rule part of the definition, as in Knuth's up-arrow version of the Ackermann function:

a ↑^(n) b = a ↑^(n−1) (a ↑^(n) (b − 1)).

This notation can be used to easily express numbers much larger than scientific notation can, such as Skewes's number and googolplexplex, but there are some numbers which even it cannot easily express, such as Graham's number and TREE(3). [14]

This recursion rule is common to many variants of hyperoperations.

Definition

Definition, most common

The hyperoperation sequence is the sequence of binary operations Hn(a, b), defined recursively as follows:

Hn(a, b) =
    b + 1, if n = 0;
    a, if n = 1 and b = 0;
    0, if n = 2 and b = 0;
    1, if n ≥ 3 and b = 0;
    Hn−1(a, Hn(a, b − 1)), otherwise.

(Note that for n = 0, the binary operation essentially reduces to a unary operation (successor function) by ignoring the first argument.)

For n = 0, 1, 2, 3, this definition reproduces the basic arithmetic operations of successor (which is a unary operation), addition, multiplication, and exponentiation, respectively, as

H0(a, b) = b + 1,
H1(a, b) = a + b,
H2(a, b) = a · b,
H3(a, b) = a^b.

The operations for n ≥ 3 can be written in Knuth's up-arrow notation, as Hn(a, b) = a ↑^(n−2) b.
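For illustration, the recursive definition can be transcribed directly into a short program; the following Python sketch (the name H and the argument order are illustrative, not a standard) computes Hn(a, b) for small arguments:

```python
def H(n, a, b):
    """Hyperoperation H_n(a, b) for non-negative integers,
    following the recursive definition above."""
    if n == 0:
        return b + 1                      # successor (ignores a)
    if n == 1 and b == 0:
        return a                          # base case for addition
    if n == 2 and b == 0:
        return 0                          # base case for multiplication
    if n >= 3 and b == 0:
        return 1                          # base case for exponentiation and above
    return H(n - 1, a, H(n, a, b - 1))    # the recursion rule

# H(1, 2, 3) = 5, H(2, 2, 3) = 6, H(3, 2, 3) = 8, H(4, 2, 3) = 2^(2^2) = 16
```

Because everything ultimately unwinds to the successor function, even modest inputs such as H(4, 2, 5) already exceed what this naive recursion can evaluate in practice.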

So what will be the next operation after exponentiation? We defined multiplication so that H2(a, 3) = a + a + a, and defined exponentiation so that H3(a, 3) = a · a · a, so it seems logical to define the next operation, tetration, so that H4(a, 3) = a^(a^a), a tower of three a's. Analogously, the pentation of (a, 3) will be tetration(a, tetration(a, a)), with three "a"s in it.

Knuth's notation could be extended to negative indices ≥ −2 in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:

Hn(a, b) = a ↑^(n−2) b, for n ≥ 0.

The hyperoperations can thus be seen as an answer to the question "what's next" in the sequence: successor, addition, multiplication, exponentiation, and so on. Noting that

a + b = 1 + (a + (b − 1)),
a · b = a + (a · (b − 1)),
a^b = a · (a^(b − 1)),

the relationship between the basic arithmetic operations is illustrated, allowing the higher operations to be defined naturally as above. The parameters of the hyperoperation hierarchy are sometimes referred to by their analogous exponentiation terms; [15] so a is the base, b is the exponent (or hyperexponent), [12] and n is the rank (or grade). [6] Moreover, Hn(a, b) is read as "the bth n-ation of a"; e.g. H4(7, 9) is read as "the 9th tetration of 7", and H123(456, 789) is read as "the 789th 123-ation of 456".

In common terms, the hyperoperations are ways of compounding numbers that increase in growth based on the iteration of the previous hyperoperation. The concepts of successor, addition, multiplication and exponentiation are all hyperoperations; the successor operation (producing x + 1 from x) is the most primitive, the addition operator specifies the number of times 1 is to be added to itself to produce a final value, multiplication specifies the number of times a number is to be added to itself, and exponentiation refers to the number of times a number is to be multiplied by itself.

Definition, using iteration

Define iteration of a function f of two variables as

The hyperoperation sequence can be defined in terms of iteration, as follows. For all integers n ≥ 0, define

As iteration is associative, the last line can be replaced by

Computation

The definitions of the hyperoperation sequence can naturally be transposed to term rewriting systems (TRS).

TRS based on definition sub 1.1

The basic definition of the hyperoperation sequence corresponds with the reduction rules

(r1) H0(a, b) → b + 1
(r2) H1(a, 0) → a
(r3) H2(a, 0) → 0
(r4) Hn(a, 0) → 1, for n ≥ 3
(r5) Hn(a, b) → Hn−1(a, Hn(a, b − 1)), for n ≥ 1, b ≥ 1

To compute Hn(a, b), one can use a stack, which initially contains the elements n, a, b.

Then, repeatedly until no longer possible, three elements are popped and replaced according to the rules [nb 2]

Schematically, starting from the initial stack n, a, b:

WHILE stackLength <> 1
{
    POP 3 elements;
    PUSH 1 or 5 elements according to the rules r1, r2, r3, r4, r5;
}
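The loop above can be sketched in Python as a direct simulation of the five rules. The triple is stored bottom-to-top as n, a, b (so b is popped first); the function name H_stack and this stack layout are illustrative assumptions, not part of the original TRS:

```python
def H_stack(n, a, b):
    """Stack-machine evaluation of the hyperoperation H_n(a, b)."""
    stack = [n, a, b]                 # top of stack is the right end
    while len(stack) > 1:
        b, a, n = stack.pop(), stack.pop(), stack.pop()
        if n == 0:
            stack.append(b + 1)       # r1: successor
        elif n == 1 and b == 0:
            stack.append(a)           # r2: base case of addition
        elif n == 2 and b == 0:
            stack.append(0)           # r3: base case of multiplication
        elif n >= 3 and b == 0:
            stack.append(1)           # r4: base case of exponentiation and above
        else:
            # r5: H_n(a, b) -> H_{n-1}(a, H_n(a, b-1));
            # leave the outer frame (n-1, a, _) below, inner triple on top
            stack += [n - 1, a, n, a, b - 1]
    return stack[0]
```

Since every step reduces the topmost triple, the loop faithfully mimics the leftmost-innermost strategy; it avoids Python's recursion limit, though the number of steps still grows with the size of the result.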

Example

Compute . [16]

The reduction sequence is [nb 2] [17]

When implemented using a stack, on input

the stack configurations represent the equations

TRS based on definition sub 1.2

The definition using iteration leads to a different set of reduction rules

As iteration is associative, instead of rule r11 one can define

As in the previous section, the computation of Hn(a, b) can be implemented using a stack.

Initially the stack contains four elements.

Then, until termination, four elements are popped and replaced according to the rules [nb 2]

Schematically, starting from the initial stack:

WHILE stackLength <> 1
{
    POP 4 elements;
    PUSH 1 or 7 elements according to the rules r6, r7, r8, r9, r10, r11;
}

Example

Compute .

On input the successive stack configurations are

The corresponding equalities are

When reduction rule r11 is replaced by rule r12, the stack is transformed according to

The successive stack configurations will then be

The corresponding equalities are

Remarks

Examples

Below is a list of the first seven (0th to 6th) hyperoperations (0⁰ is defined as 1).

n   Hn(a, b)    Names                                       Domain
0   b + 1       hyper0, increment, successor, zeration      Arbitrary
1   a + b       hyper1, addition                            Arbitrary
2   a · b       hyper2, multiplication                      Arbitrary
3   a^b         hyper3, exponentiation                      b real, with some multivalued extensions to complex numbers
4   a ↑↑ b      hyper4, tetration                           a ≥ 0 or an integer, b an integer ≥ −1 [nb 5] (with some proposed extensions)
5   a ↑↑↑ b     hyper5, pentation                           a, b integers ≥ −1 [nb 5]
6   a ↑↑↑↑ b    hyper6, hexation

Special cases

Hn(0, b) =

b + 1, when n = 0
b, when n = 1
0, when n = 2
1, when n = 3 and b = 0 [nb 3] [nb 4]
0, when n = 3 and b > 0 [nb 3] [nb 4]
1, when n > 3 and b is even (including 0)
0, when n > 3 and b is odd

Hn(1, b) =

b, when n = 2
1, when n ≥ 3

Hn(a, 0) =

0, when n = 2
1, when n = 0, or n ≥ 3
a, when n = 1

Hn(a, 1) =

a, when n ≥ 2

Hn(a, a) =

Hn+1(a, 2), when n ≥ 1

Hn(a, −1) = [nb 5]

0, when n = 0, or n ≥ 4
a − 1, when n = 1
a, when n = 2
1/a , when n = 3

Hn(2, 2) =

3, when n = 0
4, when n ≥ 1 (easily demonstrated recursively).
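The n ≥ 1 case follows by a short induction, using the special case Hn(a, 1) = a listed above:

```latex
H_{n+1}(2,2) \;=\; H_n\bigl(2,\, H_{n+1}(2,1)\bigr) \;=\; H_n(2,2),
\qquad\text{with base case}\qquad
H_1(2,2) \;=\; 2+2 \;=\; 4 .
```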

History

One of the earliest discussions of hyperoperations was that of Albert Bennett [6] in 1914, who developed some of the theory of commutative hyperoperations (see below). About 12 years later, Wilhelm Ackermann defined the function ϕ(a, b, n), [20] which somewhat resembles the hyperoperation sequence.

In his 1947 paper, [5] Reuben Goodstein introduced the specific sequence of operations that are now called hyperoperations, and also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation (because they correspond to the indices 4, 5, etc.). Viewed as a three-argument function, the hyperoperation sequence as a whole is a version of the original Ackermann function (recursive but not primitive recursive), as modified by Goodstein to incorporate the primitive successor function together with the other three basic operations of arithmetic (addition, multiplication, exponentiation), and to make a more seamless extension of these beyond exponentiation.

The original three-argument Ackermann function ϕ uses the same recursion rule as does Goodstein's version of it (i.e., the hyperoperation sequence), but differs from it in two ways. First, ϕ defines a sequence of operations starting from addition (n = 0) rather than the successor function, then multiplication (n = 1), exponentiation (n = 2), etc. Secondly, the initial conditions for ϕ result in ϕ(a, b, 3) = a[4](b + 1), thus differing from the hyperoperations beyond exponentiation. [7] [21] [22] The significance of the b + 1 in this expression is that b counts the number of operators (exponentiations), rather than counting the number of operands ("a"s) as does the b in a[4]b, and so on for the higher-level operations. (See the Ackermann function article for details.)

Notations

This is a list of notations that have been used for hyperoperations.

Name                                         Equivalent to      Comment
Knuth's up-arrow notation                    a ↑^(n−2) b        Used by Knuth [23] (for n ≥ 3), and found in several reference books. [24] [25]
Hilbert's notation                                              Used by David Hilbert. [26]
Goodstein's notation                         G(n, a, b)         Used by Reuben Goodstein. [5]
Original Ackermann function                                     Used by Wilhelm Ackermann (for n ≥ 1). [20]
Ackermann–Péter function                                        This corresponds to hyperoperations for base 2 (a = 2).
Nambiar's notation                                              Used by Nambiar (for n ≥ 1). [27]
Superscript notation                                            Used by Robert Munafo. [21]
Subscript notation (for lower hyperoperations)                  Used for lower hyperoperations by Robert Munafo. [21]
Operator notation (for "extended operations")                   Used for lower hyperoperations by John Doner and Alfred Tarski (for n ≥ 1). [28]
Square bracket notation                      a[n]b              Used in many online forums; convenient for ASCII.
Conway chained arrow notation                a → b → (n − 2)    Used by John Horton Conway (for n ≥ 3).

Variant starting from a

In 1928, Wilhelm Ackermann defined a 3-argument function ϕ(a, b, n) which gradually evolved into a 2-argument function known as the Ackermann function. The original Ackermann function was less similar to modern hyperoperations, because his initial conditions start with ϕ(a, 0, n) = a for all n > 2. Also he assigned addition to n = 0, multiplication to n = 1 and exponentiation to n = 2, so the initial conditions produce very different operations for tetration and beyond.

n   Operation             Comment
0   ϕ(a, b, 0) = a + b
1   ϕ(a, b, 1) = a · b
2   ϕ(a, b, 2) = a^b
3   ϕ(a, b, 3)            An offset form of tetration. The iteration of this operation differs from the iteration of tetration.
4   ϕ(a, b, 4)            Not to be confused with pentation.

Another initial condition that has been used, due to Rózsa Péter, fixes the base to a constant; it does not form a hyperoperation hierarchy.

Variant starting from 0

In 1984, C. W. Clenshaw and F. W. J. Olver began the discussion of using hyperoperations to prevent computer floating-point overflows. [29] Since then, many other authors [30] [31] [32] have renewed interest in the application of hyperoperations to floating-point representation. (The hyperoperations Hn(a, b) are all defined for b = −1.) While discussing tetration, Clenshaw et al. assumed a different initial condition, which makes yet another hyperoperation hierarchy. As in the previous variant, the fourth operation is very similar to tetration, but offset by one.

n   Operation   Comment
0
1
2
3
4               An offset form of tetration. The iteration of this operation differs considerably from the iteration of tetration.
5               Not to be confused with pentation.

Lower hyperoperations

An alternative for these hyperoperations is obtained by evaluation from left to right. [9] Since

a + b = (a + (b − 1)) + 1,
a · b = (a · (b − 1)) + a,
a^b = (a^(b − 1)) · a,

define (with ° or subscript)

a ∘n+1 b = (a ∘n+1 (b − 1)) ∘n a, for b ≥ 2,

with

a ∘n 1 = a, for n ≥ 2, and a ∘1 b = a + b.

This was extended to ordinal numbers by Doner and Tarski, [33] by:

It follows from Definition 1(i), Corollary 2(ii), and Theorem 9 that, for a ≥ 2 and b ≥ 1,

But this suffers a kind of collapse, failing to form the "power tower" traditionally expected of hyperoperators; [34] [nb 6] for example, a ∘4 b = a^(a^(b − 1)).

If α ≥ 2 and γ ≥ 2, [28] [Corollary 33(i)] [nb 6]

n   Operation        Comment
0   b + 1            increment, successor, zeration
1   a + b
2   a · b
3   a^b
4   a^(a^(b − 1))    Not to be confused with tetration.
5                    Not to be confused with pentation.
                     Similar to tetration.
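Under the stated base cases (a ∘1 b = a + b and a ∘n 1 = a for n ≥ 2), the left-associative hierarchy can be sketched in Python; the name lower_H and the argument convention are illustrative:

```python
def lower_H(n, a, b):
    """Left-associative ("lower") hyperoperation a (n) b, for n >= 1, b >= 1,
    assuming a (1) b = a + b, a (n) 1 = a for n >= 2, and
    a (n+1) b = (a (n+1) (b - 1)) (n) a."""
    if n == 1:
        return a + b                               # base of the hierarchy
    if b == 1:
        return a                                   # base case for n >= 2
    return lower_H(n - 1, lower_H(n, a, b - 1), a)  # left-to-right evaluation

# n = 2 and n = 3 recover multiplication and exponentiation, but n = 4
# collapses: lower_H(4, a, b) equals a**(a**(b - 1)), not a power tower.
```

The collapse is easy to check by hand: lower_H(4, a, 3) = (a^a)^a = a^(a·a) = a^(a^2), in line with the formula above.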

Commutative hyperoperations

Commutative hyperoperations were considered by Albert Bennett as early as 1914, [6] which is possibly the earliest remark about any hyperoperation sequence. Commutative hyperoperations are defined by the recursion rule

Fn+1(a, b) = exp(Fn(ln(a), ln(b))),

which is symmetric in a and b, meaning all hyperoperations are commutative. This sequence does not contain exponentiation, and so does not form a hyperoperation hierarchy.

n   Operation                           Comment
0   F0(a, b) = ln(e^a + e^b)            Smooth maximum
1   F1(a, b) = a + b
2   F2(a, b) = a · b                    This is due to the properties of the logarithm.
3   F3(a, b) = e^(ln(a) · ln(b))
4                                       Not to be confused with tetration.
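A numeric sketch of this hierarchy in Python (the name commutative_H is illustrative; for n ≥ 3 the arguments must exceed 1 so the nested logarithms stay defined, and the closed form a + b is used at n = 1 to avoid logarithms of non-positive intermediate values):

```python
import math

def commutative_H(n, a, b):
    """Bennett-style commutative hyperoperation F_n(a, b), assuming the
    recursion F_{n+1}(a, b) = exp(F_n(ln a, ln b)) with F_0 a smooth maximum."""
    if n == 0:
        return math.log(math.exp(a) + math.exp(b))   # smooth maximum
    if n == 1:
        return a + b           # closed form; keeps deeper logs well-defined
    return math.exp(commutative_H(n - 1, math.log(a), math.log(b)))

# F_2(a, b) = a * b and F_3(a, b) = exp(ln(a) * ln(b)), symmetric in a and b
```

Note that F_3(a, b) = a^(ln b) = b^(ln a), so the sequence skips ordinary exponentiation, which is not commutative.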

Numeration systems based on the hyperoperation sequence

R. L. Goodstein [5] used the sequence of hyperoperators to create systems of numeration for the nonnegative integers. The so-called complete hereditary representation of integer n, at level k and base b, can be expressed as follows using only the first k hyperoperators and using as digits only 0, 1, ..., b − 1, together with the base b itself:

b [k] xk [k − 1] xk−1 [k − 2] ... [2] x2 [1] x1
where xk, ..., x1 are the largest integers satisfying (in turn)
b [k] xk ≤ n
b [k] xk [k − 1] xk−1 ≤ n
...
b [k] xk [k − 1] xk−1 [k − 2] ... [2] x2 [1] x1 ≤ n
Any xi exceeding b − 1 is then re-expressed in the same manner, and so on, repeating this procedure until the resulting form contains only the digits 0, 1, ..., b − 1, together with the base b.

Unnecessary parentheses can be avoided by giving higher-level operators higher precedence in the order of evaluation; thus,

level-1 representations have the form b [1] X, with X also of this form;
level-2 representations have the form b [2] X [1] Y, with X,Y also of this form;
level-3 representations have the form b [3] X [2] Y [1] Z, with X,Y,Z also of this form;
level-4 representations have the form b [4] X [3] Y [2] Z [1] W, with X,Y,Z,W also of this form;

and so on.

In this type of base-b hereditary representation, the base itself appears in the expressions, as well as "digits" from the set {0, 1, ..., b − 1}. This compares to ordinary base-2 representation when the latter is written out in terms of the base b; e.g., in ordinary base-2 notation, 6 = (110)2 = 2 [3] 2 [2] 1 [1] 2 [3] 1 [2] 1 [1] 2 [3] 0 [2] 0, whereas the level-3 base-2 hereditary representation is 6 = 2 [3] (2 [3] 1 [2] 1 [1] 0) [2] 1 [1] (2 [3] 1 [2] 1 [1] 0). The hereditary representations can be abbreviated by omitting any instances of [1] 0, [2] 1, [3] 1, [4] 1, etc.; for example, the above level-3 base-2 representation of 6 abbreviates to 2 [3] 2 [1] 2.

Examples: The unique base-2 representations of the number 266, at levels 1, 2, 3, 4, and 5 are as follows:

Level 1: 266 = 2 [1] 2 [1] 2 [1] ... [1] 2 (with 133 2s)
Level 2: 266 = 2 [2] (2 [2] (2 [2] (2 [2] 2 [2] 2 [2] 2 [2] 2 [1] 1)) [1] 1)
Level 3: 266 = 2 [3] 2 [3] (2 [1] 1) [1] 2 [3] (2 [1] 1) [1] 2
Level 4: 266 = 2 [4] (2 [1] 1) [3] 2 [1] 2 [4] 2 [2] 2 [1] 2
Level 5: 266 = 2 [5] 2 [4] 2 [1] 2 [5] 2 [2] 2 [1] 2
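The greedy choice of the top-level digits xk, ..., x1 can be sketched in Python. Both names here (H and level_digits) are illustrative; H uses closed forms below tetration to keep the recursion shallow, and level_digits returns only the top-level digits, so digits exceeding b − 1 would still need to be re-expressed recursively (not shown):

```python
def H(k, a, b):
    """Hyperoperation a [k] b, with closed forms for k <= 3."""
    if k == 0:
        return b + 1
    if k == 1:
        return a + b
    if k == 2:
        return a * b
    if k == 3:
        return a ** b
    return 1 if b == 0 else H(k - 1, a, H(k, a, b - 1))

def level_digits(n, k, b):
    """Greedy top-level digits [x_k, ..., x_1] of n at level k, base b,
    so that ((b [k] x_k) [k-1] x_{k-1}) ... [1] x_1 == n."""
    digits, acc = [], b
    for level in range(k, 0, -1):
        x = 0
        while H(level, acc, x + 1) <= n:   # largest digit keeping the value <= n
            x += 1
        digits.append(x)
        acc = H(level, acc, x)             # fold the chosen digit into the value
    return digits

# level_digits(266, 3, 2) -> [8, 1, 10]:  266 = 2 [3] 8 [2] 1 [1] 10
```

At level 3 this reproduces the top layer of the example above (the digits 8 and 10 then get their own hereditary expansions), and at level 4 it yields [3, 2, 1, 10], matching 2 [4] 3 [3] 2 [1] 10.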

See also

Notes

  1. Sequences similar to the hyperoperation sequence have historically been referred to by many names, including: the Ackermann function [1] (3-argument), the Ackermann hierarchy, [2] the Grzegorczyk hierarchy [3] [4] (which is more general), Goodstein's version of the Ackermann function, [5] operation of the nth grade, [6] z-fold iterated exponentiation of x with y, [7] arrow operations, [8] reihenalgebra [9] and hyper-n. [1] [9] [10] [11] [12]
  2. This implements the leftmost-innermost (one-step) strategy.
  3. For more details, see Powers of zero.
  4. For more details, see Zero to the power of zero.
  5. Let x = a[n](−1). By the recursive formula, a[n]0 = a[n − 1](a[n](−1)) ⇒ 1 = a[n − 1]x. One solution is x = 0, because a[n − 1]0 = 1 by definition when n ≥ 4. This solution is unique because a[n − 1]b > 1 for all a > 1, b > 0 (proof by recursion).
  6. Ordinal addition is not commutative; see ordinal arithmetic for more information.


References

Bibliography