Knuth's up-arrow notation

In mathematics, Knuth's up-arrow notation is a method of notation for very large integers, introduced by Donald Knuth in 1976. [1]

In his 1947 paper, [2] R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations. Goodstein also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation. The sequence starts with a unary operation (the successor function with n = 0), and continues with the binary operations of addition (n = 1), multiplication (n = 2), exponentiation (n = 3), tetration (n = 4), pentation (n = 5), etc. Various notations have been used to represent hyperoperations. One such notation is $H_n(a, b)$. Knuth's up-arrow notation $\uparrow$ is another. For example:

the single arrow $\uparrow$ represents exponentiation (iterated multiplication): $2 \uparrow 4 = 2 \times (2 \times (2 \times 2)) = 2^4 = 16$

the double arrow $\uparrow\uparrow$ represents tetration (iterated exponentiation): $2 \uparrow\uparrow 4 = 2 \uparrow (2 \uparrow (2 \uparrow 2)) = 2^{2^{2^2}} = 2^{16} = 65536$

the triple arrow $\uparrow\uparrow\uparrow$ represents pentation (iterated tetration): $2 \uparrow\uparrow\uparrow 4 = 2 \uparrow\uparrow (2 \uparrow\uparrow (2 \uparrow\uparrow 2)) = 2 \uparrow\uparrow (2 \uparrow\uparrow 4) = 2 \uparrow\uparrow 65536$

The general definition of the up-arrow notation is as follows (for $a \ge 0$, $n \ge 1$, $b \ge 0$):

$a \uparrow^n b = H_{n+2}(a, b) = a[n+2]b$

Here, $\uparrow^n$ stands for n arrows, so for example

$2 \uparrow\uparrow\uparrow\uparrow 3 = 2 \uparrow^4 3.$

The square brackets are another notation for hyperoperations.
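
As a concrete illustration, the notation can be transcribed into a short program that evaluates $a \uparrow^n b$ for very small arguments. The following Python sketch is our own illustration (the function name and argument order are not part of the notation); it uses exponentiation as the base case and anticipates the formal recursive definition given later in this article:

    def up_arrow(a: int, n: int, b: int) -> int:
        """Compute a ↑^n b by direct recursion; practical only for tiny inputs."""
        if n == 1:
            return a ** b                      # a ↑ b is ordinary exponentiation
        if b == 0:
            return 1                           # a ↑^n 0 = 1 for n > 1
        # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)), expanding right-associatively
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(2, 1, 4))   # 16     = 2^4
    print(up_arrow(2, 2, 4))   # 65536  = 2↑↑4
    print(up_arrow(2, 3, 3))   # 65536  = 2↑↑↑3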

Introduction

The hyperoperations naturally extend the arithmetic operations of addition and multiplication as follows. Addition by a natural number is defined as iterated incrementation:

$H_1(a, b) = a + b = a + \underbrace{1 + 1 + \cdots + 1}_{b \text{ copies of } 1}$

Multiplication by a natural number is defined as iterated addition:

$H_2(a, b) = a \times b = \underbrace{a + a + \cdots + a}_{b \text{ copies of } a}$

For example,

$4 \times 3 = 4 + 4 + 4 = 12$

Exponentiation for a natural power $b$ is defined as iterated multiplication, which Knuth denoted by a single up-arrow:

$a \uparrow b = H_3(a, b) = a^b = \underbrace{a \times a \times \cdots \times a}_{b \text{ copies of } a}$

For example,

$4 \uparrow 3 = 4^3 = 4 \times 4 \times 4 = 64$

Tetration is defined as iterated exponentiation, which Knuth denoted by a “double arrow”:

$a \uparrow\uparrow b = H_4(a, b) = \underbrace{a \uparrow (a \uparrow (\cdots \uparrow a))}_{b \text{ copies of } a} = \underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_{b \text{ copies of } a}$

For example,

$4 \uparrow\uparrow 3 = 4 \uparrow (4 \uparrow 4) = 4^{4^4} = 4^{256} \approx 1.34 \times 10^{154}$

Expressions are evaluated from right to left, as the operators are defined to be right-associative.
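
For instance, with the double arrow,

$2 \uparrow\uparrow 4 = 2 \uparrow (2 \uparrow (2 \uparrow 2)) = 2^{2^{2^{2}}} = 2^{16} = 65536,$

whereas grouping from the left would give the much smaller $\left((2^2)^2\right)^2 = 2^8 = 256$.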

According to this definition,

$3 \uparrow\uparrow 2 = 3^3 = 27$

$3 \uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987$

$3 \uparrow\uparrow 4 = 3^{3^{3^3}} = 3^{7{,}625{,}597{,}484{,}987}$

etc.

This already leads to some fairly large numbers, but the hyperoperator sequence does not stop here.

Pentation, defined as iterated tetration, is represented by the “triple arrow”:

$a \uparrow\uparrow\uparrow b = H_5(a, b) = \underbrace{a \uparrow\uparrow (a \uparrow\uparrow (\cdots \uparrow\uparrow a))}_{b \text{ copies of } a}$

Hexation, defined as iterated pentation, is represented by the “quadruple arrow”:

$a \uparrow\uparrow\uparrow\uparrow b = H_6(a, b) = \underbrace{a \uparrow\uparrow\uparrow (a \uparrow\uparrow\uparrow (\cdots \uparrow\uparrow\uparrow a))}_{b \text{ copies of } a}$

and so on. The general rule is that an $n$-arrow operator expands into a right-associative series of $(n-1)$-arrow operators. Symbolically,

$a \uparrow^n b = \underbrace{a \uparrow^{n-1} (a \uparrow^{n-1} (\cdots \uparrow^{n-1} a))}_{b \text{ copies of } a}$

Examples:

$3 \uparrow 2 = 3^2 = 9$

$3 \uparrow\uparrow 2 = 3 \uparrow 3 = 3^3 = 27$

$3 \uparrow\uparrow\uparrow 2 = 3 \uparrow\uparrow 3 = 3 \uparrow (3 \uparrow 3) = 3^{27} = 7{,}625{,}597{,}484{,}987$

$3 \uparrow\uparrow\uparrow\uparrow 2 = 3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3) = 3 \uparrow\uparrow 7{,}625{,}597{,}484{,}987 = \underbrace{3^{3^{\cdot^{\cdot^{3}}}}}_{7{,}625{,}597{,}484{,}987 \text{ copies of } 3}$
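
Continuing the earlier sketch, the hypothetical up_arrow helper defined above reproduces the smallest of these expansions (the larger ones are, of course, far beyond evaluation):

    # assumes the up_arrow(a, n, b) sketch defined above
    assert up_arrow(3, 3, 2) == up_arrow(3, 2, 3) == 3 ** 3 ** 3   # both are 7,625,597,484,987

    # the first step of the general rule: a ↑^n 2 = a ↑^(n-1) a
    for n in range(2, 4):
        assert up_arrow(3, n, 2) == up_arrow(3, n - 1, 3)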

Notation

In expressions such as $a^b$, the notation for exponentiation is usually to write the exponent $b$ as a superscript to the base number $a$. But many environments, such as programming languages and plain-text e-mail, do not support superscript typesetting. People have adopted the linear notation $a \uparrow b$ for such environments; the up-arrow suggests 'raising to the power of'. If the character set does not contain an up arrow, the caret (^) is used instead.

The superscript notation $a^b$ doesn't lend itself well to generalization, which explains why Knuth chose to work from the inline notation $a \uparrow b$ instead.

$a \uparrow^n b$ is a shorter alternative notation for n up-arrows. Thus $a \uparrow^4 b = a \uparrow\uparrow\uparrow\uparrow b$.
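
As a small illustration of this linear style, a hypothetical helper (our own, not a standard function) might render an up-arrow expression for plain-text environments using carets:

    def caret_notation(a: int, n: int, b: int) -> str:
        """Render a ↑^n b in linear caret form, e.g. 3↑↑↑2 becomes '3^^^2'."""
        return f"{a}{'^' * n}{b}"

    print(caret_notation(2, 2, 4))   # 2^^4, the plain-text form of 2↑↑4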

Writing out up-arrow notation in terms of powers

Attempting to write $a \uparrow\uparrow b$ using the familiar superscript notation gives a power tower.

For example:

$a \uparrow\uparrow 4 = a \uparrow (a \uparrow (a \uparrow a)) = a^{a^{a^a}}$

If b is a variable (or is too large), the power tower might be written using dots and a note indicating the height of the tower:

$a \uparrow\uparrow b = \underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_{b}$

Continuing with this notation, $a \uparrow\uparrow\uparrow b$ could be written with a stack of such power towers, each describing the size of the one above it.

Again, if b is a variable or is too large, the stack might be written using dots and a note indicating its height.

Furthermore, $a \uparrow\uparrow\uparrow\uparrow b$ might be written using several columns of such stacks of power towers, each column describing the number of power towers in the stack to its left:

And more generally:

This might be carried out indefinitely to represent $a \uparrow^n b$ as iterated exponentiation of iterated exponentiation for any a, n and b (although it clearly becomes rather cumbersome).
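
To make the first step of this rewriting concrete, the following sketch (a hypothetical helper of our own, returning a string because the values themselves quickly become unmanageable) expands $a \uparrow\uparrow b$ into the explicit right-nested power tower described above:

    def power_tower(a: int, b: int) -> str:
        """Write a ↑↑ b (for b ≥ 1) as a right-nested tower of b copies of a."""
        expr = str(a)
        for _ in range(b - 1):
            expr = f"{a}^({expr})"             # nest one more exponentiation
        return expr

    print(power_tower(4, 3))   # 4^(4^(4)), i.e. 4^(4^4) = 4^256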

Using tetration

The Rudy Rucker notation $^{b}a$ for tetration allows us to make these diagrams slightly simpler while still employing a geometric representation (we could call these tetration towers).

Finally, as an example, the fourth Ackermann number $4 \uparrow^4 4$ could be represented as:

Generalizations

Some numbers are so large that multiple arrows of Knuth's up-arrow notation become too cumbersome; then an n-arrow operator $\uparrow^n$ is useful (and also for descriptions with a variable number of arrows), or equivalently, hyperoperators.

Some numbers are so large that even that notation is not sufficient. The Conway chained arrow notation can then be used: a chain of three elements is equivalent to the other notations, but a chain of four or more is even more powerful.

For example, a chain of three elements corresponds directly to indexed up-arrows:

$a \to b \to n = a \uparrow^n b,$

so that, for instance, $3 \to 3 \to 2 = 3 \uparrow\uparrow 3 = 7{,}625{,}597{,}484{,}987$.

Even faster-growing functions can be categorized using an ordinal analysis called the fast-growing hierarchy. The fast-growing hierarchy uses successive function iteration and diagonalization to systematically create faster-growing functions from some base function $f_0$. For the standard fast-growing hierarchy using $f_0(n) = n + 1$, $f_2(n)$ already exhibits exponential growth, and $f_3(n)$ is comparable to tetrational growth and is upper-bounded by a function involving the first four hyperoperators. Then, $f_\omega(n)$ is comparable to the Ackermann function, $f_{\omega+1}(n)$ is already beyond the reach of indexed arrows but can be used to approximate Graham's number, and $f_{\omega^2}(n)$ is comparable to arbitrarily-long Conway chained arrow notation.

These functions are all computable. Even faster-growing computable functions, such as the Goodstein sequence and the TREE sequence, require the use of large ordinals and may occur in certain combinatorial and proof-theoretic contexts. There exist functions which grow uncomputably fast, such as the Busy Beaver function, whose very nature places them completely out of reach of any up-arrow, or even any ordinal-based, analysis.

Definition

Without reference to hyperoperation, the up-arrow operators can be formally defined by

$a \uparrow^n b = \begin{cases} a^b, & \text{if } n = 1; \\ 1, & \text{if } n > 1 \text{ and } b = 0; \\ a \uparrow^{n-1} \left( a \uparrow^n (b - 1) \right), & \text{otherwise} \end{cases}$

for all integers $a, b, n$ with $b \ge 0$ and $n \ge 1$. [nb 1]

This definition uses exponentiation as the base case, and tetration as repeated exponentiation. This is equivalent to the hyperoperation sequence except it omits the three more basic operations of succession, addition and multiplication.

One can alternatively choose multiplication as the base case and iterate from there. Then exponentiation becomes repeated multiplication. The formal definition would be

$a \uparrow^n b = \begin{cases} a \times b, & \text{if } n = 0; \\ 1, & \text{if } n > 0 \text{ and } b = 0; \\ a \uparrow^{n-1} \left( a \uparrow^n (b - 1) \right), & \text{otherwise} \end{cases}$

for all integers $a, b, n$ with $b \ge 0$ and $n \ge 0$.

Note, however, that Knuth did not define the "nil-arrow" ($\uparrow^0$). One could extend the notation to negative indices (n ≥ −2) in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:

$a \uparrow^n b = H_{n+2}(a, b) = a[n+2]b$

The up-arrow operation is a right-associative operation, that is, $a \uparrow b \uparrow c$ is understood to be $a \uparrow (b \uparrow c)$, instead of $(a \uparrow b) \uparrow c$. If ambiguity is not an issue, parentheses are sometimes dropped.
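
The alternative definition above, with multiplication as the base case, translates line for line into code. The sketch below is our own illustration (the function name is arbitrary); it also accepts n = 0, matching the extended reading of the nil-arrow as multiplication:

    def arrow(a: int, n: int, b: int) -> int:
        """a ↑^n b with multiplication as the base case (n = 0 allowed)."""
        if n == 0:
            return a * b                       # nil-arrow read as multiplication
        if b == 0:
            return 1                           # a ↑^n 0 = 1 for n ≥ 1
        # right-associative recursion: a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
        return arrow(a, n - 1, arrow(a, n, b - 1))

    print(arrow(2, 1, 5))    # 32, exponentiation recovered as repeated multiplication
    print(arrow(10, 2, 2))   # 10000000000 = 10^10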

Tables of values

Computing 0↑n b

Computing $0 \uparrow^n b$ results in

0, when n = 0  [nb 2]
1, when n = 1 and b = 0   [nb 1] [nb 3]
0, when n = 1 and b > 0   [nb 1] [nb 3]
1, when n > 1 and b is even (including 0)
0, when n > 1 and b is odd

Computing 2↑n b

Computing $2 \uparrow^n b$ can be restated in terms of an infinite table. We place the numbers $2^b$ in the top row, and fill the left column with values 2. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.

Values of $2 \uparrow^n b = H_{n+2}(2, b) = 2[n+2]b = 2 \to b \to n$

 n \ b | 1 | 2 | 3        | 4              | 5                      | 6            | formula
 1     | 2 | 4 | 8        | 16             | 32                     | 64           | 2^b
 2     | 2 | 4 | 16       | 65536          | 2^65536 ≈ 2.0×10^19728 | 2^(2^65536)  | 2↑↑b
 3     | 2 | 4 | 65536    | 2↑↑65536       | 2↑↑(2↑↑65536)          | ...          | 2↑↑↑b
 4     | 2 | 4 | 2↑↑65536 | 2↑↑↑(2↑↑65536) | ...                    | ...          | 2↑↑↑↑b

The table is the same as that of the Ackermann function, except for a shift in $n$ and $b$, and an addition of 3 to all values.
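
The filling rule just described — take the entry immediately to the left, then index into the previous row — can be imitated directly in code. The sketch below is our own illustration (the size cutoff LIMIT is arbitrary); it reproduces the small entries of the table and prints "..." where the values become untypeably large:

    def up_arrow(a, n, b):
        """a ↑^n b by direct recursion (practical only for tiny values)."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    LIMIT = 10 ** 9   # arbitrary cutoff: anything larger is shown as "..."

    for n in range(1, 5):                  # rows n = 1..4
        row = []
        for b in range(1, 7):              # columns b = 1..6
            try:
                value = up_arrow(2, n, b)
                row.append(str(value) if value <= LIMIT else "...")
            except RecursionError:
                row.append("...")
            if row[-1] == "...":
                break                      # entries further right are larger still
        print(f"n={n}:", " ".join(row))

The printed rows match the small entries of the table above; the remaining cells can only be written symbolically.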

Computing 3↑n b

We place the numbers $3^b$ in the top row, and fill the left column with values 3. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.

Values of $3 \uparrow^n b = H_{n+2}(3, b) = 3[n+2]b = 3 \to b \to n$

 n \ b | 1 | 2                    | 3                    | 4                   | 5   | formula
 1     | 3 | 9                    | 27                   | 81                  | 243 | 3^b
 2     | 3 | 27                   | 7,625,597,484,987    | 3^7,625,597,484,987 | ... | 3↑↑b
 3     | 3 | 7,625,597,484,987    | 3↑↑7,625,597,484,987 | ...                 | ... | 3↑↑↑b
 4     | 3 | 3↑↑7,625,597,484,987 | ...                  | ...                 | ... | 3↑↑↑↑b

Computing 4↑n b

We place the numbers $4^b$ in the top row, and fill the left column with values 4. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.

Values of $4 \uparrow^n b = H_{n+2}(4, b) = 4[n+2]b = 4 \to b \to n$

 n \ b | 1 | 2     | 3                   | 4         | 5    | formula
 1     | 4 | 16    | 64                  | 256       | 1024 | 4^b
 2     | 4 | 256   | 4^256 ≈ 1.34×10^154 | 4^(4^256) | ...  | 4↑↑b
 3     | 4 | 4↑↑4  | 4↑↑(4↑↑4)           | ...       | ...  | 4↑↑↑b
 4     | 4 | 4↑↑↑4 | ...                 | ...       | ...  | 4↑↑↑↑b

Computing 10↑n b

We place the numbers $10^b$ in the top row, and fill the left column with values 10. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.

Values of $10 \uparrow^n b = H_{n+2}(10, b) = 10[n+2]b = 10 \to b \to n$

 n \ b | 1  | 2              | 3            | 4      | 5       | formula
 1     | 10 | 100            | 1,000        | 10,000 | 100,000 | 10^b
 2     | 10 | 10,000,000,000 | 10^(10^10)   | ...    | ...     | 10↑↑b
 3     | 10 | 10↑↑10         | 10↑↑(10↑↑10) | ...    | ...     | 10↑↑↑b
 4     | 10 | 10↑↑↑10        | ...          | ...    | ...     | 10↑↑↑↑b

For 2 ≤ b ≤ 9 the numerical order of the numbers $10 \uparrow^n b$ is the lexicographical order with n as the most significant number, so for the numbers of these 8 columns the numerical order is simply line-by-line. The same applies for the numbers in the 97 columns with 3 ≤ b ≤ 99, and if we start from n = 1 even for 3 ≤ b ≤ 9,999,999,999.

Notes

  1. For more details, see Powers of zero.
  2. Keep in mind that Knuth did not define the operator $\uparrow^0$.
  3. For more details, see Zero to the power of zero.

Related Research Articles

<span class="mw-page-title-main">Associative property</span> Property of a mathematical operation

In mathematics, the associative property is a property of some binary operations, which means that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.

In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive.

<span class="mw-page-title-main">Binary operation</span> Mathematical operation with two operands

In mathematics, a binary operation or dyadic operation is a rule for combining two elements to produce another element. More formally, a binary operation is an operation of arity two.

In mathematics, the determinant is a scalar value that is a certain function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism. The determinant of a product of matrices is the product of their determinants.

In mathematics and computer programming, exponentiating by squaring is a general method for fast computation of large positive integer powers of a number, or more generally of an element of a semigroup, like a polynomial or a square matrix. Some variants are commonly referred to as square-and-multiply algorithms or binary exponentiation. These can be of quite general use, for example in modular arithmetic or powering of matrices. For semigroups for which additive notation is commonly used, like elliptic curves used in cryptography, this method is also referred to as double-and-add.

<span class="mw-page-title-main">Exponentiation</span> Arithmetic operation

In mathematics, exponentiation is an operation involving two numbers: the base and the exponent or power. Exponentiation is written as bn, where b is the base and n is the power; this is pronounced as "b (raised) to the n". When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, bn is the product of multiplying n bases:

<span class="mw-page-title-main">Matrix multiplication</span> Mathematical operation in linear algebra

In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.

Large numbers are numbers significantly larger than those typically used in everyday life, appearing frequently in fields such as mathematics, cosmology, cryptography, and statistical mechanics. They are typically large positive integers, or more generally, large positive real numbers, but may also be other numbers in other contexts. Googology is the study of nomenclature and properties of large numbers.

Graham's number is an immense number that arose as an upper bound on the answer of a problem in the mathematical field of Ramsey theory. It is much larger than many other large numbers such as Skewes's number and Moser's number, both of which are in turn much larger than a googolplex. As with these, it is so large that the observable universe is far too small to contain an ordinary digital representation of Graham's number, assuming that each digit occupies one Planck volume, possibly the smallest measurable space. But even the number of digits in this digital representation of Graham's number would itself be a number so large that its digital representation cannot be represented in the observable universe. Nor even can the number of digits of that number—and so forth, for a number of times far exceeding the total number of Planck volumes in the observable universe. Thus Graham's number cannot be expressed even by physical universe-scale power towers of the form .

Conway chained arrow notation, created by mathematician John Horton Conway, is a means of expressing certain extremely large numbers. It is simply a finite sequence of positive integers separated by rightward arrows, e.g. .

<span class="mw-page-title-main">Integral test for convergence</span> Test for infinite series of monotonous terms for convergence

In mathematics, the integral test for convergence is a method used to test infinite series of monotonous terms for convergence. It was developed by Colin Maclaurin and Augustin-Louis Cauchy and is sometimes known as the Maclaurin–Cauchy test.

<span class="mw-page-title-main">Tetration</span> Arithmetic operation

In mathematics, tetration is an operation based on iterated, or repeated, exponentiation. There is no standard notation for tetration, though Knuth's up arrow notation and the left-exponent xb are common.

<span class="mw-page-title-main">Iterated logarithm</span> Inverse function to a tower of powers

In computer science, the iterated logarithm of , written log* , is the number of times the logarithm function must be iteratively applied before the result is less than or equal to . The simplest formal definition is the result of this recurrence relation:

<span class="mw-page-title-main">Pentation</span> Arithmetic operation

In mathematics, pentation is the fifth hyperoperation. Pentation is defined to be repeated tetration, similarly to how tetration is repeated exponentiation, exponentiation is repeated multiplication, and multiplication is repeated addition. The concept of "pentation" was named by English mathematician Reuben Goodstein in 1947, when he came up with the naming scheme for hyperoperations.

In mathematics, Cutler's bar notation is a notation system for large numbers, introduced by Mark Cutler in 2004. The idea is based on iterated exponentiation in much the same way that exponentiation is iterated multiplication.

<span class="mw-page-title-main">65,536</span> Natural number

65536 is the natural number following 65535 and preceding 65537.

In mathematics, the hyperoperation sequence is an infinite sequence of arithmetic operations (called hyperoperations in this context) that starts with a unary operation (the successor function with n = 0). The sequence continues with the binary operations of addition (n = 1), multiplication (n = 2), and exponentiation (n = 3).

A mathematical constant is a key number whose value is fixed by an unambiguous definition, often referred to by a special symbol, or by mathematicians' names to facilitate using it across multiple mathematical problems. Constants arise in many areas of mathematics, with constants such as e and π occurring in such diverse contexts as geometry, number theory, statistics, and calculus.

The Hawkins–Simon condition refers to a result in mathematical economics, attributed to David Hawkins and Herbert A. Simon, that guarantees the existence of a non-negative output vector that solves the equilibrium relation in the input–output model where demand equals supply. More precisely, it states a condition for under which the input–output system

Radial basis function (RBF) interpolation is an advanced method in approximation theory for constructing high-order accurate interpolants of unstructured data, possibly in high-dimensional spaces. The interpolant takes the form of a weighted sum of radial basis functions. RBF interpolation is a mesh-free method, meaning the nodes need not lie on a structured grid, and does not require the formation of a mesh. It is often spectrally accurate and stable for large numbers of nodes even in high dimensions.

References

  1. Knuth, Donald E. (1976). "Mathematics and Computer Science: Coping with Finiteness". Science. 194 (4271): 1235–1242. Bibcode:1976Sci...194.1235K. doi:10.1126/science.194.4271.1235. PMID 17797067. S2CID 1690489.
  2. Goodstein, R. L. (Dec 1947). "Transfinite Ordinals in Recursive Number Theory". Journal of Symbolic Logic. 12 (4): 123–129. doi:10.2307/2266486. JSTOR 2266486. S2CID 1318943.