Tupper's self-referential formula is a formula that visually represents itself when graphed at a specific location in the (x, y) plane.
The formula was defined by Jeff Tupper and appears as an example in Tupper's 2001 SIGGRAPH paper on reliable two-dimensional computer graphing algorithms. [1] This paper discusses methods related to the GrafEq formula-graphing program developed by Tupper. [2]
Although the formula is called "self-referential", Tupper did not name it as such. [3]
The formula is an inequality defined as:

1/2 < ⌊ mod( ⌊y/17⌋ · 2^(−17⌊x⌋ − mod(⌊y⌋, 17)), 2 ) ⌋

where ⌊·⌋ denotes the floor function, and mod is the modulo operation.
Let k equal the following 543-digit integer:
Graphing the set of points (x, y) in 0 ≤ x < 106 and k ≤ y < k + 17 which satisfy the formula results in the following plot: [note 1]
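For illustration, the plot can be reproduced with a short program. The following is a minimal sketch (not code from Tupper's paper) that relies on Python's arbitrary-precision integers and treats the power of two in the formula as a bit shift; the 543-digit constant is not reproduced here and must be substituted for the placeholder.

```python
def tupper_pixel(x, y):
    # For integer x and y, the factor 2^(-17*floor(x) - mod(floor(y), 17)) becomes
    # a right shift, so the inequality reduces to testing one bit of floor(y/17).
    return (y // 17 >> (17 * x + y % 17)) & 1

def plot_band(k, width=106, height=17):
    rows = []
    for dy in range(height - 1, -1, -1):       # largest y on top
        y = k + dy
        # The picture is stored with x = 0 as its rightmost column, so each text row
        # is built from x = width-1 down to 0; flip if the output appears mirrored.
        rows.append("".join("#" if tupper_pixel(x, y) else " "
                            for x in range(width - 1, -1, -1)))
    return "\n".join(rows)

k = 0   # placeholder: substitute the 543-digit constant quoted above
print(plot_band(k))
```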
The formula is a general-purpose method of decoding a bitmap stored in the constant k, and it could be used to draw any other image. When applied to the unbounded positive range 0 ≤ y, the formula tiles a vertical swath of the plane with a pattern that contains all possible 17-pixel-tall bitmaps. One horizontal slice of that infinite bitmap depicts the drawing formula itself, but this is not remarkable, since other slices depict all other possible formulae that might fit in a 17-pixel-tall bitmap. Tupper has created extended versions of his original formula that rule out all but one slice. [4]
The constant k is a simple monochrome bitmap image of the formula treated as a binary number and multiplied by 17. If k is divided by 17, the least significant bit encodes the upper-right corner; the 17 least significant bits encode the rightmost column of pixels; the next 17 least significant bits encode the 2nd-rightmost column, and so on.
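The same bit layout can be used in reverse. The following sketch (illustrative, not from the paper) packs a 17-row monochrome bitmap into an integer, column by column starting from the rightmost column, and multiplies by 17 to obtain a constant in the form described above.

```python
def encode_bitmap(rows):
    """Pack a 17-row bitmap (a list of 17 equal-length strings of '0'/'1') into a
    Tupper-style constant: the pixel in column c from the right, row r, becomes
    bit 17*c + r of k // 17."""
    assert len(rows) == 17 and len(set(map(len, rows))) == 1
    width = len(rows[0])
    n = 0
    for c in range(width):                 # c = 0 is the rightmost column of the image
        for r in range(17):                # vertical order follows the plotting convention
            if rows[r][width - 1 - c] == "1":
                n |= 1 << (17 * c + r)
    return 17 * n                          # multiplying by 17 gives the constant k
```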
The formula fundamentally describes a way to plot points on a two-dimensional surface. The value of k is the number whose binary digits form the plot. The following plot demonstrates the addition of different values of k. In the fourth subplot, the k-values of "AFGP" and "Aesthetic Function Graph" are added to get the resultant graph, where both texts can be seen with some distortion due to the effects of binary addition. The information regarding the shape of the plot is stored within k. [5]
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.
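An illustrative iterative implementation:

```python
def binary_search(a, target):
    """Return the index of target in the sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1          # target can only lie in the upper half
        else:
            hi = mid - 1          # target can only lie in the lower half
    return -1                     # remaining half is empty: target not present

print(binary_search([1, 3, 5, 7, 11, 13], 11))   # 4
```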
In mathematics and computer programming, exponentiating by squaring is a general method for fast computation of large positive integer powers of a number, or more generally of an element of a semigroup, like a polynomial or a square matrix. Some variants are commonly referred to as square-and-multiply algorithms or binary exponentiation. These can be of quite general use, for example in modular arithmetic or powering of matrices. For semigroups for which additive notation is commonly used, like elliptic curves used in cryptography, this method is also referred to as double-and-add.
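A sketch of the square-and-multiply idea, here with an optional modulus:

```python
def power(base, exp, mod=None):
    """Square-and-multiply: base**exp (optionally modulo mod) in O(log exp) multiplications."""
    result = 1
    while exp > 0:
        if exp & 1:               # low bit of the exponent is set: multiply the result in
            result = result * base if mod is None else result * base % mod
        base = base * base if mod is None else base * base % mod
        exp >>= 1                 # move to the next binary digit of the exponent
    return result

print(power(3, 45, 1000))         # 643, the same as pow(3, 45, 1000)
```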
In mathematics, the floor function is the function that takes as input a real number x, and gives as output the greatest integer less than or equal to x, denoted ⌊x⌋ or floor(x). Similarly, the ceiling function maps x to the smallest integer greater than or equal to x, denoted ⌈x⌉ or ceil(x).
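For example, in Python (where integer division also rounds toward negative infinity):

```python
import math

print(math.floor(2.7), math.ceil(2.7))     # 2 3
print(math.floor(-2.7), math.ceil(-2.7))   # -3 -2 (floor rounds toward negative infinity)
print((-7) // 2)                           # -4: Python's integer division floors as well
```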
Golomb coding is a lossless data compression method using a family of data compression codes invented by Solomon W. Golomb in the 1960s. Alphabets following a geometric distribution will have a Golomb code as an optimal prefix code, making Golomb coding highly suitable for situations in which the occurrence of small values in the input stream is significantly more likely than large values.
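A sketch of the special case in which the Golomb parameter M is a power of two (Rice coding), using one common bit convention: the quotient in unary as ones terminated by a zero, followed by the remainder in k bits.

```python
def rice_encode(n, k):
    """Golomb-Rice code of a non-negative integer n with parameter M = 2**k."""
    q, r = n >> k, n & ((1 << k) - 1)      # quotient and remainder by a power of two
    return "1" * q + "0" + format(r, f"0{k}b")

print([rice_encode(n, 2) for n in range(5)])
# ['000', '001', '010', '011', '1000']
```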
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands.
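For example, in Python:

```python
a, b = 0b1100, 0b1010
print(bin(a & b), bin(a | b), bin(a ^ b))   # 0b1000 0b1110 0b110
print(bin(a << 1), bin(a >> 2))             # 0b11000 0b11
```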
In number theory, a formula for primes is a formula generating the prime numbers, exactly and without exception. Formulas for calculating primes do exist; however, they are computationally very slow. A number of constraints are known, showing what such a "formula" can and cannot be.
In number theory, the integer square root (isqrt) of a non-negative integer n is the non-negative integer m which is the greatest integer less than or equal to the square root of n, i.e. m = ⌊√n⌋.
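A sketch using Newton's method (Python 3.8+ also provides math.isqrt directly):

```python
def isqrt(n):
    """Integer square root: the largest m with m*m <= n."""
    if n < 0:
        raise ValueError("isqrt is defined for non-negative integers")
    if n == 0:
        return 0
    x = 1 << ((n.bit_length() + 1) // 2)   # initial guess >= sqrt(n)
    while True:
        y = (x + n // x) // 2              # Newton step using integer division
        if y >= x:
            return x
        x = y

print(isqrt(10), isqrt(16), isqrt(543))    # 3 4 23
```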
In mathematics, a pairing function is a process to uniquely encode two natural numbers into a single natural number.
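One standard example is the Cantor pairing function; a small sketch:

```python
import math

def cantor_pair(a, b):
    """Cantor pairing: a bijection from pairs of natural numbers to natural numbers."""
    return (a + b) * (a + b + 1) // 2 + b

def cantor_unpair(z):
    """Invert the Cantor pairing function."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # index of the diagonal containing z
    t = w * (w + 1) // 2
    b = z - t
    return w - b, b

print(cantor_pair(2, 3))    # 18
print(cantor_unpair(18))    # (2, 3)
```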
Zeller's congruence is an algorithm devised by Christian Zeller in the 19th century to calculate the day of the week for any Julian or Gregorian calendar date. It can be considered to be based on the conversion between Julian day and the calendar date.
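A sketch of the Gregorian form of the congruence, with the variable names of the usual presentation; treat this as illustrative:

```python
def day_of_week(year, month, day):
    """Zeller's congruence for the Gregorian calendar.
    Returns 0 = Saturday, 1 = Sunday, ..., 6 = Friday."""
    if month < 3:               # January and February are counted as months 13 and 14
        month += 12             # of the previous year
        year -= 1
    K, J = year % 100, year // 100
    return (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

print(day_of_week(2000, 1, 1))   # 0 -> Saturday
```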
In computing, the modulo operation returns the remainder or signed remainder of a division, after one number is divided by another, called the modulus of the operation.
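For example, Python's % operator returns a remainder with the sign of the divisor, while math.fmod follows the C convention of taking the sign of the dividend:

```python
import math

print(7 % 3, -7 % 3)        # 1 2   -- remainder has the sign of the divisor
print(math.fmod(-7, 3))     # -1.0  -- remainder has the sign of the dividend
```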
Elias ω coding or Elias omega coding is a universal code encoding the positive integers developed by Peter Elias. Like Elias gamma coding and Elias delta coding, it works by prefixing the positive integer with a representation of its order of magnitude in a universal code. Unlike those other two codes, however, Elias omega recursively encodes that prefix; thus, Elias omega codes are sometimes known as recursive Elias codes.
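A sketch of the encoder (the decoder reverses the recursion):

```python
def elias_omega(n):
    """Elias omega code of a positive integer n, as a string of bits."""
    assert n >= 1
    code = "0"                      # terminating zero
    while n > 1:
        group = format(n, "b")      # binary representation of n, no leading zeros
        code = group + code         # prepend it
        n = len(group) - 1          # then encode its length minus one the same way
    return code

print([elias_omega(n) for n in (1, 2, 3, 4, 16)])
# ['0', '100', '110', '101000', '10100100000']
```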
In dynamical systems theory, the baker's map is a chaotic map from the unit square into itself. It is named after a kneading operation that bakers apply to dough: the dough is cut in half, and the two halves are stacked on one another, and compressed.
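As an illustrative sketch, one common way to write the map (the "unfolded" form on the unit square, with coordinates in [0, 1)):

```python
import math

def bakers_map(x, y):
    """One step of the baker's map: stretch horizontally by 2, cut at x = 1/2,
    and stack the right half on top of the left half."""
    cut = math.floor(2 * x)             # 0 for the left half, 1 for the right half
    return 2 * x - cut, (y + cut) / 2

print(bakers_map(0.3, 0.8))   # (0.6, 0.4)
print(bakers_map(0.7, 0.8))   # approximately (0.4, 0.9)
```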
In mathematics, Hermite's identity, named after Charles Hermite, gives the value of a summation involving the floor function. It states that for every real number x and for every positive integer n the following identity holds: ⌊nx⌋ = ⌊x⌋ + ⌊x + 1/n⌋ + ⌊x + 2/n⌋ + ⋯ + ⌊x + (n−1)/n⌋.
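A quick numerical spot-check of the identity (illustrative only, using exact rationals to avoid floating-point edge cases):

```python
import math
from fractions import Fraction

x, n = Fraction(37, 7), 5
lhs = math.floor(n * x)
rhs = sum(math.floor(x + Fraction(k, n)) for k in range(n))
print(lhs, rhs)    # 26 26
```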
In mathematics and computer science, the BIT predicate, sometimes written BIT(i, j), is a predicate that tests whether the jth bit of the number i is 1, when i is written as a binary number. Its mathematical applications include modeling the membership relation of hereditarily finite sets, and defining the adjacency relation of the Rado graph. In computer science, it is used for efficient representations of set data structures using bit vectors, in defining the private information retrieval problem from communication complexity, and in descriptive complexity theory to formulate logical descriptions of complexity classes.
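In code the predicate is a single shift and mask; for example:

```python
def bit(i, j):
    """True if the j-th bit (counting from 0 at the least significant end)
    of the binary representation of i is 1."""
    return (i >> j) & 1 == 1

print([j for j in range(6) if bit(0b101010, j)])   # [1, 3, 5]
```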
In mathematics, Abel's summation formula, introduced by Niels Henrik Abel, is intensively used in analytic number theory and the study of special functions to compute series.
The Lambek–Moser theorem is a mathematical description of partitions of the natural numbers into two complementary sets. For instance, it applies to the partition of numbers into even and odd, or into prime and non-prime. There are two parts to the Lambek–Moser theorem. One part states that any two non-decreasing integer functions that are inverse, in a certain sense, can be used to split the natural numbers into two complementary subsets, and the other part states that every complementary partition can be constructed in this way. When a formula is known for the nth natural number in a set, the Lambek–Moser theorem can be used to obtain a formula for the nth number not in the set.
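As an illustration (not from the article), one classical consequence of this construction is a closed form for the nth non-square, which can be checked numerically:

```python
import math

def nth_nonsquare(n):
    """The n-th positive integer that is not a perfect square: n + floor(1/2 + sqrt(n))."""
    return n + math.floor(0.5 + math.sqrt(n))

nonsquares = [m for m in range(1, 30) if math.isqrt(m) ** 2 != m]
print(nonsquares[:10])                             # [2, 3, 5, 6, 7, 8, 10, 11, 12, 13]
print([nth_nonsquare(n) for n in range(1, 11)])    # the two lists agree
```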
In mathematics, the Dedekind numbers are a rapidly growing sequence of integers named after Richard Dedekind, who defined them in 1897. The Dedekind number M(n) is the number of monotone Boolean functions of n variables. Equivalently, it is the number of antichains of subsets of an n-element set, the number of elements in a free distributive lattice with n generators, and one more than the number of abstract simplicial complexes on a set with n elements.
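For small n the definition can be checked directly by brute force (illustrative sketch; feasible only for n ≤ 4):

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions of n variables by exhaustive enumeration."""
    points = list(product((0, 1), repeat=n))                    # all 2**n input vectors
    pairs = [(a, b) for a in points for b in points
             if all(x <= y for x, y in zip(a, b))]              # componentwise a <= b
    count = 0
    for values in product((0, 1), repeat=len(points)):          # every Boolean function
        f = dict(zip(points, values))
        if all(f[a] <= f[b] for a, b in pairs):                 # monotone: a <= b implies f(a) <= f(b)
            count += 1
    return count

print([dedekind(n) for n in range(4)])   # [2, 3, 6, 20]
```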
In modular arithmetic, Barrett reduction is a reduction algorithm introduced in 1986 by P.D. Barrett.
AN codes are error-correcting codes that are used in arithmetic applications. Arithmetic codes were commonly used in computer processors to ensure the accuracy of their arithmetic operations when electronics were more unreliable. Arithmetic codes help the processor to detect when an error is made and correct it. Without these codes, processors would be unreliable, since any errors would go undetected. AN codes are arithmetic codes that are named for the integers A and N that are used to encode and decode the codewords.
In mathematics and computer science, the binary Goppa code is an error-correcting code that belongs to the class of general Goppa codes originally described by Valerii Denisovich Goppa, but the binary structure gives it several mathematical advantages over non-binary variants, also providing a better fit for common usage in computers and telecommunication. Binary Goppa codes have interesting properties suitable for cryptography in McEliece-like cryptosystems and similar setups.